“Scaling and anonymizing Bitcoin at layer 1 with client-side validation” - our new proposal, also sent to the bitcoin-dev mailing list. “We propose a way to upgrade Bitcoin layer 1 (blockchain/timechain) without a required softfork. The upgrade leverages properties of client-side validation, can be gradual, has a permissionless deployment option (i.e. not requiring majority support or miner cooperation), and will have scalability sufficient to host billions of transactions per second. It also offers higher privacy (no publicly available ledger, transaction graphs, addresses, keys or signatures) and bounded Turing-complete programmability with rich state, provided by RGB or another client-side-validated smart contract system.”
RGB is a computing platform. Like each of the other computing platforms (OS, Web, embedded, cloud, blockchain-based, VM-based) it has its own distinctive features. Unlike blockchain-based computing platforms, it has access to ephemeral state data, which may be part of a Lightning channel state or data provided by a decentralized data network. This is possible because in client-side validation, unlike in a blockchain, a single contract may have an invalid state without affecting the state of the platform as a whole. In Ethereum, for instance, if an invalid transaction under some contract is included in the blockchain, the whole blockchain becomes invalid (and a different tip is selected). In RGB no global consensus on the validity of all contracts and transactions is required. RGB isolates each program (“smart contract”) in its own sandbox environment, which provides much better scalability and security than blockchain-based platforms. Unlike device-based and Web platforms, RGB doesn’t provide random memory access, I/O, or UI, which makes it well-suited for embedded devices and environments. One of the distinctive features of the platform is the use of a functional registry-based virtual machine (#AluVM) and a functional type system. RGB is the first computing platform utilizing the PRISM computing model, which is closer to cellular automata than to instruction-based computing or neural networks. PRISM stands for “partially replicated state machines”, which at their core represent a highly parallel multi-agent system built with a functional approach. Today, RGB (together with AluVM) can run on x86, AMD64, AArch64, microcontroller, and WASM instruction set architectures, i.e. it is a ubiquitous platform (desktop, mobile, server, embedded, Web).
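To make the isolation property concrete, here is a toy Rust sketch (hypothetical types and validation rules, not the actual RGB API): each contract is validated client-side against its own rules, so an invalid transition poisons only that contract, never the platform as a whole.

```rust
// Toy model of per-contract client-side validation (hypothetical, not RGB's API).

#[derive(Debug)]
struct Contract {
    id: u32,
    state: i64,
}

#[derive(Debug)]
enum Validity {
    Valid,
    Invalid,
}

// Hypothetical rule: a state transition is valid iff the resulting state
// stays non-negative. Each contract is checked entirely on its own.
fn validate(contract: &Contract, transition: i64) -> Validity {
    if contract.state + transition >= 0 {
        Validity::Valid
    } else {
        Validity::Invalid
    }
}

fn main() {
    let contracts = [Contract { id: 1, state: 10 }, Contract { id: 2, state: 0 }];
    let transitions = [-5i64, -1];
    for (contract, transition) in contracts.iter().zip(transitions) {
        // Unlike on a blockchain, one contract failing validation has no
        // effect on any other contract or on any "global" platform state.
        println!("contract {} -> {:?}", contract.id, validate(contract, transition));
    }
}
```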
Web2, Web3, Web5… What are those? Let’s start by defining the Web itself. My take: the #Web is a computing platform - like POSIX, Windows, Java, embedded etc. The Web differs from the Internet the same way Windows differs from BIOS. As a computing platform, the Web brings a number of protocols, toolchains, SDKs and technologies:

1. Networking is restricted to a TCP/IP subset: HTTP(S), WebSocket and WebRTC.
2. Supported instruction set architectures: WASM and JavaScript virtual machine(s), both browser- and server (NodeJS)-based.
3. UI uses HTML, CSS, DOM, WebGL, Canvas. On top of that, UI frameworks proliferate: as in the POSIX world we have Qt, GTK etc., in the Web world we have React, Angular, Vue, Svelte etc.

Why is the Web so popular? It was the first computing platform created in the age of networking - and built for network-based apps first. It allows running apps without installing them - and on any consumer UI-based device: desktop, laptop or mobile. It allows simple creation of cross-platform apps. It avoids the censorship of app stores.

The drawbacks of the Web are mostly direct consequences of its advantages:
- low security: remote code is executed locally;
- privacy leaks as a result of the client-server model;
- the agility enabling cross-platform UI and schema-less network messaging results in “spaghetti code” and weird JavaScript VM non-determinism;
- the Web is poorly decentralized and censorship-resistant: the inherited client-server model doesn’t allow proper decentralization.

The Web has passed through generations: Web, Web2 - and now attempts at Web3 and Web5 are underway. The main differences between Web and Web2 were:
- interactivity (brought through JavaScript AJAX, and later WebSockets);
- dynamic UI (with JavaScript DOM manipulation);
- abandonment of Java applets;
- a move from CGI to custom web servers with embedded server-side business logic (NodeJS, Python, and web frameworks in almost every language);
- better markup languages (HTML5, CSS3), including graphic markup (SVG, Canvas, WebGL).

What were people looking for in the post-Web2 era?
- better decentralization and censorship-resistance;
- integration of native internet money and payment methods;
- smart contracts (complex automations based on cryptographic and economic incentives);
- better privacy.

Does Web3 or Web5 deliver on that? No: each promises to, but fails. There can be neither privacy nor scalability with blockchain-based things; there can’t be censorship-resistance with PoS; there can’t be decentralization with the old client-server hosting of content.

How should the proper “next Web” look?
- based on P2P (where possible) or relay-based systems (where P2P is impossible), with relays being self-hosted;
- end-to-end encrypted communications;
- over mix networks (Tor, Nym, I2P etc.);
- authentication based on public-key cryptography (not passwords) and decentralized identities (SSH, GPG and future systems) - a sketch follows this list;
- based on zero-knowledge state, i.e. not leaking private data to web servers or nodes;
- using deterministic functional computing;
- using PoW and bitcoin single-use seals - but not for storing state like in Web2 (!), only for cryptographic commitments (OTS etc.);
- using client-side-validated smart contracts like RGB;
- integrated with Lightning payments and #BiFi (bitcoin finance);
- using decentralized data protocols like #Storm, #Slashtags, #Nostr-based and similar solutions.
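As a toy illustration of the password-less public-key authentication item above, here is a minimal Rust sketch of a challenge-response login, assuming the ed25519-dalek crate (v2, with its rand_core feature enabled); the protocol shape is hypothetical, not taken from any specific system.

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;
use rand::RngCore;

fn main() {
    // The client holds a long-lived keypair; the server stores only the
    // public key at registration time - no password ever exists.
    let client_key = SigningKey::generate(&mut OsRng);
    let registered: VerifyingKey = client_key.verifying_key();

    // The server issues a fresh random challenge to prevent replay.
    let mut challenge = [0u8; 32];
    OsRng.fill_bytes(&mut challenge);

    // The client proves key ownership by signing the challenge.
    let proof: Signature = client_key.sign(&challenge);

    // The server verifies the signature against the registered key.
    assert!(registered.verify(&challenge, &proof).is_ok());
    println!("authenticated via public-key challenge-response");
}
```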
I call this future Web4, and we are working on it at @lnp_bp, @pandoraprime_ch and @cyphernet_io, together with partner projects like @nymproject, @radicle and @DarkFiSquad, doing things like mixnets, end-to-end encryption, #reNostr, #Storm, #RGB smart contracts and other exciting projects. Everyone is welcome to check one of the releases we did this year: cyphernet, a Rust library providing support for mixnets and a pure-Rust implementation of Noise E2E encryption: More will follow soon!
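For a flavour of what Noise-based E2E encryption looks like in Rust, here is a minimal sketch of an NN handshake, assuming the widely used snow crate (an illustration only - this is not cyphernet's API):

```rust
use snow::Builder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pattern = "Noise_NN_25519_ChaChaPoly_BLAKE2s";
    let mut initiator = Builder::new(pattern.parse()?).build_initiator()?;
    let mut responder = Builder::new(pattern.parse()?).build_responder()?;

    let (mut wire, mut plain) = ([0u8; 1024], [0u8; 1024]);

    // Two-message NN handshake: "-> e" then "<- e, ee".
    let len = initiator.write_message(&[], &mut wire)?;
    responder.read_message(&wire[..len], &mut plain)?;
    let len = responder.write_message(&[], &mut wire)?;
    initiator.read_message(&wire[..len], &mut plain)?;

    // Both sides switch to transport mode; all further traffic is encrypted.
    let mut initiator = initiator.into_transport_mode()?;
    let mut responder = responder.into_transport_mode()?;

    let len = initiator.write_message(b"end-to-end encrypted hello", &mut wire)?;
    let len = responder.read_message(&wire[..len], &mut plain)?;
    assert_eq!(&plain[..len], b"end-to-end encrypted hello");
    Ok(())
}
```

The NN pattern carries no static-key authentication; a real deployment would pick an authenticated pattern such as XX or XK and pin the peer's static key.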
Cryptography is the ultimate computing science. What represents the main value in computing science is computationally irreducible computation. Cryptography is the science of P ≠ NP, i.e. of computationally irreducible computation. Real intelligence is computationally irreducible; a future civilization will compute only in the irreducible way, i.e. there will be no form of computing which is not cryptography. Also on the topic:
My comparison of different elliptic-curve-based signature schemes. Overall, #ECDSA and #Schnorr compare poorly to #EdDSA and #BLS; I see no reason to select them. EdDSA is better than BLS due to its support of adaptor signatures (and scriptless scripts like DLCs); BLS is better in size and in a possible Lamport combination. Thinking in terms of #reNostr, the obvious choice should be not Schnorr but EdDSA (and not BLS, since EdDSA is used in most identity systems, like SSH and GPG). The use of Schnorr sigs in #Nostr is nonsense: public key re-use (a precondition for a social network) leaks the private key.
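One property behind the EdDSA preference is easy to demonstrate: RFC 8032 derives the signing nonce deterministically from the secret key and the message, so signing any number of messages with a re-used key never risks a repeated random nonce. A minimal sketch, assuming the ed25519-dalek crate (v2, rand_core feature):

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier};
use rand::rngs::OsRng;

fn main() {
    let key = SigningKey::generate(&mut OsRng);
    let msg = b"nostr event payload";

    // EdDSA (RFC 8032) nonces are a hash of the secret key and the message,
    // so re-signing the same message yields the identical signature - there
    // is no random nonce that could ever be accidentally re-used.
    let sig1: Signature = key.sign(msg);
    let sig2: Signature = key.sign(msg);
    assert_eq!(sig1, sig2);

    // ...and it still verifies under the (re-used) public key.
    assert!(key.verifying_key().verify(msg, &sig1).is_ok());
    println!("deterministic EdDSA signature verified");
}
```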