Summary
When a new computing paradigm emerges, it sometimes provides an excellent solution to a space that had no good tooling beforehand. That space then experiences a Cambrian explosion of innovation.
We propose a new paradigm focused on verification of execution, thus allowing a free market in execution. This should be particularly suited to handle unknown future information, optionality, and the need to discover an efficient execution path.
An example of a bespoke solution to a single verification problem is the $1.3 trillion Bitcoin blockchain, which does not care how you found a winning hash, only that you have come up with one. An example of a problem that might be built on a general verification environment is a game tournament that pays out the winners based on proof of winning.
Because verification becomes the crucial target, we propose to make it trustless by creating a decentralised verification engine running on a new blockchain optimised for verification. If Ethereum is the world computer, Saline is the world verifier. And the gateway to the Cambrian explosion in verification based computing.
Previous paradigm shifts
In the 2010s, “data science” exploded because of new tools and methods that lowered the barrier to entry drastically. You can teach someone most of the relevant SQL in an afternoon, and enough Python in a few weeks, to turn them into an analyst practically able to run a mid-sized company’s entire reporting system by themselves (although actual machine learning may take a bit longer).
Similarly, hardware for massively parallel computation happens to be very useful to gamers and, more recently, to large language models, causing NVIDIA’s share price to go from $150 in 2023 to over $1,000 a year later. Crucially, a large part of NVIDIA’s success stems from their heavy investment in the community, particularly with CUDA, which drastically lowered barriers to entry for software developers and enabled the rapid stream of innovations that eventually led to ChatGPT and its competitors going mainstream.
It should be obvious from these examples that a paradigm is not just a language, or a tool, but a combination of things that, happening together, allow massive participation in a hitherto unexplored specialist kingdom.
It is also worth noting that the users of such a stack are not necessarily those for whom it was designed - NVIDIA’s gamer-to-H100 transition was for a while only detectable through the large premiums commanded on eBay by certain classes of (gaming) GPUs. Machine learning moved to GPUs only out of necessity, having few other architectures readily available at low cost for training and inference.
Each paradigm has properties that make it more appealing than general computing for that crowd. Python, for example, is known as a “glue language”: easy to learn and omnipresent, with a massive ecosystem of well-developed libraries, it lets you add functionality to your product with few barriers to entry, from a web server to graphs for reports, giving an analyst an almost superhuman ability to launch fast and iterate.
Similarly, most programs do not need mass parallelism, but multiplying very large matrices turns out to be the bottleneck for both gaming graphics and model training and inference (and zero knowledge proofs of execution, but this is a subject for another day).
The question therefore is not whether a new paradigm serves general use cases better, but which use cases will become feasible, and undergo this rapid expansion.
A new paradigm: verification based computing
You state the space of possible futures: the conditions under which execution may happen. For example, your bank account probably has various daily limits (which you can change) for different types of payments, such as by credit card, direct transfer, or (in Singapore) PayNow. The limit amounts to a statement of the form “I am happy to transfer anything up to $10,000 a day using this method”. When you attempt to make a payment, the bank verifies whether this payment would take you over the limit and authorises or denies it accordingly.
This is a very simple form of verification based computing. The bank does not care about which account the money comes from or where it is going, only whether the daily total has been reached.
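To make this concrete, here is a minimal sketch in Python (not Saline code; the data structure and names are our own illustration) of a verifier that checks only the daily-limit condition and ignores everything else about the payment:

```python
from dataclasses import dataclass

@dataclass
class Payment:
    method: str   # e.g. "card", "transfer", "paynow"
    amount: int   # in cents, to avoid floating point surprises
    payee: str    # present in the payment, but irrelevant to the verifier

def within_daily_limit(payment: Payment, spent_today: int, daily_limit: int) -> bool:
    """Pure verification: does the proposed end state (today's total) stay under the limit?"""
    return spent_today + payment.amount <= daily_limit

# The verifier never asks where the money goes, only whether the condition holds.
assert within_daily_limit(Payment("paynow", 250_00, "anyone"),
                          spent_today=9_000_00, daily_limit=10_000_00)
```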
A daily limit is an immediate, present condition, but conditions can also deal with unknowns that will only be resolved in the future, such as which of a whitelist of signatories will trigger future transactions from a joint account.
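The shape of the condition is the same even though the signer is unknown today; a hypothetical sketch:

```python
# Whitelist agreed today; which signer will actually turn up is a future unknown.
APPROVED_SIGNERS = {"alice_key", "bob_key", "carol_key"}

def may_trigger_transaction(signer: str) -> bool:
    """The condition covers every possible future: any whitelisted signer, no one else."""
    return signer in APPROVED_SIGNERS
```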
Now imagine a gaming tournament. Every entrant pays a fee, and the winner collects the sum total of entry fees (minus an administrative charge, naturally). In this case, the winner is not yet known, but the game and its entrants want to pay them once they are. However the game might be implemented, at the conclusion of the tournament (or upon a withdrawal request) it will verify that the winner did, in fact, win. But this future verification is agreed upon right now, ahead of any entrant paying their fee and starting to play.
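A sketch of that payout condition, again in Python and with hypothetical names (the real proof format would be whatever the game publishes), might look like this:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tournament:
    finished: bool
    # Hypothetical stand-in for whatever proof scheme the game publishes.
    check_victory_proof: Callable[[str, bytes], bool]

def verify_payout(claimant: str, proof_of_victory: bytes, t: Tournament) -> bool:
    """Release the pot if and only if the claimant proves they won.
    How they won (strategy, kills, alliances) never enters the check."""
    return t.finished and t.check_victory_proof(claimant, proof_of_victory)

# Usage: the game supplies its own proof checker; the payout logic never changes.
t = Tournament(finished=True,
               check_victory_proof=lambda who, proof: proof == b"signed-by-game-server")
assert verify_payout("winner_42", b"signed-by-game-server", t)
```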
Present and future conditions can therefore also encapsulate optionality. For example, a company might issue a venture capitalist an option (a warrant) entitling them to 5% of its preferred stock in exchange for $500,000, expiring in 3 months. The venture capitalist can decide, at any point up to 3 months in the future, whether or not to exercise the warrant. Here, the company would verify upon an exercise request that the expiry has not passed, and that the amount has been transferred, before transferring the shares. Whether or not the venture capitalist will exercise their warrant is unknown until expiry or exercise. But both sides sign today on the two possible futures of exercise or no exercise - with optionality granted to the venture capitalist.
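A hedged sketch of that exercise check, with made-up field names, could be as small as:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Warrant:
    expiry: datetime
    strike_amount: int   # in cents

def may_exercise(warrant: Warrant, now: datetime, amount_received: int) -> bool:
    """Transfer the shares only if the warrant is still live and fully paid."""
    return now <= warrant.expiry and amount_received >= warrant.strike_amount
```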
One interesting feature of verification based computing is that the execution path does not matter. Provided that the end state meets the conditions, the state change is accepted. Provided that the total for the day is not reached, the bank authorises the transaction, regardless of payee. Provided that the winner proves their victory, the game transfers them the winnings, regardless of strategy and tactics used, enemies destroyed, territory conquered, team affiliations…
This leads to a free market in execution. Assuming there is some advantage to be had from providing a state change that passes verification, people will compete to reach that state in the cheapest and fastest way possible. Like capitalism, this creates a discovery process for (probably) the best possible execution, rather than attempting to design it upfront. Meanwhile, whoever is doing the verification has constant and low “compute” costs, consisting of checking whether a logical statement is true once the provided data is plugged in.
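Abstracting away from Saline’s actual condition language, the verifier’s side of this market reduces to a single predicate evaluation over a proposed end state, which is what keeps its costs low and predictable:

```python
from typing import Callable

# A condition is just a predicate over a proposed end state.
Condition = Callable[[dict], bool]

def verify(condition: Condition, proposed_state: dict) -> bool:
    """One predicate evaluation: cheap, predictable, and completely
    indifferent to how the prover discovered this state."""
    return condition(proposed_state)

# Example: the daily-limit condition from earlier, expressed as a predicate.
under_limit: Condition = lambda s: s["spent_today"] + s["amount"] <= s["daily_limit"]
assert verify(under_limit, {"spent_today": 9_000_00, "amount": 250_00, "daily_limit": 10_000_00})
```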
Let us look at the biggest example of a successful free market in execution in verification based computing: proof of work blockchains. Bitcoin has, as of this writing, a market capitalisation of over a trillion USD. When Bitcoin was mined with CPUs, you could also get it for free from online faucets, and in 2010, two pizzas were exchanged for 10,000 BTC, which is worth close to $700m today. Satoshi, having vanished in the same year the pizzas were purchased, does not (presumably) care or think about how 2024’s Bitcoins are mined. But as early as 2018, Bitmain, which created the leading mining ASICs (the only hardware to profitably mine today), reported a net profit of $742m. All this from a single, specialist verification problem!
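The verification side of proof of work really is that small. A simplified Python sketch (real Bitcoin hashes an 80-byte block header and derives the target from the compact `bits` field, details we gloss over here):

```python
import hashlib

def meets_target(block_header: bytes, target: int) -> bool:
    """Anyone can check in microseconds what took miners quintillions of hash attempts to find."""
    digest = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
    return int.from_bytes(digest, "little") <= target
```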
The Bitcoin blockchain is a custom-built verification system where third parties race to solve exactly one problem (finding the next hash). What use cases might benefit from a general verification system?
In our opinion, it’ll be where any combination of these factors is present in the desired state changes:
- Unknown, relevant information that will become known ahead of the transaction
- Optionality
- Need for discovery of efficient execution
- Need for predictable (and low) cost
Trustlessness: the most important property
Because verification “compresses” the work done down to its proof, tampering with verification becomes an exceptionally attractive attack vector with enormous prizes. Thus, the most important property of a verification based computing environment is trustlessness: the system must be designed to be very difficult to tamper with.
It just so happens that this is the exact problem solved by decentralised ledgers such as blockchains. Ethereum generalises the Bitcoin idea to many classes of computing problems, of which verification is only a subset. But just as mining Bitcoin with CPUs is, today, an uncomfortable proposition, running verification on a general-purpose chain is a poor fit: verification calls for a custom-built verification chain, one that “speaks” maths natively and can understand and verify proofs, data, and any other dimensions required.
This is, essentially, why we are building the Saline Network: a chain designed, from the ground up, for verification. As we opened: if Ethereum is the world computer, Saline is the world verifier. And the gateway to the Cambrian explosion in verification based computing.