The Compliance-Innovation Trade-off
tl;dr: DeFi cannot simultaneously be permissionless, allow arbitrary innovation, and comply with any meaningful regulations. You can choose at most two of those properties. If you accept a limited form of innovation you can have two and a half of them.
Fundamental results in logic and computer science impose a trade-off on any permissionless system’s ability to both permit innovation and achieve compliance with non-trivial regulations. This result depends only on long-settled concepts and the assumption that a financial system must provide users a logically consistent view of payments and balances.
This is a semi-technical treatment, with more formal work proceeding elsewhere. On this blog we are going to sketch the argument and link to Wikipedia. None of this math is new (the referenced papers are from 1931, 1936, 1943 and 1951) and none of the assumptions or definitions is intended to be controversial.
At the same time the works we build atop were considered groundbreaking when new and are considered foundational to mathematics today. That none of this math is traditionally considered related to finance is irrelevant. DeFi represents the first serious attempt to fully automate an intermediary-free financial system. These questions simply never came up before.
In 1931 Gödel proved that any formal system capable of expressing arithmetic is either:
- Inconsistent: you can prove both X and not-X
- Incomplete: there are well-formed statements you can prove neither true nor false
This shocked the math, logic and philosophy worlds. A few years later, in 1936, Church and Turing proved that the “halting problem” (a well-posed, reasonable-sounding question to ask a computer) has no general solution. In essence they showed that not all well-posed problems can be answered by computers.
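The contradiction at the heart of the halting problem fits in a few lines of code. This is a sketch of the classic diagonalization argument; `halts` here is the hypothetical oracle that Church and Turing proved cannot exist, not anything you could actually implement.

```python
# Sketch of the diagonalization behind the halting problem. The
# function `halts` is a hypothetical oracle; no correct, total
# implementation of it can exist, which is the whole point.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no correct implementation can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:   # oracle said we halt, so loop forever
            pass
    else:
        return        # oracle said we loop, so halt immediately

# Feeding `diagonal` to itself forces a contradiction: if the oracle
# answers True, diagonal loops forever; if it answers False, diagonal
# halts. Either answer is wrong, so no correct `halts` can be written.
```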
Then in 1943 Kleene connected these concepts (here’s a paper, sorry, there is no wiki) by showing that every system that can express arithmetic contains the halting problem and offers the degree of computational power we today call Turing-completeness. This means that if you can do arithmetic you can express any computer program. It also means your system cannot solve the halting problem.
Next, in 1951, Rice extended the Church-Turing halting result to something far more general: you cannot write a program that checks whether another program has any particular non-trivial property of its behavior. For example, you cannot programmatically check whether a program will run out of memory without running it, or whether it will finish within a given time limit.
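The reduction behind Rice’s theorem is short enough to sketch. This is a minimal illustration under our own naming, assuming a hypothetical decider `has_property` for some non-trivial behavioral property P and a known program `witness` that has P; none of these are real APIs.

```python
# If any non-trivial behavioral property P were decidable, halting
# would be too. `has_property` and `witness` are hypothetical.

def has_property(program):
    """Hypothetical decider for a non-trivial property P of a
    program's behavior. Rice's theorem says this cannot exist."""
    raise NotImplementedError("no such decider exists")

def witness(x):
    """Some known program whose behavior has property P."""
    return x

def make_gadget(suspect, data):
    """Build a program that behaves like `witness` if suspect(data)
    halts, and exhibits no behavior at all if suspect(data) loops."""
    def gadget(x):
        suspect(data)      # runs forever exactly when suspect(data) does
        return witness(x)  # otherwise act like the P-witness
    return gadget

def halts(suspect, data):
    # The gadget has property P exactly when suspect(data) halts, so
    # a real `has_property` would solve the halting problem.
    return has_property(make_gadget(suspect, data))
```

(The sketch assumes the never-halting program lacks P; the standard proof handles the opposite case by deciding the complement of P instead.)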
Importantly, Rice’s result depends only on those conditions Kleene proved any system that can handle arithmetic must possess. A version of that proof is taught in every introductory theory of computation course. It is theorem 5.8.5 in this highly regarded textbook. That textbook’s author taught me using that book when it first came out. In theory you can go argue with him. He is the only person mentioned so far who is still alive, so there’s that. Plus, even if Gödel were still alive, arguing with someone whose Wikipedia biography literally compares them to Aristotle is probably a bad idea.
Finally, the most powerful sort of computer that does not suffer from the halting problem, and therefore escapes Rice’s extension, is known as a linear bounded automaton (LBA). That is also covered in the textbook. LBAs do not come up much in practice because they lack the expressiveness we call Turing-completeness and are therefore not terribly useful. We are going to use the LBA a bit later.
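Why does bounding memory dodge the halting problem? A machine with bounded state has only finitely many configurations, so any run either halts or revisits a configuration and therefore loops forever, and simple bookkeeping tells us which. A toy sketch (our own illustration) over programs whose entire state is a single value:

```python
# Halting is decidable for bounded-memory machines: record every
# configuration seen; a repeat means the run loops forever.

def halts_bounded(step, start):
    """Decide halting for a program whose entire state is one value
    drawn from a finite set. `step` maps state -> next state, or
    returns None to signal a halt."""
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False   # revisited a configuration: infinite loop
        seen.add(state)
        state = step(state)
    return True            # reached the halting state

# A countdown halts; a modular counter cycles forever.
print(halts_bounded(lambda s: s - 1 if s > 0 else None, 5))  # True
print(halts_bounded(lambda s: (s + 1) % 10, 0))              # False
```

This is exactly the trick that fails for a full Turing machine: with unbounded memory the set of configurations is infinite, so the bookkeeping never terminates.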
What does it mean to be trustless? Or to trust? These words are thrown around a lot in crypto, but not carefully, so we are going to propose a definition for trust. If you can prove something is true you do not need trust. “Trust but verify” is more properly classified as a joke (albeit a very Russian one) than a policy. While the precise origin of the phrase is unknown, it is connected to both Lenin and Stalin, who are both quite famous but certainly not for being trusting.
Anyway, once we verify something we no longer need to trust anybody, as we know the answer. At the same time, Gödel tells us that sometimes it is impossible to prove a given statement true or false. And this suggests a natural definition.
trust (n.): to believe something is true with probability 1 even while in possession of a proof that no proof that this belief is accurate, or mistaken, can ever be produced.
This is not a philosophy blog but we will briefly note this is not the same as faith. One can take on faith something that is manifestly false. One can also take on faith an assertion that could in theory be proven true or false without investigating which it is. By trust we are specifically focusing on questions that are formally unresolvable and not simply questions where we did not bother to find out.
Sketch Of Proof
The initial setup:
- We want to mechanize finance
- Finance needs arithmetic
- Gödel tells us that if we can do arithmetic we must choose whether our system is inconsistent or incomplete
- We need certainty about whether a given payment was made and what account balances are, so we cannot pick inconsistency and therefore choose incompleteness
- Kleene tells us our system has the halting problem and all that comes with it
- We can then copy Savage’s proof for Rice’s theorem and adopt that result
So at this point we cannot automatically prove any non-trivial property of programs in our system, because of Rice’s theorem. We need to see concrete code to check compliance.
There are two types of code: code that calls external functions and code that does not. If we call external functions then we cannot make any guarantees, because the system’s behavior now depends on code we cannot inspect in advance.
What if we don’t call external functions? Then things are more complicated. Can we edit the code? There are a few answers:
- No. In which case we can prove properties of the system including compliance. But we cannot allow any innovation as new programs cannot be published.
- Yes, without restriction. So here we are again screwed. We cannot promise anything because the code can change arbitrarily at any time.
- Yes, but only with an LBA. Now we are fine as far as compliance goes but limited on the innovation front, because an LBA cannot do as much as, you know, a general-purpose computer.
Some people are probably thinking “an LBA is just a Turing machine with limited memory.” But they are only partially correct: Turing-completeness and LBA-equivalence are properties of languages, not hardware. If your programming language is Turing-complete you do not magically get access to infinite memory.
In our restricted system we can only effect edits expressible in a context-sensitive language rather than a recursively enumerable one. This imposes limits on what we can change.
So how does this relate to DeFi? It’s all about who gets to push updates. So let’s consider each case above in turn.
- If the system already makes external calls we can make no guarantees to start with.
- If arbitrary changes can be pushed then a permissionless system cannot make guarantees because anybody could change it.
- If we can only push changes via an LBA, the system itself can enforce compliance with policies on a going-forward basis. It can reliably reject changes that would lead to non-compliance, so long as the initial state was compliant.
- If the system is immutable it does not matter who can access it — it can never be changed. It remains as compliant as it was on day 1.
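To make the LBA case concrete: because a bounded system has only finitely many reachable states, a compliance invariant can be checked exhaustively before an update is accepted. A minimal sketch, with updates modeled as finite transition tables and a `compliant` predicate standing in for a real policy (all names here are illustrative, not a real DeFi interface):

```python
# Accept an update only if every state reachable under it satisfies
# the compliance predicate. This check always terminates precisely
# because the state space is finite.

def reachable_states(transitions, start):
    """All states reachable from `start` in a finite transition table
    mapping state -> list of successor states."""
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        frontier.extend(transitions.get(state, []))
    return seen

def accept_update(transitions, start, compliant):
    """True iff the proposed update can never reach a non-compliant
    state from the (assumed compliant) starting state."""
    return all(compliant(s) for s in reachable_states(transitions, start))

# Example: states are account balances; the policy forbids negatives.
print(accept_update({0: [1], 1: [2], 2: [0]}, 0, lambda s: s >= 0))  # True
print(accept_update({0: [1], 1: [-1]}, 0, lambda s: s >= 0))         # False
```

This is the sense in which such a system can “reliably reject changes that would lead to non-compliance”: the check is exhaustive and guaranteed to finish, which is exactly what an unbounded system cannot offer.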
Now we are getting somewhere. Permissionless systems that offer changeability beyond what an LBA allows cannot credibly comply with any regulations.
Permissioned systems can — to the extent we trust whoever set up the system and does the permissioning going forward.
Satoshi & The Arrow Of Time
Now we are in a great position to reconsider part of the Bitcoin whitepaper. Satoshi describes the system thus:
Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
Notice that Bitcoin only makes promises about the past. As we now understand, this is not a shortcoming of the system. Since the smart contract language can do arithmetic (it is Turing-complete per Craig Stephen Wright) and the system is permissionless, it cannot make firm promises about the future.
This offers perspective on Bitcoin as well. One can reasonably well sum up Bitcoin’s view on our definition of trust as “healthy distrust is a good basis for joint work.” That is a quote attributed to Stalin. The system only offers trust guarantees about the past.
This proof encompasses a huge range of known impossibility results. We started out talking about compliance, but what we proved concerns any non-trivial subset of the recursively enumerable languages. For anyone who has not studied the theory of computation: that means pretty much any set of rules you can write down.
In math we call an important-but-straightforward result that falls easily out of something difficult a corollary. Usually the hard proof is technical in nature, but it does 99% of the work for the important conclusion. We leave proofs of the following corollaries as an exercise for the reader:
- Trustless & permissionless bridges are impossible
- Decentralized & permissionless identity is impossible
- Trustless & permissionless escrow is impossible
- Decentralized & permissionless risk-free yield is impossible
- Decentralized & permissionless stablecoins are impossible
Note that we insert the word permissionless into these claims. People do not generally include this word, but they mean it. We are now able to parse this difference.
Published proofs of these impossibilities, inclusive of the permissionless requirement, exist. We wrote some of them. But those are long and complex papers. Proving those results as corollaries here requires the following process:
- Define the thing you want as a proper subset of the recursively enumerable languages. This sort of exercise is a first-year university homework assignment. Read the textbook above.
- Link the paper.
At the same time, you can prove something is possible by showing how it restricts innovation, updates, arbitrary interactions, or any number of other things. Plenty of products are possible, just not all of them. And this technique allows you to fairly easily and constructively prove that a working product really works.
This Is Not Deep
What Gödel figured out was groundbreaking. Similarly, Church and Turing’s work birthed the theory of computation. Kleene and Rice connected and generalized those insights.
All we have done here is take these earlier theorems seriously and ask “what happens if we assume a reasonable financial system and reason from there?” It turns out lots of very-visibly-difficult-to-solve problems are in fact unsolvable, for the same reason Hilbert’s 10th problem was resolved in a weird way.
If you want to make any credible promises about the future, your system needs either permissions or some way to limit edits to those a linear bounded automaton can perform.