David Wong | Cryptologie | Markdown http://www.cryptologie.net/ About my studies in Cryptography. en-us Thu, 17 Oct 2024 02:21:52 +0200 I like whiteboards David Wong Thu, 17 Oct 2024 02:21:52 +0200 http://www.cryptologie.net/article/622/i-like-whiteboards/ http://www.cryptologie.net/article/622/i-like-whiteboards/#comments
* [Proof is in the Pudding 02: zkTLS](https://www.youtube.com/watch?v=k4fylgnJRPE)
* [Proof is in the Pudding 01: Arithmetization](https://www.youtube.com/watch?v=QjNVYgEorec&t=)
* [ZK Whiteboard Sessions - S2M1: What is Zero-Knowledge (like, actually)?](https://www.youtube.com/watch?v=ksTTyt0GTvQ) ]]>
Don't go in debt, and other mistakes not to make when receiving stocks or crypto tokens as payment David Wong Wed, 09 Oct 2024 02:27:13 +0200 http://www.cryptologie.net/article/621/dont-go-in-debt-and-other-mistakes-not-to-make-when-receiving-stocks-or-crypto-tokens-as-payment/ http://www.cryptologie.net/article/621/dont-go-in-debt-and-other-mistakes-not-to-make-when-receiving-stocks-or-crypto-tokens-as-payment/#comments
> Disclaimer: if you are in this situation don't just trust me, do your own research and hire your own financial advisor.

It all started when some of my coworkers at Facebook warned me that when the financial year came to an end, they realized that they still owed tens of thousands of dollars in taxes. This might sound like an outrageous number, but one might think it's also OK, as "if you earn more it's normal to pay more taxes". Years later, when this happened to me, I realized that I could almost have ended up in debt.

Let me explain: stocks or tokens that you receive as payment are paper money, **but not for the IRS**. For the government, they're worth as much as the "fair market value" of that stock or token at the moment your employer sends it to you. In other words, it's like income in USD, so they'll still tax you on it even if you haven't converted it to USD yourself.

Let me give you an example: your company has a token that's worth 1,000,000 USD. They send you 1 token, which the IRS will see as an event of you receiving one million dollars of income. At that moment, if you don't sell, or if you're too slow to sell, and the price drops to 1 USD, you're still going to owe the IRS taxes on one million dollars of income.

What's tricky is that even if you decide to sell the stock/token directly, its fair market value (however you decide to calculate it) can be highly uncorrelated to the price you sell it at. That's because tokens are known to be fairly volatile, especially (if you're unlucky) during the time it takes to receive and then sell them.

If that's not enough, you also pay taxes (called capital gains taxes) when you sell and convert to USD, and these are going to be high if you do it within a year (they'll be taxed like income).

OK but usually, you don't have to care too much about that, because your company will withhold for you, meaning that they will sell some stock/token to cover for your taxes before sending you the rest. But it is sometimes not enough! Especially if they think you're in some specific tax bracket. It seems like if you're making too little, you'll be fine, and if you're making too much, you'll be fine too. But if you're in the middle, chances are that your company won't withhold enough for you, and you'll be responsible for selling some on reception of the stock/token to cover for taxes later (if you're a responsible human being).

By the time I realized that, my accountant on the phone was telling me that I had to sell all the tokens I had left to cover for taxes. The price had crashed since I had received them.

That year was not a great year. At the same time I was happy that while I did not make any money, I also had not lost any. Can you imagine if I had to take loans to cover for my taxes?

The second lesson is that when you sign a grant which dictates how you'll "vest" some stock/token over time, you can decide to pay taxes at that point in time on the value the stock/token already has. This is called an 83(b) election and it only makes sense if you're vesting, and if you're still within the month after you signed the grant. If the stock/token hasn't launched, this most likely means that you can pay a very small amount of taxes up front. Although I should really disclaim that I'm not financially literate (as you can see) and so you shouldn't just trust me on that. ]]>
Some news from founding a startup (zkSecurity) David Wong Thu, 19 Sep 2024 20:56:43 +0200 http://www.cryptologie.net/article/620/some-news-from-founding-a-startup-zksecurity/ http://www.cryptologie.net/article/620/some-news-from-founding-a-startup-zksecurity/#comments
I posted a retrospective on the main blog of zkSecurity: [A Year of ZK Security](https://www.zksecurity.xyz/blog/posts/a-year-of-zksecurity/), but more time has passed since, and here's how things are looking.

We've had a good stream of clients, and we are now much more financially stable. We've managed to ramp up the team so that we stop losing work opportunities due to lack of availability on our side (we're now 15 engineers, interns included). Not only is the founding team quite the dream team, but the team we've built is made of people more qualified than me, so we have a good thing going on.

Everybody seems to have quite a different background: some people are more focused on research, others are stronger devs, and others are CTF people wearing the security hat. So much so that our differing interests have led us to expand to more than just auditing ZK. We now do development, formal verification work, and design/research work as well. We also are not solely looking at ZK anymore, but at advanced cryptography in general. Think consensus protocols, threshold cryptography, MPC, FHE, etc.

Perhaps naming the company "zk"security was a mistake :) but at least we made a name for ourselves in a smaller market, and are now expanding to more markets!

That's it. ]]>
Two And A Half Coins #9 - Tradfi, Banks, SWIFT, CBDCs, with Xavier Lavayssière and Clément Berthou David Wong Tue, 30 Jul 2024 00:27:35 +0200 http://www.cryptologie.net/article/619/two-and-a-half-coins-9-tradfi-banks-swift-cbdcs-with-xavier-lavayssire-and-clment-berthou/ http://www.cryptologie.net/article/619/two-and-a-half-coins-9-tradfi-banks-swift-cbdcs-with-xavier-lavayssire-and-clment-berthou/#comments Two And A Half Coins #8 - Consensus protocols, Bitcoin, Fastpay, and Linera with Mathieu Baudet David Wong Tue, 30 Jul 2024 00:27:16 +0200 http://www.cryptologie.net/article/618/two-and-a-half-coins-8-consensus-protocols-bitcoin-fastpay-and-linera-with-mathieu-baudet/ http://www.cryptologie.net/article/618/two-and-a-half-coins-8-consensus-protocols-bitcoin-fastpay-and-linera-with-mathieu-baudet/#comments They're all SNARKs David Wong Sat, 20 Jul 2024 19:19:18 +0200 http://www.cryptologie.net/article/617/theyre-all-snarks/ http://www.cryptologie.net/article/617/theyre-all-snarks/#comments
It all started from a clever pun "succinct non-interactive argument of knowledge" and ended up with weird consequences as not every new scheme was deemed "succinct". So much so that the naming branched (STARKs are "scalable" and not "succinct"), and some schemes can't even be called anything. This is mostly because succinct not only means really small proofs, but also really small verifier running time.

If we were to classify verifier running time between the different schemes it usually goes like this:

* KZG (used in Groth16 and Plonk): super fast
* FRI (used in STARKs): fast
* Bulletproof (used in kimchi): somewhat fast

Using the almost-standardized categorization, only the first one can be called a SNARK, the second one is usually called a STARK, and I'm not even sure what we call the third one, a NARK?

But does it really make sense to reserve SNARK for the first scheme? It turns out people are using all three schemes because they are all dope and fast(er) than running the program by yourself. Since SNARK has become the main term for general-purpose zero-knowledge proofs, let's just use that!

I'm not the only one that wants to call STARKs and bulletproofs SNARKs, [Justin Thaler also makes that point here](https://a16zcrypto.com/posts/article/17-misconceptions-about-snarks/):

> the “right” definition of succinct should capture any protocol with qualitatively interesting verification costs – and by “interesting,” I mean anything with proof size and verifier time that is smaller than the associated costs of the trivial proof system. By “trivial proof system,” I mean the one in which the prover sends the whole witness to the verifier, who checks it directly for correctness ]]>
The case against slashing? David Wong Fri, 19 Jul 2024 05:54:45 +0200 http://www.cryptologie.net/article/616/the-case-against-slashing/ http://www.cryptologie.net/article/616/the-case-against-slashing/#comments
If you didn't know, **slashing** is the act of punishing malicious validators in [BFT consensus protocols](https://en.wikipedia.org/wiki/Byzantine_fault) by taking away tokens. Often tokens that were deposited and locked by the validators themselves in order to participate in the consensus protocol. Perhaps the first time that this concept of slashing appeared was in [Tendermint](https://docs.cosmos.network/main/build/modules/slashing)? But I'm not sure.

In the article, they make the point that BFT consensus protocols **need** slashing, and are less secure without it. This is an interesting claim, as there are a number of BFT consensus protocols running without slashing (perhaps a majority of them?).

Slashing is mostly applied to safety violations (a fork of the state of the network) that can be proved. This is often done by witnessing two conflicting messages being signed by the same node in the protocol (often called **equivocation**). Any "forking attack" that wants to be successful will require a threshold (usually a third) of the nodes signing conflicting messages.

Slashing only affects nodes that still have stake in the system, meaning that an attacker who forks old history to target a node that's catching up (without a checkpoint) isn't affected by slashing (what we call **long-range attacks**). The post argues that we need to separate the *cost-of-corruption* from the *profit-from-corruption*, but seems to assume that the cost-of-corruption is always the full stake of all attackers, in which case the analysis only makes sense for attacks aiming at forking the tip/head of the blockchain.

The post presents a new model, the **Corruption-Analysis model**, in order to analyze slashing. They accompany the model with the following table, which showcases the different outcomes of an attack in a protocol that **does not have** slashing:

![before](/upload/GS0Y_AjWQAEJv3f.png)

Briefly, it shows that (in the bottom-left corner) failed attacks don't really have a cost, as attackers keep their stake (`S`) and potentially get paid (`B1`) to do the attack. It also shows that (in the top-right corner) people will likely punish everyone in case of an attack by mass selling, taking the token price down to $0$ (an implied feature they call **token toxicity**).

On the other hand, this is the table they show once slashing is integrated:

![after](/upload/GS0Yvd_WkAAmbSV.png)

As one can see, the bottom-left corner and the top-right corner have changed: a failed attack now has a cost, and a successful attack almost doesn't have one anymore, showing that slashing is strictly better than not slashing.

While this analysis is great to read, I'm not sure I fully agree with it. First, where did the "token toxicity" go in the case of a successful attack? I would argue that a successful attack would impact the protocol in similar ways. Perhaps not as intensely, since as soon as the attack is detected the attackers would lose their stake and not be able to perform another attack, but still, this would show that the economic security of the network is not good enough and people would most likely lose their trust in the token.

Second, is the case where the attack was not successful really a net improvement? My view is that failed attacks generally happen due to accidents rather than legitimate attempts, as an attacker would most likely succeed if they had enough stake to perform an attack. And indeed, I believe all of the slashing events we've seen so far were accidents, and no successful BFT attack has ever been witnessed (slashing or no slashing). (That being said, there are cases where an attacker might have difficulties properly isolating a victim's node, in which case it might be harder to always be successful at performing the attack. This really depends on the protocol and on the power of the adversary.)

In addition, attacks are all different. The question of who is targeted matters: if the victim reacts, how long does it take them to punish the attackers and publish the equivocation proofs? Does the protocol have a long-enough unstaking period to give the victim time to punish the attackers? And if a third of the network is Byzantine, can they prevent the slashing from happening anyway by censoring transactions or something? Or worst case, can they kill the liveness of the network instead of getting slashed, punishing everyone and not just them (up until a hard fork occurs)?

Food for thought, but this shows that slashing is still quite hard to model. After all, it's a heuristic, and not a mechanism that you will find in any BFT protocol whitepaper. As such, it is part of the overall economic security of the deployed protocol and has to be measured via both its upsides and downsides. ]]>
Interactive Arithmetization and Iterative Constraint Systems David Wong Thu, 11 Jul 2024 21:37:46 +0200 http://www.cryptologie.net/article/615/interactive-arithmetization-and-iterative-constraint-systems/ http://www.cryptologie.net/article/615/interactive-arithmetization-and-iterative-constraint-systems/#comments
For example, you can sort of make these generalizations and explain most modern general-purpose ZKP systems with them:

* They all use a [polynomial commitment scheme (PCS)](https://cryptologie.net/article/525/pairing-based-polynomial-commitments-and-kate-polynomial-commitments/). Thank humanity for that abstraction. The PCS is the core engine of any proof system as it dictates how to commit to vectors of values, how large the proofs will be, how heavy the work of the verifier will be, etc.
* Their constraint systems are basically all acting on [execution trace tables](https://cryptologie.net/article/601/how-starks-work-if-you-dont-care-about-fri/), where columns can be seen as registers in a CPU and the rows are the values in these registers at different moments in time. R1CS has a single column, both AIR and plonkish have as many columns as you want.
* They all reduce these columns to polynomials, and then use the fact that for some polynomial $f$ that should vanish on some points $\{w_0, w_1, \cdots\}$, we have that $f(x) = [(x - w_0)(x-w_1)\cdots] \cdot q(x)$ for some $q(x)$
* And we can easily check the previous identity by checking it at a single random point ([which is highly secure thanks to what Schwartz and Zippel said a long time ago](https://www.cryptologie.net/article/507/the-missing-explanation-of-zk-snarks-part-1/))
* They also all use the fact that proving that several identities are correct (e.g. $a_i = b_i$ for all $i$) is basically the same as proving that their random linear combination is the same (i.e. $\sum_i r_i (a_i - b_i) = 0$), which allows us to "compress" checks all over the place in these systems.

Knowing these 4-5 points will get you a very long way. For example, you should be able to quickly understand [how STARKs work](https://cryptologie.net/article/601/how-starks-work-if-you-dont-care-about-fri/).
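
To make the last three bullets concrete, here's a toy sketch (Python, a made-up Mersenne-prime field, made-up polynomials, and none of the commitment machinery of a real proof system):

```python
# Toy illustration: a polynomial that vanishes on a set of points factors through the
# vanishing polynomial of that set, the identity is checked at a single random point
# (Schwartz-Zippel), and many checks are compressed with a random linear combination.
import random

P = 2**61 - 1  # toy prime field

def pmul(a, b):  # multiply two polynomials given as coefficient lists (low degree first)
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def peval(f, x):  # evaluate a polynomial at a point
    return sum(c * pow(x, i, P) for i, c in enumerate(f)) % P

# f vanishes on {w0, w1, w2}, so f(x) = (x - w0)(x - w1)(x - w2) * q(x) for some q
domain = [5, 7, 11]
q = [3, 1]                      # q(x) = 3 + x, arbitrary
Z = [1]
for w in domain:
    Z = pmul(Z, [(-w) % P, 1])  # Z(x) *= (x - w)
f = pmul(Z, q)

# instead of checking f = Z * q everywhere, check it at one random point
r = random.randrange(P)
assert peval(f, r) == peval(Z, r) * peval(q, r) % P

# compressing many checks a_i == b_i into one: sum_i r_i * (a_i - b_i) == 0
a, b = [4, 9, 16], [4, 9, 16]
rs = [random.randrange(P) for _ in a]
assert sum(ri * (ai - bi) for ri, ai, bi in zip(rs, a, b)) % P == 0
```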

Having said that, I more recently noticed another pattern that is used all over the place by all these proof systems, yet is never really abstracted away, the **interactive arithmetization** pattern (for lack of a better name). Using this concept, you can pretty much see AIR, Plonkish, and a number of protocols in the same way: they're basically constraint systems that are **iteratively built** using challenges from a verifier.

Thinking about it this way, the difference between STARK's AIR and Plonk's plonkish arithmetization is now that one (Plonk) has fixed columns that can be preprocessed and the other doesn't. The permutation of Plonk is now nothing special, and the write-once memory of Cairo is nothing special as well: they're both interactive arithmetizations.

Let's look at plonk as a table, where the left table is the one that is fixed at compilation time, and the right one is the execution trace that is computed at runtime when a prover runs the program it wants to prove:

![plonk tables](https://i.imgur.com/koxAAm1.png)

One can see the permutation argument of plonk as an extra circuit, that requires 1) the first circuit to be run and 2) a challenge from the verifier, in order to be secure.

As a diagram, it would look like this:

![interactive arithmetization](https://i.imgur.com/DwpqUn1.png)

Now, one could see the write-once memory primitive of Cairo in the same way (which [I explained here](https://zksecurity.github.io/stark-book/cairo/memory.html)), or the lookup arguments of a number of proof systems in the same way.

For example, the log-derivative lookup argument used in [protostar](https://eprint.iacr.org/2023/620) (and in most protocols nowadays) looks like this. Notice that:

* in step 4 the execution trace of the main circuit is sent
* in step 6 the verifier sends a challenge back
* and in step 7 the prover sends the execution trace of the second circuit (that implements a lookup circuit) using the challenge

![protostar](https://i.imgur.com/nVAejj2.png)

As such, the point I really want to make is that a number of ZKP primitives and interaction steps can be seen as an interactive process to construct a super constraint system from a number of successive constraint systems iterated on top of each other. Where "constraint system" means: optionally some new fixed columns, some new runtime columns, and some constraints on all the tables, including previous tables. That's it.
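
To make the idea concrete, here's a rough sketch of that interactive process (a hash stands in for a real commitment, and a running-product column stands in for a permutation or lookup argument; everything here is made up for illustration):

```python
# Round 1: the prover fixes the main column(s). A challenge is then derived (Fiat-Shamir
# style) from a commitment to them. Round 2: an auxiliary column is built from that
# challenge, and the final "super constraint system" constrains both sets of columns.
import hashlib

P = 2**61 - 1

def challenge(*columns):
    h = hashlib.sha256(repr(columns).encode()).digest()  # stand-in for a real commitment
    return int.from_bytes(h, "big") % P

# round 1: the main execution trace column
a = [3, 1, 4, 1, 5]

# round 2: a verifier challenge, then an auxiliary column that depends on it
gamma = challenge(a)
z = [1]
for v in a:
    z.append(z[-1] * ((v + gamma) % P) % P)

# the resulting constraint system talks about both a and z at once,
# e.g. the transition z[i+1] = z[i] * (a[i] + gamma) for every row i
assert all(z[i + 1] == z[i] * ((a[i] + gamma) % P) % P for i in range(len(a)))
```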

Perhaps we should call these **iterative constraint systems**. ]]>
Two And A Half Coins #7 - It's time to talk about Ethereum David Wong Mon, 08 Jul 2024 22:26:17 +0200 http://www.cryptologie.net/article/614/two-and-a-half-coins-7-its-time-to-talk-about-ethereum/ http://www.cryptologie.net/article/614/two-and-a-half-coins-7-its-time-to-talk-about-ethereum/#comments
<iframe width="560" height="315" src="https://www.youtube.com/embed/Mviz9KLIlBQ?si=lTfVCMOyrYTFaiRM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> ]]>
Two And A Half Coins: Exploring Layer 2 Solutions on Bitcoin with Kevin Hurley and Alex Akselrod David Wong Wed, 03 Jul 2024 01:40:19 +0200 http://www.cryptologie.net/article/613/two-and-a-half-coins-exploring-layer-2-solutions-on-bitcoin-with-kevin-hurley-and-alex-akselrod/ http://www.cryptologie.net/article/613/two-and-a-half-coins-exploring-layer-2-solutions-on-bitcoin-with-kevin-hurley-and-alex-akselrod/#comments

<iframe width="560" height="315" src="https://www.youtube.com/embed/z3I8KnA6OrE?si=laUX9J_aLxJTxoXX" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> ]]>
An introduction to multi-party computation (videos) David Wong Wed, 15 May 2024 21:43:21 +0200 http://www.cryptologie.net/article/612/an-introduction-to-multi-party-computation-videos/ http://www.cryptologie.net/article/612/an-introduction-to-multi-party-computation-videos/#comments
<iframe width="560" height="315" src="https://www.youtube.com/embed/L_ND1YPmI5E?si=geMn3yHQp1d_ZFc3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

<iframe width="560" height="315" src="https://www.youtube.com/embed/XggHA6FU2gA?si=7X5Oq0fQPhdVAY3W" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> ]]>
A note on the elliptic curve pairing checks in zero-knowledge proofs David Wong Tue, 26 Mar 2024 15:10:33 +0100 http://www.cryptologie.net/article/611/a-note-on-the-elliptic-curve-pairing-checks-in-zero-knowledge-proofs/ http://www.cryptologie.net/article/611/a-note-on-the-elliptic-curve-pairing-checks-in-zero-knowledge-proofs/#comments ## Using Schwartz-Zippel with no multiplication

First, let me say that there are typically two types of "nice" polynomial commitment schemes that people use with elliptic curves: **Pedersen commitments** and **KZG commitments**.

Pedersen commitments are basically hidden random linear combinations of the coefficients of a polynomial. That is, if your polynomial is $f(x) = \sum c_i \cdot x^i$ your commitment will look like $[\sum r_i \cdot c_i] G$ for some base point $G$ and **unknown** random values $r_i$. This is both good and bad: since we have access to the coefficients we can try to use them to evaluate a polynomial from its commitment, but since it's a random linear combination of them [things can get ugly](https://cryptologie.net/article/528/what-is-an-inner-product-argument-part-1).

On the other hand, KZG commitments can be seen as hidden evaluations of your polynomials. For the same polynomial $f$ as above, a KZG commitment of $f$ would look like $[f(s)]G$ for some **unknown** random point $s$. Not knowing $s$ here is much harder than not knowing the values $r_i$ in Pedersen commitments, and this is why KZG usually requires a [trusted setup](https://www.cryptologie.net/article/560/zk-faq-whats-a-trusted-setup-whats-a-structured-reference-string-whats-toxic-waste/) whereas Pedersen doesn't.

In the rest of this post we'll use KZG commitments to prove identities.

Let's use $[a]$ to mean "commitment of the polynomial $a(x)$", then you can easily check that $a(x) = b(x)$ knowing only the commitments to $a(x)$ and $b(x)$ by checking that $[a] = [b]$ or $[a] - [b] = [0]$. This is because of the [Schwartz-Zippel (S-Z) lemma](https://www.cryptologie.net/article/507/the-missing-explanation-of-zk-snarks-part-1/) which tells us that checking this identity at a random point is convincing with high-enough probability.

When multiplication with scalars is required, then things are fine. As you can do $i \cdot [a]$ to obtain $[i \cdot a]$, checking that $i \cdot a = j \cdot b$ is as simple as checking that $i \cdot [a] - j \cdot [b] = [0]$.

This post is about explaining how pairing helps us when we want to check an identity that involves multiplying $a$ and $b$ together.
## Using elliptic curve pairings for a single multiplication

It turns out that elliptic curve pairings allow us to perform a single multiplication. Meaning that once things get multiplied, they move to a different planet where things can only get added together and compared. No more multiplications.

Pairings give you this function $e$ which allows you to move things in the exponent like this: $e([a], [b]) = e([1], [1])^{ab}$. Where, remember, $ab$ is the multiplication of the two polynomials evaluated at a random point: $a(s) \cdot b(s)$.

As such, if you wanted to check something like this for example: $a \cdot b = c + 3$ with commitments only, you could check the following pairings:

$$
e([a], [b]) = e([c] + 3 [1], [1])
$$
By the way, the left argument and the right argument of a pairing are often in different groups for "reasons". So we usually write things like this:

$$
e([a]_1, [b]_2) = e([c]_1 + 3 [1]_1, [1]_2)
$$
And so it is important to have commitments in the right groups if you want to be able to construct your polynomial identity check.
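
As a toy illustration (definitely not production code), here's what that single-multiplication check could look like with the `py_ecc` library over BN254, where the trusted setup secret $s$ is generated locally purely for the demo:

```python
# Checks e([a]_1, [b]_2) = e([c]_1 + 3[1]_1, [1]_2) for toy polynomials with a*b = c + 3.
# In a real system s would come from a trusted setup and would be known to no one.
import random
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order

s = random.randrange(1, curve_order)  # "toxic waste", for illustration only

def commit(group_gen, poly):  # [f(s)] in the given group, for f given by its coefficients
    return multiply(group_gen, sum(c * pow(s, i, curve_order) for i, c in enumerate(poly)) % curve_order)

# toy polynomials such that a(x) * b(x) = c(x) + 3
a = [2, 1]   # a(x) = 2 + x
b = [5]      # b(x) = 5
c = [7, 5]   # c(x) = 7 + 5x, since a*b = 10 + 5x = c + 3

lhs = pairing(commit(G2, b), commit(G1, a))                          # e([a]_1, [b]_2)
rhs = pairing(commit(G2, [1]), add(commit(G1, c), multiply(G1, 3)))  # e([c]_1 + 3[1]_1, [1]_2)
assert lhs == rhs
```
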
## Evaluations can help with more than one multiplication

But what if you want to check something like $a \cdot b \cdot c = d + 4$? Are we doomed?

We're not! One insight that plonk brought to me (which potentially came from older papers, I don't know, I'm not an academic, leave me alone), is that you can reduce the number of multiplications with "*this one simple trick*". Let me explain...

A typical scenario includes you wanting to check an identity like this one:

$$a(x) \cdot b(x) \cdot c(x) = d(x)$$

and you have KZG commitments to all three polynomials $[a], [b], [c]$. (So in other words, hidden evaluations of these polynomials at the same unknown random point $s$)

You can't compute the commitment of the left-hand side because you can't perform the multiplication of the three commitments.

The trick is to **evaluate** ([using KZG](https://cryptologie.net/article/525/pairing-based-polynomial-commitments-and-kate-polynomial-commitments/)) the previous identity at a different point, let's say $\zeta$, and **pre-evaluate** (using KZG as well) as many polynomials as you can to $\zeta$ to reduce the number of multiplications down to 0.

> Note: that is, if we want to check that $a(x) - b(x) = 0$ is true, and we want to use S-Z to do that at some point $\zeta$, then we can pre-evaluate $a$ (or $b$) at $\zeta$ and check the identity $a(\zeta) - b(x) = 0$ at the point $x = \zeta$ instead.

More precisely, we'll choose to pre-evaluate $b(\zeta) = \bar{b}$ and $c(\zeta) = \bar{c}$, for example. This means that we'll have to produce quotient polynomials $q_b$ and $q_c$ such that:

1. $b(s) - \bar{b} = (s - \zeta) \cdot q_b(s)$
2. $c(s) - \bar{c} = (s - \zeta) \cdot q_c(s)$

which means that the verifier will have to perform the following two pairings (after having been sent the evaluations $\bar{b}$ and $\bar{c}$ in the clear):

1. $e([b]_1 - \bar{b} \cdot [1]_1, [1]_2) = e([x]_1 - \zeta \cdot [1]_1, [q_b]_2)$
2. $e([c]_1 - \bar{c} \cdot [1]_1, [1]_2) = e([x]_1 - \zeta \cdot [1]_1, [q_c]_2)$

Then, they'll be able to check the first identity at $\zeta$ and use $\bar{b}$ and $\bar{c}$ in place of the commitments $[b]$ and $[c]$. The verifier check will look like the following pairing (after receiving a commitment $[q]$ from the prover):
$$e(\bar{b} \cdot \bar{c} \cdot [a]_1 - [d]_1, [1]_2) = e([x]_1 - \zeta \cdot [1]_1, [q]_2)$$
which proves using KZG that $a(\zeta)b(\zeta)c(\zeta) - d(\zeta) = 0$ (which proves that the identity checks out with high probability thanks to S-Z).
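
If you want to convince yourself of the algebra, here's a toy sketch with plain finite-field arithmetic (no commitments, made-up polynomials): after pre-evaluating $b$ and $c$ at $\zeta$, the remaining identity really does hold at $\zeta$.

```python
# b and c are pre-evaluated at zeta; what remains is linear in a and d and can be checked
# with a single pairing in the real protocol. Here we just check the field identity.
P = 2**61 - 1

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, x in enumerate(f):
        for j, y in enumerate(g):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def peval(f, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(f)) % P

a, b, c = [1, 2], [3, 4], [5, 6]
d = pmul(pmul(a, b), c)          # so that a(x) * b(x) * c(x) = d(x)

zeta = 123456789
b_bar, c_bar = peval(b, zeta), peval(c, zeta)
assert (b_bar * c_bar % P) * peval(a, zeta) % P == peval(d, zeta)   # linearized check at zeta
```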

## Aggregating all the KZG evaluation proofs

In the previous explanation, we actually perform 3 KZG evaluation proofs instead of one:

* $2$ pairings that are KZG evaluation proofs that pre-evaluate different polynomials from the main check at some random point $\zeta$.
* $1$ pairing that evaluates the main identity at $\zeta$, after it was [linearized](https://cryptologie.net/article/557/linearization-in-plonk-and-kimchi-why/) to get rid of any multiplication of commitments.

Pairings can be aggregated by simply creating a random linear combination of the pairings. That is, with some random values $r_i$ we can aggregate the checks, where the left-hand side is:
$$
b(s) - \bar{b} + r_1 (c(s) - \bar{c}) + r_2 (\bar{b} \cdot \bar{c} \cdot a(s) - d(s))
$$
and the right-hand side is:
$$ = (s - \zeta) \cdot q_b(s) + r_1 ((s - \zeta) \cdot q_c(s)) + r_2 ((s - \zeta) \cdot q(s))$$
]]>
Plonk's permutation, the definitive explanation David Wong Mon, 11 Mar 2024 21:56:39 +0100 http://www.cryptologie.net/article/610/plonks-permutation-the-definitive-explanation/ http://www.cryptologie.net/article/610/plonks-permutation-the-definitive-explanation/#comments ## Multiset equality check

Suppose that you have two [ordered sets](https://en.wikipedia.org/wiki/Set_(mathematics)) of values $D = \{d_1, d_2, d_3, d_4\}$ and $E = \{e_1, e_2, e_3, e_4\}$, and that you want to check that they contain the same values. That is, you want to check that there exists a [permutation](https://en.wikipedia.org/wiki/Permutation) of the elements of $D$ (or $E$) such that the [multisets](https://en.wikipedia.org/wiki/Multiset) (sets where some values can repeat) are the same, but you don't care about which permutation exactly gets you there. You're willing to accept ANY permutation.

$$\{d_1, d_2, d_3, d_4\} = \text{some\_permutation}(\{e_1, e_2, e_3, e_4\})$$
For example, it could be that re-ordering $E$ as $\{e_2, e_3, e_1, e_4\}$ gives us exactly $D$.

### Trick 1: multiply things to reduce to a single value

One way to perform our multiset equality check is to compare the product of the elements on both sides:

$$d_1 \cdot d_2 \cdot d_3 \cdot d_4 = e_1 \cdot e_2 \cdot e_3 \cdot e_4$$

If the two sets contain the same values then our identity checks out. But the reverse is not true, and thus this scheme is not secure.

> Can you see why?

For example, $D = (1, 1, 1, 15)$ and $E = (3, 5, 1, 1)$ are obviously different multisets, yet the product of their elements will match!

### Trick 2: use polynomials, because maybe it will help...

What we can do to fix this issue is to encode the values of each list as roots of two polynomials:

* $d(x) = (x - d_1)(x - d_2)(x - d_3)(x - d_4)$
* $e(x) = (x - e_1)(x - e_2)(x - e_3)(x - e_4)$

These two polynomials are equal if they have the same roots with the same multiplicities (meaning that if a root repeats, it must repeat the same number of times).

### Trick 3: optimize polynomial identities with Schwartz-Zippel

Now is time to use the [Schwartz-Zippel lemma](https://en.wikipedia.org/wiki/Schwartz%E2%80%93Zippel_lemma) to optimize the comparison of polynomials! Our lemma tells us that if two polynomials are equal, then they are equal on all points, but if two polynomials are not equal, then **they differ on MOST points**.

So one easy way to check that they match with high probability is to sample a random evaluation point, let's say some random $\gamma$. Then evaluate both polynomials at that random point $\gamma$ to see if their evaluations match:
$$(\gamma - d_1)(\gamma - d_2)(\gamma - d_3)(\gamma - d_4) = (\gamma - e_1)(\gamma - e_2)(\gamma - e_3)(\gamma - e_4)$$
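
As a toy sketch (made-up values, a made-up prime field), tricks 1 to 3 combined look like this:

```python
# Encode each multiset as the roots of a polynomial and compare the two polynomials
# at a single random point gamma (Schwartz-Zippel).
import random

P = 2**61 - 1
D = [10, 20, 30, 40]
E = [30, 10, 20, 40]  # same multiset, different order

gamma = random.randrange(P)
lhs = rhs = 1
for d in D:
    lhs = lhs * ((gamma - d) % P) % P
for e in E:
    rhs = rhs * ((gamma - e) % P) % P
assert lhs == rhs  # equal (with overwhelming probability) only if the multisets match
```
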
## Permutation check

The previous check is not useful for wiring different cells within some execution trace. There is no specific "permutation" being enforced. So we can't use it as is in plonk to implement our [copy constraints](https://vitalik.eth.limo/general/2019/09/22/plonk.html).

### Trick 4: random linear combinations to encode tuples

To enforce a permutation, we can compare tuples of elements instead! For example, let's say we want to enforce that $E$ must be re-ordered using the permutation $(1 3 2) (4)$ in [cycle notation](https://en.wikipedia.org/wiki/Permutation#Cycle_notation). Then we would try to do the following identity check:
$$
((1, d_1), (2, d_2), (3, d_3), (4, d_4)) = ((2, e_1), (3, e_2), (1, e_3), (4, e_4))
$$
Here, we are enforcing that $d_1$ is equal to $e_3$, and that $d_2$ is equal to $e_1$, etc. This allows us to re-order the elements of $E$:
$$
((1, d_1), (2, d_2), (3, d_3), (4, d_4)) = ((1, e_3), (2, e_1), (3, e_2), (4, e_4))
$$
But how can we encode our tuples into the polynomials we've seen previously?
The trick is to use a **random linear combination**! (And that is often the answer in a bunch of ZK protocols.)

So if we want to encode $(2, d_2)$ in an equation, for example, we write $2 + \beta \cdot d_2$ for some random value $\beta$.

> Note: The rationale behind this idea is still due to Schwartz-Zippel: if you have two tuples $(a,b)$ and $(a', b')$, you know that the polynomial $a + x \cdot b$ is the same as the polynomial $a' + x \cdot b'$ if $a = a'$ and $b = b'$, or if you have $x = \frac{a' - a}{b - b'}$. If $x$ is chosen at random, the probability that it is exactly that value is $\frac{1}{N}$, with $N$ the size of your sampling domain (i.e. the size of your field), which makes this highly unlikely.

So now we can encode the previous lists of tuples as these polynomials:

* $d(x, y) = (1 + y \cdot d_1 - x)(2 + y \cdot d_2 - x)(3 + y \cdot d_3 - x)(4 + y \cdot d_4 - x)$
* $e(x, y) = (2 + y \cdot e_1 - x)(3 + y \cdot e_2 - x)(1 + y \cdot e_3 - x)(4 + y \cdot e_4 - x)$

And then reduce both polynomials to a single value by sampling random values for $x$ and $y$. Which gives us:

* $(1 + \beta \cdot d_1 - \gamma)(2 + \beta \cdot d_2 - \gamma)(3 + \beta \cdot d_3 - \gamma)(4 + \beta \cdot d_4 - \gamma)$
* $(2 + \beta \cdot e_1 - \gamma)(3 + \beta \cdot e_2 - \gamma)(1 + \beta \cdot e_3 - \gamma)(4 + \beta \cdot e_4 - \gamma)$

If these two values match, with overwhelming probability we have that the two polynomials match and thus our permutation of $E$ matches $D$.
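
Here's the same thing as a toy sketch (made-up values and permutation), where `sigma[i]` is the index assigned to `e[i]`:

```python
# Encode (index, value) tuples with a random beta, then compare the products of the
# encodings at a random gamma: they match only if d is the claimed permutation of e.
import random

P = 2**61 - 1
d = [15, 25, 35, 45]
sigma = [2, 3, 1, 4]        # e[0] must equal d_2, e[1] must equal d_3, e[2] must equal d_1
e = [25, 35, 15, 45]

beta, gamma = random.randrange(P), random.randrange(P)
lhs = rhs = 1
for i, v in enumerate(d, start=1):
    lhs = lhs * ((i + beta * v - gamma) % P) % P
for s, v in zip(sigma, e):
    rhs = rhs * ((s + beta * v - gamma) % P) % P
assert lhs == rhs  # holds with overwhelming probability only for the right permutation
```
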
## Wiring within a single execution trace column

Let's now see how we can use the (optimized) checks we've learned previously in plonk. We will first learn how to wire cells of a single execution trace column, and in the next section we will expand this to three columns (as vanilla Plonk uses three columns).

> Take some moment to think about how can we use the previous stuff.

The answer is to see the execution trace as your list $E$, and then see if it is equal to a fixed permutation of it ($D$). Note that this permutation is decided when you write your circuit, and precomputed into the verifier key in Plonk.

Remember that the formula we're trying to check is the following for some random $\beta$ and $\gamma$, and for some permutation function $\sigma$ that we defined:

$$
\prod_{i=1} (i + \beta \cdot d[i] - \gamma) = \prod_{i=1} (\sigma(i) + \beta \cdot e[i] - \gamma)
$$

### Trick 5: write a circuit for the permutation check

To enforce the previous check, we will write a mini-circuit (yes, an actual circuit!) which will progressively accumulate the result of dividing the left-hand side by the right-hand side. This circuit only requires one variable/register we'll call $z$ (and so it will add a new column $z$ in our execution trace) which will start with the initial value 1 and will end with the following value:

$$
\prod_{i=1} \frac{i+\beta \cdot d[i] - \gamma}{\sigma(i) + \beta \cdot e[i] - \gamma} = 1
$$

Let's rewrite it using only the first wire/column $a$ of Plonk, and using our generator $\omega$ as index in our tuples (because this is how we handily index things in Plonk):

$$
\prod_{i=1} \frac{\omega^i+\beta \cdot a[i] - \gamma}{\sigma(\omega^i) + \beta \cdot a[i] - \gamma} = 1
$$

We can then constrain the last value to be equal to 1, which will enforce that the two polynomials encoding our list of value and its permutation are equal (with overwhelming probability).

In plonk, a gate can only access variables/registers from the same row. So we will use the following extra gate (reordering the previous equation, as we can't divide in a circuit) throughout the circuit:

$$
z[i+1] \cdot (\sigma(i) + \beta \cdot a[i] - \gamma) = z[i] \cdot (i + \beta \cdot a[i] - \gamma)
$$
Now, how do we encode this gate in the circuit? The astute eye will have noticed that we are using a cell of the next row ($z[i+1]$) which we haven't done in Plonk so far.
### Trick 6: you're in a multiplicative subgroup, remember?

Enforcing things across rows is actually possible in plonk because we encode our polynomials in a multiplicative subgroup of our field! Due to this, we can reach for the next value(s) by multiplying an evaluation point with the subgroup's generator.

That is, values are encoded in our polynomials at evaluation points $\omega, \omega^2, \omega^3, \cdots$, and so multiplying an evaluation point by $\omega$ (the generator) brings you to the next cell in an execution trace.

As such, the verifier will later try to enforce that the following identity checks out in the multiplicative subgroup:

$$
z(x \cdot \omega) \cdot (\sigma(x) + \beta \cdot a(x) - \gamma) = z(x) \cdot (x + \beta \cdot a(x) - \gamma)
$$

> Note: This concept was generalized in turboplonk, and is used extensively in the AIR arithmetization (used by STARKs). This is also the reason why in Plonk we have to evaluate the $z$ polynomial at $\zeta \omega$.

There will also be two additional gates: one that checks that the initial value is 1, and one that checks that the last value is 1, both applied only to their respective rows. One trick that Plonk uses is that the last value is actually obtained in the last row: as the index `last_value + 1` wraps around to `0` in our multiplicative subgroup, we have that $z[\text{last\_value} + 1] = z[0]$ is constrained automatically. As such, checking that $z[0] = 1$ is enough.
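
To see how this fits together, here's a toy sketch of the accumulator column (a single column $a$, a made-up wiring, and a small subgroup of the Goldilocks field, all chosen purely for illustration):

```python
# The accumulator z is built row by row as a running ratio; the transition gate relates
# z at one row to z at the next; and since indices wrap around in the multiplicative
# subgroup, constraining z[0] = 1 also constrains the value "after" the last row.
import random

P = 2**64 - 2**32 + 1                       # Goldilocks prime
omega = 2**48                               # 2^96 = -1 mod P, so 2^48 has order 4
n = 4
H = [pow(omega, i, P) for i in range(n)]    # evaluation points indexing the rows

a = [7, 9, 7, 3]                            # execution trace column
sigma = [H[2], H[1], H[0], H[3]]            # copy constraint: cell 0 <-> cell 2 (a[0] == a[2])

beta, gamma = random.randrange(P), random.randrange(P)
z = [1]
for i in range(n):
    num = (H[i] + beta * a[i] - gamma) % P
    den = (sigma[i] + beta * a[i] - gamma) % P
    z.append(z[-1] * num * pow(den, -1, P) % P)

# permutation gate with the division moved to the other side:
# z[i+1] * (sigma(i) + beta*a[i] - gamma) == z[i] * (i + beta*a[i] - gamma)
for i in range(n):
    assert z[i + 1] * ((sigma[i] + beta * a[i] - gamma) % P) % P \
        == z[i] * ((H[i] + beta * a[i] - gamma) % P) % P

assert z[0] == 1 and z[n] == 1              # the row after the last one wraps back to row 0
```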

You can see these two gates added to the vanilla plonk gate in the computation of the quotient polynomial $t$ in plonk. Take a look at this screenshot of round 3 of the protocol, and squint really hard to ignore the division by $Z_H(X)$, the powers of $\alpha$ being used to aggregate the different gate checks, and the fact that $b$ and $c$ (the other wires/columns) are used:

![round 3](/upload/Screenshot_2024-03-11_at_1.57_.01 PM_.png)

The first line in the computation of $t$ is the vanilla plonk gate (that allows you to do multiplication and addition); the last line constrains that the first value of $z$ is $1$; and the other lines encode the permutation gate as I described (again, if you ignore the terms involving $b$ and $c$).

### Trick 7: create your execution trace in steps

There's something worthy of note: the extra execution trace column $z$ contains values that use other execution trace columns. For this reason, the other execution trace columns must be fixed BEFORE anything is done with the permutation column $z$.

In Plonk, this is done by waiting for the prover to send commitments of $a$, $b$, and $c$ to the verifier, before producing the random challenges $\beta$ and $\gamma$ that will be used by the prover to produce the values of $z$.

## Wiring multiple execution trace columns

The previous check only works within the cells of a single execution trace column; how does Plonk generalize this to several execution trace columns?

Remember: we indexed our first execution trace column with the values of our circuit domain (that multiplicative subgroup); we simply have to find a way to index the other columns with distinct values.
### Trick 8: use cosets

A coset is simply a set that is the same size as another set, but that is completely disjoint from that set. Handily, a coset is also very easy to compute if you know a subgroup: just multiply all of its elements by some element $k$.

Since we want a similar-but-different set from the elements of our multiplicative subgroup, we can use cosets!

Plonk produces the values $k_1$ and $k_2$ (which can be the values $2$ and $3$, for example), which when multiplied with the values of our multiplicative subgroup ($\{\omega, \omega^2, \omega^3, \cdots\}$) produces a different set of the same size. It's not a subgroup anymore, but who cares!
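
As a quick sanity check (reusing a small subgroup of the Goldilocks field, with $k_1 = 2$ and $k_2 = 3$ as in the example above), you can verify that the two cosets have the same size and are disjoint from the subgroup and from each other:

```python
# H is a multiplicative subgroup of size 4; 2*H and 3*H are cosets of the same size,
# disjoint from H and from each other (because neither 2, 3, nor 3/2 lies in H).
P = 2**64 - 2**32 + 1
omega, n = 2**48, 4
H = {pow(omega, i, P) for i in range(n)}
k1H = {2 * h % P for h in H}
k2H = {3 * h % P for h in H}
assert len(H) == len(k1H) == len(k2H) == n
assert H.isdisjoint(k1H) and H.isdisjoint(k2H) and k1H.isdisjoint(k2H)
```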

We now have to create three different permutations, one for each set, and each permutation can point to the index of any of the sets.


]]>
Are you into finding bugs and learning ZK? Here's a challenge for you David Wong Mon, 26 Feb 2024 20:41:49 +0100 http://www.cryptologie.net/article/609/are-you-into-finding-bugs-and-learning-zk-heres-a-challenge-for-you/ http://www.cryptologie.net/article/609/are-you-into-finding-bugs-and-learning-zk-heres-a-challenge-for-you/#comments
It was a lot of fun and I hope that some people are inspired to try to break it :)

We're using the challenge to hire people who are interested in doing security work in the ZK space, so if that interests you, or if you purely want a new challenge, try it out here: https://github.com/zksecurity/zkBank

And of course, since this is an active [wargame](https://en.wikipedia.org/wiki/Wargame_(hacking)) please do not release your own solution or write up! ]]>
Want to learn more about zkBitcoin? I've made some videos David Wong Sun, 28 Jan 2024 05:59:45 +0100 http://www.cryptologie.net/article/608/want-to-learn-more-about-zkbitcoin-ive-made-some-videos/ http://www.cryptologie.net/article/608/want-to-learn-more-about-zkbitcoin-ive-made-some-videos/#comments
So as requested, I made a number of videos to explain what [zkBitcoin](https://github.com/sigma0-xyz/zkbitcoin) is.

<iframe width="560" height="315" src="https://www.youtube.com/embed/2a0UYT5nbEA?si=KqPWKGwcvJp2244P" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<iframe width="560" height="315" src="https://www.youtube.com/embed/3Y-Z4nZB8FE?si=emFf13-oloyuBWv0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<iframe width="560" height="315" src="https://www.youtube.com/embed/gSNrRPauIEA?si=M8gD7pCuw1fgtH6v" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> ]]>
Zero-knowledge proofs in stateful applications David Wong Wed, 24 Jan 2024 01:36:13 +0100 http://www.cryptologie.net/article/607/zero-knowledge-proofs-in-stateful-applications/ http://www.cryptologie.net/article/607/zero-knowledge-proofs-in-stateful-applications/#comments
> Note: circuits are actually not _strictly_ pure, as they are _non-deterministic_. For example, you might be able to use out-of-circuit randomness in your circuit.

So when mutation of persistent state is needed, you need to provide the previous state as input, and return the new state as output. This not only produces a constraint on the previous state (time of read VS time of write issues), but it also limits the size of your state.

[I've talked about the first issue here](https://cryptologie.net/article/604/the-zk-update-conflict-issue-in-multi-user-applications/):

> The problem of update conflicts comes when one designs a protocol in which multiple participants decide to update the same value, and do so using local execution. That is, instead of having a central service that executes some update logic sequentially, participants can submit the result of their updates in parallel. In this situation, each participant locally executes the logic on the current state assuming that it will not have changed. But this doesn't work as soon as someone else updates the shared value. In practice, someone's update will invalidate someone else's.

The second issue of state size is usually solved with Merkle trees, which allow you to compress your state in a verifiable way, and allow you to access or update the state without having to decompress the ENTIRE state.
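
As a minimal sketch of that idea (not any particular system), the circuit only needs a root, a leaf, and an authentication path; updating a leaf just recomputes the root along the same path:

```python
# The full state never needs to be materialized in-circuit: a read checks a leaf against
# the committed root, and a write recomputes the new root from the new leaf and the path.
import hashlib

def H(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def root_from_path(leaf: bytes, index: int, path) -> bytes:
    node = hashlib.sha256(leaf).digest()
    for sibling in path:
        node = H(node, sibling) if index % 2 == 0 else H(sibling, node)
        index //= 2
    return node

def update(old_root, leaf_old, leaf_new, index, path):
    assert root_from_path(leaf_old, index, path) == old_root  # read check against old state
    return root_from_path(leaf_new, index, path)              # new state commitment

# toy 4-leaf tree
hashes = [hashlib.sha256(x).digest() for x in [b"a", b"b", b"c", b"d"]]
l01, l23 = H(hashes[0], hashes[1]), H(hashes[2], hashes[3])
root = H(l01, l23)
# update leaf 2 (b"c" -> b"C") using only its authentication path [hash(d), l01]
new_root = update(root, b"c", b"C", 2, [hashes[3], l01])
```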

That's all. ]]>
Verifying zero-knowledge proofs on Bitcoin? David Wong Sun, 21 Jan 2024 05:46:42 +0100 http://www.cryptologie.net/article/606/verifying-zero-knowledge-proofs-on-bitcoin/ http://www.cryptologie.net/article/606/verifying-zero-knowledge-proofs-on-bitcoin/#comments
A few months ago [Ivan](https://twitter.com/imikushin) told me "how cool would it be if we could verify zero-knowledge proofs on Bitcoin?" A week later, we had a prototype of the best solution we could come up with: a multi-party computation to manage a Bitcoin wallet, and a committee willing to unlock funds only in the presence of valid zero-knowledge proofs. A few iterations later and we had something a bit cooler: stateful apps with states that can be tracked on-chain, and committee members that don't need to know anything about Bitcoin. Someone might put it this way: a Bitcoin L2 with minimal trust assumption of a "canonical" Bitcoin blockchain.

From what we understand, a better way to verify zero-knowledge proofs on Bitcoin is not going to happen, and this is the best we can have. And we built it! And we're running it in testnet. [Try it here](https://github.com/sigma0-xyz/zkbitcoin)!

]]>
What's out there for ECDSA threshold signatures David Wong Sat, 13 Jan 2024 23:26:06 +0100 http://www.cryptologie.net/article/605/whats-out-there-for-ecdsa-threshold-signatures/ http://www.cryptologie.net/article/605/whats-out-there-for-ecdsa-threshold-signatures/#comments
The **threshold** part means that not every participant who has a share has to participate. If there are $N$ participants, then only $t < N$ of them have to participate for the protocol to succeed. The $t$ and $N$ depend on the protocol you want to design, on the overhead you're willing to eat, the security you want to attain, etc.

Threshold protocols are not just for signing, they're everywhere. The NIST has a [Multi-Party Threshold Cryptography competition](https://csrc.nist.gov/projects/threshold-cryptography), in which you can see proposals for threshold signing, but also threshold decryption, threshold key exchanges, and others.

This post is about threshold signatures for ECDSA specifically, as it is the most commonly used signature scheme and so has attracted a number of researchers.
In addition, I'm only going to talk about the history of it, because I haven't written an actual explainer on how these work, and because the history of threshold signing for ECDSA is really messy and confusing, and understanding what constructions exist out there is near impossible due to naming collisions and the number of papers released without proper nicknames (unlike [FROST](https://eprint.iacr.org/2020/852), which is the leading threshold signing algorithm for schnorr signatures).

So here we are, the main line of work for ECDSA threshold signatures goes something like this, and seems to mainly involve two Gs (Gennaro and Goldfeder):

1. **[GG18](https://eprint.iacr.org/2019/114.pdf)**. This paper is more officially called "Fast Multiparty Threshold ECDSA with Fast Trustless Setup" and improves on [BGG: Using level-1 homomorphic encryption to improve threshold DSA signatures for bitcoin wallet security (2017)](https://www.cs.haifa.ac.il/~orrd/LC17/paper72.pdf) and [GGN: Threshold-optimal dsa/ecdsa signatures and an application to bitcoin wallet security (2016)]().
2. **GG19**. This has the same name as GG18, but fixes some of the issues in GG18. I think this is because GG18 was published in a journal, so they couldn't update it. But GG18 on eprint is the updated GG19 one. (Yet few people refer to it as GG19.) It fixes a number of bugs, including the ones exploited by the [Alpha-Rays attack](https://hackmd.io/@omershlo/Sk_8JT-qt) and those described in [A note about the security of GG18](https://info.fireblocks.com/hubfs/A_Note_on_the_Security_of_GG.pdf).
3. **[GG20](https://eprint.iacr.org/2020/540.pdf)**. This paper is officially called "One Round Threshold ECDSA with Identifiable Abort" and builds on top of GG18/GG19 to introduce the ability to identify who caused the abort. (In other words, who messed up if something was messed up during the multi-party computation.) Note that there are still some bugs in this paper.
4. **[CGGMP21](https://eprint.iacr.org/2021/060)**. This one combines GG20 with [CMP20](https://eprint.iacr.org/2020/492) (another work on threshold signatures). This is supposed to be the latest work in this line of work and is probably the only version that has no known issues.

Note that there's also another line of work that happened in parallel from another team, and which is similar to GG18 except that they have different bugs: [Lindell-Nof: Fast secure multiparty ecdsa with practical distributed key generation and applications to cryptocurrency custody (2018)](https://eprint.iacr.org/2018/987).

PS: thanks to [Rosario Gennaro](https://twitter.com/rgennaro67) for help figuring this out :) ]]>
The ZK update conflict issue in multi-user applications David Wong Thu, 11 Jan 2024 20:53:13 +0100 http://www.cryptologie.net/article/604/the-zk-update-conflict-issue-in-multi-user-applications/ http://www.cryptologie.net/article/604/the-zk-update-conflict-issue-in-multi-user-applications/#comments
Let's take a step back. Zero-knowledge proofs allow you to prove the result of the execution of some logic. Like signatures attached to data you receive, ZK proofs can be attached to a computation result. This means that with ZK, internet protocols can be rethought and redesigned. If execution of the protocol logic had to happen somewhere trusted, now some of it can be moved around and delegated to untrusted places, or for privacy-reasons some of it can be moved to places where private data should remain.

How do we design protocols using ZK? It's easy: assume that when a participant of your protocol computes something, they will do it honestly. Then, when you implement the protocol, use ZK proofs to enforce that they behave as intended.

The problem of update conflicts comes when one designs a protocol in which multiple participants decide to update the same value, and do so using local execution. That is, instead of having a central service that executes some update logic sequentially, participants can submit the result of their updates in parallel. In this situation, each participant locally executes the logic on the current state assuming that it will not have changed. But this doesn't work as soon as someone else updates the shared value. In practice, someone's update will invalidate someone else's.

This issue is not just a _ZK_ issue; if you know anything about databases, you know that conflict resolution has been an issue for a very long time. For example, in distributed databases with more than one _writer_, conflicts can happen as two nodes attempt to update the same value at the same time. Conflicts can also happen in the same way in applications where multiple users want to update the same data, think Google Docs.

The solutions as far as I know can be declined in the following categories:

1. **Resolve conflicts automatically**. The simplest example is the [Thomas write rule](https://en.wikipedia.org/wiki/Thomas_write_rule), which discards any outdated update (see the sketch after this list). In situations where discarding updates is unacceptable, more involved algorithms can take over. For example, Google Docs uses an algorithm called [Operational Transformation](https://drive.googleblog.com/2010/09/whats-different-about-new-google-docs_22.html) to figure out how to merge two independent updates.
2. **Ask the user for help if needed**. For example, the `git merge` command that can sometimes ask for your help to resolve conflicts.
3. **Refuse to accept any conflicts**. This often means that the application is written in such a way that conflicts can't arise, and in distributed databases this always means that there can only be a single node that can write (with all other nodes being read-only). Although applications can also decide to simply deny updates that lead to conflicts, which would lead to poor performance in concurrency-heavy scenarios, as well as poor user experience.
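
Here's a minimal sketch of the Thomas write rule mentioned in the first item (made-up record format): a write is simply discarded if it carries an older timestamp than the stored one.

```python
# Each stored value keeps the timestamp of its last write; late (outdated) writes are dropped.
def apply_write(store: dict, key, value, timestamp) -> bool:
    current = store.get(key)
    if current is not None and current[1] > timestamp:
        return False                        # outdated update: discard it
    store[key] = (value, timestamp)
    return True

db = {}
apply_write(db, "x", "v1", timestamp=2)
apply_write(db, "x", "v0", timestamp=1)     # arrives late, gets discarded
assert db["x"] == ("v1", 2)
```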

As one can see, the barrier between application and database doesn't matter too much, besides the fact that a database has poor ways of prompting a user: when conflict resolution must be done by a user it is generally the role of the application to reach out.

What about ZK though? From what I've seen, the last "avoid conflicts" solution is always chosen. Perhaps this is because my skewed view has only been within the blockchain world, which can't afford to play conflict resolution with $$$.

For example, simpler ZK protocols like Zcash will often massage their protocol such that proofs are only computed on immutable data. For instance, arguments of a function cannot be the latest root of a merkle tree (as it might get updated before we can publish the result of running the function) but it can easily be the root of a merkle tree that was seen previously (we're using a previous state, not the latest state, that's fine).

Another technique is to extract the parts of updates that occur on a shared data structure, and sequence them before running them. For example, the set of nullifiers in zcash is updated outside of a ZK execution by the network, according to some logic that only gets executed sequentially. More complicated ZK platforms like Aleo and Mina do that as well. In Aleo's case, the user can split the logic of its smart contracts by choosing what can be executed locally (provided a proof) and what has to be executed serially by the network (Ethereum-style). In Mina's case, updates that have the potential to lead to conflicts are queued up and later on a single user can decide (if authorized) to process the queued updates serially but in ZK. ]]>
Cairo's public memory David Wong Tue, 21 Nov 2023 17:53:13 +0100 http://www.cryptologie.net/article/603/cairos-public-memory/ http://www.cryptologie.net/article/603/cairos-public-memory/#comments
If you'd rather watch a 25min video of the article, here it is:

<iframe width="560" height="315" src="https://www.youtube.com/embed/VkGH3U4L2n4?si=dRrvpDs4wt14UVwJ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

The AIR arithmetization is limited in how it can handle public inputs and outputs, as it only offers boundary constraints.
These boundary constraints can only be used on a few rows, otherwise they're expensive to compute for the verifier.
(A verifier would have to compute $\prod_{i \in S} (x - g^i)$ for some given $x$, so we want to keep $|S|$ small.)

For this reason, Cairo introduces another way to get the program and its public inputs/outputs in: **public memory**. This public memory is strongly related to the **memory** vector of Cairo, which a program can read and write to.

In this article we'll talk about both. This is to accompany [this video](https://www.youtube.com/watch?v=VkGH3U4L2n4) and section [9.7 of the Cairo paper](https://eprint.iacr.org/2021/1063).

## Cairo's memory

Cairo's memory layout is a single vector that is indexed (each row/entry is assigned an address, starting from 1) and segmented. For example, the first $l$ rows are reserved for the program itself, some other rows are reserved for the program to write and read cells, etc.

Cairo uses a very natural "constraint-led" approach to memory, by making it **write-once** instead of read-write. That is, all accesses to the same address should yield the same value. Thus, at some point we will need a constraint saying that for any two accesses $(a_1, v_1)$ and $(a_2, v_2)$ such that $a_1 = a_2$, we have $v_1 = v_2$.

## Accesses are part of the execution trace

At the beginning of our STARK, we saw in [How STARKs work if you don't care about FRI](https://cryptologie.net/article/601/how-starks-work-if-you-dont-care-about-fri/) that the prover encodes, commits, and sends the columns of the execution trace to the verifier.

The memory, or memory accesses rather (as we will see), are columns of the execution trace as well.

The first two columns introduced in the paper are called $L_1.a$ and $L_1.v$. Each row in these columns represents an access made to the address $a$ in memory, with value $v$. As said previously, we don't care if that access is a write or a read, as the difference between them is blurred (any read for a specific address could be _the_ write).

These columns can be used as part of the Cairo CPU, but they don't really prevent the prover from lying about the memory accesses:

1. First, we haven't proven that all accesses to the same addresses $a_i$ always return the same value $v_i$.
2. Second, we haven't proven that the memory contains fixed values in specific addresses. For example, it should contain the program itself in the first $l$ cells.

Let's tackle the first question first, and we will address the second one later.

## Another list to help

In order to prove the consistency of the two columns in the $L_1$ part of the execution trace, Cairo adds two more columns to the execution trace: $L_2.a'$ and $L_2.v'$. These two columns contain essentially the same things as the $L_1$ columns, except that this time the accesses are sorted by address.

> One might wonder at this point, why can't L1 memory accesses be sorted? Because these accesses represent the actual memory accesses of the program during runtime, row by row (or step by step). The program might read the next instruction in some address, then jump and read the next instruction at some other address, etc. We can't force the accesses to be sorted at this point.

We will have to prove (later) that $L_1$ and $L_2$ represent the same accesses (up to some permutation we don't care about).

So let's assume for now that $L_2$ correctly contains the same accesses as $L_1$ but sorted, what can we check on $L_2$?

The first thing we want to check is that it is indeed sorted. Or in other words:

* each access is on the same address as previous: $a'_{i+1} = a'_i $
* or on the next address: $a'_{i+1} = a'_i + 1$

For this, Cairo adds a **continuity constraint** to its AIR:

![Screenshot 2023-11-21 at 10.55.07 AM](https://hackmd.io/_uploads/S1Yz6u546.png)

The second thing we want to check is that accesses to the same addresses yield the same values. Now that things are sorted, it's easy to check this! We just need to check that:

* either the values are the same: $v'_{i+1} = v'_i$
* or the address being accessed was bumped so it's fine to have different values: $a'_{i+1} = a'_i + 1$

For this, Cairo adds a **single-valued constraint** to its AIR:

![Screenshot 2023-11-21 at 10.56.11 AM](https://hackmd.io/_uploads/HJIITd5NT.png)

And that's it! We now have proven that the $L_2$ columns represent a correct list of memory accesses over the whole memory: accesses are sorted, and accesses to the same address always yield the same value (although we didn't check that the first access was at address $1$; I'm not sure if Cairo checks that somewhere).
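
As a toy sketch, here's one way to write those two checks as polynomial constraints (the exact form in the Cairo paper may differ slightly) and verify them on a small sorted access list:

```python
# Continuity: between consecutive rows the address moves by 0 or 1.
# Single-valued: the value may only change when the address is bumped.
# In a real AIR these are field equations over the whole trace; plain integers suffice here.
L2 = [(1, 10), (1, 10), (2, 42), (3, 7), (3, 7)]   # (address, value) pairs, sorted by address

for (a0, v0), (a1, v1) in zip(L2, L2[1:]):
    assert (a1 - a0) * (a1 - a0 - 1) == 0          # continuity constraint
    assert (v1 - v0) * (a1 - a0 - 1) == 0          # single-valued constraint
```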

That is, as long as $L_2$ contains the same list of accesses as $L_1$.

## A multiset check between $L_1$ and $L_2$

To ensure that two lists of elements match, up to some permutation (meaning we don't care how they were reordered), we can use the same permutation argument that Plonk uses (except that Plonk fixes the permutation).

The check we want to perform is the following:

$$
\{ (a_i, v_i) \}_i = \{ (a'_i, v'_i) \}_i
$$

But we can't check tuples like that, so let's get a random value $\alpha$ from the verifier and encode tuples as linear combinations:

$$
\{ a_i + \alpha \cdot v_i \}_i = \{ a'_i + \alpha \cdot v'_i \}_i
$$

Now, let's observe that instead of checking that these two sets match, we can just check that two polynomials have the same roots (where the roots have been encoded to be the elements in our lists):

$$
\prod_i [X - (a_i + \alpha \cdot v_i)] = \prod_i [X - (a'_i + \alpha \cdot v'_i)]
$$

Which is the same as checking that

$$
\frac{\prod_i [X - (a_i + \alpha \cdot v_i)]}{\prod_i [X - (a'_i + \alpha \cdot v'_i)]} = 1
$$

Finally, we observe that we can use Schwartz-Zippel to reduce this claim to evaluating the LHS at a random verifier point $z$. If the following is true at the random point $z$ then with high probability it is true in general:

$$
\frac{\prod_i [z - (a_i + \alpha \cdot v_i)]}{\prod_i [z - (a'_i + \alpha \cdot v'_i)]} = 1
$$

The next question to answer is, how do we check this thing in our STARK?

## Creating a circuit for the multiset check

Recall that our AIR allows us to write a circuit using successive pairs of rows in the columns of our execution trace.

That is, while we can't access all the $a_i$ and $a'_i$ and $v_i$ and $v'_i$ in one shot, we can access them row by row.

So the idea is to write a circuit that produces the previous section's ratio row by row. To do that, we introduce a new column $p$ in our execution trace which will help us keep track of the ratio as we produce it.

$$
p_i = p_{i-1} \cdot \frac{z - (a_i + \alpha \cdot v_i)}{z - (a'_i + \alpha \cdot v'_i)}
$$

This is how you compute that $p$ column of the execution trace as the prover.

Note that on the verifier side, as we can't divide, we will have to create the circuit constraint by moving the denominator to the right-hand side:

$$
p(g \cdot x) \cdot [z - (a'(x) + \alpha \cdot v'(x))] = p(x) \cdot [z - (a(x) + \alpha \cdot v(x))]
$$

There are two additional (boundary) constraints that the verifier needs to impose to ensure that the multiset check is coherent:

* the initial value $p_0$ should be computed correctly ($p_0 = \frac{z - (a_0 + \alpha \cdot v_0)}{z - (a'_0 + \alpha \cdot v'_0)}$)
* the final value $p_{-1}$ (i.e. the last entry of the $p$ column) should be $1$ (see the sketch below)
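
Here's a toy sketch of that accumulator (made-up field and accesses): if $L_1$ and $L_2$ contain the same multiset of accesses, the last entry of $p$ ends up being $1$.

```python
# The p column accumulates the ratio row by row; the boundary constraint checks its last entry.
import random

P = 2**64 - 2**32 + 1
L1 = [(2, 42), (1, 10), (1, 10), (2, 42)]          # accesses in execution order
L2 = sorted(L1)                                     # same accesses, sorted by address

alpha, z = random.randrange(P), random.randrange(P)
p = []
acc = 1
for (a, v), (a2, v2) in zip(L1, L2):
    num = (z - (a + alpha * v)) % P
    den = (z - (a2 + alpha * v2)) % P
    acc = acc * num * pow(den, -1, P) % P
    p.append(acc)

assert p[-1] == 1                                   # boundary constraint on the last row
```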

Importantly, let me note that this new column $p$ of the execution trace cannot be created, encoded to a polynomial, committed, and sent to the verifier in the same round as other columns of the execution trace. This is because it makes uses of two verifier challenges $z$ and $\alpha$ which have to be revealed _after_ the other columns of the execution trace have been sent to the verifier.

> Note: a way to understand this article is that the prover is now building the execution trace interactively with the help of the verifier, and parts of the circuits (here a permutation circuit) will need to use these columns of the execution trace that are built at different stages of the proof.

## Inserting the public memory in the memory

Now is time to address the second half of the problem we stated earlier:

> Second, we haven't proven that the memory contains fixed values in specific addresses. For example, it should contain the program itself in the first $l$ cells.

To do this, the first $l$ accesses are replaced with accesses to $(0,0)$ in $L_1$. $L_2$, on the other hand, uses accesses to the first part of the memory and retrieves values from the public memory $m^\*$ (e.g. $(1, m^\*[0]), (2, m^\*[1]), \cdots$).

This means two things:

1. the numerator of $p$ will contain $z - (0 + \alpha \cdot 0) = z$ in the first $l$ iterations (so $z^l$). Furthermore, these will not be cancelled by any values in the denominator (as $L_2$ is supposedly using actual accesses to the public memory)
2. the denominator of $p$ will contain $\prod_{i \in [[0, l]]} [z - (a'_i + \alpha \cdot m^\*[i])]$, and these values won't be cancelled by values in the numerator either

As such, the final value of the accumulator should look like this if the prover followed our directions:

$$
\frac{z^l}{\prod_{i \in [[0, l]]} [z - (a'_i + \alpha \cdot m^\*[i])]}
$$

which we can enforce (as the verifier) with a boundary constraint.

Section 9.8 of the Cairo paper writes exactly that:

![Screenshot 2023-11-21 at 11.31.39 AM](https://hackmd.io/_uploads/HkUiHYcV6.png)

]]>