David Wong | Cryptologie | Markdown http://www.cryptologie.net/ About my studies in Cryptography. en-us Fri, 31 Mar 2023 17:46:10 +0200 Paillier's additively homomorphic cryptosystem David Wong Fri, 31 Mar 2023 17:46:10 +0200 http://www.cryptologie.net/article/591/pailliers-additively-homomorphic-cryptosystem/ http://www.cryptologie.net/article/591/pailliers-additively-homomorphic-cryptosystem/#comments
Homomorphic encryption, if you haven't heard of it, is the ability to operate on the ciphertext without having to decrypt it. If that still doesn't ring a bell, check [my old blogpost](https://www.cryptologie.net/article/278/homomorphic-encryption-part-1/) on the subject. In this post I will just explain the intuition behind the scheme, for a less formal overview check [Lange's excellent video](https://www.youtube.com/watch?v=bhebAMgRZMs).

Paillier's scheme is only homomorphic for addition, which is still useful enough that it's been used in different kinds of cryptographic protocols. For example, [cryptdb](https://www.microsoft.com/en-us/research/video/cryptdb-processing-queries-on-an-encrypted-database/#:~:text=CryptDB%20is%20a%20system%20that,SQL%20queries%20over%20encrypted%20data.) was using it to allow some types of updates on encrypted database rows. More recently, threshold signature schemes have been using Paillier's scheme as well.

## The actual algorithm

As with any asymmetric encryption scheme, you have the good ol' key gen, encryption, and decryption algorithms:

**Key generation**. Same as with RSA, you end up with a public modulus $N = pq$ where $p$ and $q$ are two large primes.

**Encryption**. This is where it gets weird: encryption looks more like a [Pedersen commitment](https://cryptologie.net/article/528/what-is-an-inner-product-argument-part-1/) (which does not allow decryption). To encrypt, sample a random $r$ and produce the ciphertext as:

$$(N+1)^m \cdot r^N \mod{N^2}$$

where $m$ is the message to be encrypted. My thought at this point was "_WOOT. A message in the exponent? How will we decrypt?_"
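
Notice that this also shows why the scheme is additively homomorphic: multiplying two ciphertexts multiplies the $(N+1)$ parts and the random parts separately, so the messages add up in the exponent:

$$\left((N+1)^{m_1} \cdot r_1^N\right) \cdot \left((N+1)^{m_2} \cdot r_2^N\right) = (N+1)^{m_1 + m_2} \cdot (r_1 r_2)^N \mod{N^2}$$

which is a well-formed encryption of $m_1 + m_2$ (with randomness $r_1 r_2$).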

**Decryption**. Retrieve the message from the ciphertext $c$ as

$$\frac{(c^{\varphi(N)} \mod{N^2}) - 1}{N} \cdot \varphi(N)^{-1} \mod{N}$$

Wait, what? How is this recovering the message which is currently the discrete logarithm of $(N+1)^m$?

## How decryption works

The **trick** is in expanding this exponentiation (using the [Binomial expansion](https://www.cuemath.com/binomial-expansion-formula/)).

> The relevant variant of the Binomial formula is the following:
>
> $$(1+x)^n = \binom{n}{0}x^0 + \binom{n}{1}x^1 + \cdots + \binom{n}{n} x^n$$
>
> where $\binom{a}{b} = \frac{a!}{b!(a-b)!}$

So in our case, if we only look at $(N+1)^m$ we have:

$$
\begin{align}
(N+1)^m &= \binom{m}{0} + \binom{m}{1} N + \binom{m}{2} N^2 + \cdots + \binom{m}{m} N^m \\\\
&= \binom{m}{0} + \binom{m}{1} N \mod{N^2}\\\\
&= 1 + m \cdot N \mod{N^2}
\end{align}
$$

Tada! Our message is now back in plain sight, extracted from the exponent. Isn't this magical?

This is of course not _exactly_ what's happening. If you really want to see the real thing, read the next section, otherwise thanks for reading!

## The deets

If you understand that, you should be able to reverse the actual decryption:

$$
\begin{align}
c^{\varphi(N)} &= ((N+1)^m \cdot r^N)^{\varphi(N)}\\\\
&= (N+1)^{m\cdot\varphi(N)} \cdot r^{N\varphi(N)} \mod{N^2}
\end{align}
$$

It turns out that $r^{N\varphi(N)} = 1 \mod{N^2}$, because $N\varphi(N) = \varphi(N^2)$ is exactly the order of our multiplicative group modulo $N^2$ (so this is just Euler's theorem). You can visually think about why by looking at my fantastic drawing:

![](https://i.imgur.com/uUXpFXE.jpg)

On the other hand, for the remaining term we get something similar to what I talked about before:

$$
(N+1)^{m\varphi(N)} = (1 + mN)^{\varphi(N)} = 1 + m\varphi(N)N \mod{N^2}
$$

All that is left is to cancel the terms that are not interesting to us, and we get the message back.
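
Concretely, plugging the two previous results back into the decryption formula:

$$\frac{(1 + m\varphi(N)N) - 1}{N} \cdot \varphi(N)^{-1} = m\varphi(N) \cdot \varphi(N)^{-1} = m \mod{N}$$

And if you want to check the whole thing end to end, here is a quick sanity run with toy parameters. This is only a sketch for intuition (tiny primes, no checks, definitely not a secure implementation), and it assumes the Zarith library for big-integer arithmetic:

```
(* toy Paillier run: p = 5, q = 7, so N = 35 and N^2 = 1225 *)
let () =
  let p = Z.of_int 5 and q = Z.of_int 7 in
  let n = Z.mul p q in
  let nn = Z.mul n n in
  let phi = Z.mul (Z.pred p) (Z.pred q) in
  (* message and randomness (r must be coprime with N) *)
  let m = Z.of_int 12 and r = Z.of_int 2 in
  (* encryption: (N+1)^m * r^N mod N^2 *)
  let c = Z.(rem (mul (powm (succ n) m nn) (powm r n nn)) nn) in
  (* decryption: ((c^phi(N) mod N^2) - 1) / N * phi(N)^-1 mod N *)
  let u = Z.powm c phi nn in
  let recovered = Z.(rem (mul (div (sub u one) n) (invert phi n)) n) in
  Format.printf "m = %s, recovered = %s\n" (Z.to_string m) (Z.to_string recovered)
```

Running it should print the same value twice.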
]]>
Learn How to Code a zkApp Hello World With Me Using TypeScript David Wong Sat, 11 Mar 2023 14:01:45 +0100 http://www.cryptologie.net/article/590/learn-how-to-code-a-zkapp-hello-world-with-me-using-typescript/ http://www.cryptologie.net/article/590/learn-how-to-code-a-zkapp-hello-world-with-me-using-typescript/#comments
<iframe width="560" height="315" src="https://www.youtube.com/embed/prx2oNpy0vo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> ]]>
zkVMs are cool, but have you heard of zkCPUs? David Wong Thu, 02 Mar 2023 22:21:48 +0100 http://www.cryptologie.net/article/589/zkvms-are-cool-but-have-you-heard-of-zkcpus/ http://www.cryptologie.net/article/589/zkvms-are-cool-but-have-you-heard-of-zkcpus/#comments
I like to describe Ethereum as a gigantic computer floating in the sky. A computer everyone can use by installing their own applications there, and using each other's applications. It's **the world's computer**. I'm not the only one seeing it like this by the way. [Dfinity](https://dfinity.org/) called their Ethereum-like protocol the "**Internet computer**". Sounds pretty cool.

These internet computers are quite clunky at the moment though, forcing everyone (including you and me) to reexecute everything, to make sure that the computer hasn't made a mistake. But fear not, this is all about to stop! With the recent progress around [zero-knowledge proofs](https://cryptologie.net/article/575/whats-the-deal-with-zkapps/) (ZKPs), we're seeing a move to enhance these internet computers with **computational integrity**. Or in other words, only the computer has to compute, the others can trust the result due to cryptography!

A lot of the attempts that are reimplementing a "provable" internet computer have been making use of "zkVMs", an equivalent to the VMs of the previous era of blockchains but enhanced with zero-knowledge proofs. But what are these zkVMs? And is it the best we can come up with? In this post I will respond to both of these questions, and I will then introduce a new concept: the **zkCPU**.

## Let's talk about circuits

The lowest level of development for general-purpose zero-knowledge proof systems (the kind of zero-knowledge proof systems that allow you to write programs) is the **arithmetic circuit**.

Arithmetic circuits are an intermediate representation: they describe an actual circuit, but using math, so that we can prove its execution using a proof system.

In general, you can follow these steps to make use of a general-purpose ZKP system:

1. take a program you like
2. compile it into an (arithmetic) circuit
3. execute your circuit in the same way you'd execute your program (while recording the state of the memory at each step)
4. use your proof system to prove that execution

What does an arithmetic circuit really look like? Well, it looks like a circuit! It has gates, and wires, although its gates are not the typical circuit ones like AND, OR, NAND, XOR, etc. Instead, it has "arithmetic gates": a gate to multiply two inputs, and a gate to add two inputs.

![](https://i.imgur.com/6Icodir.png)

(taken from my book [Real-World Cryptography](https://www.amazon.com/Real-World-Cryptography-David-Wong/dp/1617296716/ref=sr_1_1?crid=1JB4MBN21A87R&keywords=real+world+cryptography&qid=1669830462&sprefix=real+world+cryptograph%2Caps%2C143&sr=8-1))
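
To make this a little more concrete, here is a minimal sketch of what an arithmetic circuit could look like as a data structure, in OCaml and purely for illustration (real proof systems work over a finite field and record constraints between wires, rather than just evaluating integers):

```
(* a toy arithmetic circuit: gates are additions or multiplications of wires *)
type circuit =
  | Input of string            (* a named input wire *)
  | Add of circuit * circuit   (* addition gate *)
  | Mul of circuit * circuit   (* multiplication gate *)

(* evaluate a circuit given values for the input wires *)
let rec eval env = function
  | Input name -> List.assoc name env
  | Add (l, r) -> eval env l + eval env r
  | Mul (l, r) -> eval env l * eval env r

(* example: the circuit computing x * x + y *)
let () =
  let c = Add (Mul (Input "x", Input "x"), Input "y") in
  Printf.printf "%d\n" (eval [ ("x", 3); ("y", 4) ] c)
```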

## Now we're ready to talk virtual machines

A virtual machine can usually be broken down into three components:

1. Some **memory** that can be used to store and read values (for example, if you add two values together, where do you read the two values from? and where do you store the result?)
2. A set of instructions that people can use to form programs.
3. Some logic that can interpret these instructions.

In other words, a VM looks very much like this:

```
for instruction in program {
    parse_instruction_and_do_something(instruction);
}
```

For example, using the instructions supported by the [Ethereum VM](https://www.evm.codes/?fork=merge) you can write the following program that makes use of a stack to add two numbers:

```
PUSH1 5 // will push 5 on the stack
PUSH1 1 // will push 1 on the stack
ADD // will remove the two values from the stack and push 6
POP // will remove 6 from the stack
```
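
If it helps, here's a minimal toy stack machine in OCaml that mirrors that little program (this is not the real EVM or its instruction set, just a sketch of the interpreter loop described above):

```
type instr = Push of int | Add | Pop

(* one step of the interpreter: apply a single instruction to the stack *)
let step stack = function
  | Push n -> n :: stack
  | Add -> (
      match stack with
      | a :: b :: rest -> (a + b) :: rest
      | _ -> failwith "stack underflow")
  | Pop -> ( match stack with _ :: rest -> rest | _ -> failwith "stack underflow")

(* the VM loop: fold over the program, starting from an empty stack *)
let run program = List.fold_left step [] program

let () =
  (* PUSH1 5; PUSH1 1; ADD; POP *)
  match run [ Push 5; Push 1; Add; Pop ] with
  | [] -> print_endline "stack is empty, as expected"
  | _ -> print_endline "unexpected leftover values"
```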

Most of the difference between a CPU and a VM comes from the **V**. A virtual machine is created in software, whereas the CPU is pure hardware and is the lowest level of abstraction.

## So what about zkVMs then?

From the outside, a zkVM is almost the same stuff as a VM: it executes programs and returns their outputs, but it also returns a cryptographic proof that one can verify.

Looking inside a zkVM reveals some arithmetic circuits, the same ones I've talked about previously! And these arithmetic circuits "simply" implement the VM loop I wrote above. I put "simply" in quotes because it's not that simple to implement in practice, but that's the basic idea behind zkVMs.

From a developer's perspective a zkVM isn't that different from a VM, they still have access to the same set of instructions, which like most VMs is usually just the base for a nicer higher-level language (which can compile down to instructions).

We're seeing a lot of zkVMs popping up these days. There are some that introduce completely new VMs, optimized for ZKPs. For example, we have [Cairo](https://starkware.co/cairo/) from Starkware and [Miden](https://github.com/0xPolygonMiden/miden-vm) from Polygon. On the other side, we also have zkVMs that aim at supporting known VMs, for example a number of projects seek to support Ethereum's VM (the EVM) --Vitalik wrote an article comparing all of them [here](https://vitalik.ca/general/2022/08/04/zkevm.html)-- or more interestingly real-world VMs like the RISC-V architecture (see [Risc0](https://www.risczero.com/) here).

## What if people could directly write circuits?

Supporting VMs is quite an attractive proposal, as developers can then write programs in higher-level abstractions without thinking about arithmetic circuits (and avoid bugs that can happen when writing for zero-knowledge proof systems directly).

But doing things at this level means that you're limited to what the zkVM does. You can only use their set of instructions, you only have access to the operations that they have chosen to accelerate for you, and so on. At a time when zk technology is only just flourishing, and low-level optimizations are of utmost importance, not having access to the silicon is a problem.

Some systems have taken a different approach: **they let users write their own circuits**. This way, users have much more freedom in what they can do. Developers can inspect the impact of each line of code they write, and work on the optimizations they need at the circuit level. Hell, they can write their own VMs if that's what they want. The sky's the limit.

This is what the [Halo2](https://github.com/zcash/halo2) library from Zcash has done so far, for example, allowing different projects to create their own zkCPUs. (To go full circle, some zkEVMs use Halo2.)

## Introducing the world's CPU

So what's the **world zkCPU**? Or what's the **Internet zkCPU** (as Dfinity would say)? It's [Mina](https://minaprotocol.com/).

Like a CPU, it is designed so that gates wired together form a circuit, and values can be stored and read from a number of registers (3 at the moment of this writing, 15 in the new [kimchi update](https://github.com/MinaProtocol/MIPs/pull/9/files?short_path=f52746c#diff-f52746c52b8ca1d3d8d8e38af8bba6b173407c910141561cf9fe4665dbbd7f17)). Some parts are accelerated, perhaps akin to the [ALU](https://en.wikipedia.org/wiki/Arithmetic_logic_unit) component of a real CPU, via what we call custom gates (and soon lookup tables).

Mina is currently a zkCPU with two circuits as its core logic:

* the transaction circuit
* the blockchain circuit

The transaction circuit is used to create blocks of transactions, whereas the blockchain circuit chains such blocks of transactions together to form the blockchain.

> Interestingly, both circuits are recursive circuits, which allows Mina to compress all of the proofs created into a single proof. This allows end users, like you and me, to verify the whole blockchain in a single proof of 22kB.

Soon, Mina will launch [zkApps](https://cryptologie.net/article/575/whats-the-deal-with-zkapps/), which will allow anyone to write their own circuits and attach them as modules to the Mina zkCPU.

User circuits will have access to the same zkCPU as Mina, which means that they can extend it in all kinds of ways. For example, internally a zkApp could use a different proof system allowing for different optimizations (like the Halo2 library), or it could implement a VM, or it could do something totally different.

I'm excited to see what people will develop in the future, and how all these zkApps will benefit from getting interoperability for free with other zkApps. Oh, and by the way, zkApps are currently turned on in [testnet](https://minaprotocol.com/blog/berkeley-testnet-alpha-participation-guide) if you can't wait to test this in mainnet.

![zkcpu](/upload/IMG_3D530937BA64-1.jpg)

EDIT: I know that zkFPGA would have been technically more correct, but nobody knows what an FPGA is
]]>
Dealing with the unknown David Wong Wed, 01 Mar 2023 22:02:01 +0100 http://www.cryptologie.net/article/588/dealing-with-the-unknown/ http://www.cryptologie.net/article/588/dealing-with-the-unknown/#comments
In any case, I was slow to make progress, but I did not give up. I have this personal theory that ALL successful projects and learnings are from not giving up and working long enough on something. Any large project started as a side project, or something very small, and became big after YEARS of continuous work. I understood that early when I saw who the famous bloggers were around me: people who had been blogging nonstop for years. They were not the best writers, they didn't necessarily have the best content, they just kept at it for years and years.

In any case, I digress, today I wanted to talk about two things that have inspired me a lot in the last half.

The first thing is this blog post titled [Just Know Stuff](https://kidger.site/thoughts/just-know-stuff/). It's directed at PhD students, but I always feel like I run into the same problems as PhD students and so I tend to read what they write. I often run away from complexity, and I often panic and feel stressed and do as much as I can to avoid learning what I don't need to learn. The problem with that is that I only get shallow knowledge, and I develop breadth over depth. It's useful if you want to teach (and I think this is what made [Real-World Cryptography](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09) a success), but it's not useful if you want to become an expert in one domain. (And you can't be an expert in so many domains.)

Anyway, the blogpost makes all of that very simple: "just know stuff". The idea is that if you want to become an expert, you'll have to know that stuff anyway, so don't avoid it. You'll have to understand all of the corner cases, and all of the logic, and all of the lemmas, and so on. So don't put it off, spend the time to learn it. And I would add: doesn't matter if you're slow, as long as you make progress towards that goal you'll be fine.

The second thing is a comment I read on [HN](https://news.ycombinator.com/). I can't remember where, I think it was in relation to dealing with large unknown codebases. Basically the comment said: "don't use your brain and read code trying to understand it, your brain is slow, use the computer, the computer is fast, change code, play with code, compile it, see what it does".

That poorly paraphrased sentence was an epiphany for me. It instantly made me think of Veritasium's video [The 4 things it takes to be an expert](https://www.youtube.com/watch?v=5eW6Eagr9XA). The video said that to learn something really really well, to become an expert, you need to have a testing environment with FAST and VALID feedback. And I think a compiler is exactly that, you can quickly write code, test things, and the compiler tells you "YES, YOU ARE RIGHT" or "BEEEEEEEP, WRONG" and your brain will do the rest.

So the learning for me was that to learn something well, I had to stop spending time reading it, I had to play with it, I had to test my understanding, and only then would I really understand it.

![veritasum](/upload/Screenshot_2023-03-01_at_9.59_.41_PM_.png)

> "timely feedback" and "valid environment" are the rules I'm referring to. Not that the other ones are less important. ]]>
A new series of videos on zero-knowledge proof composition and recursion (part 1) David Wong Sun, 26 Feb 2023 20:23:01 +0100 http://www.cryptologie.net/article/587/a-new-series-of-videos-on-zero-knowledge-proof-composition-and-recursion-part-1/ http://www.cryptologie.net/article/587/a-new-series-of-videos-on-zero-knowledge-proof-composition-and-recursion-part-1/#comments
In this new series of videos I will explain how proof composition and recursion work with different schemes. Spoiler: we'll talk about Sangria, Nova, PCD, IVC, BCTV14 and Halo (and perhaps more if more comes up as I record these).

Here's the first one:

<iframe width="560" height="315" src="https://www.youtube.com/embed/6mcCyQXm8vo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

you can access the [full playlist here](https://www.youtube.com/watch?v=6mcCyQXm8vo&list=PLBJMt6zV1c7GeKkR2SUhzx9KSJ9TsEx6n). ]]>
Real-World Cryptography, a bit more than a year later David Wong Sun, 26 Feb 2023 12:12:46 +0100 http://www.cryptologie.net/article/586/real-world-cryptography-a-bit-more-than-a-year-later/ http://www.cryptologie.net/article/586/real-world-cryptography-a-bit-more-than-a-year-later/#comments
> source: [redbubble](https://www.redbubble.com/i/t-shirt/Life-mural-by-mimoozai/135893840.WFLAH)

Three years ago, in the middle of writing my book Real-World Cryptography, I wrote about [Why I'm writing a book on cryptography](https://www.cryptologie.net/article/504/why-im-writing-a-book-on-cryptography/). I believed there was a market of engineers (and researchers) that was not served by the current offerings. There ought to be something more approachable, with fewer equations and theory and history, with more diagrams, and including advanced topics like cryptocurrencies, post-quantum cryptography, multi-party computations, zero-knowledge proofs, hardware cryptography, end-to-end encrypted messaging, and so on.

The blogpost went viral and I ended up reusing it as a prologue to the book.

Now that Real-World Cryptography has been released for more than a year, it turns out my 2-year bet was not for nothing :). The book has been very well received, including being used in a number of universities by professors, and has been selling quite well.

The only problem is that it mostly sold through [Manning](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09) (my publisher), meaning that the book did not receive many reviews on [Amazon](https://www.amazon.com/Real-World-Cryptography-David-Wong/dp/1617296716).

So this post is for you. If you've bought the book and enjoyed it (or parts of it), please leave a review [over there](https://www.amazon.com/Real-World-Cryptography-David-Wong/dp/1617296716). This will go a long way to help me establish the book and allow more people to find it (Amazon being the biggest source of readers today). ]]>
Contributing to open source projects and about learning zero-knowledge proofs David Wong Sun, 05 Feb 2023 01:08:14 +0100 http://www.cryptologie.net/article/585/contributing-to-open-source-projects-and-about-learning-zero-knowledge-proofs/ http://www.cryptologie.net/article/585/contributing-to-open-source-projects-and-about-learning-zero-knowledge-proofs/#comments
![kimchi in the Mina stack](/upload/Screenshot_2023-02-04_at_3.54_.18_PM_.png)

Kimchi by itself is only a backend to create proofs. The whole picture includes:

* Pickles, the recursion layer, for verifying proofs within proofs (ad infinitum)
* Snarky, the frontend that allows developers to write programs in a higher-level abstraction that Kimchi and Pickles can prove

Today, both of these parts are written in OCaml and not really meant to be used outside of Mina. With the advent of [zkapps](https://cryptologie.net/article/575/whats-the-deal-with-zkapps/) most users are able to use all of this **right now** using typescript in a user-friendly toolbox called [snarkyjs](https://github.com/o1-labs/snarkyjs).

> If you're only interested in using the main tool in typescript, head over to the [snarkyjs](https://github.com/o1-labs/snarkyjs) repo.

Still, we would benefit from having the pickles + snarky + kimchi combo in a single language (Rust). This would allow us to move faster, and we would be able to improve performance even more. On top of that, a number of users have been looking for an all-in-one zero-knowledge proof Rust library that supports recursion without relying on a [trusted setup](https://cryptologie.net/article/560/zk-faq-whats-a-trusted-setup-whats-a-structured-reference-string-whats-toxic-waste/).

For this reason, we've been moving more to the Rust side.

![the kimchi stack in rust and ocaml](/upload/Screenshot_2023-02-04_at_3.54_.27_PM_.png)

What does this have to do with you? Well, while we're doing this, kimchi could use some help from the community! We love open source, and so everything's developed in the open.

I've [talked about external contributions to kimchi](https://twitter.com/cryptodavidw/status/1502503708931739651) in the past and have since received a tremendous amount of replies and interest:

![kimchi on twitter](/upload/Screenshot_2023-02-04_at_4.10_.24_PM_.png)

A year later, we're now **the open source zero-knowledge project with the highest number of contributors** (as far as I can tell), and we even hired a number of them!

Some of the contributors were already knowledgeable in ZKPs, some were already knowledgeable in Rust, some didn't know anything. It didn't really matter, as we followed the best philosophy for an open source project: **the more transparent and understandable a project is, the more people will be able to contribute and build on top of it**.

Kimchi has an excellent [introduction to contributing](https://github.com/o1-labs/proof-systems/blob/master/CONTRIBUTING.md) (including a short [video](https://www.youtube.com/watch?v=WUP54nqVedc)), a [book](https://o1-labs.github.io/proof-systems/) explaining a number of concepts behind the implementation, a list of [easy tasks](https://github.com/o1-labs/proof-systems/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task+to+start+contributing%22) to start with, and my personal support over twitter or Github =)

So if you're interested in any of these things, don't be shy, look at these links or come talk to me and I'll help you onboard to your first contribution! ]]>
Learning OCaml for non-functional language people like me David Wong Sun, 29 Jan 2023 11:04:36 +0100 http://www.cryptologie.net/article/584/learning-ocaml-for-non-functional-language-people-like-me/ http://www.cryptologie.net/article/584/learning-ocaml-for-non-functional-language-people-like-me/#comments
But I digress, I wanted to write this note to "past me" (and anyone like that guy). It's a note about what you should do to get past the OCaml bump. There are two things: getting used to reading types, and understanding how to parse compiler errors.

For the first one, a breakthrough in how effective I am at reading OCaml code came when I understood the importance of types. I'm used to just reading code and understanding it through variable names and comments and general organization. But reading OCaml code is much more like reading math papers, I find: you often have to go much slower, and you have to read the types of everything to understand what some code does. I find that it often feels like reverse engineering. Once you accept that you can't really understand OCaml code without looking at type signatures, then everything will start falling into place.

For the second one, the OCaml compiler has horrendous errors it turns out (which I presume is the major reason why [people give up on OCaml](https://discuss.ocaml.org/t/what-are-the-biggest-reasons-newcomers-give-up-on-ocaml/10958/1)). Getting an OCaml error can sometimes really feel like a death sentence. But surprisingly, following some unwritten heuristics can most often fix it. For example, when you see a long-ass error, it is often due to two types not matching. In these kinds of situations, just read the end of the error to see which types are not matching, and if that's not enough information then work your way up like you're reading an inverted stack trace. Another example is that long errors might actually be several errors concatenated together (which isn't really clear due to formatting). Copy/pasting errors into a file and adding line breaks manually often helps.

I'm not going to write up exactly how each error should be handled. Instead, I'm hoping that core contributors to OCaml will soon seriously consider improving the errors. In the meantime though, the best way to get out of an error is to ask on [discord](https://discuss.ocaml.org/t/ocaml-discord-server/1884), or on [stackoverflow](https://stackoverflow.com/), how to parse the kind of errors you're getting. And sometimes, it'll lead you to read about advanced features of the language (like [polymorphic recursion](https://v2.ocaml.org/manual/polymorphism.html#s%3Apolymorphic-recursion)).
]]>
Permutation-Based Crypto 2023 David Wong Fri, 20 Jan 2023 23:09:24 +0100 http://www.cryptologie.net/article/583/permutation-based-crypto-2023/ http://www.cryptologie.net/article/583/permutation-based-crypto-2023/#comments
Things have changed a lot since the previous one took place (pre-covid!) so I expect some new developments to join the party (wink wink [SNARK-friendly sponges](https://eprint.iacr.org/2019/458)).

If you're interested in presenting some research, check the [call for contributions](https://permutationbasedcrypto.org/2023/files/PBC_2023_call.pdf)! ]]>
Creating cryptographic protocols with multiplications David Wong Mon, 05 Dec 2022 00:14:16 +0100 http://www.cryptologie.net/article/582/creating-cryptographic-protocols-with-multiplications/ http://www.cryptologie.net/article/582/creating-cryptographic-protocols-with-multiplications/#comments
Addition is often free, but it seems like multiplication is a pain in most cryptographic protocols.

If you're multiplying two known values together, it's OK. But if you want to multiply a known value with an unknown value, then you will most likely have to rely on the discrete logarithm problem: it lets you hide a value, while still allowing anyone to multiply it with a known value.

This is used, for example, in key exchanges. In such protocols, a public key usually masks a number. For example, the public key `X` in `X = [x] G` masks the number `x`. To multiply `x` with another number, we do this hidden in the exponent (or in the scalar since I'm using the elliptic curve notation here): `[y] X = [y * x] G`. In key exchanges, you use this masked result as something useful.
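
For example, here's a sketch of how a Diffie-Hellman-style key exchange uses this: Alice publishes `X = [x] G` and Bob publishes `Y = [y] G`; Alice computes `[x] Y` and Bob computes `[y] X`, and both end up with the same masked product `[x * y] G` without either of them ever learning the other's secret scalar.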

If you're trying to multiply two unknown values together, you need to reach for pairings. But they only give you one multiplication. With `masked(x)` and `masked(y)` you can do `pairing(masked(x), masked(y))` and obtain something that's akin to `locked_masked(x * y)`. It's locked as in, you can't do these kinds of multiplications anymore with it. ]]>
State monads in OCaml David Wong Fri, 18 Nov 2022 22:26:19 +0100 http://www.cryptologie.net/article/581/state-monads-in-ocaml/ http://www.cryptologie.net/article/581/state-monads-in-ocaml/#comments
Little did I know, there's more to monads, or at least monads are so vague that they can be used in all sorts of ways. One example of this is state monads.

A state monad is a monad which is defined on a type that looks like this:

```
type 'a t = state -> 'a * state
```

In other words, the type is actually a function that performs a state transition and also returns a value (of type `'a`).

When we act on state monads, we're not really modifying a value, but a function instead. Which can be brain melting.

---

The bind and return functions are defined very differently due to this.

The return function should return a function (respecting our monad type) that does nothing with the state:

```
let return a = fun state -> (a, state)
let return a state = (a, state) (* same as above *)
```

This has the correct type signature of `val return : 'a -> 'a t` (where, remember, `'a t` is `state -> 'a * state`). So all good.

The bind function is much harder to parse. Remember the type signature first:

```
val bind : 'a t -> f:('a -> 'b t) -> 'b t
```

which we can extend, to help us understand what this means when we're dealing with a monad type that holds a function:


```
val bind : (state -> 'a * state) -> f:('a -> (state -> 'b * state)) -> (state -> 'b * state)
```

you should probably spend a few minutes internalizing this type signature. I'll describe it in other words to help: `bind` takes a state transition function, and another function `f` that takes the output of that first function to produce another state transition (along with another return value `'b`).

The result is a new state transition function. That new state transition function can be seen as the chaining of the first function and the additional one `f`.

OK let's write it down now:

```
let bind t ~f = fun state ->
  (* apply the first state transition first *)
  let a, transient_state = t state in
  (* and then the second *)
  let b, final_state = f a transient_state in
  (* return these *)
  (b, final_state)
```

Hopefully that makes sense, we're really just using this to chain state transitions and produce a larger and larger main state-transition function (our monad type `t`).

---

What does that look like when we're using this in practice? Most likely, when a return value is created, we want to make it available to the whole scope. This is because we really want to write code that looks like this:

```
let run state =
  (* use the state to create a new variable *)
  let (a, state) = new_var () state in
  (* use the state to negate variable a *)
  let (b, state) = negate a state in
  (* use the state to add a and b together *)
  let (c, state) = add a b state in
  (* return c and the final state *)
  (c, state)
```

where `run` is a function that takes a `state`, applies a number of state transitions on that state, and returns the new state as well as a value produced during that computation.
The important thing to take away there is that we want to apply these state transition functions with values that were created previously, at different points in time.

Also, if that helps, here are the signatures of our imaginary state transition functions:

```
val new_var : unit -> state -> var * state
val negate : var -> state -> var * state
val add : var -> var -> state -> var * state
```

Rewriting the previous example with our state monad, we should have something like this:

```
let run =
  bind (new_var ()) ~f:(fun a ->
      bind (negate a) ~f:(fun b ->
          bind (add a b) ~f:(fun c -> return c)))
```

Which, as I explained in my previous post on monads, can be written more clearly using something like a `let%` operator:

```
let t =
  let%bind a = new_var () in
  let%bind b = negate a in
  let%bind c = add a b in
  return c
```

And so now we see the difference: monads are really just a way to do things we can already do, but without having to pass the state around explicitly.

It can be really hard to internalize how the previous code is equivalent to the non-monadic example. So I have a whole example you can play with, which also inlines the logic of `bind` and `return` so you can see how they successfully extend the state. (It probably looks nicer on [Github](https://gist.github.com/mimoo/64fec5a67f67bdb5340f027279291d80)).

```
type state = { next : int }
(** a state is just a counter *)

type 'a t = state -> 'a * state
(** our monad is a state transition *)

(* now we write our monad API *)

let bind (t : 'a t) ~(f : 'a -> 'b t) : 'b t =
 fun state ->
  (* apply the first state transition first *)
  let a, transient_state = t state in
  (* and then the second *)
  let b, final_state = f a transient_state in
  (* return these *)
  (b, final_state)

let return (a : int) (state : state) = (a, state)

(* here's some state transition functions to help drive the example *)

let new_var _ (state : state) =
  let var = state.next in
  let state = { next = state.next + 1 } in
  (var, state)

let negate var (state : state) = (0 - var, state)
let add var1 var2 state = (var1 + var2, state)

(* Now we write things in an imperative way, without monads.
Notice that we pass the state and return the state all the time, which can be tedious.
*)

let () =
  let run state =
    (* use the state to create a new variable *)
    let a, state = new_var () state in
    (* use the state to negate variable a *)
    let b, state = negate a state in
    (* use the state to add a and b together *)
    let c, state = add a b state in
    (* return c and the final state *)
    (c, state)
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c: %d\n" c

(* We can write the same with our monad type [t]: *)

let () =
  let run =
    bind (new_var ()) ~f:(fun a ->
        bind (negate a) ~f:(fun b -> bind (add a b) ~f:(fun c -> return c)))
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c2: %d\n" c

(* To understand what the above code gets translated to, we can inline the logic of the [bind] and [return] functions.
But to do that more cleanly, we should start from the end and work backwards.
*)
let () =
  let run =
    (* fun c -> return c *)
    let _f1 c = return c in
    (* same as *)
    let f1 c state = (c, state) in
    (* fun b -> bind (add a b) ~f:f1 *)
    (* remember, [a] is in scope, so we emulate it by passing it as an argument to [f2] *)
    let f2 a b state =
      let c, state = add a b state in
      f1 c state
    in
    (* fun a -> bind (negate a) ~f:f2 a *)
    let f3 a state =
      let b, state = negate a state in
      f2 a b state
    in
    (* bind (new_var ()) ~f:f3 *)
    let f4 state =
      let a, state = new_var () state in
      f3 a state
    in
    f4
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c3: %d\n" c

(* If we didn't work backwards, it would look like this: *)
let () =
  let run state =
    let a, state = new_var () state in
    (fun state ->
      let b, state = negate a state in
      (fun state ->
        let c, state = add a b state in
        (fun state -> (c, state)) state)
        state)
      state
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c4: %d\n" c
```
]]>
Some unrelated rambling about counter strike David Wong Thu, 17 Nov 2022 09:44:18 +0100 http://www.cryptologie.net/article/580/some-unrelated-rambling-about-counter-strike/ http://www.cryptologie.net/article/580/some-unrelated-rambling-about-counter-strike/#comments
My first lesson was how intense competition is. I never had any experience come close to it since then, and miss competition dearly. Once you start competing, and you start getting good at it, you feel the need to do everything to get the advantage. Back then, I would watch every frag movie that came out (even producing [some](https://vimeo.com/manage/videos/8035680)), I would know all the best players of every clan (and regularly play with them), and I would participate in 3 online tournaments a day. I would wake up every day around noon, play the first tournament at 1pm, then practice the afternoon, then play the tournament of 9pm, and then the last one at 1am. If my team lost, I would volunteer to replace a player dropping out from a winning team. Rinse and repeat, every day. There's no doubt in my mind that I must have reached Gladwell's 10,000 hours.

I used the same kind of technique years later when I started my master's in cryptography. I thought: I know how to become the best at something, I just have to do it all the time, constantly. I just need to obsess. So I started blogging here, I started subscribing to a number of blogs on cryptography and security and I would read everything I could every single hour of every day. I became a sponge, and severely addicted to RSS feeds. Of course, reading about cryptography is not as easy as playing video games and I could never maintain the kind of long hours I would when I was playing counter strike. I felt good, years later, when I decided to not care as much about new notifications in my RSS feed.

Younger, my dream was to train with a team for a week in a gaming house, which is something that some teams were starting to do. You'd go stay in some house together for a week, and every day practice and play games together. It really sounded amazing. Spending all my hours with people who cared as much as me. It seemed like a career in gaming was possible, as more and more money was starting to pour into esport. Some players started getting salaries, and even coaches and managers. It was a crazy time, and I felt really sad when I decided to stop playing. I knew a life competing in esport didn't make sense for me, but competition really was fulfilling in a way that most other things proved not to be.

An interesting side effect of playing counter strike every day, competitively, for many years, is that I went through many teams. I'm not sure how many, but probably more than 50, and probably less than 100. A team was usually 5 players, or more (if we had a rotation). We would meet frequently to spend hours practicing together, figuring out new strategies that we could use in our next game, doing practice matches against other teams, and so on. I played with all different kinds of people during that time, spending hours on teamspeak and making life-long friendships. One of my friends, I remember, would get salty as fuck when we were losing, and would often scream at us over the microphone. One day we decided to have an intervention, and he agreed to stop using his microphone for a week in order not to get kicked out of the team. This completely cured his raging.

One thing I noticed was that the mood of the team was responsible for a lot in our performance. If we started losing during a game, a kind of "loser" mood would often take over the team. We would become less enthusiastic, some team mates might start raging at the incompetence of their peers, or they might say things that made the whole team want to give up. Once this kind of behavior started, it usually meant we were on a one-way road to a loss.
But sometimes, when people kept their focus on the game, and tried to motivate others, we would make incredible comebacks. Games were usually played in two halves of 15 rounds. First to 16 would win. We sometimes went from a whopping 0-15 to an insane streak of victories leading us to a 16-15. These were insane games, and I will remember them forever.
The common theme between all of those comeback stories was how the whole team faced adversity, together. It really is when things are not going well that you can judge a team. A bad team will sink itself, a strong team will support one another and focus on doing its best, potentially turning the tide.

During this period, I also wrote a web service to create tournaments easily. Most tournaments started using it, and I got some help to translate it into 8 different European languages. Thousands of tournaments got created through the interface, in all kinds of games (not just CS). Years later I ended up [open sourcing](https://github.com/mimoo/wiitop) the tool for others to use. This really made me understand how much I loved creating products, and writing code that others could directly use.

Today I have almost no proof of that time, besides a few IRC screenshots. (I was completely addicted to IRC.)


![](https://i.imgur.com/s66iZjd.jpg)
]]>
ZK Security - A Whole New Layer to Worry About David Wong Thu, 17 Nov 2022 08:37:21 +0100 http://www.cryptologie.net/article/579/zk-security-a-whole-new-layer-to-worry-about/ http://www.cryptologie.net/article/579/zk-security-a-whole-new-layer-to-worry-about/#comments
<iframe width="560" height="315" src="https://www.youtube.com/embed/SxI8uNBp05k" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ]]>
Simple introduction to monads in OCaml David Wong Sat, 12 Nov 2022 23:25:56 +0100 http://www.cryptologie.net/article/578/simple-introduction-to-monads-in-ocaml/ http://www.cryptologie.net/article/578/simple-introduction-to-monads-in-ocaml/#comments
I was surprised to learn that monads are really just two functions: `return` and `bind`:

```
module type Monads = sig
  type 'a t
  val return : 'a -> 'a t
  val bind : 'a t -> ('a -> 'b t) -> 'b t
end
```

Before I explain exactly what these do, let's look at something that's most likely to be familiar: the `option` type.

An option is a variant (or enum) that either represents nothing, or something. It's convenient to avoid using real values to represent emptiness, which is often at the source of many bugs. For example, in languages like C, where `0` is often used to terminate strings, or Golang, where a `nil` pointer is often used to represent nothing.

An `option` has the following signature in OCaml:

```
type 'a option = None | Some of 'a
```

Sometimes, we want to chain operations on an option. That is, we want to operate on the value it might contain, or do nothing if it doesn't contain a value.
For example:

```
let x = Some 5 in
let y = None in
match x with
| None -> None
| Some v1 -> (
    match y with
    | None -> None
    | Some v2 -> Some (v1 + v2))
```

The above returns nothing (`None`) if one of `x` or `y` is `None`, and something if both values are set.

Writing these nested match statements can be really tedious, especially the more there are, so there's a [`bind`](https://v2.ocaml.org/api/Option.html) function in OCaml to simplify this:

```
let x = Some 5 in
let y = None in
Option.bind x (fun v1 ->
    Option.bind y (fun v2 ->
        Some (v1 + v2)))
```

This is debatably less tedious.

This is where two things happened in OCaml if I understand correctly:

* Jane street came up with a ppx called [ppx_let](https://github.com/janestreet/ppx_let) to make it easier to write and read such statements
* [OCaml introduced the same feature without ppxs](https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-syntax.html)

Let's explain the syntax introduced by OCaml first.

To do this, I'll define our monad by extending the OCaml `Option` type:

```
module Monad = struct
  type 'a t = 'a option

  let return x = Some x
  let bind = Option.bind
  let ( let* ) = bind
end
```

The syntax introduced by OCaml is the `let*`, which we can define ourselves. (There's also `let+` if we want to use that.)

We can now rewrite the previous example with it:

```
open Monad

let print_res res =
  match res with
  | None -> print_endline "none"
  | Some x -> Format.printf "some: %d\n" x

let () =
  (* example chaining Option.bind, similar to the previous example *)
  let res : int t =
    bind (Some 5) (fun a -> bind (Some 6) (fun b -> Some (a + b)))
  in
  print_res res;

  (* same example but using the let* syntax now *)
  let res =
    let* a = Some 5 in
    let* b = Some 6 in
    Some (a + b)
  in
  print_res res
```

Or I guess you can use the `return` function we defined to write something a bit more idiomatic:

```
let res =
  let* a = Some 5 in
  let* b = Some 6 in
  return (a + b)
in
print_res res
```

Even though this is much cleaner, the new syntax should melt your brain if you don't understand exactly what it is doing underneath the surface.

But essentially, this is what monads are. A container (e.g. `option`) and a `bind` function to chain operations on that container.
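
By the way, the `let+` operator mentioned earlier works the same way, except that it is defined in terms of `map` instead of `bind`. A minimal sketch using the stdlib's `Option.map`:

```
let ( let+ ) x f = Option.map f x

let res =
  let+ a = Some 5 in
  a + 1
(* res is Some 6 *)
```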

Bonus: this is how Jane Street's ppx_let syntax works (only defined on `result`, a similar type to `option`, in their library):

```
open Base
open Result.Let_syntax

let print_res res =
  match res with
  | Error e -> Stdio.printf "error: %s\n" e
  | Ok x -> Stdio.printf "ok: %d\n" x

let () =
  (* example chaining Result.bind *)
  let res =
    Result.bind (Ok 5) ~f:(fun a ->
        Result.bind (Error "lol") ~f:(fun b -> Ok (a + b)))
  in
  print_res res;

  (* same example using the ppx_let syntax *)
  let res =
    let%bind a = Ok 5 in
    let%bind b = Error "lol" in
    Ok (a + b)
  in
  print_res res
```

You will need the following `dune` file if you want to run it (assuming your file is called `monads.ml`):

```
(executable
 (name monads)
 (modules monads)
 (libraries base stdio)
 (preprocess
  (pps ppx_let)))
```

And run it with `dune exec ./monads.exe` ]]>
The intuition behind the sum-check protocol in 5 minutes David Wong Sat, 12 Nov 2022 20:29:03 +0100 http://www.cryptologie.net/article/577/the-intuition-behind-the-sum-check-protocol-in-5-minutes/ http://www.cryptologie.net/article/577/the-intuition-behind-the-sum-check-protocol-in-5-minutes/#comments
<iframe width="560" height="315" src="https://www.youtube.com/embed/XV62OB022tU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ]]>
Intro to Kimchi @ 0xPARC CARML Weekend David Wong Fri, 11 Nov 2022 01:28:55 +0100 http://www.cryptologie.net/article/576/intro-to-kimchi-0xparc-carml-weekend/ http://www.cryptologie.net/article/576/intro-to-kimchi-0xparc-carml-weekend/#comments
<iframe width="560" height="315" src="https://www.youtube.com/embed/QbOGZ-oBXvA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ]]>
What's the deal with zkapps? David Wong Thu, 20 Oct 2022 18:28:39 +0200 http://www.cryptologie.net/article/575/whats-the-deal-with-zkapps/ http://www.cryptologie.net/article/575/whats-the-deal-with-zkapps/#comments
Zkapps, or **zero-knowledge applications**, are the modern and secure solution we found to allow someone else to run arbitrary programs for us, while allowing us to trust the result.
And all of that thanks to a recently rediscovered cryptographic construction called **general-purpose zero-knowledge proofs**.
With it, no need to trust the hardware to behave correctly, especially if you're not the one running it (cough cough [intel SGX](https://www.google.com/search?q=attacks+on+intel+sgx&oq=attacks+on+intel+sgx&aqs=chrome..69i57j0i271l2j69i60l4j69i61.1713j0j9&sourceid=chrome&ie=UTF-8)).

Today, we're seeing zero-knowledge proofs impacting cryptocurrencies (which as a whole have been a petri dish for cryptographic innovation), but tomorrow I argue that most applications (not just cryptocurrencies) will be directly or indirectly impacted by zero-knowledge technology.

> Because I've spent so much time with cryptocurrencies in recent years, auditing blockchains like Zcash and Ethereum at NCC Group, and working on projects like Libra/Diem at Facebook, I'm mostly going to focus on what's happening in the blockchain world in this post.
> If you want a bigger introduction to all of these concepts, check my book [Real-World Cryptography](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09).

The origin of the story starts with the ancient search for solutions to the problem of **verifiable computation**: being able to verify that the result of a computation is correct. In other words, that whoever ran the program is not lying to us about the result.

Most of the solutions, until today, were based on hardware. Hardware chips were first invented to be "tamper resistant" and "hard to analyze". Chips capable of performing simple cryptographic operations like signing or encryption.
You would typically find them in sim cards, TV boxes, and in credit cards.
While all of these are being phased out, they are being replaced by equivalent chips called "secure enclaves" that can be found in your phone.
On the enterprise side, more recently technologies were introduced to provide programmability. Chips capable of running arbitrary programs, while providing (signed) attestation that the programs were run correctly. These chips would typically be certified by some vendor (for example, Intel SGX) with some claim that it’s hard to tamper with them.
Unfortunately for Intel and others, the security community has found a lot of interest in publishing attacks on their "secure" hardware, and we see new hacks coming up pretty much every year.
It's a game of cat and mouse.

Cryptocurrency is just another field that's been dying to find a solution to this verifiable computation problem.
The previous hardware solutions I’ve talked about can be found in oracles like [town crier](https://www.town-crier.org/), in bridges like the [Ethereum-Avalanche bridge](https://medium.com/avalancheavax/new-avalanche-bridge-builds-on-intel-sgx-technology-in-breakthrough-for-cross-chain-8f854e0e72e0), or even at the core of cryptocurrencies like [MobileCoin](https://mobilecoin.com/).

Needless to say, I'm not a fan, but I'll be the first to conceive that in some scenarios you just don't have a choice.
And being expensive enough for attackers to break is a legitimate solution.
I like to be able to pay with my smartphone.

But in recent years, an old cryptographic primitive that can solve our verifiable computation problem for real has made a huge comeback. Yes you know which one I'm talking about: **general-purpose zero-knowledge proofs (ZKPs)**.

With it, there is no need to trust the hardware: whoever runs the program can simply create a cryptographic proof to convince you that the result is correct.

ZKPs have been used to solve ALL kinds of problems in cryptocurrency:

* "I wish we could process many more transactions" -> simply let someone else run a program that verifies all the transactions and outputs a _small_ list of changes to be made to the blockchain (and a proof that the output is correct). This is what zk rollups do.
* "I wish we could mask the sender, recipient, and the amount being transacted" -> just encrypt your transaction! And use a zero-knowledge proof to prove that what's encrypted is correct. This is what ZCash has done (and Monero, to some extent).
* "It takes ages for me to download the whole Bitcoin ledger..." -> simply have someone else do it for you, and give you the resulting latest state (with a proof that it's correct). That's what Mina does.
* "Everybody using cryptocurrency is just blindly trusting some public server (e.g. Infura) instead of running their own nodes" -> use ZKP to make light clients verifiable! This is what Celo does with Plumo.
* "Why can't I easily transfer a token from one blockchain to another one?" -> use these verifiable light clients. This is what [zkBridge](https://rdi.berkeley.edu/zkp/zkBridge/zkBridge.html) proposes.

There's many more, but I want to focus on zkapps (remember?) in this post.
Zkapps are a new way to implement smart contracts. Smart contracts were first pioneered by Ethereum, to allow user programs to run on the blockchain itself.

To explain smart contracts, I like the analogy of a single supercomputer floating in the sky above us. We're all using the same computer, the one floating in the sky. We all can install our programs on the floating computer, and everyone can execute functions of these programs (which might mutate the state of the program).

The solution found by Ethereum at the time was to implement the concept naively and without using cryptography:

* Users can install a program by placing the program's code in a transaction.
* Users can execute functions of a program by writing in a transaction the function they want to execute and with what arguments.
* **Everyone** running a node has to run the functions found in users' transactions. All of them. In order to get the result (e.g. move X tokens to wallet Y, update the state of the smart contract, etc.)

The last point is the biggest limitation of Ethereum.
We can't have the user provide the result of executing a function, or anyone else really, because we can't trust them.
And so, not only does this mean that everyone is always re-executing the same stuff (which is redundant, and slows down the network), but this also means that everything in a smart contract must be public (as everyone must be able to run the function).
There can be no secrets used.
There can be no asynchronous calls or interaction outside of the network while this happens.

This is where zkapps enter the room. Zkapps allow users to run the programs themselves and give everyone else the result (along with a proof).

This not only solves the problem of having everyone re-execute the same smart contract calls constantly, but it also opens up new applications as computations can be non-deterministic: they can use randomness, they can use secrets, they can use asynchronous calls, etc.

More than that, the state of a zkapp can now mostly live off-chain, like real applications before Ethereum used to do. Reducing the size of the entire blockchain ([Today, Ethereum is almost 1 terabyte!](https://ycharts.com/indicators/ethereum_chain_full_sync_data_size)).
These applications are not limited by the speed of the blockchain, or by the capabilities of the language exposed by the blockchain anymore.

Perhaps, it would be more correct to describe them as mini-blockchains of their own, that can be run as centralized or decentralized applications, similar to L2s or Cosmos zones.

OK. So far so good, but do these zkapps really exist or is it just talk? Well, not yet. But a few days ago, the [Mina](https://minaprotocol.com/) cryptocurrency released their implementations of zkapps on a testnet. And if the testnet goes well, there is no reason to believe this won't unlock a gigantic number of applications we haven't seen before on blockchains.

You can read the [hello world tutorial](https://docs.minaprotocol.com/zkapps/tutorials/hello-world) and deploy your first zkapp in like 5 minutes (I kid you not).
So I highly recommend you try it. This is the future :)
]]>
OCaml wishlist David Wong Thu, 22 Sep 2022 04:37:59 +0200 http://www.cryptologie.net/article/574/ocaml-wishlist/ http://www.cryptologie.net/article/574/ocaml-wishlist/#comments
I find the language and the experience very close to C in many ways, while at the same time boasting a state of the art type system. It's weird. I think there's a real emphasis on the expressiveness, but little on the engineering. Perhaps this is due to the language not having enough traction in the industry.

So about this, I have two things I'd like to say. The first, is that if you're looking for a somewhat low-level (there's a garbage collector) language you can make a real dent in, OCaml might be the one. It's pretty bare bone, not that many libraries exist, and if they do they are barely usable due to a lack of documentation. My first contribution was [a library to encode and decode hexadecimal strings](https://github.com/mimoo/hexstring), because I couldn't find one that I could use. That should tell you something.

My second contribution was [a tool](https://github.com/o1-labs/ocamlbyexample) to build "by example" websites. I used it to make a website to learn [OCaml by examples](https://o1-labs.github.io/ocamlbyexample/), and another one to learn [Nix by example](https://mimoo.github.io/nixbyexample/). How cool would it be if people started using it to build a number of "by examples" websites in the OCaml ecosystem :D?

Anyway, I digress, the second thing I wanted to say is: if you're working on OCaml (or want to contribute to a new language), here's my wishlist:

- a tool like [cargo](https://doc.rust-lang.org/cargo/) to manage dependencies (& versions), start projects, run tests, etc. Two important things: it should use a real configuration language (e.g. toml, json, yml) and it should work in a [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) model.
- better integration with vscode. Every time I write or read OCaml I find myself missing rust-analyzer and its integration with vscode. I just want to be able to go to definitions, even in the presence of functors, and easily find the types of things (and see their implementations).
- being able to run a single test. It is crazy to me that today, you still can't write an inline test and run it. It's the best way to debug or test something.
- better compiler error messages. I think the lack of a tool like cargo, and this, are the biggest impediment to the language. [See this issue for an example](https://github.com/ocaml/ocaml/issues/11539).
- better default for ocamlformat. OCaml is hard to read, some of the reasons are hard to change, but the formatting can be fixed and it really needs some work.
- a linter like clippy. It's 2022, every project should be able to run an OCaml linter in CI.
- good documentation for stdlib and 3rd party libraries. Documentation is really subpar in OCaml.
- a `use` keyword to import specific values in scope (as opposed to "opening" a whole module in scope)

PS: would someone actually be interested to work on any of these for a grant? There's a number of new-ish companies in the OCaml space that would probably pay for someone to solve these. I guess reach out to me on the [contact page](/contact) if you're interested. ]]>
noname developer update #4: showcasing method calls David Wong Mon, 19 Sep 2022 03:36:21 +0200 http://www.cryptologie.net/article/573/noname-developer-update-4-showcasing-method-calls/ http://www.cryptologie.net/article/573/noname-developer-update-4-showcasing-method-calls/#comments
I just implemented method calls in noname, and I made a video showcasing it and checking if the implementation is correct with some simple examples.

Don't forget, if you want to play with it check it out here: https://github.com/mimoo/noname and if you have any questions or are running into something weird please leave a comment on the Github repo!

<iframe width="560" height="315" src="https://www.youtube.com/embed/5Ruc52HjYPg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ]]>
noname developer update #3: user-defined functions David Wong Sat, 17 Sep 2022 05:02:05 +0200 http://www.cryptologie.net/article/572/noname-developer-update-3-user-defined-functions/ http://www.cryptologie.net/article/572/noname-developer-update-3-user-defined-functions/#comments
In this update, I showcase a new feature: functions, and go through the debug compilation of an example program to see if the implementation is sound (that it constrains what it is supposed to constrain). In this video I do something more: I optimize the implementation of `assert_eq` so that you can see a bit of the compiler internals. I also end the video abruptly, thinking I found a bug in the implementation. If you were an attentive student, you would have figured out that there were no bugs: doing things on constants does not create any gates.

[You can play with the noname language here](https://github.com/mimoo/noname)! And you can read the [noname book](https://mimoo.github.io/noname/) as well.

<iframe width="560" height="315" src="https://www.youtube.com/embed/K8edXaIidUM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ]]>