State monads in OCaml David Wong Fri, 18 Nov 2022 22:26:19 +0100 http://www.cryptologie.net/article/581/state-monads-in-ocaml/ http://www.cryptologie.net/article/581/state-monads-in-ocaml/#comments Previously I talked about monads, which are just a way to create a "container" type (that contains some value) and let people chain computations within that container. It seems to be a pattern that's mostly useful in functional languages, as they are often limited in the ways they can do things.

Little did I know, there's more to monads, or at least monads are so vague that they can be used in all sorts of ways. One example of this is state monads.

A state monad is a monad which is defined on a type that looks like this:

type 'a t = state -> 'a * state

In other words, the type is actually a function that performs a state transition and also returns a value (of type 'a).

When we act on state monads, we're not really modifying a value, but a function instead. Which can be brain melting.


The bind and return functions are defined very differently due to this.

The return function should return a function (respecting our monad type) that does nothing with the state:

let return a = fun state -> (a, state)
let return a state = (a, state) (* same as above *)

This has the correct type signature of val return : 'a -> 'a t (where, remember, 'a t is state -> 'a * state). So all good.

The bind function is much harder to parse. Remember the type signature first:

val bind : 'a t -> f:('a -> 'b t) -> 'b t

which we can extend, to help us understand what this means when we're dealing with a monad type that holds a function:

val bind : (state -> 'a * state) -> f:('a -> (state -> 'b * state)) -> (state -> 'b * state)

You should probably spend a few minutes internalizing this type signature. I'll describe it in other words to help: bind takes a state transition function, and another function f that takes the output of that first function and produces another state transition (along with another return value 'b).

The result is a new state transition function. That new state transition function can be seen as the chaining of the first function and the additional one f.

OK let's write it down now:

let bind t ~f = fun state ->
    (* apply the first state transition first *)
    let a, transient_state = t state in
    (* and then the second *)
    let b, final_state = f a transient_state in
    (* return these *)
    (b, final_state)

Hopefully that makes sense: we're really just using this to chain state transitions and produce a larger and larger main state-transition function (our monad type t).


What does this look like in practice? Most likely, when a return value is created, we want to make it available to the rest of the scope. This is because we really want to write code that looks like this:

let run state =
    (* use the state to create a new variable *)
    let (a, state) = new_var () state in
    (* use the state to negate variable a *)
    let (b, state) = negate a state in
    (* use the state to add a and b together *)
    let (c, state) = add a b state in
    (* return c and the final state *)
    (c, state)

where run is a function that takes a state, applies a number of state transitions to that state, and returns the new state as well as a value produced during that computation. The important thing to take away here is that we want to apply these state transition functions with values that were created previously, at different points in time.

Also, if that helps, here are the signatures of our imaginary state transition functions:

val new_var : unit -> state -> var * state
val negate : var -> state -> var * state
val add : var -> var -> state -> var * state

Rewriting the previous example with our state monad, we should have something like this:

let run =
    bind (new_var ()) ~f:(fun a ->
        bind (negate a) ~f:(fun b -> bind (add a b) ~f:(fun c -> 
            return c)))

Which, as I explained in my previous post on monads, can be written more clearly using something like a let% operator:

let t = 
    let%bind a = new_var () in
    let%bind b = negate a in
    let%bind c = add a b in
    return c
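(You don't strictly need a ppx for this, by the way. Here's a minimal sketch of a custom binding operator that gives similar ergonomics, assuming the bind (with its labeled ~f argument), return, and the imaginary new_var, negate, and add functions from above:)

(* define a let* binding operator on top of our state monad's bind *)
let ( let* ) t f = bind t ~f

(* the same chaining as the let%bind version *)
let run =
    let* a = new_var () in
    let* b = negate a in
    let* c = add a b in
    return c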

And so now we see the difference: monads are really just a way to do things we can already do, but without having to pass the state around.

It can be really hard to internalize how the previous code is equivalent to the non-monadic example. So I have a whole example you can play with, which also inlines the logic of bind and return to see how they successfully extend the state. (It probably looks nicer on Github.)

type state = { next : int }
(** a state is just a counter *)

type 'a t = state -> 'a * state
(** our monad is a state transition *)

(* now we write our monad API *)

let bind (t : 'a t) ~(f : 'a -> 'b t) : 'b t =
 fun state ->
  (* apply the first state transition first *)
  let a, transient_state = t state in
  (* and then the second *)
  let b, final_state = f a transient_state in
  (* return these *)
  (b, final_state)

let return (a : int) (state : state) = (a, state)

(* here's some state transition functions to help drive the example *)

let new_var _ (state : state) =
  let var = state.next in
  let state = { next = state.next + 1 } in
  (var, state)

let negate var (state : state) = (0 - var, state)
let add var1 var2 state = (var1 + var2, state)

(* Now we write things in an imperative way, without monads.
   Notice that we pass the state and return the state all the time, which can be tedious.
*)

let () =
  let run state =
    (* use the state to create a new variable *)
    let a, state = new_var () state in
    (* use the state to negate variable a *)
    let b, state = negate a state in
    (* use the state to add a and b together *)
    let c, state = add a b state in
    (* return c and the final state *)
    (c, state)
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c: %d\n" c

(* We can write the same with our monad type [t]: *)

let () =
  let run =
    bind (new_var ()) ~f:(fun a ->
        bind (negate a) ~f:(fun b -> bind (add a b) ~f:(fun c -> return c)))
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c2: %d\n" c

(* To understand what the above code gets translated to, we can inline the logic of the [bind] and [return] functions.
   But to do that more cleanly, we should start from the end and work backwards.
*)
let () =
  let run =
    (* fun c -> return c *)
    let _f1 c = return c in
    (* same as *)
    let f1 c state = (c, state) in
    (* fun b -> bind (add a b) ~f:f1 *)
    (* remember, [a] is in scope, so we emulate it by passing it as an argument to [f2] *)
    let f2 a b state =
      let c, state = add a b state in
      f1 c state
    in
    (* fun a -> bind (negate a) ~f:(f2 a) *)
    let f3 a state =
      let b, state = negate a state in
      f2 a b state
    in
    (* bind (new_var ()) ~f:f3 *)
    let f4 state =
      let a, state = new_var () state in
      f3 a state
    in
    f4
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c3: %d\n" c

(* If we didn't work backwards, it would look like this: *)
let () =
  let run state =
    let a, state = new_var () state in
    (fun state ->
      let b, state = negate a state in
      (fun state ->
        let c, state = add a b state in
        (fun state -> (c, state)) state)
        state)
      state
  in
  let init_state = { next = 2 } in
  let c, _ = run init_state in
  Format.printf "c4: %d\n" c
Some unrelated rambling about counter strike David Wong Thu, 17 Nov 2022 09:44:18 +0100 http://www.cryptologie.net/article/580/some-unrelated-rambling-about-counter-strike/ http://www.cryptologie.net/article/580/some-unrelated-rambling-about-counter-strike/#comments When I was younger, I used to play a lot of counter strike (CS). CS was (and still is) this first-person shooter game where a team of terrorists would play against a team of counter terrorists. Victory would follow from successfully bombing a site or killing all the terrorists. Pretty simple. I discovered the game at a young age, and I got hooked right away. You could play with other people in cyber cafes, and if you had a computer at home with an internet connection you could even play with others online. This was just insane! But what really hooked me was the competitive aspect of the game. If you wanted, you could team up with 4 other players and play matches against others. This was all new to me, and I progressively spent more and more hours playing the game. That is, until I ended up last of my class, flunked my last year of high school, and failed the Baccalauréat. I decided to re-prioritize my passion in gaming in order not to fail at the path that was laid in front of me. But a number of lessons stayed with me, and I always thought I should write about them.

My first lesson was how intense competition is. I never had any experience come close to it since then, and miss competition dearly. Once you start competing, and you start getting good at it, you feel the need to do everything to get the advantage. Back then, I would watch every frag movie that came out (even producing some), I would know all the best players of every clan (and regularly play with them), and I would participate in 3 online tournaments a day. I would wake up every day around noon, play the first tournament at 1pm, then practice in the afternoon, then play the 9pm tournament, and then the last one at 1am. If my team lost, I would volunteer to replace a player dropping out from a winning team. Rinse and repeat, every day. There's no doubt in my mind that I must have reached Gladwell's 10,000 hours.

I used the same kind of technique years later when I started my master's in cryptography. I thought: I know how to become the best at something, I just have to do it all the time, constantly. I just need to obsess. So I started blogging here, I started subscribing to a number of blogs on cryptography and security, and I would read everything I could every single hour of every day. I became a sponge, and severely addicted to RSS feeds. Of course, reading about cryptography is not as easy as playing video games and I could never maintain the kind of long hours I would when I was playing counter strike. I felt good, years later, when I decided to not care as much about new notifications in my RSS feed.

Younger, my dream was to train with a team for a week in a gaming house, which is something that some teams were starting to do. You'd go stay in some house together for a week, and every day practice and play games together. It really sounded amazing. Spend all my hours with people who cared as much as me. It seemed like a career in gaming was possible, as more and more money was starting to pour into esport. Some players started getting salaries, and even coaches and managers. It was a crazy time, and I felt really sad when I decided to stop playing. I knew a life competing in esport didn't make sense for me, but competition really was fulfilling in a way that most other things proved not to be.

An interesting side effect of playing counter strike every day, competitively, for many years, is that I went through many teams. I'm not sure how many, but probably more than 50, and probably less than 100. A team was usually 5 players, or more (if we had a rotation). We would meet frequently to spend hours practicing together, figuring out new strategies that we could use in our next game, doing practice matches against other teams, and so on. I played with all kinds of different people during that time, spending hours on teamspeak and making life-long friendships. One of my friends, I remember, would get salty as fuck when we were losing, and would often scream at us over the microphone. One day we decided to have an intervention, and he agreed to stop using his microphone for a week in order not to get kicked out of the team. This completely cured his raging.

One thing I noticed was that the mood of the team was responsible for a lot in our performance. If we started losing during a game, a kind of "loser" mood would often take over the team. We would become less enthusiastic, some teammates might start raging at the incompetence of their peers, or they might say things that made the whole team want to give up. Once this kind of behavior started, it usually meant we were on a one-way road to a loss. But sometimes, when people kept their focus on the game, and tried to motivate others, we would make incredible comebacks. Games were usually played in two halves of 15 rounds. First to 16 would win. We sometimes went from a whopping 0-15 to an insane streak of victories leading us to a 16-15. These were insane games, and I will remember them forever. The common theme between all of those comeback stories was in how the whole team faced adversity, together. It really is when things are not going well that you can judge a team. A bad team will sink itself, a strong team will support one another and focus on doing its best, potentially turning the tide.

During this period, I also wrote a web service to create tournaments easily. Most tournaments started using it, and I got some help to translate it into 8 different European languages. Thousands of tournaments got created through the interface, in all kinds of games (not just CS). Years later I ended up open sourcing the tool for others to use. This really made me understand how much I loved creating products, and writing code that others could directly use.

Today I have almost no proof of that time, besides a few IRC screenshots. (I was completely addicted to IRC.)

ZK Security - A Whole New Layer to Worry About David Wong Thu, 17 Nov 2022 08:37:21 +0100 http://www.cryptologie.net/article/579/zk-security-a-whole-new-layer-to-worry-about/ http://www.cryptologie.net/article/579/zk-security-a-whole-new-layer-to-worry-about/#comments I spoke at a Delendum event about the security of ZK Systems, and hosted a panel at the end of the event. You can watch it here:

Simple introduction to monads in OCaml David Wong Sat, 12 Nov 2022 23:25:56 +0100 http://www.cryptologie.net/article/578/simple-introduction-to-monads-in-ocaml/ http://www.cryptologie.net/article/578/simple-introduction-to-monads-in-ocaml/#comments I'm not a language theorist, and even less of a functional programming person. I was very confused trying to understand monads, as they are often explained in complicated terms and also mostly using haskell syntax (which I'm not interested in learning). So if you're somewhat like me, this might be the good short post to introduce you to monads in OCaml.

I was surprised to learn that monads are really just two functions: return and bind:

module type Monads = sig
    type 'a t
    val return : 'a -> 'a t
    val bind : 'a t -> ('a -> 'b t) -> 'b t
end

Before I explain exactly what these do, let's look at something that's most likely to be familiar: the option type.

An option is a variant (or enum) that either represents nothing, or something. It's convenient to avoid using real values to represent emptiness, which is often the source of many bugs. For example, in languages like C, where 0 is often used to terminate strings, or Golang, where a nil pointer is often used to represent nothing.

An option has the following signature in OCaml:

type 'a option = None | Some of 'a

Sometimes, we want to chain operations on an option. That is, we want to operate on the value it might contain, or do nothing if it doesn't contain a value. For example:

let x = Some 5 in
let y = None in
match x with
  | None -> None
  | Some v1 -> (match y with
    | None -> None
    | Some v2 -> Some (v1 + v2))

The above returns None if either x or y is None, and Some of the sum if both values are set.

Writing these nested match statements can be really tedious, especially the more there are, so there's a bind function in OCaml to simplify this:

let x = Some 5 in
let y = None in
Option.bind x (fun v1 -> 
  Option.bind y (fun v2 ->
    Some (v1 + v2)
  )
)

This is debatably less tedious.

This is where two things happened in OCaml, if I understand correctly: the language itself gained dedicated syntax for this (the let* and let+ binding operators), and Jane Street released a preprocessor (ppx_let) that provides a similar let%bind syntax.

Let's explain the syntax introduced by OCaml first.

To do this, I'll define our monad by extending the OCaml Option type:

module Monad = struct
  type 'a t = 'a option

  let return x = Some x
  let bind = Option.bind
  let ( let* ) = bind
end

The syntax introduced by OCaml is the let*, which we can define ourselves. (There's also let+ if we want to use that.)
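If you're curious about let+, it's usually defined on top of a map function rather than bind. A minimal sketch for the option type, using the stdlib's Option.map:

let ( let+ ) t f = Option.map f t

(* usage: [res] ends up being [Some 6] *)
let res =
  let+ a = Some 5 in
  a + 1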

We can now rewrite the previous example with let*:

open Monad

let print_res res =
  match res with
  | None -> print_endline "none"
  | Some x -> Format.printf "some: %d\n" x

let () =
  (* example chaining Option.bind, similar to the previous example *)
  let res : int t =
    bind (Some 5) (fun a -> bind (Some 6) (fun b -> Some (a + b)))
  in
  print_res res;

  (* same example but using the let* syntax now *)
  let res =
    let* a = Some 5 in
    let* b = Some 6 in
    Some (a + b)
  in
  print_res res

Or I guess you can use the return function to write something a bit more idiomatic:

  let res =
    let* a = Some 5 in
    let* b = Some 6 in
    return (a + b)
  in
  print_res res

Even though this is much cleaner, the new syntax should melt your brain if you don't understand exactly what it is doing underneath the surface.

But essentially, this is what monads are. A container (e.g. option) and a bind function to chain operations on that container.

Bonus: this is how Jane Street's ppx_let syntax works (only defined on result, a similar type to option, in their library):

open Base
open Result.Let_syntax

let print_res res =
  match res with
  | Error e -> Stdio.printf "error: %s\n" e
  | Ok x -> Stdio.printf "ok: %d\n" x

let () =
  (* example chaining Result.bind *)
  let res =
    Result.bind (Ok 5) ~f:(fun a ->
        Result.bind (Error "lol") ~f:(fun b -> Ok (a + b)))
  in
  print_res res;

  (* same example using the ppx_let syntax *)
  let res =
    let%bind a = Ok 5 in
    let%bind b = Error "lol" in
    Ok (a + b)
  in
  print_res res

You will need the following dune file if you want to run it (assuming your file is called monads.ml):

(executable
 (name monads)
 (modules monads)
 (libraries base stdio)
 (preprocess
  (pps ppx_let)))

And run it with dune exec ./monads.exe

The intuition behind the sum-check protocol in 5 minutes David Wong Sat, 12 Nov 2022 20:29:03 +0100 http://www.cryptologie.net/article/577/the-intuition-behind-the-sum-check-protocol-in-5-minutes/ http://www.cryptologie.net/article/577/the-intuition-behind-the-sum-check-protocol-in-5-minutes/#comments The sum-check protocol allows a prover to convince a verifier that the sum of a multivariate polynomial is equal to some known value. It’s an interactive protocol used in a lot of zero-knowledge protocols. In this video, I assume that you know the Schwartz-Zippel lemma, and only give you intuitions about how the protocol works.

Intro to Kimchi @ 0xPARC CARML Weekend David Wong Fri, 11 Nov 2022 01:28:55 +0100 http://www.cryptologie.net/article/576/intro-to-kimchi-0xparc-carml-weekend/ http://www.cryptologie.net/article/576/intro-to-kimchi-0xparc-carml-weekend/#comments I was invited to talk at the 0xPARC workshop in SF last weekend. I talked about Kimchi and how it fits in the Mina ecosystem.

What's the deal with zkapps? David Wong Thu, 20 Oct 2022 18:28:39 +0200 http://www.cryptologie.net/article/575/whats-the-deal-with-zkapps/ http://www.cryptologie.net/article/575/whats-the-deal-with-zkapps/#comments Vitalik recently mentioned zkapps at ETHMexico. But what are these zkapps? By the end of this post you will know what they are, and how they are going to change the technology landscape as we know it.

Zkapps, or zero-knowledge applications, are the modern and secure solution we found to allow someone else to compute arbitrary programs, while allowing us to trust the result. And all of that thanks to a recently rediscovered cryptographic construction called general-purpose zero-knowledge proofs. With it, no need to trust the hardware to behave correctly, especially if you're not the one running it (cough cough intel SGX).

Today, we're seeing zero-knowledge proofs impacting cryptocurrencies (which as a whole have been a petri dish for cryptographic innovation), but tomorrow I argue that most applications (not just cryptocurrencies) will be directly or indirectly impacted by zero-knowledge technology.

Because I've spent so much time with cryptocurrencies in recent years, auditing blockchains like Zcash and Ethereum at NCC Group, and working on projects like Libra/Diem at Facebook, I'm mostly going to focus on what's happening in the blockchain world in this post. If you want a bigger introduction to all of these concepts, check my book Real-World Cryptography.

The origin of the story starts with the ancient search for solutions to the problem of verifiable computation: being able to verify that the result of a computation is correct. In other words, that whoever ran the program is not lying to us about the result.

Most of the solutions, until today, were based on hardware. Hardware chips were first invented to be "tamper resistant" and "hard to analyze". Chips capable of performing simple cryptographic operations like signing or encryption. You would typically find them in sim cards, TV boxes, and in credit cards. While all of these are being phased out, they are being replaced by equivalent chips called "secure enclaves" that can be found in your phone. On the enterprise side, more recently technologies were introduced to provide programmability. Chips capable of running arbitrary programs, while providing (signed) attestation that the programs were run correctly. These chips would typically be certified by some vendor (for example, Intel SGX) with some claim that it’s hard to tamper with them. Unfortunately for Intel and others, the security community has found a lot of interest in publishing attacks on their "secure" hardware, and we see new hacks coming up pretty much every year. It's a game of cat and mouse.

Cryptocurrency is just another field that's been dying to find a solution to this verifiable computation problem. The previous hardware solutions I’ve talked about can be found in oracles like town crier, in bridges like the Ethereum-Avalanche bridge, or even at the core of cryptocurrencies like MobileCoin.

Needless to say, I'm not a fan, but I'll be the first to conceive that in some scenarios you just don't have a choice. And being expensive enough for attackers to break is a legitimate solution. I like to be able to pay with my smartphone.

But in recent years, an old cryptographic primitive that can solve our verifiable computation problem for real has made a huge comeback. Yes you know which one I'm talking about: general-purpose zero-knowledge proofs (ZKPs).

With it, there is no need to trust the hardware: whoever runs the program can simply create a cryptographic proof to convince you that the result is correct.

ZKPs have been used to solve ALL kinds of problems in cryptocurrency:

  • "I wish we could process many more transactions" -> simply let someone else run a program that verifies all the transactions and outputs a small list of changes to be made to the blockchain (and a proof that the output is correct). This is what zk rollups do.
  • "I wish we could mask the sender, recipient, and the amount being transacted" -> just encrypt your transaction! And use a zero-knowledge proof to prove that what's encrypted is correct. This is what ZCash has done (and Monero, to some extent).
  • "It takes ages for me to download the whole Bitcoin ledger..." -> simply have someone else do it for you, and give you the resulting latest state (with a proof that it's correct). That's what Mina does.
  • "Everybody using cryptocurrency is just blindly trusting some public server (e.g. Infura) instead of running their own nodes" -> use ZKP to make light clients verifiable! This is what Celo does with Plumo.
  • "Why can't I easily transfer a token from one blockchain to another one?" -> use these verifiable light clients. This is what zkBridge proposes.

There's many more, but I want to focus on zkapps (remember?) in this post. Zkapps are a new way to implement smart contracts. Smart contracts were first pioneered by Ethereum, to allow user programs to run on the blockchain itself.

To explain smart contracts, I like the analogy of a single supercomputer floating in the sky above us. We're all using the same computer, the one floating in the sky. We all can install our programs on the floating computer, and everyone can execute functions of these programs (which might mutate the state of the program).

The solution found by Ethereum at the time was to implement the concept naively and without using cryptography:

  • Users can install a program by placing the program's code in a transaction.
  • Users can execute functions of a program by writing in a transaction the function they want to execute and with what arguments.
  • Everyone running a node has to run the functions found in users' transactions. All of them. In order to get the result (e.g. move X tokens to wallet Y, update the state of the smart contract, etc.)

The last point is the biggest limitation of Ethereum. We can't have the user provide the result of executing a function, or anyone else really, because we can't trust them. And so, not only does this mean that everyone is always re-executing the same stuff (which is redundant, and slows down the network), but it also means that everything in a smart contract must be public (as everyone must be able to run the function). There can be no secrets used. There can be no asynchronous calls or interactions outside of the network while this happens.

This is where zkapps enter the room. Zkapps allow users to run the programs themselves and give everyone else the result (along with a proof).

This not only solves the problem of having everyone re-execute the same smart contract calls constantly, but it also opens up new applications as computations can be non-deterministic: they can use randomness, they can use secrets, they can use asynchronous calls, etc.

More than that, the state of a zkapp can now mostly live off-chain, like real applications did before Ethereum. This reduces the size of the entire blockchain (today, Ethereum is almost 1 terabyte!). These applications are not limited by the speed of the blockchain, or by the capabilities of the language exposed by the blockchain, anymore.

Perhaps, it would be more correct to describe them as mini-blockchains of their own, that can be run as centralized or decentralized applications, similar to L2s or Cosmos zones.

OK. So far so good, but do these zkapps really exist or is it just talk? Well, not yet. But a few days ago, the Mina cryptocurrency released their implementations of zkapps on a testnet. And if the testnet goes well, there is no reason to believe this won't unlock a gigantic number of applications we haven't seen before on blockchains.

You can read the hello world tutorial and deploy your first zkapp in like 5 minutes (I kid you not). So I highly recommend trying it. This is the future :)

OCaml wishlist David Wong Thu, 22 Sep 2022 04:37:59 +0200 http://www.cryptologie.net/article/574/ocaml-wishlist/ http://www.cryptologie.net/article/574/ocaml-wishlist/#comments I've been writing (although mostly reading) OCaml on-and-off this last year. It's been quite a painful experience, even though I had some experience with functional languages already (erlang, which I really liked).

I find the language and the experience very close to C in many ways, while at the same time boasting a state of the art type system. It's weird. I think there's a real emphasis on the expressiveness, but little on the engineering. Perhaps this is due to the language not having enough traction in the industry.

So about this, I have two things I'd like to say. The first is that if you're looking for a somewhat low-level (there's a garbage collector) language you can make a real dent in, OCaml might be the one. It's pretty bare-bones: not that many libraries exist, and if they do they are barely usable due to a lack of documentation. My first contribution was a library to encode and decode hexadecimal strings, because I couldn't find one that I could use. That should tell you something.

My second contribution was a tool to build "by example" websites. I used it to make a website to learn OCaml by examples, and another one to learn Nix by example. How cool would it be if people started using it to build a number of "by examples" websites in the OCaml ecosystem :D?

Anyway, I digress, the second thing I wanted to say is: if you're working on OCaml (or want to contribute to a new language), here's my wishlist:

  • a tool like cargo to manage dependencies (& versions), start projects, run tests, etc. Two important things: it should use a real configuration language (e.g. toml, json, yml) and it should work in a convention over configuration model.
  • better integration with vscode. Every time I write or read OCaml I find myself missing rust-analyzer and its integration with vscode. I just want to be able to go to definitions, even in the presence of functors, and easily find the types of things (and see their implementations).
  • being able to run a single test. It is crazy to me that today, you still can't write an inline test and run it. It's the best way to debug or test something.
  • better compiler error messages. I think the lack of a tool like cargo, and this, are the biggest impediment to the language. See this issue for an example.
  • better defaults for ocamlformat. OCaml is hard to read; some of the reasons are hard to change, but the formatting can be fixed and it really needs some work.
  • a linter like clippy. It's 2022, every project should be able to run an OCaml linter in CI.
  • good documentation for stdlib and 3rd party libraries. Documentation is really subpar in OCaml.
  • a use keyword to import specific values in scope (as opposed to "opening" a whole module in scope)

PS: would someone actually be interested in working on any of these for a grant? There's a number of new-ish companies in the OCaml space that would probably pay for someone to solve these. I guess reach out to me on the contact page if you're interested.

noname developer update #4: showcasing method calls David Wong Mon, 19 Sep 2022 03:36:21 +0200 http://www.cryptologie.net/article/573/noname-developer-update-4-showcasing-method-calls/ http://www.cryptologie.net/article/573/noname-developer-update-4-showcasing-method-calls/#comments This is part 4 of a series on noname. See part 1, part 2, and part 3.

I just implemented method calls in noname, and I made a video showcasing it and checking if the implementation is correct with some simple examples.

Don't forget, if you want to play with it check it out here: https://github.com/mimoo/noname and if you have any questions or are running into something weird please leave a comment on the Github repo!

noname developer update #3: user-defined functions David Wong Sat, 17 Sep 2022 05:02:05 +0200 http://www.cryptologie.net/article/572/noname-developer-update-3-user-defined-functions/ http://www.cryptologie.net/article/572/noname-developer-update-3-user-defined-functions/#comments I guess this is part 3. With part 1 introducing noname, a programming language inspired by rust that makes use of zero-knowledge proofs to provide verifiable computation, part 2 going over a simple arithmetic example, and part 3 going over custom types.

In this update, I showcase a new feature: functions, and go through the debug compilation of an example program to see if the implementation is sound (that it constrains what it is supposed to constrain). In this video I do something more: I optimize the implementation of assert_eq so that you can see a bit of the compiler internals. I also end the video abruptly, thinking I found a bug in the implementation. If you were an attentive student, you would have figured out that there was no bug: doing things on constants does not create any gates.

You can play with the noname language here! And you can read the noname book as well.

noname developer update #2: structs are working! David Wong Fri, 16 Sep 2022 23:03:43 +0200 http://www.cryptologie.net/article/571/noname-developer-update-2-structs-are-working/ http://www.cryptologie.net/article/571/noname-developer-update-2-structs-are-working/#comments noname is the DSL I'm working on to write rust-like programs and prove their executions using zero-knowledge proofs.

The previous post used noname as an educational tool to explain how programs get compiled to gates and constraints.

In this post, I showcase a new feature: custom types (structs), and go through the debug compilation of an example program to see if the implementation is sound (that it constrains what it is supposed to constrain).

Don't forget, you can play with the noname language here! And you can read the noname book as well.

noname developer update #1: learning about zero-knowledge apps and circuits using the noname educational DSL David Wong Sun, 11 Sep 2022 07:03:39 +0200 http://www.cryptologie.net/article/570/noname-developer-update-1-learning-about-zero-knowledge-apps-and-circuits-using-the-noname-educational-dsl/ http://www.cryptologie.net/article/570/noname-developer-update-1-learning-about-zero-knowledge-apps-and-circuits-using-the-noname-educational-dsl/#comments I've talked about noname previously, my toy side project to create a DSL to write zkapps on top of kimchi.

I've recorded a video that showcases it, and walks through the circuit output by compiling one of the examples. It might not make too much sense if you don't know about kimchi or plonk. And so you might want to watch my series of videos on plonk beforehand.

you can play with the noname language here

Two And A Half Coins episode 4: More on Bitcoin: 51% attacks and Merkle trees! David Wong Thu, 08 Sep 2022 03:20:51 +0200 http://www.cryptologie.net/article/569/two-and-a-half-coins-episode-4-more-on-bitcoin-51-attacks-and-merkle-trees/ http://www.cryptologie.net/article/569/two-and-a-half-coins-episode-4-more-on-bitcoin-51-attacks-and-merkle-trees/#comments New episode on the podcast!

the noname language, a toy DSL for zkapps David Wong Sat, 03 Sep 2022 12:59:05 +0200 http://www.cryptologie.net/article/568/the-noname-language-a-toy-dsl-for-zkapps/ http://www.cryptologie.net/article/568/the-noname-language-a-toy-dsl-for-zkapps/#comments I've been spending a few weekends working on a DSL for writing zero-knowledge programs. It's been a fun project that's taught me a LOT about programming languages and their designs. I've always thought that it'd be boring to write a programming language, but my years of frustrations with languages X and Y, and my love for features Z and T of L, have made me really appreciate tweaking my own little language to my preferences. I can do whatever I want!

It's called "noname" because I really didn't know what name to give it, and I still haven't found a good one. You can play with it here: https://github.com/mimoo/noname, it is still quite bare-bones but I'm amazed at what it can already do :D

It is very close to Rust, with some ideas from Golang that I liked. For example, I do not implement too much inference because it decreases readability, I force the user to qualify each library call, I forbid shadowing within a function, etc.

Programs look like this:

use std::crypto;

fn main(pub public_input: Field, private_input: [Field; 2]) {
    let x = private_input[0] + private_input[1];
    assert_eq(x, 2);

    let digest = crypto::poseidon(private_input);
    assert_eq(digest[0], public_input);
}

and you can use the CLI to write your own programs, and generate proofs (and verify them). Underneath, the kimchi proof system is used.

But the really cool feature is how transparent it is for developers! If you run the following command with the --debug option for example:

$ cargo run -- --path data/for_loop.no --private-inputs '{"private_input": ["2", "3", "4"]}' --public-inputs '{"public_input": ["9"]}' --debug

you will get a nice output teaching you about the layout of the compiled circuits:

(figure: the layout of the compiled circuit)

as well as how the wiring is done:

(figure: the wiring of the compiled circuit)

I spent some time explaining on my twitter how to read this output: https://twitter.com/cryptodavidw/status/1566014420907462656

I think there's a lot of educational benefit from this approach. Perhaps I should make a video going over a few circuits :)

I'm launching a podcast: Two And A Half Coins David Wong Wed, 31 Aug 2022 03:04:09 +0200 http://www.cryptologie.net/article/567/im-launching-a-podcast-two-and-a-half-coins/ http://www.cryptologie.net/article/567/im-launching-a-podcast-two-and-a-half-coins/#comments It's been forever since I've been thinking of doing a podcast. Today I decided screw it, I'll do it live.

From the description:

This series focuses on teaching you about cryptocurrencies from the very start! We'll start with databases, banks, distributed systems, and we will then go further with Bitcoin, Ethereum, Mina, and other cryptocurrencies that are making the world advance today.

Listen to it here or on your favorite platform:

Or just listen to it here:

Cryptocurrencies from scratch: databases, the banking world, and distributed systems

This is the first episode of a series on cryptocurrencies. In this series I will tell you about cryptocurrencies from scratch. This first episode is quite non-technical and should be easy to understand for most folks. I will explain what databases are, how banks around the world move money around, and what cryptocurrencies fundamentally offer.

Bitcoin from scratch: The Bitcoin client, public keys and signatures, and the miners

In the previous episode we briefly talked about what cryptocurrencies fundamentally are, and what problems they attempt to solve. In this episode we'll dig into our very first cryptocurrency: Bitcoin. This will be a high-level overview that will cover what the software looks like, what cryptographic signatures are, and who are these miners who process transactions.

More on Bitcoin: How does mining work? What are hash functions? And what is Proof of Work?

In this third episode we'll dig more into the tech of Bitcoin. But for that, we'll need to talk briefly about cryptographic hash functions and how they work. Once done, we'll talk about mining, proof of work, chains of blocks, and mining pools!

JournalDuCoin: Cryptographie, VMs & circuits David Wong Sun, 28 Aug 2022 23:59:29 +0200 http://www.cryptologie.net/article/566/journalducoin-cryptographie-vms-circuits/ http://www.cryptologie.net/article/566/journalducoin-cryptographie-vms-circuits/#comments I was on JournalDuCoin and spent 2 hours talking with Sam and Louis (Starkware) about zero-knowledge, circuits, VMs, etc.

Beware, it's in French, and you can watch it here:

CFF22 trip report David Wong Mon, 25 Jul 2022 23:18:59 +0200 http://www.cryptologie.net/article/565/cff22-trip-report/ http://www.cryptologie.net/article/565/cff22-trip-report/#comments This Monday I was at the Crypto Finance Forum 2022 in Paris. It was my first time going to a crypto conference that revolves around traditional finance and regulations (aka the current banking system). It was a really interesting experience, and here's a list of things I thought were interesting (to me) during the event. Note that it was hard to aggregate things as panels were often going in all sorts of directions:

Dress code. Am I going to be the only one in shorts and flip flops? Is everyone going to be wearing suits? Thankfully, no! There were a number of underdressed people, in addition to the suits of course.

Government-approved. A minister showed up. That’s the closest I ever got to the French government :D

Open banking. Some talks about how open banking laws in Europe created a boom of innovation and a fintech sector, and similarities with cryptocurrencies. (me: Indeed, cryptocurrencies seem to allow services to directly interact with the financial backbone and for users to share data across services more easily). Employees of banks at the time were leaving for fintech companies, now we're seeing a shift: they're leaving for crypto companies.

Open Banking emerged from the EU PSD2 regulation, whose original intent was to introduce increased competition and innovation into the financial services sector. PSD2 forces banks to offer dedicated APIs for securely sharing their customers' financial data for account aggregation and payment initiation. bankinghub.eu

AML. A panelist mentioned that cryptocurrencies have better tooling for AML (anti money laundering) and customer protection. (me: I guess currently banks will do their best to collect data and perhaps report to government agencies if they think they should. Not ideal compared to cryptocurrencies like Bitcoin, but that's ignoring privacy coins like Zcash.)

MICA. Throughout the conference, MICA kept being mentioned. It's a European regulation to "protect investors and preserve financial stability by creating a regulatory framework for the cryptoassets market that allows for continued innovation and maintains the attractiveness of the crypto currencies sector which is evolving quickly". It apparently was proposed at a time when NFTs were not really that big, so it is explicitly excluding NFTs from its scope. It applies to the 27 states of the EU which is quite practical as service providers won't have to comply with many different laws to serve in many jurisdictions (although some panelists said that MICA would still throttle crypto innovation in Europe). Of course, the UK seems to be going a different way. Interestingly, while some of the US regulations were mentioned, nothing was said about Asian regulations (but I guess the audience did not care about that).

Stable coins. A euro stablecoin was mentioned, with projects like EUROC from Circle. People deplored the fact that a US company was at the source of this new stablecoin, citing it as one more example of the issues with Europe's slow adoption of yet another technology.

Traditional players. A panelist mentioned that some traditional players, like Paypal and Visa, were investing in their own stablecoin projects. Although banks are still in observation mode (me: talking to some bankers at the event, it seems like banks have tiny crypto teams that have been investigating the field for years, but nothing concrete has been approved. What is changing is that more and more clients are asking to invest into crypto, and so banks have to look into it.)

Not part of the discussion. Interestingly, the words "layer one" were never uttered during the whole event. There was one panel that had a regulation specialist working at NEAR, but that's it. It seemed like cryptocurrencies were seen as black boxes that regulations could not touch; only service providers can be regulated. It seems really weird to me that there's no talk of having cryptocurrencies meet in the middle with regulations, as there's a lot that a cryptocurrency can do at the protocol layer (perhaps because no regulations exist for currencies? As one of the panelists mentioned).

For example, if you try to off-ramp, and convert some crypto into fiat at your favorite exchange (called a CASP in MICA, and a VASP in FATF, of course they had to choose a different name), they will surely ask you where the funds came from. You might have got the crypto by selling a ticket for an event (apparently a lot of the tickets for EthCC were paid in USDC), and that sounds good enough, but what about the buyer? Who says that they didn't get their tokens selling weapons or something?

In the current banking system, it is assumed that every transaction goes through a bank and so things are recorded eventually (with the exception of cash and other hard-to-move-lots-of-money ways). But in crypto, that's not true anymore. So it seems to me that regulators just don't understand this, or are OK with it (which seems weird to me, considering how much they seem to want to know).

The interesting thing is that you can probably think of schemes that work and are privacy-enhanced by using zero-knowledge proofs. For example, a layer one could force every transaction to provide a zero-knowledge proof that some approved entity has recorded the transaction and its metadata. (Perhaps a signature from that entity is enough though.)

(Smart contracts were rarely talked about as well.)

Reconciliation. One of the big topics of the event was reconciliation between the current/traditional financial world and crypto. This is of course a very interesting topic to me, as I worked on the diem (libra) project which was heavily betting on that (before it got shut down). In crypto conferences I go to, I mostly see updates about projects, but I seldom hear about banks or payment networks and how they could switch to using crypto. Interesting remark from a panelist: it will be on crypto actors to train and learn about the financial world and its regulations, and on the regulators to try to adapt to cryptocurrencies.

Environmental impact. There was a lot of confusion and misunderstanding about the environmental impact of cryptocurrencies. I felt like some of the panelists sounded like Bitcoin maximalists, repeating many of the false narratives you can hear from the Bitcoin community (proof-of-stake is less secure and decentralized than proof of work, proof of work is special because it directly converts energy into money, with proof of work there is a real cost to mine, proof of work actually needs to be compared to the consumption of the current financial system (greenwashing?), etc.)

(me: there are many problems with proof of work: it's a race to the top, you're constantly incentivized to use more energy; it's slow; it is less secure (51% attacks have happened regularly whereas proof of stake hasn't had any attacks as far as I know); it hinders interoperability between blockchains or light clients; etc.)

"Impact investing" was the word du jour. It seems to refer to investments "made into companies, organizations, and funds with the intention to generate a measurable, beneficial social or environmental impact alongside a financial return” (wikipedia).

Keywords. "metaverse", "web3", "NFTs", "gaming", "supply-chain". People seem to want to appear in-the-loop. Even the minister mentioned web3. It's interesting as I would have expected bankers and others to be more skeptical about some of these and focus more on the fast, secure, and interoperable settlement capabilities.

The need for regulation in Europe. Interestingly, it was mentioned several times that even though Europe has a lot of good engineers, regulations have already slowed down crypto adoption and innovation. Some panelists argued about how much exactly, of the world's total assets, are in crypto. It sounded like it was negligible, and that no issues in crypto could trigger a global crisis, and so it was not the right time to regulate. A panelist even said that it was "questionable" to do so, as nobody has any idea what they are trying to regulate today. We are lucky to witness the burst of a new ecosystem and like every new ecosystem the beginning is pure chaos. We need to make sure that we have the chance to experiment and fail.

One of the panelists said that Terra (the stablecoin) failing now was good news, as it would be worse if a stablecoin had failed while being used at a massive scale in the world economy. (Although, building confidence takes years, and losing trust takes days.) Another panelist argued that Terra was centralized, and that is why it failed (me: that's not my understanding), and then mentioned tether resisting attacks (me: it seemed like people have confidence in tether, which is interesting). Panelists also mentioned DAI, saying that regulators had to understand these new types of algorithmic stablecoins (not backed by actual fiat money, but referencing the price of fiat). Someone argued that without liquid assets to back a stablecoin, you're facing a crisis.

Another interesting thing from an economist who didn't seem to like stablecoins or crypto in general, was that he liked crypto when used for ICOs. I'd really like to know why that specific use-case was interesting to him.

Some of the discussion focused on how all of the largest banks and payment networks in the world are American. "In Europe, we are always wondering what is the best way, in the US they're wondering if something works". (It seems like Europe is still debating if they need crypto or not.)

Stablecoins. The recent crashes around stablecoins have led to a lot of regulatory attention. The FSB (Financial stability board comprising many members) for example said this month:

Stablecoins should be captured by robust regulations and supervision of relevant authorities if they are to be adopted as a widely used means of payment or otherwise play an important role in the financial system. A stablecoin that enters the mainstream of the financial system and is widely used as a means of payments and/or store of value in multiple jurisdictions could pose significant risks to financial stability in the absence of adequate regulation. Such a stablecoin needs to be held to high regulatory and transparency standards, maintain at all times the reserves that preserve stability of value and meet relevant international standards. (https://www.fsb.org/2022/07/fsb-issues-statement-on-the-international-regulation-and-supervision-of-crypto-asset-activities/)

A panelist said that the ECB (European central bank) had failed its main goal: stability of prices, and that it should not be able to regulate currencies. There was then some question about legal tender, and if stablecoins could be considered legal tender at some point. Crypto was defined as a partial currency, not a full currency, because in most countries it is still not legal tender.

Legal tender is anything recognized by law as a means to settle a public or private debt or meet a financial obligation, including tax payments, contracts, and legal fines or damages. The national currency is legal tender in practically every country. A creditor is legally obligated to accept legal tender toward repayment of a debt. (...) Legal tender laws effectively prevent the use of anything other than the existing legal tender as money in the economy.  (https://www.investopedia.com/terms/l/legal-tender.asp)

Central banks and regulators are in the same boat here, and they're going to resist stablecoins and crypto. The role of a central bank is to preserve financial stability (the Fed was given as an example, as it was created right after a crisis). Blockchain as technology can be decoupled from blockchain as cryptocurrencies, and countries (central banks?) could adopt the technology.

There was some talk about MICA putting all stablecoins in the same basket, which was apparently annoying. Someone said that tokenizing a money market fund would make the best stablecoin (me: what's a money market fund?)

CBDCs. CBDCs are central bank digital currencies. They are governments' and their central banks' answer to private currencies (libra was mentioned several times as something that could have been THE private currency, a currency not directly backed by a government).

It seems like Banque de France (France's central bank) was already working on a wholesale CBDC, but not for retail use (me: retail always refers to normal people using the digital coin directly, like with metamask). It was mentioned that this week, Russia's central bank was launching a digital coin. China also is working on a CBDC (me: probably inspired by the prevalence of wechat taking over digital payment in China (you can't pay in cash anymore in China)). The upside, from China's perspective, is being able to track its citizens more and mix social score (their new way to tame their citizens) with money, and also having a way to make the yuan more present in the world's economy. The US, on the other hand, already has many stablecoins running on blockchains.

Panelists seemed worried that Europe could be left behind, once again. (Someone even mentioned Europe not having been bullish on the technology of the Internet fast enough.)

A lot of the discussion focused on how the USD was THE dominant currency in the world, probably because they were the first mover. 60% of central banks' reserves are in USD (2% for the yuan). Apparently, there always tends to be a single dominant currency internationally. China is trying to be the first mover in the CBDC space, and perhaps will take advantage of that to get a bigger slice of the world currency pie.

There were also some concerns about how CBDCs might want to replace crypto with their own solutions, which might be bad for privacy and used as a surveillance tool.

The debate then focused on the definition of a currency (a means of payment, a medium of exchange, a unit of account, a store of value, ...). One of the panelists argued that the euro was not a store of value anymore, and that the ECB had abandoned its mission to issue a store of value (and that people in general had stopped using currencies as stores of value).

Again, there were a lot of worries that the US was going to take too much of an advantage, and that Europe will end up using US services built on crypto (you can't talk to a pension fund in the US today that hasn't invested in crypto, apparently). Interesting phrasing from a panelist: "When you're thinking of investing on a thesis, you shouldn't wait for an antithesis to invest. If you wait for it to be a success then it's too late to invest".

Finally, someone mentioned that nobody talked about the future of cash. But that was at the end of a panel.

What are zkVMs? And what's the difference with a zkEVM? David Wong Thu, 21 Jul 2022 11:46:11 +0200 http://www.cryptologie.net/article/564/what-are-zkvms-and-whats-the-difference-with-a-zkevm/ http://www.cryptologie.net/article/564/what-are-zkvms-and-whats-the-difference-with-a-zkevm/#comments I've been talking about zero-knowledge proofs (the "zk" part) for a while now, so I'm going to skip that and assume that you already know what "zk" is about.

The first thing we need to define is: what's a virtual machine (VM)? In brief, it's a program that can run other programs, usually implemented as a loop that executes a number of given instructions (the other program). Simplified, a VM could look like this:

let mut stack = Stack::new();
loop {
    let inst = get_next_instruction();
    apply_instruction(inst, &mut stack);
}

The Ethereum VM is the VM that runs Ethereum smart contracts. The original list of instructions the EVM supports, and the behavior of these instructions, was specified in 2014 in the seminal yellow paper (that thing is literally yellow) by Gavin Wood. The paper seems to be continuously updated so it should be representative of the current instructions supported.

A zkVM is simply a VM implemented as a circuit for a zero-knowledge proof (zkp) system. So, instead of proving the execution of a program, as one would normally do in zkp systems, you prove the execution of a VM. As such, some people say that non-VM zkps are part of the FPGA approach, while zkVMs are part of the CPU approach.

Since programs (or circuits) in zkp systems are fixed forever (like a binary compiled from some program really), our VM circuit actually implements a fixed number of iterations of the loop (you can think of that as unrolling the loop). In our previous example, it would look like this if we wanted to support programs of 3 instructions tops:

let mut stack = Stack::new();
let inst = get_next_instruction();
apply_instruction(inst, &mut stack);
let inst = get_next_instruction();
apply_instruction(inst, &mut stack);
let inst = get_next_instruction();
apply_instruction(inst, &mut stack);

EDIT: Bobbin Threadbare pointed out to me that with STARKs, as there is no preprocessing (unlike with plonk), there's no strict limit on the number of iterations.

This is essentially what a zkVM is: a zk circuit that runs a VM. The actual program's instructions can be passed as public input to that circuit so that everyone can see what program is really being proven. (There are a number of other ways to pass the program's instructions to the VM if you're interested.)
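To make the "instructions as public input" idea a bit more concrete, here's a minimal OCaml sketch (not a real circuit, and the instruction set is made up for illustration) of a VM that supports programs of at most 3 instructions, where the program itself is just an argument:

type inst = Push of int | Add | Mul

(* one step of the VM: apply a single instruction to the stack *)
let apply_instruction stack inst =
  match (inst, stack) with
  | Push n, _ -> n :: stack
  | Add, a :: b :: rest -> (a + b) :: rest
  | Mul, a :: b :: rest -> (a * b) :: rest
  | _ -> failwith "invalid program"

(* the "circuit" only supports programs of up to 3 instructions;
   the program itself would be passed as a public input *)
let vm_3 (program : inst list) =
  assert (List.length program <= 3);
  List.fold_left apply_instruction [] program

In a real zkVM, each of those (up to 3) steps would be encoded as constraints, and the instructions would be exposed as public inputs so the verifier knows which program was proven.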

There's a number of zkVMs out there, I know of at least three important projects:

  • Cairo. This is used by Starknet and I highly recommend reading the paper which is a work of art! We also have an experimental implementation in kimchi we call turshi.
  • Miden. This is a work-in-progress project from Polygon.
  • Risczero. This is a work-in-progress project that aims at supporting the set of RISC-V instructions (a popular standard outside of the blockchain world).

All of them support a different instruction set, so they are not compatible.

A zkEVM, on the other hand, aims at supporting the Ethereum VM. It seems like there is a lot of debate going on about the actual meaning of "supporting the EVM", as three zkEVMs were announced this week:

So one can perhaps divide the field of zk VMs into two types:

  • zk-optimized VMs. Think Cairo or Miden. These types of VMs tend to be much faster as they are designed specifically to make it easier for zkp systems to support them.
  • real-world VMs. Think RiscZero supporting the RISC-V instruction set, or the different zkEVMs supporting the Ethereum VM.

And finally, why would anyone try to support the EVM? If I had to guess, I would come up with two reasons: if you're Ethereum, it could allow you to create proofs of the entire state transition from genesis to the latest state of Ethereum (which is what Mina does), and if you're not Ethereum, it allows your project to quickly copy/paste projects from the Ethereum ecosystem (e.g. uniswap) and steal developers from Ethereum.

Panel: how ZKPs and other advanced cryptographic schemes can be deployed to solve hard problems in decentralized systems David Wong Thu, 21 Jul 2022 10:33:50 +0200 http://www.cryptologie.net/article/563/panel-how-zkps-and-other-advanced-cryptographic-schemes-can-be-deployed-to-solve-hard-problems-in-decentralized-systems/ http://www.cryptologie.net/article/563/panel-how-zkps-and-other-advanced-cryptographic-schemes-can-be-deployed-to-solve-hard-problems-in-decentralized-systems/#comments I spoke at a panel at Privacy Evolution with Aleo and Anoma, which coincidentally was my first time ever speaking at a panel :D this was actually a lot of fun and I'm thinking that I should do this again. The topic was "how ZKPs and other advanced cryptographic schemes can be deployed to solve hard problems in decentralized systems"

You can watch the whole thing, or skip directly to the panel I was in at 4:10

Privacy Evolution with Aleo and Anoma from Alex Miles on Vimeo.

Smart contracts with private keys David Wong Wed, 06 Jul 2022 21:24:30 +0200 http://www.cryptologie.net/article/562/smart-contracts-with-private-keys/ http://www.cryptologie.net/article/562/smart-contracts-with-private-keys/#comments In most blockchains, smart contracts cannot hold a private key. The reason is that everyone on the network needs to be able to run the logic of any smart contract (to update the state when they see a transaction). This means that, for example, you cannot implement a smart contract on Ethereum that will magically sign a transaction for Bitcoin if you execute one of its functions.

In zero-knowledge smart contracts, like zkapps, the situation is a bit different in that someone can hold a smart contract's private key and run the logic associated to the private key locally (for example, signing a Bitcoin transaction). Thanks to the zero-knowledge proof (ZKP), anyone can attest that the private key was used in a correct way according to the contract, and everyone can update the state without knowing about the private key.

Only that one person can use the key, but you can encode your contract logic so that they can respond to requests from other users. For example:

-- some user calls smart_contract.request(): "Hey, can you sign this Bitcoin transaction?"
-- the key holder calls smart_contract.response(): "Here's the signature"

The elephant in the room is that you need to trust the key holder not to leak the key or use it themselves (for example, to steal all the Bitcoin funds it protects).

The first step to a solution is to use cryptographic protocols called multi-party computations (MPCs) (see chapter 15 of Real-World Cryptography). MPCs allow you to split a private key between many participants, thereby decentralizing the usage of the private key. Thanks to protocols like decentralized key generation (DKG), the private key is never fully present anywhere, and as long as enough of the participants act honestly (and don't collude) the protocol is fully secure. This is what Axelar, for example, implements to allow different blockchains to interoperate.

This solution is limited, in that it requires a different protocol for every different usage you might have. Signing is one thing, but secret-key cryptography is about decrypting messages, encrypting to channels, generating random numbers, and much more! A potential solution here is to mix MPC with zero-knowledge proofs. This way, you can essentially run any program in a correct way (the ZKP part) where different parts of the program might come from different people (the MPC part).

A recent paper (2021) presented a solution to do just that: Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets. As far as I know, nobody has implemented such a concept onchain, but I predict that this will be one of the next big things to unlock for programmable blockchains in general.
