# User authentication with passwords, What's SRP?

*David Wong, Sat, 23 May 2020 20:56:33 +0200, http://www.cryptologie.net/article/503/user-authentication-with-passwords-whats-srp/*

Specifically, SRP is an **asymmetric or augmented PAKE**: it's a key exchange where only one side is authenticated thanks to a password. This is usually useful for **user authentication protocols**. Theoretically any client-server protocol that relies on passwords (like SSH) could be doing it, but instead such protocols often have the password directly sent to the server (hopefully over a secure connection). As such, asymmetric PAKEs offer an interesting way to augment user authentication protocols so that the server never learns the user's password.

Note that the other type of PAKE is called a symmetric or balanced PAKE. In a symmetric PAKE two sides are authenticated thanks to the same password. This is usually useful in **user-aided authentication protocols** where a user attempts to pair two physical devices together, for example a mobile phone or laptop to a WiFi router. (Note that the recent WiFi protocol WPA3 uses the DragonFly symmetric PAKE for this.)

In this blog post I will answer the following questions:

* What is SRP?
* How does SRP work?
* Should I use SRP today?

## What is SRP?

The [Stanford SRP homepage](http://srp.stanford.edu/) puts it in these words:

> The Secure Remote Password protocol performs secure remote authentication of short human-memorizable passwords and resists both passive and active network attacks. Because SRP offers this unique combination of password security, user convenience, and freedom from restrictive licenses, it is the most widely standardized protocol of its type, and as a result is being used by organizations both large and small, commercial and open-source, to secure nearly every type of human-authenticated network traffic on a variety of computing platforms.

and goes on to say:

> The SRP ciphersuites have become established as the solution for secure mutual password authentication in SSL/TLS, solving the common problem of establishing a secure communications session based on a human-memorized password in a way that is crytographically sound, standardized, peer-reviewed, and has multiple interoperating implementations. As with any crypto primitive, it is almost always better to reuse an existing well-tested package than to start from scratch.

But the Stanford SRP homepage seems to date from the late 90s.

SRP was standardized for the first time in 2000 in [RFC 2944 - Telnet Authentication: SRP](https://tools.ietf.org/html/rfc2944).
Nowadays, most people refer to SRP as the implementation used in TLS. This one was specified in 2007 in [RFC 5054 - Using the Secure Remote Password (SRP) Protocol for TLS Authentication](https://tools.ietf.org/html/rfc5054).

## How does SRP work?

The Stanford SRP homepage lists [4 different versions of SRP](http://srp.stanford.edu/design.html), the last one being SRP 6. I'm not sure what happened to versions 4 and 5, but version 6 is the version that is standardized [and implemented](https://github.com/google/boringssl/blob/master/include/openssl/tls1.h#L516) in TLS. There is also a revision, SRP 6a, but I'm not sure if it's in use anywhere today.

To register, **Alice** sends her identity, a random $salt$, and a salted hash $x$ of her password.
Right from the start, you can see that a regular hash function is used (instead of a password hashing function like Argon2), and thus anyone who sees this message can efficiently brute-force the hashed password. Not great. The user-specific salt does, however, prevent precomputed attacks that would impact all users at once.
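To make this concrete, here is a sketch of what an eavesdropper on the registration can do (made-up salt and dictionary; SHA-256 standing in for the protocol's actual hash function):

```python
import hashlib

# Sketch of why a *fast* hash hurts here: an eavesdropper can grind a
# dictionary of candidate passwords at full speed.
def H(salt: bytes, password: str) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()

salt = b"alice-public-salt"
observed = H(salt, "hunter2")  # what the eavesdropper sees at registration

# With a fast hash, millions of guesses per second are feasible:
dictionary = ["123456", "password", "letmein", "hunter2"]
recovered = next(p for p in dictionary if H(salt, p) == observed)
assert recovered == "hunter2"
```

A password hashing function like Argon2 would make each guess expensive; the per-user salt only prevents the same precomputed table from breaking everyone at once.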

The server can then register Alice by exponentiating a generator of a pre-determined group (the integers modulo a large prime, where both addition and multiplication are available) to the hashed password, obtaining the so-called verifier $v = g^x$. This is an important step, as you will see that anyone with knowledge of $x$ can impersonate Alice.
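A minimal sketch of the registration math (toy assumptions: $N = 2^{255} - 19$ stands in for the large RFC 5054 group prime, SHA-256 for the protocol hash, and encoding details are ignored):

```python
import hashlib
import secrets

# Toy parameters for illustration only; real SRP uses the RFC 5054 groups.
N = 2**255 - 19
g = 2

def H(*args) -> int:
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

password = "hunter2"
salt = secrets.randbits(64)
x = H(salt, password)   # the salted (fast) hash Alice sends
v = pow(g, x, N)        # the verifier the server stores for Alice
```

Note that anyone who learns $x$ can recompute $v$ and impersonate Alice, which is why intercepting the registration is so damaging.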

What follows is the login protocol:

You can now see why this is called a password authenticated **key exchange**; the login flow includes the standard ephemeral key exchange with a twist: the server's public key $B'$ is blinded, or hidden, with $v$, a value derived from Alice's password. (Note that $k$ is a constant fixed by the protocol, so we will just ignore it.)

Alice can only **unblind the server's ephemeral key** by deriving $v$ herself. To do this, she needs the $salt$ she registered with (which is why the server sends it back to Alice as part of the flow).
For Alice, the SRP login flow goes like this:

* Alice re-computes $x = H(salt, password)$ using her password and the salt received from the server.
* Alice unblinds the server’s ephemeral key by doing $B=B’- kg^x = g^b$
* Alice then computes the shared secret $S$ by multiplying the results of two key exchanges:
    * $B^a$, the ephemeral key exchange
    * $B^{ux}$, a key exchange between the server's public key and a value combining the hashed password and the two ephemeral public keys

Interestingly, the second key exchange makes sure that the hashed password and the transcript get involved in the computation of the shared secret. Strangely though, only the public keys, and not the full transcript, are used.

The server can then compute the shared secret $S$ as well, using the multiplication of the same two key exchanges:

* $A^b$, the ephemeral key exchange
* $v^{ub}$, the other key exchange involving the hashed password and the two ephemeral public keys

The final step is for both sides to hash the shared secret and use it as the session key $K = H(S)$.
Key confirmation can then happen after both sides make successful use of this session key. (Without key confirmation, you’re not sure if the other side managed to perform the PAKE.)
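The whole login flow above can be sketched end-to-end. This is a runnable toy (the same assumptions as before: $N = 2^{255} - 19$ standing in for the RFC 5054 prime, SHA-256 for the protocol hash, padding and encoding ignored):

```python
import hashlib
import secrets

# Toy SRP-6 login, both sides. NOT the real parameters.
N = 2**255 - 19
g = 2
k = 3  # fixed constant in SRP-6 (SRP-6a derives k = H(N, g) instead)

def H(*args) -> int:
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Registration material: the server stores (salt, v)
password = "hunter2"
salt = secrets.randbits(64)
x = H(salt, password)
v = pow(g, x, N)

# Login: ephemeral keys, the server's blinded by v
a = secrets.randbelow(N); A = pow(g, a, N)                 # Alice -> server
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N   # server -> Alice
u = H(A, B)                                                # ties in both public keys

# Alice: recompute x from the received salt, unblind B, multiply the
# two key exchanges
x_alice = H(salt, password)
g_b = (B - k * pow(g, x_alice, N)) % N      # B' - k*g^x = g^b
S_alice = pow(g_b, a + u * x_alice, N)      # (g^b)^a * (g^b)^(u*x)

# Server: same secret from its side
S_server = (pow(A, b, N) * pow(v, u * b, N)) % N   # A^b * v^(u*b)

assert S_alice == S_server
K = H(S_alice)  # the session key
```

Both sides end up with $g^{b(a + ux)}$, which is why the two computations agree.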

## Should I use SRP today?

The SRP scheme is a much better way to handle user passwords, but it has a number of flaws that make the PAKE protocol less than ideal. For example, someone who intercepts the registration process can then easily impersonate Alice as the password is never directly used in the protocol, but instead the salted hash of the password which is communicated during the registration process.

This was noticed by multiple security researchers over the years. Matthew Green wrote [Should you use SRP?](https://blog.cryptographyengineering.com/should-you-use-srp/) in 2018, in which he says:

> Lest you think these positive results are all by design, I would note that there are [five prior versions] of the SRP protocol, each of which contains vulnerabilities. So the current status seems to have arrived through a process of attrition, more than design.

After noting that the combination of multiplication and addition makes it impossible to implement in elliptic curve groups, Matthew Green concludes with:

> In summary, SRP is just weird. It was created in 1998 and bears all the marks of a protocol invented in the prehistoric days of crypto. It’s been repeatedly broken in various ways, though the most recent [v6] revision doesn’t seem obviously busted — as long as you implement it carefully and use the right parameters. It has no security proof worth a damn, though some will say this doesn’t matter (I disagree with them.)

Furthermore, SRP is not available in the latest version of TLS (TLS 1.3).

Since then, many schemes have been proposed, and some even standardized and productionized (for example, [PAK](https://tools.ietf.org/html/rfc5683) was standardized by Google in 2010).
The [IETF 104, March 2019 - Overview of existing PAKEs and PAKE selection criteria](https://www.ietf.org/proceedings/104/slides/slides-104-cfrg-pake-selection-01.pdf) has a list:

In the summer of 2019, the **Crypto Forum Research Group (CFRG)** of the IETF started a [PAKE selection process](https://github.com/cfrg/pake-selection), with the goal of picking one algorithm to standardize for each category of PAKE (symmetric/balanced and asymmetric/augmented):

Two months ago (March 20th, 2020) the CFRG announced the end of the PAKE selection process, selecting:

* [CPace](https://eprint.iacr.org/2018/286) as the symmetric/balanced PAKE (from Björn Haase and Benoît Labrique)
* [OPAQUE](https://eprint.iacr.org/2018/163.pdf) as the asymmetric/augmented PAKE (from Stanislaw Jarecki, Hugo Krawczyk, and Jiayu Xu)

Thus, my recommendation is simple, today you should use **OPAQUE**!

# Alternatives to PGP

*David Wong, Sun, 10 May 2020 06:12:28 +0200, http://www.cryptologie.net/article/502/alternatives-to-pgp/*
As a recap of what's bad with PGP:

* No authenticated encryption. This is my biggest issue with PGP personally.
* Receiving a signed message means nothing about who sent it to you (see picture below).
* Usability issues with GnuPG (the main implementation).
* Discoverability of public keys issue.
* No forward secrecy.
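To make the first point concrete, here is a tiny sketch (a made-up XOR stream cipher, nothing to do with PGP's actual algorithms) of how ciphertext that isn't authenticated can be tampered with, bit by bit, without knowing the key:

```python
import hashlib
import secrets

# Toy stream cipher: keystream = SHAKE-256 of the key. NOT a real cipher;
# the point is that XOR-style encryption without a MAC is malleable.
def xor_crypt(key: bytes, data: bytes) -> bytes:
    keystream = hashlib.shake_256(key).digest(len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

key = secrets.token_bytes(32)
ciphertext = xor_crypt(key, b"pay alice 100 dollars")

# An attacker who guesses the plaintext layout flips bits through the
# ciphertext (the amount "100" sits at bytes 10..12):
delta = bytes(a ^ b for a, b in zip(b"100", b"999"))
tampered = (ciphertext[:10]
            + bytes(c ^ d for c, d in zip(ciphertext[10:13], delta))
            + ciphertext[13:])

assert xor_crypt(key, tampered) == b"pay alice 999 dollars"
```

Authenticated encryption (an AEAD like AES-GCM or ChaCha20-Poly1305) makes any such modification detectable at decryption time.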

For more, see my post on [a history of end-to-end encryption and the death of PGP](https://www.cryptologie.net/article/487/a-history-of-end-to-end-encryption-and-the-death-of-pgp/).

(excerpt from the book [Real World Cryptography](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09))

The latter two I don't care that much about. Integration with email is doomed from my point of view, and there's just no way to have forward secrecy if we want a near-stateless system.

> Email is insecure. Even with PGP, it’s default-plaintext, which means that even if you do everything right, some totally reasonable person you mail, doing totally reasonable things, will invariably CC the quoted plaintext of your encrypted message to someone else (we don’t know a PGP email user who hasn’t seen this happen). PGP email is forward-insecure. Email metadata, including the subject (which is literally message content), are always plaintext. ([Thomas Ptacek](https://latacora.micro.blog/2019/07/16/the-pgp-problem.html))

OK so what can I advise to my readers? What are the alternatives out there?

For **file signing**, Frank Denis wrote [minisign](https://jedisct1.github.io/minisign/) which looks great.

For **file encryption**, I wrote [eureka](https://github.com/mimoo/eureka) which does the job.
There's also [magic wormhole](https://github.com/warner/magic-wormhole), which is often mentioned and does some really interesting cryptography, but it does not seem to address a real use-case (in my opinion) for the following reason: it's synchronous. We already have a multitude of asynchronous ways to transfer files nowadays (Dropbox, Google Drive, email, messaging, etc.), so the problem is not there. Actually, there's really no problem... we just all need to agree on one way of encrypting a file, and eureka does just that in a hundred lines of code.

(There is a use-case for synchronous file transfer though, and that's when we're nearby. Apple's AirDrop is for that.)

For **one-time authenticated messaging** (some people call that signcryption), which is pretty much the whole use-case of PGP, there seems to be only one contender so far: [saltpack](https://saltpack.org/). The format looks pretty great and seems to address all the issues PGP had (except for forward secrecy, but again I don't consider this a deal breaker). It seems to only have two serious implementations: [keybase](https://keybase.io/) and [keys.pub](https://keys.pub/). Keybase is a bit more involved, while keys.pub is dead simple and really well put together.
Note that [age](https://github.com/FiloSottile/age) and [rage](https://github.com/str4d/rage) (which are excellent engineering work) seem to try to address this use-case. Unfortunately, they do not provide signing, as [Adam Caudill pointed out](https://github.com/FiloSottile/age/issues/51). Let's keep a close eye on these tools though, as they might evolve in the right direction.
To obtain public keys, the web of trust (signing other people's keys) hasn't been proven to really scale. Instead, we are now in a different key distribution model, where people broadcast their public keys on various social networks in order to tie their identity to a specific public key. I don't think there's a name for it... but I like to call it broadcast of trust.

For **encrypted communications**, [Signal](https://signal.org/) has clearly succeeded as a proprietary solution, but everyone can benefit from it by using other messaging apps like [WhatsApp](https://www.whatsapp.com/) and [Wire](https://wire.com/en/), or even federated protocols like [Matrix](https://matrix.org/). Matrix's main implementation seems to be [Riot](https://about.riot.im/), which I've been using and really digging so far. It also looks like [the French government agrees with me](https://lwn.net/Articles/779331/).
Same thing here: the web of trust doesn't seem to work, and what does seem to work is relying on centralized key distribution servers and TOFU-but-verify (trust the first public key you see, but check the fingerprint out-of-band later).

# Hardware Solutions To Highly-Adversarial Environments Part 3: Trusted Execution Environment (TEE), SGX, TrustZone and Hardware Security Tokens

*David Wong, Mon, 20 Apr 2020 04:24:32 +0200, http://www.cryptologie.net/article/501/hardware-solutions-to-highly-adversarial-environments-part-3-trusted-execution-environment-tee-sgx-trustzone-and-hardware-security-tokens/*

I’ve written about smart cards and secure elements in [part 1](https://www.cryptologie.net/article/499/hardware-solutions-to-highly-adversarial-environments-part-1-whitebox-crypto-vs-smart-cards-vs-secure-elements-vs-host-card-emulation-hce/) and about HSMs and TPMs in [part 2](https://www.cryptologie.net/article/500/hardware-solutions-to-highly-adversarial-environments-part-2-hsm-vs-tpm-vs-secure-enclave/).

## Trusted Execution Environment (TEE)

So far, all of the hardware solutions we've talked about have been **standalone** secure hardware solutions (with the exception of smart cards, which can be seen as tiny computers).
Secure elements, HSMs, and TPMs can all be seen as an additional computer.

Let’s now talk about **integrated** secure hardware!

A **Trusted Execution Environment (TEE)** is a concept that extends the instruction set of a processor to allow programs to run in a separate secure environment. The separation between this secure environment and the one we are used to dealing with (often called the "rich" execution environment) is enforced by hardware. So what ends up happening is that modern CPUs run both a normal OS and a secure OS simultaneously. Each has its own set of registers, but they share most of the rest of the CPU architecture (and, of course, the rest of the system). By using clever CPU-enforced logic, data from the secure world cannot be accessed from the normal world.
Because a TEE is implemented directly on the main processor, not only is it faster and cheaper than a TPM or secure element, it also comes for free with a lot of modern CPUs.

Like all the other hardware solutions, the TEE is a concept that was developed independently by different vendors, with a standard (by GlobalPlatform) trying to play catch-up.
The most known TEEs are Intel’s **Software Guard Extensions (SGX)** and ARM’s **TrustZone**. But there are many more like AMD PSP, RISC-V MultiZone and IBM Secure Service Container.

By design, since a TEE runs on the main CPU and can run any code given to it (in a separate environment called an "enclave"), it offers more functionality than secure elements, HSMs, and TPMs (and TPM-like chips).
For this reason, TEEs are used in a wider range of applications. We see them being used in clouds when clients [don't trust servers with their own data](https://signal.org/blog/private-contact-discovery/), in multi-party computation (see [CCF](https://github.com/microsoft/CCF)), and to run [smart contracts](https://research.nccgroup.com/2020/03/24/smart-contracts-inside-sgx-enclaves-common-security-bug-patterns/).

A TEE's goal is first and foremost to thwart **software attacks**. While the claimed software security seems really attractive, it is in practice hard to segregate execution on a shared chip, as the many software attacks against SGX can attest:

* 2017 - [Software Grand Exposure](https://www.usenix.org/system/files/conference/woot17/woot17-paper-brasser.pdf)
* 2018 - [SGXSpectre](https://arxiv.org/pdf/1802.09085.pdf)
* 2019 - [RIDL](https://mdsattacks.com/)
* 2019 - [Plundervolt](https://plundervolt.com/) and [V0LTpwn](https://arxiv.org/abs/1912.04870)
* 2020 - [LVI](https://lviattack.eu/)

TrustZone is not much better: Quarkslab has [a list of papers](https://blog.quarkslab.com/introduction-to-trusted-execution-environment-arms-trustzone.html) successfully attacking it as well.

(picture taken from [Certification of the Trusted Execution Environment – one step ahead for secure mobile devices](https://www.commoncriteriaportal.org/iccc/ICCC_arc/presentations/T2_D1_3_30pm_Lavatelli_Cert_of_the_Trusted_Exec_Env.pdf))

In theory, a TPM can be re-implemented in software via a TEE alone ([which was done by Microsoft](https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/raj)), but one must be careful: again, a TEE as a concept provides no resistance against hardware attacks, besides the fact that things at this microscopic level are way too tiny and tightly packaged together to analyze without expensive equipment. And by default, a TEE does not come with secure internal storage (you need a fused key that can't be read out to encrypt what you want to store), a hardware random number generator, or other desirable hardware features. Every manufacturer has different offers, with different levels of physical security and tamper resistance, when it comes to chips that support a TEE.

## Hardware Security Tokens

Finally, hardware security tokens are keys that you can usually plug into your machine and that can perform some cryptographic operations. For example, YubiKeys are small dongles that you can plug into the USB port of a laptop, and that will perform some cryptographic operations when you touch their yellow ring.

The word "token" in hardware security token comes from the fact that using it produces a "token" per authentication request, instead of sending the same credentials over and over again.

YubiKeys started as a way to provide a second factor of authentication, usually in addition to a password, which an attacker can't exploit in a phishing attack. The idea is that if an attacker calls your grandmother and asks her to spell out the YubiKey's output, she won't be able to: there is no output. Furthermore, modern YubiKeys implement the FIDO2 protocol, which will not produce a correct response unless you are on the right webpage (if we are talking about usage on the web). The reason is that the protocol signs metadata linked to what's in the URL bar of your browser.
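The origin-binding idea can be sketched as follows (purely illustrative: the names are made up, and real FIDO2 signs an authenticator-data structure with an asymmetric credential key rather than an HMAC):

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of FIDO2-style origin binding. The browser, not the
# (possibly phishing) page, supplies the origin that gets signed over.
device_secret = secrets.token_bytes(32)

def authenticator_response(origin: str, challenge: bytes) -> bytes:
    # The response covers the origin as seen by the BROWSER,
    # not whatever the phishing page claims to be.
    msg = hashlib.sha256(origin.encode() + challenge).digest()
    return hmac.new(device_secret, msg, "sha256").digest()

challenge = secrets.token_bytes(16)
legit = authenticator_response("https://example.com", challenge)
phish = authenticator_response("https://examp1e.com", challenge)
assert legit != phish  # a response phished from the wrong origin won't verify
```

Since the relying party verifies the response against its own origin, a response produced on a look-alike domain is useless to the attacker.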

More recently laptops and mobile devices have started offering other ways to provide the same value as a hardware security token via their own secure module. For example Apple provides a biometric-protected (Touch ID or Face ID) authenticator via the secure enclave.

It's not clear how much protection against hardware attacks your typical hardware security token has to implement, since the compromise of one is not enough to authenticate as a user in most cases (unless you use one as a single factor of authentication). Yet YubiKeys are known to have secure elements inside. Still, this doesn't exclude software attacks if they are badly programmed.
For example, in 2013 a [low-cost and non-intrusive side-channel attack managed to extract keys from a YubiKey](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.642.5552&rep=rep1&type=pdf).

Cryptocurrencies have similar dongles that will sign transactions for a user, but the threat model is different: they will usually have to authenticate the user in some way and provide tamper resistance. Here is a picture of a [Ledger Nano](https://www.ledger.com/).

As with any hardware solution, attacks have been found here as well (for example, [one on the Trezor](https://blog.trezor.io/our-response-to-ledgers-mitbitcoinexpo-findings-194f1b0a97d4)).

## Conclusion

As a summary, this 3-part blog series surveys different techniques that exist to deal with **physical attacks**:

* **Smart cards** are microcomputers that need to be turned on by an external device like a payment terminal. They can run arbitrary Java applications. Bank cards are an example of smart cards.
* **Secure elements** are a generalization of smart cards, which rely on a set of Global Platform standards. SIM cards are an example of secure elements.
* **TPMs** are re-packaged secure elements plugged on personal and enterprise computers' motherboards. They follow a standardized API (from the Trusted Computing Group) that is used in a multitude of ways, from measured/secure boot with FDE to remote attestation.
* **HSMs** can be seen as big, external secure elements for servers. They're faster and more flexible. They are mostly seen in data centers, storing keys.
* **TEEs** like TrustZone and SGX can be thought of as secure elements implemented within the CPU. They are faster and cheaper, but mostly provide resistance against software attacks unless augmented to be tamper-resistant. Most modern CPUs ship with TEEs, with various levels of defense against hardware attacks.
* **Hardware security tokens** are dongles, like YubiKeys, that often repackage secure elements to provide a second factor by implementing some authentication protocol (usually TOTP or FIDO2).
* There are many more that I haven't talked about. In reality, vendors can do whatever they want, and we've seen a lot of TPM-like chips: Apple has the secure enclave, Google has Titan, Microsoft has Pluton, and Atmel, for example, sells "crypto elements".

Keep in mind that no hardware solution is a panacea: you're only increasing the cost of an attack. Against a sophisticated attacker, all of that is pretty much useless. For this reason, design your system so that one compromised device doesn't imply that all devices are compromised. Even against ordinary adversaries, compromising the main operating system often means that you can make arbitrary calls to the secure element. Design your protocol so that the secure element doesn't have to trust the caller, by verifying queries, relying on an external trusted party, being self-contained, etc. And after all of that, you still have to worry about side-channel attacks :)

PS: thanks to Gabe Pike for the many discussions around TEE!
# Hardware Solutions To Highly-Adversarial Environments Part 2: HSM vs TPM vs Secure Enclave

*David Wong, Sun, 05 Apr 2020 22:38:22 +0200, http://www.cryptologie.net/article/500/hardware-solutions-to-highly-adversarial-environments-part-2-hsm-vs-tpm-vs-secure-enclave/*

As a recap of part 1:
* The threat today is not just an attacker intercepting messages over the wire, but an attacker stealing or tampering with the device that runs your cryptography. So-called Internet of Things (IoT) devices often run into this type of threat, and are by default unprotected against sophisticated attackers.
* **Hardware can help protect cryptographic applications in highly-adversarial environments**. One of the ideas is to provide a device with a tamper-resistant chip to store keys and perform crypto operations. That way, if the device falls into the hands of an attacker, extracting keys or modifying the behavior of the chip will be hard. But hardware-protected crypto is not a panacea; it is merely **defense-in-depth**, effectively slowing down an attack and **increasing its cost**.
* **Smart cards** were one of the first such secure microcontrollers: they can be used as a microcomputer to store secrets and perform cryptographic operations with them. They are supposed to use a number of techniques to discourage physical attackers.
* The concept of a smart card was generalized as the **secure element**, a term employed differently in different domains, but which boils down to a smart card that can be used as a coprocessor in a greater system that already has a main processor.
* With Google having trouble dealing with the telecoms to host credit card information on SIM cards (which are secure elements), the concept of the **secure element in the cloud** was born. In the payment space this is called **host card emulation (HCE)**. It works simply by storing the credit card information (which is a 3DES symmetric key shared with the bank) in a secure element in the cloud, and only giving a **single-use token** to the user: if the phone is compromised, the attacker can only use it to pay once.

All good?

In this part 2 of our blog series you will learn about more hardware that supports cryptographic operations! These are all secure elements in concept, and are all doing sort of the same things but in different contexts. Let’s get started!

## Hardware Security Module (HSM)

If you understood what a secure element is, well, a **hardware security module (HSM)** is pretty much a **bigger secure element**.
Not only does the form factor of secure elements require specific ports; they are also slow and low on memory. (Note that being low on memory is sometimes OK, as you can encrypt keys with a secure element's master key and then store the encrypted keys outside of the secure element.)
So an HSM is a more portable, more efficient, more multi-purpose secure element. Like some secure elements, some HSMs can run arbitrary code as well.

HSMs are also subject to their own set of standards and security levels. One of the most widely accepted standards is [FIPS 140-2: Security Requirements for Cryptographic Modules](https://csrc.nist.gov/publications/detail/fips/140/2/final), which defines security levels from 1 to 4, where level 1 HSMs do not provide any protection against physical attacks and level 4 HSMs will wipe their whole memory if they detect any intrusion!

Typically, you find an HSM as an external device, with its own shelf on a rack (see the picture of a Luna HSM below), plugged into an enterprise server in a data center.

(To go full circle, some of these HSMs can be administered using smart cards.)

Sometimes you can also find an HSM as a PCIe card plugged into a server’s motherboard, like the IBM Crypto Express in the picture below.

Or even as small dongles that you can plug via USB (if you don’t care about performance), see the picture of a YubiHSM below.

HSMs are heavily used in some industries. Every time you enter your PIN in an ATM or a payment terminal, the PIN ends up being verified by an HSM somewhere. Whenever you connect to a website via HTTPS, the root of trust comes from a Certificate Authority (CA) that [stores its private key in an HSM](https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.6.9.pdf#page=48), and the TLS connection is possibly terminated by an HSM. You have an Android phone or an iPhone? Chances are [Google](https://security.googleblog.com/2018/10/google-and-android-have-your-back-by.html) or [Apple](https://www.youtube.com/watch?v=BLGFriOKz6U) are keeping a backup of your phone safe with a fleet of HSMs. This last case is interesting because the threat model is reversed: the user does not trust the cloud with their data, and thus the cloud service provider claims that it can neither see the user's encrypted backup nor access the keys used to encrypt it.

HSMs don't really share a common standard, but most of them will at least implement the **Public-Key Cryptography Standards 11 (PKCS#11)**, one of those old standards that was started by the RSA company and progressively moved to the OASIS organization (in 2012) in order to facilitate its adoption.

While the last version of PKCS#11 (2.40) was released in 2015, it is merely an update of a standard that originally started in 1994. For this reason, it specifies a number of old cryptographic algorithms and old ways of doing things. Nevertheless, it is good enough for many uses, and it specifies an interface that allows different systems to easily interoperate with each other.

While HSMs' real goal is to make sure nobody can extract key material from them, their security is not always shining.
A lot of the security of these hardware solutions really relies on their high price, on the protection techniques used not being disclosed, and on the certifications (like FIPS and Common Criteria) mostly focusing on the hardware side of things. In practice, devastating software bugs have been found, and it is not always straightforward to know if the HSM you use is vulnerable to any of them (Cryptosense has a [good summary of known attacks against HSMs](https://www.youtube.com/watch?v=lP_QxJ-zjBU)).

> By the way, not only is the price of one HSM high (it can easily be tens of thousands of dollars, depending on the security level), but in addition to that HSM you often have another one for testing, and another one for backup (in case your first HSM dies with its keys in it). It can add up!

Furthermore, I still haven’t touched on the elephant in the room with all of these solutions: while you might prevent most attackers from reaching your secret keys, you can't prevent attackers from compromising the system and making their own calls to the secure hardware module (be it a secure element or an HSM). Again, these hardware solutions are not a panacea and depending on the scenario they provide more or less defense-in-depth.

> By the way, if it applies to your situation modern cryptography can offer better ways of reducing the consequences of key material compromise and mis-use. For example using multi-signatures! Check my [blog post on the subject](https://www.cryptologie.net/article/486/difference-between-shamir-secret-sharing-sss-vs-multisig-vs-aggregated-signatures-bls-vs-distributed-key-generation-dkg-vs-threshold-signatures/).

## Trusted Platform Module (TPM)

A **Trusted Platform Module** (TPM) is first and foremost a **standard** (unlike HSMs) developed in the open by the non-profit [Trusted Computing Group](https://trustedcomputinggroup.org/) (TCG).
The latest version is TPM 2.0, published with the ISO/IEC (International Organization for Standardization and the International Electrotechnical Commission).

A TPM complying with the TPM 2.0 standard is a secure microcontroller that carries a hardware random number generator (also called a true random number generator, or TRNG) and secure memory for storing secrets, performs cryptographic operations, and is tamper-resistant as a whole.
If this description reminds you of smart cards, secure elements, and HSMs, well… I told you that everything we were going to talk about in this chapter was going to be a secure element of some form. (And indeed, it's common to see TPMs implemented as repackaged secure elements.)

You usually find a TPM directly soldered to the motherboard of many enterprise servers, laptops, and desktop computers (see picture below).

Unlike solutions that we’ve seen previously though, a TPM does not run arbitrary code. It offers a well-defined interface that a greater system can take advantage of. Due to these limitations, a TPM is usually pretty cheap (even cheap enough that some IoT devices will ship with one!).

Here is a non-exhaustive list of interesting applications that a TPM can enable:

* **User authentication**. Ever heard of the [FBI iPhone fiasco](https://en.wikipedia.org/wiki/FBI%E2%80%93Apple_encryption_dispute)? TPMs can be used to require a user PIN or password. In order to prevent low entropy credentials to be easily bruteforced, a TPM can rate limit or even count the number of failed attempts.
* **Secure boot**. Secure boot is about starting a system in a known trusted state in order to avoid tampering with the OS by malware or physical intrusion. This can be done by using the platform’s TPM and the Unified Extensible Firmware Interface (UEFI), the piece of code that launches an operating system. Whenever the image of a new boot loader, OS, or driver is loaded, the TPM can store the associated expected hash and compare it before running the code, failing if the hash of the image is different. If you hold a public key, you can also verify that a piece of code has been signed before running it. This is a gross over-simplification of how secure boot works in practice, but the crypto is pretty straightforward.
* **Full disk encryption (FDE)**. This allows storing the key (or encrypting the key) that encrypts all data on the device at rest. If the device has been proven to be in a known good state (via secure boot) and the user authenticates correctly, the key can be released to decrypt data. When the device is locked or shut down, the key vanishes from memory and has to be released by the TPM again. This is a must-have feature if you lose your device or get it stolen.
* **Remote attestation**. This allows a device to authenticate itself or prove that it is running specific software. In other words, a TPM can sign a random challenge and/or metadata with a key that can be tied to a unique per-TPM key (and is signed by the TPM vendor). Every TPM comes with such a unique key (called an endorsement key) along with the vendor’s certificate authority signature on the public-key part. For example, during employee onboarding a company can add a new employee’s laptop’s TPM endorsement key to a whitelist of approved devices. Later, if the user wants to access one of the company’s services, the service can request the TPM to sign a random challenge along with hashes of what OS was booted, to authenticate the user and prove the well-being of the user’s device.
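To make the "measure before launch" idea from the secure boot bullet concrete, here is a minimal sketch. Everything here is made up for illustration: a real TPM keeps measurements in its Platform Configuration Registers and the surrounding logic is far more involved.

```python
import hashlib
import hmac

# Hypothetical "golden" measurement; it would live in the TPM's
# tamper-resistant storage, not in regular program memory.
expected_hash = hashlib.sha256(b"trusted bootloader image").digest()

def measure_and_check(image: bytes) -> bool:
    """Hash the boot image and compare it to the expected measurement."""
    measured = hashlib.sha256(image).digest()
    # constant-time comparison, to avoid leaking where the hashes differ
    return hmac.compare_digest(measured, expected_hash)

print(measure_and_check(b"trusted bootloader image"))   # True: boot continues
print(measure_and_check(b"tampered bootloader image"))  # False: boot aborts
```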

There are more functionalities that a TPM can enable (there are, after all, hundreds of commands that a TPM implements), some of which can even benefit user applications (which should be able to call the TPM).

Note that having a standard is great for inter-operability, and for us to understand what is going on, but unfortunately not everyone uses TPMs. Apple has the [secure enclave](https://support.apple.com/guide/security/secure-enclave-overview-sec59b0b31ff/web), Microsoft has [Pluton](https://azure.microsoft.com/en-us/blog/anatomy-of-a-secured-mcu/), and Google has [Titan](https://cloud.google.com/blog/products/gcp/titan-in-depth-security-in-plaintext).

On a darker note, TPMs have their own controversies and have also been subject to devastating vulnerabilities. For example, the [ROCA attack](https://crocs.fi.muni.cz/public/papers/rsa_ccs17) found that an estimated million TPMs (and even smart cards) from the popular vendor Infineon had been wrongly generating RSA private keys for years (the prime generation was flawed).

* HSMs. They are external, bigger, and faster secure elements. They do not follow any standard interface, but usually implement the PKCS#11 standard for cryptographic operations. HSMs can be certified at different security levels via a NIST standard (FIPS 140-2).
* TPMs. They are chips that follow the TPM standard; more specifically, they are a type of secure element with a specified interface. A TPM is usually a secure chip directly linked to the motherboard, perhaps implemented using a secure element. While it does not allow running arbitrary programs like some secure elements, smart cards, and HSMs do, it enables a number of interesting applications for devices as well as user applications.

That’s it for now, check this blog again to read part 3 which will be about TEEs!

Many thanks to Jeremy O'Donoghue, Thomas Duboucher, Charles Guillemet, and Ryan Sleevi who provided help and reviews!
]]>
Hardware Solutions To Highly-Adversarial Environments Part 1: Whitebox Crypto vs Smart Cards vs Secure Elements vs Host-Card Emulation (HCE) David Wong Sat, 28 Mar 2020 20:22:43 +0100 http://www.cryptologie.net/article/499/hardware-solutions-to-highly-adversarial-environments-part-1-whitebox-crypto-vs-smart-cards-vs-secure-elements-vs-host-card-emulation-hce/ http://www.cryptologie.net/article/499/hardware-solutions-to-highly-adversarial-environments-part-1-whitebox-crypto-vs-smart-cards-vs-secure-elements-vs-host-card-emulation-hce/#comments
Makes sense right?

In these lands, you are going to run into scenarios where attackers can be quite close to your applications.

Imagine using your credit card on an [ATM skimmer](https://krebsonsecurity.com/all-about-skimmers/) (a doodad that a thief can place on top of an ATM’s card reader in order to copy the contents of your credit card; see picture below); downloading an application on your mobile phone that compromises the OS; hosting a web application on a colocated server shared with a malicious customer; managing highly-sensitive secrets in a data center that gets breached; and so on.

These scenarios suck, and are very **counterintuitive** to most cryptographers. This is because cryptography has come a long way since the historical “*Alice wants to encrypt a message to Bob without Eve intercepting it*”. Nowadays, it’s often more like “*Alice wants to encrypt a message to Bob, **but Alice is also Eve***”.

The key here is that in these scenarios, there’s not much that can be done cryptographically (unless you believe in [whitebox crypto](https://www.matthieurivain.com/files/slides-cardis17.pdf)) and **hardware** can go a long way to help.

OK, so now we have a whole world of new doohickeys to learn about, and there are a lot of thingamabobs, believe me (hence the dense title).
It can be quite confusing to learn about all of this, so here we go: my promise is that by the end of this blogpost series you’ll have a better understanding of what all these different hardware solutions are.

Keep in mind that none of these solutions are pure cryptographic solutions: they are all **defense-in-depth** (and sometimes **dubious**) solutions that serve to **hide** secrets and their associated sensitive cryptographic operations. They also all have a **given cost**, meaning that if a sophisticated attacker decides to break the bank, there’s not much we can do (besides raising the cost of an attack).

OK let's get started.

## Obfuscation

By definition obfuscation has nothing to do with security: it is the act of scrambling something so that it still works but is hard to understand. So, for laughs, let’s first mention **whitebox cryptography**, which attempts to “cryptographically” obfuscate the key inside of an algorithm. That’s right: you have the source code of some AES-based encryption algorithm with a fixed key, and it encrypts and decrypts fine, but the key is mixed so well with the implementation that it is too confusing for anyone to extract it from the algorithm. That’s the theory. Unfortunately, in practice, [no published whitebox crypto algorithm has been found to be secure](https://www.cryptoexperts.com/whibox2019/), and most commercial solutions are closed-source because of this (security through obscurity kinda works in the real world). Again, it’s all about raising the cost and making it harder for attackers.

All in all, whitebox crypto is a big industry that sells dubious products to businesses in need of DRM solutions. On the more serious side, there is a branch of cryptography called **Indistinguishability obfuscation (iO)** that attempts to do this cryptographically (so for realz). iO is a very theoretical, impractical, and so far not-really-proven field of research. We’ll see how that one goes.

(Timeline of whitebox cryptography, taken from [Matthieu Rivain’s slides](https://www.matthieurivain.com/files/slides-cardis17.pdf))

## Smart Cards

OK, whitebox crypto is not great, and worse: even if you can’t extract the key, you can still copy the program (and use it to do whatever cryptographic operation it features) instead of trying to extract the key. It would be great if we could prevent people from copying secrets from sensitive devices, or even prevent them from seeing what’s going on when the device performs cryptographic operations.
A **smart card** is exactly this. It’s what you commonly find in credit cards, and is activated either by inserting it into a **payment terminal** (also called a Point of Sale or PoS terminal), or by getting it close enough to one via Near-field Communication (NFC).

Smart cards are pretty old, and started as a practical way to get everyone a **pocket computer**. Indeed, a smart card embeds a CPU, memory (RAM, ROM, and EEPROM), input/output, a hardware random number generator (a so-called TRNG), etc., unlike the not-so-smart cards that only had data stored on them via a magnetic stripe (which in turn can be easily copied via the skimmers I talked about previously).
Today, it seems like those same people all have a much more powerful computer in their pockets, so smart cards are probably going to die.
(Rob Wood points out to me that [more than a quarter of the US still doesn’t have a smartphone](https://www.statista.com/statistics/201183/forecast-of-smartphone-penetration-in-the-us/), so there’s still some time before this prophecy comes to fruition.)

Smart cards mix a number of physical and logical techniques to prevent observation, extraction, and modification of their execution environment and some of their memory (where secrets are stored). But as I said earlier, it’s all about how much money you want an attacker to be spending, and there exist [many techniques that attempt to break these cards](http://www.infosecwriters.com/text_resources/pdf/Known_Attacks_Against_Smartcards.pdf):

* Non-invasive attacks such as differential power analysis (DPA) analyze the power consumption of the smart card while it is doing cryptographic operations in order to extract the associated keys.
* Semi-invasive attacks require access to the chip’s surface to mount attacks such as differential fault analysis (DFA) which use heat, lasers, and other techniques to modify the execution of a program running on the smart card in order to leak the key via cryptographic attacks (see my post on [RSA signature fault attacks](https://www.cryptologie.net/article/371/fault-attacks-on-rsas-signatures/) for an example).
* Finally invasive silicon attacks can modify the circuitry in the silicon itself to alter its function and reveal secrets.

## Secure Elements

Smart cards got really popular really fast, and it became obvious that having such a secure blackbox in other devices could be useful. The concept of a secure element was born: a tamper-resistant microcontroller that can be found in a pluggable form factor like UICCs (the SIM cards required by carriers to access their 3G/4G/5G networks) or directly bonded on chips and motherboards like the embedded SE (eSE) attached to an iPhone’s NFC chip. Really just a small **separate** piece of hardware meant to protect your secrets and their usage in cryptographic operations.

> SEs are an evolution of the traditional chip that resides in smart cards, which have been adapted to suit the needs of an increasingly digitalized world, such as smartphones, tablets, set top boxes, wearables, connected cars, and other internet of things (IoT) devices. ([GlobalPlatform](https://globalplatform.org/wp-content/uploads/2018/05/Introduction-to-Secure-Element-15May2018.pdf))

Secure elements are a key concept to protect cryptographic operations in the Internet of Things (IoT), a colloquial (and overloaded) term for devices that can communicate with other devices (think smart cards in credit cards, SIM cards in phones, biometric data in passports, garage keys, smart home sensors, and so on).

Thus, you can see all of the solutions that follow in this blogpost series as secure elements implemented in different form factors, using different techniques, and providing different levels of defense-in-depth.

If you are required to use a secure element (to store credit card data for example), you also most likely have to get it certified. The main definition and standards around a secure element come from [GlobalPlatform](https://globalplatform.org/resource-publication/introduction-to-secure-elements/), but there exist more standards like Common Criteria (CC), NIST’s FIPS, EMV (for Europay, Mastercard, and Visa), and so on.
If you’re in the market of buying secure microcontrollers, you will often see claims like “FIPS 140-2 certified” and “certified CC EAL 5+” next to it. Claims that can be obtained after spending some quality time, and a lot of money, with licensed certification labs.

## Host Card Emulation (HCE)

It’s 2020, and most people have a computer in their pocket: a smartphone. What’s the point of a credit card anymore? Well, not much: nowadays more and more payment terminals support contactless payment via the Near-field Communication (NFC) protocol, and more and more smartphones ship with an NFC chip that can potentially act as a credit card.

NFC for payment is specified as **Card Emulation**. Literally: it emulates a bank card.
Banks allow you to do this **only if you have a secure element**.

Since Apple has full control over its hardware, it can easily add a secure element to its new iPhones to support payment, and this is what Apple did (with an embedded SE bonded to the NFC chip since the iPhone 6). iPhone users can register a bank card with the Apple wallet application, Apple can then obtain the card’s secrets from the issuing bank, and the card secrets can finally be stored in the eSE. The secure element communicates directly with the NFC chip and then to NFC readers, thus a compromise of the phone OS does not impact the secure element.

Google, on the other hand, had quite a hard time introducing payment to Android-based mobile phones due to phone vendors all doing different things. The saving technology for Google ended up being a **cloud-based secure element** called **Host Card Emulation (HCE)** introduced in 2013 in Android 4.4.

(Note that some Android devices do have an eSE that can be used instead of HCE, and some SIM cards can also be used as secure elements for payment.)
This concept of replacing sensitive long-term information with short-lived tokens is called **tokenization**.
Sending a random card number that can be linked to your real one is great for privacy: merchants can’t track you as it’ll look like you’re always using a new card number. If your phone gets compromised, the attacker only gets access to a short-lived secret that can only be used for a single payment.
Tokenization is a common concept in security: replace the sensitive data with some random stuff, and have a table secured somewhere safe that maps the random stuff to the real data.
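As a rough sketch of that idea, with a hypothetical in-memory dictionary standing in for the secured mapping table:

```python
import secrets

# The vault maps random tokens to real card numbers; in practice it lives
# in a hardened, access-controlled service, not in application memory.
vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Replace a sensitive card number (PAN) with a random token."""
    token = secrets.token_hex(8)
    vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the holder of the vault can map a token back to the real PAN."""
    return vault[token]

token = tokenize("4111 1111 1111 1111")
assert token != "4111 1111 1111 1111"   # merchants only ever see the token
assert detokenize(token) == "4111 1111 1111 1111"
```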

Wikipedia has some cool diagram to show what’s going on whenever you pay with Android Pay or Apple Pay:

Although Apple theoretically doesn't have to use tokenization, since iPhones have secure elements that can store the real PAN, they use it anyway in order to gain more privacy (it is, after all, their new bread and butter).

In [part 2](https://www.cryptologie.net/article/500/hardware-solutions-to-highly-adversarial-environments-part-2-hsm-vs-tpm-vs-secure-enclave/) of this blog series I’ll cover HSMs, TPMs, and much more :)

(I would like to thank Rob Wood, Thomas Duboucher, and Lionel Rivière for answering my many questions!)

PS: I'm writing a book which will contain this and much more, [check it out](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto)! ]]>
Coronavirus and cryptography David Wong Tue, 17 Mar 2020 02:39:13 +0100 http://www.cryptologie.net/article/498/coronavirus-and-cryptography/ http://www.cryptologie.net/article/498/coronavirus-and-cryptography/#comments
The IACR announced on March 14th that multiple conferences were postponed:

> **FSE** 2020, which was supposed to be held in Athens, Greece, during 22-26 March 2020, has been postponed to 8-12 November 2020.

> **PKC** 2020, which was supposed to be held in Edinburgh, Scotland, during 4-7 May 2020, has been postponed.

> **EUROCRYPT** 2020, which was supposed to be held in Zagreb, Croatia, during 10-14 May 2020, has been postponed.

While some others were not:

> No changes have been made at this time to the schedule of **CRYPTO** 2020, **CHES** 2020, **TCC** 2020, and **ASIACRYPT** 2020, but we will continue to closely monitor the situation and will inform members if changes are needed.

While many workplaces (including mine) are moving to a WFH (work from home) model, will conferences follow?

It seems to be the case at least for [Consensus 2020](https://www.coindesk.com/events/consensus-2020), a cryptocurrency conference organized by coindesk, which is moving to an online model:

> Consensus 2020 will now be a completely virtual experience, where attendees from all over the world can participate online at no charge.

On a more dramatic note it seems like several participants of EthCC, which was held in Paris almost a week ago, have contracted the virus. A [google spreadsheet](https://docs.google.com/spreadsheets/d/1UorrYGPbVh-KJliw4KDyeeWDGDSKRCsYgM3ieV6mh74/htmlview?sle=true#gid=0) has been circulating in order to self-report and figure out who else could have potentially contracted the virus.
Even Vitalik Buterin is rumored to have had mild COVID-19 symptoms. Nobody is out of reach.

On a lighter note, my coworker Kostas presented on proofs of solvency at the lightning talks of [Real World Crypto 2020](https://www.youtube.com/watch?v=qJv7-fxFVC8). With his Merkle tree-like construction he hopes to make governments accountable when they count the number of people who tested positive for the virus.

]]>
EdDSA, Ed25519, Ed25519-IETF, Ed25519ph, Ed25519ctx, HashEdDSA, PureEdDSA, WTF? David Wong Fri, 13 Mar 2020 00:58:27 +0100 http://www.cryptologie.net/article/497/eddsa-ed25519-ed25519-ietf-ed25519ph-ed25519ctx-hasheddsa-pureeddsa-wtf/ http://www.cryptologie.net/article/497/eddsa-ed25519-ed25519-ietf-ed25519ph-ed25519ctx-hasheddsa-pureeddsa-wtf/#comments
You've heard of **EdDSA**, right? The shiny and new signature scheme (well, new… it's been around since 2008, wake up).

Since its inception, EdDSA has evolved quite a lot, and some amount of standardization process has happened to it. It's even doomed to be adopted by the NIST in [FIPS 186-5](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5-draft.pdf)!

First, some definition:

* EdDSA stands for **Edwards-curve Digital Signature Algorithm**. As its name indicates, it is supposed to be used with twisted Edwards curves (a type of elliptic curve). Its name can be deceiving though, as it is not based on the Digital Signature Algorithm (DSA) but on [Schnorr signatures](https://www.cryptologie.net/article/193/schnorrs-signature-and-non-interactive-protocols/)!
* Ed25519 is the name given to the algorithm combining EdDSA and the Edwards25519 curve (a curve somewhat equivalent to Curve25519 but discovered later, and much more performant).

EdDSA, Ed25519, and the more secure Ed448 are all specified in [RFC 8032](https://tools.ietf.org/html/rfc8032).

## RFC 8032: Edwards-Curve Digital Signature Algorithm (EdDSA)

RFC 8032 takes some new direction from [the original paper](https://ed25519.cr.yp.to/ed25519-20110926.pdf):

* It specifies a **malleability check** during verification, which prevents ill-intentioned people from forging an additional valid signature from an existing signature of yours. Whenever someone talks about **Ed25519-IETF**, they probably mean "the algorithm with the malleability check".
* It specifies a number of Ed25519 **variants**, which are the reason for this post.
* Maybe some other stuff I'm missing.

To sign with Ed25519, the original algorithm defined in the paper, here is what you're supposed to do:

1. compute the nonce as HASH(nonce_key || message)
2. compute the commitment R = [nonce]G with G the generator of the group.
3. compute the challenge as HASH(commitment || public_key || message)
4. compute the proof S = nonce + challenge × signing_key
5. the signature is (R, S)

where HASH is just the SHA-512 hash function.

At a high-level this is similar to Schnorr signatures, except for the following differences:

* The **nonce** is generated **deterministically** (as opposed to probabilistically) using a fixed nonce_key (derived from your private key) and the message M. This is one of the cool features of Ed25519: it prevents you from re-using the same nonce twice.
* The **challenge** is computed not only with the commitment and the message to sign, but with the **public key of the signer** as well. Do you know why?
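To see the deterministic nonce in action, here is a sketch of step 1 of the signing algorithm alone, using only `hashlib` (the `nonce_key` value here is a made-up stand-in for the value actually derived from the private key):

```python
import hashlib

# order of the prime-order subgroup used by Ed25519
GROUP_ORDER = 2**252 + 27742317777372353535851937790883648493

def derive_nonce(nonce_key: bytes, message: bytes) -> int:
    """Step 1 of signing: nonce = HASH(nonce_key || message), reduced mod the group order."""
    digest = hashlib.sha512(nonce_key + message).digest()
    return int.from_bytes(digest, "little") % GROUP_ORDER

# same key and message always yield the same nonce...
assert derive_nonce(b"nonce_key", b"message") == derive_nonce(b"nonce_key", b"message")
# ...while a different message yields a different nonce
assert derive_nonce(b"nonce_key", b"message!") != derive_nonce(b"nonce_key", b"message")
```

This is why signing the same message twice produces the same signature, and why two different messages never share a nonce (short of a SHA-512 collision).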

Important: notice that the message here does not need to be hashed before being passed to the algorithm, as it is already hashed as part of the algorithm.

Anyway, we still don't know WTF all the variants specified are.

## PureEdDSA, ContextEdDSA and HashEdDSA

Here are the variants that the RFC actually specifies:

* **PureEdDSA**, shortened as **Ed25519** when coupled with Edwards25519.
* **HashEdDSA**, shortened as **Ed25519ph** when coupled with Edwards25519 (and where ph stands for "prehash").
* **Something with no name we'll call ContextEdDSA**, defined as **Ed25519ctx** when coupled with Edwards25519.

All three variants can share the same keys. They differ only in their signing and verification algorithms.

By the way Ed448 is a bit different, so from now on I'll focus on EdDSA with the Edwards25519 curve.

**Ed25519 (or pureEd25519)** is the algorithm I described in the previous section.

Easy!

**Ed25519ctx (or ContextEd25519)** is pureEd25519 with some additional modification: the HASH(.) function used in the signing protocol I described above is re-defined as HASH(x) = SHA-512(some_encoding(flag, context) || x) where:
* flag is set to 0
* context is a context string (mandatory only for Ed25519ctx)

In other words, the two instances of hashing in the signing algorithm now include some prefix.
(Intuitively, you can also see that these variants are totally incompatible with each other.)
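For the curious, RFC 8032 spells out what some_encoding(flag, context) is: it's the dom2 prefix, a fixed string followed by the flag byte and the length-prefixed context. A sketch:

```python
import hashlib

def dom2(flag: int, context: bytes) -> bytes:
    """The domain-separation prefix from RFC 8032."""
    assert 0 <= len(context) <= 255, "context is at most 255 bytes"
    return b"SigEd25519 no Ed25519 collisions" + bytes([flag, len(context)]) + context

def H(flag: int, context: bytes, x: bytes) -> bytes:
    """The re-defined HASH: SHA-512 with the dom2 prefix prepended."""
    return hashlib.sha512(dom2(flag, context) + x).digest()

# Ed25519ctx uses flag=0 and Ed25519ph uses flag=1, so their hash inputs
# (and hence their signatures) can never collide with each other.
assert H(0, b"ctx", b"data") != H(1, b"ctx", b"data")
```

(PureEd25519 uses no prefix at all, which is why it is incompatible with both.)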

Right off the bat, you can see that **ContextEd25519**’s big difference is just that it mandates some domain separation on top of Ed25519.

**Ed25519ph (or HashEd25519)**, finally, builds on top of ContextEd25519 with the following modifications:

* flag is set to 1
* context is now optional, but advised
* the message is replaced with a hash of the message (the specification says that the hash has to be SHA-512, but I'm guessing that it can be anything in reality)

OK. So the big difference now seems to be that we are doubly-hashing.

## Why HashEdDSA and why double hashing?

First, pre-hashing sucks, because **it kills the collision resistance of the signature algorithm**.
In PureEdDSA we assume that the algorithm takes the original message and not a hash.
(Although this is not always true; the caller of the function can do whatever they want.)
Then a collision on the hash function wouldn't matter (to create a signature that validates two different messages), because you would have to find a collision on the nonce, which is computed using a secret (the nonce key).

But if you pre-hash the message, then finding a collision there is enough to obtain a signature that validates two messages.

Thus, **you should use PureEdDSA if possible**. And use it correctly (pass it the correct message.)
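To make the argument tangible, here is a toy demonstration with a deliberately broken 1-byte "hash" standing in for a collision-broken hash function. Everything here is made up for illustration; `sign_prehashed` is a stand-in for "a signature that only depends on the pre-hash", not a real signature scheme.

```python
import hashlib

def weak_hash(message: bytes) -> bytes:
    # deliberately terrible 1-byte "hash" so collisions are easy to find
    return hashlib.sha512(message).digest()[:1]

def sign_prehashed(secret_key: bytes, message: bytes) -> bytes:
    # stand-in for HashEdDSA: the "signature" depends only on the pre-hash
    return hashlib.sha512(secret_key + weak_hash(message)).digest()

# brute-force a second message colliding with the first under weak_hash
m1 = b"pay Bob 10 dollars"
i = 0
while True:
    m2 = b"pay Eve 1000000 dollars #%d" % i
    if weak_hash(m2) == weak_hash(m1):
        break
    i += 1

# one signature now validates both messages
assert m1 != m2
assert sign_prehashed(b"secret", m1) == sign_prehashed(b"secret", m2)
```

With PureEdDSA, finding such a pair would not be enough: the attacker would also need to collide the secret-keyed nonce computation.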

Why is HashEdDSA a thing then?

The [EdDSA for more curves](https://cryptojedi.org/papers/eddsa-20150704.pdf) paper which was the first to introduce the algorithm has this to say:

> The main motivation for HashEdDSA is the following storage issue (which is irrelevant to most well-designed signature applications). Computing the PureEdDSA signature of M requires reading through M twice from a buffer as long as M, and therefore does not support a small-memory “Init-Update-Final” interface for long messages. Every common hash function H0 supports a small-memory “Init-Update-Final” interface for long messages, so H0-EdDSA signing also supports a small-memory “Init-Update-Final” interface for long messages. Beware, however, that analogous streaming of verification for long messages means that verifiers pass along forged packets from attackers, so it is safest for protocol designers to split long messages into short messages to be signed; this splitting also eliminates the storage issue.

## Why am I even looking at this rabbit hole?

Because [I'm writing a book](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto), and it'd be nice to explain what the hell is going on with Ed25519. ]]>
What's a key exchange? David Wong Tue, 10 Mar 2020 02:49:26 +0100 http://www.cryptologie.net/article/496/whats-a-key-exchange/ http://www.cryptologie.net/article/496/whats-a-key-exchange/#comments
I've been writing about cryptography for [a book](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto) for a year now, and it has brought me some interesting challenges.
One of them is that I constantly have to throw away what I've learned a long time ago, and imagine what it feels like not to know about a concept.

For example what are key exchanges?

The most intuitive explanation that I knew of (up until recently) was the one given by the [wikipedia page on key exchanges](https://en.wikipedia.org/wiki/Key_exchange).
It's a picture that involves paint. Take a look at it, but don't try to understand what is going on if you don't know about key exchanges yet. You can come back to it later.

I thought this was great. At least until I tried to explain key exchanges to my friends using this analogy. Nobody got it.

Nobody.

The other problem was that I couldn't use colors to explain anything in my book, as it'll be printed in black & white.

So I sat on the sad realization that I didn't have a great explanation for key exchanges for a number of months, until a more intuitive idea came to my mind.

The idea goes like this. Imagine that Alice and Bob want to share a secret, but are afraid that someone is intercepting their communications.
What they do is go to the store and buy the same bottle of generic soda.

Once home, they both start a random timer and shake their respective bottles until their timers end.

What they obtain are shaken, pressurized, ready-to-gush-out bottles of soda.
Each bottle will release a different amount of pressure.

After that, they swap bottles. Now Alice has the bottle of Bob, and Bob has Alice's bottle.

What do they do now? They restart their timers and shake the other person's bottle for the same amount of time.

Shake shake shake!

What do they finally obtain? Try to guess.

If I did my job correctly, then I gave you an intuition of how key exchanges work.
Both Alice and Bob should now have bottles of soda that will release the same amount of pressure once opened.
And that's the secret!

And even if I steal the two bottles, I can't get a bottle that combines both bottles' pressure.
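For readers who want to peek behind the analogy, here is a toy Diffie-Hellman sketch with deliberately tiny numbers (insecure, purely illustrative): the secret exponents play the role of the random shaking times, and the exchanged values are the shaken bottles.

```python
import secrets

p = 23  # small public prime (real key exchanges use much larger groups)
g = 5   # public generator

a = secrets.randbelow(p - 2) + 1  # Alice's secret "shaking time"
b = secrets.randbelow(p - 2) + 1  # Bob's secret "shaking time"

A = pow(g, a, p)  # Alice's shaken bottle, sent to Bob
B = pow(g, b, p)  # Bob's shaken bottle, sent to Alice

# each now "shakes" the other's bottle with their own secret
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # same "pressure" on both sides
```

An eavesdropper who intercepts A and B (the swapped bottles) cannot combine them into the shared value without one of the secret exponents.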

I recap the whole flow in the picture below:

Did you know about key exchanges before? Did you get it? Or did you think the painting example made more sense?

Please tell me in the comments!

This is probably what I'll include in [my book](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto) as an introduction of what key exchanges are, unless I find a better way to explain it :) ]]>
Cryptographic Signatures, Surprising Pitfalls, and LetsEncrypt David Wong Sun, 08 Mar 2020 21:09:48 +0100 http://www.cryptologie.net/article/495/cryptographic-signatures-surprising-pitfalls-and-letsencrypt/ http://www.cryptologie.net/article/495/cryptographic-signatures-surprising-pitfalls-and-letsencrypt/#comments
On August 11th, 2015, Andrew Ayer posted [the following email](https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJCV4dqWf-4Yc/) to the IETF mailing list:

> I recently reviewed draft-barnes-acme-04 and found vulnerabilities in the DNS, DVSNI, and Simple HTTP challenges that would allow an attacker to fraudulently complete these challenges.

(The author has since then written [a more complete explanation of the attack](https://www.agwa.name/blog/post/duplicate_signature_key_selection_attack_in_lets_encrypt).)

The *draft-barnes-acme-04* mentioned by Andrew Ayer is a document specifying **ACME**, one of the protocols behind the [Let's Encrypt](https://letsencrypt.org/) certificate authority: the thing that your browser trusts and that signs the public keys of websites you visit.

The attack was found a mere 6 weeks before major browsers were supposed to ship with Let's Encrypt's public keys in their trust stores. The draft has since become [RFC 8555: Automatic Certificate Management Environment (ACME)](https://tools.ietf.org/html/rfc8555), mitigating the issues. Since then, no cryptographic attacks on the protocol have been found.

But how did we get there? What's the deal with signature schemes these days? and are all of our protocols doomed? This is what this blog post will answer.

## Let's Encrypt Use Of Signatures

Let's Encrypt is a pretty big deal. Created in 2014, it is a certificate authority run as a non-profit, currently providing trust to ~200 million websites.

The key to Let's Encrypt's success is twofold:

* It is **free**. Before Let's Encrypt, most certificate authorities charged fees from webmasters who wanted to obtain certificates.
* It is **automated**. If you follow their standardized protocol, you can request, renew, and even revoke certificates via an API. Contrast that with other certificate authorities, which did most processing manually and took time to issue certificates.

If a webmaster wants her website example.com to provide a secure connection to her users (via HTTPS), she can request [a certificate](https://www.cryptologie.net/article/262/what-are-x509-certificates-rfc-asn1-der/) from Let's Encrypt, and after proving that she owns the domain example.com and getting her certificate issued, she will be able to use it to negotiate a secure connection with any browser trusting Let's Encrypt.

That's the theory.

In practice the flow is the following:

1. Alice registers on Let's Encrypt with an RSA public key.
2. Alice asks Let's Encrypt for a certificate for example.com.
3. Let's Encrypt asks Alice to prove that she owns example.com, for this she has to sign some data and upload it to example.com/.well-known/acme-challenge/some_file.
4. Once Alice has signed and uploaded the signature, she asks Let's Encrypt to go check it.
5. Let's Encrypt checks if it can access the file on example.com; if it successfully downloaded the signature and the signature is valid, then Let's Encrypt issues a certificate to Alice.

I recapitulate some of this flow in the following figure:

Now, you might be wondering: what if Alice does not own example.com and manages to man-in-the-middle Let's Encrypt in step 5? That's a real issue that has been bothering me ever since Let's Encrypt launched, and it turns out a team of researchers at Princeton demonstrated exactly this in [Bamboozling Certificate Authorities with BGP](https://www.princeton.edu/~pmittal/publications/bgp-tls-usenix18.pdf):

> We perform the first real-world demonstration of BGP attacks to obtain bogus certificates from top CAs in an ethical manner. To assess the vulnerability of the PKI, we collect a dataset of 1.8 million certificates and find that an adversary would be capable of gaining a bogus certificate for the vast majority of domains

The paper continues and proposes two solutions to sort of remediate, or at least reduce the risk of these attacks:

> Finally, we propose and evaluate two countermeasures to secure the PKI: 1) CAs verifying domains from multiple vantage points to make it harder to launch a successful attack, and 2) a BGP monitoring system for CAs to detect suspicious BGP routes and delay certificate issuance to give network operators time to react to BGP attacks.

Recently Let's Encrypt implemented the first solution [multi-perspective domain validation](https://letsencrypt.org/2020/02/19/multi-perspective-validation.html), which changes the way step 5 of the above flow is performed: now Let's Encrypt downloads the proof from example.com from multiple places.

## How Did The Let's Encrypt Attack Work?

But let's get back to what I was talking about, the attack that Andrew Ayer found in 2015.

In it, Andrew proposes a way to gain control of a Let's Encrypt account that has already validated a domain (let's say example.com).

The attack goes like this:

1. Alice registers and goes through the process of verifying her domain example.com by uploading some signature over some data on example.com/.well-known/acme-challenge/some_file. She then successfully manages to obtain a certificate from Let's Encrypt.
2. Later, Eve signs up to Let's Encrypt with a new account and a new RSA public key, and requests to recover the example.com domain.
3. Let's Encrypt asks Eve to sign some new data and upload it to example.com/.well-known/acme-challenge/some_file (note that the file is still lingering there from Alice's previous domain validation).
4. Eve crafts a new malicious keypair and updates her public key on Let's Encrypt. She then asks Let's Encrypt to check the signature.
5. Let's Encrypt obtains the signature file from example.com; the signature matches, so Eve is granted ownership of the domain example.com.

I recapitulate the attack in the following figure:

Wait what?

What happened there?

## Key Substitution Attack With RSA

In the above attack Eve managed to create a valid public key that validates a given signature and message.

This is because, as Andrew Ayer wrote:

> A digital signature does not uniquely identify a key or a message

If you remember [how RSA works](https://www.cryptologie.net/article/182/airbus-crypto-challenge-write-up/), this is actually not too hard to understand.

For a fixed signature and (PKCS#1 v1.5 padded) message, a public key (e, N) validates the signature if the following equation holds:

\$$\text{message} = \text{signature}^e \pmod{N}\$$

One can easily craft a public key that will (most of the time) satisfy the equation:

* \$$e = 1\$$
* \$$N = \text{signature} - \text{message}\$$

You can easily verify that the validation works:

\begin{align} &\text{message} = \text{signature}^e \pmod{N}\\\\ \iff&\text{message} = \text{signature} \pmod{\text{signature} - \text{message}}\\\\ \iff&\text{signature} - \text{message} = 0 \pmod{\text{signature} - \text{message}} \end{align}

By definition the last line is true.
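You can check this for yourself with a couple of lines of Python (the numbers below are toy values standing in for a real padded message and signature):

```python
# Toy demonstration of the key substitution trick: given a fixed
# (message, signature) pair, craft a public key (e, N) that validates it.
message = 0x1337              # stands in for the PKCS#1 v1.5 padded message
signature = 3 * message + 42  # a fixed signature the attacker cannot change

# the attacker's crafted public key
e = 1
N = signature - message

# textbook RSA verification: signature^e mod N must equal the message
assert pow(signature, e, N) == message
```

The "most of the time" caveat is visible here: the check only succeeds when the message is smaller than N, that is, when the signature is more than twice the message.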

## Security of Cryptographic Signatures

Is this issue surprising?

It should be.

And if so why?

This is because of the gap that exists between the theoretical world and the applied world, between the security proofs and the implemented protocol.

Signatures in cryptography are usually analyzed with the [EUF-CMA model](https://blog.cryptographyengineering.com/euf-cma-and-suf-cma/), which stands for **Existential Unforgeability under Adaptive Chosen Message Attack**.

In this model, YOU generate a key pair, and then I request that YOU sign a number of arbitrary messages. While I observe the signatures you produce, I win if I can at some point produce a valid signature over a message I hadn't requested.

Unfortunately, even though our modern signature schemes seem to pass the EUF-CMA test fine, they tend to exhibit some **surprising properties**.

## Subtle Behaviors of Signature Schemes

The excellent paper [Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures](https://eprint.iacr.org/2019/779) by Dennis Jackson, Cas Cremers, Katriel Cohn-Gordon, and Ralf Sasse attempts to list these surprising properties and the signature schemes affected by them (and then finds a bunch of these in real protocols using formal verification; it's a cool paper, read it).

![Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures](/upload/Screen_Shot_2020-03-08_at_12.39_.11_PM_.png)

Let me briefly describe each of these properties:

**Conservative Exclusive Ownership (CEO)/Destructive Exclusive Ownership (DEO)**. This refers to what Koblitz and Menezes used to call [Duplicate Signature Key Selection (DSKS)](https://eprint.iacr.org/2011/343.pdf). In total honesty, I don't think any of these terms are self-explanatory. I find these attacks easier to remember if thought of as the following two variants:

1. **key substitution** attacks (CEO), where a different keypair or public key is used to validate a given signature over a given message.
2. **message key substitution** attacks (DEO), where a different keypair or public key is used to validate a given signature over a new message.

To recap: the first attack fixes both the message and the signature, the second one only fixes the signature.

**Malleability**. Most signature schemes are malleable, meaning that if you give me a valid signature I can tamper with it to obtain a different but still valid signature. Note that if I'm the signer I can usually create different signatures for the same message, but here malleability refers to the fact that someone with zero knowledge of the private key can also create a new valid signature for the same signed message. It is not clear if this has any impact on any real world protocol, even though the Bitcoin exchange MtGox blamed its loss of funds on this one. From the paper [Bitcoin Transaction Malleability and MtGox](https://arxiv.org/abs/1403.6676):

> In February 2014 MtGox, once the largest Bitcoin exchange, closed and filed for bankruptcy claiming that attackers used malleability attacks to drain its accounts.

Note that a newer security model called [SUF-CMA](https://blog.cryptographyengineering.com/euf-cma-and-suf-cma/) (for strong EUF-CMA) attempts to include this behavior in the security definition of signature schemes, and some recent standards (like [RFC 8032](https://tools.ietf.org/html/rfc8032) that specifies Ed25519) are mitigating malleability attacks on their signature schemes.
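To see malleability in action, here is a from-scratch toy ECDSA over secp256k1 (an educational sketch only: the key and nonce are hardcoded, which would be catastrophic in practice). If (r, s) verifies, so does (r, n - s):

```python
# Toy ECDSA over secp256k1, demonstrating signature malleability.
p = 2**256 - 2**32 - 977  # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    # point addition on y^2 = x^3 + 7, with None as the point at infinity
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d = 123456789   # toy private key (must be random in practice)
Q = mul(d, G)   # public key
z = 0xdeadbeef  # hash of the message being signed
k = 987654321   # toy nonce (must be random and secret in practice!)
r = mul(k, G)[0] % n
s = pow(k, -1, n) * (z + r * d) % n

def verify(z, r, s, Q):
    u1 = z * pow(s, -1, n) % n
    u2 = r * pow(s, -1, n) % n
    R = add(mul(u1, G), mul(u2, Q))
    return R is not None and R[0] % n == r

assert verify(z, r, s, Q)      # the signer's signature...
assert verify(z, r, n - s, Q)  # ...and its malleated twin (r, n - s)
```

This is why schemes like Ed25519 (as specified in RFC 8032) reject the non-canonical half of the signature space.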

**Re-signability**. This one is simple to explain. To validate a signature over a message, you often don't need the message itself but only its digest. This would allow anyone to re-sign the message with their own keys without knowing the message itself. How impactful is this in real world protocols? Not sure, but you never know.

**Collidability**. This is another property whose real-world impact is unclear: some schemes allow you to craft signatures that will validate under several messages. Worse, Ed25519 as designed allows one to craft a public key and a signature that will validate any message with high probability. (This has been fixed in some implementations, like libsodium.)

I recapitulate the substitution attacks in the diagram below:

What to do with all of this information?

Well, for one, signature schemes are definitely not broken, and you probably shouldn't worry if your use of them is mainstream.

But if you're designing cryptographic protocols, or if you're implementing something that's more complicated than the every day use of cryptography you might want to keep these in the back of your mind.

> ![book](https://realworldcryptography.com/images/building4.png)
> Did you like this content? This is part of a book about how to apply modern cryptography in real world applications. [Check it out](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto)!

]]>
Authentication What The Fuck: Part II David Wong Mon, 17 Feb 2020 22:26:19 +0100 http://www.cryptologie.net/article/494/authentication-what-the-fuck-part-ii/ http://www.cryptologie.net/article/494/authentication-what-the-fuck-part-ii/#comments
Writing about real world cryptography, it seems like I end up writing a lot about protocols and how they solve origin/identity authentication.

Don't get me wrong, confidentiality has interesting problems too (e.g. how to bring confidentiality to a blockchain), but authentication is most of what applied cryptography is about, for realz.

Do I need to convince you?

If you think about it, most protocols are about finding ways to provide authentication to different scenarios. And that's why they can get complicated!

I'll take my life as an example; here are the authentication problems and solutions that I use:

* **insecure → one-side authenticated**. Every day I use HTTPS, which uses the web public-key infrastructure (web PKI) to allow my browser to authenticate any websites on the web. It's a mess, but that's how you scale machine-to-machine authentication nowadays.
* **one-side authenticated → mutually-authenticated**. Whenever I log into a website, over a secure HTTPS connection, this is what happens. A machine asks me to present some password (in clear, or oblivious via an asymmetric password-authenticated key exchange), or maybe a one-time password (via TOTP), or maybe I'll have to press my thumb on a YubiKey (FIDO2), or maybe I'll have to do a combination of several things (MFA). These are usually machines-authenticating-humans types of flows.
* **insecure → mutually-authenticated**. Whenever I talk to someone on Signal, or connect to a new WiFi, or pair a bluetooth device (like my phone with a car), I go from an insecure connection to a mutually-authenticated connection. There is a bit more nuance here, as sometimes I'll authenticate a machine (a WiFi access point for example) and sometimes I'll authenticate a human (end-to-end encryption). So different techniques work best depending on the type of peer you're trying to talk to.

In the end, I think these are the main three big categories of origin authentication.
Can you think of a better classification? ]]>
What Are Short Authenticated Strings (SAS)? David Wong Mon, 17 Feb 2020 00:10:57 +0100 http://www.cryptologie.net/article/493/what-are-short-authenticated-strings-sas/ http://www.cryptologie.net/article/493/what-are-short-authenticated-strings-sas/#comments
See, we often like to talk about how key exchanges can also be authenticated, by means of public-key infrastructures (e.g. HTTPS) or by pre-exchanging secrets (e.g. [my previous post on sPAKE](https://www.cryptologie.net/article/490/how-symmetric-password-authenticated-key-exchanges-work-spake/)), but we seldom talk about **post-handshake authentication**.

Post-handshake authentication is the idea that you can connect to something (often a hardware device) insecurely, and then "augment" the connection via some information provided in a special **out-of-band** channel.

But enough blabla, let me give you a real-world example: you link your phone with your car and are then asked to compare a few digits (this pairing method is called "numeric comparison" in the bluetooth spec). Here:

* **Out-of-band channel**. You are in your car, looking at your screen: this is your out-of-band channel. It provides integrity (you know you can trust the numbers displayed on the screen) but does not necessarily provide confidentiality (someone could look at the screen through your window).
* **Short authenticated string (SAS)**. The same digits displayed on the car's screen and on your phone are the SAS! If they match you know that the connection is secure.

This SAS thing is **extremely practical and usable**, as it works without having to provision devices with long secrets, or having the user compare long strings of unintelligible characters.

How to do this? You're probably thinking "easy!". And indeed, it seems like we could just do a key exchange and then pass the output through some KDF to create this SAS.

**NOPE**.

This has been [discussed](https://moderncrypto.org/mail-archive/curves/2017/000896.html) [long-and-large](https://vnhacker.blogspot.com/2016/08/the-internet-of-broken-protocols.html) on [the internet](https://research.kudelskisecurity.com/2017/04/25/should-ecdh-keys-be-validated/): with key exchange protocols like X25519 it **doesn't work**.
The reason is that X25519 does not have **contributory behavior**: I can send you a public key that will lead to a predictable shared secret. In other words: your public key does not **contribute** (or contributes very little) to the output of the algorithm.

The correct™ solution here is to give your KDF more than just the key exchange output: give it your protocol transcript, all the messages sent and received. This puts a stop to any man-in-the-middle attempt on the protocol. (And the lesson is that you shouldn't naively customize a key exchange protocol, it can lead to real world failures.)
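A minimal sketch of the fix, with placeholder byte strings standing in for the real transcript pieces:

```python
import hashlib

# Derive a 6-digit SAS from a hash of the WHOLE transcript (both public
# keys and the key-exchange output), not just the shared secret.
alice_pub = b"alice's ephemeral public key"
bob_pub = b"bob's ephemeral public key"
key_exchange_output = b"output of the key exchange"

transcript = alice_pub + bob_pub + key_exchange_output
sas = int.from_bytes(hashlib.sha256(transcript).digest()[:4], "big") % 10**6
print(f"compare on both screens: {sas:06d}")
```

Because the attacker's injected public keys are part of the transcript, a man-in-the-middle ends up producing different digits on each side.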

The next question is, what's more to SAS-based protocols? After all, [Sylvain Pasini wrote a 300-page thesis on the subject](http://secu.famillepasini.ch/files/2009/phd/pasini_phd_thesis.pdf). I'll answer that in my next post. ]]>
a sPAKE is first and foremost a **PAKE**, which stands for **Password-Authenticated Key Exchange**.
This simply means that authentication in the key exchange is provided via the knowledge of a password.
The s (resp. b) in front means symmetric (resp. balanced). This indicates that both sides know the password.

![Alice and Bob trying to use a sPAKE to authenticate a key exchange](/upload/Screen_Shot_2020-02-09_at_5.21_.17_PM_.png)

Other PAKEs where only one side knows the password exist, these are called aPAKE for asymmetric (or augmented) PAKEs.
Yes I know the nomenclature is a bit confusing :)

The most promising sPAKE scheme currently seems to be **SPAKE2**, which is [in the process of being standardized here](https://tools.ietf.org/html/draft-irtf-cfrg-spake2-09).
There are other sPAKEs, like Dragonfly which is used in WPA3, but they don't seem to provide as strong properties as SPAKE2.

The trick to a symmetric PAKE is to use the password to blind the key exchange's ephemeral keypairs.

![The first part of a sPAKE with SPAKE2](/upload/Screen_Shot_2020-02-09_at_5.22_.49_PM_.png)

* Pass the password into a memory-hard hash function like Argon2 to obtain w. Can you guess why we do this? (leave a comment if you do!)
* Convert it to a group element. To do this we simply consider w a scalar and do a scalar multiplication with a generator of our subgroup (M or N depending if you're the client or the server, can you guess why we use [different generators](https://eprint.iacr.org/2019/1194.pdf)?)

> NOTE: If you know BLS or OPAQUE, you might be wondering why we don't use a "hash-to-curve" algorithm, this is because we don't need to obtain a group element with an unknown discrete logarithm in SPAKE2.

Once the blinded (with the password) public keys have been exchanged, both sides can compute a shared group element:

* Alice computes K = h × alice_private_key × (S - w × N)
* Bob computes K = h × bob_private_key × (T - w × M)

Spend a bit of your time to understand these equations.
What happens is that both Alice and Bob first unblind the public key they've received, then perform a key exchange with it, then multiply it with the value h. What's this value h? The cofactor, or simply put: the other annoying subgroup.

Finally Alice and Bob **hash the whole transcript**, which is the concatenation of:

* Alice's identity.
* Bob's identity.
* The message S sent by Bob.
* The message T sent by Alice.
* The shared group element K.
* The hardened password w.

The hash of this transcript gives us two things:

* A **shared secret** !
* A key that is further expanded (via a KDF) to obtain two authentication keys.

These authentication keys' sole purpose is to provide **key confirmation** in the last round-trip of messages.
That is to say, at this point, if we don't do anything more, we don't know whether either Alice or Bob truly managed to compute the shared secret.

Key confirmation is pretty simple: both sides just have to compute an authentication tag over the transcript with one of the authentication keys produced.

The final protocol looks a bit dense, but you should be able to decipher it if you've read this far.
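As a sanity check on the math, here is a toy sketch of SPAKE2 in multiplicative notation over the integers modulo a Mersenne prime (the post uses additive elliptic-curve notation). Everything here is illustrative: real SPAKE2 runs over an elliptic curve, uses fixed constants M and N whose discrete logs are unknown (unlike here), and the draft uses its own hash and KDF choices.

```python
import hashlib, secrets

prime = 2**127 - 1  # toy group: integers modulo a Mersenne prime
g = 3               # toy generator

# hardened password: w = memory-hard-hash(password); scrypt stands in here
w = int.from_bytes(hashlib.scrypt(b"password", salt=b"salt",
                                  n=2**14, r=8, p=1), "big") % (prime - 1)

# toy blinding elements (their discrete logs MUST be unknown in practice!)
M, N = pow(g, 5, prime), pow(g, 7, prime)

# Alice blinds her ephemeral public key with M^w, Bob blinds his with N^w
x = secrets.randbelow(prime - 1)
T = pow(M, w, prime) * pow(g, x, prime) % prime   # Alice -> Bob
y = secrets.randbelow(prime - 1)
S = pow(N, w, prime) * pow(g, y, prime) % prime   # Bob -> Alice

# each side unblinds the other's message, then finishes the key exchange
K_alice = pow(S * pow(pow(N, w, prime), -1, prime) % prime, x, prime)
K_bob   = pow(T * pow(pow(M, w, prime), -1, prime) % prime, y, prime)
assert K_alice == K_bob  # both equal g^(x*y)

# hash the whole transcript to derive the shared secret
transcript = (b"alice" + b"bob" + S.to_bytes(16, "big") +
              T.to_bytes(16, "big") + K_alice.to_bytes(16, "big") +
              w.to_bytes(16, "big"))
shared_secret = hashlib.sha256(transcript).digest()
```

If the two sides used different passwords, the unblinding step produces different group elements, the keys disagree, and key confirmation fails.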

Authentication, What The Fuck? David Wong Sun, 26 Jan 2020 23:42:36 +0100 http://www.cryptologie.net/article/489/authentication-what-the-fuck/ http://www.cryptologie.net/article/489/authentication-what-the-fuck/#comments
In the context of **cryptographic primitives** like message authentication codes (MACs) and authenticated encryption with associated data (AEAD), authentication really refers to authenticity or integrity. And as the [Cambridge dictionary](https://dictionary.cambridge.org/us/dictionary/english/authenticity) says:

> **Authenticity**. the quality of being real or true.
> The poems are supposed to be by Sappho, but they are actually of doubtful authenticity.
> The authenticity of her story is beyond doubt.

The proof is in the pudding. When talking about the security properties of primitives like MACs, cryptography talks about **unforgeability**, which does relate to authenticity.

So whenever you hear things like "*is this payload authenticated with HMAC?*", think authenticity, think integrity.
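For example, with Python's standard library (the key and payload are placeholders):

```python
import hmac, hashlib

# "Is this payload authenticated with HMAC?" in code: the tag proves
# authenticity/integrity of the payload under a shared key.
key = b"shared secret key"
payload = b"some payload"
tag = hmac.new(key, payload, hashlib.sha256).digest()

# the receiver recomputes the tag and compares in constant time
expected = hmac.new(key, payload, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```

Note that nothing here proves *who* sent the payload beyond "someone holding the key"; that's the identification sense of authentication, discussed next.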

In the context of **protocols** though (e.g. TLS) authentication refers to **identification**: the concept of **proving who you are**.

So whenever you hear things like "*Is the server authenticated?*", think "identities are being proven".

This dual sense really annoys me, but in the end this ambiguity is encompassed in the definition of authentication:

> the process or action of proving or showing something to be true, genuine, or valid.

[Diego F. Aranha](https://twitter.com/dfaranha/status/1221558605259988994) proposes a clever way to disambiguate the two:

* **origin/entity authentication**. You're proving that an entity really is who they say they are.
* **message authentication**. You're proving that a message is genuine.

Note that an argument against this distinction is the following: to authenticate a message, you need a key. This key comes from somewhere (it's your context, or your "who"). So when you authenticate a message, you are really authenticating the context. This falls short in scenarios where, for example, you trust the root hash of a Merkle tree, which authenticates all of its leaves.

The bottom line is, authentication is about proving that something is what it is supposed to be. And that thing can be a person, or a message, or maybe even something else.

This is not all. In the security world, people also confuse authorization with authentication :)

([part 2 is here](https://cryptologie.net/article/494/authentication-what-the-fuck-part-ii/)) ]]>
Messaging Layer Security: A Few Thoughts David Wong Sun, 12 Jan 2020 21:31:53 +0100 http://www.cryptologie.net/article/488/messaging-layer-security-a-few-thoughts/ http://www.cryptologie.net/article/488/messaging-layer-security-a-few-thoughts/#comments I really appreciate what the people are doing there, and what they are trying to solve.
I think **group messaging** is currently a huge **mess**, as every application I have seen/audited seemed to invent a new way to implement group chat.
A common standard and guidelines would greatly help.

MLS' goal is to provide a solution to end-to-end encryption for group chats. A solution that **scales**.

If you don't know how the MLS protocol works, I advise you to read [Michael Rosenberg's blog post](https://mrosenberg.pub/cryptography/2019/07/10/molasses.html) or to watch the Real World Crypto talk on the subject (might not be available at the moment).

Thinking about the standard, I have two questions:

1. **Does a group chat lose any notion of privacy/confidentiality after it gets too large?** For example, if you are in a Hong Kong group trying to organize a protest and there are more than 1,000 people in the group, what are the odds that one of them is a [cop](https://www.youtube.com/watch?v=_s5R9cFdRmg)?
2. Would a group chat protocol targeting groups with a small number of participants (let's say 50 at most) be able to provide **better security assurances efficiently**?

---

For example, here are two security properties (taken from [SoK: Secure Messaging](https://oaklandsok.github.io/papers/unger2014.pdf)) that MLS does not provide:

> **Speaker Consistency**: All participants agree on the sequence of messages sent by each participant.

This means that if Alice (who is part of a group chat with Bob and Eve) colludes with the server, she can send "I like cats" to Bob and "I like dogs" to Eve.

> **Global Transcript**: All participants see all messages in the same order. Note that this implies speaker consistency

This means that a server could deliver Alice's messages to Bob in the order she sent them, while delivering them to Eve in a different order.

---

I have the following open questions:

* Are these attacks important to protect against?
* Is there an efficient protocol to prevent these attacks for groups of reasonable size?
* If we cannot prevent them, can we detect them and warn the users?
* If we are willing to change the protocol when going from 2 participants to 3 participants, would we be willing to change the protocol when going from N to N+1 participants (where N is the threshold number of participants beyond which confidentiality/privacy fades away)?

]]>
A history of end-to-end encryption and the death of PGP David Wong Thu, 02 Jan 2020 13:57:57 +0100 http://www.cryptologie.net/article/487/a-history-of-end-to-end-encryption-and-the-death-of-pgp/ http://www.cryptologie.net/article/487/a-history-of-end-to-end-encryption-and-the-death-of-pgp/#comments
---
This is where everything starts: we now have an open peer-to-peer protocol that everyone on the internet can use to communicate.

---

* 1991
* The US government introduces the 1991 Senate Bill 266, which attempts to allow "the Government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law" from "providers of electronic communications services and manufacturers of electronic communications service equipment". The bill fails to pass into law.
* **Pretty Good Privacy (PGP) - released by Phil Zimmermann.**
* 1993 - The US Government launches a criminal investigation against Phil Zimmermann for sharing a cryptographic tool with the world (at the time, crypto export laws were a thing).
* 1995 - Zimmermann publishes PGP's source code in a book via MIT Press, dodging the criminal investigation by using the first amendment's protection of books.

---
That's it, PGP is out there, people now have a weapon to fight government surveillance. As Zimmermann puts it:

> PGP empowers people to take their privacy into their own hands. There's a growing social need for it. That's why I wrote it.

---

* 1995 - The RSA Data Security company proposes S/MIME as an alternative to PGP.
* 1996
* criminal investigation against Zimmermann and PGP is dropped.
* PGP Inc is founded by Zimmermann, PGP becomes licensed-software.
* [RFC 1991 - PGP Message Exchange Formats](https://www.ietf.org/rfc/rfc1991.txt)
* 1997
* **GNU Privacy Guard (GPG)** - version 0.0.0 released by Werner Koch.
* PGP 5 is released.
> The original agreement between Viacrypt and the Zimmermann team had been that Viacrypt would have even-numbered versions and Zimmermann odd-numbered versions. Viacrypt, thus, created a new version (based on PGP 2) that they called PGP 4. To remove confusion about how it could be that PGP 3 was the successor to PGP 4, PGP 3 was renamed and released as PGP 5 in May 1997
* 1997 - PGP Inc is acquired by Network Associates
* 1998 - [RFC 2440 - OpenPGP Message Format](https://www.ietf.org/rfc/rfc2440.txt)
> OpenPGP - This is a definition for security software that uses PGP 5.x as a basis.
* 1999
* GPG version 1.0 released
* **[Extensible Messaging and Presence Protocol (XMPP)](https://xmpp.org/)** is developed by the open source community. XMPP is a federated chat protocol (users can run their own servers) that does not have end-to-end encryption and requires communications to be synchronous (both users have to be online).
* 2002 - PGP Corporation is formed by ex-PGP members and the PGP license/assets are bought back from Network Associates
* **2004 - Off-The-Record (OTR) is introduced by Nikita Borisov, Ian Avrum Goldberg, and Eric A. Brewer as an extension of the XMPP chat protocol in "[Off-the-Record Communication, or, Why Not To Use PGP](https://otr.cypherpunks.ca/otr-wpes.pdf)"**
> We argue that [...] the encryption must provide perfect forward secrecy to protect from future compromises [...] the authentication mechanism must offer repudiation, so that the communications remain personal and unverifiable to third parties

---
We now have an interesting development: messaging (which is seen as a different way of communication for most people) is getting the same security treatment as email.

---

* 2006 - GPG version 2.0 released
* 2007 - [RFC 4880 - OpenPGP Message Format](https://www.ietf.org/rfc/rfc4880.txt)
* 2010 - Symantec purchases the rights for PGP for \$300 million.
* 2011 - [Cryptocat](https://en.wikipedia.org/wiki/Cryptocat) is released.
* **2013 - The TextSecure (now Signal) application is introduced, built on top of the TextSecure protocol with Axolotl (now the Signal protocol with the double ratchet) as an evolution of OTR and SCIMP. It provides asynchronous communication unlike other messaging protocols, closing the gap between messaging and email.**
* 2014
* [Matrix](https://matrix.org/) is introduced as a modern alternative to XMPP.
* [Matthew Green - What’s the matter with PGP?](https://blog.cryptographyengineering.com/2014/08/13/whats-matter-with-pgp/)

---
PGP becomes increasingly criticized, as Matt Green puts it in 2014:

> It’s time for PGP to die.

---

* 2015
* XMPP gets end-to-end encryption with the [OMEMO](https://en.wikipedia.org/wiki/OMEMO) extension (which re-uses the Signal protocol)
* [SoK: Secure Messaging](http://cacr.uwaterloo.ca/techreports/2015/cacr2015-02.pdf)
* [Moxie - GPG and me](https://moxie.org/blog/gpg-and-me/)
* 2016
* [Filippo Valsorda - I'm giving up on PGP](https://blog.filippo.io/giving-up-on-long-term-pgp/)
> All in all, I should be the perfect user for PGP. Competent, enthusiast, embedded in a similar community. But it just didn't work.
* WhatsApp now uses the Signal protocol, adding end-to-end encryption for its billions of users.

---
Another unexpected development: security professionals are now giving up on encrypted emails, and are moving to secure messaging.
Is messaging going to replace email, even though it feels like a different means of communication?

Moxie's quotes are quite interesting:

> In the 1990s, I was excited about the future, and I dreamed of a world where everyone would install GPG. Now I’m still excited about the future, but I dream of a world where I can uninstall it.

> In addition to the design philosophy, the technology itself is also a product of that era. As Matthew Green has noted, “poking through an OpenPGP implementation is like visiting a museum of 1990s crypto.” The protocol reflects layers of cruft built up over the 20 years that it took for cryptography (and software engineering) to really come of age, and the fundamental architecture of PGP also leaves no room for now critical concepts like forward secrecy.

> In 1997, at the dawn of the internet’s potential, the working hypothesis for privacy enhancing technology was simple: we’d develop really flexible power tools for ourselves, and then teach everyone to be like us. Everyone sending messages to each other would just need to understand the basic principles of cryptography. [...]

> The GnuPG man page is over sixteen thousand words long; for comparison, the novel Fahrenheit 451 is only 40k words. [...]

> Worse, it turns out that nobody else found all this stuff to be fascinating. Even though GPG has been around for almost 20 years, there are only ~50,000 keys in the “strong set,” and less than 4 million keys have ever been published to the SKS keyserver pool ever. By today’s standards, that’s a shockingly small user base for a month of activity, much less 20 years.

---

* 2018
* the first draft of **Messaging Layer Security (MLS)** is published, a standard for end-to-end encrypted group chat protocols.
* [EFAIL](https://efail.de/) releases damaging vulnerabilities against most popular PGP and S/MIME implementations.
> In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.
* 2019 - [Latacora - The PGP Problem](https://latacora.micro.blog/2019/07/16/the-pgp-problem.html)
> Why do people keep telling me to use PGP? The answer is that they shouldn’t be telling you that, because PGP is bad and needs to go away.

---

EFAIL is the straw that broke the camel's back. PGP is officially dead.

---

* 2019
* Matrix is out of beta and working on making end-to-end encryption the default.
* Moxie gives a [controversial talk at CCC](https://peertube.co.uk/videos/watch/12be5396-2a25-4ec8-a92a-674b1cb6b270) arguing that advancements in security, privacy, censorship resistance, etc. are incompatible with slow moving decentralized protocols. Today, most serious end-to-end encrypted messaging apps use the Signal protocol (Signal, Facebook Messenger, WhatsApp, Skype, etc.)
* XMPP's response: [Re: the ecosystem is moving](https://blog.jabberhead.tk/2019/12/29/re-the-ecosystem-is-moving/)
* Matrix's response: [On privacy versus freedom](https://matrix.org/blog/2020/01/02/on-privacy-versus-freedom/)

> did you like this? This will be part of a book on cryptography! [Check it out here](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09). ]]>
Difference between shamir secret sharing (SSS) vs Multisig vs aggregated signatures (BLS) vs distributed key generation (dkg) vs threshold signatures David Wong Thu, 19 Dec 2019 18:08:17 +0100 http://www.cryptologie.net/article/486/difference-between-shamir-secret-sharing-sss-vs-multisig-vs-aggregated-signatures-bls-vs-distributed-key-generation-dkg-vs-threshold-signatures/ http://www.cryptologie.net/article/486/difference-between-shamir-secret-sharing-sss-vs-multisig-vs-aggregated-signatures-bls-vs-distributed-key-generation-dkg-vs-threshold-signatures/#comments
Let me introduce the problem: Alice owns a private key which can sign transactions. The problem is that she has a lot of money, and she is scared that someone will target her to steal all of her funds.

Cryptography offers some solutions to avoid this being a key management problem.

The first one is called **Shamir Secret Sharing (SSS)**, which is simply about splitting the signing private key into n shares.
Alice can then distribute the shares among her friends. When Alice wants to sign a transaction, she has to ask her friends to give her back their shares, which she can use to recreate the signing private key. Note that SSS has many, many variants; for example VSSS allows participants to **verify** that malicious shares are not being used, and PSSS allows participants to **proactively** rotate their shares.
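Here is a toy 2-of-3 Shamir split in Python, using polynomial evaluation and Lagrange interpolation over a prime field (the field and parameters are illustrative):

```python
import secrets

PRIME = 2**127 - 1  # prime field for a toy Shamir split

def split(secret, k, n):
    # random polynomial of degree k-1 with f(0) = secret
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]  # shares are points (x, f(x))

def combine(shares):
    # Lagrange interpolation at x = 0 recovers f(0), i.e. the secret
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = split(secret, k=2, n=3)  # any 2 of the 3 shares recover the secret
assert combine(shares[:2]) == secret
assert combine([shares[0], shares[2]]) == secret
```

Notice the problem the post points out: `combine` reassembles the raw secret in one place, making whoever runs it a single point of failure again.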

This is not great though, as there is a small timeframe in which Alice is the single point of failure again (the moment she holds all the shares).

A logical next step is to **change the system**, so that Alice cannot sign a transaction by herself.
A **multi-signature** system (or **multisig**) would require n participants to sign the same transaction and send the n signatures to the system.
This is much better, except for the fact that n signatures means that the transaction size increases linearly with the number of signers required.

We can do better: a multi-signature system with **aggregated signatures**. Signature schemes like **BLS** allow you to compress the n signatures into a single signature. Note that BLS is currently much slower than popular signature schemes like ECDSA and EdDSA, so there is a trade-off between speed and size.

We can do even better though!

So far one still has to maintain a set of n public keys so that a signature can be verified. **Distributed Key Generation (DKG)** allows a set of participants to collaborate on the construction of a key pair, and on signing operations.
This is very similar to SSS, except that there is never a single point of failure. This makes DKG a **Multi-Party Computation (MPC)** algorithm.

The BLS signature scheme can also aggregate public keys into a single key that will verify their aggregated signatures, which allows the construction of a DKG scheme as well.

Interestingly, you can do this with Schnorr signatures too! The following diagram explains a simplified version of the scheme:
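The simplified version can also be sketched in code. This is a toy over a tiny group with made-up parameters: each signer contributes a share of the nonce and of the response, and a single equation verifies the sum against the product of the public keys. Be aware that this naive aggregation is insecure as-is (it is vulnerable to rogue-key attacks, which schemes like MuSig address).

```python
import hashlib
import random

# Toy Schnorr group (NOT real-world sizes): p = 2q + 1, g generates the order-q subgroup
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# each signer i has a key pair (x_i, X_i = g^x_i)
keys = [(x, pow(g, x, p)) for x in (random.randrange(1, q) for _ in range(3))]
X = 1
for _, Xi in keys:
    X = X * Xi % p          # aggregated public key

msg = "send 10 coins to Bob"
nonces = [(r, pow(g, r, p)) for r in (random.randrange(1, q) for _ in range(3))]
R = 1
for _, Ri in nonces:
    R = R * Ri % p          # aggregated nonce commitment

c = H(R, X, msg)            # common challenge
# each signer computes s_i = r_i + c * x_i; the responses simply add up
s = sum(r + c * x for (x, _), (r, _) in zip(keys, nonces)) % q

# a single check against the single aggregated key: g^s == R * X^c
assert pow(g, s, p) == R * pow(X, c, p) % p
```

The verifier only ever sees one public key, one commitment, and one response, which is the whole point of aggregation.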

Note two things:

* All these schemes can be augmented to become **threshold schemes**: we don't need n signatures from the n signers anymore, but only a threshold m of n. (Having said that, when people talk about **threshold signatures**, they often mean the threshold version of DKG.) This way if someone loses their keys, or is on holiday, we can still sign.
* Most of these schemes assume that all participants are honest and by default don't tolerate malicious participants. More complicated schemes made to tolerate malicious participants exist.

Unfortunately all of this is pretty new, and as it is still an active field of study, no standard algorithm has been settled on so far.

That's the difference!

One last thing: there have been some recent ideas to use **zero-knowledge proofs (ZKP)** to do what aggregated signatures do, but for multiple messages (all the previous solutions signed the same message). The idea is to release a proof that you have verified all the signatures associated with a set of messages. If the zero-knowledge proof is shorter than the combined signatures, it did its job!

> did you like this? This will be part of a book on cryptography! [Check it out here](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09).

EDIT: thanks to [Dowhile and bascule](https://www.reddit.com/r/crypto/comments/edqrky/difference_between_shamir_secret_sharing_sss_vs/fbl62rb/) for pointing out errors in the post.
Writing a book is hard David Wong Sun, 20 Oct 2019 23:04:54 +0200 http://www.cryptologie.net/article/485/writing-a-book-is-hard/ http://www.cryptologie.net/article/485/writing-a-book-is-hard/#comments It doesn't help that I started writing right before accepting a new position for a very challenging (and interesting) project.
But here I am, halfway there, and I think I'm onto something. I can't wait to get to the end and look at the finished product as a real paper book :)

To give you some insight into this process, let me share some thoughts.

**Writing is hard**. I have realized that I need at least a full day to write something. It does take time to get into the zone, and writing in the morning before work just doesn't work for me (and writing after work is even worse). As [JP Aumasson put it (about his process of writing Serious Cryptography)](https://research.kudelskisecurity.com/2017/10/16/the-making-of-serious-cryptography/):

> I quickly realized that I didn’t know everything about crypto. The book isn’t just a dump of my own knowledge, but rather the fruit of hours of research—sometimes a single page would take me hours of reading before writing a single word.

So when I don't have a full day ahead of me, I use my limited time to read articles and do research on topics that I don't fully understand. This is useful, and I make more progress during the weekend once I have time to write.

**Revising is hard**. If writing a chapter takes some effort X, revising a chapter takes effort X^3. After each chapter, several people at Manning, and in my circle, provide feedback. At the same time, I realize that there's much more I want to write about subject Y, and I start piling up articles and papers that I want to read before I revise the chapter. I end up spending a TON of effort revising and revisiting chapters.

**Getting feedback is hard**. I am lucky: I know a lot of people with different levels of knowledge in cryptography. This is very useful when I want to test how different audiences read different chapters. Unfortunately, people are good at providing positive feedback and bad at providing negative feedback, and only the negative feedback ends up being useful. If you want to help, [the first chapters are free to read](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09) and I'm ready to buy you a beer for some constructive negative feedback.

**Laying out a chapter is hard**. Writing a blog post is relatively easy. It's short, self-contained, and often something I've been thinking about for weeks or months before I put it into writing. Writing a chapter for a book is more like writing a paper: you want it to be perfect. Knowing a lot about the subject makes this even more difficult: you know you can make something great, and not achieving that would be disappointing. One strategy that I wish I had more time to spend on is the following:

* create a presentation about the subject of a chapter
* give the presentation and observe what diagrams need revisiting and what parts are hard for an audience to understand
* after many iterations put the slides into writing

I'm convinced this is the right approach, but I am not sure how I could optimize for this. If you're in SF and want me to give you a presentation on one of the chapters of the book, leave a comment here :)
Algorand's cryptographic sortition David Wong Thu, 26 Sep 2019 03:44:39 +0200 http://www.cryptologie.net/article/484/algorands-cryptographic-sortition/ http://www.cryptologie.net/article/484/algorands-cryptographic-sortition/#comments Their breakthrough was to make a leader-based BFT algorithm work in a permissionless setting (and I believe they are the first ones who managed to do this).
At the center of their system lies a **cryptographic sortition** algorithm. It's quite interesting, so I made a video to explain it!

<iframe width="560" height="315" src="https://www.youtube.com/embed/XfP862hCrDM" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

PS: I've been doing these videos for a while, and I still don't have a cool intro, so if you want to make me one, please do :D
What's my birthday? David Wong Sat, 21 Sep 2019 08:17:04 +0200 http://www.cryptologie.net/article/483/whats-my-birthday/ http://www.cryptologie.net/article/483/whats-my-birthday/#comments
I've been asked similar questions, and every time my answer goes something like this:

> you need to calculate **the number of outputs** you need to generate in order to get good odds of finding collisions. If that number is impressively large, then it's fine.

The **birthday bound** is often used to calculate this. If you do crypto, you must have heard something like this:

> with the SHA-256 hash function, you need to generate at least 2<sup>128</sup> hashes in order to have more than 50% chance of finding collisions.

And you know that, usually, you can just divide the exponent of your domain space by two to find out how many outputs you need to generate to reach such a collision.

Now, this figure is a bit **deceiving** when it comes to **real world cryptography**. This is because we probably don't want to define "**OK, this is bad**" as someone reaching the point of having 50% chance of finding a collision. Rather, we want to say:

> someone reaching **one in a billion** chance (or something much lower) to find a collision would be bad.

In addition, what does this mean for us in practice? How many identifiers are we going to generate per second? For how long do we need this thing to stay secure?

To truly answer this question, one needs to plug in the correct numbers and play with the birthday bound formula. Since this is not the first time I had to do this, I thought to myself "why don't I create an app for this?" and [voila](https://www.davidwong.fr/whatsmybirthday).
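Plugging in the numbers can be sketched in a few lines of Python. This uses the standard approximation p ≈ 1 - exp(-n²/(2N)) for the probability of a collision among n outputs drawn from a space of size N, solved for n; the function name `outputs_needed` is mine.

```python
from math import log1p, log2, sqrt

def outputs_needed(bits, probability):
    """Approximate number of random outputs, drawn from a space of
    size 2^bits, needed to reach the given collision probability.
    Solves p ~ 1 - exp(-n^2 / (2N)) for n; log1p keeps tiny p accurate."""
    N = 2.0 ** bits
    return sqrt(-2 * N * log1p(-probability))

# the familiar figure: ~2^128 SHA-256 hashes for a 50% chance of a collision
print(log2(outputs_needed(256, 0.5)))   # ≈ 128.2

# a one-in-a-billion chance still takes ~2^113.6 hashes
print(log2(outputs_needed(256, 1e-9)))  # ≈ 113.6
```

Note how little the exponent drops between 50% and one-in-a-billion: for a 256-bit space, even the paranoid threshold stays comfortably out of reach.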