Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant on the Cryptography Services team of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.
After defending my master's thesis in LaBRI's amphitheater, I thought I would never have to go back there again. Little did I know, ECC 2015 took place in the exact same room. I was back in school.
Talks
It was a first for me, but for many people it was just one more ECC. Most attendees knew each other; a few were wandering alone, mostly students. The atmosphere was serious although relaxed. People were mostly in their late 30s and 40s, a good portion of them French, the rest coming from all over the world, with a sizable minority of government people. Rumor has it the NSA was hiding somewhere.
Nothing really groundbreaking was introduced; as everybody knows, ECC is more about politics than math these days. New results were so rare that a few talks were not even about ECC, like the one about Logjam (a good talk though) or a few about lattices.
We got warmed up by a one-hour cocktail party organized by Microsoft; by 6pm most people were "canard", as the Belgian crypto people were saying. We left Bordeaux's magnificent sun and sat back down in the hot room with our wine glasses. Then, every 5 minutes, a random person would show up on stage and present something, sometimes serious, sometimes ridiculous, sometimes funny.
Panel
The panel was introduced by Benjamin Smith and was composed of 7 figures: Dan Bernstein, who needs no introduction; Bos from NXP; Flori from the French government agency ANSSI; Hamburg from Cryptography Research (who was surprised that his company let him take part in the panel); Lochter from the BSI (German government); and Moody from NIST.
It was short and focused on standardization; here are the notes I took at the time. Please don't quote anything from here, it's inexact and was written down after the fact.
Presenter: you have very different people in front of you, you have exactly 7 white people in front of you, hopefully it will be different next year.
The consensus is that standardization in ECC is not working at all. Maybe it should be more like the AES competition. Also, people are disappointed that not enough academics were involved... general sadness.
Lochter: it's not good to change too much, things are working for now and Post-Quantum will replace ECC. We should start standardizing PQ, because everything is slow: mathematics takes years to get standardized, then implemented, etc. Maybe the problem is not standardization but keeping software up to date.
Hamburg: PQ is the end of every DLP-based cryptosystem.
Bos: I agree we shouldn't do this (ECC2015) too often. Also we should have a framework where we can plugin different parameters and it would work with any kind of curves.
Someone: why build new standards if the old/current one is working fine. This is distracting implementers. How many crypto standards do we already have? (someone else: a lot)
Bos: Peter's talk was good (about formal verification, other panelists echoed that after). It would be nice for implementers to have tools to test. Even a database with a huge amount of test vectors would be nice
Flori: people don't trust the NIST curves anymore, surely for good reasons, so if we do new curves we should make them trustworthy. Did anyone here try generating the nist, dan, brainpool, etc. curves? (3 people raised their hands).
Bernstein: you're writing a paper? Why don't you put the Sage script online? That way people won't make mistakes or run into a typo in your paper, etc.
Lochter: people have to implement around patents all the time (ranting).
Presenter: NSA said, if you haven't moved to ECC yet, since there will be PQ, don't get into too much trouble trying to move to ECC. Isn't that weird?
Bernstein: we've known for years that PQ computers are coming. There is no doubt. When? It is not clear. NSA's message is nice. Details are weird though.
We've talked to people at the NSA about that. Really weird. Everybody we've talked to has said "we didn't see that in advance" (the announcement). So who's behind that? No one knows. (someone in the audience says that maybe the NSA's website got hacked)
Flori: I agree it's hard to understand what the NSA is saying. So if someone in the audience wants to make some clarification... (waiting for some hidden NSA agent to speak. No one speaks. People laugh).
Hamburg: usually they say they do not deny, or they say they do not confirm. This time they said both (the NSA about Quantum computers).
Lochter: 30 years is the lifetime of secret data, could be 60 years if you double it (grace period?). We take the NSA's announcement seriously, satellites have stuff so we can upgrade them with curves (?)
Presenter: maybe they (the NSA) are scared of all the curve standardization happening and that we might find a curve by accident that they can't break. (audience laughing)
Bos: we have to follow standards when we implement in smartcards...
Lochter: we can't blame the standard. Look at OpenSSL, they did this mess themselves.
Moody: standards give a false sense of security but we are better with them than without (Lochter looks at him weirdly; Moody seems embarrassed that he doesn't have anything else to say about it).
Bernstein: we can blame it on the standard!
Lochter: blame the process instead. Implementers should get involved in the standardization process.
Bernstein: I'll give you an example of implementers participating in standardization: Rivest sent a huge comment to NIST ("implementers have enough rope to hang themselves"). That was one scientist involved in the standardization.
Presenter: we got 55 minutes of the panel done before the first disagreement happened. Good. (everybody laughs)
Bos: we don't want every app dev to be able to write crypto. It is not ideal. We can't blame the standards. We need cryptographers to implement crypto.
A team of researchers ran an attack for nine months, and from 4.8 billion ephemeral handshakes with different TLS servers they recovered hundreds of private keys.
The theory behind the attack is actually pretty old: Lenstra's famous memo on the CRT optimization was written in 1996. Basically, when using the CRT optimization to compute an RSA signature, if a fault happens, a simple computation allows the private key to be recovered. This kind of attack is usually studied and fought in the realm of smartcards and other embedded devices, where faults can be induced with lasers and other magical weapons.
The research is novel in a way because they made use of accidental fault attacks, which are one of the rare kinds of remotely exploitable side-channel attacks.
This is interesting; the oldest passive form of accidental fault attack I can think of is bitsquatting, which might go back to that 2011 DEF CON talk.
But first, what is vulnerable?
Any library that uses the CRT optimization for RSA might be vulnerable. A cheap countermeasure is to verify the signature after computing it, which is what most libraries do. The paper has a nice list of who does that:
| Implementation | Verification |
|---|---|
| cryptlib 3.4.2 | disabled by default |
| GnuPG 1.4.1.8 | yes |
| GnuTLS | see libgcrypt and Nettle |
| Go 1.4.1 | no |
| libgcrypt 1.6.2 | no |
| Nettle 3.0.0 | no |
| NSS | yes |
| ocaml-nocrypto 0.5.1 | no |
| OpenJDK 8 | yes |
| OpenSSL 1.0.1l | yes |
| OpenSwan 2.6.44 | no |
| PolarSSL 1.3.9 | no |
But is it only about what library you are using? Your server still has to be defective to produce a fault. The paper also has a nice table showing which vendors, in their experiments, were most prone to this vulnerability:
| Vendor | Keys | PKI | Rate |
|---|---|---|---|
| Citrix | 2 | yes | medium |
| Hillstone Networks | 237 | no | low |
| Alteon/Nortel | 2 | no | high |
| Viprinet | 1 | no | always |
| QNO | 3 | no | medium |
| ZyXEL | 26 | no | low |
| BEJY | 1 | yes | low |
| Fortinet | 2 | no | very low |
If you're using one of these, you might want to check with your vendor whether a firmware update or another fix has been discussed since the discovery of this attack. You might also want to revoke your keys.
Since the tests were done at a broad scale and not against particular machines, more devices are most likely vulnerable to this attack. Also, only instances connected to the internet and offering TLS on port 443 were tested. The vulnerability could potentially exist in any stack using the CRT optimization with RSA.
The first thing you should do is assess where in your stack the RSA algorithm is used to sign. Does it use CRT? If so, does it verify the signature? Note that the blinding techniques we talked about in one of our cryptography bulletins (May 1st of this year) will not help here.
What can cause your server to produce such erroneous signatures?
They list several reasons in the paper:
old or vulnerable libraries with broken big-integer operations. For example, with CVE-2014-3570 the squaring operation of OpenSSL was not working properly for some inputs
race conditions, when applications are multithreaded
errors in the CPU cache, other caches or the main memory.
Note that at the end of the paper, they investigate whether particular hardware might be the cause, and conclude that several of the devices leaking private keys were using Cavium hardware, and in some cases its "custom" version of OpenSSL.
I'm curious. How does that work?
RSA-CRT
Remember, an RSA signature is basically \(y = x^d \pmod{n}\), with \(x\) the message, \(d\) the private key and \(n\) the public modulus. (You should also use a padding scheme, but we won't cover that here.) You can then verify a signature by computing \(y^e \pmod{n}\) and checking that it is equal to \(x\) (with \(e\) the public exponent).
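Here's a quick sketch of those two formulas in Python, with made-up toy numbers (tiny key, no padding, purely illustrative):

```python
# Textbook RSA signing/verification with a toy key (real keys are 2048+ bits,
# and real signatures use a padding scheme).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse; needs Python 3.8+)

x = 42                              # the "message" (already hashed/padded in real life)
y = pow(x, d, n)                    # sign: y = x^d mod n
assert pow(y, e, n) == x            # verify: y^e mod n == x
```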
CRT is short for Chinese Remainder Theorem (I should have said that earlier). It's an optimization that lets you compute the signature in \(\mathbb{Z}_p\) and \(\mathbb{Z}_q\) and then combine the two halves into \(\mathbb{Z}_n\) (remember \(n = pq\)). It's way faster that way.
You first compute the two halves

$$
\begin{cases}
y_p = x^{d \bmod (p-1)} \pmod{p}\\
y_q = x^{d \bmod (q-1)} \pmod{q}
\end{cases}
$$

and then combine these two values to get the signature:
$$ y = y_p q (q^{-1} \pmod{p}) + y_q p (p^{-1} \pmod{q}) \pmod{n} $$
And you can verify it yourself: this value is equal to \(y_p \pmod{p}\) and to \(y_q \pmod{q}\).
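Here's the same toy example signed with the CRT optimization, a minimal sketch to convince yourself that the recombination formula above gives back the same signature:

```python
# RSA-CRT signing with the same toy key as above.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
x = 42

# the exponents get reduced mod p-1 and q-1, which is where the speedup comes from
dp, dq = d % (p - 1), d % (q - 1)
yp = pow(x, dp, p)                  # y_p, computed mod p
yq = pow(x, dq, q)                  # y_q, computed mod q

# recombination, exactly the formula above
y = (yp * q * pow(q, -1, p) + yq * p * pow(p, -1, q)) % n

assert y == pow(x, d, n)            # same signature as the non-CRT computation
assert y % p == yp and y % q == yq  # and indeed y = y_p mod p and y = y_q mod q
```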
The vulnerability
Let's say that an error occurs in only one of these two elements. For example, \(y_p\) is not correctly computed. We'll call it \(\widetilde{y_p}\) instead. It is then combined with a correct \(y_q\) to produce a wrong signature that we'll call \(\widetilde{y}\).
Notice that if we raise it to the power \(e\) and subtract \(x\) we get:
$$
\begin{cases}
\widetilde{y}^e - x = \widetilde{y_p}^e - x = a \pmod{p}\\
\widetilde{y}^e - x = y_q^e - x = 0 \pmod{q}
\end{cases}
$$
This is it. The second line is zero because the correct half still satisfies \(y_q^e = x \pmod{q}\). So we know that \(q\) divides \(\widetilde{y}^e - x\), and it also divides \(n\), whereas \(p\) no longer divides \(\widetilde{y}^e - x\). We just have to compute the greatest common divisor of \(n\) and \(\widetilde{y}^e - x\) to recover \(q\).
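And here's the whole attack on the same toy key: we perturb the computation of \(y_p\) to simulate a fault, and a single gcd hands us the factor \(q\) (note at the end how the verify-after-sign countermeasure would have caught the bad signature before it ever left the server):

```python
from math import gcd

# Simulate a faulty RSA-CRT signature and recover a factor of n with one gcd.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
x = 42

dp, dq = d % (p - 1), d % (q - 1)
yp_bad = (pow(x, dp, p) + 1) % p    # the fault: y_p is off by one
yq = pow(x, dq, q)                  # y_q is computed correctly
y_bad = (yp_bad * q * pow(q, -1, p) + yq * p * pow(p, -1, q)) % n

# the attacker only sees n, e, the message x and the bad signature y_bad:
recovered_q = gcd((pow(y_bad, e, n) - x) % n, n)
assert recovered_q == q

# the cheap countermeasure: a library that verifies the signature before
# releasing it would notice it's wrong and never hand it out
assert pow(y_bad, e, n) != x
```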
The attack
The attack could potentially work on anything that displays an RSA signature, but the paper focuses on TLS.
A normal TLS handshake is a two round trip protocol that looks like this:
The client (the first one to speak) sends a ClientHello packet: a thing filled with bytes saying things like "this is a handshake", "this is TLS version 1.0", "I can use this algorithm for the handshake", "I can use this algorithm for encrypting our communications", etc.
Here's what it looks like in Wireshark:
The server (the second one to speak) replies with 3 messages: a similar ServerHello, a message with its certificate (and that's how we authenticate the server), and a ServerHelloDone message consisting of just a few bytes saying "I'm done here!".
A second round trip is then done, where the client encrypts a secret with the server's public key; both sides later use it to compute the TLS shared keys. We won't cover that here.
Another kind of handshake can be performed if both the client and the server accept ephemeral key exchange algorithms (Diffie-Hellman or Elliptic Curve Diffie-Hellman). This provides Perfect Forward Secrecy: if the conversations are recorded by a third party and the server's private key is later recovered, nothing will be compromised. Instead of using the server's long-term public key to compute the shared key, the server generates an ephemeral public key and uses it to perform an ephemeral handshake, usually just for this session or a limited number of them.
An extra packet called ServerKeyExchange is sent. It contains the server's ephemeral public key.
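If you've never seen an ephemeral key exchange, here's a toy Diffie-Hellman in Python with made-up tiny parameters (real TLS uses 2048-bit groups or elliptic curves); the only point is that both sides pick a fresh secret for this handshake:

```python
import secrets

# Toy ephemeral Diffie-Hellman: new secrets for every handshake, so leaking the
# server's long-term RSA key later doesn't reveal past session keys.
p, g = 23, 5                         # toy group parameters, far too small for real use

a = secrets.randbelow(p - 2) + 1     # server's ephemeral secret
b = secrets.randbelow(p - 2) + 1     # client's ephemeral secret
A = pow(g, a, p)                     # server's ephemeral public key (this is what gets signed)
B = pow(g, b, p)                     # client's ephemeral public key

assert pow(B, a, p) == pow(A, b, p)  # both ends derive the same shared secret
```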
Interestingly, the signature does not cover the algorithm used for the ephemeral key exchange, which led to a long series of attacks that recently ended with FREAK and Logjam.
This is how they checked for the potential vulnerability: by checking whether that signature was computed correctly.
I'm a researcher, what's in it for me?
Well what are you waiting for? Go read the paper!
But here is a list of what I found interesting:
instead of DDoSing one target, they broadcast their attack across the whole internet:
We implemented a crawler which performs TLS handshakes and looks for miscomputed RSA signatures. We ran this crawler for several months.
The intention behind this configuration is to spread the load as widely as possible. We did not want to target particular servers because that might have been viewed as a denial-of-service attack by individual server operators. We assumed that if a vulnerable implementation is out in the wild and it is somewhat widespread, this experimental setup still ensures the collection of a fair number of handshake samples to show its existence.
We believe this approach—probing many installations across the Internet, as opposed to stressing a few in a lab—is a novel way to discover side-channel vulnerabilities which has not been attempted before.
Some TLS servers need a valid Server Name Indication to complete a handshake, so connecting to port 443 of random IPs shouldn't be very efficient. But they found that it was actually not a problem, and most keys found that way came from weird certificates that wouldn't even be trusted by your browser.
To avoid too many DNS resolutions they bypassed the TTL values and cached everything (they used PowerDNS for that)
They guessed which devices performed the TLS handshakes from what was written in the X.509 certificates, in the subject distinguished name or Common Name fields
They used SSL_set_msg_callback() (see doc) to avoid modifying OpenSSL.
I gave a talk about my paper at the NCC Group office in Chicago and recorded myself.
If you have any questions, or you think something was not clear or badly explained, I'll take any feedback, since this is going to be my master's defense in two weeks.
Alright! My master's thesis is done. Here's a download link.
It's a timing attack on a vulnerable version of OpenSSL, in particular its ECDSA signatures with binary curves.
There was an optimization right before the constant-time scalar multiplication of the nonce with the public point. That leads to a timing attack that leaks the length of the ephemeral keys (the nonces) used in an OpenSSL server's signatures.
In this paper I explain how to set up such an attack, and how to use lattices to recover the private key from just the lengths of the nonces of a bunch of signatures collected during ephemeral handshakes.
If this doesn't make sense to you just read the paper :D
Also everything is on this github repo. You can reproduce my setup for a vulnerable server and an attacker. Patch and tools are there. If you end up getting better results than the ones in the paper, well tell me!
In public-key cryptography and computer security, a root key ceremony is a procedure where a unique pair of public and private root keys is generated. Depending on the certificate policy, the generation of the root keys may require notarization, legal representation, witnesses and 'key holders' to be present, as the information on the system is the responsibility of those parties. The best practice is to follow the SAS 70 standard for root key ceremonies.
The actual Root Key-Pair generation is normally conducted in a secure vault that has no communication or contact with the outside world other than a single telephone line or intercom. Once the vault is secured, all personnel present must prove their identity using at least two legally recognized forms of identification. Every person present, every transaction and every event is logged by the lawyer in a Root Key Ceremony Log Book and each page is notarized by the notary. From the moment the vault door is closed until it is re-opened, everything is also video recorded. The lawyer and the organization’s two signatories must sign the recording and it too is then notarized.
Finally, as part of the above process, the Root Key is broken into as many as twenty-one parts and each individual part is secured in its own safe for which there is a key and a numerical lock. The keys are distributed to as many as twenty-one people and the numerical code is distributed to another twenty-one people.
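The description above doesn't say how the key is actually broken into parts, but here's a toy illustration of the simplest possible scheme: an n-of-n XOR split, where every single share is needed to rebuild the key (real ceremonies often use a threshold scheme like Shamir's secret sharing instead):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n_shares: int) -> list:
    """n-of-n split: the key is the XOR of all the shares."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    shares.append(reduce(xor_bytes, shares, key))   # last share makes the XOR come out to the key
    return shares

def combine(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

root_key = secrets.token_bytes(32)       # a made-up 256-bit "root key"
shares = split(root_key, 21)             # 21 parts, one per safe
assert combine(shares) == root_key       # all 21 together rebuild the key
assert combine(shares[:20]) != root_key  # missing even one share leaves you with random bytes
```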
Hey you! So mmm, I don't know if you've been reading my blog for long, but it all started when I got accepted into the University of Bordeaux's cryptography master's program. At first it was just a place where I would talk about my (then) new life in Bordeaux and what I was doing in class.
2 years and 287 blog posts later, here I am, still blogging and still in school. But not for long! Well not in school for long, I'm still gonna blog don't worry.
So yeah, the big news is, I'll be starting full time as a security consultant for the Cryptography Services team of NCC Group in November!
Woop woop!
Pardon? You are here for the crypto? ah umm, wait, I have this: