On May 15th, I approached Yuval Yarom with a few issues I had found in some TLS implementations. This led to a collaboration between Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, Yuval Yarom, and me. Spearheaded by Eyal, the research has now been published here. And as you can see, the inventor of RSA himself is now recommending that you deprecate RSA in TLS.
The issues were disclosed back in August, and the teams behind the affected projects were given time to resolve them. Everyone was very cooperative, and CVEs have been dispatched (CVE-2018-12404, CVE-2018-19608, CVE-2018-16868, CVE-2018-16869, CVE-2018-16870).
The attack leverages a side-channel leak via cache access timings in these implementations in order to break the RSA key exchange of TLS. The attack is interesting from multiple points of view (besides the fact that it affects many major TLS implementations):
It affects all versions of TLS (including TLS 1.3) and QUIC, even though the latest version of TLS does not even offer an RSA key exchange! This feat is achieved thanks to the only known downgrade attack on TLS 1.3.
It uses state-of-the-art cache attack techniques. Flush+Reload? Prime+Probe? Branch prediction? We have them all.
The attack is very efficient. We found ways to ACTIVELY target any browser, slow some of them down, or use the long-tail distribution to repeatedly try to break a session. We even make use of lattices to speed up the attack.
Manger and Ben-Or on RSA PKCS#1 v1.5. You've heard of Bleichenbacher's million messages attack? Guess what, we found better. We use Manger's OAEP attack on RSA PKCS#1 v1.5, and even Ben-Or's algorithm, which is more efficient than Bleichenbacher's and was published BEFORE his work in 1998. I uploaded some of the code here.
While Ben-Or et al.'s research was initially used to support the security proofs of RSA, it was nonetheless already enough to attack the protocol. But it was only in 1998 that Daniel Bleichenbacher discovered a padding oracle and devised his own practical attack on RSA. The consequences were severe: most TLS implementations could be broken, and thus mitigations were designed to prevent Daniel's attack. What followed was a series of "re-discoveries" in which the world realized that it is not so easy to implement such mitigations.
Let's be realistic, the mitigations that developers had to implement were unrealistic. Furthermore, an implementation that would attempt to log such attacks would actually help the attacks. Isn't that funny?
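To see why these mitigations are so hard to get right, here is a rough sketch (my own simplification, not any implementation's actual code) of the conformance check that a PKCS#1 v1.5 decryption performs; any observable difference between the accept and reject paths, even a log line, is a padding oracle:

```python
# Toy version of the PKCS#1 v1.5 conformance check (assuming a
# 2048-bit modulus, so k = 256 bytes). A real implementation must
# make this check constant-time and indistinguishable on failure.
def looks_pkcs1_conformant(em: bytes, k: int = 256) -> bool:
    if len(em) != k or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        # the padding string must be at least 8 non-zero bytes,
        # terminated by a 0x00 separator before the message
        sep = em.index(0x00, 2)
    except ValueError:
        return False
    return sep >= 10
```

Each early `return False` above is a branch an attacker can try to distinguish, which is exactly what makes "just don't leak the result" unrealistic advice.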
The research I'm talking about today can be seen as one more of these "re-discoveries". My previous boss's boss (Scott Stender) once told me: "you can either be the first paper on the subject, or the best-written one, or the last one". We're definitely not the first one, I don't know if we write that well, but we sure are hoping to be the last one :)
RSA in TLS?
Briefly, SSL/TLS (except TLS 1.3) can use an RSA key exchange during a handshake to negotiate a shared secret. An RSA key exchange is pretty straightforward: the client encrypts a shared secret under the server's RSA public key, then the server receives it and decrypts it. If we can use our attack to decrypt this value, we can then passively decrypt the session (and obtain a cookie, for example) or actively impersonate one of the peers.
Attacking Browsers, In Practice.
We employ the BEAST-attack model (in addition to being colocated with the victim server for the cache attack) which I have previously explained in a video here.
Why several connections instead of just one? Because most browsers (except Firefox, which we can fool) will time out after some time (usually 30 seconds). If an RSA key exchange is negotiated between the two peers, great: we have all the time in the world to passively attack the protocol. If an RSA key exchange is NOT negotiated, we need to actively attack the session to either downgrade it or fake the server's RSA signature (more on that later). This takes time, because the attack requires us to send thousands of messages to the server. It will likely fail. But if we can try again, many times? It will likely succeed after a few trials. And that is why we continuously send connection attempts to bank.com.
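The arithmetic behind "try again, many times" is just the geometric distribution. A quick sketch, using a made-up per-attempt success probability (the 10% figure is purely illustrative, not from our measurements):

```python
# Probability that at least one of `tries` independent attempts
# succeeds, given a (hypothetical) per-attempt success rate p.
def success_after(p: float, tries: int) -> float:
    return 1 - (1 - p) ** tries

# with an assumed 10% per-try rate, ~44 tries push us past 99%
assert success_after(0.10, 44) > 0.99
```

So even an unreliable active attack becomes near-certain if the attacker can keep opening fresh connections in the background.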
Attacking TLS 1.3
There exist two ways to attack TLS 1.3. In both attacks, the server needs to support an older version of the protocol as well. The first technique relies on the fact that the server's public key is an RSA public key, used to sign its ephemeral keys during the handshake, and that the older version of TLS that the server supports re-uses the same key. The second one relies on the fact that both peers support an older version of TLS with a cipher suite supporting an RSA key exchange.
While TLS 1.3 does not use the RSA encryption algorithm for its key exchanges, it does use RSA signatures: if the server's certificate contains an RSA public key, that key will be used to sign the server's ephemeral public keys during the handshake. A TLS 1.3 client can advertise which RSA signature algorithms it supports (if any): RSA PKCS#1 v1.5 or RSA-PSS. As most TLS 1.2 servers already provide support for the former, most will re-use their certificates instead of updating to the more recent RSA-PSS. RSA digital signatures as specified by the standard are really close to the RSA encryption algorithm specified by the same document; so close that Bleichenbacher's decryption attack on RSA encryption also works to forge RSA signatures. Intuitively, the decryption attack takes $pms^e$ and allows us to find $(pms^e)^d = pms$; to forge a signature, we can pretend that the content to be signed $tbs$ (see RFC 8446) is a ciphertext $tbs = pms^e$ and obtain $tbs^d$ via the attack, which is by definition the signature over the message $tbs$. However, this signature forgery requires an additional step (blinding) in the conventional Bleichenbacher attack (in practice this can lead to hundreds of thousands of additional messages).
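The $tbs^d$ trick can be checked with toy numbers. The sketch below uses a tiny, completely insecure RSA key (nothing like a real 2048-bit modulus) just to show that what a decryption oracle computes is exactly a valid signature:

```python
# Tiny (insecure) RSA key, for illustration only
p, q = 61, 53
n = p * q                            # modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: 2753

tbs = 65              # "to be signed" value, fed to the attack as a ciphertext
sig = pow(tbs, d, n)  # what the decryption attack recovers: tbs^d mod n

# ...which verifies as an RSA signature over tbs
assert pow(sig, e, n) == tbs
```

The only difference in practice is the blinding step mentioned above, needed because $tbs$ is not a well-formed PKCS#1 v1.5 ciphertext to begin with.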
Key re-use has been shown in the past to allow for complex cross-protocol attacks on TLS. Indeed, we can successfully forge our own signature of the handshake transcript (contained in the CertificateVerify message) by negotiating a previous version of TLS with the same server. The attack can be carried out if this new connection exposes a length or Bleichenbacher oracle with a certificate using the same RSA key, but for key exchanges.
Downgrading to TLS 1.2
Every TLS connection starts with a negotiation of the TLS version and
other connection attributes. As the new version of TLS (1.3) does not
offer an RSA key exchange, the exploitation of our attack must first
begin with a downgrade to an older version of TLS. TLS 1.3 being
relatively recent (August 2018), most servers supporting it will also
support older versions of TLS (which all provide support for RSA key
exchanges). A server not supporting TLS 1.3 would thus respond with an
older TLS version’s (TLS 1.2 in our example) server hello message. To
downgrade a client’s connection attempt, we can simply spoof this answer
from the server. Besides protocol downgrades, other techniques exist to force browser clients to fall back to older TLS versions: network glitches, a spoofed TCP RST packet, a lack of response, etc. (see POODLE).
Continuing with a spoofed TLS 1.2 handshake, we can simply present the server's RSA certificate in a ServerCertificate message and then end the handshake with a ServerHelloDone message. At this point, if the server does not have a trusted certificate allowing for RSA key exchanges, or if the client refuses to support RSA key exchanges or versions older than TLS 1.2, the attack is stopped. Otherwise, the client uses the RSA public key contained in the certificate to encrypt the TLS premaster secret, sends it in a ClientKeyExchange message, and ends its part of the handshake with a ChangeCipherSpec and a Finished message.
At this point, we need to perform our attack in order to decrypt the RSA-encrypted premaster secret. The last Finished message that we send must contain an authentication tag (with HMAC) over the whole transcript, in addition to being encrypted with the transport keys derived from the premaster secret. While some clients will have no handshake timeouts, most serious applications like browsers will give up on the connection attempt if our response takes too much time to arrive. While the attack only takes a few thousand messages, this might still be too much in practice. Fortunately, several techniques exist to slow down the handshake: for example, we can send the ChangeCipherSpec message, which might reset the client's timer.
Once the decryption attack terminates, we can send the expected Finished message to the client and finalize the handshake. From there everything is possible, from passively relaying and observing messages to the impersonated server through to actively tampering with requests made to it.
This downgrade attack bypasses multiple downgrade mitigations: one server-side and two client-side. TLS 1.3 servers that negotiate older versions of TLS must advertise this information to their peers. This is done by setting a quarter of the bytes of the server_random field in the ServerHello message to a known value. TLS 1.3 clients that end up negotiating an older version of TLS must check for these values and abort the handshake if they are found. But as noted by the RFC, "It does not provide downgrade protection when static RSA is used." Since we alter this value to remove the warning bytes, the client has no opportunity to detect our attack. On the other hand, a TLS 1.3 client that ends up falling back to an older version of TLS must advertise this information in its subsequent client hellos; since we impersonate the server, we can simply ignore this warning. Furthermore, a client also includes the version of its client hello inside the encrypted premaster secret. For the same reason as previously, this mitigation has no effect on our attack. As it stands, this is the only known downgrade attack on TLS 1.3, and we are the first to successfully exploit it in this research.
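For reference, the server-side mitigation works roughly like this sketch (the sentinel values are the ones from RFC 8446; the check itself is my own paraphrase). The last 8 of the 32 server_random bytes carry a fixed marker, which is exactly what our man-in-the-middle overwrites:

```python
# RFC 8446 downgrade sentinels: the last 8 bytes of server_random
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")           # "DOWNGRD" + 0x01
DOWNGRADE_TLS11_OR_BELOW = bytes.fromhex("444f574e47524400")  # "DOWNGRD" + 0x00

def downgrade_detected(server_random: bytes) -> bool:
    """What a TLS 1.3 client checks when an older version is negotiated."""
    tail = server_random[-8:]
    return tail in (DOWNGRADE_TLS12, DOWNGRADE_TLS11_OR_BELOW)

# an honest TLS 1.3 server falling back to 1.2 trips the check...
assert downgrade_detected(bytes(24) + DOWNGRADE_TLS12)
# ...but our spoofed ServerHello simply carries fresh random bytes
assert not downgrade_detected(bytes(32))
```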
Attacking TLS 1.2
As with the previous attack, both the client and the server targeted need to support RSA key exchanges. As this is a typical key exchange, most known browsers and servers support it, although they will often prefer to negotiate a forward-secret key exchange based on ephemeral versions of the Elliptic Curve or Finite Field Diffie-Hellman key exchanges. This is done as part of the cipher suite negotiation during the first two handshake messages. To avoid this outcome, the ClientHello message can be intercepted and modified to strip it of any non-RSA key exchanges advertised. The server will then only choose from a set of RSA-key-exchange-based cipher suites, which will allow us to perform the same attack as previously discussed. Our modification of the ClientHello message could only be detected via the Finished message authenticating the correct handshake transcript, but since we are in control of this message, we can forge the expected tag.
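As a sketch of the interception step (the suite code points are from the IANA TLS registry; the list of static-RSA suites here is illustrative, not exhaustive):

```python
# Keep only static-RSA-key-exchange cipher suites in a ClientHello's
# advertised list, so the server cannot pick a forward-secret one.
STATIC_RSA_SUITES = {
    0x002F,  # TLS_RSA_WITH_AES_128_CBC_SHA
    0x0035,  # TLS_RSA_WITH_AES_256_CBC_SHA
    0x009C,  # TLS_RSA_WITH_AES_128_GCM_SHA256
    0x009D,  # TLS_RSA_WITH_AES_256_GCM_SHA384
}

def strip_to_rsa(advertised: list) -> list:
    return [suite for suite in advertised if suite in STATIC_RSA_SUITES]

# an ECDHE suite and a TLS 1.3 suite get dropped; static RSA survives
assert strip_to_rsa([0xC02F, 0x1301, 0x009C]) == [0x009C]
```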
On the other side, if both peers end up negotiating an RSA key exchange
on their own, we can passively observe the connection and take our time
to break the session.
LiveOverflow recently pushed a video about Nadim Kobeissi's latest paper. It is quite informative and invites discussion: can an end-to-end encryption app served by a webserver be secure? If you haven't seen the video, go watch it. It gives a good explanation of something that is actually quite simple. Here's someone's opinion on unnecessarily complicated papers:
“We use the standard Bellare-Rogaway definition of key exchange, which we briefly summarize below. [Fourteen pages of dense notation omitted] And in conclusion most Bluetooth devices use ‘000000’ as the hardcoded password.”
I have seen this kind of thing happen quite often in the cryptography and security community. While some companies are 100% snake oil, others play on the border with their marketing claims while doing very real research. For example, advances in encrypted databases are mostly pushed by companies, while attacks come from researchers unhappy with their marketing claims. But this is a story for another time.
If you look at it from the other side, are mobile applications that much more secure? The threat model is not THAT different, except that with a mobile app (thanks Nik Kinkel and Mason Hemmel for pointing that out to me) no specific targeting can happen. And this is probably a good argument if your paranoia stops at this level (and you trust the app store, your OS, your hardware, etc.)
Indeed, a webapp can easily target you, serving a different app depending on the client's IP, whereas mobile apps need the cooperation of the App store to do that. So you're one level deeper in your defense in depth.
What about Signal's "native" desktop application? Is it any better? Well... you're probably downloading it from their webserver anyway and you're probably updating the app frequently as well...
I've been implementing the Disco protocol in C. It makes sense since Disco was designed specifically for embedded devices. The library is only 1,000 lines-of-code, including all the cryptographic primitives, and does everything the Go implementation does except for signing.
If you don't know what Disco is, it's a cryptographic library that allows you to secure communications (like TLS) and to hash, encrypt, authenticate, derive keys, generate random numbers, etc.
It's not as plug-and-play as the Golang version. There are no wrappers yet to encrypt, authenticate, hash, derive keys, etc. and I haven't made a decision as to what algorithm I should support for signatures (ed25519 with strobe? Or Strobe's Schnorr-variant with Curve25519?)
So it's mostly for people who know what they're doing for now.
Don't let that deter you though! I need people to play with it in order to improve the library. If you need help I'm here!
The funny one is this realistically proportional figure, where the areas of the different circles represent the number of lines-of-code of each library.
The C library is currently awful, so I won't link to it until I get it to a prettier place, but as a proof of concept it shows that this can be achieved in a mere 1,000 lines-of-code, all while supporting the same functionality as a TLS library and even more. The following diagram is the dependency graph, or "trust graph", of an implementation of Disco:
As one can see, Disco relies on Strobe (which further relies on keccak-f) for the symmetric cryptography, and X25519 for the asymmetric cryptography. The next diagram shows the trust graph of a biased TLS 1.3 implementation for comparison:
This was done mostly for fun, so I might be missing some things, but you can see that it's starting to get more involved. Finally, I made a final diagram of what most installations actually depend on:
In this one I included other versions of TLS, but not all of them. I also did not include their own trust graphs. Thus, this diagram is actually less complex than it could be in reality, especially knowing that some companies continue to support SSL 3.0 and TLS 1.0.
I've also included non-cryptographic things like X.509 certificates and their parsers, because they are a major dependency, one that was dubbed the most dangerous code in the world by M. Georgiev, S. Iyengar, S. Jana, R. Anubhai, D. Boneh, and V. Shmatikov.
I love Real World Crypto. It's just the best applied crypto conference and a must if you work in the industry and do cryptography. I've tried to cover it many times; here is RWC 2018, RWC 2017 and here is my coverage of RWC 2016. I will be at RWC 2019 because I just won't miss it. It's in San Jose this year. You should go too.
Encrypting a file is hard. I often need to do it to protect confidential data before sending it to someone. Besides PGP (yerk), there don't seem to be any lightweight tools to do that easily. The next best option is often to use a common messaging app like Signal. So I made my own. It's called Eureka, and it's available as binaries, or, if you have Golang installed on your device, directly by doing this:
$ go get github.com/mimoo/eureka
It's also 100 LOC. It's just doing a simple job that seems to be missing from most default tooling.
Disco is a specification that, once implemented, allows you to encrypt sessions (like TLS) and to encrypt, authenticate, hash, generate random numbers, derive keys, etc. (like a cryptographic library). All of that usually takes less than a thousand lines of code.
Here's how you can do it:
1. Strobe. The first step is to find a Strobe implementation (Disco uses Strobe for all the symmetric crypto). Reference implementations of Strobe exist in C and Python, and unofficial ones exist in Golang (from yours truly) and in Rust (from Michael Rosenberg), but if you're dealing with another language, you'll have to implement the Strobe specification first!
2. Noise. Read the "How to Read This Document and Implement Disco" section of the Disco specification. What it tells you is to implement the Noise specification, but to ignore its SymmetricState and CipherState sections. (You can also ignore any symmetric crypto in there.) You can find Noise libraries in many languages, but implementing it yourself is usually pretty straightforward (here you only really have to implement the HandshakeState).
3. Disco. Once you have that (which should take 500 LOC tops), implement the SymmetricState specified by Disco.
I'll personally introduce sponge constructions, Strobe and Disco:
Today, SSL/TLS is the de facto standard for encrypting communications. While its latest version (1.3) is soon to be released, new actors in the field are introducing more modern and better-designed protocols. This talk is about the past, the present, and the future of session encryption. We will see how TLS led the way, how the Noise protocol framework allowed the standardization of more modern and targeted protocols, and how the duplex construction helped change the status quo.
Facebook has released their TLS 1.3 library, Fizz, as open source. In their post they mention early data (0-RTT):
Using early data in TLS 1.3 has several caveats, however. An attacker can easily replay the data, causing it to be processed twice by the server. To mitigate this risk, we send only specific whitelisted requests as early data, and we’ve deployed a replay cache alongside our load balancers to detect and reject replayed data. Fizz provides simple APIs to be able to determine when transports are replay safe and can be used to send non-replay safe data.
My guess is that either all GET requests are considered safe, or only GET requests on the / route are considered safe.
I'm wondering why they use a replay cache on the other side as this overhead could nullify the benefits of 0-RTT.
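I don't know what Fizz's cache actually looks like, but the general idea can be sketched as a TTL'd set of digests of the early data (all names here are mine, not Fizz's API):

```python
import hashlib
import time

class ReplayCache:
    """Toy 0-RTT replay cache: remember a digest of each piece of
    early data for `ttl` seconds and reject anything seen before."""

    def __init__(self, ttl: float = 3600.0):
        self.ttl = ttl
        self.seen = {}  # digest -> expiry timestamp

    def fresh(self, early_data: bytes) -> bool:
        now = time.time()
        # evict expired entries
        self.seen = {d: t for d, t in self.seen.items() if t > now}
        digest = hashlib.sha256(early_data).digest()
        if digest in self.seen:
            return False  # replay detected
        self.seen[digest] = now + self.ttl
        return True
```

The memory and lookup cost of something like this, sitting next to every load balancer, is exactly the overhead I'm wondering about.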
They also mention that every state transition is stored in one place, which is true.
You sometimes don't want or can't use TCP, and you thus have to deal with UDP.
To simplify, TCP is just a collection of algorithms that extend UDP to make it support in-order delivery of streams. UDP, on the other hand, does not care about such streams; instead it sends blocks of messages (called datagrams) in whatever order and provides no guarantee whatsoever that you will ever receive them.
TCP also provides some security guarantees on top of IP: by starting a session with a TCP handshake, it allows both endpoints of a communication to provide proof of IP ownership, or at least proof that they can read whatever is sent to their claimed IPs. This means that to mess with TCP, you need to be an on-path attacker man-in-the-middle'ing the connection between the two endpoints. UDP has none of that; it has no notion of sessions. Whatever packets are received from an IP, it'll just accept them. This means that an off-path attacker can trivially send packets that look like they are coming from any IP, effectively spoofing the IP of either one of the endpoints or of anyone on the network. If the protocol built on top of UDP does not do anything to detect and prevent this, then bad things might happen (from complex attacks to simple denial of service).
This is not the only bad thing that can happen, though. Sometimes (meaning for some protocols), a well-crafted message to an endpoint will trigger a large and disproportionate response. Malicious actors on the internet can use this to perform amplification attacks, which are denial-of-service attacks. To do that, the actor sends this special type of message while pretending to be a victim IP, and the endpoint then responds with a large amount of data to the victim IP.
Intuitively, it sounds like both of these issues can be tackled by doing some sort of TCP-like handshake, but in practice this is rarely the case, as the very first message of your protocol (which hasn't been able to provide a proof of IP ownership yet) can still trigger large responses. This is why in QUIC, the very first message from a client needs to be padded with 0s in order to make it as large as the server's response. This means that an attacker would have to spend at least as many resources as the attack provides, nullifying its benefits.
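QUIC's padding rule can be sketched in a few lines (1200 bytes is the minimum size of a client Initial packet in the QUIC drafts; the function names are mine):

```python
MIN_INITIAL_SIZE = 1200  # minimum size of a QUIC client Initial packet

def pad_initial(payload: bytes) -> bytes:
    """Pad the client's first flight with zero bytes so a spoofed
    source must spend as much bandwidth as it can reflect."""
    padding_needed = max(0, MIN_INITIAL_SIZE - len(payload))
    return payload + b"\x00" * padding_needed

def amplification_factor(request: bytes, response: bytes) -> float:
    return len(response) / len(request)

# a tiny handshake message still costs the attacker 1200 bytes on the wire
padded = pad_initial(b"client hello bytes")
assert len(padded) == MIN_INITIAL_SIZE
# so a 1200-byte server response no longer amplifies
assert amplification_factor(padded, b"s" * 1200) <= 1.0
```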
Looking at another protocol built on top of UDP: DTLS (TLS for UDP) has a notion of "cookie", which is really some kind of bearer token that the client has to keep providing to the server in relevant messages, in order to prove that it is indeed the same endpoint talking to the server.
TLS 1.3 has been released as RFC 8446. It took 28 drafts and more than 4 years since draft 0 to come out. Cloudflare has a long blog post about it. Some questions about the deployment of 1.3:
Will we see a fast deployment of the protocol? It seems like browsers are ready, but web servers will have to follow.
Who will use 0-RTT? I'm expecting the big players to use it (largely because they've been requesting it) but what about the small ones?
Are we going to see vulnerabilities in the protocol? It seems highly unlikely, TLS 1.2 itself (with AES-GCM) has remained solid for more than 10 years.
Are we going to see vulnerabilities in the implementations? We will see about that. If anything happens, I'm expecting it to happen around 0-RTT, PSKs and key exports. But let's hope that libraries have learned their lessons.
Is BearSSL going to implement TLS 1.3? It sounds like it.
First, there is nothing new in being able to man-in-the-middle and decrypt your own TLS sessions (+ a simple protocol on top). Sure, the tool is neat, but it is not breaking WhatsApp in this regard; it is merely allowing you to look at (and to modify) what you're sending to the WhatsApp server.
The blog post goes through some interesting ways to mess with a WhatsApp group chat, as it seems that the application relies in some places on metadata that you are in control of. This is bad hygiene, but for me the interesting attack is attack number 3: you can send messages to SOME members of the group, and send different messages to OTHER members of the group.
At first I thought: this is nothing new. If you read the WhatsApp whitepaper, it is a clear limitation of the protocol: you do not have transcript consistency. By that I mean that nothing cryptographically enforces that all members of a group chat are seeing the exact same thing.
It is always hard to ensure that the last messages have been seen by everyone, of course (some people might be offline), but transcript consistency really only cares about the ordering, dropping, and tampering of messages.
Let's talk about WhatsApp some more. Its protocol is very different from Signal's: in group chats, each member shares their own unique symmetric key with the other members of the group (separately). This means that when you join a group with Alice and Bob, you first create some random symmetric key. After that, you encrypt it under Alice's public key and send it to her. You then do the same thing with Bob. Once all the members have knowledge of your random symmetric key, you can encrypt all of your messages with it (perhaps using a ratchet). When a member leaves, you have to go through this dance again in order to provide forward secrecy to the group (leavers won't be able to read messages anymore). If you followed all of that, you'll see that the protocol does not really give you a way to enforce transcript consistency: you are in control of the keys, so you choose who you encrypt what messages to.
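A toy model of this key-distribution dance (no real crypto here, just the structure; all names are mine) makes the consistency problem visible: the sender holds the key and picks the recipients, so nothing ties together the messages sent to different members:

```python
import os

class Member:
    def __init__(self, name: str):
        self.name = name
        self.sender_key = os.urandom(32)  # my random symmetric key
        self.known_keys = {}              # peer name -> their sender key

    def join(self, group: list):
        # in the real protocol, sender_key is encrypted under each
        # peer's public key and sent to them separately
        for peer in group:
            peer.known_keys[self.name] = self.sender_key
            self.known_keys[peer.name] = peer.sender_key
        group.append(self)

    def send(self, recipients: list, message: str) -> dict:
        # nothing forces us to call this once with the same message
        # for everyone: transcript consistency is not enforced
        return {peer.name: message for peer in recipients}

group = []
alice, bob, eve = Member("alice"), Member("bob"), Member("eve")
for member in (alice, bob, eve):
    member.join(group)

# eve can show alice and bob two different "group" messages
assert eve.send([alice], "see you at 5") != eve.send([bob], "see you at 6")
```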
But wait! Normally, the server should distribute the messages in a fan-out way (the server distributes one encrypted message to X participants), forcing you to collude with WhatsApp's servers in order to perform this kind of shenanigans. In the blog post's attack, it seems like you are able to bypass this and do not need the help of WhatsApp's servers. This is bad, and I'm still trying to figure out what really happened.
By the way, to my knowledge no end-to-end encrypted protocol has this property of transcript consistency for group chats. Interestingly, Messaging Layer Security (MLS), the latest community effort to standardize a messaging protocol, does not have a solution for this either. I'll probably talk about MLS in a different blog post, because it is still very interesting.
The last thing I wanted to mention is trust inside of a group chat. We've been trying to solve trust in one-to-one conversations for many, many years, and between PGP being broken and the many wars between secure messaging applications, it seems like this is still something we're struggling with. Just yesterday, a post titled "I don't trust Signal" made the front page of Hacker News. So is there hope for trust in a group chat anytime soon?
First, there are three kinds of group chat:
large group chats
medium-sized group chats
small group chats
I'll argue that large group chats have given up on trust, as it is next to impossible to figure out who is who. Unless, of course, we're dealing with a PKI and a company enforcing onboarding with a CA. And even this has issues (beyond the traitors and snoops).
I'll also argue that small group chats are fine with the current protocols, because you're probably trusting the other members not to run these kinds of attacks.