david wong

Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.


Discrete logarithms in Multiplicative prime groups posted March 2016

I'm running some tests on how Pollard rho performs. I implemented it in Sage here and found that it doesn't perform that well. Pollard kangaroo is also bad, but that must come from my implementation (I didn't dig further since I don't really need kangaroo: I already know the order, and the value I'm looking for is not confined to any particular interval).

[image: stats]

In the stats above, old_rho is Pollard rho, rho_lambda is the (mislabeled) Pollard kangaroo algorithm, and trials is the simple enumeration.

I also implemented the algorithm in Go, along with some helper functions/variables that make Go's bignum library a bit easier to tolerate. And guess what? What takes Sage 63 seconds to compute takes Go only 5 seconds. The implementation is a straight port of what I did in Sage, with no optimizations.
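To give an idea of what the algorithm looks like, here is a minimal sketch of the textbook Pollard rho for discrete logs in Go (with Floyd's cycle finding and toy parameters). This is not my actual code, just the version you would find in the Handbook of Applied Cryptography:

    // pollard.go: textbook Pollard rho for discrete logarithms (toy sketch).
    package main

    import (
        "fmt"
        "math/big"
    )

    var (
        one   = big.NewInt(1)
        three = big.NewInt(3)
    )

    // step is the usual 3-way partition: depending on y mod 3 we multiply by h,
    // square, or multiply by g, while tracking the exponents of y = g^a * h^b.
    func step(y, a, b, g, h, p, n *big.Int) {
        switch new(big.Int).Mod(y, three).Int64() {
        case 1: // multiply by h
            y.Mul(y, h).Mod(y, p)
            b.Add(b, one).Mod(b, n)
        case 0: // square
            y.Mul(y, y).Mod(y, p)
            a.Lsh(a, 1).Mod(a, n)
            b.Lsh(b, 1).Mod(b, n)
        default: // multiply by g
            y.Mul(y, g).Mod(y, p)
            a.Add(a, one).Mod(a, n)
        }
    }

    // rhoDlog looks for x such that g^x = h (mod p), where g generates a subgroup
    // of known order n. It returns nil on an unlucky run (just start over).
    func rhoDlog(g, h, p, n *big.Int) *big.Int {
        y1, a1, b1 := big.NewInt(1), big.NewInt(0), big.NewInt(0) // tortoise
        y2, a2, b2 := big.NewInt(1), big.NewInt(0), big.NewInt(0) // hare
        for {
            step(y1, a1, b1, g, h, p, n) // tortoise moves one step
            step(y2, a2, b2, g, h, p, n) // hare moves two steps
            step(y2, a2, b2, g, h, p, n)
            if y1.Cmp(y2) == 0 { // collision: g^a1 * h^b1 = g^a2 * h^b2
                // so a1 + x*b1 = a2 + x*b2 (mod n), i.e. x = (a1-a2)/(b2-b1) mod n
                num := new(big.Int).Sub(a1, a2)
                den := new(big.Int).Sub(b2, b1)
                den.Mod(den, n)
                inv := new(big.Int).ModInverse(den, n)
                if inv == nil {
                    return nil // b1 = b2 mod n: useless collision
                }
                return num.Mul(num, inv).Mod(num, n)
            }
        }
    }

    func main() {
        // toy parameters: p = 10007 is a safe prime, q = (p-1)/2 = 5003 is prime,
        // and g = 3 generates the subgroup of order q
        p, n, g := big.NewInt(10007), big.NewInt(5003), big.NewInt(3)
        x := big.NewInt(1234)
        h := new(big.Int).Exp(g, x, p)
        fmt.Println("recovered log:", rhoDlog(g, h, p, n)) // expect 1234
    }

Pollard rho needs about \(\sqrt{n}\) group operations on average (same as baby-step giant-step, but with constant memory), so on toy groups like this it is instant; the interesting part of the benchmark above is really the constant factor between Sage and Go.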

comment on this story

DROWN attack on OpenSSL posted March 2016

There is a new attack on OpenSSL. It's called DROWN.

[image: drown]

Two problems:

  1. In OpenSSL versions prior to this January's releases, SSLv2 is not disabled by default. The developers thought that removing all the SSLv2 cipher suites from the default cipher string (back in 2010) would be enough, but... nope. Even if they are not advertised in the ServerHello, you can still complete a handshake with whatever SSLv2 cipher you want. Another way of completely disabling SSLv2 exists, but it's recent and it is not the default option.

  2. A padding oracle attack still exists in SSLv2, because of the export cipher suites: the weak ciphers and key lengths the US government forced on exported SSL implementations so that people overseas could use them. Nowadays these export cipher suites are bruteforce-able. It takes a few hours though, and a few hundred dollars, so no easy active MITM; it's rather a passive attack.

This is a cross-protocol attack: you are a MITM, but you leave the client doing its thing over TLS 1.2 or whatever SSLv3+ protocol it wants. In the meantime, you use an SSLv2 connection to the server as an oracle to recover the premaster key (and thus the session keys derived from it).

[image: stolen image]

Three things:

  1. The attack works on RSA handshakes. During the handshake (precisely in the ClientKeyExchange message) the client encrypts its premaster key with the server's RSA public key; this ciphertext is what the attack decrypts. The server doesn't support RSA handshakes? Then you'll have to find another server to attack.

  2. The targeted server doesn't even have to support SSLv2. If another server (it could even be a mail server) shares the same RSA key and supports SSLv2, then you can use it as your oracle during the attack! Practical much?

  3. To use the oracle, you first need to transform the RSA-encrypted premaster key into a valid SSLv2 RSA-encrypted master key. The formats are quite different because of protocol differences, and you need quite a few tricks (trimmers!). It doesn't work every time: around 1 out of 1,000 RSA-encrypted premaster keys can be decrypted. That is often more than enough to steal cookies and do real damage. If you're targeting a specific individual it can take time though, so to speed up the collection of those 1,000 handshakes, just inject some JavaScript into a non-HTTPS webpage!

That's pretty much everything. I'm still going through the paper, trying to understand the math. There is a tool here to test your website. Another way of doing this (especially for internal servers) is to get an openssl version prior to the January releases and run this against all of your domains/subdomains: openssl s_client -ssl2 -connect www.cryptologie.net:443
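If you want to script that last check over a list of hosts, here's a quick sketch in Go that just shells out to an SSLv2-capable openssl binary. The host list is made up, and the "Server certificate" check is only a heuristic for a successful handshake:

    // sslv2check.go: rough sketch, not an official DROWN scanner. It runs an
    // SSLv2-capable `openssl s_client -ssl2` against each host and reports
    // which ones complete an SSLv2 handshake. Requires an openssl binary from
    // before the January 2016 releases, as mentioned above.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func acceptsSSLv2(host string) bool {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        cmd := exec.CommandContext(ctx, "openssl", "s_client", "-ssl2",
            "-connect", host+":443")
        cmd.Stdin = strings.NewReader("") // close stdin so s_client exits after the handshake
        out, _ := cmd.CombinedOutput()
        // heuristic: a successful handshake prints the server certificate block
        return strings.Contains(string(out), "Server certificate")
    }

    func main() {
        // hypothetical list of your domains/subdomains
        hosts := []string{"www.cryptologie.net", "mail.example.com"}
        for _, h := range hosts {
            if acceptsSSLv2(h) {
                fmt.Println(h, "accepts SSLv2 handshakes: potential DROWN oracle")
            } else {
                fmt.Println(h, "refused SSLv2")
            }
        }
    }

Remember that any server sharing the same RSA key counts, so it's worth scanning mail servers and other ports too, not just 443.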

comment on this story

Briefs about crypto posted March 2016

I post a bunch of things on twitter/facebook all the time. They are mostly quotes from things I'm reading that I find interesting.

If you are a fan of learning from snippets of random crypto stuff, you should follow me on twitter and/or facebook.

Here are the latest ones:

[screenshots: pollard rho, tls timeline, stuff1, stuff2, worth studying?, worth?, cosmic ray]

So yeah. Follow me on twitter.

3 comments

Checking your Diffie-Hellman parameters posted February 2016

I made a simple script to check your DH modulus. Is it long enough? Is it a safe prime? I thought some non-cryptographers could benefit from such a tool: usually all I have to do is fire up Sage and run some tests, but if you don't have Sage this can be tricky and annoying, so...

Here's test_DHparams

[image: test_DHparams]
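The script itself is written in Sage/Python, but the checks are easy to reproduce elsewhere. Here's a rough Go equivalent (a sketch, not the actual test_DHparams code), relying on math/big's Miller-Rabin-based ProbablyPrime:

    // dhcheck.go: sketch of the same checks, is the DH modulus big enough,
    // is it prime, and is it a safe prime?
    package main

    import (
        "fmt"
        "math/big"
        "os"
    )

    func checkDH(pHex string, minBits int) {
        p, ok := new(big.Int).SetString(pHex, 16)
        if !ok {
            fmt.Println("could not parse the modulus as hex")
            return
        }
        fmt.Println("modulus size:", p.BitLen(), "bits")
        if p.BitLen() < minBits {
            fmt.Println("=> too short, you want at least", minBits, "bits")
        }
        // ProbablyPrime runs Miller-Rabin with the given number of rounds
        // (recent Go versions also add a Baillie-PSW test)
        if !p.ProbablyPrime(64) {
            fmt.Println("=> p is not prime!")
            return
        }
        // safe prime check: p = 2q + 1 with q prime as well
        q := new(big.Int).Rsh(new(big.Int).Sub(p, big.NewInt(1)), 1)
        if q.ProbablyPrime(64) {
            fmt.Println("=> p is a safe prime")
        } else {
            fmt.Println("=> p is prime, but not a safe prime")
        }
    }

    func main() {
        if len(os.Args) != 2 {
            fmt.Println("usage: dhcheck <modulus in hex>")
            return
        }
        checkDH(os.Args[1], 2048)
    }

(openssl dhparam -text -noout -in dhparams.pem will print the prime if you need to extract it.)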

1 comment

NIST and Quantum Computers posted February 2016

A few weeks ago, NIST released a draft of their report on Post-Quantum Cryptography.

As we all know, things are happening in the quantum computing world. Some say it will never work, some say it will but that it will take time before quantum computers get large enough to break today's crypto.

So, reading this paragraph from the NIST document, it makes sense why we would want to start moving to post-quantum crypto today:

Historically, it has taken almost 20 years to deploy our modern public key cryptography infrastructure. It will take significant effort to ensure a smooth and secure migration from the current widely used cryptosystems to their quantum computing resistant counterparts. Therefore, regardless of whether we can estimate the exact time of the arrival of the quantum computing era, we must begin now to prepare our information security systems to be able to resist quantum computing.

Let's see where this number is coming from. SSL/TLS (its protocol, its implementations, its coverage, its efficiency) has been a huge mess so far:

  • In 2009, 7 years ago, Moxie introduced sslstrip at Black Hat, a technique that renders https completely useless in the absence of preloaded HSTS.

  • It was only in 2013, 3 years ago, that Facebook finally made the whole site https-only, which just blows my mind. And that's without thinking about the myriad of companies, shops, banks and other websites that were still accessible over http back then.

  • Nowadays most websites are still vulnerable to Moxie's 2009 attack. Think about it: TLS is supposed to protect communications against both a passive and an active attacker on the network. In the passive case, I think it has succeeded (mostly). In the active case? Even HSTS or HPKP can still be circumvented somehow. Only browsers are fully capable of protecting us nowadays.

  • And this is ignoring all the horrible implementation flaws like Heartbleed, the broken certificate validation in browsers, the broken basicConstraints of most CAs...

We could also talk about the deprecation of MD5 and SHA-1, but sleevi does it better than I can:

  • 1996, 20 years ago: researchers recommended switching from MD5 to SHA-1 because of recent advances.

  • 2013, 17 years after that recommendation: Apple finally removed support for MD5 in certificates.

  • We're still in the middle of deprecating SHA-1, and it's a mess.

(there's also a graphical timeline made by ange: here)

Or what about the deprecation of DES? Or RC4? Or 1024-bit DH? ...

To come back to NIST's report, here's a nice table of the impact of quantum computing on today's algorithms:

[image: table of quantum computing's impact on current algorithms]

It sums up pretty well what djb wrote:

Imagine that it's fifteen years from now. Somebody announces that he's built a large quantum computer. RSA is dead. DSA is dead. Elliptic curves, hyperelliptic curves, class groups, whatever, dead, dead, dead.

Contrary to the European initiative PQCrypto, they seem to imply that they will recommend lattice-based crypto when their new Suite B is done. I find it hard to trust security proofs that rely on lattices' theoretical bounds because, as is known with LLL, BKZ and others, practical results are way better than these theoretical limits. I don't know much about lattice crypto though, so I'll point you to this paper from my to-read list: Lattice-based crypto for beginners.

They agree on hash-based signatures (which are explained in a 4-post series on my blog), which is timely because a new version of the RFC draft for XMSS has come out; XMSS might be the most polished hash-based signature scheme out there (although it is stateful, unlike SPHINCS).
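If you have never seen a hash-based signature, here is a toy Lamport one-time signature in Go (nothing to do with the actual XMSS construction, just the simplest possible illustration of the idea). Each key can only sign a single message, since signing reveals half of the secret key; keeping track of which keys have already been used is exactly the statefulness problem that XMSS has and SPHINCS avoids:

    // lamport.go: a toy Lamport one-time signature, the simplest hash-based
    // signature. This is NOT XMSS or SPHINCS, just an illustration of the idea.
    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    type keyPair struct {
        sk [256][2][32]byte // one random pair of secrets per message-hash bit
        pk [256][2][32]byte // their hashes form the public key
    }

    func generate() *keyPair {
        kp := new(keyPair)
        for i := 0; i < 256; i++ {
            for b := 0; b < 2; b++ {
                rand.Read(kp.sk[i][b][:])
                kp.pk[i][b] = sha256.Sum256(kp.sk[i][b][:])
            }
        }
        return kp
    }

    // sign reveals, for each bit of SHA-256(msg), one of the two secrets.
    // Revealing secrets is exactly why a Lamport key is one-time only.
    func sign(kp *keyPair, msg []byte) [256][32]byte {
        h := sha256.Sum256(msg)
        var sig [256][32]byte
        for i := 0; i < 256; i++ {
            bit := (h[i/8] >> uint(7-i%8)) & 1
            sig[i] = kp.sk[i][bit]
        }
        return sig
    }

    func verify(pk *[256][2][32]byte, msg []byte, sig [256][32]byte) bool {
        h := sha256.Sum256(msg)
        for i := 0; i < 256; i++ {
            bit := (h[i/8] >> uint(7-i%8)) & 1
            if sha256.Sum256(sig[i][:]) != pk[i][bit] {
                return false
            }
        }
        return true
    }

    func main() {
        kp := generate()
        msg := []byte("hash-based signatures are post-quantum")
        sig := sign(kp, msg)
        fmt.Println("valid:", verify(&kp.pk, msg, sig))
    }

XMSS essentially builds a Merkle tree over many such one-time keys (Winternitz ones rather than Lamport) and has to remember which leaves were already used, hence the state.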

The paper ends on these wise words, which explain how security estimation works (and has always worked):

We note that none of the above proposals have been shown to guarantee security against all quantum attacks. A new quantum algorithm may be discovered which breaks some of these schemes. However, this is similar to the state today. Although most public-key cryptosystems come with a security proof, these proofs are based on unproven assumptions. Thus the lack of known attacks is used to justify the security of public-key cryptography currently in use.

As for quantum computing advances, I don't know much about the subject, but here are some notes:

  • Shor's algorithm (the one that breaks everything) was born in 1994.

  • Late 1990s: error-correcting codes and threshold theorems for quantum computing. Quantum computing might actually be possible?

  • 2011: "the world's first commercially available quantum computer" is released by D-Wave. I believe this angered many people, because it wasn't really quantum computing.

  • 2015: Google and NASA have D-Wave computers.

To finish this blogpost, a few things I remember from last month's Real World Crypto conference:

  • Tanja asked the first speaker, who was presenting the Blackphone, about quantum crypto. His response: "post-quantum right now is marketing". People laughed.

  • On day 3, str4d announced that they wanted to move to post-quantum algorithms for I2P (a thing like Tor). People did not receive that as good news. I heard people quoting djb's "crypto should be boring" line.

There is definitely some skepticism in the crypto world about quantum computing, even as there is a gold rush into designing new post-quantum crypto.

2 comments

Moxie and TLS posted February 2016

I have mixed feelings about the UI of Signal, but after watching these two videos of Moxie from different periods of TLS's life, I now have a brand new admiration for the person.

The two videos are here:

In both of them, he starts by talking about his sslsniff: a great tool, written in C++, that you can find here and that lets you serve clients fraudulent certs by taking advantage of browser vulnerabilities (such as not checking the basicConstraints field in the certificate chain, stopping to read subject names after null bytes, etc.).

Another tool that he released, written in Python, is sslstrip. It takes advantage of the fact that almost no one types https:// in the address bar when navigating to a website directly. A man-in-the-middle attacker can intercept the redirection to https and serve the entire website either over http or through another https website whose URL looks similar to the victim's website (thanks to some unicode trickery making it look like victimwebsite.com/something/?yourwebsite.com, where yourwebsite.com is the real website being visited).

https' only purpose is to defend against a man-in-the-middle attacker. Because of sslstrip, any active attacker on the network can render https completely useless. The security measures against that, that I can think of, are HSTS headers, preloaded HSTS (see Chrome's list) and HPKP (see Tim Taubert's blogpost).
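For reference, setting the HSTS header is a one-liner. Here's a toy Go sketch (the certificate paths are placeholders) that redirects plain http to https and tells returning browsers to refuse plain http for a year:

    // hsts.go: toy sketch of the HSTS mitigation mentioned above. Redirect plain
    // HTTP to HTTPS and set a Strict-Transport-Security header so that returning
    // browsers refuse to talk plain HTTP at all. Certificate paths are placeholders.
    package main

    import "net/http"

    func main() {
        // redirect any plain-HTTP request to the HTTPS version of the same URL
        go http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            http.Redirect(w, r, "https://"+r.Host+r.RequestURI, http.StatusMovedPermanently)
        }))

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // one year of HSTS, covering subdomains; add `preload` if you want to
            // apply for the browsers' preload lists so even first visits are protected
            w.Header().Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
            w.Write([]byte("hello over TLS\n"))
        })
        http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux)
    }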

Both tools need arpspoof to create the man-in-the-middle position. I've discovered another tool that seems to combine all of them in one (although I'm not really sure what the differences are): bettercap. It looks good too.

I've also discovered other tools that test how clients handle certificate verification: one, developed by a coworker, is called tlspretense; the other is called x509test and seems to be pretty popular too. I have no idea how these tools perform; I guess I'll check that out next time I have to.

1 comment

Openssl dhparam 2048 posted February 2016

While some people will tell you to use pre-defined Diffie-Hellman parameters (RFC 3526, RFC 5114) so as not to mess anything up during generation (ahem, socat), others will tell you "hey, look at what happened with Logjam and the hardcoded DH params!" and point you to openssl dhparam to generate your own custom parameters.

But how does it really work? The high-level code is less than 450 lines long. It basically takes a few things into consideration ("do you want to use 2 or 5 as a generator? No? Let's use 2 anyway!") and then calls a DH_generate_parameters() function that sets up everything.

Inside that function lies probable_prime_dh_safe(), which looks for a particular kind of prime to use as the DH modulus: a safe prime.

A safe prime looks like this: \(2p + 1\) with \(p\) a prime as well.

Actually, we can also say that the function looks for a Sophie Germain prime, a prime \(p\) such that \(2p+1\) is also prime.

Just so you know =), safe primes are not the same as Sophie Germain primes, but they are closely related: if \(p\) is a Sophie Germain prime, then \(2p+1\) is a safe prime.

(Not to be mixed up with strong primes either, which are just primes that satisfy some additional properties.)

Another important thing: the function returns a safe probable prime, that is, a number that has not been mathematically proven to be prime (but is close enough).

To do that, a random number is first generated. Then a test called the Miller-Rabin test is applied to it a number of times. The more often you run the test, the more certainty you get that the number is prime; if the test ever fails, you know for sure that the number is not prime (it's composite) and you need to generate a new random number and start the tests again.

How many times should you apply this test? Enough times so that, if everyone tried to generate primes every second for the lifespan of the earth, we would still have a low chance of ever finding a false positive (a composite number passing X rounds of Miller-Rabin). The numbers used in OpenSSL come from table 4.4 of the Handbook of Applied Cryptography.

(To make the whole thing faster, the candidate is first trial-divided by a bunch of small primes before running the Miller-Rabin rounds.)
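Putting the whole post together, here is a sketch in Go of what the safe prime search roughly looks like (not OpenSSL's actual probable_prime_dh_safe), leaning on the standard library instead of re-implementing the trial divisions and the Miller-Rabin rounds:

    // safeprime.go: sketch of the safe-prime search described above. Keep drawing
    // random candidates until we find a probable prime p such that (p-1)/2 is also
    // a probable prime.
    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    var one = big.NewInt(1)

    // safePrime returns a probable safe prime of the given bit length.
    func safePrime(bits int) *big.Int {
        for {
            // crypto/rand.Prime already does the small-prime trial divisions
            // and the Miller-Rabin rounds for us, so q is a probable prime.
            q, err := rand.Prime(rand.Reader, bits-1) // q is the Sophie Germain prime
            if err != nil {
                panic(err)
            }
            // p = 2q + 1 is the safe prime candidate
            p := new(big.Int).Lsh(q, 1)
            p.Add(p, one)
            // 20 Miller-Rabin rounds: each failed round means p is composite,
            // each passed round divides the false-positive probability by at least 4.
            if p.ProbablyPrime(20) {
                return p
            }
        }
    }

    func main() {
        p := safePrime(512) // use 2048+ for real DH parameters; 512 keeps the demo fast
        fmt.Printf("safe prime p = %x\n", p)
        q := new(big.Int).Rsh(new(big.Int).Sub(p, one), 1)
        fmt.Println("(p-1)/2 prime:", q.ProbablyPrime(20)) // the Sophie Germain prime
    }

Safe primes are much rarer than ordinary primes, which is why openssl dhparam 2048 can take so long: most candidates get thrown away.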

2 comments