david wong

Hey! I'm David, a security consultant at Cryptography Services, the crypto team of NCC Group. This is my blog about cryptography, security, and other related topics that I find interesting.

NCC Con and the NCC T-shirt 5 days ago

Like every year, NCC Con followed right after Real World Crypto.

NCC Con is the NCC Group conference, a conference for its employees only, with a bunch of talks and drinks. I gave a talk on TLS 1.3 that I hope I can translate to a blog post at some point.

This year I also designed NCC Group's mascot! It was given away as a T-shirt and a sticker to all the employees at the conference. It was a pretty surreal moment for me to see people around me, at the conference and in the casinos (it was in Vegas), wearing that shirt I designed :D

tshirt

sticker

comments (2)

Real World Crypto 2017: Day 3 2 weeks ago

(The notes for day 2 are here.)

The first talk, on quantum computers, was really interesting but unfortunately mostly went over my head. Still, I'm glad we had a pro who could come and talk to us on the subject. My main takeaway was to go read the SMBC comic on the same subject.

After that there was a talk about TPMs and DAA (Direct Anonymous Attestation). I should probably read the Wikipedia page, because I have no idea what that was about.

Helena Handschuh from Cryptography Research talked about DPA Resistance for Real People. There are many techniques we use as DPA countermeasures, but it seems like we still don't have the secret sauce to completely prevent that kind of issue, so what we really end up doing is rotating the keys every 3 encryption/decryption operations or so... That's what they do at Rambus, and at least what I heard other people were doing when I was at CHES this year. Mike Hamburg describes the way they rotate keys in his STROBE project a bit. Handschuh also talked about the two ways to certify a DPA-resistant product: there are evaluations like Common Criteria, which is usually the normal route, but now there is also validation. Don't ask me about that.

David Cash then entered the stage and delivered what I believe was the best talk of the conference. He started with a great explanation of ORE vs OPE. OPE (Order Preserving Encryption) encrypts data in a way that makes the ciphertexts preserve the lexicographic order of the plaintexts; with ORE (Order Revealing Encryption) the ciphertexts themselves don't preserve the order, but some function over them reveals the order of the plaintexts. So they're kind of the same in the end, and the distinction doesn't really matter for his attacks. What matters is the distinction between ideal ORE and normal ORE (and obviously, the latter is what is used in the real world).

Ideal ORE reveals only the order, while the ORE schemes we actually use also reveal other things, for example the MSDB (most significant differing bit): the position of the first bit that differs between two plaintexts.
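To make the leak concrete, here's a small Python sketch (my own illustration, not from the talk) of what an MSDB-leaking scheme hands an attacker:

```python
def msdb(x: int, y: int, bits: int = 32) -> int:
    """Position, counted from the most significant bit, of the
    first bit that differs between two plaintexts (0 = leftmost)."""
    assert x != y
    diff = x ^ y
    return bits - diff.bit_length()

# Two close values share a long common prefix; a leaked msdb tells
# the attacker exactly how long that shared prefix is.
assert msdb(0b1010, 0b1011, bits=4) == 3  # only the last bit differs
assert msdb(0b1010, 0b0010, bits=4) == 0  # the top bit differs
```

Across a whole column, these revealed prefix lengths add up to a lot of recovered plaintext bits, which is essentially what the attack exploits.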

Previous research focused on attacking a single column of encrypted data, while their new research attacks columns of correlated data. David gave the example of coordinates and showed several illustrations of his attack: an encrypted image of the Linux penguin, encrypted locations on a map, and his own movements as saved and encrypted by a GPS. Just by looking at the order of the coordinates, everything can be visually somewhat restored.

Just by analyzing the MSDB property, a bunch of bits from the plaintexts can be restored as well. It seemed like very bad news for the ORE schemes analyzed.

Finally, two points seemed really important in this race for the perfect ORE scheme. First, the security proofs of these constructions treat the database contents as uniformly random, whereas we know that we rarely need to store completely random data :) In particular, columns are often correlated with one another. Second, even a (hypothetical) ideal ORE was vulnerable to their research and to previous research (he gave the example of encrypting the whole domain, in which case the order alone reveals the plaintexts).

This is a pretty bad stab at ORE schemes in general, showing that the approach is intuitively limited.

Paul Grubbs followed with an explanation of BoPETs, a term I believe he recently coined, meaning "Building on Property-revealing EncrypTion". He gave a good list of products, which I replicated below in a table.

Order Preserving Encryption: SAP, Cipherbase, Skyhigh, CipherCloud, CryptDB
Searchable Encryption: ShadowCrypt, Mylar, kryptonostik, gitzero, ZeroDB, CryptDB
Deterministic Encryption: Perspecsys, Skyhigh, CipherCloud, CryptDB

They looked at Mylar and tried to break it under three different attacker setups: snapshot (a smash-and-grab attack), passive (the attacker got in and is now observing what is happening), and active (the attacker can do pretty much anything, I believe).

Mylar uses an encrypted data blob/structure in addition to a "principal graph" containing metadata, ACLs, etc. related to the encrypted data. Grubbs showed how he could recover most of the plaintexts in all the different setups.

Tal Malkin interjected that these attacks would probably not apply to some different PPE systems like IBM OXT. Grubbs said it did not matter. Who's right, I don't know.

As for the active attacker problem, there seems to be no intuitive solution: if the attacker can do anything, you're probably screwed.

Raluca Ada Popa (Mylar) followed Grubbs by talking about her product Mylar and rejected all of Grubbs' claims, saying that they were out of the scope of Mylar or were attacking misuse of the product. IIRC the same thing happened when CryptDB was "broken": the CryptDB team released a paper arguing that these were false claims.

After Mylar, Popa intends to release two new products with better security: Verena and Opaque.

David McGrew mentioned Joy and gave a long talk about detecting PRNG failures. The basic idea is to look for public values affected by a PRNG, like signatures or the server/client randoms in TLS.

And that was it. See you next year.

If you have anything to say about my notes, the talks, or other people's notes, please do so in the comments :)

There was a HACS workshop right after RWC, and Tim Taubert wrote some notes on it here.

comment on this story

Real World Crypto 2017: Day 2 3 weeks ago

Here we go again. Some really short notes as well for today. (The notes for day 1 are here.)

Trevor Perrin talked about message encryption from a historical point of view, from key directories to public key infrastructures, and how to authenticate users to each other. Something interesting Trevor brought up was CONIKS, a sort of Certificate Transparency-like protocol for secure messaging (they call it key transparency).

When Alice wants to send a secure message to some other user, say Bob, her CONIKS client looks up Bob's key at the key directory, and verifies that this key has not changed unexpectedly over time. It also checks that this key is consistent with the key other clients are seeing for Bob. Only if these two consistency checks pass will the CONIKS client send Alice's message to Bob. The CONIKS client also performs these same checks for Alice's own key on a regular basis to ensure that the service provider is not tampering with Alice's key.

This sounds like an audit system (users can check what a key distribution server has been up to) plus a gossip protocol (users can talk among themselves to verify the consistency of the obtained public keys). It seems like an excellent idea, and makes me wonder why Signal doesn't use it.

djb mentioned the Self-Healing feature of ZRTP, similar to the recovery feature of Signal.

ZRTP caches symmetric key material used to compute secret session keys, and these values change with each session. If someone steals your ZRTP shared secret cache, they only get one chance to mount a MiTM attack, in the very next session. If they miss that chance, the retained shared secret is refreshed with a new value, and the window of vulnerability heals itself, which means they are locked out of any future opportunities to mount a MiTM attack. This gives ZRTP a "self-healing" feature if any cached key material is compromised.

Signal's self-healing property comes from the fact that an ephemeral Diffie-Hellman key agreement is continuously happening during communication. Like ZRTP, it seems to work out well only if the attacker is slow to act, so it doesn't seem to be exactly comparable to backward secrecy (which might just be impossible in a protocol).

Later, someone (I don't know who, out of Felix Günther, Britta Hale, Tibor Jager and Sebastian Lauer, because the program doesn't specify the speaker) presented a 0-RTT system that would provide forward secrecy and anti-replayability. 0-RTT is one of the features of TLS 1.3: it allows a client to start sending encrypted data to a server in its very first flight of messages. Unfortunately, and this was the topic of many discussions on TLS 1.3, these messages are replayable. The work builds on top of Matt Green's work on Puncturable Encryption, where the server (and the client?) uses some key derivation system and removes parts of it after a 0-RTT message has been processed. I am not sure this system is really efficient though, especially since the point of 0-RTT is to be fast. If this solution isn't faster than, or is even slower than, a normal TLS 1.3 handshake (1.5 round trips), then 0-RTT has no meaning in life anymore.

It also seems like this wouldn't apply to the "ticket" way of doing 0-RTT in TLS 1.3, which basically encrypts the whole state and hands the opaque blob to the client, so that the server doesn't have to store anything.

Hugo Krawczyk (the HKDF guy) talked about passwords and leaks, with some Comic Sans MS (and there was this handy website to check if your username/password/... had been compromised). Hugo then presented some of his recent work on SPHINX, PPSS, X-PAKE, ... everything is listed with links to the papers here.

SPHINX is a client-focused and transparent-to-the-server password manager (like all of them, really). The desktop password manager uses a derivation parameter, stored online or on the user's mobile phone, to derive any website key from a master password. The online service or the mobile phone never sees anything (thanks to a simple blinding technique, reminding me of what Ari Juels did at last year's RWC with PASS). Because of that, no offline attacks are possible. The slides are here and are pretty self-explanatory. I have to admit the design makes a lot of sense to me. I dozed off for the second part of the talk, but it was about "How to store a secret" and his PPSS thing (Password Protected Secret Sharing); same for the third part, which was about X-PAKE, which I imagine was a mix of his ideas with the PAKE protocol.
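The blinding trick can be sketched in a few lines of Python. This is my own toy reconstruction of the general blinded-evaluation (OPRF) idea, with tiny insecure parameters and hypothetical names, not the actual SPHINX protocol:

```python
import hashlib
import secrets

# Toy group (NOT secure): p = 2q+1 is a safe prime, and g = 4
# generates the order-q subgroup of quadratic residues mod p.
p, q, g = 1019, 509, 4

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def hash_to_group(pw: bytes) -> int:
    # Toy hash-to-group: map the password to a subgroup element.
    return pow(g, int.from_bytes(H(pw), "big") % q, p)

# --- client: blind the hashed master password before sending it ---
master_pw = b"correct horse battery staple"
a = hash_to_group(master_pw)
r = secrets.randbelow(q - 1) + 1        # fresh blinding factor
blinded = pow(a, r, p)                  # the helper only ever sees this

# --- online service / phone: apply its secret parameter k ---
k = 123                                 # the stored derivation parameter
evaluated = pow(blinded, k, p)          # (a^r)^k; learns nothing about pw

# --- client: unblind and derive the per-site key ---
r_inv = pow(r, -1, q)                   # r^-1 modulo the subgroup order
unblinded = pow(evaluated, r_inv, p)    # equals a^k
site_key = H(unblinded.to_bytes(2, "big"), b"example.com")

# Deterministic (same master password + same k => same site key),
# yet the helper never sees the password or the derived key.
assert unblinded == pow(a, k, p)
```

Because the helper only ever sees `a^r` for a fresh random `r`, a compromise of the online service or phone reveals nothing about the master password, which is why offline attacks don't apply.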

There were two talks about memory-hardness and proving that password hashing functions are memory-hard. It seems some people think it's important that these functions be data-independent as well (probably because cache attacks might be an issue in some cases). Most of the techniques here aim to make sure that a minimum amount of memory is used at all times, and that this cannot be reduced. I would have liked to see a comparison between Argon2 (the winner of the PHC), BLAKE2 (which seems to be the thing people like and use) and Balloon Hashing (which seems to be Dan Boneh's new thing).

George Tankersley and Filippo Valsorda finished the day with a talk on Cloudflare and their CAPTCHA problem. A lot of attacks/spam seem to come from Tor, which has deteriorated the reputation of Tor nodes' IPs. This means that when Cloudflare sees traffic coming from Tor, it presents the user with a CAPTCHA to make sure it is dealing with a human. Tor users tend to strongly dislike Cloudflare because these CAPTCHAs are shown for every different website, and every time the Tor circuit changes (every 10 minutes?). This, in addition to Tor already slowing down your browsing, has annoyed more than one person. Cloudflare is trying to remediate the problem by giving not one, but N tokens to the user for one solved CAPTCHA. By using blind signatures, Cloudflare hopes to demonstrate its inability to deanonymize users via a CAPTCHA token used as a tracking cookie.
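To give a rough idea of why blind signatures make the tokens unlinkable, here is a textbook RSA blind-signature sketch in Python (toy parameters; Cloudflare's actual draft may use different primitives, this just illustrates the blinding trick):

```python
import secrets
from math import gcd

# Toy RSA key (the classic textbook example; NOT secure).
n, e, d = 3233, 17, 2753   # n = 61 * 53

# --- user: pick a random token and blind it before sending ---
token = 42
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n    # unlinkable to the token

# --- signer (Cloudflare): signs without seeing the token ---
blind_sig = pow(blinded, d, n)          # (token * r^e)^d = token^d * r

# --- user: unblind to get a valid signature on the raw token ---
sig = (blind_sig * pow(r, -1, n)) % n   # token^d mod n

# Anyone can verify the signature, but the signer cannot tell
# which blinded message this token came from.
assert pow(sig, e, n) == token
```

Each of the N tokens is blinded with a fresh `r`, so redeeming one later cannot be correlated with the CAPTCHA-solving session that produced it.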

(I have been made aware of this problem in the past, and have manually added Tor as an "allowed country" in my Cloudflare setup for cryptologie.net, which is one of the solutions offered to Cloudflare's customers.)

I believe the drafted solution is readable here if you're interested.

Here are more resources:

The notes for day 3 are here.

comment on this story

Real World Crypto 2017: Day 1 3 weeks ago

Today was the first day of Real World Crypto, my favorite con (I think I've said that enough). I have avoided taking long notes about this one (as I did for the previous ones). But fortunately a live stream was/is available here.

The Levchin Prize was given to Joan Daemen, co-inventor of AES and SHA3, and to Moxie Marlinspike and Trevor Perrin for their work on the development of secure messaging.

Daemen talked about how block ciphers might become a thing of the past, replaced by more efficient and faster permutation-based constructions (like the sponge construction they developed for SHA3).

Moxie Marlinspike gave a brilliant speech. Putting it into words would only uglify whatever was said; you will have to take my word for it.

Rich Salz gave a touching talk about the sad history of OpenSSL.

Thai Duong presented his Project Wycheproof, which tests Java cryptographic libraries for common cryptographic pitfalls. They have something like 80 test vectors (easy to export to test other languages) and have uncovered 40+ vulnerabilities. One of them is discussed here.

L Jean Camp gave a talk on some X.509 statistics across phishing websites and the biggest websites (according to some Akamai ranking). No full IPv4-range stats. Obviously the phishing websites were not bothering with TLS. The speaker upset several people by saying that phishing websites should not be able to obtain certificates for similar-looking domains. Adam Langley took the mic to explain to her how orthogonal these issues were, and dropped the mic with a "we will kill the green lock".

Quan Nguyen gave a nice talk about some fun crypto vulns. Unfortunately I was dozing off, but everyone seemed to have appreciated it, and I will be checking the slides as soon as they come up. (Some "different" ways to retrieve the authentication key from AES-GCM.)

Daniel Franke presented the NTS (Network Time Security) protocol. It looks like it could protect NTP. Is it a competitor of roughtime? On the roughtime page we can read:

The obvious answer to this problem is to authenticate NTP replies. Indeed, if you want to do this there‘s NTPv4 Autokey from six years ago and NTS, which is in development. A paper at USENIX Security this year detailed how to do it so that it’s stateless at the server and still mostly using fast, symmetric cryptography.

But that's what NTP should have been, 15 years ago—it just allows the network to be untrusted. We aim higher these days.

So I guess NTS is not coming fast enough, hence the creation of roughtime. I personally like how anyone can audit roughtime servers.

Tancrède Lepoint presented CRYSTALS, a lattice-based key exchange that seems like a competitor to New Hope. He also talked about Open Quantum Safe, which contains a library of post-quantum primitives as well as a fork of OpenSSL making use of that library. Someone in the audience appeared to be pretty angry about not being cited first in the research, but the session chair (Dan Boneh) smoothly saved us from an awkward Q&A.

Mike Hamburg came up with STROBE, a bespoke TLS-like protocol based on a single sponge construction. It targets embedded devices but isn't really focused on speed (?). It's also heavily influenced by BLINKER and tries to improve on it. It kind of felt like a competitor to the Noise Protocol Framework, but looking at the paper it seems both more confusing and much more interesting than that. From the paper:

Strobe is a framework for building cryptographic two-party protocols. It can also be used for symmetric cryptosystems such as hashing, AEAD, MACs, PRFs and PRNGs. It is also useful as the symmetric part of a Schnorr-style signature scheme.

That's it. If anyone can point me to other notes on the talks I'd gladly post a list of links in here as well:

The notes for day 2 are here.

comments (2)

About Sweet32 November 2016

I've been thinking a lot about sweet32 recently. And I decided to try to reproduce their results.

First, let me tell you that the attack is highly impractical. It requires the user to execute some untrusted JavaScript for dozens of hours without interruption. The other problem I encountered was that I couldn't reach the number of requests per second they were able to make a client send. In their paper, they claim to be able to reach up to 2,000 requests per second.

I tried to achieve such numbers with "normal" browsers, and the number of requests I was able to make was ridiculously low. Then I realized that they used a specific browser: Firefox Developer Edition, a browser made for developing websites. For some unknown reason, this particular browser was indeed able to send an impressive number of requests per second, although I was never able to reach the magical 2,000. And even then, who really uses Firefox Developer Edition?

It should also be noted that their attack was done in a lab, with a small distance between the client and the server, under perfect conditions, with no other traffic slowing down the attack, etc. I can't imagine this attack being practical at all in real settings.

Note that I can imagine settings other than TLS, at some point in the future, being able to send enough requests per second that this attack would be deemed practical. In that sense, Sweet32 should be taken seriously. But for now, and especially in the case of TLS, I wouldn't freak out if I saw a 64-bit block cipher being used.
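For reference, the birthday bound that Sweet32 exploits is easy to compute yourself. A quick Python sketch of the ciphertext-block collision probability after encrypting 2^32 blocks under one key with a 64-bit block cipher:

```python
from math import exp

block_bits = 64
block_bytes = block_bits // 8

def collision_probability(n_blocks: float, bits: int = block_bits) -> float:
    # Birthday approximation: P(collision) = 1 - e^(-n(n-1)/2 / 2^bits)
    return 1 - exp(-n_blocks * (n_blocks - 1) / 2 / 2 ** bits)

n = 2 ** 32                 # the birthday bound for 64-bit blocks
# Encrypting 2^32 blocks of 8 bytes means pushing 32 GiB through one key.
print(f"data: {n * block_bytes / 2**30:.0f} GiB")
# At that point a collision (which leaks an XOR of plaintexts in CBC)
# has already happened with probability ~0.39.
print(f"collision probability: {collision_probability(n):.2f}")
```

This is why the attack needs both an enormous volume of traffic and a long-lived session key, which is what makes it so hard to pull off in practice.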

comments (1)

1/n-1 split to circumvent BEAST November 2016

A lot of attacks are theorized only to become practical years or decades later. This was the case with Bleichenbacher's and Vaudenay's padding oracle attacks, but also BEAST.

Realizing that chosen-plaintext attacks were doable on TLS -- because browsers will execute untrusted code on demand (JavaScript) -- a myriad of cryptanalysts decided to take a swing at TLS.

POODLE was the first vulnerability that made us deprecate SSL 3.0. It broke the protocol in such a critical way that an RFC was even published about it.

BEAST was the one that made us move away from TLS 1.0. But a lot of embedded devices and products still use these lower versions, and it would be unfair not to mention that an easy patch can be applied to TLS 1.0 implementations to counter this vulnerability.

BEAST comes from the fact that in TLS 1.0, the next message encrypted with CBC uses the previous ciphertext's last block as its IV. This makes the IV predictable, and allows an attacker to decrypt ciphertexts by having many chosen plaintexts encrypted.

cbc encryption

The diagram of CBC encryption, taken from Wikipedia. Here, imagine that the IV is the previous ciphertext's last block.

The server-side countermeasures are well known: move to a higher version of TLS. But if the server cannot be fixed, one simple countermeasure can be applied on the client side (remember, this is a client-side vulnerability: it allows a MITM attacker to recover session IDs, cookies, etc.).

Again: BEAST works because the MITM attacker can predict the next IV. He can simply observe the previous ciphertext block and craft his chosen plaintext based on it. It's an interactive attack.
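To illustrate the guess-verification step, here is a self-contained Python sketch. It uses a toy one-way function in place of a real block cipher (CBC encryption and the attacker's equality check only need the forward direction; a real deployment would use AES):

```python
import hashlib
import secrets

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    # Toy stand-in for a block cipher (forward direction only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, blocks: list) -> list:
    out, prev = [], iv
    for p in blocks:
        c = E(key, xor(prev, p))
        out.append(c)
        prev = c
    return out

key = secrets.token_bytes(16)
iv = secrets.token_bytes(BLOCK)

# The victim's browser encrypts a secret block; the MITM observes c1.
secret = b"cookie=SECRET!!!"
(c1,) = cbc_encrypt(key, iv, [secret])

# TLS 1.0: the next record's IV is the previous record's last
# ciphertext block, so the attacker knows it in advance (here, c1).
guess = b"cookie=SECRET!!!"
probe = xor(xor(c1, iv), guess)         # cancels the predictable IV
(c2,) = cbc_encrypt(key, c1, [probe])

# E(c1 xor probe) = E(iv xor guess), which equals c1 iff the
# guess matches the secret block.
assert c2 == c1
```

One such query confirms or rules out one full-block guess; BEAST combines this with byte-at-a-time block alignment to make the guessing efficient.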

One way of preventing this is to send an empty message before each real message. The empty message produces a ciphertext (of essentially just the MAC) which the attacker cannot predict. The message that the attacker asked the browser to encrypt will thus be encrypted under this unpredictable IV. The attack is circumvented.

This counter measure is called a 0/n split.

diagram 0 split

Unfortunately, a lot of servers did not like this countermeasure too much. Chrome pushed it first and kind of broke the web for some users; Adam Langley talks about them paying the price for fixing this "too soon". Presumably the "no data" message was seen by some implementations as an EOF (End Of File):

One significant drawback of the current proposed countermeasure (sending empty application data packets) is that the empty packet might be rejected by the TLS peer (see comments #30/#50/others: MSIE does not accept empty fragments, Oracle application server (non-JSSE) cannot accept empty fragments, etc.)

To fix this, Firefox pushed a patch called the 1/n-1 split, where the message to be sent is split into two records: the first containing only 1 byte of the plaintext, and the second containing the rest.

cbc split

If you look at a fixed client implementation sending messages over a negotiated TLS 1.0 connection, you will see that it first sends the first byte (in the screenshot below, the letter "G"), and then sends the rest in a different TLS record.

cbc split burp

If you're curious, you can see how this is done in code in Thomas Pornin's recent BearSSL TLS library.

static unsigned char *
cbc_encrypt(br_sslrec_out_cbc_context *cc,
    int record_type, unsigned version, void *data, size_t *data_len)
{
    unsigned char *buf, *rbuf;
    size_t len, blen, plen;
    unsigned char tmp[13];
    br_hmac_context hc;

    buf = data;
    len = *data_len;
    blen = cc->bc.vtable->block_size;

    /*
     * If using TLS 1.0, with more than one byte of plaintext, and
     * the record is application data, then we need to compute
     * a "split". We do not perform the split on other record types
     * because it turned out that some existing, deployed
     * implementations of SSL/TLS do not tolerate the splitting of
     * some message types (in particular the Finished message).
     *
     * If using TLS 1.1+, then there is an explicit IV. We produce
     * that IV by adding an extra initial plaintext block, whose
     * value is computed with HMAC over the record sequence number.
     */
    if (cc->explicit_IV) {
        /*
         * We use here the fact that all the HMAC variants we
         * support can produce at least 16 bytes, while all the
         * block ciphers we support have blocks of no more than
         * 16 bytes. Thus, we can always truncate the HMAC output
         * down to the block size.
         */
        br_enc64be(tmp, cc->seq);
        br_hmac_init(&hc, &cc->mac, blen);
        br_hmac_update(&hc, tmp, 8);
        br_hmac_out(&hc, buf - blen);
        rbuf = buf - blen - 5;
    } else {
        if (len > 1 && record_type == BR_SSL_APPLICATION_DATA) {
            /*
             * To do the split, we use a recursive invocation;
             * since we only give one byte to the inner call,
             * the recursion stops there.
             *
             * We need to compute the exact size of the extra
             * record, so that the two resulting records end up
             * being sequential in RAM.
             *
             * We use here the fact that cbc_max_plaintext()
             * adjusted the start offset to leave room for the
             * initial fragment.
             */
            size_t xlen;

            rbuf = buf - 4 - ((cc->mac_len + blen + 1) & ~(blen - 1));
            rbuf[0] = buf[0];
            xlen = 1;
            rbuf = cbc_encrypt(cc, record_type, version, rbuf, &xlen);
            buf ++;
            len --;
        } else {
            rbuf = buf - 5;
        }
    }

    /*
     * Compute MAC.
     */
    br_enc64be(tmp, cc->seq ++);
    tmp[8] = record_type;
    br_enc16be(tmp + 9, version);
    br_enc16be(tmp + 11, len);
    br_hmac_init(&hc, &cc->mac, cc->mac_len);
    br_hmac_update(&hc, tmp, 13);
    br_hmac_update(&hc, buf, len);
    br_hmac_out(&hc, buf + len);
    len += cc->mac_len;

    /*
     * Add padding.
     */
    plen = blen - (len & (blen - 1));
    memset(buf + len, (unsigned)plen - 1, plen);
    len += plen;

    /*
     * If an explicit IV is used, the corresponding extra block was
     * already put in place earlier; we just have to account for it
     * here.
     */
    if (cc->explicit_IV) {
        buf -= blen;
        len += blen;
    }

    /*
     * Encrypt the whole thing. If there is an explicit IV, we also
     * encrypt it, which is fine (encryption of a uniformly random
     * block is still a uniformly random block).
     */
    cc->bc.vtable->run(&cc->bc.vtable, cc->iv, buf, len);

    /*
     * Add the header and return.
     */
    buf[-5] = record_type;
    br_enc16be(buf - 4, version);
    br_enc16be(buf - 2, len);
    *data_len = (size_t)((buf + len) - rbuf);
    return rbuf;
}

Note that this does not protect the very first byte we send. Is this an issue? Not for browsers. But the next time you encounter this in a different setting, think about it.

comment on this story

What Diffie-Hellman parameters to use? October 2016

I see some discussions on some mailing lists about what parameters to use for Diffie-Hellman (DH).

It seems like the recent line of papers about weak Diffie-Hellman parameters (Logjam) and Diffie-Hellman backdoors (socat, RFC 5114, the special primes, ...) has troubled more than one person.

Less than two weeks ago, a study from Dorey et al., based on these previous results, was released, uncovering many problems in how Diffie-Hellman is implemented (or even backdoored!) in the wild.

This is a non-problem. We don't need an RFC to choose Diffie-Hellman groups. A simple openssl gendh -out keyfile -2 2048 will generate a 2048-bit safe prime along with correct DH parameters for you to use. If you're worried about "special primes" issues, either generate the parameters yourself with this command, or pick a larger (let's say 4096-bit) safe prime from a list and verify that it is indeed a safe prime. You can use this tool for that.
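If you'd rather check a candidate yourself, verifying that both p and (p-1)/2 are prime takes a few lines of Python. A sketch using Miller-Rabin (for real parameters you'd feed it the full hex string of the prime; the tiny values below are just to exercise the check):

```python
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def is_safe_prime(p: int) -> bool:
    # p is a safe prime when both p and (p-1)/2 are prime.
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)

assert is_safe_prime(23)        # 23 = 2*11 + 1, and 11 is prime
assert not is_safe_prime(13)    # (13-1)/2 = 6 is not prime
```

To check the 2048-bit prime given below, parse the hex string with `int(hexstring, 16)` and pass it to `is_safe_prime` (it will take a few seconds at that size).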

But since some people really don't want to do the work, here are some safe parameters you can use.

2048-bit parameters for Diffie-Hellman

Here is the .pem file containing the parameters:

-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA7WJTTl5HMXOi8+kEeze7ftMRbIiX+P7tLkmwci30S+P6xc6wG1p4
SwbpPyewFlyasdL2Dd8PkhYFtE1xD3Ssj1De+P8T0UcJn5rCHn+g2+0k/CalysKT
XrobEzihlSLeQO1NsgBt1F1XCMO+6inLVvSGVbb3Cei4q+5Djnc7Yjjq0kxGY6Hd
ds/YQnyc1xdJU8NBi3zO1XY2Uk6BSd+NN5KnLh9zRq8t/b0RiIb/fY9mJ9BCtgPo
2m4AfJE8+5dE1ttpQAJFSlA8Ku3/9Vp8sMMWATVk2Q1z9PdkikKQYRfMPYDBSIa/
8Y2l9Hh7vNYOwXd4WF5Q55RHP46RB+F+swIBAg==
-----END DH PARAMETERS-----

You can parse it yourself by piping it into openssl dh -noout -text. It uses 2 as a generator, and this big hex string as a safe prime:

ed62534e5e473173a2f3e9047b37bb7ed3116c8897f8feed2e49b0722df44be3fac5ceb01b5a784b06e93f27b0165c9ab1d2f60ddf0f921605b44d710f74ac8f50def8ff13d147099f9ac21e7fa0dbed24fc26a5cac2935eba1b1338a19522de40ed4db2006dd45d5708c3beea29cb56f48655b6f709e8b8abee438e773b6238ead24c4663a1dd76cfd8427c9cd7174953c3418b7cced57636524e8149df8d3792a72e1f7346af2dfdbd118886ff7d8f6627d042b603e8da6e007c913cfb9744d6db694002454a503c2aedfff55a7cb0c316013564d90d73f4f7648a42906117cc3d80c14886bff18da5f4787bbcd60ec17778585e50e794473f8e9107e1

4096-bit parameters for Diffie-Hellman

Here's the .pem file:

-----BEGIN DH PARAMETERS-----
MIICCAKCAgEA66H+vEJiWaduRv/wwBhPkW1ivReF8yaq2W7166jcw62i069pkXlK
EeLtSDtGKiHu2YrjN03uf+q2urPhG+KXCMBgONlz7eGfPAD/xVSqK53jT8DFgqo6
Y1BccaJeU1iVy3GAVG1xS0aVPqyiNFDGpfd2WSuPIAi3tlgfD0Iu0sCBDLNcuA0d
4pCqBqEKOunD/vDcxp5jnVcj+36fozIkSQ3M2AzG6r44Qb0LFRrAuBM7QKs13COt
oGZwAJE/9GdfNC0jQsScrIVYV0MATv81mbFDAE6e0rLVqMeCdIY7gH0A0qWUVA4S
I3Mr5iPTYxFlA+vWuBPdZ1OXiQN5pzx0TTdnfkI6Q23rOeJG5eIa/LIZ/B+0OlqF
W8UwJL1eZoQGOtx9Al285OIiPktH0fJcpkfbMUmBG9r7xYuCw9vUQ1ed+BIQ366+
9mDTKjTOtmw7HahV48W/T8OPSoSFe/QgnX0VB7c6pnWZ2d4WFu2jVEeG3/pzqMM0
/VCFPXM6Zc0iat5V9qcn2OnOhaSjItuGEc75+8OIeEcdhAUq+PgkKoUIwCNv65RA
A17Skfgi3CLeLy5nKRd7nNv13PSXpjOkNYvRjjatEHZY9DTlfV0vqAlTZNjAt/yx
yNunxd5Sg1kxD1dZCkJXhAbcR8bPSU6OgjSDvGvRk+Dv3Qv5MtTWYisCAQI=
-----END DH PARAMETERS-----

And here is the hexstring value of the safe prime (note that it still uses 2 as a generator):

eba1febc426259a76e46fff0c0184f916d62bd1785f326aad96ef5eba8dcc3ada2d3af6991794a11e2ed483b462a21eed98ae3374dee7feab6bab3e11be29708c06038d973ede19f3c00ffc554aa2b9de34fc0c582aa3a63505c71a25e535895cb7180546d714b46953eaca23450c6a5f776592b8f2008b7b6581f0f422ed2c0810cb35cb80d1de290aa06a10a3ae9c3fef0dcc69e639d5723fb7e9fa33224490dccd80cc6eabe3841bd0b151ac0b8133b40ab35dc23ada0667000913ff4675f342d2342c49cac85585743004eff3599b143004e9ed2b2d5a8c78274863b807d00d2a594540e1223732be623d363116503ebd6b813dd675397890379a73c744d37677e423a436deb39e246e5e21afcb219fc1fb43a5a855bc53024bd5e6684063adc7d025dbce4e2223e4b47d1f25ca647db3149811bdafbc58b82c3dbd443579df81210dfaebef660d32a34ceb66c3b1da855e3c5bf4fc38f4a84857bf4209d7d1507b73aa67599d9de1616eda3544786dffa73a8c334fd50853d733a65cd226ade55f6a727d8e9ce85a4a322db8611cef9fbc38878471d84052af8f8242a8508c0236feb9440035ed291f822dc22de2f2e6729177b9cdbf5dcf497a633a4358bd18e36ad107658f434e57d5d2fa8095364d8c0b7fcb1c8dba7c5de528359310f57590a42578406dc47c6cf494e8e823483bc6bd193e0efdd0bf932d4d6622b
comments (3)

Looking for a crypto intern at Cryptography Services! October 2016

internship

The Cryptography Services team of NCC Group is looking for a summer 2017 intern!

We are looking for you if you're into cryptography and security! The internship would allow you to follow consultants on the job as well as lead your own research project.

Who are we? We are consultants! Big companies come to us and ask us to hack their stuff (legally), review their code, and advise on their designs. When we're not doing that, we spend our time reading papers, researching, attending conferences, giving talks, teaching classes... whatever floats our boat. No one week is like another! If you've spent some time doing the Cryptopals challenges, you will probably like what we do.

We can't say much about the clients we work for, except for the public audits we sometimes do. For example, we've performed public audits for TrueCrypt, OpenSSL, Let's Encrypt, Docker and, more recently, Zcash.

I was the first intern at Cryptography Services myself, and I'd be happy to answer any questions you might have =)

Take a look at the challenges here if you're interested!

comments (4)

Common x509 certificate validation/creation pitfalls September 2016

I wrote a gist here on certificate validation/creation pitfalls. I don't know if it is ready for release, but I figured I would get more input, and more things to add to it, if I just released it. So go check it out and give me your feedback here!

Here's a copy of the current version:

Certificate validation/creation pitfalls

An x509 certificate, and in particular the latest version 3, is the standard for authentication in Public Key Infrastructures (PKIs). Think of Google proving that it's Google before you can communicate with it.

So. Heh. This x509 thing is a tad complicated. Trying to parse such a thing usually ends up creating a lot of different vulnerabilities. I won't talk about that here. I will talk about the other complicated thing about them: using them correctly!

So here's a list of pitfalls in the creation of such certificates, but also in their validation and use when you encounter them in the wild wild web (or in your favorite infrastructure).

  1. KeyUsage
  2. Validity Dates
  3. Critical Extensions
  4. Hostname Validation
  5. Basic Constraints
  6. Name Constraint

KeyUsage

explanation: keyUsage is a field inside an x509 v3 certificate that limits the power of the public key inside the certificate. Can you only use it to sign? Or can it be used as part of a key exchange as well (ECDH)? etc...

relevant RFC: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile

relevant ASN.1:

KeyUsage ::= BIT STRING {
           digitalSignature        (0),
           nonRepudiation          (1),
           keyEncipherment         (2),
           dataEncipherment        (3),
           keyAgreement            (4),
           keyCertSign             (5),
           cRLSign                 (6),
           encipherOnly            (7),
           decipherOnly            (8) }

seen in attacks: KCI

best practice: Specify the KeyUsage at creation, verify the keyUsage when encountering the certificate. keyCertSign should be used if the certificate is a CA, keyAgreement should be used if a key exchange can be done with the public key of the certificate.
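As a sketch of that verification logic (the bit positions come straight from the ASN.1 above; the `bits` set is assumed to come from your certificate parser):

```python
# Bit positions from the KeyUsage ASN.1 definition above.
KEY_USAGE_BITS = {
    "digitalSignature": 0, "nonRepudiation": 1, "keyEncipherment": 2,
    "dataEncipherment": 3, "keyAgreement": 4, "keyCertSign": 5,
    "cRLSign": 6, "encipherOnly": 7, "decipherOnly": 8,
}

def has_usage(bits, usage):
    """bits: set of asserted bit positions, as parsed from the certificate."""
    return KEY_USAGE_BITS[usage] in bits

def key_usage_ok(bits, is_ca=False, does_key_agreement=False):
    """Reject certificates whose keyUsage does not cover what we use them for."""
    if is_ca and not has_usage(bits, "keyCertSign"):
        return False
    if does_key_agreement and not has_usage(bits, "keyAgreement"):
        return False
    return True

# A CA certificate asserting keyCertSign and cRLSign passes:
assert key_usage_ok({5, 6}, is_ca=True)
# A signing-only certificate used for ECDH must be rejected:
assert not key_usage_ok({0}, does_key_agreement=True)
```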

see more: Extended Key Usage

Validity Dates

explanation: a certificate is only valid during a specific interval of time. This interval is specified by the notBefore and notAfter fields.

relevant RFC: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile

relevant ASN.1:

Validity ::= SEQUENCE {
    notBefore      Time,
    notAfter       Time }

best practice: Reject certificates whose notBefore date is later than the current date, or whose notAfter date is earlier than the current date.
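A minimal sketch of that check in Python (the parsed notBefore/notAfter values are assumed to be timezone-aware datetimes):

```python
from datetime import datetime, timezone

def validity_ok(not_before, not_after, now=None):
    """Reject certificates used outside their [notBefore, notAfter] window."""
    if now is None:
        now = datetime.now(timezone.utc)
    return not_before <= now <= not_after

nb = datetime(2016, 1, 1, tzinfo=timezone.utc)
na = datetime(2017, 1, 1, tzinfo=timezone.utc)
assert validity_ok(nb, na, now=datetime(2016, 6, 1, tzinfo=timezone.utc))
assert not validity_ok(nb, na, now=datetime(2018, 1, 1, tzinfo=timezone.utc))  # expired
assert not validity_ok(nb, na, now=datetime(2015, 1, 1, tzinfo=timezone.utc))  # not yet valid
```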

Critical extensions

explanation: the x509 certificate format is an evolving standard, exactly like TLS, and it evolves through extensions. To preserve backward compatibility, failing to parse an extension is often considered OK, unless the extension is marked critical (important).

relevant RFC: https://tools.ietf.org/html/rfc5280#section-4.2

relevant ASN.1:

Extension  ::=  SEQUENCE  {
    extnID      OBJECT IDENTIFIER,
    critical    BOOLEAN DEFAULT FALSE,
    extnValue   OCTET STRING
                -- contains the DER encoding of an ASN.1 value
                -- corresponding to the extension type identified
                -- by extnID
}

best practice: at creation, mark every important extension as critical. At verification, make sure to process every critical extension. If a critical extension is not recognized, the certificate MUST be rejected.
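In code, the rule is just a loop over the parsed extensions. Here's a sketch, assuming your parser hands you (extnID, critical) pairs:

```python
# Extensions this hypothetical implementation knows how to process.
KNOWN_EXTENSIONS = {"keyUsage", "basicConstraints", "subjectAltName"}

def extensions_ok(extensions):
    """extensions: list of (extn_id, critical) pairs from the certificate."""
    for extn_id, critical in extensions:
        if extn_id not in KNOWN_EXTENSIONS and critical:
            return False  # unrecognized critical extension: MUST reject
    return True

# An unknown non-critical extension may be ignored:
assert extensions_ok([("keyUsage", True), ("someFutureExt", False)])
# The same unknown extension marked critical forces rejection:
assert not extensions_ok([("someFutureExt", True)])
```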

Hostname Validation

explanation:

Knowing who you're talking to is really important. An x509 certificate is tied to a specific domain/organization/email/... and if you don't check who it is tied to, you are prone to impersonation attacks. Because of reasons, these things can be seen in different places: in the subject field or in the Subject Alternative Name (SAN) extension. For TLS, things are standardized differently and the name will always need to be checked in the latter field.

This is one of the trickier issues in this list as hostname validation is protocol specific (as you can see TLS does things differently) and left to the application. To quote OpenSSL:

One common mistake made by users of OpenSSL is to assume that OpenSSL will validate the hostname in the server's certificate

relevant RFC: RFC 6125

relevant ASN.1:

TBSCertificate  ::=  SEQUENCE  {
    version         [0]  EXPLICIT Version DEFAULT v1,
    serialNumber         CertificateSerialNumber,
    signature            AlgorithmIdentifier,
    issuer               Name,
    validity             Validity,
    subject              Name,
    ...

seen in attacks:

best practice: During creation, fill in the subject as well as the subject alternative name fields. During verification, check that the leaf certificate matches the domain/person you are talking to. If TLS is the protocol being used, check the subject alternative name field: only one level of wildcard is allowed and it must be in the leftmost position (*.domain.com is allowed, sub.*.domain.com is forbidden). Consult RFC 6125 for more information.
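Here's what that wildcard rule looks like as a sketch (a simplification of RFC 6125; real implementations also handle IDNs, IP addresses and more):

```python
def hostname_matches(pattern, hostname):
    """Single-level, leftmost-label-only wildcard matching."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # a wildcard matches exactly one label
    if "*" in pattern and (p_labels[0] != "*" or "*" in p_labels[1:]):
        return False  # wildcard allowed only as the entire leftmost label
    return all(p == h or p == "*" for p, h in zip(p_labels, h_labels))

assert hostname_matches("*.domain.com", "sub.domain.com")
assert not hostname_matches("sub.*.domain.com", "sub.x.domain.com")  # forbidden position
assert not hostname_matches("*.domain.com", "a.b.domain.com")        # only one level
```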

see more:

Basic Constraints

explanation: the BasicConstraints extension dictates whether a certificate is a CA (can sign other certificates) or not. If it is, it also says how many CAs can follow it in the chain before a leaf certificate.

relevant RFC: https://tools.ietf.org/html/rfc5280#section-4.2.1.9

relevant ASN.1:

id-ce-basicConstraints OBJECT IDENTIFIER ::=  { id-ce 19 }

BasicConstraints ::= SEQUENCE {
    cA                      BOOLEAN DEFAULT FALSE,
    pathLenConstraint       INTEGER (0..MAX) OPTIONAL }

seen in attacks: Moxie - New Tricks For Defeating SSL In Practice (2009)

best practice: set this field to the relevant value when creating a certificate. When validating a certificate chain, make sure that the pathLen is valid and the cA field is set to TRUE for each non-leaf certificate.
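A sketch of that chain check, assuming the chain is ordered leaf-first and each certificate is reduced to its (cA, pathLenConstraint) pair:

```python
def basic_constraints_ok(chain):
    """chain: leaf-first list of (is_ca, path_len) tuples.
    path_len is None when pathLenConstraint is absent."""
    for i, (is_ca, path_len) in enumerate(chain):
        if i == 0:
            continue  # the leaf itself need not be a CA
        if not is_ca:
            return False  # every issuing certificate must have cA=TRUE
        intermediates_below = i - 1  # CAs between this cert and the leaf
        if path_len is not None and intermediates_below > path_len:
            return False
    return True

# root (pathLen=1) -> intermediate (pathLen=0) -> leaf: fine
assert basic_constraints_ok([(False, None), (True, 0), (True, 1)])
# leaf "signed" by another leaf: Moxie's 2009 attack, must be rejected
assert not basic_constraints_ok([(False, None), (False, None)])
```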

Name Constraints

explanation: the NameConstraints extension contains a set of limitations for CA certificates, on what kind of certificates can follow them in the chain.

relevant RFC: https://tools.ietf.org/html/rfc5280#section-4.2.1.10

relevant ASN.1:

id-ce-nameConstraints OBJECT IDENTIFIER ::=  { id-ce 30 }

NameConstraints ::= SEQUENCE {
     permittedSubtrees       [0]     GeneralSubtrees OPTIONAL,
     excludedSubtrees        [1]     GeneralSubtrees OPTIONAL }

GeneralSubtrees ::= SEQUENCE SIZE (1..MAX) OF GeneralSubtree

GeneralSubtree ::= SEQUENCE {
     base                    GeneralName,
     minimum         [0]     BaseDistance DEFAULT 0,
     maximum         [1]     BaseDistance OPTIONAL }

BaseDistance ::= INTEGER (0..MAX)

best practice: when creating a CA certificate, be aware of the constraints chained certificates should have and document them in the NameConstraints field. When verifying a CA certificate, verify that each certificate in the chain is valid according to the constraints of the certificates above it.
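As a sketch for the dNSName case only (real NameConstraints also cover IP ranges, email addresses, directory names, etc.):

```python
def name_allowed(name, permitted, excluded):
    """Check one dNSName against permitted/excluded subtrees."""
    def in_subtree(n, base):
        return n == base or n.endswith("." + base)
    if any(in_subtree(name, base) for base in excluded):
        return False
    if permitted and not any(in_subtree(name, base) for base in permitted):
        return False
    return True

assert name_allowed("mail.example.com", permitted=["example.com"], excluded=[])
assert not name_allowed("evil.org", permitted=["example.com"], excluded=[])
assert not name_allowed("x.bad.example.com", permitted=["example.com"],
                        excluded=["bad.example.com"])
```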

Out of scope

  • Certificate Chain Validation
  • Certificate Revocation

Acknowledgements

Thanks to Chris Palmer, Vincent Lynch and Jeff Jarmoc

comment on this story

Key Compromise Impersonation attacks (KCI) September 2016

You might have heard of KCI attacks on TLS: an attacker gets to install a client certificate on your device and can then impersonate websites to you. I had thought the attack was highly impractical, but the video of the attack (done against facebook.com at the time) seemed to contradict my first instinct.

I skimmed the paper and it answered some of my questions and doubts. So here's a tl;dr of it.


The issue is pretty easy to comprehend once you understand how a Diffie-Hellman key exchange works.

Imagine that you navigate to myCompany.com and you do a key exchange with both of your public keys.

  • your public key: \(g^a \pmod{n}\)

  • your company's public key: \(g^b \pmod{n}\)

where \(g\) and \(n\) are public parameters that you agreed on. Let's ignore \(n\) from now on: here, both ends can create the shared secret by doing either \((g^b)^a\) or \((g^a)^b\).


In other words, shared secret = (other dude's public key)^(my private key)


If you know either one of the private keys, you can observe the other public key (it's public!) and compute the shared secret. Then you have enough to replace one of the ends in the TLS communication.

That's the theory.
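That theory fits in a few lines with toy numbers (tiny parameters purely for illustration; a real exchange uses a large prime):

```python
g, n = 5, 23                        # public parameters
a, b = 6, 15                        # client / server private keys
A, B = pow(g, a, n), pow(g, b, n)   # public keys

# both ends derive the same shared secret
assert pow(B, a, n) == pow(A, b, n)

# an attacker who knows the client's private key a (from the installed
# certificate) only needs the server's *public* key B:
attacker_secret = pow(B, a, n)
assert attacker_secret == pow(A, b, n)
```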

If you replace the client's certificate with your own keypair, or know the private key of the certificate, you can break whatever key exchange they try to do with that certificate. What I mean by "break": from a key exchange you can compute the session keys and replace any party during the following communications.

My doubts originally came from the fact that most implementations rarely use plain Diffie-Hellman: they usually offer ephemeral DH or RSA-based key exchanges (which are not vulnerable to this attack). The paper brought me back to reality:

Support for fixed DH client authentication has been very recently added to the OpenSSL 1.0.2 branch.

But then how would you make the client do such an unused key exchange? The paper washed away my doubts once again:

the attacker, under the hood, interferes with the connection initialization to facebook, and forces the client to use an insecure handshake with client authentication, requesting the previously installed certificate from the system.

Now a couple of questions come to my mind.

How does facebook have these non-ephemeral DH ciphersuites?

→ from the paper, the server doesn't even need to use a static DH ciphersuite. If it has an ECDSA certificate, and didn't specify that it's only to be used to sign, then you can use it for the attack (the keyUsage field in the certificate is apparently never used correctly; the few values listed in this RFC tell you how to correctly limit the authorized usages of your public key)

How can you trigger the client to use this ciphersuite?

→ just reply with a serverHello only listing static ECDH in its ciphersuite list (contrary to ECDHE, notice the last E for ephemeral). Then show the real ECDSA certificate (the client will interpret it as an ECDH cert because of the fake ciphersuites) and then ask the client for the specific cert you know the private key of.


In addition to the ECDSA → ECDH trumpery, this is all possible because none of these messages are signed at that moment. They are later authenticated via the shared secret, but it is too late =) I don't know if TLS 1.3 is doing a better job in this regard.

It also seems to me that in the attack, instead of installing a client cert you could just install a CA cert and MITM ALL TLS connections (except the pinned ones). But this attack is sneakier, and it's another way of achieving exactly that. So I appreciate it.

comments (1)

Fault attacks on RSA's signatures September 2016

Facebook was organizing a CTF last week and they needed some crypto challenge. I obliged, missed a connecting flight in Phoenix while building it, and eventually provided them with one idea I had wanted to try for quite some time. Unfortunately, as with the last challenge I wrote for a CTF, someone solved it with a tool instead of doing it by hand (like in the good ol' days). You can read the quick write up there, or you can read a more involved one here.

The challenge

The challenge was just a file named capture.pcap. Opening it with Wireshark would reveal hundreds of TLS handshakes. One clever way to find a clue here would be to filter them with ssl.alert_message.

handshakes

From that we could observe a fatal alert being sent from the client to the server, right after the server Hello Done.

Mmmm

Several hypotheses exist. One way of guessing what went wrong could be to feed these packets to openssl s_client with the -debug option and see why the client decides to terminate the connection at this point of the handshake. Or, if you had good intuition, you could have directly verified the signature :)

After realizing the signature was incorrect, and that it was done with RSA, one of the obvious attacks here is the RSA-CRT attack! Faults happen in RSA, sometimes for malicious reasons (lasers!) or just because of random errors that can happen in different parts of the hardware. One random bit flipping and you have a fault. If it happens at the wrong place, at the wrong time, you have a cryptographic failure!

RSA-CRT

RSA is slow-ish, as in not as fast as symmetric crypto: I can still do 414 signatures per second and verify 15775 signatures per second (according to openssl speed rsa2048).

Let's recall what an RSA signature is. It's basically the inverse of an encryption with RSA: you decrypt your message and use the decrypted part as a signature.

rsa signature

To verify a signature over a message, you do the same kind of computation on the signature using the public exponent, which gives you back the message:

rsa verify


We remember here that \(N\) is the public modulus used in both the signing and verifying operations. Let \(N=pq\) with \(p, q\) two large primes.

This is the basis of RSA. Its security relies on the hardness to factor \(N\).
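In toy numbers (textbook RSA with no padding, purely for illustration):

```python
p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 65
s = pow(m, d, N)          # signing = "decrypting" the message
assert pow(s, e, N) == m  # verifying recovers the message
```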


I won't talk more about RSA here, so check Wikipedia if you need a recap =)

It's also obvious to a lot of you reading this that you do not sign the message directly. You first hash it (this is good, especially for large files) and pad it according to some specification. I will talk about that in the later sections, as this distinction is not immediately important to us. We now have enough background to talk about the Chinese Remainder Theorem (CRT), which is a theorem we use to speed up the above equation.

So what if we could do the calculation mod \(p\) and \(q\) instead of this huge number \(N\) (usually 2048 bits)? Let's stop with the what ifs because this is exactly what we will do:

crt

Here we compute two partial signatures, one mod \(p\), one mod \(q\), with \(d_p = d \pmod{p-1}\) and \(d_q = d \pmod{q-1}\). After that, we can use CRT to stitch these partial signatures together to obtain the complete one.

I won't go further, if you want to know how CRT works you can check an explanation in my latest paper.
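Still, the two partial signatures and the stitching step (Garner's formula) can be sketched with the same kind of toy numbers:

```python
p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))
m = 42

dp, dq = d % (p - 1), d % (q - 1)
s1, s2 = pow(m, dp, p), pow(m, dq, q)  # the two partial signatures

# Garner's recombination: stitch s1 and s2 into the full signature
q_inv = pow(q, -1, p)
s_crt = s2 + q * ((q_inv * (s1 - s2)) % p)

assert s_crt == pow(m, d, N)  # same result as the direct computation
```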

RSA-CRT fault attack

Now imagine that a fault happens in one of the equations mod \(p\) or \(q\):

crt fault

Here, because one of the operations failed (\(\widetilde{s_2}\)) we obtain a faulty signature \(\widetilde{s}\). What can we do with a faulty signature, you may ask? We first observe the following facts about the faulty signature.

attack

See that? \(p\) divides this value (which we can calculate, since we know the faulty signature, the public exponent \(e\), and the message). But \(q\) does not divide this value. This means that \(\widetilde{s}^e - m\) is of the form \(pk\) for some integer \(k\). It follows naturally that \(gcd(\widetilde{s}^e - m, N)\) will give out \(p\)! And as I said earlier, if you know the factorization of the public modulus \(N\) then it is game over.
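The whole fault attack can be simulated with the same toy numbers: corrupt the mod-\(q\) half of the CRT computation and take a gcd:

```python
from math import gcd

p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))
m = 42

dp, dq = d % (p - 1), d % (q - 1)
s1 = pow(m, dp, p)                   # correct partial signature mod p
s2 = (pow(m, dq, q) + 1) % q         # fault injected in the mod-q half
q_inv = pow(q, -1, p)
s_faulty = s2 + q * ((q_inv * (s1 - s2)) % p)

# s_faulty^e - m is a multiple of p but not of q, so the gcd leaks p
recovered_p = gcd(pow(s_faulty, e, N) - m, N)
assert recovered_p == p
```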

Applying the RSA-CRT attack

Now that we got that out of the way, how do we apply the attack on TLS?

TLS has different kinds of key exchanges, some basic ones and some ephemeral (forward-secure) ones. The basic key exchange we've used a lot in the past is pretty straightforward: the client uses the RSA public key found in the server's certificate to encrypt the shared secret with it. Then both parties derive the session keys out of that shared secret.

TLS RSA key exchange

Now, if the client and the server agree to do a forward-secure key exchange, they will use something like Diffie-Hellman or Elliptic Curve Diffie-Hellman, and the server will sign its ephemeral (EC)DH public key with its long-term key. In our case, that long-term key is an RSA key, and the fault happens during that particular signature.

TLS ephemeral key exchange

Now what's the message being signed? We need to check TLS 1.2's RFC:

  struct {
      select (KeyExchangeAlgorithm) {
          ...
          case dhe_rsa:
              ServerDHParams params;
              digitally-signed struct {
                  opaque client_random[32];
                  opaque server_random[32];
                  ServerDHParams params;
              } signed_params;
              ...
  } ServerKeyExchange;
  • the client_random can be found in the client hello message
  • the server_random in the server hello message
  • the ServerDHParams are the different parameters of the server's ephemeral public key, from the same TLS 1.2 RFC:
  struct {
      opaque dh_p<1..2^16-1>;
      opaque dh_g<1..2^16-1>;
      opaque dh_Ys<1..2^16-1>;
  } ServerDHParams;     /* Ephemeral DH parameters */

  dh_p
     The prime modulus used for the Diffie-Hellman operation.

  dh_g
     The generator used for the Diffie-Hellman operation.

  dh_Ys
     The server's Diffie-Hellman public value (g^X mod p).

TLS is old; it uses a non-provably-secure scheme to sign: PKCS#1 v1.5. Instead it should be using RSA-PSS, but that's a whole different story :)

PKCS#1 v1.5's padding is pretty straightforward:

padding

  • The ff part shall be long enough to make the bit size of the padded message as long as the bit size of \(N\)
  • The hash prefix part is a hex string identifying the hash function used to sign

And here's the sage code!

import hashlib

# hash the signed_params
h = hashlib.sha384()
h.update(client_nonce.decode('hex'))
h.update(server_nonce.decode('hex'))
h.update(server_params.decode('hex'))
hashed_m = h.hexdigest()

# PKCS#1 v1.5 padding
prefix_sha384 = "3041300d060960864801650304020205000430"

modulus_len = (len(bin(modulus)) - 2 + 7) // 8  # modulus size in bytes
pad_len = modulus_len - 3 - len(hashed_m)//2 - len(prefix_sha384)//2

padded_m = "0001" + "ff" * pad_len + "00" + prefix_sha384 + hashed_m

# Attack to recover p: reduce s^e mod N first, computing the full s^e is impractical
p = gcd(power_mod(signature, public_exponent, modulus) - int(padded_m, 16), modulus)

# recover the private key
q = modulus // p
phi = (p-1) * (q-1)

privkey = inverse_mod(public_exponent, phi)

What now?

Now what? You have a private key, but that's not the flag we're looking for... After a bit of inspection you realize that the last handshake in our capture.pcap file uses a different key exchange: an RSA key exchange!!!

What follows is pretty simple: Wireshark can decrypt conversations for you, you just need a private key file. From the previous section we retrieved the private key; to make a .pem file out of it (see this article to learn what a .pem is) you can use rsatool.

Tada! Hope you enjoyed the write up =)

comments (3)

64-bit ciphers attack in 75 hours => AES-GCM attack in 75 hours? August 2016

cake

There is a new attack/paper from INRIA (Matthew Green has a good explanation of the attack) that continues the trend, introduced by rc4nomore, of long attacks. The paper is branded "Sweet32": it is a collision attack playing on the birthday paradox (hence the cake in the logo) to break 64-bit ciphers like 3DES or Blowfish in TLS.

Rc4nomore showed off numbers like 52 hours to decrypt a cookie. This new attack needs more queries (\(2^{32}\), hence the 32 in the name) and so it took longer in practice: 75 hours. And if the numbers are correct, this should answer the question I raised in one of my blogposts a few weeks ago:

the nonce is the part that should be different for every different message you encrypt. Some increment it like a counter, some others generate them at random. This is interesting to us because the birthday paradox tells us that we'll have more than 50% chance of seeing a nonce repeat after \(2^{32}\) messages. Isn't that pretty low?
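A quick back-of-the-envelope check of that claim, using the standard birthday approximation \(1 - e^{-k(k-1)/2N}\):

```python
from math import exp

def repeat_probability(k, space_bits):
    """Probability of at least one repeated value among k random draws
    from a space of size 2^space_bits (birthday approximation)."""
    n = 2 ** space_bits
    return 1 - exp(-k * (k - 1) / (2 * n))

# random 64-bit nonces: roughly a 39% chance of a repeat after 2^32 messages
prob = repeat_probability(2 ** 32, 64)
assert 0.35 < prob < 0.45
```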

The number of queries here is the same for these 64-bit ciphers and AES-GCM. As the AES-GCM attack pointed out:

we discovered over 70,000 HTTPS servers using random nonces

Does this mean that 70,000 HTTPS servers are vulnerable to a 75-hour BEAST-style attack on AES-GCM?

Fortunately not. As the sweet32 paper points out, not every server will allow a connection to be kept alive for that long. (It also seems like no browser places a limit on the amount of time a connection can be kept alive.) An interesting project would be to figure out how many of these 70,000 HTTPS servers allow long-lived connections.

It's good to note that these two attacks have different results as well. The sweet32 attack targets 64-bit ciphers like 3DES and Blowfish that almost nobody uses; the attack is completely impractical in that sense. AES-GCM, on the other hand, is THE cipher commonly negotiated. But while both of them are BEAST-style attacks, the sweet32 attack results in decrypting a cookie, whereas the AES-GCM attack only allows you to redirect the user to a valid (but tampered) https page of the misbehaving website. In that sense the sweet32 attack has the more directly damaging consequences.

PS: Both Sean Devlin and Hanno Bock told me that the attacks are quite different in that the 64-bit one needs a collision on the IV or on a previous ciphertext, whereas AES-GCM needs a collision on the nonce. This means that in the 64-bit case you can request longer messages instead of querying for more messages; you also get more information from a minimum-sized query than in an AES-GCM attack. This should make the AES-GCM attack even less practical, especially with re-keying happening around the birthday bound.

comments (1)

Crypto and CHES Debriefing August 2016

While Blackhat and Defcon are complete mayhem (especially Defcon), Crypto and CHES look like cute conferences compared to them. I've been twice to blackhat and defcon, and still can't seem to be productive there or really find ways to learn much from them. Besides meeting nice people, it's more of a hassle to walk around and figure out what to listen to or do. On the other hand, even though it's only my first time at Crypto and CHES, I already have an idea of how to make the most of them. The small size makes it easy to talk to great people (or at least be 2 meters away from them :D) and the small number of tracks (2 for Crypto and 1 for CHES) makes it easy not to miss anything and to avoid getting exhausted from walking and waiting between talks. The campus is really nice and cozy; it's a cute university with plenty of rooms with pool tables and Foosball, couches and lounges.

miam

The beach is right in front of the campus and most of the lunches and dinners are held outside on the campus' lawns, or sometimes on the beach. The talks are sometimes quite difficult to get into; the format is 25 minutes and the speakers are not always trying to pass along the knowledge, but rather marketing their papers or checking off their PhD todo lists. On the bright side, the workshops are all awesome, the vendors are actually knowledgeable and will answer all your stupid questions, and posters are displayed in front of the conference, usually with someone knowledgeable nearby.

posters

The rump session is a lot of fun as well, but prepare for a lot of private jokes unless you've been here for many years.

  • Here are my notes on the whitebox and indistinguishability obfuscation workshop. The tl;dr: iO is completely impractical, and probably will continue to be for at least a good decade. Cryptographically secure whiteboxes currently do not exist, at least not in public publications, but even the companies involved seem comparable to hardware vendors trying to find side-channel countermeasures: mostly playing with heuristics that raise the bar high enough (in dollars) for potential attackers to break their systems.

  • Here are my notes on the smartcards and cache attacks workshop. The tl;dr is that when you build hardware that does crypto you need to go through some evaluations. Certification centers have labs for that, with equipment to do EM analysis (more practical than power analysis) and fault attacks. FIBs (Focused Ion Beams) are too expensive and these labs will never have that kind of equipment, but more general facilities do, and you can ask to borrow their equipment for a few hours (although they never do that). Governments have their own labs as well. Getting a certification is expensive, so you will probably want to do a pre-eval internally or with a third party first. For smartcards the consensus is to get a Common Criteria evaluation: it gives you a score calculated according to the possible attacks and how long, how much money and how many experts each would need. Cache attacks in the cloud, or in embedded devices where you corrupt a core/process, are practical. And as with fault attacks, there is a lot you can do there.

  • Here are my notes on the PROOFS workshop. No real take away here other than we've been ignoring hardware when trying to implement secure software, and we should probably continue to do so because it's a big clusterfuck down there.

I'll try to keep an up-to-date list of blogposts on the talks that were given at both Crypto and CHES. If you have any blogpost on a talk please share it in the comments and I will update this page! Also if you don't have a blog but you would be happy to write an invited blogpost on one of the talk on this blog, contact me!

Besides that, we had a panel at CHES that I did not pay too close attention to, but Alex Gantman from Qualcomm said a few interesting things: 10 years ago, in 2006, was already a year before the first iPhone, and two years before the first Android phone. Side-channel attacks were only discovered 20 years ago, around when Smashing The Stack For Fun And Profit was released. Because of that, it is highly questionable how predictable the next 10 years will be. (Although I'm sure car hacking will be one of the new nests of vulns.) He also said something along these lines:

I've never seen, in 15 years of experience, someone saying "you need to add this security thingy so that we can reach regulation". It is usually the other way around: "you don't need to add more security measures, we've already reached regulation". Regulations are seen as a ceiling.

And because I asked a bunch of questions to laser people (thanks to AlphaNov, eShard and Riscure!), here's my idea on how you would go to do a fault attack:

laser

  • You need to get a laser machine. Riscure and a bunch of others are buying lasers from AlphaNov, a company from Bordeaux!
  • You need to power the chip you're dealing with while you do your analysis
  • look at the logic, it's usually the smallest component on the chip. The biggest is the memory.
  • You need a trigger for your laser, which is usually made with an FPGA because it needs to be fast to fire the laser beam. The pulse usually takes 15 nanoseconds, give or take 8 picoseconds.
  • You can also see, with an infrared camera, through the silicon where your laser is hitting. This way, if you know where to hit, you can precisely set the hitting point.
  • The laser will go through the objective of the microscope you're using to observe the chip/IC/SoCs with.
  • Usually, since you're targeting the logic and it's small, you can't see the gates/circuit, so you blindly probe it with short bursts of photons, mapping it until you find the right XOR or whatever operation you are trying to target.
  • It seems like you would also need test vectors for that: known plaintext/ciphertext pairs where you know what the fault is supposed to look like on the ciphertext (if you're targeting an encryption).
  • Once you have the right position, you're good to try your attack.

Anyway! Enough for the blogpost. Crypto will be held at Santa Barbara next year (comme toujours), and CHES will be in Taiwan!

comment on this story