david wong

Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.


Real World Crypto: Day 3 posted January 2016

This is the 3rd post of a series of blogposts on RWC2016. Find the notes from day 1 here.

I'm a bit washed out after three long days of talks. But I'm also sad that it has come to an end :( It was amazing seeing and meeting so many of these huge stars of cryptography. I definitely felt like I was part of something big. Dan Boneh seems like a genuinely good guy, and the organization was top notch (and the sandwiches amazing).

SGX morning

The morning was filled with talks on SGX, the new Intel technology that could allow for secure VMMs. I didn't really understand these talks as I didn't really know what SGX was. White papers, manuals, blogposts and everything else are here.

10:20am - Practical Attacks on Real World Cryptographic Implementations

tl;dw: bleichenbacher pkcs1 v1.5 attack, invalid curve attack

If you know both attacks, don't expect anything new.

  • many attacks nowadays are based on really old papers
    • BEAST in 2011 is from a 2004 paper
    • 2013/14 POODLE and lucky13 come from a 2002 paper
    • 2012 xml encryption attack is from a 1998 bleichenbacher paper
  • bleichenbacher attack
    • rsa-pkcs#1 v1.5 is used to encrypt symmetric keys, it's vulnerable to CCA
    • 2 countermeasures:
      • OAEP (pkcs#1 v2)
      • if padding is incorrect return random
    • real-world padding fail: in apache WSS4J XML Encryption they generated 128 bytes instead of 128 bits of random
    • practical attacks found as well in TLS on JSSE, Bouncy Castle, ...
      • an exception occurs if the padding is wrong; it's caught and the program generates a random value. But the exception handling consumes about 20 microseconds! -> timing attacks (case: JSSE CVE-2014-411)
  • invalid curve attack
    • send invalid point to the server (of small order)
    • server doesn't check if the point is on the EC
    • attacker gets information on the discrete log modulo the small order
    • repeat until you have enough residues to do a large CRT (see the sketch after this list)
    • they analyzed 8 libraries, found 2 vulnerable
    • pretty serious attack -> allows you to extract server private keys really easily
    • works on ECDH, not on ECDHE (but in practice, it depends how long they keep the ephemeral key)
  • HSM scenarios: keys never leave the HSM
    • they are good candidates for these kinds of "oracle" attacks
    • they tested and broke Utimaco HSMs (CVE-2015-6924)
    • <100 queries to get a key
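To make the "large CRT" step concrete, here is a minimal sketch (my own toy illustration, not the speakers' code): suppose the invalid-point queries already leaked the server's secret scalar d modulo several small orders; the Chinese Remainder Theorem recombines those residues into d modulo their product.

from math import prod

def crt(residues, moduli):
    # standard CRT recombination, assuming pairwise coprime moduli
    M = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        Mi = M // n
        x += r * Mi * pow(Mi, -1, n)  # pow(Mi, -1, n) needs Python 3.8+
    return x % M

# hypothetical leaked residues: d mod 3, d mod 5, d mod 7, d mod 11
residues = [2, 3, 5, 7]
moduli = [3, 5, 7, 11]
print(crt(residues, moduli))  # a candidate for d modulo 3*5*7*11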

11:10am - On Deploying Property-Preserving Encryption

tl;dw: what it's like to deploy SSE or PPE, and why it's not dead

  • lots of "proxy" companies that translate your queries to do EDB without re-teaching stuff to people (there was a good slide on that which I missed, if someone has it)
  • searchable symmetric encryption (SSE): you just replace words by token
    • threat model is different, clients don't care if they hold both the indexes and the keys
  • two kinds of order preserving encryption (OPE):
    • stateless OPE (deterministic -> unclear security)
    • interactive OPE (stateful)
    • talks about how hard it is to deploy a stateful scheme
  • many leakage-abuse attacks on PPE
  • crypto researcher on PPE: "it's over!", but the cost and legacy are so that PPE will still be used in the future

I think the point is that there is nothing practical that is better than PPE, so rather than using non-encrypted DBs... PPE will still be used.

11:30am - Inference Attacks on Property-Preserving Encrypted Databases

tl;dw: PPE is dead, read the paper

approach to EDB over time

implemented EDB

  • analyses have been done, it is known what leaks, and cryptanalysis has been done from that information
  • real data tends to be "non-uniform" and "low entropy", not like assumptions of security proofs
  • inference attacks:
    • frequency analysis
    • sorting attack
    • Lp-optimization
    • cumulative attacks
  • frequency analysis: come on we all know what that is
    • Lp-optimization: a better way of mapping the frequencies of the auxiliary data to the ciphertexts
  • sorting attacks: just sort the ciphertexts and your auxiliary data, then map them (see the sketch after this list)
    • this fails if there are missing items in the ciphertext set
    • cumulative attacks improve on this
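To show how simple the sorting attack is, here's a toy sketch with made-up numbers (my own illustration, not the paper's code): with order-preserving encryption, if the whole plaintext domain shows up in the encrypted column, sorting both sides lines ciphertexts up with their plaintexts.

# hypothetical OPE ciphertexts of ages, and the set of ages known from auxiliary (public) data
ciphertexts = [9204, 1312, 5560, 7781]
ages = [25, 31, 47, 62]

# OPE preserves order, so the sorted lists line up one-to-one
mapping = dict(zip(sorted(ciphertexts), sorted(ages)))
print(mapping)  # {1312: 25, 5560: 31, 7781: 47, 9204: 62}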

Check page 6 of the paper for explanations of these attacks. All I was expecting from this talk was an explanation of the improvements (Lp and cumulative) but they just flew through them (fortunately they seem to be pretty easy to understand in the paper). Other than that, nothing new that you can't read in their paper.

2:00pm - Cache Attacks on the Cloud

tl;dw: cache attacks can work, maybe

  • hypervisor (VMM) ensures isolation through virtualization
  • VMs might feel each other's load on some low-level resources -> potential side channels
  • covert channel in the cloud?
    • LLC is cross core (L3 cache)
  • cache attacks
    • prime+probe
      • priming: find an eviction set: memory lines that, when loaded into the L3 cache, will occupy the cache line we want to monitor
      • probing: when trying to access the memory line again, if it's fast it means no one else has used that L3 cache line

primeprobe

  • to get crypto keys from that you need to detect key-dependent cache accesses
    • for RSA check timing and number of times the cache is accessed -> multiplications
    • for AES detect the lookup table access in the last round (??)
  • cross-VM cache attacks are realistic?
    • attack 1 (can't remember) (hu)
    • co-location: detect if they are on the same machine (dropbox) [RTS09]
      • they tried the same on AWS EC2, too hard now (hu)
      • new technique: LLC Cache accesses (hu)
      • new technique: memory bus contention [xww15, vzrs15]
  • once they knew they were on the same machine through colocation what to target?
  • libgcrypt's RSA uses CRT, sliding-window exponentiation and message blinding (see the end of my paper for an explanation of message blinding)

conclusion:

  • cache attacks in public cloud work
    • but still noise and colocation problem
  • open problem: countermeasures?
  • what about non-crypto code?

Why didn't they talk of flush+reload and others?

2:30pm - Practicing Oblivious Access on Cloud Storage: the Gap, the Fallacy, and the New Way Forward

tl;dw: ORAM, does it work? Is it practical?

paper is here

  • Oblivious RAM, he doesn't want to explain how it works
  • how close is ORAM to practice?
  • implemented 4 different ORAM systems from the literature and got some results from them
  • CURIOUS, what they built from this research, is open source. It's made in Java... such sadness.

Didn't get much from this talk. I know this is "real world" crypto, but a better intro to ORAM would have been nicer, as well as where ORAM stands among all the solutions we already have (fortunately the previous talk had a slide on that already). Also, I had only read about it in FHE papers/presentations, but there was no mention of FHE in this talk :( well... no mention of FHE at all at this convention. Such sadness.

From their paper:

An Oblivious RAM scheme is a trusted mechanism on a client, which helps an application or the user access the untrusted cloud storage. For each read or write operation the user wants to perform on her cloud-side data, the mechanism converts it into a sequence of operations executed by the storage server. The design of the ORAM ensures that for any two sequences of requests (of the same length), the distributions of the resulting sequences of operations are indistinguishable to the cloud storage. Existing ORAM schemes typically fall into one of the following categories: (1) layered (also called hierarchical), (2) partition-based, (3) tree-based; and (4) large-message ORAMs.

2:50pm - Replacing Weary Crypto: Upgrading the I2P network with stronger primitives

tl;dw: the i2p protocol

  • i2p is like Tor? both started around 2003, both using onion routing, both vulnerable to traffic confirmation attacks, etc...
    • but Tor is ~centralized, i2p is ~decentralized
    • Tor uses an asymmetric design, i2p is symmetric (woot?)
    • in i2p traffic works in a circle (responses come from another path)
      • so twice as many nodes are exposed
      • but you can only see one direction
      • this difference with Tor hasn't really been researched
    • ...

4:20pm - New developments in BREACH

tl;dw: BREACH is back

But first, what is BREACH/CRIME?

This talk was a surprise talk, apparently to replace a canceled one?

  • original BREACH attack introduced at blackhat USA 2013
    • compression/encryption attack (similar to CRIME)
    • CRIME was attacking the request, BREACH attacks the response (a toy demo of the compression length leak follows after this list)
    • based on the fact that tls leaks length
    • the https server compresses responses with gzip
    • inject content in victim when he uses http
      • the content injected is a script that queries the https server
    • the attack is still not mitigated, but "we now use block ciphers so it's OK" (or so the thinking went)
  • extending the BREACH attack:
    • attack noisy endpoints
    • attack block ciphers
    • optimized
    • no papers?
  • aes-128 is vulnerable
  • mitigation proposed:
    • google is introducing some randomness in their responses (not really working)
    • facebook is trying to generate a mask XORed to the CSRF token (but CSRF tokens are not the only secrets)
  • they will demo that at blackhat asia 2016 in Singapore
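Here is the toy demo of the length leak mentioned above (my own example, nothing to do with the speakers' code): with DEFLATE, a guess that repeats part of the secret typically compresses a bit better, and TLS leaks the resulting length.

import zlib

secret = b"csrf_token=deadbeefcafebabe"  # hypothetical secret in the response

def response_length(guess):
    # the attacker-controlled guess is reflected in the same compressed response as the secret
    body = b"<html>" + secret + b" ... reflected: " + guess + b"</html>"
    return len(zlib.compress(body))

print(response_length(b"csrf_token=dead"))  # correct prefix -> typically shorter
print(response_length(b"csrf_token=zzzz"))  # wrong guess -> typically longer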

4:40pm - Lucky Microseconds: A Timing Attack on Amazon's s2n Implementation of TLS

tl;dw: read the paper, attack is impractical

a debriefing of the convention can be found here

comment on this story

Real World Crypto: Day 2 posted January 2016

This is the 2nd post of a series of blogposts on RWC2016. Find the notes from day 1 here.

disclaimer: I realize that I am writing notes about talks from people who are currently surrounding me. I don't want to alienate anyone but I also want to write what I thought about the talks, so please don't feel offended and feel free to buy me a beer if you don't like what I'm writing.

And here's another day of RWC! This one was a particularly long one, with a morning full of blockchain talks that I avoided and an afternoon of extremely good talks, followed by a suicidal TLS marathon.

stanford

09:30 - TLS 1.3: Real-World Design Constraints

tl;dw: hello tls 1.3

DJB recently said at the last CCC:

"With all the current crypto talks out there you get the idea that crypto has problems. crypto has massive usability problems, has performance problems, has pitfalls for implementers, has crazy complexity in implementation, stupid standards, millions of lines of unauditable code, and then all of these problems are combined into a grand unified clusterfuck called Transport Layer Security.

For such a complex protocol I was expecting the RWC speakers to make some effort. But that first talk was not clear (neither were the other TLS talks), the slides were tiny, the speaker spoke too fast for my non-native ears, etc... Also, nothing you can't learn if you have already read this blogpost.

10:00 - Hawk: Privacy-Preserving Blockchain and Smart Contracts

tl;dw: how to build smart contracts using the blockchain

  • first slide is a picture of the market cap of bitcoin...
  • lots of companies are doing this block chain stuff:

blockchain

  • DAPPs. No idea what this is, but he's talking about it.

Dapps are based on a token-economy utilizing a block chain to incentivize development and adoption.

  • bitcoin privacy guarantees are abysmal because of the consensus on the block chain.
  • contracts done through bitcoin are completely public
    • their solution: Hawk (between zerocash and ethereum)
    • uses zero knowledge proofs to prove that functions are computed correctly
    • blablabla, lots of cool tech, cool crypto keywords, etc.

if you're really interested, they have a tech report here (pdf)

As for me, this tweet sums up my interest in the subject.

blockchain

So instead of playing games on my mac (see below; who plays games on a mac anyway?), I took off to visit the Stanford campus and sat in one of their beautiful libraries.

guy

12:00 - Lightning talks.

lightning

I'm back after successfully avoiding the blockchain morning. Lightning talks are mini talks of 1 to 3 minutes where slides are forbidden. Most were just people hiring or saying random stuff. Not much to see here, but it seems like a good way to get into public speaking.

In the middle of them was Tancrede Lepoint asking for comments on his recent Million Dollar Curve paper. Some people quickly commented without really understanding what it was.

tanja

(Sorry Tanja :D). Overall the idea of the paper is how to generate a safe curve that the public can trust. They use the Blum Blum Shub PRNG to generate the parameters of the curve, iterating the process until it passes a list of checks (taken from SafeCurves), and seeding it with several drawings from lotteries around the world in a particular timeframe (I think they use a commitment for the time frame) so that people can see that these numbers were not chosen in a certain way (and would thus be NUMS).
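For reference, Blum Blum Shub itself is tiny. A toy version, with ridiculously small primes just to show the structure (real parameters are huge Blum primes, and the seed would come from the lottery draws):

# p and q must both be ≡ 3 (mod 4); M = p*q is then a Blum integer
p, q = 11, 23
M = p * q

def bbs_bits(seed, nbits):
    # the seed must be coprime to M
    x = pow(seed, 2, M)
    bits = []
    for _ in range(nbits):
        x = pow(x, 2, M)    # square the state at every step
        bits.append(x & 1)  # output the least significant bit
    return bits

print(bbs_bits(seed=3, nbits=16))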

14:00 - An Update on the Backdoor in Juniper's ScreenOS

tl;dw: Juniper

Slides are here. The talk was entertaining and really well communicated. But there was nothing majorly new that you can't already read in my blogpost here.

  • it happened around Christmas, lots of security people have nothing to do around this period of the year and so the Juniper code was reversed really quickly (haha).
  • the password that looks like a format string was already an idea taken straight from a phrack 2009 issue (0x42)

Developing a Trojaned Firmware for Juniper ScreenOS Platforms

  • unfiltered Dual EC outputs (the 30 bytes of output plus 2 other bytes of a following Dual EC output) from an IKE nonce
    • but the Key Exchange is done before generating the nonce? They're still working on verifying this on real hardware (they will publish a paper later)
    • in earlier versions of ScreenOS the nonces used to be 20 bytes, the RNG would output 20 bytes only

timeline

  • When they introduced Dual EC in their code (Juniper), they also changed the nonce length from 20 bytes to 32 bytes (which is perfect for easy use of the Dual EC backdoor). Juniper did that! Not the hackers.
  • they are aware, through their disclosure, that it is "exploitable"
  • the new patch (17 dec 2015) removed the SSH backdoor and restored the Dual EC point.

A really good question from Tom Ritter: "how many bytes do you need to do the attack?" Answer: the truncated output of Dual EC is 30 bytes (instead of 32), so you need to bruteforce the missing 2 bytes. To narrow the search space, 2 bytes from the next output are practical and enough. So ideally 30 bytes plus 2 bytes from a following output allow for easy use of the Dual EC backdoor.

(which is something I forgot to mention in my own explanation of Dual EC)

14:20 - Pass: Strengthening and Democratizing Enterprise Password Hardening

tl;dw: use an external PRF

  • Ashley Madison and other recent breaches taught us that hashing was not enough to protect passwords
  • smash and grab attacks

A smash and grab raid or smash and grab attack (or simply a smash and grab) is a particular form of burglary. The distinctive characteristics of a smash and grab are the elements of speed and surprise. A smash and grab involves smashing a barrier, usually a display window in a shop or a showcase, grabbing valuables, and then making a quick getaway, without concern for setting off alarms or creating noise.

  • The Ashley Madison breach is interesting because they used bcrypt with a salt and a high cost parameter, which is better than the industry norm for protecting passwords.
  • he cracked 4000 passwords from the leaks anyway

cracked

  • millions of passwords were cracked a few weeks later
  • He has done some research and has come up with a response: PASS, password hardening and typo correctors
  • facebook password onion from last year's RWC looks like an "archeological record"

facebook

  • the HMAC with the private key transforms the offline attack into an online attack, because the attacker now needs to query the PRF service repeatedly.
  • "the facebook approach" is to use a queryable "PRF service" for the HMAC; it makes it easier to detect attacks.
  • but several drawbacks:
    • 1) online attackers can instead record the hashes (mostly because of this legacy code)
    • 2) the PRF is not called with per-user granularity (same for all users) -> hard to implement fine-grained rate limiting (throttling/rate-limiting attempts; you can only detect global attacks)
    • 3) no support for periodic key rotation -> if they detect an attack, they now need to add new lines to their rotting key hashing onion
  • PASS uses a PRF Service, same as facebook but also:
    • 1) blinding (PRF can't see the password)
    • 2) graceful key rotation
    • 3) per-user monitoring

po-prf

  • the blinding is a hash raised to a power, and unblinding is done by taking the "square root" of that power (more likely raising to an inverse modulo the group order; see the toy sketch after this list)
  • a tweak t is sent as well, basically the user id; it doesn't have to be blinded, so they invented a new concept of a "partially oblivious PRF" (PO-PRF)
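Here is a toy sketch of that blinding step (my own simplification: a plain multiplicative group instead of the paper's pairing-based PO-PRF, and no tweak):

import hashlib
import secrets

# toy group: the order-q subgroup of Z_p*, with p = 2q + 1 (tiny numbers for readability)
p, q = 23, 11

def hash_to_group(password):
    h = int.from_bytes(hashlib.sha256(password).digest(), "big")
    return pow(h % p, 2, p)  # squaring lands in the order-q subgroup

k = secrets.randbelow(q - 1) + 1  # the PRF service's secret key

# client: blind the hashed password with a random exponent r
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(b"hunter2"), r, p)   # H(pw)^r, reveals nothing about pw

# PRF service: raise the blinded element to its secret key
answer = pow(blinded, k, p)                      # H(pw)^(r*k)

# client: unblind by raising to r^-1 mod q (the "square root of that power" idea)
prf_output = pow(answer, pow(r, -1, q), p)       # H(pw)^k

assert prf_output == pow(hash_to_group(b"hunter2"), k, p)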

existing crypto primitives insufficient

  • the tweak and the blinded password are sent to the PRF, which uses a bilinear pairing construction to do the PO-PRF thingy (this is a new use case for bilinear pairings apparently).

  • it's easy to implement, completely transparent to users, highly scalable.

  • typo correctors: the idea of a password corrector for famous typos (e.g. a capitalized first letter)
    • facebook does this, vanguard does this...
  • intuition tells you it's bad: an attacker tries a password, and you help him find it if it's almost correct.
  • they instrumented dropbox for a period of 24 hours (for all users) to implement this thing
  • they took problems[:3] = [accidental caps lock key, not hitting the shift key to capitalize the first letter, extra unwanted character]
    • they corrected 9% of failed password submissions
    • minimal security impact, according to their research "virtually no security loss"
  • paper seems interesting
  • there is some open source code somewhere

Question from Dmitry Khovratovich: Makwa does something like this, exactly like this (outch!). Answer: "I'm not familiar with that"

14:50 - Argon2 and Egalitarian Computing

tl;dw: Argon2, a password hashing function designed to resist ASICs

  • passwords are not long (PIN, human has to remember the password) -> brute force attacks are possible
  • password cracking is easier with GPU or FPGAs or even ASICs
  • ASICs? -> ex: bitcoin, they switched to ASICs (2^32 hashes/joule on ASIC, 2^17 hashes/joule on GPU)
  • Argon2 created for the password hashing competition
  • memory-intensive computation: make a password hashing function so that you need a lot of memory to use it -> the ASIC advantage vanishes (roughly: ASICs can pack lots of cheap, fast compute cores, but they can't make memory much cheaper or faster than commodity hardware, so a memory-dominated function levels the playing field; see the usage sketch at the end of this section)

password competition

  • winner: Argon2
  • they wanted the function to be as simple as possible (to simplify analysis)
  • you need the previous block to do the next computation (hard to parallelize) and a reference block (takes memory)

design of Argon2

  • add some parallelism... there was another slide for which I have no image and no comment :(
  • this concept of slowing down attackers has other applications -> egalitarian computing
  • for ex: in bitcoin they wanted every user to be able to mine on his laptop, but now there are pools taking up more than 50% (danger: 51% attack)
  • can use it for client puzzles for denial of service protection.
  • egalitarian computing -> ensures that attacker and defender are the same (no advantage using special computers)

samuel colt
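As mentioned above, Argon2 is easy to try from Python with the argon2-cffi package (assuming that package; the parameters below are purely illustrative, not a recommendation):

from argon2 import PasswordHasher

# time_cost = number of passes, memory_cost in KiB (64 MiB here), parallelism = lanes
ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=2)

h = ph.hash("correct horse battery staple")
ph.verify(h, "correct horse battery staple")  # raises an exception on mismatch
print(h)  # the encoded hash embeds the salt and the parameters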

15:10 - Cryptographic pitfalls

tl;dw: 5 stories about cute and subtle crypto fails

  • the speaker is explicit about his non-involvement with Juniper (haha)
  • he's narrating the tales of previously disclosed vulns, 5 case studies, mostly caused by a "following best practice" attitude (not that it's bad, but it's usually not enough).

  • 1)
    • concept of zeroisation
    • HSM manufacturer had a sandbox for user code, always zeroed memory when it was freed
    • problem is, sometimes memory doesn't get freed, like when you pull the power out.
    • (reminds me of the cold boot attack of the other day).

zeroisation

  • 2)

    • concept of "reusing components rather than designing new ones"
    • vpn uses dsa/dh for ipsec
    • over prime fields
    • pkcs#3 defines something
    • PKIX says something else, a subtle difference

pkcs3

using dsa keys

  • 3)
    • concept of "using external events to seed your randomness pool", gotta get your entropy from somewhere!
    • entropy was really bad from the start because they would boot the device right off the production line, with nothing to build entropy from (the same thing happened in the carjacking talk at blackhat us2015)
    • so the key was almost always the same because of that; Juniper fixed it after a customer report (the customer changed his device, but he didn't get an error that the key had changed)

customer report

  • 4)
    • concept of "randomness in RSA factors"
    • the government of some country uses smartcards
    • then they wanted to use cheaper smartcards, and re-used the same code
    • the new RNG was bad

rng fail

  • 5)
    • everything is blanked out (he can't really talk about it)
    • they used CRC for integrity (instead of a MAC/signature)

lessons

From the audience, someone from Netscape speaks up: "yup, we forget that things have to be random as well" (cf. the predictable Netscape seed)

16:00 - TLS at the scale of Facebook

tl;dw: how they deployed https, wasn't easy

Timeline of the https deployment:

  • In 2010: facebook uses https almost only for login and payments
  • during a hackathon they tried to change every http url to https. It turns out it's not that simple.
  • not too long after, Firesheep happened, then Tunisia's only ISP started injecting scripts into non-https traffic. They had to do something
  • In 2011, they tried mixing secure and insecure. Then tried to make ALL apps support https (outch!)
  • In 2012, they wanted https only (no https opt-in)
  • In 2013, https is the default. At the end of the year they finally succeed: https-only
    • (And thinking that not so long ago it was normal to login without a secure connection... damn things have changed)

present

  • Edge networks: use CDNs like Akamai or Cloudflare, or spread your own servers around the world
  • Proxygen, open source c++ http framework
  • they have a client-side TLS library (are they talking about mobile?) built on top of proxygen. This way they can ship improvements to TLS before the platform does, blablabla, there was a nice slide on that.
  • they really want 0-RTT, but tls 1.3 is not here, so they modified QUIC crypto to make it happen on top of TCP: it's called Zero.

zero

Server Name Indication (SNI) is an extension to the TLS computer networking protocol[1] by which a client indicates which hostname it is attempting to connect to at the start of the handshaking process. This allows a server to present multiple certificates on the same IP address and TCP port number and hence allows multiple secure (HTTPS) websites (or any other Service over TLS) to be served off the same IP address without requiring all those sites to use the same certificate

  • stats:
    • lots of session resumption by ticket -> this is good
    • low number of handshakes -> that means they store a lot of session tickets!
    • very low resumption by session ID (why is this a good thing?)
    • they haven't turned off RC4 yet!
      • someone in the audience tells him about downgrade attacks, outch!
  • the referrer field in the http header is empty when you go on another website from a https page! Is that important... no?
  • it's easy for a simple website to go https (let's encrypt, ...), but for a big company, phew, it's hard!
  • there are still new feature phones that can't do TLS (do they care? meh)

16:30 - No More Downgrades: Protecting TLS from Legacy Crypto

tl;dw: SLOTH

downgrade

  • brainstorming: "how do we fix that in tls 1.3?"
  • explanation of Logjam (see my blogpost here)
  • at the end of the protocol there is a Finished message that includes a MAC over the whole negotiation:
    • but this is already too late: the attacker can forge the mac as well at this point
    • this is because the downgrade protection mechanism (this mac at the end) itself depends on downgradeable parameters (the idea behind logjam)
  • in tls 1.3 they use a signature instead of the mac
    • but you sign a hash of the transcript! -> SLOTH (which was released yesterday)

Didn't understand much, but I know that all the answers are in this paper. So stay tuned for a blogpost on the subject, or just read the freaking paper!

sloth

  • sloth is a transcript collision attack
  • he talks about sigma protocol for some reason (proof of knowledge)

primer on collision

  • tls 1.3 includes a version downgrade resilience system:
    • the server chooses the version
    • the server has to choose the highest common version
    • ...
    • the only solution they came up with: put all the supported versions in the server nonce. This nonce value (server.random to be exact) exists in all TLS versions and is signed before the key exchange happens.

16:50 - The OPTLS Protocol and TLS 1.3

tl;dw: how OPTLS works

  • paper is here
  • tls 1.3 improved RTT and PFS
  • agreement + confidentiality are the fundamental requirements for a key exchange protocol
  • OPTLS is a key exchange that they want tls 1.3 to use

The OPTLS design provides the basis for the handshake modes specified in the current TLS 1.3 draft including 0-RTT, 1-RTT variants, and PSK modes

I have to admit I was way too tired at that point to follow anything. Everything looked like David Chaum's presentation. So we'll skip the last talk in this blogpost.

day 3 notes are here

comment on this story

Real World Crypto: Day 1 posted January 2016

Everyone was at CCC before New Year's Eve, and everyone keeps talking about how great it was and how good a time they had... :(

But now it's RWC2016 (nothing to do with the real world cup)! And it's awesome! And it's so far the best crypto convention I've attended!

rwc

Global overview and dumb summary of the day (followed by notes of the talks, so just skip this list):

  • one big room composed of around 400-500 people in the middle of Stanford university
  • it's raining, locals are happy, we are not
  • talks of various lengths (10-40 min) back to back, on many different topics
  • awkwardly meeting people
  • free food (and great food!) and starbucks coffee (I think I'm the only one happy about that) and cypher wine

cipher wine

  • 12 talks, only 1 girl
  • everyone in cryptography is here (diffie, rivest, watson ladd, boneh, djb, tanja, phong nguyen, lochter, trevor perrin, filippo, tancrede lepoint...)

09:30 - The Blackphone

tl;dw: marketing speech

Two human beings verbally compare the Short Authentication String, drawing the human brain directly into the protocol. And this is a Good Thing.

ZRTP seems to be a normal DH key exchange, except that you have to compare the SAS (a hash of the public keys) aloud on the phone.

There is also the concept of key continuity, where you keep some value that will be used in the following DH key exchange.

If the MiTM is not present in the first call, he is locked out of subsequent calls

Makes me think, why the ratchet in Axolotl? Why the need to constantly change the key being used to encrypt? If someone knows the answer :)

  • "don't let cryptographers design UX"
  • the password is generated on their server and handed to the user... they say they don't want users to generate weak passwords. WTF
  • encrypted DB -> they removed it because it was annoying people. WTF
  • they replaced AES, SHA-2, etc... with a "non-NIST" suite (Twofish, ...) that comes from NIST-funded competitions. WTF
  • they use Curve41417, gifted by tanja and bernstein. They wanted a unique curve. WTF
  • tanja asks a question (missed it), he answers "post-quantum right now is marketing". Haha

10:00 - Cryptographic directions in Tor: past and future

tl;dw: not much crypto at first, now crypto, in the future more weird crypto?

  • they took a crypto class in 2004 and chose the ciphersuite for Tor from that -> AES-CTR, truncated sha1, plenty of bad decisions
    • key negotiation: RSA1024 + DH1024 + AES-CTR (no name at first, then they called it "TAP")
    • links? they used TLS 1.0 (a link is the connection between two relays)
  • they replaced a lot of it now
    • key negotiation: curve25519 + sha256 (called "ntor")
    • tls >= 1.0, with ecdh (P256)
  • work remains:
    • truncated sha1 is too malleable
    • not enough post quantum (and people are scared of that)
    • need to remove rsa1024
  • AES-CTR is malleable, and the lack of per-hop MACs allows tagging attacks if the first and third relays are evil -> woot?

Here's a blogpost from Tom Ritter about tagging attacks. The idea: the first node XORs some data into the ciphertext, and the third node sees the modified data in the clear (if the data is not going through https). So with two evil nodes, the first and the last, you can know who is visiting what website (traffic correlation).
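A tiny illustration of the tagging idea (my own toy example, with a random keystream standing in for AES-CTR):

import os

plaintext = b"GET / HTTP/1.1"
keystream = os.urandom(len(plaintext))  # stand-in for the AES-CTR keystream
tag = b"\x00" * 10 + b"EVIL"            # what the first (evil) relay XORs in

ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
tagged = bytes(c ^ t for c, t in zip(ciphertext, tag))       # first relay tags the cell
decrypted = bytes(c ^ k for c, k in zip(tagged, keystream))  # exit relay "decrypts"

# CTR malleability: the tag shows up directly in the decrypted plaintext
assert decrypted == bytes(p ^ t for p, t in zip(plaintext, tag))
print(decrypted)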

There was also something about doing it with the sha1, and something about adding a MAC between each relay. But I missed that part, if someone can fill in the blanks for me?

  • they want to use AEZ in the future (rogaway)? or HHFHFH? (djb)

    • This is scary as many have stated. Djb said "crypto should be boring" (at least I heard he said that), and he's totally right. Or at least double encrypt (AES(AEZ(m)))
    • AEZ is an authenticated cipher (think AES-GCM or chacha20-poly1305) that is part of the CAESAR competition
    • HHFHFH is ...? No idea, if someone knows what Nick is talking about?
  • Nick talks about newhope as well (I presume he's talking about this post-quantum key exchange?). He wants more post-quantum stuff in his thing.

10:20 - Anonize: An anonymous survey system

tl;dw: how to do an anonymous survey product with a nice UX (paper is here)

  • problems of surveys:
    • you want authenticity (only authorized users, only one vote per person)
    • anonymity (untraceable responses)
  • surveymonkey (6% of online surveys):
    • they don't care about double votes
    • special URLs to trace responses to single users/groups
    • anyone who infiltrates the system can get that info
    • they do everything wrong
  • Anonize overview
    • 1) you create a public key
    • 2) create survey, unique URL for everyone
    • 3) you fill out something, you get a QR code
    • what you submit is a [response, token] with the token a ZK proof for... something.
  • they will publish an API, and it's artistic

The talk was mostly spent on showing how beautiful the UX was. I would have preferred something clearer on how the protocol really works (but maybe others understood it better than me...)

11:10 - Cryptography in AllJoyn, an Open Source Framework for IoT

tl;dw: the key exchange protocol behind AllJoyn, and the security of devices that use the AllJoyn API/interface...

What's AllJoyn? Something that you should use in your IoT stuff apparently:

AllJoyn is an open source software framework that makes it easy for devices and apps to discover and communicate with each other. Developers can write applications for interoperability regardless of transport layer, manufacturer, and without the need for Internet access. The software has been and will continue to be openly available for developers to download, and runs on popular platforms such as Linux and Linux-based Android, iOS, and Windows, including many other lightweight real-time operating systems.

  • they want security to be the same whatever transport they use (tcp, udp, ip, bluetooth, etc.) so they created their own TLS-like protocol with way fewer options
  • EC-SPEKE will replace PSK
  • key exchange overview (~4 round trips to derive a session key, outch!)
  • ECDHE_ECDSA key exchange
  • devices are in "claimable" state when they join the network
    • they get assigned a cert/policy by the security manager
      • the policy refers to the level of access/control
    • app has a "manifest" of interfaces it wants to use (like facebook permissions)
      • the security manager has a chance to see that and accept/refuse it

...

11:40 - High-assurance Cryptography

tl;dw: they have open source code to formally verify cryptographic implementations

  • they (Galois) have had tools since '99 to verify crypto, using SMT solvers, SAT solvers, etc.
    • acts as a compiler (from the user's point of view)
  • everything he's talking about is open source:
    • Cryptol, a functional language. cryptol.net
    • SAW, the Software Analysis Workbench

SAW supports analysis of programs written in C, Java™, MATLAB®, and Cryptol, and uses efficient SAT and SMT solvers such as ABC and Yices.

https://galois.com/project/software-analysis-workbench/

  • Verification engineers can use SAW to prove that a program implements its specification.
  • Security analysts can have SAW generate models identifying constraints on program control flow to identify inputs that can reach potentially dangerous parts of a program.
  • Cryptographers can have SAW generate models from production cryptographic code for import and use within Cryptol.
  • it takes 10-100 minutes to verify a crypto primitive
  • if you have a high-level formulation of your algorithm, why not have it write the code?

12:00 - The first Levchin prize for contributions to real-word cryptography

tl;dw: a dude with a lot of money decides to give some to influential cryptographers every year, and also gives them his name as a reward.

The Levchin prize honors significant contributions to real-world cryptography. The award celebrates recent advances that have had a major impact on the practice of cryptography and its use in real-world systems. Up to two awards will be given every year and each carries a prize of $10,000.

http://levchinprize.com/

$10,000, up to twice a year. This was the first edition. Max Levchin is the co-founder of PayPal; he likes puzzles.

rogaway

  • first prize is awarded to Phillip Rogaway (unanimously) -> concrete security analysis, authenticated encryption, OCB, synack, format-preserving encryption, surveillance-resistant crypto, etc. Well, the guy is famous.
  • second award goes to several people from INRIA for the miTLS project (Karthikeyan Bhargavan, Cedric Fournet, Markulf Kohlweiss, Alfredo Pironti). Well deserved.

14:00 - PrivaTegrity: online communication with strong privacy

tl;dw:

Well. David Chaum, PrivaTegrity: "A wide range of consumer transactions multiparty/multijurisdiction -- efficiently!"

I won't comment on that. Everything is in these slides:

chaum1

chaum1

I mean seriously, if you use slides like that, and talk really loud, people will think you are a genius? Or maybe the inverse. I'm really confused as to why that guy was authorized to give a talk.

More comments here: https://news.ycombinator.com/item?id=10850192

14:30 - Software vulnerabilities in the Brazilian voting machine

tl;dw: br voting machine is a shitstorm

voting machine

A direct-recording electronic (DRE) voting machine records votes by means of a ballot display provided with mechanical or electro-optical components that can be activated by the voter (typically buttons or a touchscreen); that processes data by means of a computer program; and that records voting data and ballot images in memory components. After the election it produces a tabulation of the voting data stored in a removable memory component and as printed copy. The system may also provide a means for transmitting individual ballots or vote totals to a central location for consolidating and reporting results from precincts at the central location. The device started to be massively used in 1996, in Brazil, where 100% of the elections voting system is carried out using machines. In 2004, 28.9% of the registered voters in the United States used some type of direct recording electronic voting system, up from 7.7% in 1996.

  • 13 million LOC. WTF
  • 1) print zero tape first to prove no one has voted (meaningless)
  • in 2012, the government organized an open contest to find vulns in the system (which is what he did); extremely restricted: just a few hours, no pen/paper
  • he found hardcoded keys in plain sight
  • the government says it's "voting software that checks itself" (what does that mean? a canary in the assembly code? Complete nonsense and non-crypto)
  • he tried a grep -r rand * and...
    • got a match in a file: srand(time(NULL))
    • this is predictable if you know the time, and they know the machines are booted between 7 and 8am. Bruteforce? (see the sketch after this list)
    • the time is actually public, no need for brute force...
    • the government asked if hashing the time would work. No? Well, hashing the time twice then?
    • finally fixed by using /dev/urandom although the voting machines have two hardware RNGs
  • YouInspect: an initiative to take pictures of the vote ticket and upload them (didn't get what the point of that was; it didn't seem to yield any useful results)
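A rough sketch of why that srand(time(NULL)) is hopeless (my own illustration, calling libc's rand() through ctypes; glibc on Linux assumed): a one-hour boot window means only 3600 possible seeds, so an attacker can enumerate every possible output stream almost instantly.

import ctypes
import time

libc = ctypes.CDLL("libc.so.6")  # assumption: glibc on Linux

# hypothetical boot window: 7:00 to 8:00 on some election morning
start = int(time.mktime((2012, 10, 7, 7, 0, 0, 0, 0, -1)))

streams = {}
for seed in range(start, start + 3600):
    libc.srand(seed)
    streams[seed] = [libc.rand() for _ in range(5)]  # first few outputs per candidate seed

print(len(streams), "candidate output streams to test")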

14:50 - The State of the Law: 2016

tl;dw: blablabla

15:50 - QUIC Crypto

  • the only talk with very few slides. Adam Langley only used them when he needed pedagogical support. This is brilliant.
  • the forward-secure part of QUIC is better than forward secrecy in TLS (how? Didn't get that)
  • QUIC crypto will be replaced by TLS 1.3
  • QUIC will go on, but TLS works over TCP so they will have to make some changes?

There was this diagram where a client would send something to the server, if he didn't have the right ticket it wouldn't work, otherwise it would work... If you understood that part please tell me :)

16:20 - On the Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 v1.5 Encryption

  • most used tls version is 1.0 (invented in 1999, when windows 98 was the most used OS)
  • pkcs#1 v1.5 is removed from tls 1.3
  • the bleichenbacher attack on pkcs#1 v1.5 is still possible.... attack explained here (the thing works if you have a server which supports both 1.3 and older versions)
  • idea of a solution?: use different certificates for 1.3 and 1.0

Someone from the audience: "no cool name and logo?"

16:40 - The State of Transport Security in the E-Mail Ecosystem

10 minutes of (painful) talk (but good job nonetheless Aaron: you went through to the end).

paper is here if you're interested: http://arxiv.org/pdf/1510.08646v2.pdf

16:50 - Where the Wild Warnings Are: The TLS Story

tl;dw: users get certificate errors browsing the net because their clock is not correct.

  • users are getting used to errors, and tend to dismiss them
  • making stats of the error warnings shown to users:
    • 51% of warnings are non-overridable
    • 41% of warnings are for facebook, youtube, google (because they are "portals to the web")
  • errors come from the client:
    • client clock misconfiguration (61%)
      • they have an error screen for that which lets you fix your clock on android
      • can't send messages on whatsapp because of this problem as well
    • captive portals
    • security products
    • ...
  • errors come from the server:
    • government has tons of errors with weird certificates
    • ...

palmer

Someone in the audience is suggesting that this is because the governments are trying to teach people to ignore these errors (obviously joking). Another one is saying that they might want users to add their "special certificate", because a user-added certificate can override HSTS. I don't know if this is true. But I'm thinking, why not add certificates only for the website that requests it? Like a certificate jail. Or maybe save the certificate in a different "user-added" folder; websites signed by certificates from this folder would make Chrome display "this website is signed by a certificate you added. If you think this is not normal blablabla".

APF is talking about how they are scared that users will get desensitized to errors, but why display errors? Why not just display a warning? That would annoy real servers and oblige them to get their certs in order; it would make users suspicious but not unable to access their website (and not pushed to google for solutions like "just add the certificate to your root store").

Watson Ladd (whom the host recognized) asked her how far off from the real time the clocks were set. He thought maybe it could be the laptop battery dying, NTP not working right away (I missed why), and so the time difference would be negative. In my understanding the clock difference was causing a problem because of the certificates' notBefore or notAfter fields, so that wouldn't be a problem.

Also, people are wondering why these clocks are wrong, and whether they should fix them for the user. But maybe not, since it might be that the user wants his clock to be incorrect... I just remember a time when I would purposely modify the time so that I could keep using my time-limited trials (Photoshop?).

day 2 notes are here

6 comments

Happy New Year posted January 2016

First: happy new year dear readers !

As I'm packing my bags to leave the temporary comfort of my parents' place, I'm taking the time to write a bit about my life (this is a blog after all).

I started this blog more than 2 years ago, right before moving to Bordeaux to start a master's in Cryptography. I had just finished a long bachelor's in Mathematics between the universities of Lyon (France) and McMaster (Canada) and had decided to merge my major with an old passion of mine (Computer Science).

Hey guys, I'm David Wong, a 24 years old french dude who's going to start a Master of Cryptology in the university of Bordeaux 1.

I had been blogging for decades, and it was only natural that I decided to start something I could look at and be proud of at the end of my master's. Sort of a journal of a 2-year project. I was also counting on it to give me some motivation in times of adversity, and it started taking shape with tutorial videos on classes I couldn't understand (here's my first, on Differential Power Analysis) and long articles about failed interviews (here's the one I made after interviewing with Cloudflare).

I still have no clue what my future job will be, that's why I had the idea of making this small blog where I could post about my ventures into this new world and, hopefully, being able to take a step back and see what I did, what I liked, what happened in two years of Master (and maybe more).

Fast forward, and I was interning at Cryptography Services, the crypto team of NCC Group. An amazing internship of around 5 months spent in Chicago in the Matasano office: working on public audits (OpenSSL, Let's Encrypt) and private ones, giving presentations at the company, publishing a research paper, teaching a crypto class at Blackhat, hanging out at Defcon and writing several articles for our crypto bulletin.

I'll also post some thoughts about the new city I'll be moving to : Bordeaux. This is for at least 2 years, or less if I change my mind. Anyway, this is going to be exciting!

That was 2 years ago, and indeed those years are now filled with memories and achievements that I will forever cherish. If you're passing by France and you didn't plan a visit in Bordeaux, you're missing out.

But anyway, as you probably know since you don't miss any of my blogposts, I've since been hired by the same people and will be back in the office in two weeks. In two weeks because before then I will be at the Real World Crypto convention at Stanford University, and after that at NCC Con in Austin. A lot is going to happen in just a few weeks, plus I'll have to find a new place to live and get reacquainted with the desk I left behind...

Now, here we go again.

comment on this story

PQCHacks: A gentle introduction to post-quantum cryptography posted December 2015

With all the current crypto talks out there you get the idea that crypto has problems. crypto has massive usability problems, has performance problems, has pitfalls for implementers, has crazy complexity in implementation, stupid standards, millions of lines of unauditable code, and then all of these problems are combined into a grand unified clusterfuck called Transport Layer Security.

Check that 32c3 video of djb and tanja

It explains a few of the primitives backed by PQCrypto for a post-quantum world. I did a blog series myself on hash-based signatures, which I think is clearer than the above video.

comment on this story

How to parse scans.io public keys in python posted December 2015

I wanted to check for weak private exponents in the RSA public keys of big websites' certificates. I went on scans.io and downloaded the Alexa Top 1 Million domains handshake of the day. The file is called zgrab-results and weighs 6.38GB uncompressed (you need google's lz4 to uncompress it, get it with brew install lz4).

Then the code to parse it in python:

# Python 2; the zgrab results file has one JSON object per line
import base64
import json

with open('rro2asqbnwy45jrm-443-https-tls-alexa_top1mil-20151223T095854-zgrab-results.json') as ff:
    for line in ff:
        lined = json.loads(line)
        # skip entries that don't contain a parsed certificate
        if 'tls' not in lined["data"] or 'server_certificates' not in lined["data"]["tls"].keys() or 'parsed' not in lined["data"]["tls"]["server_certificates"]["certificate"]:
            continue
        server_certificate = lined["data"]["tls"]["server_certificates"]["certificate"]["parsed"]
        public_key = server_certificate["subject_key_info"]
        signature_algorithm = public_key["key_algorithm"]["name"]
        if signature_algorithm == "RSA":
            # the modulus is base64-encoded in the zgrab output
            modulus = base64.b64decode(public_key["rsa_public_key"]["modulus"])
            e = public_key["rsa_public_key"]["exponent"]
            N = int(modulus.encode('hex'), 16)
            print "modulus:", N
            print "exponent:", e

I figured that if the public exponent was small (e.g. smaller than 1,000,000, an arbitrary bound), a weak (small) private exponent would not be possible anyway. Unfortunately, it seemed like every single one of these RSA public keys was using the public exponent 65537.

PS: to parse other .csv files, just open sqlite and write .import the_file.csv tab, then .schema tab or any SQL query on tab will work ;)

comment on this story

Juniper's backdoor posted December 2015

intro

A few days ago, Juniper made an announcement on 2 backdoors in their ScreenOS application.

screenos

But no details were to be found in this advisory. Researchers from the twitter-sphere started digging, and finally the two flaws were found. The first vulnerability is rather crypto-y and this is what I will explain here.

First, some people realized by diffing strings of the patched and vulnerable binaries that some numbers were changed

diff

Then they realized that these numbers were next to the parameters of the P-256 NIST ECC curve. Worse, they realized that the modified values were those of the Dual EC PRNG: from a Juniper product information page you could read that Dual EC had been removed from most of their products except ScreenOS. Why's that? No one knows, but they assured everyone that the implementation was not visible from the outside, and thus the NSA's backdoor would be unusable.

dual ec

Actually, reading the values in their clean binaries, it looks like they had changed the NSA's values, introducing their own \(Q\) point and thus canceling the NSA's backdoor. But at the same time, maybe, introducing their own backdoor. Below are the NSA's values for the points \(P\) and \(Q\) from the cached NIST publications:

nist

Reading the previous blog post, you can see how they could have easily modified \(Q\) to introduce their own backdoor. This doesn't mean that it is what they did. But at the time of the implementation, it was not really known that Dual EC was a backdoor, and thus there was no real reason to change these values.

changes

According to them, and the code, a second PRNG was used and Dual EC's only purpose was to help seed it. Thus no raw Dual EC output would ever surface out of the program. The second PRNG is a FIPS-approved one based on 3DES and is -- as far as I know -- deemed secure.

screenos dual ec

Another development came along, and some others noticed that the call to the second PRNG was never made; this was because a global variable prng_output_index was always set to 32 by the prng_reseed() function.

Excerpt of the full code:

code screenos

This advance was made because of Juniper's initial announcement that there were indeed two vulnerabilities. It seems like they were aware of the fact that Dual EC was the only PRNG being used in their code.

fail screenos

Now, how is the Dual EC backdoor controlled by the hackers? You could stop reading this post right now and just watch the video I made about Dual EC, but here are some more explanations anyway:

prng

The diagram above shows the basis of a PRNG. You start it with a seed \(s_0\), and every time you need a random number you first create a new state from the current one (here with the function \(f\)), then you output a transformation of the state (here with the function \(g\)).

If the function \(g\) is one-way, the output doesn't allow you to retrieve the internal state and thus you can't predict future random numbers, neither retrieve past ones.

If the function \(f\) is one-way as well, retrieving the internal state doesn't allow you to retrieve past state and thus past random numbers generated by the PRNG. This makes the PRNG forward-secure.
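A toy version of that structure in Python (my own illustration, using a hash for both \(f\) and \(g\), since hashes are one-way):

import hashlib

def f(state):
    # state update; one-way, so a leaked state doesn't reveal past states (forward security)
    return hashlib.sha256(b"f" + state).digest()

def g(state):
    # output function; one-way, so outputs don't reveal the internal state
    return hashlib.sha256(b"g" + state).digest()

state = b"seed s0"
for _ in range(3):
    state = f(state)
    print(g(state).hex())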

dual ec

This is Dual EC. Iterating the state is done by multiplying the current state with the point \(P\) and then taking the \(x\) coordinate of the result. The point \(P\) is a point on a curve, with \(x\) and \(y\) coordinates; multiplying it by an integer gives us a new point on the curve. This is a one-way function because of the elliptic curve discrete logarithm problem, and thus our PRNG is forward-secure (the ECDLP says that if you know \(P\) and \(Q\) with \(P = dQ\), it's really hard to find \(d\)).

dual ec fail 1

The interesting thing is that, if the attacker knows the secret integer \(d\), he can recover the next internal state of the PRNG. First, as seen above, the attacker observes one random output and then tries to reconstruct the full output: the real random output is obtained by truncating the first 16 bits of the full output, so the attacker brute-forces those bits in \(2^{16}\) iterations. Easy.

With our random number \(r_1\) (in our example), which is the \(x\) coordinate of our point \(s_1 Q\), we can easily recover the \(y\) coordinate and thus the entire point \(s_1 Q\). This is because of how elliptic curves are shaped.

Multiplying this point with our secret value \(d\) we obtain the next internal state as highlighted at the top of this picture:

dual ec fail 2

This attack is pretty destructive and takes on the order of mere minutes, according to Dan Bernstein et al.

djb

For completeness, it is important to know that there were two other constructions of the Dual EC PRNG with additional inputs, which allowed adding entropy to the internal state and thus provide backward secrecy: retrieving the internal state doesn't allow you to retrieve future states.

The first construction in 2006 broke the backdoor, the second in 2007 re-introduced it. Go figure...

dual ec adin

1 comment

How to check if a binary contains the Dual EC backdoor for the NSA posted December 2015

tl;dr:

this is what you should type:

strings your_binary | grep -C5 -i "c97445f45cdef9f0d3e05e1e585fc297235b82b5be8ff3efca67c59852018192\|8e722de3125bddb05580164bfe20b8b432216a62926c57502ceede31c47816edd1e89769124179d0b695106428815065\|1b9fa3e518d683c6b65763694ac8efbaec6fab44f2276171a42726507dd08add4c3b3f4c1ebc5b1222ddba077f72943b24c3edfa0f85fe24d0c8c01591f0be6f63"

After all the Juniper fiasco, I wondered how people could check if a binary contained an implementation of Dual EC, and worse, if it contained Dual EC with the NSA's values for P and Q.

The easiest thing I could think of is to use strings to check if the binary contains the hex values of some Dual EC parameters:

strings your_binary | grep -C5 -i `python -c "print '%x' % 115792089210356248762697446949407573530086143415290314195533631308867097853951"`

This is the value of the prime p of the curve P-256. Other curves can be used for Dual EC though, so you should also check for the curve P-384:

strings your_binary | grep -C5 -i `python -c "print '%x' % 39402006196394479212279040100143613805079739270465446667948293404245721771496870329047266088258938001861606973112319"`

and the curve P-521:

strings your_binary | grep -C5 -i `python -c "print '%x' % 6864797660130609714981900799081393217269435300143305409394463459185543183397656052122559640661454554977296311391480858037121987999716643812574028291115057151 "`

I checked the binaries of ScreenOS (taken from here) and they contained these three curves' parameters. But this doesn't mean anything, just that these curves are stored, maybe used, maybe used for Dual EC...

To check if it uses the NSA's P and Q, you should grep for P and Q x coordinates from the same NIST paper.

nist_paper_dual_ec

This looks for the x coordinates of the point P for each curve. This is not that informative, since these are just the curves' standard generator points.

strings your_binary | grep -C5 -i "6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296\|aa87ca22be8b05378eb1c71ef320ad746e1d3b628ba79b9859f741e082542a385502f25dbf55296c3a545e3872760ab7\|c6858e06b70404e9cd9e3ecb662395b4429c648139053fb521f828af606b4d3dbaa14b5e77efe75928fe1dc127a2ffa8de3348b3c1856a429bf97e7e31c2e5bd66"

Testing the ScreenOS binaries, I get all the matches. This means that the parameters for P-256 and maybe Dual EC are indeed stored in the binaries.

dual ec match

Weirdly, testing for the Qs I don't get any match. So Dual EC or not?

strings your_binary | grep -C5 -i "c97445f45cdef9f0d3e05e1e585fc297235b82b5be8ff3efca67c59852018192\|8e722de3125bddb05580164bfe20b8b432216a62926c57502ceede31c47816edd1e89769124179d0b695106428815065\|1b9fa3e518d683c6b65763694ac8efbaec6fab44f2276171a42726507dd08add4c3b3f4c1ebc5b1222ddba077f72943b24c3edfa0f85fe24d0c8c01591f0be6f63"

Re-reading CVE-2015-7765:

The Dual_EC_DRBG 'Q' parameter was replaced with 9585320EEAF81044F20D55030A035B11BECE81C785E6C933E4A8A131F6578107 and the secondary ANSI X.9.31 PRNG was broken, allowing raw Dual_EC output to be exposed to the network. Please see this blog post for more information.

Diffing the vulnerable and patched binaries, I see that only the P-256 curve's \(Q\) was modified from Juniper's values; the other curves were left intact. I guess this means that only the P-256 curve was being used in Dual EC.

If you know how Dual EC works (if you don't, check my video), you know that to establish a backdoor in it you need to generate \(P\) and \(Q\) accordingly. So changing the value of \(Q\) with no correlation to \(P\) is not going to work; worse, it could be that \(Q\) is too "close" to \(P\) and thus the secret \(d\) linking them could be easily found (\(P = dQ\)).

Now one clever way to generate a secure \(Q\) with a strong value \(d\) that only you would know is to choose a large and random \(d\) and calculate its inverse \(d^{-1} \pmod{ord_{E}} \). You have your \(Q\) and your \(d\)!

\[ d^{-1} P = Q \]
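In plain Python, picking such a \(d\) and computing its inverse modulo the order of P-256 looks like this (just a sketch; the point multiplication \(Q = d^{-1}P\) itself needs an elliptic curve library, like the Sage snippet below):

import secrets

# order of the P-256 base point (the qq value used in the Sage script below)
order = 115792089210356248762697446949407573529996955224135760342422259061068512044369

d = secrets.randbelow(order - 1) + 1
d_inv = pow(d, -1, order)  # Q would then be d_inv * P
assert (d * d_inv) % order == 1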

Bonus: here's a small script that attempts to find \(d\) under the hypothesis that \(d\) is small (the fastest way to compute an elliptic curve discrete log is Pollard's rho algorithm):

p256 = 115792089210356248762697446949407573530086143415290314195533631308867097853951
a256 = p256 - 3
b256 =  41058363725152142129326129780047268409114441015993725554835256314039467401291

## base point values
gx = 48439561293906451759052585252797914202762949526041747995844080717082404635286
gy = 36134250956749795798585127919587881956611106672985015071877198253568414405109

## order of the curve
qq = 115792089210356248762697446949407573529996955224135760342422259061068512044369

# init curve
FF = GF(p256)
EE = EllipticCurve([FF(a256), FF(b256)]) 

# define the base point
G = EE(FF(gx), FF(gy)) 

# P and Q
P = EE(FF(0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296), FF(0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5))

# What is Q_y ?
fakeQ_x = FF(0x9585320EEAF81044F20D55030A035B11BECE81C785E6C933E4A8A131F6578107)
fakeQ = EE.lift_x(fakeQ_x)

print discrete_log(P, fakeQ, fakeQ.order(), operation='+')

The lift_x function allows me to get back the \(y\) coordinate of the new \(Q\):

EE.lift_x(fakeQ_x)
(67629950588023933528541229604710117449302072530149437760903126201748084457735 : 36302909024827150197051335911751453929694646558289630356143881318153389358554 : 1)
comment on this story