Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.
I never understood why Firefox doesn't display a warning when visiting non-HTTPS websites. Maybe it's too soon: there are still too many non-TLS servers out there, and users would learn to ignore the warning after a while?
A few weeks ago I wrote about testing RSA public keys from the most recent Alexa Top 1 Million domains handshake log that you can get on scans.io.
Most public exponents \(e\) were small, and so no small private key attack (Boneh and Durfee) should have been possible. But I didn't explain why.
Why
The private exponent \(d\) is the inverse of \(e\): that means that \(e \cdot d = 1 \pmod{\varphi(N)}\).
\(\varphi(N)\) is a number almost as big as \(N\), since \(\varphi(N) = (p-1)(q-1)\) in our case. For our public exponent \(e\) multiplied by something to be equal to \(1\), the product has to wrap around the group \(\mathbb{Z}_{\varphi(N)}\) at least once.
Or put differently: since \(e > 1\), the product \(e \cdot d\) has to grow past \(\varphi(N)\) before it can reduce to \(1 \pmod{\varphi(N)}\).
This quick test shows that with a small public exponent (like 3, or even 10,000,000), you need to multiply it by a number of more than 1,000 bits to reach the end of the group and possibly end up with a \(1\).
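Here's the gist of that test as a Python sketch (the original snippet was in Sage; this one assumes Python 3.8+ for pow(e, -1, phi) and sympy for prime generation):

from math import gcd
from sympy import randprime

def rsa_phi(bits, e):
    # draw two random primes until e is invertible mod phi(N)
    while True:
        p = randprime(2**(bits - 1), 2**bits)
        q = randprime(2**(bits - 1), 2**bits)
        phi = (p - 1) * (q - 1)
        if gcd(e, phi) == 1:
            return phi

for e in (3, 65537, 10000019):           # small public exponents (the last one is ~10,000,000)
    phi = rsa_phi(512, e)                # 512-bit primes -> ~1024-bit modulus
    d = pow(e, -1, phi)                  # the private exponent, i.e. e^-1 mod phi(N)
    print(e, d.bit_length(), phi.bit_length())
# d ends up with roughly as many bits as phi(N) itself, i.e. more than 1,000 bits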
All of this is interesting because in 2000, Boneh and Durfee found out that if the private exponent \(d\) was smaller than a fraction of the modulus \(N\) (the exact bound is \(d < N^{0.292}\)), then the private exponent could be recovered in polynomial time via a lattice attack. What does it mean for the private exponent to be "small" compared to the modulus? Let's get some numbers to get an idea:
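Here's a rough way to get those numbers (just computing the bit size of the \(N^{0.292}\) bound in Python):

# bit size of the Boneh-Durfee bound N^0.292 for common modulus sizes
for bits in (1024, 2048, 4096):
    print(bits, "bit modulus -> d must be smaller than ~", int(bits * 0.292), "bits")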
That's right, for a 1024-bit modulus that means that the private exponent \(d\) has to be smaller than 300 bits. This is never going to happen if the public exponent used is too small (note that this doesn't necessarily mean that you should use a small public exponent).
Moar testing
So after testing the University of Michigan · Alexa Top 1 Million HTTPS Handshakes, I decided to tackle a much, much larger logfile: the University of Michigan · Full IPv4 HTTPS Handshakes. The first one is 6.3GB uncompressed, the second is 279.93GB. Quite a difference! So the first thing to do was to parse all the public keys in search of exponents greater than 1,000,000 (an arbitrary bound that I could have set higher but, as the results showed, was enough).
I only got 10 public exponents with values higher than this bound! And they were all still relatively small (633951833, 16777259, 1065315695, 2102467769, 41777459, 1073741953, 4294967297, 297612713, 603394037, 171529867).
Here's the code I used to parse the log file:
import sys, json, base64

# Python 2 script: extract RSA public keys (N, e) with e >= 1,000,000
# from the scans.io JSON log given as first argument
with open(sys.argv[1]) as ff:
    for line in ff:
        lined = json.loads(line)
        # skip entries without a parsed server certificate
        if ('tls' not in lined["data"]
                or 'server_certificates' not in lined["data"]["tls"]
                or 'parsed' not in lined["data"]["tls"]["server_certificates"]["certificate"]):
            continue
        server_certificate = lined["data"]["tls"]["server_certificates"]["certificate"]["parsed"]
        public_key = server_certificate["subject_key_info"]
        signature_algorithm = public_key["key_algorithm"]["name"]
        if signature_algorithm == "RSA":
            modulus = base64.b64decode(public_key["rsa_public_key"]["modulus"])
            e = public_key["rsa_public_key"]["exponent"]
            # ignoring small exponents
            if e < 1000000:
                continue
            N = int(modulus.encode('hex'), 16)
            print "[", N, ",", e, "]"
This is the 3rd post of a series of blogposts on RWC2016. Find the notes from day 1 here.
I'm a bit washed out after three long days of talks. But I'm also sad that this comes to an end :( It was amazing seeing and meeting so many of these huge stars in cryptography. I definitely felt like I was part of something big. Dan Boneh seems like a genuinely good guy and the organization was top notch (and the sandwiches amazing).
SGX morning
The morning was filled with talks on SGX, the new Intel technology that could allow for secure VMMs. I didn't really understand these talks as I didn't really know what SGX was. White papers, manuals, blogposts and everything else are here.
10:20am - Practical Attacks on Real World Cryptographic Implementations
practical attacks found as well in TLS on JSSE, Bouncy Castle, ...
an exception occurs if the padding is wrong; it's caught and the program generates a random value instead. But the exception handling consumes about 20 microseconds! -> timing attacks (case of JSSE CVE-2014-0411)
invalid curve attack
send invalid point to the server (of small order)
server doesn't check if the point is on the EC
attacker gets information on the discrete log modulo the small order
repeat with different small orders until you have enough residues to do a large CRT (toy sketch of that step after these notes)
they analyzed 8 libraries, found 2 vulnerable
pretty serious attack -> allows you to extract server private keys really easily
works on ECDH, not on ECDHE (but in practice, it depends how long they keep the ephemeral key)
HSM scenarios: keys never leave the HSM
they are good candidates for these kind of "oracle" attacks
they tested and broke Utimaco HSMs (CVE-2015-6924)
<100 queries to get a key
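To make the CRT step concrete, here's a toy Python sketch (not from the talk): each invalid small-order point leaks the secret scalar modulo that small order, and once the product of the orders is large enough you recombine the residues.

from math import prod  # Python 3.8+

def crt(residues, moduli):
    # combine x = r_i mod n_i (pairwise coprime n_i) into x mod prod(n_i)
    M = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        Mi = M // n
        x += r * Mi * pow(Mi, -1, n)   # Mi is invertible mod n since gcd(Mi, n) = 1
    return x % M

# toy example: a "secret scalar" recovered from its residues modulo small orders
secret = 0x1337
small_orders = [3, 5, 7, 11, 13]
leaked = [secret % n for n in small_orders]
print(hex(crt(leaked, small_orders)))  # 0x1337, since 3*5*7*11*13 > secret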
11:10am - On Deploying Property-Preserving Encryption
tl;dw: what it's like to deploy SSE or PPE, and why it's not dead
lots of "proxy" companies that translate your queries to do encrypted DB (EDB) without re-teaching stuff to people (there was a good slide on that which I missed, if someone has it)
searchable symmetric encryption (SSE): you just replace words by tokens
threat model is different, clients don't care if they hold both the indexes and the keys
two kinds of order preserving encryption (OPE):
stateless OPE (deterministic -> unclear security)
interactive OPE (stateful)
talks about how hard it is to deploy a stateful scheme
many leakage-abuse attacks on PPE
a crypto researcher on PPE: "it's over!", but the cost and legacy are such that PPE will still be used in the future
I think the point is that there is nothing practical that is better than PPE, so rather than using non-encrypted DBs... PPE will still hold.
11:30am - Inference Attacks on Property-Preserving Encrypted Databases
tl;dw: PPE is dead, read the paper
analyses have been done, so it is known what leaks, and cryptanalysis has been done from that information
real data tends to be "non-uniform" and "low entropy", not like assumptions of security proofs
inference attacks:
frequency analysis
sorting attack
Lp-optimization
cumulative attacks
frequency analysis: come on we all know what that is
Lp-optimization: a better way of mapping the frequency of the auxiliary data and the ciphertexts
sorting attacks: just sort the ciphertexts and your auxiliary data, then map them (toy sketch after these notes)
this fails if there are missing items in the ciphertext set
cumulative attacks improve on this
check page 6 of the paper for explanations on these attacks. All I was expecting from this talk was an explanation of the improvements (Lp and cumulative) but they just flew through them (fortunately they seem to be pretty easy to understand in the paper). Other than that, nothing new that you can't read in their paper.
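For what it's worth, here's a toy Python version of the sorting attack (my own sketch, not the paper's code): with a deterministic order-preserving scheme, if every possible plaintext shows up in the encrypted column, sorting both sides gives you the mapping for free.

# auxiliary knowledge: the possible plaintext values, in order (e.g. ages in a census column)
plaintext_space = [18, 19, 20, 21]
# deterministic OPE ciphertexts observed in the encrypted database column
ciphertexts = [9051, 102, 7700, 3221]

mapping = dict(zip(sorted(ciphertexts), plaintext_space))
print(mapping)   # {102: 18, 3221: 19, 7700: 20, 9051: 21}
# as noted above, this breaks down as soon as some plaintexts are missing
# from the column; that's what the cumulative attack improves on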
2:00pm - Cache Attacks on the Cloud
tl;dw: cache attacks can work, maybe
hypervisor (VMM) ensures isolation through virtualization
VMs might feel each other's load on some low-level resources -> potential side channels
covert channel in the cloud?
LLC is cross core (L3 cache)
cache attacks
prime+probe
priming: find an eviction set: memory lines that, when loaded into the L3 cache, will occupy the cache line we want to monitor
probing: when trying to access the memory line again, if the access is fast, that means no one has touched that L3 cache line
to get crypto keys from that you need to detect key-dependent cache accesses
for RSA check timing and number of times the cache is accessed -> multiplications
for AES detect the lookup table access in the last round (??)
cross-VM cache attacks are realistic?
attack 1 (can't remember) (hu)
co-location: detect if they are on the same machine (dropbox) [RTS09]
they tried the same on AWS EC2, too hard now (hu)
new technique: LLC Cache accesses (hu)
new technique: memory bus contention [xww15, vzrs15]
once they knew they were on the same machine through colocation what to target?
libgcrypt's RSA uses CRT, sliding-window exponentiation and message blinding (see the end of my paper for an explanation of message blinding)
conclusion:
cache attacks in public cloud work
but still noise and colocation problem
open problem: countermeasures?
what about non-crypto code?
Why didn't they talk of flush+reload and others?
2:30pm - Practicing Oblivious Access on Cloud Storage: the Gap, the Fallacy, and the New Way Forward
Oblivious RAM, he doesn't want to explain how it works
how close is ORAM to practice?
implemented 4 different ORAM systems from the literature and got some results from them
CURIOUS, what they built from this research, is open-source. It's made in Java... such sadness.
Didn't get much from this talk. I know this is "real world" crypto, but a better intro on ORAM would have been nicer, and also where ORAM stands among all the solutions we already have (fortunately the previous talk had a slide on that already). Also, I had only read about it in FHE papers/presentations, but there was no mention of FHE in this talk :( well... no mention of FHE at all in this convention. Such sadness.
From their paper:
An Oblivious RAM scheme is a trusted mechanism on a client, which helps an application or the user access the untrusted cloud storage. For each read or write operation the user wants to perform on her cloud-side data, the mechanism converts it into a sequence of operations executed by the storage server. The design of the ORAM ensures that for any two sequences of requests (of the same length), the distributions of the resulting sequences of operations are indistinguishable to the cloud storage. Existing ORAM schemes typically fall into one of the following categories: (1) layered (also called hierarchical), (2) partition-based, (3) tree-based; and (4) large-message ORAMs.
2:50pm - Replacing Weary Crypto: Upgrading the I2P network with stronger primitives
tl;dw: the i2p protocol
i2p is like Tor? both started around 2003, both using onion routing, both vulnerable to traffic confirmation attacks, etc...
but Tor is ~centralized, i2p is ~decentralized
tor uses an asymmetric design, i2p a symmetric one (woot?)
in i2p traffic works in circles (responses come back over another path)
so twice as many nodes are exposed
but you can only see one direction
this difference with Tor hasn't really been researched
...
4:20pm - New developments in BREACH
tl;dw: BREACH is back
But first, what is BREACH/CRIME?
This talk was a surprise talk, apparently to replace a canceled one?
original BREACH attack introduced at blackhat USA 2013
compression/encryption attack (similar to CRIME)
CRIME was attacking the request, BREACH attacks the response
based on the fact that tls leaks the length of the plaintext (toy sketch of the leak after these notes)
the https server compresses responses with gzip
inject content into the victim's traffic when they use http
the content injected is a script that queries the https server
the attack is still not mitigated, but now we use block ciphers so it's OK
extending the BREACH attack:
attack noisy endpoints
attack block ciphers
optimized
no papers?
aes-128 is vulnerable
mitigation proposed:
google is introducing some randomness in their responses (not really working)
facebook is trying to generate a mask XORed to the CSRF token (but CSRF tokens are not the only secrets)
they will demo that at blackhat asia 2016 in Singapore
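If you have never seen the compression leak in action, here's a toy Python sketch (my own, nothing to do with the optimized attack they presented): a page that reflects an attacker-chosen string and also contains a secret compresses better when the guess matches the secret.

import zlib

SECRET = "csrf_token=d8f3a91c77"   # hypothetical secret embedded in the response

def response_length(reflected):
    body = ("<html>you searched for: " + reflected +
            " <form><input name='csrf' value='" + SECRET + "'></form></html>")
    # the attacker only observes the length of the compressed (then encrypted) response
    return len(zlib.compress(body.encode()))

for guess in ("csrf_token=0123456789",   # completely wrong
              "csrf_token=d8f3456789",   # right prefix
              "csrf_token=d8f3a91c77"):  # right guess
    print(guess, response_length(guess))
# lengths tend to shrink as the guess shares a longer prefix with the secret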
4:40pm - Lucky Microseconds: A Timing Attack on Amazon's s2n Implementation of TLS
This is the 2nd post of a series of blogposts on RWC2016. Find the notes from day 1 here.
disclaimer: I realize that I am writing notes about talks from people who are currently surrounding me. I don't want to alienate anyone but I also want to write what I thought about the talks, so please don't feel offended and feel free to buy me a beer if you don't like what I'm writing.
And here's another day of RWC! This one was a particularly long one, with a morning full of blockchain talks that I avoided and an afternoon of extremely good talks, followed by a suicidal TLS marathon.
09:30 - TLS 1.3: Real-World Design Constraints
tl;dw: hello tls 1.3
DJB recently said at the last CCC:
"With all the current crypto talks out there you get the idea that crypto has problems. crypto has massive usability problems, has performance problems, has pitfalls for implementers, has crazy complexity in implementation, stupid standards, millions of lines of unauditable code, and then all of these problems are combined into a grand unified clusterfuck called Transport Layer Security.
For such a complex protocol I was expecting the RWC speakers to make some effort. But that first talk was not clear (neither were the other tls talks), the slides were tiny, the speaker spoke too fast for my non-native ears, etc... Also, nothing you can't learn if you already read this blogpost.
10:00 - Hawk: Privacy-Preserving Blockchain and Smart Contracts
tl;dw: how to build smart contracts using the blockchain
first slide is a picture of the market cap of bitcoin...
lots of companies are doing this block chain stuff:
DAPS. No idea what this is, but he's talking about it.
Dapps are based on a token-economy utilizing a block chain to incentivize development and adoption.
bitcoin privacy guarantees are abysmal because of the consensus on the block chain.
contracts done through bitcoin are completely public
their solution: Hawk (between zerocash and ethereum)
uses zero knowledge proofs to prove that functions are computed correctly
blablabla, lots of cool tech, cool crypto keywords, etc.
As for me, this tweet sums up my interest in the subject.
So instead of playing games on my mac (see below (who plays games on a mac anyway?)), I took off to visit the Stanford campus and sit in one of their beautiful libraries.
12:00 - Lightning talks.
I'm back after successfully avoiding the blockchain morning. Lightning talks are mini talks of 1 to 3 minutes where slides are forbidden. Most were just people hiring or saying random stuff. Not much to see here, but a good way to get into the talking thing it seems.
In the middle of them was Tancrede Lepoint asking for comments on his recent Million Dollar Curve paper. Some people quickly commented without really understanding what it was.
(Sorry Tanja :D). Overall, the idea of the paper is how to generate a safe curve that the public can trust. They use the Blum Blum Shub PRNG to generate the parameters of the curve, iterating the process until it passes a list of checks (taken from SafeCurves), and seeding it with several drawings from lotteries around the world in a particular timeframe (I think they use a commitment for the time frame) so that people can see that these numbers were not chosen in a certain way (and would thus be NUMS).
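For the curious, here's a toy Python sketch of the Blum Blum Shub part (my own toy, not the paper's actual procedure or parameter sizes): square modulo a product of two primes that are 3 mod 4, and output one bit per squaring; the lottery draws are what seeds it.

from sympy import randprime

def blum_prime(bits):
    # draw primes until we get one that is 3 mod 4, as BBS requires
    while True:
        p = randprime(2**(bits - 1), 2**bits)
        if p % 4 == 3:
            return p

p, q = blum_prime(256), blum_prime(256)   # toy sizes
n = p * q
x = 0xC0FFEE                              # stand-in seed; this is where the lottery draws go

bits = []
for _ in range(128):
    x = pow(x, 2, n)                      # x_{i+1} = x_i^2 mod n
    bits.append(x & 1)                    # output the least significant bit
print("".join(map(str, bits)))            # feed these bits into the curve-generation checks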
14:00 - An Update on the Backdoor in Juniper's ScreenOS
tl;dw: Juniper
Slides are here. The talk was entertaining and really well communicated. But there was nothing majorly new that you can't already read in my blogpost here.
it happened around Christmas, lots of security people have nothing to do around this period of the year and so the Juniper code was reversed really quickly (haha).
the password that looks like a format string was already an idea taken straight from a phrack 2009 issue (0x42)
Developing a Trojaned Firmware for Juniper ScreenOS Platforms
unfiltered Dual EC outputs (the 30 bytes of output and 2 other bytes of a following Dual EC output) from an IKE nonce
but the Key Exchange is done before generating the nonce? They're still working on verifying this on real hardware (they will publish a paper later)
in earlier versions of ScreenOS the nonces used to be 20 bytes, the RNG would output 20 bytes only
When they introduced Dual EC in their code (Juniper), they also changed the nonce length from 20 bytes to 32 bytes (which is perfect for easy use of the Dual EC backdoor). Juniper did that! Not the hackers.
they are aware, through their disclosure, that it is "exploitable"
the new patch (17 dec 2015) removed the SSH backdoor and restored the Dual EC point.
A really good question from Tom Ritter: "how many bytes do you need to do the attack".
Answer: the truncated output of Dual EC is 30 bytes (instead of 32), so you need to bruteforce the missing 2 bytes. To narrow the search space, 2 bytes from the next output are practical and enough. So ideally 30 bytes and 2 bytes from a following output allow for easy use of the Dual EC backdoor.
A smash and grab raid or smash and grab attack (or simply a smash and grab) is a particular form of burglary. The distinctive characteristics of a smash and grab are the elements of speed and surprise. A smash and grab involves smashing a barrier, usually a display window in a shop or a showcase, grabbing valuables, and then making a quick getaway, without concern for setting off alarms or creating noise.
The Ashley Madison breach is interesting because they used bcrypt with a salt and a high cost parameter, which is better than industry norms for protecting passwords.
he cracked 4000 passwords from the leaks anyway
millions of password were cracked a few weeks after
He has done some research and has come up with a response: PASS, password hardening and typo correctors
the hmac with the private key transforms the offline attack into an online attack, because the attacker now needs to query the PRF service repeatedly.
"the facebook approach" is to use a queryable "PRF service" for the hmac; it makes it easier to detect attacks.
but several drawbacks:
1) online attackers can instead record the hashes (mostly because of this legacy code)
2) the PRF is not called with per-user granularity (it's the same for all users) -> hard to implement fine-grained rate limiting (throttling/rate-limiting attempts; you are only able to detect global attacks)
3) no support for periodic key rotations -> if they detect an attack, they now need to add new layers to their password hashing onion
PASS uses a PRF Service, same as facebook but also:
1) blinding (PRF can't see the password)
2) graceful key rotation
3) per-user monitoring
the blinding is a hash raised to a power; unblinding is done by taking the "square root" of that power (but maybe he simplified and it's really an inverse modulo something? toy sketch of that version after these notes)
a tweak t is sent as well, basically the user id; it doesn't have to be blinded, so they invented a new concept of "partially oblivious PRF" (PO-PRF)
the tweak and the blinded password are sent to the PRF, which uses a bilinear pairing construction to do the PO-PRF thingy (this is a new use case for bilinear pairings apparently).
it's easy to implement, completely transparent to users, highly scalable.
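Here's a toy Python sketch of the blinding trick as I understood it (a plain blinded exponent-PRF over a small group, not their pairing-based partially oblivious PRF, and with toy parameters):

import hashlib, secrets

# toy group: quadratic residues mod the safe prime p = 2q + 1 (toy sizes only!)
p, q = 1019, 509

def hash_to_group(pw):
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % p
    return pow(h, 2, p)                  # squaring lands in the order-q subgroup

SERVER_KEY = 123                         # the PRF service's secret exponent k (hypothetical)

def prf_service(blinded):
    # the service only ever sees H(pw)^r, never the password hash itself
    return pow(blinded, SERVER_KEY, p)

def harden(pw):
    r = secrets.randbelow(q - 1) + 1     # blinding exponent, invertible mod q
    blinded = pow(hash_to_group(pw), r, p)
    answer = prf_service(blinded)        # = H(pw)^(r*k)
    return pow(answer, pow(r, -1, q), p) # unblind: undo r in the exponent -> H(pw)^k

assert harden(b"hunter2") == pow(hash_to_group(b"hunter2"), SERVER_KEY, p)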
typo correctors: the idea of a password corrector for common typos (e.g., a capitalized first letter)
facebook does this, vanguard does this...
intuition tells you it's bad: an attacker tries a password, and you help him find it if it's almost correct.
they instrumented dropbox for a period of 24 hours (for all users) to implement this thing
they took problems[:3] = [accidental caps lock key, not hitting the shift key to capitalize the first letter, extra unwanted character] (toy sketch of these after these notes)
they corrected 9% of failed password submissions
minimal security impact, according to their research "virtually no security loss"
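A toy Python sketch of what such correctors look like (my own guess at the three correctors, wrapped around whatever the normal password check is):

def caps_lock(pw):       # accidental caps lock: every letter's case is flipped
    return pw.swapcase()

def first_letter(pw):    # missed (or extra) shift on the first character
    return pw[:1].swapcase() + pw[1:]

def extra_char(pw):      # one unwanted trailing character
    return pw[:-1]

CORRECTORS = (caps_lock, first_letter, extra_char)

def check_with_typos(submitted, stored_check):
    # stored_check(candidate) -> bool is the normal verification (e.g. a bcrypt compare);
    # on failure we retry it on a few corrected variants of the submission
    return stored_check(submitted) or any(stored_check(fix(submitted)) for fix in CORRECTORS)

# toy check against a plaintext password, just to exercise the correctors:
print(check_with_typos("sECRET42", lambda c: c == "Secret42"))   # True: caps lock was on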
memory-intensive computation: make a password hashing function such that you need a lot of memory to use it -> the ASIC advantage vanishes (if someone wants to explain to me how that works, feel free).
winner: Argon2
they wanted the function to be as simple as possible (to simplify analysis)
you need the previous block to do the next computation (badly parallelizable) and a reference block (takes memory) (toy sketch of that idea after these notes)
add some parallelism... there was another slide but I have no image and no comment :(
this concept of slowing down attackers has other applications -> egalitarian computing
for ex: in bitcoin they wanted every user to be able to mine on his laptop, but now there are pools taking up more than 50% (danger: 51% attack)
can use it for client puzzles for denial of service protection.
egalitarian computing -> ensures that attacker and defender are the same (no advantage using special computers)
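Here's a toy Python sketch of the "previous block + reference block" idea (this is not Argon2, just the general shape of a memory-hard fill):

import hashlib

def toy_memory_hard(password, salt, n_blocks=1 << 16):
    # fill memory with blocks; each block depends on the previous one and on a
    # pseudo-randomly chosen earlier block, so the fill is hard to parallelize
    # and you can't throw the memory away
    blocks = [hashlib.blake2b(password + salt).digest()]
    for i in range(1, n_blocks):
        ref = int.from_bytes(blocks[i - 1][:8], "big") % i   # data-dependent reference block
        blocks.append(hashlib.blake2b(blocks[i - 1] + blocks[ref]).digest())
    return blocks[-1]

print(toy_memory_hard(b"hunter2", b"salt").hex())   # ~4MB of state for 2^16 64-byte blocks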
15:10 - Cryptographic pitfalls
tl;dw: 5 stories about cute and subtle crypto fails
the speaker is explicit about his non-involvement with Juniper (haha)
he's narrating the tales of previously disclosed vulns, 5 case studies, mostly caused by a "following best practice" attitude (not that it's bad, but it's usually not enough).
concept of "reusing components rather than designing new ones"
vpn uses dsa/dh for ipsec
over prime fields
pkcs#3 defines something
PKIX says something else, subtle difference
3)
concept of "using external events to seed your randomness pool", gotta get your entropy from somewhere!
entropy was really bad from the start because they would boot the device right after the production line, with nothing to build entropy from (the same thing happened in the carjacking talk at blackhat us2015)
so the keys were almost all the same because of that; juniper fixed it after a customer report (the customer changed his device, but he didn't get an error that the key had changed)
4)
concept of "randomness in RSA factors"
the government of some country uses smartcards
then they wanted to use cheaper smartcards, but re-used the same code
the new RNG was bad
5)
everything is blanked out (he can't really talk about it)
they used CRC for integrity (instead of a MAC/signature)
from the audience, someone from Netscape speaks out: "yup, we forgot that things have to be random as well" (cf. the predictable Netscape seed)
16:00 - TLS at the scale of Facebook
tl;dw: how they deployed https, wasn't easy
Timeline of the https deployment:
In 2010: facebook uses https almost only for login and payments
during a hackathon they tried to change every http url to https. It turns out it's not that simple.
not too long after, Firesheep happened; then Tunisia's only ISP started doing script injection into non-https traffic. They had to do something
In 2011, they tried mixing secure and insecure. Then they tried to make ALL apps support https (ouch!)
In 2012, they wanted https only (no https opt-in)
In 2013, https is the default. At the end of the year they finally succeed: https-only
(And to think that not so long ago it was normal to log in without a secure connection... damn, things have changed)
Edge Networks: use of CDNs like Akamai or cloudflare, or spreading your own servers around the world
they have a client-side TLS (are they talking about mobile?) built on top of proxygen. This way they can ship improvements to TLS before the platform does, blablabla, there was a nice slide on that.
they really want 0-RTT, but tls 1.3 is not here, so they modified QUIC crypto to make it happen on top of TCP: it's called Zero.
certificate pinning: they block MITM by disallowing locally-installed CAs on android; on iOS they cannot.
Server Name Indication (SNI) is an extension to the TLS computer networking protocol by which a client indicates which hostname it is attempting to connect to at the start of the handshaking process. This allows a server to present multiple certificates on the same IP address and TCP port number and hence allows multiple secure (HTTPS) websites (or any other Service over TLS) to be served off the same IP address without requiring all those sites to use the same certificate.
stats:
lots of session resumption by ticket -> this is good
low number of handshakes -> that means they store a lot of session tickets!
very low resumption by session ID (why is this a good thing?)
they haven't turned off RC4 yet!
someone in the audience tells him about downgrade attacks, ouch!
the referrer field in the http header is empty when you go to another website from an https page! Is that important... no?
it's easy for a simple website to go https (let's encrypt, ...), but for a big company, phew, it's hard!
there are still new feature phones that can't do tls (do they care? meh)
16:30 - No More Downgrades: Protecting TLS from Legacy Crypto
at the end of the protocol there is a finished message that includes a MAC over the whole negotiation:
but this is already too late: the attacker can forge that MAC as well at this point
this is because the downgrade protection mechanism (this mac at the end) itself depends on downgradeable parameters (the idea behind logjam)
in tls 1.3 they use a signature instead of the mac
but you sign a hash function! -> SLOTH (which was released yesterday)
Didn't understand much, but I know that all the answers are in this paper. So stay tuned for a blogpost on the subject, or just read the freaking paper!
tls 1.3 includes a version downgrade resilience system:
the server chooses the version
the server has to choose the highest common version
...
only solution they came up with: put all the versions supported in the server nonce. This nonce value (server.random to be exact) is in all tls versions and is signed before the key exchange happens.
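Something like this toy Python sketch of that idea (my own toy encoding, not the actual TLS 1.3 mechanism): the server commits to what it supports inside the server random, which is signed in every version, so a downgraded client can notice.

import os

TLS10, TLS13 = 0x0301, 0x0304

def make_server_random(max_supported):
    # toy encoding: commit to the highest supported version inside the 32-byte
    # server random, which gets signed in every protocol version
    return os.urandom(30) + max_supported.to_bytes(2, "big")

def client_check(server_random, negotiated):
    committed = int.from_bytes(server_random[-2:], "big")
    return committed <= negotiated        # False means someone downgraded us

nonce = make_server_random(TLS13)
print(client_check(nonce, TLS13))         # True: nothing fishy
print(client_check(nonce, TLS10))         # False: a MITM stripped TLS 1.3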
agreement + confidentiality are the fundamental requirements for a key exchange protocol
OPTLS is a key exchange that they want tls 1.3 to use
The OPTLS design provides the basis for the handshake modes specified in the current TLS 1.3 draft including 0-RTT, 1-RTT variants, and PSK modes
I have to admit I was way too tired at that point to follow anything. Everything looked like David Chaum's presentation. So we'll skip the last talk in this blogpost.
not enough post quantum (and people are scared of that)
need to remove rsa1024
AES-CTR is malleable, and the lack of a per-hop MAC allows tagging attacks if the first and third relays are evil -> woot?
Here's a blogpost from Tom Ritter about tagging attacks. The idea: the first node XORs some data into the ciphertext, and the third node sees the modified data in the clear (if the data is not going through https). So with two evil nodes as the first and last, you can know who is visiting what website (traffic correlation). (Toy sketch below.)
There was also something about doing it with the sha1, and something about adding a MAC between each relay. But I missed that part, if someone can fill in the blanks for me?
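Here's a toy Python sketch of the malleability itself (a plain XOR keystream as a stand-in for AES-CTR, nothing to do with Tor's actual relay crypto):

import os

def xor_stream(keystream, data):
    # stand-in for AES-CTR: ciphertext = plaintext XOR keystream
    return bytes(a ^ b for a, b in zip(data, keystream))

keystream = os.urandom(32)
ct = xor_stream(keystream, b"GET http://example.com/ HTTP/1.1")

# the first relay XORs a tag into the ciphertext without knowing the key...
tag = bytes(16) + b"\x01" * 8 + bytes(8)
tampered = bytes(c ^ t for c, t in zip(ct, tag))

# ...and the exit sees the same pattern show up in the plaintext
print(xor_stream(keystream, tampered))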
they want to use AEZ in the future (rogaway)? or HHFHFH? (djb)
This is scary as many have stated. Djb said "crypto should be boring" (at least I heard he said that), and he's totally right. Or at least double encrypt (AES(AEZ(m)))
AEZ is an authenticated cipher (think AES-GCM or chacha20-poly1305) that is part of the CAESAR competition
HHFHFH is ...? No idea, if someone knows what Nick is talking about?
tl;dw: how to do an anonymous survey product with a nice UX (paper is here)
problems of surveys:
you want authenticity (only authorized users, only one vote per person)
anonymity (untraceable responses)
surveymonkey (6% of the surveys online):
they don't care about double votes
special URLs to trace responses to single users/groups
anyone who infiltrates the system can get that info
they do everything wrong
Anonize overview
1) you create a public key
2) create survey, unique URL for everyone
3) you fill out something, you get a QR code
what you submit is a [response, token] with the token a ZK proof for... something.
they will publish API, and it's artistic
The talk was mostly spent on showing how beautiful the UX was. I would have preferred something clearer on how the protocol really works (but maybe others understood better than me...)
11:10 - Cryptography in AllJoyn, an Open Source Framework for IoT
tl;dw: the key exchange protocol behind AllJoyn, and the security of devices that use the AllJoyn api/interface...
What's AllJoyn? Something that you should use in your IoT stuff apparently:
AllJoyn is an open source software framework that makes it easy for devices and apps to discover and communicate with each other. Developers can write applications for interoperability regardless of transport layer, manufacturer, and without the need for Internet access. The software has been and will continue to be openly available for developers to download, and runs on popular platforms such as Linux and Linux-based Android, iOS, and Windows, including many other lightweight real-time operating systems.
they want security to be the same whatever they use (tcp, udp, ip, bluetooth, etc.) so they created their own TLS-like protocol with far fewer options
Verification engineers can use SAW to prove that a program implements its specification.
Security analysts can have SAW generate models identifying constraints on program control flow to identify inputs that can reach potentially dangerous parts of a program.
Cryptographers can have SAW generate models from production cryptographic code for import and use within Cryptol.
it takes 10-100 minutes to verify a crypto primitive
if you have a high-level formulation of your algorithm, why not make it write the code?
12:00 - The first Levchin prize for contributions to real-world cryptography
tl;dw: dude with a lot of money decides to give some to influential cryptographers every year, also gives them his name as a reward.
The Levchin prize honors significant contributions to real-world cryptography. The award celebrates recent advances that have had a major impact on the practice of cryptography and its use in real-world systems. Up to two awards will be given every year and each carries a prize of $10,000.
$10,000, twice a year. It's the first edition. Max Levchin is the co-founder of paypal, he likes puzzles.
first prize is awarded to Phillip Rogaway (unanimously) -> concrete security analysis, authenticated encryption, OCB, synack, format-preserving encryption, surveillance-resistant crypto, etc. Well, the guy is famous.
second award goes to several people from INRIA for the miTLS project (Karthikeyan Bhargavan, Cedric Fournet, Markulf Kohlweiss, Alfredo Pironti). Well deserved.
14:00 - PrivaTegrity: online communication with strong privacy
tl;dw:
Well. David Chaum, PrivaTegrity: "A wide range of consumer transactions multiparty/multijurisdiction -- efficiently!"
I won't comment on that. Everything is in these slides:
I mean seriously, if you use slides like that, and talk really loud, people will think you are a genius? Or maybe the inverse. I'm really confused as to why that guy was authorized to give a talk.
A direct-recording electronic (DRE) voting machine records votes by means of a ballot display provided with mechanical or electro-optical components that can be activated by the voter (typically buttons or a touchscreen); that processes data by means of a computer program; and that records voting data and ballot images in memory components. After the election it produces a tabulation of the voting data stored in a removable memory component and as printed copy. The system may also provide a means for transmitting individual ballots or vote totals to a central location for consolidating and reporting results from precincts at the central location. The device started to be massively used in 1996, in Brazil, where 100% of the elections voting system is carried out using machines.
In 2004, 28.9% of the registered voters in the United States used some type of direct recording electronic voting system, up from 7.7% in 1996.
13 million LOC. WTF
1) print zero tape first to prove no one has voted (meaningless)
In 2012, the gov organized an open contest to find vulns in the system (which is what he did); extremely restricted, just a few hours, no pen/paper
he found hardcoded keys in plain sight
the gov says it's a "voting software that checks itself" (what does that mean? a canary in the assembly code? Completely nonsensical and non-crypto)
he tried a grep -r rand * and...
got a match in a file: srand(time(NULL))
this is predictable if you know the time, and they know the machines are launched between 7 and 8am. Bruteforce? (toy sketch after these notes)
the time is actually public, no need for brute force...
gov asked if hashing the time would work, no? Well hashing the time twice then?
finally fixed by using /dev/urandom although the voting machines have two hardware RNGs
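To illustrate why srand(time(NULL)) is hopeless, here's a toy Python sketch (Python's random as a stand-in for C's rand(), with a made-up boot time): knowing the one-hour boot window is enough to brute-force the seed.

import random, time

# the "machine" seeds its PRNG with its boot time, say 7:23:45 on election day (hypothetical)
boot_time = int(time.mktime((2012, 10, 7, 7, 23, 45, 0, 0, -1)))
random.seed(boot_time)
observed = [random.randrange(2**32) for _ in range(4)]   # values leaked by the system

# the attacker knows machines boot between 7am and 8am: only ~3600 candidate seeds
start = int(time.mktime((2012, 10, 7, 7, 0, 0, 0, 0, -1)))
for seed in range(start, start + 3600):
    random.seed(seed)
    if [random.randrange(2**32) for _ in range(4)] == observed:
        print("recovered seed:", seed, seed == boot_time)
        break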
YouInspect: an initiative to take pictures of the vote ticket and upload them (didn't get what the point of that was, it didn't seem to yield any useful results)
14:50 - The State of the Law: 2016
tl;dw: blablabla
15:50 - QUIC Crypto
the only talk with very few slides. Adam Langley only used them when he needed pedagogical support. This is brilliant.
the forward-secure part of QUIC is better than forward secrecy in TLS (how? Didn't get that)
QUIC crypto will be replaced by TLS 1.3
QUIC will go on, but TLS works over TCP so they will have to make some changes?
There was this diagram where a client would send something to the server, if he didn't have the right ticket it wouldn't work, otherwise it would work... If you understood that part please tell me :)
16:20 - On the Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 v1.5 Encryption
most used tls version is 1.0 (invented in 1999, when windows 98 was the most used OS)
pkcs#1 v1.5 is removed from tls 1.3
the bleichenbacher attack on pkcs#1 v1.5 is still possible.... attack explained here (the thing works if you have a server which supports both 1.3 and older versions)
idea of a solution?: use different certificates for 1.3 and 1.0
Someone from the audience: "no cool name and logo?"
16:40 - The State of Transport Security in the E-Mail Ecosystem
10 minutes of (painful) talk (but good job nonetheless Aaron: you went through to the end).
16:50 - Where the Wild Warnings Are: The TLS Story
tl;dw: users get certificate errors browsing the net because their clock is not correct.
users are getting used to errors, and tend to dismiss them
making stats of the error warnings shown to users:
51% of warnings are non-overridable
41% of warnings are for facebook, youtube, google (because they are "portals to the web")
errors come from the client:
client clock misconfiguration (61%)
they have an error page for that which allows you to fix your clock on android
can't send messages on whatsapp because of this problem as well
captive portals
security products
...
errors come from the server:
government has tons of errors with weird certificates
...
Someone in the public is suggesting that this is because the governments are trying to teach people to ignore these errors (obviously joking). Another one is saying that they might want users to add their "special certificate", because it can override HSTS with rogue certificates. Don't know if this is true. But I'm thinking, why not add certificates for only the website that requests it. Like a certificate jail. Or maybe save the certificate in a different "user-added" folder; websites signed by certificates from this folder would make chrome display "this website is signed by a certificate you added. If you think this is not normal blablabla".
APF is talking about how they are scared that users will get desensitized by errors, but why display errors? Why not just display a warning? That would annoy real servers and oblige them to get their certs in order; it would make the users suspicious but not unable to access their website (and able to google for solutions like "just add the certificates in your root store").
Watson Ladd (whom the host recognized) asked her how far from the real time the clocks were set. He thought maybe it could be the battery dying in the laptop, NTP not working right away (I missed why), and so the time difference would be negative. In my understanding the clock difference was causing a problem because of the certificates' notBefore or notAfter fields, so that wouldn't be a problem.
Also, people are wondering why these clocks are wrong, and whether they should fix it for the user? But maybe not, since it might be that the user wants his clock to be incorrect... I just remember a time when I would purposely modify the time so that I could keep using my time-limited trials (photoshop?).
As I'm packing my bags to leave the temporary comfort of my parents' place, I'm taking the time to write a bit about my life (this is a blog after all).
I started this blog more than 2 years ago right before moving to Bordeaux to start a master in Cryptography. I had just finished a long bachelor of Mathematics between the universities of Lyon (France) and McMaster (Canada) and had decided to merge my major with an old passion of mine (Computer Science).
Hey guys, I'm David Wong, a 24 years old french dude who's going to start a Master of Cryptology in the university of Bordeaux 1.
I had been blogging for decades and it was natural that I decided to start something that I could look at and be proud of at the end of my master's. Sort of a journal of a 2-year project. I was also counting on it to give me some motivation in times of adversity, and it started taking shape with tutorial videos on classes I couldn't understand (here's my first on Differential Power Analysis) and long articles about failed interviews (here's the one I made after interviewing with Cloudflare).
I still have no clue what my future job will be, that's why I had the idea of making this small blog where I could post about my ventures into this new world and, hopefully, being able to take a step back and see what I did, what I liked, what happened in two years of Master (and maybe more).
I'll also post some thoughts about the new city I'll be moving to : Bordeaux. This is for at least 2 years, or less if I change my mind. Anyway, this is going to be exciting!
That was 2 years ago, and indeed those years are now filled with memories and achievements that I will forever cherish. If you're passing by France and you didn't plan a visit in Bordeaux, you're missing out.
But anyway, as you probably know since you don't miss any of my blogposts, I've since been hired by the same people and will be back in the office in two weeks. In two weeks because before then I will be at the Real World Crypto convention at Stanford university, and after that at NCC Con in Austin. A lot is going to happen in just a few weeks, plus I'll have to find a new place to live and re-calibrate with the desk I left behind...
With all the current crypto talks out there you get the idea that crypto has problems. crypto has massive usability problems, has performance problems, has pitfalls for implementers, has crazy complexity in implementation, stupid standards, millions of lines of unauditable code, and then all of these problems are combined into a grand unified clusterfuck called Transport Layer Security.
It explains a few of the primitives backed by PQCrypto for a post-quantum world. I did a blog series myself on hash-based signatures which I think is clearer than the above video.