Do people use Socat in real life?
posted February 2016
Just to get an idea, could you please respond here?
Thank you very much!
Hey! I'm David, a security consultant at Cryptography Services, the crypto team of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.
I made a simple script to check your DH modulus. Is it long enough? Is it a safe prime? I thought some non-cryptographers could benefit from such a tool: usually all I have to do is fire up Sage and run some tests, but if you don't have Sage this can be tricky and annoying, so...
Here's test_DHparams
A few weeks ago, NIST released a draft of their report on Post-Quantum Cryptography.
As we all know, some things are happening in the quantum computing world. Some say it will never work; some say it will, but that it will take time before quantum computers large enough to break today's crypto exist.
Reading this paragraph from the NIST document, it makes sense why we would want to move to post-quantum crypto today:
Historically, it has taken almost 20 years to deploy our modern public key cryptography infrastructure. It will take significant effort to ensure a smooth and secure migration from the current widely used cryptosystems to their quantum computing resistant counterparts. Therefore, regardless of whether we can estimate the exact time of the arrival of the quantum computing era, we must begin now to prepare our information security systems to be able to resist quantum computing.
Let's see where this number comes from. SSL/TLS (its protocol and its implementations, its coverage and its efficiency) has been a huge mess so far:
In 2009, 7 years ago, Moxie introduced SSLStrip at Black Hat, a technique that renders https completely useless without preloaded HSTS.
It's only in 2013, 3 years ago, that Facebook finally made the whole app https-only, which just blows my mind. And that's without counting the myriad of companies, shops, banks and other websites that were all still accessible through http back then.
Nowadays most websites are still vulnerable to Moxie's 2009 attack. Think about it: TLS is supposed to protect communications against both a passive and an active attacker on the network. In the passive case, I think it succeeded (in most cases). In the active case? Even HSTS or HPKP can still be somehow circumvented. Only browsers are fully capable of protecting us nowadays.
We could also talk about the deprecation of md5 and sha1, but sleevi does that better than me:
In 1996, 20 years ago, researchers recommended switching from md5 to sha1 because of recent advances.
In 2013, 17 years after the recommendation, Apple finally removed its support for MD5 in certificates.
(there's also a graphical timeline made by ange: here)
Or what about the deprecation of DES? Or RC4? Or 1024 bit DH? ..
To come back to the NIST's report, here's a nice table of the impact of quantum computing on today's algorithms:
It sums up pretty well what djb wrote:
Imagine that it's fifteen years from now. Somebody announces that he's built a large quantum computer. RSA is dead. DSA is dead. Elliptic curves, hyperelliptic curves, class groups, whatever, dead, dead, dead.
Contrary to the European initiative PQCrypto, they seem to imply that they will recommend lattice-based crypto when their new Suite B is done. I find it hard to trust the security proof of any system that relies on lattices' theoretical bounds, because as is known with LLL, BKZ and others: practical results are way better than these theoretical limits. I don't know much about lattice crypto though, and I would point you to this paper in my to-read list: Lattice-based crypto for beginners.
They agree on hash-based signatures (which are explained in a 4-post series on my blog), which is timely because a new version of the RFC draft for XMSS has come out; it might be the most polished hash-based signature system out there (although it is stateful, unlike SPHINCS).
The paper ends on these wise words that explain how security estimation works (and has always worked):
We note that none of the above proposals have been shown to guarantee security against all quantum attacks. A new quantum algorithm may be discovered which breaks some of these schemes. However, this is similar to the state today. Although most public-key cryptosystems come with a security proof, these proofs are based on unproven assumptions. Thus the lack of known attacks is used to justify the security of public-key cryptography currently in use.
As for quantum computing advances, I don't know much about them, but here are some notes:
Shor's algorithm (the one that breaks everything) was born in 1994.
Late 1990s: error correcting codes and threshold theorems for quantum computing. Quantum computing might be possible?
In 2011, "the world's first commercially available quantum computer" was released by D-Wave. I believe this angered many people, because it wasn't really quantum computing.
To finish this blogpost, a few things I remember from last month's Real World Crypto conference:
Tanja asked the first speaker presenting the blackphone about quantum crypto. His response: "post-quantum right now is marketing". People laughed.
There is definitely some skepticism in the crypto world about quantum computing, even as there is a gold rush into designing new post-quantum crypto.
I have mixed feelings about the UI of Signal, but after watching these two videos of Moxie from different periods of TLS' life, I now have a brand new admiration for the man.
The two videos are here:
In both of them, he starts by talking about his sslsniff: a great tool, written in C++, that you can find here; it allows you to serve clients fallacious certs by taking advantage of browser vulnerabilities (such as not checking the basic constraints field in the certificate chain, stopping to read subject names after null bytes, etc.)
Another tool he released, coded in Python, is sslstrip. It takes advantage of the fact that almost no one types https:// in the address bar when navigating to a website directly. A man-in-the-middle attacker can stop the redirection to https and serve the entire website either through http, or through another https website whose url looks similar to the victim's website (thanks to some unicode trickery to make it look like victimwebsite.com/something/?yourwebsite.com, where yourwebsite.com is the real website being visited).
https' only purpose is to defend against man-in-the-middle attackers. Because of sslstrip, any active attacker on the network can render https completely useless. Security measures to prevent that, that I can think of, are HSTS headers, preloaded HSTS (see Chrome's list) and HPKP (see Tim Taubert's blogpost).
Both tools need arpspoof to create the man-in-the-middle position. I've discovered another tool that seems to combine all of these tools in one (but I'm not really sure what the difference is): bettercap. It looks good too.
Other tools I've discovered verify how clients handle certificate verification: one, developed by some coworkers, is called tlspretense; the other is called x509test and seems to be pretty popular too. I have no idea how these tools perform; I guess I will check that out next time.
While some people would tell you to use pre-defined Diffie-Hellman parameters (rfc3526, rfc5114) so as not to mess something up during the generation (hum hum socat), others would tell you "hey, look at what happened with Logjam and their hardcoded DH params!" and will point you to openssl dhparam to generate your own customized parameters.
But how does it really work? The high level code is less than 450 lines long. It basically takes several things into consideration ("do you want to use 2 or 5 as a generator? No? Let's use 2 anyway!") and then calls a DH_generate_parameters() function that sets up everything.
Inside that function lies probable_prime_dh_safe(), which looks for a particular kind of prime to use as the DH modulus: a safe prime.
A safe prime looks like this: \(2p + 1\) with \(p\) a prime as well.
Actually, we can also say that the function looks for a Sophie Germain prime: a prime \(p\) such that \(2p+1\) is also prime.
So now you know =) safe primes are not the same as Sophie Germain primes, but they are closely related.
(Not to be mixed up with strong primes either, which are just primes that have some particular properties.)
Another important thing: the function returns a safe probable prime, that is, a number that has not been mathematically proven to be prime (but is close enough).
To do that, first a random number is generated. Then a test called the Miller-Rabin test is applied to it a number of times. The more you run the test, the more certainty you get that the number is prime; if the test ever fails, you know for sure that the number is not a prime (it is a composite) and you need to generate a new random number and start the tests again.
How many times should you apply this test? Enough times so that if everyone tried to generate primes every second for the lifespan of the Earth, we would still have low chances of finding a false positive (a composite number passing X rounds of Miller-Rabin). The numbers used in OpenSSL come from table 4.4 in the Handbook of Applied Cryptography.
(To make the test faster, a bunch of small primes are first tried as divisors, before the Miller-Rabin rounds begin.)
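The generation loop described above can be sketched in a few lines of Python. This is a toy illustration of mine, not OpenSSL's actual code; the list of trial divisors and the round count of 40 are arbitrary choices, not the Handbook's table:

```python
import random

def is_probable_prime(n, rounds=40):
    # quick trial divisions by small primes first
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Miller-Rabin: write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' is a witness: n is composite for sure
    return True  # probably prime

def gen_safe_prime(bits):
    # draw random candidates until both p and 2p+1 are (probable) primes
    while True:
        p = random.getrandbits(bits - 1) | 1  # odd Sophie Germain candidate
        if is_probable_prime(p) and is_probable_prime(2 * p + 1):
            return 2 * p + 1  # the safe prime
```

Note that each failed Miller-Rabin round throws the candidate away entirely, exactly as described: one failure proves compositeness, while passes only accumulate confidence.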
Someone asked that question on reddit, and so I replied with a high level answer that should provide a clear enough view of the algorithm:
From a high level, here's what a PRNG is supposed to look like: you start with a seed (if you re-use the same seed you will obtain the same random numbers), which you initialize into a state. Then, every time you want to obtain a random number, you transform that state with a one-way function \(g\). This is because you don't want people to find out the state from the random output.

You want another random number? You first transform the state with a one-way function \(f\): this is because you don't want people who found out the state to be able to retrieve past states (forward secrecy). And then you use your function \(g\) again to output a random number.

Mersenne Twister (MT) is like that, except:

- the state is not used directly to output any random numbers
- the state allows you to output not only one, but 624 random numbers (although this could be thought of as one big random number)

With more details, here's what MT looks like: the \(f\) function is called "twist", the \(g\) function is called "temper". You can find out how each function works by looking at the working code on the Wikipedia page of MT.
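The state/\(f\)/\(g\) picture above can be sketched with a hash function standing in for both one-way functions. This is a toy model of my own, not how MT actually works (MT's twist and temper are bit-level operations, not SHA-256):

```python
import hashlib

class ToyPRNG:
    """Toy PRNG following the state/f/g model: f advances the state,
    g derives an output without revealing the state."""
    def __init__(self, seed):
        # initialize the seed into a state
        self.state = hashlib.sha256(b"init" + seed).digest()

    def _f(self):
        # one-way forward step: someone who learns the current state
        # cannot walk it backwards to past states (forward secrecy)
        self.state = hashlib.sha256(b"next" + self.state).digest()

    def _g(self):
        # one-way output: the output must not reveal the state
        return hashlib.sha256(b"out" + self.state).digest()

    def random_bytes(self):
        self._f()
        return self._g()

prng = ToyPRNG(b"my seed")
print(prng.random_bytes().hex())  # same seed -> same stream
```

Re-seeding with the same bytes reproduces the exact same output stream, which is the whole point (and the whole danger) of a deterministic PRNG.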
The socat thingy created some interest in my brain and I'm now wondering how to build a NOBUS (Nobody But Us) backdoor inside Diffie-Hellman, and how to reverse it if it's not a proper NOBUS.
One way to do that is to imagine how the non-prime DH modulus could have been generated to allow for a backdoor. For it to be a NOBUS it should not be easily factorable, but for it to allow a Pohlig-Hellman attack it must have a B-smooth order, with B small enough for the adversary to compute the discrete log in every subgroup of order at most B.
I'm currently summing up my research in the open on a github repo: How to backdoor Diffie-Hellman, lessons learned from the Socat non-prime prime. If anyone is interested in any parts of this research (factorizing the modulus, thinking of ways to build the backdoored modulus, ...) please shoot me a message :)
If you go on the github repository you will see an already working proof of concept that explains each of the steps (generation, attack).
This proof of concept illustrates one of the ways the malicious committer could have generated the non-prime modulus \(p = p_1 p_2\), with both \(p_i\) primes such that \(p_i - 1\) are smooth. The attack works, but I'm thinking about ways of reversing such a non-prime modulus, which would disable the NOBUS property of the backdoor. Spoiler alert: Pollard's p-1 factorization algorithm.
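To see why Pollard's p-1 defeats this construction, here's a minimal sketch with toy parameters of my own (nothing to do with the repo's actual code or socat's modulus): if \(p_1 - 1\) is smooth, then for an exponent \(M\) that is a multiple of \(p_1 - 1\) we have \(a^M \equiv 1 \pmod{p_1}\) by Fermat's little theorem, so \(\gcd(a^M - 1, n)\) exposes \(p_1\).

```python
from math import gcd

def pollard_p_minus_1(n, bound=10000):
    """Try to find a factor p of n such that p - 1 is smooth.
    We raise a = 2 to the exponent bound! step by step; as soon as
    the accumulated exponent is a multiple of p - 1, gcd(a - 1, n)
    becomes divisible by p."""
    a = 2
    for j in range(2, bound):
        a = pow(a, j, n)  # after this step, a = 2^(j!) mod n
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d  # non-trivial factor found
    return None

# toy modulus built like the suspected backdoor:
# 2017 - 1 = 2016 = 2^5 * 3^2 * 7 is smooth, so 2017 falls out quickly;
# 2027 - 1 = 2 * 1013 is not smooth, so 2027 resists
print(pollard_p_minus_1(2017 * 2027))  # 2017
```

A modulus whose \(p_i - 1\) are smooth enough to allow Pohlig-Hellman is therefore also cheap to factor this way, which is exactly what kills the NOBUS property.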
Anyway, if you're interested in contributing to that research, or if you have any comments that could be useful, please shoot me a message =)
On February 1st 2016, a security advisory was posted to Openwall by a Socat developer: Socat security advisory 7 - Created new 2048bit DH modulus
In the OpenSSL address implementation the hard coded 1024 bit DH p parameter was not prime. The effective cryptographic strength of a key exchange using these parameters was weaker than the one one could get by using a prime p. Moreover, since there is no indication of how these parameters were chosen, the existence of a trapdoor that makes possible for an eavesdropper to recover the shared secret from a key exchange that uses them cannot be ruled out.
A new prime modulus p parameter has been generated by Socat developer using OpenSSL dhparam command.
In addition the new parameter is 2048 bit long.
This is a pretty weird message with a Juniper feeling to it.
Socat's README tells us that you can use their free software to setup an encrypted tunnel for data transfer between two peers.
Looking at the commit logs, you can see that they used a 512-bit Diffie-Hellman modulus until January of last year (2015), when it was replaced with a 1024-bit one.
Socat did not work in FIPS mode because 1024 instead of 512 bit DH prime is required. Thanks to Zhigang Wang for reporting and sending a patch.
The person who pushed the commit is Gerhard Rieger, who is also the person who fixed it a year later. In the commit message he refers to Zhigang Wang, an Oracle employee at the time, who has yet to comment on his mistake.
There are a lot of interesting things to dig into now. One of them is to check if the new parameter was generated properly.
It is a prime. Hooray! But is that enough?
It usually isn't. The developer claims to have generated the new prime with openssl's dhparam command (openssl dhparam 2048 -C), but is that enough? Or even, is it true?
To get the order of the DH group, a simple \(p - 1\) suffices (\(p\) being the new modulus here). This works because \(p\) is prime; if it were not prime, you would need to know its factorization. This is why the research on the previous non-prime modulus is slow... See Thai Duong's blogpost here, the stackexchange question here, or reddit's thread.
Now the order is important, because if it's smooth (factorable into "small" primes), then active attacks (small subgroup attacks) and passive attacks (Pohlig-Hellman) become possible.
So what we can do is try to factor the order of this new prime's group.
Here's a small script I wrote that tries all the primes until... you stop it:
# the old "fake prime" dh params
dh1024_p = 0xCC17F2DC96DF59A446C53E0EB826550CE388C1CEA7BCB3BF1694D8A945A2CEA95B22255F9259941C22BFCBC8C857CBBFBC0EE840F98703BF609B08C68E99C605FC00D66D90A8F5F8D38D43C88F7ABDBB28AC04694A0B867337F06D4F04F6F5AFBFAB8ECE75534D7F7D17780E12464AAF9599EFBCA6C54177437AB9EC8E073C6D
dh1024_g = 2

# the new dh params
dh2048_p = 0x00dc216456bd9cb2acbec998ef953e26fab557bcd9e675c043a21c7a85df34ab57a8f6bcf6847d056904834cd556d385090a08ffb537a1a38a370446d2933196f4e40d9fbd3e7f9e4daf08e2e8039473c4dc0687bb6dae662d181fd847065ccf8ab50051579bea1ed8db8e3c1fd32fba1f5f3d15c13b2c8242c88c87795b38863aebfd81a9baf7265b93c53e03304b005cb6233eea94c3b471c76e643bf89265ad606cd47ba9672604a80ab206ebe07d90ddddf5cfb4117cabc1a384be2777c7de20576647a735fe0d6a1c52b858bf2633815eb7a9c0ee581174861908891c370d524770758ba88b3011713662f07341ee349d0a2b674e6aa3e299921bf5327363
dh2048_g = 2

# is_prime(dh2048_p) -> True
order = dh2048_p - 1
factors = [2]
print "2 divides the order"

# let's try to factorize the order by trial divisions
def find_factors(number):
    factors = []
    # use different techniques to get primes, dunno which is faster
    index = 0
    for prime in Primes():
        if Mod(number, prime) == 0:
            print prime, "divides the order"
            factors.append(prime)
        if index == 10000:
            print "tested up to prime", prime, "so far"
            index = 0
        else:
            index += 1
    return factors

factors += find_factors(order / 2)
It has been running for a while now (up to 82018837, a 27-bit number) and nothing has been found so far...
The thing is, a Pohlig-Hellman attack is doable as long as you can compute the discrete log modulo each factor. There is no notion of a "small enough factor" without a threat model. This backdoor is obviously not going to be usable by small players, but by bigger players? By state-sized attackers? Who knows...
EDIT: I forgot order/2 could be a prime as well. But nope.
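To make the Pohlig-Hellman idea concrete, here's a toy sketch of my own (hypothetical tiny parameters, nothing to do with socat's actual modulus): when the group order is smooth, the discrete log can be solved separately in each small subgroup and the pieces glued back together with the Chinese Remainder Theorem.

```python
def dlog_bruteforce(g, h, p, order):
    # brute-force discrete log in a small subgroup of the given order
    x = 1
    for k in range(order):
        if x == h:
            return k
        x = (x * g) % p
    raise ValueError("no discrete log found")

def find_generator(p, factors):
    # find a generator of the full group mod a prime p
    n = p - 1
    for g in range(2, p):
        if all(pow(g, n // q, p) != 1 for q in factors):
            return g

def pohlig_hellman(g, h, p, factors):
    # solve x mod q for each prime factor q of the order, then CRT
    n = p - 1  # order of the full group (p prime, g a generator)
    residues = []
    for q in factors:
        # project g and h into the subgroup of order q
        gq = pow(g, n // q, p)
        hq = pow(h, n // q, p)
        residues.append(dlog_bruteforce(gq, hq, p, q))
    # combine the residues with the Chinese Remainder Theorem
    x, modulus = 0, 1
    for r, q in zip(residues, factors):
        t = ((r - x) * pow(modulus, -1, q)) % q
        x += modulus * t
        modulus *= q
    return x

p = 2311                  # toy prime with smooth order: p - 1 = 2*3*5*7*11
factors = [2, 3, 5, 7, 11]
g = find_generator(p, factors)
x = 1234                  # the "secret" exponent
h = pow(g, x, p)
print(pohlig_hellman(g, h, p, factors))  # recovers 1234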
I was watching this excellent video on the birth of elliptic curves by Dan Boneh, and I felt like the explanation of Diffie-Hellman (DH) fell short. In the video, Dan goes on to explain the typical DH key exchange:
Alice and Bob agree on a public point \(g\) and a public modulus \(N\).
By the way, if you ever heard of "DH-1024" or some big number associated with Diffie-Hellman, that was probably the bitsize of this public modulus \(N\).
The exchange then goes like this:
Alice generates her private key \(a\) and sends her public key \(g^a\pmod{N}\) to Bob.
Bob generates his own private key \(b\) and sends his public key \(g^b\pmod{N}\) to Alice.
Dan then explains why this is secure: because given \((g, g^a, g^b)\) (the only values an eavesdropper can observe from this exchange) it's very hard to compute \(g^{ab}\), and this is called the Computational Diffie-Hellman problem (CDH).
But this doesn't really explain how the scheme works. You could wonder: why doesn't the attacker do the exact same thing Alice and Bob just did? He could just iterate the powers of \(g\) until \(g^a\) or \(g^b\) is found, right?
Let's replace the exponentiation with a hash function. Don't worry, I'll explain:
\(g\) will be our public input and \(h\) will be our hash function (e.g. sha256). One more thing: \(h^{3}(g)\) means \(h(h(h(g)))\).
So now our key exchange looks like this:
Alice picks a large enough integer \(a\), computes \(a\) iterations of the hash function \(h\) over \(g\), and sends the result \(h^a(g)\) to Bob.
Bob does the same with an integer \(b\) and sends \(h^b(g)\) to Alice (the exact same thing Alice did, different phrasing.)
So if you understood the last part: Alice and Bob each end up iterating the hash function on the starting input \(g\) a total of \(a+b\) times. If Alice's public key was \(h(h(g))\) and Bob's public key was \(h(h(h(g)))\), then they both end up computing \(h(h(h(h(h(g)))))\).
That seems to work. But how is this scheme secure?
You're right, it is not. The attacker can just hash \(g\) over and over until he finds either Alice's or Bob's public key.
So let's ask ourselves this question: how could we make it secure?
If Bob or Alice had a way to compute \(h^c(x)\) without computing every single hash (\(c\) hash computations) then he or she would take way less time to compute their public key than an attacker would take to retrieve it.
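The iterated-hash exchange above can be played out in a few lines of Python. This is a toy sketch of mine with sha256 standing in for \(h\); the scheme is of course insecure, as just explained:

```python
import hashlib

def h_iter(g, times):
    # h^times(g): iterate the hash function `times` times over g
    for _ in range(times):
        g = hashlib.sha256(g).digest()
    return g

g = b"public starting point"
a, b = 3, 5                  # Alice's and Bob's secret iteration counts
A = h_iter(g, a)             # Alice's public key: h^a(g)
B = h_iter(g, b)             # Bob's public key: h^b(g)
# both sides end up iterating a+b times in total: the secrets match
assert h_iter(B, a) == h_iter(A, b) == h_iter(g, a + b)
```

The shared secret falls out because iteration counts simply add up; the attacker recovers it just as easily by hashing \(g\) forward until a public key appears, which is exactly the missing-shortcut problem.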
This makes it easier to understand how the normal DH exchange in finite groups is secure.
The usual assumptions we want for DH to work were nicely summed up in Boneh's talk.
The point of view here is that discrete log is difficult AND CDH holds.
Another way to see this is that we have algorithms to quickly calculate \(g^c \pmod{N}\) without having to iterate through every integer before \(c\).
To be more accurate: the algorithms we have to quickly exponentiate numbers in finite groups are way faster than the ones we have to compute discrete logarithms in finite groups. Thanks to these shortcuts, the good folks can quickly compute their public keys while the bad folks have to do all the work.
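One such shortcut is the classic square-and-multiply algorithm (my example, not something from the talk): computing \(g^c \bmod N\) takes about \(\log_2 c\) squarings instead of \(c\) multiplications. Python's built-in pow(g, c, N) already does this:

```python
def fast_exp(g, c, N):
    # square-and-multiply: walk the bits of c from least to most significant
    result = 1
    base = g % N
    while c > 0:
        if c & 1:                 # this bit of c is set: multiply it in
            result = (result * base) % N
        base = (base * base) % N  # square for the next bit
        c >>= 1
    return result

# ~2048 steps instead of ~2^2048 for a 2048-bit exponent
print(fast_exp(5, 117, 19) == pow(5, 117, 19))  # True
```

No comparably cheap shortcut is known for the reverse direction (the discrete log), and that asymmetry is what the whole scheme rests on.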
Taken from the SLOTH paper, here are the current estimated complexities of the best known attacks against MD5 and SHA-1:
|           | Common-prefix collision | Chosen-prefix collision |
| MD5       | 2^16                    | 2^39                    |
| SHA-1     | 2^61                    | 2^77                    |
| MD5|SHA-1 | 2^67                    | 2^77                    |
MD5|SHA-1 denotes the concatenation of the outputs of both hashes on the same input. It is a technique aimed at increasing the cost of these attacks but, as you can see, it is not that effective.
I never understood why Firefox doesn't display a warning when visiting non-https websites. Maybe it's too soon: there are too many no-tls servers out there and users would learn to ignore the warning after a while?
I don't know, so I wrote a few lines and made the add-on here.
Just drag and drop the .xpi into your Firefox. You can also review the ultra-minimal code in index.js and build the xpi yourself with Mozilla's jpm tool.
A few weeks ago I wrote about testing RSA public keys from the most recent Alexa's top 1 million domains handshake log that you can get on scans.io.
Most public exponents \(e\) were small, so no small private key attack (Boneh and Durfee) should have been possible. But I didn't explain why.
The private exponent \(d\) is the inverse of \(e\): that means that \(e \cdot d = 1 \pmod{\varphi(N)}\).
\(\varphi(N)\) is a number almost as big as \(N\), since \(\varphi(N) = (p-1)(q-1)\) in our case. For our public exponent \(e\) multiplied by something to be equal to \(1\), that product has to wrap around the modulus \(\varphi(N)\) at least once.
Or put differently: since \(e > 1\), the product \(e \cdot d\) has to grow past \(\varphi(N)\) before it can come back around to \(1\).
l = 1024
p = random_prime(2^(l/2), lbound= 2^(l/2 - 1))
q = random_prime(2^(l/2), lbound= 2^(l/2 - 1))
N = p * q
phiN = (p-1) * (q-1)
print len(bin(int(phiN / 3))) - 2 # 1024
print len(bin(int(phiN / 10000000))) # 1002
This quick test with Sage shows us that with a small public exponent (like 3, or even 10,000,000), you need to multiply it with a number greater than 1000 bits to reach the end of the group and possibly ending up with a \(1\).
All of this is interesting because in 2000, Boneh and Durfee showed that if the private exponent \(d\) is smaller than a fraction of the modulus \(N\) (the exact bound is \(d < N^{0.292}\)), then it can be recovered in polynomial time via a lattice attack. What does it mean for the private exponent to be "small" compared to the modulus? Let's get some numbers to get an idea:
print len(bin(N)) - 2 # 1024
print len(bin(int(N^(0.292)))) - 2 # 299
That's right: for a 1024-bit modulus, the private exponent \(d\) has to be smaller than ~300 bits for the attack to apply. This is never going to happen if the public exponent used is small (note that this doesn't necessarily mean that you should use a small public exponent).
So after testing the University of Michigan · Alexa Top 1 Million HTTPS Handshakes, I decided to tackle a much much larger logfile: the University of Michigan · Full IPv4 HTTPS Handshakes. The first one is 6.3GB uncompressed, the second is 279.93GB. Quite a difference! So the first thing to do was to parse all the public keys in search of exponents greater than 1,000,000 (an arbitrary bound that I could have set higher but, as the results showed, it was enough).
I only got 10 public exponents with higher values than this bound! And they were all still relatively small (633951833, 16777259, 1065315695, 2102467769, 41777459, 1073741953, 4294967297, 297612713, 603394037, 171529867).
Here's the code I used to parse the log file:
import sys, json, base64

with open(sys.argv[1]) as ff:
    for line in ff:
        lined = json.loads(line)
        if 'tls' not in lined["data"] or 'server_certificates' not in lined["data"]["tls"].keys() or 'parsed' not in lined["data"]["tls"]["server_certificates"]["certificate"]:
            continue
        server_certificate = lined["data"]["tls"]["server_certificates"]["certificate"]["parsed"]
        public_key = server_certificate["subject_key_info"]
        signature_algorithm = public_key["key_algorithm"]["name"]
        if signature_algorithm == "RSA":
            modulus = base64.b64decode(public_key["rsa_public_key"]["modulus"])
            e = public_key["rsa_public_key"]["exponent"]
            # ignoring small exponents
            if e < 1000000:
                continue
            N = int(modulus.encode('hex'), 16)
            print "[", N, ",", e, "]"
There is no day 4, this is over... And I've got a ton to work on/read about/catch up with.
But first! I'm spending the weekend in San Francisco before flying to Austin, so if anyone wants to hang out in SF feel free to contact me on twitter =)
(and if you work for Dropbox, feel free to invite me to eat at your one-Michelin-star cafeteria)
First, a bunch of slides are already available through the real world crypto webpage. And I've been taking notes every day: day1, day2, day3.
Now here's my to read list from the important talks:
And as a bonus, here are some papers that have nothing to do with RWC but that I still want to read right now:
I actually have no idea about that. You?
This is the 3rd post of a series of blogposts on RWC2016. Find the notes from day 1 here.
I'm a bit washed out after three long days of talks. But I'm also sad that it comes to an end :( It was amazing seeing and meeting so many of these huge stars of cryptography. I definitely felt like I was part of something big. Dan Boneh seems like a genuinely good guy, and the organization was top notch (and the sandwiches amazing).
The morning was filled with talks on SGX, the new Intel technology that could allow for secure VMMs. I didn't really understand these talks, as I didn't really know what SGX was. White papers, manuals, blogposts and everything else are here.
tl;dw: bleichenbacher pkcs1 v1.5 attack, invalid curve attack
If you know both attacks, don't expect anything new.
tl;dw: how it is to deploy SSE or PPE, and why it's not dead
I think the point is that there is nothing practical that is better than PPE, so rather than using non-encrypted DB... PPE will still hold.
tl;dw: PPE is dead, read the paper
Check page 6 of the paper for explanations of these attacks. All I was expecting from this talk was an explanation of the improvements (Lp and cumulative), but they just flew through them (fortunately they seem to be pretty easy to understand in the paper). Other than that, nothing new that you can't read in their paper.
tl;dw: cache attacks can work, maybe
conclusion:
Why didn't they talk about flush+reload and others?
tl;dw: ORAM, does it work? Is it practical?
Didn't get much from this talk. I know this is "real world" crypto, but a better intro to ORAM would have been nice, as well as some idea of where ORAM stands among the solutions we already have (fortunately the previous talk had a slide on that). Also, I had only read about ORAM in FHE papers/presentations, yet there was no mention of FHE in this talk :( well... no mention of FHE at all at this convention. Such sadness.
From their paper:
An Oblivious RAM scheme is a trusted mechanism on a client, which helps an application or the user access the untrusted cloud storage. For each read or write operation the user wants to perform on her cloud-side data, the mechanism converts it into a sequence of operations executed by the storage server. The design of the ORAM ensures that for any two sequences of requests (of the same length), the distributions of the resulting sequences of operations are indistinguishable to the cloud storage. Existing ORAM schemes typically fall into one of the following categories: (1) layered (also called hierarchical), (2) partition-based, (3) tree-based; and (4) large-message ORAMs.
tl;dw: the i2p protocol
tl;dw: BREACH is back
But first, what is BREACH/CRIME?
This talk was a surprise talk, apparently to replace a canceled one?
tl;dw: read the paper, attack is impractical
a debriefing of the convention can be found here
This is the 2nd post of a series of blogposts on RWC2016. Find the notes from day 1 here.
disclaimer: I realize that I am writing notes about talks from people who are currently surrounding me. I don't want to alienate anyone but I also want to write what I thought about the talks, so please don't feel offended and feel free to buy me a beer if you don't like what I'm writing.
And here's another day of RWC! This one was particularly long, with a morning full of blockchain talks that I avoided and an afternoon of extremely good talks, followed by a suicidal TLS marathon.
tl;dw: hello tls 1.3
DJB recently said at the last CCC:
"With all the current crypto talks out there you get the idea that crypto has problems. crypto has massive usability problems, has performance problems, has pitfalls for implementers, has crazy complexity in implementation, stupid standards, millions of lines of unauditable code, and then all of these problems are combined into a grand unified clusterfuck called Transport Layer Security.
For such a complex protocol, I was expecting the RWC speakers to make more of an effort. But that first talk was not clear (nor were the other TLS talks), the slides were tiny, the speaker spoke too fast for my non-native ears, etc... Also, nothing you can't learn if you already read this blogpost.
tl;dw: how to build smart contracts using the blockchain
Dapps are based on a token-economy utilizing a block chain to incentivize development and adoption.
if you're really interested, they have a tech report here (pdf)
As for me, this tweet sums up my interest in the subject.
So instead of playing games on my mac (see below (who plays games on a mac anyway?)), I took off to visit the Stanford campus and sat in one of their beautiful libraries.
I'm back after successfully avoiding the blockchain morning. Lightning talks are mini talks of 1 to 3 minutes where slides are forbidden. Most were just people hiring or saying random stuff. Not much to see here, but it seems like a good way to get into public speaking.
In the middle of them was Tancrède Lepoint, asking for comments on his recent Million Dollar Curve paper. Some people quickly commented without really understanding what it was.
(Sorry Tanja :D). Overall, the paper is about how to generate a safe curve that the public can trust. They use the Blum Blum Shub PRNG to generate the parameters of the curve, iterating the process until it passes a list of checks (taken from SafeCurves), seeding with several drawings from lotteries around the world in a particular timeframe (I think they use a commitment for the timeframe) so that people can see that these numbers were not chosen in a particular way (and would thus be NUMS).
tl;dw: Juniper
Slides are here. The talk was entertaining and really well communicated. But there was nothing majorly new that you can't already read in my blogpost here.
Developing a Trojaned Firmware for Juniper ScreenOS Platforms
A really good question from Tom Ritter: "how many bytes do you need to do the attack?" Answer: the truncated output of Dual EC is 30 bytes (instead of 32), so you need to bruteforce the missing 2 bytes. To narrow the search space, 2 bytes from the next output are practical and enough. So ideally 30 bytes plus 2 bytes from a following output allow for easy use of the Dual EC backdoor.
(which is something I forgot to mention in my own explanation of Dual EC)
tl;dw: use a external PRF
A smash and grab raid or smash and grab attack (or simply a smash and grab) is a particular form of burglary. The distinctive characteristics of a smash and grab are the elements of speed and surprise. A smash and grab involves smashing a barrier, usually a display window in a shop or a showcase, grabbing valuables, and then making a quick getaway, without concern for setting off alarms or creating noise.
A tweak \(t\) is sent as well (basically the user id); it doesn't have to be blinded, and so they invented a new concept: the "partially oblivious PRF" (PO-PRF). The tweak and the blinded password are sent to the PRF, which uses a bilinear pairing construction to do the PO-PRF thingy (this is a new use case for bilinear pairings apparently).
it's easy to implement, completely transparent to users, highly scalable.
Question from Dmitry Khovratovich: Makwa does something like this, exactly like this (ouch!). Answer: "I'm not familiar with that".
tl;dw: argon2 hash function efficient against ASICs
tl;dw: 5 stories about cute and subtle crypto fails
He narrated the tales of previously disclosed vulns: 5 case studies, mostly stemming from a "following best practice" attitude (not that it's bad, but it's usually not enough).
2) From the audience, someone from Netscape spoke out: "yup, we forgot that things have to be random as well" (cf. the predictable Netscape seed)
tl;dw: how they deployed https, wasn't easy
Timeline of the https deployment:
Server Name Indication (SNI) is an extension to the TLS computer networking protocol by which a client indicates which hostname it is attempting to connect to at the start of the handshaking process. This allows a server to present multiple certificates on the same IP address and TCP port number and hence allows multiple secure (HTTPS) websites (or any other Service over TLS) to be served off the same IP address without requiring all those sites to use the same certificate
tl;dw: SLOTH
Didn't understand much, but I know that all the answers are in this paper. So stay tuned for a blogpost on the subject, or just read the freaking paper!
tl;dw: how OPTLS works
The OPTLS design provides the basis for the handshake modes specified in the current TLS 1.3 draft including 0-RTT, 1-RTT variants, and PSK modes
I have to admit I was way too tired at that point to follow anything. Everything looked like David Chaum's presentation. So we'll skip the last talk in this blogpost.