Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.
warning: as this is a proof of concept to see if 4chan could be implemented on the blockchain, some people might post shocking pictures or videos on there. At the time of writing nothing "bad" has happened, but take precautions if you're planning to take a look at it :)
I've been dabbling in smart contract security (see my video here) and I found it natural to try and build a DAPP (decentralized application) myself. How hard can it be?
3.5 days later, I've gotten back into Javascript after many years, I've learned Vue.js (kind of, my code is really ugly) and I've created my first DAPP!
First thing I'll have to say: it's hella fun.
Javascript has come a long way and the Vue.js framework is just great! I tried using React but I found it less developer-friendly, harder to learn and lacking in terms of templates. It just didn't click. I guess coming from HTML first (and jQuery later), the Vue.js framework just makes more sense to me. It's all about having fun and I wasn't having any with React.
I still want to create, modify and query things the jQuery way, but I'm getting used to the javascript modernities (querySelector, arrow functions, ...) and the Vue.js way. I like it. It will take some time to get rid of my old habits and re-think the way I write front end code but I like it.
Writing the smart contract is quite straightforward. It's 128 lines of Solidity code, but most of it is comments (yes, I comment my code). At some point I should publish it on etherscan.io, because this is best practice. It's not compiled with the latest version of Solidity (boooo!) because I deployed it via Mist, and Mist doesn't ship the latest version of Solidity.
Writing the dapp with Vue.js and the web3.js API (the javascript library that talks to the blockchain via an ethereum node) is pretty straightforward as well. The learning curve is not bad and there are tons of resources for beginners. That is, until you test your dapp with a real wallet like the Metamask wallet (integrated as a browser plugin) or the official Mist wallet (an Electron app). Different wallets offer different functions and versions of web3 (how it works: they inject the web3 object into your document and you can use it directly from your javascript webapp). They also (for the most part) refuse any synchronous calls on the web3 API, without really giving you a way to debug which part of your call is synchronous. A lot of functions have to be replaced with their asynchronous variants, any async/await has to disappear, and you enter callback hell. Not fun.
The worst is that the documentation gets sparser and sparser as you enter the world of real DAPPs. You understand that things are changing really fast, that wallets will soon stop supporting web3.js and that a real API will be provided at some point. Everything is way more experimental than I had thought.
On top of that, Metamask doesn't let you watch for events yet, so say goodbye to your DAPP being "live" for their users.
To make the app "offline", meaning browsable without a wallet, I use Infura. You basically just have to switch the url of the node to the one Infura gives you, and web3 will be able to interact with it the same way it interacts with a real node. This works because the standard is plain JSON-RPC: normal routes, with queries and responses structured as JSON. Unfortunately, like Metamask, Infura doesn't let you listen to events, so the app is browsable, but not live.
I haven't taken the time to publish the smart contract on the real ethereum network yet. It's sitting on Rinkeby which is a test network where you can get ethers for free. I'm not going to get rich on a test network, and some of my friends are eyeballing me for this decision (I see you jc) but this is fun and I want people to try it for free :)
Is it hard? No, but it's annoying. First, I need to download the whole Ethereum blockchain.
As of April 19th, 2017, my blockchain size is 23.5 GB total.
Second, I need some ethers, and buying ethers from the UK is hard. Of course I already have some (I wouldn't be writing anything about ethereum if I wasn't invested), but it took me weeks of research to figure out how to buy them. (If you're looking for an easy way, learn from my wasted time: transfer money to a Revolut account, change it to euros, do a free SEPA transfer to Kraken, and buy ethers there.)
Anyone can replicate my work. The DAPP is client-side code (javascript), so anyone can download it and run it on their own page. The contract is also up there on the blockchain; it's all public stuff. I don't really like this, but it is how Ethereum works. If I someday drop the page from the internet, anyone can get it and run it: on their own server, or even locally on their own machine.
If you want to see how it works without getting a wallet:
But I recommend you give this new technology a try!
Download Mist, set the network to Rinkeby, grab some free ethers from the faucet there and browse to my DAPP FiveMedium.
I just made a video covering common attacks on Ethereum's smart contracts. I used live0verflow's techniques to record and edit this one so it's going to feel different from the others :)
It's a tl;dr of A survey of attacks on Ethereum smart contracts by Nicola Atzei, Massimo Bartoletti and Tiziana Cimoli.
I've polished the design of this blog a bit (with flexbox and css-grid!) and it should look a bit cleaner :)
I've also created a page for graphics. I only have 3 at the moment, but I know that PhD students often present posters like these at conferences, so if you know of any (or have one yourself) and you want me to showcase it there, send me a message!
EDIT: no more tickets! If you really want to go to Black Hat, I'd advise you to contact other speakers directly, as every BH speaker is given 2 free passes for students.
When a program uses a secret key for some cryptographic operation, it will store it somewhere in memory. This is a problem because it is trivial to read what was previously stored in memory from a different program; just write something like this:
#include <stdio.h>

int main() {
    unsigned char a[5000];              // uninitialized buffer
    for (int i = 0; i < 5000; i++) {
        printf("%x", a[i]);             // dump whatever garbage was already there
    }
    printf("\n");
}
This will print out whatever was previously there in memory, because the buffer a is not initialized to zeros. Actually, C seldom initializes things to zeros; it does if you specifically ask for it, for example by using calloc instead of malloc, or by putting static in front of a variable/struct/...
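As a quick illustration of what does and does not get zeroed (this snippet isn't from the original post, it just shows the standard C behavior):

#include <stdlib.h>

unsigned char g[32];                          // static storage duration: zero-initialized

int main() {
    unsigned char stack_buf[32];              // indeterminate contents (whatever was there)
    unsigned char *heap_buf = malloc(32);     // indeterminate contents as well
    unsigned char *zeroed   = calloc(32, 1);  // guaranteed to be all zeros
    static unsigned char s[32];               // zero-initialized, like any static object

    /* ... */

    free(heap_buf);
    free(zeroed);
    return 0;
}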
if someone is able to exploit an unrelated problem — a vulnerability which yields remote code execution, or a feature which allows uninitialized memory to be read remotely, for example — then ensuring that sensitive data (e.g., cryptographic keys) is no longer accessible will reduce the impact of the attack. In short, zeroing buffers which contained sensitive information is an exploit mitigation technique.
This is a problem.
To remove a key from memory, developers tend to write something like this:
memset(private_key, 0, sizeof(*private_key));
Unfortunately, when the compiler sees something like this, it will remove it. Indeed, this code is useless since the variable is never used afterwards, and the compiler will optimize it out.
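To make the problem concrete, here is a minimal sketch (the function and variable names are made up) of the pattern that typically gets optimized away at -O2:

#include <string.h>

void do_something_with_a_key(void) {
    unsigned char private_key[32];

    /* ... derive the key and use it for some cryptographic operation ... */

    /* dead store: private_key is never read after this point, so the
       compiler is allowed to (and usually will) remove the call entirely */
    memset(private_key, 0, sizeof(private_key));
}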
How to fix this issue?
A memset_s function was proposed and introduced in C11. It is basically a safe memset (you need to pass the size of the buffer you're zeroing as an argument) that will not get optimized out. Unfortunately, as Martin Sebor notes:
memset_s is an optional feature of the C11 standard and as such isn't really portable. (AFAIK, there also are no conforming C11 implementations that provide the optional Annex K in which the function is defined.)
To use it, a #define has to be placed in the right spot (before including string.h), and the implementation defines another macro to let you know that memset_s is actually available.
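In code, the mechanism looks something like this (a sketch; the wipe helper is hypothetical, the two macros are the ones defined by C11's Annex K):

/* ask for the optional Annex K ("bounds-checking") interfaces */
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>

void wipe(void *buf, size_t len) {
#ifdef __STDC_LIB_EXT1__
    /* the implementation provides Annex K, so memset_s exists
       and is not allowed to be optimized out */
    memset_s(buf, len, 0, len);
#else
    /* no memset_s available: fall back to writing through a
       volatile pointer (see the rest of the post) */
    volatile unsigned char *p = buf;
    while (len--) *p++ = 0;
#endif
}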
The GCC -fno-builtin-memset option can be used to prevent compatible compilers from optimizing away calls to memset that aren't strictly speaking necessary.
Unfortunately, it seems like macOS' gcc (which is really clang) ignores this argument.
What else can we do?
I asked Robert Seacord, who always has all the answers; here's what he gave me in return:
Time to open gdb (or lldb) to verify what the compiler has done. (This can be done after compiling with or without -O1, -O2 or -O3, the different levels of optimization.)
Let's write a small program that uses this code and debug it:
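The exact program isn't reproduced here, but a minimal stand-in looks like this (the erase_from_memory name and the volatile-pointer trick are my assumption of the technique being tested):

#include <stdio.h>
#include <stddef.h>

/* write zeros through a volatile-qualified pointer so the stores
   cannot be considered dead and removed by the optimizer */
static void erase_from_memory(void *buf, size_t len) {
    volatile unsigned char *p = buf;
    while (len--) {
        *p++ = 0;
    }
}

int main() {
    char secret[] = "hello";
    printf("%s\n", secret);
    erase_from_memory(secret, sizeof(secret));
    return 0;
}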
Is this it? Let's put a breakpoint on the first one and see what the stack pointer (rsp) is pointing to.
It's pointing to the string "hello" as we guessed.
Going to the next instruction via ni, we can then see that the first letter h has been removed. Going over the next instructions, we see that the full string ends up being zeroed.
It's a success!
The full code can be seen here as an erase_from_memory.h header file that you can just include in your codebase:
EDIT: As Colin Percival wrote here, this problem is far from being solved. Secrets can get copied around in (special) registers which won't allow you to easily remove them.
In spite of the obvious controversy of launching a new crypto library, I really like it. Note that this is not me officially endorsing the library, I just think it's cool and I would only consider using it after it had matured a bit more.
The blog post mentions a few bugs that were found in his library (and I appreciate how open he is about it). Here's an interesting one:
Bug 5: signed integer overflow
This one was sneaky. I wouldn't have caught it without UBSan.
I was shifting a uint8_t, 24 bits to the left. I failed to realise that integer promotion means this unsigned byte would be converted to a signed integer, and overflow if the byte exceeded 127. (Also, on crazy platforms where integers are smaller than 32 bits, this would never have worked.) An explicit conversion to uint32_t did the trick.
At this point, I was running the various sanitisers just to increase confidence. Since I used Valgrind already, I didn't expect to actually catch a bug. Good thing I did it anyway.
Lesson learned: Never try anything serious in C or C++ without sanitisers. They're not just for theatrics, they catch real bugs.
And all the theory behind the problem could have been avoided if the code had been written with a bit more care. When I see something like this, the first thing I think is that it should probably be written like this:
result = (uint32_t)byte << 8 * i; // result is a uint32_t, byte is a uint8_t
This would avoid any weird C problem, as an explicit conversion (especially to a bigger type) usually goes fine.
OK but what was the problem with the above code?
Well, in C some operations will usually promote the type to something bigger. See the C standard:
shift-expression << additive-expression
The integer promotions are performed on each of the operands
If an int can represent all values of the original type, the value is converted to an int;
otherwise, it is converted to an unsigned int.
These are called the integer promotions
So looking back at our bad snippet:
result = byte << 8 * i; // result is a uint32_t, byte is a uint8_t
1. The maximum value of a uint8_t is 255, which easily fits in a signed int of 16 or 32 bits (depending on the architecture). So 01 is promoted to 00 00 00 01 if a signed int is 32-bit (which it probably is). (Had we been dealing with a uint32_t, there would have been no problem, as "big" values that cannot be represented in a signed 32-bit int would have been promoted to an unsigned int instead of a signed int.)
2. The bits are shifted to the left, for example by 8 places: 00 00 01 00.
3. The result gets converted to the destination type. We still get 00 00 01 00.
This doesn't look like an issue, and most of the time it isn't. Now imagine that in 1. our value was 80 (which is 1000 0000 in bits).
Imagine now that in 2. we shift it left by 24 bits: that gives us 80 00 00 00, an all-zero bitstring except for the most significant bit (MSB). In an int, the MSB is the sign bit, so we have just overflowed a signed integer, which is undefined behavior (and exactly what UBSan complained about). In practice, the value then gets sign extended when it is moved into a wider register or variable, so on a 64-bit machine it ends up as ff ff ff ff 80 00 00 00.
Now in 3. the result gets converted to an unsigned type, which doesn't change the bits, only how they are interpreted. If that type is a uint64_t (as in the program below), we now have a wrong result! What we wanted was 00 00 00 00 80 00 00 00. If you're not convinced, you can run the following program on your computer:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main() {
    uint8_t start = -1;
    printf("%x\n", start); // prints ff

    uint64_t result = start << 24; // start is promoted to a signed int before the shift
    printf("%016" PRIx64 "\n", result); // we'd want 00000000ff000000, but this prints ffffffffff000000

    result = (uint64_t)start << 24; // converting explicitly before the shift fixes it
    printf("%016" PRIx64 "\n", result); // prints 00000000ff000000
    return 0;
}
Looking at the binary in Hopper we can see this:
And we notice the movsxd instruction which is "move doubleword to quadword with sign-extension".
It moves the result of the shift left (shl) into a 64-bit register, sign-extending it so that it represents the same value as an int64_t, the widest type your register can hold.
If you don't know about length extension attacks, it is a very simple and straightforward attack that lets you forge a new hash by extending another one, pretending that the hashing had not been finalized.
The attack targets hashes of the form SHA-256(key | message), where the key is secret and where | means concatenation.
This works because a SHA-2 hash (unless we're talking about the truncated versions) is literally a full copy of the internal state of the hash. And it is not the state after hashing just key and message, but rather key, message and some padding, because, like everything in the symmetric crypto world, the input needs to be padded up to the block size, which is 512 bits for SHA-256 (and 1024 bits for SHA-512).
The attack lets you take such a hash, and continue the hashing to obtain the hash of key | message | padding | more where more is whatever you want. And all of this without any knowledge of the secret key!
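To see the mechanics without pulling in a real SHA-256 implementation, here is a toy Merkle-Damgård hash written for this post (the compression function, the constants and all the names are made up; it is only meant to show the structure) that can be extended in exactly the same way:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLOCK 16                     /* toy block size, in bytes */
#define IV 0x0123456789abcdefULL     /* toy initialization vector */

/* toy compression function: mixes a 16-byte block into a 64-bit state */
static uint64_t compress(uint64_t state, const uint8_t block[BLOCK]) {
    for (int i = 0; i < BLOCK; i++) {
        state ^= (uint64_t)block[i] << ((i % 8) * 8);
        state *= 0x9e3779b97f4a7c15ULL;
        state = (state << 13) | (state >> 51);
    }
    return state;
}

/* absorb full blocks, starting from a given chaining value */
static uint64_t absorb(uint64_t state, const uint8_t *data, size_t len) {
    for (size_t i = 0; i + BLOCK <= len; i += BLOCK)
        state = compress(state, data + i);
    return state;
}

/* Merkle-Damgard style padding: 0x80, zeros, then the total bit length */
static size_t pad(uint8_t *out, uint64_t total_len) {
    size_t n = 0;
    out[n++] = 0x80;
    while ((total_len + n) % BLOCK != BLOCK - 8) out[n++] = 0x00;
    uint64_t bits = total_len * 8;
    for (int i = 7; i >= 0; i--) out[n++] = (uint8_t)(bits >> (i * 8));
    return n;
}

/* the hash output is literally the final chaining value */
static uint64_t toy_hash(const uint8_t *msg, size_t len) {
    uint8_t buf[256];
    memcpy(buf, msg, len);
    return absorb(IV, buf, len + pad(buf + len, len));
}

int main() {
    const char key[]  = "secret-key";    /* unknown to the attacker */
    const char msg[]  = "user=guest";    /* known to the attacker   */
    const char more[] = "&user=admin";   /* the attacker's addition */
    size_t keylen = sizeof(key) - 1, msglen = sizeof(msg) - 1, morelen = sizeof(more) - 1;

    /* the victim publishes tag = H(key | msg) */
    uint8_t km[64];
    memcpy(km, key, keylen);
    memcpy(km + keylen, msg, msglen);
    uint64_t tag = toy_hash(km, keylen + msglen);

    /* the attacker only knows tag, msg and the key's length:
       1. reconstruct the padding the victim's hash must have used */
    uint8_t glue[2 * BLOCK];
    size_t gluelen = pad(glue, keylen + msglen);

    /* 2. resume from the published tag: absorb "more" followed by the
          padding for the total length of key | msg | glue | more */
    uint8_t tail[64];
    memcpy(tail, more, morelen);
    size_t taillen = morelen + pad(tail + morelen, keylen + msglen + gluelen + morelen);
    uint64_t forged = absorb(tag, tail, taillen);

    /* 3. check that the forgery matches the honest hash of the extended message */
    uint8_t full[128];
    size_t n = 0;
    memcpy(full + n, key, keylen);   n += keylen;
    memcpy(full + n, msg, msglen);   n += msglen;
    memcpy(full + n, glue, gluelen); n += gluelen;
    memcpy(full + n, more, morelen); n += morelen;

    printf("forged: %016llx\n", (unsigned long long)forged);
    printf("real:   %016llx\n", (unsigned long long)toy_hash(full, n));
    return 0;
}

Both lines print the same value: the tag forged without the key matches the honest hash of key | message | padding | more.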
Interestingly, this comes from the way the Merkle-Damgård construction is applied (without a good finalization function). Because of this, hash functions like MD4, MD5, SHA-1 and SHA-2 have all suffered from the same issue. You'll be glad to hear that this is fixed in the SHA-3 contestants (read: BLAKE2, SHAKE and SHA-3 are fine). Keccak (SHA-3's winner) fixes it by using a sponge construction, never letting you see a big part of the state (the capacity), while BLAKE2 fixes it by using the HAsh Iterative FrAmework (HAIFA), which feeds a "number of bits hashed so far" counter (not including the padding) into the compression function.
While trying to find the exact date length extension attacks were discovered (which I couldn't), Samuel Neves came up with an interesting response.
It looks like the NIST was made aware, during the standardization process of SHA-2, that simple fixes would prevent length extension attacks.
This comment from John Kelsey (who later joined the NIST) is from 28 August 2001 (by the way, it doesn't make sense to write dates as month/day/year; nobody outside of the US can understand it, and we have an ISO format that specifies a logical year-month-day). In it, he talks about the attack and proposes a simple fix:
Niels Ferguson suggested the following simple fix to me, some time ago: Choose some nonzero constant C0, of the same size as the hash function chaining variable. Hash messages normally, until we come to the last block in the padded message. XOR C0 into the chaining variable input into that last compression function computation. The resulting compression function output is used as the hash result. For concreteness, I propose C0 = 0xa5a5...a5, with the 0xa5 repeated until every byte is filled in. This should be interpreted in little-endian bit ordering.
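For what it's worth, plugged into the toy hash from the sketch above (this reuses its compress, pad, BLOCK and IV helpers, so it is not standalone), Ferguson's fix boils down to a couple of lines:

#define C0 0xa5a5a5a5a5a5a5a5ULL  /* the constant proposed in the comment */

/* same as toy_hash, except C0 is XORed into the chaining value going into
   the last compression call. An attacker resuming from the published tag
   would be starting from the finalized value, which is not the chaining
   value the honest hash has at that point, so the extension no longer
   lines up. */
static uint64_t toy_hash_fixed(const uint8_t *msg, size_t len) {
    uint8_t buf[256];
    memcpy(buf, msg, len);
    size_t padded = len + pad(buf + len, len);

    uint64_t state = IV;
    for (size_t i = 0; i + BLOCK <= padded; i += BLOCK) {
        if (i + BLOCK == padded)
            state ^= C0;          /* flag the final block */
        state = compress(state, buf + i);
    }
    return state;
}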
Why did the NIST ignore this when it could have modified the draft before publication? I have no idea. Is this one more fuck-up on their part?