david wong

Hey! I'm David, a security engineer on the Blockchain team at Facebook, and previously a security consultant for the Cryptography Services team of NCC Group. This is my blog about cryptography, security, and other related topics that I find interesting.

TLS 1.3 is out! posted August 2018

TLS 1.3 has been released as RFC 8446. It took 28 drafts and more than 4 years since draft 0 to come out. Cloudflare has a long blog post about it. Some questions about the deployment of 1.3:

  • Will we see a fast deployment of the protocol? It seems like browsers are ready, but web servers will have to follow.
  • Who will use 0-RTT? I'm expecting the big players to use it (largely because they've been requesting it) but what about the small ones?
  • Are we going to see vulnerabilities in the protocol? It seems highly unlikely; TLS 1.2 itself (with AES-GCM) has remained solid for more than 10 years.
  • Are we going to see vulnerabilities in the implementations? We will see about that. If anything happens, I'm expecting it to happen around 0-RTT, PSKs and key exports. But let's hope that libraries have learned their lessons.
  • Is BearSSL going to implement TLS 1.3? It sounds like it.

WhatsApp, Secure Messaging, Transcript Consistency and Trust in a group chat posted August 2018

Someone wrote a blogpost about man-in-the-middling WhatsApp.

First, there is nothing new in being able to man-in-the-middle and decrypt your own TLS sessions (+ a simple protocol on top). Sure, the tool is neat, but it is not breaking WhatsApp in this regard; it merely allows you to look at (and modify) what you're sending to the WhatsApp server.

The blog post goes through some interesting ways to mess with a WhatsApp group chat, as it seems that the application relies in part on metadata that you control. This is bad hygiene, but for me the interesting attack is attack number 3: you can send messages to SOME members of the group, and send different messages to OTHER members of the group.

At first I thought: this is nothing new. If you read the WhatsApp whitepaper, it is a clear limitation of the protocol: you do not have transcript consistency. By that I mean that nothing cryptographically enforces that all members of a group chat are seeing the exact same thing.

Of course, it is always hard to ensure that the last messages have been seen by everyone (some people might be offline), but transcript consistency really only cares about the ordering, dropping, and tampering of messages.

Let's talk about WhatsApp some more. Its protocol is very different from what Signal does: in group chats, each member shares their own symmetric key with the other members of the group (separately). This means that when you join a group with Alice and Bob, you first create some random symmetric key. After that, you encrypt it under Alice's public key and send it to her. You then do the same thing with Bob. Once all the members have knowledge of your random symmetric key, you can encrypt all of your messages with it (perhaps using a ratchet). When a member leaves, you have to go through this dance again in order to provide forward secrecy to the group (leavers won't be able to read messages anymore). If you've followed so far, you can see that the protocol does not really give you a way to enforce transcript consistency: you are in control of the keys, so you choose who you encrypt what messages to.
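To make that dance concrete, here is a deliberately simplified sketch in Python, using the pyca/cryptography package. It is not the actual WhatsApp or Signal wire protocol: the names are made up, and the step where your sender key is encrypted under each member's public key (through pairwise sessions) is reduced to a direct hand-over.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class Member:
    def __init__(self, name):
        self.name = name
        self.sender_key = AESGCM.generate_key(bit_length=256)  # my random symmetric "sender key"
        self.known_keys = {}  # member name -> that member's sender key

    def join(self, group):
        # In the real protocol my sender key would travel encrypted under each
        # member's public key; here the "network" is skipped entirely.
        for other in group:
            other.known_keys[self.name] = self.sender_key
            self.known_keys[other.name] = other.sender_key
        group.append(self)

    def send(self, plaintext):
        nonce = os.urandom(12)
        return self.name, nonce, AESGCM(self.sender_key).encrypt(nonce, plaintext, None)

    def receive(self, sender, nonce, ciphertext):
        return AESGCM(self.known_keys[sender]).decrypt(nonce, ciphertext, None)

group = []
alice, bob, charlie = Member("alice"), Member("bob"), Member("charlie")
for member in (alice, bob, charlie):
    member.join(group)

# I hold my own sender key, so nothing stops me from encrypting one message for Bob
# and a completely different one for Charlie: that is the missing transcript consistency.
print(bob.receive(*alice.send(b"the meeting is at noon")))
print(charlie.receive(*alice.send(b"the meeting is cancelled")))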

But wait! Normally, the server should distribute the messages in a fan-out way (the server distributes one encrypted message to X participants), forcing you to collude with the WhatsApp server in order to perform this kind of shenanigans. In the blog post's attack it seems like you are able to bypass this and do not need the help of WhatsApp's servers. This is bad and I'm still trying to figure out what really happened.

By the way, to my knowledge no end-to-end encrypted protocol has this property of transcript consistency for group chats. Interestingly, Messaging Layer Security (MLS), the latest community effort to standardize a messaging protocol, does not have a solution for this either. I'll probably talk about MLS in a different blog post because it is very interesting.

The last thing I wanted to mention is trust inside of a group chat. We've been trying to solve trust in a one-to-one conversation for many many years, and between PGP being broken and the many wars between the secure messaging applications, it seems like this is still something we're struggling with. Just yesterday, a post titled I don't trust Signal made the front page on hackernews. So is there hope for trust in a group chat anytime soon?

First, there are three kinds of group chat:

  • large group chats
  • medium-sized group chats
  • small group chats

I'll argue that large group chats have given up on trust, as it is next to impossible to figure out who is who. Unless, of course, we're dealing with a PKI and a company enforcing onboarding with a CA. And even this has issues (beyond the traitors and snoops).

I'll also argue that small group chats are fine with the current protocols, because you're probably trusting people not to run this kind of attack.

The problem is in medium-sized group chats.


QUIC Crypto and simple state machines posted August 2018

If you don't know about QUIC, go read the excellent Cloudflare post about it. If you're lazy, just think about it as:

  • Google wanted to improve TCP (2.0™️)
  • but TCP can't really be changed
  • so they built it on top of UDP (which is just IP with ports, check the 2 page RFC for UDP if you don't believe me)
  • they made it with encryption by default
  • and they called it QUIC, because it's quick, you know

There is more to it (it makes HTTP blazing fast with multiplexed streams and all), but I'm only interested in the crypto here.

Google QUIC's (or gQUIC) default encryption was provided by a home-made crypto protocol called QUIC Crypto. The thing is documented in a 14-page doc file and is more or less up-to-date. It was at some point agreed that things needed to get standardized, and thus the process of making QUIC an RFC (or RFCs) began.

Unfortunately QUIC Crypto did not make it and the IETF decided to replace it with TLS 1.3 for diverse reasons.

Why "Unfortunately" do you ask?

Well, as Adam Langley puts it in some of his slides, the protocol was dead simple:

[image: QUIC Crypto slide]

While the protocol had some flaws, in the end it was still a beautiful and elegant protocol. At its core was an extremely straightforward and linear state machine, summed up by this diagram:

[diagram: QUIC Crypto state machine]

A few things to help you read it:

  • a server config is just a blob that contains the server's current semi-ephemeral keys. The server config is rotated every X days.
  • an inchoate client hello is just an empty client hello, which prompts the server to send a REJ(ect) message containing its latest config (after that the client can try again with a full client hello)
  • SHLO is an (encrypted) server hello which contains ephemeral keys

As you can see, there isn't much going on: if you know the server's keys you can do some 0-RTT magic; if you don't, you request the keys and start the handshake again.
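To make the diagram concrete, here is a minimal sketch of that client-side state machine in Python; the state and event names are my own reading of the diagram, not actual gQUIC code.

from enum import Enum, auto

class State(Enum):
    START = auto()
    WAIT_REJ = auto()     # sent an inchoate CHLO, waiting for the server config
    WAIT_SHLO = auto()    # sent a full CHLO (0-RTT data may already be in flight)
    ESTABLISHED = auto()  # SHLO received: switch to the forward-secure ephemeral keys

# (state, event) -> next state; anything else is a protocol error.
TRANSITIONS = {
    (State.START, "have_server_config"): State.WAIT_SHLO,  # full CHLO right away, 0-RTT possible
    (State.START, "no_server_config"): State.WAIT_REJ,     # inchoate CHLO to fetch the config
    (State.WAIT_REJ, "recv_REJ"): State.WAIT_SHLO,         # got the config, retry with a full CHLO
    (State.WAIT_SHLO, "recv_SHLO"): State.ESTABLISHED,
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"unexpected {event} in state {state}")

# A client that has never talked to this server before:
state = State.START
for event in ("no_server_config", "recv_REJ", "recv_SHLO"):
    state = step(state, event)
print(state)  # State.ESTABLISHED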

Compare that to the state machine of TLS 1.3:

[diagram: TLS 1.3 state machine]

In the end, TLS 1.3 is a solid protocol, but I'd like to see more experimentation here instead of just relying on TLS. Version 1.3 is built on top of numerous previous failed versions, which means a great amount of complexity due to legacy, and a multitude of use cases and extensions it needs to support. Simpler protocols should be better: simple state machines make for better analysis and more secure implementations. Just look at the Noise protocol framework, its 1k LOC implementations, and its symbolic proofs done with ProVerif and Tamarin. Actually, why haven't we started using Noise for everything?


About Bitcoin Transactions posted August 2018

Did you know that a bitcoin transaction does not have a recipient field?

That's right! When crafting a transaction to send money on the bitcoin network, you do not actually include "I am sending my BTC to this address". Instead, you include a script called a ScriptPubKey, which dictates a set of inputs that are allowed to redeem the monies. The PubKey in the name surely refers to the main use for this field: to actually let a unique public key redeem the money (the intended recipient). But that's not all you can do with it! There exists a multitude of ways to write ScriptPubKeys! You can, for example:

  • not allow anyone to redeem the BTCs, and even use the transaction to record arbitrary data on the blockchain (this is what a lot of applications built on top of bitcoin do: they "burn" bitcoins in order to create metadata transactions in their own blockchains)
  • allow someone who has a password to use the BTCs (but to submit the password, you would need to include it in the clear inside a transaction, which would inevitably be advertised to the network before actually getting mined; this is dangerous)
  • allow a subset of signatures from a fixed set of public keys to redeem the BTCs (this is what we call multi-sig transactions)
  • allow someone who can break a hash function (SHA-1) to redeem the BTCs (this is what Peter Todd did in 2013)
  • only allow the BTCs to be redeemed after some time in the future (via a timestamp)
  • etc.

On the other hand, if you want to use the money, you need to prove that you can use such a transaction's output. For that, you include a ScriptSig in a new transaction: another script that runs and creates a number of inputs to be used by the ScriptPubKey I talked about. And you guessed it, in our prime use case this will include a signature (the Sig in the name)!

Recap: when you send BTCs, you actually send them to whoever can give a correct input (created by a ScriptSig) to your program (the ScriptPubKey). In more detail, a Bitcoin transaction includes a set of input BTCs to spend and a set of output BTCs that are now redeemable by whoever can provide a valid ScriptSig. That's right: a transaction actually collects money from many previous transactions, and spreads it into possibly multiple pockets of money that other transactions can use. Each input of a transaction is associated with a previous transaction's output, along with the ScriptSig to redeem it. Each output is associated with a ScriptPubKey. By the way, an output that hasn't been spent yet is called a UTXO, for unspent transaction output.
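As a sketch, the structure described above boils down to something like the following (the field names are simplified; this is not the actual Bitcoin serialization format):

from dataclasses import dataclass
from typing import List

@dataclass
class TxInput:
    prev_txid: str           # which previous transaction we are spending from
    prev_output_index: int   # which of its outputs (the UTXO being consumed)
    script_sig: bytes        # the inputs fed to that output's ScriptPubKey

@dataclass
class TxOutput:
    amount: int              # in satoshis
    script_pubkey: bytes     # the program a future spender will have to satisfy

@dataclass
class Transaction:
    inputs: List[TxInput]    # collect money from previous outputs...
    outputs: List[TxOutput]  # ...and spread it into new spendable pockets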

The scripting language of Bitcoin is actually quite limited and easy to learn. It uses a stack and must return True at the end. The limitations actually bothered some people, who thought it might be interesting to create something more Turing-complete, and thus Ethereum was born.
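Here is a toy evaluation of a simplified pay-to-pubkey-hash-style script, just to show the stack mechanics; the opcodes and the signature check below are stand-ins, not the real Bitcoin Script interpreter (which, among other things, hashes with RIPEMD-160(SHA-256(x)) and verifies a real ECDSA signature over the spending transaction).

import hashlib

def op_dup(stack):
    stack.append(stack[-1])

def op_hash(stack):
    # stand-in for OP_HASH160; real Bitcoin uses RIPEMD-160(SHA-256(x))
    stack.append(hashlib.sha256(stack.pop()).digest())

def op_equalverify(stack):
    if stack.pop() != stack.pop():
        raise ValueError("public key does not match the expected hash")

def op_checksig(stack):
    pubkey, signature = stack.pop(), stack.pop()
    # the real interpreter verifies an ECDSA signature here
    stack.append(signature == b"a valid signature from " + pubkey)

pubkey = b"the recipient's public key"
expected_hash = hashlib.sha256(pubkey).digest()  # what the ScriptPubKey committed to

stack = []
# ScriptSig: push the spender's signature and public key.
stack.append(b"a valid signature from " + pubkey)
stack.append(pubkey)
# ScriptPubKey: DUP, HASH, push the expected hash, EQUALVERIFY, CHECKSIG.
for op in (op_dup, op_hash, lambda s: s.append(expected_hash), op_equalverify, op_checksig):
    op(stack)

print("redeemable:", stack == [True])  # the script must end with True on the stack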


CryptoMag is looking for articles posted July 2018

Hey you!

You want to teach someone about a crypto concept, something 101 that could be explained in 1-2 pages with a lot of diagrams? Look no more, we need you.

Concept

The idea is to have a recurring, volunteer-driven e-magazine (like POC||GTFO) that focuses on:

  • cryptography: duh! That being said, cryptography does include implementations, cryptocurrencies, protocols, cryptography at scale, politics, etc., so there are more topics that we deem interesting than just theoretical cryptography.
  • pedagogy: heaps of diagrams and a focus on teaching. An original writing style is a plus; we're not looking to bore readers.
  • 101: we're looking for introductions to concepts, not deeply technical articles that require a lot of initial knowledge to grasp.
  • short: articles should be similar to a blog post, not a full-fledged paper. With that in mind, articles should be around 1 to 3 pages. We are not looking for something dense though, so no posters; rather, a submission should be a light read that can be part of a series or influence the reader to read more about the topic.

Topics

Preferably, authors should write about something they are familiar with, but here is a list of topics that would likely be interesting for such a light magazine:

  • what is SSH?
  • what is SHA-3?
  • what is functional encryption?
  • what is TLS 1.3?
  • what is a linear differential attack?
  • what is a cache attack?
  • how does LLL work?
  • what are common crypto implementation tricks?
  • what is R-LWE?
  • what is a hash-based signature?
  • what is an RFC?
  • what is the IETF?
  • what is the IACR?
  • why are companies encrypting databases?
  • what are X.509, .pem, ASN.1 and base64?
  • etc...

Format

LaTeX if possible.

Deadline

No deadline at the moment.

How to submit

Send me a Dropbox link or something on the contact page; you can also send it to me via Twitter.

PS: I am going to annoy you if you don't use diagrams in your article


Decentralized Application Security Project posted April 2018

Last month I was in Singapore with Mason to talk about vulnerabilities in Ethereum smart contracts at Black Hat Asia. As part of the talk we released the DASP, a top 10 of the most damaging or surprising security vulnerabilities that we have observed in the wild or in private during audits we perform as part of our jobs.

[image: DASP]

The page is on github as well, and we welcome contributions to the top 10 and to the list of known exploits. In addition, we're looking to host more projects related to the Ethereum space there; if you are looking for research projects, or want to contribute to tools or anything that can make smart contract development more secure, file an issue on github!

Note that I will be giving the talk again at IT Camp in Cluj-Napoca in a few months.


On Real World Crypto and Secure Messaging posted January 2018

Paul Rösler and Christian Mainka and Jörg Schwenk released More is Less: On the End-to-End Security of Group Chats in Signal, WhatsApp, and Threema in July 2017.

Today Paul Rösler came to Real World Crypto to talk about the results, which is a good thing. Interestingly, in the middle of the talk Wired released a worrying article entitled WhatsApp Security Flaws Could Allow Snoops to Slide Into Group Chats.
Interestingly as well, at some point during the day Matthew Green also wrote about it in Attack of the Week: Group Messaging in WhatsApp and Signal.

They make it seem really worrisome, but should we really be scared about the findings?

Traceable delivery is the first thing that came up in the presentation. What is it? It's the check marks that appear when your recipient receives a message you sent. It's mostly a UI feature, but the fact that no security is tied to it allows a server to fake them while dropping messages, making you wrongly believe that your recipient has received the message. This was never a security feature to begin with, and nobody ever claimed it was one.

Closeness is the fact that the WhatsApp servers can add a new participant to your private group chat without your consent (even assuming you're the admin). This could lead people to share messages with the group, including with a rogue participant. The caveats are that:

  • previous messages cannot be decrypted by the newcomer because a new key is generated when someone new joins the mix

  • everybody receives a notification that somebody joined; at this point everyone can choose whether to willingly send messages to the group

Again, I do not see this as a security vulnerability. Maybe because I’ve understood how group chats can work (or miswork) from growing up with shady websites and applications. But I see this more as a UI/UX problem.

The paper is not bad though, and I think they're right to point out these issues. Actually, they do something very interesting in it: they start it off with a nice security model that they use to analyse several messaging applications:

Intuitively, a secure group communication protocol should provide a level of security comparable to when a group of people communicates in an isolated room: everyone in the room hears the communication (traceable delivery), everyone knows who spoke (authenticity) and how often words have been said (no duplication), nobody outside the room can either speak into the room (no creation) or hear the communication inside (confidentiality), and the door to the room is only opened for invited persons (closeness).

Following this security model, you could rightfully think that we haven’t reached the best state in secure messaging. But the fuss about it could also wrongfully make you think that these are worrisome attacks that need to be dealt with.

The facts are here though: this paper has been blown out of proportion. Moxie (one of the creators of Signal) reacted on hackernews:

To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not.

I'd say the problem is in the reaction, not in the published analysis. But it's a sad reaction indeed.

Good night.


Updates on How to Backdoor Diffie-Hellman posted January 2018

Early in 2016, I published a whitepaper (here on eprint) on how to backdoor the Diffie-Hellman key agreement algorithm. Inside the whitepaper, I discussed three different ways to construct such a backdoor; two of these were considered nobody-but-us (NOBUS) backdoors.

A NOBUS backdoor is a backdoor accessible only to those who have knowledge of some secret (a number, a passphrase, ...). This makes a NOBUS backdoor irreversible without knowledge of the secret.

In October 2016, Dorey et al. from Western University (Canada) published a white paper called Indiscreet Logs: Persistent Diffie-Hellman Backdoors in TLS. The research pointed out that one of my NOBUS constructions was reversible, while the other NOBUS construction was more dangerous than expected.

I wrote this blogpost summarizing their discoveries a long time ago, but never took the time to publish it here. In the rest of this post, I'll expect you to have an understanding of the two NOBUS backdoors introduced in my paper. You can find a summary of the ideas here as well.

Reversing the first NOBUS construction

For those who have attended my talk at Defcon, Toorcon or a meetup, I should assure you that I did not talk about the first (now known to be reversible) NOBUS construction. It was left out of the story because it was not such a nice backdoor in the first place: its security margins were weaker (at the time) compared to the second construction, and it was also harder to implement.

Baby-Step Giant-Step

The attack Dorey et al. wrote about comes from a 2005 white paper, where Coron et al. published an attack on a construction based on Diffie-Hellman. But before I can tell you about the attack, I need to refresh your memory on how the baby-step giant-step (BSGS) algorithm works.

Imagine that a generator \(g\) generates a group \(G\) in \(\mathbb{Z}_p\), and that we want to find the order of that group \(|G| = p_1\).

Now, if we have a good idea of the size of that order \(p_1\), what we could do is split that length in two right in the middle: \(p_1 = a + b \cdot 2^{\lceil \frac{l}{2} \rceil}\), where \( l \) is the bitlength of \(p_1\).

This allows us to write two different lists:

\[ \begin{cases} L = \{ g^i \mod{p} \mid 0 < i < 2^{\lceil \frac{l}{2} \rceil} \} \\ L' = \{ g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil} } \mod{p} \mid 0 \leq j < 2^{\lceil \frac{l}{2} \rceil} \} \end{cases} \]

Now imagine that you compute these two lists, and that you then stumble upon a collision between elements from these two sets. This would entail that for some \(i\) and \(j\) you have:

\[ \begin{align} &g^i = g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}} \pmod{p}\\ \Leftrightarrow &g^{i + j \cdot 2^{\lceil \frac{l}{2} \rceil}} = 1 \pmod{p}\\ \Rightarrow &i + j \cdot 2^{\lceil \frac{l}{2} \rceil} = a + b \cdot 2^{\lceil \frac{l}{2} \rceil} = p_1 \end{align} \]

We found \(p_1\) in time quasi-linear in \(\sqrt{p_1}\) (via sorting, search trees, etc.)!
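Here is a minimal sketch of that search in Python, on toy parameters made up for illustration (real parameters are of course enormous):

p  = 419                       # toy prime with p - 1 = 2 * 11 * 19
p1 = 11                        # the "hidden" subgroup order we want to recover
g  = pow(2, (p - 1) // p1, p)  # generator of the subgroup of order p1
l  = p1.bit_length()           # in the attack we only assume a rough idea of this size
m  = 1 << ((l + 1) // 2)       # 2^ceil(l/2)

# Baby steps: g^i mod p for 0 < i <= m, stored in a lookup table.
baby = {pow(g, i, p): i for i in range(1, m + 1)}

# Giant steps: g^(-j*m) mod p for 0 <= j < m; a collision gives p1 = i + j*m.
g_inv_m = pow(g, -m, p)        # modular inverse, Python 3.8+
giant = 1
for j in range(m):
    if giant in baby:
        print("recovered order:", baby[giant] + j * m)  # prints 11
        break
    giant = giant * g_inv_m % p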

The Construction

Now let's review our first NOBUS construction, detailed in section 4 of my paper.

[diagram: the first NOBUS construction]

Here \(p - 1 = 2 p_1 p_2 \), with \( p_1 \) the order of our small-enough subgroup generated by \(g\) in \(\mathbb{Z}_p\), and \(p_2\) the order of a big-enough subgroup that makes the factorization of our modulus near-impossible. The factor \(q\) is generated in the same way.

Using BSGS on our construction

At this point, we could try to reverse the construction using BSGS by creating these two lists and hoping for a collision:

\[ \begin{cases} L = \{ g^i \mod{p} \mid 0 < i < 2^{\lceil \frac{l}{2} \rceil} \} \\ L' = \{ g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil} } \mod{p} \mid 0 \leq j < 2^{\lceil \frac{l}{2} \rceil} \} \end{cases} \]

Unfortunately, remember that \(p\) is hidden inside of \( n = p q \). We have no knowledge of that factor. Instead, we could calculate these two lists:

\[ \begin{cases} L = \{ g^i \mod{n} \mid 0 < i < 2^{\lceil \frac{l}{2} \rceil} \} \\ L' = \{ g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil} } \mod{n} \mid 0 \leq j < 2^{\lceil \frac{l}{2} \rceil} \} \end{cases} \]

And this time, we can test for a collision between two elements of these lists "mod \(p\)" via the \(gcd\) function:

\[ gcd(n, g^i - g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}}) \]

Hopefully this will yield \(p\), one of the factors of \(n\). If you do not understand why, it works because if \(g^i\) and \(g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}}\) collide "mod \(p\)", then we have:

\[ p | g^i - g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}} \]

Since we also know that \( p | n \), it follows that the \(gcd\) of the two returns our hidden \(p\)!
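Here is the same idea as a toy sketch with made-up parameters: the attacker only holds \(n\) and \(g\), and a pairwise \(gcd\) between the two lists reveals a factor.

from itertools import product
from math import gcd

p, p1 = 419, 11   # p - 1 = 2 * 11 * 19   (toy sizes, for illustration only)
q, q1 = 599, 13   # q - 1 = 2 * 13 * 23
n = p * q         # n and g are all the attacker gets to see
g = pow(2, ((p - 1) // p1) * ((q - 1) // q1), n)  # order p1 mod p, order q1 mod q

l = p1.bit_length()        # rough size of the small subgroup, assumed known
m = 1 << ((l + 1) // 2)    # 2^ceil(l/2)

baby  = [pow(g, i, n) for i in range(1, m + 1)]  # g^i mod n
giant = [pow(g, -j * m, n) for j in range(m)]    # g^(-j*m) mod n

for gi, gj in product(baby, giant):
    d = gcd(n, (gi - gj) % n)
    if 1 < d < n:
        print("found factor:", d)  # a nontrivial factor of n (419 or 599)
        break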

Unfortunately at this point, the persnickety reader will have noticed that this cannot be done in the same complexity as the original BSGS attack. Indeed, we need to compute the \(gcd\) for all pairs and this increases our complexity to \(\mathcal{O}(p_1)\), the same complexity as the attack I pointed out in my paper.

The Attack

Now here is the trick Coron et al. found. They could optimize the calls to \(gcd\) down to \(\mathcal{O}(\sqrt{p_1})\), which would make reversing the backdoor as easy as using it. The trick is as follows:

  1. Create the polynomial

\[ f(x) = (x - g) (x - g^2) \cdots (x - g^{2^{\lceil \frac{l}{2} \rceil}}) \mod{n} \]

  2. For \(0 \leq j < 2^{\lceil \frac{l}{2} \rceil}\) compute the following \(gcd\) until a factor of \(n\) is found (as before)

\[ gcd(n, f(g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}})) \]

It's pretty easy to see that the \(gcd\) will still yield a factor, as before. Except that this time we only need to call it at most \(2^{\lceil \frac{l}{2} \rceil}\) times, which is \(\approx \sqrt{p_1}\) times by definition.
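Here is the trick as a toy sketch on the same made-up parameters as above. Note that the naive polynomial arithmetic below costs about \(m^2\) multiplications; the actual attack relies on fast polynomial multiplication and multipoint evaluation to stay quasi-linear in \(m = 2^{\lceil \frac{l}{2} \rceil}\).

from math import gcd

p, p1 = 419, 11   # same toy backdoored parameters as in the previous sketch
q, q1 = 599, 13
n = p * q
g = pow(2, ((p - 1) // p1) * ((q - 1) // q1), n)
l = p1.bit_length()
m = 1 << ((l + 1) // 2)

# Step 1: f(x) = (x - g)(x - g^2)...(x - g^m) mod n, as a list of coefficients of x^k.
f = [1]
power = 1
for _ in range(m):
    power = power * g % n
    nxt = [0] * (len(f) + 1)
    for k, c in enumerate(f):
        nxt[k + 1] = (nxt[k + 1] + c) % n   # c * x
        nxt[k] = (nxt[k] - c * power) % n   # c * (-g^i)
    f = nxt

# Step 2: evaluate f at g^(-j*m) mod n and gcd with n: at most m calls instead of m^2.
g_inv_m = pow(g, -m, n)   # modular inverse, Python 3.8+
point = 1
for j in range(m):
    val = 0
    for c in reversed(f):                   # Horner's rule mod n
        val = (val * point + c) % n
    d = gcd(val, n)
    if 1 < d < n:
        print("found factor:", d)           # a nontrivial factor of n
        break
    point = point * g_inv_m % n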

Improving the second NOBUS construction

The second NOBUS backdoor construction received a different treatment. If you do not know how this backdoor works I urge you to first watch my talk on the subject.

Let's ask ourselves the question: what happens if the client and the server do not negotiate an ephemeral Diffie-Hellman key exchange, and instead use RSA or Elliptic Curve Diffie-Hellman to perform the key exchange?

This could be because the client did not list a DHE (ephemeral Diffie-Hellman) cipher suite in priority, or because the server decided to pick a different kind of key agreement algorithm.

If this is the case, we would observe an exchange that we could not spy on or tamper with via our DHE backdoor.

Dorey et al. discovered that an active man-in-the-middle could change that by tampering with the original client's ClientHello message to single-out a DHE cipher suite (removing the rest of the non-DHE cipher suites) and forcing the key exchange to happen by way of the Diffie-Hellman algorithm.

This works because there are no countermeasures in TLS 1.2 (or prior) to prevent this from happening.

Final notes

My original white paper has been updated to reflect Dorey et al.'s developments while minimally changing its structure (to retain chronology of the discoveries). You can obtain it here.

Furthermore, let me mention that the new version of TLS —TLS 1.3— will fix all of these issues in two ways:

  • A server now signs the entire observed transcript at some point during the handshake. This successfully prevents any tampering with the ClientHello message as the client can verify the signature and make sure that no active man-in-the-middle has tampered with the handshake.
  • Diffie-Hellman groups are now specified, exactly like how curves have always been specified for the Elliptic Curve variant of Diffie-Hellman. This means that unless you are in control of both the client's and the server's implementations, you cannot force one or the other to use a backdoored group (unless you can backdoor one of the specified groups, which is what happened with RFC 5114).

Best crypto blog posts of 2017 posted December 2017

Hello hello,

Merry Christmas and happy new year. We're done for the year, and so it is time for me to write this blog post (I did the same last year, by the way).

I'll copy verbatim what I wrote last year about what makes a good blog post:

  • Interesting. I need to learn something out of it, whatever the topic is. If it's only about results I'm generally not interested.
  • Pedagogical. Don't dump your unfiltered knowledge on me, I'm dumb. Help me with diagrams and explain it to me like I'm 5.
  • Well written. I can't read boring. Bonus point if it's funny :)

Without further ado, here is the list!

That's it!

Have I missed something? Please tell me in the comments.

If you want more links like these, be sure to subscribe to my link section here on this website.

See you in 2018!


SHAKE and SP 800-185 posted December 2017

I've talked about the SHA-3 standard FIPS 202 quite a lot, but haven't talked too much about the second function the standard introduces: SHAKE.

[image: FIPS 202]

SHAKE is not a hash function, but an Extendable-Output Function (or XOF). It behaves like a normal hash function except for the fact that it produces an "infinite" output: you could decide to generate an output of one million bytes or an output of one byte. Obviously don't do the one-byte output thing, because it's not really secure. The other particularity of SHAKE is that it uses saner parameters, which allow it to achieve its security targets of 128 bits (for SHAKE128) or 256 bits (for SHAKE256). This makes it a faster alternative to SHA-3, while being a more flexible and versatile function.

SP 800-185

SHAKE is intriguing enough that just a year after the standardization of SHA-3, another standard was released from NIST's factory (in 2016): Special Publication 800-185. Inside it, a new customizable version of SHAKE (named cSHAKE) is defined. The novelty: it takes an additional "customization string" as argument. This string can be anything from an empty string to the name of your protocol, but the slightest change will produce entirely different outputs for the same inputs. This customization string is mostly used as domain separation for the other functions defined in the new document: KMAC, TupleHash and ParallelHash. The rest of this blogpost explains what these new functions are for.

KMAC

Imagine that you want to send a message to your good friend Bob. You do not care about encrypting your message, but to make sure that nobody modifies it in transit, you hash it with SHA-256 (the variant of SHA-2 with an output length of 256 bits) and append the hash to the message you're sending.

message || SHA-256(message)

On the other side, Bob detaches the last 256 bits of the message (the hash), and computes SHA-256 himself on the message. If the obtained result is different from the received hash, Bob will know that someone has modified the message.

Does this work? Is this secure?

Of course not, and I hope you know that. A hash function is public; there are no secrets involved. Someone who can modify the message can also recompute the hash and replace the original one with the new one.

Alright, so you might think that doing the following might work then:

message || SHA-256(key || message)

Both you and Bob now share that symmetric key, which should prevent any man-in-the-middle attacker from recomputing that hash.

Do you really think this is working?

Nope, it doesn't. The reason, not always well known, is that SHA-256 (and most variants of SHA-2) is vulnerable to what is called a length-extension attack.

You see, unlike the sponge construction, which releases just a part of its state as the final output, SHA-256 is based on the Merkle–Damgård construction, which outputs the entirety of its state as the final output. If an attacker observes that hash and pretends that the absorption of the input hasn't finished, he can continue hashing and obtain the hash of message || more (pretty much; I'm omitting some details like padding). This would allow the attacker to add more stuff to the original message without being detected by Bob:

message || more || SHA-256(key || message || more)

Fortunately, every SHA-3 candidate (including the winner) was required to be resistant to this kind of attack. Thus, KMAC is a Message Authentication Code leveraging the resistance of SHA-3 to length-extension attacks. The construction HASH(key || message) is now possible, and the simplified idea of KMAC is to perform the following computation:

cSHAKE(custom_string=“KMAC”, input=“key || message”)

KMAC also uses a trick to allow pre-computation of the keyed state: it pads the key up to the block size of cSHAKE. For that reason I would recommend not coming up with your own SHAKE-based MAC construction, but just using KMAC if you need such a function.
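To see the difference in practice, here is a small sketch using Python's standard hashlib. To be clear, this only shows the bare HASH(key || message) idea, not real KMAC (no customization string, no key padding), so reach for an actual KMAC implementation if you need one.

import hashlib, hmac

key = b"a shared secret key"
message = b"pay Bob 10 euros"

# Dangerous with SHA-256: Merkle-Damgard exposes its full internal state,
# so an attacker can length-extend SHA-256(key || message).
tag_sha256 = hashlib.sha256(key + message).hexdigest()

# Fine with SHAKE: the sponge only releases part of its state,
# so the same prefix-key construction is not length-extendable.
tag_shake = hashlib.shake_256(key + message).hexdigest(32)

# With SHA-2, the standard fix is HMAC rather than a prefix key.
tag_hmac = hmac.new(key, message, hashlib.sha256).hexdigest()

print(tag_sha256, tag_shake, tag_hmac, sep="\n")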

TupleHash

TupleHash is a construction allowing you to hash a structure in a non-ambiguous way. In the following example, concatenating the parts of an RSA public key allows you to obtain a fingerprint.

[diagram: fingerprint from a concatenated public key]

A malicious attacker could compute a second public key, reusing the bits of the first one, that would hash to the same fingerprint.

[diagram: two different public keys with the same fingerprint]

Ways to fix this issue are to include the type and length of each element, or just the length, which is what TupleHash does. Simplified, the idea is to compute:

cSHAKE(custom_string=“TupleHash”,
    input=“len_1 || data_1 || len_2 || data_2 || len_3 || data_3 || ..."
)

Where len_i is the length of data_i.
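Here is a toy illustration of both the ambiguity and the length-prefix fix, using plain SHA-3 from hashlib; the encoding below is made up for the example and is not the actual TupleHash encoding from SP 800-185.

import hashlib, struct

def naive_fingerprint(*parts):
    return hashlib.sha3_256(b"".join(parts)).hexdigest()

def length_prefixed_fingerprint(*parts):
    encoded = b"".join(struct.pack(">Q", len(p)) + p for p in parts)  # len_i || data_i
    return hashlib.sha3_256(encoded).hexdigest()

# Two different (e, N) pairs whose parts concatenate to the same bytes:
print(naive_fingerprint(b"\x01\x00\x01", b"\xca\xfe"))            # e = 010001, N = cafe
print(naive_fingerprint(b"\x01\x00", b"\x01\xca\xfe"))            # e = 0100,   N = 01cafe -> same hash!

print(length_prefixed_fingerprint(b"\x01\x00\x01", b"\xca\xfe"))  # now the two fingerprints...
print(length_prefixed_fingerprint(b"\x01\x00", b"\x01\xca\xfe"))  # ...differ, as they should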

ParallelHash

ParallelHash makes use of a tree-hashing construction to allow faster processing of big inputs and large files. The input is first divided into several chunks of B bytes (where B is an argument of your choice); each chunk is then separately hashed with cSHAKE(custom_string=“”, . ), producing as many 256-bit outputs as there are chunks. This step can be parallelized with SIMD instructions or other techniques available on your architecture. Finally, the outputs are concatenated and hashed a final time with cSHAKE(custom_string=“ParallelHash”, . ). Again, details have been omitted.
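Here is a rough sketch of that shape using hashlib's SHAKE; the chunking, the chunk size and the combining step are assumptions made for illustration and do not follow the real ParallelHash encoding.

import hashlib
from concurrent.futures import ProcessPoolExecutor

B = 8192  # chunk size in bytes, an arbitrary choice for this sketch

def leaf_hash(chunk):
    return hashlib.shake_256(chunk).digest(32)

def parallel_hash_sketch(data, out_len=32):
    chunks = [data[i:i + B] for i in range(0, len(data), B)]
    with ProcessPoolExecutor() as pool:  # the leaves can be hashed in parallel
        leaves = pool.map(leaf_hash, chunks)
    return hashlib.shake_256(b"".join(leaves)).digest(out_len)  # final combining hash

if __name__ == "__main__":
    print(parallel_hash_sketch(b"\x00" * 100_000).hex())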
