I finally chose where I'll be doing my internship to finish my master of Cryptography: it will be at Cryptography Services (a new crypto team, part of NCC Group (iSEC Partners, Matasano, Intrepidus Group)). I will be conducting research and audits in the offices of Matasano in **Chicago**. I'm super excited, and the next 6 months of my life should be full of surprises!

Cryptography Services is a dedicated team of consultants from iSEC Partners, Matasano, Intrepidus Group, and NCC Group focused on cryptographic security assessments, protocol and design reviews, and tracking impactful developments in the space of academia and industry.

I guess I'll soon have to create a "life in Chicago" category =)

Zokis wrote some tests in Python showing that differences in declarations and simple syntax do have an impact on the size of the program and the speed of execution.

For example, writing `a, b = 0, 1` seems faster than doing `a = 0` then `b = 1`.

Using a chained comparison like `a < b < 0` seems faster than doing `a < b and b < 0`.

etc... you can find all the tests here.

The differences seem negligible though. `dis` and `timeit` were used to quantify the tests.
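A minimal way to reproduce this kind of measurement yourself with `timeit` (the exact numbers depend on your machine and interpreter, and the gaps really are tiny):

```python
import timeit

N = 1_000_000

# Tuple assignment vs. two separate statements
t_tuple = timeit.timeit("a, b = 0, 1", number=N)
t_separate = timeit.timeit("a = 0; b = 1", number=N)

# Chained comparison vs. an explicit `and`
t_chained = timeit.timeit("a < b < 0", setup="a, b = -2, -1", number=N)
t_and = timeit.timeit("a < b and b < 0", setup="a, b = -2, -1", number=N)

print(t_tuple, t_separate)
print(t_chained, t_and)
```

You can then compare the generated bytecode of each variant with `dis.dis(compile("a, b = 0, 1", "<s>", "exec"))`.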

Also here are two useful python arguments:

`-c cmd` : program passed in as string (terminates option list)

```
# python -c "print 'haha'"
haha
```

`-i` : inspect interactively after running script; forces a prompt even if stdin does not appear to be a terminal; also `PYTHONINSPECT=x`

```
# python -i -c "a = 5"
>>> a
5
```

I was trying to access the Journal of Cryptology on **Springer** but I had to pay. Thanks to __x86 I realized my university gives me free access to Springer!

So this post is aimed at my fellow classmates. If any of you want to check something there, it's free for us! (Well, until we graduate.)

I was trying to check the papers that got Dan Boneh and Antoine Joux their Gödel prize:

I'm reading through A Key Recovery Attack on Discrete Log-based Schemes Using a Prime Order Subgroup, which is a **Small subgroup confinement attack**.

It deals with stuff I had no knowledge of, like **Schnorr's Signature**, which I talked about in a previous post, or like what I'm going to talk about now:

The Pohlig-Hellman Algorithm is a method to compute a **Discrete Logarithm** (which is a difficult problem) on a multiplicative group whose order is a smooth number (also called *friable*), meaning its order can be factorized into small primes.

```
y = g^x mod p
ord_p(g) = p - 1
p - 1 = q_1^(i_1) * ... * q_j^(i_j)
```

Here `y` is the public key and `x` is the secret key we're trying to compute. The order of `g`, our generator, is `p - 1` since `p` is prime.

`p - 1` is smooth, so it can be factorized into something like `2^2 * 3 * 7` (an extremely convenient example!).

Following is an **overview** of the method; if you read an equation and feel like it comes from nowhere (and it should feel like that), I posted a very short paper containing the simple proofs below.

## Overview

The idea that should come to your mind, if you're used to seeing this kind of problem, is that there might be a way to use the **Chinese Remainder Theorem** (abbreviated CRT) to our advantage. What if we could write `x` modulo the factors of `p - 1` and then **reconstruct** the real `x` with the CRT? Well, let's do just that!

To write `x` modulo a factor `q` of `p - 1`, we can compute `y^((p-1) / q)`, which we know, and which is also equal to `g^(x_q * (p-1) / q)`.

(If a factor appears multiple times in the prime decomposition of `p - 1` (for example `p - 1 = 2^5`), then there is also a way to ease the computations by finding multiple unknowns (5 unknowns in our example).)

We then have a discrete logarithm to compute, but a small one, that we can compute efficiently thanks to Shanks' method (baby-step giant-step) or Pollard's rho algorithm.
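A sketch of Shanks' baby-step giant-step in Python, which finds such a small discrete log in roughly `√order` time and space (`pow(g, -m, p)` for the modular inverse assumes Python 3.8+):

```python
from math import isqrt

def bsgs(g, y, p, order):
    """Baby-step giant-step: find x with g^x = y (mod p), 0 <= x < order."""
    m = isqrt(order) + 1
    # Baby steps: table of g^j mod p for j in [0, m)
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: compare y * (g^-m)^i against the table
    factor = pow(g, -m, p)  # modular inverse of g^m (Python 3.8+)
    gamma = y
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None  # no solution in [0, order)
```

For example, with `p = 101` and `g = 2`, `bsgs(2, pow(2, 57, 101), 101, 100)` recovers an exponent `x` with `2^x = y (mod 101)`.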

Then we have multiple ways of writing our `x` (modulo the factors of `p - 1`), and we find out what `x` is with the CRT (I explained this step in my airbus write-up here).
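Putting the whole overview together, here is a toy Pohlig-Hellman sketch in Python. For brevity it brute-forces the discrete log in each prime-power subgroup directly, instead of splitting a factor `q^e` into `e` smaller unknowns as the real algorithm does; `pow(M, -1, m)` assumes Python 3.8+. The numbers used in the usage example (`p = 8101`, `g = 6`) come from a classic textbook worked example:

```python
def small_dlog(g, y, p, order):
    """Brute-force discrete log in a small subgroup: find x with g^x = y (mod p)."""
    acc = 1
    for x in range(order):
        if acc == y:
            return x
        acc = acc * g % p
    return None

def factorize(n):
    """Trial division: return {prime: exponent}; fast only when n is smooth."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def pohlig_hellman(g, y, p):
    """Solve g^x = y (mod p), g of order p-1, when p-1 is smooth."""
    n = p - 1
    residues, moduli = [], []
    for q, e in factorize(n).items():
        qe = q ** e
        # Project the problem into the subgroup of order q^e:
        # y^(n/qe) = g^(x * n/qe), so x mod qe is a small discrete log
        g_i = pow(g, n // qe, p)
        y_i = pow(y, n // qe, p)
        residues.append(small_dlog(g_i, y_i, p, qe))
        moduli.append(qe)
    # Recombine x mod each prime power into x mod n with the CRT
    x = 0
    for r, m in zip(residues, moduli):
        M = n // m
        x = (x + r * M * pow(M, -1, m)) % n
    return x
```

With `p = 8101` (so `p - 1 = 8100 = 2^2 * 3^4 * 5^2`) and generator `g = 6`, `pohlig_hellman(6, pow(6, 6689, 8101), 8101)` recovers `x = 6689` using only small brute-force searches.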

## Proofs + Video

You can read through the Algorithm (or watch the video below), but I don't really like doing that, since I'm not good at memorizing something if I don't understand its nuts and bolts. So here's a really good paper written by **D. R. Stinson** that demonstrates where the equations of the algorithm come from. And here's an explanation + example of the algorithm:

Since I made my first tournament app in ~2005-2006, I have received many requests to make an open source version available. Back then I didn't really want to release something badly coded, so I just kept it for myself and allowed people to go through my online app to create tournaments.

It caught on: it was translated into 8 different languages (sometimes badly translated though) and used all across Europe, in real life and on IRC (I think something like 7000 different organizations were created through the app). One day some dude Skyped me and offered me 80€ for the source code. I made 80€.

I then rewrote everything using new technologies I had learned or wanted to learn: CodeIgniter, Zurb Foundation, jQuery, Sass... It was kind of a mess, and I must have scared away all the users it had. Eventually I didn't renew the domain name, and people started complaining and asking me to hand them the app.

I was sad that there were so many people asking for a tournament app and that mine was no longer out there. So yesterday, when someone asked me if he could have the code, I uploaded everything to Github. The code is old, and it's a mess. The Sass is nowhere to be found. I even wonder if it's really secure. But it works, it's easy to set up, and if it gains traction I might want to get back into it. If there was one project I fell in love with, it was this one.

According to the US government, yes they did:

> the FBI now has enough information to conclude that the North Korean government is responsible for these actions

What do security experts think about that?

Here's a piece from **Marc Rogers** called **No, North Korea Didn't Hack Sony**. So you can guess what the director of security operations for DEFCON and principal security researcher of Cloudflare thinks.

What about **Schneier**? Read about it here

> I worry that this case echoes the "we have evidence -- trust us" story that the Bush administration told in the run-up to the Iraq invasion. Identifying the origin of a cyberattack is very difficult, and when it is possible, the process of attributing responsibility can take months.

What about **Robert Graham**? His article's title is, as usual, pretty straightforward: The FBI's North Korea evidence is nonsense

So there is some kind of consensus that the FBI's announcement is abrupt and shady...

To dig further... **Nicholas Weaver** posted an interesting article. Kurt Baumgartner as well.

## Interactive Protocols

**Interactive Protocols** are basically a discussion between a Prover and a Verifier where the Prover has some piece of information he wants to prove, without giving out the information.

It is often illustrated with Peggy and Victor and their super tunnel.

Usually it takes 3 steps:

- Prover sends a fixed value.
- Verifier sends a challenge.
- Prover answers the challenge.

The Verifier can then verify the answer based on the fixed value. If the answer is correct, the Verifier can assume the Prover knows what he's trying to prove. Sometimes you have to repeat the protocol multiple times to be sure, and **not all problems have an Interactive Proof**.

Classic examples of such proofs can be found on the Discrete Logarithm problem (we'll talk about that later) and the Hamiltonian Cycle problem.

Interactive Protocols are verified if they are:

- **Complete**: a Prover can successfully answer the challenge if he is honest.
- **Sound**: a dishonest Prover cannot convince the Verifier he knows the secret.

In the real definitions we use probabilities (an honest prover still has a small chance of making a mistake, a dishonest prover still has a small chance of convincing the Verifier).

We also often want a 3rd condition on our Interactive Protocols: we want it to be **Zero-knowledge**, no information about our secret should be leaked in this interaction.

Here is how you prove each one of them:

- **Completeness**: can the Prover answer correctly thanks to his secret?
- **Soundness**: from the point of view of the Verifier: if the Prover can correctly answer two different challenges for the same fixed value (however he crafted the answers and the fixed value), does it mean that he must know the secret?
- **Zero-Knowledgeness**: if you see a transcript of a recorded instance of this interaction, will you learn anything about the secret? (See if you can create fake transcripts.)

There are also notions of weak Zero-knowledge, strong Zero-knowledge, dishonest verifiers, etc...

But let's talk about something else.

## Non-interactive Protocols

Since we said that a recorded transcript of a past interaction has no value (if it is zero-knowledge), then we could assume that there is no way of proving something by showing an old transcript, or by showing a transcript with yourself.

Don't fool yourself! Yes we can. We do this by using hash functions that we deem random enough.

The idea is that, by **replacing the Verifier by a random oracle**, we **cannot predict** the challenges and we thus cannot craft a fake transcript (we like to use random oracles instead of hashes, to prove some scheme is secure).

> A random oracle is an oracle (a theoretical black box) that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
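This "same query, same answer; new query, fresh random answer" behaviour is easy to model in code by lazy sampling; a toy sketch:

```python
import secrets

class RandomOracle:
    """A toy random oracle: a fresh uniform output for each new query,
    and the same output whenever a query is repeated (lazy sampling)."""

    def __init__(self, out_bytes=32):
        self.out_bytes = out_bytes
        self.table = {}  # query -> previously sampled answer

    def query(self, msg):
        if msg not in self.table:
            # First time we see this query: sample a truly random answer
            self.table[msg] = secrets.token_bytes(self.out_bytes)
        return self.table[msg]
```

In security proofs, a hash function like SHA-256 is idealized as behaving like this object; the difference is that no fixed hash function can actually be a random oracle.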

What is interesting is that this protocol was used in a **Signature Scheme**.

## Interactive Proof of a Discrete Logarithm

The most famous academic example of Interactive Protocol is done using the **Discrete Logarithm problem**.

We have `<g> = G`, with `g` of order `q`. The Prover wants to show he knows `x` in `g^x = y`.

- the Prover picks a random `e` and sends `t = g^e`
- the Verifier sends a challenge `c`
- the Prover sends `d = e + cx`

The Verifier can then compute `y^c * t = g^(e + cx)` and check that it equals `g^d = g^(e + cx)`.

A transcript would look like this: `(t, c, d)`.
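The three steps above can be sketched in Python over a toy group: the subgroup of prime order `q = 1019` inside `Z_p*` with `p = 2q + 1 = 2039` (real deployments use much larger parameters, of course):

```python
import secrets

# Toy Schnorr-style group: p = 2*q + 1 with p, q prime
p, q = 2039, 1019
g = 4  # 4 = 2^2 is a quadratic residue, so it generates the order-q subgroup

x = secrets.randbelow(q)   # the Prover's secret
y = pow(g, x, p)           # the public value, y = g^x

# Step 1: the Prover picks a random e and commits to t = g^e
e = secrets.randbelow(q)
t = pow(g, e, p)

# Step 2: the Verifier sends a random challenge c
c = secrets.randbelow(q)

# Step 3: the Prover answers with d = e + cx (mod q)
d = (e + c * x) % q

# The Verifier checks g^d == y^c * t, i.e. g^(e+cx) on both sides
assert pow(g, d, p) == (pow(y, c, p) * t) % p
print("transcript:", (t, c, d))
```

Note that `d` alone leaks nothing about `x`, because the random `e` masks the term `cx`.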

## Non-Interactive Proof of a Discrete Logarithm

Doing this with a non-interactive protocol, it would look like this: `(t, h(t), d)`, with `h` a hash function.

## Schnorr's Signature

This is what **Schnorr's Signature** is doing:

`t = g^e`

`c = H(m || t)`

`d = e - x*c`

He would then send `(c, d)` as the signature, along with the message `m`. This is basically a hash of the message together with a proof that he knows the secret `x`.

To verify the signature, you would use the public key `y = g^x` to compute `y^c * g^d = t`, and then recompute the hash. It gives you proof that the signer knows `x` (authentication, non-repudiation) and that the message hasn't been tampered with (integrity).

So this is one way of using Non-interactive proofs!
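Here is a toy sketch of the whole scheme in Python, reusing the same small group as in the interactive example, with SHA-256 standing in for the random oracle (again, real Schnorr signatures use much larger parameters):

```python
import hashlib
import secrets

# Same toy group as before: p = 2*q + 1, g of order q
p, q, g = 2039, 1019, 4

def H(m, t):
    """c = H(m || t), the Fiat-Shamir challenge (SHA-256 as the 'oracle')."""
    data = m + t.to_bytes(2, "big")  # 2 bytes suffice since t < p < 2^16
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keygen():
    x = secrets.randbelow(q)         # secret key
    return x, pow(g, x, p)           # (x, y = g^x)

def sign(x, m):
    e = secrets.randbelow(q)
    t = pow(g, e, p)                 # t = g^e
    c = H(m, t)                      # c = H(m || t)
    d = (e - x * c) % q              # d = e - x*c
    return c, d

def verify(y, m, sig):
    c, d = sig
    t = (pow(y, c, p) * pow(g, d, p)) % p  # y^c * g^d = g^(xc + e - xc) = t
    return H(m, t) == c
```

A forged or tampered message fails verification, since the recomputed `t` no longer hashes to `c`:

```python
x, y = keygen()
sig = sign(x, b"attack at dawn")
verify(y, b"attack at dawn", sig)   # True
verify(y, b"attack at dusk", sig)   # False
```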

A great FAQ on White-box Cryptography, written by **Marc Joye** (Thomson R&D):

http://joye.site88.net/papers/Joy08whitebox.pdf

Thanks Tancrède for the link!

Q1: What is white-box cryptography?

A major issue when dealing with security programs is the protection of "sensitive" (secret, confidential or private) data embedded in the code. The usual solution consists in encrypting the data but the legitimate user needs to get access to the decryption key, which also needs to be protected. This is even more challenging in a software-only solution, running on a non-trusted host.

White-box cryptography is aimed at protecting secret keys from being disclosed in a software implementation. In such a context, it is assumed that the attacker (usually a "legitimate" user or malicious software) may also control the execution environment. This is in contrast with the more traditional security model where the attacker is only given a black-box access (i.e., inputs/outputs) to the cryptographic algorithm under consideration.

Q2: What is the difference with code obfuscation?

Related and complementary techniques for protecting software implementations but with different security goals include code obfuscation and software tamper-resistance. *Code obfuscation* is aimed at protecting against the reverse engineering of a (cryptographic) algorithm while software tamper-resistance is aimed at protecting against modifications of the code.

All these techniques have however in common that the resulting implementation must remain directly executable.

Or as **Francis Gabriel** writes here

Code obfuscation means code protection. A piece of code which is obfuscated is modified in order to be harder to understand. As an example, it is often used in DRM (Digital Rights Management) software to protect multimedia content by hiding secret information like algorithms and encryption keys.