Zokis wrote some tests in Python, showing that differences in declarations and simple syntax do have an impact on the size of the program and its speed of execution.
For example, writing a, b = 0, 1 seems faster than doing a = 0 then b = 1. Using chained comparisons like a < b < 0 seems faster than doing a < b and b < 0.
etc. You can find all the tests here.
The differences seem negligible, though. dis and timeit were used to quantify the tests.
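For reference, here's how such micro-benchmarks can be reproduced with timeit and dis in today's Python 3 (exact timings vary by machine and interpreter version, so take the deltas with a grain of salt):

```python
import dis
import timeit

# Tuple unpacking vs. two sequential assignments
t_tuple = timeit.timeit("a, b = 0, 1", number=1_000_000)
t_seq = timeit.timeit("a = 0; b = 1", number=1_000_000)
print(f"a, b = 0, 1     : {t_tuple:.3f}s")
print(f"a = 0; b = 1    : {t_seq:.3f}s")

# Chained comparison vs. explicit `and`
t_chain = timeit.timeit("a < b < 0", setup="a, b = -2, -1", number=1_000_000)
t_and = timeit.timeit("a < b and b < 0", setup="a, b = -2, -1", number=1_000_000)
print(f"a < b < 0       : {t_chain:.3f}s")
print(f"a < b and b < 0 : {t_and:.3f}s")

# dis shows the bytecode each variant compiles to
dis.dis(compile("a < b < 0", "<chained>", "eval"))
```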
Also here are two useful python arguments:
python -c cmd : program passed in as string (terminates option list)
# python -c "print 'haha'"
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
# python -i -c "a = 5"
I was trying to access the Journal of Cryptology on Springer but I had to pay. Thanks to __x86 I realized I had free access to Springer thanks to my university!
So this post is oriented to my fellow classmates. If any of you want to check something there, it's free for us! (Well until we graduate).
I was trying to check the papers that got Dan Boneh and Antoine Joux their Gödel prize:
I'm reading through A Key Recovery Attack on Discrete Log-based Schemes Using a Prime Order Subgroup, which describes a small subgroup confinement attack.
It deals with stuff I had no knowledge of, like Schnorr's Signature, which I talked about in a previous post, or like what I'm going to talk about now:
The Pohlig-Hellman Algorithm is a method to compute a Discrete Logarithm (which is a difficult problem) on a multiplicative group whose order is a smooth number (also called friable). Meaning its order can be factorized into small primes.
y = g^x mod p
ord_p(g) = p - 1
p - 1 = q_1^(i_1) * ... * q_j^(i_j)
Here y is the public key and x is the secret key we're trying to compute. The order of g, our generator, is p - 1 since p is prime. p - 1 is smooth, so it can be factorized into something like 2^2 * 3 * 7 (extremely convenient example!).
Following is an overview of the method. If you read an equation and feel like it comes from nowhere (and it should feel like that), I posted a very short paper containing the simple proofs below.
The idea that should come to your mind, if you're used to seeing that kind of problem, is that there might be a way to use the Chinese Remainder Theorem (abbreviated CRT) to our advantage. What if we could compute x modulo the factors of p - 1 and then reconstruct the real x with the CRT? Well, let's do just that!
To find x modulo a factor q of p - 1, we can compute y^((p-1) / q), which we know, and which is also equal to g^(x_q * (p-1) / q).
(If a factor appears multiple times in the prime decomposition of p - 1, for example p - 1 = 2^5, there is also a way to ease the computations by finding multiple unknowns: 5 unknowns in our example.)
We then have a discrete logarithm to compute, but a small one, that we can compute efficiently thanks to Shanks' method (baby-step giant-step) or Pollard's rho algorithm.
Then we have multiple ways of writing our x (modulo the factors of p - 1), and we find out what x is with the CRT (I explained this step in my Airbus write-up here).
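The whole procedure (project into each prime-power subgroup, solve each small discrete log with baby-step giant-step, recombine with the CRT) fits in a short script. Here's a minimal Python sketch with toy parameters (p = 31, so p - 1 = 30 = 2 * 3 * 5 is smooth; real attacks target much larger groups):

```python
from math import isqrt

def factorize(n):
    """Trial-division factorization: returns {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def bsgs(g, y, p, order):
    """Shanks' baby-step giant-step: find x with g^x = y (mod p)."""
    m = isqrt(order) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p, Python 3.8+
    cur = y
    for i in range(m):                           # giant steps y * g^(-im)
        if cur in baby:
            return i * m + baby[cur]
        cur = cur * giant % p
    raise ValueError("no discrete log found")

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        x += M * ((r - x) * pow(M, -1, m) % m)
        M *= m
    return x % M

def pohlig_hellman(g, y, p):
    """Recover x with g^x = y (mod p), assuming g has smooth order p - 1."""
    n = p - 1
    residues, moduli = [], []
    for q, e in factorize(n).items():
        qe = q ** e
        g_q = pow(g, n // qe, p)   # generator of the subgroup of order q^e
        y_q = pow(y, n // qe, p)   # y projected into that subgroup
        residues.append(bsgs(g_q, y_q, p, qe))   # x mod q^e: a small DLP
        moduli.append(qe)
    return crt(residues, moduli)

# Toy demo: 3 generates the multiplicative group mod 31
p, g, x = 31, 3, 17
y = pow(g, x, p)
print(pohlig_hellman(g, y, p))  # 17
```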
Proofs + Video
You can read through the algorithm (or watch the video below), but I don't really like doing that since I'm not good at memorizing something if I don't understand the nuts and bolts of it. So here's a really good paper written by D. R. Stinson that demonstrates where the equations of the algorithm come from. And here's an explanation + example of the algorithm:
Since I made my first tournament app in ~2005-2006, I have received many requests to make an open source version available. Back then I didn't really want to release something badly coded, so I just kept it for myself and allowed people to go through my online app to create tournaments.
It caught on: it was translated into 8 different languages (sometimes badly translated, though) and used all across Europe, in real life and on IRC (I think something like 7000 different organizations were created through the app). One day some dude Skyped me and offered me 80€ for the source code. I made 80€.
I then rewrote everything using new technologies I had learned or wanted to learn: CodeIgniter, Zurb Foundation, jQuery, Sass... It was kind of a mess and I must have scared away all the users it had. Eventually I didn't renew the domain name, and people started complaining and asking me to hand them the app.
I was sad that there were so many people asking for a tournament app and that mine was not out there anymore. So yesterday, when someone asked me if he could have the code, I uploaded everything to GitHub. The code is old, and it's a mess. The Sass is nowhere to be found. I even wonder if it's really secure. But it works, it's easy to set up, and if it gains traction I might want to get back into it. If there was one project I fell in love with, it was this one.
According to the US government, yes they did:
the FBI now has enough information to conclude that the North Korean government is responsible for these actions
What do security experts think about that?
Here's a piece from Marc Rogers called No, North Korea Didn't Hack Sony. So you can guess what the director of security operations for DEFCON and principal security researcher at CloudFlare is thinking.
What about Schneier? Read about it here
I worry that this case echoes the "we have evidence -- trust us" story that the Bush administration told in the run-up to the Iraq invasion. Identifying the origin of a cyberattack is very difficult, and when it is possible, the process of attributing responsibility can take months.
What about Robert Graham? His article's title is, as usual, pretty straightforward: The FBI's North Korea evidence is nonsense.
So there is some kind of consensus that the FBI's announcement is abrupt and shady...
To dig further... Nicholas Weaver posted an interesting article. Kurt Baumgartner as well.
Interactive Protocols are basically a discussion between a Prover and a Verifier, where the Prover wants to prove he knows some piece of information without giving out that information.
It is often illustrated with Peggy and Victor and their super tunnel.
Usually it takes 3 steps:
- Prover sends a fixed value.
- Verifier sends a challenge.
- Prover answers the challenge.
The Verifier can then verify the answer based on the fixed value. If the answer is correct, the Verifier can assume the Prover knows what he's trying to prove. Sometimes you have to repeat the protocol multiple times to be sure, and not all problems have an Interactive Proof.
Classic examples of such proofs can be found on the Discrete Logarithm problem (we'll talk about that later) and the Hamiltonian Cycle problem.
Interactive Protocols are verified if they are:
- Complete: a Prover can successfully answer the challenge if he is honest.
- Sound: a dishonest Prover cannot convince the Verifier he knows the secret.
In the real definitions we use probabilities (an honest prover still has a small chance of making a mistake, a dishonest prover still has a small chance of convincing the Verifier).
We also often want a 3rd condition on our Interactive Protocols: we want it to be Zero-knowledge, no information about our secret should be leaked in this interaction.
Here is how you prove each one of them:
- Completeness: Can the Prover answer correctly thanks to his secret?
- Soundness: From the point of view of the Verifier. If the Prover can correctly answer two different challenges for the same fixed value (however he crafted the answers and the fixed value), does it mean that he must know the secret then?
- Zero-Knowledgeness: If you see a transcript of a recorded instance of this interaction, will you learn anything about the secret? (See if you can create fake transcripts)
There are also notions of weak Zero-knowledge, strong Zero-knowledge, dishonest verifiers, etc.
But let's talk about something else.
Since we said that a recorded transcript of a past interaction has no value (if it is zero-knowledge), then we could assume that there is no way of proving something by showing an old transcript, or by showing a transcript with yourself.
Don't fool yourself! Yes we can. We do this by using hash functions that we deem random enough.
The idea is that, by replacing the Verifier by a random oracle, we cannot predict the challenges and we thus cannot craft a fake transcript (we like to use random oracles instead of hashes, to prove some scheme is secure).
A random oracle is an oracle (a theoretical black box) that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
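One common way to model this in a proof or simulation is lazy sampling: answer each fresh query with new randomness, and replay the stored answer on repeats. A minimal sketch:

```python
import os

# Lazy sampling: each fresh query gets new uniform randomness; repeated
# queries replay the stored answer, exactly as the definition requires.
class RandomOracle:
    def __init__(self, out_bytes=32):
        self.table = {}
        self.out_bytes = out_bytes

    def query(self, x):
        if x not in self.table:
            self.table[x] = os.urandom(self.out_bytes)
        return self.table[x]

oracle = RandomOracle()
assert oracle.query(b"hello") == oracle.query(b"hello")  # same answer on repeat
```

A real hash function like SHA-256 is of course deterministic and public; the random oracle is the idealization we substitute for it in security proofs.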
What is interesting is that this protocol was used in a Signature Scheme.
Interactive Proof of a Discrete Logarithm
The most famous academic example of Interactive Protocol is done using the Discrete Logarithm problem.
Let <g> = G, with g of order q. The Prover wants to show he knows x such that g^x = y.
- the Prover sends t = g^e
- the Verifier sends a challenge c
- the Prover sends d = e + cx
The Verifier can then compute y^c * t = g^(e + cx) and see if it equals g^d = g^(e + cx).
A transcript would look like this: (t, c, d)
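One round of the exchange above can be simulated directly. A sketch with toy parameters (p = 2039 and q = (p-1)/2 = 1019 are assumed demo values, far too small for real use; g = 4 generates the order-q subgroup):

```python
import secrets

# Toy group parameters (NOT secure sizes): p prime, q = (p-1)/2 prime,
# g = 4 generates the subgroup of order q.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q - 1) + 1   # Prover's secret
y = pow(g, x, p)                   # public value y = g^x

e = secrets.randbelow(q - 1) + 1
t = pow(g, e, p)                   # 1. Prover sends the fixed value t = g^e
c = secrets.randbelow(q)           # 2. Verifier sends a random challenge c
d = (e + c * x) % q                # 3. Prover answers with d = e + cx

# Verifier's check: g^d should equal y^c * t = g^(cx) * g^e
assert pow(g, d, p) == pow(y, c, p) * t % p
print("transcript:", (t, c, d))
```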
Non-Interactive Proof of a Discrete Logarithm
Doing this with a non-interactive protocol, it would look like this: (t, h(t), d), with h a hash function.
This is what Schnorr's Signature is doing:
t = g^e
c = H(m || t)
d = e - x*c
He would then send (c, d) as the signature, along with the message m. It is basically a hash of the message with a proof that he knows the secret.
To verify the signature you would use the public key y = g^x to compute y^c * g^d = t, and then you would compute the hash. It would give you the proof that the signer knows x (authentication, non-repudiation) and that the message hasn't been tampered with (integrity).
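Schnorr's scheme above can be sketched in a few lines of Python (same toy group as before; the parameter sizes and the way the hash is truncated into Z_q are simplified assumptions, not a production implementation):

```python
import hashlib
import secrets

# Toy group parameters (NOT secure sizes): g = 4 has prime order q mod p
p, q, g = 2039, 1019, 4

def H(m, t):
    """c = H(m || t), truncated into Z_q."""
    digest = hashlib.sha256(f"{m}|{t}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret key
    return x, pow(g, x, p)             # public key y = g^x

def sign(m, x):
    e = secrets.randbelow(q - 1) + 1   # fresh ephemeral nonce
    t = pow(g, e, p)                   # t = g^e
    c = H(m, t)                        # challenge from the hash, not a Verifier
    d = (e - x * c) % q                # d = e - x*c
    return c, d

def verify(m, y, c, d):
    t = pow(g, d, p) * pow(y, c, p) % p   # y^c * g^d = g^(e - xc + xc) = g^e
    return H(m, t) == c

x, y = keygen()
c, d = sign("hello", x)
print(verify("hello", y, c, d))  # True
```

Note how the verifier recomputes t from public data only, then checks that the hash matches the challenge c that was sent.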
So this is one way of using Non-interactive proofs!
A great FAQ, written by Marc Joye (Thomson R&D), on Whitebox Cryptography.
Thanks Tancrède for the link!
Q1: What is white-box cryptography?
A major issue when dealing with security programs is the protection of "sensitive" (secret, confidential or private) data embedded in the code. The usual solution consists in encrypting the data but the legitimate user needs to get access to the decryption key, which also needs to be protected. This is even more challenging in a software-only solution, running on a non-trusted host.
White-box cryptography is aimed at protecting secret keys from being disclosed in a software implementation. In such a context, it is assumed that the attacker (usually a "legitimate" user or malicious software) may also control the execution environment. This is in contrast with the more traditional security model where the attacker is only given a black-box access (i.e., inputs/outputs) to the cryptographic algorithm under consideration.
Q2: What is the difference with code obfuscation?
Related and complementary techniques for protecting software implementations but with different security goals include code obfuscation and software tamper-resistance. Code obfuscation is aimed at protecting against the reverse engineering of a (cryptographic) algorithm while
software tamper-resistance is aimed at protecting against modifications of the code.
However, all these techniques have in common that the resulting implementation must remain directly executable.
Or as Francis Gabriel writes here
Code obfuscation means code protection. A piece of code which is obfuscated is modified in order to be harder to understand. For example, it is often used in DRM (Digital Rights Management) software to protect multimedia content by hiding secret information like algorithms and encryption keys.
SECURITY DAY will take place at the University of Lille 1, in France, on January 16th. People from Quarkslab (where I almost did my internship), ANSSI, Microsoft, ... will give talks. There is even a talk by one of my classmates, Jonathan Salwan.
I'm trying to find a way to get there, so if you want to buy me a beer this might be the right place :D
As requested, I added an RSS feed to this blog. It's available here in markdown and here in html; choose whichever suits you best.
posted December 2014
I like how people make an extreme effort to create "sure" sources of random numbers.
OneRNG has released a new USB source. Everything is open source (open hardware, open software); you can even create your own by following the instructions on their website.
OneRNG collects entropy from an avalanche diode circuit, and from a channel-hopping RF receiver. It even has a “tinfoil hat” to prevent RF interference — you can remove the hat in order to visually verify the components being used.
Now I'm wondering who is using that, and for what.
High on Coffee has released a cheatsheet on nmap. Full of examples and tips. You can find it here
EDIT: There are also Linux Commands for Penetration Testers there. The blog seems pretty new and it already has really good content :)
A new vulnerability has been discovered on the git client. See Github's announcement
Repositories hosted on github.com cannot contain any of the malicious trees that trigger the vulnerability because we now verify and block these trees on push.
The official announcement and the updated and fixed version of git is here.
We used to allow committing a path ".Git/config" with Git that is
running on a case sensitive filesystem, but an attempt to check out
such a path with Git that runs on a case insensitive filesystem
would have clobbered ".git/config", which is definitely not what
the user would have expected. Git now prevents you from tracking
a path with ".Git" (in any case combination) as a path component.
More information about the vulnerability here
Git maintains various meta-information for its repository in files in .git/ directory located at the root of the working tree. The system does not allow a file in that directory (e.g. .git/config) to be committed in the history of the project, or checked out to the working tree from the project. Otherwise, an unsuspecting user can run git pull from an innocuous-looking-but-malicious repository and have the meta-information in her repository overwritten, or executable hooks installed by the owner of that repository she pulled from (i.e. an attacker).
And I just passed the last exam of this semester, which should be the last exam of my life =)
Now it's time to take a few days to relax and eat nice food, because it will soon be Christmas ^^ (or the holidays, as I heard some Americans say to avoid saying Christmas).
A few interesting things I had to do during my exams these last few days:
- Simple Power Analysis (SPA). Guess what algorithm is used from smartcards' traces and calculate the exponent if it's a binary exponentiation
In the picture you can see two patterns: a "1" is represented by two operations in the algorithm, one of them being a squaring, which also happens on its own when you have a "0" in your exponent's binary representation. So by following the computations revealed by the power trace you can guess the binary representation of the exponent.
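To see why the trace leaks the exponent, here's a sketch of left-to-right square-and-multiply that records its operations the way a power trace would reveal them:

```python
def square_and_multiply(g, exp, mod, trace):
    """Left-to-right binary exponentiation, recording each operation
    as it would show up on a power trace."""
    result = 1
    for bit in bin(exp)[2:]:
        result = result * result % mod   # squaring: happens for every bit
        trace.append("S")
        if bit == "1":
            result = result * g % mod    # multiplication: only for 1-bits
            trace.append("M")
    return result

def recover_exponent(trace_str):
    """SPA on the trace: an 'SM' pair is a 1-bit, a lone 'S' is a 0-bit."""
    return int(trace_str.replace("SM", "1").replace("S", "0"), 2)

trace = []
square_and_multiply(3, 0b1011, 1000, trace)
print("".join(trace))                          # SMSSMSM
print(bin(recover_exponent("".join(trace))))   # 0b1011
```

This is exactly why real implementations use countermeasures like always performing a (possibly dummy) multiplication per bit.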
I had to read this article explaining two malloc implementations and their vulnerabilities: GNU libc's (used in Linux) and System V AT&T's (used in Solaris and IRIX). I knew the doubly linked list system, but System V uses a different approach: a binary tree, and also a realfree function that completes the
Josip Franjković found a vulnerability in one of Facebook's file uploaders.
He described what he did here: basically he uploaded a zip file containing a symbolic link to /etc/passwd:
ln -s /etc/passwd link
zip --symlinks test.zip link
And since uploaders are always a mess to secure, Facebook just replied, displaying the content of what it thought was the unzipped resume.
Schneier just gave a talk on security at Qcon in San Francisco. It was recorded and you can watch that here.
It's a high level talk that brings up a lot of interesting points, like how much we trust our devices, how companies often do very bad things in terms of security, ...
The psychologist he's talking about is Daniel Kahneman, who won the Nobel Prize in economics for his work on Prospect Theory.
Prospect theory is a behavioral economic theory that describes the way people choose between probabilistic alternatives that involve risk, where the probabilities of outcomes are known. The theory states that people make decisions based on the potential value of losses and gains rather than the final outcome, and that people evaluate these losses and gains using certain heuristics.