Supply chain attacks are the new big thing
posted January 2022
Over 90 WordPress themes, plugins backdoored in supply chain attack
(source: bleepingcomputer.com)
A product can be seen as a production line. That's what The Phoenix Project novel argues, and it makes sense to me. Things get assembled and passed around, work stations can become bottlenecks, and at the end of the line you deliver the product to the user. In that production line, pieces come from your own warehouse, or from external vendors. That distinction, my friend, is what we call "insider threat vs supply chain attacks" in security.
Insider threat is a problem that's mostly ignored today. Large corporations have spies in them, that's a given. They have poor ways to deal with them, but that's fine, because most of these spies work for other governments and are tolerated in the grand scheme of things. They might inject backdoors here and there (see the Juniper incident) but most of the time they remain unnoticed and unproblematic.
Supply chain attacks, on the other hand, come from less sophisticated attackers: script kiddies, blackhats, "hackers", and others that we generally don't label as advanced (as in advanced persistent threat). But they are getting more and more advanced.
To counter supply chain attacks, the first thing you can do is an inventory (some people might call this threat modeling). The idea, really, is to list all of the pieces and the flows that make up your product: a directed acyclic graph of machines and humans. Once you have that, it's much clearer what the different attack vectors are.
I'm a software person, so I mostly focus on software security. When you write software, you worry about a few things:
- your dependencies
- the flows around your code evolution
- what goes into a release
- how that release gets into production
The fourth point is sort of an infra problem, so it's not the most urgent problem you should think about as a developer. I'd say the third point could be an infra problem as well, unless you're lucky enough to have release engineers or dev infra people working in your company. The second point is probably GitHub, and all the corner cases you have around it. For the most part, you can choose to trust GitHub. The first point, the dependencies you're using, that is your problem.
I learned a few things back at Facebook, when I was working on libra/diem, that I think could be useful to others. First, not all languages are created equal. Languages like Golang are fantastic because they provide a rich standard library, what some would call a batteries-included stdlib. Thanks to that, you rarely see Golang projects depend on more than 100 transitive dependencies (that count includes the dependencies used by your direct dependencies, and so on). Other languages, like JavaScript and Rust, can see their dependency graph blow up due to poor standard libraries and great package managers.
When I looked at what we were doing at the time with libra/diem, I got obsessed with the large number of dependencies that our software used. A year in, I got inspired by Feynman's story about investigating the Challenger disaster, and the way he managed to pinpoint the problem by using the wisdom of the crowd.
The wisdom of the crowd is this concept where the average of everyone's guesses to a question can be extremely close to the real answer. The best demonstration of this concept is when a bunch of people are asked to estimate the number of jelly beans contained in a jar. Check out that video on YouTube. If you're French, Fouloscopie is a great YouTuber who does research about crowds and their behavior.
So I used the same methodology. I created a form and sent it to all the engineers at the company. The form asked the following: "for each piece of our big puzzle, what is your estimate of its risk from 0 to 5? If you don't know, leave it blank." The result was eye-opening. The trivial parts were deemed not risky, as expected. The confusing parts (like consensus and HSM code) had estimates that were all over the place. Dependencies were ranked as the highest risk. Finally, some other parts that I had overlooked got ranked as high-risk, which made a lot of sense and changed our strategy for the rest of the year.
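To make that aggregation concrete, here's a minimal sketch (the components and numbers are made up) of how such survey answers can be summarized: the average gives you a risk ranking, and the spread tells you which components people disagree on.

```rust
// Aggregate per-component risk estimates: the mean ranks the risk,
// the spread (standard deviation) flags the "all over the place" answers.
fn mean(values: &[f64]) -> f64 {
    values.iter().sum::<f64>() / values.len() as f64
}

fn spread(values: &[f64]) -> f64 {
    let m = mean(values);
    (values.iter().map(|v| (v - m).powi(2)).sum::<f64>() / values.len() as f64).sqrt()
}

fn main() {
    // component -> risk estimates (0 to 5) collected from engineers (made-up data)
    let answers = [
        ("dependencies", vec![4.0, 5.0, 4.0, 5.0, 4.0]),
        ("consensus", vec![1.0, 5.0, 2.0, 4.0, 0.0]),
        ("cli parsing", vec![0.0, 1.0, 0.0, 1.0, 1.0]),
    ];
    for (component, estimates) in &answers {
        println!(
            "{}: average risk {:.1}, disagreement {:.1}",
            component,
            mean(estimates),
            spread(estimates)
        );
    }
}
```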
So dependencies were definitely a problem, people agreed. What could we do? First, I developed a tool called dephell that would give us some quick stats on the dependencies we were using in our Rust project. The tool was a bit buggy, and incorrect at times, but it was good enough to show us outliers. We used it on every library in our project to get a sense of what dependencies they were relying on. We had hundreds and hundreds of dependencies, so that was quite a hard task, but a few patterns stood out.
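dephell itself isn't shown here, but to give an idea of the kind of quick stats I'm talking about, here's a hedged sketch that parses the output of `cargo metadata` (with the serde_json crate) and counts the unique transitive dependencies of each workspace member:

```rust
// A rough sketch (not dephell): run `cargo metadata`, build the resolved
// dependency graph, and count each workspace member's transitive dependencies.
// Requires the serde_json crate in Cargo.toml.
use std::collections::{HashMap, HashSet};
use std::process::Command;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `cargo metadata` dumps the fully resolved dependency graph as JSON.
    let output = Command::new("cargo")
        .args(["metadata", "--format-version", "1"])
        .output()?;
    let metadata: serde_json::Value = serde_json::from_slice(&output.stdout)?;

    // Adjacency list: package id -> package ids of its dependencies.
    let mut graph: HashMap<&str, Vec<&str>> = HashMap::new();
    for node in metadata["resolve"]["nodes"].as_array().unwrap() {
        let id = node["id"].as_str().unwrap();
        let deps = node["dependencies"]
            .as_array()
            .unwrap()
            .iter()
            .filter_map(|d| d.as_str())
            .collect();
        graph.insert(id, deps);
    }

    // For each workspace member, walk the graph and count unique transitive deps.
    for member in metadata["workspace_members"].as_array().unwrap() {
        let root = member.as_str().unwrap();
        let mut seen = HashSet::new();
        let mut stack = vec![root];
        while let Some(id) = stack.pop() {
            for dep in graph.get(id).into_iter().flatten() {
                if seen.insert(*dep) {
                    stack.push(*dep);
                }
            }
        }
        println!("{}: {} transitive dependencies", root, seen.len());
    }
    Ok(())
}
```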
Sometimes, we used several different dependencies to solve the same problem. The fix was easy: decide which one was the most solid and use that one. Other times, we would use a large dependency, which would import more dependencies, and realize that if we rewrote the functionality ourselves we could live with much less code and no dependencies. This is what I did when I wrote libra's Noise implementation, which replaced a large amount of code and a few dozen dependencies with a few hundred lines of code. Sometimes, we realized that the dependencies we were using were not great: they were unmaintained, had unknown maintainers and contributors, lots of unsafe code, and so on. Finding a better alternative was usually the way to go.
That analysis got us to remove a lot of dependencies. But dependencies kept creeping back in, so we needed a flow to prevent that growth. The obvious solution was to add friction to the process of adding a dependency. Making sure that I was involved in accepting any PR that added a new dependency (enforced via continuous integration) was enough; a sketch of such a CI gate follows the exchange below. Most of the time it would go like this:
- hey! Can I get a review from you for this PR, it looks like you need to look at it since I'm adding a new dependency.
- for sure! I looked at the dependency and it's adding X lines of code, as well as Y additional dependencies. Is this something that's really necessary? Are there better alternatives?
- hmm... actually I could write that in a few lines of code, let me try that instead.
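For illustration, a hypothetical version of that CI gate (not the exact check we had) could simply compare the package names in the base branch's Cargo.lock with the PR's Cargo.lock and fail when new ones show up:

```rust
// Hypothetical CI gate: exit with a non-zero status when the PR's Cargo.lock
// contains package names that the base branch's Cargo.lock doesn't,
// which forces the PR to get a dedicated review.
use std::collections::BTreeSet;
use std::{env, fs, process};

// Extract the set of package names from a Cargo.lock file.
fn package_names(path: &str) -> BTreeSet<String> {
    fs::read_to_string(path)
        .expect("could not read lockfile")
        .lines()
        .filter_map(|line| line.strip_prefix("name = \""))
        .map(|rest| rest.trim_end_matches('"').to_string())
        .collect()
}

fn main() {
    // usage: check_new_deps <base Cargo.lock> <PR Cargo.lock>
    let args: Vec<String> = env::args().collect();
    let (base, pr) = (package_names(&args[1]), package_names(&args[2]));

    let new_deps: Vec<_> = pr.difference(&base).collect();
    if !new_deps.is_empty() {
        eprintln!("new dependencies detected, review required: {:?}", new_deps);
        process::exit(1);
    }
    println!("no new dependencies");
}
```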
At that point, you have a good sense of what your dependencies are, you've sorted them out and even done some cleaning, and you've stopped the growth. You still have problems: dependencies evolve. We know, from experience, that the risk of not updating dependencies is higher than the risk of updating, because a lot of the updates you'd be skipping are bug fixes. So you need to update. Updating still adds a few non-negligible risks though:
1. a backdoor could have been introduced in a (transitive) dependency
2. new dependencies could be added by the update
3. new bugs could be introduced by the update
The cost of 3 is negligible compared to the bug fixes you get. Remember: you can't review every dependency you're using; you wouldn't be using third-party dependencies in the first place if you had all the time in the world. 2 has to be taken into account in your dependency growth control strategy. 1 is the subject of this post.
To tackle 1, John and I created whackadep. Whackadep is a service, with a web UI, that monitors your repository. Originally architected to be language-agnostic, it was for obvious reasons built primarily for Rust codebases.
Whackadep's role was to periodically check if new updates were available, and to quickly show you two things:
- what was the risk of not updating
- what was the risk of updating
The risk of not updating is generally calculated with the help of RUSTSEC, the RustSec Advisory Database, as well as by measuring the distance between the semantic versions. If the update is X breaking versions on top of your version, then it'll be harder to update if there's a serious bug that needs urgent fixing. All of this was taken into account to calculate a priority score that would help a human user figure out what to do.
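As a hedged illustration of that idea (this is not whackadep's actual scoring, and the weights are invented), a priority score could look something like this:

```rust
// Toy "risk of not updating" score: known advisories dominate, then how far
// behind the latest version we are, with breaking versions weighing the most.
struct DependencyStatus {
    name: String,
    current: (u64, u64, u64), // currently pinned version (major, minor, patch)
    latest: (u64, u64, u64),  // latest published version
    known_advisories: usize,  // e.g. matches in the RustSec advisory database
}

// Higher score = more urgent to update.
fn priority_score(dep: &DependencyStatus) -> u64 {
    let breaking_lag = dep.latest.0.saturating_sub(dep.current.0);
    let minor_lag = dep.latest.1.saturating_sub(dep.current.1);
    let patch_lag = dep.latest.2.saturating_sub(dep.current.2);

    // Falling behind on breaking versions matters more than minor/patch lag,
    // since catching up later for an urgent fix only gets harder.
    dep.known_advisories as u64 * 1000 + breaking_lag * 100 + minor_lag * 10 + patch_lag
}

fn main() {
    let dep = DependencyStatus {
        name: "some-crate".into(), // hypothetical crate name
        current: (1, 2, 0),
        latest: (3, 0, 1),
        known_advisories: 0,
    };
    println!("{} priority: {}", dep.name, priority_score(&dep));
}
```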
The risk of updating was calculated from a few heuristics, but was meant to be extendable. The presence of an updated build.rs file, for example, was a big red flag. The presence of a new, never-seen-before contributor in a release was also seen as a red flag. A number of these customizable heuristics would be used to calculate a risk score, which the user of the tool could then use to decide whether a human review was warranted.
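Here's a sketch of what such customizable heuristics might look like; the heuristic names and weights below are made up for illustration:

```rust
// Toy "risk of updating" score: each heuristic inspects an update and
// contributes to a score that decides whether a human should take a look.
struct UpdateInfo {
    build_rs_changed: bool,     // did the release touch build.rs?
    new_contributors: usize,    // never-seen-before contributors in this release
    added_dependencies: usize,  // new transitive dependencies pulled in
    unsafe_blocks_added: usize, // new `unsafe` blocks in the diff
}

fn update_risk_score(update: &UpdateInfo) -> u32 {
    let mut score = 0;
    // build.rs runs arbitrary code at build time, so changes there are a red flag.
    if update.build_rs_changed {
        score += 50;
    }
    score += update.new_contributors as u32 * 20;
    score += update.added_dependencies as u32 * 10;
    score += update.unsafe_blocks_added as u32 * 5;
    score
}

fn main() {
    let update = UpdateInfo {
        build_rs_changed: true,
        new_contributors: 1,
        added_dependencies: 2,
        unsafe_blocks_added: 0,
    };
    let score = update_risk_score(&update);
    // An arbitrary threshold decides whether a human review is warranted.
    if score >= 50 {
        println!("risk score {}: flag for human review", score);
    } else {
        println!("risk score {}: auto-accept", score);
    }
}
```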
Of course, in Rust-land you get dependency updates every single day, so you need more reassurance that a review is not needed. For this, we came up with the idea of a trusted set of libraries. As Linus said one day (although I can't find that quote anymore): security works via a network, and if you don't do security via a network you're dumb. He said it sort of like that, I believe. The idea is that you can't do everything yourself, and the only way to scale is to trust others. Thus, by whitelisting a number of libraries as being part of a club of well-maintained libraries with well-known authors, we could reduce the burden of reviewing dependencies.
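In practice, the trusted-set check can be as simple as the following sketch (the crate names in the allowlist are just examples of what a team might decide on):

```rust
// Minimal allowlist check: updates to trusted, well-maintained crates skip
// manual review, everything else gets flagged.
use std::collections::HashSet;

fn needs_review(crate_name: &str, trusted: &HashSet<&str>) -> bool {
    !trusted.contains(crate_name)
}

fn main() {
    // Example allowlist; a real one would be maintained and reviewed by the team.
    let trusted = HashSet::from(["serde", "tokio", "rand"]);
    for dep in ["serde", "left-pad-rs"] {
        println!("{}: needs review = {}", dep, needs_review(dep, &trusted));
    }
}
```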
Anyway, the whole point of this article is to show you that dependencies are a huge problem in most software stacks, and that solutions to deal with this don't really exist yet. Human processes are doomed to fail, so you need an automated, machine-driven component to help you. Interestingly, I just finished reading Zero to One a few days ago, and in the book Peter Thiel makes the point that machines shouldn't try to replace humans but instead provide services that make them superheroes. This is what whackadep was aiming to do: give developers superpowers. I hope I inspired some of you to continue pursuing this idea, because we're going to need some help.