Every day we read about yet another way technology has failed society. Understandably, consumers are angry about the way “digital overlords” play fast and loose with their lives. For me, a presentation by several eminent computer scientists about the social responsibility of computing professionals prompted me to start thinking about the topic. In this blog post, I want to share some of my thoughts about that responsibility, especially as it relates to secure software.
Software developers are a tiny elite
Are you a software developer? Congratulations! You are one of a tiny elite that affects the lives of everyone else on the planet through the software you create. GitHub alone hosts the work of 28M developers, so the elite is at least that large - but contrast that with the 4bn people who use the internet.
That elite is failing society
This small group owes the world secure software: others trust us with many aspects of their lives, and we have to repay that trust. Unfortunately, we are failing to do so. We allow a tiny fraction of our number, a few bad actors, to wreak havoc with the lives we’re entrusted to enhance. We allow the Equifax breach to happen, stand by while cars are controlled remotely, and watch innocent users of web browsers give away the entire contents of their computers simply by visiting the wrong URL.
Ignorance is the root cause
Why are we failing society so badly, not providing the security that we should? Unfortunately, it’s largely a matter of ignorance: many developers don’t know about security problems, and are too busy creating new features to worry about security. New threats appear constantly, and much software was created before those threats were even known. There’s an acute shortage of security experts to research new threats, and to educate everyone else on taking action.
Security research helps awareness
Those few experts do some awesome security research in the name of public good. Google Project Zero is perhaps the most famous example: its researchers attack like a bad actor would, but then responsibly disclose their findings so the holes can be plugged. This is effective in drawing attention to high-impact problems, but it’s not scalable: genius security researchers are even scarcer than developers, and understanding codebases consisting of many millions of lines of code is an arduous manual task.
Automation is the answer
Automation, therefore, is key: it reduces the dependence on security experts in defending against known vulnerabilities, and it enables any developer to leverage the work of a small number of security experts, or even to engage in security research of their own. Here are six techniques every company should use to achieve those goals:
Monitor for known attacks
You can automatically monitor running systems for known attacks, such as those reported in the National Vulnerability Database. These checks are, however, retrospective: the exact attack details must already be known, so that recurring attacks can be detected and stopped.
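As a minimal sketch of such monitoring, the following scans request logs against a list of signatures for already-reported attack patterns. The signatures and log lines here are invented for illustration; a real system would pull its signatures from a feed such as the NVD:

```python
import re

# Hypothetical signatures for known, already-reported attack patterns.
KNOWN_ATTACK_SIGNATURES = {
    "path-traversal": re.compile(r"\.\./"),
    "sql-injection": re.compile(r"(?i)union\s+select"),
}

def scan_request_log(lines):
    """Return (line_number, signature_name) for every suspicious entry."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for name, pattern in KNOWN_ATTACK_SIGNATURES.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

log = [
    "GET /index.html HTTP/1.1",
    "GET /../../etc/passwd HTTP/1.1",
    "GET /item?id=1 UNION SELECT password FROM users HTTP/1.1",
]
print(scan_request_log(log))  # [(2, 'path-traversal'), (3, 'sql-injection')]
```

Note the limitation described above: the scan only catches attacks whose signatures are already on the list.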
Make new attacks harder
Ideally, you make new attacks prohibitively hard for bad actors to develop. One such technique is known as moving target defense: for instance, by ensuring that no two executables for the same software component are identical, it becomes too labour-intensive for bad actors to create exploits for certain types of vulnerability.
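A toy sketch of the idea (not a real build system) might look like this: by mixing a per-build seed into the artefact and randomising its layout, every build of the same source yields a different binary, so an exploit tailored to one layout does not transfer to the next:

```python
import hashlib
import random

# A toy "component" made of independent functions. A diversifying build
# system can emit them in a different order, with a per-build salt, so
# that every build it produces is unique.
FUNCTIONS = ["auth", "parse", "render", "log"]

def diversified_build(seed):
    """Produce one build artefact whose layout depends on the seed."""
    rng = random.Random(seed)
    layout = FUNCTIONS[:]
    rng.shuffle(layout)                         # randomise function ordering
    artefact = f"salt={seed}\n" + "\n".join(layout)
    return hashlib.sha256(artefact.encode()).hexdigest()

# No two builds of the same source are identical.
print(diversified_build(seed=1) == diversified_build(seed=2))  # False
```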
Prevent coding mistakes
When you, as a developer, introduce a vulnerability, say by using an outdated software component with known problems, or by using untrusted data in the wrong place, you want to be notified straight away. Tools that do this are known as static analysers. Like the monitoring systems above, traditional static analysis is only effective when the exact patterns are known, and the update cycle is slow when a new vulnerability type is discovered.
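A tiny sketch of the first kind of check, flagging an outdated component, might look like this (the advisory data, package names, and versions are all invented for illustration):

```python
# Hypothetical advisory data: component -> versions with known problems.
VULNERABLE_VERSIONS = {
    "examplelib": {"1.0.0", "1.0.1"},
    "parsekit": {"2.3.0"},
}

def check_dependencies(pinned):
    """Warn about pinned dependencies that have known vulnerabilities."""
    return [
        f"{name}=={version} has known vulnerabilities; upgrade it"
        for name, version in pinned.items()
        if version in VULNERABLE_VERSIONS.get(name, set())
    ]

pinned = {"examplelib": "1.0.1", "parsekit": "2.4.0"}
print(check_dependencies(pinned))
# ['examplelib==1.0.1 has known vulnerabilities; upgrade it']
```

As the text notes, a check like this only fires once the vulnerable versions are known and published.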
Automate security testing
Automated testing techniques like fuzzing make security researchers much more effective at finding new problems, guiding them to areas worth investigating. These fuzzing techniques are now being packaged for use by developers on their own systems, reducing the need for security experts to be involved in routine testing.
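As a minimal illustration of the idea, the fuzzer below throws random byte strings at a deliberately buggy toy parser and records every input that triggers a crash (both the parser and its bugs are made up for this sketch):

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy length-prefixed parser with deliberate bugs."""
    length = data[0]                  # bug: crashes on empty input
    if length > 0:
        first_byte = data[1]          # bug: crashes when the payload is missing
    return data[1:1 + length]

def fuzz(target, runs=2000, seed=0):
    """Feed random inputs to `target`; collect the inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(6)))
        try:
            target(data)
        except Exception as exc:      # unexpected crash: a bug to investigate
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers are far smarter (coverage-guided mutation rather than blind randomness), but the workflow is the same: generate inputs, watch for crashes, hand the crashing inputs to a developer.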
Automate variant analysis
Security experts often find new threats through variant analysis: starting from a known vulnerability, they find all instances of the same logical mistake. When they find such new variants, they should have a way to encode their new-found knowledge, so that any developer can immediately benefit and determine whether their own code is affected. This is what Semmle QL enables: you can write concise queries over source code to find deep security problems.
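Semmle QL itself is beyond the scope of this post, but the idea behind variant analysis can be sketched in a few lines of Python using the standard `ast` module: given one finding (an unsafe call to `eval`), a query hunts down every other instance of the same mistake. The code being scanned is invented for illustration:

```python
import ast

# Code under review. A security review found the first eval() call;
# variant analysis asks: where else did we make the same mistake?
SOURCE = """\
result = eval(user_input)            # the original finding
count = int(raw_count)
config = eval(remote_config)         # a variant lurking elsewhere
"""

def find_eval_calls(source):
    """Return the line number of every call to eval()."""
    return sorted(
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    )

print(find_eval_calls(SOURCE))  # [1, 3]
```

A real QL query works at a much deeper semantic level (data flow, call graphs), but the shape is the same: encode the mistake once, then let the machine find every variant.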
Continuous analysis of every commit
With the Semmle QL queries in hand, you want to share the knowledge widely and apply the queries to every commit on every repository. With Semmle LGTM, you can do exactly that. You don’t only run the queries on the codebase where you discovered the “root vulnerability” that first inspired a query, but also on all the others in your portfolio, so all developers benefit. It’s akin to the legacy static analysers mentioned above, except that the set of rules is dynamically updated by the community, staying a step ahead in the security arms race.
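The effect can be sketched as follows: one shared query, applied across an entire portfolio of repositories, surfaces the problem wherever it occurs. The repository names, file contents, and query identifier below are all invented for illustration:

```python
# One shared query: does a file call eval() anywhere?
def unsafe_eval_query(files):
    return [path for path, text in files.items() if "eval(" in text]

QUERIES = {"py/unsafe-eval": unsafe_eval_query}

# A portfolio of repositories, simplified here to in-memory file maps.
PORTFOLIO = {
    "billing-service": {"app.py": "total = eval(expr)"},
    "web-frontend": {"views.py": "page = render(template)"},
}

def analyse_portfolio(portfolio, queries):
    """Run every query against every repository; collect the alerts."""
    return [
        (repo, path, query_name)
        for repo, files in portfolio.items()
        for query_name, query in queries.items()
        for path in query(files)
    ]

print(analyse_portfolio(PORTFOLIO, QUERIES))
# [('billing-service', 'app.py', 'py/unsafe-eval')]
```

In a continuous-analysis service this loop runs on every commit, so a newly contributed query immediately protects every codebase in the portfolio.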
YOU are the security research team
In order to encourage this sharing of security expertise through queries, Semmle publishes the results of its own security research as open source, as analyses that can be executed on the community site LGTM.com. Several of Semmle’s most prominent customers freely share the results of their security research in the same repository. We already have hundreds of analyses in our GitHub repository, and the community of contributors is growing rapidly.
Join us in securing the software that runs the world!