In a recent study of millions of open source commits[1], my colleague Albert Ziegler revealed a surprising gap between rising security awareness and the lack of secure-code improvements in practice. "Developers are increasingly aware about security," but "the security debt of open-source software is increasing."
This is not limited to open source. In many other articles[2] about secure coding, people debate "why developers don't care about security," highlighting this same gap between the objectives of the security department and the practices of the development teams. What is going wrong with developers? Don't they take security seriously?
It’s too simple just to think: “developers don’t care.” And it’s wrong. I'm a developer and I know that developers care about how their code impacts their users. The reasons for this disconnect must be sought elsewhere, and I'm going to tell you where.
In his famous talk "Drive: The surprising truth about what motivates us," Dan Pink points out three key factors for motivating people: autonomy, mastery, and purpose. When people say that developers don't care, they insinuate that the purpose of making software safer is not engaging enough for us. This is wrong. The problem is not a lack of purpose. I think secure coding is not widely adopted because writing secure code often comes at a high cost, dragging inefficient processes along with it, and fails to give developers the two other drivers: autonomy and mastery.
The developer point of view
I went through the same situation 10 years ago, when I was in charge of improving the code quality of an organization with 250 developers. There was a big gap between the developers and the Quality Assurance team, leading to the same blame game, and to the same wrong conclusion that developers didn't care about software quality. Interviewing the developers at that time, I saw the same causes I see today with security.
No autonomy. The organizational structures do not encourage self-driven developers. Often, the product security team is in charge of discovering vulnerabilities and urging developers to fix them. I've seen this process where we receive a weekly report on new issues for code we wrote weeks before. Often it arrives just before the release. And of course we are asked to change our priorities and urgently address the most severe ones.
No mastery. Developers are not given the opportunity to learn new things. Security is not an easy matter. There are many ways code can be vulnerable, and these ways vary with the programming language. This expertise is often limited to security teams and not transferred to development teams. We developers love being good at what we're doing. The mastery of a language, or of a technology, and being able to share this expertise, gives us so much satisfaction! And we also love learning new things. By contrast, just executing a task, without learning or increasing any skills in the process, is very frustrating.
Inefficiency. The last factor is the overhead of the process and tooling that developers are asked to use. Sometimes we don't have access to the security tools, only to the reports. In other cases, we are given access to these security tools, but this is "yet another tool" that we have to learn and use. And in most cases, the security review process is disconnected from our development process, with additional specific tasks that get scheduled after we have moved on to other work.
Resentment, frustration, overhead. It’s not surprising that collaboration doesn’t work. Motivating drivers are missing, and the whole process comes as an additional burden.
The alternative approach is to move the checks earlier in the process, integrated into the existing development workflow. How can we do that in practice?
Autonomy, mastery, efficiency
Autonomy. Change the roles! Make the developers accountable for security in their code, and the security team accountable for providing effective information and support.
Instead of the classical approach where the security team is a command center, give the developers a clear objective for the security compliance of their code and a list of prioritized security issues, and let them integrate those issues into their backlog. Autonomy cannot work without guidance and control, so give them all the information they need to understand the security priorities and to make the right decisions themselves.
Use the security team as a knowledge and help center. They will be in charge of providing this up-to-date, prioritized list of issues. They will also be available to explain why the code is vulnerable, why this one is more important than the others, why this other one can be easily exploited. And they will provide guidance on how to fix it.
Not only will you get more buy-in from developers, but you will also save time, as developers will be proactive at prioritizing the important issues. And in the case of a zero-day, one day can count.
To do this in practice, you have to reflect these responsibilities in the respective objectives and performance appraisals of the teams. From my past experience with code quality, the organization adapted this way: We moved the software quality objectives and KPIs from the Quality Assurance department to the Development department. And we added developers’ guidance objectives to the QA department. We can do the same for security. As Fermín J. Serna says: the goal of the “security folks” is “making security easy for non security people.”
Mastery. It’s not realistic to transform all developers into security experts. But it’s important for developers to always learn something and increase their skills, instead of just executing a task. We can take the expertise of the security teams, and share it with developers, case by case. With Semmle QL, this expertise is codified: the security researchers write the automated queries that find the vulnerabilities as executable code, in the QL query language.
Each of these queries comes with comprehensive documentation, examples of good and bad code, and references, building a codified, repeatable, and executable knowledge base of security vulnerabilities in software code. And these queries are open-sourced on GitHub, so this knowledge is enriched by the best security teams and shared with everyone.
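To make this concrete, here is a minimal sketch of what such a query can look like. The shape follows the public QL queries for JavaScript; the exact library predicates and metadata may differ between QL versions, so treat it as illustrative rather than authoritative.

```ql
/**
 * @name Call to eval
 * @description Calling eval can execute untrusted input.
 * @kind problem
 */

import javascript

// Find every call whose callee is named "eval"
from CallExpr call
where call.getCalleeName() = "eval"
select call, "Avoid eval: it may execute untrusted input."
```

The query reads like a database query over the code: `from` declares the code elements of interest, `where` constrains them, and `select` reports the findings with a message, which is exactly the structure the documentation of each query walks you through.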
The best part is that when we run these queries on our code, we get results we are familiar with, and that makes this learning process easier: in addition to the examples in the query documentation, we get examples from our own code, which we are likely to understand even better!
QL not only helps developers ramp up smoothly on security; it also lets us make use of our current mastery, now. We are the experts in our own code and in the language we write in. We can use this expertise to customize the QL queries to our context, to improve their precision, to find more potential security issues, or to remove false positives. This eventually magnifies the impact of the security expertise of our colleagues. With this approach, the developer moves from the child seat to the co-pilot seat, and that is so much more exciting!
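As a sketch of this kind of customization, suppose a team knows that a directory of generated code is a recurring source of false positives. Starting from a simple query (here, a hypothetical one flagging calls to eval in JavaScript), one extra condition restricts the results to the code the team actually maintains. The `generated/` path pattern is an assumption for illustration; adapt it to your own repository layout.

```ql
import javascript

from CallExpr call
where
  call.getCalleeName() = "eval" and
  // Our context: code under "generated/" is machine-produced and
  // reviewed separately, so we exclude it to remove false positives
  not call.getFile().getRelativePath().matches("generated/%")
select call, "Avoid eval: it may execute untrusted input."
```

This is the precision tuning the paragraph describes: the security expert wrote the general rule, and the developer, knowing the codebase, sharpens it.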
Efficiency. “Don’t make me leave my IDE!” is a motto I often heard from developers. We want to minimize interruptions of our workflow, by limiting the number of places we need to switch to. Personally, when it comes to coding, an IDE and a browser (for the review process) are the only things I want to use. I don’t want any other coding activity to take place somewhere else. Typically, I want to get the security code review at the same place as my existing code review, and I want to fix my code and re-run the security checks in my IDE. This is the kind of fast feedback loop I am used to when coding. And secure coding should not be an exception.
The Semmle LGTM platform integrates the security review within the standard code review, by running QL queries on pull requests and posting the status just as any human reviewer or continuous integration bot would. With the Semmle IDE plugins, I navigate directly from a security alert to my code, fix it, and run the query locally to get early feedback.
To perfect the loop, I can also update the query to find potential variants of the issue in my code, and I commit this new query into my continuous integration suite. This is no different from what I do when I break an integration test (fix the code, run the test, improve the test), so it doesn't disturb my current workflow.
Developers care about security a lot more than you think. What they lack is empowerment. Give them autonomy and guidance from the security experts. Give them the opportunity to increase their security skills, but also to use their existing coding expertise to help the security teams. And keep the security process overhead minimal.
Main image: Taseer Beyg CC BY-SA 4.0
[2] https://www.securityinnovationeurope.com/blog/page/do-software-developers-really-care-about-information-security or https://www.mediapro.com/blog/5-bogus-reasons-developers-dont-practice-secure-coding/ or https://www.securedevelopment.org/secure-development-handbook/incentivising-developers/