OODA for Code Review: Improving Your Code Quality

July 23, 2019



This is part two of a two-part series. If you don't know what OODA loops are, or how they can help with code, please read part 1.

Dealing with fragile code? The OODA loop theory to the rescue

In the previous post, The OODA Loops Theory: Tame Your Fragile Code, I described how I successfully applied OODA-based practices, inspired by military theory, to cope with fragile code. But practices cannot simply be repeated from one context to another with the same success: developers must figure out what works best in their specific context. This is why I am always excited to discover and try new practices for improving code quality, based on the OODA theory.

Let me share some examples of such ideas around code review. Why code review specifically? Because I think, like many developers1, that it's "the top practice to improve code quality".

Observe better

See what’s important (and only that)

Observing data is important, but having too much data has a dangerous side effect: we stop seeing the important parts. What if we could see only what is important?

Some code needs more scrutiny

During code review, spend more time reviewing changes in sensitive code, the code that is most subject to mistakes. This information should be captured in code review checklists shared by all reviewers.
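To make this concrete, here is a minimal sketch of how such a checklist rule could be automated. The path patterns and function names are hypothetical examples, not a recommendation; each team would substitute the areas its own checklist marks as sensitive.

```python
# Hypothetical sketch: flag changed files that touch sensitive areas,
# so reviewers know where to spend extra scrutiny.
import fnmatch

# Example patterns a team might mark as "needs extra review" in its checklist.
SENSITIVE_PATTERNS = ["*/auth/*", "*/payments/*", "*migration*", "*.sql"]

def needs_extra_scrutiny(changed_files):
    """Return the subset of changed files matching a sensitive pattern."""
    return [f for f in changed_files
            if any(fnmatch.fnmatch(f, p) for p in SENSITIVE_PATTERNS)]

changed = ["src/auth/login.py", "docs/README.md", "db/2019_migration.sql"]
print(needs_extra_scrutiny(changed))
# -> ['src/auth/login.py', 'db/2019_migration.sql']
```

A script like this could run in CI and post its findings as a pull request comment, turning a static checklist into a live reviewing aid.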

Layout and typography

In his presentation “Beautiful Code”, Peter Hilton invites us to forget our old monospaced font, and to consider “how layout and typography can make code beautiful”, and “how this would change the programming experience”. In the example below, he gives an idea to focus on the important pieces of the code. Notice how the comments are put aside in the left margin, how some code stands out (bold, bigger, red), how code that should be removed is grayed. The reader will naturally focus on the important pieces of code, without being distracted by what is secondary.

[Image: the fast inverse square root function, reformatted with layout and typography]

Don't flood developers with minor comments

If you are using automated code review, you will face the same problem. In an empirical study published in 2016, What Developers Want and Need from Program Analysis2, Christakis and Bird from the Microsoft Research team found that “The largest pain point [of program analyzers] is that the default rules or checks that are enabled in a program analyzer do not match what the developer wants”.

But what do developers want? The millions of public pull requests in open source projects are an endless and ever-growing source of information, and I am always impressed by what the machine learning algorithms developed by my colleagues for LGTM can tell us about the behavior of developers. At the moment, LGTM pays attention to how often and how quickly alerts get introduced and fixed, and this information is used to make sure that the results we display are accurate and relevant to developers. My colleagues are also looking to expand the range of things we can learn from pull requests, to make the code review process even more effective.

Continuously perfect your observations

Following the OODA cycle, if your observations are not effective, your decision should be to improve them, before cycling back to the observation phase.

In Christakis and Bird's study, the second most hated pain point of code analyzers is a high false positive ratio: the tool reports errors where none exist. What happens when there are too many false red signals? Well, you stop trusting the data, and end up ignoring all of the red signals, including the critical ones.
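A quick back-of-the-envelope calculation shows why this ratio matters so much. The numbers below are purely illustrative, not taken from the study:

```python
# Illustrative numbers only: how a high false positive ratio erodes trust.
def precision(true_alerts, false_alerts):
    """Fraction of reported alerts that are real problems."""
    return true_alerts / (true_alerts + false_alerts)

# An analyzer that raises 30 real issues buried among 70 spurious ones
# is right less than a third of the time -- and soon gets ignored.
print(precision(30, 70))  # -> 0.3
```

Once precision drops this low, developers rationally stop reading the alerts at all, and the 30 real issues are lost along with the noise.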

[Image: High Voltage pissing contest]

I have seen this too often: a dashboard of test results never entirely green, an endless list of compilation warnings, or an insane number of code smells. Every time, the same behavior occurred: as soon as it was clear to the developers that a significant number of these signals were false positives in their context, they lost trust in all of them. In my experience, this behavior is even worse with automated code review tools than with compilation warnings or tests.

Having experienced the pain of false positives, I am lucky to now work on a product where they can be fixed so quickly. QL allows for false positives to be easily removed by adding a few lines to the relevant query, either to fix a real bug in the query, or to customize it to the specifics of your code. As a result, most false positives can be fixed in a matter of minutes, and over time, the quality of the results produced by the queries is continuously improving.

Let's look at this example of a false positive reported by the Fawkes Robotic project.

The query in question looks for resources that are acquired but never released, by flagging open / close patterns where the close statement is missing from the resource's destructor. In this particular project, however, resources were managed using a specific init / finalize pattern, and were not released in the destructor. The query therefore generated wrong alerts for all such resources, claiming they were not released although they were, through the call to finalize. It was not a bug in the query, but still a false positive for this particular project.

It was easy for them to tweak the query and extend the concept of destructor to their specific finalize case. The query code checking that a function is called from a destructor just changed from this:

private predicate calledFromDestructor(Function f) {
  (f instanceof Destructor and
   f.getDeclaringType() = this.getDeclaringType())
  or
  exists(Function mid, FunctionCall fc |
    calledFromDestructor(mid) and
    fc.getEnclosingFunction() = mid and
    fc.getTarget() = f and
    f.getDeclaringType() = this.getDeclaringType())
}

to this:

class FawkesDestructor extends MemberFunction {
  FawkesDestructor() {
    this instanceof Destructor
    or
    this.getName() = "finalize"
  }
}

private predicate calledFromDestructor(Function f) {
  (f instanceof FawkesDestructor and
   f.getDeclaringType() = this.getDeclaringType())
  or
  exists(Function mid, FunctionCall fc |
    calledFromDestructor(mid) and
    fc.getEnclosingFunction() = mid and
    fc.getTarget() = f and
    f.getDeclaringType() = this.getDeclaringType())
}

They simply created a new class, FawkesDestructor, comprising both the native Destructor and their specific case, and used it throughout the query instead of Destructor. This customized the query to their context without disabling it entirely and losing the benefit of catching real problems.

Cherry on top: As a user, you can perform this kind of fix yourself!

You can see the discussion where this query was customized on the LGTM forum.

Boost your orientation with the open source community

The success of Stack Overflow proves that knowledge sharing is instrumental to software developers. Another example is the growing number of open source projects. And this is particularly true when the knowledge is scarce. John Boyd, author of the OODA loop theory, says: “In a competitive sense, where individuals and groups compete for scarce resources and skills, an improved capacity for independent action achieved by some individuals or groups constrains that capacity for other individuals or groups”.3

This is what happens, for example, in the security field. This expertise is rare and expensive, demand has been growing, and big tech companies are competing to hire security researchers.

This is why I am convinced by our ambition, here at Semmle, to catalyze open security: the Semmle QL analyses are open source, welcoming contributions by all security researchers. In addition, LGTM.com runs these queries freely for all public open source projects, powering their code review with the expertise of the whole community.

Loop faster by applying a "shift-left" approach to your code review

Pair programming

People already agree that code review should happen early. Why not take it a step further, and experiment with pair (or mob) programming? Not only does it give you all the benefits of code review at the earliest possible stage (while writing the code), but it has many other benefits. Check out this experience report, in which XP and continuous refactoring coach Philippe Bourgau tells the story of his team trying to improve code reviews, and finally ending up with pair programming.

Augment your current workflow, don't slow it down

Christakis and Bird say in their study that “Program analysis should take a two-stage approach, with one analysis stage running in real time [...] and another overnight finding more intricate issues”. Yes, add linters and syntactic checks directly in your IDE.
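As a sketch of that first, real-time stage, here is a hypothetical pre-commit helper that lints only the staged files. The linter name, file extension, and hook wiring are assumptions; adapt them to whatever tools your project already uses.

```python
# Hypothetical sketch: run a fast linter on staged files only, so the
# check stays quick enough to live in a pre-commit hook.
import subprocess

def staged_files(extension=".py"):
    """List staged files with the given extension (requires a git checkout)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f.endswith(extension)]

def lint(files, linter="flake8"):
    """Return True when the linter passes, or when there is nothing to lint."""
    if not files:
        return True
    return subprocess.run([linter, *files]).returncode == 0

# Wire it into .git/hooks/pre-commit and exit non-zero to block the commit:
#   raise SystemExit(0 if lint(staged_files()) else 1)
```

Restricting the check to staged files keeps the feedback loop tight; the slower, deeper analyses belong in the second stage, at code review time.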

Programs that perform deeper code analysis and catch complex security vulnerabilities, such as Semmle’s QL engine, need more time and resources, and can't be run at writing or compilation time without slowing down the developer. But rather than running overnight, code review is the right place for this kind of analysis, so that results can be checked and problems caught before code is merged.

As a developer, I don't want to adapt my workflow to new tools, I want all tools to integrate seamlessly within my development workflow. So I want my linters to work inside my IDE, and I want the deeper code analyzers such as LGTM to show up in my pull requests. This is how I will loop faster!

[Image: LGTM automated code review check on a GitHub pull request]

You can install the free LGTM GitHub App on your open source projects to get automated code review enabled for your projects in LGTM.


Software development is a fast-changing world, requiring constant adaptation. Your working environment and your code must be kept clean, easy to navigate, easy to change! Can we learn from military theories to improve our daily work? John Boyd claimed that his OODA loop was the central mechanism enabling adaptation to any fast-changing world, not limited to a military context. And indeed, we can detect its principles in many proven software development practices. So let’s follow these principles and explore new ideas for regaining control of our code!

Voltaire wrote “Qui plume a, guerre a”, which can be translated as “To hold a pen is to be at war”. I hope he will forgive me for changing his words to “To use a keyboard is to be at war”.

Image Credits:

  • Main photo by Airman 1st Class Kevin Sommer Gi - 19th Airlift Wing - Copyright: Public Domain
  • High Voltage pissing contest by Ron Bailey

  1. "Since 2016, respondents to our annual report have overwhelmingly identified code review as the top practice for improving code quality." The State of Code Review 2018 - SmartBear. https://smartbear.com/resources/ebooks/the-state-of-code-review-2018/

  2. What Developers Want and Need from Program Analysis: An Empirical Study, Maria Christakis and Christian Bird, Microsoft Research, Redmond, USA - September 2016. https://mariachris.github.io/Pubs/ASE-2016.pdf

  3. DESTRUCTION AND CREATION, John Boyd, September 1976, http://www.goalsys.com/books/documents/DESTRUCTION_AND_CREATION.pdf