Angry developers: How code quality is affected by negative emotions

August 22, 2019


The other day, I had to overpay horribly on the bus when the driver refused to give me change for a tenner. When I arrived at my computer I was still fuming. Appropriately, the first routine I wrote completely went up in flames when I tested it. It made me wonder: was it just coincidence, or would I have succeeded on my first attempt if I hadn't been so angry? And what if I had been happy instead, or sad?

Basically, I want to know: Is there a connection between a developer's emotional state and their quality of work?

Let's find out.

Measuring the quality of a piece of work

I’m going to look at developers’ commits and compare “how good” those commits are in terms of code quality to what the emotional state of the developer may have been at the time of the commit. To do that, I infer sentiments from commit messages.

On LGTM.com, several million commits have been analyzed and “alerts” identified. These can indicate a wide range of problems with the code. The number of alerts is an indicator of code quality (as described in a previous blog post). It's not just a simple matter of counting alerts though: commits that mostly add code typically also introduce more alerts, and commits that mostly delete code typically eliminate more alerts.

A straightforward way of measuring the quality of a commit is by comparing it to commits with a similar number of net lines of code1. For the purposes of this blog post, I compare each commit to its 500 nearest neighbors2 in terms of net lines of code. If its net effect on the total number of alerts is better than all of them, the commit scores an alert rank of 100%. If it's worse than all of them, it scores 0%. If there are as many commits with a better effect than there are commits with a worse effect, the commit's alert rank is 50%.
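As a sketch of this ranking scheme (the array names, and counting ties as half a point, are my assumptions, not LGTM's actual implementation), the alert rank of each commit could be computed like this:

```python
import numpy as np

def alert_ranks(net_lines, net_alerts, k=500):
    """Rank each commit's net alert change against its k nearest
    neighbours in net lines of code. 100% = better than all neighbours,
    0% = worse than all; ties count half (a midrank)."""
    net_lines = np.asarray(net_lines)
    net_alerts = np.asarray(net_alerts)
    n = len(net_lines)
    k = min(k, n - 1)
    ranks = np.empty(n)
    for i in range(n):
        # the k commits closest in net lines of code, excluding commit i
        order = np.argsort(np.abs(net_lines - net_lines[i]), kind="stable")
        neighbours = [j for j in order if j != i][:k]
        neigh_alerts = net_alerts[neighbours]
        worse = np.sum(neigh_alerts > net_alerts[i])  # they add more alerts
        tied = np.sum(neigh_alerts == net_alerts[i])
        ranks[i] = (worse + 0.5 * tied) / k
    return ranks
```

A commit whose net alert change beats every neighbour scores 1.0, and one that equals half of its neighbours and beats none scores 0.25.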

The alert rank is a very rough measure: commits are small, and alerts are rare. Most commits (83%, to be exact) don't change the number of alerts at all. Those have an alert rank of pretty close to 50%. The other commits often just add or remove a single alert. I compensate for this low granularity and the high noise level by relying on the large size of the data. If a group of commits has been created under favorable circumstances, those commits should on average have a slightly better (that is, higher) alert rank.

Classifying emotions

Each commit comes with a commit message. I'm running a sentiment analysis on these messages to classify the emotions expressed in them as angry, or sad, for example. I don't distinguish between different degrees of sadness. But I do allow a message to be classified as both angry and sad.

Sentiment analysis is a challenging problem, and many tools have already been developed to tackle it. However, analyzing commit messages is more complicated. Sophisticated attempts to penetrate grammar and meaning are thwarted by incomplete sentences, copious abbreviations and a large amount of jargon.

Instead, I base my analysis pipeline on a tool called sentimentr3, which relies on quite robust heuristics. It doesn’t get everything right, but it isn’t systematically thrown by telegraphese and weird lexemes. It looks for emotionally charged words and checks whether their surroundings are likely to increase or reverse the thrust of the word. For example:

  • The statement "looks good to me" contains the positively charged word "good" in a straightforward way, so it's classified as positive.
  • The statement "looks really good to me" bolsters the word "good" with "really", so it's classified as very positive.
  • The statement "looks good to me, but technobabbletechnobabble" undermines the word "good" with "but", so it's only classified as slightly positive.
  • The statement "doesn't look good to me" inverts the word "good" through negation, so it's actually classified as negative.
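The examples above can be sketched as a toy valence-shifter scorer. The tiny lexicons, context-window sizes, and weights here are invented for illustration; sentimentr's real lexicons and rules are far richer:

```python
# Toy lexicons (sentimentr's real ones contain thousands of entries).
POLARITY = {"good": 1.0}
AMPLIFIERS = {"really"}
NEGATORS = {"doesn't", "not"}
ADVERSATIVES = {"but"}

def score(message):
    """Score a message by finding polarized words and adjusting their
    value by nearby negators, amplifiers, and adversatives."""
    words = message.lower().replace(",", " ").split()
    total = 0.0
    for i, w in enumerate(words):
        if w not in POLARITY:
            continue
        val = POLARITY[w]
        before = words[max(0, i - 3):i]   # few words preceding the hit
        after = words[i + 1:i + 5]        # few words following the hit
        if any(b in NEGATORS for b in before):
            val = -val                    # "doesn't look good" -> negative
        if any(b in AMPLIFIERS for b in before):
            val *= 1.8                    # "really good" -> very positive
        if any(a in ADVERSATIVES for a in after):
            val *= 0.5                    # "good, but ..." -> only slightly positive
        total += val
    return total
```

Run against the four example statements, this reproduces the orderings described above: plain positive, amplified, dampened by "but", and inverted by negation.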

This approach requires two lexicons. One is for the modifiers like “really”, “but” and “doesn’t” in the example above. I rely on sentimentr’s default here, which appears to work well. The other one is for words carrying an emotional value like “good” in that example. Sentimentr allows you to customize this, but its default is the Syuzhet lexicon4 which describes positive and negative words. I see no reason to deviate from that default when sorting commit messages into positive, neutral or negative.

But I also want to go beyond this one-dimensional classification to the specific emotional flavors like angry or sad. Here I use the NRC lexicon5, which classifies English words evocative of one or more of the following 8 basic emotions6: anger, fear, joy, sadness, trust, disgust, surprise, anticipation.

The following example word clouds show lexicon words commonly appearing in commit messages:

[Word clouds: lexicon words commonly appearing in commit messages]

For every message, sentimentr computes a score between +1 (strongly expressed) and -1 (strongly negated or denied) for each emotion. Inversion through negation or denial appears to be more complicated with specific emotions than it is with general positive/negative sentiments. Rather than fall victim to complex biases I know little about, I interpret negative scores for an emotion as "unclassifiable" and not as categories in their own right. So for each emotion (for example anger) I compare the emotionally charged messages (anger > 0) with the uncharged ones (anger = 0), ignoring the negatively charged ones (anger < 0). I don't distinguish between degrees of anger: either you're angry (anger > 0) or not (anger = 0).
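That thresholding rule can be sketched as a simple partition (the data shapes and names here are my own, for illustration):

```python
def partition_by_emotion(scored_messages):
    """Split (message, emotion_score) pairs into charged, uncharged,
    and dropped groups, mirroring the rule described above."""
    charged = [m for m, s in scored_messages if s > 0]     # e.g. angry
    uncharged = [m for m, s in scored_messages if s == 0]  # not angry
    dropped = [m for m, s in scored_messages if s < 0]     # negated: unclassifiable
    return charged, uncharged, dropped
```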

The emotions dictionary has been obtained by crowdsourcing. However, there are some words where the technical meaning quite obviously diverges from the meaning the crowd had in mind. For example, "argument" is listed as connected to anger, and that makes sense if you think of two people yelling at each other. In the programming world though, arguments are what you feed to a function, so the word should be quite neutral. I'm very reluctant to manually remove handpicked words, but I don't see any sensible alternative here. For each emotion, I've checked the very top of the list of most common words in the corpus and removed those which are obviously fishy. These are:

  • From anger: Argument, arguments, react (note that the most commonly analyzed language on LGTM is JavaScript), remove, shell, tree
  • From disgust: Bug, default, tree
  • From surprise: Tree7, variable
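A minimal sketch of that cleanup step (representing the lexicon as a mapping from emotion to a set of words is my assumption about the data layout):

```python
# Words whose technical meaning diverges from the crowd-sourced one,
# taken from the list above (lower-cased).
REMOVED = {
    "anger": {"argument", "arguments", "react", "remove", "shell", "tree"},
    "disgust": {"bug", "default", "tree"},
    "surprise": {"tree", "variable"},
}

def clean_lexicon(lexicon):
    """Drop the hand-picked words from an emotion -> set-of-words lexicon."""
    return {emotion: words - REMOVED.get(emotion, set())
            for emotion, words in lexicon.items()}
```

Note that a word is only removed for the specific emotion where it misfires: "tree" stays in any emotion list other than anger, disgust, and surprise.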

Just to give you a flavor of this sentiment analysis, here are some sample commits analyzed by LGTM that have been classified as positive, joyful, trusting, anticipatory, surprised, negative, sad, disgusted, fearful or angry.

On average, out of every 4 messages, about 1 is labeled positive, 1 negative, and 2 neutral.

[Bar chart: distribution of emotions across commit messages]

Sad means small, happy means huge

Commits have lots of interesting properties before I even get to their alerts. One of the most basic is size: How many lines of code does the commit consist of in total (either added or deleted)? It turns out that the emotions detected from the commit message predict this surprisingly well.

I measure the size of a commit using its churn rank. Churn refers to the sum of added and deleted lines. The churn rank of a commit is the percentage of other commits of the same language that have less churn than that commit. The biggest commit in a language has a churn rank of 100%, because all other commits are smaller. In theory, the strictly smallest commit has a churn rank of 0%. In fact though, there is a tie for smallest commit, with many commits adding just one line and deleting none, and many commits deleting just one line and adding none8. For example in JavaScript, this concerns the smallest 3.3% of commits. So they all share the same churn rank of 1.65%.
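A sketch of this churn rank, with tied commits sharing the midpoint rank (my interpretation of the shared 1.65% figure):

```python
import numpy as np

def churn_ranks(churn):
    """Percentage of other commits with less churn; tied commits share
    a midrank, so a bottom tie group covering 3.3% of all commits
    lands at roughly 1.65%."""
    churn = np.asarray(churn)
    n = len(churn)
    ranks = np.empty(n)
    for i in range(n):
        less = np.sum(churn < churn[i])
        tied = np.sum(churn == churn[i]) - 1  # other commits with equal churn
        ranks[i] = (less + 0.5 * tied) / (n - 1)
    return ranks
```

The biggest commit gets exactly 100%, and commits tied for smallest all share the same low rank.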

In the following plot, larger areas correspond to larger average churn ranks:

[Plot: average churn rank by detected sentiment, per language]

This is a very clear effect: Ebullient commit messages for big commits, cautious ones for small commits. It's also pretty uniform across the three different programming languages I looked at (Java, JavaScript, and Python).

Also, it's not just that positive messages are typical of commits that change a lot of lines. They're also typical of commits that mostly add new lines instead of deleting old ones9:

[Plot: adding rank by detected sentiment, per language]

These effects are clear, but they're not very mysterious. There are at least three plausible mechanisms behind them:

  • When you're feeling happy, you're less critical, so you're quickly adding new code. You're not so much focused on pruning away inferior code.
  • When you get a lot of work done, especially if it consists mainly of new additions, you're happy and express that through your commit message. When you get only a little work done, and what you do is mostly focused on pruning inferior code away, that makes you a lot less happy.
  • Even a completely objective message about adding many new features likely evokes more positive emotions than an objective message about tweaking a couple of old routines.

All three effects may well work together: When the coder is happy, the code flows. When the code flows, the coder is happy. When reporting flowing code, the description sounds happy.

Quantity or quality?

So happy people write more code per commit, but do they write better code? The answer is no, as a comparison of average alert ranks shows:

[Plot: average alert rank by detected sentiment, per language]

Generally, the effects are very similar across all three languages (Java, JavaScript, and Python):

  1. Negative sentiments, in particular sadness, are related to cleaner code. This fits in with a wider pattern that has been observed across many domains: Sadness or dysphoria is often observed to coincide with critical thinking and improved judgement.
  2. The one negative emotion that leads to more mistakes is anger, which is proverbially known for having a blinding effect.
  3. Fear is also often seen as blinding, but commits with messages classified as fearful are of above average quality. This might be because the emotion detection does not properly distinguish between fear and caution. It makes sense that cautious commits would be better commits: Prudence prevents problems.
  4. Surprise and anticipation are associated with bad commits. However, surprise is expressed quite rarely, and when it is, it’s often combined with anticipation. If surprise stands on its own, it’s not indicative of a bad commit. So the real culprit of the two is anticipation, possibly due to developers getting ahead of themselves rather than focusing on the task at hand. After all, as Camus said:

Real generosity towards the future lies in giving all to the present.

Statistical significance of observed effects

The absolute values of the average differences are quite small. As noted above, this is expected due to the granularity of the measurement process. But it also means that I need to take special care to ascertain whether these are real effects or if I'm just overinterpreting some random fluctuations. For this, I use p-values11 to see whether the effects are significantly distinct from random noise.

They mostly are, especially for JavaScript and Python (where data is more abundant than for Java, but also where the effects seen above seem a bit stronger10):

[Plot: p-values of the observed effects, per language]


Programmers aren't robots, and their emotions do matter. As Master Yoda (more or less) put it:

Fear leads to anger. Anger leads to bugs. Bugs... lead to suffering.

Don’t let your project team suffer. You can’t eliminate their anger (and you shouldn’t eliminate their joy). But you can get rid of the mistakes those emotions bring with them, and many others besides.

LGTM helps you with that. It automatically analyzes GitHub and Bitbucket projects so you see which issues in your codebase need fixing. This way, LGTM acts as a trusted code reviewer. It flags up potential problems before they are merged into your codebase.

This lets you enjoy the freely flowing code without fear that your happiness risks mistakes that will cause toil and anger down the line. Master Yoda’s vicious circle is avoided, and you become a true coding Jedi.

Image credits

Title image credit: Daniel Huntley

Note: Post originally published on 08/17/2018

  1. It’s possible to be even stricter and compare a commit only to other commits with both a similar net and churn. This is more sensitive to the amount of data. Nevertheless, I used it to double check, and the effects are very similar to the ones where the alert rank is based only on the net number of lines. In particular, all the results mentioned in the text below hold for both variants.

  2. In the case of ties, more than 500 neighbors may be used. For example, there are 1592 Python commits which add exactly 50 more lines of code than they remove.

  3. Rinker, T. W. (2017). sentimentr: Calculate Text Polarity Sentiment version 1.0.1. University at Buffalo. Buffalo, New York.

  4. Jockers, M. L. (2015). Extract Sentiment and Plot Arcs from Text. Nebraska Literary Lab.

  5. Mohammad, S. and Turney, P. (2013). NRC Word-Emotion Association Lexicon. National Research Council Canada.

  6. These are the primary emotions according to the research of Plutchik. Irrespective of the merits of this claim to primacy, I think they make a decent set of standard emotions for analysis.

  7. I have to admit that I found the range of emotions associated with trees to be quite... surprising. I had a quick google, and now I know about Christmas tree surprise and tree as a metaphor for anger in English poetry. I still don't get the disgust part though. Email albert at if you can explain it to me.

  8. In fact, many commits don't change any lines of code, but modify one or more non-coding files. They're not interesting for this investigation and so I've excluded them from the data set.

  9. To see how much a commit focuses on adding to the codebase rather than pruning it, I use a similar method to the one for quality. Commit quality is the rank of net alerts (where a commit is only compared to other commits with similar net lines of code). Focus on addition is the rank of net lines of code (where a commit is only compared to other commits with similar total churn).

  10. Java commits don’t seem to be as strongly affected by strong emotions as JavaScript or Python commits. Why do Java programmers appear to be so stoic? It’s certainly not that they don’t use emotionally charged commit messages. In fact, 7 of the 10 emotions used in this analysis are most often detected in Java commit messages (only surprise, sadness and fear are more common in Python commits). One reason might be that Java commits have already undergone the regularizing influence of the compile and build process. Mechanisms like static type checking might stop Java code being swayed by emotions quite as easily as JavaScript and Python.

  11. The distribution of alert ranks is so different from a normal distribution that the assumptions of a standard t-test do not hold. Nor can I use a Wilcox test, because while the alert rank has been crafted in such a way that net lines changed has been removed as a confounding factor for the mean, it's still confounding the median. Thus, I use a bootstrapped test based on 50,000 simulations for each p-value. This is computationally more expensive, but it doesn't require any further assumptions.
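A sketch of such a simulation-based test, here in the form of a label-permutation scheme (the exact resampling used on the LGTM data may differ):

```python
import numpy as np

def bootstrap_p(charged, uncharged, n_sim=50_000, seed=0):
    """Two-sided p-value for the difference in mean alert rank between
    charged and uncharged commits, estimated by shuffling the group
    labels n_sim times. No distributional assumptions needed."""
    rng = np.random.default_rng(seed)
    charged = np.asarray(charged, dtype=float)
    uncharged = np.asarray(uncharged, dtype=float)
    observed = charged.mean() - uncharged.mean()
    pooled = np.concatenate([charged, uncharged])
    n = len(charged)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(pooled)  # reassign group labels at random
        diff = pooled[:n].mean() - pooled[n:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    # add-one smoothing avoids reporting an exact zero p-value
    return (hits + 1) / (n_sim + 1)
```

With identical groups the p-value is 1; with a clear shift between groups it drops toward 1/(n_sim + 1).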