
Visualize Brooks's Law: When more people make a late software project later

Brooks's Law visualized. How can you measure trends in development output with respect to the number of contributing authors?


Four decades ago, Fred Brooks coined what we now know as Brooks’s Law: “adding manpower to a late software project makes it later.”

 

That idea has since found support in empirical research.

What is Brooks’s Law? The key idea is that beyond a certain point, each additional person added to a project brings a fixed number of extra hours, typically 40 hours a week. The number of available hours grows linearly with each additional person. However, the number of possible communication paths expands much more rapidly, and at some point each additional person becomes a net loss: the gain in available hours is consumed by the additional coordination and communication overhead…and then some.
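The arithmetic behind this is worth making concrete. The sketch below contrasts the linear growth in available hours with the quadratic growth in pairwise communication paths, n(n−1)/2; the 40-hour week is the only other assumption:

```python
# Contrast linear capacity growth with quadratic communication overhead.
def available_hours(team_size: int, hours_per_week: int = 40) -> int:
    """Available hours grow linearly: each person adds a fixed weekly capacity."""
    return team_size * hours_per_week

def communication_paths(team_size: int) -> int:
    """Pairwise communication paths grow quadratically: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:>2} people: {available_hours(n):>4} hours/week, "
          f"{communication_paths(n):>3} communication paths")
```

Going from 10 to 20 people doubles the available hours but more than quadruples the communication paths – the coordination overhead outpaces the capacity gain.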

 


Brooks's Law predicts that adding more people to a late project makes it later.

 

Over the years, I’ve seen way too many software endeavors fall into this trap. Software development is made up of complex, interdependent tasks with a high degree of uncertainty. We can improve the way we develop products, we can build loosely coupled architectures with autonomous teams, and yet all software projects have a point where the work cannot be broken up into smaller pieces without taking on even higher costs.

I think the main reason we keep repeating the classic mistake outlined in Brooks’s Law is that the message is counter-intuitive – when we’re busy and lagging behind our time plan, adding more people is the gut response and an easy thing to do – and the effects are largely invisible and opaque. After all, we don’t see “coordination costs” accumulating on a bill or in a corner somewhere. Or maybe we do – we just need to look in the right corner. Let’s have a look at how CodeScene manages this.

 

Measure Brooks’s Law and Development Output Trends

To measure the effect of staffing changes, we need some kind of independent variable. Traditional “velocity” metrics don’t work particularly well since they 1) are based on estimates, 2) fluctuate over time, and 3) are local to a particular team, whereas we want to measure the whole organization to capture inter-team coordination effects. I have found better precision in a higher-level metric like the number of completed tasks, the number of commits delivered to master, or simply the lead time for change. I then normalize the output so that we get the development output per author over time. Here’s what the results look like on a specific project:
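As a minimal sketch of that normalization step, assuming we have already extracted (month, author) pairs from the version history – the sample data below is made up for illustration:

```python
from collections import defaultdict

# Hypothetical (month, author) pairs, e.g. extracted from `git log`.
commits = [
    ("2023-01", "alice"), ("2023-01", "alice"), ("2023-01", "bob"),
    ("2023-02", "alice"), ("2023-02", "bob"), ("2023-02", "carol"),
]

monthly_commits = defaultdict(int)   # total output per month
monthly_authors = defaultdict(set)   # contributing authors per month
for month, author in commits:
    monthly_commits[month] += 1
    monthly_authors[month].add(author)

for month in sorted(monthly_commits):
    per_author = monthly_commits[month] / len(monthly_authors[month])
    print(month, len(monthly_authors[month]), "authors,",
          round(per_author, 2), "commits/author")
```

The same normalization works for completed tasks or lead time; commits are just the easiest signal to extract from a repository.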

 


An early warning system for Brooks's Law by measuring trends in development output with respect to the number of contributing authors.

 

 

The preceding visualization is calculated by the CodeScene tool. The graph shows that the measurable output decreases over time (the solid red bars) while at the same time the total number of contributing authors increases at a steep rate. This is a warning sign: the widening distance between the total number of authors and their normalized output shows that with additional developers, each developer seems to become less productive.
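That warning sign lends itself to a simple check: flag when the author count trends upwards while the normalized output trends downwards. A sketch using least-squares slopes over hypothetical monthly series:

```python
# Flag the Brooks's Law warning sign: authors trending up,
# per-author output trending down. The sample series are hypothetical.
def slope(values):
    """Least-squares slope of a series against its index."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

authors_per_month = [5, 8, 12, 17, 23]       # contributing authors
output_per_author = [4.0, 3.6, 3.1, 2.4, 1.9]  # normalized output

brooks_warning = slope(authors_per_month) > 0 and slope(output_per_author) < 0
print(brooks_warning)  # True for this sample
```

Only the signs of the slopes matter here, not their magnitudes – it's the divergence between the two trends that signals trouble.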

Finally, note that these metrics have their limitations as well. That’s why I never use the absolute numbers; like so much else in software development, the signal is in the trend.

 

Investigate the Delivery Decline

While Brooks’s Law is one potential reason that a project might hit a wall, there might be other explanations as well. Thus, we cannot – and shouldn’t – jump to conclusions, but rather view a decline in output as an invitation to investigate the project in more depth:

 

  1. Overstaffed: Maybe the tasks aren’t really parallelizable, which means the organization incurs increased coordination overhead through meetings?
  2. Technical Debt and Code Health: Could it be that the organization has taken on excess technical debt that is now hurting the progress?
  3. On-Boarding Costs: Maybe the project is indeed appropriately staffed, but the decline in output is due to accumulated on-boarding costs?

 

The first likely issue, over-staffing, is usually visible in each developer’s calendar. If your calendar looks like a chessboard of sync meetings, reporting events, scrum of scrums, and whatnot, then that’s a sure sign of Brooks’s Law in action.

The second likely cause, an accumulation of technical debt, is visible in CodeScene’s Code Health scores for Hotspots.

The third point is on-boarding costs, which deserve a special mention since they are – just like Brooks’s Law – a largely hidden and invisible effect. Let’s investigate on-boarding effects by measuring team composition trends.

 

Measure Team Composition Trends

Sometimes a development effort is appropriately staffed, but it takes time for people to get up to speed with a new codebase; on-boarding always comes with a cost. Hence, the team composition in terms of experience on your specific codebase is an important factor. CodeScene measures team experience as shown in the next figure:

 

 


CodeScene visualizes team composition with respect to experience accumulation and on boarding.

 

This team composition visualization includes the following information based on the actual contribution span of each author:

 

  • Monthly team composition in terms of experience on this particular product. There are three categories: on-boarded (0-3 months), experienced (~6-12 months), and veterans (typically 12+ months).
  • Total accumulated experience measured as total months worked on the codebase for the whole team (black line).
  • Qualitative team experience: this is a weighted value where we consider the experience of each developer currently in the team (blue line). On-boarding a developer will come at a slight cost for a period of time. The model also takes ramp up effects into consideration.
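A minimal sketch of such a weighted experience score; the tenure categories loosely follow the list above, while the discount factors and ramp-up handling are illustrative assumptions, not CodeScene's actual model:

```python
# A sketch of a weighted "qualitative experience" score. The tenure
# categories follow the text; the discount factors are assumptions.
def experience_weight(months_on_codebase: float) -> float:
    if months_on_codebase < 3:      # on-boarded: still ramping up
        return 0.25
    if months_on_codebase < 12:     # experienced
        return 0.75
    return 1.0                      # veteran

def qualitative_experience(team_tenures):
    """Weighted tenure sum, discounting developers still ramping up."""
    return sum(m * experience_weight(m) for m in team_tenures)

team = [1, 2, 8, 14, 30]            # months on the codebase, per developer
accumulated = sum(team)             # the "black line": raw tenure months
qualitative = qualitative_experience(team)  # the "blue line"
print(accumulated, qualitative)     # the gap approximates on-boarding cost
```

With two recent hires on this hypothetical team, the qualitative score sits noticeably below the raw accumulated experience – that gap is the on-boarding cost made visible.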

 

Often, the area between the black and blue lines is the interesting part. The wider the gap, the higher the on-boarding costs. On-boarding can of course also be viewed as an investment. From that perspective, the area between the lines shows the unrealized potential if the organization manages to keep the team stable.

The team composition analysis is also useful to highlight the effects of High Author Churn: treating people like interchangeable cogs that can be taken in and out of a project comes with a cost. That cost is largely due to lower system mastery – project members get a shallower understanding of the codebase, suffer repeated on-boarding effects, and likely lose motivation. Keeping a team stable allows for learning and is very likely to have a positive impact on the development output. Let’s have a look at a project that shows the effects of high author churn, where the qualitative team experience fails to accumulate:

 

An example of high author churn, likely to lead to low system mastery and constant on-boarding costs.


 

When used together with the Brooks’s Law development output trends, these graphs help visualize the costs and effects of excessive on-boarding and staffing changes. As always when trying to measure developer and organizational productivity, the absolute numbers aren’t that interesting; the signal is in the trend. Do you see an increase or decrease in response to changes in staffing?

 

Explore More and Try CodeScene

CodeScene is a new generation of code analysis tools. Read more about how it differs from and compares to traditional static analysis tools in this article.

Check out our free white paper to explore CodeScene’s use cases.

CodeScene is available in both an on-premise version and a hosted cloud version. See our products page for all options.

Adam Tornhill


Adam Tornhill is a programmer who combines degrees in engineering and psychology. He’s the founder and CTO of CodeScene, where he designs tools for code analysis. Adam is also a recognized international speaker and the author of multiple technical books, including the best-selling Your Code as a Crime Scene and Software Design X-Rays. Adam’s other interests include modern history, music, retro computing, and martial arts.

