
Revisiting Conway's law



Conway's law goes more or less like this:

An organization is bound to produce software systems which are copies of its internal structure.

It remarks on the fact that software components communicate along the same paths as the people who build them, both in the quantity and the quality of the interactions.

Eric Raymond's restatement of the law is:

If you have 4 groups working on a compiler, you'll get a 4-pass compiler.

and indicates a temporal splitting of responsibilities between subsystems; but that's just one example of how multiple teams can divide the work.

The law appears in The Mythical Man-Month (what doesn't?) and of course it has been restated many times, but applied sparingly, like the rest of the book.


A large project can be managed by dividing it into different components which run in different processes and can, in principle and in practice, be distributed over multiple machines for scaling. It's not only the load on the system that is at stake: it's also about getting a new programmer to contribute quickly. The larger the complexity and the build time of a project, the more intimidating it is for new people working on it.

The main aim of this segregation of a project into different, smaller teams is to reduce the N^2 communication paths between people. Communication channels scale with the square of the participants (n(n-1)/2 paths for n people), as Brooks's law says; so a single team of 14 is much less effective than two teams of 7 collaborating through a standard protocol and working on two different codebases.
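
A back-of-the-envelope calculation makes the difference concrete (counting the shared protocol as a single inter-team channel):

    <?php
    // Pairwise communication channels among n people: n(n-1)/2.
    function channels($n) {
        return $n * ($n - 1) / 2;
    }

    echo channels(14), "\n";        // 91 channels in a single team of 14
    echo 2 * channels(7) + 1, "\n"; // 43: two teams of 7 plus one
                                    // inter-team channel (the protocol)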

So these so-called Service-Oriented Architectures are a necessity not only for scaling capabilities but also for organizing the work of dozens of programmers into isolated teams. In the paper introducing the Dynamo distributed database, the Amazon folks write that a single page is composed from about a hundred different services.

A smaller-scale example I have lived through: in my work at Onebip we have 4 PHP components talking to each other through HTTP; each has its own database, sometimes MySQL and sometimes MongoDB (no shared application state).
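
As a minimal sketch (the service names and endpoints are invented), each component sees the others only through their HTTP APIs, never through their databases:

    <?php
    // A component asks another for a subscription's state over HTTP
    // instead of reading the other component's database directly.
    $raw = file_get_contents('http://billing.internal/subscriptions/42');
    $subscription = json_decode($raw, true);

    if ($subscription['status'] === 'active') {
        // proceed, persisting only in this component's own database
    }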

Cross-project issues such as queuing and other infrastructure concerns are tackled with a libraries project containing standard packages the organization has agreed upon; coding standards and the programming language stay consistent between projects.
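
A hypothetical example of such a standard package (the Queue interface and the AmqpQueue class are invented for illustration):

    <?php
    // Every component publishes messages through the same agreed
    // interface, whatever the underlying broker is.
    interface Queue
    {
        public function publish($channel, array $message);
    }

    class AmqpQueue implements Queue
    {
        public function publish($channel, array $message)
        {
            // talk to the broker; the projects never see these details
        }
    }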

While I sometimes travel from project to project, acting as a consultant to the other teams even where I'm outside my area of expertise, most of my colleagues have a residence project where it's most efficient for them to work and where they talk closely with the same colleagues. It's only natural for them to specialize as they work more and more within the same boundaries.

Faux reuse

The trouble comes when the organization structure and the software structure do not match, creating friction of two kinds:

  • inside projects, packages diverge as different mini-teams solve problems in different ways, and no standard or reuse can emerge.
  • between projects, everyone works on everything, and no specialization develops that could make adding a feature less costly.

I'm all for reducing bus factors, but continuously switching people between different projects and architectures tends to confuse them about style (they carry over the style, conventions, and terminology of the old project) and to waste their time extracting a library to avoid redoing old work on the new project, even when the former solution does not fit the new context.

A common example is configuration management. Some projects keep most of their configuration inside a deployed .ini file; others read it from a database because of its sheer size. Each of these implementations solves a different problem (customizing by environment versus managing a million business rules). A Domain-Driven Design saying is that duplication is problematic inside the same Bounded Context, but not between different projects: there, solutions stay cleaner and specialized instead of becoming enterprise frameworks aiming to work in every possible situation. After all, it is a fact of software engineering that producing a reusable solution is three times as difficult as producing a specialized one.
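
A sketch of the two styles, with invented file paths and collection names:

    <?php
    // Project A: a handful of environment-dependent values,
    // kept in a deployed .ini file.
    $config = parse_ini_file('/etc/app/config.ini');
    $apiUrl = $config['api_url'];

    // Project B: too many business rules for a flat file, so they
    // live in the project's own database ($db is assumed to be an
    // already connected MongoDB handle, legacy driver).
    $rule = $db->business_rules->findOne(array('country' => 'IT'));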

When friction is at work between the team structure and the software structure, it's necessary to change one or the other. Which one to change is your responsibility to choose.

Impact of the separation between projects

A general knowledge of how the different components work together will always be necessary for debugging: for example, knowing the main persistence storages, such as the MongoDB collections used by each project, along with the logs of the communication protocols.

This knowledge helps in writing end-to-end tests when a new feature makes them necessary, and in debugging them when something goes wrong. The goal to strive for is channels between projects so standardized that implementing a new feature only means working on one of the subsystems plus the end-to-end tests.
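
A sketch of such an end-to-end test (PHPUnit, with an invented endpoint): the feature is exercised only through the standard HTTP channel, never by reaching into another subsystem's internals.

    <?php
    class SubscriptionEndToEndTest extends PHPUnit_Framework_TestCase
    {
        public function testANewSubscriptionBecomesActive()
        {
            // drive the system only through its public channel
            $raw = file_get_contents('http://api.internal/subscriptions/42');
            $subscription = json_decode($raw, true);

            $this->assertEquals('active', $subscription['status']);
        }
    }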

On the other hand, greater communication is needed inside each project, to avoid having N logging mechanisms or N HttpRequest objects (where N is the number of programmers). By growing the project one member at a time, each new programmer absorbs style, principles, and vision from the previous members. Brooks is also a fan of having a single architect, which I intend here as a single architect per subsystem: one person cannot scale to cover everything, and having him take decisions for distant projects he does not work on every day would defeat the growth and the dignity of the individual subsystem architects.


Another issue with compartmentalization is cannibalization. When one system is more responsive and flexible than the others, it ends up assuming more responsibilities than its subdomain warrants. This happens because responsibilities tend to align with people instead of with the underlying business domain.

Why is one system more responsive than another? Because more good programmers work on it, for example. Read that as: every project is staffed only by good programmers, but one team is a bit bigger; not as one project being staffed by a higher percentage of good programmers and the other by code monkeys. The same reasoning applies when one of the projects has a very low bus factor, which limits how many different features can be parallelized within it.


The assertion that follows from Conway's law is: if you want cohesive and decoupled systems, put a different team to work on each of them, so that the communication structure of the teams resembles what you want to achieve with the projects.

Inside each team, there will be a fast redistribution of responsibilities, and refactoring can be more aggressive; between the teams, a shared protocol emerges which is much more stable and backward compatible as it evolves. Segregation of responsibilities starts with our own work organization, not just with CRC cards...
