
The Verdict on Microservices

I have been using microservices for about three years now, across two different companies, and I think I have enough experience to draw a conclusion on how this industry-wide experiment has played out.

For microservices to make sense, they need to be put in contrast with what they came as a response to: the monolith. And to understand the monolith, we have to understand the time before the Internet.

In ancient times

In the before days, software development was aimed at delivering applications designed to run on a single computer and to respond to the interactions of a single user. Real alien stuff, I know.

This software would most likely spawn a single process and, if you were lucky, a couple of threads, so that graphical interface interactions could be handled concurrently with other tasks.

In this scenario, development looks like what you would expect: pull the source code for the entire application, make a small modification, rebuild the codebase, run the modified application, then finally push the source code changes if you are happy with them.

Since the end goal of this ordeal was to ship a version of that very same application to run on the end customer's machine -- more likely than not much less powerful than the developer's machine -- one could expect the performance of this application to be kept at a reasonable level.

Finally, since this piece of software was designed to be understood and used by a single person, its complexity would be kept in check as well.

Into The Monolith

Now, the Internet arrives and we can ship the software to the almighty Cloud, freeing it from the confines of the single-machine, single-user application.

The same development loop is inherited from the previous model: clone the source, add something, build, test, and push the modifications. The difference now is that we aren't burning our code (and mistakes) onto immutable CDs shipped worldwide. No, sir.

Now, a few times a day a continuously running pipeline will check out the current state of the codebase, build it and deploy it to a fleet of servers in the cloud.

For a small company with a few teams, that might work very well: most likely no one is stepping on anyone else's toes. Still, there is the occasional “sorry, I broke production”, which rolls back the whole codebase and annoys everyone else who had nothing to do with that boo boo.

Now, for a large company with thousands of teams pursuing different and sometimes diverging goals, having everyone work on the same codebase and share the same deployment cadence in harmony is close to impossible. Which brings us to...

Yay, Microservices

Well, if we have thousands of teams, why not let each team build and manage the deployment of its own software? Sensible enough.

How are they going to talk to each other? Since we're talking about distinct processes which, by definition, do not share a memory space, our only tools are message passing and interfaces.

Alright, we need many more processes than a single machine can handle, and we may sometimes distribute applications heterogeneously across different machines. How are the pieces of software going to talk to the other pieces of software that they need to talk to? Network calls.
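To make that concrete, here is a minimal sketch in Go of what "talking over the network" looks like from one service's side. The service name and endpoint are made up for illustration; a real system would layer service discovery, authentication, and proper error handling on top.

```go
// One process calling another over HTTP: what would be a plain
// function call in a monolith becomes a network request.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "inventory.internal" and the /stock/ path are hypothetical.
	resp, err := http.Get("http://inventory.internal/stock/sku-123")
	if err != nil {
		// The network can fail in ways an in-process call never could.
		fmt.Println("call to inventory service failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("inventory says:", string(body))
}
```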

And to the verdict

The microservices pattern does solve a problem: It breaks up the unit of deployment. It does away with the so-called monolith and its ever-increasing complexity creep.

But, well, make no mistake: the system, defined as the composite of the microservices that make it up, keeps growing in complexity all the same. Microservices do not manage or tame the overall system complexity. Au contraire, my dear reader.

What would be simple function calls within the same binary executable is now communication between processes across a network. A function call can never fail -- the processor will always jump to the requested instruction -- and the function's input and output data types are validated statically at build time (assuming a statically-typed language). None of these things taken for granted in the monolith come naturally to microservices: communication over the network may fail (so we need to implement retries, timeouts, and backoff), and two microservices might disagree on how to serialize and deserialize a message (typically when there's an interface contract mismatch).
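As a sketch of the extra machinery this implies, here is what a retry wrapper with per-attempt timeouts and exponential backoff might look like in Go. The function name and service address are assumptions for illustration, not a prescription:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetries issues an HTTP GET with a timeout bounding each
// attempt and exponential backoff between failed attempts.
func getWithRetries(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second} // per-attempt bound
	backoff := 100 * time.Millisecond
	var lastErr error

	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			// Success, or a client error that retrying won't fix.
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("server returned %s", resp.Status)
			resp.Body.Close()
		}
		time.Sleep(backoff)
		backoff *= 2 // double the wait before the next attempt
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// "inventory.internal" is the same hypothetical service as before.
	if _, err := getWithRetries("http://inventory.internal/stock/sku-123", 3); err != nil {
		fmt.Println(err)
	}
}
```

None of this code does business logic; it exists purely because a function call became a network call.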

Cross-cutting features that were once not easy, but at least tractable, to implement are now days-long cross-service efforts: a change must be propagated while preventing interface breakage at each step, reasoning in terms of backwards compatibility the whole way.
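To illustrate the kind of backwards-compatibility reasoning involved, here is one sketch of evolving a message without breaking consumers, using Go's encoding/json and a hypothetical order message. The trick is that the new field is optional: old consumers ignore it, and new consumers must tolerate its absence.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OrderV1 is the message shape old consumers still expect.
type OrderV1 struct {
	ID    string `json:"id"`
	Total int    `json:"total_cents"`
}

// OrderV2 adds a field. Kept optional, the change stays
// backwards compatible across the service boundary.
type OrderV2 struct {
	ID       string `json:"id"`
	Total    int    `json:"total_cents"`
	Currency string `json:"currency,omitempty"` // new, optional
}

func main() {
	// A new producer emits a V2 message...
	msg, _ := json.Marshal(OrderV2{ID: "o-42", Total: 1999, Currency: "USD"})

	// ...which an old consumer can still decode: the unknown
	// "currency" key is silently dropped, and nothing breaks.
	var old OrderV1
	_ = json.Unmarshal(msg, &old)
	fmt.Printf("old consumer sees: %+v\n", old)

	// A new consumer decoding an old (V1) message gets the zero
	// value for the missing field, which it has to handle.
	var v2 OrderV2
	_ = json.Unmarshal([]byte(`{"id":"o-41","total_cents":500}`), &v2)
	fmt.Printf("new consumer sees: %+v\n", v2)
}
```

Multiply that little dance by every field, every message, and every pair of services, and the days-long estimate starts to make sense.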

Let me propose just one thing: if you find yourself, more often than not, having to touch more than a single microservice to make a change, maybe those shouldn't be different services, because those pieces of code, independently deployable as they are, have dependencies beyond their externally-facing interfaces.

I think the benefits of microservices only outweigh their costs when interfaces and responsibilities are very well-defined, and such clarity is very unlikely to have been reached at the start of any project. So, I argue, do not start with microservices.

If the company in question has fewer than, say, 10 teams or 100 developers, I'd suggest not even bothering with microservices: the pattern does not solve any problem you actually have. But if we grow... well, solve it then.

I think microservices should be treated as an optimization. What do we say to optimizations? Not today. Do the obvious non-stupid thing first. When it starts hurting, you will know. Then you measure it, and only then you optimize.