Fix Your Approach
A Super Grandmaster and a new chess player both make the same move on the first turn. However, the reasoning behind each player's move is different. As the game progresses, the Super GM continues to approach the board with an objective: to checkmate the King. All the while, the new player haphazardly makes moves until the game happens to end.
A magical ending.
I have consulted with many teams that get microservices wrong. These teams get them wrong because they approach microservices like a new player approaches chess. The player understands how to make the first move. The player understands how the game is won. It's the strategy in between that separates the winners from the losers.
And when it comes to microservices, many programmers are losers.
To improve in chess, you must step back and analyze your decisions. So when a team gets microservices wrong, do people step back and analyze their decisions? No. The availability heuristic leads programmers to make sweeping claims about microservices and how they don't work. Others read hundreds of articles on "when to use microservices". Yet all of this information is irrelevant once you understand one thing.
So now I will tell you what that thing is.
The Conceptualization of Microservices
The term microservices architecture has its own formal definition. However, understanding a definition without its context is useless. It also encourages cargo cult programming. So even though this article discusses the microservices architecture, knowledge of its definition is IRRELEVANT.
More important is the context behind the conceptualization of the microservices architecture: It's not an organizational tool. Instead, microservices were made for scale. No... Not that Mongo Monkey bullshit. Actual "infinite" scale. What is scalability? The ability to increase the number of operations a service can handle.
As an example, a Web Server service sends (serves) pages to a user's client. It does so by processing user requests (one operation) and then sending the page to the user (another operation). Scaling this Web Server means increasing the number of pages it can serve in a given time period (e.g., per second).
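To make the two operations and the "pages per second" measure concrete, here is a minimal sketch. The function names (`process_request`, `send_page`) and the page format are illustrative assumptions, not a real server:

```python
import time

def process_request(path):
    # Operation 1: parse and validate the incoming request (simulated).
    return {"path": path, "valid": path.startswith("/")}

def send_page(request):
    # Operation 2: render and return the page body to the client.
    return f"<html>page for {request['path']}</html>"

def serve(path):
    # One full unit of work: process the request, then send the page.
    return send_page(process_request(path))

# Rough throughput measurement: how many pages can we serve in 0.1s?
start = time.perf_counter()
count = 0
while time.perf_counter() - start < 0.1:
    serve("/home")
    count += 1
pages_per_second = count / 0.1
```

Scaling this service means making `pages_per_second` bigger, by any means necessary.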
So how does a microservices architecture provide infinite scale?
Just as a single atom has a limit to the amount of force it can output at a given moment, a computer has a physical limit to the amount of processing power it can output at a given moment. These limits are defined by the laws of physics. As a result, vertically scaling a service's computer (by upgrading it) is bound to become IMPOSSIBLE at some point. So what if, when you reach this point, you horizontally scale the service (by using another computer) instead?
In order to add another computer (for the same operation), it must exchange information with the first computer. The process of exchanging information is known as communication. However, communication comes at a cost to processing speed, called latency. From here, the idea is straightforward: when you need more power, add another computer (service) with minimal latency, and use communication (over a computer network) to coordinate the services. So simple.
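A toy model of the trade-off. The numbers and the `coordination_cost` parameter are made up for illustration; the point is only that horizontal scaling buys throughput while communication taxes each operation:

```python
def effective_throughput(computers, ops_per_computer, coordination_cost):
    """Total ops/sec across `computers` machines, where distributing work
    costs `coordination_cost` (a fraction of each operation's time)."""
    if computers == 1:
        return ops_per_computer  # one machine: no communication needed
    # Once work is distributed, every operation pays the latency tax.
    return computers * ops_per_computer * (1 - coordination_cost)

single = effective_throughput(1, 1000, 0.05)  # one machine: 1000 ops/sec
scaled = effective_throughput(4, 1000, 0.05)  # four machines: 3800 ops/sec
```

Even with the tax, four machines beat one, and you can keep adding machines long after upgrading a single box stops being possible. That is the whole pitch.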
How can you f*ck this up?
Microservices Gone Awry
Fuck Around and Find Out: Let’s ask a few questions.
Should I use microservices?
Do you need “infinite” scale? If your program has yet to be created, consider saving time now rather than later by implementing horizontal scalability. Otherwise, if your program already exists, a microservices architecture is relevant when you can NOT vertically scale or use active-active load balancing (with clones of your monolith).
How should I structure my code?
It doesn’t matter. Microservices is not about the code. It’s about the architecture.
Should I use one repository or multiple?
It also doesn’t matter. Microservices is not about the code. It’s about the architecture.
With that being said…
Should one service depend on another?
Yes? No? Maybe.
Let's say you split the Web Server into two microservices: one for processing requests and one for sending pages. When a user sends a request, it goes to the sending service, which communicates with the processing service BEFORE the user's client loads the page (from the sending service's content). In this case, the sending service depends on the processing service.
So what happens when the processing service goes down?
In the worst implementation, users stop receiving pages because nothing is being processed. However, you can fix this by adding logic in the sending service to send an “experiencing server issues” page to the user (when the processing service is down). This allows the user to receive pages even when the processing service is down. Thus, a linear dependency is not the end of the world in a microservices architecture.
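Here is a sketch of that fallback in the sending service. The `fetch_processed` call and `ProcessingDown` error are hypothetical stand-ins for however your sending service actually talks to the processing service:

```python
class ProcessingDown(Exception):
    """Raised when the processing service cannot be reached."""

def fetch_processed(request, processing_up):
    # Stand-in for a network call to the processing service.
    if not processing_up:
        raise ProcessingDown()
    return f"<html>content for {request}</html>"

def send_page(request, processing_up=True):
    try:
        return fetch_processed(request, processing_up)
    except ProcessingDown:
        # Degrade gracefully: the user still receives a page.
        return "<html>experiencing server issues</html>"
```

The dependency is still there, but its failure mode is now a worse page instead of no page.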
But… When the sending service requires information from the processing service, the following scenario occurs.
The SRE is paged in the middle of the night. Users aren’t receiving pages because the processing service is down, even though the sending service is up and… Oh. F*ck. The sending service is also down now! WHY DIDN’T MICROSERVICES SAVE ME??? Because you created a circular dependency with your services you dipshit.
Aghhh, just fuck my shit up!
So that's when you hop on your favorite internet forum and create a huge rant about how microservices are unnecessary and also complex. "The industry should have stuck with HTML before version five, and websh*t development is becoming increasingly draining…" Then you click on this article, which points out how you are a LOSER.
Chill out. It’s not that serious. Just don’t use circular dependencies within your microservices architecture.
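One way to catch this before the 3 a.m. page: model your services as a dependency graph and reject cycles up front. This is a generic depth-first search sketch; the service names are just the ones from the example above:

```python
def has_cycle(deps):
    """DFS over a {service: [dependencies]} map; True if any cycle exists."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: we've found a cycle
        visiting.add(node)
        if any(visit(d) for d in deps.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in deps)

linear = {"sending": ["processing"], "processing": []}
circular = {"sending": ["processing"], "processing": ["sending"]}
```

`linear` is the survivable architecture from above; `circular` is the one that pages your SRE.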
So using the microservices architecture requires you to manage your dependencies much like you manage them in your code. Fortunately, programmers have created tools that manage dependencies among multiple containers of services (e.g., Docker and Podman containers). These tools, such as Kubernetes and Nomad, are called container orchestrators because they orchestrate containers.
So when it comes to the polyrepo vs. monorepo debate, the real question is: can your orchestrator handle it?
A container (of services) is to a computer as a computer is to a cluster: a computer cluster manages multiple computers (called servers). Programmers use Cloud Computing to manage the computers within these clusters virtually. Infrastructure-as-code tools, such as Terraform, were created to manage the state of the cloud. So that's why "Cloud Computing is made for scale".
Good luck managing that shit on your own.
b- but… WHAT ABOUT my database?
Your database is a service. So it makes sense to follow the rules we established above: don't implement a circular dependency, even if that requires you to use multiple databases. Yes, you will end up with duplicate data. No, you aren't likely to end up with the same schema for two separate services. Hence the database-per-service recommendation.
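A toy illustration of that duplication, with in-memory dicts standing in for each service's database (the field names and services are the hypothetical ones from the Web Server example):

```python
sending_db = {}     # owned exclusively by the sending service
processing_db = {}  # owned exclusively by the processing service

def register_user(user_id, email):
    # Each service stores the fields *it* needs. The email is duplicated
    # rather than read across a service boundary, so neither service
    # depends on the other's database being up.
    processing_db[user_id] = {"email": email, "requests_handled": 0}
    sending_db[user_id] = {"email": email, "pages_sent": 0}

register_user(42, "user@example.com")
```

The schemas already differ (`requests_handled` vs. `pages_sent`), and they will only diverge further as each service evolves.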