


Everyone wants a microservices architecture, but what are the costs?
Camper vans are like microservices architectures in a way: they’re really popular these days, largely because they seem like the ticket to greater agility. But in practice, they don't always work out as planned, especially if you fail to plan for all of the potential challenges you'll face before you're in the thick of using them.
We can't offer much perspective on whether or not a camper van is right for you (although the Internet has plenty of thoughts). What we can offer is an overview of the pros and cons of microservices – with an emphasis on the potential cons, because they're easy to overlook when you're getting excited about the potential of moving your monolithic apps to a microservices architecture.
To be clear, we're not here to say that microservices are inherently bad, or that no one should use them. But we do think it's important to burst the hype bubble surrounding microservices a bit. The reality of microservices architectures is that – although they truly have the potential to deliver lots of great benefits – many things can also go awry when you use microservices without the proper preparations.
What is a microservices architecture?
A microservices architecture is an application development strategy that breaks application functionality into a suite of services. This is the opposite of what's known as a monolithic architecture, in which your entire application runs as a single process.

Importantly, as Martin Fowler and James Lewis note in one of the most influential essays on microservices, there is "no precise definition" of a microservices architecture. In other words, there is no one right way to design a microservices app. Nor are there specific tools, programming languages or deployment technologies that you must strictly use to implement a microservices app. Instead, you should think of microservices as a high-level style or approach to app development more than a rigid recipe for designing applications.
That said, in practice, most microservices architectures involve the following characteristics:
- Each microservice handles a discrete facet of application functionality: There are no hard-and-fast rules about how to divvy up functionality, but you might, for instance, devote one microservice to handling authentication, and another to interfacing with a database that your app needs to access.
- Codebases are broken into discrete units: Developers maintain the code for each microservice separately.
- Each microservice is deployed separately: Running each microservice in its own container is the most common way of achieving isolation between microservices during runtime, but it's not the only viable approach. You could also use serverless containers, or even just deploy each microservice as a separate, non-containerized process.
- Microservices run, update and fail independently of each other: Each microservice can be redeployed or updated without touching the others, and if one microservice stops running, the others remain operational (although the functionality that the non-running microservice provides becomes unavailable).
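To make the first two characteristics concrete, here's a minimal sketch (all names are hypothetical) of two services that each own a discrete slice of functionality and keep their state strictly private, communicating only through their public interfaces:

```python
# Hypothetical sketch: two microservices modeled as independent units.
# In a real deployment each would run as its own process or container;
# here they are separate classes with no shared state.

class AuthService:
    """Owns authentication and nothing else."""
    def __init__(self):
        self._users = {"alice": "s3cret"}  # service-private state

    def authenticate(self, user, password):
        return self._users.get(user) == password


class CatalogService:
    """Owns product lookups and nothing else."""
    def __init__(self):
        self._items = {"sku-1": "camper van awning"}

    def lookup(self, sku):
        return self._items.get(sku)


# Each service is created, deployed and updated independently; the only
# coupling between them is their public interface.
auth = AuthService()
catalog = CatalogService()
```

The key point isn't the classes themselves but the boundary: neither service can reach into the other's data, so each codebase can be maintained and redeployed on its own.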
What’s all the excitement about?
If you follow conversations about DevOps and cloud-native computing, you know that folks have been pretty pumped about microservices for the better part of the past decade. The Internet is chock-full of articles that sing the praises of microservices, and they're a hot topic at tech conferences like KubeCon + CloudNativeCon.
The buzz surrounding microservices in recent years doesn't mean the concept suddenly emerged, however. Microservices architectures actually have a long history that stretches back decades, but they didn't really catch on and gain mainstream attention until the early-to-mid 2010s.
So, why did everyone go gaga over microservices starting about ten years ago? That's a complex question, but the answer probably involves the popularization, around the same time, of two other key trends: DevOps and cloud computing. You don't need microservices to do DevOps or use the cloud, but microservices come in handy in both contexts. For DevOps, microservices make continuous delivery easier in certain important respects because they let you break complex codebases and applications into smaller units that are easier to manage and deploy. And in the cloud, microservices can help you consume cloud resources more efficiently and improve the reliability of cloud apps.
The upside of using microservices
That, at least, is a high-level summary of why microservices have become so popular. But a more specific explanation of the benefits of microservices includes the following factors:
- Fast and easy deployment: Since microservices can typically be deployed independently of each other, it's faster and easier to deploy a microservices app – and to make continuous updates to it in order to implement new business capabilities – than it is to deploy and update a monolith. Being able to deploy microservices in containers, which helps to provide parity between dev and prod environments, also simplifies things from a software delivery perspective.
- Scalability: By a similar token, microservices architectures make it possible to scale applications quickly and – if desired – granularly. You can quickly deploy additional instances of the microservices that your app needs to handle an uptick in requests, and you can shut the instances down when they're no longer needed to save money.
- Code maintainability: By breaking large codebases into smaller pieces, microservices make developers' lives easier. It's simpler to implement new features or track down and fix bugs when you can work within the code of just one microservice, as opposed to having to worry about an entire app's codebase.
- Fault tolerance: The fact that microservices usually operate independently of each other means that they have high fault tolerance. As we’ve explained above, if one microservice fails, the others keep working, increasing the reliability of your app.
- Experimentation: In some respects, microservices allow developers to experiment more while reducing risk. If you want to add a new feature to your app, you could deploy it as a separate microservice to test it out. If it causes issues, you could simply remove it, without having to redeploy the entire app.
These are the technical reasons why software architects, developers, DevOps engineers, SREs and everyone else who cares about fast, reliable applications are into microservices these days.
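The fault tolerance point is worth seeing in miniature. The sketch below (service names are hypothetical) shows the pattern: when a call to one microservice fails, the app degrades that single feature instead of going down entirely:

```python
# Hypothetical sketch of fault tolerance via graceful degradation.

def recommendations_service(user):
    # Simulate an outage in one microservice.
    raise RuntimeError("service unavailable")


def homepage(user):
    # Core functionality that does not depend on the failing service.
    page = {"user": user, "items": ["sku-1", "sku-2"]}
    try:
        page["recs"] = recommendations_service(user)
    except Exception:
        # The recommendations feature degrades; the rest of the app stays up.
        page["recs"] = []
    return page
```

In a monolith, an unhandled failure in the recommendations code could take the whole process down; here the blast radius is confined to one service boundary.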
The potential pitfalls of a microservices architecture
We said it before, and we'll say it again: Although microservices can deliver a lot of benefits, they also have the potential to introduce a lot of problems – especially for teams that fail to plan ahead to mitigate those problems.
The main challenges you're likely to run into if you adopt a microservices architecture include:
- Interprocess communication: Although microservices can run independently, they need to talk to each other and share data. That requires a complex interprocess communication framework within the application – typically, one driven by APIs. Implementing interprocess communication increases developer effort. Interprocess communication also adds to the number of things that could go wrong with an app, and it can make it challenging to maintain data consistency across discrete microservices.
- Fatter technology stack: Microservices require more resources to run in the sense that a microservices app typically depends on an orchestrator, an API gateway or service mesh and a cluster of servers. This means you end up with a "fatter" technology stack than you'd use for the typical monolith, which you can run on just a single server, without needing an orchestration layer or other special services to help manage your distributed app.
- Testing and debugging: Getting to the root cause of performance issues tends to be tricky when you use a microservices architecture because it's not always readily obvious which microservice is causing a problem. For instance, an error in your app's login process may be triggered not by the microservice that handles authentication, but by some backend microservice that the authentication process depends on.
- Deployment complexity: Because each microservice needs to be deployed separately, microservices multiply the effort required to deploy applications. You'll typically need to set up a different CI/CD pipeline and release automation tooling for each microservice, which is a lot more effort than having to manage just one deployment for a monolithic app.
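The interprocess communication point deserves a small illustration. The sketch below (names and retry policy are hypothetical, and the "network" is simulated with a function call) shows the overhead a microservices boundary adds to what would otherwise be a plain function call: serialization, deserialization and handling of transient failures:

```python
import json

# Hypothetical sketch of inter-service communication overhead: requests
# cross a service boundary as JSON, so every call needs serialization
# plus failure handling that an in-process call would not.

def call_service(handler, payload, retries=2):
    body = json.dumps(payload)  # serialize the request
    for attempt in range(retries + 1):
        try:
            return json.loads(handler(body))  # deserialize the reply
        except ConnectionError:
            if attempt == retries:
                raise  # give up after the final retry


calls = []  # record each attempt so the flakiness is visible

def flaky_inventory_service(body):
    calls.append(body)
    if len(calls) == 1:
        raise ConnectionError("transient network failure")
    return json.dumps({"sku": json.loads(body)["sku"], "in_stock": True})


result = call_service(flaky_inventory_service, {"sku": "sku-1"})
```

Every one of those concerns – retries, timeouts, serialization, partial failure – is code (and a failure mode) that simply doesn't exist inside a monolith.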
Tips for implementing a microservices architecture
Avoiding the pitfalls described above starts with recognizing them and ensuring that your microservices architecture, deployment strategy and management operations can accommodate the special challenges that microservices present.
In addition, you should think strategically about questions like:
- How many microservices do I need? The more microservices you implement, the harder it will be to deploy and manage them. It's wise to err on the side of fewer microservices, especially if you're new to the world of microservices architectures.
- How will I deploy my microservices? Again, containers are the most common way to host microservices. But you may be able to achieve better cost and performance using serverless functions instead or another alternative to containers.
- How will I secure and manage my APIs? Since APIs are central to the way microservices apps function, ensuring that your APIs are secure and easy to manage is arguably even more important than doing the same for your application code.
- How will I handle observability? Exposing, collecting and interpreting observability data can be especially tricky when you use a microservices architecture – so tricky, in fact, that the topic deserves its own section…
What to know about microservices observability
The main reason why observing microservices can be hard is that the "fatter technology stack" described above demands much more attention and more complex integration work from R&D teams. Moreover, it introduces the following issues:
- Root causes are not always obvious within distributed microservices environments.
- Each microservice typically generates its own logs and metrics, so there is more data to collect and analyze in order to observe application state.
That said, one way to simplify microservices observability is to take advantage of the fact that microservices architectures are API-driven by observing API performance in addition to relying on metrics, logs and traces from your microservices themselves. Because API transactions involve exchanges between microservices, tracing API calls is a great way not just to identify performance issues, but also to determine which microservices did what during a problematic transaction – and which ones are therefore likely to be the root cause of problems.
As icing on the cake, focusing on API observation frees you from having to instrument observability within each microservice. You can collect most of the data you need to observe your app from API transactions, rather than running agents alongside each microservice or adding code to each one to expose metrics and logs directly.
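Here's a minimal sketch of what API-level tracing looks like in principle (service names and the `traced_call` helper are hypothetical): a correlation ID is attached at the API layer and propagated with each call, so a transaction can be followed across services without instrumenting each service's internals:

```python
import uuid

# Hypothetical sketch of API-level tracing: every cross-service call is
# recorded at the API boundary, tagged with a shared trace ID.

trace_log = []  # (trace_id, service_name) pairs observed at the API layer

def traced_call(service_name, handler, payload, trace_id=None):
    trace_id = trace_id or str(uuid.uuid4())   # start a new trace if needed
    trace_log.append((trace_id, service_name))  # observed without touching the service
    return handler(payload, trace_id)


def auth_service(payload, trace_id):
    # Auth delegates to a backend user store; the trace ID follows the call,
    # so a failure here would be attributable to the right service.
    return traced_call("user-store", user_store, payload, trace_id)


def user_store(payload, trace_id):
    return payload["user"] == "alice"


ok = traced_call("auth", auth_service, {"user": "alice"})
# Reconstruct the transaction: every service that handled this trace ID.
spans = [name for tid, name in trace_log if tid == trace_log[0][0]]
```

With this kind of record, the login-error scenario described earlier becomes tractable: the trace shows that the authentication call fanned out to a backend service, pointing you at the likely root cause.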
The eBPF approach to microservices architectures and observability
The bottom line: We can't promise that adopting a microservices architecture will be a pain-free experience. Managing multiple services and dealing with a more complex technology stack are real challenges. When it comes to observability, however, the potential pitfalls surrounding microservices can be conquered.
Thanks to tools like eBPF, there are virtually no limits on the amount of information you can collect about microservices state and performance. eBPF opens up new ways to observe an API-centric architecture, which means you can observe any app – whether it's a monolith or a set of microservices – with zero instrumentation and without compromising on the depth or granularity of data, making those pesky aforementioned issues a walk in the park for you and your R&D team.