
Monoliths vs. Microservices: Are Monoliths Becoming Cool Again?

Microservices have been the preferred approach for developers, yet there is a growing trend to reconsider monolithic architectures. Here, we delve into the advantages and disadvantages of both strategies.

What's old is new again  — or so it may seem when it comes to the debate about monoliths versus microservices. Some developers now appear to be returning to the idea that monolithic application architectures are a better approach after all.

That's a big deal, of course, because breaking monolithic applications into microservices has been the go-to strategy of software architects and developers for quite a while. Until recently, if you said that you preferred a monolithic architecture, you would have sounded like a backwards-thinker — or, worse, someone who was just too lazy to do the work necessary to build and maintain a microservices app.

But that is no longer true. Today, it is easier than at any point in the past decade to argue that monolithic application architectures aren't just simpler to build and operate; in many cases, they're actually better.

Monoliths vs. Microservices: The Fundamentals

Before assessing the growing pushback against microservices, let's go over the basics of why microservices have been in vogue for so long.

It boils down to the idea that because a microservices architecture breaks complex applications into a discrete set of loosely coupled services, microservices apps deliver such benefits as:

  • Simpler deployment, because you can deploy microservices individually.
  • Easier updates, because you can update individual microservices instead of the whole app.
  • Enhanced scalability, because each microservice can scale up or down individually (see the sketch after this list).
  • Simplified code management, because your codebase is broken into smaller chunks — one for each microservice.
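
To make the scalability point concrete, here is a minimal sketch using the official Kubernetes Python client: it bumps a single, hypothetical "checkout" service to five replicas while every other service in the app keeps its current scale. The service name, namespace, and replica count are illustrative assumptions, not details from any real deployment.

```python
# Minimal sketch: scale one microservice independently of the rest of the app.
# Assumes the official "kubernetes" Python client and a working kubeconfig;
# the "checkout" Deployment and "shop" namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig (e.g., ~/.kube/config)
apps = client.AppsV1Api()

# Patch only the replica count of the "checkout" Deployment; no other
# service in the application is touched.
apps.patch_namespaced_deployment_scale(
    name="checkout",
    namespace="shop",
    body={"spec": {"replicas": 5}},
)
```

In practice you would usually let a Horizontal Pod Autoscaler make this decision for you, but the point is the same: the unit of scaling is one service, not the whole application.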

Monoliths, meanwhile, came to be seen by many developers as inflexible, lumbering morasses of code that were inefficient and low-performing. Indeed, even the very word monolith conjures images of rough-hewn, ungraceful masses of stone created by prehistoric societies — as opposed to the eye-pleasing, elegant architectural wonders of more complex civilizations.

This doesn't mean that no one has built or maintained monolithic apps in recent years. Plenty of teams did, because they judged that the effort required to transform a monolithic app into microservices (a process known as refactoring) wasn't worth the benefits it would yield. But in general, microservices were seen as the modern, mature way to design apps; if you stuck with monoliths, you did so reluctantly and for want of development resources, not because you thought monoliths were actually superior.

The Growing Critique of Microservices

Fast forward to the present, and there's evidence that monolithic architectures are being rehabilitated.

You don't have to dig too deeply to find articles arguing that "microservices are NOT a good idea" or that it's time to "stop this microservices madness." Posts like these don't necessarily conclude that you should never use microservices, but they are circumspect in their analysis of the benefits of microservices.

What's notable about such articles is that, unlike earlier comparisons of microservices to monoliths, they don't critique microservices simply because of the idea that microservices apps require more effort to build and maintain. They emphasize that, at least in some cases, monoliths deliver better performance and reliability.

It's not hard to appreciate why: There's a huge gap separating the ideal microservices app from the microservices apps that most teams end up deploying. The ideal microservices app is one where each service scales seamlessly and independently, resource consumption is optimal, and end users enjoy flawless performance.

But in the real world, microservices apps tend to leave much to be desired. Suboptimal designs may lead to scaling issues or cause bugs in one microservice to affect others. Observability for microservices apps can be a real challenge. And because microservices tend to require a thicker hosting stack — because they depend on things such as orchestrators, service meshes, and other tools that are less important for monoliths — they can end up being quite resource-hungry compared with a monolithic deployment where you have an app, a host server, and nothing else to suck up resources.

Getting the Benefits of Microservices Without Microservices

It's worth noting, too, that in many cases, it's possible to deploy monolithic apps while still leveraging many of the scalability and resiliency benefits that draw teams to microservices.

There's no reason why you can't deploy multiple instances of a monolithic app to increase its resiliency. Nor is there a reason why you can't dynamically scale the resources allocated to a monolith in order for the app to operate more efficiently. And you can go ahead and deploy a monolith on a cloud-native platform, like Kubernetes, if you want (although as I noted, you should consider the resource overhead associated with a thicker hosting stack).
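
As a rough illustration of that point, the sketch below uses the official Kubernetes Python client to run a monolith as an ordinary Deployment with three replicas and to attach a Horizontal Pod Autoscaler to it. The image name, namespace, and resource figures are placeholders assumed for the example, not settings taken from any particular application.

```python
# Sketch: a monolith on Kubernetes with multiple replicas plus CPU-based autoscaling.
# Assumes the official "kubernetes" Python client and a working kubeconfig;
# the image, namespace, and resource numbers below are placeholders.
from kubernetes import client, config

config.load_kube_config()

# One Deployment, three identical copies of the whole monolith for resiliency.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="legacy-monolith"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "legacy-monolith"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "legacy-monolith"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="monolith",
                    image="registry.example.com/legacy-monolith:1.0",  # placeholder image
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "500m", "memory": "512Mi"},
                        limits={"cpu": "2", "memory": "2Gi"},
                    ),
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# A Horizontal Pod Autoscaler then scales the whole monolith on CPU pressure,
# much as it would scale an individual microservice.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="legacy-monolith"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="legacy-monolith"
        ),
        min_replicas=3,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

The trade-off, as noted above, is the thicker hosting stack: the monolith gains replicas and autoscaling, but you now own a cluster, its control plane, and whatever tooling comes with it.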

In this sense, you could argue that adopting a microservices architecture isn't necessary to modernize an app. You can take your legacy monolith and make it look and feel in many ways like a microservices app, without actually deploying any microservices.

Conclusion: Are We Returning to a Monolithic Age?

I don't think growing awareness of the pitfalls of microservices means that we'll see a wholesale shift back to monolithic architectures. Well-designed and well-implemented microservices apps deliver a lot of value, and you shouldn't expect them to disappear.

But I do believe that going forward, developers will cease to see monolithic apps as a compromise or shortcoming. Instead, they'll celebrate monoliths for what they are: a superior solution for many use cases.

About the author

Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” was published by MIT Press.