Right now, at the beginning of 2017, microservices are riding the crest of the Gartner hype cycle – the “Peak of Inflated Expectations”. In two or three years we will be in the “Trough of Disillusionment”. That’s normal.

But this won’t be the normal trough. Microservices are based on a technically unsound premise, one that leads to a meltdown rather than the usual trough of postpartum depression. There are a number of problems with microservices, and if it were just a matter of trade-offs, microservices might have a chance at success. History doesn’t raise that expectation, but it is possible.

Unfortunately, microservices are built on such a fundamentally unsound idea that they can’t recover.

So, what’s the problem?

Microservices get the hardware exactly backward.

We crossed the threshold from CPU-bound to I/O-bound around 2000. Silicon Graphics filed for bankruptcy in 2006, after the company’s valuation fell from $7 billion to $120 million in about seven years – a casualty of the flip from being CPU bound to being I/O bound.

Sun became a “zombie corporation” with a lot of cash, which meant it couldn’t die, but it no longer had a direction. Who wanted a rack of expensive servers when cheap commodity boxes could be purchased in quantity and the network, not the hardware, was now the problem? Eventually Sun sold out to Oracle and disappeared. Sun identified itself as a “hardware company that makes software”, but its big servers weren’t relevant anymore. Why pay that much for a server when CPU and RAM were no longer the bottleneck?

At the same time, new companies like Red Hat thrived because abundant, cheap racks of commodity hardware required a hardened enterprise OS at a reasonable price. That is as true today as it was ten years ago.

For a new architectural design pattern to be successful, it must leverage cheap, fast, and abundant CPU and RAM to permit easy scaling, fast communication, and fast code execution. Let the hardware dictate the architectural patterns that solve problems (and even cover a few sins). Any architectural design pattern that isn’t based on that is simply wrongheaded.

Castles in the Cloud

Software engineering has developed a number of principles to describe good patterns and practices. These include low coupling and high cohesion, event-driven design, OO design patterns, enterprise integration patterns, modularity, and so on. These architectures and patterns provide trade-offs and benefits that can be weighed against one another.

These patterns and principles are the square and compass used to gauge the acceptability of any new architectural pattern. They permit analysis of a concept like microservices, to determine whether its high cohesion and low coupling are worth the increased complexity of topology and API management. They allow rational discussion of the acceptability of risk versus the provided rewards, and of when a given architecture makes sense.

But such discussions are only germane when comparing the relative merits of software design choices in the abstract. Software doesn’t exist in the abstract. Discussions about microservices largely treat software services as if they are self-executing with instantaneous communication.


A successful enterprise software architecture must leverage fast, cheap CPUs and abundant RAM.

That’s it really.  That doesn’t mean that just any ol’ architecture will do as long as it takes advantage of cheap, fast CPUs and abundant RAM. What it does mean is that any architectural pattern that relies heavily on I/O reflects a hardware reality that became obsolete 10 years ago.

You can reel off a number of bad architectural patterns that have thrived over the years simply because they were easy to scale by throwing cheap commodity hardware at them: those bad patterns minimized the impact of the I/O bottleneck while leveraging cheap hardware.

We have excess computational power. We have excess high speed memory. Those facts must drive software design.

I/O – Atlas Bound

Messaging over RAM is measured in nanoseconds. When application modules communicate in nanoseconds, I/O is not a bottleneck. Today we break that fast messaging only when necessary, and latency jumps from nanoseconds to milliseconds, hundreds of milliseconds, or even seconds. The switch from RAM-based messaging to much slower forms of messaging is forced on us by a given situation. It shouldn’t be willingly chosen.
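The size of that gap is easy to underestimate. Here is a back-of-the-envelope sketch using assumed round figures (roughly 100 ns for an in-process hop, roughly 1 ms for a fast HTTP round trip – illustrative numbers, not measurements):

```python
# Back-of-the-envelope: how many cross-module hops fit into one second
# at in-memory latency vs. network latency. Figures are assumptions.
IN_MEMORY_NS = 100            # assume ~100 ns per in-process call or queue handoff
NETWORK_NS = 1_000_000        # assume ~1 ms per fast HTTP round trip

NS_PER_SECOND = 1_000_000_000
in_memory_hops_per_sec = NS_PER_SECOND // IN_MEMORY_NS   # 10,000,000 hops/s
network_hops_per_sec = NS_PER_SECOND // NETWORK_NS       # 1,000 hops/s

slowdown = in_memory_hops_per_sec // network_hops_per_sec
print(f"in-memory: {in_memory_hops_per_sec:,}/s, "
      f"network: {network_hops_per_sec:,}/s, "
      f"slowdown: {slowdown:,}x")  # slowdown: 10,000x
```

Even with generous assumptions for the network, every hop moved from RAM to HTTP costs about four orders of magnitude in latency.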

When an application must call to an external vendor’s web service or when one has to interact with a corporate database, there isn’t any choice but to use slow I/O. We understand, inherently, that such interactions are a resource and performance sinkhole.

When a business process switches from RAM-based messaging and CPU-dependent routing and transformation to slow I/O, we are forced to write specialized code for that particular situation. We might use a SEDA queue with multiple threads pulling messages off in order to make a large number of parallel calls, for example. We do that when we have no choice. Adopting an architectural pattern that inherently saddles us with hacking solutions to slow I/O is purblind.
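As a concrete illustration of that forced specialization, here is a minimal SEDA-style sketch in Python. The function names and the simulated slow call are invented for illustration, not taken from any particular framework:

```python
import queue
import threading
import time

def slow_io_call(msg):
    # Stand-in for a slow external call (vendor web service, corporate database).
    time.sleep(0.01)
    return msg.upper()

def start_seda_stage(in_q, out_q, workers=4):
    # A SEDA-style stage: an event queue drained by a small pool of threads,
    # so a batch of slow I/O calls runs in parallel instead of serially.
    def worker():
        while True:
            msg = in_q.get()
            if msg is None:  # poison pill: shut this worker down
                break
            out_q.put(slow_io_call(msg))
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    return threads

in_q, out_q = queue.Queue(maxsize=100), queue.Queue()
threads = start_seda_stage(in_q, out_q)
for msg in ["a", "b", "c", "d"]:
    in_q.put(msg)
for _ in threads:  # one poison pill per worker
    in_q.put(None)
for t in threads:
    t.join()
results = sorted(out_q.get() for _ in range(4))
print(results)  # ['A', 'B', 'C', 'D']
```

Note how much machinery (queues, worker threads, shutdown signaling) exists purely to hide the latency of the slow calls; none of it would be needed if the hops stayed in RAM.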

In memory, messages fly across our routes, transforms, and integration patterns until we hit an external I/O endpoint. Then we break out the EIPs, multi-thread the calls, and aggregate the results in order to minimize the impact.
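That scatter-gather step can be sketched as follows; the endpoint names and the simulated latency are hypothetical, purely for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_endpoint(name):
    # Hypothetical slow external endpoint; the sleep stands in for network latency.
    time.sleep(0.01)
    return (name, "ok")

endpoints = ["inventory", "pricing", "shipping"]
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
    # Scatter: issue the slow calls in parallel rather than one after another.
    replies = list(pool.map(call_endpoint, endpoints))
elapsed = time.perf_counter() - start

# Gather: aggregate the replies into a single result.
aggregate = dict(replies)
print(aggregate)  # {'inventory': 'ok', 'pricing': 'ok', 'shipping': 'ok'}
print(f"{elapsed:.3f}s")  # roughly one call's latency, not three calls' worth
```

The pattern works, but it is damage control: parallelism only caps the cost at the latency of the slowest call; it never gets it back down to RAM speed.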

Microservices are predicated on the donkey trails of 2004 instead of the contrails of today’s hardware.

Microservices take the weakest element of the hardware stack and make it the spine of a new software architecture. They decouple systems, but they lay the foundation on I/O. Even worse, that largely means request/response REST over HTTP. Our CPUs twiddle their thumbs and our RAM sits idly by.

HTTP is a necessary evil; it is not a fundamental strength. An architecture predicated on the weakest, slowest, and most expensive resource is doomed.

Microservices are also rooted in the well-known fallacies of distributed computing. Hardware isn’t just a big, fluffy abstraction representing instantaneous code execution with instantaneous communication.

The 8 fallacies of distributed computing bear repeating here.

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

The Next Generation Architecture

Performance isn’t the only fundamental problem with microservices.  In my next blog post I’ll cover the cost of microservices – the dollars.

The other noted problems of REST-based microservices have been discussed in other blogs on other sites, so I won’t belabor them here. Suffice it to say, microservices have a number of serious problems that must be solved. But even if they are solved, the hardware problem remains fundamental, defined by physics and electronics.

I suspect one lesson we’ll learn from the microservices meltdown is that new architectures must be based on the strengths of multiple fast cores and gigabytes of spare RAM while minimizing I/O. It’s just a shame we’ll all have to suffer through the opposite for the next few years before that becomes apparent, before we ask ourselves, “What was I thinking?”

In a subsequent post, I’ll address the issue of the high economic cost of a software architecture based on I/O.  If microservices are wrongheaded for performance reasons they are catastrophic for economic reasons.