In previous posts, I’ve mentioned the boom that’s coming in IoT and also the problematic nature of microservices. In the next 10 years, Go applications in Docker containers will become the front-runner and overtake Java as the primary choice for microservices. While I’ve been a Java developer for the past 17 years, I’m ready to make that switch because it is the future.


There are a number of problems with running JVMs and code in Docker containers, whether on large platforms or small. Docker containers run on a shared kernel. They are sometimes described as lightweight VMs (although that is not technically accurate, it is a good-enough metaphor).

The implication is that JVM-based microservices running on the same kernel will suck up a lot of resources. Each Docker container will have its own heap, will be unable to share classes with other containers, and will need its JVM GC threads scaled back so that garbage collection doesn’t clobber the kernel. Deciding how much heap or how many threads each JVM should get will become an art. Anyone who has tuned JVMs across multiple true Virtual Machines knows how difficult that can be. The difficulty is only multiplied when running Docker containers on Virtual Machines.
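To make that tuning concrete, here is a sketch of the kind of per-container knobs involved. The image name, memory cap, and flag values are illustrative, not a recommendation, and `-XX:MaxRAMPercentage` assumes a Java 10+ JVM:

```
# Illustrative only: cap each container's JVM so several services
# can share one kernel without fighting over memory and GC threads.
docker run --memory=512m --cpus=1 my-jvm-service \
    java -XX:MaxRAMPercentage=75.0 \
         -XX:ParallelGCThreads=2 \
         -jar service.jar
```

Multiply those decisions by every container on the host, and the "art" of JVM tuning becomes clear.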

Many of the standard microservices libraries, such as Spring Boot, are deployed as uberjars, which are heavyweight and consume a lot of computing resources.

Managing the runtime dependencies of JVM-based applications is already a headache, and it will not get easier.

Because the JVM, jar files, heap sizes, and GC threading are such a heavyweight burden for small microservices in Docker containers, it will be necessary to spread them out over a number of different Virtual Machines on the same network. Why is that important? Because latency becomes a huge issue at that point.

A group of Docker containers running on the same kernel can communicate over a bridged network that essentially runs at IPC, in-RAM speeds. The latency is measured in microseconds, not the milliseconds or worse you see over Ethernet. The more Docker containers you can keep together, the better the speed of execution. But to do that, the underlying technology has to be ready to accommodate it.


Unlike JVM-based microservices, a Go application’s dependencies are compiled into the executable itself. At runtime, the deployment consists of a Docker container and a relatively small executable. A Go application with web services, routing, and an embedded NoSQL database like BoltDB compiles to roughly 6 MB, a fraction of the size of the JVM itself.
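As a sketch of what that looks like, here is a minimal stdlib-only Go service with a JSON health endpoint. The BoltDB storage layer mentioned above is omitted so the example has no external dependencies, and the route and port are illustrative:

```go
// A minimal Go microservice: a stdlib-only HTTP server with a
// JSON health endpoint. The storage layer is omitted to keep the
// sketch dependency-free.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// statusBody builds the JSON payload returned by /health.
func statusBody() []byte {
	body, _ := json.Marshal(map[string]string{"status": "ok"})
	return body
}

// healthHandler answers liveness probes with a small JSON document.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Write(statusBody())
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", healthHandler)
	// go build turns this into a single binary; no JVM, no jars.
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

Built with `CGO_ENABLED=0 go build`, this becomes a single static binary of a few megabytes that needs nothing else at runtime.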

This means the microservice installation in this case is Docker + a Go executable + properties/configuration.
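A sketch of what that installation might look like as a two-stage Dockerfile; the base-image tag and the config file name here are illustrative assumptions:

```
# Build stage: compile a static binary (no libc dependency).
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /service .

# Runtime stage: nothing but the executable and its configuration.
FROM scratch
COPY --from=build /service /service
COPY config.json /config.json
ENTRYPOINT ["/service"]
```

The final image contains exactly the three pieces listed above and nothing else.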

With large software stacks in Java, the JVM’s HotSpot compiler can make up for a lot of the language’s interpreted nature. It often matches and sometimes exceeds the execution speed of compiled applications. That’s because an application fully optimized at compile time bloats badly as the size of the source code grows. By contrast, the JVM can monitor the interpreted application and, as it observes code sections that make the same calls repeatedly, inline those sections of code. In other words, it is highly optimized based on empirical data from the running application. But that is really only useful for large applications.

But with microservices that isn’t a concern, and the advantage of fully compiled executables is indisputable.


In the future, we’ll explore exactly how Docker containers will become component-like building blocks in this new universe. We’ll also look at why this will be big in the enterprise world and why it will especially take the IoT world by storm.