When Oracle finally ported Java to the ARM processor and the Eclipse Foundation started the OSGi-based Kura project, I was pretty excited.  Fifteen years ago when I started Java programming, it held the promise of being an embedded programming language. After all, it had been developed for set-top boxes.  Java took a left turn and became an enterprise-centric language instead, and for the past several years I’ve been programming for OSGi and deploying to Fuse.

But we’ve come full circle. Java and OSGi are once again relevant in the embedded world. I’ve been setting up my workbench, getting ready to work with sensors and actuators, and started reading research papers and books on the Internet of Things (IoT).

Still, I confess to it feeling a bit like a guilty pleasure.

But academic documents and market forecasts paint a startling picture:

Based on bottom-up analysis for IoT applications, McKinsey estimates that the IoT will have a potential economic impact of $11 trillion per year by 2025—which would be equivalent to about 11% of the world economy. They also expect that one trillion IoT devices will be deployed by 2025.

So much for guilty pleasure! The IoT is the biggest economic boom since the Internet revolution started in the 1990s. One ironic aspect of economic forecasts and estimates of the technological future is that they consistently underestimate.

Another analysis by a different organization estimates that:

… the potential market of IoT with a fast-growing rate, [shows] a market value of $44.0 billion in 2011. According to a comprehensive market research conducted by RnRMarketResearch that includes current market size and future predictions, the IoT and M2M market will be worth approximately $498.92 billion by 2019. Quoting from the same research, the value of the IoT market is expected to hit $1423.09 billion by 2020, with Internet of Nano Things (IoNT) playing a key role in the future market and holding a value of approximately $9.69 billion by 2020.

Out of Left Field – Illuminates of Thanateros

Sometimes technologies come along that make me feel like I’ve been in a Rip Van Winkle sleep. While dimly aware of IoT, I didn’t have a sense of the sort of growth or market share expected. But it was easy to miss.

If you did a Google search for “IoT” in 2012, the top results would have included “Illuminates of Thanateros” and “International Oceanic Travel Organization.” A search for “Internet of Things” would have produced a results page with a list of academic papers at the top, but with no advertisements—a strong indicator that in 2012, few people spent marketing dollars on the IoT.

Two years on, and this had changed dramatically. In 2014, the IoT was one of the most hyped buzzwords in the IT industry. IT analysts everywhere tried to outdo each other’s growth projections for 2020, from Cisco’s 50 billion connected devices to Gartner’s economic value add of 1.9 trillion dollars.

Enterprises are spending as much on the Internet of Things as on cloud, mobility and analytics, according to Vodafone’s annual research into the sector…. Currently, 28 percent have at least one “live” IoT project although 76 percent said the IoT would be “critical” for future success.

The most popular types of use cases include fleet management, remote vehicle monitoring, building automation, security, automating supply chains, creating new connected homes and usage-based insurance (UBI) products.

Growth Rate and Market Share

Total numbers show IoT dwarfing PCs, smartphones and tablets.


IoT isn’t just about the smart watch or consumer product of the future.  In fact, the main driver of this next generation of computing is industry and commerce where IoT will solve hard, nagging problems in power grids, shipping, manufacturing, inventory, automobiles and so on. Safety, regulatory compliance, cost savings and monitoring, manufacturing efficiency, and litigation protection are a few areas where big industry is driving IoT forward.

It doesn’t sound very sexy until you think about what it means from a technical perspective.  Litigation protection in the automotive industry might mean implementing swarm behavior using actors on IoT devices.  These actors might sense proximity and adjust automobile speeds and spacing ever so slightly to ensure safety.  The swarm might communicate hard braking ten or a hundred cars ahead so that the whole swarm of cars can slow without hard braking. That isn’t just proximity sensors detecting that the car in front of you stopped suddenly.  It’s certainly that as well, but it’s also about relaying that information back to those farther back in the swarm so that they can adjust their behavior. That’s fascinating technology even though it might be about “litigation protection”.


Standards for the Future – Kura

The following table from an academic paper lists a number of IEEE, W3C, and XMPP standards for transports and protocols. Imagine my surprise to see the OSGi-based Kura project in the middle of it. That does make sense, however, since OSGi was originally developed as the “Open Services Gateway initiative”.  The framework uses segregated classloaders so that new libraries can be added without stepping on existing functionality. In an ad hoc collection of applications one can’t count on a controlled environment, so the framework has to supply that level of control.

OSGi was invented to solve just the sort of problems we are facing with the IoT.


UDP and Publish Subscribe are the Future

One problem with IoT microdevices is that request/reply architectures using REST and SOAP are not practical.  Tens of thousands, hundreds of thousands, millions and billions of devices using request/response over TCP/IP would swamp the infrastructure. The research literature consistently mentions the need in IoT for event-based communications and pub/sub messaging, with connections to ESBs in the cloud or corporate infrastructures.
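Protocols like MQTT implement this pattern at scale with a real broker; the toy broker below (the class and method names are mine, not from Kura or any standard) just illustrates the topic-based, fire-and-forget shape of pub/sub that makes it cheaper than request/response:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal in-memory publish/subscribe broker: devices fire events at a
// topic and never wait for a reply, unlike request/response over REST.
class EventBus {
    private final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void publish(String topic, String payload) {
        // Fire-and-forget: the publisher does not block on any subscriber.
        topics.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }
}
```

The key property for constrained devices is that publishers and subscribers never hold a connection open waiting on each other; a device wakes, publishes, and sleeps.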

ESBs are going to become more important in the future, and Fuse is a natural fit; with luck it will become everyone’s “go to” back-end technology stack.

Obviously in such a future growth market, being involved in driving the standards for protocols and trusted OS distributions is going to be a key to getting a chunk of that $11 trillion.



Fog Computing

When contemplating millions then billions then trillions of devices, an obvious problem is the communications overhead.  We’ve had a glimpse of this problem and its transformational nature in the past. Red Hat was in the middle of it.

Around 2005 we flipped from being CPU bound to I/O bound.  Companies like Sun Microsystems, whose business model relied on expensive servers, became zombie corporations – a lot of cash with no direction or plan.  Companies like Red Hat helped give rise to commodity solutions that leveraged the new computational power available at a fraction of the cost of the older, monolithic server solutions.

In a world of micro and nano devices the I/O problem becomes more severe.  These devices won’t have the necessary bandwidth for significant communications traffic and server side infrastructure isn’t prepared to handle the load of so many devices, especially if they use common request/response protocols.  While computation might follow Moore’s law, I/O does not.

This means that more computation is required at the microdevice level while I/O is kept to a minimum.  Communications between devices at a local level will become essential. Even where I/O bandwidth isn’t a problem, latency commonly will be.  It doesn’t matter if you have a room full of racks in a manufacturing plant; the software running on them isn’t going to directly control a robotic arm welding on an assembly line, doing metal stamping, or controlling valves in chemical processes.

A Small Sample

I’ve run into problems in past projects where an IoT infrastructure would have solved an intractable problem. One example is a trucking company that wanted to track its trucks and containers across the country and feed the data back into jBPM and Drools. That back end would provide the complex event processing to indicate problems and to provide red-pin-on-the-map tracking.  But the number of containers and the communications overhead were significant problems. In that scenario, it is logical to have microdevices on the containers monitoring GPS and making decisions based on time and coordinates before sending data to corporate infrastructures.

As an example, if the GPS coordinates change a normal amount over a given period there’s no reason to signal the home office other than at heartbeat intervals to report present position and verify that the device is alive.  On the other hand, the devices are only just smart enough.  When a truck stops moving, what is the cause?  The on-board device won’t have the resources to make such a determination.  Is it a traffic stop, an accident, or has the driver just stopped at a truck stop for some food, fuel or possibly a nap? That is where the ESB and CEP processing back at the home office come in – the truck has stopped at the GPS coordinates for Joe’s Truck Stop outside of Barstow; everything’s OK.
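The on-device decision logic described above can be sketched in a few lines. This is a hypothetical illustration (the class name, thresholds and intervals are mine, not from any real fleet product): the device stays quiet while the truck moves normally, sends a periodic heartbeat, and escalates a stop to the back end for the CEP engine to interpret.

```java
// Hypothetical on-device logic: report only on heartbeat intervals or
// when the truck appears to have stopped; everything else stays local.
class TruckMonitor {
    static final double STOP_THRESHOLD_KM = 0.1;   // movement below this looks like a stop
    static final long HEARTBEAT_MS = 15 * 60_000;  // routine check-in interval

    enum Report { NONE, HEARTBEAT, STOPPED }

    Report decide(double kmMovedSinceLastFix, long msSinceLastReport) {
        if (kmMovedSinceLastFix < STOP_THRESHOLD_KM) {
            return Report.STOPPED;        // let the back-end CEP figure out why
        }
        if (msSinceLastReport >= HEARTBEAT_MS) {
            return Report.HEARTBEAT;      // "still alive, here is my position"
        }
        return Report.NONE;               // moving normally: stay quiet, save bandwidth
    }
}
```

Everything the device cannot decide – traffic stop, accident, or Joe’s Truck Stop – travels upstream as a single STOPPED event rather than a stream of raw coordinates.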

Edge Computing

One architecture for that future looks like Rodney Brooks’s subsumption architecture, proposed for AI and robotics.  Hierarchical computational structures are used where lower-level devices report up to higher-level devices.  Devices at different levels perform different tasks and computational assignments.  An example might be a smart pallet where the microdevice reads RFID tags of items put on the pallet to compile a parts list.  When that pallet is loaded on a truck, the truck’s microdevice might query the pallets to compile a master parts list and communicate that back to the warehouse or corporate offices.

Each pallet, then, is not communicating back to corporate nor will it require GPS tracking to determine its location. When boxes or parts are delivered, the microdevice scanning the RFID tag will send data to the truck’s microdevice and that, in turn, will report the delivery of goods with a timestamp and current GPS coordinates. The software architecture becomes hierarchical with an emphasis on the responsibilities of the devices at each level.

But what happens if the pallet’s device fails? The higher-level device on the truck might delegate RFID reading of the parts from the dead pallet to devices on the other pallets. We end up with a self-healing network.
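The pallet-and-truck hierarchy above can be sketched as two tiers of devices. The class and method names here are illustrative, not a real Kura or warehouse API; the point is only the shape of the hierarchy – pallets scan locally, the truck aggregates, and a dead pallet simply drops out of the roll-up (the re-scan delegation is left out for brevity):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Lower tier: a pallet's microdevice compiles its own parts list from RFID scans.
class Pallet {
    final Set<String> tags = new LinkedHashSet<>();
    boolean alive = true;

    void scan(String rfidTag) { tags.add(rfidTag); }
}

// Higher tier: the truck's device queries its pallets and compiles the
// master parts list, skipping devices that have failed.
class Truck {
    List<String> masterPartsList(List<Pallet> pallets) {
        List<String> all = new ArrayList<>();
        for (Pallet p : pallets) {
            if (p.alive) {
                all.addAll(p.tags);   // normal case: each pallet reports its own parts
            }
            // A failed pallet's load would be re-scanned by a neighboring
            // pallet's device; that self-healing delegation is omitted here.
        }
        return all;
    }
}
```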

The future architectures require small, ad hoc, self-organizing and self-healing groups of applications using local communications networks.


One technology that lends itself to these requirements is self-organizing groups of Actors.  Libraries like Akka, with Actors in a hierarchical structure and self-healing networks, are a natural fit.  Communication between Actors is asynchronous and event-based, which is precisely what this new IoT world demands.  Additionally, Akka URIs are independent of underlying transports and physical location.
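Akka itself is a sizable dependency, but the asynchronous core of the actor model is easy to see in plain Java. The toy actor below (my own sketch, not the Akka API) is just a mailbox drained by a single thread, so its state needs no locks; Akka layers supervision hierarchies, remoting and those location-transparent URIs on top of exactly this shape:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A toy actor: messages queue into a single-threaded mailbox, so the
// handler processes them one at a time without any explicit locking.
class CounterActor {
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
    final AtomicInteger count = new AtomicInteger();

    void tell(int delta) {
        // Asynchronous, event-based send: the caller returns immediately.
        mailbox.submit(() -> count.addAndGet(delta));
    }

    void shutdown() {
        mailbox.shutdown();
        try {
            mailbox.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```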

Back to the Future

Interestingly, this brings up a number of issues that we’ve had to deal with in the past.  How do you secure the communications from such devices? How do you stop man-in-the-middle attacks? Small devices don’t have the computational power to support complex asymmetric cryptography.  Even if they did, how do you keep the certificates on a few hundred thousand devices up to date?  How do you patch the software or do feature releases to that many devices, which may or may not be reachable? Once again, OSGi was built to deal with such problems, with its natural backward-compatibility mechanisms and semantic versioning.
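The semantic versioning mentioned above is visible right in an OSGi bundle manifest. In this sketch the package names are hypothetical, but the mechanism is standard OSGi: the importing bundle declares a version range, and the framework will wire it to any provider whose exported version falls inside that range – here, any backward-compatible 1.x release – so a patched library can be hot-deployed without rebuilding the consumers.

```text
Bundle-SymbolicName: com.example.sensor.reader
Bundle-Version: 1.2.0
Import-Package: com.example.telemetry;version="[1.0,2.0)"
Export-Package: com.example.sensor;version="1.2.0"
```

A breaking change would be published as 2.0.0, fall outside the `[1.0,2.0)` range, and leave existing devices running against the old bundle until they are explicitly upgraded.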

Other problems include how to deal with power consumption for devices that may or may not have ready access to a constant power source.  What happens to that GPS monitoring device when the trucker stops at home for a three-day weekend and the vehicle gets shut down?

The issues of IoT in regards to security, communications protocols and interconnection could fill books in and of themselves, so they are well beyond the scope of this simple blog.

On servers and desktop PCs the emphasis has moved away from tight algorithmic efficiency toward good, readable, easy-to-maintain OO designs.  Perhaps as the market for ever-smaller networked devices booms over the next 10 years we’ll once again be turning our attention to such performance concerns.

So the question is…

One thing is obvious: this is going to be a huge market opportunity, and there are a lot of new technology and architectural problems waiting for a solution.

Far from being a guilty pleasure, this $11 trillion market representing 11% of the world economy by 2025 is now the number one technology concern in the world. One can get on-board now or be left behind.

So the $11 trillion question is, how much of that money is going to be in your future?



Internet of Things – Rajkumar Buyya and Amir Vahid Dastjerdi (eds.), Elsevier, 2016

Enterprise IoT – Dirk Slama, Frank Puhlmann, Jim Morrish, and Rishi M Bhatnagar, O’Reilly Media, 2015