When seeing the future is not enough

When IBM announced in June that they would be making a major investment in Apache Spark, few details were revealed. They said only that 3,500 developers would be assigned to work on Spark. Last week we got a bit more detail on what exactly they have in mind for Spark. It would appear that at least part of IBM’s strategy is to offer Spark-as-a-service integrated with their other cloud offerings.

With the exception of offering some integration with the outdated and under-invested tools that IBM customers are stuck with, I’m not seeing where the value proposition is here. Databricks already offers a great cloud-based Spark solution; I would have expected IBM to try to differentiate itself a bit more. After 14 consecutive quarters of revenue declines, despite a huge raft of acquisitions, IBM appears to be struggling to find its place in the new open computing world.

Let’s commend them for being able to spot the trends, and for giving Spark a huge endorsement. They’ve often been able to spot trends, as they did so famously with Linux, with analytics in general, and of course now with Spark. They invested many billions into Linux as well as analytics (through many acquisitions), but those investments do not appear to have stemmed their revenue losses. Seeing the future and adapting to profit from it are two entirely different things.

IBM is of course not alone in its misery; nearly all the old-world business computing companies are struggling. HP, founded in 1939, a company that also invested considerably in analytics, just completed one of the largest voluntary corporate breakups in history. Other large software companies founded decades ago, which have made significant investments in their own proprietary analytics tools, are also struggling to remain relevant in a world where companies are embracing the open-source Apache Spark, which is immeasurably superior in capabilities and cost to those tools. All of this leads to one interesting observation: many of the business computing companies founded before the dawn of the Web era seem unable to adapt to the realities of the web and the open systems it has fostered.

That’s not to say there isn’t money to be made in the open world, far from it; Red Hat taught us that. It’s just that it takes a radically new way of thinking about your customers and how to solve their business problems. Gone are the days of proprietary mass-market software driving huge margins, either directly or via services lock-in. Mass-market tools, especially analytical ones, are a commodity now, so get over it and adapt. What the open source community is unlikely to do, though, is build a fraud detection platform for insurers, or a route optimization solution for public transit systems, or a pricing optimization solution for grocery stores. What companies like IBM, HP, and others should be doing is building niche analytical solutions on top of open systems to solve their customers’ business problems. That requires a big change in thinking, one that moves away from centralized R&D developers and towards solutions built in the field.

So for IBM, this means ditching their plan to build connectors to Spark for all their existing proprietary tools and their build-out of a Spark cloud. What they should be doing is building and training a huge team of highly skilled data scientists and analytics specialists who know how to build solutions in Spark that solve real customer business problems, not just create elaborate publicity stunts.