HPE plus MapR: Too much Hadoop, not enough cloud

Cloud killed the fortunes of the Hadoop trio (Cloudera, Hortonworks, and MapR), and that same cloud likely won’t rain success down on HPE, which recently acquired the business assets of MapR. Though the deal promises to marry “MapR’s technology, intellectual property, and domain expertise in artificial intelligence and machine learning (AI/ML) and analytics data management” with HPE’s “Intelligent Data Platform capabilities,” it is devoid of the one ingredient that both companies need most: cloud.

The problem, in other words, isn’t that MapR lacked smart people and great technology; as Wikibon analyst James Kobielus has insisted, it had both. No, the problem is that MapR remained way too Hadoop-y, and nowhere near cloudy enough, in a world filled with “fully integrated [cloud-first] offerings that have a lower cost of acquisition and are cheaper to scale,” as Diffblue CEO Mathew Lodge has said. Simply put, MapR may bolster HPE’s data assets, but it doesn’t make HPE a cloud contender.

Why cloud matters

Yes, hybrid is still a thing, and will remain so for many years to come. As much as enterprises may want to move workloads to a cloudy future, 95 percent of IT still sits firmly in private data centers. New workloads tend to go cloud, but there are literally decades of workloads still running on-premises.

But this hybrid world, which HPE pitches so loudly (“innovation with hybrid cloud,” “from edge to cloud,” “harness the power of data wherever it lives,” etc.), hasn’t been as big a deal in big data workloads. Part of the reason comes down to a reliance on old-school models like Hadoop, “built to be a giant single source of data,” as noted by Amalgam Insights CEO Hyoun Park. That’s a cumbersome model, especially in a world where big data is born in the cloud and wants to stay there, rather than being shipped to on-premises servers. Can you run Hadoop in the cloud? Of course. Companies like AWS do exactly that (Elastic MapReduce, anyone?). But arguably even Hadoop in the cloud is a losing strategy for most big data workloads, because it simply doesn’t fit the streaming-data world in which we live.

And then there is the on-premises problem. As AWS data science chief Matt Wood told me, cloud elasticity is critical to doing data science right:

The ones that go out and buy expensive infrastructure find that the problem scope and domain shift really quickly. By the time they get around to answering the original question, the business has moved on. You need an environment that is flexible and allows you to quickly respond to changing big data requirements. Your resource mix is continually evolving; if you buy infrastructure, it’s almost immediately irrelevant to your business because it’s frozen in time. It’s solving a problem you may not have or care about any more.

MapR had made efforts to move beyond its on-premises Hadoop past, but arguably too little, too late.

Brother, can you spare a cloud?

That brings us back to HPE. In 2015 the company dropped its public cloud offering, choosing instead to “double-down on our private and managed cloud capabilities.” That may have seemed reasonable back when OpenStack was still breathing, but it pigeonholed HPE as a mostly on-premises vendor trying to partner its way into public cloud relevance. It’s not enough.

Whereas Red Hat, for example, can credibly claim deep assets in Kubernetes (Red Hat OpenShift) that help enterprises build for hybrid and multi-cloud scenarios, HPE cannot. It has tried to buy its way in (e.g., BlueData for containers), but it still lacks a cohesive product set.

More worryingly, every major public cloud vendor now has a solid hybrid offering, and enterprises looking to modernize will often pick the cloud-first vendor that also has expertise in private data centers, rather than betting on legacy vendors with aspirations for public cloud relevance. For Google, it’s Anthos. For Microsoft Azure, hybrid was central to the company’s product offering and marketing from the beginning. And AWS, which at one time eschewed private data centers, has built a raft of hybrid services (e.g., Snowball) and partnerships (VMware) to help enterprises have their cloud cake and keep their private data centers, too.

Enter MapR, with its contrarian, proprietary approach to the open source Hadoop market. That approach won it a few key converts, but it never built a broad-based following. Great tech? Sure. Cloudy DNA and products? Nope.

In sum, while I hope the marriage of HPE and MapR will yield happy, cloudy enterprise customers, this “doubling-down” by HPE on technology assets that keep it firmly grounded on-premises doesn’t hold much promise. Big data belongs in the cloud, and cloud isn’t something you can buy. It’s a different way of operating, a different way of thinking. HPE didn’t get that DNA with MapR.

Author: Nhanh Admin