
Is There a Future for Cloud-in-a-Box?


Ofir N (@iamondemand)

A cloud evangelist and entrepreneur.

Cloud-in-a-box originally seemed like the perfect compromise, letting enterprises reinvent their aging data centers around a cloud-based model for today's IT. It offered some of the important advantages of public cloud, like better optimization of compute resources and self-service.

But looking at catastrophic failures of promising cloud-in-a-box solutions, most prominently the recent shutdown of Stratoscale, a well-funded startup offering IT departments everything they said they wanted, we have to ask: Is cloud-in-a-box (really just another name for private cloud) truly the way forward?
Cloud-in-a-box solutions claimed ease of setup, rapid deployment, compatibility, and control. Some achieved this through hybrid solutions and new equipment. Others promised backwards compatibility with a range of existing hardware.
But one by one, promising offerings have fallen victim to harsh realities. I predicted some of these failures a few years ago, specifically regarding OpenStack-based clouds.
And despite the perceived benefits mentioned above, the simple reality is that private cloud contradicts the core promise of cloud: infinite scale.
That’s even more true today, and even more crucial to understand, as more and more modern cloud-native applications are being developed to support usage at scale.
And it’s not clear at this point that even today’s heavyweights—solutions coming from the leading vendors, including Azure Arc and Outposts—can overcome core problems with the cloud-in-a-box model.

High-Profile Failures

To understand whether today’s cloud-in-a-box solutions can emerge as winners, let’s look at two recent high-profile failures in the field.
The first example that comes to mind is Cloudscaling, acquired by EMC (later Dell EMC) and now defunct. It was built on OpenStack and promised full AWS compatibility.
While Cloudscaling was driven by its passionate founder Randy Bias, according to TechCrunch, its offering was “too early and too incomplete.”
But perhaps the most recent and devastating failure has been Stratoscale, whose core product, Symphony, was built around a great idea. Symphony was a subscription-based cloud built on existing hardware, with AWS and VMware compatibility.
It was founded by serial entrepreneur Ariel Maislos, whose previous startup was acquired by Apple for a reported $390 million. In short order, Stratoscale raised $70 million in funding from companies like Intel, Cisco, Qualcomm, and SanDisk.
It also formed industry alliances with partners like HP, OpenStack, and Lenovo. Yet in December 2019, after an unsuccessful merger, Stratoscale shut down for good. Tellingly, Maislos blamed the shutdown on “a technological switch in which the giants dictate the direction of the market.”
[Disclaimer: IOD served Stratoscale for several years, helping them build a successful online presence with a vast amount of technical expert-based content.]

Giving Customers What They Want?

These were all good products, offering exactly what the enterprise clients said they wanted: a simple cloud package that the traditional enterprise IT team could own, including control, backwards compatibility, and interoperability.
But this seems to be a perfect example of a situation where what the industry says it wants and what it actually needs are two very different things.
Private cloud and cloud-in-a-box have been driven by two faulty and intertwined assumptions. The first is the old-school belief that companies must control their own hardware. And the second is that in regulated industries, there’s no choice: It’s private cloud or no cloud.
Yet while these cloud-in-a-box offerings rose and fell, AWS and Azure were driving innovation at many times the pace. As a result, many public cloud capabilities simply don’t exist on-site, and never will, and not only because of the infinite scale of public cloud.
Even in regulated industries, objections to public clouds are disappearing. Private data center operators can’t keep up with today’s massive security challenges that also come with the ability to scale.
Indeed, public cloud is seeing widespread adoption even in regulated industries like healthcare, which have traditionally kept their data on lockdown. According to IDC research, nearly 77% of healthcare providers used public cloud in 2018.
Bringing the power of public cloud in-house is like reinventing the wheel. Any attempt to create a better wheel (shaped like a triangle? adding another axle?) will be overly complex and expensive and won’t work nearly as well.

The Future of Cloud-in-a-Box

Back in 2016, naysayers claimed that “public cloud can’t solve every business challenge, particularly when it comes to ensuring consistent application performance, compliance and security.”
They also warned that public cloud brought risks of service interruptions from being “over reliant on a single provider.” Ironic words today, when local data centers are more likely to experience interruptions than a public cloud offering “five-nines” reliability.
At the most recent re:Invent conference, I attended an analyst briefing on AWS Outposts, and came away with the impression that the only real arguments left for cloud-in-a-box are specific regulatory rules and latency. The network path to public cloud resources may still not be fast enough for demanding local enterprise applications such as mobile gaming.
Outposts tries to fix this by bringing AWS’s flavor of public cloud into the organization’s data center, closer to the user. Or is it really just the need to keep this iron closer to the organization’s traditional IT leader? (Yes, I am being cynical.)
There are a few major differences between the Outposts and Azure Stack approaches. Outposts is simpler to set up and configure, essentially a turnkey hybrid hardware/software solution. AWS promises “install, configure, launch” simplicity, at a cost of a quarter of a million to $1 million per unit (sample pricing from AWS’s site). Outposts is completely AWS-compatible, because it is AWS. It behaves like just another region in your AWS console, allowing a hybrid public-private setup.
Microsoft, on the other hand, was first to market and has more experience delivering enterprise services. Flexibility is the buzzword here, as Azure Stack has the advantage of offering compatibility with a range of hardware provider partners, like Dell, Cisco, and Lenovo. Microsoft also takes a flexible approach to support within the Azure Resource Manager (ARM) for resources running outside of Azure, including VMware vSphere, Amazon EC2, Google Compute Engine, or any Windows or Linux server, even if it's behind a firewall and proxy.
Pricing starts at $0.008 per virtual CPU per hour, plus software support. Hardware support is contracted through the OEM provider partner, making maintenance, upgrades, and future compatibility more complex. (ComputerWorld offers a helpful roundup of the major cloud providers’ approaches.)
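To make the two pricing models above concrete, here is a back-of-the-envelope comparison. All figures are illustrative assumptions drawn from the sample prices quoted in this article (the $0.008/vCPU-hour Azure Stack software rate and the low end of the Outposts unit range); the amortization period and vCPU count are hypothetical, and real deployments add hardware, OEM support, and usage charges.

```python
# Rough monthly-cost sketch for the two cloud-in-a-box pricing models.
# All constants are illustrative assumptions, not official quotes.

AZURE_STACK_VCPU_HOUR = 0.008   # $ per virtual CPU per hour (software only)
OUTPOSTS_UNIT_COST = 250_000    # $ per unit, low end of the sample range
AMORTIZATION_YEARS = 3          # assumed hardware amortization period

def azure_stack_monthly(vcpus: int, hours: int = 730) -> float:
    """Monthly Azure Stack software cost for a given vCPU count
    (hardware and support are contracted separately through the OEM)."""
    return vcpus * hours * AZURE_STACK_VCPU_HOUR

def outposts_monthly(unit_cost: float = OUTPOSTS_UNIT_COST,
                     years: int = AMORTIZATION_YEARS) -> float:
    """Outposts unit cost spread evenly over the amortization period."""
    return unit_cost / (years * 12)

print(f"Azure Stack, 500 vCPUs: ${azure_stack_monthly(500):,.0f}/month")
print(f"Outposts (low end):     ${outposts_monthly():,.0f}/month")
```

The point of the sketch is the shape of the models, not the exact numbers: Azure Stack's software fee scales with usage while the hardware bill sits elsewhere, whereas Outposts front-loads a large fixed cost regardless of utilization.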
One final important commonality: buying into an oligopolistic model that supports limited competition. While Azure offers more flexibility, Outposts doesn’t accommodate third-party vendors or integrate with non-AWS environments.
The cost of these cloud-in-a-box solutions is already high, and besides the fact that there are so few options on the market today, I’m not sure moving in this direction is the right strategy for any large organization.

Solving the Problems?

Beyond superficial similarities and differences, we need to ask: Do AWS’s or Microsoft’s approaches fix the problems with private cloud? Both address latency issues, but at the expense of control and sophistication.
Neither dares claim anymore that private cloud offers greater security or easier compliance, though this was originally a core marketing point for Azure Stack.
And even the latency argument is quickly losing relevance. AWS already has several strategies planned to address it: AWS Local Zones, offering local internet ingress and egress to reduce latency; AWS Wavelength, offering delivery of ultra-low-latency applications for 5G devices; and partnerships with major telecom players, including KDDI and SK Telecom.
All of these will very soon solve the latency problem for high-demand applications like gaming, 3D modeling, and electronic design automation by effectively moving public cloud closer to home, and they promise to leave private cloud in the dust.
Since these boxes are limited in nature, cloud-in-a-box will always be harder to use and offer less functionality. And we all know how easy and convenient public clouds are. As TechRepublic’s Matt Asay prophetically wrote half a decade ago, “In this race for convenience, anything that feels harder than the public cloud seems certain to fail.” Asay, who joined the AWS team in 2019, seems to have anticipated today’s state of cloud-in-a-box: “The very notion of a privately provisioned cloud service is contradictory and nearly always doomed. Private cloud lets enterprises pretend to be innovative, embracing pseudo-cloud computing even as they dress up antiquated IT in fancy nomenclature.”


Today, many cloud-native organizations have never owned their infrastructure—and they’re doing fine. Better than fine. They’ve discovered that hardware is a heavy liability, not an asset.
This includes players like Airbnb, Netflix, and Tesla, and many others in security-sensitive industries. Lyft made waves with its IPO in 2019 by announcing that it would be leasing its entire infrastructure from AWS, to the tune of $1.5 billion over five years.
Innovative businesses care less about infrastructure investment and more about flexibility, reliability, and scalability on demand. Improvements in public cloud have already addressed security and compliance, and are now doing the same for latency.
Public cloud just keeps getting better and better, while private cloud—including cloud-in-a-box—races to keep up. It’s a race no one can win.
