The Technologies Set to Redefine Cloud Computing This Decade

by Ben Ferguson, February 28th, 2020

For many years, enterprise cloud computing has been a careful balance
– and sometimes an epic battle – between what is possible and what is
practical on the ground. This dichotomy has led to a lot of confusion,
which in turn can hold back development.

Towards the end of the last decade, there was a sharpening of focus
from both business owners and cloud service providers alike: both
parties understood that they needed each other, but neither side could
determine exactly how that relationship should look.

In 2020, clarity has begun to emerge about how cloud computing is likely to evolve over the coming decade. Understanding these patterns will be key to success for service and product vendors as they plan their business strategies.

A New Generation of Containers

Business owners are now beginning to understand the cloud cost reduction and performance benefits of containerization.

Just two years ago, barely half of the owners interviewed in Portworx and Aqua Security’s annual Container Adoption Study were looking at adopting containerized computing. By 2019, 87% of respondents to the study said they were planning to use containers.

One thing that has been holding back enterprise adoption is the fear
of security breaches and the complexity of isolating containers
using Linux control groups (cgroups), mandatory access controls and the like.

To win the trust of businesses, a new breed of containers has been
gaining traction over the last few years. If you haven’t already, you
are going to start hearing a lot more about Kata Containers over the
next few months and years.

Managed by the OpenStack Foundation, the Kata Containers project
began back in 2017. Its overarching aim is to blend the benefits of
containerization with those of virtualization – particularly workload
isolation. The project brings together the high performance of Intel’s
Clear Containers with the adaptability of Hyper’s runV, a
platform-agnostic runtime built on extremely lightweight VMs.

With input from Google and Microsoft (among others), Kata Containers
is clearly positioning itself as the containerization technology that
will drive widespread enterprise adoption.

Kata Containers is designed to be compatible with all major network architectures and hypervisors, enabling workloads to run seamlessly in multi-cloud environments.
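
To make this concrete, here is a minimal sketch, using the official Kubernetes Python client, of how an operator might schedule a workload onto the Kata runtime through Kubernetes’ RuntimeClass mechanism. It assumes a cluster whose container runtime (e.g. containerd) already has a Kata handler installed; the handler name, image and namespace are illustrative, not prescriptive.

```python
# Minimal sketch: running a pod under the Kata runtime via a RuntimeClass.
# Assumes a recent `kubernetes` Python client and a cluster whose CRI runtime
# already exposes a "kata" handler; names here are illustrative.
from kubernetes import client, config

config.load_kube_config()

# A RuntimeClass maps a name usable in pod specs to a CRI runtime handler.
runtime_class = client.V1RuntimeClass(
    api_version="node.k8s.io/v1",
    kind="RuntimeClass",
    metadata=client.V1ObjectMeta(name="kata"),
    handler="kata",  # must match the handler configured on the nodes
)
client.NodeV1Api().create_runtime_class(runtime_class)

# Any pod that names the RuntimeClass is launched inside a lightweight VM,
# while the image and the workload itself remain completely unchanged.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="isolated-nginx"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",
        containers=[client.V1Container(name="nginx", image="nginx:1.17")],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The point to notice is that the pod spec itself stays ordinary: moving a workload to VM-grade isolation is a one-line change, not an application rewrite.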

Improving Microservices at Scale

Twitter and Netflix have already proven that microservice
architectures work well at scale, but there remains a lot of complexity
behind the scenes. Communication between modular components can be
problematic, leading to a lack of visibility and an ongoing challenge to
maintain security and QoS.

To solve for this challenge, IBM, Google and Lyft put their
collective heads together. The end result was Istio.

Istio is described as an open source ‘service mesh’ designed to
provide a common environment for connecting, securing, monitoring and
scaling distributed microservices. A key benefit of Istio is that it
works across both hybrid and multi-cloud environments with no change to
application code.

In terms of security, Istio creates a separate, secure communications
channel between microservices and end users (and between the
microservices themselves). In terms of performance monitoring and
troubleshooting, Istio provides an intuitive dashboard and system-wide
view of the entire distributed environment. This enables operators to
see not only how individual microservices are performing, but also how
they are affecting one another. Problem areas can therefore be
pinpointed and remediated very quickly.
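
As a rough illustration of how little ceremony this involves, the sketch below uses the Kubernetes CustomObjectsApi from Python to switch an entire mesh to strict mutual TLS via the PeerAuthentication resource found in recent Istio releases. It assumes Istio is already installed; the resource name and namespace follow Istio’s mesh-wide convention.

```python
# Minimal sketch: mesh-wide strict mutual TLS with Istio's PeerAuthentication
# resource (security.istio.io/v1beta1). Assumes Istio is installed in the
# cluster under the conventional istio-system namespace.
from kubernetes import client, config

config.load_kube_config()

peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    # A resource named "default" in the Istio root namespace applies mesh-wide.
    "metadata": {"name": "default", "namespace": "istio-system"},
    # STRICT means workloads accept only mTLS traffic: the separate, secure
    # channel between microservices described above, with zero code changes.
    "spec": {"mtls": {"mode": "STRICT"}},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="istio-system",
    plural="peerauthentications",
    body=peer_auth,
)
```

Note that this is pure configuration: the microservices themselves are never touched, which is precisely Istio’s pitch.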

Istio is likely to be welcomed with open arms by both developers and
operators working with microservice architectures. By simplifying
security and troubleshooting while removing roadblocks to scaling, it
frees developers to create new applications at their leisure. As a
result, the microservice business model will become more attractive
than ever.

The Race to Own the Hybrid Cloud Space

The developments detailed above are geared towards a hybrid and
multi-cloud future. Any dreams that the major public cloud providers may
have had of a public cloud-based ‘as-a-service’ monopoly have all but
evaporated. A recent Red Hat survey confirmed this new reality,
revealing that only 4% of businesses see cloud native as the best path
forward. In contrast, 31% of respondents favored hybrid cloud deployments.

Predictably, the likes of Amazon, Microsoft and Google have reacted
by rolling out managed hybrid cloud services. These are likely to gain
traction as they continue to blur the boundaries between on-premise and
cloud computing.

Microsoft has a clear head start in this area thanks to its
well-developed Azure Stack, which is one reason why Azure has grown
so quickly despite AWS’ dominant share of the public cloud market.
Azure Stack works with a variety of partner vendors such as
Dell EMC, Lenovo and Cisco, but it uses the same pricing model as
Microsoft’s public cloud.

Amazon’s most recent response came via a partnership with private
cloud specialist VMware to launch AWS Outposts. Outposts is marketed as a hybrid cloud solution for businesses needing low-latency performance at the cloud’s edge. It consists of on-premise, single-vendor hardware deployments that are installed, configured and managed by Amazon technicians, then connected, ideally via AWS Direct Connect, to a parent AWS Region.

Google’s approach, in its trademark style, is slightly different. But
as the company claims, its solution is one that truly solves for
the multi-cloud challenge.

Solving for the Multi-Cloud Challenge

While Microsoft and Amazon are clearly keen on expanding both their
cloud environments and their service offerings to meet their clients’
needs, Google is positioning itself as the company that will truly free
businesses to operate across any combination of private and public
clouds. And, as usual, Google has an ace up its sleeve: Kubernetes.

Google’s hybrid and multi-cloud solution, Anthos, predictably runs on
GKE, but it also includes an on-premise platform (GKE On-Prem), which
runs on vSphere. Also included are Istio’s service mesh technology
(described above), a configuration management platform to handle
Kubernetes policies and Stackdriver for monitoring.

With AWS and Azure both supporting Kubernetes, Anthos users gain
the ability to work with either or both public clouds in tandem
with their own private clouds – i.e. a true, honest-to-goodness hybrid
cloud.
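
The practical payoff is that a single deployment spec can be pushed, unchanged, to clusters running in different clouds. Here is a minimal sketch, assuming three pre-configured kubeconfig contexts whose names are invented for illustration:

```python
# Minimal sketch: one Kubernetes Deployment applied, unchanged, to clusters
# in different clouds. Context names are hypothetical; they could point at
# GKE, EKS/AKS, or an on-prem cluster registered with Anthos.
from kubernetes import client, config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="storefront"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "storefront"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "storefront"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="storefront:1.0")]
            ),
        ),
    ),
)

# Only the kubeconfig context changes per cluster; the spec does not.
for context in ["gke-prod", "eks-prod", "onprem-anthos"]:
    api_client = config.new_client_from_config(context=context)
    client.AppsV1Api(api_client).create_namespaced_deployment(
        namespace="default", body=deployment
    )
```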

Of course, Google also offers a cloud direct connect (Cloud
Interconnect) to ensure high-speed, secure connectivity between
on-premise networks and GCP.

But it doesn’t stop there. Google has also released Anthos Migrate, a
free P2K (physical-to-Kubernetes) migration tool built from Velostrata
technology. Anthos Migrate is designed to allow GCP users to easily
modernize existing applications or, perhaps more interestingly, to
migrate VMs over from other cloud services.

The Ultimate Machine Learning Hotbed

Cloud computing not only allows businesses to provide cheaper, faster
and more scalable services, but it also changes the nature and scope of
what businesses can actually achieve. As cloud technologies become more
widespread and ever-easier to use, the workloads predictably become
more ambitious.

Speaking of ambition, many organizations have put artificial
intelligence (both creating it and benefiting from it) at the top of
their wish lists.

From diagnosing illnesses and identifying Earth-like planets to
autonomous cars and language translation, machine learning’s potential
to outperform humans on specific tasks will continue to develop
over the coming years.

That said, if you ask Google AI lead Jeff Dean, the current method
of starting from scratch on every project needs to change yesterday.
Dean envisions replacing the current atomic, single-task models of ML
with one multi-functional model. This model would be inactive most of
the time but would build upon previous relevant learning whenever called
upon to carry out a new task.

As Dean explained in a recent keynote, this would more closely
resemble adult human learning than the models of today, which he
compares to the lengthy, inefficient process of infant learning.
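
To make the contrast concrete, here is a toy PyTorch sketch of the general idea: a single shared trunk that accumulates reusable learning, with a small task-specific head added for each new task and only the requested head active on any given call. This is purely an illustration of the concept, not a description of Google’s actual design.

```python
# Toy sketch: one shared multi-task model instead of a new model per project.
# Purely illustrative of the idea Dean describes, not any real Google system.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=256):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Shared trunk: the "previous relevant learning" reused by every task.
        self.trunk = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleDict()  # one lightweight head per task

    def add_task(self, name, output_dim):
        # A new task adds only a small head; the trunk is not rebuilt from
        # scratch, so earlier learning carries over.
        self.heads[name] = nn.Linear(self.hidden_dim, output_dim)

    def forward(self, x, task):
        # Only the requested task's head is active on any given call.
        return self.heads[task](self.trunk(x))

model = MultiTaskModel()
model.add_task("sentiment", output_dim=2)
model.add_task("topic", output_dim=10)
logits = model(torch.randn(4, 128), task="sentiment")
```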

There are sure to be a plethora of challenges on the road ahead, but
as the cloud continues to expand in order to attract more businesses,
the number of developers rising to meet those challenges will grow in
kind. Still, no one will really know what that future will look like
until it’s actually here. Just be prepared to be awed, excited and maybe
even a little terrified by the sheer scope and scale of what can (and
will) be achieved in the 2020s.

Post courtesy: Paul Cooney, Founder and President of Shamrock Consulting Group, LLC