The Future of Cloud Services is Borderless

by Jeff Andersen, January 20th, 2018

Disclosure: Manifold, the developer marketplace, has previously sponsored Hacker Noon. Use code HACKERNOON2018 to get $10 off any service.

Cloud compute is a commodity. Containerization technology and platforms such as Kubernetes level the compute playing field to the point that it doesn’t matter where your VMs are running.

They remove the risk of vendor-specific implementations and platform quirks. You now orchestrate your infrastructure in a uniform manner across the board.

You no longer make one gargantuan decision to lock yourself in a single ecosystem. You mix and match, evaluating where you can get the best support and price for the capacity you require.

Falling for the single-ecosystem trap

It used to be that you would get as many services as you could from your cloud compute provider. This was safe and easy because every deployment story began with selecting a compute solution that fit your workflow.

Next you would add the additional services you didn’t want to manage yourself: DNS, monitoring and logging, email, databases. Your cloud compute provider typically offered a “good-enough” version of these, causing you to double down on the ecosystem. It was easy: you clicked a few buttons in the same place you managed every other service, and you were done.

What made it even easier was that cloud compute providers would sweeten the pot with large amounts of free credit, giving you the freedom to just keep piling services on.

This is where they set their trap.

Your monthly bill steadily climbed as you added new services, even ones you didn’t really need. Each service had its own pricing scheme and calculator, making it difficult to know exactly what you were signing up for.

Each of these “good-enough” services had its own platform-specific quirks and implementation details, working together only through proprietary APIs and integrations.

Soon the honeymoon period ended and the credits dried up. You faced the reality that you had to pony up the cash to keep going, because the effort to move was too daunting. You’re locked in.

The perils of multi-ecosystem architectures

The alternative to locking yourself into one provider’s ecosystem is to purchase the best tool for the job regardless of where you source it from, but this has its own problems.

Now you have multiple people adding services from several vendors to your application.

Your billing is fragmented across different pricing models and no longer arrives as a single bill to the appropriate person at the end of the month.

You lose the ability to predict how your costs will scale as your application grows. You lose track of services that are dormant but still being paid for. You have multiple payment methods, and inevitably one will be rejected at some point, causing downtime and confusion.

There is no single source of truth. You have no visibility into the breadth of services necessary to operate your application, which complicates onboarding new developers and auditing who has access to what.

Developers have to go to multiple sources to obtain the configuration necessary to run your application, and one thing we all know is that developers can be lazy. Workflow fatigue results in less-than-ideal handling of your secrets (values in plain text, stored on disk, copied and pasted), which are critical to security and integrity.

It’s a mess, and annoying to work with.

How we’re able to embrace the chaos at Manifold

At Manifold our team embraces the fragmentation of the multi-ecosystem infrastructure. We want to source the best tool for the job regardless of where it comes from. We want our developers to have the freedom to add services when they’re needed and not have to go through lengthy approval processes.

We have an ace in the hole.

We use our own product to manage our cloud services and configuration. Manifold’s marketplace is a single location to find, buy, and manage the cloud services from multiple vendors that form the backbone of your applications.

We have a number of cross-discipline teams that work on multiple applications. They need to be able to focus on building those applications and not worry about obtaining configuration, secrets, or which service is managed by which team.

Every time a developer provisions a new service, they don’t have to worry about adding billing information. We get a single bill at the end of the month detailing all the services being used by the company.

We consolidate our cloud services, internal services and general configuration in a single location to make a seamless workflow regardless of which platform our application is deployed to.

Each application is grouped into projects, allowing us to document exactly which services and configuration are required for which application. No more chasing down people from another team just to get an API token.

A preview of the Manifold dashboard for ACME Corp

Sometimes the services we use aren’t available directly through the Manifold marketplace; this is where we add Custom Configuration resources. These let us bring any external secrets into the Manifold ecosystem to live side by side with purchased services.

Similarly, sometimes we create our own internal services (workers, APIs, etc.) that need to speak to each other, and Custom Configuration lets us bring those along too, giving us a single source of truth.

Configuration where you need it

In development, all of our developers use the Manifold CLI to seamlessly inject the required configuration into their applications. No more config files, no more plaintext secrets. The configuration variables are securely delivered at runtime from Manifold’s API.
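
On the application side, the pattern this enables is reading every secret from the process environment rather than from files on disk. Here’s a minimal sketch of that pattern; it isn’t Manifold-specific, and the variable names are hypothetical placeholders for whatever services a project has provisioned.

```python
import os
import sys

# Hypothetical variable names; in practice they correspond to whatever
# services the project has provisioned, injected at launch by a CLI
# wrapper rather than read from a config file on disk.
REQUIRED_VARS = ["DATABASE_URL", "REDIS_URL", "MAILGUN_API_KEY"]


def load_config() -> dict:
    """Collect the required configuration from the process environment."""
    missing = [name for name in REQUIRED_VARS if name not in os.environ]
    if missing:
        # Fail fast instead of falling back to a stale or plaintext config file.
        sys.exit("missing configuration: " + ", ".join(missing))
    return {name: os.environ[name] for name in REQUIRED_VARS}


if __name__ == "__main__":
    settings = load_config()
    print(f"loaded {len(settings)} configuration values from the environment")
```

Because nothing lands on disk, rotating a credential is just a matter of restarting the process with fresh values injected.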

In production, we use Terraform to deploy our infrastructure from code. Using the Manifold Terraform Provider, we’re able to define exactly which configuration needs to go where in our architecture, reconciling the most up-to-date values with every deploy. Our operations team no longer has to worry about having the correct keys for services another team provisioned.

With Kubernetes on the horizon for our stack, as we prepare for a truly borderless ecosystem, we implement the same functionality using a Custom Resource Definition that continuously reconciles the correct configuration and secrets with our cluster. This makes it even easier for us to migrate clouds when we want to.
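
We haven’t reproduced the CRD’s schema here, but the underlying reconciliation pattern is general. The sketch below, using the official Kubernetes Python client and a hypothetical fetch_desired_config() standing in for the external source of truth, shows the idea: materialize a project’s configuration as an ordinary Secret and converge the cluster on the latest values each time it runs.

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException


def fetch_desired_config() -> dict:
    """Hypothetical stand-in for pulling a project's latest configuration from an external API."""
    return {
        "DATABASE_URL": "postgres://user:pass@db.example.com:5432/app",
        "MAILGUN_API_KEY": "key-placeholder",
    }


def reconcile_secret(name: str, namespace: str, desired: dict) -> None:
    """Create or update a Secret so the cluster converges on the desired configuration."""
    config.load_kube_config()  # inside a pod you would use config.load_incluster_config()
    api = client.CoreV1Api()

    body = client.V1Secret(
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        type="Opaque",
        string_data=desired,  # the API server handles base64 encoding
    )

    try:
        api.read_namespaced_secret(name, namespace)
        api.replace_namespaced_secret(name, namespace, body)  # converge an existing Secret
    except ApiException as err:
        if err.status != 404:
            raise
        api.create_namespaced_secret(namespace, body)  # first reconciliation


if __name__ == "__main__":
    reconcile_secret("project-config", "default", fetch_desired_config())
```

A real controller runs this loop continuously, keyed off the custom resource; applications then consume the Secret through ordinary environment variables, which is what keeps them portable between clusters.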

A model for now and the future

Ecosystem lock-in is no longer the norm. Developers want choice; they want to be able to choose the best service for the job, one that affords them the best workflow.

As cloud computing becomes a commodity, the portability of your services and config becomes paramount. You need the flexibility to change where your applications are deployed without sinking huge effort into translating to a new platform.

Manifold takes the first steps towards modeling your services and config in such a way that you don’t need to think about which platform your application is being deployed to, and it is continuing to evolve the multi-ecosystem workflow.