The Benefits of Serverless Computing and its Impact on DevOps

by Cabot Technology Solution, June 16th, 2018

Every month, new buzzwords are introduced in the world of technology. Some of these innovative technologies boost the performance of applications, while others redefine the way products are made. Still others are given undue importance despite the hype, and eventually wither away into obscurity. Not so with serverless computing. That technology is here to stay.

You spend a great deal of time building and deploying apps, but how much time do you spend managing them?

This is where serverless computing comes to the rescue: by abstracting away operating systems, servers and infrastructure, it can help solve many problems. The biggest advantage of serverless computing is that there is no more provisioning or managing of physical servers.

It is not that servers are troublesome. Servers are computers that accept requests from client machines and deliver data. Web servers, FTP servers and email servers are all examples of server computing. Consider a web server: a user who needs to access pages on the internet sends requests to the web server through a browser, and the web server returns data based on the request.

Serverless computing doesn’t mean there aren’t any servers; there is still a cloud provider allocating the machine resources, but the developer doesn’t have to worry about server management and can focus on building the best web applications.

The cloud provider does the rest; it handles resource scaling and makes it flexible and automatic. Organizations pay only for the resources they use, and only while their applications are using them. If they are not using the resources, they don’t pay anything, eliminating the need to pre-provision or over-provision capacity for storage and computing.

Function as a Service (FaaS)

Function as a Service, or FaaS, is a form of serverless computing and a category of cloud computing service. It is a comparatively recent development in the arena of cloud development, first introduced around 2014. Eventually, users were introduced to AWS Lambda, Microsoft Azure Functions, Google Cloud Functions, Oracle Cloud, IBM/Apache’s OpenWhisk and so on.

Users are provided with a platform through which they can develop, run and manage their application functionality. They no longer have to worry about managing or maintaining the infrastructure involved in building and launching an app. FaaS has evolved to be an ideal solution for apps running in the cloud. There is no perpetually running process waiting for HTTP requests or API calls; instead, an event mechanism triggers code execution on one of the cloud’s servers, and the provider defines the event types.

Functions are called in response to predefined events and triggers; they don’t start themselves, but they can also be invoked manually or on request. The developer only needs to write the code that runs when a certain event takes place. The cloud provider takes care of the rest of the tasks.

Here’s what that looks like:

- The provider finds the server to execute the code.

- The service understands when it needs to scale up/down.

- All the containers used to execute the functions are decommissioned after the task.

- The developer/organization is charged only for the resources used up; the execution is metered in units of 100 ms (see the illustration after this list).
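
To make the metering point concrete, here is a minimal sketch (in TypeScript; the per-unit price is deliberately left out since rates vary by provider) of how execution time rounds up to 100 ms billing units:

```typescript
// Illustration of 100 ms metering: execution time is rounded up to the
// nearest 100 ms unit before billing. The per-unit price is provider-specific
// and omitted here.
function billedUnits(executionMs: number): number {
  return Math.ceil(executionMs / 100);
}

console.log(billedUnits(130)); // 2 units: a 130 ms invocation is billed as 200 ms
console.log(billedUnits(95));  // 1 unit
console.log(billedUnits(300)); // 3 units
```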

Let’s look at a few examples.

AWS Lambda:

AWS Lambda is a compute service that runs your code in response to events, minus the hassle of provisioning or managing servers. You pay for compute time only when the code runs. Suppose your user wants to see thumbnails of the photos they upload, or you want to handle an in-app purchase while the user is in the application; the code is executed only at that particular time. To do this, you need backend code that responds to these triggers (uploads, in-app purchases, etc.).

Managing the infrastructure that handles this backend code (provisioning it, scaling it up and down) can be resource-consuming, and monitoring whether it all works according to plan can be draining. In such situations, developers wish for a service that handles all these tasks while they just focus on developing the code.

AWS Lambda comes to the rescue here, because it responds to triggers and events such as S3 object uploads, DynamoDB updates, Kinesis streams, etc. Once the code is ready, you can push it to the service, and it will handle everything else, including scaling, patching and administration. Additionally, you can monitor the performance of the code in real time through Amazon CloudWatch logs.

The code run on Lambda is known as a Lambda function; just upload the code as a zip file or write it in the editor in the AWS Management Console. There are templates and sample use cases that ease the developer’s job. Once the function is loaded, select an AWS event source, such as a DynamoDB table or an S3 bucket, so the event triggers it automatically. This helps you build applications that respond quickly to requirements.
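
As a rough sketch of the thumbnail scenario above (assuming the Node.js runtime, with types from the @types/aws-lambda package and the actual image-resizing step left as a placeholder), an S3-triggered handler might look like this:

```typescript
// Hypothetical S3-triggered Lambda handler, written in TypeScript for the Node.js runtime.
// The resize step is a placeholder; in practice you would fetch the object with the
// AWS SDK, resize it with an image library, and write the thumbnail to another bucket.
import { S3Event } from 'aws-lambda';

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name; // bucket that received the upload
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    console.log(`Would generate a thumbnail for s3://${bucket}/${key}`);
  }
};
```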

Azure Functions:

Microsoft Azure makes it possible for developers to write just the bare minimum of code without bothering about the infrastructure problems around it.

In Microsoft Azure, three main components make up the serverless offering: Azure Functions, Logic Apps (which help you visually design workflows and orchestrate a number of activities, including Azure Functions) and Event Grid (a message routing service that connects various parts of Azure). There are built-in features that let you know when new subscriptions are added to your account or when a new web service is added, and functions like these can be orchestrated well through these core components.

Azure Functions is a serverless, event-based framework that automatically scales up or down based on the number of incoming events, executes code on request and manages the flow of data. This reduces DevOps overhead and concentrates effort on business logic, which in turn reduces time to market. It is ideally suited to cron-style processing, IoT, data transformation, web and mobile backends, and more.

Azure Functions is a combination of code and events. Triggers and bindings (input and output) run the functions based on schedules, HTTP calls, Blob Storage changes, etc.
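
As a minimal sketch of that trigger-and-binding model (assuming the Node.js runtime and types from the @azure/functions package; the trigger itself and the HTTP output binding would be declared in the function's function.json), an HTTP-triggered function might look like this:

```typescript
// Hypothetical HTTP-triggered Azure Function, written in TypeScript for the Node.js runtime.
// The matching httpTrigger input and http output bindings live in function.json.
import { AzureFunction, Context, HttpRequest } from '@azure/functions';

const httpTrigger: AzureFunction = async (context: Context, req: HttpRequest): Promise<void> => {
  const name = req.query.name || (req.body && req.body.name);

  // The "res" output binding carries the HTTP response back to the caller.
  context.res = {
    status: 200,
    body: name ? `Hello, ${name}` : 'Pass a name in the query string or request body',
  };
};

export default httpTrigger;
```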

Once the application is launched, you can monitor its functions through Application Insights, getting information on the health of your app, running queries and so on.

Google Cloud Functions:

Google has been in the fray of ‘serverless’ options since 2008, with its Google App Engine. The open beta version of Google Cloud Functions is pretty new, launched in March 2017. It supports an event-driven approach, where you can trigger functions when specified events happen within the cloud environment.

While Lambda offers unlimited functions, Google Cloud provides 1,000 functions per project, so there is a limit. Cloud Functions offers event-driven computation for several kinds of projects, including APIs and marquee tools. And since the service is fairly new, support is presently limited to JavaScript on Node.js; other language options are expected to arrive soon. The service presently supports internal events from Google Cloud Storage (Object Change Notifications) and from Google Cloud Pub/Sub. Cloud Pub/Sub is Google’s message bus, which scales as required. It hides the messaging queue from you, so all you need to do is write the code for the consumer and the data producer.
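
As a rough sketch of that consumer side (written in TypeScript and compiled to the JavaScript that Cloud Functions runs; the exact event shape varies by runtime version, so the message type below is a local assumption rather than an official SDK type), a Pub/Sub-triggered function might look like this:

```typescript
// Hypothetical Pub/Sub-triggered Cloud Function. The payload published to the
// topic arrives base64-encoded in the event's data field.
interface PubSubMessage {
  data?: string;                            // base64-encoded payload
  attributes?: { [key: string]: string };   // optional message attributes
}

export const consumeMessage = (message: PubSubMessage): void => {
  const payload = message.data
    ? Buffer.from(message.data, 'base64').toString()
    : '(empty message)';

  // Only consumer logic lives here; the queue itself is hidden by Pub/Sub,
  // as described above.
  console.log(`Received Pub/Sub message: ${payload}`);
};
```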

Once you write the executable code, Google App Engine fires up enough nodes to handle the incoming traffic and scales up and down as requests demand.

Serverless Computing and DevOps

Now that developers are able to focus on the business logic, they can launch great applications with just the piece of code to be executed when a particular event takes place. We have also learnt that there are many remarkable FaaS providers around, such as AWS Lambda, Microsoft Azure Functions and Google Cloud Functions. This technology goes beyond microservices and embraces nanoservices.

DevOps is a term for a particular software engineering culture and practice in which the processes of software development and software operation are unified. It has evolved into a movement, like Agile. It aims to speed up software development by revamping traditional methods and to facilitate better collaboration between the development and operations teams.

Earlier, Dev and Ops worked separately in silos, leading to poor teamwork and a lack of transparency. In DevOps, teams sometimes merge into a single, consolidated team that works together across the application’s lifecycle, from development and deployment to testing and operations. Processes that were once painstakingly slow are automated through technology stacks and tooling. Software and applications no longer merely complement a business; they are the backbone, an integral component through which businesses deliver to their customers.

If microservices dealt with very small business capabilities, nanoservices go even further, down to just a fraction of that. For example, if a microservice handles the code required for CRUD operations on an account, a nanoservice handles each account operation separately (see the sketch below). Serverless computing works with nanoservices, and it changes the future of DevOps.
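
A hypothetical sketch of that contrast, for an "account" resource (the names and shapes here are illustrative assumptions, not a prescribed API):

```typescript
// Microservice granularity: one deployable service owns all CRUD operations.
class AccountService {
  create(account: { id: string; owner: string }) { /* persist the new account */ }
  read(id: string) { /* fetch one account */ }
  update(id: string, changes: { owner?: string }) { /* apply changes */ }
  delete(id: string) { /* remove the account */ }
}

// Nanoservice granularity: each operation is its own independently deployed
// function, triggered by its own event.
export const createAccount = async (event: { id: string; owner: string }) => { /* persist */ };
export const readAccount   = async (event: { id: string }) => { /* fetch */ };
export const updateAccount = async (event: { id: string; owner?: string }) => { /* apply */ };
export const deleteAccount = async (event: { id: string }) => { /* remove */ };
```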

Challenges in Implementing DevOps

Lack of in-house expertise

There are multiple benefits to employing DevOps, but it often proves difficult to convince stakeholders of its merits. Several people have admitted that they do not have the expertise to adopt DevOps. This leads to confusion about proper implementation, which introduces risks and further challenges. These risks can pose a threat for companies that already have their own streamlined set of guidelines for running a project.

Organizational challenges

It is the principles of continuous deployment, persistent testing and collaborative reporting that make DevOps successful. But many organizations stumble when it comes to tool selection, choosing the wrong tools and eventually failing. Here are the main organizational challenges faced by such companies.

Fear of failure: Managers doubt DevOps’ success. They agonize over whether they can deliver the required transparency to both the customer and the organization.

Bureaucracy: Bigger and older companies organize their workers into different and independent teams and this creates walls and barriers. However, several innovative and younger companies are beginning to realize the importance of cross-functional teams.

Legacy processes: When new employees start at a company, they are usually given a set of guidelines for moving from point A to point B, rather than understanding other methods of reaching that point.

Collaboration between development and operations

Fostering collaboration and communication between team members can resolve a number of problems. This ensures that team members do not repeat tasks, wasting time and resources. It is imperative to make the teams understand that they should bring down the walls between themselves to foster better time to market and better software stability.

Juggling of priorities

The DevOps team spends its time supporting the development team with builds, maintaining the development environment and troubleshooting deployment issues, leaving little attention for operations.

How Serverless Computing Overcomes these Challenges

Focus on core competency

Serverless computing comes to the rescue by freeing up the time the development and operations teams spend on maintaining builds and troubleshooting. This time can be spent more wisely on attaining business goals, exploring new pastures and perfecting products.

Enabling business agility

Business agility is possible with serverless computing because you can create an environment of continuous improvement in development. Business agility comes when organizations become agile enough to make quick decisions that steer them towards success. Enterprises that use serverless computing on their path to DevOps will achieve greater agility.

Continuous deployment

Serverless computing gives developers the freedom to perfect their code, while the hassles of infrastructure, scaling and provisioning are all handled by the provider. This can happen simultaneously, saving time and resources. Through continuous deployment, code changes are automatically reflected in any environment.

Other Benefits of Serverless Computing

Less operational complexity: Serverless computing itself solves a number of complex engineering problems and provides sophisticated solutions to them, allowing developers to prototype and develop faster. As resource scaling is automatic and flexible, operational costs are also lower for organizations. Operational inefficiencies are combated successfully in every arena of the value chain: logistics, communications, operations, etc.

Scales within seconds: The technology practically scales up and down in no time. Whenever the load on the function grows, the vendor’s infrastructure will immediately make thousands of copies of the function (depending on the number of requests that come in) and scale up to accommodate the surge.

High availability: With serverless computing, the ‘servers’ themselves are automatically deployed across several availability zones, so the application is always available. And if it is the middle of the night and you are plagued with an issue that requires support, that will be provided too.

Choice of multiple coding languages: Depending on which cloud computing service you are using and the nature of the project, you can work in many programming languages, including Python, Swift, JavaScript (Node.js) and more. Many of the platforms support code written in more than one language, too.

Lower development cost: A number of issues can arise when you are designing a product for launch. Adopting serverless computing solutions like AWS or Azure can lower development costs greatly. Renting or buying infrastructure, setting it up, capacity planning and maintenance are all tasks you no longer need to spend time on.

Lower operational cost: The cloud provider handles the infrastructure and its operational process, including maintenance, security and scalability, leading to lower operational cost for enterprises.

Easy deployment: Focus on the task at hand, and don’t worry about getting tangled in the traditional workflow while running, testing and deploying the code. The functions can be tested and deployed independently, and any changes in code can be pushed through the continuous delivery pipeline.

Secure infrastructure: It is the responsibility of the cloud service to provide secure infrastructure. Providers ensure that it is safe and protected from hacks and attacks, as they cannot afford to compromise on security.

Compatible with microservices: Serverless computing supports microservice-oriented solutions, so you can break complex applications down into small, easily manageable modules, which makes the entire process of developing and testing software agile.

Low infrastructure administration overhead: As the cloud provider handles all the tasks related to the infrastructure and maintaining it, there is less administration overhead. You can free your business from the overheads related to maintaining and upgrading infrastructure, provisioning servers, etc.

Dynamic resource allocation: The resources are allocated only when a specific event occurs. They stay idle until a task is assigned, so enterprises only need to pay for the fractional resource time used up.

Agile friendly: Serverless architecture aids in agile-friendly product development. FaaS platforms are capable of letting developers focus on the code, and make their product feature-rich through agile build, test and release cycles.

Faster release cycles: You can not only deploy apps faster, but can easily roll out updates and ensure they get reflected in the program faster than ever before.

Drawbacks to Consider

There are a few limitations to serverless computing.

Latency: Latency and concurrency are issues that affect throughput in a serverless architecture. Latency is the time taken to start processing a particular task, while concurrency is the number of independent tasks that can run at the same time. Latency requirements can vary widely across workloads, so it is important to define them properly to get the best out of the serverless platform.

Limits: The cloud provider enforces memory and processing limits on tasks, and running too many tasks at once can mean exceeding connection limits. This can block other tasks from running properly and within the desired timeframe.

Vendor lock-in: Everything is controlled by the vendor, so the developer or the enterprise doesn’t have complete control over how resources are used and managed.

Multitenancy issues: If someone else’s function caused the remote server to crash, it could inadvertently affect yours as well. This is an issue when several serverless applications run on the same physical server.

Integration testing challenges: Integration testing in a serverless architecture is tough. The units of integration are considerably smaller than in other architectures, so the reliance on integration testing is a lot higher.

Conclusion

The serverless cloud architecture is an evolutionary step that helps you leverage the cloud, and advances in computing technology have created a paradigm shift in DevOps as well. It helps in achieving business agility and can foster rapid delivery of business value through collaboration, learning and continuous improvement.

Take your mind off infrastructure issues and focus on business goals, because you can build, run and test applications or services without servers. All the developer has to do is focus on the code, handle the business logic and deploy it in small function packages.

DevOps and microservices have gained considerable momentum over the last few years, and serverless computing is likely to be the next big thing. It helps create an ecosystem that works remarkably well to manage every aspect of a composable infrastructure. Going serverless can help enterprises deliver new products and services in a much cheaper and more accurate way.

Interested in adopting serverless computing in your organization? Let us help you!

Contact Us Today!

Originally published at Cabot Solutions on June 15, 2018.