Migrating an On-Premises Application to Serverless

by Thanh Le, December 26th, 2019

Note: This article covers everything I presented at Vietnam Web Summit 2019 (VNWS2019), the biggest event of the Vietnamese IT community.

Nowadays, the keyword “Serverless” has become very popular. Serverless is changing how developers and companies alike approach delivering business value using the public cloud. You can easily find dozens of articles on the topic. However, I’m pretty sure there are not many articles that show you, step by step, how to migrate an on-premises application to Serverless. In this article, I will.

I will show you a case study that my project team and I implemented and delivered to one of our clients.

This is the outline of my article:

  • Our client's problems
  • Why Serverless?
  • Road to Serverless solution
  • Our migration approach
  • Challenges

1. Our client's problems

Our client provides an Online Recruitment Platform. The diagram below illustrates how the system works.

The system is very traditional; there are two web applications:

  • Identity Server Web Application: handles authentication and authorization. This web application provides basic functions like User Management (CRUD), Login/Logout, and user permissions. It also allows users to sign in with their Google/Facebook accounts.
  • Recruitment Web Application: the heart of the whole system. As a user, you can use the Job Search function to look for a job, or File Upload to upload a resume. As an admin, you can use the Reporting feature, or Customer Communication to send e-mails and SMS messages.

Both web applications use the same database, called the “Recruitment” database. Besides that, the system also includes other parts such as a Message Queue and Background Jobs for sending notifications and processing data for reporting.

So, what are the problems?

  • Massive codebase: the legacy system is too large to fully understand, especially for new developers. Our client admitted that at one point they had to write a new function to fix a bug instead of refactoring, because the function causing the issue was tightly coupled with other functions.
  • Costly: besides having to buy a powerful server to deploy the system, the operation and development costs are not small.
  • Low scalability: modules in the system compete for the same resources, so if they want to scale a specific function, for example “Job Search”, they have to scale the whole application. This wastes resources, and even if they accept that, they cannot scale quickly.
  • Low availability: to release a new function or improvement, or even to fix a small bug, they have to redeploy the whole system.
  • Low reliability: a bug in any module (e.g. a stack overflow) can potentially bring down the whole system. Moreover, since all instances of the application are identical, that bug impacts the availability of the entire application.
  • Not open to new technologies: the system is developed in .NET, which means Python or Java are never welcome for new features, as adopting new technologies would break the existing system.

I guess you can find some similar points in your own company or your client's system :)

2. Why Serverless?

I’m not going to give you the answer directly. Instead, you can find the answer yourself from the facts below.

According to a report from Fortune, “cars are parked 95% of the time”.

Owning a car is similar to buying bare-metal servers and deploying your system on them. Day by day, you have to look after the server to make sure it works without any problems.

Renting a car is similar to using VPS services. You can rent a VPS for a short period. For example, the Christmas holiday is coming and you can predict that a lot of students will be seeking part-time jobs, so you rent a VPS to scale your application during that period.

Renting a car helps optimize your usage, but the best option is a car-sharing service like Uber or Grab.

Yes, it’s Serverless in software.

3. Road to Serverless solution

With the evolution of cloud computing, IT infrastructure has undergone a rapid transformation. To add more flexibility and scalability to software in the cloud, Serverless comes into the picture. The figure below illustrates this evolution of IT infrastructure.

  • Back in the 1990s, we used to build and operate a local server to deploy our applications on. The server was built for long-term use (living for years), but it required a lot of effort, from setup and configuration to operation. To scale up the application, we had to scale the whole server.
  • Since the 2000s, we have had a new option for the deployment server: the Virtual Machine (VM) joined the game, and along with it the Virtual Private Server (VPS). This supports short-term use (a few days or weeks) and deployment within a few minutes, and the unit of scale is the machine.
  • Docker was launched in 2013; I think it is one of the biggest moves in IT infrastructure in the decade. It has changed the way we design software architecture and the way we deploy applications. Containers not only speed up your application while using fewer resources than a Virtual Machine (VM), but also allow multiple colocated containers to be scaled to deliver services in a production environment.
  • But extensive use of containers drastically increases the complexity of managing them. Instead of using containers to run the application, Serverless computing replaces containers with another abstraction layer in which the cloud provider acts as the server, dynamically managing the allocation of machine resources.

4. Our Migration approach

Note: Hopefully, by the time you read this section you already have some reasons why we should go Serverless. If not, go back to sections 2 and 3 and read them again, or feel free to send me a message :)

The figure below shows our TO-BE system; in other words, it is the goal that both we and our client want to achieve.

Although Azure is our cloud provider of choice for several reasons, you can easily find similar services on AWS and Google Cloud.

We use Azure Web App to host Identity Server and create a separate database for it using Azure SQL Database. Besides that, we also use Azure Traffic Manager for high availability and as a failover plan. Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions.

The Job Web Application in the TO-BE system only contains the UI/UX, because we move the entire business logic to Azure Functions. Azure API Management (an API gateway) is used as a single entry point, ensuring that every request from the Job Web Application has to go through it to reach the Azure Functions. Azure API Management also integrates with Identity Server for authentication and authorization.
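To make this more concrete, here is a minimal sketch of what one of these functions (for example, Job Search) could look like using the in-process C# programming model. The route, authorization level, and in-memory data source are illustrative assumptions, not our client's actual code.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class JobSearchFunction
{
    // Placeholder data source; the real function would query the job store
    // (e.g. the Cosmos DB container discussed later) instead of an array.
    private static readonly string[] Jobs =
    {
        "Junior .NET Developer", "Senior Java Developer", "Data Analyst"
    };

    // HTTP-triggered function exposed through Azure API Management, which
    // authenticates the caller against Identity Server before the request
    // ever reaches this code.
    [FunctionName("JobSearch")]
    public static Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "jobs/search")] HttpRequest req,
        ILogger log)
    {
        string keyword = req.Query["keyword"];
        log.LogInformation("Searching jobs for keyword: {Keyword}", keyword);

        var results = Jobs
            .Where(j => string.IsNullOrEmpty(keyword) ||
                        j.IndexOf(keyword, StringComparison.OrdinalIgnoreCase) >= 0)
            .ToArray();

        return Task.FromResult<IActionResult>(new OkObjectResult(results));
    }
}
```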

Similar to Identity Server, we also apply Azure Web App and Traffic Manager to the Job Web Application. Azure Application Gateway acts as the firewall in front of the Job Web Application.

Even though Identity Server and the Job Web Application are migrated to cloud services, they are not Serverless; they are just PaaS.

You can find the differences between PaaS and Serverless here:

https://www.cloudflare.com/learning/serverless/glossary/serverless-vs-paas/

Azure offers two approaches to Serverless architecture:

  • Azure Functions: a serverless compute service that lets you run event-triggered code without having to explicitly provision or manage infrastructure.
  • Azure Logic Apps: a service for building automated, scalable workflows, business processes, and enterprise orchestrations that integrate your apps and data across cloud services and on-premises systems.

In the TO-BE system, we will have the following Azure Functions:

  • Job Search
  • Job Management
  • Customer Communication
  • Reporting
  • Send mail
  • Send SMS

And one Azure Logic App for the “File Upload” feature. This feature needs to handle different file types with different flows: for example, Excel files for data import, and JPEG/PNG files for image uploads that need to be resized.

Besides that, we also use other Azure services such as Azure Service Bus, Blob Storage for uploaded files, and Table Storage for log data. Azure Application Insights is used for monitoring.
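As a rough illustration of how the messaging pieces fit together, the “Send mail” function could be implemented as a Service Bus triggered Azure Function like the sketch below. The queue name, connection setting, and message shape are assumptions made for this example, not the actual project configuration.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

// Illustrative message contract for mail requests placed on the queue.
public class MailMessage
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public static class SendMailFunction
{
    // Fires whenever the Recruitment system drops a message onto the
    // "send-mail" Service Bus queue (queue and connection names are illustrative).
    [FunctionName("SendMail")]
    public static void Run(
        [ServiceBusTrigger("send-mail", Connection = "ServiceBusConnection")] string queueItem,
        ILogger log)
    {
        var mail = JsonConvert.DeserializeObject<MailMessage>(queueItem);
        log.LogInformation("Sending mail to {To} with subject {Subject}", mail.To, mail.Subject);

        // The real implementation would call an e-mail provider here to deliver the message.
    }
}
```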

Migration steps

We have to implement the new architecture in parallel with the legacy system to ensure that there is no impact on current business.

The first step is to choose the function to migrate to Serverless. That is not easy, if not to say a nightmare. Imagine having to dig into the legacy codebase (the massive codebase I mentioned above) and choose the function that has the least impact on the others, without any technical documentation or functional specification. Finally, after many workshops involving the Business Analyst team, the Product Owner, and the Development team, we decided “Job Management” would be the first function to move to Serverless.

Next, we create a facade layer that provides a high-level abstraction over the Job Management function and refactor all consumers of that functionality to use the facade. This gives us a single choke point from which we can strangle out the functionality.
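In code, the facade can start as nothing more than an interface that every consumer depends on, plus a first implementation that delegates to the legacy code. The sketch below is illustrative; IJobManagementFacade, Job, and the stubbed legacy wrapper are hypothetical names, not the project's real types.

```csharp
using System.Threading.Tasks;

// Hypothetical domain type; the real Job entity is much richer.
public class Job
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
}

// The facade: every consumer of the Job Management feature calls this
// abstraction instead of the legacy code directly, so the implementation
// behind it can be swapped later without touching the callers.
public interface IJobManagementFacade
{
    Task<Job> GetJobAsync(string id);
    Task<Job> SaveJobAsync(Job job);
}

// First implementation: a thin wrapper that simply delegates to the
// existing monolith code (represented here by a stub).
public class LegacyJobManagementFacade : IJobManagementFacade
{
    public Task<Job> GetJobAsync(string id) =>
        Task.FromResult(new Job { Id = id, Title = "(loaded from the legacy database)" });

    public Task<Job> SaveJobAsync(Job job) =>
        Task.FromResult(job); // the real code would write to the Recruitment database
}
```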

Now it is time for the “happy” part: creating the new implementation using Azure Functions and Azure Cosmos DB. When creating the new functionality, we are careful to use the same abstractions that we used when building our facade. The most important thing in this step is synchronizing data between the existing database and the new Cosmos DB database; we call this the “backfill” process.
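Here is a minimal sketch of what such a backfill could look like, assuming the Microsoft.Azure.Cosmos SDK and a straight SQL-to-Cosmos copy. The connection strings, table, columns, container, and partition key are placeholders, not the project's real schema.

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class JobBackfill
{
    // One-off (or scheduled) copy of jobs from the legacy SQL database into
    // the new Cosmos DB container used by the Azure Function.
    public static async Task RunAsync(string sqlConnectionString, string cosmosConnectionString)
    {
        using var cosmos = new CosmosClient(cosmosConnectionString);
        // Assumes a "recruitment" database with a "jobs" container partitioned on /id.
        Container container = cosmos.GetContainer("recruitment", "jobs");

        using var sql = new SqlConnection(sqlConnectionString);
        await sql.OpenAsync();

        // Column names and types are assumed for illustration.
        using var command = new SqlCommand("SELECT Id, Title, Description FROM Jobs", sql);
        using SqlDataReader reader = await command.ExecuteReaderAsync();

        while (await reader.ReadAsync())
        {
            var job = new
            {
                id = reader.GetInt32(0).ToString(), // Cosmos DB requires a string "id" property
                title = reader.GetString(1),
                description = reader.GetString(2)
            };

            // Upsert keeps the copy idempotent, so the backfill can be re-run safely.
            await container.UpsertItemAsync(job, new PartitionKey(job.id));
        }
    }
}
```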

Next, we create a Toggler: a third implementation of the facade that acts as a sort of traffic router, forwarding requests through the facade layer to either the existing function or the new Azure Function.

Once the Azure Function is ready to use, we start with a canary launch, configuring the Toggler's feature flag so that 2% of requests go to the Azure Function while the remaining 98% are routed to the existing implementation. Assuming things go well, we slowly ramp up traffic to the new implementation until eventually 100% of requests to the Job Management function are served by the new Azure Function. If any problem is found in the new implementation, a fallback is triggered to roll back and redirect the request to the existing function.
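A minimal sketch of such a Toggler, reusing the hypothetical IJobManagementFacade interface from the facade sketch above; the percentage flag and the fallback behaviour are simplified for illustration (a real rollout would typically read the percentage from a feature-flag service rather than a constructor argument).

```csharp
using System;
using System.Threading.Tasks;

// Routes calls between the legacy implementation and the new Azure
// Function-backed implementation of the facade sketched earlier.
public class JobManagementToggler : IJobManagementFacade
{
    private readonly IJobManagementFacade _legacy;
    private readonly IJobManagementFacade _serverless;
    private readonly int _serverlessPercentage; // e.g. 2 for the initial canary
    private static readonly Random Rng = new Random();

    public JobManagementToggler(IJobManagementFacade legacy,
                                IJobManagementFacade serverless,
                                int serverlessPercentage)
    {
        _legacy = legacy;
        _serverless = serverless;
        _serverlessPercentage = serverlessPercentage;
    }

    public Task<Job> GetJobAsync(string id) => Route(f => f.GetJobAsync(id));

    public Task<Job> SaveJobAsync(Job job) => Route(f => f.SaveJobAsync(job));

    // Picks the target based on the feature flag; if the new implementation
    // fails, falls back to the legacy one so the caller never sees the error.
    private async Task<Job> Route(Func<IJobManagementFacade, Task<Job>> call)
    {
        if (Rng.Next(100) < _serverlessPercentage)
        {
            try
            {
                return await call(_serverless);
            }
            catch (Exception)
            {
                // Fallback: redirect the request to the existing function.
                return await call(_legacy);
            }
        }

        return await call(_legacy);
    }
}
```

Ramping up the canary is then just a matter of raising the percentage from 2 towards 100.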

Once we are comfortable that the new Azure Function implementation is performing as expected, we get to another fun part: deleting code! At this point, we can remove the now-deprecated existing implementation of the “Job Management” function, and we can also remove the Toggler.

5. Challenges

Serverless is a great idea! It’s a new paradigm that can be applied to how modern applications are developed and run in the cloud to drastically increase developer focus and productivity.

But there are challenges and even “gotchas” that we had to overcome during the development and delivery phases.

  • New technology: at the beginning of the project, only two people had worked with cloud services and Serverless before, so a lot of training sessions were organized. For a new technology to succeed in your project, you need to win your team members over to its use. Ideally, you make them see its benefits and get them excited about them.
  • Debugging: distributed applications mean you need to rely much more on log traces to find the root cause of an issue. You can download the Azure Functions Core Tools, a CLI that helps you debug your Azure Functions during the development phase. To debug an Azure Function in the cloud, you have to set up remote debugging, which is also not easy.
  • Integration: as mentioned above, we had to implement the new Azure Functions (Serverless) in parallel with the legacy system, so integrating the new Azure Functions with the existing functions raised a lot of issues.
  • Testing: testing with Serverless is difficult because you no longer have direct access to the environment that executes your code. Bear in mind that unit tests must be written fully and carefully (a minimal test sketch follows this list).
  • Monitoring: Serverless lets you decompose an application into smaller modules, but this can lead to a new problem: distributed monitoring. With a bunch of serverless components chained together, the ability to trace a request/response end to end becomes critical, yet it tends to be very cumbersome with legacy monitoring tools. Azure Application Insights helps us monitor the Azure Functions, but getting familiar with its metrics is not easy.
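To illustrate the kind of unit test we mean, here is a minimal xUnit sketch that calls the hypothetical JobSearch function from the earlier sketch directly, with no Functions host or cloud environment involved; the assertion targets that example's placeholder data, not real project behaviour.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Primitives;
using Xunit;

public class JobSearchFunctionTests
{
    [Fact]
    public async Task Returns_Ok_With_Matching_Jobs()
    {
        // Build a fake HTTP request in memory; no Functions host is needed.
        var context = new DefaultHttpContext();
        context.Request.Query = new QueryCollection(
            new Dictionary<string, StringValues> { { "keyword", ".NET" } });

        // Call the function's Run method directly with a no-op logger.
        IActionResult result = await JobSearchFunction.Run(context.Request, NullLogger.Instance);

        var ok = Assert.IsType<OkObjectResult>(result);
        var jobs = Assert.IsType<string[]>(ok.Value);
        Assert.Contains("Junior .NET Developer", jobs);
    }
}
```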

Conclusion

Moving from a monolith to Serverless is not easy. It requires significant investment and management buy-in. On top of all that, it requires incredible discipline and skill from the team actually developing it. However, the advantages are many.

Often, it is a good idea to start small on this journey. Rather than waiting for a big one-shot move, try to take incremental steps, learn from mistakes, iterate, and try again. Also, don’t try to go for a perfect design from the get-go. Instead, be willing to iterate and improve.

Lastly, remember that Serverless is not a destination but a journey. A journey of continuous improvement.