Going serverless is like outsourcing routine tasks to a team of remote professionals. You get a shorter time to market, lower costs, and better scalability. You focus on features that make your users happy instead of managing complex infrastructure. You can be more creative and innovative, since a third-party vendor takes the boring tasks off your plate.
Sounds cool, and not just to me. Serverless architecture, also known as FaaS (Function-as-a-Service), shows an annual growth rate exceeding 700 percent, clear evidence that companies highly value its advantages. Okay, but what about the disadvantages? Are there any pitfalls that people tend to overlook? Yes, there are, and they mostly relate to security.
When you build your application with AWS Lambda, Azure Functions, Google Cloud Functions, or IBM Cloud Functions, the cloud provider takes responsibility for securing your project — but only partially. Vendors protect databases, operating systems, virtual machines, the network, and other cloud components. But they are not in charge of the application layer, which includes the code, business logic, data, and cloud service configurations. It’s up to the app’s owner to defend these parts against possible cyber attacks.
Unfortunately, many businesses don’t realize these risks and, as a result, have no strategy to mitigate them. As PureSec, a serverless security platform, put it in its 2018 survey, ‘there’s a huge gap in security knowledge around serverless when compared to traditional applications’. To close, or at least narrow, this gap, let’s get down to the major challenges that come alongside the benefits of FaaS.
In the serverless ecosystem, we have lots of independent functions, each with its own set of services and responsibilities, its own storage, and its own state management. This results in hundreds of interactions and may lead to a situation in which certain functions get more permissions than they are supposed to have. For example, functions that were developed to run calculations or send emails can end up with access to database resources.
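As an illustration, here is a minimal sketch of the least-privilege idea, assuming an AWS setup managed with boto3; the role name, policy name, and the "mailer" scenario are hypothetical. The function that sends emails gets permission to call SES and nothing else, so it can never reach your database or storage:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical execution role of a function whose only job is sending emails.
ROLE_NAME = "mailer-function-role"

# Least-privilege policy: the function may send email via SES, nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ses:SendEmail"],
            "Resource": "*",
        }
    ],
}

# Attach the narrowly scoped inline policy to the function's role.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="mailer-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```

If this function is ever compromised, the blast radius stays limited to what the single SES action allows.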
Precautions:
Your serverless application is by nature more porous than a monolith. It is split into small parts, and each of them can be triggered in different ways — not only by API gateway commands but also by cloud storage events, database changes, data streams, IoT telemetry signals, and emails, to name a few. This vast pool of event sources expands the attack surface and makes it more difficult to eliminate malicious event-data injections.
Keep in mind that traditional web application firewalls (WAFs) protect only those functions that are called through an API gateway, leaving all other parts open to attack. For example, serverless apps can be plagued by SQL (Structured Query Language) injections, which embed malicious code into a request.
How would a successful injection attack impact your system? It depends entirely on the role of the function infected with malicious code. If it has access to cloud storage, the code could destroy data or upload corrupted files. Once it reaches a database table, it is capable of deleting records. The rule of least privilege mentioned in the previous section reduces the potential points of attack and prevents malware from spreading across the entire app. But of course, this precaution alone is not sufficient to secure your app.
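To make the injection risk concrete, here is a hedged sketch of a Lambda-style handler, assuming a MySQL database reached through the PyMySQL driver; the table, connection details, and event shape are made up for illustration. The safe version passes user input to the driver as a bound parameter instead of pasting it into the SQL string:

```python
import pymysql

# Connection is created once per container (typical serverless pattern).
connection = pymysql.connect(
    host="db-host", user="app", password="***", database="appdb"
)

def handler(event, context):
    user_id = event["user_id"]  # attacker-controlled input
    with connection.cursor() as cursor:
        # Vulnerable: input concatenated straight into the SQL string.
        # cursor.execute("SELECT * FROM orders WHERE user_id = '" + user_id + "'")

        # Safer: the driver escapes the value as a bound parameter.
        cursor.execute("SELECT * FROM orders WHERE user_id = %s", (user_id,))
        return cursor.fetchall()
```

The same parameterization habit applies whichever event source delivered the data, not just API gateway calls.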
Precautions:
Dependencies of functions that rely on third-party software (open-source libraries, packages, etc.) are hard to monitor regardless of the app architecture. With serverless, however, controlling them manually becomes extremely challenging.
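One way to automate the check, sketched below for a Python-based function and assuming the PyPA pip-audit tool is installed in your CI environment, is to fail the build whenever a pinned dependency has a known vulnerability:

```python
import subprocess
import sys

# Scan the function's pinned dependencies for known vulnerabilities.
# pip-audit exits with a non-zero status if any are found.
result = subprocess.run(["pip-audit", "-r", "requirements.txt"])

if result.returncode != 0:
    print("Vulnerable dependencies found - blocking deployment.")
    sys.exit(1)
```

Running this on every commit keeps the audit continuous instead of leaving it to occasional manual reviews.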
Precautions:
Regardless of the architecture, sensitive data exposure is considered one of the major security problems. So most of the practices we use to secure traditional apps also work well for serverless. What we must consider is that attackers can target other data sources, extracting sensitive information from cloud storage and database tables instead of servers.
There’s one more issue. In the serverless world, functions interact with each other as well as with third-party services, and they exchange data more actively than in the classic approach. The more information shared, the greater the risk of data leakage or destruction.
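For instance, instead of hard-coding credentials or passing them between functions in plain text, a function can pull them at runtime from a managed secret store. A minimal sketch, assuming AWS Secrets Manager and a hypothetical secret named app/db-credentials:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    # The secret is stored encrypted at rest and fetched over TLS,
    # so credentials never live in code, config files, or logs.
    response = secrets.get_secret_value(SecretId="app/db-credentials")
    return json.loads(response["SecretString"])
```

The less sensitive data each function carries around, the less there is to leak when one of them is breached.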
Precautions:
After your app is deployed to the cloud, anyone connected to the Internet can try to enter your system. You need to cut off people with ill intentions while providing a smooth experience to your actual users. Unlike a traditional server-based application, serverless software is decentralized and has multiple access points, including web browsers, mobile apps, and more. To ensure security, you need to authenticate all end users and control which resources they can access.
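A common pattern is to verify a signed token on every call before any business logic runs. Below is a hedged sketch of a Lambda-style authorizer using the PyJWT library; the signing key, audience, issuer, and response shape are placeholders, not details from this article:

```python
import jwt  # PyJWT

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----..."   # placeholder verification key
EXPECTED_AUDIENCE = "my-serverless-api"        # hypothetical audience
EXPECTED_ISSUER = "https://auth.example.com/"  # hypothetical issuer

def authorize(event, context):
    token = event["headers"]["Authorization"].removeprefix("Bearer ")
    try:
        # Rejects expired tokens, wrong audience/issuer, and bad signatures.
        claims = jwt.decode(
            token,
            PUBLIC_KEY,
            algorithms=["RS256"],
            audience=EXPECTED_AUDIENCE,
            issuer=EXPECTED_ISSUER,
        )
    except jwt.InvalidTokenError:
        return {"isAuthorized": False}
    return {"isAuthorized": True, "context": {"sub": claims["sub"]}}
```

Centralizing this check in one authorizer keeps every entry point, not just the obvious API routes, behind the same gate.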
Precautions:
Autoscaling is one of the killer features of serverless. It enables app owners to pay only for what they use and saves them effort, leaving scaling to the cloud provider. Alas, this technology has not only created new opportunities for businesses but also given rise to a new type of attack called Denial of Wallet (DoW).
You can think of DoW as an example of cyber attacks adapting to new conditions. When a traditional app falls victim to the long-familiar denial-of-service (DoS) attack, a flood of fake requests creates a kind of traffic jam and makes your services unavailable to regular customers. Serverless architecture dictates a different scenario, though. While under attack, the app doesn’t get blocked. Instead, it responds by scaling up in an attempt to deal with the avalanche of calls.
What happens next? The cost of the serverless infrastructure grows dramatically until your budget is exhausted. It’s up to the enterprise, not the cloud vendor, to pay the bill for the overrun.
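One practical brake, sketched here for AWS with boto3 and a made-up function name, is to cap how far a single function can scale so that a flood of calls hits a ceiling instead of your budget:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve a fixed concurrency ceiling for the function; invocations beyond
# it are throttled instead of scaling (and billing) without limit.
lambda_client.put_function_concurrency(
    FunctionName="checkout-handler",   # hypothetical function name
    ReservedConcurrentExecutions=50,
)
```

Pairing a cap like this with billing alerts turns a potential runaway bill into a throttling event you can investigate.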
Precautions:
Gartner warns that by 2022, APIs will become the major source of data breaches. The problem is particularly acute for serverless applications built on top of microservices, with independent pieces of software interacting through numerous APIs. Published to the public cloud, these APIs become potentially available to hackers or malicious code. Worse yet, some of them stay invisible to traditional security tools. These hidden entry points, called shadow APIs, pose a big challenge to keeping your serverless app secure, especially when it comes to complex enterprise software.
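A simple starting point, sketched below for AWS API Gateway, is to regularly pull the list of deployed APIs and compare it against the inventory your team actually documents; anything unaccounted for is a candidate shadow API. The documented set here is a placeholder:

```python
import boto3

apigw = boto3.client("apigateway")

# APIs your team knows about and documents (placeholder values).
DOCUMENTED_APIS = {"orders-api", "billing-api"}

# Compare against what is actually deployed (first page of results).
deployed = apigw.get_rest_apis()["items"]
for api in deployed:
    if api["name"] not in DOCUMENTED_APIS:
        print(f"Possible shadow API: {api['name']} (id: {api['id']})")
```

Running an inventory check like this on a schedule surfaces forgotten or undocumented endpoints before attackers find them.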
Precautions:
New opportunities often create new challenges, and serverless is no exception. Cloud computing services offer your business incredible advantages in terms of cost-efficiency, scalability, and zero administration, yet to reap the financial benefits you need to invest in a security policy.
Considering the increased attack surface and its complexity, it makes sense to use automated security tools for continuous monitoring, access management, and vulnerability discovery. At the same time, you can’t do without human expertise.
Serverless shifts more responsibility onto the developers, QA engineers, and DevOps teams who are expected to follow security best practices. By combining a variety of precautions and tactics throughout the life of the application, from idea to deployment to maintenance, you’ll win this game and hit the jackpot.