See the first part of this 5-part blog series [here](https://platform9.com/blog/what-is-serverless-and-what-it-means-for-you-part-1/).

AWS Lambda? Azure Functions? OpenWhisk? Fission?

As you consider Serverless and look for ways to get started (and shed all your infrastructure worries :-)), [here are some considerations](https://platform9.com/blog/what-is-serverless-part-2-challenges-and-considerations-for-choosing-the-right-serverless-solution/) and 'gotchas' to be aware of when choosing the right Serverless solution to support the needs of large-scale enterprises today.

### 1. Lock-in to a particular cloud provider

This is an obvious one. All the leading cloud providers lock customers into the unique implementation of their Serverless framework. For instance, AWS Lambda relies on a panoply of AWS offerings across DNS (Route53), API Gateway, S3, databases, networking (VPCs), and more. These proprietary components are needed to compose complex serverless applications, which means that Lambda functions, for example, are not portable to other cloud providers. Once written, porting or reusing these functions in other environments is next to impossible, since it is not just the application logic that needs to be rewritten, but also every integration with the essential services the cloud provider supplies.

Essentially, you could be swapping the tight coupling between application components and infrastructure for another type of dependency. This is a problem, particularly since the world of modern software delivery has consistently demonstrated that striving for as much decoupling as possible, for component reuse, and for portability is critical to business agility and ease of operations.

In addition to this dependency, each cloud provider introduces its own limitations that developers need to be aware of when choosing their preferred service.
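Such limits are easy to trip over late in a release cycle, so one common mitigation is to validate deployment artifacts against the provider's documented quotas in CI, before anything is pushed. A minimal sketch (the 50 MB zipped-package limit and the 128–10,240 MB memory range reflect Lambda's documented quotas at the time of writing and should be re-checked against current AWS documentation; the helper name is illustrative):

```python
import os
import zipfile

# Documented AWS Lambda quotas at the time of writing; these values are
# assumptions that should be re-verified against the current AWS docs.
MAX_ZIPPED_BYTES = 50 * 1024 * 1024        # direct-upload zip size limit
MIN_MEMORY_MB, MAX_MEMORY_MB = 128, 10240  # allocatable memory range

def check_artifact(zip_path: str, memory_mb: int) -> list:
    """Return a list of quota violations for a candidate deployment."""
    problems = []
    size = os.path.getsize(zip_path)
    if size > MAX_ZIPPED_BYTES:
        problems.append(f"artifact is {size} bytes, over the zipped limit")
    if not zipfile.is_zipfile(zip_path):
        problems.append("artifact is not a valid zip file")
    if not MIN_MEMORY_MB <= memory_mb <= MAX_MEMORY_MB:
        problems.append(f"memory {memory_mb} MB is outside the allowed range")
    return problems
```

Running a check like this as a pipeline step surfaces quota problems at build time rather than at deployment or invocation time.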
For example, AWS Lambda limits the artifact size (50 MB at the time of writing), the number of concurrent executions, and the amount of memory allocated per invocation.

### 2. Cost (and hidden costs)

As we touched on in the previous part of this series, the billing advantages of Serverless depend quite sensitively on actual usage patterns. In addition, the cost of using a given FaaS framework should not be viewed in isolation from the cost of the surrounding ecosystem services required to run the functions. For example, the financial implications of using Lambda in a large enterprise are not limited to vanilla CPU/RAM/network costs; they also include the associated charges for API Gateway, S3, DynamoDB, sending data across VPCs, and so on. Most users find that these charges quickly add up with the public cloud providers.

If your transaction volumes are high (and keep scaling higher), solutions such as Lambda functions can consume more of your budget than anticipated. Possible fixes include designing the application so that a larger batch of data can be ingested per function invocation, keeping execution time down by writing more efficient code, and minimizing data transfer across VPCs and Availability Zones (AZs). Cross-VPC transfers require Lambda functions to open Elastic Network Interfaces (ENIs), which lengthens execution times and adds a charge for the transfers themselves.

Whatever the fix, it stands to reason that Functions should also be offered on private cloud and on-premises infrastructure.

### 3. Startup Latency

One issue pointed out by various users of the public clouds is the cold-start challenge associated with FaaS frameworks.

Once a (Lambda) function has not been used for a certain length of time, the system reclaims the resources it held, meaning additional spin-up time is required to restart the function: instantiating another container, loading its dependencies, and then making it available. For certain real-time or near-real-time applications, such as IoT or cognitive applications serving live end users, an extra 100ms of latency is too high.

In contrast, the open source Serverless framework Fission lets you keep a tunable pool of pre-warmed resources on hand, ensuring your application responds with minimal latency.

### 4. Serverless Applications on Private Cloud/On-prem

Sometimes you want to own and have full control over your infrastructure, and to ensure easy portability between environments. Maybe your workload is too business-critical; maybe your organization is not comfortable adding dependencies on new cloud services. Maybe you want more visibility into the development of the systems you use. And, most commonly, you may want to save on IT costs by leveraging your existing infrastructure rather than increasing your public cloud footprint.

Even when using on-prem infrastructure, you still want to enable your developers to modernize their applications and take advantage of new patterns such as Serverless.

In the private cloud, most serverless implementations are based on a PaaS platform. The limiting model of a PaaS essentially calls into question running a serverless framework on top of it; in that sense, serverless frameworks have been bolted onto commercial PaaS offerings as an afterthought. The lock-in around such integrations makes this a very difficult proposition, as it adds another layer of complexity to an already complex architecture.
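By way of contrast with provider-specific handlers, a function written for a Kubernetes-native open source framework such as Fission (mentioned above) can stay close to plain Python. A minimal sketch, assuming Fission's standard Python environment, which invokes a module-level `main()` on each request:

```python
# A minimal function for Fission's standard Python environment.
# Assumption: the default fission/python-env entry point, which calls
# a module-level main() for each incoming request.
def main():
    return "Hello from a private-cloud function"
```

Because the file contains no cloud-provider SDK calls, the same code can be registered with the `fission` CLI on any Kubernetes cluster, on-premises or in a public cloud, and exposed over HTTP from there.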
The net result is that technical debt can compound quickly in the case of inefficiently designed applications.

### 5. Complex CI/CD toolchains

FaaS frameworks are still evolving, and their place in the complex CI/CD toolchain is still taking shape. It will take a lot of upfront investment and diligence by development teams to integrate Serverless frameworks into their Continuous Delivery pipelines.

For instance:

* A newly developed or modified function needs to pass through a chain of checks, from unit testing to UAT, before being promoted to Production. This can make the process more cumbersome.
* For FaaS, additional load and performance testing needs to be in place for each individual function. This is critical before deploying to Production.
* Rollback and roll-forward capabilities need to be put in place for each function.
* The Ops team needs to get involved much earlier than in microservices-based development.

### 6. Siloing of Serverless Operations from Other IT Ops

Developers may be exempt from worrying about servers. Ops teams, however, particularly in large enterprises that operate complex hybrid environments, still need visibility into Serverless applications and their footprint, and the ability to manage them. This is doubly true if you're trying to enable Serverless on a private cloud.

While Serverless, like other technologies, may involve specific tools or services, IT still needs a single pane of glass with granular visibility and control over ALL types of applications (legacy, microservices, serverless) across ALL environments, be it on-premises, public clouds, private clouds, containers, and more. You need a solution that allows Ops to incorporate Serverless-based applications into their overall IT strategy, processes and tools, just as they would any other type of application.

### 7. Visibility and Monitoring

On Lambda, for example, the biggest complaint from users is that they do not know what is going on. In contrast, the open source Serverless framework [Fission](https://fission.io/) provides built-in integration with native Kubernetes monitoring tools, giving you the same visibility into, and troubleshooting of, your Serverless functions that you are accustomed to with other containerized applications.

To be sure, serverless architectures demand a higher level of technological and cultural maturity from the enterprises adopting them. The next post in this series will discuss what can be done about this critical enterprise architecture challenge by leveraging Kubernetes.