Apply These Techniques To Improve ML Model Deployment With AWS Lambda

by @conrado (Conrado M.)


Too Long; Didn't Read

We often get asked whether serverless is the right compute architecture for deploying models. The cost savings touted by serverless seem as appealing for ML workloads as they are for traditional ones. However, the particular hardware and resource requirements of ML models can complicate a serverless deployment. This blog post covers how to get started with deploying models on AWS Lambda, along with the pros and cons of using this setup for inference. In particular, we use the DistilBERT question-answering model from HuggingFace.
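To make the deployment pattern concrete, here is a minimal sketch of a Lambda handler for a question-answering model. It illustrates the standard Lambda technique of initializing the model once at module load (outside the handler) so that warm invocations reuse it; the `qa` stand-in below is a hypothetical placeholder so the sketch runs without downloading model weights — in a real deployment it would be replaced by a HuggingFace `pipeline`, as noted in the comments.

```python
import json

# In a real deployment, load the model once at module import so warm
# Lambda invocations reuse it, e.g.:
#   from transformers import pipeline
#   qa = pipeline("question-answering",
#                 model="distilbert-base-cased-distilled-squad")
# A trivial stand-in is used here so the sketch runs without model weights.
def qa(question, context):
    # Placeholder "model": returns the first word of the context.
    return {"answer": context.split()[0], "score": 0.0}

def lambda_handler(event, context):
    # API Gateway delivers the request payload as a JSON string in "body".
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")
    passage = body.get("context", "")
    result = qa(question, passage)
    return {
        "statusCode": 200,
        "body": json.dumps({"answer": result["answer"],
                            "score": result["score"]}),
    }
```

Keeping the model load at module scope is what makes Lambda viable for inference: only cold starts pay the (often multi-second) initialization cost, while subsequent requests hit the already-loaded model.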
