Apply These Techniques To Improve ML Model Deployment With AWS Lambda

by Conrado M. (@conrado), November 27th, 2020
Too Long; Didn't Read

We often get asked whether serverless is the right compute architecture for deploying ML models. The cost savings touted by serverless are as appealing for ML workloads as for traditional ones. However, the special hardware and resource requirements of ML models can be impediments to going serverless. This post walks through getting started with deploying models on AWS Lambda, along with the pros and cons of using this system for inference. In particular, we use the DistilBERT question-answering model from HuggingFace.
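As a rough sketch of the setup the post describes, a Lambda function serving the DistilBERT question-answering model might look like the following. The handler name, the `question`/`context` request fields, and the `distilbert-base-cased-distilled-squad` checkpoint are illustrative assumptions, not details from the article:

```python
# Minimal sketch of an AWS Lambda handler wrapping a HuggingFace
# question-answering pipeline (assumed request/response shapes).
import json

# Cache the pipeline at module level so warm Lambda invocations reuse it
# instead of paying the model-load cost on every request.
_qa_pipeline = None


def get_pipeline():
    global _qa_pipeline
    if _qa_pipeline is None:
        # Deferred import: transformers is heavy and only needed at inference.
        from transformers import pipeline
        _qa_pipeline = pipeline(
            "question-answering",
            model="distilbert-base-cased-distilled-squad",  # assumed checkpoint
        )
    return _qa_pipeline


def format_response(status_code, payload):
    """Shape a dict into the proxy-integration response API Gateway expects."""
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }


def handler(event, context):
    """Lambda entry point: parse the request, run inference, return JSON."""
    body = json.loads(event.get("body") or "{}")
    question = body.get("question")
    passage = body.get("context")
    if not question or not passage:
        return format_response(400, {"error": "question and context are required"})
    result = get_pipeline()(question=question, context=passage)
    return format_response(200, {"answer": result["answer"], "score": result["score"]})
```

The module-level cache matters on Lambda: a cold start still pays the full model-load cost, but subsequent requests hitting the same warm container skip it, which is one of the main levers for making serverless inference latency tolerable.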
About Author

Conrado M. (@conrado) is the CTO at Verta.ai.