Deploying Open-Source Language Models on AWS Lambda

by Karol Horosin · 14 min read · January 30th, 2024

Too Long; Didn't Read

In this article, I take you through the process of deploying a smaller open-source large language model (LLM) on AWS Lambda. The goal is to experiment with Microsoft Phi-2, a 2.7-billion-parameter LLM, and explore its applications in scenarios like processing sensitive data or generating outputs in languages other than English. I walk you through setting up the environment, creating a Dockerized Lambda function, and deploying the LLM. Along the way, we delve into performance metrics, cost considerations, and potential optimizations, and I provide a script to automate the deployment process. Join me in exploring the world of LLMs on AWS Lambda, weighing performance, cost, and real-world feasibility.
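To give you a taste of where we are headed, here is a minimal sketch of what the Lambda handler inside the Docker image can look like when serving Phi-2. The llama-cpp-python package, the quantized GGUF filename, and the /opt path are my illustrative assumptions, not details fixed by the walkthrough itself:

```python
# Hypothetical Lambda handler for a quantized Phi-2 model.
# Assumes llama-cpp-python is installed in the container image and a
# GGUF model file is baked in at /opt/phi-2.Q4_K_M.gguf (illustrative path).
from llama_cpp import Llama

# Load the model once at import time so warm invocations reuse it
# instead of paying the multi-second load on every request.
llm = Llama(model_path="/opt/phi-2.Q4_K_M.gguf", n_ctx=2048)

def handler(event, context):
    # Expect an event like {"prompt": "Summarize: ..."}.
    prompt = event.get("prompt", "")
    result = llm(prompt, max_tokens=256, temperature=0.7)
    return {
        "statusCode": 200,
        "body": result["choices"][0]["text"],
    }
```

Loading the model outside the handler is the standard Lambda pattern for amortizing the cold-start cost across warm invocations.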

About Author

Karol Horosin (@horosin)
Full stack engineer and manager. I write about startups, dev and cloud. Join free newsletter: horosin.com/newsletter
