Deploying Deep Learning Models with Model Server by @wobotai
7,076 reads


by Wobot Intelligence Inc · 12 min read · December 2nd, 2021

Too Long; Didn't Read

A model server is a web server that hosts a deep learning model and exposes it for inference over standard network protocols, so the model can be reached from any device connected to a common network. In this write-up, we explore the part of deployment that deals with hosting a deep learning model to make it available across the web for inference: the model server. Working with images as input, we will cover two interfaces: a REST API (request-response) and a gRPC API. We will first learn how to build our own model server, and then explore the Triton Inference Server (by NVIDIA).


About Author

Wobot Intelligence Inc (@wobotai)
Wobot.ai is a Video Intelligence Platform that enables businesses to do more with their existing camera systems.
