Deploying Deep Learning Models with Model Server
by @wobotai



Too Long; Didn't Read

A model server is a web server that hosts a deep learning model and exposes it over standard network protocols, so the model can be queried from any device connected to the same network. In this write-up, we explore the part of deployment that deals with hosting a deep learning model to make it available across the web for inference: the model server. Working with images throughout, we cover two interfaces: a REST request-response API and a gRPC API. We will first learn how to build a model server of our own, and then explore NVIDIA's Triton Inference Server.
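To make the idea concrete, here is a minimal sketch of a model server using only the Python standard library: an HTTP endpoint that accepts an image via a REST POST request and returns a JSON prediction. The `run_model` function and the `/predict` path are illustrative placeholders, not from the article; a real deployment would load an actual model and decode the image payload.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_model(image_bytes: bytes) -> dict:
    # Placeholder for real inference: a deployed server would decode the
    # image here and run it through the loaded deep learning model.
    return {"label": "example", "score": 0.99, "bytes_received": len(image_bytes)}

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)      # raw image payload
        body = json.dumps(run_model(image_bytes)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                  # silence per-request logging
        pass

# To serve on the network (any device on the same network can then POST
# images to http://<host>:8000/predict):
#     HTTPServer(("0.0.0.0", 8000), ModelHandler).serve_forever()
```

This request-response flow over HTTP is exactly what the REST interface of a model server provides; gRPC replaces the JSON-over-HTTP exchange with typed protocol-buffer messages over a persistent connection.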



Wobot Intelligence Inc



