
GPU Computing for Machine Learning

by Modzy, May 21st, 2021

Too Long; Didn't Read

GPU computing enables many modern machine learning algorithms that were previously impractical due to slow runtimes. CUDA, developed and provided free of charge by NVIDIA, is the parallel computing runtime and software API that powers most leading machine learning frameworks. Many popular deep learning frameworks, such as TensorFlow, PyTorch, MXNet, and Chainer, include CUDA support, allowing users to take advantage of GPU computation without writing a single line of CUDA code. The Modzy platform provides a CUDA-capable runtime with support for NVIDIA GPUs.
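
For example, a few lines of PyTorch are enough to run a matrix multiplication on a GPU. The sketch below is illustrative rather than taken from the article, and it assumes a CUDA-capable device is available, falling back to the CPU otherwise:

    import torch

    # Use the GPU when a CUDA-capable device is present; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Allocate two random matrices directly on the chosen device.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)

    # The multiplication executes on the GPU, with no hand-written CUDA required.
    c = a @ b
    print(c.device)  # prints "cuda:0" when a GPU was found

The same device-object pattern moves entire models onto the GPU via model.to(device), which is how these frameworks hide CUDA from the user.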

About the Author

Modzy (@modzy)
A software platform for organizations and developers to responsibly deploy, monitor, and get value from AI - at scale.
