Scale Vision Transformers (ViT) Beyond Hugging Face

Written by maziyar, Principal AI/ML Engineer | Published 2022/08/25
Tech Story Tags: apache-spark | databricks | nlp | transformers | nvidia | pytorch | tensorflow | hackernoon-top-story

TL;DR: The purpose of this article is to demonstrate how to scale out Vision Transformer (ViT) models from Hugging Face and deploy them in production-ready environments for accelerated, high-performance inference. By the end, we will have scaled a ViT model from Hugging Face by 25x (2300%) using Databricks, NVIDIA, and Spark NLP.

