Training Your Models on Cloud TPUs in 4 Easy Steps on Google Colab

Written by rish-16 (ML Research Student at NUS • rish-16.github.io) | Published by HackerNoon on 2019/08/02
Tech Story Tags: machine-learning | tensorflow | distributed-training | tpu | cloud-computing | google | latest-tech-stories | data-science

TLDR The Tensor Processing Unit (TPU) is an accelerator — custom-made by Google Brain hardware engineers — that specialises in training deep and computationally expensive ML models. After this, you’ll never want to touch your clunky CPU ever again. This article shows you how easy it is to train any TensorFlow model on a TPU with very few changes to your code. Unlike a traditional single-core processor, the TPU is a distributed, multi-core processor, so training on it works a little differently.
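To give a feel for how few changes are needed, here is a minimal sketch of connecting a Keras model to a Colab TPU using TensorFlow 2.x's tf.distribute.TPUStrategy. This is an assumption on my part: the article dates from 2019, when the TF1-era TPU utilities were current, so the exact calls it walks through may differ, and the model below is a hypothetical placeholder.

```python
import tensorflow as tf

# Locate and connect to the TPU runtime that Colab attaches to the notebook.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# A distribution strategy that replicates training across the TPU's cores.
# (On TF < 2.3 this lives at tf.distribute.experimental.TPUStrategy.)
strategy = tf.distribute.TPUStrategy(resolver)

# Any Keras model built and compiled inside the strategy scope runs on the TPU.
# This toy classifier is purely illustrative, not the article's model.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) is then called exactly as it would be on a CPU or GPU.
```

Everything outside the strategy scope (data loading, model.fit, evaluation) stays the same as in ordinary CPU/GPU training, which is what makes the switch so cheap.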

