Train a NER Transformer Model with Just a Few Lines of Code via spaCy 3

Written by ubiai | Published 2021/03/12
Tech Story Tags: nlp | bert | transformers | machine-learning | ner | artificial-intelligence | data-science | named-entity-recognition

TLDR: BERT, which stands for Bidirectional Encoder Representations from Transformers, leverages the transformer architecture in a novel way: to make a prediction, it analyzes the words on both sides of a randomly masked token. Fine-tuning transformers requires a powerful GPU with parallel processing. In this tutorial, we use the newly released spaCy 3 library to fine-tune our transformer. We provide the training data in IOB format in a TSV file, then convert it to spaCy's binary format.
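The data-conversion step mentioned above can be sketched in a few lines of Python. This is a minimal sketch rather than the tutorial's own code: it assumes the TSV holds one token and its IOB tag per line, separated by a tab, with blank lines marking sentence boundaries, and the file names train.tsv and train.spacy are hypothetical.

```python
import spacy
from spacy.tokens import Doc, DocBin

nlp = spacy.blank("en")  # blank English pipeline; we only need its vocab

def read_iob_tsv(path):
    """Yield (words, iob_tags) pairs from a token<TAB>tag TSV file."""
    words, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line marks a sentence boundary (assumed layout)
                if words:
                    yield words, tags
                words, tags = [], []
                continue
            token, tag = line.split("\t")
            words.append(token)
            tags.append(tag)
    if words:
        yield words, tags

doc_bin = DocBin()
for words, tags in read_iob_tsv("train.tsv"):  # hypothetical file name
    # spaCy 3's Doc constructor accepts IOB tags (O, B-LABEL, I-LABEL) directly
    doc = Doc(nlp.vocab, words=words, ents=tags)
    doc_bin.add(doc)
doc_bin.to_disk("train.spacy")  # binary corpus consumed by `spacy train`
```

spaCy 3 also ships an equivalent CLI one-liner, `python -m spacy convert train.tsv . --converter iob`, which writes the same binary .spacy format.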

