A Beginner's Guide to Incorporating Tabular Data via HuggingFace Transformers
by Ken Gu · 7 min read · November 11th, 2020

Too Long; Didn't Read

Transformer-based models are a game-changer when it comes to using unstructured text data: the top-performing models on the General Language Understanding Evaluation (GLUE) benchmark are all BERT-based. Transformer models can learn long-range dependencies in text and, unlike recurrent sequence-to-sequence models, can be trained in parallel, which means they can be pre-trained on large amounts of data. Since many real-world datasets pair free text with structured tabular features, we set out to explore how text and tabular data could be used together to provide stronger signals in our projects.
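
As a taste of what this looks like in practice, here is a minimal sketch of the simplest fusion strategy: encode the text with a pretrained Transformer and concatenate the resulting embedding with the numeric tabular features before a classification head. The model name, feature values, and classifier head below are illustrative assumptions, not the article's actual architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class TextTabularClassifier(nn.Module):
    """Combine a pretrained Transformer's text embedding with numeric
    tabular features by simple concatenation before a classifier head."""

    def __init__(self, model_name="bert-base-uncased",
                 num_tabular_feats=4, num_labels=2):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(model_name)
        hidden = self.text_encoder.config.hidden_size  # 768 for bert-base
        self.classifier = nn.Sequential(
            nn.Linear(hidden + num_tabular_feats, 128),
            nn.ReLU(),
            nn.Linear(128, num_labels),
        )

    def forward(self, input_ids, attention_mask, tabular_feats):
        out = self.text_encoder(input_ids=input_ids,
                                attention_mask=attention_mask)
        # Use the [CLS] token representation as a fixed-size text embedding.
        cls_emb = out.last_hidden_state[:, 0]
        # Fuse the two modalities by concatenation along the feature axis.
        combined = torch.cat([cls_emb, tabular_feats], dim=-1)
        return self.classifier(combined)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TextTabularClassifier()

enc = tokenizer(["great product, fast shipping"], return_tensors="pt",
                padding=True, truncation=True)
# Hypothetical tabular features: price, rating, review count, in_stock.
tabular = torch.tensor([[19.99, 4.5, 120.0, 1.0]])
logits = model(enc["input_ids"], enc["attention_mask"], tabular)
```

Concatenation is only the starting point; more sophisticated strategies, such as gating or attention-weighted sums over the modalities, follow the same pattern of producing a single joint representation for the downstream head.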

About the Author

Ken Gu (@codekgu): Passionate about NLP and Graph Deep Learning Research. Georgian. UCLA.
