Language Modeling - A Look at the Most Common Pre-Training Tasks
Too Long; Didn't Read
Self-supervised learning (SSL) is the backbone of transformer-based pre-trained language models. In this paradigm, a model learns to represent natural language by solving pre-training tasks on unlabeled text. We will review 10 of the most interesting and popular pre-training tasks along with their corresponding loss functions.