Self-Supervised Learning (SSL) is the backbone of transformer-based pre-trained language models: the paradigm involves solving pre-training tasks (PT) that help the model learn natural language. This article puts the popular pre-training tasks together so we can assess them at a glance.
Loss function in SSL
The loss function here is simply the weighted sum of losses of individual pre-training tasks that the model is trained on.
Taking BERT as an example, the loss would be the weighted sum of the MLM (Masked Language Modelling) and NSP (Next Sentence Prediction) losses.
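For example, with $\lambda_1$ and $\lambda_2$ as the task weights (BERT itself simply adds the two losses, i.e. both weights are 1):

$$\mathcal{L} = \lambda_1\,\mathcal{L}_{MLM} + \lambda_2\,\mathcal{L}_{NSP}$$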
Over the years, many pre-training tasks have been proposed to solve specific problems. We will be reviewing 10 interesting and popular ones along with their corresponding loss functions:
(The loss functions for each task and the content are heavily borrowed from AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing.)
Causal Language Modeling (CLM) is simply a unidirectional language modeling objective: predict the next word given the preceding context.
Was used as a pre-training task in GPT-1
The loss for CLM is defined as:
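In the standard cross-entropy form (the exact notation in the survey may differ slightly), for an input sequence $x$:

$$\mathcal{L}_{CLM} = -\frac{1}{|x|}\sum_{i=1}^{|x|} \log P\big(x_i \mid x_{<i}\big)$$

where $x_{<i}$ denotes all tokens preceding position $i$.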
Drawback 1 of MLM:
The [MASK] token appears during pre-training but not during fine-tuning, which creates a mismatch between the two stages. RTD (Replaced Token Detection) overcomes this since it doesn't use any masking.
Drawback 2 of MLM:
In MLM, the training signal comes from only 15% of the tokens, since the loss is computed just on the masked tokens; in RTD, the signal comes from all the tokens, since each of them is classified as "replaced" or "original" (sketched below).
ELECTRA Architecture
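A minimal toy sketch of Drawback 2, i.e. why RTD's training signal is denser than MLM's (hypothetical sequence length and masking rate, not the actual ELECTRA pipeline):

```python
import random

random.seed(0)
seq_len = 512      # hypothetical sequence length
mask_rate = 0.15   # typical MLM masking rate

# MLM: only the ~15% masked positions contribute to the loss.
mlm_supervised = [i for i in range(seq_len) if random.random() < mask_rate]

# RTD: every position is classified as "replaced" vs "original",
# so every position contributes to the loss.
rtd_supervised = list(range(seq_len))

print(f"MLM positions with a training signal: {len(mlm_supervised)} / {seq_len}")
print(f"RTD positions with a training signal: {len(rtd_supervised)} / {seq_len}")
```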
Similar to RTD, but the tokens here are classified as shuffled or not, rather than replaced or not.
It achieves sample efficiency similar to RTD's when compared to MLM.
Loss is defined as:
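Assuming the standard token-level binary cross-entropy, analogous to RTD (the survey's exact notation may differ):

$$\mathcal{L} = -\frac{1}{|x|}\sum_{i=1}^{|x|}\Big[\,y_i \log p_i + (1 - y_i)\log(1 - p_i)\,\Big]$$

where $y_i$ indicates whether token $i$ was shuffled and $p_i$ is the model's predicted probability of that.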
RTD uses a generator to corrupt the sentence, which is computationally expensive.
RTS (Random Token Substitution) bypasses this complexity by simply substituting 15% of the tokens with random tokens from the vocabulary, while achieving accuracy similar to MLM.
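A minimal sketch of the RTS corruption step, assuming uniform sampling from the vocabulary (toy tokens for illustration; note that no generator network is involved):

```python
import random

def rts_corrupt(tokens, vocab, rate=0.15, seed=0):
    """Substitute roughly `rate` of the tokens with random vocabulary tokens.

    Returns the corrupted sequence plus a binary label per position
    (1 = substituted, 0 = original) for the detection head.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < rate:
            corrupted.append(rng.choice(vocab))
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]  # toy vocabulary
print(rts_corrupt("the cat sat on the mat".split(), vocab))
```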
It's a task to learn a cross-lingual language model, just like TLM (Translation Language Modeling), except that the parallel sentences are code-switched, as described below:
While code-switching, some phrases of x are substituted with the corresponding phrases from y, and the sample thus obtained is used to train the model.
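A toy sketch of building one code-switched sample (hypothetical English/French pair with hand-written phrase alignments; a real pipeline would obtain the alignments automatically):

```python
# Hypothetical parallel pair: x (English) and its translation y (French).
x = "the cat sat on the mat".split()
y = "le chat était assis sur le tapis".split()

# Toy phrase alignments (assumed for illustration): x[s:e] -> y[s2:e2].
alignments = {(0, 2): (0, 2), (4, 6): (5, 7)}

def code_switch(x_tokens, y_tokens, alignments):
    """Replace aligned phrases of x with the corresponding phrases from y."""
    out, i = [], 0
    while i < len(x_tokens):
        span = next(((s, e) for (s, e) in alignments if s == i), None)
        if span is not None:
            ys, ye = alignments[span]
            out.extend(y_tokens[ys:ye])
            i = span[1]
        else:
            out.append(x_tokens[i])
            i += 1
    return out

print(code_switch(x, y, alignments))
# -> ['le', 'chat', 'sat', 'on', 'le', 'tapis']
```

In practice, only a random subset of the aligned phrases would be switched for each training sample.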
Was used as a pre-training task in SpanBERT
Loss is defined as:
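SpanBERT's distinctive pre-training objective is the Span Boundary Objective (SBO); assuming that is the task meant here, each token $x_i$ inside a masked span $(s, e)$ is predicted from the span's boundary tokens and its relative position, roughly:

$$y_i = f\big(x_{s-1},\, x_{e+1},\, p_{i-s+1}\big), \qquad \mathcal{L}_{SBO}(x_i) = -\log P\big(x_i \mid y_i\big)$$

and this cross-entropy is added to the usual MLM loss during pre-training.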
There are many other interesting tasks summarized in AMMUS! Kudos to the authors, and please give it a read if you find this interesting.
Follow me on Medium for more posts on ML/DL/NLP