Validating Theoretical Loss Bound: Vanilla Transformer Experiments

Too Long; Didn't Read

Explore the training dynamics of vanilla Transformer models on the 2M token Question-Formation dataset, analyzing how their cross-entropy losses stabilize during training.


Abstract and 1 Introduction

2 Related Work

3 Model and 3.1 Associative memories

3.2 Transformer blocks

4 A New Energy Function

4.1 The layered structure

5 Cross-Entropy Loss

6 Empirical Results and 6.1 Empirical evaluation of the radius

6.2 Training GPT-2

6.3 Training Vanilla Transformers

7 Conclusion and Acknowledgments


Appendix A. Deferred Tables

Appendix B. Some Properties of the Energy Functions

Appendix C. Deferred Proofs from Section 5

Appendix D. Transformer Details: Using GPT-2 as an Example


References

6.3 Training Vanilla Transformers

We next train vanilla Transformer models on a small amount of high-quality data. The Question-Formation dataset, proposed by McCoy et al. (2020), consists of pairs of English declarative sentences and their corresponding question forms. The dataset contains D = 2M tokens. The sentences are generated from a context-free grammar with a vocabulary of 68 words, and the task is to convert declarative sentences into questions.
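To make the task concrete, here is a minimal sketch of how declarative/question pairs might be encoded as token sequences for such a model. This is illustrative only: the toy sentences, special tokens, and pair format below are assumptions, not the exact preprocessing of McCoy et al. (2020) or the authors.

```python
# Illustrative preprocessing for the Question-Formation task
# (declarative -> question). The special tokens and pair layout
# here are assumptions, not the paper's exact pipeline.

# Toy sentence pairs in the style of the task.
pairs = [
    ("the bird can sing", "can the bird sing"),
    ("my dog does bark", "does my dog bark"),
]

# Build a word-level vocabulary (the real dataset has 68 words).
special = ["<pad>", "<sep>", "<eos>"]
words = sorted({w for d, q in pairs for w in (d + " " + q).split()})
vocab = {w: i for i, w in enumerate(special + words)}

def encode(decl: str, ques: str) -> list:
    """Encode one pair as a single sequence: decl <sep> ques <eos>,
    suitable for next-token (autoregressive) training."""
    tokens = decl.split() + ["<sep>"] + ques.split() + ["<eos>"]
    return [vocab[t] for t in tokens]

sequences = [encode(d, q) for d, q in pairs]
```

A vanilla Transformer trained autoregressively on such sequences learns to emit the question continuation after the `<sep>` token, which is what the cross-entropy loss in Section 5 measures.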



Authors:

(1) Xueyan Niu, Theory Laboratory, Central Research Institute, 2012 Laboratories, Huawei Technologies Co., Ltd.;

(2) Bo Bai ([email protected]);

(3) Lei Deng ([email protected]);

(4) Wei Han ([email protected]).


This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license.

