Redefining Induction: Multi-Token vs. Next-Token on High-Quality LLM Data

Written by cosmological | Published 2025/07/23
Tech Story Tags: llm-generalization | next-token-task | ai-training | deep-learning-insights | data-impact | induction-capability | high-quality-data | multi-token-prediction

TL;DR: This section shows how training on higher-quality data induces induction capability earlier in training, and that multi-token prediction's advantage on this task diminishes for larger models as feature learning turns it into a next-token prediction problem.
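For readers unfamiliar with the setup, the sketch below illustrates the general idea behind multi-token prediction as described in the paper: a shared trunk produces hidden states, and several independent output heads each predict one of the next n tokens, with a cross-entropy loss per offset. This is a minimal illustrative sketch, not the authors' implementation; the class names, the number of heads, and the loss averaging are assumptions made for clarity.

```python
# Minimal sketch (not the authors' code) of multi-token prediction:
# a shared trunk feeds n independent output heads, one per future offset.
# Names (MultiTokenHead, n_future, d_model, vocab_size) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTokenHead(nn.Module):
    def __init__(self, d_model: int, vocab_size: int, n_future: int = 4):
        super().__init__()
        # One independent linear head per future position t+1 ... t+n_future.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, hidden: torch.Tensor) -> list[torch.Tensor]:
        # hidden: (batch, seq_len, d_model) from a shared transformer trunk.
        return [head(hidden) for head in self.heads]


def multi_token_loss(logits_per_head: list[torch.Tensor], tokens: torch.Tensor) -> torch.Tensor:
    # tokens: (batch, seq_len). Head k at position t predicts token t + k + 1.
    losses = []
    for k, logits in enumerate(logits_per_head):
        shift = k + 1
        # Align predictions with their targets; drop positions with no target.
        pred = logits[:, : tokens.size(1) - shift, :]
        target = tokens[:, shift:]
        losses.append(
            F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))
        )
    # Ordinary next-token training is the special case n_future = 1 (only losses[0]).
    return sum(losses) / len(losses)
```

In this view, the paper's observation is that as models grow and feature learning improves, the extra heads add less on induction-style tasks, because the information needed to copy a previously seen continuation is already captured by the next-token head.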

Abstract and 1. Introduction

2. Method

3. Experiments on real data

4. Ablations on synthetic data

5. Why does it work? Some speculation

6. Related work

7. Conclusion, Impact statement, Environmental impact, Acknowledgements and References

A. Additional results on self-speculative decoding

B. Alternative architectures

C. Training speeds

D. Finetuning

E. Additional results on model scaling behavior

F. Details on CodeContests finetuning

G. Additional results on natural language benchmarks

H. Additional results on abstractive text summarization

I. Additional results on mathematical reasoning in natural language

J. Additional results on induction learning

K. Additional results on algorithmic reasoning

L. Additional intuitions on multi-token prediction

M. Training hyperparameters

J. Additional results on induction learning

Authors:

(1) Fabian Gloeckle, FAIR at Meta, CERMICS Ecole des Ponts ParisTech and Equal contribution;

(2) Badr Youbi Idrissi, FAIR at Meta, LISN Université Paris-Saclay and Equal contribution;

(3) Baptiste Rozière, FAIR at Meta;

(4) David Lopez-Paz, FAIR at Meta and last author;

(5) Gabriel Synnaeve, FAIR at Meta and last author.


This paper is available on arXiv under a CC BY 4.0 DEED license.

