Table of Links
- Unfolding
- Results
- Appendices
  - A. Conditional DDPM Loss Derivation
  - C. Detector Simulation and Jet Matching
A. Conditional DDPM Loss Derivation
In the proposed conditional DDPM, the forward process is a Markov chain that gradually adds Gaussian noise to the data according to a variance schedule $\beta_1, \ldots, \beta_T$:

$$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t \mathbf{I}\big).$$
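Because the forward chain is Gaussian at every step, $q(x_t \mid x_0)$ has a closed form, so $x_t$ can be sampled directly from $x_0$. A minimal NumPy sketch of that closed-form marginal (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    alphas = 1.0 - betas                      # per-step retention factors
    alpha_bar = np.cumprod(alphas)            # cumulative product up to each step
    eps = rng.standard_normal(x0.shape)       # forward-process Gaussian noise
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

For large $t$, $\bar{\alpha}_t \to 0$ and the sample approaches a standard Gaussian, which is what makes the reverse process a valid generative model.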
Training is performed by optimizing the variational bound on the negative log-likelihood:

$$\mathbb{E}\big[-\log p_\theta(x_0 \mid y)\big] \le \mathbb{E}_q\Big[-\log p(x_T) - \sum_{t \ge 1} \log \frac{p_\theta(x_{t-1} \mid x_t, y)}{q(x_t \mid x_{t-1})}\Big] =: L,$$

where $y$ denotes the conditioning observables.
Following a derivation similar to the one in [13], this loss can be rewritten in terms of KL divergences and the reverse-process posterior $q(x_{t-1} \mid x_t, x_0)$ as

$$L = \mathbb{E}_q\Big[ D_{\mathrm{KL}}\big(q(x_T \mid x_0) \,\|\, p(x_T)\big) + \sum_{t > 1} D_{\mathrm{KL}}\big(q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t, y)\big) - \log p_\theta(x_0 \mid x_1, y) \Big].$$
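In practice, DDPMs are usually trained on the simplified noise-prediction objective of [13], $L_{\mathrm{simple}} = \mathbb{E}_{t, x_0, \epsilon}\,\|\epsilon - \epsilon_\theta(x_t, t, y)\|^2$, which the KL terms above reduce to up to weighting. A hedged sketch of a single Monte-Carlo estimate of that objective, assuming an arbitrary conditional noise-prediction model `eps_model` (the paper's actual network and training loop are not specified here):

```python
import numpy as np

def simplified_loss(eps_model, x0, y, betas, rng):
    """One Monte-Carlo sample of the simplified conditional DDPM loss:
    draw t uniformly, sample x_t ~ q(x_t | x_0), and score the model's
    noise prediction eps_theta(x_t, t, y) against the true noise."""
    T = len(betas)
    alpha_bar = np.cumprod(1.0 - betas)
    t = int(rng.integers(0, T))                  # uniform random timestep
    eps = rng.standard_normal(x0.shape)          # forward-process noise
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    pred = eps_model(xt, t, y)                   # conditional noise prediction
    return float(np.mean((eps - pred) ** 2))     # || eps - eps_theta ||^2
```

In the unfolding setting, `y` would carry the detector-level information that conditions the reverse process.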
Authors:
(1) Camila Pazos, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts;
(2) Shuchin Aeron, Department of Electrical and Computer Engineering, Tufts University, Medford, Massachusetts and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions;
(3) Pierre-Hugues Beauchemin, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions;
(4) Vincent Croft, Leiden Institute for Advanced Computer Science (LIACS), Leiden University, The Netherlands;
(5) Martin Klassen, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts;
(6) Taritree Wongjirad, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions.