Experimental Proof: MENT-Flow Achieves Consistency with MENT Solutions for Beam Reconstruction

Written by tomography | Published 2025/10/07
Tech Story Tags: machine-learning | 6d-phase-space-tomography | normalizing-flows | maximum-entropy-reconstruction | phase-space-distributions | generative-models | tomographic-reconstruction | beam-reconstruction

TL;DR: This article describes numerical experiments validating MENT-Flow for phase space tomography.

I. Introduction

II. Maximum Entropy Tomography

III. Numerical Experiments

IV. Conclusion and Extensions

V. Acknowledgments and References

III. NUMERICAL EXPERIMENTS

The following numerical experiments demonstrate that MENT-Flow solutions approach MENT solutions in 2D phase space. Subsequent experiments show that MENT-Flow can fit complicated 6D phase space distributions to large measurement sets in reasonable time and that entropic regularization keeps the reconstruction close to the prior. To simplify the examples, we focused on linear phase space transformations rather than more realistic accelerator models. We also generally used ground-truth distributions without linear interplane correlations, highlighting nonlinear features.[4] We chose to maximize the entropy relative to a Gaussian prior.[5] The flow’s base distribution is also a Gaussian, so the entropy penalty pushes the flow toward an identity or scaling transformation.
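To make the regularizer concrete: maximizing entropy relative to a Gaussian prior is equivalent to minimizing the KL divergence from the reconstructed distribution to that prior, and this quantity can be estimated by Monte Carlo from flow samples. The PyTorch sketch below is purely illustrative; the standard-normal prior and the function name are our assumptions, not the authors' code.

```python
import math
import torch

def relative_entropy_estimate(x: torch.Tensor, log_q: torch.Tensor) -> torch.Tensor:
    """Monte Carlo estimate of -KL(q || p) for a standard-normal prior p,
    given samples x ~ q and their log-densities log_q under the flow.
    Maximizing this quantity pulls the flow toward the Gaussian prior."""
    d = x.shape[-1]
    log_p = -0.5 * (x**2).sum(dim=-1) - 0.5 * d * math.log(2.0 * math.pi)
    return (log_p - log_q).mean()
```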

Our normalizing flow architecture is described in the previous section. The flow consists of five layers. Each layer is an autoregressive transformation in which the 1D transformation along each dimension is a rational-quadratic spline with 20 knots; the function values and derivatives at the knots are parameterized by a masked neural network with three hidden layers of 64 units each. Note that increasing the model size should not lead to overfitting, since we train via maximum entropy, not maximum likelihood.
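For readers who want a concrete starting point, this architecture maps closely onto off-the-shelf neural spline flows. The snippet below is a minimal sketch using the zuko library; the class name, the bins-for-knots substitution, and the sampling calls are our assumptions and may differ from the authors' implementation.

```python
import zuko

# Minimal sketch of the flow described above, using zuko's neural spline flow.
# Assumption: bins=20 stands in for "20 knots" (zuko counts spline bins).
flow = zuko.flows.NSF(
    features=6,                    # 6D phase space coordinates
    transforms=5,                  # five autoregressive layers
    bins=20,                       # rational-quadratic spline resolution
    hidden_features=(64, 64, 64),  # masked MLP: three hidden layers of 64 units
)

# The flow's base distribution is a standard Gaussian; differentiable samples
# and their log-densities feed the entropy estimate during training.
dist = flow()
x, log_q = dist.rsample_and_log_prob((4096,))
```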

We compare MENT-Flow to MENT. Our MENT implementation uses linear interpolation to evaluate the Lagrange functions at any location on the projection axes, and we simulate the projections by numerical integration. We also compare to an unregularized neural network (NN) whose only aim is to fit the data. The NN is a standard fully connected feedforward network with three hidden layers of 32 units each and tanh activation functions.
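The NN baseline is simple enough to write down directly. A minimal PyTorch sketch follows; treating the network as a generative map from 6D base samples to 6D phase space coordinates is our assumption for illustration, and the layer dimensions are taken from the description above.

```python
import torch.nn as nn

# Unregularized NN baseline (hedged sketch): a plain feedforward network with
# three hidden layers of 32 units and tanh activations, mapping 6D base
# samples to 6D phase space coordinates.
nn_model = nn.Sequential(
    nn.Linear(6, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 6),
)
```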

Authors:

(1) Austin Hoover, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830, USA ([email protected]);

(2) Jonathan C. Wong, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China.


This paper is available on arXiv under a CC BY 4.0 DEED license.

