This paper is available on arXiv under a CC 4.0 license.
Authors:
(1) Mattia Atzeni, EPFL, Switzerland and [email protected];
(2) Mrinmaya Sachan, ETH Zurich, Switzerland;
(3) Andreas Loukas, Prescient Design, Switzerland.
This section provides the theoretical grounding for LATFORMER, our approach to learning the transformations of lattice symmetry groups in the form of attention masks. It defines attention masks and explains how they can be leveraged to incorporate geometric priors when solving group-action learning problems on sequences and images.
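As a minimal sketch of how such masks interact with attention, the snippet below gates the softmax attention weights with a binary mask. The elementwise-product form and the NumPy implementation are illustrative assumptions, not the exact mechanism used in LATFORMER.

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention whose weights are gated by a mask.

    Q, K, V: (n, d) arrays; mask: (n, n) array with entries in [0, 1].
    Multiplying the softmax weights by `mask` is one simple way to inject
    a geometric prior; LATFORMER's exact combination rule may differ.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # raw attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    weights = weights * mask                        # apply the attention mask
    return weights @ V
```

In practice the gated weights could be renormalized after masking; we omit this step to keep the sketch minimal.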
To facilitate the learning of lattice symmetries, we need a way to parameterize the set of feasible group elements. Fortunately, as the following theorem makes precise, the attention masks considered in Theorem 3.1 can all be expressed under the same general formulation.
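Since the general formulation is stated in the theorem rather than reproduced here, the sketch below only illustrates the simplest case as an assumption: translations on a cyclic 1-D lattice, where each group element corresponds to a shifted-identity (permutation) mask and composition of translations corresponds to multiplying masks. The helper `translation_mask` is ours, for illustration only.

```python
import numpy as np

def translation_mask(n, t):
    """Attention mask for a cyclic translation by t positions on a 1-D
    lattice with n sites: a shifted identity, i.e. a permutation matrix.
    Rotations and reflections of a 2-D lattice admit analogous
    permutation masks."""
    return np.roll(np.eye(n, dtype=np.float32), shift=t, axis=1)

# Composing two translations corresponds to multiplying their masks.
assert np.allclose(translation_mask(8, 2) @ translation_mask(8, 3),
                   translation_mask(8, 5))
```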
Although not strictly a symmetry operation, scaling transformations of the lattice can also be defined in terms of attention masks under the general formulation of Theorem 3.2, as reported in Table 1. For completeness, we therefore consider scaling transformations in our experiments as well.
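Purely to illustrate how scaling fits the same mask-based view (Table 1 is not reproduced in this text), one plausible mask for integer scaling of a 1-D lattice is a nearest-neighbour selection matrix; the concrete form below is our assumption, not the paper's definition.

```python
import numpy as np

def scaling_mask(n, s):
    """Illustrative mask for scaling a 1-D lattice of n sites by an
    integer factor s, as nearest-neighbour selection: output position i
    attends to input position floor(i / s). The formulation in Table 1
    may differ; this only shows that scaling admits a mask of the same
    general shape."""
    mask = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        mask[i, min(i // s, n - 1)] = 1.0
    return mask
```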
Notice that Theorem 3.2 also gives us a way to compute the attention masks: in particular, we can express each attention mask as a convolution applied to the identity matrix, as stated below.
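As a hedged numerical check of this idea, the snippet below convolves the identity matrix with a shifted delta kernel under circular boundary conditions and recovers the mask of a unit translation. The use of SciPy and of a 3x3 kernel is our choice for the sketch, not the paper's construction.

```python
import numpy as np
from scipy.signal import convolve2d

n = 8
identity = np.eye(n, dtype=np.float32)

# A 3x3 delta kernel shifted one column to the right of the centre.
kernel = np.zeros((3, 3), dtype=np.float32)
kernel[1, 2] = 1.0

# Circular 2-D convolution of the identity with the delta kernel yields
# the mask of a cyclic translation by one position.
mask = convolve2d(identity, kernel, mode="same", boundary="wrap")
assert np.allclose(mask, np.roll(identity, shift=1, axis=1))
```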