The Abstraction and Reasoning Corpus: C. Limitations and Future Work by @escholar



Too Long; Didn't Read

State-of-the-art machine learning models struggle with generalization, which can only be achieved by properly accounting for core knowledge priors.

This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) Mattia Atzeni, EPFL, Switzerland and [email protected];

(2) Mrinmaya Sachan, ETH Zurich, Switzerland;

(3) Andreas Loukas, Prescient Design, Switzerland.

C. Limitations and Future Work

Although we believe our results are interesting and promising for learning group actions with neural networks, we would like to point out some limitations of our approach. First, our method is limited to actions of the symmetry group of the hypercubic lattice, and it does not extend immediately to other groups. For instance, although permutation matrices are still convolutions of the identity and can therefore be generated by a CNN, providing an architecture with predefined kernels that can compute any permutation matrix is not feasible. Second, the model is hard to fine-tune: we noticed that, once the gates of the CNN have been trained, the model struggles to adapt to different actions.
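
To make the first limitation concrete, here is a minimal NumPy sketch (purely illustrative; the function names and the 5x5 example are ours, not part of the paper). It shows that a cyclic-shift permutation matrix is a circular convolution of the identity with a one-hot kernel, whereas a generic permutation is not reproduced by any single such predefined kernel, and covering all n! permutations with predefined kernels is infeasible.

```python
import numpy as np

def cyclic_shift_kernel(shift: int, size: int) -> np.ndarray:
    """One-hot 1D kernel whose circular convolution with a row
    implements a cyclic shift by `shift` positions."""
    k = np.zeros(size)
    k[shift % size] = 1.0
    return k

def circular_conv_rows(matrix: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Circularly convolve each row of `matrix` with `kernel`
    (pointwise product in Fourier space)."""
    return np.real(np.fft.ifft(np.fft.fft(matrix, axis=1)
                               * np.fft.fft(kernel)[None, :], axis=1))

n = 5
identity = np.eye(n)

# Convolving the identity with a one-hot kernel yields a cyclic-shift
# permutation matrix (a circulant matrix).
shift_by_2 = circular_conv_rows(identity, cyclic_shift_kernel(2, n))
assert np.allclose(shift_by_2, np.roll(identity, 2, axis=1))

# A generic permutation, however, is not circulant: no single predefined
# one-hot kernel reproduces it, and enumerating kernels for all n!
# permutations is infeasible -- the limitation discussed above.
generic_perm = np.eye(n)[[3, 0, 4, 1, 2]]
print(np.allclose(generic_perm,
                  circular_conv_rows(identity, cyclic_shift_kernel(1, n))))  # False
```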


We believe that both limitations can be addressed while keeping the same overall idea of modulating attention weights with soft attention masks, possibly under a different parametrization of the masks. Future work will focus on this research direction and on extending our approach to cover a wider set of ARC tasks.
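
As a rough illustration of what modulating attention weights with soft masks can look like, the following PyTorch sketch gates standard scaled dot-product attention weights with a sigmoid mask and renormalizes them. All names, shapes, and the renormalization step are assumptions made for illustration; they do not reproduce the paper's actual parametrization of the masks.

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, mask_logits):
    """Scaled dot-product attention whose weights are modulated by a
    learned soft mask. `mask_logits` has shape (num_queries, num_keys);
    its sigmoid acts as a gate on the attention weights. This is only a
    sketch of the general idea, not the paper's exact formulation."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # raw attention logits
    weights = F.softmax(scores, dim=-1)           # standard attention weights
    soft_mask = torch.sigmoid(mask_logits)        # gate values in (0, 1)
    gated = weights * soft_mask                   # modulate, then renormalize
    gated = gated / gated.sum(dim=-1, keepdim=True).clamp_min(1e-9)
    return gated @ v

# Hypothetical usage with random tensors (names and shapes are illustrative).
q = torch.randn(4, 16)   # 4 queries, dimension 16
k = torch.randn(6, 16)   # 6 keys
v = torch.randn(6, 16)
mask_logits = torch.zeros(4, 6, requires_grad=True)  # e.g. produced by a gated CNN
out = masked_attention(q, k, v, mask_logits)
print(out.shape)  # torch.Size([4, 16])
```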