This paper is available on arxiv under CC 4.0 license.

Authors:
(1) Thomas Pethick, EPFL (LIONS), thomas.pethick@epfl.ch;
(2) Wanyun Xie, EPFL (LIONS), wanyun.xie@epfl.ch;
(3) Volkan Cevher, EPFL (LIONS), volkan.cevher@epfl.ch.

Table of Links
- Abstract & Introduction
- Related work
- Setup
- Inexact Krasnosel'skiĭ–Mann iterations
- Approximating the resolvent
- Last iterate under cohypomonotonicity
- Analysis of Lookahead
- Experiments
- Conclusion & limitations
- Acknowledgements & References

3 Setup

Most relevant in the context of GAN training is that (1) includes constrained minimax problems.

Example 3.1. Consider the following minimax problem

We will rely on the following assumptions (see Appendix B for any missing definitions). In problem (1):

Assumption 3.2.

Assumption 3.3.

Assumption 3.2(iii) is also known as |ρ|-cohypomonotonicity when ρ < 0, which allows for increasing nonmonotonicity as |ρ| grows. See Appendix B.1 for the relationship with the weak MVI.

Remark. When only stochastic feedback F̂σ(·, ξ) is available, we make the following classical assumptions.
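To make the cohypomonotonicity condition concrete, the following is a minimal numerical sketch (not from the paper): it checks ρ-comonotonicity, i.e. ⟨F(z) − F(z′), z − z′⟩ ≥ ρ‖F(z) − F(z′)‖², for an illustrative linear operator F(z) = Az. The matrix A and the shift a are assumptions chosen so that F is nonmonotone but |ρ|-cohypomonotone with ρ < 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice (hypothetical, not from the paper): a rotation
# perturbed by a negative diagonal shift, which makes F nonmonotone.
a = -0.2
A = np.array([[a, 1.0],
              [-1.0, a]])
F = lambda z: A @ z

# For this linear F, <F(z) - F(z'), z - z'> = a * ||z - z'||^2 and
# ||F(z) - F(z')||^2 = (a^2 + 1) * ||z - z'||^2, so F is rho-comonotone with
rho = a / (a**2 + 1)  # rho < 0, i.e. |rho|-cohypomonotone

# Sanity-check the inequality on random pairs of points.
for _ in range(1000):
    z, zp = rng.standard_normal(2), rng.standard_normal(2)
    lhs = (F(z) - F(zp)) @ (z - zp)
    rhs = rho * np.linalg.norm(F(z) - F(zp)) ** 2
    assert lhs >= rhs - 1e-9

print("cohypomonotone with rho =", round(rho, 4))
```

Shrinking a toward more negative values makes ρ more negative, matching the remark that larger |ρ| permits more nonmonotonicity.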