
MOMENT: A Family of Open Time-series Foundation Models: Acknowledgments


Table of Links

Abstract and 1. Introduction

2. Related Work

3. Methodology

4. Experimental Setup and Results

5. Conclusion and Future Work


Acknowledgments

Reproducibility statement

Impact statement, and References

Acknowledgments

Funding. This work was partially supported by the National Institutes of Health (NIH) under awards R01HL144692 and 1R01NS124642-01, and also by the U.S. Army Research Office and the U.S. Army Futures Command under Contract No. W911NF-20-D-0002. The content of the information does not necessarily reflect the position or the policy of the government and no official endorsement should be inferred.


Discussions. We would like to express our sincerest gratitude to Barış Kurt, Andrey Kan, Laurent Callot, Gauthier Guinet, Jingchao Ni, and Jonas M. Kübler for insightful discussions regarding the problem setting and experimental design. Their unwavering support was instrumental in the development of MOMENT. We are also thankful to Laurent, Barış, Jingchao, and Andrey for their constructive feedback on the writing of this manuscript. Additionally, we acknowledge the insightful exchanges with Yuyang (Bernie) Wang, Abdul Fatir Ansari, Ingo Guering, Xiyuan Zhang, and Anoop Deoras. Special thanks to Cherie Ho for suggesting a creative and befitting name for our model. Lastly, we would like to thank Cecilia Morales for her insightful comments, especially on the broader impacts of this work, and for helping us proofread this manuscript.


Data. We extend our gratitude to the authors and data curators whose meticulous efforts were instrumental in curating the datasets utilized for both pre-training and evaluation purposes: UCR Time Series Classification Archive (Dau et al., 2018), TSB-UAD Anomaly Benchmark (Paparrizos et al., 2022b), Monash Forecasting Archive (Godahewa et al., 2021), and the long-horizon forecasting datasets (Zhou et al., 2021).


Software and Models. Our training and evaluation library was inspired by Time-Series-Library. We would also like to thank the authors of the following libraries for their implementations: universal-computation, Anomaly-Transformer, VUS, tsad-model-selection, One-Fits-All, and Statsforecast (Garza et al., 2022).


Authors:

(1) Mononito Goswami, Auton Lab, Robotics Institute, Carnegie Mellon University, Pittsburgh, USA ([email protected]);

(2) Konrad Szafer, Auton Lab, Robotics Institute, Carnegie Mellon University, Pittsburgh, USA, with equal contribution, order decided using a random generator;

(3) Arjun Choudhry, Auton Lab, Robotics Institute, Carnegie Mellon University, Pittsburgh, USA, with equal contribution, order decided using a random generator;

(4) Yifu Cai, Auton Lab, Robotics Institute, Carnegie Mellon University, Pittsburgh, USA;

(5) Shuo Li, University of Pennsylvania, Philadelphia, USA;

(6) Artur Dubrawski, Auton Lab, Robotics Institute, Carnegie Mellon University, Pittsburgh, USA.


This paper is available on arxiv under CC BY 4.0 DEED license.

