
Unveiling the Power of Self-Attention for Shipping Cost Prediction: References

Too Long; Didn't Read

A new AI model, the Rate Card Transformer, analyzes package details (size, carrier, etc.) to predict shipping costs more accurately.

Authors:

(1) P Aditya Sreekar, Amazon; these authors contributed equally to this work {[email protected]};

(2) Sahil Verm, Amazon; these authors contributed equally to this work {[email protected]};

(3) Varun Madhavan, Indian Institute of Technology, Kharagpur. Work done during internship at Amazon {[email protected]};

(4) Abhishek Persad, Amazon {[email protected]}.

References

Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. CoRR, abs/1910.02600, 2019. URL http://arxiv.org/abs/1910.02600.


Sercan Ö. Arık and Tomas Pfister. TabNet: Attentive interpretable tabular learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6679–6687, 2021.


Christopher M Bishop. Pattern Recognition and Machine Learning, chapter 2, pages 36–43. Springer, 2006.


Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.


Jintai Chen, Kuanlun Liao, Yao Wan, Danny Z Chen, and Jian Wu. Danets: Deep abstract networks for tabular data classification and regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3930–3938, 2022.


Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016.


Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.


Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021.


Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. Autogluon-tabular: Robust and accurate automl for structured data, 2020.


William Falcon and The PyTorch Lightning team. PyTorch Lightning, 3 2019. URL https://github.com/Lightning-AI/lightning.


Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232, 2001.


Yuan Gong, Yu-An Chung, and James Glass. AST: Audio Spectrogram Transformer. In Proc. Interspeech 2021, pages 571–575, 2021. doi: 10.21437/Interspeech.2021-698.


Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. Advances in Neural Information Processing Systems, 34:18932–18943, 2021.


Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The elements of statistical learning: data mining, inference, and prediction. Springer Science & Business Media, 2009.


Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, and Rahul Mazumder. The tree ensemble layer: Differentiability meets conditional computation. In International Conference on Machine Learning, pages 4138–4148. PMLR, 2020.


Xin Huang, Ashish Khetan, Milan Cvitkovic, and Zohar Karnin. Tabtransformer: Tabular data modeling using contextual embeddings. arXiv preprint arXiv:2012.06678, 2020.


Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. Advances in neural information processing systems, 30:3146–3154, 2017.


Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.


Daniele Micci-Barreca. A preprocessing scheme for high-cardinality categorical attributes in classification and prediction problems. SIGKDD Explor. Newsl., 3(1):27–32, July 2001. ISSN 1931-0145. doi: 10.1145/507533.507538. URL https://doi.org/10.1145/507533.507538.


Sergei Popov, Stanislav Morozov, and Artem Babenko. Neural oblivious decision ensembles for deep learning on tabular data. arXiv preprint arXiv:1909.06312, 2019.


Liudmila Prokhorenkova, Gleb Gusev, Alexey Vorobev, Anna Dorogush, and Andrey Gulin. Catboost: unbiased boosting with categorical features. Advances in Neural Information Processing Systems, 31:6638–6648, 2018.


Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Goldstein. Saint: Improved neural networks for tabular data via row attention and contrastive pre-training. arXiv preprint arXiv:2106.01342, 2021.


Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.


This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license.