Authors:
(1) Yuxin Meng;
(2) Feng Gao;
(3) Eric Rigall;
(4) Ran Dong;
(5) Junyu Dong;
(6) Qian Du.
[1] A. F. Shchepetkin and J. C. McWilliams, “The regional oceanic modeling system (ROMS): A split-explicit, free-surface, topography-following-coordinate oceanic model,” Ocean Modelling, vol. 9, no. 4, pp. 347–404, 2005.
[2] R. Jacob, C. Schafer, I. Foster, et al. “Computational design and performance of the Fast Ocean Atmosphere Model,” Proceedings of International Conference on Computational Science, 2001, pp. 175–184.
[3] C. Chen, R. C. Beardsley, G. Cowles, et al. “An unstructured grid, finite-volume coastal ocean model: FVCOM system,” Oceanography, vol. 19, no. 1, pp. 78–89, 2006.
[4] E. P. Chassignet, H. E. Hurlburt, O. M. Smedstad, et al. “The HYCOM (hybrid coordinate ocean model) data assimilative system,” Journal of Marine Systems, vol. 65, no. 1, pp. 60–83, 2007.
[5] Y. LeCun, Y. Bengio, G. Hinton. “Deep learning,” Nature, vol. 521, pp. 436–444, 2015.
[6] P. C. Bermant, M. M. Bronstein, R. J. Wood, et al. “Deep machine learning techniques for the detection and classification of sperm whale bioacoustics,” Scientific Reports, vol. 9, no. 1, pp. 1–10, 2019.
[7] V. Allken, N. O. Handegard, S. Rosen, et al. “Fish species identification using a convolutional neural network trained on synthetic data,” ICES Journal of Marine Science, vol. 76, no. 1, pp. 342–349, 2019.
[8] E. Lima, X. Sun, J. Dong, et al. “Learning and transferring convolutional neural network knowledge to ocean front recognition,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 3, pp. 354–358, 2017.
[9] L. Xu, X. Wang, X. Wang, “Shipwrecks detection based on deep generation network and transfer learning with small amount of sonar images,” Proceedings of IEEE Data Driven Control and Learning Systems Conference (DDCLS), 2019, pp. 638–643.
[10] Y. Ren, X. Li, W. Zhang, “A data-driven deep learning model for weekly sea ice concentration prediction of the Pan-Arctic during the melting season,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–19, 2022.
[11] M. Reichstein, G. Camps-Valls, B. Stevens, et al. “Deep learning and process understanding for data-driven Earth system science,” Nature, vol. 566, no. 7743, pp. 195–204, 2019.
[12] N. D. Brenowitz, C. S. Bretherton. “Prognostic validation of a neural network unified physics parameterization,” Geophysical Research Letters, vol. 45, no. 12, pp. 6289–6298, 2018.
[13] O. Pannekoucke and R. Fablet. “PDE-NetGen 1.0: from symbolic partial differential equation (PDE) representations of physical processes to trainable neural network representations,” Geoscientific Model Development, vol. 13, no. 7, pp. 3373–3382, 2020.
[14] K. Patil, M. C. Deo, M. Ravichandran. “Prediction of sea surface temperature by combining numerical and neural techniques,” Journal of Atmospheric and Oceanic Technology, vol. 33, no. 8, pp. 1715–1726, 2016.
[15] Y. G. Ham, J. H. Kim, J. J. Luo. “Deep learning for multi-year ENSO forecasts,” Nature, vol. 573, no. 7775, pp. 568–572, 2019.
[16] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al. “Generative adversarial nets,” Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2014.
[17] L. Yang, D. Zhang, G. E. Karniadakis. “Physics-informed generative adversarial networks for stochastic differential equations,” SIAM Journal on Scientific Computing, vol. 42, no. 1, pp. A292–A317, 2020.
[18] B. Lütjens, B. Leshchinskiy, C. Requena-Mesa, et al. “Physics-informed GANs for coastal flood visualization,” arXiv preprint arXiv:2010.08103, 2020.
[19] Q. Zheng, L. Zeng, G. E. Karniadakis, “Physics-informed semantic inpainting: Application to geostatistical modeling,” Journal of Computational Physics, vol. 419, pp. 1–10, 2020.
[20] X. Shi, Z. Chen, H. Wang, et al. “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2015.
[21] J. Gu, Z. Wang, J. Kuen, et al. “Recent advances in convolutional neural networks,” Pattern Recognition, vol. 77, pp. 354–377, 2018.
[22] H. Ge, Z. Yan, W. Yu, et al. “An attention mechanism based convolutional LSTM network for video action recognition,” Multimedia Tools and Applications, vol. 78, no. 14, pp. 20533–20556, 2019.
[23] W. Che and S. Peng, “Convolutional LSTM Networks and RGB-D Video for Human Motion Recognition,” Proceedings of IEEE Information Technology and Mechatronics Engineering Conference (ITOEC), 2018, pp. 951–955.
[24] I. D. Lins, M. Araujo, et al. “Prediction of sea surface temperature in the tropical Atlantic by support vector machines,” Computational Statistics and Data Analysis, vol. 61, pp. 187–198, 2013.
[25] K. Patil and M. C. Deo, “Basin-scale prediction of sea surface temperature with artificial neural networks,” Journal of Atmospheric and Oceanic Technology, vol. 35, no. 7, pp. 1441–1455, 2018.
[26] Q. Zhang, H. Wang, J. Dong, et al. “Prediction of sea surface temperature using long short-term memory,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 10, pp. 1745–1749, 2017.
[27] Y. Yang, J. Dong, X. Sun, et al. “A CFCC-LSTM model for sea surface temperature prediction,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 2, pp. 207–211, 2017.
[28] K. Patil, M. C. Deo, “Prediction of daily sea surface temperature using efficient neural networks,” Ocean Dynamics, vol. 67, no. 3, pp. 357–368, 2017.
[29] S. Ouala, C. Herzet, R. Fablet, “Sea surface temperature prediction and reconstruction using patch-level neural network representations,” Proceedings of IEEE International Geoscience and Remote Sensing Symposium, 2018, pp. 5628–5631.
[30] C. Shorten, T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” Journal of Big Data, vol. 6, no. 1, pp. 1–48, 2019.
[31] H. Bagherinezhad, M. Horton, M. Rastegari, et al. “Label refinery: Improving ImageNet classification through label progression,” arXiv preprint arXiv:1805.02641, 2018.
[32] K. Chatfield, K. Simonyan, A. Vedaldi, et al. “Return of the devil in the details: Delving deep into convolutional nets,” Proceedings of the British Machine Vision Conference (BMVC), 2014.
[33] A. Jurio, M. Pagola, M. Galar, et al. “A comparison study of different color spaces in clustering based image segmentation,” Proceedings of International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2010, pp. 532–541.
[34] Q. You, J. Luo, H. Jin, et al. “Robust image sentiment analysis using progressively trained and domain transferred deep networks,” Proceedings of the AAAI Conference on Artificial Intelligence, 2015, pp. 381–388.
[35] Z. Zhong, L. Zheng, G. Kang, et al. “Random erasing data augmentation,” Proceedings of the AAAI Conference on Artificial Intelligence, 2020, pp. 13001–13008.
[36] T. DeVries, G. W. Taylor, “Improved regularization of convolutional neural networks with Cutout,” arXiv preprint arXiv:1708.04552, 2017.
[37] A. Mikołajczyk, M. Grochowski, “Data augmentation for improving deep learning in image classification problem,” Proceedings of International Interdisciplinary PhD Workshop (IIPhDW), 2018, pp. 117–122.
[38] S. M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, “DeepFool: A simple and accurate method to fool deep neural networks,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574–2582.
[39] J. Su, D. V. Vargas, K. Sakurai, “One pixel attack for fooling deep neural networks,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 5, pp. 828–841, 2019.
[40] M. Zając, K. Żołna, N. Rostamzadeh, et al. “Adversarial framing for image and video classification,” Proceedings of the AAAI Conference on Artificial Intelligence, 2019, pp. 10077–10078.
[41] S. Li, Y. Chen, Y. Peng, et al. “Learning more robust features with adversarial training,” arXiv preprint arXiv:1804.07757, 2018.
[42] L. A. Gatys, A. S. Ecker, M. Bethge, “A neural algorithm of artistic style,” Journal of Vision, vol. 16, no. 12, 2016.
[43] D. Ulyanov, A. Vedaldi, V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2016.
[44] P. Jackson, A. Abarghouei, S. Bonner, et al. “Style augmentation: Data augmentation via style randomization,” Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, 2019, pp. 83–92.
[45] J. Tobin, R. Fong, A. Ray, et al. “Domain randomization for transferring deep neural networks from simulation to the real world,” Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 23–30.
[46] C. Summers and M. Dinneen, “Improved mixed-example data augmentation,” Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2019, pp. 1262–1270.
[47] D. Liang, F. Yang, T. Zhang, et al. “Understanding Mixup training methods,” IEEE Access, vol. 6, pp. 58774–58783, 2018.
[48] R. Takahashi, T. Matsubara, K. Uehara, “Augmentation using random image cropping and patching for deep CNNs,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 9, pp. 2917–2931, 2019.
[49] T. Konno and M. Iwazume, “Icing on the cake: An easy and quick post-learning method you can try after deep learning,” arXiv preprint arXiv:1807.06540, 2018.
[50] T. DeVries and G. W. Taylor, “Dataset augmentation in feature space,” arXiv preprint arXiv:1702.05538, 2017.
[51] F. Moreno-Barea, F. Strazzera, J. Jerez, et al. “Forward noise adjustment scheme for data augmentation,” Proceedings of IEEE Symposium Series on Computational Intelligence (SSCI), 2018, pp. 728–734.
[52] M. Frid-Adar, I. Diamant, E. Klang, et al. “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification,” Neurocomputing, vol. 321, pp. 321–331, 2018.
[53] J. Zhu, Y. Shen, D. Zhao, et al. “In-domain GAN inversion for real image editing,” Proceedings of European Conference on Computer Vision (ECCV), 2020, pp. 592–608.
[54] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” Proceedings of International Conference on Learning Representations (ICLR), 2015, pp. 1–14.
[55] GHRSST data, https://www.ghrsst.org (accessed July 3, 2022).
[56] HYCOM data, https://www.hycom.org (accessed July 3, 2022).
[57] J. Y. Zhu, P. Krähenbühl, E. Shechtman, et al. “Generative visual manipulation on the natural image manifold,” Proceedings of European Conference on Computer Vision (ECCV), 2016, pp. 597–613.
[58] A. Larsen, S. Sønderby, H. Larochelle, et al. “Autoencoding beyond pixels using a learned similarity metric,” Proceedings of International Conference on Machine Learning (ICML), 2016, pp. 1558–1566.
This paper is available on arXiv under a CC 4.0 license.