
HDR or SDR? A Study of Scaled and Compressed Videos: Conclusion, Acknowledgment, and References


Too Long; Didn't Read

Although HDR is conventionally expected to deliver better quality than SDR, this paper finds that viewers’ preferences depend heavily on the display device.

Authors:

(1) Joshua P. Ebenezer, Student Member, IEEE, Laboratory for Image and Video Engineering, The University of Texas at Austin, Austin, TX, 78712, USA, contributed equally to this work (e-mail: [email protected]);

(2) Zaixi Shang, Student Member, IEEE, Laboratory for Image and Video Engineering, The University of Texas at Austin, Austin, TX, 78712, USA, contributed equally to this work;

(3) Yixu Chen, Amazon Prime Video;

(4) Yongjun Wu, Amazon Prime Video;

(5) Hai Wei, Amazon Prime Video;

(6) Sriram Sethuraman, Amazon Prime Video;

(7) Alan C. Bovik, Fellow, IEEE, Laboratory for Image and Video Engineering, The University of Texas at Austin, Austin, TX, 78712, USA.

VI. CONCLUSION

We presented the first-ever study comparing HDR and SDR versions of the same contents, encoded at different bitrates and resolutions and viewed on different display devices. Our study shows that despite HDR’s theoretical advantages over SDR, its perceived quality in practice depends heavily on the display device. We also evaluated several NR and FR VQA algorithms on the new database, and presented a novel NR VQA algorithm called HDRPatchMAX that exceeds the current state of the art on this database. We hope that this work spurs research on modeling display devices in VQA algorithms, as well as on the design of optimal bitrate ladders for streaming.
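For context on the evaluation mentioned above: VQA algorithms are conventionally scored by correlating their predictions with subjective mean opinion scores (MOS), typically via Spearman’s rank correlation (SRCC) and, after a nonlinear mapping such as the logistic fit recommended in [32], Pearson’s linear correlation (PLCC). The sketch below illustrates this standard protocol in Python; the MOS and prediction arrays are hypothetical placeholders, and the four-parameter logistic is one common choice rather than necessarily the exact mapping used in this paper.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr
from scipy.optimize import curve_fit

# Hypothetical placeholder data: subjective mean opinion scores (MOS)
# and a VQA model's raw predictions for the same set of videos.
mos  = np.array([34.2, 41.0, 51.8, 55.3, 62.5, 70.1, 76.8, 81.4])
pred = np.array([0.21, 0.33, 0.45, 0.49, 0.58, 0.66, 0.74, 0.83])

# SRCC: rank correlation, insensitive to any monotonic mapping.
srcc, _ = spearmanr(pred, mos)

# Four-parameter logistic mapping from predictions to the MOS scale,
# a common choice before computing PLCC (cf. [32]).
def logistic4(x, b1, b2, b3, b4):
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

params, _ = curve_fit(
    logistic4, pred, mos,
    p0=[mos.max(), mos.min(), pred.mean(), 0.5], maxfev=20000)
plcc, _ = pearsonr(logistic4(pred, *params), mos)

print(f"SRCC = {srcc:.3f}, PLCC = {plcc:.3f}")
```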

ACKNOWLEDGMENT

This research was sponsored by a grant from Amazon.com, Inc., and by National Science Foundation grant number 2019844 for the AI Institute for Foundations of Machine Learning (IFML). The authors also thank the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing the HPC resources that contributed to the research results reported in this paper. URL: http://www.tacc.utexas.edu.

REFERENCES

[1] ITU, “BT.709: Parameter values for the HDTV standards for production and international programme exchange,” Intl. Telecomm. Union, Tech. Rep., 2011.


[2] Z. Shang, J. P. Ebenezer, Y. Wu, H. Wei, S. Sethuraman, and A. C. Bovik, “Study of the subjective and objective quality of high motion live streaming videos,” IEEE Trans. Image Process., vol. 31, pp. 1027–1041, 2022.


[3] D. Y. Lee, S. Paul, C. G. Bampis, H. Ko, J. Kim, S. Y. Jeong, B. Homan, and A. C. Bovik, “A subjective and objective study of space-time subsampled video quality,” arXiv preprint arXiv:2102.00088, 2021.


[4] P. C. Madhusudana, X. Yu, N. Birkbeck, Y. Wang, B. Adsumilli, and A. C. Bovik, “Subjective and objective quality assessment of high frame rate videos,” IEEE Access, vol. 9, pp. 108069–108082, 2021.


[5] R. R. R. Rao, S. Göring, W. Robitza, B. Feiten, and A. Raake, “AVT-VQDB-UHD-1: A large scale video quality database for UHD-1,” in 2019 IEEE Intl. Symposium on Multimedia (ISM). IEEE, 2019, pp. 17–177.


[6] J. P. Ebenezer, Y. Chen, Y. Wu, H. Wei, and S. Sethuraman, “Subjective and objective quality assessment of high-motion sports videos at low bitrates,” in 2022 IEEE Intl. Conf. on Image Process. (ICIP), 2022, pp. 521–525.


[7] V. Hosu, F. Hahn, M. Jenadeleh, H. Lin, H. Men, T. Szirányi, S. Li, and D. Saupe, “The Konstanz natural video database (KoNViD-1k),” in Int. Conf. Quality of Multimedia Experience, 2017, pp. 1–6.


[8] Y. Wang, S. Inguva, and B. Adsumilli, “YouTube UGC dataset for video compression research,” in IEEE Int. Workshop Multimed. Signal Process. IEEE, 2019, pp. 1–5.


[9] Z. Ying, M. Mandal, D. Ghadiyaram, and A. Bovik, “Patch-VQ: ‘patching up’ the video quality problem,” in IEEE Conf. Comp. Vision Pattern Recognit., 2021, pp. 14014–14024.


[10] Z. Shang, J. P. Ebenezer, A. C. Bovik, Y. Wu, H. Wei, and S. Sethuraman, “Subjective assessment of high dynamic range videos under different ambient conditions,” in IEEE Intl. Conf. Image Process., 2022, pp. 786–790.


[11] Z. Shang, J. P. Ebenezer, Y. Wu, H. Wei, S. Sethuraman, and A. C. Bovik, “A study of subjective and objective quality assessment of HDR videos,” submitted to IEEE Trans. Image Process., 2022.


[12] Z. Shang, Y. Chen, Y. Wu, H. Wei, and S. Sethuraman, “Subjective and objective video quality assessment of high dynamic range sports content,” in Proceedings of the IEEE/CVF Winter Conf. on Applications of Computer Vision (WACV) Workshops, January 2023, pp. 556–564.


[13] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.


[14] Netflix, VMAF: The Journey Continues, 2018 (accessed December 28, 2021). [Online]. Available: https://netflixtechblog.com/vmaf-the-journey-continues-44b51ee9ed12


[15] C. G. Bampis, P. Gupta, R. Soundararajan, and A. C. Bovik, “SpEED-QA: Spatial efficient entropic differencing for image and video quality,” IEEE Signal Process. Lett., vol. 24, no. 9, pp. 1333–1337, 2017.


[16] R. Soundararajan and A. C. Bovik, “Video quality assessment by reduced reference spatio-temporal entropic differencing,” IEEE Trans. Circuits Syst. Video Technol., vol. 23, no. 4, pp. 684–694, 2012.


[17] P. C. Madhusudana, N. Birkbeck, Y. Wang, B. Adsumilli, and A. C. Bovik, “ST-GREED: Space-time generalized entropic differences for frame rate dependent video quality prediction,” IEEE Trans. Image Process., vol. 30, pp. 7446–7457, 2021.


[18] A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Trans. Image Process., vol. 21, no. 12, pp. 4695–4708, 2012.


[19] M. A. Saad, A. C. Bovik, and C. Charrier, “Blind prediction of natural video quality,” IEEE Trans. Image Process., vol. 23, no. 3, pp. 1352–1365, 2014.


[20] Z. Tu, Y. Wang, N. Birkbeck, B. Adsumilli, and A. C. Bovik, “UGC-VQA: Benchmarking blind video quality assessment for user generated content,” IEEE Trans. Image Process., vol. 30, pp. 4449–4464, 2021.


[21] Z. Tu, X. Yu, Y. Wang, N. Birkbeck, B. Adsumilli, and A. C. Bovik, “RAPIQUE: Rapid and accurate video quality prediction of user generated content,” arXiv preprint arXiv:2101.10955, 2021.


[22] J. P. Ebenezer, Z. Shang, Y. Wu, H. Wei, S. Sethuraman, and A. C. Bovik, “ChipQA: No-reference video quality prediction via space-time chips,” IEEE Trans. Image Process., vol. 30, pp. 8059–8074, 2021.


[23] ——, “HDR-ChipQA: No-reference quality assessment for high dynamic range videos,” submitted to IEEE Trans. Image Process., 2023.


[24] A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Process. Lett., vol. 20, no. 3, pp. 209–212, 2012.


[25] J. Korhonen, “Two-level approach for no-reference consumer video quality assessment,” IEEE Trans. Image Process., vol. 28, no. 12, pp. 5923–5938, 2019.


[26] ITU, “BT.2020: Parameter values for ultra-high definition television systems for production and international programme exchange,” Intl. Telecomm. Union, Tech. Rep., 2015.


[27] S. Miller, M. Nezamabadi, and S. Daly, “Perceptual signal coding for more efficient usage of bit codes,” SMPTE Motion Imaging J., vol. 122, no. 4, pp. 52–59, 2013.


[28] Rtings, Peak Brightness Measurement, 2023 (accessed February 28, 2023). [Online]. Available: https://www.rtings.com/tv/tests/picture-quality/sdr-peak-brightness


[29] ITU, “BT.500: Methodologies for the subjective assessment of the quality of television images,” Intl. Telecomm. Union, Tech. Rep., 2019.


[30] ——, “P.910: Subjective video quality assessment methods for multimedia applications,” Intl. Telecomm. Union, Tech. Rep., 2008.


[31] Z. Wang, E. Simoncelli, and A. Bovik, “Multiscale structural similarity for image quality assessment,” in Asilomar Conf. Signals, Syst., Comput., vol. 2, 2003, pp. 1398–1402.


[32] H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process., vol. 15, no. 11, pp. 3440–3451, 2006.


[33] J. P. Ebenezer, Z. Shang, Y. Wu, H. Wei, S. Sethuraman, and A. C. Bovik, “Making video quality assessment models robust to bit depth,” submitted to IEEE Signal Process. Lett., 2023.


[34] J. P. Ebenezer, Z. Shang, Y. Wu, H. Wei, and A. C. Bovik, “No-reference video quality assessment using space-time chips,” in IEEE Intl. Workshop Multimedia Signal Process., 2020, pp. 1–6.


[35] D. W. Dong and J. J. Atick, “Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus,” Netw.: Comput. Neural Syst., vol. 6, no. 2, pp. 159–178, 1995.


This paper is available on arXiv under a CC 4.0 license.