This paper is available on arXiv under a CC 4.0 license.
Authors:
(1) Ghazaleh H. Torbati, Max Planck Institute for Informatics, Saarbrücken, Germany & [email protected];
(2) Andrew Yates, University of Amsterdam, Amsterdam, Netherlands & [email protected];
(3) Anna Tigunova, Max Planck Institute for Informatics, Saarbrücken, Germany & [email protected];
(4) Gerhard Weikum, Max Planck Institute for Informatics, Saarbrücken, Germany & [email protected].
The potentially relevant ethical concerns are the privacy of the users whose likes and reviews are contained in the data, and the appropriateness of the review contents. All data in our experiments was obtained from the public repository at UCSD [40]. To the best of our knowledge, these datasets were already sanitized and anonymized for public research. The reviews on Amazon and Goodreads comply with the community guidelines of these websites[4][5], which prohibit hate speech, spam, and otherwise offensive content.
[1] F. Ricci, L. Rokach, and B. Shapira, Eds., Recommender Systems Handbook. Springer US, 2022.
[2] H. Steck, L. Baltrunas, E. Elahi, D. Liang, Y. Raimond, and J. Basilico, “Deep learning for recommender systems: A Netflix case study,” AI Mag., vol. 42, no. 3, pp. 7–18, 2021.
[3] Y. Park, “The adaptive clustering method for the long tail problem of recommender systems,” IEEE Trans. Knowl. Data Eng., vol. 25, no. 8, pp. 1904–1915, 2013.
[4] J. Li, M. Jing, K. Lu, L. Zhu, Y. Yang, and Z. Huang, “From zero-shot learning to cold-start recommendation,” in The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Jan 27 - Feb 1, 2019. AAAI Press, 2019, pp. 4189–4196.
[5] T. Zang, Y. Zhu, H. Liu, R. Zhang, and J. Yu, “A survey on cross-domain recommendation: Taxonomies, methods, and future directions,” CoRR, vol. abs/2108.03357, 2021.
[6] R. Raziperchikolaei, G. Liang, and Y. Chung, “Shared neural item representations for completely cold start problem,” in RecSys ’21: Fifteenth ACM Conference on Recommender Systems, 27 Sep 2021 - 1 Oct 2021. ACM, 2021, pp. 422–431.
[7] B. Liu, B. Bai, W. Xie, Y. Guo, and H. Chen, “Task-optimized user clustering based on mobile app usage for cold-start recommendations,” in KDD ’22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Aug 14 - 18, 2022. ACM, 2022, pp. 3347–3356.
[8] S. Zhang, L. Yao, A. Sun, and Y. Tay, “Deep learning based recommender system: A survey and new perspectives,” ACM Comput. Surv., vol. 52, no. 1, pp. 5:1–5:38, 2019.
[9] L. Wu, X. He, X. Wang, K. Zhang, and M. Wang, “A survey on neural recommendation: From collaborative filtering to content and context enriched recommendation,” CoRR, vol. abs/2104.13030, 2021.
[10] C. Chen, M. Zhang, Y. Liu, and S. Ma, “Neural attentional rating regression with review-level explanations,” in 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, Apr 23-27, 2018. ACM, 2018, pp. 1583–1592.
[11] D. Liu, J. Li, B. Du, J. Chang, and R. Gao, “DAML: dual attention mutual learning between ratings and reviews for item recommendation,” in 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Aug 4-8, 2019. ACM, 2019, pp. 344–352.
[12] T. Qi, F. Wu, C. Wu, and Y. Huang, “Personalized news recommendation with knowledge-aware interactive matching,” in SIGIR ’21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Canada, July 11-15, 2021. ACM, 2021, pp. 61–70.
[13] S. Wu, W. Zhang, F. Sun, and B. Cui, “Graph neural networks in recommender systems: A survey,” CoRR, vol. abs/2011.02260, 2020.
[14] G. Fazelnia, E. Simon, I. Anderson, B. A. Carterette, and M. Lalmas, “Variational user modeling with slow and fast features,” in WSDM ’22: The Fifteenth ACM International Conference on Web Search and Data Mining, Feb 21 - 25, 2022. ACM, 2022, pp. 271–279.
[15] A. N. Nikolakopoulos and G. Karypis, “RecWalk: Nearly uncoupled random walks for top-n recommendation,” in Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019, Melbourne, VIC, Australia, Feb 11-15, 2019. ACM, 2019, pp. 150–158.
[16] A. N. Nikolakopoulos, X. Ning, C. Desrosiers, and G. Karypis, “Trust your neighbors: A comprehensive survey of neighborhood-based methods for recommender systems,” CoRR, vol. abs/2109.04584, 2021.
[17] L. Chen, G. Chen, and F. Wang, “Recommender systems based on user reviews: the state of the art,” User Model. User Adapt. Interact., vol. 25, no. 2, pp. 99–154, 2015.
[18] L. Zheng, V. Noroozi, and P. S. Yu, “Joint deep modeling of users and items using reviews for recommendation,” ser. WSDM ’17. New York, NY, USA: Association for Computing Machinery, 2017, pp. 425–434.
[19] Y. Zhang, Q. Ai, X. Chen, and W. B. Croft, “Joint representation learning for top-n recommendation with heterogeneous information sources,” in 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, Nov 06 - 10, 2017. ACM, 2017, pp. 1449–1458.
[20] C. Wu, F. Wu, S. Ge, T. Qi, Y. Huang, and X. Xie, “Neural news recommendation with multi-head self-attention,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 6389–6394.
[21] G. Hu, Y. Zhang, and Q. Yang, “Transfer meets hybrid: A synthetic approach for cross-domain collaborative filtering with text,” in The World Wide Web Conference, WWW 2019, May 13-17, 2019. ACM, 2019, pp. 2822–2829.
[22] O. S. Shalom, G. Uziel, and A. Kantor, “A generative model for review-based recommendations,” in 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, Sep 16-20, 2019. ACM, 2019, pp. 353–357.
[23] F. J. Peña, D. O’Reilly-Morgan, E. Z. Tragos, N. Hurley, E. Duriakova, B. Smyth, and A. Lawlor, “Combining rating and review data by initializing latent factor models with topic models for top-n recommendation,” in RecSys 2020: Fourteenth ACM Conference on Recommender Systems, Sep 22-26, 2020. ACM, 2020, pp. 438–443.
[24] O. Sar Shalom, G. Uziel, A. Karatzoglou, and A. Kantor, “A word is worth a thousand ratings: Augmenting ratings using reviews for collaborative filtering,” in Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval, 2018, pp. 11–18.
[25] Y. Lu, R. Dong, and B. Smyth, “Coevolutionary recommendation model: Mutual learning between ratings and reviews,” in Proceedings of the 2018 World Wide Web Conference, 2018, pp. 773–782.
[26] H. Liu, Y. Wang, Q. Peng, F. Wu, L. Gan, L. Pan, and P. Jiao, “Hybrid neural recommendation with joint deep representation learning of ratings and reviews,” Neurocomputing, vol. 374, pp. 77–85, 2020.
[27] F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang, “BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer,” in 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Nov 3-7, 2019. ACM, 2019, pp. 1441–1450.
[28] A. Petrov and C. Macdonald, “A systematic review and replicability study of BERT4Rec for sequential recommendation,” in RecSys ’22: Sixteenth ACM Conference on Recommender Systems, Sep 18 - 23, 2022. ACM, 2022, pp. 436–447.
[29] S. Geng, S. Liu, Z. Fu, Y. Ge, and Y. Zhang, “Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5),” in RecSys ’22: Sixteenth ACM Conference on Recommender Systems, Seattle, WA, USA, September 18 - 23, 2022. ACM, 2022, pp. 299–315.
[30] G. Penha and C. Hauff, “What does BERT know about books, movies and music? probing BERT for conversational recommendation,” in RecSys 2020: Fourteenth ACM Conference on Recommender Systems, Sep 22-26, 2020. ACM, 2020, pp. 388–397.
[31] Y. Hou, J. Zhang, Z. Lin, H. Lu, R. Xie, J. McAuley, and W. X. Zhao, “Large language models are zero-shot rankers for recommender systems,” arXiv preprint arXiv:2305.08845, 2023.
[32] L. Wang and E.-P. Lim, “Zero-shot next-item recommendation using large pretrained language models,” arXiv preprint arXiv:2304.03153, 2023.
[33] W.-C. Kang, J. Ni, N. Mehta, M. Sathiamoorthy, L. Hong, E. Chi, and D. Z. Cheng, “Do llms understand user preferences? evaluating llms on user rating prediction,” arXiv preprint arXiv:2305.06474, 2023.
[34] R. A. Pugoy and H.-Y. Kao, “BERT-based neural collaborative filtering and fixed-length contiguous tokens explanation,” in Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. Suzhou, China: Association for Computational Linguistics, Dec. 2020, pp. 143–153. [Online]. Available: https://aclanthology.org/2020.aacl-main.18
[35] ——, “Unsupervised extractive summarization-based representations for accurate and explainable collaborative filtering,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, pp. 2981–2990.
[36] S. Funk, Netflix Update: Try This at Home, 2006, accessed on Sep 29, 2022. [Online]. Available: https://sifter.org/~simon/journal/20061211.html
[37] J. Lin, R. F. Nogueira, and A. Yates, Pretrained Transformers for Text Ranking: BERT and Beyond, ser. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2021.
[38] N. Reimers and I. Gurevych, “Sentence-bert: Sentence embeddings using siamese bert networks,” in 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, Nov 3-7, 2019. Association for Computational Linguistics, 2019, pp. 3980–3990.
[39] J. Bekker and J. Davis, “Learning from positive and unlabeled data: a survey,” Mach. Learn., vol. 109, no. 4, pp. 719–760, 2020.
[40] J. McAuley, Recommender Systems and Personalization Datasets, 2022, accessed on Sep 29, 2022. [Online]. Available: https://cseweb.ucsd.edu/~jmcauley/datasets.html
[41] M. Wan and J. McAuley, “Item recommendation on monotonic behavior chains,” in Proceedings of the 12th ACM conference on recommender systems, 2018, pp. 86–94.
[42] J. Ni, J. Li, and J. McAuley, “Justifying recommendations using distantly-labeled reviews and fine-grained aspects,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 188–197.
[43] X. Wang, X. He, M. Wang, F. Feng, and T.-S. Chua, “Neural graph collaborative filtering,” in Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval, 2019, pp. 165–174.
[44] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485–5551, 2020.
[4] https://www.amazon.com/gp/help/customer/display.html?nodeId=GLHXEX85MENUE4XF
[5] https://www.goodreads.com/community/guidelines