
Trends in the Coerciveness of Amazon Book Recommendations


Too Long; Didn't Read

This paper investigates the evolution of Amazon's recommender system and its potential impact on user preferences. Using Barrier-to-Exit analysis on Amazon book ratings, it finds significant growth over time in how difficult it is for users to change their preferences, highlighting implications for user autonomy and the need for further research in this area.

Authors:

(1) Jonathan H. Rystrøm.

Abstract and Introduction

Previous Literature

Methods and Data

Results

Discussions

Conclusions and References

A. Validation of Assumptions

B. Other Models

C. Pre-processing steps

Abstract

Recommender systems can be a helpful tool for recommending content, but they can also influence users' preferences. One sociological theory for this influence is that companies are incentivised to make it harder for users to change preferences, because users who are easier to predict are more profitable. This paper seeks to test that theory empirically. We use Barrier-to-Exit, a metric for how difficult it is for users to change preferences, to analyse a large dataset of Amazon book ratings from 1998 to 2018. We focus the analysis on users who have changed preferences according to Barrier-to-Exit. To assess the growth of Barrier-to-Exit over time, we developed a linear mixed-effects model with crossed random effects for users and categories. Our findings indicate a highly significant growth of Barrier-to-Exit over time, suggesting that it has become more difficult for the analysed subset of users to change their preferences. However, these findings come with several statistical and methodological caveats, including sample bias and construct-validity issues related to Barrier-to-Exit. We discuss the strengths and limitations of our approach and its implications. Additionally, we highlight the challenges of creating context-sensitive and generalisable measures for complex socio-technical concepts such as "difficulty to change preferences". We conclude with a call for further research: to curb the potential threats of preference manipulation, we need more measures that allow us to compare commercial as well as non-commercial systems.

1 Introduction

What role do recommender systems play in shaping our behaviour? On the one hand, they can seem like a mere convenience: they help us select which music to listen to (Millecamp et al., 2018) or which television show to watch (Bennett & Lanning, 2007). When we provide feedback by liking, rating, buying, or interacting with a product, we hope that our actions help the recommender system "learn" our preferences (Knijnenburg et al., 2011).


However, what if recommender systems do not simply learn our preferences but shape them? By providing recommendations, they influence the products we engage with, which in turn shapes our preferences (Jiang et al., 2019). This creates a feedback loop that can degenerate into so-called filter bubbles and echo chambers (Jiang et al., 2019).


The drivers of this could be commercial. The companies behind recommender systems may have incentives to shape our behaviour to increase profitability. This is what Zuboff terms the prediction imperative in her exposition of Surveillance Capitalism (Zuboff, 2019). The prediction imperative states that, to secure revenue streams, Big Tech companies must become better at predicting the needs of their users. The first step is building better predictive algorithms, i.e., moving from simple heuristics to sophisticated machine learning (Raschka et al., 2020). However, as competition increases, the surest way to predict behaviour is to shape it (Zuboff, 2019). By shaping behaviour, companies increase predictability at the cost of users' autonomy (Varshney, 2020).


While it may be good business, changing preferences could plausibly count as manipulation. Apart from harming the autonomy of the users (Varshney, 2020), this could have legal implications under the EU AI Act (Franklin et al., 2022; Kop, 2021).


Take the case of Amazon. In 1998, Amazon introduced item-based collaborative filtering (Linden et al., 2003), a simple and scalable recommender model for suggesting similar items. Since then, its models have evolved towards greater personalisation, applying machine learning to increasingly sophisticated features (Smith & Linden, 2017). The effects have been more accurate recommendations and, crucially, higher sales (Wells et al., 2018).


The prediction imperative posits that the evolution of Amazon's recommender systems should have made it more difficult for users to change preferences, rendering them more predictable and profitable. The systems might also steer users towards specific categories that are relatively more profitable (Zhu & Liu, 2018).


To make such a claim, it is essential to have methods for empirically analysing potential manipulation. Fortunately, Rakova and Chowdhury (2019) provide such a measure: Barrier-to-Exit. Barrier-to-Exit measures how much effort a user must exert to show that they have changed their preferences within a given category. It is built on the theoretical foundation of Selbst et al.'s (2019) work on fairness traps, as well as on systems control theory as applied to recommender systems (Jiang et al., 2019). The authors posit that recommender systems with a higher Barrier-to-Exit make it harder to change preferences.
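To make the intuition concrete, the sketch below computes a deliberately simplified Barrier-to-Exit-like quantity for a single user-category pair: the total rating effort exerted between the user last signalling interest in a category and first signalling disinterest. The thresholding rule, column names, and function are our own illustrative assumptions, not the metric itself; the actual formalisation is given in Rakova and Chowdhury (2019) and developed later in this paper.

```python
import pandas as pd

def barrier_to_exit(user_cat_ratings: pd.DataFrame, threshold: float = 3.0) -> float:
    """Illustrative (not official) Barrier-to-Exit for one user-category pair.

    Assumed reading of the metric: sum the rating 'effort' a user exerts
    between last showing interest in a category (rating above `threshold`)
    and first showing disinterest (rating below it). See Rakova and
    Chowdhury (2019) for the actual formalisation.
    """
    r = user_cat_ratings.sort_values("timestamp").reset_index(drop=True)
    above = r.index[r["rating"] > threshold]
    below = r.index[r["rating"] < threshold]
    if len(above) == 0 or len(below) == 0 or below.max() < above.min():
        return 0.0  # no observable preference change in this category
    window = r.iloc[above.min(): below.max() + 1]
    # Effort: total deviation of in-window ratings from the indifference point.
    return float((window["rating"] - threshold).abs().sum())

# Toy usage: a user drifting from enthusiasm (5, 4) to rejection (2, 1).
example = pd.DataFrame({"timestamp": [1, 2, 3, 4],
                        "rating": [5.0, 4.0, 2.0, 1.0]})
print(barrier_to_exit(example))  # 2 + 1 + 1 + 2 = 6.0
```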


Methods are ineffective without relevant data to apply them to, and obtaining such data may seem an insurmountable task: Amazon's recommender system is a complex model that builds on a myriad of advanced features, including browsing activity, item features, and buying activity (Smith & Linden, 2017). No one but Amazon has access to this data, and they are unlikely to share it (Burrell, 2016).


We can get around this to some extent by relying on proxies. Specifically, we can use publicly available ratings as a proxy for user input. This has the advantage of being accessible through public datasets (Ni et al., 2019). The disadvantage is that we only have access to a (biased) fraction of the data going into the recommender system.
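As an illustration of this proxy approach, the snippet below loads a headerless ratings-only CSV in the shape the Ni et al. (2019) data is commonly distributed in. The file name and the assumed column order are placeholders and should be checked against the dataset documentation.

```python
import pandas as pd

# Hypothetical file name; the Books "ratings only" file from Ni et al. (2019)
# is a headerless CSV. The column order below is an assumption; verify it
# against the dataset documentation before use.
cols = ["item_id", "user_id", "rating", "timestamp"]
ratings = pd.read_csv("Books.csv", names=cols)

# Restrict to the 1998-2018 window analysed in this paper.
ratings["date"] = pd.to_datetime(ratings["timestamp"], unit="s")
ratings = ratings[ratings["date"].between("1998-01-01", "2018-12-31")]
print(ratings.head())
```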


This paper aims to investigate whether Amazon’s recommender system has made it more difficult to change preferences over time. To focus the scope, we will only investigate book recommendations, as books were Amazon’s first product (Smith & Linden, 2017). This leads us to the following research question:


RQ: Has the Amazon Book Recommender System made it more difficult to change preferences over time?


We take several steps to answer this research question. First, we formalise Barrier-to-Exit in the context of Amazon book recommendations and discuss the caveats of the technique and how it relates to preference change. We then use a large dataset of Amazon book ratings (Ni et al., 2019) to calculate Barrier-to-Exit for users who have changed their preferences, and analyse the change in Barrier-to-Exit over time using a linear mixed-effects model. Finally, we discuss the validity and implications of these results.
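As a sketch of the modelling step only: in Python's statsmodels, crossed random effects for users and categories can be expressed as variance components within a single dummy group. The synthetic data, the log transform, and all column names below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real table: one row per user-category
# Barrier-to-Exit observation (all column names are hypothetical).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "user_id": rng.integers(0, 50, n),
    "category": rng.integers(0, 10, n),
    "year": rng.integers(1998, 2019, n),
})
df["bte"] = np.exp(0.05 * (df["year"] - 1998) + rng.normal(0, 1, n))

df["log_bte"] = np.log(df["bte"])  # BtE is positive for preference-changers
df["all"] = 1                      # single dummy group, so both effects are crossed

model = smf.mixedlm(
    "log_bte ~ year",              # fixed effect: linear time trend
    data=df,
    groups="all",
    re_formula="0",                # drop the default per-group random intercept
    vc_formula={                   # crossed random intercepts as variance components
        "user": "0 + C(user_id)",
        "category": "0 + C(category)",
    },
)
result = model.fit()
print(result.summary())
```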


This paper makes two main contributions to the literature: (1) it provides a novel analysis of trends in preference manipulation in a commercial rather than an academic setting, and (2) it assesses the portability of Barrier-to-Exit as a measure for real-world datasets.


This paper is available on arXiv under a CC 4.0 license.