Authors:
(1) Reza Ghaiumy Anaraky, New York University;
(2) Byron Lowens;
(3) Yao Li;
(4) Kaileigh A. Byrne;
(5) Marten Risius;
(6) Xinru Page;
(7) Pamela Wisniewski;
(8) Masoumeh Soleimani;
(9) Morteza Soltani;
(10) Bart Knijnenburg.
In this section, we first summarize the literature on older and younger adults’ privacy decisions. We then discuss dark pattern designs and common strategies used to maximize users’ disclosure (namely framing, defaults, and justification messages).
A large body of literature investigating age-related differences in digital privacy identifies older adults as individuals who experience more difficulty than younger adults in managing their digital privacy (e.g., [11, 14, 42, 64, 90]). Some argue that older adults are less likely to protect themselves against privacy-related risks [86, 98], and a lack of awareness of privacy risks has been cited as a critical factor shaping older adults’ privacy decisions [46]. For example, age-related differences have been found in research investigating content sharing and sociability and how these components are associated with the need for privacy among Facebook users [14]. The researchers found that younger adults are more competent in their Facebook usage and are more informed about, and able to make changes to, their privacy settings. In contrast, older adults seemed to have difficulty understanding privacy settings and to be less aware of social privacy issues.
Psychological literature corroborates that older and younger adults exhibit fundamental behavioral differences in their decision-making patterns, including differences in risk preference and reliance on goal-driven approaches [96]. For example, Anaraky et al. [3] showed that older and younger adults follow different decision processes: while younger adults mostly rely on the affect heuristic [44], older adults are more likely to be calculus-driven decision makers. However, to the best of our knowledge, no one has studied the differences between older and younger adults’ decisions in the scope of dark pattern designs that nudge users toward disclosure. In this study, we investigate how dark pattern design interventions influence older adults’ decisions differently than younger adults’. Next, we turn to the dark pattern design literature.
Dark patterns in design refer to instances where designers exploit human desires and behaviors, implementing functionality that misleads users and carries negative implications for them [28]. The term was first proposed by UX designer Harry Brignull in 2010, who defined dark patterns as “a user interface that has been carefully crafted to trick users into doing things they might not otherwise do.” Brignull also notes that “Dark patterns are not mistakes. They are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind.” [28]. Bösch et al. [12] developed a categorization of the privacy dark patterns that designers implement within their systems to exploit users’ privacy. These strategies include: maximize, publish, centralize, preserve, obscure, deny, violate, and fake.
In the context of this work, we focus on the maximize strategy, by which designers aim to collect more data than is actually needed for the task [12]. Framing, defaults, and justification messages are means by which designers maximize the amount of personal data collected [12, 33, 54]. We study these dark patterns in the context of a Facebook application, a setting in which dark patterns are not uncommon. For example, in the infamous Cambridge Analytica case, the app designers used maximize dark pattern designs, relying on defaults to influence users’ decisions to share personal information, with rather unfavorable consequences. In our study, we test the effect of dark patterns in a photo-tagging application on the Facebook platform, where users can choose to tag themselves or their friends in their photos. The app applied maximize dark pattern designs to influence the user’s decision in the form of choice framing, default settings, and justification messages. We explain each of these maximize dark patterns below.
Framing and defaults are quintessential examples of the “maximize” dark pattern, in that they tend to increase compliance with disclosure requests made by the information system [4]. The framing effect describes the phenomenon that people are more likely to consent to a request if it is presented with a positive framing (“Do ...”) rather than a negative framing (“Do not ...”). Johnson et al. [30] and Lai and Hui [43] independently studied the framing effect in the context of information privacy. At the end of an online health survey, Johnson et al. [30] asked their participants if they wanted to receive more health surveys. The choice option for participants in the positive framing condition was worded “Notify me about more health surveys”, whereas participants in the negative framing condition saw the wording “Do not notify me about more health surveys”. Lai and Hui [43] studied framing in a newsletter sign-up scenario. Similarly, the wording for their positive framing condition was “Please send me Vortrex Newsletters and information”, whereas the wording for the negative framing condition was “Please do not send me Vortrex Newsletters and information”. Both studies found that participants are more likely to comply with a request presented with a positive framing rather than a negative framing.
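To make the framing manipulation concrete, the following minimal TypeScript sketch (ours, for illustration; the names `Framing`, `checkboxLabel`, and `consentGiven` are hypothetical, and only the label wording comes from Johnson et al. [30]) shows how the same checkbox state maps to opposite consent outcomes under the two framings:

```typescript
// Illustrative sketch of the framing manipulation; not code from the cited studies.
type Framing = "positive" | "negative";

// Checkbox wording under each condition, using Johnson et al. [30]'s labels.
function checkboxLabel(framing: Framing): string {
  return framing === "positive"
    ? "Notify me about more health surveys"
    : "Do not notify me about more health surveys";
}

// The same checked state means opposite things under the two framings:
// under negative framing, leaving the box unchecked signals consent.
function consentGiven(framing: Framing, checked: boolean): boolean {
  return framing === "positive" ? checked : !checked;
}
```

This inversion is also why framing and defaults interact: what “doing nothing” means for disclosure depends on both the wording of the option and its pre-selected state.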
Similar to framing, defaults are a dark pattern design strategy that can influence individuals’ decisions [33, 69]. The default effect suggests that individuals are more likely to accept an option if it is pre-selected by default [70]. This is evident in both Johnson et al. [30]’s health survey study and Lai and Hui [43]’s newsletter sign-up study, where the sign-up rate was highest when users were signed up by default (an opt-out default). Overall, both framing and defaults are tools of choice architecture [33] that can induce compliance with data disclosure requests made by the information system [4].
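A similarly minimal sketch of the default manipulation, assuming a single disclosure checkbox (the names `DefaultPolicy`, `initialChecked`, and `finalConsent` are ours, not from the cited studies):

```typescript
// Illustrative sketch of the default manipulation; not code from the cited studies.
type DefaultPolicy = "opt-in" | "opt-out";

// An opt-out default pre-selects the disclosure option.
function initialChecked(policy: DefaultPolicy): boolean {
  return policy === "opt-out";
}

// For a binary checkbox, a user who toggles it ends up with the inverse of the
// default; a user who never touches it simply inherits the default state,
// which is why opt-out conditions yield the highest sign-up rates.
function finalConsent(policy: DefaultPolicy, userToggled: boolean): boolean {
  return userToggled ? !initialChecked(policy) : initialChecked(policy);
}
```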
To help users make a decision, system designers sometimes show them justification messages [1] providing additional information about the choice. These messages can inform users about the popularity of the product or service among other users [22] or about its pros and cons [12, 24]. For example, Weinberger et al. [93] showed that an unfavorable product rating adversely influences individuals’ intention to purchase the product. Overall, these studies suggest that providing justifications in support of a request can motivate audiences to comply with it.
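As a sketch of how such messages might accompany a choice, the snippet below pairs a disclosure request with one of several justification types; the message wording and the `Justification` type are hypothetical illustrations, not the stimuli used in this study:

```typescript
// Hypothetical justification messages; illustrative wording, not the study's stimuli.
type Justification = "none" | "popularity" | "pros" | "cons";

function justificationText(kind: Justification): string {
  switch (kind) {
    case "popularity":
      return "Most users tag themselves in their photos."; // social proof, cf. [22]
    case "pros":
      return "Tagging makes your photos easier to find later."; // benefit-oriented, cf. [24]
    case "cons":
      return "Tagged photos may be visible to a wider audience."; // risk-oriented, cf. [12]
    default:
      return ""; // control condition: no justification shown
  }
}
```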
[1] The term “framing” is used in the literature to denote several conceptually distinct interventions, and some studies apply the term “framing” to the type of justifications we use in this study [10]. In order to avoid confusion, we use the term “framing” for negated choice statements (i.e., “Do” vs. “Do not”; [30, 43]), and use the term “justification” to refer to the additional text accompanying the choice statement.
This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.