Authors:
(1) Yuwen Lu, who contributed equally to this work, University of Notre Dame, USA;
(2) Chao Zhang, who contributed equally to this work; work done as a visiting researcher at the University of Notre Dame;
(3) Yuewen Yang, work done as a visiting researcher at the University of Notre Dame;
(4) Yaxin Yao, Virginia Tech, USA;
(5) Toby Jia-Jun Li, University of Notre Dame, USA.
Our findings highlight the potential of an end-user-empowerment approach in helping users understand, intervene against, and make informed decisions about dark patterns based on their specific needs, goals, and context. By disclosing information and enabling actions against dark patterns, users gained an increased sense of autonomy in their online experiences (KI1). Our proposed Design-Behavior-Outcome framework maps out design opportunities for future dark pattern interventions. Through our two-phase studies, we revealed that end users desire non-intrusive (DI6), personalized (KI2), and dynamic (KI3) interventions. Future research needs to carefully consider the distinct preferences of the target user groups and the context of use of digital services (DI8).
Although our two-week technology probe study illustrated the usefulness and technical feasibility of this approach, scalability remains a challenge. Our manual process of creating 31 UI enhancements on 5 websites is adequate for a small-scale probe, but cannot practically cover a meaningful share of the millions of dark pattern instances needed for real-world impact [26]. This scalability challenge is two-fold: on one hand, new dark patterns emerge quickly, and keeping up to date across all sites requires considerable maintenance overhead; on the other, designing multiple user-desired UI enhancements for each dark pattern requires significant effort (illustrated in Section 5.3).
To help a large audience effectively mitigate dark patterns’ impacts in real-world settings, a new approach is needed to scale up this effort. Here, we propose several possible future directions and discuss relevant efforts in adjacent research areas.
A crowd-sourced, collective intelligence approach can be an effective way to tackle scalability issues [3]. This could involve community contributions for identifying dark patterns, their impacts, and potential UI enhancements. The aggregated data can expand the capabilities of tools like Dark Pita, as well as provide training data for future machine learning (ML) models that detect dark patterns, predict user behaviors, and generate interventions, as we will discuss in Section 6.3. This crowd-sourced approach can involve multiple stakeholders:
(1) End users can identify dark patterns in their daily experiences, report their behaviors in response to dark patterns, and express their desired changes. Public release and wide adoption of tools such as Dark Pita can provide a platform for soliciting such information.
(2) Designers who are motivated to contribute can provide meta-information of their design intentions behind UX designs along with these features.
(3) Third-party developers can develop new detectors and UI enhancements for instances of dark patterns and contribute them to a unified repository or “community wiki” for public use.
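To make the aggregation concrete, the sketch below models a community-contributed dark pattern report as a simple record and groups reports by pattern type. All field and function names here are hypothetical illustrations, not part of Dark Pita's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical schema for a community-contributed dark pattern report;
# field names are illustrative, not Dark Pita's real data model.
@dataclass
class DarkPatternReport:
    url: str              # page where the pattern was observed
    pattern_type: str     # e.g., "sneaking", "confirmshaming"
    reporter_role: str    # "end_user", "designer", or "developer"
    description: str = ""      # free-text account of the pattern
    desired_change: str = ""   # the UI enhancement the reporter wants

def aggregate_by_type(reports):
    """Count reports per pattern type so maintainers can prioritize."""
    counts = {}
    for r in reports:
        counts[r.pattern_type] = counts.get(r.pattern_type, 0) + 1
    return counts

reports = [
    DarkPatternReport("https://example.com/cancel", "confirmshaming", "end_user"),
    DarkPatternReport("https://example.com/cart", "sneaking", "end_user"),
    DarkPatternReport("https://example.com/cancel", "confirmshaming", "developer"),
]
print(aggregate_by_type(reports))  # {'confirmshaming': 2, 'sneaking': 1}
```

Aggregated counts like these could feed both the prioritization of new UI enhancements and, eventually, labeled training data for detection models.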
A citizen science approach [12] can enhance transparency by gathering data on the design processes that result in dark patterns. UX practitioners commonly use A/B tests [111] to examine the effect of dark patterns (e.g., discouraging subscription cancellation). This approach seeks to involve the public in contributing the hypotheses, protocols, and outcomes of these A/B tests to reveal the hidden design intents behind dark patterns. It can improve design transparency, similar to how pre-registration of experiments and data transparency contribute to the open science movement [98, 99]. Meanwhile, guardrails must be put in place once this information becomes public: if a study shows the business benefits of a dark pattern, other companies and designers should not be able to blindly misuse those findings.
Implementing the citizen science approach would involve (1) a consistent format to report the relevant experiment information; (2) a community repository where the information can be aggregated, organized, and shared; and (3) optionally, a platform or a set of tools for conducting UX experiments that make it easier to share the experiment information. The citizen science approach can also engage multiple stakeholders:
(1) UX practitioners who are ethically minded can participate by sharing the hypotheses, protocols, and outcomes of these experiments.
(2) Third-party researchers can audit the shared results by replicating experiments using the information provided.
(3) Policy makers and community activists can mandate or advocate for companies’ adoption of this approach, which we will discuss in Section 6.4.
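Requirement (1) above, a consistent reporting format, could be as simple as a small set of required fields with a validation step before a disclosure enters the community repository. The field names and example values below are purely illustrative assumptions, not an established standard.

```python
import json

# Hypothetical minimal schema for disclosing a UX A/B test;
# the required fields are illustrative, not a published standard.
REQUIRED_FIELDS = {"hypothesis", "protocol", "variants", "outcome_metrics", "results"}

def validate_disclosure(record: dict) -> list:
    """Return the required fields missing from a disclosure, sorted."""
    return sorted(REQUIRED_FIELDS - record.keys())

disclosure = {
    "hypothesis": "Hiding the cancel link reduces subscription churn",
    "protocol": "50/50 split over 14 days on the account page",
    "variants": ["visible cancel link", "cancel link behind two menus"],
    "outcome_metrics": ["cancellation rate"],
    "results": {"visible": 0.031, "hidden": 0.018},
}
missing = validate_disclosure(disclosure)
if missing:
    print(f"rejected, missing fields: {missing}")
else:
    print(json.dumps(disclosure, indent=2))  # accepted into the repository
```

A shared machine-readable format like this would also make it straightforward for third-party researchers to replicate and audit disclosed experiments.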
With the latest advances in computational UI understanding [25, 50, 52, 68, 128] and user behavior modeling [62, 70, 72, 131, 135], machine learning (ML) techniques for modeling UX dark patterns show great promise in scaling up dark pattern intervention. Early explorations in this area, such as AidUI [79], have demonstrated the impressive performance of ML models in automated dark pattern recognition. Previous efforts to automate the detection of dark patterns [27] in consent banners with ML [115] and reverse engineering [100] have also shown promising results. Specifically, ML models have the potential to (1) identify instances of dark patterns and categorize them; (2) predict the consequences and user behaviors under the influence of these dark patterns; and (3) generate UI enhancements with different dark pattern intervention strategies.
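As a toy illustration of task (1), the sketch below classifies UI text against a handful of keyword rules. A real system would use a trained ML model (as in AidUI); the category names and cue phrases here are our own illustrative assumptions.

```python
import re

# Toy keyword-based stand-in for ML-driven dark pattern identification.
# Categories and cue phrases are illustrative examples of confirmshaming
# and false-urgency language, not a validated taxonomy or model.
RULES = {
    "confirmshaming": [r"no thanks,? i hate", r"i('d| would) rather pay full price"],
    "false_urgency": [r"only \d+ left", r"offer ends in \d+ (minutes|seconds)"],
}

def classify(ui_text: str):
    """Return the dark pattern categories whose cues match the UI text."""
    text = ui_text.lower()
    return sorted(
        category
        for category, patterns in RULES.items()
        if any(re.search(p, text) for p in patterns)
    )

print(classify("Hurry! Only 3 left in stock. No thanks, I hate saving money."))
# ['confirmshaming', 'false_urgency']
```

Keyword rules like these break down quickly on paraphrased or visual dark patterns, which is precisely why the larger datasets discussed below are needed to train more robust models.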
However, the lack of large datasets on dark pattern designs, user behavior under the influence of dark patterns, and users’ preferred actions against dark patterns is a major barrier to the ML approach. In addition to the ContextDP dataset proposed in AidUI [79], such datasets may also be constructed from existing curated lists of dark patterns (e.g., the Deceptive Design Hall of Shame[19]), from website crawling, or from data collected using our proposed crowd-sourced (Section 6.1) and citizen science (Section 6.2) approaches.
It is vital to acknowledge the potential abuse of ML techniques in efficiently creating dark patterns in interfaces. We implore researchers and practitioners using ML in design to be vigilant, adhere to previous empirical results of dark patterns [35, 37, 82], and ensure their designs align with user goals, to mitigate misuse. The development of such ML models must consider the intended users’ goals and ethical values to guarantee their widespread utility and benefit [109].
Our end-user-empowerment approach complements designer-centered ethical practices and policy-focused regulation against dark patterns. We propose several directions that coordinate these efforts to scale up the impact.
Designer-focused efforts can strengthen the proposed crowd-sourced (Section 6.1) and citizen science (Section 6.2) approaches. As discussed, designers play an important role in both approaches; designer education and advocacy are crucial to boosting their participation and engagement [38].
The citizen science method [12] helps with a key issue in policy making: defining dark patterns for regulation is challenging as a comprehensive definition is currently lacking [38, 81]. Citizen science promotes “design transparency” [29] in policy making, such as mandating study preregistration and sharing of A/B testing experiments. If a mandate is not yet practical, we can also take gradual steps, such as issuing “design transparency” or “ethical design” badges to companies or organizations that comply with the requirement.
Our end-user-empowerment approach has implications for addressing the power imbalance between end users and designers of interfaces. Today, designers usually hold dictatorial power over interface design. Even when users can modify interfaces, the possible configurations are often pre-defined by designers. Our design probe Dark Pita and our end-user-empowerment approach attempt to shift this power imbalance through awareness and action.
Together with advancements in ML for dark patterns (Section 6.3), new community-based approaches [57] will further empower end users against designers’ “interface dictatorship”. New communities such as Arc Boost Gallery[20] provide great opportunities for future investigation. For example, CSCW academics can investigate the common community structures, dynamics, and member values to gain insights for sustaining web augmentation communities around dark patterns. Existing research on dark pattern Reddit communities [37] and CSCW research on online communities [28, 30] has built solid foundations for such explorations. Meanwhile, we hope our Design-Behavior-Outcome framework casts light on the design space for intervention techniques and can guide future community creators in coming up with useful solutions.
To address this “tug-of-war” between designers and users, we can also achieve end-user empowerment by “de-powering” designers, a more radical and aggressive approach. Research on malleable interfaces [96, 97] has shown the feasibility of generating UIs automatically based on specifications of service functionalities, user preferences, and usage context. In this way, the role of designers would be limited to describing the specifications of a system, with little power over the visual presentation of information and the interaction mechanisms. This would prevent the creation of many dark patterns in the first place.
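The sketch below illustrates the spirit of this spec-driven generation: the designer supplies only a functional specification, and every action is rendered with identical prominence, leaving no hook for manipulative styling. The spec format and function names are hypothetical, not drawn from the cited malleable-interface systems.

```python
# Toy sketch of spec-driven UI generation: designers describe
# functionality only; presentation is generated uniformly by the system.
# The spec structure below is a hypothetical illustration.
def render_actions(spec):
    """Render every action in the spec with identical prominence."""
    return [f"[ {action['label']} ]" for action in spec["actions"]]

subscription_spec = {
    "functionality": "manage subscription",
    "actions": [
        {"label": "Renew subscription"},
        {"label": "Cancel subscription"},  # cannot be hidden or demoted
    ],
}
for button in render_actions(subscription_spec):
    print(button)
```

Because the renderer, not the designer, decides visual prominence, patterns such as obstruction (burying the cancel option) become impossible to express in the specification.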
[19] https://www.deceptive.design/hall-of-shame/all
[20] https://arc.net/boosts
This paper is available on arXiv under a CC 4.0 license.