Authors:
(1) Muneera Bano;
(2) Didar Zowghi;
(3) Vincenzo Gervasi;
(4) Rifat Shams.
Table of Links:
- Abstract, Impact Statement, and Introduction
- Defining Diversity and Inclusion in AI
- Conclusion, Future Work, and References
The increasing prominence of AI systems in our everyday lives has created an urgent need for ethical and responsible AI. Numerous ethical guidelines and principles for AI have emerged, emphasizing fairness, justice, and equity as crucial components of ethically sound AI systems [10, 17]. Despite the widespread recognition of D&I as essential social constructs for achieving unbiased outcomes [18, 19], there is a glaring lack of concrete, practical guidance on how to effectively integrate D&I principles into AI systems [1]. This gap has far-reaching implications for the broader AI ecosystem: it may perpetuate existing biases, reinforce social inequalities, and further marginalize underrepresented groups. Inconsistencies in the interpretation and application of these principles [17], coupled with a lack of diversity in perspectives and the underrepresentation of views from the Global South [7, 20], raise concerns about the effectiveness of current ethical AI guidelines.
Implementing ethical principles in AI remains challenging due to the absence of proven methods, common professional norms, and robust legal accountability mechanisms [21]. While AI ethics guidelines often focus on algorithmic decision-making, they tend to overlook the practical and operational aspects of the business practices and political economies surrounding AI systems [22]. This oversight can lead to issues such as “ethics washing”, corporate secrecy, and competitive and speculative norms [21].
A fierce debate is ongoing over the effectiveness and practical applicability of AI ethics guidelines. Munn [23] highlights the ineffectiveness of current AI ethical principles in mitigating the racial, social, and environmental harms associated with AI technologies, attributing it primarily to their contested nature, their isolation from practice, and their lack of enforceability. In response, Lundgren [24] argues that AI ethics should be framed as trade-offs, enabling guidelines to provide action guidance through explicit choices and communication. Lundgren emphasizes operationalising ethical guidelines to make them accessible and actionable for non-experts, transforming complex social concepts into practical requirements. Despite conceptual disagreements, Lundgren suggests building on existing frameworks, focusing on areas of agreement, and setting clear requirements for data protection and fairness measures. On this reading, a principle such as "be fair" only becomes actionable once a concrete metric and an explicitly chosen threshold are attached to it, as sketched below.
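To make this concrete, the sketch below shows one way a high-level fairness principle could be turned into a checkable requirement with an explicitly documented trade-off threshold, in the spirit of Lundgren's argument. The choice of metric (demographic parity gap) and the 0.1 threshold are illustrative assumptions for this example, not values prescribed by the paper or by Lundgren.

```python
# Illustrative sketch only: operationalising "fairness" as a checkable
# requirement. The metric (demographic parity gap) and the threshold
# (0.1) are assumptions chosen for this example.

def demographic_parity_gap(positive_rates: dict[str, float]) -> float:
    """Largest difference in favourable-outcome rates across groups."""
    rates = positive_rates.values()
    return max(rates) - min(rates)

def meets_fairness_requirement(positive_rates: dict[str, float],
                               max_gap: float = 0.1) -> bool:
    # The threshold encodes the explicit trade-off Lundgren calls for:
    # tightening it may cost predictive accuracy; loosening it tolerates
    # more disparity. Either way, the choice is recorded and auditable.
    return demographic_parity_gap(positive_rates) <= max_gap

# Per-group rates of favourable decisions from a hypothetical model
rates = {"group_a": 0.62, "group_b": 0.55, "group_c": 0.58}
print(meets_fairness_requirement(rates))  # True: the gap is 0.07
```

The point is not the particular metric but that the requirement is now testable and its trade-off explicit, which is what distinguishes an actionable requirement from an aspirational principle.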
To address these challenges through a requirements engineering (RE) lens, and to ensure that AI systems are developed and deployed responsibly, we posit that it is crucial to operationalise diversity and inclusion requirements for AI. By providing clear, actionable steps for incorporating D&I principles into AI development and governance, we can promote a more inclusive, equitable, and ethical AI landscape [25-27]. We recognize the validity of Munn’s critique and, following Lundgren’s call, our research aims to bridge the gap between high-level ethical guidelines and practical implementation.
In detail, we study the limited practical applicability of existing D&I guidelines in the context of RE for AI systems, emphasizing the need for operationalisation to make them effective. We identified several issues with these guidelines, including circularity, excessive specificity about certain attributes and techniques or, conversely, insufficient specificity, and an absolutism that ignores resource constraints. While many guidelines are sensible as driving principles, we argue that transforming them into actionable requirements is crucial for integrating D&I concerns into AI system development. To address this challenge, we propose a user story template that maps the five pillars discussed in Section II onto roles, artifacts, and processes, aiming to make these guidelines applicable in real-world software development contexts; a hypothetical instantiation is sketched below.
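As an illustration, the sketch below encodes one shape such a user story template could take, with the pillar, artifact, and process captured as explicit fields alongside the familiar role/aim/benefit structure. The field names, the pillar labels, and the example story are assumptions made for illustration; they are not the paper's exact template, which is defined in the paper itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of a D&I user story record. Field names, pillar
# labels, and the example values below are illustrative assumptions.

PILLARS = {"humans", "data", "process", "system", "governance"}

@dataclass
class DIUserStory:
    role: str      # stakeholder voicing the requirement
    aim: str       # the D&I capability or safeguard wanted
    benefit: str   # the inclusion outcome it serves
    pillar: str    # which of the five pillars it operationalises
    artifact: str  # development artifact the story constrains
    process: str   # lifecycle activity where it is verified

    def __post_init__(self) -> None:
        if self.pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {self.pillar}")

    def render(self) -> str:
        return (f"As {self.role}, I want {self.aim}, so that {self.benefit}. "
                f"[pillar: {self.pillar}; artifact: {self.artifact}; "
                f"process: {self.process}]")

story = DIUserStory(
    role="a speaker of a minority dialect",
    aim="the speech model to be evaluated on accented speech",
    benefit="recognition accuracy is equitable across user groups",
    pillar="data",
    artifact="evaluation dataset specification",
    process="model validation review",
)
print(story.render())
```

Recording the pillar, artifact, and process alongside the story is what ties a high-level D&I guideline to a concrete development activity where it can be checked, rather than leaving it as a free-floating principle.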
This paper is available on arXiv under a CC 4.0 license.