Why Ignoring Sensitive Factors Won't Solve Algorithmic Bias and Discrimination

by Gurman, May 7th, 2024

Too Long; Didn't Read

The article explains that simply omitting sensitive factors like race or sex from an algorithm doesn't prevent it from being biased: bias still seeps in through correlated data, such as location or income, which act as proxies. Some jurisdictions, under laws like the GDPR, avoid collecting data on sensitive factors altogether, but this doesn't truly solve bias; it just hides it, because it can no longer be measured. The piece argues for stronger regulation and accountability, not just guidelines, to actively detect and correct bias in algorithms, and it proposes a detailed auditing framework to keep algorithms fair and transparent.

Not Accounting for Sensitive Factors Doesn't Mean Your Algorithm Won't Be Biased

Being colorblind doesn't mean color doesn't exist. Similarly, leaving sensitive factors such as race and sex out of an algorithm doesn't mean the algorithm won't carry biases formed around race or sex. Those biases are ingrained in society, and therefore in the data. Most algorithms are literal: their outputs are a function of the patterns they observe.


Nonetheless, a common technique developers apply is straight omission, despite its continued failure. Kwok of Yale's School of Management explains that when race is removed from a racially biased algorithm, a subtler bias, "latent discrimination," is introduced: other factors correlated with race, such as income or location, essentially serve as proxies for it. The Harvard Business Review likewise examined an employment recruitment scenario and found that proxies in the data could predict gender with 91% accuracy.
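
A rough sketch of how a team might audit for this kind of proxy leakage: hold out the sensitive attribute and test whether the remaining "neutral" features can reconstruct it. The dataset and column names below (income, zip code, years of experience, gender) are hypothetical assumptions, but the 91% figure above reflects a similar kind of test.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical applicant dataset; the file name and column names are illustrative.
df = pd.read_csv("applicants.csv")

# "Neutral" features only -- the sensitive attribute is deliberately excluded.
X = pd.get_dummies(df[["income", "zip_code", "years_experience"]], columns=["zip_code"])
y = df["gender"]  # held out solely for this audit, never fed to the production model

proxy_model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(proxy_model, X, y, cv=5, scoring="accuracy")

print(f"Proxy accuracy for the sensitive attribute: {scores.mean():.2%}")
# Accuracy well above the majority-class baseline means the omitted attribute is
# still recoverable from the remaining features -- i.e., latent discrimination risk.
```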


The omission strategy extends beyond individual scenarios, though. During a recent conference on AI regulation at California Western School of Law, a French panelist noted that France doesn't have to deal with racial bias in algorithms because it simply does not collect race as a factor. This is due to the GDPR, whose Article 9 prohibits processing "special categories of data," covering sensitive factors as well as proxies that may reveal them. It is phrased as follows:




Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.


Countries subject to the GDPR, such as France, still have racial biases; they just cannot be measured, since the data is never collected. One could argue, however, that biases don't need to be "fixed," since an algorithm should reflect real life. When ProPublica criticized the maker of COMPAS, a recidivism-risk algorithm, after finding that Black defendants were nearly twice as likely as their white counterparts to be classified as high risk, the company and several researchers responded that, because of race's impact on measured recidivism rates, it was mathematically impossible to build an algorithm that avoided such racial gaps.
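
Their defense rests on a known statistical tension: when base rates differ between groups, a score that is equally well calibrated for both groups (same precision) and catches true reoffenders at the same rate cannot also equalize false positive rates. A toy calculation with made-up numbers, chosen only for illustration:

```python
# Toy illustration: equal PPV (calibration) and equal TPR across two groups,
# combined with different base rates, force different false positive rates.
def false_positive_rate(base_rate: float, ppv: float, tpr: float) -> float:
    # Identity relating the error rates: FPR = p/(1-p) * (1-PPV)/PPV * TPR
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

ppv, tpr = 0.6, 0.7  # assumed equal for both groups
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    fpr = false_positive_rate(base_rate, ppv, tpr)
    print(f"{group}: base rate {base_rate:.0%} -> false positive rate {fpr:.1%}")
# Output: ~46.7% for group A vs ~20.0% for group B. The higher-base-rate group
# ends up with a higher false positive rate -- the kind of gap ProPublica measured.
```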



This reasoning is problematic because algorithms can amplify and perpetuate biases. Predictive policing, for example, tends to direct law enforcement to Black and brown neighborhoods based on past data. But that past data is already biased by heightened racial tensions, and increased law enforcement in those areas produces more arrests, skewing future data and widening the racial disparity among arrestees.
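
A toy simulation makes the feedback loop concrete. The numbers and the allocation rule (always patrol wherever past data shows the most arrests) are invented for illustration, but they show how a small initial disparity can snowball even when true offense rates are identical:

```python
# Toy feedback-loop simulation (all numbers invented): two neighborhoods with the
# same true offense rate, but A starts with a slightly larger arrest record.
import random

random.seed(0)
true_offense_rate = 0.3              # identical in both neighborhoods
recorded = {"A": 6, "B": 5}          # historical arrest counts, A slightly ahead

for day in range(1000):
    patrolled = max(recorded, key=recorded.get)  # police where the data points
    if random.random() < true_offense_rate:      # an offense occurs at the same rate everywhere
        recorded[patrolled] += 1                 # ...but is only recorded where police are

share_a = recorded["A"] / sum(recorded.values())
print(f"recorded arrest share in A after 1000 days: {share_a:.1%}")
# Although both neighborhoods offend at the same rate, nearly all new recorded
# arrests pile up in A: the initial disparity feeds the allocation, which feeds the data.
```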

We need a solution that prevents algorithms from perpetuating cycles of existing bias, and simply ignoring sensitive factors only masks the issue. The U.S. lacks a regulatory framework for organizations to measure and mitigate their own bias. The White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights outlines thorough recommendations for best practices, but the lack of enforcement undermines its effectiveness, as the continued deployment of harmful, biased algorithms shows. Since sweeping bans such as GDPR Article 9 do little to mitigate bias, I argue that policymakers' role shouldn't be to tell developers how to minimize bias, but to act as regulators who strictly hold developers accountable through audits.


Here is a sample auditing framework that draws heavily on the National Institute of Standards and Technology's (NIST) identification of three primary categories of AI bias: systemic, computational, and human. A minimal code sketch of the testing-and-evaluation step follows the outline.


1. Assessment of AI System Objectives
   1. Purpose of the System
   2. Assumptions Regarding Fairness and Bias
      1. Definition(s) of Fairness the Model Attempts to Satisfy
      2. Sensitive Factors Accounted For
   3. Organizational Norms (e.g., Implicit Bias Training)
   4. Diversity of the Team
2. Data Management and Analysis
   1. Data Collection Oversight
      1. Representation of Groups in the Data
      2. Context of the Data
   2. Proxy Identification
3. Algorithm Development and Model Training
   1. Transparent Design
      1. Documentation of Development Decisions with Justifications (particularly relevant for models used in high-risk settings such as courts and healthcare)
   2. Bias Mitigation Techniques Used
4. Testing and Evaluation
   1. Independent Validation
   2. Continuous Monitoring
   3. Disclosure of Bias Audit Findings
   4. Stakeholder Engagement
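
As one concrete piece of the "Testing and Evaluation" step, here is a minimal sketch of the kind of disparity check an independent validator or a continuous-monitoring job might run. The metric names, data, and structure are illustrative assumptions, not part of the NIST guidance:

```python
# Sketch of a per-group bias-audit check: compare selection rates and error rates
# across groups from model predictions, true outcomes, and group labels.
import numpy as np

def audit_report(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> dict:
    report = {}
    for g in np.unique(group):
        mask = group == g
        positives = y_true[mask] == 1
        negatives = ~positives
        report[str(g)] = {
            "selection_rate": float(y_pred[mask].mean()),
            "false_positive_rate": float(y_pred[mask][negatives].mean()) if negatives.any() else None,
            "true_positive_rate": float(y_pred[mask][positives].mean()) if positives.any() else None,
        }
    return report

# Example run with made-up data:
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print(audit_report(y_true, y_pred, group))
```

In practice, an auditor would compare these per-group rates against agreed-upon tolerances, rerun the check as part of continuous monitoring, and publish the results alongside the documentation called for earlier in the framework.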