
AI Is Inherently Neutral - It Is Human Beings Who Are Biased, and the Machines Merely Replicate Them

by MakC, April 24th, 2023

Too Long; Didn't Read

Artificial intelligence systems are being heavily developed by leading R&D teams worldwide. A common barrier in the process is the bias that AI systems "bring" with them. It is the developers who provide the data and train the model, and it is they who can cause the model to become biased, intentionally or unintentionally.

Artificial intelligence systems are being heavily developed by leading R&D teams worldwide. However, a common barrier in the process is the bias that AI systems "bring" with them.


But are AI systems really "bringing" the bias with them, or is the bias "built in" from the beginning?


Artificial intelligence systems, such as machine learning models, are trained on large amounts of data. "Training" an artificial intelligence system refers to the process by which the system learns the underlying patterns that the data represents.


A "loss parameter" parameter is also defined for a system that judges the sound of the system's output. Over its training period, the artificial system tries to look for patterns in the data and minimize the loss when making predictions.


Now, these two ingredients, the training data and the loss function, are where the bias problem can come in.


In the paper "Technology, Autonomy, and Manipulation" by Daniel Susser, Beate Roessler, and Helen Nissenbaum, the authors describe direct, indirect, and structural manipulation, all of which the developers of an artificial intelligence system can themselves configure.


Manipulating the data on which an artificial intelligence system is trained is an example of structural and indirect manipulation of the system. Let us assume we are training a system similar to the infamous COMPAS recidivism-prediction algorithm.


Suppose we provide only examples in which people of a certain race are labeled repeat offenders, while people of all other races appear only as non-offenders.


In that case, the system will simply learn the pattern that people of race X are bound to be repeat offenders. Even though the manipulation came from the model's developers, the AI system will get the blame for being biased.
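

A hypothetical sketch of this scenario: if a group feature is made to correlate perfectly with the labels in the training data, even an ordinary off-the-shelf classifier (here, scikit-learn's LogisticRegression) will learn to predict purely along group lines. The dataset and features below are fabricated for illustration and have nothing to do with the real COMPAS data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)  # 1 = "race X", 0 = everyone else
other = rng.normal(size=(n, 3))     # genuinely relevant features
X = np.column_stack([group, other])
y = group.copy()                    # labels were selected to mirror group membership

model = LogisticRegression().fit(X, y)

# Two individuals identical in every relevant feature, differing only in group:
features = rng.normal(size=3)
print(model.predict([[1, *features], [0, *features]]))  # -> [1 0]
```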


Similarly, in the paper "A Review on Fairness in Machine Learning" by Solon Barocas, Moritz Hardt, and Arvind Narayanan, published in the Proceedings of the 2018 ACM Conference on Fairness, Accountability, and Transparency, the authors discuss how an AI model can be made biased by incentivizing it through a particular loss function or by manipulating the data itself.
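

As a rough illustration of the loss-function route, consider a weighted squared-error loss in which one group's errors barely count. The groups, outcomes, and weights below are assumptions chosen to make the effect obvious: for weighted squared error, the optimal constant prediction is the weighted mean, so the model effectively serves only the heavily weighted group.

```python
import numpy as np

y_group_a = np.full(50, 10.0)  # true outcomes for group A
y_group_b = np.full(50, 2.0)   # true outcomes for group B
y = np.concatenate([y_group_a, y_group_b])

# Error weights: group B's mistakes barely register in the loss.
w = np.concatenate([np.ones(50), np.full(50, 0.01)])

# The constant prediction minimizing the weighted squared error is the
# weighted mean, which sits almost entirely with group A.
pred = np.sum(w * y) / np.sum(w)
print(pred)  # ~9.92: close to group A's 10.0, far from group B's 2.0
```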


Since artificial intelligence models are essentially mathematical operations applied to an input to produce an output, there are no "built-in" features that could cause biased outputs on their own.
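

As a toy illustration of this point, the same arithmetic applied with different learned weights gives different behavior; any bias lives in the weights, and therefore in the data they were fit to, not in the operations themselves. Both weight vectors below are made up for the sketch.

```python
import numpy as np

def model(x, W, b):
    # A one-layer "model": multiply, add, squash. Nothing here is biased per se.
    return 1 / (1 + np.exp(-(W @ x + b)))

x = np.array([1.0, 0.5])            # feature 0 might encode group membership
neutral_W = np.array([[0.0, 1.0]])  # weights that ignore feature 0
skewed_W = np.array([[5.0, 1.0]])   # weights fit to data that leaned on feature 0
print(model(x, neutral_W, 0.0), model(x, skewed_W, 0.0))  # ~0.62 vs ~1.0
```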


However, since it is the developers who provide the data and train the model, it is they who can cause the model to become biased, whether intentionally or unintentionally.