
Ethical AI/ML: A Practical Example

by Philip Hopkins, November 22nd, 2023

Too Long; Didn't Read

Obvious use cases for ethics in applied artificial intelligence reveal themselves in situations where machines are making decisions directly impacting people. A machine learning model that uses race, gender, sexual orientation, or even self-disclosed disabilities on the part of the applicant as inputs to make decisions would be more likely to exhibit behavior that might be considered unjustly discriminatory.


There are many approaches to XAI (Explainable Artificial Intelligence) that seek to identify the factors that were most influential in a model’s decision-making as it classifies or predicts against a problem space.


One of the most popular methods is LIME (Local Interpretable Model-Agnostic Explanations), a feature-importance algorithm that tries to analyze the decision-making of an AI by fitting a simple, interpretable surrogate model that reproduces the same or similar results as the AI under examination in the neighborhood of a single prediction. The originators of this approach, Ribeiro, Singh, and Guestrin, describe the method in their 2016 paper, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.”
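

To make this concrete, here is a minimal sketch of how LIME is typically applied with the open-source lime Python package. The loan-style feature names and the toy random-forest model are invented purely for illustration; this is a sketch of the usual workflow, not a production audit.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy loan-approval data with made-up feature names (illustration only).
rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "zip_median_income"]
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] - X_train[:, 1] > 0).astype(int)  # toy approval rule

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["denied", "approved"],
    mode="classification",
)

# LIME fits a simple local surrogate model around one applicant and reports
# which features drove that particular decision.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=3)
print(explanation.as_list())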


Obvious use cases for ethics in applied artificial intelligence reveal themselves in situations where machines are making decisions directly impacting people. A clear example we will explore here is the granting of credit or a home loan.


But there is ambiguity everywhere, including in the edge cases, because even in industrial applications of AI, such as a machine learning model that calculates optimal petroleum blends, human values are embedded in the process and the outcome, whether those values were introduced consciously or not.


Is the AI optimizing for viscosity or efficient combustion to the exclusion of pollution reduction?


Similarly, we can ask whether an AI algorithm used for machining car parts is optimizing for speed of production, for accuracy, or for the quality of the parts and the safety of the passengers who depend on them.


Additionally, as a person with a disability, I am keenly aware, when using a search engine, of whether it recognizes that someone with mobility issues will need a different entrance to a building than an able-bodied person.


Setting aside the examples above, we can turn to the case of a machine learning model that determines eligibility and interest rates for home loans: a situation where the human impact of an AI that does not operate ethically is clear.

Can a Machine Learning Model Discriminate?

It should be obvious that a machine learning model that uses an applicant’s race, gender, sexual orientation, or even self-disclosed disabilities as inputs to its decisions would be more likely to exhibit behavior considered unjustly discriminatory in both a legal and an ethical sense.


The difficulty arises when machine learning models use proxies for these traits to make decisions. One proxy for applicants’ race came into use long before the widespread adoption of machine learning, when banks used applicants’ zip codes as a reason to deny loans.


The fact that people sometimes live in ethnic enclaves was enough for the zip code to become a proxy for race, which banks then used to deny loans to minorities. This practice became known as redlining, and it was outlawed decades ago.


However, are we sure that all of the machine learning models that determine creditworthiness and home loan availability today respect the laws governing redlining? Simply removing race and zip code from the models’ inputs might not be sufficient. What about the town the applicant lives in or the school they attended?


Might those features, when used by a machine learning model, become proxies for race or gender, especially in the case of schools?
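

One rough way to probe that question is to audit how well a candidate feature, on its own, predicts a protected attribute. The sketch below assumes you hold the protected attribute for auditing purposes only; the column names and toy data are hypothetical, and a high score is a warning sign that the feature can act as a proxy, not proof of discrimination.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_strength(df: pd.DataFrame, candidate: str, protected: str) -> float:
    """Mean cross-validated accuracy of predicting the protected attribute
    from the candidate feature alone; scores far above the majority-class
    baseline suggest the feature is acting as a proxy."""
    X = pd.get_dummies(df[[candidate]].astype(str))  # one-hot encode zip/school codes
    y = df[protected]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Toy audit data in which zip code encodes the protected attribute by construction.
rng = np.random.default_rng(1)
zips = rng.choice(["07302", "07305", "10451"], size=400)
race = np.where(zips == "10451", "group_a", "group_b")
df = pd.DataFrame({"zip_code": zips, "race": race})

baseline = df["race"].value_counts(normalize=True).max()
print("proxy score:", proxy_strength(df, "zip_code", "race"), "baseline:", baseline)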


Additionally, the products and media consumed by applicants can be used as proxies for race, gender, and other characteristics. If a sports streaming network is planning its marketing spend and depending on AI to build audiences whose demographics are likely to subscribe, might the viewing and spending habits of those audiences lead machine learning models to market to men over women?


If more men than women buy certain pay-per-view events to stream, is it ethical for a model to use those purchases as a feature in determining the demographics of the audience the network will market to?
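

One way to put a number on that question is a disparate-impact style check: compare the rate at which the model targets each group, loosely modeled on the “four-fifths” guideline from US employment law. The tiny audience table below is invented purely for illustration, and the 0.8 threshold is a heuristic, not a legal test.

import pandas as pd

def selection_rates(df: pd.DataFrame, decision: str, group: str) -> pd.Series:
    """Share of each group that the model selects (e.g., targets with ads)."""
    return df.groupby(group)[decision].mean()

# Hypothetical targeting decisions for a small audience sample.
audience = pd.DataFrame({
    "targeted": [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "gender":   ["m", "m", "m", "m", "f", "f", "f", "f", "m", "f"],
})

rates = selection_rates(audience, "targeted", "gender")
print(rates)
print("impact ratio:", rates.min() / rates.max())  # well below 0.8 is a warning sign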


It’s clear that humans still know more than machines about which real-world proxies can be used for discriminatory decision-making, whether by humans or by machines. How do we teach machines to avoid using proxies, other than specifically forbidding certain features and criteria from use by models? Can we train models to detect the human biases they have absorbed?
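

One partial, practical answer is to impose a fairness constraint during training rather than relying on feature removal alone. The sketch below uses the open-source fairlearn library on synthetic data; it is one possible tool under those assumptions, not a complete answer to the questions above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

# Synthetic data in which the outcome is correlated with group membership.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
group = rng.choice(["a", "b"], size=1000)            # protected attribute (audit only)
y = ((X[:, 0] + (group == "a")) > 0.5).astype(int)   # toy outcome skewed toward group a

# Train a classifier under a demographic-parity constraint so it cannot simply
# reproduce the group-level disparity through proxies.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=group)
y_pred = mitigator.predict(X)

# Closer to 0 means selection rates are more similar across groups.
print(demographic_parity_difference(y, y_pred, sensitive_features=group))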


These are questions that my colleagues and I are exploring as we work with media companies, e-commerce giants, and others to implement AI that mitigates the negative impact of proxy-based decision-making on protected groups and everyone else, because everyone deserves fair treatment.