The need for ethics in applied artificial intelligence is most obvious where machines make decisions that directly impact people. A machine learning model that takes race, gender, sexual orientation, or an applicant's self-disclosed disabilities as inputs is more likely to produce decisions that are unjustly discriminatory.
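As a concrete illustration (the feature names here are hypothetical, not drawn from the text above), a common first safeguard is to strip protected attributes from a model's input features before training or inference. This is a minimal sketch, not a complete fairness solution: other features can still act as proxies for protected attributes.

```python
# Hypothetical set of protected attributes; real deployments should follow
# applicable law and policy when deciding what belongs here.
PROTECTED_ATTRIBUTES = {"race", "gender", "sexual_orientation", "disability_status"}

def drop_protected(features: dict) -> dict:
    """Return a copy of the feature dict with protected attributes removed."""
    return {k: v for k, v in features.items() if k not in PROTECTED_ATTRIBUTES}

applicant = {
    "income": 54000,
    "credit_history_years": 7,
    "gender": "female",   # protected: should not drive the decision
    "race": "asian",      # protected: should not drive the decision
}

model_inputs = drop_protected(applicant)
# model_inputs retains only the non-protected features:
# {"income": 54000, "credit_history_years": 7}
```

Note that removing these columns alone does not guarantee fair outcomes, since correlated features (for example, postal code) can encode the same information indirectly; auditing model outputs across groups remains necessary.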
Philip Hopkins
@philhopkins
I am a hacker, engineer, product manager, and researcher on LLMs, AI/ML, and the ethics of applied machine learning.