Objectives
Why should we care?
Humans are complex beings with nuanced emotions. Researcher Paul Ekman studied human emotions and identified 6 basic emotions, which function like the primary colors of art: fear, anger, joy, sadness, disgust, and surprise.
If you are feeling any anxiety about AI, that is an emotion rooted in fear.
Anxiety is fear of the future.
Our capacity for sensing and reacting to the environment with our emotions comes from our limbic system.
The limbic system is made up of structures that form a complex network for controlling emotion. This part of the brain coordinates the regulation of responses to stress, emotion, trauma, and fear. It is sometimes nicknamed the lizard brain.
In 2002, psychologist Daniel Kahneman won the Nobel Prize in economics for his work on how human beings think and make decisions. He described 2 patterns of thinking, which he named System 1 and System 2: System 1 is fast, automatic, and intuitive, while System 2 is slow, deliberate, and effortful.
When we think in a fear-motivated way, we trigger our System 1 thinking and, with it, our capacity to make errors in judgment. These errors in judgment are based on our implicit biases.
Implicit biases are our prejudices, preconceived beliefs, and values ingrained in childhood. They are things we were taught from a very early age, often to “keep us safe,” and they may be part of our core beliefs about ourselves.
These implicit biases can be triggered during System 1 thinking and cause us to act in ways that are negative, prejudicial, and fearful. We may do this without thinking, on automatic impulse.
The danger in thinking fearfully about AI is that we may fall into thinking fallacies:
Using System 1 thinking to make quick, fear-based judgments about AI, and in the process not making good decisions about its use in society
Using System 1 thinking when teaching AI new skills - inserting into AI prejudices that can’t be easily removed
Not realizing we have implicit biases in the first place, and not realizing that we build those biases into everything we do, especially when we are acting in a fear-based way
Not realizing that, although we might be able to identify our individual implicit biases through testing, learning, and acting differently, biases we teach to AI may not be removable. AI does not necessarily have the consciousness to understand that it is being biased.
By being afraid of AI, we might exacerbate the trauma we have just felt collectively as a global community because of the COVID pandemic. The anxiety we feel can keep us from making good decisions about the use of this new technology.
In order to reduce our fallacies in thinking, we might consider shifting to System 2 thinking about AI. We could be more deliberate, more conscious, and more alert in our management and implementation of AI.
System 2 thinking brings us back to basics. Though there are many possible models for System 2 thinking, it might be helpful to use ethical principles.
Here we revisit ethical principles to more deliberately examine AI.
Practicing System 2 thinking begins with realizing that the fear we have about AI can be managed through deliberate, conscious, mindful thought and action.
What are ethics anyway? Ethics are the moral principles that govern a person's behavior or the conduct of an activity.
There are 7 basic ethical principles and 7 principles of medical ethics described here that we can consider as an example framework.
These are the 7 basic ethical principles presented here:
So that what you believe and what you do are in harmony
Respect - acknowledging the inherent value of all individuals and treating all with dignity
Honoring others’ beliefs even if they are different from your own
These are the 7 principles of medical ethics described here. These principles are considered because they can help support human mental health, which has been affected by the emergence of AI.
Non-maleficence - This is the idea that medical providers should do no harm.
Beneficence - This is the idea that medical providers should go beyond doing no harm to actively promote others’ welfare.
Health maximization - This principle calls for an environment that maximizes the health, and the opportunities for health, of the general public.
Efficiency - This principle encourages the efficient use of scarce medical resources.
Respect for autonomy - This is the patients’ right to determine what will happen to them and to make their own medical choices.
Justice - Justice includes equity for all people, so that everyone can have health care of the same quality.
Proportionality - This principle involves weighing individual needs against the needs of the greater good.
These are some of the ideas we could focus on when talking about AI. We could use System 2 thinking to make these decisions about AI with more clarity and deliberation. Though there could be other models that encourage systematic thinking and reduce fear, one idea is to use an ethical guidance model.
References: