Are We Morally Obligated to Adopt AI?

by Crowdbotics (@coryhymel), April 20th, 2024
Frank Chen, second from the left, on the AI panel in which the question was posed. Credits: Gigster

Five years ago, Frank Chen posed a question that has stuck with me every day since: "If self-driving cars are 51% safer, are we not morally obligated to adopt them?" I've posed this question a ton of times over the past five years, and usually a knee-jerk reaction leads to an interesting debate. What makes this question so great is the knife's edge – it's not 99% safer, it's not 70% safer, it's only 51% safer.


To put it into context: the National Highway Traffic Safety Administration reported an estimated 42,795 traffic fatalities in 2022. 50% of 42,795 is 21,398 people; 51% is 21,825 people.


That means the gap between a coin flip (50% safer) and that knife's-edge 51% is 427 lives every year. That is about 1.5 Boeing 777 airplanes full of passengers.
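If you want to sanity-check that back-of-the-envelope math yourself, here's a quick sketch in Python. The 42,795 figure is the NHTSA estimate cited above; the ~285-seat capacity I use for a Boeing 777 is my own rough assumption for the comparison.

```python
# Back-of-the-envelope math behind the "51% safer" question.
fatalities_2022 = 42_795              # NHTSA-estimated US traffic fatalities in 2022

lives_saved_at_50_pct = round(fatalities_2022 * 0.50)   # a coin flip: 21,398
lives_saved_at_51_pct = round(fatalities_2022 * 0.51)   # the knife's edge: 21,825
margin = lives_saved_at_51_pct - lives_saved_at_50_pct  # 427 lives per year

seats_per_777 = 285                   # rough assumption for a typical Boeing 777 layout
print(f"50% safer saves ~{lives_saved_at_50_pct:,} lives; 51% saves ~{lives_saved_at_51_pct:,}.")
print(f"The 1% margin alone is {margin} lives a year, "
      f"about {margin / seats_per_777:.1f} Boeing 777s full of passengers.")
```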


Is saving 427 lives a moral argument for adoption?


In the five years I’ve been sharing this question, the answers are never simple. They’re always fraught with “what ifs.” But even if the answers lack clarity, I think the question is incredibly important. In part because it opens a broader — and equally important — debate on the moral imperative of AI adoption across many aspects of our lives and work. Because, after all, avoiding technology that could save lives may be just as ethically problematic as adopting technology too hastily.


The Moral Imperative of AI Adoption


I've always found the debate around autonomous vehicles to be a perfect microcosm for the broader discourse on AI. If we possess technology that's statistically safer than human-operated vehicles, isn't the moral choice obvious?


Consider this: studies have shown that human drivers had a higher rate of crashes with a meaningful risk of injury than self-driving (AI-powered) cars. Specifically, human drivers caused 0.24 injuries per million miles (IPMM) and 0.01 fatalities per million miles (FPMM), while self-driving cars caused 0.06 IPMM and 0 FPMM.
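To make those per-million-mile rates easier to picture, here's a small sketch that simply restates them over a 100-million-mile horizon. The rates are the ones quoted above; the 100-million-mile scale is an arbitrary illustration of mine, not a figure from the study.

```python
# Restate the study's crash rates over a larger mileage horizon, purely for intuition.
rates = {
    # (injuries per million miles, fatalities per million miles)
    "human drivers": (0.24, 0.01),
    "self-driving cars": (0.06, 0.00),
}

horizon_millions = 100   # 100 million miles -- an arbitrary scale for illustration

for driver, (ipmm, fpmm) in rates.items():
    injuries = ipmm * horizon_millions
    fatalities = fpmm * horizon_millions
    print(f"{driver}: ~{injuries:.0f} injury crashes, ~{fatalities:.0f} fatalities "
          f"per {horizon_millions} million miles")
```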


And remember, these numbers aren't just statistics. They represent real lives that could be saved by embracing AI technology.


But why stop at autonomous vehicles? The potential for AI to enhance safety, efficiency, and accuracy spans fields like medicine, public health, food safety, agriculture, cybersecurity, crime prevention, and military science. If AI can diagnose diseases with greater accuracy than human doctors, predict crop failures before they devastate food supplies, or thwart cyber-attacks before they breach our data, don't we have a moral obligation to use those technologies as well?


Of course, these are dramatic examples, but the argument extends beyond life-and-death scenarios. AI's ability to improve our daily quality of life is equally compelling. Whether by simplifying mundane tasks or making information and services more accessible and equitable, AI can end drudgery and make everyday life better. The moral imperative to adopt AI isn't just about preventing harm or death; it's about whether we have an obligation to contribute to human well-being if we can.

The Dilemma of Choice and Safety


So, do we choose human-operated vehicles (or human-led processes) knowing they are less safe or efficient than their AI counterparts? Simply because they are more human?


Faced with the choice between human-operated systems and AI-enhanced alternatives, I think the decision should obviously hinge on safety and efficiency rather than on allegiance to some murky idea of what is or isn't "human."


Embracing AI doesn't mean disregarding human value or input; rather, it's about acknowledging that what's human isn't inherently superior — and honestly, is often significantly inferior in specific contexts.


Now, please don't get out the pitchforks; I'm not joining Team Robot Overlord. I get the anxiety a lot of people feel about the disruption AI is already causing to their jobs and the societal change that is undoubtedly headed our way. I just wonder whether the efficiencies of AI and the quality-of-life benefits might, in the long run, outweigh the impact of those disruptions.


Some of our reluctance to adopt AI is informed by cognitive biases and fears. For a species famed for its adaptability, we humans don’t love change.


Cognitive biases play a substantial role in our hesitance to embrace AI. These psychological patterns are a holdover from our early years as Homo sapiens: the habits our minds fall into, cognitive shortcuts that were useful when running from predators but that definitely skew our modern perception and judgment.


In this case, recognizing and addressing these biases is crucial in moving toward a more rational, ethical approach to AI adoption. Here are a few that I think could be in play, influencing our suspicion, trust, or acceptance of AI technologies.

  • **Anthropomorphism Bias:** People tend to attribute human characteristics to AI or robots, which shapes their trust and expectations. This can lead to unrealistic assumptions about AI systems' capabilities, or to attributing malicious intent where none exists.

  • **Availability Heuristic:** This bias leads us to overestimate the probability of events associated with memorable or vivid incidents. Sensationalized media reports about AI failures or successes get blown out of proportion and disproportionately influence perceptions of AI reliability and safety.

  • **Confirmation Bias:** People may seek out or interpret information in a way that confirms their preexisting beliefs or hypotheses about AI. This bias can hinder the objective evaluation of AI technologies and their potential benefits or risks.

  • **FOMO (Fear of Missing Out):** People don't want to miss out on beneficial technologies but can lack an understanding of the implications, so the fear of missing out overshadows critical evaluation and leads to premature adoption. It's closely related to the Bandwagon Effect: the tendency to do or believe things because everyone else does (or because influencers do). People might trust or distrust AI technologies simply because that seems to be the popular sentiment.

  • **Status Quo Bias:** People prefer to maintain the current state of affairs, which leads to resistance against adopting something new like AI, regardless of potential benefits or proven superiority. This bias can slow innovation and the adoption of potentially life-enhancing technologies.

  • **Loss Aversion:** This bias makes the pain of losing something feel more potent than the pleasure of gaining something of equal value. For AI, this means the fear of job loss or loss of control overshadows the benefits of safety, efficiency, and convenience.

  • **Overconfidence Bias:** Overestimating one's ability to control or understand something. For AI, this means misjudging the risks, whether by overestimating or underestimating them.

  • **Algorithm Aversion/Trust:** Numbers are scary! People tend to be biased against algorithms, believing human decision-making is superior even when the evidence suggests otherwise. On the other hand, some have an unquestioning trust in AI decisions, ignoring the potential for errors or biases in AI systems.


Economic Rationality


Interesting, right? But the truth is, this is all academic. We may not even get to make this decision in the end.  Companies are already making it.

A ton of corporations are barreling ahead with AI integration — mainly because the ROI often speaks louder than ethical debates. Take Amazon as a prime example, with its significant shift towards automation. The efficiency and economic benefits are tangible and measurable, and in the face of cold hard cash, moral and social criticisms suddenly feel more academic.


Still, this isn't only about stone-hearted capitalism; it's about survival and adaptation. Businesses are tasked every day with the challenge of balancing technological adoption with ethical and ESG responsibilities. The impact of AI on employment and human well-being can't be an afterthought. For thousands, financial stability and career wellness hinge on these decisions. It’s something a lot of enterprises are grappling with.


And here's where the moral imperative question becomes more nuanced. If AI can streamline operations, reduce costs, and even create new opportunities, then aren’t we also morally responsible for exploring these technologies?


The trick will be to keep that ethical compass handy and ensure that as we embrace AI's efficiencies, we also safeguard against its potential to unfairly disrupt livelihoods.

We are in a Transition Period

Either way, we need to watch our footing. We are standing on the precipice of a new era, and one solid push could send us into freefall. AI is no longer a futuristic fantasy; it's absolutely embedded in our daily lives and work. That's exciting — and scary as hell.


One of the most significant challenges we face is the accessibility or tech gap. AI has the potential to democratize technology, making powerful tools available to a broader audience. However, right now, AI's promise is primarily being seen by those who already have a certain level of access, so there is also the potential that AI will exacerbate existing inequalities rather than alleviate them.


It's a period of adjustment, so it will require patience, education, and proactive measures to ensure that the benefits of AI are widely distributed. We have the chance to level the playing field so that AI's potential is unlocked for everyone, not just the privileged few.



The Conundrum of Cooperation


Okay, so it is a paradox: for AI to function optimally alongside humans, it must be superior to us in certain tasks. But that VERY superiority threatens to displace human roles, fueling resistance and fear among us mortals.


This paradox creates a tough "push-pull" for AI; that's why we're seeing such heated debate about morality. I believe the solution may be a suite of emerging design philosophies and technologies aimed at bridging the gap between AI and humans so the two can cooperate ethically. I'll list them below. They are worth asking ChatGPT about:


  • **Human-Centric AI Design (HCAI):** ensures that AI systems are developed with human needs and values at their core.

  • **Explainable AI (XAI):** demystifies AI decisions, making them understandable and transparent to humans.

  • **Ethical AI Frameworks:** guide the development and deployment of AI systems in a manner that respects human rights and values.

  • **Adaptive/Responsive AI:** learns from and adapts to human feedback, ensuring a synergistic relationship.

  • **Participatory Design:** involves end-users in the AI development process, ensuring their needs and concerns are addressed.

  • **Augmented Intelligence:** emphasizes AI's role in enhancing human abilities rather than replacing them.

  • **Trustworthy AI:** builds confidence in AI systems through reliability, safety, and ethical assurances.

To Self-Drive Cars, or to Not Self-Drive Cars?


In wrapping up, I'll take a stand: I think adopting AI is a moral imperative. In my view, the potential to save lives, enhance our quality of life, and even address long-standing inequalities is too significant to ignore. However, this doesn't mean we should dive in headfirst without consideration. We need to approach AI with a blend of enthusiasm and caution: staying excited to explore its possibilities but mindful of the ethical, social, and economic impact.


Thoughtful consideration, robust ethical frameworks, and stringent governance are the keys to unlocking AI's potential responsibly.


I’m still open to debate on the subject. So, I’ll throw the question out to you. Reply here or on my LinkedIn thread and tell me why I’m wrong — or right. I invite your thoughts and comments on this complex issue.


Are we ready to embrace AI with the moral seriousness it demands?

Are you ready to take your next road trip in a self-driving car?