Five years ago, Frank Chen posed a question that has stuck with me every day since: “If self-driving cars are 51% safer, are we not morally obligated to adopt them?” I’ve posed this question a ton of times over the past five years, and usually a knee-jerk reaction leads to an interesting debate. What makes the question so great is the knife’s edge – it’s not 99% safer, it’s not 70% safer, it’s only 51% safer.
To put that into context: the National Highway Traffic Safety Administration estimated that there were 42,795 traffic fatalities in 2022. 50% of 42,795 is 21,398 people; 51% is 21,825 people.
That means if self-driving cars are 51% safer, adopting them would save roughly 21,825 lives every year. And the single percentage point separating 51% from 50% represents 427 of those lives, about 1.5 Boeing 777 airplanes full of passengers.
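For the record, here’s the arithmetic behind those figures, sketched in a few lines of Python:

```python
# NHTSA's estimate of US traffic fatalities in 2022
fatalities = 42_795

at_50_percent = round(fatalities * 0.50)    # 21,398 lives
at_51_percent = round(fatalities * 0.51)    # 21,825 lives
knife_edge = at_51_percent - at_50_percent  # the single percentage point

print(at_50_percent, at_51_percent, knife_edge)  # 21398 21825 427
```

That knife-edge number, 427, is what makes the question interesting: it’s the margin between “no better than a coin flip over 50%” and “barely better.”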
Is saving 427 lives a moral argument for adoption?
In the five years I’ve been sharing this question, the answers are never simple. They’re always fraught with “what ifs.” But even if the answers lack clarity, I think the question is incredibly important. In part because it opens a broader — and equally important — debate on the moral imperative of AI adoption across many aspects of our lives and work. Because, after all, avoiding technology that could save lives may be just as ethically problematic as adopting technology too hastily.
I've always found the debate around autonomous vehicles to be a perfect microcosm for the broader discourse on AI. If we possess technology that's statistically safer than human-operated vehicles, isn't the moral choice obvious?
And remember, those numbers aren't just statistics. They represent real lives that could be saved by embracing AI technology.
But why stop at autonomous vehicles? The potential for AI to enhance safety, efficiency, and accuracy spans medicine, public health, food safety, agriculture, cybersecurity, crime prevention, and military science. If AI can diagnose diseases with greater accuracy than human doctors, predict crop failures before they devastate food supplies, or thwart cyber-attacks before they breach our data, don't we have a moral obligation to use those technologies as well?
Of course, these are dramatic examples, but the argument extends beyond life-and-death scenarios. AI's ability to improve our daily lives is equally compelling. Whether by simplifying mundane tasks or making information and services more accessible and equitable, AI can end drudgery and raise our quality of life. The moral imperative to adopt AI isn't just about preventing harm or death; it's about whether we have an obligation to contribute to human well-being if we can.
So, do we choose human-operated vehicles (or human-led processes) knowing they are less safe or efficient than their AI counterparts? Simply because they are more human?
Faced with the choice between human-operated systems and AI-enhanced alternatives, I think the decision should hinge on safety and efficiency rather than on allegiance to some murky idea of what is "human."
Embracing AI doesn't mean disregarding human value or input; rather, it's about acknowledging that what's human isn't inherently superior — and honestly, is often significantly inferior in specific contexts.
Now, please don’t get out the pitchforks; I’m not joining Team Robot Overlord. I get the anxiety a lot of people feel about the disruption AI is already causing to their jobs and the societal change that is undoubtedly headed our way. I just wonder whether the efficiencies of AI and the quality-of-life benefits might, in the long run, outweigh the impact of those disruptions.
Some of our reluctance to adopt AI is informed by cognitive biases and fears. For a species famed for its adaptability, we humans don’t love change.
Cognitive biases play a substantial role in our hesitance to embrace AI. They are psychological patterns left over from our earliest days as Homo sapiens: habits our minds fall into, cognitive shortcuts that were useful when running from predators but that now skew our modern perception and judgment.
In this case, recognizing and addressing these biases is crucial to moving toward a more rational, ethical approach to AI adoption. I think several of them could be in play, shaping our suspicion, trust, or acceptance of AI technologies.
Interesting, right? But the truth is, this is all academic. We may not even get to make this decision in the end. Companies are already making it for us.
A ton of corporations are barreling ahead with AI integration — mainly because the ROI often speaks louder than ethical debates.
Still, this isn't only about stone-hearted capitalism; it's about survival and adaptation. Businesses are tasked every day with the challenge of balancing technological adoption with ethical and ESG responsibilities. The impact of AI on employment and human well-being can't be an afterthought. For thousands, financial stability and career wellness hinge on these decisions. It’s something a lot of enterprises are grappling with.
And here's where the moral imperative question becomes more nuanced. If AI can streamline operations, reduce costs, and even create new opportunities, then aren’t we also morally responsible for exploring these technologies?
The trick will be to keep that ethical compass handy and ensure that as we embrace AI's efficiencies, we also safeguard against its potential to unfairly disrupt livelihoods.
Either way, we need to watch our footing. We are standing on the precipice of a new era, and one solid push could send us into freefall. AI is no longer a futuristic fantasy; it's absolutely embedded in our daily lives and work. That’s exciting — and scary as hell.
One of the most significant challenges we face is accessibility, or the tech gap. AI has the potential to democratize technology, making powerful tools available to a broader audience. Right now, though, AI's promise is mostly being realized by those who already have a certain level of access, so there is a real risk that AI will exacerbate existing inequalities rather than alleviate them.
It’s a period of adjustment, so it will require patience, education, and proactive measures to ensure that the benefits of AI are widely distributed. We have a chance to level the playing field so that AI's potential is unlocked for everyone, not just the privileged few.
Okay, so it is a paradox: For AI to function optimally alongside humans, it must be superior to us in certain tasks. But that VERY superiority threatens to displace human roles, fueling resistance and fear among us mortals.
This paradox creates a tough "push-pull" for AI; that’s why we’re seeing such heated debate about morality. I believe the solution may lie in a suite of emerging design philosophies and technologies aimed at bridging the gap between AI and human cooperation in an ethical way. They are worth asking ChatGPT about.
In wrapping up, I’ll take a stand. I think adopting AI is a moral imperative. In my view, the potential to save lives, enhance our quality of life, and even address long-standing inequalities is too significant to ignore. However, this doesn't mean we should dive headfirst without consideration. In my opinion, we need to approach AI with a blend of enthusiasm and caution—staying excited to explore its possibilities but mindful of the ethical, social, and economic impact.
Thoughtful consideration, robust ethical frameworks, and stringent governance are the keys to unlocking AI's potential responsibly.
I’m still open to debate on the subject. So, I’ll throw the question out to you. Reply here and let me know:
Are we ready to embrace AI with the moral seriousness it demands?
Are you ready to take your next road trip in a self-driving car?