The most recent episode of Invisibilia (a fantastic podcast, I encourage you to subscribe) centers around a car/truck accident between a family of four (a mother, father and two girls) and a truck driver. Sudden flash rains, lost control, and one person gets killed (I won't spoil the story), and this leads the Invisibilia team into a deep dive on emotions.

While listening to the episode, I remembered sitting in an MBA Ethics class where the conversation was on the 'Trolley Problem'. The question was: if you are the driver of a trolley with faulty brakes, whom would you choose to hit, the five unsuspecting workers directly in your path, or would you turn the trolley and hit one unsuspecting worker instead? The dilemma, that you have to actively decide to kill one person in order to save several, is a moral gray area with no right or wrong answer. At the point in the class when we were having the conversation, I didn't think too deeply about it; it was an abstract conversation about a situation I didn't think I would ever find myself in. It was more of an intellectual exercise than a real one to me.

But for some reason, listening to that Invisibilia episode on the way to pick up my son, waiting in traffic behind a Tesla, the question became real to me. Because we are moving into a world where, while we might not have to make those Trolley Problem decisions, our technologies might…

Self-Driving Car Technology

Dreams of self-driving cars have been with us since the first cars were made. At the World's Fair in 1939, General Motors introduced the concept of the driverless car. Far from the models we see driving on the streets of California and Austin, these used more rudimentary technology.

1956 advertisement by America's Independent Electric Light And Power Companies

Like most, I believe we are still a ways off from machine learning technology being robust enough for full autonomy, even as pundits suggest that driverless electric vehicles will be the death of big oil. But what if all this is much closer than we think? What if I am totally wrong and we'll have full autonomy in 2018? With the work that the likes of GM, Waymo, Uber and Tesla are doing, this might not be so far in the future after all. So where does that leave us on the innovation path of viability -> feasibility -> desirability (from Creative Confidence)?

Self-driving car technology is close to (technical) feasibility, as we see these cars in our cities (I've seen some in Austin). We are getting close to viability for some use cases, especially the logistics-based instances. Where we are failing is in getting these technologies to be desirable, because we are not having the required conversations at scale. Instead of debating which jobs AI will replace in the future, we should spend more time talking about the ethics and decision-making models of the AI being deployed today. Said another way: do we, the folk who will be sitting in these autonomous vehicles, trust the companies well enough to believe that their technology will make the right choices for us even as we hand over control?

The Ethics Questions that Autonomous Vehicles/Robots Raise

For companies like Uber, developing autonomous vehicles is core and, frankly, existential. The very business model that sustains Uber now depends on Uber replacing the drivers behind the wheel. As I laid out in an earlier post, the company has to shift to driverless cars to reduce the cost of doing business. It's a critical business decision. Do we totally trust that Uber, with all its ethical and cultural problems, will build autonomous vehicles that make the customer-centric decision when they face the 'Trolley Problem'? Because you know it will happen, don't you? When the company deploys millions of autonomous vehicles on the roads, there will be accidents and moral decisions to make. No technology system is 100% perfect; with more possibilities for error, there will be errors.

For companies like Google and GM, are we comfortable that their machines will have our best interests at heart when it comes to non-binary decisions that might be related to life or death? Will a Waymo car be able to decide between swerving to hit a car with 4 cute puppies or risking the lives of your family in the car? How is this decision model being programmed into the self-driving cars? We know that the defaults embedded in our machines are not always as clear cut and unbiased as we think.

And these questions do not just relate to autonomous vehicles. Robina (below) is a robot that is supposed to assist elderly residents in their homes. Robina has the machine intelligence required to learn from the performance and behavior of other Robinas, retrieving real-time information from centralized cloud databases. But who is to blame if something goes wrong with Robina, and she hurts or maims my parents as she treats them? What are the default mental models being embedded in Robina, Humanoid and ASIMO (all robots intended to serve elderly home care residents) that ensure they make the best decisions for us?

Technology Always Outpaces Regulation

Technology advancement always beats policy and regulation. Always. So the defaults that will be embedded in these technologies will have to come from the moral codes of the programmers and technologists who will embed these vehicles with decision-making software.

We are on the cusp of, and in some cases already experiencing, advancements in technology that seemed like magic just a few short years ago, and these technologies will improve our lives immensely. We now have to, as informed consumers, ask and demand answers to these questions from our leading tech companies. Our lives might depend on it.

I'll leave you with this quote: 'Speed is irrelevant if you are traveling in the wrong direction.' - M. Gandhi

Are we moving too fast?

Sign up for the Polymathic Monthly Newsletter here, you'll love it. Please share, like and tweet. Write your own blog posts using our WYOP tool (it gets you into writing flow).