
The Development of AI: Balancing Convenience and Ethics

by Ryan Ayers, May 9th, 2022

Too Long; Didn't Read

Artificial intelligence has pushed automation into nearly every industry, but it is not free from controversy. From embedded bias and data theft to physical safety and murky accountability, serious ethical questions surround AI. Drawing on commentary from the University of Washington's Chirag Shah, this piece argues that developers must ask not only how to build AI responsibly, but whether some systems should be built at all.



Technology has improved our lives in countless ways. Today, thanks to automation, we have more time than ever (even if it doesn’t feel that way!) to pursue the activities we enjoy.


Throughout history, technology has made essential work easier, freeing up more and more time for people to create, socialize, and relax.


Artificial intelligence (AI) has played a pivotal role in pushing automation forward in recent years.


As the technology has advanced, it’s made its way into nearly every industry, from marketing to healthcare. AI has increased efficiency and taken over many mundane tasks people would rather not do.


But AI isn’t completely free from controversy.


There are legitimate ethical concerns that have come up in recent years, and it’s important to be aware of them so we can work on potential solutions.


As Chirag Shah, Associate Professor at the University of Washington Information School, explains, these issues can quickly become ethically complex:



“Asking for someone’s race or gender on a job application form may be legal, but is it ethical to use that information to screen them?


This is not a new question or problem, but given how much we are starting to rely on automated processes (e.g., screening candidates, predicting criminal intent, deciding on college admissions), such questions bring back the age-old discussion around ethics.


This development has coincided with the pandemic, but it’s not exclusive to the pandemic age. Sure, some issues have become more pronounced due to the pandemic.


Take inequity, for example. I’m not sure I can say that we are moving into a more digital world because of the pandemic as far as AI is concerned. The transition was already in the works; perhaps the pandemic has only amplified it.”


Here are some of the many ethical factors we need to consider as AI develops even further.


Bias in Artificial Intelligence


One of the biggest issues with artificial intelligence is that it cannot be completely objective. AI is developed by humans, and the same biases we hold can end up embedded in it.


These biases may be unintentional, but they have already led to several serious problems.


One example is software designed to predict criminal behavior, used to help courts in their decision-making, which was found to be frequently inaccurate and biased against Black individuals.


Facial recognition software and financial algorithms have shown similar biases, with life-changing consequences for people: being denied a loan, or remaining in prison after becoming eligible for release.


Cases like these show just how dangerous it is to treat AI systems as inherently neutral. Recognizing that these systems carry biases is the first step toward fairer technology.
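
To make this concrete, below is a minimal sketch of the kind of check an auditor might run over a system's decisions. The data and helper functions are hypothetical illustrations, and the 0.8 threshold borrows the "four-fifths rule" used in employment-discrimination audits; this is not any specific audit tool.

```python
# Hypothetical bias-audit sketch: compares positive-outcome rates across
# demographic groups ("disparate impact"). The 0.8 threshold mirrors the
# "four-fifths rule" from employment-discrimination audits.

def positive_rate(decisions, groups, target_group):
    """Fraction of `target_group` members who received a positive decision."""
    members = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(members) / len(members) if members else 0.0

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of positive rates; values below ~0.8 suggest possible bias."""
    return positive_rate(decisions, groups, protected) / positive_rate(decisions, groups, reference)

# Toy data: 1 = favorable outcome (e.g., released on bail), 0 = unfavorable.
decisions = [1, 0, 0, 1, 1, 1, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.75 = 0.67 -> flags a disparity
```

A check like this is deliberately crude; it can flag a disparity, but deciding whether that disparity is unjust still requires human judgment.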


Cybercrime and Data Theft Risks


We live so much of our lives online and trust various private organizations with our most important, sensitive, and personal data.


While this data is supposed to be protected and is mostly stored for convenience’s sake, data theft can and does occur every single day.


From electronic health records (EHRs) to credit card information, we just have to hope that the organizations storing our information have invested in good cybersecurity measures.


Unfortunately, the number of data breaches that occur each year shows just how lacking cybersecurity measures are as a whole.
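
Cybersecurity is a broad discipline, but one baseline measure is encrypting sensitive records at rest, so that a stolen database is not automatically a readable one. Here is a minimal sketch using the third-party Python cryptography package (assumed installed); the example record is invented, and real deployments would also need proper key management:

```python
# Minimal sketch: symmetric encryption of a sensitive record at rest,
# using the third-party `cryptography` package (pip install cryptography).

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, load from a secrets manager, never hardcode
cipher = Fernet(key)

record = b'{"patient_id": 1042, "diagnosis": "hypertension"}'  # invented example data
token = cipher.encrypt(record)  # the ciphertext is what gets stored in the database

assert cipher.decrypt(token) == record  # only holders of the key can read it back
print("Stored ciphertext:", token[:32], b"...")
```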


In addition to data theft, cybercriminals can use hacked artificial intelligence for their own purposes and cause significant harm. Hackers who gain control of autonomous vehicles, drones, or weapons could wreak havoc without having to be physically near the machines they control.


Physical Safety


Autonomous vehicles, powered by AI, are taking to the roads. While most of them still require a driver behind the wheel, crashes have already occurred due to “driver” negligence.


However, as autonomous vehicles progress, there will be more physical safety concerns, and crashes will raise increasingly complex ethical questions.


Physical safety due to AI failures may also be an issue in infrastructure, as smart cities become more common. While it’s not possible to avoid all accidents and technology failures, we will have to decide how much risk is acceptable.


Deciding Accountability


In issues involving artificial intelligence, it can be difficult to decide who is accountable for the AI’s decisions and actions.


The obvious choice is the programmer, but that leads to lots of other questions and concerns surrounding AI itself and how much power it should have in human society.


After all, once artificial intelligence and machine learning begin to evolve their systems and become smarter, is the programmer still at fault? Should someone be overseeing the system’s work so they can monitor its process and decision-making?


These are questions that are still very much up in the air as we see all of the different ways artificial intelligence is impacting our world.
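
On the oversight question in particular, one practice that does not have to wait for regulators is logging every automated decision with enough context for a human reviewer to reconstruct it later. A minimal sketch follows, where RiskModel and its predict method are hypothetical stand-ins for whatever system is actually deployed:

```python
# Sketch of an audit trail around an automated decision system. Every
# decision is recorded with its inputs, output, and model version so a
# human can review or contest it after the fact.

import json
import time

class RiskModel:
    """Hypothetical stand-in for a deployed model."""
    version = "1.3.0"

    def predict(self, features: dict) -> str:
        # Placeholder scoring logic, purely for illustration.
        return "high" if features.get("prior_incidents", 0) > 2 else "low"

def predict_with_audit(model, features, log_path="decisions.log"):
    decision = model.predict(features)
    entry = {
        "timestamp": time.time(),
        "model_version": model.version,
        "inputs": features,
        "decision": decision,
    }
    with open(log_path, "a") as log:  # append-only record for later review
        log.write(json.dumps(entry) + "\n")
    return decision

print(predict_with_audit(RiskModel(), {"prior_incidents": 3}))  # "high", and logged
```

A log does not answer who is accountable, but it makes the question answerable: without a record of what the system decided and why, there is nothing to hold anyone accountable for.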


What Happens When AI Doesn’t Need Us?


Ultimately, some of the biggest concerns stem from the ethics of AI when it becomes as smart or smarter than a human.


That’s still a long way off, but many have concerns ranging from job elimination to the ethics of how we as humans should treat AI that is as smart as we are.


Speculation will only go so far, as we don’t yet know what that kind of world will look like, but it is important to consider the possibilities.


Ethics vs. Responsibility in AI Development


Associate Professor Shah is careful to pinpoint a language distinction that can subtly absolve developers of accountability when an AI application fails: ethical vs. responsible.


“I want to be careful about not mixing two terms that are often used around one another: ‘responsible’ and ‘ethical’ when discussing AI these days.


Responsible AI is often preferred by industry because it allows them to continue doing AI and simply ask how they could do it more responsibly.


Ethical AI, on the other hand, goes to the root of the AI and asks: should we even be doing it?”


Unsurprisingly, many developers would rather avoid the issue if they can, as it might compromise their ability to carry on with their work. However, the potential for harm is substantial, and developers need to keep ethics at the forefront of all the work they do.


Following a Code of Ethics in Artificial Intelligence


In any profession, there are certain ethical standards that one must follow to remain in good standing. Doctors and nurses have their own ethical codes for the care they give patients, for instance.


This is important because their decisions can sometimes have life or death consequences. Financial advisors have ethical codes they must follow as well since their work directly impacts the livelihood of their clients.


In Shah’s opinion, AI developers must ask themselves even harder questions:


“A lot of development in AI has been around the tech and its capability. When we consider ethics in AI, the first question to ask is: should we even be doing this? Should we use facial recognition in public places?


Should we use an automated system that recommends if we should bail or jail an offender?


A big part of the consideration is which groups or individuals are going to be harmed due to issues of bias (in our data and algorithms) and lack of transparency in a given AI system.


Often AI systems are judged by metrics like accuracy, efficiency, and scalability. These criteria ignore various ethical considerations including equity/equality, fairness, and accountability.”


AI developers are not as heavily regulated as nurses or financial advisors, and their work has not been closely examined so far.


This means that they have been able to build their tech with little oversight or consequences when things go wrong.


Despite this, discussions of ethics in artificial intelligence are increasing and it’s important for anyone who is interested in this field to follow these discussions and stick to a code of ethics.


Creating technology that is likely to have a negative impact and knowingly putting it out into the world for people to use is extremely problematic. Those who ignore ethical standards in AI development should be held accountable for the harm they cause.


Challenges Surrounding the Future of Artificial Intelligence


What does all this mean for the future of AI? According to Shah, we need to keep having these conversations as technology evolves with society.


“The first big challenge is our own understanding of what is ethical. These discussions have been going on at least since the days of Socrates. They are fluid (they keep changing with time), highly contextual (they depend on culture and social norms), and very hard to map from a conceptual understanding to a system implementation.


For example, one of the pillars of ethics in AI is fairness. There are dozens of definitions and notions of what is ‘fair.’ No matter which definition one adopts, there will be many others that differ from or even contradict it.


Our AI systems are not able to work through moral ambiguities and ethical dilemmas. For example, if a self-driving car has to decide between saving an adult and a child pedestrian in an inevitable crash, what parameters should it use?


Is it more ethical to punish one innocent person than to let four dangerous criminals go free? We as a society struggle to address such questions. It’s then even harder to have them coded into an AI system.”
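
Shah's point about competing definitions of fairness can be shown in a few lines of code. The sketch below applies two standard notions, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates among the truly qualified), to the same invented predictions:

```python
# Two fairness definitions, one set of predictions. All data is invented.

def rate(records, cond):
    """Positive-prediction rate among records satisfying `cond`."""
    selected = [r for r in records if cond(r)]
    return sum(1 for r in selected if r["pred"] == 1) / len(selected)

# Each record: group, true label (1 = qualified), model prediction.
data = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 0, "pred": 1},
    {"group": "B", "label": 0, "pred": 1},
    {"group": "B", "label": 0, "pred": 0},
]

for g in ("A", "B"):
    parity = rate(data, lambda r, g=g: r["group"] == g)                           # demographic parity
    opportunity = rate(data, lambda r, g=g: r["group"] == g and r["label"] == 1)  # equal opportunity
    print(f"group {g}: positive rate {parity:.2f}, true-positive rate {opportunity:.2f}")

# group A: positive rate 0.50, true-positive rate 1.00
# group B: positive rate 0.50, true-positive rate 0.00
```

Both groups receive positive predictions at the same rate, so demographic parity holds, yet every qualified member of group B is rejected, so equal opportunity is badly violated. The same decisions pass one fairness test and fail the other, which is exactly why “make it fair” is not yet an implementable specification.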


The difficulty with artificial intelligence is that in attempting to mimic human intelligence, it is asked to make ethical decisions. These decisions are not procedural and do not always have easy answers.


Many people are uncomfortable with the idea of machines making our ethical decisions, even as AI technology makes their lives easier.


As artificial intelligence advances, more regulatory oversight can be expected. Policy development is slow, while technology moves fast.


We have already become used to AI’s presence in our lives, a presence that will only grow. Still, we can’t ignore the ethical concerns and must face them in order to ensure that AI serves us, instead of causing more problems than it solves.