Ethics in AI: Balancing Innovation and Responsibility

Written by valentinriabtsev | Published 2023/06/28
Tech Story Tags: ai | ethics | artificial-intelligence | ai-applications | ai-trends | innovation | regulation | future-of-ai

TL;DR: Artificial Intelligence (AI) has already become an integral part of many areas of our lives. But is AI always responsible for its answers, advice, and guidance? Is its undeniable brilliance balanced with a sense of accountability for its actions? In this article, I'll find that out.

Today, we live in a world where technology is advancing at an unprecedented rate. In recent years, one of the most, if not the most, in-demand fields in science and technology has been closely connected with Artificial Intelligence (AI), which has already become an integral part of many areas of our lives. AI is a field of computer science that develops algorithms and systems that can perform tasks that require human intelligence.

But is AI always responsible for its answers, advice, and guidance? Is its undeniable brilliance balanced with a sense of accountability for its actions? In this article, I’ll find that out.

The Most Common AI Problems

It is no secret that AI is still in the early stages of its formation, and in this respect, it is bound to face various kinds of problems. Although AI is developing by leaps and bounds, there is still plenty of room for improvement. I strongly believe there are 5 major issues currently holding AI back. Once these problems are solved, AI will undoubtedly reach a level never seen before.

So, the 5 principal problems related to AI are:

  1. AI Bias;

  2. AI Hallucination;

  3. Privacy and Confidentiality;

  4. Data Usage and Transparency;

  5. Replacing People.

1. AI Bias

It's common knowledge that every AI model is trained on a specific dataset chosen by its developers. On the surface, AI bias shows up in the fact that many analytical systems based on deep learning have an unexpected tendency to draw "biased" conclusions, which can then lead to flawed decisions built on top of them.

Obviously, an AI model is going to replicate the biases of the data it has been trained on. Let’s dive into several real-world examples of AI bias to better understand this issue:

  • Facial Recognition. In June 2020, IBM, an internationally recognized tech giant, stopped selling its facial recognition software because of the racial profiling its AI product kept exhibiting. In June 2022, another major player in the tech market, Microsoft, also stopped selling its AI-based facial-analysis tools, which "performed poorly on subjects with darker skin". There is little doubt that the crux of the facial recognition problem lies in the undisclosed sets of faces, seemingly lacking racial diversity, on which IBM's and Microsoft's models were trained.

  • Bank Credit Score Calculation. In November 2019, the Apple Card, issued by Goldman Sachs, a leading global banking firm, was heavily criticized for gender bias. Even though Goldman Sachs claimed its AI-based credit scoring algorithm had been vetted for potential bias by a third party, there were numerous reports of the algorithm being misogynistic. For instance, the entrepreneur David Heinemeier Hansson tweeted that his Apple Card credit limit turned out to be 20 times higher than his wife's, despite her having a higher credit score.

  • Recruiting. In October 2018, it was revealed that Amazon's AI-based recruiting tool was gender-biased: given two professionally identical candidates, the algorithm would rate the male candidate higher than the female one.

  • Autocomplete. In February 2018, the Google Autocomplete algorithm was accused of bias against race, religion, and gender: it proposed offensive autocomplete suggestions for queries related to Jews, Muslims, and people of color.

In short, AI bias is the problem of machine learning algorithms being subject to biases and distortions that lead to unfair and inaccurate results. As described above, bias can creep in at multiple points: in the selection of the data used to train the model, in the choice of algorithms and data processing techniques, and in training the model on prejudices and stereotypes that already exist in society.
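To make the data-selection point a bit more concrete, here is a minimal sketch of one common fairness check: the gap in approval rates between two groups (often called the demographic parity difference). Everything in it, the decisions, the groups, and the 0.1 threshold, is invented for illustration and is not taken from any of the systems mentioned above.

```python
# A toy fairness check: demographic parity difference on hypothetical loan decisions.
# All data here is invented for illustration; real audits use far richer metrics.

def demographic_parity_difference(decisions, groups, positive=1):
    """Gap in approval rates between groups (assumes exactly two groups)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == positive) / len(outcomes)
    low, high = sorted(rates.values())
    return high - low, rates

# Hypothetical model outputs (1 = approved, 0 = rejected) and applicant groups.
decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(decisions, groups)
print(f"approval rates: {rates}, gap: {gap:.2f}")

# A gap well above ~0.1 would usually trigger a closer look at the training data.
if gap > 0.1:
    print("Warning: possible disparate impact; audit the training data and features.")
```

Real bias audits go much further (multiple metrics, intersectional groups, statistical significance), but even a check this simple makes the problem measurable instead of anecdotal.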

2. AI Hallucination

Another acute problem that hinders the development of AI is so-called AI hallucination: the phenomenon in which an AI model gives a confident response that does not appear to be justified by its training data.
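There is no silver bullet for hallucinations yet, but one common mitigation is to check whether each statement in a generated answer is actually supported by the source material the model was given. The snippet below is a deliberately naive, purely lexical sketch of that idea; the example texts and the 0.6 threshold are invented for illustration.

```python
# A naive "groundedness" check: flag generated sentences that share almost no
# vocabulary with the source text they are supposed to be based on.
# Real systems use retrieval and entailment models, not simple word overlap.

import re

def support_score(sentence: str, source: str) -> float:
    """Fraction of a sentence's words that also appear in the source text."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    if not words:
        return 0.0
    return len(words & source_words) / len(words)

source = "The court case concerns a software vendor accused of breaching a licensing contract."
answer = [
    "The case concerns a software vendor accused of breaching a contract.",
    "The vendor's CEO embezzled funds from a nonprofit foundation.",  # unsupported claim
]

for sentence in answer:
    score = support_score(sentence, source)
    flag = "OK" if score > 0.6 else "POSSIBLE HALLUCINATION"
    print(f"{flag:>22}  ({score:.2f})  {sentence}")
```

Production systems replace word overlap with retrieval and entailment checks, but the principle is the same: a confident answer is only as good as the evidence behind it.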

Over the last two decades, there have been hundreds of legal cases involving users who followed the routes suggested by navigation software and ended up seriously injured. In 2010, Lauren Rosenberg was hit by a car after Google Maps' walking directions instructed her to walk along a busy highway with no sidewalk. And in 2015, an article was published covering cases of drivers blindly following their GPS and driving up to a cliff's edge or getting stuck in a cherry tree.

Given that AI is currently trending all around the world, we can expect a new wave of incidents to be added to this list in the near future. In the meantime, AI is rapidly moving into such crucial areas of life as Healthcare, Transportation, the Justice System, and Warfare, and the list goes on. It's safe to say that soon we won't even pay much attention to such incidents, because they will become a routine part of every news briefing.

In fact, less than a month ago, OpenAI, the creator of ChatGPT, was sued for defamation for the first time since the groundbreaking AI chatbot was released. A journalist named Fred Riehl was doing research for one of his articles and asked ChatGPT for a summary of a Washington federal court case. In response, the AI-powered assistant produced information about Mark Walters, an American radio host, claiming that Walters had been involved in "defrauding and embezzling funds" from the Second Amendment Foundation. Needless to say, Walters neither worked for the stated nonprofit organization nor received any money from it.

Such AI hallucinations are a genuinely dangerous thing. A single response generated by an AI model can ruin the professional credibility of an innocent specialist, while a wrongly generated route can cost someone their life. Obviously, the algorithm itself can't be held responsible for the answers it gives. But its creators can and should.

3. Privacy and Confidentiality

The next matter of concern I want to address is privacy and confidentiality. Unfortunately, when it comes to implementing AI and using third-party services, even the world's largest corporations often don't know how their valuable and sensitive data is being processed and stored. This reckless use of AI tends to result in the inadvertent exposure of a company's confidential information and hard-won insights. That's exactly what happened to Samsung.

In the spring of 2023, three Samsung employees unintentionally leaked confidential information while using ChatGPT. The first pasted secret source code into the chatbot to check it for errors; the second leaked another piece of code while trying to optimize it; and the third shared a recording of an internal meeting in order to turn it into notes. Samsung promptly banned its staff from using AI-powered tools, but that doesn't remove responsibility from the company, which hadn't even considered the possibility of such leaks before this case, nor from the employees, who acted with remarkable carelessness.
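Incidents like this are one reason many companies now put a redaction layer between employees and external AI services, so that obvious secrets never leave the corporate network. The sketch below is a minimal, hypothetical version of such a filter; the patterns and the example prompt are invented and do not describe Samsung's or any vendor's actual tooling.

```python
# A minimal pre-submission redaction pass: strip obvious secrets before text
# ever leaves the company network. Patterns here are illustrative, not exhaustive.

import re

REDACTION_PATTERNS = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "IP":      re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

prompt = "Please review this config: host=10.0.3.17, key=sk-abcdef1234567890XYZ, owner=dev@example.com"
print(redact(prompt))
# -> Please review this config: host=[REDACTED_IP], key=[REDACTED_API_KEY], owner=[REDACTED_EMAIL]
```

A real deployment would add secret scanners, allow-lists of approved tools, and logging; the point is simply that the check happens before the data ever reaches a third party.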

Nevertheless, there is a more frightening issue related to one’s privacy and confidentiality that has been pointed out by one of the most prominent investors in AI, Marc Andreessen:

“Every kid is going to grow up now with a friend that’s a bot. That bot’s going to be with them their whole lives. It’s going to have a memory, it’s going to know all their private conversations, it’s going to know everything about them.”

If the above-mentioned scenario comes true, it would definitely be the worst feeling in the world for someone to learn that all of their personal data/thoughts/secrets accumulated throughout their entire life have been leaked. In the worst case, negligence towards privacy and confidentiality might lead to an uptick in mental health problems, and sadly, some people may even make choices that harm themselves.

4. Data Usage and Transparency

The fourth major issue I would like to address in this article is data usage and transparency. It's no coincidence that some of the world's leading LLM (Large Language Model) providers, including OpenAI, tend not to fully disclose their original datasets, thereby obscuring how the necessary training data was collected.

Nowadays, it's widely believed that most of the datasets used to train the most popular AI models were collected without the appropriate permissions from the data owners. For example, in February 2023, Getty Images, the owner of 477 million images, sued Stability AI, the company behind an AI-based art generator, claiming that the latter had illegally used 12 million copyrighted pictures belonging to Getty Images. One of the examples Getty Images attached to the case is quite illustrative: not only are the two images alike as two peas in a pod, but the generated picture also contains the outline of the Getty Images watermark. In this case, the copyright infringement is more than obvious.

If you think such violations are totally unacceptable, I ought to warn you that this is just the tip of the iceberg. The harsher reality is that companies use manual labor to label the required data. In Kenya, an external data-labeling provider working for OpenAI hired local workers to label toxic content for ChatGPT for only around $2 per hour. Even if these people worked 10 hours a day for a month with no days off, they would receive only $600 a month, while literally sacrificing their mental and physical health. No wonder some of the workers interviewed described the experience as "torture".

5. Replacing People

The last, but certainly not least, concern about AI is its great potential to replace humans. Today, many people fear that AI will automate millions of jobs and put us, humans, out of work; in fact, it's one of the effects of AI adoption we talk about most. This fear has been fueled by the World Economic Forum, which predicted that AI will displace about 85 million jobs by 2025. But the World Economic Forum also forecast that AI will create 97 million jobs in areas such as Data Processing, ML Engineering, Cybersecurity, and more, a net gain of roughly 12 million. In this regard, some of us can breathe a sigh of relief.

Another study, conducted at Princeton University, outlined the key professions most exposed to AI, and to LLMs in particular. While having telemarketers at the top of the list won't sound dramatic to many, they are followed by teachers. This raises the burning, open question of how knowledge workers around the world will be affected by AI.

To delve deeper into this issue, I recommend reading a recent article by Ethan Ilzetzki and Suryaansh Jain, in which they refer to a fundamental study carried out by Daron Acemoglu and Pascual Restrepo. The main point of these studies is that we can stick to a certain framework when considering the impact AI has on humans (a toy illustration of how its pieces combine follows the list below). This framework consists of three components:

  • a displacement effect;
  • a productivity effect;
  • a reinstatement effect.
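A back-of-the-envelope way to read this framework is that the net change in labor demand is the sum of the three effects: tasks lost to machines (displacement), demand gained because output gets cheaper (productivity), and brand-new tasks created around the technology (reinstatement). The figures below are purely hypothetical and only show how the components combine.

```python
# Toy illustration of the three-effect framework. All figures are hypothetical.

displacement_effect  = -8.0   # % of labor demand lost to tasks now done by machines
productivity_effect  = +5.0   # % gained because cheaper output raises overall demand
reinstatement_effect = +4.5   # % gained from brand-new tasks created around the technology

net_change = displacement_effect + productivity_effect + reinstatement_effect
print(f"Net change in labor demand: {net_change:+.1f}%")  # -> +1.5%
```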

Solutions to the Problems

So far in this article, I have only written about the problems; now it's time to propose several solutions that could help AI reach the next stage of its development.

First and foremost, industry-wide regulation should be adopted to set the basic principles and standards for building and developing AI models, much as was done for the regulation of personal data after the Facebook and Cambridge Analytica scandal. I'm glad that the first steps in this direction are already being taken: the AI "founding fathers" (Geoffrey Hinton, Yoshua Bengio, Sam Altman, Ilya Sutskever, etc.) recently signed a joint statement calling for AI regulation.

I strongly believe that these regulations should prioritize transparency, accountability, and fairness, and should describe how data is collected as well as how models are trained, used, and managed. However, companies shouldn't wait for regulation to come into force. Instead, they need to be proactive about implementing internal AI testing and monitoring processes to minimize AI bias.
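As a hedged sketch of what such internal testing can look like, here is a simple automated gate that fails a build whenever a fairness metric drifts past a team-chosen threshold. The predictions, groups, and 0.10 tolerance are placeholders, not a description of any real company's pipeline.

```python
# A minimal pre-release fairness gate, written as a plain assertion so it can run
# in any CI job. Everything here (groups, predictions, threshold) is hypothetical.

MAX_APPROVAL_GAP = 0.10  # team-chosen tolerance for the approval-rate gap

def approval_rate(predictions):
    return sum(predictions) / len(predictions)

def test_fairness_gate():
    # In a real pipeline these would come from a held-out audit dataset.
    predictions_group_a = [1, 1, 0, 1, 1, 0, 1, 1]
    predictions_group_b = [1, 1, 1, 0, 1, 1, 0, 1]

    gap = abs(approval_rate(predictions_group_a) - approval_rate(predictions_group_b))
    assert gap <= MAX_APPROVAL_GAP, f"Approval-rate gap {gap:.2f} exceeds {MAX_APPROVAL_GAP}"

if __name__ == "__main__":
    test_fairness_gate()
    print("Fairness gate passed.")
```

Run on every release candidate, a gate like this turns "we checked for bias" from a slide-deck claim into something the pipeline actually enforces.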

Another important aspect of the possible solution is simply accepting that the world around us is inevitably evolving, and so is the role of manual labor. Back in 1973, Wassily Leontief, a Nobel Prize laureate and NYU professor, already foresaw that "more and more workers will be replaced by machines". In this context, it's crucial for us to understand how this displacement plays out and how it can affect our lives in general.

I am personally convinced that the displacement should be managed sustainably, taking into account the long-term impact on society. Corporations, businesses, and governments, for their part, should provide all the necessary support measures, alongside employment and training services, to help the displaced workforce find new, meaningful, and well-paid jobs.

Conclusion

Thus, a vital part of both using and building AI models is treating these processes with complete responsibility. You, as an ordinary user, should always remember not to trust AI with your personal or sensitive data; and AI model providers should have the courage to assume full responsibility for the responses and advice of their brainchildren. We are still not fully aware of all the risks associated with such a rapid introduction of AI into our lives. I don't want to sound conservative by any means, but I do believe that before we entrust AI with our lives, we have to find that proverbial balance between innovation and responsibility.


Written by valentinriabtsev | co-founder, CPO at Wale.AI (London, UK), 10+ years in IT innovation at Fortune 500 companies
Published by HackerNoon on 2023/06/28