
AI Is Not the Concern - It’s AI Developers You Should Be Worried About

by Funso Richard, January 27th, 2023

Too Long; Didn't Read

AI has the potential to revolutionize the way we live, work and socialize. But as we continue to develop and rely on AI systems, it's crucial that we also address the potential for bias. We must recognize that the real concern is not AI itself, but the bias of those who develop and train it. By being aware of this issue and taking steps to mitigate it, we can ensure that the future of AI is one that benefits everyone.

It’s no longer news that ChatGPT and other chatbots have reshaped how we understand and interact with artificial intelligence (AI) tools. Many people encountered AI for the first time when the internet went wild over the release of ChatGPT last November.


Without discounting the excitement around OpenAI’s chatbot, we are exposed to AI tools and operations daily. Take, for instance, Google Search and Google Maps, which rely on AI to process queries and churn out responses in seconds.

AI - the Good, the Bad, and the Inevitable

The possibilities of what can be done with ChatGPT and other AI tools have generated unprecedented exhilaration. These tools have produced content and technical documents once reserved exclusively for professionals.


ChatGPT has been used to write code, develop malware, generate ideas, translate languages, and much more. In 2022, use of Midjourney grew by over 1,000%.


The capabilities of these tools also stoke fears of doomsday scenarios. There are concerns about mass unemployment, AI-powered plagiarism and academic fraud, copyright litigation, disinformation, fake research abstracts, and the democratization of cybercrime.


A lawsuit filed on January 13, 2023, accused Stability AI, Midjourney, and DeviantArt of violating “the rights of millions of artists”.


AI is the future. We must learn to embrace the good and implement measures to minimize the impact of the bad, because there is no doubt that AI will continue to disrupt modern society.


A recent Statista survey found that 29% of Gen Z, 28% of Gen X, and 27% of millennials have used generative AI tools.


Global AI market revenue is expected to grow from $136 billion in 2022 to over $1.5 trillion by 2030. According to IBM, 35% of companies used AI in their business, 42% were exploring AI, and 66% were either currently executing or planning to apply AI to address sustainability goals.


AI benefits included work automation (30%), cost savings (54%), IT performance (53%), and better customer experience (48%).


Photo by Jemastock - stock.adobe.com


The Making of AI

Given the many wonders of ChatGPT and other tools, it is easy to assume that AI tools are a fusion of sorcery and science. Fortunately, they are not.


AI consists of data-driven mathematical models designed to perform tasks that ordinarily require human intelligence, such as recognizing patterns, learning from experience, solving problems, and making effective decisions.
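
As a minimal illustration of what such a model is, the sketch below (in Python with scikit-learn, on made-up toy data) trains a classifier that learns a decision rule from examples rather than from hand-written rules:

```python
# A toy "data-driven mathematical model": it learns a pattern from
# examples instead of being programmed with explicit rules.
# The data below is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [hours studied, hours slept]; labels: passed exam (1) or not (0)
X = [[8, 7], [6, 8], [2, 4], [1, 5], [7, 6], [3, 3]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[5, 6]]))  # predicts an outcome for an unseen student
```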


The development of AI is driven by advances in fields such as computer science, neuroscience, and psychology. It is based on the idea that human intelligence can be modeled and simulated by a machine.


Some of the key technologies and techniques used in AI include machine learning, natural language processing, and robotics.

Impact of Human Bias on AI Development

The notion of “garbage in, garbage out” is very much true of AI. As we continue to develop and rely more and more on AI systems, we must also be aware of the potential for bias in these systems.


While it's easy to point the finger at AI itself as the problem, the truth is that the real concern is the human bias of those who develop and train these systems. AI systems behave exactly the way their developers build and train them to behave.


In one survey, 69% of AI researchers said that AI and its development deserve a greater focus on safety.


Human bias in AI can manifest in a variety of ways, from the data used to train AI to the decision-making processes of the system itself.


For example, if an AI system is trained on a dataset that is disproportionately made up of one particular group of people, it may not be able to accurately understand and make decisions for other groups.


Similarly, if an AI system is designed to make decisions based on certain assumptions or stereotypes, it may perpetuate harmful biases in society.


One of the biggest issues with AI is the fact that the data it is trained on can reflect the biases of the people who collect and curate that data.


For instance, if a dataset used to train a facial recognition system is mostly composed of images of light-skinned people, the system will likely perform poorly when trying to recognize the faces of people with darker skin tones.


This is a form of bias that can have real-world consequences, such as in the case of people of color being falsely arrested due to a faulty facial recognition match.
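
Below is a minimal sketch of the kind of per-group accuracy audit that exposes this failure mode; the groups, labels, and predictions are hypothetical stand-ins for a real face-recognition evaluation set:

```python
# Evaluate recognition accuracy separately for each skin-tone group.
# Records are hypothetical: (group, ground truth, model prediction).
from collections import defaultdict

records = [
    ("lighter", True, True), ("lighter", False, False), ("lighter", True, True),
    ("darker", True, False), ("darker", False, True), ("darker", True, True),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    hits[group] += int(truth == prediction)

for group in totals:
    print(f"{group}: accuracy = {hits[group] / totals[group]:.2f}")
```

In this toy log, accuracy is perfect for one group and roughly a third for the other, which is exactly the disparity an audit like this is meant to surface before deployment.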


But it's not just the data that can be biased; the people creating and training these systems can also introduce bias through their own unconscious prejudices.


For example, a study found that language models like GPT-3, trained on large datasets containing racist and sexist content, tend to produce gender-stereotyped and racist text. This can perpetuate harmful stereotypes and limit the potential of AI systems.


Photo by Monopoly919 - stock.adobe.com


Socially Responsible AI Tools

With 74% of organizations having taken no steps to ensure their AI is trustworthy and responsible, making socially responsible AI tools is an urgent, collective responsibility. It begins with identifying the potential for bias and actively working to mitigate it.


This means diversifying the team of people working on AI and ensuring that a wide range of perspectives and experiences are represented.


It is important to ensure that the data used to train AI is diverse and representative of the population it will be serving. This involves carefully selecting and curating the data to ensure that it does not perpetuate existing biases or stereotypes.


Additionally, it is important to consider the potential impact of the data on different groups of people and obtain input from diverse perspectives to ensure that the data is inclusive and fair.
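
As a sketch of what such a representation check could look like (the group labels and population shares below are illustrative assumptions, not real census figures):

```python
# Compare each group's share of the training data against its share of
# the population the system will serve; flag under-represented groups.
# Groups, data, and reference shares are illustrative assumptions.
from collections import Counter

training_groups = ["A", "A", "A", "A", "B", "A", "A", "C", "A", "B"]
population_share = {"A": 0.5, "B": 0.3, "C": 0.2}

counts = Counter(training_groups)
n = len(training_groups)
for group, target in population_share.items():
    actual = counts[group] / n
    status = "UNDER-REPRESENTED" if actual < 0.8 * target else "ok"
    print(f"{group}: dataset {actual:.0%} vs population {target:.0%} -> {status}")
```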


AI systems should be designed to be transparent and explainable. This means that the decision-making processes of AI systems should be clear and easily understood by humans, so that any potential biases or issues can be identified and addressed.
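
One practical route to explainability is to favor models whose decisions decompose into parts a human can read. The sketch below, with invented feature names, weights, and applicant values, shows how a linear model's score breaks down into per-feature contributions a reviewer can inspect:

```python
# A transparent decision: each feature's contribution to the score is
# visible and can be explained to the person affected.
# Feature names, weights, and applicant values are invented for illustration.
feature_names = ["income", "debt_ratio", "years_employed"]
weights = [0.6, -1.2, 0.4]       # learned coefficients (assumed)
applicant = [0.8, 0.5, 0.3]      # normalized feature values (assumed)

contributions = {f: w * x for f, w, x in zip(feature_names, weights, applicant)}
score = sum(contributions.values())
print(f"decision score: {score:+.2f}")
for name, c in contributions.items():
    print(f"  {name}: {c:+.2f}")  # each feature's share of the decision
```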


It is essential to regularly evaluate and monitor the performance of AI systems to ensure that they are functioning as intended and not perpetuating harmful biases. This includes regularly analyzing the data used to train AI systems, as well as the decisions and actions of AI models.
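
A sketch of one such routine check: computing positive-outcome rates per group over a batch of recent decisions (a demographic-parity check) and flagging a large gap. The decision log and the alert threshold are illustrative assumptions:

```python
# Monitor recent model decisions for a demographic-parity gap:
# compare positive-outcome rates across groups and alert on a large gap.
# The decision log and the 20% threshold are illustrative assumptions.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = {}
for group in {g for g, _ in decisions}:
    outcomes = [d for g, d in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:
    print(f"WARNING: positive-outcome gap of {gap:.0%} across groups")
```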


Governments should enact laws and regulations that enforce socially responsible AI development and use without stifling innovation.


Photo by Tierney - stock.adobe.com


Maximizing AI for the Good of Society

AI has the potential to revolutionize the way we live, work, and socialize. But as we continue to develop and rely on AI systems, it's crucial that we also address the potential for bias. We must recognize that the real concern is not AI itself, but the bias of those who develop and train it.


By being aware of this issue and taking steps to mitigate it, we can ensure that the future of AI is one that benefits everyone.


The goal should be the development of AI for the good of society. Such a goal will require collective responsibility from governments, developers, and users. Governments should enforce regulations that ensure AI systems are developed in a socially responsible way.


Developers should prevent bias by adopting diversity, transparency, and accountability. They should regularly evaluate and monitor the performance of AI systems to prevent unintended bias and abuse.


The public should also understand that they are responsible for how they use AI. Socially irresponsible use of AI should be discouraged by all.