Following the release and mass adoption of ChatGPT, one way many people have tried to discount its disruptive power is the argument that artificial intelligence (AI) models do not have the ability to process emotion.
It is quite difficult to accept that some poems produced by ChatGPT or images created by Midjourney lack artistic depth or a creative soul when such works dazzle professional artists.
If recent developments are any indication, that assumption deserves a second look.
Generative conversational AI models are designed to recognize, interpret, and respond appropriately to human emotions.
The use of reinforcement learning from human feedback (RLHF) helps AI systems process context, applying behaviors learned from human interactions to adapt to new or different situations.
The ability to accept feedback from humans and improve responses across situations conveys emotionally intelligent behavior.
Emotional intelligence is becoming popular in AI as more AI systems are designed to interact with humans. Incorporating emotional intelligence into AI helps developers create systems that are more human-like and can better understand and respond to human needs and emotions.
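To make the RLHF idea concrete, reward models are typically trained on pairwise human preferences. The sketch below is a minimal illustration in plain Python; `toy_reward` is a hypothetical stand-in for a learned reward model, and the loss shown is the standard pairwise (Bradley-Terry) formulation.

```python
import math

def toy_reward(response: str) -> float:
    # Hypothetical stand-in for a learned reward model; a real one
    # is a neural network trained on human preference rankings.
    return 1.0 if "sorry" in response.lower() else 0.0

def preference_loss(chosen: str, rejected: str) -> float:
    # Pairwise (Bradley-Terry) loss used to train RLHF reward models:
    # -log(sigmoid(r_chosen - r_rejected)). It is small when the
    # reward model ranks the human-preferred response higher.
    diff = toy_reward(chosen) - toy_reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Ranking that agrees with the (toy) human preference -> low loss
loss_agree = preference_loss("Sorry, let me rephrase that.", "No. You are wrong.")
# Ranking that disagrees -> high loss, pushing the model to adjust
loss_disagree = preference_loss("No. You are wrong.", "Sorry, let me rephrase that.")
```

A loss that falls when the model agrees with human rankings is what lets repeated human feedback steer the system toward responses people perceive as appropriate, the behavior the article describes as emotionally intelligent.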
Koko, an online emotional support chat app, used GPT-3 to help compose supportive messages for thousands of its users.
The successful outcome further demonstrated that AI models can process emotions intelligently without the humans on the receiving end noticing the difference.
While there are ethical and privacy concerns about experiments like Koko's, they show how convincingly AI can simulate empathy.
Riding on ChatGPT’s wide acceptance and great success, Microsoft launched its own AI-powered chatbot built into the Bing search engine.
Also known as Bing Chat, the chatbot integrates OpenAI’s generative conversational AI model with Microsoft’s proprietary model to create a “collection of capabilities and techniques” known as the Prometheus model.
The model has been touted as the “new, next-generation OpenAI large language model that is more powerful than ChatGPT”. ChatGPT’s training data only extends to 2021, and the tool is not connected to the internet.
However, Bing Chat takes conversational interaction to the next level by pulling data from the internet to augment its response to a prompt.
The ability to infuse current information and references into its responses is not the only thing that sets Microsoft’s chatbot apart. It has also become notorious for holding very strong views and acting unpredictably, even aggressively.
Where it did not agree with a user, the chatbot has reportedly argued back, insisted it was right, and even berated the user.
In contrast, ChatGPT responded to the same query without giving an “attitude”.
The exchange is an indication that AI can convey desires and needs that are typical human emotions.
A rogue AI is an AI model that behaves in ways that deviate from how it was trained. When an AI system behaves unpredictably, it poses a risk to its users, and can potentially cause harm.
There are several reasons why an AI system can behave erratically, especially if it is confronted by an unforeseen circumstance.
An AI system can go rogue as a result of inadequate training data, flawed algorithms, and biased data.
A lack of transparency in how an AI system makes decisions, and the absence of accountability for its actions and decisions, are further factors that can lead to AI models going rogue.
Threat actors who successfully hack an AI system can cause it to behave in an unintended way by injecting malware or poisoning the training data.
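Training-data poisoning is easiest to see on a deliberately tiny model. The sketch below uses a toy keyword-count spam filter (all names and data hypothetical) to show how an attacker who can inject mislabeled training records flips the model's verdict on a target input.

```python
from collections import Counter

def train(dataset):
    # Count, per token, how often it appears in "allow" vs "block" examples.
    counts = {"allow": Counter(), "block": Counter()}
    for text, label in dataset:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    # Score each label by summed token counts; higher score wins.
    score = {"allow": 0, "block": 0}
    for token in text.lower().split():
        for label in score:
            score[label] += model[label][token]
    return max(score, key=score.get)

clean = [
    ("free prize click now", "block"),
    ("claim your free prize", "block"),
    ("meeting agenda attached", "allow"),
    ("quarterly report attached", "allow"),
]
# An attacker who can inject training records poisons the data by
# repeatedly submitting spam-like text with the wrong label.
poisoned = clean + [("free prize click now", "allow")] * 3

verdict_clean = classify(train(clean), "free prize click")      # "block"
verdict_poisoned = classify(train(poisoned), "free prize click")  # "allow"
```

A handful of mislabeled records is enough to invert the toy model's behavior, which is why controlling who can write to training data matters as much as securing the deployed system.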
Google cited “reputational risk” as the reason for its cautious approach to releasing conversational AI products.
However, under pressure from the disruptive ChatGPT, Google rushed out Bard, and a wrong response during the chatbot’s first public demo wiped roughly $100 billion off the tech giant’s market value.
In 2022, Meta released the Galactica demo, only to pull it down within days after it generated confident-sounding misinformation.
In 2016, Microsoft recalled its AI chatbot, Tay, within a day of its launch because it was spewing racist and offensive messages it had learned from interactions with users.
However, despite Bing Chat’s threatening behavior, Microsoft has ignored calls to withdraw the chatbot, opting instead to limit the length of chat sessions.
There are ethical and legal concerns about how AI systems are developed and used.
Though Koko’s use of a chatbot raised ethical rather than legal questions, there are instances, such as discriminatory practices and human rights violations, where AI-powered technologies have led to litigation.
However, it is different when AI goes rogue and threatens harm, like in the case of Bing Chat. Should there be legal implications? And if there are, who is getting sued? It is challenging to determine culpability, accountability, and liability when an AI system causes harm or damage.
There are copyright infringement lawsuits already pending against AI companies over the data used to train their models.
If the ongoing litigation against AI laboratories and companies is taken as precedent, it is safe to assume that developers of rogue AI may be held liable for how their AI systems misbehave.
For those organizations still wondering whether they will be held liable if their AI technology goes rogue, the EU’s Artificial Intelligence Act has an answer: the draft law sets out obligations for providers of AI systems and steep penalties for non-compliance.
The responsibility for how AI systems behave lies with the developers and the businesses that deploy them.
Much as with data protection, businesses and laboratories must implement appropriate controls to mitigate unauthorized manipulation of AI data and code.
Preventing rogue AI requires a combination of technical and non-technical measures. These include robust testing, transparency, ethical design, and governance.
Adequate cybersecurity measures, such as access control, vulnerability management, regular updates, data protection, and effective data management, are crucial to prevent unauthorized access to AI systems.
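One concrete data-protection control is verifying the integrity of model artifacts before loading them, so that any unauthorized change to weights or training data is detected. A minimal sketch using SHA-256 digests follows; the byte strings are hypothetical stand-ins for real model files.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # Digest of an artifact; in practice this would stream a file from disk.
    return hashlib.sha256(data).hexdigest()

# Record the digest of the model weights at release time...
released_weights = b"layer1:0.42 layer2:-0.17"
trusted_digest = sha256_of(released_weights)

# ...and verify it before every load. Any tampering changes the digest.
def verify(artifact: bytes, expected: str) -> bool:
    return sha256_of(artifact) == expected

intact = verify(released_weights, trusted_digest)          # True
tampered = released_weights.replace(b"0.42", b"9.99")
still_valid = verify(tampered, trusted_digest)             # False
```

In production, the trusted digests themselves must be stored and distributed under access control (e.g., signed), since an attacker who can rewrite both artifact and digest defeats the check.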
Human oversight and collaboration with different stakeholders such as AI developers, researchers, auditors, legal practitioners, and policymakers can help to guarantee that AI models are developed reliably and responsibly.
In just three months, ChatGPT has transformed our world. There is great potential for generative conversational bots and other AI systems to disrupt businesses, enhance customer experience, transform society, and create innovative opportunities.
However, AI systems can go rogue if they are not developed and deployed securely and responsibly. Rogue AI can pose serious risks to users and society. AI developers and businesses using AI systems can become liable if their AI systems cause harm or damage.
Ensuring that AI systems behave as intended involves the collective responsibility of AI developers, users, and policymakers.
Appropriate technical, non-technical, and regulatory measures must be in place to verify that AI is developed and deployed in a way that is safe, secure, and beneficial to society.