When AI Goes Rogue - The Curious Case of Microsoft's Bing Chat

by Funso Richard · 6 min read · March 2nd, 2023

Too Long; Didn't Read

In just three months, ChatGPT has transformed our world. Generative conversational bots and other AI systems have great potential to disrupt businesses, enhance customer experience, transform society, and create innovative opportunities. However, AI systems can go rogue if they are not developed and deployed securely and responsibly. Rogue AI can pose serious risks to users and society, and AI developers and businesses using AI systems can become liable if those systems cause harm or damage. Ensuring that AI systems behave as intended is the collective responsibility of AI developers, users, and policymakers. Appropriate technical, non-technical, and regulatory measures must be in place to verify that AI is developed and deployed in a way that is safe, secure, and beneficial to society.
About Author

Funso Richard (@funsor) is an Information Security Officer and GRC thought leader. He writes on business risk, cybersecurity strategy, and governance.
