
When AI Goes Rogue - The Curious Case of Microsoft's Bing Chat

March 2nd, 2023 · 6 min read
by Funso Richard (@funsor)

Too Long; Didn't Read

In just three months, ChatGPT has transformed our world. Generative conversational bots and other AI systems have great potential to disrupt businesses, enhance customer experience, transform society, and create innovative opportunities. However, AI systems can go rogue if they are not developed and deployed securely and responsibly. Rogue AI can pose serious risks to users and to society, and AI developers and businesses that use AI systems can become liable if those systems cause harm or damage. Ensuring that AI systems behave as intended is the collective responsibility of AI developers, users, and policymakers. Appropriate technical, non-technical, and regulatory measures must be in place to verify that AI is developed and deployed in a way that is safe, secure, and beneficial to society.