When AI Goes Rogue - The Curious Case of Microsoft's Bing Chat
Too Long; Didn't Read
In just three months, ChatGPT has transformed our world. Generative conversational bots and other AI systems have great potential to disrupt businesses, enhance customer experience, transform society, and create innovative opportunities.
However, AI systems can go rogue if they are not developed and deployed securely and responsibly. Rogue AI can pose serious risks to users and to society, and AI developers and the businesses that deploy AI systems can become liable if those systems cause harm or damage.
Ensuring that AI systems behave as intended is the collective responsibility of AI developers, users, and policymakers. Appropriate technical, non-technical, and regulatory measures must be in place to verify that AI is developed and deployed in a way that is safe, secure, and beneficial to society.