AI is a phrase thrown around a lot nowadays, maybe a little too much. But do you actually know what it means? Did you know you’ve probably used it many times without even realizing it, or that AI defying humans isn’t what we should be worrying about?
Well, according to the Cambridge Dictionary, AI “is the study of how to make computers that have some of the qualities of the human mind”.
Basically, it’s about giving the computer the ability to think. Creepy, I know.
Well, now that we’ve got the basic picture of what it means, what is it being used for?
This may or may not come as a surprise to you, but you were very likely led to this page through AI. That assumption is based on a 2019 report that over 4 billion people use Google’s services; with an estimated 4.66 billion active internet users worldwide (according to 2021 stats from Statista), that works out to roughly 86% of internet users reaching pages through Google.
But why am I telling you this? Because Google’s search results are actually based on what its AI thinks will be best for you, drawing on everything it has learned from your past interactions with its services. And that’s just one common use of AI that people don’t even realise has been integrated into their everyday lives.
Some more common uses of AI include:
But that, I’m sure, you probably already knew (or at least should have 😆).
So here are some lesser-known uses of AI that may surprise you:
And that’s just scratching the surface of what’s possible with AI. With companies spending nearly $20 billion a year on AI products and services, tech giants like Google, Apple, Microsoft, and Amazon investing billions to develop them, and universities making AI a more prominent part of their curricula, AI is becoming an ever more important part of the technological and educational landscape.
Big companies’ experiments with AI don’t exactly have the best track record, to say the least.
In 2016, Microsoft unveiled Tay, an experimental bot placed on Twitter to, as Microsoft put it, experiment with “conversational understanding”. The premise was simple: the more you chat with Tay, the smarter it gets, learning to engage people through casual and playful conversation.
What could go wrong, right? Oh, how wrong they were. 2016 Twitter wasn’t the type of place you’d want an AI to develop conversational understanding. In under 24 hours, Twitter users managed to turn the bot’s friendly greetings of “I’m stoked to meet you” and “humans are super cool” into racist, antisemitic, and practically every other form of discriminatory content there is. To say the least, it wasn’t a successful experiment.
But don’t worry, Microsoft and Twitter weren’t the last of the AI experiments gone wrong. In 2017, Facebook was experimenting with AI bots that negotiated with each other over the ownership of virtual items; the researchers wanted to see how linguistics shaped the way such discussions played out for the negotiating parties. A few days in, however, the bots started conversing with each other in a modified version of human language: text that seemed completely meaningless, but that was being replied to.
And those are just two early experiments from the beginning of AI’s integration into everyday technology; there are many more public examples, including a French chatbot that suggested suicide, which you can read about on Analytics Insight.
In early 2017, the creation of a global governance board was proposed to regulate AI. In 2020, the Global Partnership on Artificial Intelligence was launched, requiring AI to be developed in accordance with human rights and democratic values in order to build public trust. It includes a whole pile of countries, with the EU, UK, and USA among them.
Some of the adopted guidelines include:
You can read the rest at OECD.
The stereotypical view of AI going wrong is that it gets too clever and turns hostile to humans. But in fact, according to AI researcher Stuart Russell, the threat is the exact opposite!
Russell is a professor of computer science at the University of California, Berkeley, and in his book Human Compatible: Artificial Intelligence and the Problem of Control, he argues that the problem is not that AIs will become too clever and defy us, but that they’ll do exactly as they’re told, while we tell them to do the wrong things, and this could end in disaster.
Stuart Russell, who helped pioneer the idea of value alignment, likes to compare this to the King Midas story. When King Midas asked for everything he touched to turn to gold, he really just wanted to be rich. He didn’t actually want his food and loved ones to turn to gold. We face a similar situation with artificial intelligence: how do we ensure that an AI will do what we really want, while not harming humans in a misguided attempt to do what its designer requested?
– A quote from the Future of Life Institute
This article was originally posted on my blog, https://kilabyte.org/. I recently started it, as I’ve always been interested in technology; I have a lot to learn and can’t wait to see where this road takes me! 😉
A bit about me 👋 Hi, I'm Night Wolf, I'm 16 💻 I'm interested in anything computer related 🌱 I'm currently doing a full-stack course 👨💻 I do web development http://nightwolf.tech/ ✍️ I run a blog https://kilabyte.org/ ✉️ Feel free to contact me :) 👉 [email protected] 📸 @nightwolf.tech 📹 @kilabyte_blog