“The very concept of intelligence is like a stage magician’s trick. Like the concept of ‘the unexplored regions of Africa’. It disappears as soon as we discover it.”
— Marvin Minsky (1927–2016), mathematician and AI pioneer.
John McCarthy, widely regarded as the father of artificial intelligence, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs”.
Basically, artificial intelligence (AI) is the ability of a machine or a computer program to think and learn. The concept of AI is based on the idea of building machines capable of thinking, acting, and learning like humans.
AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while solving problems, and then using the outcomes of this study as the basis for developing intelligent software and systems.
AI is already prominent in fields such as e-commerce, customer service, financial services, healthcare, and transportation.
Chances are, if you’ve seen the term AI balloon in popularity over the last few years, you’ve also heard “machine learning” as a buzzword.
Many people ask, “Is AI the same as machine learning?”
Not really. Although the two terms are often used interchangeably, they are not the same.
Artificial intelligence is a broader concept, while machine learning is the most common application of AI.
Here’s what it means: machine learning systems use large data sets to “learn” patterns, then apply what they’ve learned to recognize new, unseen examples.
AI and machine learning relate much like rectangles and squares. Just as all squares are rectangles but not all rectangles are squares, machine learning is one application of AI, while AI is a broader concept with other uses, too.
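To make the “learn patterns from data, then label the unknown” idea concrete, here is a minimal sketch in Python using a toy nearest-neighbour classifier. The data values and labels are entirely made up for illustration; this is not any particular production system, just the simplest possible instance of learning from examples.

```python
from math import dist

# Toy training data: (height_cm, weight_kg) -> label.
# These values are invented purely to illustrate the idea.
training_data = [
    ((150, 50), "small"),
    ((155, 55), "small"),
    ((180, 85), "large"),
    ((185, 90), "large"),
]

def predict(point):
    """Label an unseen point by copying the label of its
    nearest training example (1-nearest-neighbour)."""
    nearest = min(training_data, key=lambda item: dist(item[0], point))
    return nearest[1]

print(predict((152, 53)))  # near the "small" examples -> small
print(predict((183, 88)))  # near the "large" examples -> large
```

The “learning” here is nothing more than storing labeled examples; the “recognition” is comparing a new point against them. Real machine-learning systems replace this lookup with statistical models fitted to millions of examples, but the learn-then-generalize loop is the same.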
The computers haven’t taken over the world, but artificial intelligence is already part of our everyday lives.
Although most of us haven’t taken a ride in a self-driving car, we benefit from AI through apps like Uber and Lyft that use algorithms to connect drivers to passengers.
We don’t have robotic assistants yet, but we use AI-assisted software like Siri and Google Now.
AI is also used in e-commerce, customer service, and financial services.
IBM’s cognitive computing system, Watson, is best known as a Jeopardy! winner, but Watson is also used for day-to-day data analytics in marketing and for research and diagnostic assistance to physicians at hospitals.
Google’s AI made news by beating the world Go champion, but the same technology is also being used to answer email in Inbox, identify photos in Google Photos, and schedule appointments in G Suite, formerly Google Apps for Work.
Experts predict that within the next decade AI will outperform humans in relatively simple tasks such as translating languages, writing school essays, and driving trucks. More complicated tasks like writing a bestselling book or working as a surgeon, however, will take machines much more time to learn. AI is expected to master these two skills by 2049 and 2053, respectively.
It is obviously too soon to talk about AI-powered creatures like those from Westworld or Ex Machina stealing our jobs or, worse yet, rising against humanity, but we are certainly moving in that direction. Meanwhile, top tech professionals and scientists are getting increasingly concerned about our future and encourage further research on the potential impact of AI.
AI has an intrinsic potential for bias, rooted in the data used to train its algorithms. For example, Google Photos came under fire in 2015 for tagging African American users as gorillas, and in 2017 the developers of FaceApp “beautified” faces by lightening skin tones. That’s why it’s vital for AI companies to examine the data they use and make sure it is engineered to reduce bias.
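One basic form the “examine the data” step can take is auditing how groups are represented before training. The sketch below uses made-up group labels and an arbitrary threshold; it is only an illustration of the kind of check implied, not any company’s actual pipeline.

```python
from collections import Counter

# Hypothetical labels saying which demographic group each training
# example belongs to (invented numbers, for illustration only).
sample_groups = ["group_a"] * 900 + ["group_b"] * 100

counts = Counter(sample_groups)
total = sum(counts.values())

for group, n in sorted(counts.items()):
    share = n / total
    print(f"{group}: {n} examples ({share:.0%})")
    if share < 0.2:  # arbitrary threshold chosen for this sketch
        print(f"  warning: {group} may be under-represented")
```

A skewed count like this one (90% versus 10%) is exactly the kind of imbalance that lets a model perform well on the majority group while failing on the minority group, which is how errors like the Google Photos incident can arise.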
AI is on the rise in industries across the board. In fact, 30 percent of businesses are predicted to incorporate it before 2019, up from just 13 percent the year before, according to Spiceworks, an information technology company. Google, IBM, Amazon, Microsoft, Apple, and many more companies are making AI a priority.
Tesla CEO Elon Musk, who incorporates AI into his company’s autonomous cars, fears for what the technology could mean for the future of humanity. “If you’re not concerned about AI safety, you should be,” he tweeted in August 2017. “Vastly more risk than North Korea.” He also encouraged the government to regulate the technology before it becomes too advanced. “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated,” he wrote on Twitter. “AI should be too.”
Mark Zuckerberg, on the other hand, seems to disagree wholeheartedly. The Facebook CEO hosted a 2017 Facebook live in which he called his views on AI “really optimistic” and mentioned that those who “drum up doomsday scenarios” about AI are “negative” and, in some ways, “really irresponsible.” People naturally pointed to Elon Musk, who later tweeted, “I’ve talked to Mark about this. His understanding of the subject is limited.”
Key figures at Amazon lean more towards Zuckerberg’s view of the subject, saying the benefits of AI outweigh the risks. “We believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future,” wrote Dr. Matt Wood, general manager of AI at AWS. “The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm.” The company recently sold its Rekognition facial recognition software — which identifies and tracks faces in real time, including those of “people of interest” — to police departments and government agencies. Critics argued it could easily be misused and harm marginalized people.
Sundar Pichai, CEO of Google, recently released new guidelines surrounding the company’s future with AI. His views are more in line with regulation, even if it’s self-regulation, of the company’s use of AI. “We recognize that such powerful technology raises equally powerful questions about its use,” he wrote in a June blog post. “How AI is developed and used will have a significant impact on society for many years to come. … We feel a deep responsibility to get this right.” He clarified that where there’s a material risk of harm, the company will proceed only when it believes the benefits substantially outweigh the risk. The company also said it won’t collaborate on weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
Given the innate advantages AI machines have over us humans (accuracy, speed, etc.), an AI rebellion scenario is not something we should completely dismiss. Time will tell whether AI is our greatest existential threat or a technological blessing that will improve our quality of life in many different ways.
So far, one thing remains perfectly clear: creating AI is one of the most remarkable events in humankind’s history. After all, AI is considered a major component of the Fourth Industrial Revolution, and its potential socioeconomic impact is believed to be as profound as that of the invention of electricity.
In light of this, the smartest approach is to keep an eye on how the technology evolves, take advantage of the improvements it brings to our lives, and not get too nervous at the thought of a machine takeover.