Artificial Intelligence: Time to Terminate the Terminator Tale?

by shishir · August 7th, 2020

With outdated dystopian movies like Terminator making headlines for possibly predicting the future, and with companies like Google already releasing Artificial Intelligence (AI) tools and bots, is it time to rethink the Terminator narrative? As we move towards a world that is increasingly digitized, we must work to truly understand what it means to have and use AI.

How does AI work?

Traces of AI appeared in mythology, literature, scripture, and almost everywhere else you wouldn't expect long before it showed up in academic discussion! For centuries, many different cultures put forth their own versions of AI, but it wasn't until the mid-1950s, at the 1956 Dartmouth workshop, that the idea was formally theorized.

All of Artificial Intelligence is based on a single underlying assumption: that all human thought can be mechanized.

While the way humans learn isn't a one-dimensional process, almost all of it can be emulated by a machine. To make this idea a little clearer, I shall look at intelligence (both human and machine) as a toolbox of skills.

The most basic of these is sensory learning. Even the most unintelligent of animals and organisms use information from sound, vision, and other senses to react appropriately to the external world. For humans, it is no different, and it is typically something that comes innately to us. However, that very innateness makes it all the harder for machines to replicate. This idea is known as Moravec's paradox, which states that, contrary to assumptions and traditional reasoning, the sensorimotor skills that come so naturally to humans require enormous computational resources for machines to replicate.

As a result, it is only in the last 3-5 years that we have made real strides in computer vision, robotics, tactile signaling, and similar sensory-based learning technologies. Now, though, these technologies are just as adept as we are (if not better) at recording sensory input.

AI and GPU Advancements

A slightly more advanced tool, one that not all 'intelligent' creatures possess, is the ability to gather, store, and analyze information. In the past, this was the biggest challenge for AI.

Training a machine learning model (the technology that 'teaches' AI how to behave) needs huge computational power. This was an obstacle for decades, until about 6-8 years ago, when GPUs were put to use for purposes beyond graphics processing. With the parallel processing power they offered at a relatively low cost, machines were finally able to crunch far more numbers, overcoming the computational difficulties we had struggled with in the past.
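
To make that concrete, here is a minimal sketch of the idea, assuming PyTorch (the article names no particular framework): a single device switch moves an entire training step onto a GPU when one is available, where the thousands of multiply-adds inside each step run in parallel.

```python
import torch
import torch.nn as nn

# Use a GPU when one is available; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1000, 10).to(device)  # move the model's weights onto the device
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch (a stand-in for real data):
# the matrix multiplications inside run in parallel on the GPU.
inputs = torch.randn(64, 1000, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
```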

NLP, Neural Networks and Big Data

To make the whole process a little more natural, supporting technologies like speech recognition and natural language processing were developed to convert sensory and other inputs into a serviceable format.
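
As a toy illustration of what a 'serviceable format' means in practice (plain Python, not any particular library's pipeline), text is typically mapped to integer IDs that a model can consume:

```python
# Turn raw text into integer IDs -- the kind of format a model can work with.
def build_vocab(sentences):
    vocab = {}
    for sentence in sentences:
        for token in sentence.lower().split():
            vocab.setdefault(token, len(vocab))  # assign each new word the next ID
    return vocab

def encode(sentence, vocab):
    return [vocab[token] for token in sentence.lower().split()]

corpus = ["The robot sees the world", "The world answers back"]
vocab = build_vocab(corpus)
print(encode("the robot answers", vocab))  # [0, 1, 4]
```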

Then, developments in neural networks and big data helped support the machine learning models being put in place to interpret all that information.
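
For a tiny, self-contained picture of the mechanics, here is a two-layer network learning XOR with only NumPy. The same forward-pass/backward-pass loop, scaled up enormously and fed far more data, is what the big-data-backed models above are doing.

```python
import numpy as np

# A two-layer neural network learning XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: two layers of weighted sums and nonlinearities.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error and nudge every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```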

Finally, with these technologies under our belt, we are able to build fairly reliable AI for particular, context-specific use cases. This is known as narrow AI, and most current applications, from spam filters to voice assistants, fall under this category.

If we look to scale things further, we must look at the most powerful skill in our human intelligence toolbox: our capacity for abstraction. While AI hasn't fully gotten there yet, cutting-edge research on transfer learning and meta-learning is being done to make AI useful for a broader range of tasks.
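
Transfer learning is the most accessible of these today. A minimal sketch, assuming PyTorch and a recent torchvision (neither is named in the article): take a network pre-trained on ImageNet, freeze its general-purpose features, and retrain only a small head for a new, hypothetical 5-class task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse visual features learned on ImageNet...
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the general-purpose features

# ...and bolt on a fresh, trainable head for the new task (5 classes here).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

A few epochs over a small, task-specific dataset are then usually enough, because the frozen features already generalize: learning carried from one situation and applied in another, exactly the ability described next.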

With this, AI will be able to reproduce humanity's extraordinary ability to generalize learning from one situation and apply it in a different context. If this is achieved, we may finally get the AI that the Terminator franchise predicts: AGI (Artificial General Intelligence). This form of AI could essentially mimic all human intellectual activity and might eventually supersede human abilities. The earliest estimates for this, if it is even possible, are at least 10 years out.

Bottom Line

Our art and pop culture have made it abundantly clear that we think of AI as a distressing technology to be worried about. Whether or not this is true, wherever there is data, there will be AI, and I believe it will be the backbone of the technologies that govern our future.

While my naivety would have me think that AI will lead to a utopian world of better healthcare, self-driving cars, and more, if current trends are indicative of anything, it will instead push the world towards greater disparity. While it is undeniable that AI is an empowering technology, it is also key to note that the people profiting from its development are the one percent. They will continue to do so, driving further power, income, and social inequality.

Therefore, the conflict between AI and humans is both an inaccurate framing and an unlikely event. The more real conflict to worry about is the one between humans who have AI at their fingertips and those who don't. The Terminator narrative is rather misleading (admittedly, the movie was released nearly 40 years ago!). The more likely reality is that AI further aggravates the divide between the haves and the have-nots.

Previously published on: https://lucidityproject.home.blog/2020/07/15/artificial-intelligence-intro/