With outdated dystopian movies like Terminator making headlines for possibly predicting the future, and with companies like Google already releasing Artificial Intelligence (AI) tools and bots, is it time to rethink the Terminator narrative? As we move towards a world that is increasingly digitized, we must work to truly understand what it means to have and use AI.

How does AI work?

Traces of AI appeared in mythology, literature, scriptures, and almost everywhere else you wouldn't expect before they appeared in academic discussions! For centuries many different cultures put forth their versions of AI, but it wasn't until the 1950s that the idea was formally theorized.

All of Artificial Intelligence is based on a single underlying assumption: that all human thought can be mechanized. While the way humans learn isn't a one-dimensional process, almost all of it can be emulated by a machine. To make this analogy a little clearer, I shall look at intelligence (both human and machine) as a toolbox of skills.

The most basic of these is sensory learning. Even the least intelligent animals and organisms use information from sound, vision, and other senses to react appropriately to the external world. For humans it is no different, and it is typically something that comes innately to us. However, this makes it all the harder for machines to replicate. This idea is known as Moravec's paradox, which states that, contrary to assumptions and traditional reasoning, the sensorimotor skills that come so naturally to humans require enormous computational resources for machines to replicate.

As a result, we have made strides in computer vision, robotics, tactile signaling, and similar sensory-based learning technologies only in the last 3-5 years. Now, though, these technologies are just as adept as we are (if not better) at recording sensory input.

AI and GPU Advancements

A slightly more advanced tool that not all 'intelligent' creatures possess is the ability to gather, store, and analyze information. In the past this was the biggest challenge for AI. Training a model (the process that 'teaches' an AI how to behave) needs huge computational power. This was an obstacle for quite a few decades, until about 6-8 years ago, when GPUs were put to use for purposes beyond graphics processing. With the parallel processing power they offered at a relatively low cost, machines were finally able to crunch a lot more numbers, overcoming computational difficulties that we had struggled with in the past.

Machine Learning, NLP, Neural Networks and Big Data

To make the whole process a little more natural, supporting technologies like speech recognition and natural language processing were developed to convert sensory and other inputs into a serviceable format. Then, developments in neural networks and big data helped support the machine learning models being put in place for information interpretation.

Finally, with these technologies under our belt, we are able to build fairly reliable AI for particular, context-specific use cases. This is known as narrow AI. Most current applications fall under this category.

If we look to scale things further, we must turn to the most powerful skill in our human intelligence toolkit: our ability of abstraction. While AI hasn't fully gotten there yet, cutting-edge research on transfer learning and meta-learning is being done to make AI useful for a broader range of tasks. With this, AI would be able to reproduce humanity's extraordinary ability to generalize learning from one situation and apply it in a different context. If this is achieved, we may finally get the AI that the Terminator franchise predicts: AGI (Artificial General Intelligence). This form of AI could essentially mimic all intellectual human activities and might eventually supersede human abilities. The earliest estimates for this, if it is even possible, are at least 10 years out.
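The transfer-learning idea above can be sketched in a few lines of numpy. This is only a toy illustration, not how real systems are built: the "feature extractor" here is just a linear map, and the tasks, dimensions, and numbers are all invented for the example. It shows the core move, though: a representation learned on a data-rich task is frozen, and only a small "head" is fit on a task with very little data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task: plenty of data, and supervision rich enough to learn a
# useful 3-dimensional representation of the 10-dimensional inputs.
W_true = rng.normal(size=(10, 3))       # the "right" features, unknown to the learner
X_source = rng.normal(size=(500, 10))
F_source = X_source @ W_true            # representation observed during pretraining

# "Pretraining": recover a feature extractor by least squares.
W_learned, *_ = np.linalg.lstsq(X_source, F_source, rcond=None)

# Target task: only 20 labelled examples, and a different readout of
# the same underlying features.
readout = np.array([0.3, 0.7, -1.0])
X_target = rng.normal(size=(20, 10))
y_target = (X_target @ W_true) @ readout

# Transfer: freeze W_learned, fit only a small "head" on the new task.
feats = X_target @ W_learned
head, *_ = np.linalg.lstsq(feats, y_target, rcond=None)

pred = feats @ head
print(np.allclose(pred, y_target, atol=1e-6))  # the pretrained features carry over
```

Twenty examples would be nowhere near enough to learn the full input-to-output mapping from scratch; reusing the pretrained features makes the small task easy. That reuse-across-contexts is the generalization ability the research above is chasing.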
Bottom Line

Our art and pop culture have made it abundantly clear that we see AI as a distressing technology to be worried about. Whether or not this is true, wherever there is data, there will be AI, and I believe it will be the backbone of the technologies that govern our future. While my naivety would like me to think that AI will lead to a utopian world with better healthcare, self-driving cars, and more, if current trends are indicative of anything, it will instead push the world towards more disparity.

While it is undeniable that AI is an empowering technology, it is also key to note that the people profiting from its development are the one percent. They will continue to do so, driving further power, income, and social inequality. Therefore, a conflict between AI and humans is an inaccurate and unlikely scenario. The more real conflict to worry about is the one between humans who have AI at their fingertips and those who don't. The narrative from Terminator is rather misleading (admittedly, it was a movie released nearly 40 years ago!). The more likely reality is that AI will further aggravate the dichotomy between the haves and the have-nots.

Previously published on: https://lucidityproject.home.blog/2020/07/15/artificial-intelligence-intro/