
4 Artificial General Intelligence Milestones We Need

by Mike Hassaballa, March 2nd, 2022

Too Long; Didn't Read

Machine Learning (ML) using Artificial Neural Networks (ANNs) has been making significant progress and headlines in the past few years. Generative Pre-trained Transformers (GPT) can answer questions, carry a conversation, write a poem or a short story, and help you write computer code. Continuously learning AI algorithms that actually work would be a breakthrough towards AGI. Numenta’s researchers recently outlined four basic concepts that I agree are vital for achieving AGI; these ideas are worth considering in broad AI research.


Artificial General Intelligence (AGI) is one of the most important research topics humans can work on. It is also one of the most hyped, speculated about, feared, and discussed topics in Artificial Intelligence (AI) research. AI is a very broad field that, in my opinion, has a lot of boring subfields, such as search, logic, probabilistic methods, classifiers, and statistical methods.

With all the advances in GPTs and GANs, AGI remains a hard problem to solve. At its core, general intelligence is hard to define and maybe impossible to achieve.

While the brightest minds at Google’s DeepMind and OpenAI are working hard exploring ways to solve AGI, it seems that many researchers miss important concepts that I believe are essential to solving general intelligence. Numenta’s researchers recently outlined four basic concepts that I agree are vital for achieving AGI; these ideas are worth considering in broad AI research.

1. Continuous Learning

Machine learning algorithms are typically trained on a static dataset; only once the training process is concluded can the algorithm be used.

For example, a version of OpenAI's GPT-3 trained in 2021 may be aware of COVID-19; however, it will not be aware of the ongoing tensions in Ukraine, and it will only know about them once it is trained on 2022 internet text data. You, on the other hand, do know about the tensions in Ukraine. Why is that? The answer is continuous learning.

Another example: when Tesla's Autopilot was trained to perform rolling stops, it kept performing them until it was updated to remove this feature, not because it learned that the behavior was wrong. The point is that the human brain keeps learning and updating its model of the world continuously; current AI algorithms don’t. Continuously learning AI algorithms that actually work will be a breakthrough towards AGI.

Tesla's Autopilot kept performing rolling stops until it was updated to remove this feature, not because it learned that this was wrong. Image: Forbes
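
To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It uses scikit-learn's SGDClassifier (a real API with incremental updates via partial_fit) on synthetic data that I made up for illustration; the make_batch helper and the drift it simulates are assumptions of mine, not anything from Tesla or OpenAI.

```python
# Minimal sketch: static (train-once) learning vs. continual updates.
# The data is synthetic and the drift is artificial; this only illustrates the idea.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n=200, shift=0.0):
    """Toy two-class data whose distribution (and decision boundary) drifts with `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Static training: fit once on an old snapshot of the world, then never update.
X0, y0 = make_batch(shift=0.0)
static_model = SGDClassifier(random_state=0).fit(X0, y0)

# Continual learning: keep updating as new batches ("new events") arrive.
continual_model = SGDClassifier(random_state=0)
continual_model.partial_fit(X0, y0, classes=np.array([0, 1]))
for step in range(1, 6):
    X_new, y_new = make_batch(shift=0.5 * step)   # the world keeps changing
    for _ in range(3):                            # a few passes over each new batch
        continual_model.partial_fit(X_new, y_new)

# Evaluate both on the latest distribution: the static model falls to roughly
# chance accuracy, while the continually updated model tracks the drift.
X_test, y_test = make_batch(shift=2.5)
print("static   :", static_model.score(X_test, y_test))
print("continual:", continual_model.score(X_test, y_test))
```

The same limitation is why GPT-3's knowledge ends at its training cutoff: nothing in today's typical pipeline updates the model after deployment.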

2. Physical World Learning and Exploration

General intelligence, at first glance, seems to have little relevance to physics or the physical world. However, for an AI algorithm to make generally intelligent decisions or behave in an intelligent way in the human physical world, it has to have the ability to experience and experiment with real-world physics. Otherwise, complex reasoning, decisions, or real-world actions performed by the AI may result in erratic behavior. I would add that the AI algorithm should not be trained only in a simulated virtual world, since humans don't fully understand the physical world well enough to simulate it faithfully. In a nutshell, the human brain is believed to learn about the physical world via movement; a breakthrough AI algorithm will therefore learn the laws of physics by moving in the physical world.

Boston Dynamics Atlas carrying a box, Image: Gizmodo

3. Generalization

Zero-Shot Learning (ZSL) is what happens when a child sees a car model they have never seen before and still knows it is a car, reacting accordingly. This generalization could be based on previous learning or on intuition, which I believe has to do with the model-free methods the brain may be using. A breakthrough AI will have a structure that allows a degree of generalization or extrapolation without catastrophic results.

A child who sees a car model they have never seen before still knows it is a car and can react accordingly. Image: Pexels
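
As an illustration, here is a toy attribute-based zero-shot sketch in Python. The classes, their attribute descriptions, and the fake "image features" are all my own hypothetical inventions (only Ridge regression from scikit-learn is a real API); it is not a claim about how the brain or any particular system generalizes, just one common ZSL recipe in miniature.

```python
# Toy attribute-based zero-shot learning: the model never sees a "zebra" during
# training, but can recognize one because the zebra's attribute description is known.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Shared attribute space: [has_stripes, has_hooves]
attributes = {
    "cat":   np.array([0.0, 0.0]),
    "tiger": np.array([1.0, 0.0]),
    "horse": np.array([0.0, 1.0]),
    "zebra": np.array([1.0, 1.0]),   # never appears in the training set
}

def sample(cls, n=100):
    """Toy 'image features' clustered around the class's attribute vector."""
    return attributes[cls] + 0.1 * rng.normal(size=(n, 2))

# Train only on the seen classes, regressing features onto attributes.
seen = ["cat", "tiger", "horse"]
X_train = np.vstack([sample(c) for c in seen])
A_train = np.vstack([np.tile(attributes[c], (100, 1)) for c in seen])
attr_model = Ridge(alpha=1.0).fit(X_train, A_train)

# A zebra shows up for the first time: predict its attributes, then pick the
# nearest class description, including classes never seen during training.
x_new = sample("zebra", n=1)
pred = attr_model.predict(x_new)[0]
nearest = min(attributes, key=lambda c: np.linalg.norm(pred - attributes[c]))
print("recognized as:", nearest)   # expected: "zebra"
```

Real zero-shot systems replace the toy features with learned embeddings from a pretrained encoder, but the principle, generalizing through a shared representation rather than memorized classes, is the same.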

4. Reference Frame Learning

Jeff Hawkins’s Thousand Brains Theory of intelligence is based on the concept of reference frames. As I understand it, reference frames are abstract structures (connections between neurons) that the brain creates, stores, and maintains. According to the theory, the brain uses these reference frames to think, plan, and predict. A breakthrough AGI algorithm may have a structure similar to reference frames for mapping and storing complex concepts. Current ANN designs may allow such connections to exist; the breakthrough, however, would be automating the formation, modification, and removal of such connections based on continuous learning, physical-world exploration, and generalization.

The brain uses these reference frames to think, plan, and predict. Image: Pixabay
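
To give a flavour of the idea, here is a deliberately tiny toy in Python. It is my own simplification, not Numenta's implementation or the actual Thousand Brains algorithm: each known object stores which feature occurs at which location in its own reference frame, and recognition proceeds by moving, sensing, and discarding objects whose reference frames are inconsistent with what was sensed.

```python
# Toy "features at locations in a reference frame" (a personal simplification).
# Each object's reference frame maps a location (x, y) to the feature sensed there.
objects = {
    "cup":   {(0, 0): "handle", (1, 0): "rim",  (1, 1): "hollow"},
    "brick": {(0, 0): "corner", (1, 0): "edge", (1, 1): "corner"},
}

def recognize(observations):
    """observations: list of (movement, feature); movement is a delta (dx, dy)."""
    candidates = set(objects)
    location = (0, 0)
    for (dx, dy), feature in observations:
        location = (location[0] + dx, location[1] + dy)
        # Keep only the objects whose reference frame predicts this feature here.
        candidates = {o for o in candidates if objects[o].get(location) == feature}
    return candidates

# Sense a feature, move, sense again: movement plus sensation narrows the candidates.
print(recognize([((0, 0), "handle")]))                       # {'cup'}
print(recognize([((0, 0), "corner"), ((1, 1), "corner")]))   # {'brick'}
```

Everything interesting in the real theory, learning these frames from experience and modifying or discarding them over time, is exactly what the previous paragraph argues still needs to be automated.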

If AI researchers and technologists are successful in embracing these concepts, I expect that AI with features of human intelligence may emerge. Let’s hope they are reading this.

Thanks to Christy Maver, Donna Dubinsky, and Subutai Ahmad at Numenta for sharing their valuable ideas about the Biological Approach to Machine Intelligence.

If you learned something new or liked this content, follow me here.
For questions, support, and feedback, contact me here.