
Bulldozer Intelligence: Here's Why LLMs Won’t Be AGI

by Joy Oguntona, June 6th, 2024

Too Long; Didn't Read

A bulldozer has just two functions: moving things and breaking things. If it can't break or move something, you throw more weight at it. Bulldozers are functional tools, much like LLMs (Large Language Models). While LLMs create new things through synthesis, which mimics a sign of human intelligence, they lack creativity.

A bulldozer has just two functions: moving things and breaking things. All its versatility revolves around these. No matter how good a bulldozer is at its task, it won't excel beyond it or acquire new capabilities like building or molding. When it faces a harder challenge it cannot solve within its forte, the answer is more weight, or a bigger bulldozer. In short: if it can't break it or move it, throw more weight at it. Bulldozers are functional tools, and so are LLMs (Large Language Models).


The quality of the training data determines an LLM's overall capability, and the more parameters it has, the better it tends to get. When faced with a task, an LLM draws on statistical relationships learned from billions of tokens, predicting one word after another by weighing each candidate against the surrounding context, always as a means to an end.
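To make that "means to an end" point concrete, here is a minimal, purely illustrative sketch of autoregressive generation: score every word in a vocabulary, pick the most likely one, append it, repeat. The toy_logits function and the tiny vocabulary are stand-ins of my own, not any real model's internals; a real LLM computes these scores from billions of learned parameters.

```python
# Minimal sketch of autoregressive next-token generation (illustrative only).
import math
import random

VOCAB = ["the", "bulldozer", "moves", "breaks", "things", "."]

def toy_logits(context):
    # Stand-in for a real model's forward pass: a real LLM derives these
    # scores from learned parameters; here we just use pseudo-random values
    # seeded by the context so the output is deterministic.
    random.seed(" ".join(context))
    return [random.uniform(-1, 1) for _ in VOCAB]

def softmax(scores):
    # Turn raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, max_new_tokens=5):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        probs = softmax(toy_logits(tokens))
        # Greedy decoding: always append the most probable next token.
        tokens.append(VOCAB[probs.index(max(probs))])
    return " ".join(tokens)

print(generate("the bulldozer"))
```

Nothing in this loop reflects on what it is saying; it only optimizes the next step toward finishing the sequence, which is the functional, bulldozer-like behavior described above.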


LLMs are functional intelligence; they work toward an end goal. An LLM aims to generate words as efficiently as possible, sifting through candidates at speed. While LLMs create new things through synthesis, which mimics a sign of human intelligence, they lack creativity, a major component of intelligence.


Imagine AGI (Artificial General Intelligence) as an encompassing form of intelligence operating at an average human pace. The average human brain exhibits autotelic behavior; it is not a functional tool. It works continuously, not as a means to an end. Human intelligence combines functional intelligence with an autotelic one.


When thinking about AGI, what matters is the creativity of the process more than the output. The human brain, when faced with a harder problem, doesn't increase its capacity; it puts more into the process of thinking toward a solution.


LLMs solve problems with brute force at a faster pace; more problems mean more compute. When building AGI, the end goal should target not the utility but the process of action. A big enough bulldozer will carry out its function (move or break) effectively, but that makes it a more functional tool, not a smarter one. Fed enough data and compute, an LLM will generate better responses and words.


That doesn't make it smarter, only more functional and one-dimensional. LLMs are more a human tool than a form of intelligence. Hallucination in LLMs is more a feature than a bug; although it is a regurgitation of information, it also showcases an LLM's capability to act within its context.


The human brain combines all forms of knowledge with adaptiveness. It switches to the best-performing mode in an instant, knowing when to apply the most relevant knowledge drawn from a large array of data (experience) gathered over the years. It does not fall into functional fixedness.


When developing AGI, compute won't be the determining factor; the algorithm will. Building an autotelic way of thinking is one step toward building general intelligence.