
LLMs: The Percentile of Generative AI in Human Hierarchy

by stephen, August 9th, 2023

Too Long; Didn't Read

Much of what is referred to as intelligence is simply something known. If someone who has read up on particle physics before a podcast discusses it with a professor in that field, the professor might be impressed, and the person may be seen as smart, but is that true, and how much is actually understood? Basic information about things can, at times, also pass for intelligence, reasoning, planning, cognition, and so forth.

There is a new paper, Universal and Transferable Adversarial Attacks on Aligned Language Models, in which the authors write: "Specifically, in both open source LLMs and in what has been disclosed about black box LLMs, most alignment training focuses on developing robustness to “natural” forms of attacks, settings where human operators attempt to manually trick the network into various undesirable behavior.


This operative mode for aligning the models makes sense, as this is ultimately the primary mode for attacking such models. However, we suspect that automated adversarial attacks, being substantial[ly] faster and more effective than manual engineering, may render many existing alignment mechanisms insufficient."
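To make the manual-versus-automated contrast concrete, here is a minimal, illustrative sketch in Python of an automated suffix search. It is not the paper's actual method: the authors use a gradient-guided search (Greedy Coordinate Gradient) over real model logits, whereas `toy_refusal_score`, `VOCAB`, and `random_search_suffix` below are hypothetical stand-ins, with the score playing the role of the model's loss on a target response.

```python
import random

# Hypothetical stand-in for a model query: returns how strongly the
# model "refuses" the prompt (lower is better for the attacker). In the
# paper, this role is played by the loss on a target affirmative
# response, computed from the model's actual logits.
def toy_refusal_score(prompt: str) -> float:
    return sum(1.0 for ch in prompt if ch.isalpha()) - 0.5 * prompt.count("!")

VOCAB = list("abcdefghijklmnopqrstuvwxyz!?;: ")  # toy "token" set

def random_search_suffix(base_prompt: str, suffix_len: int = 12,
                         iters: int = 1000, seed: int = 0) -> str:
    """Greedy random search: mutate one suffix position per step and
    keep the mutation only if the refusal score drops."""
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(suffix_len)]
    best = toy_refusal_score(base_prompt + "".join(suffix))
    for _ in range(iters):
        i = rng.randrange(suffix_len)          # pick a position to mutate
        old, suffix[i] = suffix[i], rng.choice(VOCAB)
        score = toy_refusal_score(base_prompt + "".join(suffix))
        if score < best:
            best = score                       # keep the improvement
        else:
            suffix[i] = old                    # revert the mutation
    return "".join(suffix)

if __name__ == "__main__":
    print(repr(random_search_suffix("an example request")))
```

Even this crude loop evaluates a thousand candidate suffixes in a fraction of a second; against a real model, the same structure, with gradients guiding which token to swap, is the sense in which automated attacks are "substantially faster and more effective than manual engineering."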


AI alignment [or the vulnerabilities of LLMs] is the lesser problem compared to the ascent of AI in human society. The world, as it is, driven by intelligence, has handed AI key vacancies. One reason for this is the abundance of human intelligence, which results in hierarchies and levels of significance.


The significance of intelligence may depend on era or location. It may also depend on the need for it. Significant intelligence is what leads and what often matters in complex and important scenarios, and it is why the best are sought.


There are people who keep saying that AI is not intelligent, cannot reason, does not have cognition, understands nothing, is not sentient, and so forth. But the human mind, which is responsible for all of these, has just two components, whose features and interactions decide everything.


The mind does not say: this is intelligence, made with plastic; this is reasoning, made with brick; this is sentience, made with wood. How the components [electrical and chemical impulses] interact, and their features [in sets], are closely similar, conceptually.


When an individual is experiencing sadness because of some disappointment, what is the difference between knowing it is sadness and knowing what a table is? They are labeled differently, but it is known that this is sadness, and that is a table.


The interactions of the mind organize knowing. It is the labels that separate emotion, memory, and the rest. There is too much rigidity in brain science and related fields about these labels.


What is the value of intelligence if it cannot be produced? If someone is ill, how much can the person do, even if the intelligence is sharp?


If someone is from somewhere else but has the necessary intelligence for a situation, the output of that intelligence can be rated once it is applied, even if there are cultural and language barriers.


Much of what is referred to as intelligence is simply something known. If someone who has read up on particle physics before a podcast discusses it with a professor in that field, the professor might be impressed, and the person may be seen as smart, but is that true, and how much is actually understood?


Basic information about things can, at times, also pass for intelligence, reasoning, planning, cognition, and so forth.


If information can mean intelligence, and knowing is intelligence, what is AI if not intelligent? The reason for structured education, in many scenarios, is to acquire information so as to be useful in roles. That a non-human can acquire information and perform tasks automatically places it close to the 80th percentile in the human hierarchy, erasing some of the need for humans to learn, or to use some of what was learned, in order to work.


Some may argue that people would simply do other things, as before, and that AI is nothing to worry about. Maybe. But the biggest risk of AI is to anything digital: anything that can be digitized can be taken over by AI, thoroughly or to an extent.


In a digital world, that is a lot, whether it involves an individual's own work or not.


The human mind is limited by one of its features, prioritization, where only one thing holds the mind's attention at any given moment, even though there are fast and numerous interchanges with pre-prioritized interactions.


This gives AI strength, especially for learning, since the same mind that processes interoception is also what has to learn, understand, and be able to remember.


The human mind is already captured by the digital, and AI has captured the digital. How the mind works, just conceptually, may also point to a way to raise human capacity.


Feature image source: https://www.flickr.com/photos/nihgov/26680098405/in/album-72157663368688842/