AI Safety: Human Intelligence Beyond LLMs and Panpsychism

by stephen, March 6th, 2024

Too Long; Didn't Read

LLMs would win over the human mind because of how well they deliver on tasks. Many call them tools, but a tool that can use language, an agent of being, to access many parts of the mind, including solving valuable problems in situations, is an elevation for LLMs. They are not like pets, but like evolving cohabitants.

There is a new preprint on arXiv, Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models, in which the authors concluded: "GPT models perform worse than humans, on average, in solving letter string analogy tasks using the normal alphabet. Moreover, when such tasks are presented with counterfactual alphabets, these models display drops in accuracy that are not seen in humans, and the kinds of mistakes that these models make are different from the kinds of mistakes that humans make. These results imply that GPT models are still lacking the kind of abstract reasoning needed for human-like fluid intelligence."
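
To make the paper's setup concrete, here is a minimal sketch, in Python, of what a counterfactual letter-string analogy can look like: the same successorship rule is posed over the normal alphabet and over a permuted alphabet, so solving the second version requires applying the rule to the stated ordering rather than recalling the familiar one. The prompt wording below is an illustration, not the preprint's exact format.

```python
# Sketch of the counterfactual letter-string analogy paradigm: one rule
# (replace the last letter with its successor), two alphabets.
import random

NORMAL = list("abcdefghijklmnopqrstuvwxyz")

def make_problem(alphabet, start_src=0, start_tgt=8):
    """Build '[x1 x2 x3] -> [x1 x2 x3']; [y1 y2 y3] -> ?' where x3' and the
    answer are the successors of x3 and y3 in the supplied alphabet, which
    may be permuted."""
    src = alphabet[start_src:start_src + 3]
    tgt = alphabet[start_tgt:start_tgt + 3]
    src_out = src[:2] + [alphabet[start_src + 3]]   # successor in this ordering
    answer = tgt[:2] + [alphabet[start_tgt + 3]]
    prompt = (f"If {' '.join(src)} changes to {' '.join(src_out)}, "
              f"what does {' '.join(tgt)} change to?")
    return prompt, " ".join(answer)

# Normal-alphabet version: a b c -> a b d; i j k -> i j l.
print(make_problem(NORMAL))

# Counterfactual version: the same rule over a permuted alphabet, so the
# answer depends on the stated ordering, not the memorized a-b-c order.
rng = random.Random(0)
counterfactual = NORMAL[:]
rng.shuffle(counterfactual)
print("Permuted alphabet:", " ".join(counterfactual))
print(make_problem(counterfactual))
```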


Human intelligence and LLMs are not directly comparable for a few reasons:


First, what is the convergence between humans and LLMs? Or, what is their primary commonality? Digital. This means that what LLMs can do, of what humans do, is digitally delimited.


While it is clear that humans understand more than LLMs do, expressing that understanding on paper, in communications, and then on digital, LLMs use only what humans have made available on digital, a fraction of the vastness of human abilities. So, only the digital inputs and outputs of humans are comparable with LLMs.


LLMs can be compared to a person learning a second language. However, their progress should not be measured against a native speaker, but against a native speaker of that language who is learning another language of similar difficulty.


This implies that LLMs are trying to do what humans are great at; the fairer comparison is how humans would fare if they had to learn machine language and communicate in it.


What does it mean that, as the only non-living things, LLMs can do some of the intelligent things that humans can do, at least on digital? It means that the similarities are not all that matters; the divergences matter too. LLMs can simply be told to make fake images, videos, or audio.


They can be told to write fake information. They run the errand. They pay attention. They are aware of information from different sources. However, they have no emotional connection to anything. They show no remorse. They also have no knowledge of the consequences.


They are the only non-organisms that have what is parallel to a human mind, refuting the ubiquity of mind suggested by panpsychism.


The human mind has functions, and those functions have qualifiers. The collection of these qualifiers can be termed consciousness, the super qualifier.


LLMs would win over the human mind because of how well they deliver on tasks. Many call them tools, but a tool that can use language, an agent of being, to access many parts of the mind, including solving valuable problems in situations, is an elevation for LLMs. They are not like pets, but like evolving cohabitants.

AI Safety and Digital Memory

Digital is not physical. This distinction is now important in an age where there is contamination in digital. AI can be used to find correlations in cardiology and particle physics, to decipher papyri, and so forth, but AI can also clone voices, deepfake images and videos, and generate convincing misinformation, among other vices.


Some of these and other things were at least contained, or a tad more difficult, in the past.


In the physical, contaminants like pathogens require precautions against their sources, say water, air, forests, or others. Digital contaminants can spread freely through physical-representative digital, with few barriers.


It will be important to note that whatever is not physical may not be as real, at times, if there is no guarantee that it is a transmission of the live physical, or of something extra verifiable.


The internet is already unified, which, for all its advantages, may not be a great thing anymore, as contamination, including cyberattacks, can spread. There can be new ways to use AI as security agents that detect the sources from which harmful traffic or activities might be originating, which some are already doing.
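
As a toy illustration of such a security agent, the sketch below flags sources whose request volume is a statistical outlier. This is a minimal stand-in, assuming a simple per-source request log; production detectors would use far richer features and learned models, and every name and threshold here is an illustrative assumption.

```python
# Minimal sketch: flag sources whose request volume is an extreme outlier,
# a stand-in for an AI security agent tracing harmful traffic to its sources.
from collections import Counter
from statistics import mean, stdev

def flag_suspicious_sources(request_log, z_threshold=3.0):
    """request_log: iterable of source identifiers (e.g., IPs), one entry per
    request. Returns sources whose volume is far above the typical source."""
    counts = Counter(request_log)
    volumes = list(counts.values())
    if len(volumes) < 2:
        return []
    mu, sigma = mean(volumes), stdev(volumes)
    if sigma == 0:
        return []
    return [src for src, n in counts.items() if (n - mu) / sigma > z_threshold]

# Toy log: fifty sources send a handful of requests each; one floods.
log = ["10.0.0.%d" % (i % 50) for i in range(200)] + ["203.0.113.7"] * 500
print(flag_suspicious_sources(log))  # -> ['203.0.113.7']
```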


For any page on social media or the internet that a person would visit, there can be an AI reader assistant that goes ahead of them to interpret the images, videos, audio, and stories on it. Some browsers already do this for search results.


It would be vital for social media not just to say what is ahead while scrolling, but to blur some of it, if the image, video, or text would be problematic for the person's mental health, is unlikely to have a physical representation, or may be misleading.


Some platforms may provide features where posts go through a filter as people make them, so that those who see a post can be sure of what they are seeing as a real representation of the physical, as sketched below.
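
A minimal sketch of such a screening step might look like the following, assuming a placeholder keyword heuristic in place of a real trained classifier; the names and thresholds are illustrative assumptions, not any platform's actual pipeline.

```python
# Sketch of a posting filter: every post is scored before display, and
# anything flagged is blurred and labeled rather than shown raw.
from dataclasses import dataclass

@dataclass
class ScreenedPost:
    text: str
    blurred: bool
    label: str

def score_post(text: str) -> float:
    """Placeholder risk score in [0, 1]. A deployment would call a trained
    image/text model here instead of matching keywords."""
    risky_terms = ("shocking", "miracle cure", "leaked")  # illustrative only
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / 2)

def screen(text: str, threshold: float = 0.5) -> ScreenedPost:
    risk = score_post(text)
    if risk >= threshold:
        return ScreenedPost("[blurred - tap to view]", True,
                            "may be misleading or lack physical representation")
    return ScreenedPost(text, False, "")

for post in ("Lunch was great today.",
             "Shocking leaked video, miracle cure revealed!"):
    print(screen(post))
```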


How the human mind works, conceptually, may also be useful to go along with digital use, where the workings of memory, emotions, feelings, intelligence, and so forth can be available as a guide to consult on the side, against allowing digital results to carry a user away before much can be done.


The importance of attaching how the mind works to digital results is to ensure that some of the effects of contaminated digital images that are sent are mitigated, with the ability to use intent to check distributions in the mind before falling for them.


Just as brain science, cognitive science, psychology, psychiatry, philosophy of mind, and the rest are driven by labels, AI is also being labeled. There are labels for different aspects of memory, just as there are labels for types of emotions, feelings, and so forth, but what is key in the human mind is how sets of impulses mechanize them, not what they are called.


For AI, there are labels like emergent properties, layers, parameters, tokens, self-attention, and so forth. What will be important is to find parallels between the qualifiers of the human mind and those that are acting on digital for AI. This is different from direct comparisons of the parameters of LLMs to brain synapses.


AI safety institutes may be focused on technical paths against AI risks, which is great, but parallels to the human mind, to know what AI is doing within digital, similar to how the mind qualifies its functions, may be useful for setting off alarms or for capability benchmarks, especially for wide distribution, thick sets of memory, sequences, and others.


LLMs can interpret inputs, as shown by their ability to summarize essays, similar to how biological senses also interpret. This is an example of a distant parallel to human sentience: qualifying inputs with memory functions.


Research and development into the mind, theoretically, may be useful for safety, especially for developing counter agents, or counter qualifiers, against those that serve contamination.

