The inhuman condition

Written by peteryordanov | Published 2017/10/11
Tech Story Tags: artificial-intelligence | neuroscience | evolution | existentialism | inhuman-condition


Preliminaries: This article is dedicated to the now-tedious topic of Artificial Intelligence and is aimed at dismantling the concept with biological and evolutionary arguments. My motivation for writing this is to try to articulate a thought that has been bugging me for a long time. I am an enthusiast; let me be clear on that.

Everything written here I regard as the truth; however, it is still the truth to my knowledge, which is limited and biased.

The claims made below are stripped of detail. Oversimplifications are applied to keep from diverging from the underlying point.

To keep this approach feasible, controversial topics have been omitted; most claims are proven and generally accepted, and they are used as chained thought-catalysts to arrive at a logical conclusion.

Specific brain structures, neurotransmitters, ion channels and other similar neurobiological $10 words will not be referenced, so as to remain true to the idea and not spice the read up with terms that would force the use of search engines and/or build up frustration.

The topic of artificial intelligence has been a beacon of controversy for quite some time. It was recognized as an academic discipline in the 1950s. Traditional, digital and social media have been directing massive efforts at covering this theoretical topic. Many prominent people have followed suit and have theorized about the transformation this science will undergo in the near future.

Most hypotheses surrounding AI are grim and run parallel to the technological singularity, which basically states that once a machine learns to improve itself, it will devise ever better ways of doing so. That would leave the development of AI in a recursive state beyond our control, with outcomes that appear devastating.

Sneaking a peek at the wild ride that AI research has been thus far, we can see achievements such as:

Google: DeepMind and its AlphaGo wins;
IBM: Deep Blue and its chess victories;
IBM: a new-age attempt with Watson and its Jeopardy! successes;
Facebook: chat bots that were shut down for being too scary;
OpenAI: a bot that beat Dota 2 champions;
And many, many more.

While the technological singularity hypothesis might be both fascinating and utterly scary, I would like to open the window to an interpretation of the subject that has little to do with AI and more to do with intelligence itself: the intelligence that we experience and reference.

In reality, the examples above do not constitute intelligence. They are but computer programs designed to do a specific task; they do not understand that they are doing anything. This is nothing new, and while it may be impressive and in no way trivial to build (kudos to the teams behind those projects), it should not be regarded as intelligence of any sort.

There is a common understanding that there is such a thing as General AI, which is what correlates with the accepted definition of an actually intelligent system; there is also Applied AI, which is synonymous with machine learning or deep learning. The latter has nothing to do with intelligence, and it is misleading to regard it as such.

Contrary to popular belief, the bottleneck of (General) AI development is not silicon — it’s principle.

AI is failing. That has been widely accepted by technically savvy people engaged with the subject. I would like to offer my take on why it is failing and on what intelligence might be defined as.

This may be regarded as semantics, but a definition matters. Even more so when developing something.

Quoting Wikipedia, which, ironically, has one of the more accurate definitions of intelligence:

Intelligence has been defined in many different ways including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving. It can be more generally described as the ability to perceive information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

The internet tends to agree. Most books on the matter lean more or less toward the same definition, with one key difference:

knowledge to be applied towards adaptive behaviors within an environment or context.

I argue that intelligence cannot exist as a static component. It is not an abstract feature, and although we treat it as if it were, it is actually bundled within the human condition; you cannot have one without the other.

To understand the point I am making, one must look at evolution and at the origin of nerve cells, shaped by their purpose.

Every cellular structure within a living organism is there because it is the most energy-efficient way to achieve a certain goal, one that ultimately leads to the continuation of the specimen and of the species.

The first organisms that developed neurons used them for simple reflex-based actions and had virtually no information retention[1]. Eventually, this helped with locating and collecting nutrients and with avoiding and fending off predators. In sexually dimorphic species, it also aided in attracting potential mates.

Information passing through these neurons had, and still has, a simple binary mode: “it’s there” or “it’s not there”.

Both internal and external stimuli end up as electrical signals. Once encoded, sight (photons), sound (vibrations), touch (pressure), smell (odor compounds) and taste (sapid molecules) are all but electrical patterns funneled through time.

When one organism outweighs another, the heavier one is usually the predator. Size matters, and evolution has an incentive to produce more massive specimens[2]. This, however, is not free. As mass increased, so did the need to better regulate the internal metabolic state and to keep track of the surrounding environment, which meant more neurons and more complex neural structures. Eventually, as neuron counts grew past a threshold, the solution to keeping all neural activity in sync was a central nervous system, where a single node is responsible for the job.

The central nervous system is responsible for making sure that the internal environment of an organism is favored by the external one AND for keeping the other internal components synchronized.

Minor alterations to the genetic structure of an organism are made after each successful iteration of a reproduction cycle. Many internal and external components are recycled and repurposed to fit the environment's requirements. Since this is a long process that requires many life cycles to yield practical results, a more dynamic method was needed to improve the odds of a single organism's survival: the ability of the organism to adjust itself to its actual surroundings, as opposed to those of its species.

And so the neural connections (synapses) started adapting to the environment. They were able to rewire based on recurring data, driven by events that favored the internal state of the organism. This is what we call learning, and it is effectively what constitutes memory.

While the infrastructure is complex, the principle for making new connections is straightforward and can be summed up as: neurons that fire together, wire together. This is also known as Hebbian learning[3].
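To make the rule concrete, here is a minimal Python sketch of a Hebbian weight update. The network sizes, learning rate, decay term and patterns are illustrative choices of mine, not values from Hebb or from the literature:

```python
import numpy as np

# "Fire together, wire together": a weight grows whenever its pre- and
# postsynaptic neurons are active in the same step. All sizes, rates
# and patterns below are illustrative.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.01, (4, 8))  # 8 inputs -> 4 outputs, weak random start

def hebbian_step(w, pre, post, lr=0.05, decay=0.01):
    w = w + lr * np.outer(post, pre)  # strengthen co-active pairs
    return w * (1.0 - decay)          # mild decay keeps weights bounded

# Repeatedly pair one input pattern with one output pattern...
pre = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)
post = np.array([1, 0, 0, 1], dtype=float)
for _ in range(500):
    w = hebbian_step(w, pre, post)

# ...and the wiring between them comes to dominate: the input alone now
# evokes a response concentrated on the paired output neurons.
print(np.round(w @ pre, 1))
```

After repeated pairings, presenting the input pattern alone evokes a response dominated by the output it was paired with, which is the essence of an association.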

Learning is not to be regarded as an abstract concept. It is well-defined and understandable: organisms have pleasure and pain receptors, both internal and external. The pleasure receptors are motivational, the pain receptors are deterrent, and without them we would not have learning[4].

Evolution has been kind enough to hardwire neurotransmitters that cause pleasure when favorable patterns within the organism and/or its environment are detected, and pain when the opposite occurs[5].

When your external senses are firing, they form connections with your internal state, and if that state is pleasant, you are more likely to repeat what you are doing to get that stimulation, and vice versa. The more often the same pattern is repeated, the more strongly its outcome is anticipated.
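That pleasure/pain modulation can be sketched as a reward-weighted variant of the same Hebbian rule. The reward values of +1 and -1 here are stand-ins for whatever the organism's receptors actually report:

```python
import numpy as np

# The same co-activity term, scaled by a reward signal: pleasure (+1)
# strengthens an association, pain (-1) weakens it.
def modulated_step(w, pre, post, reward, lr=0.05):
    return w + lr * reward * np.outer(post, pre)

w = np.zeros((2, 4))
context = np.array([1.0, 0.0, 1.0, 0.0])   # some sensory situation
act_good = np.array([1.0, 0.0])            # an action that ends pleasantly
act_bad = np.array([0.0, 1.0])             # an action that ends painfully

for _ in range(50):
    w = modulated_step(w, context, act_good, reward=+1.0)
    w = modulated_step(w, context, act_bad, reward=-1.0)

# In this context, the pleasant action is now favored and the painful
# one suppressed: anticipation has been shaped by the feedback.
print(np.round(w @ context, 1))  # something like [ 5. -5.]
```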

The bigger the brain, the more neurons it has, the more senses it can incorporate, and the more patterns it can store for longer periods of time. However, brain size is not a single-dimensional variable. Since both internal and external senses are mapped in the brain, the bigger the animal, the more neurons it requires just to exist.

I have implicitly addressed two evolutionary brain structures:

The reptilian brain, which is responsible for keeping the internal metabolic state of an organism;
The limbic brain, which is responsible for mapping the reptilian brain to the external environment, and thus for driving motivation.

But mammalian evolution has gone a step further. Neocortical tissue became the very reason we now talk about AI.

The neocortex is effectively what makes us smarter than other species. This wrinkled, top-level structure lets our synaptic connections stay efficiently flexible: the brain is constantly creating and destroying neurons, and creating, strengthening, weakening and destroying synapses, so the structure is under dynamic change. It also has the concept of layers, which enables abstraction of information, from pixels and short sounds up to conceptual representations. The output of each layer is mapped to the input of the one above.

The brain is happy to generalize. Identical sensory inputs are virtually impossible to recur, so it fills in the gaps and rounds our understanding to whatever in our own mental library is closest to the given pattern.
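A toy version of this generalization, assuming a hypothetical "mental library" of stored patterns, could be a nearest-pattern lookup by cosine similarity:

```python
import numpy as np

# Match a never-before-seen input against a small "mental library" by
# cosine similarity. The library entries are made-up feature vectors.
library = {
    "chair": np.array([1, 1, 0, 0, 1], dtype=float),
    "table": np.array([1, 0, 1, 1, 0], dtype=float),
    "dog":   np.array([0, 1, 1, 0, 1], dtype=float),
}

def generalize(observation: np.ndarray) -> str:
    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(library, key=lambda name: cosine(library[name], observation))

# A chair seen from a new angle never reproduces the stored pattern
# exactly, but it is still closer to "chair" than to anything else.
noisy_chair = np.array([0.9, 1.1, 0.2, 0.0, 0.8])
print(generalize(noisy_chair))  # -> chair
```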

The elaborate composition of layers within the neocortex allows us to use both low- and high-order memory. We recognize single pixels and clusters of pixels that form more complex representations. This happens because information enters at the first layer and propagates to the upper ones; to optimize for speed, the brain traverses information across layers.

Another key feature of these layers is that they implement feedback loops. Once something reaches an upper layer, it can traverse back down to the lower ones in an information feedback loop, creating what we might call mental associations, since it now acts as new input derived from the concept we remember from past occurrences of the given pattern[6]. This is what we would define as thought.

Not only do layers communicate with each other; they also make connections to other brain regions: sight, hearing, touch, and so on. We make associations across multiple senses based on simultaneous input.
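A very loose sketch of this layering, with placeholder layer functions that are in no way a model of real cortical computation, might look like this:

```python
from typing import Callable, List

# Each "layer" abstracts the output of the layer below it. These
# functions are placeholders chosen for readability, nothing more.
def pixels_to_edges(pixels: List[float]) -> List[float]:
    # pair up raw values: a crude "edge" abstraction
    return [pixels[i] + pixels[i + 1] for i in range(0, len(pixels) - 1, 2)]

def edges_to_shape(edges: List[float]) -> List[float]:
    # one more grouping step: a single "shape" summary
    return [sum(edges)]

layers: List[Callable[[List[float]], List[float]]] = [pixels_to_edges, edges_to_shape]

def feed_forward(signal: List[float]) -> List[float]:
    for layer in layers:
        signal = layer(signal)  # each output becomes the next layer's input
    return signal

concept = feed_forward([1.0, 0.0, 1.0, 1.0])  # -> [3.0], a high-level summary

# Feedback loop: the high-level result re-enters the hierarchy as new
# input, which is roughly how the article frames a "thought".
echo = feed_forward(concept * 4)
print(concept, echo)
```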

The brain is neither a computer nor a storage engine; rather, it is an elaborate sequence mapper with pattern and anomaly detection.

Objective knowledge is a practical illusion[10]! This claim is not new, but it is rather difficult to accept. We have socially functional definitions of words that we presume mean something concise, which is nothing but a fault-tolerant protocol with a tolerable error margin; it is, basically, good enough. That which can be said cannot be proven[7].

Each of us has a unique mental model of the world, associated with our own state of existence. This model is not exportable! To understand it, we would first have to map every single cell in a human's body, and even then we would have nothing without the context of its entire lifespan: every experience, every stubbed toe on the nightstand, every memory of sweat pouring down one's back. Even if we somehow managed to solve that problem, we would arrive at an even bigger one: how do we map information across individuals? What would the absolute representation of a chair be, once denormalized and taken out of the context of our lifespan?

The brain is constantly anticipating. The illusion of consciousness is derived from the manifestation of our senses, combined with our brain's anticipation of possible future events and of our organism's state in them. It is running a mental simulation, the evolutionary purpose of which is to let our thoughts die so that we may live.

When we learn a sequence and see its initial input, we start anticipating what might follow, so we can run multiple simulations, swiftly pick the one that is most beneficial to us, and act accordingly[8].

Once an anticipated pattern fails to match expectations, the brain considers it an anomaly: it gets an attention surge, and we focus our predictions on that specific event and its surroundings to determine what went wrong and to estimate the probable danger. As people, we experience this as uneasiness and temporary stress, accompanied by hyperfocus.
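Putting the last few paragraphs together, a toy "sequence mapper with anomaly detection" could learn transition counts, anticipate the next input, and raise a flag when the anticipation fails. The rehearsed sequence and the surprise message are, of course, made up for illustration:

```python
from collections import Counter, defaultdict
from typing import Optional

# Learn which symbol tends to follow which; anticipate; flag surprises.
transitions = defaultdict(Counter)

def learn(sequence: str) -> None:
    for a, b in zip(sequence, sequence[1:]):
        transitions[a][b] += 1  # count observed transitions

def anticipate(symbol: str) -> Optional[str]:
    following = transitions[symbol]
    return max(following, key=following.get) if following else None

def observe(prev: str, actual: str) -> None:
    expected = anticipate(prev)
    if expected is not None and expected != actual:
        # the anticipation failed: attention surge, hyperfocus
        print(f"anomaly: expected {expected!r} after {prev!r}, got {actual!r}")

learn("abcabcabcabc")  # a well-rehearsed pattern
observe("a", "b")      # matches the anticipation: no reaction
observe("b", "x")      # violates it: flagged as an anomaly
```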

If one, hypothetically, managed to run two parallel lives of identical people with the same experiences, and one of them was missing a toe, the two brain structures would turn out vastly different, since the internal models of the organisms differ.

Conclusion: Organisms are selfish processors of information streams, receiving constant feedback from a world that only makes sense in the context of their own individual existence.

I don't see how the creation of AI would be possible by applying some arbitrary algorithm within a narrow set of computer instructions, which is basically every approach in the field that has been made available to public scrutiny so far.

To create something that resembles AI, one would first have to recreate the world.

I urge everyone: please stop calling “it” “intelligence”. Let's use the language we've all agreed upon.

Loosely related and biased notes: We have a better chance of altering what we currently are. Designer superhumans are closer to realization than Artificial Intelligence; the genetic code has shown itself to be the more manageable problem, as proven by the development of CRISPR/Cas9[9].

In my opinion, the term Artificial Intelligence can easily be labeled click-bait, or a publicity stunt. At this point, claims to have solved the matter would seem bogus, given their improbable nature. Of course, people reasoning beyond what is applied in this article might prove this statement false; the odds of that, however, are slim. I believe the approach with which this problem can be tackled relies heavily on solving machine evolution rather than information processing.

Neuroscience has a general issue regarding knowledge. Brain scans have led many researchers to claim relations between specific brain regions or patterns and social behaviour. Such post-hoc analyses can be misleading, and for anyone concerned with the matter, steering clear of such studies can prove beneficial. Neuroscientific principles are core to understanding the human condition; the specifics, for the most part, vary from individual to individual.

References:

  1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2862905/
  2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4780594/
  3. Donald Hebb, The Organization of Behavior
  4. http://www.cell.com/neuron/abstract/S0896-6273(12)00941-5
  5. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4141622/
  6. https://bbp.epfl.ch/nmc-portal/microcircuit
  7. Ludwig Wittgenstein, Tractatus Logico-Philosophicus
  8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3718380/
  9. http://www.nature.com/nbt/journal/v32/n4/full/nbt.2842.html
  10. https://philpapers.org/browse/perceptual-knowledge
