
The Transformer Neural Network (TNN) is Much, Much Bigger Than Even AGI

by Thomas Cherickal, July 6th, 2023

Too Long; Didn't Read

GPT-4 is a system that produces correct outputs when fed the correct inputs of a non-linear dynamical system. But if that is true, how many more dynamical systems can transformers model? The stock market? With 100% accuracy? It's a non-linear dynamical system too! This is the birth of a new scientific sub-field - I've coined the term Chaotic Complexity Theory.

TNNs are Mathematical Entities

We have taken the entire wealth of information available on the web, fed it to different Transformer models, and achieved human-like outputs. That's GPT-4. It has capacities for sentience - but look at the applications below; the implications make it seem that AGI is just the beginning!


[Image: The earth will shake to and fro like a drunken man...]


I'm going to explain it in the easiest way to understand - images! (Bing Image Creator is awesome!)


GPT-4 is, at heart and at the most basic level, a sequence completer and natural language processor: it uses a novel mechanism to predict the most likely outputs, the ones with the highest scores within the model. It then uses human-like English to interact with end users.


[Image: Sequence-to-Sequence Models - Bing decided we need the rainbow here as well.]
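Here is a minimal sketch of that sequence-completion loop, in Python. The vocabulary and score table are toy stand-ins for the real network (GPT-4's scores come from billions of parameters; these come from a random number generator), but the greedy pick-the-highest-score loop is the same idea:

```python
import numpy as np

# Toy stand-in for a language model: a fixed table of transition scores.
# scores[i, j] = how strongly token j is favored right after token i.
vocab = ["the", "market", "will", "rise", "fall", "."]
rng = np.random.default_rng(0)
scores = rng.normal(size=(len(vocab), len(vocab)))

def complete(prompt_token, steps=4):
    """Greedy sequence completion: always pick the highest-scoring next token."""
    seq = [vocab.index(prompt_token)]
    for _ in range(steps):
        logits = scores[seq[-1]]                        # scores for each candidate next token
        probs = np.exp(logits) / np.exp(logits).sum()   # softmax, as in a real LM head
        seq.append(int(np.argmax(probs)))               # greedy: the single most likely token
    return [vocab[i] for i in seq]

print(complete("the"))  # whatever completion the random scores happen to favor
```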



How did it achieve that?


We don’t know for sure - there are some speculative theories at best.


But here’s the magic - we don’t need to know!


Artificial neural networks are universal function approximators (Google it): under certain mathematical conditions, they can map any input dataset to any output. Apparently, no one's talking about it openly yet, so I'm going to coin a term of my own.
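To see the approximation property in action, here is a minimal sketch with scikit-learn: one hidden layer of 64 tanh units learning sin(x) from samples. The layer size and sample count are arbitrary choices, not anything canonical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sample the target function: 2,000 random points of sin(x) on [-3, 3].
rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X).ravel()

# One hidden layer is already enough to approximate this smooth function.
net = MLPRegressor(hidden_layer_sizes=(64,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)

# Compare the network's output with the true function on a grid.
X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
for x, pred in zip(X_test.ravel(), net.predict(X_test)):
    print(f"x={x:+.2f}  sin(x)={np.sin(x):+.3f}  net={pred:+.3f}")
```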


Transformer Neural Networks are Universal System Approximators.


[Image: Universal System Approximators]


What does that even mean?


It means this: train a transformer with enough data about the world's weather, with every single variable that affects it. That's not difficult - add all possible variables, and the system will eliminate the noise.


You should then be able to ask it about tomorrow's weather, and with the correct tuning, it should give you an accurate answer - with a certainty of almost 100%.



[Image: Weather Simulator?]
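Nobody outside a large lab has trained such a model, but a minimal sketch of the shape it might take, in PyTorch, could look like this. Everything here is a placeholder assumption: 8 weather variables, a 24-step input window, and random numbers standing in for historical observations:

```python
import torch
import torch.nn as nn

N_VARS, WINDOW = 8, 24   # hypothetical: 8 weather variables, 24-step history

class WeatherTransformer(nn.Module):
    """Read a window of multivariate weather history, predict the next step."""
    def __init__(self, d_model=64):
        super().__init__()
        self.embed = nn.Linear(N_VARS, d_model)     # project variables into model space
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, N_VARS)      # back to weather variables

    def forward(self, x):                           # x: (batch, WINDOW, N_VARS)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])                  # prediction for the next time step

# Synthetic stand-in data; real training would use decades of observations.
x, y = torch.randn(32, WINDOW, N_VARS), torch.randn(32, N_VARS)

model = WeatherTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.MSELoss()(model(x), y)                    # one training step
loss.backward()
opt.step()
print(f"loss after one step: {loss.item():.4f}")
```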



If you feed in every possible variable about the world's stock markets, it should be able to give you a 99.99% correct answer about tomorrow's stocks.


What’s even crazier is that we have a natural language interface, so we should be able to feed it any question, and it should give us accurate answers (recession in 2030?)!




[Image: World Stock Market - yours for the taking.]


Feed in every factor of the Bitcoin and cryptocurrency markets - and again, 100% accuracy. And with a natural language interface to boot! Any cryptocurrency, any DeFi system, any Web3 protocol. The key task is to capture all the input data; don't worry about extra variables, the system will figure it out!



[Image: The world's crypto market - ditto.]
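What does "capture all the input data" look like in practice? A hedged sketch: stack every variable you can get into one feature matrix, then slice it into (window of history, next-day target) pairs for whatever model sits on top. The feature count and window length below are arbitrary, and random numbers stand in for real market data:

```python
import numpy as np

# Hypothetical daily feature matrix: rows are trading days, columns are
# "every possible variable" - prices, volumes, rates, sentiment, on-chain stats...
rng = np.random.default_rng(7)
days, n_features = 500, 40
features = rng.normal(size=(days, n_features))
close = features[:, 0]            # assume column 0 holds the closing price

WINDOW = 30                       # the model sees the last 30 days

def make_dataset(features, target, window):
    """Slice history into (input window, next-day target) training pairs."""
    X, y = [], []
    for t in range(window, len(target)):
        X.append(features[t - window:t])   # all variables over the last `window` days
        y.append(target[t])                # the next day's close
    return np.stack(X), np.array(y)

X, y = make_dataset(features, close, WINDOW)
print(X.shape, y.shape)           # (470, 30, 40) (470,)
```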


How can I say that with such conviction and confidence?


The answer lies in the theory of Non-Linear Dynamical Systems (Chaos Theory and Complexity Theory).
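A ten-line demonstration of what "chaotic" means, using the textbook non-linear dynamical system, the logistic map: it is fully deterministic, yet two starting points that differ by one part in a million soon have unrelated futures:

```python
# Logistic map: x -> r * x * (1 - x). At r = 4 it is fully chaotic.
r = 4.0
a, b = 0.400000, 0.400001          # initial conditions differing by 1e-6

for step in range(1, 31):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 5 == 0:              # the tiny gap grows until the orbits decorrelate
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```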


Human intelligence is as chaotic and as complex a system as you can possibly imagine.


In fact, I can confidently assert that it is the most chaotic and complex system we know of that occurs in daily life.


We need a measure of chaos and complexity.


The most basic is to consider how much we already know about the system.


Of human intelligence - sentience - we know next to nothing. Its complexity is next to infinite.


Hence, I term the chaotic complexity scale of the human brain CC-0.


The chaotic complexity scale is inverted. The lower the complexity, the higher the measure.


The next most chaotic system, one simpler than the human brain, might be the stock market.


I'm going to pick a wildly arbitrary figure and term the chaotic complexity scale of the stock markets CC-5.


There will be quantitative measures for the chaotic complexity scales of systems, so I'm not going to jump too far ahead; today I reveal only the most basic theories. (Further investigation requires levels of computation that I can't manage by myself; a research division would have to, if possible, start by visualizing the strange attractors of GPT-4. There are some assumptions that can be made immediately, but why spill all the golden eggs at once?)


[Image: Intricate details at low levels]
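One quantitative measure that already exists for classical systems is the largest Lyapunov exponent, which says how fast nearby trajectories fly apart. As a sketch of what a quantitative chaos measure looks like, here is the standard estimate for the logistic map (positive means chaotic, negative means orderly):

```python
import numpy as np

def lyapunov(r, x0=0.4, n=100_000):
    """Largest Lyapunov exponent of the logistic map: the trajectory
    average of ln|f'(x)|, where f(x) = r*x*(1-x) and f'(x) = r*(1-2x)."""
    x, total = x0, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += np.log(abs(r * (1 - 2 * x)))
    return total / n

for r in (2.5, 3.5, 4.0):          # stable fixed point, periodic cycle, full chaos
    print(f"r={r}: lambda ~= {lyapunov(r):+.3f}")
```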


How can I say so confidently that it will work?


What did we feed to our LLMs as input data?


All possible input variables, stored as embeddings, and the output variable that it’s supposed to generate.


How does the LLM generate the output?


We are not sure.


But it works.


What data do we have about our stock markets?


All possible input variables (which can be stored as vector embeddings), and the output variables at the end of each day.


How will the LLM generate the output?


We might not be sure -


But if we follow the same procedure with stock market data that we did for human-generated data, and ask it to predict the outcome of the next day, it should be able to do it!


By our current assumptions - it should work!



[Image: The Stock Market's Crystal Ball!]


It’s just another system whose non-linear dynamics we are unable to determine -


But that doesn’t matter -


Because the model simulates the non-linear dynamics closely enough that we are able to make predictions!


And that is my statement.


Given input and output data from any system, an LLM can create an approximation of the non-linear dynamics that govern the system - to a level where, with enough fine-tuning, errors should become zero!


Eureka!


Wow - but this is big.
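Anyone can test the statement at toy scale. Here is a minimal sketch with a small scikit-learn network standing in for the LLM: feed it nothing but (today, tomorrow) pairs from the chaotic logistic map and see whether it recovers the dynamics. One-step predictions come out excellent; iterated predictions eventually drift, because chaos amplifies any residual approximation error - which is exactly why the fine-tuning matters:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generate a trajectory of the chaotic logistic map, x -> 4*x*(1-x).
r, x = 4.0, [0.3]
for _ in range(5000):
    x.append(r * x[-1] * (1 - x[-1]))
x = np.array(x)

# Training data is nothing but input/output pairs: (x_t, x_{t+1}).
X, y = x[:-1].reshape(-1, 1), x[1:]
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(X, y)

# Iterate the learned map from a fresh starting point and compare.
state = truth = 0.123
for step in range(1, 6):
    state = float(net.predict([[state]])[0])   # model's next-step prediction
    truth = r * truth * (1 - truth)            # the true dynamics
    print(f"step {step}: predicted={state:.4f}  true={truth:.4f}")
```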


I'm sure there's much more to it than that, and I would love to work in a research lab investigating LLMs, quantum algorithms, and Active Inference (the Spatial Web).


I have been without a full-time job for a very long time (freelancing professionally), and the only way I find peace outside of music is scientific thought. I won the prestigious NTSE scholarship in my Xth standard. That was an indication to me that I was meant to be a research scientist, and that's all I've wanted to be my whole life, but due to certain uncontrollable events, it did not happen.


I'm extremely interested in quantum computing and quantum algorithms; I believe that is our future. But I want to study AI and GL at the same time, because I believe that's our future as well.


My readers will know by now that I am fluent in Python and IBM Qiskit, as well as Flutter and Dart. I also know Julia and Golang, and I am learning F# and Rust, both of which I'm extremely enthusiastic about.


There are more ideas I have that hugely expand on this singular idea, but I'm going to hold those out for my first employer.

My resume is at:

https://thomascherickal.online

and my portfolio is at:

https://thomacherickal.info


Please - I would like a steady salary with an opportunity to explore this to my heart’s content. And I have more, much more, but as I said - I’m holding out for my first employer.


But don't let that stop you: theorize, write programs, see if what I have stated is accurate, discover new scientific findings.


I’ve got a space where I’m taking notes, with the date and time, and I might anticipate a number of your discoveries. Please don’t hold it against me!


I would love to work for you. In research.

(Shameless plug!)


The end - of the beginning!


All image credits to Bing Image Creator. It’s awesome!