Hopefully, the title caught your attention. Is it real? Not yet. It's only 2023 :) But, a headline like this may be closer than you think. Let me tell you why I believe this is all but assured to happen.
I have spent a lifetime developing games, graphics algorithms, and hardware, and implementing AI in my games. AI was my main area of research during my CS and Math degrees and has been my lifelong passion. I have read about it, written about it, developed it, made money from it, and watched AI grow over the years. I have watched the fads come and go. People thought that neural nets would change the world, then dismissed them as nothing more than simplistic linear algebra operations masquerading as some sort of model of how living organisms solve problems, learn, and "think," and, in the case of humans, exhibit self-awareness and consciousness -- whatever that is...
Then the world of AI was set afire by a real problem to solve -- SPAM email! Suddenly it was all about Bayesian filters, and shortly thereafter we all wanted chatbots, so NLP, or natural language processing, became all the rage. But I, for one, have always put my faith in neural nets. From the first time I read, as a teenager, the 1943 paper by McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," which outlined the idea of artificial neurons and the mathematical basis for them, I was hooked.
Over the years, as I developed games, I would try to use NNs (neural nets) anywhere I could. To me, they naturally mirrored the biological wetware brains of humans: a network of interconnected neurons that could signal each other, where the signaling could cause other neurons to "fire" or be "suppressed." The mathematical representation simplified much of the organic evolutionary process that resulted in our current biological neurons. Instead of pulsating electrochemical ion signaling across connections (synapses), McCulloch and Pitts modeled the neurons as simple summation functions: each neuron has a number of inputs, each input has a weight, the weighted sum is computed, and the output is fed through a final function to allow for non-linear functional mappings.
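To make that concrete, here is a minimal sketch of that kind of neuron in plain JavaScript. The weights, bias, and inputs are made up purely for illustration; a real network would learn them rather than have them hand-picked.

```javascript
// A minimal sketch of a McCulloch-Pitts style artificial neuron:
// weighted inputs, a sum, and a final (here, threshold) function.

function neuronOutput(inputs, weights, bias, activation) {
  // weighted sum of the inputs, plus a bias term
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
  // the final function allows non-linear mappings; a hard threshold here
  return activation(sum);
}

const step = x => (x > 0 ? 1 : 0);            // classic threshold "fire or not"
const sigmoid = x => 1 / (1 + Math.exp(-x));  // a smooth alternative

// "fires" only when both inputs are on: a 2-input AND neuron
console.log(neuronOutput([1, 1], [0.6, 0.6], -1, step)); // 1
console.log(neuronOutput([1, 0], [0.6, 0.6], -1, step)); // 0
```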
All of this seemed exactly right to me. Evolution isn't clever; in fact, it's very frugal and brute force. It wants to solve a problem and then be done with it; it may be elegant or messy, but it finds a way to do it with the least amount of fuss. The artificial neurons in the McCulloch and Pitts paper seemed to me to be more than enough to build a "brain": a brain that could learn by programming the weights, a brain that could predict and find subtle differences no human could discern, and that would some day be the basis for an "intelligent" machine or AI. Then couple this with the ideas of genetic algorithms, imagine populations of neural nets competing and having offspring, and, oh my, it made my head spin.
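For fun, here is a very rough sketch of that "competing and having offspring" idea in JavaScript: a toy genetic algorithm that evolves the weights of single neurons until one learns the AND function. The population size, mutation rate, and task are arbitrary choices for illustration, not a recipe.

```javascript
// Toy neuroevolution: evolve neuron weights with a genetic algorithm.
const DATA = [
  { inputs: [0, 0], target: 0 },
  { inputs: [0, 1], target: 0 },
  { inputs: [1, 0], target: 0 },
  { inputs: [1, 1], target: 1 },
];

const randomWeight = () => Math.random() * 2 - 1;
const newNeuron = () => ({
  weights: [randomWeight(), randomWeight()],
  bias: randomWeight(),
});

// Same weighted-sum-plus-threshold neuron as in the sketch above
function fire(neuron, inputs) {
  const sum = inputs.reduce((s, x, i) => s + x * neuron.weights[i], neuron.bias);
  return sum > 0 ? 1 : 0;
}

// Fitness = how many of the four cases the neuron gets right
const fitness = n => DATA.filter(d => fire(n, d.inputs) === d.target).length;

// Offspring = copy of a parent with small random mutations
const mutate = parent => ({
  weights: parent.weights.map(w => w + (Math.random() - 0.5) * 0.5),
  bias: parent.bias + (Math.random() - 0.5) * 0.5,
});

let population = Array.from({ length: 20 }, newNeuron);
for (let gen = 0; gen < 50; gen++) {
  population.sort((a, b) => fitness(b) - fitness(a)); // best neurons first
  const parents = population.slice(0, 5);             // survivors breed
  population = parents.concat(parents.flatMap(p => [mutate(p), mutate(p), mutate(p)]));
}
population.sort((a, b) => fitness(b) - fitness(a));
console.log("best fitness (out of 4):", fitness(population[0]));
```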
But, I would have to wait...
I would wait decades before we would reach the computational horsepower, memory capacity, and sheer amount of training data needed to realize the dream of a truly intelligent artificial intelligence, one that could pass the Turing test, one that was smarter than us.
And then as if out of nowhere it happened...
At first, I was skeptical. I knew it would happen, should happen, could happen, but no one seemed to have the right recipe to make a quantum leap in the field until now. Well, a few years ago-ish. I won't bore you with the details, but the Generative Pre-trained Transformer model and the idea of attention are outlined in this paper, for those with the math chops to follow:
But the bottom line is that before this idea, it was very hard to create a system that could predict the next token, idea, or element in a sequence. In other words, given a sequence of tokens, what is the most likely next token, given a desired context? This is a really hard problem to solve, and it gets harder the longer you want the predictions to remain coherent.
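To make the problem concrete, here is a toy next-token predictor in JavaScript that only counts bigrams (pairs of adjacent tokens). It can only look one token back, which is exactly why this kind of approach falls apart so quickly; the transformer's trick is conditioning on a long context, and that is the hard part.

```javascript
// Toy next-token prediction from bigram counts.
function buildBigramCounts(tokens) {
  const counts = {};
  for (let i = 0; i < tokens.length - 1; i++) {
    const prev = tokens[i], next = tokens[i + 1];
    counts[prev] = counts[prev] || {};
    counts[prev][next] = (counts[prev][next] || 0) + 1;
  }
  return counts;
}

function predictNext(counts, prevToken) {
  const candidates = counts[prevToken];
  if (!candidates) return null;
  // pick the most frequent follower of the previous token
  return Object.entries(candidates).sort((a, b) => b[1] - a[1])[0][0];
}

const corpus = "the ship fires the laser the ship moves left".split(" ");
const counts = buildBigramCounts(corpus);
console.log(predictNext(counts, "the")); // "ship"
```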
Now, let's skip forward to the present. We have all heard of ChatGPT and these new LLMs (large language models), and many of you have even played with them and may have started new companies based on them. But I am here to tell you my story of using ChatGPT, colored by decades of watching computer science advance from 8-bit microprocessors to quantum computers. In all that time, countless tools have been developed that pundits thought would not only disrupt the industry but put programmers out of business.
For example, when the transition was made from assembly language programming to high-level languages and compilers, everyone thought that "real" programmers would no longer be needed. And then when graphical drag-and-drop IDEs came out, again, pundits warned this would put programmers out of business. And the list goes on and on.
However, and I say this with a very serious tone and some trepidation: in the past, all those tools were not "intelligent" and surely not smarter than their makers. But I am afraid to say that ChatGPT and similar LLMs are smarter than us; it's as simple as that.
And they are doing a lot more than text prediction. They are, in my opinion, intelligent and creative, and can be interacted with as you would interact with a human. I know because I have been playing with LLMs for years, and with ChatGPT almost every day for the last year while developing a course on Game Development with AI (more on that later). The conclusion I have come to is that ChatGPT is intelligent, but it is not sentient (as if we know what that means). I don't believe it is self-aware. It's just data and enormous amounts of computing. But still, in the same way a fruit fly is intelligent, or a honey bee, or a colony of ants, ChatGPT is intelligent. Who cares if it isn't self-aware or sentient? Put it in the flight controller of an F-22 and you had better not be in the same airspace; it would win the dogfight!
Now, over the last year, my goal was to develop a new course on udemy about Game Development with JavaScript and AI. I wanted to see if I could really use ChatGPT to create working games from text prompts. I worked with this AI for hundreds of hours, going back and forth as if it were a colleague or an intern (with a genius-level IQ), and I have to say that I literally can't believe the results. ChatGPT not only exhibits intelligence; it has a sense of humor, it knows when it makes mistakes, and it gets confused, but it ultimately passed the programming Turing test with flying colors. It created working games faster than any human could have, and they worked and played well. Aside from the block graphics and placeholder sounds and music (which I suspect will be remedied within months to a year by add-ons to ChatGPT 4.0 or version 5.0), the games only required me to skin them with graphics, clean them up, and add some final touches.
It took about 6 months for me to finish the course "Fast and Furious Game Development with JavaScript and AI" which just launched on udemy (link below):
In the course, I teach HTML, CSS, and JavaScript from the ground up (for beginners), then we cover graphics and game dev, and we finish the course by collaborating in real time with ChatGPT to create over half a dozen games. Some example images from the games are below:
Never predict anything, and you will never be wrong. But in this case, I am fairly sure I am not predicting anything: AI has momentum and inertia now. I have played with these LLMs for years, and with ChatGPT since its launch about a year ago, and in that short time I can clearly see the trend. What I predict will happen is a series of punctuated milestones in the field, in the area relating to the title of this article. I am 99% certain that within 1-2 years, ChatGPT will be able not only to write the code for a game 10X more complex than Space Invaders (which I was able to get it to pull off), but also to take pre-sourced art, sounds, and music we provide, or generate all of it on the fly, and use it as assets for a game. I predict that tools and plugins for Unity and Unreal will leverage AI to let "game designers" never write a line of code, drag an object, or put down a single pixel, but instead act as "directors," instructing the AI to make changes, to try this, that, and the other, all at a breakneck pace, resulting in a complete game built by a single person with the help of an AI.
I suspect this will be the same in many industries: AI will start doing the heavy lifting, and humans will be the puppet masters. But all good things come to an end, and when these AIs are 10, 100, 1,000, a million times bigger, faster, and smarter, does it matter if they are sentient? Does it matter if we don't think they are? The bottom line is that these super-intelligences will be able to do things humans couldn't do in 1,000 years, let alone understand. We will see what happens :) But in the meantime, if you want to see the first moments of this evolution and a real-world example of an AI making real games, check out my udemy course "Fast and Furious Game Development with JavaScript and AI". And thanks for stopping by to check out this article.