Olaf Witkowski is the Chief Scientist at Cross Labs, which aims to bridge the divide between intelligence science and AI technology. A researcher of artificial life, Witkowski started in artificial intelligence by exploring the replication of human speech through machines. He founded Commentag in 2007, and in 2009 moved to Japan to continue his research, where he first became interested in artificial life.
In his own words, Witkowski says, “artificial intelligence means that you are trying to copy human intelligence as best as possible. Artificial life says, okay, that’s good, but let’s try to understand human intelligence and recreate it from the fundamental knowledge we have acquired. It’s more constructive. It’s a bit like the Richard Feynman quote: what I cannot create, I do not understand.”
In this interview, we talk to Witkowski about his work in artificial life, how it will advance technology, and why he thinks deep learning is already dead.
I work with an AI company called Cross Compass as part of Cross Labs. I’ve talked with a lot of AI companies over the last three years in Japan. I wanted to do more research into artificial life, and Cross Compass founded a research center which I direct.
At Cross Labs we cover three main areas: the neuroscience of intelligence, the theory of agency and learning, and collective AI.
So the thing is, I’m very biased. I really like things that are open-ended: things that create complexity, or things that model new parts of the mind. I really like attention-based algorithms; not attention in the human sense, but attention mechanisms in neural networks.
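To make the idea concrete, here is a minimal sketch of scaled dot-product attention, the kind of mechanism Witkowski means: each query scores every key, and the scores become softmax weights over the values. The shapes and toy inputs are illustrative assumptions, not anything from the interview.

```python
# A minimal sketch of scaled dot-product attention (illustrative shapes
# and random toy inputs; not code from the interview).
import numpy as np

def attention(Q, K, V):
    """Each query scores all keys; the scores become softmax weights over the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = attention(Q, K, V)      # shape (4, 8): one context vector per query
```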
Motivation has been very interesting too. The basics of Karl Friston’s free energy principle are really attractive: valuing surprise, the idea that you want to be surprised. That idea existed already, but Friston formalized it; he modeled the error, the difference between prediction and result. It’s called predictive coding, and it’s used in machine learning. I’m not in love with everything about it, but at the end of the day, it works.
I’m attracted to taking little principles from living systems and translating them into a coded system.
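As one illustration of translating such a principle into code, here is a minimal predictive-coding sketch: an agent predicts its next observation, measures the surprise (the prediction error), and updates its internal model to reduce it. The linear model, learning rate, and toy world dynamics are all illustrative assumptions.

```python
# A minimal predictive-coding sketch: learn a model of the world purely by
# reducing prediction error. The linear model and dynamics are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = 0.0       # the agent's internal model: it predicts next_obs = w * obs
lr = 0.1

obs = 1.0
for t in range(5000):
    prediction = w * obs
    next_obs = 0.8 * obs + rng.normal(scale=0.5)   # the world's actual dynamics
    error = next_obs - prediction                  # the "surprise"
    w += lr * error * obs                          # update to reduce future surprise
    obs = next_obs

print(w)   # approaches 0.8: the world's regularity, learned from error alone
```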
Yeah. So the trends I’ve noticed, they’re not new; people have been working on them. But I think GANs are super nice. I could talk about GANs for a long time, because they’re an example of two models interacting with each other adversarially to create complexity and make new discoveries.
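For readers new to the setup, here is a minimal sketch of that adversarial pairing: a generator learns to produce samples from a toy one-dimensional distribution while a discriminator learns to tell them from the real thing. The architectures and hyperparameters are illustrative assumptions.

```python
# A minimal GAN sketch: two models trained against each other.
# Architectures, data, and hyperparameters are illustrative toy choices.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the "true" distribution
    fake = G(torch.randn(64, 8))            # generator turns noise into candidates

    # The discriminator learns to tell real from fake...
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...while the generator learns to fool it: each model pressures the other.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```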
We have one student who is using GANs for cellular automata, trying to see if you can discover new rules and create more complexity; it was partially inspired by this study. You see immediately the application of that, right? How can we create systems automatically? What’s the meta-algorithm that can create more discovery for free? So it’s about algorithms that try to discover more things.
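To give a feel for the rule space such a search explores, here is a minimal sketch of a one-dimensional elementary cellular automaton, where a “rule” is just an 8-entry lookup table; in a setup like the one described above, a generator might propose such tables while a discriminator scores the patterns they produce. This is an illustrative sketch, not the student’s actual code.

```python
# A minimal elementary cellular automaton: the "rule" is an 8-entry table
# mapping each 3-cell neighborhood to the next cell state (toy sketch).
import numpy as np

def step(cells, rule):
    """Apply an elementary CA rule (8-bit lookup table) to a row of cells."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    index = 4 * left + 2 * cells + right   # each neighborhood as a 3-bit number
    return rule[index]

rule110 = np.array([(110 >> i) & 1 for i in range(8)])  # the classic Rule 110
cells = np.zeros(64, dtype=int)
cells[32] = 1                                           # a single seed cell
for _ in range(32):
    cells = step(cells, rule110)
```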
2010, I think? Given my interest in the nature of language, it was natural to try to recreate it with AI. And to recreate it properly, one must understand what principles in living systems are necessary to make this communication happen. This happens to be an important research question in Artificial Life. It’s a rather small field, but really creative and promising, with plenty of key researchers here in Japan.
In 2018 we organized a big conference in Tokyo, and we should have another one next September. The field of AI itself is super interesting, and it will keep growing, but deep learning – which is what people often mean when they say AI – is already dead.
What I mean is, it’s kind of an old idea. It was invented in the 60s, although the first models weren’t doing any learning. The first attempts at learning came later, even before backpropagation existed. We recently made neural learning work extremely well with convolutions and other tricks, helped by huge amounts of computation we didn’t have before.
It’s pretty cool: we’ve combined backprop with those architectures, and now we’re applying this research to many application areas. But the research in deep learning itself is actually dead.
We are on a plateau. And we’ll be stuck on that plateau until we discover learning principles that are radically different. As long as we spend time tuning the current paradigm, we are not focusing on searching for other, different AI algorithms. Within the field of AI, researchers are slowly starting to explore new ideas, but ALife (artificial life) has already been doing that for a few decades. This is why I feel like ALife is the natural next step for the larger field. It’s not the only one, of course, but it’s a very promising one.
ALife studies how to reproduce life from the bottom up. There is a subfield of ALife called open-ended evolution, an area I’m particularly interested in. It looks at mechanisms able to create novelty by themselves, forever.
So, think of the Earth. It’s isolated, but it keeps creating more diversity. We have different species, and as humans we create and recreate theorems; so why do we have all this diversity, and all this richness? Think of the Earth as a box: how come, when you shake it and wait long enough, suddenly you get humans? And then from that you have technology, and then you have robots, and then you have robots that kill all the humans…
Well, okay, that last one was a joke, but you get all the exciting stuff.
In our research into open-ended evolution, we try to understand phase transitions in the emergence of intelligence: how come, if you shake a box of stuff, the stuff turns into more interesting stuff? How come atoms naturally evolve through time into intelligent machines? I apply those principles to machine learning, basically: applying that idea to create algorithms that keep creating, and within that you have the invention of, or rather the emergence of, goals.
Concepts like intrinsic motivation are a close example. The idea is to get intelligent behavior without hard-coding it, and without giving the machine any data. Instead, driven by internal goals, the agent will try to discover new solutions. We want to avoid hard-coding rules into a robot, or just presenting it with loads of data.
We want them to be fully emergent instead, by using fundamental principles of intelligence, such as maximizing their “empowerment” over the whole system, or their acquisition of relevant information. We want the agent to discover by itself the goals that will drive it to better solutions, and to keep improving them continuously. For example, we want robots to implement curiosity, and all the skills that make babies excellent learners. We want to implant the mechanisms that create similar drives inside a robot. If it works, that’s a better way to get intrinsic motivation.
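A minimal sketch of that idea, assuming simple prediction-error curiosity in a toy world (in the spirit of curiosity-driven exploration generally, not Cross Labs’ actual system): the agent keeps a forward model of its environment and, with no external data or rewards, seeks out the transitions it cannot yet predict.

```python
# A toy curiosity-driven agent: its only drive is to reduce its own
# prediction error about the world. Everything here is an illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 4
true_next = rng.integers(0, n_states, size=(n_states, n_actions))  # hidden world dynamics
model = np.full((n_states, n_actions, n_states), 1.0 / n_states)   # learned forward model

state, lr, total_surprise = 0, 0.2, 0.0
for t in range(2000):
    # Curiosity: pick the action whose outcome the model is least certain about.
    entropy = -(model[state] * np.log(model[state] + 1e-9)).sum(axis=1)
    action = entropy.argmax()

    nxt = true_next[state, action]
    total_surprise += 1.0 - model[state, action, nxt]   # prediction error = intrinsic reward
    # Update the forward model toward what actually happened.
    model[state, action] += lr * (np.eye(n_states)[nxt] - model[state, action])
    state = nxt
```

As the forward model improves, the surprise signal dwindles, which is exactly what pushes the agent on toward whatever it has not yet mastered.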
I guess along the way to looking at what life is, you’ve got smaller goals like what is intelligence, what is curiosity, what is motivation. From your perspective, what kind of practical solutions or use cases do you see for this kind of research?
So I think if you’re trying to solve problems now, that can drive you for a while, but it’s the same as with the environment. What I mean is, when you try to fix a problem within a time window of one to four years, for example, yeah, you can patch it, but it will reappear in a few years. So I think of ALife as long-term research. It’s not going to solve problems itself; it’s going to solve problems as a side effect of better understanding what nature is about.
There is of course the big question: what is life? What is intelligence? So that’s my big question. What is the nature of intelligent systems? What is the difference between an intelligent system and, say, this cup of coffee? People say cups are not intelligent because they’re just objects, but actually we are objects too; we’re just more complex.
We have other degrees of freedom too, but even a cup has affordances; ways to grab it, et cetera. It has intelligence in the sense that it is made in a shape that has been selected over many generations of cultural evolution. This is the same for all objects, and it’s a sort of intelligence.
So there are things that we can measure here, but we don’t yet have the theory to measure them. Trying to understand that is the purpose of the field, and the part of the field I’m interested in.
Where do you see AI and ALife going next?
So we know that backprop is really quick, but we need to either tweak it or find another paradigm. Finding another paradigm is going to take time, so for now we can add tweaks to make the current one more interesting.
Examples are attention mechanisms, making networks adversarial, or making them communicate, which hasn’t been talked about so much. That’s actually a lot of my research: collective AI. So GANs are about conflict, right? But I believe in nature you also have parasitism and cooperative mutualisms, and that’s actually very easy to translate into math; it’s basically networks helping networks, with both gaining from the interaction.
We did it with two networks, and the basic system works by having a teacher and a learner, and you try to transfer knowledge. It’s very tricky if they have different tasks, but maybe they still have knowledge they can transfer to each other.
The transfer of knowledge is one thing, but there’s also collaboration; maybe you have systems that can discover new solutions together they couldn’t discover alone. We wrote a paper about this, and it’s an exciting research direction. It works on the principle that maybe there is an intrinsic value to misinterpreting information, leading to the type of learning that would not have happened without social relations among the learning agents. Communication-based AI, basically.
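A minimal sketch of that “networks helping networks” idea, assuming a mutual-distillation setup in the spirit of deep mutual learning (the paper Witkowski mentions may work differently, and for simplicity both networks here share one task): two networks train on their own objective while also pulling their predictions toward each other, so each gains from the other’s view.

```python
# A toy mutual-learning sketch: two networks train on a shared task and
# also distill from each other's predictions. All choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

net_a = nn.Linear(16, 4)
net_b = nn.Linear(16, 4)
opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)

for step in range(1000):
    x = torch.randn(32, 16)
    y = (x[:, 0] > 0).long() + 2 * (x[:, 1] > 0).long()   # toy 4-class labels

    logits_a, logits_b = net_a(x), net_b(x)
    task_loss = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)

    # Mutual term: each network is pulled toward the other's predictions,
    # so knowledge flows in both directions.
    log_p_a = F.log_softmax(logits_a, dim=1)
    log_p_b = F.log_softmax(logits_b, dim=1)
    mutual = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction="batchmean") \
           + F.kl_div(log_p_b, log_p_a.exp().detach(), reduction="batchmean")

    loss = task_loss + 0.5 * mutual
    opt.zero_grad(); loss.backward(); opt.step()
```

With a different task per network, the same kind of coupling term becomes the tricky part Witkowski mentions: deciding what is worth transferring when the objectives differ.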
Also published on: https://lionbridge.ai/articles/deep-learning-is-dead-towards-artificial-life-with-olaf-witkowski/