
An Interview With Ilya Sutskever, Co-Founder of OpenAI

by Eye on AI, March 20th, 2023

Too Long; Didn't Read

AI has already taken over many aspects of our lives. But what's coming is far more advanced, far more powerful. We're moving into uncharted territory. But it’s also important not to overreact, not to withdraw like turtles from the bright sun now shining upon us.

As we hurtle towards a future filled with artificial intelligence, many commentators are wondering aloud whether we're moving too fast. The tech giants, the researchers, and the investors all seem to be in a mad dash to develop the most advanced AI.


But, the worriers ask, are they considering the risks?


The question is not an idle one, and rest assured that there are hundreds of incisive minds considering the dystopian possibilities - and ways to avoid them.


But the fact is that the future is unknown; the implications of this powerful new technology are as unimagined as social media was at the advent of the Internet.


There will be good and there will be bad, but there will be powerful artificial intelligence systems in our future and even more powerful AIs in the future of our grandchildren. It can’t be stopped, but it can be understood.


I spoke about this new technology with Ilya Sutskever, a co-founder of OpenAI, the not-for-profit AI research institute whose spinoffs are likely to be among the most profitable entities on earth.


My conversation with Ilya was shortly before the release of GPT-4, the latest iteration of OpenAI’s giant AI system, which has consumed billions of words of text - more than any one human could possibly read in a lifetime.


GPT stands for Generative Pre-trained Transformer, three important words in understanding this Homeric Polyphemus. Transformer is the name of the algorithm at the heart of the giant.


Pre-trained refers to the behemoth’s education with a massive corpus of text, teaching it the underlying patterns and relationships of language - in short, teaching it to understand the world.


Generative means that the AI can create new thoughts from this base of knowledge.
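
To make those three words a little more concrete, here is a minimal sketch of what a generative, pre-trained transformer does at inference time. It uses the small, openly available GPT-2 model through the Hugging Face transformers library as a stand-in, since GPT-4 itself is reachable only through OpenAI's API, not as downloadable weights; the prompt and sampling settings are purely illustrative.

```python
# A minimal sketch: sampling a continuation from a generative, pre-trained
# transformer. GPT-2 (small and openly available) stands in for GPT-4.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In Homer's Odyssey, the cyclops Polyphemus"
inputs = tokenizer(prompt, return_tensors="pt")

# "Generative": the model extends the prompt one token at a time, each step
# sampling from its learned distribution over the next token.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

"Pre-trained" is the part you don't see here: the weights loaded by `from_pretrained` already encode what the model learned from its training corpus.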


AI has already taken over many aspects of our lives. But what's coming is far more advanced, far more powerful. We're moving into uncharted territory. And it's worth taking a moment to consider what that means.


But it’s also important not to overreact, not to withdraw like turtles from the bright sun now shining upon us. In Homer's epic poem "The Odyssey," the cyclops Polyphemus traps Odysseus and his crew in his cave, intending to eat them.


But Odysseus manages to blind the giant and escape. AI will not eat us.


Ilya Sutskever is a cofounder and chief scientist of OpenAI and one of the primary minds behind the large language model GPT-4 and its public progeny, ChatGPT, which I don’t think it’s an exaggeration to say is changing the world.


This isn’t the first time Ilya has changed the world. He was the main impetus for AlexNet, the convolutional neural network whose dramatic performance stunned the scientific community in 2012 and set off the deep learning revolution.


The following is an edited transcript of our conversation.


CRAIG: Ilya, I know you were born in Russia. What got you interested in computer science, if that was the initial impulse, or neuroscience, or whatever it was?


ILYA: Indeed, I was born in Russia. I grew up in Israel, and then as a teenager, my family immigrated to Canada. My parents say I was interested in AI from an early age. I also was very motivated by consciousness. I was very disturbed by it, and I was curious about things that could help me understand it better.


I started working with Geoff Hinton [one of the founders of deep learning, the kind of AI behind GPT-4, and a professor at the University of Toronto at the time] very early, when I was 17, because we moved to Canada and I was immediately able to join the University of Toronto. I really wanted to do machine learning, because that seemed like the most important aspect of artificial intelligence that at the time was completely inaccessible.


That was 2003. We take it for granted that computers can learn, but in 2003, we took it for granted that computers can't learn. The biggest achievement of AI back then was Deep Blue, [IBM’s] chess playing engine [which beat world champion Garry Kasparov in 1997].


But there, you have this game and you have this research, and you have this simple way of determining if one position is better than another. And it really did not feel like that could possibly be applicable to the real world because there was no learning. Learning was this big mystery. And I was really, really interested in learning. To my great luck, Geoff Hinton was a professor at the university, and we began working together almost right away.


So how does intelligence work at all? How can we make computers be even slightly intelligent? I had a very explicit intention to make a very small, but real contribution to AI. So, the motivation was, could I understand how intelligence works? And also make a contribution towards it? So that was my initial motivation. That was almost exactly 20 years ago.


In a nutshell, I had the realization that if you train a large and deep neural network on a big enough dataset that specifies some complicated task that people do, such as vision, then you will succeed necessarily. And the logic for it was irreducible; we know that the human brain can solve these tasks and can solve them quickly. And the human brain is just a neural network with slow neurons.


So, then we just need to take a smaller but related neural network and train it on the data. And the best neural network inside the computer will be related to the neural network that we have in our brains that performs this task.
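
For readers who have never seen the recipe Ilya is describing, here is a toy version of it. The PyTorch sketch below trains a small deep network on a synthetic labeled dataset; the data, architecture, and sizes are placeholders rather than the actual AlexNet setup, but the loop - predict, measure the error, adjust the weights - is the same idea at miniature scale.

```python
# A toy illustration of "train a deep neural network on a labeled dataset".
# The data here is synthetic; a real vision task would use labeled images.
import torch
from torch import nn

# Placeholder dataset: 1,000 fake "images" with 10 class labels.
inputs = torch.randn(1000, 3 * 32 * 32)
labels = torch.randint(0, 10, (1000,))

# A small deep network standing in for something like AlexNet.
model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)   # how wrong are the predictions?
    loss.backward()                          # gradients w.r.t. every weight
    optimizer.step()                         # nudge weights to reduce the loss
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```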


CRAIG: In 2017, the "Attention Is All You Need" paper came out introducing self-attention and transformers. At what point did the GPT project start? Was there some intuition about transformers?


ILYA: So, for context, at OpenAI from the earliest days, we were exploring the idea that predicting the next thing is all you need. We were exploring it with the much more limited neural networks of the time, but the hope was that if you have a neural network that can predict the next word, it'll solve unsupervised learning. So back before the GPTs, unsupervised learning was considered to be the Holy Grail of machine learning.


Now it's been fully solved, and no one even talks about it, but it was a Holy Grail. It was very mysterious, and so we were exploring the idea. I was really excited about it, that predicting the next word well enough is going to give you unsupervised learning.


But our neural networks were not up for the task. We were using recurrent neural networks. When the transformer came out, literally as soon as the paper came out, literally the next day, it was clear to me, to us, that transformers addressed the limitations of recurrent neural networks, of learning long-term dependencies.


It's a technical thing. But we switched to transformers right away. And so, the very nascent GPT effort continued then with the transformer. It started to work better, and you make it bigger, and then you keep making it bigger.


And that's what led to eventually GPT-3 and essentially where we are today.
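
The "predict the next word" objective Ilya describes can be written down in a few lines. The sketch below uses a toy vocabulary and deliberately omits the transformer's attention layers (a single embedding and output layer stand in for the model), so it shows only the objective itself: shift the sequence by one position and minimize cross-entropy on the next token.

```python
# The core of GPT-style pre-training: given tokens t_1..t_{n-1}, predict
# t_2..t_n, and minimize cross-entropy. Everything here is a toy stand-in;
# a real GPT puts causal self-attention layers between embed and lm_head.
import torch
from torch import nn

vocab_size, seq_len, batch = 100, 16, 8
tokens = torch.randint(0, vocab_size, (batch, seq_len))  # fake token ids

embed = nn.Embedding(vocab_size, 64)
lm_head = nn.Linear(64, vocab_size)     # distribution over the next token

hidden = embed(tokens[:, :-1])          # positions 1..n-1 as context
logits = lm_head(hidden)                # (batch, seq_len-1, vocab_size)
targets = tokens[:, 1:]                 # the "next word" at every position

loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # in real pre-training this step repeats over billions of tokens
print(f"next-token cross-entropy: {loss.item():.3f}")
```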


CRAIG: The limitation of large language models as they exist is that their knowledge is contained in the language that they're trained on. And most human knowledge, I think everyone agrees, is non-linguistic.


Their objective is to satisfy the statistical consistency of the prompt. They don't have an underlying understanding of the reality that language relates to. I asked ChatGPT about myself. It recognized that I'm a journalist, that I've worked at these various newspapers, but it went on and on about awards that I've never won. And it all read beautifully, but little of it connected to the underlying reality. Is there something that is being done to address that in your research going forward?


ILYA: How confident are we that these limitations that we see today will still be with us two years from now? I am not that confident. There is another comment I want to make about one part of the question, which is that these models just learn statistical regularities and therefore they don't really know what the nature of the world is.


I have a view that differs from this. In other words, I think that learning the statistical regularities is a far bigger deal than meets the eye.


Prediction is also a statistical phenomenon. Yet to predict you need to understand the underlying process that produced the data. You need to understand more and more about the world that produced the data.


As our generative models become extraordinarily good, they will have, I claim, a shocking degree of understanding of the world and many of its subtleties. It is the world as seen through the lens of text. It tries to learn more and more about the world through a projection of the world on the space of text as expressed by human beings on the internet.


But still, this text already expresses the world. And I'll give you an example, a recent example, which I think is really telling and fascinating. I've seen this really interesting interaction with [ChatGPT] where [ChatGPT] became combative and aggressive when the user told it that it thinks that Google is a better search engine than Bing.


What is a good way to think about this phenomenon? What does it mean? You can say, it's just predicting what people would do and people would do this, which is true. But maybe we are now reaching a point where the language of psychology is starting to be appropriated to understand the behavior of these neural networks.


Now let's talk about the limitations. It is indeed the case that these neural networks have a tendency to hallucinate. That's because a language model is great for learning about the world, but it is a little bit less great for producing good outputs. And there are various technical reasons for that. There are technical reasons why a language model is much better at learning about the world, learning incredible representations of ideas, of concepts, of people, of processes that exist, but its outputs aren't quite as good as one would hope, or rather as good as they could be.


ILYA: Which is why, for example, a system like ChatGPT, which is a language model, has an additional reinforcement learning training process. We call it Reinforcement Learning from Human Feedback.


We can say that in the pre-training process, you want to learn everything about the world. With reinforcement learning from human feedback, we care about the outputs. We say, anytime the output is inappropriate, don't do this again. Every time the output does not make sense, don't do this again.


And it learns quickly to produce good outputs. But that attention to the level of the outputs is not there during the language model pre-training process.


Now on the point of hallucinations, these models have a propensity for making stuff up from time to time, and that's something that also greatly limits their usefulness.


But I'm quite hopeful that by simply improving this subsequent reinforcement learning from human feedback step, we can teach it to not hallucinate. Now you could say is it really going to learn? My answer is, let's find out.


The way we do things today is that we hire people to teach our neural network to behave, to teach ChatGPT to behave. You just interact with it, and it sees from your reaction, it infers, oh, that's not what you wanted. You are not happy with its output.


Therefore, the output was not good, and it should do something differently next time. I think there is quite a high chance that this approach will be able to address hallucinations completely.
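
For readers unfamiliar with Reinforcement Learning from Human Feedback, here is a drastically simplified sketch of the loop Ilya describes. The hard-coded rewards stand in for human judgments (or for a reward model trained on them), and a three-option toy policy stands in for a language model; production RLHF fine-tunes the full pretrained model, typically with an algorithm like PPO rather than plain REINFORCE.

```python
# A toy of the feedback loop: a "policy" proposes outputs, a reward signal
# scores them, and the policy is pushed toward higher-reward outputs.
import torch
from torch import nn

responses = ["helpful answer", "nonsense", "made-up award list"]
rewards = torch.tensor([1.0, -1.0, -1.0])   # stand-in for human judgments

# Toy "policy": unnormalized preferences over the three candidate responses.
logits = nn.Parameter(torch.zeros(3))
optimizer = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    probs = torch.softmax(logits, dim=0)
    idx = torch.multinomial(probs, 1).item()   # sample a response
    log_prob = torch.log(probs[idx])
    loss = -rewards[idx] * log_prob            # REINFORCE: raise P(high reward)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print({r: round(p.item(), 2) for r, p in zip(responses, torch.softmax(logits, 0))})
```

The point of the toy is only the direction of the update: outputs that were judged bad become less likely, and outputs that were judged good become more likely.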


CRAIG: Yann LeCun [chief AI scientist at Facebook and another early pioneer of deep learning] believes that what's missing from large language models is this underlying world model that is non-linguistic that the language model can refer to. I wanted to hear what you thought of that and whether you've explored that at all.


ILYA: I reviewed Yann LeCun's proposal and there are a number of ideas there, and they're expressed in different language and there are some maybe small differences from the current paradigm, but to my mind, they are not very significant.


The first claim is that it is desirable for a system to have multimodal understanding where it doesn't just know about the world from text.


And my comment on that will be that indeed multimodal understanding is desirable, because you learn more about the world, you learn more about people, you learn more about their condition, and so the system will be better able to understand the task it's supposed to solve, the people, and what they want.


We have done quite a bit of work on that, most notably in the form of two major neural nets that we've done. One is called CLIP and one is called DALL-E. And both of them move towards this multimodal direction.
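
CLIP's weights are openly available, and the sketch below shows one common way to use them through the Hugging Face transformers wrappers: scoring how well each of several candidate captions matches an image. The image path is a placeholder, and the captions are invented for the example.

```python
# A small sketch of multimodal understanding with CLIP, via the Hugging Face
# `transformers` wrappers: score how well each caption matches an image.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")            # placeholder image file
captions = ["a photo of a dog", "a photo of a city skyline"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each caption.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0]):
    print(f"{p.item():.2f}  {caption}")
```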


But I also want to say that I don't see the situation as a binary either-or, that if you don't have vision, if you don't understand the world visually or from video, then things will not work.


And I'd like to make the case for that. So, I think that some things are much easier to learn from images and diagrams and so on, but I claim that you can still learn them from text only, just more slowly. And I'll give you an example. Consider the notion of color.


Surely one cannot learn the notion of color from text only, and yet when you look at the embeddings — I need to make a small detour to explain the concept of an embedding. Every neural network represents words, sentences, concepts through representations, ‘embeddings,’ that are high-dimensional vectors.


And we can look at those high-dimensional vectors and see what's similar to what; how does the network see this concept or that concept? And so, we can look at the embeddings of colors and it knows that purple is more similar to blue than to red, and it knows that red is more similar to orange than purple. It knows all those things just from text. How can that be?
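
The comparison Ilya describes is typically done with cosine similarity between embedding vectors. The four-dimensional vectors below are invented purely for illustration (a real model's embeddings are learned from text and have hundreds or thousands of dimensions), but they show the kind of computation behind "purple is more similar to blue than to red."

```python
# Comparing embeddings with cosine similarity. The vectors are made up
# for the example; real embeddings are learned by the model.
import numpy as np

embeddings = {
    "purple": np.array([0.9, 0.1, 0.8, 0.2]),
    "blue":   np.array([0.8, 0.2, 0.9, 0.1]),
    "red":    np.array([0.1, 0.9, 0.2, 0.8]),
    "orange": np.array([0.2, 0.8, 0.3, 0.7]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("purple~blue:", round(cosine(embeddings["purple"], embeddings["blue"]), 3))
print("purple~red: ", round(cosine(embeddings["purple"], embeddings["red"]), 3))
print("red~orange: ", round(cosine(embeddings["red"], embeddings["orange"]), 3))
```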


If you have vision, the distinctions between color just jump at you. You immediately perceive them. Whereas with text, it takes you longer, maybe you know how to talk, and you already understand syntax and words and grammars, and only much later you actually start to understand colors.


So, this will be my point about the necessity of multimodality: I claim it is not necessary, but it is most definitely useful. I think it's a good direction to pursue. I just don't see it in such stark either-or claims.


So, the proposal in [LeCun’s] paper makes a claim that one of the big challenges is predicting high dimensional vectors which have uncertainty about them.


But one thing which I found surprising, or at least unacknowledged in the paper, is that the current autoregressive transformers already have that property.


I'll give you two examples. One is, given one page in a book, predict the next page in a book. There could be so many possible pages that follow. It's a very complicated, high-dimensional space, and they deal with it just fine. The same applies to images. These autoregressive transformers work perfectly on images.


For example, at OpenAI we've done work on iGPT. We just took a transformer, and we applied it to pixels, and it worked super well, and it could generate images in very complicated and subtle ways. With DALL-E 1, same thing again.
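
As a rough illustration of the iGPT idea described above, the toy loop below generates an image the same way a language model generates text: one discrete pixel value at a time, each conditioned on the pixels sampled so far. The tiny untrained model is a placeholder (real iGPT applies causal self-attention over the context), so the output is noise; only the autoregressive loop is the point.

```python
# Autoregressive image generation in miniature: sample pixels one at a time.
import torch
from torch import nn

embed = nn.Embedding(256, 64)    # one vector per possible pixel value (0..255)
head = nn.Linear(64, 256)        # predicts a distribution over the next pixel

generated = [0]                               # arbitrary first pixel
for _ in range(16 * 16 - 1):                  # fill in a 16x16 grayscale image
    context = torch.tensor(generated).unsqueeze(0)
    hidden = embed(context)                   # a real iGPT would apply causal
    logits = head(hidden[:, -1])              # self-attention over this context
    probs = torch.softmax(logits, dim=-1)
    next_pixel = torch.multinomial(probs, 1).item()
    generated.append(next_pixel)

image = torch.tensor(generated).reshape(16, 16)
print(image.shape, int(image.min()), int(image.max()))
```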


So, as for the part where I thought the paper made a strong claim that current approaches can't deal with predicting high-dimensional distributions - I think they definitely can.


CRAIG: On this idea of having an army of human trainers that are working with ChatGPT or a large language model to guide it in effect with reinforcement learning, just intuitively, that doesn't sound like an efficient way of teaching a model about the underlying reality of its language.


ILYA: I don't agree with the phrasing of the question. I claim that our pre-trained models already know everything they need to know about the underlying reality. They already have this knowledge of language and also a great deal of knowledge about the processes that exist in the world that produce this language.


The thing that large generative models learn about their data — and in this case, large language models — are compressed representations of the real-world processes that produced this data, which means not only people and something about their thoughts, something about their feelings, but also something about the condition that people are in and the interactions that exist between them.


The different situations a person can be in. All of these are part of that compressed process that is represented by the neural net to produce the text. The better the language model, the better the generative model, the higher the fidelity, the better it captures this process.


Now, the army of teachers, as you phrase it, indeed, those teachers are also using AI assistance. Those teachers aren't on their own. They're working with our tools and the tools are doing the majority of the work. But you do need to have oversight; you need to have people reviewing the behavior because you want to eventually achieve a very high level of reliability.


There is indeed a lot of motivation to make it as efficient and as precise as possible so that the resulting language model will be as well behaved as possible.


ILYA: So yeah, there are these human teachers who are teaching the model desired behavior. And the manner in which they use AI systems is constantly increasing, so their own efficiency keeps increasing.


It's not unlike an education process: teaching it how to act well in the world.


We need to do additional training to make sure that the model knows that hallucination is not okay ever. And it's that reinforcement learning human teacher loop or some other variant that will teach it.


Something here should work. And we will find out pretty soon.


CRAIG: Where is this going? What research are you focused on right now?


ILYA: I can't talk in detail about the specific research that I'm working on, but I can mention some of the research in broad strokes. I'm very interested in making those models more reliable, more controllable, making them learn faster from less data, fewer instructions. Making them so that indeed they don't hallucinate.


CRAIG: I heard you make a comment that we need faster processors to be able to scale further. It appears there's no end in sight to the scaling of models, but the power required to train these models - we're reaching the limit, at least the socially accepted limit.


ILYA: I don't remember the exact comment that I made that you're referring to, but you always want faster processors. Of course, power keeps going up. Generally speaking, the cost is going up.


And the question that I would ask is not whether the cost is large, but whether the thing that we get out of paying this cost outweighs the cost. Maybe you pay all this cost, and you get nothing, then yeah, that's not worth it.


But if you get something very useful, something very valuable, something that can solve a lot of problems that we have, which we really want solved, then the cost can be justified.


CRAIG: You did talk at one point, I saw, about democracy and about the impact that AI can have on democracy.


People have talked to me about a day when, for conflicts that seem unresolvable, if you had enough data and a large enough model, you could train the model on the data and it could come up with an optimal solution that would satisfy everybody.


Do you think about where this might lead in terms of helping humans manage society?


ILYA: It's such a big question because it's a much more future looking question. I think that there are still many ways in which our models will become far more capable than they are right now.


It's unpredictable exactly how governments will use this technology as a source of advice of various kinds.


I think that to the question of democracy, one thing which I think could happen in the future is that because you have these neural nets and they're going to be so pervasive and they're going to be so impactful in society, we will find that it is desirable to have some kind of a democratic process where, let's say the citizens of a country provide some information to the neural net about how they'd like things to be. I could imagine that happening.


That can be a very high bandwidth form of democracy perhaps, where you get a lot more information out of each citizen and you aggregate it, specify how exactly we want such systems to act. Now it opens a whole lot of questions, but that's one thing that could happen in the future.


But what does it mean to analyze all the variables? Eventually there will be a choice you need to make where you say, these variables seem really important. I want to go deep. Because I can read a hundred books, or I can read a book very slowly and carefully and get more out of it. So, there will be some element of that. Also, I think it's probably fundamentally impossible to understand everything in some sense. Let's take some easier examples.


Anytime there is any kind of complicated situation in society, even in a company, even in a mid-size company, it's already beyond the comprehension of any single individual. And I think that if we build our AI systems the right way, I think AI could be incredibly helpful in pretty much any situation.


Craig S. Smith is a former correspondent and executive at The New York Times. He is the host of the podcast Eye on A.I.

