Taming the AI Beast

by David Gerster and Trevor Mottl, January 19th, 2023


After OpenAI launched ChatGPT in late 2022, it was immediately clear that a new beast was prowling the AI ecosystem. More than a million users tried ChatGPT in its first week after launch. Most marveled at its ability to read and write complex natural language; some dwelled on its limitations (especially its habit of making things up); others hailed it as a milestone for human-level “artificial general intelligence.”


There’s no need to rehash what ChatGPT can do. Unless you’ve been living under a rock, you know it can answer questions, summarize documents, write essays, create code, or simply chat. (Plus, it can do all of this in multiple languages, optionally translating between them.) There’s also no need to get into how it manages these eerily human feats of inference because nobody seems to know. But it turns out that if you feed a neural net 400 billion words (including all of Wikipedia, which weighs in at a puny 3 billion) and give it tools to parse for meaning, then it can mimic human intelligence via sheer brute force — an extreme case of the unreasonable effectiveness of data.


The real goal, however, is not to replicate (the limitations of) human intelligence but rather to help control the rapidly evolving bestiary of specialized, superintelligent AIs. Imagine a future code-writing AI that does a perfect job, or at least better than the best humans. The challenge will be getting it to understand what we want so it can work its magic. Now swap out software expertise for anything else — law, medicine, fantasy sports — and you start to get the idea. The human will remain firmly in the loop, conducting a symphony of mechanical savants like an octopus directing its semi-sentient arms.


In practice, human experts will still be needed: Doctors and lawyers will simply do more, better, and faster work as specialist AIs free them from drudgery. More than ever, we’ll also need creative people who can solve complex problems across disciplines. David Epstein explores this idea in “Range: Why Generalists Triumph in a Specialized World,” noting that modern work demands “the ability to apply knowledge to new situations and different domains.” In a world where humans routinely collaborate with expert AIs, well-rounded generalists might see as much demand as deep specialists, and “range” might be the new “10,000 hours.”


Amid the swirl of speculation, one thing is clear: Large language models, such as the one behind ChatGPT, will only improve as Microsoft and Google go to war. Microsoft invested $1 billion in OpenAI (and might invest $10 billion more), which runs on its Azure cloud; Google has its own state-of-the-art models and has declared a “code red” as it battles disruption in both web search and cloud computing. General-purpose models will only get broader, and specialist models (such as the one OpenAI trained on billions of lines of code) will only get deeper. In this war of machines, it’s the humans who will win.