
Mr. Altman's Imaginarium: Is the Future Far Away?

by Crypto Bro, October 3rd, 2024

Too Long; Didn't Read

Reviews show that tech experts are dissatisfied—too much “fluff” and praise for how great the AI-powered future will be. No specs, no data, nothing concrete!


Vegetarian pasta on the table, wine served, loud music playing: everything to unwind after the eventful Thanksgiving of 2023. OpenAI CEO Sam Altman, realizing he felt neither in the flow nor resourceful, retreated to his ranch in Napa. This happened right after he almost lost control of the company that "holds the future of humanity in its hands".


Fortunately, everything worked out: Altman was fired, then reinstated five days later, and his face appeared on the cover of Time. OpenAI, once again under his leadership, continued shaking up the information space with new neural network marvels.


Futurists and forecasters were quick to speculate on what lies ahead. Interestingly, it used to be writers like Jules Verne and H. G. Wells who painted visions of the future, describing incredible technologies that have now become a reality. However, the role of technological "seers" is increasingly taken up by top executives of tech companies, those directly involved in creating the future.


Of course, their predictions often have shades of PR, as everyone views the future through the lens of their products. A week ago, Sam Altman published an essay on his website, and it contains several interesting points worth pondering. Now, armed with “clues” from the OpenAI CEO’s text, I’ll throw in a few of my own (including retrospective) reflections.


Based on reviews, it seems that tech experts are dissatisfied—there’s a lot of “fluff” and praise for how great the AI-powered future will be. No specs, no data on parameters, nothing concrete! It’s all just literary musings! Yes, this AI manifesto largely follows the traditions of sci-fi writers, with the key difference being that it may contain hints of where OpenAI is heading.


The first point is the emergence of AI teams, consisting of virtual experts in different fields working together. This likely refers to the development of GPT-agent systems—extensions that allow the chatbot to perform specific tasks. There’s already a vast database of chat plugins for various tasks, and chatbots can connect to external services and each other. OpenAI clearly has ambitions to replicate Apple’s App Store story but in the context of AI agents.
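The "AI team" idea above can be pictured as a coordinator routing subtasks to specialist agents. Here is a minimal toy sketch of that architecture; every name in it (`Agent`, `Coordinator`, the stub specialists) is hypothetical and stands in for LLM-backed agents, not for any actual OpenAI API.

```python
# Toy sketch of an "AI team": a coordinator routes each subtask to the
# agent whose specialty matches. All classes and names here are
# illustrative assumptions, not a real agent framework.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Agent:
    name: str
    specialty: str
    handle: Callable[[str], str]  # takes a subtask, returns a result


class Coordinator:
    """Keeps a registry of specialists and dispatches subtasks to them."""

    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.specialty] = agent

    def run(self, plan: List[Tuple[str, str]]) -> List[str]:
        # plan is a list of (specialty, subtask) pairs
        return [self.agents[spec].handle(task) for spec, task in plan]


# Stub specialists standing in for LLM-backed agents
team = Coordinator()
team.register(Agent("Ada", "code", lambda t: f"code for: {t}"))
team.register(Agent("Flo", "medicine", lambda t: f"triage notes for: {t}"))

results = team.run([("code", "parse CSV"), ("medicine", "flu symptoms")])
print(results[0])  # code for: parse CSV
```

In a real agent ecosystem, each `handle` would be a model call or a plugin invocation; the App Store analogy is precisely that third parties would publish such specialists into a shared registry.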


Altman then writes about how children will have virtual tutors for any subject, similar improvements in healthcare, and the ability to create any software.


Here’s the nuance—the concept of virtual tutors looks promising. However, currently, the cost of AI computations is still quite high, making full-fledged learning through ChatGPT not so economical. Altman doesn’t say how the cost of computations could be reduced, but he does note that it must be done.
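To see why compute cost matters for tutoring, a back-of-envelope estimate helps. Every number below is an assumption chosen only for illustration (not OpenAI's actual pricing), but the structure of the calculation is the point: per-token cost multiplied by usage adds up fast at school scale.

```python
# Back-of-envelope cost of LLM-based tutoring.
# All constants are illustrative assumptions, not real pricing.

PRICE_PER_1K_TOKENS = 0.01   # assumed blended $ per 1,000 tokens
TOKENS_PER_EXCHANGE = 1_500  # assumed question + answer length
EXCHANGES_PER_HOUR = 30      # assumed pace of a tutoring session
HOURS_PER_SCHOOL_YEAR = 900  # assumed instructional hours

cost_per_hour = (TOKENS_PER_EXCHANGE / 1000) * PRICE_PER_1K_TOKENS * EXCHANGES_PER_HOUR
cost_per_pupil_year = cost_per_hour * HOURS_PER_SCHOOL_YEAR

print(f"${cost_per_hour:.2f}/hour, ${cost_per_pupil_year:.0f}/pupil-year")
```

Under these toy assumptions an hour of tutoring is cheap, but multiplied across hundreds of hours and millions of pupils, the bill lands exactly where Altman's infrastructure argument points.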


“If we don’t build sufficient infrastructure, AI will become a very limited resource.”


It feels like Altman is being cryptic and fully understands that the race for affordable AI will be extremely costly—in every sense. The Wall Street Journal estimated the required investments at $5–7 trillion.


Chips and graphics processors needed for AI training are already in short supply (which benefits Nvidia as the main supplier). Altman has discussed allocating funds for a radical overhaul of the semiconductor industry with the U.S. administration, but so far, without specifics.


The Middle East is expected to play a key role here. Saudi money is slated to cover the computing power needed for affordable AI. Negotiations are underway with G42, a technology group from Abu Dhabi. SoftBank CEO Masayoshi Son and Taiwan's TSMC are also involved. Still, Altman primarily intends to secure funds from sheikhs and sovereign wealth funds in the UAE, Saudi Arabia, Kuwait, and Qatar. Rumor has it that OpenAI is negotiating with MGX, a UAE investment fund with $100 billion in capital, for a multi-billion-dollar funding round that could value OpenAI at $150 billion.


Everything seems fine, except Saudi Arabia has a negative human rights reputation. Because of this, some of Altman’s AI-sector colleagues (notably Anthropic) decided not to risk their image and refused Saudi money.


Another major risk is environmental impact. According to the International Energy Agency, global data center energy consumption could more than double by 2026, surpassing the energy use of large countries. Even as AI is enlisted to tackle global climate problems, it is simultaneously fueling an environmental crisis.


The average data center that Altman needs for his mega-neural network consumes as much energy as an entire American metropolis. And several such data centers would be required for AGI to flex its neural muscles. In his essays about a “bright future”, Altman remains silent about these issues, ignoring major obstacles on the path to realizing his plans.


“We may achieve superintelligence within a few thousand days; it may take longer, but I’m confident we’ll get there.”


"A few thousand days" is at least about 5.5 years. If this prediction hides more than vague musing, we could soon see technology comparable to human intelligence, capable of performing many tasks without special training. Sam Altman has long talked about the inevitability of AGI (though the infrastructure for it doesn't exist yet). There are rumors that OpenAI's board was frightened by what the new AI could do and rushed to fire Altman, but then reconsidered. It's unclear whether things are so revolutionary that Altman's colleagues fear a machine uprising, or whether the new models, instead of discovering all physical laws, might reveal new bio-weapon recipes through jailbreaks...
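The timeline arithmetic behind that reading is trivial to check; "a few thousand days" spans roughly half a decade to just under a decade:

```python
# Converting Altman's "a few thousand days" into years.
DAYS_PER_YEAR = 365.25  # average, accounting for leap years

for days in (2_000, 3_000):
    print(f"{days} days ≈ {days / DAYS_PER_YEAR:.1f} years")
```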


Regarding healthcare, the Overton window doesn't seem to have opened enough for people to trust a neural network in serious cases. Yes, AI's diagnostic capabilities are reportedly improving, and it's believable that someone might turn to a chatbot for an initial diagnosis. However, it's also known that ChatGPT can still fabricate information, so there are reasons to believe that even the human error of "flesh-and-blood" doctors won't deter people from visiting clinics. On the other hand, doctors themselves, especially the less skilled ones, might consult ChatGPT. In that case, maybe it's for the better?


The "ability to create any software" through prompts is quite believable, although my programmer friends are skeptical. At the very least, making edits to already generated code could be problematic.


According to research from the Oak Ridge National Laboratory under the U.S. Department of Energy, there’s a high likelihood that by 2040, AI will be able to replace software developers.


Software developers are understandably concerned. Nearly 30% of 550 developers surveyed by Evans Data Corporation, a California market research firm specializing in software development, believe that AI will take over their tasks in the foreseeable future.


AI is already being actively used in software development to automate processes like code generation, refactoring, and bug detection. It plays a significant role in testing and quality assurance, analyzing code, and identifying vulnerabilities. Additionally, AI enhances DevOps and CI/CD processes, optimizing the entire development lifecycle, speeding up product releases, and improving reliability.
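The "bug detection" role mentioned above can be made concrete with a tiny example. Modern AI-assisted tools go far beyond this, but the pipeline hook is the same: scan source code, report findings. Here is a deliberately simple sketch using Python's standard `ast` module to flag one classic bug pattern, mutable default arguments; the function name is my own invention for illustration.

```python
# Minimal static-analysis sketch: flag functions whose default
# arguments are mutable literals (lists, dicts, sets) -- a classic
# Python bug. Stands in for the "bug detection" step an AI-assisted
# CI pipeline would run on every commit.

import ast
from typing import List


def find_mutable_defaults(source: str) -> List[str]:
    """Return names of functions with list/dict/set literal defaults."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(node.name)
    return findings


sample = """
def ok(x=1): pass
def risky(items=[]): pass
"""
print(find_mutable_defaults(sample))  # ['risky']
```

An LLM-based reviewer replaces the hand-written rule with a model call, but it slots into the same place in the development lifecycle: analyze the diff, surface the finding, block or annotate the merge.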


However, despite automation, developers will remain necessary for solving complex problems like system architecture design and strategic decision-making. While AI can handle routine operations, creating a quality product will still require human involvement and creativity.


Although AI can automate many programming tasks, up to 80% of software development jobs will remain human-oriented (McKinsey & Co.).


By the way, Altman acknowledges that AI "may significantly impact the job market (both positively and negatively) in the coming years," but "most professions will change more slowly." (For now, plumbers and plasterers clearly aren't threatened by AI at all.) The OpenAI CEO also writes that new professions will emerge that we don't even consider jobs today. Here I'm reminded of paid courses on prompt writing. It's amusing to watch the ability to articulate thoughts and describe queries in detail being presented as a new skill (with a course to buy!) and even as the profession of the future. However, with the introduction of the o1-preview model, a trend toward simpler prompts has emerged, and neural networks can now write prompts themselves, so the idea of prompting as a future profession is under threat.


“With these new capabilities, we’ll achieve universal prosperity,” writes Sam.


He adds that with AI, all physical laws will be discovered and we'll gain access to virtually unlimited energy sources. Few believe this populist rhetoric. AI is already replacing a number of low-level jobs, forcing many to "retrain as property managers." It's obvious that AI is a platform for a new arms race, and I mean a literal arms race, since AI is clearly a dual-use technology. Which weapons in the near future will be controlled through the ChatGPT API, or through a "military" version of ChatGPT, remains anyone's guess.


So far, this essay points to the following directions for the near future:


  • Development of GPT agents and the creation of an App Store-like ecosystem, where chatbots become the new apps.

  • Development of additional models capable of “self-reflection,” like o1.

  • These reflective models will be the foundation for mass proto-AGI.


In the meantime, the company will be building infrastructure and attracting Saudi investments because, without a large number of chips and data centers, all this grandeur will remain distant.