
AI: How ChatGPT and Big Tech are Shaping the Future of Artificial Intelligence

by Geek on Record, August 19th, 2024

Too Long; Didn't Read

AI is changing the game. Mustafa Suleyman, CEO of Microsoft AI, challenges common fears and calls for governance while addressing AI's impact on society.


ChatGPT gained mainstream popularity in 2023 and changed how Big Tech companies approached artificial intelligence (AI). This marked a major turning point in the AI Wars, a race between leading tech companies to develop the most advanced AI technology.


While AI has been studied and developed for decades, Large Language Models (LLMs) have managed to capture the consumer market’s imagination. Conversational and generative AI finally feels human-like, especially now that models are being trained using massive datasets.


Mainstream media has struggled to explain what LLMs are and what they can actually do, often focusing on the most alarming aspects of these AI models’ capabilities. Will AI eliminate our jobs? Will AI cause the end of the world? Will AI abuse your personal data?


In today’s post, we will unpack many of the views that media outlets have been broadcasting about AI with insights from Mustafa Suleyman, the CEO of Microsoft AI, who recently spoke with CNBC’s Andrew Ross Sorkin about the future of AI.


“AGI is dangerous and something to fear.”


Suleyman explained that people tend to fixate on the concept of superintelligence: an artificial general intelligence (AGI) capable of performing every task a human can, better and faster than humans do.


“People tend to lunge at AGI as though it is inevitable, desirable, and coming tomorrow. And I just think neither of those things is true. AI is an unprecedented technology, and as a species, we are going to have to figure out a new kind of global governance mechanism so that technology continues to always serve us, make us healthier and happier, more efficient, and add value in the world,” explained Suleyman.


While theoretically possible, AGI may never arrive. We should continue to take the safety risks associated with generalized AI seriously, advocating for safety and ethics along the way, but not because The Terminator’s Skynet is a real possibility. Fearing AGI because it can become Skynet is like fearing a person who completed an apprenticeship in locksmithing because they could infiltrate the Pentagon and steal government secrets.


The real danger is already here: electoral misinformation; the rapid spread of fake news, images, and videos; financial scams; and more. In a world where AI is democratized and everybody has access to it, it can easily be used to influence and harm people at scales that were previously unimaginable. The only protection against that is governance.


“Big Tech asking for regulation is a protective effort.”


Many leaders in the tech industry have spoken in favor of AI regulation during Senate hearings, but the media reports on it as if it were an insincere request, implying that these are empty words meant to shield the companies when problems eventually arise.


Government regulation is generally frowned upon in the US. That’s why most of the recent tech industry’s governance examples (e.g., personal data protection or charging standards) have arguably come from the European Union. Suleyman is not scared of regulation, though.


“We have been regulating cars since the ’20s, and that has consistently and steadily made cars better. And it’s not just the car itself; it’s the street lights, the traffic lights, the zebra crossings, and the culture of safety around developing a car and driving a car. So, it’s a healthy dialogue that we just need to encourage.”


Nonetheless, he acknowledged that poor regulation can also be problematic because it can slow down the pace of innovation. Apple has been a big proponent of this perspective, for example, complaining about the way it had to retire its Lightning port: can the next charging-port innovation happen while the EU forces manufacturers to use USB-C?


“Not enough resources are being devoted to the AI safety issue.”


While it’s a verifiable fact that the resources devoted to AI research and development are on an exponential trajectory, people can disagree on whether those resources are being properly used to address safety concerns. Some think the only way to approach the problem is to dedicate human effort to explicitly investigating pull-the-plug safety mechanisms, but in reality, many of these top issues can be addressed with more computational power.


“For the last decade, each time you add 10x more compute, you deliver very measurable improvements in capabilities: the models produce fewer hallucinations, they can absorb more factual information, they can integrate real-time information, and they’re more open to stylistic control, so you can tune their behavior,” said Suleyman.


Today’s models cannot fact-check a political debate in real time because they still cannot differentiate facts from misinformation, which often comes from unverified sources. But we are closer than ever to real-time automated fact-checking, especially because we can let a human confirm whether the sources are trustworthy or not.
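To make that concrete, here is a minimal Python sketch of such a human-in-the-loop flow. It is illustrative only: `retrieve_sources` is a hypothetical stand-in for a real retrieval system, and the URLs and claim are placeholders.

```python
# A sketch of human-in-the-loop fact-checking (illustrative only):
# automation retrieves candidate sources for a claim, and a person
# vets each source before the claim is rated.

def retrieve_sources(claim: str) -> list[str]:
    """Placeholder: a real system would search the web for the claim."""
    return ["https://example.org/official-report", "https://example.org/blog-post"]

def fact_check(claim: str) -> str:
    trusted = []
    for url in retrieve_sources(claim):
        # The human-in-the-loop step: a person confirms trustworthiness.
        answer = input(f"Is {url} a trustworthy source? [y/n] ")
        if answer.strip().lower() == "y":
            trusted.append(url)
    if not trusted:
        return f"Unverified: no trusted sources found for '{claim}'."
    return f"Supported by {len(trusted)} human-approved source(s): '{claim}'."

print(fact_check("Candidate X voted against the bill in 2022"))
```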


Human reasoning is shaped by behavioral observation: when we consistently see or do the same thing over and over, we gain trust in it and become more reliable at doing it. AI models work in a similar way, needing a lot of data and memory to maintain a context large enough to answer with confidence and avoid hallucinations.
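That context is finite in practice. Below is a minimal sketch of how an assistant might trim conversation history to fit a fixed context budget; the crude word-based token count is an assumption of this example, as real systems use a model-specific tokenizer.

```python
# A sketch of context-window management: keep as much recent
# conversation history as fits within a fixed token budget, so the
# model can ground its answers in prior turns.

def approx_tokens(text: str) -> int:
    """Crude token estimate: ~1 token per word (illustrative only)."""
    return len(text.split())

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose combined size fits the budget."""
    kept, used = [], 0
    for message in reversed(messages):  # walk from newest to oldest
        cost = approx_tokens(message)
        if used + cost > budget:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = [
    "User: What did we decide about the launch date?",
    "Assistant: You proposed moving it to October.",
    "User: Right. Draft an email announcing that.",
]
print(trim_history(history, budget=30))
```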


“AI companies have stolen the world’s intellectual property.”


AI models need to be trained on a lot of information, which traditionally comes from the open web. The question is: what is the open web, and where are its boundaries?


For decades, we have been uploading data to free online services: Facebook, Twitter, YouTube, WordPress, and thousands of other publishing websites. It’s clear now that some companies have been scraping online content for training purposes. When personal content, like YouTube videos, is transformed into transcripts that can be used for AI training, who is supposed to own that intellectual property (IP)?


“With respect to content that is already on the open web, the social contract since the ’90s has been that it is fair use. Anyone can copy it, reproduce it, and recreate with it. That’s been the understanding. There’s a separate category where a website, or a publisher, or a news organization has explicitly said ‘do not scrape or crawl me for any other reason than indexing me.’ That’s a gray area.”


Suleyman has a very liberal interpretation of the open web, suggesting that these gray areas should be resolved through litigation. He argued that the economics of information are about to change radically because the cost of knowledge production is trending towards zero marginal cost. This is already happening: AI is helping scientists accelerate their research, discoveries, and, ultimately, their inventions.


But what about attributing and referencing prior art and research? How will AI solve that when its training data is a blob of unmanageable proportions? And what about commercial cases, where data is used to generate profit without contributing back to the original source? Unfortunately, these questions don’t have satisfactory answers. Suleyman’s suggestion that litigation is the solution seems disingenuous because litigation doesn’t work at scale.


“AI will remove the need to study and write papers.”


LLMs are revolutionizing how people create content; it’s never been easier to write a professional-looking essay, email, or paper. Teachers and parents are suffering the consequences, worried about their kids not learning critical skills that they will need in their adult lives.


“Who can speak multiple social languages? Who can adapt very quickly? That agility is going to be one of the most valuable skills, along with open-mindedness, and not being judgmental or fearful of what’s coming, and embracing the technology. […] I think we have to be slightly careful about fearing the downside of every tool.”


Suleyman compared the use of AI in educational contexts to the use of calculators to solve equations instantly. He argued that giving up mental arithmetic didn’t make us dumber, and he quickly pivoted to the idea that embracing technology and responding with agile governance is the best way to react to new inventions.


“There is an overconcentration of power in AI technology.”


It can be argued that there is an overconcentration of power at many levels of our society: news organizations, financial institutions, and, yes, technology companies. This makes it harder to innovate in new spaces that require significant economic resources, like AI. Making advanced models open source can help entrepreneurs, but are Big Tech companies the gatekeepers of entrepreneurial innovation? Suleyman seems to think so.


“The practical fact is that over time, power compounds. It has a tendency to attract more power because it can generate the intellectual and financial resources to be successful and outcompete in open markets. So while on the one hand, it feels like an incredibly competitive situation between Microsoft, Meta, Google, and so on, it is also clear that we’re able to make investments that are just unprecedented in corporate history.”


Suleyman went on to argue that some of the best AI models have been open-sourced, but he defended keeping the source code of the largest models private. Those models require tremendous amounts of compute power and energy, and they can deliver the delightful user experiences that give Big Tech companies an edge over smaller or newer companies in the AI Wars.


“AI has created an unprecedented burden on the energy grid.”


All major smartphone manufacturers are making AI one of the highlights of their 2024 flagship devices. Samsung, Google, and Apple are all betting on a combination of on-device AI for improving photos, texts, and documents, with strategic reliance on cloud-side AI for more complex queries.


A few years ago, nobody was talking about LLMs, and today, we are on the verge of an agent explosion: AI bots that understand our needs and are able to perform tasks on our behalf, organizing our lives, prioritizing, and planning for us.
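Most of these agents boil down to a plan-act-observe loop. The Python sketch below is illustrative only: `plan_next_step` is a hypothetical stand-in for a call to an actual model, and the calendar tool is a toy example.

```python
# A sketch of the agent-loop pattern: the model proposes an action,
# the runtime executes it, and the result is fed back until done.

def plan_next_step(goal: str, observations: list[str]) -> dict:
    """Placeholder for an LLM call that picks the next action."""
    if not observations:
        return {"tool": "check_calendar", "args": {"day": "tomorrow"}}
    return {"tool": "done", "args": {"summary": observations[-1]}}

TOOLS = {
    "check_calendar": lambda day: f"{day}: 9am standup, 2pm free",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        action = plan_next_step(goal, observations)
        if action["tool"] == "done":
            return action["args"]["summary"]
        # Execute the chosen tool and record what the agent observed.
        result = TOOLS[action["tool"]](**action["args"])
        observations.append(result)
    return "Gave up after too many steps."

print(run_agent("Find a slot for a 30-minute meeting tomorrow"))
```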


One thing all of these new features and capabilities have in common is the immense amount of energy required to satisfy the compute demand. Each AI query has a real cost that the company must pay, which translates into land, water, and electricity costs that users will ultimately cover through subscription fees or when they buy a new AI-powered device.


Suleyman mentioned that “Microsoft will be using 100% renewable energy by the end of this year, 100% sustainable on water consumption by the end of next year, and fully net-zero in carbon emissions by 2030,” and other Big Tech companies have similar environmental goals.


This doesn’t change the fact that AI has an increasing impact on the energy infrastructure of the regions where the processing data centers are located. At the current growth rate, Big Tech will need to make a more direct investment to help improve local infrastructure to avoid an energy crisis.


As AI continues to evolve, it’s crucial that both companies and consumers navigate this technological landscape with awareness and responsibility. While the potential benefits are immense, so are the challenges: AI will affect not only our future devices but also our society at large.


The complete Aspen Ideas Festival interview with Mustafa Suleyman can be watched here.


**Did you like this article? Subscribe to get new posts by email.**

All images have been generated with ImageFX (from Google’s AI Test Kitchen).