
Is Google Doomed? Only the Paranoid Survive….

by Lucius Meredith, January 17th, 2023

Too Long; Didn't Read

The New York Times recently proclaimed a new Chat Bot is a ‘Code Red’ for Google’s Search Business. The purported panic fosters a misunderstanding of how Google operates, why it dominates, and how programs such as GPT-3 and ChatGPT operate. The prime directive for Silicon Valley, coined by the late Andy Grove (former CEO of Intel), still applies: “Only the paranoid survive.”

The New York Times recently proclaimed that “A New Chat Bot Is a ‘Code Red’ for Google’s Search Business.” The article describes Google’s “dread” of interfaces based on the species of artificial intelligence called large language models (LLMs), such as ChatGPT, that threaten to displace its dominance in search.


Search has proved to be the key factor in the modern economy, propelling the value of data far beyond that of the previously most valuable commodity in the world: oil and gas.


Per the Times:


“Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.


“Three weeks ago, an experimental chat bot called ChatGPT made its case to be the industry’s next big disrupter.”


The prime directive for Silicon Valley, coined by the late Andy Grove (former CEO of Intel), still applies: “Only the paranoid survive.” Yet… rumors of Google’s death are greatly exaggerated.


The purported panic fosters a misunderstanding of how Google operates, why it dominates, and how programs such as GPT-3 and ChatGPT work.

#A beginner’s guide to Google’s (and LLMs’) inner workings


Google has the world’s largest (multi-billion-dollar) supercomputer, built out of networked Linux servers and storage devices. Reuters estimated that Google would spend an additional $13B in 2019 just augmenting capacity. It keeps this beast fed by regularly crawling the web and by attracting a steady stream of user content, including the world’s email, videos, geolocation data, and much, much more. By one estimate, the total is at least an exabyte.


It is practically impossible for a mere AI compute function to overcome Alphabet’s massive advantages from its installed base. Unless Google fumbles. Highly unlikely.


ChatGPT faces a different sort of compute challenge, resulting in a different architecture. This keeps it from being an immediate threat to Google.


It is predicated on large-scale linear algebra operations and can get by with a much smaller data set. Furthermore, the corpus it learns from, as opposed to what it searches, is fixed. For search, that is a fatal limitation.
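

To make “large-scale linear algebra operations” concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside transformer-based LLMs. The dimensions and names are illustrative assumptions, not a description of OpenAI’s actual implementation:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """One attention head: two matrix multiplies and a softmax."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (seq, seq) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted mix of the value vectors

# Toy dimensions; real models use thousands of dimensions, dozens of layers, many heads.
rng = np.random.default_rng(0)
seq_len, d_model = 8, 64
q, k, v = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(scaled_dot_product_attention(q, k, v).shape)  # (8, 64)
```

At model scale, these same matrix multiplications are repeated across thousands of dimensions, dozens of layers, and many heads, which is why building an LLM is a compute problem rather than a crawling-and-indexing problem.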


While ChatGPT can search for more recent information about the pandemic, GPT-3 knows nothing about COVID-19. It was trained on a pre-pandemic corpus. This prevents it from playing David to Google’s Goliath.


AI interfaces could, indeed, threaten Google’s ability to attract that stream of user content. That is the case, however, only if the purveyors of LLM-based interfaces, like OpenAI, are already prepared to store, safeguard, and monetize that data.


Currently, none are so prepared. Furthermore, these programs give no particular advantage in finding data in the middle of a video or detecting piracy of video content. Remember, Alphabet, Google’s parent, owns YouTube, the Fort Knox of video.


There are new approaches on the horizon to expanding what is searchable. These almost certainly will prove of great value for accessing the many specialist data repositories that are currently effectively unsearchable by Google’s search engine.


GitHub, the go-to repository for software code, is a great example. Even if there are 30M programmers and software developers in the world, that is a tiny drop in the bucket of the world’s online population.


Yet software developers, on average, control vastly more deployable capital than the average citizen of planet Earth. This is not necessarily through their personal wealth; they exercise their market power through their collective influence on their clients and employers.


As such, they, their clients, and their employers are more able and more willing to pay for expanded capacity to search repositories like GitHub, which today is searchable only via metadata and social signals. The code itself is opaque to automated investigation of its structure and behavior.
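

To illustrate the gap, here is a small, hypothetical Python sketch of a structural query, using the standard-library ast module, that no keyword or metadata search can express. The snippet being analyzed and the policy it checks are invented purely for illustration:

```python
import ast

SAMPLE = """
import requests

def fetch(url):
    return requests.get(url, timeout=10)

def fetch_unsafe(url):
    return requests.get(url)
"""

# Walk the syntax tree and flag calls to requests.get that omit a timeout --
# a structural property that keyword or metadata search over a repo cannot express.
for node in ast.walk(ast.parse(SAMPLE)):
    if (
        isinstance(node, ast.Call)
        and isinstance(node.func, ast.Attribute)
        and node.func.attr == "get"
        and isinstance(node.func.value, ast.Name)
        and node.func.value.id == "requests"
        and not any(kw.arg == "timeout" for kw in node.keywords)
    ):
        print(f"line {node.lineno}: requests.get call without a timeout")
```

Scaling this kind of structural query from one file to millions of repositories is roughly what “expanded capacity to search” GitHub would mean.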


There are even fewer biologists and chemists actively involved in drug development. That said, they very likely control even more deployable capital than software developers (on average). Companies like Pfizer, Lilly, Merck, and AstraZeneca have enormous capital.


They are even more able, and certainly willing, to pay for expanded search capabilities. They are especially keen because it is significantly less expensive to run a computer search than to run a physical assay. The potential of the Kyoto Encyclopedia of Genes and Genomes (KEGG), for example, is largely untapped because we cannot see into the dynamics of the data stored there.
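

As a concrete contrast, here is a minimal Python sketch of the kind of lookup that is possible today, assuming KEGG’s public REST interface at rest.kegg.jp in the form documented at the time of writing (/find/&lt;database&gt;/&lt;query&gt;). It retrieves pathway entries by keyword, which is exactly the metadata-level access that falls short of the dynamics described above:

```python
import requests  # third-party HTTP client: pip install requests

# Keyword lookup against KEGG's public REST interface (assumed form: /find/<db>/<query>).
# This metadata-level query is what is searchable today.
resp = requests.get("https://rest.kegg.jp/find/pathway/glycolysis", timeout=30)
resp.raise_for_status()
for line in resp.text.strip().splitlines():
    entry_id, description = line.split("\t", 1)
    print(entry_id, "->", description)

# By contrast, a question about the *dynamics* encoded in those pathways --
# e.g., which perturbations shift a pathway's steady state -- has no endpoint at all.
```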


That is a spectacularly valuable niche application. It is, however, irrelevant to the vast majority of rank-and-file web searchers, who vastly outnumber data specialists.


For the foreseeable future, the expansion of search will not be in consumer markets. It will be in technical specialist, government, and enterprise markets. One of the first things specialists (like the lead author here) do is test ChatGPT on common-sense reasoning over specialist data sets. At this it fails miserably.

#Back to the future

Google’s past could be a map to AI’s future. Google missed video search. It paid for that by buying YouTube for $1.65B in a stock-for-stock swap in 2006. Great move: YouTube is now estimated to be worth $180B, two orders of magnitude more than Google paid for it. Good deal all around, but, more to the point, Google expanded its search hegemony.


In Life After Google, America’s premier futurist, George Gilder, suggests the end of big data as a siloed commodity behind the firewalls of the big five. Supposedly, this will be due to the development of decentralizing technologies, such as blockchain. Indeed, the co-author’s review of Life After Google suggested that innovation, rather than regulation, is the better way of domesticating big data.


However, we can’t look to LLMs as the relevant innovation. While the computer architectures of LLMs and other similar AI-based approaches are not the same as Google’s, they are still massively centralized and run counter to the decentralizing trend. They may yet extend the life of centralized big data, until scalable decentralizing technologies, now on the horizon, come to fruition.


Apple, Amazon, Alphabet (Google’s holding company), Meta (Facebook’s holding company), and Microsoft are today the Masters of the Digital Universe. That won’t last. Eventually, sooner than you might think, they are likely to go the way of Kodak, Xerox, or Blockbuster. The penalty for failure to innovate is severe.


As the Greeks taught us, olbos – extreme wealth – tends to lead to hubris – overweening pride – which is then, always, slain by the goddess Nemesis.


Google’s reported “dread” at the advent of ChatGPT is very much a bullish, not bearish, signal. It implies that Google’s leadership has not succumbed, and does not appear at risk of soon succumbing, to hubris.


Only the paranoid survive.


By Lucius Gregory Meredith and Ralph J. Benko