
Blockchain Search Engines: Can Decentralization Defeat Google?

by Maria Lobanova, November 26th, 2021

Too Long; Didn't Read

Search engines have become so firmly entrenched in our lives that they have gained unprecedented power over what we read, what information we consume, and what services we use. Instead of providing impartial access to data, they have led to the polarization of society. With the arrival of a decentralized internet, the search engines we use today will be replaced by new ones that are devoid of Google’s opaque algorithms and proclivity to censorship. Google controls the dissemination of nearly all information, from descriptions of goods in online stores to local news reports.


Searching for information on the World Wide Web has become routine for billions of users. Meanwhile, search engines have become so firmly entrenched in our lives that they have gained unprecedented power over what we read, what information we consume, and what services we use. The algorithms that generate search results, which were originally designed for the user's convenience, have become a propaganda tool that wields power over the public’s consciousness. Search engines serve an agenda aimed at informational control. Instead of providing impartial access to data, they have led to the polarization of society.

While few people may realize it, the internet is gradually transitioning from Web 2.0, built on social interactivity in which users act as content creators, to Web 3.0, in which data is no longer stored exclusively on centralized servers. With the arrival of a decentralized internet, the search engines we use today will be replaced by new ones devoid of Google’s opaque algorithms and proclivity for censorship. In fact, the first decentralized search engines are already appearing on the market.

How Google Evolved from Revolutionary to Dictator

In 1996, two computer science graduate students at Stanford, Larry Page and Sergey Brin, launched the BackRub research project, which aimed to create a tool that could find information on their university’s homepage. In 1998, they incorporated their company under the name Google. By mid-2005, a year after a stunningly successful IPO, Google’s capitalization reached a record $52 billion. Fifteen years later, Google had become the world’s leading search engine, dominating almost every category: the most used on desktop computers and mobile devices, as well as the most popular search engine among advertisers. Today, Google controls the dissemination of nearly all information, from descriptions of goods in online stores to local news reports. Meanwhile, criticism of Google and of regional search market leaders, such as Yandex in the Russian-speaking segment or Bing in China, grows louder every year.

The last decade has been marked by numerous high-profile investigations, studies, and lawsuits against major players in the search engine market. Google was caught creating an information trap, a so-called ‘filter bubble’, in which users receive information consistent with their past search queries. The tech giant has also been found guilty of manipulating search results to promote its own Google Shopping service. Moreover, IT specialists have discovered structural bias against women and ethnic minorities in Google’s search algorithm and language models, as well as prejudice against small local media outlets, whose articles are relegated to the last lines of Google News’ search results.

Among the latest high-profile scandals is an antitrust lawsuit that has been brought against Google by a group of US Attorneys General representing 17 American states. The plaintiffs’ claims reveal stunning details concerning Google’s advertising business. Aside from imposing blatantly oppressive commission schemes on advertisers, amounting to 22-42 percent of advertising expenses, Google has been accused of:

  • manipulating advertising prices with the help of fake buyers and sellers, for which the corporation created a whole department dubbed gTrade
  • colluding with Facebook
  • deliberately slowing the loading of pages that do not use AMP (Accelerated Mobile Pages) in order to lock advertising traffic into Google
  • infringing on the privacy rights of more than 750 million Android users 
  • lobbying against legislative initiatives aimed at protecting privacy

and much more…

The 173-page lawsuit prepared by a group of prosecutors examines in detail how the world’s leading search engine forces internet advertisers to use its platform. A description of an internal corporate dialogue that took place in 2016 between Google executives can be considered emblematic. In it, the participants note that Google earns ‘A WHOLE LOT OF money’ thanks to advertising commissions, while admitting that the company operates in the manner that it does – and this is a quote – because “we can afford to.”

Now that Google has secured its position as the world’s dominant search engine, monopolizing more than 90 percent of global market share, the devastating consequences of this state of affairs are being discussed not only among IT specialists and marketers, but also by national regulators. “Google’s dominant position and its commercial arrangements in the market create barriers for new or competing businesses, depriving them of access to consumers. This suppresses innovation and deprives consumers of choice, while the quality of services of the most dominant company is declining,” Rod Sims, the chairman of the Australian Competition and Consumer Commission, recently noted.

Decentralization Instead of Dictatorship 

Sometimes, to find your way out of a difficult situation, it’s necessary to hit bottom, and this couldn’t be more true of the search engine market today. Despite Google’s unchallenged position in this sphere, the balance of power may swing dramatically over the next 5 to 10 years. This will not come about due to pressure that global regulators put on the Alphabet Corporation, nor as a result of numerous fines or attempts to force the company to adopt more open practices. The reason will be the natural evolution of the World Wide Web itself: the transition from Web 2.0 to Web 3.0 that has been talked about for the last 20 years.

Internet search today is designed for client-server architectures and relies on protocols and conventions such as TCP/IP, DNS, URLs, and HTTP(S). When a query is entered into the search bar, the search engine generates a list of hyperlinks to third-party sites where relevant content is located. The user clicks on one, and the browser directs them, via an IP address, to the specific server where the content is stored. So, in Web 2.0, content is tied to the server, the physical location where it is stored. If that server is destroyed, the content is destroyed as well. Moreover, the content can be surreptitiously altered, because the URL leading to it will remain the same. Web content is thus extremely vulnerable: it can be manipulated, deleted, or blocked at any time.

Web 3.0 works in a completely different way. Content is addressed by the hash of the content itself. After content is located via its hash and downloaded, the user becomes one of its distribution points, much as torrent networks work. This approach protects content on both fronts: replication across peers makes it hard to destroy, and any change to the content changes its hash, and therefore its address, so tampering is immediately detectable. With a decentralized web, the architecture of the internet itself will change: there will be no sites as they are currently understood. All content will be stored in a peer-to-peer (P2P) network, where it can be found without reference to a specific server location. The problem of ‘broken’ links will also disappear, since links to the original content remain valid permanently.
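The content-addressing idea described above can be sketched in a few lines of Python. Real systems such as IPFS use multihash-based content identifiers rather than a bare SHA-256 digest, so this is a minimal illustration of the principle, not how any particular network encodes addresses:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself (here: a SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

original = b"Decentralized search article, v1"
tampered = b"Decentralized search article, v2"

addr = content_address(original)

# Retrieval check: a peer serving this address must return bytes whose
# hash matches it, so silent alteration is detectable.
assert content_address(original) == addr   # same content, same address
assert content_address(tampered) != addr   # any change yields a new address
```

Because the address is derived from the bytes, a link can never silently point at altered content, which is exactly the property that location-based URLs lack.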

The World Wide Web’s new architecture will require new search engines. It will no longer be possible for search engines to conceal their indexing algorithms, as Google does. There will be no need for crawler bots that collect data on changes in site content. There will be no risk of being censored or having your private information stolen.

Visually, the results in a decentralized search engine will differ little from those found in the usual centralized format, but there are several key advantages:

  • Search results include the desired content itself, which can be read or viewed directly in the results without going to another page.
  • Buttons for interacting with applications on any blockchain and making payments to online stores can be embedded directly in search snippets.
  • In the content-oriented Web 3.0, search engines lose their supreme power over search results, as results are generated by participants in peer-to-peer networks, whose preferences determine the ranking of cyberlinks.

Transparent Ranking

The main hurdle in developing a search engine is devising a system for ranking content. Google’s history began with the creation of its PageRank algorithm in the mid-1990s, but search engines designed for Web 2.0 are not suitable for the content-oriented Web 3.0.
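For readers unfamiliar with PageRank, a minimal sketch of the idea (power iteration with the classic 0.85 damping factor over a toy link graph with made-up node names) looks like this:

```python
# Minimal PageRank by power iteration over a toy link graph.
# Node names are illustrative; 0.85 is the classic damping factor.
def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for node, outgoing in links.items():
            if outgoing:
                share = damping * rank[node] / len(outgoing)
                for target in outgoing:
                    new[target] += share
            else:  # dangling node: spread its rank evenly
                for target in nodes:
                    new[target] += damping * rank[node] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
# "c" receives the most inbound links, so it ends up with the highest rank
assert max(ranks, key=ranks.get) == "c"
```

The point is that the ranking function is a single, centrally chosen formula run over a crawled graph; a Web 3.0 search engine has to replace that central computation with one driven by the network’s participants.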

Coming up with a mechanism to rank links in Web 3.0 is not a trivial task. Having abandoned centralized ranking, developers have to decide how to place the right to evaluate content in the hands of users, while devising ways to prevent malicious manipulation. Cyber, a decentralized search engine project, has proposed such a ranking solution for Web 3.0.

The ranking mechanics of Cyber’s protocol apply the principles of tokenomics, based on the idea that network participants themselves should have an interest in forming a knowledge graph that will generate intelligent search results over the long term. Users will need V tokens (volts) to index content and A tokens (amps) to rank it. To receive V and A tokens, network participants must hold H tokens (hydrogen) in their wallet for a certain period of time. H, in turn, is produced by liquid staking the main network token (BOOT for Bostrom and CYB for Cyber). Thus, Cyber users will be able to access the resources of the knowledge graph with a network token and receive staking income, similar to Polkadot, Cosmos, or Solana.
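The resource flow described above (stake the network token, mint H, lock H to receive V and A) can be sketched roughly as follows. The 1:1 minting rate and the per-period vesting yield here are hypothetical placeholders for illustration, not actual Cyber protocol parameters:

```python
from dataclasses import dataclass

@dataclass
class Account:
    boot_staked: float     # liquid-staked network token (BOOT)
    hydrogen: float = 0.0  # H, produced by staking
    volts: float = 0.0     # V, spent to index content
    amps: float = 0.0      # A, spent to rank content

    def mint_hydrogen(self):
        # Assumption: staked BOOT mints an equal amount of H (1:1 placeholder).
        self.hydrogen = self.boot_staked

    def vest(self, h_amount: float, periods: int):
        """Lock H for `periods` to receive V and A.
        Placeholder yield: 1 V and 1 A per unit of H per period;
        H is locked, not consumed."""
        h_amount = min(h_amount, self.hydrogen)
        self.volts += h_amount * periods
        self.amps += h_amount * periods

acct = Account(boot_staked=100.0)
acct.mint_hydrogen()
acct.vest(50.0, periods=2)  # lock 50 H for 2 periods -> 100 V and 100 A
```

The design intent, as the article describes it, is that the resources needed to influence search (indexing and ranking) can only be obtained by committing stake over time, aligning participants with the long-term health of the knowledge graph.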

The rank of the cyberlinks associated with an account depends on the number of tokens it holds. But if tokens have such an impact on output, isn’t there a risk that results could be manipulated? To mitigate this, at the start of the project, 70 percent of the tokens will be distributed to users of Ethereum and its applications, as well as to Cosmos network users. The airdrop will be based on an in-depth analysis of activity on these networks, so the bulk of the stake will go to users who have already proven capable of contributing value.