Evolution of Search Engines: How Will Search Engines Look in 10 Years?

Written by vasa | Published 2018/12/16


From Text to Voice to AR/VR to Smart Cities: The Future of Search Engines Rapidly Approaches

Imagine, one Sunday morning, you wake up and find that the whole Internet is gone.

Let’s suppose you were planning to watch a movie today. Maybe on Netflix.

Oops, it’s not there.

Well, you don’t have the movie on your laptop, and you can’t even check on Google whether any theater is screening it today.

Maybe a DVD would work (assuming your laptop allows that; bad luck, Apple users).

So, you want to go to the nearest DVD shop. You try to search for it on Google Maps. No luck there.

Ok. Let’s just go to the market; we will find a shop there. You try to book an Uber…and it’s not there. You can still drive to the market, but you don’t have Google Maps either.

Well, this is getting hard. Right?

This shows how dependent we are today on the Internet and the services built on top of it.

We are lucky to be alive at such a time. A time where we have such good access to information from all around the world. The Internet really has all the information you could possibly want. But the Internet is not just about gathering information; it is also about retrieving it in a meaningful manner whenever we need it. That’s why, since the dawn of the Internet, people have worked hard to devise ways to organize and retrieve information from this vast ocean of data.

This is where Search Engines come in.

You may not realize it, but you use search engines a lot more than you think. Whenever you find a nearby coffee shop, check out clothes from your favorite online store for your friend’s birthday, or search “when did the Titanic sink” to settle an argument among friends…you are using a search engine.

Search engines have been here since the dawn of the Internet. The first search engine, Archie, came out in 1990 and was created by Alan Emtage, a student at McGill University in Montreal. Since then, hundreds of them have come along; some vanished, and some survive to this day.

Here is a list of 250+ search engines…in case you are curious.

SearchPedia: A List of 250+ Search Engines - An Exhaustive List of All Search Engines from the Dawn of the Internet (hackernoon.com)

Coming back to today: search engines are getting better at their work. Engines like Google don’t just spit out results for your query; they also personalize your search (RIP privacy; more on this topic here) and suggest relevant information (even if it is not what you asked for) which you may find useful.

It’s all good. But what’s next?

What will these search engines look like 10 years from now?

In this article, we will try to find out; not by astrology, but by analyzing the different factors that play a major role in shaping the search industry. We will look at:

  • The best choices we have in terms of interface,
  • Changes in the search horizon (information pool),
  • Market maturity of enabling technologies,
  • and Supportive business models for a sustainable search ecosystem.

So buckle up, because we are getting started!

Interface

An interface is something that enables the flow of information between two or more parties.

In the case of search engines, there are 2 parties: a human (like you and me) and a machine.

For an interface to be good, it has to give you an effective way to:

  • Give input to a machine and
  • Receive back the output

So really, all we have to do is compare which of the interfaces below is best at this kind of exchange.

We will consider 3 interface types:

  • Text
  • Voice
  • AR/VR (but mainly AR)

To compare their efficiency, we will use the 3 parameters below. Let’s compare.

Input Convenience: Natural & Instant

Text

We have been using this for decades, and it is slow in many ways. Firstly, it’s not instant: I have to get my mobile/laptop out and type the search query, which takes a lot of time.

But it is precise. I can type in exactly what I want to ask.

Also, text doesn’t give the same quality of results across different languages. Yes, ML (machine learning) is making it better, but it’s still not that good for many languages. Finally, I can only convey a limited amount of information using text alone.

Voice

We started using this a while ago, and the market seems a bit hyped about it. Voice is quite convenient, as I don’t have to take out my device or even be close to it to use it. A lot of the time, like when cooking or driving, we are too busy or occupied; voice can prove great in such cases. Also, I just have to call out my query, which for a lot of people is much better than typing. It’s also better for conversational search, because with text I would get annoyed having to type out the whole conversation.

But it’s not precise. Of course, it is getting better day by day, but it’s still not as precise as textual input. It also still suffers from language barriers.

AR(Augmented Reality)

This is pretty new, and most of us haven’t used it at the time of writing this article. AR on its own is great for convenience, as we don’t have to give it much input; rather, it takes images of my surroundings as input. So it can take in a lot more input than the other interfaces. But that input alone is not that meaningful: if I have to find the nearest coffee shop, I can’t type or speak to it. So AR on its own can’t serve as the interface.

[Image: Nearby places using AR]

When we talk about input, another important parameter is context. This is becoming a major topic in the search space. To understand its significance, consider an example in which I ask a search engine for “best restaurants”. The engine could show me a list of the top restaurants in the world, but there is a good chance I am really asking for a good restaurant near me. This is how context can make a big difference.
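
To make this concrete, here is a minimal sketch (in Python, with a made-up places index and an arbitrary scoring rule; real engines are vastly more sophisticated) of how a GPS context can turn “best restaurants” into “best restaurants near me”:

```python
import math

# Toy "places index"; a real engine would query a vast index instead.
PLACES = [
    {"name": "Noma",        "rating": 4.9, "lat": 55.683, "lng": 12.610},
    {"name": "Cafe Corner", "rating": 4.2, "lat": 28.545, "lng": 77.192},
    {"name": "Spice Route", "rating": 4.6, "lat": 28.551, "lng": 77.199},
]

def distance_km(lat1, lng1, lat2, lng2):
    """Rough great-circle distance (haversine formula)."""
    p = math.pi / 180
    a = (0.5 - math.cos((lat2 - lat1) * p) / 2
         + math.cos(lat1 * p) * math.cos(lat2 * p)
         * (1 - math.cos((lng2 - lng1) * p)) / 2)
    return 12742 * math.asin(math.sqrt(a))  # Earth's diameter ~12742 km

def best_restaurants(user_location=None):
    """Without context: global top-rated. With GPS context: nearby first."""
    if user_location is None:
        return sorted(PLACES, key=lambda p: -p["rating"])
    lat, lng = user_location
    # Penalize distance so a decent restaurant nearby beats a world-famous
    # one on another continent. The weight 10 is arbitrary.
    return sorted(PLACES, key=lambda p: distance_km(lat, lng, p["lat"], p["lng"])
                                        - 10 * p["rating"])

print([p["name"] for p in best_restaurants()])                  # global ranking
print([p["name"] for p in best_restaurants((28.546, 77.193))])  # near New Delhi
```

The same query returns a different ranking once the engine knows where I am; that is all “context” means here.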

Let’s compare these 3 in terms of context.

Context or Input Efficiency

Text

Today, the only context a text interface gets is location-based context via GPS (or whatever I provide explicitly). If I have to give any further context, I have to type it along with my query. So text is not that good in terms of context.

Voice

Same as text. Voice has to take context via direct input, which in this case means telling it the context verbally.

AR(Augmented Reality)

AR is really good when it comes to context. Since it captures a lot of information about my surroundings, AR will be a crucial part of context-based search.

Apart from these input mediums, services like Google are also tracking our daily lives (again, RIP privacy; more discussion on this topic here) and trying to find patterns in them, both long-term and short-term, that even we don’t realize or know about. For example, if I go to a movie every Friday, my device will eventually detect the pattern and start suggesting movies in Google Now cards.

Also, projects like Behavio (now acquired by Google) use the sensors in our devices to judge what we are doing right now and thus predict what we may want or do next, allowing service providers (like search engines) to understand and anticipate our behavior.
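
As a toy illustration of this kind of pattern mining (a minimal sketch over a hypothetical activity log; real systems fuse many sensors and far more history), here is how a device might spot the “movie every Friday” habit:

```python
from collections import Counter
from datetime import datetime

# Hypothetical activity log: (timestamp, activity) pairs a device might record.
log = [
    ("2018-11-02 20:30", "cinema"), ("2018-11-09 21:00", "cinema"),
    ("2018-11-13 19:00", "gym"),    ("2018-11-16 20:45", "cinema"),
    ("2018-11-23 20:15", "cinema"), ("2018-11-28 18:30", "gym"),
]

def weekly_habits(log, min_count=3):
    """Find (weekday, activity) pairs frequent enough to look like a routine."""
    counts = Counter(
        (datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%A"), activity)
        for ts, activity in log
    )
    return [(day, act) for (day, act), n in counts.items() if n >= min_count]

print(weekly_habits(log))  # [('Friday', 'cinema')] -> suggest movies on Fridays
```

Once such a routine is detected, a Google Now-style card can surface movie suggestions on Friday evenings without me ever asking.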

Well, we have talked a lot about inputs. Now let’s focus on output efficiency.

Output Efficiency

Text

Text has a good amount of output efficiency. When I search for “how to make a cake”, I get a result like this:

[Image: Search results for “how to make a cake”]

Here I can simultaneously look at pictures of the cake, the recipe, and the reviews. I can also scroll to any part of the recipe easily. But the contextual output is not that good: if I am in the middle of baking the cake, I don’t get any specific suggestions on how to bake it in my particular model of oven.

Overall, the output efficiency is good.

Voice

Unfortunately, voice struggles when it comes to output efficiency. Taking the same “how to make a cake” example, a voice search will read me a step-by-step answer. I can’t see any pictures of the procedure or of the finished cake (so I can’t decide which one I’d like to make). Navigating through the result is also hard: I can’t easily skip to a part (I don’t know what the next parts are), and I have to remember a lot of things that I could otherwise just see on a text-based interface.

So, basically, voice sucks in output efficiency.

AR(Augmented Reality)

AR has high output efficiency. Taking the cake recipe example again, not only can I see multiple things at a time, but since AR is aware of my context, it can also give me granular details based on what is available to me (details on my utensils, oven, etc.). But when I need to ask about something I can’t see, I still need a text/voice interface for that.

So, when it comes to output efficiency, AR can take our search experience to a whole new level.

Summing up on interfaces: clearly, none of them can act as the ultimate interface on its own. What we need is a hybrid of all 3, with great input/output efficiency and a good context-awareness factor.

So, a good interface might look like a pair of glasses (spectacles) connected to our device. The glasses act as an AR interface with a built-in microphone, so they can also take voice commands. And in cases where we need 100% precision (as voice is not always precise), we can type on a smartphone connected to the glasses.

This is just a suggestion. If you have a better idea, then shoot it in the comments ;)

If you want to dig more into this interface discussion, then head here:

5 Ways We’ll Interact With Computers in Future - vasa (medium.com)

So now we have some idea of what the interface of a future search engine could look like. As we get deeper into context-based search and learn the patterns of our daily lives, a flood of new kinds of information is reaching the internet. Let’s see how the search horizon will change and what impact that could have tomorrow.

Search Horizon

Till now, we (as humans) have been focused on the adoption of the Internet itself. But as we reach widespread adoption, thanks to cheap ISPs (in India’s case, Jio played a major part in bringing the internet to common people) and mobiles (new OSes like KaiOS, which turn feature phones into smartphones), we are adding new data to the internet at rates faster than ever.

So now we are getting more focused on finding patterns and connections in the existing data. This also includes our social data, which reveals a lot about who I am, who my friends are, and so on. Below are some developments that have contributed to shaping the current search space.

The New World of Search Engines

  • Schema.org: Bing, Yahoo, and Google recognized that in order to adapt to the new search landscape, they would have to put competition aside and collaborate. In 2011, they jointly launched the Schema.org initiative. The schema defines a set of terms which can be added to a web page’s markup. These serve as clues to the meaning of the page and help search engines recognize specific people, events, attributes, and so on. For example, if a webpage contains the word “pentagon”, a Schema.org definition will clarify whether it’s about the geometric five-sided figure or the Department of Defense headquarters building (see the sketch after this list).
  • Knowledge Graph and Snapshot: Google has been increasing the scope of its Knowledge Graph results, which offers users a box on the right-hand side of the search results page that provides images and facts that are applicable to the searcher’s intent. Bing’s Snapshot, which functions similarly, was enhanced in 2013 by the advent of “Satori,” which will assist with understanding the relationship between people, places, events, and objects.
  • Hummingbird: In September 2013, Google announced the arrival of Hummingbird, its new search algorithm. According to Google search chief Amit Singhal, Hummingbird represents the most drastic change Google has made to its search algorithms since 2001. He explained in Search Engine Land that Panda and Penguin were updates to the old algorithm, and some aspects of them will continue to apply, but Hummingbird is an entirely new search engine, designed for the search needs of today. Hummingbird offers a greatly increased comprehension of the meaning behind search terms. Instead of just taking a few words from the query and trying to find pages containing those words, Hummingbird actually tries to decipher the meaning behind the query and offer results that match the user’s intent. The Search Insider blog points out that Bing and Yahoo have made similar, though perhaps less drastic, changes: they have geared their searches to respond to full phrases and to understand the meaning contained in a string of words.
  • Rising Stars: With the advent of semantic search, an array of new search engines is being freshly constructed. Although their user numbers are microscopic compared to the major search engines, these new players have the advantage of making a fresh start without worrying about modifying earlier structures. Examples of natural language search engines include Powerset (now owned by Microsoft), Hakia, and a handful of others.
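
To make the Schema.org idea concrete, here is a minimal sketch (in Python, to stay consistent with the other examples; the type and property names are schema.org’s, but the page and its values are made up) of the kind of structured-data object a publisher embeds so crawlers know which “pentagon” the page is about:

```python
import json

# A Schema.org description that disambiguates "Pentagon" the building
# from "pentagon" the shape. Type and property names follow schema.org.
page_entity = {
    "@context": "https://schema.org",
    "@type": "GovernmentBuilding",
    "name": "The Pentagon",
    "description": "Headquarters of the United States Department of Defense.",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Arlington",
        "addressRegion": "VA",
        "addressCountry": "US",
    },
}

# A publisher would embed this in the page for search engines to parse:
print('<script type="application/ld+json">')
print(json.dumps(page_entity, indent=2))
print("</script>")
```

JSON-LD, shown here, is one of the encodings search engines accept for Schema.org markup; microdata and RDFa are the others.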

Applications for Semantic Search

  • Augmented Reality (AR): With Google Glass, an overlay (of a map, for instance) is layered on top of the landscape that is being physically seen by the viewer. This will lead to more image tagging and visually based searches. This has a natural tie-in to marketing since shoppers will be able to look at something and then learn about it (and where to buy it) based on its appearance. However, Google Glass still faces some challenges: It uses a combination of image, facial and voice recognition technology, and that means that a continuous network connection is required because you can’t pack enough processing power into just a few ounces. However, this obstacle is likely to be overcome before too long, and wearable technology of all kinds is just over the horizon.
  • Search and Mobile: According to Search Insider, mobile search and the birth of Siri have been the biggest catalysts in changing consumers’ approach to search. Since Siri encourages natural language questions, and people have grown accustomed to immediate access to the information they want, voice recognition technology is increasingly driving search. The mobile search utility Google Now is powered by natural language and fits into users’ lives by supplying the information they want before they even realize they want it. The expectation of this kind of responsiveness has circled back to text-based online searches, and all the major search engines have made adjustments to meet this demand.
  • Social Media and Semantic Technology: Facebook has announced that its new Graph Search is equipped with semantic search technology, so that users can find the connections they want more easily and advertisers can gain a more intuitive understanding of users’ preferences. Graph Search also enables far more accurate targeting of marketing, since it can make new connections: for example, a user (or advertiser) can find friends who like X who live in Y (see the sketch just below this list). Basically, the new technology provides a treasure trove for data mining, although it too has challenges to overcome. The new, deeper data levels depend on people spending more time on Facebook, with broad networks of friends and connections. Also, public concern over Facebook privacy continues, and these concerns may prevent people from “Liking” certain things. Overall, however, the prospects are bright; Michael Pachter, an analyst at Wedbush Securities in Los Angeles, predicts in Bloomberg Businessweek, “Graph Search will grow to about a quarter of Facebook’s revenue or $3 billion to $4 billion in 2015.”
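
As a toy illustration of the kind of query Graph Search enables (a minimal sketch over a tiny, hand-made social graph; the real system runs over billions of nodes and edges), “friends of mine who like cycling and live in Delhi” becomes a traversal plus a filter:

```python
# Hypothetical social graph: users with their city, likes, and friendships.
users = {
    "alice": {"city": "Delhi",  "likes": {"cycling", "jazz"}},
    "bob":   {"city": "Mumbai", "likes": {"cycling"}},
    "carol": {"city": "Delhi",  "likes": {"cycling", "baking"}},
    "dave":  {"city": "Delhi",  "likes": {"chess"}},
}
friends = {"me": {"alice", "bob", "carol", "dave"}}

def friends_who_like(person, interest, city):
    """Graph query: friends of `person` who like `interest` and live in `city`."""
    return [
        f for f in friends[person]
        if interest in users[f]["likes"] and users[f]["city"] == city
    ]

print(sorted(friends_who_like("me", "cycling", "Delhi")))  # ['alice', 'carol']
```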

Market Maturity: Economic & Social

As we have seen with startups like Blippar, market maturity plays an important role in the adoption of any new technology. Blippar aimed to be an AR search engine. But during its existence (2011–2018), AR was never well supported by the market (despite all the hype). Blippar kept losing money amid competition from giants who were investing heavily in AR; it couldn’t generate enough revenue to stay afloat and shut down this year (2018).

Let’s see the state of economic and social maturity of technologies enabling text, voice, and AR.

Economic Maturity

Text

Economic maturity for text is at its peak. All the search engines present today provide a text interface (well, that’s obvious).

Voice

Economic maturity for voice is getting better day by day. Research by Voicebot estimates that 47.3 million people in the United States, or nearly one in five adults, currently own a smart speaker. And these are just the people with devices like Google Home and Amazon Echo. Most new smartphones also come with built-in voice assistants; right now there are 4 major ones: Google Assistant (Google), Siri (Apple), Cortana (Microsoft), and Bixby (Samsung). So a lot of money is being poured into voice assistants, which points to a maturing market.

AR

Here is what major players are doing in AR:

Google

  • Google gave AR a shot with Google Glass, which didn’t turn out to be a success.
  • Magic Leap (partly owned by Google) just released its new headset this year (2018).

Microsoft

  • Microsoft HoloLens was released in 2016, followed by the Windows Mixed Reality platform.
  • Every up-to-date Windows 10 PC (since Oct ’17) now comes with the built-in AR and VR platform.

There are tons of other players too in this field.

Even if you haven’t used any of the above, you have probably used some sort of AR. Apps like Snapchat use AR for filters. And players like Google, Apple, and Microsoft have added augmented reality platforms (ARCore, ARKit, Windows Mixed Reality) right into their operating systems.

So, compared to the other two, AR is just getting started. Also, most headsets are far too expensive for the general public.

Social Maturity

Text

Everyone who has ever used any kind of search engine has used a text interface. We are comfortable using it. Although it takes a bit more time, it is more accurate when it comes to multiple languages.

Voice

Voice is a bit hyped up today. People keep stating that “it will replace text”, but the stats show a different story. In 2017 and 2018, Stone Temple surveyed approximately 1,000 people across different sexes, age groups, and income levels.

The survey reveals some trends about when we usually use voice commands.

It shows that most people prefer to use voice commands when they are alone or with a known set of people. We can also see the social trend (using voice commands when not alone) improving from 2017 to 2018.

The survey also asked people to rank their answers to the question: “When I need to look up information, I am most likely to … (Please rank your top three choices).”

It shows that most people still prefer typing search queries, either in the browser or in a search engine app (or even asking a friend via text message), rather than using voice search.

The survey also looked at which applications people usually use with voice.

This supports the idea that we usually use voice for tasks where text is not convenient (and where we don’t care much about accuracy), or when we are busy or occupied with something else (like asking for directions while driving). That’s why posting on social media sees relatively low voice use: there we need accuracy, or we end up publishing silly posts.

So, socially, we are getting more used to voice, but it hasn’t taken over text yet. In 10 years, though, we can expect widespread adoption of voice-controlled search, as more people get used to it and more applications provide a voice-controlled interface.

AR

Even though companies are pouring in a lot of money, there is not much response from the public right now. People are just not used to AR yet, and most of us only use it on our mobiles. So AR’s only real penetration has come from being bundled with apps/services we already use today, most of which are free.

In the coming years, we can definitely expect growth in AR usage as AR devices get sleeker and cheaper. In 10 years, we can expect AR to be roughly where voice is today.

Supporting Business Models

Let’s explore the supporting business models for search engines.

Since the dawn of mass media, advertising has served as the perfect business model component for many businesses. It boomed with the introduction of the internet, and online advertising has today eaten a large part of newspaper and television ad revenue. One reason is that newspapers need a lot of capital to expand to a wider demographic area, whereas on the internet there is no such financial barrier.

Till now, ads have worked great for providing search engine services to the masses free of cost. But as we move towards a voice-controlled search economy, this model doesn’t seem to work: as discussed above, voice is really limited when it comes to output efficiency. You don’t want to listen to ads when you search for something, right? This is probably the biggest problem with voice-based search engines.

On the other hand, AR has immense potential when it comes to advertising. It is even better than current advertising because we have much more context (like where exactly I am right now). Ads shown in AR can also be more effective and engaging than the website ads we see, and hate, today.

One more contributing factor will be the decentralization wave. As we have seen, consumers so far get no benefit for watching those annoying ads. With decentralized models, it becomes possible to transparently distribute ad revenue between the publisher and the consumer. We can also expect a shift in how the whole advertising model works: as people grow concerned about their privacy and data mining, we have proposed a model in which the user’s data stays with the user while advertisers/publishers can still show targeted ads. You can learn more about it here:

Say BYE to F.A.G.M.A., and HELLO to the new Internet - Breaking the Facebook, Apple, Google, Microsoft, Amazon dominance (hackernoon.com)
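
As a toy illustration of that revenue-split idea (a minimal sketch; the shares are made up, and a real decentralized system would settle this transparently on-chain rather than in one Python process), here is how an advertiser’s payment could be divided so the viewer also gets paid:

```python
# Made-up revenue shares under a decentralized ad model: the viewer is
# paid for their attention instead of the platform keeping everything.
SHARES = {"publisher": 0.60, "viewer": 0.25, "search_engine": 0.15}

def settle(ad_payment_cents):
    """Split an advertiser's payment transparently among the parties."""
    assert abs(sum(SHARES.values()) - 1.0) < 1e-9
    return {party: round(ad_payment_cents * share)
            for party, share in SHARES.items()}

print(settle(100))  # {'publisher': 60, 'viewer': 25, 'search_engine': 15}
```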

Conclusion

If you have made it this far, you have definitely learned a lot.

From our analysis, we can say a few things about search engines in the coming years:

  • Text will still be around after 10 years.
  • In the coming years, we will see a lot of applications supporting voice commands, which will be the main driver of voice search. You will see conversational bots when you go to restaurants, effectively replacing waiters in most places. Apps/web apps will have more voice-based interactions (while keeping a text interface for accuracy) rather than being text-only.
  • Chatbots (for searching products on a website) will become more voice-based (still keeping a text interface).
  • AR devices will be much better and cheaper. They will be at the same stage of adoption that smartwatches are today. Fashion will be a major initial driver for AR glasses. We may see partnerships between AR companies and eyewear companies (this one is a wild prediction).
  • The growth of AR will depend on the wider economy, as true AR requires you to buy a new device (like Google Glass) rather than use an existing one (like the mobile, which was already there before voice-based services even existed). But the business model for AR-based search will be much stronger. An AR search interface (glasses) with a built-in microphone can solve the business model problem of voice search: voice is used as input, and the AR interface is used for output, which has far better output bandwidth for showing ads. These glasses can be connected to any device, maybe your mobile or laptop, for text input.
  • The business model will change. We may get paid for watching ads.

Other Predictions

  • We will start seeing the first truly smart cities. As autonomous automobiles become a trend, we will start spending more time on internet services (to kill time while commuting).
  • AR can redefine tourism. If I can wear my glasses and get all the info, I won’t need a guide; some people may still prefer guides for the local experience.
  • People will start understanding the value of their data and privacy.

These are just some of my views. If you have any comments/suggestions, then shoot them in the comment section :)

Thanks for reading ;)

About the Author

Vaibhav Saini is a Co-Founder of TowardsBlockchain, an MIT Cambridge Innovation Center incubated startup.

He works as a senior blockchain developer and has worked on several blockchain platforms, including Ethereum, Quorum, EOS, Nano, Hashgraph, and IOTA.

He is currently a sophomore at IIT Delhi.


Want to learn more? Check out my previous articles.

ConsensusPedia: An Encyclopedia of 30 Consensus Algorithms - A complete list of all consensus algorithms (hackernoon.com)

Why We Don’t Care About Privacy & Why We Should - vasa (medium.com)

ContractPedia: An Encyclopedia of 40 Smart Contract Platforms - A complete list of all smart contract supportive platforms (hackernoon.com)

Difference between SideChains and State Channels - A complete comparison of the two scaling methods (hackernoon.com)

EOS 101: Getting started with EOS, Part 1 - The only blockchain with a block time of less than a second: 0.5 sec! (hackernoon.com)

Follow me on Twitter: @vasa_develop


Written by vasa | Entrepreneur | Co-founder @tbc_inc, an MIT CIC incubated startup | Speaker | https://vaibhavsaini.com
Published by HackerNoon on 2018/12/16