Imagine, one Sunday morning, you wake up and find that the whole Internet is gone.
Let’s suppose you were planning to watch a movie today. Maybe on Netflix.
Oops, it’s not there.
Well, you don’t have the movie on your laptop and you can’t even check on Google if any of the theaters is screening it today.
Maybe a DVD would work (assuming your laptop allows that; bad luck, Apple users).
So, you want to go to the nearest DVD shop. You try to search for it on Google Maps. No luck there.
Ok, let’s just go to the market; we will find a shop there. You try to book an Uber… and it’s not there. You can still drive to the market, but you don’t have Google Maps either.
Well, this is getting hard. Right?
This shows how dependent we are today on the Internet and the services built on it.
We are lucky to be alive at such a time: a time when we have such good access to information from all around the world. The Internet really has all the information you could possibly want. But the Internet is not just about gathering information; it is also about retrieving it in a meaningful manner whenever we need it. That’s why, since the dawn of the Internet, people have worked hard to devise ways to organize and retrieve information from this vast ocean of data.
This is where Search Engines come in.
You may not realize it, but you use search engines a lot more than you think. Whenever you find a nearby coffee shop, check out clothes from your favorite online store for your friend’s birthday, or search “when the Titanic sank” to settle an argument among your friends… you are using a search engine.
Search engines have been here since the dawn of the Internet. The first search engine, Archie, came out in 1990 and was created by Alan Emtage, a student at McGill University in Montreal. Since then, hundreds have come along; some vanished, and some remain to this day.
Here is a list of 250+ search engines…in case you are curious.
Coming back to today: search engines are getting better at their job. Engines like Google don’t just spit out results for your query; they also personalize your search (RIP privacy; more on this topic here) and suggest relevant information (even if it’s not what you asked for) which you may find useful.
It’s all good. But what’s next?
What will these search engines look like 10 years from now?
In this article, we will try to find out; not by astrology, but by analyzing the different factors that play a major role in shaping the search industry.
So buckle up, because we are getting started!
An interface is something that enables the flow of information between two or more parties.
In the case of search engines, there are two parties: a human (like you and me) and a machine.
For an interface to be good, it has to give you an effective way to exchange information with the other party.
So really, all we have to do is compare which of the interfaces below is better at this kind of exchange.
We will consider three interface types: text, voice, and AR (augmented reality).
To compare their efficiency, we will look at three parameters: input efficiency, context-awareness, and output efficiency. Let’s compare.
We have been using text for decades, and it seems slow in many ways. Firstly, it’s not instant: I have to get my phone or laptop out and type the search query, which takes a lot of time.
But it is precise. I can type in exactly what I want to ask.
Also, text doesn’t give the same quality of results across different languages. Yes, ML (machine learning) is making it better, but it’s still not that good for many languages. Finally, I can only convey a limited amount of information using text alone.
We started using voice a while ago, and the market seems a bit hyped about it. Voice is convenient, as I don’t have to take out my device or even be close to it. A lot of times, like when cooking or driving, we are too busy or occupied; voice can prove great in such cases. Also, I just have to say my query out loud, which is much better than typing for a lot of people. It’s also better for conversational search, because with text I would get annoyed if I had to type out the whole conversation.
But it’s not precise. Of course, it is getting better day by day, but it’s not as precise as textual input. Also, it still suffers from language barriers.
This is pretty new, and most of us haven’t used it at the time of writing this article. AR is great for convenience, as we don’t have to give it much input; rather, it takes images of our surroundings as input. So it can take in a lot more input than the other interfaces. But this input is not that meaningful on its own: if I have to find the nearest coffee shop, I can’t type or speak to it. So AR by itself can’t serve as the interface.
Nearby Places using AR
When we talk about input, another important parameter is context. This is becoming a major topic in the search space. To understand its significance, consider an example in which I ask a search engine for “best restaurants”. A search engine could show me a list of the top restaurants in the world, but there is a good chance I am asking for a good restaurant near me. This is how context can make a big difference.
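To make the “best restaurants” example concrete, here is a minimal sketch of how location context can change a ranking. Everything here is invented for illustration (the restaurant data, the scoring weight, and the crude distance formula are all assumptions, not how any real search engine works):

```python
import math

def distance_km(a, b):
    """Rough planar distance between two (lat, lon) points; fine for a toy example."""
    dlat = (a[0] - b[0]) * 111.0  # ~111 km per degree of latitude
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def rank(restaurants, user_location=None):
    """Without context: sort purely by rating.
    With location context: penalize results that are far from the user."""
    if user_location is None:
        return sorted(restaurants, key=lambda r: -r["rating"])
    return sorted(
        restaurants,
        key=lambda r: -(r["rating"] - 0.5 * distance_km(r["loc"], user_location)),
    )

places = [
    {"name": "World Famous Bistro", "rating": 4.9, "loc": (48.85, 2.35)},   # Paris
    {"name": "Local Favorite",      "rating": 4.5, "loc": (28.61, 77.21)},  # Delhi
]

# Without context, the globally top-rated place wins; once the user's
# location (Delhi) is known, the nearby restaurant rises to the top.
print(rank(places)[0]["name"])
print(rank(places, user_location=(28.60, 77.20))[0]["name"])
```

The exact scoring function is a placeholder; the point is simply that the same query produces different, more useful results once context enters the ranking.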
Let’s compare these 3 in terms of context.
As things stand today, the only context a text interface gets is location, via GPS (or whatever I provide explicitly). If I have to give any further context, I have to type it in along with my query. So it’s not that good in terms of context.
Same as text. Voice has to take context via direct input, which in this case means telling it the context verbally.
AR is really good when it comes to context. As it gathers a lot of information about our surroundings, AR will be a crucial part of context-based search.
Apart from these input mediums, services like Google are also tracking our daily lives (again, RIP privacy; more discussion on this topic here) and trying to find patterns in them, both long-term and short-term, which even we don’t realize or know. For example, if I go to a movie every Friday, my device will eventually detect the pattern and start suggesting movies in Google Now cards.
Also, projects like Behavio (now acquired by Google) are using the sensors in our devices to judge what we are doing now and predict what we may want or do next, allowing service providers (like search engines) to understand and anticipate our behavior.
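The “movie every Friday” idea above boils down to finding a dominant pattern in an event log. A toy sketch, with made-up data and an arbitrary 60% threshold (real systems like Google Now use far richer signals and models):

```python
from collections import Counter
from datetime import date

def likely_outing_day(outings, min_share=0.6):
    """Return the dominant weekday name if it accounts for at least
    min_share of all logged outings, else None (no clear pattern)."""
    days = Counter(d.strftime("%A") for d in outings)
    day, count = days.most_common(1)[0]
    return day if count / len(outings) >= min_share else None

# Hypothetical log of past movie nights: three Fridays and one Saturday.
movie_nights = [
    date(2018, 11, 2), date(2018, 11, 9), date(2018, 11, 16),  # Fridays
    date(2018, 11, 24),                                        # a Saturday
]

print(likely_outing_day(movie_nights))  # the Friday habit dominates
```

Once such a pattern is detected, a service can start surfacing movie suggestions ahead of the predicted day rather than waiting for an explicit query.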
Well, we have talked a lot about inputs. Now let’s focus on output efficiency.
Text has a good amount of output efficiency. When I search for “how to make a cake”, I get a result like this:
Result for “how to make a cake”
Here I can simultaneously look at pictures of the cake, the recipe, and the reviews. I can also scroll to any part of the recipe easily. But the contextual output is not that good: if I am in the middle of baking the cake, I don’t get any specific suggestions on how to bake it in my particular model of oven.
Overall, the output efficiency is good.
Unfortunately, voice sucks when it comes to output efficiency. Taking the same “how to make a cake” example, a voice search will give me a step-by-step answer. I can’t see any pictures of the procedure or of the finished cake (to decide which one I’d like to make). Also, navigating through the result is hard: I can’t easily skip to a part (I don’t know what the next parts are), and I have to remember a lot of things that I could otherwise just see on a text-based interface.
So, basically, voice sucks in output efficiency.
AR has high output efficiency. Again taking the cake recipe example, not only can I see multiple things at once, but because AR is aware of my context, it can give me granular details tailored to what’s available to me (details on my utensils, my oven, etc.). But when I need to ask about something I can’t see, I still need a text or voice interface for that.
So, AR can really enhance our search experience to a whole new level, when it comes to output efficiency.
So, summing up on interfaces: we clearly see that none of them can act as the ultimate interface alone. What we need is a hybrid of the three, with great input/output efficiency and a good context-awareness factor.
So, a good interface could look like glasses (spectacles) connected to our device. The glasses act as an AR interface with a microphone in them, so they can also take voice commands. In cases where we need 100% precision (as voice is not always precise), we can type on a smartphone connected to our glasses.
This is just a suggestion. If you have a better idea, then shoot it in the comments;)
If you want to dig more into this interface discussion, then head here:
So now we have some idea of how the interface of a future search engine could look. As we get deeper into context-based search and learn about the patterns in our daily lives, a lot of new types of information are flooding onto the Internet. Let’s see how the search horizon will change and what impact it could have tomorrow.
Until now, we (as humans) have been focusing on the adoption of the Internet. But as we reach widespread adoption, thanks to cheap ISPs (in India, Jio played a major part in bringing the Internet to common people) and mobiles (new OSes like KaiOS, which turn feature phones into smartphones), we are adding new data to the Internet at rates faster than ever.
So now we are getting more focused on finding patterns and connections within the existing data. This includes our social data, which reveals a lot about who I am, who my friends are, and so on. Below are some points that have contributed to shaping the current search space.
As we have seen with startups like Blippar, market maturity plays an important role in the adoption of any new technology. Blippar aimed to be an AR search engine, but during its existence (2011–2018) the market didn’t really support AR (despite all the hype). Blippar lost money to competition from giants who were heavily investing in AR; it couldn’t generate enough revenue to stay afloat and shut down in 2018.
Let’s see the state of economic and social maturity of technologies enabling text, voice, and AR.
Economic maturity for text is at its peak. Every search engine present today provides a text interface (well, that’s obvious).
Economic maturity for voice is getting better day by day. Research by Voicebot estimates that 47.3 million people, or nearly one in five adults in the United States, currently own a smart speaker. And these are just people with devices like Google Home and Alexa; most new smartphones also come with built-in voice assistants. Right now there are four major voice assistants: Google Assistant (Google), Siri (Apple), Cortana (Microsoft), and Bixby (Samsung). A lot of money is being poured into voice assistants, which speaks to the maturity of the market.
Here is what major players are doing in AR:
There are tons of other players too in this field.
Even if you haven’t used any of the above, you probably have used some sort of AR. Apps like Snapchat use AR for filters. Also players like Google, Apple and Microsoft have added Augmented reality platforms right into their Operating Systems.
So, compared to the other two, AR is just getting started. Also, most headsets are still too expensive for the general public.
Almost everyone who has used a search engine has used it through a text interface. We are comfortable with it. Although it takes a bit more time, it’s more accurate when it comes to multiple languages.
Voice is a bit hyped up today. People keep saying that “it will replace text”, but the stats tell a different story. In 2017 and 2018, Stone Temple surveyed approximately 1,000 people across different sexes, age groups, and financial statuses.
This reveals some trends about when we usually use voice commands.
This shows that most people prefer to use voice commands when they are alone or with a known set of people. We can also see the social trend (using voice commands when not alone) improving from 2017 to 2018.
The graph below shows the percentage distribution of people’s answers to the question: “When I need to look up information, I am most likely to… (Please rank your top three choices).”
This shows that most people still prefer typing search queries, either in the browser or in a search engine app (or even asking a friend via text message), rather than using voice search.
Here is the distribution of applications usually used with voice.
This supports the idea that we usually use voice for tasks where text is inconvenient (and we don’t care much about accuracy) or when we are busy with something else (like asking for directions while driving). That’s why posting on social media sees relatively little voice use: there we need accuracy, or we will end up with silly posts.
And this one backs up our results from the previous graph.
So, socially, we are getting more used to voice, but it still hasn’t overtaken text. In 10 years, though, we can expect widespread adoption of voice-controlled search as more people get used to it and more applications provide a voice interface.
Even though companies are pouring in a lot of money, there is not much response from the public yet. People are simply not used to AR right now; most only use it on their mobiles. So AR’s only penetration comes from being bundled with the apps and services we already use, most of which are free.
In the coming years, we can definitely expect growth in AR users as AR devices get sleeker and cheaper. In 10 years, we can expect AR to be roughly where voice is today.
Let’s explore the supporting business models for search engines.
Since the dawn of mass media, advertising has served as a near-perfect business model component for a lot of businesses. It boomed with the introduction of the Internet, and today it has eaten a large part of newspaper and television ad revenue. One reason is that newspapers need a lot of capital to expand to a larger demographic area, whereas on the Internet there is no such financial barrier.
Until now, ads have worked great for providing search engine services free of cost to the masses. But as we move towards a voice-controlled search economy, this model doesn’t seem to work: as we discussed above, voice is really limited in output efficiency. You don’t want to listen to ads when you search for something, right? This is probably the biggest problem for voice-based search engines.
On the other hand, AR has immense potential when it comes to advertising. It is even better than current advertising because we have much more context (like where exactly I am right now). Ads shown in AR can also be more effective and engaging than the website ads we see, and hate, today.
One more contributing factor will be the decentralization wave. Until now, as consumers, we get no benefit for watching those annoying ads. But with decentralized models, it is possible to transparently distribute ad revenue between the publisher and the consumer. We can also expect a shift in how the whole advertising model works: as people grow concerned about their privacy and data mining, we have proposed a model in which the user’s data remains with the user while advertisers and publishers can still show targeted ads. You can learn more about it here:
If you have made it till here, you have definitely learned a lot.
From our analysis, we can say a few things about search engines in the coming years:
These are just some of my views. If you have any comments/suggestions then shoot them in the comment section:)
Thanks for reading ;)
About the Author
He works as a senior blockchain developer and has worked on several blockchain platforms, including Ethereum, Quorum, EOS, Nano, Hashgraph, and IOTA.
He is currently a sophomore at IIT Delhi.
Hold down the clap button if you liked the content! It helps me gain exposure.
Want to learn more? Check out my previous articles.
Clap 50 times and follow me on Twitter: @vasa_develop