Hello, WSS.media is on the line. We have brought you plenty of fresh and interesting information about Bard AI. This article will be useful to SEO specialists, marketing managers, and anyone who keeps up with the latest news. From it, you will learn whether you need to change your promotion strategy and how to do it.
Bard is an experimental chatbot from Google. The key difference from the well-known ChatGPT is that Bard pulls information directly from the internet.
Here is how Bard describes itself:
On February 6, 2023, Google began testing Bard. Now, the chatbot can operate directly in search results and generate a ready answer for users. There hasn't been a mass release yet, but snippets with Bard periodically appear for different people.
It will look something like this:
We assume that there are links to sources in the top right corner. Based on these, Bard prepares an answer to the user's query. Now there is a new goal for SEO specialists — to get on this list.
That's why the lead of our SEO team, Artem, decided to investigate how Bard picks its sources.
We started with a simple idea: we went to the FAQ and looked at what the creators of Bard say. They have prepared a detailed answer:
In short, Bard generates unique content and tries not to duplicate word-for-word information from existing sources. If the chatbot does quote text from any page, it provides a link to it.
We decided to start our journey into the world of Bard with a simple question. The first thing to pay attention to is the three draft answers: Bard prepares several different versions of its response to each query.
Now we ask Bard: “Why is the sky blue?” It gives us a detailed answer to this question with links to sources and a “Google it” button.
If we click the "Google it" button, we get three more possible questions related to our query. The results lead to Google search:
If you ask Bard one of these questions, it will generate a regular answer without links to sources and drafts:
We continued to ask Bard the same question: “Why is the sky blue?” After a while, odd things started to happen with our query. After asking a couple more times, we received this answer:
The first draft lists a source link that leads nowhere:
There is no such source in the search results, and there is no cached copy of the page.
For comparison, we asked Google the same question. It offers us a “featured snippet” with a highlighted sentence from the NASA Space Place website:
If we open this article, we will see the answer right at the beginning, under the same caption "The Short Answer":
Google's answer to the question "Why is the sky blue?" and what Bard generated are different. Although Bard, like Google, used information from the NASA Space Place website to generate the third draft:
The other sources that Bard collected for the drafts are also in the search results. But they are much lower than NASA Space Place:
For example, Bard considers Space.com more authoritative, as it places the website first in the sources. However, Google search does not highlight information from this site.
As we already mentioned, sometimes Bard answers without indicating sources of information. There are no underlines and links in the generated text. Now let's take a closer look at such an answer. We ask Bard the same question about the sky again.
Then we decided to rephrase the question and ask Google. We wanted to compare the answers received from Google and Bard for two similar but not identical questions.
The query for Google was the following — “Why the sky is blue?”. The search engine returned the same NASA Space Place article, but this time the “Short Answer” caption is gone. And the source is not the short answer but the body of the article itself:
Let's return to Bard's answer. Interestingly, it decided to generate something of its own this time rather than quoting someone else's text. Apparently, the choice of answer format is up to Bard itself. It decides whether to give an answer with sources or generate a new answer.
Artem decided to check whether Bard had quoted other sites in its answer. We searched for each sentence in Google, wrapping it in quotation marks — the exact-match search operator "...". A match was found for only one sentence — “Blue light is scattered more than other colors because it travels as shorter waves” — on the educational platform Numerade.
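For anyone who wants to repeat this check, below is a minimal Python sketch of the idea: it splits an answer into sentences, wraps each one in quotes, and builds the corresponding Google search URLs. The sample text is an illustration, not Bard's actual answer.

```python
# A minimal sketch: wrap each sentence of a generated answer in quotes
# (the "..." exact-match operator) and build Google search URLs for them.
# The sample text below is an illustration, not Bard's actual answer.
import re
from urllib.parse import quote_plus

answer = (
    "The sky is blue because of Rayleigh scattering. "
    "Blue light is scattered more than other colors because it travels as shorter waves."
)

# Naive sentence split on ., ! or ? followed by whitespace
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]

for sentence in sentences:
    exact_match_query = f'"{sentence}"'
    print("https://www.google.com/search?q=" + quote_plus(exact_match_query))
```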
This is the only site where Bard's answer is repeated verbatim. But this quote is not present in the page's source code. We checked the cached copy of the page in Google and found a sentence that matches the one from Bard’s answer:
We decided to analyze where the Numerade site is located in the search results. It turned out that for the main query, “Why is the sky blue?” this site is not even in the top 50.
The first question that comes to mind: why would Bard quote this site at all, when it is so far from the top of Google? Most likely, Bard's newly generated answer simply coincided with the text from Numerade.
We decided to ask Bard which sites it considers authoritative for answering the question “Why is the sky blue?” and then compare its list with the sources from the answers we had already received.
The sites from the list are the first ones for our query in Google. And NASA Space Place even got a “featured snippet.”
It also turned out that Bard chose the same resources as sources — the links we saw in the drafts. Let's compare the information on these sites with Bard's answer:
Words and phrases taken from other sources often appear in Bard's answers. We found this not only in the texts included in this article but also in many others that are not mentioned here. This suggests that Bard really does take information from the sources it names.
This hypothesis emerged during work with the query, “Why is the sky blue?”. Bard generated an answer based on websites that appear at the top of Google. That is, whoever is first in the list of Google search results will be the source for Bard.
We decided to check the hypothesis using the query "XRP price prediction":
Bard compiled a list of resources. Next, we decided to check these sites' positions in Google search results.
| Bard's Sources | Google Search Results |
| --- | --- |
| Wallet Investor, DigitalCoinPrice, PricePrediction | Changelly, AMBCrypto, CryptoNewsZ, CoinCodex |
We checked a few dozen more queries and obtained similar results. None of the sources matched, so the hypothesis was not confirmed.
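If you want to automate such a comparison, here is a hedged sketch that measures the overlap between the domains Bard cited and the domains in Google's top results; the exact domain spellings are our assumption based on the site names in the table above.

```python
# Sketch: measure the overlap between the domains Bard cited and the domains
# in Google's top results for the same query. The domain spellings are our
# assumption based on the site names in the table above.
bard_sources = {"walletinvestor.com", "digitalcoinprice.com", "priceprediction.net"}
google_top = {"changelly.com", "ambcrypto.com", "cryptonewsz.com", "coincodex.com"}

overlap = bard_sources & google_top
print("Matching domains:", overlap or "none")
print(f"Overlap share: {len(overlap) / len(bard_sources):.0%}")
```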
We arrived at this hypothesis based on the results of testing the previous one. If Bard does not take its sources from the top of Google, it likely has its own list of authoritative sites for each topic.
We asked Bard which sites it considers authoritative in terms of cryptocurrency price predictions:
Bard compiled a list of such sites. CoinMarketCap does not have a landing page for the XRP price prediction query. Most likely, Bard used the three sites after it to generate an answer to the “XRP price prediction” query.
We decided to test the second hypothesis on another query — "FIFA World Cup 2026 prediction." Bard gave us the following answer:
We also requested the sources from which it took the information.
Then we went to Google and repeated the query:
In the search results, there was only one of the sites that Bard used — ESPN.
Then we again asked the chatbot for a list of sites it considers authoritative for the query "FIFA World Cup predictions."
The list matches the sources that Bard used to generate an answer to the "FIFA World Cup 2026 prediction" query. Everything is the same as last time.
We decided to take a closer look at these sites:
The only relevant source for answering our query about the World Cup is
And
What conclusion can be drawn? Bard uses a list of authoritative sites for each topic as its sources. It does not particularly care whether the information on those sites is relevant. The hypothesis is confirmed.
We decided to ask how Bard determines whether a site is authoritative for a particular niche.
The terms “reputable sites,” “well-known,” and “good reputation” suggest the importance of backlinks.
And what did we decide to do? Correct! Conduct a backlink analysis of the sites that Bard called authoritative. We started with the sites that formed the basis for the answer to the query "FIFA World Cup 2026 prediction."
We noticed that the higher the metrics for Keywords (Ahrefs), Organic traffic (Ahrefs), Domain Rating (DR), and Backlinks of a site, the more authoritative it is for Bard. The chatbot ranked the sites in its list from higher to lower metrics.
Domain age (years), Referring domains, and Pages in Google do not affect authority in Bard's eyes. In the table, you can see how these metrics vary across the sites: there is no trend of increase or decrease.
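One way to make the "higher metrics, higher in Bard's list" observation measurable is a rank correlation between a site's position in Bard's list and each metric. Below is a minimal sketch with placeholder numbers rather than the actual Ahrefs values; with these placeholders, DR and Backlinks come out strongly negative (an earlier position pairs with a higher metric), while Domain age lands near zero, mirroring the lack of a trend in our table.

```python
# Sketch: Spearman rank correlation between a site's position in Bard's list
# (1 = listed first) and its metrics. The numbers are placeholders, not the
# actual Ahrefs values from our table. A rho close to -1 means "the earlier
# the site appears in Bard's list, the higher the metric".
from scipy.stats import spearmanr

bard_position = [1, 2, 3, 4]
metrics = {
    "Domain Rating (DR)": [91, 88, 80, 74],
    "Backlinks":          [1_200_000, 950_000, 400_000, 120_000],
    "Domain age (years)": [12, 25, 8, 19],
}

for name, values in metrics.items():
    rho, p_value = spearmanr(bard_position, values)
    print(f"{name}: rho = {rho:.2f}, p = {p_value:.3f}")
```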
After that, we decided to analyze not just the sites themselves but specific pages:
The more backlinks a site has, the more authoritative it is for Bard. And we came up with a hypothesis:
Site metrics are more important than the metrics of specific pages.
Artem suggested analyzing the sites that Bard considers authoritative in the subject of cryptocurrency price predictions:
Sites that Bard mentioned when asked for "XRP price prediction":
As we mentioned earlier, Bard skipped CoinMarketCap because it did not find suitable pages in this source. If we evaluate the other sites, we can see that they are also arranged by the metrics Keywords (Ahrefs), Organic traffic (Ahrefs), Domain Rating (DR), and Backlinks (from higher to lower).
In addition to these metrics, Ref. domains and Pages in Google followed the same pattern this time. The sites that Bard considers authoritative have higher metrics in these areas.
The Domain Age metric varies greatly from site to site. Therefore, it is not associated with authority in Bard's eyes.
Let's return to the query about the sky. Recall that Bard generated three drafts based on the following sources:
We skipped the second version because Bard did not specify the source.
As in the analysis above, we formed a list of four sites:
We arranged these sites in the order of search results, and here is what we got:
The clear leader is NASA Space Place, but only by one metric — Domain Rating (DR). In all other metrics, the site is inferior to Space.com.
The Brainly.in site significantly surpasses the Space.com site in most metrics, but falls behind in DR. This led us to think that we need to analyze the backlink profile of specific pages more deeply.
The first thing we discovered is the connection between the first place in Google's search results and Bard's choice of sources. After all, the NASA Space Place website was in the "featured snippet" in both cases.
But how will Bard choose a suitable source among equally reputable websites? This led us to a new assumption:
If the sources have equally high metrics, Bard takes into account the metrics of the specific page of the site where the information it needs is located.
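To make the assumption concrete, here is a sketch of the selection logic we have in mind. This is only our hypothesis, not Bard's actual algorithm; the field names, numbers, and tolerance are made up for illustration.

```python
# A sketch of our HYPOTHESIS only, not Bard's actual algorithm: compare
# domain-level metrics first and fall back to page-level metrics when the
# domains are roughly equal. Field names, numbers, and the tolerance are
# made up for illustration.
def pick_source(candidates, dr_tolerance=5):
    best_dr = max(c["domain_dr"] for c in candidates)
    top_tier = [c for c in candidates if best_dr - c["domain_dr"] <= dr_tolerance]
    if len(top_tier) == 1:
        return top_tier[0]
    # Domains are comparable, so decide by the metrics of the specific page
    return max(top_tier, key=lambda c: c["page_backlinks"])

sites = [
    {"url": "spaceplace.nasa.gov", "domain_dr": 93, "page_backlinks": 4200},
    {"url": "space.com",           "domain_dr": 92, "page_backlinks": 1800},
]
print(pick_source(sites)["url"])
```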
We decided to ask Bard about "Pepe coin price."
This query does not call for forecasts or assumptions; it is based on factual data. However, Bard gave us a list of reputable sites similar to the sources it used for the “XRP price prediction” query.
This time, Bard generated a large text with clickable images that lead to sources. We saw that CoinMarketCap leads the list and is a relevant source for this query.
This same site received a "featured snippet" from Google for the query "Pepe coin price":
Binance is in third place in the search results. The price shown in this site's snippet matches the price on Crypto.com and the price in Bard’s answer. Most likely, the chatbot used data specifically from Binance.
As we did before, we conducted another analysis and immediately noticed that Binance's metrics are much higher than CoinMarketCap’s.
Binance:
CoinMarketCap:
The decisive factors for Bard were the familiar metrics — DR and Backlinks.
We decided to check what responses Bard would generate if the queries are semantically identical but different in form. This idea emerged after we received different answers by rephrasing the query "Why is the sky blue?" several times.
The answers to our queries differ in structure and presentation of the material. Most likely, this depends on the queries themselves. The first query reads like a child's question, so the answer is simpler. The second sounds like an adult's, and the answer is much more complex and detailed. Perhaps Bard also tries to understand who exactly is formulating the query.
If you enter these queries into Google, the sites in the search results and the “featured snippet” will be different:
Next, we analyzed the same queries in Surfer SEO:
The service showed a direct correlation with the use of the words “do,” “have,” and “why” in the body section. These words appear more frequently on sites in the top 10 — between 75 and 89 times. On sites on the second and subsequent pages of search results, they occur 55-65 times.
Analysis of the weight and frequency of words used in Bard's response, content on the page for the query “Why do we have 5 fingers?” and the “featured snippet” did not show any correlations.
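The kind of word counting behind these observations can be reproduced without Surfer SEO. Here is a minimal sketch; the sample text is a placeholder, not real copy from the analyzed pages.

```python
# Sketch: count how often specific words appear in a page's body text --
# the same kind of signal Surfer SEO reports. The sample text is a
# placeholder, not real copy from the analyzed pages.
import re
from collections import Counter

body_text = (
    "Why do we have five fingers? We have them because our ancestors did, "
    "and that is why most vertebrate limbs still have five digits."
)
words = re.findall(r"[a-z']+", body_text.lower())
counts = Counter(words)

for target in ("do", "have", "why"):
    print(f"{target}: {counts[target]}")
```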
We analyzed the query “Pepe coin price” using Surfer SEO:
Here we can see an inverse correlation for the frequency of the words “pepe,” “coin,” and “price”: sites on the first page of search results use these words less often than sites on the second and subsequent pages.
We also noticed a direct correlation with the exact match of the query “Pepe coin price” in the body of the pages. Pages in the top 10 use this phrase once, while pages further down mention it two or more times.
In its response, Bard did not use the direct match; it used the reverse order of words — “price of Pepe Coin.”
In Google search results, we also noticed the mention of the Memetic coin on several pages:
These pages mention that the Pepe coin has a Memetic origin. The word “Memetic” appears on those pages very often, yet Bard did not use it at all. This led us to think that Bard focused on pages about the main Pepe coin's price, not on the Memetic version of the coin.
For comparison, we conducted a textual analysis of content from the Binance page and Bard's response. The main keyword, “price Pepe,” has almost the same percentage of occurrence in both texts.
The results of the textual analysis of content from the CoinMarketCap page differ from Binance and Bard. The main query, “Pepe price” is mentioned three times. The percentage of occurrence is 0.2% compared to 0.9% at Binance and 1% at Bard:
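Such an occurrence percentage (keyword density) is easy to compute yourself. Below is a minimal sketch: density is taken as phrase hits divided by total word count, and the texts are placeholders rather than the real Bard, Binance, or CoinMarketCap copy.

```python
# Sketch: occurrence percentage (keyword density) = phrase hits / total words.
# The texts are placeholders, not the real Bard/Binance/CoinMarketCap copy.
def keyword_density(text: str, phrase: str) -> float:
    words = text.lower().split()
    hits = text.lower().count(phrase.lower())
    return hits / max(len(words), 1) * 100

texts = {
    "Bard's answer": "The current Pepe price is ... The Pepe price has changed by ...",
    "Binance":       "Live Pepe price today ... track the Pepe price in real time ...",
    "CoinMarketCap": "The live Pepe price chart ... market cap, volume and supply ...",
}
for name, text in texts.items():
    print(f"{name}: {keyword_density(text, 'Pepe price'):.1f}%")
```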
Based on this data, Artem put forward a hypothesis:
Bard bases its responses on the analysis of the main and most authoritative source, keyword density, and the number of key queries on the page.
We also noticed that Bard's response indicates where Pepe Coin can be purchased. This information is available on the CoinMarketCap page but is absent on Binance.
We conducted a TF-IDF analysis of Bard’s response and texts from the Binance and CoinMarketCap pages.
Bard (doc 1) and Binance (doc 2) have similar Term Count metrics. Their TFxIDF scores for the words “price” and “Pepe” are also very close. However, CoinMarketCap (doc 3) has similar results to Bard in terms of the occurrence of the word “Coin.”
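A TF-IDF comparison like this can be reproduced with a few lines of scikit-learn. The sketch below uses placeholder texts instead of the actual contents of Bard's answer and the two pages.

```python
# Sketch: TF-IDF comparison of three documents with scikit-learn. The texts
# are placeholders, not the actual contents of Bard's answer or the pages.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = {
    "bard":          "The price of Pepe coin today is ... you can buy Pepe on major exchanges.",
    "binance":       "Pepe price is updated in real time ... see the live Pepe price chart.",
    "coinmarketcap": "The live Pepe price ... where to buy Pepe coin and its market cap.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(docs.values())
vocab = vectorizer.vocabulary_

for term in ("price", "pepe", "coin"):
    if term in vocab:
        scores = matrix[:, vocab[term]].toarray().ravel()
        print(term, dict(zip(docs.keys(), scores.round(3))))
```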
This led us to the following hypothesis:
Bard took a paragraph with purchasing locations from the CoinMarketCap page because this site is also authoritative. If CoinMarketCap emphasizes this information, it means that it should also be included in the generated response.
For the experiment, we chose the Thefashionisto website. Why? This website quickly gained backlinks and significantly increased its DR from December 2020 to August 2021.
At the time of the analysis, its metrics were as follows:
The site has many quality links from trusted resources, for example, Wikipedia.
During the analysis, we discovered that the rating had grown through external-link spam: placements on forums and abandoned platforms.
Example from Pingback
Additionally, the site has decent keywords for which it ranks. For example, for the query "mens 70s fashion":
However, when we sent this query to Bard, it used more authoritative and larger sites:
The selected sources surpassed Thefashionisto in DR and Backlinks metrics. However, inflating the metrics will not produce the desired effect because Google's algorithms work better than external link analysis services.
Spamming redirects or direct links to your site can deceive services such as Ahrefs and perhaps even help monetize the site. However, these methods will not make your site authoritative for Bard, and you can forget about being included in its source list.
Google has once again proven the importance of the link profile, not only for search results but also for AI.
Create quality and useful content for users.
Want to be featured in Bard's response? Improve and build a quality backlink profile. Unique and quality content will help you with this.
Bard's answer may consist of direct quotes with source attribution. The chatbot's generated response is based on a general textual analysis of words and their percentage ratios in the texts of the top 10 websites.
To increase the chances of being included in the list of websites used to prepare Bard's response, get your pages to the top of Google search results, keep building a quality backlink profile, and publish unique, useful content.