Most businesses have an online presence nowadays, and a brand’s reputation can make or break its success. With the proliferation of online platforms, understanding users’ sentiments and opinions about a brand has become essential. In this article, I will show you how to create an automated pipeline that listens to your brand on social media and performs advanced sentiment analysis on every mention, thanks to modern generative models like GPT-4, LLaMA 2, or Mixtral.
Brand sentiment analysis refers to the use of technologies and methods to understand consumers’ emotions, opinions, or attitudes towards a particular brand. Typically, this analysis processes online conversations and comments about the brand on social media sites, online forums, blogs, and other digital platforms. The insights gathered can help the company address issues, improve its products or services, and develop better marketing strategies.
Social media platforms are full of genuine discussions about brands and products. The challenge is twofold: how do you extract this data, and, even better, how do you listen to fresh comments about your brand in real time so that you can analyze them afterwards?
In our example, we will leverage KWatch.io to perform brand monitoring on social media.
KWatch.io allows you to define specific keywords that you want to monitor on platforms like Reddit, LinkedIn, X (Twitter), Hacker News… and then get an alert once a keyword is detected in a post or a comment.
In this example, as we perform brand monitoring, we want to track keywords like our company name, our product name, and our domain name. In other scenarios, we might also be interested in monitoring our competitors or tracking specific concepts that we are interested in.
Once we have created our brand keywords on KWatch.io for social listening, we want to set up an API webhook so that results are automatically ingested into our system whenever our brand is detected in a social media post or comment.
By default, alerts are sent by email, so we should go to the “Notifications” section and add our own API webhook URL.
API webhooks, also known as “web callbacks” or “HTTP push API,” are a way for apps to provide other applications with real-time information. They generate HTTP POST requests when certain events occur and deliver data to other applications as it happens, making them a useful tool for integrating different services.
In our case, the event is a new post or comment that matches one of our keywords: KWatch.io sends an HTTP POST request with the details to the webhook URL we configured. You now need to create your own HTTP endpoint that will receive and read the webhook sent by KWatch.io.
Here is what the KWatch.io webhook looks like (it is a JSON payload):
{
"platform": "reddit",
"query": "Keywords: vllm",
"datetime": "19 Jan 24 05:52 UTC",
"link": "https://www.reddit.com/r/LocalLLaMA/comments/19934kd/sglang_new/kijvtk5/",
"content": "sglang runtime has a different architecture on the higher-level part with vllm.",
}
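Note that the datetime field arrives as a formatted string; if you plan to store or sort alerts, you will probably want to parse it into a proper timestamp. A minimal sketch, assuming the field always follows the format shown above:

from datetime import datetime, timezone

# Assumption: the "datetime" field always uses this format, e.g. "19 Jan 24 05:52 UTC"
raw = "19 Jan 24 05:52 UTC"
parsed = datetime.strptime(raw, "%d %b %y %H:%M %Z").replace(tzinfo=timezone.utc)
print(parsed.isoformat())  # 2024-01-19T05:52:00+00:00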
If this is your first time working with webhooks, you can easily listen to them in Python with FastAPI.
First, install FastAPI and Uvicorn:
pip install fastapi uvicorn
Then create the following server script (this is just to give you an idea; you will need to adapt it):
# Import necessary modules
from fastapi import FastAPI
from pydantic import BaseModel

# Initialize your FastAPI app
app = FastAPI()

# Pydantic model to type-check and validate the incoming webhook data
class WebhookData(BaseModel):
    platform: str
    query: str
    datetime: str
    link: str
    content: str

# Define an endpoint to receive webhook data
@app.post("/kwatch-webhooks")
async def receive_webhook(webhook_data: WebhookData):
    # Process the incoming data
    # For demonstration, we're just printing it
    print("Received webhook data:", webhook_data.dict())

    # Return a response
    return {"message": "Webhook data received successfully"}

if __name__ == "__main__":
    # Run the server with Uvicorn
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
To run this script on your server, save the code to a file, for example, `webhook_server.py`, and then start the server using Uvicorn with the following command:
uvicorn webhook_server:app --reload --host 0.0.0.0 --port 8000
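Once the server is running, you can simulate a KWatch.io alert locally to check that the endpoint behaves as expected. Here is a minimal sketch using the requests library (my own choice of tool, installed with pip install requests), posting the sample payload shown earlier to localhost:8000:

import requests

# Sample payload mimicking a KWatch.io alert (taken from the example above)
payload = {
    "platform": "reddit",
    "query": "Keywords: vllm",
    "datetime": "19 Jan 24 05:52 UTC",
    "link": "https://www.reddit.com/r/LocalLLaMA/comments/19934kd/sglang_new/kijvtk5/",
    "content": "sglang runtime has a different architecture on the higher-level part with vllm.",
}

# Post it to the local webhook endpoint defined in webhook_server.py
response = requests.post("http://localhost:8000/kwatch-webhooks", json=payload)
print(response.status_code, response.json())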
Modern large language models make sentiment analysis easy and very accurate.
We can use a foundational generative model (like LLaMA 2, Mixtral…) and work on a simple prompt for sentiment analysis. In this example, I won’t show you how to deploy Dolphin Mixtral 8x7b yourself, as it is quite complex and deserves a dedicated article; instead, we will use Dolphin Mixtral 8x7b through the NLP Cloud API.
The prompt doesn’t have to be complex. Imagine that your brand name is “OpenAI,” and you want to analyze the following comment from Reddit:

“Wasn't it the same with all OpenAI products? Amazing and groundbreaking at first, soon ruined by excessive censorship and outpaced by the competitors”

Here is how you can automatically analyze the sentiment about your brand by using the NLP Cloud Python client.
First, install the NLP Cloud client:
pip install nlpcloud
Then, create the following script:
import nlpcloud

brand = "OpenAI"
reddit_comment = "Wasn't it the same with all OpenAI products? Amazing and groundbreaking at first, soon ruined by excessive censorship and outpaced by the competitors"

# Initialize the NLP Cloud client with the Dolphin Mixtral 8x7b model
client = nlpcloud.Client("dolphin-mixtral-8x7b", "<your api token>", gpu=True)

# Ask the model for a one-word sentiment classification
print(client.generation(f"What is the sentiment about {brand} in the following comment? Positive, negative, or neutral? Answer with 1 word only.\n\n{reddit_comment}"))
Here is the answer returned by the model:
Negative
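In practice, the generation call returns a JSON response rather than a bare string, so you will want to extract the text and normalize it before storing it. Here is a minimal sketch; it assumes the response is a dictionary containing a generated_text field (as returned by NLP Cloud's generation endpoint) and that the model sticks to one-word answers, which you should not take for granted:

def normalize_sentiment(response: dict) -> str:
    # Assumption: the response contains a "generated_text" field with the model's answer
    raw = response.get("generated_text", "").strip().lower()

    # Map the free-form answer onto the three labels we asked for
    for label in ("positive", "negative", "neutral"):
        if label in raw:
            return label
    return "unknown"

response = client.generation(f"What is the sentiment about {brand} in the following comment? Positive, negative, or neutral? Answer with 1 word only.\n\n{reddit_comment}")
print(normalize_sentiment(response))  # "negative"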
Now that we know how to extract the sentiment of a social media comment about our brand, here is the final program, which receives alerts from KWatch.io and automatically performs sentiment analysis with NLP Cloud:
# Import necessary modules
from fastapi import FastAPI
from pydantic import BaseModel
import nlpcloud

# Initialize the NLP Cloud client
client = nlpcloud.Client("dolphin-mixtral-8x7b", "<your api token>", gpu=True)

# Initialize your FastAPI app
app = FastAPI()

# Pydantic model to type-check and validate the incoming webhook data
class WebhookData(BaseModel):
    platform: str
    query: str
    datetime: str
    link: str
    content: str

# Define an endpoint to receive webhook data
@app.post("/kwatch-webhooks")
async def receive_webhook(webhook_data: WebhookData):
    # Analyze the sentiment of the incoming comment
    # For demonstration, we're just printing the result
    brand = "OpenAI"
    print(client.generation(f"What is the sentiment about {brand} in the following comment? Positive, negative, or neutral? Answer with 1 word only.\n\n{webhook_data.content}"))

    # Return a response
    return {"message": "Webhook data received successfully"}

if __name__ == "__main__":
    # Run the server with Uvicorn
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
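One thing to keep in mind: the LLM call can take a few seconds, and webhook senders usually expect a fast response (whether KWatch.io retries on timeouts is an assumption on my part, not something confirmed above). A simple improvement is to acknowledge the webhook immediately and run the analysis in the background with FastAPI's BackgroundTasks. Here is a minimal sketch of the endpoint, meant as a drop-in replacement for the one in the final program above (it reuses the client, app, and WebhookData defined there):

from fastapi import BackgroundTasks

def analyze_sentiment(content: str):
    # Runs after the HTTP response has been sent
    brand = "OpenAI"
    result = client.generation(f"What is the sentiment about {brand} in the following comment? Positive, negative, or neutral? Answer with 1 word only.\n\n{content}")
    print(result)

@app.post("/kwatch-webhooks")
async def receive_webhook(webhook_data: WebhookData, background_tasks: BackgroundTasks):
    # Queue the sentiment analysis and return immediately
    background_tasks.add_task(analyze_sentiment, webhook_data.content)
    return {"message": "Webhook data received successfully"}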
Automating brand sentiment analysis is not that hard, thanks to modern generative AI models and good social listening tools.
Such a strategy also works in many other situations related to social media listening. Here are a few examples (a small prompt sketch follows the list):
Monitoring the reputation of a competitor
Monitoring the sentiment about a stock option
Monitoring the sentiment about a specific technology trend (like AI, crypto, etc.)
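In all of these cases, the pipeline stays the same; only the subject of the prompt changes. Here is a small sketch of a reusable prompt builder (the function name and signature are my own, not part of any library), reusing the NLP Cloud client from the examples above:

def build_sentiment_prompt(subject: str, comment: str) -> str:
    # "subject" can be a competitor name, a stock, or a technology trend
    return (
        f"What is the sentiment about {subject} in the following comment? "
        f"Positive, negative, or neutral? Answer with 1 word only.\n\n{comment}"
    )

# Hypothetical comment, for illustration only: monitoring a technology trend instead of a brand
print(client.generation(build_sentiment_prompt("crypto", "Honestly, most crypto projects I tried this year felt like scams.")))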
I hope this article was useful, and please don’t hesitate to ping me if you have questions!