There are plenty of easy-to-use bot-building frameworks developed by big companies. I had a chance to take a close look at Dialogflow (formerly known as API.AI; developed by Google) and Bot Framework (developed by Microsoft). Both Dialogflow and Bot Framework come with pre-built language understanding models. These frameworks seem to be great tools for getting started quickly, especially if you don't have existing chat logs that you can use as training data.
However, there are circumstances in which you may want to avoid a closed-source framework that processes user inputs on servers owned by Google or Microsoft. For instance, suppose you are developing a chatbot for a business, and the chatbot will be receiving potentially sensitive or confidential information from its users. In such a case, you may be more comfortable keeping all the components of your chatbot in house.
This is where the Rasa platform comes in incredibly handy. It is an open-source bot-building framework. It doesn't have any pre-built models on a server that you can call through an API, which means it takes more work to get running. However, I think being in complete control of all the components of your chatbot is well worth the time investment.
The Rasa stack consists of two major components: Rasa NLU and Rasa Core. Rasa NLU is responsible for the chatbot's natural language understanding. Its main purpose is, given an input sentence, to predict the intent of that sentence and extract useful entities from it. The intent dictates how the chatbot should respond to a user's input, while entities are used to make responses more customized (e.g. remembering the user's name or age). More information on intents and entities can be found in the tutorial on the Rasa NLU docs website.
The second component, Rasa Core, is the next stage in the Rasa stack pipeline. It takes structured input in the form of intents and entities (the output of Rasa NLU or any other intent classification tool) and chooses which action the bot should take using a probabilistic model (more specifically, an LSTM neural network implemented in Keras).
Typical output of Rasa NLU
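For illustration, a parse result from Rasa NLU looks roughly like this (the field layout follows the Rasa NLU docs; the intent name, confidence values, and text here are made up for this example):

```json
{
  "text": "cant login from my phone",
  "intent": {
    "name": "login_problem",
    "confidence": 0.92
  },
  "entities": [
    {
      "entity": "login_problem_type",
      "value": "phone",
      "start": 19,
      "end": 24
    }
  ]
}
```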
The cool thing about Rasa is that every part of the stack is fully customizable and easily interchangeable. It is possible to use Rasa Core or Rasa NLU separately (I initially started with Rasa by using just the NLU component). When using Rasa NLU, you can choose among several backend NLP libraries. The LSTM neural network that Rasa Core uses for action prediction can easily be swapped for any other model, if you know a little bit about recurrent neural networks and how to implement them in Keras. So far the default LSTM works great for my application and trains relatively fast, so I haven't experimented with changing it.
There is little point in going into more detail about the Rasa stack here. The best way to learn what it is capable of is to work through the tutorials on the Rasa Core and Rasa NLU documentation websites. I found the tutorials and documentation to be comprehensive and easy to follow.
I've been working on a simple help-desk chatbot for the past couple of weeks. Here, I want to share a few tips that might be useful if you are just starting the development of your first chatbot with the Rasa stack. These tips will make the most sense if you are familiar with the general structure of a Rasa chatbot (that is, it is better to work through the tutorials first).
If you’ve read the entire documentation for Rasa NLU and Rasa Core carefully, you won’t learn anything new here.
There actually is a bolded sentence in the Deep Dives section of the Rasa Core docs saying:

Choice of a slot's type should be done with care.
I missed this part when reading the docs for the first time, which resulted in an annoying bug: the chatbot stubbornly gave incorrect responses to some inputs that had been used to train it.
Suppose one of the intents that your chatbot recognizes is a login problem. You want your bot to provide a generic response (or ask for clarification) when a user mentions a login problem without providing any details. In addition, you have also prepared three specific responses for cases when a user has login problems on their computer, phone, or tablet.
In this case you should define these keywords as values of a categorical slot in `domain.yml`, as follows:

```yaml
entities:
  - login_problem_type

slots:
  login_problem_type:
    type: categorical
    values:
      - computer
      - phone
      - tablet
```
It is easy to make the mistake of defining the type of this slot as `text`, since that type is used in most tutorials on the docs website. The description of the `text` slot type says:

Results in the feature of the slot being set to `1` if any value is set. Otherwise the feature will be set to `0` (no value is set).
This means that the slot will not capture which specific device a user has a problem with, only that the user has a problem with one of the devices that you listed. Most likely, the chatbot will select and give one of the prepared answers regardless of the device type it receives.
Now, here is the description of the `categorical` type:

Creates a one-hot encoding describing which of the values matched.
This means that the chatbot will be able to distinguish between login problems that involve different device types.
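To make the difference concrete, here is a minimal Python sketch of the two featurization schemes described above (this is an illustration, not Rasa's actual implementation):

```python
def featurize_text_slot(value):
    """A `text` slot contributes a single binary feature: set or not set."""
    return [1.0 if value is not None else 0.0]

def featurize_categorical_slot(value, values=("computer", "phone", "tablet")):
    """A `categorical` slot contributes a one-hot vector over its declared values."""
    return [1.0 if value == v else 0.0 for v in values]

# With a `text` slot, "phone" and "computer" produce identical features,
# so the dialogue model cannot distinguish the device types.
print(featurize_text_slot("phone"))        # [1.0]
print(featurize_text_slot("computer"))     # [1.0]

# With a `categorical` slot, each device type gets its own feature position.
print(featurize_categorical_slot("phone"))  # [0.0, 1.0, 0.0]
```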
I continue using the slot from the previous example here. It is expected that some people will refer to their computers as a "desktop" or "laptop". Therefore we need to define these words as synonyms for the `computer` slot value. There are two ways of doing it:
In the `data/nlu.md` file, which contains the training data for the Rasa NLU interpreter, you can specify synonyms in each example where they appear, as follows:

```md
- I changed my [laptop](login_problem_type:computer) recently and cant login
- cant login from my new [macbook](login_problem_type:computer). Is there any additional software that i need to install?
```
Alternatively, in the `models/nlu/current/entity_synonyms.json` file, you can add synonyms to the dictionary directly, using the synonym as the key and the slot value as the value (using all lowercase is sufficient). Example:

```json
{
  "laptop": "computer",
  "desktop": "computer",
  "machine": "computer",
  "pc": "computer",
  "macbook": "computer",
  "iphone": "phone",
  "ipad": "tablet"
}
```
Synonyms specified using the first method are added to the dictionary in `entity_synonyms.json` when you train your Rasa NLU model, so both ways of adding synonyms are, in the end, equivalent.
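As a quick illustration of what this mapping does at runtime, here is a toy sketch (the `normalize` helper below is hypothetical, not part of Rasa's API):

```python
import json

# An entity_synonyms.json-style mapping: synonym -> canonical slot value
synonyms = json.loads("""
{
  "laptop": "computer",
  "macbook": "computer",
  "iphone": "phone",
  "ipad": "tablet"
}
""")

def normalize(entity_value):
    # Hypothetical helper: lowercase the extracted value, then replace it
    # with its canonical form if a synonym entry exists.
    value = entity_value.lower()
    return synonyms.get(value, value)

print(normalize("MacBook"))  # computer
print(normalize("tablet"))   # tablet (no synonym entry, passed through unchanged)
```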
Once you release your chatbot, you will start accumulating a lot of valuable training data very quickly. This means frequent updates to the `data/nlu.md` and `data/stories.md` files. These two files grow pretty quickly (in number of lines) as you collect more training data and add more complexity to your bot. I am not saying that the disk size of a text file is a concern, but since these files are edited manually, it's important to keep them organized.
It's a little easier with the `data/nlu.md` file. I see it as an append-only file: since it is used to train the interpreter, and additional training data rarely hurts performance, I just add new user inputs to the corresponding intents in that file.
It's a completely different story with `data/stories.md`. You will probably need to change existing stories as you add new functionality to your bot, or when you encounter a user who leads a valid but very unpredictable conversation. `data/stories.md` becomes much easier to manage when you make it more modular by using checkpoints.
Let's say you are writing stories about conversations in which a user reports a login problem without providing any details, and the chatbot needs to ask for clarification:
```md
## help with login (general)
* login_general
  - utter_login_specify
> clarify login problem
```
The `> clarify login problem` checkpoint is added at the end of this "story module", since the exchange above is a common beginning to several types of more specific login problems.
Then we can specify “story branches” for various more specific login problems.
```md
## help with login (computer)
> clarify login problem
* login_clarification{"login_problem_type": "computer"}
  - utter_login_computer_help

## help with login (phone)
> clarify login problem
* login_clarification{"login_problem_type": "phone"}
  - utter_login_phone_help

## help with login (tablet)
> clarify login problem
* login_clarification{"login_problem_type": "tablet"}
  - utter_login_tablet_help
```
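For these stories to run, the intents and utterance templates they reference must also be declared in `domain.yml`. A minimal sketch (the response texts here are made up):

```yaml
intents:
  - login_general
  - login_clarification

templates:
  utter_login_specify:
    - "Sorry to hear that. Are you logging in from a computer, phone, or tablet?"
  utter_login_computer_help:
    - "For login problems on a computer, try clearing your browser cache first."
  utter_login_phone_help:
    - "For login problems on a phone, make sure the app is up to date."
  utter_login_tablet_help:
    - "For login problems on a tablet, make sure the app is up to date."

actions:
  - utter_login_specify
  - utter_login_computer_help
  - utter_login_phone_help
  - utter_login_tablet_help
```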
Clearly, using checkpoints makes the stories file more organized and helps you save some time when writing new stories. Another example of checkpoint use is in the docs.
Specify `sender_id` when handing over a user message to the bot

Most of Rasa Core's functionality can be accessed through the methods of the `Agent` class. `handle_message(text_message)` is one of these methods. It accepts user input as an argument, runs that input through the pipeline I described in the introduction, and returns the message the bot wants to respond with.
One of the optional arguments of the `handle_message` method is `sender_id`. If you have a single `Agent` instance serving multiple clients, it is very important to assign a unique `sender_id` to each client and pass it along with each input from that client: `handle_message(text_message, sender_id=some_unique_id)`.
Nothing is mentioned about `sender_id` in the docs, and its purpose wasn't clear to me at first. Initially, I thought that an `Agent` instance keeps track of a single conversation, so to serve two clients simultaneously I would need two `Agent` instances, creating a new one for each client connecting to the server and deleting or resetting the state of old ones as clients leave.
There's no need for any of this hassle if you specify `sender_id`. For each unique user, the processor of the `Agent` instance creates a `Tracker` instance, which maintains the conversation state with that particular user. Each `Tracker` instance is kept in the tracker store, where it is updated with the current state of the conversation. There is no need to worry about the chatbot mixing up conversations with different users when using a single `Agent` instance!
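The mechanism can be sketched in plain Python (this is a toy illustration of the idea, not Rasa's actual classes):

```python
# Toy illustration of per-sender conversation state, not Rasa's code:
# a single agent serves many users because state is keyed by sender_id.

class TrackerStore:
    def __init__(self):
        self._trackers = {}  # sender_id -> list of messages seen so far

    def get_or_create(self, sender_id):
        # Return the existing tracker for this sender, creating one if needed.
        return self._trackers.setdefault(sender_id, [])

class Agent:
    def __init__(self):
        self.store = TrackerStore()

    def handle_message(self, text, sender_id="default"):
        tracker = self.store.get_or_create(sender_id)
        tracker.append(text)
        return f"({sender_id}) message #{len(tracker)} received"

agent = Agent()
print(agent.handle_message("hi", sender_id="alice"))    # (alice) message #1 received
print(agent.handle_message("hi", sender_id="bob"))      # (bob) message #1 received
print(agent.handle_message("help", sender_id="alice"))  # (alice) message #2 received
```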
You can get an overview of how the Rasa Core components fit together here. You can find more details on how `sender_id` is handled in the source files `processor.py` and `tracker_store.py`.
For me, it was surprisingly easy to get started with Rasa. Although I had no real training data, I was able to train the first prototype successfully with made-up data. My chatbot now recognizes 15 different intents pretty accurately, some of which have very similar meanings and expected sentence structures. I trained the interpreter (`nlu.md`) on ~300 examples, about 10–20 examples per intent. My `stories.md` file is ~150 lines long, with only 25 stories in total. I am very impressed by how accurately the bot classifies previously unseen inputs after being trained on such a relatively small amount of data.
The Rasa stack is very easy to work with. It is possible to create a working chatbot that you can interact with in the terminal without writing a single line of code. As you've probably noticed from my examples above, the Rasa files containing training data are written in Markdown, and the domain specification is written in YAML. Of course, when you want to make the bot customer-ready, there's plenty of programming involved in designing its backend and frontend.
Testing the chatbot from the terminal
Personally, the hardest part of the chatbot development for me was creating a user-friendly frontend, since I have almost no experience in that area. If you are having trouble developing a good frontend, I suggest taking a look at simple React or Angular chat-room projects on GitHub and forking the one you like (and understand!) the most. You won't have to change much, since a chatbot frontend is almost identical to a regular chat-room frontend. Although I didn't explore that path, I know it is possible to connect a Rasa chatbot to Facebook Messenger; that way you don't have to worry about the frontend at all.
I hope these tips will help developers who are just starting to use Rasa to build a prototype faster and avoid some common mistakes in the process.
The best source of information about the Rasa stack is the documentation; there have been plenty of links to it throughout this text. Rasa also has a blog on Medium, which you can find here.
Update: I changed the intent format in the stories to the current one (introduced in version 0.10.1 of Rasa Core).