
Conversational Chatbot using Rasa with integrated Q&A Model trained on Bert and SQUAD2.0 (Part-1)


What is a Chatbot? And how does it differ from Conversational AI?

In simple words, a chatbot is an interface that navigates you through a flow: instead of picking from a menu of choices, you talk or type in natural language and get the information you need.

So what is Conversational AI? Conversational AI is more human-like, supporting multi-turn, judgement-intensive conversations.

Most chatbots today are just glorified flowcharts executing IF/THEN/ELSE conditions and lack true conversational AI. This is partly because it is tough to implement hundreds of different scenarios and integrations into the flow. A chatbot is merely hardcoded logic, while conversational AI understands natural language, follows a dynamic flow, and learns over time as it is used more and more.
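To make the contrast concrete, here is a toy, hypothetical example of such a flowchart bot: every input is matched against hardcoded keyword rules, with no learning and no context.

```python
def flowchart_bot(message):
    # Hardcoded IF/THEN/ELSE "dialogue": the keywords and replies are
    # made up purely for illustration.
    text = message.lower()
    if "balance" in text:
        return "Your balance is $100."
    elif "hours" in text:
        return "We are open 9am-5pm."
    else:
        return "Sorry, I didn't understand. Try asking about: balance, hours."
```

Every new scenario means another branch, which is why this approach does not scale to hundreds of flows.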

This blog focuses on how to build a chatbot with the RASA framework and then highlights the integration of a Question and Answer model, built separately using BERT trained on SQuAD2.0. This is a step towards conversational AI.

What is RASA?

Rasa is an open-source Conversational AI framework. It has two main components, Rasa NLU and Rasa Core.

Rasa NLU is an open-source natural language processing library for intent classification, response retrieval and entity extraction in chatbots. It can extract user intents and entities, create forms and buttons, and invoke custom actions for you.

Rasa Core is a dialogue management solution that builds a probabilistic model to decide which actions to perform based on the previous user inputs.

For example, if you give a sentence like:

"Hello, I'm well here in London, how are you?"

Rasa NLU parses it into an intent and entities:

{
  "intent": "greet",
  "entities": {
    "location": "London"
  }
}

With RASA, you can build your own chatbot in minutes. It’s easy to understand and configure. So let’s get started.

RASA INSTALLATION:

Before installing Rasa, it is recommended to create a virtual environment so that any installation or changes you make won't affect your system-wide packages. We will look at the installation process only for the Linux platform.

Installing Rasa:

  1. To install Rasa, type pip install rasa
  2. Install spaCy:

pip install spacy 

python -m spacy download en_core_web_sm

After following the above steps, create a new project using "rasa init --no-prompt". After this, you can type "rasa shell" to check that everything is fine and your bot is working.

Command: rasa shell

You should see something like this after running it:

If you see the bot's greeting, your setup is done and you can proceed further.

Building a chatbot 

Let's build a chatbot using Rasa now.

To build a chatbot, we must be familiar with some terminologies and files.

The first file we will look at is nlu.md. The nlu.md file contains the list of intents defined for the bot to understand whenever a user types in a message.

The data directory contains the nlu.md file. If you installed Rasa in your home directory, the data directory will be there along with your environment directory. Change into the data directory and view the contents of the file using cat nlu.md.

You will see something like this:

You can see a list of intents defined. These are the intents that are going to be identified by your bot when the user types in a message similar to those defined in the list.

Let's define a new intent, cricket, similar to the previously defined intents.

## intent:cricket

- You know cricket?

- Do you like cricket?

- you play cricket?

So next time you type something related to cricket, your bot will identify your intent as cricket and respond accordingly.

Let's define an action for our intent, i.e. cricket.

First of all, we are going to make changes in the stories.md file. This file acts as training data for the Rasa bot so that it can predict your next intent; it teaches your assistant how to respond to your messages. This is called dialogue management, and it is handled by your Core model.

A story is a real conversation between a user and an assistant.

The stories.md file looks something like this 

It contains a list of conversation paths that the user may follow.
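For reference, a typical stories.md in Rasa 1.x Markdown format looks roughly like this; the story names and intents below are illustrative:

```md
## happy path
* greet
  - utter_greet
* mood_great
  - utter_happy

## sad path
* greet
  - utter_greet
* mood_unhappy
  - utter_cheer_up
```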

We will insert our cricket intent inside some of the stories.

* cricket

  - utter_cricket_res

The above lines describe how to insert an intent. The action for the intent is prefixed with '-'; here the action is an utterance, which is similar to printing something. We will also see how to define custom actions, and how to call an API from them, further on.

Now the response that our bot will give when it encounters the cricket intent is defined as "utter_cricket_res". Let's define it so that our bot can respond to our intent.

To define the response, we will go to the domain.yml file. The domain contains everything your assistant needs to know: how to respond to your intents, what to respond, what to store, etc.

You can see that it contains a list of intents, responses, actions, etc.

So we will update the intent list by adding our new intent, cricket. To do so, just add cricket inside the intent list. Now, to define the response to the cricket intent, we will add the following lines to the responses list.

utter_cricket_res:

  - text: "Yes I like cricket."

Now we are done. To reflect the changes, we must train our bot again. To train it, run the command rasa train.

Once the training is finished, you can test your assistant and check whether it responds to the intent “cricket” or not.

We can see that it can now successfully identify our new intent, cricket.


Performing custom actions:

We just defined a response type as utter to our intent cricket, but what if we have to perform some actions like retrieving some data from a server or any 3rd party? 

In that case, we have to configure our actions.py file.

The first operation we are going to perform is a simple hello world action.

Let's first define a hello_world intent in our nlu.md file:

## intent:hello_world

- Say hello world

- print hello world

- utter hello world for me

Make the corresponding changes in the stories.md file, as described above for the cricket intent, but instead of an utterance, define an action this time.

* hello_world

  - action_hello_world

The changes in the domain.yml file are again different. Instead of defining an utter response, we will define an action. To do so, open your domain.yml file, create a section named actions, and inside it define the action action_hello_world.

actions:

- action_hello_world

Also add the hello_world intent to the intent list in the domain.yml file.

Now the last thing we need to do is configure our actions.py file, which should contain the code below. If it's already the same, just uncomment all the lines.

from typing import Any, Text, Dict, List

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionHelloWorld(Action):

    def name(self) -> Text:
        return "action_hello_world"

    def run(self, dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:

        dispatcher.utter_message(text="Hello World!")

        return []

Now save the file. Before running, we need to specify an endpoint. To configure that, go to the endpoints.yml file and add the following lines there:

action_endpoint:

  url: "http://localhost:5055/webhook"

Now we are almost done; we just need to start our action server. To do so, open another terminal and type in the following command:

rasa run actions

It will start the action server for us.

Now train Rasa again by running "rasa train". Once the training is done, you can check your bot using the rasa shell.

Performing custom actions using an external API

So far, so good. We were able to create our own intents and performed some actions on them. But what if the task requires fetching some info from some external server?

Well in that case we have to call an API to perform our task. But how do we do so?

We can do that in our actions.py file. Let's change actions.py to call an API instead of printing hello world. I used cricapi to fetch the next match date. The API is really easy to use. Just replace the previous actions.py code with the code below.

API_URL = "https://cricapi.com/api/"

API_KEY = ""


class ActionHelloWorld(Action):

    def name(self):
        return "action_hello_world"

    def run(self, dispatcher, tracker, domain):
        # code to perform custom action
        return []

We need to add the API key to the code which we will get after signing up for cricapi.
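Until you fill in the real call, the commented-out body of run() could be sketched as below. The matches endpoint name and the response shape (a "matches" list with "date" fields) are assumptions on my part, so check cricapi's documentation before relying on them:

```python
API_URL = "https://cricapi.com/api/"

def next_match_date(payload):
    """Pull the date of the first match out of a cricapi-style response.

    The "matches"/"date" layout here is an assumed shape, not a guarantee.
    """
    matches = payload.get("matches") or []
    if not matches:
        return "No upcoming matches found."
    return str(matches[0].get("date", "unknown"))

def fetch_next_match_date(api_key):
    import requests  # imported here so the parsing helper stays dependency-free
    # Hypothetical endpoint; cricapi's real routes may differ.
    response = requests.get(API_URL + "matches", params={"apikey": api_key})
    return next_match_date(response.json())
```

Inside run(), you would then call dispatcher.utter_message(text=fetch_next_match_date(API_KEY)) and return [].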

We are done now.

We need to restart the server again with the following command:

rasa run --endpoints endpoints.yml --enable-api actions

where --enable-api tells the server to expose the HTTP API for us.

Now run rasa shell to check whether it works or not.

Cool! It works. We just created our own chatbot.

Integrating RASA with Question Answer Model

Our next task is to build a chatbot capable of answering questions in a certain domain (or multiple domains), through a Q&A model trained on SQuAD2.0, when given some information about that domain. To get answers when we pose questions to the bot, we took the help of the question-answering model BERT. BERT, used as a question-answering model, gives you an answer when you ask a question and provide information related to it.

For example, if you ask "Who is the CM of Delhi?", you have to provide BERT a paragraph that contains information about the CM of Delhi.

But how do we do it ?

The idea is simple. We built an intent around the information we are going to provide to BERT. The intent had a good number of questions that the user could possibly ask, or questions that could be formed from the information provided to BERT. We then defined a class for it in our actions.py file. Whenever the intent for the question-answering model matches, the corresponding class is called. The class contains the code that connects it with BERT: it passes the question and the information paragraph, and brings back the answer.

A GPU is preferred for running the BERT model. For now, we ran it on Google Colab.

Installation:

We need to install Rasa Core, Rasa NLU and spaCy for this task.

# In your environment run:

!{python} -m pip install -U rasa_core==0.9.6 rasa_nlu[spacy]==0.12.3;

# as well as install a language model:

!{python} -m spacy download en_core_web_md

!{python} -m spacy link en_core_web_md en –force;

We will define our stories.md, domain.yml, config.yml and nlu.md files in the same way we did the first time, writing each into the corresponding file name. For example:

%store stories_md > stories.md

This will write everything in the stories_md variable into the stories.md file.

Now the next step is to write the classes for question answering in our actions.py file.

The class in actions.py just calls the BERT function, which returns the answer. To get the latest question the user typed in, we use tracker.latest_message.text, which returns the latest query.

As the answer comes back, we use the dispatcher to print it on the chatbot screen:

dispatcher.utter_message(answer)
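Putting these pieces together, the question-answering action could be sketched as below. The Rasa base class is omitted so the control flow stands on its own, answer_with_bert is a hypothetical stand-in for the real BERT SQuAD2.0 inference call, and the context paragraph is purely illustrative:

```python
CONTEXT_PARAGRAPH = "Arvind Kejriwal is the Chief Minister of Delhi."

def answer_with_bert(question, context):
    # Placeholder for the real BERT inference; the actual model would
    # extract the answer span from `context` for the given `question`.
    if "chief minister" in question.lower():
        return "Arvind Kejriwal"
    return "I don't know."

class ActionAnswerQuestion:  # in actions.py this would subclass the Rasa Action class
    def name(self):
        return "action_answer_question"

    def run(self, dispatcher, tracker, domain):
        # tracker.latest_message.text holds the latest query typed by the user
        question = tracker.latest_message.text
        answer = answer_with_bert(question, CONTEXT_PARAGRAPH)
        dispatcher.utter_message(answer)  # print the answer on the chat screen
        return []
```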

After you are done with the above steps, you need to train your Rasa model.

Our training steps would look like this:

-> Training the Rasa NLU

-> Training the Rasa Core

Once everything is finished, we are ready to test our model.

It works again! We just built a chatbot capable of answering questions related to a specific domain.

This is the end of the conversational chatbot part 1. Part 2 will focus on scaling to multiple domains, handling follow-up questions, and making the chatbot more conversational with more human-like conversations.

