In many of our projects, our customers come to us with a common request: they want to build a chatbot that can answer any relevant question immediately, without human intervention. Sounds like a cool project, right?
Unfortunately, such a project is often riddled with many unforeseen traps and issues. Since the required technology to build such a powerful chatbot does not really exist yet, developers often have to cut corners to make a dumb chatbot appear smart. But maybe there is a better solution. Maybe we need to get our expectations straight. Maybe it is smarter to build dumb chatbots.
What Users Expect From Chatbots
Let’s first start with two quick definitions:
Dumb chatbot: a simple, dedicated chatbot with one specific workflow for one specific problem. One example is a bot for handling customer service surveys.

Smart chatbot: a more sophisticated chatbot with the ability to handle multiple problems. These chatbots can be considered more of an intelligent personal assistant than a chatbot. Examples include smart assistants such as Siri, Alexa, or Google Assistant.

Most of the time, users’ expectations for chatbots gather around these two extremes. For dumb chatbots, users usually have limited expectations. In these cases, the chatbot is often just another method for filling out a form.
However, for smart chatbots, an increase in functionality comes with an increase in users’ expectations. While there may seem to be a large divide between dumb and smart chatbots, it is very easy for a dumb chatbot to accidentally transition into smart chatbot territory, despite never being built or intended for that purpose.
Increasing Functionality Comes With Increased User Expectations
To illustrate this, let us use an example. Imagine you want to develop a chatbot that handles the room booking process for users within a certain ecosystem. This chatbot follows only one workflow to solve one specific problem, meaning it is a dumb chatbot. Using existing frameworks, you can quickly put together a working solution and roll it out to users.
Then, sometime in the future, you want to implement a new feature, e.g. a function that would connect different people within the ecosystem if they have any problems. Having already implemented a chatbot which users have to interact with every day, you decide to just add this new workflow to the existing chatbot. Suddenly, your chatbot has gone from being the “room booking chatbot” to being the “ecosystem chatbot.” With this small change, the expectations of your user base will shift a lot, as your chatbot is now expected to deal with tasks related to the whole ecosystem, which is a much wider space than just room booking.
Since you are already connecting people, you think it may be helpful to have a feature that connects similar people automatically through a recommender system (much like the contact suggestions on LinkedIn). Another feature, another workflow, and now your chatbot has gone from being reactive (only reacting to a user’s messages) to being proactive (able to start conversations with the user). When a chatbot starts a conversation with an open-ended message like “Hey, I found this person you should connect with” or “Hi, I am your personal assistant”, the user has no indication of how they are expected to respond. From the user’s perspective, they do not know what functionality to expect and may respond in an unexpected way.

While you are at it, maybe it is time to add some basic chit-chat functionality, like talking about the weather, news, or traffic? Adding such innocuous features will again change the expectations of your users. You have now gone from having a chatbot which performs specific tasks to one which is expected to be able to hold basic conversations.

As you can see, after just a few small changes, we have gone from a dumb chatbot which can solve one problem very well to a “smart chatbot” which is expected to do a lot more than was planned for in the original solution. While from a technical standpoint it would be doable to implement all of these features in just one chatbot, in this scenario, implementing one or more dumb chatbots might be a better option. This is due to two problems.
The Expectation Curve Grows Disproportionately
The first is the problem of user expectations. When the functionality of a chatbot increases, users’ expectations for the chatbot increase at a much faster rate. Once people notice that your chatbot can deal with a few different use cases very well, they tend to assume that it can do a lot more. This problem can be illustrated with our room booking example. Without much previous experience in the field, one might expect the graph of functionality versus user expectations to look something like the following.
This graph would indicate that as we add new functionality to the original room booking chatbot, users’ expectations of what the chatbot can do increase proportionally. However, as we add new functionality, the perception of the room booking chatbot evolves into that of a chatbot for the whole ecosystem, one which can start and hold conversations. The expectation curve for the chatbot then looks a bit more like the following.
For each new use case that a chatbot supports, the users’ perception of what the chatbot can actually do ends up disconnecting from reality. This makes it difficult to keep users satisfied when supporting a chatbot which can do multiple different tasks.
As an interesting anecdote, in the US, so-called “robocalls” have been popular for a while. These are calls by chatbots which are meant to automate certain processes such as solicitation or customer service. However, in recent years, they have started trying to act and sound like humans. Although they can mimic human speech quite well, they struggle with answering basic questions like “Who is the president of the US?” because they act based upon predefined paths for different inputs. As soon as the user’s input is unexpected, they do not know what to do, ruining the experience for the user (who is likely to hang up) and preventing the business from reaching its intended goal.
Compared to their predecessors, these “fake human” robocallers try to convince the user that they are also a real person. However, doing this is counterproductive: it raises the user’s expectations, making them more likely to be dissatisfied.
Finding the Happy Path
The second is the problem of unforeseen unhappy paths. When building a chatbot, developers try to come up with all the possible ways a user might interact with the chatbot. The ones which follow the expected workflow defined by the chatbot are called happy paths. It is also possible that a user goes down an unintended path, known as an unhappy path. When allowing the user to interact with the chatbot through a free text input field, the possible number of unhappy paths is practically infinite, making it impossible for the developer to foresee all possible options. This can lead to many unintended consequences and unsatisfied users. Due to these two problems, we need to ask ourselves if the increase in functionality makes sense.
NLP, NLU Engines and the Tech Behind Chatbots
One reason why expectations for chatbots can get so high is because users oftentimes do not know how chatbots are built. Some think that they simply consist of “if-else” statements, whereas others think they are built out of advanced learning AI methods which can react to any situation. In reality, most chatbots lie somewhere in the middle. While there are some existing implementations which are just glorified “if-else” statements, there are many existing frameworks and tools for building chatbots which are more advanced.
With frameworks like Rasa, you can build a chatbot which utilizes modern advances in the field of natural language processing (NLP). More specifically, the Rasa framework lets you build chatbots that consist of two parts: a natural language understanding (NLU) engine, and a dialog engine (called Rasa Core). The NLU engine takes the input from the user (utterance) and tries to extract the user’s intention (intent) and relevant information/data (entities) through the application of machine learning. The information extracted from the user’s utterance is then passed to the dialog engine, which is used to predict what actions the chatbot should take based on the user’s input and the context of the current conversation. Simply put, the NLU engine recognizes what the user wants and the Core engine determines the chatbot’s reaction.
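The NLU/dialog split described above can be sketched in a few lines of plain Python. This is a toy illustration of the two roles, not Rasa’s actual API: the keyword matching, intent names, and action names are all invented for the example, whereas a real NLU engine would use trained machine learning models instead of hand-written rules.

```python
import re

def nlu_parse(utterance):
    """Toy NLU engine: extract an intent and entities from a raw utterance."""
    text = utterance.lower()
    intent = "unknown"
    entities = {}
    if "book" in text and "room" in text:
        intent = "book_room"
    elif "hello" in text or "hi" in text:
        intent = "greet"
    # Very naive time extraction, e.g. "at 14:00" or "at 8"
    match = re.search(r"\bat ((\d{1,2})(?::\d{2})?)\b", text)
    if match:
        entities["time"] = match.group(1)
    return {"intent": intent, "entities": entities}

def dialog_next_action(parsed, context):
    """Toy dialog engine: pick the next action from parsed input and context."""
    if parsed["intent"] == "greet":
        return "utter_greet"
    if parsed["intent"] == "book_room":
        if "time" not in parsed["entities"]:
            return "ask_for_time"      # a required slot is still missing
        return "confirm_booking"
    return "utter_fallback"            # unknown input: fall back gracefully

parsed = nlu_parse("I want to book a room at 14:00")
action = dialog_next_action(parsed, context={})
```

In a real Rasa project, the first function’s job is done by a trained NLU pipeline and the second by trained dialog policies, but the division of labor is the same: recognize what the user wants, then decide how to react.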
Limitations of Current Chatbot Frameworks
Unfortunately, however powerful these frameworks may be, they also have their limitations. One main issue with such systems is their reliance on machine learning and pre-programmed responses.
For instance, sometimes the NLU engine predicts the wrong intent for a given user utterance. This happens especially when there is not enough training data or when the utterance is not grammatically correct. Furthermore, when trying to extract entities such as date and time, the user may use a format that is either unexpected (e.g. using the US Month/Day/Year format when a chatbot is targeting non-US users) or ambiguous (simply saying “at 8” for a time, which could mean “at 8 am” or “at 8 pm”). In such cases, the chatbot may get stuck, and the pre-programmed responses may no longer be helpful. If you have ever been trapped in a loop while talking to a chatbot, you know exactly what I am talking about. It is frustrating and harms the user experience.

In the room booking example from above, we had to figure out a way to control the input format so that the chatbot knows exactly what the user wants and can react accordingly. This helped us avoid issues stemming from incorrect intent prediction and unexpected or ambiguous entity formats. To some extent, it is always possible to control the user’s input so that it fits a certain format. Instead of having an input text field where the user can write whatever they want, you can use a combination of buttons, entity pickers (e.g. date/time pickers), and other chat elements to ensure the user’s input is understood correctly. In the case of the room booking chatbot, we solved the entity extraction issue by implementing date, time, and duration pickers which automatically passed the user input to the chatbot in the desired format. But sometimes, it is not so easy to solve issues surrounding chatbots.
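To see why free-text time input is tricky, consider a minimal, hypothetical time parser. The function below is an invented sketch, not part of any chatbot framework; its point is that an input like “at 8” is genuinely ambiguous, so the chatbot must either ask back or, better, use a time picker that delivers an unambiguous, normalized value.

```python
import re

def parse_time(text):
    """Return (hour, minute) in 24h format, or None when the input is
    ambiguous or unparseable and the chatbot should ask again."""
    m = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\b", text.lower())
    if not m:
        return None
    hour = int(m.group(1))
    minute = int(m.group(2) or 0)
    meridiem = m.group(3)
    if meridiem == "pm" and hour < 12:
        hour += 12
    elif meridiem is None and hour <= 12:
        return None   # "at 8" could mean 08:00 or 20:00 -- ambiguous
    return (hour, minute)
```

A time picker sidesteps all of this by only ever sending values like “14:30”, which this parser (and a real entity extractor) can handle without guessing.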
Happy Paths Keep the User and the Developer Happy
As mentioned previously, a happy path is a path that is expected and defined by the chatbot to complete a specific workflow. Simply put, a chatbot is following a happy path when the user engages with the chatbot, says all the right things, and in the end, the desired outcome is achieved. To visualize a happy path, take the following conversation as an example.
There are two main ways the chatbot can leave the happy path and end up on what is called an unhappy path. The first stems from issues on the software side of the chatbot. This could include errors or exceptions thrown by the chatbot itself or by any external systems it is integrated with. In these cases, the developer needs to make sure that they catch all possible issues before they reach the end user. One possible way to do this is to always have a fallback action for any step where an error can occur.
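The “always have a fallback action” idea can be sketched as a wrapper around every chatbot action: if the action or an external system fails, the user sees a graceful message instead of a crash. The function and action names below are illustrative assumptions, not a real framework API.

```python
def run_action(action, *args):
    """Run a chatbot action; fall back to a safe message on any error."""
    try:
        return action(*args)
    except Exception:
        # In a real system: log the error for developers, then reset the flow.
        return "Sorry, something went wrong on our side. Let's start over."

def book_room(room_id):
    """Example action that may fail, e.g. when an external booking API errors."""
    if room_id not in {"A", "B"}:
        raise LookupError(f"unknown room {room_id}")
    return f"Room {room_id} is booked!"
```

With this pattern, a software-side unhappy path still ends in a controlled message, so the conversation can recover instead of silently breaking.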
The second way the chatbot could end up on an unhappy path stems from unexpected user input. As explained previously, modern chatbot frameworks have their limitations which make them vulnerable to unexpected user input. The following is an example of an unhappy path caused by a user.
Unhappy paths caused by user input are a lot more difficult to handle, requiring much more effort by the developers to avoid. As you add new workflows and functionality to your chatbot, the complexity of your system will increase, making it more difficult for your developers to maintain and grow. More importantly, the likelihood of a user ending up on an unhappy path will increase, leading to unhappy users and unhappy developers.
So the question is, how do we keep users from straying from the happy paths?
Focus On User Experience
To answer this question, we need to first clarify one of the main goals of chatbots, which is to enhance user experience (UX) in a given use case. No one should be building a chatbot which makes it more difficult for the user to accomplish tasks which could be easily done through a traditional user interface. Looking at our example from above, enhancing UX means that tasks such as room booking, networking, or getting expert advice should be easier via a chatbot than via traditional methods. Does a chatbot that can book rooms via simple text input enhance UX? If implemented correctly, yes. But does a chatbot that misunderstands a user’s input or easily ends up in unhappy paths enhance UX? I doubt it.
The easiest and most effective ways to ensure that users stay on happy paths are as follows:

1. Limit how the user interacts with the chatbot. If the chatbot expects the user to provide a specific entity (date, time, city, etc.) in their response to a question, there should be some way to enforce that the user provides an expected input.

2. Have ways to handle users wanting to jump from one workflow to another. It becomes a nightmare if the user has the freedom to “control” the conversation flow. The chatbot should be the one controlling the conversation, and if the user strays from the expected flow, handle this through fallback actions/messages.

3. Keep it simple. Limit the number of workflows in the chatbot. The more workflows you have, the higher the expectations of the user and the more likely the user will end up trying to go down an unhappy path.
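The first point, limiting how the user interacts, often boils down to offering buttons whose payloads are already in the exact format the workflow expects, so free text never enters the picture. The message structure and the `/set_duration` payload below are illustrative assumptions for this sketch (the payload style is loosely modeled on how some frameworks encode button intents).

```python
def ask_duration():
    """Build a question whose answers are constrained to valid payloads."""
    return {
        "text": "How long do you need the room?",
        "buttons": [
            {"title": "30 minutes", "payload": '/set_duration{"minutes": 30}'},
            {"title": "1 hour",     "payload": '/set_duration{"minutes": 60}'},
            {"title": "2 hours",    "payload": '/set_duration{"minutes": 120}'},
        ],
    }
```

Whichever button the user taps, the chatbot receives a well-formed duration, so there is no entity extraction to get wrong and no ambiguous input to clarify.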
Is It Smart to Be Dumb?
So far, we have seen that developing a multi-functional chatbot is difficult due to user expectations and unhappy paths. This, in turn, leads us to a crucial question: if it is difficult to have a well implemented smart chatbot, is it smarter to build a dumb chatbot?
As explained in the previous section, UX is king. If having a smart chatbot does not improve UX, a dumb chatbot with one dedicated, specific function is likely better. It will be easier to build and maintain, and the user’s expectations for the chatbot will be much lower and more in line with the supported functionality of the system. Only build a smart chatbot if it produces a better UX than a dumb chatbot! Unfortunately, with the technologies we have at hand today, it is not really possible to build a truly smart chatbot that can handle any possible request. Such complex chatbots would likely require breakthroughs in the field of artificial general intelligence, something that experts say is possibly decades away. With our current technology, building “simpler” smart chatbots with multiple functions is often not worth it, because the potential error scenarios outweigh the potential success. If you get it wrong, you might end up with very unhappy users.
Align Expectations With Experience for a Better Chatbot
We need to align our expectations with the one thing that really matters, user experience, and acknowledge that risking a bad user experience is not worth it. The technology to build the perfect “intelligent personal assistant” simply does not exist yet, and with every additional function, the risk of a bad user experience increases.
Looking at state-of-the-art personal assistants like Siri, Alexa, and Google Assistant makes this point abundantly clear: even after years of development with almost endless data, they still get so much wrong. Do yourself and your developers a favor: instead of trying to build one chatbot that can handle all your requests, build multiple ones with dedicated, stand-alone workflows, just like we did in this project. Not only will it be easier from an implementation side, you will also be able to manage your users’ expectations for your chatbot. They will know that the chatbot can only do certain tasks and they will expect nothing else. As a result, you will get the one thing that really matters: satisfied users. Want satisfied users as well? Talk to our chatbot experts now to develop a solution that fits your needs.