Or: when chatbots do more harm than good
For the customer service industry, chatbots offer a new opportunity: improving customer service and customer satisfaction with the latest machine learning and deep learning technology, used to build sophisticated chatbot representatives.
It is commonly thought that chatbots are the ultimate answer to customer service cost reduction. But, as many companies have learned the hard way, chatbot technology is not there yet.
As a chatbot framework for developers, we have talked with many companies and explored many customer service solutions. In this series of posts, we would like to share some of our experience and knowledge. We will explore different example solutions for a better understanding of how chatbot technology can be used.
Artificial intelligence in chatbots
Before we start to explore solutions, we need to understand what a chatbot is and how it works. In essence, chatbot platforms are still rule-based or flow-based logic engines that execute commands. On one side, an input sentence is entered by the user. Based on that sentence, the chatbot provides a pre-programmed answer. The AI part of a chatbot, which people view as the most sophisticated, is called NLU, which stands for Natural Language Understanding. The NLU is a machine learning program that needs to understand a sentence written by the user and reduce it to a simpler classification that the rule engine can use. For example: “I want to fly to Paris next week” will be converted into an intent – “Book-a-flight” – one entity, also called a slot – “Date” – and another entity – “Destination”. So the rule engine will receive the following data structure:
intent: “Book-a-flight”
entities:
  – entity: “Date”
    value: “Aug 7 2019”
  – entity: “Destination”
    value: “Paris”
In such a case, the chatbot should ask for the missing information – the origin and a more specific date – in order to provide flight options. Only then could it – and should it – search for available flights.
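To make this concrete, here is a minimal sketch of how a rule engine might check an NLU result for missing slots before acting. The structure mirrors the intent/entity example above; the function name and the exact slot list are our own illustrative assumptions, not any particular framework's API.

```python
# Which slots a rule engine would require before it may act on an intent.
# (Hypothetical configuration for the flight-booking example.)
REQUIRED_SLOTS = {"Book-a-flight": ["Date", "Destination", "Origin"]}

def missing_slots(nlu_result):
    """Return the required slots that the NLU did not extract."""
    required = REQUIRED_SLOTS.get(nlu_result["intent"], [])
    found = {e["entity"] for e in nlu_result["entities"]}
    return [slot for slot in required if slot not in found]

# The NLU output for "I want to fly to Paris next week":
nlu_result = {
    "intent": "Book-a-flight",
    "entities": [
        {"entity": "Date", "value": "Aug 7 2019"},
        {"entity": "Destination", "value": "Paris"},
    ],
}

# The bot should ask about whatever is still missing before searching.
print(missing_slots(nlu_result))  # ['Origin']
```

This is exactly the "ask for the missing information" step: the rule engine loops, prompting for each missing slot, until the list comes back empty.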
How does one go about building a chatbot? In the diagram below, you can see the general architecture of a chatbot system – in fact, the structure of any AI system. A more elaborate discussion of AI system architecture can be found in our conversational AI framework article.
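In code, the same architecture boils down to a short pipeline: user text goes into the NLU, the NLU's classification goes into the rule/flow engine, and the engine picks a response. The sketch below is deliberately simplified – the NLU stub just keys off words, where a real one is a trained classifier – and all names are our own stand-ins.

```python
def nlu(text):
    # Stand-in for a trained NLU model: classify the sentence into an intent.
    if "fly" in text.lower():
        return {"intent": "Book-a-flight"}
    return {"intent": "unknown"}

# The rule engine's pre-programmed answers, keyed by intent.
RESPONSES = {
    "Book-a-flight": "Where would you like to fly, and when?",
    "unknown": "Sorry, I didn't understand that.",
}

def dialogue_manager(nlu_result):
    # The rule/flow engine: map the classified intent to an action or answer.
    return RESPONSES[nlu_result["intent"]]

def chatbot(text):
    # The full pipeline: text -> NLU -> rule engine -> response.
    return dialogue_manager(nlu(text))

print(chatbot("I want to fly to Paris"))
```

Every chatbot platform adds layers on top of this (context, integrations, analytics), but the text-to-intent-to-rule spine stays the same.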
So far, we have talked about a fairly simple example. And it seems pretty easy, right? Indeed. But – and that’s a huge, huge “but” – it becomes much more complex if a user says “I want to fly next week to Paris with my wife”. In such a case, a good chatbot is expected to understand that two tickets are needed. What about “I want to fly with my dog to Paris”? Or: “Can I fly with my insulin pump?”
These examples show how the number of scenarios a chatbot needs to handle grows very, very rapidly. And that happens for three main reasons:
When do chatbots get dumb?
First and foremost, today’s chatbots – and today’s AI in general – do not really understand the world. The technology is not there yet. How would a bot know that a pet is different from a partner? That medicine is not a passenger? Today’s AI is statistical. It has no real knowledge of physical sizes, social relationships, or any of the things we know intuitively.
If you told a bot, for example, “My coat is too big for this suitcase, so I need to fold it”, it would never know (with today’s AI) which one should be folded – the suitcase or the coat. Or “I want to buy two burgers and one coke. Actually, change it to a milkshake.” A human knows a coke is a drink and, most likely, the thing to be changed to a milkshake; chatbots do not know anything like that. One last example – there could be so many – “My laptop doesn’t charge when it is in sleep mode”. The NLU doesn’t know that a laptop can sleep.
Second, even if you train the NLU engine to recognize all those cases, a customer will always surprise you with a new question. After all, she came to you with a question a simple web search could not answer!
Third, and maybe the biggest problem of them all: a chatbot does not really know when it doesn’t understand. Yes, the NLU returns a score for every intent and entity – but where do we draw the line? How do we ask for clarification?
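The usual (imperfect) answer is a confidence threshold: if the NLU's top score falls below some cutoff, the bot falls back to a clarification question instead of acting. The sketch below illustrates the mechanism; the 0.7 cutoff and the function names are arbitrary choices of ours, and picking a good threshold in practice is exactly the hard part.

```python
# Hypothetical cutoff: below this, the bot asks instead of acting.
CONFIDENCE_THRESHOLD = 0.7

def handle(nlu_result):
    """Act on a confident classification; otherwise ask for clarification."""
    if nlu_result["confidence"] < CONFIDENCE_THRESHOLD:
        return "Sorry, I'm not sure I understood. Could you rephrase that?"
    return "Proceeding with intent: " + nlu_result["intent"]

print(handle({"intent": "Book-a-flight", "confidence": 0.93}))
print(handle({"intent": "Book-a-flight", "confidence": 0.41}))
```

Set the threshold too low and the bot confidently does the wrong thing; set it too high and it nags every user for clarification. There is no value that fixes both.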
Why is all this bad for business?
All of this is what makes the chatbot example above so hard to build and maintain. The developer of a customer service chatbot needs to train the NLU for every new option AND integrate it into the chatbot’s rule engine. So the amount of work – and hence the cost – does not justify the benefits.
To conclude: even though chatbots came with a great promise to improve customer satisfaction, they can handle only very simple and limited tasks. The technology is still in its infancy, and it will take years, if not decades, until it becomes a good replacement for humans.
How do we create a successful chatbot, then?
Chatbots can be very, very helpful in customer service, and that will be the topic of the next chapters of this series. TL;DR? Goals and make-believe. Constrain your chatbot to a narrow topic and limit the dialogue to that goal. Then, make the user believe she’s having a conversation.
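Here is one way "constrain the topic, limit the dialogue to a goal" can look in code: a fixed script that only ever walks toward one outcome (booking a table, in this made-up example), so the scenario space stays small enough to actually cover. The flow, slot names, and prompts are all illustrative assumptions.

```python
# A single, narrow goal expressed as an ordered script of (slot, question).
FLOW = [
    ("party_size", "For how many people?"),
    ("date", "What day would you like?"),
    ("time", "What time?"),
]

def next_prompt(collected):
    """Return the next question to ask, or a confirmation once the goal is met."""
    for slot, prompt in FLOW:
        if slot not in collected:
            return prompt
    return "Great, your table is booked!"

print(next_prompt({}))
print(next_prompt({"party_size": 2, "date": "Friday", "time": "7pm"}))
```

Because the bot only asks its own questions and fills its own slots, it never has to understand coats, cokes, or sleeping laptops – and that is precisely why this shape of chatbot works today.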