Chatbot Fails – A list of what not to do!

Sometimes looking at mistakes is as valuable as looking at good examples. It opens up our understanding of the types of situations we need a strategy for. The examples below were found lying around on the internet, and only serve to illustrate some particularly common errors.

Locked user

Being able to understand when the user asks for help or wants to exit the current conversation is critical to maintaining a good conversation. Failing to acknowledge either of these will result in frustration and, ultimately, in the end of the interaction.
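One common remedy is a global "escape hatch" that is checked before any step-specific logic. The sketch below is only an illustration (the phrases, responses, and function names are made up): help and exit requests are intercepted first, so the user is never locked into the current flow.

```python
# Hypothetical global escape hatch: help/exit are handled before the current
# dialog step ever sees the message, so the user always has a way out.
HELP_PHRASES = {"help", "agent", "talk to a human"}
EXIT_PHRASES = {"exit", "quit", "cancel", "start over"}

def handle_message(text, current_step):
    normalized = text.strip().lower()
    if normalized in HELP_PHRASES:
        return "Sure - here is what I can help with: orders, deliveries, or talking to an agent."
    if normalized in EXIT_PHRASES:
        return "No problem, I have cancelled that. Is there anything else I can do for you?"
    # Only when neither global intent matches do we continue the current flow.
    return current_step(text)
```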

Failing at Error Handling

There are many ways one can fail at error handling. The one below is probably the most unusual I've seen 🙂

The funny part is that this error message seems to have been custom-made, perhaps as an umbrella message for all timeouts or server-side errors.

Ideally, the chatbot should handle a backend technical error gracefully: inform the user that it was unable to complete the task, and offer alternatives for how to proceed.

(from Reddit)
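A minimal sketch of that graceful fallback might look like the snippet below (the endpoint, function name, and wording are hypothetical): the backend call is wrapped so that a timeout or server error turns into an apology plus concrete next steps, rather than a raw error dump.

```python
import requests

def check_order_status(order_id):
    """Hypothetical backend call with a graceful fallback on failure."""
    try:
        response = requests.get(f"https://api.example.com/orders/{order_id}", timeout=5)
        response.raise_for_status()
        return f"Your order is currently: {response.json()['status']}"
    except requests.RequestException:
        # Admit the failure and offer alternatives instead of exposing the error.
        return ("Sorry, I couldn't retrieve your order status right now. "
                "You can try again in a few minutes, or I can connect you to an agent.")
```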

No repair paths

Handling happy-path conversations is not that hard; the real challenge is doing the same for unhappy paths. This interaction has several problems. First, it prompts the user with a very blunt message to provide a valid value. Then it never explains what a valid value looks like, and it keeps repeating the same error message. Lastly, it ends the conversation just as bluntly, with a machine-like message and no repair path.
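A simple repair path could look like the sketch below (the prompts and the helper functions read_user_input and parse_date are hypothetical): each retry gives the user more detail about what a valid value looks like, and after a couple of attempts the bot escalates instead of hanging up on the user.

```python
REPAIR_PROMPTS = [
    "Sorry, I didn't get that. Which date works for you? For example: 12/03/2025.",
    "I still couldn't read that date. Please use the format DD/MM/YYYY, e.g. 12/03/2025.",
]

def ask_for_date(read_user_input, parse_date):
    """Ask for a date, giving progressively more helpful hints on each retry."""
    for prompt in ["Which date works for you?"] + REPAIR_PROMPTS:
        value = parse_date(read_user_input(prompt))
        if value is not None:
            return value
    # Retries exhausted: escalate rather than ending the conversation abruptly.
    return None  # the caller can now hand off to a human agent or another flow
```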

Poorly trained or missing NLP

The example below could be an extremely poorly trained NLP model for the Greeting intent, or simply a non-AI chatbot that uses keyword matching to extract intent. Either way, it failed on a simple "Hi".
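The sketch below (keywords made up for illustration) shows why exact keyword matching is so brittle: any greeting that wasn't explicitly listed, such as a plain "Hi", falls straight through to the fallback.

```python
GREETING_KEYWORDS = {"hello", "good morning", "good afternoon", "hey"}

def detect_intent(text):
    """Naive keyword matching: anything outside the list becomes a fallback."""
    if text.strip().lower() in GREETING_KEYWORDS:
        return "greeting"
    return "fallback"

print(detect_intent("Hello"))  # greeting
print(detect_intent("Hi"))     # fallback - the bot never learned this variant
```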

Setting up for failure

This is an example of the importance of conversational design. In this case, the chatbot set itself up for failure.

Do you have any similar examples of conversational AI "fails"? Please share them in the comment section 🙂