So why chatbots?

The term “chatbot” has become quite a buzzword. Many companies are starting to look to this medium as a way to offer additional services to their customers, and others even offer full product suites through chatbot services. Chatbots offer unique opportunities, but also unique challenges in terms of their UX. We’ve been involved in the creation of a chatbot solution for the launch of Liberty’s new short-term insurance offering, which was released in February as an MVP. So firstly, I’d like to take you through the process of how we created our chatbot, before going through the challenges we faced and the things we learned along the way.

And why did Liberty use a chatbot?

  1. Availability – chatbots are available 24/7, unlike call centre agents, meaning customers can engage with us at a time that is convenient to them
  2. Flexibility – chatbots can be built once and deployed onto any chat-based interface that allows integration. This means customers can reach us on a channel that suits them and that they are accustomed to (e.g. Facebook). Customers can also start a process on one channel and finish it on another.
  3. Standardised experience – chatbots provide a consistent experience, unlike call centre agents, who may give users different levels of service depending on their individual training and competency
  4. Efficiency – chatbots can be built to handle standardised requests and queries such as FAQs, freeing up call centre agents for more complex conversations

CHALLENGE

Who is this chatbot?

The biggest component of the chatbot, and of how users interact with it, is the language it uses in conversation. Therefore, defining who the bot is and how it speaks was a crucial part of the process. These were a few of the key questions that needed to be answered.

Firstly, we looked at the following two questions:

In order to determine this, the team at kopano created three different personalities to test with users.

These are some of the “likes” and “dislikes” that were associated with the different chat flows that were created to reflect each of the personalities:

The humorous persona tested the best with users, the familiar persona the second best, while the very straightforward professional persona came in dead last.

However, users wanted a combination of tones. They wanted the conversation to begin professionally, with humour and friendliness folded in to make the journey enjoyable.

The best personality is a biased combination of the three: familiar and friendly comes first, humour is folded in, and all copy is built on the strengths of simple and professional.

 

LEARNING

Chatbots should speak in the first person

In our minds our chatbot referred to itself as “we”, because it represented Liberty as a whole. However, users were very confused and wanted a much more personal interaction with a single entity.

Next, we investigated what our bot should be and look like.


CHALLENGE

Avoiding the “uncanny valley”

From the beginning we wanted to make it very clear that our bot is, indeed, a bot. We never wanted people to feel deceived or tricked into thinking they were talking to a human, because this creates instant frustration when the bot can only respond in certain ways and cannot help with every interaction. As human likeness increases in non-human forms, familiarity grows up to a point, then drops sharply, before rising again once the likeness becomes almost indistinguishable from a real human. That dip is a very eerie space known as the “uncanny valley”, where objects are “human-like” but very creepy, like the example below.


The kopano design studio created various characters, both human and non-human, to test with users and find out who they would buy insurance from.


Surprisingly (or not) this guy was by far the favourite. Who would have thought that people would want to buy insurance from a guy in a suit and tie?

The team made a few more tweaks, rounded some edges, added some colour, flipped him around so he was facing the chat bubbles, and this guy came into being.

Last but not least, we needed to name him.

And yes, he does have a name – a good one at that. However, the name is currently still being trademarked, so we’re keeping it under wraps until later in the year when the full app releases.

So now that we knew who our bot was and what he looked like, we needed to define what the rest of the interface looked like.

 

CHALLENGE

Creating a UI that aids understanding

 

Chatbot UI is tricky, because you need to differentiate between bot messages and user messages, as well as distinguish the input area at the bottom from the messages themselves. You also want a unique look and feel that sets your solution apart from the other chatbots out there.

So to start, we went all-out “different” – which, as it turns out, was not a very popular option.

Then we started on iterations. These are just some of the designs that made it to testing; there are probably about four times more that didn’t make the cut.

Eye tracking helped us to see what colours were helping people differentiate between input and speech bubbles, and which were distracting from the messaging. We also looked at the cognitive workload and engagement that the designs elicited.

 

And now for the big reveal…

In order to experience the Liberty Short-Term chatbot functionality for yourself, I suggest going through the Quick Quote functionality on the app. It is available on iOS and Android.

Now that you understand how we got to where we are, I can take you through the challenges we faced throughout this process, as well as the things we learnt.

CHALLENGE

Our chatbot functions across 3 different platforms (for now)

 

Our chatbot functions across three different platforms: you can start a conversation on one platform and then move over to another, and it will remember where in the process you were and let you continue. When we started this project we did not have the ability to differentiate between the platforms, so we didn’t know whether you were chatting on Facebook or on the app. This was a major consideration, because not all platforms give you the same amount of control over what you display and how you display it – and we had to send the exact same messages and instructions to every platform.

 

On Facebook Messenger you have no control over the UI, and can only choose from a set of preset interaction types. On web we have full control over the UI (within the Liberty brand, of course), but certain functionality – like live licence disc scanning – is impossible. On native we have full control and can use the unique functionality that smartphones give us: telematics for the driving test, as well as licence disc scanning through the camera.
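
To make this concrete, here is a minimal sketch of the kind of channel-aware rendering this implies – one canonical bot message adapted per platform. The names and message shape below are hypothetical, not Liberty’s actual implementation.

```typescript
// Hypothetical sketch, not the actual implementation: one canonical bot
// message is rendered differently depending on the channel it is sent to.
type Channel = "messenger" | "web" | "native";

interface BotMessage {
  text: string;
  choices?: string[];        // explicit options for the guided flow
  requiresCamera?: boolean;  // e.g. the licence disc scan
}

function renderForChannel(msg: BotMessage, channel: Channel) {
  switch (channel) {
    case "messenger":
      // Messenger gives no control over the UI, so fall back to its preset
      // quick replies and skip anything that needs the camera.
      return { text: msg.text, quickReplies: msg.choices ?? [] };
    case "web":
      // Full UI control, but no camera scanning – offer a file upload instead.
      return {
        text: msg.text,
        buttons: msg.choices ?? [],
        upload: msg.requiresCamera ? "photo" : undefined,
      };
    case "native":
      // Full control plus device capabilities such as the camera.
      return {
        text: msg.text,
        buttons: msg.choices ?? [],
        openCamera: msg.requiresCamera === true,
      };
  }
}
```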

 

CHALLENGE

There are no set standards for chatbot UX

There are two common ways that users can interact with a chatbot: freeform text, which requires your chatbot to have some form of Natural Language Understanding (NLU), or a more guided approach where you give users explicit choices to help the process along. We decided to use the more guided approach, as sketched below.
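
To make the guided approach a bit more concrete, each step in the flow can be modelled as a prompt with explicit choices, where every choice points at the next step. The node names and copy below are made up for illustration, not our production flow.

```typescript
// Hypothetical sketch of a guided conversation flow: each prompt offers
// explicit choices, and every choice points to the next node in the flow.
interface FlowNode {
  id: string;
  prompt: string;
  choices: { label: string; next: string }[];
}

const quoteFlow: Record<string, FlowNode> = {
  start: {
    id: "start",
    prompt: "Hi! Would you like a quick quote on car insurance?",
    choices: [
      { label: "Yes, let's do it", next: "vehicle" },
      { label: "Talk to a human", next: "handover" },
    ],
  },
  vehicle: {
    id: "vehicle",
    prompt: "Great. How would you like to add your vehicle?",
    choices: [
      { label: "Scan my licence disc", next: "scanDisc" },
      { label: "Type the details", next: "manualEntry" },
    ],
  },
};
```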

CHALLENGE

Facebook Messenger has a limited set of interactions to choose from

CHALLENGE

Platforms have very different UX (and UI)

 


For this interaction – uploading your licence disc – each of the platforms behaves differently due to its limitations. Facebook Messenger has a persistent input box that can confuse users, web only lends itself to photo uploads or manual input, while native offers the ideal user journey, allowing users to scan the disc directly.

The date input is also different for each platform: Facebook Messenger only allows manual input, web is best suited to a calendar (or manual input – users can choose), and native iOS users are used to this type of scrolling input for dates. This brings us to our next point: having only one app for both iOS and Android.

CHALLENGE

React Native

 

React Native is a framework for app development where you build one app and ship it to both iOS and Android. As I’m sure you all know, Apple and Android devices have very different and unique ways in which users interact with them. They also have completely different guidelines and standards for interactions: Android has Google’s Material Design guidelines, while Apple has its Human Interface Guidelines. So now you’re expecting Apple and Android users to use the exact same interactions.
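
As a simplified illustration of the per-OS branching this forces on a single code base, React Native’s Platform module (Platform.OS and Platform.select are part of its core API) lets you pick different behaviour for each operating system – for example, for the date input mentioned above. The specific values here are illustrative.

```typescript
import { Platform } from "react-native";

// Simplified sketch: pick per-OS behaviour from a single code base.
// The chosen values are illustrative config for whichever date picker
// component the app actually uses.
const dateInputDisplay = Platform.select({
  ios: "spinner",      // iOS users expect the scrolling wheel picker
  android: "calendar", // Android users expect the Material-style dialog
  default: "calendar",
});

const confirmLabel = Platform.OS === "ios" ? "Done" : "OK";
```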

CHALLENGE

People don’t read

And we mitigated this by adding visuals into the mix – because people DO like images.

LEARNING

Use images sparingly – to focus attention

 

CHALLENGE

People were confused by camera orientation

Because the image showed a driver’s licence horizontally, this is how most people were inclined to take the photo or scan the licence disc – sometimes resulting in the barcode being too small to decode.

We hadn’t considered this as a potential issue before, because we assumed users would just turn the phone and take the picture in a different orientation if they were struggling.

LEARNING

Just because you think it’s obvious doesn’t mean it is

 

This is pretty much my favourite gif ever.

LEARNING

There is no such thing as “user error”. If people can’t use it, you’ve failed in your design

 

So to fix this we changed the orientation of the descriptive image, added helper text on the landscape camera view to indicate it should be turned, and added bars to indicate where the barcode should be.

CHALLENGE

How do I change the conversation?

At the bottom of the interactions there is a quick reply menu with options that people can use at any time to change the conversation.

CHALLENGE

What if the bot is simply not getting it right?

Bot errors can happen.

LEARNING

Always allow instant access to a real human

Talking to a real human is invaluable.

Because as much as our little guy tries, he’s just not that smart (yet).

CHALLENGE

Swiping at the bottom of a cellphone screen is really hard

The initial design had very little space at the bottom of the screen where the peeker menu lives. This led to people often swiping up the phone’s own menu accidentally (on iOS). The revised design (after user testing – mostly by our team) allows much more space to swipe accurately.

The iPhone X presents a brand-new problem, however, and partly because of this we are in the process of completely rethinking how to allow people to change the conversation without losing too much space at the bottom of the screen.

CHALLENGE

Devs reeeeeally dislike “cool little interactions” as found on Dribbble

The poor devs on my team had a small heart attack when I showed them this interaction. To their credit, they did actually spend a few days investigating it before asking if we could maybe possibly rather try another route.

CHALLENGE

Should everything be a conversation?

Short answer: no. Some interactions are simply too complicated or wordy and do not lend themselves to chat. We separated our information into items that live in the chat and items that live in the burger menu. Facebook Messenger also features quite a few breakout webview pages – for capturing banking details, for instance.
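
For those breakout pages, Messenger’s button template lets you attach a URL button that opens a webview inside the chat. The sketch below shows roughly what such a payload looks like; the URL is a placeholder and this is not our exact message.

```typescript
// Rough sketch of a Messenger button-template message that breaks out to a
// webview for a form too complex for chat. The URL is a placeholder.
const bankingDetailsMessage = {
  attachment: {
    type: "template",
    payload: {
      template_type: "button",
      text: "Let's capture your banking details securely.",
      buttons: [
        {
          type: "web_url",
          url: "https://example.com/banking-details", // placeholder
          title: "Open secure form",
          webview_height_ratio: "tall",
          messenger_extensions: true, // keeps the webview inside Messenger
        },
      ],
    },
  },
};
```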

CHALLENGE

How do you log in via chat?

CHALLENGE

“Quick” changes to the conversation

Are not quick. These two diagrams show the flows required for the vehicle licence and the driver’s licence steps – just two small parts of the Quick Quote process. As you can see, a “small” change affects many different nodes and can become very complicated very quickly.
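
One way to see why these changes snowball is to treat the flow as a graph of nodes: when one step changes, every node that routes into it needs to be re-checked. The structure and helper below are a toy sketch, not our actual tooling.

```typescript
// Toy sketch: find every node that routes into a changed step, i.e. everything
// that has to be re-checked when that step's copy, inputs or outputs change.
interface FlowNode {
  id: string;
  next: string[]; // ids of the steps this node can lead to
}

function affectedBy(flow: FlowNode[], changedId: string): string[] {
  return flow
    .filter((node) => node.next.includes(changedId))
    .map((node) => node.id);
}

// e.g. affectedBy(quickQuoteFlow, "scanLicenceDisc") would list every step
// whose routing, validation or error handling now needs another look.
```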

CHALLENGE

Considering accessibility

We ran our colour palette through filters to simulate colour blindness, and were very happy to see that our spot colour, electric blue, maintained its hue. The red, green and yellow used for the driving test were also still easy to differentiate.


And what about screen readers for blind people?

So when I asked this there was an uncomfortable silence in the team, until someone pointed out that we should probably not be selling car insurance to blind drivers (since at this stage we can only insure you if you yourself are the driver of the car). But the team has promised me that they’ll include tags for screen readers once we’ve expanded the product offering to include home and building insurance.

LEARNING

Knowing and trusting your devs leads to better implementation

This is something I’m very passionate about – as part of our design process all experience designs have to be reviewed and signed off by devs before they are even sent to the product owner or client. It doesn’t matter how amazing your design or solution is: if the devs cannot implement it, or have suggestions on how to improve it, then it is not done yet.

LEARNING

Compromise between dev and design leads to better solutions

Devs know and consider things designers don’t. And designers know and consider things that devs don’t. By combining this knowledge and working together, we create better solutions. This sounds like a motivational poster, but it really does lead to better ways of working, I promise!

WHAT’S NEXT

  • Natural Language Understanding (NLU)
  • WhatsApp platform
  • Many UX improvements
  • Additional product development
  • Full product suite launch later this year

As mentioned earlier in this post, the app has been launched in its MVP phase and is available on iOS and Android (as well as Facebook Messenger and web) if you want to play around with the bot, give us some feedback, or get a great quote on car insurance.

Liz Spangenberg


Senior Experience Designer and Head of Design @teamretrorabbit. Academic. Violinist. Mountaineer.
