Summary

Through academic research, I explored how an empathic chatbot can help users find health information. My role was to research, ideate and prototype a conversational interface, then conduct an experiment to test assumptions and hypotheses. As a result, I was able to make recommendations on how a chatbot with a “personality” can improve the current experience of diagnosing health issues.

This is a 5-minute read

The problem

Digital technologies, the internet in particular, have enabled people to be more informed and empowered when it comes to understanding and diagnosing their own health. The way users currently source this information is reflected in strategic keyword searching across dense online libraries and forums.

How users could complete these health-searching tasks with “human-like” robots mimicking human behaviour was identified through current academic thinking as an area for exploration. The literature also highlighted “cyberchondria”, health anxiety escalated by searching online, as prevalent in current user behaviour, whilst chatbots remain a novel interaction. The goal of this research study was to understand the impact of a chatbot with an applied empathic personality on user acceptance when delivering basic health information and services.

The process

It was an all-encompassing role: I was responsible for every aspect of the study, from initial research, through ideation, to the statistical and thematic analysis of the experiment data.

Insight from previous academic work provided assumptions about appropriate conversational interface theory, and a methodology for approaching the research questions. This generated specific hypotheses to be challenged with quantitative and qualitative research data.

Adapting user-centred tools to conversational interfaces was a challenge. Taking the story of an experience away from a flow of screens and towards a human-like conversation led to certain tweaks in my usual approach to human-centred interaction; you have to approach the deliverables from a slightly different perspective.

Research

To understand how people currently search for information online, particularly about health issues, I conducted structured interviews with several users. The themes identified showed the anxieties involved in searching online, the need for specific reassurances, and the research strategies people use.

To make sense of that data and form a user model for the chatbot design, I used and adapted empathy maps [Fig 1 & 2] to visualise it. Having an empathy map for the user, and another for the chatbot, gave me an initial understanding of how the two entities in this interaction should relate.

[Fig 1] Empathy map for a health advice-seeking user
[Fig 2] Empathy map for the health conversational agent (chatbot)

Design

Expanding the research to map the experience over time [Fig 3] gave me the opportunity to see an idealised story of how a conversation with a chatbot might unfold. Adapting the map to illustrate what a chatbot could say during each phase of a scenario provided the building blocks for subsequent design choices.

[Fig 3] Chatbot experience map

Understanding what personality is, and which aspects of it were appropriate in this context, was key. Combining academic insight on personality, empathic language and how turn-taking conversation works gave me a range of assumptive design building blocks to work with.

Taking scenarios from the mapping exercise, I formed a conversational script [Fig 4] between the chatbot and a user. I started by writing things down as if I were having a conversation with a doctor, getting the words out before applying the language and emotion I felt was appropriate for each phase of the interaction, then iterating the script after acting it out with another person.

[Fig 4] Raw conversational script

This conversation “wireframe” [Fig 5] was then formalised into a flow that considered UI choices and selection pathways. Two versions were created in order to conduct a comparative experiment: one chatbot demonstrating empathic communication, and one that did not.
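To illustrate the kind of structure this implies, here is a minimal sketch of a branching flow represented as data, with the empathic and neutral wording held side by side. The node names, messages and reply options are invented for the example; they are not the script used in the study.

```python
# Minimal sketch of a branching conversation flow (illustrative only; the node
# names, wording and options are assumptions, not the study's actual script).
# Each node holds a message per chatbot variant plus quick-reply pathways.
FLOW = {
    "greeting": {
        "empathic": "Hi, I'm here to help. How are you feeling today?",
        "neutral": "State your symptom to begin.",
        "options": {"I have a headache": "symptom_duration"},
    },
    "symptom_duration": {
        "empathic": "Sorry to hear that. How long has it been bothering you?",
        "neutral": "How long have you had this symptom?",
        "options": {"A few hours": "advice", "More than a week": "advice"},
    },
    "advice": {
        "empathic": "Thanks for sharing. Here is some guidance that may help...",
        "neutral": "Guidance: ...",
        "options": {},
    },
}


def walk(flow, variant="empathic", start="greeting"):
    """Print one path through the flow, always taking the first reply option."""
    node = start
    while node:
        step = flow[node]
        print(f"BOT: {step[variant]}")
        if not step["options"]:
            break
        reply, node = next(iter(step["options"].items()))
        print(f"USER: {reply}")


if __name__ == "__main__":
    walk(FLOW, variant="empathic")
```

Keeping the pathway logic identical across both versions means only the tone varies between the two conditions being compared.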

A “wizard-of-oz” prototype [Fig 6] of both chatbot versions was constructed in an application named Chatfuel, which replicated a Facebook Messenger conversation on a mobile. The point was to create a seemingly intelligent chatbot that could present a realistic scenario to participants in a test environment.

[Fig 5] Conversation wireframe
[Fig 6] Chatbot prototype and conversational UI

Evaluation

In a within-subjects experiment, I compared the two versions of the chatbot with 22 participants in a lab environment. By presenting hypothetical scenarios for the participants to complete, I was able to understand how empathic communication and personality within a conversational interface affected their perception and acceptance of the chatbot.
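The write-up above doesn’t state how presentation order was handled; purely as a sketch, within-subjects designs of this kind commonly counterbalance which version each participant sees first, along these lines:

```python
# Illustrative counterbalancing of condition order for a within-subjects study.
# Assumes a simple alternating order across participants; this is not
# confirmed by the case study itself.
PARTICIPANTS = 22
orders = [
    ("empathic", "neutral") if i % 2 == 0 else ("neutral", "empathic")
    for i in range(PARTICIPANTS)
]
for participant_id, order in enumerate(orders, start=1):
    print(f"P{participant_id:02d}: {' then '.join(order)}")
```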

To dig deep into the analysis, I took two complementary approaches. Likert-scale questionnaires gave me the opportunity to statistically analyse participant responses to the chatbot; using parametric techniques, I was able to test for statistical differences between the two versions.
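As an illustration of that quantitative step, a within-subjects comparison of Likert ratings is often analysed with a paired-samples t-test; the scores below are invented placeholders, and the exact parametric technique used in the study may have differed.

```python
# Sketch of a paired comparison of Likert ratings for the two chatbot versions.
# The ratings are invented placeholders, not the study's data.
from scipy import stats

# One rating per participant per condition (e.g. perceived enjoyment, 1-7).
empathic = [6, 5, 7, 6, 5, 6, 7, 4, 6, 5, 6, 7]
neutral = [4, 4, 5, 5, 3, 4, 6, 4, 5, 4, 5, 5]

# Paired-samples t-test: each participant experienced both versions.
t_stat, p_value = stats.ttest_rel(empathic, neutral)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```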

As a supporting method, I used thematic analysis to code transcripts of the interviews I conducted with participants. The themes generated were used to identify patterns between the qualitative and quantitative data.

[Fig 7] Mobile testing lab

Results

By conducting a mixed-methods experiment, I was able to determine the impact of an empathic chatbot that delivers basic health information to users.

By coupling a conversational interface with emotive personality traits appropriate for discussing health issues, I was able to replicate how people search for health advice. Within this context, being able to empathise with the user improved the chances that they would want to use the chatbot again.

Analysis showed significant differences in perceived enjoyment and usefulness when users interacted with an empathic chatbot for the first time. The engagement of a chatbot talking to them in a caring manner improved how they perceived its usability and the tasks it performed. This was offset, though, by themes of urgency and cynicism about AI, which could cast the chatbot as an unnecessary novelty.

Reflections

This research opened my eyes to approaching problems in different ways. You can learn user-centred tools and try to get into a design mindset, but it’s when a challenge takes you out of your comfort zone that things get exciting. Exploring how people talk and emote with conversational agents was beneficial for my ongoing UX development.

The challenge of an all-encompassing methodology pushed me, pairing statistical analysis with an understanding of human cognition. The opportunity to dig deep into a research question and approach it from several perspectives improved me as a practitioner.

View more case studies or read more about me.