Shy patients are more open about their health when talking to a robot AI, study finds

A recent study is the first to demonstrate that 'virtual humans' could help patients overcome psychological barriers to honesty in medical interviews, especially for sensitive, personal and highly stigmatised topics. The findings matter because dishonesty in medical interviews can have serious consequences for a patient's health, such as an incorrect diagnosis.

Have you ever had a medical problem so embarrassing that you were reluctant to tell your doctor about it? Despite frequent reminders not to be afraid because “doctors have heard it all and seen it all”, we still end up telling little white lies in an attempt to make the dreaded conversation slightly less awkward. But what if you were given the choice to talk to a ‘virtual human’ about your problems instead?

Much research has explored how to encourage patients to answer honestly and in more detail during medical interviews, particularly on sensitive or embarrassing issues. Healthcare professionals are expected to foster honesty by establishing rapport with their patients through various verbal and non-verbal techniques - for example, saying "uh huh" at the right times, performing the occasional head nod, and otherwise giving the impression of sympathy during conversations. The scientific literature also identifies two psychological barriers that prevent patients from answering medical questions truthfully.

The first is "fear of disclosure": patients' inclination to hold back personal, sensitive or stigmatising information for fear that healthcare professionals will view them negatively. For example, patients may be afraid to truthfully answer questions about suicidal thoughts or unsafe sex, since both are highly stigmatised topics. The second is "impression management": the tendency to cherry-pick only the best parts of the truth in order to present a good impression of oneself to healthcare professionals. Those seemingly harmless half-truths during medical interviews can have serious consequences for the patient's health, leading, for example, to incorrect diagnoses or inappropriate medication being prescribed.

A large body of research supports the notion that people are more likely to disclose personal or sensitive information when using a computerised method of assessment - that is, filling in a form on a computer - as opposed to non-computerised methods, such as traditional pen-and-paper surveys and face-to-face interviews. Some researchers have proposed that computerised assessments provide a "sense of invulnerability to criticism, an illusion of privacy", and "the impression that responses 'disappear' into the computer", thereby increasing the likelihood that people will answer questions honestly and in more detail.

However, a recent study has shown that using automated virtual humans during clinical interviews could be a solution to this widespread problem - particularly for sensitive, personal and highly stigmatised topics. The researchers, from the Institute for Creative Technologies, used a "virtual human interviewer program" specifically designed for dealing with distressing psychological situations. The program is embodied in a virtual human psychologist named Ellie, and just like an actual human psychologist, "she" can mimic the patient's posture and use conversational prompts such as "could you tell me more about that?"

Ellie can even detect a patient's verbal and non-verbal behaviour - for instance, spotting changes in tone of voice, or analysing smile intensity - all of which are crucial for identifying underlying psychological conditions. (Here's a video from the Institute for Creative Technologies showing Ellie in action.)

The research team, led by professor of computer science and psychology Jonathan Gratch, split 239 participants into two groups and asked them to engage in a conversation with Ellie. The first group was informed that the virtual human was a fully automated AI with an audio and visual speech recognition system, which would use that data to hold a conversation with them. The other group was falsely informed that Ellie was being controlled by a human in another room, who would not only observe the interview but steer it using a series of pre-recorded questions and answers - much as a puppeteer controls a puppet.

In reality, both groups were engaging with the fully automated Ellie, but the researchers were interested in whether the "mere presence" of another human being, and the "mere belief" that another human being was watching and judging them, would have an effect on the participants' willingness to disclose personal and sensitive information.

As expected, participants who believed they were talking to the fully automated Ellie reported significantly lower fear of negative evaluation and engaged in less impression management than participants who believed Ellie was a machine operated by someone in another room. In other words, merely knowing that a real person was observing was enough to discourage honest and detailed answers. This is the first study to demonstrate that removing the presence of a human interviewer, and thus the feeling of being judged by another human being, makes patients more likely to answer honestly during medical interviews.

Although Ellie is not designed to provide therapy, counselling or give a diagnosis, she can potentially be used as a helpful support tool for health care professionals.

The researchers concluded:

 "The possibility that people would tell an impartial machine personal or embarrassing things about themselves, without fear of negative evaluation, has been borne out. Here we demonstrate that VHs can help overcome psychological barriers to honesty in clinical interviews. Providing more honest responses in medical interviews can help patients to receive better care and avoid serious health consequences. Therefore, the benefits to patients of VH-administrated clinical interviews could be quite substantial."