Conversational AI and Vaccine Communication: Systematic Review of the Evidence

London School of Hygiene & Tropical Medicine (Passanante, Pertwee, Lin, Larson); Laboratory of Data Discovery for Health (Lin, Lee, Wu); The University of Hong Kong (Lin, Lee, Wu); University of Washington (Larson)
"The review found evidence of potential benefits from conversational AI for vaccine communication."
Since the mid-2010s, the use of conversational artificial intelligence (AI) tools, commonly known as chatbots, in health care has expanded significantly, especially in the context of increased burdens on health systems and restrictions on in-person consultations with healthcare providers during the COVID-19 pandemic. One emerging use of conversational AI is to communicate information about vaccines and vaccination with the aim of building vaccine confidence. The ability to provide timely and accurate information to the public at scale is particularly important in the context of what has come to be called an "infodemic". This systematic review examines documented uses of, and evidence on the effectiveness of, conversational AI for vaccine communication.
A keyword search strategy was applied across 13 databases to find studies that included (i) documented instances of conversational AI being used for vaccine communication and (ii) evaluation data on the impact and effectiveness of the intervention. After duplicates were removed, the search identified 496 unique records, which were screened by title and abstract; 38 were selected for full-text review, of which 7 met the inclusion criteria and were assessed and summarised in the findings of this review.
The 6 chatbots identified by the review were relatively simple in their design: 2 were based on natural language processing (NLP), 1 was a hybrid with some NLP functionality integrated into a predominantly rules-based system, 1 was purely rules-based, and the remaining 2 were simulated agents (i.e., "Wizard of Oz" experiments). In most cases, the knowledge base for the chatbots was constructed from governmental websites and the scientific literature; chatbot development was not generally informed by systematic analysis of local information environments prior to deployment (for example, using social media and web search data to identify information-seeking behaviours or prevalent misinformation narratives among intended populations). Only 3 of the 6 chatbots (50%) had a theoretical underpinning to their approach, such as the Health Belief Model or the Information-Motivation-Behavioral Skills Model.
The main use of vaccine chatbots to date has been to provide factual information to users in response to their questions about vaccines. Chatbots have also been used to schedule vaccinations, send appointment reminders, debunk misinformation, and, in some cases, conduct vaccine counseling and persuasion. For example, one article described a protocol for continuing to recommend human papillomavirus (HPV) vaccination for a child in the event of parental resistance or disengagement.
Like chatbots in other health domains, vaccine chatbots have not always been subject to robust evaluation. Of the 6 unique chatbots that met the inclusion criteria, only 1 had been evaluated in a randomised controlled trial, and in many cases the sample sizes were very small. In all cases, evaluation was limited to the short-term, direct effects of chatbot use on users' self-reported vaccine attitudes and behaviours. That said, the available evidence suggests that chatbots can have a positive effect on vaccine attitudes, and none of the included studies identified any "backfire effects" (where some participants become more vaccine hesitant after the intervention), which have been reported in some previous studies of digital health interventions.
Several factors were identified in the included studies as having a positive influence on users' perceptions of chatbots. Evidence suggests that providing credible, personalised information in real time through a familiar and accessible platform is key to chatbot success. In addition, making chatbot interactions feel more "natural" by limiting the length of text responses, incorporating images and videos, and eliminating repetition can improve user experience and engagement. Research suggests that users prefer transparency as to whether they are interacting with an AI system or a human being, as this allows them to calibrate their expectations and their language accordingly. There is also some evidence that cues such as the gender of the chatbot persona can affect how users perceive and engage with chatbots. Conversely, excessively lengthy or repetitious text responses, obvious gaps in the knowledge base, and a robotic or inhuman "feel" can negatively affect users' perceptions.
The review makes 4 specific recommendations for future research to build the evidence base around conversational AI for vaccine communication and to ensure that no unintended harms result from its use:
- There is a need for comparative studies that test how chatbot effectiveness may vary depending on design and implementation (e.g., voice or text interfaces), communication context (e.g., population-wide or community-specific vaccination campaigns), and across different demographic groups and country locations. Researchers should aim to recruit larger, more representative samples and include control groups. In addition, future interventions should have a stronger theoretical basis in behavioural and communication theories.
- There is a need to evaluate the longer-term, indirect, and system-wide effects of conversational AI as well as the short-term, direct effects on chatbot users. Where possible, longitudinal surveys should aim to assess trends in information sharing habits, information literacy, and trust in health care among chatbot users and nonusers over time.
- More evidence and transparency around the costs of chatbot development and maintenance are needed.
- Future research should directly address the question of what may be appropriate or inappropriate tasks for vaccine chatbots to perform based on an analysis of the technical capabilities and limitations of current conversational AI systems. Building this evidence base would enable researchers to make evidence-based recommendations to governments and regulators around appropriate ethical and regulatory frameworks for these technologies in a health context.
In conclusion: "Available evidence, while limited, suggests that conversational AI, properly designed and implemented, can potentially be an effective means of vaccine communication that can complement more traditional channels of health communication, such as consultations with health care providers, especially in situations where health systems are overburdened."
Journal of Medical Internet Research 2023;25:e42758. doi: 10.2196/42758.