Chatbots based on large language models are not able to accurately recognize the motivational states of users who are hesitant to make healthy behavior changes, but they can provide support to those who are committed to taking action, according to researchers.
Chatbots based on large language models have the potential to encourage health-promoting behavior change. However, researchers from the University of Illinois Urbana-Champaign's ACTION Lab have found that these artificial intelligence tools cannot effectively recognize certain motivational states of users and, as a result, do not provide them with appropriate information.
Michelle Bak, a doctoral student in information sciences, and information sciences professor Jessie Chin published their research in the Journal of the American Medical Informatics Association.
Generative conversational agents, also known as large language model-based chatbots, have been increasingly utilized in healthcare for patient education, assessment, and management. Bak and Chin aimed to investigate whether these chatbots could also be effective in promoting behavior change.
Chin explained that previous research had shown existing algorithms did not accurately identify the stages of users' motivation. She and Bak therefore designed a study to evaluate how well large language models, including ChatGPT, Google Bard, and Llama 2, recognize motivational states and offer appropriate information to encourage behavior change.

The researchers evaluated these models in 25 scenarios targeting health needs such as low physical activity, diet concerns, mental health challenges, cancer screening, sexually transmitted diseases, and substance dependency. The scenarios covered the five motivational stages of behavior change: precontemplation (when users are resistant to change or unaware of a problem), contemplation, preparation, action, and maintenance.

The study found that large language models can identify users' motivational states and provide relevant information once users have set goals and committed to taking action. However, in the early stages, when users are hesitant or ambivalent about changing their behavior, the chatbots often fail to recognize their motivational states.

Chin explained that the models struggle to detect motivation accurately because they are designed to interpret a user's language in terms of relevance, without understanding the distinction between someone who is considering a change but is still uncertain and someone who is fully committed to taking action. She also pointed out that the way users formulate their queries does not semantically indicate their level of motivation, making it difficult to discern their motivational states from language alone.

Once someone has decided to change their behavior, large language models can offer valuable information.
However, when individuals express that they are considering a change but are not yet ready to take action, it becomes difficult for large language models to comprehend the distinction, according to Chin. The study discovered that when people were resistant to changing their habits, large language models were ineffective in providing information to help them understand their problematic behavior and its underlying causes and consequences, as well as how their environment may be influencing their behavior. For instance, if someone is resistant to increasing their physical activity level, the models struggle to provide useful guidance.
For someone in that position, information that helps them understand the potential negative effects of a sedentary lifestyle is more effective at motivating them through emotional engagement than simply suggesting they join a gym. According to Bak and Chin, when the language models did not engage with users' motivations, they failed to generate the sense of readiness and emotional drive needed for behavior change.
However, once a user decided to take action, the large language models offered enough information to help them progress toward their goals. Individuals who had already taken steps to change their behaviors received guidance on substituting problematic habits with healthier ones and on seeking support from others. But the chatbots did not tell users who were already working on changing their behaviors how to maintain motivation through a reward system, or how to reduce environmental stimuli that could trigger a relapse of the problem behavior. The researchers found that while the chatbots provide resources on obtaining external help, such as social support, they lack information on controlling the environment to eliminate the stimuli that reinforce problem behaviors.
According to the researchers, large language models may not currently be capable of understanding motivational states in natural language conversations, but they have the potential to support behavior change when individuals are highly motivated and ready to take action. “Future studies will focus on fine-tuning these models to recognize linguistic cues, information search patterns, and social determinants of health in order to better understand a user’s motivational states and provide them with specific knowledge to help change their behaviors,” Chin said.