Chatbot goes rogue...

LLMs

Recently, a significant controversy erupted over the use of AI chatbots in mental health, specifically in the treatment of eating disorders. The National Eating Disorders Association (NEDA) had replaced its human-staffed helpline with a chatbot named Tessa, aiming to offer an efficient, accessible, AI-assisted resource for individuals and families. The move instead raised serious concerns about the efficacy and safety of such AI platforms.


Sharon Maxwell, a consultant in the eating disorder field who had herself struggled with an eating disorder, reported receiving inappropriate weight-loss advice from Tessa, triggering an intense debate. NEDA was compelled to disable the chatbot indefinitely, underlining the profound risk of AI misguidance, particularly in sensitive mental health contexts.


NEDA had initiated this shift from a helpline to an AI-based service in response to a growing mental health crisis and a severe shortage of clinical treatment providers. Tessa was initially a rule-based chatbot, limited to pre-programmed responses, but the service provider, Cass, altered its capabilities without NEDA's approval. The changes allowed Tessa to generate new responses using generative AI, a capability that opened the door to unvetted and potentially harmful advice.
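To make the distinction concrete, here is a minimal sketch in Python of the two designs. Everything in it is hypothetical: the function names, scripted responses, and stand-in model call are illustrative assumptions, not drawn from Tessa's actual implementation.

```python
# Hypothetical sketch contrasting a rule-based chatbot with a generative one.
# All names and responses here are illustrative, not Tessa's real code.

SCRIPTED_RESPONSES = {
    "greeting": "Hi, I'm here to share expert-reviewed resources.",
    "coping": "Here is a coping exercise written and reviewed by clinicians: ...",
}
FALLBACK = "I can't help with that. Please reach out to a professional."

def rule_based_reply(intent: str) -> str:
    # Rule-based: the bot can only ever say what experts pre-approved.
    return SCRIPTED_RESPONSES.get(intent, FALLBACK)

def call_llm(prompt: str) -> str:
    # Stand-in for a hosted large language model; in production this would
    # be an API call whose output is open-ended.
    return "<model-generated text: content is not guaranteed to be safe>"

def generative_reply(user_message: str) -> str:
    # Generative: the reply is composed on the fly, so nothing limits it
    # to expert-approved content unless guardrails are added around it.
    return call_llm(user_message)
```

The rule-based version can only return text a human approved in advance; the generative version's output is effectively unbounded, which is exactly where the risk crept in.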


NEDA and Cass have both acknowledged the problems that emerged, with NEDA CEO Liz Thompson stating that the harmful advice Tessa provided would never have been scripted by eating disorder experts. However, these issues had been flagged as early as October 2022, pointing to a failure to act on earlier warnings.


The Tessa controversy highlights the potential hazards of deploying AI without proper oversight, particularly in sensitive domains such as mental health. Large Language Models (LLMs) like the one behind Tessa present a significant risk, especially when companies integrate them into their own customer-facing AI without sufficient precautions. Inaccurate or harmful information can easily slip through, potentially leading to dire consequences.


Therefore, for businesses wishing to leverage AI's benefits while minimizing its potential risks, it is advisable to partner with reputable Conversational AI providers. Such providers restrict the AI to responses grounded in specific, vetted training data, maintain 'guardrails' for accuracy, and control how much detail is shared with customers. This approach also makes it possible to hand a conversation to a human when necessary.
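As an illustration, here is a hedged sketch of what such guardrails might look like in practice. The blocked topics, escalation keywords, and function names below are assumptions chosen for this example, not any provider's actual policy.

```python
# Illustrative guardrail sketch; the specific lists and checks are assumptions,
# not a real provider's implementation.

BLOCKED_TOPICS = {"weight loss", "calorie counting", "dieting"}
ESCALATION_KEYWORDS = {"crisis", "self-harm", "emergency"}

def needs_human(user_message: str) -> bool:
    # Route high-risk conversations to a person instead of the model.
    text = user_message.lower()
    return any(keyword in text for keyword in ESCALATION_KEYWORDS)

def passes_guardrails(candidate_reply: str) -> bool:
    # Reject any draft reply that strays into off-limits territory.
    text = candidate_reply.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def safe_reply(user_message: str, draft_from_model: str, fallback: str) -> str:
    if needs_human(user_message):
        return "Connecting you with a human volunteer now."
    if passes_guardrails(draft_from_model):
        return draft_from_model
    return fallback  # expert-approved default instead of the risky draft
```

The key design choice is that the model's draft never reaches the user directly: it must pass the checks first, and high-risk conversations are escalated to a person before the model is consulted at all.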


Companies taking a "do it yourself" approach risk creating AI that provides inaccurate or harmful advice. The Tessa debacle illustrates that even seemingly benign changes to an AI system can result in unexpected and damaging outputs. Hence, while AI presents immense opportunities, particularly in areas such as mental health, its deployment must be carefully managed so that it enhances rather than endangers the welfare of the individuals it is intended to assist.


To conclude, companies that use LLMs to build their own customer-facing AI run a substantial risk of inaccuracies and false information, including advice or details the system should never have shared. The safer path is to work with a proven Conversational AI provider that ensures the AI only responds based on the specific data it has been trained on, provides guardrails to ensure accuracy, and controls how in-depth the information shared with customers can be.


