Building and Beta-Testing Be Well Buddy Chatbot, a Secure, Credible and Trustworthy AI Chatbot That Will Not Misinform, Hallucinate or Stigmatize Substance Use Disorder: Development and Usability Study.

Journal: JMIR Human Factors
PMID:

Abstract

BACKGROUND: Artificially intelligent (AI) chatbots that deploy natural language processing and machine learning are becoming more common in health care to facilitate patient education and outreach. However, generative chatbots such as ChatGPT face challenges, as they can misinform and hallucinate. Health care systems are increasingly interested in using these tools for patient education, access to care, and self-management, but they need reassurance that AI systems can be secure and credible.

Authors

  • Adam Jerome Salyers
    Clinic Chat, LLC, 2950 Arkins Ct, Unit 605, Denver, CO, 80216, United States, 1 3038079800.
  • Sheana Bull
    Clinic Chat, LLC, 2950 Arkins Ct, Unit 605, Denver, CO, 80216, United States, 1 3038079800.
  • Joshva Silvasstar
    Clinic Chat, LLC, 2950 Arkins Ct, Unit 605, Denver, CO, 80216, United States, 1 3038079800.
  • Kevin Howell
University of Texas Health Science Center at San Antonio, San Antonio, TX, United States.
  • Tara Wright
University of Texas Health Science Center at San Antonio, San Antonio, TX, United States.
  • Farnoush Banaei-Kashani
    Department of Computer Science and Engineering, University of Colorado, Denver, CO, United States.