Artificial intelligence (AI) platforms use algorithms that extract information from databases to generate answers to questions.1 AI can recognize human language and produce conversational responses, creating the experience of talking to an actual person.1 It can also answer complex inquiries that would customarily take longer to research through a traditional search engine. Despite these benefits, the serious consequences that can follow from the improper use of AI cannot be ignored. One example of dire consequences arose in a recent lawsuit against Avianca Airlines.
The case involved plaintiff Roberto Mata, who sued Avianca Airlines claiming he was injured in 2019 when a metal cart struck his knee during a flight.2,3 In 2023, Avianca sought to dismiss the suit on the ground that the statute of limitations had expired.2,3 Mata's counsel filed a response citing several cases that supported allowing the suit to proceed. However, when Avianca's attorneys reviewed the response, they could not locate any of the cited cases.2,3 The judge overseeing the case ordered Mata's lawyers to provide copies of the referenced decisions.2,3 It was later discovered that the cases did not exist and had been fabricated by ChatGPT.2,3 On June 8, 2023, Mata's attorney was asked to explain his failure to validate ChatGPT's responses.2,3 Unfortunately, he had no valid reason for his lack of diligence beyond his assumptions that ChatGPT was an advanced search engine yielding accurate results and that its responses read as convincing and authoritative.2,3
The Avianca case is an example of AI's use in the legal setting, but AI's use and influence have not stopped there. AI's reach now extends across a variety of industries, including education, the military, and healthcare. For example, AI has already been incorporated into clinical decision support systems (CDSS), allowing for advanced interpretation of clinical data points to help diagnose and treat patients.4 Although AI offers benefits in the diagnosis and treatment of patients, there are specific risks that clinical users of AI should consider: accuracy, bias, and protection of data.
ACCURACY

The Avianca Airlines case highlights the unsettling reality of the inaccuracies an AI platform can generate. A recent multi-specialty analysis of 180 clinical questions found that 57.8% (n=104) of AI-generated answers were rated as not completely correct.5 The study also highlighted ChatGPT's ability to deliver inaccurate conclusions in an authoritative and convincing manner.5 Just as the attorney in the Avianca case relied on ChatGPT's convincing language and tone, false information delivered convincingly to clinical personnel can lead to reliance on inaccurate information when rendering patient services. This phenomenon is known as AI hallucination: the AI generates false information and presents it as fact.6 Clinicians should therefore treat AI as a supportive tool that does not replace a practitioner's critical thinking and validation of the information received.
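For practices that want to make that validation step routine, the sketch below shows one way staff might script a first-pass existence check of AI-cited literature. It is a minimal sketch, assuming the public NCBI E-utilities search endpoint; the helper function and the sample citation are hypothetical illustrations, and a zero result only flags an item for mandatory human review; it does not by itself prove fabrication.

```python
"""Minimal sketch: first-pass existence check for AI-cited articles.

Assumes the public NCBI E-utilities esearch API; the helper name and
the sample title below are hypothetical. This is a screening aid only;
a human must still read and verify every source before relying on it.
"""
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(title: str) -> int:
    """Return how many PubMed records match the quoted title phrase."""
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": f'"{title}"[Title]', "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical title an AI chatbot might cite in an answer.
cited_titles = [
    "Assessing the Accuracy and Reliability of AI-Generated Medical Responses",
]

for title in cited_titles:
    hits = pubmed_hit_count(title)
    # Zero hits does not prove fabrication, but it flags the citation
    # for human follow-up before anyone acts on it.
    status = "found" if hits else "NOT FOUND - verify manually"
    print(f"{status}: {title}")
```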
BIAS

The risk of AI bias in healthcare arises from the dataset available to the AI system.7 If the dataset fails to include certain data points related to race, gender, or underrepresented individuals, the information produced can be skewed. For example, if historical data show that African American patients receive, on average, less treatment for pain than white patients, an AI system could inaccurately learn to suggest lower doses of pain medication for African American patients.7 Recognizing potential data bias is therefore important for any user of AI. To mitigate biased outcomes, the underlying data should reflect racial and gender diversity, and clinicians should ask the AI platform's developers whether diversified information is included in its dataset.
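To make the dosing example concrete, the short sketch below shows how a reviewer might audit a dataset for exactly this kind of skew before it feeds an AI tool. All of the records are invented for illustration; a real audit would run against the actual dataset with far more rigorous statistics.

```python
"""Minimal sketch: auditing a dataset for outcome skew across groups.
Every record below is invented for illustration only."""
from collections import defaultdict

# Hypothetical historical records: (patient_group, pain_score, dose_mg)
records = [
    ("group_a", 7, 10), ("group_a", 8, 10), ("group_a", 7, 15),
    ("group_b", 7, 5),  ("group_b", 8, 5),  ("group_b", 7, 10),
]

by_group = defaultdict(list)
for group, pain, dose in records:
    by_group[group].append((pain, dose))

for group, rows in by_group.items():
    n = len(rows)
    avg_pain = sum(p for p, _ in rows) / n
    avg_dose = sum(d for _, d in rows) / n
    # Similar reported pain but systematically lower doses for one
    # group is exactly the pattern a model would learn and repeat.
    print(f"{group}: n={n}, avg pain={avg_pain:.1f}, avg dose={avg_dose:.1f} mg")
```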
PROTECTION OF DATA

Federal and state laws require the privacy and security of patient-identifiable medical information. Artificial intelligence platforms are not considered covered entities and therefore are not subject to the privacy and security standards required by the Health Insurance Portability and Accountability Act (HIPAA). Although the California Consumer Privacy Act affords general privacy protections, it is important that you review an AI platform's privacy policies to understand how inputted information is stored and used. Further, only deidentified information should be entered; if deidentification is not feasible, patient consent must be obtained and documented.
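As a concrete illustration of that deidentification step, the sketch below shows one simple way obvious identifiers might be stripped from a note before it is pasted into an external AI platform. The patterns are illustrative only and fall well short of the full list of HIPAA Safe Harbor identifiers; production deidentification should rely on a validated tool.

```python
"""Minimal sketch: redacting obvious identifiers from a clinical note
before it is sent to an external AI platform. The regex patterns are
illustrative and do NOT cover all 18 HIPAA Safe Harbor identifiers;
production de-identification needs a vetted, validated tool."""
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE), "[MRN]"),
]

def redact(note: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for pattern, placeholder in PATTERNS:
        note = pattern.sub(placeholder, note)
    return note

# Hypothetical note, invented for illustration.
note = "Pt seen 3/14/2023, MRN 445821, call 310-555-0199 with results."
print(redact(note))
# -> Pt seen [DATE], [MRN], call [PHONE] with results.
```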
LESSONS LEARNED

AI has unquestionably opened the door to endless possibilities for clinical effectiveness. However, before adopting an AI platform, you should understand the system's accuracy, its potential bias, and how the platform protects and secures data. Consideration should also be given to educating your staff on the limitations of AI and the need to validate the information it presents.
The National Academy of Medicine is currently developing guidelines for the use of AI in healthcare. You can monitor the progress of this three-year project at Health Care Artificial Intelligence Code of Conduct - National Academy of Medicine (nam.edu). In the interim, having your staff sign a code of conduct acknowledgement form is a practical way to document that AI education was provided. You can access CAP's Code of Conduct template related to the use of AI platforms here.
Bryan Dildy is a Senior Risk & Patient Safety Specialist. Questions or comments related to this article should be directed to BDildy@CAPphysicians.com.
References
1. Sindhu Sundar & Aaron Mok, What is ChatGPT? Here's everything you need to know about ChatGPT, Bus. Insider (Aug. 21, 2023, 9:26 AM), https://www.businessinsider.com/everything-you-need-to-know-about-chat-…
2. Molly Bohannon, Lawyer Used ChatGPT in Court and Cited Fake Cases, Forbes (Jun. 8, 2023, 2:06 PM), https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatg…
3. Benjamin Weiser & Nate Schweber, The ChatGPT Lawyer Explains Himself, N.Y. Times (Jun. 8, 2023), https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.ht…
4. Vinita Mujumdar, JD & Haley Jeffcoat, MPH, How Clinical Decision Support Tools Can Be Used to Support Modern Care Delivery, FACS (Sept. 1, 2022), https://www.facs.org/for-medical-professionals/news-publications/news-a…
5. Douglas Johnson et al., Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model, PubMed Central (Feb. 28, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10002821/
6. Jerome Goddard, PhD, Hallucinations in ChatGPT: A Cautionary Tale for Biomedical Researchers, The Am. J. of Med. (Jun. 25, 2023), https://www.amjmed.com/article/S0002-9343(23)00401-1/fulltext
7. W. Nicholson Price II, Risks and Remedies for Artificial Intelligence in Healthcare, Brookings (Nov. 14, 2019), https://www.brookings.edu/articles/risks-and-remedies-for-artificial-intelligence-in-health-care/#:~:text=If%20an%20AI%20system%20recommends,the%20patient%20could%20be%20injured.