B - Risk Management Considerations in the Use of Large Language Models
As their name suggests, artificial intelligence technologies built on "large language models" require enormous amounts of data before a model can perform its diagnostic, prognostic, or generative function. Typically, the model "learns" to identify patterns in the data on which it is trained and then uses that pattern recognition to fashion its outputs or deliverables. This kind of clinical decision-making is unprecedented in twenty-first-century healthcare, and it introduces new forms of responsibility, accountability, and liability for clinicians and for the clinics and hospitals in which they work.
This presentation will examine a variety of scenarios in which the use of large language models can expose a clinician, his or her clinic or hospital, or both to considerable legal risk. Put differently, these scenarios anticipate the negligence claims that patients or their families are likely to bring in the not-too-distant future over artificial intelligence applications built on large language models.