
Sorry for the wait, the doctor is with A.I. right now

Artificial intelligence (A.I.) has made its way into many corners of society, and the medical field is no exception. Recently, Epic, a healthcare software company, teamed up with Microsoft to develop a generative A.I. tool for the healthcare industry.[1]

In the early 2000s, Epic developed MyChart, a patient portal that has since grown into a tool for patients to communicate directly with their doctors.[2] With Microsoft's help, Epic added an artificial intelligence feature to MyChart that drafts replies to patient messages.[3]

The tool works by pulling context from a patient's prior messages and information from the patient's medical record to draft a message that the healthcare provider can approve or edit before sending.[4] The program aims to ease physician burnout by reducing the cognitive burden of responding to patient messages and saving physicians time so they can spend more of it with patients in person.[5] Since the COVID-19 pandemic, when telemedicine became a primary substitute for in-person physician-patient interaction, physicians have remained inundated with patient messages.[6]
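In outline, the workflow resembles the minimal sketch below. This is a hypothetical illustration only: the names, prompt format, and the `generate` callable are assumptions for exposition and do not reflect Epic's or Microsoft's actual implementation.

```python
# Hypothetical sketch of a physician-reviewed draft-reply pipeline.
# All names here are illustrative assumptions, not Epic's or Microsoft's API.
from dataclasses import dataclass

@dataclass
class PatientContext:
    prior_messages: list[str]  # recent portal messages from the patient
    record_summary: str        # relevant excerpts from the medical record

def build_prompt(new_message: str, ctx: PatientContext) -> str:
    """Combine the new message with prior context for the language model."""
    history = "\n".join(ctx.prior_messages)
    return (
        f"Patient record summary:\n{ctx.record_summary}\n\n"
        f"Prior messages:\n{history}\n\n"
        f"New patient message:\n{new_message}\n\n"
        "Draft a reply for the physician to review."
    )

def draft_reply(new_message: str, ctx: PatientContext, generate) -> str:
    """Return a draft only; a clinician must approve or edit it before it is sent."""
    return generate(build_prompt(new_message, ctx))  # never sent automatically
```

The key design point, as described above, is that the model's output is a draft held for human review, not a message sent to the patient.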

Studies show that the program has reduced physicians' feelings of burnout and cognitive burden, but it has not necessarily saved them time.[7] Moreover, implementing this type of technology would require costly investment from the medical field.[8]

Critics of the program are concerned about the liability these types of programs could impose on healthcare workers. The output of large language models, such as the one Epic implemented within MyChart, can be inaccurate.[9] These programs sometimes produce "hallucinations": outputs that mix correct and incorrect information in ways that can seem plausible to readers.[10] Detecting and reducing hallucinations in large language models has been an ongoing challenge for developers.[11] If physicians fail to adequately check messages drafted by A.I. and revise them to give patients correct medical advice, they could be on the hook for serious malpractice lawsuits.[12]

Jesse Ehrenfeld, president of the American Medical Association, said at a meeting in March 2024 that physicians are already seeing lawsuits arising from the use of A.I.[13] Some health tech companies and hospitals maintain that physicians are responsible for the decisions they make while practicing, which would leave physicians liable for mistakes stemming from their reliance on A.I.[14] Traditionally, courts have held physicians liable where they improperly relied on software suggestions in treating patients.[15]

One concern for healthcare providers is the large malpractice insurance premiums they would have to pay to protect themselves against malpractice payouts.[16] Following a dip during the COVID-19 pandemic, malpractice payouts have been on the rise and, since 2022, have exceeded $3 billion per year.[17] Last year, the American Medical Association reported that many doctors had consistently incurred double-digit percentage increases in their malpractice insurance premiums over the previous four years.[18]

Legal scholars and researchers have proposed changing the traditional liability system for malpractice claims in cases involving A.I.[19] One theory is to change the standard of care used to evaluate malpractice cases in which A.I. was used.[20] Since courts look to professional norms in articulating the standard of care, researchers suggest that stakeholders could shape that standard by rigorously evaluating A.I. algorithms and encouraging hospitals and medical practices to appropriately vet these systems before implementing them.[21] Another suggestion is to share liability between A.I. designers and healthcare systems.[22] Researchers suggest that physicians, healthcare systems, and A.I. designers allocate liability through contractual indemnification clauses.[23] This would spread the potential burden across both the healthcare industry and A.I. designers while encouraging both parties to use the programs safely.[24]


Samantha Sivert

Samantha Sivert is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She holds a B.A. in Journalism and Spanish from Hofstra University.