DH12 - Machine Learning and Human Expertise in the Balance: Generative AI in Healthcare Settings
Description
Since the rise in popularity of large language model chatbots such as OpenAI’s ChatGPT and Google’s Bard, their use has spread into various industries, including healthcare. Given the technology’s rapid advancement, assessing how it is used, misused, abused, and disused is important to help drive positive transformation of healthcare, understand risks to patient safety, and guide any regulatory oversight in the right direction. It is evident that generative AI will lead to substantial shifts in the way medical providers learn, conduct varying processes within the healthcare system, and make medical decisions. Current and potential implementations of the technology in the field include clinical data management, medical decision-making, provider-insurance interactions, and information-seeking, which can directly impact patient-provider interactions, patient safety, and diagnostic accuracy. One important aspect of human-generative AI interaction in healthcare is the intersection between a provider’s medical expertise and their level of dependence on generative AI for medical decision-making. The present research proposes a two-dimensional framework for understanding the opportunities and risks at the intersection of medical expertise and provider dependence on generative AI.
Understanding the intersection between automation and human capabilities has long been a focus of the fields of human factors and human-computer interaction. Early models contrasting human and machine capabilities transitioned into models evaluating the impact of differing levels of automation in HCI. More recently, as the capabilities of technology have continued to increase rapidly, human-centered AI frameworks have been proposed to promote the evaluation of humans and automation as complementary agents working toward a common task goal. As generative AI begins to demonstrate its potential in healthcare and its inclusion in medical decision-making moves toward normalization, understanding the risks of user dependence on generative AI becomes increasingly critical. Specifically, the user must understand how to interact with the technology, its underlying capabilities and functionality, and the quality of its output. Medical expertise is one important individual difference that allows users to frame queries to generative AI and evaluate the quality of a response. Failure to interact properly with the technology, or to recognize whether its output is accurate, can lead to medical errors. However, proper use of the technology at varying levels of provider expertise promises to deliver a human-centered AI experience that can improve the accuracy of medical decision-making and advance patient health and safety.
The present research proposes a two-dimensional framework to help guide understanding of the risks and opportunities at the intersection of medical expertise and generative AI dependence. This framework can help researchers, healthcare professionals, and regulatory policy makers understand human-AI reliability concerns, drive the technology toward positive and practical implementations, design generative AI systems that are transparent and flexible, and craft regulation that protects patients without hindering progress. The research also highlights the importance of medical expertise in the use of generative AI for medical decision-making. While generative AI has the potential to improve the accuracy of medical decision-making, it is a tool that must be used appropriately. Healthcare professionals with a deep understanding of medicine are best equipped to interact with generative AI systems and evaluate the quality of their output. Generative AI is still a relatively new technology, and there is much to learn about its capabilities and limitations. In healthcare settings, it is important to promote good design and use of generative AI systems while advancing awareness of potential risks.
Event Type
Poster Presentation
Time
Tuesday, March 26, 4:45pm - 6:15pm CDT
Location
Salon C
Tracks
Digital Health
Simulation and Education
Hospital Environments
Medical and Drug Delivery Devices
Patient Safety Research and Initiatives