Presentation
PS10 - Non-Human AI Personas: Bridging the Gap in AI-UX for Healthcare
Description
Abstract—Advocates for Artificial Intelligence (AI) technologies within healthcare systems promise enhanced patient care. Yet these technologies simultaneously introduce a myriad of challenges concerning Human Factors Engineering (HFE) and Human-Computer Interaction (HCI). This paper presents a conceptual framework, termed Non-Human AI Personas, to unify the mental models that healthcare providers and patients employ when interacting with AI systems. Through the use of Non-Human AI Personas, this work identifies solutions to extant challenges in healthcare AI-UX, such as system explainability, data privacy ethics, universal design, and patient safety. By delineating actionable steps derived from the persona-based framework, this paper contributes to improving the safety, efficacy, and efficiency of AI adoption in healthcare settings.
Keywords--Artificial Intelligence, Healthcare, Human Factors Engineering (HFE), Human-Computer Interaction (HCI), User Experience (UX), Non-Human AI Personas, Data Privacy, Explainability, Safety, Accessibility
I. INTRODUCTION
Possible Benefits of AI in Healthcare
The accelerated integration of Artificial Intelligence (AI) into healthcare systems marks a significant paradigm shift in the delivery of medical services. AI technologies offer transformative solutions that augment traditional healthcare practices, ranging from predictive analytics in patient diagnosis to real-time data interpretation during surgeries (Jiang et al., 2017)[1]. The confluence of AI and healthcare aims to augment clinical decision-making, optimize workflows, and ultimately enhance patient outcomes (Davenport & Kalakota, 2019)[2]. However, these technologies could also introduce risks to patient safety, healthcare practitioners, organizations, and national healthcare infrastructure.
Statement of the Problem
Despite its promise, the application of AI in healthcare faces complexities and challenges that are already well described and studied in the academic literature. Specifically, issues related to Human Factors Engineering (HFE) and Human-Computer Interaction (HCI) have surfaced as significant roadblocks to the widespread adoption of AI in clinical settings (Carayon et al., 2015)[3]. The opacity of algorithmic processes, the ethical maze surrounding data privacy, and the overarching necessity for universal accessibility stand as prominent obstacles (Ribeiro et al., 2016)[4].
Objectives
This paper aims to address these multi-dimensional challenges by introducing a new conceptual framework: Non-Human AI Personas. The paper seeks to validate the efficacy of employing these personas to enhance the AI-UX (User Experience) in healthcare settings through rigorous research and analysis. The ultimate goal is to provide actionable insights on how we might continue to extend Human Factors practices and principles in designing, evaluating, and implementing AI technologies, thereby improving the safety, efficacy, and efficiency of healthcare delivery (Holden et al., 2013)[5].
Approach (or Methods)
This paper employs a multi-method approach to validate the efficacy of Non-Human AI Personas in enhancing the AI-UX (User Experience) within healthcare settings. The methodology includes both qualitative and quantitative methods, such as interviews, surveys, and data analytics. We relied on a modified persona creation process, pulling data from multiple sources to derive archetypes. We describe design considerations and challenges for distilling draft personas from the derived archetypes, validating them with stakeholders, and publishing them. Future and ongoing work will include piloting the personas in journey maps and other processes to identify improvements and assess risks. The aim is to provide a comprehensive understanding of how these personas can be effectively integrated into healthcare systems to improve safety, efficacy, and efficiency.
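To make the framework more concrete, the sketch below shows one way a Non-Human AI Persona could be captured as a structured artifact for use in journey mapping and risk assessment. The field names and the example persona are illustrative assumptions only; they are not the published personas or the authors' actual template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NonHumanAIPersona:
    """Illustrative structure for a Non-Human AI Persona (fields are assumptions)."""
    name: str                    # memorable label for the AI system archetype
    system_type: str             # e.g., diagnostic decision support, triage chatbot
    capabilities: List[str]      # what the system can reliably do
    limitations: List[str]       # known failure modes and boundaries
    explainability: str          # how the system communicates its reasoning
    data_privacy: str            # what data it uses and how that data is protected
    accessibility: str           # universal-design considerations for users
    safety_risks: List[str] = field(default_factory=list)  # patient-safety hazards to track

# Hypothetical example persona that could seed a journey-mapping exercise
triage_assistant = NonHumanAIPersona(
    name="Ada the Triage Assistant",
    system_type="symptom-triage chatbot",
    capabilities=["ranks likely urgency", "routes patients to care pathways"],
    limitations=["no access to the full chart", "degrades on rare presentations"],
    explainability="surfaces the top symptoms driving each recommendation",
    data_privacy="stores only de-identified symptom data",
    accessibility="supports screen readers and plain-language output",
    safety_risks=["under-triage of atypical chest pain"],
)
```

A structured record like this is one plausible way to keep explainability, privacy, accessibility, and safety considerations visible each time the persona is used in design or evaluation activities.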
Discussion
While the research provides promising insights into the application of Non-Human AI Personas, it is essential to acknowledge its limitations. One significant constraint is the limited scope of healthcare settings examined, so the findings may not be universally applicable. Additionally, the study relies on self-reported data, which could introduce bias.
Future research should aim to expand the scope of healthcare settings and incorporate more objective measures. There is also a need for longitudinal studies to assess the long-term impact of implementing Non-Human AI Personas in healthcare systems.
Event Type
Poster Presentation
Time: Tuesday, March 26, 4:45pm - 6:15pm CDT
Location: Salon C
Digital Health
Simulation and Education
Hospital Environments
Medical and Drug Delivery Devices
Patient Safety Research and Initiatives