Presentation

Artificial Intelligence Coming Soon To A Clinical Setting Near You: The Glories & Pitfalls
Description
The disruptive application of Artificial Intelligence is being trialed and evaluated across almost every industry globally. The aspirational promise is that AI will deliver technology innovations that automate, track, and drive efficiency. The US healthcare system is an obvious candidate: stressed by labor shortages and burnout, limited resources, and a rapidly expanding customer base, healthcare is eager for ways to lower costs, reduce complexity, ease clinician burden, and improve patient care. However, in several recent Veranex projects that sought to understand and define end-user clinical and contextual needs for medtech design and development, the adoption path forward faces some notable perils and pitfalls.

The first and most challenging obstacle relates to user trust. Patient care, at its core, is a matter of human understanding and interaction. Humans can be inherently lazy, poor listeners, forgetful, and moody, with shifting and unreliable motivations. Devices can certainly evolve toward automated and sophisticated sentience, but building trust at the individual patient level entails more than predictable, consistent, rule-based algorithms; it requires design allowances for a broad range of human emotions and responses.

Algorithm development is only as good as its training-data inputs. End users ask: what latent selection biases might have occurred during development? How might outlier cases have unduly influenced key decision-tree paths? Clinical end users will want to know how well their own real-world populations are represented in the validation analysis. Inherent in artificial intelligence development is a formula complexity and lack of transparency that inevitably raises user suspicion and exacerbates skepticism. If the AI is not built on-site from the ground up, then it needs to be adaptable, learning from local inputs before it is considered proven.

Clinicians also hold different mental models that influence their perceptions and use of AI-powered tools. Some embrace the capability to augment clinical care in ways that reduce rote or administrative tasks; others worry that AI is an unwelcome foray into replacing nuanced clinical knowledge and experience with rigid, binary, rule-based algorithms.

A second trust factor, also quickly raised, is user over-dependency: the concern that AI weakens foundational clinical skills, removes the need for critical thinking, derails team focus, and causes cohesion drift. In every clinician's recent memory there are simply too many moments when a machine or a computer has failed, leaving the care team to scramble at the last minute and fall back on 'traditional medicine' best practices. In urgent or emergent situations, the ability to recognize a failure and respond quickly is paramount.

Finally, another complexity to fold into the mix is liability. Today, clinicians are held responsible only when they deviate from standard-of-care procedures. But if they follow or rely on AI guidance, is the product developer then liable when negative patient health outcomes occur? Who takes ownership of the damages?

This talk will expand on these themes, drawing narratives and evidence from a treasure chest of clinician interviews and contextual-inquiry research conducted by Veranex over the last few years.
Authors
Sr Principal Design Researcher
Design Researcher II
Event Type
Oral Presentations
Time: Tuesday, March 26, 9:00am - 9:30am CDT
Location: Salon A-2
Tracks
Digital Health