

MDD29 - Using Artificial Intelligence to Optimize Known Use Problem Analyses
Description
Artificial Intelligence, and its applicability to all fields including Human Factors, has been much discussed recently. Most of the discussion we have seen centers on the potential value and risks of using it to supplement or replace user research. We assert that a more immediate and effective way Human Factors can leverage it in medical device development is in the analysis of messy data, such as the work needed to conduct the known use problem analysis required by the FDA.

As part of 510(k) submissions, the FDA expects the manufacturer to describe all known use problems for previous models of the same device or for similar types of devices (e.g., predicate devices).

The FDA provides some guidance on what this means and on potential sources that can be used to identify these problems, but it does not lay out a detailed recommended methodology. Consequently, despite the expectation to consider "all" known use problems, we have observed that manufacturers vary widely in the level of effort and detail they apply to fulfill it.

HF professionals in this field know that most of these data sources (complaint data, MAUDE) contain messy, ambiguous data. Each source accepts queries and returns data in its own format. Even when a source provides a database of events, each event typically needs to be reviewed individually to ensure that all use-related issues are captured, because events are entered by many different individuals who may not be trained in Human Factors and usability and therefore do not correctly attribute complaints or events to use-related causes. In addition, even standardized fields, such as the name of a device manufacturer, can often be populated in multiple ways.
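To make the manufacturer-name problem concrete, the variability in how a single company appears in free-text fields can be partially tamed by normalizing names and generating several query spellings. The sketch below is a hypothetical illustration (the suffix list and rules are our own illustrative assumptions, not part of any data source's API):

```python
import re

# Illustrative list of corporate suffixes that are often included or
# omitted when a manufacturer name is typed into a free-text field.
SUFFIXES = {"inc", "incorporated", "llc", "ltd", "corp", "corporation"}

def normalize(name: str) -> str:
    """Lowercase a manufacturer name, strip punctuation, and drop
    common corporate suffixes so variant spellings collapse together."""
    cleaned = re.sub(r"[^\w\s]", " ", name.lower())
    tokens = [t for t in cleaned.split() if t not in SUFFIXES]
    return " ".join(tokens)

def query_variants(name: str) -> set[str]:
    """Return several search strings to try against a data source,
    since a single query rarely retrieves every matching record."""
    base = normalize(name)
    return {base, base.replace(" ", ""), base.title()}
```

With this kind of helper, a search for a hypothetical "Acme Medical, Inc." would also try "acme medical" and "acmemedical", reducing the number of records missed because of field-entry inconsistency.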

Consequently, depending on the device in question, conducting a comprehensive known use problem analysis can take hundreds of hours, because each source must first be searched with multiple permutations of a single query, and each individual event or article returned by a query must be reviewed for applicability. Perhaps because of this, we have seen manufacturers wait until development is nearly complete before conducting this analysis, simply to check the box for the HF/UE Summary Report in an FDA submission. At that point the design of the device is set, and any changes that are needed end up relegated to the user manual or IFU.

AI provides us with an opportunity to significantly reduce, or even eliminate, the cost of that brute-force data review. With the time and cost burden of the analysis reduced, it is our hope that manufacturers will be more likely to see this analysis as a useful tool that can improve the usability of the products they are developing and reduce their associated risks, rather than merely a box that must be checked for a US submission.

As a proof of concept, we intend to present an AI-based tool we are developing that can conduct the review needed to identify use-related problems and trends within a dataset, alleviating the burden on the HF researcher while still providing a comprehensive report. We will compare the scope and outputs of a manually conducted known use problem analysis and an AI-generated analysis to demonstrate that the AI outputs are equally valid. We will also discuss the limitations of the tool given today's technologies and ways in which it can be extended to further improve the effectiveness of the Human Factors process.
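To give a sense of the kind of screening such a tool automates, the sketch below is a deliberately naive, non-AI baseline: a keyword triage that flags event narratives possibly describing use-related problems so that a reviewer (or a model) can prioritize them. The cue list is a hypothetical assumption for illustration only; it is not the method of the tool described above.

```python
# Hypothetical baseline for triaging adverse-event narratives.
# Cue substrings are illustrative assumptions, not a validated taxonomy.
USE_RELATED_CUES = [
    "misread", "confus", "wrong dose", "incorrect setting",
    "did not understand", "pressed the wrong", "misconnect",
]

def flag_use_related(narrative: str) -> bool:
    """Return True if a narrative contains any use-related cue."""
    text = narrative.lower()
    return any(cue in text for cue in USE_RELATED_CUES)

def triage(events: list[str]) -> list[str]:
    """Return the narratives flagged for closer human or AI review."""
    return [e for e in events if flag_use_related(e)]
```

Even this crude filter illustrates why automated review helps: a cue like "confus" matches "confused" and "confusing" across thousands of free-text records, while events entered without such language still require the deeper, context-aware analysis an AI-based reviewer can provide.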
Event Type
Poster Presentation
Time
Tuesday, March 26, 4:45pm - 6:15pm CDT
Location
Salon C
Tracks
Digital Health
Simulation and Education
Hospital Environments
Medical and Drug Delivery Devices
Patient Safety Research and Initiatives