Presentation
DH5 - Effects of Neurodivergence on Deepfake-Video Detection: Autism Spectrum Disorder
Description
At the intersection of technology and healthcare, human-factors researchers play a vital role in shaping engineered systems to be both safe and usable. One common way a system design can threaten user safety is by allowing for the presentation of misrepresentative information (misinformation). Prototypically, misinformation, as studied in a human-factors context, might look like a display that inaccurately conveys system status (think: an attitude indicator that misrepresents an aircraft's orientation in space). However, more general instantiations of technologically facilitated misinformation (e.g., fake news) fall no less under the jurisdiction of human-factors researchers and practitioners. The potential for harm associated with such misinformation can reasonably be construed as both a direct and an indirect threat to human well-being, and it should therefore concern the healthcare-focused human-factors community. An example of the indirect threat is the potential for patient trust in sound medical recommendations to be undermined by technologically facilitated misinformation. An example of the direct threat is the potential for mental-health consequences resulting from the misinformative spread of reputationally damaging inaccuracies (think: a high school student who has been the victim of digitally altered revenge pornography). The case that misinformation is a healthcare issue is further reinforced by the potential for health-related conditions to interact with and exacerbate the potential for harmful misinformation; that is, one's health (physical or mental) could make one more or less susceptible to being misinformed.
The particular type of misinformative technology studied in this work was deepfake-video technology. Deepfake videos are videos in which the (typically human) subject has been manipulated to do or say something that they never did (Tidler & Catrambone, 2021). Imagine the potential damage that could be caused by the circulation of a convincing but fake video of the head of the CDC warning of the dangers of some standard-of-care treatment. Or, perhaps more frightening, imagine a convincing but fake real-time telehealth appointment with someone you believe to be a doctor who has cared for you for decades. These dangers, and many related ones, are real and present.
In previous work, Tidler and Catrambone (2021) showed that individuals vary in the extent to which they are able to detect deepfake videos. One of the dimensions shown to account for (i.e., correlate with) this variance is Affect-Detection (AD) ability: the accuracy with which one can identify the emotional states of others from external cues such as facial expressions (see various work by Simon Baron-Cohen and colleagues). Although the connection is controversial among autism researchers (Frith & Happé, 1994), there is some evidence to suggest that those with autism spectrum disorder (ASD) have diminished AD ability compared to neurotypical counterparts (Baron-Cohen et al., 1985). The proposed presentation reports a quasi-experiment run to determine whether those with self-reported ASD detect deepfake videos less well than neurotypical counterparts.
METHOD:
Participants (N = 56) were shown a series of videos and, for each video, were asked to indicate whether they believed the video was authentic or a deepfake, and to give a separate confidence rating for that belief. Thirty of the participants self-reported having been formally diagnosed with ASD, and twenty-six self-reported not having been diagnosed with ASD. NOTE: Although selecting participants via actual diagnostic criteria would have been preferred, we at least took the step of explicitly excluding participants who self-reported only a personal suspicion of having ASD. The participants in the "ASD group" of this study explicitly self-reported having received a formal diagnosis.
Performance on the task was operationalized in three ways (a brief scoring sketch follows the list):
1. Participants' raw deepfake detection performance (i.e., the number of correct identifications minus the number of incorrect identifications)
2. Participants' total confidence in their judgments. For each video, participants rated their confidence in their judgment on a scale of 1-4; these ratings were summed into a total score.
3. Participants' deepfake detection performance weighted by their confidence in their judgments, the rationale being that a confidently correct response indicates better performance than an unconfidently correct one.
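To make the three operationalizations concrete, the following is a minimal Python sketch of the scoring. It is not the authors' analysis code; the Trial structure and the sign convention for the weighted score are illustrative assumptions.

```python
# Hypothetical scoring sketch for the three performance measures.
from dataclasses import dataclass

@dataclass
class Trial:
    correct: bool    # whether the authentic/deepfake judgment was right
    confidence: int  # self-rated confidence, 1 (low) to 4 (high)

def raw_score(trials):
    # 1. Correct identifications minus incorrect identifications.
    return sum(1 if t.correct else -1 for t in trials)

def total_confidence(trials):
    # 2. Confidence ratings summed across all videos.
    return sum(t.confidence for t in trials)

def weighted_score(trials):
    # 3. Correct responses add their confidence rating; incorrect ones
    #    subtract it, so confident hits count more than hesitant ones.
    return sum(t.confidence if t.correct else -t.confidence for t in trials)

trials = [Trial(True, 4), Trial(False, 2), Trial(True, 1)]
print(raw_score(trials), total_confidence(trials), weighted_score(trials))  # 1 7 3
```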
RESULTS/DISCUSSION/CONCLUSIONS:
No significant differences were detected between the ASD participants and the non-ASD participants in either their raw deepfake detection performance or their confidence-weighted performance. However, the groups were observed to differ significantly in their overall confidence in their judgments (t(54) = -2.712, p = .009), with the self-reported ASD group asserting greater overall confidence. Although no evidence was observed that the two groups differed in their actual deepfake detection performance, the difference in confidence suggests reason for concern that those with ASD might have a lower credulity threshold than neurotypical counterparts with respect to accepting their own judgments about the authenticity of videos they encounter; errors in appropriately accepting or rejecting the information conveyed by such videos could result.
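For readers who want the analysis made explicit, the reported comparison is consistent with an independent-samples t-test on per-participant total confidence scores (n = 30 and n = 26 give df = 56 - 2 = 54). The sketch below uses randomly generated placeholder values, not the study's data, and assumes the group ordering that yields the reported negative t.

```python
# Illustrative group comparison with placeholder data (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
non_asd = rng.normal(loc=55, scale=10, size=26)  # placeholder totals, non-ASD group
asd = rng.normal(loc=62, scale=10, size=30)      # placeholder totals, ASD group

# With the non-ASD group passed first, a more-confident ASD group yields a
# negative t, matching the direction of the reported t(54) = -2.712.
t_stat, p_value = stats.ttest_ind(non_asd, asd)
print(f"t({len(non_asd) + len(asd) - 2}) = {t_stat:.3f}, p = {p_value:.3f}")
```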
REFERENCES:
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21(1), 37-46.
Frith, U., & Happé, F. (1994). Autism: Beyond “theory of mind”. Cognition, 50(1-3), 115-132.
Tidler, Z. R., & Catrambone, R. (2021). Individual Differences in Deepfake Detection: Mindblindness and Political Orientation. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 43, No. 43).
Event Type
Poster Presentation
Time
Tuesday, March 26, 4:45pm - 6:15pm CDT
Location
Salon C
Digital Health