
DH10 - Improving Language Comprehension via Hand Gestures in Children with Autism Spectrum Disorders and/or Language Impairment in Rural Montana
Description

Developmental Language Disorder (DLD; 7% of children) and Autism Spectrum Disorders (ASD; 3% of children) are lifelong challenges that dramatically impair communicative abilities, thereby impacting social, economic, and academic outcomes. Children with DLD/ASD benefit from interventions and supports to improve language skills. However, only a few interventions are designed to be delivered outside of a speech and language pathology clinic, e.g., at home or in the classroom; as a result, rural and underserved children with DLD/ASD and their families experience increased disparities in access to crucial services. Our pilot project is a step toward developing an inclusive, convenient, accessible, custom-tailored, in-home / telehealth-delivered intervention for children and caregivers, aimed at improving children’s language learning abilities and thereby reducing access disparities and improving later life outcomes.

In children with DLD/ASD, caregiver gestures (hand and body movements accompanying verbal communication) can improve comprehension of content words (e.g., nouns that label objects). However, little is known about how gestures affect comprehension of grammatical words like determiners (e.g., “the” or “an”). Determiners mediate successful communication because they indicate who or what is being referenced: e.g., “a book” can indicate any book, but “the book” indicates a specific book that was just mentioned. This distinction can be relevant to academic or personal interactions, and miscommunication may have undesirable consequences. Determiners are acquired between ages 3 and 8 in typically developing (TD) children, and their acquisition is considerably delayed in children with DLD/ASD.

Our pilot study, using participatory and human-centered design processes, investigates whether an interactive computer game can teach hearing children with TD or DLD/ASD about gestures and thereby help them better understand sentences with determiners. The custom-created game, written in JavaScript, uses an “act-out” method: given a language prompt, children decide what to do within constrained parameters. The participant’s task is to help two characters, Fishy and Turtle (represented by emojis), follow instructions given by the “Lady in the video” by using the mouse to place them in the appropriate locations on the screen, which the Lady indicates with hand gestures or verbally only.

For locations, there are two arrays of objects with vowel-initial names (to enhance the perceptual saliency of determiners), e.g., nine apples and nine octopuses. Participants can replay the video as many times as they need to understand the prompt, and clicking on Fishy or Turtle attaches that character to the mouse cursor, thus avoiding the difficult click-hold-and-drag motion. Sometimes the Lady in the video instructs the characters to go to two different locations, e.g., “Fishy clicks an apple, … and then Turtle clicks another apple” – this is called “disjoint reference”. Other times, the characters are instructed to go to the same location, e.g., “Turtle clicks an octopus, … and then Fishy clicks the same octopus” – this is called “coreference”. During the game, the program records participants’ mouse clicks and reaction times, and we also record participants’ eye gaze using Gazepoint or Tobii Glasses 3 eye trackers to infer conscious and subconscious attention to the characters, the video of the Lady and her gestures, and the ‘same’/‘different’ answer locations. Eye tracking is important because previous studies show that participants may give the correct behavioral answer to a language question while their eyes reveal confusion as they look among the answer choices.
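
To illustrate the interaction above, here is a minimal sketch of the click-to-place mechanic and response logging; the element classes, ids, and log format are illustrative assumptions, not the study’s actual code:

```javascript
// Clicking a character attaches it to the cursor; the next click on a
// referent places it there, so no click-hold-and-drag is required.
let carried = null;       // character currently attached to the cursor
let promptEndTime = null; // timestamp of the end of the spoken prompt
const responseLog = [];   // per-trial click and reaction-time records

// Called when the video prompt finishes playing (hypothetical hook).
function onPromptEnded() {
  promptEndTime = performance.now();
}

document.addEventListener('click', (event) => {
  const target = event.target;
  if (!carried && target.classList.contains('character')) {
    carried = target;                    // pick up Fishy or Turtle
    carried.classList.add('on-cursor');
  } else if (carried && target.classList.contains('referent')) {
    target.appendChild(carried);         // place the character on the object
    carried.classList.remove('on-cursor');
    responseLog.push({
      character: carried.id,             // e.g., 'fishy' or 'turtle'
      referent: target.id,               // e.g., 'apple-3' or 'octopus-7'
      reactionTimeMs: performance.now() - promptEndTime,
    });
    carried = null;
  }
});
```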

Our protocol involves a pre-test (T0, verbal-only comprehension of determiners); training with gestures; a test (T1, comprehension of determiners with gestures, and of gestures by themselves); and a post-test (T2, verbal-only comprehension of determiners, to check for short-term improvement). Our gestures are signs adapted from American Sign Language: the sign for “same” supports the idea of coreference, and the sign for “different” supports disjoint reference. These gestures are added to the second sentence of the prompts at T1, synchronously with the determiner phrase (e.g., “...and then Fishy clicks [SAME GESTURE] the same octopus"). Gestures are also presented without a verbal phrase (e.g., “… and then Fishy clicks [DIFFERENT GESTURE]”). We also administer standardized, established tests of language comprehension, nonverbal reasoning, and executive function, in addition to measures of socio-economic status and autism symptomatology. This gives us a holistic picture of our participants’ abilities and environmental influences that may impact their knowledge of determiners and their ability to learn gestures.
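
As a sketch, the four phases could be encoded as a data structure driving the game; field names and values here are hypothetical, not the study’s actual configuration:

```javascript
// Hypothetical encoding of the four protocol phases.
const protocol = [
  { phase: 'T0',       mode: 'verbal-only',    goal: 'pre-test determiner comprehension' },
  { phase: 'training', mode: 'gesture',        goal: 'teach SAME/DIFFERENT gestures' },
  { phase: 'T1',       mode: 'verbal+gesture', goal: 'test determiners with gestures, and gestures alone' },
  { phase: 'T2',       mode: 'verbal-only',    goal: 'post-test for short-term improvement' },
];

// Example T1 prompt: the gesture is synchronized with the determiner phrase.
const examplePrompt = {
  phase: 'T1',
  sentence: '…and then Fishy clicks the same octopus',
  gesture: 'SAME',
  gestureSyncedTo: 'the same', // gesture co-occurs with the determiner phrase
};
```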

We assessed 35 hearing participants from Montana, ages 1.9 to 31 years: 13 had ASD (10 of whom also had a language delay (DLD)) and 22 did not have ASD (of whom 14 were TD, 5 had a language delay (DLD), and 3 had Attention Deficit/Hyperactivity Disorder (ADHD)).

Of these 35, 10 younger participants (mean chronological age (CA) = 4.3, mean mental age (MA) = 4.5; across diagnosis groups) could not complete our computer game tasks due to challenges with attention, low cognitive abilities, or unfamiliarity with computers.

Of the 25 remaining participants, 19 (mean CA = 10.3, mean MA = 12; 6 with ASD and 13 TD) already showed adult-like knowledge of determiners at T0, so the addition of gestures at T1 (which were well understood, at 76-100% correct) could not improve their performance on determiners any further (they were 90-100% correct across T0, T1, and T2).

The 6 remaining participants, 3 with ASD and concurrent DLD and 3 TD (mean CA = 14.8, mean MA = 12.3), present the most interesting case, as their initial knowledge of some determiners was poor at T0, e.g., ~0% correct for comprehension of coreference for “the”. With gestures, at T1, their comprehension of “the” improved to 61%, and this improvement was mostly retained at T2, at 44% correct. This group also did modestly well at learning the gestures themselves, with 66% correct on the “same” gesture and 100% correct on the “different” gesture.

Eye-tracking results have so far been evaluated for 9 of the 25 participants, for the proportion of looks to areas of interest (AOIs): the ‘same’ and ‘different’ referents, and the ‘face’ and ‘gesture/body space’ of the Lady in the video. We find that eye gaze aligns with the mouse-click responses: participants looked at ‘same’ objects when hearing “the same”, and at ‘different’ objects when hearing “an” or “another”. In addition, some participants switched looks between ‘same’ and ‘different’ locations when hearing “the”, indicating uncertainty.
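
As an illustration of this analysis (not the actual pipeline), a minimal sketch of computing the proportion of looks per AOI from exported gaze samples; the AOI layout and sample format are assumptions, not the Gazepoint/Tobii export schema:

```javascript
// Rectangular AOIs in screen pixels (coordinates are placeholders).
const AOIS = {
  same:      { x: 100, y: 400, w: 200, h: 200 }, // 'same' referent
  different: { x: 500, y: 400, w: 200, h: 200 }, // 'different' referent
  face:      { x: 300, y:  50, w: 150, h: 150 }, // Lady's face
  gesture:   { x: 300, y: 200, w: 150, h: 180 }, // gesture/body space
};

const inAoi = (s, a) =>
  s.x >= a.x && s.x < a.x + a.w && s.y >= a.y && s.y < a.y + a.h;

// samples: array of {x, y} gaze points for one trial.
// Returns the fraction of samples falling in each AOI.
function proportionOfLooks(samples) {
  const counts = Object.fromEntries(Object.keys(AOIS).map(k => [k, 0]));
  for (const s of samples) {
    for (const [name, aoi] of Object.entries(AOIS)) {
      if (inAoi(s, aoi)) counts[name] += 1;
    }
  }
  const total = samples.length || 1; // guard against empty trials
  return Object.fromEntries(
    Object.entries(counts).map(([k, n]) => [k, n / total])
  );
}
```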

For 19 of the 25 participants, we explored a generalized linear mixed model (grouped binomial) for the number of ‘same’ responses (measured by mouse clicks on referents) in the comprehension of determiners/gestures, with random effects for subject and family (as some participants were siblings) and other measures as fixed effects. There were significant effects of Determiner/Gesture condition, Time, Gender, Chronological Age, and ADHD diagnosis, but not of ASD or Language diagnoses. Participants’ levels of Nonverbal Reasoning, Vocabulary Comprehension, Pragmatic/Social Reasoning, and Executive Function were significant factors. However, their levels of Comprehension of Grammar and of inferring other people’s points of view (Theory of Mind) were not significant.
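
In standard notation, a grouped-binomial GLMM of this kind can be written as follows; this is a sketch of the model family, not the exact fitted specification:

$$ y_{ij} \sim \mathrm{Binomial}(n_{ij},\, p_{ij}), \qquad \mathrm{logit}(p_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_{\mathrm{subject}(i)} + v_{\mathrm{family}(i)}, $$

where $y_{ij}$ is the number of ‘same’ responses out of $n_{ij}$ trials for participant $i$ in condition $j$; $\mathbf{x}_{ij}$ collects the fixed effects (Determiner/Gesture condition, Time, Gender, Chronological Age, diagnoses, and the standardized test scores); and $u_{\mathrm{subject}} \sim \mathcal{N}(0, \sigma_u^2)$ and $v_{\mathrm{family}} \sim \mathcal{N}(0, \sigma_v^2)$ are random intercepts for subject and family.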

Our results demonstrate that for some children, whether they are TD or have ASD and/or DLD, gestures presented via an interactive computer game may help improve comprehension of grammatical words like “the”. For other children, the computer game may be an inappropriate format, and we are exploring whether an interactive game with physical toys, printed pictures, and a live person (an examiner) gesturing and speaking would be more suitable. Our findings may lead to the design of an inclusive intervention, for both children and their caregivers (whether parents, teachers, or nurses), to improve communication using multiple modalities: both speech and simple gestures that emphasize the main points of the spoken message. Such an intervention can be tailored to families’ needs and delivered in their homes via telehealth and/or video recordings, to support rural and underserved families.
Event Type
Poster Presentation
Time
Tuesday, March 26, 4:45pm - 6:15pm CDT
Location
Salon C
Tracks
Digital Health
Simulation and Education
Hospital Environments
Medical and Drug Delivery Devices
Patient Safety Research and Initiatives