Clinical Trial Details
— Status: Terminated
Administrative data
NCT number | NCT03764761
Other study ID # | Storybook
Secondary ID | R01HD083381-01A1
Status | Terminated
Phase | N/A
First received |
Last updated |
Start date | April 1, 2018
Est. completion date | November 30, 2021
Study information
Verified date | April 2022
Source | Penn State University
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Interventional
Clinical Trial Summary
This study uses mobile eye-tracking technology to characterize patterns of visual
attention to communication supports, as well as to a communication partner, within
real-world interactions for individuals with Down syndrome.
Visual communication supports are central components of what is termed augmentative and
alternative communication (AAC) intervention. AAC refers to the methods and technology
designed to supplement spoken communication for people with limited speech. "Aided" AAC is a
subcategory in which an external aid stores and presents visual symbols for use, such as
photographs, line drawings, or alphabet letters. The most traditional means of structuring
aided AAC displays is to present the language concepts within row-column grids, which contain
individual symbols/concepts placed in each grid square. The investigator's previous work
investigated whether these grid-based presentations could be improved by understanding how
different perceptual features of the displays influence responding (i.e., whether what the
display looks like influences how easily the information on it is found). Individuals with
developmental disabilities and typically developing children were faster and more accurate
at finding information on some displays than on others when tested using a "visual search"
task (i.e., a "finding game": "find the dog").
The previous investigations have evaluated visual attention within a setting that isolated
visual processing of the AAC display as the primary dependent measure. However, communication
requires attention not only to an AAC display, but also to a communication partner.
Therefore, the current study seeks to examine questions of visual attention to both an AAC
display and a communication partner. The investigators will manipulate characteristics of the
display's structure (e.g., the arrangement of symbols) to determine whether more optimal
displays facilitate desirable patterns of visual attention to both the communication display
and the partner. The mobile eye-tracking technology captures attention to both the display
and the communication partner. The investigators anticipate that participants will attend
more to their partner and the shared activity when the AAC display is more optimal, but that
when the display is sub-optimal, they will have to spend more time examining it and less
time in actual communication.
Description:
Visual supports are central components of what is termed augmentative and alternative
communication (AAC) intervention within speech-language pathology. AAC refers to the methods
and technology designed to supplement spoken communication for people with limited speech.
"Aided" AAC is a subcategory in which an external aid stores and presents for use visual
symbols such as photographs, line drawings, or alphabet letters. Aided AAC relies on vision
for access. If users cannot fully attend to, understand, or process the semantic information
on a visual display, they are unlikely to use that display effectively. Regrettably, little
research has focused on AAC display design variables that enhance attention
This research seeks to gain a greater understanding of visual attention to AAC displays and
communication partners in order to further optimize display design. Eye tracking technology
will reveal attention patterns that typically go unrecorded in behavioral research,
particularly in individuals with severe disabilities. Specifically, eye tracking technology
permits recording of the coordinates of where the participant is looking at any given time,
how long they look, and what they ignore. This study seeks to record eye gaze via eye
tracking during a shared book reading activity in which the AAC display is used for
communication with a partner. It will help to determine whether optimal displays, which
speed the location of targets and minimize fixations to distractors, also promote
attention to the partner. Ultimately, this information will contribute to improving the
design of materials for children with disabilities who require AAC.
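As an illustration of the kind of summary such gaze records support, the minimal Python sketch below accumulates looking time within areas of interest (AOIs) such as the AAC display and the communication partner. The data structures, AOI names, coordinates, and fixed sampling rate are illustrative assumptions, not the study's actual Tobii analysis pipeline.

# A minimal sketch: summarizing timestamped gaze samples into dwell time
# per area of interest (AOI). All names and values here are illustrative
# assumptions, not the study's actual analysis software.

from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # horizontal gaze coordinate in the scene video
    y: float  # vertical gaze coordinate in the scene video

@dataclass
class AOI:
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, s: GazeSample) -> bool:
        return self.x_min <= s.x <= self.x_max and self.y_min <= s.y <= self.y_max

def dwell_times(samples, aois, sample_interval=0.02):
    """Accumulate looking time per AOI, assuming a fixed sampling rate
    (e.g., 50 Hz -> 0.02 s per sample). Samples that fall outside every
    AOI are tallied as 'elsewhere' (i.e., what the participant ignored)."""
    totals = {aoi.name: 0.0 for aoi in aois}
    totals["elsewhere"] = 0.0
    for s in samples:
        hit = next((a.name for a in aois if a.contains(s)), "elsewhere")
        totals[hit] += sample_interval
    return totals

# Hypothetical example: two AOIs, the AAC display and the partner's face.
aois = [AOI("display", 0, 300, 640, 480), AOI("partner", 200, 0, 440, 200)]
samples = [GazeSample(i * 0.02, 320, 400) for i in range(100)]  # 2 s on the display
print(dwell_times(samples, aois))  # {'display': 2.0, 'partner': 0.0, 'elsewhere': 0.0}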
The most traditional means of structuring aided AAC displays is to present the language
concepts within row-column grids, which contain individual symbols/concepts
placed in each grid square. The investigator's earlier work examined whether these grid-based
presentations could be improved by understanding how different perceptual features of the
displays influence responding (i.e., whether what the display looks like influences how easily
the information on it is found). Individuals with developmental disabilities and typically
developing children were faster and more accurate at finding information on some displays
than on others when tested using a "visual search" task (i.e., a "finding game": "find the
dog"). The next study then examined the reason behind this phenomenon by using eye tracking
technology to examine how visual search itself was influenced by the different displays.
Results indicated that, for individuals both with and without disabilities, the non-optimal
display elicited significantly more fixations (looks) to non-relevant distractors than the
optimal display did. Given that individuals with disabilities, including Down syndrome, are
readily distracted, the use of a display whose very structure promotes looks to distractors
seems to be a potentially critical design mistake.
The current study examines the effects of adding a communication partner on the allocation of
visual attention to optimally and non-optimally designed displays. This study is a
translational step from basic research toward more clinically relevant research. Of interest
are two questions: (1) how is a social partner integrated into the attentional field of the
individual using AAC, in general? and (2) what is the effect of the introduction of the
partner/social communication task on the patterns of attention across different display
conditions?
Participants from the PI's earlier research will be contacted to see whether they would like
to return for this study. Those who express interest in learning more will be sent the
phone/email contact information, the recruitment flyer, and, if they request it, the consent
form. If, after reviewing these materials, the participants are still interested, scheduling
will begin.
First, participants will be assessed with the Peabody Picture Vocabulary Test, Fourth
Edition (PPVT-4), a measure of receptive vocabulary skills. It is the gold standard for
estimating vocabulary size in both clinical speech-language assessment and research. In this
task, the child is shown four pictures at a time and asked to choose one of them on the
basis of a spoken word. The test continues until the child makes more than 8 errors in a set
of 12, and generally takes about 20 minutes to complete.
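To make the stopping rule concrete, the short Python sketch below encodes the discontinue criterion exactly as stated here (more than 8 errors within a set of 12 items); the function name and response encoding are hypothetical and do not reproduce the published PPVT-4 scoring protocol.

# A minimal sketch of the discontinue rule described above: testing stops
# once the child makes more than 8 errors within a 12-item set. The set
# boundaries and encoding are illustrative assumptions.

def find_discontinue_set(responses, set_size=12, max_errors=8):
    """Return the index of the first set with more than `max_errors`
    errors, or None if testing ran to completion. `responses` is a
    sequence of booleans, True for a correct answer."""
    for start in range(0, len(responses), set_size):
        item_set = responses[start : start + set_size]
        errors = sum(1 for correct in item_set if not correct)
        if errors > max_errors:
            return start // set_size
    return None

# Hypothetical example: the third set (index 2) contains 9 errors, so
# testing would stop there.
responses = [True] * 12 + [True, False] * 6 + [False] * 9 + [True] * 3
assert find_discontinue_set(responses) == 2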
After the assessment portion is completed, participants will return for up to five additional
sessions to undergo the storybook reading portion of the study. Each visit will involve
reading two separate books with a trained research assistant and should last about 30
minutes. Before reading the books, the research assistant will conduct a preference
assessment during which the participant will be given a choice of 4-6 possible sets of books
that he/she will read over the length of the study. The participant will be provided with
pictures of the choices and can indicate their preference by speaking, pointing, or
otherwise selecting their choice.
While participants are engaged in book reading, they will also wear Tobii Pro eye-tracking
glasses, which have an eye-tracking device embedded in the frames. The glasses are
ultra-lightweight, with a highly unobtrusive head unit. Mobile eye-tracking glasses allow
the gaze path to be recorded directly within the frames (similar to Google Glass). These mobile
technologies enable visualization and analysis of allocation of visual attention during live
social interactions, as the recording apparatus moves simultaneously with the movement of the
participant's head and records the changing field of vision. The technology uses extremely
low-level infrared light that is bounced off the participant's pupil. The amount of
infrared light is smaller than that emitted by a typical television remote and is far below
federal safety limits. The glasses include a non-invasive strap that will be tightened
at the back to ensure the glasses stay in place when they are worn.
Prior to placing the glasses on the participant, the research assistant will follow a
protocol to allow the participant to become familiar with the eye-tracking glasses. This will
involve watching a short video that shows another person wearing the glasses. Then, the
participant will be invited to put on a pair of sunglasses with a strap similar to that of
the eye-tracking goggles. The research assistant will tighten the strap and allow the child
to wear the sunglasses for several minutes to become accustomed to it. Next, the participant
will be fitted with the eye-tracking glasses. If the child wears prescription eyeglasses,
the lenses in the eye-tracking glasses will be changed to match their prescription, which
will be obtained from the parents on the demographic form (there are multiple lenses that
can be swapped in or out of the glasses themselves).
Once the child is fitted with the eye-tracking glasses, the book reading will begin. The
research assistant will read a book to the participant, who will be positioned in front of
an AAC display that includes symbols/messages for commenting on the book. Each participant
will undergo several sessions of book reading, interacting with the partner throughout each
exchange by using the display. The participant may access the AAC display with a mouse or by
directly touching the symbols. The research assistant will follow a script that includes
different types of questions directed toward the participant.
In between the two books read each session, the participant will be offered a snack approved
by the parents/guardians prior to participation.
The sessions will be video recorded, allowing post hoc review to verify the trained research
assistant's fidelity to the script.