Gestures Clinical Trial
Official title:
Role of Semantic Processing and Visuospatial Skills in Production of Iconic Gestures
This study aims to identify the cognitive skills that underlie the production of iconic gestures in individuals with language difficulties. Specifically, what role do nonverbal semantic processing and visuospatial skills play in the use of iconic gestures?
Individuals with language difficulties (e.g., with vocabulary or sentence formation) can find it hard to communicate and express their thoughts. Speech-language therapists sometimes encourage these individuals to use hand gestures. By gesturing, they may find it easier to express their thoughts, and their communication partners may find it easier to understand them. The researchers aim to answer the question: which skills are needed to produce highly comprehensible gestures? The answer can inform future language therapy for individuals with language difficulties.

Task 1. Participants see 30 items from the Boston Naming Task one by one. The researcher explains that she cannot see the screen. When an image appears, the examiner asks the participant to convey the item without speaking, using only hand gestures. These gestures are recorded on video. The researcher codes which gesture strategy the participant used for each gesture produced (i.e., sketch, shape, object, or deictic). In addition, 200 typically developing adults rate the intelligibility of each gesture: the recordings are presented one by one in random order, and the raters write down which concept the person on the video is depicting. By summing the correct responses per participant, each participant receives a skill score. This skill score is then related to that participant's results on the cognitive tests.

Task 2. Participants watch a cartoon and retell the story to the researcher, who "has never seen the cartoon and does not know what is happening". Participants receive no instructions on the use of gestures. This storytelling task is recorded on video. The researchers transcribe the recording and note which gestures are produced. The videos are used to calculate two variables: the ratio of the number of gestures to the number of words, and the ratio of speech-replacing gestures to all gestures (both speech-replacing and speech-accompanying). These variables are related to the results of the cognitive tests.

Task 3. The researcher holds a one-on-one interview (10 minutes) with the participant. The conversation partly follows a semi-structured script: several questions are drawn up in advance, and each scripted question or comment contains two content words that lend themselves to a gesture. During half of the scripted questions the researcher uses no gestures; during the other half, the researcher produces the two gestures. As in any semi-structured interview, the researcher keeps the conversation natural. The conversations are recorded on video. The researcher transcribes the interviews and codes whether the participant adopts the researcher's gestures: each time a scripted gesture is presented, the examiner notes whether the participant responds by using the spoken word and/or the gesture. By including the cognitive test results, it can be analyzed whether people with stronger semantic processing and visuospatial skills adopt gestures from others more often than people with weaker semantic processing and visuospatial skills.
Status | Clinical Trial | Phase
---|---|---
Completed | NCT03698539 - How Stuttering and Gestures Influence the Intelligibility of Individuals With Down Syndrome |