
Clinical Trial Details — Status: Recruiting

Administrative data

NCT number NCT05854719
Other study ID # 47/2022
Secondary ID
Status Recruiting
Phase N/A
First received
Last updated
Start date May 12, 2023
Est. completion date December 31, 2024

Study information

Verified date May 2023
Source University of Oulu
Contact Kerttu Huttunen, PhD
Phone +358504776610
Email kerttu.huttunen@oulu.fi
Is FDA regulated No
Health authority
Study type Interventional

Clinical Trial Summary

The goal of this clinical trial is to find out the role of background factors and gaze use in children's speechreading performance. The main questions it aims to answer are:

- Which background factors and eye gaze patterns are associated with the best speechreading results in hearing children and in those with hearing impairment/loss?
- Are children's gaze patterns and facial expression discrimination associated with interpretation of the emotional content of verbal messages in speechreading?
- What is the efficacy of an intervention based on the use of a speechreading application to be developed?

Participants will be
- tested with linguistic and cognitive tests and tasks,
- tested with a speechreading test and tasks, with or without simultaneous eye-tracking;
- about half of the participants with hearing impairment/loss will train speechreading with an application.

Researchers will compare the different age groups, and the results of hearing children to those of children with impaired hearing, to see if there are differences.


Description:

1. Aim

The objectives of this project are to 1) gain information about the speechreading abilities, in Finnish, of children without and with hearing impairment, 2) obtain information about the association between gaze behaviour and speechreading accuracy, 3) develop a speechreading test for Finnish-speaking children, 4) find out whether emotion discrimination (discrimination of facial expressions) helps children with hearing impairment in speechreading, 5) find out whether speechreading can be trained effectively with a smart device application, and 6) explore and further train artificial intelligence algorithms with the help of data on children's use of gaze and speechreading skills (this part does not belong to the clinical trial part of the study).

2. Study design

Controlled clinical trial. Study arms: 1) hearing children serving as controls, 2) children with impaired hearing participating in speechreading training, and 3) children with impaired hearing serving as controls.

Altogether 140 children (half of them with impaired hearing) will be tested remotely (via the Zoom application) and 100 children on site (as eye-tracking is used in data collection from them). Caregivers will be requested to fill out a background form on REDCap, an online survey platform with high data protection capability. Caregivers will give information about their child's hearing ability/level based on the most recent audiogram, overall health (e.g., possible medical diagnoses, vision) and development. The caregivers' profession and educational level will also be surveyed.

Child outcomes: Children will be tested with linguistic, cognitive and social cognitive tests and tasks, and 100 children also with eye-tracking.

Of the linguistic tests and tasks, a validated Finnish version (Laine et al., 1997) of the Boston Naming Test (Kaplan et al., 1983) is used to test the child's expressive vocabulary. In the nonword repetition subtest of the NEPSY Test (Korkman, 1998), the child is asked to repeat 16 nonwords presented as an audio recording. The phonological processing subtest of the NEPSY II Test (Kemp & Korkman, 2008) is composed of two phonological processing tasks designed to assess phonemic awareness; it explores the identification of words from word segments. Children aged 7 to 8 years are asked to repeat a word and then to show, from pictures, the alternative in question after the test administrator has first pronounced only a part of the word as a cue. Children aged 9 to 11 years are asked to create a new word by omitting a part of a compound word or a phoneme, with the test administrator first pronouncing the part to be omitted. The children's reading skills are assessed with three subtests of the ALLU Test (Lindeman, 1998): Technical reading ability TL2B, TL3B and TL4B. The child is asked to select the right alternative out of four line drawings to match it with single words or sentences, or to judge whether the meaning of a written sentence is true or false.

Children's speechreading skills will be assessed with the Children's speechreading test (Huttunen & Saalasti), which contains single words and short sentences as well as a task in which facial expressions and sentence-level speechreading need to be combined. Firstly, a novel computerized Children's speechreading test (Huttunen & Saalasti) will be constructed for children acquiring Finnish.
In addition to the piloting results of hearing children aged 8 to 11 years, information about the receptive vocabulary of 8- to 11-year-old children with hearing impairment and about the visual analogues of spoken phonemes (visemes) in Finnish will be used as the central basis for choosing the items for the multiple-choice word-level part of the test. Two- to three-word sentences will be included in the sentence-level part of the test. When giving their responses in the Children's speechreading test, after watching each video clip, children need to discriminate the word or sentence expressed by choosing from alternatives given as drawings illustrating various persons, objects, or events. For validating the novel speechreading test, that is, to obtain age norms for it and to explore its psychometric properties, 120 children with normal hearing and typical development (30 children per age group) and 120 children with hearing impairment (again, 30 children per age group) will be tested with it. A reading level sufficient for selecting the alternatives for the meaning of the short sentences in the sentence-level part of the speechreading test is required from the participants.

In addition to the Children's speechreading test, an emotion + speechreading task has been devised to see whether children can make use of additional information from facial expressions to discriminate the sentences they speechread. For that purpose, a speaker expresses some of the classic basic emotions (happiness, sadness, anger) in 10 sets of four sentences constructed for this purpose. Ten video recordings are presented to the children without voice, always with four written alternative choices.

Children will also be tested with cognitive and social cognitive tests and tasks: reaction time (Reaction time task), first-order Theory of Mind (Sally Anne Test), second-order Theory of Mind skills (modified Ice Cream Van Task), auditory short-term memory (ITPA auditory serial memory subtest), visual short-term memory (ITPA visual serial memory subtest), visuo-spatial short-term memory (Corsi Block Test), and emotion discrimination (discrimination of facial expressions from photographs, video clips and the FEFA 2 test).

Of the cognitive and social cognitive tests and tasks, the Reaction time task follows the classic principles of a two-choice reaction time test: two numbers appear randomly on a computer screen, within 1 to 3 seconds, either on the left or on the right side of the screen. The child's task is to strike the left or right arrow key on the keyboard as soon as a number has appeared on the screen. The task takes less than two minutes to perform, and after 40 numbers have been shown the software produces the results (mean reaction time in milliseconds, SD, min, max and the number of correct answers as a relative percentage score).
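As a rough, non-authoritative sketch of how such two-choice reaction-time results could be summarized, the Python snippet below computes the statistics listed above from 40 simulated trials. The trial record layout, field names and simulated values are illustrative assumptions only; they do not reproduce the actual task software.

```python
import random
import statistics

# Hedged sketch: summarizing a two-choice reaction-time task
# (40 trials; a number appears on the left or right and the child
# presses the matching arrow key). The field names below are
# assumptions for illustration, not the study software's format.

def summarize(trials):
    """trials: list of dicts with keys 'side', 'key', 'rt_ms'."""
    rts = [t["rt_ms"] for t in trials]
    n_correct = sum(1 for t in trials if t["key"] == t["side"])
    return {
        "mean_rt_ms": round(statistics.mean(rts), 1),
        "sd_rt_ms": round(statistics.stdev(rts), 1),
        "min_rt_ms": round(min(rts), 1),
        "max_rt_ms": round(max(rts), 1),
        "percent_correct": 100.0 * n_correct / len(trials),
    }

# Simulated example: 40 trials, roughly 95% correct responses.
trials = []
for _ in range(40):
    side = random.choice(["left", "right"])
    correct = random.random() < 0.95
    key = side if correct else ("left" if side == "right" else "right")
    trials.append({"side": side, "key": key, "rt_ms": random.gauss(450, 80)})

print(summarize(trials))
```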
As the first-order Theory of Mind task, the classic Sally Anne Test (Baron-Cohen, Leslie & Frith, 1985) will be used, and for second-order Theory of Mind skills, a modified version of the Ice Cream Van Task (Perner & Wimmer, 1985; Doherty, 2009). In the second-order task, four drawings constructed for the purpose are used to help the child understand and remember the story told by the task administrator.

Short-term memory skills are assessed using the auditory and visual short-term sequential memory subtests of the validated Finnish version (Kuusinen & Blåfield, 1974) of the Illinois Test of Psycholinguistic Abilities (ITPA) (Kirk et al., 1968). In the auditory short-term subtest of the ITPA, the child is asked to orally repeat the digit series given. In the visual short-term subtest, the test administrator first shows a symbol series, and the child retains it in short-term memory in order to arrange the right symbols in the right order and so reproduce the series.

Visuo-spatial short-term memory is assessed using the Corsi Block Test (Corsi, 1972; Kessels, van Zandvoort, Postma, Kappelle & de Haan, 2000) included in PsyToolkit (Professor Gijsbert Stoet). In the online test, nine blocks are shown, arranged in certain fixed positions on a screen in front of the participant. The software flashes a sequence of blocks, for example a sequence of three different blocks, one after another. As a response, using a mouse, the participant needs to tap the blocks on the screen in the same order the online test showed them. The test takes less than 30 seconds to perform. The Corsi span is defined as the longest sequence a participant can correctly repeat (see the first sketch at the end of this section).

Discrimination of facial expressions from photographs and video clips is assessed with self-constructed tasks (Huttunen, 2015; first described in Huttunen, Kosonen, Laakso & Waaramaa, 2018). In the first computerized task, a set of 12 photographs depicting four different emotions (three basic emotions and a neutral expression) is shown, with four verbal labels given as written response choices. To test facial emotion recognition skills using dynamic input, the same set of emotions expressed by the same persons is presented as video clips of two seconds each, again with four verbal labels as response choices. The computerized "Faces" submodule of the Finnish version of the FEFA 2 test (The Frankfurt Test and Training of Facial Affect Recognition; Bölte et al., 2013; Bölte & Poustka, 2003) is used as a standardized task to assess children's facial emotion recognition skills. This test consists of 50 photographs depicting seven different emotions, with their labels as response choices (joy, anger, sadness, fear, surprise, disgust and neutral). The child selects the alternative that matches the facial expression (emotion) presented. The FEFA 2 software summarizes the results (total score, confusion matrices, response time).

Children's gaze use will be explored by eye-tracking (EyeLink 1000+ device) during the facial expression and speechreading tests and tasks. Eye-tracking is used for 100 children (50 with normal hearing and 50 with hearing impairment/loss). Their gaze use is explored during the Children's speechreading test, during the tasks in which facial expressions need to be discriminated from photographs and video clips, and during the emotions + speechreading task. A chin rest is used to stabilize the child's position and to secure successful data collection. Fixations, dwell time (the time the gaze stays on a certain place on the screen) and gaze path are analysed to find out which areas of interest on the face attract the children's gaze the most. The eye-tracking data are used to explore what kind of gaze use and gaze patterns are optimal for children's speechreading and emotion discrimination performance (see the second sketch below).
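The first sketch below shows, under assumptions, how the Corsi span described above could be derived: scoring reduces to taking the longest sequence length that was reproduced correctly. The (length, correct) trial format is an illustrative assumption, not PsyToolkit's actual data format.

```python
# Hedged sketch: the Corsi span is the longest block sequence the
# participant reproduced correctly. The (sequence_length, correct)
# trial tuples are an assumed format for illustration only.

def corsi_span(trials):
    """trials: iterable of (sequence_length, reproduced_correctly)."""
    return max((length for length, correct in trials if correct), default=0)

# Example: sequences of length 2-5 succeed, length 6 fails -> span is 5.
trials = [(2, True), (3, True), (4, True), (5, True), (6, False)]
print(corsi_span(trials))  # 5
```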
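The second sketch illustrates the kind of area-of-interest (AOI) aggregation described above: each fixation is assigned to a face region by a simple rectangle test, and per-region fixation counts and dwell times are accumulated. The AOI coordinates, the fixation format and the region set are illustrative assumptions; they are not the study's actual AOI definitions or the EyeLink export format.

```python
# Hedged sketch: aggregate fixation counts and dwell time per area
# of interest (AOI) on the speaker's face. AOIs are modelled as
# axis-aligned rectangles with made-up placeholder coordinates.

AOIS = {
    "left_eye":  (300, 200, 420, 260),  # (x_min, y_min, x_max, y_max)
    "right_eye": (460, 200, 580, 260),
    "nose":      (400, 260, 480, 340),
    "mouth":     (380, 340, 500, 410),
}

def aoi_of(x, y):
    """Return the name of the AOI containing point (x, y), or 'other'."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def aggregate(fixations):
    """fixations: list of (x, y, duration_ms) tuples. Returns per-AOI
    fixation counts and dwell time (summed fixation durations)."""
    stats = {name: {"fixations": 0, "dwell_ms": 0.0}
             for name in list(AOIS) + ["other"]}
    for x, y, dur in fixations:
        region = aoi_of(x, y)
        stats[region]["fixations"] += 1
        stats[region]["dwell_ms"] += dur
    return stats

# Example: two fixations land on the mouth, one outside all AOIs.
print(aggregate([(430, 370, 310.0), (450, 390, 250.0), (100, 100, 120.0)]))
```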
3. Sample size

Hearing children (n = 120) aged 8 to 11 years (30 children per age group) and children with hearing impairment (n = 120) aged 8 to 11 years (30 children per age group).

4. Blinding and randomization

None.

5. Follow-up protocols

Altogether 100 children with impaired hearing, out of the aimed total of 120, will be tested twice:

1. After the initial testing, 50 children with impaired hearing will be asked to train speechreading with a smart device application at home. After two months, their emotion discrimination and speechreading skills, as well as their gaze use during emotion discrimination and speechreading tasks, will be examined again (on-site testing). Their use of the speechreading application will be explored by transferring the user data (how much they have used the application and how their speechreading skills have developed, as indicated by the scoring system built into the software) over a cable or Bluetooth connection.

2. A group of children with impaired hearing (n = 50) will serve as controls; they will be tested remotely (via the Zoom application) two months after the initial testing, with no intervention between the initial and final assessments.


Recruitment information / eligibility

Status Recruiting
Enrollment 240
Est. completion date December 31, 2024
Est. primary completion date December 31, 2024
Accepts healthy volunteers Accepts Healthy Volunteers
Gender All
Age group 8 Years to 11 Years
Eligibility Inclusion Criteria:

Normally hearing children:
- age 8-11 years
- born full-term (in gestational week 37 or later)
- Finnish-speaking (Finnish is the language the child's family uses at home, and the child goes to a school where Finnish is the language of instruction)
- normal hearing and vision
- typically developing, following the mainstream education curriculum at school
- for those tested remotely: a computer available at home for remote testing

Children with hearing impairment/loss:
- age 8-11 years
- diagnosed bilateral hearing impairment
- born full-term (in gestational week 37 or later)
- Finnish-speaking (Finnish is the language the child's family uses at home, and the child goes to a school where Finnish is the language of instruction)
- normal vision
- (mainly) typically developing
- for those tested remotely: a computer available at home for remote testing

Exclusion Criteria:

Normally hearing children: psychiatric and neurodevelopmental disorders, including ADHD (Attention Deficit Hyperactivity Disorder)

Children with hearing impairment/loss: psychiatric and neurodevelopmental disorders (excluding ADHD, if medication helps the child to concentrate well during testing)

Study Design


Intervention

Behavioral:
Speechreading training with an application
The Optic Track application, developed for this intervention, will be used in speechreading training. The target is that, during the eight intervention weeks, it will be used for 15 minutes at least three times a week (i.e., at least 6 hours of training in total: 8 weeks × 3 sessions × 15 minutes = 360 minutes).

Locations

Country Name City State
Finland University of Oulu Oulu

Sponsors (3)

Lead Sponsor: University of Oulu
Collaborators: Tampere University, University of Helsinki

Country where clinical trial is conducted

Finland

Outcome

Type Measure Description Time frame Safety issue
Other Level of facial expression discrimination skills Ability to discriminate emotions from photographs and video clips presenting facial expressions. Score obtained (percent correct; min 0, max 100; a higher score indicates a better result) Two months
Primary Level of speechreading skill Score obtained in the Children's speechreading test (percent correct; min 0, max 100; a higher score indicates a better result) Two months
Primary Level of performance in the emotions (facial expressions) + sentence-level speechreading task Score obtained (percent correct; min 0, max 100; a higher score indicates a better result) Two months
Secondary Eye gaze use The child's eye gaze use during speechreading is defined as the areas on the speaker's face that attract the most eye fixations (stops, i.e., moments when the eyes are relatively stationary) and their duration, and as the gaze path, i.e., which way the eyes move (from where to where) on the speaker's face while watching it. The areas of interest to be calculated are the left eye, right eye, nose or nose/chin, mouth, and other locations. The number of eye fixations and their durations are calculated. The higher the number of fixations and the longer the dwell time (fixation duration), the higher the interest in a certain part of the face (area of interest). Two months
See also
  Status Clinical Trial Phase
Withdrawn NCT04055987 - Use of Electropalatography to Improve Speech Sound Production N/A
Completed NCT03687801 - Clinically Implementing Online Hearing Support Within Hearing Organization N/A
Enrolling by invitation NCT06051968 - Effects of an Online Hearing Support for First-time Hearing Aid Users N/A
Recruiting NCT05083221 - Effect of an Aural Rehabilitation Program in Hearing-impaired Older Adults N/A
Completed NCT04794179 - CROS and Quality of Life of Elderly Cochlear Implant Recipients and Their Care Givers N/A
Recruiting NCT05003674 - A Feasibility Study Evaluating the Performance of Focused Multipolar Stimulation in Adult Cochlear Implant Recipients N/A
Completed NCT01400178 - Cochlear Implants in Post-lingually Children: Results After 10 Years N/A
Completed NCT00738244 - Effectiveness of Hearing-aid Based Wind-noise Algorithm N/A
Completed NCT03716544 - Efficacy of Amplification With Hearing Aids for Tinnitus Relief N/A
Recruiting NCT02779907 - Prevalence and Associations of Paediatric Chronic Suppurative Otitis Media and Hearing Impairment in Rural Malawi N/A
Completed NCT02832128 - Evaluating Possible Improvement in Speech and Hearing Tests After 28 Days of Dosing of the Study Drug AUT00063 Compared to Placebo (QuicKfire) Phase 2
Completed NCT01816087 - Performances of a Brief Assessment Tool for the Early Diagnosis of Geriatric Syndromes by Primary Care Physicians N/A
Completed NCT00582946 - Wide-Bandwidth Open Canal Hearing Aid For Better Multitalker Speech Understanding Phase 1
Recruiting NCT05847426 - Improving Early Intervention in Hearing Impaired Children Using Functional Near-Infrared Spectroscopy (fNIRS) N/A
Completed NCT04469946 - Hearing Aid Noise Reduction in Pediatric Users Pilot Study (Oticon Pilot Study) N/A
Completed NCT03575390 - The Beneficial Effects of Pomegranate on Hearing of Patients Without Hemodialysis N/A
Withdrawn NCT03966144 - RoboHear™ Device: Advanced Haptic Technology That Allows the Deaf to Understand Speech N/A
Completed NCT02042404 - The EarLens System Long Term Safety and Efficacy Definitive Multi-Center Study N/A
Recruiting NCT05805384 - Evaluating a Noise Reduction Algorithm With Cochlear Implant Users N/A
Active, not recruiting NCT05815667 - Effects of the Swedish Internet-based Individualised Active Communication Education (I-ACE) in FTU N/A