View clinical trials related to Speech.
The overall goal of this research is to test a new model of speech motor learning, whose central hypothesis is that learning and retention are associated with plasticity not only in motor areas of the brain but also in auditory and somatosensory regions. The strategy for the proposed research is to identify individual brain areas that contribute causally to retention by disrupting their activity with transcranial magnetic stimulation (TMS). Investigators will also use functional magnetic resonance imaging (fMRI) to identify circuit-level activity that predicts either learning or retention of new movements, and hence to test the specific contributions of candidate sensory and motor zones. In other studies, investigators will record sensory and motor evoked potentials over the course of learning to determine the temporal order in which individual sensory and cortical motor regions contribute. The goal here is to identify the brain areas in which learning-related plasticity occurs first and which among these areas predict subsequent learning.
The goal of this study is to investigate the preferential responses of speech neural systems in infants. The main question it aims to answer is whether the capacity for oscillatory synchronization is associated with children's language level (i.e., vocabulary). Participants will be presented with synthetically modulated stimuli at three frequency scales: 4 Hz, 5 Hz, and 30 Hz.
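The summary does not specify how the modulated stimuli are synthesized; a minimal sketch of one common approach, assuming sinusoidal amplitude modulation of a noise carrier at the three named rates, might look like:

```python
import numpy as np

def am_stimulus(mod_hz, dur_s=2.0, fs=16000, depth=1.0):
    """Sinusoidally amplitude-modulate a white-noise carrier at mod_hz.

    dur_s, fs, and depth are illustrative defaults, not study parameters.
    """
    t = np.arange(int(dur_s * fs)) / fs
    carrier = np.random.default_rng(0).standard_normal(t.size)
    # Modulator: a sinusoid at the target rate, shifted and scaled to [0, 1].
    envelope = (1 + depth * np.sin(2 * np.pi * mod_hz * t)) / 2
    return envelope * carrier

# The three modulation rates named in the study description.
stimuli = {hz: am_stimulus(hz) for hz in (4, 5, 30)}
```

This is only a sketch: the actual study likely uses calibrated, possibly speech-derived carriers, but the modulation-rate manipulation itself reduces to varying `mod_hz` as above.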
The purpose of this research study is to understand how the brain processes and controls speech in healthy people. The investigators are doing this research because it will help identify the mechanisms that allow people to perceive their own speech errors and to learn new speech sounds, which may be applied to people who have communication disorders. 15 participants will be enrolled into this part of the study and can expect to be on study for 4 visits of 2-4 hours each.
The purpose of this research study is to understand how the brain processes and controls speech in healthy people. The investigators are doing this research because it will help identify the mechanisms that allow people to perceive their own speech errors and to learn new speech sounds. 117 participants will be enrolled into this part of the study and can expect to be on study between 1 day (Experiment 1) and 4 weeks (Experiment 2).
The goal of this prospective, single-arm clinical trial is to evaluate the speech performance of children with anterior dental crossbite before and after correction, and to assess the impact of early interceptive orthodontic treatment in the mixed dentition stage on the children's quality of life. Fifty children of both sexes, aged 8 to 10 years, were enrolled and evaluated according to the study's inclusion and exclusion criteria. Before beginning interceptive orthodontic treatment, each child underwent full mouth treatment; the anterior crossbite was then treated using a removable appliance with an anterior expansion screw and posterior bite planes. All children underwent the protocol of speech evaluation before appliance insertion and after complete correction of the anterior crossbite. The Brazilian version of the Child Perceptions Questionnaire (CPQ 8-10) was also used to gauge how the anterior crossbite affected the children's oral health-related quality of life.
This study meets the NIH definition of a clinical trial, but is not a treatment study. Instead, the goal of this study is to investigate how hearing ourselves speak affects the planning and execution of speech movements. The study investigates this topic in both typical speakers and in patients with Deep Brain Stimulation (DBS) implants. The main questions it aims to answer are: - Does the way we hear our own speech while talking affect future speech movements? - Can the speech of DBS patients reveal which brain areas are involved in adjusting speech movements? Participants will read words, sentences, or series of random syllables from a computer monitor while their speech is being recorded. For some participants, an electrode cap is also used to record brain activity during these tasks. And for DBS patients, the tasks will be performed with the stimulator ON and with the stimulator OFF.
Sensorineural hearing loss (SNHL) is among the most prevalent chronic conditions in aging and has a profoundly negative effect on speech comprehension, leading to increased social isolation, reduced quality of life, and increased risk for the development of dementia in older adulthood. Typical audiological tests and interventions, which focus on measuring and restoring audibility, do not explain the full range of cognitive difficulties that adults with hearing loss experience in speech comprehension. For example, adults with SNHL have to work disproportionally harder to decode acoustically degraded speech. That additional effort is thought to diminish shared executive and attentional resources for higher-level language processes, impacting subsequent comprehension and memory, even when speech is completely intelligible. This phenomenon has been referred to as listening effort (LE). There is a growing understanding that these cognitive factors are a critical and often "hidden effect" of hearing loss. At the same time, the effects of LE on the neural mechanisms of language processing and memory in SNHL are currently not well understood. In order to develop evidence-based assessments and interventions to improve comprehension and memory in SNHL, it is critical that we elucidate the cognitive and neural mechanisms of LE and its consequences for speech comprehension. In this project, we adopt a multi-method approach, combining methods from clinical audiology, psycholinguistics, and cognitive neuroscience to address this gap of knowledge. 
Specifically, we adopt a novel and innovative method of co-registering pupillometry (a reliable physiological measure of LE) with language-related event-related brain potential (ERP) measures during real-time speech processing. This allows us to characterize the effects of clear speech (a listener-oriented speaking style that speakers spontaneously adopt to improve intelligibility when they are aware of a perception difficulty on the part of the listener) on high-level language processes (e.g., semantic retrieval, syntactic integration) and on subsequent speech memory in older adults with SNHL. This work addresses a time-sensitive gap in the literature regarding the identification of objective and reliable markers of the specific neurocognitive processes impacted by speech clarity and LE in age-related SNHL.
The purpose of this study is to investigate the effect of nonlinear signal processing algorithms on speech perception.
Speech and communication disorders often result in aberrant control of the timing of speech production, such as stopping at points where a speaker should not. During normal speech, the ability to stop when necessary is important for maintaining turn-taking in a smooth conversation. Existing studies have largely investigated the neural circuits that support the preparation and generation of speech sounds. It is believed that activity in the prefrontal and premotor cortical areas facilitates high-level speech control, while activity in the ventral part of the sensorimotor cortex controls articulator (e.g., lip, jaw, tongue) movements. However, little is known about the neural mechanism controlling a sudden, voluntary stop of speech. The traditional view attributes this to a disengagement of motor signals, whereas recent evidence suggests there may be an inhibitory control mechanism. This gap in knowledge limits our understanding of disorders like stuttering and aphasia, in which deficits in speech timing control are among the common symptoms. The overall goal of this study is to determine how the brain controls the stopping of ongoing speech production, to deepen our understanding of speech and communication in normal and impaired conditions.