View clinical trials related to Speech.
The purpose of this research study is to understand how the brain processes and controls speech in healthy people. The investigators are doing this research because it will help identify the mechanisms that allow people to perceive their own speech errors and to learn new speech sounds, which may be applied to people who have communication disorders. 15 participants will be enrolled in this part of the study and can expect to be on study for 4 visits of 2-4 hours each.
Patient-reported outcome measure (PROM) questionnaires appear to be an effective tool for gaining greater knowledge of the physical and emotional state of patients. Despite this, few studies have used patient-reported outcomes in Head & Neck (H&N) cancer patients during and after treatment. This study evaluates a novel topical mucosal composition (Saliactive®) alongside the use of these questionnaires.
The goal of this study is to investigate the role of social factors in speech learning, including production and perception, in infants ranging in age from ~7-18 months. Infants have either typical hearing or sensorineural hearing loss. The main prediction of the study is that social reinforcement will engender improvements in vocal learning above and beyond gains in hearing in infants with hearing loss. As part of this study: - The parent and infant engage in a free play session in the playroom while the investigator cues the parent to say simple nonsense words; - Infants hear playback of the same words during a second phase.
The basic mechanisms underlying comprehension of spoken language are still largely unknown. Over the past decade, the study team has gained new insights into how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. However, the next set of questions awaits, pertaining to the sequencing of those auditory elements and how they are integrated with other features, such as the amplitude envelope of speech. Further investigation of the cortical representation of speech sounds can likely shed light on these fundamental questions. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but little is known about how these sounds are linked together into the perceptual experience of words and continuous speech. The overall goal is to determine how the brain extracts linguistic elements from a complex acoustic speech signal, toward better understanding and remediating human language disorders.
This study examines a cognitive therapy for autistic children, Thinking in Speech. Thinking in Speech helps children with autism independently cope with everyday events that cause stress by developing their ability to use "inner speech".
The overall goal of this study is to reveal the fundamental neural mechanisms that underlie comprehension across human spoken languages. An understanding of how speech is coded in the brain has significant implications for the development of new diagnostic and rehabilitative strategies for language disorders (e.g., aphasia, dyslexia, autism). The basic mechanisms underlying comprehension of spoken language are unknown. Researchers are only beginning to understand how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. Traditional theories have posited a 'universal' phonetic inventory shared by all humans, but this has been challenged by newer theories proposing that each language has its own unique and specialized code. An investigation of the cortical representation of speech sounds across languages can likely shed light on this fundamental question. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but most of this work has been carried out entirely in English. Recording neural activity directly from the cortical surface in individuals with different language experience is a promising approach, since it can provide both high spatial and temporal resolution. This study will examine the mechanisms of phonetic encoding by utilizing neurophysiological recordings obtained during neurosurgical procedures. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be utilized to unravel both local and population encoding of speech sounds in the lateral temporal cortex. This study will also examine the neural encoding of speech in patients who are monolingual and bilingual in Mandarin, Spanish, and English, the most commonly spoken languages worldwide, which feature important contrastive differences in pitch, formants, and temporal envelope.
A cross-linguistic approach is critical for a true understanding of language, and it also advances diversity and inclusion in the neuroscience of language.
The general objective of this study is to build a proof-of-concept, speech-based digital biomarker for identifying the presence and tracking the severity of psychiatric disease.
The proposed studies focus on memory for speech movements and sounds and its relation to learning. Continuous theta-burst transcranial magnetic stimulation (cTBS) will be used to suppress activity in a region of prefrontal cortex associated with somatic and auditory working memory (Brodmann area 46v) to test its involvement in learning.
These studies test the hypothesis that the repeated pairing of somatosensory inputs with speech sounds, such as occurs during speech motor learning, results in changes to the perceptual classification of speech sounds.