Eddie Chang, M.D. (@changlabUCSF) is a neurosurgeon and professor of neurological surgery at the University of California, San Francisco (UCSF) and the co-director of the Center for Neural Engineering & Prostheses. His research focuses on the brain mechanisms for speech, movement, and human emotion.
In this episode, Andrew Huberman & Eddie Chang discuss the brain mechanisms underlying speech, language learning, and comprehension; communicating human emotion with words and hand gestures; bilingualism; and language disorders such as stuttering. They also cover the use of state-of-the-art technology and artificial intelligence to restore communication in patients and treat epilepsy, as well as the future of modern neurosurgery.
Host: Andrew Huberman (@hubermanlab)
The human brain is shaped differently depending on the sounds you are exposed to in utero and throughout the first years of brain development
The natural sounds you hear shape the brain and, in turn, what you are able to hear
There’s an early window in brain development called the “critical period” during which we’re especially susceptible to the patterns we see (with our eyes) or hear (with our ears) – over time you develop sensitivity to your native language
The brain has flexibility and specialization for language
There’s probably a cost to using white noise machines to help babies sleep, but there are no conclusive studies; logically, our brain is structured to hear the noises of a natural environment
  • Rats raised in continuous white noise just loud enough to mask environmental sounds showed delayed auditory cortex development, which suggests similar exposure could delay speech development in children
Brain mapping: strategically stimulating and blocking areas of the brain during surgery to elicit specific speech, language, or movement responses and identify which locations control which functions
Stimulation during brain mapping can evoke anxiety, stress, or calm states
Epilepsy is an imbalance between excitation and inhibition in the brain
If your epilepsy doesn’t respond to the second or third medication, it likely won’t respond to any medication
Surgery or implanted stimulators may be beneficial in treating epilepsy that doesn’t respond to medication
For some people, the ketogenic diet can change the way the brain works and alleviate epilepsy, especially in kids (the same has been said about the ketogenic diet for adults with Alzheimer’s)
Absence seizure: just one of many types of seizure; you briefly lose consciousness and are taken “offline” but may remain standing or continue whatever you’re doing – people around you probably won’t even notice
Frontal lobe: important for articulating speech, creating words, expressing words
Left temporal lobe: important for understanding words
The classic textbook claim that Broca’s area is responsible for speech production has been debunked by modern brain mapping
Wernicke’s area is responsible for understanding what you hear and controlling what you say – injury or damage to the area causes aphasia
Language sits heavily on one side of the brain – it’s almost completely lateralized, existing on one side without a counterpart on the other
Handedness is strongly genetic – the parts of the brain that control the hand are near the area that controls the vocal tract, and both may develop in utero
For right-handed people, the language area sits on the left; for most left-handed people it also sits on the left, but some percentage have language areas on both sides or only on the right
People who recover from stroke may experience rewiring to the extent that the language area shifts – it’s possible we have language areas on both sides but don’t use them simultaneously
In bilingualism, there’s shared circuitry in the brain that allows us to process both languages
Stuttering is a speech condition affecting the coordination of the brain’s motor programming needed to produce fluent speech
Anxiety can provoke stuttering, but it isn’t the underlying cause
Because stuttering happens intermittently rather than consistently, therapy focuses on creating conditions that allow fluent speech to take place
Speech is the physical production of sound for communication – moving the mouth and vocal tract to generate audible words
Language is broader – it’s what you extract from the words being spoken
Semantics – understanding the meaning of words and sentences
Syntax – how the words are assembled
You can’t see speech, but its vibrations in the air are picked up by receptors in the ear and translated into electrical activity at different frequencies
It takes only milliseconds to work out where a sound came from and break it down into its components
The cortex is looking for specific sounds of human language
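In signal-processing terms, what the ear is doing here is a time-frequency decomposition: a single pressure wave is broken into the energy it carries at each frequency over time. A minimal sketch of the idea – Python with NumPy/SciPy is assumed tooling, nothing named in the episode:

```python
# Decompose a sound wave into its component frequencies over time,
# roughly what the cochlea does before the cortex sees the signal.
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                    # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)  # one second of "sound"

# Toy signal: two pure tones in the range where speech carries
# most of its information.
wave = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

# Short-time frequency analysis: how much energy each frequency
# band carries at each moment.
freqs, times, power = spectrogram(wave, fs=fs, nperseg=512)

peak = freqs[power.mean(axis=1).argmax()]
print(f"dominant frequency: ~{peak:.0f} Hz")  # close to 200 Hz
```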
When we speak, we push air from the lungs up through the vocal folds
The vocal tract – pharynx, mouth, tongue, and lips – shapes that air in ways that create consonants and vowels
The energy in the voice comes from the larynx
Higher frequencies are created by specific movements of the vocal tract that shape the airflow
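These bullets describe what speech science calls the source-filter model: the larynx supplies the sound energy, and the shape of the vocal tract filters it into recognizable vowels by boosting certain resonances (formants). A minimal sketch – Python with NumPy/SciPy assumed, and the formant values are textbook estimates for the vowel /a/ rather than anything from the episode:

```python
# Source-filter sketch: synthesize a crude vowel /a/.
import numpy as np
from scipy.signal import lfilter

fs = 16_000
t = np.arange(0, 0.5, 1 / fs)

# Source: an impulse train at ~120 Hz, standing in for the puffs of
# air released as the vocal folds open and close (energy from the larynx).
f0 = 120
source = np.zeros_like(t)
source[:: fs // f0] = 1.0

def resonator(signal, freq_hz, bandwidth_hz, fs):
    """Second-order all-pole filter: one vocal-tract resonance (formant)."""
    r = np.exp(-np.pi * bandwidth_hz / fs)  # pole radius sets bandwidth
    theta = 2 * np.pi * freq_hz / fs        # pole angle sets center frequency
    a = [1.0, -2 * r * np.cos(theta), r * r]
    return lfilter([1.0], a, signal)

# Filter: the vocal tract shaped for /a/, approximated by its first
# three formants (center frequency, bandwidth in Hz).
voiced = source
for formant, bw in [(700, 130), (1220, 70), (2600, 160)]:
    voiced = resonator(voiced, formant, bw, fs)

voiced /= np.abs(voiced).max()  # normalize; write to a WAV file to listen
```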
Different languages use different sound elements combined in different patterns to give rise to new meanings
There are fundamental elements to speech that have no meaning on their own but in sequence give rise to meaningful communication
Reading & writing are human inventions, relatively recent in human evolution
When we learn to read and write, the written word maps onto the speech/sound areas of the brain
Dyslexia: people have trouble reading because the mapping between how we see words and how the brain processes their sounds is different
Dyslexia can involve a difficulty with phonological awareness – people can hear words but aren’t processing their sounds when they read them
Reading books (not audiobooks) is useful for being able to articulate well, structure sentences, and build paragraphs
Myth: injury to the brain can improve function – for example, that you can suddenly speak a new language after an injury
“Locked-in syndrome” – cognition is completely intact and you can think and generate great thoughts, but you can’t speak or write them down; this can happen with brainstem strokes
ALS: a progressive disease causing advanced paralysis; it is typically fatal within a few years
Brain-computer interface: electrodes on the brain connect to a port that converts neural activity into digital signals; a computer then translates those signals into language via an artificial intelligence algorithm (a toy sketch of the decoding step follows below)
  • Study – a patient is hooked up and prompted to say a certain word like “outside,” then the computer decodes that word and spits it out (hopefully)
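For intuition only, here is a toy version of that decoding step – Python/NumPy are assumed tooling, and a nearest-template classifier on simulated features stands in for the far richer recordings and models real speech BCIs use:

```python
# Toy speech-BCI decoder (illustrative assumption, not Chang's method):
# neural activity is reduced to a feature vector per trial, and a
# nearest-template classifier maps each vector to an attempted word.
import numpy as np

rng = np.random.default_rng(0)
words = ["outside", "water", "hello"]
n_channels = 128   # simulated electrode channels

# "Training": the average activity recorded while the patient attempts
# each word becomes that word's template (simulated here as random
# vectors with different means so the words are separable).
templates = {w: rng.normal(loc=i, scale=1.0, size=n_channels)
             for i, w in enumerate(words)}

def decode(features: np.ndarray) -> str:
    """Return the word whose template best matches the recorded activity."""
    return min(templates, key=lambda w: np.linalg.norm(features - templates[w]))

# Simulate the trial described above: the patient attempts "outside".
trial = templates["outside"] + rng.normal(scale=0.3, size=n_channels)
print(decode(trial))   # -> outside (hopefully)
```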
Neuralink: pursuing clinical goals first (such as stopping tremors in Parkinson’s), then other social and physiological functions
Nonverbal communication is just as important as verbal communication – future technologies will include more holistic avatars with facial expression