Dr. Erich Jarvis (@erichjarvis) is a Professor and the Head of the Laboratory of Neurogenetics of Language at Rockefeller University and an Investigator with the Howard Hughes Medical Institute (HHMI). Dr. Jarvis’ research spans the molecular and genetic mechanisms of vocal communication, comparative genomics of speech and language across species, and the relationship between speech, language, and movement.
In this episode, Andrew Huberman & Erich Jarvis discuss the unique ability of humans to learn and communicate using complex written and spoken language. They break down the connections between language, song, and dance, the underlying biology of speech pathology, what shapes one’s ability to learn multiple languages, and more!
Host: Andrew Huberman (@hubermanlab)
Speech and language are behavioral, psychological terms that don’t exactly align with brain function
Speech is the motor patterns and production of sound that combine to have meaning
There may not be as much distinction between speech and language as you’d think – a separate language module in the brain holds the algorithms and computations that tell the speech pathway how to produce sound and the auditory pathway how to interpret that sound
Speech production pathway: controls the larynx and jaw muscles and has the algorithms for spoken language built in
The auditory pathway has the algorithms for understanding speech
Dogs can understand several hundred spoken human words
For spoken language, we use the speech pathway and its algorithms
The laryngeal muscles are the fastest-firing muscles in the human body
There is an evolutionary relationship between the brain pathways that control speech production and gesturing
The region of the brain that controls hand gestures is next to the region of the brain responsible for spoken language
“Humans are the most advanced at spoken language but not necessarily at gestural language.” – Dr. Erich Jarvis
Each language comes with its own learned set of gestures
Some species can gesture with hands more than voice and vice versa
Most vertebrate species vocalize with innate sounds they’re born with – very few have learned vocal communication in which they can imitate other sounds
Learned behavior uses forebrain circuits
Humans are the only primates with advanced vocal learning ability – it’s likely Neanderthals also had spoken language, which evolved in the last 500,000-1 million years
Birds have brain structures and underlying genes for vocal learning that parallel those humans use for language
Hummingbirds hum with their wings and sing with their syrinx in coordination
Songbirds can learn other species’ songs, but not as well as they sing the songs of their own species
It is easier to learn a language early in life – and – the ability to learn another language later in life is easier if you already speak multiple languages
Critical period: the brain is designed to solidify circuits learned as a child
If you learn another language early in life, you develop and maintain the ability to learn and make different sounds (phonemes) as an adult, so you can apply those phonemes to other languages
Semantic communication: communication that has meaning
Affective communication: communication that has emotional feelings embedded
The same circuits are used in different ways when processing semantic and affective communication
The context and intent heavily shape the way we perceive what’s being said
The right brain is artistic, the left brain is for thinking: in birds and humans, there’s left-right dominance for sound communication – the left is more dominant for speech, while the right is more involved in singing and processing musical sounds
History of song: spoken language may have first evolved for singing and emotional mate attraction, then was co-opted for abstract communication
Vocal learning brain pathways in songbirds and humans are embedded within circuits that control learning how to move
Motor theory of vocal learning origin: brain pathways for vocal learning and speech evolved by duplication of surrounding circuits involved in learning how to move
Hypothesis: when speech evolved in humans and songbirds, the auditory-motor integration contaminated surrounding brain regions and allowed for coordination of sound and movement
The body can communicate using movement
Non-human primates have a lot of diversity in facial expressions, like humans
Non-human primates have strong connections from cortical regions to motor neurons that control facial expressions – but – weak connections to motor neurons that control voice
There is likely a combination of innate and learned behavior which allows humans to couple otherwise uncoupled circuits that control the thousands of facial expressions depending on the context
We write in complete sentences but don’t often think in simple, declarative sentences (like we speak)
Thought to language to the written word: your speech pathway speaks what you read and sends it to the auditory pathway – you’re essentially speaking in your own head – then a fourth pathway comes online for writing
We have competing brain circuits for conscious attention – it can be difficult to write and speak at the same time, depending on the complexity of the task
If the rate of thought and rate of writing are aligned, things tend to go smoothly
There is not necessarily peer-reviewed support, but anecdotally, writing by hand does seem different from typing
Stuttering does not mean the thinking is slow
Songbirds also stutter
Stutter: damage to or disruption of the basal ganglia at a young age causes stuttering
Stuttering can be overcome with behavioral therapy, learning to speak slower, tapping out a rhythm – basically, sensory-motor integration tools that pair what you hear with your output
Are we getting less proficient at speech because we don’t write and think in complete sentences?
Texting has allowed for more rapid written communication between people – arguably, since we’re using those parts of our brain more, we’re stimulating them
“With texting, what you’re losing is less so the ability to write but more the ability to interpret what is written.” – Dr. Erich Jarvis
It’s possible we’ll see electrodes implanted in the human brain and electrical outputs translated into human speech, which could help paralyzed or non-communicative individuals (though it could also be frightening to have these signals floating around where they could be intercepted)
Study of genomes & creation of a database to find genetic associations across species and potentially resurrect extinct species