
Brain Anatomy and How the Brain Works

What is the brain?

The brain is a complex organ that controls thought, memory, emotion, touch, motor skills, vision, breathing, temperature, hunger and every process that regulates our body. Together, the brain and spinal cord that extends from it make up the central nervous system, or CNS.

What is the brain made of?

Weighing about 3 pounds in the average adult, the brain is about 60% fat. The remaining 40% is a combination of water, protein, carbohydrates and salts. The brain itself is not a muscle. It contains blood vessels and nerves, including neurons and glial cells.

What are gray matter and white matter?

Gray and white matter are two different regions of the central nervous system. In the brain, gray matter refers to the darker, outer portion, while white matter describes the lighter, inner section underneath. In the spinal cord, this order is reversed: The white matter is on the outside, and the gray matter sits within.

Cross sections of the brain and spinal cord, showing the gray and white matter.

Gray matter is primarily composed of neuron somas (the round central cell bodies), and white matter is mostly made of axons (the long stems that connect neurons together) wrapped in myelin (a protective coating). This difference in composition is why the two appear as separate shades on certain scans.

Parts of a nerve cell: the central soma cell body with inner nucleus and outer dendrites and long axon tail, insulated by myelin pads.

Each region serves a different role. Gray matter is primarily responsible for processing and interpreting information, while white matter transmits that information to other parts of the nervous system.

How does the brain work?

The brain sends and receives chemical and electrical signals throughout the body. Different signals control different processes, and your brain interprets each. Some make you feel tired, for example, while others make you feel pain.

Some messages are kept within the brain, while others are relayed through the spine and across the body’s vast network of nerves to distant extremities. To do this, the central nervous system relies on billions of neurons (nerve cells).

Main Parts of the Brain and Their Functions

At a high level, the brain can be divided into the cerebrum, brainstem and cerebellum.

Diagram of the brain's major parts: cerebrum, cerebellum and brainstem

The cerebrum (front of brain) comprises gray matter (the cerebral cortex) and white matter at its center. The largest part of the brain, the cerebrum initiates and coordinates movement and regulates temperature. Other areas of the cerebrum enable speech, judgment, thinking and reasoning, problem-solving, emotions and learning. Other functions relate to vision, hearing, touch and other senses.

Cerebral Cortex

Cortex is Latin for “bark,” and it describes the outer gray matter covering of the cerebrum. The cortex has a large surface area due to its folds and comprises about half of the brain’s weight.

The cerebral cortex is divided into two halves, or hemispheres. It is covered with ridges (gyri) and grooves (sulci). The two halves join at a large, deep sulcus (the interhemispheric fissure, also known as the medial longitudinal fissure) that runs from the front of the head to the back. The right hemisphere controls the left side of the body, and the left hemisphere controls the right side. The two halves communicate with one another through a large, C-shaped structure of white matter and nerve pathways called the corpus callosum, located in the center of the cerebrum.

The brainstem (middle of brain) connects the cerebrum with the spinal cord. The brainstem includes the midbrain, the pons and the medulla.

  • Midbrain. The midbrain (or mesencephalon) is a very complex structure with a range of different neuron clusters (nuclei and colliculi), neural pathways and other structures. These features facilitate various functions, from hearing and movement to responding to environmental changes. The midbrain also contains the substantia nigra, an area affected by Parkinson’s disease that is rich in dopamine neurons and part of the basal ganglia, which enables movement and coordination.
  • Pons. The pons is the origin for four of the 12 cranial nerves, which enable a range of activities such as tear production, chewing, blinking, focusing vision, balance, hearing and facial expression. Named for the Latin word for “bridge,” the pons is the connection between the midbrain and the medulla.
  • Medulla. At the bottom of the brainstem, the medulla is where the brain meets the spinal cord. The medulla is essential to survival. Functions of the medulla regulate many bodily activities, including heart rhythm, breathing, blood flow, and oxygen and carbon dioxide levels. The medulla produces reflexive activities such as sneezing, vomiting, coughing and swallowing.

The spinal cord extends from the bottom of the medulla and through a large opening in the bottom of the skull. Supported by the vertebrae, the spinal cord carries messages to and from the brain and the rest of the body.

The cerebellum (“little brain”) is a fist-sized portion of the brain located at the back of the head, below the temporal and occipital lobes and above the brainstem. Like the cerebral cortex, it has two hemispheres. The outer portion contains neurons, and the inner area communicates with the cerebral cortex. Its function is to coordinate voluntary muscle movements and to maintain posture, balance and equilibrium. New studies are exploring the cerebellum’s roles in thought, emotions and social behavior, as well as its possible involvement in addiction, autism and schizophrenia.

Brain Coverings: Meninges

Three layers of protective covering called meninges surround the brain and the spinal cord.

  • The outermost layer, the dura mater, is thick and tough. It includes two layers: the periosteal layer, which lines the inner dome of the skull (cranium), and the meningeal layer below it. Spaces between the layers allow for the passage of veins and arteries that supply blood flow to the brain.
  • The arachnoid mater is a thin, weblike layer of connective tissue that does not contain nerves or blood vessels. Below the arachnoid mater is the cerebrospinal fluid, or CSF. This fluid cushions the entire central nervous system (brain and spinal cord) and continually circulates around these structures to remove impurities.
  • The pia mater is a thin membrane that hugs the surface of the brain and follows its contours. The pia mater is rich with veins and arteries.

Three layers of the meninges beneath the skull: the outer dura mater, arachnoid and inner pia mater

Lobes of the Brain and What They Control

Each hemisphere of the cerebrum has four sections, called lobes: frontal, parietal, temporal and occipital. Each lobe controls specific functions.

Diagram of the brain's lobes: frontal, temporal, parietal and occipital

  • Frontal lobe. The largest lobe of the brain, located in the front of the head, the frontal lobe is involved in personality characteristics, decision-making and movement. Recognition of smell usually involves parts of the frontal lobe. The frontal lobe contains Broca’s area, which is associated with speech ability.
  • Parietal lobe. The middle part of the brain, the parietal lobe helps a person identify objects and understand spatial relationships (where one’s body is compared with objects around the person). The parietal lobe is also involved in interpreting pain and touch in the body. The parietal lobe houses Wernicke’s area, which helps the brain understand spoken language.
  • Occipital lobe. The occipital lobe is the back part of the brain that is involved with vision.
  • Temporal lobe. The sides of the brain, temporal lobes are involved in short-term memory, speech, musical rhythm and some degree of smell recognition.

Deeper Structures Within the Brain

Pituitary Gland

Sometimes called the “master gland,” the pituitary gland is a pea-sized structure found deep in the brain behind the bridge of the nose. The pituitary gland governs the function of other glands in the body, regulating the flow of hormones from the thyroid, adrenals, ovaries and testicles. It receives chemical signals from the hypothalamus through its stalk and blood supply.

Hypothalamus

The hypothalamus is located above the pituitary gland and sends it chemical messages that control its function. It regulates body temperature, synchronizes sleep patterns, controls hunger and thirst and also plays a role in some aspects of memory and emotion.

Amygdala

The amygdalae are small, almond-shaped structures, one located under each half (hemisphere) of the brain. Included in the limbic system, the amygdalae regulate emotion and memory and are associated with the brain’s reward system, stress, and the “fight or flight” response when someone perceives a threat.

Hippocampus

A curved seahorse-shaped organ on the underside of each temporal lobe, the hippocampus is part of a larger structure called the hippocampal formation. It supports memory, learning, navigation and perception of space. It receives information from the cerebral cortex and may play a role in Alzheimer’s disease.

Pineal Gland

The pineal gland is located deep in the brain and attached by a stalk to the top of the third ventricle. The pineal gland responds to light and dark and secretes melatonin, which regulates circadian rhythms and the sleep-wake cycle.

Ventricles and Cerebrospinal Fluid

Deep in the brain are four open areas, the ventricles, with passageways between them. They also open into the central spinal canal and the area beneath the arachnoid layer of the meninges.

The ventricles manufacture cerebrospinal fluid, or CSF, a watery fluid that circulates in and around the ventricles and the spinal cord, and between the meninges. CSF surrounds and cushions the spinal cord and brain, washes out waste and impurities, and delivers nutrients.

Diagram of the brain's deeper structures

Blood Supply to the Brain

Two sets of blood vessels supply blood and oxygen to the brain: the vertebral arteries and the carotid arteries.

The external carotid arteries extend up the sides of your neck, and are where you can feel your pulse when you touch the area with your fingertips. The internal carotid arteries branch into the skull and circulate blood to the front part of the brain.

The vertebral arteries follow the spinal column into the skull, where they join together at the brainstem and form the basilar artery, which supplies blood to the rear portions of the brain.

The circle of Willis, a loop of blood vessels near the bottom of the brain that connects major arteries, circulates blood from the front of the brain to the back and helps the arterial systems communicate with one another.

Diagram of the brain's major arteries

Cranial Nerves

Inside the cranium (the dome of the skull), there are 12 nerves, called cranial nerves:

  • Cranial nerve 1: The first is the olfactory nerve, which allows for your sense of smell.
  • Cranial nerve 2: The optic nerve governs eyesight.
  • Cranial nerve 3: The oculomotor nerve controls pupil response and other motions of the eye, and branches out from the area in the brainstem where the midbrain meets the pons.
  • Cranial nerve 4: The trochlear nerve controls muscles in the eye. It emerges from the back of the midbrain part of the brainstem.
  • Cranial nerve 5: The trigeminal nerve is the largest and most complex of the cranial nerves, with both sensory and motor function. It originates from the pons and conveys sensation from the scalp, teeth, jaw, sinuses, parts of the mouth and face to the brain, allows the function of chewing muscles, and much more.
  • Cranial nerve 6: The abducens nerve innervates some of the muscles in the eye.
  • Cranial nerve 7: The facial nerve supports face movement, taste, glandular and other functions.
  • Cranial nerve 8: The vestibulocochlear nerve facilitates balance and hearing.
  • Cranial nerve 9: The glossopharyngeal nerve allows taste, ear and throat movement, and has many more functions.
  • Cranial nerve 10: The vagus nerve allows sensation around the ear and the digestive system and controls motor activity in the heart, throat and digestive system.
  • Cranial nerve 11: The accessory nerve innervates specific muscles in the head, neck and shoulder.
  • Cranial nerve 12: The hypoglossal nerve supplies motor activity to the tongue.

The first two nerves originate in the cerebrum, and the remaining 10 cranial nerves emerge from the brainstem, which has three parts: the midbrain, the pons and the medulla.


Introduction: The Human Brain

By Helen Phillips

4 September 2006

New Scientist

A false-colour magnetic resonance image (MRI) of a mid-sagittal section through the head of a normal 42-year-old woman, showing structures of the brain, spine and facial tissues

(Image: Mehau Kulyk / Science Photo Library)

The brain is the most complex organ in the human body. It produces our every thought, action, memory, feeling and experience of the world. This jelly-like mass of tissue, weighing in at around 1.4 kilograms, contains a staggering one hundred billion nerve cells, or neurons.

The complexity of the connectivity between these cells is mind-boggling. Each neuron can make contact with thousands or even tens of thousands of others, via tiny structures called synapses. Our brains form a million new connections for every second of our lives. The pattern and strength of the connections are constantly changing, and no two brains are alike.
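
As a rough back-of-envelope check (the per-neuron figure below takes the upper range quoted above; the exact numbers are illustrative), these estimates imply on the order of a quadrillion synapses:

```python
# Illustrative arithmetic based on the figures quoted in the text.
neurons = 100_000_000_000     # ~10^11 nerve cells ("one hundred billion")
synapses_per_neuron = 10_000  # upper range: "tens of thousands" of contacts

# Total synaptic connections, assuming every neuron hits the upper range.
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # 1e+15, i.e. about a quadrillion
```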

It is in these changing connections that memories are stored, habits learned and personalities shaped, by reinforcing certain patterns of brain activity and losing others.

Grey matter

While people often speak of their “grey matter”, the brain also contains white matter. The grey matter is the cell bodies of the neurons, while the white matter is the branching network of thread-like tendrils – called dendrites and axons – that spread out from the cell bodies to connect to other neurons.

But the brain also has another, even more numerous type of cell, called glial cells. These outnumber neurons ten times over. Once thought to be support cells, they are now known to amplify neural signals and to be as important as neurons in mental calculations. There are many different types of neuron, only one of which is unique to humans and the other great apes: the so-called spindle cells.

Brain structure is shaped partly by genes, but largely by experience. Only relatively recently was it discovered that new brain cells are born throughout our lives – a process called neurogenesis. The brain has bursts of growth and then periods of consolidation, when excess connections are pruned. The most notable bursts are in the first two or three years of life, during puberty, and a final burst in young adulthood.

How a brain ages depends on genes and lifestyle too. Exercising the brain and giving it the right diet can be just as important as it is for the rest of the body.

Chemical messengers

The neurons in our brains communicate in a variety of ways. Signals pass between them by the release and capture of neurotransmitter and neuromodulator chemicals, such as glutamate, dopamine, acetylcholine, noradrenalin, serotonin and endorphins.

Some neurochemicals work in the synapse, passing specific messages from release sites to collection sites called receptors. Others spread their influence more widely, like a radio signal, making whole brain regions more or less sensitive.

These neurochemicals are so important that deficiencies in them are linked to certain diseases. For example, a loss of dopamine in the basal ganglia, which control movements, leads to Parkinson’s disease. Because dopamine mediates our sensations of reward and pleasure, changes in its levels can also increase susceptibility to addiction.

Similarly, a deficiency in serotonin, used by regions involved in emotion, can be linked to depression or mood disorders, and the loss of acetylcholine in the cerebral cortex is characteristic of Alzheimer’s disease.

Brain scanning

Within individual neurons, signals are formed by electrochemical pulses. Collectively, this electrical activity can be detected at the scalp by an electroencephalogram (EEG).

These signals have wave-like patterns, which scientists classify from alpha (common while we are relaxing or sleeping) through to gamma (active thought). When this activity goes awry, it is called a seizure. Some researchers think that synchronising the activity in different brain regions is important in perception.
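
As an illustrative sketch, the conventional EEG frequency bands can be written as a small lookup table. The boundary values below are common approximations, not figures from this article; exact cutoffs vary between labs:

```python
# Conventional EEG frequency bands (approximate boundaries in Hz).
# These ranges are a widely used convention; exact cutoffs differ by lab.
EEG_BANDS = {
    "delta": (0.5, 4.0),    # deep sleep
    "theta": (4.0, 8.0),    # drowsiness, light sleep
    "alpha": (8.0, 13.0),   # relaxed wakefulness
    "beta":  (13.0, 30.0),  # alert, focused activity
    "gamma": (30.0, 100.0), # active thought, perception
}

def classify_frequency(hz):
    """Return the name of the band containing a given frequency, or None."""
    for band, (low, high) in EEG_BANDS.items():
        if low <= hz < high:
            return band
    return None

print(classify_frequency(10))  # alpha
print(classify_frequency(40))  # gamma
```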

Other ways of imaging brain activity are indirect. Functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) monitor blood flow. MRI scans, computed tomography (CT) scans and diffusion tensor imaging (DTI) use the magnetic signatures of different tissues, X-ray absorption, or the movement of water molecules in those tissues to image the brain.

These scanning techniques have revealed which parts of the brain are associated with which functions. Examples include activity related to sensations, movement, libido, choices, regrets, motivations and even racism. However, some experts argue that we put too much trust in these results and that they raise privacy issues.

Before scanning techniques were common, researchers relied on patients with brain damage caused by strokes, head injuries or illnesses to determine which brain areas are required for certain functions. This approach exposed the regions connected to emotions, dreams, memory, language and perception, and to even more enigmatic events such as religious or “paranormal” experiences.

One famous example was the case of Phineas Gage, a 19th-century railroad worker who lost part of the front of his brain when a 1-metre-long iron pole was blasted through his head in an explosion. He recovered physically, but was left with permanent changes to his personality, showing for the first time that specific brain regions are linked to different processes.

Structure in mind

The most obvious anatomical feature of our brains is the undulating surface of the cerebrum – the deep clefts are known as sulci and its folds as gyri. The cerebrum is the largest part of our brain and is largely made up of the two cerebral hemispheres. It is the most evolutionarily recent brain structure, dealing with more complex cognitive activities.

It is often said that the right hemisphere is more creative and emotional and the left deals with logic, but the reality is more complex. Nonetheless, the sides do have some specialisations, with the left dealing with speech and language, the right with spatial and body awareness.


Further anatomical divisions of the cerebral hemispheres are the occipital lobe at the back, devoted to vision, and the parietal lobe above that, dealing with movement, position, orientation and calculation.

Behind the ears and temples lie the temporal lobes, dealing with sound and speech comprehension and some aspects of memory. And to the fore are the frontal and prefrontal lobes, often considered the most highly developed and most “human” of regions, dealing with the most complex thought, decision-making, planning, conceptualising, attention control and working memory. They also deal with complex social emotions such as regret, morality and empathy.

Another way to classify the regions is as the sensory cortex and motor cortex, controlling incoming information and outgoing behaviour respectively.

Below the cerebral hemispheres, but still referred to as part of the forebrain, is the cingulate cortex, which deals with directing behaviour and pain. And beneath this lies the corpus callosum, which connects the two sides of the brain. Other important areas of the forebrain are the basal ganglia, responsible for movement, motivation and reward.

Urges and appetites

Beneath the forebrain lie more primitive brain regions. The limbic system, common to all mammals, deals with urges and appetites. Emotions are most closely linked with structures called the amygdala, caudate nucleus and putamen. Also in the limbic brain are the hippocampus – vital for forming new memories; the thalamus – a kind of sensory relay station; and the hypothalamus, which regulates bodily functions via hormone release from the pituitary gland.

The back of the brain has a highly convoluted and folded swelling called the cerebellum , which stores patterns of movement, habits and repeated tasks – things we can do without thinking about them.

The most primitive parts, the midbrain and brain stem, control the bodily functions we have no conscious control of, such as breathing, heart rate, blood pressure and sleep patterns. They also control the signals that pass between the brain and the rest of the body through the spinal cord.

Though we have discovered an enormous amount about the brain, huge and crucial mysteries remain. One of the most important is how the brain produces our conscious experiences.

The vast majority of the brain’s activity is subconscious. But our conscious thoughts, sensations and perceptions – what define us as humans – cannot yet be explained in terms of brain activity.


Visible Body Learn Anatomy

  • Types of Bones
  • Axial Skeleton
  • Appendicular Skeleton
  • Skeletal Pathologies
  • Muscle Tissue Types
  • Muscle Attachments & Actions
  • Muscle Contractions
  • Muscular Pathologies
  • Functions of Blood
  • Blood Vessels
  • Circulation
  • Circulatory Pathologies
  • Respiratory Functions
  • Upper Respiratory System
  • Lower Respiratory System
  • Respiratory Pathologies
  • Oral Cavity
  • Alimentary Canal
  • Accessory Organs
  • Absorption & Elimination
  • Digestive Pathologies
  • Lymphatic Structures
  • Immune System
  • Urinary System Structures
  • Urine Formation
  • Urine Storage & Elimination
  • Urinary Pathologies
  • Female Reproductive System
  • Male Reproductive System
  • Reproductive Process
  • Endocrine Glands

The Human Brain: Anatomy and Function

© 2024 Visible Body


The brain directs our body’s internal functions. It also integrates sensory impulses and information to form perceptions, thoughts, and memories. The brain gives us self-awareness and the ability to speak and move in the world. Its four major regions make this possible: The cerebrum, with its cerebral cortex, gives us conscious control of our actions. The diencephalon mediates sensations, manages emotions, and commands whole internal systems. The cerebellum adjusts body movements, speech coordination, and balance, while the brain stem relays signals from the spinal cord and directs basic internal functions and reflexes.

1. The Seat of Consciousness: High Intellectual Functions Occur in the Cerebrum

A diagram of the parts of the cerebrum

The cerebrum is the largest brain structure and part of the forebrain (or prosencephalon). Its prominent outer portion, the cerebral cortex, not only processes sensory and motor information but enables consciousness, our ability to consider ourselves and the outside world. It is what most people think of when they hear the term “grey matter.” The cortex tissue consists mainly of neuron cell bodies, and its folds and fissures (known as gyri and sulci) give the cerebrum its trademark rumpled surface. The cerebral cortex has a left and a right hemisphere. Each hemisphere can be divided into four lobes: the frontal lobe, temporal lobe, occipital lobe, and parietal lobe. The lobes are functional segments. They specialize in various areas of thought and memory, of planning and decision making, and of speech and sense perception.

2. The Cerebellum Fine-Tunes Body Movements and Maintains Balance

A diagram of the parts of the cerebellum

The cerebellum is the second largest part of the brain. It sits below the posterior (occipital) lobes of the cerebrum and behind the brain stem, as part of the hindbrain. Like the cerebrum, the cerebellum has left and right hemispheres. A middle region, the vermis, connects them. Within the interior tissue rises a central white stem, called the arbor vitae because it spreads branches and sub-branches through the hemispheres. The primary function of the cerebellum is to maintain posture and balance. When we jump to the side, reach forward, or turn suddenly, it subconsciously evaluates each movement. The cerebellum then sends signals to the cerebrum, indicating muscle movements that will adjust our position to keep us steady.

3. The Brain Stem Relays Signals Between the Brain and Spinal Cord and Manages Basic Involuntary Functions

A diagram of the parts of the brain stem

The brain stem connects the spinal cord to the higher-thinking centers of the brain. It consists of three structures: the medulla oblongata, the pons, and the midbrain. The medulla oblongata is continuous with the spinal cord and connects to the pons above. Both the medulla and the pons are considered part of the hindbrain. The midbrain, or mesencephalon, connects the pons to the diencephalon and forebrain. Besides relaying sensory and motor signals, the structures of the brain stem direct involuntary functions. The pons helps control breathing rhythms. The medulla handles respiration, digestion, and circulation, and reflexes such as swallowing, coughing, and sneezing. The midbrain contributes to motor control, vision, and hearing, as well as vision- and hearing-related reflexes.

4. A Sorting Station: The Thalamus Mediates Sensory Data and Relays Signals to the Conscious Brain

The thalamus and its position in the brain

The diencephalon is a region of the forebrain, connected to both the midbrain (part of the brain stem) and the cerebrum. The thalamus forms most of the diencephalon. It consists of two symmetrical egg-shaped masses, with neurons that radiate out through the cerebral cortex. Sensory data floods into the thalamus from the brain stem, along with emotional, visceral, and other information from different areas of the brain. The thalamus relays these messages to the appropriate areas of the cerebral cortex. It determines which signals require conscious awareness, and which should be available for learning and memory.

5. The Hypothalamus Manages Sensory Impulses, Controls Emotions, and Regulates Internal Functions

The hypothalamus and its position in the brain

The hypothalamus is part of the diencephalon, a region of the forebrain that connects to the midbrain and the cerebrum. The hypothalamus helps to process sensory impulses of smell, taste, and vision. It manages emotions such as pain and pleasure, aggression and amusement. The hypothalamus is also our visceral control center, regulating the endocrine system and internal functions that sustain the body day to day. It translates nervous system signals into activating or inhibiting hormones that it sends to the pituitary gland. These hormones can activate or inhibit the release of pituitary hormones that target specific glands and tissues in the body. Meanwhile, the hypothalamus manages the autonomic nervous system, devoted to involuntary internal functions. It signals sleep cycles and other circadian rhythms, regulates food consumption, and monitors and adjusts body chemistry and temperature.


NIH Research Matters

February 13, 2024

How the brain produces speech

At a Glance

  • Researchers identified how neurons in the human brain encode various elements of speech.
  • The findings might be used to help develop treatments for speech and language disorders.


Speech and language depend on our ability to produce a wide variety of sounds in a specific order. How the neurons in the human brain work together to plan and produce speech remains poorly understood.

To begin to address this question, an NIH-funded team of researchers, led by Drs. Ziv Williams and Sydney Cash at Massachusetts General Hospital, recorded neuron activity during natural speech in five native English speakers. The experiments were done while participants were having electrodes implanted for deep brain stimulation. The researchers recorded neurons in a prefrontal brain region known to be involved in word planning and sentence construction. They used high-density arrays of tiny electrodes that could record signals from many individual neurons at once. Their results appeared in Nature on January 31, 2024.

The scientists found that the activity of almost half the neurons depended on the particular sounds, or phonemes, in the word about to be said. Some neurons, for instance, became more active ahead of speaking the sounds for “p” or “b”, which involve stopping airflow at the lips. Others did so ahead of speaking “k” or “g” sounds, which are formed by the tongue against the soft palate. Moreover, certain neurons seemed to reflect the specific combination of phonemes in the upcoming word. The team found that they could predict the phonemes that made up the word about to be spoken based on the activity of these neurons.

For about a quarter of the neurons, activity further reflected specific syllables, or ordered sequences of phonemes that may be all or part of a word. The team could predict the syllables in the upcoming word using the activity from these neurons. These neurons did not respond to the phonemes in the syllable by themselves. Nor did they respond to the phonemes out of order or split across different syllables.

A minority of neurons responded to the presence of prefixes or suffixes. These are examples of morphemes, or groups of sounds that carry specific meanings. The presence of morphemes in the upcoming word could be predicted from these neurons’ activities.

The team also found that different sets of neurons activated in a specific order. The morpheme neurons activated first, around 400 milliseconds (ms) before the utterance. Phoneme neurons activated next, around 200 ms before the utterance. Syllable neurons activated last, around 70 ms before the utterance. Most neurons responded to the same feature (phoneme, syllable, or morpheme) both before and during the utterance, but their activity patterns during the utterance differed from those before it. Finally, the team found that the neurons that responded to speech sounds during speaking differed from those that responded to the same speech sounds during listening.
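The reported latencies imply a consistent ordering of the three neuron populations before each utterance. As a minimal illustration, the sketch below sorts the populations by their approximate onset times as quoted in the article (the variable names are ours, not the study's, and the values are rounded summaries rather than per-neuron data):

```python
# Approximate activation onsets before the utterance, in milliseconds,
# as reported in the article (illustrative summary values).
onsets_ms = {"morpheme": 400, "phoneme": 200, "syllable": 70}

# A larger lead time before the utterance means earlier activation,
# so sorting by onset in descending order gives the firing order.
firing_order = sorted(onsets_ms, key=onsets_ms.get, reverse=True)

print(firing_order)  # ['morpheme', 'phoneme', 'syllable']
```

That ordering (morpheme, then phoneme, then syllable) is the sequence the researchers observed in the recordings.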

In an accompanying paper in the same issue of Nature , another research team used the same technique to examine how neurons in another area of the brain respond while listening to speech. They similarly found that single neurons encoded different speech sound cues.

The findings suggest how various elements of speech are encoded in the brain, and how the brain combines these elements to form spoken words. This information might aid in developing brain-machine interfaces that can synthesize speech. Such devices could help a range of patients with conditions that impair speech.

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders—including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more,” says co-author Dr. Arjun Khanna. “Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders.”

—by Brian Doctrow, Ph.D.


Reference: Khanna AR, Muñoz W, Kim YJ, Kfir Y, Paulk AC, Jamali M, Cai J, Mustroph ML, Caprara I, Hardstone R, Mejdell M, Meszéna D, Zuckerman A, Schweitzer J, Cash S, Williams ZM. Single-neuronal elements of speech production in humans. Nature. 2024 Jan 31. doi: 10.1038/s41586-023-06982-w. Online ahead of print. PMID: 38297120.

Funding:  NIH’s National Institute of Neurological Disorders and Stroke (NINDS), National Institute of Mental Health (NIMH), and National Institute on Deafness and other Communication Disorders (NIDCD); Canadian Institutes of Health Research; Foundations of Human Behavior Initiative; Tiny Blue Dot Foundation; American Association of University Women.


The human brain, explained

Learn about the most complex organ in the human body, from its structure to its most common disorders.

Here’s something to wrap your mind around: The human brain is more complex than any other known structure in the universe . Weighing in at three pounds, on average, this spongy mass of fat and protein is made up of two overarching types of cells—called glia and neurons—and it contains many billions of each. Neurons are notable for their branch-like projections called axons and dendrites, which gather and transmit electrochemical signals. Different types of glial cells provide physical protection to neurons and help keep them, and the brain, healthy.

Together, this complex network of cells gives rise to every aspect of our shared humanity. We could not breathe, play, love, or remember without the brain.

Anatomy of the brain

The cerebrum is the largest part of the brain , accounting for 85 percent of the organ's weight. The distinctive, deeply wrinkled outer surface is the cerebral cortex. It's the cerebrum that makes the human brain—and therefore humans—so formidable. Animals such as elephants, dolphins, and whales actually have larger brains, but humans have the most developed cerebrum. It's packed to capacity inside our skulls, with deep folds that cleverly maximize the total surface area of the cortex .

The cerebrum has two halves, or hemispheres, that are further divided into four regions, or lobes. The frontal lobes, located behind the forehead, are involved with speech, thought, learning, emotion, and movement. Behind them are the parietal lobes, which process sensory information such as touch, temperature, and pain. At the rear of the brain are the occipital lobes, dealing with vision. Lastly, there are the temporal lobes, near the temples, which are involved with hearing and memory.

The second-largest part of the brain is the cerebellum , which sits beneath the back of the cerebrum. It plays an important role in coordinating movement, posture, and balance.

The third-largest part is the diencephalon, located in the core of the brain. A complex of structures roughly the size of an apricot, its two major sections are the thalamus and hypothalamus. The thalamus acts as a relay station for incoming nerve impulses from around the body that are then forwarded to the appropriate brain region for processing. The hypothalamus controls hormone secretions from the nearby pituitary gland. These hormones govern growth and instinctual behaviors, such as when a new mother starts to lactate. The hypothalamus is also important for keeping bodily processes like temperature, hunger, and thirst balanced.

Seated at the organ's base, the brain stem controls reflexes and basic life functions such as heart rate, breathing, and blood pressure. It also regulates when you feel sleepy or awake and connects the cerebrum and cerebellum to the spinal cord.


The brain is extremely sensitive and delicate, and so it requires maximum protection, which is provided by the hard bone of the skull and three tough membranes called meninges. The spaces between these membranes are filled with fluid that cushions the brain and keeps it from being damaged by contact with the inside of the skull.

Blood-brain barrier

Want more proof that the brain is extraordinary? Look no further than the blood-brain barrier. The discovery of this unique feature dates to the 19th century, when various experiments revealed that dye, when injected into the bloodstream, colored all of the body’s organs except the brain and spinal cord. The same dye, when injected into the spinal fluid, tinted only the brain and spinal cord.


This led scientists to learn that the brain has an ingenious, protective layer. Called the blood-brain barrier, it’s made up of special, tightly bound cells that together function as a kind of semi-permeable gate throughout most of the organ . It keeps the brain environment safe and stable by preventing some toxins, pathogens, and other harmful substances from entering the brain through the bloodstream, while simultaneously allowing oxygen and vital nutrients to pass through.

Health conditions of the brain

Of course, when a machine as finely calibrated and complex as the brain gets injured or malfunctions, problems arise. One in five Americans suffers from some form of neurological damage , a wide-ranging list that includes stroke, epilepsy, and cerebral palsy, as well as dementia.

Alzheimer’s disease, which is characterized in part by a gradual progression of short-term memory loss, disorientation, and mood swings, is the most common cause of dementia. It is the sixth leading cause of death in the United States, and the number of people diagnosed with it is growing. Worldwide, some 50 million people suffer from Alzheimer’s or another form of dementia. While a handful of drugs are available to mitigate Alzheimer’s symptoms, there is no cure. Researchers across the globe continue to develop treatments that might one day put an end to the disease’s devastating effects.

Far more common than neurological disorders, however, are conditions that fall under a broad category called mental illness . Unfortunately, negative attitudes toward people who suffer from mental illness are widespread. The stigma attached to mental illness can create feelings of shame, embarrassment, and rejection, causing many people to suffer in silence. In the United States, where anxiety disorders are the most common forms of mental illness, only about 40 percent of sufferers receive treatment. Anxiety disorders often stem from abnormalities in the brain’s hippocampus and prefrontal cortex.

Attention-deficit/hyperactivity disorder, or ADHD , is a mental health condition that also affects adults but is far more often diagnosed in children. ADHD is characterized by hyperactivity and an inability to stay focused. While the exact cause of ADHD has not yet been determined, scientists believe that it may be linked to several factors, among them genetics or brain injury. Treatment for ADHD may include psychotherapy as well as medications. The latter can help by increasing the brain chemicals dopamine and norepinephrine, which are vital to thinking and focusing.

Depression is another common mental health condition. It is the leading cause of disability worldwide and is often accompanied by anxiety. Depression can be marked by an array of symptoms, including persistent sadness, irritability, and changes in appetite. The good news is that in general, anxiety and depression are highly treatable through various medications—which help the brain use certain chemicals more efficiently—and through forms of therapy.



News & Views | 31 January 2024

How speech is produced and perceived in the human cortex

Yves Boubenec (ORCID: 0000-0002-0106-6947)

Yves Boubenec is in the Perceptual Systems Laboratory, Department of Cognitive Studies, École Normale Supérieure, PSL Research University, CNRS, Paris 75005, France.


In the human brain, the perception and production of speech requires the tightly coordinated activity of neurons across diverse regions of the cerebral cortex. Writing in Nature , Leonard et al . 1 and Khanna et al . 2 report their use of a neural probe consisting of an array of microelectrodes, called Neuropixels, to measure the electrical activity of individual neurons in regions of the human cortex involved in speech processing.


Nature 626, 485–486 (2024)

doi: https://doi.org/10.1038/d41586-024-00078-9

References

1. Leonard, M. K. et al. Nature 626, 593–602 (2024).
2. Khanna, A. R. et al. Nature 626, 603–610 (2024).
3. Quian Quiroga, R. et al. Nature Commun. 14, 5661 (2023).
4. Bouchard, K. E., Mesgarani, N., Johnson, K. & Chang, E. F. Nature 495, 327–332 (2013).
5. Mesgarani, N., Cheung, C., Johnson, K. & Chang, E. F. Science 343, 1006–1010 (2014).
6. Paulk, A. C. et al. Nature Neurosci. 25, 252–263 (2022).
7. Chung, J. E. et al. Neuron 110, 2409–2421 (2022).
8. Jun, J. J. et al. Nature 551, 232–236 (2017).
9. Saxena, S. & Cunningham, J. P. Curr. Opin. Neurobiol. 55, 103–111 (2019).
10. Kaufman, M. T., Churchland, M. M., Ryu, S. I. & Shenoy, K. V. Nature Neurosci. 17, 440–448 (2014).
11. Keller, G. B. & Mrsic-Flogel, T. D. Neuron 100, 424–435 (2018).
12. Schneider, D. M., Sundararajan, J. & Mooney, R. Nature 561, 391–395 (2018).


Competing Interests

The author declares no competing interests.


Encyclopedia Britannica


speech, human communication through spoken language. Although many animals possess voices of various types and inflectional capabilities, humans have learned to modulate their voices by articulating the laryngeal tones into audible oral speech.

The regulators


Human speech is served by a bellows-like respiratory activator, which furnishes the driving energy in the form of an airstream; a phonating sound generator in the larynx (low in the throat) to transform the energy; a sound-molding resonator in the pharynx (higher in the throat), where the individual voice pattern is shaped; and a speech-forming articulator in the oral cavity (mouth). Normally, but not necessarily, the four structures function in close coordination. Audible speech without any voice is possible during toneless whisper, and there can be phonation without oral articulation, as in some aspects of yodeling that depend on pharyngeal and laryngeal changes. Silent articulation without breath and voice may be used for lipreading.

An early achievement in experimental phonetics, at about the end of the 19th century, was a description of the differences between quiet breathing and phonic (speaking) respiration. An individual typically breathes approximately 18 to 20 times per minute during rest and much more frequently during periods of strenuous effort. Quiet respiration at rest, as well as deep respiration during physical exertion, is characterized by symmetry and synchrony of inhalation (inspiration) and exhalation (expiration). Inspiration and expiration are equally long, equally deep, and transport the same amount of air during the same period of time, approximately half a litre (one pint) of air per breath at rest in most adults. Recordings (made with a device called a pneumograph) of respiratory movements during rest depict a curve in which peaks are followed by valleys in fairly regular alternation.
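As a quick sanity check of those resting figures, the implied minute ventilation (breathing rate times volume per breath) works out to roughly 9 to 10 litres per minute. A sketch of that arithmetic, using the typical values quoted above (illustrative figures, not measurements):

```python
breaths_per_min = (18, 20)   # typical resting respiratory rate, per the text
tidal_volume_l = 0.5         # about half a litre of air per breath at rest

# Minute ventilation = breathing rate x volume per breath
minute_ventilation = tuple(rate * tidal_volume_l for rate in breaths_per_min)

print(minute_ventilation)  # (9.0, 10.0) litres of air per minute
```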

Phonic respiration is different; inhalation is much deeper than it is during rest and much more rapid. After one takes this deep breath (one or two litres of air), phonic exhalation proceeds slowly and fairly regularly for as long as the spoken utterance lasts. Trained speakers and singers are able to phonate on one breath for at least 30 seconds, often for as much as 45 seconds, and exceptionally up to one minute. The period during which one can hold a tone on one breath with moderate effort is called the maximum phonation time; this potential depends on such factors as body physiology, state of health, age, body size, physical training, and the competence of the laryngeal voice generator—that is, the ability of the glottis (the vocal cords and the opening between them) to convert the moving energy of the breath stream into audible sound. A marked reduction in phonation time is characteristic of all the laryngeal diseases and disorders that weaken the precision of glottal closure, in which the cords (vocal folds) come close together, for phonation.
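Those figures also allow a rough estimate of the mean airflow through the glottis during sustained phonation: one to two litres exhaled over 30 to 45 seconds comes to a few hundredths of a litre per second. A sketch using mid-range values from the text (illustrative assumptions, not measured data):

```python
breath_volume_l = 1.5     # mid-range of the one-to-two-litre phonic inhalation
phonation_time_s = 30.0   # typical trained-speaker phonation on one breath

# Mean expiratory airflow during sustained phonation
mean_flow_l_per_s = breath_volume_l / phonation_time_s

print(round(mean_flow_l_per_s, 3))  # 0.05 L/s, i.e. about 50 ml per second
```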


Respiratory movements when one is awake and asleep, at rest and at work, silent and speaking are under constant regulation by the nervous system. Specific respiratory centres within the brain stem regulate the details of respiratory mechanics according to the body's needs of the moment. Conversely, the impact of emotions is heard immediately in the manner in which respiration drives the phonic generator; the timid voice of fear, the barking voice of fury, the feeble monotony of melancholy, and the raucous vehemence of agitation are examples. Likewise, many organic diseases of the nervous system or of the breathing mechanism are projected in the sound of the sufferer's voice. Some forms of nervous system disease make the voice sound tremulous; the voice of the asthmatic sounds laboured and short-winded; certain types of disease affecting a part of the brain called the cerebellum cause respiration to be forced and strained, so that the voice becomes extremely low and grunting. Such observations have led to the traditional practice of prescribing that vocal education begin with exercises in proper breathing.

The mechanism of phonic breathing involves three types of respiration: (1) predominantly pectoral breathing (chiefly by elevation of the chest), (2) predominantly abdominal breathing (through marked movements of the abdominal wall), and (3) an optimal combination of both (with widening of the lower chest). Females tend to use upper-chest respiration predominantly, while males rely primarily on abdominal breathing. Many voice coaches stress the ideal of a mixture of pectoral (chest) and abdominal breathing for economy of movement. Any exaggeration of one particular breathing habit is impractical and may damage the voice.


The question of what the brain does to make the mouth speak or the hand write is still incompletely understood, despite a rapidly growing number of studies by specialists in many sciences, including neurology, psychology, psycholinguistics, neurophysiology, aphasiology, speech pathology, cybernetics, and others. A basic understanding, however, has emerged from such study. In evolution, one of the oldest structures in the brain is the so-called limbic system, which evolved as part of the olfactory (smell) sense. It traverses both hemispheres in a front-to-back direction, connecting many vitally important brain centres as if it were a basic mainline for the distribution of energy and information. The limbic system involves the so-called reticular activating system (structures in the brain stem), which represents the chief brain mechanism of arousal, such as from sleep or from rest to activity. In humans, all activities of thinking and moving (as expressed by speaking or writing) require the guidance of the brain cortex. Moreover, in humans the functional organization of the cortical regions of the brain is fundamentally distinct from that of other species, resulting in high sensitivity and responsiveness toward harmonic frequencies and sounds with pitch, which characterize human speech and music.


In contrast to animals, humans possess several language centres in the dominant brain hemisphere (on the left side in a clearly right-handed person). It was previously thought that left-handers had their dominant hemisphere on the right side, but more recent findings suggest that many left-handed persons have the language centres more equally developed in both hemispheres or that the left side of the brain is indeed dominant. The foot of the third frontal convolution of the brain cortex, called Broca’s area, is involved with motor elaboration of all movements for expressive language. Its destruction through disease or injury causes expressive aphasia, the inability to speak or write. The posterior third of the upper temporal convolution represents Wernicke’s area of receptive speech comprehension. Damage to this area produces receptive aphasia, the inability to understand what is spoken or written, as if the patient had never known that language.

Broca’s area surrounds and serves to regulate the function of other brain parts that initiate the complex patterns of bodily movement (somatomotor function) necessary for the performance of a given motor act. Swallowing is an inborn reflex (present at birth) in the somatomotor area for mouth, throat, and larynx. From these cells in the motor cortex of the brain emerge fibres that connect eventually with the cranial and spinal nerves that control the muscles of oral speech.

In the opposite direction, fibres from the inner ear have a first relay station in the so-called acoustic nuclei of the brain stem. From here the impulses from the ear ascend, via various regulating relay stations for the acoustic reflexes and directional hearing, to the cortical projection of the auditory fibres on the upper surface of the superior temporal convolution (on each side of the brain cortex). This is the cortical hearing centre where the effects of sound stimuli seem to become conscious and understandable. Surrounding this audito-sensory area of initial crude recognition, the inner and outer auditopsychic regions spread over the remainder of the temporal lobe of the brain, where sound signals of all kinds appear to be remembered, comprehended, and fully appreciated. Wernicke’s area (the posterior part of the outer auditopsychic region) appears to be uniquely important for the comprehension of speech sounds.

The integrity of these language areas in the cortex seems insufficient for the smooth production and reception of language. The cortical centres are interconnected with various subcortical areas (deeper within the brain) such as those for emotional integration in the thalamus and for the coordination of movements in the cerebellum (hindbrain).

All creatures regulate their performance instantaneously, comparing it with what it was intended to be, through so-called feedback mechanisms involving the nervous system. Auditory feedback through the ear, for example, informs the speaker about the pitch, volume, and inflection of his voice, the accuracy of articulation, the selection of the appropriate words, and other audible features of his utterance. Another feedback system, through the proprioceptive sense (represented by sensory structures within muscles, tendons, joints, and other moving parts), provides continual information on the position of these parts. Limitations of these systems curtail the quality of speech, as observed in pathologic examples (deafness, paralysis, underdevelopment).


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Physiology, Brain

Kenia A. Maldonado; Khalid Alsayouri.


Last Update: March 17, 2023.

  • Introduction

The human brain is perhaps the most complex of all biological systems, with the mature brain composed of more than 100 billion information-processing cells called neurons. [1]  The brain is an organ composed of nervous tissue that commands task-evoked responses, movement, senses, emotions, language, communication, thinking, and memory. The three main parts of the human brain are the cerebrum, cerebellum, and brainstem. See Image. Human Brain, Encephalon.

The cerebrum is divided into the right and left hemispheres and is the largest part of the brain. Its surface is folded into convolutions: the ridges are called gyri, and the grooves between them are called sulci (plural of sulcus). Particularly deep sulci are called fissures. Both cerebral hemispheres have an outer layer of gray matter, the cerebral cortex, and inner subcortical white matter.

Located in the posterior cranial fossa, above the foramen magnum, the cerebellum primarily modulates motor coordination, posture, and balance. It comprises the cerebellar cortex and the deep cerebellar nuclei, with the cerebellar cortex made up of three layers: the molecular, Purkinje, and granular layers. The cerebellum connects to the brainstem via the cerebellar peduncles.

The brainstem contains the midbrain, pons, and medulla. It is located anterior to the cerebellum, between the base of the cerebrum and the spinal cord.

  • Issues of Concern

Studies of brain function have focused on analyzing the variations of the electrical activity produced by the application of sensory stimuli. However, it is also essential to study additional features and functions of the brain, including information processing and responding to environmental demands. [2]

The brain makes connections with remarkable precision, yet it is a deeply divided structure that has not been entirely explained or examined. [3]  Although researchers have made significant progress in experimental techniques, how human cognitive function emerges from neuronal structure and dynamics is not entirely understood. [4]

  • Cellular Level

At the beginning of the forebrain formation, the neuroepithelial cells undergo divisions at the inner surface of the neural tube to generate new progenitors. These dividing neuroepithelial cells transform and diversify, leading to radial glial cells (RGCs).

RGCs also work as progenitors with the capacity to renew themselves and produce other types of progenitors, neurons, and glial cells. [5]  RGCs have long processes that connect with the neuroepithelium and serve as guides for migrating neurons, ensuring that neurons find their resting place, mature, and send out axons and dendrites to participate directly in synapses and electrical signaling. Neurons are produced along with glial cells; glial cells provide support and create an enclosed environment in which neurons can perform their functions.

Glial cells (astrocytes, oligodendrocytes, and microglial cells) have well-known roles, which include maintaining the ionic milieu of neurons, controlling the rate of nerve signal propagation and synaptic action by regulating neurotransmitter uptake, providing a platform for some aspects of neural development, and aiding in recovery from neural damage.

Gray matter is the main component of the central nervous system (CNS) and consists of neuronal cell bodies, dendrites, myelinated and unmyelinated axons, glial cells, synapses, and capillaries. The cerebral cortex is made up of layers of neurons that constitute the gray matter of the brain. The subcortical (beneath the cortex) area is primarily white matter composed of myelinated axons with fewer quantities of cell bodies when compared to gray matter.

Although neurons can have different morphologies, they all contain four common regions: the cell body, the dendrites, the axon, and the axon terminals, each with its respective functions.

The cell body contains a nucleus where proteins and membranes are synthesized. These proteins travel through microtubules down to the axons and terminals via a mechanism known as anterograde transport. In retrograde transport, damaged membranes and organelles travel from the axon toward the cell body along axonal microtubules. Lysosomes are only present in the cell body and are responsible for containing and degrading damaged material. The axon is a thin continuation of a neuron that allows electrical impulses to be sent from neuron to neuron.

Astrocytes occupy 25% of the total brain volume and are the most abundant glial cells. [6]  They are classified into two main groups: protoplasmic and fibrous. Protoplasmic astrocytes appear in gray matter and have several branches that contact both synapses and blood vessels. Fibrous astrocytes are present in the white matter and have long fiber-like processes that contact the nodes of Ranvier and the blood vessels. Astrocytes use their connections to vessels to titrate blood flow in response to synaptic activity. Astrocytic endfeet, which form tight junctions with endothelial cells and the basal lamina, give rise to the blood-brain barrier (BBB). [7]

The primary function of oligodendrocytes is to make myelin, a proteolipid critical for maintaining electrical impulse conduction and maximizing its velocity. Myelin is laid down in segments separated by nodes of Ranvier, and oligodendrocytes serve a function equivalent to that of Schwann cells in the peripheral nervous system.

The macrophage populations of the CNS include microglia, perivascular macrophages, meningeal macrophages, macrophages of the circumventricular organs (CVO), and the microglia of the choroid plexus. Microglia are phagocytic cells representing the immune and support system of the CNS and are the most abundant cells of the choroid plexus. [8]

  • Development

Human brain development starts with the neurulation process from the ectodermic layer of the embryo and takes, on average, 20 to 25 years to mature. [9]  It occurs in a sequential and organized manner, beginning with the neural tube formation at the third or fourth week of gestation. This is followed by cell migration and proliferation that leads to the folding of the cerebral cortex to increase its size and surface area, creating a more complex structure. Failure of this migration and proliferation leads to a smooth brain without sulci or gyri, termed lissencephaly. [10] At birth, the general architecture of the brain is mostly complete, and by the age of 5 years, the total brain volume is about 95% of its adult size. Generally, the white matter increases with age, while the gray matter decreases with age.

The brain's most prominent white matter structure, the corpus callosum, increases by approximately 1.8% per year between the ages of 3 and 18 years. [11]  The corpus callosum conjugates the activity of the right and left hemispheres and allows for the progress of higher-order cognitive abilities.
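As a back-of-envelope illustration of that growth rate, compounding 1.8% per year over the 15 years between ages 3 and 18 implies roughly a 30% total increase. The sketch below uses only the rate from the text; the starting volume of 1.0 is a placeholder, not a measurement.

```python
def projected_growth(initial_volume: float, annual_rate: float, years: int) -> float:
    """Compound a constant annual growth rate over a number of years."""
    return initial_volume * (1 + annual_rate) ** years

# Corpus callosum growth at ~1.8%/year over ages 3-18 (15 years),
# relative to a placeholder starting volume of 1.0
factor = projected_growth(1.0, 0.018, 18 - 3)
print(f"Total increase: {(factor - 1) * 100:.1f}%")  # ~30.7%
```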

Gray matter in the frontal lobe undergoes continued structural development, reaching its maximal volume at 11 to 12 years of age before slowing down during adolescence and early adulthood. The gray matter in the temporal lobe follows a similar development pattern, reaching its maximum size at 16 to 17 years of age with a slight decline afterward. [12]

Below is a list of the brain vesicles and the areas of the brain that develop from them (see Image. Forebrain or Prosencephalon). [13]

Prosencephalon (Forebrain)

  • Cerebral cortex
  • Basal ganglia (caudate nucleus, putamen, and globus pallidus)
  • Hippocampus
  • Lateral ventricles
  • Hypothalamus
  • Epithalamus (pineal gland)
  • Subthalamus
  • Posterior pituitary
  • Optic nerve
  • Third ventricle

Mesencephalon (Midbrain)

  • Midbrain (tectum and tegmentum)
  • Cerebral aqueduct

Rhombencephalon (Hindbrain)

  • Pons and cerebellum (rostral, surrounding the rostral fourth ventricle)
  • Medulla oblongata (caudal, surrounding the caudal fourth ventricle)

  • Organ Systems Involved

The brain and the spinal cord comprise the central nervous system (CNS). The peripheral nervous system (PNS) subdivides into the somatic nervous system and the autonomic nervous system (ANS). The somatic nervous system consists of peripheral sensory fibers that carry information to the CNS and motor fibers that send commands from the CNS to skeletal muscle. The ANS controls the smooth muscle of the viscera and glands and consists of the sympathetic, parasympathetic, and enteric nervous systems (ENS).

Nerves from the brain connect with multiple parts of the head and body, mediating various voluntary and involuntary functions. The ANS drives basic functions that control unconscious activities such as breathing, digestion, sweating, and shivering.

The ENS provides the intrinsic innervation of the gastrointestinal system and is the most neurochemically diverse branch of the PNS. [14]  Neurotransmitters such as norepinephrine, epinephrine, dopamine, and serotonin have recently been a topic of interest due to their roles in gut physiology and CNS pathophysiology, as they aid in regulating gut blood flow, motility, and absorption. [15]

The cerebrum controls motor and sensory information, conscious and unconscious behaviors, feelings, intelligence, and memory. The left hemisphere controls speech and abstract thinking (the ability to think about things that are not present). In contrast, the right hemisphere controls spatial thinking (thinking that finds meaning in the shape, size, orientation, and location of objects and phenomena). See Figure. Homunculus, Sensory and Motor.

The motor and sensory neurons descending from the brain cross to the opposite side in the brainstem. This crossing means that the right side of the brain controls the motor and sensory functions of the left side of the body, and the left side of the brain controls the motor and sensory functions of the right side of the body. Hence, a stroke affecting the left brain hemisphere, for example, will exhibit motor and sensory deficits on the right side of the body.

Sensory neurons bring sensory input from the body to the thalamus, which then relays this information to the cerebrum. Basic drives such as hunger, thirst, and sleep are under the control of the hypothalamus.

The cerebrum is composed of four lobes:

  • Frontal lobe: Responsible for motor function, language, and cognitive processes, such as executive function, attention, memory, affect, mood, personality, self-awareness, and social and moral reasoning. [16]  The Broca area is located in the left frontal lobe and is responsible for the production and articulation of speech.
  • Parietal lobe: Responsible for interpreting vision, hearing, motor, sensory, and memory functions. 
  • Temporal lobe: In the left temporal lobe, the Wernicke area is responsible for understanding spoken and written language. The temporal lobe is also an essential part of the social brain, as it processes sensory information to retain memories, language, and emotions. [17]  The temporal lobe also plays a significant role in hearing and spatial and visual perception.
  • Occipital lobe: The visual cortex is located in the occipital lobe and is responsible for interpreting visual information. See Figure. Areas of localization, Lateral Surface of Hemisphere.

The cerebellum controls the coordination of voluntary movement and receives sensory information from the brain and spinal cord to fine-tune the precision and accuracy of motor activity. The cerebellum also aids in various cognitive functions such as attention, language, pleasure response, and fear memory. [18]

The brainstem acts as a bridge that connects the cerebrum and cerebellum to the spinal cord (see Image. Pathways From the Brain to the Spinal Cord). The brainstem houses the principal centers for autonomic functions such as breathing, temperature regulation, heart rate, wake-sleep cycles, coughing, sneezing, digestion, vomiting, and swallowing. The brainstem contains both white and gray matter. The white matter consists of fiber tracts (neuronal cell axons) traveling down from the cerebral cortex for voluntary motor function and up from the spinal cord and peripheral nerves, allowing somatosensory information to reach the highest parts of the brain. [19]

The brain represents 2% of the human body weight yet consumes 15% of the cardiac output and 20% of total body oxygen. The resting brain consumes 20% of the body's energy supply. When the brain performs a task, energy consumption increases by only an additional 5%, indicating that most of the brain's energy is used for intrinsic functions.
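The disproportion described above can be made concrete with a quick calculation. This is a sketch using only the percentages stated in this paragraph, not independent measurements.

```python
# Figures stated in the text, expressed as fractions
body_weight_fraction = 0.02     # brain is about 2% of body weight
resting_energy_fraction = 0.20  # about 20% of the body's resting energy supply
task_increase = 0.05            # additional consumption during a task

# Per unit of weight, the brain demands roughly 10x the body-average energy
relative_demand = resting_energy_fraction / body_weight_fraction

# During a task, intrinsic (resting) activity still dominates total consumption
resting_share = 1 / (1 + task_increase)
print(relative_demand, round(resting_share, 3))
```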

The brain uses glucose as its principal source of energy. During low glucose states, the brain utilizes ketone bodies as its primary energy source. During exercise, the brain can use lactate as a source of energy.

In the developing brain, neurons follow molecular signals from regulatory cells like astrocytes to determine their location, the type of neurotransmitter they will secrete, and with which neurons they will communicate, leading to the formation of a circuit between neurons that will be in place during adulthood. In the adult brain, developed neurons fit in their corresponding place and develop axons and dendrites to connect with the neighboring neurons. [20]

Neurons communicate via neurotransmitters released into the synaptic space, a 20- to 50-nanometer gap between neurons. The neuron that releases the neurotransmitter into the synaptic space is called the presynaptic neuron, and the neuron that receives it is called the postsynaptic neuron. An action potential in the presynaptic neuron leads to calcium influx and the subsequent release of neurotransmitters from their storage vesicles into the synaptic space. The neurotransmitter then diffuses to the postsynaptic neuron and binds to receptors to influence its activity. Neurotransmitters are rapidly removed from the synaptic space by enzymes. [21]

The oligodendrocytes in the CNS produce myelin. Myelin forms insulating sheaths around axons to allow the swift travel of electrical impulses through the axons. The nodes of Ranvier are gaps in the myelin sheath of axons, allowing sodium influx into the axon to help maintain the speed of the electrical impulse traveling through the axon. This transmission is called saltatory nerve conduction, the "jumping" of electrical impulses from one node to another. It ensures that electrical signals do not lose their velocity and can propagate long distances without signal deterioration. [22]
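As a rough quantitative illustration of why myelination matters for speed, conduction velocity in myelinated axons is often estimated with the empirical Hursh rule of thumb of about 6 m/s per micrometer of fiber diameter. That factor is a textbook approximation, not a value given in this article.

```python
def myelinated_velocity_m_per_s(diameter_um: float, hursh_factor: float = 6.0) -> float:
    """Estimate saltatory conduction velocity for a myelinated axon.

    Uses the empirical Hursh approximation (~6 m/s per micrometer of
    fiber diameter); an assumption for illustration, not from the text.
    """
    return hursh_factor * diameter_um

# A 10-um myelinated fiber conducts around 60 m/s, versus roughly
# 1 m/s or less for a comparable unmyelinated axon.
print(myelinated_velocity_m_per_s(10.0))  # 60.0
```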

  • Related Testing

Functional magnetic resonance imaging (fMRI) can track the effects of neural activity and the energy that the brain consumes by measuring components of the metabolic chain. Other techniques, such as single-photon emission computed tomography (SPECT), study cerebral blood flow and neuroreceptors. Positron emission tomography (PET) assesses the glucose metabolism of the brain. [23]  Electroencephalography (EEG) records the brain's electrical activity and is very useful for detecting various brain disorders. Advancements in these techniques have enabled a broader vision and objective perceptions of mental disorders, leading to improved diagnosis, treatment, and prognosis.

  • Pathophysiology

Injury to the brain stimulates the proliferation of astrocytes, an immunological response called "reactive gliosis" that is also seen in neurodegenerative disorders. [24]  Damage to neural tissue promotes molecular and morphological changes, most notably upregulation of the glial fibrillary acidic protein (GFAP). Epidermal growth factor receptor (EGFR) signaling drives the transition from non-reactive to reactive astrocytes, and its inhibition improves axonal regeneration and speeds recovery. When astrocytes become reactive, they proliferate and hypertrophy, leading to glial scar formation.

The microglia represent the immune and support system of the CNS. They are neuroprotective in the young brain but can react abnormally to stimuli in the aged brain and become neurotoxic and destructive, leading to neurodegeneration. [25]  As the brain ages, microglia acquire an increasingly inflammatory and cytotoxic phenotype, generating a hazardous environment for neurons. [26]  Hence, aging is the most critical risk factor for developing neurodegenerative diseases.

The brain is surrounded by cerebrospinal fluid and is isolated from the bloodstream by the blood-brain barrier (BBB). In cases like infectious meningitis and meningoencephalitis, acute inflammation causes a breakdown of the BBB, leading to the influx of blood-borne immune cells into the CNS. In other inflammatory brain disorders such as Alzheimer disease (AD), Parkinson disease (PD), Huntington disease (HD), or X-linked adrenoleukodystrophy, the primary insult is due to degenerative or metabolic processes, and there is no breakdown of the BBB. [27]

Oligodendrocyte loss can occur due to the production of reactive oxygen species or the activation of inflammatory cytokines, causing decreased myelin production and leading to conditions such as multiple sclerosis (MS). [22]

Disturbances in the neurotransmitter systems arise from impairments in these substances' production, release, reuptake, or receptors and can cause neurologic or psychiatric disorders. Glutamate is the brain's most abundant excitatory neurotransmitter, while GABA is the primary inhibitory transmitter. Glycine has a similar inhibitory action in the posterior parts of the brain. Acetylcholine aids in processes such as muscle stimulation at the neuromuscular junction (NMJ), digestion, arousal, salivation, and attention. Dopamine is involved in reward and motivation, motor control, and the regulation of prolactin release. Serotonin influences mood, feelings of happiness, and anxiety. Norepinephrine is involved in arousal, alertness, vigilance, and attention.

Cerebral oxygen delivery and consumption rates are ten times higher than global body values. [28]  Blood glucose represents the primary energy source for the brain, and the BBB is highly permeable to it. During low glucose states, the body has developed multiple ways to keep blood glucose within the normal range. As the level drops below 80 mg/dL, pancreatic beta-cells decrease insulin secretion to avoid further glucose decrease. If glucose drops further, pancreatic alpha-cells secrete glucagon, and the adrenal medulla releases epinephrine. Glucagon and epinephrine increase blood glucose levels. Cortisol and growth hormone also act to increase glucose, but they depend on the presence of glucagon and epinephrine to work.
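The counter-regulatory sequence above can be sketched as a simple threshold cascade. The 80 mg/dL figure comes from the text; the 70 mg/dL cutoff for the "further drop" is an illustrative assumption, not a clinical threshold.

```python
def counter_regulatory_responses(glucose_mg_dl: float) -> list[str]:
    """Return the hormonal responses the text describes for a glucose level."""
    responses = []
    if glucose_mg_dl < 80:  # threshold stated in the text
        responses.append("beta cells decrease insulin secretion")
    if glucose_mg_dl < 70:  # assumed threshold for the further decline
        responses.append("alpha cells secrete glucagon")
        responses.append("adrenal medulla releases epinephrine")
    return responses

print(counter_regulatory_responses(75))  # ['beta cells decrease insulin secretion']
print(counter_regulatory_responses(60))  # all three responses
```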

  • Clinical Significance

Damage to the Cerebrum

  • Frontal lobe -  Damage to the frontal lobe causes interruption of the higher functioning brain processes, including social behavior, planning, motivation, and speech production. Individuals with frontal lobe damage may be unable to regulate their emotions, have meaningful or appropriate social interactions, maintain their past personality traits, or make difficult decisions. [29]
  • Temporal lobe - The Wernicke area is located in the superior temporal gyrus in an individual's dominant hemisphere, which is the left hemisphere for 95% of people. Damage to the left (dominant) temporal lobe can lead to Wernicke aphasia. This is typically referred to as "word salad" speech, where the patient will speak fluently, but their words and sentences will lack meaning. [30]  Damage to the right (non-dominant) temporal lobe may lead to persistent talking and deficits in nonverbal memory, processing certain aspects of sound or music (tone, rhythm, pitch), and facial recognition (prosopagnosia).
  • Parietal lobe -  Damage to the frontal aspect of the parietal lobe may lead to impaired sensation and numbness on the contralateral side of the body. An individual may have difficulty recognizing texture and shape and may be unable to identify a sensation and its location on their body. Damage to the middle aspect of the parietal lobe can lead to right-left disorientation and difficulty with proprioception. Damage to the non-dominant (right) parietal lobe may lead to apraxia (difficulty with performing purposeful motions such as combing hair or brushing teeth) and difficulty with spatial orientation and navigation (they may get lost in a once familiar area). Patients with non-dominant parietal lobe damage, usually from a middle cerebral artery stroke, may neglect the side opposite of the brain damage (usually the left side), which may manifest as only shaving the right side of their face or drawing a clock with all of the numbers on the right side of the circle. [31]
  • Occipital lobe -  Damage to the occipital lobe may lead to visual defects, color agnosia (inability to identify colors), movement agnosia (difficulty recognizing object movements), hallucinations, illusions, and the inability to recognize written words (word blindness). 

Damage to the Cerebellum

Damage to the cerebellum can lead to ataxia, dysmetria, dysarthria, scanning speech, dysdiadochokinesis, tremor, nystagmus, and hypotonia. To test for possible cerebellar dysfunction, a bedside neurologic exam is commonly the first step. This exam may include the Romberg test, heel-to-shin test, finger-to-nose test, and rapid alternating movement test. [32]

Damage to the Brainstem

Damage to the brainstem may present as muscle weakness, visual changes, dysphagia, vertigo, speech impairment, pupil abnormalities, insomnia, respiratory depression, or death.

Neurodegenerative Diseases

Neuronal degeneration worsens with age and can affect different areas of the brain, leading to problems with movement, memory, and cognition.

Parkinson disease (PD) occurs due to the degeneration of the neurons that synthesize dopamine, leading to motor function deficits. Alzheimer disease (AD) occurs due to abnormally folded protein deposits in the brain, leading to neuronal degeneration. Huntington disease occurs due to a genetic mutation that increases the production of the neurotransmitter glutamate; excessive glutamate kills neurons in the basal ganglia, producing movement, cognitive, and psychiatric deficits. Vascular dementia occurs due to neuronal death resulting from the interruption of blood supply.

Although neurodegenerative diseases are not classically caused by disturbed metabolism, research has shown that there is a reduction in glucose metabolism in Alzheimer disease. [33]

Demyelinating Diseases

Demyelinating diseases result from damage to the myelin sheath that covers the nerve cells in the white matter of the brain, spinal cord, and optic nerves. For example, multiple sclerosis and leukodystrophies are a consequence of oligodendrocyte damage.

Stroke

A stroke is caused by an interruption in the blood supply to the brain, which may ultimately lead to neuronal death. This condition can result in one of several neurological problems depending on the affected region.

Brain Death

Neurologic evaluation of brain death is a complicated process that non-specialists and families might misunderstand. [34]  Brain death is the complete and irreversible loss of brain activity, including the brainstem. It requires verification through well-established clinical protocols and the support of specialized tests.

Hypoglycemia

Glucose is the primary energy source responsible for maintaining brain metabolism and function. The most significant amount of glucose is used for information processing by neurons. [35]  The brain requires a continuous supply of glucose as it has limited glucose reserves. CNS symptoms and signs of hypoglycemia include focal neurological deficits, confusion, stupor, seizure, cognitive impairment, or death.


Human Brain, Encephalon. Illustrated brain anatomy includes the cerebrum, cerebellum, and pons; the cerebral, superior, middle, and inferior peduncles; and medulla oblongata. Henry Vandyke Carter, Public Domain, via Wikimedia Commons

Forebrain or Prosencephalon. The illustration depicts the mesial aspect of a brain sectioned in the median sagittal plane, including the foramen of Monro, middle commissure, taenia thalami, habenular commissure, genu, callosum, fornix, (more...)

Areas of localization, Lateral Surface of Hemisphere. The figure depicts the motor area in red, the area of general sensations in blue, the auditory area in green, the visual area in yellow, and the psychic portions in lighter tints. Henry (more...)

Pathways From the Brain to the Spinal Cord. The figure shows the motor tract, anterior nerve roots, anterior and lateral cerebrospinal fasciculus, decussation of pyramids, geniculate fibers, internal capsule, and motor area of cortex. Henry Vandyke Carter, (more...)

Homunculus, Sensory and Motor. Contributed by S Bhimji, MD

Disclosure: Kenia Maldonado declares no relevant financial relationships with ineligible companies.

Disclosure: Khalid Alsayouri declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Maldonado KA, Alsayouri K. Physiology, Brain. [Updated 2023 Mar 17]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


The miracle of ‘dog’

Study illuminates complex cognitive steps behind even the simplest words, with implications for treatment of neurological disorders

MGH Communications

By using advanced brain recording techniques, a new study led by researchers from Harvard-affiliated Massachusetts General Hospital demonstrates how neurons in the human brain work together to allow people to think about what words they want to say and then produce them aloud through speech.

The findings provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken and how they are strung together during language production.

The work, which is published in Nature, could lead to improvements in the understanding and treatment of speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech — including coming up with the words we want to say, planning the articulatory movements, and producing our intended vocalizations,” says senior author Ziv Williams, an associate professor in neurosurgery at MGH and Harvard Medical School.

“Our brains perform these feats surprisingly fast — about three words per second in natural speech — with remarkably few errors. Yet how we precisely achieve this feat has remained a mystery.”


When they used a cutting-edge technology called Neuropixels probes to record the activities of single neurons in the prefrontal cortex, a frontal region of the human brain, Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that there are separate groups of neurons in the brain dedicated to speaking and listening.

“The use of Neuropixels probes in humans was first pioneered at MGH,” said Williams. “These probes are remarkable — they are smaller than the width of a human hair, yet they also have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons.”

Williams worked to develop the recording techniques with Sydney Cash, a professor in neurology at MGH and Harvard Medical School, who also helped lead the study.

The research shows how neurons represent some of the most basic elements involved in constructing spoken words — from simple speech sounds called phonemes to their assembly into more complex strings such as syllables.

For example, the consonant “da,” which is produced by touching the tongue to the hard palate behind the teeth, is needed to produce the word dog. By recording individual neurons, the researchers found that certain neurons become active before this phoneme is spoken out loud. Other neurons reflected more complex aspects of word construction such as the specific assembly of phonemes into syllables.

With their technology, the investigators showed that it’s possible to reliably determine the speech sounds that individuals will utter before they articulate them. In other words, scientists can predict what combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build artificial prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders — including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more,” said Arjun Khanna, a postdoctoral fellow in the Williams Lab and a co-author on the study. “Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders.”

The researchers hope to expand on their work by studying more complex language processes that will allow them to investigate questions related to how people choose the words that they intend to say and how the brain assembles words into sentences that convey an individual’s thoughts and feelings to others.

Additional authors include William Muñoz, Young Joon Kim, Yoav Kfir, Angelique C. Paulk, Mohsen Jamali, Jing Cai, Martina L Mustroph, Irene Caprara, Richard Hardstone, Mackenna Mejdell, Domokos Meszena, Abigail Zuckerman, and Jeffrey Schweitzer.

The research was supported by the National Institutes of Health.


Neuroscience News logo for mobile.

How the Brain Crafts Words Before Speaking

Summary: A new study utilizes advanced Neuropixels probes to unravel the complexities of how the human brain plans and produces speech. The team identified specific neurons in the prefrontal cortex involved in the language production process, including the separate neural pathways for speaking and listening.

Their findings illustrate how the brain represents phonemes and assembles them into syllables, providing insights that could revolutionize treatments for speech and language disorders.

This research not only enhances our understanding of the neural underpinnings of speech but also opens the door to developing technologies for synthetic speech production, offering hope for individuals with neurological disorders affecting communication.

  • Neuropixels probes were used to record activities of individual neurons involved in planning and producing speech, revealing the brain’s intricate processes for language production.
  • The study identified separate groups of neurons dedicated to speaking and listening, and how the brain constructs speech sounds before they are spoken.
  • This research could lead to the development of synthetic speech prosthetics and treatments for a wide range of neurological disorders affecting speech and language.

Source: Mass General

By using advanced brain recording techniques, a new study led by researchers from Massachusetts General Hospital (MGH) demonstrates how neurons in the human brain work together to allow people to think about what words they want to say and then produce them aloud through speech. Together, these findings provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken, and how they are strung together during language production.

The work, published in Nature, reveals insights into the neurons that enable language production, which could lead to improvements in the understanding and treatment of speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech—including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations,” says senior author Ziv Williams, MD, an associate professor in Neurosurgery at MGH and Harvard Medical School. “Our brains perform these feats surprisingly fast—about three words per second in natural speech—with remarkably few errors. Yet how we precisely achieve this feat has remained a mystery.”

When they used a cutting-edge technology called Neuropixels probes to record the activities of single neurons in the prefrontal cortex, a frontal region of the human brain, Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that there are separate groups of neurons in the brain dedicated to speaking and listening.

“The use of Neuropixels probes in humans was first pioneered at MGH. These probes are remarkable—they are smaller than the width of a human hair, yet they have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons,” says Williams, who worked to develop these recording techniques with Sydney Cash, MD, PhD, a professor in Neurology at MGH and Harvard Medical School, who also helped lead the study.

“Use of these probes can therefore offer unprecedented new insights into how neurons in humans collectively act and how they work together to produce complex human behaviors such as language.”

The study showed how neurons in the brain represent some of the most basic elements involved in constructing spoken words—from simple speech sounds called phonemes to their assembly into more complex strings such as syllables. For example, the consonant “da,” which is produced by touching the tongue to the hard palate behind the teeth, is needed to produce the word dog. By recording individual neurons, the researchers found that certain neurons become active before this phoneme is spoken out loud. Other neurons reflected more complex aspects of word construction, such as the specific assembly of phonemes into syllables.

With their technology, the investigators showed that it’s possible to reliably determine the speech sounds that individuals will say before they articulate them. In other words, scientists can predict what combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build artificial prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders—including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more,” says Arjun Khanna, a co-author on the study.

“Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders.” The researchers hope to expand on their work by studying more complex language processes that will allow them to investigate questions related to how people choose the words that they intend to say and how the brain assembles words into sentences that convey an individual’s thoughts and feelings to others.
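The prediction described in the article, determining which speech sound a person will say from recorded neural activity, is at heart a classification problem. The sketch below is a toy illustration only: the firing rates, neuron count, and nearest-centroid decoder are invented for this example and are far simpler than the models used in the study.

```python
import numpy as np

# Toy sketch of decoding an upcoming phoneme from neural firing rates.
# Everything here is synthetic: 5 imaginary neurons, 3 phonemes, and a
# nearest-centroid "decoder" far simpler than the study's actual models.

rng = np.random.default_rng(0)
phonemes = ["d", "a", "g"]

# Hypothetical mean firing rates (spikes/s) per neuron for each phoneme.
means = {"d": [20, 5, 5, 12, 3], "a": [4, 18, 6, 2, 10], "g": [6, 4, 16, 9, 2]}

# Simulate 20 "trials" per phoneme with Poisson spike-count noise.
X = np.vstack([rng.poisson(means[p], size=(20, 5)) for p in phonemes])
y = np.array([p for p in phonemes for _ in range(20)])

# "Training": the average firing-rate profile for each phoneme.
centroids = {p: X[y == p].mean(axis=0) for p in phonemes}

def decode(rates):
    """Predict the upcoming phoneme as the nearest mean-rate profile."""
    return min(centroids, key=lambda p: np.linalg.norm(rates - centroids[p]))

print(decode(np.array([19, 6, 4, 11, 2])))  # resembles the "d" profile
```

A real decoder would work on high-dimensional spike trains over time, but the principle, matching observed activity against learned per-phoneme patterns before the sound is uttered, is the same.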


Funding: This work was supported by the National Institutes of Health.

About this neuroscience and speech research news

Author: Brandon Chase
Source: Mass General
Contact: Brandon Chase – Mass General
Image: The image is credited to Neuroscience News

Original Research: Closed access. “Single-neuronal elements of speech production in humans” by Ziv Williams et al. Nature

Single-neuronal elements of speech production in humans

Humans are capable of generating extraordinarily diverse articulatory movement combinations to produce meaningful speech. This ability to orchestrate specific phonetic sequences, and their syllabification and inflection over subsecond timescales allows us to produce thousands of word sounds and is a core component of language.

The fundamental cellular units and constructs by which we plan and produce words during speech, however, remain largely unknown. Here, using acute ultrahigh-density Neuropixels recordings capable of sampling across the cortical column in humans, we discover neurons in the language-dominant prefrontal cortex that encoded detailed information about the phonetic arrangement and composition of planned words during the production of natural speech.

These neurons represented the specific order and structure of articulatory events before utterance and reflected the segmentation of phonetic sequences into distinct syllables. They also accurately predicted the phonetic, syllabic and morphological components of upcoming words and showed a temporally ordered dynamic.

Collectively, we show how these mixtures of cells are broadly organized along the cortical column and how their activity patterns transition from articulation planning to production.

We also demonstrate how these cells reliably track the detailed composition of consonant and vowel sounds during perception and how they distinguish processes specifically related to speaking from those related to listening.

Together, these findings reveal a remarkably structured organization and encoding cascade of phonetic representations by prefrontal neurons in humans and demonstrate a cellular process that can support the production of speech.



An editorially independent publication supported by the Simons Foundation.


The Brain Processes Speech in Parallel With Other Sounds

October 21, 2021


The sounds of speech can be buried within a cacophonous soundscape. To perceive them more quickly, the brain’s auditory system seems to tease them out for parallel processing very early.

Ana Kova/Quanta Magazine

Introduction

Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities.

According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words.
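The hierarchical buildup described above, from phonemes to syllables to words, can be caricatured in a few lines of code. The syllabifier below is a deliberately crude stand-in (one vowel per syllable, trailing consonants attached to the last syllable), not a linguistically accurate algorithm.

```python
# Toy illustration of the traditional hierarchical view:
# phonemes are grouped into syllables, which in turn form words.

VOWELS = set("aeiou")

def syllabify(phonemes):
    """Group a phoneme list into crude syllables, one vowel per syllable.
    A toy stand-in for the assembly steps described in the text."""
    syllables, current = [], []
    for p in phonemes:
        current.append(p)
        if p in VOWELS:
            syllables.append("".join(current))
            current = []
    if current:  # attach trailing consonants to the last syllable
        if syllables:
            syllables[-1] += "".join(current)
        else:
            syllables.append("".join(current))
    return syllables

print(syllabify(list("banana")))  # ['ba', 'na', 'na']
print(syllabify(list("dog")))     # ['dog']
```

The point of the contrast drawn in the article is that the brain may not build representations in this strictly staged way at all.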

But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds.

The work offers hints of a new explanation for how the brain can unbraid overlapping streams of auditory stimuli so quickly and effectively. Yet in doing so, the discovery doesn’t just call into question more established theories about speech processing; it also challenges ideas about how the entire auditory system works. Much of the prevailing wisdom about our perception of sounds is based on analogies to what we know about computations performed in the visual system. But growing evidence, including the recent study on speech, hints that auditory processing works very differently — so much so that scientists are starting to rethink what the various parts of the auditory system are doing and what that means for how we decipher rich soundscapes.

“This study is a monumental undertaking,” said Dana Boebinger , a cognitive neuroscientist at Harvard University who was not involved in the work. Although she is not ready to abandon more conventional theories about how the brain processes complex auditory information, she finds the results “provocative” because they hint that “maybe we don’t actually have a very good idea of what’s going on.”

Turning a Hierarchy on Its Head

The earliest steps in our perception of sound are very well understood. When we hear someone speak, the cochlea in our ear separates the complex sound into different component frequencies and sends that representation through several stages of processing to the auditory cortex. At first, information is extracted from those signals about a sound’s location in space, its pitch and how much it is changing. What happens next is trickier to nail down: Higher cortical regions are thought to tease out features specifically relevant to speech — from phonemes to prosody — in a hierarchical sequence. The features of other complex types of sounds, such as music, would be handled similarly.
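The cochlea's first step, separating a complex sound into its component frequencies, can be mimicked numerically with a Fourier transform. This is only an illustrative analogy: the cochlea performs the decomposition mechanically, not via an FFT, and the tone frequencies below are arbitrary.

```python
import numpy as np

# Sketch of the cochlea's first processing step: decomposing a sound
# into component frequencies. A plain FFT stands in for the mechanical
# frequency analysis the cochlea actually performs.

fs = 1000                      # sample rate (Hz)
t = np.arange(fs) / fs         # one second of signal
# A "complex sound": a 50 Hz tone mixed with a quieter 120 Hz tone.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest components recover the tones that were mixed in.
top2 = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top2))  # [50.0, 120.0]
```

Everything downstream of this stage, extracting location, pitch, phonemes, and so on, is where the hierarchical-versus-parallel debate in the article begins.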

This arrangement echoes models of how the visual system works: It interprets patterns of light falling on cells in the retina first as lines and edges, and then as more complex features and patterns, ultimately building up a representation of a face or an object.

But dissecting the details of the flow of auditory information has been difficult. Studies of speech can’t get far by using animals because speech is a uniquely human trait. And in humans, most research has to use indirect methods to measure brain activity. Getting direct recordings is much trickier because it’s invasive: Scientists need to piggyback on medical procedures, collecting data from electrodes implanted in the brains of patients getting surgery for epilepsy. But many auditory regions of interest are nestled deep within the brain between the frontal and temporal lobes — an area where surgeons don’t usually seek recordings.

Still, many of those direct and indirect studies found evidence for the traditional hierarchical model of auditory and speech processing: One of the early stops in the process, the primary auditory cortex, seems to be tuned to encode simple features of sounds, such as frequency. As the signals progress away from the primary auditory cortex, other brain regions seem to respond more to increasingly complex sound features instead, including features unique to speech, like phonemes. So far, so good.

But scientists deduced this hierarchical framework “based on experiments that weren’t necessarily looking to see how these regions were connected” or the sequences in which they became active, said Liberty Hamilton , a neuroscientist at the University of Texas, Austin.

And so, in 2014, she set out to build a more comprehensive map of speech sound representations throughout the auditory cortex, to learn what kind of information gets distilled from a sound in different brain areas, and how that information gets integrated from one region to the next.

She had a rare opportunity to explore that question, first as a postdoctoral researcher in the lab of Edward Chang , a neurosurgeon at the University of California, San Francisco, and then in her own lab in Austin. Chang, Hamilton and their colleagues were able to bring together several patients whose treatment had required electrode grids to be placed in various auditory locations.


A team of researchers that included Liberty Hamilton (left), a neuroscientist at the University of Texas, Austin, and Edward Chang (right), a neurosurgeon at the University of California, San Francisco, mapped out how different parts of the auditory cortex process features of speech and other sounds.

Courtesy of Liberty Hamilton; Tom Seawell for UCSF

Because opportunities to monitor those areas are so hard to come by, their recordings were “super precious data, and exciting,” Boebinger said. The researchers had hoped to be able to fill in details about how the brain transforms the low-level sound representations in the primary auditory cortex into more complex representations of speech sounds in a region further up in the hierarchy, the superior temporal gyrus.

Instead, what they found “sort of turned the idea on its head,” Hamilton said.

Pathways Diverging Early

The first hint that things weren’t proceeding as anticipated arrived quickly. The Chang group analyzed the responses of diverse auditory regions to features of pure tones and spoken words and sentences. They were able to confirm previous findings and fill in details of the map that had been missing.

But they also observed something strange. If information flowed hierarchically from “lower” to “higher” areas as they thought, then the primary auditory cortex should respond to an input before the superior temporal gyrus did. Yet some areas of the superior temporal gyrus seemed to respond to the onset of speech just as quickly as the primary auditory cortex responded to simple sound characteristics, like frequency.

The observation invited a tantalizing hypothesis: that the two brain regions were processing different aspects of the same input in parallel, and that “this parallel pathway for speech perception can bypass the primary auditory cortex — which is where we thought all of the information was supposed to go,” Hamilton said. That would mean some representations of speech sounds didn’t need to be built out of lower-level features extracted in the primary auditory cortex. “In a hierarchical model, you expect that the primary auditory cortex is the first way station that you have to go through before getting to the speech areas of the cortex,” Hamilton said. But her results suggested that that’s not necessarily true.

Samuel Velasco/Quanta Magazine

Chang, Hamilton and their colleagues decided to test that idea further. When they stimulated patients’ primary auditory cortex to disrupt its function, the patients still had no problem perceiving speech. Instead, they reported auditory hallucinations: sounds on top of the words or sentences they were hearing, ranging from buzzing and tapping to running water and shifting gravel.

When the researchers stimulated a subregion of the superior temporal gyrus, they saw the opposite: Patients could not understand speech but could still apparently hear sounds normally. “I could hear you speaking but can’t make out the words,” one subject reported.

Once again, “it was like there are just two separate processes,” Hamilton said — independent pathways for the processing of sounds and supposedly higher-level features associated with speech.

Finding parallel processing in the auditory cortex isn’t entirely a surprise. “Hierarchies are nice and clean when you’re talking about perceptual systems, because you know that at some level, you’re going from a noisy signal to something higher order and more abstract,” said Sophie Scott , a neuroscientist at University College London who did not participate in the study. “But no one ever told nature that that had to be the easiest or cleanest way of doing it.”

It only makes sense that at some point, separate brain circuits have to handle different types of auditory information simultaneously. In fact, researchers have already reported parallel functions at later stages in auditory processing: Complex musical and speech elements are processed separately, with their representations forming at least partly in parallel.

But those splits in speech and sound processing happen only after signals have passed through the primary auditory cortex. Hamilton and Chang’s work uncovered such a branch point very early in the process — so early that it might mean that information gets integrated to represent speech sounds at the subcortical level, rather than just in the cortex. And if subcortical processing plays such a large role in speech, researchers might also have overlooked other important ways in which the brain makes sense of complex sounds.

“We’ve learned again and again over the years that a lot of the things that we thought are cortical actually have, at least to some extent, also been subcortical,” said Israel Nelken , a neurobiologist and director of the Edmond and Lily Safra Center for Brain Sciences at the Hebrew University of Jerusalem.

In fact, the new results demonstrate that “lower” levels of the cortex might be hiding greater complexity, too. Scott, for instance, found it intriguing that stimulating the primary auditory cortex led to such a rich set of auditory hallucinations in the Chang group’s patients. According to her, such hallucinations would typically be associated with higher-order cortical regions.

So the primary auditory cortex might be doing more than it’s typically given credit for. Other recent work has pointed to the same conclusion: In contrast to the primary visual cortex, the primary auditory cortex receives signals that have already undergone much more processing, and it represents information in a much more context-sensitive way. It’s “functionally much more downstream than primary visual cortex is,” said David Poeppel , a neuroscientist at New York University.

‘More Like a Lightning Storm’

Even so, “I don’t think we want to throw out the hierarchical baby with the bathwater entirely,” Poeppel said. There are still hierarchies in this system, and they are important for constructing increasingly abstract mental representations.

But departing from that hierarchy to process speech and other sounds in parallel very early on might offer a lot of advantages. For one, it could help to optimize the speed of the auditory system, which demands microsecond-level precision because of the transient nature of sounds. “So having this kind of parallel organization might allow you to get information about speech or other complex sounds analyzed more quickly,” Boebinger said. Moreover, auditory signals are inherently messy: Individuals drop phonemes or skip words inconsistently, and they may speak differently in different social contexts. A parallel processing system might be better at dealing with such chaotic inputs.

It might also help the auditory system to segregate complex, overlapping sounds more efficiently and allow the brain to rapidly switch attention between those acoustic streams. “There have to be multiple streams of different sorts of information being processed, all at the same time, in a very plastic way, because the auditory environment can change at the drop of a hat,” Scott said. Given the importance of speech sounds to humans, it makes sense that our brain would process them quickly and in a way that keeps them distinct from background or environmental sounds.

And if speech and the sounds that produce them get processed independently very early on, then perhaps other types of sounds do too. To find out, Hamilton and others are hoping to do experiments with a broader array of auditory inputs — environmental sounds, music, sentences spoken amid background noise rather than in silence — to examine when and where different kinds of parallel processing might occur.

“We’re just starting to be able to dissect the components of that processing,” said Robert Shannon , a neuroscientist at the University of Southern California. Perhaps representations will turn out to form not just in ascending hierarchies or in neat parallel pathways, but with so much parallelism and complexity that it’s “more like a lightning storm,” he added.

And that “is a very different picture of how sensory systems work,” Nelken said.


Research Highlights

Speech on the Brain

Published July 1, 2015

A UCSF neuroscientist and UC Berkeley linguist team up for leading-edge research that could one day help give speech back to stroke victims and people with paralyses.


Several years ago, Keith Johnson , a Professor of Linguistics at UC Berkeley, was teaching his department’s introductory course in phonetics. Phonetics is the study of the sounds in human speech, everything from the physical placement of your tongue when you say “ch” or “sh” to the social meanings of different sound enunciations. After Johnson’s first lecture, a young man came up and introduced himself as Edward F. Chang . “He was literally a brain surgeon,” Johnson says, “and he told me, ‘I need to know everything you know.’”

Chang, now a faculty member at the University of California, San Francisco (UCSF) School of Medicine, is in fact a brain surgeon. And he needed to know about phonetics because his lab is trying to map how language functions in the human brain. At the heart of this challenge is a well-known but little understood phenomenon: when we listen to a language we understand, the sounds are meaningful, not just a series of vocalizations.

Since that phonetics course, Johnson has become a team member affiliated with the Chang Lab , which brings neurosurgeons, bioengineers, and linguists together to help address questions related to language and the brain. For the surgical part of the research, Chang placed a dense mesh of 264 electrodes on the brains of volunteers, all of whom were having brain surgery for severe epileptic seizures. “You open the skull and look in at part of the brain, and this is done awake,” says Chang.

The network of electrodes allows the research team to see how the signals triggered by hearing words travel through the brain. “We’re starting at the bottom by looking at what parts of the brain light up with speaking and hearing,” says Johnson. “We are able to detect differences in sound at the millisecond range, so we can see how the brain works in real time.”

It is in the nature of these sounds that Johnson adds his contribution. All languages are made up of a limited number of phonemes, the building-block sounds, like “b” or “a,” that are combined into words and meanings. Phonetic linguistics has a sophisticated vocabulary for the sounds that make up human speech, as well as an in-depth understanding of where sounds originate in our throats and mouths.


The result of combining these two forms of expertise—linguistics and neuroscience—is that Chang’s lab is learning how the signals generated in the motor cortex make the lips, tongue, larynx and jaw move, Johnson explains. The paper he co-authored with Chang last year in Nature is the first to describe these neurological mechanisms for speech. “A lot of this sort of linguistics is not so high-powered on the neuroscience side, with behavioral linguistics people sort of dabbling, or there are neuroscientists who don’t know what the physical side is doing,” Chang says.

The ultimate goal of Chang’s lab is to learn enough about neural speech mechanisms that the lab can develop a computerized prosthetic that would give speech back to people with paralyses or victims of stroke. “The therapeutic application,” Johnson says, “is to be able to take the intention to speak, in your head, and turn it into actual sound,” even for people no longer physically able to use their mouths.

Johnson has found the interdisciplinary nature of the research fulfilling, and working with Chang’s lab has offered exposure to skills and expertise that no single researcher can access. “I wasn’t going to go out and learn brain surgery,” Johnson says. “Some questions are really addressed best, or can only be addressed, by working together across disciplines.”

Photo Credit: aboutmodafinil.com


Science News

New brain implants ‘read’ words directly from people’s thoughts.

Devices could permit communication from people with paralysis and others unable to speak


To restore someone’s lost ability to communicate, scientists used experimental brain implants to turn internal speech into external signals.

Malte Mueller/fstop/Getty Images Plus


By Laura Sanders

November 15, 2022 at 7:00 am

SAN DIEGO — Scientists have devised ways to “read” words directly from brains. Brain implants can translate internal speech into external signals, permitting communication from people with paralysis or other diseases that steal their ability to talk or type.

New results from two studies, presented November 13 at the annual meeting of the Society for Neuroscience, “provide additional evidence of the extraordinary potential” that brain implants have for restoring lost communication, says neuroscientist and neurocritical care physician Leigh Hochberg.

Some people who need help communicating can currently use devices that require small movements, such as eye gaze changes. Those tasks aren’t possible for everyone. So the new studies targeted internal speech, which requires a person to do nothing more than think.

“Our device predicts internal speech directly, allowing the patient to just focus on saying a word inside their head and transform it into text,” says Sarah Wandelt, a neuroscientist at Caltech. Internal speech “could be much simpler and more intuitive than requiring the patient to spell out words or mouth them.”

Neural signals associated with words are detected by electrodes implanted in the brain. The signals can then be translated into text, which can be made audible by computer programs that generate speech.

That approach is “really exciting, and reinforces the power of bringing together fundamental neuroscience, neuroengineering and machine learning approaches for the restoration of communication and mobility,” says Hochberg, of Massachusetts General Hospital and Harvard Medical School in Boston, and Brown University in Providence, R.I. 

Wandelt and colleagues could accurately predict which of eight words a person who was paralyzed below the neck was thinking. The man was bilingual, and the researchers could detect both English and Spanish words .

Electrodes picked up nerve cell signals in his posterior parietal cortex, a brain area involved in speech and hand movements. A brain implant there might eventually be used to control devices that can perform tasks usually done by a hand too, Wandelt says.

Another approach, led by neuroscientist Sean Metzger of the University of California, San Francisco and his colleagues, relied on spelling. The participant was a man called Pancho who hadn’t been able to speak for more than 15 years after a car accident and stroke. In the new study, Pancho didn’t use letters; instead, he attempted to silently say code words, such as “alpha” for A and “echo” for E.

By stringing these code letters into words, the man produced sentences such as “I do not want that” and “You have got to be kidding.” Each spelling session would end when the man attempted to squeeze his hand, thereby creating a movement-related neural signal that would stop the decoding. These results presented at the neuroscience meeting were also published November 8 in Nature Communications .
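The code-word spelling scheme described above is straightforward to sketch. The article names only “alpha” for A and “echo” for E; the full NATO-style alphabet below is an assumption, and the hard part, decoding neural signals into code words, is taken as already done.

```python
# Sketch of the final step of the spelling interface: decoded code words
# ("alpha" for A, "echo" for E, ...) are strung together into text.
# The NATO-style alphabet is assumed; the article names only two words.

NATO = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d", "echo": "e",
    "foxtrot": "f", "golf": "g", "hotel": "h", "india": "i", "juliett": "j",
    "kilo": "k", "lima": "l", "mike": "m", "november": "n", "oscar": "o",
    "papa": "p", "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def spell(code_words):
    """Map a sequence of decoded code words to the spelled-out text."""
    return "".join(NATO[w] for w in code_words)

print(spell(["november", "oscar"]))  # -> no
```

Code words are easier to classify from neural activity than single letters because each is a longer, more distinctive attempted utterance, which is presumably why the study used them.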

This system allowed Pancho to produce around seven words per minute. That’s faster than the roughly five words per minute his current communication device can manage, but much slower than typical speech, about 150 words a minute. “That’s the speed we’d love to hit one day,” Metzger says.

To be useful, the current techniques will need to get faster and more accurate. It’s also unclear whether the technology will work for other people, perhaps with more profound speech disorders. “These are still early days for the technologies,” Hochberg says.

Progress will be possible only with the help of people who volunteer for the studies. “The field will continue to benefit from the incredible people who enroll in clinical trials,” says Hochberg, “as their participation is absolutely vital to the successful translation of these early findings into clinical utility.”


SciTechDaily

Making the Neurodivergent Brain Visible: New Research Cracks the Autism Code


Researchers have developed a technique that accurately identifies genetic markers of autism in brain images, which could revolutionize early diagnosis and treatment.

A team of researchers co-led by University of Virginia engineering professor Gustavo K. Rohde has developed a system that can spot genetic markers of autism in brain images with 89 to 95% accuracy.

Their research, published in the journal Science Advances, indicates that doctors could use this method to see, classify, and treat autism and related neurological conditions without relying on or waiting for behavioral cues, potentially leading to earlier interventions.

“Autism is traditionally diagnosed behaviorally but has a strong genetic basis. A genetics-first approach could transform understanding and treatment of autism,” the researchers explained.


Collaborative Research and Technique Development

Rohde, a professor of biomedical and electrical and computer engineering, collaborated with researchers from the University of California San Francisco and the Johns Hopkins University School of Medicine, including Shinjini Kundu, Rohde’s former Ph.D. student and first author of the paper.

While working in Rohde’s lab, Kundu — now a physician at the Johns Hopkins Hospital — helped develop a generative computer modeling technique called transport-based morphometry, or TBM, which is at the heart of the team’s approach.

Using a novel mathematical modeling technique, their system reveals brain structure patterns that predict variations in certain regions of the individual’s genetic code — a phenomenon called “copy number variations,” in which segments of the code are deleted or duplicated. These variations are linked to autism.

Understanding Autism’s Genetic and Morphological Links

TBM allows the researchers to distinguish normal biological variations in brain structure from those associated with the deletions or duplications.

“Some copy number variations are known to be associated with autism, but their link to brain morphology — in other words, how different types of brain tissue, such as gray or white matter, are arranged in our brain — is not well known,” Rohde said. “Finding out how CNV relates to brain tissue morphology is an important first step in understanding autism’s biological basis.”


Advancements in Morphometric Analysis

Transport-based morphometry differs from other machine learning image analysis models because the mathematical models are based on mass transport — the movement of molecules such as proteins, nutrients, and gases in and out of cells and tissues. “Morphometry” refers to measuring and quantifying the biological forms created by these processes.

Most machine learning methods, Rohde said, have little or no relation to the biophysical processes that generated the data. Instead, they rely on recognizing patterns to identify anomalies. However, Rohde’s approach uses mathematical equations to extract the mass transport information from medical images, creating new images for visualization and further analysis.
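TBM itself operates on full 2D/3D brain images; as a toy illustration of the underlying mass-transport idea only (not the paper's actual pipeline), the one-dimensional optimal transport map between two normalized intensity profiles can be read off their cumulative sums: each location in the source profile is sent to the first location where the target has accumulated the same mass.

```python
# Toy 1D illustration of optimal mass transport, the principle behind
# transport-based morphometry. Profiles f and g are normalized intensity
# histograms; the map sends each bin of f to the bin of g where the
# cumulative mass first matches.

def cumsum(xs):
    total, out = 0.0, []
    for x in xs:
        total += x
        out.append(total)
    return out

def transport_map_1d(f, g):
    """For each bin i of f, return the bin j of g that receives its mass."""
    F, G = cumsum(f), cumsum(g)
    mapping = []
    for Fi in F:
        j = next(k for k, Gk in enumerate(G) if Gk >= Fi - 1e-12)
        mapping.append(j)
    return mapping

f = [0.5, 0.5, 0.0, 0.0]   # mass concentrated on the left
g = [0.0, 0.0, 0.5, 0.5]   # same mass shifted right
print(transport_map_1d(f, g))  # → [2, 3, 3, 3]
```

In TBM the analogous (higher-dimensional) transport maps, rather than the raw images, become the objects of statistical analysis, which is what ties the method to the biophysical process that generated the data.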

Then, using a different set of mathematical methods, the system parses information associated with autism-linked CNV variations from other “normal” genetic variations that do not lead to disease or neurological disorders — what the researchers call “confounding sources of variability.”

Implications for Future Autism Research and Treatment

These sources previously prevented researchers from understanding the “gene-brain-behavior” relationship, effectively limiting care providers to behavior-based diagnoses and treatments.

According to Forbes magazine, 90% of medical data is in the form of imaging, and researchers largely lack the means to unlock it. Rohde believes TBM is the skeleton key.

“As such, major discoveries from such vast amounts of data may lie ahead if we utilize more appropriate mathematical models to extract such information,” he said.

The researchers used data from participants in the Simons Variation in Individuals Project, a group of subjects with the autism-linked genetic variation. Control-set subjects were recruited from other clinical settings and matched for age, sex, handedness, and non-verbal IQ while excluding those with related neurological disorders or family histories.

“We hope that the findings, the ability to identify localized changes in brain morphology linked to copy number variations, could point to brain regions and eventually mechanisms that can be leveraged for therapies,” Rohde said.

Reference: “Discovering the gene-brain-behavior link in autism via generative machine learning” by Shinjini Kundu, Haris Sair, Elliott H. Sherr, Pratik Mukherjee and Gustavo K. Rohde, 12 June 2024, Science Advances. DOI: 10.1126/sciadv.adl5307

The research received funding from the National Science Foundation, the National Institutes of Health, the Radiological Society of North America, and the Simons Variation in Individuals Foundation.



Exploring Inner Speech Recognition via Cross-Perception Approach in EEG and fMRI


1. Introduction

  • We propose a novel cross-perception model that effectively integrates EEG and fMRI data for inner speech recognition.
  • We introduce a multigranularity encoding scheme that captures both temporal and spatial aspects of brain activity during inner speech.
  • We develop an adaptive fusion mechanism that dynamically weights the contributions of different modalities based on their relevance to the recognition task.
  • We provide extensive experimental results and analyses, demonstrating the superiority of our multimodal approach over unimodal baselines.
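The adaptive fusion mechanism listed among the contributions is not detailed in this excerpt. A hedged sketch of the general idea, assuming a softmax gate over per-modality relevance scores; the scoring function here (mean activation) is a stand-in for whatever learned gate the actual model trains end to end:

```python
import math

# Generic sketch of adaptive multimodal fusion: each modality's feature
# vector receives a scalar relevance score, scores are softmax-normalized,
# and the fused representation is the weighted sum of the vectors.
# The score function below is a placeholder, not the paper's gate.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def adaptive_fuse(modality_features, score_fn=lambda v: sum(v) / len(v)):
    """Weight and sum per-modality feature vectors of equal length."""
    weights = softmax([score_fn(v) for v in modality_features])
    dim = len(modality_features[0])
    fused = [sum(w * v[i] for w, v in zip(weights, modality_features))
             for i in range(dim)]
    return fused, weights

eeg_feat = [0.2, 0.8, 0.1]
fmri_feat = [0.9, 0.4, 0.7]
fused, weights = adaptive_fuse([eeg_feat, fmri_feat])
```

The gating makes the relative contribution of EEG and fMRI input-dependent rather than fixed, which is what distinguishes this family of methods from plain concatenation.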

2. Related Work

2.1. Unimodal Approaches

2.2. Bimodal Approaches

2.3. Limitations of Existing Approaches

3. Proposed Targeted Improvements

3.1. EEG Signal Processing Enhancements

Singular Spectrum Analysis (SSA) for EEG Decomposition

3.2. fMRI Data Processing Enhancements

3.3. Multimodal Fusion Strategy

3.4. Cross-Modal Contrastive Learning

3.5. Theoretical Framework

4. Experiment Setup

4.1. Baseline Methods

4.1.1. Unimodal Methods

  • EEG-SVM: Support Vector Machine classifier using time–frequency features from EEG data.
  • EEG-RF: Random Forest classifier using wavelet coefficients from EEG data.
  • fMRI-MVPA: Multivoxel Pattern Analysis using a linear SVM on fMRI data.
  • fMRI-3DCNN: 3D Convolutional Neural Network on fMRI data.
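As an illustration of the kind of time–frequency feature the EEG-SVM baseline might consume, the sketch below computes average power in canonical EEG bands with a plain DFT. The band edges and 128 Hz sampling rate are assumptions for the example, not taken from the paper:

```python
import math

# Band-power features of the sort used by time-frequency EEG baselines.
# A plain DFT is used for clarity; real pipelines would use an FFT.

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` between lo and hi Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n)
                     for t, s in enumerate(signal))
            im = -sum(s * math.sin(2 * math.pi * k * t / n)
                      for t, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 128                                   # assumed sampling rate (Hz)
signal = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # 10 Hz tone
features = {
    "theta (4-8 Hz)": band_power(signal, fs, 4, 8),
    "alpha (8-13 Hz)": band_power(signal, fs, 8, 13),
    "beta (13-30 Hz)": band_power(signal, fs, 13, 30),
}
```

A 10 Hz test tone concentrates its power in the alpha band, so the feature vector cleanly separates the bands; per-channel vectors of this form would then be fed to the SVM.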

4.1.2. Existing Multimodal Methods

  • EEG-fMRI-Concat: Simple concatenation of EEG and fMRI features with an SVM classifier.
  • EEG-fMRI-CCA: Canonical Correlation Analysis for feature fusion of EEG and fMRI data.
  • MM-CNN: Multimodal Convolutional Neural Network for EEG and fMRI fusion.
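The simplest of the multimodal baselines above, EEG-fMRI-Concat, is early fusion: the two feature vectors are joined into one before classification. A minimal sketch (the feature values are placeholders; the real pipeline would feed the result to an SVM):

```python
# Early fusion baseline: one joint feature vector per trial.
def concat_fusion(eeg_features, fmri_features):
    return list(eeg_features) + list(fmri_features)

trial = concat_fusion([0.12, 0.55, 0.31], [0.78, 0.04])
# the downstream classifier sees a single 5-dimensional input
```

Concatenation treats both modalities as equally informative for every trial, which is exactly the limitation that adaptive-fusion approaches aim to remove.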

5.1. Main Results

5.2. Ablation Study

5.3. Cross-Participant Generalization

5.4. Extended Study

6. Discussion

7. Limitations

8. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

  • Alderson-Day, B.; Fernyhough, C. Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology. Psychol. Bull. 2015 , 141 , 931–965. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Anumanchipalli, G.K.; Chartier, J.; Chang, E.F. Speech Synthesis from Neural Decoding of Spoken Sentences. Nature 2019 , 568 , 493–498. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Martin, S.; Iturrate, I.; Millán, J.d.R.; Knight, R.T.; Pasley, B.N. Decoding Inner Speech Using Electrocorticography: Progress and Challenges Toward a Speech Prosthesis. Front. Neurosci. 2018 , 12 , 422. [ Google Scholar ] [ CrossRef ]
  • Huster, R.J.; Debener, S.; Eichele, T.; Herrmann, C.S. Methods for Simultaneous EEG-fMRI: An Introductory Review. J. Neurosci. 2012 , 32 , 6053–6060. [ Google Scholar ] [ CrossRef ]
  • Cooney, C.; Folli, R.; Coyle, D. Optimizing Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1311–1316. [ Google Scholar ] [ CrossRef ]
  • Agarwal; Kumar. EEG-Based Imagined Words Classification Using Hilbert Transform and Deep Networks. Multimed. Tools Appl. 2024, 83, 2725–2748. [ CrossRef ]
  • Porbadnigk, A.; Wester, M.; Calliess, J.; Schultz, T. EEG-Based Speech Recognition—Impact of Temporal Effects. In Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing—Volume 1: BIOSIGNALS, (BIOSTEC 2009), Porto, Portugal, 14–17 January 2009; INSTICC, SciTePress: Setúbal, Portugal, 2009; pp. 376–381. [ Google Scholar ] [ CrossRef ]
  • Nguyen, C.H.; Karavas, G.K.; Artemiadis, P. Inferring imagined speech using EEG signals: A new approach using Riemannian manifold features. J. Neural Eng. 2017 , 15 , 016002. [ Google Scholar ] [ CrossRef ]
  • Lee, Y.E.; Lee, S.H.; Kim, S.H.; Lee, S.W. Towards Voice Reconstruction from EEG during Imagined Speech. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 6030–6038. [ Google Scholar ] [ CrossRef ]
  • Lopes da Silva, F. EEG and MEG: Relevance to Neuroscience. Neuron 2013 , 80 , 1112–1128. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Gu, J.; Buidze, T.; Zhao, K.; Gläscher, J.; Fu, X. The neural network of sensory attenuation: A neuroimaging meta-analysis. Psychon. Bull. Rev. 2024 . [ Google Scholar ] [ CrossRef ]
  • Sun, J.; Li, M.; Chen, Z.; Zhang, Y.; Wang, S.; Moens, M.F. Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S., Eds.; Curran Associates, Inc.: New York, NY, USA, 2023; Volume 36, pp. 12332–12348. [ Google Scholar ]
  • Cai, H.; Dong, J.; Mei, L.; Feng, G.; Li, L.; Wang, G.; Yan, H. Functional and structural abnormalities of the speech disorders: A multimodal activation likelihood estimation meta-analysis. Cereb. Cortex 2024 , 34 , bhae075. [ Google Scholar ] [ CrossRef ]
  • Takagi, Y.; Nishimoto, S. High-Resolution Image Reconstruction with Latent Diffusion Models from Human Brain Activity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 14453–14463. [ Google Scholar ]
  • Gong, P.; Jia, Z.; Wang, P.; Zhou, Y.; Zhang, D. ASTDF-Net: Attention-Based Spatial-Temporal Dual-Stream Fusion Network for EEG-Based Emotion Recognition. In Proceedings of the 31st ACM International Conference on Multimedia (MM’23), Ottawa, ON, Canada, 29 October–3 November 2023; pp. 883–892. [ Google Scholar ] [ CrossRef ]
  • Su, W.C.; Dashtestani, H.; Miguel, H.O.; Condy, E.; Buckley, A.; Park, S.; Perreault, J.B.; Nguyen, T.; Zeytinoglu, S.; Millerhagen, J.; et al. Simultaneous multimodal fNIRS-EEG recordings reveal new insights in neural activity during motor execution, observation, and imagery. Sci. Rep. 2023 , 13 , 5151. [ Google Scholar ] [ CrossRef ]
  • Passos, L.A.; Papa, J.P.; Del Ser, J.; Hussain, A.; Adeel, A. Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement. Inf. Fusion 2023 , 90 , 1–11. [ Google Scholar ] [ CrossRef ]
  • Goebel, R.; Esposito, F. The Added Value of EEG-fMRI in Imaging Neuroscience. In EEG—fMRI: Physiological Basis, Technique, and Applications ; Mulert, C., Lemieux, L., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 119–138. [ Google Scholar ] [ CrossRef ]
  • Carmichael, D.W.; Vulliemoz, S.; Murta, T.; Chaudhary, U.; Perani, S.; Rodionov, R.; Rosa, M.J.; Friston, K.J.; Lemieux, L. Measurement of the Mapping between Intracranial EEG and fMRI Recordings in the Human Brain. Bioengineering 2024 , 11 , 224. [ Google Scholar ] [ CrossRef ]
  • Koide-Majima, N.; Nishimoto, S.; Majima, K. Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation. Neural Netw. 2024 , 170 , 349–363. [ Google Scholar ] [ CrossRef ]
  • Liwicki, F.S.; Gupta, V.; Saini, R.; De, K.; Abid, N.; Rakesh, S.; Wellington, S.; Wilson, H.; Liwicki, M.; Eriksson, J. Bimodal Electroencephalography-Functional Magnetic Resonance Imaging Dataset for Inner-Speech Recognition. Sci. Data 2023 , 10 , 378. [ Google Scholar ] [ CrossRef ]
  • Miyawaki, Y.; Uchida, H.; Yamashita, O.; Sato, M.a.; Morito, Y.; Tanabe, H.C.; Sadato, N.; Kamitani, Y. Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders. Neuron 2008 , 60 , 915–929. [ Google Scholar ] [ CrossRef ]
  • Cetron, J.S.; Connolly, A.C.; Diamond, S.G.; May, V.V.; Haxby, J.V.; Kraemer, D.J.M. Decoding individual differences in STEM learning from functional MRI data. Nat. Commun. 2019 , 10 , 2027. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Sligte, I.G.; van Moorselaar, D.; Vandenbroucke, A.R.E. Decoding the Contents of Visual Working Memory: Evidence for Process-Based and Content-Based Working Memory Areas? J. Neurosci. 2013 , 33 , 1293–1294. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Herff, C.; Krusienski, D.J.; Kubben, P. The Potential of Stereotactic-EEG for Brain-Computer Interfaces: Current Progress and Future Directions. Front. Neurosci. 2020 , 14 , 123. [ Google Scholar ] [ CrossRef ]
  • Gao, J.; Li, P.; Chen, Z.; Zhang, J. A Survey on Deep Learning for Multimodal Data Fusion. Neural Comput. 2020 , 32 , 829–864. [ Google Scholar ] [ CrossRef ]
  • Aggarwal, S.; Chugh, N. Review of Machine Learning Techniques for EEG Based Brain Computer Interface. Arch. Comput. Methods Eng. 2022 , 29 , 3001–3020. [ Google Scholar ] [ CrossRef ]
  • Zadeh, A.B.; Liang, P.P.; Poria, S.; Cambria, E.; Morency, L.P. Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, 15–20 July 2018; pp. 2236–2246. [ Google Scholar ]
  • Liu, Z.; Shen, Y.; Lakshminarasimhan, V.B.; Liang, P.P.; Zadeh, A.; Morency, L.P. Efficient low-rank multimodal fusion with modality-specific factors. arXiv 2018 , arXiv:1806.00064. [ Google Scholar ]
  • Tsai, Y.H.H.; Bai, S.; Liang, P.P.; Kolter, J.Z.; Morency, L.P.; Salakhutdinov, R. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; NIH Public Access: Bethesda, MD, USA, 2019; Volume 2019, p. 6558. [ Google Scholar ]
  • Yu, W.; Xu, H.; Yuan, Z.; Wu, J. Learning modality-specific representations with self-supervised multi-task learning for multimodal sentiment analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 10790–10797. [ Google Scholar ]
  • Han, W.; Chen, H.; Poria, S. Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online, 7–11 November 2021; pp. 9180–9192. [ Google Scholar ]
  • Yuan, Z.; Li, W.; Xu, H.; Yu, W. Transformer-based feature reconstruction network for robust multimodal sentiment analysis. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 4400–4407. [ Google Scholar ]
  • Sun, Y.; Mai, S.; Hu, H. Learning to learn better unimodal representations via adaptive multimodal meta-learning. IEEE Trans. Affect. Comput. 2023 , 14 , 2209–2223. [ Google Scholar ] [ CrossRef ]
  • Liu, F.; Shen, S.Y.; Fu, Z.W.; Wang, H.Y.; Zhou, A.M.; Qi, J.Y. Lgcct: A light gated and crossed complementation transformer for multimodal speech emotion recognition. Entropy 2022 , 24 , 1010. [ Google Scholar ] [ CrossRef ]
  • Sun, L.; Lian, Z.; Liu, B.; Tao, J. Efficient multimodal transformer with dual-level feature restoration for robust multimodal sentiment analysis. IEEE Trans. Affect. Comput. 2024 , 15 , 309–325. [ Google Scholar ] [ CrossRef ]
  • Fu, Z.; Liu, F.; Xu, Q.; Fu, X.; Qi, J. LMR-CBT: Learning modality-fused representations with CB-transformer for multimodal emotion recognition from unaligned multimodal sequences. Front. Comput. Sci. 2024 , 18 , 184314. [ Google Scholar ] [ CrossRef ]
  • Wang, L.; Peng, J.; Zheng, C.; Zhao, T.; Zhu, L. A cross modal hierarchical fusion multimodal sentiment analysis method based on multi-task learning. Inf. Process. Manag. 2024 , 61 , 103675. [ Google Scholar ] [ CrossRef ]
  • Shi, H.; Pu, Y.; Zhao, Z.; Huang, J.; Zhou, D.; Xu, D.; Cao, J. Co-space Representation Interaction Network for multimodal sentiment analysis. Knowl.-Based Syst. 2024 , 283 , 111149. [ Google Scholar ] [ CrossRef ]


| Aspect | Description |
|---|---|
| Dataset | Bimodal Dataset on Inner Speech |
| Participants | 4 healthy, right-handed (3 females, 1 male, aged 33–51 years) |
| Tasks | Two 4-class classification tasks: (1) social category: child, daughter, father, wife; (2) numeric category: four, three, ten, six |
| Data Types | Non-simultaneous EEG and fMRI recordings |
| Preprocessing | EEG: bandpass filter (1–50 Hz), artifact removal via ICA. fMRI: motion correction, slice timing correction, spatial normalization to MNI space |
| Validation Strategy | 5-fold cross-validation |
| Evaluation Metrics | Accuracy, F1-score, Area Under the Receiver Operating Characteristic Curve (AUC-ROC) |
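The 5-fold cross-validation strategy listed above partitions trials into five folds, holds each fold out once as the test set, and averages metrics across the five splits. A minimal sketch of the index bookkeeping (the interleaved assignment of trials to folds is an illustrative choice, not necessarily the paper's):

```python
# 5-fold cross-validation index generator: each fold is held out once,
# so every trial appears in exactly one test set.
def k_fold_indices(n, k=5):
    folds = [list(range(i, n, k)) for i in range(k)]
    for held_out in folds:
        train = [i for i in range(n) if i not in held_out]
        yield train, held_out

splits = list(k_fold_indices(20, k=5))
tested = sorted(i for _, test in splits for i in test)
print(len(splits), tested == list(range(20)))  # → 5 True
```

Reported accuracy, F1, and AUC-ROC would be the mean (with standard deviation) of the per-fold scores, which matches the ± values in the results tables.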
| Method | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑ |
|---|---|---|---|
| EEG-SVM | 28.5 ± 2.1 | 0.27 ± 0.02 | 0.32 ± 0.01 |
| EEG-RF | 30.2 ± 1.8 | 0.29 ± 0.02 | 0.34 ± 0.01 |
| fMRI-MVPA | 35.8 ± 1.5 | 0.35 ± 0.01 | 0.38 ± 0.01 |
| fMRI-3DCNN | 38.3 ± 1.3 | 0.37 ± 0.01 | 0.40 ± 0.01 |
| EEG-fMRI-Concat | 40.5 ± 1.2 | 0.40 ± 0.01 | 0.42 ± 0.01 |
| EEG-fMRI-CCA | 42.1 ± 1.0 | 0.41 ± 0.01 | 0.49 ± 0.01 |
| MM-CNN | 44.7 ± 0.9 | 0.44 ± 0.01 | 0.55 ± 0.00 |
| Our Method | | | |
| Method | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑ |
|---|---|---|---|
| EEG-SVM | 17.8 ± 2.2 | 0.16 ± 0.02 | 0.21 ± 0.01 |
| EEG-RF | 19.5 ± 1.9 | 0.18 ± 0.02 | 0.23 ± 0.01 |
| fMRI-MVPA | 24.9 ± 1.6 | 0.24 ± 0.02 | 0.31 ± 0.01 |
| fMRI-3DCNN | 27.6 ± 1.4 | 0.26 ± 0.01 | 0.33 ± 0.01 |
| EEG-fMRI-Concat | 29.8 ± 1.3 | 0.29 ± 0.01 | 0.39 ± 0.01 |
| EEG-fMRI-CCA | 29.3 ± 1.1 | 0.30 ± 0.01 | 0.41 ± 0.01 |
| MM-CNN | 33.9 ± 1.0 | 0.33 ± 0.01 | 0.44 ± 0.00 |
| Our Method | | | |
| Model Variant | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑ |
|---|---|---|---|
| Full Model | 47.2 ± 0.7 | 0.47 ± 0.01 | 0.56 ± 0.00 |
| w/o EEG-Raw | 45.0 ± 0.8 | 0.45 ± 0.01 | 0.54 ± 0.01 |
| w/o EEG-MTF | 44.5 ± 0.9 | 0.44 ± 0.01 | 0.53 ± 0.01 |
| w/o fMRI | 43.7 ± 0.9 | 0.43 ± 0.01 | 0.52 ± 0.01 |
| w/o Cross-Perception | 43.9 ± 0.8 | 0.44 ± 0.01 | 0.53 ± 0.01 |
| w/o Adaptive Fusion | 45.3 ± 0.8 | 0.45 ± 0.01 | 0.55 ± 0.01 |
| Model Variant | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑ |
|---|---|---|---|
| Full Model | 36.5 ± 0.8 | 0.36 ± 0.01 | 0.45 ± 0.00 |
| w/o EEG-Raw | 34.4 ± 0.9 | 0.34 ± 0.01 | 0.43 ± 0.01 |
| w/o EEG-MTF | 33.9 ± 1.0 | 0.33 ± 0.01 | 0.42 ± 0.01 |
| w/o fMRI | 33.2 ± 1.0 | 0.33 ± 0.01 | 0.41 ± 0.01 |
| w/o Cross-Perception | 33.4 ± 0.9 | 0.33 ± 0.01 | 0.42 ± 0.01 |
| w/o Adaptive Fusion | 34.7 ± 0.9 | 0.34 ± 0.01 | 0.44 ± 0.01 |
| Task | Our Model Accuracy (%) | Best Baseline Accuracy (%) |
|---|---|---|
| Social Words | 47.2 ± 0.7 | 47.3 ± 0.1 |
| Numeric Words | 36.5 ± 0.8 | 36.6 ± 0.1 |
| Method | CMU-MOSEI Acc-7 ↑ (%) | CMU-MOSEI Acc-5 ↑ (%) | CMU-MOSEI Acc-2 ↑ (%) | CMU-MOSEI MAE ↓ | CMU-MOSI Acc-7 ↑ (%) | CMU-MOSI Acc-5 ↑ (%) | CMU-MOSI Acc-2 ↑ (%) | CMU-MOSI MAE ↓ |
|---|---|---|---|---|---|---|---|---|
| TFN (2018) [ ] | 50.2 | - | 82.5 | 0.593 | 34.9 | - | 80.8 | 0.901 |
| LMF (2018) [ ] | 48.0 | - | 82.0 | 0.623 | 33.2 | - | 82.5 | 0.917 |
| Mult (2019) [ ] | 52.6 | 54.1 | 83.5 | 0.564 | 40.4 | 46.7 | 83.4 | 0.846 |
| Self-MM (2021) [ ] | 53.6 | 55.4 | 85.0 | 0.533 | 46.4 | 52.8 | 84.6 | 0.717 |
| MMIM (2021) [ ] | 53.2 | 55.0 | 85.0 | 0.536 | 46.9 | 53.0 | 85.3 | 0.712 |
| TFR-Net (2021) [ ] | 52.3 | 54.3 | 83.5 | 0.551 | 46.1 | 53.2 | 84.0 | 0.721 |
| AMML (2022) [ ] | 52.4 | - | 85.3 | 0.614 | 46.3 | - | 84.9 | 0.723 |
| LGCCT (2022) [ ] | 47.5 | - | 81.1 | - | - | - | - | - |
| EMT (2023) [ ] | 54.5 | 56.3 | 86.0 | 0.527 | 47.4 | 54.1 | 85.0 | 0.705 |
| LMR-CBT (2024) [ ] | 51.9 | - | 82.7 | - | 41.4 | - | 83.1 | 0.774 |
| CMHFM (2024) [ ] | 52.8 | 54.4 | 84.5 | 0.548 | 37.2 | 42.4 | 81.7 | 0.907 |
| CRNet (2024) [ ] | 53.8 | - | 86.4 | 0.541 | 47.4 | - | 86.4 | 0.712 |
| Ours | | | | | | | | |

Share and Cite

Qin, J.; Zong, L.; Liu, F. Exploring Inner Speech Recognition via Cross-Perception Approach in EEG and fMRI. Appl. Sci. 2024, 14, 7720. https://doi.org/10.3390/app14177720



IMAGES

  1. Speech and Brain

    speech on human brain

  2. Introduction to the Human Brain: Facts, Anatomy, and Functions

    speech on human brain

  3. How does the brain process speech? We now know the answer, and it’s

    speech on human brain

  4. The Human Brain. Cortical Representation of Speech and Language Stock

    speech on human brain

  5. How does the brain process speech? We now know the answer, and it’s

    speech on human brain

  6. How Does The Brain Process Speech? Easily Explained

    speech on human brain

VIDEO

  1. Introducing the Amazing Brain Science Talks

  2. Understanding Human Brain

  3. What language does our brain speak?

  4. Human Brain| Understanding the Brain| The Prefrontal Cortex, Amygdala, and the Hippocampus

  5. Brain drain speech in english for students

  6. Ultra-rapid anatomical MRI processing: speaking / singing

COMMENTS

  1. Brain Anatomy and How the Brain Works

    The parietal lobe houses Wernicke's area, which helps the brain understand spoken language. Occipital lobe. The occipital lobe is the back part of the brain that is involved with vision. Temporal lobe. The sides of the brain, temporal lobes are involved in short-term memory, speech, musical rhythm and some degree of smell recognition.

  2. What Part of the Brain Controls Speech?

    Your brain has many parts but speech is primarily controlled by the largest part of the brain, the cerebrum. The cerebrum can be divided into two parts, called hemispheres, which are joined by a ...

  3. Human brain

    The brain is the central organ of the human nervous system, and with the spinal cord makes up the central nervous system. The brain consists of the cerebrum, the brainstem and the cerebellum. It controls most of the activities of the body, processing, integrating, and coordinating the information it receives from the sense organs, and making ...

  4. Introduction: The Human Brain

    The brain is the most complex organ in the human body. It produces our every thought, action, memory, feeling and experience of the world. This jelly-like mass of tissue, weighing in at around 1.4 ...

  5. The Human Brain: Anatomy and Function

    The cerebellum adjusts body movements, speech coordination, and balance, while the brain stem relays signals from the spinal cord and directs basic internal functions and reflexes. 1. The Seat of Consciousness: High Intellectual Functions Occur in the Cerebrum. The cerebrum is the largest brain structure and part of the forebrain (or ...

  6. How the brain produces speech

    How the neurons in the human brain work together to plan and produce speech remains poorly understood. To begin to address this question, an NIH-funded team of researchers, led by Drs. Ziv Williams and Sydney Cash at Massachusetts General Hospital, recorded neuron activity during natural speech in five native English speakers.

  7. Human Brain: facts and information

    The human brain is a 3-pound (1.4-kilogram) mass of jelly-like fats and tissues—yet it's the most complex of all known living structures. The brain is extremely sensitive and delicate, and so it ...

  8. How speech is produced and perceived in the human cortex

    By. Yves Boubenec. In the human brain, the perception and production of speech requires the tightly coordinated activity of neurons across diverse regions of the cerebral cortex. Writing in Nature ...

  9. In brief: How does the brain work?

    The brain works like a big computer. It processes information that it receives from the senses and body, and sends messages back to the body. But the brain can do much more than a machine can: We think and experience emotions with our brain, and it is the root of human intelligence. The human brain is roughly the size of two clenched fists and weighs about 1.5 kilograms. From the outside it ...

  10. Speech

    Speech is human communication through spoken language. Although many animals possess voices of various types and inflectional capabilities, humans have learned to modulate their voices by articulating the laryngeal tones into audible oral speech. ... The question of what the brain does to make the mouth speak or the hand write is still ...

  11. What Part of the Brain Controls Speech?

    Medically reviewed by Heidi Moawad, M.D. — Written by Jared C. Pistoia, ND on February 22, 2023. The left side of your brain controls voice and articulation. The Broca's area, in the frontal ...

  12. Physiology, Brain

    The human brain is perhaps the most complex of all biological systems, with the mature brain composed of more than 100 billion information-processing cells called neurons.[1] The brain is an organ composed of nervous tissue that commands task-evoked responses, movement, senses, emotions, language, communication, thinking, and memory. The three main parts of the human brain are the cerebrum ...

  13. How the brain controls our speech

    July 28, 2022 — New research finds that specific parts of the brain recognize complex cues in human vocal sounds that do not involve speech, such as crying, coughing or gasping. Insights into ...

  14. Study highlights complex neuroscience behind even simplest words

    By using advanced brain recording techniques, a new study led by researchers from Harvard-affiliated Massachusetts General Hospital demonstrates how neurons in the human brain work together to allow people to think about what words they want to say and then produce them aloud through speech. The findings provide a detailed map of how speech ...

  15. How the Brain Crafts Words Before Speaking

    Summary: A new study utilizes advanced Neuropixels probes to unravel the complexities of how the human brain plans and produces speech. The team identified specific neurons in the prefrontal cortex involved in the language production process, including separate neural pathways for speaking and listening.

  16. The Brain Processes Speech in Parallel With Other Sounds

    Studies of speech can't get far by using animals because speech is a uniquely human trait. And in humans, most research has to use indirect methods to measure brain activity. Getting direct recordings is much trickier because it's invasive: Scientists need to piggyback on medical procedures, collecting data from electrodes implanted in the ...

  17. Language center

    Language areas of the brain: the angular gyrus is represented in orange, the supramarginal gyrus in yellow, Broca's area in blue, Wernicke's area in green, and the primary auditory cortex in pink. In neuroscience and psychology, the term language center refers collectively to the areas of the brain which serve a particular function for speech processing and production. [1]

  18. Finding thoughts in speech: How human brain processes ...

    Finding thoughts in speech: How human brain processes thoughts during natural communication. ScienceDaily. Retrieved August 31, 2024 from www.sciencedaily.com/releases/2014/06...

  19. How Does the Brain Represent Speech?

    Summary. This chapter provides a brief overview of how the brain's auditory system represents speech. Animal experiments have been invaluable in elucidating basic physiological mechanisms of sound encoding, auditory learning, and pattern classification in the mammalian brain. The human auditory nerve contains about 30,000 nerve fibers ...

  20. Speech on the Brain

    Phonetics is the study of the sounds in human speech, everything from the physical placement of your tongue when you say "ch" or "sh" to the social meanings of different sound enunciations. After Johnson's first lecture, a young man came up and introduced himself as Edward F. Chang. "He was literally a brain surgeon," Johnson says ...

  21. New brain implants 'read' words directly from people's thoughts

    November 15, 2022 at 7:00 am. SAN DIEGO — Scientists have devised ways to "read" words directly from brains. Brain implants can translate internal speech into external signals, permitting ...

  22. UCSF Team Reveals How the Brain Recognizes Speech Sounds

    By Pete Farley. UC San Francisco researchers are reporting a detailed account of how speech sounds are identified by the human brain, offering an unprecedented insight into the basis of human language. The finding, they said, may add to our understanding of language disorders, including dyslexia.

  23. The uniqueness of human vulnerability to brain aging in great ape

    Human multimodal cortical areas are characterized by lower neuronal cell density, as well as higher dendritic branching and spine numbers of pyramidal neurons (53, 54). Compared to other great apes, the human brain has a large neuropil fraction in the frontal pole and the anterior insula. The neuropil fraction represents the space surrounding ...

  24. Making the Neurodivergent Brain Visible: New Research ...

    Researchers have developed a technique that accurately identifies genetic markers of autism in brain images, which could revolutionize early diagnosis and treatment. A team of researchers co-led by University of Virginia engineering professor Gustavo K. Rohde has developed a system that can spot genetic markers of autism in brain images with 89 ...

  25. Applied Sciences

    Multimodal brain signal analysis has shown great potential in decoding complex cognitive processes, particularly in the challenging task of inner speech recognition. This paper introduces an innovative Inner Speech Recognition via Cross-Perception (ISRCP) approach that significantly enhances accuracy by fusing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data.
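    The core idea above is combining two brain-signal modalities into one feature space before classifying. As a rough illustration only, the toy sketch below uses simulated data and simple concatenation-based fusion with a nearest-centroid classifier; the actual ISRCP method uses learned cross-perception models, and all array shapes, feature counts, and class separations here are hypothetical.

```python
# Toy sketch of multimodal feature fusion for an inner-speech-style
# two-class decoding task. All data here is simulated; this is a
# concatenation baseline, not the ISRCP method itself.
import numpy as np

rng = np.random.default_rng(0)

def zscore(x):
    """Standardize each feature column so the two modalities share a scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def fuse(eeg, fmri):
    """Early fusion: normalize each modality, then concatenate features."""
    return np.hstack([zscore(eeg), zscore(fmri)])

# Two imagined-word classes, 20 trials each, with modality-specific
# feature counts (8 EEG features, 30 fMRI features -- arbitrary choices).
n = 20
eeg = np.vstack([rng.normal(0.0, 1.0, (n, 8)), rng.normal(1.5, 1.0, (n, 8))])
fmri = np.vstack([rng.normal(0.0, 1.0, (n, 30)), rng.normal(0.8, 1.0, (n, 30))])
labels = np.array([0] * n + [1] * n)

fused = fuse(eeg, fmri)  # shape: (40 trials, 38 fused features)

# Nearest-centroid classifier on the fused feature space.
centroids = np.stack([fused[labels == c].mean(axis=0) for c in (0, 1)])
dists = ((fused[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == labels).mean()
print(f"fused-feature accuracy: {accuracy:.2f}")
```

    The point of the sketch is the fusion step: per-modality normalization keeps the higher-dimensional fMRI block from dominating the concatenated vector, which is the simplest way to let both modalities contribute to the decision.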

  26. Plenary Talks

    Brain-To-Speech technology directly connects neural activity to the means of human linguistic communication which may greatly enhance the naturalness of communication using brain signals. With the current discoveries on neural features of imagined speech and the development of the speech synthesis technologies, direct translation of brain ...