Neuroscience · 4 min read

The brain has started to publish

By Neureka Team

In 2023, a team at the University of Texas at Austin trained a language decoder on a single person's brain. They had the person lie in an fMRI scanner and listen to hours of podcasts. The decoder, built around a GPT-style language model, learned the mapping between what was being said and what their brain was doing. Then they ran new brain data through it.

It worked. Not perfectly. Not verbatim. But the model could reconstruct the gist of what the person was hearing — and, more startlingly, of what they were silently imagining or watching on a screen. A brain scan, run through software, was producing language about its own contents.

This is, by any reasonable definition, the start of mind reading.

What is actually happening

The standard approach has three parts.

First, train a decoder on one person. That person spends hours in an fMRI scanner being exposed to stimuli — stories, films, images — while their brain is recorded. Second, pair brain activity with the stimulus. A model learns the statistical relationship between specific patterns of activity and specific contents: a category of object, a location, a sentiment, a phoneme. Third, run new brain data through the model and see what it predicts.
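To make that concrete, here is a minimal sketch of the recipe in Python. It is illustrative, not the published pipeline: random numbers stand in for real recordings, ridge regression stands in for the actual model, and every name is an assumption.

```python
# A minimal sketch of the three-step recipe, not the UT Austin pipeline.
# Assumes you already have time-aligned pairs of fMRI volumes (flattened
# to voxel vectors) and embeddings of the stimulus the subject was
# hearing at each moment. All names and shapes here are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Steps 1-2: hours of paired data from ONE cooperative subject.
# X: brain activity (timepoints x voxels); y: stimulus embeddings
# (timepoints x embed_dim), e.g. from a language model.
n, n_voxels, embed_dim = 2000, 5000, 256
X = rng.standard_normal((n, n_voxels))               # stand-in for real fMRI
W_true = 0.01 * rng.standard_normal((n_voxels, embed_dim))
y = X @ W_true + 0.1 * rng.standard_normal((n, embed_dim))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the statistical mapping from voxel patterns to stimulus meaning.
# Ridge regression is a workhorse here because voxels vastly outnumber
# timepoints, so heavy regularisation is needed.
decoder = Ridge(alpha=1000.0).fit(X_train, y_train)

# Step 3: run NEW brain data through the model and see what it predicts.
pred = decoder.predict(X_test)

# Score by finding, for each predicted embedding, the closest candidate
# stimulus. "Gist, not transcript" falls out of this naturally: the
# prediction lands near the right meaning, not on exact words.
def closest(pred_vec, candidates):
    sims = candidates @ pred_vec / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(pred_vec))
    return int(np.argmax(sims))

hits = sum(closest(p, y_test) == i for i, p in enumerate(pred))
print(f"top-1 identification: {hits / len(pred):.1%}")
```

Real decoders are far more elaborate, but the skeleton is the same: paired data from one cooperative subject, a learned mapping, and prediction on held-out scans.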

The output is not a transcript. It is more like a translation — the broad meaning, the topic, the emotional valence, often wrong on the specifics but right on the substance. Subjects in the Texas studies could even sabotage the decoder by deliberately thinking about something unrelated. Mind reading, but cooperative.

A parallel line of work, using diffusion models like the ones that power image generation, can now reconstruct images a person is seeing or imagining from fMRI alone. The reconstructions are blurry and dreamlike, but the contents are recognisable. A clock looks like a clock. A face looks like a face.
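The shape of that work, heavily simplified: learn a per-subject mapping from voxels into the embedding space of a pretrained image model, then hand the predicted embedding to a frozen generator. In the sketch below, `pretrained_diffusion_decoder` is a hypothetical stand-in for such a generator, not a real API.

```python
# Illustrative sketch of the image-reconstruction strategy, not any
# published pipeline. `pretrained_diffusion_decoder` is a hypothetical
# stand-in for a frozen generator that turns an image embedding into pixels.
import numpy as np
from sklearn.linear_model import Ridge

def reconstruct_image(fmri_volume: np.ndarray,
                      voxel_to_embedding: Ridge,
                      pretrained_diffusion_decoder):
    # Stage 1: voxels -> semantic image embedding. This mapping is the
    # only part trained on brain data, and it is trained per subject.
    embedding = voxel_to_embedding.predict(fmri_volume[None, :])[0]
    # Stage 2: embedding -> pixels, using a generator that has never seen
    # a brain scan. The gist survives the round trip; fine detail does
    # not, which is why the outputs look blurry and dreamlike.
    return pretrained_diffusion_decoder(embedding)
```

Everything specific to an individual brain lives in stage 1; the generator just draws what the embedding describes.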

What it does not mean

It does not mean a stranger can scan your brain at the airport and read your private thoughts. The current systems require:

  • Hours of cooperative training data from each individual subject
  • A multi-million-dollar MRI scanner the size of a small room
  • The subject to be still, awake, and engaged with a stimulus
  • Models calibrated specifically to that one brain

It also does not work in real time, does not transfer between people, and does not produce verbatim words. What it shows is that the brain's representations of language and imagery are partially decodable — that meaningful structure exists in the noise, and that machine learning is now powerful enough to find it.

Why it matters anyway

The constraints are real. They are also the kind of constraint that gets weaker, not stronger, with time. Training requirements will shrink. Resolution will improve. Non-invasive devices like portable MEG and high-density EEG are beginning to close the spatial-resolution gap with fMRI. Models will get better at generalising across brains.

The medical applications are the obvious win: a way out for people with locked-in syndrome, ALS, or severe stroke, whose minds are intact but who have no way to communicate. Implanted speech-decoding interfaces from groups at UCSF and BrainGate are already restoring sentence-level communication at speeds approaching conversation. The same logic, applied non-invasively, would change neurology.

The longer-term questions are stranger. What does mental privacy mean when thought has a measurable signature? What kind of consent is meaningful when a system can read your representations faster than you can articulate them? These are not questions for tomorrow. They are questions for the next ten years.

The point

Neuroscience used to study the brain only from the outside — behaviour, lesion, electrode, scan. For most of its history, the contents of a thought were inaccessible in principle. That is no longer quite true.

The brain, it turns out, has started to publish.

Like this? Get the next one.

One email when we publish. No noise.
