Scientists Develop GPT Model That Interprets Human Thoughts


Even as the world reels in its attempt to understand and absorb the ripples from the launch of ChatGPT and assorted AI-based systems, ripples whose dust will take a long while to settle, scientists are carrying on with their own applications of Generative Pre-trained Transformers (GPT) and Large Language Models (LLMs). And according to Scientific American, one of the latest such applications is a GPT-based model that takes its prompts not from human text, but directly from the user's mind.

Developed by a research team at the University of Texas at Austin, and published in a paper in the journal Nature, their GPT model interprets a person's brain activity via blood flow as captured by fMRI (functional magnetic resonance imaging), giving it access to what the user is "hearing, saying, or imagining". And it does this without any invasive surgery or any attachment to the patient. There was a clear opportunity to name the new model BrainGPT, but someone ignored that memo: the researchers refer to their "brain reading" model as GPT-1 instead.

In theory, technology itself isn't malicious. Technology is an abstraction, a concept, that can then be put to a purpose. In a vacuum, GPT-1 could help ALS or aphasia patients communicate. Also in a vacuum, technologies such as these could be leveraged by users to "record" their thoughts (imagine a Notes app linked to your own thoughts, or an AutoGPT installation that piggybacks on your ideas), opening up new avenues for self-knowledge, and perhaps even new pathways for psychotherapy.

But while we're here, we can also throw in some other, less beneficial repurposings of the technology, such as using it to extract information directly from an unwilling subject's brain. Being non-invasive is both a strength and a weakness there. And there's also the matter of the technology itself: fMRI machines take up entire rooms and millions of budget dollars wherever they're found, which severely limits applications.

Even so, it would seem that the "willingness" element of communication - that choice of voicing our own thoughts, of bringing them into the actual world - is in the throes of destruction. The researchers themselves called attention to potential misuses and negative impacts of the technology in their study - something that happens far less often than it should in both academia and private research efforts.

"Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder," it reads. "However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes. For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person's mental privacy."

As we stand at the door beyond which our thoughts are no longer safe, that's a wise stance indeed.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.