Scientists Develop GPT Model That Interprets Human Thoughts
LLMs and Generative Pre-Trained Models aren't just here to stay - they're here to bring change.
Even as the world still reels in its attempt to understand and absorb the ripples from the launch of ChatGPT and assorted AI-based systems — whose dust will take a long while to settle — scientists are carrying on with their own applications of Generative Pre-trained Transformers (GPT) and Large Language Models (LLMs). And according to Scientific American, one of the latest such applications is a GPT-based model that takes its prompts not from human text, but directly from the user's mind.
Developed by a research team at the University of Texas at Austin, and described in a paper published in the journal Nature, the GPT model interprets a person's brain activity via blood flow, as captured by fMRI (functional Magnetic Resonance Imaging), giving it access to what the user is "hearing, saying, or imagining". And it does this without any invasive surgery or any attachment to the patient themselves. There was a clear opportunity to name the new model BrainGPT, but someone ignored that memo: the researchers refer to their "brain reading" model as GPT-1 instead.
The researchers do note that due to the fMRI technique being used, GPT-1 can't parse the specific words a subject might think; and because the model works at a higher level of abstraction (it extrapolates meaning from brain activity rather than matching individual words), some details are lost in translation. For instance, one research participant listened to a recording stating, "I don't have my driver's license yet." Processing the fMRI data generated at the moment the participant heard those words, GPT-1 rendered the sentence as "She has not even started to learn to drive yet." So, no - it doesn't transcribe our thoughts verbatim - but it does capture their general meaning, or "the gist of it", as the researchers characterized some of GPT-1's results.
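For readers curious what gist-level decoding can look like in practice, here is a minimal, purely illustrative Python sketch (assuming only NumPy) - this is not the UT Austin team's code, and every function, matrix, and sentence in it is a hypothetical placeholder. It shows the broad approach the article describes: a language model proposes candidate sentences, an "encoding model" predicts the brain response each one should produce, and the decoder keeps whichever candidate best matches the measured fMRI activity.

import numpy as np

rng = np.random.default_rng(0)

def embed(sentence: str) -> np.ndarray:
    """Toy bag-of-words 'semantic' embedding (placeholder); a real decoder
    would use features from a language model such as GPT-1."""
    vec = np.zeros(64)
    for word in sentence.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def predict_fmri(sentence: str, weights: np.ndarray) -> np.ndarray:
    """Encoding model: maps a sentence's features to a predicted brain
    response (here, a fixed random linear projection stands in for a model
    fitted to a person's fMRI data)."""
    return weights @ embed(sentence)

# Hypothetical "trained" encoding model: 200 voxels x 64 semantic features.
weights = rng.normal(size=(200, 64))

# Candidate sentences a language model might propose for this moment.
candidates = [
    "I don't have my driver's license yet.",
    "She has not even started to learn to drive yet.",
    "The weather was sunny all afternoon.",
]

# Simulate the measured response to the true stimulus, plus scanner noise.
measured = predict_fmri(candidates[0], weights) + rng.normal(scale=0.2, size=200)

# Decode: keep the candidate whose predicted response correlates best with
# the measurement.
scores = {c: np.corrcoef(predict_fmri(c, weights), measured)[0, 1]
          for c in candidates}
best = max(scores, key=scores.get)
print(f"decoded gist: {best!r} (r = {scores[best]:.2f})")

Because the real system scores candidates on meaning-level features rather than exact words, two differently worded sentences with a similar meaning can produce similar predicted responses - which is why the decoder recovers "the gist" instead of a verbatim transcript.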
All of this does raise an immediate question: where does this take us?
In theory, technology itself isn't malicious. Technology is an abstraction, a concept, that can then be used for a purpose. In a vacuum, GPT-1 could help ALS or aphasia patients communicate. Also in a vacuum, technologies such as these could be leveraged by users to "record" their thoughts (imagine a Notes app that's linked to your own thoughts, or an AutoGPT installation that piggybacks on your ideas), opening up new avenues for self-knowledge, and perhaps even new pathways for psychotherapy.
But while we're here, we can also throw in some other, less beneficial repurposings of the technology, such as using it to extract information directly from an unwilling subject's brain. Being non-invasive is both a strength and a weakness there. And there's also the matter of the technology itself: fMRI machines take up entire rooms and millions of budget dollars, which severely limits applications.
Even so, it would seem that the "willingness" element of communication - that choice of voicing our own thoughts, of bringing them into the actual world - is in its death throes. The researchers themselves have called attention to potential misuses and negative impacts of the technology in their study - something that happens far less often than it should in both academia and private research efforts.
"Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder," it reads. "However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes. For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person's mental privacy."
As we stand at the door beyond which our thoughts are no longer safe, that's a wise stance indeed.
Francisco Pires is a freelance news writer for Tom's Hardware with a soft spot for quantum computing.
domih
It will work; it's just a question of years.
Note that this is not the first foray into scanning brains and deducing a rough image of the thoughts being thought. I remember watching a BBC documentary on the subject. What's new is the use of LLMs + GPT.
There is a genuinely good medical application: quadriplegics can pilot devices such as phones, computers, or artificial limbs by "simply" thinking.
ralfthedog
domih said: It will work; it's just a question of years.
It all comes down to language. This text -> that response. Many complex systems can be described as a one-way conversation. This input gives you that output. Math is a fantastic example.
bit_user
"Subject cooperation is currently required both to train and to apply the decoder."
This point needs to be emphasized.
The technology seems immediately applicable for conscious individuals lacking the ability to easily speak or type. The late Stephen Hawking comes to mind. Others with "locked-in syndrome" as well.
Totalitarian applications would probably require some sort of sophisticated implant, at the very least. That could be a decade away, or more. Yeah, still too close for comfort.
Heat_Fan89
gg83 said: We are all doomed to be mindless drones for the Conglomerate.
I'm afraid you might be right!
Heat_Fan89
Mandark said: I'm thinking Minority Report will be real someday.
We are probably a lot closer to it than many would care to admit.