Over the years, one area of tech/health tech I have avoided writing about is brain-computer interfaces (B.C.I.). In part, it was because I thought they were kind of creepy, and, in larger part, because I was increasingly finding Elon Musk, whose Neuralink is one of the leaders in the field, even creepier. But an article in The New York Times Magazine by Linda Kinstler rang alarm bells in my head – and I sure hope no one is listening to them.

This is your brain in fMRI. Credit: Max Planck Institute
Her article, Big Tech Wants Direct Access to Our Brains, doesn’t just discuss some of the technological advances in the field, which are, admittedly, quite impressive. No, what caught my attention was her larger point that it’s time – it’s past time – that we started taking the issue of the privacy of what goes on inside our heads very seriously.
Because we are at the point, or fast approaching it, when those private thoughts of ours are no longer private.
The ostensible purpose of B.C.I.s has usually been assistance for people with disabilities, such as people who are paralyzed. Being able to move a cursor or even a limb could change their lives. It might even allow some to speak or see. All are great use cases, with some track record of successes.
B.C.I.s have tended to go down one of two paths. One uses external signals, such as those from electroencephalography (EEG) and electrooculography (EOG), to try to decipher what your brain is doing. The other, which Neuralink uses, is an implant placed directly in your brain to sense and interpret activity. The latter approach has the advantage of more specific readings, but it has the obvious drawback of requiring surgery and wires in your brain.
There’s a competition held every four years called Cybathlon, sponsored by ETH Zurich, that “acts as a platform that challenges teams from all over the world to develop assistive technologies suitable for everyday use with and for people with disabilities.” A profile of it in NYT quoted the second-place finisher, who uses the external signals approach but lost to a team using implants: “We weren’t in the same league as the Pittsburgh people. They’re playing chess and we’re playing checkers.” He’s now considering implants.

A Cybathlon 2024 competitor. Credit: Cybathlon ETH Zurich
Fine, you say. I can protect my mental privacy simply by not getting implants, right? Not so fast. A new paper in Science Advances discusses progress in “mind captioning.” I.e.:
We successfully generated descriptive text representing visual content experienced during perception and mental imagery by aligning semantic features of text with those linearly decoded from human brain activity…Together, these factors facilitate the direct translation of brain representations into text, resulting in optimally aligned descriptions of visual semantic information decoded from the brain. These descriptions were well structured, accurately capturing individual components and their interrelations without using the language network, thus suggesting the existence of fine-grained semantic information outside this network. Our method enables the intelligible interpretation of internal thoughts, demonstrating the feasibility of nonverbal thought–based brain-to-text communication.
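For the technically curious, here’s the gist as a toy sketch – my illustration in Python, not the authors’ actual pipeline (as I understand it, their method iteratively generates and refines full descriptions rather than picking from a list, and all the names and numbers below are made up, with a hypothetical embed() standing in for a real sentence-embedding model): learn a linear map from brain activity to a semantic embedding space, then “caption” a new scan by finding the text whose embedding best matches the decoded vector.

```python
# Toy sketch only (not the study's code): linearly decode semantic
# features from brain activity, then match them to candidate captions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in training data: 200 "scans" of 5,000 voxels each, paired with
# 512-dimensional embeddings of the captions the participant was viewing.
X_train = rng.standard_normal((200, 5000))   # brain activity
Y_train = rng.standard_normal((200, 512))    # caption embeddings

# Step 1: learn a linear map from voxel space to semantic space.
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

def pick_caption(scan, candidates, embed):
    """Return the candidate caption whose embedding is most similar
    (by cosine) to the semantic vector decoded from one brain scan."""
    decoded = decoder.predict(scan.reshape(1, -1)).ravel()
    embs = np.array([embed(c) for c in candidates])
    sims = embs @ decoded / (
        np.linalg.norm(embs, axis=1) * np.linalg.norm(decoded) + 1e-9)
    return candidates[int(np.argmax(sims))]
```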
The model predicts what a person is looking at “with a lot of detail,” says Alex Huth, a computational neuroscientist at the University of California, Berkeley, who has done related research. “This is hard to do. It’s surprising you can get that much detail.”
“Surprising” is one way to describe it. “Exciting” could be another. For some people, though, “terrifying” might be what first comes to mind.
The mind captioning technique uses fMRI and AI, and the participants were fully aware of what was going on. None of the researchers suggest that the technique can tell exactly what people are thinking. “Nobody has shown you can do that, yet,” says Professor Huth.
It’s that “yet” that worries me.
Dr. Kinstler points out that’s not all we have to worry about: “Advances in optogenetics, a scientific technique that uses light to stimulate or suppress individual, genetically modified neurons, could allow scientists to ‘write’ the brain as well, potentially altering human understanding and behavior.”
“What’s coming is A.I. and neurotechnology integrated with our everyday devices,” Nita Farahany, a professor of law and philosophy at Duke University who studies emerging technologies, told Dr. Kinstler. “Basically, what we are looking at is brain-to-A.I. direct interactions. These things are going to be ubiquitous. It could amount to your sense of self being essentially overwritten.”
Now are you worried?
Dr. Kinstler notes that some countries – not including the U.S., of course – have passed neural privacy laws. California, Colorado, Montana, and Connecticut have passed neural data privacy laws, but the Future of Privacy Forum details how each is different and that there is not even common agreement on exactly what “neural data” is, much less how best to safeguard it. As is typical, the technology is way outpacing the regulation.
Credit: Future of Privacy Forum
“While many are concerned about technologies that can ‘read minds,’ such a tool does not currently exist per se, and in many cases nonneural data can reveal the same information,” writes Jameson Spivack, Deputy Director for Artificial Intelligence at FPF. “As such, focusing too narrowly on ‘thoughts’ or ‘brain activity’ could exclude some of the most sensitive and intimate personal characteristics that people want to protect. In finding the right balance, lawmakers should be clear about what potential uses or outcomes on which they would like to focus.”
I.e., we can’t even define the problem well enough yet.
Dr. Kinstler describes how people have been talking about this issue literally for decades, with little progress on the legislative/regulatory front. We may be at the point where the debate is no longer academic. Professor Farahany warns that having the ability to control one’s thoughts and feelings “is a precondition to any other concept of liberty, in that, if the very scaffolding of thought itself is manipulated, undermined, interfered with, then any other way in which you would exercise your liberties is meaningless, because you are no longer a self-determined human at that point.”
In 2025 America, this does not seem like an idle threat.
------------
In this digital world, we’ve gradually been losing our privacy. Our emails aren’t private? Oh, OK. Big Tech is tracking our shopping? Well, we’ll get better offers. Social media mines our data to best manipulate us? Yes, but think of the followers we might gain. Surveillance cameras can track our every move? But we need them to fight crime!
We grumble but mostly have accepted these (and other) losses of privacy. But when it comes to the possibility of technology reading our thoughts, much less directly manipulating them, we cannot afford to keep dithering.
