In our previous report on speech-outputting BCIs, we covered a system from researchers at the University of California, Davis: a “brain-to-voice neuroprosthesis” that enabled a man who could not speak to talk again. Because that system worked by decoding signals from the region of the brain responsible for controlling speech muscles, it only output speech the user intended to say.
The new BCI from Stanford, however, goes beyond outputting intended speech. The system, detailed in the journal Cell, decodes a user’s inner speech: the words they think to themselves without attempting to move their mouth. The researchers found that while this approach was less tiring for participants, it also meant that aspects of a person’s private, uninstructed thoughts could be decoded during cognitive tasks such as counting.
This discovery highlights a major privacy concern: a BCI could potentially broadcast a user’s internal monologue without their consent. To address it, the Stanford team developed and tested two “high-fidelity” safeguards. The first is an “imagery-silenced” mode, in which the decoder is trained to ignore all inner speech and output words only when the user physically attempts to speak. The second is a keyword system, in which the user must first think of a complex word (in this case, “chittychittybangbang”) to activate the device and allow it to begin decoding their inner speech.
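To make the gating logic concrete, here is a minimal sketch of how the two safeguards could be wired around a decoder’s word stream. This is not the Stanford team’s implementation; the class and parameter names (PrivacyGatedDecoder, attempted_speech, the Mode enum) and the simple pass-through rules are illustrative assumptions built only from the description above.

```python
"""Conceptual sketch of the two privacy safeguards described above.

NOT the published system: the decoder here is a stand-in that receives
already-decoded words, and all names/thresholds are illustrative.
"""

from enum import Enum, auto


class Mode(Enum):
    IMAGERY_SILENCED = auto()  # only attempted speech is ever voiced
    KEYWORD_GATED = auto()     # inner speech is voiced only after the keyword


class PrivacyGatedDecoder:
    def __init__(self, mode: Mode, unlock_keyword: str = "chittychittybangbang"):
        self.mode = mode
        self.unlock_keyword = unlock_keyword
        self.unlocked = False

    def handle(self, decoded_word: str, attempted_speech: bool) -> str | None:
        """Return a word to voice, or None if it must stay private."""
        if self.mode is Mode.IMAGERY_SILENCED:
            # Safeguard 1: ignore inner speech entirely; pass through only
            # words the user physically attempted to say.
            return decoded_word if attempted_speech else None

        # Safeguard 2: keyword gating for inner speech.
        if not self.unlocked:
            if decoded_word == self.unlock_keyword:
                self.unlocked = True  # device is now active
            return None               # the keyword itself is never voiced
        return decoded_word


if __name__ == "__main__":
    # Simulated stream of decoded inner-speech words (e.g. silent counting,
    # then the unlock keyword, then intended communication).
    stream = ["one", "two", "chittychittybangbang", "hello", "there"]
    decoder = PrivacyGatedDecoder(Mode.KEYWORD_GATED)
    for word in stream:
        out = decoder.handle(word, attempted_speech=False)
        if out:
            print(out)  # prints only "hello" and "there"
```

The point of such an unusual keyword, presumably, is that it is very unlikely to occur in spontaneous inner speech, so the gate almost never opens by accident.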
Experts praised the team for being the first to demonstrate a concrete technique for protecting mental privacy in a BCI. “Ultimately, our goal is to restore communication, but only the communication that a person actually intends,” said Vikash Gilja, a computer scientist at UC San Diego who was not involved in the work. The approach gives users a new, less tiring way to communicate while taking the first critical steps toward ensuring their thoughts remain their own.