Facebook envisions a future in which people will be able to type out words and send messages using only their brains.
The idea might seem like the stuff of science fiction, but the social media giant said Tuesday that it’s moving closer to making the moonshot project a reality thanks to new research. That could help Facebook build wearables such as augmented reality glasses, letting people interact with each other in real life without having to pick up a smartphone.
“The promise of AR lies in its ability to seamlessly connect people to the world that surrounds them — and to each other. Rather than looking down at a phone screen or breaking out a laptop, we can maintain eye contact and retrieve useful information and context without ever missing a beat,” Facebook said in a blog post.
The social network first announced that its research lab, Building 8, was working on a computer-brain interface in 2017, during its F8 developer conference. Regina Dugan, who led the effort before leaving the company after more than a year, said during a speech that Facebook wanted to create a silent speech system that could type 100 words per minute straight from your brain. That would be roughly five times faster than a person can type on a phone.
Researchers, including a team at Stanford University, have already found a way to do this with patients who are paralyzed, but it requires surgery in which electrodes are implanted into the brain. Facebook, though, hopes to build a wearable that isn't invasive.
The social network has kept tight-lipped about progress on the moonshot project since it was first unveiled. In the meantime, the company has faced a series of scandals over privacy and security. Facebook's tarnished image will likely make consumers wary of giving the social network the ability to decode their thoughts — even if it decodes only the thoughts they want to share. Facebook also isn't the only company studying computer-brain interfaces. Elon Musk's startup Neuralink is likewise trying to link our brains to computers.
Facebook has teamed up with researchers at the University of California, San Francisco, to study whether it's possible to decode speech from a person's brain activity and display it on a computer screen. The researchers worked with three epilepsy patients who agreed to have electrodes temporarily implanted in their brains, according to a study published Tuesday in the journal Nature Communications.
The patients responded out loud to nine simple questions, such as “How is your room currently?” and “When do you want me to check back on you?” Simultaneously, machine learning algorithms were able “to decode a small set of full, spoken words and phrases from brain activity in real time,” according to Facebook.
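To give a rough sense of how decoding a small vocabulary from neural signals can work in principle, here's a toy sketch — not the study's actual model, and all the data in it is synthetic. It assumes each word evokes a distinctive average pattern of activity across a set of electrode channels, and classifies a new activity window by finding the nearest learned pattern (a nearest-centroid classifier).

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["yes", "no", "cold", "hot"]  # hypothetical small vocabulary
N_CHANNELS = 16                       # hypothetical electrode count

# Simulate training data: each word gets a distinct mean activity
# pattern across the channels, plus per-trial noise (20 trials/word).
true_patterns = {w: rng.normal(size=N_CHANNELS) for w in VOCAB}
train = {w: true_patterns[w] + 0.1 * rng.normal(size=(20, N_CHANNELS))
         for w in VOCAB}

# "Training" here is just averaging each word's trials into a centroid.
centroids = {w: trials.mean(axis=0) for w, trials in train.items()}

def decode(window):
    """Return the vocabulary word whose centroid is closest to the window."""
    return min(VOCAB,
               key=lambda w: np.linalg.norm(window - centroids[w]))

# Decode a fresh, noisy activity window corresponding to "cold".
sample = true_patterns["cold"] + 0.1 * rng.normal(size=N_CHANNELS)
print(decode(sample))
```

Real systems are far more sophisticated — the study used recurrent neural networks over high-density cortical recordings — but the core idea of mapping activity patterns to a constrained set of words and phrases is the same.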
Despite this progress, the social network acknowledged that there’s still a lot of work to do to build augmented reality glasses with such features.
“That future is still a long way off, but early-stage research taking place today is the first step toward delivering on its promise,” the company said.