Activity Story: Augmenting Brains Demonstrations

The Media Lab's Fluid Interfaces Group planned a day-long professional symposium for the fall of 2022 revolving around its work in AI and brain-computer interfaces. Titled Augmenting Brains, the meeting was planned as an interdisciplinary exchange: while the organizers assembled an intensive program, the presentations were pitched to a more generalist audience. That made it a natural opportunity for public participation as well, so the venue was moved to the MIT Museum, and the Museum promoted free public attendance. The event was oversubscribed and included remarkable presentations, networking meals, and several demonstrations and temporary artworks. Because this all took place during the day on a Friday, however, some of the demonstrations were presented again weeks later at a Museum AfterDark event to ensure robust public involvement. The 21+ AfterDark event drew hundreds of people over a few hours, and after experiencing memory implants and brain entrainment, participants reflected on the social implications of what they had just been through. This incredibly rich set of activities is summarized here in two parts: one for the day-long Augmenting Brains symposium, and one for the AfterDark demos.

Augmenting Brains at the MIT Museum

“We make new tools to change what is possible. But at the same time, the tools change our world and change us. This is a cycle that has been going on: writing changed things, Google changed things—so we are constantly redefining what it means to be human. … I hope many of the tools we develop are used to learn something [about ourselves] without making us dependent on the technologies. That is one of the principles we design for.” -MIT Prof. Patti Maes, head of MIT’s Fluid Interfaces Lab

Much of what we once thought of as science fiction—the seemingly fantastical possibilities brought to life in movies like Total Recall, Inception, and Eternal Sunshine of the Spotless Mind—is now bleeding into real life, says MIT research scientist Nataliya Kos’myna, organizer of the Augmenting Brains 2022 symposium. By way of introducing the series of lectures, demonstrations, and conversations that she hosted on December 9, 2022, Kos’myna told the audience that they would meet experts who would discuss “what is fiction, what is reality, what is a gray area; what we can do, what we should do, what we cannot do—and maybe should not do, at least at the moment” (watch this introduction online). At the same time, Kos’myna insisted, all of the innovations coming down the pike will help us tap the full potential of “the most powerful tool we have, the human brain.” But whether the public is prepared for all this, societally, psychologically, and ethically, is another question—one that Ben Wiehe, MIT Museum manager, hoped adding the symposium to the Dana Center Planning Grant would help address.

The symposium brought together a group of powerful speakers, including:

  • Professor Patti Maes, director of the MIT Media Lab's Fluid Interfaces Group, who has been recognized as one of the top One Hundred People for the New Century by Newsweek and one of the top fifty high-tech pioneers by TIME Digital, and as a “Global Leader for Tomorrow,” by the World Economic Forum; 
  • Psychologist Elizabeth Loftus, Distinguished Professor of Criminology, Law & Society and Psychological Science at the University of California-Irvine, who has served as an expert witness in such highly publicized legal cases as those of Michael Jackson, Harvey Weinstein, and Jeffrey Epstein associate Ghislaine Maxwell; and
  • Film and TV writer Steven Kane, perhaps best known as the co-creator and showrunner for the recent Paramount+ sci-fi series, “Halo.”

Videos of Augmenting Brains Symposium presentations are available to watch online.

Patti Maes Presentation: Cognitive Augmentation (Watch Online)

At the moment, tech can bring a universe of information to our fingertips instantly, sure. But what’s not yet widely available are devices and applications that help us enhance the cognitive capacities—e.g., improved attention and better memorization—that are important to success and self-fulfillment. The day’s opening speaker, Professor Patti Maes, hopes to change that. Throughout her thirty-year career, Maes has done pioneering work on wearable computers that assist with cognitive enhancement and on devices that respond to subvocalizations, helping people with paralysis and motor impairments communicate and interact with their home environments. As head of the Media Lab’s Fluid Interfaces Group, moreover, she oversees a team that uses insights from neuroscience, machine learning, and psychology to create tools that enhance motivation, attention, memory, creativity, critical thinking, communication, empathy, and emotion regulation. The systems and implements the Fluid Interfaces team creates are compact and cheap, with the end goal of being widely available so that anyone who wants to develop the untapped powers of their mind will have the opportunity.

Maes’s talk amounted to a verbal World’s Fair of neuroscientific innovation, as she took us through the dazzling projects that her protegees are working on. They included: 

SleepStim, a “smart” sleep mask that can be used at home to measure physiological responses during sleep and then wirelessly deliver interventions in response—such as white noise, pink noise, and room-temperature changes—to improve sleep;

AttentivU, a pair of “smart” glasses (for which Kos’myna is the lead scientist) that would improve attentiveness by way of a biofeedback system, incorporated into the frames, that tracks brain activity and prods the wearer when attention lags;

AlterEgo, a Brain Computer Interface (BCI) that picks up subtle muscle movements related to speech production by way of electrodes and communicates them to a computer screen, which helps people with ALS, MS, non-verbal autism and other disabilities to communicate;

Future You, an app that uses generative AI language models to give users a profile of themselves decades in the future—at age sixty, for instance—with the goal of helping them think more concretely about their goals and take action to realize them.

Elizabeth Loftus Presentation: Fabricating Memories (Watch Online)

“Memory is like a Wikipedia page,” said Loftus, to open her presentation. “You can write it, you can edit it, but anyone else can come in and edit it too.” She went on to point out that high-tech devices aren’t necessary to create false memories that affect people’s behavior. “We can do that now, just talking to people,” she said.

Indeed, a voluminous body of research shows that memories are easily altered by suggestive assertions made after an event. In a study that Loftus led, for instance, researchers showed subjects a simulated accident in which a red Datsun goes through an intersection where it should’ve stopped. “By supplying the subjects with some post-event misinformation, we got lots and lots of people to believe and remember they saw the car go through a yield sign, not a stop sign,” she explained. “We did that by just asking our witnesses a series of questions, including: ‘Did you notice another car pass the red Datsun, at the intersection with the yield sign?’” She said that the question about the other car was a kind of decoy or distraction—a “Trojan Horse,” as she put it. It got the subjects wracking their brains to call up the supposed other vehicle, and while they were preoccupied with that, the misinformation about the yield sign slipped in, unscrutinized by many participants, turning into a false memory in the process.

Researchers subsequently wondered if it would be possible to implant memories of experiences that would be personally affecting for the subjects—about, for instance, nearly drowning or being viciously attacked by an animal in childhood. A meta-analysis of these studies found that nearly one in three participants developed a false memory; moreover, an additional twenty-three percent developed a false sense that something bad had probably happened to them—“which may be the first step down the road to developing a full-blown false memory,” said Loftus. She also described evidence that false memories can affect a person’s subsequent behavior—such as when people who have been given a false memory of getting sick from a certain food subsequently eat less of it.

Steve Kane Conversation: "HALO" and the Holograms (Watch Online)

When storytellers take inspiration from neuroscientists and translate those ideas into vivid entertainment, they help both to drive further, more elegant innovation and to educate the public—“by showing the ethical boundaries of the technology,” as Kos’myna said, and by “asking and possibly answering [the question], ‘What is it to be human?’” No wonder, then, that Kos’myna gave sci-fi a prominent place in her symposium—introducing Loftus’s talk with a clip from “Inception,” for instance. Kos’myna ended the day with more from Hollywood: Special guest Kane, the writer and showrunner, discussed how his study of artificial intelligence and holograms informed his popular 2022 streaming series, “Halo.” The show, set during a fictional 26th-century war in which humans battle a military alliance of advanced alien races, focuses on a supersoldier, equipped with both BCIs and prosthetics, and his complicated relationship with an AI-powered hologram named Cortana. Kane used a series of clips from the show to give the audience a sense of both the narrative and some of the moral quandaries he was interested in exploring through it. “Cortana has every data point in the world but no value measure,” he explained. To the AI, “Mein Kampf and the Bible are equivalents,” he pointed out. “AI is a perfect assistant, but at what price?” He went on to pose a rhetorical question: “As we build tech to adapt to the world we created, do we lose sight of what it is to be human?”

Ethical and Societal Issues

A big hurdle in getting the BCIs that Maes and her protegees are developing into the hands of the public will be ensuring that they’re effective not just in the controlled environment of the lab—where there isn’t much to distract the user, or interfere with the device—but in the more chaotic setting that is the real world. Beyond that, however, big moral questions loom. “We weave ethics discussions throughout all of our work, to try to keep in mind what the negative consequences of all these technologies that we develop might be,” Maes said. Her lab doesn’t share people’s private data, for instance, or store it on a server, to help protect their privacy. Nonetheless, she noted, a topic of frequent discussion in her lab is how technology often helps to create problems, rather than solve them. For instance, as she pointed out, “All of us are busier and busier [due in part to] these phones that are constantly giving us stimulation.”

Event organizer Kos’myna—who has been experimenting with real-time biofeedback models to enhance and augment human performance, particularly attention and focus, since 2016—is concerned about privacy, too. “We lost the battle in social media,” she says. “That train left the station and we didn’t do enough to stop it.” She continues the metaphor: “This train is about to depart, it is moving slowly, it hasn’t sped up yet.” There is still time to control its path, she suggests. “And this is much more important than social media,” she adds. “This is about your most intimate data, how your brain behaves.” 

Loftus broached the topic of ethical issues—“We now need to be asking: When should we use this kind of mind technology, or should we ever think about banning its use?”—only to punt on it. “This isn’t for me as a cognitive psychologist to answer,” she continued, “but just for the rest of us to think about it.” That dodge disappointed MIT Museum manager Ben Wiehe. “I thought that was a cop-out,” he says. Experts like Loftus have a responsibility to consider such matters, as far as Wiehe is concerned; and if the MIT Museum were to do a similar event in the future, he would encourage the speakers to engage more deeply with the ethical and philosophical considerations at hand. He notes that many of MIT’s neuroscientists think that the technology they’re developing is neutral, even though, as he says, “a lot of it was created with the intent of disruption.” He continues: “The speed at which this is moving presumes a level of understanding of it, and of the deep implications of it all, that is just not there. It presumes an equitable society that knows how to handle things. And the quicker you move, the less inclusive and equitable it will be.”

Augmenting Brains Demos at MIT Museum's AfterDark Event

There was substantial public participation in the Augmenting Brains symposium, but because it was held during the work week, the project looked for a more accessible time to connect with the public. Several weeks after the symposium, two related demonstrations were presented at the MIT Museum's AfterDark event, a nighttime 21+ gathering that drew hundreds to the Museum over several hours.

At the MIT Museum's AfterDark in January of 2023, visitors were treated to encounters with memory implantation and brain entrainment. Hundreds of participants eagerly stood in long lines for a chance to try these interventions. The memory implant demonstration offered them a replication of Professor Elizabeth Loftus’s well-known experiment (which she described during the Augmenting Brains symposium). Attendees were shown a video of a red Datsun going through an intersection with a stop sign—and then asked if they’d noticed another car passing the red Datsun at the intersection with the yield sign. In a quiz after the video, many participants remembered a yield sign instead of a stop sign. Afterward, the neuroscientist working the demonstration told them about Loftus’s research—and how introducing misinformation, like the yield sign, while people are distracted, as they would’ve been by the question about the other car, can distort memory and even result in specific implanted memories.

In the next room over, Nathan Whitmore demonstrated a prototype of technology to treat memory impairment that he’d prepared especially for the evening. The headset gave people a sense of how neuroscientists like Whitmore are trying to augment brains in ways that will “enhance and change people’s memories according to their preferences, and give people the ability to learn new skills or learn skills faster,” as Whitmore explained.

When one visitor heard about Whitmore's SleepStim project, they asked a pointed question about the potential to manipulate memory: “Do you think it will be possible to tamp down the emotion around bad memories—of abuse or trauma, let’s say—without actually deleting or erasing the bad memories? Because what seems problematic is the emotional reaction, more than the actual memory. I think I’d want to retain the truth of a bad thing that had happened to me, since it seems like it would be central to identity, even if I’d also want to expunge the negative feelings and emotions around it.” Whitmore said he thought that kind of thing would indeed be possible some day. “This happens naturally [to some extent] during sleep,” he explained. “Sleep seems to reduce the fear/distress associated with memories even while preserving their content. It's also a goal with therapies for PTSD and even other disorders like depression and anxiety disorders.”

MIT Neuroscientist Nataliya Kos’myna, who oversaw the Augmenting Brains Demonstrations event, was invigorated by the strong attendance and curiosity that attendees exhibited that evening. “The feedback we got was that people wanted more time to try them,” she says. As such, if she had the chance to offer similar events again by way of a Center for Neuroscience & Society, she would give the public more time to interact with the new systems and devices that she and her colleagues are working on, and to do so throughout the day. She thinks the Museum represents “a wonderful setting,” which allows for deeper engagement with the general public, including people of all ages. It provides an opportunity to “explain the ethical dilemmas without shoving it in their face. I think that is so powerful. It can get us going with building a stronger bond with the community.” At the Museum, she points out, people are relaxed and looking to enjoy themselves, which sets the stage for “a more fluid engagement, where everyone can be comfortable with their skill set and knowledge base.” 

Jacob Montz, the MIT Museum coordinator of the Dana Center Planning Grant (see the images associated with this story), staffed the next station in the experience. The station presented a set of single-word prompts developed with the help of an ELSI scholar, and asked participants to choose a prompt that stood out to them, note why on a Post-it, and place it on the associated prompt poster hung in the space. Montz was also impressed by how engaged people were. “Though there were lines out the door, attention held for a long time, which was a bit of a surprise,” he reports. Montz observed and spoke to people after they experienced the demos, and he says, “People were really genuinely engaged with it and asking a lot of questions. They stayed because they were interested.”

If Montz were going to do a similar activity again, he would want more training for how to succeed with ethical engagement of visitors—because he had some concerns about how these devices might be abused in “a sci-fi kind of way.” What’s more, he said, while devices like these might help a lot of the population live happier, healthier lives, he also wonders if, due to high cost, they might become a way to reinforce disparities—”in the way that those of us who have more money can eat healthier, because a good diet is more expensive than a bad one.” As for how to engage visitors with questions like these, however, he wasn’t sure. “If we even had someone talk to us for thirty minutes about how to engage people with ethical questions, that would help. Because I’m not trained to engage people on ethical questions, per se.”
