In Conversation: Hause Lin, Post-doctoral Fellow, Human Cooperation Laboratory

Neuroscientist Hause Lin is a post-doctoral research fellow at MIT, where he works with David Rand, a professor of management science and brain and cognitive sciences. Lin studies misinformation, including ways to prevent or reduce its spread. He served as a consultant both to the Dana Center planning grant team, helping them to develop new relationships, and to the Make-a-Fake class. We spoke to him about what draws people to misinformation.

Q: How did you become involved with the Dana Center planning team?

Hause Lin: I’m a computational neuroscientist with some background in social psychology, and I’ve been working on misinformation and how to communicate about it. So I have a foot in all three domains—neuroscience, ELSI scholarship, and public engagement—that were important to the MIT Museum as they were planning their grant. I thought I could help bring people from the different domains together.

I introduced the project demonstration team to Matt Groh, one of my colleagues, who went on to teach the Make-a-Fake class. Matt and I talk constantly with people at social media companies. Our goal is to get them to engage with our research and implement our ideas about how to reduce the spread of misinformation. Matt and I study the neuroscience of how people make decisions—their cognitive biases, how they interpret or perceive information, how visual illusions occur. Last week I traveled to LA to talk with people at TikTok about engaging with our research and implementing some of our interventions on their platform. It’s important to us to reach people outside of academia, so our work can have an impact.

Q: What do you think about the MIT Museum’s interest in developing a center that would bring together neuroscientists and public engagement specialists, as well as ELSI scholars?

Hause Lin: During my training as a neuroscientist, I used to work in an animal lab, with rats. And I’ve noticed that the more low-level researchers’ work gets—down to the level where you are literally studying neurons firing, for example, which is what a lot of people at MIT do—the more reluctant they become to engage the public. They’re worried about people misinterpreting their results. But public engagement is important.

Q: What’s your focus now, as a researcher? 

Hause Lin: Most of my research looks at what draws people to fake content. What is it about the social media environment that makes people less discerning, less aware? My colleagues and I are trying to understand the cognitive science, the human psychology, behind why people believe misinformation. We’re also trying to understand how we can help people identify fake content and encourage them to be more selective about what they look at and forward. 

Q: What have you found, about what draws people to fake content? 

Hause Lin: I believe that the root of the problem is that the main goal of tech companies is to make money—and so much of the internet is based on an advertising model. We have “free” Gmail, “free” Google search, and so on—but what underlies those “free” services, what pays for them, is advertising. It’s the force behind everything online. And for that reason, the entire internet has been built around businesses getting people onto their sites and keeping them there as long as possible.

Every second, billions of transactions are made by computers to figure out how to get your eyeballs. Your attention is being traded. Sophisticated systems are bidding for your attention. And my sense is that most people don’t know this is going on. 

The way to get eyeballs is by creating and offering sensational, engaging content. To be engaging, it needs to be surprising, or really odd, really weird. And it also needs to keep you wanting more. Fake news is often outrageous, so people click on it.

These companies need this kind of content to make people click, because every click they get is money. You are generating income for them. If you don’t scroll and click, you don’t make money for them—and the business fails. This has been going on for twenty years. So a lot of this terrible content has been floating around that long—and people have become desensitized.

Over the years, people have started to forget to think about accuracy, because content that makes sense and is factually accurate isn’t that engaging. Even The New York Times always needs to create the feeling that they’re reporting on something exciting and new. But hype distorts the truth.

Q: Is there anything we can do?

Hause Lin: In research I’ve done with David Rand’s lab, we’ve found that if you gently nudge people to think about accuracy, with just a simple reminder that takes two or three seconds, they start sharing more accurate content. It’s pretty simple. You don’t need a media literacy class. Research shows that people are relatively good at identifying whether something is fake or not—but not when they see it on social media, because there, people are not primed to look for accuracy. And it’s a distracting environment. But if you can stop people and ask them to think about accuracy, they are pretty good at it.

Q: One way to organize a Dana Center for Neuroscience and Society at MIT would be to break the work up into two-year project cycles. If you had two years to do a Neuroscience & Society project through the museum, what would you like to do? And how has the planning you've done, and the Dana Center events you've participated in, informed what you'd like to do?

Hause Lin: Developing exhibits that are more interactive, that look at the cognitive biases that influence how people make judgments, and at how these processes could explain why people are susceptible to deep fakes, fake news, and misinformation—that would be really nice. There really isn't much research on the neuroscience of misinformation yet, since the field is still fairly new.

To put together exhibits like this, we’d need to ask: At what level do we want to engage the public? And what kind of neuroscience do we want to do? We could find someone who looks at MRIs to see how brains work when they are making judgments about fake news—that would be one way to go. In terms of the public, we should try to bring together a more diverse audience too—different groups of people, with really different viewpoints.

Having a lecture series in which we bring in policymakers, thought leaders, and people from social media companies who are actually developing the tech would be good, too—and we could encourage them to chat with one another. I don’t think there is enough conversation going on about artificial intelligence and large language models like ChatGPT. Policymakers aren’t even keeping up with the tech. So it would be good to bring them in, have a panel discussion, and give the public the opportunity to listen in and ask questions.

But we should also be sure to include scholars in anything we do through a Dana Center. And one way to do that would be to have a tech-a-thon, a hack-a-thon—something that would bring together interdisciplinary teams to consider how to solve various problems that we’re facing now. Scholars love these kinds of competitions. And though tech is never going to resolve the kinds of ethical and moral dilemmas we’re facing right now, tech can help. 
