‘I knew it!’—Why misinformation feels so good to share, and what to do about it
SCARP researcher Wes Regan talks about how people can navigate an increasingly polluted information environment ahead of an important election season.
With a Canadian federal election possible at any time, and U.S. midterm elections set for November 2026, the information landscape is increasingly strained. AI-generated videos can falsely portray public figures, and misleading claims about local politicians continue to circulate. These forms of misinformation risk shaping how citizens understand public issues—and may weaken trust in democratic institutions.
Recent examples hit close to home. In Vancouver, a city councillor shared a widely viewed video claiming that other elected officials had used or distributed drugs. The allegation was later traced back to the mayor, who had shared it first and has since apologized. The episode showed that misinformation doesn’t need advanced technology to influence public debate.
Wes Regan, a PhD candidate and researcher in UBC’s School of Community and Regional Planning, studies polarization, emotion and public decision-making. His work examines how and why misinformation spreads, particularly around urban planning and public policy. He also looks at ways to rebuild trust and improve democratic discussion. He spoke with UBC News about how people can navigate an increasingly polluted information environment ahead of an important election season.
What does misinformation look like, and is it getting worse?
Misinformation is often shared with good intentions—someone believing they’re helping. Disinformation is different: It’s spread deliberately, often to sow division, undermine trust in institutions or subvert democratic processes like elections.
Digital platforms, particularly social media algorithms, tend to reward sensational and controversial content. In that sense, a polluted information environment has become part of the new normal. Like any polluted environment, cleaning it up requires government regulation, corporate responsibility and actions by individuals and communities.
Why does misinformation feel so convincing?
It often comes down to emotion, which can tip us toward believing or sharing something, even if we have doubts.
Misinformation often confirms suspicions and reinforces existing biases, making it feel validating to read and satisfying to share. It frequently contains an element of plausibility. Legitimate concerns about corporate greed, for example, can be used to cast doubt on rigorously tested vaccines. Even if a claim sounds unlikely, it may feel like it captures a deeper truth or belief that we hold.
We’re heading into election season with more AI-generated political content than ever. Should we be worried?
AI may package misinformation more convincingly, but it doesn’t change the underlying dynamic—it still works best when someone already wants to believe what they’re hearing.
The greater concern is impersonation: AI mimicking politicians or election officials, directing people to the wrong polling station or giving incorrect voting dates. That has already occurred in limited cases, and AI could make those attempts more persuasive.
The deeper risk, however, is cultural. If we begin consulting an algorithmic oracle instead of engaging with one another, we sidestep the harder work of democracy—engaging across differences, negotiating, deciding for ourselves.
What’s a practical first step before sharing something?
Listen to that inner voice telling you something feels off. If a claim is sensational, too good to be true, or paints someone in a cartoonishly villainous light, pause and check the source. And remember that legacy media, whatever its biases, is far more reputable and more heavily regulated than online influencers.
If you see misinformation spreading in your own community, what’s the best response?
It depends on the context. Sometimes speaking up publicly is appropriate; other times a private conversation is more effective. Research shows the messenger can matter as much as the message. MIT political scientist Adam Berinsky has demonstrated the power of an “unlikely source” in correcting misinformation: someone you wouldn’t expect to challenge a claim, someone perceived as less partisan or whose values align with those of the person exposed to the misinformation.




