
The Missing Human in Misinformation Fixes

Misinformation solutions target a rational, ethical ideal who doesn’t exist; to combat misinfo, we need to start with a richer concept of the human

Illustration: a colorless, featureless mannequin head and bust on a light gray background. Credit: Grebeshkovmaxim/Getty Images

Flicking through your social media feed, an image strikes you. It is outrage-inducing, confirming long-held beliefs about a group or an issue. You quickly repost it while stating your viewpoint, signaling to your family and friends whom you stand with.

This is the kind of everyday, understandable response that has seen misinformation spread widely. But misinformation “solutions” are still based on a rational and ethical figure, an idealized human with no background or social ties, who carefully weighs up all the facts and arrives at “the truth,” contributing to a more civil public discourse.

Real people operate on hunches, loyalties and grudges. To combat misinformation, we need to start from this actually existing human, someone who is emotional, factional and frictional.




The stakes here are clear. As misinformation proliferates across our digital environments, it has real-world consequences, from political campaigns to pandemic responses. But while misinformation clearly matters, the framing and foundations of its fixes are shaky: they fail to adequately account for affect and emotion, for individual and group identity, and for misinformation’s entanglement with deep aspects of human nature.

To start, misinfo solutions too often assume that humans are rational. This is the figure behind framings of misinformation as a “reasoning error” or the result of an information deficit. Because missing facts are cast as the problem, fact-checking becomes the de facto solution, with some 200 fact-checking initiatives proliferating across 60 countries. This reasoning human, the logic goes, just needs extra reasoning power provided by technology.

But actual humans are no longer rational (if they ever were). Enlightenment principles of positivism, progress and universalism have been fatally undermined. We see a shift from objectivity to intersubjectivity in how people interpret issues and events. While the rise of fake news and post-truth should certainly not be condoned, it signals an important dissatisfaction with rationality. Ours is a moment in which “facts have lost their currency.” Rather than pining for older ideals, we must adopt an understanding of the human that acknowledges this shift.

Next, it’s assumed this model human is ethical, someone who always respects the humanity in others, and unfailingly directs their practices toward the moral and the good. People are honorable and tolerant, consistently achieving a virtuous life, even online. Based on this assumption, misinformation becomes a malevolent technique carried out by “bad actors” on decent people, a “scourge” deployed by rogue states and social manipulators on “liberal institutions, electoral processes, and social norms.”

Perfectly ethical people, the assumption goes, need only be shown that their “information” is misinformation to disown it. This is why many studies focus on identifying misinformation with little rationale or follow-up. Informed of their mistake, it is assumed, upstanding citizens will correct their practice and return to careful dialogue that uplifts the public sphere. But actual humans are not so innocent and austere. Whether Zoom bombing, shitposting or sharing hoaxes and rumors, humans carry out playful, social and antagonistic activities out of curiosity, status-seeking and myriad other motives.

Misinfo solutions thus rest on a cardboard cutout, a flat-packed persona. We need a three-dimensional human, a more robust and authentic model that incorporates the messy, irrational and social aspects of our nature.

First, humans are emotional, intuitive and instinctual rather than rational. Faced with an incredible volume and velocity of conflicting information, we turn to the nonthought of feeling, reactions and routines. Preferences come before inferences. In decision theories, the growing emphasis on emotionality rather than rationality has been described as a paradigm shift, with a mountain of evidence suggesting that emotions constitute potent, pervasive and predictable drivers of decision making.

The feeds and features of our online environments capture and amplify this emotion, prioritizing feelings and immediacy rather than rationality and rumination. As one Google engineer stressed, these environments privilege our “impulses over our intentions.” Given these conditions, a more intuitive and expressive human must underpin any attempt to address misinformation.

Second, humans are factional. Individuals show favoritism to those within their group while ignoring or ostracizing those deemed outside that circle. Factionalism shapes how we assess and even approach information. Group belonging trumps accurate judgment. Partisan differences matter. The “truth” is shaped by identity and sociality.

Given a robust conception of the human, this should come as no surprise: information is not evaluated by a “blank slate” individual but by someone with kith and kin, with beliefs and a background. A liberal’s go-to news source is, to that person, “real news”; to a conservative, it’s “fake news.” And even beyond obvious political persuasions, we can imagine diverse affiliations (Gen Z, Buddhist, “single mother,” “working-class”) that all shape our responses. Really understanding the deep allure of misinformation, its repeated hold even in the face of corrections, requires incorporating a factional human whose loyalties run deeper than logic.

Finally, humans can be frictional. This doesn’t imply everyone exhibits virulent racism or sexism but simply acknowledges that humans are highly attuned to difference, and that favoritism and discrimination can manifest in systemic and subliminal ways. While these antipathies are ancient, digital technologies repackage them into compelling new forms.

These divisions shape the production, consumption and circulation of information. “Identity propaganda,” for instance, uses othering narratives to delegitimize or even dehumanize out-groups and reinforce membership in the in-group. Us versus them. Rather than assuming a cookie-cutter figure—someone liberal, civil and infinitely tolerant—we must acknowledge that human relations also contain frictions, fears and antagonisms.

Reinserting the human reshapes how misinfo is framed and fought. Instead of a thinker evaluating claims, or a passive populace being duped by bad actors, we see subjects with lived experiences, ideologies and communities, who respond to information through a mixture of feelings, factionalism and even animus toward others. Resistance to COVID shots and denial of climate change are not about “the facts” but about a felt truth that is deeper and more social. Misinformation is actively constructed by society.

The messy human tramples the comforting assumption that misinfo can be solved with more or “better” information. Instead, we need multipronged solutions, combining interdisciplinary insights from media, race and cultural studies, psychology, political science and education to understand what makes anti-immigration narratives or climate denial compelling. Rather than claiming silver-bullet solutions, we should acknowledge that misinformation is a wicked problem, one without easy answers. Such humility is ultimately beneficial, opening up new perspectives and approaches from a broader community. Starting with a more holistic human will empower our interventions, targeting the full spectrum of all-too-human traits that fuel misinformation.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Luke Munn is a research fellow in Digital Cultures & Societies at the University of Queensland, Australia. The author of six books and numerous articles, his work combines digital methods and critical insights from across the humanities to explore the social, political, and environmental implications of contemporary technologies.
