Can Geeks Defeat Lies? Thoughts on a Fresh New Approach to Dealing With Online Errors, Misrepresentations, and Quackery


This afternoon, I’ll be at MIT for this conference, sponsored by the Berkman Center for Internet and Society at Harvard and the MIT Center for Civic Media and entitled “Truthiness in Digital Media: A symposium that seeks to address propaganda and misinformation in the new media ecosystem.” Yesterday was the scholarly and intellectual part of the conference, where a variety of presenters (including yours truly) discussed the problem of online misinformation on topics ranging from climate change to healthcare—and learned about some whizzbang potential solutions that some tech folks have already come up with. And now today is the “hack day” where, as MIT’s Ethan Zuckerman put it, the programmers and designers will try to think of ways to “tackle tractable problems with small experiments.”

In his talk yesterday, Zuckerman offered a helpful—if, frankly, somewhat jarring—analogy for thinking about political and scientific misinformation. It’s one that has been used before in this context: You can think of the dissemination of misinformation as akin to someone being shot. Once the bullet has been fired and the victim hit, you can try to run to the rescue and stanch the bleeding—by correcting the “facts,” usually several days later. But psychology tells us that that approach has limited use–and to continue the analogy, it might be a lot better to try to secure a flak jacket for future victims.

Or, better still, stop people from shooting. (I’m paraphrasing Zuckerman here; I did not take exact notes.)

From an MIT engineer’s perspective, Zuckerman noted, the key question is: Where is the “tractable problem” in this, uh, shootout, and what kind of “small experiments” might help us to address it? Do we reach the victim sooner? Is a flak jacket feasible? And so on.

The experimenters have already begun attacking this design problem: I was fascinated yesterday by a number of canny widgets and technologies that folks have come up with to try to defeat all manner of truthiness.

I must admit, though, that I’m still not sure that their approaches can ultimately “scale” with the kind of mega-conundrum we’re dealing with—a problem that ultimately may or may not be tractable. Still, my hat is off to these folks, and the enthusiasm I detected yesterday was impressive.

Some examples:

* Gilad Lotan, VP of R&D for Social Flow, has crunched the data on falsehoods and, um, truthoods that trend on Twitter. He’s studied which lies persist, which die quickly, which never catch fire—and why. To stop falsehoods in their tracks, he advocates a “hybrid” approach to monitoring Twitter lies–combining the efforts of man and machine. “We can use algorithmic methods to quickly identify and track emerging events,” he writes. “Model specific keywords that tend to show up around breaking news events (think “bomb”, “death”) and identify deviations from the norm. At the same time, it’s important to have humans constantly verifying information sources, part based on intuition, and part by activating their networks.”
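A minimal sketch of the machine half of the hybrid approach Lotan describes—flag windows of tweets where breaking-news keywords spike well above their historical baseline, then hand those windows to humans for verification. The function and keyword list below are my own illustration, not Social Flow’s actual system:

```python
# Illustrative alert-word list; a real system would model many more
# terms and learn their normal background rates.
ALERT_WORDS = {"bomb", "death", "explosion"}

def spike_score(window_tweets, baseline_rate):
    """Rate of alert-word tweets in this window, as a multiple of the
    historical baseline rate. Scores well above 1.0 suggest a deviation
    from the norm worth routing to a human verifier."""
    hits = sum(1 for t in window_tweets
               if ALERT_WORDS & set(t.lower().split()))
    rate = hits / max(len(window_tweets), 1)
    return rate / baseline_rate

tweets = ["Huge explosion downtown", "lunch was great", "BREAKING: bomb scare"]
print(spike_score(tweets, baseline_rate=0.01))  # far above 1.0 -> flag for review
```

The point of the design is the division of labor: the cheap algorithmic pass narrows millions of tweets down to a few candidate events, and the expensive human judgment is spent only on those.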

* Computer scientist Panagiotis Metaxas, of Wellesley College, has figured out a way to detect “Twitter bombs.” For instance, during the 2010 Senate race in Massachusetts between Scott Brown and Martha Coakley, Metaxas and his colleague Eni Mustafaraj found that a conservative group had “apparently set up nine accounts that sent 929 tweets over the course of about two hours….Those messages would have reached about 60,000 people.” Alas, the Twitter bomb was only detected after the election, once Metaxas and Mustafaraj crunched the data on 185,000 Tweets.

* Tim Hwang, of the Pacific Social Architecting Corporation, introduced us to bot-ology: How people are creating programs that manipulate Twitter and even try to infiltrate social networks and movements. Hwang talked about, essentially, designing countermeasures: Bots that can “out” other bots—and even serve virtuous purposes. “There’s a lot of potential for a lot of evil here,” he told The Atlantic. “But there’s also a lot of potential for a lot of good.”

* Paul Resnick, of the University of Michigan, discussed the beta-mode tool Fact Spreaders, an app that automatically finds Tweets that contain falsehoods and connects users to the relevant fact-check rebuttals–so they can rapidly tweet them at the misinformers (and misinformed). It seems to me that if something like this catches on widely, it could be powerful indeed.
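In spirit, a tool like this pairs tweets that repeat a known falsehood with the relevant fact-check link. The claim table and matching helper below are a hypothetical sketch of that idea, not Fact Spreaders’ real design (the URLs are placeholders):

```python
# Hypothetical claim -> rebuttal table; a real tool would draw on
# fact-checker databases rather than a hand-written dict.
FACT_CHECKS = {
    "death panels": "http://example.org/factcheck/death-panels",
    "born in kenya": "http://example.org/factcheck/birther",
}

def match_rebuttal(tweet):
    """Return a (claim, url) pair if the tweet repeats a known
    falsehood, or None if nothing matches."""
    text = tweet.lower()
    for claim, url in FACT_CHECKS.items():
        if claim in text:
            return claim, url
    return None

print(match_rebuttal("The bill creates death panels!"))
# -> ('death panels', 'http://example.org/factcheck/death-panels')
```

Even this toy version shows why the approach could scale: matching is automatic, and the human’s only job is deciding whether to hit “tweet” on the suggested rebuttal.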

This is just a tiny sampling of the truth gadgets that people are coming up with. Ethan Siegel, a science blogger who was not at the conference (but should have been), is now working for Trap!t, an aggregator that is “trained” to find reliable news and content, and screen out bad information.

So…okay. I am very impressed by all of this wizardry, and am glad to share news of these efforts here. But now let’s ask the key question: Can it scale? Can it really make a difference?

Look: If Google were to suddenly do something about factually misleading sites showing up when you, say, search for “morning after pill” (see the fourth hit), there’s no doubt it would make a big difference. But as Eszter Hargittai of Northwestern put it in her talk yesterday (which highlighted the “morning after pill” example), Google doesn’t seem to be taking on this role. And none of us have anything remotely like the sway of Google.

Short of that, what can these kinds of efforts accomplish?

I heard a lot of impressive stuff yesterday. But what I didn’t hear—not yet anyway—was an idea that seems capable of getting past the vast and potentially “intractable” problem of information-stream fragmentation along ideological lines. The problem, I think, is captured powerfully in this image from a recent New America Foundation report on “The Fact-Checking Universe”; the image itself was originally created by a firm called Morningside Analytics.

What the image shows is an “attentive cluster” analysis of blogs that are interested in the topic of fact-checking—i.e., reality. Blogs that link to similar sites are grouped together in bubbles—or closer to each other–and the whole group of bubbles is organized on a left-to-right political dimension.

The image shows that although both profess to care about “facts,” progressive and conservative blogs tend to link to radically different things—i.e., to construct different realities. And that’s not all. “A striking feature of the map,” write the New America folks, “is that the mainstream progressive cluster is woven into [a] wider interest structure [of blogs that are interested in economics, law, taxes, policy, and so on], while political discourse on the right is both denser and more isolated.” In other words, conservatives interested in fact-checking are linking to their own “truths,” their own alternative “fact-checking” sites.

What I haven’t yet heard are ideas that seem capable of breaking into hermetically sealed misinformation environments, where an endless cycle of falsehoods churns and churns–where global warming is still a hoax, and President Obama is still a Muslim, born in Kenya, and the health care bill still creates “death panels.”

Nor, for that matter, have I yet heard of a tech innovation that seems fully attuned to the psychological research that I discussed yesterday, along with Brendan Nyhan of Dartmouth. For a primer, see here for my Mother Jones piece on motivated reasoning, and here for my piece on the “smart idiot” effect—both are previews of my new book The Republican Brain. And see Nyhan’s research, which I report on in some detail in the book.

What all of this research shows—very dismayingly—is that many people do not really want the truth. They sometimes double down on wrong beliefs after being corrected, and become more wrong and harder to sway as they become more knowledgeable about a subject, or more highly educated.

Facts alone—or the rapid-fire tweeting of fact-checks—will not suffice to change minds like these. Ultimately, the psychology research says that you move people not so much through factual rebuttals as through emotional appeals that resonate with their core values. These, in turn, shape how people receive facts—how they weave them into a narrative that imparts a sense of identity, belonging, and security.

Stephen Colbert himself, when he coined the word “truthiness,” seemed to understand this, talking about the emotional appeal of falsehoods:

Truthiness is 'What I say is right, and [nothing] anyone else says could possibly be true.' It's not only that I feel it to be true, but that *I* feel it to be true. There's not only an emotional quality, but there's a selfish quality.

As I said in my talk yesterday, there is now a “Science of Truthiness”—that was very nearly the title of my next book, though Republican Brain is better—and it pretty much confirms exactly what Colbert said.

So unless you get the psychological and emotional piece of the truthiness puzzle right, it seems to me, you’re not really going to be able to change the minds of human beings, no matter how cool your technology.

Therefore–and ignoring for a moment whether I am sticking with “tractable” problems or not–I think these tech forays into combating misinformation are currently falling short in three areas:

1.      Speed. This is the one the programmers and designers seem most aware of. You have to be right there in real time correcting falsehoods, before they get loose into the information ecosystem—before the victim is shot. This is extremely difficult to pull off—and while I suspect progress will be made, it will be hard to really keep up with all the misinformation being spewed in real time. At most, we might find that the best that's possible is a stalemate in the misinformation arms race.

2.      Selective Exposure. You’ve got to find ways to break into networks where you aren’t really wanted—like the alternative “fact” universe that conservatives have created for themselves. This is going to mean appealing to the values of a conservative—perhaps even talking like one. But…that sounds very bot-like, does it not? Perhaps moderate conservative and moderate religious messengers could be mobilized to make inroads into this community–again, operating at rapid-fire speed.

3.      We Can’t Handle The Truth. Most important, human nature itself stands in the way of these efforts. I’m still waiting for a killer app that really seems to reflect a deep understanding of how we human beings are, er, wired. We cling to beliefs, and if our core beliefs are refuted, we don’t just give them up—we double down. We come up with new reasons for why they are true.

Please understand: I have no intention of raining on this parade. I’m actually feeling more optimism than I’ve felt in a long time. It’s infectious and inspiring to see brilliant people trying to take on and address discrete chunks of the misinformation problem—a problem that has consumed me for over a decade—and to do so by bringing new ideas to bear. To do so scientifically.

Still, to really get somewhere, we’ve really got to wrap our heads around 1, 2, and 3 above. That’s what I’m going to tell them at the “hack day” today—and the great thing is that unlike some of the people we’re trying to reach, I know this crowd is very open to new ideas, and new information.

So here’s to finding out what actually works in our quest to make the world less “truthy”–one app at a time. 



First, thanks so much for sharing your notes from this event. And for writing what I often think, “Can it scale? Can it make a difference?”

A word of introduction: I work at the Pew Research Center in DC, studying the social impact of the internet, particularly as it relates to health & health care. Most of what I’m focusing on these days is what I call “peer-to-peer healthcare” – how people with certain health issues are able to find each other online, share what they know, and band together as a posse to navigate the world in a better way, together.

The first or second question a lot of people (esp reporters and policymakers) ask me, in so many words, is: How can people possibly trust information posted online by complete strangers?

By contrast, when I talk with patients, entrepreneurs, or fellow researchers, their first or second question is along the lines of: How can we harness this trend?

I’m going to write more about this soon, but I wanted to highlight one aspect of peer-to-peer healthcare which might be useful for how people are thinking about truthiness in general: if you can find a good patient community, they will help filter out invalid information. There are all kinds of caveats to put on that, but check this out:

Researchers put this self-correction hypothesis to the test in a 2006 study published by the British Medical Journal. They analyzed the content of an online breast cancer forum and found that “10 of 4,600 postings (0.22%) were found to be false or misleading. Of these, seven were identified as false or misleading by other participants and corrected within an average of four hours and 33 minutes (maximum, nine hours and nine minutes).”

My favorite part of the study is the addendum, which includes excerpts of the 10 “bad” postings. They show that this was a high-level medical discussion among women whose lives were at stake. Group members talk about prescription-drug shelf-life, disease-staging parameters, and the likelihood of recurrence within five years – serious topics, taken seriously. The excerpts show that patients, when given access to sound medical information, cite it and put it to use.

The following comment, collected in a Pew Internet survey of a PatientsLikeMe community, is typical of what I’ve heard from patients who go online for health advice: “Of course all information found on the internet should be taken with a grain of salt and further researched, but it can be an invaluable resource if used properly. I’ve found plenty of bad or misleading information but have had little problem distinguishing that from useful, reliable information.”

Again, caveats galore - and I’m not talking about political discussions online, which you rightly point out are in a category by themselves. But I thought I’d share a perspective of hope. Altruism in patient communities – that can scale and make a difference.

See: Esquivel, et al. “Accuracy and self-correction of information received from an internet breast cancer list: content analysis.” BMJ. 2006 Apr 22;332(7547):939-42.

Disease and Vaccine.

You may not be able to save the first victim, but you can inoculate the rest. Furthermore, if you achieve a certain level of inoculation you acquire what is called ‘Herd Immunity’, meaning the disease cannot get a foothold in the population.
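The inoculation analogy has a standard quantitative form in epidemiology: for a contagion with basic reproduction number R0, the classic herd-immunity threshold is 1 − 1/R0. Applying that number to misinformation is, of course, only an analogy; the R0 values below are purely illustrative:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune before the
    contagion can no longer sustain itself (classic 1 - 1/R0 result)."""
    return 1 - 1 / r0

# Illustrative values: the more contagious the falsehood, the larger
# the share of the audience that needs "inoculating" in advance.
for r0 in (1.5, 3, 6):
    print(f"R0={r0}: inoculate {herd_immunity_threshold(r0):.0%} of the population")
```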

“Look: If Google were to suddenly do something about factually misleading sites showing up when you, say, search for “morning after pill” (see the fourth hit), there’s no doubt it would make a big difference.”

Yes, Google AdWords and Google optimization have just as much power to do harm as to help the profitability of an ethically balanced business.

Hopefully these web tools will go a long way to countering the types of “persona management” software that was uncovered in the HBGary hack last year:

“To build this capability we will create a set of personas on twitter, blogs, forums, buzz, and myspace under created names that fit the profile (satellitejockey, hack3rman, etc). These accounts are maintained and updated automatically through RSS feeds, retweets, and linking together social media commenting between platforms. With a pool of these accounts to choose from, once you have a real name persona you create a Facebook and LinkedIn account using the given name, lock those accounts down and link these accounts to a selected # of previously created social media accounts, automatically pre-aging the real accounts.”

“How many times have you seen a diary get posted that reports some revelatory yet unfavorable tidbit about someone only to see a swarm of commenters arrive who hijack the thread, distract with a bunch of irrelevant nonsense, start throwing unsubstantiated accusations and ad hominem attacks to where before you know it, everyone’s pretty much forgotten what the diary said in the first place.”

Google sorts not just by ads or popularity, but by your personal preferences.

So… if you prefer conservative views, Google will start to show those to you preferentially. In that way, Google and Facebook reinforce closed bubbles.

Check out this TED talk about filters.

That’s interesting, because when I clicked on the Google search for “morning after pill” in the article, the first four hits were unremarkable, but the fifth linked to morningafterpill dot org, for which Google’s summary was as follows:

“Site asserts that “morning after” emergency contraception is just another abortion approach that kills a human life.”

Google must know more about me than I thought!

The biggest problem with all research into how technology can be used to manipulate opinion is that the first and most effective adopters are right wing think tanks. They’ve got the money to employ the expertise and can devote all their time to it.

“The biggest problem with all research into how technology can be used to manipulate opinion is that the first and most effective adopters are right wing think tanks. They’ve got the money to employ the expertise and can devote all their time to it.”

In the Heartland budget doc, they dedicated $40k per year to it. That’s just one think tank.

“The biggest problem with all research into how technology can be used to manipulate opinion is that the first and most effective adopters are right wing think tanks.”

Just look at the way the deniers manipulated the polls to determine the best blogs. A number of denier blogs got awards, including Wattsuphisbutt, which got the award for “Best Science Blog”. What a joke.

The award that Wattsuphisbutt won comes from a contest whose rules are ripe for exploitation.

Let’s look at the rules in detail:

E-mail addresses are required to vote. You must use your own address and confirm the verification e-mail.

Once they own a few Internet domains, they can create as many e-mail addresses as they wish. And then they can robo-vote for Wattsuphisbutt. The whole contest looks like a one-man venture: winners get $20, the ceremony takes place on Twitter, etc. But for the lobbyists that must have some award, it’s good business.

Come on guys, do you think Watts went to all that trouble to win that silly award?

And isn’t it possible it had just a little bit to do with all the millions of hits his site gets each week?

People are always complaining about ‘denialists’ conspiracy theories’—you guys are just as bad!

Anyone can nominate anyone for the award. There is not even rudimentary vetting of the nominations. If there were any vetting, Wattsuphisbutt wouldn’t be in the Science category but rather in the Astroturf category.

When Nikolai Nolan (the award’s founder) was contacted about the voting procedure and whether he looks for potential stuffing of the votes, he did not reply.

The founder is happy to get as much interest as possible for his venture. For the denialists, he is the perfect target: they can abuse the service he provides.

Just for transparency, shall we see the emails (perhaps just the domains of the emails) of all those that voted for Wattsuphisbutt?

The more important question might be;

Who actually cares??

Peter Gleick just ruined his career and bought himself a possible ticket to some jail time, and you’re worried about who won the Bloggies award?

MIT? That’s ironic. Was MIT social engineer Richard Lindzen in the room?