This afternoon, I’ll be at MIT for this conference, sponsored by the Berkman Center for Internet and Society at Harvard and the MIT Center for Civic Media and entitled “Truthiness in Digital Media: A symposium that seeks to address propaganda and misinformation in the new media ecosystem.” Yesterday was the scholarly and intellectual part of the conference, where a variety of presenters (including yours truly) discussed the problem of online misinformation on topics ranging from climate change to healthcare—and learned about some whizzbang potential solutions that some tech folks have already come up with. And now today is the “hack day” where, as MIT’s Ethan Zuckerman put it, the programmers and designers will try to think of ways to “tackle tractable problems with small experiments.”
In his talk yesterday, Zuckerman offered a helpful—if, frankly, somewhat jarring—analogy for thinking about political and scientific misinformation, one that has been used before in this context: You can think of the dissemination of misinformation as akin to someone being shot. Once the bullet has been fired and the victim hit, you can try to run to the rescue and stanch the bleeding—by correcting the “facts,” usually several days later. But psychology tells us that this approach has limited use; to continue the analogy, it might be far better to secure a flak jacket for future victims.
Or, better still, stop people from shooting. (I’m paraphrasing Zuckerman here; I did not take exact notes.)
From an MIT engineer’s perspective, Zuckerman noted, the key question is: Where is the “tractable problem” in this, uh, shootout, and what kind of “small experiments” might help us address it? Do we reach the victim sooner? Is a flak jacket feasible? And so on.
The experimenters have already begun attacking this design problem: I was fascinated yesterday by the array of canny widgets and technologies that folks have come up with to try to defeat all manner of truthiness.