Welcome to the Era of ‘Deep Doubt’
Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential “cultural singularity,” a threshold where truth and fiction in media become indistinguishable. A key part of reaching that threshold is the level of “noise,” or uncertainty, that AI-generated media can inject into our information ecosystem at scale. The prevalence of AI-generated content could eventually create widespread doubt about the authenticity of real historical events, which would be another manifestation of deep doubt. In 2022, Microsoft chief scientific officer Eric Horvitz echoed these ideas in a research paper that warned of a potential “post-epistemic world, where fact cannot be distinguished from fiction.”
And deep doubt could erode social trust on a massive, internet-wide scale. This erosion is already manifesting in online communities through phenomena like the growing “dead internet theory,” a conspiracy theory which posits that the internet now mostly consists of algorithmically generated content and bots that pretend to interact with it. The ease and scale with which AI models can now generate convincing fake content are reshaping our entire digital landscape, affecting billions of users and countless online interactions.
Deep Doubt as “The Liar’s Dividend”
“Deep doubt” is a new term, but it’s not a new idea. The erosion of trust in online information from synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari spoke of an upcoming “information apocalypse” due to deepfakes and asked, “When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?”
In 2019, Danielle K. Citron of Boston University School of Law and Robert Chesney of the University of Texas, in a paper called “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” coined the term “liar’s dividend” to describe this phenomenon. In that paper, the authors say that “deepfakes make it easier for liars to avoid accountability for things that are in fact true.”
This liar’s dividend paradoxically grows more effective as society becomes more educated about the dangers of deepfakes, since people will then know that any form of media can be easily faked. The paper warns that this trend could exacerbate distrust in traditional news sources, potentially eroding the foundations of democratic discourse. Moreover, the authors suggest that the phenomenon could create fertile ground for authoritarianism, as objective truths lose their power and opinions become more influential than facts.
The concept of deep doubt also intersects with existing issues of misinformation and disinformation, providing a new tool for those seeking to spread false narratives or discredit factual reporting. It could accelerate a scenario already under way, driven in particular by cable news and social media, in which our shared cultural perception of truth grows ever more subjective, with more individuals believing whatever aligns with their preexisting views rather than weighing evidence from a different cultural perspective.
How to Counter Deep Doubt: Context Is Key
All meaning derives from context. In a sense, we make sense of reality by crafting our own interrelated web of ideas. An idea considered in isolation, without knowing how it links conceptually to the existing world, is meaningless. Along those lines, attempting to authenticate a potentially falsified media artifact in isolation doesn’t make much sense either.
Throughout recorded history, historians and journalists have had to evaluate the reliability of sources based on provenance, context, and the messenger’s motives. For example, imagine a 17th-century parchment that purportedly provides key evidence about a royal trial. To determine whether it’s reliable, historians would evaluate its chain of custody and check whether other sources report the same information. They might also consult contemporary records to see if the parchment’s existence was documented at the time. That requirement has not magically changed in the age of generative AI.