The End of Photographic Evidence? The Liar’s Dividend and Generative AI
An analysis of how generative AI and deepfakes create the “Liar’s Dividend,” allowing authentic visual evidence to be dismissed as fake and complicating the authentication of evidence in the legal system.
Published March 20th, 2026
Written by Fayzaan Virk
Imagine a scenario where a police officer is standing trial for assault, accused of using unnecessary force against an unarmed Black man. The only evidence against the officer is the victim's statement and a video recording of the incident. In almost any trial until now, the video would have been conclusive. In this trial, however, the defense attorney takes a different approach, claiming the video was AI-generated. Although the video is authentic, the specter of deepfake evidence sways the jury, which acquits the officer.
According to a survey conducted by Security, 74% of people worry about the societal impact of deepfake AI images and videos. This fear is well placed. The latest generative AI models, like Google's Nano Banana Pro and OpenAI's Sora, can generate images and videos that are indistinguishable from real ones. Gemini can also write material that not only fools humans but also slips past current AI-writing detectors. Similarly, current tools that claim to detect AI-generated images have been unable to catch images generated by Nano Banana Pro. While concern about AI images being mistaken for real ones is well documented, another issue is just as dangerous and receives far less attention. Coined the “Liar’s Dividend” by law professors Bobby Chesney and Danielle Citron, it is the phenomenon of authentic videos being wrongfully dismissed as fake.
The Liar’s Dividend isn’t just a hypothetical for law professors; it is already playing out in both political and legal arenas. One of the first examples came in 2023: according to the New York Times, early images of the Israel-Hamas war in the Gaza Strip were accused online of being AI-generated, even though AI specialists concluded that most of the images were authentic. Spanish foreign minister Alfonso Dastis likewise claimed that video and images of police violence in Catalonia were faked, according to Catalan News. Both this incident and the hypothetical that opened this article show how the Liar’s Dividend will be especially impactful in cases of police violence. NPR describes a recent lawsuit over the death of a man driving a Tesla, in which Elon Musk’s attorneys claimed that videos of Musk boasting about the safety of Tesla’s self-driving cars are AI deepfakes. The same article mentions that two defendants from the January 6th riots claimed that videos and images showing them at the Capitol were AI-generated. Each of these scenarios shows that the threat of generative AI is not simply the fabrication of evidence, but the systematic erosion of our ability to determine what is real and what is not.
AI deepfakes are growing more prevalent, becoming harder to tell apart from real images, and already carrying real consequences for photographic evidence. Shifts are underway to account for both possibilities: AI images being mistaken for real ones, and real images being dismissed as AI. According to the Brennan Center for Justice, camera technology is being adapted to add irremovable signatures to the metadata of real images, distinguishing them from AI-generated and edited images. However, getting tech companies, image originators, and media companies all on the same page on image authentication will be challenging. Another challenge is determining how to authenticate evidence in the courtroom. Federal Rule of Evidence 901 requires the proponent of evidence to show that it is what they claim it is before it can be admitted. This raises an unsettled question about the burden of proof for allegedly AI-generated images: will the party submitting the evidence have to prove its authenticity, or will the challenging party have to prove it is AI-generated? This has yet to be decided, but it is certain that the legal system will struggle to keep pace with rapidly developing AI models.
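To make the signature idea concrete, here is a minimal sketch of how a provenance signature could work, loosely in the spirit of C2PA-style content credentials. The key handling, manifest format, and function names here are illustrative assumptions for this article, not any camera maker's actual scheme; a real system would keep the signing key in certified, tamper-resistant hardware.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this key would live in tamper-resistant hardware and be
# certified by the manufacturer; generating it here is purely illustrative.
camera_key = Ed25519PrivateKey.generate()

def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Sign a digest of the image pixels plus their capture metadata."""
    manifest = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(image_bytes + manifest).digest()
    return camera_key.sign(digest)

def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
    """Return True only if image and metadata are unmodified since capture."""
    manifest = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(image_bytes + manifest).digest()
    try:
        camera_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Hypothetical capture: any later change to the pixels or the metadata
# invalidates the signature.
photo = b"...raw sensor data..."  # placeholder bytes
meta = {"device": "example-cam", "timestamp": "2026-03-20T12:00:00Z"}
sig = sign_capture(photo, meta)
print(verify_capture(photo, meta, sig))              # True
print(verify_capture(photo + b"tamper", meta, sig))  # False
```

The useful asymmetry is that anyone with the camera's public key can check the signature, but only the camera can produce it, so a valid signature shows the image hasn't changed since capture. A missing signature, however, proves nothing either way, which is precisely the gap the Liar's Dividend exploits.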
The Liar’s Dividend doesn’t harm all cases equally, as can already be seen in how juries are adapting to the challenge of authenticating evidence. According to Stanford Internet Observatory researcher Riana Pfefferkorn, juries and lawyers could start demanding more evidence of authenticity. This gives the upper hand to defendants who can afford experts to testify about the authenticity of evidence, and it compounds a deeper problem: the Liar’s Dividend falls hardest on people who are already socially marginalized. In cases involving domestic abuse, police violence, and human rights violations, a photograph or video is often the only evidence available. In such cases, the Liar’s Dividend can reinforce racial or social confirmation bias that paints victims as untrustworthy. Together, these factors show how the erosion of trust in visual evidence will disproportionately harm the most vulnerable members of society.
The rise of generative AI and the Liar’s Dividend have clear ramifications for both the legal and political spheres. As models like Gemini and ChatGPT become more sophisticated, the challenge of verifying evidence in court will only grow. Generative AI is creating unique problems for court systems, especially in cases involving marginalized individuals, and it raises serious doubts about the future of photographic evidence and about how juries and ordinary citizens alike will recognize what is real and what is not.