What is the Liar’s Dividend?

Well, what is it? Here’s a definition:

The benefit received by those spreading fake information, as a consequence of an environment containing so much fake information that it is unclear what is real and what is fake.

The first and immediate problem with deep fakes, or pictures generated with AI, isn't the fact that they exist. The mere idea that they could exist is enough.

Fake images are problematic in and of themselves. But they are also problematic because it is now all too easy to deny that real images are, well, real.

Amid highly emotional discussions about Gaza, many happening on social media platforms that have struggled to shield users against graphic and inaccurate content, trust continues to fray. And now, experts say that malicious agents are taking advantage of A.I.’s availability to dismiss authentic content as fake — a concept known as the liar’s dividend.


That picture of a murdered (insert religion and nationality of choice here so as to not offend your sensibilities) child?

Real if it is a convenience for our worldview, fake if it isn’t. And it is very, very easy to convince yourself of the truth value of either of these statements, because who can tell these days?

And so fake images being fake isn’t the only problem.

Real images can also be dismissed as being fake. They are being dismissed as being fake.

The greatest trick AI ever pulled, it turns out, was in convincing the world that it might exist.

Here’s the original definition of the liar’s dividend:

Hence what we call the liar’s dividend: this dividend flows, perversely, in proportion to success in educating the public about the dangers of deep fakes.


Note the utterly delightful paradox: the better we get at convincing people of the problem of deep fakes, the easier it becomes to convince them that parts of reality itself are fake.

If you want to make your Monday even cheerier, do read the whole paper.