There was a time when content moderation powered by Artificial Intelligence was going to save us from disinformation. Nowadays, Artificial Intelligence is supposed to flood us with it. Both claims seem equally flawed.

Among the various reasons everyone is freaking out about Large Language Models,1 such as ChatGPT, is the claim that they might be used to speed up the production and diffusion of “disinformation”. The United Nations is calling for the regulation of AI-generated speech, the White House mandates the detection and labelling of “synthetic content”, and the European Commission is asking Big Tech platforms to tackle AI disinformation. However, the available evidence undercuts many of these fears.
In an excellent piece, Felix Simon, Sacha Altay and Hugo Mercier argue that the impact of generative AI on misinformation is “overblown”. In particular, they counter the argument that AI can be used to spread cheap disinformation at scale, pointing out that disinformation has been cheap for many years (e.g., fake websites, Photoshopped pictures) and that the average internet user consumes very little of it (most people get their news from mainstream media). Put simply: there is no demand for this content.
Current events seem to support their view: in the Israel-Gaza conflict, the diffusion of AI-generated content has been marginal, mostly used by activists to solicit support, while the abundance of real photographs has played the main role. Nevertheless, the incentives driving the claims of an AI information apocalypse still stand:
Newspapers want to convince readers that social media [AI] disinformation is so widespread that they should rely only on “fact-checked” newspaper content.
Social media providers profit from the narrative that the [AI] content on their platforms is so persuasive it can change the course of elections. Imagine how effective those platforms must be at advertising your toothpaste!
Politicians find it tempting to blame [AI] disinformation when they lose elections (or say something embarrassing): such claims are very difficult to falsify!
However, the available evidence shows that “fake news” constitutes only a minimal share of average media consumption, that its persuasiveness depends on many contextual variables, and that elections have likely not been significantly influenced by it.
Additionally, we should recall the various problems of moderating speech that were already investigated in the context of pre-AI content moderation regulation:
Sanction of false positives (and how to define disinformation): how do we distinguish AI products from human-made creations? How do we distinguish AI disinformation from AI art, memes, or satire? Is an AI-generated image of Donald Trump playing Minecraft disinformation? There are no easy boundaries, even when cases are assessed by human moderators. Forcing social media platforms to check for AI content might lead to overly aggressive rules, which would dissuade people from posting AI-produced content for fear of content moderation.2
Resource constraints: how many people should firms employ to flag content? A lack of resources might cause inconsistencies and mistakes; moreover, pressure to regulate content in one part of the world may dry up content moderation resources elsewhere. Currently, only the big social media platforms have the resources to recognise such content, which might further solidify their position.
Weaponisation of [AI] content moderation: authoritarian countries may use [AI] content moderation laws to censor content they find politically inconvenient, and public figures might use their legal teams to take down unwanted content.
Because of these risks, we should reverse the burden of proof by requiring strong evidence from those claiming an AI information apocalypse. Let’s be clear: LLMs MAY constitute an unprecedented risk, but at the moment the evidence seems anecdotal at best. For these reasons, any attempt to regulate AI disinformation should be gradual and focused on empowering readers rather than introducing “upload filters” or removing content. Incentivising the labelling of AI content could be a good middle ground.
Finally, AI disinformation should not be conflated with the unauthorised use of someone’s image, impersonation, or fake reviews. These are all serious and potentially harmful issues; however, in such cases, targeted regulation or collective bargaining seems more effective.
Regulating AI is crucial. My argument is that, rather than fighting ill-defined disinformation, policymakers should prioritise concrete short-term risks, such as model bias and the application of AI in sensitive fields. Let’s not give in to panic, and let’s build the AI policy agenda on the available evidence. There is already a lot of work to do.
1. A large language model (LLM) is a type of computational model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by training on huge amounts of data.
2. In legal literature, this is often called a “chilling effect”: a phenomenon where individuals refrain from engaging in expression for fear of running afoul of regulation.
Very good vademecum on the discussion, but I think it is quite incorrect to say “there is no demand for AI-generated fake news”. The demand for (dis)information that confirms one’s beliefs has always been very high, and no one openly ‘wants’ AI-generated fake news. They want ‘information’, that’s it, and, as the article says, we should focus on improving our capacity to recognise fake news, whether it is AI-generated or not.