
AI makes lies look like the truth, and they are hard to detect

In a new study in PNAS Nexus, researchers show that AI-paraphrased false information (dubbed “AIpasta”) can make false claims appear more credible and more widely held. Unlike traditional copypasta propaganda, AIpasta raises perceptions of social consensus while flying under the radar of existing AI-detection tools.

Falsehoods gain subtle power when repetition is paired with AI

Repeated messaging is a well-known psychological strategy: the more often we hear a claim, the more likely we are to believe it. Misinformation campaigns have long exploited this through “copypasta”, the verbatim repetition of the same message across social media. In this new study, the researchers explore what happens when generative AI is used to create many slightly different versions of the same message, each with the same meaning but new wording.

The team used ChatGPT to paraphrase messages from two well-known conspiracy campaigns, #stopthesteal and #plandemic, producing AIpasta versions that kept the original meaning but changed the wording. They then ran a preregistered experiment with about 1,200 U.S. participants, testing how people reacted to copypasta, AIpasta, or control messages.
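To give a concrete picture of the paraphrasing step, here is a minimal sketch of how one message could be turned into several rewordings with a chat-model API. It is an illustration only: the OpenAI Python client, the model name, the prompt, and the innocuous example claim are all assumptions, not the study's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def paraphrase(message: str, n_variants: int = 5) -> list[str]:
    """Request n_variants rewordings that keep the meaning but change the phrasing."""
    variants = []
    for _ in range(n_variants):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name, not the one used in the study
            messages=[
                {"role": "system",
                 "content": "Paraphrase the user's message: keep the meaning, change the wording."},
                {"role": "user", "content": message},
            ],
            temperature=1.0,  # higher temperature encourages varied wording across calls
        )
        variants.append(response.choices[0].message.content.strip())
    return variants

# Example with an innocuous claim
for variant in paraphrase("The city will close the main bridge for repairs next month."):
    print(variant)
```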

Key findings of the experiment

  • AIpasta increased perceptions of social consensus: people were more likely to think the false narratives were widely believed.
  • AIpasta did not reduce sharing intentions, unlike copypasta, which made people less likely to share.
  • Among Republican participants only, AIpasta exposure increased belief in the specific false claims.
  • Current AI detectors failed to identify AIpasta as machine-generated, unlike traditional copypasta.

Why AIpasta is hard to detect and may be more dangerous

The study confirms that AIpasta has higher lexical diversity (it uses a wider range of wording) while preserving the same meaning. This makes it harder for both algorithms and humans to recognize the repetition as coordinated manipulation. Participants exposed to AIpasta were more likely to judge the information as coming from independent sources, a well-known driver of perceived truth and consensus.
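To make the lexical-diversity point concrete, here is a small illustrative sketch using a simple type-token ratio over a toy example (not the paper's actual metric or data): verbatim copies reuse the same words, while paraphrased variants spread the same meaning over a wider vocabulary.

```python
import re

def type_token_ratio(messages: list[str]) -> float:
    """Unique words divided by total words, pooled across all messages."""
    tokens = [w for m in messages for w in re.findall(r"[a-z']+", m.lower())]
    return len(set(tokens)) / len(tokens)

# Four verbatim copies of the same innocuous sentence (copypasta-style repetition)
copypasta = ["The city will close the main bridge for repairs next month."] * 4

# Four rewordings of the same sentence (AIpasta-style repetition)
aipasta = [
    "The city will close the main bridge for repairs next month.",
    "Next month the main bridge is being shut so the city can repair it.",
    "Repair work means the city's main bridge will be closed next month.",
    "Starting next month, the main bridge will stay closed while the city fixes it.",
]

print(f"copypasta type-token ratio: {type_token_ratio(copypasta):.2f}")  # low: identical wording
print(f"AIpasta type-token ratio:   {type_token_ratio(aipasta):.2f}")    # higher: varied wording
```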

“AIpasta is easy to produce and demonstrates features that have strategic advantages,” the authors write. One finding not highlighted in the press release: copypasta reduced participants’ intentions to share the posts, while AIpasta had no such effect, so it may spread more widely over time.

The impact on the future of online misinformation

As social platforms struggle to mitigate misleading content, the study highlights an unsettling possibility: generative AI may make false information both more persuasive and harder to detect. The authors warn that current tools are not adequate for spotting AI-paraphrased text, and that future campaigns could exploit this blind spot to extend their reach.

For now, the public remains vulnerable to this subtle form of AI-driven manipulation.


Journal: PNAS Nexus
DOI: 10.1093/pnasnexus/pgaf207
Published: July 22, 2025

