Woman defrauded of €800,000 in Brad Pitt AI deepfake scam

A French interior designer was left financially ruined after scammers used artificial intelligence to convince her she was in a relationship with Brad Pitt, a scheme that began with a skiing vacation post on Instagram.
The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Pitt’s mother, Jane Etta Pitt, claiming her son “needs a woman like you”.
Soon after, Anne began talking to someone she believed to be the Hollywood star himself, who sent her AI-generated photos and videos.
“I’m shocked that we’re here talking about Brad Pitt,” Anne told French media. “At first I thought it was fake, but I really didn’t understand what was happening to me.”
After months of daily contact, the relationship deepened, with the fake Pitt sending poems, declarations of love, and eventually a marriage proposal.
“There are so few men who write to you like that,” Anne said. “I loved the man I was talking to. He knew how to talk to women, and it was always so well put together.”
The scammer’s tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.
After establishing a rapport, the scammers began extracting money, starting with a modest request: a €9,000 customs fee for a supposed luxury gift. Things escalated when the imposter claimed to need money for cancer treatment while his accounts were frozen by his divorce from Angelina Jolie.
The scammers fabricated messages from a doctor about Pitt’s condition, prompting Anne to transfer €800,000 to a Turkish account.
“I paid the price for it, but I thought I might be saving a man’s life,” she said. When her daughter recognized the hoax, Anne refused to believe it: “You’ll see when he’s here in person, then you’ll apologize.”
The illusion finally broke in the summer of 2024, when news reports showed the real Brad Pitt with his partner Ines de Ramon.
Even then, the scammers tried to maintain control, sending fake news alerts dismissing the reports and claiming Pitt was actually dating an unnamed “very special someone.” In a final roll of the dice, one impersonated an FBI agent and offered to help her escape the scheme, swindling her out of another €5,000.
The consequences were devastating: Anne made three suicide attempts and was hospitalized for depression.
Anne spoke about her experience to French broadcaster TF1, but the interview was later deleted due to the severe cyberbullying she faced.
Having sold her furniture and now living with a friend, she has filed a criminal complaint and launched a crowdfunding campaign to cover her legal costs.
It’s a tragic situation, and Anne is far from alone. Her story parallels a massive surge in AI-driven fraud around the world.
Spanish authorities recently arrested five men who stole €325,000 from two women through similar Brad Pitt impersonations.
McAfee CTO Steve Grobman, commenting last year on AI fraud, explained why these scams succeed: “Cybercriminals are able to leverage generative AI to create fake voices and deepfakes, which in the past required more sophisticated means.”
It’s not just individuals being targeted; businesses are too. Last year in Hong Kong, scammers stole $25.6 million from a multinational company by using AI-generated impersonations of its executives on video calls.
Superintendent Baron Chan Shun-ching described how “the worker was lured into a video conference that was said to have numerous participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts.”
Can you spot an AI scam?
Most people like to think they could spot an AI scam, but research suggests otherwise.
Studies have found that humans struggle to distinguish real faces from AI-generated ones, and that synthesized voices fool roughly a quarter of listeners. That evidence dates from last year, and AI image, speech, and video synthesis has come a long way since then.
Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages, recently saw its valuation double to $2.1 billion on the back of Nvidia investment. Video and speech synthesis platforms like Synthesia and ElevenLabs are among the tools fraudsters use to launch deepfake scams.
Synthesia itself acknowledges this, and recently sought to demonstrate its commitment to preventing abuse through rigorous public red-team testing, showing how its compliance controls block the creation of non-consensual deepfakes and the use of avatars to promote suicide, gambling, and other harmful content.
Are these measures enough to deter abuse? For now, the jury is still out.
As companies and individuals grapple with convincingly real AI-generated media, the human cost (as Anne’s devastating experience illustrates) is likely to rise.