We’re led to believe that America was once Mister Rogers’ Neighborhood, a place of cardigans and kindness where everyone got along just swell. Then it all went wrong. MSNBC hosts talk of a “crisis in authority” while New York Times columnists blame corrupt Republicans for a “lost faith in liberal governance.” Right-leaning commentators point to mass migration as the great trust killer. Illegal aliens, we’re told, have a “fragmenting effect on shared cultural norms” and are “importing distrust.”
No doubt these arguments contain an element of truth: America is a less trusting society than it was a few decades ago. But soon such arguments are going to appear as quaint as Mister Rogers’ model Pennsylvania town. Forget shady politicians or Venezuelan carjackers: the US is hurtling toward an almighty Trust-Bust. And the reason is artificial intelligence, which is making it far easier to abuse not just technology but our confidence in our own perceptions.
A record $16.6 billion was stolen from Americans last year, up by a third on the year before, according to the FBI. Fraudsters are getting far better at persuading their victims: in 2023, just over a quarter of targets lost money; last year, 38 percent handed over cash. And thanks to deepfake technology, these scams are becoming more convincing still.
Law-enforcement officials believe the fraudsters scrape audio samples from TikTok and Instagram videos; a few seconds of audio can be all that’s needed. The FBI says reports have been filed nationwide, with family members getting hurried phone calls about car crashes, kidnappings and bail bonds. The agency has advised families to agree on a safeword, so that if you get a panicked call from your child or grandkid, you can check whether it really is them. Earlier this year, the FBI warned that malicious actors were already impersonating senior US officials using AI-generated voice memos to target other staff. “If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the agency said. There is a growing risk of catastrophic leaks or – worse – a coordinated series of fake orders. What happens when state actors are able to temporarily hijack the chain of command?
Nor is the corporate world immune. Last year, an engineering company in Hong Kong lost $25 million when an employee was tricked by AI fraudsters. The scammers had used video conferencing along with face and voice cloning to convince the worker he was talking to senior management. The employee had transferred millions of dollars into offshore bank accounts by the time the scam was discovered.
Insurers are already increasing premiums for companies that don’t keep up with the latest encryption software, according to Deloitte, though the effectiveness of such software is questionable. Earlier this year, MIT reviewed existing deepfake protections and found that some 70 percent of detectors failed when deployed on real-world examples of voice cloning. Some banks are dropping voice-ID security features after finding that 40 percent of audio clones could bypass them.
These kinds of frauds can now be run by a kid in Nigeria with a SIM card and a few lines of code. Most companies have spent years trying to promote their chief executives on podcasts and YouTube interviews, meaning there are hours of high-quality source material for scammers to use.
Already, fraud is being sold as a service, with software such as WormGPT and FraudGPT on offer for as little as $70 a month. These are essentially chatbots like the ones OpenAI offers, but with the content restrictions removed to make them useful for criminal activity – and trained on thousands of examples of malware code and fraud playbooks.
The FBI says that $2.7 million has been reported stolen through virtual kidnapping scams alone – a fraction of the total lost to fraud last year – although it believes that figure is substantially under-reported (people tend to feel embarrassed when they’ve been taken advantage of). Deepfake fraud is projected to cost $40 billion globally by 2027, the equivalent of the GDP of Vermont. One analyst believes deepfake fraud is increasing at a rate of 3,000 percent a year.
The monetary cost of all this is serious. But what’s even more disturbing is what deepfakes are going to do to our psychology. Those who fall prey to such confidence tricks won’t be as trusting again. All Americans are going to learn what it’s like to live in central New York City – but worse. Not only is every stranger who gets in contact a possible threat; so are supposed family members. To survive, people will need to adopt a mindset of pre-emptive mistrust. The risk is that it really is your daughter screaming for help as she’s forced to dig her own grave in the Sierra Nevada. And you won’t believe her.
Researchers at the University of Amsterdam recently explored what communication scholars call truth-default theory. The argument is that because human beings are social animals, we are primed to trust those around us. From an evolutionary perspective, this is a useful adaptation: default trust makes collaboration possible. It’s what makes civilization function. Only when an interlocutor appears dishonest do we tend to abandon that trust. Imagine a poker game: the assumption is that your opponents are playing their hands straight unless you spot a tell, a hint that they might be bluffing.
Most of us think we’re able to identify deception. But in the era of the Trust-Bust, none of the normal tells will apply. We’re going to hear voices identical to those of the people we know and see the faces of those we trust – but the assumption that what we’re seeing is real will be gone. Being on guard all the time carries its own psychological cost: evidence shows that people who constantly doubt those around them have much higher levels of the stress hormone cortisol, as well as impaired working memory.
Fraud and attacks on the federal government won’t be the only problems. Once people realize how easy deepfaking is, the incentive to use it becomes exponentially larger. Spurned lovers will be able to create incriminating evidence that a former partner is cheating in their current relationship. Employees will invent malicious voice-notes from bosses they don’t like. Family members will be able to forge video messages of dead relatives to manipulate one another. With a few clicks of a mouse, malign intent can become evidence of whatever you want.
One of the consequences may be a retreat from social media and a return to face-to-face relationships. Already there is a taste for analog among the affluent: typewriters, vinyl records, film photography. The less you trust the virtual world, the greater the desire for a pre-digital existence.
Expect social media to become more gated: perhaps new networks will emerge on which you can only connect with people you have met in person, so you can verify their existence. Calls from withheld numbers or unknown accounts purporting to be an acquaintance will be routinely ignored. Or perhaps we’ll have to start asking each other inane questions at the start of each call – “What did you order the last time we went to a restaurant?” “Where did we go on vacation in 2014?” – anything to confirm that the person we’re talking to is real.
There are those trying to stop this trustless world from becoming real. Google and OpenAI are testing so-called “proof-of-life” watermarks, embedding cryptographic pixels in images and videos.
Nikon has already rolled out software on its new line of cameras that does much the same thing. If an image is manipulated, the hidden marks change too, alerting the recipient’s device to a potential fake. The European Union is also proposing massive fines for companies that fail to combat synthetic content. So far, however, the AI forgers are moving faster than those hoping to counter them.
When people spoke of a loss of trust in the past, it tended to concern the public. Citizens stopped trusting Congress, the police, the church, or the strange man at the door. But deepfakes are coming for something more intimate and therefore precious – our private lives.
It’s the people who are kind, trusting, and eager to help who are going to suffer the most. Mister Rogers and his neighbors are just waiting to get scammed.
This article was originally published in The Spectator’s July 2025 World edition.