Anyone can be deceived by deepfakes, especially as AI technology grows more sophisticated.
With minimal regulations in place, social media has become a minefield of fake, AI-enhanced images and audio. As the November election draws closer, the prevalence of deepfakes seems to be at an all-time high.
No, Taylor Swift did not release a poster supporting GOP presidential nominee Donald Trump while dressed as Uncle Sam. (However, this didn’t stop the former president from sharing it on his Truth Social platform.)
That “campaign video” — reposted by tech mogul Elon Musk — where Democratic presidential nominee, Vice President Kamala Harris, discusses President Joe Biden’s “senility”? Also fake.
In a more unsettling incident earlier this year, robocalls using AI to replicate Biden’s voice were used to discourage voters from participating in the New Hampshire primary.
Trump himself has been a target of AI-manipulated imagery — remember those fake photos of him in police custody before his first criminal indictment? He’s also fueled distrust by claiming that images showing crowds at Harris’ campaign events were artificially enhanced.
A staggering 78% of American adults anticipate that the misuse of deepfakes and AI systems will influence the outcome of the 2024 presidential election, according to a survey conducted by Elon University Poll and the Imagining the Digital Future Center at Elon University in North Carolina.
The same survey revealed that 45% of American adults lack confidence in their ability to spot fake photos.
“It was very striking that this uncertainty was across the board among demographic groups,” said Lee Rainie, the director of the Imagining the Digital Future Center, in an interview with HuffPost.
“Young people and their elders, well educated and less well educated, men and women, Republicans and Democrats, all expressed some level of this self-doubt. I take that to mean that Americans themselves worry they can be victimized,” Rainie added.
While baby boomers often get blamed for falling for some of the more outlandish examples of AI — like images of skiing dogs or famine-related content meant to tug at heartstrings — the truth is, anyone can be fooled by deepfakes, especially as AI technology becomes more advanced.
“Oftentimes, people get duped by AI content that reinforces their own interests or preferences,” said Julia Feerrar, an associate professor and head of digital literacy initiatives at Virginia Tech’s University Libraries.
“I’ve almost been fooled multiple times by fake social media posts about reboots of my favorite TV shows,” she admitted. “So much misleading content is created to appeal to our emotions, whether that’s shock, anger, or excitement. And it’s such a human thing to want to act on those feelings by sharing.”
Since anyone can fall for these tricks, it’s likely that at some point, one of your friends or family members will share fake content. When that happens, should you speak up, or just let it slide? Here’s what experts suggest.
Consider the potential harm to someone’s reputation
If someone has shared or endorsed a fake image, video, or audio clip, they might appreciate knowing the truth so they can remove it before it damages their credibility. Rainie believes it’s kind and empathetic to discreetly inform them of the mistake.
“You know that old Ad Council public service message against drunk driving with the tagline ‘friends don’t let friends drive drunk’? In the age of deepfakes that can spread rapidly on social media, the modern equivalent could be ‘friends don’t let friends look like fools,’” Rainie said.
“I think we’re still at a point where most people aren’t actively looking out for AI-generated content and probably shared [it] with no ill intent,” Feerrar added.
Think about how it might affect your relationship
Feerrar generally recommends pointing out fake images, noting that a gentle nudge from a friend can raise awareness.
“However, the stakes of the content itself and the nature of your relationship with the person who shared it are important factors in deciding whether to call out misinformation publicly, message someone privately, or simply keep scrolling,” she explained.
Address it privately if possible
Most people prefer to avoid confrontation. If you do decide to speak up, public shaming is unlikely to help and might even worsen the situation, said Janet Coats, managing director of the Consortium on Trust in Media and Technology at the University of Florida.
She suggests reaching out privately, ideally through a face-to-face conversation or phone call rather than a text or direct message.
“Our research has found that one-on-one conversation gives people space to listen and reason, rather than immediately becoming defensive,” Coats told HuffPost. “The best chance we have for improving information quality is when we actually talk to each other.”
Remember that you could be fooled too, so stay vigilant
While AI-generated images still often exhibit strange, hyperreal qualities and inconsistencies, such as garbled text, awkward transitions between objects, or malformed hands, these flaws are becoming less obvious as AI tools improve. It’s possible that even you could be deceived by such images.
If you encounter content that triggers strong emotions or seems suspicious, take a moment to analyze it critically, Feerrar advised. Don’t rely solely on appearances to determine if an image is real.
“Use your search engine to describe what you’re seeing and add the phrase ‘fact check’ to your search,” she suggested. “The search results should help you assess the accuracy of the content.”
Feerrar also recommended asking yourself: Where did this content originate? Is it from a reputable news source or a trusted platform? Can you find other sources reporting on the same thing?
“Luckily, those questions can often be answered with a quick search in a new browser window,” she said.
As tech algorithms continue to promote AI content, this kind of reflexive, DIY fact-checking will need to become a part of our evolving digital literacy. “It’s going to take all of us to keep figuring out what it means to be a person in our digital world,” Feerrar concluded.