
Deepfakes: What to Know About AI Images and Videos

A deepfake is one of the most worrisome and fascinating applications of artificial intelligence. These often hyper-realistic digital forgeries can replace someone's face, voice or entire likeness in images, audio recordings and even videos. What was once a niche Hollywood visual effects (VFX) tool, used for feats like digitally de-aging Samuel L. Jackson in Captain Marvel, has become widely accessible through AI-driven manipulation. Today, almost anyone can generate a deepfake in minutes with simple, easily accessible AI tools like Synthesia, DeepFaceLab, Reface, HeyGen, Pickle and ElevenLabs (for audio). If you're wondering how advanced and realistic these deepfakes have become, brace yourself: A study of 2,000 people in the UK and the US this year showed that only 0.1% of participants could spot a deepfake video or image "even when specifically told to look for fakes," while 22% had never even heard of deepfakes before taking the survey.

Some people use deepfakes for harmless fun, but the technology has raised serious ethical concerns, from misinformation and disinformation to fraud and privacy violations. Unlike other AI image and video generators, such as Dall-E, Midjourney and Sora, deepfakes specifically modify or fabricate realistic media to imitate real people.

Let's dive deeper into how AI deepfakes work and what risks come with them.

Is it hard to create a deepfake?

Deepfakes are not a new technology. This "face-swap" tech, in which one person's face is replaced with another's to alter their identity, entered the mainstream in 2017, when a deepfake of President Barack Obama, produced and voiced by director Jordan Peele and BuzzFeed, warned viewers about tech advancements that can make people appear to say or do things they never actually did. Even then, forensics experts at the Pentagon said it was easier to generate a deepfake than to detect one. Imagine how hard it is to distinguish a deepfake from a real video nowadays.

Deepfakes rely on deep learning, a branch of AI that mimics how humans recognize patterns. These AI models analyze thousands of images and videos of a person, learning their facial expressions, movements and voice patterns. Then, using generative adversarial networks, the AI creates a realistic simulation of that person in new content. GANs are made up of two neural networks: one creates content (the generator), and the other tries to spot whether it's fake (the discriminator).

The number of images or frames needed to create a convincing deepfake depends on the quality and length of the final output. For a single deepfake image, as few as five to 10 clear photos of the person's face may be enough. For a video, hundreds or even thousands of frames are typically required, especially to capture different facial expressions, angles and lighting conditions. Gathering that much visual data might have been challenging in the past, but social media has made it easier than ever to access millions of public photos and videos of individuals. Considering that a 30-second video at 30 frames per second already gives you 900 frames to work with, deepfake creators have abundant training material at their fingertips.

Fake or real? How to spot deepfakes

The famous quote, often attributed to Edgar Allan Poe, goes, "Believe nothing of what you hear and only half that you see."
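Before turning to detection, it helps to see how simple the adversarial loop behind a GAN really is. The following is a toy, hypothetical sketch in pure Python, not a real deepfake pipeline: the "generator" has a single parameter (the mean of the numbers it outputs), the "discriminator" is a logistic classifier, and gradients are approximated by finite differences instead of a deep-learning framework. All names and values are illustrative.

```python
import math
import random

random.seed(0)

REAL_MEAN, NOISE_STD = 4.0, 0.5  # the "real data" the generator must imitate


def sample_real(n):
    return [random.gauss(REAL_MEAN, NOISE_STD) for _ in range(n)]


def generate(theta, zs):
    # Generator: shifts noise z by its single learnable parameter theta.
    return [theta + z for z in zs]


def discriminate(w, b, x):
    # Discriminator: logistic classifier, D(x) = estimated P(x is real).
    t = max(-30.0, min(30.0, w * x + b))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-t))


def disc_loss(w, b, real, fake):
    # Discriminator minimizes -[log D(real) + log(1 - D(fake))].
    eps = 1e-9
    loss = -sum(math.log(discriminate(w, b, x) + eps) for x in real)
    loss -= sum(math.log(1 - discriminate(w, b, x) + eps) for x in fake)
    return loss / (len(real) + len(fake))


def gen_loss(theta, w, b, zs):
    # Generator minimizes -log D(fake): it wants its output called "real".
    eps = 1e-9
    fake = generate(theta, zs)
    return -sum(math.log(discriminate(w, b, x) + eps) for x in fake) / len(zs)


theta, w, b = 0.0, 0.1, 0.0  # generator starts far from REAL_MEAN
lr, h, n = 0.1, 1e-4, 64     # learning rate, finite-difference step, batch size

for step in range(500):
    real = sample_real(n)
    zs = [random.gauss(0, NOISE_STD) for _ in range(n)]
    fake = generate(theta, zs)
    # Discriminator update via central finite differences (no autograd here).
    gw = (disc_loss(w + h, b, real, fake) - disc_loss(w - h, b, real, fake)) / (2 * h)
    gb = (disc_loss(w, b + h, real, fake) - disc_loss(w, b - h, real, fake)) / (2 * h)
    w, b = w - lr * gw, b - lr * gb
    # Generator update, reusing the same noise so the estimate is stable.
    gt = (gen_loss(theta + h, w, b, zs) - gen_loss(theta - h, w, b, zs)) / (2 * h)
    theta -= lr * gt

print(f"generator mean after training: {theta:.2f}")  # drifts toward REAL_MEAN
```

The tug-of-war is the point: as the discriminator gets better at flagging fakes, its feedback pushes the generator's output closer to the real distribution, which is exactly why mature deepfakes are so hard to tell apart from real footage.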
Well, with the advancement of deepfakes, we might not be able to believe anything we see anymore, either. Like other AI detection tools, deepfake video detection methods are still unreliable. The Rochester Institute of Technology started the DeFake Project to fight digital deception, building a forensic tool that helps journalists analyze and verify suspected deepfake videos. Matthew Wright, an endowed professor and chair of the department of cybersecurity at RIT, tells CNET it's "very hard these days" to discern a real video from a deepfake.

"There's no AI system that's really going to be able to understand the full context of what the video is, who's depicted in it and what the relationship is to the channel that posted it," Wright says. "Is there any kind of consent documentation? There's just so many things that would go into that kind of automated decision-making."

While tech-savvy people might be more vigilant about spotting deepfakes, regular folks need to be more cautious. I asked John Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project, about common ways to recognize a deepfake. He advised people to look at the mouth to see if the teeth are garbled. "Is the video more blurry around the mouth? Does it feel like they're talking about something very exciting but act monotonous? That's one of the giveaways of more lazy deepfakes." Sohrawardi says that with a bad face-swap deepfake, you might see the edges of the face or even an extra eyebrow. "Another giveaway," he continues, "is if there's an obstruction in front of the face, like a microphone or a hand, or if a person turns away, the face goes away for a bit if they face an extreme angle."

Kelly Wu, a computing and information sciences doctoral student at RIT who is also involved with the project, adds, "There's kind of like synchronization issues with the mouth. You can listen to the audio and see if the mouth actually moves along with the audio.
That's a very easy way to tell if there's lip-syncing happening behind it."

Deepfake controversies and cybercrime

The rise of deepfake technology has sparked major concerns, particularly around misinformation and political manipulation. While a deepfake of Donald Trump and Elon Musk dancing together might seem funny (or not), AI-generated videos have also been used to create fake political speeches, hoaxes and propaganda.

Another misuse of deepfakes is fraud and scams. "Right now, audio deepfake scams are the most popular because audio is the best form, with the least artifacts and problems with it," Wright says. "So it's the easiest to fool people." Deepfake phone scams have fooled victims into believing they're talking to a real family member who has been kidnapped. This is known as the family emergency scam, and it has reportedly caused affected families to lose tens of thousands of dollars.

One of the biggest ethical issues with deepfakes is their use in nonconsensual videos, especially deepfake pornography. Celebrities and public figures have been frequent targets, but private individuals, including minors, have also been affected by AI-generated fake content appearing online. One of those children was Francesca Mani, whom Time named one of The 100 Most Influential People in AI 2024 for her anti-deepfake activism. Francesca and her mother, Dorota Mani, have been campaigning for changes in policies around AI. "Too often, the focus is on how to protect yourself, but we need to shift the conversation to the responsibility of those who create and distribute harmful content," Dorota Mani tells CNET when asked what advice she would give to girls and women facing these threats. "The burden shouldn't only be on the victims to prevent this from happening.
Stand up for your rights, challenge the narrative and don't accept being silenced or blamed for someone else's violation of your autonomy."

To help combat the spread of deepfake pornography and other nonconsensual intimate imagery, the Take It Down Act passed the Senate unanimously last month and is now awaiting approval in the House. The bill would require platforms to remove reported content within 48 hours. "That's a game-changer, giving power back to victims and ensuring that they have a clear and swift path to justice," Dorota Mani said. While it's a step forward, critics say the damage often happens before takedown, as harmful content can quickly be downloaded and shared beyond control.

L-R: Francesca Mani, Elliston Berry, US First Lady Melania Trump, Sen. Ted Cruz (R-TX) and Rep. Maria Salazar (R-FL). Photo shared with CNET by Dorota Mani

How tech companies are responding to deepfakes

Tech platforms like TikTok and YouTube have implemented AI detection tools that can flag or even ban manipulated media, and they require AI-generated content to be labeled as such to prevent deception.

Meta adjusted the AI labeling approach on its social platforms (Facebook, Instagram and Threads) in September 2024. The "AI info" label is now displayed prominently only for fully AI-generated content, including cases identified by industry signals or self-disclosure. For content merely edited or modified with AI, Meta has relocated the "AI info" label to the post's menu. Meta AI, though, automatically labels its own AI-generated content as such.

Google announced it would remove explicit nonconsensual deepfakes from Search upon user request. Microsoft last year asked lawmakers to make deepfakes illegal.

Not everything about deepfakes is gloomy

There are some positive use cases for deepfakes. While they will most likely be used in entertainment, like aging or de-aging actors in movies and TV series, they also have the potential for immersive education and trauma therapy.
For example, you could interact with digital versions of lost loved ones or confront past abusers in a controlled setting. Deepfakes could also benefit the gaming and fashion industries by enabling the creation of personalized avatars using one's own face.

One of the coolest deepfake applications is at the Salvador Dalí Museum in Florida, where visitors can interact with a lifelike AI version of Dalí, watch him talk about his art and even take a selfie with the artist.

As deepfakes advance, verifying sources and staying skeptical is the best defense against malicious AI deception.
