Your AI clone could target your family, but there’s a simple defense

The FBI now recommends choosing a secret password to keep AI voice clones from tricking people.

Man looks through letter flap

Credit: GSO Images via Getty Images

On Tuesday, the US Federal Bureau of Investigation advised Americans to share a secret word or phrase with their family members to protect against AI-powered voice-cloning scams, as criminals increasingly use voice synthesis to impersonate loved ones in crisis.

"Create a secret word or phrase with your family to verify their identity," wrote the FBI in an official public service announcement (I-120324-PSA).

For example, you could tell your parents, children, or spouse that if something seems suspicious, they should ask you for a secret word or phrase to verify your identity, such as "The sparrow flies at midnight," "Greg is the king of burritos," or simply "flibbertigibbet." (As fun as these sound, your actual password should be secret and not any of these.)
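The advice itself is a verbal protocol between humans, but for the technically curious, the same challenge-response pattern is easy to express in code. Below is a minimal, purely illustrative Python sketch of shared-secret verification; the secret value and function name are our own placeholders, not anything the FBI prescribes.

import hmac

# Hypothetical shared secret, agreed on in person and never posted online.
FAMILY_SECRET = "flibbertigibbet"

def caller_is_verified(spoken_response: str) -> bool:
    # hmac.compare_digest performs a constant-time comparison, the standard
    # way to check shared secrets in software (overkill for a phone call,
    # but it mirrors the idea faithfully).
    return hmac.compare_digest(
        spoken_response.strip().lower().encode(),
        FAMILY_SECRET.encode(),
    )

# A caller who knows the secret passes; an AI clone guessing does not.
print(caller_is_verified("Flibbertigibbet"))       # True
print(caller_is_verified("the eagle has landed"))  # False

The human version works the same way: the response either matches the pre-shared secret or it doesn't.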

The bureau also recommends that people listen carefully to the tone and word choices in unexpected calls claiming to be from family members. The FBI reports that criminals use AI-generated audio to create convincing voice clips of relatives pleading for emergency financial help or ransom payments.

The recommendation comes as part of a broader service announcement detailing how criminal groups now use generative AI models in their fraud operations, which we've reported on in the past. AI technology now makes creating realistic voice clones trivial.

It's worth noting that these types of fraudulent clones typically rely on having samples of your speaking voice publicly available (such as in a podcast or recorded interview), so if you aren't a semi-public figure, it's far less likely your voice will be cloned.

The warning extends beyond voice scams. The FBI announcement details how criminals also use AI models to generate convincing profile photos, identification documents, and chatbots embedded in fraudulent websites. These tools automate the creation of deceptive content while removing previously telltale signs of humans behind the scams, such as poor grammar or obviously fake photos.

Much like we warned in 2022 in a piece about life-wrecking deepfakes based on publicly available photos, the FBI also recommends limiting public access to recordings of your voice and images online. The bureau suggests making social media accounts private and restricting followers to known contacts.

Origin of the secret word in AI

To our knowledge, the first appearance of the secret word in the context of modern AI voice synthesis and deepfakes traces back to AI developer Asara Near, who announced the idea on Twitter on March 27, 2023.

"(I)t may be useful to establish a 'proof of humanity' word, which your trusted contacts can ask you for," Near wrote. "(I)n case they get a strange and urgent voice or video call from you this can help assure them they are actually speaking with you, and not a deepfaked/deepcloned version of you."

Since then, the idea has spread widely. In February, Rachel Metz covered the topic for Bloomberg, writing, "The idea is becoming common in the AI research community, one founder told me. It’s also simple and free."

Of course, passwords have been used since ancient times to verify someone's identity, and it seems likely some science fiction story has dealt with the issue of passwords and robot clones in the past. It's interesting that, in this new age of high-tech AI identity fraud, this ancient invention—a special word or phrase known to few—can still prove so useful.
