There is no denying it: artificial intelligence is changing the world as we know it. But are those changes for the better? In recent years, we have witnessed the rise of the deepfake. Nifty algorithms allow users to doctor photos and videos, replacing one person’s face with that of someone else.
Deepfake technology has been put to a broad range of uses, many of them positive. Star Wars fans have modified footage from 2018’s Solo movie so that lead actor Alden Ehrenreich resembles a young Harrison Ford. Only last Christmas, British TV station Channel 4 broadcast a digitally altered version of the Queen’s speech. Instead of her usual message of goodwill, Her Majesty was seen making snide jokes about the year’s events before jumping onto her desk for a festive boogie.
But, even by the standards of modern technology, deepfake AI has a pernicious underbelly. Jealous users are now able to rustle up fake revenge porn with just a few clicks. Politicians are deploying deepfakes to add to the rise in fake news and misinformation. Cybercriminals have utilized the algorithms to steal hundreds of thousands of dollars from business executives. These are ten of the weirdest and most appalling uses of this manipulative new technology.
10 Jealous Mother Forges Explicit Images of Her Daughter’s Cheerleading Rivals
Parents can be a nightmare when you are a teenager, but few are quite as bad as Raffaela Spone. The Pennsylvania mother created false incriminating photos of her daughter’s cheerleading rivals in a malicious attempt to get them kicked off the team. Using images from social media, she was able to fabricate footage of Victory Vipers members “naked, drinking and smoking.”
Spone was caught after several of the girls’ families went to the police. They told officers that they and their daughters had received threatening text messages from an unknown number. The police were able to trace the phone number of the mysterious abuser. This led them to a phone number sales website. Through this site, police uncovered an IP address that revealed Spone as the cyber-abuser. They do not believe her daughter was aware of her alleged actions.
Spone has since been charged with harassment and sending abusive messages to team members, their parents, and the gym owners.
9 Deepfakes In The Delhi Election
Politicians are known for their duplicitous actions. In an era of fake news and post-truth, one politician turned to deepfakes to boost their election campaign.
In February 2020, with people across Delhi preparing to go to the polls, two videos started circulating on WhatsApp. They appeared to show a senior politician criticizing the Delhi government. BJP president Manoj Tiwari was seen attacking his rival Arvind Kejriwal in two languages. The first video had him speaking English. In the second he was talking in the popular Hindi dialect Haryanvi. But only one of them was real.
Tiwari had teamed up with a communications company to create politically motivated deepfakes. He gave a short speech in English and had tech experts manipulate the footage to make it seem as if he was also speaking Haryanvi. The forged clip was seen by around 15 million people. Understandably, many of them felt they had been deceived.
In the aftermath of the incident, experts warned that we may be veering towards a situation where people are unable to trust anything they see online. The spread of fake news in India is already so severe that it has been labeled a “public health crisis.”
8 Tom Cruise TikTok Trip
In 2021, a deepfake video was uploaded to TikTok showing Tom Cruise stumbling over while walking around his lavish house. More clips appeared of the Hollywood A-lister playing golf, sucking a lollipop, and performing a magic trick.
None of them were real. Fortunately, there was no con involved. The account responsible for the clips, DeepTomCruise, was upfront about their lack of authenticity. But the footage looked so realistic that some users refused to believe it was false. A few speculated that it was the real Tom Cruise claiming to be a fake as part of a social media bluff.
At the time of writing, the DeepTomCruise account has far more followers than Cruise’s actual account, although the actor has never posted a video. The real Tom Cruise has around 35,000 fans on the app, whereas the fake account boasts nearly a million.
7 Cybercriminals Steal $240,000 from CEO
In movies, heists are typically daring adventures full of dangerous escapades and high-stakes stunts. But now, thanks to deepfake technology, cybercriminals can steal hundreds of thousands of dollars over the phone.
In March 2019, fraudsters called the CEO of a British energy company posing as his German boss. They used algorithms to mimic the boss’ mild German accent over the phone. The British CEO had no reason to suspect foul play. The criminals asked him to send over $240,000 to a Hungarian supplier, claiming it was an urgent transfer that needed completing within the hour. It was only when they phoned again, requesting a second payment, that he became suspicious.
Experts believe this may be the first time criminals have used artificial intelligence in a heist; any previous incidents were either unreported or unrecognized. But technology experts have been aware of the potential for AI attacks for some time. Scams like this pose a new threat to corporations, and cybersecurity firms are developing new software to fend off these deepfake attacks.
6 Avatarify, The Deepfake Zoom Filter
Video conferencing apps like Zoom and Skype have become incredibly popular during the pandemic. But, for many of us, the endless Zoom calls can be a bit of a drag. To alleviate some of that boredom, programmers have created a deepfake filter for video calls. Avatarify allows users to take on someone else’s identity. The nifty software can superimpose another face on top of your own. Ali Aliev, the creator of Avatarify, pretended to be Elon Musk on a work Zoom call, to the bemusement of his colleagues.
Although impressive, Avatarify is not particularly convincing. Crucially, it does not alter the user’s voice. But, as time goes on, this technology can only get better. A handful of companies are making steady progress developing audio deepfakes. Who knows, one day soon it could be possible to convincingly pass yourself off as a VIP on a Zoom call. And, when you think about it, that could have some pretty dire consequences for the celebrities being impersonated.
5 Video of Nancy Pelosi Drunk
In May 2019, a video of Nancy Pelosi went viral. The clip featured a digitally altered version of the House Speaker appearing to drunkenly garble her words at an official event. She is seen accusing President Trump of a “cover up” at a Center for American Progress event. Her speech has been noticeably slowed down.
Rudy Giuliani, President Trump’s lawyer and a former mayor of New York, was among the people who shared the video on social media. The clip went on to receive millions of views. It is unclear where the deepfake came from, but it appeared amidst a flurry of people attempting to smear Pelosi as mentally unstable or seriously alcohol dependent. Only days earlier, the president had shared a different video that appeared to show the speaker stumbling over her words during a speech.
4 Mumbai Student Edits Teenage Girl’s Face into Porn Video
In 2019, Mumbai police arrested a 20-year-old college student for blackmailing a teenage girl with fake pornography. The student had edited her face into an obscene video. He later contacted her on Instagram using an anonymous account and threatened to share it online. The teenager was pressured into sending the video to another social media user. Reports claim he did the same to two other young women.
The girl reported the incident at Lokmanya Tilak Marg Police Station. Police successfully tracked down and arrested the perpetrator.
Just one more reason that social media is poison.
3 British Activists Attacked By Non-Existent Journalist
Two Palestinian rights activists were shocked to learn that they had been singled out as terrorist sympathizers. The accusation was made by a British journalist called Oliver Taylor. According to his online profiles, Taylor is a Jewish-raised university student with a deep interest in politics, particularly Israel and anti-Semitism. He has been published by the Jerusalem Post and the Times of Israel.
In an article for the Jewish newspaper The Algemeiner, he accused married couple Mazen Masri and Ryvka Barnard of being “known terrorist sympathizers.” Masri is a lecturer at City Law School, London. In 2018, he helped sue the Israeli tech firm NSO for its role in a Mexican phone-hacking scandal. He and his wife were startled by the out-of-the-blue allegations and wondered why they had been targeted by a student.
Here’s the thing: Oliver Taylor does not exist. His university has no record of him ever studying there. Forensic analysis revealed that Taylor’s profile picture is a deepfake. No one has any idea who the writer is, and the fake images are untraceable. The Algemeiner claim Taylor contacted them via email and never asked for payment. They, along with the Times of Israel, later removed his work. But several of his articles remain online, including the pieces he wrote for The Jerusalem Post and Arutz Sheva.
2 Cybercriminals Attempt To Steal Money from Tech Company
In June 2020, cybercriminals used audio deepfakes in an attempt to trick a tech company employee into sending them money. The employee received an odd voicemail from someone who sounded similar to the CEO. The voice on the phone claimed he required “immediate assistance to finalize an urgent business deal.”
Fortunately, the employee was not buying it. They could tell there was something slightly off about the message. It later transpired that it was an audio deepfake. Analysts discovered that the voicemail had been created using AI software to dupe the receiver into sending money. The voice did not quite match the CEO’s usual speech patterns, and there was no background noise whatsoever, both signs of a false message. The identity of the person or people behind the attack is still unknown.
On the other hand, deepfake audio can also have a fun side.
1 Non-consensual Pornography
Non-consensual pornography is by far the biggest use of deepfake technology. The trend first emerged on Reddit a few years ago. Users would take existing adult videos and replace the performer’s face with a celebrity’s, almost always female. People were shocked to learn that these were created—not by state-of-the-art film studios—but by using machine learning tools that are freely available online. Some of the first people to have their faces algorithmically inserted into porn include Gal Gadot, Scarlett Johansson, Taylor Swift, and Maisie Williams.
Since then, deepfake porn has evolved, with programmers bringing in 3D digital avatars. Now, porn creators are generating lifelike models, programming them to perform sexual acts, and then replacing their faces with someone like Margot Robbie or Emilia Clarke. This means anyone can control a 3D avatar, forcing it into whatever sex positions they like, then map a real person’s face onto it afterward.
In 2019, deepfake programmers released the app DeepNude. DeepNude uses neural networks to undress photos of women, creating convincing naked images. Simply upload an image of a clothed woman and the AI reduces her to her birthday suit. The app did not survive for long. Just days after its launch, DeepNude’s anonymous creator “Alberto” took it offline.
As you can imagine, the growing trend for deepfake pornography has thrown up all kinds of ethical issues. Danielle Citron, who was asked to testify to Congress about the threat posed by AI-generated porn, called the technology an “invasion of sexual privacy.”
Attorney Carrie Goldberg, an expert in revenge porn, is equally critical. “Anybody who says the internet isn’t real life or virtual reality is fake is just constructing excuses for doing bad shit,” she told reporters. “There’s no question that building a bot to rape in VR delivers a different injury to the depicted person than actually going and attacking her. However, two dissimilar things can be wrong and unethical at once.”