The misuse of artificial intelligence and the cost of deepfakes in a post-truth era.
By Tiana Cline
In the beginning, it was a lot of fun. Anyone could create a deepfake video to merge their face with that of a celebrity. At first, these videos were shared across social media as parodies and everyone laughed. What could be better than becoming the star of your favorite streaming series? But the technology has evolved, becoming so sophisticated that it is no longer easy to tell what is real and what is not. From misinformation to identity theft, deepfake porn and copyright infringement, the misuse of artificial intelligence (AI) is far from entertaining – it’s an epidemic.
“As technology’s power increases, so the ability to cause harm increases exponentially,” says Professor Johan Steyn, a Research Fellow at the School of Data Science and Computational Thinking at Stellenbosch University in South Africa and Adjunct Professor at the School of Business at Woxsen University in India.
“From both a legal and a government ethics point of view, it’s a massive problem and I don’t know how it is going to be regulated. How do you present evidence to a court of law when you cannot confirm whether a video or voice is authentic? There’s almost no way of proving that a recording is genuine.”
There are currently over 80 different deepfake software apps, many of them free, putting the advances of AI in anyone’s hands. Deepfakes use artificial neural networks – systems trained to recognize patterns in data and then reconstruct them. The technology is mind-blowing.
It gained Miles Fisher – a Tom Cruise lookalike and AI entrepreneur – nearly four million followers on social media. His TikTok account, ‘deeptomcruise’, has unofficially made him the poster child of the deepfake movement. The person behind the videos, Chris Ume, is a Belgian visual effects artist and the co-founder of an AI-generated content company called Metaphysic.ai.
“There’s a big philosophical, ethical and societal lens to deepfakes,” adds Steyn. “We’re living in a world where we cannot trust almost anything. There are limited ways in verifying the authenticity of something and the technology that should have brought us together and empowered us with knowledge is doing the exact opposite.”
One of the most interesting deepfake use cases is happening in the advertising sector. Where actors were once paid undisclosed sums to appear in TV advertisements, some agencies are resorting to deepfakes instead – with or without the actor’s consent. An advert for a Russian telecoms company starring Bruce Willis went viral, and it was reported that the actor had sold his image rights. It turned out that the Willis in the advert was a deepfake: the agency behind it (called Deepcake) had simply mapped a digital version of his appearance onto another actor. If readily available images and videos of Willis are enough to do this, it raises the question of how anyone can protect their own identity.
“In the future of AI, people will not be quantum physicists or machine learning engineers but philosophers and ethicists. We’re now in a post-truth era, thanks to the power of technology. You can no longer trust anything or anyone,” warns Steyn. “If you’re a critical thinker, fake news should be relatively easy to pick up. Deepfakes are more serious. What happens when a bank, for example, accepts voice as proof of identity?”
The link between deepfakes and cybercrime is inevitable. Throughout Africa, phishing remains one of the continent’s biggest threats, and by using deepfake technology, cybercriminals will be able to increase the effectiveness of their attacks exponentially. “Any new technology or development will inevitably be exploited by criminals to target the vulnerable or unwary,” says Carl Wearn, Mimecast’s Head of Threat Intelligence Analysis and Future Ops. “The quicker they can utilize a new technology, before it is properly understood or awareness of it is raised, the more likely they are to monetize some form of it.”
With deepfakes, cybercrime can be socially engineered because threat actors (posing as friends, family, or colleagues) exploit the inherent trust that comes with interpersonal relationships. “Deepfakes are, at their core, seeking to exploit key relationships to compel action,” adds Wearn. “As the technology matures – aided by machine learning and AI – this will almost certainly become indistinguishable from the real thing for ordinary humans in interactions that rely on virtual or telephone communication.”
Deepfake scams are increasing. They’re not only being used in cyberbullying and smear campaigns to harass and silence individuals; synthetic identity fraud can cost businesses and governments millions. “People are, for example, edited into a humiliating video that is then spread on social media – this highlights the risk for public figures and activists,” says Vladislav Tushkanov, a Lead Data Scientist at Kaspersky. Scammers recently used a deepfake interview with Elon Musk to promote a cryptocurrency scam called BitVex. The CEO of a British energy firm was tricked out of $243,000 by a deepfaked voice of the head of his parent company requesting an emergency transfer of funds. And in the United Arab Emirates, bank robbers used the deepfaked voice of a company executive on an unsuspecting bank manager – it was reported that the manager believed he recognized the voice and authorized a transfer of $35 million.
“Deepfakes can significantly undermine trust in audio and video content,” adds Tushkanov. “Deepfakes, and mis- and disinformation, are a rising concern around the globe. It is particularly worrying that deepfakes can be used to incite violence and stoke civil unrest. This places high importance on public education and raising awareness about this technology, as well as furthering digital literacy.”
How to spot a deepfake
“It all starts with having good cybersecurity procedures and practices in place – not only in the form of antivirus software, but also in the form of developed IT skills and cybersecurity awareness among employees,” says Tushkanov. “The following are key characteristics of deepfake videos to be aware of and look out for, to avoid becoming a victim: jerky movement; shifts in lighting from one frame to the next; shifts in skin tone; strange blinking or no blinking at all; lips poorly synched with speech; digital artifacts in the image; and video that has been intentionally encoded down in quality, with poor lighting.”