Synthetic media files have been in circulation since the dawn of the internet. The 21st century kicked off with Photoshopping and audio alteration, and now, deepfakes have taken hold. Leveraging artificial intelligence to create fake events and videos, the latest technology is indistinguishable from reality, at least to the untrained eye.
Deepfakes and the metaverse: what’s the relation?
Deepfakes are already a major cybersecurity concern for individuals and businesses alike: individual deepfake scams have caused losses ranging from $243,000 to $35 million.
With the advent of the metaverse, Europol expects these numbers to climb further.
Mark Zuckerberg’s metaverse is currently in the beta phase. This virtual environment is set to gain an influx of users, and Gartner estimates a quarter of all internet users will spend at least 1 hour daily in the metaverse by 2030.
With so many people interacting with friends, colleagues, businesses, doctors, educators, and others online, and with the environment hosting vast amounts of biometric data, the metaverse is a prime target for hackers.
Interactions in the metaverse will occur through individual avatars, created as hyperrealistic versions of their owners. Deepfakes and AI can be used to mimic these avatars and create fakes that are identical to the originals. Hackers need only capture your identity markers once to produce a convincing deepfake.
Metaverse avatars
Companies like Metaphysic and Humans.ai are working on letting users create hyperrealistic avatars for the metaverse from their own voice, face, expressions, and more. Remember that viral Tom Cruise deepfake that was all over TikTok in 2021? That was Metaphysic.
Once Metaphysic is done testing the features, it plans to let users create personalized avatars as NFTs. And of course, you’ll need to input biometric data to do so.
While it’s a bit of fun for users who want to run around as mini-versions of themselves, replace Harrison Ford in Indiana Jones with their own face and voice, or dress up like Rihanna, the security risks are serious. If Metaphysic and its competitors fail to safeguard identities, deepfakes could wreak havoc, and users could lose control of their photos and biometric data on a massive scale.
Criminal possibilities attached to deepfakes
Deepfakes can be used again and again to attack the reputations of companies and individuals through malicious content: a fake message, a fake audio clip, or a video showing the target doing something illegal, to name a few.
This affects the criminal justice system as well. Fabricated evidence can implicate innocent people in wrongdoing, while on the flip side, actual wrongdoers can claim that genuine evidence has been altered. Hijacked user identities can also be used to stage embarrassing behavior in a victim’s name.
Deepfake celebrity avatars in the metaverse can be used to fake endorsements and spread misinformation. Brands can be impersonated with fake retail storefronts and customers can be scammed.
The most alarming risk, though, is child abuse. The metaverse will need to strictly verify and monitor users’ ages through identity verification; otherwise, criminals could pose as children to make contact with kids. Abusers may also create child sexual abuse material (CSAM).
Worryingly, Meta’s best deepfake detection software catches only 65% of fakes, though the company appears to be working on stronger security measures and AI.
As for avatar-making apps, most are trying to adopt protocols that protect user data from thieves, but not all are competent or even interested enough to do so. Meta will have to play a vital role in vetting which companies it allows to build avatars for its virtual environment. Until the metaverse launches and there is sufficient research and development on the security front, deepfakes remain a looming threat.