Deepfakes: new technologies of deception

The word deepfake is a compound of deep learning and fake. Deep learning is an artificial intelligence technique that uses multi-layered machine learning algorithms to extract increasingly complex features from raw input data. Importantly, it can learn from unstructured data such as facial images; for example, it can collect data about your body movements.

This data can then be processed to create deepfake videos using what is called a Generative Adversarial Network (GAN), another specialized machine learning system. A GAN pits two neural networks against each other: a generator learns the characteristics of a training dataset (for example, photos of faces) and produces new data based on them (that is, new “photos”), while a discriminator tries to tell the generated samples apart from the real ones.
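To make the generator-versus-discriminator idea more concrete, here is a minimal, purely illustrative sketch of a GAN training loop on toy one-dimensional data. Real deepfake GANs work on images and are vastly larger; the network sizes, learning rates and step counts below are arbitrary choices, not anyone's actual pipeline.

```python
# Minimal GAN sketch: a generator learns to mimic a toy "real" data
# distribution while a discriminator learns to tell real from fake.
# Illustrative only - real deepfake GANs operate on images and are far larger.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # stand-in for "training photos"
noise = lambda n: torch.randn(n, 8)                    # random input to the generator

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    real, fake = real_data(64), generator(noise(64)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator label its output as real.
    g_loss = loss_fn(discriminator(generator(noise(64))), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift toward the "real" mean of 2.0.
print("mean of generated samples:", generator(noise(1000)).mean().item())
```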

Because the generator's output is constantly checked against the original training set, the fake images become more and more believable. This is why the threat posed by deepfakes is constantly growing. GANs can also produce fake data other than photos and videos; for example, deepfake technologies can be used to imitate voices.

Examples of deepfakes

High-quality deepfakes featuring celebrities are not difficult to find. One example is a fake video of Barack Obama made by actor Jordan Peele: he recorded a short speech imitating Obama's voice, which was then combined with footage of the politician actually speaking. Peele then showed both halves of the video side by side and encouraged viewers to be critical of everything they see.

A video of Facebook CEO Mark Zuckerberg allegedly talking about how Facebook “controls the future” with stolen user data appeared on Instagram. The original footage came from his address on the “Russian trace” scandal in the US presidential election; a fragment of only 21 seconds was enough to create the fake. However, Zuckerberg's voice was imitated far less convincingly than Obama's in Jordan Peele's parody, and the fake was easy to spot.

However, even lower-quality fakes can provoke a strong reaction. A video of Nancy Pelosi, Speaker of the US House of Representatives, appearing “drunk” received millions of views on YouTube, yet it was a crude fake made by slowing down the real footage to make her speech sound slurred. In addition, many famous women have found themselves cast as porn stars: their faces were inserted into pornographic videos and images.

Fraud and blackmail using deepfakes

Deepfake videos have been used repeatedly for political purposes and as a means of revenge, but these technologies are now increasingly being used for blackmail and large-scale fraud.

Fraudsters scammed the CEO of a British energy company out of 220,000 euros using a deepfake imitation of the voice of the head of its parent company, who allegedly requested an urgent transfer of that amount. The imitation was so convincing that the executive did not double-check the request, even though the money was sent not to the head office but to a third-party account. He only became suspicious when his “boss” asked for another transfer, by which time the money had already vanished without a trace.

A scam recently uncovered in France did not, however, involve deepfake technology: a fraudster named Gilbert Chikli impersonated French Foreign Minister Jean-Yves Le Drian, recreating with great accuracy not only his appearance but also the decor of his office. Posing as the minister, he asked wealthy individuals and company executives to provide funds for the ransom of French citizens held hostage by terrorists in Syria, and in this way managed to swindle several million euros. The case is now before the courts.

The authors of deepfakes can blackmail executives of large companies by threatening to publish a fake video that would damage their reputation unless they pay up. Fraudsters could also infiltrate your network by spoofing a call from the IT director, for example, and tricking employees into handing over passwords and access privileges, leaving your confidential data entirely at the hackers' disposal.

Fake porn videos have already been used to blackmail female reporters, as happened in India with Rana Ayyub, who was working to expose abuses of power. Deepfake production technologies are becoming cheaper, so we can predict an increase in their use for blackmail and fraud.

How to protect yourself from deepfakes?

Attempts to tackle the problem of deepfakes are already being made at the legislative level. For example, two laws restricting the use of deepfakes were passed in California last year: bill AB-602 prohibits the use of human image synthesis technologies to produce pornographic content without the consent of those depicted, and AB-730 prohibits doctored images of candidates for public office within 60 days before an election.

But will these measures be enough? Fortunately, security companies are constantly developing better recognition algorithms that analyze video and spot the small distortions introduced when a fake is created. For example, current deepfake generators model a 2D face and then distort it to fit the 3D perspective of the video, so mismatches between the direction of the nose and the orientation of the face make such fakes easy to recognize.
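As a rough illustration of that idea, the sketch below compares the direction of the nose with the orientation of the rest of the face using 3D landmark coordinates. The landmark values and the angle test are hypothetical stand-ins for what a real detector would obtain from a facial landmark estimator; this is not any vendor's actual algorithm.

```python
# Simplified head-pose consistency check: compare the direction implied by
# the nose with the orientation of the rest of the face. The 3D landmark
# coordinates below are hypothetical; in practice they would come from a
# facial landmark estimator.
import numpy as np

def face_nose_angle(left_eye, right_eye, chin, nose_base, nose_tip):
    """Angle (degrees) between the face's axis and the nose direction."""
    eye_mid = (left_eye + right_eye) / 2
    # Face plane spanned by the eye line and the line from eye midpoint to chin.
    face_normal = np.cross(chin - eye_mid, right_eye - left_eye)
    face_normal = face_normal / np.linalg.norm(face_normal)

    nose_dir = nose_tip - nose_base
    nose_dir = nose_dir / np.linalg.norm(nose_dir)

    # Ignore which way the normal points; only the deviation matters.
    cos_angle = abs(np.dot(face_normal, nose_dir))
    return np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))

# Hypothetical landmarks for a geometrically consistent (real) face.
angle = face_nose_angle(
    left_eye=np.array([-3.0, 1.0, 0.0]), right_eye=np.array([3.0, 1.0, 0.0]),
    chin=np.array([0.0, -5.0, 0.0]),
    nose_base=np.array([0.0, 0.0, 0.0]), nose_tip=np.array([0.0, -0.5, 1.5]),
)
print(f"face/nose mismatch: {angle:.1f} degrees")  # large values hint at a spliced face
```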

So far, the technologies for creating deepfakes are not yet mature, and signs of a fake are often visible to the naked eye. Pay attention to the following telltale signs:

  • uneven movement;
  • changes in lighting in adjacent frames;
  • differences in skin tones;
  • the person in the video blinks strangely or doesn’t blink at all (see the sketch after this list);
  • poor synchronization of lip movements with speech;
  • digital artifacts in the image.
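As an illustration of the blinking check mentioned above, here is a small sketch based on the widely used "eye aspect ratio" heuristic from the facial-landmark literature: the ratio of the eye's height to its width drops sharply when the eye closes. The landmark ordering, threshold and per-frame values below are illustrative assumptions, not a production detector.

```python
# Sketch of a blink check using the "eye aspect ratio" (EAR) heuristic.
# Landmark coordinates, thresholds and the sample series are illustrative.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered as in the common 68-point scheme."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count transitions from an open eye to a closed one."""
    blinks, was_open = 0, True
    for ear in ear_per_frame:
        if was_open and ear < closed_threshold:
            blinks += 1
            was_open = False
        elif ear >= closed_threshold:
            was_open = True
    return blinks

# Hypothetical EAR values computed per frame for a short clip.
ears = [0.31, 0.30, 0.29, 0.12, 0.28, 0.30] * 5
print("blinks detected:", count_blinks(ears))
# A multi-minute video with zero or near-zero blinks is a warning sign.
```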

However, as the technology improves, your eyes will be less and less able to spot the deception - but a good security solution will be much harder to deceive.

Unique anti-fake technologies

Some emerging technologies already help video creators protect the authenticity of their content. One approach uses a special encryption algorithm to embed hashes into the video stream at set intervals; if the video is altered, the hashes change too. Digital signatures for video can also be created using AI and blockchain, much like protecting documents with watermarks; the difficulty with video is that the hashes must remain valid after the stream has been compressed by different codecs.
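A heavily simplified sketch of the chunk-hashing idea is shown below: it signs groups of raw frames with an HMAC so that editing any frame invalidates the tag for its chunk. The key, chunk size and frame format are invented for illustration, and the sketch deliberately ignores the hard part mentioned above, namely surviving re-encoding by different codecs.

```python
# Simplified sketch of signing a video stream in chunks: an HMAC is computed
# over every group of frames, so editing any frame breaks the tag for its
# chunk. Real systems must also survive re-encoding, which this does not.
import hmac, hashlib

SIGNING_KEY = b"creator-held secret key"   # illustrative key, not a real scheme
CHUNK = 30                                 # frames per signed chunk

def sign_stream(frames):
    """frames: list of raw frame bytes. Returns one tag per chunk."""
    tags = []
    for i in range(0, len(frames), CHUNK):
        chunk = b"".join(frames[i:i + CHUNK])
        tags.append(hmac.new(SIGNING_KEY, chunk, hashlib.sha256).hexdigest())
    return tags

def verify_stream(frames, tags):
    return tags == sign_stream(frames)

frames = [bytes([n % 256]) * 1024 for n in range(90)]  # stand-in for decoded frames
tags = sign_stream(frames)

frames[45] = b"\x00" * 1024                # simulate a tampered frame
print("stream intact:", verify_stream(frames, tags))   # False
```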

Another way to combat deepfakes is a program that inserts special digital artifacts into video content to mask the groups of pixels that facial recognition software relies on. This slows down deepfake algorithms and forces them to produce lower-quality fakes, which in turn reduces the likelihood of a deepfake being used successfully.
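The sketch below illustrates the general principle with a basic FGSM-style adversarial perturbation computed against a tiny, untrained stand-in network. Real anti-deepfake tools target the specific face-analysis models used by deepfake pipelines, so treat this only as a schematic of the idea; the model, image and epsilon are all placeholders.

```python
# Sketch of the general idea: compute a small, nearly invisible perturbation
# that pushes a face-analysis model away from its current prediction
# (FGSM-style). The tiny untrained CNN below is a stand-in for whatever
# model a deepfake pipeline relies on; epsilon is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                      # stand-in "face analysis" network
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64)            # stand-in for a face crop
image.requires_grad_(True)

logits = model(image)
target = logits.argmax(dim=1)               # the model's current "belief"
loss = F.cross_entropy(logits, target)
loss.backward()

epsilon = 2 / 255                           # keep the change visually negligible
protected = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("max pixel change:", (protected - image).abs().max().item())
```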

The best protection is following safety rules

Technology is not the only way to protect yourself from deepfake videos. Even basic safety rules are very effective in combating them.

For example, integrating automatic checks into all processes related to the transfer of funds would prevent many fraudulent activities, including those based on deepfakes (a minimal sketch of such a check follows the list below). You can also:

  • Educate your employees and family members about how deepfakes work and the potential risks involved.
  • Learn how to recognize a deepfake and tell others about it.
  • Improve your media literacy and use reliable sources of information.
  • Maintain basic security and follow the “trust but verify” rule. Taking a critical approach to voicemail and video calls won't guarantee you won't get scammed, but it will help you avoid many pitfalls.
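To make the automatic transfer checks mentioned before this list concrete, here is a minimal sketch of an approval rule that refuses large or unusual payments until they are confirmed through an independent channel (for example, a call-back to a number from the company directory). The threshold, field names and account numbers are all invented for illustration.

```python
# Sketch of an "automatic check" for money transfers: a voice or video
# request alone is never enough - above a threshold, the transfer must be
# confirmed through an independent channel. All names and limits are illustrative.
from dataclasses import dataclass

CALLBACK_REQUIRED_ABOVE = 10_000  # euros

@dataclass
class TransferRequest:
    requested_by: str
    beneficiary_iban: str
    amount_eur: float
    callback_confirmed: bool = False   # set only after the call-back succeeds

def approve_transfer(req, known_ibans):
    """Reject transfers that skip verification, whatever the 'boss' sounds like."""
    if req.beneficiary_iban not in known_ibans:
        return False                   # unknown account: always escalate
    if req.amount_eur > CALLBACK_REQUIRED_ABOVE and not req.callback_confirmed:
        return False                   # large amount without independent confirmation
    return True

request = TransferRequest("voice call from 'parent company CEO'",
                          "DE00 0000 0000 0000 0000 00", 220_000)
print(approve_transfer(request, known_ibans={"GB00 HEAD OFFICE"}))  # False
```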

Remember that if hackers begin to actively use deepfakes to penetrate home and corporate networks, compliance with basic cybersecurity rules will be the most important factor in minimizing risks:

  • Regular backups will protect your data from ransomware and help restore damaged files.
  • Use different strong passwords for different accounts: if one network or service is compromised, the others will stay safe. For example, if someone manages to steal your Facebook password, they at least won't automatically gain access to all your other accounts.

How will deepfakes evolve?

Deepfakes are evolving at an alarming rate. Just two years ago, fakes were easy to spot by their poor rendering of movement; in addition, people in such videos almost never blinked. But the technology does not stand still, and the latest generation of deepfakes is of noticeably higher quality.

According to rough estimates, more than 15,000 deepfake videos are now circulating on the Internet. Some of them are humorous, but others were created to manipulate public opinion. Today it takes a couple of days at most to make a new deepfake, so there may soon be far more of them.