Deepfake: The Lay of the Land

Authors: Pavankumar Mulgund, Ph.D., CISA, CSM, CSPO, PMI-ACP, SAFe, and Samina Lohawala, AWS CCP, PSM I, Splunk 7.x Fundamentals Part 1
Date Published: 24 February 2021

Deepfake is the modern equivalent of Photoshop. It uses an artificial intelligence (AI) method called deep learning to create fake images and videos.1 This technique can superimpose the facial image of one person (the target) onto a video of another person (the source) to create a new video in which the target appears to be saying or doing things the source actually said or did.2 Deepfake is used mainly for the creation of pornography—either for revenge against the target or to produce fake celebrity pornography. It is used for other purposes as well, including influencing political campaigns, scamming corporate figures and making fraudulent money transfers.3 Deepfake methods have surged in popularity among cyberattackers because deepfakes are easy to create from available data sets and videos. Several applications provide step-by-step guidance so that even unskilled users can create deepfakes with just a few photographs and videos.4 Various countermeasures can be instituted against these attacks.

Deepfake Creation Process

The primary elements in the creation of deepfakes are two deep learning networks that together form an autoencoder: an encoder and a decoder. The encoder performs dimensionality reduction and image compression,5, 6 and the decoder maps the latent features back to reconstruct the face. The following steps are required (a minimal code sketch follows the list):

  1. Collect source data—These data consist of individual frames taken from a video or some other source. The encoder extracts latent features from these facial data, which are subsequently used as the foundation for reconstructing the image with a decoder.
  2. Extract latent features—The same encoder network is used to extract latent features from both faces, but the extracted features are fed into a different decoder network. Because the encoder is shared, it learns the features the two faces have in common based on their anatomical commonalities, which is what makes reconstruction with a different decoder possible.
  3. Extract facial data—The training process refines the encoder and decoder networks using the extracted facial data. The video is then generated from the individual frames produced by the deep learning algorithm.
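
To make the encoder and decoder concrete, the following is a minimal, illustrative PyTorch sketch, not the code of any actual deepfake tool; the layer sizes, 64 x 64 input resolution and latent dimension are assumptions chosen for brevity:

    import torch
    import torch.nn as nn

    LATENT_DIM = 256  # assumed size of the compressed face representation

    class Encoder(nn.Module):
        """Compresses a 64x64 RGB face into a latent feature vector."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, LATENT_DIM),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs a 64x64 RGB face from the latent vector."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(LATENT_DIM, 64 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
                nn.Sigmoid(),
            )
        def forward(self, z):
            x = self.fc(z).view(-1, 64, 16, 16)
            return self.net(x)

    # Training minimizes reconstruction error between input and output faces.
    encoder, decoder = Encoder(), Decoder()
    faces = torch.rand(8, 3, 64, 64)   # stand-in for real face frames
    recon = decoder(encoder(faces))
    loss = nn.functional.mse_loss(recon, faces)
    loss.backward()                    # gradients refine both networks

The latent bottleneck forces the encoder to keep only the information needed to redraw the face, which is what makes the extracted features reusable across different decoders.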

This process is illustrated in figure 1. Two networks use the same encoder but different decoders during training (top); an image of face A is then encoded with the common encoder and decoded with decoder B to create a deepfake (bottom). Numerous face shots of the two individuals are run through the encoder, which finds the similarities between the two faces, reduces them to common features and compresses the images. The decoders are then taught to recover faces from the compressed images: decoder A is trained to retrieve the first face and decoder B the second. To achieve a face swap, the encoded face A is deliberately fed into decoder B. For example, to swap the face of person 1 with that of person 2, the compressed image of person 1 is fed into the decoder trained on person 2. The decoder then reconstructs person 2’s face with the orientation and expression of person 1. For the video to appear genuine, this must be done for every frame.7
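
The swap in figure 1 can be sketched directly. The following toy example (again PyTorch, with tiny fully connected stand-ins for the real convolutional networks; all sizes and training details are assumptions) trains two decoders against one shared encoder and then routes face A's latent code into decoder B:

    import torch
    import torch.nn as nn

    # Tiny fully connected stand-ins; real deepfake models use deeper
    # convolutional networks. All sizes here are illustrative.
    D = 64 * 64 * 3  # flattened 64x64 RGB face
    encoder   = nn.Sequential(nn.Linear(D, 256), nn.ReLU())     # shared
    decoder_a = nn.Sequential(nn.Linear(256, D), nn.Sigmoid())  # face A
    decoder_b = nn.Sequential(nn.Linear(256, D), nn.Sigmoid())  # face B

    opt = torch.optim.Adam(
        [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
    )
    faces_a = torch.rand(8, D)  # stand-ins for frames of the two people
    faces_b = torch.rand(8, D)

    # Training: each decoder learns to rebuild "its" face from the shared code.
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
               nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
        loss.backward()
        opt.step()

    # The swap: encode a frame of face A, then decode with decoder B, which
    # redraws face B with face A's pose and expression.
    fake_frame = decoder_b(encoder(faces_a[:1]))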

Another popular method of creating deepfake content uses generative adversarial networks (GANs). A GAN comprises two deep neural networks: a generator network and a discriminator network. Initially, random noise is fed into the generator network, which produces a synthetic image. The synthetic images are then mixed into a stream of real pictures and furnished as input to the discriminator network, which must distinguish real images from generated ones. The initial images created do not resemble faces, but after this process is repeated many times, with each network receiving feedback on its performance, both the generator and the discriminator improve, resulting in the creation of highly realistic faces.8
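
A minimal sketch of this adversarial loop follows, with assumed layer sizes and random tensors standing in for a real face data set:

    import torch
    import torch.nn as nn

    Z, D = 100, 64 * 64 * 3  # noise size and flattened image size (assumed)
    generator = nn.Sequential(nn.Linear(Z, 512), nn.ReLU(),
                              nn.Linear(512, D), nn.Tanh())
    discriminator = nn.Sequential(nn.Linear(D, 512), nn.LeakyReLU(0.2),
                                  nn.Linear(512, 1))  # one real-vs-fake logit

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    real = torch.rand(16, D)  # stand-in for a batch of real face images

    for _ in range(100):
        # Discriminator step: label real images 1 and generated images 0.
        fake = generator(torch.randn(16, Z)).detach()
        opt_d.zero_grad()
        loss_d = bce(discriminator(real), torch.ones(16, 1)) + \
                 bce(discriminator(fake), torch.zeros(16, 1))
        loss_d.backward()
        opt_d.step()

        # Generator step: adjust weights so the discriminator calls fakes real.
        opt_g.zero_grad()
        loss_g = bce(discriminator(generator(torch.randn(16, Z))),
                     torch.ones(16, 1))
        loss_g.backward()
        opt_g.step()

The feedback described above is exactly these two alternating updates: the discriminator's mistakes drive the generator toward more convincing faces.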

Societal Implications of Deepfakes

Deepfake technology has several applications related to the production of high-quality audio and video and can be useful for speech therapy, remote teaching and real-time language translation.9 There is also the potential for considerable misuse, with possibly catastrophic consequences for victims.

The creation of nonconsensual pornographic content accounts for 96 percent of all deepfake videos.10 Many women, especially celebrities, have been victimized. In the political domain, deepfake videos have been used to malign politicians and other leaders by appearing to show them inciting hate speech or acts of violence. Ironically, in some cases, politicians have falsely claimed to have been framed by deepfakes in an attempt to deny something they actually did. A related implication is the spread of fake news intended to mislead people. For instance, a deepfake video of Belgium’s prime minister linking the COVID-19 pandemic to a more profound ecologic crisis demonstrates the ability to generate fake news with potentially dangerous consequences.11 This is especially problematic in developing countries, where limited digital literacy can make the masses more susceptible to false information.

Deepfake videos also pose a challenge to facial recognition systems, representing a new attack vector for cybersecurity threats. Readily accessible applications and open-source software make it easy to swap faces, limiting the ability to distinguish genuine pictures from fake ones. Deepfake technologies are rapidly becoming more sophisticated, and existing facial recognition systems are having trouble keeping pace.12 Ultimately, deepfakes threaten the overall credibility of digital content.

Deepfake Detection

There are several strategies to distinguish authentic videos from deepfakes.


Basic Approaches
Deepfakes are very effective at spreading misinformation, but they leave traces and visual cues that can be used to detect them. Deepfake algorithms supplant the face of one individual with that of another. Although the algorithms can produce very convincing images, they have trouble controlling subtleties such as the blinking of eyes.13 Typically, people blink every two to 10 seconds, with each blink lasting one-tenth to four-tenths of a second. Training a machine learning model to alter content requires a substantial volume of accessible online video; however, images of individuals blinking or with their eyes shut are usually not available.14 Therefore, deepfake videos usually contain fewer eye blinks than normal, or none at all, and this can be used to identify a deepfake.15 Similarly, eye color can be used to distinguish a fake video from an authentic one.16
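
A simple blink-based screen can be sketched as follows. This assumes per-frame eye landmarks are already available from a separate detector (e.g., a 68-point facial landmark model); the threshold and helper names are illustrative assumptions, not a published detector:

    import numpy as np

    def eye_aspect_ratio(eye):
        """eye: six (x, y) landmarks around one eye; a low ratio means the
        eyelid is closed."""
        eye = np.asarray(eye, dtype=float)
        vertical = (np.linalg.norm(eye[1] - eye[5]) +
                    np.linalg.norm(eye[2] - eye[4]))
        horizontal = np.linalg.norm(eye[0] - eye[3])
        return vertical / (2.0 * horizontal)

    def blinks_per_minute(ear_series, fps, closed_threshold=0.2):
        """Count a blink each time the eye aspect ratio dips below threshold."""
        blinks, closed = 0, False
        for ear in ear_series:
            if ear < closed_threshold and not closed:
                blinks += 1
                closed = True
            elif ear >= closed_threshold:
                closed = False
        minutes = len(ear_series) / fps / 60.0
        return blinks / minutes if minutes > 0 else 0.0

    # People typically blink roughly 6 to 30 times per minute (every 2 to 10
    # seconds); a far lower rate over a long clip is one signal of a deepfake.
    # The ratio series would come from a per-frame landmark detector, which is
    # assumed to exist here; constant values stand in for it.
    rate = blinks_per_minute([0.3] * 3600, fps=30)
    print("suspiciously low blink rate" if rate < 2 else "blink rate normal")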

Another detection strategy is to look for missing subtleties around the teeth. Typically, after face detection, the region around the facial landmarks is cropped and resized to 256 pixels in height. The teeth area is then examined section by section after converting the image (figure 2).17 K-means clustering is used to separate the pixels into dark and bright clusters; the bright clusters correspond to the original teeth in an authentic image, whereas deepfakes often lack these distinct highlights. In this way, a deepfake can be distinguished.18

Figure 2 source: Matern, F.; C. Riess; M. Stamminger; “Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations,” 2019, https://faui1-files.cs.fau.de/public/publications/mmsec/2019-Matern-EVA.pdf. Reproduced with permission.
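
A hedged sketch of the clustering step described above, assuming the mouth region has already been cropped by a landmark detector; the score is illustrative, not the exact procedure of the cited work:

    import numpy as np
    from sklearn.cluster import KMeans

    def teeth_highlight_gap(mouth_gray):
        """Cluster mouth-region pixel intensities into one dark and one bright
        group. In an authentic image the bright cluster corresponds to real
        teeth; a weak or missing bright cluster suggests the blurry,
        detail-free teeth common in deepfakes. mouth_gray: 2-D array, 0-255."""
        pixels = mouth_gray.reshape(-1, 1).astype(float)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
        dark, bright = sorted(km.cluster_centers_.ravel())
        return bright - dark  # a large gap indicates distinct bright teeth

    # The mouth crop would come from a facial-landmark detector applied after
    # the face is resized (256 pixels high, as in the text); random pixels
    # stand in for it here.
    mouth = np.random.randint(0, 256, size=(40, 80))
    print(teeth_highlight_gap(mouth))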

Digital or Image Forensics
Another detection method relies on digital or image forensics to examine parameters such as pixel correlation, continuity of the image and lighting.19 Digital forensics can also use the chain of custody of an image to determine authenticity. Each phase of image processing, such as acquisition, compressed storage and postprocessing, leaves a digital trace or fingerprint behind on the image. Investigators can examine these traces to determine whether the image was altered.20 Similarly, watermarks are applied every time the digital content of an image is changed. These traces are noticeable and reveal which parts were changed, alerting recipients to the fake content.21 Figure 3 presents some of the emerging applications (apps) and tools for deepfake detection, along with their features and limitations.

* Source: Newman, L. H.; “A New Tool Protects Videos From Deepfakes and Tampering,” Wired, 11 February 2019, https://www.wired.com/story/amber-authenticate-video-validation-blockchain-tampering-deepfakes
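
One classic image-forensics check in this family is error level analysis, shown below as an illustrative sketch rather than the specific method of any tool in figure 3. Recompressing a JPEG and differencing it against the original highlights regions whose compression history differs from the rest of the image; the file names are hypothetical:

    from io import BytesIO
    from PIL import Image, ImageChops

    def error_level_analysis(path, quality=90):
        """Recompress the image at a known JPEG quality and return the
        per-pixel difference. Regions that were pasted in or edited often
        recompress differently and therefore stand out as brighter areas."""
        original = Image.open(path).convert("RGB")
        buffer = BytesIO()
        original.save(buffer, "JPEG", quality=quality)
        buffer.seek(0)
        recompressed = Image.open(buffer)
        return ImageChops.difference(original, recompressed)

    # Hypothetical file names; inspect the output image for bright regions.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")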

AI and Machine Learning Methods
Machine learning and AI can also be used to detect fake images and videos. Convolutional neural networks (CNNs) are AI-based algorithms that can be embedded in data-sharing sites and social networks. They work in the background, continually observing uploaded content and recognizing whether it is genuine or fake. This strategy warns users about fake content or removes it before it is disseminated.22 Figure 4 summarizes AI- and machine-learning-based deepfake detection methods.
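
A minimal sketch of such a CNN classifier follows (PyTorch, with assumed layer sizes and random tensors standing in for labeled training frames); production detectors are far deeper and trained on large labeled data sets:

    import torch
    import torch.nn as nn

    # Minimal real-vs-fake frame classifier; all sizes are assumptions.
    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),  # one logit; sigmoid gives P(fake)
    )

    frames = torch.rand(4, 3, 64, 64)                    # stand-in frames
    labels = torch.tensor([[0.0], [1.0], [0.0], [1.0]])  # 0 = real, 1 = fake

    # One illustrative training step on labeled examples.
    loss = nn.functional.binary_cross_entropy_with_logits(detector(frames),
                                                          labels)
    loss.backward()

    # In deployment, frames scoring above a threshold trigger a warning or
    # removal before the content spreads.
    flagged = torch.sigmoid(detector(frames)) > 0.5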


Countermeasures and Safeguards in Development

Anti-deepfake technology encompasses deepfake identification, content authentication and prevention. Other preventive measures include education and training to raise awareness, enterprise policies and legislation.

Anti-Deepfake Technology
Anti-deepfake technology consists of tools to recognize deepfakes, validate content and keep content from being used to produce deepfakes.23 The Intelligence Advanced Research Projects Activity (IARPA) has provided a US$5 million grant to support researchers who are creating methods to identify deepfakes and automatically verify content legitimacy.24 Similarly, the US Department of Defense (DoD) is tackling deepfakes through the Media Forensics (MediFor) program of the Defense Advanced Research Projects Agency (DARPA). The objective is to develop the ability to perform end-to-end media forensic investigations that recognize manipulations and detail how those manipulations were executed.25 Several content-distribution technology companies, such as Google, Microsoft, Facebook and Amazon, are investing heavily in deepfake detection algorithms.

Education and Training
Despite concerns expressed by authorities and extensive news coverage, the danger of deepfakes is not well understood. Enterprises, governments and individuals need to understand that videos may not provide accurate portrayals of events. They should also understand which perceptual signs can assist in identifying deepfakes. Susceptible populations, such as less-technology-savvy seniors and children, should be trained to spot false news and doctored videos. Deepfake technology gives cybercriminals new tools for social engineering. Therefore, enterprises should be on high alert, keeping abreast of new forms of attack and continually updating their cybercrime resilience plans. For instance, many employees are unaware that deepfake audio is being used to trick people into approving wire payments for products and services. Training employees to protect against such attacks can motivate them to scrutinize unusual access attempts and suspicious transaction requests.26

Enterprise Policies and Voluntary Measures
Enterprise policies and voluntary activities offer compelling tools to protect against deepfakes. Politicians should resist the temptation to use deepfakes in their campaigns to spread misinformation. Social media platforms should avoid pushing deepfake advertisements to the top of the feed for monetary gain. Social media websites could collaborate to develop restrictive policies to block and remove deepfakes. Some websites, such as Reddit and Pornhub, have already banned deepfakes and nonconsensual pornography.27 Other websites do not remove doctored content but take steps to reduce its accessibility to the public. The increase in deepfakes and misinformation contaminating web platforms has driven a few enterprises to take strict actions such as suspending user accounts.

Legislative and Regulatory Protection
Neither civil nor criminal laws explicitly address deepfakes. However, legal specialists have proposed amending current laws to cover libel, defamation and identity fraud committed using deepfakes. For instance, in the United States, the Virginia state law prohibiting revenge pornography makes circulating “falsely made” pictures and recordings an offense, and that law has been extended to cover deepfakes.28 To prevent the distribution of deepfakes through social media and video hosting sites, the Deepfake Report Act of 2019 was proposed in the US Congress. It would require several agencies to address deepfakes and their effect on national security.29 Enacting new laws against deepfakes will help prevent the technology’s misuse, but enforcement mechanisms will also be needed.

Other Methods
Government agencies should invest in tools designed to detect fake content by identifying changes in the metadata of the displayed content that are not visible to the naked eye. It may be difficult to train every employee to identify deepfake content, but web security solutions can be implemented to detect unusual activity and prevent unauthorized access.30 For example, online lenders are implementing innovative security measures to combat fraud by installing software to spot deepfake videos and fabricated content. These lenders, which once relied on site-inspection companies, now use software to verify the authenticity of the photographs submitted with online applications.31
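
As one simple illustration of metadata-based screening, the EXIF Software tag can reveal that an image passed through an editing program. The watch list and file name below are assumptions, and real tools inspect far more fields:

    from PIL import Image
    from PIL.ExifTags import TAGS

    EDITING_HINTS = ("photoshop", "gimp", "faceapp")  # assumed watch list

    def metadata_flags(path):
        """Return hints from the EXIF Software tag suggesting the image was
        processed by an editing program after capture. Absent or scrubbed
        metadata is common, so this is only one weak signal among many."""
        exif = Image.open(path).getexif()
        readable = {TAGS.get(tag_id, tag_id): value
                    for tag_id, value in exif.items()}
        software = str(readable.get("Software", "")).lower()
        return [hint for hint in EDITING_HINTS if hint in software]

    print(metadata_flags("application_photo.jpg"))  # hypothetical file name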

With more enterprises moving online during the COVID-19 pandemic, they have become more susceptible to sophisticated cyberattacks. Deepfakes are a genuine threat. Although there are no definitive countermeasures at this time, various preventive measures are being developed. The following best practices can be used to prepare for deepfake attacks:

  • Deploy an awareness program to educate management, IT and engineering about the nature of the deepfake threat and methods of prevention.
  • Have a robust response plan in place to resolve a deepfake situation.
  • Monitor the social and digital appearances of executives and brand ambassadors to stay ahead of the threat.
  • Have a protocol in place that employees should follow when uploading media content online or when answering calls.
  • Use AI-powered software to detect deepfakes.
  • Include a watermark or cryptographic fingerprint to verify any media content being shared on behalf of the enterprise (a minimal signing sketch follows this list).
  • Have an algorithm to detect the authenticity of copyrighted digital content and identify its source.
  • Maintain the chain of custody for digital media from the time it is captured until its disposal.
  • Maintain access logs for enterprise-owned digital media and have management review them at regular intervals.
  • Institute a policy for media created or shared using employees’ personal devices.
  • Install a mechanism to prevent employees from sharing disturbing content.
  • Capture media content created by the enterprise using a camera integrated with a hardware-based solution to automatically sign captured data.
  • In the event of a deepfake incident, send a formal email to the affected audience informing them of the incident.
  • Store all digital media captured, shared or used by the enterprise at a secure location with limited access.
  • Scan downloaded media files for misleading information using an appropriate scanning tool.
  • Deploy an intelligent tool to collect, monitor and analyze deepfake content and provide insights to prepare for a deepfake threat.
  • Label videos as deepfakes if manipulation is detected.
  • Have a media privacy policy that restricts unethical use.
  • Be careful with what is shared online: More online media creates easier targets.
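
The watermark/fingerprint practice above can be sketched with a keyed hash: an HMAC computed over the media file serves as a tamper-evident fingerprint that can be re-verified at every chain-of-custody step. The key handling and file names here are illustrative assumptions:

    import hashlib
    import hmac

    SECRET_KEY = b"enterprise-signing-key"  # assumed; keep in a real key vault

    def fingerprint(path):
        """Keyed HMAC-SHA256 over the file contents; record it alongside the
        media when the content is captured or published."""
        with open(path, "rb") as media:
            return hmac.new(SECRET_KEY, media.read(),
                            hashlib.sha256).hexdigest()

    def verify(path, recorded_fingerprint):
        """Re-check at each chain-of-custody step; a mismatch means the media
        changed since the fingerprint was recorded."""
        return hmac.compare_digest(fingerprint(path), recorded_fingerprint)

    # Hypothetical file name: sign at publication, verify on receipt.
    tag = fingerprint("press_video.mp4")
    assert verify("press_video.mp4", tag)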

Conclusion

Deepfakes can have adverse societal and individual consequences. Fake pornography, fake news, hoaxes and financial fraud are just some of the nefarious uses of deepfake technology. Thus, deepfakes represent a significant danger to the general public, global political frameworks and enterprises. To prevent their potentially negative consequences, deepfakes can be identified and countered with measures such as education and training, enterprise policies, voluntary actions, legislation and anti-deepfake technology. It is hoped that these countermeasures will reduce the technology’s potential for misuse while leaving it accessible for legitimate applications.

Endnotes

1 Sample, I.; “What Are Deepfakes—and How Can You Spot Them?” The Guardian, 13 January 2020, https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them
2 Nguyen, T. T.; C. M. Nguyen; D. T. Nguyen; D. T. Nguyen; S. Nahavandi; “Deep Learning for Deepfakes Creation and Detection: A Survey,” September 2019, https://arxiv.org/pdf/1909.11573.pdf
3 Raj, Y.; “Obscuring the Lines of Truth: The Alarming Implications of Deepfakes,” Jurist, 17 June 2020, https://www.jurist.org/commentary/2020/06/yash-raj-deepfakes/
4 Albahar, M.; J. Almalki; “Deepfakes: Threats and Countermeasures,” Journal of Theoretical and Applied Information Technology, vol. 97, iss. 22, 30 November 2019, http://www.jatit.org/volumes/Vol97No22/7Vol97No22.pdf
5 Punnappurath, A.; M. S. Brown; “Learning Raw Image Reconstruction-Aware Deep Image Compressors,” Journal of Latex Class Files, vol. 14, iss. 8, August 2015, https://abhijithpunnappurath.github.io/pami_raw.pdf
6 Cheng, Z.; H. Sun; M. Takeuchi; J. Katto; “Deep Convolutional AutoEncoder-Based Lossy Image Compression,” 2018 Picture Coding Symposium (PCS), June 2018, https://arxiv.org/pdf/1804.09535.pdf
7 Op cit Sample
8 Ibid.
9 Gardiner, N.; “Facial Re-Enactment, Speech Synthesis and the Rise of the Deepfake,” https://ro.ecu.edu.au/cgi/viewcontent.cgi?article=2530&context=theses_hons
10 Ajder, H.; G. Patrini; F. Cavalli; L. Cullen; “The State of Deepfakes: Landscape, Threats and Impact,” Deeptrace Labs, September 2019, https://enough.org/objects/Deeptrace-the-State-of-Deepfakes-2019.pdf
11 Galindo, G.; “XR Belgium Posts Deepfake of Belgian Premier Linking COVID-19 With Climate Crisis,” The Brussels Times, 14 April 2020, https://www.brusselstimes.com/news/belgium-all-news/politics/106320/xr-belgium-posts-deepfake-of-belgian-premier-linking-covid-19-with-climate-crisis/
12 Korshunov, P.; S. Marcel; “DeepFakes: A New Threat to Face Recognition? Assessment and Detection,” Idiap Publications, December 2018
13 Op cit Albahar
14 Op cit Nguyen
15 Jones, V. A.; “Artificial Intelligence Enabled Deepfake Technology: The Emergence of a New Threat,” https://search.proquest.com/openview/60d6b06b94904dccf257c4ea7c297226/1?pq-origsite=gscholar&cbl=18750&diss=y
16 Matern, F.; C. Riess; M. Stamminger; “Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations,” 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), January 2019, https://faui1-files.cs.fau.de/public/publications/mmsec/2019-Matern-EVA.pdf
17 Ibid.
18 Op cit Albahar
19 Op cit Gardiner
20 Piva, A.; “An Overview on Image Forensics,” International Scholarly Research Notices (ISRN) Signal Processing, January 2013, https://www.hindawi.com/journals/isrn/2013/496701/
21 Ibid.
22 Op cit Albahar
23 Westerlund, M.; “The Emergence of Deepfake Technology: A Review,” Technology Innovation Management Review, November 2019, https://timreview.ca/article/1282
24 Op cit Jones
25 Strout, N.; “How the Pentagon Is Tackling Deepfakes as a National Security Problem,” C4ISRNET, 29 August 2019, https://www.c4isrnet.com/information-warfare/2019/08/29/how-the-pentagon-is-tackling-deepfakes-as-a-national-security-problem/
26 Op cit Jones
27 Op cit Westerlund
28 Ibid.
29 Op cit Jones
30 Ibid.
31 Crosman, P.; “Online Lenders Confront Deepfake Threat,” American Banker, 2020, https://www.americanbanker.com/news/online-lenders-confront-deepfake-threat

Pavankumar Mulgund, Ph.D., CISA, CSM, CSPO, PMI-ACP, SAFe

Is a clinical assistant professor in the Management Science and Systems department at the State University of New York (SUNY) at Buffalo (USA) with more than 12 years of corporate and consulting experience. His expertise includes technology strategy, the business value of IT, information assurance, information privacy and the application of novel technologies (e.g., artificial intelligence [AI], Internet of Things [IoT], blockchain) in the context of healthcare. Mulgund has published several papers in leading academic and industry journals, is a frequent speaker at information systems and security conferences, and has consulted for several organizations. He has developed and taught graduate-level information systems courses in database management systems, systems analysis and design, data visualization with Tableau, and experiential IT projects. Before joining SUNY Buffalo, he led product and delivery teams for a primary contractor of the US Centers for Medicare and Medicaid Services. He has also worked for IBM and Mindtree, among others. He holds several agile methods and design thinking certifications. He can be reached at pmulgund@buffalo.edu.

Samina Lohawala, AWS CCP, PSM I, Splunk 7.x Fundamentals Part 1

Is a research intern in the Management Science and Systems department at SUNY Buffalo. She has five years of software quality assurance engineering and test management experience, during which she acquired crucial business insights and learned to derive data-driven business decisions while working in the data and storage management and security verticals. Lohawala’s areas of professional interest are information assurance, information systems audit, digital forensics, and test automation tools and practices. Before joining SUNY Buffalo, she worked as a software quality assurance engineer at Symantec Corporation and Veritas Technologies. She is a firm believer in collaboration, asking the right questions and helping others, while also reaching out to learn and be mentored. She can be reached at saminalo@buffalo.edu.