Deep Fakes And National Security – Analysis (2024)

By Laurie A. Harris*

“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

How Are Deep Fakes Created?

Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
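
The adversarial loop can be illustrated with a short sketch. The Python example below (a hypothetical toy using PyTorch, with a simple two-dimensional distribution standing in for photos or video) shows a generator and discriminator trained in competition; it is illustrative only, and real deep fake systems apply the same principle with far larger networks and image data.

```python
# Minimal GAN sketch: the generator learns to mimic a target data
# distribution while the discriminator learns to tell real from fake.
# Toy example only -- production deep fake GANs use image data and far
# larger convolutional architectures.
import torch
import torch.nn as nn

def real_data(n):
    # Stand-in "real" data: samples from a fixed 2-D Gaussian.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):  # the networks compete for many iterations
    # Discriminator update: label real samples 1, counterfeits 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: produce counterfeits the discriminator calls real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Each iteration pits the networks against each other: the discriminator's loss falls when it separates real from counterfeit data, and the generator's loss falls when the discriminator is fooled.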

Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made: the software is freely available, and the necessary processing power can be rented through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.

How Could Deep Fakes Be Used?

Deep fake technology has been popularized for entertainment purposes—for example, social media users have inserted the actor Nicolas Cage into movies in which he did not originally appear, and a museum has generated an interactive exhibit featuring the artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.

Deep fakes could, however, be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.

Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to “undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” In the future, convincing audio or video forgeries could potentially strengthen similar efforts.

Deep fakes could also be used to embarrass or blackmail elected officials or individuals with access to classified information. Already there is evidence that foreign intelligence operatives have used deep fake photos to create fake social media accounts from which they have attempted to recruit sources. Some analysts have suggested that deep fakes could similarly be used to generate inflammatory content—such as convincing video of U.S. military personnel engaged in war crimes—intended to radicalize populations, recruit terrorists, or incite violence. Section 589F of the FY2021 National Defense Authorization Act (P.L. 116-283) directs the Secretary of Defense to conduct an intelligence assessment of the threat posed by deep fakes to service members and their families, including an assessment of the maturity of the technology and how it might be used to conduct information operations.

In addition, deep fakes could produce an effect that professors Danielle Keats Citron and Robert Chesney have termed the “Liar’s Dividend”: individuals could successfully deny the authenticity of genuine content—particularly if it depicts inappropriate or criminal behavior—by claiming that the content is a deep fake. Citron and Chesney suggest that the Liar’s Dividend could become more powerful as deep fake technology proliferates and public awareness of the technology grows.

Some reports indicate that such tactics have already been used for political purposes. For example, political opponents of Gabon President Ali Bongo asserted that a video intended to demonstrate his good health and mental competency was a deep fake, later citing it as part of the justification for an attempted coup. Outside experts were unable to determine the video’s authenticity, but one expert noted, “in some ways it doesn’t matter if [the video is] a fake…It can be used to just undermine credibility and cast doubt.”

How Can Deep Fakes Be Detected?

Today, deep fakes can often be detected without specialized detection tools. However, the technology is rapidly progressing toward a point at which unaided human detection will be very difficult or impossible. While commercial industry has been investing in automated deep fake detection tools, this section focuses on U.S. government investments at the Defense Advanced Research Projects Agency (DARPA).

DARPA has had two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor, which concluded in FY2021, sought to develop algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor technologies are expected to transition to operational commands and the intelligence community.
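
As an illustration of a digital-integrity check, the hypothetical sketch below flags images whose frequency spectra contain unusually strong high-frequency energy—one of the pixel-level artifacts that GAN up-sampling can leave behind. It is a toy heuristic, not one of MediFor's actual algorithms, and the threshold is arbitrary.

```python
# Hypothetical "digital integrity" check: GAN up-sampling often leaves
# periodic artifacts visible in an image's frequency spectrum. This toy
# detector flags images whose high-frequency energy exceeds a threshold.
# Real forensic tools are far more sophisticated.
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low_band = spectrum[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8]
    return 1.0 - low_band.sum() / spectrum.sum()

def flag_suspect(gray_image: np.ndarray, threshold: float = 0.35) -> bool:
    # The threshold is illustrative; a real system would calibrate it on
    # labeled authentic and synthetic imagery.
    return high_freq_energy_ratio(gray_image) > threshold
```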

SemaFor seeks to build upon MediFor technologies and to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program is to catalog semantic inconsistencies—such as mismatched earrings, unusual facial features, or implausible backgrounds in a GAN-generated image—and prioritize suspected deep fakes for human review. DARPA received $19.7 million for SemaFor in FY2021 and requested $23.4 million for the program in FY2022. Technologies developed by both SemaFor and MediFor are intended to improve defenses against adversary information operations.
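
A detect-attribute-characterize pipeline of the kind described can be sketched as a triage queue that scores media by its semantic inconsistencies and surfaces the most suspicious items for human review. The sketch below is hypothetical—the scoring, attribution, and characterization functions are placeholders, not SemaFor's methods.

```python
# Hypothetical triage pipeline: detect semantic inconsistencies,
# attribute the generator, characterize intent, and queue suspected
# deep fakes for human review in priority order.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Finding:
    sort_key: float  # negated score: heapq pops the lowest value first
    media_id: str = field(compare=False)
    attribution: str = field(compare=False)
    inconsistencies: list = field(compare=False)

def triage(media_items, detect, attribute, characterize):
    """Queue suspected deep fakes for human review, most suspicious first."""
    queue = []
    for item in media_items:
        issues = detect(item)        # e.g., mismatched earrings, warped background
        if not issues:
            continue                 # no semantic inconsistencies found
        source = attribute(item)     # which generator family produced it?
        intent = characterize(item)  # benign (e.g., parody) or malicious?
        score = len(issues) + (2.0 if intent == "malicious" else 0.0)
        heapq.heappush(queue, Finding(-score, item["id"], source, issues))
    return [heapq.heappop(queue) for _ in range(len(queue))]
```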

Policy Considerations

Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
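
A provenance-labeling requirement of this kind might look like the hypothetical sketch below, in which a platform binds the capture time, location, and edit status of a piece of content to a hash of that content and signs the record. An HMAC with a platform-held key stands in for a production digital-signature scheme, and all field names are illustrative.

```python
# Hypothetical content-provenance record: origin metadata is bound to a
# hash of the content and signed so later tampering can be detected.
import hashlib, hmac, json, time

PLATFORM_KEY = b"demo-key-not-for-production"  # stand-in signing key

def make_provenance_record(content: bytes, location: str, edited: bool) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "captured_at": int(time.time()),
        "location": location,
        "edited": edited,  # platforms could require edited content be labeled
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict, content: bytes) -> bool:
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    )
    return ok_sig and record["content_sha256"] == hashlib.sha256(content).hexdigest()
```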

Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that instead the focus should be on the need to educate the public about deep fakes and minimize incentives for creators of malicious deep fakes.

Potential Questions for Congress

  • Do the Department of Defense, the Department of State, and the intelligence community have adequate information about the state of foreign deep fake technology and the ways in which this technology may be used to harm U.S. national security?
  • How mature are DARPA’s efforts to develop automated deep fake detection tools? What are the limitations of DARPA’s approach, and are any additional efforts required to ensure that malicious deep fakes do not harm U.S. national security?
  • Are federal investments and coordination efforts, across defense and non-defense agencies and with the private sector, adequate to address research and development needs and national security concerns regarding deep fake technologies?
  • How should national security considerations with regard to deep fakes be balanced with free speech protections, artistic expression, and beneficial uses of the underlying technologies?
  • Should social media platforms be required to authenticate or label content? Should users be required to submit information about the provenance of content? What secondary effects could this have for social media platforms and the safety, security, and privacy of users?
  • To what extent and in what manner, if at all, should social media platforms and users be held accountable for the dissemination and impacts of malicious deep fake content?
  • What efforts, if any, should the U.S. government undertake to ensure that the public is educated about deep fakes?

About the author: Laurie A. Harris, Analyst in Science and Technology Policy

Source: This article was published by the Congressional Research Service (CRS).

