Deep Fakes and National Security

Updated April 17, 2023
“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
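
The competition described above can be illustrated with a minimal training loop. The sketch below uses PyTorch and random stand-in data, both of which are assumptions for illustration (the report names no framework or data set); it shows only the generator/discriminator dynamic, not a production deep fake pipeline.

```python
# Minimal GAN training loop illustrating the generator/discriminator
# competition described above. The framework (PyTorch), network sizes, and
# random stand-in data are illustrative assumptions, not from the report.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 64

# Generator: turns random noise into counterfeit data.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM)
)
# Discriminator: outputs a logit scoring data as real vs. counterfeit.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(1024, DATA_DIM)  # stand-in for the original data set

for step in range(1_000):  # real systems may run thousands to millions of iterations
    real = real_data[torch.randint(0, len(real_data), (32,))]
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator step: learn to identify the counterfeit data.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: adjust to produce data the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In an actual deep fake system, the generator and discriminator would be deep convolutional networks trained on images, audio, or video rather than on random vectors, but the training dynamic is the same.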
an intelligence assessment of the threat posed by deep fakes
to servicemembers and their families, including an
Though media manipulation is not a new phenomenon, the
assessment of the maturity of the technology and how it
use of AI to generate deep fakes is causing concern because
might be used to conduct information operations.
the results are increasingly realistic, rapidly created, and
cheaply made with freely available software and the ability
In addition, deep fakes could produce an effect that
to rent processing power through cloud computing. Thus,
professors Danielle Keats Citron and Robert Chesney have
even unskilled operators could download the requisite
termed the “Liar’s Dividend”; it involves the notion that
software tools and, using publically available data, create
individuals could successfully deny the authenticity of
increasingly convincing counterfeit content.
genuine content—particularly if it depicts inappropriate or
criminal behavior—by claiming that the content is a deep
How Could Deep Fakes Be Used?
fake. Citron and Chesney suggest that the Liar’s Dividend
Deep fake technology has been popularized for
could become more powerful as deep fake technology
entertainment purposes—for example, social media users
proliferates and public knowledge of the technology grows.
inserting the actor Nicholas Cage into movies in which he
did not originally appear and a museum generating an
Some reports indicate that such tactics have already been
interactive exhibit with artist Salvador Dalí. Deep fake
used for political purposes. For example, political
technologies have also been used for beneficial purposes.
opponents of Gabon President Ali Bongo asserted that a
For example, medical researchers have reported using
video intended to demonstrate his good health and mental
GANs to synthesize fake medical images to train disease
competency was a deep fake, later citing it as part of the
detection algorithms for rare diseases and to minimize
justification for an attempted coup. Outside experts were
patient privacy concerns.
unable to determine the video’s authenticity, but one expert
noted, “in some ways it doesn’t matter if [the video is] a
Deep fakes could, however, be used for nefarious purposes.
fake… It can be used to just undermine credibility and cast
State adversaries or politically motivated individuals could
doubt.”
release falsified videos of elected officials or other public
figures making incendiary comments or behaving
How Can Deep Fakes Be Detected?
inappropriately. Doing so could, in turn, erode public trust,
Today, deep fakes can often be detected without specialized
negatively affect public discourse, or even sway an election.
detection tools. However, the sophistication of the
The Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) directed the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research on GANs. Specifically, NSF is directed to support research on manipulated or synthesized content and information authenticity, and NIST is directed to support research for the development of measurements and standards necessary to develop tools to examine the function and outputs of GANs or other technologies that synthesize or manipulate content.

In addition, DARPA has had two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor, which concluded in FY2021, was to develop algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor technologies are expected to transition to operational commands and the intelligence community.
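
The three integrity categories above suggest what such automated checks can look like in practice. The sketch below is a toy example of the digital-integrity idea only: it flags image blocks whose high-frequency noise statistics are outliers, a common symptom of spliced or synthesized regions. It is not a MediFor algorithm; the function names, block size, and threshold are illustrative assumptions.

```python
# Toy "digital integrity" check: flag image blocks whose noise statistics
# deviate from the rest of the image. Illustrative only; not a MediFor tool.
import numpy as np
from scipy.ndimage import median_filter

def noise_levels(image: np.ndarray, block: int = 32) -> np.ndarray:
    """Per-block standard deviation of the high-frequency residual."""
    img = image.astype(float)
    residual = img - median_filter(img, size=3)  # strip low-frequency content
    h, w = residual.shape
    cropped = residual[: h // block * block, : w // block * block]
    blocks = cropped.reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))  # one noise estimate per block

def suspicious_blocks(image: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Boolean map of blocks whose noise level is a statistical outlier."""
    levels = noise_levels(image)
    z = (levels - levels.mean()) / (levels.std() + 1e-9)
    return np.abs(z) > z_thresh

# Demo: a synthetic image with one "spliced" patch carrying a foreign noise profile.
rng = np.random.default_rng(0)
image = rng.normal(128.0, 10.0, (256, 256))
image[64:128, 64:128] = rng.normal(128.0, 2.0, (64, 64))  # spliced region
print(suspicious_blocks(image).astype(int))  # the spliced blocks are flagged
```

Production forensics tools combine many such signals (compression traces, lighting, sensor noise) rather than relying on any single statistic.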
SemaFor seeks to build upon MediFor technologies and to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program is to catalog semantic inconsistencies—such as the mismatched earrings seen in the GAN-generated image in Figure 1, or unusual facial features or backgrounds—and prioritize suspected deep fakes for human review. DARPA requested $18 million for SemaFor in FY2024, $4 million less than the FY2023 appropriation. Technologies developed by both SemaFor and MediFor are intended to improve defenses against adversary information operations.
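
SemaFor’s detect/attribute/characterize framing maps naturally onto a triage pattern: score each item, then rank suspected deep fakes for analyst review. The sketch below illustrates only that pattern; the fields, scores, and weights are invented for illustration and do not describe DARPA’s actual algorithms.

```python
# Hypothetical review-queue triage in the spirit of the detect/attribute/
# characterize framing. All fields, scores, and weights are invented for
# illustration; this is not a DARPA algorithm.
from dataclasses import dataclass

@dataclass
class MediaItem:
    name: str
    detect_score: float     # 0-1: likelihood the item is synthetic
    malicious_score: float  # 0-1: characterization as malicious vs. benign
    audience: int           # e.g., reach of the account that posted it

def review_priority(item: MediaItem) -> float:
    # Weight by characterization and reach so that benign fakes (parody,
    # entertainment) rank low even when detection confidence is high.
    reach_weight = (1 + item.audience) ** 0.5
    return item.detect_score * item.malicious_score * reach_weight

items = [
    MediaItem("surrender-video", 0.92, 0.95, 2_000_000),
    MediaItem("movie-face-swap", 0.97, 0.05, 50_000),
    MediaItem("official-statement", 0.60, 0.80, 300_000),
]

for item in sorted(items, key=review_priority, reverse=True):
    print(f"{item.name}: priority {review_priority(item):,.1f}")
```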
Figure 1. Example of Semantic Inconsistency in a GAN-Generated Image

Source: https://www.darpa.mil/news-events/2019-09-03a.
Policy Considerations
Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
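
A labeling requirement of this kind implies some machine-readable record of a piece of content’s origin. The sketch below shows one hypothetical shape such a record could take; the field names and structure are assumptions for illustration, not any platform’s actual schema or an existing provenance standard.

```python
# Hypothetical provenance record of the kind a labeling/authentication
# requirement might attach to uploaded content. Field names and structure
# are illustrative assumptions, not an actual platform schema or standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    content_sha256: str   # fingerprint tying the record to the exact bytes
    captured_at: str      # time the content originated (ISO 8601, UTC)
    captured_where: str   # location the content originated
    edited: bool          # user-supplied label: has the content been edited?

def make_record(content: bytes, where: str, edited: bool) -> ProvenanceRecord:
    return ProvenanceRecord(
        content_sha256=hashlib.sha256(content).hexdigest(),
        captured_at=datetime.now(timezone.utc).isoformat(),
        captured_where=where,
        edited=edited,
    )

record = make_record(b"...raw video bytes...", "Anytown, USA", edited=False)
print(json.dumps(asdict(record), indent=2))
```

A platform could, for example, reject or flag content whose recomputed hash no longer matches the stored record, which is one way the authentication half of such a proposal could be enforced.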
Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that the focus should instead be on educating the public about deep fakes and minimizing incentives for creators of malicious deep fakes.

Potential Questions for Congress
• Do the Department of Defense, the Department of State, and the intelligence community have adequate information about the state of foreign deep fake technology and the ways in which this technology may be used to harm U.S. national security?
• How mature are DARPA’s efforts to develop automated deep fake detection tools? What are the limitations of DARPA’s approach, and are any additional efforts required to ensure that malicious deep fakes do not harm U.S. national security?
• Are federal investments and coordination efforts, across defense and nondefense agencies and with the private sector, adequate to address research and development needs and national security concerns regarding deep fake technologies?
• How should national security considerations with regard to deep fakes be balanced with free speech protections, artistic expression, and beneficial uses of the underlying technologies?
• Should social media platforms be required to authenticate or label content? Should users be required to submit information about the provenance of content? What secondary effects could this have for social media platforms and the safety, security, and privacy of users?
• To what extent and in what manner, if at all, should social media platforms and users be held accountable for the dissemination and impacts of malicious deep fake content?
• What efforts, if any, should the U.S. government undertake to ensure that the public is educated about deep fakes?

CRS Products
CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations, by Laurie A. Harris
CRS Report R45178, Artificial Intelligence and National Security, by Kelley M. Sayler
CRS Report R45142, Information Warfare: Issues for Congress, by Catherine A. Theohary


Laurie A. Harris, Analyst in Science and Technology Policy
Kelley M. Sayler, Analyst in Advanced Technology and Global Security
IF11333


Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to
congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress.
Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has
been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the
United States Government, are not subject to copyright protection in the United States. Any CRS Report may be
reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include
copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you
wish to copy or otherwise use copyrighted material.

https://crsreports.congress.gov | IF11333 · VERSION 7 · UPDATED
