Deepfakes, Synthetic Media: How Digital Propaganda Undermines Trust

Organizations must educate themselves and their users on how to detect, disrupt, and defend against the increasing volume of online disinformation.

Microsoft Security

March 14, 2023

3 Min Read
Source: Brain light via Alamy Stock Photo

More and more, nation-states are leveraging sophisticated cyber influence campaigns and digital propaganda to sway public opinion. Their goal? To decrease trust, increase polarization, and undermine democracies around the world.

In particular, synthetic media is becoming more commonplace thanks to an increase in tools that easily create and disseminate realistic artificial images, videos, and audio. This technology is advancing so quickly that soon anyone will be able to create a synthetic video of anyone saying or doing anything the creator wants. According to Sentinel, there was a 900% year-over-year increase in the proliferation of deepfakes in 2020.

It's up to organizations to protect themselves against these cyber influence operations, and strategies are available to help them detect, disrupt, deter, and defend against online propaganda. Read on to learn more.

Building a Cyber Influence Campaign

Cyber influence operations have three main stages. They begin with prepositioning, in which nation-state actors first introduce their propaganda or false narratives to the general public — whether by seeding them online or by injecting them into breaking world news. These false online narratives are especially harmful because, once published, they lend apparent credence to every subsequent reference to them.

Next is the launch phase. This involves foreign entities creating a coordinated campaign to spread their narrative through government-influenced media outlets and social channels. Then comes the amplification phase in which nation-state-controlled media and proxies amplify false narratives to targeted audiences.

The corruption doesn't end there. Cyber influence campaigns can lead to market manipulation, payment fraud, vishing, impersonations, brand damage, and botnets, to name a few. But the greater threat is to our collective sense of trust and authenticity. The growing use of artificial media means that any compromising or undesirable image, audio, or video of a public or private figure can be dismissed as fake — even when it's legitimate.

How Organizations Can Protect Themselves

As technology advances, tools that have traditionally been used in cyberattacks are now being applied to cyber influence operations. Nation-states have also begun collaborating to amplify each other's fake content.

These trends point to a need for greater consumer education on how to accurately identify foreign influence operations and avoid engaging with them. We believe the best way to promote this education is to increase collaboration between the federal government, the private sector, and end users in business and personal contexts.

There are four key ways to ensure the effectiveness of such training and education. First, we must be able to detect foreign cyber influence operations. No individual organization will be able to do this on its own. Instead, we will need the support of academic institutions, nonprofit organizations, and other entities to better analyze and report on cyber influence operations.

Next, defenses must be strengthened to account for the challenges and opportunities that technology has created for the world's democracies — especially when it comes to the disruption of independent journalism, local news, and information accuracy.

Another element in combating this widespread deception is radical transparency. We recommend increasing both the volume and dissemination of geopolitical analysis, reporting, and threat intelligence to better inform effective responses and protection.

Finally, there have to be consequences when nation-states violate international rules. While it often falls on state, local, and federal governments to enforce these penalties, multistakeholder action can be leveraged to strengthen and extend international norms. For example, Microsoft recently signed onto the European Commission's Code of Practice on Disinformation along with more than 30 online businesses to collectively tackle this growing challenge. Governments can build on these norms and laws to advance accountability.

Ultimately, threat actors will only get better at evading detection and influencing public opinion. The latest nation-state threats and emerging trends show that they will keep evolving their tactics. But organizations can still improve their defenses. What's needed are holistic policies that public and private entities alike can use to combat digital propaganda and protect public discourse against false narratives.

Read more Partner Perspectives from Microsoft Security.


About the Author(s)

Microsoft Security

Microsoft

Protect it all with Microsoft Security.

Microsoft offers simplified, comprehensive protection and expertise that eliminates security gaps so you can innovate and grow in a changing world. Our integrated security, compliance, and identity solutions work across platforms and cloud environments, providing protection without compromising productivity.

We help customers simplify the complex by prioritizing risks with unified management tools and strategic guidance created to maximize the human expertise inside your company. Our unparalleled AI is informed by trillions of signals so you can detect threats quickly, respond effectively, and fortify your security posture to stay ahead of ever-evolving threats.

