[Image: Hand putting a ballot in a ballot box in front of a Korean flag. Source: Panther Media GmbH via Alamy Stock Photo]

Amid a steep rise in politically motivated deepfakes, South Korea's National Police Agency (KNPA) has developed and deployed a tool for detecting AI-generated content for use in potential criminal investigations.

According to the KNPA's National Office of Investigation (NOI), the deep learning program was trained on approximately 5.2 million pieces of data sourced from 5,400 Korean citizens. It can determine whether a video it has not been pretrained on is real or fake in just five to 10 minutes, with an accuracy rate of around 80%. The tool auto-generates a results sheet that police can use in criminal investigations.
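The KNPA has not published the tool's internals, but detectors of this kind typically score individual frames with a trained classifier and then aggregate those scores into a video-level verdict and report. A minimal sketch of that aggregation step in Python, with the deep learning model stubbed out as a hypothetical `score_frame` function (the field names and thresholds here are illustrative assumptions, not details of the KNPA's system):

```python
from statistics import mean

def score_frame(frame) -> float:
    # Placeholder for a trained deepfake classifier's output:
    # the probability (0.0-1.0) that this frame is AI-generated.
    # A real system would run a deep learning model here.
    return frame["fake_prob"]

def analyze_video(frames, threshold=0.5):
    """Aggregate per-frame scores into a video-level results sheet."""
    scores = [score_frame(f) for f in frames]
    avg = mean(scores)
    flagged = sum(s > threshold for s in scores)
    return {
        "verdict": "likely synthetic" if avg > threshold else "likely authentic",
        "mean_score": round(avg, 3),
        "flagged_frames": f"{flagged}/{len(scores)}",
    }

# Toy input: frames tagged with mock classifier probabilities
frames = [{"fake_prob": p} for p in (0.9, 0.8, 0.7, 0.2)]
print(analyze_video(frames))
```

The mean-over-frames rule is only one possible aggregation; production systems may weight face-bearing frames more heavily or require temporal consistency across frames before flagging a video.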

As reported by Korean media, these results will inform investigations but will not be used as direct evidence in criminal trials. Police also plan to collaborate with AI experts in academia and business.

AI security experts have called for the use of AI for good, including detecting misinformation and deepfakes.

"This is the point: AI can help us analyze [false content] at any scale," Gil Shwed, CEO of Check Point, told Dark Reading in an interview this week. Though AI is the sickness, he said, it is also the cure: "[Detecting fraud] used to require very complex technologies, but with AI you can do the same thing with a minimum amount of information — not just good and large amounts of information."

Korea's Deepfake Problem

While the rest of the world waits in anticipation of deepfakes invading election seasons, Koreans have already been dealing with the problem up close and personal.

The canary in the coal mine came during provincial elections in 2022, when a video spread on social media that appeared to show President Yoon Suk Yeol endorsing a local candidate for the ruling party.

This type of deception has lately become more prevalent. Last month, the country's National Election Commission revealed that between Jan. 29 and Feb. 16, it detected 129 deepfakes in violation of election laws — a figure that is only expected to rise as the April 10 Election Day approaches. All this in spite of a revised law, in effect since Jan. 29, under which using deepfake videos, photos, or audio in connection with an election can earn a citizen up to seven years in prison and fines of up to 50 million won (around $37,500).

Not Just Disinformation

Check Point's Shwed warned that, like any new technology, AI has its risks. "So yes, there are bad things that can happen and we need to defend against them," he said.

Fake information is not so much the problem, he added. "The biggest issue in human conflict in general is that we don't see the whole picture — we pick the elements [in the news] that we want to see, and then based on them make a decision," he said.

"It's not about disinformation, it's about what you believe in. And based on what you believe in, you pick which information you want to see. Not the other way around."

About the Author(s)

Nate Nelson, Contributing Writer

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" -- an award-winning Top 20 tech podcast on Apple and Spotify -- and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.
