Cybersecurity challenges cannot be solved with computers alone. They demand a closer look at how social and technical systems overlap, and how this growing overlap influences security.
As it stands, many of these issues are being addressed separately. The general public and defense leaders understand the risk of online propaganda but know little about the techniques involved. The field of computational social science studies how digital media affects society but rarely tackles security. And the security community understands the protocols and services of tech platforms but knows less about how these networks collectively influence society and politics.
Pablo Breuer, innovation officer at US Special Operations Command Donovan Group, and David Perlman, researcher at A Social Network, have developed an integrated view of socio-technical systems (STS) to which security principles can be applied. An STS consists of a social network, the population using it, and an output system (political system or economic market, for example) that feels the resulting effects.
Their idea is to create a framework that combines social and technical systems and can inform security operations. As disinformation campaigns and online propaganda continue to spread, the STS view can help organizations defend against, and counter, different types of cyberattacks rooted in digital media.
"As I went through my schooling, I realized none of the really interesting problems about computer security can be answered with computers," Breuer says. A mutual friend introduced him to Perlman, and the duo began exploring mass influence and weaponized information. They wanted to educate people and government on why everybody should be involved.
"We realized that anybody who's in the field recognizes that this is a huge problem and that this is a train wreck, but nobody's actually doing anything," he explains. "Everybody's just admiring the problem." The issue isn't limited to any single part of computer science, policy, or law, Breuer continues. "It's not a silver bullet problem – it's a thousand-bullet problem," he says.
Placing security in the context of a social network offers a different perspective, Perlman adds, because at the center are interactions among many people's minds. Researchers see how people interact with technology and with one another. "You can't ignore any of those parts of the equation," he says. Previously, security analysis rarely considered how people interacted with these systems.
The Information Revolution Continues
The rise of the Internet – specifically, of social networks like Facebook, Twitter, and Instagram – has enabled anyone to speak to mass audiences. Breuer and Perlman use the term "radical leveling technologies" to describe how the Internet has shifted the balance of power online. Before social media, few people could speak to a large populace. Now just about anyone can.
"It's just a fundamental shift in the landscape," Breuer says. The transmission of messages has changed, but receptors are still human. "That's where the socio-technical comes in," he adds.
Digital media has accelerated the reach and speed of propaganda online: People can automate the process of creating new messages, then see how effective they are and the kind of responses they generate. "The whole thing has to be considered as a security question," Perlman says.
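Perlman's point about automation can be pictured as a generate-measure-select loop: produce message variants, score the response each one draws, and amplify the winner. The sketch below is purely illustrative; the function names and the stand-in engagement score are assumptions of this article, not anything from the briefing, and a real operation would substitute live platform metrics for the toy scoring function.

```python
import random

def make_variants(base, slogans):
    """Combine a base message with candidate slogans (hypothetical helper)."""
    return [f"{base} {s}" for s in slogans]

def measure_engagement(message, rng):
    """Stand-in for real metrics such as shares or replies; here just
    message length plus random noise, to keep the sketch self-contained."""
    return len(message) * 0.1 + rng.random()

def best_message(base, slogans, seed=0):
    """One pass of the loop: generate variants, score each, keep the top one."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    variants = make_variants(base, slogans)
    return max(variants, key=lambda m: measure_engagement(m, rng))
```

The loop is trivial to repeat: feed the winning variant back in as the new base and iterate, which is why the process scales so cheaply once automated.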
The idea of large groups of people communicating with one another seems benign, Perlman continues, and it is – if everyone acts in good faith. Problems arise when the bad guys figure out how to game the system before the good guys realize it can be gamed. Now they have, he adds, and the result is a new adversarial dimension to digital communications. Cybersecurity issues, propaganda, and the Internet are intertwined in a web of interconnected problems.
"It's the combination with modern technology and the Internet, that whole is greater than the sum of solving each of the parts," says Breuer, and the security industry isn't tackling it as a larger problem. Conferences may focus on policy or computer science, but not both.
"Very rarely do you get legal and policy and tech all in the same room," he notes. "And this is one of those problems where you have to have that or you won't make any inroads to making it better."
Offense and Defense in STS Security
In their upcoming Black Hat USA briefing, "Hacking Ten Million Useful Idiots: Online Propaganda as a Socio-Technical Security Project," Breuer and Perlman will discuss their framework, how security principles apply to STS, how red team and blue team processes could look in the context of STS security, and examples of red team analyses of influence operations.
Breuer gives an example of blue team operations: how a company could defend itself from a digital media-based attack. Most companies have some idea of what will happen if they suffer a data breach; however, they aren't prepared for social media attacks.
He cites an incident the Associated Press handled this past December, when the publication was covering the yellow vest protests in France. One of its stories included a close-up image of a fire. A separate blog obtained photos the AP had posted in a previous story; those photos also showed a fire, but from farther back, so it appeared smaller. The blog claimed the AP had misrepresented the fire's size with the close-up photograph and told readers not to believe the coverage.
What happened "almost instantly," Breuer says, is that the AP replied with a series of tweets explaining that both were AP photos but had been taken at different times during different events. The publication highlighted aspects of each photo to demonstrate they came from separate occasions.
"That kind of forethought allows for very rapid response," he continues. It shows how the AP had considered the possibility that someone might take its stories out of context and planned its reaction. Any company on social media should consider the chance it will have to do the same.