
Facebook at 20: Contemplating the Cost of Privacy

As the social media giant celebrates its two-decade anniversary, privacy experts reflect on how it changed the way the world shares information.

Joan Goodchild, Contributing Writer

April 30, 2024

5 Min Read
Giant eye against a dark background
Source: Skorzewiak via Alamy Stock Photo

In the 20 years since then-Harvard University student Mark Zuckerberg launched Facebook, there has been a profound shift in our understanding of privacy and security in the digital age. Facebook's path to becoming a digital town square has been fraught with challenges as the company — and the rest of the world — balanced the human desire to share information with the expectation of personal privacy.

In late January, Zuckerberg went before a Senate committee and apologized to parents who blame social media for what they say was its role in their children's deaths by suicide or drug overdose. Zuckerberg has appeared before congressional committees multiple times over the years to address concerns about the platform's impact. Privacy and security at Facebook have long been a lightning rod, says Martin J. Kraemer, a security awareness advocate at KnowBe4 who holds a doctorate in cybersecurity.

"Facebook's fundamental purpose, centered around the systematic collection of information, naturally conflicts with privacy and data protection laws," Kraemer says.

The social media company's missteps over the years have shaped the broader discussions and regulations around data privacy. In fact, it is that tension between social connectivity and privacy rights that prompted the need for robust data protection measures, influencing the creation of significant legal frameworks, like Europe's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

The Cambridge Analytica scandal, which came to light in 2018 over user data harvested and exploited around the 2016 US election, was a watershed moment for many, as the company's decision to grant third-party entities unethical access to user data underscored the potential for personal data to be weaponized against democratic processes. This incident, according to Kraemer, was a direct catalyst for the CCPA and marked a decisive turn in how policymakers and the public at large perceived and prioritized online privacy and data security.

"Cambridge Analytica was the first time the dangers of mass data collection and processing became a tangible reality for the public," Kraemer says.

The 'Attention Economy': Your Data Up For Sale

Facebook has also been a key player in the development of the "attention economy," says Justin Daniels, a faculty member at IANS Research. Its creation catalyzed a shift toward treating user data as a commodity sold to the highest bidder under the guise of a "free" service.

"Their algorithm, designed to keep you on the site with things that you emotionally respond to, has resulted in the following: Information is available everywhere, yet the facts prove elusive, and anyone who disagrees with you becomes your adversary," Daniels says. "Our need to have convenience above all has made privacy and security afterthoughts for too many people."

The platform was initially slow to recognize its role in disseminating misinformation, as evidenced by Zuckerberg's later retracted comments on the 2016 election, and that slowness underscores the company's significant impact, Daniels says. Resulting state laws in Texas and Florida targeting big tech highlight a legislative response to privacy concerns directly influenced by Facebook's actions. Facebook's long-standing stance of neutrality as mere "engineers" has also allowed misinformation to proliferate, causing harm and dividing communities, he says.

Looking ahead, Daniels foresees a challenging future for online privacy and security, particularly with the advent of AI.

"If you connect the dots, you see that we are paying the price in privacy and security for Congressional inaction on the digital economy," he says. "They basically let the private sector innovate without any rules. Since big tech business models depend on collecting personal information, they are never going to voluntarily limit themselves, as data collection is critical to their business model. They are the richest companies in human history and now have great influence. An AI black swan event will be the culmination of this lack of regulatory oversight."

A Privacy-Free Future?

Like Daniels, AJ Nash, VP and Distinguished Fellow of Intelligence at ZeroFox, thinks the quest for profitability, which drives platforms like Facebook to maximize user engagement, comes at the expense of user privacy. Social media companies are neither legally accountable for user content, thanks to Section 230(c)(1) of the Communications Decency Act, nor obligated to adhere to First Amendment principles, he notes. That means policy changes within these platforms are driven primarily by the pursuit of growth or forced on them by governmental mandates, such as GDPR.

"If the user base shrinks or engagement drops as a result of user concerns regarding censorship, mis/dis/malinformation, privacy, or security [for example], social media platforms will likely be motivated to address those concerns with new policies," Nash says. "Beyond that, the only motivation any private corporation — including social media platforms — has for making any policy changes will likely be when they are required to do so by a government with the authority to enforce such mandates."

And as anyone working in security knows, social media's extensive data collection practices have also become a goldmine for attackers, enabling crimes ranging from theft to influence campaigns designed to sway public opinion — underscoring the complexity of securing user data against cyber threats on social sites that are designed to be, well, social.

"I recently read that 1.4 billion social media accounts are hacked every month, showing that hackers aren't reluctant to attack users directly," Nash says. "Social media platforms can [and do] collect an impressive amount of information about their users, including name, age, gender, location, IP address, devices used, hobbies/interests, shopping tendencies, political views, and individual or group pattern of life analysis … the list goes on. Most [if not all] social media platforms have reportedly been compromised at least once."

In light of this near-constant onslaught of attacks, Nash thinks there will be a shift toward resignation and skepticism among users. The concept of privacy will also increasingly be viewed as untenable in the face of the continued expansion of the Internet and social media, he says.

"I think we've reached a point where most people believe they've been compromised and are becoming numb to — and skeptical of — the idea that their privacy can be protected," Nash says. "The two youngest generations seem largely less concerned with privacy than their predecessors. Social media probably played a significant role in all of this. As people spent more time on these platforms over the last two decades, they have grown accustomed to the news of data leaks, while becoming more interested in the potential benefits of less privacy than the associated risks. To be frank, I think we are nearing — if not already at — the dawn of a post-privacy world."

About the Author(s)

Joan Goodchild

Contributing Writer, Dark Reading

Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.
