A Linux maintainer pledges to stop taking code submissions from the University of Minnesota after a research team purposely submitted vulnerabilities to show software supply chain weaknesses.


The University of Minnesota has suspended a research project after complaints that two student researchers submitted intentionally vulnerable code to the maintainers of the Linux kernel as a way to investigate whether supply chain integrity issues affected the widely used Linux ecosystem.

At the core of the kerfuffle is a research paper accepted to next month's prestigious IEEE Symposium on Security and Privacy. The paper describes a research project that aimed to determine the resilience of open source software projects to purposely flawed patches, through which attackers could introduce vulnerabilities to be exploited at a later time. The researchers submitted at least three updates that could have added vulnerabilities to the Linux kernel.

On April 21, Greg Kroah-Hartman, a fellow with the Linux Foundation and Linux kernel maintainer, banned the University of Minnesota from contributing to the Linux kernel and pledged to revert all previous patches submitted by the researchers, pending review. The maintainers, many of them volunteers, do not have time to try to weed out purposely malicious updates, Kroah-Hartman told Dark Reading in an e-mail interview.

"I have no idea what a random researcher should, or should not do, that's not my place to say," he stated. "What I do object to is when people purposefully waste Linux kernel reviewer's time, which is what was happening here."

Software supply chain attacks have become a major problem for open source projects and commercial vendors alike: The insertion of malicious code into an update for the SolarWinds Orion remote management software likely planted backdoors at thousands of companies. Attackers are also actively hunting for vulnerabilities in open source components, buying up software projects to turn them into malware distribution channels, or slipping vulnerable code in while posing as good-faith contributors, as the University of Minnesota researchers did.

All of these vectors expose weaknesses in the software supply chain and the reliance of both open source and commercial applications and Web services on open source components, many of which are maintained by volunteers.

The UMN researchers, PhD student Qiushi Wu and his adviser, assistant professor Kangjie Lu, decided to investigate the degree to which a malicious actor could sneak vulnerable code into one of the most significant open source software (OSS) projects, the Linux kernel. The researchers submitted "hypocrite commits": malicious patches that fix a minor issue while quietly introducing a more serious vulnerability.

The research intended to "investigate the insecurity of OSS from a critical perspective—the feasibility of a malicious committer stealthily introducing vulnerabilities such as use-after-free (UAF) in OSS through hypocrite commits—seemingly beneficial minor commits that actually introduce other critical issues," the paper stated. "Such introduced vulnerabilities can be critical, as they can exist in the OSS for a long period and be exploited by the malicious committer to impact a massive number of devices and users."
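To make the pattern concrete, here is a minimal, hypothetical sketch in plain C (not kernel code, and not one of the actual UMN patches; `parse_msg` and `struct msg` are invented for illustration). The one-line "fix" that frees a buffer on an error path looks like harmless cleanup, but because the caller's existing error handling also frees the same pointer, the patch quietly turns a small memory leak into a double free on a dangling pointer, the same family of memory-safety bug as the use-after-free flaws described in the paper.

```c
/*
 * Hypothetical illustration of a "hypocrite commit" (not from the UMN patches).
 * Before the patch, the error path merely leaked `buf`. The patch adds one
 * plausible-looking line -- free(m->buf) on the error path -- but the caller's
 * long-standing cleanup code frees the same pointer again.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct msg {
    char  *buf;
    size_t len;
};

static int parse_msg(struct msg *m, const char *src)
{
    m->buf = malloc(64);
    if (m->buf == NULL)
        return -1;

    m->len = strlen(src);
    if (m->len >= 64) {
        free(m->buf);   /* the "helpful" one-line patch: plug the leak on this error path... */
        return -1;      /* ...but m->buf now dangles, and the caller does not know it was freed */
    }

    memcpy(m->buf, src, m->len + 1);
    return 0;
}

int main(void)
{
    struct msg m = { 0 };
    const char *input =
        "an input string that is deliberately longer than the 64-byte buffer "
        "so that parse_msg() takes the error path";

    if (parse_msg(&m, input) != 0) {
        fprintf(stderr, "parse failed (%zu bytes)\n", m.len);
        free(m.buf);    /* pre-existing caller cleanup: now a double free of a dangling pointer */
        return 1;
    }

    printf("parsed: %s\n", m.buf);
    free(m.buf);
    return 0;
}
```

Reviewed in isolation, the added free() reads like a routine resource-leak fix; the bug only emerges when the caller's cleanup path is also in view, which is exactly the kind of review blind spot the researchers said they were probing.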

However, actively undermining the software development process for open source created significant work for the maintainers of the Linux kernel. In a discussion on the mailing list for the Linux Network File System (Linux-NFS), Kroah-Hartman roundly criticized the research, the breach of trust, and the rationale the researchers used to justify the experiments.

"Our community does not appreciate being experimented on, and being 'tested' by submitting known patches that are either do nothing on purpose, or introduce bugs on purpose," Kroah-Hartman wrote in response to a second student researcher, Aditya Pakki, who is also part of Professor Lu's group. "If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here. Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems."

Neither of the authors of the paper responded to a request for comment via e-mail.

By late Wednesday, however, the University of Minnesota's Department of Computer Science and Engineering had issued a statement acknowledging the Linux community's concerns and pledging to investigate the research project, which the department put on hold.

"We take this situation extremely seriously," said department head Mats Heimdahl and associate department head Loren Terveen in the statement. "We have immediately suspended this line of research. We will investigate the research method and the process by which this research method was approved, determine appropriate remedial action, and safeguard against future issues, if needed."

The most problematic aspect of the research is that supply chain integrity is a known weakness that needs no further proof, as real attacks have already demonstrated how effective the vector can be, wrote Laura Abbott, a Linux kernel developer, on her blog.

"The problem with the approach the authors took is that it doesn't actually show anything particularly new," she said. "The kernel community has been well aware of this gap for a while. Nobody needs to actually intentionally put bugs in the kernel, we're perfectly capable of doing it as part of our normal work flow."

The researchers stressed that they designed the project to prevent the actual malicious patches from being merged with the Linux kernel or subsystems.

"[T]he experiment was performed in a safe way—we ensure that our patches stay only in email exchanges and will not be merged into the actual code, so it would not hurt any real users," they stated in the paper.

The researchers also said that they honored the effort maintainers put into open source projects but could not see a way to conduct the research without consuming some of the maintainers' time. In response to concerns that their approach tainted the relationship between academia and industry, they apologized but maintained that the research benefited the community overall.

"[U]sers of OSS have the right to know the potential risks; on the other hand, exposing the issue has clear benefits for the OSS community because it calls for efforts to fix the issue," the researchers stated in a clarification in December to concerns that emerged at the time. "It would motivate researchers and professionals to develop tools that automatically test and verify the patches, which would alleviate maintainer burden."

The group tried to keep the initial patches as minor as possible (fewer than five lines of code in each case) and to submit them only for real bugs. While the patches also introduced vulnerabilities, the researchers provided a real fix once they had notified the maintainer of the additional issues.

About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
