Legality of Security Research to Be Decided in US Supreme Court Case
A ruling that a police officer's personal use of a law enforcement database is "hacking" has security researchers worried for the future.
September 9, 2020
Independent security researchers, digital-rights groups, and technology companies have filed friend-of-the-court briefs in a US Supreme Court case that could determine whether violating the terms of service for software, hardware, or an online service equates to hacking under the law.
The case—Nathan Van Buren v. United States—stems from the appeal of Van Buren, a police sergeant in Cumming, Georgia, who was convicted in May 2018 of honest-services wire fraud and a single count of violating the Computer Fraud and Abuse Act (CFAA) for accessing government databases to look up a license plate in exchange for money. While Van Buren was authorized to use the Georgia Crime Information Center (GCIC) to access information, including license plates, federal prosecutors argued successfully that he exceeded that authorization by looking up information for a non-law-enforcement purpose.
With the appeal accepted by the US Supreme Court, security researchers and technology companies are concerned that the case could turn independent vulnerability research into unauthorized access and, thus, a prosecutable offense. If the US Supreme Court rules that Van Buren's actions are a violation of the CFAA, it would undermine software and cloud security, says Casey Ellis, chief technology officer and founder of crowdsourced bug bounty firm Bugcrowd.
"Unauthorized access is one of the main purposes of security research—by making it illegal, researchers will be unable to effectively do their jobs, the organization will not be able to close all vulnerabilities, and attackers will win," Ellis says, adding, "the purpose of the CFAA is to outlaw malicious cyberattacks, not grant organizations the ability to halt vulnerability reporting by holding ethical researchers legally accountable for their actions."
The list of interested parties filing so-called amicus briefs in the case pits the usual suspects against each other. On one side are digital rights groups, such as the American Civil Liberties Union, the Center for Democracy and Technology, and the Electronic Frontier Foundation, along with security researchers and security firms such as Rapid7 and Bugcrowd. On the other are law enforcement, specifically the Federal Law Enforcement Officers Association, and organizations such as the financial group Managed Funds Association (MFA) and mobile voting firm Voatz.
The MFA worries about "faithless employees" stealing client information, financial information, and trade secrets, while Voatz raised concerns that independent research—such as a recent paper authored by Massachusetts Institute of Technology (MIT) researchers that found significant security issues with its mobile voting application—does not serve the cause of security. On September 3, Voatz filed its brief in response to the filing on behalf of security researchers.
"We're not advocating to limit anyone's freedom – we're saying it's difficult to distinguish between good and bad faith attacks in the midst of a live election," the company said in a statement sent to Dark Reading. "For everyone's sake, it's better to work collaboratively with the organization — bad actors disguise themselves as good actors on a regular basis. All attempts to break into or tamper with an election system during a live election need to be treated as hostile unless prior authorization was specifically granted."
The MIT research used the Voatz app and a reverse-engineered version of the backend server, and it did not take place during a live election, according to a paper published at the prestigious USENIX Security Conference last month.
"As performing a security analysis against a running election server would raise a number of unacceptable legal and ethical concerns, we instead chose to perform all of our analyses in a 'cleanroom' environment, connecting only to our own servers," Michael Specter, a PhD candidate in computer science at MIT, and his co-authors stated in the paper. A later analysis funded by Voatz actually verified all the vulnerabilities plus a significant number of additional issues.
Other technology companies and organizations, however, have voiced support for security researchers and for limiting the application of the Computer Fraud and Abuse Act. In their joint amicus brief, software development tools maker Atlassian, browser maker Mozilla, and e-commerce platform firm Shopify all backed security researchers' efforts.
"Effective computer security ... entails creating systems that are resilient to computer hackers. That requires letting people, including members of the robust community of independent security researchers, probe and test our computer networks," the companies stated, adding "[a]n overbroad reading of the CFAA, however, chills ... critical security research. Security experts may not think it worth the risk to conduct their research without a clear definition of what it means to 'exceed authorized access,' especially when mere terms of service violations have been used to impose criminal penalties in the past."
Security researchers are not the only ones at risk, says Bugcrowd's Ellis. Anyone who uses a computer system in a way not intended by the manufacturer could find themselves the target of legal action and, perhaps, prosecution, he says.
"The law is so broadly written that it criminalizes acts that otherwise violate a website's terms of services, from lying about your name on a Web form to the socially beneficial security testing that ethical security researchers undertake," he says. "A broader interpretation of 'exceeds unauthorized access' in CFAA works directly against the goals of a safer and more resilient Internet."
A date for oral arguments in the case has not been set.