In a letter, nearly 70 security firms and individual researchers criticize Voatz for misrepresenting widely accepted security research practices to the US Supreme Court.


A US Supreme Court case that could expand the Computer Fraud and Abuse Act (CFAA) to include prosecuting "improper" uses of technology not specifically allowed by software makers will chill security research and could be used to punish other fair uses of technology, a group of nearly 70 vulnerability researchers and security firms said in a letter published on September 14. 

The letter — signed by computer scientists from the University of Michigan and Johns Hopkins University, as well as security firms Bugcrowd, HackerOne, and Trail of Bits, among others — is a response to a legal filing by e-voting firm Voatz in a case that could expand the definition of "exceeds authorized access" under the CFAA to include violations of user agreements and software licenses. While Voatz has participated in bug bounty programs granting participants legal protections, the firm has also reported a student researcher to state officials, dismissed serious vulnerabilities found by three researchers from the Massachusetts Institute of Technology, and downplayed a third-party audit of its entire system by security firm Trail of Bits that confirmed the MIT findings and uncovered even more critical vulnerabilities.

"Voatz's insinuation that the researchers broke the law despite having taken all precautions to act in good faith and respect legal boundaries shows why authorization for this research should not hinge on companies themselves acting in good faith," the security researchers say in the letter, referring specifically to the MIT case. "To companies like Voatz, coordinated vulnerability disclosure is a mechanism that shields the company from public scrutiny by allowing it to control the process of security research."

The letter took shape following a September 3 legal filing, known as an amicus or friend-of-the-court brief, in which Voatz argued that testing laboratories, security reviews, and bug bounties are all authorized forms of security testing and should be enough to guarantee security. Independent code reviews and penetration tests, the company claims, are not authorized, and the CFAA's "exceeds authorized access" language should apply to them.

Yet Voatz has aggressively targeted researchers even when its own bug bounty agreement allowed their actions, and has denied vulnerabilities in the face of ample proof, says Jack Cable, a 20-year-old independent security researcher who won the 'Hack the Air Force' competition held by HackerOne in 2017. Cable started the letter initiative because of the impact Voatz's argument would have on third-party research.

"They really seemed to have escalated their fight with the security industry," he says. "Rather than targeting a single, or a couple, security researchers, they are trying to compromise security research at a national level. I really saw that as unacceptable."

The original case that ended up at the US Supreme Court seemingly has little to do with election systems or even hacking. It originates in the prosecution of Nathan Van Buren, a police sergeant in Cumming, Georgia, who accessed the state records system to get information on a license plate in exchange for money. In addition to finding him guilty of honest-services wire fraud in May 2018, the court convicted him on a single charge of violating the CFAA for accessing state and government databases for an improper purpose.

If the US Supreme Court upholds his CFAA conviction, the precedent will empower a variety of technology firms that over the past two decades have tried to frighten off security researchers from disclosing vulnerabilities by threatening legal action, and in some cases, conscripting law enforcement by claiming unsanctioned access to be "unauthorized" under the CFAA.  

Election firms have used this strategy extensively. Election system maker ES&S threatened legal action after security issues were found at the DEFCON Voting Village, despite government officials supporting independent security research. Voatz has used the threat of legal action and criminal prosecution, along with aggressive marketing, to avoid scrutiny of its black box system, researchers stated in the letter.

Three researchers at MIT, for example, found that Voatz's system failed to meet essential requirements of voting, including a way to prove votes were cast as intended, a method of protecting voter privacy, and the ability to prevent voters from revealing how they voted. By reverse-engineering the Voatz Android application and creating their own server, the MIT researchers found a handful of high-severity vulnerabilities.

If Voatz's legal gambit pays off, such research will be much harder, if not impossible, to conduct in the future, says Michael Specter, a final-year PhD candidate focused on system security at the Massachusetts Institute of Technology and one of the authors of the research paper.

"I am incredibly privileged. I am a security researcher at a great university that has resources, including legal representation that helps guide me down the right path," he says. "Without the support we got from [our] cyberlaw clinic, this research would never have seen the light of day. I have no idea how someone without that support, without a legal team, would be able to get a positive resolution." 

Voatz has taken a more conciliatory approach since filing the brief, claiming that another friend-of-the-court brief issued by the Electronic Frontier Foundation and security firms on July 8 had inaccuracies that needed to be corrected. 

"We're not advocating to limit anyone's freedom – we're saying it's difficult to distinguish between good and bad faith attacks in the midst of a live election," the company said in a statement sent to Dark Reading. "For everyone's sake, it's better to work collaboratively with the organization as bad actors disguise themselves as good actors on a regular basis." 

Yet Voatz did not address its prior attempts to downplay the vulnerabilities uncovered by the MIT researchers, nor those found in an audit of the election software by Trail of Bits, a security consultancy hired by Voatz and Tusk Philanthropies, which promoted the Voatz system.

The verdict from the research: The company's code has major and possibly unfixable security issues, and the inclusion of blockchain technology, a hot marketing term, provides no real security benefit, says Dan Guido, co-founder and CEO of Trail of Bits, who had complete access to the source code of Voatz's Core Server. In a report published in March, the security company not only vindicated the MIT research, but also found a total of 79 issues, a third of which were high severity.

Despite Trail of Bits informing two of Voatz's executives that the MIT findings were valid, Voatz published a blog post in February attacking the researchers' efforts, Guido says. He decided to speak up about Voatz's tactics after seeing how the company ignored both the MIT findings and his own firm's report. In an interview, he roundly criticized the company, saying "they should not be in this industry," a marked difference from the tone of the report. "When I found out that Voatz was addressing these issues by attacking the people who were reporting the issues, and they are misrepresenting the code review in their Amicus brief, I had to speak up," Guido says. "I think Voatz is actively harmful to the security of elections and to other election vendors."

Trail of Bits is not the only company to part ways with Voatz. In March, HackerOne, a bug-bounty program services firm, dropped Voatz after the company withdrew its promise not to prosecute researchers, known in the industry as a "safe harbor" agreement, from its bug bounty program. It was the first time HackerOne had ever dropped a client.

If the US Supreme Court limits the application of the CFAA, that could end the cycle of legal threats by technology firms and naming-and-shaming tactics by security researchers that pop up every few years, says Alex Rice, chief technology officer of HackerOne. 

"The naming and shaming has helped us advance in vulnerability research, but I don't think this relationship between companies and security researchers could be said to be 'working well,'" he says. "On the other hand, failing to follow a company's restrictions to a 'T' should not be weaponized or criminalized to force compliance."

About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
