Microsoft Refining Third-Party Driver Vetting Processes After Signing Malicious Rootkit

Rogue driver was distributed within gaming community in China, company says.

Microsoft is refining its policies and processes for certifying drivers through its Windows Hardware Compatibility Program (WHCP) after a recent incident in which the company appears to have inadvertently signed a malicious driver that was later distributed within gaming environments in China.

In a Microsoft Security Response Center (MSRC) blog post Friday, Microsoft said it was investigating the incident, in which an unnamed entity submitted drivers for certification through the WHCP. Microsoft did not explicitly confirm that it had signed, and therefore validated as trusted, at least one malicious driver. However, the company said it had suspended the account of the party that submitted the drivers and had reviewed that party's other submissions for malware.

"We have seen no evidence that the WHCP signing certificate was exposed," the company noted. "The infrastructure was not compromised." Microsoft did not offer any additional details on how the actor managed to slip the malicious driver past the company's security checks. A Microsoft spokesman declined further comment.

The incident is the latest example of what security experts describe as increased targeting of the software supply chain by cyber-threat actors. Since last December, when SolarWinds disclosed that its software build system had been compromised, supply chain security has gained attention not just within the industry but also in government circles. One sign of that concern is a provision in the executive order President Biden signed in May requiring federal civilian agencies to maintain trusted source-code supply chains and a comprehensive software bill of materials.

"I suppose the good news is that this exposure was a process failure at Microsoft that didn’t identify the driver as malicious before digitally signing it rather than a compromise of Microsoft's signing certificate itself," says Chris Clements, vice president of solutions architecture at Cerberus Sentinel. A compromise of the signing certificate would have allowed an attacker to sign as many drivers as they would want in a way that would be indistinguishable from Microsoft itself, he says.

Karsten Hahn, a security analyst at G Data, was the first to detect the malicious driver at Microsoft. According to Hahn, G Data's malware alerting system notified the company about a potential problem with a Microsoft-signed driver called Netfilter. Upon closer inspection, Hahn found the rootkit was redirecting traffic to IP addresses based in China. Independent malware researcher Johann Aydinbas, whom Hahn credited with contributing to the research around Netfilter, described the driver as being designed primarily for SSL eavesdropping, IP redirecting, and installing a root certificate to the system registry.
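The traffic-redirection behavior described above is the kind of signal a malware alerting system can screen for: destination IPs observed in network telemetry are checked against ranges already flagged as suspicious. The sketch below is purely illustrative; the CIDR ranges are documentation placeholders (TEST-NET blocks), not the actual Netfilter command-and-control addresses.

```python
import ipaddress

# Placeholder CIDR blocks standing in for flagged ranges; the real
# Netfilter infrastructure addresses are not reproduced here.
SUSPICIOUS_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3 (placeholder)
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2 (placeholder)
]

def is_suspicious(dest_ip: str) -> bool:
    """Return True if a destination IP falls inside any flagged range."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in SUSPICIOUS_RANGES)

# Screen a list of destination IPs observed in telemetry.
observed = ["93.184.216.34", "203.0.113.7", "198.51.100.25"]
flagged = [ip for ip in observed if is_suspicious(ip)]
print(flagged)  # ['203.0.113.7', '198.51.100.25']
```

Real detection pipelines combine many such signals, but range-matching on observed destinations is a common first-pass filter.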

Microsoft said the malware author's goal was to use the driver to spoof their geolocation so they could play games from anywhere. "The malware enables them to gain an advantage in games and possibly exploit other players by compromising their accounts through common tools like keyloggers," the company said. Microsoft has updated its Microsoft Defender antivirus product and distributed signatures against the threat to other security vendors.

Ilia Kolochenko, founder, CEO, and chief architect at ImmuniWeb, says the latest incident is a great example of why organizations need to shift to zero-trust security models where all software and external entities are considered untrusted and therefore diligently verified, tested, and continuously monitored. "Industry knows many similar incidents, for instance, when Android or iOS mobile apps are approved to be hosted at the official app stores but contain sophisticated malware, spyware, or undocumented features that violate privacy," Kolochenko says.

A similar situation exists with backdoored container images available in public repositories, like Docker Hub. "[Organizations should] consider all external code as potentially malicious," Kolochenko says, "and perform rigorous security and privacy testing prior to deploying it internally."
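One concrete piece of the verification Kolochenko describes is refusing to deploy an external artifact unless its digest matches a value pinned from a trusted source. The sketch below is a minimal illustration of that idea; the pinned digest and payload are made-up stand-ins, and in practice the pin would come from a signed manifest or the vendor's published checksums.

```python
import hashlib
import hmac

# Hypothetical pinned digest for a vendor-supplied package; in reality
# this value would be obtained out-of-band from a trusted channel.
PINNED_SHA256 = hashlib.sha256(b"example driver payload").hexdigest()

def verify_artifact(payload: bytes, pinned_digest: str) -> bool:
    """Treat the payload as untrusted: accept it only if its SHA-256
    matches the pinned digest (constant-time comparison)."""
    actual = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(actual, pinned_digest)

print(verify_artifact(b"example driver payload", PINNED_SHA256))   # True
print(verify_artifact(b"tampered driver payload", PINNED_SHA256))  # False
```

A hash pin alone does not prove the publisher's intent, as the Netfilter incident shows, which is why zero-trust models pair it with behavioral testing and continuous monitoring.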
