With so much automation in the code-writing process, results are rarely double-checked, which opens the door to vulnerabilities and downright danger.

Dr. Jethro Beekman, Technical Director

March 3, 2021

As the fallout from the SolarWinds hack broadens, we continue to learn more about just how it happened in the first place. Four malware strains have now been identified, one of them being Sunspot, which was installed on the SolarWinds build server that developers use to piece together software applications.

When it comes to software supply chains, code signing is a commonly used practice to indicate the provenance of software. In theory, the process validates the authenticity and integrity of the code. But as we all now know, that isn't always the case.

As it turns out, code signing is the very last step in what is often a convoluted process to get from original source code to finalized, packaged software. An attacker who can inject changes into a software build pipeline or continuous integration (CI) process, as was the case with SolarWinds, can make changes that end up in the signed final product, defeating the purpose of the signature altogether.
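To make that failure mode concrete, here is a minimal sketch in Python, using the open source cryptography package. The artifact bytes and the tampering step are invented for illustration; the point is that the signature verifies perfectly even though the malicious change landed before signing.

    # Minimal sketch: signing happens over whatever the build produced,
    # so a change injected *before* this step is faithfully signed.
    # Assumes the "cryptography" package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Build output -- in a real pipeline this comes off the build server.
    artifact = b"legitimate build output"

    # An attacker with access to the build pipeline tampers first...
    artifact = artifact.replace(b"legitimate", b"backdoored")

    # ...and only then does the vendor's release process sign it.
    signing_key = ed25519.Ed25519PrivateKey.generate()
    signature = signing_key.sign(artifact)

    # Customers verify against the vendor's public key: the check passes,
    # because the signature honestly covers the tampered bytes.
    signing_key.public_key().verify(signature, artifact)  # no exception raised
    print("signature valid -- but the artifact was compromised before signing")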

Many software vendors may not have thought to take great care in securing their software release pipelines, but these recent attacks have prompted more and more of them to take a deep look at how to do so effectively. What they need is a system that certifies that every step from source code to shipped software has been executed correctly.
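What might such a system look like? Below is a sketch under assumptions of my own: the step names, keys, and payloads are invented, and real frameworks such as in-toto do this far more completely. The idea is that each pipeline step signs a record binding the hash of its input to the hash of its output, so a verifier can replay the whole chain from source to binary.

    # Illustrative sketch: each build step signs a record linking the hash
    # of its input to the hash of its output, so the final artifact can be
    # traced step by step back to the source. Step names are hypothetical.
    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def attest(step_name, key, input_bytes, output_bytes):
        record = json.dumps({
            "step": step_name,
            "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
            "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
        }, sort_keys=True).encode()
        return record, key.sign(record)

    # Each step holds its own key; verifiers pin the matching public keys.
    checkout_key = ed25519.Ed25519PrivateKey.generate()
    compile_key = ed25519.Ed25519PrivateKey.generate()

    source = b"fn main() {}"
    binary = b"compiled output"  # stand-in for the real compiler output

    link1 = attest("checkout", checkout_key, b"git:abc123", source)
    link2 = attest("compile", compile_key, source, binary)

    # A verifier replays the chain: every record's signature must check out,
    # and each step's input hash must equal the previous step's output hash.
    for (record, sig), key in ((link1, checkout_key), (link2, compile_key)):
        key.public_key().verify(sig, record)  # raises InvalidSignature on tamper
    assert json.loads(link1[0])["output_sha256"] == json.loads(link2[0])["input_sha256"]
    print("every step from source to binary is accounted for")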

The Real Problem With Code Signing: Assuming It's Foolproof
The process of code signing isn't inherently bad. The problem is that it can (and often does) give people a false sense of security. The whole idea behind code signing is that it verifies that the code itself hasn't been modified by anyone who doesn't have the proper access. 

Much of the process is typically automated, and once it's all set up, people don't usually double-check it; it's just supposed to work. That's when vulnerability and downright danger can strike.

If a cybercriminal or anyone else with malicious intent manages to make a change before code signing takes place, everything will appear to work perfectly, and no one will dig deeper because everything is expected to function. In other words, code signing is designed to verify that the software supply chain is legitimate, but if you're signing something that is already wrong or has been tampered with, the signature doesn't matter.

Further, the size of a project tends to correlate with the amount of risk involved. If a mobile phone vendor is putting together a major release of an operating system such as Android, for example, there are so many components involved that no single person understands all of them. And more people involved means more risk.

Strengthening the Integrity of Software Supply Chains 
Fortunately, there are steps vendors can take to better protect their software supply chains. On the most basic level, they can compile and scour a list of all the code components they use in order to identify potential vulnerabilities. This sort of "code audit" via a software bill of materials (SBOM) can help eliminate security risks in the specific release the vendor is working on, as well as provide guidance on what to look out for in future releases.
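As a rough illustration, and assuming a toy component list rather than a real SBOM format such as SPDX or CycloneDX, such an audit boils down to checking every recorded component and version against known advisories. The components and the advisory entry below are made up:

    # Toy "code audit" over a software bill of materials. A real audit
    # would parse an SPDX or CycloneDX document and query a vulnerability
    # database; everything here is invented for illustration.
    sbom = [
        {"name": "libfoo", "version": "1.2.0"},
        {"name": "libbar", "version": "4.1.3"},
    ]
    known_vulnerable = {("libfoo", "1.2.0"): "CVE-XXXX-YYYY (hypothetical)"}

    for component in sbom:
        advisory = known_vulnerable.get((component["name"], component["version"]))
        if advisory:
            print(f"flag {component['name']} {component['version']}: {advisory}")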

In my eyes, an emerging solution that vendors should consider is confidential computing, which is already being used in security-focused industries: in healthcare to improve clinical AI algorithms, and in financial services to prevent fraud. It can demand a significant level of investment, particularly in the underlying infrastructure, but that shouldn't be an obstacle for most software vendors, considering that the release pipeline is one of the most critical pieces of their business.

As SolarWinds has shown, a software vendor's code becoming the source of a data breach at its customers can forever damage its reputation and customer relationships. With this precedent in place, enterprises will increase the scrutiny they apply to their software vendors' supply chains. App makers such as Signal are already using better privacy and security as a differentiator to encourage users to move away from WhatsApp.

The key to implementing confidential computing is a trusted execution environment that secures encryption keys within secure enclaves to protect them from external threats such as root users, a compromised network, rogue hardware devices, or, as was the case in the SolarWinds attack, advanced malware. 
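The mechanics vary by platform, but the shape of the idea fits in a few lines. The sketch below is conceptual, not any particular vendor's API; the class, the measurement value, and the attestation check are invented stand-ins for real remote attestation. The signing key exists only inside the enclave and is used only after the build environment measures as expected.

    # Conceptual sketch of enclave-gated release signing. The measurement
    # comparison stands in for real remote attestation; names and values
    # here are hypothetical.
    from cryptography.hazmat.primitives.asymmetric import ed25519

    EXPECTED_BUILD_MEASUREMENT = "9f86d08..."  # pinned hash of the trusted build image

    class EnclaveSigner:
        """Holds the signing key where root users and host malware can't
        reach it; signs only for attested build environments."""
        def __init__(self):
            self._key = ed25519.Ed25519PrivateKey.generate()  # never leaves the enclave

        def sign_release(self, artifact: bytes, build_measurement: str) -> bytes:
            if build_measurement != EXPECTED_BUILD_MEASUREMENT:
                raise PermissionError("build environment failed attestation; refusing to sign")
            return self._key.sign(artifact)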

A rule of thumb to live by, particularly for larger organizations, is to operate under the assumption that you've already been compromised. That assumption shouldn't go away as confidential computing becomes more widely adopted, but a breach will be far less damaging. I see this as the next logical evolution in automating and securing software releases, and vendors that take advantage now will future-proof themselves for years to come.

About the Author

Dr. Jethro Beekman

Technical Director

Dr. Jethro Beekman is a technical director working on next-generation cloud computing security at Fortanix. He received his M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 2014 and 2016, respectively. Before that, he received his B.Sc. degree in Electrical Engineering from the University of Twente, The Netherlands, in 2011. His current research interests include cloud security, secure enclaves, side-channel countermeasures, and network and hardware security.
