Increasing complexity is putting networks at risk. It's time to shift our security approach and take some lessons from software development.

Brighten Godfrey, Co-founder and CTO, Veriflow

May 31, 2017

5 Min Read

In 2015, the US government disclosed breaches at the US Office of Personnel Management (OPM) that exposed sensitive records of more than 22 million people in what was potentially the biggest "doxing" in history.

What vulnerability did the adversary exploit? One step involved endpoint malware — no surprise given that modern enterprises are exposed via thousands of weaknesses in software running on their endpoints, from users' laptops to Internet of Things (IoT) devices to database servers.

But sophisticated attacks typically involve multiple steps. One of the keys to the breadth of the OPM incident was that after an initial compromise, the adversary apparently was able to gain unfettered access to a full data center.

In other words, even though the word "vulnerability" typically brings to mind endpoint software weaknesses, endpoints are not the only vulnerable system. What if the vulnerability is in the network itself, such as a weakness in network segmentation or microsegmentation that should quarantine parts of the network but instead exposes assets to attack?

That's becoming more likely because of increasing complexity. In a simpler time, the network's job was done if a packet went in one end of the metaphorical tube and came out the other end unharmed. Today, network infrastructure is dramatically more sophisticated. A large enterprise might have tens of thousands of routers, switches, firewalls, load balancers, application delivery controllers (ADCs), and other gear. Access control rules and policies might number in the thousands or even hundreds of thousands in very large enterprises. These devices and their configurations are often orchestrated in part manually and in part through automated configuration management software or homegrown scripts. Now there are new layers of virtualization on top of the physical infrastructure — in private and public clouds and hybrids of the two — and new layers of commercial software automation.

All this is to say that we need to rethink how we conceive of an enterprise network. It is no longer a collection of individual boxes from vendors. Today, the network is a single large distributed system of software and hardware, crafted and composed by engineers within the enterprise.

It's useful to think about the network as one system because its components are intended to work together to achieve end-to-end goals: providing resilience to keep services highly available, and ensuring security to protect services and data.

And just like the software systems on endpoints, the complexity of that network system means it may have vulnerabilities, too.

Individual network devices can have serious vulnerabilities, like the backdoor discovered in Juniper NetScreen firewalls in December 2015. But even if each device individually is secure, the network system may still have a weakness. In particular, numerous data breaches, like the OPM's, have been enabled by the network allowing too much connectivity. That lets attackers move laterally through the network, expanding from an initial point of compromise to breach increasingly valuable assets. This is a vulnerability in network segmentation.
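
To make that concrete, here is a minimal sketch, in Python, of how one over-broad rule turns a single foothold into wide exposure. The zone names and allowed connections are invented for illustration, not taken from the OPM incident; the point is simply that lateral movement is a reachability question.

from collections import deque

# Hypothetical segmentation model: which zones the network allows to talk to which.
# One over-broad rule ("user-lan" -> "datacenter") exposes everything behind it.
ALLOWED = {
    ("user-lan", "datacenter"),
    ("datacenter", "hr-database"),
    ("datacenter", "backup-storage"),
    ("dmz", "user-lan"),
}

def reachable_from(start, allowed):
    # Breadth-first walk over allowed connections: everything an attacker
    # could pivot to from an initial point of compromise.
    graph = {}
    for src, dst in allowed:
        graph.setdefault(src, set()).add(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(reachable_from("user-lan", ALLOWED))
# One compromised user zone reaches: {'datacenter', 'hr-database', 'backup-storage'}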

There are two reasons the industry would do well to quickly shift its thinking to see the network as effectively a distributed system, complete with all the vulnerabilities that entails.

First, the risk of accidentally introducing a vulnerability is high. In the complex environment of an enterprise network, segmentation may be implemented with a combination of many devices and protocols, from explicit controls in firewalls or software-defined overlays to the sometimes-implicit (and often poorly documented) use of virtual LANs, virtual routing and forwarding (VRF) instances, Layer 3 routing protocol configurations, and beyond. This is in part because enterprise networks have often grown organically, with security treated as a secondary consideration. The result is that segmentation is difficult both to implement initially and to keep intact as the network changes over time.
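
A toy example of how that drift happens: suppose each segmentation intent is enforced by whichever mechanisms happen to implement it today. The intents and mechanism names below are hypothetical, but they show how a routine change that removes one implicit mechanism can leave an intent with nothing enforcing it.

# Hypothetical mapping from segmentation intent to the mechanisms enforcing it.
INTENTS = {
    "pci-zone isolated from corp-lan": ["fw-edge ACL 110", "VRF PCI"],
    "guest-wifi isolated from datacenter": ["VLAN 300"],  # implicit, undocumented
}

def unenforced_after_removing(intents, removed):
    # Intents left with no enforcing mechanism once `removed` is decommissioned.
    at_risk = []
    for intent, mechanisms in intents.items():
        remaining = [m for m in mechanisms if m != removed]
        if not remaining:
            at_risk.append(intent)
    return at_risk

# A VLAN cleanup that looks harmless in isolation:
print(unenforced_after_removing(INTENTS, "VLAN 300"))
# ['guest-wifi isolated from datacenter'] -- now enforced by nothing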

Second, we need secure networks more than ever, because endpoints have become less trustworthy. That's true in public and private clouds, where any two applications or tenants might be hosted on the same hardware. It's also true of the increasingly connected "things" in our homes and enterprises; indeed, the largest known denial-of-service attack, clocking in at 1.2 Tbps, was launched from the IoT-focused Mirai botnet in October 2016. As we trust endpoints less, the network can and should offer better protection.

Network engineers recognize the stakes are high. A 2016 survey found that security was a key concern for 80% of respondents' networking teams, and after the breach, the OPM moved to improve its network segmentation. But identifying vulnerabilities in an enterprise network's configuration, such as a flaw in segmentation, isn't always easy. No device vendor will issue a patch, US-CERT won't issue an alert, and traditional vulnerability scanners that monitor only endpoints won't discover the problem.

In a sense, many enterprises have "zero-day" vulnerabilities specific to their own network that may be exploited at any moment.

That's one reason that I've predicted an increasing need for advanced analytics and verification of the network. Solutions have now emerged to help enterprises determine whether their business intent, including security policy, aligns with the reality of the network. For example, in line with thinking of the network as similar to a large software system, enterprises can adopt strategies from software development, such as continuous integration, to continually validate the network's security and correctness as the network is modified.
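
As a rough sketch of what that continuous-integration style of checking could look like, consider a gate that runs on every proposed change: the declared segmentation intent is compared against a reachability model computed from the candidate configuration, and the change is rejected if any forbidden path opens up. The policy format and data below are hypothetical placeholders, not any particular product's API.

# Declared intent: zone pairs that must never be able to reach one another.
FORBIDDEN = [
    ("guest-wifi", "hr-database"),
    ("user-lan", "backup-storage"),
]

def violations(forbidden_pairs, reachability):
    # `reachability` maps each zone to the set of zones it can reach,
    # as computed from the proposed device configurations.
    return [
        (src, dst) for src, dst in forbidden_pairs
        if dst in reachability.get(src, set())
    ]

# Placeholder reachability model for a proposed change; in practice this
# would come from a network modeling or verification tool.
proposed = {
    "guest-wifi": {"internet"},
    "user-lan": {"datacenter", "backup-storage"},  # the change opened this path
}

found = violations(FORBIDDEN, proposed)
if found:
    raise SystemExit(f"Change rejected; segmentation intent violated: {found}")
print("Change passes segmentation checks")

Run as a step in the same pipeline that pushes configuration changes, a check along these lines treats the security policy as a test suite that every network change must pass.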

As the network becomes increasingly like a large software system, there will be changes in skill sets, processes, and risks — including rethinking how we approach vulnerabilities, moving our viewpoint beyond just the endpoint to the network itself. That perspective recognizes that we can leverage the network that we already have as one of our most important assets, providing critical infrastructure that protects the whole enterprise.


About the Author(s)

Brighten Godfrey

Co-founder and CTO, Veriflow

Brighten Godfrey is Co-Founder and Chief Technology Officer of Veriflow. Brighten has conducted research in networked systems and algorithms for more than a decade. Brighten has co-authored over 50 scientific publications, and his work has developed novel architectures and systems for Internet routing, data center networking, high-performance data transport, and network data plane verification, as well as advancing theoretical analysis of network algorithms. Brighten has received multiple awards, including the ACM SIGCOMM Rising Star Award, the Alfred P. Sloan Research Fellowship, the UIUC Dean's Award for Excellence in Research, the National Science Foundation CAREER Award and the Internet2 Innovative Application Award, in addition to several best paper awards. He was a Beckman Fellow at the UIUC Center for Advanced Study in 2014-2015, and has served as program committee chair of several academic conferences. Brighten holds a Ph.D. in Computer Science from the University of California, Berkeley.
