Malware attacks cost US companies an average of $2.6 million each, and that figure is rising, according to a 2019 report from Accenture Security and the Ponemon Institute. Part of the reason is the growing number of network blind spots: CISOs and security teams can't see into certain portions of the network, so malware that gets past perimeter defenses can sit undetected and wreak havoc. Hybrid network models make these blind spots worse: as applications move to a public cloud or companies roll out virtualization, the network grows more complex, visibility shrinks, and security monitoring becomes more difficult.
Fortunately, recent reports show this issue appears to be improving, with organizations managing to steadily decrease malware dwell time. The "2020 Data Breach Investigations Report" (DBIR) from Verizon found that over 60% of data breaches were discovered in days or less. That's an encouraging improvement from past years, but over a quarter of breaches still take months or more to be detected, so there is still more work to be done.
Yet at the same time, digital transformation projects and cloud-first or cloud-smart paradigms are proliferating, both of which complicate monitoring and visibility. If the security team doesn't keep up with the network's growing complexity, they risk losing recent gains.
Here's how CISOs and IT security operations teams can best address some of the key challenges to network monitoring that threaten to increase malware dwell time.
1. Visibility into east-west traffic
East-west traffic (that is, traffic within the data center) has grown over the last several years as applications have become multitier and more compute-intensive and networks have become more virtualized to support more virtual machines, driving up the number of transactions and exchanges flowing east-west. This is happening across many sectors, including financial services, service providers, and retail. The shift makes monitoring more difficult: where do you tap a network that has few physical connections and devices?
But getting access to this traffic is essential because it lets security tools detect unusual network behavior that can indicate a security breach. Access to east-west traffic reveals which IP addresses are talking to one another, when these connections take place, etc. This information lets analysts or behavioral-based security tools raise alerts to investigate and remediate unusual network events (either automatically or manually). For example, an unusual database access by an application or a large FTP download at 2 a.m. is an event that should be investigated. As businesses go virtual and cloud-first, having full access to all network traffic, including traffic within the data center, is vital to keeping them secure.
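As a toy illustration of this kind of behavioral check, the sketch below scans flow metadata for conversations outside a learned baseline and for large off-hours transfers like that 2 a.m. FTP download. All IP addresses, thresholds, and flow records here are hypothetical; a real deployment would feed this logic from network probes and flow collectors.

```python
from datetime import datetime

# Hypothetical flow records, as a metadata probe might emit them:
# (timestamp, src_ip, dst_ip, dst_port, bytes_transferred)
FLOWS = [
    (datetime(2020, 9, 1, 10, 15), "10.0.1.5", "10.0.2.8", 443, 12_000),
    (datetime(2020, 9, 1, 2, 3), "10.0.1.5", "10.0.3.9", 21, 4_000_000_000),
]

# Baseline of expected east-west conversations (src, dst, port),
# e.g. learned from historical flow data.
BASELINE = {("10.0.1.5", "10.0.2.8", 443)}

BUSINESS_HOURS = range(7, 20)          # 07:00-19:59
LARGE_TRANSFER_BYTES = 1_000_000_000   # flag transfers over ~1 GB


def flag_anomalies(flows, baseline):
    """Return flows that deviate from the baseline or look suspicious."""
    alerts = []
    for ts, src, dst, port, nbytes in flows:
        reasons = []
        if (src, dst, port) not in baseline:
            reasons.append("unknown conversation")
        if ts.hour not in BUSINESS_HOURS and nbytes > LARGE_TRANSFER_BYTES:
            reasons.append("large off-hours transfer")
        if reasons:
            alerts.append((src, dst, port, reasons))
    return alerts


for src, dst, port, reasons in flag_anomalies(FLOWS, BASELINE):
    print(f"ALERT {src} -> {dst}:{port}: {', '.join(reasons)}")
```

In this sketch, the normal midday HTTPS session passes silently, while the 2 a.m. FTP transfer triggers an alert on both counts; behavioral security tools apply the same idea with far richer baselines.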
2. Ability to capture and store network data for forensics
Having access to detailed packet and flow data from before, during, and after a security breach lets security analysts accurately determine the extent of the breach, analyze the damage, and figure out how to prevent a recurrence. Building such a bank of network data usually means gathering network metadata and packet data from the physical, virtual, and cloud-native elements of the network deployed across the data center, branch offices, and multicloud environments. That, in turn, requires a mix of physical and virtual network probes, packet brokers, and capture devices to gather and consolidate data from the various corners of the network and deliver it to the security tool stack. It's equally important that teams can capture and store the packet data surrounding an indicator of compromise for later forensic analysis. The easier it is to access, index, and make sense of this data, the more value it provides.
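A minimal sketch of the "store, then query around an indicator of compromise" idea, assuming packet metadata has already been extracted by probes and capture devices. The ForensicStore class and its records are illustrative, not a real product API; production systems archive full packets at scale with dedicated capture appliances.

```python
import bisect
from datetime import datetime, timedelta


class ForensicStore:
    """Toy time-indexed store for captured packet metadata."""

    def __init__(self):
        self._times = []    # sorted capture timestamps
        self._records = []  # metadata dicts, parallel to _times

    def ingest(self, ts, record):
        # Insert while keeping both lists sorted by timestamp.
        i = bisect.bisect(self._times, ts)
        self._times.insert(i, ts)
        self._records.insert(i, record)

    def window(self, ioc_time, before=timedelta(minutes=30),
               after=timedelta(minutes=30)):
        """Pull everything captured around an indicator of compromise."""
        lo = bisect.bisect_left(self._times, ioc_time - before)
        hi = bisect.bisect_right(self._times, ioc_time + after)
        return self._records[lo:hi]


store = ForensicStore()
store.ingest(datetime(2020, 9, 1, 1, 50),
             {"src": "10.0.1.5", "summary": "FTP login"})
store.ingest(datetime(2020, 9, 1, 2, 3),
             {"src": "10.0.1.5", "summary": "4 GB FTP transfer"})
store.ingest(datetime(2020, 9, 1, 9, 0),
             {"src": "10.0.2.8", "summary": "HTTPS session"})

# Indicator of compromise observed at 02:00 -- pull the surrounding traffic.
evidence = store.window(datetime(2020, 9, 1, 2, 0))
```

The point of the sketch is the access pattern: forensic value comes from being able to slice the archive by time around an event, which is why indexing matters as much as raw capture.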
While it's more complex and difficult to obtain this information from cloud-based or virtual segments of the network, it's essential for keeping organizations secure. The 2020 Verizon DBIR found that attacks targeting web applications were involved in 43% of breaches, more than double what they were in 2019. As more workflows move to the cloud, the attacks will follow — so monitoring and defenses need to do the same.
3. Reworking security policies for remote workers
Many knowledge workers are still working from home because of COVID-19, and that has significantly changed the security posture of most organizations. In the past, IT and security teams could base security policies on the assumption that most users accessed resources via the corporate network while on-site, with only a small number connecting remotely. Now that's flipped: most users reach applications in the cloud or in the data center over the public Internet. Companies have reacted by loosening security restrictions to accommodate the groundswell of remote access. That softens perimeter security and increases the need to quickly spot and mitigate any malware that sneaks through.
4. Getting visibility into the public cloud
Many organizations have moved applications to the public cloud to take advantage of its scalability and flexibility, but the move can come at the cost of visibility. Until recently, the major public cloud platforms were black boxes: it was possible to see traffic entering and leaving the cloud, but little of what happened inside. Without access to the network traffic within AWS, Google Cloud, or Azure, IT teams couldn't monitor for signs of a breach. Fortunately, that's changing, as some major cloud providers add features that mirror network traffic to and from a client's applications. A virtual packet broker can then forward that traffic to cloud-native security monitoring tools, and a feed can also be directed to a virtual packet capture device to archive the packet data in cloud storage for compliance and forensics.
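Mirrored cloud traffic typically arrives encapsulated; AWS VPC Traffic Mirroring, for instance, wraps each copied packet in a VXLAN header (RFC 7348) before delivering it to the mirror target. Below is a minimal sketch of unwrapping that 8-byte header to recover the inner frame for a monitoring tool; the sample datagram is synthetic.

```python
def parse_vxlan_header(datagram: bytes):
    """Strip the 8-byte VXLAN header (RFC 7348): 1 flags byte,
    3 reserved bytes, a 24-bit VNI, 1 reserved byte.
    Returns (vni, inner_frame)."""
    if len(datagram) < 8:
        raise ValueError("datagram too short for a VXLAN header")
    flags = datagram[0]
    if not flags & 0x08:  # the I flag marks the VNI field as valid
        raise ValueError("VNI not present")
    vni = int.from_bytes(datagram[4:7], "big")  # 24-bit network identifier
    return vni, datagram[8:]


# A synthetic datagram: flags=0x08, reserved bytes, VNI=42, then the
# inner Ethernet frame a mirror session would have copied.
sample = (bytes([0x08, 0, 0, 0]) + (42).to_bytes(3, "big") + b"\x00"
          + b"inner-ethernet-frame")
vni, inner = parse_vxlan_header(sample)
```

In a real pipeline this decapsulation is handled by the virtual packet broker or capture tool, and the VNI identifies which mirror session the packet came from.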
In summary, detecting and reducing malware dwell time in a hybrid environment requires access to full network traffic for all segments of the network, whether on-premises, within the data center, within the public cloud, or over remote worker connections. IT infrastructure and operations leadership should put network traffic intelligence on their priority list and set aside a portion of the security budget for proper network instrumentation.