Despite an influx of best-in-breed security technologies, organizations around the world are seeing a continued rise in cyberattacks. The implications are significant. Financial consequences begin with the immediate costs of investigating the breach and extend longer term to lawsuits and regulatory fines. Loss of customer trust can translate into declines in business. Perhaps most damaging is the impact of shutting down entire systems, which can grind operations to a halt. This is especially dangerous when the target is a critical healthcare, government, or utility provider.
From the high-profile Equifax breach to payment compromises at hotel chains and retailers, security teams are increasingly under pressure not only to determine why this is happening but also what can be done to fix or prevent these problems. For many companies, getting "back to basics" could be one of the most effective weapons in the war on cyberattacks.
It's About the Fundamentals
Spending more time on maturing and measuring fundamental security controls might have helped prevent many of the breaches we've seen recently. For instance, Equifax was compromised through a Web application vulnerability that had an available patch, which the company failed to apply. Too often, companies underestimate basic security measures, instead prioritizing time and budget for the latest and greatest technology solutions.
Here are ways to stick to the basics of managing cyber-risk to better protect your company.
Gain Visibility Into Your Assets
Asset visibility is one of the most challenging aspects of security, especially when assets are dispersed. You can achieve greater visibility by leveraging these capabilities:
- Passive technologies that either live at the gateway or process log data are very effective at detecting when new devices come online; they can then trigger an active scan to gather more information and context about the device and its user.
- Active scanning technologies that constantly poll your network will discover when new devices come online and report those assets back to a system of record, where more information can be obtained from user directories. An informed decision can then be made about whether the devices need to be passed along to the vulnerability management team.
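As a rough illustration of the active-scanning approach above, the sketch below polls a subnet over TCP and diffs the live hosts against a system of record. The port, subnet, and inventory format are assumptions for the example, not a reference to any particular product.

```python
import ipaddress
import socket

def probe_host(ip, port=443, timeout=0.5):
    """Return True if the host accepts a TCP connection on `port`.
    A production scanner would use ARP/ICMP and many ports; this is a sketch."""
    try:
        with socket.create_connection((str(ip), port), timeout=timeout):
            return True
    except OSError:
        return False

def new_devices(live_hosts, inventory):
    """Diff scan results against the system of record: anything live but
    unknown should be enriched from user directories and reviewed."""
    return sorted(set(live_hosts) - set(inventory))

def sweep(subnet, inventory, port=443):
    """Poll every address in `subnet` and report hosts missing from inventory."""
    live = [str(ip) for ip in ipaddress.ip_network(subnet).hosts()
            if probe_host(ip, port)]
    return new_devices(live, inventory)
```

The key design point is separating discovery (`probe_host`) from reconciliation (`new_devices`), so the "is this asset known?" decision can be tested and audited independently of the scanning mechanism.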
Prioritize Vulnerability Management
Continuous assessments around known inventory can help lower the risk of exploitation. Many of the recent breaches resulting from the leaked Shadow Brokers' tool sets could have been avoided, but too many organizations have weak vulnerability management platforms that leave critical systems exposed. The crippling of the UK's National Health Service by the WannaCry ransomware attack, which targeted basic security weaknesses, was particularly egregious because of the direct impact on patient care.
A robust vulnerability management program can identify these issues so they can be patched, preventing them from being exploited. Some best practices include:
- Before even attempting a program, understand who is responsible for each functional area of IT so the proper groups can be alerted when a vulnerability is identified.
- Obtain buy-in from the system owners who will be affected, which typically includes those managing endpoints, servers, and non-user devices such as printers and video cameras.
- Have clearly defined next steps once a vulnerability is identified. Too often, companies recognize their vulnerabilities but have no action plan for patching, virtual patching, or other compensating controls.
- Patching servers and applications can inadvertently affect business-critical applications, resulting in system downtime, and comprehensive patch management can be time-consuming. Putting a strong development team in place can accelerate the patch process. Alternatively, virtual patching can identify an active exploit and stop it at another layer, whether in the OS itself or at a network function or gateway.
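The practices above can be sketched as a simple reconciliation loop: flag inventory entries running a release below the patched version and route each finding to the owner responsible for that functional area. The inventory schema, the owner mapping, and the `PATCHED` table are illustrative assumptions (the Struts entry reflects the release that fixed the flaw exploited at Equifax).

```python
# Hypothetical data: minimum safe releases and who owns each functional area.
PATCHED = {"struts": (2, 3, 32)}          # release fixing CVE-2017-5638
OWNERS = {"web": "appsec-team", "db": "dba-team"}

def parse_version(v):
    """Turn '2.3.31' into (2, 3, 31) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def findings(inventory):
    """Return (host, software, responsible_owner) for every unpatched system,
    so the right group is alerted rather than a generic queue."""
    out = []
    for host, meta in inventory.items():
        fixed = PATCHED.get(meta["software"])
        if fixed and parse_version(meta["version"]) < fixed:
            out.append((host, meta["software"], OWNERS.get(meta["role"], "it-ops")))
    return out
```

Routing by ownership is the point of the sketch: a finding with no accountable owner is exactly the "recognized vulnerability with no action plan" failure mode described above.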
Layer on Next-Gen Technology
With these baseline controls in place, next-generation threat prevention solutions such as anti-malware software, firewalls, and Web/email protections can be more successfully integrated into a company's architecture and associated operational structure.
This is also critical as security solutions become even more sophisticated, sometimes combining different technologies into one more powerful platform. For instance, next-gen endpoints are more advanced than traditional endpoints, with machine learning, artificial intelligence, integration, and open APIs. But weaving these features into an orchestrated operational model can add complexity for analysts and operators, and care should be taken to ensure the underlying manual concepts and skills are understood before these enhanced features are employed.
Master Manual Processes Before You Automate
Automating certain security controls can be extremely beneficial, helping analysts investigate and triage events more efficiently by examining multiple sources of records and providing context: the traffic, the user, the intellectual property on the device, and what it was doing before and after the event. But automation can also greatly increase risk if done too quickly. It provides the heavy lifting, but it will not make you an instant expert; you still need skilled people behind the orchestration and automation. It's much more effective and reliable to first create well-defined, tested manual processes and only then write the corresponding automation scripts and playbooks.
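One way to follow that advice is to codify each step of the manual runbook as a small, testable function before wiring anything into an orchestration platform. The alert schema, directory, and event-log shapes below are illustrative assumptions, not any vendor's API.

```python
def add_user_context(alert, directory):
    """Runbook step 1: look up which user owns the host named in the alert."""
    alert["user"] = directory.get(alert["host"], "unknown")
    return alert

def add_prior_activity(alert, event_log):
    """Runbook step 2: attach what the device was doing around the event."""
    alert["prior_events"] = [e for e in event_log if e["host"] == alert["host"]]
    return alert

def triage(alert, directory, event_log):
    """Run the documented steps in order. Only once each step has been
    exercised manually does this become a candidate for an automated playbook."""
    alert = add_user_context(alert, directory)
    alert = add_prior_activity(alert, event_log)
    return alert
```

Because each step stands alone, an analyst can validate the manual process one function at a time, and the eventual playbook simply sequences steps that are already known to work.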
While there's no guaranteed security solution, a company's ability to successfully reduce risk starts with building a solid security foundation. These baseline concepts are essential, and understanding the capabilities of technologies currently in place will help make operations more secure in the long term.