At what point does the proverbial “haystack” get too big to find the “needles”? Many of the best security teams have hit that breaking point. They simply have too many sources and event types to process. It’s impossible to manage information at this scale and accurately decide which events truly require action.
It is time to narrow the aperture of data collection in your environment. This will help remove as much “hay” from the “haystack” as possible and allow the “needles” to come to the surface. To do this, I recommend a five-step process.
Step 1: Consider Your Environment Contested Space
The banking industry has become quite good at fighting fraud by operating as if its customers’ home systems are contested. Banks and financial institutions have also placed advanced login analytics on their online banking sites and strongly encourage two-factor authentication for customers.
Enterprise IT teams can take this same approach to protecting data from compromised employee systems. In fact, most have even more control by ensuring their hosts are using hardened, updated operating systems and are following sound patch management processes. (That said, if an organization still has instances of Microsoft Windows XP in the user environment, it will live in a constant state of compromise.)
If you execute on this strategy, many (or all) of the host-level security events detected can fall to the cutting-room floor, because you already assume your user devices are compromised. This is a very effective strategy for companies supporting a mixture of BYOD, corporate-provided devices and Internet of Things (IoT) solutions.
Step 2: Ensure Proper Remote Access Authentication
Remote access is only safe through multifactor authentication. Period. No exceptions. Most successful advanced persistent threat (APT) attacks over the last four years have used this vector: once they obtain a remote user’s credentials (normally a username and password), threat actors pull back their tools and simply log in as a valid user with elevated privileges.
Some will argue that multifactor authentication is no longer effective; that it is merely a speed bump. I have reviewed every intrusion I’ve heard of in which multifactor authentication was allegedly compromised. In each case, the controls were misapplied, and the threat actors took advantage of the flawed implementation.
When properly implemented, multifactor authentication presents a significant challenge for attackers. It also narrows the “haystack”: instead of tracking all remote user login activity, teams can focus on specific events.
Step 3: Take Control of Elevated Privileges
Threat actors are compromising elevated privileges and creating accounts with admin rights at will. This, in turn, requires security teams to closely monitor login activity at critical points in the infrastructure. This generates an astounding number of events to assess.
For access to critical systems, all admin users should be required to log in via a proven method of multifactor authentication to a single “jump host” (e.g., a bastion host). From the jump host, admins should connect through a privileged access management (PAM) system that monitors and records all activity. This method also helps limit elevated access to only the amount of time the administrator needs to accomplish the task.
In short, we should eliminate any and all scenarios where elevated privileges are open-ended and unmanaged.
Step 4: Direct Traffic
Shape your network traffic to filter out as much known malicious traffic “on the wire” as you can without impacting business. This may be effectively achieved via an aggressive Internet protocol address reputation management (IPRM) program. Such an approach will help limit the amount of bad traffic — sometimes by as much as a factor of 10 — that layered security devices must inspect.
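A minimal sketch of this pre-filtering idea, assuming the reputation feed arrives as CIDR blocks (the networks below are documentation ranges, not a real feed, and the event shape is hypothetical):

```python
import ipaddress

# Hypothetical reputation feed: CIDR blocks flagged as known-bad sources.
BLOCKLIST = [ipaddress.ip_network(cidr)
             for cidr in ("203.0.113.0/24", "198.51.100.0/25")]


def is_flagged(src_ip: str) -> bool:
    """Return True if the source address falls inside any flagged network."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLOCKLIST)


def prefilter(events: list[dict]) -> list[dict]:
    """Drop events from flagged sources before deeper inspection layers."""
    return [e for e in events if not is_flagged(e["src_ip"])]
```

In practice this check runs at the edge (firewall, router ACL, or load balancer) with a regularly refreshed feed; the point is that layered inspection devices never see the traffic at all.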
Step 5: Learn from ‘Successful’ Events
No security posture is 100 percent impenetrable. But for events that do circumvent established controls, it’s critical to learn from the experience. By turning an eye toward network-layer events, we can better understand what’s successful against a given environment. Monitoring “traffic blocked” messages from the firewall provides little context and can serve to distract from real issues. Truly dissecting and studying successful events will serve organizations far better in the long run.
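The shift from blocked-traffic noise toward successful events can be sketched as a simple filter over firewall logs. The log format, field names, and the list of sensitive ports below are assumptions for illustration only:

```python
import csv
import io

# Hypothetical flattened firewall export: action, src_ip, dst_ip, dst_port.
LOG = """action,src_ip,dst_ip,dst_port
BLOCK,203.0.113.7,10.0.0.5,22
ALLOW,198.51.100.9,10.0.0.8,3306
ALLOW,192.0.2.44,10.0.0.5,443
"""

# Database listener ports worth a closer look when traffic gets through.
SENSITIVE_PORTS = {1433, 3306, 5432}


def successful_events(log_text: str) -> list[dict]:
    """Keep only traffic that was allowed through to a sensitive service."""
    rows = csv.DictReader(io.StringIO(log_text))
    return [r for r in rows
            if r["action"] == "ALLOW" and int(r["dst_port"]) in SENSITIVE_PORTS]
```

The BLOCK rows, however numerous, are discarded unread; the one ALLOW to a database port is exactly the kind of “successful” event worth dissecting.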
Unfortunately, many security departments expend too much time and energy managing alerts from their user base, remote access, elevated privilege use and network traffic. As a result, they have little time to focus on the most important events occurring on critical applications and databases, while the noise overloads security information and event management (SIEM) systems and masks real issues. Would a bank security-screen everyone entering the branch, then leave the vault door open with no one watching the money? Of course not. That’s why it’s critical we fine-tune our focus.
Black Hat Europe returns to the beautiful city of Amsterdam, Netherlands, November 12 & 13, 2015.