We've heard a lot about the fallout from last year's Shadow Brokers bombshells, particularly when it comes to previously unreleased exploits put into the wild. Those exploits include EternalBlue, which was weaponized into the WannaCry ransomware that wreaked havoc on the Internet.
However, the most damaging element wasn't necessarily the actual exploits. Yes, there has been an incredible amount of harm done, but it only foreshadows the real damage control the industry will be doing in the coming months and years.
Included in the information leaked by the Shadow Brokers, but not as widely understood, was a trove of information on the intelligence community's methodology for finding vulnerabilities and building exploits. Now that this has been exposed, security pros had better be prepared to weather continuous attacks by zero-day exploits against any and all applications and platforms.
In the next 12 to 36 months, we're going to see hackers using these techniques to build the next generation of attacks. There will be perpetual storms of malware, click-free attacks, perfect lures — and much of it will be untraceable, with exploits becoming unique and essentially disposable.
Powered by big data, machine learning, and natural language processing engines, we'll see phishes and false websites that will be nearly indistinguishable from the real things. There won't just be new worms, but the equivalent of Web drive-by attacks extended to major services and even mobile platforms.
These attacks will be launched from command and control (C&C) networks that were never seen before and will never be seen again. Expect malware that is constantly evolving and never reused, or that spawns so many new variants that traditional antivirus software can't keep up.
As such, domain analysis for malware C&C networks will become an obsolete art. IP reputation filters will be useless. Information from intelligence providers will become less valuable.
Get Your Head out of the Gear and into the Analytics
To defend against this dizzying new reality, we need to produce new types of logs that are more behavior-based, and do a better job of using automation to analyze and correlate outputs across the entire stack. Security professionals need to be thinking about how they're building a fully automated analytical loop instead of what a specific device is detecting.
Essentially, today's gear represents a collection and input mechanism. It's a collector and an actuator. It's both an earphone and a switch. To defend against the attacks of tomorrow, we need to extend the power of that collection effort, and close the loop with machine learning logic that can correlate, corroborate, and take action appropriate to the server or device in question.
To do this, companies must first establish methods for collecting information from all layers. The way to defend goes back to TCP/IP and the Open Systems Interconnection (OSI) model, from the physical layer to the application layer and everything in between.
Improve Your Ability to Detect Anomalies and Close the Loop
There are multiple places to detect anomalies — end-user machines, network gear, firewalls and application-aware firewalls, servers. Output feeds about the behavior of the full stack on these devices must be collected from all these physical locations.
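As a rough sketch of what such a collection layer might look like, the snippet below wraps feeds from different collection points in one common envelope so they can be merged into a single time-ordered stream. The event fields and feed names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
import time

@dataclass
class BehaviorEvent:
    """Common envelope for behavior feeds from any collection point."""
    source: str   # e.g. "endpoint", "netflow", "app-firewall"
    layer: str    # OSI layer the observation belongs to
    host: str     # device the event was observed on
    detail: dict  # raw attributes reported by the collector
    ts: float = field(default_factory=time.time)

def collect(feeds):
    """Merge per-device feeds into one time-ordered stream."""
    events = [e for feed in feeds for e in feed]
    return sorted(events, key=lambda e: e.ts)

# Hypothetical feeds from an endpoint agent and a network sensor
endpoint_feed = [BehaviorEvent("endpoint", "application", "pc-17",
                               {"proc": "winword.exe", "spawned": "powershell.exe"}, ts=1.0)]
network_feed = [BehaviorEvent("netflow", "transport", "pc-17",
                              {"dst": "203.0.113.9", "port": 4444}, ts=2.0)]

stream = collect([endpoint_feed, network_feed])
```

The point of the shared envelope is that downstream analysis never needs to know which physical box produced an observation, only which host and layer it describes.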
After establishing a collection methodology, it's time to get better at identifying anomalies, with the idea of creating an engine that will know the markers across all layers and devices.
Once those anomalies are understood, they must be fed into some kind of analytical system to be correlated. This allows for corroboration of what's happening at the different layers, enabling more assured detection.
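One simple way to express that corroboration idea in code: require anomaly markers at two or more independent layers on the same host within a short time window before treating a detection as confirmed. This is a minimal sketch with invented thresholds, not a production detection engine.

```python
from collections import defaultdict

def corroborate(anomalies, window=60.0, min_layers=2):
    """Confirm hosts whose anomalies appear at >= min_layers
    distinct layers within `window` seconds of each other."""
    by_host = defaultdict(list)
    for a in anomalies:
        by_host[a["host"]].append(a)

    confirmed = []
    for host, items in by_host.items():
        items.sort(key=lambda a: a["ts"])
        for a in items:
            layers = {b["layer"] for b in items
                      if abs(b["ts"] - a["ts"]) <= window}
            if len(layers) >= min_layers:
                confirmed.append(host)
                break
    return confirmed

anoms = [
    {"host": "pc-17", "layer": "application", "ts": 10.0},
    {"host": "pc-17", "layer": "network",     "ts": 25.0},
    {"host": "srv-2", "layer": "network",     "ts": 30.0},
]
# pc-17 shows markers at two layers inside the window; srv-2 at only one
```

A single-layer marker (srv-2) stays unconfirmed, while agreement across layers (pc-17) yields the more assured detection described above.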
Then there should be logic that looks at what the anomaly is and where the activity can best be halted with the least impact on operational processing. At this point, the system can determine where to stop the traffic. And the answer may be: multiple places.
In other words, we're going to get to a point where the system doesn't just automatically detect something and corroborate it but goes beyond that to determine the best place to stop it and take action to close the loop.
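The "where to act" decision might be modeled as a policy over candidate control points, each scored by how disruptive it is to operations. The control points and impact scores below are invented for illustration; note that more than one point can be selected, matching the "multiple places" idea above.

```python
# Hypothetical control points with assumed operational-impact scores
# (lower = less disruptive). Real scores would come from the
# organization's own assessment of each enforcement point.
CONTROL_POINTS = {
    "edge-firewall":  {"acts_on": "network flows",  "impact": 1},
    "app-firewall":   {"acts_on": "sessions",       "impact": 2},
    "endpoint-agent": {"acts_on": "processes",      "impact": 3},
}

def choose_actions(control_points, max_impact=2):
    """Select every control point whose operational impact is
    acceptable; blocking may happen in several places at once."""
    return [name for name, cp in control_points.items()
            if cp["impact"] <= max_impact]

actions = choose_actions(CONTROL_POINTS)
```

Closing the loop then means dispatching a block or quarantine command to each selected point automatically, rather than paging an analyst to do it by hand.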
Don't Be Afraid to Get Crafty
Yes, there's machine learning embedded in this process. Is it something you can build? Yes. Something you can buy? Not always. Notice that I called for the feeds to go to an analytical system, not a SIEM. The concept is not fully fleshed out in the industry, though a lot of players are working on it, especially in the SIEM market. The next generation of security analysis capabilities will clean all the disparate inputs, normalize them so that they can be used for analysis, and allow us to compare information from all sources.
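The normalization step is the least glamorous but most necessary piece. Below is a minimal sketch, assuming two invented raw feed formats (a syslog-style line and a JSON API feed), each mapped into one flat record shape so sources can be compared side by side.

```python
import json

def normalize_syslog(line):
    """Map a 'host message...' syslog-style line to the common shape."""
    host, _, msg = line.partition(" ")
    return {"host": host, "kind": "syslog", "msg": msg}

def normalize_json_feed(raw):
    """Map a JSON API feed entry to the same common shape.
    The 'device'/'event' keys are assumptions for this sketch."""
    d = json.loads(raw)
    return {"host": d["device"], "kind": "api", "msg": d["event"]}

records = [
    normalize_syslog("pc-17 failed login for admin"),
    normalize_json_feed('{"device": "fw-1", "event": "policy drop"}'),
]
```

Once every source lands in the same shape, correlation queries no longer need per-vendor parsing logic, which is exactly what lets the analysis loop run automatically.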
In the end, this process will combine and automate network and application security to an extent most organizations haven't experienced before. But to do so, companies will have to get really knowledgeable about what is happening in their network and what the blind spots are. We have to ask questions such as: Do I have anything that gives me visibility into what is happening at the session layer on a PC? To what extent do I have the stack in my PCs, servers, and network covered?
All this is imperative today. In this new era, companies that rely on block lists, human analysis, or end users being able to spot phishing emails are going to be completely exposed.
And the pace is about to change due to our adversaries' ability to generate new attacks automatically. How do we fight off more automation on the bad guys' part? The answer is a massive push into automation on our part, along with clearer and speedier corroboration of data.