Do you know what happens to your data when it's not in use? If the answer is no, you need to fix that.
July 31, 2018
Source: Uplevel Security
When cyberattacks take place in enterprises, the resulting data lives in various silos: security information and event management (SIEM) systems, emails, ticketing systems, intel feeds, security devices, and more. Data flows in and out of these systems, and security teams react to it as best they can to address threats as they arise. But what happens to the data once it's not in use? Where does it live long term, and how can it be applied to future threats? Unifying data across the entire security architecture provides the intelligence and context necessary to activate data on demand and use it to identify and resolve persistent threats.
Consider the phishing email, the most common and pervasive attack vector, which leaves a trail of data throughout the security architecture. The 2017 Verizon Data Breach Investigations Report found that 90% of data breaches result from phishing or social engineering. A 2015 Intel report found that 97% of people around the world are unable to identify a sophisticated phishing email, and Symantec reports that an astounding one in 131 emails contains malware.
A typical phishing email is detected by an email security gateway and/or reported directly to the security team by a recipient. The data the gateway identifies is reported to and searchable in the SIEM, but it lacks much of the critical information contained in the email itself. The raw email, which provides that critical context, lives in a system outside those processing security alerts and is therefore not searchable in the SIEM. This makes the data difficult to correlate and forces a point-in-time analysis that requires knowing in advance what data to look for before it can be found. The analyst is left piecing together an incident with no way of knowing what he or she might be missing.
After a security analyst is done cobbling together the attack elements, the following questions remain:
Has there been related, unusual traffic?
Was the company compromised?
Did the attacker send other phishing emails in the past?
Is the attack an evolution of a previous attack?
Unifying security data helps answer all of these questions within a specific environment. To achieve unification, establish a dynamic data hub that captures all data flowing through the architecture. Once a hub is in place, historical data not only has a place to reside but can also be activated as new data is ingested. Security teams then have the ability to identify the secondary characteristics that distinguish a malicious instance from a false positive. For example, two similar emails from the same sender may both be flagged as malicious by existing alerting rules when only one actually is; unified historical data reveals the characteristics that separate them.
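The hub idea above can be illustrated with a toy sketch. This is not Uplevel's implementation; it is a minimal, hypothetical example in which every ingested event is indexed by its indicators (sender addresses, IPs, URLs), so that when new data arrives, matching historical records are "activated" and returned alongside it:

```python
from collections import defaultdict

class SecurityDataHub:
    """Toy unified store: every ingested event is indexed by its
    indicators so new data surfaces matching historical records."""

    def __init__(self):
        self.events = []
        self.by_indicator = defaultdict(list)  # indicator -> event positions

    def ingest(self, event):
        """Store an event; return historical events sharing any indicator."""
        related, seen = [], set()
        for ind in event["indicators"]:
            for pos in self.by_indicator[ind]:
                if pos not in seen:
                    seen.add(pos)
                    related.append(self.events[pos])
        pos = len(self.events)
        self.events.append(event)
        for ind in event["indicators"]:
            self.by_indicator[ind].append(pos)
        return related

hub = SecurityDataHub()
hub.ingest({"source": "email_gateway", "verdict": "benign",
            "indicators": {"sender:billing@example.com"}})
hub.ingest({"source": "siem", "verdict": "alert",
            "indicators": {"ip:203.0.113.7"}})
# A new analyst report from the same sender activates the earlier,
# previously "benign" gateway record for re-examination:
matches = hub.ingest({"source": "analyst_report", "verdict": "suspicious",
                      "indicators": {"sender:billing@example.com",
                                     "url:login-example.net"}})
print([m["source"] for m in matches])  # ['email_gateway']
```

A production hub would index far richer indicators and persist them, but the principle is the same: historical data sits in one queryable place rather than scattered across systems.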
Source: Uplevel Security
Alerting rules are refined based on the new indicators, making future alerts more useful. This reduces the amount of investigation needed, surfaces details that might otherwise go undetected, and allows security teams to focus on what matters: effectively and efficiently resolving the threat.
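Rule refinement can be sketched the same way. In this hypothetical example (the sender address and link domains are invented for illustration), an original rule fires on any mail from a known-abused sender, while the refined rule adds a secondary characteristic learned from comparing the malicious email with its benign lookalike: links pointing outside the sender's own domain.

```python
# Hypothetical alerting rules; data values are illustrative only.

def original_rule(email):
    # Coarse rule: flag everything from this sender (produces false positives).
    return email["sender"] == "billing@example.com"

def refined_rule(email):
    # Refined rule: also require a secondary indicator -- links whose
    # domains do not belong to the sender's domain.
    sender_domain = email["sender"].split("@")[1]
    return (email["sender"] == "billing@example.com"
            and any(not d.endswith(sender_domain)
                    for d in email["link_domains"]))

benign = {"sender": "billing@example.com",
          "link_domains": ["example.com"]}
malicious = {"sender": "billing@example.com",
             "link_domains": ["login-example.net"]}

print(original_rule(benign), original_rule(malicious))  # True True
print(refined_rule(benign), refined_rule(malicious))    # False True
```

The coarse rule alerts on both emails; the refined rule suppresses the false positive while still catching the malicious message, which is exactly the investigation-reducing effect described above.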
Despite the significant benefits of unifying data, many organizations struggle with achieving it in practice or think they have achieved it using standard technologies. Some rely too heavily on SIEMs and, in turn, adjust data ingestion and analysis based on a SIEM's capabilities. This results in reliance on static rules, vendor-specific correlation, and the elimination of data streams due to cost. Others try to piece together SIEMs, point solutions, and response platforms, but instead of creating a unified data architecture, this usually results in the scenario outlined above in which data related to the same threat ends up dispersed throughout multiple systems and must be manually pieced together.
If questions are continuously left unanswered at the end of a mitigation process, then it's time to take a serious look at how security data is being captured and applied to safeguard enterprises.
About the Author(s)
Co-founder, CEO & CTO, Uplevel Security
Liz Maida is instrumental in building and leading the company and its technology, which is founded on core elements of her graduate school research examining the application of graph theory to network interconnection. She was formerly a senior director at Akamai Technologies, where she served in multiple executive roles focused on technology strategy and new product development. She played a lead role in Akamai's initial efforts in DDoS mitigation, fraud detection, and mobile authentication, as well as in security products including Akamai's cloud-based web application firewall and an analytical engine that leveraged Akamai's visibility into almost 30% of Internet traffic to assess the security risk of end-user requests. Liz holds a Bachelor of Science in Engineering from Princeton University and dual master's degrees in Computer Science and Engineering Systems from the Massachusetts Institute of Technology.