
Vulnerabilities / Threats

12/1/2017
10:30 AM

Deception: Why It's Not Just Another Honeypot

The technology has made huge strides in evolving from limited, static capabilities to adaptive, machine learning deception.

Deception — isn't that a honeypot? That's a frequently asked question when the topic of deception technology arises. The first part of this two-part post traces the origins of honeypots, the rationale behind them, and the factors that ultimately hampered their wide-scale adoption. The second part focuses on what makes up modern-day deception technology, how its application has evolved, and which features and functions are driving its adoption and global deployment.

Almost 15 years ago, Honeyd was introduced as the first commercially available honeypot, offering simple network emulation tools designed to detect attackers. The concept was intriguing but never gained much traction outside of research settings and organizations with highly skilled staff. The idea was to place a honeypot outside the network, wait for inbound network connections, and see whether an attacker would engage with the decoy.
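
To make that original decoy idea concrete, here is a minimal sketch of a low-interaction listener in Python. This is our illustration, not Honeyd's actual implementation: it sits on a port no legitimate user should ever touch, advertises a fake service banner, and logs every connection attempt as a detection event. The port, banner, and logging format are assumptions chosen purely for illustration.

    # Minimal illustrative low-interaction decoy (not Honeyd itself).
    # Any connection to this listener is suspicious by definition, because
    # no legitimate service or user should ever talk to it.
    import datetime
    import socket

    DECOY_ADDR = ("0.0.0.0", 2222)            # unused port posing as SSH
    FAKE_BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"  # static banner, easily fingerprinted

    def run_decoy():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(DECOY_ADDR)
        srv.listen(5)
        while True:
            conn, peer = srv.accept()
            # Every touch of the decoy is logged as a potential attack.
            print(f"{datetime.datetime.utcnow().isoformat()} decoy hit from {peer[0]}:{peer[1]}")
            try:
                conn.sendall(FAKE_BANNER)
                data = conn.recv(1024)   # capture whatever the attacker sends first
                if data:
                    print(f"  first payload: {data[:80]!r}")
            finally:
                conn.close()

    if __name__ == "__main__":
        run_decoy()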

Today's attackers are more sophisticated, better funded, and increasingly aggressive in their attacks. Human error will continue to produce mistakes for attackers to exploit. With breaches growing more severe, the public growing less patient, and regulations and fines emerging, in-network threat detection has become critical to every organization's security infrastructure. So much so that FBR Capital Markets forecasts that the deception technology market as a detection security control will grow to $3 billion by 2019, three times its size in 2016.

The systemic problem is that organizations are overly dependent on their prevention infrastructure, leaving a detection gap once an attacker is inside the network. In today's connected world, it is widely accepted in the industry that focusing only on keeping attackers out no longer works. The model is also flawed when applied to insiders, contractors, and suppliers who hold various forms of privileged access. Alternatively, solutions that rely on monitoring, pattern matching, and behavioral analysis are being used as a detection control, but they can be prone to false positives, making them complex and resource-intensive.

The concept of setting traps for attackers is re-emerging, given its efficiency and the advancements in deception technology that have removed the scalability, deployment, and operational issues that previously hampered the wide-scale adoption of honeypots. Consequently, companies across the financial, healthcare, technology, retail, energy, and government sectors are starting to turn to deception technology as part of their defense strategies.

Deception is still a fairly new technology, so it is not surprising that seasoned security professionals will ask, "Isn't deception just a honeypot or honeynet?" In fairness, if you consider that they are both built on trapping technology, they are similar. Both technologies were designed to confuse, mislead, and delay attackers by incorporating ambiguity and misdirecting their operations. But that is where the similarity ends.  

Deception's Evolution
Gene Spafford, a leading security industry expert and professor of computer science at Purdue University, introduced the concept of cyber deception in 1989, when he employed "active defenses" to identify attacks that were underway, slow down attackers, learn their techniques, and feed them fake data.

The next generation of advancements included low-interaction honeypots, such as Honeyd, built on limited service emulation. Their principal appeal was the ability to detect mass network scanning and automated attacks (malware, scripts, bots, scanners), track worms, and do so at a low purchase cost. However, honeypot adoption remained limited because of a number of constraints and the associated management complexity, such as the following:

  • Honeypots were designed to detect threats outside the network and were predominantly focused on general research rather than the more critical need for in-network detection.
  • Human attackers can easily figure out when a system is emulated, fingerprint it, and avoid detection (see the probe sketch after this list).
  • These systems are not high-interaction, which limits the attack information that can be collected and any value in improving incident response.
  • Attackers could abuse a compromised honeypot and use it as a pivot point to continue their attack.
  • Honeypots are not designed for scalability, are operationally intensive, and require skilled security professionals to operate.
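
To illustrate the fingerprinting weakness in the second bullet, the hypothetical probe below checks whether a "service" continues the SSH protocol after the banner exchange. A real SSH server answers the client's version string with binary key-exchange data; a banner-only emulation, like the decoy sketch earlier, typically goes silent or drops the connection, which is enough to give it away. The function name and heuristic are illustrative assumptions, not a universal test.

    # Illustrative fingerprinting probe for a banner-only decoy (hypothetical).
    import socket

    def looks_emulated(host, port=22, timeout=3.0):
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            banner = s.recv(256)                    # both real and fake servers send a banner
            s.sendall(b"SSH-2.0-OpenSSH_8.0\r\n")   # announce ourselves as an SSH client
            try:
                follow_up = s.recv(256)             # real server: binary key-exchange data
            except socket.timeout:
                follow_up = b""
        # No protocol continuation after the banner suggests a shallow emulation.
        return banner.startswith(b"SSH-") and follow_up == b""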

Deception technology has made monumental strides in evolving from limited, static capabilities to adaptive, machine learning deception that is designed for easy operationalization and scalability. Today's deception platforms are built on the pillars of authenticity/attractiveness, scalability, ease of operations, and integrations that accelerate incident response. Based on our own internal testing and on reports from others in the emerging deception market, deception is now so authentic that highly skilled red team penetration testers continually fall prey to deception decoys and planted credentials, further validating the technology's ability to detect and confuse highly skilled cyberattackers into revealing themselves.
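
As a rough illustration of the planted-credentials idea, the hypothetical sketch below seeds decoy account names that no legitimate process ever uses and flags any authentication event that references them. The account names, event format, and alert text are assumptions made for this example only; commercial deception platforms implement the concept far more elaborately.

    # Illustrative honeytoken check (account names and event format are hypothetical).
    # A decoy account is planted on endpoints but never used legitimately, so any
    # authentication attempt with it is treated as an intrusion signal.
    DECOY_ACCOUNTS = {"svc_backup_admin", "fileshare_ro"}

    def scan_auth_events(events):
        """events: iterable of dicts like {'user': ..., 'src_ip': ..., 'time': ...}"""
        alerts = []
        for ev in events:
            if ev.get("user") in DECOY_ACCOUNTS:
                # Someone harvested and replayed a planted credential.
                alerts.append(f"ALERT {ev['time']}: decoy credential '{ev['user']}' "
                              f"used from {ev['src_ip']}")
        return alerts

    # Example: a single logon attempt with a planted account is enough to raise an alert.
    print(scan_auth_events([
        {"user": "svc_backup_admin", "src_ip": "10.0.8.23", "time": "2017-12-01T10:30:00Z"},
    ]))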


Carolyn Crandall is the Chief Deception Officer and CMO at Attivo Networks, the leader in deception for cybersecurity threat detection. She is a high-impact technology executive with over 30 years of experience in building new markets and successful enterprise infrastructure ...
 
