Dark Reading is part of the Informa Tech Division of Informa PLC


Endpoint

5/13/2013 12:52 PM
Tim Rohrbaugh
Commentary

Use A Human Trust Model For Endpoints

Use anthropomorphic references to engage your brain and strengthen your approach to security

Have you ever used a feminine pronoun when talking about a boat? What about a computer program? Have you ever resented your computer after you felt it "intended" to lose your work? (I will refrain from linking to a YouTube video showing someone beating their office computer.)

People endowing inanimate objects with human characteristics is commonplace today. I believe it's also a useful approach when dealing with security design, controls, and analysis. Just as analogies and metaphors help the brain process new information, thinking of your endpoints as having human intentions (regardless of whether a real human is there at the moment) is a very useful aid because it engages the amygdalae, the two ancient almond-shaped regions of your brain.

She trusts me, she trusts me not, she ...

One human trust model takes three forms: Trust no one at any time, trust some of the people some of the time, and trust all of the people all of the time. It is best when designing your network to match up devices, applications, and people based on this trust model. Why? So you can focus your efforts in the most effective way when defining controls, processing logs, and correlating events. By "effective" I mean likely to improve your security posture.

Do you trust everyone and everything the same?

How about the computers used by your road warriors? What about the systems exposed to the Internet? How about security software? Vendor-hosted systems? No, you don't trust all of these the same. You create zones of trust. You and everyone around you, regardless of job role, are experts at risk analysis. Why? Because you have survived walking across busy streets. Now all you need to do is apply these evolved senses to modern-day technology challenges by training your brain and linking human traits to the entities that live on your network.

How does it work?

Start with a pattern like this:

  • Trust no one at any time (I don't trust you) = trust 3
  • Trust some of the people some of the time (I trust you, but will verify) = trust 2
  • Trust all of the people all of the time (I trust you) = trust 1

Note: There is also a trust 0 (something akin to subconscious trust) and a trust 4 (enemy). Remember: the higher the trust number, the less trust.

Next:

Group entities with the same trust number together; when mismatched entities commingle over time, classify (or reclassify) the pairing at the highest trust number between them. Focus your controls, log capture, and analysis on those you do not trust, then verify those you do trust with the leftover team hours.

Sample: You see traffic coming from the sources below. How will you trust them?

  • Web server in a demilitarized zone (DMZ) [trust 3] uses standalone accounts or ones that have no privileges [trust 3] in inner layers
  • Development workstation in development area [trust 2] with users who are not admins, but developers [trust 2]
  • Production database server with no outside access [trust 1] with no interactive users (no one logged on) [trust 1]
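The pairing rule and the three samples above can be sketched in a few lines of Python. This is a minimal illustration of the approach, not a real tool; the constant and function names are my own hypothetical labels.

```python
# Trust scale from the article: higher number = less trust.
# (Trust 0 is something akin to subconscious; trust 4 is enemy.)
TRUST_ALL = 1    # trust all of the people all of the time
TRUST_SOME = 2   # trust some of the people some of the time (verify)
TRUST_NONE = 3   # trust no one at any time

def classify_pair(trust_a: int, trust_b: int) -> int:
    """When two entities commingle, the pairing is classified at the
    highest (least-trusted) trust number between them."""
    return max(trust_a, trust_b)

# The three samples from the article -- matched pairs keep their level:
dmz_web = classify_pair(TRUST_NONE, TRUST_NONE)  # DMZ web server + unprivileged accounts -> 3
dev_ws = classify_pair(TRUST_SOME, TRUST_SOME)   # dev workstation + developer users -> 2
prod_db = classify_pair(TRUST_ALL, TRUST_ALL)    # locked-down production DB, no logons -> 1

# A mismatch drags the pairing down: a trusted production database
# touched by a DMZ web server gets reclassified to trust 3, so controls,
# log capture, and analysis should concentrate there.
mismatch = classify_pair(TRUST_ALL, TRUST_NONE)  # -> 3
```

The point of `max()` is simply that trust does not average out: one untrusted party in the exchange sets the trust level for the whole pairing.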

Have kings been toppled by their inner circle? Sure they have. But did those responsible commingle with the untrusted at some point? Did they transition to the inner circle through other levels of trust? Yes, they did. Do you feel like arguing this point, linking your argument to a historical event, and/or taking this approach (called profiling) personally? Then you get my point. It's hard to be passionate and accurate when dealing with "IP address X connecting to Y with a byte count of Z." So engage your amygdalae by endowing endpoints with the expectation of trust ... as you do people.

Tim Rohrbaugh, VP of Information Security for Intersections Inc. Tim Rohrbaugh is an information security practitioner who used military (COMSEC) experience to transition, in the mid-'90s, to supporting government Information Assurance (IA) projects. While splitting time between penetration testing and teaching at DISA, Mr. Rohrbaugh ...
