Using Dependency Modeling For Better Risk Decisions
Q&A with Open Group executives who are evangelizing a new standard for dependency modeling to help with IT risk management and beyond
In the world of IT security, risk management requires decisions to be made based on a wide range of variables. The problem is that these variables are often so nested and interconnected that, without some rigorous planning, a flowchart of their dependencies could quickly come to look like an M.C. Escher drawing. Add in the real-time information flowing from the many devices that determine these variables at any given moment, and factoring everything in with any degree of discipline becomes quite a task.
Enter the concept of dependency modeling, a methodology for risk managers based on fault-tree analysis, first devised by Dr. John Gordon in the early 1990s with help from Chris Baker. At the time, says Baker, who now works as a principal information security consultant at Fujitsu Services, the analysis was done by a simple piece of standalone PC software.
"But there obviously was a need to be able to use networked connections to understand the status of the final dependency leaves of the tree that we were building," says Baker. "Technology wasn't really aligned at the time to look at what the actual status of something that you depend upon is at any time, 24/7."
That technology alignment is finally taking shape, and the standards experts at The Open Group are hoping to grease the skids with a new communications standard called the Open Group Dependency Modeling (O-DM) standard, which is designed to offer a level platform for how networked information is fed into a centralized analysis engine to conduct meaningful dependency modeling. Introduced in December, the standard will not only give IT risk managers a powerful toolset, but also support broader use cases in enterprise risk management, anti-fraud activities, and the insurance industry.
Dark Reading recently interviewed Jim Hietala, vice president of security for The Open Group, and Baker, a member of the consortium that helped develop O-DM, to get the scoop on the new standard and how it could change the way IT security risk managers do their jobs.
Dark Reading: Can you talk about The Open Group's motivations in going after this initiative?
Hietala: There are so many frameworks, many of which come without a whole heck of a lot of guidance around how you really take them and do something useful with them. This fits into an area that is of a lot of interest to us and to our members, so when there's opportunity to take work that's being done and bring it to a broader audience so that risk managers can benefit from it, that's something we're interested in doing.
Dark Reading: Can you discuss the IT risk management problems dependency modeling was meant to solve? How can risk managers use this standard to make better decisions?
Baker: Within IT security risk management, one of the real benefits that come from deploying a dependency model in your environment is that it demands that you start by asking, 'What are we trying to achieve? What are our objectives?'
In the case of IT security, you might adopt confidentiality, availability, and integrity, or other objectives around authentication and so on. And you will have systems that help you implement them. The risk management people within an organization won't generally want to sit there monitoring all of the equipment all of the time; there will be central operations people doing that. But risk managers will want to know about the status of alerts and so on.
The way that we enrich people's understanding of risk is first of all to hypothesize about what kinds of attacks we want to defend ourselves against, and also what systems we will use to defend against those attacks. The attacks will come from multiple sources at the same time and use different vehicles. But if we can monitor all of those things, we can understand, a) when the attacks are coming in, and b) the level of effectiveness of the countermeasures and controls: how well they are working, and whether they are disrupted by the attack.
This really is a way of centralizing and consolidating an awful lot of that information within the IT security framework, in order to help the people upstairs, if you like, understand what's going on, to automatically alert them to what's happening, and to allow them to prepare and defend.
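To make Baker's description a little more concrete, here is a minimal illustrative sketch, in Python, and not taken from the O-DM standard itself: top-level objectives such as confidentiality and availability depend on monitored controls, and the leaf statuses roll up to show how well each objective is currently supported. The node names, statuses, and the simple averaging rule are all hypothetical.

```python
# Minimal illustrative sketch (not the O-DM standard itself): a dependency
# tree where top-level objectives depend on monitored controls, and leaf
# statuses roll up to show how well each objective is currently supported.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    name: str
    # Leaf nodes carry a monitored status in [0.0, 1.0]; 1.0 = fully healthy.
    status: Optional[float] = None
    children: List["Node"] = field(default_factory=list)

    def health(self) -> float:
        """Roll leaf statuses up the tree; here a parent is only as healthy
        as the average of the controls it depends on."""
        if not self.children:
            return self.status if self.status is not None else 1.0
        return sum(child.health() for child in self.children) / len(self.children)


# Hypothetical objectives and controls, for illustration only.
confidentiality = Node("Confidentiality", children=[
    Node("Disk encryption", status=1.0),
    Node("Access control / authentication", status=0.6),  # degraded alert
])
availability = Node("Availability", children=[
    Node("Redundant network links", status=1.0),
    Node("Backup and restore", status=0.3),               # failing control
])

for objective in (confidentiality, availability):
    print(f"{objective.name}: {objective.health():.2f}")
```

In a live deployment, the leaf statuses would be fed by the networked monitoring systems Baker describes rather than hard-coded values.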
Dark Reading: How does this new standard interplay with other risk management frameworks and models?
Baker: One of the ideas behind dependency modeling is that we will take whatever approach is being used and condense it into a dependency modeling framework. Within IT security there are a half-dozen standards you might be pointing to, and in every case we will take those drivers and feed them into the models. In the end we really interface with those countermeasures and controls and those alerting systems. It overlays on top of that framework. Agnostic would be a good word to describe it.
Dark Reading: How does this dovetail with other Open Group standards and efforts?
Hietala: If we look at the context of how this fits with, for instance, the Risk Taxonomy standard, or a companion standard we're working on called the Risk Analysis standard, those tend to be more about analyzing the risk of a specific set of circumstances in a business -- say, analyzing the risk of lost laptops and what that means to the business. The dependency modeling standard, by contrast, is about looking at the dependencies that exist within or outside of an organization.
Dark Reading: What has uptake been like for the standard in its first few months of existence?
Baker: We're currently talking to three different organizations about how they will exploit [the standard]. One customer is looking at 1,000 branches, and all of the building monitoring systems within those branches, in order to understand where the critical alerts are coming from. The customer came to us basically saying, 'OK, I've got 1,000 branches, and I've got 20-odd systems in every branch, and that's a big number. I'm getting red, amber, and green [alerts] from all of them. Every morning I come in and I have about 1,500 reds. All I want to know is, which of these reds are the most red?'
We're working with an organization that already provides that data within medium-sized environments to enable them to use our calculation engine and the open dependency modeling standard to communicate, capture that, and then give a prioritized list of outputs.
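As a rough illustration of the kind of prioritization Baker describes -- not the actual calculation engine or the O-DM data format -- the sketch below weights each alert's severity by the criticality of what depends on the affected system and sorts the result, so the "most red" reds surface first. All branch names, systems, and weights are invented.

```python
# Illustrative sketch only: one way a calculation engine might rank which
# "red" alerts matter most, by weighting each alert's severity with the
# criticality of the business functions that depend on the affected system.
# All names, weights, and data below are hypothetical.

SEVERITY = {"green": 0.0, "amber": 0.5, "red": 1.0}

# (branch, system, alert colour, criticality of what depends on that system)
alerts = [
    ("Branch 0412", "HVAC monitoring",     "red",   0.2),
    ("Branch 0087", "Door access control", "red",   0.9),
    ("Branch 0233", "CCTV recorder",       "amber", 0.6),
    ("Branch 0087", "Fire suppression",    "red",   1.0),
]

def priority(alert):
    _, _, colour, criticality = alert
    return SEVERITY[colour] * criticality

# "Which of these reds are the most red?" Highest weighted impact first.
for branch, system, colour, crit in sorted(alerts, key=priority, reverse=True):
    print(f"{branch:<12} {system:<22} {colour:<6} score={SEVERITY[colour] * crit:.2f}")
```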
Dark Reading: Are there any other IT security-related applications you're hoping people will use the standard for?
Baker: One way we've used it in the past was looking at everything around compliance with ISO 27001, using it almost as an audit tool. The model took the standard section by section and showed the customer the most effective things they could do to be as compliant as they needed to be with the standard, given the risks that they faced.
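A hedged sketch of how that audit-style use might look -- with hypothetical sections, scores, and weights, and not how the O-DM engine actually works -- is to score each ISO 27001 section's current compliance, weight the gap by the risk that section addresses, and rank the sections by where remediation would buy the most.

```python
# Illustrative sketch only (hypothetical sections, scores, and weights):
# treat each ISO 27001 section as a node whose compliance gap, weighted by
# the risk it addresses, suggests where remediation effort buys the most.

sections = [
    # (section, current compliance 0..1, weight of the risks it addresses)
    ("A.5  Information security policies", 0.9, 0.4),
    ("A.8  Asset management",              0.5, 0.7),
    ("A.12 Operations security",           0.4, 0.9),
]

def remediation_value(section):
    _, compliance, risk_weight = section
    return (1.0 - compliance) * risk_weight  # gap weighted by the risk it covers

for name, compliance, weight in sorted(sections, key=remediation_value, reverse=True):
    gap_value = (1.0 - compliance) * weight
    print(f"{name:<40} remediation value = {gap_value:.2f}")
```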
[Another use case] will be around companies that provide a security service to their customers, offering a way to give the customer a completely transparent view of how that service is working on a day-by-day, hour-by-hour basis, and ensuring that the customer understands the risks they have, so that it's a shared understanding, which is really quite a dramatic step forward for most suppliers.
Dark Reading: Supply-chain risks have been an increasing worry for IT security folks. Can you talk a little more about how this standard can help address those issues within IT?
Baker: This kind of requires a supplier and customer attitude change. Within any individual company you may want to monitor your own services, your own systems, your own staff, your own whatever from an IT security point of view. If you're supplying a service to another customer, you might be brave enough and bold enough to share that information with your customer because it's your customer's systems you're probably looking after.
That works in a number of different ways within outsourcing environments, where I'm going to take your systems and put them into our data center so that they're going to be our systems now. But in the end they're still yours. And you want to know if they fall over; you want to know if they're compromised in some way. So being able to share that information using the open standard way of communicating is fundamental, and once you get a distributed network or a global network, adhering to the standard enables that to happen. So within supply chains, the ideal is to have the supplier and the customer both sharing the same information, both having the same view, and trying to get away from this, frankly, deliberate obfuscation of what the real cause of the problem was.
You've already contracted; you've shared responsibility for risk. The blame game doesn't help either side in finding out what happened, getting it amended, and repairing whatever damage there was.
Dark Reading: How long do you think it will take to gain momentum with this standard?
Baker: We'll have our first proofs of concept within three months. I'm expecting a big ramp-up in take-up within six months from then.