Initech, Inc.

September 21, 2008

As part of my goal of posting some risk scenarios and accompanying assessments on the blog, I went ahead and posted a profile of a company (and one of its subsidiaries) over on the “Initech, Inc” page. Instead of writing background and “given” information for each and every risk scenario, doing it once will save a lot of time.

This approach is also important because it underscores the importance of analyzing risk elements within the context of the organization that faces the exposure. Company X may have a strong security posture while Company Y has a weak one. Thus, a threat agent may be able to come in contact with, take action against, and overcome Company Y’s security controls but not be successful against Company X. It would not be reasonable for Company X’s information security risk assessors to assume that since Company Y was impacted by a risk scenario, Company X is equally vulnerable.

So, take a look at the “Initech, Inc.” page, have a good chuckle, and stay tuned for some upcoming risk scenarios, assessments, and interesting dialogue.


Risk Ostrich

September 19, 2008

The recent “Midwest wind storm” combined with some crazy work activities has hindered my ability to get some blog postings in. I took a few minutes this morning to quickly peruse some blogs and stumbled across this posting over at securosis.

I think it is pretty irresponsible for someone to pooh-pooh an emerging discipline in our profession by comparing it to financial risk management. The motive for quantifying information security risk is to allow for better decision making and an understanding of the cost of risk to an organization – not to make a profit. More on this in a future posting.

We all know that ostriches appear to bury their heads in the sand. However, it is apparently a myth that they do it because they are scared. They bury their eggs in the dirt or in a hole, and once in a while they stick their heads in there to check up on the eggs or tend to them.

So, to the blog post author: while you have your head under the dirt checking up on your investment eggs, take another look at those risk quantification eggs.


Risk and CVSS (Post 5) *FINAL*

September 4, 2008

I had no idea that the CVSS topic would turn into a five-post series. There were just too many thoughts and too much information to cram into one or even two posts, so for those of you that read even a few, let alone all five – thanks for persevering.

Final thoughts on CVSS – two good and two not so good:

NOT SO GOOD:

1.    The CVSS framework is probably not being *fully embraced* or properly utilized by the people that need to leverage it the most – the customers of vendors that use it to score vulnerabilities in their products. Scoring the environmental metrics and observing the impact on the base metrics could add a lot of value. Other frameworks or organizations that reference CVSS scores as part of a vulnerability management process should mention the optional metrics that can influence the base score a vendor provides. Better yet, maybe add a disclaimer that the CVSS score listed today may be outdated and need updating.

2.    The CVSS risk vernacular needs to be updated. I would recommend that the CVSS-SOG consider participating in “The Open Group” “Risk Management and Analysis Taxonomy” forum. Better yet, the CVSS-SOG should consider adopting the FAIR methodology – specifically, using CVSS metrics that could factor into FAIR taxonomy elements. Some of the CVSS metrics focus more on impact than on the vulnerability itself. This can be a slippery slope, especially when there are no metrics for “threat event frequency”, let alone “loss event frequency”.

GOOD:

1.    Pretty much all the CVSS metrics have some usefulness and should be usable by most information security professionals, especially risk analysts. I am already creating a small utility so I can consistently analyze various vulnerabilities and, when appropriate, use the metrics as contributing factors for FAIR (see the sketch after this list).

2.    Industry adoption. A lot of vendors use the CVSS framework, and PCI-DSS references it for vulnerability-related PCI guidelines. Just remember: use the whole framework and do not rely upon what is spoon-fed to you by PCI QSAs or value-added resellers. If applicable, take back your ability to analyze risk and make informed decisions.
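For anyone curious, here is a minimal sketch of the kind of utility I have in mind. The structure and names are purely illustrative – they are not part of the CVSS or FAIR specifications – but the idea is simply to record each CVSS metric selection alongside the FAIR taxonomy elements it informs:

```python
# Illustrative sketch only -- the structure and names are my own,
# not part of the CVSS or FAIR specifications.
from dataclasses import dataclass, field

@dataclass
class MetricObservation:
    cvss_metric: str        # e.g. "Access Vector"
    value: str              # e.g. "Network"
    fair_elements: list     # FAIR taxonomy elements this metric informs
    notes: str = ""

@dataclass
class VulnerabilityAnalysis:
    name: str
    observations: list = field(default_factory=list)

    def contributing_factors(self, fair_element):
        """Return the CVSS observations that inform a given FAIR element."""
        return [o for o in self.observations if fair_element in o.fair_elements]

# Usage: record a metric once, then pull it back out per FAIR element.
analysis = VulnerabilityAnalysis("CVE-XXXX-XXXX (hypothetical)")
analysis.observations.append(MetricObservation(
    "Access Vector", "Network",
    ["Contact", "Threat Capability"],
    "Remotely reachable, so a larger threat community can come into contact."))
for obs in analysis.contributing_factors("Contact"):
    print(obs.cvss_metric, "->", obs.value, "|", obs.notes)
```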

There you have it. Again, thanks for reading and submitting comments. The feedback and scrutiny have been well taken and appreciated.


Risk and CVSS (Post 4)

September 3, 2008

We are now up to the CVSS “Environmental Metrics” group. According to the CVSS documentation, this group ‘captures the characteristics of a vulnerability that are associated with a user’s environment’. This group is also optional from a scoring perspective and is intended to be completed by someone familiar with the environment the vulnerability resides within.

In “Post 1” I mentioned that CVSS does not take into consideration “threat event frequency” – how often I expect to get attacked – nor does it take into consideration “loss event frequency” – how often I expect to realize a loss. The “environmental metrics” group does not fill this void either, but there is still value in being able to quickly analyze a vulnerability in the context of these metrics – again, as contributing factors to various FAIR risk taxonomy elements.

FAIR & CVSS “Environmental Metrics” Mapping

Collateral Damage Potential. This metric measures the potential for loss of life or physical assets through damage or theft of property. Real quick: I scoffed when I saw “loss of life” – none of the risk issues I have ever dealt with involved estimating loss of life. However, there are real-life examples of software defects (essentially vulnerabilities) that have loss-of-human-life implications. Take a look at “Geekonomics” by David Rice; there is some fascinating information in the book that will give you a whole new perspective on vulnerabilities. Getting back on track, the collateral damage metric maps very well to the “probable loss magnitude” (PLM) branch of the FAIR taxonomy. I do not want to dive into PLM right now, but let me state this: the word potential is not the same as probable, nor does it imply expected loss. So with the CVSS metric it could be very easy for someone to err on the side of a worst-case loss versus choosing a value that best resembles expected loss. Either way, with CVSS this would just result in the CVSS score being raised. I would prefer to see a value in terms of dollars, whether it is monetary ranges or actual expected loss amounts based on simulations.
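To illustrate what I mean by simulation-based expected loss, here is a toy Monte Carlo run (every dollar figure below is invented purely for illustration). The point is that averaging over a plausible loss range produces a very different – and more useful – number than the worst case:

```python
# Toy Monte Carlo estimate of expected loss.  All dollar figures are
# invented for illustration; this is not part of CVSS or FAIR.
import random

random.seed(42)

def simulate_expected_loss(low, likely, high, trials=100_000):
    """Draw losses from a triangular distribution and average them."""
    return sum(random.triangular(low, high, likely) for _ in range(trials)) / trials

# Worst case says $500,000; the simulated expected loss tells another story.
expected = simulate_expected_loss(low=10_000, likely=50_000, high=500_000)
print("Worst case:    $500,000")
print(f"Expected loss: ${expected:,.0f}")  # near the triangle's mean, ~$187,000
```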

Target Distribution. This metric measures the proportion of vulnerable systems. I like this metric and I think it can be very useful as a contributing factor to the FAIR taxonomy element “threat event frequency” – specifically “threat contact” and possibly “threat capability”. The number and placement of vulnerable systems in my environment could directly factor into how often, or what type of, threat agents I expect to come into contact with the vulnerable systems – let alone attack them. Remember, within FAIR, attacking an asset with a vulnerability does not guarantee loss. We have to take into consideration the ability of the attacker to overcome the control resistance applied to the asset.

Security Requirements. These metrics enable the analyst to customize the CVSS score based on the importance of the affected IT asset to a user’s organization in terms of confidentiality (CR), integrity (IR), and availability (AR). Possible values include: LOW, MEDIUM, HIGH, or NOT DEFINED. These metrics were designed to work with the CVSS “Base Metrics” group – specifically the CIA impact metrics. So if the vendor analyst states that a vulnerability has a confidentiality impact, and the analyst for the organization that has the vulnerable asset states that his or her organization has a confidentiality requirement, then the CVSS score could increase. Sounds pretty straightforward, and it seems to map nicely into the PLM branch of the FAIR taxonomy – specifically, as contributing factors to estimating loss should the vulnerability be exploited and a loss occur.
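To make that interaction concrete, here is the relevant piece of the CVSS v2 environmental equations in Python. The impact and requirement weights are the published CVSS v2 values; treat this as a sketch rather than a reference implementation:

```python
# AdjustedImpact from the CVSS v2 environmental equations: the base
# C/I/A impact weights get scaled by the organization's security
# requirements before the score is recomputed.
IMPACT = {"None": 0.0, "Partial": 0.275, "Complete": 0.660}
REQUIREMENT = {"Low": 0.5, "Medium": 1.0, "High": 1.51, "Not Defined": 1.0}

def adjusted_impact(conf_imp, integ_imp, avail_imp, cr, ir, ar):
    return min(10.0, 10.41 * (1 - (1 - IMPACT[conf_imp] * REQUIREMENT[cr])
                                * (1 - IMPACT[integ_imp] * REQUIREMENT[ir])
                                * (1 - IMPACT[avail_imp] * REQUIREMENT[ar])))

# A "Partial" confidentiality impact weighs far more heavily when the
# organization has a "High" confidentiality requirement.
print(adjusted_impact("Partial", "None", "None", "High", "Medium", "Medium"))  # ~4.3
print(adjusted_impact("Partial", "None", "None", "Low", "Medium", "Medium"))   # ~1.4
```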

It is too bad that the CVSS environmental metrics are optional. I understand why they are, and regardless of the fact that CVSS generates a score without taking loss event frequency into account – just imagine how much more informed security folks and decision makers could be if they took a few more minutes to analyze a given vulnerability, and the CVSS score a vendor provided for it, in light of these metrics.
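For completeness, here is the rest of the CVSS v2 environmental equation, again using the published weights (the adjusted_temporal argument would be the temporal score recomputed with the adjusted impact from the sketch above). Notice how a low target distribution pulls a vendor-supplied score down dramatically:

```python
# Environmental score from the CVSS v2 specification.  CDP and TD
# weights are the published values; adjusted_temporal is the temporal
# score recomputed using the adjusted impact (see the sketch above).
CDP = {"None": 0.0, "Low": 0.1, "Low-Medium": 0.3, "Medium-High": 0.4,
       "High": 0.5, "Not Defined": 0.0}
TD = {"None": 0.0, "Low": 0.25, "Medium": 0.75, "High": 1.0, "Not Defined": 1.0}

def environmental_score(adjusted_temporal, cdp, td):
    return round((adjusted_temporal + (10 - adjusted_temporal) * CDP[cdp]) * TD[td], 1)

# A vendor-scored 7.5 vulnerability on a handful of low-value systems:
print(environmental_score(7.5, "Low", "Low"))    # 1.9 -- a very different picture
print(environmental_score(7.5, "High", "High"))  # 8.8
```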

In the next (and final) CVSS post, I will share some final thoughts on CVSS and finally put a nail in what was not intended to be a series of posts. Thanks for reading!


Risk and CVSS (Post 3)

September 3, 2008

If you missed the first two posts, the links to Post 1 and Post 2 are on the right hand side of your browser screen.

Let’s get to it…

We are now up to the CVSS “Temporal Metrics”. According to the documentation, these are optional and are meant to be completed by the vendor vulnerability analyst more so than the end user. As is to be expected, all three can be considered contributing factors when assessing a vulnerability for risk.

FAIR & CVSS “Temporal Metrics” Mapping

Exploitability. This metric measures the current state of exploit techniques or code availability (possible values are: Unproven, Proof-of-Concept, Functional, High, and Not Defined). The key words here are “current state”, indicating that the state changes for better or for worse over time. In the world of FAIR, this seems to map nicely to “threat capability” and “control resistance”. “Threat capability” because the exploit methods may be limited to a very small percentage of the threat community – a fraction of the threat population as a whole (the community is a subset of the population). “Control resistance” because the exploit may only be possible if the attacker has local system access. Heck, in some cases security controls could be entirely absent and the system would be no more vulnerable than if there were tens of thousands of dollars’ worth of controls (paying dollars to protect pennies).

Remediation Level. According to CVSS, this metric is an important factor for prioritization. Possible values include: Official Fix, Temporary Fix, Workaround, Unavailable, and Not Defined. These values reflect how the vendor would score it, but there seems to be room for us, the risk assessors, to provide our own value. This metric maps well to “control resistance” in the FAIR taxonomy – specifically, how resistant are my security controls against the overall threat population? Just because the vendor may not have a solution for us, there could be offsetting controls in the short term that do not require a vendor solution. I appreciate the context this metric was developed within, but do not be afraid to take the logic back into your own hands when it comes to these metrics. We all hear and preach about security in depth – take the opportunity to leverage those investments when analyzing vulnerability from a risk assessment perspective.

Report Confidence. This metric measures the degree of confidence in the existence of the vulnerability and the credibility of the known details. Provided values include: Unconfirmed, Uncorroborated, Confirmed, and Not Defined. This may be the one metric that only the vendor or other industry experts can accurately score when a vulnerability is first disclosed. But within days, weeks, or months of public disclosure, pretty much any non-“wolf-crying” security expert can do some simple Google searches and validate a provided CVSS metric score or update one on their own. From my perspective, this metric maps well to FAIR’s “threat event frequency” and “threat capability” taxonomy elements. For “threat event frequency”, specifically “threat contact” and “threat action”: how often do I think a threat agent comes into contact with a vulnerable system, and how often do I think they will attempt to exploit the vulnerability once they do come in contact with it? For “threat capability”, again because the exploit methods may be limited to a very small percentage of the threat community. So “report confidence” is a contributing factor to possibly higher threat contact or threat action – but it does not guarantee that your threat event frequency will be higher, or that every threat agent in a threat community or threat population is capable of exploiting the vulnerability – especially in light of other security controls that may be present in your environment.
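Putting the three temporal metrics together: the published CVSS v2 temporal equation is nothing more than three multipliers applied to the base score, which is easy to see in a few lines of Python (the weights below are the published v2 values):

```python
# CVSS v2 temporal equation: three multipliers on the base score,
# using the published v2 metric weights.
EXPLOITABILITY = {"Unproven": 0.85, "Proof-of-Concept": 0.90,
                  "Functional": 0.95, "High": 1.0, "Not Defined": 1.0}
REMEDIATION = {"Official Fix": 0.87, "Temporary Fix": 0.90,
               "Workaround": 0.95, "Unavailable": 1.0, "Not Defined": 1.0}
CONFIDENCE = {"Unconfirmed": 0.90, "Uncorroborated": 0.95,
              "Confirmed": 1.0, "Not Defined": 1.0}

def temporal_score(base, exploitability, remediation, confidence):
    return round(base * EXPLOITABILITY[exploitability]
                      * REMEDIATION[remediation]
                      * CONFIDENCE[confidence], 1)

# An unproven, officially fixed, unconfirmed vulnerability drops noticeably:
print(temporal_score(9.3, "Unproven", "Official Fix", "Unconfirmed"))  # 6.2
print(temporal_score(9.3, "High", "Unavailable", "Confirmed"))         # 9.3
```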

I will do two more posts on the CVSS framework; the “environmental metrics” group and then a summary post. Thanks for reading!


Risk and CVSS (Post 2)

September 1, 2008

Just before I published “Risk and CVSS (Post 1)” a week or so ago, there were some email threads on the SecurityMetrics.Org mailing list about the scoring methodology that CVSS uses. I had planned on commenting on the scoring as part of this series, but am only going to say that one should be very cautious about how they use such a score – especially in determining a risk rating or quantifying risk. I have seen multiple risk scenarios where the vulnerability is very high, but the loss event frequency is so low or the impact is low enough that the overall risk is pretty much nothing.
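A quick back-of-the-envelope illustration (the numbers are invented): even a severe vulnerability amounts to very little risk when loss events are rare and cheap.

```python
# Back-of-the-envelope illustration with invented numbers: a "very
# high" vulnerability can still represent near-zero risk when the
# loss event frequency is tiny.
loss_event_frequency = 0.02      # expected loss events per year
probable_loss_magnitude = 4_000  # dollars per loss event

annualized_exposure = loss_event_frequency * probable_loss_magnitude
print(f"Annualized loss exposure: ${annualized_exposure:,.0f}")  # $80 per year
```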

Another noteworthy comment from “Risk and CVSS (Post 1)” concerns PCI QSAs that apparently think they are providing value-add to PCI merchants by including CVSS vectors and partial scores (taken directly from the National Vulnerability Database) in their reports, but not educating the merchant on how the CVSS “Environmental Metrics” can significantly lower a score that only represents the Base and Temporal metrics. From my perspective, this is where “value add” should come into play from a professional services contractor. If you are paying consulting firms to assess your compliance or risk posture, do not hesitate to ask them about their methodology for scoring or rating risk – you may not be getting your money’s worth.

Back to Post 2….

Despite a problematic scoring model and the suggestion that the CVSS vulnerability score is representative of the actual risk to an organization – the CVSS framework does allow one to consistently and quickly analyze a vulnerability. Yes – consistently and quickly.

After three weeks of analyzing CVSS, I believe that the components of the framework that make up the three metric groups (base, temporal, and environmental) are great contributing factors (details or facts that influence or factor into risk elements) to the FAIR methodology. Below, I will attempt to quickly cover all three CVSS metric groups and how they map as contributing factors to FAIR. I think this exercise should also result in a better understanding of the elements that make up an information security risk and their relationships with each other.

In CVSS, the “Base Metric” group ‘captures the characteristics of a vulnerability that are constant with time and across user environments’. Specifically: how the vulnerability is exploited (access vector), the complexity of the attack required to exploit the vulnerability (access complexity), the number of times an attacker must authenticate pre-attack (authentication), the confidentiality impact to the vulnerable asset (confidentiality), the integrity impact to the vulnerable asset (integrity), and the availability impact to the vulnerable asset (availability).

FAIR & CVSS “Base Metrics” Mapping

In FAIR, there is a risk taxonomy diagram (see above) that visually depicts risk and the elements that make up risk. With risk being at the top, it splits off into two branches: “loss event frequency” and “probable loss magnitude”; both of which are broken down further. The CVSS “Base Metrics” can be mapped to FAIR.

Access Vector – CVSS suggests this represents how the vulnerability is exploited: local (system) access, adjacent network, or network. You can read the CVSS documentation for details. In FAIR, I think this metric is a great contributing factor to the “Contact” and “Threat Capability” taxonomy elements. Contact – how often do I expect a threat agent to come in contact with (not necessarily attack) a vulnerable system? “Threat Capability” – what percentage of the threat community do I think is capable of overcoming the security resistance present on the asset – let alone getting access to it?

Access Complexity – CVSS suggests this measures the complexity of the attack required to exploit the vulnerability once an attacker has gained access to the target system. Metric values include: HIGH, very hard to exploit; MEDIUM, not easily exploited; and LOW, easy to exploit. This CVSS metric maps to three different FAIR taxonomy elements: “Action”, “Threat Capability”, and “Control Resistance”. Action – how often will the threat agent attempt to attack my asset after it comes into contact with it? Threat Capability – what percentage of the threat community do I think is capable of overcoming the security resistance present on the asset? Control Resistance – my security controls are effective against what percentage of the threat population?

Authentication – In CVSS, this is the number of times the attacker must authenticate to a target in order to exploit a vulnerability; Multiple – two or more authentication instances, Single – one authentication instance, None – no authentication. Within FAIR, I have mapped this to “Control Resistance”.

The three other “base metrics” – confidentiality impact, integrity impact, and availability impact – are all contributing factors to the “probable loss magnitude” branch of the FAIR taxonomy diagram. The metric values for each of these three impacts are: None, no impact; Partial, some impact; and Complete, total loss. Keep in mind that the three metrics are probably more reflective of state than of actual loss. So, the real impact can really only be measured or estimated by someone familiar with the vulnerable system (its role in a business process and the amount of data on the system).
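Pulling the six base metrics together, the published CVSS v2 base equation is compact enough to show in full (the weights are the published v2 values; this is a sketch for analysis purposes, not a reference implementation):

```python
# CVSS v2 base equation with the published metric weights.
AV = {"Local": 0.395, "Adjacent Network": 0.646, "Network": 1.0}
AC = {"High": 0.35, "Medium": 0.61, "Low": 0.71}
AU = {"Multiple": 0.45, "Single": 0.56, "None": 0.704}
CIA = {"None": 0.0, "Partial": 0.275, "Complete": 0.660}

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# A remotely exploitable, no-authentication, complete-compromise flaw:
print(base_score("Network", "Low", "None", "Complete", "Complete", "Complete"))  # 10.0
# The same flaw requiring local access, with only partial impacts:
print(base_score("Local", "Low", "None", "Partial", "Partial", "Partial"))       # 4.6
```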

Since this is running into a long post, I will go ahead and wrap up. We still have the “temporal” and “environmental” metric groups to look at. Another thought that came to mind while typing this was how exploit methods can change over time. Any given CVSS score reflects the circumstances at that point in time, so these scores should not be blindly used without additional review of the metrics that make them up.

Finally, I want to start introducing risk scenarios on this blog. To do so, I need to create a fictitious company profile that will be referenced in all of the scenarios. Hopefully in the next few weeks I can get this profile created and published. Once it is done, I think there will be a strong enough foundation from an information standpoint to start publishing risk scenarios and having what I am sure will be contested – yet meaningful – dialogue.