Risktical Blog Series: SOTSOG

May 28, 2010

Recently in information security web 2.0 circles there has been some buzz about “breaking” into the industry, what career goodness looks like, and so on. This has prompted me to think about my journey to date, and I would like to write a series titled “Standing On The Shoulders Of Giants” (SOTSOG).

The first few posts in the series may have little to do with risk management since they date back to my childhood and Marine Corps days. However, it would be negligent of me not to reference some people or things dear to me from those days, since they established the foundation I have been built upon.

For each post, I will make a point to highlight the one or two qualities from that particular “giant” that I think apply to the information risk management professional. And just in case I forget to write the following points in my posts, let me take the opportunity to do it now:

1. YOU are responsible for YOUR career (period).
2. YOU cannot control EVERY aspect of YOUR career.
3. However, YOU are responsible for how YOU deal with the things YOU cannot CONTROL.
4. It’s OK to admit YOU don’t know something.
5. The advice I give to kids going into boot camp or learning something brand new: KEEP YOUR EYES AND EARS OPEN and YOUR MOUTH SHUT!

I hope you will enjoy the series. I will try not to expose too much of the gooey center inside my crunchy shell.


Impromptu IT Risk Assessment Poll

May 25, 2010

You can select up to two answers. Thank you for participating!


Impromptu PCI-DSS Poll

May 14, 2010

More Heat Map Love

May 11, 2010

In my previous post “Heat Map Love” I attempted to illustrate the relationship between plots on a heat map and a loss distribution. In this post I am going to illustrate another method to show the relationship – hopefully in simpler terms.

In the heat map above I have plotted five example risk issues:

I: Application security; cross-site scripting; external facing application(s); actual loss events 2 to 10 times a year; low magnitude per event – less than $10,000.

II: Data confidentiality; lost / stolen mobile device or computer; no hard disk encryption; one simulated or actual loss event per year; low to moderate magnitude per event.

III: PCI-DSS compliance; level 2 Visa merchant; not compliant with numerous PCI-DSS requirements; merchant self-reports not being in compliance this year; merchant expects monthly fines of $5,000 for a one-year total of $60,000.

IV: Malware outbreak; large malware outbreak (greater than 10% of your protected endpoints); less than once every ten years; magnitude per event could range between $100,000 and $1,000,000 – productivity hit, external consulting, etc.

V: Availability; loss of data center; very low frequency; very large magnitude per event.

Since there is a frequency and magnitude of loss associated with each of these issues, we can conceptually associate these issues with a loss distribution (assuming that our loss distribution is normal-like or lognormal).
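To make that association concrete, here is a minimal Monte Carlo sketch in Python that turns the five example issues above into a simulated annual loss distribution. The frequency and magnitude parameters are my own illustrative assumptions loosely based on the descriptions above, and the Poisson / lognormal choices are just one reasonable way to model event counts and per-event loss – not a prescription.

```python
# Minimal Monte Carlo sketch: parameter values are illustrative assumptions
# loosely based on the five issue descriptions above.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulated years

# (label, mean loss events per year, median $ per event, lognormal sigma)
issues = [
    ("I   - XSS",           6.0,       2_000, 0.5),
    ("II  - Lost laptop",   1.0,      25_000, 0.8),
    ("III - PCI fines",    12.0,       5_000, 0.1),
    ("IV  - Malware",       0.1,     300_000, 0.9),
    ("V   - Data center",   0.02,  5_000_000, 1.0),
]

total = np.zeros(N)
for label, freq, median, sigma in issues:
    counts = rng.poisson(freq, N)  # loss events in each simulated year
    annual = np.array(
        [rng.lognormal(np.log(median), sigma, c).sum() for c in counts]
    )
    total += annual
    print(f"{label}: mean annual loss ~ ${annual.mean():,.0f}")

print(f"Portfolio 50th / 99th percentile annual loss: "
      f"${np.percentile(total, 50):,.0f} / ${np.percentile(total, 99):,.0f}")
```

A histogram of `total` is the loss distribution that the paper-rotation steps below are meant to approximate visually.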

Step 1: Hold a piece of paper with the heat map looking like the image below:

Step 2: Flip the paper towards you so the heat map looks like image below (flip vertical):


Step 3: Rotate the paper 90 degrees counter-clockwise; it should look like the image below.


For ease of illustration, let’s overlay a lognormal distribution.

What we see is in line with what we discussed in the “Heat Map Love” post:

Risk V – Loss of data center; is driving the tail; very low frequency; very large magnitude.
Risk IV – Malware outbreak; low frequency; but significant or high magnitude.
Risk III – Annual PCI fines from Visa via acquirer / processor; once per year; $60K.
Risk II – Lost or stolen laptop that had confidential information on it; response and notification costs not expected to be significant.
Risk I – Lots of small application security issues; for example cross site scripting; numerous detected and reported instances per year; low cost per event.

There you have it – a less technical way to perform a sniff test on your heat map plots and/or validate them against a loss distribution.

Once you have taught everyone how to perform this artwork paper-rotation trick, you can have a paper airplane flying contest.


Heat Map Love

May 6, 2010

First, I would like to welcome Jack Jones *back* to the world of risk blogging. Jack blogged a few weeks ago on the subject of heat maps; “Lipstick on Pigs” and “Lipstick Part II”; prompting a response by Jared Pfost of Third Defense. These are great posts that underscore the need to structure and leverage heat maps in an effective and defensible manner.

The purpose of this post is to share a recent “ah hah” moment involving heat maps and loss distributions. Whether or not you are an advocate of risk quantification or simulation modeling, it is hard to criticize someone for having tools or procedures in place that essentially serve as a “risk sniff test”. I consider reconciling portions of a loss distribution to a heat map a pretty useful sniff test.

QUESTION TO BE ANSWERED: How do I reconcile – or validate – the plotting of a heat map bubble with a loss distribution?

Well, it depends…but let’s establish some context.

•    5-by-5 heat map. The X axis of the heat map represents “Frequency of Loss”; the Y axis represents “Magnitude of Loss”. Each axis is broken into 5 sections.

•    Let’s say we have a heat map whose bubbles represent categories of risk issues (ISO 27002 categories, BASEL II OpRisk categories, etc.).

•    At a minimum, all of these issues have been assessed with some methodology and/or tool (I prefer FAIR) that allows us to associate a frequency and magnitude of loss with each and every issue.

•    We can perform thousands of simulation iterations for each and every issue in the risk repository, perform analysis, determine categories of risk that are contributing the most to various percentiles of the loss distribution, and then associate them with a heat map.

•    For the purpose of this post we are going to make a good faith assumption that our loss distribution resembles a log-normal or normal “like” distribution.

Back in my “Rainbow Risk” post I shared an example of a “rainbow chart”: a 100% stacked bar chart showing the percentage of loss that each category contributes to any given loss distribution percentile. For example, the rainbow chart in that post showed that the Business Continuity Mgmt category of risk issues accounted for about 55% of the risk at the 99th percentile. On a heat map, most significant IT business continuity issues are probably going to be very low frequency, very high magnitude events. Thus, it is fairly intuitive that very low frequency / very high magnitude loss events would “drive” the tail of a given loss distribution.
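For readers who want to see the mechanics, here is a rough Python sketch of that contribution-by-percentile idea: simulate each category separately, sum the categories into a portfolio loss distribution, then ask what share of loss each category accounts for in the simulated years at or beyond the 99th percentile. The category names and parameters are hypothetical placeholders, not figures from the Rainbow Risk post.

```python
# Rough sketch of a contribution-by-percentile ("rainbow chart") calculation.
# Category names and parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(7)
N = 10_000  # simulated years

# category -> (mean events per year, median $ per event, lognormal sigma)
categories = {
    "Business Continuity Mgmt": (0.05, 2_000_000, 1.0),
    "Access Control":           (4.0,     10_000, 0.7),
    "Compliance":               (1.0,     60_000, 0.3),
}

by_cat = {}
for name, (freq, median, sigma) in categories.items():
    counts = rng.poisson(freq, N)
    by_cat[name] = np.array(
        [rng.lognormal(np.log(median), sigma, c).sum() for c in counts]
    )

total = sum(by_cat.values())
p99 = np.percentile(total, 99)
tail = total >= p99  # simulated years at or beyond the 99th percentile

for name, losses in by_cat.items():
    share = losses[tail].sum() / total[tail].sum()
    print(f"{name}: {share:.0%} of loss in the tail years")
```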

In the images below, I have mapped areas on a heat map (image 1) to areas on a distribution (image 2). Specifically, I am trying to illustrate how the frequency and magnitude of any given issue factor into, and where they are most likely represented in, a loss distribution.

Image 1
Image 2

Area A – Very low frequency, very high magnitude risk issues. These types of events or risk issues drive the tail portion of a loss distribution.

Area B – Very low frequency, moderate or high magnitude risk issues; or low to moderate frequency, very high magnitude loss events. It can be said that these types of issues also drive the tail – but perhaps not as far past the 99th percentile as the issues associated with Area A.

Area C – Low to moderate frequency, moderate or high magnitude. These issues are best represented in the middle of the distribution; generally speaking, within about one standard deviation on either side of the mean.

Area D – Very frequent, moderate or high magnitude. Loss associated with these issues is not as severe as that of Areas A and B, but is typically greater than the mean expected loss.

Area E – Very frequent, very high magnitude. Generally speaking, these issues probably drive the portion of the distribution between 1 and 2.5 standard deviations (to the right of the mean).

Area F – Low or moderate frequency, low or moderate magnitude. These issues best factor into the area of the distribution left of the mean. Loss associated with these issues is less than the mean.

In closing, I will share at least one use case for performing this analysis or validation: key risk heat maps. If all of your issues have frequency and magnitude values, as well as some other attributes associated with them, you can:

1.    Perform simulations on all of these issues.
2.    Calculate their contributions to various distribution percentiles.
3.    Analyze the results by various attributes (ISO 27002, BASEL II, IT process, etc.).
4.    Chart the derived information (categories of risk) on a heat map.
5.    Review for plausibility / accuracy (this should occur all the time).
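As a small sketch of steps 3 through 5, the snippet below takes per-category summary figures (for example, output from a simulation like the one sketched earlier) and places each category in a cell of a 5-by-5 frequency/magnitude grid for a plausibility review. The bin edges and category figures are assumptions chosen purely for illustration.

```python
# Sketch of steps 3-5: bin per-category simulation summaries into a 5x5
# frequency/magnitude grid. Bin edges and figures are illustrative assumptions.
import numpy as np

# (category, mean loss events per year, mean $ per event)
summaries = [
    ("Business Continuity Mgmt", 0.05, 2_000_000),
    ("Access Control",           4.0,     10_000),
    ("Compliance",               1.0,     60_000),
]

freq_edges = [0.1, 0.5, 2, 10]                      # events/year cut points between the 5 bins
mag_edges  = [10_000, 50_000, 250_000, 1_000_000]   # $ per event cut points

for name, freq, mag in summaries:
    x = int(np.digitize(freq, freq_edges)) + 1  # 1..5 along the frequency (X) axis
    y = int(np.digitize(mag, mag_edges)) + 1    # 1..5 along the magnitude (Y) axis
    print(f"{name}: plot at heat map cell (frequency={x}, magnitude={y})")
```

Each category’s cell can then be eyeballed against where that category lands on the loss distribution, which is exactly the sniff test this post describes.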

I welcome any feedback!