Knowing Your Exposure Limitations

April 1, 2013

For those familiar with risk analysis – specifically the measurement [or estimate] of the frequency and severity of loss – there are many scenarios where the severity of loss, or the resulting expected loss, has a long tail. In FAIR terms, we may have a scenario where the minimum loss is $1K, the most likely loss is $10K, and the maximum loss is $2M. In this type of scenario – using Monte Carlo simulation and depending on the kurtosis of the estimated distribution – it is very easy for a severity distribution, or an aggregate distribution that accounts for both frequency and severity, to produce descriptive statistics that do not accurately reflect the exposure the organization actually faces should the adverse event occur. Understanding the full range of severity or overall expected loss is useful, but a prudent risk practitioner should also understand and account for the details of the organization's business insurance policies, so it is clear when insurance controls will be invoked to limit financial loss from significant adverse events.

Using the example values above, an organization may be willing to pay out of pocket for all adverse events – similar to the scenario above – up to $1M and then rely upon insurance to cover the rest. This in turn changes the maximum amount of loss the company is directly exposed to per event from $2M to $1M. In addition, this understanding can be a significant information point for decision makers as they ponder how to treat an issue. Given this information, consider the following:

1. How familiar are you with your organization's business or corporate insurance program?

2. Does your business insurance program cover the exposures whose risk you are responsible or accountable for managing?

3. Are your risk analysis models flexible enough to incorporate limits of loss relative to potential loss?

4. When you talk with decision makers, are you referencing the existence of business insurance policies or other risk financing / transfer controls that limit your organization's exposure when significant adverse events occur?
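
To make question 3 concrete, here is a minimal sketch of what folding a loss limit into a simulation can look like. This is not any particular tool's implementation – the triangular severity distribution, the $1M retention, and the sample size are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Severity estimates from the example above: $1K minimum, $10K most likely, $2M maximum.
# A triangular distribution stands in here for whatever severity distribution the model uses.
gross_loss = rng.triangular(left=1_000, mode=10_000, right=2_000_000, size=100_000)

# Hypothetical insurance structure: the organization retains the first $1M per event,
# and insurance responds above that attachment point.
retention = 1_000_000
retained_loss = np.minimum(gross_loss, retention)

print(f"Gross 99th percentile:    ${np.percentile(gross_loss, 99):,.0f}")
print(f"Retained 99th percentile: ${np.percentile(retained_loss, 99):,.0f}")
print(f"Maximum retained loss:    ${retained_loss.max():,.0f}")  # capped at $1M
```

The gap between the gross and retained percentiles is the exposure the insurance program is absorbing – which is exactly the kind of information point decision makers rarely see in a risk analysis.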

The more we can leverage other risk-related controls in the organization and paint a more accurate picture of exposure, the more we become trusted advisors to our decision makers and other stakeholders in the larger risk management life cycle.

Want to learn more?

AICPCU – Associate in Risk Management (ARM) – http://www.aicpcu.org/comet/programs/arm/arm.htm

SIRA – Society of Information Risk Analysts – http://www.societyinforisk.org


OpenPERT – A FREE Add-In for Microsoft Office Excel

August 15, 2011

INTRODUCTION. In early June of this year, Jay Jacobs and I started a long email and phone discussion about risk modeling, model comparisons, descriptive statistics, and risk management in general. At some point in our conversation the topic of Excel add-ins came up, and how nice it would be to NOT have to rely upon third-party add-ins that cost hundreds to thousands of dollars to acquire. You can sort of think of the 80-20 rule when it comes to out-of-the-box Excel functionality – though it is probably more like 95-5 depending on your profession – most of the functionality you need to perform analysis is there. However, there are at least two capabilities not included in Excel that are useful for risk modeling and analysis: the betaPERT distribution and Monte Carlo simulation. Hence the need for costly third-party add-ins – or a free alternative, the OpenPERT add-in.

ABOUT BETAPERT. You can get very thorough explanations of the betaPERT distribution here, here, and here. What follows is the 'cliff notes' version. The betaPERT distribution is often used to model subject matter expert estimates in scenarios where there is no data, or not enough of it. The underlying distribution is the beta distribution (which is included in Microsoft Office Excel). If we over-simplify and define a distribution as a collection or range of values, then betaPERT, when used with three values – minimum, most likely (think mode), and maximum – will create a distribution of values (output) that can then be used for statistical analysis and modeling. By introducing a fourth parameter – which I will refer to as confidence in the 'most likely' estimate – we can account for the kurtosis, or peakedness, of the distribution.
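
For the curious, here is a minimal sketch of the common betaPERT parameterization described above. It is not the OpenPERT add-in's code – just the usual minimum / most likely / maximum / lambda construction layered on beta sampling, expressed in Python:

```python
import numpy as np

rng = np.random.default_rng()

def beta_pert(minimum, most_likely, maximum, lam=4.0, size=10_000):
    """Sample a betaPERT distribution by rescaling a beta distribution.

    lam is the fourth 'confidence' parameter: 4 is the traditional PERT
    default, and larger values put more weight around the most likely value.
    """
    if maximum == minimum:
        # Degenerate range: every sample is the single possible value.
        # (Naive implementations divide by zero here.)
        return np.full(size, float(minimum))
    span = maximum - minimum
    alpha = 1 + lam * (most_likely - minimum) / span
    beta = 1 + lam * (maximum - most_likely) / span
    return minimum + span * rng.beta(alpha, beta, size)

# Arbitrary estimates: minimum 1,000, most likely 10,000, maximum 2,000,000.
samples = beta_pert(1_000, 10_000, 2_000_000)
print(np.percentile(samples, [5, 50, 95]))
```

The guard for a degenerate range also hints at the divide-by-zero cases mentioned later in this post.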

WHO USES BETAPERT? There are a few professions and disciplines that leverage the betaPERT distribution:

Project Management – The project management profession is most often associated with betaPERT. PERT stands for Program (or Project) Evaluation and Review Technique. PERT was developed by the U.S. Navy and Booz Allen Hamilton back in the 1950s (ref. 1; see below) as part of the Polaris missile program. It is often used today in project management for project and task planning, and I believe it is covered as part of the PMP certification curriculum.

Risk Analysis / Modeling – There are some risk analysis scenarios where, due to a lack of data, estimates are used to bring form to the components that factor into risk. The FAIR methodology – specifically, some tools that apply FAIR to IT risk – is an example of using betaPERT for risk analysis and risk modeling.

Ad-Hoc Analysis – There are many times when having access to a distribution like betaPERT is useful outside the disciplines listed above. For example, if a baker is looking to compare the price of her or his product with the rest of the market, data could be collected, a distribution created, and analysis performed. Or maybe a church is analyzing its year-to-year growth and wants to create a dynamic model that accounts for both probable growth and shrinkage – betaPERT can help with that as well (see the sketch below).
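
As a toy illustration of that last use case, here is what the church growth question might look like; the growth estimates are invented for the example:

```python
import numpy as np

rng = np.random.default_rng()

def beta_pert(minimum, most_likely, maximum, lam=4.0, size=50_000):
    # Same construction as the sketch above.
    span = maximum - minimum
    alpha = 1 + lam * (most_likely - minimum) / span
    beta = 1 + lam * (maximum - most_likely) / span
    return minimum + span * rng.beta(alpha, beta, size)

# Hypothetical estimates: could shrink 5%, most likely grows 2%, could grow 10%.
growth = beta_pert(-0.05, 0.02, 0.10)
print(f"Chance of a down year: {np.mean(growth < 0):.1%}")
print(f"Median year-over-year growth: {np.median(growth):.1%}")
```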

OPENPERT ADD-IN FOR MICROSOFT OFFICE EXCEL. Jay and I developed the OpenPERT add-in as an alternative to paying money to leverage the betaPERT distribution. Of course, we underestimated the complexity of not only creating an Excel add-in but also working with the distribution itself, including specific cases where divide-by-zero errors can occur. That said, we are very pleased with version 1.0 of OpenPERT and are excited about future enhancements, as well as releasing examples of problem scenarios that are better understood with betaPERT analysis. Version 1.0 has been tested on Microsoft Office Excel 2007 and 2010, on both 32-bit and 64-bit Microsoft Windows operating systems. Version 1.0 of OpenPERT is not supported on ANY Microsoft Office for Mac products.

The project home of OpenPERT is here.

The downloads page is here. Even if you are familiar with the betaPERT distribution, please read the reference guide before installing and using the OpenPERT add-in.

Your feedback is welcome via support@openpert.org

Finally – on behalf of Jay and myself – a special thank you to the members of the Society of Information Risk Analysts (SIRA) who helped test and provided feedback on the OpenPERT add-in. Find out more about SIRA here.

Ref. 1 – Malcolm, D. G., J. H. Roseboom, C. E. Clark, and W. Fazar, "Application of a Technique for Research and Development Program Evaluation," Operations Research, Vol. 7, No. 5, September-October 1959, pp. 646-669.


Metricon 6 Wrap-Up

August 10, 2011

Metricon 6 was held in San Francisco, CA on August 9th, 2011. A few months ago, a few others and I were asked by the conference chair – Mr. Alex Hutton (@alexhutton) – to assist in the planning and organization of the conference. One of the goals established early on was that this Metricon needed to be different from previous Metricon events. Having attended Metricon 5, I witnessed firsthand the inquisitive and skeptical nature of the conference attendees toward speakers and toward each other. So, one of our goals for Metricon 6 was to change the culture of the conference. In my opinion, we succeeded in doing that by establishing topics that would draw new speakers and strike a happy balance between metrics, security, and information risk management.

Following are a few Metricon 6 afterthoughts…

Venue: This was my first non-military trip to San Francisco. I loved the city! The vibe was awesome! The sheer number of people made for great people-watching entertainment and so many countries / cultures were represented everywhere I went. It gave a whole new meaning to America being a melting pot of the world.

Speakers: We had some great speakers at Metricon. Every speaker did well, the audience was engaged, and while question time was limited, the speakers took some tough questions and dealt with them appropriately.

Full list of speakers and presentations…

Favorite Sessions: Three of the 11 sessions stood out to me:

Jake Kouns – Cyber Insurance. I enjoyed this talk for a few reasons: (a) it is an area of interest of mine, and (b) the talk was easy to understand. I would characterize it as an overview of what cyber insurance is [or should be], as well as some of its nuances. Keep in mind it was an overview – commercial insurance policies can be very complex, especially for large organizations. Some organizations do not buy separate "cyber insurance" policies but utilize their existing policies to cover potential claims or liability arising from operational information technology failures or other scenarios. Overall, Jake is offering a unique product, and while I would like to know more details, he appears to be well positioned in the cyber insurance product space.

Allison Miller / Itai Zukerman – Operationalizing Analytics. Alli and Itai went from 0 to 60 in about 5 seconds. They presented work that brought together data collection, modeling, and analysis in less than 30 minutes. Itai was asked a question about the underlying analytical engine, and he just nonchalantly replied 'I wrote it in Java myself' – like it was no big deal. That was hot.

Richard Lippman – Metrics for Continuous Network Monitoring. Richard gave us a glimpse of a real-time monitoring application; specifically, tracking untrusted devices on protected subnets. The demo was very impressive and probably gave a few in the room some 'metrigasms' (I heard this phrase from @mrmeritology).

People: All the attendees and speakers were cordial and professional. By the end of the day, the sense of community was stronger than what we started with. A few quick shout-outs:

Behind-the-scenes contributors / organizers. The USENIX staff helped us out a lot over the last few months. We also had some help from Richard Baker, who performed some site reconnaissance to determine video recording and streaming capabilities – thank you, sir. There were a few others who helped in selecting conference topics – you know who you are – thank you!

@chort0 and his lovely fiancée Meredith. They pointed some of us to some great establishments around Union Square. Good luck to the two of you as you go on this journey together.

@joshcorman. I had some great discussion with Josh. While we have only known each other for a few months – he has challenged me to think about questions [scenarios] that no one else is addressing.

+Wendy Nather. Consummate professional. Wendy and I have known of each other for a few years but never met in person prior to Metricon 6. We had some great conversations, both professional and personal. She values human relationships, and that is more important in my book than the social networking aspect alone.

@alexhutton & @jayjacobs – yep – it rocked. Next… ?

All the attendees. Without attendance, there is no Metricon. The information sharing, hallway collaboration and presentation questions contributed greatly to the event. Thank you!

***

So there you go everyone! It was a great event! Keep your eyes and ears open for information about the next Metricon. Consider reanalyzing your favorite conferences and if you are looking for small, intimate and stimulating conferences – filled with thought leadership and progressive mindsets – give Metricon a chance!


Standing On The Shoulders Of Giants (SOTSOG): My Parents

June 23, 2010

INFORMATION SECURITY PROFESSION TRAITS: TENACIOUS & FAITHFUL

This post is about my parents. My parents have been married for about 40 years and everyone in our family (parents, sister and I) still talks to one another!

My Dad is a Baptist minister; he has been since I was about three years old. My Mom currently works in the healthcare industry, but growing up she was a full-time mom, and as we got older she held some administrative jobs. People underestimate the demands placed upon ministers and their families. They get a lot of satisfaction from their profession, but they give more than they earn – let alone take. Our family did not have excess, but we were not poor either; the word optimal comes to mind.

TENACIOUS (Merriam Webster definition / synonyms)
My parents adhere to a way of life that was not always easy to understand growing up. I respect my parents for their resolve and desire to guard me (often against my desires) from situations that could have had undesirable consequences. However, I still managed to get myself in trouble on occasion. I would laugh when being corrected, once in a while I made remarks that were not polite, I cut a girl's hair "tail" off in the 6th grade, I liked flirting with girls – normal stuff… right?

Spanking – both in the home and in schools – was still the norm in the small town where I spent the majority of my childhood. Yep, I got spanked once in a while, and to the best of my knowledge I deserved every one of them. I preferred the hand or a ping pong paddle over a wooden spoon or a real paddle. I also learned at some point that attempting to run away or move during the paddling process could result in misplacement of the object striking me. Deep down inside I know my parents did not enjoy punishing me – they would probably never verbally admit they received some satisfaction from it – but if they ever read this, I bet they would start to crack a smile…

Even though I do not share *all* of their political, social, or spiritual views – I respect – and even admire – them for their tenaciousness.

So how does this pertain to information security or risk management? From my perspective, it is not always easy to be in this profession. Between technology changes, doubters, binary IT mindsets, a shortage of data sets, the nature of our work, and a slew of other things, it is easy to become frustrated with our profession and leave. Our profession is not a one-, two-, or three-year stroll in the park that should reward folks with extra money because they passed the CISSP exam or know a list of acronyms. We are on a journey within a profession that is still evolving and that is slowly but surely integrating itself within business management. That is where I think tenacity comes into play.

FAITHFUL (Merriam Webster definition / synonyms)

When I reflect back on the first 18 years of my life, my parents are usually the first thought that comes to mind when I think about faithfulness. With my Dad being a minister, I grew up seeing firsthand how he and my mother served the church(es) he ministered to. The word 'served' is probably not adequate to describe their commitment to a group of people for whom they would drop pretty much anything they were doing to be there in someone's time of need, regardless of the circumstances.

So how does this pertain to information security or risk management? Well, there is a lot of randomness associated with the nature of our profession. We have very little control over externally initiated security incidents, or even incidents that occur internally, no matter how awesome or weak our risk management programs are. Thus, we have to be there to deal with incidents and issues, 24x7. Faithfulness is applicable in many aspects of our lives: our personal relationships, our professional relationships, our employer, our profession, and the list goes on. There are lots of times when we as information security professionals are not popular with the teams we are helping, or with leaders outside information security in general. This is where faith comes into play. If you stick to the principles of our profession and do not get wrapped up in the emotions of others' objections to what we are here to do, you will probably prevail.

The next SOTSOG post will be about the United States Marine Corps – feel free to drop down and give them 20, plus one for those currently getting shot at – just because!

Note: I started this post on 5/30/2010. A lot of things have happened since then that make me appreciate my parents even more. My Dad was having some chest pains, and after a heart catheterization found out he had a 95% blocked artery in his heart; he had a stent put in the same day. Two days later he and my Mom made an emergency flight to Hilton Head to drive two of our relatives back to Ohio, one of whom had been admitted to the emergency room because of a blocked bowel. Yet another example of unselfish faithfulness.


Impromptu IT Risk Assessment Poll

May 25, 2010

You can select up to two answers. Thank you for participating!


Verizon – 2009 Data Breach Investigations Supplemental Report

December 9, 2009

This is no doubt one of many blog posts regarding the Verizon Business RISK Team “2009 Data Breach Investigations Supplemental Report” (DBISR). Below are a few of my thoughts.

1.    Quality of the Data. While it is neither the intent nor the spirit of the report to compare the usefulness of the information or the quality of the data to public data sources, I think it is important to recognize that the facts being collected by the Verizon team are generally more credible than the third-party sources that other public sources rely upon. In scenarios where I am trying to gather information about a breach or compiling a dataset for analysis, I am going to have a higher level of confidence in data and information from sources closer to the incident than from third parties just reporting on it. This does not mean that third-party data is not legit – I am just suggesting the quality, from an accuracy and reliability perspective, is different and should be recognized.

2.    Data Overlap. On page 23 there is a table comparing the Verizon IR breaches and records lost to the equivalent DataLossDB values (keep in mind these are point-in-time values). The question I have is: how many of the 592 breaches in Verizon's dataset are accounted for in the DataLossDB dataset? The reality is that in some US states (assuming all the breaches were in the US), data breach notification is not required, so an event can occur that does not result in breach notification to the consumer or the applicable State Attorney General. If there is a difference between Verizon and DataLossDB, it only strengthens my confidence in Verizon's data, because it contains credible data points not represented elsewhere (private consortium data aside).

3.    Threat Action "Profiles". If you have not printed pages 5-21 and posted them on your cubicle or office wall – or recommended them to your peers or other information security professionals – why not? Seriously. Threat actor and threat community profiles are such a valuable resource for security and risk practitioners to quickly reference, especially when we are dealing with dozens of threats and hundreds of controls. I can assure you that I will probably incorporate some of the DBISR "threat action" profiles into some work I am doing in this same space with my employer – good job Verizon!

4.    Industry. My final observation is related to the industry and size of companies where breaches have occurred. I have blogged about this recently, and I only mention it to remind folks that not every data point – whether it is from Verizon, DataLossDB, PrivacyRights.org, or other public or private data sources – may be relevant to your industry or your company. The reality is that there are different expectations and regulatory requirements between industries, and you have to keep that in mind while drawing conclusions from these types of reports.

Overall, two thumbs up to the Verizon Business RISK team. I commend them on their willingness to share this information and their desire to influence our industry as a whole.


Working With External Data (Part 1 of X)

November 21, 2009

In early October I began reviewing three external data repositories containing "loss event" data. I think it is important to state that what you are about to read is the result of me being guided by a real risk modeler at the company I work for. Modelers are very methodical, consistent, and have high expectations of quality – sort of like engineers. I understand information security; he understands modeling. I get to do the mundane work; he gets to build the mathematical relationships and distributions. No matter what, though, I have to be able to explain everything in the model as well as maintain it moving forward. Thus, in this series, I want to share some observations and lessons I learned from the "gathering external data" exercise.

Really understand what you are looking to get from the data.
It is too easy to jump into these data sets, perform some simple statistical calculations, and then communicate outrageous findings to an audience. For my employer's purposes, we wanted to use "some" of this external data in a loss model – specifically, to help establish a distribution of the possible number of records that could be lost, and the potential loss magnitude per event, in various types of security incidents. (Notice I said possible, not probable.) The reality is that most companies do not have dozens, let alone hundreds, of loss events from which to develop loss models without using external data. So, one of the benefits of using external data in a loss model is that it can really help in understanding worst-case loss magnitude, also known as "tail risk"; internal data may influence the mean of a loss model more. For two of the data sources – dataloss.org and privacyrights.org – the number of records lost was the key data point. For the third, non-public data consortium source, the cost of security-related events (not necessarily data loss events) was the most useful. Below are some considerations for narrowing the number of data points in a data set from "all" to "some"; a code sketch of these filtering steps follows the list.

a.    Time. Technology and the regulatory landscape change quickly. Thus, it is preferable to limit data points to a period where a minimum level of technology can be assumed, as well as a consistent expectation of regulatory and industry standard requirements. For our purposes, we only chose data points dating back to 2005. Again, this time range will vary from model to model, person to person, company to company, and industry to industry.

Note 1: One record in the dataloss.org set goes back to 1903. Seriously.

Note 2: In the dataloss.org data set dated 9/30/2009, there were 2013 data points. Using only records from 2005 to 9/30/2009 reduced the set down to 1945.

b.    Good Fit. Not all data points are a good fit for your analysis. Security control expectations vary from industry to industry. Thus, you need a way of methodically reviewing data points to determine which ones are a good fit. Below are just a few considerations:

i.    Industry. Most data sets are not industry specific, so they contain data points spanning all kinds of industries. The transportation industry has a different value proposition than the financial services industry. So, depending on your model, points outside your industry may not be relevant.

ii.    Service or Value Proposition. Somewhat related to industry, but some services and value propositions are shared between industries. Think of health care insurance and property and casualty insurance: both industries have to protect confidential information. This does not mean that if I am in the financial services industry I would include ALL healthcare industry data points – it just means I am acknowledging there is a shared value proposition, and that some data points, depending on the loss form, can be used for my purposes.

iii.    Loss Form Categories. When I talk about loss form categories, I am referring mostly to the Basel II Operational Risk Categories (Level 1): "Internal Fraud", "External Fraud", "Employment Practices and Workplace Safety", "Clients, Products & Business Practices", "Damage to Physical Assets", "Business Disruption and System Failures", and "Execution, Delivery & Process Management". Most data loss events will only map to a few of these categories, and in some instances these categories may not even be applicable to your needs, your company, or your industry – but classifying each data point to one of these categories, or to another category framework more relevant to your company or industry, allows you to refine your data set in a methodical and unbiased manner.

Note 3: After applying my good fit criteria, the total number of dataloss.org data points I am using for my model has been reduced from 1945 (note 2 above) down to 84.

Note 4: Of those 84 data points: 9 data points were categorized as “Internal Fraud”, 37 were categorized as “External Fraud” and 38 were categorized as “Execution, Delivery, and Process Management”.

c.    Duplicate Records. When you are using multiple data sets, you have to assume there is duplication of data points between them. This was definitely the case for the dataloss.org and privacyrights.org data sets. To compound matters, expect that for a certain percentage of duplicate data points the details will differ. This is not a big deal – just understand that you will have duplicate data points and will have to choose one of them.

Note 5: OK, there could be some duplicates where the variance in details is so wide that there is neither time to determine which one is more correct nor a valid source to determine which one is more accurate; in that case you could throw them both out.

d.    Consistency. You have to be consistent in your approach to reviewing data points. Distributing the work among numerous people can be problematic if they are not all aligned on the goals of the exercise and properly calibrated on determining whether a data point meets the criteria for inclusion.
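
To make the considerations above concrete, here is a minimal pandas sketch of the same filtering sequence. The file name, column names, and category mapping are all invented for illustration – the real dataloss.org and privacyrights.org exports have their own schemas:

```python
import pandas as pd

# Hypothetical file and column names; real exports from these sources differ.
events = pd.read_csv("external_loss_events.csv", parse_dates=["breach_date"])

# a. Time: keep only events from 2005 onward.
events = events[events["breach_date"] >= "2005-01-01"]

# b. Good fit: keep industries with a shared value proposition, then map each
#    event to a Basel II Level 1 loss form category (mapping is illustrative).
relevant_industries = {"Financial Services", "Insurance", "Healthcare"}
events = events[events["industry"].isin(relevant_industries)]

loss_form_map = {
    "StolenLaptop": "Execution, Delivery & Process Management",
    "Hack": "External Fraud",
    "InsideJob": "Internal Fraud",
}
events["loss_form"] = events["breach_type"].map(loss_form_map)
events = events.dropna(subset=["loss_form"])  # drop events that do not map

# c. Duplicates: when combining sources, treat events with the same organization
#    and date as one data point and keep a single copy.
events = events.drop_duplicates(subset=["organization", "breach_date"])

print(events.groupby("loss_form")["records_lost"].describe())
```

The point is less the specific columns and more that every exclusion is an explicit, repeatable rule rather than an ad-hoc judgment.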

In the next post, I will focus more on “right-sizing” data points. In other words, adjusting data points to be commensurate with your particular company.

Note 6: Please do not take any of my remarks about dataloss.org or privacyrights.org having errors to be an attack against the fine folks that are maintaining those data sets. My intent for raising these points is related to taking personal responsibility for knowing the data points you are using to derive information from. It is too easy for our business partners and even others in the security industry to raise the “garbage in garbage out” argument when trying to understand risk or loss models.