Personal Risk Management

November 4, 2011

Somewhere between self-improvement, the feedback process, perception management, and total quality management (TQM) is a lesson to be learned and an opportunity for introspection. I want [need] to document a few thoughts about the intersection of these concepts based on recent personal and professional experiences.

Self-Improvement. At some point while serving in the Marine Corps, it became very obvious that there were three performance paths: be a bad performer and let the system make your life a living heck; be an average performer and let the system carry you along; or be a stellar performer and push the system to its limits and possibly change it. I have always chosen to chase after stellar, and it has worked pretty well for me over the years. However, in some professions, to maintain stellar status you have to constantly be seeking self-improvement.

Feedback. The term feedback means different things depending on the context in which it is used. I find the act of feedback to be challenging on both the giving end and the receiving end – especially when the feedback is not complimentary. I have had both great and absolutely horrendous experiences – as an actor in both roles. The reality is that having feedback mechanisms in place – whether formal or informal – is critical, regardless of the merit of the feedback or how the feedback was communicated. More on this later when I attempt to tie all of this together.

Perception Management. Perception is reality to most people, regardless of the facts. Anyone who is actively managing their career or personal life probably cares about perception. Furthermore, we probably want to be in control of how people perceive our actions, thoughts, attitudes and even mannerisms – lest it be established by others.

Total Quality Management. My current school studies revolve around operations management – specifically quality improvement, TQM, Six Sigma, etc. There are concepts within TQM that can be applied to various dimensions of our lives: personal, professional, ethical, moral, giving, etc. Without going down a rabbit hole, I am convinced that quality improvement concepts allow us to construct guard rails (control limits) for the aforementioned dimensions.

So how does all of this tie together?

If you are serious about self-improvement and managing perception, you have to embrace feedback and, when a feedback opportunity presents itself (with me as the recipient), consider whether you are approaching a quality limit. You may not agree with the merit of the feedback or with the delivery mechanism, but you have to listen to – not just hear – what is being communicated. This is really hard to do sometimes, and how we react to the feedback experience can destroy relationships and further erode trust. When it comes to constructive criticism – if someone is taking the time to give it, regardless of its validity – could this possibly be an indicator that we are approaching some of our quality limits, whether we have defined them or not?

For example, here are two commonly used rules for determining whether a process is out of control (a toy sketch of both follows the list):
1.    A single point outside the control limits.
2.    Obvious consistent or persistent patterns that suggest that there is something unusual about the data.
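For the literal-minded, here is a toy Python sketch of those two rules. It assumes 3-sigma control limits and uses "eight points in a row on one side of the center line" as one common stand-in for a persistent pattern; the numbers are purely illustrative, not from any real process.

```python
def out_of_control(data, center, sigma, run_length=8):
    """Return True if the series trips either of the two rules above."""
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    # Rule 1: a single point outside the control limits
    rule1 = any(x > ucl or x < lcl for x in data)
    # Rule 2 (one common form of a "persistent pattern"): run_length points
    # in a row on the same side of the center line
    rule2, run, side = False, 0, 0
    for x in data:
        s = 1 if x > center else (-1 if x < center else 0)
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            rule2 = True
    return rule1 or rule2

print(out_of_control([4.9, 5.1, 5.0, 9.2, 5.05], center=5.0, sigma=1.0))  # True (rule 1)
```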

Keeping these two rules in mind, we can go through this exercise of introspection. Such an exercise requires one to put pride on the shelf, set aside emotions, and really try to flesh out the opportunity for self-improvement. And if all this can be done with a redemptive mindset – better yet. At the end of such an exercise, there are one or more questions we should strive to answer:

1.    Is there something minor I can improve on? Is a slight adjustment needed to pull me back from the guard rails or better manage perception?
2.    Is there something major going on that calls for a massive adjustment? Is there really a fire that is producing all this feedback smoke?
3.    Was I a good partner in the feedback process? Did I listen? Did I have a redemptive mindset?

Hear me, folks – this topic and what I have outlined are not something I consider myself to be a stellar example of. However, I do care about self-improvement, managing perception, and adhering to quality in the execution of my responsibilities, and I will strive to keep what I have outlined in mind moving forward.

That’s it.


OpenPERT – A FREE Add-In for Microsoft Office Excel

August 15, 2011

INTRODUCTION. In early June of this year, Jay Jacobs and I started having a long email / phone call discussion about risk modeling, model comparisons, descriptive statistics, and risk management in general. At some point in our conversation, the topic of Excel add-ins came up and how nice it would be NOT to have to rely upon 3rd-party add-ins that cost between hundreds and thousands of dollars to acquire. You can sort of think of the 80-20 rule when it comes to out-of-the-box Excel functionality – though it is probably more like 95-5 depending on your profession – most of the functionality you need to perform analysis is there. However, there are at least two capabilities not included in Excel that are useful for risk modeling and analysis: the betaPERT distribution and Monte Carlo simulation. Thus the need for costly 3rd-party add-ins – or a free alternative, the OpenPERT add-in.

ABOUT BETAPERT. You can get very thorough explanations about the betaPERT distribution here, here, and here. What follows is the ‘cliff notes’ version. The betaPERT distribution is often used for modeling subject matter expert estimates in scenarios where there is no data or not enough of it. The underlying distribution is the beta distribution (which is included in Microsoft Office Excel). If we can oversimplify and define a distribution as a collection or range of values, then the betaPERT distribution – when given three values: minimum, most likely (think mode), and maximum – will create a distribution of values (output) that can then be used for statistical analysis and modeling. By introducing a fourth parameter – which I will refer to as confidence in the ‘most likely’ estimate – we can account for the kurtosis, or peakedness, of the distribution.
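For anyone who wants to poke at the distribution outside Excel, here is a minimal Python sketch of the commonly published modified-PERT parameterization – illustrative only, not the OpenPERT VBA code itself. The fourth parameter (lam, the ‘confidence’ knob, with the classic PERT default of 4) controls how strongly the output peaks around the most likely value.

```python
import numpy as np
from scipy.stats import beta

def pert_sample(minimum, most_likely, maximum, lam=4.0, size=10_000):
    """Sample a modified betaPERT(min, most likely, max) distribution."""
    span = maximum - minimum
    a = 1 + lam * (most_likely - minimum) / span   # shape alpha of the underlying beta
    b = 1 + lam * (maximum - most_likely) / span   # shape beta of the underlying beta
    # Draw from the underlying beta on [0, 1], then rescale to [minimum, maximum]
    return minimum + span * beta.rvs(a, b, size=size)

samples = pert_sample(10, 25, 90)   # e.g. low / most likely / high estimate
print(round(samples.mean(), 1), round(np.percentile(samples, 90), 1))
```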

WHO USES BETAPERT? There are a few professions and disciplines that leverage the betaPERT distribution:

Project Management – The project management profession is most often associated with betaPERT. PERT stands for Program (or Project) Evaluation and Review Technique. PERT was developed by the Navy and Booz Allen Hamilton back in the 1950s (ref. 1; see below) as part of the Polaris missile program. Anyway, it is often used today in project management for project / task planning, and I believe it is covered as part of the PMP certification curriculum.

Risk Analysis / Modeling – There are some risk analysis scenarios where, due to a lack of data, estimates are used to bring form to components of scenarios that factor into risk. The FAIR methodology – specifically, some tools that leverage the FAIR methodology as applied to IT risk – is one example of using betaPERT for risk analysis and risk modeling.

Ad-Hoc Analysis – There are many times when having access to a distribution like betaPERT is useful outside the disciplines listed above. For example, if a baker is looking to compare the price of her/his product with the rest of the market, data could be collected, a distribution created, and analysis could occur. Or maybe a church is analyzing its year-to-year growth and wants to create a dynamic model that accounts for both probable growth and shrinkage – betaPERT can help with that as well.

OPENPERT ADD-IN FOR MICROSOFT OFFICE EXCEL. Jay and I developed the OpenPERT add-in as an alternative to paying money to leverage the betaPERT distribution. Of course, we underestimated the complexity of not only creating an Excel add-in but also working with the distribution itself and the specific cases where divide-by-zero errors can occur. That said, we are very pleased with version 1.0 of OpenPERT and are excited about future enhancements as well as releasing examples of problem scenarios that are better understood with betaPERT analysis. Version 1.0 has been tested on Microsoft Office Excel 2007 and 2010, on both 32-bit and 64-bit Microsoft Windows operating systems. Version 1.0 of OpenPERT is not supported on ANY Microsoft Office for Mac products.

The project home of OpenPERT is here.

The downloads page is here. Even if you are familiar with the betaPERT distribution, please read the reference guide before installing and using the OpenPERT add-in.

Your feedback is welcome via support@openpert.org

Finally – on behalf of Jay and myself – a special thank you to the members of the Society of Information Risk Analysts (SIRA) who helped test and provided feedback on the OpenPERT add-in. Find out more about SIRA here.

Ref. 1 – Malcolm, D. G., J. H. Roseboom, C. E. Clark, and W. Fazar, “Application of a Technique for Research and Development Program Evaluation,” Operations Research, Vol. 7, No. 5, September–October 1959, pp. 646–669.


Metricon 6 Wrap-Up

August 10, 2011

Metricon 6 was held in San Francisco, CA on August 9th, 2011. A few months ago, I and a few others were asked by the conference chair – Mr. Alex Hutton (@alexhutton) – to assist in the planning and organization of the conference. One of the goals established early on was that this Metricon needed to be different from previous Metricon events. Having attended Metricon 5, I witnessed firsthand the inquisitive and skeptical nature of the conference attendees towards speakers and towards each other. So, one of our goals for Metricon 6 was to change the culture of the conference. In my opinion, we succeeded in doing that by establishing topics that would draw new speakers and strike a happy balance between metrics, security and information risk management.

Following are a few Metricon 6 after-thoughts…

Venue: This was my first non-military trip to San Francisco. I loved the city! The vibe was awesome! The sheer number of people made for great people-watching entertainment and so many countries / cultures were represented everywhere I went. It gave a whole new meaning to America being a melting pot of the world.

Speakers: We had some great speakers at Metricon. Every speaker did well, the audience was engaged, and while questions were limited due to time – they took some tough questions and dealt with them appropriately.

Full list of speakers and presentations…

Favorite Sessions: Three of the 11 sessions stood out to me:

Jake Kouns – Cyber Insurance. I enjoyed this talk for a few reasons: a. it is an area of interest of mine and b. the talk was easy to understand. I would characterize it as an overview of what cyber insurance is [should be] as well as some of the nuances. Keep in mind it was an overview – commercial insurance policies can be very complex, especially for large organizations. Some organizations do not buy separate “cyber insurance” policies but utilize their existing policies to cover potential claims / liability arising from operational information technology failures or other scenarios. Overall, Jake is offering a unique product, and while I would like to know more details, he appears to be well positioned in the cyber insurance product space.

Allison Miller / Itai Zukerman – Operationalizing Analytics. Alli and Itai went from 0 to 60 in about 5 seconds. They presented some work that brought together data collection, modeling and analysis – in less than 30 minutes. Itai was asked a question about the underlying analytical engine used – and he just nonchalantly replied ‘I wrote it in Java myself’ – like it was no big deal. That was hot.

Richard Lippman – Metrics for Continuous Network Monitoring. Richard gave us a glimpse of a real-time monitoring application; specifically, tracking untrusted devices on protected subnets. The demo was very impressive and probably gave a few in the room some ‘metrigasms’ (I heard this phrase from @mrmeritology).

People: All the attendees and speakers were cordial and professional. By the end of the day, the sense of community was stronger than what we started with. A few quick shout-outs:

Behind-the-scenes contributors / organizers. The Usenix staff helped us out a lot over the last few months. We also had some help from Richard Baker who performed some site reconnaissance in an effort to determine video recording / video streaming capabilities – thank you sir. There were a few others that helped in selecting conference topics – you know who you are – thank you!

@chort0 and his lovely fiancée Meredith. They pointed some of us to some great establishments around Union Square. Good luck to the two of you as you go on this journey together.

@joshcorman. I had some great discussions with Josh. While we have only known each other for a few months, he has challenged me to think about questions [scenarios] that no one else is addressing.

+Wendy Nather. Consummate professional. Wendy and I have known of each other for a few years but never met in person prior to Metricon 6. We had some great conversation, both professional and personal. She values human relationships, and that is more important in my book than just the social networking aspect.

@alexhutton & @jayjacobs – yep – it rocked. Next… ?

All the attendees. Without attendance, there is no Metricon. The information sharing, hallway collaboration and presentation questions contributed greatly to the event. Thank you!

***

So there you go everyone! It was a great event! Keep your eyes and ears open for information about the next Metricon. Consider reanalyzing your favorite conferences and if you are looking for small, intimate and stimulating conferences – filled with thought leadership and progressive mindsets – give Metricon a chance!


Risk Vernacular Update

August 2, 2011

It has been a few years since I updated the “risk vernacular” portion of this blog. Based on some college-level insurance and risk management courses as well as some work I am doing in the operational risk management space, there are some new terms I wanted to share, as well as updates to some existing terms based on new information / knowledge. If it has been a while since you reviewed the page, take a few minutes to look it over. Enjoy!

BTW, I will be in San Francisco on August 9th and 10th for Metricon 6.


What’s Your Target?

May 19, 2011

It has been a while since I publicly blogged. Between family, work, school, podcasting, helping run the Society of Information Risk Analysts (SIRA) and some public speaking – time has been limited. I want to briefly write about targets today.

I have had the privilege to speak twice in the month of May. The first engagement was at Secure360, an awesome regional information security conference based out of St. Paul, Minnesota. Mr. Jack Jones and I partnered up to give a talk about having a ‘seat at the table’. Specifically, speaking in a language that our IT and business leaders understand, establishing perspective, gaining influence, and providing value to our leaders so they can effectively manage risk. The talk appeared to be well received, and there have been a few follow-up conversations with information security professionals who want to up their game – which was the point to begin with.

Earlier this week I had the privilege to speak about IT risk management – specifically IT risk quantification – as part of the ‘CIO Practicum’ series at the University of Kentucky. The theme of this particular event was “Security for Grown-Ups”. I found myself in a room of IT and business executives who came to get a glimpse of how information risk management functions can begin adding value to the business or organization. My take-away from the event was that IT and business executives are craving value-add from information risk management functions (security, continuity management, compliance, etc.). Let me repeat in bold capital letters: IT AND BUSINESS EXECUTIVES ARE CRAVING VALUE FROM INFORMATION RISK MANAGEMENT FUNCTIONS.

So here is the dilemma. Information risk management professionals want to add value and our IT and business executives want [expect] value. How can we achieve goodness?

In order to achieve goodness, you and your leadership have to define it for your organization – you have to have a vision or a target to direct your efforts toward. That requires relationship building with your leadership and executives to develop mutual trust, perspective, and a shared understanding of why the organization exists, how the information risk management function fits into the organization, and how that function contributes to helping the organization reach its goals and fulfill its strategy.

If you are an information risk practitioner or a security, continuity management or compliance professional – what is the target that the outcomes of your efforts are directed towards? If you don’t know, figure it out quickly. Better yet, if your manager or other leaders cannot tell you, then be proactive and work with your leadership to help define it.

If you are an IT or business executive that happened to stumble on this blog post – let me ask you a question. Have you established a vision or target for your information risk management function(s) to direct their efforts toward? If so – how is it working out? Is value being added? If a vision or target has not been established, why not?

Thoughts?


Deconstructing Some HITECH Hype

February 23, 2011

A few days ago I began analyzing some model output and noticed that the amount of loss exposure for ISO 27002 section “Communications and Operations Management” had increased by 600% in a five-week time frame. It only took a few seconds to zero in on an issue that was responsible for the increase.

The issue was related to a gap with a 3rd party that carried some Health Information Technology for Economic and Clinical Health Act (HITECH) fine exposure. The estimated HITECH fines were really LARGE. Large in the sense that the estimates:

a.    Did not pass the sniff test
b.    Could not be justified based on any documented fines or statutes.
c.    From a simulation perspective, completely skewed the average simulated expected loss value for the scenario itself.

I reached out to better understand the rationale of the practitioner who performed the analysis, and after some discussion we agreed that additional analysis was warranted to accurately represent assumptions and to refine loss magnitude estimates – especially for potential HITECH fines. About 10 minutes of additional information gathering yielded valuable information.

In a nutshell, the HITECH penalty structure is a tiered system that takes into consideration the nature of the data breach, the fine per violation and maximum amounts of fines for a given year. See below (the tier summary is from link # 2 at the bottom of this post; supported by links # 1 and 3):

Tier A is for violations in which the offender didn’t realize he or she violated the Act and would have handled the matter differently if he or she had. This results in a $100 fine for each violation, and the total imposed for such violations cannot exceed $25,000 for the calendar year.

Tier B is for violations due to reasonable cause, but not “willful neglect.” The result is a $1,000 fine for each violation, and the fines cannot exceed $100,000 for the calendar year.

Tier C is for violations due to willful neglect that the organization ultimately corrected. The result is a $10,000 fine for each violation, and the fines cannot exceed $250,000 for the calendar year.

Tier D is for violations of willful neglect that the organization did not correct. The result is a $50,000 fine for each violation, and the fines cannot exceed $1,500,000 for the calendar year.
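To make the arithmetic concrete, here is a hypothetical Python sketch of that tier table: the fine per violation times the violation count, capped at the calendar-year maximum. The figures come straight from the summary above; this is an illustration for estimation purposes, not legal guidance.

```python
# Tiered penalty figures as summarized in this post (per-violation fine and
# calendar-year cap). Hypothetical helper for rough loss-magnitude estimates.
HITECH_TIERS = {
    "A": {"per_violation": 100,    "annual_cap": 25_000},
    "B": {"per_violation": 1_000,  "annual_cap": 100_000},
    "C": {"per_violation": 10_000, "annual_cap": 250_000},
    "D": {"per_violation": 50_000, "annual_cap": 1_500_000},
}

def estimated_fine(tier: str, violations: int) -> int:
    """Fine per violation times violation count, capped at the annual maximum."""
    t = HITECH_TIERS[tier]
    return min(t["per_violation"] * violations, t["annual_cap"])

# e.g. 400 Tier B violations hit the calendar-year cap, not $400,000
print(estimated_fine("B", 400))   # 100000
```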

Given this information and the nature of the control gap, one can quickly determine the penalty tier as well as estimate fine amounts. The opportunity cost to gather this additional information was minimal, and the additional analysis will not only result in more accurate and defensible analysis but also spare the risk practitioner from what would have been certain scrutiny from other IT risk leaders – and possibly business partner allegations of IT Risk Management once again “crying wolf”.

Key Take-Away(s)

1.    Perform sniff tests on your analysis; have others review your analysis.
2.    There is probably more information than you realize about the problem space you are dealing with.
3.    Be able to defend assumptions and estimates that you make.
4.    Become the “expert” on the problem space, not a repeater of information that may not be valid to begin with.

Links / References associated with this post:

1.    HIPAA Enforcement Rule ref. HITECH <- lots of legalese
2.    HITECH Summary <- less legalese
3.    HITECH Act scroll down to section 13410 for fine information <-lots of legalese
4.    Actual instance of a HITECH-related fine
5.    Interesting Record Loss Threshold Observation; Is 500 records the magic number?


Simple Risk Model (Part 4 of 5): Simulating both Loss Frequency & Loss Magnitude

February 5, 2011

Part 1 – Simulate Loss Frequency Method 1
Part 2 – Simulate Loss Frequency Method 2
Part 3 – Simulate Loss Frequency Method 3

In this post we want to combine the techniques demonstrated in parts two and three into a single simulation. To accomplish this simulation we will:

1.    Define input parameters
2.    Introduce VBA code – via a macro – that consumes the input parameters
3.    Perform functions within the VBA code
4.    Take the output from those functions and store it in the spreadsheet
5.    Create a histogram of the simulation output.

Steps 3 & 4 will be performed many times, depending on the number of iterations we want to perform in our simulation.

You can download this spreadsheet to use as a reference throughout the post. The spreadsheet should be used in Excel only. The worksheets we are concerned with are:

test – This worksheet contains code that will step through each part of the loss magnitude portion of the simulation. By displaying this information, it allows you to validate that both the code and the calculations are functioning as intended. This tab is also useful for testing code in small iterations. Thus, the number of iterations should be kept fairly low (“test”; B1).

prod – Unlike the “test” tab, this tab does not display the result of each loss magnitude calculation per iteration. This is the tab that you would want to run the full simulation on; thousands of iterations.

Here we go…and referencing the “prod” worksheet…

Input Parameters.
Expected Loss Frequency. It is assumed for this post that you have estimated or derived a most likely or average loss frequency value. Cell B2 contains this value. The value in this cell will be one of the input parameters into a POISSON probability distribution to return an inverse cumulative value (Part 2 of this Series).

Average Loss Magnitude. It is assumed for this post that you have estimated or derived a most likely or average loss magnitude value. Cell B3 contains this value. The value in this cell will be one of the input parameters into a NORMAL probability distribution to return an inverse cumulative value (Part 3 of this Series).

Loss Magnitude Standard Deviation. It is assumed for this post that you have estimated or derived the standard deviation for loss magnitude. Cell B4 contains this value. The value in this cell will be one of the input parameters into a NORMAL probability distribution to return an inverse cumulative value (Part 3 of this Series).

The Simulation.
On the “prod” tab, clicking the button labeled “Prod” executes a macro composed of VBA code. I will let you explore the code on your own – it is fairly intuitive. I also left a few comments in the VBA so I remember what certain sections of the code are doing. There are four columns of simulation output that the macro will generate.

Iter# (B10). This is the iteration number. In cell B1 we set the number of iterations to be 5000. Thus, the VBA will cycle through a section of its code 5000 times.

LEF Random (C10). For each iteration, we will generate a random value between 0 and 1 to be used in generating a loss frequency value. Displaying the random value in the simulation is not necessary, but I prefer to see it so I can informally analyze the random values themselves and gauge the relationship between the random value and the inverse cumulative value in the next cell.

LEF Value (D10). For each iteration, we will use the random value we generated in the adjacent cell (column C), combine it with the Expected Loss Frequency value declared in B2, and input these values as parameters into a POISSON probability distribution that returns an inverse cumulative value. The value returned will be an integer – a whole number. Why a whole number? Because you can’t have half a loss event – just like a woman cannot be half pregnant (<- one of my favorite analogies). This is a fairly important concept to grasp from a loss event modeling perspective.
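Here is a Python sketch of that step (the workbook itself does this in VBA with Excel’s worksheet functions): draw a uniform random number and push it through the Poisson inverse CDF, using the expected loss frequency as the mean. The returned value is always a whole number of loss events. The parameter value is a stand-in, not from any particular workbook.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng()
expected_loss_frequency = 2.0                                 # stand-in for cell B2

u = rng.random()                                              # the "LEF Random" column
lef_value = int(poisson.ppf(u, mu=expected_loss_frequency))   # the "LEF Value" column
print(u, lef_value)
```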

Loss Magnitude (E10). For each iteration, we will consume the value in the adjacent cell (column D) and apply the following logic to it (a Python sketch of the same logic follows the note below).

a.    If the LEF Value = 0, then the loss magnitude is zero.
b.    If the LEF Value > 0, then for each instance of loss we will:
1.    Generate a random value
2.    Consume the average loss magnitude value in cell B3
3.    Consume the loss magnitude standard deviation in cell B4
4.    Use the values referenced in 1-3 as input parameters into a normal probability distribution and return an inverse cumulative value. In other words, given a normal distribution with a mean of $2,000 and a standard deviation of $1,000, what is the value of the distribution at the point given by a random value between 0 and 1?
5.    We will add all the instances of loss for that iteration and record the sum in column E.

Note: Steps 4 and 5 can be observed on the “test” worksheet by clicking the button labeled “test”.
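As promised, here is a Python sketch of that per-iteration magnitude logic (again, the workbook does this in VBA); the mean and standard deviation values are stand-ins for cells B3 and B4.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng()
avg_loss_magnitude = 2000.0                   # stand-in for cell B3
loss_magnitude_sd = 1000.0                    # stand-in for cell B4

def iteration_loss(lef_value: int) -> float:
    if lef_value == 0:
        return 0.0                            # rule a: no loss events, no loss
    # rule b: one inverse-cumulative draw per loss event, summed for the iteration
    losses = norm.ppf(rng.random(lef_value),
                      loc=avg_loss_magnitude,
                      scale=loss_magnitude_sd)
    return float(losses.sum())

print(iteration_loss(3))
```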

The code will continue to loop until we have completed the number of iterations we specified in cell B1.

The Results. Now that the simulation is complete we can begin to analyze the output.

# of Iterations With No Loss (B5). This is the number of iterations where the returned inverse cumulative value was zero.

# of Iterations With Loss (B6). This is the number of iterations where the returned inverse cumulative value was greater than zero.

# of Loss Events (B7). This is the sum of loss events for all iterations. There were some iterations with more than one loss event.

Max. # of Loss Events for an iteration (B8). This is the maximum number of loss events for any given iteration.

Next, let’s look at some of the simulation output in the context of loss severity; $.

Min. Loss (K6). This is the minimum loss value returned from the simulation. I round the results to the nearest hundred in the worksheet.

Max. Loss (K7). This is the maximum loss value returned from the simulation. I round the results to the nearest hundred in the worksheet.

Median (G5). This is the 50th percentile of the simulation results. In other words, 50% of the simulation results were equal to or less than this value.

Average (G6). This is the average loss value for the simulation – the sum of all the loss magnitude values divided by the number of iterations. This value can quickly be compared to the median to make inferences about the skew of the simulation output.

80th % (G7). This is the 80th percentile of the simulation results. In other words, 80% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-5 loss.

90th % (G8). This is the 90th percentile of the simulation results. In other words, 90% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-10 loss.

95th % (G9). This is the 95th percentile of the simulation results. In other words, 95% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-20 loss.

99th % (G10). This is the 99th percentile of the simulation results. In other words, 99% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-100 loss.

Note 2: Generally speaking, the 95th, 99th and greater percentiles are often considered part of the tail of the loss distribution. I consider all the points in cells G5:G10 to be useful. For some loss exposures, the median and average values are more than enough to make informed decisions. For others, the 80th, 90th, 95th, 99th and even larger percentiles are necessary.
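For readers working outside Excel, the same summary statistics fall out of a few lines of NumPy. The quick stand-in simulation below reuses the illustrative parameters from the sketches above (expected frequency 2, mean loss $2,000, standard deviation $1,000) – it is not tied to any particular workbook.

```python
import numpy as np
from scipy.stats import poisson, norm

rng = np.random.default_rng(7)
iterations = 5000
# Loss events per iteration, then summed loss magnitude per iteration
lef = poisson.ppf(rng.random(iterations), mu=2.0).astype(int)
losses = np.array([norm.ppf(rng.random(n), loc=2000, scale=1000).sum() if n else 0.0
                   for n in lef])

summary = {
    "Median (50th)":   np.percentile(losses, 50),
    "Average":         losses.mean(),
    "80th (1-in-5)":   np.percentile(losses, 80),
    "90th (1-in-10)":  np.percentile(losses, 90),
    "95th (1-in-20)":  np.percentile(losses, 95),
    "99th (1-in-100)": np.percentile(losses, 99),
}
for name, value in summary.items():
    print(f"{name}: ${value:,.0f}")
```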

Simulated Loss Magnitude Histogram. A histogram is a graphical representation showing the distribution of data. The histogram in the “prod” worksheet represents the distribution of data for all iterations where the loss was greater than zero.
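And a matplotlib sketch of the same kind of histogram, reusing the losses array from the snippet above and keeping only the iterations where the loss was greater than zero.

```python
import matplotlib.pyplot as plt

nonzero_losses = losses[losses > 0]           # drop the no-loss iterations
plt.hist(nonzero_losses, bins=50)
plt.xlabel("Simulated loss ($)")
plt.ylabel("Number of iterations")
plt.title("Simulated Loss Magnitude Histogram")
plt.show()
```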

Wrap Up. What I have presented in this post is a very simple model for a single loss exposure using randomness and probability distributions. Depending on your comfort level with VBA and your creativity, you can easily build out more complex models – whether you want to model hundreds of loss exposures or just a few dependent ones.