Reputation FUD (Fear, Uncertainty and Doubt)

August 6, 2014

***

Key Point: Use of a reputation taxonomy – similar to Harris Interactive’s “Reputation Quotient” methodology – can enhance risk analysis where reputation is a factor, and can increase the value risk practitioners provide to their organizations by enabling more informed risk decisions.

***

Within the risk management discipline there is an activity called “risk analysis” that usually entails understanding the drivers – or dimensions – of frequency and severity for potential adverse events. One common severity dimension is reputation, defined by Oxford Dictionaries (n.d.) as “the beliefs or opinions that are generally held about someone or something”. Intuitively we all know what reputation is, but putting your finger on what it is – for a business – can be challenging.

Over the years, I have seen many approaches to accounting for a reputation impact. Some methodologies encourage estimating a dollar value associated with customer migration, expenses to combat negative publicity, or expenses to make customers whole when an adverse event has occurred. Other practices assign ordinal values of reputation badness. Some companies have built reputation correlation factors into their risk models for scenarios with a potential reputation impact. Finally, some companies simply annotate reputation as a loss driver but do not try to estimate or measure any severity factors. While all of these approaches have merit, all too often I see reputation discussed or analyzed in the form of emotions: this potentially bad thing feels bad, therefore we will look bad. There has to be a more logical way to analyze reputation, regardless of whether the method is quantitative or qualitative in nature.

I recently read a Harvard Business Review blog post regarding reputation, in which Loeb and McNulty (2014) referenced a reputation scoring methodology called “Reputation Quotient” (RQ) developed by Harris Interactive [Harris] (2014). I began digging around Harris’ site and found “The Harris Poll 2013 RQ Summary Report” (Harris, 2013), in which they detail the RQ dimensions and variables. Here they are:

Emotional Appeal:
• Feel Good About
• Admire and Respect
• Trust

Products & Services:
• High Quality
• Innovative
• Value for Money
• Stands Behind

Workplace Environment:
• Rewards Employees Fairly
• Good Place to Work
• Good Employees

Financial Performance:
• Outperforms Competitors
• Record of Profitability
• Low Risk Investment
• Growth Prospects

Vision & Leadership:
• Market Opportunities
• Excellent Leadership
• Clear Vision for the Future

Social Responsibility:
• Supports Good Causes
• Environmental Responsibility
• Community Responsibility

The factors outlined by Harris Interactive can help us as risk practitioners talk more intelligently about reputation as part of our risk analysis – especially if our employers participate in the Reputation Quotient Survey (as mine does). For any scenario we are analyzing where there is a potential reputation impact, we can ask ourselves whether the adverse event lends itself to violating one of these factors. My intuition is that many assumptions about reputation would be challenged using such an approach. In addition, the RQ variables may be ideal candidates for factors in a quantitative statistical model to better understand severity.
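One simple way to make that questioning repeatable – and this is only a sketch of the idea, not part of the RQ methodology itself – is to treat the taxonomy as an explicit checklist. A minimal Python example, with entirely hypothetical analyst judgments:

```python
# The RQ dimensions and attributes from the 2013 summary report, expressed as a checklist.
RQ_TAXONOMY = {
    "Emotional Appeal": ["Feel Good About", "Admire and Respect", "Trust"],
    "Products & Services": ["High Quality", "Innovative", "Value for Money", "Stands Behind"],
    "Workplace Environment": ["Rewards Employees Fairly", "Good Place to Work", "Good Employees"],
    "Financial Performance": ["Outperforms Competitors", "Record of Profitability",
                              "Low Risk Investment", "Growth Prospects"],
    "Vision & Leadership": ["Market Opportunities", "Excellent Leadership",
                            "Clear Vision for the Future"],
    "Social Responsibility": ["Supports Good Causes", "Environmental Responsibility",
                              "Community Responsibility"],
}

def impacted_dimensions(violated_attributes: set[str]) -> dict[str, list[str]]:
    """Map the attributes an analyst believes a scenario violates back to RQ dimensions."""
    return {
        dimension: [a for a in attributes if a in violated_attributes]
        for dimension, attributes in RQ_TAXONOMY.items()
        if any(a in violated_attributes for a in attributes)
    }

# Hypothetical judgment call for a customer data breach scenario
print(impacted_dimensions({"Trust", "Stands Behind", "Community Responsibility"}))
```

Even this crude mapping forces the conversation away from “it feels bad” and toward which specific reputation attributes are actually at stake.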

Quantifying reputation can be challenging, but talking about it in an objective and logical manner offers benefits. The more knowledgeable and objective we are in understanding squishy problem spaces like reputation, the more information we can provide to our stakeholders to make more informed, effective risk management decisions. Better decision making ultimately creates value for the organization: it facilitates decisions around expense optimization, ensures tactical and strategic goals are being met, and reinforces adherence to ethics and values.

References

Harris Interactive Inc. (2013). The Harris Poll 2013 RQ Summary Report. Retrieved from http://www.harrisinteractive.com/vault/2013%20RQ%20Summary%20Report%20FINAL.pdf.

Harris Interactive Inc. (2014). The Harris Poll Reputation Quotient. Retrieved from http://www.harrisinteractive.com/Products/ReputationQuotient.aspx.

Loeb, H. and McNulty, E.J. (2014, August 4). Don’t Trust Your Company’s Reputation to the Quants. Harvard Business Review Online. Retrieved from http://blogs.hbr.org/2014/08/dont-trust-your-companys-reputation-to-the-quants/.

“reputation”. (n.d.) Oxford Dictionaries Online. Retrieved from http://www.oxforddictionaries.com/definition/english/reputation.


It Is Too Hard Or Impossible…

July 15, 2014

** Admitting that you don’t know how to make the sausage will always cast doubt on the quality of the sausage you do produce. **

One of my personal risk management aggravations relates to risk management professionals who claim it is too hard or impossible to quantify the frequency or severity of loss. First, there is the irony that we operate in a problem space of uncertainty and then make absolute statements that something cannot be done. When I witness this type of statement, I will often challenge the person on the spot – keeping in mind the audience – in an effort to pull that person off the edge of mental failure. And make no mistake, I struggle with quantification as well – but to what degree I share that with stakeholders or peers is an aspect of professional perception that I intentionally manage. Reflecting on my own experiences and interactions with others, I want to share some quick litmus tests I use when addressing the “it is too hard or impossible” challenges.

1. Problem Scoping. Have I scoped the problem or challenge too broadly? Sometimes we take these super-big, gnarly problem spaces and become fascinated with them without trying to deconstruct the problem into more manageable chunks. Often, once you begin reducing your scope, the variables that drive frequency or severity will emerge.

2. Subject Matter Experts. This is one litmus test that I have to attribute to Jack Jones and the FAIR methodology. Often, we are not the best person to be making subject matter estimates for the variables that factor into the overall risk. The closer you get to the true experts and extract their knowledge for your analysis, the more robust and meaningful your analysis will become. In addition, leveraging subject matter experts fosters collaboration and in some cases innovation where leaders of unrelated value chains realize there is opportunity to reduce risk across one or more chains.

3. Extremes and Calibration. Once again, I have Jack Jones to thank for this litmus test, and Doug Hubbard as well. Recently, a co-worker declared that something was impossible to measure (a workforce-related increase in expense). After his “too hard” declaration, I simply asked: “Will it cost us more than $1BN?” The question stunned my co-worker, which resulted in an “Of course not!” – to which I replied, “It looks like it is greater than zero and less than $1 billion; we are making progress!” Here is the point: we can tease out extremes and leverage calibration techniques to narrow down our uncertainty and define ranges for variables rather than anchoring on a single, discrete value (see the sketch after this list).

4. Am I Trying Hard Enough? This is a no-brainer, but unfortunately I feel too many of us do not try hard enough. A simple phone call, email or even a well-crafted Google query can quickly provide useful information that in turn reduces our uncertainty.
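To make item 3 concrete, here is a minimal sketch of turning a calibrated 90% confidence interval into a working distribution. The bounds below are purely illustrative – they are not figures from the co-worker conversation – and the lognormal shape is my assumption, not a requirement of the technique:

```python
import math
from scipy import stats

# Hypothetical calibrated 90% confidence interval for an annual expense
low, high = 50_000, 5_000_000   # illustrative bounds only

# Fit a lognormal whose 5th and 95th percentiles match the calibrated bounds
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)   # z for the 95th percentile ≈ 1.645
expense = stats.lognorm(s=sigma, scale=math.exp(mu))

print(f"Median estimate: {expense.median():,.0f}")
print(f"95th percentile: {expense.ppf(0.95):,.0f}")   # lands back on the upper bound
```

A range like this is far more defensible than a single point estimate, and it is exactly the kind of answer the “greater than zero, less than a billion” exchange is meant to tease out.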

These are just a few “litmus tests” you can use to evaluate whether an estimation or scenario really is too hard to quantify. But here is the deal: as risk professionals, we are expected to deal with the tough things so our decision makers don’t have to.


Assurance vs. Risk Management

August 29, 2012

One of my current hot buttons is the over-emphasis of assurance with regards to risk management. I was recently given visibility into a risk management framework where ‘management assurance’ was listed as the goal of the framework. However, the framework did not allow for management to actually manage risk.

Recently at BSidesLA I attempted to reduce the definitions of risk and ‘risk management’ down to fundamental attributes because there are so many different – and in a lot of cases contextually valid – definitions of risk.

Risk: Something that can happen that can result in loss. It is about the frequency of events that can have an adverse impact on our time, resources and, of course, our money.

Risk Management: Activities that allow us to reduce our uncertainty about risk(s) so we can make good trade-off decisions.

So how does this tie into assurance? The shortcoming of an assurance-centric approach to risk management is that assurance IMPLIES 100% certainty that all risks are known and that all identified controls are comprehensive and effective. An assurance-centric approach also implies that a control gap, control failure or some other issue HAS to be mitigated so management can have FULL assurance regarding their risk management posture.

Where risk management comes into play is when management does not require 100% assurance because there may not be adequate benefit to their span of control or the organization proper. Thus, robust risk management frameworks need to have a management response process – i.e. risk treatment decisions – for when issues or gaps are identified. A management response and risk treatment decision process has a few benefits:

1. It promotes transparency and accountability of management’s decisions regarding their risk management mindset (tolerance, appetite, etc.).

2. It empowers management to make the best business decision (think trade-off) given the information (containing elements of uncertainty) provided to them.

3. It potentially allows organizations to better understand the ‘total cost of risk’ (TCoR) relative to other operational costs associated with the business.

So here are the take-aways:

1. Assurance does not always equate to effective risk management.

2. Effective risk management can facilitate levels of assurance and confidence, as well as one’s understanding of the uncertainty regarding the loss exposures one is faced with.

3. Empowering and enabling management to make effective risk treatment decisions can provide management a level of assurance that they are running their business the way they deem fit.


Metricon 6 Wrap-Up

August 10, 2011

Metricon 6 was held in San Francisco, CA on August 9th, 2011. A few months ago, I and a few others were asked by the conference chair – Mr. Alex Hutton (@alexhutton) – to assist in the planning and organization of the conference. One of the goals established early on was that this Metricon needed to be different from previous Metricon events. Having attended Metricon 5, I witnessed firsthand the inquisitive and skeptical nature of the conference attendees towards speakers and towards each other. So, one of our goals for Metricon 6 was to change the culture of the conference. In my opinion, we succeeded in doing that by establishing topics that would draw new speakers and strike a happy balance between metrics, security and information risk management.

Following are a few Metricon 6 after-thoughts…

Venue: This was my first non-military trip to San Francisco. I loved the city! The vibe was awesome! The sheer number of people made for great people-watching entertainment and so many countries / cultures were represented everywhere I went. It gave a whole new meaning to America being a melting pot of the world.

Speakers: We had some great speakers at Metricon. Every speaker did well, the audience was engaged, and while questions were limited due to time – they took some tough questions and dealt with them appropriately.

Full list of speakers and presentations…

Favorite Sessions: Three of the 11 sessions stood out to me:

Jake Kouns – Cyber Insurance. I enjoyed this talk for a few reasons: a. it is an area of interest for me and b. the talk was easy to understand. I would characterize it as an overview of what cyber insurance is [should be] as well as some of its nuances. Keep in mind it was an overview – commercial insurance policies can be very complex, especially for large organizations. Some organizations do not buy separate “cyber insurance” policies but utilize their existing policies to cover potential claims / liability arising from operational information technology failures or other scenarios. Overall – Jake is offering a unique product, and while I would like to know more details, he appears to be well positioned in the cyber insurance product space.

Allison Miller / Itai Zukerman – Operationalizing Analytics. Alli and Itai went from 0 to 60 in about 5 seconds. They presented some work that brought together data collection, modeling and analysis – in less than 30 minutes. Itai was asked a question about the underlying analytical engine used – and he just nonchalantly replied ‘I wrote it in Java myself’ – like it was no big deal. That was hot.

Richard Lippman – Metrics for Continuous Network Monitoring. Richard gave us a glimpse of a real-time monitoring application; specifically, tracking un-trusted devices on protected subnets. The demo was very impressive and probably gave a few in the room some ‘metrigasms’ (I heard this phrase from @mrmeritology).

People: All the attendees and speakers were cordial and professional. By the end of the day – the sense of community was stronger than what we started with. A few quick shout-outs:

Behind-the-scenes contributors / organizers. The Usenix staff helped us out a lot over the last few months. We also had some help from Richard Baker who performed some site reconnaissance in an effort to determine video recording / video streaming capabilities – thank you sir. There were a few others that helped in selecting conference topics – you know who you are – thank you!

@chort0 and his lovely fiancée Meredith. They pointed some of us to some great establishments around Union Square. Good luck to the two of you as you go on this journey together.

@joshcorman. I had some great discussion with Josh. While we have only known each other for a few months – he has challenged me to think about questions [scenarios] that no one else is addressing.

+Wendy Nather. Consummate professional. Wendy and I have known of each other for a few years but had never met in person prior to Metricon 6. We had some great conversation, both professional and personal. She values human relationships, and that is more important in my book than just the social networking aspect.

@alexhutton & @jayjacobs – yep – it rocked. Next… ?

All the attendees. Without attendance, there is no Metricon. The information sharing, hallway collaboration and presentation questions contributed greatly to the event. Thank you!

***

So there you go everyone! It was a great event! Keep your eyes and ears open for information about the next Metricon. Consider reevaluating your favorite conferences, and if you are looking for a small, intimate and stimulating conference – filled with thought leadership and progressive mindsets – give Metricon a chance!


Simple Risk Model (Part 4 of 5): Simulating both Loss Frequency & Loss Magnitude

February 5, 2011

Part 1 – Simulate Loss Frequency Method 1
Part 2 – Simulate Loss Frequency Method 2
Part 3 – Simulate Loss Frequency Method 3

In this post we want to combine the techniques demonstrated in parts two and three into a single simulation. To accomplish this simulation we will:

1.    Define input parameters
2.    Introduce VBA code – via a macro – that consumes the input parameters
3.    Perform functions within the VBA code
4.    Take the output from functions and store them in the spreadsheet
5.    Create a histogram of the simulation output.

Steps 3 & 4 will be performed many times, depending on the number of iterations we want to perform in our simulation.

You can download this spreadsheet to use as a reference throughout the post. The spreadsheet should be used in Excel only. The worksheets we are concerned with are:

test – This worksheet contains code that will step through each part of the loss magnitude portion of the simulation. Displaying this information allows you to validate that both the code and the calculations are functioning as intended. This tab is also useful for testing code in small iterations. Thus, the number of iterations should be kept fairly low (worksheet “test”, cell B1).

prod – Unlike the “test” tab, this tab does not display the result of each loss magnitude calculation per iteration. This is the tab that you would want to run the full simulation on; thousands of iterations.

Here we go…and referencing the “prod” worksheet…

Input Parameters.
Expected Loss Frequency. It is assumed for this post that you have estimated or derived a most likely or average loss frequency value. Cell B2 contains this value. The value in this cell will be one of the input parameters into a POISSON probability distribution to return an inverse cumulative value (Part 2 of this Series).

Average Loss Magnitude. It is assumed for this post that you have estimated or derived a most likely or average loss magnitude value. Cell B3 contains this value. The value in this cell will be one of the input parameters into a NORMAL probability distribution to return an inverse cumulative value (Part 3 of this Series).

Loss Magnitude Standard Deviation. It is assumed for this post that you have estimated or derived the standard deviation for loss magnitude. Cell B4 contains this value. The value in this cell will be one of the input parameters into a NORMAL probability distribution to return an inverse cumulative value (Part 3 of this Series).

The Simulation.
On the “prod” tab, clicking the button labeled “Prod” will execute a macro composed of VBA code. I will let you explore the code on your own – it is fairly intuitive. I also left a few comments in the VBA so I remember what certain sections of the code are doing. There are four columns of simulation output that the macro will generate.

Iter# (B10). This is the iteration number. In cell B1 we set the number of iterations to be 5000. Thus, the VBA will cycle through a section of its code 5000 times.

LEF Random (C10). For each iteration, we will generate a random value between 0 and 1 to be used in generating a loss frequency value. Displaying the random value in the simulation is not necessary, but I prefer to see it so I can informally analyze the random values themselves and gauge the relationship between the random value and the inverse cumulative value in the next cell.

LEF Value (D10). For each iteration, we will take the random value we generated in the adjacent cell (column C), combine it with the Expected Loss Frequency value declared in B2, and input these values as parameters into a POISSON probability distribution that returns an inverse cumulative value. The value returned will be an integer – a whole number. Why a whole number? Because you can’t have half a loss event – just like a woman cannot be half pregnant ( <- one of my favorite analogies). This is a fairly important concept to grasp from a loss event modeling perspective.

Loss Magnitude (E10). For each iteration, we will consume the value in the adjacent cell (column D) and apply logical rules to it.

a.    If the LEF Value = 0, then the loss magnitude is zero.
b.    If the LEF Value > 0, then for each instance of loss we will:
1.    Generate a random value
2.    Consume the average loss magnitude value in cell B3
3.    Consume the loss magnitude standard deviation in cell B4
4.    Use the values referenced in 1-3 as input parameters into a NORMAL probability distribution and return an inverse cumulative value. In other words, given a normal distribution with a mean of $2,000 and a standard deviation of $1,000, what is the value of the distribution at the point corresponding to a random value between 0 and 1?
5.    We will add all the instances of loss for that iteration and record the sum in column E.

Note: Steps 4 and 5 can be observed on the “test” worksheet by clicking the button labeled “test”.
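For readers who would rather read the logic than step through the VBA, here is a minimal Python sketch of the same loop. Drawing directly from the Poisson and normal distributions is equivalent to feeding a random value into their inverse cumulative functions, and the parameter values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative input parameters (cells B1:B4 in the post's spreadsheet)
iterations = 5_000              # number of simulated periods
expected_loss_frequency = 0.5   # average loss events per period (Poisson lambda)
avg_loss_magnitude = 2_000      # mean of the normal severity distribution
loss_magnitude_sd = 1_000       # standard deviation of severity

losses = np.zeros(iterations)
event_counts = np.zeros(iterations, dtype=int)
for i in range(iterations):
    # LEF Value: a whole number of loss events for this iteration
    n = rng.poisson(expected_loss_frequency)
    event_counts[i] = n
    if n > 0:
        # One severity draw per event, summed for the iteration (column E)
        losses[i] = rng.normal(avg_loss_magnitude, loss_magnitude_sd, size=n).sum()

print(f"Iterations with no loss:      {(event_counts == 0).sum()}")  # cell B5
print(f"Iterations with loss:         {(event_counts > 0).sum()}")   # cell B6
print(f"Total loss events:            {event_counts.sum()}")         # cell B7
print(f"Max events in one iteration:  {event_counts.max()}")         # cell B8
```

This is not the post's VBA, just an equivalent sketch of the loop so you can see the whole frequency-times-severity mechanism in one place.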

The code will continue to loop until we have completed the number of iterations we specified in cell B1.

The Results. Now that the simulation is complete, we can begin to analyze the output.

# of Iterations With No Loss (B5). This is the number of iterations where the returned inverse cumulative value was zero.

# of Iterations With Loss (B6). This is the number of iterations where the returned inverse cumulative value was greater than zero.

# of Loss Events (B7). This is the sum of loss events for all the iterations. There were some iterations with more than one loss event.

Max. # of Loss Events for an iteration (B8). This is the maximum number of loss events for any given iteration.

Next, let’s look at some of the simulation output in the context of loss severity ($).

Min. Loss (K6). This is the minimum loss value returned from the simulation. I round the results to the nearest hundred in the worksheet.

Max. Loss (K7). This is the maximum loss value returned from the simulation. I round the results to the nearest hundred in the worksheet.

Median (G5). This is the 50th percentile of the simulation results. In other words, 50% of the simulation results were equal to or less than this value.

Average (G6). This is the average loss value for the simulation: the sum of all the loss magnitude values divided by the number of iterations. This value can quickly be compared to the median to make inferences about the skew of the simulation output.

80th % (G7). This is the 80th percentile of the simulation results. In other words, 80% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-5 loss.

90th % (G8). This is the 90th percentile of the simulation results. In other words, 90% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-10 loss.

95th % (G9). This is the 95th percentile of the simulation results. In other words, 95% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-20 loss.

99th % (G10). This is the 99th percentile of the simulation results. In other words, 99% of the simulation results were equal to or less than this value. In some industries, this is often referred to as the 1-in-100 loss.

Note 2: Generally speaking, the 95th, 99th and greater percentiles are often considered part of the tail of the loss distribution. I consider all the points in cells G5:G10 to be useful. For some loss exposures, the median and average values are more than enough to make informed decisions. For others, the 80th, 90th, 95th, 99th and even larger percentiles are necessary.
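Continuing the Python sketch above, the same summary statistics fall straight out of NumPy. The first few lines simply regenerate a `losses` array so this snippet runs on its own:

```python
import numpy as np

# Regenerate a per-iteration loss array so this snippet is self-contained
rng = np.random.default_rng(42)
counts = rng.poisson(0.5, size=5_000)
losses = np.array([rng.normal(2_000, 1_000, n).sum() if n else 0.0 for n in counts])

# Percentile summary (the analogues of cells K6:K7 and G5:G10 on the "prod" worksheet)
median, p80, p90, p95, p99 = np.percentile(losses, [50, 80, 90, 95, 99])
print(f"Min / Max loss: {losses.min():,.0f} / {losses.max():,.0f}")
print(f"Median: {median:,.0f}   Average: {losses.mean():,.0f}")
print(f"80th: {p80:,.0f}  90th: {p90:,.0f}  95th: {p95:,.0f}  99th: {p99:,.0f}")
```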

Simulated Loss Magnitude Histogram. A histogram is a graphical representation showing the distribution of data. The histogram in the “prod” worksheet represents the distribution of data for all iterations where the loss was greater than zero.
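A minimal matplotlib version of that histogram – again with regenerated stand-in data so it runs on its own – might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
counts = rng.poisson(0.5, size=5_000)
losses = np.array([rng.normal(2_000, 1_000, n).sum() if n else 0.0 for n in counts])

# Only the iterations where a loss actually occurred, as on the "prod" worksheet
plt.hist(losses[losses > 0], bins=40, edgecolor="black")
plt.xlabel("Simulated loss magnitude ($)")
plt.ylabel("Number of iterations")
plt.title("Simulated loss magnitude, loss > 0")
plt.show()
```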

Wrap Up. What I have presented in this post is a very simple model for a single loss exposure using randomness and probability distributions. Depending on your comfort level with VBA and your creativity, you can easily build out more complex models – whether you want to model hundreds of loss exposures or just a few dependent ones.


Risk Fu Fighting

January 31, 2011

If you are an information risk analyst or perform any type of IT risk analysis, you should really consider joining the Society of Information Risk Analysts mailing list. Over the last several weeks there have been some amazing exchanges of ideas, opinions, and spirited debate over the legitimacy and value of risk analysis. Some of the content is no doubt a precursor to the anxiously awaited “Risk Management Smackdown” at RSA on 2/15. Regardless of my role within SIRA or the upcoming RSA debate, the SIRA mailing list is a great resource for learning more about IT risk analysis and IT risk management in general.

Below are a couple of links:

Society of Information Risk Analysts (SIRA)
Society of Information Risk Analysts – Mailing List
Society of Information Risk Analysts – Mailing List Archives (must be subscribed to the mailing list to view)
RSA Session Catalog (filter on security tag “risk management”)


More Heat Map Love

May 11, 2010

In my previous post “Heat Map Love” I attempted to illustrate the relationship between plots on a heat map and a loss distribution. In this post I am going to illustrate another method to show the relationship – hopefully in simpler terms.

In the heat map above I have plotted five example risk issues:

I: Application security; cross-site scripting; external facing application(s); actual loss events between 2-10 times a year; low magnitude per event – less than $10,000.

II: Data confidentiality; lost / stolen mobile device or computer; no hard disk encryption; simulated or actual one loss event per year, low to moderate magnitude per event.

III: PCI-DSS Compliance; level 2 Visa merchant; not compliant with numerous PCI-DSS standards; merchant self-reports not being in compliance this year; merchant expects monthly fines of $5,000 for a one year total of $60,000.

IV: Malware outbreak; large malware outbreak (greater than 10% of your protected endpoints); less than once every ten years; magnitude per event could range between $100,000 and $1,000,000; productivity hit, external consulting, etc.

V: Availability; loss of data center; very low frequency; very large magnitude per event.

Since there is a frequency and magnitude of loss associated with each of these issues, we can conceptually associate these issues with a loss distribution (assuming that our loss distribution is normal-like or lognormal).

Step 1: Hold a piece of paper with the heat map looking like the image below:

Step 2: Flip the paper towards you so the heat map looks like the image below (flip vertical):


Step 3: Rotate the paper counter-clockwise 90 degrees; it should look like the image below.


For ease of illustration, let’s overlay a lognormal distribution.

What we see is in line with what we discussed in the “Heat Map Love” post:

Risk V – Loss of data center; is driving the tail; very low frequency; very large magnitude.
Risk IV – Malware outbreak; low frequency; but significant or high magnitude.
Risk III – Annual PCI fines from Visa via acquirer / processor; once per year; $60K.
Risk II – Lost or stolen laptop that had confidential information on it; response and notification costs not expected to be significant.
Risk I – Lots of small application security issues; for example cross site scripting; numerous detected and reported instances per year; low cost per event.

There you have it – a less technical way to perform a sniff test on your heat map plots and / or validate against a loss distribution.
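If you would rather run the same sniff test with numbers instead of paper folding, here is a minimal sketch that simulates the five example risks using rough, made-up frequencies and per-event magnitudes (my own loose readings of the plots, not precise figures) and shows which risk drives the tail of the combined loss distribution:

```python
import numpy as np

rng = np.random.default_rng(7)
years = 10_000

# Rough annual frequency and per-event magnitude for the five example risks
risks = {
    "I":   (6.0,   5_000),       # app sec / XSS
    "II":  (1.0,   50_000),      # lost or stolen laptop
    "III": (1.0,   60_000),      # annual PCI fines
    "IV":  (0.1,   500_000),     # malware outbreak
    "V":   (0.02,  10_000_000),  # loss of data center
}

# Simulated total annual loss per risk, using a fixed magnitude per event for simplicity
annual = {name: rng.poisson(freq, years) * magnitude for name, (freq, magnitude) in risks.items()}
total = sum(annual.values())

# Share of loss each risk contributes in the worst 1% of simulated years
worst_years = total >= np.percentile(total, 99)
for name, loss in annual.items():
    share = loss[worst_years].sum() / total[worst_years].sum()
    print(f"Risk {name}: {share:.0%} of the loss in the worst 1% of years")
```

Risk V dominates the worst simulated years even though it almost never happens – exactly the “driving the tail” behavior the rotated heat map shows.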

Once you have taught everyone how to perform this artwork paper-rotation trick, you can have a paper airplane flying contest.