Reputation FUD (Fear, Uncertainty and Doubt)

August 6, 2014


Key Point: Use of a reputation taxonomy – similar to Harris Interactive’s “Reputation Quotient” methodology – can enhance risk analysis activities where reputation is a factor as well as increase the value that risk practitioners provide in their organizations by enabling more informed risk decisions.


Within the risk management discipline there is an activity called “risk analysis” that usually entails understanding the drivers – or dimensions – of frequency and severity for potential adverse events. One common severity dimension is reputation, defined by Oxford Dictionaries (n.d.) as “the beliefs or opinions that are generally held about someone or something”. Intuitively we all know what reputation is, but putting your finger on what it is – for a business – can be challenging.

Over the years, I have seen many approaches to accounting for a reputation impact. Some methodologies encourage estimating a dollar value that can be associated with customer migration, expenses to combat negative publicity or expenses to make customers whole when an adverse event has occurred. Other practices assign ordinal values of reputation badness. Some companies have built reputation correlation factors into their risk models for scenarios that have a potential reputation impact. Finally, some companies just annotate reputation as a loss driver but do not try to estimate or measure any severity factors. While all of these approaches have merit, all too often I see reputation discussed or analyzed in the form of emotions: this potentially bad thing feels bad, therefore we will look bad. There has to be a more logical way to analyze reputation, regardless of whether the method is quantitative or qualitative in nature.

I recently read a Harvard Business Review blog regarding reputation, in which Loeb and McNulty (2014) referenced a reputation scoring methodology called “Reputation Quotient” (RQ) developed by Harris Interactive [Harris] (2014). I began digging around Harris’ site and found the “Harris Poll 2013 RQ Summary Report” (Harris, 2013), in which they detail the RQ dimensions and variables. Here they are:

Emotional Appeal:
• Feel Good About
• Admire and Respect
• Trust

Products & Services:
• High Quality
• Innovative
• Value for Money
• Stands Behind

Workplace Environment:
• Rewards Employees Fairly
• Good Place to Work
• Good Employees

Financial Performance:
• Outperforms Competitors
• Record of Profitability
• Low Risk Investment
• Growth Prospects

Vision & Leadership:
• Market Opportunities
• Excellent Leadership
• Clear Vision for the Future

Social Responsibility:
• Supports Good Causes
• Environmental Responsibility
• Community Responsibility

The factors outlined by Harris Interactive can help us as risk practitioners talk more intelligently about reputation as part of our risk analysis – especially if our employers participate in the Reputation Quotient survey (as mine does). For any scenario we are analyzing where there could be a potential reputation impact, we can ask ourselves whether the adverse event lends itself towards violating one of these factors. My intuition is that many assumptions about reputation would be challenged using such an approach. In addition, the RQ variables may be ideal candidates for factors in a quantitative statistical model to better understand severity.
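As a simple illustration, the taxonomy above can be encoded as a checklist that maps a scenario’s characteristics to the RQ dimensions it could violate. A quick Python sketch – the dimension and variable names come from the Harris report, but the tagging of a scenario and the function itself are hypothetical:

```python
# The Harris RQ dimensions and their variables, per the 2013 RQ Summary Report.
RQ_DIMENSIONS = {
    "Emotional Appeal": ["Feel Good About", "Admire and Respect", "Trust"],
    "Products & Services": ["High Quality", "Innovative", "Value for Money",
                            "Stands Behind"],
    "Workplace Environment": ["Rewards Employees Fairly", "Good Place to Work",
                              "Good Employees"],
    "Financial Performance": ["Outperforms Competitors", "Record of Profitability",
                              "Low Risk Investment", "Growth Prospects"],
    "Vision & Leadership": ["Market Opportunities", "Excellent Leadership",
                            "Clear Vision for the Future"],
    "Social Responsibility": ["Supports Good Causes", "Environmental Responsibility",
                              "Community Responsibility"],
}

def implicated_dimensions(scenario_tags):
    """Return the RQ dimensions whose variables a scenario could plausibly violate."""
    return sorted({dim for dim, variables in RQ_DIMENSIONS.items()
                   if any(v in scenario_tags for v in variables)})

# Hypothetical example: a customer-data breach arguably erodes "Trust" and
# the perception that the company "Stands Behind" its products.
print(implicated_dimensions({"Trust", "Stands Behind"}))
```

Walking a scenario through a checklist like this forces the conversation from “this feels bad” to “which specific reputation factors would this event violate, and for whom?”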

Quantifying reputation can be challenging, but talking about it in an objective and logical manner offers benefits. The more knowledgeable and objective we are in understanding squishy problem spaces like reputation, the more information we can provide to our stakeholders to make more informed, effective risk management decisions. Better decision making ultimately creates value for the organization as it facilitates decisions around expense optimization, ensures tactical and strategic goals are being met and reinforces adherence to ethics and values.


Harris Interactive Inc. (2013). The Harris Poll 2013 RQ Summary Report. Retrieved from

Harris Interactive Inc. (2014). The Harris Poll Reputation Quotient. Retrieved from

Loeb, H. and McNulty, E.J. (2014, August 4). Don’t Trust Your Company’s Reputation to the Quants. Harvard Business Review Online. Retrieved from

“reputation”. (n.d.) Oxford Dictionaries Online. Retrieved from


It Is Too Hard Or Impossible…

July 15, 2014

** Admitting that you don’t know how to make the sausage will always cast doubt on the quality of the sausage you do produce. **

One of my personal risk management aggravations is risk management professionals who claim it is too hard or impossible to quantify the frequency or severity of loss. First, there is the irony that we operate in a problem space of uncertainty and then make absolute statements that something cannot be done. When I witness this type of utterance, I will often challenge the person on the spot – keeping in mind the audience – in an effort to pull that person off the edge of mental failure. And make no mistake, I struggle with quantification as well – but to what degree I share that with stakeholders or peers is an aspect of professional perception that I intentionally manage. Reflecting on my own experiences and interactions with others, I want to share some quick litmus tests I use when addressing the “it is too hard or impossible” challenges.

1. Problem Scoping. Have I scoped the problem or challenge too broadly? Sometimes we take these super-big, gnarly problem spaces and become fascinated with them without trying to deconstruct the problem into more manageable chunks. Often, once you begin reducing your scope, the variables that drive frequency or severity will emerge.

2. Subject Matter Experts. This is one litmus test that I have to attribute to Jack Jones and the FAIR methodology. Often, we are not the best person to be making subject matter estimates for the variables that factor into the overall risk. The closer you get to the true experts and extract their knowledge for your analysis, the more robust and meaningful your analysis will become. In addition, leveraging subject matter experts fosters collaboration and in some cases innovation where leaders of unrelated value chains realize there is opportunity to reduce risk across one or more chains.

3. Extremes and Calibration. Once again, I have Jack Jones to thank for this litmus test – and Doug Hubbard as well. Recently, a co-worker declared something was impossible to measure (workforce, increased expense related). After his “too hard” declaration, I simply asked: “Will it cost us more than $1BN?” The question stunned my co-worker, which resulted in an “Of course not!” – to which I replied, “It looks like it is greater than zero and less than $1BN; we are making progress!” Here is the point: we can tease out extremes and leverage calibration techniques to narrow down our uncertainty and define variable ranges versus anchoring on a single, discrete value.

4. Am I Trying Hard Enough? This is a no-brainer, but unfortunately I feel too many of us do not try hard enough. A simple phone call, email or even a well-crafted Google query can quickly provide useful information that in turn reduces our uncertainty.

These are just a few “litmus tests” you can use to evaluate whether an estimation or scenario is too hard to quantify. But here is the deal: as risk professionals, we are expected to deal with tough things so our decision makers don’t have to.
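Litmus test 3 above can be put to work with very little machinery. As a quick Python sketch – borrowing the Hubbard-style treatment of a calibrated range, with made-up bounds – once “greater than zero and less than $1BN” has been narrowed through questioning, the resulting range can be treated as a 90% confidence interval and fitted to a distribution:

```python
import math

# Illustrative only: assume calibrated questioning has narrowed the estimate
# to a 90% confidence interval of $50K to $1M (these bounds are made up).
lower, upper = 50_000, 1_000_000

# For a lognormal distribution, the 5th and 95th percentiles sit about
# 1.645 standard deviations either side of the mean on the log scale.
mu = (math.log(lower) + math.log(upper)) / 2.0
sigma = (math.log(upper) - math.log(lower)) / (2 * 1.645)

median = math.exp(mu)  # geometric midpoint of the calibrated range
print(f"median ≈ ${median:,.0f}, sigma ≈ {sigma:.2f}")
```

The point is not the specific distribution; it is that a defensible range, however wide, is a usable input – and a far better one than “impossible to measure.”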

Knowing Your Exposure Limitations

April 1, 2013

For those familiar with risk analysis – specifically the measurement [or estimate] of frequency and severity of loss – there are many scenarios where the severity of loss, or resultant expected loss, can have a long tail. In FAIR terms, we may have a scenario where the minimum loss is $1K, the most likely loss is $10K and the maximum loss is $2M. In this type of scenario – using Monte Carlo simulation and depending on the kurtosis of the estimated distribution – it is very easy to derive a severity distribution, or an aggregate distribution (taking into account both frequency and severity), whose descriptive statistics do not accurately reflect the exposure the organization would actually bear should the adverse event occur. While understanding the full range of severity or overall expected loss may be useful, a prudent risk practitioner should understand and account for the details of the organization’s business insurance policies to better understand when insurance controls will be invoked to limit financial loss for significant adverse events.
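To make the long-tail point concrete, here is a minimal Python sketch using the FAIR-style estimates from the text (minimum $1K, most likely $10K, maximum $2M per event). A triangular distribution is used purely for illustration; PERT or lognormal fits are common alternatives:

```python
import random

# Simulate 100,000 per-event severities from min/most-likely/max estimates.
# random.triangular takes (low, high, mode).
random.seed(42)
losses = [random.triangular(1_000, 2_000_000, 10_000) for _ in range(100_000)]

mean_loss = sum(losses) / len(losses)
p95 = sorted(losses)[int(0.95 * len(losses))]

# The simulated mean sits far above the $10K "most likely" value because
# the long right tail dominates the average.
print(f"mean ≈ ${mean_loss:,.0f}, 95th percentile ≈ ${p95:,.0f}")
```

Summary statistics like the mean can therefore badly misrepresent the “typical” event – which is exactly why the policy limits discussed below matter.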

Using the example values above, an organization may be willing to pay out of pocket for all adverse events – similar to the scenario above – up to $1M, and then rely upon insurance to cover the rest. This in turn changes the maximum amount of loss the company is directly exposed to (per event) from $2M to $1M. In addition, this understanding could be a significant information point for decision makers as they ponder how to treat an issue. Given this information, consider the following:

1. How familiar are you with your organization’s business or corporate insurance program?

2. Does your business insurance program cover the exposures you are responsible or accountable for managing?

3. Are your risk analysis models flexible enough to incorporate limits of loss relative to potential loss?

4. When you talk with decision makers, are you even referencing the existence of business insurance policies or other risk financing / transfer controls that limit your organization’s exposure when significant adverse events occur?
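Question 3 is straightforward to act on in a simulation. A hypothetical Python sketch – the $1M retention and the $1K / $10K / $2M severity estimates are the illustrative figures from above, and real policies have deductibles, sub-limits and exclusions this ignores:

```python
import random

# Cap each simulated per-event loss at a $1M retention, assuming amounts
# above that level are transferred to the insurer.
random.seed(7)
RETENTION = 1_000_000

gross = [random.triangular(1_000, 2_000_000, 10_000) for _ in range(100_000)]
net = [min(loss, RETENTION) for loss in gross]  # exposure retained by the firm

print(f"max gross ≈ ${max(gross):,.0f}, max net = ${max(net):,.0f}")
```

The net figures, not the gross ones, are what the organization is directly exposed to per event – and that is the picture decision makers should see alongside the raw simulation output.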

The more we can leverage other risk-related controls in the organization and paint a more accurate picture of exposure, the more we become a trusted advisor to our decision makers and other stakeholders in the larger risk management life-cycle.

Want to learn more?

AICPCU – Associate in Risk Management (ARM)

SIRA – Society of Information Risk Analysts

Wonder Twin Powers Activate…

March 7, 2013

…form of risk professional. I really miss blogging. The last year or so has been a complete gaggle from a relocation and time-management perspective. So naturally, discretionary activities – like blogging – take a back seat. I want to share a few quick thoughts around the topic of transitioning from a pure information technology / information security mindset to a risk management professional mindset.

1. Embrace the Gray Space. Information technology is all about bits, bytes, ones and zeros. Things either work or don’t work; it is either black or white, it is either good or bad – you get the point. In the discipline of risk management, we are interested in everything between the two extremes. It is within this space that there is information to allow decision makers to make better-informed decisions.

2. Embrace Uncertainty. Intuitively, the concept of uncertainty is contrary to a lot of information technology concepts. Foundational risk concepts revolve around understanding and managing uncertainty and infusing it into our analysis and conversations with decision makers. There is no reason why this cannot be done within information risk management programs as well. At first, it may feel awkward as an IT professional to admit to a leader that there is uncertainty inherent in some of the variables included in your analysis. However, what you will find – assuming you can clearly articulate your analysis – is that infusing the topic of uncertainty into your conversations and analysis has indirect benefits. Such an approach implies rigor and maturity, and builds confidence with the decision maker.

3. Find New Friends. Notice I did not type find different friends. There is an old adage that goes something to the effect of “you are who you surround yourself with”. Let me change this up: “you are who you are learning from”. You want to learn risk management? Indulge yourself in non-IT risk management knowledge sources, learn centuries old principles of risk management and then begin applying what you have learned to the information technology / information security problem space. Here are just a few places to begin:

a. Risk Management Magazine
b. The Risk Management Society
c. Property & Casualty – Enterprise Risk Management

4. Change Your Thinking. This is going to sound heretical, but bear with me. Stop thinking like an IT professional and begin thinking like a business and risk management professional. Identify and follow the money trails for the various risk management problem spaces you are dealing with. Think like a commercial insurer: an entire industry exists to reduce the uncertainty associated with technology-related operational risk – when bad things happen. Learn how commercial insurers think so you can manage risk more effectively without overspending on third party risk financing products – and manage risk in a way that ties back to the financials rather than feelings and emotions. This is why I am so on board with the AICPCU’s Associate in Risk Management (ARM) professional designation. You can also check out the FAIR risk measurement methodology, which is very useful for associating loss forms with adverse events and can help tell the story around financial consequences.

5. Don’t Die On That Hill. I have to thank my new boss for this advice. Choose your risk management battles wisely, and in the heat of the conversation ask yourself if you need to die on this hill. Not all of our conversations with decision makers, leaders or even between ourselves – as dear colleagues – are easy. It is way too easy for passion to get in the way of progress and influencing. Often, if you find yourself “on the hill” asking if you need to die, something has gone terribly wrong. Instead of dying and ruining a long-term relationship, take a few steps back, get more information that will help in the situation, regroup and attack again. This is an example of being a quiet professional.

That is all for now. Take care.

The AICPCU ‘Associate in Risk Management’ (ARM)

September 14, 2012

A year or so ago I stumbled upon the ARM designation that is administered through the AICPCU – or ‘the Institutes’ for short. What attracted me then to the designation was that it appeared to be a comprehensive approach to performing a risk assessment for scenarios that result in some form of business liability. Unfortunately, I did not start pursuing the designation until July 2012. The base designation consists of passing three tests on the topics of ‘risk assessment’, ‘risk control’ and ‘risk financing’. In addition, there are a few other tests which allow one to extend the designation to include disciplines such as ‘risk management for public entities’ and ‘enterprise risk management’.

I am about two months into my ARM journey and just passed the ARM-54 ‘Risk Assessment’ test. I wanted to share some perspective on the curriculum itself and some differentiators when compared to some other ‘risk assessment’ and ‘risk analysis / risk measurement’ frameworks.

1. Proven Approach. Insurance and risk management practices have been around for centuries. Insurance carriers – especially those who write commercial insurance products – are very skilled at identifying and understanding the various loss exposures businesses face. Within the information risk management and operational risk management space, many of the loss exposures we care about and look for are the same ones insurance carriers may look for when they assess a business for business risk and hazard risk so they can create a business insurance policy. In other words, the ‘so what’ associated with the bad things that we and insurance carriers care about is essentially a business liability that we want to manage. Our problem space, skills and risk treatment options may be slightly different, but the goal of our efforts is the same: risk management.

2. Comprehensive. The ARM-54 course alone covers an enormous amount of information. The material easily encompasses the high level learning objectives of six college undergraduate courses I have taken in the last few years:

– Insurance and Risk Management
– Commercial Insurance
– Statistics
– Business Law
– Calculus (Business / Finance Problem Analysis / Calculations)
– Business Finance

The test for ARM-54 was no walk in the park. Even though I passed on the first attempt, I short-changed myself on some of the objectives, which caused a little bit of panic on my part. The questions were well written, and quite a few of them forced you to understand the problem context so you could choose the best answer.

3. ‘Risk Management Value Chain’. This is one of the largest selling points of this designation compared to other IT risk assessment frameworks, IT risk analysis frameworks and IT risk certifications / designations. The ARM curriculum connects the dots between risk assessment activities, risk management decisions and the financial implications of those decisions at various levels of abstraction. This is where existing IT-centric risk assessment / analysis frameworks fall short – they are either too narrow in focus, do not incorporate business context, are not practical to execute or, in some cases, not useful at all in helping someone or a business manage risk.

4. Cost Effective. For between $300 and $500 per ARM course, one can get some amazing reference material and pay for the test. Compare that to the cost of six university courses ($6K – $9K) or the cost of one formal risk measurement course (~$1K). I am convinced that any risk management professional can begin applying concepts from the ARM-54 text within hours of having been introduced to it. So the cost of the textbooks alone (~$100, give or take) is justified even if you do not take the test(s).

5. Learn How To Fish. Finally, I think it is worth noting that there is nothing proprietary about the objectives and concepts presented in the ARM-54 ‘Risk Assessment’ curriculum. Any statistical probability calculations or mathematical finance problems are exactly that – good ole math and probability calculations. In addition, there is nothing proprietary about the methods or definitions presented as they relate to risk assessments or risk management proper. This is an important selling point to me, because many information risk management practitioners are begging for curricula or training such as ARM, where they can begin applying what they are learning without being dependent on proprietary tools, proprietary calculations or paying for the license to use a proprietary framework.

In closing, the ARM-54 curriculum is a very comprehensive risk management curriculum that establishes business context, introduces proven risk assessment methods, and reinforces sound risk management principles. In my opinion, it is very practical for the information / operational risk management professional – especially those who are new to risk management or looking for a non-IT or non-security-biased approach to risk management – regardless of the industry you work in.

So there you have it. I am really psyched about this designation and the benefits I am already realizing in my job as a Sr. Risk Advisor for a Fortune 200 financial services firm. I wish I had pursued this designation two years ago, but I am optimistic that I will make up for lost time and deliver tangible business value very quickly.

Assurance vs. Risk Management

August 29, 2012

One of my current hot buttons is the over-emphasis of assurance with regard to risk management. I was recently given visibility into a risk management framework where ‘management assurance’ was listed as the goal of the framework. However, the framework did not allow for management to actually manage risk.

Recently at BSidesLA I attempted to reduce the definitions of risk and ‘risk management’ down to fundamental attributes because there are so many different – and in a lot of cases contextually valid – definitions of risk.

Risk: Something that can happen that can result in loss. It is about the frequency of events that can have an adverse impact on our time, resources and, of course, our money.

Risk Management: Activities that allow us to reduce our uncertainty about risk(s) so we can make good trade off decisions.

So how does this tie into assurance? The shortcoming with an assurance-centric approach to risk management is that assurance IMPLIES 100% certainty that all risks are known and that all identified controls are comprehensive and effective. An assurance-centric approach also implies that a control gap, control failure or some other issue HAS to be mitigated so management can have FULL assurance regarding their risk management posture.

Where risk management comes into play is when management does not require 100% assurance, because there may not be adequate benefit to their span of control or the organization proper. Thus, robust risk management frameworks need to have a management response process – i.e. risk treatment decisions – for when issues or gaps are identified. A management response and risk treatment decision process has a few benefits:

1. It promotes transparency and accountability of management’s decisions regarding their risk management mindset (tolerance, appetite, etc.).

2. It empowers management to make the best business decision (think trade-off) given the information (containing elements of uncertainty) provided to them.

3. It potentially allows organizations to better understand the ‘total cost of risk’ (TCoR) relative to other operational costs associated with the business.

So here are the take-aways:

1. Assurance does not always equate to effective risk management.

2. Effective risk management can facilitate levels of assurance and confidence, as well as one’s understanding of uncertainty regarding the loss exposures they are faced with.

3. Empowering and enabling management to make effective risk treatment decisions can provide management a level of assurance that they are running their business the way they deem fit.

Heat Map Love – R Style

January 20, 2012

Over the last several years, not a month has gone by where I have not heard someone mention R – with regard to risk analysis or risk modeling – either in discussion or on a mailing list. If you do not know what R is, take a few minutes to read about it at the project’s main site. Simply put, R is a free software environment for statistical computing and graphics. Most of my quantitative modeling and analysis has been strictly Excel-based, which to date has been more than sufficient for my needs. However, Excel is not the ‘end-all-be-all’ tool. Excel does not contain every statistical distribution that risk practitioners may need to work with, there is no native Monte Carlo engine, and it has graphing limitations short of purchasing third party add-ons (advanced charts, granular configuration of graphs, etc.).

Thanks to some industry peer prodding (Jay Jacobs of Verizon’s Risk Intelligence team, and Alex Hutton suggesting that ‘Numbers’ is a superior tool for visualizations), I finally bit the bullet, downloaded and then installed R. For those completely new to R, you have to realize that R is a platform to build amazing things upon. It is very command-line-like in nature: you type in instructions and R executes them. I like this approach because you are forced to learn the R language and syntax. Thus, in the end you will probably understand your data and resulting analysis much better.

One of the first graphics I wanted to explore with R was heat maps. At first, I was thinking of a standard risk management heat map: a 5×5 matrix with issues plotted on the matrix relative to frequency and magnitude. However, when I started searching Google for ‘R heat map’, a similar yet different style of heat map – referred to as a cluster heat map – was first returned in the search results. A cluster heat map is useful for comparing data elements in a matrix against each other, depending on how your data is laid out. It is very visual in nature and allows the reader to quickly zero in on data elements or visual information of importance. From an information risk management perspective, if we have quantitative risk information and some metadata, we can begin a discussion with management by leveraging a heat map visualization. If additional information is needed as to why there are dark areas, then we can have the discussion about the underlying quantitative data. Thus, I decided to build a cluster heat map in R.

I referenced three blogs to guide my efforts – they can be found here, here and here. What I am writing here is in no way a complete copy and paste of their content, because I provide additional details on some steps that generated errors for me – errors that in some cases took hours to figure out. This is not unexpected given the differences in data sets.

Let’s do it.

1.    Download and install R. After installation, start an R session. The version of R used for this post is 2.14.0. You can check your version by typing version at the command prompt and pressing ENTER.

2.    You will need to download and install the ggplot2 package / library. Do this through the R GUI by referencing an online CRAN repository (Packages -> Install packages…). This method seems to be cleaner than downloading a package to your hard disk and then telling R to install it. In addition, if you reference an online repository, it will also grab any dependent packages at the same time. You can learn more about ggplot2 here.

3.    Once you have installed the ggplot2 package, we have to load it into our current R workspace. (With the version of ggplot2 current at the time of writing, this also pulls in the packages used below; with newer versions, you may need to load reshape2, plyr and scales explicitly to get the melt, ddply and rescale functions.)

> library(ggplot2)

4.    Next, we are going to import data to work with in R. Download ‘risktical_csv1.csv’ to your hard disk and execute the following command, changing the file path to match where you saved the file.

risk <- read.csv("C:/temph/risktical_csv1.csv", sep=",", check.names=FALSE)

a.    We are telling R to import a comma separated value file and assign it to a variable called ‘risk’.
b.    read.csv is the function that performs the import.
c.    Notice that the slashes in the file path are the opposite of what they normally would be when working with other common Windows-based applications.
d.    ‘sep=","’ tells R what character is used to separate values within the data set.
e.    ‘check.names=FALSE’ tells R not to check the column headers for syntactic validity. Column names are expected to begin with a letter; if a name begins with a number, R will prepend an X to it – we don’t want that given the data set we are using.
f.    Once you hit ENTER, you can type ‘risk’ and hit ENTER again. The data from the file will be displayed on the screen.

5.    Now we need to ‘shape’ the data. The ggplot graphing function we want to use cannot consume the data as it currently is, so we are going to reformat the data first. The ‘melt’ function helps us accomplish this.

risk.m <- melt(risk)

a.    We are telling R to use the melt function against the ‘risk’ variable. Then we are going to take the output from melt and create a new variable called risk.m.
b.    Melt rearranges the data elements. Type ‘help(melt)’ for more information.
c.    After you hit enter, you can type ‘risk.m’ and hit enter again. Notice the way the data is displayed compared to the data prior to ‘melting’ (variable ‘risk’).

6.    Next, we have to rescale our numerical values so we can know how to shade any given section of our heat map. The higher the numerical value within a series of data, the darker the color or shade that tile of the heat map should be. The ‘ddply’ function helps us accomplish the rescaling; type ‘help(ddply)’ for more information.

risk.m <- ddply(risk.m, .(variable), transform, rescale = rescale(value), reorder=FALSE)

a.    We are telling R to execute the ‘ddply’ function against the risk.m variable.
b.    We are also passing some arguments to ‘ddply’ telling it to transform and reshape the numerical values. The result of this command produces a new column of values between 0 and 1.
c.    Finally, we pass an argument to ‘ddply’ not to reorder any rows.
d.    After you hit enter, you can type ‘risk.m’ and hit enter again and observe changes to the data elements; there should be two new columns of data.

7.    We are now ready to plot our heat map.

(p <- ggplot(risk.m, aes(variable, BU.Name)) + geom_tile(aes(fill = rescale), colour = "grey20") + scale_fill_gradient(low = "white", high = "red"))

a.    This command will produce a very crude looking heat map plot.
b.    The plot itself is assigned to a variable called p
c.    ‘scale_fill_gradient’ is the argument that associates color shading to the numerical values we rescaled in step 6. The higher the rescaling value – the darker the shading.
d.    The ‘aes’ function of ggplot is related to aesthetics. You can type in ‘help(aes)’ to learn about the various ‘aes’ arguments.

8.    Before we tidy up the plot, let’s set a variable that we will use in formatting axis values in step 9.

base_size <- 9

9.    Now we are going to tidy up the plot. There is a lot going on here.

p + theme_grey(base_size = base_size) + labs(x = "", y = "") + scale_x_discrete(expand = c(0, 0)) + scale_y_discrete(expand = c(0, 0)) + opts(legend.position = "none", axis.ticks = theme_blank(), axis.text.x = theme_text(size = base_size * 0.8, angle = -90, hjust = 0, colour = "black"), axis.text.y = theme_text(size = base_size * 0.8, angle = 0, hjust = 0, colour = "black"))

a.    ‘labs(x = "", y = "")’ removes the axis labels.
b.    ‘legend.position = "none"’ gets rid of the scaling legend.
c.    ‘axis.text.x = theme_text(size = base_size * 0.8, angle = -90, …)’ sets the X axis text size as well as its orientation.
d.    The heat map should look like the image below.

A few final notes:

1.    The color shading is performed within each series of data, vertically. Thus, in the heat map we have generated, the color for any given tile is relative to the tiles above and below it – IN THE SAME COLUMN – or, in our case, within a given ISO 2700X policy section.

2.    If we transpose our original data set – risktical_csv2 – and apply the same commands, with the exception of replacing BU.Name with Policy in our initial ggplot command (step 7), you should get a heat map that looks like the one below.

3.    In this heat map, we can quickly determine key areas of exposure for all 36 of our fictional business units relative to ISO 2700X. For example, most of BU3’s exposure is related to Compliance, followed by Organizational Security Policy and Access Control. If the executive in that business unit wanted more granular information in terms of dollar value exposure, we could share that information with them.

So there you have it! A quick R tutorial on developing a cluster heat map for information risk management purposes. I look forward to learning more about R and leveraging it to analyze and visualize data in unique and thought-provoking ways. As always, feel free to leave comments!