[BOOK REVIEW] The Communicators: Leadership in the Age of Crisis

December 23, 2010

I just finished reading The Communicators: Leadership in the Age of Crisis by Richard Levick and Charles Slack. Regular readers of this blog may recall a two-part series back in 2009 – here and here – where Mr. Levick participated in a question-and-answer session on the topic of reputation risk. I have a lot of respect for the work Mr. Levick and his firm, Levick Strategic Communications, perform for their clients. "Why?" you might ask; the answer is risk management and leadership management.

***

RISK MANAGEMENT

The majority of the readers of this blog have information risk management backgrounds. So I will speak to risk management first. I am going to define risk as the probable frequency and probable magnitude of future loss. For those familiar with the FAIR risk analysis methodology – specifically the taxonomy – you will recall that in the “loss magnitude” side of the taxonomy there are concepts such as “duration of loss”, “effect of loss” and “secondary stakeholders” that can inflict secondary loss against our company when a bad event occurs.

The Communicators is filled with examples of how individuals, business leaders, or organizations as a whole can impact (for better or worse) the duration and effect of loss, as well as effectively manage the perceptions of secondary stakeholders, when a bad event (or crisis) occurs. As risk practitioners, it is no longer acceptable just to know that a big loss event can impact our employer's reputation or other, more tangible loss forms. We have to be able to give real yet practical scenarios and examples of loss forms. Better yet, we need to offer additional value by asking tough questions that could shed light on systemic weaknesses in existing plans for dealing with a crisis when it does occur.

For the information risk practitioner, the following sections stood out to me:

Section 1: The Blind Spot. While this section is more about courage and leadership, it contains time-honored nuggets of wisdom that we should embrace no matter what our role or title in the organization is.

Section 6: Leadership in the Digital Era. Social media is a double-edged sword – every information risk practitioner knows it. While social media can enable our company, it can also be an information distribution mechanism that damages our company's reputation and its ability to minimize loss in minutes rather than days, weeks or even months. Read this section for a great perspective on social media and the risks associated with it.

Note: with regard to risk management and its relationship with "bad" events – a crisis does not need to be initiated by something "bad" or an actual loss event. The Communicators gives a few examples of such scenarios (Rule #35: When Facts Don't Matter, Forget The Facts).

***

LEADERSHIP MANAGEMENT

As a former Marine, I cringe when I hear the words manager and leader used synonymously. Some organizations now even call all their managers "people leaders." Philosophically, I can appreciate what they are trying to accomplish. But let's face it: there are managers out there who could not lead their teams out of an open door. I make such analogies to convey that leadership means something special to me. Thus, when I pick up a book that contains advice or examples on leadership, it had better be good. The Communicators far exceeded my expectations.

If I were mentoring someone on the topic of leadership, using The Communicators as a mentoring aid, and only had time to discuss one section, that section would be…

Section 9: Internal Leadership. The concept of 'servant leadership' is not necessarily new. Levick writes, "Servant leadership defines the supervisory missions in terms of helping subordinates succeed and achieve through appreciation and reinforcement, not intimidation" (206). Just imagine a company where this approach was really embedded in its culture – not just a talking point on a PowerPoint slide deck that is helping your co-worker catch up on sleep and drool on him- or herself. Better yet – forget the manager/subordinate or corporate training aspect – what if everyone applied the concept of "servant leadership" in some or all aspects of their lives? Imagine how different our relationships and quality of life could be.

Leadership is not just something you do relative to others. It is a mindset that can be leveraged at various levels of abstraction (personal, social, professional…) by those willing to embrace it.

In summary, I really enjoyed The Communicators and highly recommend it to anyone in the information risk management profession – or anyone else who is serious about managing their career, regardless of role or title.

Bene valete ac me vobis prodesse spero (“I bid farewell and hope I may help you”)


Simple Risk Model (Part 3 of 5): Simulate Loss Magnitude

December 22, 2010

Part 1 – Simulate Loss Frequency Method 1
Part 2 – Simulate Loss Frequency Method 2

In parts one and two of this series we looked at two methods for simulating loss frequency. Method one – while useful – has shortcomings, as it primarily requires working with expected loss frequency values less than 1 (once per year). In addition, with method one it was not possible to identify iterations where loss frequency could be greater than once per year.

Method two overcame these limitations. We leveraged the Poisson probability distribution (a discrete distribution), an expected loss frequency value, and a random value between 0 and 1 to return a loss event count (an integer) for any given iteration. Using this method, about 10% of our iterations resulted in loss events, and some of those iterations had multiple loss events. From my perspective, method two is the more useful of the two – especially since it can account for low-probability situations where there could be numerous loss events in a single simulation iteration.
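As a refresher, the inverse-CDF trick from method two can be sketched in Python. This is my own translation for illustration – it is not part of the original Excel workbook, and the function name is hypothetical:

```python
import math
import random

def simulate_loss_frequency(expected_frequency, iterations, seed=42):
    """Method two, sketched: invert the Poisson CDF at a uniform
    random value to get a loss event count for each iteration."""
    rng = random.Random(seed)
    counts = []
    for _ in range(iterations):
        p = rng.random()                      # like Excel's RAND()
        k = 0
        term = math.exp(-expected_frequency)  # P(X = 0)
        cumulative = term
        while cumulative < p:                 # walk up the CDF until it covers p
            k += 1
            term *= expected_frequency / k    # P(X = k) from P(X = k - 1)
            cumulative += term
        counts.append(k)
    return counts

counts = simulate_loss_frequency(0.1, 1000)
loss_iterations = sum(1 for c in counts if c > 0)
print(loss_iterations)  # roughly 10% of the 1000 iterations
```

With an expected frequency of 0.1, about one iteration in ten produces a non-zero event count – matching the behavior described above.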

The purpose of this post is to simulate loss magnitude. Conceptually, we are going to do what we did with loss frequency method two – but our distribution and input parameters will differ. To simulate loss magnitude we need four things:

1.    A continuous probability distribution.
2.    A random value between 0 and 1.
3.    An expected or average loss magnitude.
4.    A loss magnitude standard deviation.

Continuous Probability Distribution. Technically, if you have internal or external loss magnitude data, you would analyze that data and fit it to an appropriate continuous probability distribution; there are dozens of such distributions. Often, though, we have limited data and need to make good-faith (or "educated") assumptions about the shape of our loss magnitude curve. The loss magnitude curves of many IT risk scenarios are assumed to be normal or lognormal in nature. Normal is often assumed, but it has limitations: the normal distribution can produce negative values, and rarely is there a "perfect" normal loss magnitude curve for an IT risk scenario. However, most "normal-like" distributions converge to normal as data points increase. Thus, for purposes of demonstration, I am going to use the normal distribution.
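To make the negative-value caveat concrete, here is a quick sketch of my own (the parameters are arbitrary, chosen only to make the contrast visible) comparing normal and lognormal draws:

```python
import random

rng = random.Random(1)

# With a wide spread, a normal "loss" distribution will dip below zero:
# the mean here ($5000) is only 1.25 standard deviations above zero.
normal_draws = [rng.normalvariate(5000, 4000) for _ in range(10_000)]
print(min(normal_draws) < 0)     # virtually certain to be True

# A lognormal distribution is strictly positive by construction
# (mu and sigma parameterize the underlying normal; median ~ $3000 here).
lognormal_draws = [rng.lognormvariate(8.0, 0.7) for _ in range(10_000)]
print(min(lognormal_draws) > 0)  # always True
```

This is why the spreadsheet formula later floors the normal draw at zero with MAX – the lognormal alternative would not need that guard.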

Random Value Between 0 and 1. Because we are dealing with uncertainty and a distribution, we will use random values between 0 and 1 in our probability distribution; think Excel function RAND().

Expected or Average Loss Magnitude. Statistics 101: take the sum of X values and divide by X (the number of values) to get the average. Quantitative risk analysis methodologies like FAIR can facilitate deriving an average loss magnitude estimate, or you may have actual loss magnitude data points. How you derive average loss magnitude is not the focus of this post; just remember that to use the normal distribution you need that average loss magnitude value.

Loss Magnitude Standard Deviation. More Statistics 101. At a high level, standard deviation is a measure of how spread out our data points are relative to the mean. The larger the number, the wider and flatter our distribution (think bell curve) will be; the smaller the number, the narrower the bell curve. In the interest of brevity, it is assumed that you can either use existing Excel functions to calculate a standard deviation from your loss magnitude data points, or that your risk analysis tool sets will provide this value. In some cases you may not have actual data sets to calculate a standard deviation, let alone an average magnitude value; in those cases we have to make our best estimates and document our assumptions accordingly.

How do these work together? In layman's terms: given a normal loss distribution with an average loss magnitude of $5000 and a standard deviation of $1000, what is the loss value (inverse cumulative value) at any point in the distribution, given a random probability value?
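The same inverse-cumulative lookup that Excel's NORMINV performs is available in Python's standard library. A small sketch, added here for illustration (it is not part of the original spreadsheet):

```python
from statistics import NormalDist

# A normal distribution with mean $5000 and standard deviation $1000.
dist = NormalDist(mu=5000, sigma=1000)

# inv_cdf(p) is the Python equivalent of NORMINV(p, 5000, 1000).
print(round(dist.inv_cdf(0.5)))    # 5000 -- the median equals the mean
print(round(dist.inv_cdf(0.975)))  # 6960 -- about mean + 1.96 sd
print(round(dist.inv_cdf(0.025)))  # 3040 -- about mean - 1.96 sd
```

Feeding it a probability near 0.5 lands near the mean; probabilities out in the tails return losses several standard deviations away.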

You may want to download this Excel spreadsheet to reference for the rest of the post (it should work in Excel 2003, Excel 2007 and Excel 2010; I have not tested it on Office for Mac). Reference tab “magnitude” and make sure you view it in Excel and NOT in Google Apps.


a.    The average loss magnitude amount is $5000 (cell B1; tab “magnitude”)

b.    The loss magnitude standard deviation is $1000 (cell B2; tab “magnitude”)

c.    For purposes of demonstration, we will number some cells to reflect the number of iterations (A9:A1008; A9=1; A10=A9+1; drag A10 down until you get to 1000).

d.    In Excel, we would use the =RAND() function to generate the random values in cells B9:B1008.

e.    Now, in column C beginning in cell C9, we are going to combine a normal probability distribution with our average loss ($B$1), standard deviation ($B$2) and the random value(s) in column B to return a loss value. In other words: given a normal distribution with mean $5000 and standard deviation $1000, what is the value of that distribution at a random probability between 0 and 1, rounded to the nearest 10? You would type the following in C9 and then drag C9 down to C1008:
=ROUND(MAX(NORMINV(B9,$B$1,$B$2),0),-1)

Let’s dissect this formula.

i.    ROUND. I am going to round the output of this formula to the nearest 10; indicated by the -1.
ii.    MAX. Because we are using the normal distribution, some values could be less than zero, which is not applicable for most IT scenarios. So we compare the value generated by the NORMINV function to 0; whichever is larger is the value that then gets rounded to the nearest 10.
iii.    NORMINV. This is the function built into Excel that returns an inverse cumulative value of a normal distribution given a probability, a mean and a standard deviation.

f.    Once you have values in all the cells – hit F9 a few times.

g.    Cell B3 gives the minimum loss value from cells C9 through C1008. The random value associated with the minimum value is probably less than 0.00xxxx.

h.    Cell B4 gives the maximum loss value from cells C9 through C1008. The random value associated with the maximum value is probably greater than 0.99xxxx.

i.    The histogram shows the count of iterations whose loss magnitude values fall within each loss magnitude bin. If you drew a line along the tops of the columns, it would resemble a bell curve. We expect this, since we are using the normal distribution.

j.    Press the F9 key and new random values will be generated. Think of every press of F9 as a new simulation with 1000 iterations. Press F9 many times and you will notice that the histogram changes as well; while individual bin counts change, the general shape of the histogram does not.

k.    By the way, if you change the average loss magnitude value in cell B1, the histogram will probably break. But you can change the value in B2 to 500, hit F9 a few times, and observe how the bell-curve shape becomes narrower. Or change B2 to 2000 and you will see a much flatter bell curve.
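Steps a through k can also be replicated outside of Excel. Below is a Python sketch of the full 1000-iteration simulation – my own translation of the spreadsheet formula, not something from the original workbook:

```python
import random
from statistics import NormalDist

def simulate_loss_magnitude(mean, sd, iterations, seed=7):
    """Python equivalent of =ROUND(MAX(NORMINV(RAND(), mean, sd), 0), -1):
    draw a uniform p, invert the normal CDF, floor the result at zero,
    and round to the nearest 10."""
    rng = random.Random(seed)
    dist = NormalDist(mean, sd)
    losses = []
    for _ in range(iterations):
        p = rng.random()                  # RAND()
        loss = max(dist.inv_cdf(p), 0.0)  # MAX(NORMINV(...), 0)
        losses.append(round(loss, -1))    # ROUND(..., -1)
    return losses

losses = simulate_loss_magnitude(5000, 1000, 1000)
print(min(losses), max(losses), sum(losses) / len(losses))
```

Re-running with a different seed plays the same role as pressing F9: individual values move around, but the sample average stays near $5000 and the spread near one standard deviation either side – the stable bell shape the histogram shows.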

KEY TAKE-AWAY(S)

1.    As we did with simulating loss frequency, we leverage randomness to simulate loss magnitude.

2.    While we typically talk about an average loss magnitude value; losses can range in terms of magnitude. Being able to work within a range of loss values gives us a more complete view of our loss potential.

In part four of the series, we will combine loss frequency and loss magnitude into one simulation. For every iteration, we will randomly derive a loss frequency value (an integer) and a loss magnitude value, then calculate an expected loss: the product of the loss frequency and loss magnitude values. Perform this cycle thousands or millions of times and you have an expected loss distribution.
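As a preview, the per-iteration cycle just described might look like the following in Python. This is a sketch under my own assumptions, ahead of the actual part four, reusing the Poisson and inverse-normal draws from the earlier posts:

```python
import math
import random
from statistics import NormalDist

def simulate_expected_loss(freq, mean, sd, iterations, seed=11):
    """Per iteration: Poisson-draw a loss frequency (an integer),
    normal-draw a loss magnitude (floored at zero), and record
    their product as that iteration's loss."""
    rng = random.Random(seed)
    dist = NormalDist(mean, sd)
    totals = []
    for _ in range(iterations):
        # Loss frequency: invert the Poisson CDF at a uniform value.
        p = rng.random()
        k = 0
        term = math.exp(-freq)
        cumulative = term
        while cumulative < p:
            k += 1
            term *= freq / k
            cumulative += term
        # Loss magnitude: inverse normal CDF, floored at zero.
        magnitude = max(dist.inv_cdf(rng.random()), 0.0)
        totals.append(k * magnitude)
    return totals

totals = simulate_expected_loss(0.1, 5000, 1000, 10_000)
print(sum(totals) / len(totals))  # hovers near freq * mean = $500
```

Most iterations contribute zero (no loss event), while a few contribute one or more magnitude draws – which is exactly why the resulting distribution of annual loss looks nothing like the bell curve of a single magnitude draw.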