It Is Too Hard Or Impossible…

July 15, 2014

** Admitting that you don’t know how to make the sausage will always cast doubt on the quality of the sausage you do produce. **

One of my personal risk management aggravations is risk management professionals who claim it is too hard or impossible to quantify the frequency or severity of loss. First, there is the irony that we operate in a problem space of uncertainty and then make absolute statements that something cannot be done. When I witness this type of statement, I will often challenge the person on the spot – keeping the audience in mind – in an effort to pull that person back from the edge of mental failure. And make no mistake, I struggle with quantification as well – but the degree to which I share that with stakeholders or peers is an aspect of professional perception that I intentionally manage. Reflecting on my own experiences and interactions with others, I want to share some quick litmus tests I use when addressing the “it is too hard or impossible” challenges.

1. Problem Scoping. Have I scoped the problem or challenge too broadly? Sometimes we take these super-big, gnarly problem spaces and become fascinated with them without trying to deconstruct the problem into more manageable chunks. Often, once you begin reducing your scope, the variables that drive frequency or severity will emerge.

2. Subject Matter Experts. This is one litmus test that I have to attribute to Jack Jones and the FAIR methodology. Often, we are not the best people to be making subject matter estimates for the variables that factor into the overall risk. The closer you get to the true experts and extract their knowledge for your analysis, the more robust and meaningful your analysis will become. In addition, leveraging subject matter experts fosters collaboration and, in some cases, innovation – where leaders of otherwise unrelated value chains realize there is an opportunity to reduce risk across one or more of those chains.

3. Extremes and Calibration. Once again, I have Jack Jones to thank for this litmus test, and Doug Hubbard as well. Recently, a co-worker declared something impossible to measure (a workforce-related increase in expense). After his “too hard” declaration, I simply asked: “Will it cost us more than $1BN?” The question stunned my co-worker, which resulted in an “Of course not!”, to which I replied, “It looks like it is greater than zero and less than 1 billion – we are making progress!” Here is the point: we can tease out extremes and leverage calibration techniques to narrow down our uncertainty and define variable ranges versus anchoring on a single, discrete value (see the short sketch after this list).

4. Am I Trying Hard Enough? This is a no-brainer, but unfortunately I feel too many of us do not try hard enough. A simple phone call, email, or even a well-crafted Google query can quickly provide useful information that in turn reduces our uncertainty.
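
If you want to make the “extremes and calibration” idea concrete, here is a minimal sketch (Python, with entirely made-up numbers) of moving from absurd bounds to a calibrated 90% confidence interval and then expressing the estimate as a range of outcomes instead of a single discrete value. The triangular distribution and the specific dollar figures are my own illustrative assumptions – not part of any formal method.

```python
import random

# Start from extremes the stakeholder will readily reject:
# "Will it cost us more than $1BN?" -> "Of course not!"
absolute_floor = 0                # it clearly costs more than nothing
absolute_ceiling = 1_000_000_000  # and clearly less than a billion

# After a few calibration questions, suppose the SME lands on a
# 90% confidence interval plus a most-likely value (made-up numbers).
low, most_likely, high = 50_000, 120_000, 400_000

def simulate_cost(low, mode, high, trials=10_000):
    """Express the estimate as a range of outcomes rather than a
    single discrete value, using a simple triangular distribution."""
    return sorted(random.triangular(low, high, mode) for _ in range(trials))

draws = simulate_cost(low, most_likely, high)
print(f"median estimate : ${draws[len(draws) // 2]:,.0f}")
print(f"90th percentile : ${draws[int(len(draws) * 0.9)]:,.0f}")
```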

These are just a few “litmus tests” you can use to evaluate whether an estimation or scenario is too hard to quantify. But here is the deal: as risk professionals, we are expected to deal with tough things so our decision makers don’t have to.


Risk Scenario – Hidden Field / Sensitive Information (Part 2 of 4)

January 13, 2009

The Assessment (Threat Community A – Zero Day Malware)

In part one of “Hidden Field / Sensitive Information” the Initech Novelty Inc. Security Manager was notified of a potential security vulnerability within the Initech Novelty Inc. ecommerce website. The Security Manager was able to validate that there is indeed a vulnerability and wants to perform a risk assessment as part of the risk management process.

Note: For the “Hidden Field / Sensitive Information” Assessment, I am choosing to perform two assessments; one for each threat community. Usually, I would choose the most likely TCOMM and focus on that, but because there are PCI compliance implications with this scenario – it is appropriate to address that as well.

1.    Identify the Asset(s) at Risk: (Page 3 of the FAIR Basic Risk Assessment Guide; aka BRAG)

a.    Consumer payment card information. Specifically, the payment card primary account number (PAN) and CVV2/CID/CVC2 values, expiration dates, and cardholder name information.
b.    The state of Initech Novelty Inc.’s PCI Compliance.

2.    Identify the “Threat Community” (TCOMM); (Page 3 of the FAIR BRAG): There are multiple threat communities that pose a threat to the assets described above. For this scenario, the first two communities that come to mind are zero-day malware and Initech Novelty Inc. itself.

a.    Zero-Day Malware. I am choosing this as a TCOMM because most of the INI consumers are accessing the INI ecommerce portal from their home or what they consider to be a trusted PC. The most likely threat to these types of machines / users is malware.

b.    Initech Novelty Inc. (INI). I am selecting INI as a TCOMM for several reasons. First, the INI Security Manager believes that the security vulnerability means INI is no longer 100% compliant with PCI-DSS. The Security Manager will be updating the INI PCI Self-Assessment Questionnaire (SAQ) to reflect a gap with requirement 6.5 (specifically 6.5.7). Thus, INI is its own threat, because declaring non-compliance subjects it to the implications of non-compliance.

** The remainder of this post will be focused on TCOMM A – Zero Day Malware **

3.    Threat Event Frequency (TEF); (Page 4 of the FAIR BRAG): TEF is the probable frequency, within a given timeframe, that a threat agent will come into contact and act against an asset. For this step, I am going to select MODERATE or between 1 and 10 times per year. Here is why:

a.    Internet browsing continues to be popular. More and more consumers are accessing commercial, leisure, and social web sites, which increases the probability of coming into contact with malware.

b.    Phishing and spam continue to be significant attack vectors by which links to malicious websites, or malware itself, can be distributed and exploited.

c.    An argument could be made that the TEF should be higher, but again, survey results indicate that INI’s consumers are security aware and perhaps less likely to engage in riskier online behaviors.
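
For readers who like to see the mechanics, here is a minimal sketch of mapping an annualized frequency estimate to a qualitative TEF label. Only the MODERATE bin (between 1 and 10 times per year) comes from this step; the other bin boundaries are my assumptions and should be checked against page 4 of the BRAG.

```python
def tef_label(events_per_year: float) -> str:
    """Map an annualized threat event frequency estimate to a qualitative
    TEF label. Only the MODERATE bin (1-10 per year) is taken from this
    post; the other boundaries are assumptions to be verified against
    page 4 of the BRAG."""
    if events_per_year < 0.1:
        return "VERY LOW"
    if events_per_year < 1:
        return "LOW"
    if events_per_year <= 10:
        return "MODERATE"
    if events_per_year <= 100:
        return "HIGH"
    return "VERY HIGH"

# The zero-day malware estimate above: a handful of contacts per year.
print(tef_label(3))  # MODERATE
```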

*NOTE – It may make more sense to skip to Step Five and then come back to Step Four.

4.    Threat Capability (TCAP); (Page 5 of the FAIR BRAG): The probable level of force that a threat agent (within a threat community) is capable of applying against an asset. Now keep in mind that we are focused on the TCOMM – zero-day malware – not the threat population (malware in general). For this step I am selecting a value of HIGH, meaning that at least 84% of the threat community is capable of applying force against the consumer’s PC. Here is my reasoning:

a.    Zero-day malware usually has a one- or two-day window during which existing security controls are not able to detect it.

b.    In “Control Resistance” we reasoned that the consumers do not have advanced anti-malware products on their PCs and that they more than likely do not have other security controls that may prevent infection or loss of information.

5.    Control Resistance (CR; aka Control Strength); (Page 6 of the FAIR BRAG): The expected effectiveness of controls, over a given timeframe, as measured against a baseline level of force. The baseline level of force in this case is going to be the greater threat population. So we can have malware as a threat population, but we have narrowed our focus in this scenario to a small subset of the population – zero-day malware. For this scenario, I am selecting a Control Resistance value of MODERATE; or stated otherwise, the controls on the consumer’s PC are resistant to up to 84% of the threat “population”. Here is my reasoning:

a.    Recent studies reflect that a high percentage of American PCs have anti-malware software (AV / spyware), but a large number of consumers still do not have firewall, anti-spam, or anti-phishing capabilities installed.

b.    The INI survey results would indicate that INI’s consumers are security aware and probably vigilant when it comes to online security practices.

c.    Finally, since most home consumers are price conscious, I am assuming that they are purchasing lower-priced or freely available anti-malware products – most of which are effective against most known viruses / Trojans but are not sophisticated enough to do heuristics and other advanced forms of malware detection.

6.    Vulnerability (VULN); (Page 7 of the FAIR BRAG): The probability that an asset will be unable to resist the actions of a threat agent. The basic FAIR methodology determines vulnerability via a look-up table that takes into consideration “Threat Capability” and “Control Resistance”.

a.    In step four – Threat Capability (TCAP) – we selected a value of HIGH.

b.    In step five – Control Resistance (CR) – we selected a value of MODERATE.

c.    Using the TCAP and CR inputs in the Vulnerability table, we are returned with a vulnerability value of HIGH.

7.    Loss Event Frequency (LEF); (Page 8 of the FAIR BRAG): The probable frequency, within a given timeframe, that a threat agent will inflict harm upon an asset. The basic FAIR methodology determines LEF via a look-up table that takes into consideration “Threat Event Frequency” and “Vulnerability”.

a.    In step three – Threat Event Frequency (TEF) – we selected a value of MODERATE; between 1 and 10 times per year.

b.    The outcome of step 6 was a VULN value of HIGH.

c.    Using the TEF and VULN inputs in the Loss Event Frequency table, we are returned with a LEF value of MODERATE.
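
For anyone following along in code, here is a minimal sketch of how steps six and seven chain together as table lookups. The dictionaries contain only the cells actually used in this assessment and in the earlier “Security Template Exception” assessment; the rest of each table should come from pages 7 and 8 of the BRAG, not from guesswork.

```python
# Partial FAIR BRAG lookups. Only the cells used in this assessment and in
# the earlier "Security Template Exception" assessment are filled in; pull
# every other cell from pages 7 and 8 of the BRAG rather than guessing.
VULN_TABLE = {
    ("HIGH", "MODERATE"): "HIGH",      # this post: TCAP HIGH x CR MODERATE
    ("MODERATE", "HIGH"): "LOW",       # security template exception scenario
}
LEF_TABLE = {
    ("MODERATE", "HIGH"): "MODERATE",  # this post: TEF MODERATE x VULN HIGH
    ("LOW", "LOW"): "VERY LOW",        # security template exception scenario
}

def derive_vuln(tcap: str, control_resistance: str) -> str:
    return VULN_TABLE.get((tcap, control_resistance), "consult the BRAG")

def derive_lef(tef: str, vuln: str) -> str:
    return LEF_TABLE.get((tef, vuln), "consult the BRAG")

vuln = derive_vuln("HIGH", "MODERATE")   # step 6
lef = derive_lef("MODERATE", vuln)       # step 7
print(vuln, lef)                         # HIGH MODERATE
```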

*Note: the loss magnitude table used in the FAIR BRAG and the loss magnitude table for the Initech, Inc. scenarios are different. The Initech loss magnitude table can be viewed at the Initech, Inc. page of this blog.

8.    Estimate Worst-Case Loss (WCL); (Page 9 of the FAIR BRAG): Now we want to start estimating loss values in terms of dollars. For the basic FAIR methodology there are two types of loss: worst case and probable (or expected) loss. The BRAG asks us to: determine the threat action that would most likely result in a worst-case outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitude. For this step, I am going to select DISCLOSURE in the threat action column and RESPONSE / REPUTATION in the loss form columns, with a WCL value of MODERATE (between $5,000 and $20,000). Here is why:

a.    Due to the randomness of malware and the variability between one consumer’s security posture and another, it is unreasonable to assume that all INI consumers would experience a loss at the same time.

b.    However, because this is “worst case” loss – it is not unreasonable for us to estimate a scenario where a few consumers are taken advantage of and the source of unauthorized disclosure is tied back to Initech Novelty Inc.

c.    Thus for LOSS FORM: RESPONSE I am quickly estimating $5,000 and for LOSS FORM: REPUTATION I am quickly estimating $2,500. For reputation, I am assuming that loss event knowledge would be contained to the consumer and their social networks and maybe minimal local coverage of the incident. For response, I am assuming lost INI internal productivity, legal expenses, and maybe some hard dollars to provide the consumers credit monitoring or other protections.

9.    Estimate Probable Loss Magnitude (PLM); (Page 10 of the FAIR BRAG): In step eight, we focused on worst-case loss. Now we are going to focus on probable loss. Probable loss is, for the most part, always going to be lower than “worst case” loss. The BRAG asks us to: determine the threat action that would most likely result in an expected outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitude. For this step, I am going to select DISCLOSURE in the threat action column and RESPONSE / REPUTATION in the loss form columns, with a PLM value of LOW (between $1,000 and $5,000). Here is why:

a.    Due to the randomness of malware and the variability between one consumer’s security posture and another, it is unreasonable to assume that all INI consumers would experience a loss at the same time – but we should expect at least one consumer to be impacted by this vulnerability in a given year.

b.    Since this is “probable loss”, I cannot envision INI having to incur greater than $5,000 to address an incident with one consumer.

c.    Thus for LOSS FORM: RESPONSE I am quickly estimating $1,000 and for LOSS FORM: REPUTATION I am quickly estimating $1,000. For reputation, I am assuming that loss event knowledge would be contained to the consumer and maybe their social networks. For response, I am assuming lost INI internal productivity and maybe some hard dollars to provide the consumer credit monitoring or other protections.
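
As a quick sanity check on steps eight and nine, the loss forms can be summed and mapped against the Initech-specific loss magnitude bands. Only the LOW and MODERATE boundaries are stated in this post; the other boundaries in the sketch below are placeholders and should be replaced with the values from the Initech loss magnitude table.

```python
# Initech-specific loss magnitude bands as (upper bound, label).
# Only the LOW ($1,000-$5,000) and MODERATE ($5,000-$20,000) boundaries are
# stated in this post; the others are placeholders -- use the table on the
# Initech, Inc. page.
MAGNITUDE_BANDS = [
    (1_000, "VERY LOW"),       # placeholder boundary
    (5_000, "LOW"),
    (20_000, "MODERATE"),
    (float("inf"), "HIGH"),    # placeholder boundary
]

def magnitude_label(total_loss: float) -> str:
    for upper_bound, label in MAGNITUDE_BANDS:
        if total_loss <= upper_bound:
            return label
    return "UNKNOWN"

worst_case = 5_000 + 2_500   # step 8: response + reputation
probable = 1_000 + 1_000     # step 9: response + reputation
print(magnitude_label(worst_case))  # MODERATE
print(magnitude_label(probable))    # LOW
```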

10.     Derive and Articulate Risk; (Page 11 of the FAIR BRAG): At this point in the basic FAIR methodology we can now derive a qualitative risk rating. Using the table on page 11 of the BRAG worksheet, we use the LEF value from step seven and the PROBABLE LOSS MAGNITUDE value from step nine to derive our overall qualitative risk label.

a.    LEF value from step seven was MODERATE.

b.    PLM from step nine was LOW.

c.    Overall risk using the BRAG table on page 11 is MEDIUM.
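
The final derivation is one more lookup. As with the earlier sketch, only the combinations that actually appear on this blog are encoded; the remaining cells belong to the table on page 11 of the BRAG.

```python
# Partial risk derivation table (LEF x Probable Loss Magnitude). Only the
# two combinations used on this blog are included; fill in the remaining
# cells from page 11 of the BRAG.
RISK_TABLE = {
    ("MODERATE", "LOW"): "MEDIUM",     # this assessment
    ("VERY LOW", "VERY LOW"): "LOW",   # security template exception scenario
}

def derive_risk(lef: str, plm: str) -> str:
    return RISK_TABLE.get((lef, plm), "consult the BRAG")

print(derive_risk("MODERATE", "LOW"))  # MEDIUM
```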

In part four of this risk assessment scenario, I will summarize the results from both part two and part three.

** PERSONAL NOTE ** There is a part of me that thinks the risk associated with this TCOMM is lower. I personally would like to see the TEF range narrowed a bit. Also, there is a contributing factor in this scenario that we should not discount, and that is the privacy of the consumer. So, taking the privacy aspect into account, I would have no problem defending this scenario to a decision maker.


Security Template Exception (part 2) – The Assessment

November 6, 2008

In “The Scenario” I laid out the scenario that we want to assess the risk for. Simply put, a rather routine Windows security template exception. Using the RMI FAIR Basic Risk Assessment Guide (BRAG) as our guide, let’s jump into this.

Note: In the interest of brevity, I will try to strike an appropriate balance between descriptiveness and conciseness when characterizing scenario components. When in doubt, err on the side of what may seem like too much documentation. Not only does it make the assessment more defensible now – it helps in the future if you have to revisit it or need to compare a similar scenario against it.

1.    Identify the Asset(s) at Risk: A non-redundant, non-highly-available Windows 2003 Active Directory member server that runs a sales tracking application that is infrequently used by CSRs to service customers for detailed sales order information. The most likely threat scenario is non-availability of the server to the business (CSRs) due to 3rdpartysalesapp.exe being leveraged to fill up the hard disks on the server with useless data by the TCOMM below.

2.    Identify the “Threat Community” (TCOMM): This is always an interesting discussion since there can be multiple threats that can come into contact and attempt to take action against an asset. For this scenario – the first two that come to mind are malware and a malicious server administrator; I will only perform the assessment using one of them.

a.    Malware. Based on the given information, I am less inclined to stick with malware because the server is very isolated from the most likely attack vectors one would expect malware to be propagated by (email, Internet browsing, an outbreak in lesser-trusted network space, etc.). Now please understand, I am not stating that this server cannot be attacked by malware; I am stating that, compared to my other threat community, a malware infection on this server has a lower probability of occurring than the other.

b.    Malicious Server Administrator (SA). I am choosing this TCOMM as the most likely for several reasons:

i.    The server is not accessible from the Internet, which reduces the chances of attack from the traditional “hacker” TCOMM.
ii.    It is reasonable to assume that most Initech Novelty, Inc. end users that interface with the application running on the server do not have privileged knowledge of the security configuration of the server.
iii.    Based on Initech company history there has been at least one incident of a malicious technical insider attack (Initech, Inc.).
iv.    I would characterize my TCOMM as “an internal, privileged, professional, technical server administrator”.

3.    Threat Event Frequency (TEF): TEF is the probable frequency, within a given timeframe, that a threat agent will act against an asset. For this step, I am going to select LOW or once every 10 years. Here is why:

a.    There was an incident in 1999 where a malicious internal employee was able to successfully take action against Initech. The circumstances then were different from this scenario, but it gives us a starting point from an analysis perspective.

b.    In general, SAs are pretty trustworthy individuals. Initech Novelty, Inc. is a small company with minimal IT staff. It is reasonable to assume that most of them would not have reason or intent to intentionally attempt to bring down a production server. From a scenario perspective, there is nothing stated that should lead one to assume there is a reason for one of the existing SAs to take malicious action.

c.    Initech Inc. has already been assessed by a 3rd party against ISO 27002 and given a CMM score of around 3.5 for the “human resource security management” section. An assessor could assume that Initech, Inc. is performing a good level of due diligence to ensure they are hiring trustworthy individuals, as well as ensuring there are deterrents in place to minimize malicious behavior (a combination of both preventive and detective controls; policy, monitoring, training, etc.).

*NOTE – It may make more sense to skip to Step Five and then come back to Step Four.

4.    Threat Capability (TCAP): The probable level of force that a threat agent (within a threat community) is capable of applying against an asset. Now keep in mind that we are focused on the TCOMM in the context of a weakened security template that allows a non-Windows-provided executable to write to “%systemroot%\system32” – not the threat population. For this step I am selecting MODERATE; between 16% and 84% of the threat community is capable of applying force against the server and the vulnerability in focus for this scenario. Here is my reasoning:

a.    The TCOMM would have unfettered and privileged access to the server and be able to easily launch an attack.

b.    The TCOMM would most likely have privileged knowledge of the weakened security template.

c.    It would not take much effort for the TCOMM to find or quickly create a method to exploit the vulnerability.

5.    Control Resistance (CR; aka Control Strength): The expected effectiveness of controls, over a given timeframe, as measured against a baseline level of force. The baseline level of force in this case is going to be the greater threat population. This is important to understand: a threat community is a subset of a greater threat population. We can have internal employees as a threat population, but we have narrowed our focus in this scenario to the small subset of that population articulated in Step 2. Pardon the redundancy, but Control Resistance is analyzed against the population – not the threat community. For this scenario, I am selecting a Control Resistance value of HIGH; or stated otherwise, the controls on the server are resistant to 84% of the threat population. Here is my reasoning:

a.    A very small percentage of the Initech Novelty, Inc. workforce would ever have privileged knowledge of the weakened security template.

b.    The skills required to remotely exploit the vulnerability are not trivial. It is possible that someone may have the skills and tools, but it is not probable that a large or even moderate percentage of the “threat population” does for this scenario.

6.    Vulnerability (VULN): The probability that an asset will be unable to resist the actions of a threat agent. The basic FAIR methodology determines vulnerability via a look-up table that takes into consideration “Threat Capability” and “Control Resistance” (this is on page seven of the BRAG).

a.    In step four – Threat Capability (TCAP) – we selected a value of MODERATE.

b.    In step five – Control Resistance (CR) – we selected a value of HIGH.

c.    Using the TCAP and CR inputs in the Vulnerability table, we are returned with a vulnerability value of LOW.

7.    Loss Event Frequency (LEF): The probable frequency, within a given timeframe, that a threat agent will inflict harm upon an asset. The basic FAIR methodology determines LEF via a look-up table that takes into consideration “Threat Event Frequency” and “Vulnerability” (this is on page eight of the BRAG).

a.    In step three – Threat Event Frequency (TEF) – we selected a value of LOW; once every 10 years.

b.    The outcome of step 6 was a VULN value of LOW.

c.    Using the TEF and VULN inputs in the Loss Event Frequency table, we are returned with a LEF value of VERY LOW.

*Note: the loss magnitude table used in BRAG and the loss magnitude table for the Initech, Inc. scenarios are different. The Initech loss magnitude table can be viewed below as well as at the Initech, Inc. page of this blog.

Loss Magnitude Table (Initech Specific)

8.    Estimate Worst-Case Loss (WCL): Now we want to start estimating loss values in terms of dollars. For the basic FAIR methodology there are two types of loss: worst case and probable (or expected) loss. The BRAG asks us to: determine the threat action that would most likely result in a worst-case outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitude. For this step, I am going to select DENY ACCESS, in the RESPONSE loss form, with a WCL value of LOW. Here is why:

a.    The server going down results in it not being available – thus access to it is denied. The most likely loss form to Initech Novelty, Inc. is the cost (IT response) of bringing it back up.

b.    Since this is “worst case”, the longest I could see this server being down is five business days. Based on the given information, this would result in one call to the service center not being able to be properly serviced because the application is down. I estimate response loss from a customer service center perspective to be less than $100.

c.    The effort required by IT to rebuild the server is 1-2 hours – easily under $1,000.

d.    Even though the application is not considered “mission critical” to Initech Novelty, Inc. – a prolonged outage could impact other processes as well as increase the response impact.

9.    Estimate Probable Loss Magnitude (PLM): In step eight, we focused on worst-case loss – which in reality for this type of scenario is probably not a practical step. Now we are going to focus on probable loss. Probable loss is, for the most part, always going to be lower than “worst case” loss. The BRAG asks us to: determine the threat action that would most likely result in an expected outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitude. For this step, I am going to select DENY ACCESS, in the RESPONSE loss form, with a PLM value of VERY LOW. Here is why:
a.    The server going down results in it not being available – thus access to it is denied. The most likely loss form to Initech Novelty, Inc. is the cost (IT response) of bringing it back up.

b.    Since this is “probable loss”, the longest I could see this server being down is a few hours, not more than a day. Based on the given information, this could result in one call to the service center not being able to be properly serviced because the application is down. I estimate response loss from a customer service center perspective to be less than $100.

c.    The effort required by IT to rebuild the server is 1-2 hours – easily under $1,000.
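
To show where these response estimates come from, here is a minimal sketch that recomputes them from the figures given in part one (50 sales-transaction-related calls per day, roughly 1 in 250 calls needing this application, CSR time at $60 per hour, IT time at $85 per hour). The half hour of extra CSR handling per affected call is my own illustrative assumption layered on top of those givens.

```python
# Figures from the scenario givens in part 1.
CALLS_PER_DAY = 50        # sales-transaction-related calls per day
APP_CALL_RATIO = 1 / 250  # fraction of calls needing this application
CSR_RATE = 60             # dollars per hour
IT_RATE = 85              # dollars per hour

def response_cost(outage_days: float, it_hours: float,
                  csr_hours_per_affected_call: float = 0.5) -> float:
    """Rough response cost for an outage of the sales tracking server.
    The half hour of extra CSR handling per affected call is an
    illustrative assumption, not a number from the scenario."""
    affected_calls = CALLS_PER_DAY * outage_days * APP_CALL_RATIO
    csr_cost = affected_calls * csr_hours_per_affected_call * CSR_RATE
    it_cost = it_hours * IT_RATE
    return csr_cost + it_cost

# Worst case (step 8): ~5 business days down, 2-4 hours of IT effort.
print(f"worst case: ${response_cost(5, 4):,.0f}")    # roughly $370
# Probable case (step 9): down well under a day, similar IT effort.
print(f"probable  : ${response_cost(0.5, 3):,.0f}")  # roughly $260
```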

10.     Derive and Articulate Risk: At this point in the basic FAIR methodology we can now derive a qualitative risk rating. Using the table on page 11 of the BRAG worksheet, we use the LEF value from step seven and the PROBABLE LOSS MAGNITUDE value from step nine to derive our overall qualitative risk label.

a.    LEF value from step seven was VERY LOW.

b.    PLM from step nine was VERY LOW.

c.    Overall risk using the BRAG table on page 11 is LOW.

So how would I articulate this to a decision maker or someone that is responsible for this exposure to the organization?

“A security template modification is necessary for this application to function properly. The modification somewhat weakens the security posture of that server. The most likely threat to the server would be a disgruntled server administrator who wants to bring the server down. Our assessment is that there is a very low likelihood of loss, and even if it did occur there would be minimal impact in terms of response costs to Initech Novelty, Inc. (expected loss of less than $1,000; worst-case loss of less than $5,000).”

Final thoughts: Some of you that have read the scenario and assessment are probably thinking that this seems like a long process. As with anything new it takes a few iterations for one to become comfortable. Over time, a simple scenario like this could easily be done in a few minutes mentally – maybe five minutes if you have to document some of your justifications (which I suggest you always do).

I look forward to any feedback you might have! If anyone has any suggestions for a future scenario – please let me know.


Security Template Exception (part 1) – The Scenario

November 6, 2008

A Windows server administrator (SA) has contacted the Initech Novelty, Inc. Security Manager to get a formal security exception to modify the security template that allows the executable of a third party sales tracking application (3rdpartysalesapp.exe) to have write access to the “%systemroot%\system32” directory on a Windows 2003 server. The sales tracking application has been designed to create temporary files in the “%systemroot%\system32” directory. This server fulfills a “member server role” within the Initech Active Directory domain.

Given:

1.    The Windows server runs a web-based intranet sales tracking application. It is accessed over TCP 443 (SSL) from the user segment. The application’s data actually resides on a separate database server in a separate data network segment. The application is not considered a mission critical application.

2.    The Windows server sits on a network segment dedicated to Initech internal application servers. This segment is firewalled off from the user segments, the database servers, as well as the network segments that have Internet-facing servers. All firewall configurations are least privilege in nature. There are both logical and physical security controls that facilitate network segmentation.

3.    The servers on the internal application network server segment are centrally managed from an OS patching and Anti-Malware perspective (in other words, they do not make direct connections to the Internet).

4.    Initech, Inc. mandates via its information security policy that Windows server security templates be applied to all of its servers, specific to each server’s role within the enterprise. In the case of this scenario, a default Microsoft-provided Windows Server 2003 “member server” security template has been applied to the Windows 2003 server.

5.    This particular sales tracking application server is not clustered or considered to be highly available. It is estimated that it would take Initech Novelty, Inc. IT personnel between 1-2 hours to restore the server from the most recent daily back-up and another 1-2 hours for application testing.

6.    The application is used by Initech Novelty, Inc. customer service representatives to get detailed information about a sales order that is not available within their regular CSR application. About one out of every 250 calls to the Initech Novelty, Inc. service center requires access to the application in focus for this scenario.

7.    The Initech Novelty, Inc. service center is open seven days per week between the hours of 8 AM EST and 10 PM EST. The average daily volume of sales-transaction-related calls is 50.

8.    Initech Novelty, Inc. has calculated CSR employee costs to the organization (salary, benefits, etc.) to be $60 per hour. Information technology employees are listed as $85 per hour.

9.    Finally, the server in this scenario is not in scope for PCI-DSS compliance (Payment Card Industry Data Security Standards). Access to credit card “primary account number” (PAN) or any other card information is not possible in this application.

In the next post, we will perform the risk assessment using the RMI FAIR Basic Risk Assessment Guide (BRAG) as our guide.


Initech Inc., Risk Scenario PRE-READ

November 6, 2008

I participated in an advanced FAIR training session recently with a very small group of peers from my employer. It was great training, great collaboration, and was actually the formal kick-off to a special project I am leading regarding risk quantification. During the course of this training, I was reminded of a few things that I think are important to remember about risk scenarios – especially given the upcoming posts where I will post risk scenarios and my analysis.

1.    Training risk scenarios – whether reflective of actual incidents or purely made up – need to be structured enough to minimize “what-if” and/or hypothetical questions. During this training event, I brought to the table what I thought was a “simple” risk scenario that I expected would take maybe 10 minutes to work through – it took about 30 minutes (there were 7 people chiming in). Everyone has a different perspective when looking at and dealing with risk. So, to be effective at writing risk scenarios, I think each scenario needs to be framed up to account for at least 80-90% of the relevant information one needs to truly assess the scenario. Anything greater than 90% may be time prohibitive. Feel free to provide comments about the structure of the risk scenarios I present – what is the missing information you need? Ask yourself if the information you need is something that would only be applicable in your environment versus universal information that should have been included in the scenario.

2.    I will use the FAIR methodology to assess the risk for these scenarios. There are four FAIR certifications that can be earned – you can get more details at RMI’s website. I am currently certified as a “FAIR Analyst” and a “FAIR Senior Analyst”. For the risk scenarios I post, I will reference a freely available FAIR tool called the “Basic Risk Assessment Guide” (BRAG) and stick with basic FAIR concepts for the actual risk assessment. This approach should allow for an easier understanding of FAIR concepts and, over time, the complexity of the scenarios will be easier to digest. Of course, I would recommend reading the FAIR white paper, but I am hoping that the risk scenarios will still give an adequate representation of FAIR.

3.    In the BRAG that is available from RMI – in the loss magnitude section – there is a table for loss magnitude severity with dollar value ranges. The values listed in the BRAG should be replaced with dollar value ranges more reflective of your company – especially if you start to adopt FAIR and use it on a regular basis. Determining these ranges should be an exercise that includes information security, IT, legal, business folks, and probably others I have not listed. In the case of the Initech risk scenarios – I have modified the loss magnitude severity table and posted it on the Initech, Inc. page.