PCI-DSS Is Not About “Security by Obscurity”

January 24, 2009


Image copied from CSOnline article related to securing “pricelessness”.

Alex Hutton over at Risk Management Insight recently blogged about PCI-DSS being “…security through obscurity on a grand, grand scale.” I have a professional relationship with Alex and I know he enjoys blog bantering, so here goes.

Obscurity. I take issue with the word obscurity. Yes, it may sound like I am splitting hairs, and Alex’s use of the word can probably be justified via Merriam-Webster. However, when I hear the phrase “security through obscurity”, it is usually in a negative context with the following attributes:

1.    The asset being protected is usually not known about outside the company.
2.    There are usually no (or no effective) security controls applied to the asset to begin with.

Let’s start with #1. The bad guys know merchants and processors have access to and *possibly* store payment card information. No secret there. End of story.

For #2, there are numerous “prevent controls” outlined in PCI-DSS that, if implemented properly and validated to be effective, provide a high level of protection to payment card information. So, it is reasonable to assume that in most merchant / processor cases, there are some security controls in place to protect payment card information.

A better word that Alex could have used in place of “obscurity” is perhaps “isolation”. You can find some other valuable thoughts on obscurity over at dmiessler.com.

Later on in his post, Alex states “…that PCI DSS is not necessarily concerned with Detection and Response.” I agree that once you are not able to prevent, you are probably in trouble with some entity – but detect and respond controls can significantly reduce, and in some cases minimize, loss forms, as well as significantly facilitate root cause analysis (RCA) of payment-card-related events and incidents (read the blog post by Don C. Weber – “get some”).

I am going through an exercise right now to go through PCI-DSS and tag every requirement with the type of control it is (a sketch of the idea follows this paragraph). I am about halfway through it and, amazingly, the percentage of “prevent controls” is not significantly higher than the percentage of detect and respond controls (I may post my findings in a later post). So, Alex – I think you missed the mark on the value of “detect and respond” controls and their importance from a PCI-DSS perspective. You know that I am not a big fan of “value-fail” QSAs, but I do know some of the QSAs check for these controls and interview actors that participate in response processes to SWAG a level of effectiveness. Unfortunately, the ultimate determination is when an event or incident occurs. I would like to think that the card brands and acquirers take this into consideration when determining fines for merchants or processors that are deemed to be culpable in breach incidents. Maybe not.
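For anyone who wants to replicate the exercise, here is a minimal sketch of the tagging approach. The requirement IDs and control-type tags shown are illustrative placeholders, not my actual (unpublished) findings:

```python
# Sketch of the requirement-tagging exercise: map each PCI-DSS requirement to
# a control type and compute the mix. IDs and tags are illustrative only.
from collections import Counter

tags = {
    "1.1": "prevent",   # e.g., firewall configuration standards
    "10.6": "detect",   # e.g., log review
    "12.9": "respond",  # e.g., incident response plan
}

mix = Counter(tags.values())
total = sum(mix.values())
for control_type, count in sorted(mix.items()):
    print(f"{control_type}: {count}/{total} ({count / total:.0%})")
```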

I support the underlying principles of PCI-DSS (see the paragraph on commitment, here), especially since it is such a significant portion of my current job responsibilities. However, I disagree with some QSA and processor approaches to determining whether a merchant is compliant or not – especially when gauging the effectiveness of controls – which complicates articulating and managing the risk associated with PCI-DSS compliance.


Making PCI Easier – A Reality / Health Check

January 21, 2009

To start off, I must admit that while documenting thoughts for this post I could not get the song “We’re Not Gonna Take It” by Twisted Sister out of my head.

The driver for this post was a blog post by Anton Chuvakin – where he posed the big question of how PCI can be made easier. I provided some thoughts to him and he welcomed additional ones.

Alex Hutton over at Risk Management Insight has also opined recently about PCI: the “compliance stick” is not making compliance easier, and those responsible for something like PCI compliance efforts should take on a more consultative, high-information-value, decision-making approach.

And of course, this post comes a few days after a breach disclosure involving a payment processor called Heartland Payment Systems.

This post is not about how the PCI Security Standards Council can make compliance with PCI-DSS easier to achieve. Nor is it about how QSAs or security vendors can facilitate making merchants’ PCI compliance efforts easier. This post is more focused on merchants or processors making PCI compliance easier for themselves. My thought process is that if merchants can make some aspects of PCI compliance easier on themselves, then there is a reduced need to rely “so much” on QSAs and there will be less heartache around PCI-DSS in general.

So off we go….

Commitment – The information risk management folks driving PCI compliance need to be committed to it. However, this commitment needs to be rooted in something more intangible than ensuring compliance with all the PCI-DSS requirements – which are really just a means to an end. The “end” needs to be what we are committed to: protecting consumers, maintaining business operations, and reducing credit card fraud. Yep, some reading this are probably rolling their eyes and may click away from this page before even getting to the end of this sentence – but c’mon – be honest with yourself – is the “end” what we are committed to, or just being compliant?

Structured Approach – Some entities may have their ducks in a row and “manage” PCI compliance to the nth degree – good for them. Other entities are pushing it along with their bellies. Some thoughts I have on a structured approach include:

1.    Do you have the equivalent of a PCI context diagram? Maybe something that visually represents the interested parties in your organization: treasury / finance, billing, application areas, IT areas, legal, etc…

2.    What are the different PCI compliance “work streams” you are managing? More than likely you have more than one work stream, regardless of your state of compliance.

3.    Are those persons responsible for managing PCI compliance properly positioned to deal with all the PCI interested parties in your organization? A few years ago I witnessed a CIO who, within a few months of being in his role, promoted a handful of IT executives to titles / roles that would be equal to their business partners’. The same concept applies to those managing PCI compliance – they have to be positioned to adequately deal with non-executive roadblocks, but also have access to executive stakeholders to deal with executive roadblocks.

4.    Finally, managing PCI compliance should not be relegated to the new employee or the junior employee – especially if they have no previous PCI compliance management experience. There are too many business, political, and IT obstacles to deal with that require some business acumen, negotiating, informing, and project management skills – especially in big companies.

Expertise – Those managing PCI compliance need to be the experts on PCI within the organization. Knowing the words behind the acronym is not enough. I would argue that those responsible for PCI compliance should be familiar with all of the requirements and know what type of control each one is (prevent, detect, respond). You need to be able to gauge the effectiveness of a control in the context of the asset it is applied to and your environment in general. There is nothing worse in my mind than a QSA finding something you should have already known, or a QSA knowing more about your environment than you do.

Continuity – No, I am not talking about business continuity – but PCI compliance management continuity when people leave a company or take on new roles. There needs to be adequate documentation and there needs to be an intentional effort to make sure the knowledge transfer occurs and that it is understood. A lack of continuity can result in weeks if not months of re-gathering information with the possibility of losing valuable information along the way.

Some of these thoughts sound like no-brainers, and I applaud you for making it this far in the post. If I were interviewing for a PCI-specific role, being assigned to a new PCI-specific role, or assessing a company’s PCI compliance efforts – the thoughts above would form the questions I would ask. Also, just because a merchant / processor is doing all of the above does not mean they will never need to reach out to QSAs or that PCI will just magically be easier. However, I do think it will provide clarity as to what needs to be done, as well as allow a company to take a stronger stand against false claims of compliance / non-compliance by interested parties.


Risk Scenario – Hidden Field / Sensitive Information (Part 4 of 4)

January 16, 2009

The Summary

It is time to wrap this scenario up. If you are landing here directly with no knowledge of the three previous posts, the hyperlinks are below:

Part 1 – The Scenario
Part 2 – TCOMM A
Part 3 – TCOMM B

The risk assessments for both threat communities (malware and Initech Novelty, Inc.) resulted in risk ratings of MEDIUM. This scenario is different from others in that we performed a risk assessment for two threat communities. Not all scenarios will require this, nor is it always practical from a time perspective. When it comes to performing multiple risk assessments for multiple threat communities, the question that usually comes up is: “Which risk rating should I use for the scenario as a whole if the risk rating for each TCOMM is different?”. This is a great question and not one that I will spend a lot of time on in this post – but here is how I would reconcile it: I would assign the “higher” qualitative risk rating to the scenario. There is a relationship between LEF, PLM and the RISK rating. The risk rating is more reflective of annualized exposure, so I would err on the side of the higher rating.
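A minimal sketch of that reconciliation rule, assuming a simple ordinal ranking of the qualitative labels (the ranking itself is my assumption for illustration, not something prescribed by the BRAG):

```python
# "Err on the side of the higher" roll-up across per-TCOMM risk ratings.
# The ordinal ranking of labels is an assumption for illustration.
RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def scenario_rating(tcomm_ratings):
    """Return the highest qualitative rating across all assessed TCOMMs."""
    return max(tcomm_ratings, key=RANK.__getitem__)

print(scenario_rating(["MEDIUM", "MEDIUM"]))  # -> MEDIUM for this scenario
```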

Back to this scenario: both FAIR assessments resulted in a risk rating of MEDIUM. Having both of these as MEDIUM makes gauging the risk for the scenario itself somewhat easier. However, we still need to summarize the risk for the decision maker(s) responsible and accountable for the payment application, as well as the decision maker accountable for INI’s compliance with PCI-DSS requirements.

Here is how I would summarize the risk:

***

A vulnerability in our e-commerce application was reported to us by one of our customers. The vulnerability was validated by the security group, assessed for risk and has been categorized as a MEDIUM risk. The vulnerability results in the customer’s payment card information being persisted in HTML files that are cached on their PC after making a purchase from our site. It is possible for the payment card information (credit card number, expiration date, and CVV2 code) to be retrieved from the HTML files.

There are two threats that we have identified that introduce exposure to the consumer, Initech Novelty Inc., or both. The first is zero-day malware: while we believe that most of our customers are Internet-security aware, there is not enough information to gauge the effectiveness of the security controls on their PCs. We are estimating that our average consumer will encounter a form of zero-day malware at least once a year. There is no guarantee that the customer’s cached payment card information would be compromised as a result of the malware, but we also cannot guarantee that it would not be. Second, confirming this vulnerability makes INI non-compliant with PCI-DSS requirement 6.5.7, which is related to developing and maintaining secure systems and applications. We need to update our Self-Assessment Questionnaire to indicate non-compliance with this requirement and report it to our payment processor. Finally, some contributing factors that should be considered as part of this risk assessment are: customer privacy, INI’s obligation to be compliant with PCI-DSS, and INI’s reputation in the event of any incidents related to this vulnerability.

We have estimated INI’s exposure to be between $5,000 and $10,000. This estimate includes both hard and soft dollars encompassing multiple loss forms: our internal response to any reported incidents, costs associated with providing protections to the consumer should there be loss of their payment card information, and the cost to INI to mitigate the risk at a later date if the decision is made to assume the risk. We estimate that the cost to mitigate the risk to an acceptable level (fix the application) is approximately $3,000 soft dollars (internal resource effort).

***

There is one last topic I wanted to write about as part of this scenario series, and that is mitigation solutions. Below are some quick-hit solutions and recommendations that I would present to an application team.

1.    Use the appropriate HTTP header directives that tell the browser not to store or cache the page being loaded (see the sketch after this list).

2.    Do not use hidden fields to facilitate session management – especially with confidential information. Use a session database. (BTW, use of hidden fields is not recommended by OWASP – which PCI-DSS references as a source of secure coding guidelines.)

3.    Have all payment application changes reviewed by security prior to releasing into production.
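To make items 1 and 2 concrete, here is a minimal sketch of both fixes. It is Flask-style Python purely for illustration – INI’s actual platform is never specified – and the route, template, and cookie names are hypothetical; only the header values themselves are standard HTTP:

```python
# Sketch of recommendations 1 and 2: cache-suppressing headers plus a
# server-side session store in place of hidden fields. Framework and names
# are illustrative assumptions; the header values are standard HTTP.
from flask import Flask, make_response, render_template, request

app = Flask(__name__)

# Recommendation 2: keep payment details server-side (a real deployment would
# use a session database; a dict is a stand-in here), keyed by an opaque
# session ID, so card data never round-trips through hidden form fields.
SESSION_DB = {}

@app.route("/confirm")
def confirmation_page():
    order = SESSION_DB.get(request.cookies.get("session_id"), {})
    resp = make_response(render_template("confirm.html", order=order))

    # Recommendation 1: tell browsers and intermediaries not to cache the page.
    resp.headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
    resp.headers["Pragma"] = "no-cache"  # HTTP/1.0 fallback
    resp.headers["Expires"] = "0"
    return resp
```

With no-store in place, the confirmation page from Part 1 would no longer be recoverable from the browser cache; with the hidden fields gone, there would be no PAN or CVV2 in the page source to begin with.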

***

Feel free to share your thoughts – I welcome the feedback!


Risk Scenario – Hidden Field / Sensitive Information (Part 3 of 4)

January 15, 2009

The Assessment (Threat Community B – Initech Novelty, Inc.)

There is some duplicate information from Part 2 at the beginning of this assessment to aid readers who may have landed on this page without reading Part 1 or Part 2.

In part two of “Hidden Field / Sensitive Information” we assessed the risk for “Threat Community” A – Malware. The FAIR assessment resulted in a risk rating of MEDIUM. As mentioned in Part 2, there is another threat community we need to address, and that is Initech Novelty Inc. (INI) itself. Because INI is a PCI merchant and is accountable for the security of its applications that process payment card information, the vulnerability that has been identified and confirmed – in the eyes of the Security Manager – makes INI non-compliant with PCI-DSS. I am assuming that INI is going to declare / update this in their SAQ.

Note: For the “Hidden Field / Sensitive Information” Assessment, I am choosing to perform two assessments; one for each threat community. Usually, I would choose the most likely TCOMM and focus on that, but because there are PCI compliance implications with this scenario – it is appropriate to address that as well.

1.    Identify the Asset(s) at Risk: (Page 3 of the FAIR Basic Risk Assessment Guide; aka BRAG)

a.    Consumer payment card information. Specifically, the payment card primary account number (PAN) and CVV2/CID/CVC2 values, expiration dates, and cardholder name information.

b.    The state of Initech Novelty Inc.’s PCI compliance.

2.    Identify the “Threat Community” (TCOMM); (Page 3 of the FAIR BRAG): There are multiple threat communities that pose a threat to the assets described above. For this scenario, the first two communities that come to mind are zero-day malware and Initech Novelty Inc. itself.

a.    Zero-Day Malware. I am choosing this TCOMM for the assessment for several reasons. Most of the INI consumers are accessing the INI ecommerce portal from a PC at their home or what they consider to be a trusted PC. The most likely threat to these types of machines / users is malware.

b.    Initech Novelty Inc. (INI). I am selecting INI as a TCOMM for several reasons. First, the INI Security Manager thinks that the security vulnerability means INI is no longer 100% compliant with PCI-DSS. The Security Manager will be updating the INI PCI Self-Assessment Questionnaire (SAQ) to reflect a gap with requirement 6.5 (specifically 6.5.7). Thus, INI is its own threat because declaring non-compliance subjects it to non-compliance implications.

** The remainder of this post will be focused on TCOMM B – Initech Novelty Inc.**

3.    Threat Event Frequency (TEF); (Page 4 of the FAIR BRAG): TEF is the probable frequency, within a given time frame, that a threat agent will come into contact and act against an asset. For this step, I am going to select MODERATE, or between 1 and 10 times per year. Here is why:

a.    INI is a level three merchant. Level 3 merchants are required to complete a SAQ once per year. SAQs may be updated throughout the year as needed.

b.    Keep in mind that self-reporting non-compliance does not mean a merchant will be fined, but it can be a precursor to being fined (assuming no breach or other related incident).

*NOTE – It may make more sense to skip to Step Five and then come back to Step Four.

4.    Threat Capability (TCAP); (Page 5 of the FAIR BRAG): The probable level of force that a threat agent (within a threat community) is capable of applying against an asset. For this step I am selecting a value of VERY HIGH; meaning that at least 98% of the threat community is capable of applying force (or reporting non-compliance) against INI’s PCI compliance posture. Here is my reasoning:

a.    INI wants to be ethical and not appear to be covering up vulnerabilities that affect compliance, which could also harm consumer confidence in INI.

b.    Given (a) and some of the information in the Control Resistance section, INI is highly capable of self-reporting non-compliance.

5.    Control Resistance (CR; aka Control Strength); (Page 6 of the FAIR BRAG): The expected effectiveness of controls, over a given time frame, as measured against a baseline level of force. The baseline level of force in this case is going to be the greater threat population. In most scenarios it is usually easy to differentiate between a “threat community” and the “threat population” it is part of. For this particular assessment (TCOMM B), they are the same – Initech Novelty Inc.

Because we are assessing risk in the context of a state of compliance versus more tangible concepts like threats and security controls – there could be some confusion about this step of the assessment. Here is my reasoning for selecting VERY LOW “Control Resistance”:

a.    INI *has* to self-report annually. Now, a merchant could find a vulnerability after completing an SAQ and make a conscious decision not to update their SAQ – that is their choice and probably a whole different discussion.

b.    In the spirit of optimism, I am assuming that INI takes PCI compliance seriously and is risk averse in the sense that they would rather declare non-compliance than have a breach or related incident and be found to be non-compliant.

c.    Finally, there are no other regulations, standards, or legal barriers (ongoing litigation, security investigations, etc…) that would prevent INI from self-reporting the vulnerability that makes them non-compliant. Thus, they *have* to and *want* to report, making VERY LOW the most appropriate Control Resistance selection.

6.    Vulnerability (VULN); (Page 7 of the FAIR BRAG): The probability that an asset will be unable to resist the actions of a threat agent. The basic FAIR methodology determines vulnerability via a look-up table that takes into consideration “Threat Capability” and “Control Resistance”.

a.    In step four – Threat Capability (TCAP) – we selected a value of VERY HIGH.

b.    In step five – Control Resistance (CR) – we selected a value of VERY LOW.

c.    Using the TCAP and CR inputs in the Vulnerability table, we are returned with a vulnerability value of VERY HIGH.

7.    Loss Event Frequency (LEF); (Page 8 of the FAIR BRAG): The probable frequency, within a given time frame, that a threat agent will inflict harm upon an asset. The basic FAIR methodology determines LEF via a look-up table that takes into consideration “Threat Event Frequency” and “Vulnerability”.

a.    In step three – Threat Event Frequency (TEF) – we selected a value of MODERATE; between 1 and 10 times per year.

b.    The outcome of step 6 was a VULN value of VERY HIGH.

c.    Using the TEF and VULN inputs in the Loss Event Frequency table, we are returned with a LEF value of MODERATE.

*Note: the loss magnitude table used in the FAIR BRAG and the loss magnitude table for the Initech, Inc. scenarios are different. The Initech loss magnitude table can be viewed at the Initech, Inc. page of this blog.

8.    Estimate Worst-Case Loss (WCL); (Page 9 of the FAIR BRAG): Now we want to start estimating loss values in terms of dollars. For the basic FAIR methodology there are two types of loss: worst-case and probable (or expected) loss. The BRAG asks us to: determine the threat action that would most likely result in a worst-case outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitudes. For this step, I am going to select DISCLOSURE in the threat action columns and RESPONSE / FINES & JUDGMENTS / REPUTATION in the loss form columns, with a WCL value of SIGNIFICANT (between $20,000 and $50,000). Here is why:

a.    The most likely worst-case scenario (assuming no breach or related security incident) is that INI gets fined for not being compliant. For level 1 or level 2 merchants, we know that monthly fines for non-compliance can be between $5K and $25K. There is also a chance that INI’s processor could increase INI’s transaction fees for not being compliant.

b.    Because INI does not store PAN in its web application, it is hard to envision a scenario where a large number of its consumers will have their payment card information breached all at one time. The Malware TCOMM better addresses this threat.

c.    There is precedent for level three or four merchants being fined in cases of a breach incident. Again, we are not too concerned about a breach scenario, but having at least one known fine at this merchant level lets us contextualize a worst case.

d.    Given the above, I am estimating that, worst case, INI could be fined one or two thousand dollars a month, plus there could be some RESPONSE costs of a few thousand dollars, and maybe some reputation damage if the non-compliance is reported publicly. My estimate would be more around the $15k-$25k range, but I need to select the pre-defined FAIR BRAG range that best fits my estimates (a quick annualization sketch follows below).
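Here is the rough annualization behind that estimate; the monthly fine and response figures are my estimates from above, not a published fine schedule:

```python
# Annualizing the worst-case estimate: monthly fine times twelve, plus a few
# thousand dollars of RESPONSE costs. All figures are the author's estimates.
monthly_fine_low, monthly_fine_high = 1000, 2000
response_costs = 3000  # "a few thousand dollars"

low = monthly_fine_low * 12 + response_costs    # $15,000
high = monthly_fine_high * 12 + response_costs  # $27,000
print(low, high)  # roughly brackets the $15k-$25k estimate;
                  # maps to the SIGNIFICANT ($20,000-$50,000) range selected above
```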

9.    Estimate Probable Loss Magnitude (PLM); (Page 10 of the FAIR BRAG): In step eight, we focused on worst-case loss. Now we are going to focus on probable loss. Probable loss is for the most part always going to be lower than worst-case loss. The BRAG asks us to: determine the threat action that would most likely result in an expected outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitudes. For this step, I am going to select DISCLOSURE in the threat action columns and RESPONSE in the loss form columns, with a PLM value of LOW (between $1,000 and $5,000):

a.    Since INI is going to update its SAQ and report non-compliance, there is a RESPONSE cost to performing this: between 5 and 10 hours.

b.    Also, the Security Manager estimates that it will take between 20 and 30 hours of development time to mitigate the risk. This includes development, testing, implementation, and validation.

c.    The Security Manager is assuming an internal resource cost of $75 per hour for the resources required to address (a) and (b) above; at the high end, 40 hours × $75 results in an estimated cost of $3,000.

d.    Keep in mind we are looking for accuracy, not precision. So the predefined range of between $1,000 and $5,000 (LOW) is the most appropriate selection.

10.     Derive and Articulate Risk; (Page 11 of the FAIR BRAG): At this point in the basic FAIR methodology we can now derive a qualitative risk rating. Using the table on page 11 of the BRAG worksheet, we use the LEF value from step seven and the PROBABLE LOSS MAGNITUDE value from step nine to derive our overall qualitative risk label (a sketch of these chained lookups follows the sub-points below).

a.    LEF value from step seven was MODERATE.

b.    PLM from step nine was LOW.

c.    Overall risk using the BRAG table on page 11 is MEDIUM.
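For readers who want to see the mechanics end to end, here is a minimal sketch of the three chained lookups from steps 6, 7, and 10. Only the table cells exercised in this series are filled in; the authoritative tables live in the FAIR BRAG itself:

```python
# BRAG lookup chain: (TCAP, CR) -> VULN, (TEF, VULN) -> LEF, (LEF, PLM) -> risk.
# Only the cells used in this series are shown; full tables are in the BRAG.
TCAP_CR_TO_VULN = {
    ("VERY HIGH", "VERY LOW"): "VERY HIGH",  # TCOMM B (this post)
    ("HIGH", "MODERATE"): "HIGH",            # TCOMM A (Part 2)
}
TEF_VULN_TO_LEF = {
    ("MODERATE", "VERY HIGH"): "MODERATE",
    ("MODERATE", "HIGH"): "MODERATE",
}
LEF_PLM_TO_RISK = {
    ("MODERATE", "LOW"): "MEDIUM",
}

def brag_risk(tef, tcap, cr, plm):
    """Derive the qualitative risk label from the four BRAG inputs."""
    vuln = TCAP_CR_TO_VULN[(tcap, cr)]
    lef = TEF_VULN_TO_LEF[(tef, vuln)]
    return LEF_PLM_TO_RISK[(lef, plm)]

# TCOMM B, as assessed in this post:
print(brag_risk("MODERATE", "VERY HIGH", "VERY LOW", "LOW"))  # -> MEDIUM
```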

In Part 4, I will summarize the risk assessment findings as well as summarize some possible mitigation solutions.

** Some personal notes on this part of the assessment. The “RESPONSE” loss form type has resulted in some interesting conversations. The concept of soft dollars versus hard dollars comes up very often. Some business units only care about hard dollars (dollars going outside the company) and some only care about soft dollars (dollars within the company). As you are explaining response costs, it may make sense to highlight this differentiation. The way I see it, even by estimating “soft dollars” we are able to show that response takes away from other, higher-value activities that can help the business achieve its goals. This concept should not be discounted.


Risk Scenario – Hidden Field / Sensitive Information (Part 2 of 4)

January 13, 2009

The Assessment (Threat Community A – Zero Day Malware)

In part one of “Hidden Field / Sensitive Information” the Initech Novelty Inc. Security Manager was notified of a potential security vulnerability within the Initech Novelty Inc. ecommerce website. The Security Manager was able to validate that there is indeed a vulnerability and wants to perform a risk assessment as part of the risk management process.

Note: For the “Hidden Field / Sensitive Information” Assessment, I am choosing to perform two assessments; one for each threat community. Usually, I would choose the most likely TCOMM and focus on that, but because there are PCI compliance implications with this scenario – it is appropriate to address that as well.

1.    Identify the Asset(s) at Risk: (Page 3 of the FAIR Basic Risk Assessment Guide; aka BRAG)

a.    Consumer payment card information. Specifically, the payment card primary account number (PAN) and CVV2/CID/CVC2 values, expiration dates, and cardholder name information.
b.    The state of Initech Novelty Inc.’s PCI Compliance.

2.    Identify the “Threat Community” (TCOMM); (Page 3 of the FAIR BRAG): There are multiple threat communities that pose a threat to the assets described above. For this scenario, the first two communities that come to mind are zero-day malware and Initech Novelty Inc. itself.

a.    Zero-Day Malware. I am choosing this as a TCOMM because most of the INI consumers are accessing the INI ecommerce portal from their home or what they consider to be a trusted PC. The most likely threat to these types of machines / users is malware.

b.    Initech Novelty Inc. (INI). I am selecting INI as a TCOMM for several reasons. First, the INI Security Manager thinks that the security vulnerability means INI is no longer 100% compliant with PCI-DSS. The Security Manager will be updating the INI PCI Self-Assessment Questionnaire (SAQ) to reflect a gap with requirement 6.5 (specifically 6.5.7). Thus, INI is its own threat because declaring non-compliance subjects it to non-compliance implications.

** The remainder of this post will be focused on TCOMM A – Zero Day Malware **

3.    Threat Event Frequency (TEF); (Page 4 of the FAIR BRAG): TEF is the probable frequency, within a given timeframe, that a threat agent will come into contact and act against an asset. For this step, I am going to select MODERATE or between 1 and 10 times per year. Here is why:

a.    Internet browsing continues to be popular. More and more consumers are accessing commercial, leisure, and social web sites, which increases the probability of coming into contact with malware.

b.    Phishing and SPAM continue to be a significant attack vector by which links to malicious websites, or malware itself, can be distributed and even exploited.

c.    An argument could be made that the TEF should be higher but, again, INI’s consumers indicate that they are online-security aware and may be less likely to engage in riskier online behaviors.

*NOTE – It may make more sense to skip to Step Five and then come back to Step Four.

4.    Threat Capability (TCAP); (Page 5 of the FAIR BRAG): The probable level of force that a threat agent (within a threat community) is capable of applying against an asset. Now keep in mind that we are focused on the TCOMM zero day malware – not the threat population – malware in general. For this step I am selecting a value of HIGH; meaning that at least 84% of the threat community is capable of applying force against the consumer’s PC. Here is my reasoning:

a.    Zero day malware usually has a one or two day period where existing security controls are not able to detect the malware.

b.    In “Control Resistance” we reasoned that the consumers do not have advanced anti-malware products on their PCs and that they more than likely do not have other security controls that might prevent infection or loss of information.

5.    Control Resistance (CR; aka Control Strength); (Page 6 of the FAIR BRAG): The expected effectiveness of controls, over a given timeframe, as measured against a baseline level of force. The baseline level of force in this case is going to be the greater threat population. So we have malware as a threat population, but we have narrowed our focus in this scenario to a small subset of the population – zero-day malware. For this scenario, I am selecting a Control Resistance value of MODERATE; or, stated otherwise, the controls on the consumer’s PC are resistant to up to 84% of the threat “population”. Here is my reasoning:

a.    Recent studies reflect that a high percentage of American PCs have anti-malware software (AV / anti-spyware), but a large number of consumers still do not have firewall software, anti-spam, or anti-phishing capabilities installed.

b.    The INI survey results would indicate that INI’s consumers are security aware and probably vigilant when it comes to online security practices.

c.    Finally, since most home consumers are price conscious, I am assuming that they are purchasing lower-priced or freely available anti-malware products – most of which are effective against most known viruses / Trojans but are not sophisticated enough to do heuristics and other advanced forms of malware detection.

6.    Vulnerability (VULN); (Page 7 of the FAIR BRAG): The probability that an asset will be unable to resist the actions of a threat agent. The basic FAIR methodology determines vulnerability via a look-up table that takes into consideration “Threat Capability” and “Control Resistance”.

a.    In step four – Threat Capability (TCAP) – we selected a value of HIGH.

b.    In step five – Control Resistance (CR) – we selected a value of MODERATE.

c.    Using the TCAP and CR inputs in the Vulnerability table, we are returned with a vulnerability value of HIGH.

7.    Loss Event Frequency (LEF); (Page 8 of the FAIR BRAG): The probable frequency, within a given timeframe, that a threat agent will inflict harm upon an asset. The basic FAIR methodology determines LEF via a look-up table that takes into consideration “Threat Event Frequency” and “Vulnerability”.

a.    In step three – Threat Event Frequency (TEF) – we selected a value of MODERATE; between 1 and 10 times per year.

b.    The outcome of step 6 was a VULN value of HIGH.

c.    Using the TEF and VULN inputs in the Loss Event Frequency table, we are returned with a LEF value of MODERATE.

*Note: the loss magnitude table used in the FAIR BRAG and the loss magnitude table for the Initech, Inc. scenarios are different. The Initech loss magnitude table can be viewed at the Initech, Inc. page of this blog.

8.    Estimate Worst-Case Loss (WCL); (Page 9 of the FAIR BRAG): Now we want to start estimating loss values in terms of dollars. For the basic FAIR methodology there are two types of loss: worst-case and probable (or expected) loss. The BRAG asks us to: determine the threat action that would most likely result in a worst-case outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitudes. For this step, I am going to select DISCLOSURE in the threat action columns and RESPONSE / REPUTATION in the loss form columns, with a WCL value of MODERATE (between $5,000 and $20,000); a quick roll-up of the estimates follows the points below. Here is why:

a.    Due to the randomness of malware and the variability between one consumer’s security posture and another, it is unreasonable to assume that all INI consumers would experience a loss at the same time.

b.    However, because this is “worst case” loss – it is not unreasonable for us to estimate a scenario where a few consumers are taken advantage of and the source of unauthorized disclosure is tied back to Initech Novelty Inc.

c.    Thus for LOSS FORM: RESPONSE I am quickly estimating $5,000, and for LOSS FORM: REPUTATION I am quickly estimating $2,500. For reputation, I am assuming that loss event knowledge would be contained to the consumer and their social networks, with maybe minimal local coverage of the incident. For response, I am assuming lost INI internal productivity, legal expenses, and maybe some hard dollars to provide the consumers credit monitoring or other protections.
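The roll-up referenced above, using the loss-form estimates just given and the magnitude ranges quoted in this series (the boundaries come from these posts, not the official Initech table):

```python
# Step 8 roll-up: sum the per-loss-form estimates, then map the total onto the
# predefined magnitude ranges. Boundaries are the ones quoted in this series.
RANGES = [
    ("LOW", 1000, 5000),
    ("MODERATE", 5000, 20000),
    ("SIGNIFICANT", 20000, 50000),
]

loss_forms = {"RESPONSE": 5000, "REPUTATION": 2500}
total = sum(loss_forms.values())  # $7,500

label = next(name for name, low, high in RANGES if low <= total <= high)
print(total, label)  # -> 7500 MODERATE, matching the WCL selected above
```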

9.    Estimate Probable Loss Magnitude (PLM); (Page 10 of the FAIR BRAG): In step eight, we focused on worst-case loss. Now we are going to focus on probable loss. Probable loss is for the most part always going to be lower than worst-case loss. The BRAG asks us to: determine the threat action that would most likely result in an expected outcome, estimate the magnitude for each loss form associated with that threat action, and sum the loss magnitudes. For this step, I am going to select DISCLOSURE in the threat action columns and RESPONSE / REPUTATION in the loss form columns, with a PLM value of LOW (between $1,000 and $5,000):

a.    Due to the randomness of malware and the variability between one consumer’s security posture and another, it is unreasonable to assume that all INI consumers would experience a loss at the same time – but we should expect at least one consumer to be impacted by this vulnerability in a given year.

b.    Since this is “probable loss”, I cannot envision INI having to incur more than $5,000 to address an incident with one consumer.

c.    Thus for LOSS FORM: RESPONSE I am quickly estimating $1,000 and for LOSS FORM: REPUTATION I am quickly estimating $1,000. For reputation, I am assuming that loss event knowledge would be contained to the consumer and maybe their social networks. For response, I am assuming lost INI internal productivity and maybe some hard dollars to provide the consumer credit monitoring or other protections.

10.     Derive and Articulate Risk; (Page 11 of the FAIR BRAG): At this point in the basic FAIR methodology we can now derive a qualitative risk rating. Using the table on page 11 of the BRAG worksheet, we use the LEF value from step seven and the PROBABLE LOSS MAGNITUDE value from step nine to derive our overall qualitative risk label.

a.    LEF value from step seven was MODERATE.

b.    PLM from step nine was LOW.

c.    Overall risk using the BRAG table on page 11 is MEDIUM.

In part four of this risk assessment scenario, I will summarize the results from both part two and part three.

** PERSONAL NOTE** There is a part of me that thinks the risk associated with this TCOMM is lower; I personally would like to see the TEF range narrowed a bit. Also, there is a contributing factor in this scenario that we should not discount, and that is the privacy of the consumer. Taking the privacy aspect into account, I would have no problem defending this scenario to a decision maker.


Risk Scenario – Hidden Field / Sensitive Information (Part 1 of 4) – The Scenario

January 12, 2009

The Initech Novelty, Inc. Security Manager (SM) was recently contacted by a concerned consumer about the security of some of its online payment transaction pages. The consumer reported that her credit card information had been stored in some cached Web pages on her PC.

The Security Manager decided to do an investigation and browsed to the Initech Novelty Inc. website, created an order, and completed the payment card transaction process.

The following observations were made:

1.    In order to make a purchase on the web site, a user ID and password are required.

2.    A valid 128-bit SSL EV (Extended Validation) certificate is used within the session from login to checkout.

3.    Since Initech Novelty Inc. does not store credit card information or preferences on behalf of its consumers, all transactions require input of credit card information.

4.    Session state between the payment card input / transaction amount screen and the payment card / transaction amount confirmation screen is controlled via hidden fields.

5.    Even though a portion of the payment card PAN is masked on the confirmation page, the full PAN can be viewed by looking at the source of the confirmation page. Additional payment card information in hidden fields includes the CVV2/CID/CVC2 values, expiration dates, and cardholder name information.

6.    Utilizing a client-side web proxy, the Security Manager noticed that none of the response headers from the web servers contained “no-store” or “no-cache” directives (see the sketch after this list).

7.    The Security Manager was able to retrieve a copy of the confirmation page from his cache and view the entire payment PAN.
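As an aside, observation 6 can be reproduced without a full web proxy. Here is a minimal sketch using Python’s requests library; the URL and session cookie are hypothetical placeholders:

```python
# Reproducing observation 6: fetch the confirmation page and check whether the
# response carries cache directives. URL and cookie are hypothetical.
import requests

resp = requests.get(
    "https://shop.example.com/confirm",
    cookies={"session_id": "an-authenticated-session-id"},
)
cache_control = resp.headers.get("Cache-Control", "")
if "no-store" not in cache_control and "no-cache" not in cache_control:
    print("No cache directives: the page (and any PAN in it) may be written to disk")
```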

Other Given Information:

1.    The daily average of online sales transactions (purchases) via the Initech Novelty, Inc. ecommerce site is 1,000. The average transaction amount is US $43.

2.    Initech Novelty Inc. is considered a Level 3 merchant by its payment processor, an agent of the processor’s acquirer.

3.    Based on voluntary post-checkout web surveys, 45% of all the transactions on the Initech Novelty Inc. ecommerce site are by consumers that regularly visit the site throughout a 3-month period. The remaining 55% of transactions are from consumers who purchase only once or frequent the site less than once every three months.

4.    Based on voluntary post-checkout web surveys, 65% of the survey respondents consider themselves “online security” aware.

5.    Finally, 75% of survey respondents performed their Initech Novelty, Inc. transaction from their home or work PC. Of the remaining 25% of respondents, only 10% performed their transaction from a public terminal.

Task: Perform a risk assessment on this security issue utilizing the FAIR methodology. Summarize the risk associated with this scenario. Recommend some general risk mitigation approaches the application team can look at to mitigate the risk.

Posts 2, 3 and 4 to be published throughout this week.

Note: The purpose of the scenario is to provide enough information to conduct a risk assessment. It would be nearly impossible to write a scenario that would fit every environment (end user / provider), every web application platform, every use case, and every real-world variable. Please see my Scenario Pre-Read for additional information.