Stuart King – Information Security Annoyances – Response 2

March 21, 2009

In my last post, I provided some thoughts on one of Stuart King’s Top 5 Information Security Annoyances; specifically, security awareness programs. In this post, I want to touch on Stuart’s comments regarding Risk Modeling. Here are Stuart’s thoughts:

“Many “experts” preach the importance of working through risk models. It’s a load of tosh. No matter which way you try to do it, you’ll always come out with the answer you first thought of. You might as well use a crystal ball and read tarot cards. Nobody needs to work through a complex risk model to understand that if a retail website suffers a denial of service that it’ll have some financial consequences, or that if the internet connection is lost that there won’t be access to […]. I’ve got better, more constructive and practical ways to spend my day than conspiring over risk models. Much more relevant is threat modelling – understand your systems and know the business so that you can make relevant risk-based decisions.”

In November of 2008, I posted a rebuttal regarding Stuart’s dislike for my approach to risk assessments. I am still convinced that Stuart’s approach is more a vulnerability assessment than a risk assessment – the latter of which focuses more on frequency of loss and impact while also accounting for how “vulnerable” something is. So, it is no wonder that Stuart is down on risk modeling; if the risk assessment foundation he is using is cracked, then any risk model built on top of it is probably flawed.

So what is a risk model? It means different things to different people. But here is a general description that I like from the Inter-American Development Bank: “A mathematical, graphical or verbal description of risk for a particular environment and set of activities within that environment. Useful in Risk Assessment for consistency, training and documentation of the assessment.”

Now, modeling activities themselves can be either complex or simple. I *think* the complexity Stuart may be referring to is more in the context of the modeling activity versus the output, or the model itself. However, information security professionals can still model risk without having a degree in statistics, being an actuary, or attending months of technical training. Let me explain…

Effective does not have to be expensive or complex.


First, I beg your pardon for the image above – as it truly does push the limits of my standards for public-facing decency – but there is a real story behind this picture (essentially a risk model). My first real IT job outside the Marine Corps was with a holding company in Washington, DC, that had five subsidiaries (two lobbying firms, two public relations firms, and a crisis management firm). The year was 1998 and our company had just hired its first CIO, who to this day is still one of a handful of folks I consider a close friend. The picture above is a representation of what our new CIO drew for the CFO of the holding company to justify purchasing a new firewall – little explanation needed. It worked. A few weeks after he presented the risk model, we were installing a Raptor firewall and were no longer relying on a Cisco router with NAT capabilities to protect our edge.


The image above is referred to as a Probability / Impact (P-I) Chart. It is often generically referred to as a heat map. For every risk issue and subsequent risk assessment, there is an associated loss event frequency and expected impact that can be plotted within a P-I chart. These charts are not very complex to create and are very flexible. Combine some creativity with that flexibility and you can visually represent risk issues in appealing ways. The ranges can be modified to better reflect the thresholds for your particular company. It is definitely not as crude as the CIO/firewall image above, and it allows us to plot numerous risk points. Finally, these charts are great tools for helping to prioritize which risks to mitigate first.
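To make the P-I chart idea concrete, here is a minimal sketch of how risk points could be binned into heat-map zones. The risk issues, probability bands, and impact thresholds below are hypothetical examples – in practice you would tune the cut-offs to your own company's tolerances.

```python
def pi_zone(probability, impact, prob_cut=(0.3, 0.7), impact_cut=(50_000, 250_000)):
    """Map a (probability, impact) pair to a Low/Medium/High heat-map zone."""
    def band(value, cuts):
        if value < cuts[0]:
            return 0  # low band
        if value < cuts[1]:
            return 1  # medium band
        return 2      # high band
    score = band(probability, prob_cut) + band(impact, impact_cut)
    return ("Low", "Low", "Medium", "High", "High")[score]

# Hypothetical risk issues: (name, annual probability, expected impact in $)
risks = [
    ("Website denial of service", 0.6, 300_000),
    ("Lost unencrypted laptop",   0.8,  40_000),
    ("Edge router compromise",    0.2, 500_000),
]

for name, p, i in risks:
    print(f"{name}: {pi_zone(p, i)}")
```

Plotting the same points on a two-axis scatter (probability on one axis, impact on the other) gives you the visual heat map; the binning above is just the prioritization logic behind it.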


Above is an annualized “expected loss” curve that was produced by a risk tool I work with on a regular basis. Most tools of this nature leverage Monte Carlo or Latin Hypercube simulation capabilities. It took only a few minutes to plug in the variables the simulation model needs (I use the FAIR methodology). For this particular risk issue, I asked the tool to perform 1,000 Monte Carlo simulation iterations, which took about 8 seconds. The output of the simulation gives me the expected loss event frequency and expected loss amount – both of which could be modeled like above. However, the curve above is the annualized risk curve. The annualized risk value is achieved by multiplying the expected loss event frequency by the expected loss amount. Do this 1,000 times and you get the curve above. Again, the tool I use does this all for me – in about 8 seconds. What this curve tells me is that about 90% of the simulations resulted in expected loss amounts of less than $80,000.
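The simulation described above can be sketched in a few lines. This is not the actual tool or its inputs – the triangular ranges below are invented for illustration – but it shows the FAIR-style mechanics: sample a loss event frequency (LEF) and a loss magnitude each iteration, multiply them, and examine the resulting distribution.

```python
import random

def simulate_annualized_loss(iterations=1000, seed=42):
    """Monte Carlo sketch: annualized loss = LEF x loss magnitude per iteration."""
    rng = random.Random(seed)
    results = []
    for _ in range(iterations):
        # Loss event frequency (events/year): triangular(low, high, mode)
        lef = rng.triangular(0.1, 4.0, 1.0)
        # Loss magnitude per event, in dollars: triangular(low, high, mode)
        magnitude = rng.triangular(5_000, 150_000, 40_000)
        results.append(lef * magnitude)
    return sorted(results)

losses = simulate_annualized_loss()
p90 = losses[int(0.9 * len(losses))]  # roughly the 90th percentile
print(f"90% of iterations fell below ${p90:,.0f}")
```

Sorting the results and reading off percentiles is exactly how the “90% of simulations resulted in less than $80,000” statement is derived from a curve like the one above.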

In closing, please understand that there are very simple and affordable risk assessment and risk modeling tools available to you. Most IT security risks do not require complex risk models or tools that take hours, days, months, or even years to build – let alone simulate. Tremendous progress has been made in the last 10-15 years, giving security practitioners like ourselves capabilities that scientists and engineers only dreamed of as recently as 20 years ago.

Let’s stop hobbling ourselves and instead empower ourselves to make as positive an impact as possible on our employers as well as our profession. Be creative, educate yourself, be part of the solution rather than part of the problem, and periodically reassess your skills. This goes for Computer Weekly and the bloggers / writers they hire as well.


Stuart King – Information Security Annoyances – Response 1

March 19, 2009

Stuart King posted his Top 5 Information Security Annoyances a couple of days ago. Stuart and I have bantered back and forth a few times on the risk assessment and risk management topics. In his most recent post, Stuart lists five Information Security annoyances, two of which I want to respond to: “Security Awareness Programs” and “Risk Modeling”.

There are a couple of reasons why I want to respond:

1.    I believe that Stuart and ComputerWeekly are unintentionally doing a disservice to the information security profession: Stuart, by dismissing the value of security awareness programs with a broad stroke; ComputerWeekly, for allowing Stuart to do so under its name.

2.    I want to give a glimpse of hope to seasoned IT security professionals, new IT security professionals, decision makers questioning our value, and compliance professionals – security awareness programs do add value, if done properly.

Regarding Stuart’s “Security Awareness Program” annoyance…

Here is what King wrote: “A whole cottage industry of consultants and websites has been built up around the perceived need to educate company employees about information security. It’s all a waste of time and money. Certain individuals will point to a reduction in the number of lost laptops as a measure of success, or an increase in the number of people who can correctly click “a). All policies are on the Intranet” in a multiple choice questionnaire. The fact is that security awareness programs are received within the organization with about as much enthusiasm as a plate of sick. The key to good information security is strong governance, good communication and well managed, decent processes.  Security awareness programs sap energy and resources, and have little positive effect. Drop them.”

Where to begin? Instead of nit-picking line-by-line, let’s try to describe what a good security awareness program looks like (not in order of importance) – and I am probably missing some other attributes.

1.    Accessible
2.    Relevant
3.    Incentives
4.    Interactive
5.    Complements other risk management processes

In 2008, I participated in a fairly large IT security risk assessment for a large business unit. Without going into details, the primary product distribution capability for this business unit leverages independent contractors and their employees across the United States (around 25,000). There were a few risk issues that we deemed necessary to document and one of the mitigation plans was to create a security awareness program. On a side note, the team responsible for this program did an absolute bang-up job and I am really proud of their hard work.

How is a security awareness program a “security control” and thus a mitigation option? In other posts, I have mentioned that there are generally three types of security controls: preventive, detective, and response. A security awareness program can span all three of these security controls.

Preventive: If the program educates the target audience and changes behavior in a way that results in fewer security incidents and subsequently less loss – at a reasonable cost – it has value.

Detective: A security awareness program may not prevent every bad thing from occurring, but it may enable the target audience to better recognize when to alert security or leadership that something bad is occurring.

Response: I have witnessed numerous instances where information security was proactively engaged to address a security issue because of awareness programs. Had some of these issues or incidents gone unreported, they could have resulted in long periods of data loss or reckless behavior that would have cost the company more money to address at a later time.

Back to what a good security awareness program looks like…

Accessible. The program needs to be accessible to the target audience. Whether it is a web-based application, a distributed CD, or an in-person meeting you have to make it accessible. If it is not accessible, then people will not know how to participate, let alone embrace it.

Relevant. Security awareness programs need to be relevant, which implies that they will have to change from time to time to keep in step with the risk landscape. Does that mean that solid security principles no longer get addressed in the program? No; it means the program needs to address the biggest threats we face today and how the security controls and programs we have in place address those threats.

Incentives. This is easier said than done. For the program I mentioned above, the team that put it together was able to get the security awareness program certified by a few states in the US for official “continuing education” credits (specific to a certain industry / licensing requirements). Thus, the security awareness program not only educates the target audience, but it also helps them fulfill continuing education requirements to maintain their licenses to distribute our product in the state(s) they operate within. With a little imagination, you can probably create your own incentives as part of your security awareness program.

Interactive. This is a no-brainer. Make it interesting to the target audience. There are so many learning styles and it is hard to accommodate all of them. However, if we want people to take time out of their busy schedules to participate in our program – it cannot be boring.

Complements other risk management processes. The security awareness program needs to be leveraged across other risk management processes. For example, for a program that focuses more on data protection: can I correlate places where data loss is occurring with individuals in those areas who have or have not participated in the security awareness program? Of course, there is also the compliance angle. For US readers, there are many federal, state, and industry regulations that mandate “security awareness programs”, so for someone to simply recommend that you “drop them” is irrational.

One final point I would make is cost. An effective security awareness program does not have to cost a lot of money. The security awareness program I mentioned above cost around US $0.50 (including both hard and soft dollars) per individual in the target audience. For less than 50 cents a person, we are able to educate them and fully expect a decrease in certain types of loss events. Our consumers benefit, our independent contractors benefit, and our company benefits.
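The 50-cents-per-person figure lends itself to a simple break-even calculation. Only the ~$0.50 cost and the 25,000-person audience come from this post; the baseline loss figures and the 30% reduction below are hypothetical placeholders to show the arithmetic.

```python
# Back-of-the-envelope break-even for an awareness program.
audience = 25_000
cost_per_person = 0.50
program_cost = audience * cost_per_person  # total program cost

# Hypothetical: 10 loss events/year at $5,000 each before the program,
# and the program prevents 30% of them.
baseline_ale = 10 * 5_000                # annualized loss expectancy
reduction = 0.30
avoided_loss = baseline_ale * reduction  # expected avoided loss per year

print(f"Program cost: ${program_cost:,.0f}")
print(f"Avoided loss: ${avoided_loss:,.0f}")
print(f"Net benefit:  ${avoided_loss - program_cost:,.0f}")
```

Even under modest assumptions like these, a cheap program can clear its own cost; the point is that the comparison is trivially easy to run before deciding to “drop” anything.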

In my next post, I will respond to Stuart’s Risk Modeling annoyance.

Application Security Risk Assessments

March 16, 2009

I have so many topics and thoughts that I want to communicate on this blog. I could write for days on PCI-DSS; especially an exercise I recently led to select a QSA for a professional services consulting engagement; not to be confused with a PCI-DSS compliance assessment. I have a load of thoughts about the current book I am reading by David Vose titled “Risk Analysis, A Quantitative Guide”. I still have not transferred my notes regarding Securosis’ “Business Justification for Data Security” paper to a blog post. BTW, Rich Mogull – congrats on the newborn!

For this post I want to share some information about a professional development project I led back in 2007 and 2008 for my employer that bridges application security and risk assessments.

When I first started working for my employer, I knew that I was going to be performing risk assessments. However, I thought most of my risk assessments were going to be more in the areas of network security and data security. Within a year of starting the job, I knew that application security and risk assessments were going to be a significant part of my job. So, because I was weak in the application security discipline, I added application security to my personal development plan in 2007; it is still there today. My quest for learning more about application security resulted in co-developing an application security assessment methodology for our employer and led me to revive the Columbus, OH OWASP Chapter.

The problem I faced in 2007 was that I was being assigned to more and more complex application development projects. Because I was not very skilled in the application security discipline, I could not perform effective risk assessments. For example, I struggled to identify meaningful vulnerabilities, let alone validate vulnerabilities without wasting another team member’s time. So, my manager allowed me to sit down with two more highly skilled individuals on our team and document how they approach an application security risk assessment. Eventually, one of the two individuals left our company to work for Amazon as one of their head application security professionals. The vacancy was short-lived. We added another person to our team who came up through the application development ranks and provided the extra spark we needed to take the methodology from a straw man to a usable product in late 2008.

Disclaimer 1: Our application security assessment methodology is not limited to just web applications. In large and/or complex environments – you do not have the luxury of dealing with just web platforms and simple backend databases. You will most likely be dealing with multiple applications, use cases, and business processes that span numerous technologies, languages and protocols.

There are six high-level concepts to this methodology. The first three are straightforward and usually give you 80% of the information you need to facilitate a risk assessment. The last three usually require more time and understanding. You will also notice how they all build upon one another – which should also reduce the number of information-gathering activities. Let’s jump into them.

1. Information classification.
The very first thing we want to understand is what type of information is handled or exposed as part of this development effort. Is it public information or confidential information? Are we sharing information with a new application? Are we sharing information with an external business partner? Does this development effort allow for data to be modified? What is the business purpose of this information?

Let’s face it, for most vulnerabilities that can be exploited AND result in data loss – if we do not understand the value of the information – we cannot appropriately communicate the severity of the exposure (or lack thereof) to our business partners. I want to give a quick plug to the Securosis team on the information value concept. They dedicate a portion of their “Business Justification” white paper to information value and it is worth reading if you need a refresher on or are new to information value.
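The screening questions above can be captured as a simple checklist. The labels and rules below are hypothetical – real programs define their own classification scheme – but the sketch shows how a few yes/no answers can drive a coarse label that anchors the rest of the assessment.

```python
# Hypothetical classification helper based on a few screening questions.
def classify(contains_pii=False, shared_externally=False, public=False):
    """Assign a coarse classification label from screening answers."""
    if public and not contains_pii:
        return "Public"
    if contains_pii and shared_externally:
        return "Restricted"  # e.g. confidential data crossing a partner boundary
    if contains_pii:
        return "Confidential"
    return "Internal"

print(classify(public=True))                                # "Public"
print(classify(contains_pii=True, shared_externally=True))  # "Restricted"
```

The value of even a crude scheme like this is consistency: two assessors asking the same questions of two different projects should land on comparable labels.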

2. Use Cases.
The next area we want to have visibility into is use cases. Understanding the use cases is probably one of the most efficient ways to learn information about an application. Properly documented use cases should let you know who is using the application, what other applications the application under review interfaces with, data flow, business rules, and a ton of other information.

Do not underestimate the value of use cases. BTW, documented use cases should be a requirement within any project delivery methodology. If your organization does not have a defined project delivery methodology – you can find use case templates on the Internet. Make it one of *your* requirements in order to complete an application security assessment. Not only will you benefit – but more than likely the whole project team will benefit from it as well.

3. Application Architecture.
Application architecture can mean different things to different people and different organizations. To keep this high level, let’s stick with application architecture in the context of web applications and their interfaces with other applications. For a web application: is the architecture appropriately tiered? Does the architecture result in the application interfacing with other applications or business partners, where it could be inheriting the risk of those applications? Does the data being handled have special architecture security requirements? There are dozens of other questions, but you should get the point.

A simple example of where application architecture comes into play is PCI-DSS. In Appendix F of the PCI-DSS requirements, you will notice that one of the first questions the QSA (auditor) is asked is whether the scope of the assessment can be reduced due to network segmentation. I am sure you can think of other business processes or information types where security requirements can significantly impact the application architecture (or reveal vulnerabilities within the existing application architecture).

4. Access management.
Simply put: how do we control access to the application, and how does this application interface with other applications, databases, or services? I consider this security 101 stuff – but in an application security context. Also, even in companies with well-defined security policies and mature security practices, do not think for even a quarter of a second that there is not someone trying to do more with less when it comes to shared IDs and passwords, or using unauthorized user repositories for access management (a local DB versus LDAP integration).

An additional thought that I would share on this topic is not taking for granted excessive rights a user might have for public data. An example that one of the co-developers of our methodology uses is that of an intranet application that allows all employees to view the cafeteria menu. This is essentially public information. However, we still need to have more stringent controls around who can modify the data lest we find ourselves reading that the main lunch course is “possum belly and grits” instead of a “muffaletta sandwich with olive salad”.
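The cafeteria-menu point above boils down to one rule: public read access does not imply public write access. Here is a minimal sketch of that distinction; the roles and actions are hypothetical stand-ins.

```python
# Hypothetical role-to-action mapping: everyone can read the menu,
# but only a small set of roles may modify it.
PERMISSIONS = {
    "read":   {"employee", "food_services", "admin"},
    "modify": {"food_services", "admin"},
}

def is_allowed(role, action):
    """Check whether a role may perform an action."""
    return role in PERMISSIONS.get(action, set())

print(is_allowed("employee", "read"))    # True  - the menu is public
print(is_allowed("employee", "modify"))  # False - no possum belly and grits
```

The assessment question, then, is not “is this data sensitive to read?” but “who can change it, and is that list as small as it should be?”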

5. Code implementation.
This is a huge and important concept of which I can only scratch the surface as part of this blog post. For the OWASP folks out there – you know what this is. Do we have sound code development practices? Do we have security controls at all tiers of the application; including the most forgotten tier – the client tier? Do I have special data security requirements that need to be accounted for in my application code?

While this concept may be the most complex it is also a concept that separates the wheat from the chaff amongst application security professionals, information security professionals as well as security-minded developers. If you ask a self-proclaimed security professional or a self-proclaimed security-minded application developer what input validation or buffer overflows are and they miss the answer by a mile – red flag. Inspecting code and validating vulnerabilities is another topic worth mentioning in this post. I have been in the position where I ran a web application vulnerability scanner and raised red flags on SQL injection findings to the development team; only to find out that it was a false positive. I was personally embarrassed and professionally I felt that it made our profession look bad. It was a valuable lesson learned and continues to be something I reflect on from time to time to make sure my personal development plans are hitting the mark.
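The SQL injection discussion above is easy to demonstrate. The sketch below uses an in-memory SQLite database with invented table data: a string-built query lets attacker input be interpreted as SQL, while a parameterized query binds the same input as a plain literal.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "alice' OR '1'='1"

# Vulnerable: the OR clause becomes part of the SQL, matching every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'").fetchall()

# Safe: the placeholder binds the whole string as a single literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(len(unsafe))  # 2 - both rows leak
print(len(safe))    # 0 - no user has that literal name
```

If a self-proclaimed security-minded developer cannot explain why the first query returns every row and the second returns none, that is the red flag described above.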

6. Operational Plans.
This concept is more about how an application team manages their application. Whether more in the context of the software development life cycle (SDLC) or day-to-day operations – there are often points of vulnerability in this area that get overlooked. Some questions related to this concept may be: How do I control my source code, let alone access to it? Do I have adequate and appropriate logging within the application? Am I logging information I should not be? How do I provision and de-provision access to the application? How is production support handled?
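The “am I logging information I should not be?” question above can be enforced mechanically: scrub sensitive fields from an event before it ever reaches the log. The field names below are hypothetical examples.

```python
# Hypothetical set of field names that must never appear in logs in the clear.
SENSITIVE_FIELDS = {"password", "ssn", "card_number"}

def redact(event):
    """Return a copy of a log event with sensitive values masked."""
    return {k: ("***" if k in SENSITIVE_FIELDS else v) for k, v in event.items()}

event = {"user": "alice", "action": "login", "password": "hunter2"}
print(redact(event))  # password masked, everything else intact
```

Putting a filter like this at the single choke point where events are written is far cheaper than scrubbing credentials out of log archives after the fact.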

It is too easy to forget about the day-to-day operations of an application, especially for projects where the application is brand new. The way I look at it, this is the perfect time to make sure the requirements are established up front as well as implemented – versus being reactive post-implementation. For existing applications – especially applications that never received a security review or have not had one recently – here is the reality: the threat landscape and regulatory landscape are ever changing. Just because there were no security requirements five years ago or even 20+ years ago (yes, we have an application that old) does not mean security controls should not be implemented today.

So there you have it folks, a very general approach to performing an application security assessment. Of course, once you flush out some risk issues, you can assess them for risk using your favorite risk assessment methodology.

If this application security assessment methodology appeals to you – let me know. Better yet – if the general information I have shared is lacking – please let me know. Either point me in the direction of a better methodology or give me some insight to help make it better. One final note: I intend to get my employer’s permission to give a more detailed presentation of our application security assessment methodology at a future Columbus, OH OWASP Chapter meeting.

Thanks for reading my blog – have an awesome day!