If you missed the first two posts, the links to Post 1 and Post 2 are on the right-hand side of your browser screen.
Let’s get to it…
We are now up to the CVSS “Temporal Metrics”. According to the documentation, these are optional and are meant to be completed by the vendor’s vulnerability analyst more so than by the end user. As is to be expected, all three can be considered contributing factors when assessing a vulnerability for risk.
Exploitability. This metric measures the current state of exploit techniques or code availability (possible values: Unproven, Proof-of-Concept, Functional, High, and Not Defined). The key words here are “current state”, indicating that the state changes, for better or for worse, over time. So in the world of FAIR, this would seem to map nicely to “threat capability” and “control resistance”. “Threat capability” because the exploit methods may be limited to a very small percentage of the threat community; a fraction of the threat population as a whole (the community is a subset of the population). “Control resistance” because the exploit may only be possible if the attacker has local system access. Heck, in some cases, security controls could be entirely absent and the system would be no more vulnerable than if there were tens of thousands of dollars’ worth of controls (paying dollars to protect pennies).
Remediation Level. According to CVSS, this metric is an important factor for prioritization. Possible values include: Official Fix, Temporary Fix, Workaround, Unavailable, and Not Defined. These choices reflect how the vendor would score it, but there seems to be room for us, the risk assessors, to provide our own value. This metric maps well to “control resistance” in the FAIR taxonomy; specifically, how resistant are my security controls against the overall threat population? Just because the vendor may not have a solution for us, there could be offsetting controls that do not require a vendor solution in the short term. I appreciate the context this metric was developed within – but do not let that stop you from taking the logic back into your own hands when it comes to these metrics. We all hear and preach about security in depth – take the opportunity to leverage those investments when analyzing vulnerability from a risk assessment perspective.
Report Confidence. This metric measures the degree of confidence in the existence of the vulnerability and the credibility of the known details. Possible values: Unconfirmed, Uncorroborated, Confirmed, and Not Defined. This may be the one metric that, when a vulnerability is first disclosed, only the vendor or other industry experts can accurately score. But within days, weeks, or months of public disclosure, pretty much any non-“wolf-crying” security expert can do some simple Google searches and either validate a provided CVSS metric score or update one on their own. From my perspective, this metric maps well to FAIR’s “threat event frequency” and “threat capability” taxonomy elements. For “threat event frequency”, specifically “threat contact” and “threat action”: how often do I think a threat agent comes into contact with a vulnerable system, and how often do I think they will attempt to exploit the vulnerability once they do come into contact with it? For “threat capability”, because – as noted under Exploitability – the exploit methods may be limited to a small subset of the threat community. So “report confidence” is a contributing factor to possibly higher threat contact or threat action, but it does not guarantee that your threat event frequency will be higher, or that every threat agent in a threat community or threat population is capable of exploiting the vulnerability – especially in light of other security controls that may be present in your environment.
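For reference, the three temporal metrics above fold into the base score as simple multipliers – the CVSS v2 guide defines a fixed numeric value for each choice, and the temporal score is just the base score scaled by all three. Here is a minimal sketch in Python (the multiplier values are taken from the CVSS v2 specification; the function name and example inputs are my own):

```python
# CVSS v2 temporal multipliers, per the CVSS v2 specification.
EXPLOITABILITY = {
    "Unproven": 0.85, "Proof-of-Concept": 0.90,
    "Functional": 0.95, "High": 1.00, "Not Defined": 1.00,
}
REMEDIATION_LEVEL = {
    "Official Fix": 0.87, "Temporary Fix": 0.90,
    "Workaround": 0.95, "Unavailable": 1.00, "Not Defined": 1.00,
}
REPORT_CONFIDENCE = {
    "Unconfirmed": 0.90, "Uncorroborated": 0.95,
    "Confirmed": 1.00, "Not Defined": 1.00,
}

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    """Apply the three temporal multipliers to a CVSS v2 base score,
    rounded to one decimal place as the spec prescribes."""
    score = base * EXPLOITABILITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc]
    return round(score, 1)

# A base 7.5 vulnerability with proof-of-concept code, an official fix,
# and uncorroborated reports: 7.5 * 0.90 * 0.87 * 0.95 -> 5.6
print(temporal_score(7.5, "Proof-of-Concept", "Official Fix", "Uncorroborated"))
```

Note that the temporal score can only lower (or preserve) the base score – which is exactly why, as argued above, these metrics are contributing factors to a risk assessment rather than the assessment itself.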
I will do two more posts on the CVSS framework: the “environmental metrics” group and then a summary post. Thanks for reading!