Develop Cybersecurity Metrics that Matter

With new SEC cybersecurity requirements for public companies making their debut, boards of directors are struggling to communicate more effectively with their CISOs.

In their role of overseeing the management of a combined technical and business risk, they want to benchmark the performance of the company’s cybersecurity investments and understand the actual level of risk presented by its information technology assets.

Traditional Reactive Cybersecurity Metrics

The metrics usually presented are historical in nature: mean time to detect, mean time to respond, mean time to restore, and security-related downtime. Improvements in these metrics are taken to indicate lower levels of cyber risk in an organization.

The challenge is that most of these metrics are reactive: wait for something to happen, then act and measure. They haven’t been particularly effective.

What would better describe cyber risk in a manner that might make a difference and encourage better results? What is more relevant to current threats?

Three Alternative Security Metrics to Consider

John Pescatore, SANS Institute’s Director of Emerging Security Trends, put forth some noteworthy and challenging ideas in a recent talk on the CyberWire Daily podcast.

Percentage of Known Critical Danger Time

First, Pescatore suggests that companies measure the “Percentage of Known Critical Danger Time”.

This is the total number of hours per month during which each asset had an unmitigated, known vulnerability with a CVSS score of nine or higher (Critical). Those exposure hours are summed across all such asset-vulnerability pairs and then divided by the total hours in the month.

Because the hours are summed in this way, the calculated percentage can exceed 100% when multiple known critical vulnerabilities are open at the same time.
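
For concreteness, here is a minimal sketch of the calculation in Python. The record shape and the sample numbers are assumptions for illustration, not a real vulnerability feed:

```python
# A minimal sketch of the "Percentage of Known Critical Danger Time"
# calculation. The record shape ("asset", "cvss", "exposure_hours")
# and the sample numbers are illustrative, not a real scanner export.
findings = [
    {"asset": "edge-fw-01", "cvss": 9.8, "exposure_hours": 310},
    {"asset": "vpn-gw-02", "cvss": 9.1, "exposure_hours": 95},
    {"asset": "vpn-gw-02", "cvss": 10.0, "exposure_hours": 410},
]

HOURS_IN_MONTH = 30 * 24  # 720 hours for a 30-day month

# Sum exposure hours for every unmitigated Critical (CVSS >= 9.0) finding.
critical_hours = sum(f["exposure_hours"] for f in findings if f["cvss"] >= 9.0)

# Hours are summed across assets and vulnerabilities, so the result
# can exceed 100% when several critical exposures overlap.
danger_time_pct = 100 * critical_hours / HOURS_IN_MONTH
print(f"Known Critical Danger Time: {danger_time_pct:.1f}%")  # 113.2% here
```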

This metric would help focus attention on the kind of vulnerable, repeatedly compromised edge devices we’ve observed this past year.

Percent of Access to Sensitive Data

Another metric Pescatore suggests is the “Percent of Access to Sensitive Data” that did not enforce strong, multi-factor authentication.

This metric would help turn the tide on phishing risk by making stolen passwords useless for accessing critical data. Reaching that data should take more than simply grabbing passwords, even out of a browser.

Implicit in that measurement is knowing where your sensitive data is located, which is valuable in its own right.
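
As a rough sketch, this metric could be computed from authentication or audit logs. The event fields used here are hypothetical stand-ins for whatever your identity provider actually records:

```python
# A minimal sketch of "Percent of Access to Sensitive Data" without MFA.
# The event fields ("resource_sensitive", "mfa_enforced") are hypothetical
# stand-ins for whatever your identity provider or audit log records.
access_events = [
    {"user": "alice", "resource_sensitive": True, "mfa_enforced": True},
    {"user": "bob", "resource_sensitive": True, "mfa_enforced": False},
    {"user": "carol", "resource_sensitive": False, "mfa_enforced": False},
    {"user": "dave", "resource_sensitive": True, "mfa_enforced": True},
]

sensitive = [e for e in access_events if e["resource_sensitive"]]
unprotected = [e for e in sensitive if not e["mfa_enforced"]]

# Lower is better: 0% means every sensitive-data access required MFA.
pct_without_mfa = 100 * len(unprotected) / len(sensitive) if sensitive else 0.0
print(f"Sensitive-data access without MFA: {pct_without_mfa:.1f}%")  # 33.3% here
```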

Percentage of Sensitive Workloads Running on Hardened Images

And the final metric is the “Percentage of Sensitive Workloads Running on Hardened Images”.

Hardened images are readily available from the Center for Internet Security (CIS), including through the AWS Marketplace. Using them lessens the risk of deploying insecure configurations in cloud environments or on systems exposed to the internet.

Why risk building an image from scratch? Eliminating cloud misconfiguration exposures would go a long way toward protecting hybrid infrastructure environments. Fewer open S3 bucket leaks would be a positive step, too!
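
As one illustration, on AWS this metric could be approximated by comparing each sensitive instance’s AMI against a list of approved hardened images. The workload=sensitive tag convention and the AMI IDs below are assumptions for the sketch, not real identifiers:

```python
import boto3

# A minimal sketch: percentage of "sensitive" EC2 instances launched from
# an approved hardened image. The workload=sensitive tag convention and
# the AMI IDs below are placeholders, not real identifiers.
HARDENED_AMI_IDS = {"ami-0123456789abcdef0", "ami-0fedcba9876543210"}

ec2 = boto3.client("ec2")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "tag:workload", "Values": ["sensitive"]}]
)

total = on_hardened = 0
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            total += 1
            if instance["ImageId"] in HARDENED_AMI_IDS:
                on_hardened += 1

pct = 100 * on_hardened / total if total else 0.0
print(f"Sensitive workloads on hardened images: {pct:.1f}%")
```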

Conclusion

The measurements should reflect the risks your organization actually faces rather than a standardized set of numbers. Pescatore notes that these measurements could easily be turned into “business friendly” graphics for board reports, using the green, yellow, orange, red, and purple scales we’ve all become familiar with.

Such measurements would help the board of directors get a better handle on the cyber risks the company faces and how it is positioned to address them.

Pescatore encourages listeners to move beyond the past and explore new, more effective definitions and measurements—solid advice in a rapidly changing threat landscape.

Call us if you are considering new and more effective ways to understand cyber risk in your organization. We have numerous observations and relevant experience to bring to the table.