CVSS v3 Creates New Challenges For Developers (2018) (whitesourcesoftware.com)
26 points by thereyougo on Sept 8, 2019 | 15 comments



In general, I'm not a fan of vulnerability quantification efforts like CVSS.

The reason is that they provide the appearance of repeatability and objectivity when, in reality, many of the assessments used are subjective and opinion-based.

A simple "critical", "high", "medium", "low" will generally provide enough actionable information without pretending to be something it's not.

For examples of the weaknesses of CVSS, you only have to look at the software that uses it, like vulnerability scanners.

In many cases they have different ratings for the same issue, and some of the ratings are nonsensical.

Here's one example.

Telnet (totally unencrypted protocol) CVSS v2 5.8 https://www.tenable.com/plugins/nessus/42263

SSL Self signed cert CVSS v2 6.4 https://www.tenable.com/plugins/nessus/57582

So using an unencrypted protocol is worse than an encrypted one with a self-signed cert (and before anyone says "ah, that could be because people misplace trust in the cert": that's not a factor in CVSS calculations), yet the unencrypted protocol scores lower.
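
You can see exactly where the gap comes from by plugging the vectors into the v2 base formula. A minimal sketch (the formula is from the v2 spec; the two vectors are my reading of Tenable's plugin pages, so treat them as assumptions):

    # Minimal CVSS v2 base score, per the v2 spec.
    def cvss2_base(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
        exploitability = 20 * av * ac * au
        f_impact = 0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    # Telnet cleartext, AV:N/AC:M/Au:N/C:P/I:P/A:N
    print(cvss2_base(av=1.0, ac=0.61, au=0.704, c=0.275, i=0.275, a=0.0))  # 5.8

    # Self-signed cert, AV:N/AC:L/Au:N/C:P/I:P/A:N
    print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.275, i=0.275, a=0.0))  # 6.4

The impact components are identical in both vectors; the entire gap comes from Access Complexity (AC:M vs AC:L), which has nothing to do with encryption.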


In 2009 some classmates and I wrote a paper called "Security Impact Ratings Considered Harmful":

https://arxiv.org/pdf/0904.4058.pdf

The position is that using CVSS to decide which updates to take is a losing gamble. You don't have a good idea in advance of how easily something is exploitable, and trusting the CVSS scorers to say "oh, this particular out-of-bounds write causes a crash but isn't exploitable to run arbitrary code" is misplaced: they're not exploit developers and they're not spending time trying to build an exploit chain.

If you have a process for doing software updates, do them unconditionally, on schedule, and promptly, for all security updates and if possible all updates period. Don't "prioritize" the way this article and CVSS as a whole encourage you to. Upstream developers are working on, and looking at, the master branch only, and if a certain component got rewritten, it's entirely possible there's no white-hat attention on the old version ever again. And if you don't have a process—i.e., if you treat CVSS over 9 as a sign you should run around in a panic and figure out something ad hoc, and not care otherwise—the absolute best thing you can do for your security is to fix that.

In the course of writing the paper, we ran 'git log' on a recent Linux kernel release, found something exploitable that had no CVE at all, and developed a working exploit.


They're both rated medium. Makes me wonder how one comes across either of these vulnerabilities in practice.

Telnet has been disabled by default since Windows 7 (that's 10 years ago). You wouldn't see anything internal with telnet unless it's seriously misconfigured or employees are doing their own thing to circumvent IT. This one should maybe be upped to high.

Self-signed certificates, on the other hand: either someone is too lazy to obtain public certificates, or the scanning tool doesn't accept the internal PKI.


Endpoints in a network aren't all PCs :) In some industries (e.g. SCADA) Telnet is very much alive.

Also, even on the Internet, Shodan shows about 5.2 million hosts with the Telnet port exposed...


I disagree. The point is to help you prioritize based on the threat model and mitigating controls in place.

If you have 20 medium vulns, you need to know which ones have higher complexity to exploit, remote exploitation, authentication required, etc.

Ideally a vuln management program will add other in-house metrics based on the asset, data, security controls, and threat exposure.
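
As a purely hypothetical illustration of that layering (the asset tiers and multipliers below are invented for this comment, not from any standard):

    # Hypothetical sketch: CVSS base score adjusted by in-house context.
    ASSET_WEIGHT = {"crown-jewel": 1.5, "internal": 1.0, "lab": 0.5}

    def local_priority(base_score, asset_tier, internet_facing):
        score = base_score * ASSET_WEIGHT[asset_tier]
        if internet_facing:
            score *= 1.2             # exposed services get bumped up
        return min(score, 10.0)      # stay on the familiar 0-10 scale

    vulns = [("CVE-A", 6.4, "lab", False), ("CVE-B", 5.8, "crown-jewel", True)]
    for v in sorted(vulns, key=lambda v: -local_priority(*v[1:])):
        print(v[0], local_priority(*v[1:]))

Note that the vuln with the lower base score ends up first in the queue once context is applied, which is the whole point.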

You have limited resources that can be allocated to testing patches, coordinating deployment, deploying them, and making sure there are no adverse side effects; and if there are, to working with the product vendor or implementing controls.

CVSS is extremely valuable, but more so for prioritizing than for reporting.

Executive reports should be much simpler and usually contain other details such as patch turnaround time, exceptions, and trends.

For the example you mentioned: yes, a false sense of security is worse than no security. You also take into account things like "it's just telnet, and telnet isn't always used for management or confidential access" (e.g. lots of meter readers and BAC-type devices have read-only telnet) vs "a self-signed cert for a service that probably handles sensitive data, which is why it supports encryption to begin with".


Of course each organization should have its own threat model, mitigations, and the like in place. Those organizations also know not to take a CVSS base score as gospel.

However, I'd suggest that many organizations are not that sophisticated, that many, many organizations take CVSS base scores as objective measures of vulnerability, and that that is dangerous.

Also, on the point about things like "difficulty to exploit": that's often subjective, and there's no way the analyst assigning the score has a good picture of the real-world likelihood in every case, so the idea that they can assign a universal number to it is incorrect.

If an organization wants to use CVSS internally and assign its own scores for its environment, with a consistent set of scoring criteria, that could work (although I'd argue high/medium/low would work just as well).


Exploitability is relative, as in compared to other bugs of a similar class. Subjectivity is not always bad.

A CVSS score is better than nothing. Like I said before, high/medium/low is not granular enough. How is it different from having a 3-point numerical scale? Do you disagree with the granularity?
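
For what it's worth, the v3 spec itself already buckets the 0-10 number into qualitative bands, so the granular score and the coarse label coexist:

    # CVSS v3 qualitative severity bands, per the v3 spec.
    def severity(score):
        if score == 0.0: return "None"
        if score <= 3.9: return "Low"
        if score <= 6.9: return "Medium"
        if score <= 8.9: return "High"
        return "Critical"

    print(severity(5.8), severity(6.4))  # Medium Medium

Both of the Tenable examples upthread land in the same "Medium" bucket; the decimal is what lets you order within it.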

Just MS Patch Tuesday alone could be dozens of patches. If you have the process, you can use CVSS as a starting point for generating a final risk score. If you don't, CVSS provides you with some guidance as to what you should prioritize. It's meant to be an aid, not a rule. In the end you should patch all vulns and prioritize as makes sense in your environment.


CVSS scores are worse than nothing. You can make self-XSS come out high, and RCE come out low, and people routinely do both. False certainty is worse than uncertainty.


While I agree with your last sentence, I believe you are misunderstanding the purpose of the score.

It is meant to facilitate your risk assessment. Taking your example, a lot of people think RCE is automatically bad and XSS not so much (though yes, self-XSS is mostly low severity). If the vuln is a bug where RCE might theoretically be possible but there are no known exploits, then it should have a lower exploitability score than unauthenticated XSS on a login page, right? Similarly, naive people might think DoS isn't so bad, but service disruption can be a lot worse than RCE depending on the asset, data, and threat model. CVSS is meant to guide those people so they prioritize based on a standard score rather than intuition.

Threat model is another thing: if you expect attackers who can rapidly develop exploits after disclosure, then you would increase/adjust the exploitability score yourself (and the score does change over time as exploits are made public).
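
That adjustment over time is exactly what the temporal metrics are for. A sketch of the v3 temporal formula, with multiplier values from the spec:

    import math

    def roundup(x):                  # v3 rounding: ceiling to one decimal
        return math.ceil(x * 10) / 10

    # E = Exploit Code Maturity, RL = Remediation Level, RC = Report Confidence
    def temporal(base, e=1.0, rl=1.0, rc=1.0):
        return roundup(base * e * rl * rc)

    print(temporal(9.8, e=0.91, rl=0.95, rc=0.96))  # unproven exploit: 8.2
    print(temporal(9.8, e=1.00, rl=0.95, rc=0.96))  # weaponized exploit: 9.0

When exploit maturity moves from Unproven (0.91) to High (1.0), the effective score climbs back toward the base.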

I guess my perspective is that hygiene is hygiene; either you have it or you don't, and all vulns should be remediated regardless of any scores. I have been responsible for giving guidance to IT ops teams on prioritizing patches, and this helps, but you have to understand the purpose and meaning behind the scores. It can certainly be misunderstood, and I am sure there are many improvement opportunities like the ones you mentioned, but the alternative is to expect resource-limited staff to read and understand every vuln (including how specific complex attacks can be pulled off).

And even if that were possible, how does someone managing said staff validate that things are being prioritized properly? How do you get an external auditor to properly assess your practices if you interpret every vuln autonomously, without any well-established and understood priority structure? I mean, you can change the score according to your opinion all you want, but at least you can say "Because of _____ I am increasing the exploitability score" as opposed to "I don't think it's so bad so I'll score it n". Maybe I just have a different perspective.

Haven't heard of OSVDB in a long time, but it exists too. Most orgs need something to fill this need.


What I'm reading here is the case that, if there were such a score that could reliably capture risk in at least some of its complexity, that would be good. I agree, it would be good. My point is that CVSS doesn't do that. My evidence isn't that there's a self-XSS somewhere scored 8+, but that this happens routinely, because you can (and people absolutely do) make CVSS scores say whatever they want them to say.


Perhaps the problem is a lack of integrity, or perhaps the fact that these scores are misused to evaluate how good/bad a product is leads to inevitable attempts to manipulate the scores in bad faith?

Is the problem with the scoring system itself, or with the political structure of who gets to set the scores and how? If the former, then you have a very good point.


It's a technical problem more than a moral problem. As you observe, it's helpful to have some kind of risk metric that captures the complexity of actual exposures in real systems. But with CVSS, there's so much context you need about why a vulnerability is scored as it is, you might as well just ditch the score and share the context; without the details about how the score was arrived at, your 8+ might just as well be my 1.0.


I'm not a security researcher, but I've heard your point made in a couple of places, and I've come to agree with it.

If you're going to boil everything down to a single, approximated number, you might as well make it obvious that you're approximating. Having scores like 5.8 makes people put more trust than is appropriate into the algorithm.


Once upon a time I wrote a CVSS 2/3 library for Python[1] (more have appeared now, this[2] looks nicer).

CVSS is really complex and seemingly arbitrary. Take a look at this (rather horrible) code to calculate a CVSS score from another library[3]. Yes, the code is disgusting, but even if you abstract it away nicely it boils down to: do some maths with some arbitrary numbers[4], then evaluate a bunch of conditionals, and you (hopefully) get 3 numbers spat out.

I don't like it. There are bugs in the 'official' calculator around floating point numbers, the spec had several typos in the calculations (one being an extraneous negative symbol!), and the naming system for the components is needlessly complex.

There are surely simpler, less magical ways to score, compare and rate vulnerabilities?

1. https://github.com/ctxis/cvsslib

2. https://github.com/skontar/cvss

3. https://github.com/toolswatch/pycvss3/blob/master/lib/pycvss...

4. https://www.first.org/cvss/specification-document#7-1-Base-M...
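
To make the "arbitrary numbers plus a bunch of conditions" point concrete, here's a condensed sketch of just the v3.0 base score, with the magic constants straight from the spec[4]:

    import math

    def roundup(x):                    # v3.0 rounding: ceiling to one decimal
        return math.ceil(x * 10) / 10  # (v3.1 later redefined this to dodge float artifacts)

    def cvss3_base(av, ac, pr, ui, scope_changed, c, i, a):
        iss = 1 - (1 - c) * (1 - i) * (1 - a)
        if scope_changed:
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        else:
            impact = 6.42 * iss
        exploitability = 8.22 * av * ac * pr * ui
        if impact <= 0:
            return 0.0
        if scope_changed:
            return roundup(min(1.08 * (impact + exploitability), 10))
        return roundup(min(impact + exploitability, 10))

    # AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
    print(cvss3_base(0.85, 0.77, 0.85, 0.85, False, 0.56, 0.56, 0.56))  # 9.8

Why 6.42, 7.52, 0.029, an exponent of 15, or a 1.08 scope bump? The spec doesn't really justify them; as far as I know they were tuned so that example vulnerabilities came out where the SIG wanted them.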


I don't know any real software security person who takes CVSS seriously. It's mostly a Ouija board you use to rationalize setting finding severities where you want them (high if you're on offense, low if you're on defense).

The other useful function of CVSS is to flag people you shouldn't take seriously in the industry. So, for instance, Kevin Mitnick runs a "zero-day vulnerability brokerage" that only accepts zero-days past a threshold CVSS. That's a pretty decent clue about the legitimacy of the service.

Don't waste time with CVSS.





