
Is OWASP Top 10 No Longer Relevant?

It's hard to find something truly worth reading in application security. Write-ups about this or that attack, unless they show off a new technique, tend to bore me because those attacks are expected to happen anyway. That's when Egor wrote about why the OWASP Top 10 is no longer relevant.

He goes to great lengths to explain why newly developed applications usually don't fit the OWASP Top 10 thanks to modern frameworks. He goes further and says that "(...) If you aren't maintaining some PHP app written 10 years ago, Top 10 list is irrelevant to you (...)". I agree with part of his article, but it didn't go as deep as I'd like. That's why I'm going to cover these points here.

That said, let's recap what OWASP Top 10 is good for:

But in practice, this is part of what happens:

But instead of criticizing the OWASP Top 10's categories, you first have to analyze how those categories were selected in the first place. It is, after all, the most popular top 10 when it comes to vulnerabilities.

Challenges of OWASP Top 10 "Call for Data"

The OWASP Top 10 is built from a "Call for Data" that requests vulnerability data from multiple vendors. And this call for data brings several problems. Let me explain them:

1) OWASP Top 10 has had a conflict of interest since its inception

The person responsible for this project is Dave Wichers, also co-founder and COO of a security consultancy named Aspect Security. Naturally, categories that help it sell more services or products, such as Insufficient Attack Protection, are more likely to enter and climb the OWASP Top 10.

2) Vendor Bias: many applications are left out

Vendors have to either perform penetration tests or run a vulnerability management platform that stores vulnerabilities identified across many applications. And this is a problem in itself.

Vendors have to focus on market segments, which means they perform tests on specific types of companies, e.g., only large ones. Small companies tend not to have the budget to pay for security solutions, so they are left out of the data used to compute the top 10.

3) Vendor Bias: skill difference

Some vendors may be more skilled and able to find more vulnerabilities than others in the same application. This skews the top 10 as well.

4) Automated findings overwhelm manual ones

Companies like Aspect Security run automated scanners and also perform (manual) penetration tests. Which approach tends to gather more vulnerabilities? The automated one, of course: for every single business logic bypass, you'll have many more Reflected Cross-Site Scripting findings. The problem lies in how to normalize those numbers before they shape the top 10. The answer: you don't.
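To illustrate that gap, here is a minimal, hypothetical sketch (the function names and figures are mine, not from the article or any real scanner): a reflected XSS can be detected by blindly injecting a payload and checking the response, while a business logic flaw returns a perfectly "valid" answer that only a human reviewer would question.

```python
def search_page(query: str) -> str:
    """Reflected XSS: user input is echoed into HTML unescaped."""
    return f"<h1>Results for {query}</h1>"

def checkout_total(unit_price: float, quantity: int) -> float:
    """Business logic flaw: quantity is never validated."""
    return unit_price * quantity  # no check that quantity > 0

# An automated scanner finds the XSS trivially: inject a marker
# payload and check whether it comes back reflected verbatim.
payload = "<script>alert(1)</script>"
scanner_detects_xss = payload in search_page(payload)  # True

# The logic flaw, however, produces a syntactically valid total;
# no generic payload reveals it, so a scanner reports nothing here.
# A negative quantity quietly turns a purchase into a refund.
total = checkout_total(10.0, -3)  # -30.0
```

One manual finding like `checkout_total` counts the same as one of the hundreds of reflections a scanner can enumerate in an afternoon, which is exactly the normalization problem described above.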


It's safe to assume that most of the OWASP Top 10 categories are a reflection of the categories reported by automated tools.


5) False-Positive vulnerabilities are part of OWASP Top 10

OWASP relies on vendors, and those vendors are error-prone. It's expected that some of the vulnerabilities they report haven't been validated thoroughly enough. So yes, false-positive vulnerabilities are part of the OWASP Top 10.
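As a sketch of how such false positives arise (an invented example, not drawn from any specific vendor's tooling): a naive scanner that merely greps for its payload in the response will flag output that was, in fact, properly escaped and harmless.

```python
import html

def profile_page(name: str) -> str:
    """Safe: user input is HTML-escaped before being reflected."""
    return f"<p>Hello, {html.escape(name)}</p>"

payload = "<script>alert(1)</script>"
page = profile_page(payload)

# A naive check that only looks for a fragment of the payload in
# the response flags this page as vulnerable...
naive_flag = "alert(1)" in page          # True: false positive

# ...even though the dangerous characters were escaped and the
# full payload can never appear, let alone execute.
actually_vulnerable = payload in page    # False
```

If findings like `naive_flag` are submitted unvalidated to the Call for Data, they inflate the very counts the top 10 is ranked on.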

6) False-Negatives are all over the place

The corner cases, the vulnerabilities that are hard to identify and reproduce, and the ones that require deeper knowledge to exploit are left out of the Top 10, no matter how common they are. If they can't be identified automatically, they won't show up in the OWASP Top 10.

Is OWASP Top 10 worth following then?

Yes, it is, as long as you understand what it can actually be useful for. It's a corollary of a quote I once saw: "life is not unfair; it's you who doesn't know the rules".

You have to mentally rename "OWASP Top 10" to "OWASP Top 10 Automated Findings" and acknowledge that it carries a heavy conflict of interest with security consultancies. It means that security consultancies tell you what you need to buy and then conveniently offer exactly that. Pretty convenient, huh?

On the other hand, the OWASP Top 10 really needs changes:

Given these 3 changes, it may not be worth the trouble to go through each of them to fix this project. Perhaps this top 10 should really be eliminated, and we should go back to blindness when it comes to the most common application security vulnerabilities. When that happens, someone will create another top 10, because "when people want the impossible, only liars are able to satisfy them".

And is it really "impossible"? To cover all apps, indeed. To cover some, no, it's possible. But let's be clear: no matter the top 10, it needs a disclaimer to clarify its scope. And it should be in red, Arial, 72 px.

Don't you agree?

Thank you.

