Is OWASP Top 10 No Longer Relevant?

It's hard to find something truly worth reading in application security. Attacks here and there - unless they showcase a new technique - are rather boring to me because they're expected to happen anyway. So when Egor wrote about why the OWASP Top 10 is no longer relevant, it stood out.

He goes on at length, explaining why newly developed applications usually don't fit the OWASP Top 10, thanks to modern frameworks. He goes further, saying, "(...) If you aren't maintaining some PHP app written 10 years ago, the Top 10 list is irrelevant to you (...)". I agree with parts of his article, but I don't think it delves deep enough. That's what I'm going to address here.

That said, let's recap what OWASP Top 10 is good for:

  • Consultancies: it's easier to claim that they're going to cover an industry standard scope rather than 'my own methodology and scope' because customers understand this better;
  • Security Managers: they have an industry standard to generate metrics for their directors, CxO, and board;
  • Developers: they have a direction in which to start studying attacks/defenses;
  • Pentesters: they have some references to focus their attacks based on the commonality of a vulnerability.

But in practice, this is part of what happens:

  • Consultancies: simply update 'OWASP Top 10 2013' to 'OWASP Top 10 2017' and keep running the same tests;
  • Security Managers: fail to add more relevant application security metrics, thus limiting themselves to the OWASP Top 10;
  • Developers: start studying material explaining OWASP Top 10, learn the basic examples, but fail on corner cases or in acquiring the security mindset;
  • Pentesters: find that reading recent hack news and exploiting CVEs in application servers is easier than finding OWASP Top 10 vulnerabilities in modern applications, except for business logic flaws.

But before criticizing the OWASP Top 10 categories, you first have to analyze how those categories were selected in the first place. It is, after all, the most popular top 10 when it comes to vulnerabilities.

Challenges of OWASP Top 10 "Call for Data"

The OWASP Top 10 is built from a 'Call for Data' that requests vulnerability data from multiple vendors. And this call for data brings a lot of problems with it. Let me explain:

1) OWASP Top 10 has had a conflict of interest since it was born

The person responsible for this project is Dave Wichers, who is also co-founder and COO of a security consultancy named Aspect Security. Naturally, it's expected that categories that help sell more services or products are more likely to enter and climb the OWASP Top 10 - Insufficient Attack Protection, for example.

2) Vendor Bias: many applications are left out

Vendors either perform penetration tests or run a vulnerability management platform that stores vulnerabilities identified across many applications. And this is a problem in itself.

Vendors focus on market segments. This means that they perform tests on a specific type of company, e.g., only large companies. Small companies typically don't have the budget to pay for security solutions, thus they are left out of the data used to compute the top 10.

3) Vendor Bias: skill difference

Some vendors may be more skilled and able to find more vulnerabilities than others in the same application. This will affect the top 10 as well.

4) Automated findings overwhelm manual ones

Companies like Aspect Security run automated scanners and conduct (manual) penetration tests. Which approach tends to gather more vulnerabilities? The automated one, of course. So for every single business logic bypass, you'll have many more Reflected Cross-Site Scripting findings. The problem now lies in how to normalize that data so it reflects a true top 10. The answer: you don't.
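To make the skew concrete, here's a minimal sketch in Python - the numbers, category names, and per-application normalization are my own illustrative assumptions, not OWASP's actual methodology - showing how raw scanner counts can bury rarer, manually found issues in a ranking:

    from collections import Counter

    # Hypothetical findings across 100 assessed applications.
    # Scanners report every XSS instance; a pentester typically reports
    # a single business logic flaw per engagement.
    raw_findings = (
        ["Reflected XSS"] * 450          # automated: many instances per app
        + ["SQL Injection"] * 120        # automated
        + ["Business Logic Bypass"] * 8  # manual only
    )

    # Ranking by raw count: automated categories dominate.
    print(Counter(raw_findings).most_common())
    # [('Reflected XSS', 450), ('SQL Injection', 120), ('Business Logic Bypass', 8)]

    # One possible normalization: count each category at most once per
    # application, i.e., rank by "fraction of applications affected"
    # instead of raw finding volume.
    apps_affected = {
        "Reflected XSS": 60,            # 60 of 100 apps had at least one
        "SQL Injection": 35,
        "Business Logic Bypass": 8,     # rare in the data, or just rarely found?
    }
    for category, count in sorted(apps_affected.items(), key=lambda kv: -kv[1]):
        print(f"{category}: {count}% of apps")

Even the normalized ranking can only reflect what the contributing tools and testers managed to find - which is exactly the point: the input data, not the ranking math, shapes the list.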

It's safe to assume that most of those OWASP Top 10 categories reflect the categories presented by automated tools.

5) False-Positive vulnerabilities are part of OWASP Top 10

OWASP relies on vendors, and those vendors are error-prone. It's expected that some of the vulnerabilities they report were never properly validated. So yes, false-positive vulnerabilities are part of the OWASP Top 10.

6) False-Negatives are all over the place

The corner cases, the vulnerabilities that are hard to identify and reproduce, and the ones that require more knowledge to exploit are left out of the Top 10, no matter how common they are. If they can't be identified automatically, they won't appear in the OWASP Top 10.

Is OWASP Top 10 worth following then?

Yes, it is, as long as you understand what it's actually useful for. It's a corollary of a quote I've heard before: "Life isn't unfair, it's just that you don't know the rules."

You have to understand that "OWASP Top 10" should be renamed in your mind to "OWASP Automated Top 10 Findings" and acknowledge that it has a heavy conflict of interest with security consultancies. This means that security consultancies tell you what you need to buy and then offer you exactly that. Pretty convenient, huh?

On the other hand, OWASP Top 10 really needs changes:

  • Dave Wichers should step away from the OWASP Top 10 - not only from the OWASP page but from everything related to this project - and someone neutral should take control. Someone who would not be co-opted by a security consultancy and become the next Dave Wichers. I'm not saying he is a bad person; it's just that there's an inherent conflict of interest, and we can never rest easy about those responsible for this project. It's human nature, after all;
  • The project needs to clarify what "Top 10" really means. If it is mostly a reflection of automated findings from a limited set of applications, then that should be stated in a very big disclaimer -- but many wouldn't like that. Imagine a security consultancy explaining that the Top 10 isn't actually a top 10. Customers would be scared and might not buy, at least in an extreme line of thinking;
  • Since vendor data makes up this top 10, the vendors should be subject to a very rigorous assessment. As it stands, it seems that any consultancy can contribute data without much scrutiny.

Given these three changes, it may not be worth the trouble of pushing each of them through just to fix this project. Perhaps this top 10 should simply be eliminated, and we should return to ignorance regarding the most common application security vulnerabilities. When that happens, someone will create another top 10, because "when people want the impossible, only liars can satisfy them".

And is it really "impossible"? To cover all apps, yes. But to cover some? No, that's possible. Either way, let's be clear: whatever the top 10, it needs a disclaimer clarifying its scope. And it should be in red, 72 px Arial.

Don't you agree?

Thank you.
