Is OWASP Top 10 No Longer Relevant?
It's hard to find something really worth reading in application security. Attack write-ups here and there, unless they show off a new technique, are kind of boring to me because those attacks are expected to happen anyway. That's when Egor wrote about why the OWASP Top 10 is no longer relevant.
He explains at length why newly developed applications usually don't fit the OWASP Top 10, thanks to modern frameworks. He goes further and says that "(...) If you aren't maintaining some PHP app written 10 years ago, Top 10 list is irrelevant to you (...)". I agree with part of his article, but it doesn't go as deep as I'd like. So that's what I'm going to cover here.
That said, let's recap what OWASP Top 10 is good for:
- Consultancies: it's easier to say they are going to cover some industry-standard scope rather than 'my own methodology and scope', because customers understand it better;
- Security Managers: they have an industry standard for generating metrics for their directors, CxOs and board;
- Developers: they have a north star for where to start studying attacks and defenses;
- Pentesters: they have some references to focus their attacks based on how common a vulnerability is.
But in practice, this is part of what happens:
- Consultancies: just update the 'OWASP Top 10 2013' to 'OWASP Top 10 2017' and keep running the same tests;
- Security Managers: they fail to add more relevant application security metrics, thus limiting themselves to the OWASP Top 10;
- Developers: they start studying material explaining the OWASP Top 10, learn the basic examples, and fail on corner cases or fail to acquire the security mindset;
- Pentesters: they discover that reading recent hack news and exploiting CVEs in application servers is way easier than finding OWASP Top 10 vulnerabilities in modern applications, except for business logic flaws.
Instead of criticizing the OWASP Top 10 categories, you first have to analyze how those categories were selected in the first place. It's the most popular top 10 of vulnerabilities, after all.
Challenges of OWASP Top 10 "Call for Data"
The OWASP Top 10 is built from a 'Call for Data' that requests data from multiple vendors, and this call for data brings lots of problems. Let me explain them:
1) OWASP Top 10 has a conflict of interest since it was born
The person responsible for this project is Dave Wichers, also co-founder and COO of a security consultancy named Aspect Security. Naturally, categories that help it sell more services or products are more likely to enter and climb the OWASP Top 10, such as Insufficient Attack Protection.
2) Vendor Bias: many applications are left out
Vendors have to either perform penetration tests or run a vulnerability management platform that stores vulnerabilities identified across many applications. And this is a problem in itself.
Vendors have to focus on market segments, which means they perform tests for a specific type of company, e.g., only large companies. Small companies tend not to have the budget for security solutions, so they are left out of the data used to compute the Top 10.
3) Vendor Bias: skill difference
Some vendors may be more skilled and able to find more vulnerabilities than others in the same application. This affects the Top 10 as well.
4) Automated findings overwhelm manual ones
Companies like Aspect Security have automated scanners and also perform (manual) penetration tests. Which approach tends to gather more vulnerabilities? Automation, of course: for every business logic bypass, you'll have many more Reflected Cross-Site Scripting findings. The problem now lies in how to normalize that data for a top 10. The answer is: you don't.
It's safe to assume that most of the OWASP Top 10 categories are a reflection of categories reported by automated tools.
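To make the imbalance concrete, here is a minimal, hypothetical sketch of the kind of Reflected XSS pattern that automated scanners flag by the hundreds: user input echoed into HTML without escaping. The function names and the search-page scenario are my own illustration, not from any real codebase.

```python
import html

def render_search_page(query: str) -> str:
    # Vulnerable: user input is reflected into the HTML response
    # without escaping -- the classic signature scanners match on.
    return f"<p>Results for: {query}</p>"

def render_search_page_safe(query: str) -> str:
    # Escaping the input neutralizes any injected markup.
    return f"<p>Results for: {html.escape(query)}</p>"

payload = "<script>alert(1)</script>"
assert "<script>" in render_search_page(payload)       # payload survives
assert "<script>" not in render_search_page_safe(payload)  # payload escaped
```

A scanner finds this by injecting a marker payload and checking whether it comes back intact, which is trivially automatable. A business logic bypass has no such universal signature, so it barely registers in vendor datasets.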
5) False-Positive vulnerabilities are part of OWASP Top 10
OWASP relies on vendors, and those vendors are error-prone. It's expected that some vulnerabilities they report weren't sufficiently validated. So yes, false-positive vulnerabilities are part of the OWASP Top 10.
6) False-Negatives are all over the place
The corner cases, the vulnerabilities that are hard to identify and reproduce, and the ones that require more knowledge to exploit are all left out of the Top 10, no matter how common they are. If they can't be identified automatically, they won't be present in the OWASP Top 10.
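A business logic flaw is the textbook false negative here. This is a hypothetical checkout function of my own invention, used only to illustrate the point: every request returns a perfectly valid response, so a scanner sees nothing wrong, yet a human who knows the business rules spots the bug immediately.

```python
def checkout_total(unit_price: float, quantity: int) -> float:
    # Flaw: quantity comes from the client and is trusted as-is.
    # There is no payload signature for a scanner to match; every
    # response is a well-formed number.
    return unit_price * quantity

def checkout_total_safe(unit_price: float, quantity: int) -> float:
    # The fix encodes a business rule: you can't buy a negative amount.
    if quantity < 1:
        raise ValueError("quantity must be positive")
    return unit_price * quantity

# A negative quantity produces a negative total -- a refund the
# attacker never earned.
assert checkout_total(10.0, -3) == -30.0
```

Finding this requires understanding what the application is *supposed* to do, which is exactly the knowledge automated tools lack, so this class of bug stays underrepresented in the data behind the Top 10.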
Is OWASP Top 10 worth following then?
Yes, it is, as long as you understand what it can be useful for. It's a corollary of a quote I saw once: "life is not unfair, it's you who doesn't know the rules".
You have to mentally rename "OWASP Top 10" to "OWASP Top 10 Automated Findings" and acknowledge that it has a heavy conflict of interest with security consultancies. It means that security consultancies tell you what you need to buy and subsequently offer you exactly that. Pretty convenient, huh?
On the other hand, the OWASP Top 10 really needs changes:
- Dave Wichers should leave the OWASP Top 10 - not only the OWASP page, but everything related to this project - and someone neutral should take control. Someone who would not be corrupted by a security consultancy into becoming the next Dave Wichers - I'm not saying he is a bad person, it's just that there is an inherent conflict of interest. We may never rest easy when it comes to whoever is responsible for this project. It's human nature, after all;
- The project needs to clarify what "Top 10" really means. If it's comprised more of automated findings from a specific set of applications, then that should be stated in a very big disclaimer -- but many won't like it. Imagine a security consultancy explaining that the Top 10 is not actually a top 10. In an extreme line of thinking, customers would be scared and wouldn't buy;
- As vendors build this top 10, they should have to pass a very hard assessment, since currently it seems any consultancy can submit data without much effort.
Given these 3 changes, it may not be worth the trouble to go through each of them to fix this project. Perhaps this top 10 should really be eliminated, and we should go back to blindness about the most common application security vulnerabilities. When that happens, someone will create another top 10, because "when people want the impossible, only liars are able to satisfy them".
And is it really "impossible"? To cover all apps, indeed. To cover some, no, it's possible. But let's be clear: no matter the top 10, it needs a disclaimer clarifying its scope. And it should be in red, in 72 px Arial.
Don't you agree?