Security: better to be proactive or reactive?
As an industry, we've been reactive since our inception: always trying to figure out how the hacker got in, what they hooked, and what they used. Then we worry about fixing only the exact path the hacker took, and we wait for someone cleverer to break in again and teach the victim's security analysts something new so they can rebuild their defenses.
Then we talk over and over about prevention.
We should educate developers, talk about how to avoid social engineering, prioritize risk, comply with industry regulations and laws, have a security program, run tests before pushing to production, and... the hacker still gets in.
Holy shit, right?
We've spent millions, and the preventive efforts aren't preventing? Actually, they do prevent many other attacks; we just rarely see it. In a sense, that makes all security efforts look useless, even though they're not: all anyone remembers are the hackers who got in once again. That's the so-called "arms race." Defense raises the bar, and so does the attacker.
So, can we prevent to the fullest, or should we contain the damage instead?
Preventing to the fullest is somewhat dumb. It's like expecting the law to be ahead of technology: first the technology emerges, then we find a way to regulate it. It would be great if justice could foresee such things and protect everyone's data in advance, so to speak, but it doesn't work like that. Just as justice is born from wrongdoing, defense is born from attacks, and guess who develops a new attack first: attackers or defenders? Attackers.
Is it time instead to minimize the attacker's impact as soon as they get in? It's easier said than done, but it's possible. That's what anomaly detection can do for you, for example. It's not a silver bullet; you need to train it, and machine learning is still a baby when it comes to security, but it's indispensable for minimizing impact. Moreover, attribution is hard to get right; it's better to focus on responding instead. That's why Bruce Schneier, the famous infosec guru, bet on his incident response product and ended up being acquired by IBM. But really, how many unknown events will you be able to handle? How much intel will you get from those incidents? Is this "fire drill" strategy really the best way? That's hard to say. You simply can't predict that well.
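To make the idea concrete, here's a minimal sketch of anomaly detection over session telemetry using scikit-learn's IsolationForest. The feature choices, numbers, and contamination rate are hypothetical, just to illustrate how a baseline trained on normal traffic can flag an intrusion in progress so responders can contain it early:

```python
# Minimal anomaly-detection sketch: train on "normal" traffic, then flag
# events that deviate from the baseline. All features here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per session:
# [requests_per_minute, megabytes_transferred, distinct_endpoints_hit]
baseline = np.array([
    [12, 0.8, 3],
    [15, 1.1, 4],
    [10, 0.6, 2],
    [14, 0.9, 3],
    [11, 0.7, 3],
])

# contamination is the assumed fraction of outliers in the training data.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# predict() returns -1 for anomalies, 1 for inliers.
new_events = np.array([
    [13, 0.9, 3],      # looks like the baseline
    [450, 80.0, 120],  # sudden burst: possibly an attacker exfiltrating data
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY -> alert responders" if label == -1 else "ok"
    print(event, status)
```

In practice the hard part isn't the model; it's curating a clean baseline and deciding which alerts are worth waking someone up for.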
But actually, there's a third option. Instead of preventing or containing, you can entrust your application to bug hunters and pay for findings, as long as you have the cash for it. It's pretty much the strategy of launching a product and gathering feedback, except that by the time this "feedback" is collected, it may already be too late.
However, this way, you will focus on fixing only real-world attacks. Still, it's somewhat shameful to put vulnerable applications into production and rely solely on bug hunters to find bugs before attackers do. It's shameful because of the disrespect for customer data and for your own data and reputation. In the end, it's still insecure. Bug hunters should only be considered "extra help" and nothing else.
At the end of the day, some of us at least have noticed that there is no silver bullet. Nothing much new here, but the bottom line is to combine everything.
Combine every possible strategy. Do your best to secure that very toxic asset: the data. And remember that even combining preventive security, reactive security, and real-world bug hunting may not suffice. That's the eternal security challenge we're always struggling to solve.
Thank you very much.