In January 2013, Gary McGraw wrote an excellent piece on 13 secure design principles that summarize the high-level ideas any security engineer or architect should be familiar with in order to deserve the title. Dr. McGraw is, of course, that smart gentleman from Cigital who wrote the “Software Security” book, records the “Silver Bullet” podcast, and has played a role in many career choices in the security industry. The first principle he explains is quite logical and intuitive: “Secure the weakest link”. This principle spans many disciplines, such as project management and logistics, and is obvious to many: there is hardly any other way to dramatically improve something than to take its worst part and fix it. Pretty simple, right?
The vast majority of information security professionals agree that the human factor is the weakest element in any security system. Moreover, most of us promote this idea and don’t miss a chance to “blame the user” or point to human stupidity as an infinite source of security problems. However, when you start challenging this idea and ask what they have actually attempted in order to change the situation, the answers are few. Just try it yourself: every time you hear someone say “… you cannot fight phishing/social engineering/human error etc.”, kindly ask them: “And have you tried to?…” I do it all the time and believe me, it’s a lot of fun.
The uncomfortable truth is that the human brain is very efficient at detecting and dealing with threats. In fact, it spends the majority of its computing time and calories burned maintaining the “situational awareness” that allows us to step on the brakes long before we solve the system of equations representing the speeds and trajectories of our car and the one approaching from the side. Our brain, if trained properly, can serve as an effective security countermeasure that could outrun any security monitoring tool in detection or response. The problem is that we as an industry haven’t had as much time to train humanity to monitor for, detect, and respond to technology threats as nature had to train us to avoid open fire, run from a tiger, and not jump from trees. An even bigger problem is that we don’t seem to be starting.
So, what’s wrong with us, really? Why don’t we combine the common knowledge of human weakness in the face of cyber threats with the maxim of securing the weakest link? I frankly have no idea. Maybe it’s because the knowledge domains that deal with human “internals”, such as neuroscience, psychology, and behavioral economics, are very different from what security people are used to dealing with: networks, software, walls, and fences. I don’t know. However, I have tried (harder ©) to improve the way people who are not security experts deal with cyber threats. And you know what? It’s more fun than blaming the user. But I guess that’s enough for one post, to be continued…
This post was originally published on LinkedIn on May 25, 2016.