On the usefulness of Penetration Testing methodologies

Let’s imagine for a moment how the “bad guys” plan their attacks. In a dark basement, cyber-punk posters covering the graffiti on the walls, half-assembled computers lying here and there, malicious hackers gather around a poorly lit table to decide which version of the Black Hat Attack Methodology to use in the upcoming criminal operation. Sounds absurd, right? Of course it does, because attackers are not methodical.

As Penetration Testers, our main goal is to test our clients’ defenses functionally and assess their ability to withstand a real-world attack. Do we have to rely on external knowledge for that? Obviously, yes: it is impossible to know everything about every attack vector in 2016. Do we have to stick to a predefined set of instructions, a so-called methodology? That depends.

If you are not a pentester, and yet you have to act as one, methodologies are inevitable. To conduct a pentest yourself, or to reproduce the results in the report from an external consultancy, you have to get your head around a methodology of some sort. In fact, this happens all the time: the perception in the market is that anyone, be it an accounting firm or an IT audit practice, can do Penetration Tests: just look at the plethora of methodologies out there!

But if you do pentesting for a living, do you really need Methodologies? I am a big fan of seeing a pentest as a mission rather than a project. Of course, a mission has to have a plan, but it can rarely be scripted in detail. It’s essential to have a recurring cycle of acquiring, analyzing, and applying data, and to share it within the team. It’s perfect to have both specialization and knowledge sharing among team members. But to write down “what we do,” “what we do when we’re in,” “how we exfiltrate” in a static document? No, thanks.

To succeed at something, we have to have good mental models and actual practical how-tos at our disposal. The models let us build insight into how the attack would go and what we would have to do along the way. The how-tos and examples let us prepare for the actual operations: collect the data, apply or build the tools, make our moves, and obtain proof that the risk to the client’s business is real. The methodologies try to bridge the gap between the two for those who need it. Do you?

Leveraging the Strongest Factor in Security (Part II)

Since I wrote the first part of this post in May, several related articles have appeared on different well-known online resources. The most notable of them, in my opinion, is this piece in Fortune, which tries to bridge infosec and business as many have tried (and most have failed) before. You don’t have to read the whole article to catch what it and the others have in common: the very first paragraph ends with a statement we have all long gotten used to.

If your company is like most, you’re spending an awful lot of your information technology budget on security: security products to protect your organization, security consultants to help you understand where your weaknesses lie, and lawyers to sort out the inevitable mess when something goes wrong. That approach can work, but it fails to consider the weakest link in your security fence: your employees.

So, if you’ve read my first post on the topic, you can guess that anything that follows in the article might be misled by this stereotype. I warned you last time that anything that sounds like “humans are the weakest security link” should be followed or preceded by “by default”. And by “default” I mean “in case your company’s security management has done nothing to change that”.

But that’s easier said than done, right? So what could one do in order to, well, leverage the strongest factor in security: human nature?

To understand that, we need an idea of how our brain functions. I’ve spent quite some time getting familiar with this topic by reading contemporary scientific research, and I encourage you to do the same! For the sake of this blog post, though, I am going to summarize the strongest points, the ones you have to embrace in order to, well, see the light.

Imagine that inside every human brain there are three animals: a crocodile, a monkey, and an actual human being. If you are familiar with the brain’s structure, you already know why: different parts of it grew during different evolutionary periods. Thus the croc stands for our reptilian brain, the monkey for our mammalian or limbic brain, and the human for our neocortex. Each of them does its job, and there is a strict hierarchy between them.

The croc is the boss by default, although he doesn’t micromanage. He is responsible for only three basic instincts:

  • Keeping safe from harm, including predators, natural disasters, and other crocs like himself;
  • Finding something to eat in order to not starve to death;
  • Finding a partner, if you know what I mean.

As you see, the crocodile brain performs the most important roles: the preservation of the individual human and of the species overall.

The monkey trusts the croc with its life. It’s sometimes afraid of the croc too, but still, there is little chance it would stay alive for long if the croc fell asleep or were simply gone, so yeah, the monkey trusts the croc.

The monkey’s work is more complicated. Protected by the crocodile, it can dedicate some of its time to training and learning from recurring experience. In other words, the monkey can be taught things if it does them enough times. There are many words for that ability, but we are going to stick with “the habit”. Using habits, we simplify our life as much as possible, for better or worse, but certainly for easier.

And the human is normally much different from both of them, because, well, you know: abstract thinking, complex emotions, ethical frameworks, cosmology, and sitcom TV shows. With all that, the human brain optimizes its job as much as possible, so if there is a chance that the monkey can do something the human has to do, the human will take that chance. Going through the same procedures over and over, we train the monkey, and once it’s ready we hand the task over to it. How many times have you missed the turn and driven along your usual route to the office even on a weekend? The monkey took over, and the habit worked instead of your human reasoning, which was busy with something else at that moment.

To some this may sound counterintuitive or even scary, but that’s how it is. If we thought out every decision we make, we wouldn’t be able to develop as a species and a society. Too much thinking in moments of crisis would kill us: deciding on tactics for dealing with a saber-toothed tiger would take all the time needed to run to the cave or the nearest tree. Humans tend to take shortcuts and rely on their instincts and reflexes as much as possible. And in general it’s a good strategy, given that humanity has spent many centuries training the monkey and adjusting the croc’s input data.

But then… boom! cyber!

Recent developments in technology and communications have changed our lives. Now we have to do many old things in new ways, and as a result it’s not easy for our brain to apply the tricks evolution taught us over millennia. The monkey’s old habits and the croc’s even older instincts are not triggered by the new signs of danger. We are used to dealing with danger tête-à-tête, not in front of a computer screen. Centuries-old fraud tactics find new life online, with humans unable to resist them because of the scale of anonymity and the ease of impersonation on the internet.

So what can we do? Not much, really. I don’t believe in technology when it comes to human nature, so I prefer to focus on the human (and the monkey, and the crocodile) instead. Having read and discussed much of what contemporary science can teach about behavioral economics, the irrationality of decision making, and, most importantly, habits, I have come to the conclusion that people can be taught to effectively resist modern cyber-threats the same way they have learned to survive other hazards: by leveraging the instincts, installing new reflexes, and transforming the habits.

In the next post we’ll wrap it up: I’ll present a method for transforming individuals and groups from a vulnerability into a countermeasure. I hope this sounds intriguing enough for you to stay tuned.

Leveraging the Strongest Factor in Security (Part I)

In January 2013, Gary McGraw wrote an excellent piece on 13 secure design principles that summarize the high-level ideas any security engineer or architect should be familiar with in order to be called one. Dr. McGraw is, of course, that smart gentleman from Cigital who wrote the “Software Security” book, records the “Silver Bullet” podcast, and has played a role in many career choices in the security industry. The first principle he explains is quite logical and intuitive: “Secure the weakest link”. This principle spans many disciplines, such as project management and logistics, and is obvious to many: there is hardly any other way to dramatically improve something than taking its worst part and fixing it. Pretty simple, right?

Picture credit: SC Magazine (https://www.scmagazine.com/defending-data-the-knowledge-factor/article/269211/)

The vast majority of information security professionals agree that the human factor is the weakest element in any security system. Moreover, most of us promote this idea and never miss a chance to “blame the user” or cite human stupidity as an infinite source of security problems. However, when you start challenging this idea and ask what they have in fact attempted to do to change the situation, the answers are few. Just try it yourself: every time you hear someone say “… you cannot fight phishing/social engineering/human error etc.”, kindly ask them: “And have you tried to?…” I do it all the time and, believe me, it’s a lot of fun.

The uncomfortable truth is that the human brain is very efficient at detecting and dealing with threats. In fact, it spends the majority of its computing time, and of the calories it burns, maintaining the “situational awareness” that allows us to step on the brakes long before we could solve the system of equations describing the speeds and trajectories of our car and the one approaching from the side. Our brain, if trained properly, can serve as an effective security countermeasure that would outrun any security monitoring tool in detection or response. The problem is that we as an industry haven’t had as much time to train humanity to monitor for, detect, and respond to technology threats as nature had to train us to avoid open fire, run from a tiger, and not jump from trees. An even bigger problem is that we don’t even seem to have started.

So, what’s wrong with us, really? Why don’t we combine the common knowledge of human weakness in the face of cyber threats with the maxim of securing the weakest link? I frankly have no idea. Maybe it’s because the knowledge domains that deal with human “internals”, such as neuroscience, psychology, and behavioral economics, are very different from what security people are used to dealing with: networks, software, walls, and fences. I don’t know. However, I have tried (harder ©) to improve the way people who are not security experts deal with cyber threats. And you know what? It’s more fun than blaming the user. But I guess that’s enough for one post. To be continued…

This post has been originally posted on LinkedIn on May 25, 2016.