How to Build Security Awareness Programs That Don’t Suck

This is the script of my talk at BruCON 0x09. You can find the video here: https://www.youtube.com/watch?v=40tUy6TNXM8; the slides are here: https://files.brucon.org/2017/006_Vlad_Styran_Security_Awareness_v3.pdf.

Introduction

Hi everyone. Thanks for coming.

Before we begin, let me ask you a question. In your professional opinion, what is the weakest link in a security system? Louder please. OK, the human. Thank you.

But it seems that not all of us agree. Is it so? Alright, let’s do it this way: all of you please raise one hand. [Raise hand]. Come on, that’s not too hard, you can do it.

First, let’s spot the cheaters. Please lower your hand if you have a degree in applied psychology, behavioral science, or anything similar. Thanks. Now, please lower your hand if, in your professional opinion and to the best of your knowledge, there is a component of a security system that is weaker and riskier than a human being; in other words, if the human is not the weakest link. [Lower hand]. Okay, thank you. Now, please lower your hand if you think that the first and most important thing a company should do to protect itself is to train its staff.

Okay thank you very much! I think we’re ready to start.

The through line

So, what is it all about? Besides getting you acquainted with a Ukrainian accent (which is basically a Russian accent, but please don’t tell anyone I told you that), today I’m going to tell you how I stopped blaming the user and found a way to change the human from “the weakest link” into a valuable security asset, and in some cases into the strongest component of a security system.

The beginning

Let me start with a story that begins one bright morning in the fall of 2005, when a young IT guy arrived at the office of a small software engineering company to find a surprising email from their ISP in his inbox. The email stated that traffic from their corporate network was temporarily being filtered on TCP port 22 following an abuse report. That’s right, this young IT guy was me, and at that moment he was quite surprised. And by “surprised” I mean scared to death.

What followed was my first experience of digital forensic investigation. The logs, the integrity checks, the pcaps, the timeline: all the stuff that sounds trivial now but was a lot of fun then. The trail went cold at an outdated Linux server in central Ukraine; it ran long-distance call billing, and its IT guys knew how to turn it on and off.

Then I started to analyze my compromised box: Red Hat 9, AKA “Shrike”, with a 2.6 kernel. Remember that? Although a bit old, it was still clean of any remotely exploitable vulns, so I was intrigued, because in my world Linux was so secure that no one could hack into it. So I did all the appropriate analysis, learning how along the way, and found out that at some point a malicious worm had entered the OS by successfully guessing the root password. That’s how I met John the Ripper and attempted to crack that password. What do you think it was?

password123

After all these years, I see a history behind it: the guy apparently had some experience. I bet his first root account was protected by just ‘password’, then something surprising happened, and he started using password1, and so on.

Of course, after this fantastic adventure, I could not go back to my sysadmin life. So, this is how my career in InfoSec started.

Over the next five years I worked in several roles, mostly exploring the industry and figuring out where I belonged: starting in system integration, moving through InfoSec management and audit, to pentesting and application security. Finally, I realized that pentesting and consulting were the areas where I could inflict the most benefit on the largest number of people.

Early in my career I embraced the concept of the human being the so-called “weakest link”. Considering the popularity of this idea, and the level of authority with which it was articulated over and over again, it was already a sort of common wisdom. Bruce Schneier puts it as “Amateurs hack systems, professionals hack people”. Other security gurus build their essays, talks, and interviews on the assumption that humans are incapable of dealing with online threats. Who was I to disobey? So, for rather a long time, I accepted it and somehow lived with it.

But the question “Why?” stuck in my head. Why do people click and run crap they get from strangers? What makes them do all these unsafe things, give out secrets, allow spying on them, and so on?

Much later, I think I found an answer. From my point of view, security issues arise in places where different technologies meet.

As an example, imagine that you are asked to take an old banking application that runs on a mainframe and bring it onto the internet. This sounds like a very surprising thing to do, but it’s been done more than a few times. In the end, most probably, you will have a bunch of mediation devices, wrappers, web services, and finally a web interface (if you are lucky) or an ugly Java applet (if you’re not). Because of the differences between how computing was done in the mainframe era and the way we do it now, there will be a huge complexity overhead in between, and that complexity will create security problems.

By stretching this rule, we can explain why human-to-machine interaction is the ultimate source of security risk: people and machines work in fundamentally different ways.

Machines follow strict and logical laws, while humans are mostly irrational. However, there is a system to their irrationality, and that system is the subject of a large branch of psychology called behavioral science. Using the concepts of behavioral science in cyber security is often called Social Engineering.

While practicing the Red Team’s craft, I got a chance to use Social Engineering to conduct so-called social pentests, or pentests via social channels. The topic got hold of me and I started to explore it with passion. I read both of Kevin Mitnick’s books available at the time, and knew everything in Chris Hadnagy’s first book before it was published, mostly thanks to extensive listening to his Social-Engineer podcast. Chris got it right with the guests he invited to the show, all of whom were great professionals in areas directly or loosely connected to social engineering. This is how I kept discovering new topics to learn and new people to look up to.

After a few years, I had started not only to practice social engineering successfully, but also to understand the underlying psychological principles, such as reciprocity, commitment, social proof, authority, liking, and scarcity. I read a vast variety of material on human behavior, from Paul Ekman, to Dan Ariely, to Robert Cialdini. Then I dug deeper: into behavioral economics, personal and collective habits, the psychology of incentives, negotiations, happiness, success. I even took some psychology courses and tried to approach neurology, which still remains a challenge to me.

Going through all this knowledge and embracing all the cool ideas of these brilliant people was quite fun. And of course, it made me a better social engineer, as well as notably improving my “soft skills”, social experience, and overall quality of life. Living with people becomes much more interesting once you realize how they work. And it’s not necessary to disassemble them for that.

One of the best ways to learn is to teach others, so I started to promote social engineering within the security community: I gave talks, wrote blog posts, played tricks on my colleagues in pubs. Of course, all of this was mostly about getting more social engineering into pentesting: we have to test tech, processes, AND humans.

And, as in all areas of knowledge, after spending some time learning and practicing, I started to see “the bigger picture”. At some point, a new question stuck in my head: “How can I protect against this stuff?”

But preaching “more social engineering in pentests!” wasn’t working. Let me tell you how things go in the social engineering pentest field. Usually, clients don’t request social engineering; the same is true for bug bounty programs, where it’s usually explicitly out of scope. When I ask people whether we should include the social channel in the testing, the answer is normally: why test something that we know for sure will fail?

After some time, I got bored of trying to explain to people that pentest results are not binary, that they give companies detailed input data for improvement. Even, excuse me, PCI DSS did not fix the situation when it started to require social engineering to be included in the pentest scope. Don’t hack our people; we know you are going to succeed.

So, what should I do next, I pondered. What other ways shall I try? Fortunately for me, the answer was someone else’s idea.

A friend called me and told me that he had been appointed CISO of a large national enterprise, and that after a couple of months in the new role it was obvious to him that the company’s staff needed improvement with regard to social engineering threats. His company operated in a highly competitive environment (if you know what I mean), so the risks of human error were high. People were being phished constantly and pretexted over the phone on a daily basis, and the attacks against the company were quite sophisticated.

Later, one of the students told me a story that pretty much summed up the state of affairs. One Friday, an accountant left a couple of PDF invoices on her computer desktop to submit first thing on Monday, and went home. To her surprise, after the weekend almost the same invoices were sitting on her desktop, but with slightly different payment details.

What my friend proposed was to develop and deliver awareness training. (Aha, my first reaction was very similar to yours: yawn, eye roll.)

“You mean one of those remote slideshows with five multiple-choice questions at the end?” I asked.

“No”, he said. “We want you to take all your offensive experience and prepare a one-day onsite workshop where you’ll teach our top-managers, executive assistants, heads of departments, sales, the front-desk and every other high-profile target in our organization how to detect, resist, and react to social engineering attacks.”

Given that I had just started my own small consulting company, I did not have that many options. Besides, I thought it was a brilliant idea. And so I found myself in a surprising situation where I had to compile all of my previous learning and practical experience and describe it in terms clear to people who have no relation to InfoSec or IT. But in order to do it right, I first had to summarize why we usually do it wrong.

The problem: how it’s done now and why it is wrong

I finally had time to sit and analyze the state of security awareness in the industry, and why it fails, miserably and constantly. Luckily for me, I didn’t have to stop working to do it, because there was a company willing to pay for that.

I did some research, which in fact was a bunch of interviews with colleagues who shared, or didn’t share, my views on the topic. After a while, I could summarize the reasons why we are doing it wrong. So, here they are.

First, cyber security, which is basically a human problem, is being attacked with technical means. Look at the majority of issues we have to deal with, be it a security vulnerability, an insecure configuration, or users clicking crap: it’s all about human behavior. Humans make mistakes. And we are trying to fix these mistakes with antiviruses and firewalls. Why? Perhaps because most of us came to this industry from IT and other technical areas, so we just see nails everywhere. Maybe that approach isn’t the worst-case scenario after all. But what if we just considered other ways to deal with it, instead of…

Displacing responsibility. Strategically, what we are trying to do is move the point of risk treatment as far away as possible from the human, the point where most risks arise. Isolating the human from responsibility for their actions would, of course, solve all of our problems, but unfortunately it doesn’t seem to work. Maybe we should consider moving the responsibility, or at least the part humans can deal with, back to them?

If you think about it for a while, centralization of responsibility is quite a common problem, and as you may know, there are industries that are trying hard to solve it. The automotive industry is reforming itself around so-called lean manufacturing, software engineering got carried away by agile methods, operations are DevOps now, and so on. So maybe it’s time for us, too, to consider moving control back to, or at least closer to, the human being?

The third thing, which makes everything above much harder to fix, is that the InfoSec industry is totally driven by business risk. We want to formulate security decisions in terms of avoiding financial loss or gaining competitive advantage. We believe that speaking business language makes our attempts to improve security more efficient. However, the main thing we are really good at is helping business tick a few boxes in a compliance checklist.

We spend most of our efforts on convincing corporate decision makers that our products and services are better not for the security of their organizations, but for their budgets. Why do we do that? Because, apparently, that’s where the money is. But in my opinion, instead of focusing on business risk, we should focus on the personal risks of the people we talk to.

Finally, the worst thing is that we surprisingly don’t seem to do anything about the so-called weakest link. Every time I hear someone say, “Users are dumb, there is nothing we can do about it”, I ask: “What have you tried?” Did you, personally, try to change the situation? Logically, it’s the weakest link that needs reinforcement if we want the most effective improvement of overall security. But, as I said, humans are irrational.

The methods: what we should do and why it is right

After I arranged this list of obstacles, I started to figure out how I could complete the task at hand: reinforce people’s ability to face the risks without completely relying on technology, by giving them back the control and responsibility for their actions, and by making security their personal interest.

Thankfully, just as the intersections of different technologies are a source of security problems, the intersections of different areas of knowledge are a source of innovation. After all these years of learning seemingly unrelated technical and human stuff, I felt ready to test some theories.

Long story short, I came up with a list of three basic tools that, in my experience, have the highest potential for changing human behavior with regard to cyber threats. These tools are fear, incentives, and habits.

Fear

We all know how fear works: we experience the threats around us, get harmed by them, and after that, we can avoid them or prepare for them. One good thing about our brain is that we don’t necessarily have to go through dangerous experiences to learn from them. Our prefrontal lobes allow us to learn from the experience of others and, most fascinatingly, from simulated experiences: our own imagination. This dramatically lowers our death rate, because we can learn to be safe and be safe at the same time. And learning how to deal with threats in the physical world is what the human brain does best.

Our brain is an excellent tool for continuously searching for, identifying, and treating risks: by fleeing from them, fighting them, hiding from them, playing dead, or striking preventively. This is what our brain developed and excelled at over tens of millions of years of evolution. So the mere fact that we are still here on this planet means that we can be taught to deal with threats. It’s been proven in practice, and fear is the major tool we use for that.

We memorize things better when we’re scared, or as I call it, sufficiently stressed.

Of course, you don’t need to scare your audience to death. The main goal is to make it personal to them. Cyber threats are dangerous for them, not just their company. They will lose their reputation if a Skype worm infects half of their contact list. They will be raided by the police if their PC seeds child pornography to the internet. “I am too small to be a victim”, “I have nothing to hide”, “I have nothing to steal”: these fallacies have to be eliminated. People are hacked not because they’re important. People are hacked because they are trusted by other people and organizations.

Your goal is to make your training appropriately scary, but not a bit scarier. Fear is our psychological immune system, and you need to train it by making fake injections of stress. The appropriate level of stress is easy to measure. An insufficiently stressed audience will spend its time checking Facebook and Twitter. An overstressed audience will just run away. And a properly stressed audience will engage with the training, won’t leave for lunch before you answer all their questions, and after the training will ask you: how do you live with all this knowledge? I always reply: you’ll get used to it.

Which brings us to the need for repetition. Our brain is smart: it remembers bad things worse than good things. Time heals, not in the sense that our memories degrade, but in that they are replaced by new impressions. So, unfortunately for your audience, they will need repetition. The frequency and the form are up to you. I prefer to phish my clients once in a while: with their permission, of course, but without them knowing the exact time it will happen.
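If you want to automate that cadence, the scheduling part is trivial. Here is a minimal, purely illustrative Python sketch (the function name is mine, and sending the actual simulated phish is left to whatever tooling you use); the idea is simply to keep the rough frequency agreed up front while keeping the exact dates unpredictable:

```python
import random
from datetime import date, timedelta

def schedule_simulations(start, months=12, min_gap_days=45, max_gap_days=90):
    """Plan phishing-simulation dates: frequent enough to refresh memory,
    random enough that nobody can predict the exact day."""
    dates = []
    end = start + timedelta(days=months * 30)
    current = start + timedelta(days=random.randint(min_gap_days, max_gap_days))
    while current < end:
        dates.append(current)
        current += timedelta(days=random.randint(min_gap_days, max_gap_days))
    return dates

# Example: plan a year of simulated phishing waves, agreed with the client up front.
for d in schedule_simulations(date(2018, 1, 1)):
    print("Simulated phishing wave on", d.isoformat())
```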

Incentives

Incentives are another great tool you can use, and they are much younger than fear in the evolutionary sense. They don’t hit the foundations of our psychology, but rather exploit the social norms that we have developed as a society. The two incentives that I find the most useful are competition and belonging.

Using competition to promote security awareness is simple. Establish a sort of internal “bug bounty” program and declare a reward. Reward reported phishing attempts, unidentified strangers caught on the premises, confidential printout leftovers handed in. People like to be rewarded and distinguished from the crowd. That is why the materiality of rewards isn’t always critical: a hall of fame can be enough.

Using belonging is a bit more complicated. Here you’ll have to create a group of corporate awareness evangelists, if you will. First, naturally, every InfoSec person should be in this group. Second, everyone with formal authority, including the top three levels of management, should be in there. Third, and most important: every socially hyperactive person should be in there too. I call them social unicorns, and having them on your team is a gift or a curse, depending on your point of view. They spend an immense amount of time talking around water coolers and coffee machines. They know virtually everyone in the office, which means they know their birthdays, their spouses’ names, their kids’ ages, and so on. They are easy to spot, and they are gold for security awareness.

How? Imagine the situation when such a person gets hacked. The worst thing their employer could do in this case is fire them, because after the hack they are the best security awareness asset in the company. [Why?] They will tell everyone exactly how it happened, what they did wrong, how cool the security guys were to explain their mistakes to them, and how they will never, never do it again.

So, establishing a security culture demands at least three initial categories of, pardon me, thought leaders: the people with expert, formal, and social influence over the rest of your staff. These ladies and gentlemen should undergo formal security training and be officially associated with the security awareness initiative company-wide. The others will follow.

Habits

In order to make the change permanent, you’ll have to use habits. Habits let us automate: they let us do less analysis, decision making, and other higher cerebral activity, relying instead on routines that we build over time. They drive us to work, clean our houses, cook our meals, and do many other important things.

Habits are critical. For example, if we had no habits, we would need much larger brains, and that would cause a lot of trouble, from higher death rates among women giving birth to the aesthetics of our selfies. But the most dangerous thing about habits is the false belief that they can be given up. The truth is, we cannot easily get rid of them; we can, however, change them.

Habits are simple loops that can be described as if-then clauses, followed by rewards. Let’s look at a few examples:

· stress eating: if you feel stressed, then go eat a cookie, and get a higher sugar level and a better mood as a reward;

· smoking: if you are bored, then go smoke a cigarette outside, and be rewarded with small talk with a fellow smoker;

· drinking: if you feel down, then go to the bar, and be rewarded with the company of a thankful listener (or at least the bartender’s polite attention).

These loops of trigger, routine, and reward represent our habits and, as such, most of our behavior. Our goal is to take bad security habits and replace them with good ones. For that, we need to identify the triggers, the routines that they cause, and the rewards the brain seeks.
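For the programmers in the room, here is a minimal, purely illustrative sketch of that trigger-routine-reward loop, using the stress-eating example from the list above (the class and all names are mine, not any real tooling):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Habit:
    trigger: str                 # the "if" part: the cue that fires the loop
    routine: Callable[[], str]   # the "then" part: what we actually do
    reward: str                  # what the brain expects to get out of it

stress_eating = Habit(
    trigger="feeling stressed",
    routine=lambda: "eat a cookie",
    reward="higher sugar level, better mood",
)

# Changing a habit means keeping the trigger (and keeping some reward),
# while swapping the routine for a better one.
stress_eating.routine = lambda: "take a short walk"
print(f"if {stress_eating.trigger}, then {stress_eating.routine()}; reward: {stress_eating.reward}")
```

The same swap is exactly what we are about to do with bad security habits: keep the trigger, redesign the routine, and make sure a reward is still there.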

Let’s hear some bad security habits; I bet you’ve got some. I’ll start with this one. If you receive an email, then open it immediately, and be rewarded with zero unread items. If the email contains an attachment with an intriguing name (such as “2018 compensation review plan.xlsm”), then open it as soon as possible, and be rewarded with knowing your bosses’ salaries. If asked to run a macro, then click “allow”, and be rewarded by finally getting your eyes on it. You get the idea, so could you please give me a few examples off the top of your head… Anyone?

So, you totally nailed it. Now, let’s play with habit design for a while. We already discussed the reward (the bug bounty and the hall of fame), so let’s assume that you already have that. Then, I hope you agree, we should change the routine to something like panic, or freeze, or at least proceed with caution. But let’s talk triggers first.

When you train people, you give them example triggers, and they usually confirm that something similar has happened to them. Then they share cases from their own experience, and in this way you build an ever-growing collection of trigger formulae.

Each formula contains three main components:

  1. a method of social engineering attack, such as phishing emails, impersonation, elicitation, pretexting over the phone, software exploits, baiting with USB thumb drives, and so on;
  2. an influencing principle: urgency, reciprocity, social proof, authority, liking, commitment, and so on;
  3. a security context: basically, anything of personal or business value.

Again, let’s look at some popular examples.

  • You receive an email with an urgent request to provide confidential data.
  • The pizza delivery guy is staring at you while holding a huge pile of pizza boxes at your office door.
  • An “old schoolmate” you just met in the street is asking you about the specifics of your current job.
  • You receive a call from a person that introduces themselves as the CEO’s executive assistant and asks you to confirm the receipt of their previous email and open its attachment.
  • An attractive, likable human is asking you to take part in an interview and is going to compensate you with a shiny new USB drive (in the hope that you will insert it into your work PC later).

These are examples of triggers that should surprise your colleagues and force them to turn off the autopilot and take manual control of the situation. For that, of course, they should be taught the influence principles, the types of modern cyber attacks, and the kinds of things hackers could want from them: remote access, banking accounts, contact lists, bandwidth, business and personal secrets, and so on. And then, if you have taught them right, they’ll be ready to learn the universal formula that should guide them through any potentially harmful situation:

When you identify a potential type of attack that is accompanied by an influencing principle, and the situation concerns a security context, you should pause, rewind, and start processing the situation with caution, applying the skills developed during the training.

As simple as that.
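For those who like to see it in code, the whole formula fits in a few lines. This is a purely illustrative sketch with made-up names, not a real detection tool; the point is the three-part condition and the single safe routine it triggers:

```python
ATTACK_METHODS = {"phishing email", "phone pretexting", "impersonation",
                  "elicitation", "usb baiting"}
INFLUENCE_PRINCIPLES = {"urgency", "reciprocity", "social proof",
                        "authority", "liking", "commitment", "scarcity"}

def assess_situation(attack_method, influence_principle, touches_security_context):
    """The universal formula: attack method + influence principle
    + security context => turn off the autopilot."""
    if (attack_method in ATTACK_METHODS
            and influence_principle in INFLUENCE_PRINCIPLES
            and touches_security_context):
        return "PAUSE: rewind, process with caution, apply the training"
    return "proceed, but stay observant"

# The "CEO's assistant asks you to open an attachment" example from above:
print(assess_situation("phone pretexting", "authority", True))
```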

Of course, this is just the knowledge, the theory. The skill, the habit, is built with practice. You will need a lot of examples and training tasks for everyone to get this right. But in my experience, this is the most fun part of the training.

Bonus

OK, it seems that we have some time left, so I can give you one more tool. There are plenty of sources of awareness material in popular culture, even if we as InfoSec professionals find their precision lacking. For example, in my opinion, Mr. Robot’s Season 1 did more for security awareness than the whole InfoSec industry has so far. At least, I could show it to my parents and not be ashamed of it. The Real Hustle is good at showing con artists’ tricks. Tiger Team gives an idea of what Red Teaming exercises look like. Lie to Me is quite OK for understanding emotions and universal expressions, as long as you read Dr. Paul Ekman’s devastating comments on each episode. So, if you like a book, or a movie, or a series about cybers, share it, don’t keep it to yourself.

Example

To wrap up, let me share a perfect security awareness improvement that I have witnessed only once so far. We had a client that requested the full package: the initial full-scope pentest, the training for their staff, and then a re-test to measure the change. In fact, that wasn’t the plan from the beginning, but the initial pentest was so devastating that they decided to go all in.

As sad as the initial pentest report seemed to the client, it looked equally promising to us. I won’t describe everything that happened, to avoid sounding like an ad. Just two things that impressed me the most.

The first thing was their reaction to the slide showing all of their passwords (without any relation to usernames, of course). It goes like this: at first, everyone looks for theirs in the hope it isn’t there. Then they find it, and it’s a moment of ultimate despair. But within a few seconds they start reading other people’s passwords, then they look at each other and all start laughing, hysterically, and it can’t be stopped for a few minutes.

The second thing was that after the first group took the training, we had a two-week pause before the second group. When we started with the second group, it was a mind-blowing experience. Almost all of them had already subscribed to my blog, my Facebook page, and my company’s page, had started following the news we share, watching our webinars, and so on. More than half of them were already fans of Mr. Robot and asked questions about the validity of the techniques depicted in it.

Eight months later, when we conducted the re-test, the results were fascinating. I’d be happy to say that we failed to hack them, but that wouldn’t be technically true, because of all the employees, one actually took the bait, and she did it intentionally. It was her last day at work, she had already got her final paycheck, and, according to her, it was obvious that it was us. So she decided to have some fun. And she showed me that there are aspects of human behavior that require further research.

Closing

So, this is my story, so far. Maybe because at its very beginning I myself got hacked, I avoided the temptation of blaming the user and actually figured out how humans can become stronger. This is what I’m inviting you to do too. Users aren’t dumb; they just don’t know. That’s why humans are the weakest link in security, by default. And this is how you change their settings.

Don’t underestimate humans. Give them a chance, and believe me: they will surprise you.

Thank you.

Leveraging the Strongest Factor in Security (Part II)

Since I wrote the first part of this post in May, several related articles have appeared on various well-known online resources. The most notable of them, in my opinion, is this piece in Fortune that tries to bridge infosec and business, as many have tried (and most have failed) before. You don’t have to read the whole article to catch what it and the others have in common: the very first paragraph ends with a statement we have all long since got used to.

If your company is like most, you’re spending an awful lot of your information technology budget on security: security products to protect your organization, security consultants to help you understand where your weaknesses lie, and lawyers to sort out the inevitable mess when something goes wrong. That approach can work, but it fails to consider the weakest link in your security fence: your employees.

So, if you’ve read my first post on the topic, you can already guess that anything that follows in the article might be misled by this stereotype. I warned you last time that anything that sounds like “humans are the weakest security link” should be followed or preceded by “by default”. And by “default” I mean “in case your company’s security management has done nothing to change that”.

But that’s easier said than done, right? So what could one do in order to, well, leverage the strongest factor in security: human nature?

To understand that, it’s necessary to get an idea of how our brain functions. I’ve spent quite some time getting familiar with this topic by reading the results of contemporary scientific research, and I encourage you to do the same! However, for the sake of this blog post, I am going to summarize the strongest points, the ones you have to embrace in order to, well, see the light.

Imagine that inside every human brain there are three animals: a crocodile, a monkey, and an actual human being. If you are familiar with the brain’s structure, you already know why: different parts of it developed during different evolutionary periods. Thus the croc is the personification of our reptilian brain, the monkey is our mammalian or limbic brain, and the human is our neocortex. Each of them does its job, and there is a strong hierarchy among them.

The croc is the boss by default, although he doesn’t micromanage. He is responsible for only three basic instincts:

  • Keeping safe from harm, including predators, natural disasters, and other crocs like himself;
  • Finding something to eat in order to not starve to death;
  • Finding a partner, if you know what I mean.

As you see, the crocodile brain performs the most important roles: the preservation of the individual human and of the species overall.

The monkey trusts the croc with its life. It’s sometimes afraid of the croc too, but still, there is little chance it will stay alive for long if the croc falls asleep or is simply gone, so yeah, the monkey trusts the croc.

The monkey’s work is more complicated. Protected by the crocodile, it can dedicate some of its time to training and learning from recurring experience. In other words, the monkey can be taught things if it does them enough times. There are many words for that ability, but we are going to stick with ‘the habit’. Using habits, we simplify our life as much as possible, for better or worse, but certainly for the easier.

And the human is normally quite different from them both, because, well, you know: abstract thinking, complex emotions, ethical frameworks, cosmology, and sitcom TV shows. With all that, the human brain optimizes its job as much as possible, so if there is a chance that the monkey can do something the human has to do, the human will take that chance. Going through the same procedures over and over, we train the monkey, and once it’s ready we hand the task over to it. How many times have you missed a turn and driven along your usual route to the office even on a weekend? The monkey took over, and the habit worked instead of your human reasoning, which was busy with something else at that moment.

To some this may sound counterintuitive or even scary, but that’s how it is. If we thought through every decision we make, we wouldn’t be able to develop as a species and a society. Too much thinking in moments of crisis would kill us: deciding on the tactics for dealing with a saber-toothed tiger would take all the time needed to run towards the cave or the nearest tree. Humans tend to take shortcuts and rely on their instincts and reflexes as much as possible. And in general it’s a good strategy, given that humanity has spent many centuries training the monkey and adjusting the croc’s input data.

But then… boom! cyber!

Recent developments in technology and communications have changed our lives. Now we have to do many old things in new ways, and as a result it’s not easy for our brain to apply the tricks evolution has taught us over millennia. The monkey’s old habits and the croc’s even older instincts are not triggered by the new signs of danger. We are used to dealing with danger tête-à-tête, not in front of a computer screen. Centuries-old fraud tactics find new life online, with humans unable to resist them because of the scale of anonymity and the ease of impersonation on the internet.

So what can we do? Not much, really. I don’t believe in technology when it comes to human nature, so I prefer to focus on the human (and the monkey, and the crocodile) instead. Having read and discussed much of what contemporary science can teach us about behavioral economics, the irrationality of decision making, and, most importantly, habits, I have come to the conclusion that people can be taught to effectively resist modern cyber threats the same way they have learned to survive other hazards: by leveraging instincts, installing new reflexes, and transforming habits.

In the next post we’ll wrap it up, with me presenting a method of transforming individuals and groups from a vulnerability into a countermeasure. I hope this sounds intriguing enough for you to stay tuned.

Leveraging the Strongest Factor in Security (Part I)

In January 2013, Gary McGraw wrote an excellent piece on 13 secure design principles that summarize the high-level ideas any security engineer or architect should be familiar with in order to be called one. Dr. McGraw is, of course, that smart gentleman from Cigital who wrote the “Software Security” book, records the “Silver Bullet” podcast, and has played a role in many career choices in the security industry. The first principle he explains is quite logical and intuitive: “Secure the weakest link”. This principle spans many disciplines, such as project management and logistics, and is obvious to many: there is hardly another way to dramatically improve something than to take its worst part and fix it. Pretty simple, right?

Picture credit: SC Magazine (https://www.scmagazine.com/defending-data-the-knowledge-factor/article/269211/)

The vast majority of information security professionals agree that the human factor is the weakest element in any security system. Moreover, most of us promote this idea and don’t miss a chance to “blame the user” or address human stupidity as an infinite source of security problems. However, when you start challenging this idea and ask what, in fact, they have attempted to do to change the situation, the answers are few. Just try it yourself: every time you hear someone say “… you cannot fight phishing/social engineering/human error etc.”, kindly ask them: “And have you tried to?…” I do it all the time and, believe me, it’s a lot of fun.

The uncomfortable truth is that the human brain is very efficient at detecting and dealing with threats. In fact, it spends the majority of its computing time and calories maintaining the “situational awareness” that allows us to step on the brakes long before we could solve the system of equations that represents the speeds and trajectories of our car and the one approaching from the side. Our brain, if trained properly, can serve as an effective security countermeasure that could outrun any security monitoring tool in detection or response. The problem is that we as an industry haven’t had as much time to train humanity to monitor for, detect, and respond to technology threats as nature had to train us to avoid open fire, run from a tiger, and not jump from trees. An even bigger problem is that we don’t seem to be starting to do it.

So, what’s wrong with us, really? Why don’t we combine the common knowledge of human weakness in the face of cyber threats with the maxim of securing the weakest link? I frankly have no idea. Maybe it’s because the knowledge domains that deal with human “internals”, such as neuroscience, psychology, and behavioral economics, are very different from what security people are used to dealing with: networks, software, walls, and fences. I don’t know. However, I have tried (harder ©) to improve the way people who are not security experts deal with cyber threats. And you know what? It’s more fun than blaming the user. But I guess that’s enough for one post; to be continued…

This post was originally published on LinkedIn on May 25, 2016.