How to Write a Resume in Cybersecurity

Every time after an event like NoNameCon or OWASP Kyiv, my inbox and messengers fill up with messages along these lines: do you have any openings? how do I get into your company? here is my resume. And after getting a reply, most people ask how to improve it.

Sadly, there is no open hiring at Berezha Security: we are not big enough to keep vacancies open and run a "competition" for them. When we grow, our followers on Facebook, LinkedIn, and Twitter are the first to know, along with our friends and partners in community and professional activities. But I read the resumes anyway, and here is what I have to tell you about them.

Writing an "effective" resume is a separate skill that takes practice. I don't believe in "career coaches", although I don't rule out that some of them are worth their fee. In my view, that kind of resume preparation is overkill for a cybersecurity specialist. The bar for these texts in our industry is low right now, because there is a critical shortage of people. But you still need a CV, and these simple tips will help you put one together.

1. Write about achievements, not experience. Far too often I have to read a stack of bullets for each job, the gist of which is that the author has dealt with certain methodologies and tools. This information can be useful, but it is better summarized in a single section of the document, while the job descriptions should list achievements that show your trajectory at a specific company. Joined as the first security person and, within two years, built decent operations and automated the processes. Or started as a junior pentester, grew into an autonomous unit within a year, and led a dozen projects. Something that characterizes you as an employee who progresses and grows, not one who merely accumulates experience.

2. Avoid lists. Bullet points shoot your resume full of holes. If you want to give examples, do it with commas or semicolons. And never turn a section into an exhaustive inventory. The brain processes information in a peculiar way: out of all the listed items, it retains an averaged impression. So it is better to give the two most impressive examples than a complete list. For instance, don't enumerate all your professional certificates starting with CCNA and Windows 2012 Server administration. Instead, list the most relevant and most recent achievements.

3. Mind your grammar. Many people know about Grammarly, but MS Word with grammar checking enabled also does a decent job. Avoid redundant and complicated terms. If a word can be simplified or removed without substantially changing the meaning, do it.

4. Training without certification interests no one. If over the past three years you completed five of the coolest, most top-tier training programs but never sat the exam, better leave them out of the resume. Until you pass the exam. Seeing a course name without the corresponding certificate, the reader will start asking why you left it unfinished. And whatever the reasons are, they won't work in your favor.

5. References. If you have permission to include names, positions, and contacts in your resume, do it. But try to limit the list to people you have worked with in the past year or two. Otherwise, they will have only a vague picture of your situation and will hesitate to make assumptions. The references that look best in a resume come from a direct manager, or from an internal or external "customer" who directly benefited from your work.

6. Volunteer experience and community activity. Even Forrester and Gartner take these criteria into account when evaluating companies, let alone individual professionals. If you are capable of getting off your backside and doing something for the community, it absolutely belongs in your resume. First, it is a unique experience; second, it shows you can be relied on not only when you are paid for it.

7. Media activity. If you have a blog, a GitHub account, a Facebook page, or do professional or semi-professional work anywhere else, don't be shy about putting it in your resume. Educational work, commits to open-source projects, conference talks, and the like can all be interesting to a potential employer. It is entirely possible that they are looking not just for an employee, but also for someone to represent the company in the media and at industry events.

If you have questions about writing a resume, there is now a dedicated #career-advice channel on the Ukrainian Cybersecurity Discord server. Junior colleagues can ask questions there, and more experienced ones can answer, including privately or even by voice.

For my part, I will try to help improve as many of your resumes as I can. The experience of screening more than a hundred security specialists has to be put to some use. But keep in mind that the last time I wrote a CV was about ten years ago, so I view this process purely from an employer's perspective. My own resume is therefore not an exemplary model.

The Difference Between Corporate and Product Security

Among Ukrainian organizations, IT companies approach us perhaps more often than anyone else. So I want to share some of the experience we have accumulated. It may well be useful to other organizations in this vertical, and perhaps to organizations in other industries too. So if you know a CIO/CTO of an IT firm, show them this text. It was written for them.

Roughly one in ten IT companies reaches out on its own initiative. In the remaining cases, the reason is a requirement from a customer or potential customer of its services. Unlike many of my colleagues, I consider this approach entirely correct, practical, and economically justified. These companies write code literally to order. In most cases, this code benefits the customers and makes life more comfortable and convenient for many thousands of people. So the right to demand that IT companies care about security belongs to their customers, or to their customers' clients. Some virtual Vova Styran can, of course, go around whining about the bleak state of software security, but he'd be better off founding a local OWASP chapter with like-minded people and making security knowledge available in his region to the companies and professionals who need it. Which is, in fact, what he does.

That said, the "security" requests coming from IT companies tend to be somewhat unsystematic, and this is where it's worth slowing down. The reason lies in the dual nature of security in any organization that builds information products or services. On one hand, there is organizational security: protecting information systems and networks, safeguarding intellectual property, complying with local and international regulators, and so on. That is, protecting the interests of the business as an entity: so that it doesn't go bankrupt and its executives don't go to prison. On the other hand, there is engineering security: protecting the products and services themselves from abuse by unauthorized parties, protecting users and their data from the consequences of such abuse, and protecting the business model both from third parties and from authorized users (for example, software licensing or DRM). That is, protecting the output of the business and its ability to turn that output into profit.

Needless to say, these two "securities" are rather different fields of knowledge. Unfortunately, far from all executives know this. Sometimes it leads to operational security people being put in charge of Application Security, which causes a pile of misunderstandings and extra friction between IT and the business. Somewhat less often, product developers get burdened with operational duties, and that's when real drama unfolds. In the fairy tale with a happy ending, in both cases all parties eventually reach a consensus and divide the responsibilities more or less organically. Otherwise, it can escalate into workplace conflict, and then everything depends on the diplomacy and professionalism of those involved.

To make life a bit easier for the leaders of IT firms and their staff, I will formulate a simple decision-making algorithm for determining which kind of security an organization needs and how to move toward it.

1. Figure out what you are protecting:
A: the firm, with all its assets, personnel, and financial secrets, from the actions of cybercriminals and from the consequences of laws and industry standards,
or
B: the product/service this firm builds, from cybercriminal attacks and malicious actions of its users.

A.2. In the first case, entrust the task to security specialists who deal with technical information protection and organizational InfoSec measures. Such specialists emerge, for example, from the forge of the banking system's HR pipeline, or from classic IT integrators and distributors. They typically used to work in IT operations: system and network administrators. Occasionally, they are former law enforcement officers.
Keywords: CISSP, CISM, Information Security, Security Administration.

B.2. In the second case, entrust the task to software security specialists. They are rarer on the job market and have a somewhat different skill set. They are former programmers or testers, or started their careers directly in Application Security. Most likely, they participate in Bug Bounty programs in their spare time.
Keywords: OSCP, OSCE, Application Security, Bug Bounty, White-Hat Hackers.

A.3. Methodological guidance and industry standards (that is, instructions on how to build security when you don't know how) exist in overabundance for the first case. Most likely, you will be dealing with ISO 27001, SOC 2, NIST, CIS, or another framework. Take this into account when choosing the people who will execute and lead operational InfoSec.

B.3. As for software security, there is plenty of guidance here too, but standards are still scarce (and in my opinion, that's a good thing). Secure development recommendations can be found under the keywords OWASP, SAMM, SDL, NIST. The leaders of secure development functions must have more than a rough idea of these processes: they must be able to implement and improve them effectively.

A.4. Engaging external consultants for technical and organizational InfoSec can take many forms, but it usually boils down to either building something faster than you could on your own, or verifying the quality of your security measures. Audit and consulting in this field are quite profitable lines of business, if you know what I mean. So be very careful at every stage of scoping and executing such projects: the scope starts creeping from day one, and unaccounted-for expectations may surface at the final stages.
Keywords: CISA, CISSP, ISO27001LA, ISO27001LI, Penetration Test, Security Audit.

B.4. External resources can be brought into the secure development process at almost every stage of building software, except perhaps the formulation of the business idea. Design, architecture, planning, implementation, testing, migration, support, incident handling: all these and other phases can be done in-house or outsourced. But there is one caveat: at a certain stage of a project's growth, having your own internal function becomes far more effective and economical. You will most likely keep using independent security reviews/testing and ad-hoc consulting, but as the product is developed, refined, and integrated, you will need your own software security specialist or even a whole team.
Keywords: OWASP, OSWE, OTG, ASVS, SAMM, Application Security Assessment, Application Penetration Test.

5. Where the security function sits in the org chart is a separate, complicated, and dramatic topic. Many proponents of security management "good practices" insist that having the CISO (Chief Information Security Officer) report to the CIO is a bad idea, because it creates a conflict of interest: the IT director pushes for availability and performance of systems and services, while the security lead piles restrictions onto them. For organizational security, this claim may well hold; but when security reports to the CTO in a product organization, the conflict evaporates, because it is precisely the CTO who is interested in the security of the services and products being developed.

I hope these notes help you get oriented and choose the right strategy. Or at least save you some time.

Antiviruses and other software, Russian and beyond

(This is a rather old post translated into English by a friend, so keep that in mind while reading.)

Another wave of public discussion of Kaspersky's participation in Russian intelligence operations is emerging, particularly in the context of the theft of US classified documents and NSA software tools, which later ended up with the "Shadow Brokers" and eventually played a role in the WannaCry and NotPetya outbreaks.

My opinion on this is consistent: willingly or not, Kaspersky Lab was and is an asset of Russian intelligence. But I'd like to underline a different point today. Regardless of whether Kaspersky was or wasn't part of their own government's APT ops, using an antivirus for cyber-attacks and international espionage is awesome.

Firstly, this channel is technically super powerful. As I have often said before, running a process with unlimited permissions on a computer is a very dubious idea from a security architecture point of view. Such a process can access any file or memory contents. Moreover, it must digest a lot of different file formats. For those who don't know what fuzzing is, or why .pdf = Penetration Document Format, I will simply say that a lot of breaches happen because of errors in parsers of complex data types. So we can only dream that installing an antivirus will raise the overall security level of the system at all. (Those for whom this is still not obvious should follow Tavis Ormandy on Twitter and the Google Project Zero blog.) So what should we do, if we find out that the people with access to antivirus updates and vulnerabilities are our enemies?
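
To make the parser point concrete, here is a toy sketch of mutation fuzzing in Python. The `mutate`/`fuzz_parser`/`toy_parse` names and the byte-flipping strategy are illustrative stand-ins I made up for this example, not how any real fuzzer or antivirus engine works:

```python
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes: the crudest form of mutation fuzzing."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz_parser(parse, seed: bytes, rounds: int = 1000) -> None:
    """Feed mutated inputs to `parse` and report anything that blows up.

    `parse` stands in for any complex-format parser (PDF, archive,
    executable header) of the kind an antivirus engine must handle;
    a crash here is a candidate security bug.
    """
    for i in range(rounds):
        sample = mutate(seed)
        try:
            parse(sample)
        except Exception as exc:
            print(f"round {i}: {type(exc).__name__}: {exc}")

# Demo with a deliberately buggy toy "parser":
def toy_parse(data: bytes) -> None:
    if data[4] > len(data):  # pretend a length field points past the buffer
        raise IndexError("length field out of bounds")

fuzz_parser(toy_parse, seed=b"%PDF-1.4 hello world")
```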

Secondly, antivirus functionality is complicated, its interaction with the operating system is even more complicated, and we never know for sure which data it passes to its developers. Therefore, using an antivirus for espionage opens enormous opportunities to deny involvement and deflect any accusation. This is the apogee of Plausible Deniability. KAV caught some NSA tools on a laptop and passed them to Moscow? But they looked like malware! (Which, in fact, they are.) KAV ran a keyword search on a hard drive? But those words were part of "malware" samples! And so on, and so forth. Until Eugene puts his hands up and claims he was hacked by the GRU or FSB and is himself a victim of cyber espionage. Not to mention the old tradition of Russian special services to place their people in key positions in private companies. That way, they can deny anything and claim that particular employees were recruited directly, while the Board and the Management knew nothing.

What are the conclusions?

Simple and radical. You can't use software and information tools provided by your enemy, created on territory controlled by your enemy, or by companies employing people based on territory controlled by your enemy. Therefore, the rebranding of Laboratory of Kaspersky into Kaspersky Lab with a fictitious relocation of headquarters changes nothing. Migrating from Kaspersky products to the products of other companies (Belarusian, Kazakh, Slovakian, etc.) with "strong cultural connections" to the Russian Federation, again, changes nothing. Only a full ban on information products even intuitively traceable to the enemy can decrease the risk.

However, even if you get rid of Kaspersky, uninstall 1C, delete your VKontakte account, and stop watching Kurazh Bambey voice-overs, residual risk is inevitable. Because, the last time I checked, all your friends and business partners used 1C, visited VK, and browsed other Russian websites.

Stay safe.

Hackers don’t give a shit about your excuses

Most security breaches happen because not enough effort goes into prioritizing security.

Most corporate security departments are busy with the procedural burden that is documented in policies and demanded by management, instead of fulfilling their direct duty: protecting the business from cyber threats. Why is that? Because executing procedures is easier, and it is also easier for them to justify their existence that way. We exist because of PCI DSS, GDPR, SOX, ISO 27k, etc.

Most of the companies that got hacked had some sort of security policy. So why did they suffer a breach? Because paper doesn't protect from hackers. Only rules and actions protect from hackers and malicious software (aka viruses). Rules and actions separately do not protect; only both combined: rules + actions. Moreover, you don't have to invent either the rules or the actions: they were invented long ago and are publicly available.

Based on my experience, I've concluded that an enterprise cybersecurity practice should start neither with a corporate security policy nor with a new firewall or antivirus. Management responds well to "paper tigers" and "blinking boxes", but that is a bad, even unprofessional start. The right way is to start by applying simple rules and actions and demonstrating their effectiveness. We must not have outdated OS versions in our network. All users must have long, strong passwords. The network and its resources must be segmented according to business needs. Remote access must require two-factor authentication. Interactive administrative access to systems should be prohibited. And so on, and so forth. One rule at a time, one action after another.
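
As a rough illustration of "rules + actions", here is a minimal sketch that encodes a few of these rules as checks over a hypothetical host inventory. The field names, the EOL list, and the policy values are made up for the example; the point is only that each rule becomes a check, and each check produces an action:

```python
# Example policy values and EOL list -- assumptions for this sketch only.
EOL_OS = {"Windows 7", "Windows Server 2008", "Ubuntu 14.04"}
MIN_PASSWORD_LENGTH = 12

def audit(hosts: list[dict]) -> list[str]:
    """Apply simple rules to an inventory; every violation demands an action."""
    findings = []
    for h in hosts:
        if h["os"] in EOL_OS:
            findings.append(f"{h['name']}: outdated OS {h['os']} - upgrade or isolate")
        if h.get("min_password_length", 0) < MIN_PASSWORD_LENGTH:
            findings.append(f"{h['name']}: weak password policy - enforce length >= {MIN_PASSWORD_LENGTH}")
        if h.get("remote_access") and not h.get("mfa"):
            findings.append(f"{h['name']}: remote access without two-factor auth - enable MFA")
    return findings

if __name__ == "__main__":
    inventory = [
        {"name": "dc01", "os": "Windows Server 2008", "min_password_length": 8,
         "remote_access": True, "mfa": False},
    ]
    for finding in audit(inventory):
        print(finding)
```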

And then you will stand a chance of not becoming a victim.


How to Build Security Awareness Programs That Don’t Suck

This is the script of my talk at BruCON 0x09. You can find the video here: https://www.youtube.com/watch?v=40tUy6TNXM8; the slides are here: https://files.brucon.org/2017/006_Vlad_Styran_Security_Awareness_v3.pdf.

Introduction

Hi everyone. Thanks for coming.

Before we begin, let me ask you a question. In your professional opinion, what is the weakest link in a security system? Louder please. OK, the human. Thank you.

But it seems that not all of us agree. Is that so? Alright, let's do it this way: all of you, please raise one hand. [Raise hand]. Come on, that's not too hard, you can do it.

First, let's spot the cheaters. Please lower your hand if you have a degree in applied psychology, behavioral science, or anything similar. Thanks. Now, please lower your hand if, in your professional opinion and to the best of your knowledge, there is a component of a security system that is weaker and riskier than a human being. In other words, that the human is not the weakest link. [Lower hand]. Okay, thank you. Now, please lower your hand if you think that the first and most important thing a company should do to protect itself is to train its staff.

Okay thank you very much! I think we’re ready to start.

The through line

So, what's it all about? Besides getting you acquainted with a Ukrainian accent (which is basically a Russian accent, but please don't tell anyone I told you that), today I'm going to tell you how I stopped blaming the user and found a way to turn the human from "the weakest link" into a valuable security asset, and in some cases, the strongest component of a security system.

The beginning

Let me start with a story that begins one bright morning in the fall of 2005, when a young IT guy arrived at the office of a small software engineering company to find a surprising email from their ISP in his inbox. The email stated that traffic from their corporate network was being temporarily filtered on TCP port 22 per an abuse report. That's right, that young IT guy was me, and at that moment he was quite surprised. And by "surprised" I mean scared to death.

What followed was my first experience of digital forensic investigation. The logs, the integrity checks, the pcaps, the timeline: all the stuff that sounds trivial now but was a lot of fun back then. The trail went cold at an outdated Linux server in central Ukraine; it ran long-distance call billing, and its IT guys only knew how to turn it on and off.

Then I started to analyze my compromised box: RedHat 9, AKA "Shrike", with a 2.6 kernel, remember that? Although a bit old, it was still clean of any remotely exploitable vulns, so I was intrigued. Because in my world, Linux was so secure no one could hack into it. So I did all the appropriate analysis, learning how along the way, and found out that at some point the malicious worm had entered the OS by successfully guessing the root password. That's how I met John the Ripper and attempted to crack that password. What do you think it was?

password123

After all these years, I see a history behind it: the guy apparently had some experience. I bet his first root account was protected by just ‘password’, then something surprising happened, and he started using password1, and so on.

Of course, after this fantastic adventure, I could not go back to my sysadmin life. And this is how my career in InfoSec started.

Over the next five years, I worked in several roles, mostly exploring the industry and figuring out where I belonged: from system integration, through InfoSec management and audit, to pentesting and application security. Finally, I realized that pentesting and consulting were the areas where I could inflict the most benefit on the largest number of people.

Early in my career, I embraced the concept of the human being the so-called "weakest link". Given the popularity of this idea, and the level of authority from which it was articulated over and over again, it was already a sort of common wisdom. Bruce Schneier puts it as "Amateurs hack systems, professionals hack people". Other security gurus build their essays, talks, and interviews on the assumption that humans are incapable of dealing with online threats. Who was I to disobey? So, for rather a long time, I accepted it and somehow lived with it.

But the question "Why?" stuck in my head. Why do people click and run the crap they get from strangers? What makes them do all these unsafe things, give out secrets, allow spying on themselves, and so on?

Much later, I think I found an answer. In my view, security issues arise in places where different technologies meet.

As an example, imagine that you are asked to take an old banking application that runs on a mainframe and bring it onto the internet. This sounds like a very surprising thing to do, but it's been done more than a few times. In the end, most probably, you will have a bunch of mediation devices, wrappers, web services, and finally a web interface (if you are lucky), or an ugly Java applet (if you're not). Because of the differences between how computing was done in the mainframe era and how we do it now, there will be a huge complexity overhead in between, and it will create security problems.

By stretching this rule, we can explain why human-to-machine interaction is the ultimate source of security risk: people and machines work in fundamentally different ways.

Machines follow strict, logical laws, while humans are mostly irrational. However, their irrationality has a system to it, and that system is the subject of a large branch of psychology called behavioral science. Using the concepts of behavioral science in cyber security is often called Social Engineering.

While practicing the Red Team's craft, I got a chance to use Social Engineering for so-called social pentests, or pentests via social channels. The topic gripped me, and I started to explore it with passion. I read both of Kevin Mitnick's books available at the time, and knew everything in Chris Hadnagy's first book before it was published, mostly thanks to extensive listening to his Social-Engineer podcast. Chris had it right about inviting guests to the program, all of whom were great professionals in areas directly or loosely connected to social engineering. This is how I kept discovering new topics to learn and new people to look up to.

After a few years, I had started not only to practice social engineering successfully, but to understand the underlying psychological principles, such as reciprocity, commitment, social proof, authority, liking, and scarcity. I read a vast variety of material on human behavior, from Paul Ekman, to Dan Ariely, to Robert Cialdini. Then I dug deeper: into behavioral economics, personal and collective habits, the psychology of incentives, negotiations, happiness, success. I even took some psychology courses and tried to approach neurology, which still remains a challenge for me.

Going through all this knowledge and embracing all the cool ideas of these brilliant people was quite fun. And of course, it made me a better social engineer, and it notably improved my "soft skills", social experience, and overall quality of life. Living with people becomes much more interesting once you realize how they work. And it's not necessary to disassemble them for that.

One of the best ways to learn is to teach others, so I started to promote social engineering within the security community: I gave talks, wrote blogs, and played tricks on my colleagues in pubs. Of course, all of this was mostly about getting more social engineering into pentesting: we have to test the tech, the processes, AND the humans.

And, as in all areas of knowledge, after spending some time learning and practicing, I started to see "the bigger picture". At some point, a new question stuck in my head: "How can I protect against this stuff?"

But preaching "more social engineering in pentests!" wasn't working. Let me tell you how things go in the social engineering pentest field. Usually, social engineering is not requested by clients; the same is true for bug bounty programs. It's usually explicitly out of scope. When I ask people whether we should include the social channel in the testing, the answer is normally: why test something that we know for sure will fail?

After some time, I got bored of trying to explain to people that pentest results are not binary, that they give companies detailed input for improvement. Even, excuse me, PCI DSS did not fix the situation by starting to require social engineering in the pentest scope. Don't hack our people, we know you are going to succeed.

So, what should I do next, I pondered. What other ways shall I try? Fortunately for me, the answer was someone else’s idea.

A friend called and told me that he had been appointed the CISO of a large national enterprise, and that after a couple of months in the new role, it was obvious to him that the company's staff needed improvement with regard to social engineering threats. His company operated in a highly competitive environment (if you know what I mean), so the risks of human error were high. People were phished constantly, were pretexted over the phone on a daily basis, and the attacks against the company were quite sophisticated.

Later, one of the students told me a story that pretty much summed up the state of affairs. One Friday, an accountant left a couple of PDF invoices on her computer desktop to submit first thing on Monday, and left the office. To her surprise, after the weekend, almost the same invoices were on her desktop, but with slightly different payment details.

What my friend proposed was to develop and deliver an awareness training. (Aha, my first reaction was very similar to yours: yawn, eye roll.)

“You mean one of those remote slideshows with 5 multiple-choice questions at the end?” I asked.

“No”, he said. “We want you to take all your offensive experience and prepare a one-day onsite workshop where you'll teach our top managers, executive assistants, heads of departments, sales, the front desk, and every other high-profile target in our organization how to detect, resist, and react to social engineering attacks.”

Considering that I had just started my own small consulting company, I did not have many options. Besides, I thought this was a brilliant idea. And so I found myself in a surprising situation where I had to compile all of my previous learning and practical experience and describe it in terms clear to people who had no relation to InfoSec or IT. But in order to do it right, I first had to summarize why we usually do it wrong.

Explanation of the problem: how it's done now and why it is wrong

I finally had time to sit down and analyze the state of security awareness in the industry, and why it fails, miserably and constantly. Luckily for me, I didn't have to stop working to do it, because there was a company willing to pay for it.

I did some research, which in fact was a bunch of interviews with colleagues who shared, or didn't share, my views on the topic. After a while, I could summarize the reasons why we are doing it wrong. So, here they are.

First, cyber-security, which is basically a human problem, is being attacked with technical means. Look, the majority of the issues we have to deal with, be it a security vulnerability, an insecure configuration, or users clicking crap, are all about human behavior. Humans make mistakes. And we are trying to fix those mistakes with antiviruses and firewalls. Why? Perhaps because most of us came to this industry from IT and other technical areas, so we just see nails everywhere. Maybe that approach isn't the worst-case scenario after all. But what if we just considered other ways to deal with it, instead of…

Displacing responsibility. Strategically, what we are trying to do is move the point of risk treatment as far away as possible from the human: the point where most risks arise. Isolating humans from the responsibility for their actions would, of course, solve all of our problems, but unfortunately it doesn't seem to work. Maybe we should consider moving the responsibility, or at least the part humans can deal with, back to them?

If you think about it for a while, centralization of responsibility is quite a common problem, and as you may know, there are industries trying hard to solve it. The automotive industry reformed itself into so-called lean manufacturing, software engineering got carried away by agile programming, operations is devops now, and so on. So maybe it's time for us, too, to consider moving control back to, or at least closer to, the human being?

The third thing, which makes everything above much harder to fix, is that the InfoSec industry is totally driven by business risk. We want to formulate security decisions in terms of avoiding financial loss or gaining competitive advantage. We believe that speaking the language of business makes our attempts to improve security more efficient. However, the main thing we are really good at is helping the business tick a few boxes in a compliance checklist.

We spend most of our effort convincing corporate decision-makers that our products and services are better not for the security of their organizations, but for their budgets. Why do we do that? Because, apparently, that's where the money is. But in my opinion, instead of focusing on business risk, we should focus on the personal risks of the people we talk to.

Finally, the worst thing is that we surprisingly don't seem to do anything about the so-called weakest link. Every time I hear someone say, "Users are dumb, there is nothing we can do about it", I ask: "What have you tried?" Did you, personally, try to change the situation? Logically, it's the weakest link that needs reinforcement for the most effective improvement of overall security. But, as I said, humans are irrational.

Explanation of methods: what we should do and why it is right

After I arranged this list of obstacles, I started to figure out how I could complete the task at hand: reinforce people's ability to face risks, without relying completely on technology, by giving them back the control over and responsibility for their actions, and by making security their personal interest.

Thankfully, just as the intersections of different technologies are a source of security problems, the intersections of different areas of knowledge are a source of innovation. After all these years of learning seemingly unrelated technical and human stuff, I felt ready to test some theories.

Long story short, I came up with a list of three basic tools that, in my experience, have the highest potential for changing human behavior with regard to cyber threats. These tools are: fear, incentives, and habits.

Fear

We all know how fear works: we experience the threats around us, get harmed by them, and after that we can avoid them or prepare for them. One good thing about our brain is that we don't necessarily have to go through dangerous experiences to learn from them. Our prefrontal lobes allow us to learn from the experience of others and, most fascinating of all, from simulated experiences: our own imagination. This dramatically lowers our death rate, because we can learn to be safe and be safe at the same time. And learning how to deal with threats in the physical world is what the human brain does best.

Our brain is an excellent tool for continuously searching for, identifying, and treating risks: by fleeing from them, fighting them, hiding from them, playing dead, or making preventive strikes. This is what our brain developed and excelled at over tens of millions of years of evolution. So the mere fact that we are still here on this planet means that we can be taught to deal with threats. It's been proven in practice, and fear is the major tool we use for that.

We memorize things better when we’re scared, or as I call it, sufficiently stressed.

Of course, you don't need to scare your audience to death. The main goal is to make it personal to them. Cyber threats are dangerous to them, not just to their company. They will lose their reputation if a Skype worm infects half of their contact list. They will be raided by the police if their PC seeds child pornography to the internet. "I am too small to be a victim", "I have nothing to hide", "I have nothing to steal": these fallacies have to be eliminated. People are not hacked because they're important. People are hacked because they are trusted by other people and organizations.

Your goal is to make your training appropriately scary, but not a bit scarier. Fear is our psychological immune system, and you need to train it by making fake injections of stress. The appropriate level of stress is easy to measure. An insufficiently stressed audience will spend its time checking Facebook and Twitter. An overstressed audience will just run away. And a properly stressed audience will engage with the training, won't leave for lunch before you answer all their questions, and after the training will ask you: how do you live with all this knowledge? I always reply: you'll get used to it.

Which brings us to the need for repetition. Our brain is smart; it remembers bad things worse than good things. Time heals, not in the sense that our memories degrade, but in that they are replaced by new impressions. So, unfortunately for your audience, they will need repetition. The frequency and the form are up to you. I prefer to phish my clients once in a while; with their permission, of course, but without them knowing exactly when it will happen.

Incentives

Incentives are another great tool you can use, and they are much younger than fear in the evolutionary sense. They don't hit the foundation of our psychology, but rather exploit the social norms we have developed as a society. The two incentives I find most useful are competition and belonging.

Using competition to promote security awareness is simple. Establish a sort of internal "bug bounty" program and declare a reward. Reward reported phishing attempts, unidentified strangers caught on premises, confidential print-out leftovers. People like to be rewarded and distinguished from the crowd. This is why the materiality of rewards isn't always critical: a hall of fame can be enough.

Using belonging is a bit more complicated. Here you'll have to create a group of corporate awareness evangelists, if you will. First, naturally, every InfoSec person should be in this group. Second, everyone with formal authority, including the top three levels of management, should be in there. Third, and most important: every socially hyperactive person should be in there too. I call them social unicorns, and having them on your team is a gift or a curse, depending on your point of view. They spend an immense amount of time talking around water coolers and coffee machines. They know virtually everyone in the office, which means they know their birthdays, their spouses' names, their kids' ages, and so on. They are easy to spot, and they are gold for security awareness.

How? Imagine the situation where such a person gets hacked. The worst thing their employer could do in this case is fire them. Because after the hack, they are the best security awareness asset in the company. [Why?] They will tell everyone exactly how it happened, what they did wrong, how cool the security guys were to explain their mistakes to them, and how they will never, never do it again.

So, establishing a security culture demands at least three initial categories of, pardon me, thought leaders: people with expert, formal, and social influence over the rest of your staff. These ladies and gentlemen should undergo formal security training and be officially associated with the security awareness initiative company-wide. The others will follow.

Habits

In order to make the change permanent, you'll have to use habits. Habits let us automate: do less analysis, decision-making, and other higher cerebral activity, relying instead on routines we build over time. They drive us to work, clean our houses, cook our meals, and do many other important things.

Habits are critical. For example, if we had no habits, we would need much larger brains, and that would cause a lot of trouble, from higher death rates among women giving birth up to the esthetics of our selfies. But the most dangerous thing about habits is the false belief that they can be given up. The truth is, we cannot easily get rid of them; we can, however, change them.

Habits are simple loops that can be described as if-then clauses, followed by rewards. Let’s look at a few examples:

· stress eating: if feeling stressed, then go eat a cookie, and get a higher sugar level and a better mood as a reward;

· smoking: if bored, then go smoke a cigarette outside, and be rewarded by small talk with a fellow smoker;

· drinking: if feeling down, then go to the bar, and be rewarded by the company of a grateful listener (or at least the bartender's polite attention).

These loops of trigger, routine, and reward constitute our habits and, as such, most of our behavior. Our goal is to take bad security habits and replace them with good ones. For that, we need to identify the triggers, the routines they cause, and the rewards the brain seeks.
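
For the technically inclined, the loop above can be sketched as a tiny data structure. This is a minimal illustration, not a real training tool; the `Habit` class and the example triggers, routines, and rewards are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Habit:
    trigger: str                # the cue, e.g. "email with an intriguing attachment"
    routine: Callable[[], str]  # what the person actually does
    reward: str                 # what the brain gets out of it

def bad_routine() -> str:
    return "open the attachment immediately"

def good_routine() -> str:
    return "pause, verify the sender, report if suspicious"

old = Habit("email with an intriguing attachment", bad_routine, "curiosity satisfied")

# Changing a habit keeps the trigger, swaps the routine, and offers a new reward:
new = Habit(old.trigger, good_routine, "bug-bounty points and a hall-of-fame entry")

print(f"on '{new.trigger}': {new.routine()} -> reward: {new.reward}")
```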

Let's hear some bad security habits; I bet you've got some. I'll start with this one. If you receive an email, then open it immediately, and be rewarded by zero unread items. If the email contains an attachment with an intriguing name (such as "2018 compensation review plan.xlsm"), then open it as soon as possible, and be rewarded by knowing your boss's salary. If asked to run a macro, then click "allow", and be rewarded by finally getting your eyes on it. You get the idea, so could you please give me a few examples off the top of your head… Anyone?

So, you totally nailed it. Now, let's play with habit design for a while. We have already discussed the reward (bug bounty and hall of fame), so let's assume you already have that. Then, I hope you agree, we should change the routine to something like panic, or freeze, or at least proceed with caution. But let's talk triggers first.

When you train people, you give them example triggers, and they usually confirm that something similar has happened to them. Then they share cases from their own experience, and in this way, you build an ever-growing collection of trigger formulae.

Each formula contains three main components:

  1. a method of social engineering attack, such as phishing emails, impersonation, elicitation, pretexting over the phone, software exploits, baiting with USB thumb drives, and so on;
  2. an influencing principle: urgency, reciprocity, social proof, authority, liking, commitment, and so on;
  3. a security context: basically, anything of personal or business value.

Again, let’s look at some popular examples.

  • You receive an email with an urgent request to provide confidential data.
  • The pizza delivery guy is staring at you while holding a huge pile of pizza boxes at your office door.
  • An “old schoolmate” you just met in the street is asking you about the specifics of your current job.
  • You receive a call from a person that introduces themselves as the CEO’s executive assistant and asks you to confirm the receipt of their previous email and open its attachment.
  • An attractive, likable human is asking you to take part in an interview and is going to compensate you for it with a shiny new USB drive (in the hope that you insert it into your work PC later).

These are examples of triggers that should surprise your colleagues and force them to turn off the autopilot and take manual control of the situation. For that, they of course need to be taught the influence principles, the types of modern cyber-attacks, and the kinds of things hackers could want from them: remote access, banking accounts, contact lists, bandwidth, business and personal secrets, and so on. And then, if you have taught them right, they'll be ready to learn the universal formula that should guide them through any potentially harmful situation:

When you identify a potential type of attack, accompanied by an influencing principle, in a situation that concerns a security context, you should pause, rewind, and start processing the situation with caution, applying the skills developed during the training.

As simple as that.
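
If you like to see formulas as code, here is a minimal sketch of that rule as a predicate. The category sets are small illustrative samples I made up, not an exhaustive taxonomy:

```python
# Raise a red flag whenever an attack method, an influencing principle,
# and a security context show up together.
ATTACK_METHODS = {"phishing email", "impersonation", "pretext call", "usb bait"}
INFLUENCE_PRINCIPLES = {"urgency", "authority", "reciprocity", "social proof",
                        "liking", "commitment", "scarcity"}
SECURITY_CONTEXTS = {"credentials", "payments", "confidential data", "physical access"}

def should_pause(method: str, principle: str, context: str) -> bool:
    """True when all three components of a trigger formula are present."""
    return (method in ATTACK_METHODS
            and principle in INFLUENCE_PRINCIPLES
            and context in SECURITY_CONTEXTS)

# "The CEO's assistant urgently asks you to open an attachment":
print(should_pause("phishing email", "urgency", "confidential data"))  # True
```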

Of course, this is just the knowledge, the theory. The skill, the habit, is built with practice. You will need a lot of examples and training tasks for everyone to get this right. But in my experience, this is the most fun part of the training.

Bonus

OK, it seems we have some time left, so I can give you one more tool. There are plenty of sources of awareness material in popular culture, even if we as InfoSec professionals find their precision lacking. For example, in my opinion, Season 1 of Mr. Robot did more for security awareness than the whole InfoSec industry so far. At least I could show it to my parents and not be ashamed of it. The Real Hustle is good at showing con artists' tricks. Tiger Team gives an idea of what Red Teaming exercises look like. Lie to Me is quite OK for understanding emotions and universal expressions, as long as you read Dr. Paul Ekman's devastating comments on each episode. So, if you like a book, a movie, or a series about cybers, share it; don't keep it to yourself.

Example

In the end, let me share a perfect security awareness improvement that I have witnessed only once so far. We had a client that requested the full package: an initial full-scope pentest, training for their staff, and then a re-test to measure the change. In fact, that wasn't the plan from the beginning, but the initial pentest was so devastating that they decided to go all in.

As sad as the initial pentest report seemed to the client, it looked just as promising to us. I won't describe everything that happened, to avoid sounding like an ad. Just two things that impressed me the most.

The first thing was their reaction to the slide showing all of their passwords (without usernames, of course). It goes like this: at first, everyone looks for theirs, hoping it isn't there. Then they find it, and it's a moment of ultimate despair. But within a few seconds, they start reading others' passwords, and then they look at each other and burst out laughing together, hysterically, and it can't be stopped for a few minutes.

The second thing was that when the first group took the training, we had a two-week pause before the second group. When we started with the second group, it was a mind-blowing experience. Almost all of them had already subscribed to my blog, my Facebook page, and my company's page, and had started following the news we share, watching our webinars, and so on. More than half of them were already fans of Mr. Robot and asked questions about the validity of the techniques depicted in it.

Eight months later, when we conducted the re-test, the results were fascinating. I'd be happy to say that we failed to hack them, but that wouldn't be technically true. Of all the employees, one actually took the bait, but she did it intentionally. It was her last day at work, she had already received her check, and, according to her, it was obvious that it was us. So she decided to have some fun. And she showed me that there are aspects of human behavior that require further research.

Closure

So, this is my story, so far. Maybe because I myself got hacked at its very beginning, I avoided the temptation of blaming the user and actually figured out how humans can become stronger. This is what I'm inviting you to do too. Users aren't dumb; they just don't know. That's why humans are the weakest link in security: by default. And this is how you change their settings.

Don’t underestimate humans. Give them a chance, and believe me: they will surprise you.

Thank you.