What is the difference between penetration tests, audits, and other cybersecurity services

I have summarized my thoughts on the difference between the four main types of cybersecurity evaluation. You can find the details and a bit of Q&A in the recording. The material turned out well, but I will recap it here anyway.

Penetration Test is a heavily overloaded term. Unfortunately, these days anything and everything gets called a pentest: from reformatted Nessus and Acunetix reports to highly skilled red teaming. Among penetration testing professionals, a pentest means a set of exercises for dynamic, interactive verification of the security controls you have implemented. In other words, a pentest is a ruler of sorts that lets you measure how effective your defenses are against current cybersecurity threats.

Application Security is not just testing. In fact, security testing is not even 5% of all the activities in this discipline. The easiest way to get oriented in this field is the OWASP SAMM project: it concisely and accessibly lists all the common AppSec practices and points to where you can learn more. Security testing is, of course, among the five most important software security practices, alongside training developers in AppSec basics, threat modeling, defining security requirements, and security code review. However, when people without experience in the field talk about AppSec, they usually mean independent application penetration tests.

Security Assessment is a review of the defenses of a given scope at a point in time. In other words, it is a snapshot of your security posture compared against generally accepted industry practices, the requirements of a particular standard, or your internal documentation: policies, procedures, and so on. It is definitely not an audit, though, because the requirements for the process are less strict and the expectations less formal; more on that in the next paragraph. Security assessment is the most general of these terms and is applied very broadly. The most common example is, of course, the PCI DSS assessment performed by QSAs, Qualified Security Assessors.

Information Security Audit is an independent review of how effectively information security controls were operated over a certain period. In other words, if you are having an audit, you should already have a set of requirements, ways to satisfy them, a risk assessment, a system of internal controls, logs of their execution, and perhaps even an internal audit function. It is important to understand that, unlike a security assessment, an audit checks not how you are protecting yourself now, but how you protected yourself over the last six months or a year. For example, if you have a Web Application Firewall enabled and configured today, that is enough for an assessor. An auditor, however, will want to see logs proving the WAF has been operating effectively for six months, and if some of them are missing, the control will be deemed ineffective. Auditors use broader and more complex standards, such as ISO/IEC 27001 and 27002, SOC 2, ITIL, and so on.

In short, that is it. More details are in the video; the slides are below.

My thoughts on the Pentest vs Bug Bounty debate

I have been in the pentesting and appsec business for a while. For the last 10 years, I have been more or less involved in security assessments of various kinds. I started as a junior security engineer in a large international firm, where I did my share of scanning and translating the reports. Then I had to leave the infosec industry for a couple of years that I spent in IT audit, although I continued occasional freelancing. After that, I joined a smaller firm where I grew my first pentesting team, then another one. Currently, I run my own company and can finally focus on building the security assessment practice the way I think is right. One question that I am regularly asked by clients, friends, and colleagues is:

Why do you still do appsec and pentests when Bug Bounties are so much more profitable?

Sometimes I joke about it, sometimes I try to explain, but normally I limit the answer to “bug bounties are overrated”. Simply because it’s true. I will not dig deep into the difference between classic consulting services, and security assessments in particular, and the crowdsourced approach implemented by contemporary bug bounty programs. Instead, I will point out that both leading bug bounty brokers have lately introduced a new service: the so-called “next generation pentest”. Which in fact is just a pentest, but provided to you by a broker that uses bug hunters as human resources. Of course, we can argue about the differences in the methodologies that support the two approaches, but after a few minutes I will most probably convince you that the difference is negligible. What really matters is who does the job.

A few words about the history of the discipline. For many years, pentesting firms were so small that they were not considered actual market players, simply because big clients were not fans of the idea of giving such a sensitive job to a pentest boutique. Instead, they offered contracts to the entities that had already built trust with them: accounting firms, system integrators, and even software vendors. Then, slowly but surely, smaller companies started to gain trust too: sometimes because of a deeper focus on the subject, sometimes because they were founded by individuals who had built trustworthy public profiles throughout their careers. And then bug bounties emerged.

Bug bounties offered the market crowdsourced security assessments of unlimited scale. In other words, now “thousands of eyes” could review the security of your software and report issues, while only the first report complete according to the program rules would win the reward. Many customers were quick to jump on the bandwagon of what seemed an economically sound idea. Pay as you go? Better: pay as you get value! Who in possession of the required funds would resist the temptation?

But as it turned out, not every customer was ready for the “thousand eyes” attention. A few had not gone through any formal appsec practices before posting the bug bounty brief. As a result, a thousand eyes quickly emptied the budget of a program whose scope had never had even a couple of eyes look at it first. So the paradigm had to evolve: now bounties were only good for “mature” products that had some in-house appsec. After this and some other improvements, a balance was found.

The ingenuity of the idea and the trajectory of its success made bug bounties a nice thing to invest in. And investment capitalism, in short, means fsck dividends: growth is all that matters. But the growth has not been as intense as expected: the market has quickly reached its capacity in both clients and human resources. Not that many customers are declaring bounties now, although many pilot the service in private mode. Not many bug hunters become professional, dedicated full-time appsec researchers. There are super effective 1%ers on both sides. Apparently, the investors are not OK with “the flow” of operations and revenue that the field has reached. Thus, a rewind to the classic dedicated consulting/pentesting kind of services is being attempted, albeit with a certain facelift. And it will most probably work out, as the bug bounty brokers have the required trust and quality controls in place and are able to deploy trustworthy, background-checked resources. I am not sure this will let the brokers sustain the growth rate expected of them, because the next “bug bounty boom” is not necessarily arriving any time soon. But the combination of public and private bounties and classic pentests should secure the flow.

In conclusion, I will sum it all up as I see it. Bounties offered the market the promise that Bitcoin once made: the elimination of trust from the equation. Bitcoin never delivered: not only because now you had to trust Bitcoin itself, but more importantly because people are willing to trust each other and the independent third parties who will enforce the rules in case one of them decides to cheat. Neither will bounties deliver. Instead, the brokers will have to take trust into account and diversify their offerings accordingly.

On the usefulness of Penetration Testing methodologies

Let’s imagine for a moment how the “bad guys” plan their attacks. In a dark basement with cyberpunk posters covering the graffiti on the walls, with a bunch of half-assembled computers lying here and there, malicious hackers gather around a poorly lit table to decide which version of the Black Hat Attack Methodology to use in the upcoming criminal operation. Sounds absurd, right? Of course, because attackers are not methodical.

As Penetration Testers, we see our main goal as functionally testing our clients’ defenses to assess their ability to withstand a real-world attack. Do we have to rely on external knowledge for that? Obviously, yes: it is impossible to know everything about every attack vector in 2016. Do we have to stick to a predefined set of instructions, a so-called methodology? That depends.

If you are not a pentester, and yet you have to act as one, methodologies are inevitable. To conduct a pentest yourself, or to reproduce the results in the report from an external consultancy, you have to get your head around a methodology of some sort. In fact, this happens all the time: the perception in the market is that anyone, be it an accounting firm or an IT audit practice, can do Penetration Tests: just look at the plethora of methodologies out there!

But if you do pentesting for a living, do you really need Methodologies? I am a big fan of seeing a pentest as a mission rather than a project. Of course, a mission has to have a plan, but it can rarely be scripted in detail. It is essential to have a recurring cycle of acquiring, analyzing, and applying data, and to share it within the team. It is ideal to have both specialization and knowledge sharing between the team members. But to write down “what we do,” “what we do when we’re in,” and “how we exfiltrate” in a static document? No, thanks.

To succeed at something, we have to have good mental models and actual practical how-tos at our disposal. The models let us build insight into how the attack would go and what we would have to do along the way. The how-tos and examples let us prepare for the actual operations: collect the data, apply or build the tools, make our moves, and get proof of a risk to the client’s business. Methodologies try to bridge the gap between the two for those who need it. Do you?

Using NMap XML output

It is widely known that NMap is the most underestimated penetration testing tool out there, so in case you don’t use its XML output to the full extent (as I didn’t until just a month ago), this post is for you.

There is a whole section in NMap help dedicated to output formats.

OUTPUT:
-oN/-oX/-oS/-oG <file>: Output scan in normal, XML, s|<rIpt kIddi3, and Grepable format, respectively, to the given filename.
-oA <basename>: Output in the three major formats at once
-v: Increase verbosity level (use -vv or more for greater effect)
-d: Increase debugging level (use -dd or more for greater effect)
--reason: Display the reason a port is in a particular state
--open: Only show open (or possibly open) ports
--packet-trace: Show all packets sent and received
--iflist: Print host interfaces and routes (for debugging)
--append-output: Append to rather than clobber specified output files
--resume <filename>: Resume an aborted scan
--stylesheet <path/URL>: XSL stylesheet to transform XML output to HTML
--webxml: Reference stylesheet from Nmap.Org for more portable XML
--no-stylesheet: Prevent associating of XSL stylesheet w/XML output

With time, I got used to typing -oA <basename> in order to get the reports in three formats: the actual NMap output in .nmap, greppable text in .gnmap, and an XML document in .xml.
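For example, a full TCP port scan with version detection might look like this (the target range and basename here are made up, so substitute your own):

nmap -sS -sV -p- --open -oA scans/internal-sweep 10.0.0.0/24

This produces scans/internal-sweep.nmap, scans/internal-sweep.gnmap, and scans/internal-sweep.xml in one go.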

Most people I know get confused when it comes to XML parsing, so the most popular use of NMap XML output is to open it in Zenmap, look at the nice graphs, play around with sorting, and then close it and never open it again. The first piece of good news is that you can open multiple XML files in Zenmap by adding new files to those already open. This is handy, but you can’t save everything you’ve opened into a new XML document, so this feature is of limited use.

Second, you can use the xsltproc tool (on OS X it can be obtained by brewing libxslt) to create a nice-looking HTML report out of your NMap XML. Just type…

xsltproc report.xml > report.html

…and you’re done. Then you can open it in any browser and enjoy. You can also change the resulting HTML style by editing the nmap.xsl file (brew puts it in /usr/local/share/nmap/) to add custom highlights and virtually anything else you can get out of the XML.
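If xsltproc cannot find the stylesheet referenced inside the XML, or if you keep a customized copy somewhere else, you can pass it explicitly as the first argument (the path below assumes the Homebrew location mentioned above; adjust it to wherever your nmap.xsl actually lives):

xsltproc /usr/local/share/nmap/nmap.xsl report.xml > report.html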

That’s all good, but we rarely have just one NMap scan per engagement, right? And combining multiple XML files into one document is not something you can do easily. For that, you can use the xml-cat tool from xml-coreutils. It simply concatenates multiple XML documents, putting a <root>…</root> container around them. To use xsltproc with the result, you have to replace root with nmaprun and add the DOCTYPE and stylesheet lines right after the XML declaration on the first line, so that the file looks like this:

<?xml version="1.0"?>
<!DOCTYPE nmaprun>
<?xml-stylesheet href="file:///usr/local/bin/../share/nmap/nmap.xsl" type="text/xsl"?>
<nmaprun>
(everything between <root> and </root> goes here)
</nmaprun>

After that, you can generate a pretty HTML report and review your NMap scan results while sipping coffee and listening to music.
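If you merge scans often, a few lines of shell can do the concatenation and the header fix-up in one go. Here is a minimal sketch under the assumptions above: xml-cat and xsltproc are on the PATH, xml-cat’s output starts with the XML declaration on its own line followed by the <root> wrapper, nmap.xsl sits where Homebrew puts it, and the scan and output file names are placeholders.

#!/bin/sh
# Merge several NMap XML reports into one document that xsltproc can render.
# Assumes xml-cat (from xml-coreutils) and xsltproc are installed,
# and that nmap.xsl is in the Homebrew location used earlier in this post.

# Concatenate the scans; xml-cat wraps them in a <root>…</root> container.
xml-cat scan1.xml scan2.xml scan3.xml > merged-raw.xml

# Rebuild the file: keep the XML declaration, insert the DOCTYPE and
# stylesheet lines, and swap the <root> wrapper for <nmaprun>.
{
  head -n 1 merged-raw.xml
  echo '<!DOCTYPE nmaprun>'
  echo '<?xml-stylesheet href="file:///usr/local/share/nmap/nmap.xsl" type="text/xsl"?>'
  tail -n +2 merged-raw.xml | sed -e 's|<root>|<nmaprun>|' -e 's|</root>|</nmaprun>|'
} > merged.xml

# Render the combined report.
xsltproc merged.xml > report.html

The same pipeline works for any number of scans; just list more XML files on the xml-cat line.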

Hope this helps, stay safe, till next time!