Source: Information Management
Yaron Ziner | October 17, 2016

Over the last couple of years, software and machines have taken up an increasingly large part of our lives. With the rise of Artificial Intelligence (AI) and Machine Learning (ML), applications previously considered science fiction are now possible, such as computerized diagnosticians, automated lawyers and autonomous vehicles. The cybersecurity field is going through a similar transition.

Roughly speaking, we can divide the evolution of cybersecurity software into two waves. The first wave was dominated by rule-based, deterministic solutions. A classic example is the firewall: firewalls apply simple policies, such as blocking inbound traffic on specific ports or protocols.
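To make the first wave concrete, a deterministic firewall policy is essentially an ordered list of static rules with a default-deny fallback. The sketch below is purely illustrative (the rules and field names are invented, not any vendor's format):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    direction: str  # "inbound" or "outbound"
    port: int
    protocol: str   # e.g. "tcp", "udp"

# Ordered, fully deterministic rules: first match wins, default-deny.
RULES = [
    ("allow", {"direction": "outbound"}),
    ("allow", {"direction": "inbound", "port": 443, "protocol": "tcp"}),
    ("deny",  {"direction": "inbound"}),
]

def filter_packet(pkt: Packet) -> str:
    """Return the action of the first rule whose fields all match."""
    for action, match in RULES:
        if all(getattr(pkt, field) == value for field, value in match.items()):
            return action
    return "deny"  # default-deny when no rule matches
```

Every decision here is fixed in advance; nothing adapts to what traffic actually looks like, which is exactly the limitation the second wave addresses.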

The second wave of solutions consists of “fuzzy” rules and heuristics. We could perhaps mark its beginning with the first Intrusion Detection System (IDS). These solutions employ ML algorithms to spot anomalies and detect malicious activity. In fact, most contemporary cybersecurity vendors take pride in how their solutions utilize ML.
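The core idea of anomaly-based detection can be sketched in a few lines. This is a deliberately minimal stand-in for a real IDS, assuming a simple z-score test over a single metric (hourly login counts, invented for illustration) rather than the richer models real products use:

```python
import statistics

def detect_anomalies(values, threshold=3.0):
    """Flag indices whose value lies more than `threshold`
    standard deviations from the mean of the series."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hourly login counts for one account; the spike at index 10 is flagged.
logins = [4, 5, 3, 6, 4, 5, 4, 6, 3, 5, 120]
anomalies = detect_anomalies(logins)  # -> [10]
```

Note what this detector actually learns: "unusual," not "malicious." That gap is the source of the false-positive problem discussed later in the article.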

Fraud analytics, web gateways, endpoint protection solutions and network sniffers all utilize ML in their offerings. In a world of ever-increasing cyber threats, this makes sense: the growing variety of protocols and machines, combined with the myriad of attack vectors, makes simple static detection solutions inadequate. Indeed, it is hard to believe a static rule-based solution could stop a targeted attack or an APT.

I believe the cybersecurity field is now open to a third wave of solutions: ML-backed software that not only detects security issues but proactively acts to fix them. These solutions are perhaps the most exciting. As software takes an active role in cyber incident response, and machines do more than just broaden coverage of the attack surface, a huge burden is lifted from the organization's security experts.

A few benefits of ML-backed cybersecurity automated response solutions include:

Closing the cybersecurity skills gap

Recent publications have shown a severe shortage of cybersecurity skills, one that is only expected to grow over the coming years. A classic example of how automated response can mitigate this shortage is an automated, ML-backed account-lockout resolution solution. It was recently published that help-desk personnel spend more than 20 percent of their time resolving account lockouts.
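What might automated lockout resolution look like? The sketch below is a hypothetical flow under simple assumptions (the device list, thresholds and outcome names are all invented for illustration; a real product would learn per-user behavior rather than use static tables):

```python
from dataclasses import dataclass

@dataclass
class LockoutEvent:
    user: str
    source_device: str
    failed_attempts: int

# Illustrative policy data; a real solution would learn this per user.
KNOWN_DEVICES = {"alice": {"alice-laptop", "alice-phone"}}
MAX_BENIGN_ATTEMPTS = 10

def resolve_lockout(event: LockoutEvent) -> str:
    """Decide how to handle an account lockout without a help-desk call."""
    known = event.source_device in KNOWN_DEVICES.get(event.user, set())
    if known and event.failed_attempts <= MAX_BENIGN_ATTEMPTS:
        # Looks like a stale cached password on the user's own device:
        # unlock after a step-up check (e.g. a push notification).
        return "unlock_after_mfa"
    # Unknown device or excessive attempts could be password guessing,
    # so keep the account locked and alert an analyst.
    return "escalate_to_analyst"
```

The point is that the common, benign case resolves itself, and only the suspicious minority of lockouts ever reaches a human.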

Providing 24x7 security

Machines don’t go to sleep or take long vacations. Delegating security response to regularly updated automated solutions enables more comprehensive protection against the ever-changing landscape of cyber threats.

Reducing human error

Humans are far from perfect, and the human factor in cybersecurity is ever present. It manifests in weak passwords, oversights and even malicious insider misuse. Automated solutions are far less prone to such risks.

With that said, it’s important to keep in mind that incident response can be trickier than simple detection for a few reasons:

False positives

Anyone who has ever dealt with ML knows about false positives. With solutions that need to act – and not just alert – the error rate of detection must be reduced to a minimum.
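One common way to manage this tension, sketched here as an illustration rather than a prescription, is to demand a much higher model confidence for autonomous action than for a mere alert (the threshold values are invented):

```python
# Illustrative thresholds: act autonomously only on high-confidence
# detections; lower-confidence ones just raise an alert for a human.
ALERT_THRESHOLD = 0.5
ACTION_THRESHOLD = 0.95

def triage(score: float) -> str:
    """Map a detector's confidence score to a response mode."""
    if score >= ACTION_THRESHOLD:
        return "act"    # e.g. block the session automatically
    if score >= ALERT_THRESHOLD:
        return "alert"  # surface to an analyst, take no action
    return "ignore"
```

The gap between the two thresholds is where detection and response diverge: a 70-percent-confident alert is useful, but a 70-percent-confident automatic block is a business-continuity incident waiting to happen.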

Breadth of actions

In a way, most autonomous machines deal with small, well-defined problems. In cyber incident response, the breadth of possible actions is huge. To name a few, responses include blocking a user or machine, re-authenticating a user, patching software, terminating employment or cooperating with law-enforcement agencies. While it is clear that machines will not terminate employees or call 911 of their own volition in the next five years, even simpler actions like patching software or re-authenticating users run the risk of impeding business continuity.
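One way to tame this breadth, offered here as an illustrative sketch rather than any product's design, is a risk-tiered action catalog in which disruptive actions always require human approval (the catalog entries echo the examples above):

```python
# Each response action is tagged with whether a machine may take it alone.
# The tiers are illustrative, mirroring the spectrum of actions above.
ACTION_CATALOG = {
    "reauthenticate_user":  {"autonomous": True},
    "block_machine":        {"autonomous": True},
    "patch_software":       {"autonomous": False},  # may impact uptime
    "terminate_employment": {"autonomous": False},  # always a human call
}

def dispatch(action: str) -> str:
    """Execute low-risk actions; route high-risk ones to a human."""
    meta = ACTION_CATALOG.get(action)
    if meta is None:
        return "unknown_action"
    return "execute" if meta["autonomous"] else "request_human_approval"
```

Such a catalog makes the machine's autonomy an explicit, auditable policy decision instead of an emergent property of the software.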


For a solution to gain popularity and traction, a key ingredient is trust, and trust is very delicate: CISOs must trust the actions taken by machines. Breaking that trust down into components, this means:

  • Avoiding potential manipulation – Like any software, cyber incident response solutions can be a target for attacks. A solution must be able to accurately report its current status and to allow forensic investigation of raw actions when needed.
  • Learning the right lesson – A cybersecurity ML solution often learns a different signal than an actual attack. For instance, an anomaly detection algorithm detects anomalous activity, which doesn’t always equate to malicious activity. Examples of similar ML flops in other areas are Microsoft’s racist chatbot and Google’s “Gorillas” annotation of an African-American couple.
  • Preventing skills atrophy – Security professionals in the organization must be able to understand each action a machine takes, and remain capable of performing those actions themselves.
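The first component above, accurate status reporting and forensic investigation of raw actions, implies a tamper-evident audit trail. A minimal sketch of one, assuming a simple hash chain over log entries (the design is illustrative, not drawn from any specific product):

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry's digest covers its predecessor,
    so after-the-fact tampering is detectable in a forensic review."""

    def __init__(self):
        self.entries = []

    def record(self, action: dict) -> None:
        prev = self.entries[-1]["digest"] if self.entries else "genesis"
        payload = json.dumps(action, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"action": action, "digest": digest})

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later digest."""
        prev = "genesis"
        for entry in self.entries:
            payload = json.dumps(entry["action"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True
```

A CISO (or an attacker) who modifies a past entry changes its digest and breaks the chain, which is precisely the kind of verifiable self-reporting that underpins trust in an autonomous responder.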

To sum up, weighing the challenges against the opportunities, ML in cyber incident response looks set to be a big trend. Even though challenges remain, I believe the third wave of cybersecurity software evolution, ML-backed cyber incident response, is just around the corner.

(About the author: Yaron Ziner is a senior researcher at Preempt, a cybersecurity company that couples User and Entity Behavior Analysis (UEBA) with Adaptive Response to protect enterprises from security breaches and malicious insiders in real-time. Previously, Yaron held software engineering roles at Google, Microsoft and the Israel Defense Forces. He is based in Israel.)