Very useful thoughts, but is it enough?
Distinguishing AI Hype From Reality in SecOps
AI and ML are important SecOps tools, but human involvement is still required.
Nash Borges, VP of Engineering and Data Science, Secureworks, June 01, 2022
Artificial intelligence (AI) can be used to enhance the efficiency and scale of SecOps teams, but it will not meet all of your cybersecurity needs without some human involvement, at least not today.
Most commercial AI successes have come from supervised machine learning (ML) techniques tuned for specific prediction tasks that yield business value. These use cases, such as spoken language understanding for your smart-home assistant and object recognition for self-driving cars, rely on the vast amounts of labeled data and computation required to train complex deep learning models. They also focus on problems that change slowly, if at all. Cybersecurity is different: we rarely have the millions of examples of malicious activity needed to train deep learning models, and we face intelligent adversaries who frequently change their tactics to outmaneuver our latest detection capabilities, including those using ML.
In addition, the digital exhaust of human behavior in enterprise environments is extremely hard to predict. Anomalies in these systems are common, and they very rarely represent malicious threat actor behavior. It is therefore unreasonable to expect unsupervised anomaly detection to learn an enterprise environment's normal behavior and generate meaningful alerts on malicious activity without also raising false alarms on unusual but benign events.
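As a toy illustration of that failure mode, consider a one-feature model of login times: an unsupervised detector such as scikit-learn's IsolationForest will flag a 3 a.m. admin login as anomalous even though nothing about it is malicious. The data and the single feature here are invented for the sketch.

```python
# Sketch: unsupervised anomaly detection on simulated "login hour" telemetry.
# All data is synthetic; real features would come from enterprise logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal behavior: logins clustered around business hours.
business_hours = rng.normal(loc=13, scale=2, size=(10_000, 1))

# A benign anomaly: an admin patching servers at 3 a.m. is unusual, not malicious.
benign_outlier = np.array([[3.0]])

model = IsolationForest(contamination=0.001, random_state=42)
model.fit(business_hours)

# The detector flags the 3 a.m. login as anomalous (-1); nothing in the
# unlabeled data lets it distinguish "unusual" from "malicious".
print(model.predict(benign_outlier))  # expected: [-1]
```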
Finally, the degree of class imbalance in threat detection is unlike most other ML use cases. Imagine for a moment that you are a midsize-to-large enterprise collecting 1 billion potentially security-relevant telemetry events per day and expecting to find one incident worth seriously investigating. Nobody wants to lose the ransomware lottery: missing that one security incident can grind the business to a halt, with the potential for even worse reputational damage. But if you build an ML-based threat detector that processes each event in isolation and is 99.9% accurate, you would be searching for that one true positive in a sea of 1 million false positives. Conquering this imbalance requires significant expertise and a multipronged detection strategy.
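The arithmetic behind that sea of false positives is worth making explicit. This back-of-the-envelope sketch assumes the detector's 0.1% error rate shows up as false positives on benign events:

```python
# Base-rate math for the scenario above: 1 billion events/day, one true
# incident, and a per-event detector that is "99.9% accurate".
events_per_day = 1_000_000_000
true_incidents = 1
error_rate = 0.001  # a 99.9%-accurate detector is wrong on 0.1% of events

false_positives = (events_per_day - true_incidents) * error_rate
precision = true_incidents / (true_incidents + false_positives)

print(f"False positives per day: {false_positives:,.0f}")  # ~1,000,000
print(f"Alert precision: {precision:.8f}")                 # ~0.000001
```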
Despite these challenges, there are ways for SecOps teams to leverage the technical power of AI/ML to gain operational efficiencies. The following principles should be considered when doing so.
1. Symbiotic Humans and Machines Work Better Together
Consider ML a complement to human intelligence rather than a substitute for it. In complex systems, and especially against intelligent adversaries that adapt quickly, automation delivers the greatest value with active learning at its core: humans regularly review the results of ML-based systems, provide feedback, add examples of new malicious behaviors, retune the models, and constantly iterate. Anyone who has faced an intelligent adversary, whether in cyberspace or in combat, should be familiar with the OODA loop (observe, orient, decide, act) developed by US Air Force Colonel John Boyd. Active learning works much the same way: it ensures that automated decisions in each loop use the best available insights, makes the most of the manual analysis performed in some loops, and scales the team to process more loops than would be humanly possible.
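A minimal sketch of what such a loop might look like in code, using uncertainty sampling to route the least-confident events to an analyst for labeling and then retraining. The data, model choice, and the analyst_labels stand-in are all hypothetical:

```python
# Human-in-the-loop active learning cycle, loosely mirroring the OODA loop:
# the model triages events, an analyst labels the most uncertain ones, and
# the model is retrained on the accumulated feedback.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Seed set: a few labeled events (1 = malicious, 0 = benign).
X_labeled = rng.normal(size=(20, 5))
y_labeled = rng.integers(0, 2, size=20)

# Large pool of unlabeled telemetry.
X_pool = rng.normal(size=(5_000, 5))

def analyst_labels(events):
    """Placeholder for manual review; a real system would queue these events."""
    return rng.integers(0, 2, size=len(events))

model = LogisticRegression(max_iter=1000)
for cycle in range(5):
    model.fit(X_labeled, y_labeled)              # retrain on all feedback so far
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)            # least-confident events score lowest
    ask = np.argsort(uncertainty)[:10]           # route 10 events to the analyst
    X_new, y_new = X_pool[ask], analyst_labels(X_pool[ask])
    X_labeled = np.vstack([X_labeled, X_new])
    y_labeled = np.concatenate([y_labeled, y_new])
    X_pool = np.delete(X_pool, ask, axis=0)      # remove reviewed events from the pool
```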
2. Pick the Right Tool for the Job
You do not have to become an AI expert to make good AI-related decisions for your team, but you should be reasonably informed about the basics to ensure that you are picking the right tool for the job.
First, it is important to know the difference between anomalous and malicious behavior, because the two rarely coincide and demand very different detection techniques. The former is easy to discover with unsupervised anomaly detection, which needs no labeled training data; the latter calls for supervised learning, which typically requires many historical examples.
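A short sketch of the distinction on the same synthetic data: the unsupervised detector never sees labels and can only score rarity, while the supervised classifier needs labeled history to score maliciousness. The features and label rule are invented for illustration.

```python
# Contrasting the two tool families on one simulated telemetry set.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 4))      # simulated event features
y = (X[:, 0] > 1.5).astype(int)      # invented label: 1 = malicious

# Anomalous: no labels used; lower scores mean rarer, not more malicious.
anomaly_scores = IsolationForest(random_state=1).fit(X).score_samples(X)

# Malicious: labels required; the model learns what past attacks looked like.
clf = RandomForestClassifier(random_state=1).fit(X, y)
malicious_probs = clf.predict_proba(X)[:, 1]

print(f"most anomalous score: {anomaly_scores.min():.2f}")
print(f"events with P(malicious) > 0.9: {(malicious_probs > 0.9).sum()}")
```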
Second, alerts with a high signal-to-noise ratio are critical for SecOps teams, so you need to fully understand the downstream effects of any probabilistic system that will not be 100% accurate.
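One way to understand those downstream effects before deployment is to tabulate alert volume and precision at candidate score thresholds on a labeled evaluation set. The scores and base rate below are simulated stand-ins for a real model's output:

```python
# Estimate analyst workload and precision at each candidate alert threshold.
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.random(100_000) < 0.001            # ~0.1% malicious base rate
scores = rng.random(100_000) + y_true * 0.7     # malicious scores skew high

for t in (0.5, 0.9, 0.99, 1.2):                 # candidate alert thresholds
    alerts = scores >= t
    n = alerts.sum()
    prec = (y_true & alerts).sum() / max(n, 1)
    print(f"threshold={t:.2f}  alerts={n:6d}  precision={prec:.3f}")
```

Raising the threshold shrinks the alert queue and raises precision, at the cost of recall; the point is to make that trade-off visible before analysts are drowning in it.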
Finally, while nearly every ML technique has been applied to cybersecurity, it is still important to have thousands of signatures from threat intelligence operating like a minefield of trip wires. When constantly tuned by an expert team of security researchers, signatures provide a critical baseline for detecting known threats and need to be part of every security program for the foreseeable future. ...
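At its core, signature matching is simple pattern evaluation over raw events, which is what makes it fast and dependable for known threats. A toy sketch, with invented indicators rather than real threat intelligence:

```python
# Minimal signature-style detection: a curated set of known-bad patterns
# checked against each raw event string. Indicators are placeholders.
import re

SIGNATURES = {
    "suspicious_powershell": re.compile(r"powershell.+-enc(odedcommand)?\s", re.I),
    "known_bad_domain": re.compile(r"evil-c2\.example\.com", re.I),
}

def match_signatures(event: str) -> list[str]:
    """Return the names of all signatures that fire on a raw event string."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(event)]

print(match_signatures("powershell.exe -enc SQBFAFgA ..."))
# expected: ['suspicious_powershell']
```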