The Promise of AI
Artificial intelligence (AI) and machine learning (ML) are transforming the cybersecurity landscape. These technologies, characterised by their ability to learn from data and predict outcomes, offer vast potential for combating cyber threats. Nevertheless, they also introduce new vulnerabilities that security leaders need to address.
AI holds immense promise for enhancing productivity, efficiency, and innovation. Machine learning algorithms and predictive analytics can transform enormous data volumes into actionable insights, improving decision-making across various sectors. However, as pointed out by Brundage et al. (2018), these technologies pose significant cybersecurity challenges. Each networked device represents a potential entry point for cyber threats, and a single compromised device can allow attackers to infiltrate an entire network. Moreover, unsecured AI technologies can be exploited to execute sophisticated cyber-attacks, or hijacked to make those attacks more efficient and more damaging.
Building Robust Cybersecurity Strategies
Despite these threats, strategies exist to mitigate the risks and harness the opportunities offered by AI and ML (Buczak & Guven, 2016). Several AI-enabled cybersecurity tools on the market can rapidly analyse large data sets, identify anomalies, and predict potential threats. Moreover, they can respond to security breaches faster than ever before.
Darktrace, for instance, offers an AI-driven solution known as the Enterprise Immune System, which uses ML to detect, investigate, and respond to threats within IT infrastructures. The Falcon platform by CrowdStrike utilises AI for endpoint protection, detecting and preventing breaches before they can cause significant damage. Another example, IBM’s Watson for Cyber Security, employs cognitive technology to analyse unstructured data, helping to identify potential security threats.
Securing IoT devices is another crucial part of a robust cybersecurity strategy. Applying strong controls to every IoT device, such as strong, unique passwords, two-factor authentication, regular firmware updates, and isolation on a dedicated network segment, significantly reduces the attack surface.
Training employees is another critical strategy. Studies show that human error is often a significant factor in security breaches (Pfleeger & Caputo, 2012), highlighting the importance of regular training sessions to help teams understand the latest threats and the best practices to mitigate them.
The vast amounts of data generated by AI and IoT technologies also raise privacy concerns. ‘Privacy by design’ principles (Cavoukian, 2010) help ensure that privacy protections are embedded into the design and operation of AI and IoT systems.
Developing an incident response plan is equally essential; it helps organisations react swiftly and effectively when a breach occurs, mitigating its impact.
While AI and ML can greatly enhance cybersecurity, they have potential pitfalls. Over-reliance on these technologies can lead to complacency, false positives and negatives, and a lack of transparency (the “black box” problem) (Castelvecchi, 2016). Additionally, cybercriminals can exploit vulnerabilities in AI and ML systems through adversarial attacks, manipulating input data to confuse the models.
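To make the adversarial-attack risk concrete, here is a minimal sketch using invented “network flow” features and scikit-learn. It shows how small, deliberate changes to input data can push a malicious sample across a model’s decision boundary; it is purely illustrative, not a real attack against any particular product.

```python
# Minimal sketch of an adversarial-style evasion against a linear model.
# The toy data and step size are invented for illustration; scikit-learn assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "network flow" features: [bytes_sent, failed_logins]
benign = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(200, 2))
malicious = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# Take one correctly flagged malicious sample and nudge it along the
# direction that most reduces the model's "malicious" score.
x = malicious[0].copy()
direction = -clf.coef_[0] / np.linalg.norm(clf.coef_[0])
step = 0.1
while clf.predict([x])[0] == 1:
    x += step * direction  # small perturbation per iteration

print("Original prediction:", clf.predict([malicious[0]])[0])  # 1 = malicious
print("Perturbed prediction:", clf.predict([x])[0])            # 0 = evades detection
print("Total change applied:", np.round(x - malicious[0], 2))
```

Even a toy linear model like this shows why defences such as input validation and adversarial training matter.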
Despite these challenges, AI and ML have shown significant effectiveness in threat detection and response, particularly in identifying and mitigating phishing attacks. Companies like Cofense have effectively utilised ML for this purpose, demonstrating that these technologies can review and analyse far more emails in a day than a team of human analysts could, thereby increasing efficiency and the likelihood of catching threats before they can inflict damage.
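As a hedged illustration of the general ML approach (not Cofense’s actual product), the sketch below trains a simple text classifier on a handful of made-up emails. Real deployments would need large, labelled corpora and far more careful feature engineering.

```python
# Toy phishing classifier: TF-IDF features + logistic regression.
# The emails and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "Click here to claim your prize and enter your card number",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review before Friday",
    "Lunch on Thursday? The usual place works for me",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

new_email = "Please verify your password now to keep your account active"
proba = model.predict_proba([new_email])[0][1]
print(f"Phishing probability: {proba:.2f}")
```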
Where can your team start?
To fully embrace AI in cybersecurity, teams should consider the following steps:
1. Education and Training:
Firstly, understanding AI and ML is paramount. Cybersecurity teams should invest in training and educational resources to familiarise themselves with both the basics and the advanced concepts of AI and ML. A good understanding of AI/ML is vital for integrating these technologies into cybersecurity practices.
2. Identify Use Cases:
Teams should identify specific areas where AI can be beneficial. This could include threat detection, fraud detection, spam filter applications, or intrusion detection systems. Once these use cases have been identified, teams can set goals and objectives for AI integration (Jordan & Mitchell, 2015).
3. Invest in AI-Based Cybersecurity Tools:
There are numerous AI-based cybersecurity tools available on the market. These tools can automate repetitive tasks, detect patterns in vast datasets, and respond to cyber threats in real time. Teams should invest in such tools based on the identified use cases.
4. Incorporate AI in Risk Assessment:
AI can be used to enhance risk assessment capabilities. For example, machine learning algorithms can analyse large volumes of data and identify patterns that might indicate a potential risk. This helps in taking proactive measures to mitigate such risks.
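As a rough sketch of what this could look like, the example below trains a classifier on invented historical event records and scores new events by risk. The feature names, data, and model choice are assumptions for illustration only, not a recommendation of a specific approach.

```python
# Sketch: scoring events for risk with a model trained on historical data.
# Features, labels, and values are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Historical events: [failed_logins, mb_transferred, off_hours (0/1)]
X_hist = np.array([
    [0, 5, 0], [1, 12, 0], [0, 8, 1], [2, 20, 0],          # past benign events
    [9, 850, 1], [12, 400, 1], [7, 950, 0], [15, 600, 1],  # past incidents
])
y_hist = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 1 = led to an incident

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hist, y_hist)

# Score today's events and surface the riskiest ones for review.
X_new = np.array([[1, 10, 0], [11, 700, 1], [3, 90, 1]])
risk = model.predict_proba(X_new)[:, 1]
for event, score in zip(X_new, risk):
    print(f"event={event.tolist()} risk_score={score:.2f}")
```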
5. Utilise AI for Predictive Analysis:
Cybersecurity teams can also use AI for predictive analysis. AI can analyse past data and predict future threats or vulnerabilities. This can provide teams with valuable insights and help them stay a step ahead of cyber threats.
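A very small sketch of the idea: an assumed daily count of blocked phishing attempts, with a plain linear trend projecting next week’s volume. Real predictive models would use richer features and proper time-series methods; the numbers here are invented.

```python
# Sketch: projecting next week's alert volume from past daily counts.
# The counts are invented; a linear trend stands in for a real model.
import numpy as np
from sklearn.linear_model import LinearRegression

daily_alerts = np.array([110, 124, 118, 131, 140, 137, 152,
                         149, 160, 158, 171, 169, 180, 188])  # last 14 days
days = np.arange(len(daily_alerts)).reshape(-1, 1)

trend = LinearRegression().fit(days, daily_alerts)

next_week = np.arange(len(daily_alerts), len(daily_alerts) + 7).reshape(-1, 1)
forecast = trend.predict(next_week)
print("Forecast for the next 7 days:", np.round(forecast).astype(int))
```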
6. Understand the Limitations and Ethical Considerations:
While AI offers many advantages, it’s also essential to understand its limitations. For example, AI systems depend on the quality of the data they are trained on, and they can make mistakes, such as producing false positives; in fact, the first idea I explored was using AI to help reduce false positives in a SOC environment. Cybersecurity teams need to be aware of these limitations and use AI as a tool to supplement, not replace, human judgment.
Moreover, with the use of AI, ethical considerations such as privacy and data protection become more prominent. Teams should ensure that the use of AI complies with all relevant laws and regulations, and respects user privacy.
7. Develop AI-Powered Incident Response Plans:
AI can significantly enhance an organisation’s incident response capability. An AI-powered incident response plan would use machine learning to predict threats, automate responses, and learn from every incident. Implementing AI in this way could greatly increase the speed and effectiveness of a cybersecurity team’s response to an incident.
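As a hedged sketch of what “automated response” might look like in practice, the example below gates a containment action on a model’s anomaly score. The score threshold is an assumption, and isolate_host is a hypothetical placeholder, not a real API.

```python
# Sketch: gating an automated containment action on a model's anomaly score.
# isolate_host() is a hypothetical placeholder; wire it up to your own
# EDR / firewall / SOAR tooling, and feed scores from your detection model.
from dataclasses import dataclass

ANOMALY_THRESHOLD = 0.8  # assumed threshold; tune against your own data

@dataclass
class Event:
    host: str
    source_ip: str
    anomaly_score: float  # produced upstream by a trained model

def isolate_host(host: str) -> None:
    # Placeholder: call your orchestration API here.
    print(f"[response] isolating host {host} pending analyst review")

def handle(event: Event) -> None:
    if event.anomaly_score >= ANOMALY_THRESHOLD:
        isolate_host(event.host)
    else:
        print(f"[triage] {event.host} score={event.anomaly_score:.2f}, logged only")

for e in [Event("web-01", "203.0.113.7", 0.93), Event("db-02", "198.51.100.4", 0.41)]:
    handle(e)
```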
What’s next?
At the time of writing, I am experimenting in each of these areas, training models on large samples of automated AWS and Azure log data in test accounts to tune the parameters of the IsolationForest model and the thresholds for anomaly scores. I am using Python to keep things as simple as possible for most people to pick up the code and run it. Once I have a good working model, I will share it via my GitHub repository, and I will post a more detailed walkthrough here on the blog over the coming months.
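As a preview of that work, here is a minimal sketch of the approach: an IsolationForest fitted on numeric features extracted from log events, with a tunable contamination parameter and a threshold on the anomaly score. The feature names and sample values are placeholders for the AWS/Azure log fields I am experimenting with, not the final code.

```python
# Minimal sketch of the IsolationForest experiment described above.
# Feature names and sample values are placeholders for fields extracted from
# AWS/Azure logs; contamination and the score threshold both need tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Stand-in features per log event: [api_calls_per_min, distinct_regions, failed_auths]
normal = rng.normal(loc=[20, 1, 0], scale=[5, 0.5, 0.5], size=(500, 3))
suspicious = rng.normal(loc=[300, 6, 12], scale=[50, 1, 3], size=(5, 3))
X = np.vstack([normal, suspicious])

model = IsolationForest(
    n_estimators=200,
    contamination=0.01,   # rough guess at the anomaly rate; tune this
    random_state=0,
).fit(X)

scores = model.decision_function(X)    # lower = more anomalous
threshold = np.quantile(scores, 0.01)  # flag the most anomalous ~1%

flagged = np.where(scores <= threshold)[0]
print(f"Flagged {len(flagged)} of {len(X)} events for review: indices {flagged.tolist()}")
```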
References used in research:
- Brundage, M., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228.
- Buczak, A. L., & Guven, E. (2016). A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials, 18(2), 1153-1176.
- Castelvecchi, D. (2016). Can we open the black box of AI?. Nature News, 538(7623), 20-23.
- Brożek, B., Furman, M., Jakubiec, M., et al. (2023). The black box problem revisited: Real and imaginary challenges for automated legal decision making. Artificial Intelligence and Law.
- Cavoukian, A. (2010). Privacy by Design: The 7 Foundational Principles. Information and Privacy Commissioner of Ontario, Canada.
- Eskin, E., Arnold, A., Prerau, M., Portnoy, L., & Stolfo, S. (2002). A geometric framework for unsupervised anomaly detection: Detecting intrusions in unlabeled data. In Applications of Data Mining in Computer Security (pp. 77-101). Springer, Boston, MA.
- Ferdowsi, A., & Saad, W. (2020). Deep learning-based anomaly detection in cyber physical systems: A survey on the theoretical foundations and applications. IEEE Communications Surveys & Tutorials.
- Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.
- Pfleeger, S. L., & Caputo, D. D. (2012). Leveraging behavioral science to mitigate cybersecurity risk. Computers & Security, 31(4), 597-611.
- Powers, S. S., et al. (2018). The Cybersecurity Canon: Incident Response & Computer Forensics (2014). Palo Alto Networks.
- Roman, R., Zhou, J., & Lopez, J. (2013). On the features and challenges of security and privacy in distributed internet of things. Computer Networks, 57(10), 2266-2279.