AI to ease the workload of cybersecurity teams
We believe that the use of AI has the potential to become a critical solution in IT security by helping to detect cyberthreats and shorten response times, thus acting as an “assistant” to IT security analysts. According to Acumen Research & Consulting, the market for AI in cybersecurity was worth USD 14.9 billion in 2021 and is estimated to reach USD 133.8 billion by 2030, representing a compound annual growth rate (CAGR) of 27.8%. This trend is powered by the surging use of social media for business operations, growing government investments in AI adoption, and technological advancements in security systems to combat increasingly sophisticated cyberattacks.12
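As a rough consistency check on those figures (assuming the 2021 value as the base and nine compounding periods to 2030), the implied growth rate works out as:

```latex
% assumes nine compounding periods between 2021 and 2030
\mathrm{CAGR} = \left(\frac{133.8}{14.9}\right)^{1/9} - 1 \approx 0.276
```

i.e. roughly 27.6%, broadly in line with the reported 27.8% once rounding of the underlying market estimates is taken into account.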
The idea behind AI in IT security is to use AI-enabled software to augment human expertise in rapidly identifying new types of malware traffic or hacking attempts. Thanks to recent advances in computing power, AI in IT security is becoming a reality even with comparatively small datasets. AI solutions can ease the workload of cybersecurity teams and effectively weed out false positives by quickly drawing correlations and insights from vast datasets across assets. They can further automate low-value tasks and allow IT security teams to focus on higher-priority threats.
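As a minimal illustration of the kind of triage described above, the sketch below uses scikit-learn's IsolationForest to flag anomalous network-flow records so that only unusual activity reaches an analyst. The feature names, sample data, and threshold are hypothetical and stand in for whatever telemetry a real security stack would provide; this is not a reference to any specific vendor's product.

```python
# Illustrative sketch only: anomaly detection over hypothetical network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_seconds]
normal_flows = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 5_000, 10], size=(500, 3))
suspicious_flows = np.array([[900_000, 1_000, 2],   # exfiltration-like upload
                             [50, 750_000, 1]])     # unusually large inbound transfer

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# Scores below zero are treated as anomalies; an analyst reviews only these,
# which is how such tooling trims the volume of alerts reaching the team.
for flow, score in zip(suspicious_flows, model.decision_function(suspicious_flows)):
    label = "ANOMALY" if score < 0 else "normal"
    print(f"{flow} -> score={score:.3f} ({label})")
```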
According to a publication by the IBM Institute for Business Value, AI is already reducing the costs of cybersecurity responses.13
- Companies at the forefront of AI adoption have reported a 15% reduction in overall cybersecurity costs.
- The average cost of a data breach can be reduced by over USD 3 million.
- AI has the potential to improve incident response times. Historically, it took an average of 230 days to detect, respond to, and recover from a cyberattack. AI implementation can cut that time by up to 99 days.
Historically, cybersecurity tools were designed to look at a specific domain and resolve threats under a particular scenario. However, the increasing sophistication of cyberattacks demands unified solutions. While AI is not new to security use cases such as anomaly detection, we think generative AI (GAI) is a step-function improvement, given its ability to generate recommendations and automate manual, ad hoc tasks previously performed by IT security professionals. It enables the aggregation and correlation of data across the many isolated products that make up an organization's security stack. IT security teams are then able to strengthen their defenses by identifying patterns and connections across business verticals and locations that humans find difficult to detect.
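To make the aggregation-and-correlation idea concrete, here is a simplified sketch: it merges alerts from hypothetical, separate tools (an email gateway, an endpoint agent, a firewall) and groups them by a shared indicator so that related events surface together as one potential incident. The sources, field names, and threshold are illustrative assumptions, not drawn from any particular product.

```python
# Illustrative sketch: correlating alerts from separate security tools by a shared
# indicator (here, the host involved). All sources and fields are hypothetical.
from collections import defaultdict
from typing import Dict, List

# Alerts as they might arrive from isolated products in a security stack
alerts: List[Dict[str, str]] = [
    {"source": "email_gateway",  "host": "finance-laptop-07", "detail": "phishing link clicked"},
    {"source": "endpoint_agent", "host": "finance-laptop-07", "detail": "unsigned binary executed"},
    {"source": "firewall",       "host": "finance-laptop-07", "detail": "outbound traffic to rare domain"},
    {"source": "endpoint_agent", "host": "hr-desktop-12",     "detail": "blocked macro"},
]

# Group by host so that signals that look low priority in isolation
# can be read together as a possible multi-stage attack.
by_host: Dict[str, List[Dict[str, str]]] = defaultdict(list)
for alert in alerts:
    by_host[alert["host"]].append(alert)

for host, related in by_host.items():
    if len(related) >= 3:  # arbitrary illustrative threshold
        print(f"Possible multi-stage incident on {host}:")
        for alert in related:
            print(f"  [{alert['source']}] {alert['detail']}")
```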
A recent report published by the Cloud Security Alliance (CSA) finds that GAI models substantially improve vulnerability scanning: OpenAI's Codex platform, built on the same family of models as ChatGPT, was able to scan software code written in various programming languages and detect vulnerabilities. According to the CSA, this technology could become an integral component of IT security responses. Interestingly, the report also remarks that GAI is able to detect and watermark AI-generated text. This could improve the detection of phishing emails and become part of email protection software, which could check for unusual email sender addresses, domains, or links to malicious websites.14
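The email checks mentioned above could be as simple as rule-based heuristics layered beneath a model. The sketch below shows only that rule-based part (sender-domain and link checks); the blocklist, sender address, and message body are hypothetical placeholders standing in for real threat-intelligence feeds and mail traffic.

```python
# Illustrative sketch: simple heuristics of the kind email protection software can layer
# beneath ML models. The blocklist and the sample message are hypothetical.
import re
from typing import List

SUSPICIOUS_DOMAINS = {"examp1e-login.com", "secure-update.xyz"}  # hypothetical blocklist

def find_suspicious_signals(sender: str, body: str) -> List[str]:
    signals = []
    # Unusual sender address: domain that appears on the blocklist
    sender_domain = sender.split("@")[-1].lower()
    if sender_domain in SUSPICIOUS_DOMAINS:
        signals.append(f"sender domain on blocklist: {sender_domain}")
    # Links pointing at blocklisted domains
    for domain in re.findall(r"https?://([\w.-]+)", body):
        if domain.lower() in SUSPICIOUS_DOMAINS:
            signals.append(f"link to blocklisted domain: {domain}")
    return signals

if __name__ == "__main__":
    msg_sender = "it-support@examp1e-login.com"
    msg_body = "Please verify your account at https://secure-update.xyz/reset"
    for signal in find_suspicious_signals(msg_sender, msg_body):
        print("FLAG:", signal)
```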