AI agent in incident management: six months of practical experience
Security operations centers (SOCs) around the world face similar challenges: rising incident volumes, monotonous repetitive tasks, staff shortages, and growing expectations from senior management and regulators. Over the past six months, the SOC team at the SOCWISE InfoSec division has been testing how artificial intelligence can ease these burdens by developing the NetWitness AI analysis agent module.
What is a SOC, and how does it work?
The SOC is the organization's "cyber defense center," which focuses on the rapid detection and effective management of cybersecurity incidents. The goal: to minimize the impact on business operations.
The main SOC processes include:
- Incident Detection & Response (IDR): detecting and analyzing incidents, determining response measures, and coordinating and supporting their implementation, along with documentation, internal reporting, mandatory regulatory reporting (e.g., under NIS2), and customer communication.
- Vulnerability management: identifying, tracking, and remediating weaknesses and vulnerabilities.
- Threat Intelligence: continuous monitoring of external attack trends and waves, processing of relevant information.
- Threat Hunting: proactive search for hidden incidents.
These processes are typically performed by analysts working at multiple levels (L1–L3) and specialized roles (e.g., threat hunter, vulnerability analyst). The main challenge lies in the fact that the number and complexity of incidents are constantly increasing, while the tasks are often repetitive and time-consuming.
The role of the NetWitness AI agent
The NetWitness AI agent is not just an automation tool but a role-based virtual analyst that autonomously investigates alerts. Its tasks include:
- gathering context,
- building a timeline,
- formulating conclusions and recommended actions.
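The three steps above can be sketched as a minimal triage loop. Everything here is illustrative: the function names (`gather_context`, `build_timeline`, `formulate_verdict`) and the toy decision logic are assumptions for the sketch, not the NetWitness agent's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    id: str
    source: str
    events: list  # raw, unordered event dicts with a "ts" timestamp


def gather_context(alert):
    # Hypothetical enrichment: a real SOC agent would query log stores,
    # asset inventories, and threat-intelligence feeds here.
    return {"asset_criticality": "high" if alert.source == "dc01" else "normal"}


def build_timeline(alert):
    # Order the alert's raw events chronologically.
    return sorted(alert.events, key=lambda e: e["ts"])


def formulate_verdict(alert, context, timeline):
    # Toy decision logic standing in for the agent's reasoning step.
    suspicious = sum(1 for e in timeline if e.get("suspicious"))
    if suspicious and context["asset_criticality"] == "high":
        return "escalate: needs L2 investigation"
    return "close: benign activity"


def investigate(alert):
    # The agent's end-to-end pass: context -> timeline -> conclusion,
    # producing the structured report the human analyst receives.
    context = gather_context(alert)
    timeline = build_timeline(alert)
    verdict = formulate_verdict(alert, context, timeline)
    return {"alert": alert.id, "timeline": timeline, "verdict": verdict}
```

The point of the structure is that the human analyst consumes the returned report, not the raw `events` list.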
This means that human analysts no longer receive raw data but well-structured reports to interpret. The shift in focus is significant: manual data collection gives way to decision-making and strategic thinking.

Lessons learned from the first 2,600 incidents
Over the past six months, the AI agent has examined more than 2,600 incidents, which can be classified into a total of 62 different incident types. This broad spectrum clearly shows that analytical tasks are not limited to a single attack pattern, but range from the simplest rule-based alerts to complex behavioral analysis rules.
- Median human analysis time: 2.5 minutes/incident.
- Consistent performance: the AI operated continuously at the same level.
- Accuracy: in more than 90% of cases, human reviewers had only formal (presentation-level) observations.
- Content deficiencies: human analysts recorded critical errors in 1% of incidents and other content errors in 5%.
- Hallucinations: extremely rare (0.27%).
- Reliability across the 62 incident types:
  - Human analysts were stricter in 3 incident types, because they had more background information.
  - The AI, however, was stricter in 13 incident types: it flagged the need for further investigation sooner than its human colleagues.
Human analysts rigorously scored the analyses produced by AI, and the results showed that autonomous operation provides reliable and useful support. Furthermore, AI does not just produce "roughly good" analyses, but often takes a more conservative, cautious approach, which increases safety and reduces the risk of false negative decisions.
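To put the rates above into absolute terms, a quick back-of-the-envelope conversion over the 2,600 reviewed incidents (rounded to whole incidents; the rates are taken from the list above, the category labels are mine):

```python
total = 2600  # incidents reviewed over six months

rates = {
    "formal observations only": 0.90,
    "other content errors": 0.05,
    "critical errors": 0.01,
    "hallucinations": 0.0027,
}

# Convert each review rate into an approximate incident count.
counts = {name: round(total * rate) for name, rate in rates.items()}
```

For example, a 0.27% hallucination rate over 2,600 incidents corresponds to roughly 7 hallucinated analyses in six months.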

Why is this different from classic automation?
The key difference is flexibility. While traditional SOAR playbooks are rigid, step-by-step automations, AI agents are role-based, independently working "virtual analysts."
- Autonomous investigation: after receiving an alert, the agent collects context on its own, orders the related events chronologically, and draws conclusions.
- Human focus shift: professionals no longer start the process with manual data hunting but with interpreting a prepared report.
- Training function: structured reports have also proven useful in training junior analysts.
What benefits has the introduction brought?
- Speed – incident investigation time has been drastically reduced.
- Offloading – freed up capacity for processes such as threat hunting and threat intelligence.
- Reliability – AI is often more rigorous than human analysts, and control and decision-making remain in the hands of human analysts.
- Scalability – the growing volume of incidents can be managed without increasing human resources.
What can we expect in the future?
The AI agent ecosystem is constantly evolving: modules are already being developed to support threat hunting and threat intelligence processes. The number and complexity of cybersecurity threats are constantly growing, while the shortage of specialists is becoming a more pressing problem every year. Based on our experience, the SOC AI agent does not replace but rather reinforces the work of analysts:
- speeds up incident management,
- reduces the chance of errors,
- frees up specialists' time for higher value-added tasks.
Therefore, we advise our customers not to delay the introduction of artificial intelligence in cyber defense. We welcome applications from partners who would like to actively participate in the introduction of AI agents as early adopters. Early adopters will not only achieve immediate efficiency gains, but will also have the opportunity to directly influence the direction of development. Trying out the AI agent gives organizations the opportunity to stay one step ahead of attackers.


