New colleague in the SOC team: 3 months of experience with AI Analyst
Why was an AI analyst needed in the SOC?
One of the biggest challenges of running a SOC is managing the ever-increasing volume of data and the resulting volume of incidents. Analysts often struggle with burnout, as a significant part of their work is not analysis but repetitive administrative and coordination activities.
Although the technological background is constantly evolving - SIEM systems, behavioral analysis (UEBA), endpoint protection, etc. - these systems focus mostly on collecting as much data as possible and pushing as many alerts as possible into the SOC for investigation, rather than on helping to analyze and manage incidents. The introduction of the AI analyst was therefore aimed at increasing the efficiency of incident management.
Solution: customized Netwitness AI module
Generic LLMs typically do not understand SOC jargon or the organization's specific technological environment, and they cannot produce context-dependent reports and decision-support material.
The final choice was Netwitness' AI-based module, which offers the following benefits:
- Customizable prompt logic and tuning based on the organization's own SOC context.
- Integration with existing infrastructure such as Active Directory.
- Role-specific operation with visibility across the entire ecosystem - dedicated AI assistants supporting L1 and L2 analysts, threat hunters, SOC managers, and vulnerability managers.
- Continuous feedback to developers.

Working in practice: how did the AI analyst perform?
In the three months since its launch, the AI analyst:
- Analyzed 1,082 incidents and generated reports for them.
- The quality of the analyses was validated by human analysts.
- Hallucination rate: only 0.28%.
- Formatting errors (date/time) accounted for 86% of the errors.
- Content errors were negligible, stemming mainly from missing context or overinterpretation.

Time and cost efficiency
Average incident analysis times by source module:

| Source module | Avg. human analysis time with AI report | Avg. LLM cost per incident |
|---|---|---|
| Endpoint risk score module | 4 minutes | 37 cents |
| EDR (classic) | 5 minutes | 22 cents |
| Correlation engine | 3.5 minutes | 17 cents |
| UEBA behavioral analysis | 2.5 minutes | 48 cents |
| Average | 4.5 minutes | 33 cents |
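The per-incident figures above also allow a rough cost estimate for the whole period. A minimal sketch, assuming the reported blended average of 33 cents applies uniformly to all 1,082 incidents:

```python
# Back-of-the-envelope LLM cost for the three-month period.
# Assumption: the reported blended average of $0.33/incident
# applies uniformly to all 1,082 reported incidents.
incidents = 1082
avg_cost_usd = 0.33  # blended average LLM cost per incident

total_cost = incidents * avg_cost_usd
print(f"Estimated LLM cost over 3 months: ${total_cost:,.2f}")  # ≈ $357
```

At roughly $357 for three months of report generation, the LLM cost is marginal compared with the analyst time saved.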
Human vs AI analyst
Our colleagues were initially skeptical about the solution, but over the first few months they highlighted several benefits:
- Significant time savings - especially for correlation alerts.
- Automatic context analysis (e.g. identifying the real user behind a service user).
- A useful tool for training junior colleagues, who can learn independently from the AI-generated reports.
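The context-analysis step mentioned above (resolving the real user behind a service account) can be sketched as a simple enrichment lookup. This is a hypothetical illustration: in a real deployment the mapping would come from Active Directory, whereas here it is a hardcoded dictionary with made-up account names.

```python
# Hypothetical enrichment: map a service account seen in an alert
# to its human owner. The account names below are illustrative only;
# a real implementation would query Active Directory for this mapping.
SERVICE_OWNERS = {
    "svc_backup": "j.smith",
    "svc_sql": "a.kovacs",
}

def resolve_user(account: str) -> str:
    """Return the human owner of a service account, or the account itself."""
    return SERVICE_OWNERS.get(account, account)

print(resolve_user("svc_backup"))  # j.smith
print(resolve_user("alice"))      # alice (not a service account)
```

Folding this kind of lookup into the generated report is what saves the analyst a manual AD query per alert.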
Conclusions: AI and human collaboration in SOC
The experience of the first three months shows that AI is most useful not to replace human analysts, but to support and relieve them. The AI:
- Speeds up the process,
- Reduces repetitive, administrative burdens,
- Helps to transfer knowledge,
- Becomes more reliable through continuous feedback.
The use of an AI analyst in the SOC is not a vision, it is the present. And its greatest value is that it finally lets analysts do what they do best: analyze complex and critical incidents.


