From alert to verdict in a matter of minutes
The reality of SOCs today: a flood of alerts, increasingly complex infrastructure, constant delays – while attackers are already using AI to change their patterns more quickly and dynamically. In this environment, the most expensive resource is the time of an experienced analyst – and this is precisely what incident "data mining" consumes the most: gathering context, organizing events into a timeline, and finally deciding whether the attack is a false positive or real.
In this article, we present an approach that aims to take over low-value-added, repetitive manual tasks: the NetWitness AI Analyst agentic solution.
Automation so far: SOAR and "assistant" AI – what was missing?
The promise of SOAR systems was that playbooks could be used to automate a significant portion of security processes. In practice, however, many teams get stuck at this point:
- playbook development is time-consuming and resource-intensive.
- it is difficult to fully map the logic of "real-life" analysts into automated steps.
- the environment and attack patterns change faster than processes can be rewritten.
The other direction was built-in, promptable AI assistants. These are useful, but the bottleneck is often the same: humans.
In a complex task, the work process often proceeds in a "question-answer-question-answer" manner, where the analyst:
- figures out what to ask,
- interprets the answer,
- asks again, refines,
- finally puts the story together.
This slows down the investigation, and the quality of the result depends heavily on the analyst's prompting skills.
The next step: agentic AI alongside human roles
The essence of the agentic approach is that the system does not respond to individual questions, but solves end-to-end tasks – while maintaining human control throughout.
NetWitness' vision is an AI ecosystem in which agents can be trained for specific SOC roles:
- Analyst agent (AI Analyst)
- Threat Hunter agent
- CTI agent
- Vulnerability agent
- SOC Manager agent
The goal is not replacement, but rather the takeover of routine, time-consuming analytical preparatory tasks so that experts can focus on higher value-added decisions and developments. What's more, the agents cooperate: the output of one can be the input of another.

What is AI Analyst – and how does it relieve the analyst?
One of the most time-consuming tasks for SOC analysts is incident investigation.
A typical situation:
- the SIEM indicates that a machine communicated with an IP address classified as "malicious",
- however, the alert provides almost no context,
- the analyst begins mining data: extending the search in time and space, collecting events, and piecing together the timeline and cause and effect.
This can be a process that takes several hours—even if it turns out to be a false positive in the end.
The goal of AI Analyst: to perform this context gathering and collation on behalf of the analyst and to deliver decision support in the form of a structured report, which the SOC analyst can review or investigate further if necessary.
Key features: what makes AI Analyst an "analyst"?
1) Thinks in context (not in isolation)
AI Analyst does not stop at the events that the SIEM links directly to the incident:
- it expands the search in time and space,
- it collects from multiple sources:
  - log events
  - endpoint events
  - network events
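The steps above can be sketched as a simple merge across sources. This is a minimal illustration only: the source names, event shape, and the `fetch_events` helper are assumptions for the sketch, not part of the NetWitness product or API.

```python
from datetime import datetime, timedelta

# Illustrative sketch: widen the search window around an alert and merge
# events from several sources into one chronological timeline. The source
# names and fetch_events() callback are assumptions, not a real API.

def build_timeline(alert_time, fetch_events, window_minutes=60):
    """Collect log, endpoint, and network events around an alert."""
    start = alert_time - timedelta(minutes=window_minutes)
    end = alert_time + timedelta(minutes=window_minutes)
    events = []
    for source in ("logs", "endpoint", "network"):
        events.extend(fetch_events(source, start, end))
    # Order everything chronologically so cause and effect become visible.
    return sorted(events, key=lambda e: e["time"])
```

The point of the sketch is the widening itself: instead of inspecting only the triggering event, the investigation starts from a window around it, across all available telemetry.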
2) Enrichment – from internal and external sources
The quality of the decision is greatly improved when raw events are accompanied by interpretable context:
- public CTI information / threat intel feeds
- internal asset information (e.g., criticality)
- AD / user and authorization information
3) Relies on industry frameworks
The system does not "freely improvise," but builds on proven methodologies:
- NIST
- MITRE ATT&CK
- VERIS
This not only facilitates more consistent analysis, but also ensures that the report speaks the language of the SOC.
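Mapping raw events onto a shared framework can be as simple as a lookup table. The sketch below is illustrative only: the event type names and the mapping are a tiny hand-picked sample, not a complete or product-supplied catalogue (the MITRE ATT&CK technique IDs themselves are real).

```python
# Illustrative only: tagging observed events with MITRE ATT&CK technique IDs
# so the report speaks a shared SOC vocabulary. The event type keys are
# hypothetical; the technique IDs are real ATT&CK identifiers.

TECHNIQUE_MAP = {
    "powershell_encoded_command": ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
    "lsass_memory_read": ("T1003.001", "OS Credential Dumping: LSASS Memory"),
    "beacon_to_known_c2": ("T1071.001", "Application Layer Protocol: Web Protocols"),
}

def tag_events(events):
    """Annotate each recognized event with its ATT&CK technique ID and name."""
    tagged = []
    for event in events:
        technique = TECHNIQUE_MAP.get(event.get("type"))
        if technique:
            tagged.append({**event,
                           "technique_id": technique[0],
                           "technique_name": technique[1]})
    return tagged
```

Once events carry standard technique IDs, the same incident reads the same way to every analyst, which is exactly what makes handover and auditing easier.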
4) Zero-playbook approach
In the classic SOAR world, playbooks must be developed and maintained for each incident type. AI Analyst, on the other hand, offers a more universal approach: the system is capable of investigating a wide variety of incident types by default and can be fine-tuned with customer-specific context as needed.
5) Structured report + proposal
The output is not a free-form textual response, but a package that takes over the analytical legwork and prepares the decision:
- high-level summary
- attack chain / timeline
- interpretation of events
- recommended verdict: false positive vs. true positive
- certainty level and brief justification
- response guide / recommended steps (under continuous development)
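The package described above can be pictured as a small structured record. This is one possible shape only: the field names, certainty scale, and review threshold are assumptions for illustration, not the product's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the decision package. All field names and the
# 0.0-1.0 certainty scale are assumptions, not a documented schema.

@dataclass
class IncidentReport:
    summary: str            # high-level summary
    timeline: list          # attack chain / ordered key events
    verdict: str            # "false_positive" or "true_positive"
    certainty: float        # confidence in the verdict, 0.0-1.0
    justification: str      # brief reasoning behind the verdict
    recommended_steps: list = field(default_factory=list)

    def needs_review(self, threshold=0.8):
        """Low-certainty verdicts are routed to a human analyst first."""
        return self.certainty < threshold
```

Structuring the output this way is what makes the verdict and certainty usable by downstream automation rather than only by a human reader.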
6) Integrability
It can be integrated with other systems via the AI Analyst API:
- SOAR
- ticketing systems
- dashboards
The report and decision output can be used in further automation processes.
7) On-prem focus, compliance considerations
It is particularly important in on-prem environments (GDPR/compliance) that most of the solution can be run in the customer's infrastructure. The aim of this approach is to make LLM calls as controllable as possible over time (while continuously reviewing the options).

What does this mean in numbers? (SOC operational impact)
Based on experience, the difference is drastic:
- Traditional SOC: an incident often waits 20–45 minutes for someone to look at it.
- With AI Analyst: analysis starts immediately, so the waiting time is effectively zero.
The report is part of the investigation, so the analyst receives a complete, structured document. Even in the case of a complex incident, the goal is to be able to make a decision within minutes:
- AI-based analysis: ~3 minutes
- human validation + additional searches, if necessary: a few more minutes
In addition to speed, quality is also measurable:
- based on experience from thousands of investigated incidents, critical incidents are reliably identified,
- in rare cases, inaccuracies may occur, but these can be managed with appropriate controls.

What is worth understanding from this approach
The real value of such a solution lies not only in the type of work it takes off the analyst's shoulders, but also in the fact that it leads to faster and more reliable decision-making: it provides a structured context for the signal, with prioritizable output and verifiable reference points.
In a SOC, most of the time is spent not on verdicts, but on getting the analyst to the point where they can ask good questions. AI Analyst packages this stage and, instead of a raw alert, delivers material that can be worked with.
1) From signal to narrative: "What matters and what doesn't?"
A typical alert is just a symptom (e.g., an IP, a process, an anomaly). The useful question, however, is how it fits into an attack chain—or why it doesn't fit.
AI Analyst doesn't stop at the alert, but puts together the context (time, affected assets, related events) and builds a comprehensible story from it. This is the added value that usually takes the most time to do manually.
2) "Translation into professional language": MITRE/VERIS as a common framework
The second important lesson is standardization: when the analysis is organized according to MITRE tactics/techniques and VERIS categories, the material becomes
- faster to hand over,
- easier to audit,
- and less dependent on the experience of whoever handles the incident.
In other words, not only will the investigation be faster, but it will also be more consistent.
3) Decision support, not "AI opinion": verdict + certainty + referenceable evidence
The third key element is that the output is not a textual "AI response" but a decision package:
- recommended verdict (false/true positive),
- certainty (level of certainty) with a brief explanation,
- and reference points (key elements, entities) that allow the analyst to go back to the source events.
This is critical in terms of trust: AI Analyst doesn't ask you to "trust me," but rather gives you the ability to verify and drill down where it's really needed.
What was the common lesson learned from the two examples given?
The cases presented modeled two typical SOC situations:
- when a seemingly "minor" signal actually points to a more serious problem,
- and when several individually misleading or partially false alarms together indicate a suspicious pattern.
The point is not the specific sequence of events, but rather that AI Analyst pushes the analyst toward decision-making rather than research: it provides an interpretable summary and verifiable reference points within minutes, on which the analyst can build their own professional control. This is particularly valuable when events are fragmented or appear insignificant at first glance, and a more serious incident can therefore easily "slip through" manual triage.
What does the SOC gain from this?
Faster decisions, less backlog
If the analysis is available within minutes, the analyst's time is spent not on searching, but on making decisions and responding as quickly as possible.
Less burnout, more professional work
Instead of monotonous data mining, there is time for:
- threat hunting
- fine-tuning detections
- streamlining playbooks and processes
- proactive investigations
More consistent investigation quality
The structure based on industry frameworks (MITRE/VERIS/NIST) helps ensure that reports can be interpreted uniformly.

Next step
If the SOC's goal is faster, more consistent, and more scalable investigation, then AI is not a "nice to have" but a competitive advantage. NetWitness AI Analyst provides structure and decision support behind alerts while maintaining analyst control, allowing the team to focus on critical cases rather than on gathering context.


