István Oláh
07/10/2025

Who controls whom? Artificial intelligence and cybersecurity at the decision frontier

From ancient ideas to modern tools, AI’s growing role in cybersecurity and society requires strong regulation, human oversight, and ethical responsibility to stay in control of its power.

AI has been with us for a long time - we just didn't know it

While the public sees AI as a recent, radical technological breakthrough, experts know that the history of AI goes back centuries, even millennia. Even ancient poets played with the idea of a world run by machines, and the mathematical foundations were laid more than 370 years ago. The landmarks in the history of modern AI were the Colossus machine of 1943 and John von Neumann's 1945 architecture, which has been the cornerstone of computing ever since.

So AI has been with us for a long time - it's just now becoming widely available. The key is the democratisation of the user interface: you no longer need to be a mathematician to use AI, just a web browser and a question.

The true nature of AI: past, present and misunderstandings

The concept of AI is often confused with machine learning, deep learning and generative AI. Despite the technical differences, they share a common root: they are statistical, probabilistic systems, in contrast to classical, discrete mathematics. This is why we need to rethink how we approach these tools - and in what regulatory environments we use them.

Source: Microsoft

“Hallucination”, for example, is not a concept that exists in mathematical terms, yet it is often cited as a flaw of AI. In fact, these outputs are simply unusual but logically valid results drawn from a complex probability space.
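The point can be illustrated with a toy next-token sampler. The vocabulary and scores below are invented purely for illustration; real language models work over far larger vocabularies, but the principle is the same:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-word scores for the prompt "The capital of France is ..."
vocab = ["Paris", "Lyon", "Berlin", "Atlantis"]
logits = [6.0, 2.0, 1.0, 0.5]

probs = softmax(logits)

# Every word has a non-zero probability, so over many samples even
# "Atlantis" is eventually drawn. Mathematically this is a perfectly
# valid outcome of the probability space - it is only the human reader
# who labels it a "hallucination".
random.seed(0)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(10_000)]
for word in vocab:
    print(word, samples.count(word))
```

The model is not "malfunctioning" when it emits a rare continuation; it is doing exactly what a probabilistic system is defined to do.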

Cybersecurity and AI: friend or foe?

AI has been present in cybersecurity for at least 35 years - even if most users are unaware of it. If a system makes decisions on a “heuristic” basis, it is already using AI. Until now this has happened mainly in the background (e.g. in content filters and antivirus software), but today AI plays a visible, front-line role: it can be both the attacker's tool and the defender's.
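"Heuristic" detection of this kind can be sketched as weighted rule scoring against a threshold. The traits, weights and threshold below are illustrative assumptions, not any vendor's actual signatures:

```python
# Weighted suspicion scores for behaviours a scanner might observe.
# All names and numbers here are invented for illustration.
SUSPICIOUS_TRAITS = {
    "packed_executable": 0.4,   # code is compressed/obfuscated
    "writes_autostart": 0.3,    # persists across reboots
    "disables_updates": 0.5,    # tampers with security settings
    "contacts_known_c2": 0.9,   # talks to a known-bad server
}

THRESHOLD = 0.7  # tuning this trades false positives vs false negatives

def heuristic_verdict(observed_traits):
    """Score a sample by summing the weights of the traits it shows."""
    score = sum(SUSPICIOUS_TRAITS.get(t, 0.0) for t in observed_traits)
    verdict = "malicious" if score >= THRESHOLD else "clean"
    return verdict, round(score, 2)

print(heuristic_verdict(["packed_executable"]))                      # stays below threshold
print(heuristic_verdict(["packed_executable", "disables_updates"]))  # crosses the threshold
```

No single trait here proves malice; the verdict emerges from the combination - which is exactly why such systems are probabilistic decisions rather than exact lookups.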

The vast majority of information security products today have some form of built-in AI capabilities. The question is: how can we regulate and control them?

Legal and regulatory environment: NIS2, NIST and AI Act

Both international and European regulators have recognised the potential dangers and opportunities of AI. The common message of the US NIST AI Risk Management Framework (AI 100-1), issued in 2023, and the EU AI Act is that transparency, human oversight and security are needed. The AI Act defines four levels of risk; systems in the most serious category (“unacceptable risk”) are prohibited outright, and violations can draw severe penalties.

Business application in practice: case study from OTP Bank

AI is not just a toy for research labs - it offers real, measurable benefits in business as well. As early as 2015, OTP Bank had a signature-pad system in place that used AI-based signature recognition to identify customers from hundreds of dynamic parameters - not from a graphical image of the signature.

Source: Bence Golda EIVOK-59

But auditing such systems presents new mathematical and engineering challenges. Setting audit thresholds (both business and security) and training the systems properly are the keys to success. The lesson: data - and above all good-quality, clean data - is the foundation of any AI project.
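What "setting an audit threshold" means for a biometric verifier like a signature pad can be sketched as follows. The similarity scores below are synthetic numbers invented for illustration, not OTP Bank data; the sketch picks the threshold where the two error rates balance:

```python
# Synthetic similarity scores (0..1) from a hypothetical verifier:
# higher means "more like the enrolled customer's signature".
genuine_scores = [0.91, 0.88, 0.95, 0.79, 0.85, 0.92, 0.83, 0.90]
forged_scores  = [0.42, 0.55, 0.61, 0.48, 0.82, 0.52, 0.58, 0.65]

def error_rates(threshold):
    """False-accept rate (forgeries let in) and false-reject rate
    (genuine customers turned away) at a given threshold."""
    far = sum(s >= threshold for s in forged_scores) / len(forged_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Sweep candidate thresholds and keep the one where FAR and FRR are
# closest - the "equal error rate" point, a common audit baseline.
best_t, best_gap = None, float("inf")
for i in range(40, 100):          # thresholds 0.40 .. 0.99
    t = i / 100
    far, frr = error_rates(t)
    if abs(far - frr) < best_gap:
        best_gap, best_t = abs(far - frr), t

print(f"audit threshold = {best_t}, (FAR, FRR) = {error_rates(best_t)}")
```

In practice the business side pulls the threshold down (fewer rejected customers) while the security side pulls it up (fewer accepted forgeries) - the "audit limits" are precisely the agreed bounds on that trade-off.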

The future: quantum security, biological processors and the singularity

AI has not yet reached the singularity - but it is approaching. Warfare, the energy hunger of data centres and quantum technology are ushering in a new era in the history of computing. Physics, mathematics and biology will combine to shape a new world in which humans can only remain the decision-makers if they remain in control.

Conclusion: is AI leading us or are we leading it?

The real question is not whether to use artificial intelligence, but how. AI can bridge gaps in knowledge that human thinking cannot cover - and there is nothing wrong with that in itself. But it does place a serious responsibility on us to keep these systems under control with information security safeguards.

We should only use an AI when we know what data it was trained on, with what methodology, and for what purpose - because only then can we judge whether it is really fit for its intended use. If we don't understand what the system has learned, how it works and what the consequences of its decisions might be, then we are not controlling the AI; it is starting to control us.

So the goal can only be that we teach AI - not that AI teaches us. Ethical, transparent, mathematically verifiable use is not only a technical or legal issue, but also a social and philosophical responsibility.
