Artificial intelligence in cybersecurity
In 2021, it doesn’t take much scrolling on the Internet to bump into the words ‘artificial intelligence’, and the probability of coming across this term significantly increases if you’re a seasoned professional in any IT-related field. As is the case in several other industries, AI has become a hot topic in the area of cybersecurity as well. But is what IT security professionals mean by artificial intelligence really artificial intelligence? Read on to find out.
The origins of artificial intelligence
For many, depending on the individual’s field of work, artificial intelligence might sound like a fancy, relatively new term that’s only recently entered the collective consciousness. However, ‘artificial intelligence’ has existed as a term since as early as 1956, when studies of cognitive systems were already underway. At that time, the term was seen as too ambitious, especially compared to where science actually stood. And, what’s even more astounding is that the concept of AI was born even before then: it was around WWII that the idea of a thinking machine formed in the mind of Alan Turing – long before we had working computers or any software to make it a reality. But a lot has changed in the last 65 (or more like 80) years. The utilization of “AI” has become mainstream in many different sectors, such as big data, smart cities, self-driving cars, and even healthcare – and professionals aren’t so careful with the term anymore.
AI vs. ML vs. DL
Before we get into what is exactly meant by AI in cybersecurity (and other sectors, for that matter), it’s worth clearing up a few related terms that will come in handy when examining the situation closely.
Although deep learning (DL) and machine learning (ML) have also been getting more attention lately, they still don’t come up as often as AI does, and there’s a reason for that. As a matter of fact, when people talk about artificial intelligence – and not just laymen – they tend to label DL and ML as AI, even though these terms aren’t interchangeable. Granted, there are similarities and overlaps among the three technologies, but they’re far from being the same thing and should not be treated as such. Let’s look at each of them separately.
When we say deep learning, we mean a technology that isn’t capable of thinking but can learn how our brain processes information and mimic that process, using artificial neural networks to model it. It can simulate human behavior, but just as there’s a difference between simulating a flight and actually flying, it cannot step outside its boundaries. This is not to say DL isn’t valuable, though. On the contrary, DL is a proven and trusted technology, used in landing systems, for example. But it’s important to note that it requires huge amounts of data to work properly.
Machine learning, on the other hand, does not require nearly as much data to be effective. It differs from DL in that it can be made smarter by training it on exceptions and feeding it human-supplied advice. The algorithm can either learn patterns from the data on its own (which we call ‘unsupervised’ learning) or be trained on examples labeled by a data analyst (‘supervised’ learning). Although the latter undoubtedly has benefits such as flexibility, the former often proves more valuable, as it can stand in for human resources which, as most of you know, are scarce nowadays. In addition, ML is great at supporting tasks even though it doesn’t understand what it’s doing. In other words, it doesn’t know the ‘why’, but it knows the ‘how’.
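The supervised/unsupervised contrast above can be sketched in a few lines of plain Python. This is a toy illustration with made-up login counts, not any real security product’s training pipeline:

```python
# Supervised: an analyst has labeled daily login counts as normal (0)
# or suspicious (1), and we learn a simple decision threshold from them.
labeled = [(3, 0), (5, 0), (4, 0), (40, 1), (55, 1)]

def learn_threshold(samples):
    """Place the decision boundary midway between the two labeled classes."""
    normal = [x for x, y in samples if y == 0]
    suspicious = [x for x, y in samples if y == 1]
    return (max(normal) + min(suspicious)) / 2

threshold = learn_threshold(labeled)
print(threshold)  # 22.5

# Unsupervised: no labels at all — flag anything far from the overall
# mean, letting the algorithm draw the line on its own.
values = [3, 5, 4, 40, 55]
mean = sum(values) / len(values)
flagged = [v for v in values if v > 2 * mean]
print(flagged)  # [55]
```

The supervised version needs an analyst’s labels but is easy to steer; the unsupervised version needs no labels, which is exactly why it scales better when human expertise is scarce.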
And now we arrive at the real thing, artificial intelligence. AI can do all the things the two above technologies can, but it can do even more: it can learn from experience like humans do. It can interact and reason. In simpler terms, it’s as human as a machine can be.
AI first made headlines in 1997, when the IBM-developed Deep Blue defeated the reigning world champion Garry Kasparov in chess – a player who had never before lost a multi-game match. Obviously, AI has come a long way since then (Deep Blue wasn’t even programmed to learn or to think like a human – its purpose was “just” to play chess exceptionally well), but it was definitely a milestone for the technology and it made a lot of noise. Given the vast potential AI has, and adding in the huge hype formed around the concept thanks to science fiction and popular culture, it’s no wonder that people go out of their way to claim they “do AI”.
Does cybersecurity really have AI?
In short, no, it doesn’t – not fully, anyway. Cybersecurity will undoubtedly deploy AI in the future, but what it currently labels as AI is an expert system with some ML abilities. Let’s take a closer look at what that is.
An expert system is a computer system that emulates the decision-making ability of a human expert and can solve complex problems by reasoning, typically by applying if-then rules to large amounts of data. Expert systems are divided into two subsystems: the knowledge base, which contains the facts and the rules, and the inference engine, which applies the rules to the known facts to derive new facts. However, expert systems cannot independently and accurately detect an attacker. What they can do is notify a SOC analyst about an incident that has a high probability of being an attack or compromise. Afterwards, the ML component adds more if-then rules to the knowledge base so it can be more accurate in the future. However, since there’s often no feedback loop from the actual users, the human expertise really comes from the vendor’s input.
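To make the knowledge base/inference engine split concrete, here is a minimal forward-chaining sketch in Python. The rule and fact names are invented for illustration; real expert systems in security products are far larger and vendor-specific:

```python
# Knowledge base: each rule maps a set of required facts to a new fact.
RULES = [
    ({"failed_logins_spike", "new_geo_location"}, "possible_credential_stuffing"),
    ({"possible_credential_stuffing", "admin_account"}, "high_priority_incident"),
]

def infer(facts, rules):
    """Inference engine: forward-chain rules over known facts
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"failed_logins_spike", "new_geo_location", "admin_account"}, RULES)
print(derived)  # includes "high_priority_incident"
```

Note that the second rule only fires because the first one derived a new fact – that chaining of if-then rules is the “reasoning” an expert system performs, and the ML layer’s job is essentially to grow the `RULES` list over time.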
So why does the industry still insist on saying ‘AI’? The answer is simple: the term ‘expert system’ isn’t as marketable, so vendors simply don’t use it much. So, when you come across cybersecurity material talking about artificial intelligence, what it really means is an expert system equipped with ML and the inference mechanism described above.
The current state of cybersecurity AI
Let’s have a look at an analogy from another sector to make it even clearer. In the automotive industry, for example, we often hear about driver assistance systems and self-driving (or autonomous) cars. To most people, these terms might mean similar concepts, but in reality, there are significant differences between the two. Driver assistance systems are no more than expert systems, which we thoroughly discussed above. Self-driving cars, on the other hand, are also equipped with machine learning and provided with driver feedback during the development stage.
Now, if we wanted to place the “AI” capabilities of cybersecurity between driver assistance systems and self-driving cars, it would land somewhere in the middle. What this means is that AI has the potential to independently act like a SOC analyst in the future, just like an autonomous car can act like a driver, but for that to happen, it would need strong feedback from all the users involved in the system.
Nevertheless, the AI currently deployed in cybersecurity does help with one major issue that has cemented itself in the sector: staff shortage. As people in this area well know, there are simply not enough SOC experts in the field, which means that training and giving experience to as many newcomers as the sector needs also poses a huge challenge. However, AI – even at its current level of maturity – proves extremely helpful for SOC personnel, and it has the potential to take over some of their tasks in the future.
The future of artificial intelligence in cybersecurity
At this point, it would be perfectly understandable if you thought that IT security is doing badly in the AI space. However, the situation is not nearly that bad. Going back to our previous analogy, we must admit that the comparison wasn’t completely fair – roads rarely change, while the cybersecurity landscape changes constantly, which certainly makes this evolution more difficult. Before we can deploy “true” AI in IT security, we first need to learn how to fully leverage machine learning and expert systems, so our systems can evolve into the next phase.
In addition, AI systems, too, need to grow up. You can’t put them to work too early with sky-high expectations, as that will only make you lose faith in AI in cybersecurity – which would be a pity, as it has great potential. Eventually, SOC systems will be able to predict an attack before it unfolds, drastically reducing the damage to your company. In other words, AI will help security operations centers become proactive rather than reactive, notifying you of attacks in advance instead of responding to ones that are already wreaking havoc. And this shift in focus, from known threats to unknown ones, is exactly what an effective SOC should aim for.
But how can it be done? First, the capabilities of AI in cybersecurity need to keep up with the evolution of threats, which continuously rides on the back of advancing technology. The attack surface is constantly evolving and expanding, and there’s too much inconsistent and unstructured data, which prevents SOC analysts from detecting and responding to incidents quickly and efficiently. In general, the fragmented monitoring technologies currently in place don’t give specialists proper visibility. To keep pace with these threats, experts need help with detecting abnormal behavior, identifying known and unknown threats, and gaining full attack-lifecycle visibility – which is exactly what artificial intelligence, once it’s mature enough, will be able to do for cybersecurity.
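As a toy illustration of the “detecting abnormal behavior” piece, a SOC-style check can be as simple as measuring how far today’s activity sits from a user’s baseline. The numbers below are invented, and real systems use far richer features and models:

```python
import statistics

# Hypothetical baseline: one user's daily login counts over recent days.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
today = 48  # today's count for the same user

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z_score = (today - mean) / stdev

# Flag behavior more than three standard deviations from the baseline.
is_anomalous = abs(z_score) > 3
print(is_anomalous)  # True
```

A check like this knows nothing about attackers – it only knows the ‘how’ (this number is far outside the norm), not the ‘why’, which is precisely where a human analyst, or a more mature AI, still has to step in.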
Get an idea from SOCWISE to build or develop your SOC!
Some CISOs have built their SOCs over time with a mix of internal and external resources. But, given the ongoing evolution of cybersecurity techniques and the need to constantly adopt new skills and tools, managing this mix is becoming increasingly complicated.
See how we built it, how it works, and what technologies we use!