Artificial intelligence (AI) is undoubtedly the most important trend in IT at the moment, and it has long been used in many areas of IT security. Unfortunately, it is also being used by cyber criminals. As is so often the case, they are early adopters of new technologies and immediately began using the newly available large language models (LLMs), such as ChatGPT, for their own purposes last year. With their help, cyber criminals can now generate a greater volume of attacks, and those attacks are becoming more sophisticated and targeted. AI-driven tools allow criminals to write complex code for malware and exploits quickly and efficiently.
Somewhat later, as is usually the case, AI is now also being used on the defence side to keep pace with the new threats. Numerous defence solutions already rely on AI-supported behavioural analysis or anomaly detection. Given the volume and quality of new AI-supported attacks, security managers have little choice but to counter them with AI tools of their own. These include solutions for model monitoring, data and content anomaly detection and AI-driven data analysis.
The majority of companies are not yet ready for AI
Although there is already a strong focus on AI on end devices, 92 per cent of PCs have insufficient RAM to support enterprise and commercial applications. Companies that want to take advantage of AI would therefore effectively have to renew their entire device fleet. Market research institutes such as IDC reflect this: they forecast that the number of new PCs will rise from 50 million to 167 million by 2027. This confronts companies with the additional challenge of protecting the new computers against cyber threats and bringing them into line with internal and external security guidelines, all against a backdrop of increasingly complex software installations and cyber threats made more dangerous by artificial intelligence.
How does AI work in cyber security?
AI systems in cyber security use machine learning and data analysis to identify suspicious activity. These systems are trained to analyse behavioural patterns within large amounts of data and to detect anomalies that could indicate cyber threats. A key element of this is an anomaly detection model that distinguishes normal network activity from potentially malicious activity.
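To make this concrete, here is a minimal, purely illustrative sketch of such an anomaly detection model in Python. It uses scikit-learn's IsolationForest; the flow features (packets per second, bytes per second, distinct destination ports) and all values are assumptions for demonstration, not taken from any particular product.

```python
# Minimal sketch: training an anomaly detection model on network flow features.
# Feature names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: [packets/s, bytes/s, distinct destination ports]
normal_traffic = np.column_stack([
    rng.normal(500, 50, 1000),        # packets per second
    rng.normal(40_000, 5_000, 1000),  # bytes per second
    rng.integers(1, 10, 1000),        # distinct destination ports
])

# Train on normal behaviour only; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A suspicious observation: port-scan-like behaviour with many destination ports.
suspicious = np.array([[5_000, 300_000, 800]])
print(model.predict(suspicious))  # -1 means the model flags it as an anomaly
```

Production systems combine far more features and signals, but the principle is the same: learn a baseline of normal behaviour and score deviations from it.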
By continuously analysing data traffic and user behaviour, AI systems can identify unusual patterns that point to a cyberattack. For example, a sudden increase in traffic to a particular server resource could be an indicator of a DDoS (Distributed Denial of Service) attack. Similarly, unusual login attempts from geographically distant regions could indicate brute-force attacks or credential stuffing.
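The traffic-spike example can be reduced to a simple heuristic: compare the current request volume with a rolling baseline and flag strong deviations. The sketch below is hypothetical; the window size and threshold are assumed values, and real DDoS detection weighs many more signals.

```python
# Illustrative traffic-spike heuristic: flag a minute as suspicious when the
# request volume deviates strongly from the recent baseline.
from collections import deque
from statistics import mean, stdev

def spike_detector(window_size=60, z_threshold=4.0):
    history = deque(maxlen=window_size)

    def check(requests_per_minute):
        if len(history) >= 10:  # need some baseline before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (requests_per_minute - mu) / sigma > z_threshold:
                return True  # possible DDoS; keep the anomalous value out of the baseline
        history.append(requests_per_minute)
        return False

    return check

check = spike_detector()
for load in [400, 410, 395, 405, 420, 398, 402, 415, 407, 399, 50_000]:
    if check(load):
        print(f"Alert: unusual traffic volume ({load} requests/min)")
```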
Another important aspect is machine learning itself, which enables AI systems to learn from new data and interactions. This allows them to adapt continuously to the ever-changing strategies of cyber criminals. By analysing new types of malware, for example, they can learn to recognise future variations more reliably.
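One way to sketch this kind of continuous adaptation is incremental (online) learning, where the model is updated with each new batch of labelled samples instead of being retrained from scratch. The example below uses scikit-learn's SGDClassifier with partial_fit; the two-feature representation of malware samples is purely illustrative.

```python
# Hedged sketch of incremental learning: the classifier is updated with each
# new labelled batch instead of being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()
classes = np.array([0, 1])  # 0 = benign, 1 = malicious

def new_batch(n=200):
    """Simulate a labelled batch: [file entropy, count of suspicious API calls]."""
    benign = np.column_stack([rng.normal(4, 1, n), rng.poisson(2, n)])
    malicious = np.column_stack([rng.normal(7, 1, n), rng.poisson(15, n)])
    X = np.vstack([benign, malicious])
    y = np.array([0] * n + [1] * n)
    return X, y

# Each day's samples refine the model without discarding what it already knows.
for day in range(5):
    X, y = new_batch()
    clf.partial_fit(X, y, classes=classes)

print(clf.predict([[7.5, 20]]))  # high entropy + many suspicious calls -> likely [1]
```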
In addition, many AI systems in cybersecurity use deep learning to recognise complex patterns in data that may not be obvious to human analysts. This includes analysing network logs, system logs and endpoint activity to detect cyberattacks.
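One common deep-learning pattern for this, shown here only as an assumption-laden sketch rather than a description of any specific product, is an autoencoder trained on feature vectors derived from normal logs; entries that it reconstructs poorly are treated as unusual and escalated for review.

```python
# Sketch: an autoencoder trained on feature vectors from "normal" log entries.
# Architecture, feature size and data are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

autoencoder = nn.Sequential(
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 3),  nn.ReLU(),   # compressed representation of a log entry
    nn.Linear(3, 8),  nn.ReLU(),
    nn.Linear(8, 16),
)

optimiser = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for numeric features derived from normal system/network logs.
normal_logs = torch.randn(2048, 16)

for _ in range(200):
    optimiser.zero_grad()
    loss = loss_fn(autoencoder(normal_logs), normal_logs)
    loss.backward()
    optimiser.step()

# A log entry with a very different feature profile reconstructs badly.
with torch.no_grad():
    odd_entry = torch.randn(1, 16) * 5
    error = loss_fn(autoencoder(odd_entry), odd_entry).item()
print("reconstruction error:", error)  # high error -> candidate for investigation
```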
Finally, the ability to improve itself is a decisive advantage of AI in cyber security. With every recognised attack and every interaction, these systems become smarter and more effective. They develop a deeper understanding of the changing methods of cyber criminals and can therefore react proactively to threats before they cause serious damage.