In cyber security the potential of AI has not yet been fully utilised. However, new applications are already being planned.
Manufacturers of security products are increasingly focussing on artificial intelligence. Various application scenarios are being advertised, but AI is not useful everywhere.
The situation report presented a few days ago by the German Federal Office for Information Security (BSI) comes to the following conclusion: Artificial intelligence presents companies and authorities with unprecedented challenges when it comes to IT security. AI itself could become a weak point, for example if it is hacked or misused. There is also a risk that training data could be manipulated.
Risks and opportunities therefore go hand in hand with AI. At this year's it-sa, hardly any cyber defence provider passed up the chance to advertise AI in its products. Often, however, this amounts to little more than superficial applications. "Although all manufacturers claim to have integrated AI into their solutions, sometimes it's just a voice assistant," explains Stefan Strobel, Managing Director of IT security specialist cirosec, who studied AI in the mid-1990s.
AI usually needs regular maintenance
Research is much further ahead. Norbert Pohlmann, Professor of Cyber Security at the Westphalian University of Applied Sciences and Head of the Institute for Internet Security, has been researching AI in the field of security for 20 years. He explains: "We once implemented a research project to recognise anomalies in network traffic, which worked really well. We carried this out over ten years together with the BSI." AI has now also found its way into commercial products for this task, known as Network Detection and Response (NDR). With growing data traffic in company networks, this is becoming increasingly important, as unusual data transfers are typically an early indication that the company network has been compromised. If these solutions are AI-based, however, they need regular attention. "It is important that the models are continuously adapted, because data communication is constantly changing, for example due to new protocols and changes in user behaviour," is a key finding from Pohlmann's project.
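Commercial NDR products use learned models for this; as a purely illustrative sketch of the underlying idea (the data and the `find_anomalies` helper are invented for this example), a robust outlier test over traffic volume per time interval might look like this:

```python
from statistics import median

def find_anomalies(values, threshold=3.5):
    """Flag values far from the median, using the median absolute
    deviation (MAD) as a robust estimate of normal variation."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no variation at all, nothing stands out
    # 0.6745 scales the MAD score to be comparable to a z-score
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Bytes transferred per interval: mostly steady traffic with one
# sudden large transfer, as might occur during data exfiltration
traffic = [980, 1010, 995, 1020, 990, 15000, 1005, 985]
print(find_anomalies(traffic))  # → [5]
```

A real NDR system would of course learn far richer baselines (protocols, destinations, timing), and, as Pohlmann notes, would have to re-learn them continuously as communication patterns change.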
Pohlmann also considers this technology suitable for other areas of application: "The detection of anomalies would also be possible on end devices, for example when analysing user interaction." He adds: "Users often have certain workflows or regularities that can be recognised. For example, after switching the computer on, the email programme is started first and only then is the web browsed." On end devices, AI is usually already helping out in the background: "Pretty much all providers are now working with AI technologies for endpoint security," says Pohlmann.
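The workflow regularities Pohlmann mentions could, in the simplest case, be captured as transition frequencies between applications. A minimal sketch, with invented session data and helper names, might treat any never-before-seen transition as suspicious:

```python
from collections import defaultdict

def learn_transitions(sessions):
    """Count how often each application follows another in observed sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return counts

def is_unusual(counts, a, b):
    """A transition never seen during training is flagged as an anomaly."""
    return counts[a][b] == 0

# Typical mornings: mail client first, then the browser
sessions = [["login", "mail", "browser"],
            ["login", "mail", "browser", "editor"]]
model = learn_transitions(sessions)
print(is_unusual(model, "login", "mail"))        # → False
print(is_unusual(model, "login", "powershell"))  # → True
```

Production endpoint security products use far more sophisticated behavioural models, but the principle of learning what is normal and flagging deviations is the same.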
AI to combat skills shortages
AI could also alleviate the shortage of skilled labour. Mirko Ross, founder and Managing Director of security specialist asvin, comments: "In the security field, the shortage of skilled labour is coming up against rising risks and increasing attacks, which can be dangerous." If AI took over routine tasks and thereby relieved security specialists of part of their workload, "the particularly glaring shortage of skilled labour in this sector could even be reduced", Pohlmann also hopes. AI could, for example, triage the countless alarms and warning messages generated by security software. Pohlmann describes this approach as follows: "Currently, all alarms and problems still have to be investigated by humans, which is very time-consuming. AI could pre-filter, check and prioritise them depending on the potential extent of damage. That would alleviate the shortage of technical experts somewhat."
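The pre-filtering and prioritisation Pohlmann describes would in practice be driven by a learned model; as a hand-written stand-in that shows only the triage logic (all field names, scores and rules here are invented), it could be sketched like this:

```python
# Hypothetical alert triage: drop known-benign noise, then rank the rest
# by severity weighted with the criticality of the affected asset.
SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage(alerts, noise_rules=()):
    ranked = []
    for alert in alerts:
        if any(rule(alert) for rule in noise_rules):
            continue  # pre-filtered: no analyst needs to see this
        score = SEVERITY[alert["severity"]] * alert["asset_criticality"]
        ranked.append((score, alert))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [alert for _, alert in ranked]

alerts = [
    {"id": 1, "severity": "low", "asset_criticality": 1},
    {"id": 2, "severity": "critical", "asset_criticality": 5},
    {"id": 3, "severity": "medium", "asset_criticality": 2},
]
top = triage(alerts, noise_rules=[lambda a: a["severity"] == "low"])
print([a["id"] for a in top])  # → [2, 3]
```

An AI-based system would replace the fixed scoring table and noise rules with a model trained on past analyst decisions, but the outcome is the same: analysts see fewer alarms, ordered by potential extent of damage.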
However, the experts also share the BSI's assessment and see new risks. Mirko Ross, for example, warns: "Relying solely on AI models could be dangerous". Training data is seen as an important source of danger: "AI models could be trained with incorrect or unsuitable data and therefore misjudge malware or anomalies. Attackers could also try to manipulate training data". The more AI is used, the more Ross expects targeted attacks against the AI itself, for example "if the vulnerabilities of the AI models are known". His recommendation is: "You should diversify your tools and methods".
Stefan Strobel is particularly sceptical about the increasing use of AI. In his view, AI has not developed much technologically in the security sector over the last five years. "LLMs are new, but they don't improve security; it's more the attackers who benefit from them." He sees little potential for AI in corporate cyber defence. His conclusion: "AI won't fix it; many things can usually be better protected with EDR and XDR solutions."
Author: Uwe Sievers