[Image: Symbolic image of artificial intelligence]

Attackers are ahead when using AI

The criminal underground has an advantage when it comes to the use of AI. Markus Richter, State Secretary in the Federal Ministry of the Interior and Federal Government Commissioner for Information Technology, therefore warned during the press conference at this year's it-sa: "We are seeing ever more perfidious vectors being used and a steadily increasing level of profiling". Attackers have it easier in many respects. This article explains why this is the case and what the European Union is doing about it.

While the security industry is still weighing up the use of AI, hackers have long been making intensive use of artificial intelligence for their own ends. And they are currently very successful at it. How come?

Cyber criminals are experimenting with AI recklessly and without restraint; their only criterion is profit maximisation. This is now even attracting the attention of international politicians.

It is a historic place: at Bletchley Park in England, Alan Turing and his team once cracked the encryption of Nazi Germany's Enigma cipher machine. Now state and government representatives from 28 countries are meeting there to discuss the regulation of artificial intelligence. They are particularly concerned about the technology's potential for misuse.

There is cause for concern, as dubious actors have long been engaging intensively with the capabilities of artificial intelligence. Shortly after ChatGPT became publicly available, they succeeded in generating malware with it. Since this is no longer easily possible, cyber criminals have been using their own AI systems such as WormGPT or FraudGPT to generate dangerous software or phishing emails. As far as is known, some of these are based on stolen base models or consist of open-source systems that the criminals train with stolen data from the darknet.

Hackers have lots of money

However, operating these AI systems is very resource-intensive and therefore expensive. But the criminal gangs do not lack the necessary budget. Norbert Pohlmann, Professor of Cyber Security at the Westfälische Hochschule in Germany and Head of the Institute for Internet Security, explains why: "The attacker groups have earned so much with ransomware that they are well positioned financially." Stefan Strobel, Managing Director of IT security specialist cirosec, who studied AI at the Laboratoire d'Intelligence Artificielle of the Université de Savoie in Chambéry, France, in the mid-1990s, concludes: "At the moment, it's more the attackers who are using AI very efficiently, for example for phishing or deepfakes with fake voices, photos or videos."

The criminal underground therefore currently has an advantage when it comes to the use of AI. This has not gone unnoticed by politicians. Markus Richter, State Secretary in the German Federal Ministry of the Interior and Federal Government Commissioner for Information Technology, warned during the press conference at this year's it-sa: "We are seeing ever more perfidious vectors being used and a steadily increasing level of profiling". In his opinion, we are facing radical changes where AI is concerned. He demanded: "AI is being used specifically by attackers; it must also be used specifically for defence".

But attackers have it easier in many respects. Norbert Pohlmann explains why this is the case: "While companies that want to defend themselves have to observe legal and ethical framework conditions, this is irrelevant for attackers. They therefore make unrestrained use of all the capabilities and possibilities offered by AI". After all, AI is a technology that anyone can use, for better or for worse. Pohlmann illustrates this with an example: "If AI is used to defend against attacks, it must be ensured that automatically generated results are correct. That requires effort".

EU plans AI Act

Pohlmann explains: "If I, as an expert in a field of knowledge, ask ChatGPT something, I can judge whether it is right or wrong". Incorrect results are rarely tolerable. This is why, in medicine for example, AI is only consulted as an advisory voice in critical decisions. Attackers, on the other hand, are ruthless and have other priorities: "They don't have to deal with it. They can simply try it out and take the results as they come. When generating spear-phishing emails, for example, this can lead to them not working very well. This means that some of the attacks carried out with this method will fail, but most of them will work; that's enough," he says.

These problems are also recognised in Brussels, where an AI regulation is currently being drafted. The EU Parliament is even considering setting up a separate European office for AI. Unfortunately, cyber criminals will not be impressed by any of this.

Author: Uwe Sievers
