
Jake Moore, Global Cybersecurity Advisor ESET
  • Industry News
  • Management, Awareness and Compliance
  • Artificial intelligence (AI)

Real or fraud? Expert shows how to become a managing director with a LinkedIn hack and deepfakes

British security expert Jake Moore tested the impact of AI and deepfakes on security - with devastating consequences: he assumed the identity of a company director and not only gained unnoticed access to the board office, he also hacked the director's LinkedIn account and used it to post a fake video announcing an absurd cycling campaign. Nobody noticed the scam, and Moore was even able to mobilise supporters for the idea. Only mistrust helps against deepfakes and AI, which is why Moore advocates a zero-trust strategy.

British security expert Jake Moore launched an experiment to show how easily AI-generated deepfakes can be used to overcome security precautions - with terrifying results.

There is hardly an area that still manages without AI, but criminals also know how to use this technology skilfully. Deepfakes are being used more and more frequently, and the damage is already enormous: in one case it amounted to 25 million US dollars. British security expert Jake Moore wanted to find out how hard this really is, and with little effort assumed the identity of a company director.

New fields of application for artificial intelligence (AI) are emerging almost daily. But just as often we hear about new opportunities for misuse. A well-known British security expert started an experiment. He wanted to know how far he could get with current technology and whether he could even manage to steal money. In a presentation, he describes the surprising outcome of his project.

Jake Moore realised immediately that an experiment of this kind, which could result in financial damage, requires the consent of the supposed victim. He had, after all, worked for 14 years for the British police in computer crime and digital forensics before moving to security software provider ESET, and is today a much sought-after interview partner. In previous experiments, Moore had used people close to him, hijacking their SIM cards, WhatsApp and social media accounts. For his new project he again searched his circle of acquaintances, but at first no one was willing to take part. In the end he managed to persuade Jason Gault, who runs a flourishing recruitment agency in the UK with around 40 employees.


Unnoticed into the board office

It all started rather harmlessly. Moore bet that he could get into the company boss's office without access authorisation. Gault was confident in his security technology and accepted the bet. Using a hacking tool, Moore secretly copied the RFID code of Jason Gault's employee ID card, which allowed him to enter the company's offices unchallenged. Although he posed provocatively in front of surveillance cameras, no one realised that an intruder was on the premises. Moore even asked an employee to take a photo of him with his legs up on the table in the boardroom. She willingly obliged, and Moore immediately sent the photo to a surprised Jason Gault.


Takeover in social media

But Moore wanted more. He was convinced that if simple tricks got him this far, more intelligent ones would take him much further. "The technical possibilities these days are fantastic," enthuses Moore in his talk. He was thinking of AI: he had previously created a deepfake video inserting himself as James Bond into the trailer for the latest film.

So he asked Gault whether he could try to hack his LinkedIn account. Gault agreed, and Moore quickly gained access. He then asked Gault to hand over the account for 48 hours. Gault agreed again, as he was about to leave for a planned holiday in Tenerife.


A fake video with consequences

Moore knew that Gault was an ambitious racing cyclist, and came up with the idea of a fake video in which Gault announces an absurd cycling campaign. Using AI tools, he created a sophisticated backdrop: the video shows a group of racing cyclists who have parked their bikes for a break to stop off at a Spanish restaurant on the Canary Islands. None of it was real. Against this backdrop, Moore had Gault say: "You all know I love challenges when I'm cycling. Now I'm planning a new challenge, bigger and more spectacular than all the previous ones: I'm going to cycle to Australia. No, I'm not buying a plane ticket, I want to get there on my bike." He added that he knew his plan was crazy.

Moore posted the video on LinkedIn.

Within a few hours, over 4,000 people had seen it. The reactions were not long in coming. Many declared the project to be absurd. But others immediately wanted to support Gault and offered him financial and material resources. The sums were considerable. Gault was even offered a highly remunerated sponsorship contract. Even Gault's wife fell for the fake video: Annoyed, she asked him why he hadn't told her about the crazy trip.

Nobody questioned the video. Only Gault's 14-year-old daughter immediately recognised the scam: "Dad, it's not you", she shouted. But the response to the absurd project grew and grew. Gault soon called from his holiday and asked Moore: "I know we agreed on two days, but things are getting out of hand. I want us to end the experiment immediately".

Video: Real or fake?

Real or fake? With this deepfake video, Jake Moore was able to slip into the persona of Jason Gault on LinkedIn. Only Gault's daughter saw through the deception; the experiment was ended shortly afterwards.

Measures against deepfakes

Even afterwards, nobody questioned the authenticity of the video. In an interview with it-sa, Moore is still surprised at how easy it was to deceive even friends and acquaintances with a completely absurd idea and raise thousands of British pounds.

Gault drew his conclusions and hired Moore for security training, aimed at sensitising employees and eliminating weak points in the security technology. Moore had achieved his goal: "Physical break-ins make people think," he commented on the result. "If I show them how easy it is to do something like this, people are much more willing to change something," summarises Moore.

Zero trust principle against AI deepfakes

Yet when asked what it takes to defend against deepfake fraud, Moore has no ready-made answer either. At the moment he sees no tools or technical means of doing so. "Verifying a person's identity is one of the biggest challenges at the moment," explains Moore in the interview, adding: "There is a gap here that will hopefully be closed soon." For the time being he expects deepfake fraud to increase: "Cyber criminals are well organised; they will certainly soon be offering deepfake-as-a-service."

His recommendation: "Zero trust is the basis, and it must also apply between people and in personal communication. Especially when it comes to money." Moore advises: "You always have to ask yourself questions like: 'Why is this happening now, and why is this happening to you?'" One's own feelings and intuition often provide decisive clues. "If anything seems strange, alarm bells should ring," he warns.

A case reported by the US news channel CNN shows just how necessary this is. An employee in the finance department of the multinational company Arup was asked to join a video conference with management representatives, where he was told that an important transfer totalling 25 million US dollars was to be made. It was a scam, and the money was gone: the employee was the only real person in the video call - all the others were deepfake imitations.

Author: Uwe Sievers


What you should know about the use of AI!

In the cyber underground, we are seeing AI systems that specialise in different attack scenarios, making social engineering and phishing attacks, for example, even more dangerous. But AI is now also being used intensively in cybersecurity itself, increasing the efficiency of defence measures in security solutions such as threat detection, incident response, phishing protection and SIEM.
