Deepfake phishing: the most frightening crime

Security Awareness
5 April 2023
deepfake phishing

Nothing is as it seems.

We have to get used to it: the reality we live in is increasingly fake, synthetic, and altered.
The main points of reference and authority melt like snow in the sun, and we all struggle to tell the true from the false; some have even begun to question whether “reality” exists at all. This new perception now pervades every sector of society, but the web, and everything that derives from it, is the undisputed arena of this confrontation between the true and the false: a nebulous, indistinct space in which we move tentatively, often unaware of the consequences of our actions, and into which artificial intelligence is now making its entrance, with its whole retinue of tools perfectly suited to distorting reality. Distortion upon distortion.

Without analysing all its implications here, we can nevertheless say that this is the perfect playground for anyone who wants to control our lives, and a veritable paradise for cybercriminals.

Among the darkest and most nefarious risks to emerge in recent times is deepfake phishing, which uses AI-generated media to produce fake images and videos or to reproduce a person’s exact tone and timbre of voice.
The “trick” has already been pulled off with well-known personalities such as Tom Cruise, Mark Zuckerberg and President Obama, but its most disturbing application is the near-perfect imitation of CEOs and company executives giving orders to their subordinates.

These are attacks that can have a devastating effect. In 2021, for example, cybercriminals used AI voice cloning to mimic the CEO of a large company and tricked the manager of the organisation’s bank into transferring $35 million to another account to complete an “acquisition”.
A similar incident occurred in 2019, when a scammer called the CEO of a British company, imitating the voice of the chief executive of its German parent company, and demanded an urgent transfer of $243,000 to a Hungarian supplier.

The phenomenon is not only worrying but also rampant: according to the World Economic Forum, the number of deepfake videos online is growing at an annual rate of 900%.

To execute a deepfake phishing attack, hackers use artificial intelligence and machine learning to process a wide range of content, including images, videos, and audio clips. With this data, they create a digital imitation of an individual.

One of the best-known examples of this approach involved Binance in 2022.
Hackers generated a deepfake hologram of Patrick Hillmann, the exchange’s chief communications officer, using content taken from his past interviews and media appearances.

With this approach, attackers can not only mimic the physical attributes of an individual to deceive human users via social engineering, but can also bypass biometric authentication solutions.

Many analysts who have spoken on the subject are not at all optimistic. They predict that the increase in deepfake phishing will continue and that the fake content produced by criminals will become increasingly sophisticated, convincing and difficult to detect.
In short, even though this technology is already claiming victims, we are still in the early stages of a form of phishing that is set to spread like wildfire, threatening to leave frustration and desolation in its wake.


The role of training

One of the main defences against this type of attack is certainly to integrate this threat into corporate cyber security training programmes.

Unfortunately, most organisations, especially in Italy, are far behind on this front and seem not yet to have fully grasped the gravity of the situation in which we are all immersed. The truth is that no one, especially in the world of work, can afford to ignore the threat posed by deepfakes.
Users must be able to identify common warning signs, such as a lack of synchronisation between lip movements and audio, or indicators such as distortions, deformations and inconsistencies in images and videos. Theoretical knowledge alone is not enough: they must continually exercise their ability to recognise suspicious signs, and that means ongoing training. To stay safe you have to be one step ahead of the attacker, and to do that you have to be able to move very fast; after all, we are not dealing with amateurs, but with true marathon runners of cybercrime. It is not as impossible a goal as it may seem, far from it.
The important thing is to study, learn, train and always be on the ball.
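
To make the idea of “inconsistencies in images and videos” slightly more concrete, here is a minimal, purely illustrative Python sketch of the kind of crude automated check a security team might experiment with alongside user training. It only flags frames whose sharpness deviates sharply from the rest of the clip, which is one very rough proxy for frame-level artefacts; it is not a real deepfake detector, and the file name and threshold are assumptions.

```python
# Illustrative sketch only: flag video frames whose sharpness is a statistical
# outlier compared with the rest of the clip, a crude proxy for the frame-level
# distortions and inconsistencies mentioned above. Not a real deepfake detector.
# Assumes OpenCV (cv2) and NumPy are installed; the file name is hypothetical.

import cv2
import numpy as np


def frame_sharpness(frame) -> float:
    """Variance of the Laplacian: a common sharpness/blur measure."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def flag_inconsistent_frames(path: str, z_threshold: float = 3.0) -> list[int]:
    """Return indices of frames whose sharpness deviates strongly from the mean."""
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(frame_sharpness(frame))
    cap.release()

    if len(scores) < 2:
        return []
    scores = np.array(scores)
    z = np.abs((scores - scores.mean()) / (scores.std() + 1e-9))
    return [int(i) for i in np.where(z > z_threshold)[0]]


if __name__ == "__main__":
    suspects = flag_inconsistent_frames("recorded_video_call.mp4")
    print(f"Frames worth a closer look: {suspects}")
```

Heuristics like this can only ever complement the human checks described above: the decisive defence remains a trained employee who pauses, verifies the request through a second channel and refuses to act on a video or a voice alone.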
