Artificial intelligence and virtual kidnapping: the new threat from the dark web

Security Awareness
16 October 2023
Artificial intelligence and kidnapping

Virtual kidnapping, how new cyber-scams work through the use of artificial intelligence

There was a period in Italy, between the mid-1970s and the mid-1980s, when there were many kidnappings, all for the purpose of extortion.
It is an era that those who are no longer very young, in particular, remember well.
The climate of fear in many families, particularly wealthy ones, was widespread and pervasive.
In 17 years there were 600 kidnappings, as the newspaper La Repubblica reported in a 1989 front-page article summarising that era: of those 600 kidnappings, 441 were resolved positively by law enforcement, while 152 remained without an investigative outcome; 2,134 people were arrested for involvement in various capacities. The estimated turnover was around 800 billion lire at the time.
From Sicily, Sardinia and Calabria, the regions most affected at the beginning, the phenomenon then spread to almost the entire country.

Fortunately, that period has been over for many years now, mainly thanks to particularly strict legislation that discouraged criminals, although a few isolated cases of kidnapping did occur in the 1990s.

Today, however, criminals have a weapon at their disposal that they did not have a few years ago: technology. While this makes our lives more comfortable, it also requires us to take a mindful approach in order to avoid becoming entangled in its many webs of dangers. And we are not talking about trivial risks.

Indeed, it appears that even on the kidnapping front, technology has become a new weapon in the hands of criminals. So much so that “virtual kidnappings” have emerged: a new way of trying to extort money from unfortunate victims using artificial intelligence and voice cloning systems.

To explore the topic further, Trend Micro, a global leader in cybersecurity, recently published a study entitled “Virtual Kidnapping: How AI Voice Cloning Tools and Chat GPT are Being Used to Aid Cybercrime and Extortion Scams”.

The research points out that with the advent of artificial intelligence, it has become easier to manipulate voices and use that manipulation to simulate the kidnapping of a real person.

This criminal activity follows a very precise pattern, as is usually the case with most scams.
The victim receives a call claiming that a family member (often a child) has been kidnapped, and the family member’s voice is reproduced using AI voice cloning tools to make the story appear more plausible. At that point, a large sum of money is demanded, with the promise that the alleged abductee will be released.

To pull the whole thing off, the fraudster also works out, from the victim’s social media updates, when the victim is actually far away from the family member, so that the kidnapping seems plausible.

The scam, which falls into the category of vishing, in addition to arousing great alarm and concern, generates a lot of mistrust and scepticism towards the use of artificial intelligence. Not only among private citizens, but also among companies.

Of course, the first piece of advice is not to panic, which is exactly what the criminals want. Do not act recklessly: notify the police and try to contact the “kidnapped” person directly. In most cases, everything will be resolved in a short space of time.

The fact remains that the number of ways an attack can take place is growing day by day, causing widespread unease among private citizens and companies. This is why more and more people are showing a certain mistrust towards the use of AI systems, despite the great benefit that this new technology can bring in terms of innovation.

Regrettably, however, not keeping up with technology today means lagging behind others, so in the end there is little real choice. Like it or not, we have to come to terms with a reality in which we are all already fully immersed, and which will require, in the years to come, ever greater investment in training and knowledge.

At corporate level especially, it is no longer reasonable to neglect up-to-date training, targeted at different roles and situations and capable of keeping pace with fast-moving developments in hacking, which seems inexhaustible in inventing new ways to defraud companies, public administrations, corporations and private citizens. And since it is human behaviour that attackers target, because it is the weakest link in the security chain, it is human behaviour that must be trained in order to strengthen defences.

The only thing that can really stop criminals is coming up against users who are prepared and able to respond to attacks just as cleverly.

