Why is Cyber Guru training different?

AI & ML Expert Talks
4 July 2023

Because it is the only one that does not rely on well-known phishing traps, such as ungrammatical text or suspicious topics. Instead, thanks to its artificial intelligence and machine learning programs, it takes a customised and, above all, adaptive approach, which makes the training far more effective and better suited to new cyber attack techniques, such as those arising from the malicious use of AI.

Let’s delve deeper into the topic with Leonida Gianfagna (find the other interviews here and here), a theoretical physicist specialising in elementary particles and now head of Cyber Guru’s Research & Development department.

Let us come to the topic that interests us most. Will artificial intelligence, with all its new forms, increase the risk of being attacked by cybercrime? And if so, how and why?

The answer is never clear-cut. On the one hand, crime is becoming more and more sophisticated, because a hacker can poison an AI system, i.e. confuse it so that it can no longer recognise signals or data correctly. They can do this, for example, by simply altering a few pixels of an image so that the machine no longer recognises it.
At the same time, however, Explainable AI can reveal when this is happening, making the whole system more resilient.
Of course, this requires people to make a different, and perhaps greater, effort of understanding to keep up with the speed of these changes.
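
To make the "altered pixels" idea concrete, here is a minimal, purely illustrative Python sketch (not Cyber Guru code): an FGSM-style perturbation, which nudges every pixel of a toy image by at most 2% against the weights of a simple linear classifier, is enough to flip the classifier's decision even though the image is practically unchanged. The classifier and all the numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" of 64 pixels and a toy linear classifier: w @ x + b > 0 -> "cat".
x = rng.uniform(0.4, 0.6, size=64)
w = rng.normal(0.0, 1.0, size=64)
b = -float(w @ x) + 0.05          # chosen so the clean image is classified as "cat"

def predict(img):
    return "cat" if float(w @ img) + b > 0 else "not cat"

# FGSM-style perturbation: shift every pixel by at most epsilon
# against the sign of the classifier weights.
epsilon = 0.02
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print(predict(x))                          # "cat"
print(predict(x_adv))                      # very likely "not cat"
print(float(np.max(np.abs(x_adv - x))))    # per-pixel change of at most 0.02
```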

It is a necessary learning process that calls for a dedicated course of training and practice, one that is above all in step with the times. This is exactly what Cyber Guru offers its customers.

What is it about exactly?

Highly individualised training that is as realistic as possible. The model is very similar to the algorithms behind shopping recommendations: each person gets different suggestions depending on his or her tastes, schedule and online habits. That is why the offers that reach me are different from those directed at other people.

And, from time to time, the model adjusts its aim, like in a chess game where the opponent's move depends on my behaviour on the board.

In our training platform, the opponent is the artificial intelligence program, which keeps challenging learners with problems that change depending on the responses they give.

So each phishing email is sent at the times when the person it is addressed to is most likely to click, and contains a challenge appropriate to their level of preparedness. Obviously, the bar of difficulty is constantly being raised.

For us, the key question is not "how hard is it for you to click" but "how likely is it for you to do so". It seems like a minor difference, but it is substantial.
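
As a thought experiment only (this is not the Cyber Guru algorithm, and every rule and number below is an assumption of ours), the adaptive idea can be sketched in a few lines of Python: the platform tracks each learner's estimated probability of clicking and only raises the difficulty of the next simulated phishing email once that probability drops.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    difficulty: int = 1                           # 1 = obvious lure ... 5 = highly targeted
    history: list = field(default_factory=list)   # True = clicked, False = reported/ignored

    def click_probability(self) -> float:
        """Crude estimate: share of clicks over the most recent simulations."""
        recent = self.history[-10:]
        if not recent:
            return 0.5                            # no data yet: assume a coin flip
        return sum(recent) / len(recent)

    def record(self, clicked: bool) -> None:
        self.history.append(clicked)
        if not clicked and self.click_probability() < 0.2:
            # Consistently resisting the current lures: raise the bar.
            self.difficulty = min(5, self.difficulty + 1)

learner = Learner()
for clicked in [True, False, False, False, False, False]:
    learner.record(clicked)
print(learner.difficulty, round(learner.click_probability(), 2))
```

In a real platform the click-probability estimate would of course come from a richer behavioural model rather than a simple moving average, but the principle is the same: the quantity being tracked is the likelihood of clicking, not an abstract notion of difficulty.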

Is this a novelty in the field of cybersecurity training?

Yes, absolutely. All the phishing training currently on the market is calibrated on traps that more or less everyone can recognise: ungrammatical text, emails on suspicious topics, typos, and so on. In the long run, this approach does not guarantee the expected results, because the audience it addresses is highly varied and evolves at different speeds and in different ways.

Moreover, for some people such an approach can be unstimulating and therefore demotivating. A personalised and, above all, adaptive approach instead makes training much more effective. A high level of awareness and risk consciousness among all employees, together with a response capability tailored to each individual for the different types of attack, creates an unassailable barrier for any hacker. Because, we should remember, the weak point in security is always the human factor.

Of course, this adaptive model is only achievable through artificial intelligence and machine learning programs. It is these programs that, based on the data collected about users, decide what type of email to send to each individual.
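
To illustrate that last point, here is a hypothetical sketch of the selection step (the feature names, weights and templates are all invented): the features collected about a user are scored against a few candidate simulation templates with a simple logistic model, and the template with the highest estimated click probability is the one sent.

```python
import math

# Hypothetical profile built from data collected about one user.
user = {"opens_invoices": 0.9, "uses_cloud_storage": 0.2, "travels_often": 0.7}

# Candidate simulation templates, each sensitive to different user traits.
templates = {
    "fake_invoice":     {"opens_invoices": 2.0},
    "shared_document":  {"uses_cloud_storage": 2.5},
    "itinerary_change": {"travels_often": 1.8},
}

def click_probability(profile, weights, bias=-1.0):
    """Simple logistic score: how likely is this user to click this template?"""
    score = bias + sum(weights.get(k, 0.0) * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-score))

for name, w in templates.items():
    print(name, round(click_probability(user, w), 2))

best = max(templates, key=lambda t: click_probability(user, templates[t]))
print("next simulation:", best)   # the lure this user is most likely to fall for
```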

Is it a training model that can only be used for phishing?

We started with phishing, but the goal is to extend the method to all of our training, accompanying each individual towards a structured digital posture that can recognise all the traps and dangers of the web.
