To create Cyber Guru’s automatic and adaptive platform, it was necessary to develop a product controlled by a machine learning model.
This was possible thanks to the experience in cybernetics, artificial intelligence and machine learning of Leonida Gianfagna, a theoretical physicist specialising in elementary particles who now heads Cyber Guru's Research & Development sector and is the author of the book Explainable AI with Python.
Your book also talks a lot about Explainable Artificial Intelligence, Explainable AI. Can you explain what it is?
It is a field of research within machine learning that seeks to answer the many questions, including those related to liability, that we all ask about this tool and how it works.
By now, artificial intelligence has assumed an important role in many areas of our lives, even those that are particularly delicate and strategic, such as health, the economy, education, law and many others.
So it might be a machine that makes a diagnosis of a tumour by reading a CT scan or decides whether a person can get a mortgage to buy a house. In short, machines will increasingly provide answers to fundamental questions in the lives of our citizens.
Explainable AI seeks to explain on the basis of which elements and criteria the machine processes its inputs and arrives at its answers. To offer a clearer example: why it may “decide” that I am not entitled to a mortgage while my cousin, on the other hand, receives an affirmative answer.
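The mortgage example can be sketched in a few lines of Python. One common Explainable AI idea is that, for a linear scoring model, each feature's contribution to the decision is simply weight times value, so the verdict can be decomposed and compared between two applicants. The features, weights and applicant data below are invented purely for illustration; real credit-scoring models are far more complex.

```python
# Hypothetical linear scoring model: score >= 0 means "mortgage approved".
# Weights and features are made up for this sketch.
WEIGHTS = {"income": 0.004, "years_employed": 0.5, "open_debts": -1.2}
BIAS = -3.0

def score(applicant):
    """Total score of an applicant under the toy model."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Per-feature contribution to the score: weight * value."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

me     = {"income": 400, "years_employed": 1, "open_debts": 2}
cousin = {"income": 900, "years_employed": 6, "open_debts": 0}

for name, applicant in [("me", me), ("cousin", cousin)]:
    s = score(applicant)
    verdict = "approved" if s >= 0 else "rejected"
    print(name, verdict, explain(applicant))
```

Printing the per-feature contributions next to the verdict is exactly the kind of answer Explainable AI is after: not just "rejected", but which factors pushed the score down.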
Does this also involve questions of responsibility and law?
Yes, these are major themes, too. Who is responsible if the machine makes a wrong deduction?
Since the law and legal remedies cannot keep up with the speed at which technology and artificial intelligence develop, we are moving increasingly towards a mixed system, in which responsibility remains with the human who uses the machine.
The role of Explainable AI becomes increasingly crucial in understanding how computers construct their responses.
There is also a lot of talk in the book about machine learning. How is this different from artificial intelligence?
We must first run through a brief history of artificial intelligence.
The birth of AI can be traced back, symbolically, to the 1950s, with Turing's paper Computing Machinery and Intelligence and its famous question “Can machines think?”; since then, its history has proceeded in fits and starts.
In fact, there have been several difficult periods, called “AI winters”, in which development seemed to have come to a standstill, until a revival around the 2000s, when a period of accelerated progress began, driven mainly by the strong growth in data availability and computational power. These two elements allowed old models to be applied in a totally new context, with astonishing success.
Machine Learning is a subset of AI concerned with creating systems that learn, or improve their performance, based on the data they use. It is a different approach from the purely algorithmic one: the model learns how to solve tasks autonomously, without precise rules programmed from outside.
If I have to teach a machine how to recognise cats, I can go about it in one of two ways:
The first is to describe the characteristics of a cat algorithmically, asking the machine to check whether those elements are present in the submitted image. But this is a difficult route to follow, because cats can be very different from each other, and at the first difficulty the machine can freeze.
The other is to provide the machine with many images of cats and let it build, on its own, a model that allows it to recognise a cat.
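The two routes can be contrasted in a toy Python sketch, using invented numeric "features" in place of real images (a real vision system would, of course, work on pixels). The rule-based route hard-codes what a cat is; the learned route only memorises labelled examples and generalises from them.

```python
# Route 1: hand-written rules. Brittle, because the rules must
# anticipate every kind of cat in advance.
def is_cat_by_rules(animal):
    return animal["has_whiskers"] and animal["says_meow"]

# Route 2: learn from labelled examples, here with a trivial
# 1-nearest-neighbour model over made-up feature vectors.
EXAMPLES = [
    ((1.0, 1.0), "cat"),   # (whisker_score, meow_score) -> label
    ((0.9, 0.8), "cat"),
    ((0.0, 0.1), "dog"),
    ((0.1, 0.0), "dog"),
]

def classify(features):
    """Return the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(EXAMPLES, key=lambda ex: dist(ex[0], features))[1]

print(classify((0.8, 0.9)))  # close to the cat examples -> "cat"
```

The second route never states what a cat is; the notion is implicit in the examples, which is exactly the shift from programmed rules to learned models described above.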
Machine Learning therefore draws on large amounts of data, building a model (a neural network, or another type of ML model) and searching for the right mapping between input and output, until the process is validated.
When you are satisfied with the accuracy of the answers, you can begin to submit data the machine has never seen before, drawing on the web and the near-infinite amount of data to be found there.
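The validate-then-apply step described here can be made concrete with a minimal Python sketch: fit a very simple threshold classifier on training data, check its accuracy on a held-out set the model has never seen, and only then use it on new inputs. All the numbers are invented for illustration.

```python
# Toy labelled data: (value, label) pairs.
train = [(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
held_out = [(0.1, 0), (0.8, 1)]   # never shown during "training"

# "Training": place the threshold midway between the two class means.
mean0 = sum(v for v, y in train if y == 0) / 2
mean1 = sum(v for v, y in train if y == 1) / 2
threshold = (mean0 + mean1) / 2

def predict(v):
    return 1 if v >= threshold else 0

# Validation: measure accuracy on data the model has never seen.
accuracy = sum(predict(v) == y for v, y in held_out) / len(held_out)
print("held-out accuracy:", accuracy)

# Only once the accuracy is satisfactory would genuinely new,
# unlabelled inputs be fed to the model.
print(predict(0.65))
```

Real systems replace the threshold with a neural network and the handful of pairs with millions of examples, but the loop is the same: train, validate, then release onto unseen data.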
It is the same mechanism on which the much-discussed ChatGPT is based.
Have we reached a point where machines can be called intelligent?
The issue is more complex than it seems and has always been debated.
We must first ask ourselves what is meant by intelligence. A computer processes data and executes programs, but does not “understand” what it is doing. Certainly, at a superficial glance, its answers may seem similar to those of humans, but the two worlds remain distant and distinct: the complexity of the human mind does not apply to computers.
This is a very important issue that was clearly posed as early as the 1980s, when John Rogers Searle, a Berkeley professor known for his contributions to the philosophy of language and philosophy of mind, proposed the famous Chinese room experiment to show that the human mind could not be replicated within a machine.
What is it?
Imagine that in a room there is a machine and a man who speaks only English.
Both are given a Chinese text to translate. Obviously, the computer has no problem whatsoever performing its task. The man is given a book of rules, written in English, explaining how to match the Chinese symbols to his language; following those instructions to the letter, he too begins to produce answers. The answers he produces are formally correct, because he has followed precisely the instructions handed to him along with the ideograms, instructions that can be compared to computer software.
Despite this, he has understood nothing of his answers and obviously has not learned Chinese, although at a superficial glance it might seem otherwise.
Here is the point: a machine behaves like the protagonist of this experiment. It executes the program written in a programming language (its mother tongue), but essentially manipulates symbols whose meaning it does not know; its operation is purely syntactic. The complexity of the human mind cannot be imitated by a machine, which, in fact, does not need to “understand”, but only to process and manipulate data correctly.
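The rule-following at the heart of the Chinese room can be sketched as a pure lookup in Python: the program maps input symbols to output symbols with no notion of meaning whatsoever. The symbol pairs below are invented placeholders for the rule book.

```python
# The "rule book": purely syntactic pairings of symbols.
RULE_BOOK = {"你好": "hello", "谢谢": "thank you"}

def follow_rules(symbol):
    """Match the input symbol, emit the paired symbol, understand nothing."""
    return RULE_BOOK.get(symbol, "?")

print(follow_rules("你好"))  # emits "hello", yet nothing is "understood"
```

Like the man in the room, the function produces formally correct output while knowing nothing about what the symbols mean; that is Searle's point about syntax without semantics.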
Returning to the example of the cats, it is enough to consider that a child does not need to process thousands of images of cats: seeing two or three is enough for an idea of a cat to form in his or her mind, one that will enable him or her to recognise that animal again and again.