Explainable AI with Python

By Leonida Gianfagna and Antonio Di Cecco


Explainable AI with Python provides a comprehensive overview of the concepts and techniques available today to make machine learning systems more understandable. The approaches presented can be applied to almost all current machine learning models, such as linear and logistic regression, deep learning neural networks, and models for natural language processing and image recognition.

Advances in machine learning are helping to increase the use of artificial agents capable of performing critical tasks previously handled by humans (such as in healthcare or legal and financial activities). Although the principles guiding the design of these agents are clear, most of the deep-learning models used remain “opaque” to human understanding. The book fills the current gap in the literature dealing with this topic, adopting both a theoretical and practical perspective, and making the reader quickly able to work with the tools and code used for Explainable AI systems.

Starting with examples of what Explainable AI (XAI) is and why it is so necessary, the book details different approaches to XAI depending on the context and specific needs. Practical examples of models that can be interpreted with the use of Python are then presented, showing how intrinsically interpretable models can be interpreted and how “human-understandable” explanations can be produced. It is shown that model-agnostic methods for XAI produce explanations without relying on the inner workings of “opaque” ML models. Using examples from Computer Vision, the authors then examine explainable models for Deep Learning and potential methods for the future. With a practical perspective, the authors also demonstrate how to effectively use ML and XAI in science. The final chapter explains Adversarial Machine Learning and how to perform XAI with adversarial examples.
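To illustrate the model-agnostic idea mentioned above, here is a minimal sketch (not code from the book) using permutation importance from scikit-learn: the model is treated purely as a black box, and an explanation is obtained by measuring how much shuffling each input feature degrades predictive performance.

```python
# Minimal model-agnostic explanation sketch: permutation importance.
# The explainer never inspects the model's internals; it only queries
# predictions, so the same code works for any fitted estimator.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
X, y = data.data, data.target

# Any "opaque" model would do here; a random forest is just an example.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times and record the mean drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Features whose permutation causes the largest accuracy drop are the ones the model relies on most, which is exactly the kind of “human-understandable” explanation the book discusses.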


If you are interested in the book Explainable AI with Python, click here.

The Authors

 


Leonida Gianfagna

Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in cybersecurity as R&D Director for Cyber Guru. Prior to joining Cyber Guru, he worked at IBM for 15 years in leading roles in IT Service Management (ITSM) software development. He is the author of several publications in the fields of theoretical physics and computer science and is accredited as an IBM Master Inventor.


Antonio Di Cecco

Antonio Di Cecco is a theoretical physicist with a solid mathematical background who is actively engaged in AIML training. The strength of his approach lies in deepening the mathematical foundations of AIML models, which opens up new ways of presenting AIML knowledge and room for improvement over the existing state of the art.
