A major challenge for machine learning solutions is that their effectiveness in real-world applications is constrained by the machine's current inability to explain its decisions and actions to human users. Biases based on race, gender, age or location have been a long-standing risk in training AI models. Furthermore, AI model performance can degrade when production data differs from training data. Explainable AI (XAI) is the practice of interpreting how and why a machine learning algorithm arrives at its predictions. It helps machine learning practitioners and data scientists understand and interpret a model's behaviour, supports end-users in trusting a model's outputs, enables auditability and the productive use of AI, and mitigates AI compliance, legal, security and reputational risks. Among such applications, the security of IoT infrastructures is vital for building trust in broad-scale applications such as smart healthcare, smart manufacturing, smart agriculture and smart transportation.
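To make the idea concrete, the sketch below illustrates one common XAI technique, perturbation-based feature attribution, applied to a toy IoT intrusion detector. It is a minimal illustration rather than the book's own method: the feature names, the synthetic traffic data and the `occlusion_attribution` helper are hypothetical, and only NumPy and scikit-learn are assumed.

```python
# Minimal sketch of perturbation-based feature attribution for an IoT anomaly
# classifier (an illustration, not the book's method). Each feature of one
# network flow is "occluded" (replaced by its background mean) to measure how
# much it drove the predicted attack probability. Feature names and data are
# hypothetical stand-ins for real IoT telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["packet_rate", "payload_bytes", "dest_port_entropy", "conn_duration"]

# Synthetic "benign vs. attack" flows: attacks skew toward high packet rates
# and high destination-port entropy (e.g., a scanning device).
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def occlusion_attribution(model, x, background):
    """Score each feature of a single sample x by how much replacing it with
    its background mean changes the predicted attack probability."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(x.size):
        x_masked = x.copy()
        x_masked[j] = background[:, j].mean()  # "remove" feature j
        scores.append(base - model.predict_proba(x_masked.reshape(1, -1))[0, 1])
    return base, np.array(scores)

prob, scores = occlusion_attribution(model, X[0], X)
print(f"predicted attack probability: {prob:.3f}")
for name, s in sorted(zip(features, scores), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {s:+.3f}")
```

Printed this way, a security analyst can see which traffic features pushed the classifier toward an "attack" verdict for a specific flow, which is exactly the kind of per-decision transparency the paragraph above describes.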
This co-authored book offers a comprehensive study of explainable artificial intelligence (XAI) for securing the Internet of Things (IoT). The authors present innovative XAI solutions for securing IoT infrastructures against security attacks and privacy threats and cover advanced research topics, including responsible security intelligence. Providing a systematic and thorough overview of the field, this book will be a valuable resource for ICT researchers, AI and data science engineers, security analysts, undergraduate and graduate students, and professionals who wish to gain a fundamental understanding of intelligent security solutions.