Investigation of explainable AI (XAI) in commercial IIoT platforms

  • In recent years, artificial intelligence and machine learning algorithms have grown in importance and complexity. To increase trust in these algorithms, they must be as transparent as possible. The decisions of deep neural networks and similar complex black-box models in particular are hard to explain and offer little insight. Explainable artificial intelligence (XAI) is a field of AI that aims to make complex AI models and their predictions interpretable. Legislative changes at the national and European level are currently making XAI a prerequisite for artificial intelligence algorithms. This work provides an overview of some of the most relevant XAI techniques and their use cases, which can improve the transparency of complex AI models or boost the effectiveness of simpler interpretable models such as decision trees. The report also surveys the most established Industrial Internet of Things (IIoT) AI platforms with regard to XAI and highlights their strengths and weaknesses in this area. This should make it easier to identify relevant XAI techniques while pointing to appropriate AI platforms.
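To illustrate the kind of technique the abstract refers to, the following is a minimal, self-contained sketch of permutation feature importance, one common model-agnostic XAI method (not taken from the thesis itself; the toy model and data are hypothetical). It measures how much a model's accuracy drops when one feature's values are shuffled, which indicates how much the model relies on that feature.

```python
import random

# Toy "model": predicts 1 when the first feature exceeds a threshold.
# This stands in for any trained black-box model (hypothetical example).
def model_predict(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model_predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """Drop in accuracy after shuffling one feature's column."""
    rng = random.Random(seed)
    baseline = accuracy(X, y)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return baseline - accuracy(X_perm, y)

# Feature 0 drives the predictions; feature 1 is noise the model ignores.
X = [[i / 9, (i * 7) % 10 / 10] for i in range(10)]
y = [1 if row[0] > 0.5 else 0 for row in X]

print(permutation_importance(X, y, 0))  # typically a clear accuracy drop
print(permutation_importance(X, y, 1))  # zero: the model ignores feature 1
```

Because the toy model never reads feature 1, its importance is exactly zero, while shuffling feature 0 degrades accuracy; real XAI libraries apply the same idea to trained models over repeated shuffles.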

Author:Marlo Swora
Advisor:Christian Sauer, Leonhard Faubel
Document Type:Study Thesis
Year of Completion:2023
Publishing Institution:Stiftung Universität Hildesheim
Release Date:2023/07/06
Tag:IoT; Machine Learning; XAI
Page Number:48
PPN:Link to the catalogue
Institutes:Fachbereich IV
DDC classes:000 General works, computer science, information science / 000 General works, science / 004 Computer science
Licence (German):Creative Commons - Attribution - NonCommercial - ShareAlike 4.0