Investigation of explainable AI (XAI) in commercial IIoT platforms
- In recent years, artificial intelligence and machine learning algorithms have grown in both importance and complexity. To increase trust in these algorithms, they must be as transparent as possible. The decisions of deep neural networks and similar complex black-box models in particular are hard to explain and offer little insight. Explainable artificial intelligence (XAI) is a field of AI that aims to make complex AI models and their predictions interpretable. Currently, legislative changes at the national and European level require XAI as a prerequisite for artificial intelligence algorithms. This work provides an overview of some of the most relevant XAI techniques and their use cases, which can help improve the transparency of complex AI models or boost the effectiveness of simpler interpretable AI models such as decision trees. The report also surveys the most established Industrial Internet of Things (IIoT) AI platforms with regard to XAI and highlights their strengths and weaknesses in this area. This should make it easier to identify relevant XAI techniques while pointing to appropriate AI platforms.
| Author: | Marlo Swora |
| --- | --- |
| URN: | https://nbn-resolving.org/urn:nbn:de:gbv:hil2-opus4-17283 |
| DOI: | https://doi.org/10.25528/172 |
| Advisors: | Christian Sauer, Leonhard Faubel |
| Document Type: | Study Thesis |
| Language: | English |
| Year of Completion: | 2023 |
| Publishing Institution: | Stiftung Universität Hildesheim |
| Release Date: | 2023/07/06 |
| Tags: | IoT; Machine Learning; XAI |
| Number of Pages: | 48 |
| PPN: | Link to catalogue |
| Institutes: | Fachbereich IV |
| DDC classes: | 000 Generalities, computer science, information science / 000 Generalities, science / 004 Computer science |