Publication 20/11/2019
Handling security risks in industrial applications due to lack of explainability of AI results
AI can be applied in many areas of industrial development, production, and manufacturing. However, to put AI into action in industry safely and successfully, certain conditions must be met. These include:
1. Understanding the goals that AI can or cannot achieve
2. Knowing the influencing factors that inherently accompany the use of AI systems
Fundamental aspects of this topic have already been described in "". The present publication goes further by outlining the importance of AI's explainability for security issues. It primarily addresses the question of how human observers can understand the decisions of an AI system, and how hidden bugs in architecture, configuration, and training can be tracked down and corrected: what did the system actually "learn", and what were the drivers behind this process?
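The publication itself contains no code, but as a minimal sketch of what such an explanation can look like in practice, the following hypothetical Python example uses permutation feature importance (via scikit-learn) to probe which inputs a trained model actually relies on. A feature that dominates unexpectedly can point to hidden bugs in the training data or configuration; the dataset and model here are purely illustrative.

```python
# Minimal sketch (not from the publication): permutation feature importance.
# It measures how much held-out accuracy drops when each feature is shuffled,
# revealing what the model actually "learned" to rely on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the accuracy drop;
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: p[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```

An operator reviewing such a ranking can check whether the model's decisive features are plausible for the task, or whether they expose an artefact of flawed training.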
The use of artificial intelligence can create new security problems. At the same time, AI is also suitable as a basis for novel weapons. These can be used not only against AI-based systems but also cause a broader overall vulnerability. The paper addresses both sides and provides explanations and guidance for operators.