Author: Damian Puchalski (ITTI)

Artificial Intelligence (AI) models, while incredibly powerful, often operate as “black boxes,” making decisions without giving the user a clear explanation. This lack of transparency is especially problematic in critical areas such as cybersecurity: in cyber threat detection, understanding the reasoning behind an AI’s decision to flag a threat is essential.

TRUST4AI.xAI, the explainability component of the AI4CYBER framework developed by ITTI, tackles the AI interpretability challenge with a set of tools that explain the decisions of AI models, making them more transparent, trustworthy, and easier to work with. We’ve designed the TRUST4AI.xAI system to be modular, scalable, and user-friendly.

TRUST4AI.xAI consists of several key subcomponents.

The first is the user-friendly dashboard. Built with React, this intuitive dashboard lets users visualize and analyze AI model decisions and select from different explanation methods to understand why a particular decision was made.

Our platform integrates external AI models through standard interfaces (REST APIs and WebSockets), making it compatible with a wide range of AI systems. Technologies such as Kafka ensure smooth communication between the tool’s internal services and the outside world, and the microservice architecture keeps TRUST4AI.xAI scalable and maintainable.
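
To give a concrete feel for this integration style, here is a minimal client-side sketch in Python. The endpoint path, payload fields, and Kafka topic name are hypothetical placeholders rather than the documented TRUST4AI.xAI interface; the snippet only illustrates how an external system might request an explanation over REST or consume explanation events from Kafka.

```python
# Illustrative client sketch; the endpoint path, payload schema and Kafka
# topic name are hypothetical, not the documented TRUST4AI.xAI API.
import json

import requests
from kafka import KafkaConsumer  # pip install kafka-python

# 1) Request an explanation for a single prediction over REST.
sample = {"flow_duration": 1200, "total_fwd_packets": 48, "protocol": 6}
resp = requests.post(
    "http://localhost:8080/api/explanations",   # hypothetical endpoint
    json={"model_id": "ids-classifier", "method": "shap", "instance": sample},
    timeout=30,
)
resp.raise_for_status()
print("Explanation:", resp.json())

# 2) Alternatively, consume explanation results asynchronously from Kafka.
consumer = KafkaConsumer(
    "xai-explanations",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
for message in consumer:
    print("Received explanation event:", message.value)
    break  # stop after the first event in this demo
```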

But we didn’t stop at building the tool. To ensure its reliability, we carried out comprehensive evaluations on popular cybersecurity datasets (CIC-IDS2018, CIC IoT 2023), verifying that the system accurately and clearly explains why particular network activities are classified as threats.

A significant contribution of our project is the development of new explainability metrics: ARIA, HaRIA, and GeRIA [1]. These estimate how useful a feature is likely to be to an AI model before the model is even trained. This novel pre-model approach helps users quickly identify the most informative features, saving valuable resources. We’ve also examined how certain data perturbations, such as missing values or noisy data, affect the reliability of AI explanations; understanding this helps improve the robustness of AI systems in real-world scenarios.
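
As a rough illustration of the idea (not the actual ARIA, HaRIA, or GeRIA formulas, which are defined in [1]), the sketch below uses scikit-learn’s mutual information as a stand-in pre-model informativeness score: it ranks features before any model is trained, then injects Gaussian noise into the top feature and re-scores it, showing the kind of perturbation effect discussed above.

```python
# Conceptual sketch only: mutual information stands in for a pre-model
# informativeness score; it is NOT the ARIA/HaRIA/GeRIA definition from [1].
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)

# Synthetic stand-in for a tabular network-traffic dataset.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           n_redundant=2, random_state=42)

# Score and rank features before any model is trained.
scores = mutual_info_classif(X, y, random_state=42)
ranking = np.argsort(scores)[::-1]
print("Pre-model feature ranking:", ranking)

# Perturb the most informative feature with Gaussian noise and re-score it,
# mimicking how noisy data can degrade both the feature's usefulness and the
# reliability of explanations that depend on it.
X_noisy = X.copy()
top = ranking[0]
X_noisy[:, top] += rng.normal(scale=3.0 * X[:, top].std(), size=len(X))
noisy_scores = mutual_info_classif(X_noisy, y, random_state=42)
print(f"Feature {top}: clean score {scores[top]:.3f} -> "
      f"noisy score {noisy_scores[top]:.3f}")
```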

To validate and share our findings, we’ve published numerous papers emphasizing the need for clear standards and informative metrics to evaluate how effectively xAI methods explain the decisions of AI. Our work is a practical, ethical, and innovative step toward making AI more transparent and trustworthy.

More information about the component and the explainability techniques it uses can be found in [2].

[1] Pawlicki, M. (2024). ARIA, HaRIA and GeRIA: Novel Metrics for Pre-Model Interpretability. IEEE Access. 

[2] Pawlicki, M., Puchalski, D., Szelest, S., Pawlicka, A., Kozik, R., & Choraś, M. (2024, July). Introducing a Multi-Perspective xAI Tool for Better Model Explainability. In Proceedings of the 19th International Conference on Availability, Reliability and Security (pp. 1-8).