{"id":1701,"date":"2025-05-26T10:32:09","date_gmt":"2025-05-26T08:32:09","guid":{"rendered":"https:\/\/ai4cyber.eu\/?p=1701"},"modified":"2025-05-26T10:32:28","modified_gmt":"2025-05-26T08:32:28","slug":"ai4cyber-blogpost-trust4ai-xai-enhancing-ai-transparency-and-trustworthiness-in-cybersecurity","status":"publish","type":"post","link":"https:\/\/ai4cyber.eu\/?p=1701","title":{"rendered":"AI4CYBER Blogpost: TRUST4AI.xAI: Enhancing AI Transparency and Trustworthiness in Cybersecurity"},"content":{"rendered":"<p><strong><em>Author: <span class=\"TextRun SCXW168447974 BCX8\" lang=\"EN\" xml:lang=\"EN\" data-contrast=\"auto\"><span class=\"NormalTextRun CommentHighlightPipeRest SCXW168447974 BCX8\">Damian Puchalski<\/span><\/span><span class=\"EOP TrackedChange SCXW168447974 BCX8\" data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335551550&quot;:1,&quot;335551620&quot;:1,&quot;335559685&quot;:0,&quot;335559737&quot;:0,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:276}\"> (ITTI)<\/span><\/em><\/strong><\/p>\n<p><span data-contrast=\"auto\">Artificial Intelligence (AI) models, while incredibly powerful, often operate as &#8220;black boxes,&#8221; making decisions for which the user has no clear explanations. This lack of transparency can be problematic, especially in crucial areas like cybersecurity, for example, in cyberthreat detection where understanding AI&#8217;s reasoning behind identifying threats is critical.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The explainability-serving component of AI4CYBER framework, TRUST4AI.xAI, which ITTI developed in the context of AI4CYBER project, tackles the AI interpretability challenge by employing a set of tools that help explain the decisions of AI models, making them more transparent, trustworthy, and easier to handle. 
We&#8217;ve designed the TRUST4AI.xAI system to be modular, scalable, and user-friendly.\u00a0<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">TRUST4AI.xAI consists of several key subcomponents.\u00a0<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The first is the user-friendly dashboard. Built with React, this intuitive dashboard lets users visualize and analyze AI model decisions and choose among different methods for explaining why a decision was made.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The platform integrates external AI models through standard protocols (REST APIs and WebSockets), making it compatible with a wide range of AI systems. Leveraging technologies such as Kafka, the tool ensures smooth communication between its components and the outside world. TRUST4AI.xAI employs a microservice architecture, which keeps it scalable and maintainable.\u00a0<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">We didn\u2019t stop at building the tool: we also tested the component to verify its reliability. We performed comprehensive evaluations on popular cybersecurity datasets (CIC-IDS2018, CIC IoT 2023) to confirm that the system accurately and clearly explains why certain network activities are classified as threats.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">A significant contribution of our project is the development of new explainability metrics: ARIA, HaRIA, and GeRIA [1]. 
These metrics estimate how useful a feature is likely to be to an AI model before the model is even trained. This novel approach helps users quickly identify the most informative features, saving valuable resources. We&#8217;ve also investigated how certain types of data perturbation, such as missing values or noisy data, affect the reliability of AI explanations; understanding this helps improve the robustness of AI systems in real-world scenarios.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">To validate and share our findings, we&#8217;ve published numerous papers emphasizing the importance of clear standards and informative metrics for evaluating how effectively xAI explains the decisions of AI. Our work is a practical, ethical, and innovative step toward making AI more transparent and trustworthy.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">More information about the component and the explainability techniques it uses can be found in [2].<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335551550&quot;:1,&quot;335551620&quot;:1,&quot;335559685&quot;:0,&quot;335559737&quot;:0,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:276}\">\u00a0<\/span><\/p>\n<p><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335551550&quot;:1,&quot;335551620&quot;:1,&quot;335559685&quot;:0,&quot;335559737&quot;:0,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:276}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">[1] Pawlicki, M. (2024). ARIA, HaRIA and GeRIA: Novel Metrics for Pre-Model Interpretability. 
IEEE Access.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">[2] Pawlicki, M., Puchalski, D., Szelest, S., Pawlicka, A., Kozik, R., &amp; Chora\u015b, M. (2024, July). Introducing a Multi-Perspective xAI Tool for Better Model Explainability. In Proceedings of the 19th International Conference on Availability, Reliability and Security (pp. 1-8).<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Damian Puchalski (ITTI) Artificial Intelligence (AI) models, while incredibly powerful, often operate as &#8220;black boxes,&#8221; making decisions for which the user has no clear explanations. This lack of transparency can be problematic, especially in crucial areas like cybersecurity, for example, in cyberthreat detection where understanding AI&#8217;s reasoning behind identifying threats is critical.\u00a0 \u00a0 The [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[11],"tags":[],"class_list":["post-1701","post","type-post","status-publish","format-standard","hentry","category-blog"],"_links":{"self":[{"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=\/wp\/v2\/posts\/1701","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1701"}],"version-history":[{"count":3,"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=\/wp\/v2\/posts\/1701\/revisions"}],"predecessor-version":[{
"id":1715,"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=\/wp\/v2\/posts\/1701\/revisions\/1715"}],"wp:attachment":[{"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1701"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1701"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ai4cyber.eu\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}