Author: Vincent Thouvenot, TSG

The AI4CYBER project addresses several challenges regarding both Security of AI (Artificial Intelligence) and AI for Security, which are two major topics in AI. Security of AI consists in proposing tools and algorithms to ensure that an AI model is trained and used securely, from training to inference. AI for Security means that AI can be used to help secure a critical system by detecting cybersecurity vulnerabilities or attacks.

An important part of AI4CYBER consists in improving the security and trustworthiness of AI. Thanks to the Federated Learning approach developed in the AI4FIDS component, we are able to train a deep learning model across several data owners that do not have to share their datasets. Each dataset remains local and only information about the model (e.g. weights or gradients) is exchanged between the participants. By working on the aggregation of this information, AI4FIDS limits the risk of information leaks. Robustness of models is addressed by both AI4CTI and TRUST4AI.Security. The AI4CTI component extracts information from multiple heterogeneous Cyber Threat Intelligence data sources and can be used to detect adversarial attacks on AI systems or to enrich a service (AI4SIM) dedicated to the simulation of attacks against a system. TRUST4AI.Security focuses on adversarial and poisoning attacks against AI models, from both the attack and the defence point of view. By proposing attacks, we are able to audit an AI model and highlight potential attack surfaces. By proposing defences, we are able to correct these problems. TRUST4AI.XAI and TRUST4AI.Fairness allow AI model inspection along different axes. While TRUST4AI.XAI focuses on post-hoc model inspection, seeking to highlight the features that contribute most to a model's predictions, TRUST4AI.Fairness detects bias in an AI model (e.g. a model whose quality differs between men and women) and mitigates that bias before, during, or after the training of the AI model.
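To make the federated setup concrete, here is a minimal sketch of federated averaging (FedAvg-style aggregation): each data owner trains locally and shares only its model weights and dataset size, never its data. The flat-list weight representation and the client values below are illustrative assumptions, not AI4FIDS internals.

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    aggregated = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            # Each client contributes proportionally to how much data it holds.
            aggregated[i] += w * (size / total)
    return aggregated

# Three hypothetical data owners; only weights and dataset sizes are exchanged.
clients = [[0.2, 1.0], [0.4, 0.8], [0.6, 1.2]]
sizes = [100, 300, 600]
global_weights = fed_avg(clients, sizes)
```

Weighting by dataset size keeps the global model from being dominated by small, possibly unrepresentative participants; secure aggregation schemes can further hide individual contributions from the server.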

On the other side, AI4CYBER uses AI to improve the security of critical systems. For example, thanks to AI4FIDS, we are able to develop anomaly detection models to detect intrusions on critical systems. AI4SIM provides AI-powered simulation to prepare datasets used to train AI models. Using AI-based approaches, AI4VULN analyses vulnerabilities. AI4TRIAGE performs root-cause analysis. AI4COLLAB, a component dedicated to information sharing and collaboration, provides anonymization tools to avoid disclosure of private or sensitive information. AI4ADAPT uses reinforcement learning to propose self-healing actions.
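As an illustration of anomaly-based intrusion detection, the sketch below fits a simple statistical baseline on normal traffic and flags events that deviate too far from it. The features (bytes transferred, connection duration), the threshold, and the sample values are hypothetical, and real detectors such as those in AI4FIDS use learned models rather than this per-feature z-score.

```python
from statistics import mean, stdev

def fit_baseline(normal_events):
    """Learn per-feature mean and standard deviation from normal traffic."""
    columns = list(zip(*normal_events))
    return [(mean(col), stdev(col)) for col in columns]

def is_anomalous(event, baseline, threshold=3.0):
    """Flag the event if any feature deviates more than `threshold` sigmas."""
    return any(abs(x - m) / s > threshold
               for x, (m, s) in zip(event, baseline) if s > 0)

# Hypothetical normal flows: (bytes transferred, duration in seconds).
normal_traffic = [(500, 1.0), (520, 1.2), (480, 0.9), (510, 1.1)]
baseline = fit_baseline(normal_traffic)
print(is_anomalous((50000, 30.0), baseline))  # large transfer → True
print(is_anomalous((505, 1.0), baseline))     # ordinary flow → False
```

The same structure (fit on benign data, score deviations at inference time) carries over to autoencoder- or density-based detectors trained federatively.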

By addressing both the security of AI-based systems and the use of AI to secure critical systems, AI4CYBER covers a broad and comprehensive field of AI for cybersecurity.

Moreover, AI4CYBER involves many different and rich AI approaches: from anonymization to federated learning, anomaly detection, AI fairness, root-cause analysis, evasion and poisoning attacks, AI interpretability, game theory, reinforcement learning, etc. The breadth of AI methods covered is therefore a great opportunity for technology providers. AI4SOAR, the next-generation security orchestration component, encapsulates all the AI-based services developed in AI4CYBER, helping technology providers integrate their tools.

Last but not least, AI4CYBER offers technology providers the opportunity to implement, test, benchmark, validate, and integrate the components they develop in three use cases: the detection and mitigation of AI-powered attacks in the energy, banking, and health sectors.