Author: Eider Iturbe (TECNALIA)

Recently, prominent actors with influence on the use and application of Artificial Intelligence (AI) have raised growing concerns about its use, specifically about the risks of AI techniques known as Large Language Models (LLMs).

An open letter signed by AI experts called for a pause in the training of AI systems [1], asking for a minimum of six months during which no AI systems more powerful than GPT-4 would be trained, so that it can first be ensured that such systems will not have negative effects and that their risks can be kept under human control. However, as AI researchers have noted, it remains unclear what exactly counts as "more powerful"; and this concern persists despite the recent revelation of fake signatures on the letter [2].

Europol has also expressed its concern about the malicious use of this type of solution and has organized several workshop sessions with experts on how criminals can abuse LLMs, as well as how cyber security experts can use them defensively [3]. As a result of this work, the published report outlines the most important conclusions, including several criminal use cases, such as fraud, impersonation, social engineering and cybercrime, which can be countered more efficiently with key information provided by LLM-based services and tools.

In this context of concern about the potential misuse of AI, AI4CYBER's contribution is crucial. The AI4CYBER project will demonstrate how artificial intelligence can be used both offensively, by promoting AI-based experiments and crafting more advanced attacks, and defensively, by implementing intelligent security mechanisms.

The final solutions, along with the previously curated data set, will be released to the cybersecurity community to drive continuous improvement of the defensive systems of all organizations.