
Acronym: cPAID
Title: Cloud-based Platform-agnostic Adversarial aI Defence framework
Call | HORIZON-CL3-2023-CS-01 (Increased Cybersecurity 2023)
EU nr | 101168407
Period | 36 months, 01.10.2024 to 30.09.2027
Project budget | € 5,514,912.50
VUB budget | € 260,500
Contact | Prof. Vagelis Papakonstantinou
What is cPAID about?
AI is a driving force behind the rapid changes we're seeing today. It's transforming how work is done and how services are delivered in both the public and private sectors. However, using AI also brings risks and challenges, making people question whether these systems can be trusted. A 2023 global survey by KPMG found that 61% of people are wary of trusting AI systems. As AI becomes more embedded in various industries, cyberattacks on these systems can lead to severe economic consequences, including data breaches, financial losses, and damage to an organisation's reputation due to a perceived 'betrayal of trust'.
With cPAID, the consortium aims to ensure that AI systems are secure and privacy-preserving from the start, in line with EU ethical guidelines. The project will also test real-world scenarios to see how well machine learning (ML) and deep learning (DL) algorithms hold up against attacks. For instance, sensors on autonomous ships use AI to interpret conditions and adjust movements. Such AI systems often operate in high-stakes environments where mistakes can have serious consequences, including loss of life.
How do these cyberattacks on AI work?
Cyberattacks on AI systems, known as adversarial attacks, can be very hard to detect. They include evasion attacks, where deceptive inputs are fed to ML and DL models at test (inference) time, and poisoning attacks, where harmful samples are introduced into the training data.
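To make the evasion case concrete, here is a minimal, hypothetical sketch (not part of the cPAID platform) showing how a tiny, deliberately crafted perturbation can flip the output of a toy linear classifier. The weights, input, and perturbation size are made-up values chosen purely for illustration; the perturbation rule follows the sign of the model's gradient, in the spirit of the well-known fast gradient sign method.

```python
import numpy as np

def predict(w, b, x):
    """Toy linear classifier: returns 1 if w.x + b > 0, else 0."""
    return int(np.dot(w, x) + b > 0)

def evade(w, x, epsilon):
    """Craft an evasive input by stepping against the decision score.
    For a linear model, the gradient of w.x + b with respect to x is w,
    so the attack simply subtracts epsilon * sign(w)."""
    return x - epsilon * np.sign(w)

w = np.array([1.0, -2.0, 0.5])   # hypothetical trained weights
b = 0.1
x = np.array([0.9, -0.4, 0.3])   # an input the model classifies correctly

print(predict(w, b, x))           # original prediction: 1
x_adv = evade(w, x, epsilon=0.6)  # small, bounded perturbation
print(predict(w, b, x_adv))       # perturbed prediction flips to 0
```

The key point is that `x_adv` differs from `x` by at most 0.6 in each coordinate, yet the classification changes; a poisoning attack would instead tamper with the data used to fit `w` and `b` in the first place.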
Why is cPAID important?
VUB is the project's ethics manager. In that role, it will ensure the cPAID platform addresses the ethical concerns that arise and that its solutions comply with the law, including the protection of people's privacy and rights as set out in the GDPR and the AI Act.
The cPAID project will test the platform in five real-world scenarios to explore how well it protects AI-based digital systems against advanced cybersecurity threats. Each scenario will be evaluated from three perspectives: technical, economic, and social. These scenarios will help gather valuable insights and effective strategies for protecting AI systems.
