
Lorenzo Pupillo, Stefano Fantin, Afonso Ferreira, Carolina Polito, CEPS
Furthermore, poor cybersecurity in the protection of open-source models may create hacking opportunities for actors seeking to steal such information. Limiting the dissemination and sharing of data and code could enable a more complete assessment of the security risks related to the technology and its popularisation.
Below, in response to this inherent paradox of AI, we distinguish possible action items for its contribution to cybersecurity, and for cybersecurity's contribution to the development and uptake of secure AI.
AI for cybersecurity: EU policy measures to ease the adoption of AI in cybersecurity
Cyber-attacks are on the rise, and they increasingly make use of AI. The Internet of Things (IoT) age will further expand the attack surface. AI is thus a ‘must’ to help companies manage cybersecurity risks, technical challenges and resource constraints. AI can improve systems’ robustness and resilience, but a number of conditions must be met, among them:
- Enhance collaboration between policymakers, the technical community and key corporate representatives to better investigate, prevent and mitigate potential malicious uses of AI in cybersecurity.
- Incorporate an assessment of the security requirements for AI systems in public procurement policies.
- Ensure a degree of operational control over AI systems by developing and monitoring practices to address their lack of predictability, such as companies’ in-house development of AI models and testing of data, or parallel and dynamic monitoring of AI systems through clone systems (see the illustrative sketch after this list).
- Support private sector cross-border information sharing by providing incentives for cooperation and ensuring a governance framework that would enable legal certainty when sharing data.
- Support and internationally promote AI certification efforts, to be coordinated by ENISA, following a proactive approach and requiring assessment actions to be taken before deployment, as well as throughout the whole lifecycle of a product, service or process.
- Envisage appropriate limitations to the full-openness policy for research output and its popularisation when security risks exist. Verify the level of cybersecurity of libraries and tools, and review misuse-prevention measures, before research is published. Promote the study and regulatory interpretation of General Data Protection Regulation (GDPR) provisions as they pertain to AI and cybersecurity (for instance, Recitals 49 and 71, with reference to data-sharing practices for information-security purposes).
- Address the skills shortage and uneven distribution of talent and professionals by offering AI-related career paths to train and retain skilled staff. Monitor the sector to ensure that AI tools are smoothly incorporated into, and well understood within, existing cybersecurity professional practice and architectures.
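One recommendation above mentions ‘parallel and dynamic monitoring of AI systems through clone systems’. The sketch below is one possible, hedged reading of that idea rather than the Task Force’s specification: a frozen clone of the deployed model is queried in parallel, and sustained disagreement between the two is flagged for human review. All class, method and parameter names are hypothetical.

```python
# Illustrative sketch only: one possible reading of "parallel and dynamic
# monitoring through clone systems". All names here are hypothetical.
import warnings
from collections import deque

class CloneMonitor:
    """Run a frozen clone next to the live model and flag drift."""

    def __init__(self, live_model, clone_model, window=1000, alert_rate=0.05):
        self.live = live_model            # production model (may be updated)
        self.clone = clone_model          # frozen, vetted reference copy
        self.disagreements = deque(maxlen=window)  # sliding window of bools
        self.alert_rate = alert_rate

    def predict(self, x):
        live_label = self.live.predict(x)     # hypothetical predict() API
        clone_label = self.clone.predict(x)
        self.disagreements.append(live_label != clone_label)
        rate = sum(self.disagreements) / len(self.disagreements)
        if rate > self.alert_rate:
            # The live system has drifted away from its vetted clone:
            # surface the anomaly instead of failing silently.
            warnings.warn(f"live/clone disagreement rate {rate:.1%}")
        return live_label
```

Because the clone never changes, any drift in the live system’s behaviour, whether from retraining, poisoned inputs or compromise, shows up as a rising disagreement rate.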
Cybersecurity for AI: making AI systems safe and reliable to develop and deploy
Compared to traditional hardware-software systems, AI-powered systems present specific features that can be attacked in non-traditional ways: in particular, the training data set may be compromised so that the resulting ‘learning’ of the system is not as intended; alternatively, external objects that will be sensed by the system can be tampered with so that the system fails to recognise them. It is therefore important to provide for additional, ad hoc protection of AI systems, to ensure that they follow a secure development life cycle, from ideation to deployment and post-market surveillance, including runtime monitoring and auditing.
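These two attack classes are commonly known as data poisoning (corrupting the training set) and adversarial, or evasion, attacks (tampering with inputs at run time). To make the second concrete, here is a minimal, purely illustrative sketch using the well-known fast gradient sign method (FGSM): a perturbation small enough to be imperceptible to a human can nonetheless flip a classifier’s output. The classifier and tensors are hypothetical placeholders, not material from the Task Force report.

```python
# Illustrative sketch only: a fast gradient sign method (FGSM) evasion attack.
# `model`, `image_batch` and `true_labels` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small perturbation chosen to increase the model's
    loss, so the visually near-identical input tends to be misclassified."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # how wrong is the model now?
    loss.backward()                           # gradient of loss w.r.t. input
    x_adv = x + epsilon * x.grad.sign()       # step uphill on the loss
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range

# Hypothetical usage: the perturbed batch often draws wrong predictions.
# adversarial_batch = fgsm_perturb(model, image_batch, true_labels)
```

The point of the sketch is not the specific method but the asymmetry it exposes: the defender must secure the whole input space, while the attacker only needs one perturbation that works.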
The European Commission’s “Regulation on a European Approach for Artificial Intelligence”, proposed today, fosters such an approach for high-risk AI systems. The proposed requirements concern high-quality data sets, documentation and record-keeping, transparency and provision of information, human oversight, as well as robustness, accuracy and security. These requirements represent a fundamental step forward in assuring the necessary level of protection of AI systems. When it comes to security, however, the proposed text could state more clearly some additional steps that are necessary to secure AI systems.
The CEPS Task Force on AI & Cybersecurity shares this view but also proposes a series of recommendations to provide concrete guidance on how to secure AI systems. In particular, it suggests how to strengthen AI security as it relates to maintaining accountability across intelligent systems, for example:
- Secure logs related to the development/coding of the system, recording who changed what, when and why (when this information is available, i.e. when the code is not open-sourced), and preserve older versions of the software so that differences and additions can be checked and reversed.
- Have cyber-secure pedigrees for all software libraries linked to that code.
- Have cyber-secure pedigrees for the data libraries used for training any machine-learning algorithms used. This can also show compliance with privacy laws and other principles.
- Where machine learning is used, keep track of the model parameters and training procedures (a minimal sketch of such pedigree logging follows this list).
- Require records demonstrating due diligence when testing the technology, before releasing it, preferably including the actual test suites themselves so that these may be checked by the company itself or third parties and then reused.
- Beyond logging, enhance AI reliability and reproducibility by using techniques such as randomisation, noise prevention, defensive distillation and ensemble learning.
- Offer organisations full auditability of models at the time/point of failure, also making the information available for subsequent analysis (e.g. analysis required by courts).
- Devise new methods to allow for system audits other than openly publishing datasets, such as restricting audits to a trusted third party.
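As a purely illustrative sketch of what such pedigree-keeping and parameter-tracking could look like in practice, the fragment below fingerprints the training data, code and dependency list and appends them, together with the hyperparameters used, to an append-only log. All file names and record fields are hypothetical examples, not prescriptions from the Task Force.

```python
# Illustrative sketch only: an append-only "pedigree" record per training run.
# All file names and fields are hypothetical examples.
import hashlib
import json
import time

def sha256_of(path):
    """Fingerprint a file so later audits can detect any modification."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_training_run(data_path, code_path, deps_path, params,
                     log_path="pedigree.log"):
    """Append one JSON record: when the run happened, fingerprints of the
    data, code and dependencies, and the hyperparameters used."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "training_data_sha256": sha256_of(data_path),
        "training_code_sha256": sha256_of(code_path),
        "dependencies_sha256": sha256_of(deps_path),
        "hyperparameters": params,
    }
    with open(log_path, "a") as f:   # append-only: earlier entries are kept
        f.write(json.dumps(record) + "\n")

# Hypothetical usage:
# log_training_run("train.csv", "train.py", "requirements.txt",
#                  {"learning_rate": 1e-3, "epochs": 20})
```

Hashing rather than copying keeps the log small while still making any later tampering with the data or code detectable.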
If the European Union’s goal is to assure a cybersecure rollout of AI, the way must be paved by providing companies with assistance and guidance on how to move forward concretely with cybersecurity. New mindsets are required to ensure that the risks related to the ubiquitous use of AI systems remain manageable without hampering innovation. Failing to strike a balance between these competing needs, which lie at the core of the AI rollout, will cause unbearable societal costs in the long run.
The forthcoming CEPS Task Force report on AI and Cybersecurity, whose publication is due in the coming weeks, aims to suggest measures that are consistent with these EU objectives. It is the result of a collective effort led by CEPS, which in the autumn of 2019 launched a multi-stakeholder task force composed of private organisations, European Union institutions, international and multilateral organisations, universities, think tanks and civil society organisations.
Lorenzo Pupillo is Associate Senior Research Fellow at CEPS. Stefano Fantin is a Doctoral Researcher at KU Leuven. Afonso Ferreira is Directeur de Recherche at the Centre national de la recherche scientifique (CNRS) in France, and Carolina Polito is a Research Assistant at CEPS. All four authors served as rapporteurs of a CEPS Task Force on AI and Cybersecurity, chaired by Lorenzo Pupillo. This Commentary distils the main conclusions and policy recommendations reached by the Task Force.