
Artificial Intelligence and cybersecurity: Benefits and perils


 
Lorenzo Pupillo, Stefano Fantin, Afonso Ferreira, Carolina Polito, CEPS
 
Artificial Intelligence (AI) is gradually being integrated into the fabric of business and widely deployed across specific use cases. Not all sectors are equally advanced, however: the information technology and telecommunications sectors are the most advanced in terms of AI adoption, with the automotive sector just behind. According to a recent global survey that polled more than 4,500 technology decision-makers across different sectors, 45% of large companies and 29% of SMEs said they had adopted AI. In the cybersecurity sector, AI will become increasingly indispensable to manage cyber threats: indeed, the market is expected to grow at a Compound Annual Growth Rate (CAGR) of 23.6% from 2020 to 2027 and to reach $46.3 billion by 2027. At the same time, the adoption of AI is not without risks in itself: more than 60% of companies adopting AI recognise the cybersecurity risks generated by AI as the most relevant ones.

 

As a general-purpose, dual-use technology, AI can be both a blessing and a curse for cybersecurity. This is confirmed by the fact that AI is being used both as a sword (i.e. in support of malicious attacks) and as a shield (to counter cybersecurity risks). There is an additional complication: while the use of AI for defensive purposes faces a number of constraints, especially as governments (and the European Union) move to regulate high-risk applications and promote the responsible use of AI, on the attack side the most pernicious uses are multiplying, the cost of developing applications is plummeting, and the ‘attack surface’ is becoming denser every day, making any form of defence an uphill battle.

Machine-learning and deep-learning techniques will make sophisticated cyber-attacks easier and allow for faster, better targeted and more destructive attacks. The impact of AI on cybersecurity will likely expand the threat landscape, introduce new threats and alter the typical characteristics of threats. Moreover, besides providing new and powerful vectors for carrying out attacks, AI systems will also become increasingly subject to manipulation themselves.

Most importantly, the lack of transparency and the learning abilities of AI systems will make it hard to evaluate whether the same system will continue to behave as expected in any given context. Forms of control and human oversight are therefore essential. Furthermore, AI systems, unlike brains, are designed. Therefore, all the decisions upon which the systems are designed should be auditable.
 
Furthermore, poor cybersecurity in the protection of open-source models may create hacking opportunities for actors seeking to steal such information. Limiting the dissemination and sharing of data and code could enable a more complete assessment of the security risks related to the technology and its popularisation.
 
Below, in response to the inherent paradox of AI, we distinguish possible action items for its contribution to cybersecurity, and for the contribution of cybersecurity to the development and uptake of secure AI.
 
AI for cybersecurity: EU policy measures to ease the adoption of AI in cybersecurity
 
Cyber-attacks are on the rise, and they increasingly make use of AI. The IoT (Internet of Things) age will further densify the attack surface. AI is thus a ‘must’ to help companies manage this range of cybersecurity risks, technical challenges and resource constraints. AI can improve systems’ robustness and resilience, but a number of conditions must be met, among them:

  • Enhance collaboration between policymakers, the technical community and key corporate representatives to better investigate, prevent and mitigate potential malign uses of AI in cybersecurity.
  • Incorporate an assessment of the security requirements for AI systems in public procurement policies.
  • Ensure a degree of operational control over AI systems by developing and monitoring practices to address their lack of predictability, such as companies’ in-house development of AI models and testing of data, or parallel and dynamic monitoring of AI systems through clone systems (see the sketch after this list).
  • Support private sector cross-border information sharing by providing incentives for cooperation and ensuring a governance framework that would enable legal certainty when sharing data.
  • Support and internationally promote AI certification efforts, to be coordinated by ENISA, following a proactive approach and demanding assessment actions to be taken before deployments, as well as during the whole lifecycle of a product, service or process.
  • Envisage appropriate limitations to the full openness policy for research output and its popularisation when security risks exist. Verify the level of cybersecurity of libraries and tools and review misuse-prevention measures before publication of the research.
  • Promote the study and regulatory interpretation of the General Data Protection Regulation (GDPR) provisions as they pertain to AI and cybersecurity (for instance, with respect to Recitals 49 and 71, with reference to data-sharing practices for information security purposes).
  • Address the skills shortage and uneven distribution of talent and professionals by offering AI-related career paths to train and retain skilled staff. Monitor the sector to ensure the smooth incorporation and understanding of AI tools within existing cybersecurity professional practices and architectures.
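
To illustrate the ‘clone system’ idea mentioned above, here is a minimal, hedged sketch in Python (the model choice, the synthetic data and the alert threshold are illustrative assumptions, not prescriptions from the Task Force): a frozen reference copy of a deployed model scores the same inputs as the live system, and a spike in disagreement flags unexpected behavioural drift for human review.

    # Sketch of 'clone system' monitoring: a frozen reference copy of the model
    # scores the same inputs as the live model, and the disagreement rate is
    # tracked over time; a spike flags unexpected behavioural drift for review.
    # Model choice, data and threshold are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

    live_model = LogisticRegression(max_iter=1000).fit(X, y)    # may be retrained in operation
    clone_model = LogisticRegression(max_iter=1000).fit(X, y)   # frozen reference copy

    def disagreement_rate(batch: np.ndarray) -> float:
        """Fraction of inputs on which the live model and its clone disagree."""
        return float(np.mean(live_model.predict(batch) != clone_model.predict(batch)))

    # Each new batch of operational inputs is scored by both systems in parallel.
    incoming_batch = X[:200]
    rate = disagreement_rate(incoming_batch)
    print(f"disagreement on current batch: {rate:.1%}")
    if rate > 0.05:   # illustrative alert threshold
        print("Alert: live model diverges from its clone; trigger human review.")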

 
Cybersecurity for AI: making AI systems safe and reliable to develop and deploy
 
Compared to traditional hardware-software systems, AI-powered systems present specific features that can be attacked in non-traditional ways: in particular, the training data set may be compromised so that the resulting ‘learning’ of the system is not as intended; alternatively, external objects that will be sensed by the system can be tampered with so that the system fails to recognise them. It is therefore important to provide for additional, ad hoc protection of AI systems, to ensure that they follow a secure development life cycle, from ideation to deployment and post-market surveillance, including runtime monitoring and auditing.
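
To make the first attack class concrete, here is a minimal, hedged sketch in Python using scikit-learn (the synthetic dataset and the poisoning rate are illustrative assumptions, not figures from the Task Force): silently flipping a fraction of the positive-class training labels, a simple form of data poisoning, degrades a classifier whose code and architecture have not changed at all.

    # Minimal illustration of training-data poisoning: an attacker flips a
    # fraction of the positive-class training labels and the model's accuracy
    # drops, even though nothing in the code or architecture has changed.
    # Dataset and poisoning rate are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic binary classification task standing in for any learned component.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Baseline: model trained on clean data.
    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("clean accuracy:   ", clean_model.score(X_test, y_test))

    # Attack: 40% of the positive-class training labels are silently flipped to 0.
    y_poisoned = y_train.copy()
    positives = np.where(y_poisoned == 1)[0]
    flip = rng.choice(positives, size=int(0.4 * len(positives)), replace=False)
    y_poisoned[flip] = 0

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

Runtime monitoring of exactly this kind of accuracy drift is one reason why post-market surveillance and auditing matter.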
 
The European Commission’s “Regulation on a European Approach for Artificial Intelligence”, proposed today, fosters such an approach for high-risk AI systems. The proposed requirements concern high-quality data sets, documentation and record-keeping, transparency and provision of information, human oversight, as well as robustness, accuracy and security. These requirements represent a fundamental step forward in assuring the necessary level of protection of AI systems. However, when it comes to security, the proposed text could state more clearly some additional steps needed to achieve the security of AI systems.
 
The CEPS Task Force on AI & Cybersecurity shares this view but also proposes a series of recommendations to provide concrete guidance on how to secure AI systems. In particular, it suggests ways to strengthen AI security and maintain accountability across intelligent systems, such as:

  • Secure logs related to the development/coding of the system – who changed what, when and why – where this information is available (i.e. when the code is not open-sourced), and preserve older versions of the software so that differences and additions can be checked and reversed.
  • Have cyber-secure pedigrees for all software libraries linked to that code.
  • Have cyber-secure pedigrees for the data libraries used for training any machine-learning algorithms used. This can also show compliance with privacy laws and other principles.
  • Where machine learning is used, keep track of the model parameters and training procedures (see the provenance-log sketch after this list).
  • Require records demonstrating due diligence when testing the technology, before releasing it, preferably including the actual test suites themselves so that these may be checked by the company itself or third parties and then reused.
  • Beyond logging, enhance AI reliability and reproducibility by using techniques such as randomisation, noise prevention, defensive distillation and ensemble learning.
  • Propose to organisations the full auditability of models at the time/point of failure, also to make the information available for subsequent analysis (e.g. analysis required by courts).
  • Devise new methods to allow for system audits other than openly publishing datasets, such as restricting audits to a trusted third party.
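
As a minimal sketch of the record-keeping suggested above (the field names, file layout and hashing choice are illustrative assumptions rather than prescriptions from the Task Force), a training run could append a provenance record capturing the model parameters, a fingerprint of the training data and the library versions used:

    # Sketch of a training-provenance log: records which data, parameters and
    # library versions produced a model, so that the run can later be audited.
    # Field names, file layout and hashing choice are illustrative assumptions.
    import hashlib
    import json
    from datetime import datetime, timezone

    import numpy as np
    import sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def sha256_of_array(arr: np.ndarray) -> str:
        """Fingerprint the training data so later audits can detect tampering."""
        return hashlib.sha256(arr.tobytes()).hexdigest()

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": "analyst@example.org",            # hypothetical identity
        "model_class": type(model).__name__,
        "hyperparameters": model.get_params(),       # full training configuration
        "training_data_sha256": sha256_of_array(X),
        "training_labels_sha256": sha256_of_array(y),
        "library_versions": {"scikit-learn": sklearn.__version__},
    }

    # Append-only log; in practice this would live in tamper-evident storage
    # alongside the versioned code and data discussed above.
    with open("training_provenance.jsonl", "a") as log:
        log.write(json.dumps(record, default=str) + "\n")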

 
If the goal of the European Union is to ensure a cybersecure rollout of AI, the way has to be paved by providing assistance and guidance to companies on how to move forward concretely with cybersecurity. New mindsets are required to ensure that the risks related to the ubiquitous use of AI systems remain manageable without hampering innovation. Failing to find a balance between these competing needs, which lie at the core of the AI rollout, would cause unbearable societal costs in the long run.
 
The forthcoming CEPS Task Force report on AI and Cybersecurity, whose publication is due in the coming weeks, aims to suggest measures that are consistent with these EU objectives. It is the result of a collective effort led by CEPS, which in the autumn of 2019 launched a multi-stakeholder task force composed of private organisations, European Union institutions, international and multilateral organisations, universities, think tanks and civil society organisations.
 


Lorenzo Pupillo is Associate Senior Research Fellow at CEPS. Stefano Fantin is a Doctoral Researcher at KU Leuven. Afonso Ferreira is Directeur de Recherche, Centre national de la recherche scientifique (CNRS) (France) and Carolina Polito is a Research Assistant at CEPS. All four authors served as rapporteurs of a CEPS Task Force on AI and Cybersecurity, chaired by Lorenzo Pupillo. This Commentary distills the main conclusions and policy recommendations reached by the Task Force.