A bit of cybersecurity

Some first thoughts on the Cyber Resilience Act

Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! As you are likely aware, the last few weeks have been pretty eventful in the world of large language models. DeepSeek, a Chinese startup, released a new model that arguably achieves cutting-edge results at a fraction of the training cost paid by its US competitors (such as OpenAI). The short-term impact of this development has been a sharp drop in the stocks of various AI-centric businesses (including NVIDIA), but some experts have been probing deeper questions: does this model signal trouble for the assumption that advances in AI will require more and more compute for training? Is this development a failure of the late Biden administration’s efforts towards export controls? Is Alibaba’s new release even better than DeepSeek’s? I will not venture any answers to those questions, but readers might want to look into a few insightful early analyses of the issues at hand.

Today’s newsletter will focus on some initial ruminations on the cybersecurity dimensions of my work. I will first write a bit about the Cyber Resilience Act (CRA) as AI regulation. After that, the usual: a few reading recommendations, some open calls, and a cute otter. Hope you enjoy!

The interface between the AI Act and the Cyber Resilience Act

If you are dealing with artificial intelligence from a vaguely legal perspective, you are probably saturated with mentions of the AI Act. This will likely persist for a while: some of the Act’s provisions become applicable as of 2 February, but others will come into force as late as the second half of 2027.1 With that much media presence, it is easy to forget that the AI Act is not the only legal instrument creating new obligations for those deploying AI systems. We will now briefly consider another such instrument: the Cyber Resilience Act.2

Strictly speaking, the CRA is not a regulation of AI technologies. It instead regulates products with digital elements,3 a category that is both broader and narrower than “AI system” or “AI model” as defined in the AI Act. It is broader in the sense that it includes hardware and software products and their remote data processing solutions.4 But it is narrower in the sense that EU law defines products as items that are intended for consumers or likely to be used by them under reasonably foreseeable conditions.5 Therefore, many, but not all, of the systems covered by the AI Act will also be covered by these requirements, regardless of the risk tier to which the AI Act assigns them.

What does this classification mean in practice? As one would expect, it entails an additional set of obligations. As in the AI Act, those obligations cover both the technical specifications of products with digital elements and the organizational practices of providers, deployers, and other actors in the product supply chain. And, like other pieces of EU risk-based regulation, the CRA relies on some degree of ex ante risk classification: some products are deemed “important” or “critical” in light of their purposes, and are thus subject to additional rules. This means that AI systems that are used as products with digital elements are subject to two horizontal sets of product safety rules, in addition to any sector-specific ones.

Surprisingly, the EU legislator does take account of the potential friction between these two landmark pieces of legislation. Under Article 12(1) CRA, a high-risk AI system must comply with the applicable CRA requirements in order to meet the cybersecurity requirement laid down in the AI Act. That is, the CRA adds detail to Article 15’s vague provision that systems must have “adequate levels of […] cybersecurity”: an AI system that is a product with digital elements is deemed secure if both the product and the provider’s processes meet the CRA’s standards. While these requirements are themselves general, and will likely need to be fleshed out through technical standards or common specifications, they are already more specific than what is present in the AI Act. Therefore, providers should have a better idea of what to do here than they do with regard to other goals such as accuracy and robustness.

In addition to this substantive alignment, the CRA removes another potential source of conflict by integrating the conformity assessment schemes of both regulations. Article 12 CRA also specifies that, for most high-risk AI systems, conformity with the applicable cybersecurity requirements must be assessed within the AI Act’s assessment procedure.6 It remains to be seen how this will play out in practice, but there is at least the potential for reducing the overhead for digital products based on AI.

Still, the main takeaway for anybody deploying AI-related products (or enforcing the law against them) is that there is a whole new set of obligations to take into account. This is even more so for the providers of systems outside the AI Act’s high-risk category, who would otherwise be subject to a thin set of regulations but are covered by the CRA’s requirements.7 Given the complexity these requirements can reach, one might call into question the idea that the EU has chosen a “light touch” approach for most AI technologies. This is one reason why I am looking more closely at the CRA now, and I look forward to hearing from you if you are working on this too (or if you have any thoughts on the issue!).

Reading recommendations

Opportunities

The Erasmus Center of Law and Digitalization and the Amsterdam Law & Technology Institute will hold a workshop on security in the digital age. Abstracts are due by 10 February, with first drafts due on 5 May and the event itself taking place on 3-4 June.

The CEN-CENELEC JTC 21 Task Group on Inclusiveness and the AI Standards Hub will hold a webinar on 4 February covering their work on AI Act standardization.

The 7th annual PrivaCI Symposium will take place in Brussels on 19 and 20 May, right before CPDP. They accept expressions of interest until 7 February, covering works on privacy as contextual integrity from various disciplines.

Gavin Sullivan at the University of Edinburgh is hiring a PhD researcher for a project on how AI and automated decision-making (ADM) processes are reshaping global security law and governance. Applications are due by 17 February.

The whatnext.law group at Nova University Lisbon welcomes submissions for their conference on Fair Markets in the XXIst Century: Digital Transition, Artificial Intelligence and Technological Neutrality. Abstracts are due by 27 February, with the event taking place on 9 and 10 April.

My colleagues at the University of Luxembourg are also hiring: a PhD researcher in EU law (with a focus on resilience and regeneration in single market regulation), to be supervised by Herwig Hofmann, and an Associate Professor in EU law at the Luxembourg Centre for European Law.

Finally, the otters

A pair of river otters, photographed from a short distance.

Thanks for your attention! Hope you found something interesting above, and please consider subscribing if that is the case:

If you’d like to continue the conversation on any of these topics—or want to share an event, job opportunity, or paper for me to highlight in future issues—hit “reply” to this email or contact me elsewhere. Hope to see you next time!


  1. Not to mention some special delays (e.g. for some AI systems operated by EU institutions and agencies) that kick the can down the road to 2030.

  2. Formally, Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements and amending Regulations (EU) No 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act) (Text with EEA relevance).

  3. Article 1 CRA. As usual, Article 2 CRA then lays down some exceptions.

  4. Article 3(1) CRA.

  5. The CRA does not define “product”, but that is the definition given in Article 3(1) of the General Product Safety Regulation (Regulation (EU) 2023/988).

  6. With exceptions detailed in Article 12(3) CRA.

  7. This is another reason why the “risk pyramid” is ultimately misleading: a system outside the AI Act’s high-risk category might still be deemed important or even critical under the CRA and pose a whole set of risks related to its technical properties.