Learning to deal with innovation

Photo by Conny Schneider / Unsplash

Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! Is it 20 January already? Oh my! I am surely not back to my cruising speed yet, and I don't really know why. I'd love to blame the Luxembourgish weather, but I think it might have more to do with (almost!) reaching the ripe old age of 35...or maybe the ongoing collapse of many of the remaining pieces of the liberal international order. Sometimes writing about the implementation details of digital regulation can feel like coming up with a particularly neat arrangement for the deck chairs on the Titanic. Still, if nothing else, there are research questions that interest me, deadlines that I must meet, and bills to pay, so here we go.

For today's newsletter, I will share some notes on my ongoing work on regulatory learning and digital innovation. After that, the usual: reading recommendations, opportunities related to law and technology, and some cute otters. Hope you enjoy!

Worse learning through better regulation?

Last month, I had the opportunity to join a workshop on digital innovation organized by Nikita Divissenko and Olia Kanevskaia from Utrecht University. Even though I am not really an economic law person (let alone a law and economics enthusiast!), my work is occasionally of interest to people working in this space, and so I received a kind invitation to connect the workshop's topic with my work on technology-neutral regulation and its impact on power. I was happy to take up that invitation and discuss how the pursuit of technology neutrality can constrain regulators as they try to deal with innovative technologies.

In the context of digital technologies, 'technology-neutral regulation' tends to be presented as inherently desirable, or at least as a default from which regulators diverge at their own peril. This view of technology-neutral regulation as a magic concept persists in spite (or perhaps because) of considerable disagreement about what the concept actually means and how it should guide regulators and regulatees. In the paper I presented, and in my upcoming book, I argue that we can better evaluate regulation if we start from a clear concept of technology-neutral regulation and analyse how it is maintained through a series of delegations of power.

Understanding technology-neutral regulation in terms of the delegation of the power to determine the technical contents of the law is also useful for distinguishing this concept from another popular buzzword: futureproof regulation. The idea behind futureproofing, in short, is that regulation should remain effective as it becomes applicable to new technologies. To produce that stability, lawmakers deploy a vast array of regulatory techniques, such as sunset clauses or experimental regulation mechanisms. When it comes to regulation directed at particular technologies, this idea is often coupled with technology-neutral regulation.

The relationship between technology neutrality and futureproofing tends to be ambiguous, both in the policy and the scholarly literature. Sometimes the terms are treated as a legal doublet, that is, a pair of expressions with the same meaning. In other cases, technology-neutral regulation is seen as an approach that enables regulators to produce futureproof regulation even when they target specific technologies as the regulatory object. Both readings can be found in the literature on digital regulation, as well as in policy documents such as the European Commission's Better Regulation Toolbox, which uses both terms without particularly clear definitions of what either entails. In my view, this lack of clarity obscures the fact that neither of these accounts is really conducive to good results.

From a conceptual standpoint, I would argue that technology-neutral regulation and futureproof regulation aim at different things. While the former is concerned with sparing regulators from having to make choices about technological specificities, the latter aims to ensure a relatively stable set of effects over time. These goals are not necessarily at odds with one another, but neither are they the same thing. As Atte Ojanen illustrates in the context of the AI Act, and as I have discussed with regard to digital infrastructures, there are situations in which the attempt to create technology-neutral regulation renders the law more brittle in the face of future change, rather than the opposite. So, we cannot treat the terms as synonymous.

Decoupling the definitions also helps us understand that technology-neutral regulation and futureproof regulation create different obstacles for regulators trying to govern technological innovation. Because futureproof regulation is meant to produce roughly the same effects as time passes, the regular operation of the regulatory framework might deprive regulators of information about changes in the social context that render previously acceptable solutions undesirable. By contrast, one issue with technology-neutral regulation is that it allows regulators to cast questions that are heavily value-laden, such as the protection of fundamental rights, as purely technical issues to be solved by experts. Acknowledging the differences between these two learning problems will allow regulators (and regulatees) to address them separately, and so balance any trade-offs rather than coming up with a blanket solution that addresses neither one nor the other.

If you would like to see a less compressed version of my arguments, there is a draft chapter available on SSRN. Don't hesitate to reach out with your thoughts. Any comments before 15 February are particularly appreciated, as I am currently revising this draft into something more polished.

Recommendations

Opportunities

Disclaimer: as usual, I am gathering these links purely for convenience and because I think they might be of interest to readers of this newsletter. Unless I explicitly say otherwise, I am not involved with any of the selection processes listed below.

NOVA School of Law (Lisbon) will host an online Executive Education course on AI in the European Legal Framework. This course will feature an in-depth analysis of the ethical and regulatory dimensions of AI in Europe, with special attention to the AI Act. The course's programme features many of the leading scholars on EU AI regulation, as well as yours truly. Lectures start on 27 February, so there is still time to join us!

On 30 January, the EUI Digital Public Sphere Working Group will host a roundtable on The future of data protection law in Europe and beyond, featuring Ignacio Cofone, Orla Lynskey, Thomas Streinz, and Emmanouil Bougiakiotis. Registration is mandatory but free.

Open Government Partnership is looking for a research consultant to create the foundations for an Open Gov Guide chapter on Digital Public Infrastructure. Expressions of interest are due by 31 January.

The Law, AI, and Regulation (LAIR) conference will take place at Erasmus University Rotterdam on 11 and 12 June 2026, with the topic 'Critical perspectives on the AI Act'. Abstracts are due by 28 February.

Jacopo Franceschini and Ayhan Gücüyener Evren are convening a workshop on Studying Cyber Conflict in a Fractured Digital Order: Concepts, Methods, and Evidence at EWIS 2026 (1-3 July 2026, Izmir, Turkey). Abstracts are due by 11 February.

Wenlong Li and Zihao Li are guest-editing a special issue of Computer Law and Security Review on AI Regulation in Asia – Emerging Pathways, Divergent Models, and Global Implications. Papers are due by 15 September.

And now, the otter

a group of sea otters swimming in the ocean
Photo by Kedar Gadge / Unsplash

Hope you found something interesting above, and please consider subscribing if you haven’t done so already:

Subscribe

Thanks for your attention! Do not hesitate to hit “reply” to this email or contact me elsewhere to discuss some topic I raise in the newsletter. Likewise, let me know if there is a job opening, event, or publication that might be of interest to me or to the readers of this newsletter. Hope to see you next time!