Regulating in Goodhart's shadow
Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! After a flurry of posts in August and September, I had to take a few weeks off writing this newsletter in order to stop procrastinating and finish some projects. While I still owe some writing to a few of you, I now feel comfortable enough to get back to this newsletter.
So, today I want to share some brief thoughts on the value and limits of metrics for regulation, especially in the digital domain. After that, the usual: some reading recommendations, some job and event opportunities, and cute otters. Hope you enjoy!
Measurement as a problem for experimental regulation
In recent times, we have seen more and more talk about experimental regulation. Faced with the complexity of emerging technologies such as AI and of developments such as the effects of climate change and geopolitical rivalry, scholars and policymakers are highlighting the need to acknowledge uncertainty and to come up with means of managing it. To give two examples that strike close to home, Ben Crum has framed the AI Act's external impact as a form of experimentation, while Thibault Schrepel's handy overview of adaptive regulation argues not only for an adaptive mindset in the implementation of regulations in AI and beyond, but also for a more overt engagement with complexity in future regulation. I am very sympathetic to both claims, and indeed one of my main current interests lies in how the law shapes learning in complex domains. However, a successful approach to experimental design will require us to tackle certain challenges, which I hope to discuss in future newsletters.
The first of these challenges pertains to the difficulty of measuring what is going on. On a naïve account, we may speak of an experiment as a way of testing theories about the world: we intervene on the world somehow (or seize upon an event that has already taken place), look at what happens, and try to square what we see with the theories we started from. Of course, such an image is complicated once we start to look at what is going on, for example, in physics and biology, and even more so when we discuss the value-ladenness of data about social phenomena. Personally, I suspect that a full theory of experimental regulation would benefit from a more pluralistic treatment of measurement than what is offered by an economic lens. But, for now, I am interested in an earlier issue: the object we are looking at.
Enter Goodhart's Law. In its most popular form, this law states that a metric that becomes a target eventually loses its value as a metric. For example, standardized testing in education was introduced as an 'objective' measure of what students learn in a given context (let's say, high school). However, it has famously given rise to the phenomenon of 'teaching to the test', in which teaching institutions (especially if they are resource-constrained) organize their approaches to maximize student performance on the test, even if that leads to a less robust education overall. The phenomenon, in itself, is not new, and in fact one of the potential advantages of experimental regulation is that it would allow regulators to replace metrics as they become gamed or otherwise irrelevant.
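To make the dynamic concrete, here is a minimal Python sketch of the test-score example. Everything in it is an illustrative assumption of mine (the effort budget, the payoff coefficients, the function names), not data from any study: it simply shows how rewarding the proxy pulls effort away from the thing the proxy was meant to track.

```python
# A toy model of Goodhart's Law: a school splits a fixed effort budget
# between genuine teaching and test preparation. All numbers are
# illustrative assumptions, not empirical estimates.
import random

random.seed(42)

def outcomes(test_prep_share: float) -> tuple[float, float]:
    """Return (test_score, actual_learning) for a given effort allocation.

    Test prep boosts the measured score more per unit of effort than
    genuine teaching does (1.0 vs 0.6), but contributes nothing to
    real learning.
    """
    teaching = 1.0 - test_prep_share
    noise = random.gauss(0, 0.02)  # measurement noise in the test
    test_score = 0.6 * teaching + 1.0 * test_prep_share + noise
    actual_learning = 1.0 * teaching
    return test_score, actual_learning

# Before the metric becomes a target, effort goes into teaching and the
# score is an honest (if imperfect) proxy for learning. Once the score
# itself is maximized, the two come apart.
for prep in (0.0, 0.5, 0.9):
    score, learning = outcomes(prep)
    print(f"test-prep share {prep:.0%}: score={score:.2f}, learning={learning:.2f}")
```

Running this, the measured score rises as the test-prep share grows while actual learning falls: precisely the point at which the metric stops telling us anything useful.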
What concerns me, however, is the speed with which metrics can become irrelevant. To the extent that AI technologies and other statistical techniques actually deliver some value, they make it easier for regulatees to figure out how to game the system. Those same techniques could also be used (in fact, they already are, at least if one reads reports on administrative best practices) to help regulators detect instances of gaming. Yet there is an asymmetry of weapons here, as the cost of designing and monitoring a new regulatory setup is likely to be much higher than the cost of optimizing against a metric.
How to solve this issue? One might be tempted to keep the measurements secret somehow, and this might indeed be a valid solution in some cases. But it might not be universally desirable, for reasons of both practice and principle. From a practical perspective, security by obscurity is not usually a sustainable practice, at least not if deployed by itself, and so regulatees might obtain information about the metrics anyway. From a more principled standpoint, adding opacity to this kind of experimentation, while at the same time using it to deal with important questions such as the governance of the public values protected by the AI Act, is a sure-fire way to erode democratic legitimacy. Secrecy is therefore unlikely to be a major element of a robust experimental regulatory model, though it surely has a role to play.
Fortunately, there is already a substantial body of work on how to measure regulatory performance. The OECD, for one, is a strong proponent of this kind of measurement, and the European Commission's Better Regulation Toolbox also puts great emphasis on it. Still, as lawyers interested in regulation, we should not take for granted that the tools for measuring what we want are agile enough to capture the dynamics we want to observe. Otherwise, we might end up with a regulatory system that keeps moving, but always one step behind the targets it should achieve.
Recommendations
- Donato Di Carlo and Luuk Schmitz, ‘Europe First? The Rise of EU Industrial Policy Promoting and Protecting the Single Market’ (2023) 30 Journal of European Public Policy 2063.
- Urs Gasser and others, ‘Interim Reflections on EU-LAC Digital Regulatory Learning’ (TU Munich 2025).
- Andrew Leyden, ‘Standards and the EU AI Act: Legitimacy, State of Play, and Future Challenges’ Information & Communications Technology Law (early access).
- Esther Nieuwenhuizen, ‘Algorithmic Transparency in Government: A Multi-Level Perspective on Transparency of and Trust in Algorithm Use by Governments’ (doctoral thesis, Utrecht University 2025).
- Bao-Chau Pham and Sarah R Davies, ‘What Problems Is the AI Act Solving? Technological Solutionism, Fundamental Rights, and Trustworthiness in European AI Policy’ (2025) 19 Critical Policy Studies 318.
- Tim Requarth, ‘Why AI Guidelines Aren’t Enough’ (The Third Hemisphere, 10 October 2025).
Opportunities
Disclaimer: as usual, I am gathering these links purely for convenience and because I think they might be of interest to readers of this newsletter. Unless I explicitly say otherwise, I am not involved with any of the selection processes indicated below.
The National University of Singapore's Centre for International Law is hosting the conference Empowering Through Digital Technologies. They invite individuals and organisations working on practical projects that design and/or apply digital technologies to empower vulnerable individuals and communities to share their insights and experience at the conference. Some limited travel funding is available. Submit your application by 19 October 2025 (Sunday).
Next Tuesday, 21 October 2025, Thomas Streinz and Jen Tridgell at the European University Institute are hosting a panel on the recent EU Sovereign Tech Fund Feasibility Study to which they contributed. Register and join in-person or online!
Maastricht University is looking for an Assistant Professor of European Union Law. Applications are due by 27 October 2025, with a starting date of 15 January 2026.
The famous CPDP conference on computers, privacy, and data protection is looking for Programme Assistants to support the coordination of panel sessions for next year's edition. For more information, contact them at info@cpdpconferences.org.
Our neighbours at the Luxembourg Centre for European Law invite applications for their Early Career Visiting Scholars Programme. Send your application by 30 October 2025 for a visit in the first half of 2026.
King's College London is looking for two lecturers in the area of digital law, with particular interest in candidates working on FinTech, IP & Data Protection, or Regulations/Compliance/Ethics. Applications are due by 5 November 2025.
WeRobot 2026 will take place in Berlin from 23 to 25 April. They invite abstracts until 17 November 2025, with particular interest in interdisciplinary work.
The Université Libre de Bruxelles is looking for a chair in the regulation of artificial intelligence (50% teaching and 50% research). Applications are due by 5 January 2026, with a starting date of 1 October 2026. If you have a native level of French, they are also looking for a professor of legal theory, with applications due by 2 February 2026 and the same starting date.
And now, the otters
Hope you found something interesting above, and please consider subscribing if you haven’t done so already:
Thanks for your attention! Do not hesitate to hit “reply” to this email or contact me elsewhere to discuss some topic I raise in the newsletter. Likewise, let me know if there is a job opening, event, or publication that might be of interest to me or to the readers of this newsletter. Hope to see you next time!