The EU AI Act Needs Foundation Model Regulation

To the members of the German Federal Government

We are a group of German and international experts in the field of AI and leaders in business, civil society, and academia. Our expertise is diverse and significant: among us are the two most cited AI researchers in the world and recipients of the Turing Award (“Nobel Prize of Computing”), the author of the world’s most widely used textbook on AI, experts who have advised the German Federal Government on AI and adjacent topics, and founders of successful AI companies.

We understand that the European Union’s AI Act is nearing finalization and is presently being discussed in trilogue. In our shared view, the AI Act could become a landmark piece of legislation, shaping the future of artificial intelligence not only in the EU but across the entire world. We commend the EU and its member states for their serious and appropriate treatment of this important issue, and for their global leadership on it.

In our view, however, the Act’s potential is now at risk: a central element of the AI Act, namely binding rules for foundation models, faces resistance from some member states. We understand that the German Federal Government is part of this opposition.

We believe regulating foundation models in the AI Act is vital for a flourishing and safe AI ecosystem. We strongly advise against dealing with foundation models merely through a system of self-regulation.

Many of the world’s most esteemed AI experts have been cautioning against the manifold risks from advanced AI, including risks to public safety such as AI-driven disinformation and manipulation, AI-enhanced cyber attacks, and AI-generated pathogens. Such risks arise predominantly from the most powerful foundation models, as reflected in the White House’s recent Executive Order on AI and this month’s historic Bletchley Declaration, signed by 28 countries and the EU. Because these risks to public safety are inherent to foundation models, they should be addressed at the foundation model level. The provisions for foundation models envisioned by the EU Parliament and the Spanish Council presidency – relating, for example, to safety and cybersecurity, risk assessment and mitigation systems, pre-deployment red-teaming, and post-deployment auditing – are therefore essential for a flourishing and safe AI ecosystem in the EU.

Such binding rules are important for both economic and safety reasons. Economically, the safety of foundation models is a necessity for the thousands of SMEs and other downstream deployers who want to build innovative products on these models. They cannot afford the liability risks and excessive compliance costs stemming from a potentially unsafe foundation model underlying their products. Exempting foundation models from the AI Act would therefore severely stifle innovation.

From a safety perspective, too, it is vital that risks are addressed at the foundation model level. Only the providers of foundation models are in a position to comprehensively address their inherent risks: they alone have access to and knowledge of the models’ training data, guardrail design, likely vulnerabilities, and other core properties. If severe risks from foundation models are not mitigated at the foundation model level, they will not be mitigated at all, potentially threatening the safety of millions of people.

We understand that some voices support addressing risks from foundation models through a system of self-regulation. We strongly advise against this. Self-regulation is likely to fall dramatically short of the standards required for foundation model safety. Since even a single unsafe model could endanger public safety, a fragile consensus on self-regulation cannot ensure the safety of EU citizens. The safety of foundation models must be ensured by law.

If it includes foundation models, the AI Act would be the world’s first comprehensive regulation of AI and would be viewed as a historic example of European leadership. If coverage of foundation models is dropped, the resulting weakened or failed AI Act would instead be regarded as a historic failure.

We therefore strongly encourage the German Federal Government to leverage its leadership in the European Union to ensure the inclusion of comprehensive foundation model regulation in the EU AI Act.

Signatories

Prof. em. Geoffrey Hinton, University of Toronto, Chief Scientific Adviser at the Vector Institute, 2018 Turing Award Winner

Prof. Yoshua Bengio, Université de Montréal, Founder and Scientific Director of Mila – Quebec AI Institute, 2018 Turing Award Winner

Prof. em. Gary Marcus, NYU, Founder and CEO, Geometric Intelligence (acquired by Uber)

Prof. Stuart Russell, UC Berkeley, Director of the Center for Human-Compatible Artificial Intelligence, co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Marietje Schaake, International Policy Fellow, Stanford Institute for Human-Centered Artificial Intelligence

Andreas Loy, Founder & CEO, KONUX

Prof. Holger Hoos, RWTH Aachen University & Leiden University

Prof. em. Raja Chatila, Sorbonne University

Prof. Dr. jur. Silja Vöneky, Universität Freiburg

Prof. Karl Hans Bläsius, Hochschule Trier

Prof. Wolfgang Schröder, Julius-Maximilians-Universität Würzburg

Prof. Christoph Benzmüller, Chair for AI Systems Development, Otto-Friedrich-Universität Bamberg

Prof. Gerhard Lakemeyer, Chair, Department of Computer Science, RWTH Aachen

Prof. Otthein Herzog, Universität Bremen

Prof. Mathias Risse, Harvard University, Director of the Carr Center for Human Rights Policy

Prof. Marius Lindauer, Leibniz Universität Hannover

Prof. Katharina Morik, TU Dortmund, AI Chair (emerita)

Prof. Wil van der Aalst, RWTH Aachen

Kaltrina Shala LL.M., LL.M., Weizenbaum-Institut e.V.

Prof. Peter Struss, TU Munich

Prof. Kai-Uwe Kühnberger, University Professor for Artificial Intelligence, Osnabrück University

Prof. Dr. Claus Rollinger, Universität Osnabrück

Prof. Amparo Lasen, Universidad Complutense de Madrid

Prof. Henny van der Windt, Associate Professor Science and Technology Studies and Environment, University of Groningen

Prof. Elisabeth Wesseling, Maastricht University

Prof. Karsten Weber, Professor for Technology Assessment and AI-based Mobility, OTH Regensburg

Prof. Alessandro Caliandro, Università degli Studi di Pavia

Prof. Alex Gekker, Assistant Professor in Digital Research Methods, University of Amsterdam

Prof. Dino Pedreschi, Member of the Scientific Board of the EU program “FAIR - Future AI Research”, Italian delegate in the Global Partnership on AI, University of Pisa

Prof. Mykola Pechenizkiy, TU Eindhoven

Prof. Tim Kietzmann, Professor for Machine Learning, Universität Osnabrück

Prof. Dr. Abdur Razzaque Khan, University of Dhaka

Prof. Estrid Sørensen, Ruhr-University Bochum

Marc Rotenberg, Executive Director, Center for AI and Digital Policy

Prof. Cordula Kropp, Universität Stuttgart

Univ.-Prof. Hannes Werthner, TU Wien

Prof. Tapabrata Rohan Chakraborty, Honorary Associate Professor in Transparent AI, University College London, Senior Research Associate at the Alan Turing Institute, invited expert in Responsible AI with the Global Partnership on AI

Prof. Martin Butz, University of Tübingen

Prof. Jefrey Lijffijt, Professor of Data Science, Knowledge Discovery, and Visual Analytics, Ghent University

Prof. Sven Koenig, University of Southern California

Prof. Andreas Weber, University of Twente

Gilles Moyse, CEO, reciTAL

Alistair Knott, Co-lead, Responsible AI for Social Media Governance, Global Partnership on AI

Sharon Polsky, President of the Privacy & Access Council of Canada

Prof. Gilles Escarguel, Associate Professor in Macroecology, Université Lyon 1

Prof. Peter Thompson, Victoria University of Wellington

Prof. Tobias Matzner, Professor for Digital Humanities, Paderborn University

Prof. Volker Brühl, Professor of Banking and Finance, Goethe University

Annika Brack, CEO, International Center for Future Generations

Alessandra Sala, President, Women in AI

Prof. Cinzia Padovani, Southern Illinois University

Prof. Peter König, Universität Osnabrück

Prof. Luciano Floridi, Director, Yale Digital Ethics Center

Prof. Massimiliano Simons, Assistant Professor in Philosophy of Technology, Maastricht University

Prof. Bruno Caldas Pires, Universitat Politècnica de Catalunya

Maciej Chojnowski, Co-Founder & Program Director, Center for Ethics of Technology at the Humanities Institute

Esther Hammelburg, Senior Lecturer, Amsterdam University of Applied Sciences

Prof. Ronald Leenes, Professor of Regulation by Technology & Former Director, Tilburg Institute for Law, Technology, and Society

Rufo Guerreschi, President, Trustless Computing Association

Prof. Lina Eklund, Uppsala University

Dr. Eleanor O'Leary, Lecturer in Media and Communications, South East Technological University

Prof. Thomas Potthast, Professor and Director, Centre for Ethics in the Sciences and Humanities, University of Tübingen

Prof. Marc Pananceau, Paris-Saclay University

Dr. Susan Leavy, University College Dublin, Irish Delegate at the Global Partnership on AI

Prof. Wouter Boon, Utrecht University

Domenico Fiormonte, Lecturer in the Sociology of Communication and Culture, Roma Tre University

Prof. Olya Kudina, Assistant Professor AI Ethics, TU Delft

Prof. James Steinhoff, University College Dublin

Prof. Maciej Piasecki, Wroclaw University of Science and Technology

Prof. Dr. Ingrid Schneider, Universität Hamburg

Prof. Felix Wichmann, Universität Tübingen

Ermelinda Kanushi, Governance Lecturer, University College Freiburg

Przemyslaw Barchan, Director, Institute of LegalTech of the National Bar Council (Poland)

Prof. Lisa McLaughlin, Associate Professor, Miami University

Prof. Federico Faroldi, Professor of Ethics, Law and AI, University of Pavia; Director, Normative Risk Lab

Theresa Züger, Head of AI and Society Lab, Humboldt Institut für Internet und Gesellschaft

Prof. Christian Herzog, Professor of the Ethical, Legal and Social Aspects of AI, University of Lübeck
