Joint Statement on AI Safety and Openness

We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.

Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers. However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.

Further, history shows that rushing towards the wrong kind of regulation can lead to concentrations of power that hurt competition and innovation. Open models can inform an open debate and improve policymaking. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.

We are in the midst of a dynamic discourse about what 'open' signifies in the AI era. This important debate should not slow us down. Rather, it should speed us up, encouraging us to experiment, learn and develop new ways to leverage openness in a race to AI safety.

We need to invest in a spectrum of approaches — from open source to open science — that can serve as the bedrock for:

  1. Accelerating the understanding of AI capabilities, risks and harms by enabling independent research, collaboration and knowledge sharing.
  2. Increasing public scrutiny and accountability by helping regulators adopt tools to monitor large-scale AI systems.
  3. Lowering the barriers to entry for new players focused on creating responsible AI.

As signatories to this letter, we are a diverse group — scientists, policymakers, engineers, activists, entrepreneurs, educators and journalists. We represent different, and sometimes divergent, perspectives, including different views on how open source AI should be managed and released. However, there is one thing we strongly agree on: open, responsible and transparent approaches will be critical to keeping us safe and secure in the AI era.

When it comes to AI safety and security, openness is an antidote, not a poison.

Signatories

  1. Camille François, Columbia University
  2. Mark Surman, Mozilla
  3. Deborah Raji, UC Berkeley
  4. Maria Ressa, Rappler; Nobel Peace Prize Laureate
  5. Stella Biderman, EleutherAI
  6. Alondra Nelson, Institute for Advanced Study
  7. Arthur Mensch, MistralAI
  8. Marietje Schaake, Stanford University
  9. Abeba Birhane, Mozilla Fellow
  10. Bruce Schneier, Berkman Center
  11. Mitchell Baker, Mozilla
  12. Bruno Sportisse, INRIA
  13. Anne Bouverot, École Normale Supérieure
  14. Alexandra Reeve Givens, CDT
  15. Cedric O, MistralAI
  16. Andrew Ng, AI Fund
  17. Yann LeCun, Meta
  18. Jean-Noël Barrot, Minister for Digital Affairs, France
  19. Amba Kak, AI Now
  20. Joy Buolamwini, Algorithmic Justice League
  21. Julien Chaumond, Hugging Face
  22. Brian Behlendorf, Linux Foundation
  23. Eric von Hippel, MIT Sloan School of Management
  24. Moez Draief, Mozilla.ai
  25. Pelonomi Moiloa, LelapaAI
  26. Philippe Beaudoin, Waverly
  27. Raffi Krikorian, Technically Optimistic
  28. Audrey Tang, Minister of Digital Affairs, Taiwan
  29. Jimmy Wales, Wikimedia Foundation
  30. Krishna Gade, Fiddler AI
  31. John Borthwick, Betaworks
  32. Karim Lakhani, Harvard Business School
  33. Stefano Maffulli, Open Source Initiative
  34. Arvind Narayanan, Princeton University
  35. Aviya Skowron, EleutherAI
  36. Catherine Stihler, Creative Commons
  37. Nabiha Syed, The Markup
  38. Tim O'Reilly, O'Reilly Media
  39. Nicole Wong, Former Deputy U.S. Chief Technology Officer
  40. Irina Rish, Mila - Quebec AI Institute
  41. Mohamed Nanabhay, Mozilla Ventures
  42. J. Bob Alotta, Mozilla
  43. Imo Udom, Mozilla
  44. Ayah Bdeir, Mozilla
  45. Blake Richards, McGill/Mila
  46. Andrea Renda, CEPS
  47. Jenia Jitsev, LAION/Juelich Supercomputing Center & Helmholtz Research Center Juelich
  48. Charles Gorintin, MistralAI
  49. Daniel J. Beutel, Flower Labs
  50. Nicholas Lane, Flower Labs
  51. Taner Topal, Flower Labs
  52. Aaron Gokaslan, Cornell University
  53. Shayne Longpre, MIT
  54. Luca Soldaini, Allen Institute for AI
  55. Joelle Pineau, Meta
  56. Michiel van de Panne, University of British Columbia
  57. Nawar Alsafar, Bytez Inc
  58. Holly Peck, Bytez Inc
  59. Susan Hendrickson, Harvard University
  60. Sharad Sharma, iSPIRT Foundation
  61. Andy Stepanian, The Sparrow Project
  62. Paul Keller, Open Future
  63. Goran Marby, Ybram Consulting
  64. Huu Nguyen, Ontocord.ai; LAION.ai
  65. Mike Bracken, Public Digital
  66. Elaheh Ahmadi, Themis AI
  67. Umakant Soni, AI Foundry, ART Venture Fund
  68. Saoud Khalifah, Mozilla
  69. Merouane Debbah, Khalifa University
  70. Felix Reda, Former Member of the European Parliament
  71. Brett Solomon, Access Now
  72. David Morar, Open Technology Institute
  73. Frédérick Douzet, IFG - GEODE, Université Paris 8
  74. Yacine Jernite, Hugging Face
  75. Anjney Midha, A16z
  76. Hessie Jones, LAION
  77. Jeffrey McGregor, Truepic Inc
  78. Victor Storchan, Mozilla.ai
  79. Sri Krishnamurthy, QuantUniversity
  80. Jorn Lyseggen, Meltwater
  81. Corynne McSherry, Electronic Frontier Foundation
  82. Brian Granger, Project Jupyter

1,821 signatures in total.