27 March 2026
Dear Dario, Demis, Elon, Mustafa, and Sam:
Perhaps more than anyone, you understand the situation: the all-out race to build superintelligence with zero cooperation is most likely to get us some mix of war, large-scale accident, gradual disempowerment, and uncontrolled existentially-risky singularity. Every one of you, both publicly and privately, has expressed deep concern about the risks of too-rapid AI development. All of you have signed a statement that AI could present an extinction risk and/or the Asilomar principles stating that:
"Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization"
...and that:
"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources" and that "Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact."
Does this feel like what is currently happening?
Racing is explicit and even encouraged. Safety evaluations (when done at all) are breaking down because capability benchmarks are saturated and systems are too self-aware. Your AI systems are writing the next version of themselves. The world's top scientific minds think they may soon be irrelevant. Competing frontier AI systems are being released within hours of each other. Aggregates of autonomous AI agents are being empowered and set loose with no effective security or control apparatus, and your systems can churn out zero-day exploits by the hundreds.
Although your companies have committed to pauses if certain risk thresholds are met, your models are approaching (if not passing) those thresholds, and to our knowledge no pauses are even being discussed; instead, commitments are being walked back under competitive pressure. And there is zero meaningful US federal regulation of frontier AI, not even to enforce the standards and commitments your companies have voluntarily agreed to hold themselves to.
Do you really have a likely scenario in your mind where this race turns out well? Can you imagine any way in which it turns out with humanity still in charge of its own future?
The hour is growing very late. But not too late. You – you five people – still have the agency to turn the ship and chart a more responsible path. It is unlikely that anyone else can or will do so, until and unless some disaster strikes; and that may well be too late.
A number of you have expressed to us, and in public, that you would in general like to slow down and to do things much more responsibly; and arguably you have committed in writing to doing so under conditions that will imminently hold, if they do not already. But you also indicate that you cannot pause unilaterally, because you are effectively trapped in the race: nobody else will stop. Although none of you quite agreed to it, given the dynamics and the risks, this amounts to something like a suicide pact for us all.
This is difficult, but not insurmountable. As you all know, there is a game-theoretic solution, which is an assurance contract. This could invert the suicide pact into a mutually assured survival pact. Here's one way it could work:
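The game-theoretic inversion can be sketched as a toy two-player payoff model (the payoff numbers, the two-player simplification, and the function names here are our hypothetical illustration, not any party's actual proposal):

```python
# Toy model: why racing dominates without a contract, and why signing a
# conditional pause weakly dominates with one. All payoffs are invented
# for illustration only.

def race_payoff(i_pause: bool, rival_pauses: bool) -> int:
    """Payoffs in the unconstrained race (prisoner's-dilemma-like)."""
    if i_pause and rival_pauses:
        return 10    # coordinated slowdown: the safest shared outcome
    if not i_pause and rival_pauses:
        return 15    # race alone and "win", externalizing the risk
    if i_pause and not rival_pauses:
        return -15   # pause unilaterally and simply fall behind
    return -10       # all-out race: mutual catastrophic risk

def contract_payoff(i_sign: bool, rival_signs: bool) -> int:
    """An assurance contract binds only if every party signs;
    otherwise it is void and the unconstrained race continues."""
    binds = i_sign and rival_signs
    if binds:
        return race_payoff(True, True)
    return race_payoff(False, False)

# Without the contract, racing is each party's dominant strategy,
# trapping everyone at the worst collective outcome (-10, -10).
# With the contract, signing can never leave a party paused while
# rivals race, so signing weakly dominates and (10, 10) is reachable.
```

The key property is in `contract_payoff`: a signature costs nothing unless everyone signs, which removes the "I'll be left behind" objection that makes unilateral pausing irrational.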
Any level of partial success here would still be a game-changer.
This is one of the very few off-ramps remaining from the race you all feared would happen, and probably the best.
If any of you will take this up, we and the Future of Life Institute are at your disposal, as neutral facilitator or otherwise; we expect many other independent organizations, if asked, would also gladly help develop technical plans, verification capabilities, and proposed regulations. There are things civil society can push for (like rulings that such an agreement would not violate antitrust rules) that may be awkward for you to push for. The public will also be behind you.
We – and you – may not always agree. And let's be honest: some of you really don't like, or trust, each other.
But we all love our families, our friends, and our planet. We all love humanity. Don't we?
Please, step up.