We are PauseAI DC
Our proposal is simple:
Don’t build powerful AI systems until we know how to keep them safe. Pause AI.
Surveys of thousands of AI researchers estimate a 1 in 6 chance of human extinction from uncontrollable, superintelligent AI. Uncontrollable, superintelligent AI is exactly where we are headed if the AI industry gets its way. There are almost no guardrails for the development of frontier AI: companies are allowed to train models of any size, without knowledge of or concern for the dangers they may pose. Some AI company CEOs have admitted there’s a chance their technology destroys humanity – but they’re willing to roll the dice anyway.

We advocate international cooperation to ensure that no company or country builds unsafe AI, giving us the time and the democratic oversight necessary to develop frontier AI safely. We suggest regulating the chips needed to train and run the most powerful AI and pausing training runs above a threshold of computing power (read the technical details of our proposal here), but there are many possible paths to a Pause.
If we Pause AI, we can enjoy the benefits of safe AI systems below the Pause threshold while we take the time to ensure it’s safe to build even more powerful, even more beneficial systems.