About "Draw Red Lines for AI"
In the hushed, high-stakes world of artificial intelligence governance, a single term is cutting through the noise and gathering unprecedented momentum: “red lines.” It reflects a growing, urgent consensus among leading scientists, former world leaders, and tech industry insiders that the unfettered development of AI poses risks so profound that humanity must collectively agree on boundaries never to be crossed.
This movement crystallized in September 2025 during the high-level week of the United Nations General Assembly in New York. There, a coalition of policy organizations launched the “Global Call for AI Red Lines,” a campaign demanding the establishment of “international, legally binding red lines to prohibit and prevent” the most dangerous AI developments. The initiative, a collaborative project spearheaded by The Future Society, the French Center for AI Safety (CEAIS), and the Center for Human-Compatible AI (CHAI) at UC Berkeley, has amassed a startling list of over 300 prominent signatories.
The backstory to this diplomatic offensive is one of frustration and shifting geopolitical tides. As revealed in a recent in-depth Project Save the World forum discussion between sociologist and activist Metta Spencer and two key architects of the campaign—Tereza Zoumpalova, an Associate at The Future Society in Paris, and Su Cizem, an AI Policy Fellow formerly with CEAIS—the red lines initiative emerged directly from the perceived shortcomings of a series of international AI safety summits.