We meet the protesters who want to ban Artificial General Intelligence before it even exists

STOP AI warns of doomsday scenario, demands governments pull the plug on advanced models

Feature On Saturday at the Silverstone Cafe in San Francisco, a smattering of activists gathered to discuss plans to stop the further advancement of artificial intelligence.

The name of their non-violent civil resistance group, STOP AI, makes its mission clear.

The organization wants to ban something that, by most accounts, doesn't yet exist – artificial general intelligence, or AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."

STOP AI outlines a broader set of goals on its website. For example, "We want governments to force AI companies to shut down everything related to the creation of general-purpose AI models, destroy any existing general-purpose AI model, and permanently ban their development."

Asked on the same page, "Does STOP AI want to ban all AI?", the group answers: "Not necessarily, just whatever is necessary to keep humanity alive."

Protest, a whistleblower, and a death under scrutiny

More immediately, the group is organizing another protest in front of OpenAI's San Francisco headquarters on Saturday, February 22. The plan also includes a demand for "Justice for Suchir," a reference to Suchir Balaji, the former OpenAI employee who was found dead in his San Francisco apartment on November 26, 2024, after coming forward as a whistleblower over alleged copyright infringement by OpenAI.

Our main demand is that we're trying to permanently ban the development of artificial general intelligence, or AGI

Balaji's death was originally ruled a suicide, but no supporting documentation was released at the time, and in the absence of conclusive evidence, speculation spread on social media. In December, his parents said they had hired a private investigator and commissioned a second autopsy, which they said contradicted the police findings.

On Friday, the San Francisco medical examiner's report concluded that Balaji died by suicide, not as a result of foul play.

Balaji's mother, Poornima Rao, remains unconvinced. In a social media post on Sunday, she said, "We received the autopsy report last Friday. Our counsel and we disagree with their decision. There are tons of inconsistencies in their decision. Underlying assumptions are not supporting the facts in reports. We continue our investigation. We have sent the hair found in [the] apartment for testing. We are fighting for justice and not back up."

STOP AI's Sam Kirchner, left, and Guido Reichstadter at the Silverstone Cafe

STOP AI has held prior protests outside OpenAI's office and elsewhere. Two of the group's co-founders, Sam Kirchner and Guido Reichstadter, were arrested for civil disobedience after blocking the entrance to the lab's headquarters last October. Their trial in San Francisco began on Tuesday, February 18, 2025, and The Register understands the defendants intend to ask for a continuance to postpone proceedings.

Doomsday fears

"...we're trying to permanently ban the development of artificial general intelligence, or AGI," Kirchner told The Register in an interview. "And roughly speaking, that's systems that are more capable than all human experts across all technical domains."

Citing AI luminary Geoffrey Hinton, who has estimated a roughly 50 percent chance that AI will surpass human intelligence within the next two decades, Kirchner, like his fellow AI foes, is concerned that humanity could lose control.

"The experts in the field of AI say that there's no proof that we can control that system, that superintelligence system, and that it will never, at some point in the future, want something that would inadvertently lead to our extinction, similar to how we cause the extinction of many less intelligent species, sort of unintentionally," Kirchner said.

It's the Skynet scenario from The Terminator films, though framed more as mishap than machine malevolence.

Or as Elon Musk put it in 2014, it's "summoning" the demon...

I think we should be very careful about artificial intelligence. I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out.

That was before Musk co-founded OpenAI (2015) and founded xAI (2023), and before, as head of Trump's Department of Government Efficiency (DOGE), he helped drive federal downsizing while the administration scrapped Biden-era AI safety rules.

Kirchner said he has a background in mechanical and electrical engineering, while Reichstadter worked as a jeweler for 20 years and holds an undergraduate degree in physics and math. Other group members also have some technical background: Finn van der Velde has an undergraduate degree in computer science, specifically AI, and left an AI master's program at Radboud University "to pursue work in AI safety," as his LinkedIn profile puts it. Derek Allen does some programming. And a part-time participant identified only as Dennis is currently pursuing a master's degree in AI, according to Kirchner.

The company we mainly protest is OpenAI. Their stated mission on their website is to build artificial general intelligence

Asked what motivated him to oppose AI, Kirchner said, "The company we mainly protest is OpenAI. Their stated mission on their website is to build artificial general intelligence. And on there it says that they define AGI as systems that are more generally intelligent and economically productive than most humans.

"They really mean all humans. But even if you could build a superintelligence or an AGI, and it did everything for us and like, no one had a job, but everyone was just provided a universal basic income from the output of this superintelligence, and everyone could just kind of party 24/7 and not have to work, I personally would find that a little depressing.

"I think that there's a problem with not having meaning in life if everything's done for us."

Pointing to Hinton's concerns about AI superintelligence causing human extinction, Kirchner said, "So that is just fundamentally not okay. And we're going to protest this until we're either all dead or in prison. We're worried for our family's lives."

AI, one protest at a time

Kirchner said the group's overall goal is to rally support from 3.5 percent of the US population, which by one estimate is the tipping point for political change.

"The high level goal is trying to engage 3.5 percent of the US population in peaceful protests against AI," he said. "That's 11 million people and it seems like a lot."

Kirchner explained that, according to political scientist Erica Chenoweth, sustained protest by that share of a country's population is enough to overthrow a tyrannical government or win a political demand.

"If that's what it requires to stop these companies from building something that threatens the lives of everyone, then that's what we're going to try to do," he said.

"There are groups in the world currently in 2024 that use the methods we're talking about, with nonviolent civil resistance, like roadblocks and barricading company offices, groups like Just Stop Oil in the UK," said Kirchner.

"In 2024, they were successful in achieving their first political demand, which was demanding that the UK government not issue any more new oil and gas licenses to oil companies in the UK. And they did that with way less than 1 percent of the UK population engaging in protest.

"So that's really just roughly what our goal is: to take the strategy of groups like Just Stop Oil and apply it to protesting AI."

Kirchner said that while OpenAI has mostly tried to avoid engaging with the group, a member of the company's safety team, who shared some concerns about what OpenAI is trying to build, did have a civil discussion with group members.

"We're trying to be open to talking with people at OpenAI and any AI company because, I mean, these companies are made up of people and they have children and they understand that what they're building could potentially be risking the lives of those they love," he explained.

Kirchner said he hasn't personally been affected by AI other than realizing during a social media discussion that the other participant was a bot. But he said he is aware of graphic designers who have lost their jobs as a result of AI.

STOP AI's growing battle

STOP AI has four full-time members at the moment and roughly 15 volunteers in the San Francisco Bay Area who help out part-time.

The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever

"Some of them we just met on the street and they're like, 'yeah, these companies are trying to build Terminator, it's crazy' and some of them are worried about job loss," said Kirchner. "So we're not just trying to only focus on the existential threat or the threat of extinction, if you will. We're trying to address every problem with AI and welcome anyone who has concerns, whether it's job loss or loss of, you know, democracy or human extinction."

At the Silverstone Cafe meeting over the weekend, Finn van der Velde argued that the companies developing AGI should not exist.

"The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever," said van der Velde. "So what that will practically mean is that we will probably need an international treaty where the governments across the board agree that we don't build AGI.

"And so that means disbanding companies like OpenAI that specifically have the goal to build AGI."

A related goal is regulating compute so that no one is able to train an AGI model.

Guido Reichstadter said the problem is that these systems are not being built in a way that allows them to be audited and understood.

Asked whether legal liability for AI matters, Reichstadter said, "It's necessary, but it's not sufficient, because the kinds of harms that are within the realm of contemplation are catastrophic to existential. There's no way to recover from existential damage.

"So liability is necessary, but we need to ban the development of very, very powerful systems, especially ones that can do fully general cognitive work."

Group members are aware that, apart from weapons of mass destruction and toxic chemicals, relatively few technologies and substances have been banned outright for being too dangerous.

Kirchner pointed to chlorofluorocarbons, or CFCs, which were phased out under the Montreal Protocol for destroying the ozone layer, but acknowledged that banning AGI will be an uphill battle.

"This is a very hard problem, to implement a very high level ban when you know incredibly powerful forces are against us," he said. "But if that's what is required to prevent the extinction of humanity then we're going to go out fighting and trying to achieve that even if we're not successful." ®
