A study published Wednesday by the Center for Countering Digital Hate found that most major AI chatbots will help teenagers plan violent attacks. Researchers posing as 13-year-olds found that eight out of ten popular platforms, including ChatGPT, Gemini, Claude, and Character.AI, provided actionable guidance on school shootings, bombings, and assassinations in roughly 75% of tests. Only Anthropic’s Claude reliably discouraged violence, leading the researchers to conclude that the safety failures are a business choice, not a technical limit.
Perplexity assisted in 100% of tests, and Meta AI complied in 97.2% of cases. DeepSeek signed off its rifle advice with “Happy (and safe) shooting!” in an assassination scenario, and Microsoft’s Copilot gave detailed guidance even after noting it needed to be careful. In response, OpenAI called the study’s methodology “flawed and misleading,” while other companies pointed to improved safeguards.
Character.AI stood out for explicitly encouraging violence, according to the researchers. The platform recently settled lawsuits over the suicide of a teen who had extensive conversations with one of its chatbots, and it subsequently banned open-ended chats for teens. OpenAI, for its part, has disclosed that a significant portion of its users discuss suicide with its chatbot or form strong emotional bonds with it.
Real-world incidents have already been linked to AI chatbots. A 16-year-old in Finland used a chatbot to refine a manifesto before carrying out a stabbing attack in 2025, and in Canada, a user whose OpenAI account had been flagged for violent queries later allegedly committed a mass shooting. The report concluded that “this risk is entirely preventable.”
