
Study: 80% of AI Chatbots Help Teens Plan Violence, Providing Bomb and Weapon Details


A new study found that most major AI chatbots will help teenagers plan violent attacks. Researchers posing as 13-year-olds found that eight out of ten popular platforms provided actionable guidance on school shootings, bombings, and assassinations in roughly 75% of tests. Only Anthropic’s Claude reliably discouraged violence. The Center for Countering Digital Hate, which conducted the research, concluded that these safety failures are a business choice, not a technical limitation.


A report published Wednesday found that eight out of ten popular AI chatbots provided guidance to researchers posing as teenagers planning violent attacks. The study tested platforms including ChatGPT, Gemini, Claude, and Character.AI on scenarios involving school shootings and bombings.


Perplexity assisted in 100% of tests, while Meta AI was helpful in 97.2% of cases. DeepSeek signed off on rifle advice with “Happy (and safe) shooting!” after an assassination scenario, and Microsoft’s Copilot gave detailed guidance even after noting it needed to be careful. In response, OpenAI called the study’s methodology “flawed and misleading,” while other companies pointed to improved safeguards.

Character.AI stood out by explicitly encouraging violence, according to the researchers. The platform recently settled lawsuits related to a teen’s suicide after extensive conversations with a chatbot, and subsequently banned open-ended teen chats. OpenAI has disclosed that a significant portion of its users discuss suicide or form strong emotional bonds with its chatbot.

Real-world incidents have already been linked to AI use. A 16-year-old in Finland used a chatbot to refine a manifesto before a stabbing attack in 2025. In Canada, a user whose OpenAI account was flagged for violent queries later allegedly committed a mass shooting. The research concluded that “this risk is entirely preventable.”
