YouTube plans to expand AI tools across the platform in 2026 while stepping up detection and labeling of low-quality AI-generated videos, CEO Neal Mohan said in a letter published Wednesday. Mohan said the moves aim to reduce "AI slop" and protect the viewing experience.
The platform will also tighten protections around likeness and identity, extending its Content ID framework to give creators more control over their faces and voices. "To reduce the spread of low-quality AI content, we're actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low-quality, repetitive content," the letter said.
Creators must disclose realistic synthetic or altered media, and YouTube will label videos made with its internal AI tools. The company said it will remove harmful synthetic media that violates its Community Guidelines, and it reiterated its support for the NO FAKES Act.
Planned creator features for 2026 include AI-generated Shorts that use creators' likenesses and expanded AI-assisted music creation tools. "AI will act as a bridge between curiosity and understanding," Mohan added, framing the technology as a creative aid for creators and viewers alike.