X will suspend creators from its revenue-sharing program if they post AI-generated videos of armed conflict without clear disclosure. Nikita Bier, the platform’s head of product, announced the revision to the Creator Revenue Sharing policies as a measure to maintain authenticity and “prevent manipulation of the program.”
“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote in a post. “With today’s AI technologies, it is trivial to create content that can mislead people.” Violators face a 90-day suspension, with repeat offenses leading to permanent removal from monetization.
The policy change comes as AI-generated videos claiming to show violence in the Middle East have circulated widely. One AI clip depicting a supposed airstrike on Dubai’s Burj Khalifa was viewed more than 8 million times on X, according to a fact-check, while another version garnered over 42,000 views on Instagram.
Researchers and governments have warned that deepfakes could spread propaganda and misinformation online. The United Nations has warned that such media threatens information integrity, particularly in conflict zones where fabricated content can spread hate at scale.
This concern was realized during Russia’s invasion of Ukraine, when a deepfake video appeared to show Ukrainian President Volodymyr Zelensky urging troops to surrender. Officials quickly debunked the video, and Zelensky later released a message rejecting the claim.
Enforcement will rely on signals such as Community Notes identifying content as AI-generated, along with metadata embedded by generative AI tools. By tying enforcement to monetization, the policy targets the financial incentive to post engaging but fabricated videos.
“We will continue to refine our policies and product to ensure X can be trusted during these critical moments,” Bier wrote. The platform’s focus remains on ensuring reliable information during global conflicts.

