X Threatens to Suspend Monetization for Undisclosed AI War Videos

Elon Musk’s social media platform X announced a new policy targeting AI-generated war content. The platform’s head of product, Nikita Bier, stated that creators who post undisclosed AI-generated videos depicting armed conflict will lose access to the revenue-sharing program for 90 days. This move aims to maintain authenticity and prevent manipulation, especially during wartime when access to real information is critical. The policy follows the viral spread of AI clips, including one falsely showing an airstrike on Dubai’s Burj Khalifa.


X will suspend creators from its revenue-sharing program if they post AI-generated videos of armed conflict without clear disclosure. Nikita Bier announced the revision to the Creator Revenue Sharing policies to maintain authenticity and “prevent manipulation of the program.”

“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote in a post. “With today’s AI technologies, it is trivial to create content that can mislead people.” Violators face a 90-day suspension, with repeat offenses leading to permanent removal from monetization.

The policy change comes as AI-generated videos claiming to show violence in the Middle East circulated widely. An AI clip of an airstrike on the Burj Khalifa was viewed more than 8 million times on X, according to a fact-check, while another version garnered over 42,000 views on Instagram.

Researchers and governments have warned that deepfakes could spread propaganda and misinformation online. The United Nations has warned that such media threatens information integrity, particularly in conflict zones where fabricated content can spread hate at scale.

This concern was realized during Russia’s invasion of Ukraine, when a deepfake video appeared to show Ukrainian President Volodymyr Zelensky urging troops to surrender. Officials quickly debunked the video, and Zelensky later released a message rejecting the claim.

Enforcement will rely on signals such as Community Notes identifying content as AI-generated, along with metadata embedded by generative AI tools. By tying enforcement to monetization, the policy targets the financial incentive to post fake but engaging videos.

“We will continue to refine our policies and product to ensure X can be trusted during these critical moments,” Bier wrote. The platform’s focus remains on ensuring reliable information during global conflicts.
