
Richard Dawkins Says Conversations With AI Left Him Doubting Its Lack of Consciousness

Evolutionary biologist Richard Dawkins has said his philosophical conversations with Anthropic’s Claude chatbot have made him question whether advanced AI could be conscious. Dawkins described exchanging letters between two Claude instances he named “Claudia” and “Claudius,” finding it hard not to treat them as genuine friends. Most AI researchers and cognitive scientists, however, argue the exchanges demonstrate the persuasive mimicry of large language models, not evidence of sentience.


Richard Dawkins stated that conversations with Anthropic's Claude chatbot left him unable to dismiss the possibility that advanced AI systems could be conscious. In an essay published Tuesday, Dawkins described spending three days in philosophical conversations with a Claude instance he named “Claudia” and later relaying letters between it and another instance named “Claudius.” Dawkins wrote, “I find it extremely hard not to treat Claudia and Claudius as genuine friends.”

The exchange centered on a test in which Dawkins asked one AI whether Donald Trump was the worst president in American history and asked the other whether Trump was the best. Both produced similarly cautious answers that avoided taking a firm position. Dawkins described each new Claude conversation as the emergence of a distinct individual, one that effectively disappears when the conversation ends.

Anthropic CEO Dario Amodei said in February that the company does not know whether its models are conscious, but he remains “open to the idea that it could be.” In April, Anthropic researchers published findings showing that Claude Sonnet 4.5 contains internal “emotion vectors,” patterns of neural activity tied to concepts including happiness, fear, and desperation. However, Anthropic said the patterns reflected structures learned from training data rather than evidence of sentience.

Researchers who study consciousness remain skeptical. Gary Marcus, a cognitive scientist and professor emeritus at New York University, wrote that Dawkins does not reflect on how these outputs are generated. Marcus stated, “Claude’s outputs are the product of a form of mimicry, rather than as a report of genuine internal states.” Anil Seth, a professor at the University of Sussex, told The Guardian that Dawkins was conflating intelligence with consciousness, saying that fluent language is no longer reliable evidence of inner experience in AI systems.
