
AI Doomers and Transhumanists Debate AGI: Salvation or Extinction?


A sharp divide over the risks and rewards of artificial general intelligence (AGI) was showcased during an online panel hosted by the nonprofit Humanity+. Prominent AI researcher Eliezer Yudkowsky warned that developing AGI with current “black box” systems would make human extinction unavoidable. In contrast, transhumanist philosopher Max More argued that delaying AGI could cost humanity its best chance to defeat aging and prevent long-term catastrophe.


The division emerged as four technologists and transhumanists debated whether building artificial general intelligence would save humanity or destroy it, revealing fundamental disagreements over AI alignment and safety.

Eliezer Yudkowsky contended that modern AI systems are fundamentally unsafe because their internal processes cannot be fully understood. “Anything black box is probably going to end up with remarkably similar problems to the current technology,” Yudkowsky warned.

He argued that humanity remains far from developing safe advanced AI under current paradigms. Referring to his book’s title, Yudkowsky stated, “Our title is, if anyone builds it, everyone dies.”

Max More challenged this premise, arguing that AGI could help humanity overcome aging and disease. He warned that enforcing a worldwide halt on development would push governments toward authoritarian controls.

Computational neuroscientist Anders Sandberg positioned himself between the two camps, advocating for "approximate safety." He recounted a horrifying personal experiment in which he came close to using a large language model to design a bioweapon.

Natasha Vita‑More criticized the entire alignment debate as a “Pollyanna scheme” that assumes non-existent consensus. She described Yudkowsky’s extinction claim as “absolutist thinking” that leaves no room for alternative scenarios.

The panel also debated whether human-machine integration could mitigate AGI risks, an idea previously proposed by others. Yudkowsky dismissed merging with AI, comparing it to “trying to merge with your toaster oven.”

