
AI Doomers and Transhumanists Debate AGI: Salvation or Extinction?


A sharp divide over the risks and rewards of artificial general intelligence (AGI) was on display during an online panel hosted by the nonprofit Humanity+. Prominent AI researcher Eliezer Yudkowsky warned that building AGI with current “black box” systems would make human extinction unavoidable. In contrast, transhumanist philosopher Max More argued that delaying AGI could cost humanity its best chance to defeat aging and avert long-term catastrophe.


A stark division over the future of artificial intelligence emerged as four technologists and transhumanists debated whether building artificial general intelligence would save humanity or destroy it, revealing fundamental disagreements over AI alignment and safety.

Eliezer Yudkowsky contended that modern AI systems are fundamentally unsafe because their internal processes cannot be fully understood. “Anything black box is probably going to end up with remarkably similar problems to the current technology,” Yudkowsky warned.

He argued that humanity remains far from developing safe advanced AI under current paradigms. Referring to his book’s title, Yudkowsky stated, “Our title is, if anyone builds it, everyone dies.”

Max More challenged this premise, arguing that AGI could help humanity overcome aging and disease. He also warned that excessive restraint would push governments toward authoritarian controls in order to stop development worldwide.

Computational neuroscientist Anders Sandberg positioned himself between the two camps, advocating for “approximate safety.” He recounted a horrifying personal experience in which he came close to using a large language model to design a bioweapon.

Natasha Vita‑More criticized the entire alignment debate as a “Pollyanna scheme” that assumes a consensus that does not exist. She described Yudkowsky’s extinction claim as “absolutist thinking” that leaves no room for alternative scenarios.

The panel also debated whether human-machine integration could mitigate AGI risks, an idea previously proposed by others. Yudkowsky dismissed merging with AI, comparing it to “trying to merge with your toaster oven.”
