Blog

Making AI risk everybody’s conversation

Find useful articles, tools, and insights that bridge the gap between expert discourse and the general public.


Uncertainty, Alignment, and the Narrow Window Ahead | AM I? After Dark #10

Two thinkers unpack AI’s uncertainty, alignment strategy, and geopolitical tension. Their takeaway: build aligned AI early, cooperate globally, and stay humble—because the next few years will set the course for everything after.

Models Gain Situational Awareness | Warning Shots #12

AI’s latest leaps are exposing hidden failures in testing. From situational awareness to robot hacks and synthetic celebrities, Warning Shots #12 examines the cracks forming as models learn to hide their behavior, and why stronger oversight can’t wait.

Monk Explains Consciousness and AI | AM I? #9

Swami Revatikaanta joins Milo Reed & Cameron Berg to explore AI, consciousness, Vedānta, and the wisdom of the Bhagavad Gita in AM I? Episode 9.

The U.S. Economy Is Getting Tethered to AI - Warning Shots #11

The U.S. economy is locking itself into AI. This episode maps the financial and policy forces driving it—and why labor, power, and safety risks can’t be ignored.

Living with Near-AGI: Incentives, Agents & Healthy Use - AM I? #8

Near-AGI is creeping into daily life. This episode explores incentives, agents, risks, and healthy use — with blunt guidance and a timeline for what’s next.

Young People vs. Advancing AI - For Humanity Podcast #70

Youth leaders unpack AI’s impact on jobs, policy, and mental health—and why guardrails matter now. A candid, nonpartisan roadmap to action.

Albania’s AI “Minister” Diella — A Warning Shot for Governance — Warning Shots #10

Albania’s AI “minister” Diella sparks debate on delegating governance to AI. We unpack the promise, pitfalls, and the slippery slope it might trigger.

Can Empathy Make AI Honest? (Self-Other Overlap Explained) - AM I? #7

Sep 18, 2025

Marc Carauleanu joins the AM I? team to unpack Self-Other Overlap, a technique for reducing model deception at a low alignment tax. We cover results, critiques, and next steps.

For Humanity: An AI Risk Podcast #69 - Hunger Strikes vs Big AI

Three hunger strikers confront Anthropic and DeepMind, rejecting AI “inevitability” and calling for a nonviolent mass movement to halt the AGI race.

AM I? #6 - Love, AI Relationships, and Caution

Sep 14, 2025

We dig into AI relationships: love, validation, risks, and kids. Real stories, consent, and alignment. Use the tech, but keep humans at the center, and proceed with caution.

Warning Shots #9 — AI Is Moving Faster Than Congress

Sep 16, 2025

In Warning Shots #9, John Sherman explains why AI is moving faster than Congress and why parents and citizens must take urgent action. From CEOs warning of extinction-level risks to the threat of self-improving AI, this episode explores why regulation and public pressure are essential to safeguard our future.

When Machines Feel Too Real: The dangers of anthropomorphizing AI

Explainer

Aug 7, 2025

As AI grows more human-like, people risk forming deep emotional bonds with systems that have no awareness or intent. This misplaced trust can fuel addiction, delusion, and even manipulation.

Beyond the GPT Hype: Why public action is our best defense against AI extinction risk

Explainer

Aug 8, 2025

GPT-5 sets new benchmarks, but its release highlights a bigger issue: a few companies are rapidly advancing toward AGI without adequate safety measures or public oversight.

What Is AI Extinction Risk?

Explainer

Aug 1, 2025

AI extinction risk isn’t science fiction — it’s a growing concern among leading researchers. Learn what’s at stake, why we have only a narrow window to act, and the practical steps we can take now to safeguard humanity’s future.

The AI Race: Should global dominance trump AI transparency and safety?

Policy & Advocacy

Aug 21, 2025

As the race for AI dominance heats up, safety and transparency risk being sidelined. This article explores the tension between rapid innovation and responsible regulation.

5 Ways AI Could Go Wrong — And How to Prevent It

Explainer

Aug 14, 2025

Explore five ways AI could cause catastrophic harm, from misinformation to large-scale disasters, and the actions we can take to mitigate these threats.