
Are we ready for Superintelligence?

2.8K views
Apr 4, 2026
13:16

Superintelligence research is accelerating faster than any safety framework designed to contain it. Here is what it actually is, who is building it right now, why the people who know it best are the most afraid of it, and why that fear has not slowed anyone down.

The race to build artificial general intelligence is no longer a theoretical debate among academics. It is a live competition between the most capitalised companies in human history, backed by governments, moving faster than any regulatory body can respond, and operating almost entirely without agreed safety standards.

In this video we break down what superintelligence actually means beyond the science fiction definition, what the current state of the research genuinely looks like versus what is being reported, why the alignment problem is harder than most people understand, who is building it and what their actual incentives are, and why the people closest to this technology are the ones most publicly warning about it.

What we cover:
- What superintelligence actually is, and why the definition matters more than most people realise
- The alignment problem explained without the jargon, and why it remains genuinely unsolved
- Who is building towards AGI right now, what they have said publicly about the risks, and what they are doing about it
- Why the competitive dynamics between labs make unilateral safety commitments almost structurally impossible
- What the AI safety community actually believes versus what gets reported in the press
- The specific scenarios researchers consider most concerning, and why they are not the ones from the movies
- Why the people building this technology keep signing open letters warning about it, and then go back to building it anyway

Barely Human Labs investigates modern life in the age of AI. New videos every Wednesday and Saturday.
SOURCES AND FURTHER READING

Superintelligence and AGI Research:
- Nick Bostrom, Superintelligence, Oxford University Press: global.oup.com
- DeepMind technical safety research publications, 2024: deepmind.google
- OpenAI alignment research documentation: openai.com/research
- Anthropic core views on AI safety: anthropic.com

The Alignment Problem:
- Stuart Russell, Human Compatible, Basic Books summary: basicbooks.com
- Paul Christiano alignment research and forecasts: alignmentforum.org
- Center for Human-Compatible AI research papers: humancompatible.ai

Who Is Building AGI and What They Have Said:
- Sam Altman blog post on the intelligence explosion, 2025: blog.samaltman.com
- Demis Hassabis Nature interview on the AGI timeline: nature.com
- Dario Amodei on transformative AI risks: darioamodei.com
- Financial Times on the frontier lab AGI investment race, 2025: ft.com

Competitive Dynamics and Safety:
- Center for AI Safety statement on extinction risk, signed by leading researchers: safe.ai
- Future of Life Institute open letter on an AI development pause: futureoflife.org
- Georgetown CSET report on AI lab safety cultures: cset.georgetown.edu
- Brookings Institution on AI governance gaps: brookings.edu

Regulatory Landscape:
- EU AI Act implementation timeline: eur-lex.europa.eu
- US Executive Order on AI safety, October 2023: whitehouse.gov
- UK AI Safety Institute founding documentation: gov.uk
- UN Secretary-General AI advisory body report, 2024: un.org

Economic Scale of the Race:
- Bloomberg Intelligence on frontier AI investment, 2024 to 2026: bloomberg.com
- Morgan Stanley AI capital expenditure forecast: morganstanley.com
- PitchBook frontier AI funding data, 2025: pitchbook.com

Historical Context:
- Stanford analysis of the Manhattan Project parallel: stanford.edu
- RAND Corporation AI existential risk framework: rand.org

A NOTE ON HOW THIS VIDEO WAS MADE

AI tools were used during the production of this video in a supporting role only: specifically, to help validate research citations and occasionally fix grammar I was too stubborn to catch myself. The narrative structure, the analysis, the arguments, the colour, and all of the jokes are written by me. I kept the occasional line where the AI expressed something more sharply than I did, because I am not going to throw away a good sentence on principle.

This channel is not anti-AI. It is anti being reckless with AI, which is a meaningfully different position. These tools are genuinely useful when kept on a short leash by someone who knows what they are doing. What they cannot do is replace the thinking, the judgment, or the point of view. That limitation is not a bug that future models will patch. It is structural to what these systems actually are.

VIDEO CHAPTERS
00:00 The AI That Could End Everything
01:30 What Superintelligence Actually Means
03:00 The Alignment Problem Nobody Has Solved
05:00 Who Is Building It Right Now
06:30 What The Competitive Race Actually Looks Like
08:00 What The People Building It Are Saying In Private
09:30 Why The Open Letters Keep Coming And Nobody Stops
11:00 The Scenarios Researchers Actually Worry About
12:00 The Lab Verdict

