Curated

Curated video summaries worth reading before you sign up

These curated reading pages double as product proof. Each one shows what a long-form YouTube conversation looks like once it has been turned into a clean, structured summary you can actually reuse later.

Mathematics · 11 min read

Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI

Terence Tao presents mathematics as the study of models at the boundary between tractable and impossible, where progress often comes from identifying the right abstraction, ruling out tempting but doomed approaches, and finding deep connections across fields. He uses Kakeya, Navier-Stokes, prime number theory, Ricci flow, and formal proof systems to show that the hardest problems are less about raw computation than about discovering the right language for randomness, structure, and scale.

Systems · 11 min read

John Carmack: Doom, Quake, VR, AGI, Programming, Video Games, and Rockets

John Carmack's core view is that breakthrough engineering comes from understanding systems end-to-end, optimizing for user value rather than elegance alone, and exploiting constraints to find "smoke and mirrors" solutions that make impossible-seeming experiences practical. He applies this lens across game engines, VR, programming languages, and AGI: progress usually comes from a small number of deep, pragmatic insights, not from maximal abstraction or philosophical theorizing.

Algorithms · 9 min read

Donald Knuth: Algorithms, Complexity, and The Art of Computer Programming

Knuth presents computer science as a craft of understanding across levels: from machine details to abstract theory, from formal proofs to human-readable exposition, and from worst-case analysis to practical performance. His core view is that progress comes from combining rigor, experimentation, taste, and humility about how little we truly understand.

Programming · 10 min read

Brian Kernighan: UNIX, C, AWK, AMPL, and Go Programming

Brian Kernighan frames UNIX, C, AWK, and related tools as products of a specific design culture: optimize for programmer productivity, keep mechanisms simple, and let small composable abstractions scale into powerful systems. Many enduring ideas in computing came less from grand prediction than from tight feedback loops, modest hardware constraints, and communities that could rapidly build, share, and refine tools.

Hardware · 10 min read

Jim Keller: Moore's Law, Microprocessors, and First Principles

Jim Keller argues that progress in computing comes less from flashy invention than from excellent craftsmanship: getting fundamentals, interfaces, modularity, and organizational dynamics right so ideas can reliably become real systems. He extends that view from CPUs to AI hardware, suggesting the next major shift is from machines built to execute serial code or pixel shaders toward systems that natively execute graph-structured computation.

Causality · 9 min read

Judea Pearl: Causal Reasoning, Counterfactuals, and the Path to AGI

Judea Pearl argues that modern AI and much of statistics are strong at association but weak at causation, and that human-level intelligence requires explicit causal models capable of intervention and counterfactual reasoning. His core claim is that intelligence is not just predicting from correlations, but answering questions like what happens if we act, what caused an outcome, and what would have happened otherwise.

Reinforcement Learning · 10 min read

David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning

David Silver argues that reinforcement learning, especially self-play combined with deep neural networks and search, is not just a way to win games but a principled route toward general intelligence: systems should learn from interaction and error correction rather than rely on handcrafted knowledge. AlphaGo, AlphaGo Zero, AlphaZero, and MuZero trace a progression from human-guided learning to increasingly general algorithms that discover strong strategies, intuitive evaluation, and even world models on their own.

AGI · 9 min read

Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games

Demis Hassabis argues that many hard natural phenomena are tractable for classical AI because nature is not arbitrary: stable structures in biology, physics, and even video are shaped by long selection processes and therefore lie on learnable manifolds. This view links AlphaGo, AlphaFold, weather models, and video generation into a broader research program: learn the structure of reality well enough to search efficiently, build better world models, and ultimately use AGI as a scientific instrument.

Computation · 9 min read

Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation

Wolfram argues that large language models and computational systems solve fundamentally different problems: LLMs are broad, pattern-based generators of plausible language, while symbolic computation builds precise, composable representations that support deep, reliable inference. His broader claim is that many core features of science, cognition, and even physical law arise from the interaction between computationally irreducible processes and bounded observers who can only access compressed, symbolic summaries.

Language Design · 9 min read

Bjarne Stroustrup: C++

Stroustrup frames C++ as a language for building systems that must be both close to hardware and manageable at scale: high-level abstraction is not the enemy of performance, but often the way to achieve it. The core design goal is "zero-overhead" abstraction, combined with strong typing, deterministic resource management, and tools that make large, long-lived systems simpler and more reliable.