[{"data":1,"prerenderedAt":10},["ShallowReactive",2],{"$fr7opHnO5aKlUjEUyNOag7jtESZXyrtqHMWeaGc2oB50":3},{"slug":4,"title":5,"excerpt":6,"publishedAt":7,"updatedAt":8,"html":9},"a-brief-history-of-intelligence-evolution-ai-and-t-20260227","A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains","Explores how human intelligence evolved through five breakthrough layers and what this means for our AI-powered future.","2026-02-27 03:30:52","2026-02-27 06:27:04","\u003Csection class=\"fulltext-section\" data-index=\"-100\">\n  \u003Ch2 class=\"fulltext-title\">Introduction\u003C/h2>\n  \u003Cp class=\"fulltext-detail\">&quot;Evolution is still unfolding in earnest; we are not at the end of the story of intelligence but at the very beginning.  &quot;This positions us not as intelligence&#x27;s culmination but as one stage in ongoing progression.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">A Brief History of Intelligence tackles a question most books split apart: how did brains evolve AND what does that tell us about building AI? Bennett, working as AI entrepreneur with neuroscience advisors, argues these questions are inseparable. \u003C/p>\n  \u003Cp class=\"fulltext-detail\">The book&#x27;s structure is five evolutionary breakthroughs, each solving a specific problem: steering toward rewards, reinforcing successful behaviors, simulating actions mentally before executing, modeling other minds, and encoding knowledge in language.  Each breakthrough built on previous ones, and each maps to current AI capabilities and limitations.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">What makes this valuable is Bennett&#x27;s dual literacy.  He explains temporal difference learning in dopamine systems and why it matters for AI credit assignment. He shows why large language models excel at pattern matching but fail at common sense - they have language breakthrough without simulation and mentalizing foundations. 
\u003C/p>\n  \u003Cp class=\"fulltext-detail\">The framework is clarifying. Current AI has partially mastered breakthroughs one, two, and five, but lacks three and four.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">This explains both AI&#x27;s surprising capabilities and its stupid failures. A system can generate eloquent text while being unable to understand that solid objects can&#x27;t pass through each other.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">This isn&#x27;t AI hype or fear. It&#x27;s a systematic analysis of what intelligence requires, measured against both biological evolution and artificial progress.\u003C/p>\n\u003C/section>\n\u003Csection class=\"fulltext-section\" data-index=\"1\">\n  \u003Ch2 class=\"fulltext-title\">Evolution&#x27;s Sequential Intelligence Design\u003C/h2>\n  \u003Cp class=\"fulltext-detail\">Let&#x27;s start with the architecture itself. Evolution didn&#x27;t design intelligence all at once—it built it in stages, each solving a specific survival problem. Five breakthroughs, each layered atop the last. Here&#x27;s what matters about this structure: each breakthrough was only possible because the previous one already existed.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Not loosely dependent, but mechanistically necessary. You couldn&#x27;t bolt these capabilities together in any random order.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Take the second breakthrough, reinforcement learning. This is the fish-level ability to repeat behaviors that historically worked and avoid ones that didn&#x27;t. Trial-and-error learning. It sounds simple, but it required the first breakthrough to already be in place.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">The first breakthrough was steering, which gave early bilaterian animals their basic valence system. Valence comes from neurons that tag things as good or bad, pleasurable or painful.
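\u003C/p>\n  \u003Cp class=\"fulltext-detail\">A toy sketch of what breakthrough one buys an animal. The gradient, positions, and movement rule below are invented assumptions, but they capture the mechanism: sense valence, steer toward the good.\u003C/p>\n  \u003Cpre class=\"fulltext-code\">\u003Ccode class=\"language-python\"># Steering sketch: the agent senses a scalar valence signal (here, a
# food gradient) and moves in whichever direction the signal improves.
# The gradient shape and starting position are toy assumptions.
def food(x):
    return -abs(x - 10)    # concentration peaks at position 10

pos = 0
for _ in range(20):
    # steer toward whichever neighboring position smells better
    step = 1 if food(pos + 1) > food(pos - 1) else -1
    pos = pos + step

print(pos)  # the agent has climbed the gradient to the peak
\u003C/code>\u003C/pre>\n  \u003Cp class=\"fulltext-detail\">No learning and no memory, just moment-to-moment steering driven by a handful of valence neurons.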
\u003C/p>\n  \u003Cp class=\"fulltext-detail\">These valence neurons live in the hypothalamus, the ancient core of the brain that still runs your body today.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Without valence neurons, reinforcement learning is impossible, because trial and error needs a learning signal. When you try something, you need to know whether it worked or failed.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Good or bad. That&#x27;s what valence provides. It&#x27;s not just helpful; it&#x27;s the foundation the whole system runs on.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">So fish couldn&#x27;t have evolved reinforcement learning without inheriting that ancient valence machinery from their worm-like bilaterian ancestors.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">The basal ganglia, which handles the trial and error, literally builds on top of and depends on signals from the hypothalamus.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">This same logic applies up the chain. The third breakthrough, mental simulation in early mammals, required the second.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Because simulation means imagining actions before doing them, then learning from those imagined outcomes. But if you can&#x27;t learn from trial and error, imagining trials is useless.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">The mammalian neocortex renders the simulations, but it&#x27;s still the basal ganglia that learns from them.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">The fourth breakthrough, mentalizing in primates, required the third. Theory of mind means modeling other minds, but that&#x27;s just applying your existing simulation machinery to internal mental states instead of external actions.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Same computational process, new target. You can&#x27;t model mental states if you can&#x27;t model states at all.
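\u003C/p>\n  \u003Cp class=\"fulltext-detail\">The arrangement described above, where the neocortex renders simulations and the basal ganglia learns from them, resembles what AI researchers call Dyna-style planning. A minimal sketch, with the actions and imagined rewards invented for the example:\u003C/p>\n  \u003Cpre class=\"fulltext-code\">\u003Ccode class=\"language-python\"># Simulation sketch: instead of acting in the world, the agent replays
# imagined outcomes drawn from its world model and lets the same
# trial-and-error update learn from them (Dyna-style planning).
# The actions and imagined rewards are toy assumptions.
model = {0: 0.0, 1: 1.0, 2: -1.0}  # world model: imagined reward per action
Q = {0: 0.0, 1: 0.0, 2: 0.0}       # learned action values
alpha = 0.5

for _ in range(50):                # purely mental rehearsal, no acting
    for action, imagined in model.items():
        Q[action] = Q[action] + alpha * (imagined - Q[action])

best = max(Q, key=Q.get)
print(best)  # the agent settles on action 1 without ever trying it
\u003C/code>\u003C/pre>\n  \u003Cp class=\"fulltext-detail\">The crucial part is that nothing new was added to the learner itself; the imagined outcomes simply feed the machinery that trial and error already built.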
\u003C/p>\n  \u003Cp class=\"fulltext-detail\">And language, the fifth breakthrough, required mentalizing.  Because communication assumes the other person has different knowledge than you do. \u003C/p>\n  \u003Cp class=\"fulltext-detail\">Without theory of mind, you can&#x27;t infer what needs explaining or interpret what others mean by their words.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">This is why intelligence took four billion years.  Not because evolution was slow or lucky, but because you have to build the foundation before you can build the next floor. Each breakthrough solved real limitations of the previous system, but only by preserving everything that came before. \u003C/p>\n  \u003Cp class=\"fulltext-detail\">Modern human intelligence isn&#x27;t one unified thing.  It&#x27;s five systems stacked on top of each other. \u003C/p>\n  \u003Cp class=\"fulltext-detail\">The hypothalamus still running your basic drives, the basal ganglia still doing trial and error, the older neocortex still simulating, the newer neocortex still mentalizing, and language coordinating it all. \u003C/p>\n  \u003Cp class=\"fulltext-detail\">When any layer fails, you see exactly what gets lost.  Damage to theory of mind regions destroys social reasoning while leaving spatial planning intact.  That&#x27;s not random, that&#x27;s architectural.\u003C/p>\n\u003C/section>\n\u003Csection class=\"fulltext-section\" data-index=\"100\">\n  \u003Ch2 class=\"fulltext-title\">Review\u003C/h2>\n  \u003Cp class=\"fulltext-detail\">Five breakthroughs over four billion years.  You carry all of them—the ancient valence system craving dopamine hits, the pattern matcher jumping to conclusions, the simulator running scenarios you&#x27;ll never act on, the mind-reader constantly guessing what others think, and language binding it together. Next time your phone&#x27;s AI confidently tells you something physically impossible, you&#x27;ll know exactly what&#x27;s missing. 
\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Next time you catch yourself imagining disaster scenarios at 3 AM, that&#x27;s breakthrough three doing its job.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">Intelligence isn&#x27;t one thing that suddenly appeared.  It&#x27;s layers of solutions to survival problems, each built on what came before.\u003C/p>\n  \u003Cp class=\"fulltext-detail\">We&#x27;re not debugging intelligence.  We&#x27;re reverse-engineering it.  And we&#x27;ve barely started.\u003C/p>\n\u003C/section>",1772454502167]