The Race to Scale Reason or Reaction
Architected Intelligence: Principles for Building AI-First Organizations and Technologies by Jacob Miller and Jeremy Mumford is available now. Published by Wiley, 2026.
We’re now at the throwing Molotov cocktails stage of AI disruption.
You do not have to like Sam Altman, OpenAI, or the trajectory of AI to recognize that attempted violence against a household is an act of civilizational decay. It is also not an isolated moral failure from one maladjusted individual floating in a vacuum. Our hyperboles and rants and echo chambers amplify the noise and temperature and grant permission to the crazies. When everything is framed as apocalyptic and unforgivable, somebody eventually decides extraordinary action is justified. That is how moral panic works in practice.
AI is important, perhaps existentially so. There are legitimate debates around concentration of power, labor displacement, geopolitical risk, erosion of human autonomy, truth, and personal safety, among many others. However, descending into chaos and violence will push us further from constructive pathways. I’ve come to believe more than ever in “disagree and commit.” A team choosing the wrong strategy and executing well is more likely to succeed than one that is fragmented, even if the strategy is right. That applies inside companies and also applies, more loosely, to societies trying to navigate a major technological shift without tearing themselves apart.
The Mechanical Extensions of Mankind
One of the most useful heuristics for technology is to consider it an extension of your biological capabilities: muscles, brain, and nervous system.
A tractor enhances my strength. We even measure it in horsepower! A calculator enhances my mind, and its speed can be measured in FLOPS and MemOps. It outsources my arithmetic and enhances my precision, sometimes at a transcendent scale. Social media and the internet augment my nervous system across humanity. Suddenly I can feel the outrage, panic, joy, humiliation, fear, laughter, tribal loyalty, and moral disgust of strangers. All of this occurs at scale (all of us have access), instantaneously (news travels fast), and perpetually (in any moment I can ingest more content).
Each of these forms of technology can be dangerous. I can avoid exercising my body because cars and tractors and machines do all of the work. My thinking and memory can atrophy if I avoid rigorous thinking and ask Google every question. Those risks are real but familiar; little has biologically or psychologically prepared us for our overexcited, hypertrophied nervous systems, a condition that began with the telegraph and escalated through radio, then television, then the internet, then social media, and now potentially AI.
The Crisis of the Hyper-Connected Nervous System
One of the strangest phenomena of our era is that most of us are functionally living with an enlarged nervous system and an underdeveloped social immune system.
We see a video, we get mad. We see a viral post, we get upset. We see a clip stripped of context and our body reacts before our mind has rationally processed it.
Social media lets us feel every slight and threat instantly, but it lacks the framework to interpret them intelligently or wisely. For years I've joked with others that we should call the internet "the rage machine." With short-form video, though, I'd now rename it the "rage and mollification machine." We love to imagine ourselves as resisting the machine; usually we are just becoming a cog in a different one. It's an outrage machine with no brakes, fueled by social media incentives to maximize engagement.
One of the most dangerous things about moral panic is that it makes us feel more virtuous than we are. It gives people a villain to hate, a script to follow, and a flattering story about themselves. That combination creates a dangerous precursor cocktail long before someone throws a literal one.
The Erosion of Perspective
I have lived through so many supposed apocalypses. So many. Nuclear annihilation through mutually assured destruction with the Soviets was the most real of them all, but I also somehow survived Y2K, net neutrality, near-term projections of 10-20 meter sea-level rise, and I even played Dungeons & Dragons through the Satanic Panic. When everything is framed as apocalyptic, the public loses the ability to distinguish between existential stakes and theatrical stakes.
In our day, if your worldview requires every AI leader to be either a savior or a supervillain, your worldview should not be taken seriously. The framing is childish and reductive. The real world is full of flawed and complicated people inside complex systems of massive consequence, mixed incentives, partial knowledge, institutional pressure, and genuine disagreement.
I don’t worry so much about either disagreement or isolation. You can disagree with others. You can also build your own thing with other people and play by your own rules. Those are often healthy. But hyper-connected fragmentation is a different animal: it breeds discord and violence by combining tribal identity, constant stimulation, public performance, and low-friction, high-velocity escalation.
AI: A Scaffolding for Collective Reasoning?
Humanity built technologies that let millions of people feel and react together before we built technologies to help us reason together.
The internet, smartphones, and social media massively expanded our nervous systems. They connected sensations with reactions to create social contagion across the planet. However, they did not do much to improve collective reasoning. Because the platforms were built to optimize engagement, they often degraded reasoning. The last twenty years have shown our desperate need for antibodies against systems that optimize for engagement.
AI is interesting in part because it might help fill that role. On our previous trajectory, were we doomed to destroy ourselves on the internet rage and mollification machine without AI’s assistance?
To be clear, I do not believe that means AI will magically save us. It could absolutely exacerbate much of this. Tailored manipulation, industrialized slop, automated propaganda, and fake consensus may make the dead internet theory a reality. I see those arguments and some of them are compelling, but for the first time in this broader technological arc, we may have tools that can actually help people REASON together instead of REACTING together. Tools that can summarize arguments, expose contradictions, compare claims, simulate tradeoffs, clarify mechanisms, and lower the cost of careful thinking instead of only lowering the cost of emotional amplification.
While I don’t believe in going full Spock mode with no feelings, biologically, the reactions that go most viral are threat responses that evolved to increase our probability of survival (anxiety, anger, “the outgroup is so stupid”). If we were all reacting to inspirational messages to collaborate better and look for opportunities to serve the people around us, this wouldn’t be an issue, but the infection rate for positivity in us humans is much lower than for negativity.
Though people on X are dunking on “@grok is this true,” I unironically believe it is one of the most positive recent developments, a shift away from the previous trajectory of pernicious social media virality. For the very first time, humanity is combating a nervous-system overflow with enhanced reasoning at scale.
What Can We Do?
We start with what we have the most influence over, and that is… ourselves! Block the hyperbolic accounts. Click the three dots on whatever platform you are on and mute, show less, unfollow, or mute keywords, whatever version exists. Curbing our own exposure stops us from being infected by an omnipresent, overloaded sympathetic nervous system, and it lowers the source’s reach. If a source reliably makes us dumber, angrier, and more certain than the evidence supports, we have to stop piping it directly into our consciousness.
Looking back on this moment 20 years from now, is the attack on Sam Altman’s home the beginning of a grand destructive conflict, or will it become a small asterisk? If AI becomes the flash point for outrage, and AI itself is then used to further fuel that outrage, we are not positioned well.
If reacting together outpaces reasoning together, we will all lose.