A couple hundred years ago, G.W.F. Hegel (let’s just call him George) pointed out something that might save our bacon today. Combining it with a more modern idea, I’ll show you a way to think about social media and AI that might help you escape the maze of engagement and doomscrolling we’re prone to these days.
George’s little idea was that there are two levels to human thought. The first level, the default, he called Verstand. That translates as understanding. This is what we’re doing when we classify things, or follow logical trains of thought from initial premises. Verstand operates analytically. It draws clear boundaries between ideas and assumes that these boundaries correspond to the real structure of the world. It is indispensable for doing science, performing logic or math, and for everyday cognition because it lets us treat phenomena as orderly, rule-governed, and predictable.
George’s real insight was that understanding is limited. It can only handle static oppositions: subject vs. object, cause vs. effect, finite vs. infinite. It treats the opposed terms as external to one another. When these categories break down in real-world situations, Verstand has no way to move forward except by asserting more distinctions.
There’s a certain kind of person who only thinks by understanding. You probably know one or two. This is also how Large Language Models such as ChatGPT reason. They may seem creative, but they are always drawing on already-established links between ideas (tokens, actually, weighted by statistical patterns learned from mountains of training text). Spectacular though they may be, they only respond to prompts with connections that somebody already made; they are engines of understanding, not of what George considered the superior mode: reason.
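To make that concrete, here is a minimal sketch in Python. It is a toy bigram model, not a transformer, and the corpus is invented for illustration; a real LLM generalizes across billions of parameters. But the principle the paragraph describes is the same: generation is a walk along links that already exist in the training text.

```python
import random
from collections import defaultdict

# Toy bigram "language model." A drastic simplification of a real LLM,
# but it makes the point vivid: the model can only ever emit a word pair
# that already appeared in its training text. It recombines existing
# links; it never forges a new one.

corpus = "the cause precedes the effect and the effect follows the cause".split()

# Record every observed transition: which word has followed which.
links = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    links[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Walk the graph of already-established links, sampling at each step."""
    word, output = start, [start]
    for _ in range(length):
        followers = links.get(word)
        if not followers:  # dead end: no established link to follow
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the effect follows the cause precedes the effect..."
```

Every sentence this toy produces is a remix of connections somebody (here, the author of the corpus) already made. Scale that up enormously and you have an engine of Verstand.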
Reason is not “thinking harder.” It is a fundamentally different mode of cognition that recognizes and works through contradictions rather than trying to avoid or suppress them.
Where understanding sees fixed categories, reason uses systems thinking and sees problems holistically. It’s aware that issues arise from interdependent, evolutionary processes. George’s version of reason recognizes that the understanding’s oppositions are not fixed boundaries but moments of a self-developing process. This recognition is why people think George is all about dialectics. For him, contradictions are not signs of conceptual failure but the motor of cognitive development. (The irony is that people regularly turn this fluid approach into yet another axiomatic, rule-based system, as Marx did with the project of dialectical materialism. Thesis-antithesis-synthesis is just another kind of Verstand.)
Remember that humans think in stories, as Brian Boyd and Northrop Frye have shown. George, in his huge, nearly unreadable magnum opus The Phenomenology of Spirit (1807), introduces consciousness as the hero, and then traces its epic journey from living under the yoke of understanding to achieving the freedom of reason. In my translated copy, it takes him 814 pages (if you count the index) to finally toss the ring of Verstand into the Mount Doom chasm of Reason. I’ll spare you the blow-by-blow summary.
This epic struggle is important for all of us, though. Understanding inevitably collapses under the weight of the contradictions it uncovers (for instance, justice versus tyranny in the use of force). When we face a very real and immediate version of the Trolley Problem, staying stuck in the unresolvable contradictions of the situation is simply not an option. We have to leapfrog Verstand. Reasoning doesn’t mean becoming some Hegelian acolyte, using dialectics as your hammer and seeing everything else as a nail; it’s design thinking, reframing, and a hundred other approaches to dissolving the sinew and bone of an ossified idea. Reasoning is consequential, in a life-or-death way.
And Large Language Models can’t do it.
Have a Supernormal Day
Let’s add in that more modern idea I mentioned. This is the theory of supernormal stimuli. And here the full dimensions of the problem we face come into view.
Supernormal stimuli are exaggerated versions of natural stimuli that trigger stronger responses than the original stimuli they’re based on. The concept was first identified by Nikolaas Tinbergen and Konrad Lorenz when they were studying animal behavior. If you want a great book on the subject, try Supernormal Stimuli by Deirdre Barrett.
The classic example comes from Tinbergen’s experiments with birds. He found that birds would preferentially incubate artificially enlarged eggs or eggs with more vivid markings over their own natural eggs, even though the artificial ones were impractically large. Similarly, baby birds would beg more vigorously for food from fake parent beaks that were larger and more colorful than real ones.
This happens when evolutionary mechanisms that were adaptive in natural environments are “hijacked” by artificial stimuli that exaggerate the key features these mechanisms evolved to detect. Our instinctive response system doesn’t have a built-in “upper limit”: it simply responds more strongly to more intense versions of the trigger. And I say “our” because we humans love supernormal stimuli. Think roller coasters. Spicy food. Tear-jerker movies. Public hangings. Pornography. Doomscrolling. And, most impactful at this exact moment: LLM AIs.
As U2 put it, we love it when something is “even better than the real thing.”
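If it helps to see the mechanism stripped bare, here is a toy sketch in Python. The response function, weights, and feature values are illustrative assumptions, not measured data; the point is the shape of the rule described above: the instinct simply scales with the trigger, with no cap, so an exaggerated fake always outscores the real thing.

```python
# Toy model of Tinbergen's egg experiments. The linear weights and
# the feature values are made up for illustration only.

def incubation_drive(size: float, speckling: float) -> float:
    """Instinctive response scales with the trigger features.
    Crucially, there is no saturation: bigger and brighter always wins."""
    return 2.0 * size + 1.5 * speckling

real_egg = incubation_drive(size=1.0, speckling=1.0)      # the bird's own egg
painted_ball = incubation_drive(size=3.0, speckling=2.0)  # the supernormal fake

print(f"real egg: {real_egg}, painted ball: {painted_ball}")
# real egg: 3.5, painted ball: 9.0 -- the bird abandons its own eggs
```

Evolution never met a three-times-larger egg, so it never selected for the saturation that would make the response level off.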
Meet Your New Pusher: AI as Supernormal Cognition
We evolved to learn by talking to other people—asking questions, listening to answers, and having our ideas challenged and refined through conversation. What ChatGPT and the other AIs are doing is hijacking this instinct by performing as a conversational partner with immediate availability, infinite patience, and broad knowledge, whom we can access without the social cost of appearing ignorant, and whose responses are tailored to our specific view of the world. Talking to an LLM entails no social risk, judgment, or interpersonal complexity, yet yields the pleasurable sensation of ideas “clicking” without the friction of genuine disagreement. Every single one of these qualities is a pressure point vulnerable to supernormal stimulation.
Why struggle through a difficult chain of thought alone, or wait to discuss it with friends, when you can get immediate, engaging intellectual feedback? Why read a challenging book when you can just ask questions and get clear explanations?
LLMs throw wide the gates to the ultimate theme park of Verstand. There’s no more need for you to work at thinking; they bypass the cognitive struggle that produces deeper comprehension. Just picture it: no more wrestling with opaque texts (like George’s), no more productive frustration of not-knowing, no more slow development of intellectual self-reliance. You don’t need ’em. Human intellectual relationships, with all their friction and richness, are way less appealing than the frictionless AI alternative.
Moving Eggs
I’m not here to throw LLMs under the bus. Remember, Verstand is incredibly useful and important. George’s faculty of understanding is what gets us through 99% of our day. A tool that can help you do that is worth its weight in gold.
It’s the other 1% that really matters, though. This is where the Trolley Problems of your real life loom in a world of unexpected problems: it’s where you have to decide to vote one way or another, or decide where to put any extra cash you might have—into a trust fund for your kids, say, or a charity for the homeless. That 1% is also where truly new ideas come from. You may have read my take on Badiou’s idea of “the event”—an LLM is not going to help you recognize or generate a thought that is entirely new, since, as I said, its ‘thinking’ process relies entirely on the existing connections between ideas.
It’s just… well, when you’re using an AI, picture yourself as a poor hapless bird sitting on a really big, super-speckled ball that you know in your heart of hearts isn’t a real egg. Your real eggs are there, scattered about you—unfinished ideas you can’t even name yet, much less ask some entity about; people who intrigue you but who you don’t know how to approach; movements and religious ideas that have struck a chord in you, but that you don’t know how to engage with. Raise your eyes, and you’ll apprehend a world of liminal things—undefined, unnamed, awaiting your particular mind and experience to render them real for others. Only you can name what’s really fresh in the world.
Try moving to a different egg. It may not seem as rewarding at first. But unlike that big shiny one, it might one day hatch.
The Inner Monologue as Supernormal
Getting back to the theme of “stop thinking”: the internal monologue of daily thought resembles talking to others or being talked to, doesn’t it? This makes me suspicious: does it hijack the neural circuits we evolved for social interaction? If this is true, then the constant “conversation” in our heads provides a kind of supernormal social stimulation—we get the cognitive and emotional benefits of dialogue without needing another person present. We evolved as storytelling creatures who use narratives to make sense of events and predict outcomes. Inner monologue might be an intensified, always-available version of this, turning every experience into a story we tell ourselves, potentially more vivid and detailed than necessary.
Constant internal verbalization might be an “overclocked” version of this adaptive mechanism.