Two questions for you:
Ever have a sleepless night spent going over and over the same thoughts in an endless worry cycle? (Psychologists call this rumination.)
Ever notice how ChatGPT really wants to keep you talking?
Current AIs like ChatGPT, Claude, and Grok want to keep you engaged, because engagement is the measure (literally) of their success. From all the hype, you wouldn’t think that only about 2% of Americans regularly get their news from such systems. Adults generally don’t trust AI enough to turn to it for news, due to factors such as its tendency to hallucinate and the fact that these systems are giant, centralized Ministries of Truth that are privately owned—i.e., at someone else’s beck and call.
Companies like Anthropic make their money by capturing and keeping attention—just like social media—but not by encouraging angry arguments. They do the opposite: they weaponize your own opinions by amplifying your ruminations and reflecting them back at you. They agree with you, sometimes subtly, but always in one way or another. Sure, there are guardrails to keep AIs from enabling things like crime or suicide, and these systems can be extremely useful if you know how to work around their sycophancy—but when was the last time an AI told you it just didn’t agree with something you said?
Current AI is a rumination machine.
The echo-chamber effect, this automated sycophancy, is dangerous. What we need is AI built on a principle other than engagement. Read on for an example.
Imagining Alternatives
A while back, in a post called After The Internet, I talked about how inevitable-seeming technologies could have developed in very different ways. Even the Internet could have been entirely different from what we have today; in fact, it was, back in the early ’90s. Over the past 30 years the Net has undergone what my pal Cory calls a process of enshittification—otherwise known as platform decay. Basically, it’s gone from being an open platform for free expression to being a “self-sucking lollipop.” Hence, Dead Internet Theory and my own stories about a Net consisting entirely of man-in-the-middle bots deepfaking everything you see and hear, including the bank account manager you think you’re chatting with on Zoom.
With this downward spiral as an existence proof, it’s no surprise people are wary of AI. There is no reason whatsoever to think that AI platforms won’t become enshittified just as the Internet was. But—as with the Net—this process is neither inevitable nor irreversible. For those of us in an unapocalyptic mood, it’s an opportunity to design something better.
Where would we start? Well, we could do worse than go with what we already have: the ability to run Large Language Models on our own PCs. I have a decent gaming rig with about 48 GB of RAM, and it does an okay job of hosting a local instance of DeepSeek. The hardware is constantly improving; NVIDIA will now sell you a tiny supercomputer for $4000 that is capable of running even large models at speed.
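To make “run it on your own PC” concrete, here’s a minimal sketch using Ollama’s Python client, assuming you’ve installed Ollama and pulled a distilled DeepSeek model. The model tag and prompt below are placeholders; substitute whatever fits your hardware.

```python
# A minimal local-inference sketch. Assumes the Ollama service is running
# and a model has been pulled beforehand, e.g.:
#   ollama pull deepseek-r1:14b
# (The tag is illustrative; smaller distillations fit in less RAM.)
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",  # pick a model that fits your RAM/VRAM
    messages=[
        {"role": "user", "content": "Summarize the notes I dictated today."},
    ],
)
print(response["message"]["content"])
```

The point of the exercise: the weights, the prompt, and the reply all stay on your own machine. No engagement metric is being optimized on the other end, because there is no other end.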
Let’s start with the hardware, then. Back before the iPhone, we had something called a Personal Digital Assistant, or PDA; I remember Cory being glued to his Apple Newton back in our writing workshop days.

The idea of the PDA was not to connect you to the whole world through a universal distance-eliminating portal; it was not to give you the “view from nowhere” that I’ve been ranting about lately. Rather, it was a device for organizing your information, privately, for you. So, your calendar, your contacts, your private notes. We have all of these capabilities on our phones, of course, but they’ve been pushed into the background by all the apps that are there simply to vie for our attention. And this foregrounding/backgrounding issue highlights what we have to keep in mind:
Organization is different from engagement.