
Alien Politics


First, an apology. I’m behind in posting for a variety of life-related reasons. For example, this week I’m writing from rural Western Australia, where we are currently sitting in the path of Cyclone Narelle, a category 3 storm that has ravaged three states and now has its sights set on scouring the west coast. There’s that. Then there’s the foresight contract I’ve just wrapped up, which took up most of my free time, and of course, preparations for the release of my new short story collection (it really is happening this spring!).

I promise to tell you about all of this in due course, and will soon be back on my normal schedule, with some bonuses to come (such as a pre-order window opening soon for the collection, Laika’s Ghost).


Our Next Political Move

As you know, I’ve been thinking a lot about the future of politics, because if we’re going to bequeath a just and humanistic political system to our kids, we have to start building it now. There are a lot of moving parts to such a project, so I’ve been wondering how to boil them down to fundamentals. One thing that is clear is that the political frameworks of the 19th and 20th centuries are not up to the job. What is the most critical addition we need to make to our political systems right now?

Our future political freedom depends on us developing protocols that deliberately hold understanding at bay during deliberation.

If this sounds weird, it’s because we are in a weird situation, and that is the point. The kind of abeyance I’m talking about is not like working with statistical uncertainty; I am doing that right now as I’m watching the many possible paths Narelle could take, including some that pass directly over my head in the next 48 hours. That’s what you might call ‘normal’ uncertainty. What I’m talking about is more like Badiou’s Event, a concept I’ve written about before. But let’s try to avoid abstraction here.

Imagine a very near future (say, later this year) when people turn to AI systems such as Grok to help them decide how to vote. Hopefully, we all know by now that these systems are designed to be sycophantic, and therefore reinforce our biases rather than expand our worldview. ChatGPT, Gemini, Claude, and DeepSeek are all bias amplifiers, just like social media. But they can be tweaked to nudge our thinking in particular directions. They are going to have a big impact on voting patterns if their owners (who are all oligarchs, except for the Chinese, who are simply autocrats) have skewed their AIs’ models to reflect some partisan position.

This is just like the billionaires’ capture of journalism, so I won’t repeat arguments that others have made about that. The danger should be obvious, and politically we need institutional or informal counterbalances to it.

No, there’s a deeper issue here. It doesn’t have to do with AI’s sycophancy, but rather with its (and our) deep-seated drive to make things make sense.

The Bed of Procrustes

Large Language Model AIs aggregate humanity’s current understanding of the world. And, as Brian Boyd has pointed out, “if the human mind can understand something in narrative terms, it automatically will.” Whatever it is that is going on in the world today, we are frantically integrating it into a consensus-reality tale we’ve already written. The huge, under-examined problem is that if LLM AIs are bias amplifiers, they are also amplifiers of this integration process. They make things make sense to us, and they will try to do that even if the things in question do not make sense within any existing frame of thought.

We use them because they help us understand the world; and that is precisely why they are profoundly dangerous. See, there’s a lag between their training and what’s happening now; theorists and historians have not yet fully teased apart the phenomenon that is Trumpism, for instance, yet we’re living through it and have questions. LLMs are more than happy to answer those questions; but they will of necessity do so using the paradigms they were trained on.

This makes LLM AIs like Procrustes from the ancient Greek story, who would invite travelers to stay with him. If they were too short for his bed, he would stretch them to fit, and if they were too tall, he’d cut them down to size. This is what Large Language Models do with any situation we describe to them, because they represent the interconnections already present in language, and cannot reason or imagine new ones.

If something unprecedented happens, not only can they not recognize it, they will actively and cleverly confabulate an explanation that makes complete sense to us within the categories of thought that they’ve been trained on.

The AI apocalypse we should be worried about is not, therefore, them taking over the world and wiping us out. The AI apocalypse we should be worried about is one in which everything is explainable. AI is the Procrustean Bed for human knowledge.

Thanks for reading Unapocalyptic! This post is public so feel free to share it.


The Department of Abeyance

Maybe aliens will help this make more sense. Say aliens land tomorrow, and we can kind-of communicate with them. There’ll be areas of overlap between our concepts and theirs. When they say something really strange, though, we have a couple of options. One is to take the weirdness seriously. Another is to treat what they’ve just said as nonsense and skip over it. Or, we can take what they’ve just said and shave off the uncomfortable parts until it fits the way we understand the world. We can Procrustize them.

To take the weirdness seriously means, firstly, to admit it is there, and secondly, to refrain from ‘fixing’ it the way Procrustes would. We’d have to learn to dwell with the incomprehensible. Judging by the history of Colonial Europe’s contact with the cultures of the New World, that ‘dwelling-with’ seems highly unlikely to happen on its own.

Democracy doesn’t happen on its own either, unless you have institutions that are designed to support it.

We’re rapidly institutionalizing Large Language Models and thus, their ability to explain the world to us. I propose that we create institutions designed to counterbalance the Procrustean problem by deliberately holding off—keeping in abeyance—understanding when we sense that, in some way we can’t yet describe, there is more to the story.

What would such a Department of Abeyance look like?




Read the whole story
cjheinz
7 hours ago
reply
Lexington, KY; Naples, FL
Share this story
Delete

A comparison of different sorting algorithms (bubble, merge, heap, timsort)


A comparison of different sorting algorithms (bubble, merge, heap, timsort). You can run them one at a time or race all seven.
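The linked visualizer isn’t reproduced here, but the idea of racing sorting algorithms against one another is easy to sketch. Here is a minimal, illustrative Python version (function and variable names are my own, not the demo’s) that times a few of the listed algorithms on the same shuffled input; `sorted()` stands in for Timsort, since that is what CPython’s built-in sort uses:

```python
import heapq
import random
import time

def bubble_sort(items):
    """O(n^2) bubble sort: repeatedly swap adjacent out-of-order pairs."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(items):
    """O(n log n) top-down merge sort."""
    a = list(items)
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def heap_sort(items):
    """O(n log n) heap sort via the heapq module."""
    h = list(items)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

def timsort(items):
    """CPython's built-in sort is Timsort."""
    return sorted(items)

def race(data, sorters):
    """Time each sorter on the same input and print the results."""
    for f in sorters:
        start = time.perf_counter()
        result = f(data)
        elapsed = time.perf_counter() - start
        assert result == sorted(data), f"{f.__name__} sorted incorrectly"
        print(f"{f.__name__:12s} {elapsed:.4f}s")

data = random.sample(range(2000), 2000)  # a shuffled permutation
race(data, [bubble_sort, merge_sort, heap_sort, timsort])
```

On an input this small, bubble sort still finishes quickly, but its quadratic growth means it falls dramatically behind the others as the input grows; that widening gap is what makes the races fun to watch.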

cjheinz: Sorting algorithms are cool to watch.

Admitting when we’re wrong

There are several iconic lines in the 1975 blockbuster movie Jaws. For example, “You’re gonna need a bigger boat” has been repeated or paraphrased in numerous films.

My personal favorite is when Quint, the crusty old fisherman played by Robert Shaw, tells Matt Hooper, the know-it-all young scientist played by Richard Dreyfuss, “Well, it proves one thing Mr. Hooper. It proves that you wealthy college boys don’t have the education enough to admit when you’re wrong.”

I’ll admit that, as the only one of seven siblings without a college degree, I used to favor common sense and traditional wisdom over academic intellectualism. Now that I am older – READ: old – I try to look at things from as many different perspectives as I can. However, I generally prefer scientific method over traditional wisdom or gut feelings.

It seems universally true that most people, including me, don’t want to admit when they are wrong. In “Why Some People Will Never Admit That They’re Wrong” published in Psychology Today, Guy Winch writes,

No one enjoys being wrong. It’s an unpleasant emotional experience for all of us. The question is how do we respond when it turns out we were wrong. (...)

Some of us admit we were wrong and say, ‘Oops, you were right. (...)

Some of us kind of imply we were wrong, but we don’t do so explicitly or in a way that is satisfying to the other person. ... We accept responsibility fully or partially (sometimes, very, very partially), but we don’t push back against the actual facts.

But what about when a person does push back against the facts, when they simply cannot admit they were wrong in any circumstance? What is it in their psychological makeup that makes it impossible for them to admit they were wrong, even when it is obvious they were? And why does this happen so repetitively – why do they never admit they were wrong?

The answer is related to their ego; their very sense of self.

I am a lifelong Democrat. Sometimes I’ve defended a Democrat, or Democrats, because of party loyalty, even when I pretty much knew they were wrong. For example, in 1998 President Bill Clinton was impeached primarily for lying under oath and obstructing justice while trying to conceal his extramarital affair with White House intern Monica Lewinsky. I mostly excused Bill Clinton on the flimsy grounds that, although it was wrong for him to lie, it was understandable and excusable and even honorable for him to want to spare his wife and daughter and even the nation from the shame and tawdriness implicit in the sexual affair.

Nowadays, however, I have zero respect for Bill Clinton. I admit that I was wrong to excuse his lies and bad behavior. I will further admit that many years passed before I was able to make that admission. As a journalist, I have to be thick-skinned. But maybe my ego was more fragile than I thought.

Winch continues,

Some people have such a fragile ego, such brittle self-esteem, such a weak ‘psychological constitution,’ that admitting they made a mistake or that they were wrong is fundamentally too threatening for their egos to tolerate. Accepting they were wrong, absorbing that reality, would be so psychologically shattering that their defense mechanisms do something remarkable to avoid doing so – they literally distort their perception of reality to make it (reality) less threatening. Their defense mechanisms protect their fragile ego by changing the very facts in their mind, so they are no longer wrong or culpable.

Indeed, rather than admit that Bill Clinton was a sleazy lying adulterer, I shifted the blame to America’s puritanical history and resultant prudishness, arguing that progressive European nations like France or Italy wouldn’t have a problem with their president lying about an extramarital affair. In a modern civilized culture, that’s what one does in such a situation, right? Er ... no, that’s not right.

Not everyone who voted for Donald Trump identifies with the MAGA movement. Some are lifelong Republicans who always vote for Republicans, just as I’m a lifelong Democrat who always votes for Democrats.

In Gallup News, Jeffrey Jones writes that a “new high of 45% in U.S. identify as political independents; more independents lean Democratic than Republican, giving Democrats edge in party affiliation for first time since 2021.”

He continues, “The recent increase in independent identification is partly attributable to younger generations of Americans (millennials and Generation X) continuing to identify as independents at relatively high rates as they have gotten older. In contrast, older generations of Americans have been less likely to identify as independents over time. Generation Z, like previous generations before them when they were young, identify disproportionately as political independents.”

I have mixed feelings about Independents. On one hand, I believe there are clear and distinct differences between the two major parties that make it easy for me to choose to be a Democrat. I focus mainly on what the Democratic party represents and not on individual candidates. On the other hand, I can understand and empathize with voters — especially younger voters — who are fed up with, and even exhausted by, the rancor and vitriol between the two major parties.

Moreover, identifying as an Independent allows voters the freedom to micromanage their political beliefs and decisions – as opposed to accepting the “party line” adopted and promoted by either of the two major parties. In short, party loyalty comes with a price – that Independents presumably don’t have to pay.

Independent voters in critical swing states such as Arizona, North Carolina, Georgia and Pennsylvania were key to Donald Trump’s victory in the 2024 presidential election. (Those same Independent voters also helped several Democrats win their Senate races.) Considering President Trump’s abysmally low approval ratings — on tariffs and the economy, healthcare, gas and energy prices, the federal budget, immigration, Iran, Ukraine — I can’t help but wonder if some of them now regret voting for Trump.

One anonymous man who voted for Trump in 2024 admitted that things were “not going well. I was looking yesterday and, you know, Americans have lost over $1 trillion in their wealth in the last year, while the top 1% has gained over $10 trillion. And it’s like, that’s not exactly what was supposed to be happening.”

Or is it? I would argue strongly that this is exactly what Trump wants – the rich are getting richer, while the rest of us are not.

Regardless of his motives, Trump has seriously soured the American economy, and his tariffs and policies have hurt the economies of numerous friendly countries such as Canada, Mexico, Japan, Australia, Germany and other European allies. Yet Trump’s MAGA base continues to confuse their stubborn loyalty and blindness to the truth with inner strength and moral conviction.

It’s commonly known that we all make mistakes. Although it’s painful, we need to admit our mistakes, learn what we can from them, and resolve to make amends for them if possible.

And that includes MAGA Trump voters.

--30--

&&&

Thoughts on this? Leave them in the comments below.


Shy Girl, AI In Writing, And A New Perniciousness


I wanna talk about Cameron’s The Terminator and Carpenter’s The Thing, but first, let’s get it out of the way —

If you know anything at all about me in this Current Era, it is that I am vehemently opposed to generative AI. I do not use it. I will not use it. It does not exist for me in any form — the only “use” I had of it recently was writing my Vital Cat Update, which quoted output copied from Google’s search-engine AI on its main search page. Otherwise, I don’t touch the stuff. I don’t even know how to access it. I couldn’t tell you how to use ChatGPT or Claude or any of that. My copy of Word is one without Copilot inside it, and I had to change my subscription to get there. I turn off Apple Intelligence in every instance I can. I am against AI because it steals our work, which it then uses to steal our jobs, which it further uses to steal our water and our electricity.

Which is to say, it is here to steal our future.

So, I’m against it! It sucks moist open ass.

But there’s a delightful (read: not at all delightful!!) new perniciousness afoot, and that requires us to talk a little about the novel Shy Girl, by an author who I won’t even name because whatever she did or did not do, I do not think directing theoretical harassment toward said author is really valuable, nor is it the point. The problem isn’t one book. The problem is the whole system.

To keep it as brief as I can, what happened was, to my understanding:

Shy Girl was a self-published novel. A horror novel. It came out a year or so ago, on its own, I think? It did well enough, I guess, though I don’t know that it set the world on fire — but somehow a publisher, Hachette, picked it up for traditional publication and it was to come out soon. Ten months ago, there appeared to be accusations that the book read like it was written by generative AI in whole or in part. Those conversations continued and appeared to boil over right around now-ish, and the current narrative is that the author did not herself use generative AI, but employed an editor who made changes to the book using generative AI, changes that the author did not — review? Did not catch? I don’t know for sure.

Certainly some aspect of this may be wrong, or new details may come out, and if you have corrective details, please sling ’em in the comments below.

That is the situation currently.


To switch tracks a bit, though you’ll soon see (or already can predict) where this is going: I’ve in the last several months seen an uncomfortable number of instances, usually on Threads, where someone will look at a photograph or a video or a piece of art or graphic design and they will assert, with dogmatic certainty, that it is AI.

And sometimes, it is, or appears to be.

And other times, it definitely isn’t.

I’ve seen people look at a beautiful, very real but also very-processed photo, and say with their whole chest, that shit is AI, and sometimes that’s started a small little avalanche of people asserting similarly. And in more than one instance, I’ve seen the creator come back and post how that photo predates the current generation of gen-AI — it’s just a photo that looks either really good because of Lightroom or really overprocessed because someone wanted a slick HDR effect, or whatever.

This has also happened with writing.


It started with the emdash.

It was asserted, with Great Authority, that emdash use was a strong signifier of a piece of writing being AI.

The artbarf robots, they said, love that little emdash sumbitch so much, so so much, that they just can’t help themselves.

Needless to say, that made my bowels go to ice water because —

Holy shit, I love the emdash, too.

In fact, most Current Era writers I know love love love a fucking emdash.

But instead of making me sympathetic toward the artbarf robots — “Aww, it loves the same things I do!” — it only made me hate the artbarf robots more, because the reason the piece-of-shit AI loves an emdash is because it stole all our work, and all our work features a lot of goddamn emdashes.

It doesn’t use emdashes.

We use emdashes, and it stole our work and then mimics us.

Emdashes and all.


So now, with Shy Girl, what do I see?

I see some folks putting forth the “signs” that told them that Shy Girl was very obviously AI-written, and those signs include a number of stylistic choices.

And when I say stylistic choices, they are not choices that generative AI made, because generative AI doesn’t make choices. It just eats and regurgitates.

We make choices, as authors. Narrative ones, stylistic ones, and so forth.

But this list of signs and symptoms and AI portents included stylistic choices that I myself absolutely one hundred percent make. Same as the emdash. I’ve seen people say that AI loves metaphors, AI loves certain kinds of repetition, it loves adjectives no wait it loves adverbs no wait it loves alliteration no wait–

Of course, again, as with choices, AI doesn’t love a fucking thing, because AI isn’t alive, it isn’t intelligent, it isn’t aware. The key word is always artificial. It fakes it. It fakes choices. It fakes preferences. It fakes love. And it is able to fake it because it stole those choices and preferences from us.


I saw The Terminator last night on the big screen. I’ve seen it before, obviously — seen it many, many times. Seen all of them! Even the stinky ones. But I think this was my first seeing that one on the big screen. (It’s of course excellent, if occasionally a little corny and showing its age.)

But one place where it isn’t showing its age is how it still issues a sharp warning about AI — it’s long been held as a kind of bellwether for that particular threat, right? It’s an early iteration of the Torment Nexus meme. That warning has told us, hey, AI is going to get smart, get mean, it’s going to inhabit robots who want to kill us, it’s going to tangle itself up in our systems and decide that we’re a threat and drop a batch of nukes on our heads.

But I think one of the warnings in the movie(s) didn’t really register for me back then, though it damn sure registers now.

What happens in the movie? The AI is going to pretend to be us, and it’s going to get harder and harder to tell the difference. It’s going to wear our faces. Only dogs will be able to sniff it out. It can steal our voices — so when we call home to talk to Mom, maybe the Mom we think we’re talking to is actually dead, and it’s a soulless Cyberdyne drone on the other end there.

That makes me think of John Carpenter’s The Thing, because it, too, understands that same threat, but worse — it understands the fear of being amongst your people except one of those people isn’t your people. Ohhh, no. It’s an Impostor, an alien being clothed in the raiment of your friend’s flesh, and soon you’ll be paranoid about who is alien and who is human, and you’ll have to work very hard to find a way to figure out just who is who — all that without accidentally killing a friend, or failing to kill the thing that wants to eat your face and then wear it.

Sound familiar?

The AI — artistically! — is us.

It steals our artistic skin.

It wears it, pretends to be us.

And it gets harder and harder to tell what’s us, and what’s it.


I’ve long said that one of the threats of AI is that it damages the fidelity of our information. Of truth and reality itself! It’s not just that it pumps out misinformation and disinformation — digital illusion and virtual legerdemain! — but rather that its mere existence makes it harder and harder to tell what is truth and what is fiction.

And we’re seeing that now with Shy Girl.

We’re seeing it with photos and videos and artwork.

People are right to hate AI — and the pernicious, insidious presence of AI has made them like the men trapped in that Antarctic base.

They are paranoid that it’s everywhere.

Because, ostensibly, it is. Or they (they being the techbros who are really the man behind the wizard curtain) want it to be. And it has a deleterious, corrosive effect on all that we do and all that we see. It’s like Paramount taking over CBS, or Musk taking over Twitter — it doesn’t matter whether it becomes successful; what matters is that they ruin the ability to disseminate good information. To ruin truth.


So, what the fuck do we do about all this?

I have no idea. I mean, the obvious thing on the face of it is to keep your own garden free of it. Pledge to use no AI. In all the ways you can avoid it? Avoid it. But that won’t stop someone in the future from telling you you’re using it. Or stop an AI detector — which is itself AI! — from “detecting” it. And it won’t stop others from assuring you that this photo or that video or this logo is AI, even when it’s not. That certainty has been ruined.

More to the point, I don’t know what this means for writers, for readers, and for publishing at large. Ideally, publishing gets ahead of this problem and tries to get commitments from writers to not use AI — but therein lies a rub, too, wherein a “no AI” contract looks like a “morality clause.” Without clear definitions, if enough people were to accuse you and your book of being AI — whether at the authorial level, the editorial level, or in some aspect of publishing — they can get it tanked whether or not AI has ever even chastely kissed the work in question. And it doesn’t inspire confidence when a publisher like Hachette published Shy Girl… when already the accusations of AI were afoot. Did they do their due diligence? I don’t know. Maybe! But given the lack of editorial oversight… ennnh, maybe not.

Do I think AI should be published? I do not. I think using AI at any of those levels is not only problematic for the reasons listed above, it also takes opportunity from an Actual Human doing the Actual Work of Being Human. A contract given to some slopwrangler is a contract not given to an actual writer. A fake book will take the place of a real one. It’s stupid fucking robots all the way down when it should be humans.

So, this is a snarled nightmare tangle — one where the existence of AI en masse is becoming its own problem, regardless of its presence in any single instance of art or writing. We’re just going to have to do our best going forward. We must pledge not to use it — but also try to be very, very cautious kicking other people under the tires of this bus without knowing for absolute sure what we’re accusing someone of doing. As AI gets better, the environment in which it exists is only going to get noisier and more confusing. And we can’t just stick a copper wire into the blood of the book to make it transform into the monster, revealing its True Self.

We just gotta do our best. Be vigilant, be cautious.

And don’t use the AI slop-shitting artbarf techbro bullshit.


SIGH.

I do not care for this era of writing and publishing, lemme tell you.

The faster we pop this bubble, the better off we will all be.

Good luck, friends!

And fuck off, robots.

Buy my books or I die in the abyss.

cjheinz: A lot of truth spoken here.

Pluralistic: Love of corporate bullshit is correlated with bad judgment (19 Mar 2026)






[Image: a bearded man’s cross-sectioned head, upside down, the skull flipped open; his brain falls onto a Tabriz rug below, against a murky, green-tinted background of brain folds.]

Love of corporate bullshit is correlated with bad judgment (permalink)

I'm a writer, so of course I care about words! But I'm a writer, so I also think that words are improved by their malleability, duality and nuance.

This is one of the things I love about being a native English speaker – this glorious mongrel language of ours is full of extremely weird words, like "cleave," which means its own opposite ("to join together" and "to cut apart"). English is full of these words that mean their own opposite, from "dust" to "oversight" to "weather":

https://www.mentalfloss.com/language/words/25-words-are-their-own-opposites

This is what you get when you let a language run wild, with meaning determined (and contested) by speakers. Not for nothing, my second language is Yiddish, another glorious higgledy-piggledy of a tongue with no authoritative oversight and innumerable dialects.

Semantic drift is a feature, not a bug. It's how we get new words, and new meanings for old words. I love semantic drift! I mean, I'd better, since, having coined "enshittification," I'm now destined to have a poop emoji on my headstone. Having coined a word – and having proposed a precise technical meaning for it – I am baffled by people who make it their business to scold others for using enshittification "incorrectly." "Enshittification" is less than five years old, and we know when and how it was invented. If you like it when I make up a word, you can't categorically object to other people making up new meanings for this word. I didn't need a word-coining license to come up with enshittification, and you don't need a semantic drift license to use it to mean something else.

I wrote a whole danged essay about this, but still, hardly a day goes by without someone trying to enlist me in their project to scold and shame strangers for using the word incorrectly:

The fact that a neologism is sometimes decoupled from its theoretical underpinnings and is used colloquially is a feature, not a bug. Many people apply the term "enshittification" very loosely indeed, to mean "something that is bad," without bothering to learn – or apply – the theoretical framework. This is good. This is what it means for a term to enter the lexicon: it takes on a life of its own. If 10,000,000 people use "enshittification" loosely and inspire 10% of their number to look up the longer, more theoretical work I've done on it, that is one million normies who have been sucked into a discourse that used to live exclusively in the world of the most wonkish and obscure practitioners. The only way to maintain a precise, theoretically grounded use of a term is to confine its usage to a small group of largely irrelevant insiders. Policing the use of "enshittification" is worse than a self-limiting move – it would be a self-inflicted wound.

https://pluralistic.net/2024/10/14/pearl-clutching/#this-toilet-has-no-central-nervous-system

Colloquialization doesn't dilute language, it thickens it. Using a powerful word to describe something else can be glorious. It's allusion, metaphor, simile. It's poesie. It's fine. Bemoaning the "tsunami" of bad news doesn't cheapen the deaths of people who die in real tsunamis. Saying that the Trump administration "nuked" the Consumer Finance Protection Bureau doesn't desecrate the dead of Hiroshima and Nagasaki. Calling creeping authoritarianism a "cancer" doesn't denigrate the suffering of people who have actual cancer.

What's more, devoting your energies to "correcting" other people's allusive language makes you a boring, tedious person. Sure, you can have a conversation with a comrade about making inclusive word choices, but interrupting a substantive debate to have that discussion is unserious. The words people use matter (I care a lot about words!) but they matter less than the things people mean. Keep your eye on the prize (metaphorically) (for avoidance of doubt, there is no prize) (both the prize and the eye are metaphors).

(By all means, get angry at people who intentionally use slurs. None of this is to say that you should tolerate – or be subjected to – language that is intended to dehumanize you.)

It's time we admitted that it's no good replacing an offensive term with a phrase that no one understands. Calling it "child sexual abuse material" is a good idea, but no one actually calls it that. The customary phrase is actually "child sexual abuse material, which most people call 'child porn,' but which we should really call 'child sex abuse material.'" If your goal is to avoid saying "child porn" (a laudable goal!), this isn't achieving it.

None of this means that I am immune to being rubbed up the wrong way by other people's language choices. Having been mentored by the science fiction great Damon Knight, I have been infected by many of his linguistic peccadillos, which means that if you say "out loud" in my earshot, I will (mentally) "correct" it to "aloud" (yes, "out loud" is fine, but Damon had a thing about it and it got stuck in my brain).

I am especially perturbed by "business English," the language of the commercial class, their cheerleaders in the press, and (alas) many of their critics. Anytime someone refers to a sector as a "space" (as in "I'm really getting into the AI space") it's like they're making me chew tinfoil. Superlatives like "thought-leader" are so self-parodying I have to check every time someone utters one aloud (see?) to verify that they're not being sarcastic. Objects of derision should be referred to by their surnames, not their given names ("Musk" is vituperative, "Elon" is friendly – though, thanks to the glorious and thickening contradictions of language, calling someone by their surname can also be affectionate). I steer clear of jargon used by firms to lionize themselves, like "hyperscaler."

I share the impulse to impose my linguistic preferences on the people around me. I just (mostly) suppress that impulse and try to focus on substance rather than style, at least when I'm trying to understand others and be understood by them. But yes, I do silently judge the people around me for their word choices – all the time.

That's why I immediately pounced on "The Corporate Bullshit Receptivity Scale: Development, validation, and associations with workplace outcomes," an open access paper in the Feb 2026 edition of Personality and Individual Differences by Shane Littrell, a linguistics postdoc at Cornell:

https://www.researchgate.net/publication/400597536_The_Corporate_Bullshit_Receptivity_Scale_Development_validation_and_associations_with_workplace_outcomes

Littrell set out to evaluate "corporate bullshit," a linguistic category that is separate from mere "jargon." Jargon, Littrell writes, is a professional vocabulary that serves a useful purpose: "facilitat[ing] communication and social bonding, increas[ing] fluency, and help[ing] reinforce a shared identity among in-group members."

Bullshit, meanwhile, is "semantically, logically, or epistemically dubious information that is misleadingly impressive, important, informative, or otherwise engaging." There's a whole field of bullshit studies, with investigations into such exciting topics as "pseudo-profound bullshit" (think: Deepak Chopra).

Littrell borrows from that field and others to investigate corporate bullshit, formulating a measurement index he calls the "Corporate Bullshit Receptivity Scale." In a series of three experiments, Littrell sets out to determine who is the most susceptible to corporate bullshit, and what the correlates of that receptivity are.

Littrell concludes that corporate bullshitters themselves are pretty good at identifying bullshit (they have a high "Organizational Bullshit Perception Score"). In other words, bullshitters know that they're bullshitting. When a corporate leader declares that:

This synergistic look at our thought leadership will ensure that we are decontenting and avoiding reputational deficits with our key takeaways as effectively as we can in order to sunset our resonating focus.

they know it's nonsense.

This reminded me of the idea that cult leaders tell obvious lies to their followers as a way of forcing them to demonstrate their subservience. When Trump demands that his followers wear clown shoes:

https://www.msn.com/en-us/news/politics/trump-is-obsessed-with-these-145-shoes-and-won-t-let-anyone-leave-without-a-pair/ar-AA1XOEBm

Or that they pretend that "mutilization" is a word:

https://www.wonkette.com/p/is-trumps-save-america-fck-america

He's engaging in a dominance play that forces his feuding princelings and their lickspittles to humiliate themselves and reaffirm his supremacy.

There are plenty of rank-and-file workers inside corporations who have high OBPSes and know when they're being bullshitted, but Littrell also identifies a large cohort of low-OBPS workers who are absolutely taken in by corporate bullshit.

Here we get to a fascinating dichotomy. Both the low-OBPS and high-OBPS workers can be described as being "open minded," but "open" has a very different meaning for each group. Workers who are good at spotting bullshit score high on open-mindedness metrics like "willingness to engage" and "willingness to reflect," both characteristic of the "fluid intelligence" that makes workers good at solving problems and doing a good job.

Meanwhile, workers who are taken in by bullshit are "open minded" in the sense that they are bad at analytical reasoning and thus easily convinced. These people test poorly on metrics like "logical reasoning" and "decision-making," and score high on "overconfidence in one's intellectual and analytic abilities." They are apt to make blunders that "expose organizations to financial, reputational, or legal risks."

But they're also exactly the workers who score high on "job satisfaction," "trust in one's supervisor," and "degree to which they are inspired by corporate mission statements." These people are so open minded that their brains start to leak out of their ears. Or, as Carly Page put it in The Register: "jargon sticks around not just because executives enjoy using it, but because many people respond to it as if it were genuine insight":

https://www.theregister.com/2026/03/15/corporate_jargon_research/

This creates a feedback loop where bosses get rewarded for using empty and maddening phrases, and their workforce gets progressively more skewed towards people who are bad at spotting bullshit and at exercising their judgment on the job. It's quite a neat – and ugly – explanation of why bullshit proliferates within organizations, and how organizations come to be completely overrun with bullshit.

This is a fascinating research paper, and while I've focused on its conclusions, I really suggest going and reading about the methodology, especially the tables of "corporate bullshit" phrases they generated for their experiments (Tables 1, 2 and 3). This is some eldritch horror bullshit:

By solving the pain point of customers with our conversations, we will ideate a renewed level of end-state vision and growth-mindset in the market between us and others who are architecting to download on a similar balanced scorecard.

What's more, these are all based on real examples of corporate bullshit from leaders at large corporations, with a few words rotated to synonyms drawn from the business press.

I'm a writer. I really do care about language. Sure, I get frustrated with scolds who rail against semantic drift or engage in petty, pedantic corrections, but not because words don't matter. They matter, a lot. But language isn't math (which is why double negatives are intensifiers, not negators). It can obscure (as with bullshit) or it can enlighten (as with poesie) or it can enable precision (as with jargon). Arguments about language matter, but what matters about them isn't subjective aesthetics, nor is it a peevish obsession with "correctness." What matters is the way that language operates on the world (and vice versa).


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Eighth graders build giant awesome gymnasium rollercoaster https://web.archive.org/web/20060329110502/https://www.sgvtribune.com/news/ci_3606933

#20yrsago Bluetooth headset combined with headphones https://www.techdigest.tv/2006/03/itech_clip_m_1.html

#20yrsago HOWTO decode the sticker-numbers on fruit https://megnut.com/2006/03/14/read-the-numbers-on-your-fruit/

#20yrsago DRM shortens iPod battery life https://web.archive.org/web/20060319201837/http://www.mp3.com/features/stories/3646.html

#20yrsago McD’s employees’ secret recipes for improvised meals https://mcdonalds-talk.livejournal.com/158400.html

#20yrsago UK to US: we’ll only buy open-source fighter jets https://web.archive.org/web/20060420192203/https://www.vnunet.com/vnunet/news/2152035/joint-strike-fighter

#20yrsago Bruce Sterling’s SXSW keynote MP3 https://web.archive.org/web/20060330072143/https://server1.sxsw.com/2006/coverage/SXSW06.INT.20060314.BruceSterling.mp3

#20yrsago UK Open University opens its courseware https://web.archive.org/web/20060610125235/https://oci.open.ac.uk/

#20yrsago Europe seeking to make open mapping impossible – help! https://web.archive.org/web/20060503172457/https://publicgeodata.org/Open_Letter

#20yrsago MPAA rep gets slammed at SXSW https://www.powazek.com/2006/03/000571.html

#20yrsago Canadian recording industry: P2P isn’t bad for business https://web.archive.org/web/20060408232202/https://www.michaelgeist.ca/component/option,com_content/task,view/id,1168/Itemid,85/nsub,/

#15yrsago First-person account from surgeon who removed his own appendix https://www.theatlantic.com/technology/archive/2011/03/antarctica-1961-a-soviet-surgeon-has-to-remove-his-own-appendix/72445/

#15yrsago New York Times paywall: wishful thinking or just crazy? https://memex.craphound.com/2011/03/17/new-york-times-paywall-wishful-thinking-or-just-crazy/

#15yrsago Android app pwns cardkey entry systems, opens all the locks https://web.archive.org/web/20110317132608/http://www.cybersecurityguy.com/caribou.html

#15yrsago Glenn Grant’s Burning Days: old school cyberpunk stories from the nostalgic contrafuture https://memex.craphound.com/2011/03/17/glenn-grants-burning-days-old-school-cyberpunk-stories-from-the-nostalgic-contrafuture/

#15yrsago World’s largest spam botnet goes down (for now?) https://krebsonsecurity.com/2011/03/rustock-botnet-flatlined-spam-volumes-plummet/

#15yrsago Piracy doesn’t fund the mob or terrorists https://arstechnica.com/tech-policy/2011/03/even-commercial-pirates-now-have-to-compete-with-free/

#15yrsago Tennessee to outlaw collective bargaining for teachers https://web.archive.org/web/20110320023746/https://nashvillecitypaper.com/content/city-news/protesters-arrested-following-disruption-committee-hearing

#15yrsago Four Color Fear: delightful horror comics from the pre-Code era https://memex.craphound.com/2011/03/16/four-color-fear-delightful-horror-comics-from-the-pre-code-era/

#10yrsago Screw optimism, we need hope instead https://web.archive.org/web/20160318215827/https://littleatoms.com/society/cory-doctorows-manifesto-hope

#10yrsago Four sets of identical twins pull an epic NYC subway car time-machine prank https://www.youtube.com/watch?v=Z1Gq7Q3B9xU

#10yrsago Hack-attacks with stolen certs tell you the future of FBI vs Apple https://arstechnica.com/information-technology/2016/03/to-bypass-code-signing-checks-malware-gang-steals-lots-of-certificates/

#10yrsago Captured: a book of prison inmate drawings of CEOs and other too-big-to-jail criminals https://thecapturedproject.com/

#10yrsago From dingo babysitter to net neutrality hero: Tom Wheeler’s legacy https://arstechnica.com/information-technology/2016/03/how-a-former-lobbyist-became-the-broadband-industrys-worst-nightmare/

#10yrsago Poet/bureaucrat’s moving report of the 1921 demise of America’s most notorious wolf https://web.archive.org/web/20160327105045/https://www.fws.gov/news/Historic/NewsReleases/1921/19210103.pdf

#10yrsago Barnes & Noble wipes out Nook ebook, replaces it with off-brand “study guide” https://web.archive.org/web/20160316120232/https://www.teleread.com/barnes-noble-stole-first-e-book-ever-bought/

#10yrsago Scarfolk’s lost 1970s budget announcement lays bare the modern Tory strategy https://scarfolk.blogspot.com/2016/03/scarfolks-annual-budget-announcement.html

#10yrsago Junkbots from Madrid, recycled from iconic Spanish packaging https://web.archive.org/web/20160321103729/http://www.pitarquerobots.es/

#10yrsago First-ever Tor node in a Canadian library https://web.archive.org/web/20160319035440/https://motherboard.vice.com/read/canadian-librarians-must-be-ready-to-fight-the-feds-on-running-a-tor-node-western-library-freedom-project

#10yrsago How to do impromptu magic tricks without being a dork https://www.thejerx.com/blog/2016/3/14/project-slay-them

#10yrsago Sheriff says rape kits are irrelevant because most rape accusations are false https://www.oregonlive.com/pacific-northwest-news/2016/03/rape_kit_system_unnecessary_si.html

#10yrsago Redaction fail: U.S. government admits it went after Lavabit looking for Snowden https://www.wired.com/2016/03/government-error-just-revealed-snowden-target-lavabit-case/

#10yrsago McAfee shovelware emits tracking beacons https://web.archive.org/web/20160909030152/https://duo.com/blog/bring-your-own-dilemma-oem-laptops-and-windows-10-security

#10yrsago Cops in small MA town warn about roving rap-battle challengers https://www.kron4.com/news/cops-warn-residents-of-men-challenging-others-to-rap-battles/

#10yrsago Rather than banning “lobbying” by academics, UK government should encourage it https://web.archive.org/web/20160310100844/https://www.timeshighereducation.com/comment/ban-academics-talking-to-ministers-we-should-train-them-to-do-it

#10yrsago Russia’s military uses gigantic wooden comedy props for punishment https://semperannoying.tumblr.com/post/122390977886/semperannoying-russian-army-punishments-1

#10yrsago Study: people who believe in innate intelligence overestimate their own https://arstechnica.com/science/2016/03/think-intelligence-is-fixed-youre-more-likely-to-overestimate-your-own/

#5yrsago SNAPDRAGON https://pluralistic.net/2021/03/17/there-once-was-a-union-maid/#coming-out

#5yrsago How unions de-risk work https://pluralistic.net/2021/03/17/there-once-was-a-union-maid/#solidarity-forever

#5yrsago Meet the new music boss, same as the old music boss https://pluralistic.net/2021/03/16/wage-theft/#excessive-buyer-power

#5yrsago The People's Parity Project https://pluralistic.net/2021/03/16/wage-theft/#ppp

#5yrsago SMS security is flaming garbage https://pluralistic.net/2021/03/16/wage-theft/#override-service-registry

#1yrago David Enrich's "Murder the Truth" https://pluralistic.net/2025/03/17/actual-malice/#happy-slapping


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1002 words today, 52553 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X
