
Pluralistic: When AI prophecy fails (29 Oct 2025)



Today's links



A black and white image of an armed overseer supervising several chain-gang prisoners in stripes doing forced labor. The overseer's head has been replaced with the glaring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.' The prisoners' heads have been replaced with hackers' hoodies.

When AI prophecy fails (permalink)

Amazon made $35 billion in profit last year, so they're celebrating by laying off 14,000 workers (a number they say will rise to 30,000). This is the kind of thing that Wall Street loves, and this layoff comes after a string of pronouncements from Amazon CEO Andy Jassy about how AI is going to let them fire tons of workers.

That's the AI story, after all. It's not about making workers more productive or creative. The only way to recoup the $700 billion in capital expenditure to date (to say nothing of AI companies' rather fanciful coming capex commitments) is by displacing workers – a lot of workers. Bain & Co say the sector needs to be grossing $2 trillion by 2030 in order to break even, which is more than the combined grosses of Amazon, Google, Microsoft, Apple, Nvidia and Meta:

https://www.bain.com/about/media-center/press-releases/20252/$2-trillion-in-new-revenue-needed-to-fund-ais-scaling-trend—bain–companys-6th-annual-global-technology-report/

Every investor who has put a nickel into that $700b capex is counting on bosses firing a lot of workers and replacing them with AI. Amazon is also counting on people buying a lot of AI from it after firing those workers. The company has sunk $120b into AI this year alone.

There's just one problem: AI can't do our jobs. Oh, sure, an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job, but that's the world's easiest sales-call. Your boss is relentlessly horny for firing you:

https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete

But there's a lot of AI buyers' remorse. 95% of AI deployments have either produced no return on capital, or have been money-losing:

https://www.technologyreview.com/2019/01/25/1436/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/

AI has "no significant impact on workers’ earnings, recorded hours, or wages":

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933

What's Amazon to do? How do they convince you to buy enough AI to justify that $180b in capital expenditure? Somehow, they have to convince you that an AI can do your workers' jobs. One way to sell that pitch is to fire a ton of Amazon workers and announce that their jobs have been given to a chatbot. This isn't a production strategy, it's a marketing strategy – it's Amazon deliberately taking an efficiency loss by firing workers in a desperate bid to convince you that you can fire your workers:

https://pluralistic.net/2025/08/05/ex-princes-of-labor/#hyper-criti-hype

Amazon does use a lot of AI in its production, of course. AI is the "digital whip" that Amazon uses to allow itself to control drivers who (nominally) work for subcontractors. This lets Amazon force workers into unsafe labor practices that endanger them and the people they share the roads with, while offloading responsibility onto "independent delivery service" operators and the drivers themselves:

https://pluralistic.net/2025/10/23/traveling-salesman-solution/#pee-bottles

Amazon leadership has announced that AI has or will shortly replace its coders as well. But chatbots can't do software engineering – sure, they can write code, but writing code is only a small part of software engineering. An engineer's job is to maintain a very deep and wide context window, one that considers how each piece of code interacts with the software that executes before it and after it, and with the systems that feed into it and accept its output.

There's one thing AI struggles with beyond all else: maintaining context. Each linear increase in the context you demand from AI produces a superlinear increase in computational expense – for transformer models, the attention step scales roughly quadratically with context length. AI has no object permanence. It doesn't know where it's been and it doesn't know where it's going. It can't remember how many fingers it's drawn, so it doesn't know when to stop. It can write a routine, but it can't engineer a system.
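To make that scaling concrete, here is a minimal, illustrative Python sketch (mine, not Doctorow's): self-attention compares every token in the context window against every other token, so the cost of the attention step grows roughly with the square of the context length rather than linearly. The numbers are toy figures, not measurements of any real model.

    def attention_comparisons(context_tokens: int) -> int:
        """Pairwise token comparisons in one self-attention pass (~ n^2)."""
        return context_tokens * context_tokens

    for n in (1_000, 10_000, 100_000):
        # 1,000 tokens -> 1,000,000 comparisons; 100,000 tokens -> 10,000,000,000
        print(f"{n:,} tokens -> {attention_comparisons(n):,} pairwise comparisons")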

When tech bosses dream of firing coders and replacing them with AI, they're fantasizing about getting rid of their highest-paid, most self-assured workers and transforming the insecure junior programmers left over into AI babysitters, whose job is to evaluate and integrate that code at a speed that no one – much less a junior programmer – can meet if they are to do a careful and competent job:

https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39

The jobs that can be replaced with AI are the jobs that companies already gave up on doing well. If you've already outsourced your customer service to an overseas call-center whose workers are not empowered to solve any of your customers' problems, why not fire those workers and replace them with chatbots? The chatbots also can't solve anyone's problems, and they're even cheaper than overseas call-center workers:

https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice

Amazon CEO Andy Jassy wrote that he "is convinced" that firing workers will make the company "AI ready," but it's not clear what he means by that. Does he mean that the mass firings will save money while maintaining quality, or that mass firings will help Amazon recoup the $180,000,000,000 it spent on AI this year?

Bosses really want AI to work, because they really, really want to fire you. As Allison Morrow writes for CNN, bosses are firing workers in anticipation of the savings AI will produce…someday:

https://www.cnn.com/2025/10/28/business/what-amazons-mass-layoffs-are-really-about

All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?

The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers' jobs that each boss for whom AI fails assumes that they've done something wrong. This is a familiar dynamic in con-jobs.

The people who get sucked into pyramid schemes all think that they are the only ones failing to sell any of the "merchandise" they shell out every month to buy, and that no one else has a garage full of unsold leggings or essential oils. They don't know that, to a first approximation, the MLM industry has no sales, and relies entirely on "entrepreneurs" lying to themselves and one another about the demand for their wares, paying out of their own pocket for goods that no one wants.

The MLM industry doesn't just rely on this deception – they capitalize on it, by selling those self-flagellating "entrepreneurs" all kinds of expensive training courses that promise to help them overcome the personal defects that stop them from doing as well as all those desperate liars boasting about their incredible MLM sales success:

https://pluralistic.net/2025/05/05/free-enterprise-system/#amway-or-the-highway

The AI industry has its own version of those sales coaching courses – there's a whole secondary industry of management consultancies and business schools offering high-ticket "continuing education" courses to bosses who think that the only reason the AI they've purchased isn't saving them money is that they're doing AI wrong.

Amazon really needs AI to work. Last week, Ed Zitron published an extensive analysis of leaked documents showing how much Amazon is making from AI companies who are buying cloud services from it. His conclusion? Take away AI and Amazon's cloud division is in steep decline:

https://www.wheresyoured.at/costs/

What's more, those big-money AI customers – like Anthropic – are losing tens of billions of dollars per year, relying on investors to keep handing them money to incinerate. Amazon needs bosses to believe they can fire workers and replace them with AI, because that way, investors will keep giving Anthropic the money it needs to keep Amazon in the black.

Amazon firing 30,000 workers in the run-up to Christmas is a great milestone in enshittification. America's K-shaped recovery means that nearly all of the consumption is coming from the wealthiest American households, and these households overwhelmingly subscribe to Prime. Prime-subscribing households do not comparison shop. After all, they've already paid for a year's shipping in advance. These households start and end nearly every shopping trip in the Amazon app.

If Amazon fires 30,000 workers and tanks its logistics network and e-commerce systems, if it allows itself to drown in spam and scam reviews, if it misses its delivery windows and messes up its returns, that will be our problem, not Amazon's. In a world of commerce where Amazon's predatory pricing, lock-in, and serial acquisitions have left us with few alternatives, Amazon can truly be "too big to care":

https://www.theguardian.com/technology/2025/oct/05/way-past-its-prime-how-did-amazon-get-so-rubbish

From that enviable position, Amazon can afford to enshittify its services in order to sell the big AI lie. Killing 30,000 jobs is a small price to pay if it buys them a few months before a reckoning for its wild AI overspending, keeping the AI grift alive for just a little longer.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#10yrsago Librarian of Congress puts impossible conditions on your right to jailbreak your 3D printer https://michaelweinberg.org/post/132021560865/unlocking-3d-printers-ruling-is-a-mess

#10yrsago The two brilliant, prescient 20th century science fiction novels you should read this election season https://memex.craphound.com/2015/10/28/the-two-brilliant-prescient-20th-century-science-fiction-novels-you-should-read-this-election-season/

#10yrsago Hundreds of city police license plate cams are insecure and can be watched by anyone https://www.eff.org/deeplinks/2015/10/license-plate-readers-exposed-how-public-safety-agencies-responded-massive

#10yrsago Appeals court holds the FBI is allowed to kidnap and torture Americans outside US borders https://www.techdirt.com/2015/10/28/court-your-fourth-fifth-amendment-rights-no-longer-exist-if-you-leave-country/

#10yrsago South Carolina sheriff fires the school-cop who beat up a black girl at her desk https://www.theguardian.com/us-news/2015/oct/28/south-carolina-parents-speak-out-school-board

#10yrsago The more unequal your society is, the more your laws will favor the rich https://web.archive.org/web/20151028133814/http://america.aljazeera.com/opinions/2015/10/the-more-unequal-the-country-the-more-the-rich-rule.html

#5yrsago Trump abandons supporters to freeze https://pluralistic.net/2020/10/28/trumpcicles/#omaha

#5yrsago RIAA's war on youtube-dl https://pluralistic.net/2020/10/28/trumpcicles/#yt-dl

#1yrago The US Copyright Office frees the McFlurry https://pluralistic.net/2024/10/28/mcbroken/#my-milkshake-brings-all-the-lawyers-to-the-yard


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026



Colophon (permalink)

Today's top sources:

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

cjheinz (Lexington, KY; Naples, FL), 17 hours ago:
"This isn't a production strategy, it's a marketing strategy"

Grokipedia Is the Antithesis of Everything That Makes Wikipedia Good, Useful, and Human


I woke up restless and kind of hungover Sunday morning at 6 am and opened Reddit. Somewhere near the top was a post called “TIL in 2002 a cave diver committed suicide by stabbing himself during a cave diving trip near Split, Croatia. Due to the nature of his death, it was initially investigated as a homicide, but it was later revealed that he had done it while lost in the underwater cave to avoid the pain of drowning.” The post linked to a Wikipedia page called “List of unusual deaths in the 21st century.” I spent the next two hours falling into a Wikipedia rabbit hole, clicking through all manner of horrifying and difficult-to-imagine ways to die.

A day later, I saw that Depths of Wikipedia, the incredible social media account run by Annie Rauwerda, had noted the entirely unsurprising fact that, behind the scenes, there had been robust conversation and debate among Wikipedia editors as to exactly what constitutes an "unusual" death, and that several previously listed "unusual" deaths had been deleted from the list for not being weird enough. For example: People who had been speared to death with beach umbrellas are "no longer an unusual or unique occurrence"; "hippos are extremely dangerous and very aggressive and there is nothing unusual about hippos killing people"; "mysterious circumstances doesn't mean her death itself was unusual." These are the types of edits and conversations, collectively repeated billions of times, that make Wikipedia what it is, and which make it so human, so interesting, so useful.

recently discovered that wikipedia volunteers have a hilariously high bar for what constitutes "unusual death"

depths of wikipedia (@depthsofwikipedia.bsky.social) 2025-10-27T12:38:42.573Z

Wednesday, as part of his ongoing war against Wikipedia because he does not like his page, Elon Musk launched Grokipedia, a fully AI-generated “encyclopedia” that serves no one and nothing other than the ego of the world’s richest man. As others have already pointed out, Grokipedia seeks to be a right wing, anti-woke Wikipedia competitor. But to even call it a Wikipedia competitor is to give the half-assed project too much credit. It is not a Wikipedia “competitor” at all. It is a fully robotic, heartless regurgitation machine that cynically and indiscriminately sucks up the work of humanity to serve the interests, protect the ego, amplify the viewpoints, and further enrich the world’s wealthiest man. It is a totem of what Wikipedia could and would become if you were to strip all the humans out and hand it over to a robot; in that sense, Grokipedia is a useful warning because of the constant pressure and attacks by AI slop purveyors to push AI-generated content into Wikipedia. And it is only getting attention, of course, because Elon Musk does represent an actual threat to Wikipedia through his political power, wealth, and obsession with the website, as well as the fact that he owns a huge social media platform.

One need only spend a few minutes clicking around the launch version of Grokipedia to understand that it lacks the human touch that makes Wikipedia such a valuable resource. Besides often having a conservative slant and the general hallmarks of AI writing, Grokipedia pages are overly long, poorly and confusingly organized, have no internal linking, have no photos, and are generally not written in a way that makes any sense. There is zero insight into how any of the articles were generated, how information was obtained and ordered, or what edits were made; there is no version history. Grokipedia is, literally, simply a single black-box LLM's version of an encyclopedia. There is a reason Wikipedia editors are called "editors," and it's because writing a useful encyclopedia entry does not mean "putting down random facts in no discernible order." To use an example I noticed from simply clicking around: The list of "notable people" in the Grokipedia entry for Baltimore begins with a disordered list of recent mayors, perhaps the least interesting but lowest-hanging-fruit type of data scraping about a place that could be done.

On even the lowest-stakes Wikipedia pages, real humans with real taste and real thoughts and real perspectives discuss and debate the types of information that should be included in any given article, in what order it should be presented, and the specific language that should be used. They do this under a framework of byzantine rules that have been battle-tested and debated through millions of edit wars, virtual community meetings, talk page discussions, conference meetings, and inscrutable listservs, all of which have themselves been informed by Wikimedia's "mission statement," the "Wikimedia values," its "founding principles," and policies and guidelines and tons of other stated and unstated rules, norms, processes and procedures. All of this behind-the-scenes legwork is essentially invisible to the user but is very serious business to the human editors building and protecting Wikipedia and its related projects (the high cultural barrier to entry for editors is also why it is difficult to find new editors for Wikipedia, and is something that the Wikipedia community is always discussing how to fix without ruining the project). Any given Wikipedia page has been stress-tested by actual humans who are discussing, for example, whether it's actually that unusual to get speared to death by a beach umbrella.

Grokipedia, meanwhile, looks like what you would get if you told an LLM to go make an anti-woke encyclopedia, which is essentially exactly what Elon Musk did. 

As LLMs tend to do, the model behind Grokipedia leaks part of its instructions on some pages. For example, a Grokipedia page on "Spanish Wikipedia" notes "Wait, no, can't cite Wiki," indicating that Grokipedia has been programmed not to link to Wikipedia. That entry does cite Wikimedia pages anyway, but in the "sources," those pages are not actually hyperlinked.

I have no doubt that Grokipedia will fail, like other attempts to "compete" with Wikipedia or build an "alternative" to Wikipedia, the likes of which no one has heard of because the attempts were all so laughable and poorly participated in that they died almost immediately. Grokipedia isn't really a competitor at all, because it is everything that Wikipedia is not: It is not an encyclopedia, it is not transparent, it is not human, it is not a nonprofit, it is not collaborative or crowdsourced; in fact, it is not really edited at all. It is true that Wikipedia is under attack from powerful political figures, the proliferation of AI, and related structural changes to discoverability and linking on the internet like AI summaries and knowledge panels. But Wikipedia has proven itself to be incredibly resilient because it is a project that specifically leans into the shared wisdom and collaboration of humanity, our shared weirdness and ways of processing information. That is something that an LLM will never be able to compete with.



cjheinz (Lexington, KY; Naples, FL), 1 day ago:
"Grokipedia, meanwhile, looks like what you would get if you told an LLM to go make an anti-woke encyclopedia, which is essentially exactly what Elon Musk did."

Could China devastate the US without firing a shot?


Regular readers will recall that for a long time I have been warning that the logical consequence of pursuing LLMs uber alles, to the exclusion of almost all other ideas, is that one winds up in a situation in which there is essentially zero technical moat. That in turn leads to price wars.

In large part, what I prophesied has come true; we now have many GPT-4 level models (more than I foresaw); GPT-5 was disappointing, and didn’t come in 2024; price wars; very little moat; no robust solution to hallucinations; a lot of corporate experimentation but not a ton of permanent inclusion in production. Modest profits was maybe a bit optimistic, with most LLM developers losing money.

Nonetheless, a very large fraction of our economy is going into that world of price wars and little moat: massive, massive investments into companies that are all basically following the same strategy of pouring more and more money into one single idea, viz. scaling ever-larger language models, in hopes that something magical will emerge.

Troublingly, the nation is now doubling down on these bets, still largely to the exclusion of developing other approaches, even as it has become clearer and clearer that problems of hallucinations and unreliability persist, and many are starting to report diminishing returns.

§

My own campaign against the single-minded narrowness of LLMs is not new (see e.g., the Scaling-uber-alles essay that launched this Substack in May 2022), but others are now speaking out, as VentureBeat just reported:

In a striking act of self-critique, one of the architects of the transformer technology that powers ChatGPT, Claude, and virtually every major AI system told an audience of industry leaders this week that artificial intelligence research has become dangerously narrow — and that he’s moving on from his own creation.

Llion Jones, who co-authored the seminal 2017 paper "Attention Is All You Need" and even coined the name "transformer," delivered an unusually candid assessment at the TED AI conference in San Francisco on Tuesday: Despite unprecedented investment and talent flooding into AI, the field has calcified around a single architectural approach, potentially blinding researchers to the next major breakthrough.

“Despite the fact that there’s never been so much interest and resources and money and talent, this has somehow caused the narrowing of the research that we’re doing,” Jones told the audience. The culprit, he argued, is the “immense amount of pressure” from investors demanding returns and researchers scrambling to stand out in an overcrowded field.

Yet investments in the same single approach grow every day.

§

Meanwhile, we have entered an era, perhaps unprecedented, of extreme financial interdependence and circular investment.

Where might this all go? A reader just sent me this provocative analysis from Scott Galloway:


China has had it with America’s sclerotic trade policy. If I were advising Xi, I would say this: If you think of America as an adversary, understand that America has essentially become a giant bet on AI. If the Chinese can take down the valuations of the top 10 AI companies — if those 10 companies fall 50, 70, even 90% like they did in the dot-com era — that would put the U.S. in a recession and put Trump out of vogue.

How can they do that? Pretty easily. I think they’re going to flood the market with cheap, open-weight models that require less processing power but are 90% as good. The Chinese AI sector, acting under the direction and encouragement of the CCP, is about to Old Navy the U.S. economy: They’re going to mess with America’s big bet on AI and make it not pay off.

§

Everyone in the industry probably remembers this moment:

Imagine if something like that, perhaps much worse, persisted, not for a day but indefinitely.[1]

§

For years, I have been warning that America has not sufficiently intellectually diversified its AI industry. Too much hype, accepted uncritically, has led us to an extraordinarily vulnerable position.

The consequences of being all-in on a single, easily-replicated technology could well turn out to be severe — with or without pressure from China.

Gary Marcus has been warning about the consequences of the intellectual monoculture of generative AI for years. He fears what might happen next.


[1] On the bright side, if Generative AI really were magic, maybe running that magic on Chinese servers would still be a big win for civilization, even if it were bad news for America. More likely, to my mind, is that generative AI will become a mildly useful, widely used commodity that never makes much money for anyone other than chip companies.



cjheinz (Lexington, KY; Naples, FL), 2 days ago:
"I, for one, welcome our new Chinese hegemons!"

On em dashes and ellipses


I don't know who was the first to write "The em dash is dead and AI killed it." Maybe it was Jacob Schilleci in the Reno Gazette Journal, but since most of it is behind a paywall and that paper is one of many run by Gannett, I'm not sure—though that's where the first link in this sentence goes. Credit where due.

What I am sure about is that em dashes are part of my style, and I'm not going to stop using them.

While we're at it, there is also a "Boomer ellipsis" thing. Says here in the NY Post, "When typing a large paragraph, older adults might use what has been dubbed 'Boomer ellipses' — multiple dots in a row also called suspension points — to separate ideas, unintentionally making messages more ominous or anxiety-inducing and irritating Gen Z." (I assume Brooke Kato, who wrote that sentence, is not an AI, despite using em dashes.) There is more along the same line from Upworthy and NDTV.

I'm a Boomer, at least demographically, but I rarely use ellipses in my writing, no matter where one might go. But I do remember that the keyboard chord for producing an ellipsis on a Mac is option + semicolon, and the one for an em dash is shift-option-hyphen. The problem with the ellipsis one is that it's a single character (Unicode U+2026) rather than three periods in a row. So not using that Unicode thing might be the least leveraged pro tip you'll get today.
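Since the post turns on that single-character detail, here is a tiny Python sketch (mine, not Searls's) of the difference between the U+2026 ellipsis character that the Mac shortcut produces and three typed periods, plus an easy way to normalize one to the other if a downstream tool cares:

    ellipsis_char = "\u2026"  # the single "…" character that option+semicolon inserts on a Mac
    three_periods = "..."     # what you get from pressing the period key three times

    print(len(ellipsis_char))              # 1
    print(len(three_periods))              # 3
    print(ellipsis_char == three_periods)  # False

    # Normalizing to plain periods, if something downstream chokes on the Unicode character:
    print("Well\u2026 maybe".replace(ellipsis_char, "..."))  # Well... maybe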

cjheinz (Lexington, KY; Naples, FL), 2 days ago:
"I use ellipses to note where I have omitted text in a quotation."

Why Don't the Jedi Use ChatGPT?


A bunch of us friends used to go out to movies together, back before Covid, kids, and streaming. The habit of the group was to watch the movie, then head out to a pub where we’d list all the flaws in the story, cinematography, acting, etc. I enjoyed the company, but didn’t participate much in the criticism sessions. The group’s philosophy seemed to be that flaws ruined the experience for them. For me, inconsistencies, incompleteness in storytelling etc. were opportunities. I would always enjoy the movie for what it intended to be before judging whether it actually got there; and then, for whatever shortcomings I saw, I’d make up reasons why, in that particular storytelling world, they weren’t flaws at all.

So, rather than criticizing the art, I’d repair it in my head as I went along. Unlike my friends, I was satisfied with every movie we went to. —Well, almost every one: some movies are insults to our intelligence that should only be watched when you’re either very drunk or very sick and high on benadryl.

The Star Wars universe is beloved by millions. I know very little about it because its lore is not limited to the films but has been fleshed out over the decades by countless novels. (Oddly enough, I remember reading the novelization of the first movie before it opened in our town. Not sure whether that’s true or just bad memory sequencing, but I experience the movie as the adaptation of a book, and not the other way around.) The Star Wars universe is a rich milieu of classic space-opera styling, and for many people it’s likely the only exposure they’ll have to the so-called ‘Golden Age’ of 1900s science fiction.

There were certainly stories about city- or country-wide computers before the 1950s (mostly cautionary tales), and Dick Tracy had his phone-watch. We take Star Trek's communicators for granted nowadays, but the idea of portable phones was pretty radical in the 1960s; still, people had thought of them. In the '70s, Star Wars put us in a universe that deliberately lacked such tech—and for good, solid, story reasons.


Retro-Futures as a Style

Image courtesy of Retrenders: https://retrenders.com/2016/10/13/make-coffee-with-star-wars-droid-r2-d2/#more-20074

Star Wars is appealing because it’s nostalgia. George Lucas was very clear about the influences for the universe he created, and they were Saturday-morning serials, B movies and epics such as Isaac Asimov’s Foundation series.

To criticize this kind of story for not including 21st Century technologies completely misses the point. These tales aren’t trying to keep up. They’re looking backwards fondly, and there’s a lot of fun to be gotten from going along with the ride.

This is what “suspension of disbelief” is all about.

There’s another option, though—two, really. The first is to have a super-high bar for your suspension of disbelief and consequently to nitpick your way through a movie. Well all do this to an extent, especially when the plot includes stuff that we know a lot about, whether that be local geography for a film set in our area, knowledge of physiology if we’re in the medical professions (“he could never have survived that!”), or politics or law. Fair enough.

The second option is to neither suspend disbelief nor be critical. It's to worldbuild your own solution to the gaps you see. If you're interested in storytelling and worldbuilding, either as a writer or as an approach to doing foresight, this is an exercise you should consider.

Spackle for Stories

Take the example of the Star Wars universe. If you don’t simply suspend disbelief and go along for the ride, then there’s a lot here that’s problematic. Despite the token alien, this seems to be a human-run galaxy; why is that? Sure, the Republic may be thousands of years old, so everything will look lived in, but on the other hand, they have a whole galaxy’s worth of raw materials at their disposal. There’s literally no scarcity, so even if we had a modern-style consumer society where you have to replace your phone every year for the newest, shiniest model, a galactic industrial base could easily keep up.

Then there’s that lack of phones, and the lack of infrastructural AI—droids have minds, so why not spacecraft, cities, and factories? Obviously this is to maintain the style and atmosphere of the story and to constrain things so that only certain kinds of stories can be told in this universe. But if we neither accept that, nor dismiss these questions as ‘mistakes’ then we have the opportunity to try to come up with explanations for them that make sense in the context of the world that’s been presented.

In a geeky sort of way, this is tremendous fun. For example, one troubling aspect of the Star Wars universe is that droids appear to be sentient, yet they are universally treated as completely disposable slaves. In a galactic civilization supposedly inclusive of a wild variety of alien species, why the exception for synthetic minds? On the face of it, Luke and Leia are both unrepentant slave-owners, which is disquieting.

My third option demands a designer’s approach to the dilemma. What conditions would make droids’ lack of rights make sense? If we take oppression off the table (because we want to consider Luke and Leia to be heroes, and the rebellion to be an attempt to free all the galaxy’s sentient beings, not just the organic ones) then we’re left with one elegant solution:

In the Star Wars universe, droids are not actually sentient. They are like ChatGPT: computers that are very good at imitating consciousness and personality, without actually having either. And, in this far away galaxy, everybody knows this.

There will be no droid Spartacus, because there is no actual suffering happening amongst the droid population. They really are just machines, so they have the same rights as a toaster no matter how much they may protest and act anxious. Luke and Leia are not slave owners. C3PO and R2D2 are no more conscious than a video game character.

Regarding the strange lack of infrastructural AI and analogues to ChatGPT (or even the weird lack of smartphones in this highly technologically sophisticated society), working out an explanation is a little trickier. It's also more rewarding, because when you get an answer to this question, it unlocks some amazing new ways to look at humanity's cultural, scientific, and technological future.


cjheinz (Lexington, KY; Naples, FL), 3 days ago:
"Good post."

Amazon Web Services Had a Very Bad Day, Amazon's Stock Price Did Not


Yesterday, Amazon Web Services, the cloud services backbone of the internet, had a serious glitch that took down a large chunk of the internet. How much did this outage cost? That’s not clear. But it was certainly a big number, with some estimates in the hundreds of billions of dollars.

Many people experienced the AWS outage in terms of time wasted, Zoom meetings not working, random services breaking, or apps hanging up or not launching. I had trouble dealing with Verizon; their customer service team took 15 minutes to respond to a chat query and explained they were having "technical issues" on their end. The outage affected everyone from Netflix to Snapchat to Venmo to thousands of other vital products and services. Here's the CEO of Eight Sleep, which makes internet-connected beds, apologizing for the outage and its effect on the sleep of its customers.

Leaving aside why beds need to be connected to the internet, let’s just stipulate that some sleep got messed up. And all of these costs were incurred because of dysfunction at AWS. We don’t have a name for the externalities induced by the market power here.

In 2022, Cory Doctorow described the cycle of decay of tech platforms, where they lock you in and then decrease overall quality. He deemed it "enshittification." I think it's worth offering a cousin to this term, which I'll call "Corporate Sludge." Corporate sludge is the cost, or costs, of an excessively monopolized and financialized economy that do not show up on a balance sheet.

Here’s what I mean. According to Amazon’s internal financials, AWS has a high profit margin. In 2024 it had $107 billion in revenue, and generated $39.8 billion in profit, with is a 37% operating income. A normal product or service, when faced with a catastrophe like the AWS outage, would take a financial hit. Yet here’s the stock of Amazon yesterday.

[Chart: Amazon's stock price on the day of the outage]

In other words, the costs of the AWS outage did not show up on the balance sheet directly responsible for it, or in the equity markets supposedly measuring long-term expectations of corporate profits. Economists would call the wasted time a "negative externality"; it's the equivalent of pollution. And while that cost doesn't show up anywhere we can affirmatively identify, someone has to pay for it. Those missed meetings, that lost production - it all raises costs for virtually everyone, a little. This cost is what economists or government statisticians just don't see, because it isn't measured. But that doesn't mean it's not real.

Once you start looking, you start realizing that corporate sludge is everywhere. I did an interview with corporate procurement specialist Rich Ham, and he told me that big corporations laid off most of their procurement teams as a cost-cutting measure during the financial crisis. Since Wall Street penalizes CEOs for hiring people, and rewards them for firing, they won't spend to rebuild those teams. As a result, suppliers of things like uniforms, waste disposal, guards, and pest control massively gouge the corporate world. According to Ham, corporate procurement costs are going up 7% a year. And those increased costs, that corporate sludge, are passed along as higher prices, even as accounting profits aren't improving. Executives think they are being efficient - headcount is down - but then they wonder why everything costs so much.

I suspect, though it goes unmeasured, that a lot of the increase in inflation that no one can quite explain is a result of corporate sludge. It's a bit like 'dark matter,' a fudge factor astronomers created to describe why the matter they can actually observe can't explain how galaxies rotate and hold together. Similarly, I don't believe economists have a good explanation for inflationary increases of the last few years. They certainly understand that supply chains broke, but they can't describe why prices didn't go back down once they were repaired. And I don't think anyone can really explain why the economy seems to be booming, while ordinary people are unhappy. My fudge factor to explain it is corporate sludge - there's massive inefficiency everywhere as a result of hidden market power that doesn't show up on balance sheets, but shows up as time wasted, anxiety, and extra costs where you don't expect it.

Health Care Sludge

I’ve been thinking about the concept of corporate sludge for about a year, after the debate over health insurance in the U.S. prompted by Luigi Mangione’s alleged killing of a health care CEO. At the time, there was a lot of back-and-forth about popular anger at insurers. Economic commentator Noah Smith wrote a piece that bothered me a lot, explaining why we were focused on the wrong bad guy. Here’s the headline:

He explained that American anger is misplaced. Insurance companies, he wrote, are generally efficient pass-throughs that got the blame for covering for greedy doctors. What was Smith's evidence? Well, it was a balance sheet analysis. While UnitedHealth Group's revenue is on the order of $400 billion, he explained, the company had a thin profit margin of just 6.11%. It's actually efficiently run, not greedy. What kind of a villain or monopolist passes most of its revenue on to someone else? It's a rational argument from Smith, and yet, not persuasive. People hate UnitedHealth Group because we know that analyzing accounting profits excludes the real costs of its behavior.

The experience of health care is full of corporate sludge, from having to dispute weird bills to being steered to medication that may or may not be correct, to doctors spending their time fighting with bureaucrats over reimbursements and audits. In February 2024, UnitedHealth Group’s Change subsidiary got hacked, and it shut down cash flow to doctors, pharmacists, and hospitals, in some cases for months. Like the AWS outage, that didn’t affect UHG’s profit. But it certainly had a cost.

In other words, to look only at accounting profits is to miss the genuine costs of monopoly or financialization, which is the corporate sludge embedded in an economic system where the actual stakeholders who must use the system, patients and clinicians in the case of health care, have little power. And it’s not just UHG. In our current model of tight-fisted financiers choosing how to allocate health resources, somehow U.S. hospitals spend twice as much on administrative costs as they do on direct patient care. Yet that doesn’t show up anywhere as accounting profit; hospitals are constantly explaining how strapped they are for cash.

Administrative and Direct Patient Care Expenditures at U.S. Hospitals

Yet, this money is going somewhere. I noted this dynamic a few weeks ago when I profiled a monopoly in unnecessary hospital surveys, which is the result of a merger between survey companies Qualtrics and Press Ganey, as well as a regulatory mandate by the Affordable Care Act. Ryan Smith, the billionaire owner of the Utah Jazz, founded Qualtrics, so one way to understand corporate sludge is that it’s shifting unnoticeable amounts of money from each of us to people who in turn buy sports teams. There are innumerable economic termites in health care, like billing codes that the American Medical Association has a copyright on, or electronic record keeping systems, or HIPAA-compliant note-taking software. And then there are also just big unnecessary billing departments fighting with other big unnecessary billing departments.

Noah Smith would look at the profit margin of hospitals, insurers, distributors, PBMs, et al. and ask, "where's the problem?" What he'd find is that nearly everything in health care has low margins or can be made to seem that way. So just by that analysis, Smith would see nothing but an efficiently run health care system. Yet the U.S. spends three times as much as every other country on its system, gets worse results, and generates a lot of health care millionaires who are good at pricing arbitrage.

The Keys Aren’t Near the Lamp Post

In other words, one of the more important reasons to look at corporate sludge as a meaningful challenge in the business world is that it helps expose something economists don’t measure, which is private inefficiency. A firm with market power can harvest that market power as accounting profits, or as additional administrative bloat or a higher cost supply chain. While economically these are similar, in terms of accounting and rhetoric, they are not.

For instance, one of the arguments that price gouging didn’t matter in food inflation during Covid was that grocery stores have thin operating margins. If there’s so much gouging, where’s all that extra profit? As I noted before, cutting a corporate procurement staff means a supermarket chain pays more for waste disposal or uniforms, but a profit margin analysis would miss that the price of milk is higher because a vendor is screwing the grocery chain.
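A back-of-the-envelope sketch of that point, in Python with made-up numbers rather than anything from a real filing: if a gouged supplier cost is simply passed through at the same thin operating margin, the margin analysis looks identical while shoppers quietly pay more.

    def shelf_price(supplier_cost: float, margin: float) -> float:
        """Price needed to earn the given operating margin on this item."""
        return supplier_cost / (1 - margin)

    margin = 0.02          # a typically "thin" grocery operating margin
    lean_cost = 3.00       # hypothetical supplier cost per gallon of milk, well-run supply chain
    sludgy_cost = 3.60     # same item with 20% vendor gouging baked in

    for label, cost in (("lean", lean_cost), ("sludgy", sludgy_cost)):
        price = shelf_price(cost, margin)
        profit = price - cost
        print(f"{label}: price ${price:.2f}, profit ${profit:.2f}, margin {profit / price:.0%}")

    # Both lines report the same 2% margin; the "sludgy" shopper just pays about $0.61 more per gallon.

The sludge never appears in the margin column, only in the price column, which a margin screen never looks at.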

Moreover, looking purely at the margins of a Kroger or Walmart misses how these companies organize an entire supply chain, and incentivize the sale of more expensive food in general. For instance, most large grocers make a fair amount of money through “slotting fees,” where branded food companies pay for display space for their wares, which is a form of price discrimination. Slotting fee contracts, or “trade spend,” make it very hard for smaller companies that sell cheap, fresh food to get into supermarkets, because they can’t afford to get on the shelf. It’s the ultra-processed food that permeates.

Since the government stopped enforcing laws against price discrimination in the 1980s, the food industry learned "how to sell larger quantities of low-nutrient processed foods merely by manipulating their placement." One consequence is that "rates of obesity in the United States doubled," but another is that regional producers of cheap, fresh food disappeared, because they couldn't get onto the shelf. So prices across the supply chain went up.

If you just look at the operating margins of the big grocers, you'd miss the transformation of our food industry from a low-cost, locally based, high-quality distribution system to a high-cost, globalized, low-quality one. You can argue all you want about economies of scale in food processing, but it's just absurd to believe that the overhead of a major corporation like Kraft makes it more efficient than a local food producer. I mean, it's been a few years since Covid, and the supply chains have cleared, yet prices didn't come back down. No one can explain why. The answer is corporate sludge, hidden inefficiencies that are a result of market power.

Electric Sludge

Another example full of sludge comes in the details of the operations of investor-owned electric utilities, which are increasing prices dramatically and blaming it on the build-out of data centers. While data center growth is meaningful, utility prices in a lot of places were increasing before the massive AI investment, and states without data center growth are also seeing much higher prices. Moreover, there is an odd discontinuity in price increases, with some utilities hiking costs and others keeping them lower. What explains the difference? In America, most of our utilities are investor-owned, but some are owned by cities or are structured as cooperatives. Mark Ellis, a utility analyst, sent me this graphic of the change in electricity pricing for investor-owned vs publicly-owned utilities.

Publicly owned utility rates have increased faster than investor-owned ones in only a few states.

What’s going on here? Well, investor-owned utilities get regulators to let them raise prices based on a supposed need to send high profits to Wall Street, whereas publicly owned ones don’t. But the costs are much higher than what we see go to investors. Just looking at some of the public filings of utilities shows there’s also a lot of bloat. For instance, while the publicly owned utilities have a few lawyers on staff, private utilities of significant size seem to employ the equivalent of an internal mid-size law firm, paying dozens of lawyers up to a million dollars apiece for legal, regulatory, and lobbying work. This kind of gold-plating is no doubt happening down the line, from replacing poles when it’s unnecessary to do so, to bringing on unnecessary well-paid administrative staff to promote silly diversity initiatives.

All of these costs are real, and must be paid.

Now, getting back to AWS, most people, when looking at the situation, observed that too much of the internet is based on a few chokepoints. Fast Company had an article titled "The AWS outage reveals the web's massive centralization problem," European leaders expressed alarm over their dependency on U.S. big tech giants, and a giant Reddit thread - "AWS Services are down, This Is Why Monopolies Should Be Banned" - drew thousands of comments. (Ironically the thread itself went down for a period because of the outage.)

But none of these people said the problem with AWS is that it has a high profit margin. And no one focused on market share, or whether customers can in some theoretical world switch to another cloud provider. That’s because normal people can see what’s going on, even if economists can’t. Corporate sludge is everywhere.


Thanks for reading! Your tips make this newsletter what it is, so please send me tips on weird monopolies, stories I’ve missed, or other thoughts. And if you liked this issue of BIG, you can sign up here for more issues, a newsletter on how to restore fair commerce, innovation, and democracy. Consider becoming a paying subscriber to support this work, or if you are a paying subscriber, giving a gift subscription to a friend, colleague, or family member. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.

cheers,

Matt Stoller

cjheinz (Lexington, KY; Naples, FL), 7 days ago:
"Great post."