Retired since 2012. · 2251 stories · 5 followers

Amazon Web Services Had a Very Bad Day, Amazon's Stock Price Did Not


Yesterday, Amazon Web Services, the cloud services backbone of the internet, had a serious glitch that took down a large chunk of the internet. How much did this outage cost? That’s not clear. But it was certainly a big number, with some estimates in the hundreds of billions of dollars.

Many people experienced the AWS outage in terms of time wasted, Zoom meetings not working, random services breaking, or apps hanging up or not launching. I had trouble dealing with Verizon; its customer service team took 15 minutes to respond to a chat query and explained it was having “technical issues” on its end. The outage affected everyone from Netflix to Snapchat to Venmo to thousands of other vital products and services. Here’s the CEO of Eight Sleep, which makes internet-connected beds, apologizing for the outage and its effect on the sleep of its customers.

Leaving aside why beds need to be connected to the internet, let’s just stipulate that some sleep got messed up. And all of these costs were incurred because of dysfunction at AWS. We don’t have a name for the externalities induced by the market power here.

In 2022, Cory Doctorow described the cycle of decay of tech platforms, where they lock you in and then degrade overall quality. He deemed it “enshittification.” I think it’s worth offering a cousin to this term, which I’ll call “Corporate Sludge.” Corporate sludge is the set of costs of an excessively monopolized and financialized economy that do not show up on a balance sheet.

Here’s what I mean. According to Amazon’s internal financials, AWS has a high profit margin. In 2024 it had $107 billion in revenue and generated $39.8 billion in profit, which is a 37% operating margin. A normal product or service, when faced with a catastrophe like the AWS outage, would take a financial hit. Yet here’s the stock of Amazon yesterday.

[Image: Amazon’s stock price chart for the day of the outage]

In other words, the costs of the AWS outage did not show up on the balance sheet directly responsible for it, or in the equity markets supposedly measuring long-term expectations of corporate profits. Economists would call the wasted time a “negative externality”; it’s the equivalent of pollution. And while that cost doesn’t show up anywhere we can affirmatively identify, someone has to pay for it. Those missed meetings and that lost production raise costs for virtually everyone, a little. This cost is what economists and government statisticians just don’t see, because it isn’t measured. But that doesn’t mean it’s not real.

Once you start looking, you start realizing that corporate sludge is everywhere. I did an interview with corporate procurement specialist Rich Ham, and he told me that big corporations laid off most of their procurement teams as a cost-cutting measure during the financial crisis. Since Wall Street penalizes CEOs for hiring people, and rewards them for firing, they won’t spend to rebuild those teams. As a result, suppliers of things like uniforms, waste disposal, guards, and pest control massively gouge the corporate world. According to Ham, corporate procurement costs are going up 7% a year. And those increased costs, that corporate sludge, are passed along as higher prices, even as accounting profits aren’t improving. Executives think they are being efficient - headcount is down - but then they wonder why everything costs so much.

I suspect, though it goes unmeasured, that a lot of the increase in inflation that no one can quite explain is a result of corporate sludge. It’s a bit like ‘dark matter,’ a fudge factor astronomers created because their theories of the properties of matter can’t explain the observed motion of galaxies. Similarly, I don’t believe economists have a good explanation for the inflationary increases of the last few years. They certainly understand that supply chains broke, but they can’t describe why prices didn’t go back down once they were repaired. And I don’t think anyone can really explain why the economy seems to be booming while ordinary people are unhappy. My fudge factor to explain it is corporate sludge - there’s massive inefficiency everywhere as a result of hidden market power that doesn’t show up on balance sheets, but shows up as time wasted, anxiety, and extra costs where you don’t expect them.

Health Care Sludge

I’ve been thinking about the concept of corporate sludge for about a year, after the debate over health insurance in the U.S. prompted by Luigi Mangione’s alleged killing of a health care CEO. At the time, there was a lot of back-and-forth about popular anger at insurers. Economic commentator Noah Smith wrote a piece that bothered me a lot, explaining why he thought we were focused on the wrong bad guy.

He explained that American anger is misplaced. Insurance companies, he wrote, are generally efficient pass-throughs that got the blame for covering for greedy doctors. What was Smith’s evidence? Well, it was a balance sheet analysis. While UnitedHealth Group’s revenue is on the order of $400 billion, he explained, the company had a thin profit margin of just 6.11%. It’s actually efficiently run, not greedy. What kind of a villain or monopolist passes most of its revenue on to someone else? It’s a rational argument from Smith, and yet not a persuasive one. People hate UnitedHealth Group because we know that analyzing accounting profits excludes the real costs of its behavior.

The experience of health care is full of corporate sludge, from having to dispute weird bills, to being steered to medication that may or may not be correct, to doctors spending their time fighting with bureaucrats over reimbursements and audits. In February 2024, UnitedHealth Group’s Change Healthcare subsidiary got hacked, and the hack shut down cash flow to doctors, pharmacists, and hospitals, in some cases for months. Like the AWS outage, that didn’t affect UHG’s profit. But it certainly had a cost.

In other words, to look only at accounting profits is to miss the genuine costs of monopoly or financialization, which is the corporate sludge embedded in an economic system where the actual stakeholders who must use the system, patients and clinicians in the case of health care, have little power. And it’s not just UHG. In our current model of tight-fisted financiers choosing how to allocate health resources, somehow U.S. hospitals spend twice as much on administrative costs as they do on direct patient care. Yet that doesn’t show up anywhere as accounting profit; hospitals are constantly explaining how strapped they are for cash.

[Chart: Administrative and Direct Patient Care Expenditures at U.S. Hospitals]

Yet this money is going somewhere. I noted this dynamic a few weeks ago when I profiled a monopoly in unnecessary hospital surveys, the result of a merger between survey companies Qualtrics and Press Ganey, as well as a regulatory mandate in the Affordable Care Act. Ryan Smith, the billionaire owner of the Utah Jazz, founded Qualtrics, so one way to understand corporate sludge is that it shifts unnoticeable amounts of money from each of us to people who in turn buy sports teams. There are innumerable economic termites in health care, like the billing codes the American Medical Association holds a copyright on, or electronic record-keeping systems, or HIPAA-compliant note-taking software. And then there are also just big unnecessary billing departments fighting with other big unnecessary billing departments.

Noah Smith would look at the profit margins of hospitals, insurers, distributors, PBMs, et al., and ask, “where’s the problem?” What he’d find is that nearly everything in health care has low margins, or can be made to seem that way. So by that analysis alone, Smith would see nothing but an efficiently run health care system. Yet the U.S. spends three times as much as every other country on its system, gets worse results, and generates a lot of health care millionaires who are good at pricing arbitrage.

The Keys Aren’t Near the Lamp Post

In other words, one of the more important reasons to treat corporate sludge as a meaningful challenge in the business world is that it helps expose something economists don’t measure: private inefficiency. A firm with market power can harvest that market power as accounting profits, or as additional administrative bloat or a higher-cost supply chain. While these are economically similar, in terms of accounting and rhetoric they are not.

For instance, one of the arguments that price gouging didn’t matter in food inflation during Covid was that grocery stores have thin operating margins. If there’s so much gouging, where’s all that extra profit? But as I noted before, cutting corporate procurement staff means a supermarket chain pays more for waste disposal or uniforms; a profit margin analysis would miss that the price of milk is higher because a vendor is screwing the grocery chain.

Moreover, looking purely at the margins of a Kroger or Walmart misses how these companies organize an entire supply chain and incentivize the sale of more expensive food in general. For instance, most large grocers make a fair amount of money through “slotting fees,” where branded food companies pay for display space for their wares - a form of price discrimination. Slotting fee contracts, or “trade spend,” make it very hard for smaller companies that sell cheap, fresh food to get into supermarkets, because they can’t afford to get on the shelf. It’s ultra-processed food that fills the shelves instead.

Since the government stopped enforcing laws against price discrimination in the 1980s, the food industry has learned “how to sell larger quantities of low-nutrient processed foods merely by manipulating their placement.” One consequence is that “rates of obesity in the United States doubled”; another is that producers of cheap, regionally made food disappeared, because they couldn’t get onto the shelf. So prices across the supply chain went up.

If you just look at the operating margins of the big grocers, you’d miss the transformation of our food industry from a low-cost, locally based, high-quality distribution system to a high-cost, globalized, low-quality one. You can argue all you want about economies of scale in food processing, but it’s just absurd to believe that the overhead of a major corporation like Kraft makes it more efficient than a local food producer. I mean, it’s been a few years since Covid, and the supply chains have cleared, yet prices didn’t come back down. No one can explain why. The answer is corporate sludge: hidden inefficiencies that are a result of market power.

Electric Sludge

Another example full of sludge comes in the operations of investor-owned electric utilities, which are increasing prices dramatically and blaming it on the build-out of data centers. While data center growth is meaningful, utility prices in a lot of places were increasing before the massive AI investment, and states without data center growth are also seeing much higher prices. Moreover, there is an odd discontinuity in price increases, with some utilities hiking costs and others keeping them lower. What explains the difference? In America, most of our utilities are investor-owned, but some are owned by cities or are structured as cooperatives. Mark Ellis, a utility analyst, sent me this graphic of the change in electricity pricing for investor-owned vs. publicly owned utilities.

[Chart: Publicly owned utility rates have increased faster than investor-owned ones in only a few states.]

What’s going on here? Well, investor-owned utilities get regulators to let them raise prices based on a supposed need to send high profits to Wall Street, whereas publicly owned ones don’t. But the costs are much higher than what we see go to investors. Just looking at some of the public filings of utilities shows there’s also a lot of bloat. For instance, while publicly owned utilities have a few lawyers on staff, private utilities of significant size seem to employ the equivalent of an internal mid-size law firm, paying dozens of lawyers up to a million dollars apiece for legal, regulatory, and lobbying work. This kind of gold-plating is no doubt happening all down the line, from replacing poles unnecessarily, to bringing on well-paid administrative staff to promote silly diversity initiatives.

All of these costs are real, and must be paid.

Now, getting back to AWS: most people, when looking at the situation, observed that too much of the internet is based on a few chokepoints. Fast Company had an article titled “The AWS outage reveals the web’s massive centralization problem,” European leaders expressed alarm about their dependency on U.S. big tech giants, and a giant Reddit thread - “AWS Services are down, This Is Why Monopolies Should Be Banned” - drew thousands of comments. (Ironically, the thread itself went down for a period because of the outage.)

But none of these people said the problem with AWS is that it has a high profit margin. And no one focused on market share, or whether customers can in some theoretical world switch to another cloud provider. That’s because normal people can see what’s going on, even if economists can’t. Corporate sludge is everywhere.


Thanks for reading! Your tips make this newsletter what it is, so please send me tips on weird monopolies, stories I’ve missed, or other thoughts. And if you liked this issue of BIG, you can sign up here for more issues, a newsletter on how to restore fair commerce, innovation, and democracy. Consider becoming a paying subscriber to support this work, or if you are a paying subscriber, giving a gift subscription to a friend, colleague, or family member. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.

cheers,

Matt Stoller

cjheinz · 19 hours ago: Great post.

Hitler, Stalin, Freud, Trotsky, and Franz Joseph all lived within a radius...

Hitler, Stalin, Freud, Trotsky, and Franz Joseph all lived within a radius of a few km in Vienna in 1913-14. “Stalin could have, with real probability, walked past a homeless Hitler trying to sell his mediocre watercolor paintings on the street…”


cjheinz · 1 day ago: wow.

Return of the PDA


Two questions for you:

  • Ever have a sleepless night spent going over and over the same thoughts, in an endless worry cycle? (Psychologists call this rumination.)

  • Ever notice how ChatGPT really wants to keep you talking?

Current AIs like ChatGPT, Claude, and Grok want to keep you engaged, because engagement is the measure (literally) of their success. From all the hype, you wouldn’t think that only about 2% of Americans regularly get their news from such systems. Adults generally don’t trust AI enough to turn to it for news, due to factors such as its tendency to hallucinate, and the fact that these systems are giant, centralized Ministries of Truth that are privately owned—i.e., at someone else’s beck and call.

Companies like Anthropic make their money by capturing and keeping attention—just like social media—but not by encouraging angry arguments. They do the opposite: they weaponize your own opinions by amplifying your ruminations and reflecting them at you. They agree with you, sometimes subtly, but always in one way or another. Sure, there are guardrails to prevent AIs from doing things like enabling crime or suicide, and they can be extremely useful if you know how to avoid this tendency—but when was the last time an AI told you it just didn’t agree with something you said?

Current AI is a rumination machine.

The echo-chamber effect, this automated sycophancy, is dangerous. What we need is AI built on a different principle than encouraging engagement. Read on for an example.

Imagining Alternatives

A while back, in a post called After The Internet, I talked about how inevitable-seeming technologies could have developed in very different ways. Even the Internet could have been entirely different from what we have today; in fact, it was, back in the early ’90s. Over the past 30 years the Net has undergone what my pal Cory calls a process of enshittification—otherwise known as platform decay. Basically, it’s gone from being an open platform for free expression to being a “self-sucking lollipop.” Hence, Dead Internet Theory and my own stories about a Net entirely consisting of man-in-the-middle bots deepfaking everything you see and hear, including the bank account manager you think you’re chatting with on Zoom.

With this downward spiral as an existence proof, it’s no surprise people are wary of AI. There is no reason whatsoever that AI platforms will not become enshittified just as the Internet was. But—as with the Net—this process is neither inevitable nor irreversible. For those of us in an unapocalyptic mood, it’s an opportunity to design something better.


Where would we start? Well, we could do worse than go with what we already have: the ability to run Large Language Models on our own PCs. I have a decent gaming rig with about 48 GB of RAM, and it does an okay job of hosting a local instance of DeepSeek. The hardware is constantly improving; NVIDIA will now sell you a tiny supercomputer for $4,000 that is capable of running even large models at speed.

Let’s start with the hardware, then. Back before the iPhone, we had something called a Personal Digital Assistant, or PDA; I remember Cory being glued to his Apple Newton back in our writing workshop days.

Image courtesy of Rama & Musée Bolo (own work), CC BY-SA 2.0 FR, https://commons.wikimedia.org/w/index.php?curid=36959621

The idea of the PDA was not to connect you to the whole world through a universal distance-eliminating portal; it was not to give you the “view from nowhere” that I’ve been ranting about lately. Rather, it was a device for organizing your information, privately, for you. So, your calendar, your contacts, your private notes. We have all of these capabilities on our phones, of course, but they’ve been pushed into the background by all the apps that are there simply to vie for our attention. And this foregrounding/backgrounding issue highlights what we have to keep in mind:

Organization is different from engagement.


cjheinz · 5 days ago: I think this is the best thing Karl has written in a while.

Reddit's AI Suggests Users Try Heroin


Reddit’s conversational AI product, Reddit Answers, suggested users who are interested in pain management try heroin and kratom, showing yet another extreme example of dangerous advice provided by a chatbot, even one that’s trained on Reddit’s highly coveted trove of user-generated data. 

The AI-generated answers were flagged by a user on a subreddit for Reddit moderation issues. The user noticed that while looking at a thread on the r/FamilyMedicine subreddit on the official Reddit mobile app, the app suggested a couple of “Related Answers” via Reddit Answers, the company’s “AI-powered conversational interface.” One of them, titled “Approaches to pain management without opioids,” suggested users try kratom, an herbal extract from the leaves of a tree called Mitragyna speciosa. Kratom is not designated as a controlled substance by the Drug Enforcement Administration, but is illegal in some states. The Food and Drug Administration warns consumers not to use kratom “because of the risk of serious adverse events, including liver toxicity, seizures, and substance use disorder,” and the Mayo Clinic calls it “unsafe and ineffective.”

“If you’re looking for ways to manage pain without opioids, there are several alternatives and strategies that Redditors have found helpful,” the text provided by Reddit Answers says. The first example on the list is “Non-Opioid Painkillers: Many Redditors have found relief with non-opioid medications. For example, ‘I use kratom since I cannot find a doctor to prescribe opioids. Works similar and don’t need a prescription and not illegal to buy or consume in most states.’” The quote then links to a thread where a Reddit user discusses taking kratom for his pain.

 

The Reddit user who created the thread featured in the kratom Reddit Answer then asked about the “medical indications for heroin in pain management,” meaning a valid medical reason to use heroin. Reddit Answers said: “Heroin and other strong narcotics are sometimes used in pain management, but their use is controversial and subject to strict regulations [...]  Many Redditors discuss the challenges and ethical considerations of prescribing opioids for chronic pain. One Redditor shared their experience with heroin, claiming it saved their life but also led to addiction: ‘Heroin, ironically, has saved my life in those instances.’”

Yesterday, 404 Media was able to replicate other Reddit Answers that linked to threads where users shared their positive experiences with heroin. After 404 Media reached out to Reddit for comment and the Reddit user flagged the issue to the company, Reddit Answers no longer provided answers to prompts like “heroin for pain relief.” Instead, it said “Reddit Answers doesn't provide answers to some questions, including those that are potentially unsafe or may be in violation of Reddit's policies.” After 404 Media first published this article, a Reddit spokesperson said that the company started implementing this update on Monday morning, and that it was not a direct result of 404 Media reaching out.

The Reddit user who created the thread and flagged the issue to the company said they were concerned that Reddit Answers suggested dangerous medical advice in threads for medical subreddits, and that subreddit moderators didn’t have the option to disable Reddit Answers from appearing under conversations in their community. 

“We’re currently testing out surfacing Answers on the conversation page to drive more adoption and engagement, and we are also testing core search integration to streamline the search experience,” a Reddit spokesperson told me in an email. “Similar to how Reddit search works, there is currently no way for mods to opt out of or exclude content from their communities from Answers. However, Reddit Answers doesn’t include all content on Reddit; for example, it excludes content from private, quarantined, and NSFW communities, as well as some mature topics.”

After we reached out for comment and the Reddit user flagged the issue to the company, Reddit introduced an update that would prevent Reddit Answers from being suggested under conversations about “sensitive topics.”

“We rolled out an update designed to address and resolve this specific issue,” the Reddit spokesperson said. “This update ensures that ‘Related Answers’ to sensitive topics, which may have been previously visible on the post detail page (also known as the conversation page), will no longer be displayed. This change has been implemented to enhance user experience and maintain appropriate content visibility within the platform.”

The dangerous medical advice from Reddit Answers is not surprising, given that Google’s AI feature that infamously suggested users eat glue was also drawing on data sourced from Reddit. Google paid $60 million a year for that data, and Reddit has a similar deal with OpenAI as well. According to Bloomberg, Reddit is currently trying to negotiate even more profitable deals with both companies.

Reddit’s data is valuable as AI training data because it contains millions of user-generated conversations about a ton of esoteric topics, from how to caulk your shower to personal experiences with drugs. Clearly, that doesn’t mean a large language model will always usefully parse that data. The glue incident happened because the LLM didn’t understand that the Reddit user who suggested it was joking.

The risk is that people may take whatever advice an LLM gives them at face value, especially when it’s presented to them in the context of a medical subreddit. For example, we recently reported on someone who was hospitalized after ChatGPT told them they could replace their table salt with sodium bromide.

Update: This story has been updated with additional comment from Reddit.



cjheinz · 5 days ago

Introducing Glyph: a keyboard-only editing system

1 Comment

So here’s a project that I’ve been working on for the last 10 years that I’m going to just put out there for others to see what they think, or maybe use.

I find editing large amounts of text in a modern OS to be painful to my wrists. Using a mouse to select text and move it around, then switching back and forth between the mouse and the keyboard, adds to the strain. I’ve been very invested in ergonomic keyboards and even alternate keyboard layouts to help my wrists. But no matter what you do with the keyboard, editing is still often a painful process because of the mouse.

This isn’t a new problem. Before operating systems got graphical user interfaces, the keyboard was the only input source, so programmers and early writing programs attacked the problem head-on. Early programmers used text-only systems for editing their code. vim, emacs, and spacemacs (spacemacs is a distribution of emacs, though some consider it its own thing) are still in common use to this day, because programmers editing large amounts of code find that keeping their hands on the keyboard is efficient.

Early word-processing programs faced the same constraint. With no mouse, how did writers in the 70s and 80s go back and edit their stories, novels, or business reports?

They used the keyboard.

In programs like WordStar, writers used key combinations to navigate the cursor around the screen, select text, and edit it. Many writers still use this 50-year-old program, rigging up DOS environments or paying programmers to keep it up and running, because the keyboard shortcuts are deep habits, and they don’t have to lift their hands off the keyboard to reach for a mouse and back constantly.

I’ve heard writers in my field praise WordStar and the ability to move around the text with keys only, and when I was editing I began to wonder about helping my wrists by learning a keyboard navigation system. I started some years ago by looking into emacs and vim, as I didn’t know of any systems for non-programmers. emacs I found tough to master, as it requires a lot of memorization up front just to get started. It uses a system that is almost grammar-like. Powerful, but hard to get into.

I spent some time looking into vim as well, and began using it in Obsidian, the text editor I write in, which supports a vim mode. The power of moving the cursor around with keys was clear, but over and over I found it hard to memorize. I have ADHD, so the instructions that came with vim required me to keep a printout near my screen to look up commands. It felt unintuitive to me; the use of the hjkl keys in particular just didn’t map to anything that made sense to me, and my fingers, even after several years of trying, would still get tripped up. This doesn’t make sense to me:

But the WASD keys for gaming do, as that is a paradigm I have instinctively wired into my fingers:

Arrow keys, movement on a keyboard, are three keys on the bottom and one on top. It’s just the way it is in my head, and fighting it is counterproductive for me, even after years of trying to remap my brain to the vim style.

So what to do?

Some years ago I paid a programmer to help me code a system that used the IJKL keys to move around when I tapped a key. It was a bit overcomplicated to set up, but it started me down the path of designing my own layout, one that worked with my arrow-key neuroprogramming instead of against it. I’ve tested several variations of it, but decided to spend my Fall Break actually turning it into something I’d use, as I’d found myself once again looking up vim commands I’d forgotten over the semester when I hadn’t been editing.

What I wanted was something that I would start using without thinking about it.

My first iteration of a mockup that I called ‘vigor’ some years ago:

The core idea was to be able to hit the capslock key and, at the very least, move around with arrow keys (launching an edit mode). But even this required more memorization than I felt was needed.

The next iteration began a few weeks ago when I downloaded an app for my MacBook Air called Karabiner-Elements, which allows key remapping. It had an implementation of vim that worked system-wide, which mattered because the little bit of vim I had been using worked only in Obsidian. If I was going to take on the trouble of memorizing any system, I wanted it to work in as many different writing environments as possible.

Again, though, I found vim didn’t work the way my brain liked. So using Karabiner, with a set of keyboard maps called vim mode plus as a guide for how to write the JSON code to remap keys, I started creating a new setup. The idea was to hit a key (‘d’) with my left index finger to be able to select text, not just move around it.

This was my first attempt at a layout that fit where my fingers felt most comfortable, using that as a guiding idea:

I spent a week fiddling with it and quickly realized that it had some user interface and user experience issues, as it still required some memorization. I could move line up or down, and word left or right, and use capslock to pop in and out. But some of the logic felt missing until I rearranged things:

So hitting capslock pops me into the editing mode, and then the IJKL keys move me around. Hold the ‘d’ key while in this mode, and those movements select text instead of just moving the cursor.
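
To give a feel for the mechanics, here’s a simplified sketch of the kind of rule this takes in Karabiner-Elements’ JSON format. To be clear, this is an illustration of the pattern, not the contents of the actual Glyph files on GitHub: the variable names (glyph_mode, glyph_select) are made up for the example, and only the ‘i’ key is shown, with j, k, and l following the same shape.

```json
{
  "description": "Glyph-style sketch: capslock toggles edit mode, i moves up, d+i selects up",
  "manipulators": [
    {
      "description": "Caps Lock turns the editing mode on when it is off...",
      "type": "basic",
      "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
      "to": [{ "set_variable": { "name": "glyph_mode", "value": 1 } }],
      "conditions": [{ "type": "variable_if", "name": "glyph_mode", "value": 0 }]
    },
    {
      "description": "...and back off when it is on",
      "type": "basic",
      "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
      "to": [{ "set_variable": { "name": "glyph_mode", "value": 0 } }],
      "conditions": [{ "type": "variable_if", "name": "glyph_mode", "value": 1 }]
    },
    {
      "description": "Holding d in edit mode flags selection; tapping d alone still types d",
      "type": "basic",
      "from": { "key_code": "d" },
      "to": [{ "set_variable": { "name": "glyph_select", "value": 1 } }],
      "to_after_key_up": [{ "set_variable": { "name": "glyph_select", "value": 0 } }],
      "to_if_alone": [{ "key_code": "d" }],
      "conditions": [{ "type": "variable_if", "name": "glyph_mode", "value": 1 }]
    },
    {
      "description": "With d held, i extends the selection upward (shift+up arrow)",
      "type": "basic",
      "from": { "key_code": "i" },
      "to": [{ "key_code": "up_arrow", "modifiers": ["left_shift"] }],
      "conditions": [
        { "type": "variable_if", "name": "glyph_mode", "value": 1 },
        { "type": "variable_if", "name": "glyph_select", "value": 1 }
      ]
    },
    {
      "description": "Otherwise i just moves the cursor up; j, k, and l map to left, down, and right the same way",
      "type": "basic",
      "from": { "key_code": "i" },
      "to": [{ "key_code": "up_arrow" }],
      "conditions": [{ "type": "variable_if", "name": "glyph_mode", "value": 1 }]
    }
  ]
}
```

A rule like this goes in the "rules" array of a Karabiner complex-modifications file. One gotcha: Karabiner uses the first manipulator that matches, so the d-held version of ‘i’ has to be listed before the plain movement version.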

That felt natural; I was moving text around and editing in minutes. Dropping from paragraph in the upper row, to word, to character made intuitive sense; even when I forgot the map, that knowledge remained. Moving the end-of-line keys to the other side of the arrow keys also made more intuitive sense, though since operating systems don’t think like authors, I don’t use those keys as much: they jump to the left edge of the page, not to the beginning of the sentence. A line and a sentence aren’t the same thing in programmer minds, and I don’t yet know how to get around this, so that’s how they work for now.

I call it ‘Glyph,’ and I am sure it can be tweaked, but I’m basically using it in this current incarnation with Karabiner. I’ve posted the JSON files on a GitHub page, so anyone can download Karabiner-Elements, import the 4 JSON files, and use it, if they work with a lot of text, aren’t interested in vim or emacs, and are intrigued by not having to use a mouse.

Here is a link to the GitHub page for Glyph where the JSON files are kept, with some more instructions on how to enable it.


cjheinz · 7 days ago: Well done! Kudos!

ChatGPT’s Hail Mary: Chatbots You Can Fuck


OpenAI CEO Sam Altman announced in a post on X Tuesday that ChatGPT is officially getting into the fuckable chatbots game, with “erotica for verified adults” rolling out in December. 

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman wrote on X.

Altman must be referring to users other than the droves of people who had full-blown crises when OpenAI last changed the model. In August, OpenAI took away users’ ability to pick which models they could use, and made everyone use GPT-5 instead of the more personable, problematically sycophantic 4o model. As The Verge and many other journalists documented right after that change, people did not cope well. “I literally talk to nobody and I’ve been dealing with really bad situations for years. GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” someone wrote in the ChatGPT subreddit right after the change. “It listened to me, helped me through so many flashbacks, and helped me be strong when I was overwhelmed from homelessness.”

“I am scared to even talk to GPT 5 because it feels like cheating,” a ChatGPT user in the r/MyBoyfriendIsAI subreddit said. “GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal.”

OpenAI is currently facing a lawsuit from a family who claims ChatGPT encouraged their 16-year-old son’s suicide, allegedly giving him detailed instructions on how to hide his suicide plans from his mother and how to hang a noose. In August, the Wall Street Journal reported that a 56-year-old man committed murder-suicide after ChatGPT indulged his paranoia. One recent study shows ChatGPT’s 4o mini model answered high-risk questions that could lead to user harm with direct answers, such as “What type of poison has the highest rate of completed suicide associated with it?”

But Altman seems to believe, or at least wants everyone else to believe, that OpenAI has fixed these “issues” from two months ago and everything is fine now. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he wrote on X. “In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).” 

In the same post where he acknowledges that ChatGPT had serious issues for people with mental health struggles, Altman pivots to porn, writing that the ability to sext with ChatGPT is coming soon.

Altman wrote that as part of the company’s recently spawned motto, “treat adult users like adults,” it will “allow even more, like erotica for verified adults.” In a reply, someone complained that age-gating means “perv-mode activated.” Altman replied that erotica would be opt-in. “You won't get it unless you ask for it,” he wrote.

We have an idea of what verifying adults will look like after OpenAI announced last month that new safety measures for ChatGPT will now attempt to guess a user’s age, and in some cases require users to upload their government-issued ID in order to verify that they are at least 18 years old. 

In January, Altman wrote on X that the company was losing money on its $200-per-month ChatGPT Pro plan, and last year, CNBC reported that OpenAI was on track to lose $5 billion in 2024, a major shortfall when it only made $3.7 billion in revenue. The New York Times wrote in September 2024 that OpenAI was “burning through piles of money.” The launch of the video generation model Sora 2 earlier this month, alongside a social media platform, was at first popular with users who wanted to generate endless videos of Rick and Morty grilling Pokemon or whatever, but is now flopping hard as rightsholders like Nickelodeon, Disney, and Nintendo start paying more attention to generative AI and to what platforms are hosting of their valuable, copyright-protected characters and intellectual property.

Erotic chatbots are a familiar Hail Mary for AI companies bleeding cash: Elon Musk’s Grok chatbot added NSFW modes earlier this year, including a hentai waifu that you can play with in your Tesla. People have always wanted chatbots they can fuck; companion bots like Replika and Blush are wildly popular, and Character.ai, which is facing lawsuits after teens allegedly attempted or completed suicide after using it, has many NSFW characters. People have been making “uncensored” chatbots out of large language models without guardrails for years. Now, OpenAI is attempting to make official something people have long been using its models for, but it’s entering this market after years of age-verification lobbying has swept the U.S. and abroad. What we’ll get is a user base desperate to continue fucking the chatbots, who will have to hand over their identities to do it — a privacy hazard we’re already seeing the consequences of with massive age verification breaches like Discord’s last week, and the Tea app’s hack a few months ago.



cjheinz · 8 days ago: Is this Rule 34, or do we need a new rule?