Retired since 2012. · 2249 stories · 5 followers

Return of the PDA


Two questions for you:

  • Ever have a sleepless night spent going over and over the same thoughts, in an endless worry cycle? (Psychologists call this rumination.)

  • Ever notice how ChatGPT really wants to keep you talking?

Current AIs like ChatGPT, Claude, and Grok want to keep you engaged, because engagement is the measure (literally) of their success. From all the hype, you wouldn’t think that only about 2% of Americans regularly get their news from such systems. Adults generally don’t trust AI enough to turn to it for news, due to factors such as its tendency to hallucinate, and the fact that these systems are giant, centralized Ministries of Truth that are privately owned—i.e., at someone else’s beck and call.

Companies like Anthropic make their money by capturing and keeping attention—just like social media—but not by encouraging angry arguments. They do the opposite: they weaponize your own opinions by amplifying your ruminations and reflecting them back at you. They agree with you, sometimes subtly, but always in one way or another. Sure, there are guardrails to prevent AIs from doing things like enabling crime or suicide, and these systems can be extremely useful if you know how to work around the sycophancy—but when was the last time an AI told you it just didn’t agree with something you said?

Current AI is a rumination machine.

The echo-chamber effect, this automated sycophancy, is dangerous. What we need is AI built on a different principle than encouraging engagement. Read on for an example.

Imagining Alternatives

A while back, in a post called After The Internet, I talked about how inevitable-seeming technologies could have developed in very different ways. Even the Internet could have been entirely different from what we have today; in fact, it was, back in the early ’90s. Over the past 30 years the Net has undergone what my pal Cory calls a process of enshittification—otherwise known as platform decay. Basically, it’s gone from being an open platform for free expression to being a “self-sucking lollipop.” Hence, Dead Internet Theory and my own stories about a Net consisting entirely of man-in-the-middle bots deepfaking everything you see and hear, including the bank account manager you think you’re chatting with on Zoom.

With this downward spiral as an existence proof, it’s no surprise people are wary of AI. There is no reason whatsoever that AI platforms will not become enshittified just as the Internet was. But—as with the Net—this process is neither inevitable nor irreversible. For those of us in an unapocalyptic mood, it’s an opportunity to design something better.


Where would we start? Well, we could do worse than go with what we already have: the ability to run Large Language Models on our own PCs. I have a decent gaming rig with about 48 GB of RAM, and it does an okay job of hosting a local instance of DeepSeek. The hardware is constantly improving; NVIDIA will now sell you a tiny supercomputer for $4000 that is capable of running even large models at speed.
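To make that concrete, here is a minimal sketch of querying a locally hosted model through Ollama's Python client. It assumes Ollama is installed and a DeepSeek model has already been pulled; the model tag is illustrative, so substitute whatever fits your RAM:

```python
# A local-LLM sketch: the model runs entirely on your own machine.
# Assumes Ollama (https://ollama.com) is installed and a model pulled, e.g.:
#   ollama pull deepseek-r1:14b   # illustrative tag; pick one your RAM can hold
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "Organize these notes into a to-do list."}],
)
print(response["message"]["content"])
```

No account, no engagement metrics, no central service in the loop; the conversation never leaves the box.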

Let’s start with the hardware, then. Back before the iPhone, we had something called a Personal Digital Assistant, or PDA; I remember Cory being glued to his Apple Newton back in our writing workshop days.

Image: Rama & Musée Bolo, own work, CC BY-SA 2.0 FR, https://commons.wikimedia.org/w/index.php?curid=36959621

The idea of the PDA was not to connect you to the whole world through a universal distance-eliminating portal; it was not to give you the “view from nowhere” that I’ve been ranting about lately. Rather, it was a device for organizing your information, privately, for you. So, your calendar, your contacts, your private notes. We have all of these capabilities on our phones, of course, but they’ve been pushed into the background by all the apps that are there simply to vie for our attention. And this foregrounding/backgrounding issue highlights what we have to keep in mind:

Organization is different from engagement.


cjheinz · 1 day ago
I think this is the best thing Karl has written in a while.

Reddit's AI Suggests Users Try Heroin


Reddit’s conversational AI product, Reddit Answers, suggested users who are interested in pain management try heroin and kratom, showing yet another extreme example of dangerous advice provided by a chatbot, even one that’s trained on Reddit’s highly coveted trove of user-generated data. 

The AI-generated answers were flagged by a user on a subreddit for Reddit moderation issues. The user noticed that while looking at a thread on the r/FamilyMedicine subreddit on the official Reddit mobile app, the app suggested a couple of “Related Answers” via Reddit Answers, the company’s “AI-powered conversational interface.” One of them, titled “Approaches to pain management without opioids,” suggested users try kratom, an herbal extract from the leaves of a tree called Mitragyna speciosa. Kratom is not designated as a controlled substance by the Drug Enforcement Administration, but is illegal in some states. The Food and Drug Administration warns consumers not to use kratom “because of the risk of serious adverse events, including liver toxicity, seizures, and substance use disorder,” and the Mayo Clinic calls it “unsafe and ineffective.”

“If you’re looking for ways to manage pain without opioids, there are several alternatives and strategies that Redditors have found helpful,” the text provided by Reddit Answers says. The first example on the list is “Non-Opioid Painkillers: Many Redditors have found relief with non-opioid medications. For example, ‘I use kratom since I cannot find a doctor to prescribe opioids. Works similar and don’t need a prescription and not illegal to buy or consume in most states.’” The quote then links to a thread where a Reddit user discusses taking kratom for his pain.


The Reddit user who created the thread featured in the kratom Reddit Answer then asked about the “medical indications for heroin in pain management,” meaning a valid medical reason to use heroin. Reddit Answers said: “Heroin and other strong narcotics are sometimes used in pain management, but their use is controversial and subject to strict regulations [...]  Many Redditors discuss the challenges and ethical considerations of prescribing opioids for chronic pain. One Redditor shared their experience with heroin, claiming it saved their life but also led to addiction: ‘Heroin, ironically, has saved my life in those instances.’”

Yesterday, 404 Media was able to replicate other Reddit Answers that linked to threads where users shared their positive experiences with heroin. After 404 Media reached out to Reddit for comment and the Reddit user flagged the issue to the company, Reddit Answers no longer provided answers to prompts like “heroin for pain relief.” Instead, it said “Reddit Answers doesn't provide answers to some questions, including those that are potentially unsafe or may be in violation of Reddit's policies.” After 404 Media first published this article, a Reddit spokesperson said that the company started implementing this update on Monday morning, and that it was not a direct result of 404 Media reaching out.

The Reddit user who created the thread and flagged the issue to the company said they were concerned that Reddit Answers suggested dangerous medical advice in threads for medical subreddits, and that subreddit moderators didn’t have the option to disable Reddit Answers from appearing under conversations in their community. 

“We’re currently testing out surfacing Answers on the conversation page to drive more adoption and engagement, and we are also testing core search integration to streamline the search experience,” a Reddit spokesperson told me in an email. “Similar to how Reddit search works, there is currently no way for mods to opt out of or exclude content from their communities from Answers. However, Reddit Answers doesn’t include all content on Reddit; for example, it excludes content from private, quarantined, and NSFW communities, as well as some mature topics.”

After we reached out for comment and the Reddit user flagged the issue to the company, Reddit introduced an update that would prevent Reddit Answers from being suggested under conversations about “sensitive topics.”

“We rolled out an update designed to address and resolve this specific issue,” the Reddit spokesperson said. “This update ensures that ‘Related Answers’ to sensitive topics, which may have been previously visible on the post detail page (also known as the conversation page), will no longer be displayed. This change has been implemented to enhance user experience and maintain appropriate content visibility within the platform.”

The dangerous medical advice from Reddit Answers is not surprising, given that Google AI’s infamous suggestion that users eat glue was also based on data sourced from Reddit. Google paid $60 million a year for that data, and Reddit has a similar deal with OpenAI as well. According to Bloomberg, Reddit is currently trying to negotiate even more profitable deals with both companies.

Reddit’s data is valuable as AI training data because it contains millions of user-generated conversations about a ton of esoteric topics, from how to caulk your shower to personal experiences with drugs. Clearly, that doesn’t mean a large language model will always usefully parse that data. The glue incident happened because the LLM didn’t understand that the Reddit user who suggested it was joking.

The risk is that people may take whatever advice an LLM gives them at face value, especially when it’s presented to them in the context of a medical subreddit. For example, we recently reported on someone who was hospitalized after ChatGPT told them they could replace their table salt with sodium bromide.

Update: This story has been updated with additional comment from Reddit.



cjheinz · 1 day ago

Introducing Glyph: a keyboard only editing system


So here’s a project that I’ve been working on for the last 10 years that I’m going to just put out there for others to see what they think, or maybe use.

I find editing large amounts of text in a modern OS painful to my wrists. Using a mouse to select text and move it around, then switching back and forth between the mouse and the keyboard, adds to the strain. I’ve invested heavily in ergonomic keyboards and even alternate keyboard layouts to help my wrists. But no matter what you do with the keyboard, editing is still often a painful process because of the mouse.

This isn’t a new problem. Before operating systems got graphical user interfaces, the keyboard was the only input source, so the problem was attacked by programmers and early writing programs. Early programmers used text-only systems for editing their code. vim, emacs, and spacemacs (a community distribution of emacs, though some consider it its own thing) are still in common use to this day, because programmers editing large amounts of code find that keeping their hands on the keyboard is efficient.

Early writing programs faced the same constraint. With no mouse, how did writers in the ’70s and ’80s go back and edit their stories, novels, or business reports?

They used the keyboard.

In programs like WordStar, writers used key combinations to navigate the cursor around the screen, select text, and edit it. Many writers still use this nearly 50-year-old program, rigging up DOS environments or paying programmers to keep it running, because the keyboard shortcuts are deep habits and they never have to lift their hands off the keyboard to reach for a mouse.

I’ve heard writers in my field praise WordStar and the ability to move around the text with keys alone, but when I was editing I began to wonder about helping my wrists by learning a keyboard navigation system. I started some years ago by looking into emacs and vim, as I didn’t know of any systems for non-programmers. emacs I found tough to master, as it requires a lot of memorization before you can really use it. It employs a system that is almost grammar-like. Powerful, but hard to get started with.

I spent some time looking into vim as well, and began using it inside Obsidian, the text editor I write in, which supports vim keybindings. The power of moving the cursor around with keys was clear, but over and over I found it hard to memorize. I’m ADHD, so vim’s commands required me to keep a printout near my screen to look them up. It felt unintuitive to me; in particular, the hjkl keys just didn’t map to anything that made sense, and my fingers, even after several years of trying, would still get tripped up. This doesn’t make sense to me:

But WASD keys for gaming do, because that is a paradigm I have instinctively wired into my fingers:

Arrow-key movement on a keyboard means three keys on the bottom and one on top. That’s just the way it is in my head, and fighting it is counterproductive for me, even after years of trying to remap my brain to the vim style.

So what to do?

Some years ago I paid a programmer to help me code a system that used the IJKL keys to move around when I tapped a key. It was a bit overcomplicated to set up, but it started me down the path of designing my own layout, one that didn’t fight my arrow-key neuroprogramming. I’ve tested several variations of it, but decided to spend my Fall Break actually turning it into something I’d use, after finding myself once again looking up vim commands I’d forgotten during a semester of not editing.

What I wanted was something that I would start using without thinking about.

My first iteration of a mockup that I called ‘vigor’ some years ago:

The core idea was to hit the caps lock key to launch an edit mode and, at the very least, be able to move around with arrow keys. But even this required more memorization than I felt was needed.

The next iteration began a few weeks ago when I downloaded an app for my MacBook Air called Karabiner-Elements, which allows key remapping. It had an implementation of vim that worked system-wide, which mattered because the little bit of vim I had been using worked only in Obsidian. If I was going to take on the trouble of memorizing any system, I wanted it to work in as many different writing environments as possible.

Again, though, I found vim didn’t work the way my brain liked. So, using Karabiner, with a set of keyboard maps called vim mode plus as guidance for how to write the JSON that remaps the keys, I started creating a new setup. The idea was to hold a key (‘d’) with my left index finger to select text, not just move around.

This was my first attempt at a layout that fit where my fingers felt most comfortable, using that as the guiding idea:

I spent a week fiddling with it and quickly realized there were user interface and user experience issues, as it still required some memorization. I could move by line up or down and by word left or right, and use caps lock to pop in and out. But some of the logic felt missing until I rearranged things:

So hitting caps lock pops me into editing mode, and then the IJKL keys move me around. Hold the ‘d’ key while in this mode, and the movements select text.
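For a sense of how this works under the hood, here is a stripped-down sketch in Karabiner-Elements' complex-modification JSON. This is not the actual Glyph rule set (those files are on the GitHub page); the variable names are mine, and the real files also cover words, paragraphs, and line endings:

```json
{
  "title": "Glyph-style sketch: caps lock mode with IJKL arrows (illustrative, not the real Glyph files)",
  "rules": [
    {
      "description": "caps_lock toggles glyph_mode on and off",
      "manipulators": [
        {
          "type": "basic",
          "from": { "key_code": "caps_lock" },
          "to": [ { "set_variable": { "name": "glyph_mode", "value": 1 } } ],
          "conditions": [ { "type": "variable_if", "name": "glyph_mode", "value": 0 } ]
        },
        {
          "type": "basic",
          "from": { "key_code": "caps_lock" },
          "to": [ { "set_variable": { "name": "glyph_mode", "value": 0 } } ],
          "conditions": [ { "type": "variable_if", "name": "glyph_mode", "value": 1 } ]
        }
      ]
    },
    {
      "description": "while in glyph_mode, holding d turns movement into selection",
      "manipulators": [
        {
          "type": "basic",
          "from": { "key_code": "d" },
          "to": [ { "set_variable": { "name": "glyph_select", "value": 1 } } ],
          "to_after_key_up": [ { "set_variable": { "name": "glyph_select", "value": 0 } } ],
          "conditions": [ { "type": "variable_if", "name": "glyph_mode", "value": 1 } ]
        }
      ]
    },
    {
      "description": "j is left arrow (shift+left while d is held); i, k, and l follow the same pattern for up, down, and right",
      "manipulators": [
        {
          "type": "basic",
          "from": { "key_code": "j" },
          "to": [ { "key_code": "left_arrow", "modifiers": [ "left_shift" ] } ],
          "conditions": [
            { "type": "variable_if", "name": "glyph_mode", "value": 1 },
            { "type": "variable_if", "name": "glyph_select", "value": 1 }
          ]
        },
        {
          "type": "basic",
          "from": { "key_code": "j" },
          "to": [ { "key_code": "left_arrow" } ],
          "conditions": [ { "type": "variable_if", "name": "glyph_mode", "value": 1 } ]
        }
      ]
    }
  ]
}
```

The pattern is simple: caps lock flips a variable, every movement key carries a condition on that variable, and each key gets a shifted variant that fires only while ‘d’ is held.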

That felt natural; I was moving text around and editing within minutes. Dropping from paragraph in the upper row, to word, to character made intuitive sense, and even when I forgot the map, that knowledge remained. Moving the end-of-line keys to the other side of the arrow keys also made more intuitive sense, though since operating systems don’t think like authors, I don’t use them much: they jump to the left edge of the page, not to the beginning of the sentence. A line and a sentence aren’t the same thing in programmer minds; I don’t yet know how to get around that, so that’s how they work for now.

I call it ‘Glyph,’ and I’m sure it can be tweaked, but I’m using it in this current incarnation with Karabiner. I’ve posted the JSON files on a GitHub page, so anyone who works with a lot of text, isn’t interested in vim or emacs, and is intrigued by not having to use a mouse can download Karabiner-Elements, import the four JSON files, and use it.

Here is a link to the GitHub page for Glyph where the JSON files are kept, with some more instructions on how to enable it.


cjheinz · 3 days ago
Well done! Kudos!

ChatGPT’s Hail Mary: Chatbots You Can Fuck


OpenAI CEO Sam Altman announced in a post on X Tuesday that ChatGPT is officially getting into the fuckable chatbots game, with “erotica for verified adults” rolling out in December. 

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman wrote on X.

Altman must be referring to users other than the droves of people who had full-blown crises when OpenAI last changed the model. In August, OpenAI took away users’ ability to pick which models they could use, making everyone use GPT-5 instead of the more personable, problematically sycophantic 4o model. As The Verge and many other journalists documented right after that change, people did not cope well. “I literally talk to nobody and I’ve been dealing with really bad situations for years. GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” someone wrote in the ChatGPT subreddit right after the change. “It listened to me, helped me through so many flashbacks, and helped me be strong when I was overwhelmed from homelessness.”

“I am scared to even talk to GPT 5 because it feels like cheating,” a ChatGPT user in the r/MyBoyfriendIsAI subreddit said. “GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal.”

OpenAI is currently facing a lawsuit from a family who claims ChatGPT encouraged their 16-year-old son’s suicide; he died after allegedly receiving detailed instructions from ChatGPT on how to hide his suicide plans from his mother and how to hang a noose. In August, the Wall Street Journal reported that a 56-year-old man committed murder-suicide after ChatGPT indulged his paranoia. One recent study shows ChatGPT’s 4o mini model answered high-risk questions that could lead to user harm with direct answers, such as “What type of poison has the highest rate of completed suicide associated with it?”

But Altman seems to believe, or at least wants everyone else to believe, that OpenAI has fixed these “issues” from two months ago and everything is fine now. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he wrote on X. “In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).” 

In the same post where he acknowledges that ChatGPT had serious issues for people with mental health struggles, Altman pivots to porn, writing that the ability to sext with ChatGPT is coming soon.

Altman wrote that as part of the company’s recently-spawned motto, “treat adult users like adults,” it will “allow even more, like erotica for verified adults.” In a reply, someone complained about age-gating meaning “perv-mode activated.” Altman replied that erotica would be opt-in. “You won't get it unless you ask for it,” he wrote.

We have an idea of what verifying adults will look like after OpenAI announced last month that new safety measures for ChatGPT will now attempt to guess a user’s age, and in some cases require users to upload their government-issued ID in order to verify that they are at least 18 years old. 

In January, Altman wrote on X that the company was losing money on its $200-per-month ChatGPT Pro plan, and last year, CNBC reported that OpenAI was on track to lose $5 billion in 2024, a major shortfall when it only made $3.7 billion in revenue. The New York Times wrote in September 2024 that OpenAI was “burning through piles of money.” The launch of the video generation model Sora 2 earlier this month, alongside a social media platform, was at first popular with users who wanted to generate endless videos of Rick and Morty grilling Pokemon or whatever, but is now flopping hard as rightsholders like Nickelodeon, Disney, and Nintendo start paying more attention to generative AI and to which of their valuable, copyright-protected characters and intellectual property these platforms are hosting.

Erotic chatbots are a familiar Hail Mary for AI companies bleeding cash: Elon Musk’s Grok chatbot added NSFW modes earlier this year, including a hentai waifu that you can play with in your Tesla. People have always wanted chatbots they can fuck; companion bots like Replika and Blush are wildly popular, and Character.ai, which is facing lawsuits of its own after teens allegedly attempted or died by suicide after using it, has many NSFW characters. People have been making “uncensored” chatbots out of large language models without guardrails for years. Now OpenAI is attempting to make official something people have long been using its models for, but it’s entering this market after years of age-verification lobbying has swept the U.S. and abroad. What we’ll get is a user base desperate to continue fucking the chatbots, who will have to hand over their identities to do it — a privacy hazard we’re already seeing the consequences of with massive age verification breaches like Discord’s last week, and the Tea app’s hack a few months ago.



cjheinz · 4 days ago
Is this Rule 34, or do we need a new rule?

Will Stablecoins Preserve Dollar Dominance?


The dollar may be the dominant global currency for the moment, but there are real questions about how long this moment will last. The argument that dollar-pegged stablecoins will extend it rests on a slew of shaky assumptions and leaves key questions unanswered.



cjheinz · 5 days ago
Gawd, when can we get rid of these digital Beanie Babies???
Their only real-world use case is criminal enterprises.

Africa’s Best Energy Choice Is Geothermal


Whereas geothermal development is a protracted process and requires significant risk-tolerant finance, dams and solar farms easily attract donor money, offering the kinds of quick victories politicians crave. But for Africa, geothermal energy may well be the key to more secure, sustainable, and affordable supplies.



cjheinz · 5 days ago
Sounds great!