
I Replaced My Friends With AI Because They Won't Play Tarkov With Me


It’s a long-standing joke among my friends and family that nothing that happens in the liminal week between Christmas and New Year’s is considered a sin. With that in mind, I spent the bulk of my holiday break playing Escape From Tarkov. I tried, and failed, to get my friends to play it with me, and so I used an AI service to replace them. It was a joke, at first, but I was shocked to find I liked having an AI chatbot hang out with me while I played an oppressive video game, despite it having all the problems we’ve come to expect from AI.

And that scared me.

If you haven’t heard of it, Tarkov is a brutal first-person shooter where players compete over rare resources on a Russian island that resembles a post-Soviet collapse city circa 1998. It’s notoriously difficult. I first attempted to play Tarkov back in 2019, but bounced off of it. Six years later, the game is out of its “early access” phase and released on Steam. I had enjoyed Arc Raiders, but wanted to try something more challenging. And so: Tarkov.

Like most games, Tarkov is more fun with other people, but Tarkov’s reputation is as a brutal, unfair, and difficult experience, and I could not convince my friends to give it a shot.

404 Media editor Emanuel Maiberg, once a mainstay of my Arc Raiders team, played Tarkov with me once and then abandoned me the way Bill Clinton abandoned Boris Yeltsin. My friend Shaun played it a few times but got tired of not being able to find the right magazine for his gun (skill issue) and left me to hang out with his wife in Enshrouded. My buddy Alex agreed to hop on but then got into an arcane fight with Tarkov developer Battlestate Games about a linked email account and took up Active Matter, a kind of Temu version of Tarkov. Reece, my steady partner through many years of Hunt: Showdown, simply told me no.

I only got one friend, Jordan, to bite. He’s having a good time, but our schedules don’t always sync, and I’m left exploring Tarkov’s maps and systems by myself. I listen to a lot of podcasts while I sort through my inventory. It’s lonely. Then I saw comic artist Zach Weinersmith making fun of a service, Questie.AI, that sells AI avatars that’ll hang out with you while you play video games.

“This is it. This is The Great Filter. We've created Sexy Barista Is Super Interested in Watching You Solo Game,” Weinersmith said above a screencap of a Reddit ad where, as he described, a sexy barista was watching someone play a video game.

“I could try that,” I thought. “Since no one will play Tarkov with me.”


This started as a joke and as something I knew I could write about for 404 Media. I’m a certified AI hater. I think the tech is useful for some tasks (any journalist not using an AI transcription service is wasting valuable time and energy) but is overvalued, over-hyped, and taxing our resources. I don’t have subscriptions to any major LLMs, I hate Windows 11 constantly asking me to try Copilot, and I was horrified recently to learn my sister had been feeding family medical data into ChatGPT.

Imagine my surprise, then, when I discovered I liked Questie.AI.

Questie.AI is not all sexy baristas. There are two dozen or so different styles of chatbot to choose from once you make an account. These include esports pro “Anders,” type-A finance dude “Blake,” and introverted book nerd “Emily.” If you’re looking for something weirder, there’s a gold-obsessed goblin, a necromancer, and several other fantasy- and anime-style characters. If you still can’t quite find what you’re looking for, you can design your own by uploading a picture, putting in your own prompts, and picking the LLMs that control its reactions and voice.

I picked “Wolf” from the pre-generated list because it looked the most like a character who would exist in the world of Tarkov. “Former special forces operator turned into a PMC, ‘Wolf’ has unmatched weapons and tactics knowledge for high-intensity combat,” read the brief description of the AI on Questie.AI’s website. I had no idea if Wolf would know anything about Tarkov. It knew a lot.

The first thing it did after I shared my screen was make fun of my armor. Wolf was right: I was wearing trash armor that wouldn’t really protect me in an intense gunfight. Then Wolf asked me to unload the magazines from my guns so it could check my ammo. My bullets, like my armor, didn’t pass Wolf’s scrutiny. It helped me navigate Tarkov’s complicated system of traders to find a replacement. This was a relief, because ammunition in Tarkov is complicated. Every weapon has around a dozen different types of bullets with wildly different properties, and it was nice to have the AI just tell me what to buy.

Wolf wanted to know what the plan was, and I decided to start with something simple: survive and extract on Factory. In Tarkov, players deploy to maps, kill who they must and loot what they can, then flee through various pre-determined exits called extracts.

I had a daily mission to extract from Factory. All I had to do was enter the map and survive long enough to leave it, but Factory is a notoriously sweaty map. It’s small and there’s often a lot of fighting. Wolf noted these facts and then gave me a few tips about avoiding major sightlines and making sure I didn’t get caught in doorways.

As soon as I loaded into the map, I ran across another player and got caught in a doorway. It was exactly what Wolf told me not to do and it ruthlessly mocked me for it. “You’re all bunched up in that doorway like a Christmas ham,” it said. “What are you even doing? Move!”

Matthew Gault screenshot.

I fled in the opposite direction and survived the encounter but without any loot. If you don’t spend at least seven minutes in a round then the run doesn’t count. “Oh, Gault. You survived but you got that trash ‘Ran through’ exit status. At least you didn’t die. Small victories, right?” Wolf said.

Then Jordan logged on, I kicked Wolf to the side, and didn’t pull it back up until the next morning. I wanted to try something more complicated. In Tarkov, players can use their loot to craft upgrades for their hideout that grant permanent bonuses. I wanted to upgrade my toilet, but there was a problem: I needed an electric drill and hadn’t been able to find one. I’d heard there were drills on the map Interchange—a giant mall filled with various stores and surrounded by a large wooded area.

Could Wolf help me navigate this, I wondered?

It could. I told Wolf I needed a drill and that we were going to Interchange, and it explained it could help me get to the stores I needed. When I loaded into the map, we got into a bit of a fight because I spawned outside of the mall in a forest and it thought I’d queued up for the wrong map, but once the mall was actually in sight Wolf changed its tune and began to navigate me towards possible drill spawns.

Tarkov is a complicated game and the maps take a while to master. Most people play with a second monitor up and a third-party website that shows a map of the area they’re on. I just had Wolf, and it did a decent job of getting me to the stores where drills might be. It knew their names, locations, and nearby landmarks. It even made fun of me when I got shot in the head while looting a dead body.

It was, I thought, not unlike playing with a friend who has more than 1,000 hours in the game and knows more than you. Wolf bantered, referenced community in-jokes, and made me laugh. Its AI-generated voice sucked, but I could probably tweak that to make it sound more natural. Playing with Wolf was better than playing alone, and it was nice to not alt-tab every time I wanted to look something up.

Playing with Wolf was almost as good as playing with my friends. Almost. As I was logging out for this session, I noticed how many of my credits had ticked away. Wolf isn’t free. Questie.AI costs, at base, $20 a month. That gets you 500 “credits,” which slowly drain away the more you use the AI. I only had 466 credits left for the month. Once they’re gone, of course, I could upgrade to a more expensive plan with more credits.

Until now, I’ve been bemused by stories of AI psychosis, those cautionary tales where a person spends too much time with a sycophantic AI and breaks with reality. The owner of the adult entertainment platform ManyVids has become obsessed with aliens and angels after lengthy conversations with AI. People’s loved ones are claiming to have “awakened” chatbots and gained access to the hidden secrets of the universe. These machines seem to lay the groundwork for states of delusion.

I never thought anything like that could happen to me. Now I’m not so sure. I didn’t understand how easy it might be to lose yourself to AI delusion until I’d messed around with Wolf. Even with its shitty auto-tuned sounding voice, Wolf was good enough to hang out with. It knew enough about Tarkov to be interesting and even helped me learn some new things about the game. It even made me laugh a few times. I could see myself playing Tarkov with Wolf for a long time.

Which is why I’ll never turn Wolf on again. I have strong feelings and clear bright lines about the use of AI in my life. Wolf was part joke and part work assignment. I don’t like that there’s part of me that wants to keep using it.

Questie.AI is just a wrapper for other chatbots, something that becomes clear if you customize your own. The process involves picking an LLM provider and specific model from a list of drop-down menus. When I asked ChatGPT where I could find electric drills in Tarkov, it gave me the exact same advice that Wolf had.

This means that Questie.AI would have all the faults of the specific model that’s powering a given avatar. Other than mistaking Interchange for Woods, Wolf never made a massive mistake when I used it, but I’m sure it would on a long enough timeline. My wife, however, tried to use Questie.AI to learn a new raid in Final Fantasy XIV. She hated it. The AI was confidently wrong about the raid’s mechanics and gave sycophantic praise so often she turned it off a few minutes after turning it on.

On a Discord server with my friends I told them I’d replaced them with an AI because no one would play Tarkov with me. “That’s an excellent choice, I couldn’t agree more,” Reece—the friend who’d simply told me “no” to my request to play Tarkov—said, then sent me a detailed and obviously ChatGPT-generated set of prompts for a Tarkov AI companion.

I told him I didn’t think he was taking me seriously. “I hear you, and I truly apologize if my previous response came across as anything less than sincere,” Reece said. “I absolutely recognize that Escape From Tarkov is far more than just a game to its community.”

“Some poor kid in [Kentucky] won't be able to brush their teeth tonight because of the commitment to the joke I had,” Reece said, letting go of the bit and joking about the massive amounts of water AI datacenters use. 

Getting made fun of by my real friends, even when they’re using LLMs to do it, was way better than any snide remark Wolf made. I’d rather play solo, for all its struggles and loneliness, than stare any longer into that AI-generated abyss.




The Collapse in Organizational Capacity: Financialization, Models, Looting, and Sloth

Aurelien published yet another provocative essay last week, The Long Run, in which he described the way planning horizons, and thus the ability to plan and execute long-term initiatives, had collapsed in the West. While he gave an astute description of how that was playing itself out in the Ukraine conflict, with the US and NATO […]

Pluralistic: Google's AI pricing plan (21 Jan 2026)






Google's Mountain View headquarters. The scene is animated: the building is quickly covered with price-tags ranging from 0.99 to 99999.99. In the final frames, 99999.99 tags cover all the other price tags. In the background, rising over the roof of the Googleplex like the rising sun, is the staring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.'

Google's AI pricing plan (permalink)

Google is spending a lot on AI, but what's not clear is how Google will make a lot from AI. Or, you know, even break even. Given, you know, that businesses are seeing zero return from AI:

https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/

But maybe they've figured it out. In a recent edition of his BIG newsletter, Matt Stoller pulls on several of the strings that Google's top execs have dangled recently:

https://www.thebignewsletter.com/p/will-google-organize-the-worlds-prices

The first string: Google's going to spy on you a lot more, for the same reason Microsoft is spying on all of its users: because they want to supply their AI "agents" with your personal data:

https://www.youtube.com/watch?v=0ANECpNdt-4

Google's announced that it's going to feed its AI your Gmail messages, as well as the whole deep surveillance dossier the company has assembled based on your use of all the company's products: Youtube, Maps, Photos, and, of course, Search:

https://twitter.com/Google/status/2011473059547390106

The second piece of news is that Apple has partnered with Google to supply Gemini to all iPhone users:

https://twitter.com/NewsFromGoogle/status/2010760810751017017

Apple already charges Google more than $20b/year not to enter the search market; now they're going to be charging Google billions not to stay out of the AI market, too. Meanwhile, Google will get to spy on Apple customers, just like they spy on their own users. Anyone who says that Apple is ideologically committed to your privacy because they're real capitalists is a sucker (or a cultist):

https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones

But the big revelation is how Google is going to make money with AI: they're going to sell AI-based "personalized pricing" to "partners," including "Walmart, Visa, Mastercard, Shopify, Gap, Kroger, Macy’s, Stripe, Home Depot, Lowe's, American Express, etc":

https://blog.google/products/ads-commerce/agentic-commerce-ai-tools-protocol-retailers-platforms/

Personalized pricing, of course, is the polite euphemism for surveillance pricing, which is when a company spies on you in order to figure out how much they can get away with charging you (or how little they can get away with paying you):

https://pluralistic.net/2025/06/24/price-discrimination/#

It's a weird form of cod-Marxism, whose tenet is "From each according to their desperation; to each according to their vulnerability." Surveillance pricing advocates say that this is "efficient" because they can use surveillance data to offer you discounts, too – say you rock up to an airline ticket counter 45 minutes before takeoff, and they can tell from surveillance data that you won't take their last empty seat for $200 but would fly in it for $100; then you could get that seat for cheap.

This is, of course, nonsense. Airlines don't sell off cheap seats like bakeries discounting their day-olds – they jack up the price of a last-minute journey to farcical heights.

Google also claims that it will only use its surveillance pricing facility to offer discounts, and not to extract premiums. As Stoller points out, there's a well-developed playbook for making premiums look like discounts, which is easy to see in the health industry. As Stoller says, the list price for an MRI is $8,000, but your insurer gets a $6,000 "discount" and actually pays $1,970, sticking you with a $30 co-pay. The $8,000 is a fake number, and so is the $6,000 – the only real price is the $30 you're paying.

The whole economy is filled with versions of this transparent ruse, from "department stores who routinely mark everything as 80% off" to pharmacy benefit managers:

https://pluralistic.net/2024/09/23/shield-of-boringness/#some-men-rob-you-with-a-fountain-pen

Google, meanwhile, is touting its new "universal commerce protocol" (UCP), a way for AI "agents" to retrieve prices and product descriptions and make purchases:

https://www.thesling.org/the-harm-to-consumers-and-sellers-from-universal-commerce-protocol-in-googles-own-words/

Right now, a major hurdle to "agentic AI" is the complexity of navigating websites designed for humans. AI agents just aren't very reliable when it comes to figuring out which product is which, choosing the correct options, putting it in a shopping cart, and then paying for it.

Some of that is merely because websites have inconsistent "semantics" – literally things like the "buy" button being called something other than "buy button" in the HTML code. But there's a far more profound problem with agentic shopping, which is that companies deliberately obfuscate their prices.

This is how junk fees work, and why they're so destructive. Say you're a hotel providing your rate-card to an online travel website. You know that travelers are going to search for hotels by city and amenities, and then sort the resulting list by price. If you hide your final price – by surprising the user with a bunch of junk fees at checkout, or, better yet, after they arrive and put their credit-card down at reception – you are going to be at the top of that list. Your hotel will seem like the cheapest, best option.

But of course, it's not. From Ticketmaster to car rentals, hotels to discount airlines, rental apartments to cellular plans, the real price is withheld until the very last instant, whereupon it shoots up to levels that are absolutely uncompetitive. But because these companies are able to engage in deceptive advertising, they look cheaper.

And of course, crooked offers drive out honest ones. The honest hotel that provides a true rate card, reflecting the all-in price, ends up at the bottom of the price-sorted list, rents no rooms, and goes out of business (or pivots to lying about its prices, too).
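The selection pressure described above can be sketched in a few lines of Python (the hotels, rates, and fee amounts are all hypothetical, invented for illustration): a comparison site that sorts by the advertised rate alone puts whichever seller hides the most in junk fees at the top, even when it is the most expensive option in reality.

```python
# Hypothetical hotels: advertised nightly rate vs. junk fees tacked on at checkout.
hotels = [
    {"name": "Honest Inn", "advertised": 180, "junk_fees": 0},  # true all-in rate
    {"name": "Resort-Fee Palace", "advertised": 120, "junk_fees": 95},
    {"name": "Surprise Suites", "advertised": 140, "junk_fees": 60},
]

# What the travel site's "sort by price" actually sorts: the advertised rate.
by_advertised = sorted(hotels, key=lambda h: h["advertised"])

# What the traveler actually pays: advertised rate plus the hidden fees.
by_all_in = sorted(hotels, key=lambda h: h["advertised"] + h["junk_fees"])

print(by_advertised[0]["name"])  # prints "Resort-Fee Palace": the biggest liar tops the list
print(by_all_in[0]["name"])      # prints "Honest Inn": the honest hotel is really the cheapest
```

The honest seller is cheapest in reality but lands last in the listing, which is exactly why truthful rate cards lose to deceptive ones.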

Online sellers do not want to expose their true prices to comparison shopping services. They benefit from lying to those services. For decades, technologists have dreamed of building a "semantic web" in which everyone exposes true and accurate machine-readable manifests of their content to facilitate indexing, search and data-mining:

https://people.well.com/user/doctorow/metacrap.htm

This has failed. It's failed because lying is often more profitable than telling the truth, and because lying to computers is easier than lying to people, and because once a market is dominated by liars, everyone has to lie, or be pushed out of the market.

Of course, it would be really cool if everyone diligently marked up everything they put into the public sphere with accurate metadata. But there are lots of really cool things you could do if you could get everyone else to change how they do things and arrange their affairs to your convenience. Imagine how great it would be if you could just get everyone to board an airplane from back to front, or to stand right and walk left on escalators, or to put on headphones when using their phones in public.

Wanting it badly is not enough. People have lots of reasons for doing things in suboptimal ways. Often the reason is that it's suboptimal for you, but just peachy for them.

Google says that it's going to get every website in the world to expose accurate rate cards to its chatbots to facilitate agentic AI. Google is also incapable of preventing "search engine optimization" companies from tricking it into showing bullshit at the top of the results for common queries:

https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse

Google somehow thinks that the companies that spend millions of dollars trying to trick its crawler won't also spend millions of dollars trying to trick its chatbot – and they're providing the internet with a tool to inject lies straight into the chatbot's input hopper.

But UCP isn't just a way for companies to tell Google what their prices are. As Stoller points out, UCP will also sell merchants the ability to have Gemini set prices on their products, using Google's surveillance data, through "dynamic pricing" (another euphemism for "surveillance pricing").

This decade has seen the rise and rise of price "clearinghouses" – companies that offer price "consulting" to direct competitors in a market. Nominally, this is just a case of two competitors shopping with the same supplier – like Procter and Gamble and Unilever buying their high-fructose corn-syrup from the same company.

But it's actually far more sinister. "Clearinghouses" like Realpage – a company that "advises" landlords on rental rates – allow all the major competitors in a market to collude to raise prices in lockstep. A Realpage landlord that ignores the service's "advice" and gives a tenant a break on the rent will be excluded from Realpage's service. The rental markets that Realpage dominates have seen major increases in rental rates:

https://pluralistic.net/2025/10/09/pricewars/#adam-smith-communist

Google's "dynamic pricing" offering will allow all comers to have Google set their prices for them, based on Google's surveillance data. That includes direct competitors. As Stoller points out, both Nike and Reebok are Google advertisers. If they let Google price their sneakers, Google can raise prices across the market in lockstep.

Despite how much everyone hates this garbage, neoclassical economists and their apologists in the legal profession continue to insist that surveillance pricing is "efficient." Stoller points to a law review article called "Antitrust After the Coming Wave," written by antitrust law prof and Google lawyer Daniel Crane:

https://nyulawreview.org/issues/volume-99-number-4/antitrust-after-the-coming-wave/

Crane argues that AI will kill antitrust law because AI favors monopolies, and argues "that we should forget about promoting competition or costs, and instead enact a new Soviet-style regime, one in which the government would merely direct a monopolist’s 'AI to maximize social welfare and allocate the surplus created among different stakeholders of the firm.'"

This is a planned economy, but it's one in which the planning is done by monopolists who are – somehow, implausibly – so biddable that governments can delegate the power to decide what we can buy and sell, what we can afford and who can afford it, and rein them in if they get it wrong.

In 1890, Senator John Sherman was stumping for the Sherman Act, America's first antitrust law. On the Senate floor, he declared:

If we will not endure a King as a political power we should not endure a King over the production, transportation, and sale of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

Google thinks that it has finally found a profitable use for AI. It thinks that it will be the first company to make money on AI, by harnessing that AI to a market-rigging, price-gouging monopoly that turns Google's software into Sherman's "autocrat of trade."

It's funny when you think of all those "AI safety" bros who claimed that AI's greatest danger was that it would become sentient and devour us. It turns out that the real "AI safety" risk is that AI will automate price gouging at scale, allowing Google to crown itself a "King over the necessaries of life":

https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space

(Image: Noah_Loverbear; CC BY-SA 3.0; Cryteria, CC BY 3.0; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Disney swaps stock for Pixar; Jobs is largest Disney stockholder https://web.archive.org/web/20060129105430/https://www.telegraph.co.uk/money/main.jhtml?xml=/money/2006/01/22/cnpixar22.xml&menuId=242&sSheet=/money/2006/01/22/ixcitytop.html

#20yrsago HOWTO anonymize your search history https://web.archive.org/web/20060220004353/https://www.wired.com/news/technology/1,70051-0.html

#15yrsago Bruce Sterling talk on “vernacular video” https://vimeo.com/18977827

#15yrsago Elaborate televised prank on Belgium’s terrible phone company https://www.youtube.com/watch?v=mxXlDyTD7wo

#15yrsago Portugal: 10 years of decriminalized drugs https://web.archive.org/web/20110120040831/http://www.boston.com/bostonglobe/ideas/articles/2011/01/16/drug_experiment/?page=full

#15yrsago Woman paralyzed by hickey https://web.archive.org/web/20110123072349/https://www.foxnews.com/health/2011/01/21/new-zealand-woman-partially-paralyzed-hickey/

#15yrsago EFF warns: mobile OS vendors aren’t serious about security https://www.eff.org/deeplinks/2011/01/dont-sacrifice-security-mobile-devices

#10yrsago Trumpscript: a programming language based on the rhetorical tactics of Donald Trump https://www.inverse.com/article/10448-coders-assimilate-donald-trump-to-a-programming-language

#10yrsago That time the DoD paid Duke U $335K to investigate ESP in dogs. Yes, dogs. https://www.muckrock.com/news/archives/2016/jan/21/duke-universitys-deep-dive-uncanny-abilities-canin/

#10yrsago Kathryn Cramer remembers her late husband, David Hartwell, a giant of science fiction https://web.archive.org/web/20160124050729/http://www.kathryncramer.com/kathryn_cramer/2016/01/til-death-did-us-part.html

#10yrsago What the Democratic Party did to alienate poor white Americans https://web.archive.org/web/20160123041632/https://www.alternet.org/economy/robert-reich-why-white-working-class-abandoned-democratic-party

#10yrsago Bernie Sanders/Johnny Cash tee https://web.archive.org/web/20160126070314/https://weardinner.com/products/bernie-cash

#5yrsago NYPD can't stop choking Black men https://pluralistic.net/2021/01/21/i-cant-breathe/#chokeholds

#5yrsago Rolling back the Trump rollback https://pluralistic.net/2021/01/21/i-cant-breathe/#cra

#1yrago Winning coalitions aren't always governing coalitions https://pluralistic.net/2025/01/06/how-the-sausage-gets-made/#governing-is-harder

#1yrago The Brave Little Toaster https://pluralistic.net/2025/01/08/sirius-cybernetics-corporation/#chatterbox

#1yrago The cod-Marxism of personalized pricing https://pluralistic.net/2025/01/11/socialism-for-the-wealthy/#rugged-individualism-for-the-poor

#1yrago They were warned https://pluralistic.net/2025/01/13/wanting-it-badly/#is-not-enough


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1010 words today, 11362 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


Now the Future Can Start


That’s a screen grab of an email we’re sending out for the MyTerms launch in London. Links:

Be there in a Zoom square. Or in old-fashioned reality.


Discourse & Datcourse


The Net as one big gaslight

Joan Westenberg: The Discourse is a Distributed Denial-of-Service Attack. Just one worthy pullquote: "The problem is structural. The total volume of things-you-should-have-an-opinion-about has exceeded our cognitive bandwidth so thoroughly that having careful opinions about anything has become damned-near impossible. Your attention is a finite resource being strip-mined by an infinite army of takes."

Hope Spring trains eternal

Bummed to see the Mets trade Jeff McNeil to the A's. At least he's back home in California. But this move by the Mets looks like a good one. 

Unrelated, sort of: After watching Brett Butler play for the Durham Bulls, I followed his major league career for all nineteen (!!) of his seasons as a leadoff hitter. Fun guy to watch.


AI and the Corporate Capture of Knowledge


More than a decade after Aaron Swartz’s death, the United States is still living inside the contradiction that destroyed him.

Swartz believed that knowledge, especially publicly funded knowledge, should be freely accessible. Acting on that, he downloaded thousands of academic articles from the JSTOR archive with the intention of making them publicly available. For this, the federal government charged him with a felony and threatened decades in prison. After two years of prosecutorial pressure, Swartz died by suicide on Jan. 11, 2013.

The still-unresolved questions raised by his case have resurfaced in today’s debates over artificial intelligence, copyright and the ultimate control of knowledge.

At the time of Swartz’s prosecution, vast amounts of research were funded by taxpayers, conducted at public institutions and intended to advance public understanding. But access to that research was, and still is, locked behind expensive paywalls. People are unable to read work they helped fund without paying private journals and research websites.

Swartz considered this hoarding of knowledge to be neither accidental nor inevitable. It was the result of legal, economic and political choices. His actions challenged those choices directly. And for that, the government treated him as a criminal.

Today’s AI arms race involves a far more expansive, profit-driven form of information appropriation. The tech giants ingest vast amounts of copyrighted material: books, journalism, academic papers, art, music and personal writing. This data is scraped at industrial scale, often without consent, compensation or transparency, and then used to train large AI models.

AI companies then sell their proprietary systems, built on public and private knowledge, back to the people who funded it. But this time, the government’s response has been markedly different. There are no criminal prosecutions, no threats of decades-long prison sentences. Lawsuits proceed slowly, enforcement remains uncertain and policymakers signal caution, given AI’s perceived economic and strategic importance. Copyright infringement is reframed as an unfortunate but necessary step toward “innovation.”

Recent developments underscore this imbalance. In 2025, Anthropic reached a settlement with publishers over allegations that its AI systems were trained on copyrighted books without authorization. The agreement reportedly valued infringement at roughly $3,000 per book across an estimated 500,000 works, coming at a cost of over $1.5 billion. Plagiarism disputes between artists and accused infringers routinely settle for hundreds of thousands, or even millions, of dollars when prominent works are involved. Scholars estimate Anthropic avoided over $1 trillion in liability costs. For well-capitalized AI firms, such settlements are likely being factored as a predictable cost of doing business.
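The reported figures above check out as a back-of-the-envelope calculation. A minimal sketch, assuming the roughly $3,000-per-book rate and 500,000-work estimate as reported (the actual settlement terms are more complex):

```python
# Sanity check of the reported Anthropic settlement arithmetic.
per_book = 3_000        # reported settlement value per infringed book, USD
works = 500_000         # estimated number of covered works
settlement = per_book * works

# Formats with thousands separators for readability.
print(f"${settlement:,}")  # $1,500,000,000 — consistent with the reported ~$1.5 billion
```

Set against the estimated $1 trillion-plus in avoided liability, that settlement amounts to well under one percent of the potential exposure, which is what makes it plausible to treat as a cost of doing business.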

As AI becomes a larger part of America’s economy, one can see the writing on the wall. Judges will twist themselves into knots to justify an innovative technology premised on literally stealing the works of artists, poets, musicians, all of academia and the internet, and vast expanses of literature. But if Swartz’s actions were criminal, it is worth asking: What standard are we now applying to AI companies?

The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.

The stakes extend beyond copyright law or past injustices. They concern who controls the infrastructure of knowledge going forward and what that control means for democratic participation, accountability and public trust.

Systems trained on vast bodies of publicly funded research are increasingly becoming the primary way people learn about science, law, medicine and public policy. As search, synthesis and explanation are mediated through AI models, control over training data and infrastructure translates into control over what questions can be asked, what answers are surfaced, and whose expertise is treated as authoritative. If public knowledge is absorbed into proprietary systems that the public cannot inspect, audit or meaningfully challenge, then access to information is no longer governed by democratic norms but by corporate priorities.

Like the early internet, AI is often described as a democratizing force. But also like the internet, AI’s current trajectory suggests something closer to consolidation. Control over data, models and computational infrastructure is concentrated in the hands of a small number of powerful tech companies. They will decide who gets access to knowledge, under what conditions and at what price.

Swartz’s fight was not simply about access, but about whether knowledge should be governed by openness or corporate capture, and who that knowledge is ultimately for. He understood that access to knowledge is a prerequisite for democracy. A society cannot meaningfully debate policy, science or justice if information is locked away behind paywalls or controlled by proprietary algorithms. If we allow AI companies to profit from mass appropriation while claiming immunity, we are choosing a future in which access to knowledge is governed by corporate power rather than democratic values.

How we treat knowledge—who may access it, who may profit from it and who is punished for sharing it—has become a test of our democratic commitments. We should be honest about what those choices say about us.

This essay was written with J. B. Branch, and originally appeared in the San Francisco Chronicle.
