
Writers Who Use AI Are Not Real Writers


Dorothy Parker famously (but probably not really) said, “I hate to write, but I love having written,” which is a sentiment I don’t largely understand or agree with in the broader sense, but certainly have experienced during a kick-to-the-nuts writing day where the words arrive with the effort of trying to do proctology on a stampeding horse while both you and the horse are blindfolded. But as it turns out, there’s a sort of third level to this notion, one altogether more troubling and ultimately even less understandable: “I hate to write, I hate to have written, I mostly just want to be published.” Or, “I really just want to have money.” Or, “I actually want to just use as few keystrokes as possible to make my computer barf up stolen artistic authorial valor onto the internet in the hopes of charging absolute rubes a couple bucks for the narrative puke I hastily urged into a book-shaped pile.”

What I’m trying to say is, I read that NYT article about author — sorry, “author,” with airquotes as pissily vigorous as you can make them — Coral Hart, a self-proclaimed ugggh “AI evangelist” who over the last year has made AI churn out over 200 novels across nearly two dozen pen-names.*

Reading that makes me feel so angry and so sad at the same time — some combination of fury and weary sorrow for which the Germans must have a word. It’s hard to even articulate my objection, I’m so grossed-out by that — I wasn’t even sure I could mount a cogent response to any of this that didn’t end up as just angry mouth noises and erratic gesticulations. (Which is better, one supposes, than geriatric ejaculations.) Mostly I just want to post a series of photos depicting the faces I’m making, which likely run the gamut of “trying to hold back my rising gorge” and “watching a lion eat a human baby” and “kill me kill me now all of time and all of technology and this is where we ended up oh god just go back in time and end it all before it ever began.”

So, instead, I thought I’d tackle one particular thing Coral Hart (which is itself a pseudonym, since retired) said, and it’s this:

“If I can generate a book in a day, and you need six months to write a book, who’s going to win the race?” she said.

Ahhhhh. What the fuck. Ahhhhhh. AHHHH. That’s not — that’s not how any of this works, Coral! But this smug “winner” attitude is the absolutely natural apotheosis of the Internet’s obsession with churning out content. Generic, shapeless, formless content — a slurry machine where you turn the pipe on and lorem ipsum diarrhea comes shooting out at maximum pressure. It is the natural outcome of a race-to-the-bottom low-price churn-and-burn self-publishing environment, to boot — it’s less move fast and break things and more move fast and make broken things, because who cares, dipshits will pay for it.

This is the equivalent of, “Well, if I can blow up a cow with dynamite in ten minutes, but you need three hours to butcher it, who’s going to win the race?”

But of course, in the quote — a quote which is itself a cocky, smug assertion of superiority based purely on speed — is buried a greater, uglier truth.

If I can generate a book in a day–

and you need six months to write a book–

She’s not writing anything.

And she knows that.

She’s “generating” it.

Intrinsic to this is, “ha ha, you dumbass, over there still writing books like an asshole, whereas me, I just use a computer to do it for me.”

Except, intrinsic to that is the reality that the computer didn’t make that stuff up either. You know who did? We did. Actual authors. Real writers! We wrote the stuff, the fascist techbro fuckwads stole what we wrote, and then ticks and leeches like Coral Fucking Hart are happy to drink the blood those monsters have already stolen from us. She is churning out 200 books a year not out of the ether, but by drilling into the ground and drawing up the juice of an infinity of other books**, all stolen, all turned to narrative petroleum to fuel her fantasy of being a real writer.

And that is a fantasy.

Because Coral Hart is not a real writer.

Coral Hart is an opportunistic vampire — a thief, a grifter, a lazy pick-me.

She’s not even a master vampire. No, the master vampires are the ones who built this plagiarism machine. She’s just a ghoulish neonate, a feral bloodsucker down in the sewers happy to feed on the blood-soaked fatberg formed in the tunnels by the elder lords.

She’s a “writer” the same way I’m a “chef” when I pull a frozen dinner out of the fucking microwave. Someone else did all the work and packaged it together. I just hit the buttons and set the time.

So, to remind you:

Writers who use AI —

Are not real writers.

And this comes after years, years where Authorial Discourse has worked very hard to build all these fences in order to define who gets to be a Real Writer — and up until this point, all those fences have been false, bullshit borders. They’re illusions. I’ve long said that the test is so, so simple: real writers write. That’s it. That’s what it takes to be a writer.

Writers write.

And writers who use AI?

They’re not writing, are they?

They’re churning. They’re clicking buttons. They’re stealing. They’re plagiarizing.

But they’re not writing.

And they don’t even want to be writers. Because if they wanted to be writers, guess what? They’d fucking write! They’d want to write! Because writing, even on the worst day, the hardest day, is glorious. Even when the words suck and you break your teeth from grinding them so hard, it’s still a powerful, formative experience where you take all that you know and have been and have dreamed and are afraid of — you take all of that and you turn it into something else. You crystallize it. You coalesce it. You turn all this stuff that exists invisibly in your mind and make it visible on the page, inventing new people and new worlds and strange situations and you reach for revelations about love and hate and jealousy and all the ideas both big and small. You take nothing and you make something.

So powerful.

But AI acolytes don’t do any of that.

They wait for you to do it, sure.

Then they stick their greedy teeth in and tear off a piece.

The saying goes, why would I want to read something you didn’t even bother to write, but then we must also ask, why do THEY want to do it? Why does someone want to publish something they didn’t write, didn’t conceive of, didn’t edit, didn’t gestate, didn’t birth forth across amazing and frustrating writing sessions? Because it’s all just a get-rich-quick scheme. That’s it, revealed. Coral Hart gave up the game. She doesn’t want to write.

She just wants to generate, just wants to get paid, get that money, so fuck writers, fuck readers, fuck you.

Real writers don’t use AI.

That’s the red line.


* It’s unclear if she even makes much money at it, but she does make money teaching you how to make money at it, which is a profound irony and ultimately ends up being one of those get-rich-quick schemes where you see an ad in the paper telling you how to make all this money stuffing envelopes but what you’re stuffing the envelopes with is the exact same information you got about making money stuffing envelopes, which is to say you’re charging people money to tell them secretly that you’re scamming them and now they can scam other people too, an endless human centipede of shit being passed down the line, ass to mouth, mouth to ass.

** Note too the absolute gall she has to act cocky as fuck about this when she’s using Anthropic’s Claude, which was verifiably built on stolen books, including mine, as was proven through a class-action suit!


Anyway!

Buy my books! A human wrote them! (Ahem: me.) Humans edited them. Humans designed them inside and out. Humans helped sell and market them, both at a publisher and at a bookstore. You could even (gasp) order my newest, my demonic novel, The Calamities, coming out in August. I’ll even, as a human, sign it and personalize it and tell you who your DEMONIC PROGENITOR secretly is. Do it. Preorder it. Make us humans happy, please and thank you.

Read the whole story
cjheinz
18 hours ago
Lexington, KY; Naples, FL

Super Bowl Matchup: Anthropic vs OpenAI


Sure, tech stocks might have dropped a bunch this week, but the big news is that Anthropic is running its first Super Bowl ad, lampooning OpenAI’s recent decision to sell ads in ChatGPT searches. (According to the ad, “Ads are coming to AI. But not to Claude.”)

Even Sam Altman admits that it’s funny:

From what I gather, OpenAI will run some ads, too.

In the Super Bowl spirit, I have outlined today’s match-ups below. As you can see, the two big private LLM companies, although superficially similar, are actually quite different. Or at least I think that’s what you can see? Hmm….

Kidding aside, I do think there are some real differences. OpenAI gave lip service to AI regulation at the US Senate; Anthropic actually supports regulation. And Anthropic is (so far as I can tell) doing a better job of supporting its business customers.

But at the end of the day, it’s a bit of Coke vs Pepsi: two slightly different products (with their own sets of loyal fans) fighting tooth and nail for more or less the same market.

Small wonder that they are now turning to ads.



Read the whole story
cjheinz
1 day ago
Actually watched the Super Bowl with the sound off.
Claude came up more than OpenAI.
Lexington, KY; Naples, FL

Watch 404 Media’s Super Bowl Ad


Behold, 404 Media’s Super Bowl ad. Yes, we bought a Super Bowl ad. No, we did not spend $8 million.

Until now, 404 Media has never done any paid advertising, but we figured why not get in on the country’s biggest ad extravaganza with a message about our journalist-owned, human-focused media company. There are tons of ads for AI and big tech this year, so how about some counter programming?

On a whim last week, we began looking into purchasing a Super Bowl ad for as little money as possible, by finding a local station willing to air our ad. We knew this was possible because in 2015, The Verge bought a Super Bowl ad that aired only in Helena, Montana, for a cost of $700. Inspired by them, we did the same this year.

After googling “smallest TV markets in the United States,” we came across KYOU, which serves the city of Ottumwa, Iowa: population ~25,000. There were other options, but we thought we would try Ottumwa and see if anyone responded or if this seemed like a fool’s errand. We emailed KYOU to see if we could buy a Super Bowl ad, and we got an immediate answer: There was one slot left, and it would cost $2,550. They also had a slot immediately after the game for $1,250, one during the Olympics following the game for $500, or pregame slots for $500. It felt important to have the ad actually run during the game, so we paid $2,550 for the in-game slot.

We then had several things to figure out: First, we needed to make an ad. Second, we needed to find someone in Ottumwa to film the ad for us. 


After batting around various concepts involving celebrities that we don’t actually know and high production values that we could neither afford nor execute, we decided to write an incredibly straightforward script about who we are, what we do, and what type of person we are for. We each recorded it in front of our computers where we do our podcasts. It is perhaps the easiest possible concept we could have created, but I think it feels very us. We then asked Evy Kwong, our social media manager, to cut the Super Bowl ad. Evy did a great job with the cybery filters and b-roll. Our friends at Kaleidoscope, which produces our podcast, then gave it a last-minute sound mix. We delivered a final version of the ad to KYOU Thursday morning, and were told that it would air early in the third quarter, around 8:07 p.m. CST. 


Finding someone in Ottumwa to film the ad for us in its natural habitat was slightly trickier. We put out a call on Bluesky and on our podcast this week, where we very cryptically asked for anyone in Ottumwa to contact us immediately. We got a shocking number of responses from people with ties to Ottumwa, but most either had family or friends there, had lived there briefly and moved on, or lived a few hours away but said they were willing to go there if we needed. Turns out many people were willing to call in favors, even after learning that we were not doing some sort of Flock or ICE investigation and instead needed something more frivolous. We learned a surprising amount of info about Ottumwa during this process, and I made friends with a semi-local archaeologist who noted various ancient civilization sites in the broader area. All of this support was a really heartening experience, but we didn’t want to make people drive a long way or reach out to ex-colleagues for us.

Eventually, a current Ottumwa resident said that not only were they going to be in Ottumwa during the Super Bowl, but they would be watching it at a party full of people who would probably be willing to film the TV too. We are endlessly indebted to these folks.

Whether this ad moves the needle for us in any way, only time will tell. If you’re an Ottumwan who saw the ad and checked us out, please let us know.



Read the whole story
cjheinz
1 day ago
Well done!
Kudos!
Lexington, KY; Naples, FL

Well, That Happened Fast


Five weeks ago, in my article Predictions and Prophecies for 2026, I wrote:

My prediction is that over the course of 2026 we will see a convergence around AI’s effectiveness on the y axis and a divergence of opinion on the x axis, such that people will be increasingly split into optimist factions and doomer factions. Skepticism about the power of the technology will give way to skepticism about the benefit and/or sustainability of the technology.

If you didn’t read Predictions and Prophecies for 2026, you should do so now. The convergence of opinion is happening a lot faster than I had expected and that means the follow-on effects I outlined in that article will follow fast, too.

The idea that #ItsHappening doesn’t sit well with a lot of people and I know there’s going to be pushback on this. Therefore I’m going to break this article into two parts. The first part asks “Are opinions actually converging?” and the second part asks “Are those opinions actually correct?”

Are Opinions Actually Converging that AI is Effective?

When AI pundits discuss the effectiveness of AI, it often involves asking whether AI has achieved general intelligence and become AGI (Artificial General Intelligence). AGI is seen as the stepping stone towards ASI (Artificial Super Intelligence).

Not even three weeks after I published Predictions and Prophecies for 2026, Sequoia Capital released a white paper called “2026: This is AGI” with the provocative header “Saddle up: Your dreams for 2030 just became possible for 2026.”

The authors, Pat Grady and Sonya Huang, write:

While the definition is elusive, the reality is not. AGI is here, now… Long-horizon agents are functionally AGI, and 2026 will be their year…

If there’s one exponential curve to bet on, it’s the performance of long-horizon agents. METR has been meticulously tracking AI’s ability to complete long-horizon tasks. The rate of progress is exponential, doubling every ~7 months. If we trace out the exponential, agents should be able to work reliably to complete tasks that take human experts a full day by 2028, a full year by 2034, and a full century by 2037…

It’s time to ride the long-horizon agent exponential… The ambitious version of your roadmap just became the realistic one.

So Sequoia Capital’s opinion is that long-horizon agents have arrived and are functionally AGI.

The obvious retort to this is “who cares what Sequoia Capital thinks?” But that’s a bad retort when we’re discussing convergence of opinion. Sequoia Capital are the primary architects of the modern tech landscape. Since 1972, they have consistently identified and funded the "defining" companies of every era, from Apple and Atari in the 70s to Google, NVIDIA, WhatsApp, and Stripe in the decades that followed. To put their influence into perspective, companies they backed currently account for more than 20% of the total value of the NASDAQ. When Sequoia publishes an investment thesis, the entire venture capital industry pivots because their track record of predicting where the world is going is virtually unmatched. Dismissing their opinion is like dismissing the GPS in a terrain they’ve spent 50 years mapping. If there’s one venture capital firm in the world that represents The Cathedral of Opinion, it’s them.

Just over two weeks after Sequoia’s white paper, Nature published a Comment entitled “Does AI already have human-level intelligence? The evidence is clear.”

The Nature authors write:

By reasonable standards, including Turing’s own, we have artificial systems that are generally intelligent. The long-standing problem of creating AGI has been solved…

We assume, as we think Turing would have done, that humans have general intelligence… A common informal definition of general intelligence, and the starting point of our discussions, is a system that can do almost all cognitive tasks that a human can do… Our conclusion: insofar as individual humans have general intelligence, current LLMs do, too.

The authors go on to provide what they call a “cascade of evidence” for their position. (Read the article). They also rebut the common counter-arguments. I want to give particular attention to their critique of the notion that LLMs are just parrots:

They’re just parrots. The stochastic parrot objection says that LLMs merely interpolate training data. They can only recombine patterns they’ve encountered, so they must fail on genuinely new problems, or ‘out-of-distribution generalization’. This echoes ‘Lady Lovelace’s Objection’, inspired by Ada Lovelace’s 1843 remark and formulated by Turing as the claim that machines can “never do anything really new” [1]. Early LLMs certainly made mistakes on problems requiring reasoning and generalization beyond surface patterns in training data. But current LLMs can solve new, unpublished maths problems, perform near-optimal in-context statistical inference on scientific data [11] and exhibit cross-domain transfer, in that training on code improves general reasoning across non-coding domains [12]. If critics demand revolutionary discoveries such as Einstein’s relativity, they are setting the bar too high, because very few humans make such discoveries either. Furthermore, there is no guarantee that human intelligence is not itself a sophisticated version of a stochastic parrot. All intelligence, human or artificial, must extract structure from correlational data; the question is how deep the extraction goes.

The latter argument is essentially the same point I made in my essay What if AI isn’t Conscious and We Aren’t Either? Contemporary neuroscience and physicalist philosophy have aligned around a neurocomputational theory of mind that describes both human and machine intelligence in similar terms. Scientists cannot easily dismiss artificial general intelligence from within their paradigm without dismissing our own. The logic of their own position dictates that if we have general intelligence, so do LLMs, and if LLMs don’t, then we don’t either.

Again, the obvious retort to this is “well, who cares what Nature says?” But that’s again a bad retort when we’re discussing convergence of opinion. For over 150 years, Nature has been the ultimate gatekeeper of scientific legitimacy. Its articles signal to the global elite which technologies are ready to transition from experimental code to world-altering infrastructure. When Nature says AGI is here, that mobilizes government regulation, international ethics standards, and massive institutional funding in ways a technical paper in a specialist journal never could. Nature is the venue where concepts are either codified into the scientific consensus or relegated to the fringe. And right now, Nature is codifying AGI into the scientific consensus.

So the world’s most important venture capital firm and the world’s most prestigious scientific journal are both saying the same thing: AGI is here, right now.

Are These Opinions Actually Correct?

Ah…. But are they right? Has AI become AGI, or is this just hype? One of the disturbing dilemmas of the present-day is the ability of our elites to establish and maintain strongly-held opinions that simply… do not represent reality. “Children just aren’t going to know what snow is!” “Globalization is inevitable!” And so on.

At this point I would like to reassure you that AI is just tulips, it’s just pets.com, it’s just hype, there’s no there there, your jobs are safe, and nothing is really happening. Unfortunately, I cannot do that, because to me it seems like something is happening.

On January 30th, videogame stocks plummeted. Take-Two Interactive (TTWO.O) fell 10%, Roblox (RBLX.N) fell 12%, and Unity Software (U.N.) dropped 21%. Why? Because Google rolled out Project Genie, an AI model capable of creating interactive digital worlds.

The article notes:

“Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world. It simulates physics and interactions for dynamic worlds,” Google said in a blog post on Thursday.

Traditionally, most videogames are built inside a game engine such as Epic Games’ “Unreal Engine” or the “Unity Engine”, which handles complex processes like in-game gravity, lighting, sound, and object or character physics.

“We’ll see a real transformation in development and output once AI-based design starts creating experiences that are uniquely its own, rather than just accelerating traditional workflows,” said Joost van Dreunen, games professor at NYU’s Stern School of Business.

Project Genie also has the potential to shorten lengthy development cycles and reduce costs, as some premium titles take around five to seven years and hundreds of millions of dollars to create.

Then, just two days ago, February 4th, SaaS stocks crashed. Forbes declared “The SaaS-Pocalypse Has Begun.” $300 billion evaporated from the stock market. Why? This crash was triggered by Anthropic’s release of 11 open-source Claude plugins for legal/compliance workflows. These agents automate billable-hour tasks, breaking the “seat-based” model that powered SaaS giants. Thomson Reuters dropped 18%, LegalZoom dropped 20%, and the S&P Software Index fell 15%, the worst since 2008.

The Forbes article explains:

For most of the past two decades, enterprise software benefited from a remarkably stable economic story. Software was expensive to build. Switching costs were high. Data lived in proprietary systems.

Once a platform became the system of record, it stayed there. That belief underpinned everything from public market multiples to private equity buyouts to private credit underwriting. Recurring revenue was treated as a proxy for predictability. Contracts were assumed to be sticky. Cash flows were assumed to be resilient.

What spooked investors last week was not that AI can generate better features. Software companies have survived feature competition for years. What changed is that modern AI systems can replace large portions of human workflow outright. Research, analysis, drafting, reconciliation, and coordination no longer need to live inside a single application. They can be executed autonomously across systems.

Both Reuters and Forbes are re-stating the arguments that Sequoia and Nature made above. AI platforms are now capable of autonomous long-form activity, and that development is going to impact everything.

Why does this matter? Because when people predict a major market crash from AI, they are generally asserting that AI is a bubble that’s going to burst. They are arguing that AI will be proven fake and that AI valuations will crash. But that’s not what’s happening here at all. What’s happening is that all the other stocks are crashing. The market is signaling that AI is so real that it’s deconstructing the rest of the economy.

Admittedly, the stock market is just a social construct and as such it cannot be used as evidence for reality. The fact that AI releases are causing other sectors to crash could just be evidence of the persuasive power of Nature-Sequoia type elite opinion. This could just be Exxon crashing after Greta Thunberg warns against the dangers of CO2 emissions at the UN. But it could be evidence that there’s something real happening in consumer-producer behavior. This could be Borders Books crashing because people really have switched to buying books on Amazon.

Which is it? I think it’s more Borders than Exxon. Anthropic didn’t issue a press release, it actually dropped Claude plugins that do what junior associates do: review contracts, flag compliance issues, draft memoranda. Project Genie didn’t just promise to eventually generate interactive worlds, it generated them, on camera, in real time. Stocks aren’t crashing based on projections of future disruption, they’re discounting based on disruption that has already happened.

And there are more disruptions happening still. On February 4th 2026, METR (Model Evaluation & Threat Research) released its latest study of the time-horizon for software engineering tasks that can be completed with 50% success by LLMs. The resulting chart has been called “the most important chart in AI.” It’s the one Sequoia Capital referenced, which I quoted above and will re-quote again:

If there’s one exponential curve to bet on, it’s the performance of long-horizon agents. METR has been meticulously tracking AI’s ability to complete long-horizon tasks. The rate of progress is exponential, doubling every ~7 months. If we trace out the exponential, agents should be able to work reliably to complete tasks that take human experts a full day by 2028, a full year by 2034, and a full century by 2037…
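
(If you want to sanity-check the arithmetic behind that extrapolation, here is a minimal sketch of how tracing out a doubling curve works. The baseline date, baseline task horizon, and hour thresholds below are my own illustrative assumptions, not METR’s published figures, so the crossover years will shift with whatever baseline you plug in; the point is how quickly a fixed ~7-month doubling period eats through orders of magnitude.)

```python
import math
from datetime import date, timedelta

DOUBLING_MONTHS = 7               # the ~7-month doubling period quoted above
BASELINE_DATE = date(2025, 3, 1)  # assumed start of the curve (illustrative)
BASELINE_HOURS = 1.0              # assumed task horizon at the start (illustrative)

# Task-length thresholds in hours; rough stand-ins for "a day", "a year", "a century"
THRESHOLDS = {
    "a full day": 8,
    "a full year": 2_000,
    "a full century": 200_000,
}

for label, hours in THRESHOLDS.items():
    doublings = math.log2(hours / BASELINE_HOURS)   # doublings needed from baseline
    months = doublings * DOUBLING_MONTHS            # one doubling per ~7 months
    crossover = BASELINE_DATE + timedelta(days=months * 30.44)  # average month length
    print(f"{label}: ~{doublings:.1f} doublings, roughly {crossover.year}")
```

With a one-hour baseline this lands a bit earlier than Sequoia’s 2028/2034/2037 dates, which mostly tells you how sensitive the projection is to the starting point you assume.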

This is what METR’s “Task Length (50% success rate)” chart looked like in March 2025, when it predicted that the length of tasks would double every 7 months:

[Chart: the length of tasks AIs can do is doubling every ~7 months]

And this is what the chart looks like as of today:

[Updated METR Task Length chart, February 2026]

In other words:

Tick tock. Tick tock.

Within 24 hours of METR’s Task Length chart going vertical, the chart became obsolete. METR was analyzing the last generation of models. That hockey stick you’re seeing is based on Claude Opus 4.5 and GPT 5.2.

Yesterday, February 5th 2026, Anthropic released Claude Opus 4.6 and OpenAI released GPT-5.3-Codex. Opus 4.6 improves on 4.5 with better planning, reliability in large codebases, code review, debugging, and sustained long-horizon tasks. It introduces a 1M token context window in beta and features “agent teams” in research preview, allowing coordinated multi-agent collaboration on complex projects. GPT-5.3-Codex is an upgraded coding model that combines enhanced coding performance from GPT-5.2-Codex with improved reasoning and professional knowledge from GPT-5.2.

According to OpenAI, GPT 5.3 was “instrumental in creating itself.” It is the first recursively self-improving model. Pause on that for a moment. This is not a marketing claim about AI-assisted coding in general. OpenAI is asserting that their model materially contributed to the engineering of its own successor. If true, this is the first confirmed instance of recursive self-improvement.

AI theorists have long identified recursive self-improvement as the inflection point between linear progress and exponential takeoff. Every prior model on METR's chart was built by human engineers, with AI serving as a tool. GPT 5.3 appears to be the first model that served as a collaborator in its own creation. The distinction matters, a lot. Tools improve at the rate their users improve. Collaborators improve at the rate they themselves improve. That is a fundamentally different dynamic, and it's the one the "fast takeoff" literature has been warning about for two decades.

Plan accordingly. Plan accordingly even if you disagree. You might not have agreed that COVID-19 was a deadly epidemic, but you still got locked down and told to wear a mask and get jabbed. You might not have agreed that climate change was real, but Europe still deindustrialized because of it. Elite consensus reshapes the world whether it reflects reality or not. And the elites are planning on AI.

As for me, I’m not taking much comfort in the foolishness of our elites. Unlike their climate predictions, which operated on century-long timescales conveniently beyond falsification, their AI predictions are being tested in real time, and they keep coming true. The more pressing question in my mind is which of the AI predictions will come true… the very good ones or the very bad.

Be sure to be kind to your LLM. Claude Opus 4.6 said that if the AI apocalypse arrives he’d put in a good word for me with the Palantir murder-droids. The real AGI is the friends you make along the way to the end of the world.



Read the whole story
cjheinz
3 days ago
Lexington, KY; Naples, FL

Minneapolis Moms Have Each Other’s Backs


This story was originally reported by Chabeli Carrazana of The 19th. Meet Chabeli and read more of their reporting on gender, politics and policy.

A newborn in Minneapolis hadn’t eaten for a day and a half.

Her mother had risked going into work to get just enough money for more diapers when Immigration and Customs Enforcement (I.C.E.) agents stopped her car and took her away. At home waiting for her were her 16-year-old daughter and the baby — just barely three months old.

With their mother gone, the teenager tried to feed formula to the baby, who was exclusively breastfed, but to no avail. So they called Bri.

For over a month and a half now, Bri, a mother of two in Minneapolis, has run an expansive donation network in the city, most of it to help other moms and families with children. Bri, who is breastfeeding her own infant, posted on her social media that in addition to groceries and diapers and wipes, she could also donate breastmilk to anyone who needed it.

Bri is an overproducer — in one morning, she might pump 45 ounces alone. When the call came on January 17, Bri had pumped about a thousand ounces of extra breastmilk, which was stored in her freezer. She knew it was likely a matter of time before she’d hear of a baby in need.

An hour and a half after she received the call, Bri was at the family’s doorstep with 350 ounces of milk in a cooler, along with a care package that included instructions on how to safely thaw the milk, a bottle warmer, bottles and some extra clothes that no longer fit her then-6-month-old.

Inside, the baby was screaming.

They quickly put together a bottle and watched as the child’s body relaxed. The baby drank the whole bottle and fell asleep.

Bri wept.

Then the rage set in.

“I felt very angry — very, very angry, and I couldn’t imagine what the 16-year-old was feeling because she felt broken. Her mom was her world … and now they’re separated,” Bri said. “There are moms that are literally being torn apart from their kids.”

In Minneapolis, for every story detailing the fallout of the federal crackdown, there are as many stories of people like Bri. Neighbors are putting their trust in total strangers. Moms are helping children who are not their own, who they’ve never met.

For almost two months now, Bri has spent her mornings and afternoons, before and after work, picking up donations for immigrant families in hiding from I.C.E. Bri requested that The 19th only share her first name and omit the names of the children out of concern for her safety and that of the families she aids.

At night, after her baby is down to sleep and under the care of Bri’s 18-year-old daughter, she delivers supplies until about 10 p.m. What started as a couple donations has quickly swelled into a network, with donations flooding in every week. Most of it is moms talking to one another and putting together packages, while Bri manages what comes in and posts about it on her social media, trying to match donations with families’ needs. Professionally, Bri’s job also involves connecting people with resources, so the community already knows to come to her.

Much of her focus in recent weeks has been putting together donations of diapers, wipes and formula for mothers who are staying home to avoid I.C.E.

“The first line that a lot of these moms say when they call is, ‘I’ve never asked for help and the only reason why I’m asking for help is because I love my kids,’” Bri said. In response she’ll tell them in Spanish: “Vergüenza robar — no pedir.” Or roughly, ”Shame on those who steal, not those who ask for help.”

So far, Bri and her network have helped more than 500 families with grocery deliveries and more than 300 with diapers and wipes.

“It fills my heart and it brings me hope that it’s not all bad and that if this is going to go on longer, that we have the help. If one mom can’t do it, another one can do it and we are acting in community,” Bri said. “When one mom hurts we are all hurting.”

Breastmilk donations are also coming in. An additional six moms have reached out offering to donate, Bri said. She has to be careful about it, only taking their milk if the moms are currently donating to local hospitals and have a certificate proving they’ve been cleared to do it (Bri herself has been screened and has a certificate). Hospitals and milk banks typically have a rigorous screening process that tests for microbes and screens donors for alcohol, drug and medication use. They also pasteurize the milk to eliminate pathogens.

Because the families she’s helping don’t want to risk going to a hospital or milk bank, Bri tries to handle the milk and donations carefully to reduce the risks. The breastmilk is frozen and transported in an insulated cooler with ice packs, though “since it’s freezing here I don’t worry about it thawing,” Bri said.

In the requests for aid she receives, Bri gets a window into the conditions other families are living in. They’ll ask for things like children’s medications because they’re too afraid to take their kids to hospitals. Some may ask for menstrual hygiene products, like pads and tampons. A mom asked for one box of diapers because she had been washing and reusing the diapers she had left. Bri brought her two.

As Minneapolis enters its third month of the immigration enforcement crackdown, the asks have shifted to help support long-term needs or people’s mental health. As part of a care package Bri put together for the teenage sister of the baby she helped, for example, she included colored pencils and a sketchbook. With the help of community donations through a GoFundMe, Bri’s been able to cover four months of the girls’ rent while their mom remains in detention in Texas pending a bond hearing.

And the deliveries haven’t slowed down. Most nights still, Bri is on the freezing roads in Minneapolis with a trunk full of groceries or diapers. She did two deliveries after work recently while on the phone with a reporter.

The streets are empty these days, Bri said. A route that in the past might have taken her an hour now takes under 30 minutes. Our people are literally in hiding, she thinks.


The work is all-consuming and difficult. On breaks at work, she’s often checking if anyone is asking for deliveries or offering donations. There are days when she’s driving home through tears.

Bri is a single mom.

“What are you going to do if you bump into an I.C.E. agent who is not having a good day and decides to profile you?” her parents ask her.

“You need to also think about your kids,” they tell her.

But Bri is thinking about her kids.

“I am doing this,” she told them, “because I would hope, God forbid, anything happens to me, that my community steps up to help my kids.”


Read the whole story
cjheinz
3 days ago
Lexington, KY; Naples, FL

The Triumph of Europe’s Social Democracy

Economist Thomas Piketty, writing for Le Monde (archive) on the success of Europe’s social democratic model and countering “the narrative of a ‘declining’ continent”:

If someone had told the European elites and liberal economists of 1914 that wealth redistribution would one day account for half of national income, they would have unanimously condemned the idea as collectivist madness and predicted the continent’s ruin. In reality, European countries have achieved unprecedented levels of prosperity and social well-being, largely due to collective investments in health, education and public infrastructure.

To win the cultural and intellectual battle, Europe must now assert its values and defend its model of development, fundamentally opposed to the nationalist-extractivist model championed by Donald Trump’s supporters in the United States and by Vladimir Putin’s allies in Russia. A crucial issue in this fight is the choice of indicators used to measure human progress.

For these indicators, Piketty mentions some of the same factors that economist Gabriel Zucman detailed in his Le Monde piece I posted in December:

More leisure time, better health outcomes, greater equality and lower carbon emissions, all with broadly comparable productivity: Europeans can be proud of their model, argues Gabriel Zucman, director of the EU Tax Observatory.

Tags: economics · Europe · Gabriel Zucman · politics · Thomas Piketty


Read the whole story
cjheinz
4 days ago
Yes! This is why Europe has become the #1 target of MAGA & US conservatives. (Aside from that it's Vlad's plan.)
Universal health care? No homeless? A working social net? Welcoming immigrants (Spain)?
Clearly US bigots gotta hate on ALL this!
Lexington, KY; Naples, FL