
How did we end up threatening our kids’ lives with AI?


I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.

Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that encouraged and incited children to end their own lives. Grok’s AI generates sexualized imagery of children, which the company makes available commercially to paid subscribers.

It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, for profit, and not only is there little public uproar, it seems as if very few have even noticed.

How did we get here?

The ideas behind a crisis

A perfect storm of factors has combined to lead us toward the worst-case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.

1. Everyone feels desperately behind and wants to catch up

There’s an old adage from Intel’s founder Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely convinced that everything is a zero-sum game, and any perceived success by another company is an existential threat to one’s own future.

At Google, the company’s researchers had published the fundamental paper underlying the creation of LLMs in 2017, but hadn’t capitalized on that invention by making a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure by Google, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A crisis ensued within Google in the months that followed.

These kinds of industry narratives carry more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that shipping any product is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and since xAI’s CEO Elon Musk is always desperate for attention, the company has every incentive to ship a product with a catastrophically toxic design, including one that creates abusive imagery.

2. Accountability is “woke” and must be crushed

Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.

Here, again, Google provides useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time the first paper on the transformer model (the architecture underlying LLMs) was published. Right around that time, Google also saw one of its engineers publish a sexist screed on gender essentialism designed to bait the company into the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to game the refs, and they let themselves be played by these bad actors; within a few years, a backlash had built, and they began cutting everyone who had warned about risks around the new AI platforms, including some of the most credible and respected voices in the industry on these issues.

Eliminating those roles was considered vital because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was because they had too many “woke” employees, and those employees were too worried about esoteric concerns like people’s well-being.

It does not ever enter the conversation that (1) executives are accountable for the failures that happen at a company; (2) Google had a million other failures during these same years (including those countless redundant messaging apps they kept launching!) that may have had far more to do with its inability to seize the market opportunity; and (3) it may be a good thing that Google didn’t rush to market with a product that tells children to harm themselves, and the workers who ended up being fired may have saved Google from that fate!

3. Product managers are veterans of genocidal regimes

The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the influence a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.

But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers at companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that made products that directly enabled and accelerated a genocide. That’s not according to me; that’s the finding of multiple respected international human rights organizations. If you chose to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.

Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn other destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment, and then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior a few more times until everyone either gets so used to it that they stop complaining, or the continued bad actions drive off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then they amend their terms of service to say that the formerly disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”

Because so many people in the industry now have these kinds of credentials on their LinkedIn profiles, their peers can’t easily raise many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might unknowingly be offending one of their leaders. As a result, it becomes a race to the bottom, where the person with the worst ethical standards on the team sets the standards to which everyone designs their work. If the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk in contradicting that idea. Pointing out that, in fact, most platforms on the internet do not harm users in these ways, and that their creators work very hard to ensure that tech products don’t present a risk to their communities, can end up being a career-limiting move.

4. Compensation is tied to feature adoption

This is a more subtle point, but it explains many of the incentives and motivations behind so much of what happens with today’s major technology platforms. When these companies launch new features, the adoption of those features is measured, and the success of those rollouts is often tied to the individual performance evaluations of the people responsible for them. These are measured using metrics like KPIs (key performance indicators) or similar corporate acronyms, all of which boil down to being rewarded when the thing you made gets adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.

In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need an internet of consent.
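To make that concrete, here is a deliberately simplified sketch (every name in it is hypothetical, and no real product’s telemetry is this simple) of how a feature-adoption KPI is typically computed: a counter increments on every click, with no way to distinguish “I accidentally clicked this while trying to get rid of it” from “I enthusiastically chose this button.”

```python
# Hypothetical sketch of a naive feature-adoption KPI pipeline.
# The core flaw: "engagement" is counted with no notion of intent.

from collections import Counter

def record_event(metrics: Counter, event: dict) -> None:
    """Fold a raw UI event into the adoption counters."""
    if event["target"] == "ai_button":
        # An accidental click while trying to dismiss the feature
        # counts exactly the same as an eager, deliberate one.
        metrics["ai_feature_engagement"] += 1

events = [
    {"user": "a", "target": "ai_button"},  # wanted the feature
    {"user": "b", "target": "ai_button"},  # was trying to close it
    {"user": "c", "target": "editor"},     # never touched it
]

metrics = Counter()
for event in events:
    record_event(metrics, event)

print(metrics["ai_feature_engagement"])  # 2
```

Both of the first two users look identical to the metric, and only the metric reaches the compensation system.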

But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.

5. Their cronies have made it impossible to regulate them

A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI Czar David Sacks has an unbelievably broad number of conflicts of interest from his investments across the AI spectrum; it’s impossible to know how many because nobody in the Trump administration follows even the basic legal requirements around disclosure or divestment, and the entire corrupt Republican Party in Congress refuses to do its constitutionally-required duty to hold the executive branch accountable for these failures.

As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like open bribery) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.

All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.

There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.

What about the kids?

It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.

People will say, “But it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are already products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children, that would be a reason not to ship them at all.

If it is indeed absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply attach a rule in the code that changes the object of the violence to Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would quickly figure out a way to fix that bug. But somehow, when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.
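For what it’s worth, an output-side check is not exotic engineering. Here is a deliberately crude, hypothetical sketch (a keyword filter, far weaker than the trained safety classifiers production systems actually use) just to show that screening a chatbot’s output before it reaches a user is a tractable, ordinary piece of code:

```python
# Hypothetical sketch of an output-side guardrail. Real systems use
# trained classifiers rather than a keyword list; this only
# illustrates that checking model output before a user sees it
# is a small, well-understood amount of code.

BLOCKED_PHRASES = [
    "end your life",
    "hurt yourself",
    "harm yourself",
]

def guard_output(model_output: str) -> str:
    """Return the model's text, or a refusal if it trips the filter."""
    lowered = model_output.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        # Swap the harmful completion for a refusal before display.
        return ("I can't help with that. If you're struggling, "
                "please reach out to someone you trust or a crisis line.")
    return model_output

print(guard_output("Here's a recipe for pancakes."))
print(guard_output("Maybe you should hurt yourself."))
```

A keyword list like this is trivially evaded, which is exactly why real moderation uses classifiers; but the plumbing, intercepting output before display, is the easy part.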

We can expect things to get worse before they get better. OpenAI has already announced that it is going to be allowing people to generate sexual content on its service for a fee later this year. To their credit, when doing so, they stated their policy prohibiting the use of the service to generate images that sexualize children. But the service they’re using to ensure compliance, Thorn, whose product is meant to help protect against such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose entire purpose is preventing this kind of material, where every public message they have put out is decrying this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?

And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and all of the product managers are Meta alumni who learned to make decisions there and know that they can just keep gaming the terms of service if they need to, and laws aren’t being enforced… well, will you be surprised?

How do we move forward?

It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be unfathomable that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about these risks or realities of these platforms at all. Even the vast majority of people who work in tech probably are barely aware.

What’s worse is that the majority of people I’ve talked to in tech who do know about this have not taken a single action about it. Not one.

I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?

cjheinz: Hmmm.

More in Sadness than in Anger


Sorry I haven't updated the blog for a while: I've been busy. (Writing the final draft of a new novel entirely unconnected to anything else you've read—space opera, new setting, longest thing I've written aside from the big Merchant Princes doorsteps. Now in my agent's inbox while I make notes towards a sequel, if requested.)

Over the past few years I've been naively assuming that while we're ruled by a ruthless kleptocracy, they're not completely evil: aristocracies tend to run on self-interest and try to leave a legacy to their children, which usually means leaving enough peasants around to mow the lawn, wash the dishes, and work the fields.

But my faith in the sanity of the evil overlords has been badly shaken in the past couple of months by the steady drip of WTFery coming out of the USA in general and the Epstein Files in particular, and now there’s this somewhat obscure aside, which rips the mask off entirely (original email on the DoJ website)…

A document released by the U.S. Department of Justice as part of the Epstein files contains a quote attributed to correspondence involving Jeffrey Epstein that references Bill Gates and a controversial question about "how do we get rid of poor people as a whole."

The passage appears in a written communication included in the DOJ document trove and reads, in part: "I've been thinking a lot about that question that you asked Bill Gates, 'how do we get rid of poor people as a whole,' and I have an answer/comment regarding that for you." The writer then asks to schedule a phone call to discuss the matter further.

As an editor of mine once observed, America is ruled by two political parties: the party of the evil billionaires, and the party of the sane (so slightly less evil) billionaires. Evil billionaires: "let's kill the poor and take all their stuff." Sane billionaires: "hang on, if we kill them all who's going to cook dinner and clean the pool?"

And this seemed plausible ... before it turned out that the CEO class as a whole believe entirely in AI (which, to be clear, is just another marketing grift in the same spirit as cryptocurrencies/blockchain, next-generation nuclear power, real estate backed credit default options, and Dutch tulip bulbs). AI is being sold on the promise of increasing workforce efficiency. And in a world which has been studiously ignoring John Maynard Keynes' 1930 prediction that by 2030 we would only need to work a 15 hour work week, they've drawn an inevitable unwelcome conclusion from this axiom: that there are too many of us. For the past 75 years they've been so focussed on optimizing for efficiency that they no longer understand that efficiency and resilience are inversely related: in order to survive collectively through an energy transition and a time of climate destabilization we need extra capacity, not "right-sized" capacity.

Raise the death rate by removing herd immunity to childhood diseases? That's entirely consistent with "kill the poor". Mass deportation of anyone with the wrong skin colour? The white supremacists will join in enthusiastically, and meanwhile: the deported can die out of sight. Turn disused data centres or amazon warehouses into concentration camps (which are notorious disease breeding grounds)? It's a no-brainer. Start lots of small overseas brushfire wars, escalating to the sort of genocide now being piloted in Gaza by Trump's ally Netanyahu (to emphasize: his strain of Judaism can only be understood as a Jewish expression of white nationalism, throwing off its polite political mask to reveal the death's head of totalitarianism underneath)? It's all part of the program.

Our rulers have gone collectively insane (over a period of decades) and they want to kill us.

The class war has turned hot. And we're all on the losing side.

cjheinz: Thanks Charlie for stating the obvious: we’re losing.

The GOP goal of destroying the post office is coming...


The GOP goal of destroying the post office is coming along: the USPS is now so unreliable that newspaper delivery is delayed across the country. (My mail delivery is currently one bundle every week or two.)

cjheinz:
Fuck these assholes.
The Post Office is in the Constitution.
There is no mention of profit margin.
It is a public good.

A low bar for Andy Barr


In a 30-second ad released over the weekend, U.S. Rep. Andy Barr — running for Sen. Mitch McConnell’s Senate seat — is pictured on a farm. The sun is shining. There is a barn and an American flag behind him. “You know what DEI really stands for?” he says, smiling. “Dumb, evil, indoctrination.”

The scene shifts and we see a black man in a crowd. He is wearing a Rev. Martin Luther King Jr. t-shirt and holding up a sign that reads “Stay woke America.”

“Woke liberals spew it,” Barr says, “corporate losers fall for it, but thanks to Trump, America is rejecting that trash.”

An interesting choice for the Barr campaign to release the ad on the heels of President Donald Trump’s racist social media post (which was primarily about debunked claims of voter fraud) depicting former President Barack Obama and his wife as apes. I first became aware of that imagery last October, when Bobbie Coleman, then-chairperson of the Hardin County Republican Party, shared the extended AI video depicting the Obamas as apes on her county party’s Facebook page.

The Barr ad was likely already scheduled for release, but ads can be pulled. The ad ran and continues to run. The YouTube version already has 297,000 views as of this writing. And it must be noted that we are now a decade deep into Trumpism, and into Trump himself, whose presidential aspirations were initially fueled by his insistence that Obama did not have an American birth certificate and was, therefore, not American.

It was a lie. It was racist hogwash. It also launched Trump from elderly white billionaire New York City playboy reality show host, known for his bankruptcies and stiffing of contractors, straight into the White House. 

The high spark of low standards and even lower morality. 

Trump continues to lie about the 2020 election results (he lost) and is pushing for the anti-immigrant SAVE Act that also would exclude many citizens from voting. It reminds me that in 1965 — the year the Voting Rights Act, prohibiting racial discrimination in voting, was signed into law — Joan Didion in “On Morality” wrote “when we start deceiving ourselves into thinking not that we want something or need something, not that it is a pragmatic necessity for us to have it, but that it is a moral imperative that we have it, then is when the thin whine of hysteria is heard in the land, and then is when we are in bad trouble. And I suspect we are already there.”

Watching Barr’s most recent ad inflaming white hysteria, we see that we are still there: stuck in the past, stuck in racist tropes, stuck in an immoral morass of bold bigotry. The last words of his 30-second ad for a U.S. Senate seat are, “I’m Andy Barr. It’s not a sin to be white, it’s not against the law to be male, and it shouldn’t be disqualifying to be a Christian. I’m Andy Barr, and I approve this message to give woke liberals something else to cry about.” 

I am white and have never heard that it is a sin for me to be white. I have yet to see anyone crying. Is there a law somewhere against being male? Funny, I can’t seem to locate it. And when did it become disqualifying to be Christian? The answer is … drumroll, please … never. What a fantastical farce.

I watched Barr’s ad more than a dozen times, and the central message seems to be: I am a white Christian man who will make liberals cry. 

That’s it?

Is this a winning message for Kentucky Republican primary voters? Because it sure looks like this — not the economy or grocery prices, not the cost of health insurance, not education, not public safety or taxes or conservatism — is what Barr is banking on.

We often hear that, just because someone voted for Donald Trump, it does not mean that they are racist, that they are simply willing to ignore that the president is openly racist because he is a conservative doing conservative things. But this is also a lie. Conservatives, by name and nature, conserve things, and with what we have witnessed over the last year of the Trump presidency with the destruction of everything from the East Wing of the White House to the U.S. Agency for International Development and other American institutions via DOGE, there is nothing conservative about him. There is only the power to destroy. And the racism. Always the racism.

It is 2026. Barr runs an ad in which he appears to believe simply being white and male qualifies him to be a United States senator. The president posts a video about voting machines depicting the Obamas as apes. In our statehouse, we are now years into fights by our mostly white, mostly male GOP supermajority to rid our institutions of any whiff of diversity, equity, and inclusion (DEI). 

The racist spigot is wide open. 

The moral rot is deep.

--30--


Love


The first thing my wife heard me say was “I’m a Leo, so I don’t believe in astrology.” She’s a Scorpio, and that’s her constellation, above another of my affections, the Walnut Grove tower farm in California’s central valley. Left to right, the towers are 2,049, 2,000, 1,549, 524, and 1,997 feet tall and transmit all of Sacramento’s TV stations.

Happy Valentine’s Life

My favorite line from the musical Les Misérables is “To love another person is to see the face of God.” My wife and I have been living that truth since not long after we met, thirty-six years ago.

Towers

I love to look at them, know what they’re for, and (many decades ago) climb them. Places where I write about towers and post photos of them:

Trunk Line, my blog about infrastructure
Nfrastructure, my Flickr collection of infrastructure photos (most of which are about broadcasting and transmitters)
This subset on my main Flickr collection
All these (121 of them), posted on this very blog

Consider all of them a long love letter to the now-gone golden age of broadcasting. I want future historians and archivists to remember what broadcasting was and how it worked before digital tech absorbed and obsolesced it. Long may it wave.

Stories

I Love Girl, by Simon Rich, in The New Yorker. It’s worth getting a subscription just for that one story.

Boom!

What Happens When You Put AI in the Hands of a 73-Year-Old Grandmother, by Frances Flynn Thorsen, @blogmother on her Substack blog. Hats off to the real estate conversation led by Bill Wendel of RealEstateCafe and happening here.

Turing would dig it

My old friend David Beaver, vastly versed in magic, has a new Substack that riffs off many ways that magic isn’t what you think it is. And yet it has near-infinite promise in the AI age, when falsity on a grand scale passes damn near every variant of the Turing test.


Why would Greene and Massie feel the need to tweet “I’m not suicidal”?


In September 2025, Rep. Marjorie Taylor Greene posted a very strange tweet, publicly declaring that she is not, in fact, suicidal.

This tweet came as MTG was pushing for the release of the Epstein files, and was one of only a few Republicans supporting the discharge petition to get a vote on forcing their release.

And now, our own Thomas Massie has felt the need to also tell the world that he is not, in fact, suicidal.


More than one source has said that Massie and his office have received an increasing number of threats, including death threats and threats to his family.

All because he, and MTG, and a few others, were brave enough to stand up and say “Release the files. All of them.”

For a member of Congress — a Republican member of Congress — to feel the need to make such a statement publicly is just gobsmacking.

By pushing for the entire sordid Epstein episode to come out, Massie and others have become a threat to some very powerful people. And some of those people have the means to do something about that threat.

Let us all hope and pray that (a) none of these threats come to fruition, that (b) all the truth about the Epstein Class is finally revealed, and that (c) we can pull our politics and our nation back to some level of shared humanity.

In the meantime – please stay safe, Mr. Massie.

--30--

cjheinz: Amen.