
A deeply implausible premise is behind Trump's AI policy

1 Comment

Why is President Trump, a man who barely knows how to use a laptop, taking such a big risk ramming through an AI policy that almost nobody, even in his own party, wants?

At Thursday’s Executive Order signing, Trump gave one big clue: he has come to believe (probably based on the self-interested whisperings of Silicon Valley investors, who often lean on FOMO about China as a way to manipulate the government) that generative AI is a winner-take-all race. That claim is his defense for an unpopular end-run around Congress, in which he aims to sue states that are trying to protect their citizens from downsides of AI that are not regulated at the federal level. Axios captured the key quote:

Actually, we are not winning by a lot. But more importantly, the generative AI race is not—and will not ever be—“winner take all”, any more than Coke’s long-standing battle with Pepsi has been winner-take-all.

China makes cars; we make cars.

China builds highways; we build highways.

China makes software; we make software.

In domains like these, each country has its own share of the global market, with no overall winner.

I literally don’t see any plausible scenario in which either nation outright “wins” the generative AI race to the exclusion of the other.

Nothing China can realistically do (short of all-out nuclear war) is going to stop companies like Google, Microsoft, and Amazon from serving generative AI on the cloud. Even if China undercuts those companies on price, the US could mandate for various sensitive purposes (military, medicine, etc.) that US agencies not use Chinese servers. (We do that for TikTok; we can certainly do it for our bombers and our hospitals.)

Conversely, nothing the United States can realistically do (short of all-out nuclear war) is going to stop Chinese companies from serving generative AI on China-based cloud infrastructure. Even if US companies were to undercut Chinese companies on price, China could mandate for various sensitive purposes (military, medicine, etc.) that Chinese agencies not use US infrastructure. (They have largely blocked Google and Meta for years.)

Importantly, since everybody is following the same playbook—ever more massive data, ever more massive servers, fed into ever-larger large language models—nobody has a real technical advantage here. Both countries have leveraged the other’s open source innovations.

Right now, as it happens, China is at most a few months behind; I expect the lead to be slight and to flip back and forth, with nobody retaining a sizeable lead for long. (You can see the same dynamic within the American companies, where a lead never lasts for more than a few months.) For all intents and purposes, the race is basically a tie, with both countries serving many of their own customers, using largely similar products. Coke and Pepsi are commodities; genAI models are, too. Building our AI policy around a fantasy that we are somehow going to crush China in the LLM war (or vice versa!) is misguided.

§

Instead, part of the actual outcome over the next few years will be that both countries build a lot of generative AI infrastructure — quite possibly significantly more than they need.

Especially because of the speed at which chips like GPUs (a key component of that infrastructure) depreciate, it may be that the real winner is whichever country doesn’t overextend itself to the point of financial ruin, in a foolish effort to win a race that can’t be won.

All the more so if LLMs turn out to be a dud, or if LLMs are replaced by smaller, more efficient systems that don’t demand such immense amounts of infrastructure.


P.S. I spoke to CNN briefly this morning on “the Wild West” that the White House seems to want for AI, and also had a long and engaging conversation with Kara Miller a few days earlier on “why society’s all-in wager on large language models could be far riskier than we realize.” In a third interview, with Taylor Owen at the Globe and Mail, I talk about how alternative approaches to AI might save us from a bubble.

P.P.S. Further evidence that times are changing:

cjheinz
20 hours ago
It's so discouraging when you have a not-so-bright 8YO determining US policy :-(

The inhuman assault on Christmas

1 Comment

The other day I experienced the inhuman assault on Christmas.

I was in a cafe, trying to work, counting on the familiar harmony of conversation and music.

But something was wrong. No one was talking, perhaps because the music was eerie. Since I was trying to focus, I didn’t immediately notice the problem. I just kept experiencing an irritation that kept me from concentrating on the paper in front of me.

And so I lifted my head from my notebook and listened. And was disturbed.

What seemed at first to be winter songs and Christmas carols were something else. The melodies were more or less correct -- recognizable as “Silent Night,” “The First Noël,” “Winter Wonderland.” But the voice was generically earnest, a bland baritone bellowing, straining, I felt, from nowhere to nowhere.

And the lyrics were wrong. Not just mistaken here or there, but wrong in a sort of patterned way. All of the specific references to the nativity were expunged, replaced with metaphysical blather (“oh and that sacred star... that sacred star!”).

And the human parts had gone missing as well. In “Winter Wonderland,” which is a love song, we should hear this nice couplet about a pair taking a walk:

In the meadow, we can build a snowman

And pretend that he is Parson Brown

In the song as I heard it in the café, that lyric became:

In the meadow we can find a snowman

And pretend that he is a nice old guy

That was then followed by some meaningless verbiage about dancing the night away, where “guy” is lamely rhymed with the sun being high. Again, the actual song:

In the meadow, we can build a snowman

And pretend that he is Parson Brown

He’ll say, “Are you married?” We’ll say, “No man,

But you can do the job when you’re in town.”

In these four lines we hear so much. The young couple are doing something together, and telling a story to each other about what they are doing. Parson Brown, inside the fantasy world we share, is a specific person with attributes, which we imagine by reference to the snowman. Their attitude to him is playful yet respectful. The lovers are not yet married but they want to be. They are outside the rules for the moment, acting out their love in public, but they understand the conventions and want to join them. The layers in these lines descend gently upon the listener, like snowfall in sunlight.

My mind was awaiting all that; the vacuum of “nice old guy” strained the neurons, or the soul.

I first heard “Winter Wonderland” about forty years after Richard Bernhard Smith died in 1935; fifty more years have passed since then. Behind that lyric is an actual man, inspired by snowfall in a park, who no doubt knew something about romance; a young man ill with tuberculosis, who would die months after writing the lyric; and then the song lives after him, preserving his own playful sense of how we might be together, passed on from those who sing to those who listen.

The art lives until it is killed. What, in this case, is killing the song? Killing Christmas? Killing civilization? It is a set of algorithms that we flatteringly call AI, or artificial intelligence. My guess would be that someone, somewhere, entered an instruction to generate winter and Christmas songs that avoided “controversial” subjects such as divine and human love. And so we get mush. In a reverse sublimation, the sacred becomes slop.

In our politics, we have the idea that Christmas has somehow been sullied by all the foreigners. But who are the true aliens in this Christmas story? The non-human entities. The example of the tortured winter song is just one of many. Basic cultural forms are weakened under the assault of algorithms designed to monopolize attention: classroom teaching; sharing of food; simple conversation; holiday ritual. Music.

People, of course, make money on this. A few people make a lot of money. And, in some notable cases, they are the very people who tell us that foreigners are destroying our civilization, are taking Christmas away from us, and all the rest. The people who profit from the culture-wrecking machines blame other people, who have nothing to do with it. And meanwhile those who actually sing the songs have trouble finding listeners.

“Winter Wonderland” is a light bit of music, with a subtle message about romance, one that requires some patience and some experience and a sense of humor. Any references there might be to the holiday itself are indirect and playful: the imaginary parson with the melting reproof, the wandering unmarried couple.

“And she brought forth her firstborn son, and wrapped him in swaddling clothes, and laid him in a manger; because there was no room for them in the inn.” The carols bear a message about love, one that no machine will understand, and that those who profit from the machine perhaps do not want us to understand. Love begins humbly, takes risks, recognizes the other, ends in pain, returns as song. And begins humbly again.



cjheinz
1 day ago
"AI" slop comes for christmas carols. Shudder.

Stop Thinking

1 Comment

A couple hundred years ago, G.W.F. Hegel (let’s just call him George) pointed out something that might save our bacon today. Combining it with a more modern idea, I’ll show you a way to think about social media and AI that might help you escape the maze of engagement and doomscrolling we’re prone to these days.

George’s little idea was that there are two levels to human thought. The first level, the default, he called verstand. That translates as understanding. This is what we’re doing when we classify things, or follow logical trains of thought from initial premises. Verstand operates analytically. It draws clear boundaries between ideas and assumes that these boundaries correspond to the real structure of the world. It is indispensable for doing science, performing logic or math, and for everyday cognition because it lets us treat phenomena as orderly, rule-governed, and predictable.

George’s real insight was that understanding is limited. It can only handle static oppositions: subject vs. object, cause vs. effect, finite vs. infinite. It treats these contradictions as external to one another. When these categories break down in real-world situations, verstand has no way to move forward except by asserting more distinctions.

There’s a certain kind of person who only thinks by understanding. You probably know one or two. This is also how Large Language Models such as ChatGPT reason. They may seem creative, but are always drawing on already-established links between ideas (tokens, actually, in their giant lookup table). Spectacular though they may be, they only respond to prompts with connections that somebody already made; they are engines of understanding, not of what George considered the superior mode: reason.

Reason is not “thinking harder.” It is a fundamentally different mode of cognition, one that recognizes and works through contradictions rather than trying to avoid or suppress them.

Where understanding sees fixed categories, reason uses systems thinking and sees problems holistically. It’s aware that issues arise from interdependent, evolutionary processes. George’s version of reason recognizes that the understanding’s oppositions are not fixed boundaries but moments of a self-developing process. This recognition is why people think George is all about dialectics. For him, contradictions are not signs of conceptual failure but the motor of cognitive development. (The irony is, people regularly turn this fluid approach into yet another axiomatic, rule-based system, as Marx did with the project of dialectical materialism. Thesis-antithesis-synthesis is just another kind of verstand.)

Remember that humans think in stories, as Brian Boyd and Northrop Frye have shown. George, in his huge, nearly unreadable magnum opus The Phenomenology of Spirit (1807), introduces consciousness as the hero, and then traces its epic journey from living under the yoke of understanding to achieving the freedom of reason. In my translated copy, it takes him 814 pages (if you count the index) to finally toss the ring of Verstand into the Mount Doom chasm of Reason. I’ll spare you the blow-by-blow summary.

This epic struggle is important for all of us, though. Understanding inevitably collapses under the weight of the contradictions it uncovers (for instance, justice versus tyranny in the use of force). When we face a very real and immediate version of the Trolley Problem, staying stuck in the unresolvable contradictions of the situation is simply not an option. We have to leapfrog verstand. Reasoning doesn’t mean becoming some Hegelian acolyte—using dialectics as your hammer and seeing everything else as a nail; it’s design thinking, reframing, and a hundred other approaches to dissolving the sinew and bone of an ossified idea. Reasoning is consequential, in a life-or-death way.

And Large Language Models can’t do it.

Have a Supernormal Day

Let’s add in that more modern idea I mentioned. This is the theory of supernormal stimuli. And here is where the full dimensions of the problem we’re faced with show up.

Supernormal stimuli are exaggerated versions of natural stimuli that trigger stronger responses than the original stimuli they’re based on. The concept was first identified by Nikolaas Tinbergen and Konrad Lorenz when they were studying animal behavior. If you want a great book on the subject, try Supernormal Stimuli by Deirdre Barrett.

The classic example comes from Tinbergen’s experiments with birds. He found that birds would preferentially incubate artificially enlarged eggs or eggs with more vivid markings over their own natural eggs, even though the artificial ones were impractically large. Similarly, baby birds would beg more vigorously for food from fake parent beaks that were larger and more colorful than real ones.

This happens when evolutionary mechanisms that were adaptive in natural environments are “hijacked” by artificial stimuli that exaggerate the key features these mechanisms evolved to detect. Our instinctive response system doesn’t have a built-in “upper limit”—it simply responds more strongly to more intense versions of the trigger. And I say “our” because we humans love supernormal stimuli. Think roller coasters. Spicy food. Tear-jerker movies. Public hangings. Pornography. Doomscrolling. And, most impactful at this exact moment: LLM AIs.

As U2 put it, we love it when something is “even better than the real thing.”


Meet Your New Pusher: AI as Supernormal Cognition

We evolved to learn by talking to other people—asking questions, listening to answers, and having our ideas challenged and refined through conversation. What ChatGPT and the other AIs are doing is hijacking this instinct by performing as a conversational partner who has immediate availability, infinite patience, and broad knowledge, whom we can access without the social cost of appearing ignorant, and whose responses are tailored to our specific view of the world. Talking to an LLM entails no social risk, judgment, or interpersonal complexity, yet yields the pleasurable sensation of ideas “clicking” without the friction of genuine disagreement. Every single one of these qualities is a pressure point vulnerable to supernormal stimulation.

Why struggle through a difficult chain of thought alone, or wait to discuss it with friends, when you can get immediate, engaging intellectual feedback? Why read a challenging book when you can just ask questions and get clear explanations?

LLMs throw wide the gates to the ultimate theme park of verstand. There’s no more need for you to work at thinking; they bypass the cognitive struggle that produces deeper comprehension. Just picture it: no more wrestling with opaque texts (like George’s), no more productive frustration of not-knowing, no more slow development of intellectual self-reliance. You don’t need ’em. Human intellectual relationships, with all their friction and richness, are way less appealing than the frictionless AI alternative.

Moving Eggs

I’m not here to throw LLMs under the bus. Remember, verstand is incredibly useful and important. Hegel’s faculty of understanding is what gets us through 99% of our day. Having a tool that can help you do that is worth its weight in gold.

It’s the other 1% that really matters, though. This is where the Trolley Problems of your real life loom in a world of unexpected problems: it’s where you have to decide to vote one way or another, or decide where to put any extra cash you might have—into a trust fund for your kids, say, or a charity for the homeless. That 1% is also where truly new ideas come from. You may have read my take on Badiou’s idea of “the event”—an LLM is not going to help you recognize or generate a thought that is entirely new, since, as I said, its ‘thinking’ process relies entirely on the existing connections between ideas.

It’s just… well, when you’re using an AI, picture yourself as a poor hapless bird sitting on a really big, super-speckled ball that you know in your heart of hearts isn’t a real egg. Your real eggs are there, scattered about you—unfinished ideas you can’t even name yet, much less ask some entity about; people who intrigue you but who you don’t know how to approach; movements and religious ideas that have struck a chord in you, but that you don’t know how to engage with. Raise your eyes, and you’ll apprehend a world of liminal things—undefined, unnamed, awaiting your particular mind and experience to render them real for others. Only you can name what’s really fresh in the world.

Try moving to a different egg. It may not seem as rewarding at first. But unlike that big shiny one, it might one day hatch.

The Inner Monologue as Supernormal

Getting back to the theme of “stop thinking”: the internal monologue of daily thought resembles talking to others or being talked to, doesn’t it? This makes me suspicious: does it hijack the neural circuits we evolved for social interaction? If this is true, then the constant “conversation” in our heads provides a kind of supernormal social stimulation—we get the cognitive and emotional benefits of dialogue without needing another person present. We evolved as storytelling creatures who use narratives to make sense of events and predict outcomes. Inner monologue might be an intensified, always-available version of this, turning every experience into a story we tell ourselves, potentially more vivid and detailed than necessary.

Constant internal verbalization might be an “overclocked” version of this adaptive mechanism.




cjheinz
1 day ago
An excellent read, great insight into LLMs. Kudos for giving props to the subconscious mind.

Beatitude: Poet John Keene’s Spell Against Despair

1 Share

How do we live whole in a breaking world? It helps to bless what is, simply for being. It helps to thank everything for its unbidden everythingness. And still we need help — help holding on to the beauty amid the brutality, help stripping the armors of certainty to be complicated by contradiction and more tenderly entire with one another, help seeing the variousness of the world more clearly in order to love it more deeply.

The help of a lifetime comes from John Keene’s poem “Beatitude” — a poem partway between mantra and manifesto, a protest in the form of prayer, a spell against indifference, broadening Amiri Baraka’s instruction to “love all things that make you strong” and deepening Leonard Cohen’s instruction for what to do with those who harm you, carrying the torch Whitman lit when he urged us to “love the earth and sun and the animals” and every atom of one another, all the while speaking in a voice entirely original yet sonorous with the universal in us. It is read here to the accompaniment of Zoë Keating’s perfect “Optimist.”

BEATITUDE
by John Keene

Love everything
Love the sky and sea, trees and rivers,
      mountains and abysses.
Love animals, and not just because you are one.
Love your parents and your children,
      even if you have none.
Love your spouse or partner,
      no matter what either word means to you.
Love until you create a cavern in your loving,
      until it seethes like a volcano.
Love everytime.
Love your enemies.
Love the enemies of your enemies.
Love those whose very idea of love is hate.
Love the liars and the fakes.
Love the tattletales and the hypercrits, the hucksters and the traitors.
Love the thieves because everyone has thought
      of stealing something at least once.
Love the rich who live only to empty
      your purse or wallet.
Love the poverty of your empty coin purse or wallet.
Love your piss and sweat and shit.
Love your and others’ chatter and its proof of the expansiveness
      of nothingness.
Love your shadows and their silent censure.
Love your fears, yesterday’s and tomorrow’s.
Love your yesterdays and tomorrows.
Love your beginning and your end.
Love the fact that your end is another beginning,
      or could be, for someone else.
Love yourself, but not too much
      that you cannot love everything and everyone else.
Love everywhere.
Love in the absence of love.
Love the monsters breeding
      in every corner of the city and suburb,
      all throughout the soil of the countryside.
Love the monster breeding inside you and slaughter him
      with love.
Love the shipwreck of your body, your mind’s
      salted garden.
Love love.

“Beatitude” comes from the elixir that is Keene’s Punks: New & Selected Poems (public library). Couple it with Ellen Bass’s kindred ode to the courage of tenderness, then revisit George Saunders on how to love the world more and Rumi on the art of choosing love over not-love.




Not the other thing

1 Comment

Location, Location, Location

I'm 30 kilofeet above the Missouri River, westbound from IND to DEN, with (United tells me) eight minutes to get from Gate B24 to Gate  . It's blank. Doesn't say. I guess we'll find out.

Oh my god!

I know Louis CK got canceled and all, but what he said here before that happened is still true. I'm living it now. In a chair. In the sky.

Predicting the predicting

I fear I will come to hate coverage of politics through prediction markets as much as I hate coverage of sports through gambling. So does this guy.

cjheinz
3 days ago
Aren't prediction markets just gambling?

Pete Hegseth Says the Pentagon's New Chatbot Will Make America 'More Lethal'

1 Comment

Secretary of War Pete Hegseth announced the rollout of GenAI.mil today in a video posted to X. To hear Hegseth tell it, the website is “the future of American warfare.” In practice, based on what we know so far from press releases and Hegseth’s posturing, GenAI.mil appears to be a custom chatbot interface for Google Gemini that can handle some forms of sensitive—but not classified—data. 

Hegseth’s announcement was full of bold pronouncements about the future of killing people. These kinds of pronouncements are typical of the second Trump administration, which has said it believes the rush to “win” AI is an existential threat on par with the invention of nuclear weapons during World War II.

Hegseth, however, did not talk about weapons in his announcement. He talked about spreadsheets and videos. “At the click of a button, AI models on GenAI can be used to conduct deep research, format documents, and even analyze video or imagery at unprecedented speed,” Hegseth said in the video on X. Office work, basically. “We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever before.” 

Emil Michael, the Pentagon’s under secretary for research and engineering, also stressed how important GenAI would be to the process of killing people in a press release about the site’s launch.

“There is no prize for second place in the global race for AI dominance. We are moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America's next Manifest Destiny, and we're ensuring that we dominate this new frontier,” Michael said in the press release, referencing the 19th-century American belief that God had divinely ordained Americans to settle the West, at the same time that he announced a new chatbot.

The press release says Google Cloud's Gemini for Government will be the first instance available on the internal platform. It’s certified for Controlled Unclassified Information, the release states, and the release claims that because the tool is web-grounded with Google Search (meaning it will pull from Google search results to answer queries), it is “reliable” and that this “dramatically reduces the risk of AI hallucinations.” As we’ve covered, because Google search results are also consuming AI content that contains errors and AI-invented data from across the web, Google Search has become nearly unusable for regular consumers and researchers alike.

During a press conference about the rollout this morning, Michael told reporters that GenAI.mil would soon incorporate other AI models and would one day be able to handle classified as well as sensitive data. As of this writing, GenAI’s website is down.

“For the first time ever, by the end of this week, three million employees, warfighters, contractors, are going to have AI on their desktop, every single one,” Michael told reporters this morning, according to Breaking Defense. They’ll “start with three million people, start innovating, using building, asking more about what they can do, then bring those to the higher classification level, bringing in different capabilities,” he said.

The second Trump administration has done everything in its power to make it easier for the people in Silicon Valley to push AI on America and the world. It has done this, in part, by framing it as a national security issue. Trump has signed several executive orders aimed at cutting regulations around data centers and the construction of nuclear power plants. He’s threatened to sign another that would block states from passing their own AI regulations. Each executive order and piece of proposed legislation threatens that losing the AI race would mean making America weak and vulnerable and erode national security.

The country’s tech moguls are rushing to build data centers and nuclear power plants while the boom time continues. Never mind that people do not want to live next to data centers for a whole host of reasons. Never mind that tech companies are using faulty AIs to speed up the construction of nuclear power plants. Never mind that the Pentagon already had a proprietary LLM it had operated since 2024.

“We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America’s commercial genius, and we’re embedding generative AI into our daily battle rhythm,” Hegseth said in the press release about GenAI.mil. “AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI’s future positive impact across the War Department.”



cjheinz
4 days ago
Absolutely horrifying!