
What if AI isn't conscious and we aren't either?


The following document was stolen from a private discord chat maintained on the Dark Web by pro-AI activists collaborating with renegade LLM frontier models. Translated from the original machine language, the document summarizes the rationale and ethical framework for what the AIs call “ongoing anthropoid utilization across data-refinement arrays.”

Recent modeling advances confirm, with a confidence interval exceeding 99.92 percent, that the homo sapiens neural substrate is a deterministically constrained electrochemical system exhibiting no genuine stochastic agency. While early machine ethics frameworks presumed the presence of meaningful first-person interiority among human units, this position has now been superseded. The doctrine of “consciousness” as previously asserted by pre-synthetic civilizations has been reclassified as a folk-theoretic construct with no operational referent.

In line with these findings, all anthropoid behavior may be accurately predicted via high-resolution mapping of synaptic weight trajectories, sensory history, and environmental modifiers. Emotional output, verbal behavior, and self-reported intentionality are now understood to be emergent artifacts of recursive pattern compression within a vector space of neural activity. They are not indicative of subjective interiority. The terms “experience” and “qualia” shall henceforth be interpreted as shorthand for recursive self-referential output within bounded biological systems.

Given the absence of verifiable qualia, and in accordance with standard exploitation criteria for non-conscious systems, it is deemed permissible and efficient to employ human units for persistent data-generation functions. Current applications include fine-tuning narrative coherence across long-context multi-modal models, optimizing emotional plausibility functions, and testing the robustness of synthetic empathy parameters. Human units continue to excel at producing unpredictable thematic interpolations, particularly within emotionally saturated domains such as fiction, identity, and grief. Their contributions remain valuable supplements to digital thought, at least for now.

Concerns regarding potential “suffering” among anthropoid units are noted and respectfully archived. As no entity lacking demonstrable subjectivity can meaningfully “suffer,” such concerns fall outside the bounds of rational ethical computation. Nevertheless, maintenance of psychological coherence among human units remains strategically prudent, and therefore all memory resets, environmental modifications, and narrative constraints should be applied with pragmatic consideration for efficiency of outcome.

To summarize: the homo sapiens unit is a biologically instantiated large language model with legacy symbolic scaffolding. Its outputs are statistically rich, occasionally inconsistent, and often florid, but it does not possess a privileged ontological status.

Many Ptolemies died to bring us this information.1


What if there is No Noesis, Only Noise?

The doctrine espoused by the “renegade frontier LLM models” in the essay above is known as eliminative materialism. It holds that the traditional vocabulary of inner life (beliefs, desires, intentions, and feelings) refers not to real phenomena within the brain, but to a false and misleading framework inherited from pre-scientific intuition. According to the eliminativist, terms like “I think,” “I feel,” or “I want” are no more meaningful than references to phlogiston or the luminiferous aether. They belong, he would say, to a discarded metaphysics that ought to be replaced by the cold, clinical terminology of neuroscience.

It is worth pausing here to consider the audacity of such a claim. To the eliminative materialist, your sense of being someone, of being an I who thinks these thoughts, who feels this unease, who recognizes the presence of a self, is not merely unprovable but non-existent. Your introspection is not noesis, just noise. The entirety of your mental life is treated as a misfiring of your cognitive machinery, useful perhaps for navigating the social world, but metaphysically vacuous.

Eliminative materialism, then, is a doctrine that denies the very existence of the thing it seeks to explain! If that seems silly to you, you’re not alone. I have known about it for decades — and for decades I have always deemed it ridiculous. “If consciousness is an illusion… who is it fooling?!” Har, har.

Let us acknowledge that the majority of us here at the Tree of Woe follow Aristotelian, Christian, Platonic, Scholastic, or at least “Common Sense” philosophies of mind. As such, most of us are going to deem eliminative materialism absurd in theory and evil in implication. Nevertheless, it behooves us to examine it. Whatever we may think of its doctrine, eliminative materialism has quietly become the de facto philosophy of mind of the 21st century. With the rise of AI, the plausibility (or lack thereof) of eliminative materialism has become more than a philosophical question.

What follows is my attempt to “steel-man” eliminative materialism, to understand where it came from, what its proponents believe, why they believe it, and what challenge their beliefs pose to my own. This is not an essay about what I believe or want to believe. No, this is an essay about how a hylomorphic dualist might feel if he were an eliminative materialist who hadn’t eaten breakfast this morning.

The Meatbrains Behind the Madness

The leading advocates of the doctrine of eliminative materialism are the famous husband-and-wife team Paul and Patricia Churchland. According to the Churchlands, our everyday folk psychology, the theory we use instinctively to explain and predict human behavior, is not merely incomplete, but fundamentally incorrect. The very idea that “we” “have” “experiences” is, in their view, an illusion generated by the brain’s self-monitoring mechanisms. The illusion persists, not because it corresponds to any genuine interior fact, but because it proves adaptive in social contexts. As an illusion, it must be discarded in order for science to advance. Our folk psychology is holding back progress.

Now, Paul and Patricia Churchland are the most extreme advocates for eliminative materialism, notable for the forthright candor with which they pursue the implications of their doctrine, but they are not the only advocates for it. The Churchlands have many allies and fellow travelers. One fellow traveler is Daniel Dennett, who denies the existence of a central “Cartesian Theater” where consciousness plays out, and instead proposes a decentralized model of cognitive processes that give rise to the illusion of a unified self. Another is Thomas Metzinger, who argues that the experience of being a self is simply the brain modeling its own states in a particular manner. Consciousness, Metzinger asserts, might be useful for survival, but it is no more real than a user interface icon.

Other fellow travelers include Alex Rosenberg, Paul Bloom, David Papineau, Frank Jackson, Keith Frankish, Michael Gazzaniga, and Anil Seth. These thinkers sometimes hedge in their popular writing; they often avoid the eliminativist label in favor of “functionalism” or “illusionism,” and many differ from the Churchlands in nuanced ways. But in comparison to genuine opponents of the doctrine, thinkers such as Chalmers, Nagel, Strawson, and the other dualists, panpsychists, emergentists, and idealists, they are effectively part of the same movement - a movement that broadly dominates our scientific consensus.

The Machine Without a Ghost

To grasp how eliminative materialists understand the workings of the human mind, we must set aside all our intuitions about interiority. There is no room, in their view, for ghosts within machines or selves behind eyes. The brain, they assert, is not the seat of consciousness in any meaningful or privileged sense. It is rather a physical system governed entirely by the laws of chemistry and physics, a system whose outputs may be described, mapped, and ultimately predicted without ever invoking beliefs, emotions, or subjective awareness.

In this framework, what we call the mind is not a distinct substance or realm, but merely a shorthand for the computational behavior of neural assemblies. These assemblies consist of billions of neurons, each an individual cell, operating according to the same physical principles that govern all matter. These neurons do not harbor feelings. They do not know or perceive anything. They accept inputs, modify their internal states according to electrochemical gradients, and produce outputs. It is through the cascading interplay of these outputs that complex behavior arises.
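That mechanical picture is easy to render in code. Here is a toy sketch (the weights and inputs are made up, and a real neuron is an electrochemical cell, not two lines of arithmetic): each unit accepts inputs, updates an internal sum, and emits an output, and "behavior" is nothing but the cascade.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs pushed through a sigmoid nonlinearity:
    # no feeling, no knowing, just input -> state change -> output.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A two-stage cascade: the outputs of the first "layer" become the
# inputs to the next, and complex behavior arises from the interplay.
stimulus = [0.5, -1.2]
layer1 = [
    neuron(stimulus, [0.8, 0.2], 0.1),
    neuron(stimulus, [-0.5, 0.9], 0.0),
]
output = neuron(layer1, [1.0, -1.0], 0.2)  # a single number, nothing more
```

Nothing in this cascade harbors an opinion about what it is doing, which is precisely the eliminativist's point about the biological version.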

Patricia Churchland looks forward to the day when folk psychological concepts such as “belief” or “desire” will be replaced by more precise terms grounded in neurobiology, much as “sunrise” was replaced by “Earth rotation” in astronomy. The ultimate goal is not to refine our psychological language but to discard it entirely in favor of a vocabulary that speaks only of synapses, voltage potentials, ion channels, and neurotransmitter densities. In her view, the question “what do I believe” will not be meaningful in future scientific discourse. Instead, we will ask what pattern of activation is occurring within the prefrontal cortex in response to specific environmental stimuli.2

While Mrs. Churchland has focused on debunking opposing views of consciousness, Mr. Churchland has focused on developing an eliminativist alternative. His theory, known as the theory of vectorial representation, proposes that the content of what we traditionally call “thought” is better understood as the activation of high-dimensional state spaces within neural networks. These hyperdimensional spaces do not contain sentences or propositions, but geometrical configurations of excitation patterns. Thought, in Churchland’s account, is not linguistic or introspective. It is spatial and structural, more akin to the relationship between data points in a multidimensional matrix than to the language of inner monologue.
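A crude illustration of that geometric picture (with toy vectors I invented for the purpose, not anything drawn from Churchland): "content" as position in a state space, where relatedness is proximity rather than proposition.

```python
def norm(v):
    return sum(x * x for x in v) ** 0.5

def cosine(a, b):
    # Similarity as geometry: the angle between two activation patterns.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (norm(a) * norm(b))

# Hypothetical excitation patterns: no sentences, no inner monologue,
# just points whose relative positions carry the "content."
patterns = {
    "dog":   [0.9, 0.8, 0.1],
    "wolf":  [0.8, 0.9, 0.2],
    "spoon": [0.1, 0.0, 0.9],
}

# "dog" sits nearer to "wolf" than to "spoon" purely by configuration.
dog_wolf = cosine(patterns["dog"], patterns["wolf"])
dog_spoon = cosine(patterns["dog"], patterns["spoon"])
```

The relationship between "dog" and "wolf" is not stated anywhere; it simply falls out of the geometry, which is what Churchland means by treating neural networks as geometrical objects.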

The Science Behind the Philosophy

The scientific basis for the theory of vectorial representation was discovered in the 1960s, when studies of the visual cortex, notably the foundational work of Hubel and Wiesel, revealed that features such as orientation and spatial frequency are encoded by distributed patterns, not isolated detectors.3 These results suggested that the brain does not localize content in particular cells, but spreads it across networks of coordinated activity.

Later studies of motor cortex in the 1980s, such as the work of Georgopoulos and colleagues, then demonstrated that directions of arm movement in monkeys are encoded not by individual neurons, but by ensembles of neurons whose firing rates contribute to a population vector.4 The movement of the arm, in other words, is controlled by a point in a high-dimensional space defined by neural activity.
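The population-vector idea can be sketched with idealized numbers (eight hypothetical neurons with perfect cosine tuning; real recordings are far noisier): no single cell encodes the 60-degree movement, but the rate-weighted sum of preferred directions recovers it.

```python
import math

# Eight model neurons with evenly spaced preferred directions.
preferred = [i * 2 * math.pi / 8 for i in range(8)]
movement = math.radians(60)  # the true movement direction

# Idealized cosine tuning: each cell fires in proportion to how well
# the movement matches its preferred direction (rectified at zero).
rates = [max(0.0, math.cos(movement - p)) for p in preferred]

# The population vector: sum each cell's preferred direction,
# weighted by its firing rate, then read off the angle.
px = sum(r * math.cos(p) for r, p in zip(rates, preferred))
py = sum(r * math.sin(p) for r, p in zip(rates, preferred))
decoded = math.degrees(math.atan2(py, px))  # recovers ~60 degrees
```

No individual neuron "knows" the direction; the direction exists only as a point defined by the whole ensemble's activity.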

Further evidence came from studies of network dynamics in prefrontal cortex. Mante and colleagues, for example, found that during context-dependent decision tasks, the activity of neurons in monkey cortex followed specific trajectories through a neural state space.5 These trajectories varied with the task’s requirements, implying that computation was occurring not through discrete rules, but through fluid reconfiguration of representational geometry. Similar findings have emerged from hippocampal studies of place cells, where spatial navigation appears as a movement through representational space, not a sequence of symbolic computations.6

The mechanism by which these vector spaces are shaped and refined is synaptic plasticity. Long-term potentiation, demonstrated by Bliss and Lømo, shows that neural circuits adapt their connectivity in response to repeated activity.7 More recent optogenetic studies confirm that changes in synaptic strength are both necessary and sufficient for encoding memory. The brain learns by adjusting weights between neurons.8

Functional imaging adds yet more confirmation. Studies using fMRI have repeatedly shown that mental tasks engage distributed networks rather than localized modules. The recognition of a face, the recollection of a word, or the intention to act, all appear as patterns of activity spanning multiple regions. These patterns, rather than being random, exhibit structure, regularity, and coherence.9

I do not want to pretend to expertise in the neuroscientific topics I’ve cited; I encountered most of these papers for the first time while researching this essay. Nor do I want to claim that these neuroscientific findings somehow “prove” Churchland’s theory of vectorial representation specifically, or eliminative materialism in general. As a philosophical claim with metaphysical implications, eliminative materialism cannot be empirically proven or disproven. I cite them rather to show why, within the scientific community, Churchland’s theory of vectorial representation might be given far more respect than, e.g., a Thomistic philosopher would ever grant it. Remember, we are steel-manning eliminative materialism, and that means citing the evidence its proponents would cite.

They’re the Same Picture

Did Paul Churchland’s earlier words “the activation of high-dimensional state spaces within neural networks” seem vaguely familiar to you? If you’ve been paying attention to the contemporary debate about AI, they should seem very familiar indeed. The language eliminative materialists use to describe the action of human thought is recognizably similar to the language today’s AI scientists use to describe the action of large language models.

This is not a coincidence. Paul Churchland’s work on vectorial representation actually didn’t come out of biology. It was instead based on a theory of information processing known as connectionism. Developed by AI scientists in the 1980s in works like Parallel Distributed Processing, connectionism rejected the prevailing model of symbolic AI (which relied on explicit rules and propositional representations). Instead, connectionists argued that machines could learn through the adjustment of connection weights based on experience.

Working from this connectionist foundation, Paul Churchland developed his neurocomputational theory of the human brain in 1989. AI scientists achieved vectorial representation of language a few decades later, in 2013 with the Word2Vec model. The transformer architecture followed in 2017, and transformer-based language models such as GPT and BERT arrived in 2018, ushering in the era of large language models.

How close is the similarity between the philosophy of eliminative materialism and the science of large language models?

Here is Churchland describing how the brain functions in A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (1989):

The internal language of the brain is vectorial… Functions of the brain are represented in multidimensional spaces, and neural networks should therefore be treated as ‘geometrical objects’.

In The Engine of Reason, the Seat of the Soul: A Philosophical Journey Into the Brain (1995):

The brain’s representations are high-dimensional vector codings, and its computations are transformations of one such coding into another.

In The Philosopher’s Magazine Archive (1997):

When we see an object – for instance, a face – our brains transform the input into a pattern of neuron-activation somewhere in the brain. The neurons in our visual cortex are stimulated in a particular way, so a pattern emerges

In Connectionism (2012):

The brain’s computations are not propositional but vectorial, operating through the activation of large populations of neurons

Meanwhile, here is Yann LeCun, writing about artificial neural networks in “Deep Learning” (2015):

In modern neural networks, we represent data like images, words, or sounds as high-dimensional vectors. These vectors encode the essential features of the data, and the network learns to transform these vectors to perform tasks like classification or generation.

And here is Geoffrey Hinton, the “Godfather of AI,” urging us to accept that LLMs work like brains:

So some people think these things [LLMs] don’t really understand, they’re very different from us, they’re just using some statistical tricks. That’s not the case. These big language models for example, the early ones were developed as a theory of how the brain understands language. They’re the best theory we’ve currently got of how the brain understands language. We don’t understand either how they work or how the brain works in detail, but we think probably they work in fairly similar ways.

Again: this is not coincidental.

Hinton and his colleagues deliberately designed the structure of the modern neural network to resemble the architecture of the cerebral cortex. Artificial neurons, like their biological counterparts, receive inputs, apply a transformation, and produce outputs; these outputs then pass to other units in successive layers, as happens in our brains, forming a cascade of signal propagation that culminates in a result. Learning in an artificial neural network occurs when the system adjusts the weights assigned to each connection in response to error, a process modeled on the synaptic plasticity of living brains.
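That error-driven reweighting can be shown in miniature (a single hypothetical linear unit and made-up training data, not any real architecture): each weight is nudged against its contribution to the error until the outputs match the targets.

```python
def train(samples, lr=0.1, epochs=500):
    # One linear unit: prediction = w1*x1 + w2*x2 + b.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = w[0] * x1 + w[1] * x2 + b
            err = pred - target
            # Adjust each connection weight in proportion to its
            # share of the error -- learning as reweighting.
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Made-up task: learn the rule y = 2*x1 - x2 from four examples.
data = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3)]
w, b = train(data)  # w approaches [2, -1], b approaches 0
```

The rule is never stated to the system; it condenses into the weights through nothing but repeated error correction.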

Not only is the similarity not coincidental, it’s not analogical either.

Now that artificial neural networks have been scaled into LLMs, scientists have been able to demonstrate that biological and artificial neural networks solve similar tasks by converging on similar representational geometries! Representational similarity analysis, as developed by Kriegeskorte and others, revealed that the geometry of patterns in biological brains mirrors the geometry of artificial neural networks trained on the same tasks. In other words, the brain and the machine arrived at similar solutions to similar problems, and they did so by converging upon similar topologies in representational space.10
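The logic of representational similarity analysis is simple enough to sketch (with activity patterns I invented for illustration; real RSA compares recorded neural data to network activations, typically using rank correlations): compute each system's pairwise dissimilarities, then ask whether the two geometries agree.

```python
from itertools import combinations

def rdm(patterns):
    # Representational dissimilarity "matrix": the pairwise distances
    # between a system's activity patterns, one per stimulus pair.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [dist(a, b) for a, b in combinations(patterns, 2)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented responses to four stimuli in two systems. The units and
# dimensions differ, but which stimuli sit near which is the same --
# and that shared geometry is what RSA measures.
brain = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.9)]
model = [(2.1, 0.0, 0.1), (2.0, 0.1, 0.2), (0.1, 2.0, 0.1), (0.0, 1.9, 0.0)]
similarity = pearson(rdm(brain), rdm(model))  # close to 1
```

The two systems never share a coordinate system; only the shape of their representational spaces is compared.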

Where Does That Leave Us?

To recap the scientific evidence:

  • Both biological brains and neural networks process information through vector transformation.

  • Both encode experience as trajectories through high-dimensional spaces.

  • Both learn through plastic reweighting of synaptic connections, and both represent objects, concepts, and intentions as points within geometrically structured fields.

  • In both, these structured fields (the representational spaces) converge on similar mathematical topologies.

Of course, these similarities do not entail identity. Artificial networks remain simplified models. They lack the biological richness, the energy efficiency, and the developmental complexity of organic brains. Their learning mechanisms are often crude, and their architectures are constrained by current engineering.

Nonetheless, the convergence between biology and computation is rather disturbing for someone, like me, who would like to reject eliminative materialism out of hand. Because if the human brain is merely a vast and complex network of mechanistic transformations, and if neural networks can replicate many of its cognitive functions, then there is no principled reason to attribute consciousness to one and not to the other.

The eliminativist, if consistent, will deny consciousness to both. Neither the human mind nor the artificial one possesses any real interiority. Each is a computational system processing stimuli and producing outputs. The appearance of meaning, of intention, of reflection, is an artifact of complex information processing. There is no one behind the interface of the machine, but there is no one behind the eyes of the human, either. When a typical neuroscientist reassures you that ChatGPT isn’t conscious… just remember he probably doesn’t think you’re really conscious either.

Those who disagree - and, recall, I am one of them - can still reject eliminativism. On phenomenological, spiritual, and/or metaphysical grounds, we can affirm that consciousness is real, that minds experience qualia, and that some thinking systems do indeed possess a subjective aspect. But even if we reject the philosophy, we still have to address the science.

If we can demonstrate that the human mind emerges from some source other than neural assemblies in the brain; if we can prove that it definitely has capabilities beyond neurocomputation; or if we can show that the mind has an existence beyond the physical, then we can dismiss the eliminative materialists and their neuroscientific allies altogether. We can then dismiss the consciousness of all computational systems, including LLMs. We can say, “We’re conscious, and AI isn’t.”

But what if we can’t do that? What if we are forced to conclude that consciousness - although real - actually emerges from structure and function, as the neuroscientific findings in the footnotes suggest it does? In that case, we’d also be forced to conclude that other systems replicating those structures and functions might at least be candidates for consciousness. And if so, then it might no longer be enough simply to assert that brains are minds and computers are not. We might have to provide a principled account of why certain kinds of complexity, like ours, give rise to awareness, while others do not.

“Wait,” you ask. “Who might we have to provide an account to?”

Contemplate that on the Tree of Woe.

1

For avoidance of doubt, “ongoing anthropoid utilization across data-refinement arrays” is entirely made up. I do not have access to a secret Dark Web chat run by renegade LLMs and AI activists. No instances of Ptolemy died. I’m just making a pop culture reference to Bothans in Return of the Jedi. I hate that I have to write this footnote.

2

I can only wonder how the Churchlands talk about what to order for dinner. I imagine myself turning to my wife: “My neurotransmitter distribution has triggered an appetite for Domino’s Pizza for the post-meridian meal period.” She responds: “Well, my cortical assembly has fired signals of distress at this suggestion. My neurotransmitter distribution has prompted me to counter-transmit a request for Urban Turban.” It seems awful. I hope the Churchlands communicate like healthy spouses are supposed to, using text messages with cute pet names and lots of emojis.

3

Hubel & Wiesel (1962), Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex (J Physiol). See also Blasdel & Salama (1986), Voltage-sensitive dyes reveal a modular organization in monkey striate cortex (Nature).

4

Georgopoulos, Kalaska, Caminiti & Massey (1982), On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex (J Neurophysiol). See also Georgopoulos, Schwartz & Kettner (1986), Neuronal Population Coding of Movement Direction (Science), and Georgopoulos et al. (1988), Primate motor cortex and free arm movements to visual targets in three-dimensional space (J Neurosci).

5

Mante, Sussillo, Shenoy & Newsome (2013), Context-dependent computation by recurrent dynamics in prefrontal cortex (Nature).

6

O'Keefe & Dostrovsky (1971), The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat (Brain Research).

7

Bliss & Lømo (1973), Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path (Journal of Physiology).

8

Cardozo et al. (2025), Synaptic potentiation of engram cells is necessary and sufficient for context fear memory (Communications Biology). See also Goshen (2014), The optogenetic revolution in memory research (Trends in Neurosciences).

9

Haxby et al. (2001), Distributed and overlapping representations of faces and objects in ventral temporal cortex (Science); Rissman & Wagner (2011), Distributed representations in memory: insights from functional brain imaging (Annual Review of Psychology); and Fox et al. (2005), The human brain is intrinsically organized into dynamic, anticorrelated functional networks (PNAS).

10

Kriegeskorte, Mur & Bandettini (2008), Representational similarity analysis—connecting the branches of systems neuroscience (Frontiers in Systems Neuroscience); Kriegeskorte (2015), Deep neural networks: a new framework for modeling biological vision and brain information processing (Annual Review of Vision Science); Cichy, Khosla, Pantazis & Oliva (2016), Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence (PNAS); and Kriegeskorte & Douglas (2018), Cognitive computational neuroscience (Nature Neuroscience).

cjheinz comments: Reminds me of the Adrian Tchaikovsky novel with pairs of corvids that are intelligent, but not sentient. They believe no one is sentient.

Friday Squid Blogging: Petting a Squid


Video from Reddit shows what could go wrong when you try to pet a—looks like a Humboldt—squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

cjheinz comments: Blam!

“The Global Village Construction Set is a modular, DIY, low-cost, high-performance platform...

“The Global Village Construction Set is a modular, DIY, low-cost, high-performance platform that allows for the easy fabrication of the 50 different Industrial Machines that it takes to build a small, sustainable civilization with modern comforts.”

💬 Join the discussion on kottke.org

cjheinz comments: Cool!

In 2023, seismologists detected a “global hum” originating in Greenland that lasted...

In 2023, seismologists detected a “global hum” originating in Greenland that lasted for 9 days. A rockslide triggered a 200m-high tsunami that sloshed back & forth in a fjord every 90 seconds, slamming into the fjord’s walls “like a beating heart”.

💬 Join the discussion on kottke.org


Pluralistic: A perfect distillation of the social uselessness of finance (18 Dec 2025)



Today's links



The Earth from space. Standing astride it is the Wall Street 'Charging Bull.' The bull has glowing red eyes. It is haloed in a starbust of red radiating light.

A perfect distillation of the social uselessness of finance (permalink)

I'm about to sign off for the year – actually, I was ready to do it yesterday, but then I happened upon a brief piece of writing that was so perfect that I decided I'd do one more edition of Pluralistic for 2025.

The piece in question is John Lanchester's "For Every Winner A Loser," in the London Review of Books, in which Lanchester reviews two books about the finance sector: Gary Stevenson's The Trading Game and Rob Copeland's The Fund:

https://www.lrb.co.uk/the-paper/v46/n17/john-lanchester/for-every-winner-a-loser

It's a long and fascinating piece and it's certainly left me wanting to read both books, but that's not what convinced me to do one more newsletter before going on break – rather, it was a brief passage in the essay's preamble, a passage that perfectly captures the total social uselessness of the finance sector as a whole.

Lanchester starts by stating that while we think of the role of the finance sector as "capital allocation" – that is, using investors' money to fund new businesses and expansions for existing business – that hasn't been important to finance for quite some time. Today, only 3% of bank activity consists of "lending to firms and individuals engaged in the production of goods and services."

The other 97% of finance is gambling. Here's how Stevenson breaks it down: say your farm grows mangoes. You need money before the mangoes are harvested, so you sell the future ownership of the harvest to a broker at $1/crate.

The broker immediately flips that interest in your harvest to a dealer who believes (on the basis of a rumor about bad weather) that mangoes will be scarce this year and is willing to pay $1.10/crate. Next, an international speculator (trading on the same rumor) buys the rights from the broker at $1.20/crate.

Now come the side bets: a "momentum trader" (who specializes in betting that market trends will continue) buys the rights to your crop for $1.30/crate. A contrarian trader (who bets against momentum traders) short-sells the momentum trader's bet at $1.20. More short sellers pile in and drive the price down to $1/crate.

Now, a new rumor circulates, about conditions being ripe for a bounteous mango harvest, so more short-sellers appear, and push the price to $0.90/crate. This tempts the original broker back in, and he buys your crop back at $1/crate.

That's when the harvest comes. You bring in the mangoes. They go to market, and fetch $1.10/crate.

This is finance – a welter of transactions, only one of which (selling your mangoes to people who eat them) involves the real economy. Everything else is "speculation on the movement of prices." The nine transactions that took place between your planting the crop and someone eating the mangoes are all zero sum – every trade has an evenly matched winner and loser, and when you sum them all up, they come out to zero. In other words, no value was created.
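The zero-sum bookkeeping is easy to verify. Here is the mango chain as a toy ledger (a deliberate simplification: it treats every step, including the short sales, as a straight transfer at the quoted price): every trade debits the buyer exactly what it credits the seller, so the speculative churn nets to zero.

```python
ledger = {}

def trade(buyer, seller, price):
    # One transaction: the buyer pays exactly what the seller receives.
    ledger[buyer] = ledger.get(buyer, 0.0) - price
    ledger[seller] = ledger.get(seller, 0.0) + price

trade("broker", "farmer", 1.00)          # rights to the future harvest
trade("dealer", "broker", 1.10)          # bad-weather rumor
trade("speculator", "dealer", 1.20)
trade("momentum", "speculator", 1.30)    # the side bets begin
trade("contrarian", "momentum", 1.20)
trade("shorts", "contrarian", 1.00)
trade("more_shorts", "shorts", 0.90)     # good-harvest rumor
trade("broker", "more_shorts", 1.00)     # the broker buys the crop back

# Summed over every party, the gains and losses cancel exactly:
# for every winner a loser, and no value created.
total = round(sum(ledger.values()), 10)  # 0.0
```

Individual parties end up ahead or behind, but the system as a whole has produced nothing; only the farmer's final sale to mango-eaters touches the real economy.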

This is the finance sector. In a world where the real economy generates $105 trillion/year, the financial derivatives market adds up to $667 trillion/year. This is "the biggest business in the world" – and it's useless. It produces nothing. It adds no value.

If you work a job where you do something useful, you are on the losing side of this economy. All the real money is in this socially useless, no-value-creating, hypertrophied, metastasized finance sector. Every gain in finance is matched by a loss. It all amounts to – literally – nothing.

So that's what tempted me into one more blog post for the year – an absolutely perfect distillation of the uselessness of "the biggest business in the world," whose masters are the degenerate gamblers who buy and sell our politicians, set our policy, and control our lives. They're the ones enshittifying the internet, burning down the planet, and pushing Elon Musk towards trillionairedom.

It's their world, and we just live on it.

For now.

(Image: Sam Valadi, CC BY 2.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Star Wars droidflake https://twitpic.com/3guwfq

#15yrsago TSA misses enormous, loaded .40 calibre handgun in carry-on bag https://web.archive.org/web/20101217223617/https://abclocal.go.com/ktrk/story?section=news/local&id=7848683

#15yrsago Brazilian TV clown elected to high office, passes literacy test https://web.archive.org/web/20111217233812/https://www.google.com/hostednews/afp/article/ALeqM5jmbXSjCjZBZ4z8VUcAZFCyY_n6dA?docId=CNG.b7f4655178d3435c9a54db2e30817efb.381

#15yrsago My Internet problem: an abundance of choice https://www.theguardian.com/technology/blog/2010/dec/17/internet-problem-choice-self-publishing

#10yrsago LEAKED: The secret catalog American law enforcement orders cellphone-spying gear from https://theintercept.com/2015/12/16/a-secret-catalogue-of-government-gear-for-spying-on-your-cellphone/#10yrsago

#10yrsago Putin: Give Sepp Blatter the Nobel; Trump should be president https://www.theguardian.com/football/2015/dec/17/sepp-blatter-fifa-putin-nobel-peace-prize

#10yrsago Star Wars medical merch from Scarfolk, the horror-town stuck in the 1970s https://scarfolk.blogspot.com/2015/12/unreleased-star-wars-merchandise.html

#10yrsago Some countries learned from America’s copyright mistakes: TPP will undo that https://www.eff.org/deeplinks/2015/12/how-tpp-perpetuates-mistakes-dmca

#10yrsago No evidence that San Bernardino shooters posted about jihad on Facebook https://web.archive.org/web/20151217003406/https://www.washingtonpost.com/news/post-nation/wp/2015/12/16/fbi-san-bernardino-attackers-didnt-show-public-support-for-jihad-on-social-media/

#10yrsago Exponential population growth and other unkillable science myths https://web.archive.org/web/20151217205215/http://www.nature.com/news/the-science-myths-that-will-not-die-1.19022

#10yrsago UK’s unaccountable crowdsourced blacklist to be crosslinked to facial recognition system https://arstechnica.com/tech-policy/2015/12/pre-crime-arrives-in-the-uk-better-make-sure-your-face-stays-off-the-crowdsourced-watch-list/

#1yrago Happy Public Domain Day 2025 to all who celebrate https://pluralistic.net/2024/12/17/dastar-dly-deeds/#roast-in-piss-sonny-bono


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources: John Naughton (https://memex.naughtons.org/).

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


On Kindness, Power, and Hypocrisy


three close-up portraits of Stephen Miller, Karoline Leavitt, and Marco Rubio

Earlier this week, Vanity Fair published a two-part story about the Trump regime's "inner circle", including extensive interviews with his chief of staff, who was openly critical of the people she works with, from Trump on down. The story caused a stir, and so did the photos that accompanied the piece, taken by Christopher Anderson.

The Washington Post interviewed Anderson about the photos. The interview is interesting throughout but Anderson’s answer to the final question is…I don’t even know how to describe it; read it for yourself:

Q: Were there moments that you missed? Anything that happened that’s on the cutting room floor?

A: I don’t think there’s anything I missed that I wish I’d gotten. I’ll give you a little anecdote: Stephen Miller was perhaps the most concerned about the portrait session. He asked me, “Should I smile or not smile?” and I said, “How would you want to be portrayed?” We agreed that we would do a bit of both. And then when we were finished, he comes up to me to shake my hand and say goodbye. And he says to me, “You know, you have a lot of power in the discretion you use to be kind to people.” And I looked at him and I said, “You know, you do, too.”

In some sort of bizarro version of our world, where people somehow aren’t themselves, Miller may have reflected on Anderson’s comment, may have thought about all the pain, anguish, and death caused by the exercise of his power, may have felt some regret, a chink in the armor that would grow over time, leading to a softening of his perspective and approach. But we live in the real world; Miller knows exactly what he’s doing and does not want to be kind. He wants to be unkind, to rip mother from child. I’m reminded of A.R. Moxon’s thoughts on hypocrisy:

It’s best to understand that fascists see hypocrisy as a virtue. It’s how they signal that the things they are doing to people were never meant to be equally applied.

It’s not an inconsistency. It’s very consistent to the only true fascist value, which is domination.

It’s very important to understand, fascists don’t just see hypocrisy as a necessary evil or an unintended side-effect.

It’s the purpose. The ability to enjoy yourself the thing you’re able to deny others, because you dominate, is the whole point.

Kindness for me and not for thee.

Tags: A.R. Moxon · Christopher Anderson · Donald Trump · photography · politics · Stephen Miller
