
New adventures in AI hype: “Our language models are so ‘conscious’ we need to give them rights”


My DMs this morning were filled with journalists and friends asking me about a story that the increasingly dubious New York Times ran yesterday. This one:

I am not going to read Kevin’s column, and I don’t think you need to, either. What he wrote about coding was wildly naive, and he couldn’t be bothered to ask whether the system would even extend to Pac-Man, let alone debugging. His near-religious endorsement of the imminence of AGI kind of speaks for itself, as does his apparent aversion to consulting seriously with experts who might disagree with his panglossian takes. His shtick is to write with awe, and to think uncritically; I have honestly had enough.

§

That said, we have seen this movie before. The last time I wrote about it, in June 2022, I called it nonsense on stilts.

For those who have forgotten the story, an AI safety engineer at Google, Blake Lemoine, felt that an LLM-based system called LaMDA, which nobody remembers anymore, had achieved “sentience”. The eternally-sharp Abeba Birhane nailed it then with a tweet that she could equally repost today, in Roose’s honor:

The essence of my own argument then, back in June 2022, applies as much to today’s LLMs as to those of three years ago:

To be sentient is to be aware of yourself in the world; LaMDA simply isn’t. It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some humans into thinking it was human), and Eugene Goostman, a wise-cracking chatbot impersonating a 13-year-old boy that won a scaled-down version of the Turing Test. None of the software in either of those systems has survived in modern efforts at “artificial general intelligence”, and I am not sure that LaMDA and its cousins will play any important role in the future of AI, either. What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.

I am not saying that no software ever could connect its digital bits to the world, a la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine.

Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple weeks ago, critiquing systems like LaMDA that are known as “language models”, and making the point that they don’t understand language in the sense of relating sentences to the world, but just sequences of words to one another.

Search and replace LaMDA with Claude, and it all still applies. I still don’t remotely see an argument that current models are sentient, nor any argument that scaling a model makes it more conscious, even if it can mimic more language from humans discussing consciousness. Claude does what LaMDA does better because it has more data, but I don’t see any real argument that Claude is any more sentient than a web browser.

Erik Brynjolfsson is often more bullish on AI than I am, but his 2022 commentary on the whole LaMDA affair, too, could be reposted today without changing a word:

Sad that The New York Times fell for it.

§

You can look at what Anthropic is doing (evaluating the “welfare” of its models) from the standpoint of the philosophy of consciousness (asking very reasonable questions like what would count as consciousness?, how would we measure it in an animal or a machine?, and so on), but I think it is better to look at what is happening from the perspective of commerce. Anthropic is a business (which incidentally neglects to respect the rights of the artists and writers whose work they nick). I suspect the real move here is simply, as it so often is, to hype the product: basically by saying, hey, look at how smart our product is, it’s so smart we need to give it rights.

Just wait ‘til you see our spreadsheets!

Gary Marcus is shaking his head.


cjheinz (19 hours ago): A nice way to put it: "like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what they mean."

Terminations


Looks like The Terminator took out a bunch of Starlink satellites. (SpaceWeather.com)

cjheinz (1 day ago): Interesting. Good info.

Pluralistic: Every complex ecosystem has parasites (24 Apr 2025)






A rainforest in Chiapas, green and intergrown.

Every complex ecosystem has parasites (permalink)

Patrick "patio11" McKenzie is a fantastic explainer, the kind of person who breaks topics down in ways that stay with you, and creep into your understanding of other subjects, too. Take his 2022 essay, "The optimal amount of fraud is non-zero":

https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/

It's a very well-argued piece, and here's the nut of it:

The marginal return of permitting fraud against you is plausibly greater than zero, and therefore, you should welcome greater than zero fraud.

In other words, if you allow some fraud, you will also allow through a lot of non-fraudulent business that would otherwise trip your fraud meter. Or, to put it another way, the only way to prevent all fraud is to chase away a large proportion of your customers, whose transactions are in some way abnormal or unexpected.
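Here is that tradeoff as a toy program. Every number in it (margin, fraud rate, how cleanly a risk score separates good orders from bad) is invented for illustration; the shape of the result, not the figures, is the point:

```python
# Toy model of "the optimal amount of fraud is non-zero". All numbers
# (margin, fraud rate, score distributions) are invented for illustration.
import random

random.seed(0)
N, FRAUD_RATE, MARGIN, FRAUD_LOSS = 100_000, 0.01, 10, 500

def risk_score(is_fraud: bool) -> float:
    # An imperfect detector: fraud scores higher on average, but overlaps
    # with legitimate orders, so strictness always costs you real customers.
    return random.gauss(0.7 if is_fraud else 0.3, 0.15)

orders = [random.random() < FRAUD_RATE for _ in range(N)]
scored = [(risk_score(is_fraud), is_fraud) for is_fraud in orders]

for cutoff in (0.2, 0.4, 0.5, 0.6, 0.8, 1.5):   # 1.5 ~ accept everything
    accepted = [is_fraud for s, is_fraud in scored if s < cutoff]
    frauds = sum(accepted)
    profit = (len(accepted) - frauds) * MARGIN - frauds * FRAUD_LOSS
    print(f"cutoff {cutoff}: {frauds:4d} frauds let through, profit ${profit:,}")
```

Run it and the most profitable cutoff is an interior one: the setting strict enough to block virtually all fraud also blocks so many legitimate orders that it loses to a filter that knowingly lets some fraud through.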

Another great explainer is Bruce Schneier, the security expert. In the wake of 9/11, lots of pundits (and senior government officials) ran around saying, "No price is too high to prevent another terrorist attack on our aviation system." Schneier had a foolproof way of shutting these fools up: "Fine, just ground all civilian aircraft, forever." Turns out, there is a price that's too high to pay for preventing air-terrorism.

Latent in these two statements is the idea that the most secure systems are simple, and while simplicity is a fine goal to strive for, we should always keep in mind the maxim attributed to Einstein, "Everything should be made as simple as possible, but not simpler." That is to say, some things are just complicated.

20 years ago, my friend Kathryn Myronuk and I were talking about the spam wars, which were raging at the time. The spam wars were caused by the complexity of email: as a protocol (rather than a product), email is heterogeneous. There are lots of different kinds of email servers and clients, and many different ways of creating and rendering an email. All this flexibility makes email really popular, and it also means that users have a wide variety of use-cases for it. As a result, identifying spam is really hard. There's no reliable automated way of telling whether an email is spam or not – you can't just block a given server, or anyone using a kind of server software, or email client. You can't choose words or phrases to block and only block spam.
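To see why, consider the crudest possible filter, a keyword blocklist. The list and the messages below are invented, but the failure mode is general: any phrase common in spam also shows up in mail people want.

```python
# Why naive keyword blocking can't separate spam from legitimate mail:
# the same words appear in both. Blocklist and messages are invented.
BLOCKLIST = {"free", "winner", "viagra", "act now"}

def flagged(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

print(flagged("ACT NOW!!! You are a WINNER of a FREE cruise"))  # True: spam
print(flagged("Your pharmacy refill (Viagra, 50mg) is ready"))  # True, but legitimate
print(flagged("Feel free to reschedule our meeting"))           # True, but legitimate
```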

Many solutions were proposed to this at the height of the spam wars, and they all sucked, because they all assumed that the way the proposer used email was somehow typical, and thus that we could safely build a system to block things that were very different from this "typical" use without catching too many dolphins in our tuna nets:

https://craphound.com/spamsolutions.txt

So Kathryn and I were talking about this, and she said, "Yeah, all complex ecosystems have parasites." I was thunderstruck. The phrase entered my head and never left. I even gave a major speech with that title later that year, at the O'Reilly Emerging Technology Conference:

https://craphound.com/complexecosystems.txt

Truly, a certain degree of undesirable activity is the inevitable price you pay once you make something general purpose, generative, and open. Open systems – like the web, or email – succeed because they are so adaptable, which means that all kinds of different people with different needs find ways to make use of them. The undesirable activity in open systems is, well, undesirable, and it's valid and useful to try to minimize it. But minimization isn't the same as elimination. "The optimal amount of fraud is non-zero," because "everything should be made as simple as possible, but not simpler." Complexity is generative, but "all complex ecosystems have parasites."

America is a complex system. It has, for example, a Social Security apparatus that has to serve more than 65 million people. By definition, a cohort of 65 million people will experience 65 one-in-a-million outliers every day. Social Security has to accommodate 65 million variations on the (surprisingly complicated) concept of a "street address":

https://gist.github.com/almereyda/85fa289bfc668777fe3619298bbf0886

It will have to cope with 65 million variations on the absolutely, maddeningly complicated idea of a "name":

https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/
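The outlier arithmetic above is worth making concrete; a quick sanity check, using the standard Poisson approximation for rare events:

```python
# 65 million people x a one-in-a-million daily event = 65 expected events
# per day, and essentially zero chance of a day with none at all.
import math

n, p = 65_000_000, 1e-6
expected = n * p                    # 65.0 outliers per day
p_none_today = math.exp(-expected)  # Poisson estimate: about 6e-29
print(expected, p_none_today)
```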

In cybernetics, we say that a means of regulating a system must be capable of representing as many states as the system itself – that is, if you're building a control box for a thing with five functions, the box needs at least five different settings:

http://pespmc1.vub.ac.be/REQVAR.html
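That maxim is Ashby's law of requisite variety, and a toy program shows the idea (the numbers are mine, purely illustrative):

```python
# Ashby's law in miniature: each disturbance d needs a response r such that
# d - r == 0. With fewer responses than disturbances, outcomes drift.
def outcomes(n_disturbances: int, n_responses: int) -> set:
    results = set()
    for d in range(n_disturbances):
        r = min(d, n_responses - 1)  # the closest response the box has
        results.add(d - r)           # 0 means fully regulated
    return results

print(outcomes(5, 5))  # {0}: enough variety, the outcome holds steady
print(outcomes(5, 3))  # {0, 1, 2}: the control box is too simple
```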

So when we're talking about managing something as complicated as Social Security, we need to build a Social Security Administration that is just as complicated. Anything that complicated is gonna have parasites – once you make something capable of managing the glorious higgledy-piggledy that is the human experience of names, dates of birth, and addresses, you will necessarily create exploitable failure modes that bad actors can use to steal from Social Security. You can build good fraud detection systems (as the SSA has), and you can investigate fraud (as the SSA does), and you can keep this to a manageable number – in the case of the SSA, that number is well below one percent:

https://www.congress.gov/crs_external_products/IF/PDF/IF12948/IF12948.2.pdf

But if you want to reduce Social Security fraud from "a fraction of one percent" to "zero percent," you can either expend a gigantic amount of money (far more than you're losing to fraud) to get a little closer to zero – or you can make Social Security far simpler. For example, you could simply declare that anyone whose life and work history can't fit in a simple database schema is not eligible for Social Security, kick tens of millions of people off the SSI rolls, and cause them to lose their homes and starve on the streets. This isn't merely cruel, it's also very, very expensive, since homelessness costs the system far more than Social Security. The optimum amount of fraud is non-zero.

Conservatives hate complexity. That's why the Trump administration banned all research grants for proposals that contained the word "systemic" (as a person with so-far-local cancer, I sure worry about what happens when and if my lymphoma becomes systemic). I once described the conservative yearning for "simpler times" as a desire to be a child again. After all, the thing that made your childhood "simpler" wasn't that the world was less complicated – it's that your parents managed that complexity and shielded you from it. There's always been partner abuse, divorce, gender minorities, mental illness, disability, racial discrimination, geopolitical crises, refugees, and class struggle. The only people who don't have to deal with this stuff are (lucky) children.

Complexity is an unavoidable attribute of all complicated processes. Evolution is complicated, so it produces complexity. It's convenient to think about a simplified model of genes in which individual genes produce specific traits, but it turns out genes all influence each other, are influenced in turn by epigenetics, and that developmental factors play a critical role in our outcomes. From eye-color to gender, evolution produces spectra, not binaries. It's ineluctably (and rather gloriously) complicated.

The conservative project to insist that things can be neatly categorized – animal or plant, man or woman, planet or comet – tries to take graceful bimodal curves and simplify them into a few simple straight lines – one or zero (except even the values of the miniature transistors on your computer's many chips are never at "one" or "zero" – they're "one-ish" and "mostly zero").

As with Social Security, fraud in the immigration system is a negligible rounding error. The US immigration system is a baroque, ramified, many-tendriled thing (I have the receipts from the immigration lawyers who helped me get a US visa, a green card, and citizenship to prove it). It is already so overweighted with pitfalls and traps for the unwary that a good immigration lawyer might send you to apply for a visa with 600 pages of documentation (the most I ever presented) just to make sure that every possible requirement is met:

https://www.flickr.com/photos/doctorow/2242342898/in/photolist-zp6PxJ-4q9Aqs-2nVHTZK-2pFKHyf

After my decades of experience with the US immigration system, I am prepared to say that the system is now at a stage where it is experiencing sharply diminishing returns from its anti-fraud systems. The cost of administering all this complexity is high, and the marginal amount of fraud caught by any new hoop the system gins up for migrants to jump through will round to zero.

Which poses a problem for Trump and trumpists: having whipped up a national panic about out of control immigration and open borders, the only way to make the system better at catching the infinitesimal amount of fraud it currently endures is to make the rules simpler, through the blunt-force tactic of simply excluding people who should be allowed in the country. For example, you could ban college kids planning to spend the summer in the US on the grounds that they didn't book all their hotels in advance, because they're planning to go from city to city and wing it:

https://www.newsweek.com/germany-tourists-deported-hotel-maria-lepere-charlotte-pohl-hawaii-2062046

Or you could ban the only research scientist in the world who knows how to interpret the results of the most promising new cancer imaging technology because a border guard was confused about the frog embryos she was transporting (she's been locked up for two months now):

https://www.msn.com/en-us/health/other/horrified-harvard-scientists-ice-arrest-leaves-cancer-researchers-scrambling/ar-AA1DlUt8

Of course, the US has long operated a policy of "anything that confuses a border guard is grounds for being refused entry" but the Trump administration has turned the odd, rare outrage into business-as-usual.

But they can lock up or turn away as many people as they want, and they still won't get the amount of fraud to zero. The US is a complicated place. People have complicated reasons for entering the USA – work, family reunion, leisure, research, study, and more. The only immigration system that doesn't leak a little at the seams is an immigration system that is so simple that it has no seams – a toy immigration system for a trivial country in which so little is going on that everything is going on.

The only garden without weeds is a monoculture under a dome. The only email system without spam is a closed system managed by one company that only allows a carefully vetted cluster of subscribers to communicate with one another. The only species with just two genders is one wherein members who fit somewhere else on the spectrum are banished or killed, a charnel process that never ends because there are always newborns that are outside of the first sigma of the two peaks in the bimodal distribution.

A living system – a real country – is complicated. It's a system where people do things you'll never understand, for perfectly good reasons (and vice versa). To accommodate all that complexity, we need complex systems, and all complex ecosystems have parasites. Yes, you can burn the rainforest to the ground and plant monocrops in straight rows, but then what you have is a farm, not a forest, vulnerable to pests and plagues and fire and flood. Complex systems have parasites, sure, but complex systems are resilient. The optimal level of fraud is never zero, because a system that has been simplified to the point where no fraud can take place within it is a system so trivial and brittle as to be useless.



Object permanence (permalink)

#20yrsago French court bans DRM for DVDs https://web.archive.org/web/20050424023258/https://www.01net.com/editorial/274752/droit/la-justice-interdit-de-proteger-les-dvd-contre-la-copie/

#20yrsago Why governments make stupid copyrights https://www.ft.com/content/39b697dc-b25e-11d9-bcc6-00000e2511c8

#20yrsago London Review of Books’s personals are really dirty and funny https://web.archive.org/web/20050426005000/http://www.lrb.co.uk/classified/index.php#PERSONALS

#20yrsago German crooner’s megaphone-style covers of modern rock https://www.palast-orchester.de/en

#15yrsago British Airways leaves stranded passengers all over world, jacks up prices on tickets home https://www.theguardian.com/news/blog/2010/apr/23/iceland-volcano-thousands-passengers-stranded

#15yrsago Google highlights fair use defense to YouTube takedowns https://publicpolicy.googleblog.com/2010/04/content-id-and-fair-use.html

#15yrsago Microsoft wins its $100M tax-break and amnesty from broke-ass Washington State https://web.archive.org/web/20100429061500/http://microsofttaxdodge.com/2010/04/microsoft-gets-nevada-royalty-tax-cut-and-tax-amnesty.html?all

#10yrsago Privilege: you’re probably not the one percent https://jacobin.com/2015/04/1-99-percent-class-inequality

#10yrsago Marissa Mayer makes 1,100 Yahooers jobless, calls it a “remix” https://web.archive.org/web/20150425183847/http://news.dice.com/2015/04/22/yahoo-called-its-layoffs-a-remix-dont-do-that/?CMPID=AF_SD_UP_JS_AV_OG_DNA_

#10yrsago Canadian Big Content spokesjerk says the public domain is against the public interest https://www.michaelgeist.ca/2015/04/canadian-recording-industry-works-entering-the-public-domain-are-not-in-the-public-interest/

#5yrsago Riot Baby https://pluralistic.net/2020/04/23/riot-baby/#Tochi-Onyebuchi

#5yrsago Mayor of Las Vegas says the "free market" will decide what's safe https://pluralistic.net/2020/04/23/riot-baby/#carolyn-goodman

#1yrago "Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs



Upcoming books (permalink)

  • Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus, Giroux, October 7 2025
    https://us.macmillan.com/books/9780374619329/enshittification/

  • Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • Enshittification, Why Everything Suddenly Got Worse and What to Do About It (the graphic novel), Firstsecond, 2026

  • The Memex Method, Farrar, Straus, Giroux, 2026



Colophon (permalink)


Currently writing:

  • Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025

Latest podcast: Nimby and the D-Hoppers CONCLUSION https://craphound.com/stories/2025/04/13/nimby-and-the-d-hoppers-conclusion/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

cjheinz (1 day ago): For decades I've said I was going to make "Embrace Complexity" bumper stickers, but I never got around to it.

OpenAI’s dirty December o3 demo doesn’t readily replicate

“draw an image representing a benchmark result that might have been bogus”

As a scientist, OpenAI’s widely-watched o3 livestream of December 20th (“Day 12 of Shipmas”), which Francois Chollet reported at the time as a breakthrough, made me sick to my stomach. I said so at the time, in my essay “o3 ‘ARC AGI’ postmortem megathread: why things got heated, what went wrong, and what it all means”. There were problems with experimental design, misleading graphs that left out competing work, and more.

Later, after I wrote that piece, I discovered that one of their demos, on FrontierMath, was fishy in a different way: OpenAI had privileged access to data their competitors didn’t have, but didn’t acknowledge this. They also (if I recall) failed to disclose their financial contribution to developing the test. And then a couple weeks ago we all saw that current models struggled mightily on the USA Math Olympiad problems that were fresh out of the oven, hence hard to prepare for in advance.

Today I learned that the story is actually even worse than all that: the crown jewel they reported in the demo, the 75% on Francois Chollet’s ARC test (once called ARC-AGI), doesn’t readily replicate. Mike Knoop from the ARC team reports “We could not get complete data for o3 (high) test due to repeat timeouts. Fewer than half of tasks returned any result exhausting >$50k test budget. We really tried!” The model released as “o3 (high)”, presumed to be their best, can’t readily yield whatever was reported in December under the name o3.

The best stable result the ARC team could get from experimenting with the latest batch of publicly-testable OpenAI models was 56%, with a different model called o3-medium: still impressive, still useful, but a long way from the surprising 75% that was advertised.

And the lower 56% is not much different from what Jacob Andreas’s lab at MIT got in November. It’s arguably worse; if I followed correctly, and if the measures are the same, the Andreas lab’s best score was actually higher, at 61%.

Four months later, OpenAI, with its ever more confusing nomenclature, has released a bunch of models with o3 in the title, but none of them can reliably do what was in the widely viewed and widely discussed December livestream. That’s bad.

Forgive me if I am getting Theranos vibes.

§

Just a couple weeks ago Yafah Edelman at LessWrong reported a related finding: “OpenAI reports that o3-mini with high reasoning and a Python tool receives a 32% on FrontierMath. However, Epoch's official evaluation received only 11%”; some possible explanations are given, but this is again a very bad look.

And guess what, sometimes o3 apparently cheats, reporting answers that are available on the internet without actually doing the work, as Toby Ord explains in a long thread on X. Essentially Ord argues that o3 is looking up the answer, not computing it.

This in turn is reminiscent of something similar that TransluceAI reported last week, in another long thread (too complex to summarize quickly here, but worth reading):

The truth is that we don’t really know how good o3 is or isn’t, and nobody should ever take OpenAI’s video presentations particularly seriously again, until they have been fully vetted by the community. The fact that their flashy result on ARC couldn’t readily be replicated speaks volumes.

§

My trust in OpenAI has never been high; at this point it is extremely low.

And given that Meta also appears to have just juiced some benchmarks, the whole thing is starting to look like a bunch of over-promisers scrambling to make things look better than they really are.

Dr Gary Marcus, Professor Emeritus at NYU, has done enough article reviewing in his career to know when people are trying to pull a fast one.


cjheinz (2 days ago): "Theranos vibes". Oops.

Regulating AI Behavior with a Hypervisor


Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.”

Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.

The basic idea is that many of the AI safety policies proposed by the AI community lack robust technical enforcement mechanisms. The worry is that, as models get smarter, they will be able to evade those safety policies. The paper proposes a set of technical enforcement mechanisms that could work against these malicious AIs.
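For a sense of what the software layer of such containment looks like, here is a generic Unix sandboxing sketch. This is emphatically not the paper's Guillotine design (whose whole argument is that this layer alone is insufficient against a capable adversary); the workload script name and the specific limits are hypothetical:

```python
# Generic software-level containment: run an untrusted workload in a child
# process with hard resource ceilings and an external kill switch. Guillotine
# assumes a rogue AI will attack exactly this layer, hence the paper's case
# for hardware co-design and physical fail-safes on top of it.
import resource
import subprocess

def apply_limits():
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))           # 60s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (2 << 30, 2 << 30))  # 2 GiB of memory
    resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))        # few open files

proc = subprocess.Popen(
    ["python3", "untrusted_model_runner.py"],  # hypothetical workload
    preexec_fn=apply_limits,                   # limits apply to the child only
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
try:
    out, err = proc.communicate(timeout=120)   # wall-clock watchdog
except subprocess.TimeoutExpired:
    proc.kill()                                # the software "guillotine"
```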

cjheinz (2 days ago): Somehow I don't think this will protect us from the Bullshit Apocalypse.

An Inadequate AI Solution


Let’s take the example of printers, which tend to suck. Here is one example of suckage at work, and the instructive hell it put me through:


The Canon MG3600 is my wife’s. It sits by her desk and does a good-enough job.

The Epson WF-3520 is mine. It sits by my desk and leaks black ink. It has been doing that since I bought it in a hurry for $35 at a Goodwill store when we first got to town in the summer of ’21. The drip runs at about a drop per month, but otherwise it’s been okay. I use it mostly for scanning. It’s good at that.

Or was, until I encountered the problem above.

Answers to questions about the error code proved fruitless in a series of searches (such as this one) and at the Epson link above.

So, naturally, I went shopping for a new printer. My only requirement was that it would scan at least as well as this old Epson.

Alas, scanning is a checkbox for “all-in-one” printers. Here’s an example from PC Magazine. Nothing much about scanning. This makes sense, because demand for high-quality scanning is relatively low, and most printers are at least okay at it. But Consumer Reports (bless them) does rank scanning quality, and it gave me this in reply to an Ask CR question:

The Canon PIXMA G3262 is not one of CR’s recommended printers, rating a middling 47 out of a possible 100 score. Its problem areas are low predicted reliability, low versatility, and slow text speed. The only thing it’s good at is scanning. The printer page on Amazon is also not encouraging. It has only two user ratings, and both call it junk.

The Brother MFC‐L8905CDW is $699.99 at Amazon, which is way out of my range.

The Canon Pixma MG7720 is now in the used market, and (far as I can tell) available only on eBay, and none of the offerings there look interesting.

So I went looking just for scanners, and narrowed my choice down to the Canon CanoScan LiDE 400, which is just $78.86 on Amazon. I put it in my cart, then I thought, Wait a minute. What if I asked an AI what that error code means?

So I did, with ChatGPT. The dialog:

Me:

What does “Failed to open connection to device (-21345)” mean on a Macbook Pro running Sequoia 15.3.1 that is trying to connect to an Epson WF-3520 printer?

ChatGPT:

“Failed to open a connection to the device (-21345)” on your MacBook Pro running macOS Sequoia 15.3.1, when attempting to connect to an Epson WF-3520 printer, typically indicates a communication issue between your Mac and the printer, particularly affecting scanning functionality. Here’s how you can troubleshoot and resolve this issue:

✅ Step-by-Step Solutions
1. Enable Local Network Access for the Printer/Scanner
macOS Sequoia introduces enhanced privacy settings that may block devices from accessing the local network. To ensure your Epson WF-3520 has the necessary permissions:​

  1. Navigate to System Settings.
  2. Select Privacy & Security.
  3. Click on Local Network.
  4. Locate your printer or scanner in the list and ensure it’s toggled on.​
    (sources: Apple Support Community, Reddit)

This step has resolved similar issues for many users experiencing the (-21345) error.

It listed other solutions, but that one worked. Three levels down in System Settings, the printer had been toggled off, presumably by Sequoia’s enhanced privacy settings. I am now back to scanning normally with my lousy old Epson.

I could take my learnings here in at least two directions:

  1. I could carp about Apple failing to tell customers about the hassles its enhanced privacy settings create. Ask ChatGPT “What kinds of things do Apple’s enhanced privacy settings (introduced with Sequoia) break or at least inconvenience?” and you’ll get a long list of answers, starting with the one I encountered with the printer. To my knowledge (always subject to correction), Apple hasn’t bothered to publish anything similar. (Wouldn’t providing customers with that kind of list be nice work for Apple Intelligence, which so far is an embarrassing fail?)
  2. I could compare how ChatGPT, Claude, Gemini, Meta, Copilot, Perplexity, and other early AI leaders perform on challenges like this one, but that would be a waste of time. That’s because, for all the good they do, they are not ours.

See, AIs of the usual sort don’t know anything more about our personal stuff than we tell them, and we may not want to do that, just for privacy reasons. Also, they tend to forget. And they are not ours. At best they are personalized, rather than personal.

A truly personal AI (or pAI) would be all mine and trained on what is under my control. For example, all my property, including these two printers. It would have data about my purchase and use of these printers. But it would also have access to data about identical or similar printers out in the world, and ways to engage the corporate AIs so we could solve problems together in far better ways than what we have now. In the process, the likes of Epson would have better intelligence about product usage than they’re getting now.
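As a sketch of what “under my control” might mean in the simplest case, imagine nothing fancier than a local database of your own gear that your own agent can consult before any vendor’s cloud gets involved (the file name, schema, and records here are all hypothetical):

```python
# A pAI's raw material: an owner-controlled record of owned devices,
# living on my machine rather than a vendor's. Schema and data invented.
import sqlite3

db = sqlite3.connect("my_stuff.db")
db.execute("""CREATE TABLE IF NOT EXISTS devices (
    name TEXT, model TEXT, acquired TEXT, notes TEXT)""")
db.executemany("INSERT INTO devices VALUES (?, ?, ?, ?)", [
    ("my printer", "Epson WF-3520", "2021-07", "leaks ink; scans well"),
    ("wife's printer", "Canon MG3600", "unknown", "does a good-enough job"),
])
db.commit()

# What a personal agent might hand to a troubleshooting session:
for row in db.execute("SELECT * FROM devices WHERE notes LIKE '%scan%'"):
    print(row)
```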

So we are talking here about agent-to-agent stuff, with the right capabilities and resources located in the right places.

Toward making that happen, there was lots of talk earlier this month at IIW XL (our 40th in 20 years) about MCP (the Model Context Protocol), A2A (agent-to-agent protocol), and GNAP (Grant Negotiation and Authorization Protocol), all of which hold promise. I’m not here to debate or advocate any of them, just to note that there are protocols out there, as options with which we can work.
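For the curious: MCP rides on JSON-RPC 2.0, so the traffic between an agent and a tool server looks roughly like the message below. The tool name and arguments are invented for illustration; the real method set and lifecycle are in the spec:

```python
# The approximate shape of an MCP tool invocation (JSON-RPC 2.0).
# "diagnose_printer" and its arguments are hypothetical examples.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "diagnose_printer",
        "arguments": {"model": "WF-3520", "error_code": "-21345"},
    },
}
print(json.dumps(request, indent=2))
```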

But what about the entities using those agents—companies making and selling products on one side, and customers using those products on the other? How can they learn from each other, and not just make transactions and then get frustrated with each other when problems show up?

One answer is Market Intelligence That Flows Both Ways. I wrote that in 2016: nine years ahead of its time, so far.

Looking back, I don’t think that kind of flow will happen without AI agents, and full agency for those agents’ operators, on both sides.

We need to be able to lead in our dances with companies in the open marketplace. But first, we need personal AI to help us gather and manage intel about our lives, our things, our relationships (including the medical and financial kind), our travels, and the services we use. We are not close to having any of that, and not helped by the Apples, Microsofts, Amazons, and Googles of the world, all of which want to confine our capacities to their walled gardens.

We don’t need to knock the walls of those gardens down. The revolution we need will simply obsolesce them.

That’s why it will not be supplied by APIs, or by anything that requires that we have an account. We need to be independent and self-sovereign: first parties who speak for themselves in first-person voices.

I’ll say more about all that in my next post and others to follow.

cjheinz (4 days ago): Interested to see where this is going ...