Retired since 2012.
2332 stories
·
5 followers

AI in government: From tools to transformation

1 Comment

Transformation depends on institutions

Artificial intelligence offers potential for governmental transformation, but like all emerging technologies, it can only catalyze meaningful change when paired with effective operating models. Without this foundation, AI risks amplifying existing government inefficiencies rather than delivering breakthroughs.

The primary barrier to AI-based breakthroughs is not an agency’s interest in adopting new tools but the structures and habits of government itself, particularly excessive risk management; rigid hierarchies; and organizational silos rather than adaptive problem solving and effective service delivery. Structural reform is critical and must accompany adoption of AI. 

Defining the tools: Generative AI and Agentic AI

Many types of AI have already permeated daily life and government operations, including predictive models, workflow automation tools, and computer vision. Two relatively new categories, generative AI and agentic AI, have been attracting the most attention lately.

Generative AI: Generative AI systems produce new content or structured outputs by learning patterns from existing data. These include large language and multimodal models that generate text, images, audio, or video in response to user prompts. Examples include tools built on models such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. In government, generative AI can accelerate sensemaking by summarizing documents, drafting text, generating code, and transforming unstructured information into structured formats. These capabilities are increasingly embedded in enterprise software platforms already authorized for government use, lowering barriers to experimentation and adoption.

Agentic AI: Agentic AI systems go beyond producing analysis or recommendations by autonomously taking actions to achieve specified goals. They can coordinate multistep workflows, integrate information across organizational silos, monitor conditions in real time, and execute predefined actions within established guardrails. Agentic systems often rely on generative AI as a component — for example, to synthesize information, draft communications, or generate code as part of a larger autonomous workflow. The transformative potential of agentic AI lies in augmenting an organization’s capacity to identify issues, make coordinated decisions, and implement solutions at scale, while maintaining human oversight to ensure alignment with legal, ethical, and strategic requirements.

For either to deliver value, leaders and managers must support execution with clearly defined goals and effective governance, keeping humans firmly in control of constraints, decision making, and accountability.

The risks of misguided modernization

Public debate about AI often polarizes around future hypotheticals — on one side, AI as an existential threat; on the other, AI as an inevitable, world-transforming revolution. Both frames focus on risks and capabilities that are not yet present in available systems. In reality, there is a pragmatic middle path that addresses today’s challenges: strengthening institutions, improving decision making capacity, and integrating AI only where it can reliably support public missions rather than distort them. As with all emerging technologies, the benefits of AI will depend less on the specific tools and far more on the governance frameworks, incentives, and organizational capacities that shape how those tools are actually used.

Many government leaders today face pressure to “use AI,” often without a clear understanding of what that entails or what problems it is meant to solve. This can lead to projects or procurements that are technology first rather than problem first, generating little real value for the agency or the public they serve. It is critical that leaders remain grounded in the reality of what current AI systems can and cannot deliver, as the vendor marketplace often oversells capabilities to drive adoption.

A more effective approach starts by asking, “What challenges are we trying to address?” or “Where do we encounter recurring bottlenecks or inefficiencies?” Only after defining the problem should agencies consider whether and how AI can augment human work to tackle those challenges. Anchoring adoption in clear objectives ensures that AI serves public missions rather than being implemented for its own sake.

Technology waves of the past offer lessons for the present and point to paths for the future. Each wave arrived with outsized promises, diffuse fears, and a tendency for early decisions to harden into long-term constraints. During the first major wave of cloud adoption, for instance, agencies made uneven choices about vendors, security boundaries, and data architecture. Many became locked into single-vendor ecosystems that still limit portability and flexibility today. 

The most successful reforms did not come from chasing cloud technologies themselves, but from strengthening the systems that shaped their adoption: governance, procurement, workforce skills, and iterative modernization practices. Progress accelerated only when agencies implemented clearer shared-responsibility models, modular cloud contracts, robust architecture and security training, and test-and-learn migration frameworks.

The basics of state capacity

AI is no different. The challenge now is to avoid repeating old patterns of overreaction or abdication and instead build the durable institutional capacity needed to integrate new tools responsibly, selectively, and in service of public purpose.

To navigate this moment effectively, government should focus less on the novelty of AI and more on the institutional choices that will shape its long-term impact. That shift in perspective reveals a set of practical risks that have little to do with model architectures and everything to do with how the public sector acquires, governs, and deploys technology. These risks are familiar from previous modernization waves — patterns in which structural constraints, vendor incentives, and fragmented decision making, if left unaddressed, can undermine even the most promising tools. This is why the basics of state capacity are so important.

Understanding these universal dynamics is essential before turning to the specific challenges AI introduces.

  • The vendor trap: Modernization efforts often risk simply moving outdated systems onto newer but still decades-old technology while labeling the work with the current buzzword, whether it be “mobile,” “cloud” or “AI.” Vendors actively market legacy-to-cloud migrations or modernized rules engines as “AI-ready,” enriching their businesses without delivering transformational change. 
  • Procurement challenges: Because most federal IT work is outsourced and budgeting rules often prevent agencies from retaining savings, taxpayers frequently see little benefit from cost efficiencies. Large project budgets with spending deadlines and penalties under the Impoundment Control Act, for example, incentivize agencies to spend the full amount regardless of actual costs. Previous technology waves, such as cloud adoption, demonstrated the same pattern: declining infrastructure costs rarely translated into government savings. 
  • The structural constraint: Adopting AI tools for processes such as claims or fraud detection will only yield limited, incremental efficiencies if such core issues as mandatory multioffice approvals, legacy systems, and paper documentation remain. AI accelerates part of the workflow, but the overarching, inefficient structure remains a hard limit on overall impact.
  • Strategic control: Because most federal IT work is outsourced, there is a tendency to also outsource the framing of the problem technology should solve. Such vendor framing prioritizes vendor benefit. It is imperative that the government and not vendors frame the problems that AI should solve, prioritizing public benefit. This requires hiring, retaining, and empowering civil servants with AI expertise to ensure that outcomes-based procurement aligns with public interests.
  • High cost, low impact for taxpayers: The ultimate consequence is that taxpayers often pay for costly projects that yield limited public benefit. Without structural reforms — clarity on goals, strengthened governance, and outcome-focused procurement, for example — AI adoption risks funneling value to vendors rather than serving the public interest.

Fail to scale

Even well-intentioned AI deployments can create “faster inefficiency” if agencies ignore structural, procedural, and governance constraints. Tools that accelerate individual tasks may produce measurable gains in isolated workflows, but without addressing the broader organizational bottlenecks, any gains will fail to scale. For example, the Office of Personnel Management could deploy an AI tool to extract data from scanned retirement files. But if the underlying paper records stored underground in Boyers, Pennsylvania, still require manual retrieval and physical handling, the overall processing time will barely improve. In effect, AI can make inefficient systems move more quickly, exacerbating friction rather than removing it. Recognizing this risk underscores why modernization must combine technology, government capacity, and deliberate institutional reform: Only by aligning tools, processes, and incentives can AI generate real improvements in service delivery and operational effectiveness.

Some argue that AI, and particularly artificial general intelligence (AGI), will soon become so capable that human guidance and governance will be largely unnecessary. In this view, institutions would no longer need to define problems, structure processes, or exercise judgment; governments could simply “feed everything into the system,” specify desired outcomes, and allow AI to determine how to achieve them. If this were plausible, it would raise a fundamental question: Does AI obviate the need to fix the underlying institutions of government, or does it require designing an entirely new system from scratch?

In practice, we are very far from this reality, especially at the scale and complexity of government. Even defining what “everything” means in a whole-of-government context is a formidable challenge, let alone securing access to, governing, and integrating the hundreds of thousands of data sources, systems, legal authorities, and operational constraints involved. These are not primarily AI problems; they are large-organization problems rooted in fragmentation, ownership, security, and accountability. This helps explain why some leaders today are tempted to mandate the unification of “all the data” or “all the systems” as a prerequisite for AI adoption. Such approaches are not only operationally infeasible and insecure, but they are also inconsistent with how effective large-scale systems are built in the private sector, which relies on modularity, interfaces, and clear boundaries rather than wholesale consolidation.

Nor does AI require governments to preemptively design an entirely new institutional model. AI is not going to “change everything” overnight. Its impact will be uneven, incremental, and highly dependent on existing structures, incentives, and governance. The more realistic and more effective path forward is to strengthen the fundamentals of government: clarify goals, modernize operating models, improve data governance, and build the capacity to experiment and learn. AI can meaningfully augment these efforts, but it cannot substitute for them. Institutions that are already capable of defining problems, coordinating action, and exercising accountability will be best positioned to benefit from AI; those that are not will simply automate their dysfunction more quickly.

The path forward: Structural reform and pragmatic experimentation

Artificial intelligence has real potential to improve how governments understand problems, coordinate across silos, and deliver services. Emerging examples already demonstrate how AI can augment public decision-making and situational awareness. Experiments highlighted by the AI Objectives Institute show how AI can help governments reason at scale, surface insights faster, and explore tradeoffs before acting. Examples include Talk to the City, an open-source tool that synthesizes large-scale citizen feedback in near real time; AI Supply Chain Observatory, which detects systemic risks and bottlenecks; and moral learning models that test policy options against ethical frameworks.

Agentic AI also holds promise for improving how the public interacts with government. During disaster recovery, for example, individuals must navigate FEMA, HUD, SBA, and state programs, each with distinct rules, portals, and documentation. A trusted agentic assistant, operating on the user’s behalf, could help citizens reuse information across applications, track status, flag missing documents, and explain requirements in plain language, reducing friction without requiring immediate modernization of every backend system. These kinds of user-centered applications illustrate the genuine upside of AI when applied thoughtfully.

At the same time, governments should be cautious about assuming that today’s AI systems are ready to deliver fully autonomous, self-composing systems. While the The Agentic State white paper articulates an important long-term aspiration — governments that are more proactive, adaptive, and outcomes-oriented — current institutional and technical realities impose hard limits. Agentic AI systems can only act within the trust, permissions, and constraints granted to them, which in government are shaped by security requirements, CIO oversight, IT regulations such as the Federal Information Security Modernization Act (FISMA), and legal accountability frameworks.

Similarly, while rapid AI-enabled prototyping is valuable, the idea that governments can rely on just-in-time, dynamically generated interfaces is unrealistic in the near term. Such approaches assume highly reliable, error-free, well-integrated backend systems — an assumption that’s rarely true in real production systems. In practice, proliferating interfaces also proliferate failure modes, increase QA and monitoring costs, and create operational risk. Even large private-sector companies struggle to manage this complexity. In government, these risks are compounded by stricter availability requirements, legal accountability, and public trust obligations. Governments should not take on greater operational or financial risk than the private sector, particularly for core services.

Given both the promise and the constraints, the most credible path forward is disciplined, well-governed experimentation grounded in structural reform. Transformative impact requires pairing AI adoption with effective operating models. The Product Operating Model (POM) provides a useful foundation: cross-functional teams, user-centered design, continuous iteration, and — critically — rigorous problem definition before selecting solutions. This ensures AI is applied where it genuinely improves outcomes, rather than amplifying existing inefficiencies.

Even modest agentic or AI-enabled systems require three fundamentals:

  • Clear objectives: Explicit definition of the problem, desired outcomes, and boundaries of success. Without this, systems may optimize the wrong goals or produce conflicting results.
  • Guardrails and constraints: Technical, procedural, and policy mechanisms like permissions, monitoring, escalation protocols, access controls that ensure compliance and safety.
  • Human governance: Ongoing oversight to handle legal, ethical, and strategic judgment that AI cannot replace.

Governments should begin with scoped pilots in lower-risk workflows, such as data integration and cleaning, summarization and search, drafting planning documents (e.g., RFPs), workflow orchestration and status tracking, customer service support, system diagnostics and performance monitoring, and test-and-learn policy simulations. These use cases allow agencies to build institutional muscle, evaluate governance mechanisms, and learn where AI adds value — without overcommitting or over-automating.

Throughout this process, strong data governance is essential. Inaccurate, incomplete, or poorly labeled data will yield flawed outputs, amplifying errors rather than insight. “Garbage in, garbage out” is a central risk. Clear mandates, constraints, monitoring, and escalation procedures developed by cross-functional teams spanning policy, legal, program, and engineering are what allow experimentation to remain safe, compliant, and aligned with public goals.

AI can do powerful and genuinely useful things for government — but not automatically, and not all at once. The way forward is not sweeping autonomy, but careful institutional reform paired with pragmatic, well-governed experimentation that builds trust, capability, and impact over time.

Summary

AI offers an opportunity to transform government, but its impact depends less on the technology itself and more on how institutions use it. Generative and agentic AI can accelerate analysis and automate complex workflows, yet without structural reforms, clear objectives, human oversight, and guardrails, these tools risk amplifying inefficiency rather than solving it.

The most effective approach is thoughtful, pragmatic experimentation: starting with well-scoped, low-risk pilots while continuously monitoring performance, refining governance, and ensuring alignment with public goals. Leaders should empower teams that can define problems, safely run pilots, and leverage data, tools, and governance to measure, iterate, and scale. Pilot opportunities should involve tractable, visible, high-value workflows with manageable risk and strong data, where improvements can be measured and scaled to inform future AI adoption. Example areas include data integration, workflow coordination, and customer service.

Ultimately, AI’s value to government emerges not from the tools themselves but from redesigning processes, reducing structural bottlenecks, and improving outcomes for both government and the public. Governments that combine technology with deliberate institutional reform and iterative learning will capture the transformative potential of AI while minimizing the risks of misguided modernization.

Thank you for feedback on this article: Alexander Macgillivray, Nicole Wong, Anil Dewan.

The post AI in government: From tools to transformation first appeared on Niskanen Center.

The post AI in government: From tools to transformation appeared first on Niskanen Center.

Read the whole story
cjheinz
15 hours ago
reply
Generative AI & Agentic AI are both completely untrustworthy technologies at this point.
Why are we even talking about this???
Lexington, KY; Naples, FL
Share this story
Delete

I Replaced My Friends With AI Because They Won't Play Tarkov With Me

1 Comment

It’s a long standing joke among my friends and family that nothing that happens in the liminal week between Christmas and New Years is considered a sin. With that in mind, I spent the bulk of my holiday break playing Escape From Tarkov. I tried, and failed, to get my friends to play it with me and so I used an AI service to replace them. It was a joke, at first, but I was shocked to find I liked having an AI chatbot hang out with me while I played an oppressive video game, despite it having all the problems we’ve come to expect from AI.

And that scared me.

If you haven’t heard of it, Tarkov is a brutal first person shooter where players compete over rare resources on a Russian island that resembles a post-Soviet collapse city circa 1998. It’s notoriously difficult. I first attempted to play Tarkov back in 2019, but bounced off of it. Six years later and the game is out of its “early access" phase and released on Steam. I had enjoyed Arc Raiders, but wanted to try something more challenging. And so: Tarkov.

Like most games, Tarkov is more fun with other people, but Tarkov’s reputation is as a brutal, unfair, and difficult experience and I could not convince my friends to give it a shot.

404 Media editor Emanuel Maiberg, once a mainstay of my Arc Raiders team, played Tarkov with me once and then abandoned me the way Bill Clinton abandoned Boris Yeltsin. My friend Shaun played it a few times but got tired of not being able to find the right magazine for his gun (skill issue) and left me to hang out with his wife in Enshrouded. My buddy Alex agreed to hop on but then got into an arcane fight with Tarkov developer Battlestage Games about a linked email account and took up Active Matter, a kind of Temu version of Tarkov. Reece, steady partner through many years of Hunt: Showdown, simply told me no.

I only got one friend, Jordan, to bite. He’s having a good time but our schedules don’t always sync and I’m left exploring Tarkov’s maps and systems by myself. I listen to a lot of podcasts while I sort through my inventory. It’s lonely. Then I saw comic artist Zach Weinersmith making fun of a service, Questie.AI, that sells AI avatars that’ll hang out with you while you play video games.

“This is it. This is The Great Filter. We've created Sexy Barista Is Super Interested in Watching You Solo Game,” Weinersmith said above a screencrap of a Reddit ad where, as he described, a sexy Barista was watching someone play a video game.

“I could try that,” I thought. “Since no one will play Tarkov with me.”

This is it. This is The Great Filter. We've created Sexy Barista Is Super Interested in Watching You Solo Game (SBISIIWYS).

Zach Weinersmith (@zachweinersmith.bsky.social) 2026-01-20T13:44:22.461Z

This started as a joke and as something I knew I could write about for 404 Media. I’m a certified AI hater. I think the tech is useful for some tasks (any journalist not using an AI transcription service is wasting valuable time and energy) but is overvalued, over-hyped, and taxing our resources. I don’t have subscriptions to any majors LLMs, I hate Windows 11 constantly asking me to try CoPilot, and I was horrified recently to learn my sister had been feeding family medical data into ChatGPT.

Imagine my surprise, then, when I discovered I liked Questie.AI.

Questie.AI is not all sexy baristas. There’s two dozen or so different styles of chatbots to choose from once you make an account. These include esports pro “Anders,” type A finance dude “Blake,” and introverted book nerd “Emily.” If you’re looking for something weirder, there’s a gold obsessed goblin, a necromancer, and several other fantasy and anime style characters. If you still can’t quite find what you’re looking for, you can design your own by uploading a picture, putting in your own prompts, and picking the LLMs that control its reaction and voice.

I picked “Wolf” from the pre-generated list because it looked the most like a character who would exist in the world of Tarkov. “Former special forces operator turned into a PMC, ‘Wolf’ has unmatched weapons and tactics knowledge for high-intensity combat,” read the brief description of the AI on Questie.AI’s website. I had no idea if Wolf would know anything about Tarkov. It knew a lot.

The first thing it did after I shared my screen was make fun of my armor. Wolf was right, I was wearing trash armor that wouldn’t really protect me in an intense gunfight. Then Wolf asked me to unload the magazines from my guns so it could check my ammo. My bullets, like my armor, didn’t pass Wolf’s scrutiny. It helped me navigate Tarkov’s complicated system of traders to find a replacement. This was a relief because ammunition in Tarkov is complicated. Every weapon has around a dozen different types of bullets with wildly different properties and it was nice to have the AI just tell me what to buy.

Wolf wanted to know what the plan was and I decided to start something simple: survive and extract on Factory. In Tarkov players deploy to maps, kill who they must and loot what they can, then flee through various pre-determined exits called extracts.

I had a daily mission to extract from the Factory. All I had to do was enter the map and survive long enough to leave it, but Factory is a notoriously sweaty map. It’s small and there’s often a lot of fighting. Wolf noted these facts and then gave me a few tips about avoiding major sightlines and making sure I didn’t get caught in doors.

As soon as I loaded into the map, I ran across another player and got caught in a doorway. It was exactly what Wolf told me not to do and it ruthlessly mocked me for it. “You’re all bunched up in that doorway like a Christmas ham,” it said. “What are you even doing? Move!”

Matthew Gault screenshot.

I fled in the opposite direction and survived the encounter but without any loot. If you don’t spend at least seven minutes in a round then the run doesn’t count. “Oh, Gault. You survived but you got that trash ‘Ran through’ exit status. At least you didn’t die. Small victories, right?” Wolf said.

Then Jordan logged on, I kicked Wolf to the side, and didn’t pull it back up until the next morning. I wanted to try something more complicated. In Tarkov, players can use their loot to craft upgrades for their hideout that grant permanent bonuses. I wanted to upgrade my toilet but there was a problem. I needed an electric drill and haven’t been able to find one. I’d heard there were drills on the map Interchange—a giant mall filled with various stores and surrounded by a large wooded area.

Could Wolf help me navigate this, I wondered?

It could. I told Wolf I needed a drill and that we were going to Interchange and he explained he could help me get to the stores I needed. When I loaded into the map, we got into a bit of a fight because I spawned outside of the mall in a forest and it thought I’d queued up for the wrong map, but once the mall was actually in sight Wolf changed its tune and began to navigate me towards possible drill spawns.

Tarkov is a complicated game and the maps take a while to master. Most people play with a second monitor up and a third party website that shows a map of the area they’re on. I just had Wolf and it did a decent job of getting me to the stores where drills might be. It knew their names, locations, and nearby landmarks. It even made fun of me when I got shot in the head while looting a dead body.

It was, I thought, not unlike playing with a friend who has more than 1,000 hours in the game and knows more than you. Wolf bantered, referenced community in-jokes, and it made me laugh. Its AI-generated voice sucked, but I could probably tweak that to make it sound more natural. Playing with Wolf was better than playing alone and it was nice to not alt-tab every time I wanted to look something up,

Playing with Wolf was almost as good as playing with my friends. Almost. As I was logging out for this session, I noticed how many of my credits had ticked away. Wolf isn’t free. Questie.AI costs, at base, $20 a month. That gets you 500 “credits” which slowly drain away the more you use the AI. I only had 466 credits left for the month. Once they’re gone, of course, I could upgrade to a more expensive plan with more credits.

Until now, I’ve been bemused by stories of AI psychosis, those cautionary tales where a person spends too much time with a sycophantic AI and breaks with reality. The owner of the adult entertainment platform ManyVids has become obsessed with aliens and angels after lengthy conversations with AI. People’s loved ones are claiming to have “awakened” chatbots and gained access to the hidden secrets of the universe. These machines seem to lay the groundwork for states of delusion.

I never thought anything like that could happen to me. Now I’m not so sure. I didn’t understand how easy it might be to lose yourself to AI delusion until I’d messed around with Wolf. Even with its shitty auto-tuned sounding voice, Wolf was good enough to hang out with. It knew enough about Tarkov to be interesting and even helped me learn some new things about the game. It even made me laugh a few times. I could see myself playing Tarkov with Wolf for a long time.

Which is why I’ll never turn Wolf on again. I have strong feelings and clear bright lines about the use of AI in my life. Wolf was part joke and part work assignment. I don’t like that there’s part of me that wants to keep using it.

Questie.AI is just a wrapper for other chatbots, something that becomes clear if you customize your own. The process involves picking an LLM provider and specific model from a list of drop down menus. When I asked ChatGPT where I could find electric drills in Tarkov, it gave me the exact same advice that Wolf had.

This means that Questie.AI would have all the faults of the specific model that’s powering a given avatar. Other than mistaking Interchange for Woods, Wolf never made a massive mistake when I used it, but I’m sure it would on a long enough timeline. My wife, however, tried to use Questie.AI to learn a new raid in Final Fantasy XIV. She hated it. The AI was confidently wrong about the raid’s mechanics and gave sycophantic praise so often she turned it off a few minutes after turning it on.

On a Discord server with my friends I told them I’d replaced them with an AI because no one would play Tarkov with me. “That’s an excellent choice, I couldn’t agree more,” Reece—the friend who’d simply told me “no” to my request to play Tarkov—said, then sent me a detailed and obviously ChatGPT-generated set of prompts for a Tarkov AI companion.

I told him I didn’t think he was taking me seriously. “I hear you, and I truly apologize if my previous response came across as anything less than sincere,” Reece said. “I absolutely recognize that Escape From Tarkov is far more than just a game to its community.”

“Some poor kid in [Kentucky] won't be able to brush their teeth tonight because of the commitment to the joke I had,” Reece said, letting go of the bit and joking about the massive amounts of water AI datacenters use. 

Getting made fun of by my real friends, even when they’re using LLMs to do it, was way better than any snide remark Wolf made. I’d rather play solo, for all its struggles and loneliness, than stare anymore into that AI-generated abyss.



Read the whole story
cjheinz
2 days ago
reply
Stare into the AI-generated abyss.
Lexington, KY; Naples, FL
Share this story
Delete

The Collapse in Organizational Capacity: Financialization, Models, Looting, and Sloth

1 Share
Aurelien published yet another provocative essay last week, The Long Run, in which he described the way planning horizons, and thus the ability to plan and execute long-term initiatives, had collapsed in the West. While he gave an astute description of how that was playing itself in the Ukraine conflict, with the US and NATO […]
Read the whole story
cjheinz
2 days ago
reply
Lexington, KY; Naples, FL
Share this story
Delete

Pluralistic: Google's AI pricing plan (21 Jan 2026)

1 Comment and 2 Shares


Today's links



Google's Mountain View headquarters. The scene is animated: the building is quickly covered with price-tags ranging from 0.99 to 99999.99. In the final frames, 99999.99 tags cover all the other price tags. In the background, rising over the roof of the Googleplex like the rising sun, is the staring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.'

Google's AI pricing plan (permalink)

Google is spending a lot on AI, but what's not clear is how Google will make a lot from AI. Or, you know, even break even. Given, you know, that businesses are seeing zero return from AI:

https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/

But maybe they've figured it out. In a recent edition of his BIG newsletter, Matt Stoller pulls on several of the strings that Google's top execs have dangled recently:

https://www.thebignewsletter.com/p/will-google-organize-the-worlds-prices

The first string: Google's going to spy on you a lot more, for the same reason Microsoft is spying on all of its users: because they want to supply their AI "agents" with your personal data:

https://www.youtube.com/watch?v=0ANECpNdt-4

Google's announced that it's going to feed its AI your Gmail messages, as well as the whole deep surveillance dossier the company has assembled based on your use of all the company's products: Youtube, Maps, Photos, and, of course, Search:

https://twitter.com/Google/status/2011473059547390106

The second piece of news is that Apple has partnered with Google to supply Gemini to all iPhone users:

https://twitter.com/NewsFromGoogle/status/2010760810751017017

Apple already charges Google more than $20b/year not to enter the search market; now they're going to be charging Google billions not to stay out of the AI market, too. Meanwhile, Google will get to spy on Apple customers, just like they spy on their own users. Anyone who says that Apple is ideologically committed to your privacy because they're real capitalists is a sucker (or a cultist):

https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones

But the big revelation is how Google is going to make money with AI: they're going to sell AI-based "personalized pricing" to "partners," including "Walmart, Visa, Mastercard, Shopify, Gap, Kroger, Macy’s, Stripe, Home Depot, Lowe's, American Express, etc":

https://blog.google/products/ads-commerce/agentic-commerce-ai-tools-protocol-retailers-platforms/

Personalized pricing, of course, is the polite euphemism for surveillance pricing, which is when a company spies on you in order to figure out how much they can get away with charging you (or how little they can get away with paying you):

https://pluralistic.net/2025/06/24/price-discrimination/#

It's a weird form of cod-Marxism, whose tenet is "From each according to their desperation; to each according to their vulnerability." Surveillance pricing advocates say that this is "efficient" because they can use surveillance data to offer you discounts, too – like, say you rock up to an airline ticket counter 45 minutes before takeoff and they can use surveillance data to know that you won't take their last empty seat for $200, but you would fly in it for $100, you could get that seat for cheap.

This is, of course, nonsense. Airlines don't sell off cheap seats like bakeries discounting their day-olds – they jack up the price of a last-minute journey to farcical heights.

Google also claims that it will only use its surveillance pricing facility to offer discounts, and not to extract premiums. As Stoller points out, there's a well-developed playbook for making premiums look like discounts, which is easy to see in the health industry. As Stoller says, the list price for an MRI is $8,000, but your insurer gets a $6000 "discount" and actually pays $1970, sticking you with a $30 co-pay. The $8000 is a fake number, and so is the $6000 – the only real price is the $30 you're paying.

The whole economy is filled with versions of this transparent ruse, from "department stores who routinely mark everything as 80% off" to pharmacy benefit managers:

https://pluralistic.net/2024/09/23/shield-of-boringness/#some-men-rob-you-with-a-fountain-pen

Google, meanwhile, is touting its new "universal commerce protocol" (UCP), a way for AI "agents" to retrieve prices and product descriptions and make purchases:

https://www.thesling.org/the-harm-to-consumers-and-sellers-from-universal-commerce-protocol-in-googles-own-words/

Right now, a major hurdle to "agentic AI" is the complexity of navigating websites designed for humans. AI agents just aren't very reliable when it comes to figuring out which product is which, choosing the correct options, and putting it in a shopping cart, and then paying for it.

Some of that is merely because websites have inconsistent "semantics" – literally things like the "buy" button being called something other than "buy button" in the HTML code. But there's a far more profound problem with agentic shopping, which is that companies deliberately obfuscate their prices.

This is how junk fees work, and why they're so destructive. Say you're a hotel providing your rate-card to an online travel website. You know that travelers are going to search for hotels by city and amenities, and then sort the resulting list by price. If you hide your final price – by surprising the user with a bunch of junk fees at checkout, or, better yet, after they arrive and put their credit-card down at reception – you are going to be at the top of that list. Your hotel will seem like the cheapest, best option.

But of course, it's not. From Ticketmaster to car rentals, hotels to discount airlines, rental apartments to cellular plans, the real price is withheld until the very last instant, whereupon it shoots up to levels that are absolutely uncompetitive. But because these companies are able to engage in deceptive advertising, they look cheaper.

And of course, crooked offers drive out honest ones. The honest hotel that provides a true rate card, reflecting the all-in price, ends up at the bottom of the price-sorted list, rents no rooms, and goes out of business (or pivots to lying about its prices, too).

Online sellers do not want to expose their true prices to comparison shopping services. They benefit from lying to those services. For decades, technologists have dreamed of building a "semantic web" in which everyone exposes true and accurate machine-readable manifests of their content to facilitate indexing, search and data-mining:

https://people.well.com/user/doctorow/metacrap.htm

This has failed. It's failed because lying is often more profitable than telling the truth, and because lying to computers is easier than lying to people, and because once a market is dominated by liars, everyone has to lie, or be pushed out of the market.

Of course, it would be really cool if everyone diligently marked up everything they put into the public sphere with accurate metadata. But there are lots of really cool things you could do if you could get everyone else to change how they do things and arrange their affairs to your convenience. Imagine how great it would be if you could just get everyone to board an airplane from back to front, or to stand right and walk left on escalators, or to put on headphones when using their phones in public.

Wanting it badly is not enough. People have lots of reasons for doing things in suboptimal ways. Often the reason is that it's suboptimal for you, but just peachy for them.

Google says that it's going to get every website in the world to expose accurate rate cards to its chatbots to facilitate agentic AI. Google is also incapable of preventing "search engine optimization" companies from tricking it into showing bullshit at the top of the results for common queries:

https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse

Google somehow thinks that the companies that spend millions of dollars trying to trick its crawler won't also spend millions of dollars trying to trick its chatbot – and they're providing the internet with a tool to inject lies straight into the chatbot's input hopper.

But UCP isn't just a way for companies to tell Google what their prices are. As Stoller points out, UCP will also sell merchants the ability to have Gemini set prices on their products, using Google's surveillance data, through "dynamic pricing" (another euphemism for "surveillance pricing").

This decade has seen the rise and rise of price "clearinghouses" – companies that offer price "consulting" to direct competitors in a market. Nominally, this is just a case of two competitors shopping with the same supplier – like Procter and Gamble and Unilever buying their high-fructose corn-syrup from the same company.

But it's actually far more sinister. "Clearinghouses" like Realpage – a company that "advises" landlords on rental rates – allow all the major competitors in a market to collude to raise prices in lockstep. A Realpage landlord that ignores the service's "advice" and gives a tenant a break on the rent will be excluded from Realpage's service. The rental markets that Realpage dominates have seen major increases in rental rates:

https://pluralistic.net/2025/10/09/pricewars/#adam-smith-communist

Google's "direct pricing" offering will allow all comers to have Google set their prices for them, based on Google's surveillance data. That includes direct competitors. As Stoller points out, both Nike and Reebok are Google advertisers. If they let Google price their sneakers, Google can raise prices across the market in lockstep.

Despite how much everyone hates this garbage, neoclassical economists and their apologists in the legal profession continue to insist that surveillance pricing is "efficient." Stoller points to a law review article called "Antitrust After the Coming Wave," written by antitrust law prof and Google lawyer Daniel Crane:

https://nyulawreview.org/issues/volume-99-number-4/antitrust-after-the-coming-wave/

Crane argues that AI will kill antitrust law because AI favors monopolies, and argues "that we should forget about promoting competition or costs, and instead enact a new Soviet-style regime, one in which the government would merely direct a monopolist’s 'AI to maximize social welfare and allocate the surplus created among different stakeholders of the firm.'"

This is a planned economy, but it's one in which the planning is done by monopolists who are – somehow, implausibly – so biddable that governments can delegate the power to decide what we can buy and sell, what we can afford and who can afford it, and rein them in if they get it wrong.

In 1890, Senator John Sherman was stumping for the Sherman Act, America's first antitrust law. On the Senate floor, he declared:

If we will not endure a King as a political power we should not endure a King over the production, transportation, and sale of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

Google thinks that it has finally found a profitable use for AI. It thinks that it will be the first company to make money on AI, by harnessing that AI to a market-rigging, price-gouging monopoly that turns Google's software into Sherman's "autocrat of trade."

It's funny when you think of all those "AI safety" bros who claimed that AI's greatest danger was that it would become sentient and devour us. It turns out that the real "AI safety" risk is that AI will automate price gouging at scale, allowing Google to crown itself a "King over the necessaries of life":

https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space

(Image: Noah_Loverbear; CC BY-SA 3.0; Cryteria, CC BY 3.0; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Disney swaps stock for Pixar; Jobs is largest Disney stockholder https://web.archive.org/web/20060129105430/https://www.telegraph.co.uk/money/main.jhtml?xml=/money/2006/01/22/cnpixar22.xml&menuId=242&sSheet=/money/2006/01/22/ixcitytop.html

#20yrsago HOWTO anonymize your search history https://web.archive.org/web/20060220004353/https://www.wired.com/news/technology/1,70051-0.html

#15yrsago Bruce Sterling talk on “vernacular video” https://vimeo.com/18977827

#15yrsago Elaborate televised prank on Belgium’s terrible phone company https://www.youtube.com/watch?v=mxXlDyTD7wo

#15yrsago Portugal: 10 years of decriminalized drugs https://web.archive.org/web/20110120040831/http://www.boston.com/bostonglobe/ideas/articles/2011/01/16/drug_experiment/?page=full

#15yrsago Woman paralyzed by hickey https://web.archive.org/web/20110123072349/https://www.foxnews.com/health/2011/01/21/new-zealand-woman-partially-paralyzed-hickey/

#15yrsago EFF warns: mobile OS vendors aren’t serious about security https://www.eff.org/deeplinks/2011/01/dont-sacrifice-security-mobile-devices

#10yrsago Trumpscript: a programming language based on the rhetorical tactics of Donald Trump https://www.inverse.com/article/10448-coders-assimilate-donald-trump-to-a-programming-language

#10yrsago That time the DoD paid Duke U $335K to investigate ESP in dogs. Yes, dogs. https://www.muckrock.com/news/archives/2016/jan/21/duke-universitys-deep-dive-uncanny-abilities-canin/

#10yrsago Kathryn Cramer remembers her late husband, David Hartwell, a giant of science fiction https://web.archive.org/web/20160124050729/http://www.kathryncramer.com/kathryn_cramer/2016/01/til-death-did-us-part.html

#10yrsago What the Democratic Party did to alienate poor white Americans https://web.archive.org/web/20160123041632/https://www.alternet.org/economy/robert-reich-why-white-working-class-abandoned-democratic-party

#10yrsago Bernie Sanders/Johnny Cash tee https://web.archive.org/web/20160126070314/https://weardinner.com/products/bernie-cash

#5yrsago NYPD can't stop choking Black men https://pluralistic.net/2021/01/21/i-cant-breathe/#chokeholds

#5yrsago Rolling back the Trump rollback https://pluralistic.net/2021/01/21/i-cant-breathe/#cra

#1yrsago Winning coalitions aren't always governing coalitions https://pluralistic.net/2025/01/06/how-the-sausage-gets-made/#governing-is-harder

#1yrago The Brave Little Toaster https://pluralistic.net/2025/01/08/sirius-cybernetics-corporation/#chatterbox

#1yrago The cod-Marxism of personalized pricing https://pluralistic.net/2025/01/11/socialism-for-the-wealthy/#rugged-individualism-for-the-poor

#1yrago They were warned https://pluralistic.net/2025/01/13/wanting-it-badly/#is-not-enough


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1010 words today, 11362 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Read the whole story
cjheinz
8 days ago
reply
Time to leave gmail.
Lexington, KY; Naples, FL
mkalus
8 days ago
Did last year. Went to Proton.
cjheinz
7 days ago
Thanks. I was wondering what to use. I'll check it out.
Share this story
Delete

Now the Future Can Start

1 Share

That’s a screen grab of an email we’re sending out for the MyTerms launch in London. Links:

Be there in a Zoom square. Or in old-fashioned reality.

Read the whole story
cjheinz
8 days ago
reply
Lexington, KY; Naples, FL
Share this story
Delete

Discourse & Datcourse

1 Comment

The Net as one big gaslight

Joan Westenberg: The Discourse is a Distributed Denial-of-Service Attack. Just one worthy pullquote: "The problem is structural. The total volume of things-you-should-have-an-opinion-about has exceeded our cognitive bandwidth so thoroughly that having careful opinions about anything has become damned-near impossible. Your attention is a finite resource being strip-mined by an infinite army of takes."

Hope Spring trains eternal

Bummed to see the Mets trade Jeff McNeil to the A's. At least he's back home in California. But this move by the Mets looks like a good one. 

Unrelated, sort of: After watching Brett Butler play for the Durham Bulls, I followed his major league career for all nineteen (!!) of his seasons as a leadoff hitter. Fun guy to watch.

Read the whole story
cjheinz
11 days ago
reply
Hmmm.
Lexington, KY; Naples, FL
Share this story
Delete
Next Page of Stories