
A guide to understanding AI as normal technology


When we published AI as Normal Technology, the impact caught us off guard. It quickly became the most influential thing either of us had ever done.1 We took this as a strong signal to spend more of our time thinking and writing about the medium-term future of AI and its impacts, offering grounded analysis of a topic that tends to attract speculation. This is a shift in focus away from writing about the present-day and near-term impacts of AI, which is what the AI Snake Oil project was about.

Reflecting this shift, we have renamed this newsletter. We have already published two follow-up essays to AI as Normal Technology and will publish more regularly as we expand our framework into a book, which we plan to complete in late 2026 for publication in 2027.

Today, we address common points of confusion about AI as Normal Technology, try to make the original essay more approachable, and compare it to AI 2027.

Table of contents

  1. Normal doesn’t mean mundane or predictable

  2. A restatement of our thesis

  3. If disappointment about GPT-5 has nudged you towards AI as normal technology, it’s possible you don’t quite understand the thesis

  4. Why it’s hard to find a “middle ground” between AI as Normal Technology and AI 2027

  5. It is hard to understand one worldview when you’re committed to another

  6. Reaping AI’s benefits will require hard work and painful choices

  7. The surreal debate about the speed of diffusion

  8. Why AI adoption hits different

  9. Concluding thoughts

Normal doesn’t mean mundane or predictable

While the essay talks about what we mean by normal (more on that below), we could have been more explicit about what it doesn’t mean.

Our point is not “nothing to see here, move along”. Indeed, unpredictable societal effects have been a hallmark of powerful technologies ranging from automobiles to social media. This is because they are emergent effects of complex interactions between technology and people. They don’t tend to be predictable based on the logic of the technology alone. That’s why rejecting technological determinism is one of the core premises of the normal technology essay.

In the case of AI, specifically chatbots, we’re already seeing emergent societal effects. The prevalence of AI companions and some of the harmful effects of model sycophancy such as “AI psychosis” have taken most observers by surprise.2 On the other hand, many risks that were widely predicted to be imminent, such as AI being used to manipulate elections, have not materialized.

What the landscape of AI’s social impacts will look like in, say, 3-5 years — even based on the diffusion of current capabilities, not future capabilities — is anyone’s guess.

The development of technical capabilities is more predictable than AI’s social impacts. Daniel Kokotajlo, one of the authors of AI 2027, was previously famous in the AI safety community for his essay “What 2026 looks like” back in 2021. His predictions about the tech itself proved eerily accurate, but the predictions about social impacts were overall not directionally correct, a point he was gracious enough to concede in a podcast discussion with one of us.

All this makes AI a more serious challenge for institutions and policymakers because they will have to react nimbly to unforeseeable impacts instead of relying on the false comfort of prediction or trying to prevent all harm. Broadly speaking, the policymaking approach that enables such adaptability is called resilience, which is what our essay advocated for. But while we emphasized resilience as the approach to deal with potentially catastrophic risks, we should have been clearer that resilience also has an important role in dealing with more diffuse risks.

Perhaps the reason some readers misunderstood our view of predictability is the word “normal”. Again, our goal is not to trivialize the task of individually and collectively adapting to AI. In an ideal world, a better title would have been simply “AI as Technology”, but we didn’t think that title would effectively communicate our goal of providing an alternative to the exceptionalism of the superintelligence worldview that currently dominates the discourse.

A restatement of our thesis

If we were to extract and simplify the core of our thesis, it would be something like this:

There is a long causal chain between AI capability increases and societal impact. Benefits and risks are realized when AI is deployed, not when it is developed. This gives us (individuals, organizations, institutions, policymakers) many points of leverage for shaping those impacts. So we don’t have to fret as much about the speed of capability development; our efforts should focus more on the deployment stage both from the perspective of realizing AI’s benefits and responding to risks. All this is not just true of today’s AI, but even in the face of hypothetical developments such as self-improvement in AI capabilities. Many of the limits to the power of AI systems are (and should be) external to those systems, so that they cannot be overcome simply by having AI go off and improve its own technical design.

Aspects of this framework may have to be revised eventually, but that lies beyond the horizon bounding what we can meaningfully anticipate or prepare for:

The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next.

Anyway, to reiterate, the core of the thesis is the underlying causal framework for understanding the relationship between AI and society, not any of the specific impacts that it might or might not have. In our view, if you share this causal understanding, you subscribe to the normal technology thesis. We have found that this framework is indeed widely shared, albeit implicitly.

That makes the thesis almost tautological in many readers’ minds. We are making what we see as — and what those readers should see as — a very weak claim! Not recognizing this causes readers to search for something much more specific that we may have meant by “normal”. But we didn’t. We aren’t classifying technologies as “normal” and “abnormal” and then putting AI into the “normal” bucket. We’re just saying we should treat AI like we treat other powerful general-purpose technologies.

This is not specific to large language models or any particular kind of AI. Incidentally, that’s why the title is “AI as normal technology” and not “AI as a normal technology”. Our views apply to the whole umbrella of technologies that are collectively referred to as AI, and other similar technologies even if they are not referred to as AI.

If our worldview is almost tautological, why bother to state it? Because it is in contrast to the superintelligence worldview. That’s the thing about worldviews: there can be mutually contradictory worldviews that each seem tautological to those who subscribe to them.

If disappointment about GPT-5 has nudged you towards AI as normal technology, it’s possible you don’t quite understand the thesis

It’s notable that there’s been a surge of interest in our essay after the release of GPT-5, and reasonable to surmise that at least some of that is because of people shifting their views a bit after being disappointed by the release.

This is strange! This isn’t the first time this has happened — we previously expressed skepticism of a big narrative shift around scaling that happened based on almost no new information. If a single update to one product shifts people’s views on the trajectory of AI, how reliable is people’s evidence base to begin with?

The reason why the normal technology framework predicts slow timelines is not because capabilities will hit a wall but because impacts will be slow and gradual even if capabilities continue to advance rapidly. So we don’t think disappointment with a new release should make you more sympathetic to viewing AI as normal technology. By the same token, a new breakthrough announced tomorrow shouldn’t cause you to be more skeptical of our views.

The best way to understand GPT-5 is that it’s a particularly good example of AI developers’ shift in emphasis from models to products, which we wrote about a year ago. The automatic model switcher is a big deal for everyday users of ChatGPT. It turns out that hardly anyone was using “thinking” models nearly a year after they were first released, and GPT-5 has bumped up their use dramatically.

In some communications, Altman was clear that the emphasis for GPT-5 was usability, not a leap in capabilities, although this message was unfortunately undercut by the constant hype, leading to disappointment.

This broader shift in the industry is actually highly consistent with companies themselves (reluctantly) coming around to acknowledging the possibility that their path to success is to do the hard work of building products and fostering adoption, rather than racing to build AGI or superintelligence and counting on it to sweep away any diffusion barriers. Ironically, in this narrative, GPT-5 is an example of a success, not a failure.

In fact, model developers are starting to go beyond developing more useful products (the second stage of our technology development & adoption framework) and working with deployers to ease early adoption pains (the third stage). For example, OpenAI’s Forward Deployed Engineers work with customers such as John Deere, and directly with farmers, on integrating and deploying capabilities such as providing personalized recommendations for pesticide application.

Why it’s hard to find a “middle ground” between AI as Normal Technology and AI 2027

Many people have tried to articulate middle ground positions between AI 2027 and AI as Normal Technology, perhaps viewing these as two ends of a spectrum of views.

This is surprisingly hard to do. Both AI 2027 and AI as Normal Technology are coherent worldviews. They represent very different causal understandings of how technology will impact society. If you try to mix and match, there is a risk that you end up with an internally inconsistent hodgepodge. (By the way, this means that if we end up being wrong, it is more likely that we will be wrong wholesale than slightly wrong.)

Besides, only in the Silicon Valley bubble can AI as Normal Technology be considered a skeptical view! We compare AI to electricity in the second sentence of the essay, and we make clear throughout that we expect it to have profound impacts. Our expectations for AI’s impact on labor seem to be at the more aggressive end of the range of expectations from economists who work on this topic.

In short, if you are looking for a moderate position, we encourage you to read the essay in full. Don’t let the title fool you into thinking we are AI skeptics. Perhaps you will conclude that AI as Normal Technology is already the middle ground you are looking for.

We realize that it can be discomfiting that the two most widely discussed frameworks for thinking about the future of AI are so radically different. (Our essay itself offers much commentary on this state of affairs in Part 4, which is about policy.) We can offer a few comforting thoughts:

  • We do have many areas of agreement with the AI 2027 authors. We are working on a joint statement outlining those areas. We are grateful to Nicholas Carlini for organizing this effort.

  • In our view, more important than agreement in beliefs are areas of common ground in policy despite differences in beliefs. Even relatively “easy” policy interventions that different sides can agree on will be a huge challenge in practice. If we can’t achieve these, there is little hope for the much more radical measures favored by those worried about imminent superintelligence.

  • There have been a few ongoing efforts to identify the cruxes of disagreement and agree on indicators that might help adjudicate between the two worldviews. We have participated in a few of these efforts and look forward to continuing to do so. We are grateful to the Golden Gate Institute for AI’s efforts on this front.

  • Speaking of developing indicators, we are in the process of expanding the vision for our project HAL, Holistic Agent Leaderboard. Currently it tries to be a better benchmark orchestration system for AI agents, but the new plan is to develop it into an early warning system that helps the AI community identify when AI agents have crossed capability thresholds for transformative real-world impacts in various domains.
    We see these capability thresholds as necessary but not always sufficient conditions for impact, and as and when they are reached, they will much more acutely stress our theses about non-technological barriers to both benefits and risks.

  • Note that HAL is not about prediction; it is about situational awareness of the present. This is a theme of our work. What is remarkable about the AI discourse in general, and us versus AI 2027 in particular, is the wide range of views not just about the future but about the things we can observe, such as the speed of diffusion (more on that below). Unless we as a community get much better at measurement of the present and testing competing causal explanations of progress, much of the energy directed at prediction will be misdirected, because we lack ways of resolving those predictions. For example, we’ve argued that we won’t necessarily know if “AGI” has been built even post facto. To some extent these limitations are intrinsic because of the lack of conceptual precision of ideas like AGI, but at the same time it’s true that we can do a lot better at measurement.

It is hard to understand one worldview when you’re committed to another

We wrote:

AI as normal technology is a worldview that stands in contrast to the worldview of AI as impending superintelligence. Worldviews are constituted by their assumptions, vocabulary, interpretations of evidence, epistemic tools, predictions, and (possibly) values. These factors reinforce each other and form a tight bundle within each worldview.

This makes communication across worldviews hard. For example, one question we often receive from the AI 2027 folks is what we think the world will look like in 2027. Well, pretty much like the world in 2025, we respond. They then push us to consider 2035 or 2045 or whatever year by which the world will be transformed, and they consider it a deficiency of our framework that we don’t provide concrete scenarios.

But this kind of scenario forecasting is only a meaningful activity within their worldview. We are concrete about the things we think we can be concrete about. At the same time, we emphasize the role of human, institutional, and political agency in making radically different futures possible — including AI 2027. Thus, AI as normal technology is as much a prescription as a prediction.

These communication difficulties are important to keep in mind when considering the response by Scott Alexander, one of the AI 2027 authors, to AI as Normal Technology. While we have no doubt that it is a good-faith effort at dialogue and we appreciate his putting in the time, unfortunately we feel that his response mostly talks past us. What he identifies as the cruxes of disagreement are quite different from what we consider the cruxes! For this reason, we won’t give a point-by-point response, since we would probably end up talking past him in turn.

But we would be happy to engage in moderated conversations, a format with which we’ve had good success, having participated in 8-10 of them over the past year. The synchronous nature makes it much easier to understand each other. And the fact that the private conversation is edited before being made public makes it easier to ask stupid questions as each side searches for an understanding of the other’s point of view.

Anyway, here are a couple of important ways in which Alexander’s response talks past us. Recursive Self-Improvement (RSI) is a crux of disagreement for Alexander’s point of view, and he is surprised that it is barely worth a mention for us. In fairness we could have been much more explicit in our essay about what we think about RSI. In short, we don’t think RSI will lead to superintelligence because the external bottlenecks to building and deploying powerful AI systems cannot be overcome by improving their technical design. That is why we don’t discuss it much.3

Although it is not a crux for us, we do explain in the essay why we think the AI community is nowhere close to RSI. More recently we’ve been thinking about the fundamental research challenges that need to be solved, and there are a lot more than we’d realized. And it is worth keeping in mind that the AI community might be particularly bad at finding new paradigms for progress compared to other scientific communities. Again, this is an area where we hope that our project HAL can play a role in measuring progress.

Another topic where Alexander’s response talks past us is on the speed of diffusion, which we comment on briefly below and will address in more detail in a future essay.

The best illustration of the difficulty of discourse across worldviews is Alexander’s discussion of our hypotheses about whether or not superhuman AI abilities are possible in tasks such as prediction or persuasion. After reading his response several times, it is hard for us to figure out where exactly we agree and disagree. We wrote:

We think there are relatively few real-world cognitive tasks in which human limitations are so telling that AI is able to blow past human performance (as AI does in chess). ... Concretely, we propose two such areas: forecasting and persuasion. We predict that AI will not be able to meaningfully outperform trained humans (particularly teams of humans and especially if augmented with simple automated tools) at forecasting geopolitical events (say elections). We make the same prediction for the task of persuading people to act against their own self-interest.

You can read his full response to this in Section 3B of his essay, but in short it focuses on human biological limits:

Humans gained their abilities through thousands of years of evolution in the African savanna. There was no particular pressure in the savanna for “get exactly the highest Brier score possible in a forecasting contest”, and there is no particular reason to think humans achieved this. Indeed, if the evidence for human evolution for higher intelligence in the past 10,000 years in response to agriculture proves true, humans definitely didn’t reach the cosmic maximum on the African savannah. Why should we think this last, very short round of selection got it exactly right?

But rejecting a biological conception of human abilities is a key point of departure for us, something we take pains to describe in detail in the section “Human Ability Is Not Constrained by Biology”. That’s the problem with discussion across worldviews: If you take a specific statement, ignore the premises and terminological clarifications leading up to it, and interpret it in your worldview, it will seem like your opponent is clueless. Does Alexander think we are suggesting that if a savanna-dweller time traveled to the present, they would be able to predict elections?

He does emphasize that human performance is not fixed, but somehow sees this as a refutation of our thesis (rather than central to it). Perhaps the confusion arose because of our hypothesis that human performance at forecasting is close to the “irreducible error”. We don’t imply that the irreducible error of forecasting is a specific number that is fixed for all time. Of course it depends on the data that is available — better polling leads to better forecasting — and training that helps take advantage of increased data. And some of that training might even be the result of AI-enabled research on forecasting. We emphasize in our original essay that human intelligence is special not because of our biology, but because of our (contingent) mastery of our tools including AI. Thus, advances in AI will often improve human intelligence (abilities), and have the potential to improve the performance of both sides of the human-AI comparison we propose.

The point of our hypothesis is a simple one: We don’t think forecasting is like chess, where loads of computation can give AI a decisive speed advantage. The computational structure of forecasting is relatively straightforward, even though performance can be vastly improved through training and data. Thus, relatively simple computational tools in the hands of suitably trained teams of expert forecasters can squeeze (almost) all the juice there is to squeeze.

We are glad that Alexander’s response credits us with “putting their money where their mouth is on the possibility of mutual cooperation”. The sentiment is mutual. We look forward to continuing that cooperation, which we see as more productive than Substack rebuttals and counter-rebuttals.

Reaping AI’s benefits will require hard work and painful choices

There are two broad sets of implications of our framework: one for the economy and labor, and the other for safety. Once we get past a few basic premises (notably, that superintelligence is incoherent or impossible depending on how it is defined), our arguments behind these two sets of implications are largely different.

On economic impacts, our case is broadly that diffusion barriers won’t be overcome through capability improvement. As for safety, our case is primarily that achieving AI control without alignment is not only possible, it doesn’t even seem particularly hard, and doesn’t require scientific breakthroughs.

Since these two sets of arguments don’t overlap much, it is coherent to accept one set but not accept (or be ambivalent to) the other. Indeed, our view of the economic impacts seems to have resonated particularly strongly with readers. Since the essay’s publication, we have had many discussions with people responsible for AI strategy in various industries. We discovered that the way that they had been thinking about AI was consistent with ours, but they were starting to second-guess their approach because of all the hype. Our essay provided a coherent framework that backed up their intuitions as well as their observations in the trenches.4

While people deploying AI have a keen understanding of the difference between technology development and diffusion, our framework further divides each of those into two steps. On the development side, we emphasize the gap between models and products, or capabilities and applications. On the diffusion side, we differentiate between user learning curves and other aspects of adaptation by individuals, and the structural, organizational, or legal changes that might be necessary, which often require collective action. We illustrate the kinds of speed limits that operate at each of the four stages.

While user behaviors at least tend to change slowly but predictably, solving coordination problems or reforming sclerotic institutions — which are also prerequisites for effective technology adoption — is much more uncertain. As an example, consider how air traffic control is stuck with technology from the middle of the 20th century despite the enormous and increasingly apparent costs of failing to modernize.

While our essay pointed out that analogous diffusion barriers exist in the case of AI, we are only now doing the work of spelling out those barriers and identifying specific reforms that might be necessary. We will be writing more on this front, some of it in collaboration with Justin Curl.

It is worth bearing in mind that advanced AI is entering a world that is already both highly technological and highly regulated. We repeatedly find that the parts of workflows that AI tackles are unlikely to be bottlenecks, because much of the available productivity gains have already been unlocked through earlier waves of technologies. Meanwhile the actual bottlenecks prove resistant due to regulation or other external constraints. In many specific domains including legal services and scientific research, competitive dynamics are so strong that productivity gains from AI lead to escalation of arms races that don’t ultimately translate to societal value.

The surreal debate about the speed of diffusion

We’ve mentioned a few times that different camps disagree on how to characterize current AI impacts. Nowhere is this more apparent than in the debate over the speed of diffusion. AI boosters believe that AI is being adopted unprecedentedly rapidly. We completely disagree. Worse, as more evidence comes out, each side seems to be getting more certain of its interpretation.

We are working on an in-depth analysis of the speed of diffusion. For now, we point out a few basic fallacies in the common arguments and stats that get trotted out to justify the “rapid adoption” interpretation.

First, deployment is not diffusion. Often, when people talk about rapid adoption they just mean that when capabilities are developed, they can be near-instantly deployed to products (such as chatbots) that are used by hundreds of millions of users.

But this is not what diffusion means. It is not enough to know how many people have access to capabilities: What matters is how many people are actually using them, how long they are using them, and what they are using them for. When we drill down into those details, the picture looks very different.

For example, almost a year after the vaunted release of “thinking” models in ChatGPT, less than 1% of users used them on any given day! We take no pleasure in pointing this out even though it supports our thesis. As enthusiastic early adopters of AI, we find a number this low hard to grasp intuitively, and frankly pretty depressing.

Another example of a misleading statistic relates to the fraction of workers in certain high-risk domains who use AI. Such statistics tend to be offered in service of the claim that AI is being rapidly adopted in risky ways. But even in high-risk domains, most tasks are actually mundane, and when we dig into the specific uses, they don’t seem that risky at all.

For example, a survey by the American Medical Association reported that a majority of doctors are using AI. But this includes things like transcription of dictated notes.5 It also includes things like asking a chatbot for a second opinion on a diagnosis (about 12% reported this use case in 2024, a whopping 1 percentage point increase from 11% in 2023). This is definitely a more serious use than transcription, but it is still well-founded. As we’ve pointed out before, even unreliable AI is very helpful for error detection.

Increasing adoption of AI for these tasks does not mean that doctors are about to start YOLOing it and abdicating their responsibility to their patients by delegating their decisions to ChatGPT. The vast majority of doctors understand the difference between these two types of uses, and there are many overlapping guardrails preventing widespread irresponsible use in the medical profession, including malpractice liability, professional codes, and regulation of medical devices.

The most misleading “rapid adoption” meme of all might be a widely shared chart showing that ChatGPT reached 100M users in about two months.

It compares ChatGPT user growth with (1) Instagram, Facebook, and Twitter, social media apps whose usefulness depends on network effects and which therefore characteristically grow much more slowly than apps that are useful from day one; (2) Spotify, an app that was initially invite-only; and (3) Netflix, a service that launched with a limited inventory and required a subscription to use.6

What is reflected in this chart are early adopters who will check out an app if there’s buzz around it, and there was deafening buzz around ChatGPT. Once you exhaust these curious early users, the growth curve looks very different. In fact, a year later, ChatGPT had apparently only grown from 100M to 200M users, which meant that the curve evidently bent sharply to the right. That is conveniently not captured in this graph which reflects only the first two months.

This chart would be useful if it gave us any evidence that the usual barriers to diffusion have been weakened or eliminated. It doesn’t. Two months is not enough time for the hard parts of diffusion to even get started, such as users adapting their workflows to productively incorporate AI. As such, this chart is irrelevant to any meaningful discussion of the speed of diffusion.

There are many other problems with this chart, but we’ll stop here.7 Again, this is far from a complete analysis of the speed of AI diffusion — that’s coming. For now, we’re just making the point that the majority of the commentary on this topic is simply unserious. And if this is what the discourse is like on a question for which we do have data, it is no surprise that predictions of the future from different camps bear no resemblance to each other.

Why AI adoption hits different

If the “rapid diffusion” meme is so wrong, why is it so pervasive and persistent? Because AI adoption feels like a tsunami in a way that the PC or the internet or social media never did. When people are intuitively convinced of something, they will be much less skeptical of data or charts that purport to confirm that feeling.

We recognize the feeling, of course. Our own lived experience of AI is different from that of past waves of technologies. Initially, we dismissed this as a cognitive bias: whatever change we’re living through at the moment will feel like a much bigger shift than something we successfully adapted to in the past.

We now realize that we were wrong. The cognitive bias might be a small part of the explanation, but there is a genuine reason why AI adoption feels much more rapid and scary. In short, while it’s true that deployment is not diffusion, gradual deployment in the past meant that users were somewhat buffered from having to constantly make decisions about adoption; that buffer has now been swept away. Let’s explain with a comparison to internet adoption.

Those of us who adopted dial-up internet in the 90s will remember a story that went something like this. When we first heard about the tech, we were put off by the high price of a PC. Gradually those prices came down. Meanwhile we got some experience using the internet at work or at a friend’s house. So when we bought a PC and dialup internet a few years later, we already had some training. At first, dialup was slow and expensive and there weren’t even that many websites, so we didn’t use the internet that much. Gradually prices came down, bandwidth improved, and more content came online, and we learned how to use the internet productively and safely in tandem with our increasing use.

Adopting general-purpose AI tools in the 2020s is a radically different experience because deployment of new capabilities is instantaneous. People have to spend a much higher percentage of time evaluating whether to adopt AI for some particular use case, and are constantly being told that if they don’t adopt it they will be left behind.

All our earlier points stand — learning curves exist, human behavior takes a long time to change, and organizational change takes even longer. But not using AI is now somewhat of an active choice, and people no longer have the excuse of not thinking about it because they don’t yet have access.

In short, deployment is only one of many steps in diffusion, and removing that bottleneck probably made diffusion slightly faster. But it feels dramatically faster because as soon as one hears about a particular AI use case, one has to decide whether to adopt it or not, even if it is ultimately the case that the vast majority of the time, people are deciding not to, for various reasons that might be rational or irrational.

Concluding thoughts

One thing on which we definitely agree with AI boosters is that AI is not going away, nor will it become a niche like crypto that most people can ignore. Now that the collective initial shock of generative AI has worn off, there’s a need for structured ways to think about how AI’s impacts might unfold, instead of (over)reacting to each new technical capability or emergent social effect.

The AI-as-normal-technology framework — which we continue to elaborate in this newsletter — is one such approach. It is worth being familiar with, at least as an articulation of a historically grounded default way to think about tech’s societal impact, against which more exceptionalist accounts can be compared. The framework has some degree of actionable guidance for business leaders, workers, students, people concerned about AI safety or AI ethics, and policymakers, among others. We hope you follow along and contribute to the discussion.

We are grateful to Steve Newman and Felix Chen for feedback on a draft.


1. As is so often the case, fortuitous timing played a big role in the success of the essay. It was released two weeks after AI 2027, but this was purely by coincidence — our publication date was actually based on the Knight Institute symposium on AI and Democratic Freedoms. We are grateful to the Institute for the opportunity to publish the essay.

2. Mental health issues with chatbots in general, and problems such as addiction, have been widely recognized and discussed, including in our book AI Snake Oil. These tend to be based on an analogy with social media. But it's one thing to anticipate the potential for mental health impacts, and another to predict specifically what impacts might emerge and how to avoid them.

3. That said, we acknowledge that our whole thesis might be wrong, and it is more likely that we’re wrong if RSI is achieved.

4. The framework is adapted from the classic diffusion-of-innovations theory and also influenced by recent writers such as Jeffrey Ding who have analyzed geopolitical competition in AI through the lens of the innovation-diffusion gap.

5. While there are risks even here, we think it is definitely an application that doctors should be exploring.

6. In fact, there have been other apps whose initial growth was as fast as or faster than ChatGPT’s, such as Pokémon Go and Threads (which bootstrapped off of Instagram and thus wasn’t reliant on network effects). But again, our bigger point is that this type of comparison is not informative. Threads ended up being something of a dud despite that initial growth.

7. Frankly, the far more impressive stat in this graph, in our view, is Instagram getting to a million users in only 2.5 months despite the need for network effects — considering that it was back in 2010, when phone internet speeds were much lower, the app was iPhone-only (!), and it mainly spread among 18-34 year-olds in the United States in the early days.




Pluralistic: Conspiratorialism's causal chain (17 Sep 2025)






A four-doll matryoshka, unpacked and arranged 2x2. In order, the dolls' faces have been replaced with: the Qanon logo, an Oxycontin pill, the face of Robert Bork, and Mark Zuckerberg's metaverse avatar.

Conspiratorialism's causal chain (permalink)

Conspiratorialism is downstream of the trauma of institutional failures.

Institutional failures are downstream of regulatory capture.

Regulatory capture is downstream of monopolization.

Monopolization is downstream of the failure to enforce antitrust law.

Start with conspiratorialism and trauma. I am staunchly pro-vaccine. I have had so many covid jabs that I glow in the dark and can get impeccable 5g reception at the bottom of a coal-mine.

Nevertheless.

If you tell me that you are anti-vax because you:

a) believe that the pharma companies are rapacious murderers who'd kill you for a nickel; and

b) believe that their regulators are so captured that every FDA official should probably be wearing a gimpsuit;

I'd be hard pressed to argue with you.

After all, the Sackler family flagrantly lied about the safety of their opioids. They bribed doctors to over-prescribe their drugs. They paid pharmacists bonuses for not asking nosy questions about people filling endless, gigantic refills. They reaped billions. They hired FDA officials and paid them to lobby their ex-colleagues to turn a blind eye, even as the country's morgues filled with the corpses of their victims. They made more billions, and they abused the justice system and got to stay disgustingly, dynastically rich, even as more than one million Americans died in the overdose epidemic they started:

https://pluralistic.net/2023/08/11/justice-delayed/#justice-redeemed

The hucksters and grifters peddling anti-vax conspiracies are pushing on an open door. The existence of real, high-stakes, mass-casualty conspiracies, right there in the open, makes traumatized people easy marks for con artists selling horse-paste and taint-tanning.

(Obviously, this is also the Epstein story: the reason it was possible to convince vulnerable people that elite pedos were hiding kids in a DC pizza-parlor's nonexistent basement was that elite pedos were hiding kids on an entirely real island that Donald Trump and other rich and powerful people liked to visit and everyone knew about.)

So that's part one: conspiratorialism is downstream of institutional failures.

Institutional failures are downstream of regulatory capture:

https://pluralistic.net/2022/06/05/regulatory-capture/

Why do our institutions fail? Because they have been neutered, deliberately made weaker than the processes and companies they are meant to oversee. Starve the FAA of resources and eventually it's going to run out of money to inspect airplane factories. When that happened, Boeing got to hire its own inspectors. The FAA let Boeing mark its own homework, and then planes started falling out of the sky. Hundreds of people were murdered this way (so far – there's a reasonable chance that many more of us are boeing to die):

https://pluralistic.net/2024/05/01/boeing-boeing/#mrsa

When Trump's old FCC chair Ajit Pai decided to kill Net Neutrality, he was able to cheat like hell. He accepted over one million identical anti-Net Neutrality comments from "@pornhub.com" email addresses. He accepted millions of obviously fraudulent, identical anti-Net Neutrality comments whose reply addresses corresponded to darknet identity-theft dumps. These included the email addresses of dead people and of sitting US Senators who supported Net Neutrality:

https://pluralistic.net/2021/05/06/boogeration/#pais-lies

Americans have no federal privacy protections to speak of. The last time Congress updated consumer privacy law was with 1988's Video Privacy Protection Act, which bans video-store clerks from disclosing your VHS rentals. All other technological invasions of privacy are fair game. That's how it came to pass that when staffing agencies offer a nurse a shift, they are able to secure that nurse's credit report, discover how much credit-card debt the nurse is carrying, and offer a lower wage to nurses who are economically desperate:

https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point

Regulators are captured out there, right in the open. The revolving door between government service and industry lobby groups spins and spins. Give a Maga influencer a million bucks and he'll get the DoJ to call off its case blocking your $14 billion merger:

https://www.vox.com/politics/458685/trump-doj-antitrust-roger-alford-mizelle-hewlett-packard

Institutional failures are downstream of regulatory capture, and regulatory capture is downstream of monopolization.

We live in monopolized times. Virtually every industry you interact with has collapsed into a bare handful of global companies:

https://www.openmarketsinstitute.org/learn/monopoly-by-the-numbers

Whether you're buying a glass bottle, sending something by sea-freight, taking vitamin C, getting an IV drip, watching pro wrestling, lacing up your athletic shoes, shopping for a mattress, seeing a movie, using social media, listening to music, reading a book, getting fitted for eyeglasses, or choosing a browser, you are trapped in a market totally dominated by five or fewer corporations – often just one corporation.

Monopolies raise prices. They lower wages. They reduce quality. The reason Google – which has a 90% market share in Search – sucks so bad is that they decided to make their product worse so that you would have to repeatedly search to get the information you're seeking, which creates more opportunities to show you ads:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

The reason your glasses are so expensive is that one company, a French-Italian consortium called Essilor-Luxottica, bought and merged all the retailers, manufacturers, optical labs and insurers and then raised the price of glasses by 1,000%:

https://www.business-standard.com/companies/news/ray-ban-maker-essilorluxottica-accused-in-lawsuit-of-inflating-prices-1000-123072200122_1.html

Companies argue that their mergers create "efficiencies." That's tech's story, for sure. Google last created a successful consumer product in 1998, when it fielded a revolutionary new search engine. Since then, virtually every in-house product it's created has tanked, but the company has managed to grow to a world-girding kraken by buying other people's companies: ad-tech, videos, maps, docs, mobile, and more.

The true efficiency of mergers isn't in companies getting better at making things that make you happy. The real purpose of boiling down a big, vibrant industry into a handful of sclerotic, inbred giants is so that they can agree on a common lobbying position, and stick to it.

Hundreds of companies are a rabble, a mob. They compete. They poach each other's best customers and best workers. They hate each other. They can't agree on anything, especially what lie they should be telling their regulators. Forced into "wasteful competition" (-P. Thiel), they must lower prices and raise wages, which leaves them with less money to spend lobbying. They can't capture their regulators.

But: stage an orgy of incestuous mergers, shrink the industry to five companies whose C-suites have all known each other all their lives, who are executors of one another's estates and godparents to one another's children, and the collective action problem vanishes. Nominal competitors suddenly start singing with one voice, demanding a unified set of privileges and exemptions from their regulators:

https://locusmag.com/2022/03/cory-doctorow-vertically-challenged/

Without monopolization, regulatory capture would be much harder to accomplish, and much easier to halt. Regulatory capture is downstream of monopolization.

And monopolization is downstream of the decision not to enforce antitrust laws.

The purpose of antitrust laws is, and always has been, to prevent monopolies. The first antitrust law was 1890's Sherman Act, and its author, Senator John Sherman, made the case for it thus:

If we will not endure a King as a political power we should not endure a King over the production, transportation, and sale of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity. 

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

For 80-some years, antitrust law did exactly that. But in the 1970s, the fringe theories of a conspiratorialist named Robert Bork came to prominence, at first hesitantly under Jimmy Carter, and then with undisguised ardor and glee under Reagan:

https://pluralistic.net/2021/08/13/post-bork-era/#manne-down

Robert Bork claimed that monopolies were "efficient." He said that monopolies in the wild were almost never the result of cheating – rather, if a company managed to get all of us to buy its products, that was evidence that its products were the best. Bork insisted that it would be perverse to enlist the government to punish companies for making the most pleasing and successful products.

Bork was many things: a virulent racist who defended racial discrimination against Black people and a criminal who served as Richard Nixon's hatchet-man, illegally firing "disloyal" DoJ lawyers after every other Nixon official refused.

But above all, Robert Bork was a conspiracy-peddler. He didn't just disagree with the idea of the government going after monopolies – he claimed that a close reading of the country's antimonopoly laws revealed that these laws were never intended to fight monopolies. This, despite the fact that the laws plainly and clearly stated that their purpose was to fight monopolies. This, despite the fact that the bills' authors climbed to their hind legs in Congress and the Senate and gave long speeches about how their laws would fight monopolies.

Bork's theories about the beneficence and efficiency of monopolies were profoundly stupid. But Bork's theories about the meaning of America's antitrust laws were profoundly nuts. Bork insisted that up was down, water was not wet, and black was white‡.

‡ Well, maybe not that last one.

But Bork – like so many conspiracy peddlers – was pushing on an open door. America's wealthy, would-be aristocrats loved the idea of securing monopolies and becoming "autocrats of trade." They funded Bork's theories, endowed economics chairs, sponsored conferences, and, above all, funded all-expenses-paid luxury junkets for judges to teach them about Bork's ideas. 40% of the US Federal judiciary attended one of these "Manne Seminars" and afterwards, their rulings changed to embrace Bork's pro-monopoly posture:

https://academic.oup.com/qje/advance-article/doi/10.1093/qje/qjaf042/8241352

And here we come full circle:

  • Conspiratorialism is downstream of traumatic institutional failures; and

  • Institutional failures are downstream of regulatory capture; and

  • Regulatory capture is downstream of monopolization; and

  • Monopolization is downstream of the decision not to enforce antitrust laws; and

  • The decision not to enforce antitrust laws was the result of a conspiracy.

The campaigns to fight "disinformation" are concerned with effects, not causes. The reason people are vulnerable to conspiratorial accounts of current affairs is that they have direct, undeniable experience of many actual conspiracies that inflicted deep harm and lasting trauma. If we want to armor the people we love against conspiratorial cults, it's not enough to argue over the implausibility of their belief that elite cabals are abusing the rest of us for fun and profit – we have to actually address the real elite cabals that really do abuse us for fun and profit.

(Image: Vicent Ibáñez, CC BY-SA 3.0; RootOfAllLight; CC BY-SA 4.0; modified)



Object permanence (permalink)

#15yrsago Intel threatens lawsuits against HDCP jailbreakers https://web.archive.org/web/20100920183314/https://www.wired.com/threatlevel/2010/09/intel-threatens-consumers/

#10yrsago America’s spooks abandon crypto-backdoors, plan shock-doctrine revival https://www.techdirt.com/2015/09/17/having-lost-debate-backdooring-encryption-intelligence-community-plans-to-wait-until-next-terrorist-attack/

#10yrsago Do you really trade your privacy for service on Facebook? https://theintercept.com/2015/09/17/facebook/

#10yrsago 3D print your own TSA Travel Sentry keys and open anyone’s luggage https://arstechnica.com/information-technology/2015/09/video-3d-printed-tsa-travel-sentry-keys-really-do-open-tsa-locks/

#10yrsago Campus cops: all the powers of real cops, none of the accountability https://www.muckrock.com/news/archives/2015/sep/15/public-safety-private-colleges-massachusetts/

#10yrsago Ex-mayor of Bismark, ND trademarks alternatives to “Fighting Sioux” in bid to prevent UND team from switching to non-racist name https://web.archive.org/web/20160103050027/https://www.grandforksherald.com/news/region/3838901-former-bismarck-mayor-registers-trade-names-state-3-5-und-nickname-options

#5yrsago Private equity's new debt-and-loot bonanza https://pluralistic.net/2020/09/17/divi-recaps/#graebers-ghost

#1yrago Christopher Brown's 'A Natural History of Empty Lots' https://pluralistic.net/2024/09/17/cyberpunk-pastoralism/#time-to-mow-the-roof




Insights from the Lotka-Volterra Model



All models are wrong;
the practical question is how wrong do they have to be to not be useful.

George Box

In science, there’s an inherent trade-off between comprehensibility and realism. Realistic models tend to be intricate … even convoluted. But to be comprehensible, a model must be simple.

For a good example of this trade-off, look to high-school physics. In the real world, we know that projectiles are affected by aerodynamics. (That’s why frisbees fly differently than baseballs.) But since aerodynamics are complicated, high school teachers ignore them. Instead, they teach students that earthbound projectiles behave as they would on the moon — blissfully unaffected by air drag. This simplification is a lie, of course. But it’s useful for teaching students about the essence of Newton’s equations.

Science is filled with this sort of simplification. We learn about the world by developing toy models — models which simplify reality, yet retain (we hope) an element of truth.

In economics, there’s no shortage of toy models. But most of these playthings belong in the landfill; they’re models that assume away the most pertinent features of the real world. (For example, neoclassical economics models capitalism by assuming ‘perfect competition’, whereas the real world is marked by pernicious oligarchy.)

In short, if we want simple models that capture key elements of human behavior, it’s best to leave mainstream economics behind. Instead, a good place to start is with population biology — specifically the Lotka-Volterra model of predator-prey dynamics. Like projectile motion that neglects aerodynamics, the Lotka-Volterra equations are a toy model of how predator and prey populations respond to each other. In a sense, it’s the simplest ‘systems model’ that still provides useful insights about the real world.

In what follows, we’ll take a tour of the Lotka-Volterra model, and see how it gives insights into human behavior.

The Lotka-Volterra model

Developed in the early 20th century by the mathematicians Alfred J. Lotka and Vito Volterra, the Lotka-Volterra equations are an early example of what we would today call a ‘systems model’ — a model that simulates feedback between two or more entities.

In the Lotka-Volterra model, we imagine feedback between a population of predators and a population of prey. Looking ahead, an important feature of the Lotka-Volterra equations is that they can’t be solved with algebra. Instead, they must be solved numerically by throwing numbers in and seeing what comes out.

Why is this algebraic intractability important? Well, because much of mainstream economics operates like a subdiscipline of pure mathematics, where the goal is to postulate models with neat (but worthless) analytic solutions. And since systems models defy this sort of rigid thinking, they’ve been neglected by economists.

Back to the Lotka-Volterra model. The model begins by imagining two populations, one of prey and one of predators. Now the presumption is that predators eat prey. However, this predation isn’t captured literally by the model. (There’s no equation that tells us when or how a wolf eats a sheep.) Instead, the model simulates predation in terms of population dynamics. For example, if a wolf population expands, it will cut into the growth rate of the neighboring sheep population (on which it preys).

The Lotka-Volterra model assumes that if left in isolation, our predator and prey populations will have opposite dynamics. If left alone, our prey population will grow exponentially.1 (For example, if we put a group of sheep into an empty field, they will reproduce and their population will expand.) In contrast, if our predator population is left alone it will decline exponentially. (For example, if we deprive a wolf pack of food, its population will gradually starve to death.)

So those are the starting assumptions, which both yield straightforward predictions if they’re left to play out. Fortunately, the Lotka-Volterra model doesn’t stop there. Instead, it imagines what happens when we mix predators with prey. It’s here that we encounter the magic of feedback. If wolves kill sheep, then a larger wolf population will reduce the growth of the sheep population. But if the sheep population declines, that causes the wolf population to die off from starvation.

Now an economist might look at this model and imagine that it arrives at some sort of equilibrium with an ‘optimal’ number of sheep and wolves. But that’s not what happens. When we run the Lotka-Volterra model, we find that it’s defined by non-equilibrium dynamics of boom and bust.

Figure 1 illustrates the dance between predators and prey. Here, the blue curve shows the population of sheep, while the red curve shows the population of wolves. Initially, the population of wolves is small enough that the sheep population expands happily. But this sheep boom then causes the wolf population to grow. As the wolf population balloons, predation causes a collapse of the sheep population, leading to starvation amongst the wolves. Finally, once enough wolves have died, the sheep population starts to expand, and the cycle begins again.

Figure 1: Boom-bust dynamics — the characteristic behavior of the Lotka-Volterra model. A key feature of the Lotka-Volterra model is that it gives rise to cycles of booms and busts, here visualized by a feedback relation between sheep and wolf populations. A rising sheep population causes the wolf population to boom. Over-predation then causes the sheep population to collapse, leading to an eventual decline in the number of wolves. Once enough wolves have died off, the cycle begins again.

Now, the cyclical dance of the Lotka-Volterra model is well known to population biologists. In fact, it illustrates a fundamental feature of natural systems: they’re marked not by static equilibrium, but by what the physicist Ilya Prigogine called ‘order through fluctuation’. In short, if natural systems are stable, it’s because they fluctuate. And if they don’t fluctuate (if they veer in a single direction), that’s a sign that something abnormal is happening.

Harvesting non-renewable resources

Speaking of ‘abnormal’, let’s think about the nature of industrial society. It’s built on a one-time bonanza of fossil-fuel extraction, which means that there can be no long-term cycles. When it comes to fossil fuels, order through fluctuation gives way to a single extraction pulse — a wild party followed by a bad hangover.

The Lotka-Volterra model, it turns out, is a valuable tool for thinking about this fossil-fuel extraction pulse. That’s because, with a little tweak, we can transform the equations into a model of non-renewable resource extraction.

Rather sensibly, the Lotka-Volterra model assumes that the ‘prey’ population is self-renewing — that sheep can replenish their numbers if some get eaten. But this assumption is just a model parameter, and parameters can be changed. If we set the prey replenishment rate to zero, then we transform the ‘prey’ population into something quite different: it becomes a stock of a non-renewable resource.
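
In code, this tweak amounts to changing one parameter. Continuing the sketch from above (with new, equally illustrative parameter values for the Petri-dish scenario):

    # Prey replenishment set to zero: the 'prey' is now a fixed foodstock
    alpha, beta, gamma, delta = 0.0, 0.001, 0.1, 0.0005

    t2 = np.linspace(0, 60, 2000)
    sol = solve_ivp(lotka_volterra, (0, 60), [1000.0, 1.0], t_eval=t2)
    stock, predators = sol.y  # the foodstock only declines; the predators pulse, then starve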

In this scenario, the Lotka-Volterra model produces a different set of dynamics. Figure 2 illustrates the new pattern. Here, the blue curve represents a non-renewable resource stock. (Let’s think of it as a stock of food in a laboratory Petri dish.) And the red curve represents a ‘predator’ population. (Petri-dish bacteria.) Initially, the bacteria population is small, and so they eat and reproduce merrily. Their numbers swell, and their fixed stock of food declines. When the foodstock reaches a point where further bacteria growth is impossible, the expansion switches to decline. A die-off begins, and the bacteria head towards extinction.

Figure 2: When ‘prey’ is non-renewable, the Lotka-Volterra model creates a single pulse of predation. Suppose we put a few bacteria in a Petri dish filled with a fixed supply of food. Here’s what the Lotka-Volterra model says happens. Initially, the bacteria population (red curve) grows because food is plentiful. But because the foodstock is fixed (blue curve), it gradually declines, eventually reaching a point where further bacterial growth is impossible. At that point, growth switches to decline, and the bacteria population starves to death.

The extraction pulse

In Figure 2, the blue curve shows the remaining stock of the non-renewable resource. Now, the model presumes that we know, in advance, the total size of this stock. And in the case of a Petri dish filled with food, we certainly do know the size of the recoverable foodstock. However, in more realistic scenarios, the total recoverable resource stock remains uncertain. For example, if the Petri dish is large and the food is distributed unevenly, it may be that the bacteria never reach certain patches. So although this unreachable food ‘exists’, it doesn’t count towards the total stock of recoverable food.

Turning to humans, if the world is a Petri dish, fossil fuels are a ‘food’ that is unevenly distributed. Sure, we can guess at the total stock of fuel. However, much of this energy will never be extracted, because the cost of extraction is prohibitive. Hence, the total recoverable stock of fossil fuels remains uncertain. For that reason, it’s more helpful to look at the dynamics of the Lotka-Volterra model in a slightly different light. Instead of measuring the remaining stock of a non-renewable resource, we can look at its flow — the rate that it’s extracted.

Figure 3 shows this flow-based view. In the case of our bacteria, the flow represents the rate of food consumption. It rises as the bacteria population expands, and then falls as the food gets exhausted and the bacteria population collapses. Now, the point is that in the Lotka-Volterra model, this bell-shaped pulse of extraction is a generic feature of non-renewable resource consumption. It applies to humans as much as it applies to bacteria. And unlike the stock-based view (Figure 2), this flow-based view is observable to human participants. We can watch the extraction rate of fossil fuels rise and fall. Indeed, in many places, we’ve already seen both sides of this extraction pulse.

Figure 3: An extraction pulse. Instead of plotting the stock of a remaining resource, this figure plots the flow of a non-renewable resource, as predicted by the Lotka-Volterra model. The result is a bell-curved pulse of consumption.
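
In the sketch above, this flow falls straight out of the model. It’s simply the predation term, beta * x * y:

    # The extraction rate (flow) is the model's predation term
    flow = beta * stock * predators  # rises, peaks, and falls: a bell-shaped pulse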

Predatory machines

Continuing the theme of non-renewable resource extraction, if fossil fuels are the ‘prey’, then who is the ‘predator’?

Well, in some sense, the fossil-fuel ‘eaters’ are literally humans. After all, we grow most of our food with fossil-fuel-based fertilizers, which means that in a way, we ‘eat’ fossil fuels. That said, the main ‘predator’ of fossil fuels is not people; it’s our fossil-fuel-eating machines.

Think about it this way. We use our machinery to wrench fossil fuels from the Earth. Then we feed the harvested energy back to our machines, many of which help us extract more fossil fuels. This loop, it turns out, is exactly the sort of feedback envisioned by the Lotka-Volterra model. To use the Lotka-Volterra equations to simulate the extraction of fossil fuels, we let the ‘prey’ be fossil fuels; and we let the ‘predator’ be our extraction technology.

Now, many researchers have realized that fossil fuel extraction can be understood with simple systems models. However, it was Ugo Bardi and Alessandro Lavacchi who first proposed a direct link between the rate of resource extraction and the stock of extractive technology.

Figure 4 illustrates the connection envisioned by the Lotka-Volterra model. Here, the blue curve plots the extraction rate of a non-renewable resource. And the red curve plots the population of ‘predators’ — the stock of extraction technology. Notice two things about this simulation. First, both the resource harvest rate and the size of the technological stock have a pulse-like behavior — a rise, a peak, and a fall. Second, the peak of the technological stock follows the peak of extraction.

Why this order? According to the Lotka-Volterra model, it’s because the extraction technology feeds on the resource being harvested. So when this resource input peaks and declines, the technological ‘predators’ begin to die off a short while later. (Which is to say that the machines are abandoned and left to rust.)

Figure 4: Feeding a technological predator. Here I’ve plotted the version of the Lotka-Volterra model envisioned by Bardi and Lavacchi. In this simulation, we imagine feedback between the extraction of a non-renewable resource and a stock of extraction technology. In essence, the technology ‘feeds’ on the resource in question, which means that its fate is linked to the resource itself. The key result is that the stock of extraction technology peaks after the peak of extraction.
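
We can verify this ordering in the Petri-dish sketch from earlier, reading the ‘predator’ population as a stock of extraction technology:

    # The extraction rate peaks before the 'technology' stock does
    print(t2[np.argmax(flow)])       # time of peak extraction
    print(t2[np.argmax(predators)])  # time of peak technology stock (comes later)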

Predators in the Alberta oilpatch

At this point, I’m going to turn to a real-world example of Lotka-Volterra-like behavior. But before doing so, it’s worth reminding ourselves that the Lotka-Volterra model is a toy. It’s purposefully designed to be an over-simplification of reality. So it’s somewhat surprising the model has anything to say about the messy arena of human affairs. And yet when it comes to our exploitation of fossil fuels, it seems that humans behave unwittingly like the Lotka-Volterra model predicts.

For a good example of this unintended connection, let’s turn to the history of oil-and-gas production in the Canadian province of Alberta. Today, the province is (in)famous for its extraction of unconventional oil from the Athabasca tar sands. However, much of the 20th century was spent drilling for conventional oil and gas. Figure 5 shows the history of this geological bonanza.

Figure 5: The rise and fall of Alberta conventional oil-and-gas production. The blue curve shows the history of conventional oil-and-gas production in Alberta, Canada. The red curve shows the rise and fall of active wells. As the Lotka-Volterra model predicts, the peak of the technological extractive stock follows the peak of production. The inset map shows the over 650,000 wells drilled so far. [Sources and methods]

Following a few false starts in the early 20th century, the Alberta oilpatch got rolling after World War II, driven largely by an unquenchable American thirst for energy. Conventional oil-and-gas production expanded for the next fifty years, but peaked in 1998. Today, Alberta’s conventional oilpatch is in steep decline, and the big players have largely moved north to the unconventional oil sands.

Now, the oilpatch is driven by a simple principle, which is that you extract oil by drilling holes in the ground. The more holes you drill, the more oil you get. Or at least that’s how it works at first. Over time, the big reserves get depleted, and more and more wells become duds. Eventually, there are enough duds that oil production begins to decline even though the number of wells continues to increase. When that happens, the economics of the oilpatch shift. Drilling slows, unproductive wells are left to rust, and the number of active wells begins to decline.

The red curve in Figure 5 shows this pattern of active-well peak and decline. It is eerily similar to the Lotka-Volterra model in Figure 4. The message here is that the players in the Alberta oilpatch seem to be unwitting puppets of a toy model. As predicted by the Lotka-Volterra model, the stock of Alberta’s active oil-and-gas wells peaked shortly after the peak of oil-and-gas extraction.

Now, the effect of a good chart is always to collapse complicated behavior into a graphical pattern that’s simple enough to comprehend. So when we stare at a chart like Figure 5, it’s easy to lose sight of the antics beneath it. For that reason, I’ve included a map of the Alberta oilpatch, where each oil-and-gas well is an imperceptibly small dot. Today, there are over 650,000 wells in total, each with its own story of ambition, glory, and failure. Importantly, there was no grand plan to the Alberta oilpatch, other than to make money selling the riches of the Earth. But ironically, it’s this lack of plan that gives rise to the overarching pattern of rise and fall.

The Lotka-Volterra model assumes a basic instinct to eat when the pickings are good, and starve when the food runs dry. But it could be that humans, in all our wisdom, are able to suppress this urge. For example, we can imagine a scenario in which the government sets quotas on oil-and-gas drilling — quotas designed to keep production constant. In the face of such planning, the Lotka-Volterra model would have nothing to say about oil extraction.

Although humans are surely smart enough to enact such policies, rarely do we actually do so. Instead, when faced with a stock of exploitable resources, we’re gripped by an animalistic urge to consume them as fast as possible. The Lotka-Volterra model captures this urge, which is why it seems to predict the large-scale pattern of how we extract resources, without knowing anything about our small-scale antics.

Shocking the system

When we ‘play’ with a model, it’s important to be open about its limitations. On that front, the Lotka-Volterra model is an obvious over-simplification of the real world, which means we expect to find many situations where it breaks down.

For example, we can imagine a population of sheep and wolves in which a farmer drastically culls the wolf pack. Or we can imagine a bacteria-filled Petri dish in which a scientist suddenly dumps in more food. Neither situation can be anticipated by the Lotka-Volterra model, which pretends that its modeled populations exist in splendid isolation. In the arcane language of economics, these system shocks are said to be ‘exogenous’; they are not part of the Lotka-Volterra model, which means they can’t be predicted.

That said, these shocks can be put in ‘by hand’. To add a system shock to the Lotka-Volterra model, we can arbitrarily change the predator/prey population midway through the model run. Then we see how the model responds.
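
Numerically, adding a shock is simple: integrate up to the shock time, alter the state by hand, then restart the integration from the altered state. Here’s a sketch built on the earlier code (the shock itself is whatever intervention we care to imagine):

    # Integrate to t_shock, apply an arbitrary shock, then continue
    def run_with_shock(z0, t_shock, t_end, shock):
        sol1 = solve_ivp(lotka_volterra, (0, t_shock), z0)
        z_new = shock(sol1.y[:, -1])  # modify the predator/prey state by hand
        sol2 = solve_ivp(lotka_volterra, (t_shock, t_end), z_new)
        return sol1, sol2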

To get started with system shocks, let’s return to our example of Petri-dish bacteria which are busy eating a finite stock of food. Left alone, the bacteria’s food consumption will follow the familiar resource ‘pulse’, plotted in Figure 3. Food consumption will rise as the bacteria population expands, and then collapse as the bacteria starve. Now suppose that partway through this consumption pulse, a benevolent scientist dumps more food into the dish. What happens?

Well, it turns out that the impact depends on the timing of the food dump. If the food dump happens early in the experiment, the shape of the consumption pulse remains essentially unchanged. Figure 6 illustrates. Here, the foodstock quadruples early on, before the bacteria population has had much time to grow. The result is a minor uptick in food consumption, followed by the expected pulse of growth and decline.

In contrast, if the food dump happens late in the experiment, the effect is drastically different. Figure 7 illustrates. Here, our scientist waits until the original foodstock has begun to wane before dumping in a new bonanza. The result is a massive increase in resource consumption, which creates a second extraction pulse.

Figure 6: An early-game resource shock. Here we imagine bacteria in a Petri dish with a finite stock of food. Early in the consumption pulse, a scientist quadruples the foodstock. According to the Lotka-Volterra model, not much happens. That’s because at the time of this early dump, the bacteria population is small, so it can’t do much with the extra food. So the consumption pulse plays out as though the larger foodstock was there all along.

Figure 7: A late-game resource shock. Unlike an early-game resource shock, a late-game shock changes the shape of the consumption pulse by adding a second peak. Here, we imagine a population of Petri-dish bacteria left alone to eat a finite stock of food. After food consumption has peaked, a benevolent scientist quadruples the remaining foodstock. This bonanza creates a second peak of consumption — one which burns more brightly and more briefly than the first peak.

So why does it matter when we dump in the food? Well, because the bacteria’s ability to harvest food depends on their population. If we add food when the population is small, the bacteria can’t do much with it — their numbers are too few. If, however, we dump food into the dish late in the game, there is a large population of starving bacteria ready to gobble up the resource.
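
In code, the two scenarios differ only in the shock time. Using the shock machinery from above (the timings are illustrative guesses, chosen to land before and after the consumption peak):

    # The same quadrupling of the foodstock, applied early versus late
    quadruple_food = lambda z: [4 * z[0], z[1]]
    early = run_with_shock([1000.0, 1.0], 5.0, 60.0, quadruple_food)   # Figure 6: pulse barely changes
    late = run_with_shock([1000.0, 1.0], 20.0, 60.0, quadruple_food)   # Figure 7: a second, sharper pulse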

Returning to humans, the same scale principle holds. For example, suppose that in 1870, a benevolent god somehow quadrupled the global stock of oil. Would anyone have noticed? Probably not. At the time, oil extraction was in its infancy; our seismic technology was non-existent, our drilling technology was juvenile, and our refining technology was rickety. In short, when faced with an early-game quadrupling of our oil reserves, pretty much nothing would have happened (at the time).

Now imagine that the remaining stock of oil quadrupled today. Would anyone notice? It sounds like a silly question, but it isn’t rhetorical. As it happens, the United States is in the midst of an oil-and-gas bonanza — one created by the exploitation of tight oil and shale gas. Of course, these reserves haven’t just popped into existence — they’ve been there all along. What changed is our technology.

For most of the 20th century, oil and gas were extracted by drilling a vertical well, and then sucking out the reserve. This technique works well if the formation is porous enough for the oil to flow. But if the formation is impermeable, the well will come up dry. Now suppose that instead of drilling vertically, we bent the borehole and extended it horizontally through the reserve. And then suppose that we pumped in a high-pressure liquid which fractured the formation. Well then, this previously inaccessible resource would flow like melted butter. That, in a nutshell, is how the fracking revolution has worked.

To see this revolution, let’s turn to Figure 8, which plots the history of US oil production. For decades, the United States was the poster child for peak oil. In 1956, the geologist M. King Hubbert predicted that US oil production would peak in the early 1970s. And that is exactly what happened. For the next four decades, production declined. But in the mid 2000s, the fracking revolution opened up new reserves, sending US oil production to new heights.

Figure 8: A second bonanza — oil production in the United States. After more than three decades of decline, US oil production exploded in the 2010s. The turnaround owes to the new technique of fracking — using high-pressure liquid to fracture oil formations that were previously too impervious to flow. [Sources and methods]

Now the pertinent question is — how long will this second oil-and-gas bonanza last? For their part, peak oil theorists have become less strident than they were in the mid 2000s, in large part because the fracking revolution has shown the importance of technological change. That is, the amount of recoverable oil is affected not just by the Earth’s geology, but also by the tools we use to harvest fossil fuels.

So while I won’t give a definite prediction for the second peak of US oil production (I’ve already made one that’s proved wrong), the Lotka-Volterra model does give us reason to be bearish about the timing of this peak. Looking at Figure 7, the Lotka-Volterra model predicts that a late-game resource shock creates a second consumption peak that burns more brightly and briefly than the first peak. Again, this is because when resources are added late in the game, there’s a huge stock of ‘predators’ ready to exploit them.

Likewise, in the United States, the fracking revolution is taking advantage of an immense technological stock that is hungry for oil (and that had been slowly starving for decades). Because of this latent capacity, the US will likely burn through its unconventional oil reserves more quickly than it did the conventional stuff.

Killing off predators

Continuing the theme of system shocks, so far we’ve explored what happens if we shock the Lotka-Volterra model by adding new ‘prey’. In the same vein, we can also shock the model by killing off ‘predators’.

As before, what interests me is how this shock plays out in the context of a non-renewable resource ‘pulse’. Since it’s the ‘predators’ that do the consuming, killing them off has the predictable effect of temporarily dampening resource consumption.

Figure 9 illustrates the effects of a predator die off. Returning to our example of Petri-dish bacteria, suppose that midway through their growth pulse, a vindictive scientist dumps cyanide into the dish. The bacteria population crashes, as does their rate of food consumption. However, the population soon recovers, and the consumption pulse begins again.

Figure 9: Killing off predators. Here we suppose that early in the consumption pulse of a non-renewable resource, most of the predators die off. The resource harvest rate plunges, but not for long. As the predator population recovers, the pulse of resource consumption resumes its course.
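
This scenario is just another shock function. Here I assume the cull kills 90 percent of the predators partway through the pulse (the timing and severity are, as always, illustrative):

    # A vindictive scientist kills 90% of the predators at t = 8
    cull = lambda z: [z[0], 0.1 * z[1]]
    before, after = run_with_shock([1000.0, 1.0], 8.0, 60.0, cull)
    # Consumption crashes along with the predators, then resumes as they recover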

Similarly, anything that kills off humans — be it famine, pandemic, or war — will disrupt our consumption of resources. And the disruption will be doubly severe if it also destroys our technology, as is the case during warfare. In this regard, fossil fuels have been both a blessing and a curse. On the one hand, they’ve enabled an unprecedented rise in our standard of living. But on the other hand, they’ve magnified our destructive power, making war far more terrifying.

Perhaps more than any country, Japan offers the best example of how a consumption pulse can go wrong when it is interrupted by war. For much of the early 20th century, Japan was busy doing what every other great power had done before, which was to conquer new territory. Japan’s main sin was that it was late to the imperial game, which meant its expansion trod on the toes of established colonial powers.

As part of this imperial game, the Japanese military decided, in late 1941, to prod the US empire by bombing Pearl Harbor. It was a foolish decision. At the time, the United States was by far the world’s most powerful country, consuming about a third of the world’s energy. So egging it into war was destined to end badly.

And for Japan, end badly the war did. Not only was much of the country flattened by conventional bombs; Japan also remains the only population to have been devastated by nuclear bombs. Figure 10 plots the scale of this wartime destruction, measured in terms of Japan’s share of world energy use. After prompting the US into World War II, Japan’s share of world energy use plummeted. It didn’t recover to pre-war levels until 1966.

Figure 10: Japan’s share of world energy use and the devastation of WWII. It’s easy to spot the moment when Japan provoked the US into declaring war (in late 1941). The ensuing US bombardment decimated Japan’s infrastructure (and of course, its population), sending Japan’s share of world energy use back to 1890 levels. The post-war recovery took decades. [Sources and methods]

Now, I realize that it feels crass to reduce the violence of war to an abstract systems model. But really, it’s no more crass than converting the violence of predation into a mathematical equation — which is exactly what the Lotka-Volterra model does. In this case, the destruction of imperial Japan offers a case study in what happens when machines and infrastructure (our technological ‘predators’) get destroyed.

Social collapse

Since human-built machines don’t (yet) have a life of their own, they needn’t be destroyed to be rendered useless. Anything that obstructs their human caretakers will have the same effect, leaving the technology to sit idle. For this reason, periods of social collapse can (like war) be treated as a ‘predator shock’ — moments when our active technological stock suddenly decreases.

For example, during the Great Depression, much of the world’s machinery lay unused, simply because people couldn’t afford to use it. More recently, the collapse of the Soviet Union provided a similar experiment with idle technology driven by social chaos.

When the Soviet Union dissolved in 1991, Western economists confidently declared that markets would emerge and pick up the slack left by the absence of state planning. Unsurprisingly, the market ‘miracle’ worked rather differently. In the aftermath of the Soviet collapse, former member states suffered a severe and prolonged depression.2

The history of Russian oil production offers a good window into this collapse. During the later years of Soviet control, Russian oil production had exploded, reaching a pinnacle in 1987. But in the aftermath of the Soviet collapse, Russian oil production was cut nearly in half. It didn’t return to the Soviet-era high until 2019. Figure 11 illustrates this free market ‘miracle’.

Figure 11: A market ‘miracle’ — Russian oil production implodes following the Soviet collapse. When the Soviet Union dissolved, Soviet-bloc countries experienced a severe and prolonged depression, clearly visible in oil production. For example, Russia’s oil production was cut nearly in half, and didn’t return to the Soviet-era peak until 2019. [Sources and methods]

Again, I think the Lotka-Volterra model provides some useful insight into the post-Soviet depression. Sure, it says nothing about how or why the Soviet Union collapsed. But when the ensuing social chaos rendered a large portion of Soviet machinery inoperable, we can treat the catastrophe that followed like a sort of ‘predator’ die-off. Only recently have Russia’s ‘oil predators’ recovered.

A humble toy

This concludes my tour of the Lotka-Volterra model, which, I’ll remind you, is best considered a humble mathematical toy. The Lotka-Volterra model doesn’t make grand predictions about the future. It’s not compelling enough to attract acolytes. It’s not seductive enough to be enshrined in official dogma. And it’s not enthralling enough to be the subject of political debate. No, the Lotka-Volterra model is a simple thought experiment about the effects of population feedbacks.

And yet, I hope to have convinced you that the Lotka-Volterra model is useful. Life on Earth is dominated by feedback effects, and it behooves us to understand how they work. The Lotka-Volterra model offers a comprehensible entry point into the world of systems modeling, a world in which simple principles generate complex effects. In an academic landscape dominated by neoclassical economic fantasies, surely we could use more of this type of thinking.


Support this blog

Hi folks. I’m a crowdfunded scientist who shares all of his (painstaking) research for free. If you think my work has value, consider becoming a supporter.






This work is licensed under a Creative Commons Attribution 4.0 License. You can use/share it any way you want, provided you attribute it to me (Blair Fix) and link to Economics from the Top Down.


Minsky

It would be foolish to write a post about feedback modeling without mentioning the hard work being done by heterodox economist Steve Keen.

Backing up a bit, systems models consist of nothing but sets of differential equations. In the early days of modeling, scientists coded these equations by hand. But that gets tedious quickly. And so researchers developed graphical tools for creating systems models. Today, there are many such tools, but virtually all of them are proprietary and closed source. The exception is the systems modeling program Minsky, developed by Steve Keen and coded by Russell Standish. Minsky is free and open source, and designed specifically with economics in mind. I encourage you to try it.

The Lotka-Volterra equations

The Lotka-Volterra model consists of a set of coupled differential equations, typically written as:

\displaystyle \begin{aligned} \frac{dx}{dt} &= \alpha x - \beta x y \\ \\ \frac{dy}{dt} &= -\gamma y + \delta xy \end{aligned}

Here, x is the prey population and y is the predator population, while dx/dt is the rate of change of the prey population and dy/dt is the rate of change of the predator population. The remaining terms are model parameters which, to me, make more sense if we rewrite the model in terms of population growth rates.

To reframe the Lotka-Volterra equations in terms of growth rates, note that a growth rate is simply a rate of change expressed as a portion of the thing changing. So the growth rate of x is:

\displaystyle \widehat{x} = \frac{dx/dt}{x}

Likewise, the growth rate of y is:

\displaystyle \widehat{y} = \frac{dy/dt}{y}

When we reframe the Lotka-Volterra equations in terms of growth rates, they simplify as follows:

\displaystyle \begin{aligned} \widehat{x} &= \alpha - \beta y \\ \\ \widehat{y} &= -\gamma + \delta x \end{aligned}

In English, these equations state:

  1. Without predators, the prey population will grow exponentially at rate \widehat{x} = \alpha .
  2. Without prey, the predator population will decline exponentially at rate \widehat{y} = -\gamma .
  3. The presence of predators y decreases the growth rate of prey by - \beta y .
  4. The presence of prey x increases the growth rate of predators by \delta x .

And that’s it! From these simple equations comes a host of dynamics that are not predictable from algebra alone.
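
As a sanity check, we can confirm the growth-rate form numerically, using the renewable (boom-bust) parameters from the first sketch:

    # Verify that the prey growth rate equals alpha - beta*y along the trajectory
    alpha, beta, gamma, delta = 1.0, 0.1, 1.0, 0.02  # restore the renewable case
    sol = solve_ivp(lotka_volterra, (0, 50), [40.0, 5.0], t_eval=t, rtol=1e-8, atol=1e-8)
    x, y = sol.y
    x_hat = np.gradient(x, t) / x  # empirical growth rate of the prey
    print(np.abs(x_hat - (alpha - beta * y)).max())  # near zero, up to discretization error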

Sources and methods

Alberta oilpatch (Figure 5)

This data is from my post ‘A Case Study of Fossil-Fuel Depletion’. Detailed methods are available here.

US oil production (Figure 8)

Oil production data is from the following sources:

  • 1949 to present: Energy Information Administration, Table 1.2, Primary energy production by source
  • 1860 to 1948: Historical Statistics of the United States, Table DB157

Japan’s share of world energy use (Figure 10)

World energy use is from Our World in Data, Energy Production and Consumption.

Japan’s energy use is from the following sources:

All Japanese series are indexed backwards from the Statistical Review data. Also note that the carbon-based estimates are indexed twice. First, I index the carbon data to Japanese energy use in 1900. The resulting series assumes that Japan’s energy use prior to 1900 directly tracks its carbon emissions. The problem with this estimate is that it ignores non-fossil fuel sources of energy, which become more important as we head back in time. To correct this problem, I then add to the time series the constant value of 26,000 KCal of energy per person per day. Finally, I re-index this updated energy estimate to the statistical data from 1900.

(For those who are interested, I’ve used the same carbon-based method to estimate the historical energy use of the Soviet Union.)

Russian oil production (Figure 11)

Russian oil production data is from the following sources:

Notes

  1. In the real world, the sheep population will eventually plateau as it reaches the field’s carrying capacity. But the Lotka-Volterra model assumes that predation keeps the sheep population well below this upper limit.↩
  2. Dmitry Orlov’s book Reinventing Collapse gives a fascinating account of the suffocating environment in post-Soviet Russia.↩

Further reading

Bardi, U., & Lavacchi, A. (2009). A simple interpretation of Hubbert’s model of resource exploitation. Energies, 2(3), 646–661.

Orlov, D. (2008). Reinventing collapse: The Soviet example and American prospects. New Society Publishers.

The post Insights from the Lotka-Volterra Model appeared first on Economics from the Top Down.

cjheinz: Worth a read.

Saving Money on Groceries by Understanding Food Product Packaging Dates


When shopping for groceries, it’s easy to get confused by the different food product dates stamped on packaging. “Sell by,” “Best if used by,” “Use by,” and “Expiration” dates don’t all mean the same thing, and misunderstanding them can lead to throwing out perfectly good food and wasting money. By learning what these dates really mean, you can stretch your grocery budget and reduce food waste.

Importantly, food package dating is not federally regulated except for infant formula.

Types of Packaging Dates

  • Sell By Date: This is meant for the store, not the customer. It tells retailers how long to display the product for sale. Foods are usually still safe to eat for days (sometimes weeks) after this date if stored properly.

  • Best If Used By/Before Date: This refers to quality, not safety. It’s the manufacturer’s estimate of when the product tastes its best. Many packaged foods—like cereal, pasta, and canned goods—are safe long past this date.

  • Use By Date: This is the last date the manufacturer recommends for peak quality. It’s not necessarily a safety cutoff (except on infant formula, where it is federally regulated).

  • Expiration Date: This is the closest thing to a real safety deadline. If you see “expires on,” it’s best not to consume the product after that point.

Tips for Saving Money and Reducing Waste

  1. Shop Smart Around Dates: Grocery stores often discount items nearing their “sell by” or “best by” dates. Buying these and using or freezing them quickly can save you money.

  2. Trust Your Senses: Look, smell, and taste (safely) before tossing something. Many foods are fine well beyond the printed date.

  3. Use Your Freezer: Freezing meat, bread, and even dairy products before their date can extend shelf life for months.

  4. Practice FIFO (First In, First Out): Rotate items in your pantry and fridge so older items are used first.

  5. Know the Shelf Life: Canned goods, dried pasta, and rice can last for years if stored properly. Don’t rush to throw them away just because of a “best by” label.

Why It Matters

According to the USDA, Americans waste about 30–40% of the food supply each year, much of it due to confusion over date labels. That’s money out of your pocket and food out of the supply chain. By understanding packaging dates, you can save money, reduce waste, and make your groceries stretch further.

To learn more, check out this USDA resource https://www.fsis.usda.gov/food-safety/safe-food-handling-and-preparation/food-safety-basics/food-product-dating 

Find all of our UF-IFAS Blogs here https://blogs.ifas.ufl.edu/about/

cjheinz: My family has a running discussion of this.

Podcast: AI Slop Is Drowning Out Human YouTubers


This week, we talk about how 'Boring History' AI slop is taking over YouTube and making it harder to discover content that humans spend months researching, filming, and editing. Then we talk about how Meta has totally given up on content moderation. In the bonus segment, we discuss the 'AI Darwin Awards,' which is, uhh, celebrating the dumbest uses of AI.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.



cjheinz: I am still jealous of Gary Marcus for inventing the term "Slopacolypse Now", IMO better than my "Bullshit Apocalypse" - although I think my term is more accurate & inclusive.

Often give in


In an oft-quoted speech, Winston Churchill said:

Never give in–never, never, never, never, in nothing great or small, large or petty, never give in except to convictions of honour and good sense. Never yield to force; never yield to the apparently overwhelming might of the enemy.

The problem with this advice is that it means we spend an enormous amount of time in senseless battles with senseless folks who are also following this advice.

In a community, perhaps it makes more sense to only have battles about honour and good sense. In everything else, sure, give in. It’ll help you focus on what really matters.

      