
A Tour of the Jevons Paradox: How Energy Efficiency Backfires



[R]esource productivity can — and should — grow fourfold.
… Thus we can live twice as well — yet use half as much.

Factor Four, 1997

When it comes to our sustainability problems, striving for greater resource efficiency seems like an obvious solution. For example, if you buy a new car that’s twice as efficient as your old one, it should cut your gasoline use in half. And if your new computer is four times more efficient than your last one, it should cut your computer’s electric bill fourfold.

In short, boosting efficiency seems like a straightforward way to reduce your use of natural resources. And for you personally, efficiency gains may do exactly that. But collectively, efficiency seems to have the opposite effect. As technology gets more efficient, we tend to consume more resources. This backfire effect is known as the ‘Jevons paradox’, and it occurs for a simple reason. At a social level, efficiency is not a tool for conservation; it’s a catalyst for technological sprawl.1

Here’s how it works. As technology gets more efficient, it cheapens the service that it provides. And when services get cheaper, we tend to use more of them. Hence, efficiency ends up catalyzing greater consumption.
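To see why this can swamp the savings, here’s a stylized sketch (my back-of-the-envelope model, assuming a constant-elasticity demand curve). Suppose a technology converts energy into a service with efficiency ε, so that a unit of service costs c = p/ε, where p is the price of energy. If demand for the service scales as S ∝ c^(−η), where η is the price elasticity of demand, then total energy use is:

\displaystyle E = \frac{S}{\varepsilon} \propto \frac{(p/\varepsilon)^{-\eta}}{\varepsilon} \propto \varepsilon^{\,\eta - 1}

When η exceeds 1, cheaper services stimulate so much extra demand that greater efficiency increases total energy use. That’s backfire, captured in a single exponent.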

Take the evolution of computers as an example. The first computers were room-sized machines that gulped power while doing snail-paced calculations. In contrast, modern computers deliver about a trillion times more computation for the same energy input. Now, in principle, we could have taken this trillion-fold efficiency improvement and reduced our computational energy budget by the same amount. But we didn’t.

Instead, we took these efficiency gains and invested them in technological sprawl. We took more efficient computer chips and put them in everything — phones, TVs, cars, fridges, light bulbs, toasters … not to mention data centers. So rather than spur conservation, more efficient computers catalyzed the consumption of more energy.

In this regard, computers are not alone. As you’ll see, efficiency backfire seems to be the rule rather than the exception. Far from delivering a cure for our sustainability woes, efficiency gains appear to be a root driver of the over-consumption disease.

The search for a sustainability cure

Humans, being fad-prone animals, excel at taking old ideas and redressing them in language that’s shiny and new. Hence we get the modern obsession with ‘resource efficiency’.

Of course, the word ‘efficiency’ is not new. However, humanity’s sustainability predicament has given the pursuit of efficiency new meaning. In the before times, ‘efficiency’ was understood as a way to cut costs and bolster profits. But in recent decades, ‘efficiency’ has been rebranded as a tool for sustainability. As the UN Environment Programme puts it, the pursuit of resource efficiency can (supposedly) “decouple economic development from environmental degradation”.

So where did this reinterpretation of ‘efficiency’ come from? Well, it was a collective effort that gained traction in the 1990s, the decade when our sustainability problems became widely discussed. Perhaps more than any other work, the book Factor Four (written by Ernst von Weizsäcker, Amory Lovins and Hunter Lovins) popularized the idea that efficiency could be a cure-all for our sustainability woes. Published in 1997, the book came out just as the phrase ‘resource efficiency’ exploded in popularity. Figure 1 shows the timing.

Figure 1: ‘Resource efficiency’ becomes a fad. The idea that efficiency could solve our sustainability problems gained traction in the 1990s. It was popularized by the 1997 book Factor Four, which argued that a fourfold increase in technological efficiency could double wealth while halving resource use. [Sources and methods]

Factor Four’s thesis was simple: if our technology were to grow four times more efficient, we could live twice as well, while cutting our resource budget in half.

Sounds compelling, right?

Sadly, there were some nagging problems. True, Factor Four made a well-reasoned case that many of our technologies could grow four times more efficient. But when it came to translating this efficiency into resource conservation, details were scarce.

Worryingly, the book mostly ignored the historical record. And that turns out to be a fatal flaw. You see, efficiency improvements are not a new invention; they’ve been happening continuously for at least three centuries. And over that time, resource use didn’t shrink. It ballooned.

In what follows, we’ll look at this efficiency-driven bonanza. But first, let’s revisit the man who first predicted that efficiency would backfire.

A neoclassical economist frets about sustainability

A century before the modern obsession with ‘resource efficiency’, William Stanley Jevons worried about sustainability.

Backing up a bit, Jevons is best known today as one of the founders of neoclassical economics — a co-inventor of the theory of marginal utility. But before Jevons became obsessed with counting ‘utils’, he was anxious that Britain was running out of coal.

In 1865, Jevons caused a minor sensation with his book The Coal Question. Written during the heyday of British coal mining, the book predicted that Britain would one day exhaust this precious resource. But when the coal crisis didn’t pan out, the public lost interest.

Of course, Jevons was right to worry about the exhaustion of British coal … he was just a bit early. Although he didn’t live to see the day, British coal production peaked in 1913 and declined continuously thereafter. In hindsight, Jevons got the last laugh. Today, Britain produces less coal than it did in 1700. Figure 2 paints the picture.

Figure 2: Jevons frets about British coal exhaustion. In 1865, William Stanley Jevons published his book The Coal Question, which worried about the exhaustion of British coal. Although Jevons wouldn’t live to see it, British coal production peaked in 1913. A century later, when British coal was all but gone, scientists returned to Jevons’ work on efficiency, coining the term the ‘Jevons paradox’. [Sources and methods]

I would argue that Jevons also got the last laugh about another idea. In The Coal Question, Jevons spent most of his time exploring possible ways to make coal reserves last longer. And one of the solutions he floated was to make coal engines more efficient. But instead of celebrating efficiency as a tool for conserving resources, Jevons argued that it would have the opposite effect. Greater efficiency would amplify consumption:

It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.

(Jevons, 1865)

The reason efficiency backfires, Jevons reasoned, is that it stimulates what he called “new modes of economy”. In other words, efficiency catalyzes technological sprawl.

Neglecting technological sprawl

It’s the neglect of technological sprawl that ultimately undermines techno-optimist books like Factor Four. Although the authors were right to argue that technology can get more efficient, they were wrong about what we would actually do with this newfound efficiency.

Let’s use computers to illustrate the problem.

To the 21st-century eye, Factor Four contains a delightfully retro discussion about how laptop computers could be tools for conservation. “A modern laptop,” the authors observed, “can ideally reduce electricity demand by 99 per cent when compared with an old-fashioned desktop computer.” What’s interesting is that you could make the same argument today. Except that the authors were talking about clunky, 1990s machines like the ones pictured in Figure 3.

Figure 3: Switch to a laptop, reduce your electricity use one-hundredfold! An optimistic calculation from Factor Four.

Now, there’s no question that you could save electricity by switching from a desktop computer to a more efficient laptop. But once you buy the new laptop, there’s no law that says the old machine must stay off. Heck, your kids have been whining about having their own computer. Let them use the desktop while you work on the laptop. Actually, new laptops are so cheap that you could buy one for each family member. And look at those new iPods. Let’s put them on the Christmas wish-list.

If this story sounds familiar, that’s because it’s how rising computer efficiency actually played out. With each generation of new device, computers could do more computation with less energy. But the end result was not resource conservation. Instead, rising efficiency catalyzed new forms of technological sprawl. Families went from having a single computer to having a sea of devices — laptops, iPods, iPads, iPhones, smart TVs, dishwashers connected to the internet, and so on.

Back to Factor Four. We can certainly forgive the authors for not foreseeing the specifics of how computational sprawl would play out. Yet the fact that there would be new forms of sprawl was utterly predictable. And that’s because in 1997, rising computer efficiency was nothing new. It had been happening continuously for a half century.

Figure 4 shows the scale of progress. Things got rolling in 1946 with the birth of ENIAC, the first modern computer. When fed a watt-hour of electricity, the room-sized machine could barely muster a single computation. Fast-forward to 2009, and a three-pound laptop could take the same energy input and do a trillion computations. That’s a stupendous increase in efficiency.
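As a sanity check on that trajectory, we can turn the two endpoints into an implied doubling time. Here’s a back-of-the-envelope sketch in Python, using only the figures quoted above:

```python
import math

# ENIAC (1946): ~1 computation per watt-hour.
# A 2009 laptop: ~1 trillion computations per watt-hour.
gain = 1e12
years = 2009 - 1946

doublings = math.log2(gain)        # ~39.9 doublings of efficiency
doubling_time = years / doublings  # years per doubling

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling every ~{doubling_time:.1f} years")
```

In other words, computational efficiency doubled roughly every year and a half, and kept doing so for six straight decades.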

Figure 4: The evolution of computer efficiency. Since the birth of modern computers in the mid-1940s, computational efficiency (the number of computations per watt-hour) has increased by at least a factor of a trillion. [Sources and methods]

My point is that when Factor Four was written in 1997, the authors could have looked at the history of computational efficiency and seen what it had wrought. The answer — then and now — was not resource conservation. It was the continuous expansion of technological sprawl.

On the consumer end, new devices proliferated. And on the industrial end, the demand for cloud computing spawned an ever-expanding network of data centers. Today, the computational sprawl has reached comical levels. Using the most modern, ultra-efficient computers, data centers guzzle power so that half-baked chatbots can respond to your queries with answers that are plausible but wrong.

Some people call this ‘progress’. But another word for it would be the continuous backfire of computational efficiency.

Backfire on the blockchain

Despite the ubiquity of computers, it’s surprisingly difficult to pin down their collective power budget. Hence, it’s difficult to measure the scale of efficiency backfire. But in specific applications, we do have hard numbers … and they are jaw-dropping.

One such application is the ‘blockchain’ — the technology that powers cryptocurrencies like Bitcoin. Now, you’ve probably heard that the Bitcoin network uses loads of energy. And it does. But before we look at this gluttony, let’s study a (seemingly) more positive trend. Over the last decade, Bitcoin mining has grown vastly more efficient.

Figure 5 tells the story. Since 2010, the hashing efficiency of Bitcoin tech has grown by a factor of a million. Backing up a bit, a ‘hash’ is the problem that Bitcoin miners solve in order to verify transactions. What’s important about this algorithm is that it involves copious computation — it’s nothing but brute-force trial and error. And so Bitcoin miners are under tremendous pressure to bolster their profits by using the most efficient technology. In 2010, that meant using standard GPUs. But today, it means using purpose-built hardware that is vastly more efficient.

Figure 5: The rising efficiency of Bitcoin hashing technology. Each blue point represents a computer chip used for Bitcoin hashing (mining). The horizontal axis shows the chip’s date of release. The vertical axis plots the chip’s hashing efficiency — the number of hashes per microjoule of electricity input. In the early days of Bitcoin, standard GPUs (graphical processors) were repurposed for hashing. Soon, however, hashing was done on purpose-built chips, resulting in vast increases in efficiency. (To view the various chips behind each data point, see Figure 18 in the appendix.) [Sources and methods]

Given the million-fold improvement in hashing efficiency, we can ask what it wrought. Did it cause Bitcoin miners to save spectacular amounts of electricity? Or did it induce new forms of technological sprawl?

Points to readers who guessed the latter. With more efficient technology in hand, Bitcoin miners responded by expanding their operations. The result, as Figure 6 shows, was that the million-fold efficiency improvements were met with a million-fold increase in Bitcoin’s energy budget. As I said, jaw-dropping backfire.
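The accounting behind this backfire is worth spelling out. The network’s energy use decomposes as energy = total hashes ÷ hashing efficiency, so the two million-fold factors let us back out how much the hashing workload itself grew. A minimal sketch, using the round numbers above:

```python
# Growth factors between 2010 and 2024, from Figures 5 and 6.
efficiency_growth = 1e6  # hashes per joule grew roughly a million-fold
energy_growth = 1e6      # the network's energy budget also grew a million-fold

# Since energy = hashes / efficiency, we have hashes = energy * efficiency,
# so the two growth factors multiply.
hash_growth = energy_growth * efficiency_growth

print(f"implied growth in total hashing: ~{hash_growth:.0e}x")  # ~1e+12x
```

Every joule saved per hash was plowed back into more hashes: roughly a trillion-fold more.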

Figure 6: Bitcoin miners discover the Jevons paradox. Given the million-fold increase in hashing efficiency that occurred between 2010 and 2024 (horizontal axis), it’s conceivable that Bitcoin’s energy demands could have decreased by the same factor. But instead, they did the opposite, growing by a million-fold (vertical axis). [Sources and methods]

The Jevons paradox comes to America

Among the spectrum of modern technology, computers are probably unique for the staggering scale of their efficiency improvements. Elsewhere, the gains have been more modest. Still, it’s worth remembering that efficiency gains were not an invention of the sustainability-aware 1990s. Long before anyone cared about conserving resources, industrial tech was getting steadily more efficient.

Think of the difference between the cars of the 1910s and the cars of today. Think of what electricity generation plants looked like in 1900 and what they looked like now. Think of how powered flight went from not existing to landing on the moon in 66 years. In hindsight, the scope of this technological change is breathtaking. And the thread of efficiency runs continuously through it.

Interestingly, few scientists have attempted to look at the big picture of this efficiency thread. In other words, we know a lot about the efficiency improvements of specific machinery. But we know surprisingly little about how these improvements add up across the whole of society.

That changed in 2009 with a book called The Economic Growth Engine.2 Written by economists Robert Ayres and Benjamin Warr, the book attempts to add up efficiency gains across the full range of technology. In other words, Ayres and Warr look at how much primary energy gets pumped into society. Then they estimate how much ‘useful work’ gets done. Take the ratio of these two quantities and you get a measure of aggregate efficiency.
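In symbols:

\displaystyle \text{aggregate efficiency} = \frac{\text{useful work output}}{\text{primary energy input}}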

Figure 7 shows Ayres and Warr’s estimates of aggregate efficiency in the United States. In 1900, US energy-conversion technology was, on average, about 4% efficient. By 2000, that figure had risen to nearly 12% — a roughly threefold improvement.

Figure 7: The aggregate efficiency of primary energy converters in the United States. Throughout the 20th century, US primary energy converters — technology like internal combustion engines, thermal power plants and various industrial processes — grew steadily more efficient. This figure shows Ayres and Warr’s estimates for the aggregate efficiency of these machines. [Sources and methods]

So again, we can ask what these efficiency gains wrought. Did they work to conserve energy? Or did they catalyze new forms of technological sprawl?

The evidence speaks for itself. As Figure 8 shows, the threefold improvement in US aggregate efficiency was met with a threefold increase in energy use per person. Instead of investing in energy conservation, Americans took their efficiency gains and used them to catalyze new forms of technological sprawl — interstate highways, massive suburbs, theme parks, and gadgetry of every kind.
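It’s worth multiplying the two trends together. Useful work per person equals aggregate efficiency times primary energy per person, so the rough arithmetic (a sketch using the threefold figures above) runs:

```python
# Both aggregate efficiency (Figure 7) and energy use per person (Figure 8)
# roughly tripled over the 20th century.
efficiency_growth = 3
energy_per_person_growth = 3

# useful work per person = efficiency * energy per person,
# so the growth factors multiply.
useful_work_growth = efficiency_growth * energy_per_person_growth
print(f"useful work per person grew ~{useful_work_growth}x")  # ~9x
```

Americans didn’t pocket their efficiency gains; they spent them, and then some, on a roughly ninefold increase in useful work per person.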

Figure 8: The United States discovers the Jevons paradox. As the efficiency of US primary energy converters increased (horizontal axis), so did its primary energy use per person (vertical axis). [Sources and methods]

Now to be fair to Americans, their tech sprawl was uniquely bombastic. But the efficiency backfire was itself part of a common pattern. The same thing happened in the United Kingdom, in Japan, and in Austria. And while we don’t yet have expansive data, I’d bet that this efficiency backfire happened in every country.

A world built on heat engines

Speaking of expansive, let’s turn now to the global level, where we’ll watch three centuries of energy-efficiency backfire. But first, I want to reflect on the technological ‘stack’ that supports industrial society.

In our daily lives, the top of the tech stack gets most of the attention. Phones beep notifications, computers demand our time, ovens cook our food, and washing machines clean our clothes. While this top-level tech is important, when it comes to the Jevons paradox, it’s the bottom-level technology that’s most crucial. And at the very bottom of the industrial stack lies the mainspring of fossil-fuel-based civilization: the heat engine.

For those who are unfamiliar, a heat engine is a machine that converts heat into mechanical work. It’s no exaggeration to say that these machines are the primary driver of industrialization. Without them, fossil fuels are of limited use — they are little more than a source of heat. But with a heat engine, the energy contained in fossil fuels can be converted into more useful forms of work. Today, heat engines are what drive our cars, push our trains, fly our planes, and sail our ships. And heat engines generate the vast majority of our electricity.

Now, like most technologies, heat engines had humble beginnings. Just as the first computers looked like room-sized caricatures of today’s sleek machines, the first heat engines were nothing like the tightly engineered machines of the 21st century. The earliest heat engines were rickety Rube Goldberg devices that leaked energy from every seam.

Case in point was the Newcomen engine, the first commercially viable steam engine. Patented in 1712, the machine was spectacularly inefficient, wasting something like 99.3% of the coal energy that went into it.3 In fact, the Newcomen engine was so wasteful that it only worked when placed directly beside a coal mine, where it could be fed a constant stream of fuel.

At best, the Newcomen engine was the ENIAC of steam engines — a barely viable prototype that demonstrated the heat engine’s potential. Better machines came later. In 1769, James Watt patented his much-improved steam engine — a machine that wasted a mere 96% of its input coal energy.4 And by the turn of the 19th century, an arms race had ensued, with engineers competing to design better, more efficient heat engines.

Returning to the present, Figure 9 shows the history of this heat-engine arms race. From the rickety prototypes of early industry to the hyper-optimized machines of today, heat-engine efficiency trended upward for three centuries, improving by something like a factor of forty.
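To put that factor in perspective, start from the Newcomen engine’s roughly 0.7% efficiency:

\displaystyle \eta_{\text{today}} \approx 40 \times \eta_{1712} \approx 40 \times 0.7\% \approx 28\%

That endpoint sits in the territory of a modern internal combustion engine, which gives a sense of how low the starting bar was.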

Figure 9: The arms race to make heat engines more efficient. Ever since Newcomen invented the first commercial steam engine in 1712, there’s been an ongoing race to create more efficient heat engines. Initially, that meant improving the steam engine. But eventually, it meant the adoption of better machines — things like internal combustion engines and gas-powered turbines. [Sources and methods]

Global investment in industrial sprawl

At this point, we’re faced with a question that’s becoming increasingly rhetorical. Given the vast improvements in heat-engine efficiency, did humanity invest the savings in resource conservation?

Obviously, we did not.

Instead, we used our increasingly efficient heat engines to catalyze the global sprawl of industrial civilization.

Interestingly, William Stanley Jevons was one of the first scientists to document this transformation. Looking at the demand for coal, Jevons noted that it was driven by the efficiency of steam engines. Literally. You see, one of the first uses for steam engines was to pump water out of British coal mines. This coal-driven work ensured that coal remained cheap. And cheap coal bolstered the demand for coal-powered work.

A positive feedback cycle ensued, prompting industrial sprawl that is now familiar. Farmers left the land to live in factory-filled cities. Animal-powered work was replaced with fossil-fuel powered machines. Electrification brought cheap, ubiquitous energy to the masses. And the industrial sprawl that started in Britain spread to every corner of the globe.

The result, as Figure 10 shows, was three centuries of energy-efficiency backfire. As heat engines grew more efficient, humanity consumed more fossil fuels.

Figure 10: The world discovers the Jevons paradox. The horizontal axis shows the trend in heat-engine efficiency, derived from Figure 9. The vertical axis shows the world’s fossil fuel use per capita. Note that both axes use a log scale. [Sources and methods]

Blame capitalism

Weighing the evidence, it seems clear that the Jevons paradox is a general feature of industrial society. So let’s move on to the question of why energy efficiency backfires.

For their part, many leftists confidently blame capitalism. For example, in a 2010 article called ‘Capitalism and the curse of energy efficiency’, Marxist writers John Bellamy Foster, Brett Clark and Richard York scratch their anti-capitalist itch. When it comes to efficiency backfire, they’re convinced that capitalism is the culprit:

The Jevons Paradox is the product of a capitalist economic system that is unable to conserve on a macro scale, geared, as it is, to maximizing the throughput of energy and materials from resource tap to final waste sink.

(Foster, Clark, & York, 2010)

While I’m sympathetic to this blame game, the scientist in me finds it a bit presumptuous. To be sure, capitalism is unusually effective at promoting technological sprawl. But is it the only social system in which energy efficiency backfires?

I suspect not. Far from being unique to capitalism, my bet is that the Jevons paradox is more general than Foster and colleagues claim. You see, humans are not the only species that plays the efficiency-backfire game. The rest of life plays it as well.

Nature’s feats of efficiency

To get a glimpse of the Jevons paradox’s surprising universality, let’s leave human myopia behind and zoom out to the rest of life on Earth. Across life’s panoply, evolution has delivered some stunningly efficient designs — birds that can circumnavigate the globe, animals that can go years without food, and organisms that can survive in the most extreme environments.

Interestingly, these feats of biological efficiency become more clear when we compare them to our own attempts at mimicking nature’s machinery. For example, when we use heart-lung machines to keep people alive during surgery, the machines use about 20 times more power than the organs they replace.5 Likewise, when we use dialysis machines to replace failing kidneys, the machines use about 22 times more power than the original organs.6

When we try to mimic the human brain, we fare even worse. For example, when IBM’s Watson beat Ken Jennings at Jeopardy, the machine used something like 45,000 watts of electricity to get the job done. In contrast, Jennings’ brain (assuming it was average) ran on just 20 watts — about 2200 times less energy.7

The message here is that the drive for efficient design is not unique to human culture. It’s a general feature of biology, pushed by the great killer of waste, natural selection.

And so we come to the point. It’s not just capitalist societies that play the efficiency game. The whole of life plays it too. But then, if the Jevons paradox is unique to capitalism, that means the rest of life ought to be immune from efficiency backfire. So is it?

Using efficiency to catalyze biological sprawl

To test if life is immune from efficiency backfire, we need to first define a measure of ‘biological efficiency’.

Here’s how I’ll do it. I’ll take an organism’s mass and divide by its metabolism (the rate it consumes energy). I call this ratio ‘biomass efficiency’:

\displaystyle \text{biomass efficiency} = \frac{\text{organism mass}}{\text{organism metabolism}}

Biomass efficiency quantifies how much mass an organism can support per unit of energy input. For example, humans have a biomass efficiency of about 850 grams per watt. In contrast, elephants can support about 1600 grams of biomass per watt of energy input. And mice can support a mere 66 grams of biomass per watt.
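Rearranging the definition, metabolism = mass ÷ biomass efficiency, so we can recover each species’ energy use from these numbers. Here’s a minimal sketch; the efficiencies are from above, while the body masses are ballpark assumptions of mine:

```python
# name: (assumed body mass in grams, biomass efficiency in grams per watt)
species = {
    "mouse":    (20,        66),    # mass assumed: ~20 g
    "human":    (70_000,    850),   # mass assumed: ~70 kg
    "elephant": (4_000_000, 1600),  # mass assumed: ~4 tonnes
}

for name, (mass_g, eff_g_per_w) in species.items():
    # metabolism = mass / biomass efficiency
    metabolism_w = mass_g / eff_g_per_w
    print(f"{name:>8}: ~{metabolism_w:,.1f} W")
```

The output (mouse ~0.3 W, human ~82 W, elephant ~2,500 W) previews the punchline: the more efficient the animal, the more energy it tends to consume.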

Now to our question. Is life immune from the Jevons paradox?

The answer is unequivocally no.

Let’s look first at mammals and birds — the warm-blooded species to which we’re most closely related. Across this group of animals, biomass efficiency varies from a low of 20 grams per watt to a high of 5,000 grams per watt. So if greater efficiency got parlayed into energy conservation, the most efficient animals should consume about 250 times less energy than the least efficient animals. But that is not what we find. Instead, more efficient birds and mammals tend to consume more energy. Figure 11 shows the trend.

Figure 11: Mammals and birds discover the Jevons paradox. Each point represents a species of mammal or bird. The horizontal axis plots each animal’s ‘biomass efficiency’ — the amount of biomass it can support per watt of energy input. The vertical axis shows the animal’s metabolic rate — the rate that it consumes energy. Note that both axes use log scales. [Sources and methods]

Let’s move on to our more distant cousins. Unlike mammals and birds, the rest of life generally lacks the ability to thermo-regulate (maintain a constant body temperature). And that means these cold-blooded creatures can survive on less energy. But it doesn’t mean they’re immune from the Jevons paradox.

When we scan the panoply of life, from bacteria and amoebas to reptiles and fish, we see a now-familiar pattern. Greater efficiency is associated with more energy consumption. Figure 12 shows the pattern.

(Keen-eyed readers might notice that this figure doesn’t show plants. Don’t worry, plants also exhibit the Jevons paradox.)

Figure 12: The rest of life discovers the Jevons paradox. Each point represents a species, with major taxa shown in color. The horizontal axis plots each organism’s ‘biomass efficiency’ — the amount of biomass it can support per watt of energy input. The vertical axis shows the organism’s metabolic rate — the rate that it consumes energy. Note that both axes use log scales. [Sources and methods]

Is efficiency a ‘curse’?

To summarize, the backfire of energy efficiency appears ubiquitous. It’s a consistent pattern throughout industrial history. And it’s a recurring theme across life itself. Given this universality, does it make sense to call efficiency a ‘curse’? (This is the language used by Foster and colleagues.)

I think the answer is no.

In essence, efficiency is simply a means to an end — a way to catalyze technological or biological sprawl. Whether this sprawl is a blessing or a curse depends on the setting.

In the case of life, biological sprawl is just the spread of life into niches of every size. We typically call this sprawl ‘biodiversity’, and we consider it a good thing. But in the case of industrial society, fossil fuels have allowed us to build more technological sprawl than the Earth can sustain. So it’s less that efficiency is a ‘curse’, and more that anything done with fossil fuels was always destined to be unsustainable. Greater efficiency simply hastened the inevitable.

To summarize, humans play the same game as life — we use efficiency to catalyze sprawl. And for most of human history, we played the game within tight constraints, using only the energy made available by the sun. But our exploitation of fossil fuels obviously changed everything, supercharging our activity beyond what the solar budget could maintain. It’s this lack of constraint that converts the Jevons paradox from a blessing into a curse.

Lessons from bacteria

While we’re on the topic of constraints, it turns out that we can learn some lessons from the most basic form of life — the lowly bacteria. You see, these simple creatures manage to shirk the Jevons paradox. Figure 13 shows the pattern. As bacteria get more efficient, they consume less energy.

Figure 13: Bacteria shirk the Jevons paradox. As bacteria get more efficient at supporting biomass (horizontal axis), their metabolic rate decreases (vertical axis). Each point represents a different bacteria species. Note that both axes use a log scale. [Sources and methods]

Now, I’m no microbiologist, but here’s what I suspect is going on. My guess is that bacteria shirk the Jevons paradox because they are subject to tight constraints. Simply put, bacteria are stuck being small.

The barrier comes down to a quirk of cellular design. Since bacteria have no mitochondria (the powerhouses of the eukaryotic cell), they’re forced to metabolize energy along their cell walls. And that means their ability to harness energy is a function of their surface area. Now, as bacteria grow larger, their volume grows faster than their surface area. And that means bigness is a killer; beyond a certain size, bacteria can’t harvest enough energy to support their biomass. As a consequence, they’re stuck being tiny.8
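In scaling terms, the constraint is easy to state. For a roughly spherical cell of radius r, membrane area (and hence available power) grows as r², while volume (and hence the biomass to be supported) grows as r³. So the power available per unit of mass falls as the cell gets bigger:

\displaystyle \frac{P}{m} \propto \frac{r^2}{r^3} = \frac{1}{r}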

Backing up a bit, it is the creep towards bigness that gives rise to life’s efficiency ‘backfire’. As organisms get larger, they tend to become more efficient. But they also consume more energy. My guess is that bacteria avoid this backfire effect because they cannot get big. In other words, they cannot create biological sprawl.

Constraining technological sprawl

Looking at bacteria, the lesson for humans is that if we want to avoid the Jevons paradox, we must impose constraints on technological sprawl. Of course, one way or another, those constraints are heading our way. But it would be more pleasant if they arrived by design rather than by disaster.

So how do we voluntarily constrain technological sprawl? That’s the ten-quadrillion-dollar question. And I would be lying if I claimed to have a definitive answer.

For their part, degrowth writers have been mulling over the question for a while.9 And if there’s anything like a consensus among these thinkers, it’s that individual action isn’t enough. Instead, the road to degrowth will require drastic social change, with a focus on expanding public goods, reducing inequality, and constraining conspicuous consumption. As always, the main hurdle here is the difficulty of turning ideas into action. Sadly, I suspect that for degrowth policies to become mainstream, the unfolding ecological catastrophe will have to get much worse.

Besides limiting consumption, another option would be to steer technological sprawl in a more sustainable direction. The goal would be a complete shift to renewable energy — a shift that would put us back in line with the rest of the solar-dependent biosphere. Of course, the Jevons paradox would still apply, meaning more efficient renewable-energy infrastructure would stimulate technological sprawl. But unlike with fossil fuels, the sprawl would be constrained by the sun’s energy budget.

That said, the dark side of the shift to renewables is what happens in the interim, while the fossil-fuel tap is still on. The risk is that instead of reducing fossil fuel consumption, renewable energy becomes another side dish — fries to go with the fossil-fuel Big Mac.

Sadly, that’s exactly what’s happened historically, as Figure 14 shows. Since 1965, global renewable energy consumption has exploded. And while it did, global fossil fuel use continued to grow.

Figure 14: The spread of renewable energy hasn’t slowed the growth of fossil fuel consumption. The horizontal axis shows the world’s consumption of renewable energy, which has grown dramatically since 1965. (Note the log scale.) Unfortunately, over the same period, global fossil fuel use continued to expand (vertical axis). [Sources and methods]

So how should we combat this side-dish problem? In light of the Jevons paradox, here’s a possible solution: punish fossil-fuel efficiency.

The idea is that if we continue to develop better fossil-fuel tech, the efficiency will inevitably backfire, catalyzing more fossil fuel consumption. So instead of developing this tech, we should deprecate it. Pull the funding for anything related to fossil-fuel efficiency and pump the money into renewable energy.

Admittedly, the policy sounds crazy. But what would be even crazier is if we kept striving for greater fossil-fuel efficiency, thinking the policy won’t backfire. It always has. And it always will.


Support this blog

Hi folks. I’m a crowdfunded scientist who shares all of his (painstaking) research for free. If you think my work has value, consider becoming a supporter.






This work is licensed under a Creative Commons Attribution 4.0 License. You can use/share it any way you want, provided you attribute it to me (Blair Fix) and link to Economics from the Top Down.


The Jevons paradox in the UK, Japan and Austria

In addition to the United States (Figure 8), we can see the Jevons paradox in the UK (Figure 15), Japan (Figure 16), and Austria (Figure 17). In all four countries, increases in aggregate efficiency are associated with greater energy use per capita.

My guess is that this pattern is basically universal — we’d find it in every country and every society. That said, it’s only in the US, the UK, Japan, and Austria that we’ve got estimates for the aggregate efficiency of primary energy converters. On that front, the data plotted in Figures 15–17 comes from Benjamin Warr’s REXS database.

Figure 15: The United Kingdom discovers the Jevons paradox. As the efficiency of UK energy-conversion technology increased (horizontal axis), so did energy use per capita (vertical axis). Data is from Benjamin Warr’s REXS database.

Figure 16: Japan discovers the Jevons paradox. As the efficiency of Japan’s energy-conversion technology increased (horizontal axis), so did energy use per capita (vertical axis). Data is from Benjamin Warr’s REXS database.

Figure 17: Austria discovers the Jevons paradox. As the efficiency of Austria’s energy-conversion technology increased (horizontal axis), so did energy use per capita (vertical axis). Data is from Benjamin Warr’s REXS database.

Hashing details

If you’re curious about the specific chips used for Bitcoin mining, have a look at the detailed labels in Figure 18. Prior to 2011, mining was done mostly with standard GPUs. But starting in late 2012, we see the invention of chips designed solely for Bitcoin mining. The efficiency of these chips grew exponentially until about 2018, after which it settled into a linear trend.

Figure 18: Hashing efficiency of Bitcoin technology. Data is from the Cambridge Bitcoin Electricity Consumption Index.

Don’t forget about plants

Yes, plants also experience the Jevons paradox. Figure 19 shows how they fit into life’s spectrum.

Figure 19: Plants discover the Jevons paradox. Data for organism metabolism and mass is from Hatton et al. 2019, ‘Linking scaling laws across eukaryotes’.

Sources and methods

Sources for Figure 1

Data for word frequency is from Google Ngrams, downloaded with the R package ngramr.

Sources for Figure 2

Data for British coal consumption is from the following sources:

Sources for Figure 4

Estimates for computer efficiency are from Koomey et al., 2009, ‘Assessing trends in the electrical efficiency of computation over time’. I digitized the data in their Figure ES-1.

Sources for Figures 5 and 6

Data for Bitcoin hashing efficiency and Bitcoin electricity consumption is from the Cambridge Bitcoin Electricity Consumption Index.

Sources for Figure 7

Data for the aggregate efficiency of US primary energy converters is from Benjamin Warr’s REXS database. Note: Warr’s original website is now dead, but fortunately it has been scraped by the Internet Archive.

Sources for Figure 8

For efficiency data, see the notes for Figure 7. Data for US energy consumption per capita is from the following sources:

  • Energy use, 1900 – 1948: US Energy Information Administration, Annual Energy Review 2009, Appendix E1
  • Energy use, 1949 – 2000: US Energy Information Administration, Annual Energy Review, Table 1.1
  • Population, 1900 – 2000: Our World in Data, ‘Population’

Sources for Figure 9

Data for heat engine efficiency is from Cleveland and Clifford’s article ‘Maximum efficiencies of engines and turbines, 1700-2000’. Their original source data is from Vaclav Smil’s book Energy and Civilization: A History.

Sources for Figure 10

For heat engine efficiency, see the sources for Figure 9. World fossil fuel use per capita is from the following sources:

Note: to estimate global fossil fuel use back to 1700, I indexed it to the level of British coal production, using data from Our World in Data.

Sources for Figures 11, 12, and 13

Data for organism metabolism and mass is from Hatton et al. 2019, ‘Linking scaling laws across eukaryotes’.

Sources for Figure 14

Energy data is from the BP Statistical Review of Energy, 2023.

Notes

  1. Credit to Ulf Martin for inspiring the language of ‘sprawl’. In a 2019 paper, he describes capitalist credit as a form of ‘autocatalytic sprawl’.↩
  2. Permit me a brief book review. I recommend reading The Economic Growth Engine for two reasons. First, Ayres and Warr are extremely knowledgeable about the science of energetics, and the book is packed with useful information on this topic. Second, the book is a case study in how heterodox economists fall into conceptual traps laid by neoclassical economics.

    After conducting detailed calculations of how energy gets converted into useful work, Ayres and Warr end up dumping the latter quantity into the silliest of neoclassical inventions — the aggregate production function. This function takes inputs of capital and labor and then ‘explains’ the growth of real GDP.

    True, Ayres and Warr add ‘useful work’ to the standard list of inputs. However, they gloss over the gaping flaws with production functions. They relegate the ‘Cambridge Capital Controversy’ — which demonstrated that ‘capital’ (and by extension, ‘output’) cannot be aggregated objectively — to a footnote.

    Frustratingly, Ayres and Warr go on to claim that the Cambridge controversy was ‘resolved’, in the sense that economists arrived at an agreed upon method for ‘measuring’ the capital stock. But what they mean here is that neoclassical economists ‘resolved’ the Cambridge controversy by agreeing to ignore it.

    Also, Ayres and Warr fail to consider the immense ambiguity involved with measuring GDP. In fact, it doesn’t seem to dawn on them that the most important feature of ‘economic growth’ is the rise of useful work itself, and that ‘real GDP’ is an ideological distraction not worth explaining.

    Still, reading The Economic Growth Engine was foundational to my own thinking, primarily because it made me realize that whenever neoclassical economics rears its head, it ruins otherwise good science.↩

  3. Efficiency estimates for the Newcomen engine come from Vaclav Smil’s book Energy and Civilization.↩
  4. Fun fact: Watt’s engine patent was called A New Invented Method of Lessening the Consumption of Steam and Fuel in Fire Engines. In other words, Watt knew that his main contribution was designing a steam engine that was more efficient.↩
  5. According to Wang et al. (2010), the average human heart consumes about 440 kcal per day, per kilogram of mass. Assuming an average heart mass of 310 grams, that equates to a metabolic rate of about 6.6 watts. I couldn’t find reliable data for the metabolic rate of lungs. But assuming they’re similar to skeletal muscle, they’d have a metabolic rate of about 12 kcal per day, per kilogram of mass. With an average mass of 2 kilograms, that puts the lungs’ metabolic rate at around 1.2 watts. Adding both values pegs the heart and lungs’ combined metabolic rate at about 7.8 watts. For comparison, the Sorin S5 heart-lung machine has a power rating of 160 watts. (For the unit conversions, see the sketch after these notes.)↩
  6. Wang et al. (2010) report that the kidneys have a metabolic rate of 440 kcal per day per kilogram. Assuming an average kidney mass of 290 grams, that gives a metabolic rate of roughly 6.2 watts. In contrast, Nickel et al. (2017) peg the energy use of a haemodialysis machine at 0.56 kWh for a four-hour treatment, which equates to a power of about 140 watts.↩
  7. According to an IBM press release, Watson ran on a cluster of ninety IBM Power 750 servers. Looking at the operating specs, each Power 750 server had a max draw of 1950 watts. Assuming a conservative draw of 500 watts per server while playing Jeopardy, that equates to a total power consumption of 45,000 watts. For comparison, Clarke and Sokoloff peg the brain’s metabolism at 20 watts.↩
  8. For a fascinating account of how life sidestepped the size constraints that limit bacteria, see Nick Lane’s book The Vital Question.↩
  9. On the topic of degrowth, here are two books worth reading:
    1. Less is More, by Jason Hickel
    2. The Case for Degrowth, by Giorgos Kallis, Susan Paulson, Giacomo D’Alisa and Federico Demaria

    Unlike techno-optimist works like Factor Four, both books acknowledge that energy efficiency often backfires.↩
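For readers who want to check the unit conversions in notes 5 and 6, here’s a minimal sketch (metabolic rates and organ masses as assumed in those notes):

```python
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 86_400

def organ_watts(kcal_per_day_per_kg: float, mass_kg: float) -> float:
    """Convert a mass-specific metabolic rate (kcal/day/kg) to watts."""
    return kcal_per_day_per_kg * mass_kg * KCAL_TO_JOULES / SECONDS_PER_DAY

heart = organ_watts(440, 0.31)    # ~6.6 W
lungs = organ_watts(12, 2.0)      # ~1.2 W (skeletal-muscle-like rate)
kidneys = organ_watts(440, 0.29)  # ~6.2 W
dialysis_watts = 0.56 * 1000 / 4  # 0.56 kWh over a 4-hour treatment: ~140 W

print(f"heart + lungs: ~{heart + lungs:.1f} W (vs ~160 W heart-lung machine)")
print(f"kidneys:       ~{kidneys:.1f} W (vs ~{dialysis_watts:.0f} W dialysis machine)")
```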

Further reading

Alcott, B. (2005). Jevons’ paradox. Ecological Economics, 54(1), 9–21.

Ayres, R. U., & Warr, B. (2010). The economic growth engine: How energy and work drive material prosperity. Cheltenham, UK: Edward Elgar Publishing.

Foster, J. B., Clark, B., & York, R. (2010). Capitalism and the curse of energy efficiency: The return of the Jevons paradox. Monthly Review, 62(6), 1–12.

Hickel, J. (2020). Less is more: How degrowth will save the world. Random House.

Kallis, G., Paulson, S., D’Alisa, G., & Demaria, F. (2020). The case for degrowth. John Wiley & Sons.

Koomey, J. G., Berard, S., Sanchez, M., & Wong, H. (2009). Assessing trends in the electrical efficiency of computation over time. IEEE Annals of the History of Computing, 17.

Martin, U. (2019). The autocatalytic sprawl of pseudorational mastery. Review of Capital as Power, 1(4), 1–30.

Polimeni, J., Mayumi, K., Giampietro, M., & Alcott, B. (2012). The Jevons paradox and the myth of resource efficiency improvements. London: Routledge.

Smil, V. (2018). Energy and civilization: A history. MIT press.

Warr, B., Ayres, R., Eisenmenger, N., Krausmann, F., & Schandl, H. (2010). Energy use and economic development: A comparative analysis of useful work supply in Austria, Japan, the United Kingdom and the US during 100 years of economic growth. Ecological Economics, 69(10), 1904–1917.

Weizsäcker, E. U. von., Lovins, A. B., & Lovins, L. H. (1997). Factor four: Doubling wealth, halving resource use. The new report to the Club of Rome. Earthscan.



Friday Squid Blogging: Emotional Support Squid


When asked what makes this an “emotional support squid” and not just another stuffed animal, its creator says:

They’re emotional support squid because they’re large, and cuddly, but also cheerfully bright and derpy. They make great neck pillows (and you can fidget with the arms and tentacles) for travelling, and, on a more personal note, when my mum was sick in the hospital I gave her one and she said it brought her “great comfort” to have her squid tucked up beside her and not be a nuisance while she was sleeping.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.


An evolutionary biologist and a science fiction writer walk into a bar…

Personally, I’d still put "Hope" in quotes.

Last month it was the Atlantic, where I pretended to know something about AI. This month it’s the MIT Reader, and the subject is The Imminent Collapse of Civilization. Honestly, I had no idea I was such an expert on so many things.

This time, though, I’m not so much an expert as a foil. Dan Brooks (a name long-time readers of this blog may recognize) and Sal Agosta (whose concept of “sloppy fitness” careful readers of my novelette “The Island” may recognize) have written a book called A Darwinian Survival Guide: Hope for the Twenty-First Century. Their definition of “hope” is significantly more restrained than the tech bros and hopepunk authors would like: not once do they suggest, for example, that we could all keep our superyachts if we just put a giant translucent pie plate into space to cut incident sunlight by a few percent. Brooks & Agosta’s definition of hope is far more appropriate for a world in which leading climate scientists admit to fury and despair at political inaction, decry living in an “age of fools”, and predict by a nearly five-to-one margin that not only is 1.5ºC a pipe dream, but that we’ll be blowing past 2.5ºC by century’s end. They’ve internalized the growing number of studies which point to global societal collapse around midcentury. Their idea of hope is taken explicitly from Asimov’s Foundation series: not How do we prevent collapse, but How do we come back afterward? That’s what their book is about.

Casual observers might see my name where bylines usually go, and conclude that this is somehow my interview. Don’t be fooled: the only thing I lay exclusive claim to here is the intro. This is about Dan and Sal. This is their baby; all I did was poke at it from various angles and let Dan react as he would. Our perspectives do largely overlap, but not entirely. (Unlike Dan, I do think the extinction rates we’re inflicting on the planet justify the use of the word “crush”—although I take his point that the thing being crushed is only the biosphere as it currently exists, not the biosphere as a dynamic and persistent entity. I also confess to a certain level of bitterness and species-self-loathing that Dan seems to have avoided; I’m pretty certain the biosphere would be better off without us.)

The scene of the Crime.

But there’s that word again: hope. Not the starry-eyed denial of reality that infests the Solarpunk Brigade, not the Hope Police’s stern imperative that We Must Never Feed A Narrative of Hopelessness and Despair no matter what the facts tell us. Just the suggestion that after everything falls apart—just maybe, if we do things right this time—we might climb back out of the abyss in decades, instead of centuries.

Probably still not what most people want to hear. Still. I’ll take what I can get.

So go check it out—keeping in mind, lest you quail at all the articulate erudition on display, that the transcript has been edited to make us look a lot more coherent than we were in real life.

I mean, we were drinking heavily the whole time. What else would you expect, given the subject matter?


Elon Musk: Threat Or Menace? Part 5

Much of this series has been based on the outstanding reporting of the Washington Post, and the team's Trisha Thadani is back with Lawsuits test Tesla claim that drivers are solely responsible for crashes. My main concern all along has been that Musk's irresponsible hyping of his flawed technology is not just killing his credulous customers, but much more seriously innocent bystanders who had no say in the matter. The article includes video of:
  • A driver who believed Autopilot could drive him home despite his being drunk. The car drove the wrong way on the highway and killed another innocent victim of Musk's hype.
  • Autopilot rear-ending a merging vehicle and killing another innocent victim, a 15-year-old.
  • Autopilot slamming into a broken down vehicle on the highway. When the Tesla driver left the wreck, she was hit and killed by another car.
  • Autopilot speeding through a T-junction and crashing into a parked truck.
Below the fold I look into Tesla's results, Musk's response, the details revealed by the various lawsuits, and this excellent advice from Elon Musk:
"If somebody doesn’t believe Tesla is going to solve autonomy, I think they should not be an investor in the company."
Elon Musk, 24th April 2024

The Results

In Tesla’s biggest problem: cars, Drew Dickson looks at Tesla's first quarter results:
Of Tesla’s total quarterly sales of $21.3bn, 82 per cent were indeed “automotive revenues” while the rest were energy and services.
...
Tesla burned through $2.5bn of cash in the quarter. Inventories grew by over 10 per cent to $16bn.
82% of $21.3B is $17.5B, so Tesla has almost an entire quarter of unsold cars on hand.

One problem is that Musk's persona as an extreme right-wing troll has been putting off the key Tesla customer demographic: well-off liberals who care about climate change. Another problem is that Tesla's lineup of models is old and expensive. Tesla used to recognize that they needed a cheaper product but:
A cut-price Model 2 was first teased at the 2023 Tesla AGM, with Musk saying in January that it would be in production towards the end of next year, but the expected spring product announcement never came.

Tesla now states it is “accelerating” plans, though as with the Cybertruck it’s easy to mistake the accelerator for the brakes. The notion that a Model 2 might be built in new factories in Mexico or elsewhere has been replaced with vague commitments to retool existing infrastructure and production lines.
The resources that could have developed a Model 2 or refreshed the existing models instead went to develop the "Incel Camino", the Cybertruck. This isn't just an $82K laughing-stock, but a manufacturing nightmare that will be lucky to sell 20% of Musk's 250K/year projection, especially since it cannot be road-legal in either of Tesla's #2 and #3 markets (China and the EU). It will definitely be a drag on the results for some time. So the Models S (2012), X (2015), 3 (2017) and Y (2020) will have to soldier on for a while.

This aging product line isn't attracting customers:
  • Units sold were down 13 per cent sequentially.
  • Tesla’s price per vehicle, excluding regulatory credits and leasing or finance income, was $38,924.
  • This was down 13 per cent from $44,642 last year, which itself was down 11 per cent from $50,037 the previous year.
Extreme pricing pressure is forcing affordable vehicles on Tesla, irrespective of whether it chooses to launch one. Amid a lack of demand for EVs in general, and Teslas in particular, its quarterly automotive revenues were down nearly 13 per cent over the past year and by over 19 per cent sequentially.

“Clean” automotive margins (which exclude regulatory credits and leasing income) were down from 29.7 per cent in the first quarter of 2022, to 18.3 per cent in the first quarter of 2023, and again to 15.6 per cent in the first quarter of 2024. If you back out the new IRA US tax credits (which Tesla doesn’t seem to disclose) then automotive gross margin looks to have fallen even further, to around 14.1 per cent.
Shrinking margins on shrinking sales hit earnings per share:
GAAP EPS was down 53 per cent year-on-year, accelerating from the 23 per cent drop in the first quarter of 2023. Even using non-GAAP EPS it’s a 47 per cent decrease over the past year.

In the summer of 2022, when the stock was above $300 share, analysts were expecting Q1’24 EPS of $1.80. Instead, they got $0.45. That is a 75 per cent downgrade to expectations.
And the upsell of Fake Self Driving isn't helping, as Craig Trudell reports in Tesla’s Self-Driving Software Is a Perpetual Revenue Letdown:
Tesla released its 10-Q, a quarterly report that provides a more detailed view into the company’s financial position. For several years running, Tesla has provided regular updates in these statements on how much revenue it’s taken in from customers and not yet fully recognized. Some of this deferred revenue relates to a work-in-progress product: Full Self-Driving, or FSD, for short.

Tesla’s deferred automotive revenue amounted to $3.5 billion as of March 31, little changed from the end of last year. Of that amount, Tesla expects to recognize $848 million in the next 12 months — meaning much of the performance obligations tied to what it’s been charging customers for FSD will remain unsatisfied a year from now.
...
In these filings, Tesla also reports how much deferred revenue it’s actually recognized — and the Austin-based company has consistently undershot its own forecasts. It has recognized $494 million of deferred revenue in the last 12 months, short of the $679 million that it projected a year ago.
Tesla's CFO quit last August:
The carmaker reported this week that its operating margin shrank to 5.5% in the first quarter, the lowest since the last three months of 2020. The measure of profitability was at 16% when Zachary Kirkhorn, Tesla’s then-chief financial officer, said during an earnings call that it was key to the company.

“As a management team here, we’re most focused on what our operating margin is,” he said in January 2023, in response to an investor question on a different earnings metric. “That is what we’re primarily managing to now.”
General Motors' operating margin is 7.35%. But not to worry, Tesla isn't a car company; it's an AI and robotics company:
If the auto business is worth 3 or 4 times the multiple of a Stellantis or Volkswagen, then it would get a forward PE of, say, 20x. That’s more than generous for a business the CEO talks about as a legacy sideline.

Street numbers for Tesla are consistently far too high but even using the 2024 consensus EPS of $2.64, Tesla would be worth just over $50 per share. Using today’s diluted shares (and assuming that they don’t issue more, which they will) that works out to a market cap of $181bn.

Tesla’s fully diluted market cap at pixel time is still $580bn. Simplistically, that means shareholders are already paying around $400bn for corporate experiments in “robotics and AI”, along with anything else Musk has or tries to conjure up.
Tesla will definitely issue more shares, for example after the 13th June shareholder vote when they will reward Musk's corporate experiments in robotics and AI by reinstating the $56B incentive package cancelled by the Delaware court.
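For concreteness, here's the sum-of-parts arithmetic from the quote above, as a sketch (the implied share count is backed out from the article's own figures, not an independent number):

```python
eps_2024_consensus = 2.64  # consensus EPS cited in the article
generous_auto_pe = 20      # "3 or 4 times" a Stellantis/VW multiple

value_per_share = eps_2024_consensus * generous_auto_pe  # ~$53
car_business_cap = 181e9   # market cap at that price, per the article
actual_market_cap = 580e9  # fully diluted cap "at pixel time"

implied_shares = car_business_cap / value_per_share  # ~3.4bn diluted shares
ai_premium = actual_market_cap - car_business_cap

print(f"car business alone: ~${value_per_share:.0f}/share on ~{implied_shares/1e9:.1f}bn shares")
print(f"premium paid for 'robotics and AI': ~${ai_premium/1e9:.0f}bn")
```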

Pumping The Stock

About 70% of the stock price is based on Musk hyping the technology. Thus for Musk it is more than twice as important to pump the stock as it is to sell more cars. He has to follow two strategies:
  • Make the results look better in the short term by increasing margins. The obvious way to do this is to cut costs, even though this will reduce profits in the longer term. After all, in the longer term Tesla isn't about selling cars, it is about AI and robotics.
  • Distract people from looking at the results by unleashing the hype cannon.

Cutting Costs

The knee-jerk reaction of US companies to bad quarterly results is to lay off staff, but they generally target the less successful parts. Elon Musk not so much:
Even Tesla's harshest critics must concede that the company's Supercharger network is its star asset. Tesla has more fast chargers in operation than anyone else, and this year opened them up to other automakers, which are adopting the J3400 plug standard.

All of which makes the decision to get rid of senior director of EV charging Rebecca Tinucci—along with her entire team—a bit of a head-scratcher. If I were the driver of a non-Tesla EV expecting to get access to Superchargers this year, I'd probably expect this to result in some friction. Musk told workers that Tesla "will continue to build out some new Supercharger locations, where critical, and finish those currently under construction."
Like most of the recent desperation moves, this was Musk's decision:
The decision to cut the nearly 500-person group, including its senior director, Rebecca Tinucci, was made by Chief Executive Officer Elon Musk in the last week, according to a person familiar with the matter.
In return for government subsidies, Tesla had been turning Superchargers into a separate business:
Access to high-speed charging is critical to EV adoption, and Tesla invested billions of dollars into developing a global network of Superchargers that became the envy of other automakers. It’s also a critical driver of Tesla sales, and the carmaker pointed to the division’s growth during its first-quarter results just last week.

“Starting at the end of February, we began opening our North American Supercharger Network to more non-Tesla EV owners,” Tesla said in its shareholder deck.

The Musk-led company has also signed charging partnerships with carmakers including Stellantis NV, Volvo, Polestar, Kia, Honda, Mercedes-Benz and BMW. It’s not clear who will now oversee Tesla’s partnerships with those companies. GM, Volvo and Polestar were all due to open NACS chargers to their customers in the immediate future, according to Tesla’s website.
But maybe Musk couldn't resist a chance to mess with the competition:
The job eliminations mean Rivian, Ford and others have lost their main points of contact in Tesla’s charging unit shortly before the kickoff of the busy summer driving season. Tinucci was one of the main executives building and managing outside partnerships and was thought of highly, two people who had worked with her inside and outside of Tesla said.
Musk Undercuts Tesla Chargers That Biden Lauded as ‘a Big Deal’ by Craig Trudell suggests a political motive:
In addition to potentially compromising budding partnerships with other carmakers looking to tap Tesla’s chargers, another consequence of Musk’s move may be undercutting Biden’s EV push in the midst of his reelection campaign. Presumptive Republican nominee Donald Trump has repeatedly attacked electric cars on the campaign trail and predicted a “bloodbath” for the auto industry if he isn’t elected.
Faced with a huge short-term threat to his wealth, Musk isn't concerned with the longer term, in which, unlike robotaxis, Superchargers could have been a nice little earner:
Tesla had been building a tidy charging business over more than a decade. BloombergNEF estimates that the company delivered 8% of the public charging electricity demanded globally last year. Before Musk’s surprise decision, the researcher was projecting that Tesla’s annual profit from Supercharging could rise to around $740 million in 2030.

That level of earnings is now likely out of reach, as BNEF’s estimates assumed Tesla would accelerate the pace of installations through the end of the decade. Musk had given indications this was the plan.
Musk may already be having second thoughts:
The move will slow the network’s growth, according to a person familiar with the division, who asked not to be identified discussing private matters. There already are discussions about rehiring some of the people affected in order to operate the existing network and grow it at a much slower rate, the person said.
Way to motivate the team, Elon!

Musk believes the future depends upon robotaxis but:
Many Tesla fans had been holding out hope that Musk would debut a cheap Model 2 EV in recent weeks. Instead, the tycoon promised that robotaxis would save the business, even as both of its partially automated driver assistance systems face recalls and investigations here in the US and in China.

Delivering on that goal is more than just a technical challenge, and it will require the cooperation and approval of state and federal authorities. However, Musk is also dissolving the company's public policy team in this latest cull.
Cutting off communication with the regulators who will have to approve robotaxi service isn't likely to help. And if there were another technology critical to Tesla's success, it would be batteries:
Earlier this month, Tesla engaged in another round of layoffs that decimated the company and parted ways with longtime executive Drew Baglino, who was responsible for Tesla's battery development.
Jonathan M. Gitlin rounds up reactions in What’s happening at Tesla? Here’s what experts think. He quotes Ed Niedermeyer:
Car companies "go bankrupt because A, they overinvest in factories, and then demand falls off. Which... that fits the profile," said Niedermeyer. "And B, they don't invest in products. Not investing in products is sort of a longer-term cause, and the proximal cause is [that] demand falls, and you've been investing in too many factories, and you get crushed by those fixed costs. So those cases that are common across most auto industry bankruptcies are certainly there."

But with almost $27 billion of cash on hand, that shouldn't happen any time soon. "The thing that is really hard to understand is that if you have tens of billions of dollars in cash but you're losing market share and you're losing margin, losing pricing power, and all the other things that are happening with the business—you don't cut your way out of that problem," Niedermeyer continued. "That's the confusing part about all this. What would you use that cash for if not to solve those problems? And yet, instead, they're cutting.

"One of the things I've said for a really long time, and I think this is what's happening, is that an automaker is not really real until they survived a serious downturn," Niedermeyer said. And while the broader economy looks fine, EV sales are battling a strong negative headwind. "The car game is a survival business. You can capture more upside than the other guy in the good times. And that can be really good for your stock. But if you do that by not investing in the things that protect you in the downturn, it doesn't matter. And you're just another one on the list of defunct automakers,"
Musk isn't listening, because he is still firing people:
On Sunday night, even more Tesla workers learned they were no longer employed by the company as it engaged in yet another round of layoffs. ... The latest round of layoffs has affected service advisers, engineers, and HR.

Hyping The Technology

The Washington Post team's Faiz Siddiqui and Trisha Thadani report that Tesla profit plunges on price cuts, but company unveils plans for affordable models:
CEO Elon Musk, who has a unique penchant for redirecting the conversation, used Tuesday's earnings call to deflect from the poor numbers, focusing instead on the company's commitment to artificial intelligence and a fully autonomous car. Details on Tesla's apparent new offerings — which include the "more affordable models" and the "cybercab" — were scant and did not address how the company would overcome the technological and regulatory hurdles ahead.
Musk has form when it comes to hyping his technologies and companies. His tweeting that funding had been secured to take Tesla private at $420/share led to a settlement with the SEC that is still in place:
The supreme court on Monday rejected an appeal from Elon Musk over a settlement with securities regulators that requires him to get approval in advance of some tweets that relate to Tesla, the electric vehicle company he leads.
The hype is starting to wear thin but not yet with the markets, as Brandon Vigliarolo points out in Musk moves Tesla's goalposts, investors happily move shares higher:
Elon Musk has a strategy and you may have seen it before: When things aren't going well, he'll say something wild to take everyone's eyes off the trouble, and raise share prices with dreams.
...
The first quarter of 2024 didn't go well for Tesla, either economically or reputationally. As we reported earlier, sales fell, net profit tumbled off the same cliff Tesla's stock price earlier careened over, and production and deliveries decreased as well.

But give Musk a chance to toss out a flash grenade and he'll do just that: This time around with some wild predictions about his automaker producing a "purpose-built robotaxi" dubbed the "Cybercab," and Tesla's latest vision for the future as one in which it is focused on "solving autonomy."
...
"It's like some combination of Airbnb and Uber, meaning that there will be some number of cars that Tesla owns itself and operates in the fleet … and then there'll be a bunch of cars where they're owned by the end user," Musk said. He added the fleet will likely grow to include "several tens of millions" of vehicles by the end of the decade.
Last year Tesla shipped 1.8M vehicles, and there are 6 years left to the "end of the decade", so Musk is promising to ship an average of at least 3M vehicles/year, every one of them enrolled in the robotaxi fleet (the arithmetic is sketched below). Even if this were plausible, one has to ask where all the riders would come from for a fleet 2.5 times bigger than Uber's global driver list; in the US, 36% of adults have already used Uber or Lyft, so the market is close to saturated. I'm sure we all remember that:
Musk spent plenty of time in the 2010s claiming he'd have one million robotaxis on the road by 2020.
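Here is that fleet arithmetic as a quick sketch; the fleet target (the low end of "several tens of millions") and the Uber driver count are assumptions chosen to match the figures above, not numbers from the articles quoted:

```python
# Sanity check of the robotaxi fleet promise. The fleet target and
# Uber driver count are assumed values, not figures from the article.
years_left = 6              # 2024 to the "end of the decade"
shipped_last_year_m = 1.8   # vehicles Tesla shipped last year, millions
fleet_target_m = 18         # low end of "several tens of millions"
uber_drivers_m = 7.2        # approximate global Uber driver base (assumed)

required_rate_m = fleet_target_m / years_left   # 3M vehicles/year, all robotaxi-enrolled
print(f"required shipments: {required_rate_m:.0f}M/year "
      f"(vs {shipped_last_year_m}M shipped last year)")
print(f"fleet would be {fleet_target_m / uber_drivers_m:.1f}x Uber's driver base")
```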
Pumping the stock full of hype is a Musk habit:
Getting in trouble over "Full-Self Driving" claims? Stick a guy in a robot suit and call it Optimus to distract shareholders. Fail to get FSD realized this year - again? Just kick it down the road. Journalists calling him out on his nonsense? Rant about the "woke mind virus" and the media on Twitter.

Of course, Optimus has been nowhere to be seen and was barely mentioned during the call. Likewise, Tesla's dreams of tens of millions of robotaxis on the road in the next six years rests on the need for serious technological breakthroughs the automaker has failed to make despite years of trying. Oh, and a ton of permits if this is to operate in the States, at least.
Vigliarolo isn't alone. In Musk Sells the Tesla Dream, But Don't Ask for Details Liam Denning notices a detail from the earnings call:
There was an odd tweak to the low-cost vehicle strategy Tesla laid out in March 2023, when management talked about cutting costs in half with revolutionary manufacturing methods. Now, Tesla talks about melding aspects of next-generation platforms with its existing ones in the new models, enabling the company to build them on existing manufacturing lines. To be clear, that is an intriguing possibility, offering efficiencies to reduce stubborn costs.

But also to be clear: It won’t deliver a $25,000 Model 2 anytime soon — “this update may result in achieving less cost reduction than previously expected” — and also isn’t what Tesla talked about only a year or so ago. It is a major overhaul of strategy requiring details.
Tesla is starting to have serious competition:
So consumers — some of whom are turned off by Musk’s incessant posting on X, the social platform he owns, and by his controversial political comments — have a lot of choices when it comes to buying an electric car. Tesla’s share of the EV market in the US was roughly 51% in the first quarter, Cox says, down from almost 62% a year earlier.

The competition is even fiercer outside the US, where Chinese carmakers dominate. About half of all EVs sold globally are Chinese brands — BYD, the top brand within China, sold more cars than Tesla did in the last quarter of 2023, though Tesla regained the lead in the following quarter.
To respond to this competition, Tesla has understood for a long time that they needed a $25K Model 2:
Musk first teased about such a car in September 2020, saying a series of innovations Tesla was working on would enable it to make an EV at that price within about three years. As recently as January, Musk said Tesla was “very far along” with work on its lower-cost vehicle.
But as always, Musk's schedule was just a fantasy, and then the need to pump the stock took over:
Then, in early April, Reuters reported that Tesla had shelved plans for the cheaper vehicle to prioritize its robotaxi, creating bedlam among investors. The tension within Tesla over Musk’s desire to focus on the robotaxi is nothing new. It was chronicled by Walter Isaacson, who wrote in his book published in September that the billionaire had “repeatedly vetoed” plans to make a less-expensive model. Musk refused to give any details about a new, more-affordable model when asked about them by analysts on the first-quarter call.
My guess is that it has dawned on Tesla that, having sunk so many resources into the Cybertruck, they simply can't build a $25K car and make money, unlike the competition:
China’s EV advantage is in batteries — the most expensive part of an EV. They’re much cheaper in China because of the country’s control of the mining and processing of component materials such as lithium, cobalt, manganese and rare earth metals. UBS analysts say BYD had a 25% cost advantage over North American and European brands in 2023. Its cheapest model goes for $10,000. Tesla’s cheapest Model Y — the world’s best-selling car of any kind last year — is about $35,000 in the US after accounting for federal tax credits.
China's other advantage is in driver assistance technology:
“Chinese EVs are simply evolving at a far faster pace than Tesla,” agrees Shanghai-based automotive journalist and WIRED contributor Mark Andrews, who tested the driver assistance tech available on the roads in China. The US-listed trio of Xpeng, Nio, and Li Auto offer better-than-Tesla “driving assistance features” that rely heavily on lidar sensors, a technology that Musk previously dismissed, but which Tesla is now said to be testing.

The Robotaxi Rescue

According to Musk, the thing that will transform Tesla's profitability is a robotaxi. Let's assume for the moment that, despite depending only upon cameras, Tesla's Fake Self Driving actually worked. In Robotaxi Economics I analyzed the New York Times' reporting on Waymo and Cruise robotaxis in San Francisco and concluded:
These numbers look even worse for Tesla. Last year Matthew Loh reported that Elon Musk says the difference between Tesla being 'worth a lot of money or worth basically zero' all comes down to solving self-driving technology, and the reason was that owners would rent out their Teslas as robotaxis when they weren't using them. This was always obviously a stupid idea; who wants drunkards home-bound from the pub throwing up on their Tesla's seats? But the fact that the numbers don't add up for robotaxis in general, and the fact that Hertz is scaling back its EV ambitions because its Teslas keep getting damaged because half of them are being used by Uber drivers as taxis, make the idea even more laughable.
Even for Waymo, it turns out that replacing a low-wage human with a lot of very expensive technology (Waymo's robotaxis "are worth as much as $200,000"), and higher-paid support staff isn't a path to profitability.

It is true that Tesla's robotaxis would be cheaper than Waymo's, since they won't have the lidar and radar and so on. But these things are what make the difference between Waymo's safety record, which is good enough that regulators allow them to carry passengers, and Tesla's safety record, which is unlikely to impress the regulators.

The regulators have a lot of reasons to be skeptical. Back in 2021 they started investigating Autopilot:
The U.S. government has opened a formal investigation into Tesla’s Autopilot partially automated driving system after a series of collisions with parked emergency vehicles.
...
NHTSA says it has identified 11 crashes since 2018 in which Teslas on Autopilot or Traffic Aware Cruise Control have hit vehicles at scenes where first responders have used flashing lights, flares, an illuminated arrow board or cones warning of hazards.
Since then the evidence has piled up, as the Washington Post team report:
At least eight lawsuits headed to trial in the coming year — including two that haven’t been previously reported — involve fatal or otherwise serious crashes that occurred while the driver was allegedly relying on Autopilot. The complaints argue that Tesla exaggerated the capabilities of the feature, which controls steering, speed and other actions typically left to the driver. As a result, the lawsuits claim, the company created a false sense of complacency that led the drivers to tragedy.
Musk claimed they would never settle these cases, but:
Tesla this month settled a high-profile case in Northern California that claimed Autopilot played a role in the fatal crash of an Apple engineer, Walter Huang. The company’s decision to settle with Huang’s family — along with a ruling from a Florida judge concluding that Tesla had “knowledge” that its technology was “flawed” under certain conditions — is giving fresh momentum to cases once seen as long shots, legal experts said.
The regulators move slowly but they keep moving:
Meanwhile, federal regulators appear increasingly sympathetic to claims that Tesla oversells its technology and misleads drivers. Even the decision to call the software Autopilot “elicits the idea of drivers not being in control” and invites “drivers to overly trust the automation,” NHTSA said Thursday, revealing that a two-year investigation into Autopilot had identified 467 crashes linked to the technology, 13 of them fatal.
Last December, the NHTSA forced Tesla to recall more than 2M vehicles because Autopilot:
has inadequate driver monitoring and that the system could lead to "foreseeable misuse,"
The agency suspects the recall wasn't adequate:
The National Highway Traffic Safety Administration disclosed Friday that it’s opened a query into the Autopilot recall Tesla conducted in December. The agency is concerned as to whether the company’s remedy was sufficient, in part due to 20 crashes that have occurred involving vehicles that received Tesla’s over-the-air software update.
The recall involved an over-the-air update, but Tesla's attitude to regulation showed through:
the agency writes that "Tesla has stated that a portion of the remedy both requires the owner to opt in and allows a driver to readily reverse it" and wants to know why subsequent updates have addressed problems that should have been fixed with the December recall.
What is the point of a safety recall that is opt-in and reversible? Clearly, it is to avoid denting the credibility of the hype. The NHTSA is not happy:
In a separate filing, NHTSA detailed findings from its investigation that preceded the December recall. The agency found that Autopilot didn’t sufficiently ensure drivers stayed engaged in the task of driving, and that Autopilot invited drivers to be overconfident in the system’s capabilities. Those factors led to foreseeable misuse and avoidable crashes, at least 13 of which involved one or more fatalities, according to the report.

“Tesla’s weak driver-engagement system was not appropriate for Autopilot’s permissive operating capabilities,” NHTSA said. This resulted in a “critical safety gap” between drivers’ expectations and the system’s actual capabilities, according to the agency.
The NHTSA is skeptical that the recall was effective:
But NHTSA says it knows of at least 20 crashes involving Tesla Autopilot that fall into three different categories. It says there have been nine cases of a Tesla having a frontal collision with another vehicle, object, or person, for which there was time for an alert driver to have avoided the crash. Another six crashes occurred when Teslas operating under Autopilot lost control and spun out or understeered into something in a low-grip environment. And five more crashes occurred when the driver inadvertently canceled the steering component of Autopilot without disengaging the adaptive cruise control.

NHTSA also says it tested the post-recall system at its Vehicle Research and Test Center in Ohio and that it "was unable to identify a difference in the initiation of the driver warning cascade between pre-remedy and post-remedy (camera obscured) conditions," referring to the supposedly stronger driver monitoring.
The agency is giving Tesla until July 1st:
to send NHTSA a lot of data, including a database with information for every car it has sold or leased in the US, with information on the number and dates of all Autopilot driver warnings, disengagements, and suspensions for each of those vehicles. (There are currently more than 2 million Teslas on the road in the US.)

Tesla must also provide the cumulative mileage covered by Autopilot, both before and after the recall. NHTSA wants Tesla to explain why it filed an official Part 573 Safety Recall Notice, "including all supporting engineering and safety assessment evidence." NHTSA also wants to know why any non-recall update was not part of the recall in the first place.
Finally, Mike Spector and Chris Prentice report that In Tesla Autopilot probe, US prosecutors focus on securities, wire fraud:
U.S. prosecutors are examining whether Tesla committed securities or wire fraud by misleading investors and consumers about its electric vehicles’ self-driving capabilities, three people familiar with the matter told Reuters.
...
Reuters exclusively reported the U.S. criminal investigation into Tesla in October 2022, and is now the first to report the specific criminal liability federal prosecutors are examining.

Investigators are exploring whether Tesla committed wire fraud, which involves deception in interstate communications, by misleading consumers about its driver-assistance systems, the sources said. They are also examining whether Tesla committed securities fraud by deceiving investors, two of the sources said.

The Securities and Exchange Commission is also investigating Tesla’s representations about driver-assistance systems to investors, one of the people said.
This is all about Autopilot, but Fake Self Driving has problems too, as the Washington Post team reported in Tesla worker killed in fiery crash may be first ‘Full Self-Driving’ fatality:
Two years ago, a Tesla shareholder tweeted that there “has not been one accident or injury” involving Full Self-Driving, to which Musk responded: “Correct.” But if that was accurate at the time, it no longer appears to be so. A Tesla driver who caused an eight-car pileup with multiple injuries on the San Francisco-Oakland Bay Bridge in 2022 told police he was using Full Self-Driving. And The Post has linked the technology to at least two serious crashes, including the one that killed von Ohain.
The regulators still approve Waymo's cautious and well-engineered robotaxi effort. Uber's and Cruise's robotaxi efforts flamed out. Given the lack of sensors, the history of crashes, the fact that their "autonomy" technology is still at level 2, and the resistance to regulation, why would any regulator approve even the testing, let alone the revenue service of a Tesla robotaxi?

After Robotaxis, What?

Now that the effectiveness of the robotaxi hype is starting to fade, it is time for Musk to roll out the next shiny object. Dan Robinson reports on it in Elon Musk's latest brainfart is to turn Tesla cars into AWS on wheels:
EV carmaker Tesla is considering a wonderful money-making wheeze – use all of that compute power in its vehicles to process workloads for cash, like a kind of AWS on wheels.

The Elon Musk-led outfit said in its recent earnings conference call for calendar Q1 that it had noticed its vehicles spend a considerable amount of their time just sitting there not moving. Many pack in a decent amount of processing power, so why not get them to do something useful and earn some cash for the company as well?

Speaking on the conference call, Musk said that he thought most Teslas were probably used for about a third of the hours in a week.
Seriously? A third of the hours in a week is 56 hours. Unless you're a gig worker for Uber or Lyft, who clocks 56 hours/week sitting behind the wheel? I can't believe that Musk is underestimating the potential here:
"And now that we have already paid for this compute in these cars, it might be wise to use them and not let them be, like, buying a lot of expensive machinery and leaving to them idle. We don't want that. We want to use the computer as much as possible and close to like basically 100 percent of the time to make full use of it," Elluswamy said.

"It takes a lot of intelligence to drive the car anyway. And when it's not driving the car, you just put this intelligence to other uses, solving scientific problems like a human or answering dumb questions for someone else," he added.
...
"If you get, like, to the 100 million vehicle level, which I think we will at some point get to, and you've got a kilowatt of usable compute – I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone, more than any company, probably more than any company," he mused.
Tesla is currently selling around 2M vehicles/year, so "at some point" will be sometime in the 2070s, by which time the vast majority of the vehicles Tesla has shipped will have been scrapped; and even if they still work, 50 years of Moore's law will have made all but the last few obsolete.
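
The timescale and the headline compute number are both easy to check; a minimal sketch under the quote's own assumptions:

```python
# When would Tesla reach "the 100 million vehicle level"?
# Assumes sales stay at roughly the current rate and ignores scrappage.
fleet_target_m = 100     # millions of vehicles
annual_sales_m = 2       # millions of vehicles/year
years_needed = fleet_target_m / annual_sales_m     # 50 years: the 2070s

kw_per_car = 1.0         # Musk's "a kilowatt of usable compute"
total_compute_gw = fleet_target_m * 1e6 * kw_per_car / 1e6   # kW -> GW
print(f"{years_needed:.0f} years to {fleet_target_m}M vehicles; "
      f"~{total_compute_gw:.0f} GW of nominal compute")
```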

Robinson starts thinking about the details:
Of course, all this compute capacity isn't sitting conveniently clustered together in a datacenter. It is distributed here and there, reached via a cellular connection in each Tesla, or possibly via Wi-Fi if the car is on the owner's driveway.

So the model Tesla would be looking at is perhaps more akin to edge computing, such as Heata in the UK, which uses heat from servers in homes to provide domestic hot water and rents out the compute capacity via cloud company Civo.

Among the issues we can see is that Tesla would be effectively using electricity that the car owner has paid for to run any workloads while it is idle, so would they get a cut of the money generated?

Yes, it seems. CFO Vaibhav Taneja said "the capex is shared by the entire world. Sort of everyone owns a small chunk, and they get a small profit out of it maybe."
...
IDC Senior Research Director for Digital Infrastructure Andrew Buss said the idea sounds technically feasible, but the potential downsides are perhaps too big to justify it being actually implemented.

"They'd not even be edge processing nodes as the code and data would have to be centrally managed and stored and then packaged and sent for processing before being returned once complete," he told The Register.

Other downsides include third-party code and data running on a private asset, Buss said, and if taking power from the battery, this would accelerate the degradation of these, which are the single most expensive and crucial part of a Tesla and need to be kept in as optimal a shape as possible for longevity and consistency of range.

In other words, Tesla might well find that implementing this idea may prove more trouble than it is actually worth for the returns it generates.

And as The Register noted after the earnings conference, Elon has a habit of throwing out wild ideas when things aren't going well to distract the punters and energize investors. This could well be one of them.

Stopping the Spread of Misinformation: Is Psychological Inoculation the Key?

This article is republished from SciLight, an independent science policy publication on Substack.

In December 2023, I moved from Washington, DC proper to the suburbs. My husband and I, and our two dogs and cat, simply needed more room than the single bedroom condo we could afford in the city. Six months later and we’re really happy with our new home, although suburban life is quite different.

One of the ways that suburban life is different is the increased amount of solicitation. We’ve had numerous folks knock on our door to sell their wares or services.

One solicitor, in particular, still stands out to me: a young woman who knocked on my door in January – only a few weeks after my husband and I had moved in. It was pouring rain outside when the knock came at my door, and the dog alarms started to sound. I looked through my door’s peephole. A young woman with long black hair stood on my porch in a raincoat with a notepad and clipboard. I opened the door and stepped on my porch to chat with her.

She told me that she was a city official visiting new homeowners. She then proceeded to tell me that a couple of months ago, the city had discovered the water supply was contaminated. She was there to test my water supply to make sure it was safe and, if it wasn't, to give me the opportunity to have the city install new filtration devices.

My emotions took hold of me, and I felt fear. I had just moved into this city, and they didn’t alert me about a serious water pollution issue?

“If you have just 5 minutes, I can come into your home and collect the water samples needed,” the young woman told me.

The fear that this woman had stoked in me, and the fact that she wanted to get into my house so quickly, gave me pause. Surely the city would have told me about a serious water pollution issue, and if they didn't, then there was a serious problem here. Right?

So, I asked the young woman, “You said you were with the city, right?” “No,” she replied, “but we do work with the city.”

Odd – I was pretty certain that she said that she worked for the city at the beginning of our conversation.

“I’m sorry, but I don’t think that I have time today, and I’d need to put my dogs up before letting someone in my home. Can you email or call me to schedule something in the future?”

“Sir, it’ll just take 5 minutes and I’m happy to wait for you to put your dogs away,” she replied.

The pushy behavior made me more suspicious – I knew then that I definitely would need to check with the city first. “I’m sorry, I just can’t today. Can you leave me a card with your information?”

“I usually have one, but because of the rain I forgot them all today,” she said.

“That’s ok. Feel free to stop by another time.”

I never saw that young woman again.

A quick Google search of my weird situation validated my concerns – this young woman was serving me disinformation. She did not work for the city or even with the city. There was no water pollution concern in my city – not now or months ago. If I had allowed them to “test” my water, their “lab” would have sent me a very concerning report with more disinformation regarding the level of pollutants in my water. The disinformation’s ultimate goal – get me to purchase an expensive filtration system that doesn’t do anything. Apparently, this is a common scam that happens across the nation.

I'm glad that my brain sent up some warning signals – otherwise I could've lost a good deal of money to solve a problem that isn't real. But does everyone's brain send out these same warning signals? No, and that's a huge problem in a world where mis- and disinformation are on the rise, and where generative artificial intelligence will continually make it more difficult to tell fact from fiction.

Large-scale disinformation campaigns, when effective, can have huge impact, such as persuading people that an election was stolen. Or convincing people to not take a vaccine. Or persuading people that climate change is a hoax.

Lucky for us, psychologists have been studying human susceptibility to mis- and disinformation. By better understanding when, why, and how humans take in mis- and disinformation, psychologists can also better understand strategies to combat it. One of the strategies showing promise is called "inoculation."

Inoculating against mis- and disinformation

If inoculation makes you think of vaccines, then you likely already understand the strategy here to combat mis- and disinformation. Psychological inoculation is like a vaccine, but for your brain. Vaccines produce antibodies that help strengthen your immune system against a virus, so it’s better prepared to fight that virus when it enters your body again in the future. A psychological inoculation triggers “mental antibodies” that help train your brain to spot mis- and disinformation so you’re not persuaded by it when you come across it in the future.

Disinformation researchers Jon Roozenbeek and Sander van der Linden explain that the idea of psychological inoculation has been around since the 1960s, when there were concerns that captured American soldiers might be brainwashed by enemy troops. William McGuire, a social psychologist, suggested a "vaccine for a brainwash," and thus the idea of psychological inoculation was born.

Roozenbeek and van der Linden have shown that inoculation strategies can be effective. In one experiment, they randomized over 6,000 participants to watch either a video about the strategies by which mis- and disinformation spread, or a neutral control video. (These videos are really great, and you can view them yourself.) Those assigned the videos about disinformation strategies were significantly better able to recognize manipulation techniques, to discern trustworthy from untrustworthy content, and to share material with greater discrimination than those who didn't watch them. The results of this study are published in Science Advances.

The nice thing about the videos used in the study linked above is that they’re short. And short videos can be distributed to lots of people – like in ads on YouTube that you’re forced to watch (if you don’t subscribe to YouTube Premium). While the videos are effective in inoculating folks, they’re not as effective as other strategies that may be more engaging.

For example, Roozenbeek and van der Linden also have used gaming as a tool for psychological inoculation. In the game, which you can play here, you play the role of a fake news producer and master the strategies of spreading misinformation. Those who play the game are better able to identify misinformation, and the effect size for retaining the ability to spot misinformation is larger for those who played the game than for those who watched the YouTube videos. So the game seems to be more effective in helping individuals spot future misinformation than watching YouTube videos about disinformation techniques.
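
For readers unfamiliar with the term, the "effect size" here is typically Cohen's d: the difference between the two groups' mean scores divided by their pooled standard deviation. A minimal sketch in Python, with entirely made-up scores rather than the studies' actual data:

```python
# Cohen's d: the standardized mean difference between two groups.
import statistics

def cohens_d(treatment, control):
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical misinformation-discernment scores (0-10), not real study data:
inoculated = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]
control    = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
print(f"Cohen's d = {cohens_d(inoculated, control):.2f}")
```

A larger d for the game condition than for the video condition is what "the effect size ... is larger" means in practice.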

In another study, published in Nature by McPhedran and colleagues, psychological inoculation was shown to be more effective than false tags. False tags are used by social media sites, such as Facebook, to alert you that a post's content may contain mis- or disinformation. In the study, participants received social media posts based on real content that either contained misinformation or did not. Participants could interact with the posts in several ways – sharing them, commenting on them, responding with an emoji, "liking" them. Prior to viewing the posts, some participants received inoculation training. Those who received inoculation training were far less likely to engage with the misinformation posts in any way.

Can we scale inoculation?

For psychological inoculation to be effective at a large scale, Dr. Gordon Pennycook says that researchers need to identify two things: a) tactics that are both prevalent and diagnostic of misinformation, and b) tactics that are simple enough that people can learn the heuristics to identify them. Pennycook says that research, like that of Roozenbeek and van der Linden, has focused on (b), leaving (a) largely assumed.

For example, Pennycook says that the ability to identify a mis-/disinformation strategy, such as manipulation, may not persist in inoculated individuals outside of a lab. In a talk given earlier this year, he noted that no study has yet shown that individuals inoculated in a lab setting go home and share less manipulative content than they did prior to being inoculated. This kind of real-world data will need to be collected to show that inoculation continues to be effective outside of an experiment.

Another criticism of psychological inoculation has been made by Pennycook and Dan Williams, a lecturer in philosophy at the University of Sussex. Both argue that if you prime an individual to be cognizant of a specific technique, such as the use of emotional language in a headline, you just make them more skeptical of all information, whether that information is true or not. So inoculation techniques may simply be training people to be skeptical of headlines or social media posts that use emotional language, not to discern whether misinformation is actually present. If you want to learn more about Pennycook and van der Linden's views, and hear them discuss these issues and their research, check out a discussion they had together.

One thing that is clear to me is that an alarm sounded in my brain when a scammer tried to convince me my water was toxic. Maybe I was inoculated because my parents never trusted solicitors when I was growing up. Or maybe my scientific training helped me critically analyze the situation I was in. Whatever it was, we need more alarms like that going off in everyone’s head when they encounter mis- and disinformation.


Design Your Imagination

A week or so ago on Twitter/X, John Joseph Adams mentioned that he has aphantasia—he can't visualize things in his 'mind's eye'. He asked how others rated themselves on the ability to picture, e.g., a horse, on a scale of 1 to 6. Some said they were also ones or twos, and others, myself included, admitted to being off the charts. I said,

Definitely a 6+. I can see the horse in full color and rotate it, add motion, etc., with my eyes wide open. I trained myself to be able to do this kind of thing in my late teens, as part of a deliberate program to hack my brain for creativity.

But I added:

I started writing a post on my substack on the techniques I used (none drugs-related) but deleted it after looking at it and going "people will think I'm insane."

All this did was make Twitter more curious. The response was “come on! Tell us more!” That got me thinking about all the creative tricks and techniques I’ve learned over the years, many of which I take for granted. So, I’ll answer the call and describe how I trained my internal eye, emphasizing that it’s just one of the ways I hack creativity. I use lots of techniques. What follows is a list mostly from a writer’s perspective, so you may or may not find it useful if you do different creative work.

Let’s Start with the Hard Part

You can easily find articles and listicles about becoming more creative. Creativity, though, is only half of writing, and not the most important part. The important part is having something to say.

I’m pretty sure I’ve never read any articles claiming to teach relevance. But this is where you have to start if you want your writing (or anything, really) to last.

Luckily, society is experiencing rapid evolutionary pressure in this regard, because we all know AI is coming for any ‘creative’ work that can be defined and replicated. You can already generate entire novels with a few prompts, and the product is just going to get better and better. So what’s a human to do in the face of that?

Actually, it changes nothing. We were already drowning in an ocean of repetitive crap, and AI’s just going to widen the sea. It’s not going to change the fundamental issue, which is that our communication with one another should first of all be about something. Calling a text meaningful means you should be changed by reading it—and as an author, or a foresight worker who wants to shock your audience out of their assumptions about the ‘default future,’ your aim should be to open new vistas to people. Not just to re-present existing ones.

This is the hard part. Here are a few ways I try to feed my relevance:

Practice Horizon Scanning

Horizon scanning is the practice of constantly watching for what in foresight we call ‘weak signals of change.’ These can be anything—changes in peoples’ voting habits, a decline in shoe sales, the appearance of multiple airship freight companies in Canada despite none owning a functioning airship. Or a mad billionaire building a giant rocket in south Texas… oh wait, maybe that’s a strong signal of change. You should track those too.

But not to figure out what’s happening. The worst rabbit hole you can go down is trying to figure it all out. The point is to recognize that something is happening, by noticing when canaries start falling. Sometimes it’s enough to do the noticing, and bring the fact to other peoples’ attention. Smarter thinkers can figure it out; or, you can let your own imagination run riot over the possible reasons. Either way, you use horizon scanning to perceive and collect the new.

Be a Generalist

…Because if you’re a specialist, AI is definitely coming for your job.

The first principle of innovative thinking is to look for deep connections between seemingly incompatible domains. I say ‘look for’ rather than ‘impose’ because the hallmark of cult and conspiracy thinking is to start with a predetermined ‘truth’ and then try to find it in everything. All you’ve got is a hammer, so every difference in the world becomes a nail to be pounded flat. That’s the opposite of what you should be doing. You’re searching, without prejudice, and generally just letting ideas suffuse through your unconscious rather than trying to draw lines between them. When you one day find yourself using one domain (say, sports) to think about another (say, biochemistry), then you may have found something worth writing about.

So first, be interested in everything. You may never become an expert on a given subject, but that doesn’t mean you can’t contribute to understanding it.

Read Diffractively

Diffractive reading is an idea I got from Karen Barad. You do it by seeing one text or set of ideas through the lens of another, not judgmentally, but according to how each amplifies or mutes the other's meanings: the way that interfering beams of light create diffraction patterns. For example, lately I've been thinking about space colonization (specifically Venus) through the diffractive lens of ecology. What if we consider humanity to be just one player in a much bigger ecosystem that has to exist before earthlings (people included) can permanently settle in space? I'm just applying the Copernican principle to space development: we're not special. Considering all the complex technologies that have to exist to sustain us up there, the diffractive reading suggests that sustaining them is the precondition to sustaining us; therefore, up there, they are more important than us, or at least should be autonomous from us, so that human infighting is unable to take the whole system down.

Read to find commonalities and differences, not to figure out which of two perspectives is ‘truer.’ The bright bands where ideas reinforce one another can become genuinely new ideas.

Be Wrong

Don’t stick to the facts. Instead, try to be wrong sometimes. Just be aware when you’re doing it. Going down a blind alley convinced that it’s the right one is a very bad idea; but one of the things I like to do when I visit a new city (at least in Europe, where I feel safest) is to go walking and get a little lost. Loosely holding a set of contradictory beliefs in mind is useful, especially when you’re writing fiction.

Fiction is the art of perspective.

Read the Edge Thinkers, Avoid the Nutcases

You can afford to be a little wrong when you start by learning the consensus view on a topic, then search for authors whose ideas disagree with that consensus, but are cited by everybody. These are the interesting ones to read.

This is a great tool for filtering out the crazies while finding the visionaries. For instance, take Ayn Rand. If you read broadly (even if shallowly) through mainstream 20th/21st-century philosophy, you will discover that nobody outside her circle talks about her. Nobody cites her. On the other hand, people may only talk about Alfred North Whitehead to disagree with him, but he’s cited across the literature. This suggests that Whitehead is an edge thinker, while Rand is uninteresting. (There are other reasons to think that Rand is a nutcase, but we needn’t go into them here.)

When you find edge thinkers, study them. They’re pure gold.

Start with the Deepest Puzzles

I'm a science fiction writer; that is to say, I use scientific possibility as one of the constraints I place on what I can write about. This is just one of the personal rules I follow, but it's gotten me labeled as a "hard SF writer" for some reason. Hard SF is generally considered engineering porn: it's about devices and systems and what-if scenarios more than people. But I never saw my own work that way. Keeping the science straight has the same importance as, say, continuity or consistency of tone.

—Anyway, the point is you can float on the surface of anything—sports, murder mysteries, science fiction—and the work you will do will be… fine. Just fine. Or, you can dive deeper into the existential and philosophical and political agonies that lie under these clean surfaces. That’s where truly new ideas lie in wait.

Always Reinvent the Process

I never knew how my late editor, David Hartwell, would respond to my work. For one of my books he said, “I like it but you need to chop 30,000 words. And make the hero five years older.” For another, he invited me down to his house in upstate New York to do a line-by-line pass through the entire book. It took days. For yet another he said, “I like it but take out this one sentence on the second-to-last page.” He reinvented the editorial process on a book-by-book basis.

I do this with writing, at least on the novel level. For one story I had a 20,000-word detailed outline. Another, I did the entire thing from one hand-written page of notes. I've been known to dog-ear books, bookmark like crazy in the e-reader, or cut and paste ideas into OneNote. And then never look at them. Sometimes I keep all my ideas in my head. Sometimes I do mind maps.

Mostly I don’t. Each creative project is its own thing and has to be approached as if you’ve never done this before. You’re always a beginner in some sense, unless your aim is to find a formula that works and just crank ‘em out. That could make you successful, but it also means I’ll only need to read one of your books.

Getting Back to That Aphantasia Thing…

I’ll be posting soon about my daily writing routine, including how I break writer’s block. To anticipate that a bit, I’ll say that reliably summoning your imagination is a skill tied to letting go. How creative you can be is hugely dependent on how good you are at disengaging the critical-thinking part of your brain while you’re simultaneously engaged in a task that demands discipline. Take painting or the development of a new storyline: these are tightly constrained working environments and you work in them using a pretty regimented skillset you’ve likely built up over years of practice. This means you run in ruts much of the time. Breaking out of those ruts requires handing over control to unconscious parts of your mind, on command and in a controlled way. This is very hard to do.

Many of us have our best ideas while drifting awake in the morning. Yesterday I came up with the scenario for a potentially vast science fiction epic while in the shower. People have known about this trick for centuries; there’s a long history of artists using different techniques to enter the hypnogogic state. For creative inspiration, Salvador Dali used to sit upright with a pencil tucked under his chin in such a way that, as he drowsed, it would dislodge and wake him if he actually fell asleep. He wanted to be as close to sleep as possible without going all the way. Balancing, like a pencil on its end, between waking and the hypnogogic state, he could maximize his creativity.

Creativity is the paradoxical art of guided letting-go. I was lucky to learn this very early on, when I was just 17 and working on my first novel. It seemed to me that if this was the case, it must be possible to train yourself. I read about Dali’s technique and it didn’t appeal to me. But there were other possibilities.

Repurposing Meditation

I was already reading for the edge cases, and sometimes the edge cases really are also nutcases—so I was cheerfully plowing my way through Aleister Crowley’s book Magick at the time and thoroughly enjoying his pseudo-intellectual shenanigans when I came across his long, detailed description of meditative discipline. He describes it, basically, as the fight of your life. The specific technique he espouses (which is the opposite of mindfulness and likely the only one he knew about) involves fixing your attention on one thing and keeping it there for minutes, hours at a time. In retrospect Magick was the first time I read a description of what it is like to have ADHD—both how hard it is to keep your attention from wandering, and what the almost savage point-light of hyperfocus feels like.

Crowley’s whole schtick was that you had to keep up this fight to achieve Nirvana, and he wastes a lot of ink describing that epic battle. He didn’t convince me that Nirvana was necessarily where I wanted to go right then, but the way he described the fight made me sit up and take notice of something.

He says that at a certain point the meditator will notice how the mind is always throwing up distractions, in the form of half-finished thoughts, a ceaseless inner monologue like Joyce explores in Ulysses—but he also speaks of half-glimpsed images and voices, music or sounds that are normally below the threshold of conscious attention, but are always there.

Always there? Now that was interesting. He was saying that the hypnogogic state is not a place we can only go to at night; just as the stars are only invisible in the day because daylight outshines them, it might be that a vast river of hallucinatory images and sensations is passing through us at every instant, even when we’re wide awake. It’s just too dim to be seen; the light of consciousness washes it out.

I decided to see if I could tune into it to the point where I could summon dream-imagery anywhere, any time—deliberately do something similar to hallucinating, but in a completely controlled way and without losing the ability to tell fantasy from reality.

And I did. I fought the mental fight of meditation Crowley described, but when I began to notice images and sounds—as he said I would—I didn’t try to suppress them, as he said you should. I doubled down on noticing them. I learned to become mindful of them, and gradually, how to flick my attention away from my real surroundings and look into a different world at any time. It’s like riding a bike; once you learn it, you always have it. I’ve been able to do this for almost half a century, and it’s a skill that has come with no psychological ill effects (unless becoming a science fiction writer is one of those).

It’s not always useful, I hasten to add; as I’m writing this sentence I’m just checking and… looking through the laptop screen and my living room I see a lake with choppy waves, a forested shoreline on the right and other higher peaks past the far shore, which is about three kilometers away. Anytime, anywhere, I can trip a mental switch and see something like this. It’s different every time; and that lake’s not particularly interesting. The disappointing side to this skill is that its products are seldom relevant to the present task.

What’s the difference between doing this and just imagining something, you ask? It’s about intention. If I say, “imagine a horse,” most people can do it. But if I say, “imagine!” not many people can immediately say that they see something and name what it is. If you command me to “imagine!” I will instantly be presented with an entirely different place, adjacent to the real world and with wholly unpredictable content.

What I’ve described here is visual, because I’m a very visual thinker; but you can do the same with spoken words or sound. Simply put, there is more than one internal monologue in your head at a time, but one has its volume ratcheted up while the others are turned way down. It doesn’t mean they’re not there, though. I bet Mozart, engaged in conversation with someone, also listened to random melodies playing in the back of his head—not crafting them, just being mindful as they arise and fade away on their own. This is a skill you can learn.

But Don’t Get Distracted

I learned daylight hypnogogia, added it to the toolbox, and I pull it out when I need it. By now, the toolkit is vast; it got even bigger when I entered the Master’s program in Strategic Foresight and Innovation at OCAD University. Who knew there was a whole separate skillset, running parallel to science fiction, but with entirely different methods and aims?

Foresight, how it differs from science fiction, and why it’s powerful and relevant, deserves a deeper dive. I will talk about it in more detail, but not today.

For now, I’ll say that creativity is cool and all (as is daylight hypnogogia), but it’s just part of the toolkit, and creativity isn’t even the point. Relevant creativity is the point. That comes from constant scanning, testing, and reinvention.

Circling back to the reason I started this newsletter in the first place:

If you want to cultivate an unapocalyptic mindset, continuously reinvent the future. How you do it is beside the point; but do it.
