$20 TRILLION: US national debt, and stealing from the future

Debt clock showing that the US national debt has topped $20 trillion

Bang!  Last week, US national debt broke through the $20 trillion mark.  As I noted in a previous post (link here), debt of this magnitude works out to about $250,000 per hypothetical family of four.

Moreover, US national debt is rising faster than at any time in history.  Adjusted for inflation, the debt is seven times higher than in 1982 ($20 trillion vs. $2.9 trillion).  Indeed, it was in 1982—not 2001 or 2008—that US government debt began its unprecedented (and probably disastrous) rise.

The graph below shows US debt over the past 227 years.  The figures are adjusted for inflation (i.e., they are stated in 2017 US dollars).

Graph of US national debt, historic, 1790 to 2017
United States national debt, adjusted for inflation, 1790-2017

It’s important to understand what is happening here: the US is transferring wealth from the future into the present.  The United States government is not merely engaging in some Keynesian fiscal stimulus, it is not simply borrowing for a rainy day (or 35 years of rainy days), it is not just taking advantage of low interest rates to do a bit of infrastructural fix-up or job creation, and it is not just responding to the financial crisis of 2008.  No.  The US government, the nation’s elites, its corporations, and its citizens are engaging in a form of temporal imperialism—colonizing the future and plundering its wealth.  They are today spending wealth that, if this debt is ever to be repaid, will have to be created by workers toiling in decades to come.

You cannot understand our modern world unless you understand this: Fossil-fueled consumer-industrial economies such as those in the US, Canada, and the EU draw heavily from the future and the past.

We reach back in time hundreds-of-millions of years to source the fossil fuels to power our cars and cities.  We are increasingly reliant on hundred-million-year-old sunlight to feed ourselves—accessing that ancient sunshine in the form of natural gas we turn into nitrogen fertilizer and enlarged harvests.  At the same time, we irrigate many fields from fossil aquifers, created at the end of the last ice age and now pumped hundreds of times faster than they refill.  We extract metal ores concentrated in the distant past.  And the cement in the concrete that forms our cities is the calcium-rich remnants of tiny sea creatures that lived millions of years ago.  We have thrust the resource-intake pipes for our food, industrial, and transport systems hundreds-of-millions of years into the past.

We also reach forward in time, consuming the wealth of future generations as we borrow and spend trillions of dollars they must repay; live well in the present at the expense of their future climate stability; deplete resources, clear-cut ecosystems, extinguish species, and degrade soils and water supplies.  We consume today and push the bills into the future.  This is the real meaning of the news that US national debt has now topped $20 trillion.

Graph sources: U.S. Department of the Treasury, “TreasuryDirect: Historical Debt Outstanding–Annual” (link here)

Efficiency, the Jevons Paradox, and the limits to economic growth

Graph of the cost of lighting in the UK, 1300-2000

I’ve been thinking about efficiency.  Efficiency talk is everywhere.  Car buyers can purchase ever more fuel-efficient cars.  LED lightbulbs achieve unprecedented efficiencies in turning electricity into visible light.  Solar panels are more efficient each year.  Farmers are urged toward fertilizer-use efficiency.  And our Energy Star appliances are the most efficient ever, as are the furnaces and air conditioners in many homes.

The implication of all this talk and technology is that efficiency can play a large role in solving our environmental problems.  Citizens are encouraged to adopt a positive, uncritical, and unsophisticated view of efficiency: we’ll just make things more efficient and that will enable us to reduce resource use, waste, and emissions, to solve our problems, and to pave the way for “green growth” and “sustainable development.”

But there’s something wrong with this efficiency solution: it’s not working.  The current environmental multi-crisis (depletion, extinction, climate destabilization, ocean acidification, plastics pollution, etc.) is not occurring as a result of some failure to achieve large efficiency gains.  The opposite.  It is occurring after a century of stupendous and transformative gains.  Indeed, the efficiencies of most civilizational processes (e.g., hydroelectric power generation, electrical heating and lighting, nitrogen fertilizer synthesis, etc.) have increased by so much that they are now nearing their absolute limits—their thermodynamic maxima.  For example, engineers have made the large electric motors that power factories and mines exquisitely efficient; those motors turn 90 to 97 percent of the energy in electricity into usable shaft power.  We have maximized efficiencies in many areas, and yet our environmental problems are also at a maximum.  What gives?

There are many reasons why efficiency is not delivering the benefits and solutions we’ve been led to expect.  One is the “Jevons Paradox.”  That Paradox predicts that, as the efficiencies of energy converters increase—as cars, planes, or lightbulbs become more efficient—the cost of using these vehicles, products, and technologies falls, and those falling costs spur increases in use that often overwhelm any resource-conservation gains we might reap from increasing efficiencies.  Jevons tells us that energy efficiency often leads to more energy use, not less.  If our cars are very fuel efficient and our operating costs therefore low, we may drive more, more people may drive, and our cities may sprawl outward so that we must drive further to work and shop.  We get more miles per gallon, or per dollar, so we drive more miles and use more gallons.  The Jevons Paradox is a very important concept to know if you’re trying to understand our world and analyze our situation.

The graph above helps illustrate the Jevons Paradox.  It shows the cost of a unit of artificial light (one hour of illumination equivalent to a modern 100 Watt incandescent lightbulb) in England over the past 700 years.  The currency units are British Pounds, adjusted for inflation.  The dramatic decline in costs reflects equally dramatic increases in efficiency.

Adjusted for inflation, lighting in the UK was more than 100 times more affordable in 2000 than in 1900 and 3,000 times more affordable than in 1800.  Stated another way, because electrical power plants have become more efficient (and thus electricity has become cheaper), and because new lighting technologies have become more efficient and produce more usable light per unit of energy, an hour’s pay for the average worker today buys about 100 times more artificial light than it did a century ago and 3,000 times more than two centuries ago.

But does all this efficiency mean that we’re using less energy for lighting?  No.  Falling costs have spurred huge increases in demand and use.  For example, the average UK resident in the year 2000 consumed 75 times more artificial light than did his or her ancestor in 1900 and more than 6,000 times more than in 1800 (Fouquet and Pearson).  Much of this increase was in the form of outdoor lighting of streets and buildings.  Jevons was right: large increases in efficiency have meant large decreases in costs and large increases in lighting demand and energy consumption.

Another example of the Jevons Paradox is provided by passenger planes.  Between 1960 and 2016, the per-seat fuel efficiency of jet airliners tripled or quadrupled (IPCC).  This, in turn, helped lower the cost of flying by more than 60%.  A combination of lower airfares, increasing incomes, and a growing population has driven a 50-fold increase in global annual air travel since 1960—from 0.14 trillion passenger-kilometres per year to nearly 7 trillion (see here for more on the exponential growth in air travel).  Airliners have become three or four times more fuel efficient, yet we’re now burning seventeen times more fuel.  William Stanley Jevons was right.
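The arithmetic behind those airline figures can be checked directly.  Here is a quick back-of-envelope sketch in Python, using the approximate numbers cited above (the variable names are mine, and the tripling of efficiency is taken at the low end of the text’s “tripled or quadrupled” range):

```python
# Rough check of the airline Jevons-Paradox arithmetic, using the
# approximate figures cited in the text (IPCC-derived estimates).
travel_1960 = 0.14     # trillion passenger-km flown per year, 1960
travel_2016 = 7.0      # trillion passenger-km flown per year, 2016
efficiency_gain = 3.0  # per-seat fuel efficiency roughly tripled (low end)

travel_growth = travel_2016 / travel_1960      # how much more we fly
fuel_growth = travel_growth / efficiency_gain  # how much more fuel we burn

print(f"Air travel grew {travel_growth:.0f}-fold")
print(f"Fuel burned grew roughly {fuel_growth:.0f}-fold")
```

A 50-fold rise in flying divided by a three-fold efficiency gain leaves fuel consumption roughly seventeen times higher, matching the figure in the text: efficiency gains were swamped by growth in use.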

One final point about efficiency.  “Efficiency” talk serves an important role in our society and economy: it licenses growth.  The idea of efficiency allows most people to believe that we can double and quadruple the size of the global economy and still reduce energy use and waste production and resource depletion.  Efficiency is one of our civilization’s most important licensing myths.  The concept of efficiency-without-limit has been deployed to green-light the project of growth-without-end.

Graph sources: Roger Fouquet, Heat, Power and Light: Revolutions in Energy Services

Complexity, energy, and the fate of our civilization

Tainter Collapse of Complex Societies book cover

Some concepts stay with you your whole life and shape the way you see the world.  For me, one such concept is complexity.  Thinking about the increasing complexity of our human-made systems gives a window into future energy needs, the rise and fall of economies, the structures of cities, and possibly even the fate of our global mega-civilization.

In 1988, Joseph Tainter wrote a groundbreaking book on complexity and civilizations: The Collapse of Complex Societies.  The book is a detailed historical and anthropological examination of the Roman, Mayan, Chacoan, and other civilizations.  As a whole, the book can be challenging.  But most of the important big-picture concepts are contained in chapters 4 and 6.

Regarding complexity, energy, and collapse, Tainter argues that:

1.  Human societies are problem-solving entities;
2.  Problem solving creates complexity: new hierarchies and control structures; increased reporting and information processing; more managers, accountants, and consultants;
3.  All human systems require energy, and increased complexity must be supported by increased energy use;
4.  Investment in problem-solving complexity reaches a point of declining marginal returns: (energy) costs rise faster than (social or economic) benefits; and
5.  Complexity rises to a point where available energy supplies become inadequate to support it and, in that state, an otherwise withstandable shock can cause a society to collapse.  For example, the western Roman Empire, unable to access enough bullion, grain, and other resources to support the complexity of its cities, armies, and far-flung holdings, succumbed to a series of otherwise unremarkable attacks by barbarians.

Societies certainly are problem-solving entities.  Our communities and nations encounter problems: external enemies, environmental threats, resource availability, disease, crime.  For these problems we create solutions: standing armies and advanced weaponry, environmental protection agencies, transnational energy and mining corporations, healthcare companies, police forces.

Problem-solving, however, entails costs in the form of complexity.  To solve problems we create ever-larger bureaucracies, new financial products, larger data processing networks, and a vast range of regulations, institutions, interconnections, structures, programs, products, and technologies.  We often solve problems by creating new managerial or bureaucratic roles (e.g., ombudsmen, human resources managers, or cyber-security specialists); creating new institutions (the UN or EU); or developing new technologies (smartphones, smart bombs, geoengineering, in vitro fertilization).  We accept or even demand this added complexity because we believe that there are benefits to solving problems.  And there certainly are, at least if we evaluate benefits on a case-by-case basis.  Taken as a whole, however, the unrelenting accretion of complexity weighs on the system, bogs it down, increases energy requirements, and, as Tainter argues, eventually outstrips available energy supplies and sets the stage for collapse.  We should keep this in mind as we push to further increase the complexity of our civilization even as energy availability may be contracting.  Tainter is telling us that complexity has costs—costs that civilizations sometimes cannot bear.  This warning should ring in our ears as we consider the internet of things, smart-grids, globe-circling production chains, and satellite-controlled autonomous cars.  The costs of complexity must be paid in the currency of energy.

Complexity remains a powerful concept for understanding our civilization and its future even if we don’t share Tainter’s conclusion that increasing complexity sets the stage for collapse.  Because embedded in Tainter’s theory is an indisputable idea: greater complexity must be supported by larger energy inflows.  Because of their complexity, there simply cannot be low-energy versions of London, Japan, the EU, or the global trading system.  As economies grow and consumer choices proliferate, and as we increase the complexity of societies here and around the world, we necessarily increase energy requirements.

It is no longer possible to understand the world by watching money flows.  There are simply too many trillions of notional dollars, euros, and yen flitting through the global economy.  These torrents of e-money obscure what is really happening.  If we want to understand our civilization and its future, we must think about energy and material flows—about the physical structure and organization of our societies.  Complexity is a powerful analytical concept that enables us to do this.

Fractal collapse: How the dominant societies and economies may fail

Six images showing the stages of formation of a Sierpinski triangle
The stages of formation of a Sierpinski triangle illustrating fractal collapse

Fractal collapse is an important, useful idea.  It helps us understand that a society, economy, political system, or civilization may not “fall,” but rather become pock-marked and weakened—shot through with micro-collapses.

The United States may be in an advanced state of collapse.  There are many indicators that this is the case.  The national debt, nearly $20 trillion, about a quarter-million dollars per family of four (see my “US national debt per family”), seems unrepayable.  America’s former industrial heartland is now mostly rustbelt, and parts of Detroit look like sets for “The Walking Dead” or “The Road.”  Climate change is bearing down from one side and resource depletion from another.  Its democratic system—rotted by dark money, voter suppression, gerrymandering, the distortions of the Electoral College, and messianic populist politics—has delivered gridlock, ideologues, cartoon-level analyses of complex issues, and, now, Trump.  Many of the manufacturing jobs that have not moved to Asia may soon be taken by robots.  Inequality and incarceration rates are at record highs.  One could extend this list to fill pages.

Despite the preceding, I’m not predicting that America (or Greece or Australia or England) will “fall”—pitch into rapid and irreversible economic contraction and social disintegration.  Instead, fractal collapse is more likely.  In fractal collapse, parts of a system fail, at various scales, but the system, in diminished form, carries on.  We’re seeing this in America.  We see the collapse of a household here (perhaps a result of the opioid crisis) and a neighbourhood there; a city declines rapidly (think Detroit or Scranton) and a county declares bankruptcy.  Collapse occurs in various places and at various scales but the aggregate entity moves forward.  And such collapses are not predictable—they do not just happen to poor people or in the “poor” places.  Suddenly and unexpectedly, the investment banks collapse, then General Motors becomes insolvent.  The Senate and House of Representatives cease to function properly.  Collapse is not a single event.  As we are seeing it play out now—amid the hyper-energized and dominant “industrial” economies—collapse is multiple, iterative, and repeated across scales: it is fractal.

And collapse is not monolithic or pervasive.  Indeed, some parts of the system expand and prosper.  The US is manufacturing billionaires at a record pace, the stock market continues to climb, output of everything from corn to natural gas is up, and Google and Apple are world-leading corporations.  A hallmark of collapse is that societies become dis-integrated, allowing some parts to fall as other parts rise.

The image above is a Sierpinski triangle or “gasket.”  It helps visualize this idea of fractal collapse.  Step by step, the original triangle shape develops more holes and loses area, but it does not disappear.  Its outlines remain apparent.

To make a Sierpinski gasket, we start with an equilateral triangle.  Then we identify the mid-points of each side and use these as the vertices of a new triangle, which we remove from the original.  (See the top-middle triangle, above.)  This leaves us with three equilateral triangles.  We repeat this process over and over; we iterate.  From each remaining triangle we remove the middle, leaving three smaller triangles.  The Sierpinski gasket and its repeated holing can serve as a visual metaphor for the fractal collapse that may now be hollowing out many of the world’s nations.
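The arithmetic of that iteration is easy to sketch.  In the snippet below (a small helper of my own, not from the text), each step removes the middle quarter of every remaining triangle, so the count of triangles triples while the gasket keeps only (3/4)ⁿ of the original area after n steps—area drains away, yet the shape persists:

```python
# Each iteration of the Sierpinski construction removes the middle
# quarter of every remaining triangle: triangles triple in number,
# while the surviving area shrinks to three-quarters of what it was.
def sierpinski_stats(n):
    """Return (triangle count, fraction of original area) after n steps."""
    return 3 ** n, (3 / 4) ** n

for step in range(6):  # the six stages shown in the image above
    count, area = sierpinski_stats(step)
    print(f"step {step}: {count} triangles, {area:.1%} of original area")
```

After just five iterations the gasket retains under a quarter of its original area—a compact picture of a system hollowed out by many small collapses while its outline endures.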

The future is not binary, not rise or fall.  Increasingly, nations may become less homogeneous.  Some parts may expand and prosper while other parts may wither or fail.  The overall trendline may not be upward, however, but rather downward.  Our future may not be a train wreck, but rather a slow dilapidation.  Not with a bang but a whimper.  We can change this outcome.  But currently very few are trying.

The intellectual history of the idea of fractal collapse is not wholly clear.  The concept came out of the physical sciences and has been popularized as a description of social and economic collapse by author and analyst John Michael Greer.

The Rule of 70

Graph of an exponential curve illustrating exponential growth and the Rule of 70.
16-fold exponential increase caused by a constant 2.8 percent growth rate over 100 years

This graph’s smooth curve shows how an investment, economy, population, or any other quantity will grow at a constant rate of interest or growth—that is, at a constant percentage. In this case the percentage is 2.8 percent, compounded annually.

In the graph, in year 0 the value is 1. Soon, though, the value is twice as high, rising to 2. It doubles again to 4, doubles again to 8, and again to 16. An economy or investment growing at 2.8 percent per year will double every 25 years. Thus, it will double 4 times in a century: 2, 4, 8, 16.

There is a very useful tool for quickly calculating the doubling time for a given growth rate: the Rule of 70. If you know the percentage growth rate and want to know how long it will take an initial value to double, simply divide 70 by the rate. In this case, 70 divided by 2.8 = 25. The value doubles every 25 years and therefore increases 16-fold in 100 years.

By the Rule of 70 we can calculate that a growth rate of 7 percent will cause an initial value to double in just 10 years. China’s economy has been growing by more than 7 percent since the early 1990s. If a value—the size of China’s economy, for example—doubles every 10 years, it will go through 10 doublings in a century: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. If China’s economy maintained a 7 percent growth rate for a century it would become more than 1,000 times larger. It is important to recall such facts the next time the Dow or some other economic indicator falls on the news that Chinese growth has “slowed” to 7 percent or less.
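The Rule of 70 and the compounding it approximates are simple to express in code. Here is a minimal sketch (function names are mine; the rule works because 100 × ln 2 ≈ 69.3, rounded to 70 for mental math):

```python
# The Rule of 70: doubling time in years ≈ 70 / annual growth rate (%).
def doubling_time(rate_percent):
    return 70 / rate_percent

def growth_factor(rate_percent, years):
    """Exact compound growth, for comparison with the rule of thumb."""
    return (1 + rate_percent / 100) ** years

print(doubling_time(2.8))              # 25.0 years -> 4 doublings per century
print(round(growth_factor(2.8, 100)))  # ~16-fold, matching the graph
print(doubling_time(7))                # 10.0 years -> ~10 doublings per century
```

Note that the rule is a mental-math approximation, not an exact formula: exact compounding at 7 percent for a century gives a bit under 900-fold rather than the 1,024-fold that ten clean doublings would suggest—still comfortably in the "roughly 1,000 times larger" range.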