Civilization as asteroid: humans, livestock, and extinctions

Graph of biomass of humans, livestock, and wild animals
Mass of humans, livestock, and wild animals (terrestrial mammals and birds)

Measured by mass, humans and our livestock now make up 97 percent of all animals on land (terrestrial mammals and birds).  Wild mammals and birds have been reduced to a mere remnant: just 3 percent.  Put another way, humans and our domesticated animals outweigh all terrestrial wild mammals and birds 32-to-1.

To clarify, if we add up the weights of all the people, cows, sheep, pigs, horses, dogs, chickens, turkeys, etc., that total is 32 times greater than the weight of all the wild terrestrial mammals and birds: all the elephants, mice, kangaroos, lions, raccoons, bats, bears, deer, wolves, moose, chickadees, herons, eagles, etc.  A specific example is illuminating: the biomass of chickens is more than double the total mass of all other birds combined.
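
The two figures above are consistent with each other: a 97/3 split by mass implies a ratio of about 32-to-1.  A quick check, using only the rounded percentages quoted above:

```python
# Rounded biomass shares quoted above, for terrestrial mammals and birds.
human_livestock_share = 0.97
wild_share = 0.03

# A 97/3 split implies humans plus livestock outweigh wild animals ~32:1.
ratio = human_livestock_share / wild_share
print(round(ratio))  # 32
```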

Before the advent of agriculture and human civilizations, however, the opposite was the case: wild mammals and birds dominated, and their numbers and mass were several times greater than they are today.  Before agriculture began, about 11,000 years ago, humans made up just a tiny fraction of animal biomass, and domesticated livestock did not exist.  The current situation—the domination of the Earth by humans and our food animals—is a relatively recent development.

The preceding observations are based on a May 2018 report by Yinon Bar-On, Rob Phillips, and Ron Milo published in the academic journal Proceedings of the National Academy of Sciences.  Bar-On and his coauthors use a variety of sources to construct a “census of the biomass of Earth”; they estimate the mass of all the plants, animals, insects, bacteria, and other living things on our planet.

The graph above is based on data from that report (supplemented with estimates based on work by Vaclav Smil).  The graph shows the mass of humans, our domesticated livestock, and “wild animals”: terrestrial mammals and birds.  The units are millions of tonnes of carbon.*  Three time periods are shown.  The first, 50,000 years ago, is the time before the Quaternary Megafauna Extinction.  The Megafauna Extinction was a period when Homo sapiens radiated outward into Eurasia, Australia, and the Americas and contributed to the extinction of about half the planet’s large animal species (those heavier than 44 kilograms).  (Climate change also played a role in that extinction.)  In the middle of the graph we see the period around 11,000 years ago—before humans began practicing agriculture.  At the right-hand side we see the situation today.  Note how the first two periods are dominated by wild animals.  The mass of humans in those periods is so small that the blue bar representing human biomass is not even visible in the graph.**

This graph highlights three points:
1. wild animal numbers and biomass have been catastrophically reduced, especially over the past 11,000 years;
2. human and livestock numbers have skyrocketed to unnatural, abnormal levels; and
3. the downward trendline for wild animals is gravely concerning; it suggests accelerating extinctions.

Indeed, we are today well into the fastest extinction event in the past 65 million years.  According to the 2005 Millennium Ecosystem Assessment “the rate of known extinctions of species in the past century is roughly 50–500 times greater than the extinction rate calculated from the fossil record….”

The extinction rate that humans are now causing has not been seen since the Cretaceous–Paleogene extinction event 65 million years ago—the asteroid-impact-triggered extinction that wiped out the dinosaurs.  Unless we reduce the scale and impacts of human societies and economies, and unless we more equitably share the Earth with wild species, we will fully enter a major global extinction event—only the sixth in 500 million years.  To the other species of the Earth, and to the fossil record, human impacts increasingly resemble an asteroid impact.

In addition to the rapid decline in the mass and number of wild animals it is also worth contemplating the converse: the huge increase in human and livestock biomass.  Above, I called this increase “unnatural,” and I did so advisedly.  The mass of humans and our food animals is now 7 times larger than the mass of animals on Earth 11,000 or 50,000 years ago—7 times larger than what is normal or natural.  For millions of years the Earth sustained a certain range of animal biomass; in recent millennia humans have multiplied that mass roughly sevenfold.

How?  Fossil fuels.  Via fertilizers, petro-chemical pesticides, and other inputs we are pushing hundreds of millions of tonnes of fossil fuels into our food system, and thereby pushing out billions of tonnes of additional food and livestock feed.  We are turning fossil fuel Calories from the ground into food Calories on our plates and in livestock feed-troughs.  For example, huge amounts of fossil-fuel energy go into growing the corn and soybeans that are the feedstocks for the tens of billions of livestock animals that populate the planet.

Dr. Anthony Barnosky has studied human-induced extinctions and the growing dominance of humans and their livestock.  In a 2008 journal article he writes that “as soon as we began to augment the global energy budget, megafauna biomass skyrocketed, such that we are orders of magnitude above the normal baseline today.”  According to Barnosky “the normal biomass baseline was exceeded only after the Industrial Revolution” and this indicates that “the current abnormally high level of megafauna biomass is sustained solely by fossil fuels.”

Only a limited number of animals can be fed from leaves and grass energized by current sunshine.  But by tapping a vast reservoir of fossil sunshine we’ve multiplied the number of animals that can be fed.  We and our livestock are petroleum products.

There is no simple list of solutions to mega-problems like accelerating extinctions, fossil-fuel over-dependence, and human and livestock overpopulation.  But certain common-sense solutions seem to present themselves.  I’ll suggest just one: we need to eat less meat and fewer dairy products and we need to reduce the mass and number of livestock on Earth.  Who can look at the graph above and come to any other conclusion?  We need not eliminate meat or dairy products (grazing animals are integral parts of many ecosystems) but we certainly need to cut the number of livestock animals by half or more.  Most importantly, we must not try to proliferate the Big Mac model of meat consumption to 8 or 9 or 10 billion people.  The graph above suggests a stark choice: cut the number of livestock animals, or preside over the demise of most of the Earth’s wild species.

 

* Using carbon content allows us to compare the mass of plants, animals, bacteria, viruses, etc.  Very roughly, humans and other animals are about half to two-thirds water.  The remaining “dry mass” is about 50 percent carbon.  Thus, to convert from tonnes of carbon to dry mass, a good approximation is to multiply by 2.
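
As a worked example of these conversions (the 50 percent carbon fraction and the roughly 60 percent water content are the rough approximations given above, not precise measurements):

```python
def carbon_to_dry_mass(tonnes_carbon, carbon_fraction=0.5):
    """Convert tonnes of carbon to dry mass, assuming dry mass is ~50% carbon."""
    return tonnes_carbon / carbon_fraction

def dry_to_live_mass(tonnes_dry, water_fraction=0.6):
    """Convert dry mass to live weight, assuming animals are ~60% water."""
    return tonnes_dry / (1 - water_fraction)

# 1 tonne of carbon ≈ 2 tonnes of dry mass ≈ 5 tonnes of live weight
dry = carbon_to_dry_mass(1.0)
print(dry)                    # 2.0
print(dry_to_live_mass(dry))  # 5.0
```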

** There is significant uncertainty regarding animal biomass in the present, and much more so in the past.  Thus, the biomass values for wild animals in the graph must be considered as representing a range of possible values.  That said, the overall picture revealed in the graph is not subject to any uncertainty.  The overall conclusions are robust: the mass of humans and our livestock today is several times larger than wild animal biomass today or in the past; and wild animal biomass today is a fraction of its pre-agricultural value.

Graph sources:
– Yinon M. Bar-On, Rob Phillips, and Ron Milo, “The Biomass Distribution on Earth,” Proceedings of the National Academy of Sciences, May 17, 2018.
– Anthony Barnosky, “Megafauna Biomass Tradeoff as a Driver of Quaternary and Future Extinctions,” Proceedings of the National Academy of Sciences 105 (August 2008).
– Vaclav Smil, Harvesting the Biosphere: What We Have Taken from Nature (Cambridge, MA: MIT Press, 2013).

Home grown: 67 years of US and Canadian house size data

Graph of the average size of new single-family homes, Canada and the US, 1950-2017
Average size of new single-family homes, Canada and the US, 1950-2017

I was an impressionable young boy back in 1971 when my parents were considering building a new home.  I remember discussions about house size.  1,200 square feet was normal back then.  1,600 square feet, the size of the house they eventually built, was considered extravagant—especially in rural Saskatchewan.  And only doctors and lawyers built houses as large as 2,000 square feet.

So much has changed.

New homes in Canada and the US are big and getting bigger.  The average size of a newly constructed single-family detached home is now 2,600 square feet in the US and probably 2,200 square feet in Canada.  The average size of a new house in the US has doubled since 1960.  Though data is sparse for Canada, it appears that the average size of a new house has doubled since the 1970s.

We like our personal space.  A lot.  Indeed, space per person has been growing even faster than house size: as our houses have grown, our families have shrunk, so per-capita space has increased dramatically.  The graph below, from shrinkthatfootprint.com, shows that Canadians and Americans, along with Australians, enjoy the most per-capita floorspace in the world.  The average Canadian or American has double the residential space of the average UK, Spanish, or Italian resident.

Those of us fortunate enough to have houses are living in the biggest houses in the world and the biggest in history.  And our houses continue to get bigger.  This is bad for the environment and for our finances.

Big houses require more energy and materials to construct.  Big houses hold more furniture and stuff—they are integral parts of high-consumption lifestyles.  Big houses contribute to lower population densities and, thus, more sprawl and driving.  And, all things being equal, big houses require more energy to heat and cool.  In Canada and the US we are compounding our errors: making our houses bigger, and making them energy-inefficient.  A 2,600 square foot home with leading-edge Passivhaus construction and net-zero energy requirements is one thing; a house that size that runs its furnace half the year and its air conditioner the other half is something else.  Multiply that kind of house by millions and we create a built-in greenhouse gas emissions problem.

Then there are the issues of cost and debt.  We continually hear that houses are unaffordable.  Not surprising if we’re making them twice as large.  What if, over the past decade, we had made our new houses half as big, but made twice as many?  Might that have reduced prices?

And how are large houses connected to large debt-loads?  Canadian household debt now stands at a record $1.8 trillion.  Much of that is mortgage debt.  Even at a low interest rate of 3.5 percent, the interest on that debt works out to $7,000 per year for a hypothetical family of four.  And that’s just the average.  Many families are paying a multiple of that amount, just in interest.  Then on top of that there are principal payments.  It’s not hard to see why so many families struggle to save for retirement or pay off debt.
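
A rough sketch of that arithmetic (the population figure of about 36 million is my assumption for the calculation; it is not stated above):

```python
total_debt = 1.8e12     # record Canadian debt, dollars (from the text)
interest_rate = 0.035   # 3.5 percent (from the text)
population = 36e6       # assumed Canadian population; not from the text
family_size = 4

annual_interest = total_debt * interest_rate  # $63 billion per year
families = population / family_size           # 9 million families of four
print(round(annual_interest / families))      # 7000
```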

Our ever-larger houses are filling the air with emissions; emptying our pockets of savings; filling up with consumer-economy clutter; and creating car-mandatory, unwalkable, unbikable, unlovely neighborhoods.

The solutions are severalfold.  First, new houses must stop getting bigger.  And they must start getting smaller.  There is no reason that Canadian and US residential spaces must be twice as large, per person, as European homes.  Second, building standards must get a lot better, fast.  Greenhouse gas emissions must fall by 50 to 80 percent by mid-century.  It is critical that the houses we build in 2020 are designed with energy-efficient walls, solar-heat-harvesting glass, and engineered summer shading so that they require 50 to 80 percent less energy to heat and cool.  Third, we need to take advantage of smaller, more rational houses to build more compact, walkable, bikable, enjoyable neighborhoods.  Preventing sprawl starts at home.

Finally, we need to consider questions of equity, justice, and compassion.  What is our ethical position if we are, on the one hand, doubling the size of our houses and tripling our per-capita living space and, on the other hand, claiming that we “can’t afford” housing for the homeless?  Income inequality is not just a matter of abstract dollars.  This inequality is manifest when some of us have rooms in our homes we seldom visit while others sleep outside in the cold.

We often hear about the “triple bottom line”: making our societies ecologically, economically, and socially sustainable.  Building oversized homes moves us away from sustainability, on all three fronts.

Graph sources:
US Department of Commerce/US Census Bureau, “2016 Characteristics of New Housing”
US Department of Commerce/US Census Bureau, “Characteristics of New Housing: Construction Reports”
US Department of Commerce/US Census Bureau, “Construction Reports: Characteristics of New One-Family Homes: 1969”
US Department of Labor, Bureau of Labor Statistics, “New Housing and Its Materials: 1940-56”
Preet Banerjee, “Our Love Affair with Home Ownership Might Be Doomed,” Globe and Mail, January 18, 2012 (updated February 20, 2018)

The cattle crisis: 100 years of Canadian cattle prices

Graph of Canadian cattle prices, historic, 1918-2018
Canadian cattle prices at slaughter, Alberta and Ontario, 1918-2018

Earlier this month, Brazilian beef packer Marfrig Global Foods announced it is acquiring 51 percent ownership of US-based National Beef Packing for just under $1 billion (USD).  The merged entity will slaughter about 5.5 million cattle per year, making Marfrig/National the world’s fourth-largest beef packer.  (The top three are JBS, 17.4 million per year; Tyson, 7.7 million; and Cargill, 7.6 million.)  To put these numbers into perspective, with the Marfrig/National merger, the largest four packing companies will together slaughter about 15 times more cattle worldwide than Canada produces in a given year.  In light of continuing consolidation in the beef sector it is worth taking a look at how cattle farmers and ranchers are faring.
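
The per-company slaughter figures above can be summed directly; the Canadian production level is my inference from the “about 15 times” comparison, not a figure stated in the text:

```python
# Annual slaughter figures quoted above, in head of cattle per year.
top_four = {
    "JBS": 17.4e6,
    "Tyson": 7.7e6,
    "Cargill": 7.6e6,
    "Marfrig/National": 5.5e6,
}
total = sum(top_four.values())
print(total / 1e6)  # 38.2 (million head per year)

# "About 15 times" Canada's production implies roughly 2.5 million
# head per year in Canada (my inference; not stated above).
print(round(total / 15 / 1e6, 1))  # 2.5
```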

This week’s graph shows Canadian cattle prices from 1918 to 2018.  The heavy blue line shows Ontario slaughter steer prices, and is representative of Eastern Canadian cattle prices.  The narrower tan-coloured line shows Alberta slaughter steer prices, and is representative for Western Canada.  The prices are in dollars per pound and they are adjusted for inflation.

The two red lines at the centre of the graph delineate the price range from 1942 to 1989.  The red lines on the right-hand side of the graph delineate prices since 1989.  The difference between the two periods is stark.  In the 47 years before 1989, Canadian slaughter steer prices never fell below $1.50 per pound (adjusted for inflation).  In the 28 years since 1989, prices have rarely risen that high.  Price levels that used to mark the bottom of the market now mark the top.

What changed in 1989?  Several things:

1. The arrival of US-based Cargill in Canada in that year marked the beginning of integration and consolidation of the North American continental market.  This was later followed by global integration as packers such as Brazil-based JBS set up plants in Canada and elsewhere.

2. Packing companies became much larger but packing plants became much less numerous.  Gone were the days when two or three packing plants in a given city would compete to purchase cattle.

3. Packer consolidation and giantism were facilitated by trade agreements and global economic integration.  It was in 1989 that Canada signed the Canada-US Free Trade Agreement (CUSTA).  A few years later Canada would sign NAFTA, the World Trade Organization (WTO) Agreement on Agriculture, and other bilateral and multilateral “free trade” deals.

4. Packing companies created captive supplies—feedlots full of packer-owned cattle that the companies could draw from if open-market prices rose, curtailing demand for farmers’ cattle and disciplining prices.

Prices and profits are only partly determined by supply and demand.  A larger factor is market power.  It is this power that determines the allocation of profits within a supply chain.  In the late ’80s and continuing today, the power balance between packers and farmers shifted as packers merged to become giant, global corporations.  The balance shifted as packing plants became less numerous, reducing competition for farmers’ cattle.  The balance shifted still further as packers began to utilize captive supplies.  And it shifted further still as trade agreements thrust farmers in every nation into a single, hyper-competitive global market.  Because market power determines profit allocation, these shifts increased the profit share for packers and decreased the share for farmers.  The effects on cattle farmers have been devastating.  Since the late 1980s, Canada has lost half of its cattle farmers and ranchers.

For more background and analysis, please see the 2008 report by the National Farmers Union: The Farm Crisis and the Cattle Sector: Toward a New Analysis and New Solutions.

Graph sources: numerous, including Statistics Canada CANSIM Tables 002-0043, 003-0068, 003-0084; and  Statistics Canada “Livestock and Animal Products”, Cat. No. 23-203

 

 

There are just two sources of energy

Graph of global primary energy supply by fuel or energy source, 1965-2016
Global primary energy consumption by fuel or energy source, 1965-2016

Our petro-industrial civilization produces and consumes a seemingly diverse suite of energies: oil, coal, ethanol, hydroelectricity, gasoline, geothermal heat, hydrogen, solar power, propane, uranium, wind, wood, dung.  At the most foundational level, however, there are just two sources of energy.  Two sources provide more than 99 percent of the power for our civilization: solar and nuclear.  Every other significant energy source is a form of one of these two.  Most are forms of solar.

When we burn wood we release previously captured solar energy.  The firelight we see and the heat we feel are energies from sunlight that arrived decades ago.  That sunlight was transformed into chemical energy in the leaves of trees and used to form wood.  And when we burn that wood, we turn that chemical-bond energy back into light and heat.  Energy from wood is a form of contemporary solar energy because it embodies solar energy mostly captured years or decades ago, as distinct from fossil energy sources such as coal and oil that embody solar energy captured many millions of years ago.

Straw and other biomass are a similar story: contemporary solar energy stored as chemical-bond energy then released through oxidation in fire.  Ethanol, biodiesel, and other biofuels are also forms of contemporary solar energy (though subsidized by the fossil fuels used to create fertilizers, fuels, etc.).

Coal, natural gas, and oil products such as gasoline and diesel fuel are also, fundamentally, forms of solar energy, but not contemporary solar energy: fossil.  The energy in fossil fuels is the sun’s energy that fell on leaves and algae in ancient forests and seas.  When we burn gasoline in our cars, we are propelled to the corner store by ancient sunlight.

Wind power is solar energy.  Heat from the sun creates air-temperature differences that drive air movements, which can be turned into electrical energy by wind turbines, mechanical work by windmills, or forward motion by sailing ships.

Hydroelectric power is solar energy.  The sun evaporates and lifts water from oceans, lakes, and other water bodies, and that water falls on mountains and highlands where it is aggregated by terrain and gravity to form the rivers that humans dam to create hydro-power.

Of course, solar energy (both photovoltaic electricity and solar-thermal heat) is solar energy.

Approximately 86 percent of our non-food energy comes from fossil-solar sources such as oil, natural gas, and coal.  Another 9 percent comes from contemporary solar sources, mostly hydro-electric, with a small but rapidly growing contribution from wind turbines and solar photovoltaic panels.  In total, then, 95 percent of the energy we use comes from solar sources—contemporary or fossil.  As is obvious upon reflection, the Sun powers the Earth.

The only major energy source that is not solar-based is nuclear power: energy released from the nuclei of unstable, heavy elements buried in the ground billions of years ago when our planet was formed.  We utilize nuclear energy directly, by splitting those nuclei in reactors, and also indirectly, when we tap geothermal energies (radioactive decay provides 60 to 80 percent of the heat from within the Earth).  Uranium and other radioactive elements were forged in the cores of stars that exploded before our Earth and Sun were created billions of years ago.  The source for nuclear energy is therefore not solar, but nonetheless stellar; energized not by our sun, but by another.  Our universe is energized by its stars.

There are two minor exceptions to the rule that our energy comes from nuclear and solar sources: tidal power results from the interaction of the moon’s gravitational field with the rotational motion of the Earth; and a minor fraction of geothermal energy is a product of residual heat within the Earth, and of gravity.  Tidal and geothermal sources provide just a small fraction of one percent of our energy supply.

Some oft-touted energy sources are not mentioned above.  Because some are not energy sources at all.  Rather, they are energy-storage media.  Hydrogen is one example.  We can create purified hydrogen by, for instance, using electricity to split water into its oxygen and hydrogen atoms.  But this requires energy inputs, and the energy we get out when we burn hydrogen or react it in a fuel cell is less than the energy we put in to purify it.  Hydrogen, therefore, functions like a gaseous battery: energy carrier, not energy source.

Understanding that virtually all energy sources are solar or nuclear in origin reduces the intellectual clutter and clarifies our options.  We are left with three energy supply categories when making choices about our future:
– Fossil solar: oil, natural gas, and coal;
– Contemporary solar: hydroelectricity, wood, biomass, wind, photovoltaic electricity, ethanol and biodiesel (again, often energy-subsidized from fossil-solar sources); and
– Nuclear.

Knowing that virtually all energy flows have their origins in our sun or other stars helps us critically evaluate oft-heard ideas that there may exist undiscovered energy sources.  To the contrary, it is extremely unlikely that there are energy sources we’ve overlooked.  The solution to energy supply constraints and climate change is not likely to be “innovation” or “technology.”  Though some people hold out hope for nuclear fusion (creating a small sun on Earth rather than utilizing the conveniently placed large sun in the sky) it is unlikely that fusion will be developed and deployed this century.  Thus, the suite of energy sources we now employ is probably the suite that will power our civilization for generations to come.  And since fossil solar sources are both limited and climate-disrupting, an easy prediction is that contemporary solar sources such as wind turbines and solar photovoltaic panels will play a dominant role in the future.

 

Graph sources: BP Statistical Review of World Energy 2017

 

Rail lines, not pipelines: the past, present, and future of Canadian passenger rail

Graph of Canadian railway network, kilometres, historic, 1836 to 2016
Canadian railway network, kilometres of track, 1836 to 2016

One kilometre of oil pipeline contains the same amount of steel as two kilometres of railway track.*  The proposed Trans Mountain pipeline expansion will, if it goes ahead, consume enough steel to build nearly 2,000 kms of new passenger rail track.  The Keystone XL project would consume enough steel to build nearly 4,000 kms of track.  And the now-cancelled Energy East pipeline would have required as much steel as 10,000 kms of track.  (For an overview of proposed pipelines, see this CAPP publication.)

With these facts in mind, Canadians (and Americans) should consider our options and priorities.  There’s tremendous pressure to build new pipelines.  Building them, proponents claim, will result in jobs and economic development.  But if we’re going to spend billions of dollars, lay down millions of tonnes of steel, and consume millions of person-hours of labour, should we be building soon-to-be-obsolete infrastructure to transport climate-destabilizing fossil fuels?  Or should we take the opportunity to create even more jobs building a zero-emission twenty-first century transportation network for Canada and North America?  Admittedly, the economics of passenger rail are different than those of pipelines; building a passenger rail system is not simply a matter of laying down steel rails.  But for reasons detailed below, limiting global warming probably makes significant investments in passenger rail inevitable.

The graph above shows the total length of the Canadian railway network.  The time-frame is the past 180 years: 1836 to 2016.  Between 1880 and 1918, Canada built nearly 70,000 kms of railway track—nearly 2,000 kms per year, using tools and machinery that were crude by modern standards, and at a time when the nation and its citizens were poor, compared to today.  In the middle and latter decades of the twentieth century, tens of thousands of kms of track were upgraded to accommodate heavier loads.

The length of track in the Canadian railway system peaked in the 1980s.  Recent decades have seen the network contract.  About a third of Canadian rail lines have been torn up and melted down over the past three-and-a-half decades.  Passenger rail utilization in recent years has fallen to levels not seen since the 1800s—down almost 90 percent from its 1940s peak, despite a doubling of the Canadian population.  Indeed, ridership on Via Rail is half of what it was as recently as 1989.

Contrast China.  In just one decade, that nation has built 25,000 kms of high-speed passenger rail lines.  Trains routinely operate at speeds in excess of 300 km/h.  Many of those trains were designed and built by Canada’s Bombardier.  China plans to build an additional 13,000 kms of high-speed passenger lines in the next seven years.

Japan’s “bullet trains” began running more than 50 years ago.  The Japanese high-speed rail network now exceeds 2,700 kms, with trains reaching speeds of 320 km/h.

Saudi Arabia, Poland, Turkey, and Morocco all have high-speed lines, as do more than a dozen nations in Europe.  Uzbekistan—with a GDP one-twentieth that of Canada’s—has built 600 kms of high-speed rail line and has trains operating at 250 km/h.

The construction of Canadian and North American passenger rail networks is probably inevitable.  As part of an international effort to hold global temperature increases below 2 degrees C, Canada has committed to reduce greenhouse gas (GHG) emissions by 30 percent by 2030—now less than 12 years away.  Emissions reductions must continue after 2030, reaching 50 to 60 percent in little more than a generation.  Emission reductions of this magnitude require an end to routine air travel.  Aircraft may still be needed for trans-oceanic travel, but within continents long-distance travel will have to take place using zero-emission vehicles: electric cars or buses for shorter journeys, and electrified passenger trains for longer ones.

This isn’t bad news.  Trains can transport passengers from city-centre to city-centre, eliminating long drives to and from airports.  Trains do not require time-consuming airport security screenings.  These factors, combined with high speeds, mean that for many trips, the total travel time is less for trains than for planes.  And because trains have more leg-room and often include observation cars, restaurants, and lounges, they are much more comfortable, enjoyable, and social.  For some long journeys where it is not cost-effective to build high-speed rail lines, European-style sleeper trains can provide comfortable, convenient overnight transport.  In other cases, medium-speed trains (traveling 150 to 200 km/h) may be the most cost-effective option.

Canada must embrace the inevitable: air travel must be cut by 90 percent; and fast, comfortable, zero-emission trains must take the place of the planes.  Maybe we can build thousands of kms of passenger rail lines and thousands of kms of pipelines.  But given the gravity and menace of the climate crisis and given the rapidly approaching deadlines to meet our emission-reduction commitments, it isn’t hard to see which should be our priority.


*For example, Kinder Morgan’s Trans Mountain pipeline would be made up primarily of 36-inch (914 mm) pipe with a 0.465-inch (11.8 mm) wall thickness.  This pipe weighs 262 kgs/m.  Rails for high-speed trains and other demanding applications often weigh 60 kgs/m.  As two rails are needed, this means 120 kgs/m—roughly half the weight of a comparable length of pipeline.
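
The footnote’s numbers can be turned into a simple converter.  The ~980 km pipeline length used below is my illustrative assumption for the Trans Mountain expansion, not a figure from the text:

```python
PIPE_KG_PER_M = 262                  # 36-inch pipe, 11.8 mm wall (footnote)
RAIL_KG_PER_M = 60                   # one heavy rail (footnote)
TRACK_KG_PER_M = 2 * RAIL_KG_PER_M   # two rails per track = 120 kg/m

def pipeline_km_to_track_km(pipeline_km):
    """Kilometres of twin-rail track buildable from the same mass of steel."""
    return pipeline_km * PIPE_KG_PER_M / TRACK_KG_PER_M

# An assumed ~980 km of pipeline yields roughly the "nearly 2,000 kms"
# of track cited in the text.
print(round(pipeline_km_to_track_km(980)))  # 2140
```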

Graph sources:
Urquhart and Buckley, 1965, Historical Statistics of Canada.
Leacy, Urquhart, and Buckley, 1983, Historical Statistics of Canada, 2nd Ed.
Stats. Can., Various years, Railway Transport in Canada: General Statistics.
Stats. Can., CANSIM Table 404-0010

 

Will Trump’s America crash Earth’s climate?

Graph of US energy consumption by fuel, 1990 to 2050
US energy consumption by fuel, 1990 to 2050

Last week, the US Department of Energy (DOE) released its annual report projecting future US energy production and consumption and greenhouse gas (GHG) emissions.  This year’s report, entitled Annual Energy Outlook 2018, with Projections to 2050, forecasts a nightmare scenario of increasing fossil fuel use, increasing emissions, lackluster adoption of renewable energy options, and a failure to shift to electric vehicles, even by mid-century.

The graph above is copied from that DOE report.  The graph shows past and projected US energy consumption by fuel type.  The top line shows “petroleum and other liquids.”  This is predominantly crude oil products, with a minor contribution from “natural gas liquids.”  For our purposes, we can think of it as representing liquid fuels used in cars, trucks, planes, trains, and ships.  Note how the US DOE is projecting that in 2050 America’s consumption of these high-emission fuels will be approximately equal to levels today.

The next line down is natural gas.  This is used mostly for heating and for electricity generation.  Note how the DOE is projecting that consumption (i.e., combustion) of natural gas will be about one-third higher in 2050 than today.

Perhaps worst of all, coal combustion will be almost as high in 2050 as it is today.   No surprise, the DOE report (page 15) projects that US GHG emissions will be higher in 2050 than today.

Consumption of renewable energy will rise.  The DOE is projecting that in 2050 “other renewables”—essentially electricity from solar photovoltaic panels and wind turbines—will provide twice as much power as today.  But that will be only a fraction of the energy supplied by fossil fuels: oil, natural gas, and coal.

How can this be?  The world’s nations have committed, in Paris and elsewhere, to slash emissions by mid-century.  To keep global temperature increases below 2 degrees Celsius, industrial nations will have to cut emissions by half by 2050.  So what’s going on in America?

The DOE projections reveal that America’s most senior energy analysts and policymakers believe that US policies currently in place will fail to curb fossil fuel use and reduce GHG emissions.  The DOE report predicts, for example, that in 2050 electric vehicles will make up just a small fraction of the US auto fleet.  See the graph below.  Look closely and you’ll see the small green wedge representing electrical energy use in the transportation sector.  The graph also shows that the consumption of fossil fuels—motor gasoline, diesel fuel, fuel oil, and jet fuel—will be nearly as high in 2050 as it is now.  This is important: the latest data from the top experts in the US government predict that, given current policies, the transition to electric vehicles will not happen.

The next graph, below, shows that electricity production from solar arrays will increase significantly.  But the projection is that the US will not install significant numbers of wind turbines, so long as current policies remain in force and current market conditions prevail.

The report projects (page 84) that in 2050 electricity generation from the combustion of coal and natural gas will be twice as high as generation from wind turbines and solar panels.

Clearly, this is all just a set of projections.  The citizens and governments of the United States can change this future.  And they probably will.  They can implement policies that dramatically accelerate the transition to electric cars, electric trains, energy-efficient buildings, and low-emission renewable energy.

But the point of this DOE report (and the point of this blog post) is that such policies are not yet in place.  In effect, the US DOE report should serve as a warning: continue as now and the US misses its emissions reduction commitments by miles, the Earth probably warms by 3 degrees or more, and we risk setting off a number of global climate feedbacks that could render huge swaths of the planet uninhabitable and kill hundreds of millions of people this century.

The house is on fire.  We can put it out.  But the US Department of Energy is telling us that, as of now, there are no effective plans to do so.

Perhaps step one is to remove the arsonist-in-chief.

 

If you’re for pipelines, what are you against?

Graph of Canadian greenhouse gas emissions, by sector, 2005 to 2039
Canadian greenhouse gas emissions, by sector, 2005 to 2030

As Alberta Premier Notley and BC Premier Horgan square off over the Kinder Morgan / Trans Mountain pipeline, as Alberta and then Saskatchewan move toward elections in which energy and pipelines may be important issues, and as Ottawa pushes forward with its climate plan, it’s worth taking a look at the pipeline debate.  Here are some facts that clarify this issue:

1.  Canada has committed to reduce its greenhouse gas (GHG) emissions by 30 percent (to 30 percent below 2005 levels by 2030).

2.  Oil production from the tar sands is projected to increase by almost 70 percent by 2030 (from 2.5 million barrels per day in 2015 to 4.2 million in 2030).

3.  Pipelines are needed in order to enable increased production, according to the Canadian Association of Petroleum Producers (CAPP) and many others.

4.  Planned expansion in the tar sands will significantly increase emissions from oil and gas production.  (see graph above and this government report)

5.  Because there’s an absolute limit on our 2030 emissions (515 million tonnes), if the oil and gas sector is to emit more, other sectors must emit less.  To put that another way, since we’re committed to a 30 percent reduction, if the tar sands sector reduces emissions by less than 30 percent—indeed if that sector instead increases emissions—other sectors must make cuts deeper than 30 percent.
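The arithmetic behind point 5 is worth making explicit. The sketch below uses illustrative round numbers: the 2005 baseline is implied by the 515-million-tonne cap, while the oil-and-gas figures are assumptions for demonstration, not official sector data.

```python
# Illustrative emissions budget arithmetic (all figures Mt CO2e).
baseline_2005 = 736.0    # implied: 515 is 70% of this
target_2030 = 515.0      # 30% below 2005, as committed
oil_gas_2005 = 160.0     # assumed oil & gas share in 2005
oil_gas_2030 = 200.0     # assumed increase driven by tar sands expansion

rest_2005 = baseline_2005 - oil_gas_2005          # rest of economy, 2005
rest_2030_allowed = target_2030 - oil_gas_2030    # what remains of the cap
required_cut = 1 - rest_2030_allowed / rest_2005

print(f"Other sectors must cut {required_cut:.0%}, not 30%")  # prints 45%
```

With these assumed numbers, a rise in oil-and-gas emissions forces the rest of the economy to cut roughly 45 percent, half again as deep as the 30 percent headline commitment.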

The graph below uses the same data as the graph above—data from a recent report from the government of Canada.  This graph shows how planned increases in emissions from the Alberta tar sands will force very large reductions elsewhere in the Canadian economy.

Graph of emissions from the Canadian oil & gas sector vs. the rest of the economy, 2015 & 2030
Emissions from the Canadian oil & gas sector vs. the rest of the economy, 2015 & 2030

Let’s look at the logic one more time: new pipelines are needed to facilitate tar sands expansion; tar sands expansion will increase emissions; and an increase in emissions from the tar sands (rather than a 30 percent decrease) will force other sectors to cut emissions by much more than 30 percent.

But what sector or region or province will pick up the slack?  Has Alberta, for instance, checked with Ontario?  If Alberta (and Saskatchewan) cut emissions by less than 30 percent, or if they increase emissions, is Ontario prepared to make cuts larger than 30 percent?  Is Manitoba or Quebec?  If the oil and gas sector cuts by less, is the manufacturing sector prepared to cut by more?

To escape this dilemma, many will want to point to the large emission reductions possible from the electricity sector.  Sure, with very aggressive policies to move to near-zero-emission electrical generation (policies we’ve yet to see) we can dramatically cut emissions from that sector.  But on the other hand, cutting emissions from agriculture will be very difficult.  So potential deep cuts from the electricity sector will be partly offset by more modest cuts, or increases, from agriculture, for example.

The graph at the top shows that even as we make deep cuts to emissions from electricity—a projected 60 percent reduction—increases in emissions from the oil and gas sector (i.e., the tar sands) will negate 100 percent of the progress in the electricity sector.  The end result, according to these projections from the government of Canada, is that we miss our 2030 target.  To restate: according to the government’s most recent projections we will fail to meet our Paris commitment, and the primary reason will be rising emissions resulting from tar sands expansion.  This is the big-picture context for the pipeline debate.

We’re entering a new era, one of limits, one of hard choices, one that politicians and voters have not yet learned to navigate.  We are exiting the cornucopian era, the age of petro-industrial exuberance when we could have everything; do it all; have our cake, eat it, and plan on having two cakes in the near future.  In this new era of biophysical limits on fossil fuel combustion and emissions, on water use, on forest cutting, etc., if we want to do one thing, we may be forced to forego something else.  Thus, it is reasonable to ask: If pipeline proponents would have us expand the tar sands, what would they have us contract?

Graph sources: Canada’s 7th National Communication and 3rd Biennial Report, December 2017

Earth’s dominant bird: a look at 100 years of chicken production

Graph of Chicken production, 1950-2050
Chicken meat production, global, actual and projected, 1950 to 2050

There are approximately 23 billion chickens on the planet right now.   But because the life of a meat chicken is short—less than 50 days—annual production far exceeds the number of chickens alive at any one time.  In 2016, worldwide, chicken production topped 66 billion birds.  Humans are slaughtering, processing, and consuming about 2,100 chickens per second.

We’re producing a lot of chicken meat: about 110 million tonnes per year.  And we’re producing more and more.  In 1966, global production was 10 million tonnes.  In just twelve years, by 1978, we’d managed to double production.  Fourteen years after that, 1992, we managed to double it again, to 40 million tonnes.  We doubled it again to 80 million tonnes by 2008.  And we’re on track for another doubling—a projected 160 million tonnes per year before 2040.  By mid-century, production should exceed 200 million tonnes—20 times the levels in the mid-’60s.  This week’s graph shows the steady increase in production.  Data sources are listed below.
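The rates above are easy to verify with a little arithmetic; the figures below come straight from the text:

```python
import math

# Per-second slaughter rate: 66 billion birds in 2016
birds_per_year = 66e9
seconds_per_year = 365.25 * 24 * 3600
print(round(birds_per_year / seconds_per_year))  # roughly 2,100 per second

# Implied growth rate and doubling time, 1966 (10 Mt) to 2016 (110 Mt)
growth = (110 / 10) ** (1 / 50) - 1                  # ~4.9% per year
doubling_time = math.log(2) / math.log(1 + growth)   # ~14.5 years
print(f"{growth:.1%} per year, doubling every {doubling_time:.0f} years")
```

A steady growth rate of about 5 percent per year yields a doubling roughly every 14 years, which matches the 12-to-14-year doublings listed above.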

The capacity of our petro-industrial civilization to double and redouble output is astonishing.  And there appears to be no acknowledged limit.  Most would predict that, as population and income levels rise in the second half of the century—as another one or two billion people join the “global middle class”—consumption of chicken and other meats will double again between 2050 and 2100.  Before this century ends, consumption of meat (chicken, pork, beef, lamb, farmed fish, and other meats) may approach a trillion kilograms per year.

Currently in Canada the average chicken farm produces about 325,000 birds annually.  Because these are averages, we can assume that the output of the largest operations is several times this figure.  In the US, chicken production is dominated by contracting.  Large transnationals such as Tyson Foods contract with individual growers to feed birds.  It is not unusual for a contract grower to have 6 to 12 barns on his or her farm and raise more than a million broiler chickens per year.

We’re probably making too many McNuggets.  We’re probably catching too many fish.  We’re probably feeding too many pigs.  And it is probably not a good idea to double the number of domesticated livestock on the planet—double it to 60 billion animals.  It’s probably time to rethink our food system.  

Graph sources:
FAOSTAT database
OECD-FAO, Agricultural Outlook 2017-2026
Brian Revell: One Man’s Meat … 2050?
Lester Brown: Full Planet, Empty Plates
FAO: World Agriculture Towards 2030/2050, the 2012 revision

The 100th Anniversary of high-input agriculture

Graph of tractor and horse numbers, Canada, historic, 1910 to 1980
Tractors and horses on farms in Canada, 1910 to 1980

2018 marks the 100th anniversary of the beginning of input-dependent farming—the birth of what would become modern high-input agriculture.  It was in 1918 that farmers in Canada and the US began to purchase large numbers of farm tractors.  These tractors required petroleum fuels.  Those fuels became the first major farm inputs.  In the early decades of the 20th century, farmers became increasingly dependent on fossil fuels, in the middle decades most also became dependent on fertilizers, and in the latter decades they also became dependent on agricultural chemicals and high-tech, patented seeds.

This week’s graph shows tractor and horse numbers in Canada from 1910 to 1980.  On both lines, the year 1918 is highlighted in red.  Before 1918, there were few tractors in Canada.  The tractors that did exist—mostly large steam engines—were too big and expensive for most farms.  But in 1918 three developments spurred tractor proliferation: the introduction of smaller, gasoline-engine tractors (the Fordson, for example); a wartime farm-labour shortage; and a large increase in industrial production capacity.  In the final year of WWI and in the years after, tractor sales took off.  Shortly after, the number of horses on farms plateaued and began to fall.  Economists Olmstead and Rhode have produced a similar graph for the US.

It’s important to understand the long-term significance of what has unfolded since 1918.  Humans have practiced agriculture for about 10,000 years—about 100 centuries.  For 99 centuries, there were almost no farm inputs—no industrial products that farmers had to buy each spring in order to grow their crops.  Sure, before 1918, farmers bought farm implements—hoes, rakes, and sickles in the distant past, and plows and binders more recently.  And there were some fertilizer products available, such as those derived from seabird guano (manure) in the eighteenth and nineteenth centuries.  And farmers occasionally bought and sold seeds.  But for most farmers in most years before about 1918, the production of a crop did not require purchasing an array of farm inputs.  Farm chemicals did not exist, very little fertilizer was available anywhere in the world until after WWII, and farmers had little use for gasoline or diesel fuel.  Before 1918, farms were largely self-sufficient, deriving seeds from the previous year’s crop, fertility from manure and nitrogen-fixing crops, and pulling-power from horses energized by the hay and grain that grew on the farm itself.  For 99 of the 100 centuries that agriculture has existed, farms produced the animal- and crop-production inputs they needed.  Nearly everything that went into farming came out of farming.

For 99 percent of the time that agriculture has existed there were few farm inputs, no farm-input industries, and little talk of “high input costs.”  Agricultural production was low-input, low-cost, solar-powered, and low-emission.  In the most recent 100 years, however, we’ve created a new kind of agricultural system: one that is high-input, high-cost, fossil-fuelled, and high-emission.

Modern agriculture is also, admittedly, high-output.  But this last fact must be understood in context: the incredible food-output tonnage of modern agriculture is largely a reflection of the megatonnes of fertilizers, fuels, and chemicals we push into the system.  Nitrogen fertilizer illustrates this process.  To produce, transport, and apply one tonne of synthetic nitrogen fertilizer requires an amount of energy equal to almost two tonnes of gasoline.  Modern agriculture is increasingly a system for turning fossil fuel Calories into food Calories.  Food is increasingly a petroleum product.
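The fertilizer claim can be restated in energy units. The energy densities below are textbook approximations rather than figures from this post, and the wheat comparison is purely an illustration:

```python
# Rough energy bookkeeping for nitrogen fertilizer.
GASOLINE_MJ_PER_KG = 46.0     # approx. lower heating value of gasoline
WHEAT_FOOD_MJ_PER_KG = 14.0   # approx. food energy of wheat grain

# "Almost two tonnes of gasoline" per tonne of N, expressed in gigajoules:
energy_per_t_n = 2 * 1000 * GASOLINE_MJ_PER_KG / 1000   # GJ per tonne N

# For scale: how many tonnes of wheat contain that much food energy?
wheat_equiv_t = energy_per_t_n * 1000 / WHEAT_FOOD_MJ_PER_KG / 1000

print(f"~{energy_per_t_n:.0f} GJ/t N, the food energy of ~{wheat_equiv_t:.1f} t of wheat")
```

On these assumptions, each tonne of nitrogen embodies roughly 92 gigajoules, the food energy of more than six tonnes of wheat, which is one way of seeing how fossil Calories flow into food Calories.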

The high-input era has not been kind to farmers.  Two-thirds of Canadian farmers have been ushered out of agriculture over the past two generations.  More troubling and more recent: the number of young farmers—those under 35—has been reduced by two-thirds since 1991.  Farm debt is at a record high: nearly $100 billion.  And about the same amount, $100 billion, has had to be transferred from taxpayers to farmers since the mid-1980s to keep the Canadian farm sector afloat.  Farmers are struggling with high costs and low margins.

This is not a simplistic indictment of “industrial agriculture.”  We’re not going back to horses.  But on the 100th anniversary of the creation of fossil-fuelled, high-input agriculture we need to think clearly and deeply about our food production future.  As our fossil-fuel supplies dwindle, as greenhouse gas levels rise, as we struggle to feed and employ billions more people, and as we struggle with many other environmental and economic problems, we will need to rethink and radically transform our food production systems.  Our current food system isn’t “normal”: it’s an anomaly—a break with the way that agriculture has operated for 99 percent of its history.  It’s time to ask hard questions and make big changes.  It’s time to question the input-maximizing production systems agribusiness corporations have created, and to explore new methods of low-input, low-energy-use, low-emission production.

Rather than maximizing input use, we need to maximize net farm incomes, maximize the number of farm families on the land, and maximize the resilience and sustainability of our food systems.

Global plastics production, 1917 to 2050

Graph of global plastic production, 1917 to 2017
Global plastic production, megatonnes, 1917 to 2017

This week’s graph shows global annual plastics production over the past 100 years.  No surprise, we see exponential growth—a hallmark of our petro-industrial consumer civilization.  Long-term graphs of nearly anything (nitrogen fertilizer production, energy use, automobile production, greenhouse gas emissions, air travel, etc.) display this same exponential take-off.

Plastics present a good news / bad news story.  First, we should acknowledge that the production capacities we’ve developed are amazing!  Worldwide, our factories now produce approximately 400 million tonnes of plastic per year.  That’s more than a billion kilograms per day!  Around the world we’ve built thousands of machines that can, collectively, produce plastic soft-drink and water bottles at a rate of nearly 20,000 per second.  Our economic engines are so powerful that we’ve managed to double global plastic production tonnage in less than two decades.
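A quick back-of-the-envelope check of those figures (the calculation simply inverts the rates given above):

```python
# Tonnage check: 400 Mt/yr really is more than a billion kilograms per day.
annual_production_t = 400e6
print(annual_production_t * 1000 / 365)   # ~1.1e9 kg per day

# What does "nearly 20,000 bottles per second" imply annually?
seconds_per_year = 365 * 24 * 3600
print(20_000 * seconds_per_year / 1e9)    # ~630 billion bottles per year
```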

But of course that’s also the bad news: we’ve doubled plastic production tonnage in less than two decades.  And the world’s corporations and governments would have us go on doubling and redoubling plastics production.  The graph below shows the projected four-fold increase in production tonnage by 2050.

Graph of global plastics production to 2050
Projected global plastics production to 2050

Source: UN GRID-Arendal

Plastics are a product of human ingenuity and innovation—one of civilization’s great solutions.  They’re lightweight, durable, airtight, decay resistant, inexpensive, and moldable into a huge range of products.  But projected 2050 levels of production are clearly too much of a good thing.  Our growth-addicted economic system has a knack for turning every solution into a problem—every strength into a weakness.

At current and projected production levels, plastics are a big problem.  Briefly:

1.  Plastics are forever—well, almost.  Except for the tonnage we’ve incinerated, nearly all the plastic ever produced still exists somewhere in the biosphere, although much of it is now invisible to humans, reduced to tiny particles in ocean and land ecosystems.  Plastic is great because it lasts so long and resists decay.  Plastic is a big problem for those same reasons.

2. Only 18 percent of plastic is recycled.  This is the rate for plastics overall, including plastics in cars and buildings.  For plastic packaging (water bottles, chip bags, supermarket packaging, etc.) the recycling rate is just 14 percent.  But much of that plastic inflow is excluded during the sorting and recycling process, such that only 5 percent of plastic packaging material is actually returned to use through recycling.  And one-third of plastic packaging escapes garbage collection systems entirely and is lost directly into the environment: onto roadsides or into streams, lakes, and oceans.

3. Oceans are now receptacles for at least 8 billion kilograms of plastic annually—equivalent to a garbage truck full of plastic unloading into the ocean every minute.  The growth rates projected above will mean that by 2050 the oceans will be receiving the equivalent of one truckload of plastic every 15 seconds, night and day.  And unless we severely curtail plastic production and dumping, by 2050 the mass of plastic in our oceans will exceed the mass of fish.  Once in the ocean, plastics persist for centuries, in the form of smaller and smaller particles.  This massive contamination comes on top of other human impacts: overfishing, acidification, and ocean temperature increases.
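The truckload comparison checks out arithmetically, assuming a garbage truck holds roughly 15 tonnes (an assumed round figure, not from this post):

```python
ocean_inflow_kg = 8e9          # kg of plastic entering oceans per year
truck_capacity_kg = 15_000     # assumed capacity of one garbage truck
minutes_per_year = 365 * 24 * 60

trucks_per_minute = ocean_inflow_kg / truck_capacity_kg / minutes_per_year
print(f"{trucks_per_minute:.1f} truckloads per minute")   # prints 1.0

# At the projected four-fold increase in production by 2050:
seconds_per_truck_2050 = 60 / (trucks_per_minute * 4)
print(f"one truckload every {seconds_per_truck_2050:.0f} seconds")  # ~15
```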

4. Plastic is a fossil fuel product.  Plastic is made from oil and natural gas feedstocks—molecules extracted from the oil and gas become the plastic.  And oil, gas, and other energy sources are used to power the plastic-making processes.  By one estimate, 4 percent of global oil production is consumed as raw materials for plastic and an additional 4 percent provides energy to run plastics factories.

5. Plastics contain additives that harm humans and other species: fire retardants, stabilizers, antibiotics, plasticizers, pigments, bisphenol A, phthalates, etc.  Many such additives mimic hormones or disrupt hormone systems.  The 150 billion kilograms of plastics currently in the oceans include 23 billion kgs of additives, all of which will eventually be released into those ocean ecosystems.

It’s important to think about plastics, not just because doing so shows us that we’re doing something wrong, but because the tragic story of plastics shows us why and how our production and energy systems go wrong.  The story of plastics reveals the role of exponential growth in turning solutions into problems.  Thinking about the product-flow of plastics (oil well … factory … store … home … landfill/ocean) shows us why it is so critical to adopt closed-loop recycling and highly effective product-stewardship systems.  And the entire plastics debacle illustrates the hidden costs of consumerism, the collateral damage of disposable products, and the failure of “the markets” to protect the planet.

In a recent paper that takes a big-picture, long-term look at plastics, scientists advise that “without a well-designed … management strategy for end-of-life plastics, humans are conducting a singular uncontrolled experiment on a global scale, in which billions of metric tons of material will accumulate across all major terrestrial and aquatic ecosystems on the planet.”

Graph sources:
• 1950 to 2015 data from Geyer, Jambeck, and Law, “Production, Use, and Fate of All Plastics Ever Made,” Science Advances 3, no. 7 (July 2017).
• 2016 and 2017 data points are extrapolated at a 4.3 percent growth rate derived from the average growth rate during the previous 20 years.
• Pre-1950 production tonnage is assumed to be negligible, based on various sources and the very low production rates in 1950.
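The extrapolation described in the second bullet can be sketched as follows; the two endpoint values are illustrative stand-ins chosen to reproduce the stated 4.3 percent rate, not exact figures from Geyer et al.:

```python
# Derive a compound annual growth rate from a 20-year window and
# extrapolate two years forward, as the sources note describes.
p_1995, p_2015 = 175.0, 407.0   # Mt/yr, illustrative window endpoints

cagr = (p_2015 / p_1995) ** (1 / 20) - 1       # ~4.3% per year
p_2016 = p_2015 * (1 + cagr)
p_2017 = p_2016 * (1 + cagr)
print(f"growth ~{cagr:.1%}; 2016 ~{p_2016:.0f} Mt, 2017 ~{p_2017:.0f} Mt")
```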