The cattle crisis: 100 years of Canadian cattle prices

[Graph: Canadian cattle prices at slaughter, Alberta and Ontario, 1918-2018]

Earlier this month, Brazilian beef packer Marfrig Global Foods announced it is acquiring 51 percent ownership of US-based National Beef Packing for just under $1 billion (USD).  The merged entity will slaughter about 5.5 million cattle per year, making Marfrig/National the world’s fourth-largest beef packer.  (The top three are JBS, 17.4 million per year; Tyson, 7.7 million; and Cargill, 7.6 million.)  To put these numbers into perspective, with the Marfrig/National merger, the largest four packing companies will together slaughter about 15 times more cattle worldwide than Canada produces in a given year.  In light of continuing consolidation in the beef sector, it is worth taking a look at how cattle farmers and ranchers are faring.
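The arithmetic behind that “15 times” figure is worth making explicit.  Here is a minimal sketch; the Canadian production number is an assumption implied by the text, not a figure from the announcement:

```python
# Arithmetic behind the "15 times" comparison above.
# Figures are million head of cattle slaughtered per year.
packers = {"JBS": 17.4, "Tyson": 7.7, "Cargill": 7.6, "Marfrig/National": 5.5}

top_four = sum(packers.values())   # 38.2 million head per year
canada = 2.5                       # assumed Canadian annual production

print(f"Top four packers: {top_four:.1f} million head per year")
print(f"About {top_four / canada:.0f} times Canadian production")
```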

This week’s graph shows Canadian cattle prices from 1918 to 2018.  The heavy blue line shows Ontario slaughter steer prices, and is representative of Eastern Canadian cattle prices.  The narrower tan-coloured line shows Alberta slaughter steer prices, and is representative for Western Canada.  The prices are in dollars per pound and they are adjusted for inflation.

The two red lines at the centre of the graph delineate the price range from 1942 to 1989.  The red lines on the right-hand side of the graph delineate prices since 1989.  The difference between the two periods is stark.  In the 47 years before 1989, Canadian slaughter steer prices never fell below $1.50 per pound (adjusted for inflation).  In the 28 years since 1989, prices have rarely risen that high.  Price levels that used to mark the bottom of the market now mark the top.

What changed in 1989?  Several things:

1.  The arrival of US-based Cargill in Canada in that year marked the beginning of the integration and consolidation of the North American continental market.  This was later followed by global integration as packers such as Brazil-based JBS set up plants in Canada and elsewhere.

2.  Packing companies became much larger, but packing plants became much less numerous.  Gone were the days when two or three packing plants in a given city would compete to purchase cattle.

3.  Packer consolidation and giantism were facilitated by trade agreements and global economic integration.  It was in 1989 that Canada signed the Canada-US Free Trade Agreement (CUSTA).  A few years later Canada would sign NAFTA, the World Trade Organization (WTO) Agreement on Agriculture, and other bilateral and multilateral “free trade” deals.

4.  Packing companies created captive supplies—feedlots full of packer-owned cattle that the company could draw from if open-market prices rose, curtailing demand for farmers’ cattle and disciplining prices.

Prices and profits are only partly determined by supply and demand.  A larger factor is market power.  It is this power that determines the allocation of profits within a supply chain.  In the late ’80s and continuing today, the power balance between packers and farmers shifted as packers merged to become giant, global corporations.  The balance shifted as packing plants became less numerous, reducing competition for farmers’ cattle.  The balance shifted still further as packers began to utilize captive supplies.  And it shifted further still as trade agreements thrust farmers in every nation into a single, hyper-competitive global market.  Because market power determines profit allocation, these shifts increased the profit share for packers and decreased the share for farmers.   The effects on cattle farmers have been devastating.  Since the latter-1980s, Canada has lost half of its cattle farmers and ranchers.

For more background and analysis, please see the 2008 report by the National Farmers Union: The Farm Crisis and the Cattle Sector: Toward a New Analysis and New Solutions.

Graph sources: numerous, including Statistics Canada, CANSIM Tables 002-0043, 003-0068, and 003-0084; and Statistics Canada, “Livestock and Animal Products,” Cat. No. 23-203.

There are just two sources of energy

[Graph: Global primary energy consumption by fuel or energy source, 1965-2016]

Our petro-industrial civilization produces and consumes a seemingly diverse suite of energies: oil, coal, ethanol, hydroelectricity, gasoline, geothermal heat, hydrogen, solar power, propane, uranium, wind, wood, dung.  At the most foundational level, however, there are just two sources of energy.  Two sources provide more than 99 percent of the power for our civilization: solar and nuclear.  Every other significant energy source is a form of one of these two.  Most are forms of solar.

When we burn wood we release previously captured solar energy.  The firelight we see and the heat we feel are energies from sunlight that arrived decades ago.  That sunlight was transformed into chemical energy in the leaves of trees and used to form wood.  And when we burn that wood, we turn that chemical-bond energy back into light and heat.  Energy from wood is a form of contemporary solar energy because it embodies solar energy mostly captured years or decades ago, as distinct from fossil energy sources such as coal and oil that embody solar energy captured many millions of years ago.

Straw and other biomass are a similar story: contemporary solar energy stored as chemical-bond energy then released through oxidation in fire.  Ethanol, biodiesel, and other biofuels are also forms of contemporary solar energy (though subsidized by the fossil fuels used to create fertilizers, fuels, etc.).

Coal, natural gas, and oil products such as gasoline and diesel fuel are also, fundamentally, forms of solar energy, but fossil rather than contemporary solar energy.  The energy in fossil fuels is the sun’s energy that fell on leaves and algae in ancient forests and seas.  When we burn gasoline in our cars, we are propelled to the corner store by ancient sunlight.

Wind power is solar energy.  Heat from the sun creates air-temperature differences that drive air movements, and those movements can be turned into electrical energy by wind turbines, mechanical work by windmills, or motion by sailing ships.

Hydroelectric power is solar energy.  The sun evaporates and lifts water from oceans, lakes, and other water bodies, and that water falls on mountains and highlands where it is aggregated by terrain and gravity to form the rivers that humans dam to create hydro-power.

Of course, solar energy (both photovoltaic electricity and solar-thermal heat) is solar energy.

Approximately 86 percent of our non-food energy comes from fossil-solar sources such as oil, natural gas, and coal.  Another 9 percent comes from contemporary solar sources, mostly hydro-electric, with a small but rapidly growing contribution from wind turbines and solar photovoltaic panels.  In total, then, 95 percent of the energy we use comes from solar sources—contemporary or fossil.  As is obvious upon reflection, the Sun powers the Earth.

The only major energy source that is not solar-based is nuclear power: energy from the atomic decay of unstable, heavy elements buried in the ground billions of years ago when our planet was formed.  We utilize nuclear energy directly, in reactors, and also indirectly, when we tap geothermal energies (atomic decay provides 60-80 percent of the heat from within the Earth).  Uranium and other radioactive elements were forged in the cores of stars that exploded before our Earth and Sun were created billions of years ago.  The source for nuclear energy is therefore not solar, but nonetheless stellar; energized not by our sun, but by another.  Our universe is energized by its stars.

There are two minor exceptions to the rule that our energy comes from nuclear and solar sources: Tidal power results from the interaction of the moon’s gravitational field and the initial rotational motion imparted to the Earth; and geothermal energy is, in its minor fraction, a product of residual heat within the Earth, and of gravity.  Tidal and geothermal sources provide just a small fraction of one percent of our energy supply.

Some oft-touted energy sources are not mentioned above because they are not energy sources at all; rather, they are energy-storage media.  Hydrogen is one example.  We can create purified hydrogen by, for instance, using electricity to split water into its oxygen and hydrogen atoms.  But this requires energy inputs, and the energy we get out when we burn hydrogen or react it in a fuel cell is less than the energy we put in to purify it.  Hydrogen, therefore, functions like a gaseous battery: an energy carrier, not an energy source.
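A minimal numeric sketch makes the “gaseous battery” point concrete.  The efficiency values here are illustrative assumptions; real figures vary widely by technology:

```python
# Hydrogen round trip: energy out is always less than energy in.
# Both efficiencies are assumed, illustrative values.
electrolysis_efficiency = 0.70   # electricity -> hydrogen
fuel_cell_efficiency = 0.55      # hydrogen -> electricity

energy_in_kwh = 100.0
energy_out_kwh = energy_in_kwh * electrolysis_efficiency * fuel_cell_efficiency

print(f"{energy_out_kwh:.0f} kWh recovered from {energy_in_kwh:.0f} kWh in "
      f"({energy_out_kwh / energy_in_kwh:.0%} round trip)")
```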

Understanding that virtually all energy sources are solar or nuclear in origin reduces the intellectual clutter and clarifies our options.  We are left with three energy supply categories when making choices about our future:
– Fossil solar: oil, natural gas, and coal;
– Contemporary solar: hydroelectricity, wood, biomass, wind, photovoltaic electricity, ethanol and biodiesel (again, often energy-subsidized from fossil-solar sources); and
– Nuclear.

Knowing that virtually all energy flows have their origins in our sun or other stars helps us critically evaluate the oft-heard idea that there may exist undiscovered energy sources.  To the contrary, it is extremely unlikely that there are energy sources we’ve overlooked.  The solution to energy supply constraints and climate change is not likely to be “innovation” or “technology.”  Though some people hold out hope for nuclear fusion (creating a small sun on Earth rather than utilizing the conveniently placed large sun in the sky), it is unlikely that fusion will be developed and deployed this century.  Thus, the suite of energy sources we now employ is probably the suite that will power our civilization for generations to come.  And since fossil solar sources are both limited and climate-disrupting, an easy prediction is that contemporary solar sources such as wind turbines and solar photovoltaic panels will play a dominant role in the future.

Graph sources: BP Statistical Review of World Energy 2017

Rail lines, not pipelines: the past, present, and future of Canadian passenger rail

[Graph: Canadian railway network, kilometres of track, 1836-2016]

One kilometre of oil pipeline contains the same amount of steel as two kilometres of railway track.*  The proposed Trans Mountain pipeline expansion will, if it goes ahead, consume enough steel to build nearly 2,000 kms of new passenger rail track.  The Keystone XL project would consume enough steel to build nearly 4,000 kms of track.  And the now-cancelled Energy East pipeline would have required as much steel as 10,000 kms of track.  (For an overview of proposed pipelines, see this CAPP publication.)

With these facts in mind, Canadians (and Americans) should consider our options and priorities.  There’s tremendous pressure to build new pipelines.  Building them, proponents claim, will result in jobs and economic development.  But if we’re going to spend billions of dollars, lay down millions of tonnes of steel, and consume millions of person-hours of labour, should we be building soon-to-be-obsolete infrastructure to transport climate-destabilizing fossil fuels?  Or should we take the opportunity to create even more jobs building a zero-emission twenty-first century transportation network for Canada and North America?  Admittedly, the economics of passenger rail are different than those of pipelines; building a passenger rail system is not simply a matter of laying down steel rails.  But for reasons detailed below, limiting global warming probably makes significant investments in passenger rail inevitable.

The graph above shows the total length of the Canadian railway network.  The time-frame is the past 180 years: 1836 to 2016.  Between 1880 and 1918, Canada built nearly 70,000 kms of railway track—nearly 2,000 kms per year, using tools and machinery that were crude by modern standards, and at a time when the nation and its citizens were poor, compared to today.  In the middle and latter decades of the twentieth century, tens of thousands of kms of track were upgraded to accommodate heavier loads.

The length of track in the Canadian railway system peaked in the 1980s.  Recent decades have seen the network contract.  About a third of Canadian rail lines have been torn up and melted down over the past three-and-a-half decades.  Passenger rail utilization in recent years has fallen to levels not seen since the 1800s—down almost 90 percent from its 1940s peak, despite a tripling of the Canadian population.  Indeed, ridership on VIA Rail is half of what it was as recently as 1989.

Contrast this with China.  In just one decade, that nation has built 25,000 kms of high-speed passenger rail lines.  Trains routinely operate at speeds in excess of 300 km/h.  Many of those trains were designed and built by Canada’s Bombardier.  China plans to build an additional 13,000 kms of high-speed passenger lines in the next seven years.

Japan’s “bullet trains” began running more than 50 years ago.  The Japanese high-speed rail network now exceeds 2,700 kms, with trains reaching speeds of 320 km/h.

Saudi Arabia, Poland, Turkey, and Morocco all have high-speed lines, as do more than a dozen nations in Europe.  Uzbekistan—with a GDP one-twentieth the size of Canada’s—has built 600 kms of high-speed rail line and has trains operating at 250 km/h.

The construction of Canadian and North American passenger rail networks is probably inevitable.  As part of an international effort to hold global temperature increases below 2 degrees C, Canada has committed to reduce greenhouse gas (GHG) emissions by 30 percent by 2030—now less than 12 years away.  Emissions reductions must continue after 2030, reaching 50 to 60 percent in little more than a generation.  Emission reductions of this magnitude require an end to routine air travel.  Aircraft may still be needed for trans-oceanic travel, but within continents long-distance travel will have to take place using zero-emission vehicles: electric cars or buses for shorter journeys, and electrified passenger trains for longer ones.

This isn’t bad news.  Trains can transport passengers from city-centre to city-centre, eliminating long drives to and from airports.  Trains do not require time-consuming airport security screenings.  These factors, combined with high speeds, mean that for many trips, the total travel time is less for trains than for planes.  And because trains have more leg-room and often include observation cars, restaurants, and lounges, they are much more comfortable, enjoyable, and social.  For some long journeys where it is not cost-effective to build high-speed rail lines, European-style sleeper trains can provide comfortable, convenient overnight transport.  In other cases, medium-speed trains (traveling 150 to 200 km/h) may be the most cost-effective option.

Canada must embrace the inevitable: air travel must be cut by 90 percent; and fast, comfortable, zero-emission trains must take the place of the planes.  Maybe we can build thousands of kms of passenger rail lines and thousands of kms of pipelines.  But given the gravity and menace of the climate crisis and given the rapidly approaching deadlines to meet our emission-reduction commitments, it isn’t hard to see which should be our priority.


*For example, Kinder Morgan’s Trans Mountain pipeline would be made up primarily of 36-inch (914 mm) pipe with a 0.465-inch (11.8 mm) wall thickness.  This pipe weighs 262 kg per metre.  Rails for high-speed trains and other demanding applications often weigh 60 kg per metre.  As two rails are needed, this means 120 kg per metre—roughly half the weight of a comparable length of pipeline.
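Using the footnote’s per-metre steel weights, a short calculation reproduces the track-length equivalents claimed at the top of this post.  The pipeline lengths are rounded, approximate public figures, assumed for this sketch:

```python
# Converting pipeline steel into equivalent kilometres of track.
PIPE_KG_PER_M = 262    # 36-inch pipe, 11.8 mm wall (footnote above)
TRACK_KG_PER_M = 120   # two 60 kg/m rails

pipelines_km = {       # approximate lengths, rounded
    "Trans Mountain expansion": 980,
    "Keystone XL": 1900,
    "Energy East (cancelled)": 4600,
}

for name, km in pipelines_km.items():
    track_km = km * PIPE_KG_PER_M / TRACK_KG_PER_M
    print(f"{name}: steel for roughly {track_km:,.0f} km of track")
```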

Graph sources:
Urquhart and Buckley, 1965, Historical Statistics of Canada.
Leacy, Urquhart, and Buckley, 1983, Historical Statistics of Canada, 2nd ed.
Statistics Canada, various years, Railway Transport in Canada: General Statistics.
Statistics Canada, CANSIM Table 404-0010.

Will Trump’s America Crash Earth’s Climate?

[Graph: US energy consumption by fuel, 1990-2050]

Last week, the US Department of Energy (DOE) released its annual report projecting future US energy production and consumption and greenhouse gas (GHG) emissions.  This year’s report, entitled Annual Energy Outlook 2018, with Projections to 2050, forecasts a nightmare scenario of increasing fossil fuel use, increasing emissions, lacklustre adoption of renewable energy options, and a failure to shift to electric vehicles, even by mid-century.

The graph above is copied from that DOE report.  The graph shows past and projected US energy consumption by fuel type.  The top line shows “petroleum and other liquids.”  This is predominantly crude oil products, with a minor contribution from “natural gas liquids.”  For our purposes, we can think of it as representing liquid fuels used in cars, trucks, planes, trains, and ships.  Note how the US DOE is projecting that in 2050 America’s consumption of these high-emission fuels will be approximately equal to levels today.

The next line down is natural gas.  This is used mostly for heating and for electricity generation.  Note how the DOE is projecting that consumption (i.e., combustion) of natural gas will be about one-third higher in 2050 than today.

Perhaps worst of all, coal combustion will be almost as high in 2050 as it is today.   No surprise, the DOE report (page 15) projects that US GHG emissions will be higher in 2050 than today.

Consumption of renewable energy will rise.  The DOE is projecting that in 2050 “other renewables”—essentially electricity from solar photovoltaic panels and wind turbines—will provide twice as much power as today.  But that will be only a fraction of the energy supplied by fossil fuels: oil, natural gas, and coal.

How can this be?  The world’s nations have committed, in Paris and elsewhere, to slash emissions by mid-century.  To keep global temperature increases below 2 degrees Celsius, industrial nations will have to cut emissions by half by 2050.  So what’s going on in America?

The DOE projections reveal that America’s most senior energy analysts and policymakers believe that US policies currently in place will fail to curb fossil fuel use and reduce GHG emissions.  The DOE report predicts, for example, that in 2050 electric vehicles will make up just a small fraction of the US auto fleet.  See the graph below.  Look closely and you’ll see the small green wedge representing electrical energy use in the transportation sector.  The graph also shows that the consumption of fossil fuels—motor gasoline, diesel fuel, fuel oil, and jet fuel—will be nearly as high in 2050 as it is now.  This is important: the latest projections from the top experts in the US government indicate that, given current policies, the transition to electric vehicles will not happen.

The next graph, below, shows that electricity production from solar arrays will increase significantly.  But the projection is that the US will not install significant numbers of wind turbines, so long as current policies remain in force and current market conditions prevail.

The report projects (page 84) that in 2050 electricity generation from the combustion of coal and natural gas will be twice as high as generation from wind turbines and solar panels.

Clearly, this is all just a set of projections.  The citizens and governments of the United States can change this future.  And they probably will.  They can implement policies that dramatically accelerate the transition to electric cars, electric trains, energy-efficient buildings, and low-emission renewable energy.

But the point of this DOE report (and the point of this blog post) is that such policies are not yet in place.  In effect, the US DOE report should serve as a warning: continue as now and the US misses its emissions reduction commitments by miles, the Earth probably warms by 3 degrees or more, and we risk setting off a number of global climate feedbacks that could render huge swaths of the planet uninhabitable and kill hundreds of millions of people this century.

The house is on fire.  We can put it out.  But the US Department of Energy is telling us that, as of now, there are no effective plans to do so.

Perhaps step one is to remove the arsonist-in-chief.

If you’re for pipelines, what are you against?

[Graph: Canadian greenhouse gas emissions, by sector, 2005-2030]

As Alberta Premier Notley and BC Premier Horgan square off over the Kinder Morgan / Trans Mountain pipeline, as Alberta and then Saskatchewan move toward elections in which energy and pipelines may be important issues, and as Ottawa pushes forward with its climate plan, it’s worth taking a look at the pipeline debate.  Here are some facts that clarify this issue:

1.  Canada has committed to reduce its greenhouse gas (GHG) emissions by 30 percent (to 30 percent below 2005 levels by 2030).

2.  Oil production from the tar sands is projected to increase by almost 70 percent by 2030 (from 2.5 million barrels per day in 2015 to 4.2 million in 2030).

3.  Pipelines are needed in order to enable increased production, according to the Canadian Association of Petroleum Producers (CAPP) and many others.

4.  Planned expansion in the tar sands will significantly increase emissions from oil and gas production.  (see graph above and this government report)

5.  Because there’s an absolute limit on our 2030 emissions (515 million tonnes), if the oil and gas sector is to emit more, other sectors must emit less.  To put that another way: since we’re committed to a 30 percent reduction, if the tar sands sector reduces emissions by less than 30 percent—indeed, if that sector instead increases emissions—other sectors must make cuts deeper than 30 percent, as the rough calculation below illustrates.
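Here is that worked example.  All figures are illustrative assumptions in megatonnes of CO2-equivalent, broadly consistent with the numbers above but not exact values from the cited report:

```python
# How rising oil & gas emissions force deeper cuts elsewhere.
# All values are assumed, illustrative figures in Mt CO2-eq.
total_2005 = 738        # assumed Canadian total emissions, 2005
cap_2030 = 515          # the 2030 limit cited above (~30% below 2005)
oil_gas_2005 = 158      # assumed oil & gas sector emissions, 2005
oil_gas_2030 = 215      # assumed oil & gas projection for 2030

other_2005 = total_2005 - oil_gas_2005       # 580 Mt
other_cap_2030 = cap_2030 - oil_gas_2030     # 300 Mt left for all other sectors
required_cut = 1 - other_cap_2030 / other_2005

print(f"All other sectors must cut about {required_cut:.0%} below 2005 levels")
# -> roughly 48 percent, far deeper than the economy-wide 30 percent
```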

The graph below uses the same data as the graph above—data from a recent report from the government of Canada.  This graph shows how planned increases in emissions from the Alberta tar sands will force very large reductions elsewhere in the Canadian economy.

[Graph: Emissions from the Canadian oil & gas sector vs. the rest of the economy, 2015 & 2030]

Let’s look at the logic one more time: new pipelines are needed to facilitate tar sands expansion; tar sands expansion will increase emissions; and an increase in emissions from the tar sands (rather than a 30 percent decrease) will force other sectors to cut emissions by much more than 30 percent.

But what sector or region or province will pick up the slack?  Has Alberta, for instance, checked with Ontario?  If Alberta (and Saskatchewan) cut emissions by less than 30 percent, or if they increase emissions, is Ontario prepared to make cuts larger than 30 percent?  Is Manitoba or Quebec?  If the oil and gas sector cuts by less, is the manufacturing sector prepared to cut by more?

To escape this dilemma, many will want to point to the large emission reductions possible from the electricity sector.  Sure, with very aggressive policies to move to near-zero-emission electrical generation (policies we’ve yet to see) we can dramatically cut emissions from that sector.  But cutting emissions from agriculture, on the other hand, will be very difficult.  So potential deep cuts from the electricity sector will be partly offset by more modest cuts, or increases, from sectors such as agriculture.

The graph at the top shows that even as we make deep cuts to emissions from electricity—a projected 60 percent reduction—increases in emissions from the oil and gas sector (i.e., the tar sands) will negate 100 percent of the progress in the electricity sector.  The end result, according to these projections from the government of Canada, is that we miss our 2030 target.  To restate: according to the government’s most recent projections we will fail to meet our Paris commitment, and the primary reason will be rising emissions resulting from tar sands expansion.  This is the big-picture context for the pipeline debate.

We’re entering a new era: one of limits, one of hard choices, one that politicians and voters have not yet learned to navigate.  We are exiting the cornucopian era, the age of petro-industrial exuberance when we could have everything; do it all; have our cake, eat it, and plan on having two cakes in the near future.  In this new era of biophysical limits (on fossil fuel combustion and emissions, on water use, on forest cutting, and more), if we want to do one thing, we may be forced to forgo something else.  Thus, it is reasonable to ask: if pipeline proponents would have us expand the tar sands, what would they have us contract?

Graph sources: Canada’s 7th National Communication and 3rd Biennial Report, December 2017

Earth’s dominant bird: a look at 100 years of chicken production

[Graph: Chicken meat production, global, actual and projected, 1950-2050]

There are approximately 23 billion chickens on the planet right now.   But because the life of a meat chicken is short—less than 50 days—annual production far exceeds the number of chickens alive at any one time.  In 2016, worldwide, chicken production topped 66 billion birds.  Humans are slaughtering, processing, and consuming about 2,100 chickens per second.

We’re producing a lot of chicken meat: about 110 million tonnes per year.  And we’re producing more and more.  In 1966, global production was 10 million tonnes.  In just twelve years, by 1978, we’d managed to double production.  Fourteen years after that, 1992, we managed to double it again, to 40 million tonnes.  We doubled it again to 80 million tonnes by 2008.  And we’re on track for another doubling—a projected 160 million tonnes per year before 2040.  By mid-century, production should exceed 200 million tonnes—20 times the levels in the mid-’60s.  This week’s graph shows the steady increase in production.  Data sources are listed below.
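The per-second figure and the doubling intervals above imply annual growth rates in the 4 to 6 percent range, as this quick check (using only numbers from the text) shows:

```python
# Rate checks for the figures quoted above.
birds_per_year = 66e9
print(f"{birds_per_year / (365 * 24 * 3600):,.0f} chickens per second")  # ~2,100

# Annual growth implied by each doubling period cited above.
for start, end in [(1966, 1978), (1978, 1992), (1992, 2008)]:
    years = end - start
    rate = 2 ** (1 / years) - 1
    print(f"{start}-{end}: doubled in {years} years = {rate:.1%} per year")
```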

The capacity of our petro-industrial civilization to double and redouble output is astonishing.  And there appears to be no acknowledged limit.  Most would predict that as population and income levels rise in the second half of the century—as another one or two billion people join the “global middle class”—consumption of chicken and other meats will double again between 2050 and 2100.  Before this century ends, consumption of meat (chicken, pork, beef, lamb, farmed fish, and other meats) may approach a trillion kilograms per year.

Currently in Canada the average chicken farm produces about 325,000 birds annually.  Because these are averages, we can assume that the output of the largest operations is several times this figure.  In the US, chicken production is dominated by contracting.  Large transnationals such as Tyson Foods contract with individual growers to feed birds.  It is not unusual for a contract grower to have 6 to 12 barns on his or her farm and raise more than a million broiler chickens per year.

We’re probably making too many McNuggets.  We’re probably catching too many fish.  We’re probably feeding too many pigs.  And it is probably not a good idea to double the number of domesticated livestock on the planet—double it to 60 billion animals.  It’s probably time to rethink our food system.  

Graph sources:
FAOSTAT database
OECD-FAO, Agricultural Outlook 2017-2026
Brian Revell: One Man’s Meat … 2050?
Lester Brown: Full Planet, Empty Plates
FAO: World Agriculture Towards 2030/2050, the 2012 revision

The 100th Anniversary of high-input agriculture

[Graph: Tractors and horses on farms in Canada, 1910-1980]

2018 marks the 100th anniversary of the beginning of input-dependent farming—the birth of what would become modern high-input agriculture.  It was in 1918 that farmers in Canada and the US began to purchase large numbers of farm tractors.  These tractors required petroleum fuels.  Those fuels became the first major farm inputs.  In the early decades of the 20th century, farmers became increasingly dependent on fossil fuels, in the middle decades most also became dependent on fertilizers, and in the latter decades they also became dependent on agricultural chemicals and high-tech, patented seeds.

This week’s graph shows tractor and horse numbers in Canada from 1910 to 1980.  On both lines, the year 1918 is highlighted in red.  Before 1918, there were few tractors in Canada.  The tractors that did exist—mostly large steam engines—were too big and expensive for most farms.  But in 1918 three developments spurred tractor proliferation: the introduction of smaller, gasoline-engine tractors (The Fordson, for example); a wartime farm-labour shortage; and a large increase in industrial production capacity.  In the final year of WWI and in the years after, tractor sales took off.  Shortly after, the number of horses on farms plateaued and began to fall.  Economists Olmstead and Rhode have produced a similar graph for the US.

It’s important to understand the long-term significance of what has unfolded since 1918.  Humans have practiced agriculture for about 10,000 years—about 100 centuries.  For 99 centuries, there were almost no farm inputs—no industrial products that farmers had to buy each spring in order to grow their crops.  Sure, before 1918, farmers bought farm implements—hoes, rakes, and sickles in the distant past, and plows and binders more recently.  And there were some fertilizer products available, such as those derived from seabird guano (manure) in the eighteenth and nineteenth centuries.  And farmers occasionally bought and sold seeds.  But for most farmers in most years before about 1918, the production of a crop did not require purchasing an array of farm inputs.  Farm chemicals did not exist, very little fertilizer was available anywhere in the world until after WWII, and farmers had little use for gasoline or diesel fuel.  Before 1918, farms were largely self-sufficient, deriving seeds from the previous year’s crop, fertility from manure and nitrogen-fixing crops, and pulling power from horses energized by the hay and grain that grew on the farm itself.  For 99 of the 100 centuries that agriculture has existed, farms produced the animal- and crop-production inputs they needed.  Nearly everything that went into farming came out of farming.

For 99 percent of the time that agriculture has existed there were few farm inputs, no farm-input industries, and little talk of “high input costs.”  Agricultural production was low-input, low-cost, solar-powered, and low-emission.  In the most recent 100 years, however, we’ve created a new kind of agricultural system: one that is high-input, high-cost, fossil-fuelled, and high-emission.

Modern agriculture is also, admittedly, high-output.  But this last fact must be understood in context: the incredible food-output tonnage of modern agriculture is largely a reflection of the megatonnes of fertilizers, fuels, and chemicals we push into the system.  Nitrogen fertilizer illustrates this process.  To produce, transport, and apply one tonne of synthetic nitrogen fertilizer requires an amount of energy equal to almost two tonnes of gasoline.  Modern agriculture is increasingly a system for turning fossil fuel Calories into food Calories.  Food is increasingly a petroleum product.
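A rough conversion supports the fertilizer claim.  Both figures below are assumptions for this sketch: embodied-energy estimates for synthetic nitrogen vary, with values in the 60 to 80 GJ-per-tonne range commonly cited, and gasoline contains roughly 44 GJ per tonne:

```python
# Checking "one tonne of N fertilizer ~ almost two tonnes of gasoline".
n_fertilizer_gj_per_tonne = 75   # assumed embodied energy of synthetic N
gasoline_gj_per_tonne = 44       # approximate energy content of gasoline

equivalent_tonnes = n_fertilizer_gj_per_tonne / gasoline_gj_per_tonne
print(f"One tonne of N ~ {equivalent_tonnes:.1f} tonnes of gasoline")  # ~1.7
```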

The high-input era has not been kind to farmers.  Two-thirds of Canadian farmers have been ushered out of agriculture over the past two generations.  More troubling and more recent: the number of young farmers—those under 35—has been reduced by two-thirds since 1991.  Farm debt is at a record high: nearly $100 billion.  And about the same amount, $100 billion, has had to be transferred from taxpayers to farmers since the mid-1980s to keep the Canadian farm sector afloat.  Farmers are struggling with high costs and low margins.

This is not a simplistic indictment of “industrial agriculture.”  We’re not going back to horses.  But on the 100th anniversary of the creation of fossil-fuelled, high-input agriculture we need to think clearly and deeply about our food production future.  As our fossil-fuel supplies dwindle, as greenhouse gas levels rise, as we struggle to feed and employ billions more people, and as we struggle with many other environmental and economic problems, we will need to rethink and radically transform our food production systems.  Our current food system isn’t “normal”: it’s an anomaly—a break with the way that agriculture has operated for 99 percent of its history.  It’s time to ask hard questions and make big changes.  It’s time to question the input-maximizing production systems agribusiness corporations have created, and to explore new methods of low-input, low-energy-use, low-emission production.

Rather than maximizing input use, we need to maximize net farm incomes, maximize the number of farm families on the land, and maximize the resilience and sustainability of our food systems.

Global plastics production, 1917 to 2050

[Graph: Global plastic production, megatonnes, 1917-2017]

This week’s graph shows global annual plastics production over the past 100 years.  No surprise, we see exponential growth—a hallmark of our petro-industrial consumer civilization.  Long-term graphs of nearly anything (nitrogen fertilizer production, energy use, automobile production, greenhouse gas emissions, air travel, etc.) display this same exponential take-off.

Plastics present a good news / bad news story.  First, we should acknowledge that the production capacities we’ve developed are amazing!  Worldwide, our factories now produce approximately 400 million tonnes of plastic per year.  That’s more than a billion kilograms per day!  Around the world we’ve built thousands of machines that can, collectively, produce plastic soft-drink and water bottles at a rate of nearly 20,000 per second.  Our economic engines are so powerful that we’ve managed to double global plastic production tonnage in less than two decades.
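Those rates are easy to verify, and the doubling claim squares with the roughly 4.3 percent annual growth rate noted in the graph sources at the end of this post:

```python
import math

# Production-rate checks for the figures quoted above.
tonnes_per_year = 400e6
kg_per_day = tonnes_per_year * 1000 / 365
print(f"{kg_per_day / 1e9:.1f} billion kg of plastic per day")  # ~1.1

# Doubling time at ~4.3% annual growth (see graph sources below).
growth = 0.043
print(f"Doubling time: {math.log(2) / math.log(1 + growth):.0f} years")  # ~16
```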

But of course that’s also the bad news: we’ve doubled plastic production tonnage in less than two decades.  And the world’s corporations and governments would have us go on doubling and redoubling plastics production.  The graph below shows the projected four-fold increase in production tonnage by 2050.

[Graph: Projected global plastics production to 2050]

Source: UN GRID-Arendal

Plastics are a product of human ingenuity and innovation—one of civilization’s great solutions.  They’re lightweight, durable, airtight, decay resistant, inexpensive, and moldable into a huge range of products.  But projected 2050 levels of production are clearly too much of a good thing.  Our growth-addicted economic system has a knack for turning every solution into a problem—every strength into a weakness.

At current and projected production levels, plastics are a big problem.  Briefly:

1.  Plastics are forever—well, almost.  Except for the tonnage we’ve incinerated, nearly all the plastic ever produced still exists somewhere in the biosphere, although much of it is now invisible to humans, reduced to tiny particles in ocean and land ecosystems.  Plastic is great because it lasts so long and resists decay.  Plastic is a big problem for those same reasons.

2. Only 18 percent of plastic is recycled.  This is the rate for plastics overall, including plastics in cars and buildings.  For plastic packaging (water bottles, chip bags, supermarket packaging, etc.) the recycling rate is just 14 percent.  But much of that plastic inflow is excluded during the sorting and recycling process, such that only 5 percent of plastic packaging material is  actually returned to use through recycling.   And one third of plastic packaging escapes garbage collection systems entirely and is lost directly into the environment: onto roadsides or into streams, lakes, and oceans.

3. Oceans are now receptacles for at least 8 billion kilograms of plastic annually—equivalent to a garbage truck full of plastic unloading into the ocean every minute.  The growth rates projected above will mean that by 2050 the oceans will be receiving the equivalent of one truckload of plastic every 15 seconds, night and day.  And unless we severely curtail plastic production and dumping, by 2050 the mass of plastic in our oceans will exceed the mass of fish.  Once in the ocean, plastics persist for centuries, in the form of smaller and smaller particles.  This massive contamination comes on top of other human impacts: overfishing, acidification, and ocean temperature increases.

4. Plastic is a fossil fuel product.  Plastic is made from oil and natural gas feedstocks—molecules extracted from the oil and gas become the plastic.  And oil, gas, and other energy sources are used to power the plastic-making processes.  By one estimate, 4 percent of global oil production is consumed as raw materials for plastic and an additional 4 percent provides energy to run plastics factories.

5. Plastics contain additives that harm humans and other species: fire retardants, stabilizers, antibiotics, plasticizers, pigments, bisphenol A, phthalates, etc.  Many such additives mimic hormones or disrupt hormone systems.  The 150 billion kilograms of plastics currently in the oceans includes 23 billion kgs of additives, all of which will eventually be released into those ocean ecosystems.

It’s important to think about plastics, not just because doing so shows us that we’re doing something wrong, but because the tragic story of plastics shows us why and how our production and energy systems go wrong.  The story of plastics reveals the role of exponential growth in turning solutions into problems.  Thinking about the product-flow of plastics (oil well … factory … store … home … landfill/ocean) shows us why it is so critical to adopt closed-loop recycling and highly effective product-stewardship systems.  And the entire plastics debacle illustrates the hidden costs of consumerism, the collateral damage of disposable products, and the failure of “the markets” to protect the planet.

In a recent paper that takes a big-picture, long-term look at plastics, scientists advise that “without a well-designed … management strategy for end-of-life plastics, humans are conducting a singular uncontrolled experiment on a global scale, in which billions of metric tons of material will accumulate across all major terrestrial and aquatic ecosystems on the planet.”

Graph sources:
• 1950 to 2015 data from Geyer, Jambeck, and Law, “Production, Use, and Fate of All Plastics Ever Made,” Science Advances 3, no. 7 (July 2017).
• 2016 and 2017 data points are extrapolated at a 4.3 percent growth rate derived from the average growth rate during the previous 20 years.
• Pre-1950 production tonnage is assumed to be negligible, based on various sources and the very low production rates in 1950.

Saskatchewan’s new Climate Change Strategy: reckless endangerment

[Graph: Saskatchewan greenhouse gas emissions relative to selected nations]

Saskatchewan’s greenhouse gas emissions are extremely high: 66 tonnes per person per year.  What if Saskatchewan were a country instead of a province?  If it were, we’d find that no country on Earth had per-capita emissions higher than ours.

This week’s graph compares per-capita greenhouse gas (GHG) emissions in Saskatchewan to emissions in a variety of countries.  The units are tonnes of carbon dioxide equivalent (CO2-eq).  The data is for the years 2014 and 2015, the most recent years for which data is available.  The graph shows that Saskatchewan’s emissions are higher than those of petro-states such as Saudi Arabia and Qatar and manufacturing nations such as China and Germany.

Our world-topping per-person emissions form part of the context for this week’s release of the Government of Saskatchewan’s climate strategy: Prairie Resilience: A Made-in-Saskatchewan Climate Change Strategy.  The report isn’t really a plan of action—more an attempt at public relations and a collection of re-announcements.   Most critically, it lacks a specific set of measures that can, taken together, enable citizens and businesses in this province to reduce our GHG emissions by 30 percent by 2030.  I’ll review some of the key points of the document, but first just a bit more context.

In Paris in 2015, the world’s governments reaffirmed a target of limiting global temperature increases to 2 degrees Celsius (relative to pre-industrial levels).  However, more and more scientists are warning that 2 degrees is not a “safe level,” and that temperature increases of this magnitude will create floods, droughts, storms, and deaths in many parts of the world.  But a 2 degree rise is better than 4 or 5 degrees.

So that’s the first point: our 2 degree target is weak.  To this we’ve added inadequate emission-reduction commitments.  In the lead-up to the Paris climate talks the world’s governments each submitted specific emission-reduction commitments.  Canada committed to cut this country’s emissions by 30 percent (below 2005 levels) by 2030.  Other nations made similar pledges.  But here’s the troubling part: When you add up all those emissions-reduction commitments you find that they put the world on track, not for 2 degrees of warming, but for 3.2 degrees (UN Emissions Gap Report 2017).  So this is the context for recent climate change strategies from Saskatchewan and other provinces: These plans amount to inadequate provincial contributions to an inadequate national commitment to a weak international target.

One final bit of context: not only are per-capita emissions in Saskatchewan among the highest in the world, they continue to increase: up 65 percent in a generation (1990 to 2015).  Some will want to excuse our province: it’s cold here.  But our per-capita emissions are almost twice as high as those in the Northwest Territories, nine times as high as in the Yukon, and four times as high as those in neighbouring Manitoba.  Others will want to talk about the fact that Saskatchewan is a resource-producing and agricultural province; our prosperity depends upon our ability to keep farming and mining and producing oil and gas.  There’s a grain of truth to some parts of that idea, but it simply cannot be the case that “prosperity” requires the emission of 66 tonnes of GHGs per person.  Citizens in every nation want prosperity.  But if everyone in the world felt entitled to emit GHGs at the same rate as us, there would soon be no Saskatchewan as we know it.  There would be a parched desert here, and submerged cities worldwide.  In a climate- and carbon-constrained world, prosperity simply cannot require Saskatchewan-sized emissions.

So, with this for context, what does the Saskatchewan Climate Change Strategy propose?  The government has re-committed to increasing the production of low-emission electricity—to the “expansion of renewable energy sources up to 50 per cent of generating capacity” by 2030.  This is good news and we must ensure that this happens, well before 2030, if possible.  But careful readers might note three things in the preceding commitment:  1. the words “up to.”  2. generating capacity is not the same as output; because of the intermittent nature of wind power, for example, 50 percent of capacity will not equate to 50 percent of production.  3. electricity provides less than 30 percent of Saskatchewan’s total energy demand.  Thus, moving to 50 percent renewable/low-emission sources for electricity leaves 80+ percent of Saskatchewan’s energy needs filled by high-emission fossil fuels.
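The difference between capacity and output (point 2 above) can be made concrete with a small sketch.  The capacity factors are illustrative assumptions, not SaskPower figures:

```python
# Why 50% of generating CAPACITY is not 50% of generation.
renewable_capacity_share = 0.50
renewable_capacity_factor = 0.35   # e.g., prairie wind (assumed)
fossil_capacity_factor = 0.60      # coal/gas fleet (assumed)

renewable_gen = renewable_capacity_share * renewable_capacity_factor
fossil_gen = (1 - renewable_capacity_share) * fossil_capacity_factor
share = renewable_gen / (renewable_gen + fossil_gen)
print(f"Renewables' share of actual generation: {share:.0%}")  # ~37%
```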

The Climate Change Strategy includes the creation of a technology fund.  But this is not new.  The government passed legislation in 2010 requiring large emitters to pay into a green technology fund.  That law was never put into force.

Predictably, the Strategy rejects a carbon tax, arguing that such a tax “would make it more difficult for our province to respond effectively to climate change because a simple tax will not result in the innovations required to actually reduce emissions.”

The Strategy also includes a vague mix of commitments to reporting, potential future measures to reduce methane emissions, emission-intensity targets, and offset trading.  Think of this as a cap-and-trade system without a cap.

The Strategy includes some positive steps but fails to deliver what we need: a comprehensive, detailed plan that will result in a 30 percent reduction in emissions by 2030.  This failing is especially evident when one takes into account probable emissions increases that may result from economic growth, planned increases in energy production, and increased use of agricultural inputs such as nitrogen fertilizer.  (Applied tonnage of N fertilizer has doubled since 2002.)

Overall, the Strategy steers away from discussions of emissions reduction and focuses instead on the idea of “resilience.”  That word appears 44 times in 12 pages.  The report defines resilience as “the ability to cope with, adapt to, and recover from stress and change.”  But resilience—coping, adapting, and recovering—may simply prove impossible in the face of the magnitude of climate change that will scorch our province under a business-as-usual scenario.  The high-emission, fossil-fuel-dependent future assumed in the Climate Change Strategy would raise the average temperature of this province by 6 to 8 degrees Celsius (sources available on request).  Climate disruption of that magnitude vetoes adaptation and mocks resilience.

And even if we in Saskatchewan could find ways to adapt and make ourselves resilient in the face of the blows that may be inflicted by a hotter, stormier, more damaging climate, we must ask: Will poor and vulnerable populations around the world be able to make themselves “resilient” to the climate change that our emissions trigger?   The global proliferation of Saskatchewan-level emissions would cause cities to disappear under the waves, food-growing regions to bake and wither, and tropical storms to become more numerous and damaging.  What is our ethical position if we are among the greatest contributors to these calamities, yet all we offer affected populations is the advice to make themselves more resilient?

A real plan is possible.  Emission reductions of 30 percent by 2030 are attainable at costs that Saskatchewan can afford.  Holding global temperature increases to 2 degrees also remains possible.  All this can be accomplished if governments act with courage and integrity, rapidly and effectively, and in the interests of citizens and the future.

Graph sources:
Saskatchewan and other provinces: Environment and Climate Change Canada, Canadian Environmental Sustainability Indicators: Greenhouse Gas Emissions.
Other nations: World Resources Institute, CAIT Climate Data Explorer.

Geoengineering: 12 things you need to know

[Graphic: various geoengineering methods]

The following draws upon extensive research by ETC Group.  I have been privileged to serve on ETC’s Board of Directors for several years. 

1.  What is “geoengineering”?  It is the intentional, large-scale, technological manipulation of Earth’s systems.  Geoengineering is usually discussed as a solution to climate change, but it could also be used to attempt to de-acidify oceans or fix ozone holes.  Here, I’ll concentrate on climate geoengineering.

2.  There are two main types of climate geoengineering:
i. Technologies to partially shade the sun in order to reduce warming (called “solar radiation management” or SRM).  For example, high-altitude aircraft could be used to dump thousands of tonnes of sulphur compounds into the stratosphere to form a reflective parasol over the Earth.
ii. Attempts to pull carbon dioxide (CO2) out of the air.  One proposal is ocean fertilization.  In theory, we could dump nutrients into the ocean to spur plankton/algae growth.  As the plankton multiply, they would take up atmospheric CO2 that has dissolved in the water.  When they die, they would sift down through the water column, taking the carbon to the ocean floor.

3.  The effects of geoengineering will be uneven and damaging.  For example, sun-blocking SRM technologies might lower the global average temperature, but regional temperature changes would probably be uneven.  Other geoengineering techniques—cloud whitening and weather modification—could similarly alter temperatures in some parts of the planet relative to others.  And if we change relative regional temperatures we would also shift wind and rainfall patterns.  Geoengineering will almost certainly cause droughts, storms, and floods.  Going further, however, all droughts, storms, and floods (even those that might have occurred in the absence of geoengineering) could come to be seen as caused by geoengineering and the governments controlling those climate interventions.  If we go down this path, there will no longer be any “acts of God”; weather will become a product of government.

4.  These technologies are dangerous in other ways.  Seeding the stratosphere with sulphur particles could catalyze ozone depletion.  Shifts in rain and temperature patterns may cause shifts in ecosystems and wildlife habitats.  Multiplying plankton biomass may affect fish species distribution and biodiversity.  Moreover, as with any enormously powerful technology, it is simply impossible to foresee the full range of unintended consequences.

5.  Geoengineering is unilateral, undemocratic, inequitable, and unjust.  In a geoengineered world, who will control the global thermostat?  Solar radiation management and similar schemes will inevitably be controlled by the dominant governments and corporations—a rich-nation “coalition of the dimming.”  But benefits and costs will be distributed unequally, creating winners and losers.  Where will less powerful nations appeal if they find themselves on the losing end?  Our climate interventions will be calibrated to maximize benefits to rich nations: the same countries that have benefited most from fossil fuel combustion and that have caused the climate crisis.  We appear to be contemplating a triple injustice: poor nations will be denied their fair share of the benefits of fossil fuel use; hit hardest by climate change; and left as collateral damage from geoengineering.  Finally, geoengineering is undemocratic in another way.  It is a choice to pursue technical interventions rather than social or political reforms.  It reveals that many governments and elites would risk damaging the stratosphere, hydrosphere, and biosphere rather than risk difficult conversations with voters, CEOs, or shareholders.

6.  Geoengineering embodies and proliferates a certain worldview: masculine, nature-dominating, imperialistic, managerial and technocratic, hostile to limits, and hubristic.

7.  Geoengineering will create conflicts.  Because technologies such as SRM are transboundary and have the potential to shift weather patterns they can lead to charges that other nations are stealing rain and, ultimately, food.  To get a sense of the potential for conflict, imagine the US reaction to unilateral deployment of weather- and climate-altering technologies by Russia or China.

8.  It is untestable.  Small-scale experiments with SRM or similar technologies will not reveal potential side-effects.  These will only become evident after planet-scale deployment, and perhaps years after the fact, as weather systems move toward new equilibria.

9.  Deployment may be irreversible.  Once we start we might not be able to stop.  Geoengineering would probably proceed alongside continued greenhouse gas (GHG) emissions.  But if we deploy sun-blocking technologies and simultaneously push atmospheric CO2 levels past 500 or 600 parts per million, we wouldn’t be able to terminate our dimming programs, no matter how damaging the effects of long-term geoengineering are revealed to be.  If we did stop, high GHG levels would trigger sudden and dramatic warming.  We risk locking ourselves into untestable, unpredictable, uncontrollable, and planet-altering technologies.

10. Can geoengineering “buy us time”?  Proponents argue that these technologies can buy us some time: time humanity needs in order to ramp up emissions reductions.  But geoengineering is more likely to buy time for the status quo, to prolong unsustainable fossil fuel production and energy inefficiency, and to blunt and delay urgent and effective action.  The effect of geoengineering is not so much to buy time as to waste time.

11. There will be attempts to pressure us into accepting geoengineering.  Geoengineering proponents may soon raise the alarm and claim that we must accept these risky technologies or face even worse damage from climate change.  “Desperate times call for desperate measures,”  they will say.  From these same sources may come arguments that geoengineering is necessary to hold global average temperature increases below 1.5 or 2 degrees and thus spare the world’s poorest and most vulnerable peoples.  Such arguments would be both ironic and duplicitous.  The same government and corporate leaders who today deny or downplay climate change, or deny the need for rapid action to cut emissions, may tomorrow be the ones raising the alarm, and claiming that there is no solution other than geoengineering.  They may pivot from claiming that there is no problem to claiming that there is no alternative.

12. Geoengineering will be pushed by the rich and powerful.  A growing number of corporations, elites, and politicians see the solution to climate change, not in emissions reduction, but in massive techno-interventions into the atmosphere or oceans to block the sun or suck up carbon.  When he was CEO of Exxon, US Secretary of State Rex Tillerson said of climate change: “It’s an engineering problem, and it has engineering solutions.”  Exxon employs many geoengineering proponents and theorists.  Steven Koonin, a former executive at oil company BP and former Under-Secretary for Science in the Obama administration, is lead author of a report entitled Climate Engineering Responses to Climate Emergencies.  Virgin Group founder Richard Branson offered a $25 million prize to anyone who could solve climate change by geoengineering.  Bill Gates and other Microsoft billionaires are funding geoengineering research.  Newt Gingrich is the former speaker of the US House of Representatives and a Vice Chairman of Donald Trump’s transition team.  His views on geoengineering are worth quoting because they may be representative of a growing sentiment among political and corporate leaders.  Gingrich wrote in a 2008 fundraising letter:

“[T]he idea behind geoengineering is to release fine particles in or above the stratosphere that would then block a small fraction of the sunlight and thus reduce atmospheric temperature.

… Instead of imposing an estimated $1 trillion cost on the economy …, geoengineering holds forth the promise of addressing global warming concerns for just a few billion dollars a year.  Instead of penalizing ordinary Americans, we would have an option to address global warming by rewarding scientific innovation.

My colleagues at the American Enterprise Institute are taking a closer look at geoengineering, and we should too.  …

Our message should be: Bring on the American Ingenuity.  Stop the green pig.”

For reasons outlined above and many others, we must not go down the path of geoengineering.  These technologies—massive government and corporate interventions into the core flows and structures of the atmosphere, hydrosphere, and biosphere—are among the most dangerous initiatives ever devised.  Geoengineering must be banned; it is untestable, uncontrollable, unjust, probably irreversible, and potentially devastating.  There exist better, safer options: rapid and dramatic emissions reductions; and a government-led mobilization toward a transformation of global energy, transport, industrial, and food systems.