Moore’s Law and me

Transistor count and Moore's Law, 1970-2016

In 1985 I bought an Apple Macintosh computer.  It cost $3,500 ($7,000 in today’s dollars).  Soon after, Apple and other companies started selling external hard-disk drives for the Mac.  They, too, were expensive.  But in 1986 or ’87 the price of a 20-megabyte (MB) hard disk came down to an “affordable” $2,000 ($4,000 in today’s dollars), and I, like many Mac owners, was tempted.  That works out to $200 per MB (in today’s dollars).

Fast forward to 2018.  On my way home last week I stopped by an office-supply store and paid $139 for a 4 terabyte (TB) hard drive.  That’s $34 per TB.

What would that 4 TB hard drive have cost me if prices had remained the same as in the 1980s?  Well, one terabyte is equal to a million megabytes.  So, that 4 TB drive contains 4 million MBs.  At $200 per MB (the 1980s price) the hard drive I picked up from Staples would have cost me $800 million—not much under a billion once I paid sales taxes.  But it didn’t cost that: it was just $139.  Hard disk storage capacity has become millions of times cheaper in just over a generation.  Or, to put it another way, for the same money I can buy millions of times more storage.
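For readers who like to check the arithmetic, here is the calculation above as a few lines of Python (the prices and capacities are the ones quoted in this post):

```python
# Storage-cost comparison: mid-1980s vs. 2018, using the figures cited above.
MB_PER_TB = 1_000_000          # one terabyte = a million megabytes

price_1980s_per_mb = 200       # dollars per MB, in today's dollars
drive_tb = 4                   # capacity of the 2018 drive
drive_price_2018 = 139         # dollars, actually paid

cost_at_1980s_prices = drive_tb * MB_PER_TB * price_1980s_per_mb
print(f"4 TB at 1980s prices: ${cost_at_1980s_prices:,}")        # $800,000,000
print(f"Cheaper by a factor of {cost_at_1980s_prices / drive_price_2018:,.0f}")
```

The final line confirms the “millions of times cheaper” claim: roughly a 5.8-million-fold drop in the price of storage.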

I can reprise these same cost reductions, focusing on computer memory rather than hard disk capacity.  My 1979 Apple II had 16 kilobytes of memory.  My recently purchased Lenovo laptop has 16 gigabytes—a million times more.  Yet my new laptop cost a fraction of the inflation-adjusted prices of that Apple II.  Computer memory is millions of times cheaper.  The same is true of processing power—the amount of raw computation you can buy for a dollar.

The preceding trends have been understood for half a century—the basis for Moore’s Law.  Gordon Moore was a founder of Intel Corporation, one of the world’s leading computer processor and “chip” makers.  In 1965, Moore published a paper in which he observed that the number of transistors in computer chips was doubling every two years, and he predicted that this doubling would go on for some years to come.  (See this post for data on the astronomical rate of annual transistor production.)  Related to Moore’s Law is the price-performance ratio of computers.  Loosely stated, a given amount of money will buy twice as much computing power two or three years from now.

The graph above illustrates Moore’s Law and shows the transistor count for many important computer central processing units (CPUs) over the past five decades. (Here’s a link to a high-resolution version of the graph.)  Note that the graph’s vertical axis is logarithmic; what appears as a doubling is actually a far larger increase.  In the lower-left, the graph includes the CPU from my 1979 Apple II computer, the MOS Technology 6502.  That CPU chip contained about 3,500 transistors.  In the upper right, the graph includes the Intel i7 processor in my new laptop. That CPU contains about 2,000,000,000 transistors—roughly 500,000 times more than my Apple II.

Assuming a doubling every 2 years, in the 39 years between 1979 (my Apple II) and 2018 (My Lenovo) we should have seen 19.5 doublings in the number of transistors—about a 700,000-fold increase.  This number is close to the 500,000-fold increase calculated above by comparing the number of transistors in a 6502 chip to the number in an Intel i7 chip.  Moreover, computing power has increased even faster than the huge increases in transistor count would indicate.  Computer chips cycle faster today, and they also sport sophisticated math co-processors and graphics chips.
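The doubling arithmetic in the preceding paragraph can be reproduced directly, using the transistor counts cited above:

```python
# Moore's Law doubling check: Apple II (1979) to Lenovo laptop (2018).
years = 2018 - 1979            # 39 years
doublings = years / 2          # 19.5 doublings at one per two years
predicted_increase = 2 ** doublings
print(f"{doublings} doublings -> ~{predicted_increase:,.0f}-fold predicted increase")

# Compare with the actual chip transistor counts cited in this post
apple_ii_6502 = 3_500
intel_i7 = 2_000_000_000
print(f"Observed increase: ~{intel_i7 / apple_ii_6502:,.0f}-fold")
```

The predicted figure (about 740,000-fold) and the observed figure (about 570,000-fold) agree to well within an order of magnitude—remarkable for a 39-year extrapolation.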

In terms of civilization and the future, the key questions include: can these computing-power increases continue?  Can the computers of the 2050s be hundreds-of-thousands of times more powerful than those of today?  Can we continue making transistors smaller and packing twice as many onto a chip every two years?  Can Moore’s Law continue unabated?  Probably not.  Transistors can only be made so small.  The rate of increase in computing power will slow.  We won’t see a million-fold increase in the coming 40 years like we saw in the past 40.  But does that matter?  What if the rate of increase in computing power fell by half—to a doubling every four years instead of every two?  That would mean that in 2050 our computers would still be 256 times more powerful than they are now.  And in 2054 they would be 512 times more powerful.  And in 2058, 1024 times more powerful.  What would it mean to our civilization if each of us had access to a thousand times more computing power?
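The “doubling every four years” scenario sketched above works out as follows (a back-of-envelope sketch; the four-year doubling period is the hypothetical from the text, not a forecast):

```python
# Computing-power multiples under a slowed, four-year doubling period.
def power_multiple(start_year, end_year, doubling_period_years):
    """How many times more powerful computers would be after the interval."""
    return 2 ** ((end_year - start_year) / doubling_period_years)

for year in (2050, 2054, 2058):
    print(year, int(power_multiple(2018, year, 4)))   # 256, 512, 1024
```

Even at half the historical rate, the exponential still delivers a thousand-fold increase within forty years.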

One could easily add a last, pessimistic paragraph—noting the intersection between exponential increases in computing power, on the one hand, and climate change and resource limits, on the other.  But for now, let’s leave unresolved the questions raised in the preceding paragraph.  What is most important to understand is that technologies such as solar panels and massively powerful computers give us the option to move in a different direction.  But we have to choose to make changes.  And we have to act.  Our technologies are immensely powerful, but our efforts to use those technologies to avert calamity are feeble.  Our means are magnificent, but our chosen ends are ruinous.  Too often we become distracted by the novelty and power of our tools and fail to hone our skills to use those tools to build livable futures.


Through the mill: 150 years of wheat price data

Wheat price, western Canada (Sask. or Man.), farmgate, dollars per bushel, 1867–2017

The price of wheat is declining, and it has been for many years.  The same is true for the prices of other grains and oilseeds.  The graph above shows wheat prices in Canada since Confederation—over the past 150 years.  The units are dollars per bushel.  A bushel is 60 pounds (27 kilograms).  The brown line suggests a trendline.

These prices are adjusted for inflation.  The downward trend reflects the fact that wheat prices fell relative to prices for nearly all other goods and services; as time went on it took more and more bushels of wheat or other grains to buy a pair of shoes, lunch, or a movie ticket.  For example, my father bought a new, top-of-the-line pickup truck in 1976 for $6,000, equivalent to about 1,200 bushels of wheat at the time.  Today, a comparable pickup (base model) might cost the equivalent of about 4,000 bushels of wheat.  As a second example, a house in 1980 might have cost the equivalent of 20,000 bushels of wheat; today, that very same house would cost the equivalent of 60,000 bushels.

The graph below adds shaded boxes to highlight three distinct periods in Canadian wheat prices.  The period from Confederation to the end of the First World War saw prices roughly in the range of $20 to $30 per bushel (adjusted to today’s dollars).  From 1920 to the mid-’80s, prices entered a new phase, and oscillated between about $8 and $18 per bushel.  And in 1985, wheat prices entered a third phase, oscillating between $5 and $10 per bushel, more often closer to $5 than $10.  In each phase, the top of the range in a given period is roughly equal to the bottom of the range in the previous period.

Wheat price, western Canada, farmgate, dollars per bushel

1985 is often cited as the beginning of the farm crisis period.  The graph above shows why the crisis began in that year.  Grain prices since the mid-’80s have been especially damaging to Canadian agriculture.  The post-1985 collapse in grain prices has had several effects:

– The expulsion of one-third of Canadian farm families in just one generation;
– The expulsion of two-thirds of young farmers (under 35 years of age) over the same period;
– A tripling of farm debt, to a record $102 billion;
– A chronic need to transfer taxpayer dollars to farmers through farm-support programs (with transfers totaling $110 billion since 1985); and
– A push toward farm giantism, with the majority of land in western Canada now operated by farms larger than 3,000 acres, and with many farms covering tens-of-thousands of acres.

As per-bushel and per-acre margins fall, the solution is to cover more acres.  The inescapable result is fewer farms and farmers.

It is impossible to delve into all the causes of the grain price decline in one blog post.  Briefly, farmers are getting less and less because others are taking more and more.  A previous blog post highlighted the widening gap between what Canadians pay for bread in the grocery store and what farmers receive for wheat at the elevator.  This widening gap is created because grain companies, railways, milling companies, other processors, and retailers are taking more and more, choking off the flow of dollars to farmers.  This is manifest in declining prices.  Agribusiness giants are profiting by charging consumers more per loaf and paying farmers less per bushel.

Of course, grain prices are a function of domestic and international markets.  The current free trade and globalization era began in the mid-1980s.  (The Canada-US Free Trade Agreement was concluded in 1987, the North American Free Trade Agreement in 1994, and the World Trade Organization Agreement on Agriculture in 1995.)  The effect of free trade and globalization has been to plunge all the world’s farmers into a single, borderless, hyper-competitive market.  At the same time, agribusiness corporations entered a period of accelerating mergers in order to reduce the competition they faced.  As competition levels increase for farmers and decrease for agribusiness corporations it is easy to predict shifts in relative profitability.  Increased competition for farmers meant lower prices while decreased competition for agribusiness transnationals translated into higher prices and profits.

Graph sources:
– 1867–1974: Historical Statistics of Canada, eds. Leacy, Urquhart, and Buckley, 2nd ed. (Ottawa: Statistics Canada, 1983);
– 1890–1909: Wholesale Prices in Canada, 1890–1909, ed. R. H. Coats (Ottawa: Government Printing Bureau, 1910);
– 1908–1984: Statistics Canada, Table: 32-10-0359-01 Estimated areas, yield, production, average farm price and total farm value of principal field crops (formerly CANSIM 001-0017);
– 1969–2009: Saskatchewan Agriculture and Food: Statfact, Canadian Wheat Board Final Price for Wheat, basis in store Saskatoon;
– 2012–2018: Statistics Canada, Table: 32-10-0077-01 Farm product prices, crops and livestock (formerly CANSIM 002-0043).

Methane and climate: 10 things you should know

Global atmospheric methane concentrations, past 10,000+ years (8000 BCE to 2018 CE)

The graph above shows methane concentrations in Earth’s atmosphere over the past 10,000+ years: 8000 BCE to 2018 CE.  The units are parts per billion (ppb).  The year 1800 is marked with a circle.

Note the ominous spike.  As a result of increasing human-caused emissions, atmospheric methane levels today are two-and-a-half times higher than in 1800.  After thousands of years of relatively stable concentrations, we have driven the trendline to near-vertical.

Here are 10 things you should know about methane and the climate:

1. Methane (CH4) is one of the three main greenhouse gases, along with carbon dioxide (CO2) and nitrous oxide (N2O).

2. Methane is responsible for roughly 20% of warming, while carbon dioxide is responsible for roughly 70%, and nitrous oxide the remaining 10%.

3. Methane is a powerful greenhouse gas (GHG).  Pound for pound, it is 28 times more effective at trapping heat than is carbon dioxide (when compared over a 100-year time horizon, and 84 times as effective at trapping heat when compared over 20 years).  Though humans emit more carbon dioxide than methane, each tonne of the latter traps more heat.

4. Fossil-fuel production is the largest single source.  Natural gas is largely made up of methane (about 90%).  When energy companies drill wells, “frac” wells, and pump natural gas through vast distribution networks some of that methane escapes.  (In the US alone, there are 500,000 natural gas wells, more than 3 million kilometers of pipes, and millions of valves, fittings, and compressors; see reports here and here.)  Oil and coal production also release methane—often vented into the atmosphere from coal mines and oil wells.  Fossil-fuel production is responsible for about 19% of total (human-caused and natural) methane emissions.  (An excellent article by Saunois et al. is the source for this percentage and many other facts in this blog post.)  In Canada, policies to reduce energy-sector methane emissions by 40 percent will be phased in over the next seven years, but implementation of those policies has been repeatedly delayed.

5. Too much leakage makes electricity produced using natural gas as climate-damaging as electricity from coal.  One report found that for natural gas to have lower overall emissions than coal the leakage rate would have to be below 3.2%.  A recent study estimates leakage in the US at 2.3%.  Rates in Russia, which supplies much of the gas for the EU, are even higher.  Until we reduce leakage rates, the advantage of shutting down coal-fired power plants and replacing them with natural gas generation will remain much more modest than often claimed.

6. Domestic livestock are the next largest source of methane.  Cattle, sheep, and other livestock that graze on grass belch out methane produced in their stomachs.  This methane is generated by the symbiotic microbes that live in the guts of these “ruminants” and enable them to digest grass and hay.  In addition, manure stored in liquid form also emits methane.  Livestock and manure are responsible for roughly 18% of total methane emissions.

7. Rice paddy agriculture, decomposing organic matter in landfills, and biomass burning also contribute to methane emissions.  Overall, human-caused emissions make up about 60% of the total.  And natural sources (wetlands, swamps, wild ruminants, etc.) contribute the remaining 40%.

8. There is lots of uncertainty about emissions.  Fossil fuel production and livestock may be responsible for larger quantities than is generally acknowledged.  The rise in atmospheric concentrations is precisely documented, but the relative balance between sources and sinks and the relative contribution of each source is not precisely known.

9. There is a lot of potential methane out there, and we risk releasing it.  Most of the increase in emissions in recent centuries has come from human systems (fossil fuel, livestock, and rice production; and landfills).  Emissions from natural systems (swamps and wetlands, etc.) have not increased by nearly as much.  But that can change.  If human actions continue to cause the planet to warm, natural methane emissions will rise as permafrost thaws.  (Permafrost contains huge quantities of organic material, and when that material thaws and decomposes in wet conditions micro-organisms can turn it into methane.)  Any such release of methane will cause more warming which can thaw more permafrost and release more methane which will cause more warming—a positive feedback.

Moreover, oceans, or more specifically their continental shelves, contain vast quantities of methane in the form of “methane hydrates” or “clathrates”—ice structures that hold methane stable so long as the temperature remains cold enough.  But heat up the coastal oceans and some of that methane could begin to bubble up to the surface.  And there are huge amounts of methane contained in those hydrates—the equivalent of more than 1,000 years of human-caused emissions.  We risk setting off the “methane bomb”—a runaway warming scenario that could raise global temperatures many degrees and catastrophically damage the biosphere and human civilization.

Admittedly, the methane bomb scenario is unlikely to come to pass.  While some scientists are extremely concerned, a larger number downplay or dismiss it.  Nonetheless a runaway positive feedback involving methane represents a low-probability but massive-impact risk; our day-to-day actions are creating a small risk of destroying all of civilization and most life on Earth.

10. We can easily reduce atmospheric methane concentrations and attendant warming; this is the good news.  Methane is not like CO2, which stays in the atmosphere for centuries.  No, methane is a “short-lived” gas.  On average, it stays in the atmosphere for less than ten years.  Many natural processes work to strip it out of the air.  Currently, human and natural sources emit about 558 million tonnes of methane per year, and natural processes in the atmosphere and soils remove all but 10 million tonnes.  (Again, see Saunois et al.)  Despite our huge increase in methane production, sources and sinks are not that far out of balance.  Therefore, if we stop increasing our emissions then atmospheric concentrations could begin to fall.  We might see significant declines in just decades.  This isn’t the case for CO2, which will stay in the atmosphere for centuries.  But with methane, we have a real chance of reducing atmospheric levels and, as we do so, moderating warming and slowing climate change.
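The source–sink balance described above can be laid out in a few lines (the 558- and 10-million-tonne figures are the ones cited from Saunois et al.):

```python
# Rough annual methane budget, using the figures quoted in point 10.
emissions = 558                  # million tonnes CH4/year, all sources
removals = emissions - 10        # sinks remove all but ~10 Mt/yr
net_accumulation = emissions - removals

print(f"Sinks remove {removals} of {emissions} Mt/yr; net gain ~{net_accumulation} Mt/yr")
print(f"Sources and sinks are within {net_accumulation / emissions:.1%} of balance")
```

A budget within about 2 percent of balance is what makes methane such low-hanging fruit: a modest cut in emissions tips the atmosphere into net removal.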

A series of policies focused on minimizing emissions from the fossil-fuel sector (banning venting and minimizing leaks from drilling and fracking and from pipes) could bring the rate of methane creation below the rate of removal and cause atmospheric levels to fall.  A more rational approach to meat production (including curbing over-consumption in North America and elsewhere) could further reduce emissions.  This is very promising news.  Methane reduction represents a “low-hanging fruit” when it comes to moderating climate change.

The methane problem is the climate problem in microcosm.  There are some relatively simple, affordable steps we can take now that will make a positive difference.  But, if we don’t act fast, aggressively, and effectively, we risk unleashing a whole range of effects that will swiftly move our climate into chaos and deprive humans of the possibility of limiting warming to manageable levels.  We can act to create some good news today, or we can suffer a world of bad news tomorrow.

Graph sources:
– United States Environmental Protection Agency (US EPA), “Climate Change Indicators: Atmospheric Concentrations of Greenhouse Gases.”
– Commonwealth Scientific and Industrial Research Organisation (CSIRO), “Latest Cape Grim Greenhouse Gas Data.”
– National Oceanic and Atmospheric Administration (NOAA), Earth System Research Laboratory, Global Monitoring Division, “Trends in Atmospheric Methane.”

Energy slaves, “hard work,” and the real sources of wealth

An excerpt from the online long-form comic "Energy Slaves" by Stuart McMillen

Check out this brilliant ‘long-form comic’ by Stuart McMillen: Energy Slaves.  Click here or on the URL above.

Many Canadians and Americans struggle financially.  Millions are unemployed.  Many others live paycheque-to-paycheque.  A 2017 report by the US Federal Reserve Board found that 40 percent of US adults couldn’t cover an unexpected expense of $400 without selling something or borrowing money.  There’s a lot of denial and misunderstanding regarding the financial challenges faced by a large portion of our fellow citizens.

Equally, though, there is misunderstanding, denial, and myth-making regarding those among us who are more financially secure, those who are well off—“the rich.”  Most glaring is the way we mischaracterize the sources of our wealth, luxury, and ease.  We lie to ourselves and each other regarding why we have it so good.  The rich often claim that their wealth is a result of “hard work.”  We hear people objecting to even the smallest tax increase, saying: “I worked hard for my money and no one is going to take it from me.”

The reality, however, is quite the opposite.  The rich don’t work very hard.  Every poor woman or girl in Asia or Africa who gets up at dawn to walk many kilometres to carry home water or firewood for her family works harder than the world’s multi-millionaires and billionaires.  Every farmer with a hoe or toiling behind an ox works harder than any CEO.  My farmer grandparents worked far harder than I do, yet I live much better.  I would be self-delusional in the extreme to attribute my middle-class luxury to “hard work.”

No, those of us in North America, the European Union, and elsewhere in the world who enjoy privileged lives live well, not because we work hard, but because of the vast energy windfall of which we are the beneficiaries.  We live lives of comfort and ease because our work is done for us by “energy slaves.”

A human worker can toil at a constant rate of about one-tenth horsepower.  Working hard all year at that rate I can do about 200 horsepower-hours worth of work—hoeing or hauling or digging.  But if I add up the work accomplished by non-human energy—by fossil fuels and machines and by electricity from various sources and electric motors—I find that, on a per-capita average, that quantity is 100 times my annual work output.  For every unit of work I do, the motors and machines that surround me do 100 units.  Those of us who live comfortable, high-consumption lives are subsidized 100-to-1 by work we do not do.  And the richest among us enjoy the largest of those subsidies.
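The energy-slave arithmetic above can be sketched as follows (the 2,000 hours is my assumption for a year of sustained full-time labour; the one-tenth horsepower and 100-to-1 ratio are the figures from the text):

```python
# The 100-to-1 energy subsidy, using the figures in this post.
human_rate_hp = 0.1              # sustained human work rate, in horsepower
hours_per_year = 2000            # assumed: roughly a year of full-time work
human_output = human_rate_hp * hours_per_year      # hp-hours of work per year

subsidy_ratio = 100              # non-human work per unit of human work
machine_output = human_output * subsidy_ratio
print(f"Human: {human_output:.0f} hp-h/yr; machines on our behalf: {machine_output:,.0f} hp-h/yr")
```

Two hundred horsepower-hours of muscle, twenty thousand from engines and motors: that is the windfall the rest of this section describes.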

Let me state that another way: If I look around me, at the hurtling cars and trucks, the massive quantities of cloth and steel and concrete created each year, the rapidly expanding cities, the roads that get paved and the bridges built, I am seeing a quantity of building and digging and hauling and making that is 100 times greater than the humans around me could accomplish.  Human muscles and energies provide one percent of the work needed to create and maintain our towering, hyper-productive, petro-industrial civilization; but electricity, fossil fuels, other energy sources, engines, and machines provide the other 99 percent.  We and our human bodies put in 1 unit of work, but enjoy the benefits of 100.  That is the reason so many of us live better than the kings, sultans, and emperors of previous centuries.

As Stuart McMillen brilliantly illustrates in his long-form comic, Energy Slaves, it is as if each of us has a whole troupe of slaves toiling for our benefit.  It is the work of these virtual assistants that propels us along, creates our homes and cities, raises our food, pumps our water, and makes our goods.

We will face many hard questions as we progress through the twenty-first century: can we continue to consume energy at the rates we do now?  How can we generate that energy without fouling the atmosphere and destabilizing the climate?  How do we more equitably share access to energy among our soon-to-be 11-billion-person population?  How do we address energy poverty?  And all these questions and issues are tied to others, such as income inequality.  But a vital first step is to begin to talk honestly about the real sources of our wealth, to acknowledge that we enjoy undeserved subsidies, to admit that we are all (energy) lottery winners, and to approach the future with attitudes of humility and gratitude rather than entitlement.  We cannot navigate the future if we cling to the self-serving and self-aggrandizing myths of the past.

Electric car numbers, and projections to 2030

Number of electric cars on the road, 2013 to 2017, and national data

In just two years, 2013 to 2015, the number of electric cars worldwide more than doubled.  And in the following two years, 2015 to 2017, the number more than doubled again, to just over 3 million.  This exponential growth means that electric vehicles (EVs)* will soon make up a large portion of the global car fleet.

This week’s graph is reprinted from Global EV Outlook 2018, the latest in a series of annual reports compiled by the International Energy Agency (IEA).

The graphs below show IEA projections of the number of EVs in the world by 2030 under two scenarios.  The first, the “New Policies Scenario,” takes into account existing and announced national policies.  Under this scenario, the number of EVs on the road is projected to reach 125 million by 2030.

The second scenario is called “EV30@30.”  This scenario is based on the assumption that governments will announce and implement new policies that will increase global EV penetration to 30 percent of new car sales by 2030—a 30 percent sales share.  This 30 percent share is roughly what is needed to begin to meet emission-reduction commitments made in the lead-up to the 2015 Paris climate talks.  Under this scenario, the number of EVs on the road could reach 228 million by 2030.

In either case, whether there are 125 million EVs on the road in twelve years or 228 million, the result will be an impressive one, given that there were fewer than a million just four years ago.

Electric cars are not a panacea, but they do represent an important transition technology; electrifying much of the global car fleet can buy us the time we need to build zero-emission train and transit systems.  Thus, it is very important that we move very rapidly to maximize the number of EVs built and sold.  But the IEA is clear: EV adoption will depend on ambitious, effective government action.  The 228 million EVs projected under the EV30@30 Scenario will only exist if governments implement a suite of aggressive new policies.  The IEA states that:

“The uptake of electric vehicles is still largely driven by the policy environment.  The ten leading countries in electric vehicle adoption all have a range of policies in place to promote the uptake of electric cars.  Effective policy measures have proved instrumental in making electric vehicles more appealing to customers…, reducing risks for investors, and encouraging manufacturers to scale up production ….  Key examples of instruments employed by local and national governments to support EV deployment include public procurement programmes…, financial incentives to facilitate the acquisition of EVs and cut their usage cost (e.g. by offering free parking), and a variety of regulatory measures at different administrative levels, such as fuel-economy standards and restrictions on the circulation of vehicles based on tailpipe emissions performance.”

In 2018, about 95 million passenger cars and commercial vehicles were sold worldwide.  About 1 million were electric—about 1 percent.  The goal is to get to 30 percent in 12 years.  Attaining that goal, and thereby averting some of the worst effects of climate change, will require Herculean efforts by policymakers, regulators, international bodies, and automakers.

* There are two main types of EVs.  The first is plug-in hybrid electric vehicles (PHEVs).  These cars have batteries, can be plugged in, and can be driven a limited distance (usually tens of kilometres) using electrical power only, after which a conventional piston engine engages to charge the batteries or assist in propulsion.  Examples of PHEVs include the Chevrolet Volt and Toyota Prius Prime.  The second type is the battery electric vehicle (BEV).  BEVs have larger batteries, longer all-electric range (150 to 400 kms), and no internal combustion engines.  Examples of BEVs include the Chevrolet Bolt, Nissan Leaf, and several models from Tesla.  The term “electric vehicle” (EV) encompasses both PHEVs and BEVs.


$100 billion and rising: Canadian farm debt

Canadian farm debt, 1971-2017

Canadian farm debt has risen past the $100 billion mark.  According to recently released Statistics Canada data, farm debt in 2017 was $102.3 billion—nearly double the level in 2000.  (All figures and comparisons adjusted for inflation.)

Some analysts and government officials characterize the period since 2007 as “better times” for farmers.  But during that period (2007-2017, inclusive) total farm debt increased by $37 billion—rising by more than $3 billion per year.

Here’s how Canadian agriculture has functioned during the first 18 years of the twenty-first century (2000 to 2017, inclusive):

1. Overall, farmers earned, on average, $47 billion per year in gross revenues from the markets (these are gross receipts from selling crops, livestock, vegetables, honey, maple syrup, and other products).

2. After paying expenses, on average, farmers were left with $1.6 billion per year in realized net farm income from the markets (excluding farm-support program payments).  If that amount was divided equally among Canada’s 193,492 farms, each would get about $8,300.

3. To help make ends meet, Canadian taxpayers transferred to farmers $3.1 billion per year via farm-support-program payments.

4. On top of this, farmers borrowed $2.7 billion per year in additional debt.

5. Farm family members worked at off-farm jobs to earn most of the household income needed to support their families (for data see here and here).

The numbers above give rise to several observations:

A. The amount of money that farmers pay each year in interest to banks and other lenders ($3 billion, on average) is approximately equal to the amount that Canadian citizens each year pay to farmers ($3.1 billion).  Thus, one could say that, in effect, taxpayers are paying farmers’ interest bills.  Governments are facilitating the transfer of tax dollars from Canadian families to farmers and on to banks and their shareholders.

B. Canadian farmers probably could not service their $100 billion dollar debt without government/taxpayer funding.

C. To take a different perspective: each year farmers take on additional debt ($2.7 billion, on average) approximately equal to the amount they are required to pay in interest to banks ($3 billion on average). One could say that for two decades banks have been loaning farmers the money needed to pay the interest on farmers’ tens-of-billions of dollars in farm debt.

Over and above the difficulty of paying the interest is the difficulty of repaying the principal.  Farm debt now—$102 billion—is equal to approximately 64 years of farmers’ realized net farm income from the markets.  To repay the current debt, Canadian farm families would have to hand over to banks and other lenders every dime of net farm income from the markets from now until 2082.
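The 64-year figure follows directly from the numbers given earlier in this post:

```python
# How long to repay Canadian farm debt out of net market income?
farm_debt = 102e9                 # dollars (2017)
net_income_per_year = 1.6e9       # average realized net farm income from the markets

years_to_repay = farm_debt / net_income_per_year
print(f"~{years_to_repay:.0f} years; debt-free around {2018 + round(years_to_repay)}")
```

That is, even if every dollar of net market income went to the lenders, the debt would not be cleared until around 2082.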

The Canadian farm sector has many strengths.  By many measures, the sector is extremely successful and productive.  Over the past generation, farmers have managed to nearly double the value of their output and triple the value of agri-food exports.  Output per year, per farmer, and per acre are all up dramatically.  And Canadian farmers lead the world in adopting high-tech production systems.  The problem is not that our farms are backward, inefficient, or unproductive.  Rather, the problems detailed above are the result of voracious wealth extraction by the dominant agribusiness transnationals and banks. (To examine the extent of that wealth extraction, see my blog post here).

Although our farm sector has many strengths and is setting production records, the sector remains in a crisis that began in the mid-1980s.  And what began as a farm income crisis has metastasized into a farm debt crisis.  Further, the sector also faces a generational crisis (the number of farmers under the age of 35 has been cut by half since 2001) and a looming climate crisis.  Policy makers must work with farmers to rapidly restructure and transform Canadian agriculture.  A failure to do so will mean further costs to taxpayers, the destruction of the family farm, and irreparable damage to Canada’s food-production system.

We’re in year 30 of the current climate crisis

An excerpt from the Conference Statement of the 1988 World Conference on the Changing Atmosphere held in Toronto

In late June 1988, Canada hosted the world’s first large-scale climate conference to bring together scientists, experts, policymakers, elected officials, and the media.  The “World Conference on the Changing Atmosphere: Implications for Global Security” was held in Toronto, hosted by Canada’s Conservative government, and attended by hundreds of scientists and officials.

In their final conference statement, attendees wrote that “Humanity is conducting an unintended, uncontrolled, globally pervasive experiment whose ultimate consequences could be second only to a global nuclear war.”  (See excerpt pictured above.)  The 30-year-old conference statement contains a detailed catalogue of causes and effects of climate change.

Elizabeth May—who in 1988 was employed by Canada’s Department of Environment—attended the conference.   In a 2006 article she reflected on Canada’s leadership in the 1980s on climate and atmospheric issues:

“The conference … was a landmark event.  It was opened by Prime Minister Mulroney, who spoke then of the need for an international law of the atmosphere, citing our work on acid rain and ozone as the first planks in this growing area of international environmental governance…. 

Canada was acknowledged as the leader in hosting the first-ever international scientific conference on climate change, designed to give the issue a public face.  No nation would be surprised to see Canada in the lead.  After all, we had just successfully wrestled to the ground a huge regional problem, acid rain, and we had been champions of the Montreal Protocol to protect the ozone layer.”

The Toronto conference’s final statement also called on governments and industry to work together to “reduce CO2 emissions by approximately 20% … by the year 2005…. ”  This became known as the Toronto Target.  Ignoring that target and many others, Canada has increased its CO2 emissions by 29 percent since 1988.

Other events mark 1988 as the beginning of the modern climate-change era.  In 1988, governments and scientists came together to form the United Nations Intergovernmental Panel on Climate Change (IPCC). Since its formation, IPCC teams of thousands of scientists have worked to create five Assessment Reports which together total thousands of pages.

Also in 1988, NASA scientist Dr. James Hansen told a US congressional committee that climate change and global warming were already underway and that he was 99 percent certain that the cause was a buildup of carbon dioxide and other gases released by human activities.  Thirty years ago, Hansen told the committee that “It is time to stop waffling so much and say that the evidence is pretty strong that the greenhouse effect is here.” The New York Times and other papers gave prominent coverage to Hansen’s 1988 testimony.

Fast-forward to recent weeks.  Ironically in Toronto, the site of the 1988 conference, and 30 years later almost to the day, newly elected Ontario Premier Doug Ford announced that he was scrapping Ontario’s cap-and-trade emission-reduction plan, vowed to push back against any federal move to price or tax carbon, and said he would join a legal challenge against the federal legislation.  In effect, Ford and premiers such as Saskatchewan’s Scott Moe have pledged to fight and stop Canada’s flagship climate-change and emission-reduction initiative.  To do so, 30 years into the modern climate-change era, is foolhardy, destructive, and unpardonable.

Citizens need to understand that when they vote for leaders such as Doug Ford (Ontario), Scott Moe (Saskatchewan), Jason Kenney (Alberta), or Andrew Scheer (federal Conservative leader) they are voting against climate action.  They are voting for higher emissions; runaway climate change; melting glaciers and permafrost; submerged seaports and cities worldwide; hundreds of millions of additional deaths from heat, floods, storms, and famines; and crop failures in this country and around the world.  A vote for a leader who promises inaction, slow action, or retrograde action is a vote to damage Canada and the Earth; it is a vote for economic devastation in the medium and long term, for dried-up rivers and scorched fields.  A vote for Moe, Ford, Kenney, Scheer, Trump, and a range of similar leaders is a vote to unleash biosphere-damaging and civilization-cracking forces upon our grandchildren, upon the natural environment, and upon the air, water, soil, and climate systems that support, provision, nourish, and enfold us.

In the 1990s, in decade one of the current climate crisis, inaction was excusable.  We didn’t know.  We weren’t sure.  We didn’t have the data.

As we enter decade four, inaction is tantamount to reckless endangerment—criminal negligence.  And retrograde action, such as that from Ford, Moe, Trump, and others, is tantamount to vandalism, arson, ecocide, and homicide.  How we vote and who we elect will affect how many forests burn, how many reefs disappear, and how many animals and people die.

In the aftermath of every crime against humanity (or against the planet or against the future) there are individuals who try to claim “I didn’t know.”  In year 30 of the current climate-change era, none can make that claim.  We’ve known for 30 years that the ultimate consequences of ongoing emissions and climate change “could be second only to a global nuclear war.”

Civilization as asteroid: humans, livestock, and extinctions

Graph of biomass of humans, livestock, and wild animals
Mass of humans, livestock, and wild animals (terrestrial mammals and birds)

Humans and our livestock now make up 97 percent of all animals on land.  Wild animals (mammals and birds) have been reduced to a mere remnant: just 3 percent.  This is based on mass.  Humans and our domesticated animals outweigh all terrestrial wild mammals and birds 32-to-1.

To clarify, if we add up the weights of all the people, cows, sheep, pigs, horses, dogs, chickens, turkeys, etc., that total is 32 times greater than the weight of all the wild terrestrial mammals and birds: all the elephants, mice, kangaroos, lions, raccoons, bats, bears, deer, wolves, moose, chickadees, herons, eagles, etc.  A specific example is illuminating: the biomass of chickens is more than double the total mass of all other birds combined.
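
The percentage split and the 32-to-1 ratio are two views of the same numbers; a quick sketch shows they agree:

```python
# Humans plus livestock make up 97% of terrestrial mammal and bird
# biomass; wild animals make up the remaining 3%. The implied ratio:
human_livestock_share = 0.97
wild_share = 0.03

ratio = human_livestock_share / wild_share
print(f"Humans + livestock outweigh wild animals roughly {ratio:.0f}-to-1")
```

Dividing 97 percent by 3 percent gives approximately 32, which is where the 32-to-1 figure comes from.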

Before the advent of agriculture and human civilizations, however, the opposite was the case: wild animals and birds dominated, and their numbers and mass were several times greater than their numbers and mass today. Before the advent of agriculture, about 11,000 years ago, humans made up just a tiny fraction of animal biomass, and domesticated livestock did not exist.  The current situation—the domination of the Earth by humans and our food animals—is a relatively recent development.

The preceding observations are based on a May 2018 report by Yinon Bar-On, Rob Phillips, and Ron Milo published in the academic journal Proceedings of the National Academy of Sciences.  Bar-On and his coauthors use a variety of sources to construct a “census of the biomass of Earth”; they estimate the mass of all the plants, animals, insects, bacteria, and other living things on our planet.

The graph above is based on data from that report (supplemented with estimates based on work by Vaclav Smil).  The graph shows the mass of humans, our domesticated livestock, and “wild animals”: terrestrial mammals and birds.  The units are millions of tonnes of carbon.*  Three time periods are listed.  The first, 50,000 years ago, is the time before the Quaternary Megafauna Extinction.  The Megafauna Extinction was a period when Homo sapiens radiated outward into Eurasia, Australia, and the Americas and contributed to the extinction of about half the planet’s large animal species (those heavier than 44 kg).  (Climate change also played a role in that extinction.)  In the middle of the graph we see the period around 11,000 years ago—before humans began practicing agriculture.  At the right-hand side we see the situation today.  Note how the first two periods are dominated by wild animals.  The mass of humans in those periods is so small that the blue bar representing human biomass is not even visible in the graph.**

This graph highlights three points:
1. wild animal numbers and biomass have been catastrophically reduced, especially over the past 11,000 years;
2. human numbers and livestock numbers have skyrocketed, to unnatural, abnormal levels; and
3. the downward trendline for wild animals is gravely concerning; it suggests accelerating extinctions.

Indeed, we are today well into the fastest extinction event in the past 65 million years.  According to the 2005 Millennium Ecosystem Assessment “the rate of known extinctions of species in the past century is roughly 50–500 times greater than the extinction rate calculated from the fossil record….”

The extinction rate that humans are now causing has not been seen since the Cretaceous–Paleogene extinction event 65 million years ago—the asteroid-impact-triggered extinction that wiped out the dinosaurs.  Unless we reduce the scale and impacts of human societies and economies, and unless we more equitably share the Earth with wild species, we will fully enter a major global extinction event—only the sixth in 500 million years.  To the other species of the Earth, and to the fossil record, human impacts increasingly resemble an asteroid impact.

In addition to the rapid decline in the mass and number of wild animals, it is worth contemplating the converse: the huge increase in human and livestock biomass.  Above, I called this increase “unnatural,” and I did so advisedly.  The mass of humans and our food animals is now 7 times larger than the mass of animals on Earth 11,000 or 50,000 years ago—7 times larger than what is normal or natural.  For millions of years the Earth sustained a certain range of animal biomass; in recent millennia humans have multiplied that mass roughly sevenfold.

How?  Fossil fuels.  Via fertilizers, petro-chemical pesticides, and other inputs we are pushing hundreds of millions of tonnes of fossil fuels into our food system, and thereby pushing out billions of tonnes of additional food and livestock feed.  We are turning fossil fuel Calories from the ground into food Calories on our plates and in livestock feed-troughs.   For example, huge amounts of fossil-fuel energy go into growing the corn and soybeans that are the feedstocks for the tens-of-billions of livestock animals that populate the planet.

Dr. Anthony Barnosky has studied human-induced extinctions and the growing dominance of humans and their livestock.  In a 2008 journal article he writes that “as soon as we began to augment the global energy budget, megafauna biomass skyrocketed, such that we are orders of magnitude above the normal baseline today.”  According to Barnosky “the normal biomass baseline was exceeded only after the Industrial Revolution” and this indicates that “the current abnormally high level of megafauna biomass is sustained solely by fossil fuels.”

Only a limited number of animals can be fed from leaves and grass energized by current sunshine.  But by tapping a vast reservoir of fossil sunshine we’ve multiplied the number of animals that can be fed.  We and our livestock are petroleum products.

There is no simple list of solutions to mega-problems like accelerating extinctions, fossil-fuel over-dependence, and human and livestock overpopulation.  But certain common sense solutions seem to present themselves.  I’ll suggest just one: we need to eat less meat and fewer dairy products and we need to reduce the mass and number of livestock on Earth.  Who can look at the graph above and come to any other conclusion?  We need not eliminate meat or dairy products (grazing animals are integral parts of many ecosystems) but we certainly need to cut the number of livestock animals by half or more.  Most importantly, we must not try to proliferate the Big Mac model of meat consumption to 8 or 9 or 10 billion people.  The graph above suggests a stark choice: cut the number of livestock animals, or preside over the demise of most of the Earth’s wild species.

 

* Using carbon content allows us to compare the mass of plants, animals, bacteria, viruses, etc.  Very roughly, humans and other animals are about half to two-thirds water.  The remaining “dry mass” is about 50 percent carbon.  Thus, to convert from tonnes of carbon to dry mass, a good approximation is to multiply by 2.
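
The conversion described in this footnote can be sketched as a pair of small helper functions (the water and carbon fractions are the rough approximations stated above, not precise biological constants):

```python
# Convert biomass measured in tonnes of carbon into approximate dry
# mass and live ("wet") mass, using the rough fractions from the text:
# dry mass is ~50% carbon, and live mass is ~50-65% water (60% used here).

def carbon_to_dry_mass(tonnes_carbon, carbon_fraction_of_dry=0.5):
    """Dry mass implied by a given carbon mass."""
    return tonnes_carbon / carbon_fraction_of_dry

def dry_to_wet_mass(dry_mass, water_fraction=0.6):
    """Live mass: if 60% of live mass is water, dry mass is the other 40%."""
    return dry_mass / (1 - water_fraction)

dry = carbon_to_dry_mass(100)   # 100 tonnes of carbon -> 200 tonnes dry mass
wet = dry_to_wet_mass(dry)      # -> 500 tonnes live mass
print(dry, wet)
```

So multiplying tonnes of carbon by 2 gives dry mass, as the footnote says, and a further factor of roughly 2 to 3 gives live mass.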

** There is significant uncertainty regarding animal biomass in the present, and much more so in the past.  Thus, the biomass values for wild animals in the graph must be considered as representing a range of possible values.  That said, the overall picture revealed in the graph is not subject to any uncertainty.  The overall conclusions are robust: the mass of humans and our livestock today is several times larger than wild animal biomass today or in the past; and wild animal biomass today is a fraction of its pre-agricultural value.

Graph sources:
– Yinon M. Bar-On, Rob Phillips, and Ron Milo, “The Biomass Distribution on Earth,” Proceedings of the National Academy of Sciences, May 17, 2018.
– Anthony Barnosky, “Megafauna Biomass Tradeoff as a Driver of Quaternary and Future Extinctions,” Proceedings of the National Academy of Sciences 105 (August 2008).
– Vaclav Smil, Harvesting the Biosphere: What We Have Taken from Nature (Cambridge, MA: MIT Press, 2013).

Home grown: 67 years of US and Canadian house size data

Graph of the average size of new single-family homes, Canada and the US, 1950-2017
Average size of new single-family homes, Canada and the US, 1950-2017

I was an impressionable young boy back in 1971 when my parents were considering building a new home.  I remember discussions about house size.  1,200 square feet was normal back then.  1,600 square feet, the size of the house they eventually built, was considered extravagant—especially in rural Saskatchewan.  And only doctors and lawyers built houses as large as 2,000 square feet.

So much has changed.

New homes in Canada and the US are big and getting bigger.  The average size of a newly constructed single-family detached home is now 2,600 square feet in the US and probably 2,200 in Canada.  The average size of a new house in the US has doubled since 1960.  Though data is sparse for Canada, it appears that the average size of a new house has doubled since the 1970s.

We like our personal space.  A lot.  Indeed, space per person has been growing even faster than house size: as our houses have grown, our families have shrunk, so per-capita space has increased dramatically.  The graph below, from shrinkthatfootprint.com, shows that Canadians and Americans, along with Australians, enjoy the greatest per-capita floorspace in the world.  The average Canadian or American has double the residential space of the average UK, Spanish, or Italian resident.

Those of us fortunate enough to have houses are living in the biggest houses in the world and the biggest in history.  And our houses continue to get bigger.  This is bad for the environment and for our finances.

Big houses require more energy and materials to construct.  Big houses hold more furniture and stuff—they are integral parts of high-consumption lifestyles.  Big houses contribute to lower population densities and, thus, to more sprawl and driving.  And, all things being equal, big houses require more energy to heat and cool.  In Canada and the US we are compounding our errors: making our houses bigger and making them energy-inefficient.  A 2,600-square-foot home built to leading-edge Passivhaus standards with net-zero energy requirements is one thing; a house that size that runs its furnace half the year and its air conditioner the other half is something else.  Multiply that kind of house by millions and we create a built-in greenhouse-gas-emissions problem.

Then there are the issues of cost and debt.  We continually hear that houses are unaffordable.  That is not surprising if we are making them twice as large.  What if, over the past decade, we had made our new houses half as big but twice as numerous?  Might that have reduced prices?

And how are large houses connected to large debt loads?  Canadian household debt now stands at a record $1.8 trillion.  Much of that is mortgage debt.  Even at a low interest rate of 3.5 percent, the interest on that debt works out to roughly $7,000 per year for a hypothetical family of four.  And that’s just the average; many families are paying a multiple of that amount in interest alone.  Then, on top of that, there are principal payments.  It’s not hard to see why so many families struggle to save for retirement or pay off debt.
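
A rough check of that per-family interest figure (a sketch; the population figure of about 37 million is my assumption, not from the text):

```python
# Rough check of the interest burden for a hypothetical family of four.
total_debt = 1.8e12     # total Canadian household debt, in dollars
interest_rate = 0.035   # low-end interest rate cited in the text
population = 37e6       # assumed Canadian population (not from the text)
family_size = 4

debt_per_family = total_debt / population * family_size
interest_per_family = debt_per_family * interest_rate
print(f"Average interest per family of four: ${interest_per_family:,.0f} per year")
```

The result comes out a little under $7,000 per year, consistent with the figure above.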

Our ever-larger houses are filling the air with emissions; emptying our pockets of saving; filling up with consumer-economy clutter; and creating car-mandatory unwalkable, unbikable, unlovely neighborhoods.

There are several parts to the solution.  First, new houses must stop getting bigger, and they must start getting smaller.  There is no reason that Canadian and US residential spaces must be twice as large, per person, as European homes.  Second, building standards must get a lot better, fast.  Greenhouse gas emissions must fall by 50 to 80 percent by mid-century.  It is critical that the houses we build in 2020 are designed with energy-efficient walls, solar-heat-harvesting glass, and engineered summer shading so that they require 50 to 80 percent less energy to heat and cool.  Third, we need to take advantage of smaller, more rational houses to build more compact, walkable, bikable, enjoyable neighborhoods.  Preventing sprawl starts at home.

Finally, we need to consider questions of equity, justice, and compassion.  What is our ethical position if we are, on the one hand, doubling the size of our houses and tripling our per-capita living space and, on the other, claiming that we “can’t afford” housing for the homeless?  Income inequality is not just a matter of abstract dollars.  This inequality is manifest when some of us have rooms in our homes we seldom visit while others sleep outside in the cold.

We often hear about the “triple bottom line”: making our societies ecologically, economically, and socially sustainable.  Building oversized homes moves us away from sustainability, on all three fronts.

Graph sources:
US Department of Commerce/US Census Bureau, “2016 Characteristics of New Housing”
US Department of Commerce/US Census Bureau, “Characteristics of New Housing: Construction Reports”
US Department of Commerce/US Census Bureau, “Construction Reports: Characteristics of New One-Family Homes: 1969”
US Department of Labor, Bureau of Labor Statistics, “New Housing and its Materials: 1940-56”
Preet Bannerjee, “Our Love Affair with Home Ownership Might Be Doomed,” Globe and Mail, January 18, 2012 (updated February 20, 2018) 

The cattle crisis: 100 years of Canadian cattle prices

Graph of Canadian cattle prices, historic, 1918-2018
Canadian cattle prices at slaughter, Alberta and Ontario, 1918-2018

Earlier this month, Brazilian beef packer Marfrig Global Foods announced it is acquiring 51 percent ownership of US-based National Beef Packing for just under $1 billion (USD).  The merged entity will slaughter about 5.5 million cattle per year, making Marfrig/National the world’s fourth-largest beef packer.  (The top three are JBS, which slaughters 17.4 million cattle per year; Tyson, 7.7 million; and Cargill, 7.6 million.)  To put these numbers into perspective: after the Marfrig/National merger, the four largest packing companies will together slaughter about 15 times more cattle worldwide than Canada produces in a given year.  In light of continuing consolidation in the beef sector, it is worth taking a look at how cattle farmers and ranchers are faring.

This week’s graph shows Canadian cattle prices from 1918 to 2018.  The heavy blue line shows Ontario slaughter steer prices, and is representative of Eastern Canadian cattle prices.  The narrower tan-coloured line shows Alberta slaughter steer prices, and is representative for Western Canada.  The prices are in dollars per pound and they are adjusted for inflation.
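
Adjusting a historical price for inflation, as done in the graph, means scaling the nominal price by the ratio of consumer price indexes.  A minimal sketch (the CPI values below are illustrative placeholders, not actual Statistics Canada figures):

```python
# Convert a nominal historical price into today's dollars using a
# consumer price index (CPI). The CPI values here are made up for
# illustration; real adjustments would use published CPI data.

def inflation_adjust(nominal_price, cpi_then, cpi_now):
    """Express a historical nominal price in current dollars."""
    return nominal_price * (cpi_now / cpi_then)

# e.g. a $0.80/lb steer price in a year when the CPI stood at 30,
# expressed in the dollars of a year when the CPI stands at 130:
real_price = inflation_adjust(0.80, cpi_then=30, cpi_now=130)
print(f"${real_price:.2f} per pound in today's dollars")
```

This is the standard deflation technique; every price series in the graph is rescaled this way so that prices from 1918 and 2018 are directly comparable.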

The two red lines at the centre of the graph delineate the price range from 1942 to 1989.  The red lines on the right-hand side of the graph delineate prices since 1989.  The difference between the two periods is stark.  In the 47 years before 1989, Canadian slaughter steer prices never fell below $1.50 per pound (adjusted for inflation).  In the 28 years since 1989, prices have rarely risen that high.  Price levels that used to mark the bottom of the market now mark the top.

What changed in 1989?  Several things:

1. The arrival of US-based Cargill in Canada in that year marked the beginning of the integration and consolidation of the North American continental market.  This was later followed by global integration as packers such as Brazil-based JBS set up plants in Canada and elsewhere.

2. Packing companies became much larger, but packing plants became much less numerous.  Gone were the days when two or three packing plants in a given city would compete to purchase cattle.

3. Packer consolidation and giantism were facilitated by trade agreements and global economic integration.  It was in 1989 that Canada signed the Canada-US Free Trade Agreement (CUSTA).  A few years later Canada would sign NAFTA, the World Trade Organization (WTO) Agreement on Agriculture, and other bilateral and multilateral “free trade” deals.

4. Packing companies created captive supplies—feedlots full of packer-owned cattle that a company could draw from if open-market prices rose, curtailing demand for farmers’ cattle and disciplining prices.

Prices and profits are only partly determined by supply and demand.  A larger factor is market power.  It is this power that determines the allocation of profits within a supply chain.  In the late ’80s, and continuing today, the power balance between packers and farmers shifted as packers merged to become giant, global corporations.  The balance shifted as packing plants became less numerous, reducing competition for farmers’ cattle.  The balance shifted still further as packers began to utilize captive supplies.  And it shifted further still as trade agreements thrust farmers in every nation into a single, hyper-competitive global market.  Because market power determines profit allocation, these shifts increased the profit share for packers and decreased the share for farmers.  The effects on cattle farmers have been devastating: since the late 1980s, Canada has lost half of its cattle farmers and ranchers.

For more background and analysis, please see the 2008 report by the National Farmers Union: The Farm Crisis and the Cattle Sector: Toward a New Analysis and New Solutions.

Graph sources: numerous, including Statistics Canada CANSIM Tables 002-0043, 003-0068, and 003-0084; and Statistics Canada, “Livestock and Animal Products,” Cat. No. 23-203