Friday, March 30, 2012

22 March 2011 – EARTH HOUR

Or, "Is it better to light a candle"?


The purpose of Earth Hour is to “turn off lights for one hour as a symbolic action to raise awareness about climate change.”  The goal of sitting in the dark for an hour is to provide “an opportunity for all of us to reflect on what we can do - at home and in the office - to lessen our impact on the environment.”
I thought I’d simply offer a few facts and figures for your consideration as “Earth Hour” approaches.  First, according to the IMF, there is a direct and very stark correlation between electrification and standard of living.  The poorest countries in the world are also those with the lowest levels of electrification (expressed in Watts of electrical consumption per person).

(Source: IMF and CIA Factbook)

The upper right end of the line contains Australia, the US, Canada and Norway; the lower left end contains Chad, Rwanda, Afghanistan, Ethiopia, Nigeria and Bangladesh.  Nobody in any of those latter countries will be turning off any light-bulbs on Saturday night, because the per capita electrical consumption in each of those countries is 15 Watts per person or less - not enough to run a single light-bulb.  In other words, if you want to raise the standard of living of people in the worst-off countries in the world, don’t send them cheques, used clothing or Hollywood celebrities; provide them with cheap, abundant electrical power.
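For those who like to check the units: the charts express consumption as a continuous average load in Watts, which is just annual per-capita consumption divided by the hours in a year.  A quick sketch (the country figures below are illustrative orders of magnitude, not the exact IMF/CIA data):

```python
# Converting annual per-capita electricity consumption (kWh/yr) into the
# continuous average load (Watts per person) used in the charts above.

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def avg_watts(kwh_per_year):
    """Average continuous load, in Watts, for a given annual consumption."""
    return kwh_per_year * 1000 / HOURS_PER_YEAR

# Illustrative orders of magnitude, not exact country data:
print(round(avg_watts(16_000)))  # a Canada-scale consumer: ~1825 W/person
print(round(avg_watts(100)))     # a Chad-scale consumer:   ~11 W/person
```

At the bottom end of the scale, the answer comes out at roughly a dozen Watts per person - which is why nobody there will be turning off a light-bulb on Saturday night.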
The importance of electrification is not reflected only in GDP.  International activists and aid agencies are continually bombarding potential donors with heart-rending images of children in the most dire conditions of abject poverty.  Well, guess what - there’s also a very direct and very stark correlation between electrification and child mortality.  Using the same countries as in the first chart, here’s the effect of electrification on mortality rates for children under the age of 5 (expressed in annual deaths per thousand live births):

(Source: WHO and CIA Factbook)

The bottom right of the chart, with low child mortality and high electrification, once again contains Australia, the US, Canada and Norway.  The top left of the chart, with low electrification and high child mortality, contains Ethiopia, Nigeria, Rwanda, Chad and Afghanistan.  In other words, if you want to solve the child mortality problem, don’t send monetary donations to individuals or agencies; provide the population with cheap, abundant electrical power.
Electricity is the lifeblood of modern society.  I don’t just mean its trinkets and trappings, like X-Boxes and LCD televisions; I mean its very foundations.  Stop and imagine for a moment what would happen if the power really went out.  Not just a few light-bulbs, but everything. 
Actually, you don’t have to imagine it - you just have to have been living in Ontario or the north-eastern US on the 14th of August 2003.

Remember that?  That was the great Northeast blackout - the second most widespread blackout in history, affecting 10,000,000 people in Ontario and 45,000,000 in the US.  That part of the grid normally supplied about 28 GW of power.  During the outage, supply dropped to about 5 GW.  It’s fascinating how much the affected area looks like the Atlantic and Pacific oceans, the Gulf of Mexico and Hudson’s Bay - places where there are simply no people.  This is what doing without electricity looks like.  You want symbolism?  THIS is symbolism.  Do you suppose the people affected by that blackout were reflecting on how good the loss of the grid was for “the environment”?  Or were they wondering how to keep their food from spoiling, how to get water out of their 180’ well without a pump, how to cook, how to clean their clothes, or - luxury of luxuries! - how to cool their homes in the 35+ temperatures we were enjoying that day?  Were they worrying about driving safely without traffic lights?  Or about how long auxiliary generators at hospitals would hold out?  Or whether banks could keep their computer servers running? Or how to fuel their cars with no power to the gas pumps?  Were police, fire and ambulance radio services still working?  Were the intensive care, cardiac and oncology units at hospitals still functioning?  How about air traffic control?
And that was in the summer, when ambient outdoor temperatures are survivable.  But this is Canada.  Suppose a major blackout had happened in winter? 
Wait - it did:

The central North American ice storm of January 1998 destroyed over 1000 high tension pylons (and 35,000 wooden power poles) in Quebec alone, and resulted in a loss of electrical power to more than 4,000,000 consumers, in some cases for many weeks.  The damage to the grid was so severe and so widespread that the Army deployed armoured personnel carriers to rip downed power lines out of the ice in an attempt to salvage them.  Damage was estimated at $4-6 billion, and the storm resulted in the largest domestic deployment of military personnel in Canadian history.  Thousands of farm animals were crushed under falling roofs, or froze to death because there was no power to heat their buildings.  During those power outages, how many folks do you think were reflecting on their impact on the environment - as opposed to, say, the environment’s impact on them?

We take the grid for granted; we treat it like a constant, something that’ll always be there.  But it’s not a constant; it’s an enormous and immensely complex, interconnected, continent-wide machine consisting of hundreds of power plants, thousands of transformer stations, hundreds of thousands of step-down transformers, millions of pylons and miles of wire, hundreds of millions of individual consumers, and tens of billions of electrical appliances.  Managing the generation, distribution and consumption of the amount of energy handled by the grid is a feat of engineering that makes most other human achievements look like finger-painting.  Every day the North American grid distributes the electrical equivalent of the energy contained in a 10 Mt hydrogen bomb - and yet we don’t even think about it.(Note D)  It took the most technically advanced civilization in history more than a hundred years to accomplish something as magnificently complicated, capable and invisible as the grid, and it takes all of our ingenuity to keep it operating in the face of ever-increasing demand, insufficient maintenance, and government regulations that place illogical constraints on generators.  And as we are painfully reminded every time there’s a major power outage, the grid is all that stands between us and the unpleasant reality of the natural world.  If you live in Canada, that’s a pretty serious consideration.

Now, let’s take a moment to think about what “Earth Hour” really symbolizes.  If you accept the figures in the CIA World Factbook and those provided by the US Energy Information Administration, then the per capita electricity consumption in Canada is 1910 W/person.  For a family of 4, then, the total load is 7640 W.  The average Canadian house, according to NRCan, has 40.9 light bulbs,(Note A) of which 52% are high-efficiency (usually CFLs rather than the far more efficient, longer-lasting and much safer LED bulbs).  At an average of 60W per incandescent light bulb, and 15W per CFL (to match the 800 lumens generated by a 60W incandescent bulb), the approximate lighting load for the average household if all lights were turned on simultaneously (an unusual occurrence) would be (21x15)+(20x60)=1515W.  In my house, I counted; between 1930 and 2030 hrs on a Saturday, I would normally have 2 lights on in the kitchen, 3 in the office, 1 in the living room, 2 in the kids’ bedrooms, and 3 outside.  Half of these are halogen or LED, but let’s pretend they’re all 60W incandescent bulbs - that’s a total load of 11x60 or 660W.  If I turn these all off for one hour, I will have spared the grid 0.66 kWh. 
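For anyone who wants to check my arithmetic, here it is in a few lines (all the figures are the ones quoted above; the variable names are mine):

```python
# Household lighting-load arithmetic from the text: NRCan bulb counts,
# 60 W per incandescent, 15 W per equivalent CFL.

per_capita_w = 1910
print(per_capita_w * 4)      # total load for a family of 4: 7640 W

bulbs_total = 40.9
cfl_share = 0.52
cfl_count = round(bulbs_total * cfl_share)           # ~21 CFLs
incand_count = round(bulbs_total * (1 - cfl_share))  # ~20 incandescents

max_lighting_load = cfl_count * 15 + incand_count * 60
print(max_lighting_load)     # all lights on at once: 1515 W

# My own Saturday-evening case: 11 bulbs, all treated as 60 W incandescent.
my_load_w = 11 * 60
saved_kwh = my_load_w / 1000 * 1   # one hour with them off
print(my_load_w, saved_kwh)        # 660 W, 0.66 kWh
```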

What kind of emissions am I saving?  Well, according to Ontario Power Generation (Note B), Ontario’s generating capacity is roughly one-third each nuclear (6606 MW), hydroelectric (6996 MW) and thermal (6327 MW).  As I write this, however, the power actually being produced by each of those elements of our overall generating capacity is 5709 MW from nuclear, 2255 MW from hydroelectric, and 319 MW from thermal plants.  In other words, of the 8283 MW currently being generated in Ontario, only 3.85% comes from carbon-emitting sources (accordingly, even if you accept the premise of the AGW thesis, 96.15% of Ontario’s generated electricity cannot in any way be even remotely connected to “climate change”).  So of the 0.66 kWh I’ll save by shutting off my lights for “Earth Hour”, I’m saving 0.66 x 0.0385 = 0.0254 kWh worth of emissions.  According to Environment Canada, using coal to produce electricity creates 0.5418 kg of carbon dioxide per kWh generated, so turning off my light bulbs for an hour will save 0.5418 x 0.0254 = 0.0137 kg, or 14 grams, of carbon dioxide. 
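The emissions arithmetic, laid out the same way (generation figures are the OPG snapshot quoted above; the emission factor is Environment Canada's 0.5418 kg CO2/kWh for coal):

```python
# Fraction of Ontario generation from carbon-emitting sources at the
# moment of writing, and the CO2 "saved" by my 0.66 kWh of darkness.

gen_mw = {"nuclear": 5709, "hydro": 2255, "thermal": 319}
total_mw = sum(gen_mw.values())           # 8283 MW
thermal_share = gen_mw["thermal"] / total_mw
print(round(thermal_share * 100, 2))      # ~3.85 % carbon-emitting

saved_kwh = 0.66                          # 11 bulbs off for one hour
fossil_kwh = saved_kwh * thermal_share    # ~0.0254 kWh of thermal generation
co2_kg = fossil_kwh * 0.5418              # Environment Canada coal factor
print(round(co2_kg * 1000))               # ~14 grams of CO2
```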

Paraffin, a complex hydrocarbon that is solid at room temperature, produces, like other alkane fuels, roughly 3 kg of carbon dioxide per kilogram when burned (about a 3:1 ratio by mass).  This means that if I want to replace my lost electrical light with paraffin candles, then - if I want to remain “Earth-friendly” - I can’t generate more than 14 grams of carbon dioxide.  This means that I can only burn about 5 grams of paraffin.  Ikea sells packs of 24 tea lights weighing 2 pounds (909 grams), so each tea light weighs about 37.85 grams (call it 35 grams once we lose the packaging and the aluminum holder for the paraffin).  If I can only burn 5 grams of paraffin, that’s 1/7 or 14.3% of a tea light.  Tea lights are advertised to burn for 4 hours, so to get one hour’s worth of light out of one, I’d have to burn 25% of it.  So I can light a single tea light during Earth Hour to replace the 11 light-bulbs I’ve turned off - but I’ll have to blow it out after 14.3/25x60=34 minutes, or else I’ll have produced more carbon dioxide from my tiny candle than Ontario Power Generation would have produced to run my 11 light-bulbs for an hour.
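The tea-light arithmetic, step by step (3 kg CO2 per kg of paraffin, 35 g of net wax per tea light, and the advertised 4-hour burn time, all as above):

```python
# How long can one tea light burn before it out-emits Ontario Power
# Generation running my 11 light-bulbs for an hour?

co2_budget_g = 14                        # the grid's cost of my 11 bulbs
paraffin_budget_g = round(co2_budget_g / 3)   # ~5 g of wax allowed

wax_per_candle_g = 35                    # net of packaging and holder
burn_fraction = paraffin_budget_g / wax_per_candle_g  # 1/7, or ~14.3 %

burn_rate_per_hour = 0.25                # a 4-hour candle burns 25 %/hour
minutes = burn_fraction / burn_rate_per_hour * 60
print(round(minutes))                    # ~34 minutes, then blow it out
```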

For the first 34 minutes of “Earth Hour”, therefore, I’ll be trying to run my life by the light of a single, tiny candle; and for the last 26 minutes, I’ll be sitting idle in the dark.  It doesn’t get any more symbolic than that.

If “Earth Hour” symbolizes anything at all, frankly, it’s the inability of people to do basic arithmetic.  It also symbolizes the widespread and appalling ignorance of the historical fact that abundant electricity produced by the cheapest means available (and anyone who understands the market should understand that “abundant” and “cheapest means available” are inextricably interconnected) is a major part of the difference between the life that we enjoy here, and the grinding poverty and catastrophic child mortality of the third world.

Nowhere is this easier to see than in a night-time satellite image of the Sea of Japan.  You’d think that South Korea, like Japan, was an island, wouldn’t you?

It’s always “Earth Hour” in North Korea.  If you want to reflect on something this Saturday night, reflect on that.

The line dividing the modern from the pre-modern world is drawn in electric light. It took an awful lot of human science, human ingenuity, human resources and human labour to create the means to produce clean, white illumination at the flick of a switch.  Voluntarily turning that switch off is a “symbolic act”, all right - but I don’t think most people who do so understand what they’re really symbolizing.




A) []
B) []
C) The satellite photo of eastern Asia comes from []
D) The combined US-Canada electrical consumption is 11,708,821 MWh/day.  1 kWh = 3.6 MJ, so the daily Can-US consumption is 42x10^15 joules.  The explosion of a million tonnes of TNT liberates 4.18x10^15 joules.
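Note D's comparison, sketched out for anyone who wants to verify the "10 Mt hydrogen bomb" claim in the main text:

```python
# Daily US-Canada electrical consumption expressed in megatonnes of TNT.

daily_mwh = 11_708_821                 # combined US-Canada MWh/day
joules = daily_mwh * 1_000 * 3.6e6     # MWh -> kWh, then 1 kWh = 3.6 MJ
print(f"{joules:.1e}")                 # ~4.2e16 J per day

mt_tnt_j = 4.18e15                     # energy of 1 Mt of TNT
print(round(joules / mt_tnt_j))        # ~10 Mt-equivalent per day
```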

Tuesday, March 27, 2012

25 February 2011 – North African turmoil, CW worries


The events of the past few weeks in North Africa make me wonder about the security of the chemical weapons (CW) programmes of the states that are presently melting down.  Both Egypt and Libya have CW.  Egypt, because it never joined the Chemical Weapons Convention, has never been required to make a declaration; but Cairo almost certainly has stockpiles of mustard and possibly G- and V-agents, and probably has the capability to produce more advanced CW.  For those who are interested, I’ve attached a paper - almost 13 years old, now - by Dany Shoham, from the 1998 Spring-Summer edition of the Non-Proliferation Review, that gives a comprehensive overview of Egypt’s chemical and biological weapons programmes at time of writing, including details of the support Cairo has supplied to other CBW proliferators, like Syria and pre-liberation Iraq.

We know a little more about Libya.  Shortly after the US Army pulled Saddam out of his furnished septic tank in December 2003, Qadafy, having recently been caught trafficking in nuclear what-nots by the Proliferation Security Initiative, turned state’s evidence, voluntarily joining the CWC and declaring tons of CW agents, precursors, and unfilled munitions.  The OPCW conducted its first visit to Libya in February of 2004 and destruction got under way the same month, beginning with the easy stuff: the Category 3 chemical weapons, Libya’s unfilled munitions (mostly, as I understand it, empty aircraft bombs that could have been filled either with mustard or nerve agent).  These were destroyed via the highly technical process of lining them up on the sand and driving a bulldozer over them.  That part of the program was complete by March 2004.  Libya submitted its initial declaration immediately thereafter, declaring approximately 23 metric tonnes of mustard gas, one inactivated CW production facility, and two CW storage facilities.  No filled munitions were declared. (Note A)

The “inactivated CWPF” quickly became a bone of contention.  The facility, known as al-Rabta, was soon the subject of a Libyan request for permission to convert it into a pharmaceutical plant.  However, according to paragraph 72 of Part V of the Verification Annex, conversion of any CWPF for “purposes not prohibited by the Convention” must be completed not later than six years after entry into force of the Convention.  As the Convention entered into force on 29 April 1997 and Libya did not join until March 2004, the deadline for conversion was automatically missed.  This caused the Conference to approve a Technical Change to the Verification Annex, inserting paragraph 72bis, which provides that conversion deadlines for states parties joining after the six-year deadline are to be set by the Executive Council.  This was only the second Technical Change ever made to the Convention (the first being a Canadian-instigated change - the insertion of paragraph 5bis into Part VI of the Verification Annex eliminating the requirement for 30-day advance notification of transfers of 5 milligrams or less of Saxitoxin for medical/diagnostic purposes).  Libya stated that it had completed conversion of the two separate facilities at al-Rabta in 2009.

Destroying Libya’s CW stockpile has been a lot trickier.  By the end of 2009, Libya had not destroyed any of its Category 1 chemical weapons (agent and precursors) and only 39% (551 tonnes) of its Category 2 CW.  The Categories, incidentally, are a declaration mechanism designed to assist States Parties and the Inspectorate in prioritizing declared CW for destruction.  Laid out in paragraph 16 of Part IV(A) of the Verification Annex, the categories are:

· Category 1 - CW based on Schedule 1 chemicals (the traditional warfare agents) and their parts and components

· Category 2 - CW based on all other chemicals and their parts and components

· Category 3 - Unfilled munitions and devices, and equipment specifically designed for use directly in connection with employment of CW

Bottom line, Libya still has many tonnes of mustard and an awful lot of G-agent precursor chemicals (the Cat 2 materials, mostly phosphorus compounds) still kicking around.  This is because impoverished and technically unsophisticated countries tend to have difficulty coming up with a destruction plan that does not involve burial, sea-dumping or open-air burning, all of which are prohibited by the Convention.  Incineration and neutralization by hydrolysis are both acceptable methods, provided appropriate environmental constraints are observed, but they take a certain degree of scientific and engineering competence.  I’ve heard stories about Libya’s destruction programme over the past year that would curl your hair - the phrase “and then they set the desert on fire” came up.  Fascinating, so long as you’re not downwind.

In 2009, Libya’s mustard was reloaded from small storage canisters into new tanks for shipment to the destruction facility; 22.3 tonnes of it, anyway.  The reloading process demonstrated one of the more unpleasant qualities of mustard - its tendency to polymerize in storage, forming a gooey “heel” with all of the toxicity and mutagenic properties of pure mustard, but with a consistency varying from that of molasses to that of tar to that of a hockey puck.  2.5 tonnes of this horrid goop remained in the original containers. You can’t hydrolyze polymerized mustard unless you can make it dissolve first, and that’s no easy task even with potent organic solvents.  This is why smart people prefer high-temperature incineration.  It’s the chemical equivalent to “nuking the site from orbit” - it’s the “only way to be sure.”

Interestingly, Wikileaks has provided some corroboration of the problems Libya is experiencing with its CW programs.  According to one of the released secret cables (reported in The Telegraph), the head of Libya’s CW destruction programme, Dr. Ahmed Hesnawy (who is also the former head of its CW production programme), told the US Embassy in Tripoli in  late 2009 that a “grassroots environmental campaign” and “civil defence concerns about possible leaks” had caused “all hell to break loose” with the programme.  The embassy’s comments on these explanations were sceptical about the environmental movement, but gave credence to the concern about leaks: ”Given tight Libyan Government controls over national security facilities and programs, we find it hard to believe that a grassroots movement could affect Libyan policy or action on a sensitive program such as the Rabta facility”; and “The UK DCM, who visited the storage facility earlier this year, told P/E Chief that the containers currently housing the material were in fact leaking when he observed them.” (Note B)

Libya’s difficulties prompted Tripoli to request an extension, and in December 2009, the Fourteenth Conference of the States Parties approved an extension to 15 May 2011 (C-14/DEC.3, 2 December 2009); a deadline of 31 December 2011 for Libya’s Cat-2 CW had already been approved by the 11th Conference in 2006.  Intermediate deadlines were extended by the 15th Conference last December.  There was already some doubt whether these deadlines were achievable; now, they seem unlikely to be achieved at all.  Libya could easily miss the final destruction deadline (29 April 2012), along with the US and Russia, as the destruction operations have to be carried out under continuous monitoring and verification by the OPCW, and the OPCW cannot send its inspectors into a civil war.

But at least the bombs were crushed first.  Given Colonel Mo’s demonstrated propensity for using air-to-surface weapons for crowd control, we should be grateful for that small mercy.






15 February 2011 – China’s gold gulp


The NYSE Composite Index is roughly where it was 10 years ago.  Stocks are recovering from the pounding they took a few years back when the market shed about half its value in the wake of the sub-prime mortgage collapse, but the NYSE is still nowhere near the highs it achieved in 2007.

Meanwhile, over the past ten years the price of gold has quadrupled:
Anyone who bought gold in the wake of the Dot-Com collapse could sell it now for more than four times the purchase price.  That’s a pretty decent return on investment when compared to what’s been going on elsewhere in the market.
In times of economic uncertainty, people, as they have done for tens of thousands of years, tend to put their money into precious metals.  To the detached observer and student of contemporary economic theory, this may seem counter-intuitive; after all, gold is merely another commodity, and commodity prices (as the past ten years have demonstrated) can be very volatile.  At the same time, though, absent the development of the sort of transmutation technology that Edison predicted would be in routine use by the year 2011, which - he stated - would result in the proliferation of the gold taxicabs I referred to a few weeks back, it seems that gold will always be a preferred instrument for procuring security in the face of unstable markets.  If you think prices are high now, for example, here’s what adjusted-dollar prices looked like in the early 1980s, the last time the US was in the throes of a major recession and folks were struggling to protect their assets against inflation:
There’s no mystery to it; for millennia, gold was the highest standard against which the value of currency could be measured - literally measured, as the value of coinage was dependent upon the purity and mass of the metal used.  The problems of protecting physical currency against devaluation led to terms only vaguely remembered today, such as ”debasement” of the valuta of the realm, or the practice of ”clipping” coins.  The latter bit of pecuniary malfeasance was sufficiently common in the Middle Ages that it is thought to be the origin of the popular nickname of King Eric V of Denmark (r. 1259-1286), who for his general untrustworthiness is known to posterity as “Eric Klipping.”
The malleability and fungibility of gold made the physical metal a highly useful means of transporting value in a compact form, either in coins, in bullion bars, or in finished objects that could, if necessary, be melted to produce more coinage.  Gold, of course, is no longer in use as a currency standard. While gold coins have been used for thousands of years, the use of gold as a specie standard has always been problematic due to its scarcity (silver specie being much more common).  Britain went informally to a gold standard early in the 18th Century and formally adopted the gold specie standard a century later.  By the end of the 1800s the countries that mattered were all on a gold standard; Britain used the sovereign, Germany the gold mark, America the eagle.  Canada, as usual, had a biaxial system based on both the eagle and the sovereign. The practice of “pegging” lesser currencies to more influential ones got under way at this time, leading to a gold exchange standard under which silver-based currencies were convertible to metallic gold.
The massive expenditures of the First World War broke the gold exchange standard, and countries that had hoped to return to convertibility were unable to do so due to wartime inflation and, in the case of Germany, as a result of having lost its gold supply via reparation payments.  In order to stem the flow of gold overseas, Britain in 1925 replaced the gold specie standard with a gold bullion standard (under which currencies were required to be backed by reserves of physical gold held by the government).  The US adopted a similar standard, to which move some economic historians attribute the prolongation of the Great Depression, as federal legislation required that the Federal Reserve hold a certain amount of gold to back up its currency issue (Milton Friedman disagreed, arguing that “The Fed” could have loosened the money supply without disturbing the bullion standard.  Incidentally, there’s a similar feel to the ongoing debate about whether to raise the debt ceiling in the US.  Given that the near-term alternative is default, this would seem to be a no-brainer.  But then, I’m not an economist).  After the Second World War, Bretton Woods saw the implementation of a dual system based on the US decision to set the price of gold at $35 per ounce, and an implicit decision by other (free) economies to use the US dollar as the reference point for their own currencies.  This lasted until 1971 when Nixon - prompted in part by the expense of the Vietnam War, and in part by moves by foreign nations to buy US gold with US dollars, simultaneously reducing the US gold reserve and US influence abroad - put a stop to convertibility.  Since then, the US dollar has been a fiat currency.  Amongst other features of mixed benefit, this has made it easier for governments to engage in deficit spending.
As a means of dealing with acute financial crises, the flexibility offers some benefits; however, it has also enabled governments to indulge in chronic deficit spending, which may be subsequently “inflated away” by increasing the money supply - although not without long-term costs, and not indefinitely.
Why bring this up?  Well, because we’re in the midst of a “gold boom” in Asia the likes of which we’ve never seen.  Much has been made of the gradual economic liberalization of China, but from a Western perspective the “liberalization” has been slow, irregular, rife with favouritism, prone to corruption, and so weighted down by permitting and regulations that progress has been, to put it mildly, “uneven”.  For decades - ever since the relative decline in importance of the last “Asian Tiger”, Japan - speculators have been speculating about what might happen when the “power of the Chinese consumer” is finally released.  We’re seeing some of that now.  A colleague sent me a piece by Eric Sprott and David Franklin, a pair of analysts at Sprott Asset Management, which I’ve attached for your delectation.  Entitled “Gold Tsunami”, the brief paper - it’s only three pages long - provides some startling details on what’s been going on in the world gold market since Beijing started allowing its citizens to own bullion, including gold.  This can’t have been an easy decision for the Communist Party bureaucracy; as Alan Greenspan once put it so beautifully, “An almost hysterical antagonism toward the gold standard is one issue which unites statists of all persuasions. They seem to sense... that gold and economic freedom are inseparable.”
Well, the Chinese people are exercising their newly-granted freedom with a vengeance.  China is already the world’s largest gold producer, but in 2009, it imported 45 tonnes of gold.  In 2010, it imported more than 209 tonnes.  That’s more than a four-fold increase in imports in only one year.  China’s demand is expected to hit 600 tonnes this year, which puts it right behind India (at 800 tonnes, the world’s largest gold consumer).  Between the two, they will consume more than half of the world’s annual production of about 2,650 tonnes.  And it’s not just gold.  China’s appetite for silver also increased fourfold between 2009 and 2010.  As Sprott and Franklin note, in 2005 China exported just over 100 million ounces of silver (roughly 3,110 metric tonnes) - and in 2010, they imported just over 120 million ounces: “This represents a swing of 200 million+ oz. in a market that supplied a total of 889 million oz. in 2009 - a truly tectonic shift in demand!”
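The silver and gold figures above, with the unit conversions written out (silver is conventionally measured in troy ounces of 31.1035 grams):

```python
# China's silver swing and gold import growth, from the Sprott & Franklin
# figures quoted above.

TROY_OZ_G = 31.1035                   # grams per troy ounce

exported_2005_oz = 100e6              # 2005: net exporter
imported_2010_oz = 120e6              # 2010: net importer
swing_oz = exported_2005_oz + imported_2010_oz
print(swing_oz / 1e6)                 # the "200 million+ oz." swing: 220.0

tonnes_2005 = exported_2005_oz * TROY_OZ_G / 1e6   # grams -> tonnes
print(round(tonnes_2005))             # ~3110 metric tonnes

# Gold imports: 45 t (2009) -> 209 t (2010)
print(round(209 / 45, 1))             # ~4.6-fold increase
```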
And it’s being felt.  Bullion producers worldwide are reporting shortages of bars and coins.  Sprott and Franklin argue that simple consumer spending isn’t driving the surge; according to their figures, the retail demand for gold jewellery in China has only gone up by 8%.  That said, they acknowledge that China traditionally drives gold prices up in January as Chinese citizens prepare for New Year celebrations, which include, among other things, hong bao, or “lucky money”.  It should also be noted that gifts of gold to newlyweds are traditional in China, and now, due to the relaxation of regulations and growing individual wealth, may be becoming more common.  By and large, however, Chinese citizens seem to be buying gold for a very traditional reason: to guard against the inflation that is resulting from heavy government overspending, especially on infrastructure projects widely seen as unnecessary (to put it in bleak demographic terms, Beijing is building cities for citizens who have not been, and are not being, born).  China’s unprecedented gold gulp is being facilitated by its banks on a scale that dwarfs anything conceivable in the West.  The Industrial and Commercial Bank of China (ICBC), already the biggest consumer bank in the world, recently began a “Gold Accumulation Plan” that allows investors to accumulate gold through a daily dollar averaging programme, with a floor of 1 gram of gold (roughly $42 USD) per day.  The GAP is not yet a year old (it began on 1 April 2010), but it already has 1,000,000 subscribers and has already purchased 10 tonnes of gold.  And the ICBC has 212,000,000 separate accounts.  According to Sprott and Franklin, if the GAP became popular with ICBC customers and if it were to spread to China’s other banks, consumer “gold hoarding” could swiftly account for 10% of the world’s annual gold production.
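A back-of-envelope on those GAP figures.  The subscriber count and the 1 gram/day floor come from the text; the 250-day participation scenario is my own illustrative assumption, not anything in the Sprott & Franklin paper:

```python
# What the ICBC "Gold Accumulation Plan" could absorb if every current
# subscriber bought the 1 g/day floor on most weekdays.

subscribers = 1_000_000
grams_per_day = 1                  # the plan's floor
days = 250                         # hypothetical: buying most weekdays

tonnes = subscribers * grams_per_day * days / 1e6   # grams -> tonnes
print(tonnes)                      # 250 t/yr at the floor rate

# For comparison, 10 % of annual world production:
world_production_t = 2650
ten_percent = world_production_t * 0.10
print(ten_percent)                 # ~265 t
```

Even at the floor rate, the plan's current million subscribers could soak up something approaching a tenth of world production on their own - which is exactly Sprott and Franklin's point.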
And that’s only China.  If the appetite for physical gold spreads, then the bulk of world production could swiftly be eaten up by consumer demand.  This is not a good thing; gold is one of those elements whose unique physical properties make it an indispensable feedstock for all manner of industries.  If you want to talk “strategic minerals”, it’s hard to get more strategic than gold.  Gold shortages could be a real possibility in a world plagued by economic uncertainty, especially about what America’s economic future looks like in view of the current economic situation and the policies, extant and proposed, of the present Administration.  On that topic, the Obama Administration’s budget, released Monday, forecasts that the Federal debt will jump by $2T to reach $15,476,000,000,000 by 30 September 2011.  That’s 102.6% of GDP - the first time the US has hit that debt level since the Second World War. (Note C)
Maybe the Chinese are on to something.
(H/T to Dave M., BEng, MSc, FMA, CIMC, CD, former bird-gunner, and the only rocket scientist I know)

8 February 2011 – Grain drain


I’m sure we all remember the 2008 crisis in food prices engendered by the high price of grains, especially corn, resulting from speculation as farm cropland was diverted to the production of species optimized for the production of ethanol for motor vehicle fuel (the UN Food and Agriculture Organization reported on it here).

The purpose of this post is not to debate those policies (although thoroughly unsurprising news continues to come out even today, e.g. the closure of a cellulosic ethanol plant in Georgia after making just one batch of fuel-grade product [note A], and the acknowledgement by leaders in the US ethanol industry that profitability depends heavily upon the $0.45 per gallon federal subsidy [note B]), but rather to try to put price behaviour in a little historical context.  This is worth doing as prices are likely to rise again this year, if not skyrocket, due to the continued drain on US corn stocks by the ethanol industry. According to Reuters, the end-January figures for US corn stocks are likely to be around 728M bushels, more than 40% below the figures for 2010 (1.7B bushels) - the lowest they’ve been in 15 years (Note C).

A recent paper by Daniel A. Sumner, an economics professor at the University of California, Davis, offers a little more perspective on the “historic” food grain price inclines and declines of the past few years.  Adjusting for constant dollars, Sumner tracks the price of wheat and corn from the end of the Civil War to 2008, showing how real prices fluctuated in response to various global events and trends.  Figure 1 of his paper, for example, shows clear spikes concurrent with the First and Second World Wars, an inter-War trough resulting from the Great Depression, and a smaller peak concurrent with the recession resulting from the 1973 oil shock.  Against these enormous fluctuations, the price increase that took place in the 2007-08 period was, proportionally speaking, rather minor.

As Sumner points out, one of the key reasons that the 2007-08 price increases seemed comparatively enormous is the fact that the real price of primary food grains has fallen precipitously over the past century and a half.  As the data charts show, the history since 1948 is particularly remarkable.  In the wake of the Second World War, the average price of both corn and wheat fell by approximately 2.3% per year over a 61-year period (Note D).  The rapid and sustained decline in prices made the massive jump in 1973-74, and the comparatively minor jump in 2007-08, far more noticeable.
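For those who like to check the arithmetic, here’s what a 2.3% average annual decline compounds to over the 61-year span Sumner examines (a quick Python sketch using only the figures quoted above):

```python
# What a 2.3% average annual decline compounds to over 61 years.
# Both figures are the ones quoted in the text.
annual_decline = 0.023
years = 61

remaining_fraction = (1 - annual_decline) ** years
total_decline_pct = (1 - remaining_fraction) * 100

print(f"Fraction of the 1948 real price remaining: {remaining_fraction:.1%}")
print(f"Cumulative real-price decline: {total_decline_pct:.0f}%")
```

In other words, by 2008 real grain prices stood at roughly a quarter of their post-war level - which is part of why the 2007-08 spike looked so dramatic against the preceding slide.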
To better illustrate the changes, Sumner charts the upside and downside price deviations in percentage terms (figure 2, below). 

This helps to make the relative scale of price increases and declines more evident.  It is worth noting that, for reasons of scale, the chart fails to show the full scope of the two largest upside adjustments of the past century; in 1901 the upside adjustment was 88%, and in 1934, 117%.  Against these figures, the 60% increase in 2007-08 looks rather less alarming, even though it was, proportionally speaking, the fifth largest grain price increase of the past century and a half.

One of the things that becomes instantly obvious from looking at Figure 1 is the massive decline in food grain prices over the past 150 years.  Sumner charts prices from 1948 onwards in his Figure 3 (page 4 of his paper), but it’s worth going back to the US Department of Agriculture site for the full run of data in order to push the charts back a little further (prices are available from 1866 onwards, but due to limited availability of CPI data for real-dollar conversion, I’m only looking at the post-1929 period).  You also have to convert prices into constant dollars, but those figures are available from the US Bureau of Economic Analysis, and the calculations aren’t all that complex for anyone reasonably familiar with economics, statistics, MS Excel, and division.  What you get is this:

What this chart shows is that, for the past 80 years, the real cost of food grains has been declining steadily.  The proportional rate of decrease in prices, moreover, as you can see from the near-identical slope of the linear trend-lines, is virtually the same.  What this means is that it has become a lot cheaper for the world to feed itself - hard data that refutes not only gloomy old Thomas Malthus, but also more recent prophets of doom like Paul Ehrlich, whose 1968 book The Population Bomb began with the phrase, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate…”.  Rarely has a piece of predictive futurology been so magnificently and so comprehensively wrong.
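As for the constant-dollar conversion mentioned above - it really is just division by a price index.  A minimal Python sketch (the CPI values below are invented placeholders for illustration, not the actual BEA/BLS series):

```python
# Constant-dollar conversion is just division by a price index.
# The CPI numbers here are invented placeholders, not real data.
def to_constant_dollars(nominal_price, cpi_then, cpi_base):
    """Restate a nominal price in the dollars of the base year."""
    return nominal_price * (cpi_base / cpi_then)

# A $1.00/bushel price in a year with CPI 20, restated in the dollars
# of a base year with CPI 200:
print(to_constant_dollars(1.00, cpi_then=20, cpi_base=200))  # 10.0
```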

That sort of chart simply begs for explanatory correlation, and there is of course no end of correlating phenomena.  The first one that pops immediately to mind for students of economics and history is, obviously, sunspots.  It was, after all, the inverse correlation between sunspot records and the wheat prices printed in Adam Smith’s The Wealth of Nations that William Herschel noted in 1801, prompting the astronomer to posit a relationship between solar activity and terrestrial climate.  These data are also available, courtesy of the National Oceanic and Atmospheric Administration.  The result is shown in Figure 4.

This figure is also interesting because it demonstrates how loose the correlation between grain prices and sunspot numbers has become in the past century.  Where Herschel noted an obvious, close inverse correlation between the phenomena (meaning high sunspot numbers, and therefore high solar activity, were closely correlated with better crop yields and low grain prices), what we see is a lot messier.  During the Great Depression, a period of low solar activity led to rising prices (1933-1939); and prices fell in the early years of the Second World War, when solar activity was high.  Both of these correlations tend to confirm Herschel’s interpretation.  So does the high price of grain during the low solar activity period in the late-1940s.  After this point, however, grain prices begin to fall precipitously, and continue to decline even during the low solar activity of the late 1950s (falsifying Herschel). 

High solar activity in the early 1960s seems to accelerate the price decline (again confirming Herschel); but despite a price bump in 1964-65, prices fall even further during the low solar activity of those years (a falsification).  Prices jump significantly during the solar decline of the early to late 1970s - but as this period is associated with the galvanizing effects of the 1973-74 recession, the data should be treated with a good deal of caution - as should the price bump of the early 1980s, which is (counter-intuitively) associated with high solar activity, and also (very intuitively) with the 1983-84 recession.  Both prices and solar activity fall in the late 1980s, again falsifying Herschel.  The bump in the early 1990s could be due either to increasing solar activity or to the S&L scandal.  Climbing prices coincide with low solar activity in the late 1990s, and low prices with high solar activity around 2001; and, as already noted, prices climbed significantly in the 2007-08 period, correlating with the lowest solar activity levels in a century.  These latter data also, interestingly, confirm Herschel - although as was the case with the wars and recessions of the past, it is always important to recall the impact on prices of economic events and political decisions.
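For anyone who wants to quantify this on-again, off-again behaviour, the period-by-period exercise above amounts to computing Pearson’s r over sub-intervals.  Here’s a Python sketch on entirely synthetic, made-up series, just to show how the sign of the correlation can flip between sub-periods of the same data:

```python
# Pearson's r computed on sub-periods of the same two series can flip
# sign - the decade-by-decade reading of sunspots vs. prices above is
# doing exactly this. The series here are synthetic, purely for show.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sunspots = [10, 30, 60, 80, 60, 30, 10, 30, 60, 80]
prices   = [ 9,  7,  4,  2,  4,  6,  9, 10, 12, 14]

early = pearson_r(sunspots[:6], prices[:6])  # inverse: "confirming Herschel"
late  = pearson_r(sunspots[4:], prices[4:])  # positive: "falsifying" him
print(f"early r = {early:+.2f}, late r = {late:+.2f}")
```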

What these results suggest is that the relationship between solar activity levels and grain prices first posited by Herschel more than two centuries ago remains valid, but is subject to being overwhelmed by external political and economic phenomena, all of which are inevitably of governmental or intergovernmental origin.  This in turn suggests a reversal of what, until very recently, was an iron law of the relationship between human political activity and the food supply.  Where once all political activity - war, trade, colonization, expansion and what-not - was dependent to a great extent on the availability and therefore the price of basic foodstuffs (e.g., on the provision “by divine providence” of a bountiful harvest), since the end of the Second World War, the price of basic foodstuffs has depended less on factors relating to the physical environment than on the nature and impact of political decisions made by governments.  In other words, the price of food - and for that matter, the extent to which a harvest is or is not “bountiful” - is now, and has for some time, been subject more to human than to solar influences. 

If this conclusion - in an era of mechanical planting and harvesting, chemical fertilizers, selected and genetically engineered species of wheat and corn, and international, globalized trade in food grains - sounds like a statement of the blindingly obvious, I agree wholeheartedly.  But if it is blindingly obvious, then why do we continue to see articles and books predicting the coming nutritional apocalypse, ranging from Ehrlich’s demented screed to more recent predictions of impending agricultural doom?  And why, if the impact of political decision-making has for at least the past 60 years demonstrably and completely overwhelmed any solar or other nature-derived influences on the price of basic foodstuffs, do futurologists insist, against observed evidence, that “changing weather patterns...will likely disrupt global agriculture and diminish food security?”

Not to put too fine a point on it, but the answer, in all likelihood, is that “things are fine and food is cheaper than ever” doesn’t sell either books or policy positions.  Nor, one suspects, are governments all that keen on admitting that, since 1948 at least, unsound political and economic decisions have had a far more deleterious effect on food prices than bad weather.



P.S.  Another piece that caught my attention this week was the report of a wolf attack on a cow in Idaho (Note E).  This piqued my curiosity in the context of a report from a few weeks back that a Norwegian boy had averted a potentially fatal wolf attack by playing a Megadeth song on his cellphone.  This was, of course, complete balderdash; the boy saved himself by treating the wolf to Creed’s album, “Overcome” (Note F).  Poor wolf; I’d run, too.

Interestingly, we have a much longer data baseline for wolf attacks than we do for grain prices or sunspot numbers.  Just to ensure that my data and interpretations were accurate, I decided to graph fatal wolf attacks against humans (data from Note E) opposite grain prices and, for good measure, the global average temperature anomaly.

It’s fascinating to note a correlation, at least over the past 80 years, not only between slowly rising temperatures and declining grain prices (which, when you think about it, makes sense), but also between rising temperatures and an increasing number of fatal wolf attacks.  I’m certainly not positing an inverse causal correlation between corn prices and wolf attacks (at least in the absence of additional observational evidence confirming the trend, for example, a government decision to harvest wolves to process their carcasses into biodiesel, which presumably would lead both to fewer attacks on humans and lower grain prices) but it does make you wonder about some of the deeper connections between things. 

As the Discovery Channel says, the world is just awesome. - DN

D) Daniel A. Sumner, “Recent Commodity Price Movements in Historical Perspective”, American Journal of Agricultural Economics (2009), 3.

Sunday, March 25, 2012

The Feynman Lectures


As the title of this blog suggests, in science, data come before everything else.  All theories must yield to measurements.  One of the great empiricists of all time, Richard Feynman, put this better than anyone.

The famous "Feynman Lectures" can be bought as a collection.  But they're also available online.  Here's the first of a series of seven - Feynman talking about gravitation, and how we know that a principle of science is, in fact, a natural law.

Go listen to the others, too.  In addition to being a forerunner of great popularizers of science like Carl Sagan and Neil deGrasse Tyson, Feynman had a Queens accent that makes listening to discussions of Brahe, Copernicus, Galileo, Newton and Einstein a positive joy.

Don't miss it.  Cheers,


3 February 2011 – Braking wind in Texas

“If that is the way the wind is blowing, let it not be said that I do not also blow!”
- Mayor “Diamond Joe” Quimby, The Simpsons


Having spent the last few years looking at “renewable energy” issues, this story caught my eye yesterday:
Texas’ longest period of rolling blackouts ended around 1:30 this afternoon, eight hours after the power outages began across the state.
The Electric Reliability Council of Texas, which operates the state power grid, warned it may initiate more blackouts Wednesday evening or Thursday morning during the peak demand hours.
The planned outages, which lasted up to 45 minutes, were triggered when more than 50 power plants, including a few owned by Dallas-based Energy Future Holdings, stopped working Tuesday night because of the cold weather.
The Electric Reliability Council of Texas said 7,000 megawatts of generating capacity tripped Tuesday night, leaving the state without enough juice. That’s enough capacity to power about 1.4 million homes. By rotating outages, ERCOT said it prevented total blackouts. (Note A)
The reason it caught my attention is because Texas has, among other things, the single highest installed wind power capacity and the highest annual wind-powered electrical generation stats of any state in the Union.(Note B)  The installed wind power capacity in Texas at the end of 2009 was 7,427 MW, which accounted for 7.1% of the state’s total nameplate generating capacity of 140,900 MW (about 141 GW).  This, incidentally, is higher than Canada’s total installed generating capacity of about 126 GW.  The installed nameplate generating capacity of wind turbines in Texas is roughly equivalent to the installed nuclear generating capacity of Ontario.  And it’s only going up; according to a Reuters piece from a few weeks ago, wind power grew to more than 9500 MW by the end of 2010, accounting for 7.8% of the total electrical generating capacity of Texas. (Note E)  This isn’t surprising; state subsidies for renewable energy are quite generous in Texas, and taken together with federal subsidies, can make running a wind farm very lucrative.
Of course, the amount of electricity that all those wind turbines produce isn’t quite the same as their rated capacity.  In 2009, for example, Ontario’s nuclear reactors generated 81.7 TWh of electricity. (Note C)  More than half of this - 44.2 TWh - was generated by the 10 nuclear reactors at Pickering and Darlington.  Together, these sites have an installed nameplate capacity of 6600 MW. (Note D)  Since there are 8760 hours in a year, this means that the maximum generating capacity of both plants, operating at 100% of nameplate capacity for 24 hours a day, every day of the year, would be 57.8 TWh.  These 10 units, in other words, produced 76.5% of their nameplate capacity in 2009 (in reality, Darlington was high, at 90% of capacity, while Pickering was low due to planned outages for repair and refurbishment).
90% isn’t unusual for nuclear generating stations.  The 104 nuclear reactors currently operating in the US have an aggregate nameplate capacity of 106.6 GW, for a total potential annual generating capacity of 933.8 TWh.  In 2007, those stations generated 806.4 TWh for an aggregate capacity factor of 86.35%, an impressive figure when you consider that several of those reactors were offline for maintenance (more recent generation statistics have been lower due, as I noted in a post late last year, to lower demand for power as a consequence of the shrinkage of the US economy due to the recession).  The average capacity factor for an operational reactor in the US is above 92%.
As points of comparison, consider coal (the single largest source of electrical generation in the US) and natural gas.  The installed nameplate capacity of coal-fired stations in the US is 338.7 GW, more than three times the nuclear generating capacity, for a theoretical total annual capacity of 2,967 TWh.  In 2007, the US generated 1,764 TWh from coal, 59% of the rated capacity - a result, as I have noted elsewhere, of the regulations- and recession-inspired drop-off in coal-fired generation.  Natural gas turbines, in sharp contrast, of which there are nearly 4 times as many generating stations (5,470) as there are coal-fired stations (1,436), have a nameplate capacity of 459.8 GW (4,027 TWh), but in 2009 generated only 920.4 TWh - 22.8% of their nameplate capacity.  This is not because gas turbine generation is inefficient; far from it.  Combined-cycle gas turbines are more expensive to operate than other generating plants, but they have the benefit of being very responsive, quick to start up and shut down according to shifts in demand.  As such, they make ideal back-up generators to provide power in the event that intermittent energy sources - like wind power - suddenly become unavailable.
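All of the capacity-factor figures above come from the same simple arithmetic: actual generation divided by nameplate power times the 8,760 hours in a year.  A quick Python check, using the generation and nameplate figures quoted above (with the gas generation figure taken as 920.4 TWh):

```python
# Capacity factor = actual generation / (nameplate power x 8,760 hours).
HOURS_PER_YEAR = 8760

def capacity_factor(generated_twh, nameplate_gw):
    potential_twh = nameplate_gw * HOURS_PER_YEAR / 1000.0  # GWh -> TWh
    return generated_twh / potential_twh

# The figures quoted in the text:
print(f"Pickering + Darlington: {capacity_factor(44.2, 6.6):.1%}")
print(f"US nuclear (2007):      {capacity_factor(806.4, 106.6):.1%}")
print(f"US coal (2007):         {capacity_factor(1764, 338.7):.1%}")
print(f"US gas (2009):          {capacity_factor(920.4, 459.8):.1%}")
```

Note that the station counts (1,436 coal, 5,470 gas) don’t enter the arithmetic at all; a capacity factor depends only on nameplate power and actual generation.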
Knowing all this, let’s go back to Texas.  Energy Future Holdings (EFH) owns and operates Luminant, the largest power producer in Texas, with 15,400 MW of generating capacity (1/5th of the State’s total), including 2,300 MW of nuclear and 8,000 MW of coal capacity.  Neither EFH nor Luminant had offered any press releases on the blackouts by this morning, but local news media had some reports.  According to EFH officials cited in the stories, the problems were more related to the impact of the cold weather on new coal-fired generating stations, and on the combined-cycle gas turbines that normally kick in to deal with failures in generation and/or unforeseen surges in demand.
Energy Future Holdings’ plants accounted for less than half of the total missing capacity, said Allan Koenig, a spokesman for the Luminant power generation business. He said some equipment at the new coal plants is exposed to the elements and stopped working because of the cold.
He couldn’t predict when the Central Texas plants will be working again, and he declined to say which other plants were down.
When large coal plants go down, ERCOT calls on natural gas plants to fire up quickly to meet demand. However, the state’s natural gas network was also grappling with the cold, and the pipelines had lost pressure. So some natural gas plants including at least one Luminant plant couldn’t get fuel.(Note F)
Another news article notes that the drop in pipeline pressure was due to greater demand upstream, again due to the unexpected cold.
It’s also interesting that Luminant has 900 MW of wind generating capacity, and touts itself as “the largest wind purchaser in Texas and the fifth largest in the United States.”(Note G)  900 MW is 12.1% of the wind power capacity of the entire state.  The whole question of wind power capacity raises eyebrows, because the amount of capacity lost in Texas yesterday - 7,000 MW, according to the news reports - is close to the amount of wind power generating capacity presently installed in Texas (about 9000 MW).  So the question becomes: what was happening with wind power in Texas when the lights went out?  Believe it or not, the actual data are available online at the website of ERCOT (the Electric Reliability Council of Texas), which regulates about 85% of the state’s electrical generation and transmission.  A quick overview of the data shows that an awful lot of wind farms were down for maintenance or derating, or were on “forced” withdrawal from generation, although quantifying the outages is difficult because not all wind farms are identified as such.  ERCOT hasn’t yet published its load data for this month (the latest spreadsheets available are for 2010), but it will be interesting to see what those figures show.  According to METARS data, the past 48 hours haven’t seen prolonged, unusually low wind activity in Texas; windspeed lows have occasionally been in the single digits, which is generally too low for a wind turbine to produce power at its nameplate rating, but these lows did not persist throughout the period in question.  So while the Texas wind farms were probably (almost certainly) operating at far below their rated capacity, it seems unlikely that the whole state was entirely becalmed.
That said, with such a high proportion of its power coming from wind turbines, a sudden drop-off in windspeed at exactly the wrong time - say, during the daily spikes in demand in early morning and early evening - could prove exceptionally problematic, especially if coal and gas-fired plants were also producing at below their rated capacity.  The underlying problem with wind power is that because it is intermittent, wind farms put a disproportionate amount of strain on power grids that, by virtue of their design, tend to operate close to maximum transmission capacity, and that as such have only a limited capability to absorb shocks (or “step changes”) in generation.  As one study into the problem of transmission step changes resulting from the increasing adoption of wind power notes,
When more wind power plants are connected to the system, the diverse wind resources of various locations will make the aggregate output less volatile. The statistics of power fluctuations (step changes and ramping rates), when expressed in terms of total wind power capacity, will be smaller than those of an individual wind power plant. However, the actual magnitudes of step changes in MW or kW will be larger than those of an individual wind power plant. In this situation, the system control areas will face higher regulation duty. Although the probability of calm wind at every wind power plant becomes smaller and smaller as more plants are scattered over wider areas, such an event is still possible. The maximum step changes will also be higher in magnitude.(Note H)
In other words, the more wind power plants that a grid must support, the greater the step changes in electrical production that the grid must be able to absorb, and the greater the aggregate impact of “a calm wind at every power plant”.  This is fairly obvious when you think about it, and it appears to be one of the problems that contributed to the failure in Texas this week.  It’s not immediately clear what “critical mass” of wind power a given grid can support, but yesterday’s events provide us with at least one data point.  Wind power accounts for only 7.8% of the total generating capacity of Texas, but at a time of high demand, low wind power production, and failures at conventional power plants - all due to unusually low temperatures and foul weather - the combined impact was enough to force the state’s electrical power producers to implement rolling blackouts in order to prevent a more widespread collapse of the grid. 
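The study’s point about aggregate step changes is easy to illustrate numerically.  The following Python sketch uses purely synthetic random output changes - the 100 MW plant size and the ±20% per-interval step range are invented for illustration, not ERCOT data:

```python
import random

random.seed(42)
PLANT_MW = 100  # assumed nameplate per plant - illustrative only

def step_mw():
    """One random short-interval output change for a single plant, in MW."""
    return random.uniform(-0.2, 0.2) * PLANT_MW

results = {}
for n_plants in (1, 10, 100):
    # Aggregate step = sum of independent per-plant steps
    steps = [sum(step_mw() for _ in range(n_plants)) for _ in range(2000)]
    worst_abs = max(abs(s) for s in steps)
    worst_rel = worst_abs / (n_plants * PLANT_MW)
    results[n_plants] = (worst_abs, worst_rel)
    print(f"{n_plants:3d} plants: worst step {worst_abs:7.1f} MW "
          f"({worst_rel:.1%} of fleet capacity)")
```

Relative volatility shrinks as plants are added, but the worst-case step in absolute MW grows - which is exactly the higher regulation duty the quoted study describes.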
It’s worth noting that none of the nuclear power plants in Texas went down; only the new-fangled “clean” coal plants, the emergency gas plants with their exposed and overtaxed pipelines, and the unpredictable wind turbines were affected by the record cold and snows in the southwest.  If nothing else, it’s a cautionary tale for those who advocate replacing reliable, continuous electrical generating technologies with intermittent generating options like wind turbines. 
Oh, and did I mention that Texans pay $0.1099 per kWh, 12.1% more than the national average?  That must be especially annoying when it’s -14F outside and your baseboard heaters aren’t working. 
By the way, in case you missed it, Ontario Premier Dalton McGuinty recently announced a $7B deal with Samsung to build 2500 MW worth of wind and solar power plants in Ontario.(Note I)  According to government statistics, in an entirely unrelated development, Ontario consumer electricity rates have risen 48% since McGuinty took office.(Note J)
B) Unless otherwise noted, all of the energy statistics are derived from the US Energy Information Administration at

31 January 2011 – Follow-up to ‘Political Science’


A quick follow-up to my post of a few weeks back.  I’d left dangling a question, as you may recall, about who was lying - the UK Meteorological Office, which claimed that it had “privately warned the Government” that Britain was likely to face “an extremely cold winter”; or the Cameron government, which claimed to have no record of the Met Office telling it any such thing.

Well, time has afforded us some clarity.  First, it seems that some emails between the UK Cabinet Office and the Met Office have now hit the intertubes.  A British blogger who calls himself “Katabasis” received a response to an FOI request.  According to the emails, someone at the Cabinet Office wrote to the Met Office to tell them what the official position would be: “The Met Office seasonal outlook for the period November to January is showing no clear signals for the winter”.  The Met Office wrote back: “That is fine.”

‘That is fine’.  Not a correction - for example, ‘Sorry, you got it wrong, we told you it was probably going to be bloody cold, fix it please.’

The lack of any specific warning to government of a cold winter by the Met Office is confirmed by an exchange in the House of Lords on 27 January 2011.  From Hansard:

Lord Lawson of Blaby: My Lords, can my noble friend inform the House of the statistical and scientific evidence for the Met Office’s estimate that there was only a one in 20 chance of a severe winter in 2010-11, an estimate on which the airports relied?

Earl Attlee: My Lords, my right honourable friend the Secretary of State has asked Sir John Beddington to give him scientific advice on the likelihood of future severe winters. On 25 October 2010, the Met Office provided the Cabinet Office with an updated three-monthly forecast, which suggested a 40 per cent chance of cold conditions, a 30 per cent chance of near average conditions and a 30 per cent chance of mild conditions over northern Europe.

(For those who, like me, don’t follow British politics: Earl Attlee is the Cameron Government’s spokesman for, amongst other things, transport; while Lord Lawson of Blaby, a Conservative currently outside of government, is the Chairman of the Board of Trustees of the Global Warming Policy Foundation - so the question was an obvious plant designed to put evidence of the Met Office’s error and subsequent perfidy on the record.)

In other words, what the Met Office actually told the Cabinet Office was that there was a 40% chance of a cold winter, and a 60% chance of an average or mild winter.  This is the opposite of what the BBC’s Roger Harrabin claimed, which was that the Met Office had told Cabinet that “Britain was likely to face an extremely cold winter”.

Looks like it’s the Met Office’s pants that are, once again, on fire.  Double-oopsie.



Saturday, March 24, 2012

24 January 2011 – Maunderings on trend projection


Last week, the University of Calgary published a study predicting that the world’s glaciers and ice shelves would collapse by the year 3000.  You can read about it here.

This projection contradicted data published by the National Climatic Data Center (NCDC) in the US, which recently released its temperature figures for the month of December.  I don’t know how to break this to you, but melting glaciers are the least of our worries.  According to present climatic trends, Minnesota will be uninhabitable in only a little over two centuries. 

Well...more uninhabitable. 

The data published by the NCDC suggest that, at some point in December 2289, the average temperature in Minnesota will reach absolute zero.  Helium will become a solid, all molecular motion will cease, and we’ll finally find out whether the laws of thermodynamics are really just a bunch of hokum made up by physicists who want to keep all the perpetual motion for themselves.

Impossible, you say?  Absolute zero can’t be achieved even in a lab, you say?  Oh, ye of little faith!  The data are indisputable.  Minneapolis is hurtling towards icy oblivion at this very moment, careering into a frigid abyss whence there can be no return.  You don’t have to trust me - just look at the trend!

Figure 1: NCDC climate data, Minnesota, December, 2002-2009; trend = -16.44F/decade (note A)

According to the last seven years of official US government temperature data, the average temperature in Minnesota in December is declining by 16.44 degrees Fahrenheit per decade.  It’s getting cold, fast.  There’s a silver lining, though; in well under a century the average December temperature will have fallen below -80C, which means that all of that pesky carbon dioxide will freeze, precipitate as snow, and can be shovelled up and packed away in the reefer, never to trouble us again.

You can’t argue with figures; it’s going to happen!  These are MEASURED DATA, people!

Meanwhile, did you know that the long-term trend of a sine wave is a straight line?  No, I’m not kidding.  The equation y = sin(x), where x is expressed in radians, produces a repeating curve that gives values for y ranging between 1 and -1.  If we pick two points on that curve, we can extract a trend line.  For example: (π/2, 1) and (-π/2, -1) gives a trend line where Δy/Δx = 2/π.  The equation of that line is y = 2x/π.  This, again, is not the same as the equation for the sine curve; it’s a straight line trending upwards to the right of the chart, forever, whereas the sine wave cycles between y values of 1 and -1, never exceeding either.  To drive the idea home, try it with different points.  Selecting the points (π, 0) and (-π, 0) gives you a line with a slope of 0.  Selecting the points (-3π/2, 1) and (-π/2, -1) gives you a line with a slope of -2/π.

See what I’m getting at?  By carefully selecting the end-points for your trend analysis, you can derive, from a simple sine curve, a linear trend proceeding infinitely upwards at a slope of 2/π; a linear trend proceeding infinitely downwards at a slope of -2/π; or a linear trend proceeding infinitely onwards at a slope of 0.  And none of them bear any genuine relation to the curve from which they were derived.  In short, when you project a linear trend from a cyclical curve, the direction of the trend depends on the end-points you select.  Select your end-points carefully, and you can produce just about any trend you like.
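The whole end-point game can be verified in a few lines of Python:

```python
import math

def trend_slope(x1, x2):
    """Slope of the straight line through (x1, sin x1) and (x2, sin x2)."""
    return (math.sin(x2) - math.sin(x1)) / (x2 - x1)

print(trend_slope(-math.pi / 2, math.pi / 2))       # ≈ +0.6366 (2/π): up forever
print(trend_slope(-math.pi, math.pi))               # ≈ 0: flat forever
print(trend_slope(-3 * math.pi / 2, -math.pi / 2))  # ≈ -0.6366 (-2/π): down forever
```

Same curve, three different “long-term trends”, depending entirely on which two points you feed it.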

Okay, back to Minnesota, where by the end of the century they’ll be dodging puddles of liquid oxygen in the parking lot.  As will be obvious from the foregoing examples, deriving a linear trend from a curve and projecting it indefinitely into the future is an exercise that is open to manipulation based on how cleverly you select your end-points.  In asking the NCDC plot generator to give me the above plot, I selected a year with an unusually warm December (2002) for the start of the calculation, and a year with an unusually cold December (2009) for the end.  Doing that gave me a linear temperature trend of -16.44F per decade - which, extended into the future, means that in a century, the average December temperature in Minnesota will be about -170F.
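Incidentally, the arithmetic behind the December 2289 doomsday checks out, assuming (my assumptions for illustration, not NCDC figures) a current December mean of 10F and a 2003 start year:

```python
# When does Minnesota hit absolute zero at -16.44 F/decade?
# The 10 F starting December mean and the 2003 start year are
# assumptions for illustration, not NCDC values.
ABSOLUTE_ZERO_F = -459.67
start_temp_f = 10.0
start_year = 2003
trend_f_per_decade = -16.44

decades_to_zero = (ABSOLUTE_ZERO_F - start_temp_f) / trend_f_per_decade
doom_year = start_year + decades_to_zero * 10
print(round(doom_year))  # 2289
```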

Bundle up, right?  It won’t help.  If we change the end-points to 2005 and 2007, the NCDC gives us a trend of -23.5F/decade.  Absolute zero in only two centuries!  Minneapolis is doomed!

But it’s not doomed, because three years do not a trend make.  Nor do twenty, especially when you can pick which twenty years you look at to give you the result you want.  Let’s go back to the Minnesota data.  Take a look at 2 different twenty-year periods in the data:

Figure 2: NCDC climate data, Minnesota, December, 1931-1951; trend = -1.82F/decade (note A)

Figure 3: NCDC climate data, Minnesota, December, 1983-2003; trend = +5.46F/decade (note A)

Which of these trends is accurate?  Which one should we trust to give us an idea of where temperature is going over the long term?

The answer is ‘neither’.  These are all examples of a failure of analysis known as the ‘end-point fallacy’, which is the argument that a short-term trend extracted from a long-term series is necessarily representative of the long-term series, and may be substituted for it.  In logic this is known as the fallacy of composition, i.e., inferring the shape of a composite entity from the shape of one of its constituent parts - akin to inferring, from the shape and structure of a tire, that an entire car must necessarily be circular and made of fibreglass belts and vulcanized rubber. In reality, though, you can only determine the shape and structure of the car by observing the whole car - just as you can only determine the shape and structure of a sine wave by observing the whole wave (at least for a sufficient number of cycles to determine its equation).

This is one of the key weaknesses of linear trend projection.  While it is one of the least unreliable forecasting tools available to scientists, its utility is highly conditional, and it is weakened by imperfect understanding of the nature of the trends we are attempting to project.  How do we get around this?  Well, if we MUST extract a linear trend from a cyclical phenomenon, we can minimize our failures by maximizing the observational baseline for the trend.  So in the case of December temperatures in Minnesota, we have to look at all the data available.  There’s quite a lot, actually.  More than a hundred years’ worth.

Figure 4: NCDC climate data, Minnesota, December, 1895-2010; trend = +0.1F/decade (note A)

If we expand the analysis to the full 115 years of data that are available, we find that the average December temperature in Minnesota has ranged from a low of 0F in the early 1980s to a high of 25F in the 1940s.  We also find that the trend line is an increase of +0.1F/decade (or +0.056C/decade).  December temperatures in Minnesota, in other words, warmed at a rate of about half a degree Celsius per century (roughly 80% of the 0.7C/century increase in average global temperature posited by the IPCC).  The per-decade warming trend in Minnesota December temperatures, furthermore, is 1/250th, or 0.4%, of the observed variability.  That’s a lot more reasonable than predictions of plunging or skyrocketing temperatures based on linear projections derived from more proximate end-points.
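The value of a long baseline can be illustrated with a synthetic series: a small built-in trend plus a 30-year cycle, both invented for the purpose (this is not the NCDC data).  Short windows return wild slopes; the full record recovers the true trend:

```python
import math

# Least-squares slope over windows of different lengths. The series is
# synthetic: a small built-in trend (0.01 per "year") plus a 30-year
# cycle - both invented for illustration, not NCDC data.
def ols_slope(ys):
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

TRUE_TREND = 0.01
series = [TRUE_TREND * t + 5 * math.cos(2 * math.pi * (t - 59.5) / 30)
          for t in range(120)]

print(ols_slope(series[40:47]))  # 7-year window: wildly wrong
print(ols_slope(series))         # full 120-year record: ~0.01
```

The seven-year window lands on a steep stretch of the cycle and reports a slope nearly thirty times the real one; the full baseline, spanning four complete cycles, returns the underlying trend almost exactly.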

The real problem with linear trend analysis, of course, is that the temperature trend isn’t really linear at all - it’s cyclical.  If we projected even that slight half-degree-per-century warming 1000 years into the future, we’d logically conclude that Minnesota Decembers would be 5C warmer than average in the year 3011.  But how valid would such a conclusion be?  And more to the point, how useful would it be?  Presumably somebody will still be living in Minnesota in the year 3011 (it’s at least a little more likely if the winters are 5 degrees warmer than they are at present); but if we consider the range of unforeseeable events that might “shock” us, and the fact that the trend we are linearly projecting is cyclical and demonstrably non-linear, then the unreliability and inutility of long-term linear projection become depressingly clear.  The longer the predictive timeline, the greater the statistical likelihood of error.
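The point about end-points and cyclical series is easy to demonstrate with synthetic data.  The sketch below is illustrative only - a made-up multi-decadal sine "cycle" plus a small built-in warming trend and some noise, not actual NCDC temperatures.  A least-squares line fitted to the last fifteen points reports a slope far steeper than the +0.1F/decade trend that was actually built in, while the full baseline stays close to it:

```python
# Illustrative sketch of the end-point fallacy (synthetic numbers only,
# not actual NCDC data): a slow warming trend of +0.1 F/decade buried
# in a multi-decadal cycle plus weather noise. Fitting a line to a
# short recent window picks up the cycle, not the trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1895, 2011)                     # 116 Decembers
t = years - years[0]
temps = (12.0                                     # long-run mean, F
         + 0.01 * t                               # true trend: +0.1 F/decade
         + 8.0 * np.sin(2 * np.pi * t / 70.0)     # ~70-year cycle
         + rng.normal(0.0, 3.0, t.size))          # weather noise

def slope_per_decade(x, y):
    """Least-squares slope, expressed in degrees F per decade."""
    return np.polyfit(x, y, 1)[0] * 10.0

print("last 15 years:", round(slope_per_decade(years[-15:], temps[-15:]), 2))
print("full baseline:", round(slope_per_decade(years, temps), 2))
```

The short window lands on the steep part of the cycle and reports a slope an order of magnitude larger than the built-in trend; only the long baseline keeps the projection honest.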

When we attempt to apply this method to social scientific analysis, we are also crippled by the fact that linear trend projection makes no allowance for unforeseeable, watershed events.  The classical example is London’s great manure crisis.  In 1900, the city of London had 11,000 horse-drawn cabs and several thousand buses, each of which required twelve horses per day.  Add to these the horses necessary to draw goods wagons, carts and private conveyances, and the number becomes quite significant.  New York City, in 1900, was in similar straits, with more than 100,000 horses, producing an impressive 2,500,000 pounds of manure per day, all of which had to be collected and removed.  This “fertilizer crisis” was so severe in large cities that one author writing in 1894 for The Times of London predicted that “in 50 years every street in London would be buried under 9 feet of manure.” (note B)

This did not happen, of course; the internal combustion engine, still a relative novelty in 1894, eventually replaced the horse – and far more rapidly in the cities than in the countryside.  A half-century after 1894, the streets of London were indeed buried – but they were buried in rubble rather than manure, the result of five years of mechanized warfare featuring high-altitude piston-engined bombers, pulse-jet-driven flying bombs, and liquid-fuelled ballistic rockets – weapons that would have been deemed the height of fantasy by a writer sitting in fin-de-siècle Britain, solemnly predicting that the Imperial capital was doomed to be smothered in equine feces.  The author of that piece had made the fatal error of deriving a linear trend from a non-linear phenomenon, and projecting it fifty years into the future.  He also made insufficient allowance for the “technology shock” of the internal combustion engine - which of course he could never have reasonably been expected to allow for in the first place.  “Shocks” are by definition ex-post-facto phenomena.  They only “shock” us because we failed to foresee them; if they can be foreseen, then they cannot shock us.

And the solution to that part of the problem is parsimony.  If we absolutely must derive linear trends from non-linear phenomena and attempt to project them into the future, then the only way to minimize the certainty of error is to use all of the data available to us to better understand the nature of the phenomenon we are trying to model; to acknowledge all of the possible sources of error, and bubble-wrap our analysis in caveats; and to keep the period of our artificial linear projection as short as we possibly can.

The importance of parsimony in trend projection cannot be overstated.  Exactly 100 years ago, the pre-eminent technologist of the era, Thomas Edison, was asked to envision the technology of the year 2011, a century hence.  His answers were published in the June 23, 1911 edition of the Miami Metropolis.  Among Edison’s predictions were the following:
·         Electric trains driven by hydroelectric power (correct);
·         The demise of the steam engine (“as remote an antiquity as the lumbering coach of Tudor days” - incorrect, as coal- and oil-burning steam turbines produce most of the electricity on the planet, just as they did in Edison’s day);
·         Travelers “will fly through the air, swifter than any swallow, at a speed of two hundred miles an hour” (right on flight, but off by a factor of four on velocity);
·         Houses will be built and furnished entirely out of steel (“The baby of the twenty-first century will be rocked in a steel cradle; his father will sit in a steel chair at a steel dining table, and his mother’s boudoir will be sumptuously equipped with steel furnishings” - incorrect; while steel is more widely used these days, plastic is the ubiquitous material, while houses are still built predominantly out of concrete, bricks and wood);
·         Future books will be printed entirely on “leaves of nickel, so light to hold that the reader can enjoy a small library in a single volume. A book two inches thick will contain forty thousand pages, the equivalent of a hundred volumes; six inches in aggregate thickness, it would suffice for all the contents of the Encyclopedia Britannica. And each volume would weigh less than a pound.” (Incorrect - although this would be really cool);
·         We’ll all be riding in golden taxis due to the demise of gold as a monetary standard.  Edison was right about the demise of the gold standard, but for the wrong reason; he argued that we would soon be able to transmute iron into gold (“We are already on the verge of discovering the secret of transmuting metals, which are all substantially the same in matter, though combined in different proportions...Before long it will be an easy matter to convert a truck load of iron bars into as many bars of virgin gold.” - very incorrect, to say the least).(note D)
It’s crucial that we recognize that in making these predictions, Edison was not predicting anything revolutionary; he was working from known technological discoveries and ideas that existed in his time, and projecting them a century forward.  Why shouldn’t he predict aircraft flying at 200 mph in another century?  Airplanes were already flying at close to 100 mph when he made his prediction.  Doubling that after a century’s worth of work would not be a stretch.  As for being right on electric trains and hydroelectric power, this isn’t surprising given that Edison had himself opened one of the first hydroelectric generating stations thirty years earlier, in 1882.  None of this was truly earth-shattering.
What’s more important is what he didn’t predict.  He predicted none of the things that really changed the modern world: antibiotics, jet engines, television, nuclear power, nuclear weapons, genetic engineering, synthetic polymers, computers, space travel...the list is endless.  It’s not Edison’s fault; the man wore starched collars, had never heard of a “tank”, and might have taken over-the-counter radium pills.  Even in places where there were clues - the “wireless”, for example, was well-known in his time, and the base technologies necessary for “television” would be in place before his death - Edison didn’t “blue-sky” anything.  Everything he predicted was evolutionary, not revolutionary.  And he still got most of it wrong.
Let’s be fair, though.  How could Edison have rationally predicted space travel anyway? Even if he thought it might one day be possible, there was nothing to base his prediction on.  It would be another 15 years before Goddard launched the first liquid-fuelled rocket.  Having never seen a liquid-fuelled rocket, how could Edison have imagined that only fifty years after Goddard’s launch - which Edison lived to see - we would be using colossal versions of that rocket to dispatch robots on journeys that would take them out of the Solar System (and to threaten each other with weapons capable of unimaginable devastation)?  Even if he’d dreamed of space flight, as his contemporary H.G. Wells did, Edison, as a serious and practical scientist, couldn’t have made such predictions without sounding like a fantasist at best, and a lunatic at worst.  Voyager was simply not predictable from the knowledge base of his era.  None of the reasonable “trend lines” of 1911 pointed at space travel.  There was no rational way to get to “here” - the present that we know - from “there”, Edison’s day.  Meanwhile, those trend lines that did make sense to him pointed at a lot of things - like nickel books, steel houses, and gold taxicabs - that were never to be.  Edison’s only accurate predictions in that article - electric trains, hydro power, and 200 mph aircraft - were simply marginal refinements of things that were already being done.
Minnesota’s impending icy doom and Edison’s gold taxicabs illustrate the problem that analysts face whenever we are asked to look to the future.  The best we can do is work from a comprehensive knowledge of the past, and make reasonable, parsimonious, short-term predictions based on every last scrap of data we can muster.  The alternative is to eschew the study of historical trends, select our endpoints creatively to hype the story we’re trying to sell, and project the derived linear trends decades into the future, secure in the knowledge that we’ll all be retired or dead before our predictions are disproven.  Even if we take the prudent, scientific course, as Edison did, we’re going to be wrong most of the time.  The people who, in 1911, invested in hydro power, electric trains and aircraft presumably did fairly well.  Those who sold all of their soon-to-be-worthless gold and put their money into the nickel book-printing and all-steel-housing industries, though...
Bottom line, whenever we pull the lever on the What-If? machine, we’re taking a risk.  We can apply science to mitigate that risk, or we can allow our imaginations free rein and have fun with it.  Like that chap in London who in 1894 projected a near-term trend into the distant future and found that it led inevitably to nine-foot-deep piles of manure, I guess it all comes down to what we feel like shovelling.

(A)    []

(B)    Stephen Davies, “The Great Horse Manure Crisis of 1894”, The Freeman, September 2004, 33 [].