Sunday, July 29, 2012

THAT's Watts Up With That!

So THAT's Watts Up With That!

Go read the paper.  But in my opinion, this is the money quote:

  • NOAA adjustment procedure fails to address these issues. Instead, poorly sited station trends are adjusted sharply upward (not downward), and well sited stations are adjusted upward to match the already-adjusted poor stations. Well sited rural, non-airport stations show a warming nearly three times greater after NOAA adjustment is applied.

Bottom line:

  • What the compliant thermometers (Class 1&2) say: +0.155°C/decade

  • What the non-compliant thermometers (Class 3,4,5) say: +0.248°C/decade

  • What the NOAA final adjusted data says: +0.309°C/decade

In other words, NOAA adjustments to raw data - adjustments that are in the direction OPPOSITE to what you would expect to account for Urban Heat Island effect - are responsible for fully half of the alleged warming in the US temperature record.
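
For the arithmetically inclined, the "fully half" figure falls straight out of the trend numbers quoted above.  Here's the back-of-the-envelope check in a few lines of Python (my own sketch, using only the figures above, not anything from the paper itself):

# Share of the NOAA-adjusted US warming trend attributable to adjustment,
# using the trend figures quoted above (degrees C per decade).
compliant_trend = 0.155      # Class 1 & 2 (compliant) stations
noncompliant_trend = 0.248   # Class 3, 4 & 5 (non-compliant) stations
noaa_adjusted_trend = 0.309  # NOAA final adjusted data

share = (noaa_adjusted_trend - compliant_trend) / noaa_adjusted_trend
print(f"Adjusted trend in excess of the compliant-station trend: {share:.0%}")
# prints roughly 50% - i.e., about half of the adjusted warming trend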

By the way, 1.5 degrees per century is the same as the rate of warming observed from 1700-1850, long before the SUV was invented.  So the rate of warming in the US over the past 40 years is no different from the rate of warming before humans started burning fossil fuels in significant quantities.

So...if there's no unusual warming, then why do we have to make up a theory to explain it? Where, in short, is the evidence for an anthropogenic impact on temperature?

Ask the NOAA.  It's in their adjustments.

Well done, Watts et al.

Now, not to put too fine a point on it, but some of us have been saying this - or at least, TRYING to say it - for years:


Variances between satellite temperature records – by far the most accurate and global means of measuring atmospheric temperature – and land-based thermometer records are largely the result of significant problems with the latter, which are affected inter alia by land-use changes, the Urban Heat Island (UHI) effect,[1] station drop-out,[2] inconsistent coverage, poor station siting,[3] and manual alteration of data.[4]  The difference is manifested in a strong divergence between the satellite and land-based temperature record over the past six years.  Since 2003, the satellite record shows temperatures dropping at a rate of 2.84ºC/century (UAH record) and 3.60ºC/century (RSS record), while the land-based temperature data maintained by the Goddard Institute of Space Studies shows a rate of decline of only 0.96ºC/century (GISS).[5] The divergence between the satellite and the land-based temperature record is becoming more pronounced over time.  It is curious, although perhaps not surprising, that despite the many problems with the quality of the data in the land-based temperature record, the proponents of the AGW thesis prefer the results it produces over those produced by satellite temperature measurements.  All of these data, incidentally, are available on-line.[6]




[1] See Ross R. McKitrick and Patrick J. Michaels, “Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data”, Journal of Geophysical Research, Vol. 112, D24S09, doi:10.1029/2007JD008465, 2007 [http://www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGRDec07.pdf].
[2] The number of global temperature measurement stations dropped from a high of 6000 in 1970 to roughly 2000 today.  Most of the dropouts occurred at the fall of the Soviet Union, and most of the lost stations were rural.  The result is a higher statistical emphasis on urban stations, exacerbating the contamination from UHI and land-use changes.  See Joseph D’Aleo, “Recent Cooling and the Serious Global Data Base Issue” [http://icecap.us/images/uploads/Recent_Coolingand_the_Serious_Data_Integrity_issue.pdf].
[3] See the “How Not To Measure Temperature” series maintained by Anthony Watts at [http://wattsupwiththat.com/category/weather_stations/].  Recent examples include thermometers situated in front of air conditioner exhausts, near barbecues, and buried in Antarctic snow.
[4] A data posting error by the NASA GISS staff in autumn 2008 (in which September temperature data for Siberia was mistakenly copied over October data) led to renewed cries of impending doom by the AGW alarmists, until the data error was reported by a blogger, and subsequently corrected by GISS.  For a breakdown of the incident, see [http://wattsupwiththat.com/2008/11/12/corrected-nasa-gistemp-data-has-been-posted/].

[5] Steve Goddard and Anthony Watts, “GISS Divergence with satellite temperatures since the start of 2003”, 18 January 2009 [http://wattsupwiththat.com/2009/01/18/giss-divergence-with-satellite-temperatures-since-the-start-of-2003/].

[6] NASA GISS data are here [http://data.giss.nasa.gov/gistemp/].  RSS MSU/AMSU data are here [http://www.ssmi.com/msu/msu_data_description.html].  UAH AMSU data are here [http://datapool.nsstc.nasa.gov/].  Hadley Centre data are here [http://hadobs.metoffice.com/hadat/].


It's certainly nice to be vindicated. Anthony, Ross and company have earned a pat on the back.

//Don//

P.S. It's amazing what you can do when you have actual MEASURED DATA to work with, isn't it? As opposed to the outputs of models tailored to your prejudices - or, as is so often the case these days, a load of crap you just made up?

 

Saturday, July 28, 2012

19 August 2011 – To serve man

Colleagues,

What a marvellous cavalcade of weirdness this week, kicked off by the government’s Tuesday announcement of the reinsertion of the Royal qualifier to the titles of, inter alia, the RCN and the RCAF.  The Land Force became the “Canadian Army”, but as a lifetime member of the senior land service, the Royal Regiment of Canadian Artillery, I don’t feel even slightly slighted.  I wonder when we’ll be getting the pips back?

The announcement did, however, entice the perennial proponents of unification to couch their lances and rush headlong against the windmills of what they perceive to be folly.  Leading the charge, as expected, was the indefatigable Paul Hellyer, erstwhile architect of that “act of mayhem in the name of administrative tidiness”.  In an op-ed piece in the TorStar (natch), Hellyer lambasted the government for “turning back the military clock”, characterizing his decision to unify the Forces as necessary to avoid the sort of wasteful duplication described in the report of the Glassco Commission, and placing all of the blame for the subsequent failure of his policy to realize its objectives on the “dreadful blunder” in 1974 of unifying the civilian and military headquarters in Ottawa.(Note A)

For reasons that escape me, longevity seems to be taken to confer seriousness, at least in the fundamentally unserious medium of the mainstream media (of which the Red Star is one of the more spectacularly unserious examples).  Hellyer’s corporeal persistence, for example, makes him the longest-serving Privy Councillor, ahead even of Prince Philip, another chap famous for verbal eruptions that range from the hilarious to the downright perplexing.  Hellyer’s self-exculpatory expostulations will no doubt continue to provide grist for discussions among strat analysts for years to come.  Today’s topic, however, is less about the five years he spent as defence minister under Pearson, and more about the forty-plus years he has been involved in the UFO community.

As an ardent sci-fi fan myself, I’ve always been perplexed that someone could believe that the Earth is frequently visited by aliens, while simultaneously arguing against the weaponization of space.  At the very least this strikes me as appeasement.  At least he’s not suggesting that we pre-emptively baste ourselves with a nice vinaigrette in case E.T. is looking for a low-carb snack.

Which, in a roundabout way, brings me to the subject of this week’s message.  Earlier this year, three scientists - two from Penn State, specifically the geography and meteorology departments, and one from NASA’s planetary science division - published a paper in Acta Astronautica (AA) that, to parse their abstract, analyzes “a broad range of contact scenarios in terms of whether contact with ETI [extraterrestrial intelligence] would benefit or harm humanity”.(Note B)  Now, I know that I’ve been a harsh critic of unbounded speculation in scenario-writing, but even I have to confess that the AA paper - which is allegedly peer-reviewed - goes far beyond even the most bizarrely fabricated goofball ideas that have ever been presented by any part of the official speculative fiction community.

I say this for one simple reason: evidence.  I laughed out loud when I read a spectacularly improbable story about a US CVN being sunk by an orbiting particle-beam weapon - but my laughter was based only on lack of probability rather than an utter lack of evidence.  The problem was that the scenario simply assumed too much.  For example, it assumed (a) that particle beam devices are infinitely scalable, (b) that their power input-to-output ratios can be improved from the poor ratios obtainable today, (c) that power supplies can be made small enough to place in orbit, (d) that aiming problems can be solved, (e) that atmospheric blooming and scattering can be compensated for, (f) that a charged particle beam with sufficient power to punch through armour steel can even be built, (g) that all of this could happen in the next 20 years, and most significantly (h) that the US government will have made no concomitant technological progress and would not notice a potential adversary building such weapons, building the space program to place them in orbit, launching the rockets to orbit the weapons, assembling the weapons and power supplies in orbit, and then leaving the weapons in orbit - where they could be brought down by nothing more elaborate than a 1980s-era ASAT rocket launched from a 1970s-era F-15.

Who needs anti-ship missiles and wake-following torpedoes when you’ve got one of THESE bad boys?

But to be fair, there are particle-beam generators (there’s a big one at CERN right now called the LHC, doing its best not to make black holes or otherwise exterminate humanity, as well as others around the world); there are big power supplies; there are rockets; and there are aircraft carriers.  So there were at least a few grains of evidence to support the outlandish scenario posited in the tale in question.  This is not the case in the AA piece.  The authors themselves admit that “we do not know how contact would proceed because we have no knowledge of ETI in the galaxy.”

There are, to put it simply, no data points.  None whatsoever.  The scientific method has four steps: observe, hypothesize, experiment, synthesize.  If you have no observations then you have nowhere to start.  There are no observations that suggest the existence of alien intelligence anywhere, full stop.  Not so much as a squeak on the 21-centimetre hydrogen line, much less crashed spaceships, technology of unambiguously extra-terrestrial origin, or miscellaneous goo samples from super-secret autopsies.  To offer a geometric analogy, if you have a simple Cartesian plane with a single point on it, you can infer all manner of possible things.  It could be just a point.  It could be part of a straight line, or a parabola, or a hyperbola, or a circle.  It could be the two-dimensional cross-section of a line intersecting the plane from a third dimension, or one of the corners of a cube, or a four-dimensional tesseract.

Where ETI is concerned, we don’t even have that first point.  There is simply nothing to go from.

What we do have, though, is plenty of “evidence of absence”.  We can rule a lot of things out, and we rule more things out every year.  Back in the 1940s and 1950s, Heinlein wrote his famous “juvies”, the juvenile sci-fi novels that launched his career.  Farmer in the Sky had humans living unprotected on Ganymede.  Several of them - Space Cadet and Between Planets, for example - posited quasi-terrestrial atmospheric conditions on Venus.

Women are from Venus.  So are six-legged space lizards.

We now know that the surface temperature on Venus is more than 450 degrees Centigrade, and that its atmosphere consists mostly of carbon dioxide and is nearly a hundred times as dense as Earth’s.  Unprotected, human life cannot exist on Venus - and given what we know about terrestrial life, it is difficult to imagine any kind of intelligent life evolving or existing there at all.  Piezophiles - micro-organisms that thrive at extreme oceanic depths - can tolerate higher pressures, but even the most startling hyperthermophiles that exist around hot springs and geysers, or in sub-oceanic volcanic vents, cannot tolerate temperatures much above the boiling point of water.  We have no data suggesting that any kind of life could exist in the conditions that prevail on Venus.  Astrobiologists are holding out hope for Mars and the Jovian moon Europa, and astrophysicists continue to hunt for exoplanets that occupy the “Goldilocks Zone” - not too hot, not too cold - around their respective stars, but where evidence of life is concerned, to date we have nothing.  We don’t even have that first point.

Which makes the kind of speculation offered in the AA article at once both uplifting and sad.  I say uplifting, because in many ways I agree with the authors. I think SETI is a noble endeavour.  I think we should do our best to see if we’re alone or not, and I think it would be odd, and not a little lonely, if we were.  I think contact with an ETI would indeed be “one of the most important events in the history of humanity”.  I just don’t think these authors are doing science.  A good example is their appeal to the argument by scale underlying the Fermi Paradox - the suggestion that, in a galaxy containing somewhere between 200 and 400 billion stars (and, for that matter, an observable universe containing something like 70 sextillion), the probability that other intelligent life exists approaches certainty.  The commonest expression of the argument is the infamous Drake Equation, postulated in 1961 by astronomer Frank Drake and popularized by Carl Sagan in the 1980 tele-documentary Cosmos.  The Drake Equation is usually expressed as a product of individual probabilities (Note C):

N = R* × fp × ne × fℓ × fi × fc × L

where:

N = the number of civilizations in our galaxy with which communication might be possible;

and

R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fℓ = the fraction of the above that actually go on to develop life at some point
fi = the fraction of the above that actually go on to develop intelligent life
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space

The problem with the equation is that while it gives the impression that it is scientific, it really isn’t, because too many of the variables are unknown and not subject to being known by any prospective research or technology.  We do not, for example, know with any certainty the average rate of star formation in our galaxy.  Apart from the example of our own solar system and our close neighbours, we have no idea what fraction of the stars in our galaxy have planets.  Apart from the example of our own solar system, we have no idea how likely it is that planets will evolve life, let alone intelligent life (in our solar system, the chance is either 1 in 8 or 1 in 9, depending on whether you count Pluto. Is it that high - or low - everywhere else in the galaxy?).  Apart from our own example, we don’t know what fraction of intelligent life eventually goes on to release detectable signals into space (our example suggests that the ratio is 1:1 or 100%.  But one point does not a curve make).  And finally, we have no idea how long, on average, signal-emitting civilizations last before they disappear.  Our example suggests 60+ years at least, counting from the television transmissions from the 1936 Berlin Olympics that, according to Carl Sagan’s book and movie Contact, were picked up at Vega.  But again...we’re only one point on the curve.  There could be civilizations that last for millennia or even millions of years; there could be civilizations that exterminate themselves before they invent radio.
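
Just to illustrate how completely the answer depends on those unknowable inputs, here's a minimal sketch of the equation in Python.  The parameter values are purely illustrative guesses of my own - nobody's published estimates - chosen only to show how far apart "plausible" answers can be:

# Minimal Drake Equation sketch.  All parameter values are illustrative
# guesses, not published estimates; the point is the spread, not the answer.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Optimistic guesses for the biological and social terms...
optimistic = drake(R_star=7, f_p=0.5, n_e=2, f_l=1.0, f_i=0.5, f_c=0.5, L=10000)
# ...and pessimistic guesses, changing only the terms we know nothing about.
pessimistic = drake(R_star=7, f_p=0.5, n_e=2, f_l=0.001, f_i=0.001, f_c=0.1, L=100)

print(f"Optimistic:  N is roughly {optimistic:,.0f}")    # tens of thousands
print(f"Pessimistic: N is roughly {pessimistic:.6f}")    # effectively zero

Same equation, same galaxy, and the "answer" swings from tens of thousands of civilizations to effectively none - which is another way of saying that the equation tells us about our assumptions, not about the galaxy.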


Is it “billiyuns and billiyuns”, or is it just us?

Are we representative?  Do we fall on the midpoint of the bell curve of galactic intelligent life?  As species go, are we about average, are we prodigies, or are we morons?

We. Don’t. Know.  There is no evidence beyond ourselves.

Without that evidence, all that is left is speculation.  To borrow a phrase from a philosopher of note, “the shelves are groaning” with speculation about how contact with an ETI might turn out.  Heinlein, the dean of sci-fi, gives all sorts of examples.  In Starship Troopers (the outstanding book, not the egregious film), humanity finds itself at war with the expansionist, hive-minded “bugs” and their humanoid lackeys, the “skinnies”.  In The Star Beast, Venus gives us intelligent sauropods.  In Podkayne of Mars and Stranger in a Strange Land, Mars gives us ancient, withered, cerebral and even telepathic beings; and in The Puppet Masters, Titan serves as a base for shoulder-riding parasites that control our minds.  Larry Niven’s “Known Space” novels give us the predatory Kzinti, the unbelievably advanced and pathologically cowardly Puppeteers, the extinct, telepathic Slavers, the mutant Pak Protectors, the non-breathing, deep-space Outsiders, and a whole host of other ETIs.  And according to Carl Sagan’s Contact, any alien civilization we make contact with is statistically likely to be so far in advance of us that there will simply be no basis for mutual communication.  We would be the drooling new-born infants of any Federation of Planets.

There is, in short, no lack of unfettered speculation about what human contact with an ETI might be like.  There are just no facts.

In the AA paper, the authors go into great detail discussing the potential intentions of ETI vis-à-vis humanity, more or less dismissing predation as a concern on grounds of protein incompatibility, but positing possible interest on the grounds of evangelism (oh, the horror of an extraterrestrial Tammy Faye Bakker!), or perhaps as a consequence of some ability or trait humans possess that aliens may find entertaining or amusing.  In such circumstances, one hopes that our new alien overlords would find “Shakespeare in Love” or chainsaw juggling more appealing than, say, “300”, ”Deathrace”, or, heaven forfend, “Jersey Shore”.

The authors, perhaps unsurprisingly, spend a great deal of time addressing ecological arguments.  An aggressively expansive species, they argue, could outstrip its resource base and thus find itself preying on unspoiled planets - the sort of argument driving the “locust-like” and unambiguously nasty aliens thwarted by Will Smith and Jeff Goldblum in Independence Day, a film in which the aliens had interstellar travel and city-obliterating particle-beam weapons (sound familiar?) but apparently forgot to download the free version of Norton Antivirus.  According to the authors, “This type of ETI civilization would likely consume all the resources of Earth and destroy humanity if we got in its way” - a particularly illogical argument, as an aggressively consumptive species would presumably be actively looking for resource-rich planets, rather than simply preying on those that “got in its way”.  Beef cattle do not “get in the way” of the abattoir.

This line of reasoning eventually leads the authors to their most astonishing conclusion - that we might inadvertently invite destruction through too much success as a species:

Another recommendation is that humanity should avoid giving off the appearance of being a rapidly expansive civilization. If an ETI perceives humanity as such, then it may be inclined to attempt a preemptive strike against us so as to prevent us from growing into a threat to the ETI or others in the galaxy. Similarly, ecosystem-valuing universalist ETI may observe humanity’s ecological destructive tendencies and wipe humanity out in order to preserve the Earth system as a whole. These scenarios give us reason to limit our growth and reduce our impact on global ecosystems. It would be particularly important for us to limit our emissions of greenhouse gases, since atmospheric composition can be observed from other planets. We acknowledge that the pursuit of emissions reductions and other ecological projects may have much stronger justifications than those that derive from ETI encounter, but that does not render ETI encounter scenarios insignificant or irrelevant.

A-a-a-and THERE IT IS!  Just as surely as the seasons come ‘round, I knew we would eventually get to greenhouse gas emissions.  Apparently now I have to stop driving my SUV lest environmentally conscious aliens destroy us.  And here I thought that exterminating the human race in the interest of saving the planet was a peculiarly human psychosis.  This is the most astonishingly obtuse of the authors’ many obtuse deductions, and displays an ignorance of basic science that is nothing short of appalling.  I hardly know where to begin, but as it’s the duty of every would-be scientist to drive a stake through the heart of unsubstantiated and unjustifiable prognostications, let’s give it a go, shall we?
First, even assuming that an ETI is close enough to detect the feeble electromagnetic signals we have been emitting for the past six decades, we are unlikely to “give off the appearance of being a rapidly expansive civilization” as those signals are coming from only one planet.  Any alien species sophisticated enough to detect our radio signals at any distance (say, outside of our solar system) would have had to develop the technology to travel to our solar system, which would mean that they had either perfected long-duration sub-light travel (with all of its attendant problems of generational crews and relativistic time dilation), or had, through some science entirely inconceivable to us, developed the ability to travel either faster than light, or through some non-Einsteinian geometric manipulation of spacetime.  Any species with that kind of technology would likely be comforted rather than alarmed by the comparative lack of sophistication of our space travel, power and energy, and communication technologies.  For example, when coming across the gold phonograph record bolted to Voyagers 1 and 2, they would likely be relieved that we had not yet, as of their respective launches, developed the CD (although they might find Chuck Berry’s warbling rendition of “Johnny B. Goode” a little difficult to comprehend, having no idea what a “guitar” is, much less a ”gunny sack”).
Second, any attempt to deduce the psychological, much less strategic, motivations of an unknown ETI - particularly by ascribing human motivations to them, as the authors do - is without logical foundation.  The assumption that a “universalist” ETI would value an eco-system over its clearly dominant species is so utterly and anthropocentrically speculative as to be laughable.
Third, as with Voyager, Cassini, Pioneer and our other long-range probes, any species that was able to detect accumulations of greenhouse gases in our atmosphere and that decided to pre-emptively exterminate humanity on that basis would be even more ignorant of basic science than the authors of the article (by the way, detection of atmospheric trace elements at a distance is not impossible; we are already able to spectroscopically analyze the atmospheres of the exoplanets we have detected when they occlude their respective stars, identifying water vapour, sodium vapour, methane and even CO2 in the atmospheres of some of them).  The problem is that CO2 levels in our atmosphere are at present less than 400 parts per million; but only a few million years ago, long before the genus homo emerged (much less started making cement and driving Jeeps), there was more than five times that concentration, as a result of entirely natural processes.  Even today, humanity is responsible for only about 3% of the annual CO2 emissions into the atmosphere; the rest comes from forests, oceans and other biological processes.  Methane, sulphur dioxide, and so forth are all produced by natural processes as well.  One could infer technology levels from some particulate elements in the atmosphere – for example plutonium, which is not produced by natural processes – but particulate matter is not identifiable by spectroscopy.
Fourth, apart from respiration, the bulk of human CO2 emissions come from burning fossil fuels and making cement.  No species capable of interstellar travel is likely to find those technologies even remotely threatening - and if they’re the sort of folks who blitz around the galaxy wiping out intelligent alien species because they’re adversely affecting their own planetary environments, then I doubt they’re the sort of folks we’re likely to be able to negotiate a peaceful coexistence treaty with anyway.  Any alien species that went around obliterating planets because of high CO2 concentrations would be operating on pretty flimsy science, and would in any case probably have to start with Venus, which would, when it vanished in a burst of neutrons, give us at least a little warning to set our affairs in order before the BEMs decided to turn the carrier-killing death ray on us.
Refused to ratify Kyoto, eh?  You brought this on yourselves, pathetic Earthlings!

Not surprisingly, the authors of the AA paper decline to quantify anything in their outline scenarios.  This is because there is nothing in them that CAN be quantified.  They describe their paper as “an important step towards a quantitative risk analysis” but note that such an analysis is beyond the scope of what can be accomplished in a single paper (the understatement of the century) “and thus must be left for future work” - i.e., a future in which there is more data about alien intelligence to support quantitative analysis than, you know, NONE.  They then proceed to a paragraph loaded with disclaimers in which they admit that they have no empirical data about ETI and that all of their speculation is therefore based on human experience, and acknowledge that “it is entirely possible that ETI will resemble nothing we previously experienced or imagined” - the scholarly equivalent of confessing that the preceding 26 pages may be nothing but a load of bollocks.  They then suggest that future work should look at, amongst other things, possible impacts of contact on the ETI themselves, and “feel strongly that consideration of impacts to nonhumans represents an important area for future work.”

Wow.

Look, this is a fun subject to play with.  Extrapolation is fine so long as it’s deductively rational and you have something to extrapolate from.  But when you don’t even have that first data point, you have absolutely nowhere to begin, and therefore nowhere to go but into the realms of imagination.  At the end of the day, the only difference between the AA paper and the average pulp title published by Tor or Del Rey is that the latter is more likely to be commercially viable because it’s better written; and the only difference between these ‘scientists’ and the crazy guy ranting about chemtrails, mind control and the Trilateral Commission in the corner booth at Quiznos is that the NASA and Penn State guys are publicly funded. 

At the end of the day, anything based on ungrounded speculation and lacking even a single piece of observed evidence ain’t science.  You can’t make something out of nothing, and it does our profession - and our clients - a disservice to pretend otherwise.

Cheers,

//Don//

Notes

A) http://www.thestar.com/opinion/editorialopinion/article/1041006--military-clock-turned-back-to-the-past

B) http://arxiv.org/abs/1104.4462

C) http://en.wikipedia.org/wiki/Drake_equation

Friday, July 27, 2012

WTFUWT??!?

Anthony Watts of the incomparable science and climate blog Watts Up With That has posted a genuine attention-grabber of a message:

------------------------------------

Something’s happened.

From now until Sunday July 29th, around Noon PST, WUWT will be suspending publishing. At that time, there will be a major announcement that I’m sure will attract a broad global interest due to its controversial and unprecedented nature.

To give you an idea as to the magnitude of this event, I’m suspending my vacation plans. I weighed the issue, and decided (much to my dismay) this was more important. I can go on vacation trips another time, but this announcement is not something I can miss now and do later.

Media outlets be sure to check in to WUWT on Sunday around 12PM PST and check your emails.

If you wish to be automatically notified of the updates, click on the “Follow Blog via Email” button about midway down the right sidebar.

Comments are closed, and I will not be responding to emails until Sunday.

Thank you for your consideration and patience – Anthony Watts

-----------------------------------

Over the years, Anthony's site has been an indispensable source of unbiased scientific information on a surprisingly broad range of topics.  It's a daily read for me.  I hope whatever this is helps to justify all of the incredibly hard work he's put into WUWT.

Now I can't wait for Sunday!

//Don//

Thursday, July 26, 2012

12 August 2011 – Charged up

Colleagues,

The Ontario government announced this week that it would invest $80M in charging stations for electric cars.  Around about the same time I came across a few articles on electric vehicles that caught my eye, and that engendered a little arithmetic.

According to the NRCAN 2007 Canadian Vehicle Survey, in 2007 there were 19,003,427 “light vehicles” (cars and trucks) on Canada’s roads (more than one third - 6,957,086 - in Ontario alone); 392,608 medium trucks; and 314,877 heavy trucks.  A “light vehicle”, by the way, is defined as any vehicle with a gross weight of less than 4.5 tonnes.  The vast majority of light vehicles (18.3M) are gasoline powered; most of the rest are diesel-powered, and a small percentage (64,587, or 0.35% of the total) use “other” fuels, principally propane.

In 2007, Canada’s light vehicles travelled 300,203,000,000 km (300 billion kilometres).  To do so, they burned 31,305,000,000 litres of gasoline and 1,291,000,000 litres of diesel, racking up average fuel consumption rates of 10.8 litres of gasoline, or 12.3 litres of diesel, per 100 km travelled.  Gasoline contains 34 MJ/litre, so on average, light vehicles in Canada used 34 x 10.8 / 100 or 3.672 MJ per km travelled.

The Chevy Volt has a 200-kg lithium-ion battery that according to GM literature gives the vehicle an electric-only range of 25-50 miles (40-80 km) “depending on terrain, driving techniques and temperature”.  The battery holds a maximum energy charge of 16 kWh, or 57.6 MJ.  At the best-case range of 80 km per charge, this means that the Volt uses 0.72 MJ/km; at the worst-case range of 40 km, twice that, or 1.44 MJ/km - in other words, somewhere between one-fifth and two-fifths of the energy required by light vehicles powered by internal combustion engines.  Clearly, EVs are more energy-efficient than IC vehicles.  No surprise there; electric motors have always been more efficient at turning input into output.*

Of course, the Volt figures are theoretical numbers provided by the company that makes them, while the NRCAN figures are actual statistical numbers gathered from huge amounts of empirical data.  Moreover, the “light vehicles” category includes many vehicles that the Volt cannot compete with - e.g., SUVs, pickup trucks, minivans, larger vans, construction trucks, cube vans, and other vehicles with significant passenger and/or cargo storage space.  In terms of size and passenger capacity, the Volt is more comparable to, say, a Nissan Sentra, which according to the manufacturer gets 34 mpg or 6.96 litres per 100 km (which translates to 2.37 MJ/km).  In other words, the Nissan Sentra uses somewhere between 1.6 and 3.3 times the energy per unit of distance travelled as the Volt does.
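
For anyone who wants to check my arithmetic, here is the energy-per-kilometre comparison in a few lines of Python.  The inputs are the figures quoted above (the NRCAN fleet average, GM's claimed Volt range, Nissan's claimed Sentra mileage); treat the output as a sketch, not a measurement:

# Energy per kilometre, using the figures quoted above.  Illustrative only.
MJ_PER_LITRE_GASOLINE = 34.0
MJ_PER_KWH = 3.6

fleet_avg = 10.8 / 100 * MJ_PER_LITRE_GASOLINE   # NRCAN light-vehicle average, ~3.67 MJ/km
volt_best = 16 * MJ_PER_KWH / 80                 # 16 kWh battery, best-case 80 km range
volt_worst = 16 * MJ_PER_KWH / 40                # worst-case 40 km range
sentra = 6.96 / 100 * MJ_PER_LITRE_GASOLINE      # 6.96 L/100 km, ~2.37 MJ/km

print(f"Fleet average: {fleet_avg:.2f} MJ/km")
print(f"Chevy Volt:    {volt_best:.2f} to {volt_worst:.2f} MJ/km")
print(f"Nissan Sentra: {sentra:.2f} MJ/km "
      f"({sentra / volt_worst:.1f} to {sentra / volt_best:.1f} times the Volt)")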

Of course, with a 55-litre tank the Sentra has an unrefuelled maximum range of 788 km, which is ten times the Volt’s maximum 80 km range.  In order to manage the same range as the Sentra on battery power alone, the Volt would need 2000 kg of batteries instead of just 200 kg, which would degrade vehicular performance and take up a lot more space - more space, in fact, than the vehicle has.

It would also have an impact on charging time.  According to company literature, the Volt takes 10-12 hours to fully charge from a 120-Volt charging station (the charging time can be compressed to “about 4 hours” by using a special-purpose 240-Volt charging station).  Assuming zero losses (unrealistic, I realize, but we’re modelling here, not measuring), taking 10 hours to put 16 kWh into a battery pack means that you’re adding 1.6 kW to your household current load for that entire period.  That’s about the same draw as running an electric kettle, a curling iron, or a 3-ton air conditioner for 10 hours straight.

That’s not the way to look at it, though.  Returning to the NRCAN data, if there are 19M light vehicles driving 300B vehicle-km, then that’s an average of 15,789 km per light vehicle per year.  That’s 57,977 MJ in gasoline, at the 10.8 litres/100 km cited above for light vehicles (not at the lower Nissan Sentra rate).  If those vehicles were all Volts, that 15,789 km per vehicle would translate to an absolute minimum (assuming perfect performance, no losses, no overcharges, and no deleterious effects to performance from “terrain, driving techniques or temperature”) of 197.36 full 16 kWh charges.  That equates to 197.36 charges per vehicle x 16 kWh per charge x 3.6 MJ per kWh = 11,368 MJ per vehicle.  In theory, therefore, swapping out all the light vehicles in Canada for Volts and running the Volts solely on electric power would result in an energy savings of something like 80%.
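
Again, here's the fleet-wide arithmetic as a Python sketch, using the same perfect-world assumptions (no charging losses, best-case range, claimed figures taken at face value):

# Annual energy per light vehicle: gasoline fleet average vs. an all-electric Volt.
km_per_vehicle = 15_789                          # 300B vehicle-km / 19M vehicles, rounded as above

gasoline_mj = km_per_vehicle * 10.8 / 100 * 34   # 10.8 L/100 km at 34 MJ/L
charges = km_per_vehicle / 80                    # best-case 80 km per full 16 kWh charge
electric_mj = charges * 16 * 3.6                 # 3.6 MJ per kWh

print(f"Gasoline: about {gasoline_mj:,.0f} MJ per vehicle per year")
print(f"Electric: about {electric_mj:,.0f} MJ per vehicle per year "
      f"(a saving of roughly {1 - electric_mj / gasoline_mj:.0%})")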

There are, of course, a couple of problems with these calculations.  Among them are the fact that Volts aren’t trucks; that without burning fuel, they can’t go further than a maximum of 80 km unrefuelled; that their total cargo capacity is 200 litres (about one-fifth of your average SUV, to say nothing of pickup trucks); and that their performance (especially battery performance) in Canadian weather conditions is, shall we say, somewhat less than the company claims.  But put all that aside for a moment and consider only the question of electric power. 

Suppose Ontario’s fleet of 7M light vehicles consisted of all-electric vehicles like the Nissan Leaf. The Leaf has a battery capacity of 24 kWh and a claimed range of 160 km.  Because each light vehicle in Canada drives an average of about 16,000 km/year, assuming perfect performance and no losses, each Leaf would require 100 full charges, or 2400 kWh, or 2.4 MWh per year.  Charging those vehicles would consume 16,800,000 MWh, or 16,800 GWh, or 16.8 TWh.

Ontario Power Generation, which generates about 70% of Ontario’s electricity, produced 88 TWh in 2010.  The total output of all of its hydroelectric generating stations last year was about 30 TWh; and the total output of all of its nuclear plants was about 45 TWh.  What’s most interesting, though, is that the total output of OPG’s thermal generating stations in 2010 was 12.2 TWh - which is nearly three-quarters of the amount of power necessary to support a province-wide switch to electric vehicles.  It’s also the generating capacity that the Ontario government plans to close by 2014.  Not to put too fine a point on it, but it’s a little odd to be spending money to build electric car charging stations at the same time you’re reducing the electrical generating capacity you would need to charge them.
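
The Ontario thought experiment is just as easy to check.  Same caveats as above - perfect charging efficiency, claimed range taken at face value, and round numbers throughout:

# Ontario all-Leaf thought experiment, using the figures quoted above.
vehicles = 7_000_000        # Ontario light vehicles, rounded
km_per_year = 16_000        # average annual distance per light vehicle
leaf_range_km = 160         # claimed range on a full 24 kWh charge
leaf_charge_kwh = 24

kwh_per_vehicle = km_per_year / leaf_range_km * leaf_charge_kwh   # 2,400 kWh (2.4 MWh)
fleet_twh = vehicles * kwh_per_vehicle / 1e9                      # 1 TWh = 1e9 kWh

print(f"Province-wide charging demand: about {fleet_twh:.1f} TWh per year")
print(f"OPG thermal output in 2010 (12.2 TWh) covers {12.2 / fleet_twh:.0%} of that")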

Lest you think I’m pounding unduly on Ontario, the situation in the US is far, far worse.  The EPA has issued MACT - Maximum Achievable Control Technology - regulations that come into effect on 1 January 2012, and that will essentially price coal-fired generation out of the market, leading to colossal underproduction of electricity and the loss of hundreds of thousands if not millions of jobs, especially in places like Indiana, which obtains 95% of its electricity from coal.  Not to put too fine a point on it, but perhaps folks should have listened when Obama promised during the campaign that he would bankrupt coal-fired generating stations.  He intended to do it through carbon trading, of course; but since his legislative efforts in that direction failed in Congress, his EPA is doing it through regulatory action.  The US government is of course still subsidizing purchases of hybrids and EVs; it’s just not clear what folks are going to use to charge them.

The bottom line is this: electric vehicles are the future, because they are at a minimum twice as energy-efficient as IC engines, and because we have 8000 years’ worth of uranium (and a virtually infinite supply of fuel for thorium-based reactors).  But the future isn’t here yet.  It took over 40 years to get the internal combustion engine-powered private automobile from a curiosity to a useful consumer product, and 80 more to turn it into the reliable, low-cost, fuel-efficient piece of awesomeness that it is today.  Electric vehicles are actually older as a concept than IC-powered cars, but due to slow advances in battery technology, they haven’t benefitted from more than 100 years of market-driven innovation.  Also, the infrastructure to support EVs simply does not exist, and creating it will take decades and literally hundreds of billions of dollars.  The IC automobile was not a government project, and its replacement by EVs cannot be imposed by government fiat - it’s as simple as that.  The market - which is to say, consumers - will decide when it’s time to switch, and the switch will take decades.  Attempts by governments to accelerate the process, either through punitive taxation on conventional vehicles (did governments impose a punitive tax on horses in the 1890s and 1900s to force people to switch to cars?) or by using taxpayer money to build infrastructure that the market has no incentive to create, will be a waste of money at best, and will impede the process at worst.

Okay, enough of that.  Two more interesting and related points.  The first is that electric and hybrid vehicles respond very differently from conventional internal combustion engines when they run out of fuel / battery power.  These two articles make interesting reading for anyone who’s ever poked around under the hood of a car.

http://www.popularmechanics.com/cars/reviews/hybrid-electric/when-the-nissan-leaf-dries-up

http://www.popularmechanics.com/cars/how-to/repair/what-to-do-when-your-hybrid-cars-battery-dies?click=pm_latest

As the author of the articles makes clear, advertised charge times and operating ranges are a little squishy, and when your EV flatlines on the highway, you can’t just catch a lift to the nearest charging station for a one-gallon jerrycan of electrons.  As for California’s solar-powered EV charging stations, it’s worth recalling where the Sun usually is when you have the time to park your car for a 10-hour charging cycle.

Second, here’s a link to a paper that warns about what might happen to electricity grids under real-time pricing: http://arxiv.org/abs/1106.1401v1

Don’t bother with the equations; just flip to the conclusion, which states that “As the penetration of new demand response technologies and distributed storage within the power grid increases, so does the price-elasticity of demand, and this is likely to increase volatility and possibly destabilize the system under current market and system operation practices.”  In other words, once you can set all of your appliances to operate after the rates go down, the demand during “cheap time” will increase, necessitating altering the generation schedules and, therefore, increasing prices during the new demand time.  And just imagine what happens when a significant fraction of the population comes home from work at 1800 hrs and plugs their 4-wheeled, 2400-Watt EVs into the wall, all at the same time.  Or, alternatively, when everyone sets the charging to begin exactly at 1900 hrs, just as the rates go down. 

EVs that will offer a genuine, comparable, cost-effective alternative to conventional IC-engine vehicles are coming.  They’re just not here yet.

Cheers,

//Don//

*The calculus changes if you factor in the cost of burning fuel to generate electricity to charge an EV’s batteries.  I’ll save that discussion for another day, though.

Monday, July 23, 2012

5 August 2011 – Heat wave horrors and fusion follies

Colleagues,

Two interesting articles on the “alternative energy” front yesterday, the first from our cousins south of the 49th.  Back in the winter months, I briefly discussed the energy situation in the Lone Star State (“Braking Wind in Texas”, 3 February 2011), noting, among other things, that ice storms had caused damage to transmission lines and gas-fired backup power plants, leading to rolling blackouts during periods of high heating demand.  Part of the problem is that about 8% of the total generating capacity of the state’s various electricity producers is provided by wind turbines, and as Britons found out this past winter, when their wind farms consumed more power than they produced in order to keep turbine gearboxes from freezing up, the wind often doesn’t blow when it’s really cold.

Well, as I remarked a few weeks back during the early July heat wave in Ontario, the wind tends not to blow when it’s really hot, either, which of course is when electrical demand climbs due to air conditioning requirements.  This poses sort of a meta-question for the whole wind power industry: if the systems are least effective when power demands are highest, are they really the sort of generation systems we need to be subsidizing to the tune of billions of taxpayer dollars?  Anyway, not surprisingly, the same thing is happening in Texas right now as happened back in February, although for a slightly different reason.  Back then, persistently cold weather increased demand and reduced supply; now, persistently hot weather is leading to increased demand, and reducing supply.

Figures provided by ERCOT (the Electric Reliability Council of Texas, Inc., an Orwellian title given that they are currently spending more time explaining why electricity is not reliable than providing reliable electricity), which operates the bulk of the state’s grid (kind of like OPG here in Ontario), illustrate the nature of the problem.[Note A]  On Wednesday this week, the demand on ERCOT’s generating capacity reached 68,294 MW.  That’s just 50 MW shy of the grid’s total generating capacity.  50 MW is 25 wind turbines - so much of Texas was only two dozen bird-blenders away from rolling blackouts.  And therein lies the heart of the problem.  The installed capacity of ERCOT’s wind turbines is 9000 MW, but on Wednesday, they were generating only 2000 MW - because the wind tends not to blow when it’s really hot.  That’s a capacity factor of 22.2%.  In other words, for every 1 MW of wind generation you plan to get, you have to install 4.5 MW of wind generation capacity.  Compare that to nuclear, gas and coal-fired plants, where for every 9 MW you need, you have to install 10 MW of capacity.  This is not calculus, people, it’s basic arithmetic.  And here’s some more: the standard production price per MWh in Texas runs about $110.  At peak periods, producers can earn $300-$400 per MWh.  But when the supply-demand margin goes razor-thin, as it did on Wednesday, prices go completely nuts; generators chugging out the last few MW to meet skyrocketing demand can earn up to $3000/MWh, which is the state cap for production prices.  Think about that - it’s thirty times the going rate, and those increases get passed on to consumers through higher rates.  Imagine what gasoline would be like if, instead of fluctuating by 5-10% in response to per-bbl price changes, the pump price changed by a factor of 30, and on the Friday before a long weekend you had to pay $36/litre to tank up.  Clearly there is some financial benefit to generators in keeping capacity right at the breaking point, which wouldn’t happen if supply was designed to meet peak demand.  Of course, running right at that razor edge is pretty hard when you have no idea what proportion of your 9000 MW worth of wind turbines are going to be spinning, and how many are just going to be sitting out there like modern art sculptures.  When the difference between state-wide power and rolling blackouts comes down to 25 turbines out of the 1000 or so (out of 4500 installed) that are actually running, you’re really gambling on the breeze.
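
For the record, here's the overbuild arithmetic from those ERCOT figures as a quick Python sketch (the inputs are the numbers quoted in the Reuters piece at Note A; this is arithmetic, not a grid model):

# Wind overbuild arithmetic from the ERCOT figures quoted above.
installed_wind_mw = 9000     # installed wind capacity
delivered_wind_mw = 2000     # actual wind output on Wednesday afternoon

capacity_factor = delivered_wind_mw / installed_wind_mw
overbuild = installed_wind_mw / delivered_wind_mw

print(f"Wind capacity factor that afternoon: {capacity_factor:.1%}")   # ~22.2%
print(f"MW installed per MW delivered (wind): {overbuild:.1f}")        # ~4.5
print(f"MW installed per MW delivered (conventional, ~90% available): {10 / 9:.1f}")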

One of the ERCOT representatives interviewed argued that capacity shouldn’t be designed with “extreme” situations in mind.  Kent Saathof, ERCOT’s VP of system planning and operations, said, “You have to determine if it is worth spending millions or billions to avoid a one in 10-year event.” [Note A]  That’s generally a fairly sound businessman’s argument, and I’ve made it many times myself in design discussions.  But when you’re talking about state-wide power generation, there are a few problems with it.  First, when 8% of your generators cannot be relied on to produce at least 90% of their nameplate capacity, your margin of error is a lot bigger and therefore a lot less predictable.  If Texas had suffered the same sort of wide-area heat dome effect that eastern Ontario suffered back in July, the 2000 MW of generation they were getting on Wednesday (out of the theoretical 9000 MW of installed wind capacity) could easily have dropped by 90%.  It’s a lot harder to make up 2000 MW of generation with gas-fired emergency turbines - in fact, it’s pretty much impossible.  That’s two whole nuclear power plants worth of electricity - plants that Texas hasn’t got.  Second, if you’re structuring your capacity on a “one in ten-year event” with heavy reliance on intermittent sources like solar and wind power, you’ve either got to be prepared with reliable (which means fossil-fuelled) backup systems, or you’ve simply got to accept that there will be blackouts when demand goes too high.  This is what the UK grid chief, Steve Holliday, meant when he said last March that one of the consequences of the UK government’s decision to “go green” was that “the era of constant electricity at home is ending” and that “families will have to get used to only using power when it is available.” [Note B] 

Remember when electrification was one of the triumphs of the modern world?  Remember when we used to mock the Soviets for being unable to provide continuous electricity to their citizens?  Remember last decade, when certain politicians in the US decried every brownout in Baghdad as evidence of the failure of post-war reconstruction efforts?  When exactly did not having electricity become virtuous?

The third problem, of course, is the breathtaking inconsistency of these arguments.  The whole reason for ”going green” is to lower the carbon (dioxide, everybody forgets the dioxide) emissions that are allegedly killing the planet.  Global warming, we are continuously being told, is now inevitable, and extreme heat waves are going to become more common.  In other words, according to the most ardent proponents of “green energy”, heat waves like the one in Texas this week, and the one in Ontario last week, are not going to be “one in 10-year events”; they are going to become routine.  If these folks actually believed their rhetoric, they would logically be structuring electrical generating capacity to deal with heat waves as occurrences that are inevitably going to become more frequent.  And yet they’re not.  They’re acting as though either (a) they expect major heat waves to remain a rare occurrence; or (b) they expect heat waves to become more common, but think that the end of the “era of constant electricity at home” is a good thing.  They’re either hypocrites or Luddites.

So much for the late, great state of Texas.  The second interesting article was also about alternative energy, but has nothing to do with wind turbines, blackouts or the Alamo.  Anybody remember who Pons and Fleischmann were?  They were the chaps who, back in 1989, announced that they had discovered “cold fusion”.  Basically, they reported anomalous heat production and nuclear reaction by-products in a fuel cell where heavy water (normal water with a high proportion of deuterium) was being electrolysed at a palladium electrode.  Their experiment could not be replicated, and “cold fusion”, like Piltdown Man, became a running joke for any bogus scientific claim.  Well, fast-forward 20 years.  For the past few months there have been reports coming out (not via the MSM, which doesn’t seem to have noticed the situation) from a private research lab in Italy where a scientist/entrepreneur named Andrea Rossi has been working on a similar concept using nickel substrates instead of palladium.  An experiment back in January consumed 400 Watts of power and produced 12,000 Watts of heat, a result that would have been virtually impossible absent some nuclear-level effect.  While Rossi hasn’t published his process (presumably because he hopes to make money from patenting it) or allowed observers to deconstruct his apparatus, objective monitoring and evaluation of his setup by physicists suggest that there is no other explanation for the amount of energy produced by his “energy catalyzer”.

Yes, it sounds like something between black magic, alchemy and phlogiston.  Yes, it has yet to be independently verified.  However, Rossi is currently working on a 1 MW commercial version of his reactor based on ganging up modules of the sort used in the January demo.  It is expected to produce output water at 500 degrees C (see Note C; I won’t reproduce the article here, but it’s an excellent summary of the project to date).  Such a device could be used for a wide variety of commercial (and military) applications.  For large-scale power plants, for example, it could be used to provide input water to a final-stage boiler, generating - at almost no cost - the first 500 degrees of the 600 degrees generally needed by steam turbines, and thereby cutting fuel requirements for electrical generation by more than three-quarters.

Is it true?  Maybe.  From a scientific perspective, we have at present a small amount of confirmatory evidence, and no falsifying evidence.  If Rossi produces and sells a 1 MW variant, it’ll be testable, and the evidence, pro or con, will pile up rapidly.  In the meantime, though, what I find very interesting about this is that it came pretty much out of left field.  In his famous 2003 speech entitled “Aliens Cause Global Warming”, Michael Crichton opined on how it was impossible to predict the future.  Looking back at the year 1900, he asked how it would have been possible for citizens of that era to predict what the world would be like in 2000.  His survey of the technological and sociological changes alone is worth a read:

Let’s think back to people in 1900 in, say, New York. If they worried about people in 2000, what would they worry about? Probably: Where would people get enough horses? And what would they do about all the horseshit? Horse pollution was bad in 1900, think how much worse it would be a century later, with so many more people riding horses?

But of course, within a few years, nobody rode horses except for sport. And in 2000, France was getting 80% of its power from an energy source that was unknown in 1900. Germany, Switzerland, Belgium and Japan were getting more than 30% from this source, unknown in 1900. Remember, people in 1900 didn’t know what an atom was. They didn’t know its structure. They also didn’t know what a radio was, or an airport, or a movie, or a television, or a computer, or a cell phone, or a jet, an antibiotic, a rocket, a satellite, an MRI, ICU, IUD, IBM, IRA, ERA, EEG, EPA, IRS, DOD, PCP, HTML, internet, interferon, instant replay, remote sensing, remote control, speed dialing, gene therapy, gene splicing, genes, spot welding, heat-seeking, bipolar, prozac, leotards, lap dancing, email, tape recorder, CDs, airbags, plastic explosive, plastic, robots, cars, liposuction, transduction, superconduction, dish antennas, step aerobics, smoothies, twelve-step, ultrasound, nylon, rayon, teflon, fiber optics, carpal tunnel, laser surgery, laparoscopy, corneal transplant, kidney transplant, AIDS... None of this would have meant anything to a person in the year 1900. They wouldn’t know what you are talking about.

Now. You tell me you can predict the world of 2100. Tell me it’s even worth thinking about. Our models just carry the present into the future. They’re bound to be wrong. Everybody who gives a moment’s thought knows it. (Note D)

I particularly like the second sentence of the second paragraph - “in 2000, France was getting 80% of its power from an energy source that was unknown in 1900.”  In 1900, we didn’t know what an atom was, and today nuclear power is prevalent worldwide.  Is Rossi’s “energy catalyzer” one of the things that nobody saw coming as much as 10 years ago, and that in 20 years, could be an important source of worldwide energy?  Maybe, maybe not.  Maybe it’s a load of bushwah.  Maybe we’ll be getting the bulk of our power from something different entirely - perhaps a scientific development for which the basic science still has to be done.  But if Rossi’s “E-Cat” is real, it will render discussions of “peak oil” and the nuclear willies of the hand-wringing post-Fukushima panickers entirely moot.  If it’s real, wind turbines will be quickly abandoned as the costly and ineffective white elephants they are (or used for pumping water on farms, as they have been for centuries).  And if, like oil, low-energy ’fusion’ does end up changing the world, then we - like the 19th Century prophets of the horse-manure apocalypse - didn’t see it coming, because, as Crichton warns, you can’t predict where technology will go, and we were, as a result, simply projecting current trends into the future.

Here’s hoping that Rossi’s got something, that it will be commercializable in the near term, and that it will contribute to electrical generation.  Because the 1,323-page “Cross-State Air Pollution Rule” that the Environmental Protection Agency just passed at the end of July and that is due to take effect on 1 January 2012 will force Texas to close 18 power plants that provide about 11,000 MW of coal-fired generation capacity.  Imagine what the past week would have been like for Texans without that electrical power.

Looks like the “end of the era of constant electricity at home” isn’t going to just be a British phenomenon. 

Cheers,
//Don//

Notes

A) [http://www.reuters.com/article/2011/08/04/us-utilities-ercot-heatwave-idUSTRE7736OT20110804]

B) [http://johnosullivan.livejournal.com/31784.html]

C) [http://wattsupwiththat.com/2011/08/04/andrea-rossis-e-cat-fusion-device-on-target/#more-44578]

D) [http://scienceandpublicpolicy.org/commentaries_essays/crichton_three_speeches.html]

Thursday, July 19, 2012

2 August 2011 – Peer Review: The Bear Facts

Colleagues,

Two of my interests - scientific methodological rigour in general, and climate science in particular - bumped uglies this week, in more than just the figurative sense.

For those of you who shelled out the requisite bucks a half-decade or so ago, or those who remember the 2007 Oscars, you might recall Al Gore’s venture into the art of documentary film-making, entitled An Inconvenient Truth.  Known throughout the climate blogosphere by the shorthand “AIT”, Gore’s flick purported to display conclusive evidence that human-produced greenhouse gases are imperilling life on this planet.

One of the pieces of evidence he trotted out in support of his argument - a superb example of photogenically-apt saccharine bathos if there ever was one - was a study suggesting that polar bears are being endangered by climate change because retreating Arctic ice means that they are having to swim further to find ice to hunt from.  The film was full of footage of the big, pretty, fluffy predators frolicking on the floes, gambolling merrily with their adorable offspring, and gamely swimming hither and yon…at least, until they drowned and were found floating and dead, allegedly (according to Gore) because even superb swimmers like polar bears eventually get tired.

The film footage touched hearts worldwide and sparked all manner of imitative imagery from AGW advocates and green groups, from articles accompanied by photoshopped images of a lonely polar bear in the flagship journal Science...


(Which the magazine itself captioned as follows: “This image is a photoshop design. Polar bear, ice floe, ocean and sky are real, they were just not together in the way they are now”*)

...to a papier-mâché polar bear riding a plastic iceberg in the Thames…


...to polar bears hanging themselves from bridges (in what seems to be a bizarre bi-polar suicide pact with penguins, a startling display of inter-species solidarity given that penguins live roughly 20,000 km away in the antipodean Antarctic, where sea ice extent has been steadily increasing)...


...to the amazingly ham-fisted advertisement by Planestupid.com, who - in order to convince air travellers to air-travel less - executed a feat of propaganda not to be surpassed until 10:10 UK made a commercial showing school children being blown into bloody flinders by their teacher for asking skeptical questions about consensus climate science.  Planestupid’s contribution was a video showing polar bears plummeting from the skies and smashing themselves into ursine tartare on pavement and the odd car.

 “Oh, the bearmanity!”

And all because Al Gore swore up and down that polar bears were threatened by climate change. 

Incidentally, one wonders how these Planestupid folks plan to get off their island, if not via some piece of fossil-fuel-burning man-made equipment.  These people put me in mind of what Lord St. Vincent had to say about Napoleon's invasion prospects in 1803: "I do not say that they cannot come. I only say that they cannot come by sea."  Or by air, if the Planestupiders get their way.  Or via the Chunnel.  Perhaps they intend to swim.  Hopefully the plastic polar bears won't jump off their plastic icebergs in the Thames and eat them.

Let’s ignore a few obvious facts - for example, the fact that polar bears as a species have survived at least three warmer interglaciations without going extinct; the fact that they seem somehow to be able to cope with an annual phenomenon known as “summer”, when the amount of Arctic sea ice shrinks by two-thirds; the fact that polar bear numbers are vastly higher than they were forty years ago, during which period the Earth has allegedly undergone “unprecedented warming”; and for that matter, the fact that if polar bears were really drowning in record numbers, we would expect to find a corpse now and again (references available on request) - and turn instead to the source of Al’s assertions.

The source was a paper by Dr. Charles Monnett, an Alaska-based researcher with the U.S. Bureau of Ocean Energy Management, Regulation and Enforcement.  In 2006, Monnett lead-authored an article in Polar Biology entitled “Observations of mortality associated with extended open-water swimming by polar bears in the Alaskan Beaufort Sea”.  The article was based on multi-decadal observations during aerial transect flights aimed principally at observing whale populations. From 1987-2003, hundreds of swimming polar bears were observed during these flights, and no dead ones were reported.  In 2004, however, four polar bear carcasses were seen floating offshore and were “presumed drowned” by the researchers.  The researchers first extrapolated this into a broader trend, and then interpreted it as proof that climate change was threatening polar bear populations (especially, according to the authors, lone females and cubs) because the bears have to swim longer distances to find ice to rest on.  Gore cited Monnett’s research in his film, launching the whole furry fandango.  The following year he collected his Oscar (and his Nobel Prize); and the year after that, the US Government classified polar bears as a “threatened species”.
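(For the curious, here is a minimal sketch - in Python, with a deliberately made-up survey-coverage figure - of the kind of survey-strip scale-up being described.  I'm illustrating the arithmetic only, not reproducing Monnett's actual method or numbers: the coverage fraction below is a hypothetical placeholder.  The point is simply how quickly four carcasses become a much larger inferred total once you divide by the fraction of the area actually surveyed.)

    # Illustrative only: a strip-transect style scale-up.  The carcass count
    # matches the four bears reported in 2004; the coverage fraction below is
    # a HYPOTHETICAL placeholder, not a figure taken from Monnett's paper.

    def extrapolate_total(observed: int, coverage_fraction: float) -> float:
        """Scale a count made on surveyed strips up to the whole study area,
        assuming carcasses are evenly distributed (a strong assumption)."""
        if not 0.0 < coverage_fraction <= 1.0:
            raise ValueError("coverage_fraction must be in (0, 1]")
        return observed / coverage_fraction

    observed_carcasses = 4      # carcasses actually seen from the aircraft
    survey_coverage = 0.10      # hypothetical: transects covering ~10% of the area

    estimate = extrapolate_total(observed_carcasses, survey_coverage)
    print(f"{observed_carcasses} observed -> roughly {estimate:.0f} inferred carcasses")

Whether that scale-up means anything depends entirely on the coverage fraction and on the assumption that carcasses are evenly spread across the study area - which is precisely the sort of thing the investigators spent pages trying to get Monnett to explain.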

If you’re interested, Monnett’s original article is here:

http://www.abc.net.au/rn/backgroundbriefing/documents/bbg_20110717_dead_polar_bears.pdf.

Fast-forward to 2011.  Monnett was recently suspended from his position and has been grilled extensively about his research.  The transcript of the inquiry makes interesting reading for a number of reasons, ranging from his methodology (using models to infer a species-wide threat from only four carcasses) to his startling inability to articulate what he was doing and how he was doing it.  (The transcript of the interview with Monnett can be found here:

http://www.peer.org/docs/doi/7_28_11_Monnett-IG_interview_transcript.pdf).

Apart from the difference between modelled (extrapolating from four corpses) and genuinely empirical (counting all the corpses) methodologies demonstrated in the paper, what I found especially interesting was the fact that Monnett’s paper - the paper that inspired Al, and that sparked the whole panic about the poor, doomed polar bears - seems to have been peer-reviewed by his wife (see line 26, page 35 of the transcript).  It was apparently reviewed elsewhere too, of course (including, presumably, by whomever the journal sent it to for review), but the fact that the internal review for the first version of the report seems to have been by Monnett’s spouse, a co-worker who shares his views and research funding, certainly sets off some alarm bells.  If nothing else, it gives the appearance of a conflict of interest - something that presumably any research organization would want to avoid.  One of the key complaints about the climate science community in recent years has been its insularity and propensity for gentle “pal reviews” rather than robust “peer reviews”.

It’s also worth noting that it isn’t clear whether the questions posed by secondary reviewers were ever addressed.  At one point in the interview transcript (see page 41), for example, the investigator cites a reviewer’s question about Monnett’s numbers and statistics and asks whether Monnett ever addressed the reviewer’s concern, and Monnett replies as follows:

“I would assume since they signed off on it, that they were satisfied with whatever answers they got”.

It’s an interesting argument, because there is no indication that the reviewer in question was ever asked whether he was satisfied with Monnett’s answers.  Monnett then cites “management review” in his defence, arguing that the paper must have been okay because his manager signed off on it.  It’s an amusing fallback position, since Monnett argues, near the end of the interview, that his managers have been trying to kill his study “ever since the polar bear thing came out” due to the sensitivity and policy implications of the issue; “management”, according to Monnett, doesn’t want anything to interfere with oil drilling in Alaska.  "God forbid" he publish, he says, “something that has anything to do with the climate change debate.”

The transcript of Monnett’s interview is as painful to read as any transcript where the interviewers don’t understand what the interviewee is talking about, and the interviewee isn’t particularly good at explaining it.  But a couple of interesting points emerge.  One is that peer review as a mechanism for ensuring scientific rigour depends for its utility on a combination of expertise and objectivity.  While it is possible for any adequately experienced scientist to offer a critique of the methodology in a paper, one can’t provide an adequate review of the substance of a paper unless one is at least as expert in the subject as the author (this is what the word “peer” means).  For this reason, most peer reviews of complex and detailed arguments tend necessarily to be about methodology more than substance.

Objectivity is equally important, and consists of two parts: perceived disinterest, i.e., that the reviewer has no stake in the results of the study he or she is reviewing (and thus has no incentive either to support the author’s conclusions, or to challenge them); and actual disinterest, i.e., that the reviewer is able to step back from personal or professional involvement with the author and/or his arguments, and provide a qualitatively impartial analysis of the strengths and weaknesses of the arguments presented in the paper.  These are of course in addition to the basic elements of scientific professionalism, e.g., refraining from ad hominem observations and other fallacies of logic while reviewing another scientist’s work, providing evidence when challenging data or conclusions, etc.

For all of these reasons, even in cases where the author and the reviewer possess roughly comparable levels of expertise, it is inadvisable to have papers reviewed by members of the same organization as the author.  It is especially inadvisable - as the Monnett case demonstrates - to assign as a reviewer for a new and potentially controversial project someone who (a) has a demonstrable personal, pecuniary or bureaucratic interest in either supporting or undermining the paper’s conclusions; or (b) is unlikely, for whatever reason, to be able to take an objective, unbiased stance vis-à-vis the subject of the paper under review.  We are none of us saints, and in the interest of preserving the integrity of the process, it is always best to avoid even the possibility of the appearance of bias.  Scientific rigour is a little like Caesar’s wife in that regard; it must not only be pure, but also be seen to be pure.

There are a couple of interesting codas to this story.  One is that Al Gore’s polar bear claim was cited in 2007 by a UK High Court as one of nine demonstrable inaccuracies in An Inconvenient Truth; the Court noted that Monnett’s paper stated that only four carcasses had been observed, and that the deaths of those bears were attributable to “a particularly violent storm” rather than to “climate change” (or, for that matter, to falling out of the sky onto the M25).  Another is that Arctic sea ice extent was the same in 2006 as it was in 2004, when the dead bears were observed, and was actually lower in 2005, 2007, 2008, 2009 and 2010 than it was in 2004 - and yet, as with the 20 years preceding Monnett’s 2004 observations, no floating, “presumably drowned” bears were observed in any subsequent years.

Finally, according to the CBC, two polar bears did in fact drown in 2007.  However, they drowned because they slid off the ice after being shot with tranquilizer darts by researchers from Nunavut’s Environment Department.  While expressing regret over the deaths, Steve Pinksen, the department’s Director of Policy, defended the value of polar bear research, and added that gaining important scientific data always has costs - “including the odd dead bear.”  He noted that these were only the third and fourth such deaths in 25 years of research.

So, in other words, the number of dead bears that Al Gore used to spark worldwide panic over the future of the species (and which led directly to the whole string of bizarre imagery and mind-bogglingly insane public relations campaigns noted above) - four over a twenty-five-year period - is precisely the same as the number of bears that the Nunavut Environment Department accidentally killed over a twenty-five-year period as the regrettable but necessary cost of doing scientific business.

Wow.

Anyway, while much of the interview transcript concerned Monnett’s (in)famous polar bear paper, his suspension seems to be related to an entirely different issue.  According to last Friday’s Sacramento Bee, Monnett was suspended over “integrity issues”, possibly in conjunction with his acting as a contracting officer for some $50M in research funds that he has been responsible for administering for the last few years.

Even if we can’t support Monnett on his research methods, his results, or how he deals with adverse comments from peer reviews, I’m sure we can all sympathize with a fellow scientist finding himself driven to distraction by the ineffable joys of contracting.

Cheers,

//Don//

P.S. Here's a picture of Elvis Presley playing the guitar next to the Venus de Milo atop a steam locomotive on the Moon.*

*In the spirit of Science magazine, I should probably warn you that this image is a photoshoppe design. Steam locomotives, the Venus de Milo, Elvis Presley, and the Moon are all real; they were just “not together in the way they are now”.