Sunday, April 29, 2012

19 April 2011 – Poll position

Colleagues,

‘Tis the season when everybody seems to be paying attention to pollsters, so I thought I might take a moment and talk about some polling data that has absolutely nothing to do with anything going on in Canada.  There's a new Brookings paper out that uses polling data to analyze the “climate of public opinion” on climate change in Canada and the US.  I take this as an indication that I’ve been lax in recent weeks in commenting on the state of the climate debate, and that some (or at least one) of you are feeling the lack most acutely.  Mea culpa; I thought I had already sufficiently thrashed this deceased equine, but apparently not.  So be it; I am but a servant.  Ask, and ye shall receive. Time to turn the crank once more.  Boy, my whip!

The paper, entitled “Climate Compared: Public Opinion on Climate Change in the United States and Canada”, was written by three political scientists from various universities, and is attached.  It makes for interesting reading.  The timing, too, is interesting, particularly in view of the spate of polling results from around the world showing a general decline in public belief in global warming.(Note D)

From a scientific perspective, the paper has a number of problems.  First, while the pollsters asked all sorts of questions about “belief” in global warming, they didn’t ask any questions that would enable the analysts to differentiate between respondents who believe that global warming is driven by anthropogenic greenhouse gas emissions (which is the key contention of the anthropogenic global warming or AGW hypothesis), and those who believe it to be a natural phenomenon.  This is the single most important scientific question of the debate; Kyoto, Copenhagen, carbon taxes, cap and trade, and other emissions control mechanisms are all useless - and potentially destructively useless - unless human carbon emissions are indeed, as the IPCC contends, the single most important factor driving global climate.  All the survey asked was “From what you’ve read and heard, is there solid evidence that the average temperature on Earth has been getting warmer over the past four decades?”  Even if this is true - and I’m not saying it isn’t; if the temperature records are accurate, then the Earth has been warming, although for a lot longer than the 150 years of human industrialization - what’s important from a policy perspective is not whether but why the warming has occurred.  That’s a big oversight and, as the purpose of the study is to guide policymaking, the authors’ failure to investigate the issue of causality undermines the whole exercise.  As one observer puts it, “the fact of warming tells us nothing about the cause.”

The failure to differentiate between the existence of a warming trend and its origin is not surprising, because the second problem is that the authors confuse “determinant” with “correlate.”  On the basis of their results, for example, they argue that political affiliation is a “determinant” of belief in global warming.  The problem is that the word “determinant” implies a causal relationship, and correlation is not causation.  One can certainly note a correlation between political affiliation and belief in global warming based on these data, but the data do not support the alleged causal link, because the questions weren’t designed to elicit which came first, the political affiliation or the belief.  In fact, the data equally support the argument that belief in global warming is a determinant of political affiliation.  This is not all that different, ironically, from how the IPCC, based on an observed, linear increase in atmospheric carbon dioxide concentrations and an observed (though nonlinear) increase in average global temperatures during the 20th Century, has posited a causal relationship between the two phenomena.  Very little of the empirical evidence presented to date justifies inferring a causal relationship between temperature and CO2 concentrations, and the evidence that bears most directly on the question - the Vostok and other Antarctic ice cores - actually demonstrates that over the past million years or so, temperature increases have preceded increases in CO2 concentration by hundreds of years; in other words, the arrow of causality appears to point in the direction opposite to that posited by the AGW thesis.(Note E)  This example illustrates the importance of not interpreting correlation as causation without data to support your interpretation.
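For those who like to see the point made concrete, here’s a minimal sketch - entirely synthetic data, not the ice core record - of why a plain correlation coefficient is silent about which series leads, while correlating at different lags is not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: series b is series a delayed by 5 steps, plus noise.
# By construction, a leads (and in that sense "drives") b.
n = 500
a = np.cumsum(rng.normal(size=n))              # a random-walk "driver"
b = np.roll(a, 5) + rng.normal(scale=0.5, size=n)
a, b = a[5:], b[5:]                            # drop the wrapped-around samples

def lagged_corr(x, y, lag):
    """Correlation of x[i] with y[i + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# The zero-lag correlation is identical whichever series you call the
# "cause" -- it says nothing about direction.  The peak of the lagged
# correlation does: here it sits at +5, right where we built it.
for lag in (-10, -5, 0, 5, 10):
    print(f"lag {lag:+3d}: r = {lagged_corr(a, b, lag):+.3f}")
```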

This is not a small point.  For example, based on the authors’ definition of “determinant” and the data they present in Table 4, there’s a stronger case for arguing that being a woman is a “determinant” of believing in global warming (63% of women are believers) than there is for arguing that being a Republican is a “determinant” of non-belief in global warming (43% of Republicans are believers).  As a matter of fact, after Democrats (69% believers), there is a far higher correlation between folks on the distaff side and belief in global warming than there is between political affiliation and belief.  According to a 2009 Rasmussen poll, women in the US also significantly lead men in believing that Jesus Christ rose from the dead (87% women/70% men) and that he was the son of God (89% women/74% men).(Note A)  This leads us to some odd conclusions.  On the basis of the results cited in the Brookings article, for example, one is not justified in concluding that political affiliation determines belief in global warming; but on the basis of the results presented in the Brookings article and this second poll, one IS entitled to surmise that, by a statistically relevant margin (10% or greater), women are more likely than men to believe in a proposition that, shall we say, lacks empirical support.

This observation is neither sexist nor critical of the theologically-inclined; I am merely making a value-free deduction from statistical evidence presented by one of the Western world’s premier polling firms.  Don’t shoot the messenger!

The authors of the Brookings paper should probably also have further investigated the remarkable split among Democrats on the question of belief in global warming - the 53% divergence between the two sides (69% believers/16% non-believers) is simply extraordinary, and statistically far more interesting than the statistically irrelevant 2% divergence between Republicans (43/41). It’s also interesting, although the authors pretty much ignore it, that uncertainty about whether there is sound evidence for global warming correlates positively with age; the older you are, the more likely you are to be on the fence.  In any case, according to another 2009 Rasmussen poll, the proportion of Americans who are unsure whether there is evidence for global warming (16%) is lower than the number who believe in ghosts (23%).(Note B)  Not sure if that proves anything, but I would suggest that it says something about our ability/willingness to believe - to use the Rasmussen article’s words - “as-yet-unprovable things.”  Interestingly, that latter survey shows that the percentage of US women who believe in ghosts is virtually the same as men, which calls into question the point I made above.  Why would men be proportionally more likely to be sceptical about global warming and matters of faith than about ghosts?  Similar studies show that US liberals, despite being on average slightly less religiously observant than US conservatives, are statistically more likely to report having seen a spectre.(Note C)  All of this is fascinating and bears on the question of why we believe (or disbelieve) the things we do. 

To put it another way, we are justified in concluding that “humans are unpredictable and illogical.”  Such a conclusion, while true, doesn’t lead us anywhere…and in any case, it’s hardly original.

Finally, from a methodological perspective, I have problems with using unscientific, emotionally-loaded words like “believe.”  Of course, using scientifically acceptable terminology would push the polling effort toward questions that would require respondents to have at least a passing familiarity with oodles of complicated research and would require laymen to evaluate the strength of evidence (“Do you think that the Urban Heat Island effect, poor station siting, station dropout, manual manipulation of temperature data and excessive interpolative smoothing undermine the US ground-based temperature record?”) - at which point you might start getting a lot more “don’t knows”, or responses that an institution like Brookings could potentially find uncomfortable.  I feel bad for Brookings, by the way; the last time I was there, they had a very earnest Gordon Gekko “greed is good” chappie give a presentation on how carbon markets were the way of the future.  It must be more difficult to find such presenters now that the Chicago Climate Exchange has gone belly-up and assumed room temperature.  There’s also the issue of whether questions of science should be decided by polls instead of through validation or falsification of an hypothesis by observed data; if that were the way science worked, we’d all be breathing phlogiston and having our children’s cranial contours checked by credentialed phrenologists to determine whether they were likely to become axe-murderers.  I would also note that while the Brookings article talks about “climate change”, the poll questions were all about “global warming”.  This suggests to me that the authors think that the two are the same, although the IPCC and most proponents of the AGW thesis have argued for several years now that they are not, and that increased carbon dioxide concentrations may lead to cooling as well as to warming (because if they can’t, then the last 15 years of temperature data tend to falsify the AGW thesis).  This terminological confusion suggests either that the authors didn’t get the memo, or that the poll was conducted while “global warming” was still the officially-sanctioned term, and had not yet been replaced by “climate change”.  If they repeat the exercise, they might wish to note that the new term is “climate disruption” (or maybe “man-caused disasters” - I’ve lost track of the euphemisms) so as to forestall accusations of lax lexicography.

Why should we care about all this?  No reason.  No reason at all.  It’s an interesting paper about how climate change is perceived in the US (and Canada).  But the horse is starting to stiffen up and stink, so I think I’ll lay off it for a while, and let it continue to decompose in peace.




D) [http://www.gallup.com/poll/146810/Water-Issues-Worry-Americans-Global-Warming-Least.aspx]

E) The Vostok Ice Core data and their interpretation are a fascinating subject.  The data themselves are available on-line at [http://cdiac.ornl.gov/ftp/trends/co2/vostok.icecore.co2].  For interpretations, see J.R. Petit, et al., “Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica”, Nature 399 (1999), 429-436.  Another author notes that “A review of the recent refereed literature fails to confirm quantitatively that carbon dioxide (CO2) radiative forcing was the prime mover in the changes in temperature, ice-sheet volume, and related climatic variables in the glacial and interglacial periods of the past 650,000 years”.  Willie Soon, “Quantitative implications of the secondary role of carbon dioxide climate forcing in the past glacial-interglacial cycles for the likely future climatic impacts of anthropogenic greenhouse-gas forcings”, Physical Geography, 4 July 2007 [http://icecap.us/images/uploads/Soon07-CO2-TempCORR-Preprint.pdf].  Also, “Over the full 420 ka of the Vostok [ice core] record, CO2 variations lag behind atmospheric temperature changes in the Southern Hemisphere by 1.3 +/- 1.0 ka [thousand years].”  Manfred Mudelsee, “The phase relations among atmospheric CO2 content, temperature and global ice volume over the past 420 ka”, Quaternary Science Reviews 20 (2001), 583 [http://www.manfredmudelsee.com/publ/pdf/The_phase_relations_among_atmospheric_CO2_content_temperature_and_global_ice_volume_over_the_past_420_ka.pdf].  The same conclusions have been reached by many other researchers; see, for example, H. Fischer, et al., “Ice core records of atmospheric CO2 around the last three glacial terminations”, Science 283 (1999), 1712-1714; A. Indermuhle, et al., “Atmospheric CO2 concentration from 60 to 20 kyr BP from the Taylor Dome ice core, Antarctica”, Geophysical Research Letters 27 (2000), 735-738; E. Monnin, et al., “Atmospheric CO2 concentrations over the last glacial termination”, Science 291 (2001), 112-114; P.U. Clark and A.C. Mix, “Ice Sheets by Volume”, Nature 406 (2000), 689-690; and N. Caillon et al., “Timing of atmospheric CO2 and Antarctic temperature changes across Termination III”, Science 299 (2003), 1728-1731.

Friday, April 27, 2012

11 April 2011 – Financial disincentives vs. non-discretionary spending

Colleagues,

President Obama raised hackles last week when he offered a glib response to a questioner at a town hall meeting in Pennsylvania who was concerned about the rising price of gasoline.  The official White House transcript of the event (Note A) reads as follows:

Now, I notice some folks clapped, but I know some of these big guys, they’re all still driving their big SUVs. You know, they got their big monster trucks and everything. You’re one of them? Well, now, here’s my point. If you’re complaining about the price of gas and you’re only getting eight miles a gallon -- (laughter) -- you may have a big family, but it’s probably not that big. How many you have? Ten kids, you say? Ten kids? (Laughter.) Well, you definitely need a hybrid van then. (Laughter.)

“Laughter”, eh?  For the record, there is no hybrid vehicle on the market today that will carry a driver and ten children.  And while the President loves to criticize SUVs, he might want to look out the back window of his three-tonne armoured limo - which gets considerably less than 8 miles to the gallon, I might add - at the dozen or so Chevy Suburbans that follow him everywhere, and ask himself how many of those Secret Service folks, D-Boys and assorted strap-hangers you could cram into a Chevy Volt.

But I digress.  The point is that this town hall exchange makes it clear that the Obama Administration, like many other political entities (on both sides of the spectrum), appears to believe that consumer behaviour that they define as undesirable - in this case, consuming fossil fuels for transportation purposes - can be altered through financial disincentives.  This belief says something about their attitude toward the free market, because the same thesis has traditionally been applied to so-called “sin taxes” on products like tobacco and alcohol.  The comparison is strangely apt, because sin taxes demonstrably affect consumption only at the margins.  While casual consumers may reduce consumption of “sinful” products in response to price increases, individuals who are physically addicted to cigarettes and alcohol are not disincentivized by rising costs.  Rather, they support their increasingly expensive addiction by reducing expenditures in other areas, often with catastrophic personal and societal results.

This raises an interesting comparison.  In contemporary Western society, how big a role does consumer choice really play in fossil fuel consumption?  In other words, is consumption of fossil fuels a casual purchase, or are we more akin to addicts?  I use the latter term deliberately, as Western society is routinely and roundly condemned for being “addicted to fossil fuel.”  Obama, in remarks addressing last summer’s oil spill in the Gulf, cited “America’s century-long addiction to fossil fuels” as an argument for favouring “green energy” over exploiting offshore and other domestic oil resources.(Note B)  If this is true - if we are in fact “addicted” to fossil fuels - then choice is less of a factor in energy consumption than people think, and it is illogical to expect financial market disincentives to alter consumer behaviour.  That, after all, is what “addiction” means.

This is fairly easy to check out, actually.  All we need to do is take a look at US fuel consumption patterns over the past several decades. 

Figure 1 - US Fuel consumption patterns 1991-2011
(source: Energy Information Administration, Note C)

For ease of comprehension, I’ve only listed the two most significant types of fuel consumed in the US - motor gasoline (including diesel) and distillate fuel oil - over the past twenty years.  Three features pop out of this graph.  First, fuel oil consumption peaks in the winter months and bottoms out in the summer months, while gasoline consumption peaks during the summer months and bottoms out in the winter months.  No surprises there; winters are cold, and summers are when goods are shipped and people travel.  Second, fuel oil consumption in Feb-Mar of 1994 was atypical, which is also not surprising, as 1994 saw an abnormally cold winter in large areas of the US.(Note D)  Third, the effect of the current recession on fuel consumption is clearly evident; gasoline consumption has declined somewhat, and fuel oil consumption has declined enormously, since the recession began in the fall of 2008.  However, it is worth noting that the proportional decline in consumption is much less for gasoline and diesel - transportation fuels, in other words - than it is for fuel oil.  One reason is that fuel oil has many industrial uses beyond heating homes, and the decline in fuel oil consumption reflects the enormous increase in unemployment and the decline in industrial output associated with the recession.  The divergence in the two curves post-2008 demonstrates that transportation fuel consumption is less vulnerable to a severe economic downturn than fuel oil consumption.  To put it another way, the recession hasn’t caused anywhere near as large a drop in transportation as it has in industry.  Even if they aren’t working, after all, people still have to move, and they still have to eat.

Okay, so how robust is fuel consumption in the face of price changes?  If consumption of transportation fuels is a matter of choice, then, ceteris paribus, as practitioners of the ‘dismal science’ say, an increase in price should result in a concomitant decrease in consumption, right?

Figure 2 - US Fuel (diesel and gasoline) prices vs. total fuel supplied 1991-2011
(source: Energy Information Administration, Note C)

Wrong.

The blue and yellow lines in figure 2 represent, respectively, the price per gallon of gasoline (all grades) and diesel fuel in constant 1982-84 dollars (I had to use 82-84 dollars as that was the only consumer price index data available from the Bureau of Labor Statistics for the whole of the period in question - Note E).  The pink line represents the total of diesel and gasoline supplied for consumption, while the black line is a four-period moving-average smoothed trendline for total fuel supplied (I used four-period smoothing because the data were in weekly increments.  The smoothing establishes monthly patterns, offset to the right by one month.  As you can see, the black line more or less replicates the pattern shown by the yellow [transportation fuels] line from Figure 1).
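For anyone who wants to reproduce the smoothing, there’s nothing exotic about it; it’s a simple trailing average over a four-week window.  A minimal sketch, with made-up weekly figures rather than the actual EIA series:

```python
import numpy as np

# Illustrative weekly consumption figures (not the actual EIA data).
weekly = np.array([8.9, 9.1, 9.4, 9.3, 9.0, 8.8, 9.2, 9.5, 9.6, 9.1])

# Four-period trailing moving average: each point is the mean of the
# current week and the three weeks before it, so the smoothed curve
# reflects roughly monthly behaviour and lags the raw data slightly.
window = 4
smoothed = np.convolve(weekly, np.ones(window) / window, mode="valid")

print(smoothed)   # 7 values, aligned with weeks 4..10 of the input
```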

Okay, what does figure 2 show?  Well, as the pink and black curves demonstrate, transportation fuel consumption cycles annually, high in the summer period, and low in the winter period.  Apart from a gradual increase over the past 17 years, the only significant change was the decline in consumption due to the current recession.  Post-2008, however, we can see a slight recovery during peak consumption periods (i.e., summers) but not during low consumption periods (i.e., winters).  The blue and yellow lines, by contrast, show fuel prices in constant dollars.  Gasoline and diesel prices map so closely on an annual basis that there’s little point in treating them separately. 

We can extract a number of interesting lessons from fuel price behaviour over the past 20 years.  First, fuel prices adjusted for inflation were pretty much the same in 2004 as in 1994.  There was a noticeable bump from 2000-2002, which correlates with the economic disruptions due to the dot-com bust and 9/11; a major increase in 2008; a major drop-off shortly thereafter; and a rapid increase from the recessional trough to the present.  The massive increase took place from summer 2007 to summer 2008, at the same time as grain shortages resulting from cropland diversion to ethanol production led to massive increases in worldwide food prices; and the massive decline correlates precisely with the collapse of the sub-prime mortgage industry in the US that touched off the current recession.  The more recent increase in prices has been linked to increasing demand from industrializing nations like China and India; the 2010 oil spill in the Gulf of Mexico; supply concerns resulting from ongoing regulatory constraints on resource exploitation in the US; and, more recently, the explosion of popular discontent in the Middle East.

What’s interesting in this graph is the obvious lack of a significant correlation between the price of transportation fuels and fuel consumption patterns.  Look at the first half of the chart.  The 2000-2002 price bulge produced no detectable change in the consumption curve.  If fuel were truly a pure market commodity, a “nice-to-have” good that was subject entirely to consumer choice, then logically an increase in price should have led to a decrease in consumption, and a decrease in price to an increase in consumption.  No such thing happened.  One of two interpretations is possible: either the cost of fuel for the average American consumer represents proportionally such a minuscule percentage of total household expenditures that price changes are relatively meaningless (i.e., fuel is cheaper than dirt); or the purposes for which fuel is purchased are so important that price is relatively meaningless, and the consumer will continue to consume it in accordance with existing patterns, regardless of price changes (i.e., fuel is a necessity).

Well, these are also easy interpretations to check out.  According to a recent Reuters report, increases in motor fuel prices are going to raise the average US household’s expenditures on motor fuel by $700 (28%), to $3235, for 2011.(Note F)  Given that the median household income in the US in 2009 was about $50,000 (US Census Bureau), motor fuel costs now represent roughly 6.5% of the median family’s pre-tax income - and a larger share still of its take-home pay.  This is a significant proportion of the average family’s expenditures and eliminates the “fuel is cheaper than dirt” argument.

Does this mean that fuel is a necessity and that consumption, therefore, is less dependent on price than we think?  Take a look at the right half of the graph in Figure 2.  Looking closely at the black moving-average trendline, we can see a very slight deceleration in the aggregate increase in fuel consumption from 2004-2008.  During this period, the real cost of fuel doubled.  The fact that a doubling in price correlates with only a barely-detectable flattening of consumption further reinforces the impression that fuel consumption is poorly linked to fuel price.  This in turn suggests that consumer decision-making about fuel purchases is driven largely by factors other than price.
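To put a rough number on that impression: economists would call this a price elasticity of demand close to zero.  Here’s a back-of-envelope sketch using the approximate magnitudes just described - a price that roughly doubles and consumption that dips by perhaps a couple of percent; illustrative figures, not the EIA series:

```python
# Back-of-envelope arc elasticity of demand, using the rough magnitudes
# discussed above (price ~doubles 2004-2008, consumption dips ~2%).
# These are illustrative figures, not the underlying EIA series.
p0, p1 = 1.0, 2.0        # real price, indexed
q0, q1 = 1.00, 0.98      # consumption, indexed

# Arc (midpoint) elasticity: % change in quantity / % change in price
elasticity = ((q1 - q0) / ((q0 + q1) / 2)) / ((p1 - p0) / ((p0 + p1) / 2))
print(f"arc elasticity ~ {elasticity:.3f}")   # ~ -0.03: very inelastic

# For comparison, a textbook "luxury" good might show elasticity below -1,
# i.e., consumption falling proportionally faster than price rises.
```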

As a gross error check on this reasoning, think about your own driving patterns.  How many of the kilometres that you drive in a week are truly “discretionary” - i.e., how much of the driving that you do is done for purposes that could easily be eliminated without inflicting a significant impact on your personal life, to the point of forcing a “step-change” in your behaviour?  How much of your driving is because you want to, not because you have to?

If the bulk of transportation fuel purchases are indeed effectively non-discretionary, then there is only a very limited extent to which consumer behaviour patterns vis-à-vis transportation fuel consumption are subject to alteration through the imposition of artificial price increases regardless of how they are accomplished; direct taxes (like carbon taxes levied at the pump) or indirect taxes (like price increases resulting from trading in carbon credits purchased by petroleum distillers, with costs passed on to consumers), even if enormous, will not have the desired effect.  This is the difference between a luxury and a necessity.  If the price of caviar goes up, chances are you’ll eat less caviar; and if it goes up enough, you might well phase it out of your diet.  Nobody needs caviar, after all.  If the price of wheat goes up, though, chances are you’ll still buy bread; you’ll just have to adapt by reducing consumption in areas that are more discretionary.  You’ll see fewer movies, put off buying that X-Box, or not purchase the sofa you’ve been saving for.  You don’t really have a choice about buying bread, you see, because the alternative is starvation.

And as the data demonstrate, you largely don’t have a choice about driving your car.  The little discretionary things - like that trip to Disney World you wanted to take the kids on - will disappear.  But that’s less than 10% of the gas you’ll buy in a year.  And a 10% reduction at the margins of consumer behaviour will have a negligible impact compared to much larger effects - like the massive reduction in fossil fuel consumption imposed by a crushing economic crisis; or the massive increase in fossil fuel consumption occasioned by increasing demand for fuel in the growing economies in Asia.

When 90% of your purchases in a given sector are non-discretionary, you’re an addict.  So the bottom line, I guess, is that for all practical purposes, we ARE addicted to fossil fuels - at least in the sense that price changes do not appear to significantly impact consumption patterns (looks like I agree with Obama on something.  Somebody write the date down).  Which is why trying to modify behaviour through price manipulation won’t work.  Forcing consumers to pay higher prices for largely non-discretionary commodities like transportation fuel will not lead to a significant reduction in fuel consumption; it will simply lead to a lower standard of living by forcing them to make sacrifices in discretionary areas.  Spending an extra $700 on gas simply means that the family won’t be able to afford the nice-to-haves, like sending little Johnny to hockey practice.  Which of course means that Mom won’t have to drive him there, which means that she’ll save the expense of the fuel that would have been consumed, which in turn means a concomitant reduction in those greenhouse gas emissions Obama was talking about when he advised that Pennsylvania man to buy a hybrid to transport his 10 children in.

Of course, any family that is having difficulty affording a $700 increase in annual fuel costs probably isn’t in a position to spend $40k on a Prius.  But, hey…details.

Cheers,

//Don//


Notes








Wednesday, April 25, 2012

8 April 2011 – Egout idea

Colleagues,

As something of an aficionado of alternative energy sources, I was mesmerized by a Reuters article earlier this week that reported that various public buildings in Paris - schools, swimming pools, and even the Presidential Palace - are soon going to be heated by energy drawn from the city’s legendary sewers.

No, wait - don’t click out!  I’m not kidding.  Here’s the article:


Heating buildings with energy drawn from sewers - 'cloacothermal' power, if I may be permitted a neologism - is perfectly feasible, and actually quite elegant.  The Paris sewers channel a massive flow of waste - 285,000,000 cubic metres per year, according to the article - which remains at a steady temperature of between 12 and 20 degrees Celsius.  Since one calorie is the amount of energy necessary to raise the temperature of one gram of water by one degree Celsius, and a cubic metre of water (we’ll forget the icky solid stuff for now) masses a metric tonne, drawing that flow down by a mere 1.2 degrees would yield some 342 trillion calories, or 1.4 quadrillion joules, of energy every year.  That’s about a third of a megatonne in nuclear weapons terms; or, in electrical terms, a little less than the annual electrical consumption of Togo.  At present, all of that energy goes to waste, and it’s a great idea to try to capture some of it.
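For the sceptical, here’s the arithmetic laid out as a quick script.  The flow figure comes from the article; the 1.2-degree draw-down is my assumption, chosen to make the figures above line up:

```python
# Back-of-envelope check on the sewer-heat numbers above.  The flow
# figure comes from the article; the 1.2 degC draw-down is the implicit
# assumption that makes the quoted figures line up.
flow_m3_per_year = 285_000_000
grams_per_m3    = 1_000_000        # 1 m3 of water ~ 1 tonne ~ 1e6 g
delta_T_degC    = 1.2              # assumed temperature extraction

calories = flow_m3_per_year * grams_per_m3 * delta_T_degC
joules   = calories * 4.184        # 1 calorie = 4.184 J

print(f"{calories:.3g} cal = {joules:.3g} J")         # ~3.42e14 cal, ~1.43e15 J
print(f"~{joules / 4.184e15:.2f} megatonnes TNT")     # ~0.34 Mt
print(f"~{joules / 3.6e12:.0f} GWh per year")         # ~397 GWh
```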

This is how it works.  A metal plate containing hundreds of internal channels - a heat exchanger, more or less like the radiator on your car, only a lot bigger - is submerged in the waste flow.  Pumps channel a heat-exchange fluid (usually a water/ethylene glycol mixture or some similar antifreeze solution) through the plate, forcing cold liquid down into it where it is warmed by the waste flow, and bringing the warmed fluid back up into a heat pump.  The heat pump works like a refrigerator or air conditioner run in reverse: the warmed heat-exchange fluid passes through an evaporator, where it boils refrigerant in a copper coil; a compressor then squeezes the refrigerant vapour, driving its temperature up; and the hot, high-pressure gas flows into a condenser, where it releases the concentrated heat (a reversing valve lets the same circuit run backwards to provide cooling).  That heat can be used to warm just about anything.  For a building, it is used either via forced-air circulation, or - since this is Paris - probably an hydronic system where the heat is circulated to conventional radiators via another water or water/antifreeze loop.  As the engineers cited in the article rightly point out, there can be no odour, since the original heat source and the destination are totally isolated.  With decent insulation, the heat can actually be transported a reasonable distance with minimal losses.
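The attraction of the scheme is that a heat pump delivers several units of heat for each unit of electricity you feed it.  A minimal sketch - the coefficient of performance (COP) of 4 and the 100 kW demand are my assumptions, plausible for a source this warm, not figures from the article:

```python
# Rough heat-pump energy balance.  The COP of 4 is an assumption
# (plausible for a 12-20 degC source), not a figure from the article.
heat_demand_kW = 100.0   # heat a building needs on a cold day (assumed)
cop            = 4.0     # units of heat delivered per unit of electricity

electricity_kW  = heat_demand_kW / cop               # what you pay for
heat_from_sewer = heat_demand_kW - electricity_kW    # what the waste flow supplies

print(f"electricity in : {electricity_kW:.0f} kW")   # 25 kW
print(f"sewer heat in  : {heat_from_sewer:.0f} kW")  # 75 kW
print(f"heat delivered : {heat_demand_kW:.0f} kW")
```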

This is precisely how a lake-plate heat pump works, except that for a lake system, the heat exchanger plate is submerged in a large body of water, while in the proposed Paris system, the plate is submerged in…well, not to put too fine a point on it, but yesterday’s bœuf bourguignon.  The best part is that there’s no well drilling involved; you just drop the plate in the waste flow, connect the pipes, and you’re off to the races.

For those who’re interested, I’ve included a link to a brief bit of literature on plate heat exchangers and how they work. (Note A)

This is all very cool.  I foresee only three possible problems.  The first is mentioned in the article, which notes that the sewers have occasionally been home to “rats, pickpockets, intrepid tour groups and the odd corpse” (not to mention werewolves and tragically disfigured composers).  None of these are likely to be as much of a problem as vandals.  Lake plate systems and their piping are tough, but they’re not invulnerable, and in the sewers they will be exposed to mischief-makers.  This is not an insignificant consideration in a city that sees more than a thousand cars subjected to multicultural incendiarism every year.

The second problem is clogging.  In order to maximize thermal transfer, a heat exchanger - like your radiator - circulates fluid through pipes surrounded by thin vanes designed to maximize surface area.  This means that the plate contains many thousands of tiny channels that are easily clogged by debris.  There’s a lot more debris in a sewer than at the bottom of a lake.  In order to maintain system performance, the plates will have to be cleaned regularly.  Depending on how bad the clogging gets, it might be easier to simply swap them out for new ones and take the old ones away for a thorough cleansing and refurbishment.  The cost of programmed maintenance has to be figured into the cost-benefit analysis for the system (and maintenance costs are always underestimated).  And I don’t know about you, but cleaning or swapping out heat exchanger plates that’ve been submerged in a sewer is not a job I’d be lining up for.  As one geothermal expert I know put it, "I wouldn't want to be the guy who has to de-scale the plates."

The third problem is thermal depletion.  The size of the plate determines how much heat you can transfer - but for cost reasons, plate systems tend to be sized as small as the designer can get away with.  On very cold days, the demand on the heating system will be high.  With a very efficient heat pump operating on a water-glycol loop, it is possible to draw so much heat out of the circulating fluid that its temperature drops below zero.  When this happens, ice begins to form on the submerged plate, causing a collapse of system performance, and also causing the plate to begin floating.  If there are enough plates sunk in the sewers, it might even be possible to draw so much heat out of the effluent flow that the sewers begin to freeze up - and when sewers freeze up, they back up, with unpleasant results for upstream customers.  Fortunately, there’s an easy fix; if the heat pump system is designed for air conditioning as well as heating (and given that nearly 15,000 people died in France in the 2003 heat wave, a little air conditioning might not be a bad idea), you’d just have to reverse the system and run it in a/c mode for a few hours, sucking heat out of the buildings and forcing it down into the plates to melt the ice and get the system running again.  Doing so would make the Elysée a little chilly for Sarko and the missus, to say nothing of the swimmers in the solidifying public pools or the shivering tots in the sewer-heated schools - but that’s what happens when engineers have to fix problems created by mathematicians and physicists.  “Elegant, fast, functional - pick any two.”

One other thing about thermal depletion: the “material solids” contained in the effluent flow in sewers are broken down by bacterial action, and bacteria do their best work under defined conditions of salinity, pH balance, and of course temperature.  I’m just spit-balling here, but there’s a reason you don’t sink lake plates in the settling ponds at sewage treatment plants.  Chilling the Paris sewer water down by a few degrees could impede bacterial action, with unpleasant results.  Time will tell, though, I suppose.  I’m certain the planners have thought of this.
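To gauge the scale of that worry, here’s a quick sketch with my own illustrative numbers (not the planners’): a flow this size shrugs off even a fairly large heat draw, so the bacteria are probably safe unless the scheme grows enormously:

```python
# How much would a given heat draw cool the Paris waste flow?
# Illustrative numbers, not from the article.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
flow_kg_per_s = 285_000_000 * 1000 / SECONDS_PER_YEAR   # ~9,000 kg/s
c_water = 4186.0                                        # J/(kg*degC)

heat_draw_W = 10_000_000          # a hypothetical 10 MW of extraction
delta_T = heat_draw_W / (flow_kg_per_s * c_water)

print(f"flow ~ {flow_kg_per_s:.0f} kg/s")
print(f"temperature drop ~ {delta_T:.3f} degC")   # ~0.26 degC for 10 MW
```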

Anyway, I say go for it.  It’s brilliant.  The Paris sewers are more than a monument to the eternal glory of the Ville Lumière; they’re a precious natural resource, and there’s no reason to lose all of that glorious, bio-generated and nicely-packaged heat into the general environment.  I suppose it won’t be long before restaurants start displaying signs saying, “Please try the curry - you’re heating the Presidential Palace!”

Cheers,

//Don//

Notes

Saturday, April 21, 2012

RIP - Greatly Exaggerated

Steve Goddard, the brilliant and often hilarious iconoclast who operated Real-Science.com, has died at the age of 81.


With Steve, the data always came first.  I can offer no higher praise than that.

Looks like some whack-job posted a fake death notice for Steve.

God, the internet is full of freaking nutbags, isn't it?  Sometimes I feel like Sam the Eagle:  "You people are all WEIRD and SICK."

Friday, April 20, 2012

30 March 2011 – Is there a ribbon to protest the abuse of language?

Colleagues,

A recent office move stirred up many things, among them dust bunnies, long-lost paperclip sculptures, and the collection of discarded Euro-coins and British pennies I’d been keeping in the bottom of a drawer.  It also brought to light magnificent strata of old, long-forgotten photocopies of articles I had pack-rattingly squirreled away over the years.  Since the fall of the government last Friday and the resulting launch of yet another federal electoral campaign, and especially in view of the reams of speculative scribbling and professional political blather that have accompanied the launch of military operations against the Khadafy regime in Libya, one of the buried articles I unearthed seemed particularly timely.  I refer, of course, to George Orwell’s magnificent 1946 piece, “Politics and the English Language.”

The thrust of Orwell’s diatribe against obfuscatory prose is this: the purpose of language is the communication of ideas, which places it in diametric opposition to political language, which “is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.”  Orwell identifies a variety of sins committed routinely (accidentally as often as deliberately) by the political writer:

·         Deploying metaphors that have been worn out through over-use (as examples, he offers a number that are still in widespread use today, e.g., ‘toe the line’, ‘ride roughshod over’, ‘stand shoulder to shoulder with’, ‘play into the hands of’, ‘no axe to grind’, ‘grist to the mill’, ‘Achilles’ heel’, ‘swan song’ and ‘hotbed’).  While a clever or novel metaphor may serve the writer’s (or the speaker’s) purpose by “evoking a visual image”, the use of such worn-out or dead metaphors serves only to clutter up an argument with syntactical dead weight;

·         Using what he terms “operators or verbal false limbs” - polysyllabic phrases in place of more accurate single words.  As examples he offers ‘render inoperative’ (break), ‘militate against’ (impede), ‘make contact with’ (touch), ‘play a leading role in’ (affect), and so forth.  Here, too, the passive voice comes into play, generally in conjunction with prepositional phrases that seem designed to dissociate the writer from his prose (“in view of”, “by dint of”, “in the interest of”, “the fact that” - or my own personal besetting sin, “in that context”).  Bad writers also attempt to lend profundity to their work by means of what Orwell calls the “not un-” formulation.  “Possible” becomes “not unlikely”; “favourable” becomes “not entirely undesirable”, and so on;

·         Pretentious diction, or what my second year classical strategy prof used to call “using a Latin polysyllable when an Anglo-Saxon monosyllable will do”.  This is a relatively simple sin to avoid, especially for anyone who speaks both a Latin and an Anglo-Saxon language, as the differences are readily apparent.  Words of Latin origin, he argues, are used to “dress up a simple statement and give an air of scientific impartiality to biased judgements”.  Foreign words and expressions, similarly, are used to give an air of “culture and elegance”.  “Bad writers”, Orwell argues, “and especially scientific, political, and sociological writers, are nearly always haunted by the notion that Latin or Greek words are grander than Saxon ones,” leading to the overuse of unnecessarily complex formulations in writing, especially in those fields.  Ironically, Orwell also notes that the jargon of Marxism consists of words transliterated and adapted from Russian, German or French, the languages of violent revolution in the 19th and 20th Centuries - cannibal, petty bourgeois, lackey, flunkey, mad dog, etc.; and,

·         Meaningless words.  Orwell suggests that this lexical sin takes two forms: first, the use of language in inapplicable circumstances (as, for example, the attribution of subjective human emotional states to things like art); and second, the use of words which, through repeated over-application, have been drained of all meaning.  This latter category, Orwell argues, includes words like “fascism”, which once had a clear meaning but which now, due to overuse as a political epithet, has been drained of all meaning “except in so far as it signifies ‘something undesirable’.”  The same has happened to once useful words like “democracy”, “socialism”, “freedom”, “patriotic”, “realistic” and “justice”; the author now has to define what he means when he uses them.  And remember, Orwell was writing before there was a country called “The Democratic People’s Republic of Korea”.

Orwell calls this his “catalogue of swindles and perversions”, a nice, meaty summation of the literary crimes of which we’ve all been guilty at one time or another (I for one am a self-confessed repeat offender), and offers, as an example of the imprecise and ugly prose characteristic of modern writing styles, this passage:

Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

This, Orwell states, is what one would get if one were to “translate” a famous paragraph from Ecclesiastes into modern bureaucratese:  “I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.”  I don’t think there’s any doubt which version more clearly expresses the author’s intent, or indeed which is more inherently beautiful.  Nor is there any doubt which one would survive the editor’s pen in a modern bureaucracy, and which one would be sent back to the author for recrafting.

“Modern writing”, Orwell goes on to argue, “does not consist in picking out words for the sake of their meaning and inventing images in order to make the meaning clearer.  It consists in gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug.”  We do this, he suggests, because it’s easier; “you not only don’t have to hunt about for the words; you also don’t have to bother with the rhythms of your sentences since these phrases are generally so arranged as to be more or less euphonious.”  From the point of view of those of us faced with writing on contentious issues in an environment where our words are routinely scrutinized for any potential perception of incompatibility with official policy, there is a further layer of comfort in pre-authorized verbiage; I’m sure I’m not the only writer who remembers being ordered to replace newly-researched and carefully-crafted prose with outdated and ploddingly insipid “approved language” which, because it had already been signed off by this or that manager, was deemed to have achieved a state of inviolate and perpetual perfection.  Such practices invert the very purpose of the written word.

There is yet another comfort in complex linguistic constructions; they isolate the writer from responsibility for his writing, in perception if not in fact.  One eventually adopts the habit of writing phrases like, “In the author’s opinion, it is not an unjustifiable assumption that” instead of the far more concise and accurate, “I think.”  The two phrases mean precisely the same thing, but the former is presumed to sound more objectively scientific when in reality it is simply more abstruse.  There is also the perception – which I think, or at least hope, is incorrect – that polysyllabic qualifiers are somehow less condemnatory than concise, qualitatively accurate descriptions of ideas or events.  We saw an example of this a few weeks ago, when a young politician objected to an immigration pamphlet that characterized so-called “honour killings” as “barbaric,” arguing that “absolutely unacceptable” would be a less offensive formulation.(Note A)  One was left to wonder why, precisely, Canadians should be distressed at the thought of having offended those who murder women and girls for perceived slights to their honour.  The linguistic question, however, devolved upon the failure to recognize that the two terms were neither synonymous nor mutually exclusive.  An act, after all, can be unacceptable without being barbaric – electronic identity theft comes to mind – while another, for example the television show “Trailer Park Boys”, might be considered barbaric (using the dictionary definition of “rough or uncultured”) without being deemed unacceptable.  The brouhaha that followed was the result of the sloppy use of language, and its devolution into denunciations and hasty back-pedaling was as entertaining as it was instructive.

Even when we don’t commit these “swindles and perversions” ourselves, we are surrounded by no end of obfuscatory lingo.  Political speeches are often a trove of what was once referred to as “bumf”, as are many if not most policy documents.  Recent publications in our own Department provide some truly stunning examples of convoluted prose that can be remarkably difficult to decrypt.  Such examples (and we’re all aware of at least a few) demonstrate that the quality of language is often entirely disconnected from the quality of the thought that the language is intended to convey; it is as easy to present a good idea in bad writing as it is to present a bad idea in good writing.  What is truly difficult is to express a good idea in good language.  Obviously, it helps if you have a good idea from the outset.

Orwell summarizes his observations by opining that the “scrupulous writer, in every sentence that he writes, will ask himself at least four questions, thus:

“1. What am I trying to say?

2. What words will express it?

3. What image or idiom will make it clearer?

4. Is this image fresh enough to have an effect?”

He adds that the writer will probably ask himself two further questions:

“1. Could I have put it more shortly?

2. Have I said anything that is unavoidably ugly?”

In pursuit of good writing, he offers five rules:

“1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print;

2. Never use a long word where a short one will do;

3. If it is possible to cut a word out, always cut it out;

4. Never use the passive where you can use the active;

5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent;”

…and finally,

“6. Break any of these rules sooner than say anything outright barbarous.”

If we’re honest with ourselves, I’m sure we can all come up with examples of how we’ve broken these rules at one time or another.  I can think of a few dozen examples in the TM I’m currently drafting, and that’s only in the introduction.  Hell, I can think of a few dozen examples in this message!

Of course, there’s really no need to go to all of this trouble over mere words, is there?  After all, as Orwell notes, we can avoid the agony of crafting accurate, expressive language by “throwing [the] mind open and letting the ready-made phrases come crowding in.”

They will construct your sentences for you – even think your thoughts for you, to a certain extent – and at need they will perform the important service of partially concealing your meaning even from yourself.

“It is at this point,” he concludes, “that the special connection between politics and the debasement of language becomes clear.”  Orwell goes on to argue that in the rare circumstances where political writing is not bad, “it will generally be found that the writer is some kind of rebel, expressing his private opinions and not a ‘party line’.  Orthodoxy, of whatever colour, seems to demand a lifeless, imitative style.”  The prolonged practice of cleaving to orthodoxy in writing and in speech, he warns, turns the author into a machine: “The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself.”  This “reduced state of consciousness”, according to Orwell, “if not indispensable, is at any rate favourable to political conformity.”  This is a matter of grave concern to the well-being of the free world, given how much of modern politically-derived discourse is dedicated to portraying failures as victories; presenting as utterly certain proposals or hypotheses that are fundamentally not amenable to certainty; stating as fact things that are arrant nonsense; and as Orwell says, mounting a “defence of the indefensible.”

Simplifying our use of English, Orwell concludes, frees us from “the worst follies of orthodoxy.”  If we eschew the formulaic phrases so beloved of bureaucracies, if we refuse to accept their incestuous, obfuscatory dialects as the standard for communication (or as is so often the case, numbing non-communication), then when we say or write something manifestly stupid, it will be obvious – even to the one who wrote it.  It is, after all, difficult to make “lies sound truthful” if one uses words like “lie” and “truth” instead of, for example, “somewhat less than wholly forthright” and “presumably not fundamentally inaccurate”.

Go here for a .pdf copy of Orwell’s piece for your delectation.  You might find, as I did, that it seems to become more applicable every time you read it.

You might also find this link helpful:


It’s a set of hints for translators facing the arduous task of translating government documents into other languages.  I particularly enjoyed this bit of advice: “Non-verbs like “impact” and non-adjectives like “impactful” must be changed to words with real meanings.”

And this one: 

“The second part of the translator’s solution for incomprehensible nonsense is to be aware that the purpose of many such expressions is not to say anything of substance at all. Rather, the author stuck the text in just to occupy space without running the risk of adorning it with content. Such fragments make the letter look longer and appear to say more. They are the old bread that you stuff into the turkey to make it look fatter. With a bit of creativity, a good translator can come up with an equally vapid string of words from the target language and culture. Bureaucratese happens all around the world. Since nothing is really being said, giving the appearance of content without saying anything would be a faithful translation.”


Maybe there’s hope for the Queen's English after all.

Incidentally, it turns out that there doesn’t seem to be a ribbon to protest the abuse of language.(Note B)  The closest ones I could find were the campaigns to “Protest Political Correctness” and in favour of “Less Crap Online!”, both of which use a brown ribbon.  Unfortunately, the same colour was also chosen by the campaigns for “Free Beer Online”, “Chombos Chocolate” (whatever that is), and “Carnie Rights Awareness.”  I guess if we want a ribbon, we’ll have to come up with our own.

Cheers,

//Don//



Notes


Thursday, April 19, 2012

18 March 2011 – Humans and risk assessment

Colleagues,

As of this morning [18 March 2011], the death toll in Japan from last Friday’s tsunami stands at 6,539.  As another 10,250 persons are still listed as missing, the toll is almost certain to climb.  These numbers place last week’s event ahead of the 1995 Kobe earthquake in Japan’s annals of lethal disasters.  380,000 people are still in 2,200 shelters; 100,000 of them are evacuees from the area surrounding the Fukushima nuclear power plant, where workers are still attempting to bring the situation at stricken reactors and spent-fuel cooling ponds under control.


A few years ago, one of my esteemed colleagues published a paper entitled “Reflections on the Proliferation of Threats”, the core argument of which was that governments display a lamentable tendency to conflate threats driven by human intention with challenges deriving from the often very deadly, but quite insensate, environment in which we live.  One of the chapters of this note, entitled “A sense of proportion”, focussed more closely on a peculiarity of human nature: how very bad we can be at assessing risk.  He offered urban mortality rates as a case in point.  According to a study that had been cited by the IPCC, as a consequence of climate change New York City could be expected to experience an additional 500-1000 deaths due to heat over the period 2000-2050.  This amounts to 10-20 climate-change-related deaths per year for the Big Apple.  Data from the Centers for Disease Control and New York’s own Department of Public Health offer some perspective on that number.  Of the more than 55,000 people who died in NYC in 2006, for example, more than 1000 died from complications from diabetes, more than 1500 from alcohol-related causes, and 1000 in infancy.  100 people die every year in New York in construction accidents - and yet New Yorkers still build buildings.  21,000 die from heart disease, and 13,000 from cancer,(Note A) and yet New Yorkers still eat Big Macs and smoke Marlboros.  Such figures, the paper notes, put the risk of excess mortality from climate change into appropriate context.(Note B)

I raise the issue of risk assessment in the context of the death toll from last week’s tsunami because of the ongoing drama over events at the Fukushima nuclear power plant and the media-driven frenzy over radiation fears, both in Japan (where at least there is some factual basis for worry, however slender it may be) and elsewhere around the world (where there is no basis whatsoever for the panic).  As I write this, people in Canada and the US are emptying pharmacies of potassium iodide pills, and you can’t get a Geiger counter for love or money.  So let’s take a look at how big the risk really is, shall we?

According to World Nuclear News, the normal annual allowable radiation dose for nuclear power plant workers is 20 millisieverts (mSv) per year; after receiving that much radiation, no further nuclear activities are permitted.  In practice, most workers receive much, much less.  In emergency situations, that limit may be increased to 100 mSv.  This is the point at which health effects from exposure become possible.  To make this clearer: below 100 mSv per year, there is no detectable statistical relationship between radiation exposure and human health; above this level, a statistical relationship begins to become apparent.  Due to the scale of the disaster at Fukushima and the importance of controlling the situation, Japan’s Nuclear and Industrial Safety Agency has allowed the maximum exposure for on-site personnel to be increased to 250 mSv.
How bad is that?  Well, as of yesterday, one week into the crisis, this is how bad it was:

·         One Tepco worker working within the reactor building of Fukushima Daiichi unit 3 during “vent work” was taken to hospital after receiving radiation exposure exceeding 100 mSv, a level deemed acceptable in emergency situations by some national nuclear safety regulators.

·         Nine Tepco employees and eight subcontractors suffered facial exposure to low levels of radiation. They did not require hospital treatment.

·         Two policemen were decontaminated after being exposed to radiation.

·         An unspecified number of firemen who were exposed to radiation are under investigation. (Note C)

Yes, you read that right.  One Tepco worker has received a higher than 100 mSv dose.  That’s it.  That’s all.

How about radiation levels?  Well, a couple of days ago the Tokyo office of Denphone set up a web-enabled real-time radiation monitor showing 4-hour, 24-hour, and 1-week radiation level readings.  Here’s the 1-week chart:
Figure 1 – Denphone Tokyo Realtime radiation monitoring chart; March 17-18 2011

For the past two days the average radiation level at the Denphone office in Tokyo has been below 30 micro-Roentgens per hour; the max has not exceeded 50, and the levels are dropping.  Normal background radiation, by the way, is 23 micro-Roentgens per hour.  (The sievert is the modern unit replacing the Roentgen-equivalent-man, or rem.)  If radiation levels were to remain at 30 micro-Roentgens per hour, the additional annual dose a Tokyo resident would receive - that is, the 7 micro-Roentgens per hour above background, accumulated over the 8,760 hours in a year - would be 61 milliRems, or .061 Rems.  Since 1 Sievert is 100 Rems, this would be an additional annual dose of 0.61 mSv, or about .0017 mSv per day.
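Here’s that conversion chain as a quick script, for anyone who wants to fiddle with the inputs.  I’m treating 1 micro-Roentgen of exposure as roughly 1 microrem of dose, which is a fair approximation for gamma radiation:

```python
# Converting the Tokyo readings into an annual dose.  Treats 1 microR
# of exposure as ~1 microrem of dose (reasonable for gamma radiation).
reading_uR_per_h    = 30.0    # observed level in Tokyo
background_uR_per_h = 23.0    # normal background
hours_per_year      = 8760

extra_uR   = (reading_uR_per_h - background_uR_per_h) * hours_per_year
extra_mrem = extra_uR / 1000          # ~61 mrem/year above background
extra_mSv  = extra_mrem / 100         # 1 rem = 10 mSv, so 1 mrem = 0.01 mSv

print(f"extra dose: {extra_mrem:.0f} mrem/yr = {extra_mSv:.2f} mSv/yr")
print(f"per day   : {extra_mSv / 365:.4f} mSv")   # ~0.0017 mSv/day
```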

To put that into perspective, consider the following:

·         sleeping next to another human for 8 hrs: .0005 mSv.  So the additional annual dose rate would be the same as sleeping next to 3 humans for 8 hours per day.

·         a mammogram is 3 mSv

·         a chest CT scan is 6-18 mSv

·         cosmic radiation at sea level is .24 mSv/year

·         terrestrial radiation (from the ground) is .28 mSv/year

·         radiation from the granite in the US Capitol Building: .85 mSv/year

·         airline crew on New York - Tokyo runs accumulate an additional 9 mSv/year

·         smoking 1.5 packs of cigarettes per day: 18-65 mSv per year

In other words, someone who smokes a pack and a half a day will receive up to 100 times more excess radiation per year than the folks in Tokyo are absorbing at present.  To say nothing of the further dangers of smoking, of course.

Radiation is a complicated phenomenon and is present in so much of the world around us that we never think about it.  For example, the dose from eating a banana is .0001 mSv.  So if you eat a banana a day, you’re voluntarily absorbing more radiation every year than you would get from 7 dental X-rays.(Note D)

How about North America?  Are we in any danger?  Well, there’s a network of radiation detectors in the US (and you can join it, if you want to, and if you can buy a real-time web-enabled Geiger counter, which at present you can’t because the companies that make them are sold out).  Geiger counters measure radiation in counts-per-minute (CPM) of alpha and beta particles.  The normal background level is 25-75 CPM.  Here’s the US as of 7:04 EST this morning:

Figure 2 – Realtime radiation results for the continental US as of 0404 PST, 18 March 2011

The numbers on the West Coast are revealing; 16-34 CPM, low on the normal range for background radiation (the “2004” that appears on the map is the network’s patent date, not a reading).  Why is Denver so high, you ask?  Probably because of its altitude; there’s less atmosphere to block cosmic radiation.

So I’d reflect before popping too many potassium iodide pills.  They work by filling your thyroid gland with non-radioactive iodine so that it can’t absorb highly radioactive iodine-131 - but the potential radiation from Fukushima, if any ever gets here, won’t be in the form of I-131 anyway.  Which sort of makes you wonder why US Surgeon General Regina Benjamin told people to stock up on them...but that’s a problem for another day.

Bottom line, there has been no significant radiation from the Fukushima disaster to reach North America.  Additional radiation levels in Tokyo are two orders of magnitude lower than the radiation to which the average Japanese is exposed from cigarette smoking.  And even in the heart of the crisis, at the Fukushima power plant itself, there has to date only been one individual exposed to more than 100 mSv of radiation - and he absorbed less than he would if he were a 3-pack-a-day man.  Could it get worse?  Yes, of course it could.  But so far it hasn’t.  A meltdown, partial or complete, is possible.  “Chernobyl” (let alone “The China Syndrome”) is not.

Earlier this morning, Japan rated the Fukushima situation as “equivalent to 3-Mile Island”, which - as Greenpeace, the Union of Concerned Scientists, CNN and other media outlets keep reminding us - was the “worst nuclear accident in US history”.(Note E)  There were no deaths or injuries either at the plant or in the local community as a result of the 3-Mile Island partial meltdown.(Note F)  As “worst accidents” go, that’s not so bad - especially when you consider that, according to the CDC mortality data reported in Peter’s paper, it is statistically probable that 131 Americans will die in traffic accidents, and 34 in gun homicides...today.

Over the past week, the only death at the Fukushima power plant was a crane operator who succumbed to injuries received in the initial earthquake.  Meanwhile, 25 of the 100,000 people evacuated from the Fukushima area have died in the shelters to which they were moved.  Sometimes the cure is worse than the disease.  Idiotic headlines comparing Fukushima to Chernobyl or evoking the dreaded “China Syndrome” aren’t helpful, and they distract scarce recovery efforts from the real problems.  And, depressingly, they show how bad people continue to be at assessing probabilities, costs and risks. 

Now, to go buy a LottoMax ticket!

Cheers,

//Don//
NOTES

A) See Peter Archambault, Reflections on the Proliferation of Threats, DRDC CORA TN 2009-020, June 2009, 9-10.
B) The study predicting excess heat-related mortality (Kalkstein and Greene, 1997) assumed that the IPCC model predictions for future temperature regimes were accurate and that temperatures would continue to rise.  The average global temperature has in fact fallen since Kalkstein and Greene published their study.
D) This is because bananas, as we all know, are a major source of potassium, and potassium-40 is a highly radioactive isotope.  Crates of bananas regularly set off radiation detectors at US ports and customs facilities.  By the way, if radiation scares you, then stay the hell away from Brazil nuts; pound for pound, they’re about 30 times as radioactive as bananas.
E) [http://www.cnn.com/2011/WORLD/asiapcf/03/18/japan.nuclear.reactors/index.html?hpt=T1#]
F) [http://www.rps.psu.edu/probing/nuclearpower.html].