Thursday, October 25, 2012

Patron saint of empiricists

 
After Richard Feynman, Karl Popper, E.H. Carr, and so forth.
 
Also, patron saint of people with alien tracking devices lodged in their skulls, and people with gullible, clinically insane colleagues.
 
(Image courtesy Failblog)

Thursday, October 18, 2012

11 January 2011 – The impossibility of data management

Colleagues,

Empirical science has four steps: observation, hypothesis, experimentation and synthesis.  Two of those - observation, which requires noting a previously unexplained phenomenon, and experimentation, which is designed to produce evidence - are based, respectively, on collecting and generating data for further analysis.  Data is the bedrock of the scientific method, because it is only through data that established, often cherished, assumptions about how the world works may legitimately be challenged.  So you’d think there couldn’t be any such thing as ‘too much’ data, right?

“Wrong,” says David Weinberger.  In a new book with the wonderfully expressive title Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts are Everywhere, and the Smartest Person in the Room is the Room, Weinberger argues that our ability to generate data is outstripping not only our ability to organize it in such a way that we can draw useful conclusions from it, but even our cognitive ability to grasp the complexities of the phenomena we are trying to understand.

This isn’t a new phenomenon.  As Weinberger notes, in 1963 - right around the same time that Edward Lorenz formulated chaos theory to explain the inherent impossibility of predicting the long-term behaviour of non-linear systems like weather - Bernard Forscher of the Mayo Clinic published a letter in Science entitled “Chaos in the Brickyard”, in which he complained that scientists were generating - of all things - too many facts.  According to Weinberger,

...the letter warned that the new generation of scientists was too busy churning out bricks — facts — without regard to how they go together. Brickmaking, Forscher feared, had become an end in itself. “And so it happened that the land became flooded with bricks. … It became difficult to find the proper bricks for a task because one had to hunt among so many. … It became difficult to complete a useful edifice because, as soon as the foundations were discernible, they were buried under an avalanche of random bricks.” [Note A]

The situation today, Weinberger argues, is astronomically - and I use the word “astronomically” quite deliberately, in the sense of “several orders of magnitude” - worse than it has ever been.  In an article introducing his book, he puts the problem thus:

There are three basic reasons scientific data has increased to the point that the brickyard metaphor now looks 19th century. First, the economics of deletion have changed. We used to throw out most of the photos we took with our pathetic old film cameras because, even though they were far more expensive to create than today’s digital images, photo albums were expensive, took up space, and required us to invest considerable time in deciding which photos would make the cut. Now, it’s often less expensive to store them all on our hard drive (or at some website) than it is to weed through them.

Second, the economics of sharing have changed. The Library of Congress has tens of millions of items in storage because physics makes it hard to display and preserve, much less to share, physical objects. The Internet makes it far easier to share what’s in our digital basements. When the datasets are so large that they become unwieldy even for the Internet, innovators are spurred to invent new forms of sharing.  The ability to access and share over the Net further enhances the new economics of deletion; data that otherwise would not have been worth storing have new potential value because people can find and share them.

Third, computers have become exponentially smarter. John Wilbanks, vice president for Science at Creative Commons (formerly called Science Commons), notes that “[i]t used to take a year to map a gene. Now you can do thirty thousand on your desktop computer in a day.” [Note A]

The result, Weinberger argues, is actually worse than Forscher predicted.  We are not merely awash in “bricks/facts” and suffering from a severe shortage of “theory-edifices” to organize them with; the profusion of data has revealed to us systems of such massive intricacy and interdependence that we are incapable of visualizing, let alone formulating, comprehensive theories to reduce them to manageable - by which is meant, predictable - rules of behaviour.

Scientists facing “data galaxies” are being forced down one of two paths.  The first path - reductionism - attempts to derive overarching, general principles that seem to work well enough to account for the bulk of the data.  Historically, searching for “universals” amid the vast sea of “particulars” has been the preferred approach of empirical science, largely because there tend to be many fewer universals than there are particulars in the physical world, and also because if you know the universals - for example, Newton’s laws of motion and the fact that gravitational attraction between two objects varies inversely as the square of the distance separating them - then you can often deduce the particulars: for example, where you should aim your Saturn V rocket in order to ensure that the command and service module achieves Lunar orbit instead of ending up headed for the heliopause.

The second path - modelling - involves the construction of tunable mathematical models that can be tweaked to produce outputs that mimic, as closely as possible, observational data without attempting to derive an overarching theory of how the system that produced the data actually functions.  This has been the preferred approach for dealing with complex phenomena that do not easily lend themselves to reduction to “universals”.  When it comes to complex systems, however, both paths exhibit crippling flaws.  Overarching theories that explain some data, but not all of it, can help us to approach a solution, but they cannot produce “settled science” because unexplained data are by definition the Achilles Heel of any scientific theory.  Meanwhile, modelling, even if it can come close to reproducing observed data, cannot replace observed data, and is never more than a simulation - often a grossly inaccurate one - of how the real world works, because if we cannot visualize all of the complexities and interdependencies of a given problem set, we certainly cannot design mathematical formulae to simulate them.

As if these two unacceptable paths were not bad enough, perhaps the most alarming deduction that Weinberger draws from his analysis is that these data-driven trends are forcing us towards a “new way of knowing” that is, from a human perspective, almost entirely “virtual.”  According to Weinberger, only computers can generate the vast quantities of data necessary to investigate a given phenomenon; only computers have the capacity to organize and store so much data; and only computers can perform the unthinkable number of calculations necessary to process the data and create modelled outputs.  Our role, such as it is, has been to create the mathematical models for the computers - but even this last redoubt of human involvement is collapsing as we become increasingly incapable of visualizing the scope and interrelationships of the problems we are trying to solve.  In other words, the future of research into complex interdependent phenomena is gradually departing the human domain, simply because we lack the cognitive wherewithal needed to cope with it.

Does this mean that scientific inquiry is doomed to leave the human sphere entirely?  Or that it’s destined to grind to a halt unless we can come up with an AI that mimics human cognition to the point of being able to visualize and craft investigative solutions to huge, complex problems?  I certainly don’t dispute that science has developed the ability to drown itself in data; but there are ways of dealing with preposterous quantities of information.  As I believe I’ve pointed out before, historians are accustomed to being drowned in data.  From the point of view of the science of historiography, this is not a new problem.  Take, for example, a relatively straightforward historical event - the Battle of Waterloo.  Working from basic numbers, Napoleon had about 72,000 troops, and the Allies under Wellington and Blücher had about 118,000.  Reducing the affair to nothing more than a series of iterative diarchic interactions (which I am the first to admit is a wholly ludicrous proposition, but which frankly is no more ridiculously inappropriate than some of the assumptions made in climate modelling - for example, the assumption that clouds warm the Earth, when observed data suggest that they cool it) suggests that there were about 8.5 billion potential diarchic interactions in the first iteration alone.  And that’s only the individuals.

No detail is insignificant enough to be safely eliminated from your model, and each increase in the fidelity of the data you input ought to help refine the accuracy of your output - unless you’re trying to model a non-linear system, in which case the fidelity of your input doesn’t matter, because inherent instabilities will quickly overwhelm the system (see “Nonlinearity and the indispensability of data”, 15 June 2011).  In the quest for greater fidelity/realism, you would need to model the characteristics and behaviour not just of each individual but also of their weapons, their clothing, their boots or shoes, their health and physical condition, their horses, their guns, their limbers, their ammunition, the terrain, the obstacles, the weather, the psychological vagaries of individual soldiers and commanders...how many details are we talking about here?  Are we getting to the level of a “non-visualizable” problem yet?
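For the record, that 8.5 billion figure is nothing more exotic than the number of possible one-on-one pairings between the two sides.  Here's the arithmetic as a few lines of Python (strengths as quoted above), along with the count of all possible pairs among the full 190,000 participants for comparison:

```python
from math import comb

french, allied = 72_000, 118_000   # rough strengths quoted above

cross_pairs = french * allied            # one-on-one pairings across the two armies
all_pairs = comb(french + allied, 2)     # every possible pair among all participants

print(f"French-Allied pairings:       {cross_pairs:,}")   # 8,496,000,000 (~8.5 billion)
print(f"All pairs of 190,000 people:  {all_pairs:,}")     # ~18 billion
```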

Is it even possible to model something this complex?  Well, it’s sort of already been done.  I’m sure many of you have seen The Lord of the Rings: The Two Towers.  The final battle at Helm’s Deep comprised hundreds of Rohirrim and Elves, and ten thousand Uruk-Hai - and yet there were no more than a hundred live actors in any of the shots.  The final digital effects were created by Weta Digital using a programme called “Massive”.  According to Stephen Regelous, the chap who programmed the battle sequences, he didn’t really “program” them at all, or at least, not in the sense of crafting specific movements for every digital effect in the scenes.  That would have been an impossibly daunting task, given the number of individual digital actors involved.

Instead, he made the digital actors into people.  “The most important thing about making realistic crowds”, Regelous explains, “is making realistic individuals.”  To do that, the programme creates “agents”, each of which is in essence an individual with individual characteristics, traits, and most important of all, volition:

In Massive, agents’ brains - which look like intricate flow charts - define how they see and hear, how fast they run and how slowly they die. For the films, stunt actors’ movements were recorded in the studio to enable the agents to wield weapons realistically, duck to avoid a sword, charge an enemy and fall off tower walls, flailing.

Like real people, agents’ body types, clothing and the weather influence their capabilities. Agents aren’t robots, though. Each makes subtle responses to its surroundings with fuzzy logic rather than yes-no, on-off decisions. And every agent has thousands of brain nodes, such as their combat setting, which has rules for their level of aggression. When an animator places agents into a simulation, they are released to do what they will. It’s not crowd control, but anarchy. Each agent makes decisions from its point of view. [Note B]

In other words, outcomes aren’t predetermined; the Agents are designed with a range of options built into their makeup, and a degree of choice about what to do in response to given stimuli.  Kind of like people.  While it’s possible to predict likely responses to certain stimuli, it’s not possible to be certain what a given Agent will do in response to a given event.  “It’s possible to rig fights, but it hasn’t been done,” Regelous says. “In the first test fight we had 1,000 silver guys and 1,000 golden guys. We set off the simulation, and in the distance you could see several guys running for the hills.”

If you happen to own the full-up Uber-Geek/Pathetic Basement-Dwelling Fan-Boy LOTR collection (I admit nothing!), you can watch the “Making Of” DVDs for Two Towers, and listen to a much more in-depth explanation of how chaotic, complex and unpredictable the behaviour of the Massive-generated battle sequences was.  The programmers ran the Helm’s Deep sequence many, many times, using the same starting conditions, and always getting different results.  If that sounds familiar, it’s because it’s exactly what Lorenz described as the behaviour of a non-linear system.  It’s chaos in a nutshell. 

According to the DVD explanation, the results of individual volition were potentially so unpredictable that the Agents’ artificial intelligence needed a little bit of tweaking to ensure that the battle scenes unfolded more or less in line with the script.  Early on, for example, the designers had given each Agent a very small probability of panicking and fleeing the battle, along with a slight increase to that probability if a neighbouring agent went down hard, and a slightly greater increase if a neighbouring agent panicked and fled.  Reasonable, right?  We all know that panic on the battlefield is contagious.  The problem is that, depending on how the traits of the Agents were programmed, it could be too contagious.  In one of the simulation runs for the battle, one of the Uruk-Hai Agents near the front lines apparently panicked and fled at exactly the same time as a couple of nearby Agents were shot and killed.  This sparked a massive, rapidly propagating wave of panic that resulted in Saruman’s elite, genetically-engineered army dropping their weapons and heading for the hills.  I’m sure King Theoden, Aragorn, and hundreds of unnamed horse-vikings and elf-archers would’ve preferred it if Jackson had used that particular model outcome for the film, but it probably wouldn’t have lent itself to dramatic tension.  And it would've been something of a disappointment when Gandalf, Eomer and the Rohirrim showed up as dawn broke across the Riddermark on the fifth day, only to discover that there was no one left for them to fight because Uruk Spearman #9551 had gone wobbly a couple of hours before.
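To make those mechanics concrete, here's a deliberately crude sketch of that kind of panic contagion - a few dozen lines of Python standing in for Massive's agent brains, and emphatically not Weta's actual software.  Every probability in it is invented for the purpose of illustration:

```python
import random

def battle(n_agents=10_000, steps=50, p_panic=0.0002, p_killed=0.002,
           contagion=0.1, neighbours=6, seed=None):
    """Return (fled, dead, still_fighting) after `steps` rounds of a toy battle."""
    rng = random.Random(seed)
    fighting, fled, dead = n_agents, 0, 0
    fled_prev = 0                                   # agents who broke and ran last round
    for _ in range(steps):
        # Chance of catching panic from agents seen fleeing on the previous round
        frac = fled_prev / n_agents
        p_caught = 1.0 - max(0.0, 1.0 - contagion * frac) ** neighbours
        fled_now, survivors = 0, 0
        for _ in range(fighting):
            if rng.random() < p_killed:
                dead += 1                           # cut down where he stands
            elif rng.random() < p_panic + p_caught:
                fled_now += 1                       # drops his pike and runs
            else:
                survivors += 1
        fighting, fled, fled_prev = survivors, fled + fled_now, fled_now
    return fled, dead, fighting

# Identical starting conditions, different random draws - and a small change in
# how contagious panic is flips the outcome from a trickle of runaways to a rout.
for contagion in (0.1, 0.3):
    for seed in range(3):
        fled, dead, left = battle(contagion=contagion, seed=seed)
        print(f"contagion={contagion} run {seed}: fled={fled:5d} "
              f"dead={dead:4d} still fighting={left:5d}")
```

Run it and you get the same flavour of behaviour the Weta crew described: the same starting conditions produce a different outcome every time, and a small tweak to the contagion setting is the difference between a handful of deserters and ten thousand Uruk-hai heading for the hills.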

What’s the point of all this?  Well, the link between Forscher’s complaint about the profusion of data-bricks and Weinberger’s conclusion that computers are taking over the process of generating, storing, and figuring out what to make of data is obvious.  The problem of deriving “universals” from the colossal mass of “particulars” that make up modern scientific (and historical) inquiry ought to be obvious, too, and it’s demonstrated by one of the key weaknesses of history: the lack of experimental repeatability.  Steve Regelous could re-run the Battle of Helm’s Deep as many times as he liked until he got the result that Peter Jackson wanted - but that’s fiction, and when you’re writing fiction you can do whatever you like, because you’re not tied to any objective standard other than some degree of plausibility (which in turn depends on the expectations, experience and gullibility of your audience).  But we can’t re-run the Battle of Waterloo, changing one variable here or there to see what might have happened differently, because there’s no way to account for all possible variables; there’s no way to find out whether some seemingly insignificant variable might have been overwhelmingly important (butterfly wings, or “for want of a nail”, and all that); and finally, there’s no way to scientifically validate model outputs other than to compare them to observed data, because historical events are one-time things.

What we can do, though, is try to develop rules that help us winnow the “universals” from the chaff that makes up most of history.  There were 190,000 “Agents” at Waterloo - but do we really need to know everything about them, from their shoe size to how many were suffering from dysentery on the big day?  Do we need all of the details that had to be programmed into Weta’s “Massive” Agents?  Or are there some general principles that we can use to guide our understanding of history so that we can pull out what’s important from the gigantic heap of what isn’t?

This is where a good reading - or hopefully re-reading - of E.H. Carr’s What is History? comes in handy.  In Chapter One, “The Historian and His Facts”, Carr disputes the age-old adage that “the facts speak for themselves”, arguing instead that the facts speak only when the historian calls on them.  Figuring out which facts to call on is the historian’s métier.  Separating the gold - the “significant” as opposed to the “accidental” facts of a given phenomenon - from the dross of historical data is the historian’s challenge, and Carr’s criterion for judging what counts as “gold” is generalizability.  The key word here, of course, is “judgement” - informed judgement as a function of human cognition.  This is something that can’t be done by computers. 
 
Not yet, anyway. [Notes C, D] 

So for the time being, at least, there’s still a need for a meat plug somewhere in the analysis chain, if only to provide the informed judgement that our eventual silicon replacements haven’t quite learned to mimic.  That’s something of a relief.

Cheers,

//Don// (Meat plug)

Notes:


C) Historians looking for a lesson in humility need only read Heinlein’s The Moon is a Harsh Mistress, in which a computer that accidentally becomes artificially intelligent finds itself having to run a Lunar uprising - and in order to design the best possible plan for doing so, reads every published history book in a matter of minutes, analyzes them, and then not only plots a revolution from soup to nuts, but actually calculates the odds of victory, and recalculates them periodically in response to changing events.  Now that’s what I call “future security analysis.”  If only.

D) Some computer programmes are getting pretty good at matching human cognition.  Fortunately, we still hold an edge in important areas.  As long as we can still beat our electronic overlords at “Rock-Paper-Scissors”, we might, as the following chart from the genius webcomic xkcd.com suggests, be able to hold the Singularity off for a little longer.
 


Thursday, October 11, 2012

21 December 2011 – H-bombs, Santa, and the Poop Spiral of Doom

Colleagues,

The Yuletide season is upon us, and with it comes the annual ritual of NORAD tracking the big guy in the red suit.  If you're so inclined, you can follow his progress here, on NORAD's on-line Santa Tracker:


As I write, we're currently 3 days and 17 hours (and some assorted minutes) away from launch.  As an Air Force brat, I've been familiar with NORAD's tracking efforts for most of my life, and I recall wondering whether Santa, like the Tu-95s that drop by from time to time, ever merited an escort.  You sort of had to be concerned about whether his IFF transponder was operating, and whether he had the right codes (which in turn makes me wonder whether Santa has to wait three years to get his security clearance updated like the rest of us so he can even be ISSUED the codes); because if he didn't, well, in an era of AIM-120s, a pilot might be cleared to engage from beyond visual range, and then it'd be Run, Run Rudolph! for real.

In today's world, however, an air-breathing intercept seems somewhat less likely.  After all, with the ground-based interceptors of the BMD system in place and operational, it might - given how fast Santa would have to be travelling in order to get through his assigned duties in the allotted time - be more realistic to forgo the F-22s and simply send an exo-atmospheric kill vehicle his way.  It's worth working through the intercept from an air defence perspective, if only to get a better grasp of the nature of the problem.

Well, what are the capabilities of the system?  Assuming Santa's sleigh operates on a cold launch system (which is not necessarily true, but more about that later), his take-off probably wouldn't be detected by the Defense Support Program (DSP) satellites that watch missile fields for the thermal signature, or bloom, of an ICBM launch.  On a southbound trajectory from the North Pole, the first piece of equipment to pick up Dasher, Dancer and the rest would be the Ballistic Missile Early Warning System, from one of two stations: Clear, Alaska, or Thule, Greenland. 

Ballistic Missile Early Warning System Sites

Clear has a PAVE PAWS phased array radar system that operates in the UHF band, with two faces each giving 120 degree coverage, with elevation coverage from 3 to 85 degrees above horizontal.  At peak power (about 500 kW for the main beam) it can detect an object the size of a small car at a range of 5550 km (3000 NM).  Since that accords pretty much with a large sleigh, it's the figure we'll use.  Even if Santa has adopted stealth technology, we can assess 8 "tiny reindeer" as adding up to the radar cross-section of a small car - or 9, if it's a foggy Christmas Eve and Rudolph's on duty.

PAVE PAWS phased array radar system at Clear, Alaska

The problem, of course, is that Santa's flight profile doesn't come close to that of a ballistic missile launched from the Asian heartland, or of a SLBM launched from a sub lurking in the - let's face it - totally ice-covered Arctic Ocean.  There's never been any indication that the sleigh is pressurized, so unless he's wearing breathing apparatus, the old guy's going to have to keep it below 10,000 feet ASL.   That poses some horizon issues, but solving them is a relatively straightforward problem in geometry:

If the PAVE PAWS were capable of detection right out to the visual horizon, calculating its detection range D for a target at altitude X would be simple.  Knowing that the polar radius of the Earth R is 6,356,752 m, and that for X = 10,000 feet (3,048 m) R1 is therefore R+3048, or 6,359,800 m, then D would simply be the square root of R1 squared minus R squared, or about 196,900 m - call it 197 km from the radar station.  However, the PAVE PAWS has a minimum detection elevation of 3 degrees above horizontal, so detection would come later, when the bogey was considerably closer to the site.  I'd recalculate that for you, but I don't feel that into trigonometry this morning.
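For anyone who does feel that into trigonometry, here's a quick sketch of both figures - assuming a perfectly spherical Earth of the polar radius quoted above and ignoring atmospheric refraction (proper radar-horizon work usually applies a 4/3-Earth-radius correction, which would stretch these numbers a bit):

```python
import math

R = 6_356_752.0                 # polar radius of the Earth, in metres
h = 10_000 * 0.3048             # sleigh altitude: 10,000 ft ASL, in metres
ELEV_MIN = math.radians(3.0)    # radar's minimum beam elevation above horizontal

# 1) Pure line-of-sight horizon: a right triangle with the Earth's centre.
d_horizon = math.sqrt((R + h)**2 - R**2)
print(f"Visual-horizon range:        {d_horizon/1000:6.1f} km")   # ~196.9 km

# 2) Range at which the target first climbs 3 degrees above the radar's
#    horizontal, via the triangle (Earth's centre, radar site, target).
A = math.pi/2 + ELEV_MIN                    # interior angle at the radar site
T = math.asin(R * math.sin(A) / (R + h))    # interior angle at the target
O = math.pi - A - T                         # angle subtended at Earth's centre
print(f"Ground range at 3 deg elev.: {R*O/1000:6.1f} km")         # ~54 km
```

Under those assumptions the three-degree floor bites hard: the sleigh wouldn't show up until it was only fifty-odd kilometres out, which only strengthens the low-and-fast argument below.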

With a detection range of only 200 km or so for a target at an altitude of 10,000', would there be enough time for the warning system to react?  Maybe; depends on how fast the sleigh is travelling.  From Santa's perspective, he could vastly improve his survivability by flying lower and faster.  That's the same conclusion that was reached by the designers of the B-1 Lancer bomber, the performance of which I've had occasion to witness.  It's true; the lower and faster you fly, the less time anyone watching has to find you, fix you, and intercept you.

Which brings me, in a roundabout way, to the topic of this week's message.  You think the B-1 is low, fast, and nasty?  Well, amigos, you ain't seen nothing yet.  A colleague who is also an aficionado of all things ancient and atomic brought to my attention the other day one of the historical gems from America's glorious nuclear past.  Back in the halcyon days of the late 1950s - the era that brought us the Pentomic Army and such weapons systems as Atomic Annie, the 280-mm nuclear howitzer, and the Davy Crockett, the A-bomb-firing recoilless rifle - there were no problems that couldn't be solved by judicious application of the Mighty Atom.

The Davy Crockett nuclear recoilless rifle; and the Atomic Annie 280-mm nuclear howitzer

Thing is, those weapons, crazy as they might have been (and the Davy Crockett was crazy enough that, under certain firing conditions and selected yields, its lethal radius exceeded its range), those weapons were actually deployed.  The ones that fascinate me are the ones that did make it off the drawing board, but only as far as proof-of-concept and test and evaluation stages.  The most infamous one is probably one that, although it was closely connected to military weapons research, wasn't really a Defense programme at all: Project Orion.

 Project Orion - MY kind of crazy

In a nutshell (ahem), Project Orion was a spaceship designed to be propelled by the explosion of nuclear bombs fired out of its base.  The force of the explosion against a pusher plate (equipped with, shall we say, "powerful" shock absorbing systems) would drive the ship forward.  Thousands of bombs would be needed to reach planets throughout our solar system, which required miniaturizing the weapons as much as possible.  The research aimed at miniaturizing nuclear weapons - remember, this was the late 1950s and early 1960s, when much research was put into making bombs as big and destructive as possible, leading to monstrosities like the boxcar-sized Mk 17 - eventually led to nuclear artillery rounds small enough to be put into 155mm and 203mm projectiles - and to the enhanced radiation weapon, or 'neutron bomb', that bedevilled the Carter Administration and was responsible for so much Euro-angst in the late 1970s.

In between the Davy Crockett and the Orion spaceship, though, were a good many 'almost-rans.'  One of those was the Convair X-6.  Based on the Convair B-36 bomber, the X-6 was to have been propelled by nuclear reactor-driven engines.  The idea was that the plane would carry a 3 MW air-cooled nuclear reactor in the bomb bay - and a 12-tonne lead and rubber shield to protect the crew from the otherwise unshielded powerplant.  A testbed aircraft - the XB-36H - was built to trial the shielding requirements, and logged 215 hours of flight time, during 89 of which the on-board reactor was operated. 

 
Based on the results of the testing, the Convair X-6 project was scrapped in 1961.  Had the thing gone to trials, a number of problems would have had to be overcome.  One was the weight.  The takeoff weight of the beast was expected to be 363,000 pounds - roughly the same as a 747, nearly a decade before the 747 became a reality.  The testing facility was on the point of building a 15,000-foot runway when the programme was cancelled.  Also, there was the small matter that only the aircrew were protected against the radiation of the reactor; everything else, including the airframe and everyone around it on the tarmac, wasn't.  The nature of the problem might have been telegraphed just a little when the Air Force started advertising for pilots who were past child-bearing age.

The biggest problem with the plane, though, was the fact that the aircraft engines weren't...err...well, you couldn't really shut them off.  You see, in a real jet turbine, propulsive force is achieved by superheated exhaust expanding out the back end of the engine.  The heat is provided by burning fuel.  Turn off the fuel flow, the engines shut down.  In the X-6 concept, however, there was no fuel; the heat was supplied by the nuclear reactor.  Airflow over the reactor elements superheated the air, and the efflux from that drove the aircraft.  It also cooled the reactor, as there was no space in the plane for the hundreds of tonnes of water and other assorted cooling media associated with terrestrial or naval reactors.  The cooling provided by air rushing over the reactor elements at hundreds of miles per hour would still be needed even when the aircraft was on the ground and parked. 
 
Something of a poser, as they say.

Well, put all of these problems and capabilities together, and what do you get?  Think about it: a plane powered by a nuclear reactor doesn't really need fuel, so it can fly pretty much forever, except that it produces so much radiation that nobody wants to get near it.  You can't shut it off, so it's pretty much a one-shot deal.  And it can go really fast.  REALLY fast, in fact, once you realize that the reactor can produce so much heat that the rate-limiting factor, really, is how fast you can get the air into the core to be superheated.  And once you realize THAT, you start thinking about something that the scientists had only just begun talking about after the X-15 programme was under way, which was...ramjets.  In a jet turbine engine, air is compressed to the necessary density by compressor blades (hence the name).  But if you get an air-breathing vehicle up to a high enough speed, you can do away with the compressor blades, and simply shape the intake to force the incoming air to the right density. 
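To put a rough number on that intake trick - a textbook idealization on my part, not anything out of the X-6 or Pluto files - the pressure ratio an ideal, loss-free ram intake can deliver depends only on the flight Mach number:

```latex
\frac{p_0}{p_\infty} = \left(1 + \frac{\gamma - 1}{2}\,M^2\right)^{\gamma/(\gamma-1)}, \qquad \gamma \approx 1.4
```

At Mach 3 that works out to roughly 37 times ambient pressure before the air ever touches the reactor core, with no compressor blades in sight; a real intake recovers rather less than the ideal figure, but the principle stands.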
 
Put all of these factors together, add a monocle and a white Persian cat, and what do you get?  That's right: a doomsday machine. 
 
Enter the Vought SLAM.

 
Imagine a cruise missile the size of a railway locomotive.  It's powered by a nuclear reactor similar to the one designed for the Convair X-6, but reconfigured a little.  It doesn't heat air for individual engines; it's the engine itself.  It has a big ramjet intake and is designed to heat compressed air so hot - 2330 degrees - that it needs no fuel at all.  It just blasts the superheated air out the tailpipe. 

Is this for real, you ask?  Well, for starters, they built the aerial reactor (amusingly nicknamed the 'Tory' Reactor), as part of Project Pluto, which eventually became the nickname for the whole project, including the airframe:


 
They also built (at the unbelievably appropriately-named Jackass Flats) the 25 miles of oil well pipe casing needed to contain enough compressed air to test the ramjet capacity of the engine...


 
...and they also built the Tory IIC engine, and tested it in 1964.
 

 
At full power, the Tory IIC engine produced 35,000 pounds of thrust and 513 MW of power.  For the sake of comparison, that's roughly the same as the reactor in a large nuclear generating station - except that the Tory IIC was only the size of a railcar.  Amazing what you can achieve when you do away with all of that pesky radiation shielding (although according to the test results, the engine produced less radiation than expected).  As an article about the SLAM project noted, the May 1964 test was observed by "dozens of admiring AEC officials and Air Force Generals"...all from "a safe distance".  Yeah, I'll bet. Like Tasmania.

Of course, to make the missile work you first have to get it up to ramjet speeds - well over Mach 3 - before the nuclear engine starts operating properly, so to do that you strap a few solid rocket boosters onto the thing.  Stick an inertial guidance system into it, like the one used by the infamous Snark cruise missile (and like the first-generation ALCMs would get about 10 years later), and you could program it to follow a preset course.  It can fly for so long (estimates put its range at an incredible 100,000 km, or two and a half times around the planet) that you could launch it and let it loiter, flying figure eights over an ocean for hours or even days before sending it a command to penetrate enemy territory.  It's virtually indestructible because compared to a manned bomber it has only a fraction of the moving parts; remember, it's really nothing more than an aerodynamic teakettle (the project manager dubbed it "the flying crowbar").  It'll fly so low that Soviet radars will never spot it; so fast that Soviet fighters (and SAMs) will never catch it (so fast, in fact, that its 150-dB shock wave was expected to smash windows and rupture eardrums all along its flight path); and it will never run out of fuel.  Awesome, eh? 
 
But wait...there's more! 

Now you turn it into a one-shot disposable aerial SSBN!

 File under "Seriously, who thought this was a good idea?"

You install a dozen vertical-deployment tubes, each carrying a one-megaton thermonuclear warhead equipped with an ejection mechanism and a parachute.  So now your low, fast, 100,000 km-range, unstoppable, nuclear-powered flying freight train can follow a pre-programmed course across the Soviet Union, 500 feet off the ground, at Mach 3, excreting H-bombs at predetermined deployment sites. 

 
Good freaking lord.

And did I mention that it would be spewing radiation all the way?

Yeah, no need for pilots means no need for shielding other than the minimum necessary to protect the electronics.  Given the amount of radiation the lightly-built, almost totally unshielded reactor would be producing, burst eardrums and broken windows would be the least of the problems afflicting anyone under the missile's flight path.  The intake would be sucking in dust, debris, water droplets and all manner of aerosolized contaminants, cooking them to a turn in a 600-MW reactor running at full power, and blasting them back out the tailpipe as, let's face it, fallout.  According to project reports, this was regarded as a bonus.  There were discussions during the project as to whether the missile, having discharged a dozen buckets of sunshine onto the heads of unsuspecting kulaks, should be programmed to then add insult to injury by flying back and forth over Soviet territory, sowing neutrons until it eventually melted down and crashed into Comrade Sergey's potato field.  Alternatively, it could be programmed to crash itself and its screaming hot reactor into a 13th target, sort of as an added treat.  An apocalyptic baker's dozen, if you will.

Just how serious was the SLAM project?  Well, as noted above, serious enough that they built the engine, tested it, and were working on a Mach 4+ version when the plug was pulled on 1 July 1964.  A number of factors contributed to the decision to kill the project.  One was the cost; each missile was expected to run about $50 million, an exorbitant figure at the time.  The Navy was preparing to deploy the Polaris SLBM, and the Air Force had ICBMs, both of which were totally invulnerable to the SAMs of the time, and both of which arrived at their targets much faster than the SLAM (which one critic redubbed 'Slow, Low And Messy').  There were other problems, too.  Which ally would be crazy enough to allow a SLAM to overfly their territory en route to the USSR?  And for that matter, how could you test the thing?  One proposal was to have it fly lazy figure-eights near Wake Island in the Pacific - and, once the test was complete, to ditch the missile, with its red-hot reactor, into the ocean.  Another (hilarious) proposal was to test it in Nevada using a long tether.  One project expert remarked rather drily, "That would have been some tether."  And what if one got away, either during testing or in some sort of operational scenario?  After all, an SM-62 Snark cruise missile test-fired in 1956 using a similar inertial navigation system had been aimed at Puerto Rico, and was last seen on radar heading into the Amazon. (Note A)  What if that Snark had been carrying 12 thermonuclear warheads and a blazing hot, neutron-spewing reactor?

Worse, what if it didn't crash?  Remember, it didn't need fuel, and didn't have a whole lot of moving parts.  How long could such a thing stay up?  How would you bring it down?  For that matter, where would you bring it down?

Good lord, why hasn't somebody made a movie about this thing?  Oh, wait, they did:

 
Except that the SLAM moved a thousand times faster than any Terminator.  Plus it flew, spewed radioactivity, and was stuffed full of H-bombs.

Bottom line, beyond the sheer horrifying craziness of the concept, the SLAM was inferior to ballistic missiles in every conceivable way, and so it ended up on the chopping block.  The USN and USAF went on to deploy thousands of SLBMs and ICBMs which, for all their faults, were at least cheaper and faster; and while SSBNs were driven by nuclear reactors, none of the missiles were themselves nuclear-propelled.  When cruise missiles eventually were deployed a decade or so later, they were much smaller, carried only one warhead, and required fuel to fly.  Quite a different concept from the invulnerable, unstoppable "flying Chernobyl" dreamed up by the USAF, AEC, and the frighteningly innovative wrench-benders at Vought.

For those of you interested in reading more about the SLAM, you can find fascinating articles at the following websites:




A final thought about the discussion that sparked this whole line of investigation in the first place.  If you're Santa, then you're looking for something that can travel very low, very fast, for very long distances without refuelling, and that is capable of delivering packages at predetermined sites.  If Santa were looking to upgrade the old sleigh to something a lot more capable, the SLAM would be a fantastic choice.  The hazard to the big guy himself shouldn't be too much of a worry, because let's face it, he's a long way past child-bearing age.  And, like the Project Pluto folks, he could simply regard the radiation as a bonus.  There'd be no need to drag Rudolph along to provide additional illumination: 
"Mommy, why is Rudolph's nose blue?" 
"Actually, sweetums, that's called the Cherenkov effect..."

And on that happy note, dear colleagues, Merry Christmas to all - and to all, a good night!  See you next year.

//Don//
 
P.S. As a contemporary note, nearly a year later, here's an example of why I think creating a nuclear-propelled autonomous flying H-bomb delivery truck might be a bad idea:

(Source: Failbook)

You see, I don't think the robot apocalypse is going to be the result of our preprogrammed servants freaking out or conspiring to destroy us all in some sort of Skynet Götterdämmerung.  I think we're in much more potential danger from a combination of our own laziness and our chronic lack of imagination about the potential consequences of robots doing exactly what we built them to do. 

"Judgement Day" is a whole lot cooler and much less embarassing as an explanation for the demise of humanity than "The Poop Spiral of Doom".


Notes:
A) http://www.airforce-magazine.com/MagazineArchive/Documents/2004/December%202004/1204snark.pdf

Friday, October 5, 2012

16 December 2011 – Seeing the light

Colleagues,

There's been an illuminating development in the battle in the US Congress to avoid a federal shutdown.  The $1 trillion spending bill, agreed last night, runs to more than 1200 pages and includes all manner of provisions inserted at the last minute in order to garner sufficient bipartisan support to ensure passage.(Note A)

One of those last-minute changes was a rider defunding the Department of Energy's standards for light bulb efficiency that would have gradually outlawed traditional incandescent bulbs.  The result of an energy law signed in 2007 by then-President George W. Bush, the DOE standards were to have been phased in gradually; the first victims would have been 100W incandescent bulbs, which would have been outlawed as of 1 January 2012, a fortnight hence.  Grass-roots Republican supporters had seized on the ban, making it a cause célèbre, and the issue has been at the forefront of talk radio and Tea Party-type discussions for months.  Democrats railed against the rider, but managed to eke out what they consider a win by including provisions requiring recipients of federal grants in amounts of $1 million or more to certify that they will upgrade the efficiency of facilities to the 2007 lighting standards.(Note B)

The light bulb ban is one of those rare issues that seems to galvanize the disparate ends of the political spectrum.  Democrats justified the ban by citing the impending extinction of polar bears and the drowning of coastal cities; Republicans countered with charges of war-against-the-poor and historical treason against the sainted memory of Thomas Alva Edison.  Lost in all the rhetoric, as usual, was the bulk of empirical fact, to wit: while incandescent bulbs consume more electricity to operate, the lowest-cost alternative - compact fluorescent bulbs - are far more expensive than incandescent bulbs, contain mercury, rarely last as long as advertised, must be disposed of as toxic waste, and are virtually all made in China (and thus must be shipped across the ocean on oil-powered cargo ships).  One reason for the manufacturing location is that China is virtually the world's only source of the rare earth elements used in the phosphors coating the interior of the CFL tubes.  The next generation of candidate illumination - LED lights - is even more expensive (due primarily to the need for a heavy and costly heat sink) and, unless of very high quality, does not come close to mimicking the characteristics of incandescent bulbs.

As an exercise in cost-effectiveness, I ran the numbers on these back in March (see "Are governments smarter than a 5th-grader?", 8 March 2011), and concluded that the payback period for LED bulbs is about 6 years.  I'm six months [17 months, now! - ed.] into that on my office light bulbs right now and they're all still working; let's see how that goes.

Of course, the cost-effectiveness calculation is only one part of the equation.  The unit cost of the spotlamps I bought was $35, about 30 times the cost of a comparable incandescent spotlamp.  The average wage-earner isn't in a position to engage in costly experimentation with novel lighting technologies; if you're pulling down $35K a year as a general labourer, $35 for a single light bulb might seem a little outrageous.  That, in my opinion, is what fuelled much of the discontent with the DOE standards that would have led to the demise of the incandescents about two weeks from now.  Of course, the visceral response by many Americans to the government telling them what light bulbs they can and can't buy probably had something to do with it, too; live free or die, and all that.  You can have my incandescents when you pry them from my cold, dead hands...

What I find especially interesting from a strategic analysis perspective is how some in the environmental lobby responded to the news that the bill effectively lifted the impending ban.  Jim DePeso, the policy director of the group "Republicans for Environment Protection", was quoted as saying:

"In the real world, outside talk radio's echo chamber, lighting manufacturers such as GE, Philips and Sylvania have tooled up to produce new incandescent light bulbs that look and operate exactly the same as old incandescent bulbs, and give off just as much warm light...The only difference is they produce less excess heat and are therefore 30 percent more efficient. Same light, lower energy bills. What's not to like?"

Well, Jim, as I pointed out above, there's a lot not to like.  But what DePeso is referring to is a new suite of bulbs like the Osram Sylvania, a halogen bulb that reportedly replicates 100W performance while drawing only 72W.(Note C)  
 
The Osram Sylvania 72W halogen bulb

Unlike the CFLs, the Osram contains no mercury, and Sylvania makes it in Mexico rather than China.  But there's another strategic problem, you see - unlike traditional incandescents, these bulbs require rare earth phosphors.  This prompted Sylvania to issue a consumer notice earlier this year:

Rare earth metals  are elements vital to the manufacturing of our energy-efficient fluorescent lamps as they are a crucial component of the light-producing tri-phosphors inside the lamps. Currently, 95% of the world’s rare earth metal mining and oxide production comes from China where the manufacture and export of these compounds is heavily taxed and controlled.

The Chinese government has implemented new tariffs and mining regulations on rare earth materials. These actions, coupled with increasingly strict export quotas, have caused the price of these compounds to substantially increase – as much as 3500% since January of 2010 in some cases.

Due to regulation, exports of rare earth materials were reduced 40% from 2009 to 2010 and another 35% during the first half of 2011 compared with prior year. It is clear that the Chinease [sic] policies regarding rare earth materials must be addressed with multiple strategies in order to stabilize pricing and supply of these critical minerals.

OSRAM SYLVANIA has and will continue to take steps to combat these rising costs. We have continued to strengthen our supply chain while spurring our research and development teams into action. We are investigating the ability to reduce the weight of powders used in fluorescent lamps while evaluating alternatives to the current phosphor combinations. However, despite our best efforts, the rising cost of rare earth oxides has outpaced our ability to absorb them.

In order to combat the steep rising costs of these raw materials, OSRAM SYLVANIA will increase pricing on our fluorescent lamps, including our T8, T5, Deluxe T12 and all CFL pin and self-ballasted lamps. These increases will go into effect July 1, 2011; August 1, 2011; and then monthly until the cost of rare earth materials are stabilized.(Note D)

What does that mean for the bulbs that Mr. DePeso thinks are the next big thing, infinitely preferable to traditional incandescents?  Well, it means that they're going to get more expensive.  And they're not exactly cheap right now.  The 72W Osram Sylvania goes for $8.20, which is more than ten times the price of a 100W Philips Duramax, which retails for $2.98 per four-pack at Home Depot.  But the Osram isn't ten times as efficient (it uses 72% of the power of the 100W Duramax); and it isn't ten times as long-lived (it's rated for 3500 hrs, compared to 1500 hrs for the Duramax).  So it's already not much of a deal.  Furthermore, as the above statement indicates, that $8.20-per-bulb price is subject to change without notice due to the impact of China's near-total rare earths monopoly on the REE supplies needed by the Mexican manufacturer.(Note E)

As for Mr. DePeso's comment about GE, Sylvania, Philips and so forth gearing up to make the new light bulbs...well, is that a consequence of consumer demand for vastly more expensive light bulbs?  Or is it a consequence of the 2007 Bush energy law outlawing incandescent bulbs that was supposed to come into effect on New Year's Day?

The Great Light Bulb Purge - narrowly averted by Congress, at least for now - is an excellent example of the chicken-and-egg relationship of government regulation with consumer preference in the modern "free" market.  Apart from moral pressure (like the David Suzuki Foundation's recent and ludicrously hyperbolic "Drowning Santa" campaign), there are essentially two ways that governments can force consumers to buy things they wouldn't otherwise buy.  One is subsidies - governments giving taxpayer money to industries to make otherwise non-competitive products cheap enough for consumers to consider buying them (q.v. wind power, solar power, and hybrid and electric vehicles - I've got sources if you want 'em) - and the other is regulating technologies deemed 'undesirable' out of business.  This is the source of the philosophy reflected in Obama's campaign promise to bankrupt coal-fired electrical generating concerns.(Note F)

Both approaches constitute interference in the marketplace.  And as I've mentioned before in previous missives, both of these techniques seem to keep popping up, time and time again, in the political drive towards the 'green energy' economy.

But for the moment, at least insofar as the venerable incandescent bulb is concerned, it seems as though someone just might have "seen the light".

(Exclamation of joy at sudden observation of luminescence expurgated)

Let us therefore give thanks.

Cheers,

//Don//

P.S.  A back-of-the-envelope calculation tells us a little bit about the cost comparison.  The 72W Osram Sylvania bulb sells for $8.20 online.  A 4-pack of Philips Duramax 100W bulbs costs $2.98 at Home Depot, or about $0.75 apiece.  The Duramax has a rated lifespan of 1500 hrs; the Osram 72W, 3500 hrs.  To get 21,000 hours of light, you'd need 6 Osrams or 14 Duramaxes.  21,000 hrs of Osrams would consume 1512 kWh of electricity.  21,000 hrs of Duramaxes would consume 2100 kWh of electricity.  At roughly $0.10 per kWh, the Osrams would cost you $151.20; the Duramaxes, $210.  So you're saving $58.80 on electricity over 21,000 hrs, or about $0.0028 per hour of use.  Those 6 Osrams would cost you $49.20, whereas 14 Duramaxes would cost you $10.50.  So you're paying $38.70 more for the bulbs, but saving $58.80 on electricity, for a net savings of $20.10 over 21,000 hrs.  Unless the Osrams don't last as long as advertised, or their price goes up due to the rare earths shortage, of course.
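For anyone who wants to check my arithmetic, here's the same back-of-the-envelope calculation as a few lines of Python; the prices, lifespans and the $0.10/kWh electricity rate are simply the figures quoted above:

```python
# Bulb economics over a common lifespan multiple of 21,000 hours.
HOURS = 21_000

def total_cost(bulb_price, watts, lifespan_hrs, kwh_price=0.10, hours=HOURS):
    """Purchase cost plus electricity cost over `hours` of use."""
    bulbs_needed = hours / lifespan_hrs       # 6 Osrams or 14 Duramaxes
    energy_kwh = watts * hours / 1000         # total energy consumed
    return bulbs_needed * bulb_price + energy_kwh * kwh_price

osram = total_cost(8.20, 72, 3500)       # $49.20 bulbs + $151.20 power = $200.40
duramax = total_cost(0.75, 100, 1500)    # $10.50 bulbs + $210.00 power = $220.50
print(f"Osram total:   ${osram:.2f}")
print(f"Duramax total: ${duramax:.2f}")
print(f"Net savings:   ${duramax - osram:.2f} over {HOURS:,} hours")   # ~$20.10
```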

Notes
A) http://www.politico.com/news/stories/1211/70530.html

B) http://www.politico.com/news/stories/1211/70534_Page2.html

C) http://assets.sylvania.com/assets/Documents/Consumer_SocketSurvey_FINAL.6f999370-cb88-479f-94e1-a6483531b81f.pdf

D) http://www.sylvania.com/Phosphors/

E) http://www.bulbtronics.com/Search-The-Warehouse/ProductDetail.aspx?sid=0000984&pid=OS72MBCAP120V&AspxAutoDetectCookieSupport=1

F) http://frontporchpolitics.com/2012/07/obama-keeps-campaign-promise-patriot-coal-bankrupt/

Tuesday, October 2, 2012

7 December 2011 – Imagination and the toxic terrorist

Colleagues,

Consider, for a moment, these two pictures, and the attached deduction:



Congratulations - you've finished the Internet.

I'm going somewhere with this, trust me.

One of the perennial topics at the OPCW these days - and for that matter, in defence and public safety departments everywhere - is the question of chemical terrorism.  For the record, the last significant terrorist attack involving a chemical weapon occurred in 1995.  It was perpetrated by Aum Shinrikyo, and featured poor-quality sarin dispersed using a remarkably ineffective methodology.  Still, 12 people were killed and 5000 sickened by the attack.

More recent incidents have been considerably less effective. On 20 February 2007, for example, a tanker truck carrying chlorine gas exploded outside of a restaurant in the Iraqi town of Taji, a predominantly Sunni settlement some twenty kilometres north of Baghdad.  While some initial reports suggested that the truck may have struck a roadside improvised explosive device (IED), the Iraqi Government subsequently confirmed that the truck had been converted into a vehicle-borne chemical IED by the addition of a small bomb located next to the pressurized chlorine tank.  Six people were killed and approximately 150 injured, both by the initial blast and by subsequent exposure to chlorine gas.  No Coalition personnel were harmed in the attack.

This latter incident had two interesting characteristics.  First, it provided evidence of the evolution of insurgent tactics in Iraq, as previous attempts to employ improvised chemical weapons had focussed either upon producing “home-made” chemical agent (which failed miserably), or upon making use of the detritus of Saddam’s chemical weapons programs (e.g., attempting to disperse old nerve or mustard agent shells by detonating them with external explosives - which also failed miserably).  Second, it demonstrated a far better understanding of the threat posed by common, high-production-volume toxic industrial chemicals than had hitherto been the case amongst international jihadists, who had to date devoted a great deal of time and effort to attempting to produce small quantities of traditional chemical warfare agents or biological toxins, or to weaponizing rare and exotic chemical compounds for use in improvised chemical weapons or explosives.  The Surge helped damp down the violence, and while IED attacks and suicide bombings have continued in Iraq and elsewhere, there have been no repetitions of the chlorine attacks.

The deadliest terrorist attack using industrial chemicals occurred on 11 September 2001, of course, and the chemical involved - aviation fuel - was fairly prosaic.  Over the course of the subsequent half-decade, a number of potential attacks were thwarted by police and domestic intelligence services throughout the Western world; few were even slightly successful.  Until the 20 February 2007 chlorine attack, the successful ones all shared one element in common: they employed manufactured chemical explosives, or used easily-obtainable chemicals to produce improvised chemical explosives.  Similarly, many of the unsuccessful attacks appeared to share certain characteristics differentiating them from more lethal and destructive events.

Makeshift CBW.  First, some of the unsuccessful attacks appeared to be disproportionately focused on employing makeshift chemical or biological weapons (CBW).  Producing, weaponizing and effectively disseminating a CBW agent requires precise application of an array of exact sciences, posing daunting challenges to amateurs.  Aum Shinrikyo, for example, carried out at least two attacks using home-made sarin nerve gas, and at least one using home-grown anthrax.  Despite access to trained scientific personnel, virtually unlimited funding, a highly permissive security environment, and lax policing by officials worried about violating the patina of religion employed by Aum to conceal their activities, the perpetrators failed to produce the intended high casualty count.  Even with the widespread proliferation, particularly via the Internet, of detailed information on the production and deployment of CBW agents, groups lacking specialized scientific expertise in the field of chemical or biological weapons appear thus far to have been unable to overcome the hurdles.

Late in 2003, for example, London police uncovered a plan by an Islamic terrorist cell to employ the biological toxin ricin in terror attacks.  Ricin, despite its ease of production and high toxicity, is difficult to disperse and is simply not an effective weapon.  Canadian, British and American scientists conducted extensive experiments with ricin in a variety of munitions and dissemination devices during and after the Second World War, eventually abandoning it in favour of the more reliable nerve agents.



"Ricinis Communis" - the Castor Bean plant

Other plots have been even more exotic.  On 6 April 2004, British police foiled what appeared to be a plot to use osmium tetroxide as a chemical weapon.  From a technical perspective, osmium tetroxide makes a poor weapon; a crystalline solid with a melting point of 42° C, it is unlikely to produce, even through sublimation, a significant vapour mass, which is the most important operational criterion for an agent designed to injure or kill through inhalation.  The other option for dissemination – formation of a respirable aerosol by explosive dissemination – is a much more challenging technical problem, and given osmium’s propensity to act as an explosive catalyst, explosive dissemination would likely cause the agent to combust and disperse quickly.  Experts in chemical toxicity consulted by the media in the wake of the arrests fell into two distinct groups: those who noted the absolute toxicity of osmium tetroxide and were very concerned by the alleged incident; and those who noted its toxicity in comparison to modern chemical warfare agents, such as Sarin or VX, or to other less toxic, but more widely available, industrial compounds, and were rather less concerned.


Osmium tetroxide - not exactly a "high production volume" chemical

Toxic industrial chemicals.  This latter group of chemical experts tended to consider the incident in the broader context of comparative hazards and risks.  There is a vast array of industrial chemicals more toxic than osmium tetroxide (or more acidic, caustic, explosive or carcinogenic), most of which are readily obtainable in far larger quantities.  Virtually all of them are also considerably cheaper.  According to one UK report, osmium tetroxide may be obtained over the internet at a price of £17 per gram, making it twice as expensive as pure gold (and even kilogram quantities would likely be insufficient to generate a significant toxic hazard over a large area; using it to poison people would be like a footpad bludgeoning his victims with a Fabergé egg).  By contrast, less exotic industrial chemicals like hydrogen fluoride, chlorine, sulfuric acid, dimethylamine, hydrogen sulfide, phosgene and sulfur dioxide are all produced worldwide in enormous quantities, and are shipped virtually everywhere by train-car and tank-truck - including on Canadian railways and highways.

Comparative risk is the key component in analyzing the hazard posed by toxic industrial chemicals.  Osmium tetroxide – based inter alia on its toxicity (high), how often it is shipped (infrequently), how much is shipped (very small quantities) and whether it has ever featured in an industrial accident or terrorist attack (it has not) – was recently assigned an overall risk value of 6 on a scale of 1 to 15 by a joint CANUKUS scientific board.  This makes it about 1560th on a list of nearly three thousand industrial chemicals evaluated by the board.  The same group rated sodium hydroxide, a highly caustic and very common industrial chemical, as 231st and assigned it a risk value of 10, noting that it ranked as one of the ten most common chemicals involved in industrial accidents.  By way of comparison, aviation fuel is number ten on the same list; and gaseous chlorine, number two.

These chemicals, along with hundreds of thousands of tons of equally hazardous industrial products, are shipped across North America by railroad, highway and inland waterway on a daily basis.  9/11 killed thousands using aviation fuel.  The 2004 Madrid train bombings killed nearly two hundred people with conventional explosives.  A similar attack against a train traversing a major city while carrying a thousand tons – a dozen or so tanker cars – of chlorine, hydrogen fluoride, hydrogen sulfide, dimethylamine, or an organophosphorus pesticide like parathion (or any of dozens of other highly toxic compounds) could, with a favourable wind, kill thousands.

The threat posed by toxic industrial chemicals is severe.  It would be far easier for terrorists to obtain a large quantity of a highly toxic (or for that matter, highly flammable) industrial chemical than to acquire even a few kilograms of an exotic substance like osmium tetroxide.

PSYCHO-ANALYZING THE 'TOXIC TERRORIST'

In view of the ready availability of large quantities of more common hazardous substances, one wonders why some terrorist groups appear to be concentrating on complex operations featuring exotic and unpredictable compounds unlikely to have the same impact as even a primitive improvised explosive device.  Either the individuals and groups planning these attacks are deluded as to the probability of success, or their motivation, goals and modi operandi differ significantly from those of their ideological brethren.  Presuming incompetence leaves no role for further analysis, though, and is in any case dangerous; a more prudent assessment may be that different organizations are following different operational philosophies.

Some groups appear to favour high-casualty operations featuring simple, proven technologies (improvised explosive devices, car bombs, hijacked aircraft) conducted by individuals prepared for and seeking martyrdom.  This approach appears to be shared by the more nihilistic of the Islamic terrorist organizations: the 9/11 hijackers, the various truck- and car-bombers operating in Iraq, the perpetrators of the railway blasts in Madrid, and the would-be assassins of Pakistani President Musharraf.  Al Qaeda, the Tamil Tigers and numerous Palestinian terrorist organizations have enjoyed considerable success with it.

By contrast, other groups appear to favour a competing and thus far ineffective operational methodology – one characterized by complex operations featuring exotic substances and unproven, experimental technologies which, even if successful, are unlikely to produce vast numbers of casualties.  The individuals caught in the course of staging these operations appear more likely to plan for their own escape, and may therefore be less inclined to seek “martyrdom”.  Given that these attacks appear to be perpetrated by a more educated class of individual, this may represent an inverse relationship between intelligence and willingness to become a martyr (as opposed to merely advocating martyrdom).  Additionally, given that no noteworthy chemical or biological attack has yet been carried out despite the number of times it has been attempted, these groups also appear to be more likely to be caught.  Since informants are reportedly playing a part in successful investigations, this category of attack seems to be conducted by a less dedicated class of individual – or at the very least, one reliant upon a wider network of actors, and perhaps less concerned with maintaining operational security.

Many terrorists, of course, fall outside this admittedly simplistic categorization.  The Bali bombers, for example, used a traditional improvised explosive approach but, apart from one member, appear to have made allowance for their own survival.  Similarly, the 2001 anthrax attacks in the United States employed both an exotic agent and a delivery method highly unlikely to cause mass casualties – but that attack was carried out by a disgruntled US government employee, one with a high security clearance, access to virulent bioagent strains, and more than a decade's experience as a biowarfare scientist.

Our principal enemy, international jihadism, seems to be plagued by a competency gap.  Despite the jihadists' obvious interest in employing weapons of mass destruction, they appear to lack even the rudimentary skills and knowledge necessary to obtain and weaponize a suitable agent and deliver an effective CBW attack.  This may be due to any (or all) of a number of factors, including:

·        the capture or killing of a significant proportion of the educated, intelligent and scientifically literate leadership during the war in Afghanistan, the Iraq War, and subsequent worldwide policing and security sweeps (this may be particularly true with respect to al Qaeda);

·        the loss of Afghanistan (and, for Ansar al-Islam, of Iraq) as a pied-à-terre and training ground has likely forced many of the survivors into constant movement to avoid death or capture, preventing them from accumulating the laboratory infrastructure and stockpiles of chemicals required to stage an attack (note A); and

·        the fall of Saddam, allied restriction of the Afghan drug trade, Libya’s renunciation of both WMD and terrorism, increasing international scrutiny of Saudi Arabia, Iran and Syria, and the progressive tightening of controls on funding to terrorist organizations may be severely constraining their financial and logistic resources (note B).

These and other constraints imposed by the "War on Terror" may have pushed the opposition into hasty, ill-conceived operations.

Other explanations should also be considered.  Since 9/11, the Western media (abetted, to be sure, by government overuse of highly imaginative threat matrices) has succeeded in creating an atmosphere of apprehension, especially with respect to the potential use by terrorists of weapons of mass destruction.  The 2001 anthrax attacks in the US and the resulting panic likely taught our adversaries an important lesson about the susceptibility of Western publics to WMD-induced hysteria.  Indeed, this is almost certainly why Ayman al-Zawahiri once claimed that al Qaeda had acquired a “smart [nuclear] briefcase bomb” from disaffected Russian scientists (note C).  Given variable public sensitivity to WMD issues (and, by contrast, public indifference to industrial accidents, at least those short of the Bhopal scale), a terrorist attack featuring the dispersion of an unfamiliar substance such as osmium tetroxide would likely generate panic to a degree that a much larger, and therefore much more dangerous, release of a less exotic compound like caustic soda or ammonia would not.  It may therefore be useful to draw a distinction between two varieties of WMD-seeking terrorist: the calculating type, who hopes to terrorize in order to modify the behaviour of the target state; and the more recent, and more dangerous, Islamic nihilists, whose interest in WMD lies not in their potential to terrorize, but in their potential to kill large numbers of people.

SO WHAT?

We haven't seen the end of toxic terrorism.  While Osama bin Laden is no longer among the vertical, in 1998 he declared the acquisition and use of CBW a religious duty for Muslims (note D), and later requested and received a fatwa, issued by Saudi Sheikh Nasir bin Hamid al-Fahd, condoning the use of CBRN weapons against non-Muslims (note E).  There will be more of this, and if the jihadists see the light and focus on toxic industrial chemicals (TICs) instead of the unusual stuff, our problems are going to multiply.  The volume of chemical transfers is increasing, and new technologies like microsynthesis are making rapid production of small quantities of exotic, even supertoxic, chemicals easier than ever before.  Thus far the jihadists appear to have focused their efforts on large-scale attacks using more traditional, reliable means; but they are still trying to get their hands on the supertoxic chemicals that industrialized states consider "chemical weapons."  Their lack of success in this area suggests that it is probably only a matter of time until they turn their attention to the possibilities offered by the enormous volumes of toxic industrial chemicals produced and traded annually.  When they do, their operational patterns of behaviour suggest that they will eschew small-scale, exotic plans based on unpredictable, difficult-to-obtain compounds, and focus instead on large-scale, simple operations featuring hazardous chemicals that are widely available in large quantities.

To put it another way, crashing a VBIED into a building requires that you steal a V and build an IED.  Crashing a tanker full of toxic, caustic, acidic, flammable or explosive chemicals into a building requires only that you steal the right truck.  Combining both methods – a small bomb with a truckload of toxic chemicals – could greatly magnify the impact of such an attack, especially if operationally savvy terrorists researched the properties of the chemicals they planned to steal, and made the most of geographic and meteorological conditions to conduct a release at the most damaging time and place.  This isn't rocket science; militaries have been doing it since 1915.


Accidental chlorine tanker derailment, Graniteville, S.C., January 2005
5000 evacuated, 1450 hospitalized, 550 injured, 9 killed

Western nations should be exploring why the competent jihadists have yet to pursue this course of action.  The answer might tell us how much time we have before they figure it out.  Instead, what are we doing?  Worrying about exotic threats like ricin and osmium tetroxide, and about whether a bunch of guys who haven't been able to figure out how to release chlorine as effectively as the Germans did at St. Julien are going to be able to cook up a batch of VX in somebody's bathtub without poisoning themselves first.  Those kinds of exotic threats are sexy, and they attract all sorts of press and (therefore) counter-terrorism funding; but they're unrealistic, and they're nowhere near as potentially damaging as one guy who nabs a truck carrying chlorine, caustic soda or propane and slams it into the Centre Block.  If the 20 February 2007 chlorine attack in Iraq is any indication, maybe they're already starting to get it.

Which brings me back to the photo montage at the beginning of this message, and its disturbing deduction about the topology of the theoretical Hitler-Worf facial hair overlap.  My point is simple, and is applicable to separating realistic threats of chemical terrorism from wild-eyed fantasies about homemade VX and osmium tetroxide bombs: just because something is technically possible, that doesn't mean it's likely.

Cheers,

//Don//

Notes

A) Quillen notes, “Clearly, the al-Qaeda CBRN programs that existed in Afghanistan under the Taliban were at least temporarily disrupted by the 2001 US-led invasion”.  Chris Quillen, “Three Explanations for al-Qaeda’s Lack of a CBRN Attack”, Terrorism Monitor, Vol. 5, No. 3 (15 February 2007).

B) That said, unless one is trying to legitimately purchase large quantities of highly expensive compounds such as osmium tetroxide, the resources required to stage a successful chemical attack are not great.

C) No such proliferation occurred, and indeed, it is questionable whether such a weapon ever existed.

D) Bin Laden’s declaration was delivered in the course of an interview with Jamal Isma’il in December 1998.  It was rebroadcast by al-Jazeera in September 2001.

E) Sheikh Nasir bin Hamid al-Fahd, “A Treatise on the Legal Status of Using Weapons of Mass Destruction Against Infidels”, May 2003 [http://www.carnegieendowment.org/static/npp/fatwa.pdf].