Thursday, December 28, 2006

New Nuclear Causes "Run" on Uranium.

A worldwide emphasis on nuclear power, which points to the inauguration of some 250 new nuclear power plants to add to the 440 currently operating, has pushed up the price of uranium by almost a quarter in the past three months, and a new analysis predicts it will rise by a further three quarters (a doubling overall) over the next two years. To provide the uranium fuel, a host of new, relatively small mining and production companies has emerged on the market, whose share values have rocketed. Britain has murmured in the positive about a new generation of nuclear power stations, in particular since the current family are due for retirement and decommissioning by around 2022, but no firm programme has been described, nor even the type of reactor that will be used. It is my understanding that they will be fission reactors based on enriching natural uranium in uranium-235, rather than fast breeders, which convert the majority of uranium (uranium-238) to plutonium-239 as a fuel. As I have pointed out in recent postings, this is actually a very wasteful use of uranium, since it "throws away" 99% of the resource (which becomes "depleted uranium"), but there are serious safety issues attending fast-breeder reactors based on uranium/plutonium, and breeding uranium-233 from thorium might be the better way to go.
While uranium remains relatively abundant as a resource, there is less fiscal incentive to adopt an entirely new technology based on thorium, although India, with its huge reserves of thorium, is taking serious steps to use this as the fuel for its own nuclear programme. Depending on precise figures, there may be around 70 years' worth of uranium in current holdings, including the known deposits in Canada, Kazakhstan and Australia, and those in armouries of nuclear warheads, containing highly enriched uranium (that's 90% uranium-235, not the mere 3.5% contained in nuclear fuel) and weapons-grade plutonium (that's essentially pure plutonium-239, free from other isotopes, otherwise a smaller bang is got from a nuclear detonation!). Uranium is ubiquitous at around 2 - 3 parts per million of the Earth's crust, and soils, particularly those associated with phosphate minerals, can contain from 50 - 1000 ppm (up to 0.1%) of uranium; hence as the price rises, more will be found. The difference between a resource and a reserve is that the reserve is that portion of the resource which is economically extractable at any given time of writing. As the price of a commodity increases, then so does the amount of its reserve, bearing in mind that other resources, e.g. oil and gas, are used up in extracting uranium and other fuels, nuclear or not.
The uranium price is forecast to reach $90 per pound by the middle of 2007, and $115 per pound by the end of 2008, according to a report produced by Resource Capital Research. In 2003, the market value of uranium was just $11 per pound. The Cigar Lake mine in Canada, which was supposed to account for 40% of all new output within the next three years, has been delayed and is now not expected to come on-stream until 2010. A group of 65 new Australian mining firms have seen their shares leap by 53% over the last three months, amounting to a total increase in value of 186% over the past 12 months. Very nice too - I wish I had bought into some of them!
Mining uranium has environmental consequences, beyond those of the nuclear power it is intended to fuel. Friends of the Earth is campaigning against new schemes such as Olympic Dam (also known as Roxby Downs), which by 2013 could become the world's largest producing uranium mine. Mining uranium in Australia is particularly sensitive because many of the uranium deposits are located on land inhabited by aboriginal groups, and efforts to install a new uranium mine at Jabiluka in the Northern Territory were halted in the light of opposition from environmental and aboriginal groups.

Saturday, December 23, 2006

Calculate your Carbon-Footprint.

Ever wanted a ready-reckoner for your carbon footprint? If so, Donnachadh McCarthy (whose family are originally from Cork, I would think, as it is my Grandmother's maiden name) has written an article in The Independent containing some handy conversion factors. The carbon footprint is oddly named, since it refers to the "weight" of CO2 impressed on the sky, in tonnes of carbon dioxide NOT carbon. To convert this to an equivalent of carbon, multiply each tonne of CO2 by (12/44), or about a quarter. First of all, electricity. To convert this into an emission of CO2, first work out how many kilowatt hours (kWh) of electricity have been used by your household during the past 12 months. For our own household, it is 3,062 kWh (according to the electricity bills), and so multiply this by 0.43:

Hence: 3,062 x 0.43 = 1,316.7 kg = 1.3 tonnes of CO2.

Next, gas/oil. Again, from the bills you can work out how many kWh of gas you have used, and in our case that is 18,237 kWh, which is to be multiplied by 0.19, giving:

3,465 kg = 3.5 tonnes of CO2.

[For completeness, if you have oil-fired heating and used 2,000 litres of oil, the conversion would be 2,000 x 2.68 = 5,360 kg = 5.4 tonnes of CO2.
If it is coal-fired, and 2.5 tonnes of coal were used, then that equates to 2.5 x 2.42 = 6,050 kg, or about 6 tonnes of CO2].

Car use: this is worked out on the mileage, and so for an annual 8,700 miles, multiply by 0.36 = 3,132 kg = 3.1 tonnes of CO2. Now, we do not own a car, and instead walk or use public transport for longer distances. Hence, although on this basis our CO2 contribution is zero, public transport does produce some output.

Flights. The recommendation is to take the number of short-haul flights (e.g. London-Paris; London-Edinburgh) and multiply by 0.2 tonnes of CO2. For medium-haul flights (e.g. Edinburgh-Ankara; Cardiff-Cairo), multiply by 0.8 tonnes, and long-haul (e.g. London-Sydney) multiply by 2 tonnes.

For the electricity and gas/oil, add the two together and divide by the number of adults in the household. So, for us that is: (1.3 + 3.5)/2 = 4.8/2 = 2.4 tonnes of CO2 each. Add zero for car use, and 2 short-haul flights each (London-Prague) = 2 x 0.2 = 0.4. Hence we have a carbon footprint of 2.8 tonnes of CO2 each. I am relieved that our combined "household emissions" of 5.6 tonnes is less than the 6.2-tonne average according to DEFRA, and a good deal less than the 66 tonnes estimated for one well-off Hertfordshire family! The entire family of Friends of the Earth press officer Neil Verlander apparently produces a mere breeze of 1.7 tonnes of CO2!
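For anyone who would rather let a computer do the sums, here is a minimal sketch in Python of the ready-reckoner described above; the conversion factors are those quoted from the article, and the example figures are our own household's.

# Minimal carbon-footprint ready-reckoner, using the conversion
# factors quoted above (kg of CO2 per unit of activity).
FACTORS = {
    "electricity_kwh": 0.43,   # kg CO2 per kWh of electricity
    "gas_kwh": 0.19,           # kg CO2 per kWh of gas
    "oil_litre": 2.68,         # kg CO2 per litre of heating oil
    "coal_tonne": 2420.0,      # kg CO2 per tonne of coal
    "car_mile": 0.36,          # kg CO2 per mile driven
}
FLIGHT_TONNES = {"short": 0.2, "medium": 0.8, "long": 2.0}  # tonnes CO2 per flight

def footprint_tonnes(electricity_kwh=0, gas_kwh=0, oil_litres=0,
                     coal_tonnes=0, car_miles=0, flights=(), adults=1):
    """Return (per-adult, combined) footprint in tonnes of CO2 per year."""
    kg = (electricity_kwh * FACTORS["electricity_kwh"]
          + gas_kwh * FACTORS["gas_kwh"]
          + oil_litres * FACTORS["oil_litre"]
          + coal_tonnes * FACTORS["coal_tonne"]
          + car_miles * FACTORS["car_mile"])
    household = kg / 1000.0
    per_adult = household / adults + sum(FLIGHT_TONNES[f] for f in flights)
    return per_adult, per_adult * adults

# Our household: 3,062 kWh of electricity, 18,237 kWh of gas, no car,
# and two short-haul flights each, shared between two adults.
per_adult, combined = footprint_tonnes(electricity_kwh=3062, gas_kwh=18237,
                                       flights=("short", "short"), adults=2)
print(f"{per_adult:.1f} tonnes CO2 each, {combined:.1f} tonnes combined")
# -> 2.8 tonnes CO2 each, 5.6 tonnes combined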

Energy Balance wishes you all a Merry Christmas and a less energy intensive 2007!

Thursday, December 21, 2006

Thorium - Letter to The Independent.

As I reported two postings ago in "Thorium gets Good Press over Uranium", I sent a letter to "The Independent" newspaper in response to an article they published on using thorium as a source of energy for the future. That article focussed on Accelerator Driven Systems (ADS), and my point was that thorium could instead be used in a Liquid Fluoride Reactor, to particular advantage, notably avoiding the huge energy cost of running an accelerator system capable of producing an intensely powerful proton beam with which to produce neutrons by spallation in a lead target. One such accelerator is the "ring-zyklotron" at the Paul Scherrer Institute in Switzerland, where I have carried out particle beam experiments over many years. The power is huge: for a beam current of 1.5 milliamps of protons, each accelerated to an energy of 600 million electron volts (MeV), we have a proton beam running at a power of 900 kilowatts.
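For readers who like to check the sums, the beam power follows directly from current times energy per unit charge; a quick sketch in Python, using the figures quoted above:

# Beam power = current (amps) x accelerating energy per unit charge (volts).
# A proton carrying 600 MeV of kinetic energy corresponds to 600 million volts
# of accelerating potential per elementary charge.
beam_current_A = 1.5e-3        # 1.5 milliamps
energy_per_proton_eV = 600e6   # 600 MeV

power_W = beam_current_A * energy_per_proton_eV  # eV per coulomb = volts
print(f"Beam power = {power_W/1e3:.0f} kW")      # -> Beam power = 900 kW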
Beams of charged particles are steered and focussed down "beam-lines" (steel pipes pumped out to a hard vacuum) using dipole magnets for bending the beam, and quadrupole magnets (a pair of dipoles in pairs... sort of!) which act as "lenses" to focus the beam. Of course, depending on the polarity of a dipole magnet, the beam is bent (steered) either to the right or to the left, say. After one set-up, a magnet was set to the wrong polarity, with the result that instead of the beam passing smoothly down the middle of the tube, it was brought into the wall of it, which immediately vapourised, drilling a hole about the size of the beam-spot - a circle roughly 3 cm in diameter - straight through the side of the beam line. I was surprised at this until I worked out the power, and then realised that 900 kilowatts focussed onto a piece of metal about an inch across... well, what else would it do but punch a hole straight through it?! Particle beams are not "clean", either, and are contaminated with all kinds of radioisotopes from the "target", which took some time to be cleaned up before the accelerator and its beam-lines were deemed safe for experimenters to return to!

As the "Inde" have not published that first letter - understandably as there are more immediately pressing issues such as the Ipswich murders in the press, I sent them a modified letter to see if the essential message could be got across with fewer technical details than in the original one. Since the Ipswich case, social policy aspects and the many features pertaining to the Middle East continue to absorb most of the space in the "Letters" section, I'm not sure if they will publish this one either! In any case, I have copied the text below:

I applaud Helen Brown's article "What Energy Crisis" (Wednesday, December 13) extolling the potential virtues of thorium as a nuclear fuel. However, thorium does not "require an accelerator-driven system" (ADS). Such accelerators need huge amounts of electricity to run them, as particle accelerators always do, but in this case they are required to produce a beam of protons of such intensity that until 10 years ago the prevailing technology meant that it could not have been done. Rather like nuclear fusion, a working ADS reactor is some way off, and may never happen. However, thorium can be effectively utilised in a liquid-fluoride reactor, where the nuclear materials are present in the form of a solution of fluoride salts. This kind of reactor permits the continuous reprocessing of its nuclear material relatively easily (certainly as compared with solid-fuel based nuclear reactors), which is critical in the way thorium is actually used. Details of this and related matters are at http://ergobalance.blogspot.com


Professor Chris Rhodes

Wednesday, December 20, 2006

No Snow for Santa!

The climate of Northern Europe is distinguished by its lack of winter weather this year, and although midwinter's day has fallen, very little snow has, and spring seems already to have flown to blossom on the roof of Europe. I commented previously on the lack of snow that we noticed on holiday in Switzerland back in August, which appears to be a general feature of the whole of Europe, with knock-on effects for the tourist industry. What are the Alps without snow on them, all said and done?! What certainly cannot be done without snow is skiing, and holiday-makers arriving to take their annual medicine of winter sports are being greeted by lush green meadows, on which sheep graze contentedly, not crunchy white virgin pistes. Hotels throughout the Alps remain underbooked, to a loss reckoned by the Italian Hoteliers' Association at £400 million this year so far - and that's just in Italy.
Every spring since 1818, the city of Geneva's official chestnut tree has been watched as a harbinger of Alpine spring by a specially appointed official (sounds like a typical local authority!), who solemnly records the date on which it puts out its first bud on a special noticeboard in the town hall (definitely!). There have been two previous "tree" post-holders (how many "human" ones is anyone's guess), and the event usually falls in March, though occasionally it has arrived prematurely in February. This year the tree burst into bloom in late October, and continues to sport flowers and leaves, thereby bypassing winter. Two weeks ago, the Organisation for Economic Cooperation and Development warned that the Alps are especially sensitive to global warming, having warmed in recent years by about three times as much as the world as a whole, and that still greater changes can be expected in the coming decades, with less snow at lower altitudes, and receding glaciers and melting permafrost higher up.
The organisation has made a two-year study, which concluded that 609 of the present 666 (a symbolic number, if ever there was one!) medium to large Alpine ski resorts have sufficient snow cover for 100 days a year at the moment, but this number could decrease to a mere 200 if temperatures rise by four degrees centigrade. However, that is the worst-case scenario, and does seem an almost outlandish increase. Interestingly, it is Germany that would be worst affected, where a one degree rise - which the "experts" say could happen by 2020... hmmm - would lead to a 60 per cent fall in the number of resorts with snow (more of them being at relatively low altitudes, presumably). It is not just the "surface" mountain coverage that is at issue, because ice glues together huge masses of rock, which have started to detach from mountains like the Eiger, and entire cliff faces have disintegrated. Smart Swiss bankers are now refusing to lend money to ski resorts located at altitudes below 4,500 feet.
In Moscow too it is unusually warm at 9 degrees C, rather than the more usual "minus four", and bears in Moscow Zoo show no wish to hibernate. In Sweden, the gingerbread houses that families traditionally make for Christmas are collapsing as the damp, warm weather melts the icing used to stick them together. "The problem is the mild winter," according to Aake Matteson of Anna's, the country's leading gingerbread wholesaler. In Britain, Prince Charles has said that climate change is the "biggest threat to mankind", which is the point made by the government's Chief Scientific Advisor, Professor Sir David King, earlier in the year, in agreement with Professor James Lovelock (of Gaia fame), in support of a need for a new generation of nuclear power stations. The Prince has called on governments across the world to act before it is too late. Some think it already is, while others believe that the current global warming is part of a natural 1,500 year cycle, related to the power output of the sun.
In reference to the 53 Commonwealth countries, Charles ("Sir" to me) said that they needed to improve levels of sustainable development. Now he is onto something there, whatever the cause of global warming. Ultimately, we will have to live, and provide the energy to do so, in a sustainable way; an eventual "powerdown" strategy will require a paradigm shift in our relentless pursuit of "more". We still have plenty, but need to use the resources "better" - more efficiently, through a combination of technology and some frugality, though stopping short of an entirely agrarian society, as yet. We still need our prevailing technology to develop sustainable methods, holding a programmed and steady course to the self-renewing destination that must be the future.

Monday, December 18, 2006

Thorium gets Good Press over Uranium!

"The Independent" newspaper, published an article last week (Wednesday, December 13th) to which I responded with a letter, pointing out that an accelerator-driven system (ADS) is not the only way that thorium can be used to generate nuclear power, but that it could be used very advantageously in Liquid Fluoride Reactors. The letter has not been printed as yet, and so I have copied the text at the bottom of this posting. Such accelerators need huge amounts of electricity to run them, as particle accelerators do, but these are required to produce a beam of protons of such intensity that until 10 years ago the prevailing technology meant that it could not have been done. Rather like nuclear fusion, the working ADS technology is some way off, and may never happen, although professor Egil Lillestol of Bergen University in Norway is pushing that the world should use thorium in such ADS reactors. Using thorium as a nuclear fuel is a laudable idea, as is amply demonstrated in the blog "Energy from Thorium" (http://thoriumenergy.blogspot.com/) to which there is a link on this blog (above left). However, the European Union has pulled the plug on funding for the thorium ADS programme, which was directed by Professor Carlo Rubbia, the Nobel Prizewinner, who has now abandoned his efforts to press forward the programme, and instead concentrated on solar energy, which was another of his activities. Rubbia had appointed Lillestol as leader of the CERN physics division almost two decades ago, in 1989, who believes that the cause is not lost.
Thorium has many advantages, not the least being its greater abundance than uranium. It is often quoted that there is three times as much thorium as there is uranium. Uranium is around 2 - 3 parts per million in abundance in most soils, and this proportion rises, especially where phosphate rocks are present, to anywhere between 50 and 1000 ppm. This is still only in the range 0.005% - 0.1%, and so even the best soils are not obvious places to look for uranium. However, somewhere around 6 ppm as an average for thorium in the Earth's crust is a reasonable estimate. There are thorium mineral deposits that contain up to 12% of the element, located at the following tonnages in Turkey (380,000), Australia (300,000), India (290,000), Canada and the US combined (260,000)... and Norway (170,000), perhaps explaining part of Lillestol's enthusiasm for thorium-based nuclear power. Indeed, Norway is very well endowed with natural fuel resources, including gas, oil, coal and, it would appear, thorium.
An alternative technology to the ADS is the "Liquid Fluoride Reactor" (LFR), which is described and discussed in considerable detail on the http://thoriumenergy.blogspot.com/ blog, and reading this has convinced me that the LFR may provide the best means to achieve our future nuclear energy programme. Thorium exists naturally as thorium-232, which is not of itself a viable nuclear fuel. However, by absorption of relatively low energy "slow" neutrons, it is converted to protactinium-233, which must be removed from the reactor (otherwise it absorbs another neutron and becomes protactinium-234) and allowed to decay over about 28 days to uranium-233, which is fissile and can be returned to the reactor as a fuel, and to breed more uranium-233 from thorium. The "breeding" cycle can be kicked off using plutonium, say, to provide the initial supply of neutrons, and indeed the LFR would be a useful way of disposing of weapons-grade plutonium and uranium from the world's stockpiles while converting it into useful energy. The LFR makes in-situ reprocessing possible, much more easily than is the case for solid-fuel based reactors. I believe there are two LFRs working already, and if implemented, the technology would avoid using uranium-plutonium fast breeder reactors, which need high energy "fast" neutrons to convert uranium-238, which is not fissile, to plutonium-239, which is. The LFR is inherently safer and does not require liquid sodium as a coolant, while it also avoids the risk of plutonium getting into the hands of terrorists. It is worth noting that while uranium-235 and plutonium-239 could be shielded to avoid detection as a "bomb in a suitcase", uranium-233 could not, because it is always contaminated with uranium-232, which is a strong gamma-ray emitter, and so is far less easily concealed.
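As a rough feel for what that holding period means, here is a small Python sketch of the protactinium-233 decay; the ~27-day half-life is a standard nuclear-data value I am assuming here, not a figure from the article:

import math

# Decay of protactinium-233 to uranium-233 while held outside the reactor.
half_life_days = 27.0   # assumed Pa-233 half-life (standard nuclear data)
decay_const = math.log(2) / half_life_days

for days in (28, 60, 120, 270):
    remaining = math.exp(-decay_const * days)
    print(f"After {days:3d} days: {100*(1-remaining):5.1f}% converted to U-233")
# After ~28 days (roughly one half-life) about half has decayed; several
# months of holding are needed before nearly all of it has become U-233.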

The Independent article claims that thorium "...produces 250 times more energy per unit of weight" than uranium. Now this isn't simply a "logs versus coal on the fire" kind of argument, but presumably refers to the fact that while essentially all the thorium can be used as a fuel, the uranium must be enriched in uranium 235, the rest being "thrown away" and hence wasted as "depleted" uranium 238 (unless it is bred into plutonium). If both the thorium and uranium were used to breed uranium 233 or plutonium 239, then presumably their relative "heat output" weight for weight should be about the same as final fission fuels? If this is wrong, will someone please explain this to me as I should be interested to know?
However, allowing that the LFR's in-situ reprocessing is a far easier and less dangerous procedure, the simple sums are that contained in 248 million tonnes of natural uranium, available as a reserve, are 1.79 million tonnes of uranium-235 + 246.2 million tonnes of uranium-238. Hence by enrichment, 35 million tonnes (Mt) of uranium containing 3.2% uranium-235 (from the original 0.71%) are obtained. This "enriched fraction" would contain 1.12 Mt of (235) + 33.88 Mt of (238), leaving in the other "depleted" fraction 248 - 35 Mt = 213 Mt of the original 248 Mt, containing 0.67 Mt (235) + 212.3 Mt (238). Thus we have accessed 1.79 - 0.67 = 1.12 Mt of (235), i.e. 1.12/248 = 4.52 x 10^-3, or 0.452%, of the original total uranium. Thus on a relative basis thorium (assuming 100% of it can be used) is 100/0.452 = 221 times as good weight for weight, which is close to the figure claimed, and a small variation in enrichment to a slightly higher level, as is sometimes done, probably would get us to an advantage factor of 250!
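Since the arithmetic above is easy to mistype, here is a small Python sketch of the same enrichment mass balance, using the figures quoted (248 Mt of natural uranium, with 35 Mt enriched to 3.2% uranium-235):

# Mass balance for enriching natural uranium, using the figures quoted above.
natural_Mt = 248.0      # natural uranium reserve, million tonnes
product_Mt = 35.0       # enriched fraction taken, million tonnes
product_assay = 0.032   # 3.2% uranium-235 in the enriched fraction

u235_accessed = product_Mt * product_assay     # ~1.12 Mt of U-235 actually used
fraction_used = u235_accessed / natural_Mt     # fraction of ALL the uranium used

print(f"U-235 accessed: {u235_accessed:.2f} Mt")
print(f"Fraction of natural uranium used: {fraction_used*100:.3f}%")
print(f"Thorium advantage (if 100% usable): {1/fraction_used:.0f}x")
# -> about 0.452% used, i.e. a weight-for-weight advantage of ~221 for thorium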

However, plutonium is a by-product of the normal operation of a uranium-fuelled fission reactor. 95 to 97% of the fuel in the reactor is uranium-238. Some of this uranium is converted to plutonium-239 and plutonium-241 - usually about 1000 kg forms after a year of operation. At the end of the cycle (a year to 2 years, typically), very little uranium-235 is left, and about 30% of the power produced by the reactor actually comes from plutonium. Hence a degree of "breeding" happens intrinsically, and so the practical disadvantage of uranium improves from 1/250 (accepting that figure) to about 1/192 (since 250/1.3 = 192), which still weighs enormously in favour of thorium!

As a rough estimate, 1.4 million tonnes of thorium (about one third the world uranium claimed, which is enough to last another 50 years as a fission fuel) would keep us going for about 200/3 x 50 = 3,333 years. Even if we were to produce all the world's electricity from nuclear that is currently produced using fossil fuels (which would certainly cut our CO2 emissions), we would be O.K. for 3,333/4 = 833 years. More thorium would doubtless be found if it were looked for, and so the basic raw material is not at issue. Being more abundant in most deposits than uranium, its extraction would place less pressure on other fossil fuel resources used for mining and extracting it. Indeed, thorium-electricity could be piped in for that purpose.
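The back-of-envelope lifetime estimate can be written out explicitly; a sketch in Python, taking the ~200-fold weight-for-weight advantage and the other round figures used above:

# Rough thorium resource lifetime, using the round figures from the text.
uranium_years_left = 50          # claimed lifetime of uranium as a fission fuel
thorium_vs_uranium_tonnage = 1/3 # 1.4 Mt thorium, about a third of the claimed uranium
weight_advantage = 200           # thorium usable ~200x more efficiently by weight

thorium_years = uranium_years_left * thorium_vs_uranium_tonnage * weight_advantage
print(f"Thorium at current nuclear output: ~{thorium_years:,.0f} years")

# If nuclear also replaced all fossil-fuelled electricity (taken here as roughly
# a fourfold increase in nuclear generation), the lifetime shrinks accordingly:
print(f"Replacing fossil electricity too: ~{thorium_years/4:,.0f} years")
# -> roughly 3,333 years and 833 years respectively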
It all sounds great: however, the infrastructure needed to switch over entirely to thorium would be huge, as it would be to switch to anything else, including hydrogen and biofuels. It is this that represents the huge mountain of resistance there will be to all kinds of new technology. My belief is that, through cuts in energy use following peak oil (and peak gas), we may be able to produce liquid fuels from coal, possibly using electricity produced from thorium. Thorium also produces less of a nuclear waste problem finally, since fewer actinides result from the thorium fuel cycle than from that of uranium. Renewables should be implemented wherever possible too, in the final energy mix that will be the fulcrum on which the survival of human civilization is poised.

Here is a copy of the text of my letter to The Independent:

I applaud Helen Brown's article "What Energy Crisis" (Wednesday, December 13) extolling the potential virtues of thorium as a nuclear fuel. However, thorium does not "require an accelerator-driven system"; it can be utilised to particular advantage in a liquid-fluoride reactor (a specific example of a molten salt reactor), where the nuclear materials are present in the form of fluoride salts dissolved in a solution of other fluoride salts. This kind of reactor permits the continuous reprocessing of its nuclear material relatively easily, certainly as compared with nuclear reactors which use solid fuel. This factor is critical in the way thorium is actually used, because the material must first be converted to protactinium by neutron irradiation from a fissile kick-starter element (e.g. plutonium), and then isolated from neutrons (by removal from the reactor), allowing it to decay to uranium-233. The U-233 is then introduced to the reactor to undergo nuclear fission and consequent energy production.

Professor Chris Rhodes.




Friday, December 15, 2006

Radiation, Free Radicals and Disease.

Radiation breaks down water in living cells into dangerously reactive free radicals which subject our bodies to a continual assault, attacking essential molecules of life such as DNA and proteins. Since radiation has been around for as long as the Earth has, we have fortunately evolved efficient defence systems called antioxidants, which generally intercept the radicals before they do too much damage. The energy spectrum of radiation spans the range from radio-waves, at fairly low energies, through the microwave, infra-red, visible and ultra-violet wavelengths of light, and into the x-ray, gamma-ray and cosmic-ray regions. The latter constitute "ionising radiations", which are of relatively high energies, up to millions of electron volts. The disintegration of radioactive nuclei, too, results in energetic particles which can also ionise molecules, such as beta-rays (high energy electrons) and alpha-particles, along with gamma-rays. Indeed most nuclear disintegrations release gamma rays, as the decay ("daughter") product nucleus is formed in an excited state and emits a gamma ray on returning to its state of lowest energy.
The word "radiation" refers to the process of emitting energy in the form of rays or particles, which are themselves often called "radiations". In 1895, Wilhelm Roentgen was the first to identify (though probably not discover as vacuum "Crookes" tubes had been around for several decades) x-rays, streaming from the "positive electrode" in such cathode ray tubes. Cathode rays are energetic electrons which "boil-off" from the surface of an electrically heated cathode, which on striking a (positive) electrode to which they are attracted by virtue of their own negative charge, excite atoms in the target, which emit energetic radiations known as x-rays. Within months of Roentgen's report, Antoine Henri Becquerel had identified natural radioactivity from uranium ore, making the discovery that the mineral could darken a photographic plate, even when the latter was wrapped in paper, and so the curious radiations had a peculiar power of penetration. In 1896, the first x-ray "photograph" was taken, in fact of Roentgen's wife's left hand - the bones are clearly visible, as indeed is her wedding ring, since the rays are stopped by denser material, thus permitting structural differentiation in living tissue. I wonder why Roentgen didn't x-ray his own hand, but left this privilege to his wife! Perhaps he wasn't entirely sure that x-rays were harmless and used her as a guinea-pig; but more likely, he did consider it a privilege, as radiations were to be revered for the next few decades as being possessed by almost miraculous properties. X-rays were put to generalised use in 1897 by the U.S. Army in the Spanish- American War to locate bullets and shrapnel in wounded soldiers.
In 1898, Marie Curie discovered that thorium too emitted "uranium radiation", and coined the term "radioactivity" to describe the phenomenon. In this same year, Marie and Pierre Curie managed to identify both the elements polonium (named after her native Poland) and radium. Polonium is a decay product of radium and both are members of the decay chain leading down from uranium-238. The elements were "discovered" using spectroscopy, since both exhibited spectral lines unknown for any other elements. My casual annotation here trivialises the immensity of their accomplishment, which involved boiling, in 20 kilogram batches, 4 tonnes of uranium ore (pitchblende) in concentrated acids, and then separating out the constituent elements using probably the most laborious technique in classical chemistry: "fractional crystallisation". All the work was done in an unheated shed, which, as they came closer to their goal, was lit at night by the green glow from the radioactive salts in petri dishes spread out over the simple wooden tables that were its furniture. Marie Curie described herself as "often physically exhausted" at the end of a day's work, no doubt as she, a slightly built woman, handled half-hundredweight quantities of materials, and stirred them with a heavy iron bar in cauldrons of boiling acid. It is likely too, that her strength was sapped by exposure to ionising radiation from these elements, from which there was no protection at all, and which eventually took her life in the form of aplastic pernicious anemia (a form of leukemia). Yet she described those four years spent working "in the shed" with her husband as being "their happiest". I imagine an almost spiritual euphoria of belief, idealism, purpose and love, which only a precious few of us experience. Marie Curie was awarded the Nobel Prize for Physics, along with Pierre Curie and Becquerel, in 1903. In 1910, she isolated pure radium metal by electrolysing a solution of radium chloride (RaCl2) in water using a mercury cathode: the mercury was distilled off, leaving about 100 milligrams of metallic radium. This, along with her earlier isolation of radium salts, led to her being awarded a second Nobel Prize, this time in Chemistry, in 1911.
In 1899, Ernest Rutherford discovered "alpha-rays" and "beta-rays", as he termed them, differentiating between the two kinds by their differing penetrating power. Specifically, alpha-rays (helium nuclei, and hence particles) are stopped by a sheet of paper, but beta-rays (electrons) require a sheet of aluminium to stop them. The lawyers were in quickly enough too, and also in 1899 the first malpractice award was made for x-ray "burns". Though the public "miracle" of radiation had inaugurated a huge industry, including "emanation generators", devices which contained sufficient radium to produce a steady dose of "healthy radon" in the home, those working with radioactive materials and x-ray equipment had come to realise that there were serious health effects attendant on radiation. As early as 1902, it was shown that a dose of x-rays could cause death in a mammalian foetus. Both the Curies and Becquerel received radiation "burns" from handling radium. The connection was made between exposure to radiation and the development of tumours. One particularly unfortunate pioneer was Clarence Dally, a fine experimentalist who was employed by the U.S. inventor Thomas Edison in making and experimenting with x-ray tubes. Dally suffered a succession of tumours to both hands, which necessitated the amputation of his fingers, then his hands and finally both arms. He finally died of generalised cancer aged just 39. Radiation-induced tumours are a result of free radicals causing damage to DNA in cells, and these radicals arise mostly from the breakdown ("radiolysis") of water in the sheath surrounding the DNA double helix.
The symptoms of "radiation sickness" are well known (vomiting, hair-loss and in high doses, an ultimate liquefaction of internal organs as proteolytic enzymes are spilled from breached cell interfaces, then death). The lethal dose to a human is around 500 - 1000 Rems (5 - 10 Sieverts). Many animals have been irradiated in "radiation biology" experiments, and in the 1950's a connection was made between the effects of exposure to x-rays and "oxygen poisoning", which deep-sea divers may experience under inopportune circumstances (accelerated metabolism, coma and even death). It was proposed that oxygen free radicals may be involved in both cases. This led to the notion by Denham Harman in 1956 that the reason we age is that cells in the body are constantly attacked by free radicals formed from oxygen (which we breathe) and transition metals such as iron, and ultimately we die from accumulated wear and tear. In a similar manner to an old car, which rusts from exposure of oxygen radicals, so do we and eventually become "old bangers" and are finally shipped-off to the junk-yard!
As an extension of this "free radical theory of aging" comes the notion that many diseases, including cancer, arthritis and cardiovascular disease, are the result of injury from oxygen free radicals. A huge industry has grown up around the concept of "antioxidants", and the idea that taking supplements of antioxidants in the form of pills to some extent reduces the damage, and hence the likelihood of developing these illnesses. However, there is no conclusive evidence that antioxidant supplements do any good at all, whereas there is well documented data showing that taking too many, or the wrong kinds, can be positively harmful. In one trial of beta-carotene supplements in smokers in Finland, the intended three-year trial was discontinued after just nine months because subjects began to show an elevation in levels of lung cancer of about 20%! Since beta-carotene is thought to protect against cancer, this is most alarming, but it just shows that Nature is more complex than we comprehend, and the only hard medical statistical evidence is that eating 5 - 8 portions of fruit and vegetables per day does seem to correspond with a reduced level of cancer, heart disease and arthritis. This is usually referred to as the "Mediterranean Diet", but I wonder to what extent the benefit is actually provided by the "Mediterranean Lifestyle": that people who are warmer, less stressed and happier are less prone to these diseases than are those of us living in the colder, more societally frenetic northern countries.

This text is based on a lecture "Radiation, Free Radicals and Disease" delivered by Professor Chris Rhodes at Kingston University, London on Wednesday, December the 13th (2006).

Wednesday, December 13, 2006

Wind Farms and Nuclear Waste.

There is no direct connection between the terms in the title, but if wind turbines are placed in the wrong locations, they will not produce the amount of electricity forecast for them, and hence we will rely increasingly on other forms of electricity generation, probably nuclear. Almost certainly, nuclear power will play a major role as part of the mix chosen to secure electricity production until at least 2050, although falling oil supply may well change world geopolitics sufficiently that "Plan B's" will need to be drawn up before then. Friends of mine who are "into" such things tell me that I should not be worrying about any of this, as time (as we know it) will end when the Mayan Calendar finishes in the year 2012! It is interesting that this is also the year of the London Olympics - surely some coincidence? My friends may well be right, but I would prefer to keep a more pragmatic view in reserve, in terms of energy and resources as we currently perceive them!
Wind turbines are wonderful devices, but just how wonderful they are depends on where you put them. Organisations that place them ostentatiously on the roofs of their premises in regions with little or highly intermittent wind flow are making a statement more than any significant contribution to the national grid (most do not have local electricity storage installed, so to be of any use the output would have to feed the grid). To wit, a new study by the Renewable Energy Foundation concludes that England and Wales are not windy enough to allow large wind turbines to operate at the rates claimed for them. This foundation is a charity that aims to evaluate wind and other types of renewable energy on an equal basis, and it based its study on data supplied by the energy regulator Ofgem for over 500 wind turbines. Even wind farms in Cornwall, which might be expected to be among the most efficient, being subjected to gusts from the Atlantic, operated at an average of just 24.1% of capacity. Now my understanding, as I have written before here - and received some censure too, that I was being far too pessimistic! - is that 20% is a reasonable value for the "capacity factor", according to long experience in Germany and Denmark (a very windy place - so much so that the trees grow at an acute angle, sloping against the wind!). Mid-Wales fares similarly at 23.8%, the Yorkshire Dales are slightly better placed at 24.9% and Cumbria tops the bill at 25.9%. It is only north of the border that a 30% average was achieved: southern Scotland made 31.5%; Caithness, Orkney and Shetland (pretty exposed places) made 32.9%; and offshore (North Hoyle and Scroby Sands, on opposite sides of the mainland), 32.6% was recorded.
The report concludes that the best siting for wind farms is offshore near major cities, so that the greater force of the winds can be harnessed and relatively little is lost in transmission through the national grid from remote areas such as northern Scotland and its isles. Apparently, all the government's wind targets are based on a capacity factor of 30% (why?, I ask), and clearly if the turbines are placed anywhere on land south of the border this will not be met. Some real "turkey farms" (of the wind variety) were found in lowland England, the worst being the turbine close to the M25 at Kings Langley, Hertfordshire, at the headquarters of Renewable Energy Systems, the green energy phalanx of the Robert McAlpine (construction) Group, which limped home at 7.7% - ouch!
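To put capacity factors into concrete terms, here is a small Python sketch; the 2 MW rated turbine size is my own assumed example, while the capacity factors are those reported above:

# Annual electricity output for a wind turbine of a given rated power,
# at the capacity factors reported in the Renewable Energy Foundation study.
HOURS_PER_YEAR = 8760
rated_power_MW = 2.0  # assumed example turbine size (not from the report)

capacity_factors = {
    "Cornwall": 0.241,
    "Mid-Wales": 0.238,
    "Yorkshire Dales": 0.249,
    "Cumbria": 0.259,
    "Southern Scotland": 0.315,
    "Caithness/Orkney/Shetland": 0.329,
    "Offshore (North Hoyle, Scroby Sands)": 0.326,
    "Kings Langley (M25)": 0.077,
}

for site, cf in capacity_factors.items():
    annual_MWh = rated_power_MW * HOURS_PER_YEAR * cf
    print(f"{site:40s} {cf*100:4.1f}%  ->  {annual_MWh:,.0f} MWh/year")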

Nuclear power has another horror story to report. I am not forthrightly anti-nuclear, and with various reservations, mainly over what to do with the long-term nuclear waste, and the potential for radioactive "dirty bomb" material to reach terrorists (either by blowing the stuff around London with a small explosive device, or crashing an aircraft into a running reactor, say), I accept that, certainly in line with government policy, it will be with us for decades to come. Through breeder technology based on either thorium or uranium, creating respectively uranium-233 or plutonium-239 as a final fuel, nuclear power could supply us with electricity for hundreds of years, and maybe it will. Fast breeder reactor technology was explored in the UK at Dounreay, now closed but regularly in the headlines as pieces of plutonium keep appearing on the beach there. The first of the Dounreay reactors to achieve criticality was the Dounreay Materials Test Reactor (DMTR), in May 1958. This reactor was used to test the performance of materials under intense neutron irradiation, particularly those intended for fuel cladding and other structural uses in a fast neutron reactor core. It is fast (high energy) neutrons that are required to convert uranium-238 to plutonium-239.
More fuel fragments have been discovered, and the Dounreay Particles Advisory Group (DPAG) has recommended that the UK Atomic Energy Authority (UKAEA) close the beach immediately adjacent to its north Scotland nuclear site where they were found. This is the latest find of radioactive fuel residues at various offsite locations, including Sandside beach and the Dounreay foreshore. The cat was put among the pigeons when one particle was detected on a beach at Dunnet, which is in fact several miles distant, to the east of Dounreay. The particles are about the size of a grain of sand. The pollution is a legacy of many years' worth of poor practice in waste management, which allowed thousands of shards of irradiated fuel rods from reprocessing to be released into the environment through a number of different routes. I remember a story that waste was dumped into a shaft near a cliff, whose geology was less sound than originally thought, and it cracked, releasing some of the plutonium it was supposed to contain "for thousands of years". DPAG has announced that trials are to be undertaken of "robots" for recovering particles from offshore sediments. There is a lesson here about a "duty of care" to future generations: to ensure that currently applauded methods of radioactive waste disposal really will sequester radioactive waste for thousands of years, as claimed, until the radioactivity has died down to near background levels.

There are many who favour renewables, and that includes me, although I have pointed out repeatedly that the gargantuan amounts of energy we use currently will not readily (if ever) be supplied from renewable sources. Energy efficiency is key, whether through technology or, probably, simple frugality if ends so necessitate, but we should power down to whatever kind of life this engenders and not run the profligate train of plenty over the edge of a cliff; we must begin to apply the brakes before it steams on off the track. I am not enthused by the idea of going back to an agrarian economy; while I have enjoyed reading the novels of Thomas Hardy, that is in part because of the simple and beautiful language he used to describe a life based around agriculture, one that was technologically simple but humanly harsh.

Monday, December 11, 2006

Your Carbon Footprint Counted.

A new report by the Carbon Trust, funded by the U.K. government, has concluded that each of us "emits" almost 11 tonnes (10.92 tonnes) of CO2 per year. This does not refer, of course, to our bodily emissions, but to that produced from all the activities surrounding an average life in the U.K. It is worth mentioning that the report refers to "The cost in carbon (per person, per year)" when it is really carbon dioxide that is being counted. As an effective mass of carbon, this equals: (12/44) x 10.92 = 2.98, or nearly 3 tonnes of carbon.
For comparison, the average American causes 19 tonnes of CO2, which is roughly in the ratio of the statistic that if all the world lived a European lifestyle, it would take three planet Earths to support it, but five planets to live as the US do, in terms of the energy resources consumed. I noted previously that we have just passed the point of "going into the red" in terms of resources used, and have exceeded what can be provided in a sustainable way - put more choicely, "we have begun to eat the planet!"

As a matter of interest, lets see how much CO2 each "body" does emit. Let's assume an average food intake of 2,000 "calories" per day, and that it is all burnt up in the form of glucose. The heat of combustion of one mole (180 grams) of glucose (C6H12O6) is 2830 kilojoules. This is equal to:
2830/(4.18 x 180) = 3.76 kilocalories per gram. Now, one "calorie" as referred to in food is actually one kilocalorie, hence that 2000 calories is really 2000 kilocalories. Hence 2000 kcal/3.76 kcal/g = 531.9 g of glucose.
"Burning" 531.9 g of C6H12O6, according to: C6H12O6 + 6O2 --> 6CO2 + 6H2O, produces:
(6 x 44/180) x 531.9 = 780.12 grams of CO2/day, which per year is 780.12 x 365 = 284,743.8 grams = 285 kg = 0.285 tonnes of CO2. While this is quite a bit less than the 11 tonnes that are our responsibility in total, it is not included in the figure for "food", and hence I revise the annual figure upward from 10.92 tonnes to 11.2 tonnes!
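A short sketch in Python of the same sum, for anyone who wants to try a different calorie intake (the 2,000 kcal/day figure and the glucose heat of combustion are those quoted above):

# Bodily CO2 output, assuming the daily food energy is all burnt as glucose:
# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
daily_kcal = 2000.0                   # "calories" on food labels are kilocalories
heat_of_combustion_kJ_per_mol = 2830  # for one mole (180 g) of glucose
kcal_per_gram = heat_of_combustion_kJ_per_mol / (4.18 * 180)  # ~3.76 kcal/g

glucose_g_per_day = daily_kcal / kcal_per_gram        # ~532 g of glucose
co2_g_per_day = glucose_g_per_day * (6 * 44) / 180    # 6 moles CO2 per mole glucose
co2_tonnes_per_year = co2_g_per_day * 365 / 1e6

print(f"CO2 breathed out: {co2_g_per_day:.0f} g/day, "
      f"{co2_tonnes_per_year:.3f} tonnes/year")
print(f"As carbon: {co2_tonnes_per_year * 12/44:.3f} tonnes/year")
# -> roughly 780 g/day, or about 0.28 tonnes of CO2 per year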

The figures break down as follows. Almost a fifth (1.95 tonnes) of the total arises from recreational activities - everything from car trips, to visiting gyms and leisure centres with a heated pool, watching the good old "goggle box" or floodlit evening football matches. Home heating accounts for 1.49 tonnes of CO2, so any form of energy efficiency, such as double-glazing, loft insulation, and keeping internal doors shut, all helps to reduce this. It is reckoned that 1.39 tonnes of CO2 are generated by catering and food generally (cooking and refrigeration, plus indirect emissions from food production, including drink products and services etc.). Within this heading is included growing crops, producing packaging, manufacturing, distribution and recycling. Add to this the 0.285 tonnes emitted bodily from digestion and we have about 1.68 tonnes from "food" altogether!
The Carbon Trust message is not that we should curtail these activities, but that we should think more about what we are using, and how we might use less. For example, 2 kg of CO2 can be saved for each journey under three miles when we walk rather than use the car (I don't have a car!), and 30 kg by switching off the power in your house at night. A massive 2,300 kg (2.3 tonnes) of CO2 could be saved by using recycled paper in the office (the trouble is that often this doesn't work very well in printers). An additional 1.37 tonnes are apparently generated by "household activities", and this includes lighting, running appliances such as vacuum cleaners, and also the electricity used to produce household furnishings and even the building itself, from making bricks to delivering the furniture etc. to put in it. Clothing and footwear amount to one tonne of CO2; commuting is reckoned at an average of 0.85 tonnes (I am self-employed and work from home most of the time, making my contribution close to 0.00 tonnes); and "Hygiene" runs up 1.34 tonnes of the tariff - taking a bath instead of a shower, for example, adds an extra 50 kg of CO2 to the personal burden. Aviation (a subject I have mentioned many times, since it consumes over one fifth of the entire national transport fuel budget of 57 million tonnes) results in the emission of 0.68 tonnes of CO2 by each of us.
Reckoning aviation fuel in terms of n-octane (C8H18), and that it burns according to:
C8H18 + 12.5O2 --> 8CO2 + 9H2O, that means one mole, 114 grams, of fuel produces 352 grams of CO2. Hence the annually consumed 12 million tonnes of fuel is expected to produce (352/114) x 12 million = 37 million tonnes of CO2, which implies 37 million/0.68 = 54.4 million person-flights. Since most people will want to return home again, that amounts to somewhere over 27 million return trips flown per year. I don't know what the figure recorded by the airline companies is? It's a lot, anyway, and a population of 60 million each contributing 0.68 tonnes of CO2 implies over 40 million tonnes of CO2 annually just from plane flights.
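The stoichiometry above is easy to verify with a few lines of Python (the 12 million tonnes of aviation fuel and the 0.68 tonnes per person are the figures quoted in this post):

# CO2 from burning n-octane: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
molar_mass_octane = 8 * 12 + 18 * 1   # 114 g/mol
co2_per_mole = 8 * 44                 # 352 g of CO2 per mole of fuel

fuel_Mt = 12.0                        # annual UK aviation fuel, million tonnes
co2_Mt = fuel_Mt * co2_per_mole / molar_mass_octane
print(f"CO2 from aviation fuel: {co2_Mt:.0f} Mt/year")              # ~37 Mt

per_person_t = 0.68                   # Carbon Trust aviation figure per person
person_flights_M = co2_Mt / per_person_t
print(f"Equivalent person-flights: {person_flights_M:.1f} million")  # ~54 million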
Education ("information" really) accounts for almost half a tonne (0.49 tonnes) of CO2, including travel, books and newspapers, so 172 kg (0.172 tonnes) are for school buildings, 13.6 kg for books and the "school-run" in a dreaded 4 x 4 ("Hummer" or "Chelsea Tractor" as we know them affectionately over here), rated at 1.2 miles, five times per week during term time, was 200 kg (0.2 tonnes). Communications emissions, including from computing, amount to 0.1 tonnes (100 kg), and for example mobile phone chargers accounted for up to 70 kg of CO2 per year. It is to be noted that sending letters represented only 0.01 kg!

Friday, December 08, 2006

Global Warming Melts the Alps.

In a previous posting, "Smoking Gun Found for Atmospheric CO2", I pledged my conversion to the belief that climate change is happening. This was a result of having visited the Bernese Oberland in Switzerland, where I noticed the visible lack of glacial coverage on the Jungfraujoch plateau especially, and more generally on the mountains of that region, including the Jungfrau, the Eiger and the Monch. These names are usually taken to mean respectively "the virgin" (young woman?), "the ogre" and "the monk", and they provide three peaks dominating the landscape of this part of central Switzerland. Having visited and worked in Switzerland regularly over the past twenty or so years, the paucity of ice really struck me, and even in the past few years it is clear that something spectacular has happened to the climate there. In my own direct experience, this is the clearest manifestation of the truth of global warming I have witnessed.
I now read that the phenomenon is a general feature of the Alps, which now display a distinct lack of snow and ice. Wider issues of global warming set aside for a moment, this is very bad news for the skiing industry. In consequence, Alpine resorts across Europe are postponing the start of the season in the hope that more snow will yet fall in this, the warmest winter in 1,300 years, and are shipping in machines to produce tons of artificial snow to cover their flanks. It takes significant amounts of energy to run them, which seems to me a microcosm of the difficulties that will yet be encountered in keeping regions of a warmer world colder by artificial means. One might be tempted to leave the car at home and walk today, in a fit of moral pique that it is all our fault this is happening, but I caution (while taking neither side) that there is evidence presented in a recent book (as noted in my posting "Global Warming Caused by Natural 1,500 Year Cycle") to the effect that the Earth is in a warming trend as a result of natural causes. Whether we are layering an additional burden onto this with our profligate greenhouse gas emissions (probably), and worse, whether the full heating effects of our CO2 legacy are yet to kick in, remain at present in the realms of speculation.
So what indeed did become of the Alpine winter sports wonderland? According to the accepted norm, it is the snow falling in the autumn which sets down the foundations, and this is frozen into place by the ensuing winter frigidity. This year has seen a definite lack of both snow and cold. Last year was a "proper" winter right across the Alps, which gladdened older hearts with a return to the weather of their youth, but this year has seen a grim return to the more recent trend, in which the permafrost line creeps higher and higher year on year. According to a recent study of trends, within 20 years skiing in Italy below 2,000 m may be impossible, because of steadily rising winter temperatures and declining snowfall. That would mean that the sites of some of the most famous resorts, such as Bormio (where last year's World Skiing Championship was held) and the very glitzy Cortina d'Ampezzo, which have their lowest slopes at around 1,220 m, will become redundant: very bad for business!
The still calm of winter nights in many skiing locations is punctuated by the boom of snow-cannons, which fire millions of droplets of water into freezing air to create snow on increasing numbers of pistes around the Alps, but these are no final solution to the problem. Installing the machines requires a capital outlay of 140,000 Euros for each hectare to be covered by snow, and a further cost of up to 5 Euros for each square metre of snow that they produce. Their electricity consumption is prodigious, at 3.5 kWh per cubic metre of snow, and each cubic metre of liquid water (another essential resource) yields only around 2.5 cubic metres of snow. One may immediately deduce that snow occupies a much greater volume than liquid water, and that such snow has a density of 1/2.5 = 0.4 tonnes per cubic metre. This indeed is true, if the snow is slightly compressed - freshly fallen snow has a density of only around 0.1 tonnes per cubic metre, meaning that 90% of fresh snow is actually "air"!
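As a rough illustration of what this means per piste, here is a small Python sketch; the 3.5 kWh per cubic metre and the 2.5:1 snow-to-water volume ratio are the figures quoted above, while the piste area and snow depth are my own assumed example values:

# Energy and water needed to cover a piste with machine-made snow.
energy_kwh_per_m3_snow = 3.5      # quoted electricity consumption
snow_m3_per_m3_water = 2.5        # one cubic metre of water -> ~2.5 m3 of snow
snow_density_t_per_m3 = 1 / snow_m3_per_m3_water  # ~0.4 t/m3 (compressed snow)

# Assumed example: a 1 hectare piste covered to a depth of 30 cm.
piste_area_m2 = 10_000
snow_depth_m = 0.3
snow_volume_m3 = piste_area_m2 * snow_depth_m

energy_kwh = snow_volume_m3 * energy_kwh_per_m3_snow
water_m3 = snow_volume_m3 / snow_m3_per_m3_water

print(f"Snow needed: {snow_volume_m3:,.0f} m3 (density ~{snow_density_t_per_m3} t/m3)")
print(f"Electricity: {energy_kwh:,.0f} kWh;  water: {water_m3:,.0f} m3")
# -> 3,000 m3 of snow, ~10,500 kWh of electricity and ~1,200 m3 of water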
In addition to the financial costs and the resources, it is a fact that the snow machines only work if the temperature is at least 4 degrees C below freezing. Hence those already installed have a potentially limited lifetime if predictions are true that winter temperatures in the Val d'Aosta region will increase by about 2 degrees C by 2050. Mmm, the whole world is predicted to be quite a bit hotter by then, though by how much exactly remains to be seen, and predictions from computer models vary substantially among themselves. Playing "Devil's Advocate" for a moment, I wonder what the situation will be if the North Atlantic Thermohaline circulation (Atlantic Conveyor) has slowed enough that northern Europe is by then getting to grips with the fingertips of the next ice age?
We shall await the outcome of global warming (which is predicted either to cook on to some unknown heat or suddenly - perhaps over as little as 10 - 20 years - snap into the next ice age), hoping the consequences are not too catastrophic for our descendants, and, I hope, sensibly preparing for what changes may come. Meanwhile, skiing resorts in France, Germany, Switzerland, Italy and Austria are severely lacking in snow, an observation history may record as one of the early and tell-tale signs that the climate was indeed changing, while pondering what, if anything, was done to prepare for the consequences.

Wednesday, December 06, 2006

Sky Falls in on Global Warming.

According to a team of Indian scientists, while CO2 and other greenhouse gases released into the atmosphere are causing the Earth's surface to warm, they are simultaneously causing the upper atmosphere to cool. I recall reading something about six years ago to this effect, that the absorption of infra-red radiation ("heat") by CO2 present in the lower atmosphere actually restrains it from heating the stratosphere, causing the latter to cool. For ready-reckoning, the surface of the Earth is immediately blanketed by a "boundary layer" which extends to an altitude of about 2 km, above which lies the troposphere extending upwards to anywhere between 7 kilometers (km) at the poles and 17 km at the equator. Then there is the stratosphere, which goes up to around 50 km. Above this is the mesosphere, and finally the thermosphere which begins at around 85 km. The atmosphere is extremely rarefied at such altitudes, and the 1000 K temperatures that may be estimated for individual air molecules, do not correspond strictly to a condition of thermal equilibrium since the average distances ("mean free paths") for collisions between molecules are of the order of kilometers, and so the facile energy-transfer in the "thermal-bath" assumed in physical chemistry is now rather inefficient.
The different regions of the atmosphere are characterised by their differing temperatures. As one moves away from the surface of the Earth, the temperature initially falls, in the manner of moving one's hand increasingly upward from an electric hotplate. This can be seen on the drop-down panels that are common in passenger aircraft now, which read such parameters as outside temperature, altitude and speed. I think these things fascinate me more than non-scientist travellers, but I have struck-up some very pleasant conversations with fellow passengers, based around these readings ... so science can provide good chat-up lines! For example on one flight to the U.S., at 42,000 feet (13 km) I looked out of the window and slightly "upward", and realised that the blue-black canopy I could see some way above us was in fact the "ozone layer". I explained this to my travelling companion, who knew that the loss of this was "the reason why she and her friends couldn't go sunbathing for so long these days in California"! She was a nice lady and as usual it is interesting to compare notes with someone from the other side of "The Pond" as to how our lifestyles differ.
If you go high enough (say in Concorde, before it was withdrawn), at the top of the troposphere (called the "tropopause") the temperature is seen to increase from the (minus) 60 degrees C to which it has fallen in consequence of decreased radiative heating by the surface. This is a result of entering the lower region of the ozone layer, which absorbs ultraviolet (UV) light from the Sun and converts this to heat by collisions between molecules that have been initially excited by the UV energy. The gas pressure, though low (of the order of a few millimetres of mercury; ground-level pressure is 760 mmHg), is still high enough that energy is effectively redistributed through the gas. The process acts as a "shield" to protect life on Earth from UV which would otherwise harm us, e.g. give us skin cancer, which is on the increase in some parts of the world, such as California, where sun-worshipping lifestyles are popular. The region of "heated gas" is still fairly cool, but warmer than it would be without the ozone and warmer than the gas below it, so that it acts as a "lid" on the troposphere and keeps most of the gases contained there from diffusing (convecting) upward, hence maintaining its unique chemistry.
The stratosphere is nonetheless heated to some extent by radiation emitted from the Earth's surface, but as levels of CO2 (and other greenhouse gases, such as methane and water vapour - let's not forget that one, as the Earth warms-up!) increase, more of the heat is absorbed and retained by the troposphere, and so less gets further up, meaning that the stratosphere cools. This encourages the formation of clouds especially in the polar regions, upon the surfaces of which ozone molecules are actually decomposed, thus contributing to a thinning of the ozone shield. Thus it is thought that there may be a link between global warming and ozone loss.
The new study shows that the upper atmosphere is cooling much faster than the surface of the Earth is warming: in the last three decades it has cooled by somewhere between 5 and 10 degrees C, while the surface has warmed by just 0.2 - 0.4 degrees C. They are making much of this, but surely, just by thinking of the Earth and its atmosphere in terms of an engine which transfers heat between regions of differing density, this is not surprising. If more heat is absorbed in denser regions lower down, it is effectively being taken from those more rarefied regions higher up, and so of course they will get relatively colder; as there is less "up there" to take the heat from, the effect will be manifested as a greater fall in temperature.
The study refers specifically to the region between 50 and 100 km above the Earth's surface, which is also known as the ionosphere and is critical to long range satellite communication. The region is physically shrinking (gases contract when they are cooled) because of the reduction in temperature, and it can be said literally that "the sky has fallen", since the upper level of the atmosphere is now 8 to 10 kilometres lower than it was thirty years ago. It is feared that these changes to the ionosphere could lead to a deterioration in the quality of short wave radio reception. It has also been suggested that solar panels used to power on-board satellite systems might become increasingly degraded by high energy particles from space, which can more easily penetrate a thinner ionosphere. I think the latter concern is the less compelling of the two, but only time will tell. However, geostationary satellites, which are used in relaying signals for communications and television broadcasts, are parked very much higher, at about 36,000 km (and GPS - Global Positioning System - satellites orbit at around 20,000 km), and should not therefore be affected.

Monday, December 04, 2006

Coal May be Crowned King after all!

Further to my recent posting "Shall Coal be Crowned King?", Mcrab posted a couple of very much appreciated "comments", to the effect that there may be a lot more coal than I had been led to believe! The 220 million tonne figure for U.K. coal reserves on which I based my calculations for these islands only represents the amount of coal that is currently in the holdings of the U.K. mining industry. Prompted by his suggestion that the UK may in fact be sitting on 190 billion tonnes of coal, I managed to dig out the following text from a 1991 parliamentary transcript, which does indeed place the subject of coal reserves in quite a different light:

"The (British Coal) Corporation's estimate of coal in place in the United Kingdom (that is in seams over 60 centimetres thick and less than 1,200 metres deep, minus coal which has already been worked) is 190 billion tonnes." It goes on to detail technically recoverable reserves. It is valuable, not least because it parallels much of the important information available in the Brown Book on the oil and gas sector...

I commend the order to the House.

Question put and agreed to.

Resolved,

That the draft Coal Industry (Restructuring Grants) Order 1991, which was laid before this House on 15th April, be approved."

So, let us reconsider the case for coal, and where it might be found. Coal occurs in the form of layers (‘seams') in sequences of sedimentary rocks. Almost all onshore coal resources in the UK occur in strata of the Carboniferous system, and coals of this age also extend into the North Sea basin. Individual seams may be up to 3.5 m in thickness although, exceptionally, thicker seams also occur. Resources of coalbed methane (CBM) that may be developed in the future are also contained exclusively in Carboniferous coals. In Northern Ireland lignite, or "brown coal", of Tertiary age is a significant resource, which could be used in power generation. In Great Britain coals of Mesozoic and Tertiary age are insignificant onshore but occur over large areas, and in considerable thicknesses, in the North Sea basin and other offshore areas.

It is thought that 45 billion tonnes of the UK's reserve can be extracted using current technology. We burn 62 million tonnes of coal each year in the UK, and produce only 20 million tonnes of that ourselves. Reckoning that three tonnes of coal can produce one tonne of synthetic oil by coal liquefaction, and that we use around 72 million tonnes of oil annually (57 million tonnes for transportation and the rest as a chemical feedstock for industry), which could, in principle anyway, be got from 72 x 3 = 216 million tonnes of coal, our entire coal requirement comes out at 62 + 216 = 278 million tonnes per year. Hence, as a rough estimate, there is enough for 45,000 million/278 million = 162 years... and if we can get at the entire 190,000 million tonnes, we are set for 190,000/278 = 683 years. The time allowed us by the first 45 billion tonnes ought to be sufficient to develop more sophisticated means with which to extract the rest of the 190 billion tonnes! I am just doing simple sums, in order to make the point that running out of coal is not the issue: we have plenty of it, and the considerations to be addressed are rather how cleanly the resource is used, and the huge mining and processing infrastructure that will need to be introduced from scratch. In other words, we will need to dig a completely new network of mines. It is thought possible to extend the existing mines to garner around one billion tonnes of coal in total, and we are after much more than that.
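As a quick check on those sums, here is the same arithmetic as a few lines of Python; the figures are simply those quoted above, and the three-tonnes-of-coal-per-tonne-of-oil ratio is the rough rule of thumb I have used throughout:

# Rough UK coal lifetime estimate, using the figures quoted in the text
coal_burned_mt = 62          # million tonnes of coal burned per year in the UK
oil_used_mt = 72             # million tonnes of oil used per year
coal_per_tonne_oil = 3       # rough coal-to-synthetic-oil ratio

total_coal_demand_mt = coal_burned_mt + oil_used_mt * coal_per_tonne_oil   # 278 Mt/year

recoverable_reserve_mt = 45_000    # 45 billion tonnes, expressed in million tonnes
total_reserve_mt = 190_000         # 190 billion tonnes, expressed in million tonnes

print(recoverable_reserve_mt / total_coal_demand_mt)   # about 162 years
print(total_reserve_mt / total_coal_demand_mt)         # about 683 years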
It occurred to me that this new mining network would probably cover quite a large underground area. As usual I shall use rough numbers (since that is all there is, also as usual!). If coal seams must be at least 0.5 metres thick to be worth considering, and there are "nugget" seams up to 3.5 m thick, then an average thickness of 2 m overall might be a reasonable estimate. Taking an average density for coal (it varies quite a bit in fact) of 1.4 grams/cubic centimetre (1.4 tonnes per cubic metre = 1.4 t/m^3), gives:

2 m x 1.4 t/m^3 = 2.8 t/m^2, i.e. 2.8 tonnes under each square metre of area. Hence 190 x 10^9 t / 2.8 t/m^2 = 67.9 x 10^9 m^2, which is about 68,000 square kilometres (km^2). By way of scale, this is (68,000/244,000) x 100 = 28% of the area of the UK mainland; not that the workings would all be under this visible island home, as an appreciable amount would be dug into the seams under the North Sea. It is still a big area, nonetheless!
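The area estimate in the same sketch form (the 2 m seam thickness and 1.4 t/m^3 density are the assumed inputs):

# Rough area of coal seams needed to hold 190 billion tonnes
seam_thickness_m = 2.0        # assumed average seam thickness, metres
coal_density_t_per_m3 = 1.4   # typical coal density, tonnes per cubic metre
reserve_t = 190e9             # 190 billion tonnes

tonnes_per_m2 = seam_thickness_m * coal_density_t_per_m3   # 2.8 tonnes per square metre
area_m2 = reserve_t / tonnes_per_m2                        # ~6.8e10 m^2
area_km2 = area_m2 / 1e6                                   # ~68,000 km^2

uk_mainland_km2 = 244_000     # rough UK land area used in the text
print(area_km2, 100 * area_km2 / uk_mainland_km2)   # ~67,900 km^2, ~28% of the UK area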

Most coal-fired electricity generating power stations are quoted at an efficiency of around 35%, or about a third of their thermal power (i.e. the heat that could be got from the coal assuming 100% efficiency). The factor of "a third" applies to nuclear power stations too, and so Sizewell B at 1,200 MW (that is electricity production) actually produces three times that in terms of thermal power (heat), and so is nearer 3,600 MW. This is important in calculating how much uranium is needed to fuel a nuclear power plant for a year, say... As Mcrab points out, there are integrated systems being evaluated with higher efficiencies than this, and the general consensus I can find for IGCC (Integrated Gasification Combined Cycle) plants, which gasify the coal and burn the gas from it as the fuel to drive the electric turbines, is a figure closer to 50% (up from "a third" to "a half"). This looks good, but the plant is much more sophisticated to design and construct, and hence more expensive.
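To make the thermal-versus-electrical distinction explicit, a small sketch using the efficiencies quoted above:

# Electrical output versus thermal (heat) input for a given efficiency
def thermal_power_mw(electrical_mw, efficiency):
    """Heat that must be released from the fuel to deliver a given electrical output."""
    return electrical_mw / efficiency

print(thermal_power_mw(1200, 1/3))    # Sizewell B: ~3,600 MW of heat for 1,200 MW of electricity
print(thermal_power_mw(1000, 0.35))   # conventional coal plant: ~2,860 MW of heat per GW of electricity
print(thermal_power_mw(1000, 0.50))   # IGCC plant: ~2,000 MW of heat per GW of electricity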
A typical cost for building a coal-fired station is $1,500 per kW of generating capacity. Hence a typical 1 GW (1,000 MW) station generates 1 million kW and would therefore cost one million x $1,500 = $1.5 billion to build. Figures for IGCC plants are really still on the drawing board, but I did find one set of costings that implied an initial investment of 20 times that; however, this would almost certainly fall as the technology became more widely adopted. Another advantage of the technology is that some of the gas could be extracted for conversion to oil by "liquefaction". Heat can also be drawn off, rather than wasted, at various stages during the operating cycle and used for ancillary electricity generation.
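And the capital costs, again only a back-of-envelope sketch; the 20-fold IGCC multiplier is the single early costing I came across, not an established figure:

# Rough capital cost of a 1 GW coal-fired station
capacity_kw = 1_000_000        # 1 GW expressed in kW
cost_per_kw_usd = 1_500        # quoted figure for a conventional coal plant

conventional_cost = capacity_kw * cost_per_kw_usd   # $1.5 billion
igcc_cost = 20 * conventional_cost                  # the one (early, uncertain) IGCC costing found

print(conventional_cost / 1e9, igcc_cost / 1e9)     # 1.5 and 30.0 ($ billion)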
Even environmentalists are warmer in their regard of IGCCs, since they are more readily adapted to capture CO2 for its potential long-term sequestration. My own view is that "sequestering" (i.e. "locking up") CO2 in the wrong place could be a legacy of global climate disaster to future generations, e.g. if for some reason - an earthquake say - the store of CO2 were to be suddenly released at a later date. That could really screw-up climate models, and their forecasts! Possibly such consequences of geoengineering could be factored in, on a betting-odds basis. However, the energy costs of CO2 capture and disposal are close to the gains made in moving from 35% to 50% efficiency by opting for IGCC systems in the first place. So if we are to go down this road, and arguably we have no choice in order to prevent the "rate of increase" of CO2 levels from increasing, we will be running harder to maintain the same pace.
In my very first posting, back in December 2005, I discussed the need for a suite of energy sources, and I feel it is most likely that we will see more new coal powered stations, more nuclear power, and new technologies being implemented as time goes on. It is sobering to note that China opens the equivalent of 4 new typical UK power stations every week! - none of which are of a more efficient (IGCC or any other) design. If coal liquefaction becomes increasingly important (and I have taken the extreme of providing 100% of our oil from coal in my reckoning), it is likely that some very attractive integrated plants will be introduced (check Mcrab's figures in his comments to my "Shall Coal be Crowned King" posting), to provide both heat (electricity) and hydrocarbons, but mostly (cheaper!) tried and tested all-out coal liquefaction plants based on Fischer-Tropsch chemistry, as used extensively in South Africa by Sasol. It is always money that wins, and introducing new technologies which offer greater energy efficiency or renewables on the large scale will only happen if and when the economics of the "energy-game" so dictate!

---------------------------------------------------------------------------------------------
In regard to how much coal the world has as a reserve, estimates vary, but Russia has 6 trillion and China 1 trillion tonnes, so a world reserve of 10 trillion tonnes is probably reasonable. However, only 1.2 trillion is accessible to current or readily adaptable mine workings, and so an enormous coal-scrabbling investment will be required to scratch it out of the ground, with all kinds of environmental issues attending. To put the quantity there in perspective, I note that in 2004 the countries of the world used 3,770 million tonnes of oil. To produce the equivalent (and crude oil and syn-oil are NOT chemically equivalent) synthetically from coal, we would need 3,770 million x 3 = 11,310 million tonnes of coal, plus the 5,540 million tonnes of coal already used each year; i.e. we would need to roughly treble our coal production to meet this total demand, on top of the fuel required to actually run the processes themselves. However, if we are on-target about how much coal there is, we have:

10 trillion tonnes = 10 x 10^12 tonnes / [(11,310 + 5,540) x 10^6 tonnes per year] = 593 years' worth.
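Or, as a sketch with the assumptions spelled out:

# Rough world coal lifetime if coal also replaces all crude oil
world_reserve_t = 10e12              # assumed 10 trillion tonne world reserve
coal_for_synthetic_oil_mt = 11_310   # 3,770 Mt oil/year x 3 tonnes of coal per tonne of oil
coal_burned_now_mt = 5_540           # current world coal use, million tonnes/year

annual_demand_t = (coal_for_synthetic_oil_mt + coal_burned_now_mt) * 1e6
print(world_reserve_t / annual_demand_t)   # about 593 years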

This is a similar value to the 683 years I worked out for the UK. There is a hell of a lot more to it than this though.... My point is that shortage of coal in the Earth per se is NOT at issue, it is getting at it, extracting it and the attendant environmental aspects (dangers!) of this and how it is finally used that are.

Friday, December 01, 2006

Polonium 210, Russian Spies and Safe Tobacco.

Probably for the first time ever, the element polonium-210 has hit the headlines, in connection with its use as a poison to kill Alexander Litvinenko, an outspoken critic of the Putin regime. Polonium was discovered by Marie Curie (and her husband Pierre), among the products heroically obtained from extracting four tons of uranium ore (pitchblende), working for four years in an unheated shed, boiling the rock in vats of concentrated acids. At that time, only uranium and thorium were known to be radioactive, and the far more intensely radioactive radium and polonium were revealed and isolated by the procedure of fractional crystallization, probably the most laborious in chemistry. Marie named polonium after her native Poland. Polonium-210 has a half-life of about 138 days, and emits five thousand times as much radiation as an equal quantity of radium does. Put differently, one milligram of polonium-210 (210Po) emits as much radiation as 5 grams of radium. Polonium is an alpha emitter (i.e. when the nucleus undergoes radioactive decay it releases alpha particles), and so it must be inhaled, swallowed or injected to exert any toxic effects, since alpha particles are stopped by the skin and do not penetrate into the body. Only one decay out of every 100,000 results in the emission of a gamma ray along with an alpha particle, while the rest are pure alpha decays. This makes the material more difficult to detect than many other radioactive isotopes, and it is most sensitively done using an alpha spectrometer to measure alpha particles rather than by measuring gamma rays.
The decay of polonium releases a considerable amount of energy, and half a gram of the material will quickly reach a temperature of 500 degrees Celsius. A quantity of 210Po equal to just a few curies (one curie is the activity of one gram of radium, and hence corresponds to around 0.2 milligrams of 210Po) is observed to emit a blue glow, from its alpha particles exciting the surrounding air molecules. One gram of 210Po produces 140 Watts of power, and accordingly it has been used as a heat source to power thermoelectric cells in satellites. Because polonium is a highly radioactive and toxic element it is very difficult to handle, and even microgram (millionths of a gram) quantities of 210Po are extremely dangerous, requiring specialized equipment and strict containment procedures. Alpha particles emitted by 210Po will readily damage internal tissue if polonium is ingested, inhaled or injected. The maximum permitted body burden for ingested polonium is reckoned at just 1,100 becquerels (0.03 microcuries, i.e. 30 billionths of a curie), which is equivalent to a particle weighing only 6.8 x 10^-12 grams (6.8 millionths of a microgram). Weight for weight, polonium is 250 billion times as toxic as prussic acid (hydrogen cyanide).
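For the curious, figures like that 6.8 x 10^-12 gram body burden follow from the standard activity relation A = λN. Here is a minimal sketch; the 138-day half-life and mass number 210 are the published values for 210Po, and the rest is arithmetic:

import math

# Specific activity of polonium-210 and the mass corresponding to the permitted body burden
half_life_s = 138.4 * 24 * 3600      # half-life of 210Po, about 138 days, in seconds
atomic_mass_g_per_mol = 210.0        # grams per mole of 210Po
avogadro = 6.022e23                  # atoms per mole

decay_constant = math.log(2) / half_life_s                                 # per second
specific_activity_bq_per_g = decay_constant * avogadro / atomic_mass_g_per_mol

body_burden_bq = 1_100               # permitted body burden quoted above, in becquerels
mass_g = body_burden_bq / specific_activity_bq_per_g

print(specific_activity_bq_per_g)    # ~1.7e14 Bq per gram
print(mass_g)                        # ~6.6e-12 g, i.e. a few millionths of a microgram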
I discovered only the other day that there is a relation between polonium and lung cancers caused by smoking cigarettes. Indeed, 210Po is the one individual component of cigarette smoke shown to cause cancers by inhalation. In studies of laboratory animals, lung tumours were found to develop at levels far lower than the dose received by a heavy smoker. The history is that lung cancer rates among men climbed from being a rare disease that occurred at a rate of 4 in every 100,000 people per year, to 72/100,000 by 1980, making it the number one fatal cancer, despite an almost 20% decrease in levels of smoking during that same period. This coincided with a tripling in the levels of 210Po found in American tobacco, a result of tobacco growers using superphosphate fertilizers. Calcium phosphate ores tend to concentrate uranium, which decays to radon, which then breaks down to 210Po (a "daughter product"). Indeed, soils associated with phosphate ores have uranium concentrations of 50 - 1000 parts per million (ppm), far in excess of the 2 - 3 ppm usually found. The 210Po becomes attached to the sticky hairs on the underside of tobacco leaves, and when the resulting cigarette is smoked, the intense heat of the burning tip volatilises it, and so it is inhaled. Apparently the filters, while effective against chemical carcinogens, do not hold back the radioactive components.
The lungs of a heavy smoker (which may mean only 15 cigarettes per day - I used to smoke a lot more than this until I gave up twenty years ago, and un-filtered cigarettes at that!) become coated with a radioactive lining which irradiates the sensitive lung tissue. Smoking two packs (40 a day) gives an alpha particle radiation dose of around 1,300 millirems per year, over six times the dose received by the average American from breathing in radon (200 millirems). Furthermore, 210Po is soluble in body fluids and is thus percolated through every tissue and cell, giving levels of radiation much higher than those received from radon. The circulating polonium causes genetic damage and premature death from diseases that are reminiscent of those encountered by the early radiation pioneers: e.g. cancers of the liver and bladder, liver cirrhosis, leukemia, stomach ulcers and cardiovascular conditions. Marie Curie herself died of cancer.
C. Everett Koop, formerly Surgeon General of the United States of America, has stated categorically that radiation, rather than tar, accounts for at least 90% of all smoking-related lung cancers. Now, that is a huge statistic: nine cases out of ten! Indeed, the Centers for Disease Control has concluded that: "Americans are exposed to far more radiation from tobacco than from any other source." Although 30% of all cancer deaths can be related to tobacco smoking, the National Cancer Institute nonetheless has no active funding for research into radiation from smoking as a cause of cancer. This may be in order to prevent panic amongst the public over radiation, but surely the solution is simple: if people are not going to quit smoking (which is the absolute safeguard), then wouldn't simply growing the plants on soil fertilized "organically", say, rather than using phosphate fertilizers, solve much of the problem?
Once again, I imagine the underlying reason is economics: that tobacco, like most plants, grows better when "pushed" by chemical fertilizers, and greater crop yields mean greater profits. It is as simple as that, and the lives of smokers are expendable, since there is always a new generation (or new market in the developing world) to take their place in the ranks once they have been cut down. "Safer", "polonium-free" tobacco would cost money, and is therefore unattractive both to growers and to cigarette manufacturers.

Wednesday, November 29, 2006

Peak Oil Unlikely in the Short Term?

"Experts" have told the Energy 2030 conference held in Abu Dhabi last month that near-term peak oil is not going to happen. This flies in the face of conclusions made by a large number of analysts, which I have documented in these postings. For example, Matthew Simmons, Chairman of Simmons and Company International, said last month that global oil production may have already reached its peak in December 2005, although he mollifies this with the caveat that continued monitoring of production is necessary to be sure that the peak has indeed occurred. My understanding is that production is down from all major fields, which does not urge the view that the peak is comfortingly distant. Some geologists think that even if we include the most favourable efforts in exploration and the consequent discovery of new fields like the Gulf of Mexico, the world production of oil will peak during 2010 - 2020 at somewhere between 95 and 110 million barrels a day, and then turn smoothly into an irreversible, steady decline.
Projected world demand for petroleum liquids indicates an increase from approximately 85 million barrels per day in 2005 to 115 million barrels daily in 2030, according to ExxonMobil estimates. This can only be sustained if there is enough oil actually in the ground to be extracted, and if it can be recovered at the necessary rate. Dr Richard Vierbuchen, who is the vice-president of the Caspian/Middle East region for ExxonMobil, said that supply can adequately meet the increasing demand. I guess he would say that though, wouldn't he? He stated further that "estimates of the liquids resource base have been increased over the last 50-100 years, and are likely to continue to do so." Now why is that exactly? Well, he says that "Forecasts of an imminent peak in global production appear to underestimate major sources of growth in the resource base, particularly improved recovery and resources made economic by new capabilities." I presume the latter is a veiled allusion to "unconventional oil", for example that recovered from oil sands ("tar sands" in reality), or produced by coal liquefaction. He then went on to attack the fundamental analysis made by M. King Hubbert in 1956, stating that it is not readily applicable to forecasting global liquids production, while conceding that it did work in predicting that US Lower-48 oil production would peak in 1965-70, and it did in fact peak in 1971.
His criticism of the Hubbert method is that it cannot account for an increasing resource base, and this much is true. In effect, what Hubbert did was to estimate how much oil was in the ground, how much had been drawn off and hence how quickly the peak in production was likely to be arrived at. One consequence of his analysis was that there is a lag of about 40 years between the peak in oil discovery and the peak in oil production. Hubbert's method was based on the number of squares on a sheet of graph paper, representing the volume of oil in the reserve, which must be fitted under a curve representing the rate of extraction, and in its simplest form is "bell-shaped", so that production leading up to the peak is a symmetrical mirror-image of the production after the peak. Of course, it will never be so simple, as extracting oil beyond the peak point (when the reserve is half empty) is a more difficult matter than when the first well is sunk into eager, virgin territory.
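To illustrate the shape of the argument (and only the shape - the parameters below are assumed, illustrative values, not a forecast), the cumulative production Q in Hubbert's treatment follows a logistic curve, and the annual production, its derivative, is the familiar symmetrical bell:

import math

# Toy Hubbert (logistic) production curve - illustrative parameters only
URR = 2.2e12        # assumed ultimate recoverable resource, barrels
peak_year = 2010    # assumed peak year
steepness = 0.06    # assumed growth-rate constant, per year

def cumulative(year):
    """Cumulative barrels produced by a given year (logistic curve)."""
    return URR / (1 + math.exp(-steepness * (year - peak_year)))

def annual(year):
    """Annual production, the derivative of the logistic: a bell curve symmetric about the peak."""
    q = cumulative(year)
    return steepness * q * (1 - q / URR)

# With these assumed numbers the peak rate comes out near 90 million barrels/day
for y in (1970, 1990, 2010, 2030, 2050):
    print(y, round(annual(y) / 365 / 1e6, 1), "million barrels/day")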
He has a point, but the issue of peak oil is not that we will run out of oil altogether; it is that cheap oil will run out, and the price will thereafter increase to some imponderable level (with consequences that may be pondered only too clearly). The central and underpinning feature of any prediction is the quantity of oil there really is down there; further issues are how readily that may be extracted, which depends on the precise geology of particular regions, and the quality of that oil, in terms of the refining that is necessary before it can be used as a fuel.
According to Vierbuchen, "although annual global production has exceeded annual discoveries since the early 1980's, annual global reserve additions still exceed annual production because of reserve growth in increasing fields." I think he is referring to methods of improved oil extraction, whereby previously unyielding wells can be made to yield, e.g. by blasting steam into them. Such enhanced recovery methods are believed to damage the well-geology, and it is thought that the Saudi reserve may be partly inaccessible because the damaged rock will hinder extraction of the oil. However, this is simply getting more of what is contained out, not increasing the volume of the reserve. He also refers explicitly to oil from gas, coal, very heavy oil, bitumen and shale, but as I have pointed out before, these are much harder to convert into oil, in terms of the energy that needs to be invested, potentially reducing the EROEI to an unfavourable ratio (i.e. it takes so much energy to get the stuff out that it isn't worth it for the energy actually recovered from the final oil product).
Now, in support of this, Michael Huston, who is a professor at the University of Houston (and also a managing partner of a petroleum consulting firm), reckons that peak oil will not strike for "at least the next three centuries". This is by far the most optimistic estimate I have seen, but let's see what it means in reality.

We are now extracting 85 million barrels a day x 365 = 31 billion barrels a year. If this increases to the projected daily production of 115 million barrels by 2030, the world will draw off 42 billion barrels in that year alone. Undoubtedly, some of this will be in the form of "unconventional oil", and as this is a rough estimate I shall assume that over time, on average, 100 million barrels are extracted daily, or 36.5 billion barrels each year. In "three centuries", that means 300 x 36.5 = 10,950 billion barrels... or 11 trillion barrels (roughly).
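The same rough sums in sketch form; the flat 100 million barrel per day average is my own simplifying assumption, and the 1.2 and 3.74 trillion barrel reserve figures are the ones I discuss next:

# Rough cumulative extraction over "three centuries" at an assumed flat average rate
avg_daily_mb = 100                          # assumed average production, million barrels/day
annual_barrels = avg_daily_mb * 1e6 * 365   # 36.5 billion barrels/year

print(300 * annual_barrels / 1e12)          # ~10.95 trillion barrels over 300 years

def years_of_supply(reserve_barrels):
    """How long a given reserve lasts at the assumed average extraction rate."""
    return reserve_barrels / annual_barrels

print(years_of_supply(1.2e12))    # ~33 years for the 1.2 trillion barrels thought to remain
print(years_of_supply(3.74e12))   # ~102 years even on the optimistic CERA-style figure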
Now my understanding is that there are 1.2 trillion barrels left in the ground (equal to what has already been used, and so we are at that half-way point), which is enough for say 32 - 38 years. I posted an article called "Peak Oil - all Bunkum?" recently, which refers to another optimistic reckoning that there are 3.74 trillion barrels worth to be had, but that seems to include everything - crude oil from wells, and all manner of synthetic oil. This was based on a report entitled "Why the Peak Oil Theory Falls Down: Myths, Legends, and the Future of Oil Resources," produced by Cambridge Energy Research Associates (CERA), a consulting company based in Cambridge, Massachusetts, which is only available for $1,000, which I am not prepared to pay.
So, even 3.74 trillion barrels is only enough for 100 years (with a half-way point in 50 years, if projected production levels are met and maintained), and as I have pointed out, to produce most of that is going to involve huge efforts to provide the necessary infrastructure for coal and gas liquefaction, bitumen extraction etc., and of course the energy to run it. My feeling is that we will manufacture a lot more oil, mostly from coal, and that we will burn more coal per se for direct heating and as the means to generate ever-increasing amounts of the world's electricity. I am uncomfortable about these very high estimates of what might be produced in terms of oil, simply because they give the appearance that getting it will be easy, along the lines of the kind of oil production we are familiar with, and this is deceptive, or indeed a deception. Probably these different "kinds" of oil should not all be reckoned together on the same energy balance sheet. I doubt it is any coincidence that those with the most to profit from acting as though the "business as usual" scenario can go on for decades or centuries seem to be the most blatant in their denials that peak oil is imminent. Perhaps they will be living protected behind armed defenses when they are proved unequivocally wrong, and the majority of human civilization collapses.
Peak oil is about running out of the cheap oil we have become accustomed to, and that certainly is running out, whether or not we gain access to other, more costly won versions of the commodity in the future. In any event, human societies will find themselves having to adapt to the fact of less readily available oil.

Monday, November 27, 2006

Pouring Water on Chinese Coal Liquefaction.

China has put the brakes on, so to speak, its coal-liquefaction projects. The main fear is that there could be an over-expansion of the coal-chemicals industry. To curtail development of the process, the Chinese government has legislated not to approve coal liquefaction plants with a processing capacity of under 3 million tonnes of coal. Since such a plant would require an outlay of 30 billion yuan (about $4 billion), very few enterprises could enter this particular market. China is the world's largest coal producer and generates 75% of its energy from burning coal. Like the rest of the world, China is looking more closely at synthetic oil, in view of hikes in the price of imported oil. The plan is that China will rely mainly on its own oil production, in the region of 180 - 200 million tonnes per year, and imported oil (95.8 million tonnes in the first eight months of this year), with increasing future provision of synthetic oil from coal. It is reported that 30 projects are under detailed planning or are at the feasibility study stage, yielding a total production capacity of 16 million tonnes, at an investment in excess of 120 billion yuan ($15 billion). Insiders predict that by 2020, China will produce 50 million tonnes of oil from coal.
Shell Gas and Power Developments BV and the Shenhua Ningxia Coal Industry company (Shenhua-Ningmei) signed a joint agreement to study coal-liquefaction and the technical and commercial feasibility of launching a direct coal liquefaction plant with a daily output of 70,000 barrels (about 10,000 tonnes) of oil products and chemical feedstocks. South African based Sasol has also joined as a collaborator with the Shenhua group, to build two coal-to-liquids plants using Sasol's proprietary Fischer-Tropsch technology; Sasol is the world leader in producing fuel from coal. The Fischer-Tropsch process was developed in Germany and kept that country in fuel during the oil-blockades of WWII; it has also fuelled South Africa during spates of political sanctions, and remains the principal source of oil in that country. One may conclude that the technology has a clear future in breaking the dependency of individual countries on imported oil, mainly from the Middle East... so long as there is enough coal available as a feedstock.
Coal liquefaction consumes massive quantities of water. Although part of the restriction to only "big" plants is intended to spread coal-to-oil production across the country, many regions of China, especially in the north and northwest, are already extremely short of water. Significant environmental discharges of effluent gases, waste (i.e. contaminated) water and industrial effluent also attend coal liquefaction processes. The profitability of producing oil from coal depends on the prevailing price of crude oil on the world markets, and since this varies year on year, and it takes up to five years to build a coal liquefaction plant, there is an element of risk as to whether the plant will immediately return money or not. However, once we hit the Peak Oil production point, crude oil will become increasingly expensive. It is reckoned (in China at least) that the technology is viable so long as the world price stays above about $25 a barrel. Personally I doubt it will ever fall to anywhere near that again - it was three times that some time back, and not much less now. Hence, coal liquefaction will be attractive anywhere on economic grounds, even if the environmental picture is less so.
Water as a resource is under pressure in many parts of the world. It is therefore a central issue to estimate whether the water reserve of a region can support any new production processes, without detriment to the environment and to the people (and animals) who live there. It appears to be a twist of nature that many regions that are potentially well provided with means for "unconventional" oil production are relatively short of water. So, there is plenty of coal in parts of Australia, Africa, China, India and North America, where supplies of freshwater are limited. As noted, coal liquefaction is intensive in its use of water, in part to provide the hydrogen atoms to convert coal (mainly carbon) into hydrocarbons, and also to run cooling systems for the plants themselves. We hear much, too, about producing oil from the Alberta Oil Sands (the name "tar sands" is more accurate), by cracking the bitumen they contain into oil for use as a fuel. The latter process relies heavily on gas to "crack" the bitumen into liquid hydrocarbons, but it also uses a lot of water. Indeed, water allocations made by Alberta to oil sands projects amount to 359 million cubic metres per year, which is twice the quantity of water used by the city of Calgary. The whole enterprise threatens the water supply (in terms of quantity and quality) to Saskatchewan and the Northwest Territories through the Mackenzie River system.
Most of the oil sands operations draw their water from the Athabasca River, a tributary of the Mackenzie, and most of that water is not returned to the river. Strip mining of the oil sands uses between 2 and 4.5 cubic metres of water to extract one cubic metre of synthetic crude oil. The water becomes heavily polluted and only 10% goes back into the river, with the remainder being stored in enormous holding ponds, among the biggest structures ever constructed by humans on Earth. The oil sands yielded over one million barrels per day in 2005, but it is believed that they may be exhausted by 2050. The story here is similar to conventional crude oil production, in the sense that initially the resource is relatively easy to extract from close to the surface, but the process becomes increasingly demanding as deeper levels are accessed. This is true of coal production too. The Energy Returned on Energy Invested (EROEI) for producing oil from oil sands is currently about 3, which is just about viable so long as there is sufficient gas available for the purpose. The point must come, and long before the resource is exhausted, when the investment of gas, water, environmental clean-up, etc. no longer justifies the return.
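As a rough sketch of what those ratios imply (the 2 - 4.5 cubic metres of water per cubic metre of oil are the figures quoted above; the conversion of about 6.29 barrels per cubic metre is standard):

# Water demand and net energy for oil-sands production, using the figures quoted above
water_per_m3_oil_low, water_per_m3_oil_high = 2.0, 4.5   # m^3 of water per m^3 of synthetic crude
barrels_per_m3 = 6.29                                    # barrels in one cubic metre of oil

daily_output_barrels = 1e6                               # roughly 1 million barrels/day in 2005
daily_output_m3 = daily_output_barrels / barrels_per_m3

print(daily_output_m3 * water_per_m3_oil_low,            # ~320,000 m^3 of water per day
      daily_output_m3 * water_per_m3_oil_high)           # ~715,000 m^3 of water per day

eroei = 3.0
net_fraction = 1 - 1 / eroei
print(net_fraction)   # only about two-thirds of the energy produced is a genuine gain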
It is a case of making hay while the Sun shines, and finally it may be water that proves itself as the limiting energy resource.

Saturday, November 25, 2006

Washing Machine Spins Nanoparticle Regulations

I have just read Michael Crichton's novel "Prey", which "warns" of, or at least paints a grisly picture of, nano-technology going very badly wrong. Accordingly, I was struck by an article in "Chemistry World" on 24 November 2006, relating to a less catastrophic matter, but which concerns silver nanoparticles and their use in washing machines, since they show bactericidal activity. Nanoparticles are particles of nanometre dimensions (one nanometre (nm) is 10^-9 metres, i.e. 1/1,000,000,000 or one billionth of a metre, or 10 Angstrom Units (A)), and as a working definition are considered to be of a size range 1 - 10 nm. Particles of this size might be expected to show biological activity, since they are dimensionally comparable to proteins in biological agents, e.g. viruses, and there is evidence that they may have a bright future in drug-delivery systems etc. I will not elaborate further here, but there is a corresponding environmental safety issue attending the potential release of nanoparticles. While it is probably unlikely that nanoparticles would self-assemble into a "swarm" which attacks humans, kills and copies them and so on (nobody can write action better than Crichton, even if he does have his critics!), they could prove toxic in some less dramatic way.
Accordingly, the US Environmental Protection Agency is under pressure to regulate commercial products containing silver nanoparticles, but it is not yet clear how precisely this 'knee-jerk reaction', as it has been described, will be enforced.

In a "guilty-'til-proven-innocent" regulation by the EPA, any company marketing a product as containing silver nanoparticles to kill bacteria must provide scientific evidence that the particles pose no environmental health risk. A tricky one indeed. How can that really be "proved"? Methods for determining "nanotoxicology" are in only very early stages of development, mainly as it is difficult to know exactly what to look for. For example, there has been a study of carbon nanotubes (like short little straws made of carbon, a bit like a piece of graphite-sheet rolled back on itself) aimed to prove that they can generate free radicals. Accepting the Free Radical Theory of Disease (an extension of the Free Radical Theory of Ageing), if they were found to have produced radicals, that might be some evidence that we should fear them. However, there was no such evidence found, and moreover, it was concluded that the presence of carbon nanotubes actually diminished the yield of reactive oxygen radicals when present in systems known to generate them. However, we are quite some way from any conclusion than nanoparticles are actually good for you... although they may ultimately be so proven. Who knows? The jury is not so much as "out" as not yet elected.

The decision is the result of legal enmeshings concerning the 'Silver Wash' washing machine, marketed by Samsung as containing silver nanoparticles in order to kill bacteria in clothes. Some US water authorities are afraid that discharged nanosilver particles might concentrate in wastewater treatment plants, killing the bacteria which are meant to detoxify the wastewater. That's a good point, in the sense that a broad-spectrum antibiotic can kill both the nasties and the good bugs in the digestive tract, with well known consequences, before also ending up at a water treatment plant somewhere nearby. In this particular context, nanosilver could be listed among other environmental pesticides, and would accordingly need to be tested under the Federal Insecticide, Fungicide and Rodenticide Act (Fifra). So long as the silver nanoparticles were contained within the washing machine, it could be classified as a 'device', and this exempted it from Fifra. However, in taking the view that some of the particles could actually "escape", the EPA has now reconsidered this decision. As EPA spokesperson Jennifer Wood put it: 'The release of silver ions in the washing machines is a pesticide, because it is a substance released into the laundry for the purpose of killing pests.' So there!

Although this particular washing machine uses silver ions, which may not constitute nanoparticles, silver nanoparticles are used to kill germs in such products as air-fresheners, shoe liners, socks and food-storage containers. In all probability, these products will now have to be tested under the regulations. Silver nanoparticles are also added to bandages to speed healing; but these and other medical applications are regulated by the US Food and Drug Administration, not the EPA. A legal loophole remains for companies that drop anti-microbial claims from their nanosilver products, since it is only products marketed as 'anti-microbial' that will have to be regulated.

Undoubtedly, the new regulation will justify more research into the toxic effects of nanoparticles, but who will pay for it? Will it be government-funded, e.g. through the Research Councils, or will the manufacturers and suppliers of these new technologies bear the burden? I think it most likely that industry will put some money into university labs, say by supporting a few Ph.D. projects, which is a far cheaper option than doing it all in-house at full cost, and the universities are mostly (in the U.K. at least) sufficiently desperate for cash that they will take whatever crumbs might thereby drop their way.