Saturday, December 29, 2012

Radiation Fears Over Fracking?

High concentrations of salts, including those of radium and barium, have been found in the flowback waters from late-stage fracking operations, lending weight to fears over potential groundwater contamination. The amounts of the various salts are greater than those in the water mix used in the fracking operation, and their specific concentrations are consistent with their having arisen from an underground aquifer laid down during the Paleozoic era. Hydraulic fracturing ("fracking") is the process whereby gas and oil are made to flow from impermeable rock, which is broken open with water containing various salts and other materials, under pressures sometimes as high as 15,000 psi (i.e. around one thousand times atmospheric pressure).

Researchers at the University of Pennsylvania analysed samples taken principally from four different sources. These were brines recovered from 40 conventional oil and gas wells in the state; flowback waters from 22 Marcellus gas wells, collected by the Pennsylvania Bureau of Oil and Gas Management; two more samples of Marcellus flowback waters from a previous study; and similar waters from 8 horizontal wells taken by the Marcellus Shale Coalition.

The results showed that the flowback waters were highly saline, to a degree inconsistent with the concentrations of salts in the waters used for the fracking operations. Rather, it appears that these additional elements stem from the Paleozoic era, the earliest of the three geologic eras of the Phanerozoic eon, which lasted from around 541 - 252 million years B.P. The Paleozoic is subdivided into six geologic periods, which in decreasing order of age are the Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. The Paleozoic was a time of dramatic geological, climatic, and evolutionary change; it follows the Neoproterozoic Era of the Proterozoic Eon, and leads on to the Mesozoic Era.

Specifically, the study examined fluids that were brought to the surface within 90 days of fracking. Although the fluid recovered amounted in volume to only about one quarter of that used for the fracking operation, it was found to contain high amounts of a range of elements, most disturbingly radium and barium, washed up from some 8,000 feet below the surface. The latter observation might appear to run counter to the view that groundwater contamination is impossible because fracking is done at great depths, well below the water table.

Attention and concern have so far been given mainly to the chemicals, including corrosive salts and benzene, that are present in the fracking fluid; however, this investigation raises issues over the exhumation of other toxic materials that had previously remained sequestered in the rock for millions of years. The measured levels of radium and barium are significantly greater than those deemed acceptable in drinking water, which once more stresses the necessity of disposing properly of the waters from fracking operations, and of taking account of the kind of materials that may be washed up from deep underground, as well as the intrinsic composition of the fracking fluid injected into the wells in the first place. If the waters are disposed of incautiously, there may be a real risk of water supplies becoming contaminated by substances that are naturally occurring, but nonetheless highly dangerous.

Saturday, December 22, 2012

The Start of a New Cycle

Rather than the proverbial, or actual, "end of the world", yesterday brought me nothing more untoward than the mild irritations of a particularly crowded train journey and a blocked drain at home, because the fabled 21-12-2012 marked the final conclusion neither of the Mayan calendar nor of the world itself. It is interesting to speculate as to how the world might have ended, had it done so; the most apocalyptic proposed mechanism was a collision, or close encounter, with a celestial body variously called Planet X or Nibiru. Given the expected size of the latter, I think that we might well have been aware that it was on its way somewhat earlier in the year, in a kind of Velikovskian "Worlds in Collision" or "Earth in Upheaval" scenario, with the surface of Mother Earth boiling into a magmatic frenzy and killing billions.

Another interpretation is that no actual end of anything was forecast, but rather the closure of one epoch and the heralding-in of another. Specifically, it is the Mayan "Long Count" calendar that is in question. This calendar began in a year corresponding to 3114 B.C. in our own reckoning, and it advances in periods of about 394 years called baktuns. December the 21st (the winter solstice) 2012 marks the conclusion of the 13th baktun, and the myth that the world would end then can be blamed on a false interpretation of a Mayan tablet, carved 1,300 years ago. Rather, we have been re-birthed by a mere few hours into a new cycle.
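The arithmetic behind those dates is easily checked. As a rough sketch (using the mean Gregorian year length; the exact scholarly correlation between the Long Count and our calendar is a separate question), one baktun is 144,000 days:

```python
# A rough check of the Long Count arithmetic (illustrative only):
# one baktun is 144,000 days, and the 13th baktun ends 13 * 144,000
# days after the calendar's start in 3114 B.C. Python's datetime
# cannot represent B.C. dates, so the years are counted arithmetically.
DAYS_PER_BAKTUN = 144_000
MEAN_YEAR = 365.2425            # mean Gregorian year, in days

baktun_years = DAYS_PER_BAKTUN / MEAN_YEAR
total_years = 13 * baktun_years
# Astronomical year numbering: 3114 B.C. is year -3113 (no year zero).
end_year = -3113 + total_years

print(f"One baktun is about {baktun_years:.0f} years")     # ~394
print(f"13 baktuns span about {total_years:.0f} years")    # ~5125
print(f"The 13th baktun ends around A.D. {end_year:.0f}")  # 2012
```

The 394-year figure quoted above is simply 144,000 days expressed in years, and thirteen such periods from 3114 B.C. do indeed land in 2012.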

Although the world has not ended catastrophically, there seems little doubt that a core transformation of human civilization is under way, one which should prove dramatic in all aspects. While discoveries of untapped fossil fuels abound, there is little shadow of doubt that their present rate of recovery, most immediately of crude oil, cannot be maintained for very much longer. This fact will impact most urgently on transportation, as fuel prices rise and actual fuel shortages then ensue, with profound consequences both to individuals and to the economies of nations. Lower-energy pathways for human activities must be found, and a new style of growth established on regional, local and community scales.

While this is the antithesis of globalisation, it builds community buffers and diminishes our dependency on vulnerable exogenous supply lines. Thus, the new cycle must be one of establishing resilience, as the old record of unbridled consumption is ushered into the pages of history. The task confronting humanity, however, is by no means trivial, and there are few constants to draw upon to steady the uncharted descent from the slippery peak of avarice. Accordingly, we may ultimately regard our having been "spared" to witness this new age as either a privilege or a condemnation.

Sunday, December 09, 2012

“Peak Oil” is Nonsense… Because There’s Enough Gas to Last 250 Years.

The title is a condensate of the latest rendition from Nigel Lawson, who served in Margaret Thatcher’s government, both as Secretary of State for Energy and as Chancellor of the Exchequer. In a recent interview, published by the Daily Mail, Lord Lawson makes various assertions, each of which invites some consideration and question. On the face of it, his conclusions confound the difference between a resource and a reserve. Furthermore, they ignore the fact that it is not the quantity available, but the rate at which it may be recovered - an economic as well as a technical reality (this is the “reserve”) - which is the determinant of whether and when oil or gas will “peak”.

As an example where the former but not the latter criterion holds, we might say that it is technically feasible to mine minerals from the moon, and bring them back to earth, but in economic terms the prospect is unrealistic. A resource, however, is reckoned up from all known quantities - proved, probable and theoretical - not only ignoring technical and economic factors, but also the uncertainty of whether the material is there to be had in the first place. A useful analogy for the relationship between the amount in the reserve, and how quickly it may be recovered, is that it is not the size of the tank but the size of the tap that matters.

No sensible person that I am aware of is saying that oil or gas is going to “run out” any time soon. I give a talk entitled “What happens when the oil runs out?”, but I begin by explaining that this is not going to happen, and that we will be producing oil for decades to come. That noted, continuing to produce oil at the present rate of 30 billion barrels every year is unlikely to be possible for very much longer. At some point, reckoned to be around now, conventional crude oil production will reach a maximum, and then fall relentlessly. It must - this is the nature of a finite reserve. In principle, so long as that “hole” in the output of crude oil can be filled from alternative, unconventional sources, all is well; but once the loss of conventional production exceeds the provision of the latter, the overall sum will pass into the negative - in other words, global oil production will have peaked.
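The logic of that last sentence can be made concrete with a toy model, in which conventional output declines by a fixed percentage each year while the yearly additions from unconventional sources gradually tail off. All of the numbers below are hypothetical, chosen purely for illustration, not a forecast:

```python
# Toy model of peak-oil timing: every figure here is hypothetical.
conventional = 25.0     # Gb/yr of conventional crude (assumed)
unconventional = 5.0    # Gb/yr from unconventional sources (assumed)
addition = 1.5          # Gb/yr added by new unconventional projects
DECLINE = 0.05          # 5% annual conventional decline rate (assumed)
TAIL_OFF = 0.85         # yearly shrinkage of unconventional additions

totals = [conventional + unconventional]
for year in range(1, 21):
    conventional *= 1 - DECLINE     # conventional output falls away
    unconventional += addition      # unconventional fills the "hole"...
    addition *= TAIL_OFF            # ...but by less each year
    totals.append(conventional + unconventional)

peak = totals.index(max(totals))
print(f"Total production peaks in year {peak}, at {max(totals):.1f} Gb/yr")
```

With these particular numbers the peak falls only two years in; the point is not the date but the structure - a finite, declining base plus tapering additions must pass through a maximum in the year the conventional loss first exceeds the unconventional gain.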

Lawson begins with mention of the extraction of gas and oil from shale by hydraulic fracturing (frac’ing, for the purists, but more commonly designated as fracking). He is entirely correct that it is new technologies - horizontal drilling combined with fracking - that have brought the cost down sufficiently that exhuming gas and oil from such inaccessible reservoirs is now both practically and economically viable. In principle, shale gas can be recovered all over the world, although until an actual well is drilled, it remains a matter of speculation how much gas there is, and indeed of what quality; for example, several shale wells in Poland yielded a gas so heavily contaminated with nitrogen that it wouldn’t burn. It also contained high levels of hydrogen sulphide, and removing both these other gases to leave pure methane would be extremely costly. That noted, because production from shale wells, of either gas or oil, tends to decline quite rapidly - down to perhaps only 20% of the initial rate within 2 years - more wells must be drilled year on year simply to maintain the overall output of a field, and at a still greater rate if gas production is to be raised, as is sought. Ultimately, the scheme must run up against material limits to the levels of financial investment, infrastructure, equipment and trained personnel that can be brought to bear in the effort.
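The drilling treadmill implied by those decline rates can be sketched with a simple calculation. Assuming (purely for illustration) that each new well starts at one unit of annual output and retains 45% of its rate each year - since 0.45 squared is roughly 0.20, matching the quoted fall to ~20% of the initial rate within two years - holding a field's output constant demands a permanent drilling programme:

```python
# Toy model of the shale "drilling treadmill"; all figures illustrative.
RETENTION = 0.45        # fraction of each well's rate kept year-on-year
TARGET = 100.0          # field output to be maintained, in well-units/yr

cohorts = []            # current output of each year's cohort of wells
for year in range(6):
    cohorts = [c * RETENTION for c in cohorts]   # existing wells decline
    new_wells = TARGET - sum(cohorts)            # shortfall to be drilled
    cohorts.append(new_wells)                    # each new well adds 1 unit
    print(f"Year {year}: drill {new_wells:.0f} new wells to hold output at {TARGET:.0f}")
```

After the first year the programme settles at 55 new wells annually - replacing over half the field's capacity every year just to stand still; raising output would require drilling faster still.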

As to how much shale gas the United States has - claimed in the media as sufficient to last for 100 years - detailed inspection of the available figures reveals this to relate to a resource, i.e. the most optimistic set of accounts, while the reserve (proved plus probable) is more like 20 years' worth. Given the known reserves of shale oil, and the expected production from it over the next few years, it is difficult to see how the U.S. will, by 2017, overtake Saudi Arabia to don once more its crown as the world’s greatest oil-producing nation, which would mean an output of about 11 million barrels a day, up from just under 6 mbd currently. The tally of “oil” includes other “liquids” - biofuels, natural gas plant liquids and refinery gains - which flatter the total, since they have different properties from crude oil, in particular a lower energy density.

Unsurprisingly, oil shale gets a mention, for which it is claimed there is three times as much “oil” under the U.S. as has been used in the past 100 years. Yes, it’s that resource thing again. It is probably worth stressing that oil shale is not the same thing as shale oil. Shale oil (tight oil) is actual crude oil that, if recovered, e.g. through horizontal drilling and fracking, can be refined in the normal way. However, oil shale does not contain oil as such, but a solid organic material called kerogen. To produce a material resembling crude oil requires large amounts of energy, to heat the kerogen above 300 degrees centigrade in order to crack it into liquid form; the process also uses large amounts of freshwater, and churns out an equal volume of contaminated wastewater, which must be dealt with responsibly.

There is, as yet, no commercial-scale production of oil from “oil shale”, and there may never be, since it takes almost as much energy to get oil from it as will be delivered by the oil itself, rendering the exercise pointless. The returns are better on “oil sands” - maybe 3 to 1 in energy terms, once the material has been “upgraded” to provide a liquid fuel - but here too vast quantities of water are needed, and so much energy is required to extract the bitumen in the first place that installing nuclear reactors in such locations is being seriously considered as a source of heat, currently generated by burning natural gas.

Lawson concludes, “Today, oil, gas and coal represent 80 per cent of the global energy mix. They will continue to dominate the world’s energy markets for decades to come. And within that picture, natural gas is going to offer the cheapest way to produce electricity: cheaper than nuclear energy and massively cheaper than renewables...”. He’s obviously forgotten about climate change.

Wednesday, December 05, 2012

A Solar Cell All Made From Carbon.

The first solar cell has been reported whose active components all consist of forms of carbon. Until 1985, there were just two known allotropes of carbon - diamond and graphite - but in that year the fullerenes were added to the list. The latter are also known as "buckyballs", in honour of Buckminster Fuller, who conceived the structure of the geodesic dome, which these materials mimic at the molecular level. Since then, nanotubes and graphene have been discovered, all pure carbon, but with different atomic arrangements that determine their particular structures and properties, which include a high electrical conductivity. In the cell, carbon nanotubes are used to provide a light-absorber and electron donor, while a counterpart electron acceptor is fashioned from the fullerene, C60. This photoactive double layer forms the filling of a sandwich, between a reduced graphene oxide anode and a cathode of doped carbon nanotubes. Among the advantages claimed for this device are low cost, ready availability of materials and simple processing from solution, using wet chemistry methods.

Other workers have previously used carbon nanotubes, graphene and fullerenes in various styles of solar cells, but this is apparently the first time that all three carbon allotropes have been used in the same cell. Most solar cells are transparent to near infra-red wavelengths, and hence around half of the solar spectrum is unusable by them as an energy source. In contrast, the carbon cells absorb near-IR radiation strongly, and it is thought that it might be possible to combine them into a tandem cell, e.g. with silicon, to absorb sunlight right across the solar spectrum, resulting in greater power production. However, the light-to-electricity conversion efficiency is, as yet, extremely low, at 0.06%. By way of comparison, 12% might be got from a dye-based solar cell, and as much as 25% from crystalline semiconductors; a typical silicon solar panel gives around 15%. Thus, there is some way to go before the all-carbon solar cell is used in anger.

Monday, December 03, 2012

B.P. Introduces Low-Salt Water to Enhance Oil Extraction, as part of Shetlands oil boom.

As part of an anticipated massive oil boom in the Shetlands, along with a revamping of the Sullom Voe terminal (confirmed just last week), B.P. intends to use a desalination plant to reduce the salt content of seawater, so that it can more effectively flush oil from the surfaces of the rock reservoirs that contain it. If the process can increase the amount of oil recovered by another 4%, from a current average of 35%, this will mean a substantial increase in output across the industry. Applied to the Clair Ridge field, which lies to the west of the Shetland islands, it is thought that, over the lifetime of the field, 640 million barrels of crude oil might be recovered using untreated seawater, but that a further 42 million barrels, over and above this, could be made available using the low-salt water. The cost of the desalination plant is $120 million, which will add $3 to the cost of a barrel of oil.
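As a back-of-envelope check on those figures (my own reading, since the article does not spell out how the $3 is arrived at), dividing the cost of the plant by the extra barrels it liberates gives a number close to the quoted $3 per barrel:

```python
# Figures quoted in the text; the division itself is my own illustration.
plant_cost = 120e6       # $: cost of the desalination plant
extra_barrels = 42e6     # additional barrels recovered via low-salt water

cost_per_extra_barrel = plant_cost / extra_barrels
print(f"Plant cost per additional barrel: ${cost_per_extra_barrel:.2f}")
```

This comes out at about $2.86 per additional barrel, consistent with the quoted figure of $3 added to the cost of a barrel.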

Crude oil contains a wide range of molecules, many of which are charged or polar. In the B.P. research laboratories at Sunbury-on-Thames, in the south of England, it has been discovered that the oil molecules form chemical bridges with doubly charged cations, such as Ca2+ and Mg2+, that are present on the surfaces of clay particles in sandstone. At the normal salt concentration of seawater (ca. 35,000 ppm), the oil molecules are compressed close to the mineral surface, preventing access by the free cations that are necessary to displace the Ca2+ and Mg2+ cations and thus free the oil from the surface. As the salt concentration is reduced, the thickness of the thin film of water between the oil molecules and the surface increases, through an effect known as “expansion of the electrical double layer”, permitting access of free cations from the seawater. This releases the oil molecules from the rock surface.

For the method to work, two criteria must be met: (1) the total salinity of the water must be low enough to relax the compression of the electrical double layer, and (2) the concentration of dipositive cations needs to be lower than that of the reservoir water. Together, these factors allow most of the oil that is bound to the reservoir surface by this mechanism to be released. B.P. uses water flooding to recover 60% of its oil, and it is predicted that the low-salt (LoSal®) method might increase production over the company’s holdings by 700 million barrels.

This latest development is part of a predicted new oil boom centred on the Shetland Sullom Voe oil terminal, for which £300 million ($480 million) worth of investment is expected. The manager of the terminal confirmed only last week that the following projects were going ahead:
● a complete refurbishment of the plant and pipework.
● introduction of a major gas-cleaning plant.
● construction of a temporary two-storey office building at Sella Ness.
● final work on the Project Aurora gas plant.
● overhaul, by its owners, Fortum, of the power station.
● overhaul of 16 giant oil storage tanks.

While only 9.2 million tonnes of oil were produced through the terminal last year, over its lifetime it has handled one-third of British offshore oil production. The Clair Ridge development (where the desalinated water is to be employed) is expected to be a major producer, with two new platforms. Oil will be brought through the existing pipeline to Sullom Voe from the end of 2016, and production could continue for another 25 years, keeping the terminal in operation into the 2040s.

Rebuilding the Soil Food Web – A Perfect, Natural Recycling System.

I recently found an interesting video clip, whose contents I now summarise. It gives a very clear description of the soil food web, and of the need to restore it to make soil more active, rather than relying on an industrialized agricultural system that is utterly dependent on vast inputs of limited resources: fuels refined from petroleum, and fertilizers made from natural gas and mined rock phosphate.

Californian humus is formed by the low-temperature decomposition of wood chips over 3-5 years; most composting processes work at higher temperatures, which destroy the soil microorganisms. Soil humus is the component of soil that has been broken down by microbes to form soil organic matter (SOM). Leaves fall and form litter on the surface of the soil, which is broken down by fungi that have evolved for the purpose of reducing it to soil humus. Humus can be thought of as an ecosystem containing many thousands of different types of microorganisms, growing together and working in symbiosis with plants to build topsoil. The application of chemicals and tillage has killed off microbes and destroyed soil biodiversity. In conventional (industrialized) agriculture - noting that this has only been in existence for about 60 years! - soil humus is lacking, and so plants have no microbes growing with them and no immunity, which leads to problems of disease, pests etc. Better results (yields) are obtained using organic methods to rebuild soil humus.

The Soil Food Web is a whole community of microorganisms that live in soil, of which there are four main types: bacteria, fungi, protozoa and nematodes, whose collective numbers run to billions in a single teaspoonful of soil. The soil food web shows how they all relate to one another. In nature, plants grow and eventually die, leaving dead material lying on the surface of the soil. At the vanguard are bacteria and fungi, which eat the plant material, and hence all the nutrients that were in the plants become incorporated into the bodies of the bacteria and fungi. Making these nutrients available to plants again requires the action of the soil predators - nematodes and protozoa, like sharks in the ocean or wolves on the pasture. These eat the bacteria and fungi continually, and excrete the nutrients in forms that can once more be used by plants. A perennial process is thus sustained: plants grow and die, bacteria and fungi consume the plant material, and are themselves eaten by predators, which convert them to (and excrete them as) available nutrients that are taken up again by newly growing plants.

For example, in the case of nitrogen, plant mass is eaten by bacteria and the N becomes part of the bacterial biomass. The bacteria are eaten by protozoa, which excrete the N in plant-available (NH4+) form. This can reduce the necessary input of N-fertilizer by 50%, because the microbes help convert the plant material to useful fertilizer. About half way through the video is some wonderful footage, recorded at 400x magnification, of a bacteria-feeding nematode actually eating a bacterium, as part of its daily diet of some 50,000 bacteria; as it eats each bacterium, it excretes plant-available N. Californian humus is used to form a “tea”, by extracting the humus into water, which can be applied to crops to inoculate the soil food web, eventually meaning that very low inputs of artificial fertilizers are required to grow crops, and an improved resistance to disease is achieved.