The full cost of nitrogen pollution exceeds the financial value of higher yields from fertilisation

A lack of nitrogen in the soil often restricts the productivity of a crop. Nitrogen fertilisers, made by the energy-intensive Haber-Bosch process, have hugely improved farm yields around the world. But a new study on the wider impacts of the use of nitrogen compounds on soils suggests that these benefits are much smaller than the environmental costs. What is called ‘reactive’ nitrogen pollutes water supplies, produces greenhouse gases, cuts air quality and reduces biodiversity. Mark Sutton and colleagues completed a cost-benefit analysis of reactive nitrogen.(1) They found that the value of extra yields from the use of nitrogen was €25-€130bn (an unusually wide estimate, it must be said). But the cost of nitrogen-related pollution was put at €70-€320bn, meaning that, roughly speaking, the pollution costs of nitrogen are three times the value of the enhanced crop yields. So why do we use so much fertiliser? It is much like the banks: the gains are private (accruing to bankers and to farmers) while the losses are socialised (accruing to taxpayers and citizens). Put another way, the application of fertiliser to farmland is far too cheap because its price does not reflect the full costs of using it. The result is that farmers are not incentivised to use it efficiently, and the study’s authors estimate that about half of the nitrogen added to Europe’s soils ends up as pollution or back in the air as nitrogen gas.
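As a rough check on the arithmetic, here is a minimal sketch using only the ranges quoted above; the midpoint comparison is illustrative rather than part of the study itself.

```python
# Back-of-envelope comparison of the ranges quoted from the Sutton et al. study.
# Figures are billions of euros per year; the midpoint ratio is illustrative only.
benefit_range = (25, 130)   # value of extra crop yields from nitrogen fertiliser
cost_range = (70, 320)      # estimated cost of nitrogen-related pollution

benefit_mid = sum(benefit_range) / 2
cost_mid = sum(cost_range) / 2

print(f"Benefit midpoint: {benefit_mid:.0f}bn euros; cost midpoint: {cost_mid:.0f}bn euros")
print(f"Pollution cost is roughly {cost_mid / benefit_mid:.1f} times the yield benefit")
# Even comparing the top of the benefit range (130) with the bottom of the cost
# range (70), the pollution costs are of the same order as the yield benefits.
```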

Perhaps more controversially, the report recommends a reduction in meat consumption. About 85% of the nitrogen that isn’t wasted and which ends up in the complex molecules in food crops is eaten by animals, not humans. If we were all vegans, we would need to grow more plant matter but would still reduce the need for artificial nitrogen by 70% from today’s levels, even if we wasted as much nitrogen as we do today.

There’s another point which isn’t mentioned in Mark Sutton’s summary of the full report. One of the causes of nitrogen waste is monoculture. If our croplands are all devoted to huge acreages of a single plant, whether wheat or sugar beet or oilseed rape, then nitrogen uptake is likely to be lower than if we grow a diverse mix of crops in the same area. The precise mechanisms by which nitrogen is better absorbed by a range of different plants in the same area than by just one plant are not well understood. Nevertheless several studies now show that biodiversity reduces the run-off of nitrogen and thus cuts the environmental impact of using fertilisers, whether animal manure or artificial.

By coincidence, one such study appeared in Nature the week before the Sutton summary of the European work.(2) Bradley J Cardinale showed that maintaining the diversity of different species of algae in streams resulted in greater uptake of the available nitrogen that would otherwise have polluted downstream rivers. He concludes that ‘biodiversity may help to buffer natural ecosystems against the ecological impacts of nutrient pollution’.

An earlier piece of research demonstrated a similar result. Whitney Broussard and Eugene Turner found that places in the US with diverse crops had lower levels of dissolved nitrogen in the rivers leaving the area.(3) The authors recommend rotating crops, decreasing field size, increasing the width of field edges and incorporating more native grasses between fields. All worthy objectives, but as Broussard says, ‘The American farmer is caught in a mode of production that has tremendous momentum and cannot be changed on the farm’. In other words, farmers are trapped in monocultures and have neither the knowledge nor the ability to take the financial risks of moving away from reliance on one crop. We therefore need to find a way of incorporating biodiversity’s value in reducing the costs of nitrogen pollution into the calculus of the farmer.

The curious fact is that moving away from single-crop agriculture also seems to increase yields. Mixing different plants in a single field, or having animals living alongside the plants, such as ducks living in rice paddies, can systematically improve crop performance. One study in China showed that mixing plants such as maize, sugar cane, wheat, potato and broad bean in a single field might add 30 to 80% to overall production.(4) I have seen similar results from mixing grains and other crops in Australia. Perhaps even if we cannot find a way to reward farmers for increasing biodiversity, and thus reducing nitrogen run-off, we can persuade them to investigate intercropping of different plants in the same field simply because it improves their overall yields.

1, Sutton M.A. et al. The European Nitrogen Assessment, (Cambridge University Press, 2011), available at http://go.nature.com/5n9lsq. A summary can be found at www.nature.com/nature/journal/vaop/ncurrent/full/472159a.html

2, Cardinale, Bradley J, Biodiversity improves water quality by niche partitioning, Nature, www.nature.com/nature/journal/v472/n7341/full/nature09904.html

3, This research is described at http://www.azocleantech.com/Details.asp?newsID=4617

4, Chengyun Li et al., Crop Diversity for Yield Increase, PLoS ONE, November 2009

The dangers from nuclear power in light of Fukushima

This is a joint post by Chris Goodall of carboncommentary.com and Mark Lynas (www.marklynas.org). We make no apologies for length, as these issues can really only be properly addressed in detail. An abridged version of this article was published in the TODAY newspaper of Singapore on April 6 2011.

How risky is nuclear power? As the Fukushima nuclear crisis continues in Japan, many people and governments are turning away from nuclear power in the belief that it is uniquely dangerous to human health and the environment. The German government has reversed its policy of allowing the oldest nuclear plants to stay open and Italy has reportedly abandoned its efforts to develop new power stations. Beijing has stopped approving applications for nuclear reactors until the consequences of Fukushima become clear, potentially affecting up to 100 planned new stations. The mood towards the nuclear industry around the world is antagonistic and suspicious. We think this reaction is short-sighted and largely irrational.

For all its problems, nuclear power is the most reliable form of low carbon electricity. It remains the only viable source of low-carbon baseload power available to industrialised economies, and is therefore responsible for avoiding more than a billion tonnes of CO2 emissions per year. In addition to these unarguable climate benefits, we believe that nuclear power is much safer than its opponents claim. Despite the hyperbolic nature of some of the media coverage, even substantial radiation leaks such as at Fukushima are likely to cause very little or no illness or death. No power source is completely safe, but compared to coal, still the major fuel for electricity generation around the world, nuclear is relatively benign. About 3,000 people lost their lives mining coal in China alone last year. Many times that number died as a result of the atmospheric pollution arising from the burning of coal in power stations.

Although much journalism of the last few weeks has provided careful assessment of the true dangers of nuclear accidents, we thought it would be helpful to pull together the results of scientific studies on the damage caused by nuclear radiation to human health. Our aim is to allow readers to put some perspective on the radiation risks of nuclear power, particularly after accidents, and to appreciate the context of the oft-quoted units of ‘millisieverts’, ‘becquerels’ and other measurements. This is a complicated story, because not all radiation is the same – a crucial factor is the timescale of exposure. There is a big difference between the expected impacts of exposure to huge amounts in a very short period, large doses over several weeks, and long-running or chronic exposure.

We examine these three scenarios in turn. The results seem quite clear to us: accidents and leaks from nuclear power stations are not likely to cause substantial numbers of illnesses or deaths, even under exceptional circumstances such as those currently being experienced after the combined earthquake and tsunami disaster at Fukushima. This is an important conclusion given the potential for nuclear power to continue to mitigate global warming, which presents vastly greater risks on a global scale. We are not advocating slackness or complacency, just suggesting that a rational and balanced assessment of the risks of radiation is a good idea. To hastily abandon or delay nuclear power because of radiation risks from accidents such as that at Fukushima is poor policy-making.

Some background

All of us are exposed to radiation every day of our lives. Very little of this comes from nuclear power or nuclear weapons. Other sources are far more important. One example: potassium is a vital chemical for carrying electrical signals around our bodies, but a rare, naturally occurring isotope, potassium 40, is radioactive. The tiny amount inside us produces 4,000 decays of individual nuclei every second. This internal radioactivity, from potassium atoms and from a naturally occurring radioactive isotope of carbon, is responsible for about 10% of the annual dose received by someone in the UK.

More important sources are the radon gas produced in granite rocks, cosmic radiation and doses from medical equipment. By contrast, and despite the attention we pay to them, nuclear power stations and nuclear weapons are responsible for much less than half of one percent of the radiation typically absorbed by people in the UK. The same rough percentages apply to other countries operating nuclear reactor fleets.

The average background radiation across the UK is about 2.7 millisieverts (mSv) a year. (A ‘millisievert’ is a measure of radiation exposure to the body and is therefore a useful unit for directly comparing the radiation received from different sources.) People in Cornwall, where there is far more radioactive radon around because of the local geology, experience more radiation than people in other areas. Their dose may be as high as 10 mSv, almost four times the UK average. In fact nuclear power plants could not be built in the granite areas of the county because the natural background radiation at the boundary of the power station would be higher than is allowed under the strict rules governing the operation of nuclear plants. Cornish radiation isn’t that unusual: parts of Iran, India and Australia have even higher natural background radiation than Cornwall.

So our first point is that nuclear power is an almost trivial source of radiation, dwarfed by natural variations in other sources of radiation. The second is that exposure to radiation in the UK is tending to rise, but certainly not because of nuclear power or leaks from other nuclear operations. Instead it comes from the increased use of radiation in diagnostic equipment used by health care professionals. One scan in a CT machine will add about 10 mSv to a person’s annual exposure – 3 million Britons went through this process last year. Per head of population, the number is even higher in the US.

These are two important basic numbers to help us assess just how dangerous nuclear power is: 2.7 mSv a year for the average natural background radiation received by the typical person in the UK and 10 mSv for a single CT scan. We will use these numbers to compare the radiation effect of nuclear power and to assess the importance of the very rare but severe accidents at nuclear power plants.
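For readers who want these reference points in one place, here is a minimal sketch; the numbers are simply those quoted above and the comparisons are illustrative.

```python
# Reference radiation doses used throughout this article, in millisieverts (mSv).
UK_BACKGROUND_PER_YEAR = 2.7   # average annual UK background dose
CT_SCAN = 10.0                 # typical single CT scan
CORNWALL_PER_YEAR = 10.0       # upper end for residents of high-radon areas

def as_years_of_uk_background(dose_msv):
    """Express a dose as multiples of the average UK annual background."""
    return dose_msv / UK_BACKGROUND_PER_YEAR

# Example: the 100 mSv figure mentioned later for some Fukushima workers
print(f"100 mSv is about {as_years_of_uk_background(100):.0f} years of UK background,")
print(f"or roughly {100 / CT_SCAN:.0f} CT scans.")
```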

The impact of exposure to very high levels of radiation over a few hours

a) Chernobyl workers (1)

The fire and explosion at Chernobyl in 1986 was the world’s most severe accident at a civil nuclear power plant. It is the only such event which is known to have killed workers through the effects of radiation. About six hundred people were involved in work on the site of the power plant during the first day after the accident, of whom 237 were thought to be at risk of acute radiation syndrome (ARS) because of their degree of exposure. 134 individuals developed symptoms of ARS and 28 died as a result. The deaths were generally due to skin and lung problems, compounded by bone marrow failure. All but one of the people killed received a dose of radiation above 4,000 mSv, with one death occurring after a dose of about 3,000 mSv.

The implication of this is that ARS will usually only kill someone who has experienced a dose of over 4,000 mSv. Indeed, many workers at Chernobyl actually received doses above 5,000 mSv and survived. By comparison, the workers engaged in the repair at Fukushima are being carefully monitored to ensure their total exposure does not go above 250 mSv, less than a tenth of the minimum level at which an ARS victim died at Chernobyl. As at 23rd March, 17 workers had received more than 100 mSv of radiation, nearly forty times the yearly radiation received by the typical UK resident and equivalent to ten CT scans. It has been reported that two workers received radiation burns to the legs after standing in contaminated water in Unit 3 on 24 March that was delivering doses of about 170 millisieverts per hour.(2) To date this remains the only known health impact suffered by Fukushima workers.

But what of the longer term dangers to Chernobyl workers who suffered massive radiation exposures? Of those who survived acute radiation syndrome, 19 out of the 106 died between 1987 and 2006. These deaths included 5 cancers. 87 people were still alive in 2006; 9 of them had been diagnosed with various cancers including cases of leukaemia. The problem with using these statistics to draw definitive conclusions is that the numbers of workers affected by extremely high levels of radiation in the Chernobyl emergency are not large enough to give robust data on the long-term impact across wider groups. But the 20 year survival rate of the workers exposed to the greatest radiation – 82% – and the unremarkable percentage either dead of cancer or living with it – 14% in total, within ‘normal’ bounds – suggests that the human body is usually able to recover from even extremely high doses delivered in a short period of time. (This comment is not intended to diminish the severity of the effects of ARS: many of the survivors have suffered from cataracts, sexual dysfunction, skin problems and other chronic illnesses.)

Fourteen healthy children were born to ARS survivors in the first five years after the accident. There is no evidence of genetic damage passed to future generations.

b) Chernobyl’s wider early impacts

Several hundred thousand workers were involved in the aftermath of the accident (the so-called ‘recovery operation workers’ or ‘liquidators’). These people’s average total dose was about 117 mSv over the period 1986-2005, most of which we can assume was experienced in the first months after the accident or when the sarcophagus was being placed over the reactor core a couple of years later. The exposures in this group ranged from 10 to 1,000 mSv. The UN Committee on Chernobyl comments that ‘apart from indications of an increase in leukaemia and cataracts among those who received higher doses, there is no evidence of health effects that can be attributed to radiation exposure’. The suggestion here is that the overall impact on cancer rates among the people with lower doses – though these are still very much higher than would normally be experienced in the UK – is limited.

This conclusion has been attacked by some groups. In particular, Greenpeace published a report entitled The Chernobyl Catastrophe: Consequences on Human Health in 2006 that estimated a figure for total deaths resulting from the disaster that was many times greater than official estimates. Nevertheless, most scientific reports, including all the many official reports into the accident, have concluded that the long-term effects of radiation on the recovery workers, as opposed to the much smaller numbers working inside the plant immediately after the explosion, have been very limited.

After the 1986 Chernobyl disaster large numbers of people in surrounding populations were exposed to the radioactive isotope iodine 131, largely through consuming milk and other farm products. The human body takes up iodine and stores it in the thyroid. Radioactive iodine accumulates in this small area of the body and gives the thyroid gland disproportionate exposure. The effective dose of radiation to the thyroid among some people in the areas affected by Chernobyl fallout ranged up to 3,000-4,000 mSv.(3) The concentration of radioactive iodine in the thyroid has produced large numbers of cases – probably about 7,000 by 2005 – of thyroid cancer among the millions of people in the affected areas.(4) These cases are highly concentrated among people aged less than 18 at the time of the disaster and the impact on adults appears to be very much less or even negligible.(5) The risk of getting thyroid cancer among the most affected group is continuing to rise even now. The implication is clear: severe doses of radiation twenty-five years ago produced damage that is still causing cancer today.

Thyroid cancer is treatable and death rates are low. The number of people who had died of thyroid cancer in the affected areas by 2005 was 15.(6) We have been unable to find a scientific assessment of how many people are likely to die in the future from thyroid cancer in the Chernobyl region, but the effective treatment available for this disease may mean that relatively few of those affected will die. The incidence of thyroid cancer after Chernobyl could have been very substantially reduced if the authorities had acted to provide the local populations with iodine tablets. The effect of taking these tablets is to flood the thyroid gland with normal iodine, reducing the uptake of iodine 131 and thus cutting the dose of radioactivity. Of the countries closest to the nuclear power plant, only Poland seems to have distributed iodine widely, although this is a well-understood and simple way of reducing the thyroid cancer risk from radioactivity. The authorities could also have banned the sale of milk, which is the medium through which most iodine 131 enters the human body and the reason why young children appear to have been most severely affected.

It is notable that the authorities around Fukushima are taking an extremely precautionary approach to iodine 131 exposures in the surrounding populations, both in rejecting milk and distributing iodine tablets. Given the experience of Chernobyl, this seems sensible, even though the real risks of exposure and developing cancer as a result are very much lower.

c) Fukuryu Maru fishing boat

In the 1950s and early 1960s nuclear weapons powers such as the US, Britain and the Soviet Union carried out above-ground explosions of atomic bombs in remote areas. (In 1963 these tests provided about 5% of the radiation dose experienced by people in the UK, over five times the impact of Chernobyl, which added less than 1% to the total dose for the average person in 1986, the year of the explosion.) One of these tests took place in 1954 at Bikini Atoll, one of the Marshall Islands in the mid-Pacific. The device turned out to be much more powerful than the US scientists running the experiment had expected, with an explosive power of about one thousand times that of the 1945 bombs dropped on Japan. As a result the fallout extended well beyond the exclusion zone established by the US, and a Japanese fishing boat was caught in the aftermath of the explosion.

The 23 individuals on this boat received huge doses of radiation – probably averaging between 4,000 and 6,000 mSv. The fishermen suffered severe radiation burns within hours and decided to return to their home port in Japan. Upon their arrival two weeks later their symptoms were recognised to be caused by radiation and they were treated for ARS. Unfortunately, one of the treatments the fishermen received was blood transfusions using blood which was infected with the hepatitis C virus. One of the crew members died a few months after the explosion from liver disease, which may have resulted from hepatitis as much as from acute radiation syndrome. The other fishermen also suffered disease from the hepatitis C in the transfusions and many of them died of liver problems. This experience complicates any medical conclusions that might be drawn about the immediate or long-term impacts of severe radiation exposure.

As at February 2011, it is reported that of the 22 crew members who survived ARS, nine are still alive 57 years later.(7) The average age of these survivors is over 80. These individuals all seem to have had major health problems during their lives, but the cause may well be the transfusions rather than the radiation. Once again, the main implication of the Fukuryu Maru event is that even huge doses of intense radioactivity can cause surprisingly few fatalities.

d) Hiroshima and Nagasaki

The survivors of the atomic bomb blasts were exposed to high but varying levels of radiation. The death rates of nearly 90,000 survivors have been painstakingly studied and compared with people from other cities, so they are a valuable source of information from a horrific real-world experiment. Most survivors endured an exposure of less than 100 mSv, and for these people there is no statistically significant increase in cancer risk. One study shows, for example, that the number of deaths from solid cancers among those who received less than 100 mSv was 7,647, compared to 7,595 that might have been expected based on the experience of populations in other Japanese cities.(8) The increment of 52 deaths is less than 1% above the expected level, and because the group involved is relatively large this is a reliable indication of how small any excess risk must be.

Above 200 mSv of total exposure, the effect of the radiation becomes a little more obvious, but it is not until the dose exceeds 1,000 mSv that a major increase in cancers occurs. Over 2,000 mSv, the risk of a survivor of the bombs dying from a solid cancer is approximately twice the level of risk in non-affected cities. But, even at this very high dose, the proportion of these bomb survivors dying from solid cancers was 18%, to which should be added the 3% dying from leukaemia. Compare this, for example, to the UK, where about a quarter of all today’s deaths are from cancer, presumably because of other factors.(9) So it is fair to say that even severely irradiated Japanese atomic bomb survivors appear to be at less risk of developing cancer than normal British people.

e) The effects on soldiers exposed to radiation at tests of nuclear bombs

US and UK research has shown that soldiers exposed to radiation in the aftermath of nuclear bomb tests, such as the ‘Smokey’ test in Nevada in 1957, have not had a higher than expected incidence of cancer. Although this group seems to have experienced more leukaemia than would have been predicted, the number of other cancers has been lower. The overall death rate from cancer is not higher than in a control group.(10)

Severe exposure over longer time periods

In the previous section we looked at single catastrophic events that caused high doses of radiation, showing that only very high doses, perhaps ten or a hundred times the yearly amount received from background sources, substantially affect the risk of future cancers. The same is true of less intense individual events that are repeated many times over a period, even though these events may add up to very high levels of total exposure.

a) Radiotherapies for cancer

Highly targeted bursts of radiation are used to kill cancer cells in radiotherapy. As a result the patient receives very large total doses of radiation over the period – perhaps a month – of the treatment. The amount of radiation received may be as much as 30,000 mSv, many times a fatal single dose. This amount does not cause acute radiation sickness because the patient is given time to recover between doses, allowing damaged non-cancerous cells to repair themselves, and because much of the power is directed at specific internal sites in the body, where the radiation does indeed cause cell death. (That, after all, is the point of radiotherapy – to kill the cancerous cells in the patient’s tumour.) Some of the radiation reaches other healthy parts of the body and does seem to cause small increases in the likelihood of developing another cancer. But, as the American Cancer Society says, ‘overall, radiation therapy alone does not appear to be a very strong cause of second cancers’.(11) Overall, then, radiotherapy cures many more cancers than it causes in today’s populations.

b) Workers manufacturing luminous dials for watches

A classic study by Rowland et al in 1978 investigated the incidence of bone cancer among workers who painted luminous watch dials with radioactive paint before the Second World War.(12) Workers ingesting more than 10 gray, equivalent to more than 100,000 mSv, had a very high incidence of bone cancer. Those taking in less than 10 gray had no cases of bone cancer at all. In his book Radiation and Reason, Oxford University Professor Wade Allison comments that this is ‘a most significant result’ because it shows a clear demarcation between the level of longer-term exposure that seems to cause an obviously enhanced cancer risk and the level that does not.(13) The threshold – 10 gray – is far greater than anyone is ever now likely to experience as a result of nuclear power. It is far greater, for example, than the exposure of any of the workers fighting the fire at Chernobyl.

The impact of chronic enhanced background radiation

Thus far we have tried to show that only very high levels of radiation, of a kind only very rarely encountered, tend to produce statistically significant increases in cancer and other diseases. The last category is long-running exposure to chronically elevated levels of background radiation. With the exception of thyroid cancer, and of high levels of radon gas where the person exposed also smokes (see below), raised levels of background radiation appear to have at most a small effect on the likelihood of cancer or other diseases. Indeed, some people argue that small increases in the total amount of radiation received per year have no impact whatsoever on illness rates, or even that some elevated doses can be beneficial.

The standard way of viewing the impact of radiation on human health is called the ‘linear, no threshold’ model, or LNT. LNT assumes that the increased rates of cancer seen in populations such as the atomic bomb survivors can be used to predict the amount of cancer arising at much lower levels of radiation. The theory says that there is a straight-line relationship: simply put, if a 1,000 mSv dose gives 10% of people cancer, then a 100 mSv total exposure will induce the disease in 1% of the population. With this model of the relationship between radiation and cancer, all incremental doses are bad, from whatever base level, because they add to risk. But the evidence from many studies is that it is difficult to show any unfavourable effect from elevated levels of exposure. For example, people living at higher altitudes generally get more background radiation than those at sea level because of greater cosmic ray intensity, yet we could find no study showing that these people experience more cancers or other radiation-related diseases. In Ramsar, Iran, naturally high background radiation delivers a hefty dose of 260 millisieverts per year to local residents, about a hundred times the 2.7 mSv/yr experienced by the average UK citizen, and also ten times higher than the doses normally permitted to workers in nuclear power stations. However, there is no observed increase in cancer in this or any other area where levels of background radiation are up to two orders of magnitude higher than normally observed.(14)
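The straight-line assumption behind LNT is easy to state numerically. Here is a minimal sketch; the 10% risk at 1,000 mSv is the illustrative figure used above, not a measured value.

```python
# Linear no-threshold (LNT) model: predicted excess cancer risk is assumed to be
# directly proportional to dose, with no safe threshold and no repair effects.
REFERENCE_DOSE_MSV = 1000.0   # illustrative dose from the example above
REFERENCE_RISK = 0.10         # illustrative 10% excess cancer risk at that dose

def lnt_excess_risk(dose_msv):
    """Excess cancer risk predicted by straight-line extrapolation."""
    return REFERENCE_RISK * dose_msv / REFERENCE_DOSE_MSV

for dose in (1000, 260, 100, 2.7):
    print(f"{dose:>6} mSv -> predicted excess risk {lnt_excess_risk(dose):.4f}")
# Note the 260 mSv/yr line: LNT would predict a clearly raised cancer rate in
# Ramsar, which is not what the epidemiology there appears to show.
```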

The LNT model is controversial because it is based on statistical assumptions (which reflect a very precautionary approach) rather than observed biological effects of radiation – it would predict higher rates of radiation-induced cancer in Ramsar where radon levels are exceptionally high despite no evidence of these occurring in reality. It has been criticised because the body can repair most DNA damage caused by radiation, and cells have mechanisms that perform this healing role on a constant basis. An analogy would be blood loss: whilst losing half a litre of blood (such as a blood donor might) causes no health impacts whatsoever, losing 5 litres of blood would be fatal. In this case clearly there is a threshold for harm, so a ‘linear no threshold’ assumption is biologically incorrect.

There is one important exception, however, to the rule that increased background radiation presents no additional health problems. In many parts of the world, particularly those with granite rocks close to the surface, radon gas is the most important source of natural exposure to radiation. Radon is a short-lived radioactive gas produced in the decay chain of naturally occurring uranium. As we said above, for the UK population as a whole the average total absorption of radiation is about 2.7 mSv per year, but many people in Cornwall receive much more, largely because the gas pools in their homes and workplaces.

Studies have suggested that this increase has a very small effect on the incidence of most cancers and other illnesses, although the research is not yet definitive about the precise relationship between radon gas exposure and rates of cancer. However, radon does have an observed effect on lung cancer occurrence, particularly among smokers, and this effect increases with the typical densities of radon in the home. In homes with the highest radon levels, the chance of a smoker getting lung cancer rises from about 10% to about 16%, according to one study.(15)

The US National Cancer Institute concludes: “Although the association between radon exposure and smoking is not well understood, exposure to the combination of radon gas and cigarette smoke creates a greater risk for lung cancer than either factor alone. The majority of radon-related cancer deaths occur among smokers.” (16)

a) The impact of living near a nuclear power plant

Several studies have shown ‘clusters’ of solid cancers and of leukaemia around nuclear installations in the UK and other countries, although the vast majority show no relationship between the two. (17) In particular, the incidence of childhood leukaemia appears to be marginally higher than the national average in some areas close to nuclear sites and at some locations the rate of such cancers appears to rise with closeness to the site. (This suggests a risk that is related to the dose experienced by the child, and thus in line with LNT theory).

This is a worrying finding and much research has tried to find out why the chance of cancer appears to be slightly higher in these places. But the issue is this: why should there be an increased risk of cancer around nuclear sites when the aggregate level of radiation exposure is so low compared, for example, to parts of Cornwall? Similarly, why do we not see higher incidences of childhood cancers around large coal-fired power stations, which emit far higher levels of radiation than nuclear sites as a result of the radioactive material contained in the coal being dispersed from the chimneys? And, as a separate point, why have some of the rates of higher-than-expected cancer fallen at some sites when radiation levels have remained approximately constant?

Scientists working on this issue have no convincing explanation for the higher rate of childhood cancers in these clusters. But many experts now believe that what is known as ‘population mixing’ may be responsible for the observed increase. Mixing occurs when a new population, such as those recruited to construct or operate a nuclear power station, arrives in the area. This may, one theory goes, cause unusual infections in the area and the end result of some of these infections may be childhood leukaemia.

To repeat: the clusters of cancer around some nuclear sites over some periods of time appear to suggest a worrying relationship between nuclear power stations and cancer. But the relatively low levels of radiation at these places, compared with those around coal-fired power stations or in areas with high natural background radiation, make it extremely difficult to see how radioactivity could cause the higher levels of cancer.

b) Workers in defence industries exposed to radiation

Oxford’s Professor Wade Allison reports on a survey of a huge number (174,541) of workers employed by the Ministry of Defence and other research establishments.(17) This study found that the workers received an average of 24.9 mSv above background radiation, spread over a number of years. Even though this amount is small when expressed as a figure per year, the large number of people in the study should enable us to see whether low levels of incremental exposure have any effect on cancer incidence. (Any increase will be much more statistically significant than additional cancers in smaller groups.) In fact the survey found that the workers suffered from substantially less cancer than would be expected, even after correcting for factors such as age and social class. (The mortality rate for all cancers was between 81% and 84% of the level expected.) This suggests that the increased radiation they experienced delivered no additional cancer risk at all.

More on Fukushima

How dangerous are the levels immediately next to the Fukushima boundary fence? The power plant operator TEPCO issues data every day from measurements taken at one of the gates to the plant. (18) On March 27th, about two weeks after the accident, the level had fallen to about 0.13 mSv an hour – and was continuing to decline at a consistent rate. (In the course of writing this article, the number rose to about 0.17 mSv an hour but then started to decline again.) If someone stood at that point for a year, he or she would receive about 1.1 Sv. This is a very high level – about 400 times background level in the UK – but would not necessarily have fatal effects. Professor Allison argues in a post on the BBC News web site that a figure of 100 mSv a month, or 1.2 Sv a year, would be a good level to set as the maximum exposure for human beings before real risk was incurred. (19)

Radiation intensities obey the inverse-square rule: as we move away from the source of radiation, the level falls with the square of the distance. (This excludes the impact of fallout from an explosion or of radiation carried in plumes of steam.) Thus a reading obtained 2 km away from the plant will be one hundredth of the level at 200 m. In other words, if the monitoring point at the Fukushima gate is 200 metres from the source of the radiation, then at a distance of about 4 kilometres from the plant the incremental radiation today would be no greater than the average UK background dose of around 2.7 mSv per year. Since much of the radiation emanating from Fukushima is iodine 131, which has a half-life of 8 days, the level of contamination of the surrounding area will continue to fall rapidly.
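To make the inverse-square argument concrete, here is a minimal sketch of the calculation. It assumes a point source, no contribution from deposited fallout or plumes, and a monitoring point 200 metres from the source; the 200-metre distance is the assumption used in the text, not a measured figure.

```python
import math

GATE_DOSE_RATE_MSV_PER_HOUR = 0.13   # TEPCO gate reading, 27 March
GATE_DISTANCE_M = 200.0              # assumed distance of the gate from the source
UK_BACKGROUND_MSV_PER_YEAR = 2.7
HOURS_PER_YEAR = 8760

def annual_dose_at(distance_m):
    """Annual dose (mSv/year) at a given distance, assuming pure inverse-square falloff."""
    hourly = GATE_DOSE_RATE_MSV_PER_HOUR * (GATE_DISTANCE_M / distance_m) ** 2
    return hourly * HOURS_PER_YEAR

# Someone standing at the gate all year would receive about 1.1 Sv
print(f"At the gate: {annual_dose_at(200) / 1000:.1f} Sv/year")

# Distance at which the incremental dose falls to the UK background level
distance_m = GATE_DISTANCE_M * math.sqrt(
    GATE_DOSE_RATE_MSV_PER_HOUR * HOURS_PER_YEAR / UK_BACKGROUND_MSV_PER_YEAR
)
print(f"Falls to UK background at about {distance_m / 1000:.1f} km from the source")
```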

The effect on water supplies in Tokyo and elsewhere

The authorities in Tokyo recommended in mid-March that infants should not be given tap water after levels of radiation rose above normal. The peak level reached at a Tokyo water supply plant was 210 becquerels per litre, and this prompted the decision – anxious parents were provided with bottled water instead. (A becquerel is a measure of the number of nuclear decays per second, not of the dose of radiation absorbed.) Children are the most susceptible to the effects, but an infant drinking this water for a year would absorb the equivalent of about 0.8 mSv of radiation, or less than a third of the normal annual absorption of an adult in the UK. (20)

There are significant divergences between different country approaches to radiation in water. The European limit for radiation in public water supplies is set at 1,000 becquerels per litre, nearly five times that declared ‘unsafe’ for infants by the Tokyo authorities. (21) In one study carried out by the British Geological Survey in Tavistock, Devon, private water supplies were found to contain as much as 6,500 becquerels per litre, and no ill effects have been reported.(22)

Although this is not directly stated, we can assume that the large majority of this radioactivity in British water is derived from the decay of radon. This means that in the UK, the level is likely to remain at a roughly consistent level year after year. But in Japan the radiation is more likely to be from the decay of iodine 131, which has a very short half life. So the radiation in Japanese tap water will quickly fall, and already appears to be doing so. Thus the risk of any radiation damage, even for very young children, from drinking tap water in Tokyo is not just small but infinitesimal.

Summary

Overall the average UK person gets approximately 0.2% of his or her radiation exposure from the fallout from nuclear plants (and from nuclear accidents) and less than 0.1% from nuclear waste disposal. This compares to about 15% from medical imaging and other medical exposures and about 10% from the natural decay of potassium 40 and carbon 14 in the body. Naturally occurring radon is many hundreds of times more important as a source of radiation than nuclear power stations and nuclear fallout. Even for those who believe in a direct linear relationship between radiation levels and the number of cancer deaths, the effect on mortality of the normal operation of nuclear power stations would be impossible to discern statistically and in our opinion is likely to be non-existent.

It can only be in the event of a serious accident that we have any reason to be really concerned about nuclear power. We have tried to show in this article that even when such accidents occur the effects may be much less extensive than many people imagine, particularly given the constant media coverage devoted to Fukushima. Chernobyl killed 28 people in the immediate aftermath of the disaster. All these people had experienced huge doses of radiation in a short period. Mortality since the accident among the most heavily dosed workers has not been exceptionally high. And many studies after Chernobyl have suggested that – with the exception of the thyroid variant – cancer rates have only increased very marginally even among those exposed to high doses of radiation after the accident.

While reported rates of other, non-cancer illnesses may have risen, researchers seem to think that much of this rise is due to other factors, such as the stress of evacuation from the area, increased smoking, drinking and other risky behaviours, or even the wider effects of the break-up of the Soviet Union soon after the accident. There is substantial evidence, as the UN reports on Chernobyl attest, that the psychological impacts of fear of radiation far outweigh the actual biological impacts of radiation. Misinformation that exaggerates the dangers of radiation is therefore itself likely to be harmful to large numbers of people – a point which should be borne in mind by anti-nuclear campaigners. This certainly appears to have been the case after Chernobyl and Three Mile Island (in the latter case the radiation released was negligible, but the political fallout immense).

We hope that a more rational sense of risk and an appreciation of what we have learned from past experience will prevent the repeat of this experience after Fukushima. It is important to appreciate that whilst radiation levels at the boundary fence are still high, they are dropping sharply. Even today, March 28th, the radiation exposure of a person a few kilometres from the plant (in the precautionary exclusion zone) is likely to be lower than experienced by many people living in Cornwall or other places with high radon density. Similarly, the peak levels of radiation in the water supply have constantly been well below levels regarded as safe in other parts of the world.

No technology is completely safe, and we do not wish to argue that nuclear power is any different. But its dangers must be weighed against the costs of continuing to operate fossil fuel plants. Just down the road from us is Didcot A power station, a large coal-burning plant with poor pollution control and therefore substantial effects on local air quality, as well as higher emissions of radiation than any UK nuclear power station and a CO2 output of about 8 million tonnes a year. We would argue that Didcot has caused far more deaths from respiratory disease than all the deaths ever associated with nuclear energy in the UK, and that coal power is a far more legitimate target of environmental protest than nuclear.

Chris Goodall and Mark Lynas, 29th March 2011.

(With many thanks to Professor Wade Allison for his help on the research for this article. All errors are ours.)

1 Much of the data in this section is taken from Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, 2008 report to the General Assembly, published February 2011.

2 http://www.world-nuclear-news.org/RS_Contaminated_pools_to_the_drained_2703111.html

3 Prof. Wade Allison, Radiation and Reason, page 100

4 Taken from Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, 2008 report to the General Assembly, published 2011

5 Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, p 19

6 Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, 2008 report to the General Assembly, published 2011, page 15

7 Interview report, http://www.japantoday.com/category/national/view/crewman-of-irradiated-trawler-hopes-bikini-atoll-blast-never-forgotten

8 Preston Dale et al. (2004) Effect of Recent Changes in Atomic Bomb Survivor Dosimetry on Cancer Mortality Risk Estimates, Radiation Research.

9 National Statistics UK http://www.statistics.gov.uk/cci/nugget.asp?id=915

10 American Cancer Society, http://www.cancer.org/Cancer/CancerCauses/OtherCarcinogens/IntheWorkplace/cancer-among-military-personnel-exposed-to-nuclear-weapons

11 American Cancer Society http://www.cancer.org/Cancer/CancerCauses/OtherCarcinogens/MedicalTreatments/radiation-exposure-and-cancer

12 Rowland et al, 1978: Dose-response relationships for female radium dial workers, Radiation Research, 76, 2, 368-383

13 Wade Allison, Radiation and Reason, 2009

14 M. Ghiassi-nejad et al., 2002: Very high background radiation areas of Ramsar, Iran: preliminary biological studies, Health Physics, 82, 1, 87–93

15 British Medical Journal http://www.bmj.com/content/330/7485/223.abstract

16 US National Cancer Institute, http://www.cancer.gov/cancertopics/factsheet/Risk/radon

17 Details of some of these studies are discussed in a 2005 report on the Committee on Medical Aspects of Radiation in the Environment available at http://www.comare.org.uk/documents/COMARE10thReport.pdf

18 Wade Allison, Radiation and Reason, page 127

19 http://www.tepco.co.jp/en/nu/monitoring/11032711a.pdf

20 Professor Richard Wakeford of the Dalton Nuclear Institute, quoted on the BBC News web site at http://www.bbc.co.uk/blogs/thereporters/ferguswalsh/2011/03/japan_nuclear_leak_and_tap_water.html

21 This limit is what is called an ‘action level’. That is, the authorities expect something to be done when higher levels are observed

22 Neil M. MacPhail, A radon survey of Ministry of Defence occupied premises in Her Majesty’s Dockyard, Devonport, unpublished MSc dissertation, University of Surrey 2010.

The cost of delaying nuclear power by a year: 2030 emissions will be 3% higher

This post was first carried on Mark Lynas’s blog at www.marklynas.org. The Fukushima disaster will probably delay the arrival and growth of nuclear power in the UK. Unless the gap is filled by alternative low carbon sources, CO2 emissions will inevitably be higher than they otherwise would be. This note estimates the likely effect.

Last December, the Committee on Climate Change (CCC) produced a carbon budget for the period up to the end of the next decade. It suggested that the UK needs to emit no more than 310 million tonnes of greenhouse gases in 2030. Getting there requires emissions to fall by over 4% a year during the 2020s, a hugely difficult target.

Electricity generation is the easiest major source of carbon emissions to decarbonise and the Committee looks for new nuclear power stations to replace fossil fuel plants from 2018 onwards. By the start of the 2020s, the CCC believes it necessary to install an average of 2 – 2.5 gigawatts a year of new nuclear generating capacity. (Broadly speaking, this means construction of one and a half nuclear power plants a year). The Committee sees renewables and carbon capture (CCS) providing approximately the same amounts of new low-carbon capacity each year. Nuclear is needed because it provides reliable electricity 24 hours a day, unlike wind. In addition, the technology is far more mature than carbon capture, meaning that although the first stations are very unlikely to be completed before 2018, they will still be producing electricity before the first CCS stations.

Put simply, achieving the CCC’s target of cutting emissions from electricity to 50 grammes per kWh, down from about 500 grammes today, seems to require the fastest possible expansion of nuclear. The implication of the CCC’s very robust work is that if Fukushima delays nuclear construction, emissions in 2030 will be higher than they otherwise would be. By how much?

Here are my assumptions.

a) Concerns over the implications of Fukushima delay the UK’s nuclear programme by just over a year. This means that the nuclear programme ramps up later and so has constructed two fewer Areva EPR reactors by 2030.

b) Instead of these two reactors, the UK is obliged to keep equivalent gas-fired capacity on stream. These gas-fired plants emit 350 grammes of CO2 per kilowatt hour of electricity more than the very low carbon electricity from nuclear.

c) An Areva EPR power station generates 1.6 gigawatts for 8,000 hours a year (just over 90% uptime, its design capacity).

Replacing the electricity that would have been generated in 2030 by the two EPRs not built because of the Fukushima-induced delay will result in about 9 million extra tonnes of CO2 per year, just under 3% of the UK’s carbon budget for 2030. At the government’s target carbon price for 2030 of £70 per tonne, the ‘cost’ in 2030 is over £600m a year, and the equivalent of several billion pounds over the decade of the 2020s.
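For transparency, here is a minimal sketch of that calculation, using only assumptions (a) to (c) above and the CCC’s 310 million tonne budget.

```python
# Extra emissions and carbon cost in 2030 from losing two EPRs to a one-year delay.
REACTORS_DELAYED = 2
CAPACITY_GW = 1.6                  # Areva EPR output
HOURS_PER_YEAR = 8000              # just over 90% uptime
EXTRA_G_CO2_PER_KWH = 350          # gas-fired generation relative to nuclear
CARBON_PRICE_GBP_PER_TONNE = 70    # government target carbon price for 2030
UK_2030_BUDGET_TONNES = 310e6      # CCC carbon budget for 2030

generation_kwh = REACTORS_DELAYED * CAPACITY_GW * 1e6 * HOURS_PER_YEAR
extra_tonnes_co2 = generation_kwh * EXTRA_G_CO2_PER_KWH / 1e6  # grams -> tonnes

print(f"Extra CO2: {extra_tonnes_co2 / 1e6:.1f} million tonnes a year")
print(f"Share of the 2030 budget: {extra_tonnes_co2 / UK_2030_BUDGET_TONNES:.1%}")
print(f"Carbon cost: {extra_tonnes_co2 * CARBON_PRICE_GBP_PER_TONNE / 1e6:.0f} million pounds a year")
```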

Nuclear power may turn out to be extremely expensive. Certainly no one watching what is going on at the two construction sites of Flamanville in Normandy and Olkiluoto in Finland can be anything but sceptical of Areva’s bland assurances. Nevertheless, the reality is that a further year’s delay while politicians, regulators and industry calm a worried UK public will make the carbon targets even more difficult to achieve.

2020 renewable heat targets mean converting a third of the UK to woodland

The production of heat is responsible for about half the UK’s total CO2 emissions, and the announcement of the details of the Renewable Heat Incentive (RHI) is a welcome step forward. Many significant issues remain unaddressed – most importantly whether the active encouragement of the use of biomass (primarily wood) is likely to increase pressures on land use. Put simply, are the targets for renewable heat announced today compatible with commitments not to increase deforestation around the world? And will the RHI mean land is converted from agricultural use to wood production, in the UK or elsewhere? The calculations in this note suggest that to achieve the 2020 targets from domestically grown wood, about a third of the UK’s total land area would have to be given over to new forest.

The preamble to the RHI says that the government wants to see about 12% of the UK’s heating provided by renewable sources by 2020. Since about half of all energy use in the UK goes to provide heat, this implies that just under 6% of total national energy consumption will come from renewable heat sources. Not all of this will be wood. The government’s plans mention biomass from the municipal waste stream and biomethane from the digestion of agricultural wastes. But it is almost inevitable that wood will provide the large majority of the total: there just isn’t much energy in domestic waste and agricultural residues.

I have done a series of quick calculations that demonstrate how much wood is needed to provide about 6% of UK energy demand. (For fans of incomprehensible energy numbers, this is about 100 terawatt hours.) This could be provided by about 24 million tonnes of dry wood, burnt in very efficient boilers. Fresh-cut wood is about 40% moisture, meaning that about 40 million tonnes needs to be cut down and then dried.

For comparison purposes, it may be helpful to note that the UK currently produces about 9 million tonnes of forest products a year – somewhat less than 25% of what we will need for wood for energy.

I think that well-managed UK woodlands and land given over to energy crops such as elephant grass (Miscanthus) can produce about 3 dry tonnes a hectare a year, averaged over soil and climate types. So to produce enough wood domestically, we would need to use about 8 million hectares. The UK’s total land area is about 25 million hectares. So to get 12% of our heating needs from wood, we would need to set aside an additional one third or so of the surface of the country for forestry and energy crop production. The figure today is about 12%.
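The chain of quick calculations above can be laid out explicitly. This is a minimal sketch using the round figures in this post; the 3 dry tonnes per hectare yield is my own estimate, as stated above.

```python
# Rough land requirement for meeting the 2020 renewable heat target with home-grown wood.
# The ~100 TWh of heat discussed above is taken to need about 24 million dry tonnes.
DRY_WOOD_TONNES = 24e6         # dry wood needed, burnt in efficient boilers
FRESH_MOISTURE_FRACTION = 0.4  # fresh-cut wood is about 40% moisture by weight
YIELD_DRY_T_PER_HA = 3         # assumed yield of managed woodland and energy crops
UK_LAND_HA = 25e6              # approximate UK land area

fresh_tonnes = DRY_WOOD_TONNES / (1 - FRESH_MOISTURE_FRACTION)
hectares_needed = DRY_WOOD_TONNES / YIELD_DRY_T_PER_HA

print(f"Fresh-cut wood required: {fresh_tonnes / 1e6:.0f} million tonnes a year")
print(f"Land required: {hectares_needed / 1e6:.0f} million hectares, "
      f"about {hectares_needed / UK_LAND_HA:.0%} of the UK")
```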

Of course this won’t happen. We will import the vast bulk of the wood we need. The Forestry Commission publishes estimates of the UK’s likely output of wood well beyond 2020 – after all, this is now planted - and the most we can hope for is an extra 3m or so tonnes, less than a tenth of what we will need.

Other countries have far more woodland than the UK does. The UK’s 2020 renewable heat commitment could be met by giving over less than one quarter of one percent of the world’s forest land. The problem is that the world needs to decrease the pressure to log slow-growing hardwood forests, not to add, however marginally, to the demand for wood as fuel. And the UK’s policies towards renewable heat will probably be copied by other countries, adding to the pressure on world timber stocks. The uncomfortable fact is that we need the world’s land to produce more food, more ethanol and biodiesel for vehicle and aircraft fuels, and more biomass for heat. Although we can use our land more productively, for example by re-establishing forests on the UK’s upland grasslands, the RHI will inevitably add more pressure to food prices – and to the price of wood itself.

‘Zero-carbon’: the sad story of the rapid retreat from a difficult commitment to improving Britain’s new homes

In late 2006 the British government announced that new homes built in 2016 would have to be ‘zero-carbon’. The net emissions from heating the house, providing hot water, running the lights and powering the home appliances had to be no more than zero. Over the last four years this commitment has been progressively weakened. In late February 2011, the committee charged with advising the government set a new target about one third as demanding as the 2007 promise. Battered by builders wanting to continue constructing standard houses and by worries about the expense of achieving the best standards for insulation and airtightness, the government has given up. Four-storey blocks of flats built in 2016 will have emissions standards barely different from well-insulated apartments built today.

The Code for Sustainable Homes came out in December 2006, at the height of enthusiasm for radical changes in the way we consumed energy. British houses are built poorly, with inadequate standards of insulation, high levels of air loss and low-quality windows and doors. The Code was meant to jerk the industry into radical improvements in construction performance. While the use of renewable energy was seen as important, better construction standards and, for example, the use of off-site fabrication of major components were at least as important.

The drive persisted for some time. A consultation document came out in July 2007 affirming the commitment to ‘zero-carbon’. And zero carbon meant what it said: all emissions, including those from running home appliances, were to be covered. The definition was unambiguous.

By December 2008 worries had set in. Would it be possible to install enough renewable energy in the form of wind turbines, solar panels, biomass boilers and community heating systems? Were the proposed insulation standards too tight? A consultation document was put out, entitled ‘Definition of zero-carbon homes and non-residential buildings’. It quietly dropped the idea that the zero-carbon standard should include the electricity for powering appliances, and set a new target of reducing the emissions from heating the house and running the lights to 70% below planned 2010 standards. ‘Zero-carbon’ would be achieved by offset schemes that allowed the housing developer to pay money towards renewable energy schemes elsewhere. (Details of how this might work are still to be published.)

This line held for a time. The government stood by the revised definition of zero carbon and reconfirmed it in July 2009.

But the Coalition government of 2010 took two months to retreat again. Housing minister Grant Shapps said that the 70% target ‘needs to be re-examined’.  (Nevertheless, at the same time, Mr Shapps repeated the Coalition’s promise that it would be the ‘greenest government ever’). Six months later the government’s advisory body Zero Carbon Hub reported back to him, offering a revised view that 44% reductions were right for apartment blocks, 56% for semi-detached houses and terraces and 60% for detached homes.

In tabular form, here is what happened. The numbers refer to the kilogrammes of carbon dioxide (or other greenhouse gases) per square metre of new building and are taken from the latest report of the Zero Carbon Hub. But the units are not important; what matters is the loss of commitment to genuinely low carbon building.

Reduction (kg CO2/m2) from the typical 2006 house’s emissions of 45 kg/m2 | Date | Government document | Policy
45 | December 2006 | ‘Code for Sustainable Homes’ | ‘Zero carbon emissions from all energy use over the year’
45 | July 2007 | ‘Building a Green Future’ consultation document | ‘Zero carbon emissions from all energy use over the year’
19 | December 2008 | ‘Definition of zero-carbon homes and non-domestic buildings’ | 70% reduction in building emissions from heating and lighting; ambition to cancel out other electricity use dropped
19 | July 2009 | ‘Defining an energy efficiency standard’ | Continued with the 70% commitment
– | July 2010 | Ministerial statement | 70% commitment ‘needs to be re-examined’
14 | February 2011 | ‘Carbon Compliance: setting an appropriate standard for zero carbon new homes’ | Commitment reduced to a 44-60% reduction depending on house type

 

In summary, the commitment to reduce household emissions for new home developments from 45 kg/m2 to zero has been reduced to a commitment to cut emissions from 45 kg/m2 to about 31 kg/m2. This is well above what is being achieved today by careful builders constructing genuinely low carbon homes.
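Put as a simple calculation (a minimal sketch using the kg/m2 figures quoted from the Zero Carbon Hub report), the surviving ambition is roughly a third of the original one:

```python
# How much of the original 'zero-carbon' ambition survives the February 2011 standard.
BASELINE_2006_KG_PER_M2 = 45   # emissions of a typical 2006 new house
ORIGINAL_TARGET = 0            # 2006/2007 definition: genuinely zero carbon
CURRENT_TARGET = 31            # approximate emissions implied by the 2011 standard

original_cut = BASELINE_2006_KG_PER_M2 - ORIGINAL_TARGET   # 45 kg/m2
current_cut = BASELINE_2006_KG_PER_M2 - CURRENT_TARGET     # 14 kg/m2

print(f"Remaining ambition: {current_cut / original_cut:.0%} of the original commitment")
```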

What has driven this change? Four things seem to be important.

  • Although we can push insulation and airtightness standards far beyond the Zero Carbon Hub recommendations, no one seems very keen to do so. The proposed energy use standards for 2016 allow more than twice the maximum energy use permitted for Passivhaus buildings, usually taken as the norm for really high-quality construction. Increased cost is a factor, though Passivhaus might only add £6,000-£10,000 to the cost of a semi-detached house.
  • Builders are frightened of moving away from standard UK construction techniques. On a UK building site, gangs of tradesmen construct a house in the open air using techniques that are essentially a century old. The resulting property, they say, is what people want to buy. Better techniques, involving prefabricated panels largely assembled in a dry factory to much higher standards of airtightness, are shunned. While housebuilding sticks to traditional British methods and styles, much better insulation is almost impossible.
  • Assessments of the cost of improving insulation compared to the equivalent price for getting the same carbon benefit from putting up wind turbines show that renewable energy is cheaper than efficiency. Speaking personally, I am not convinced by these calculations – partly because they rely on assumptions about the future price of energy – but the result of the Zero Carbon Hub's spreadsheet work is to suggest that housebuilders would get carbon reductions more cheaply by giving money to offset schemes rather than spending it on energy efficiency.
  • The regulatory load on builders is increasing. They are obliged to devote a proportion of their developments to social housing and to pay for local infrastructure improvements. On brownfield sites needing remediation (that is, exactly where the government wants housebuilding to happen) the costs of construction in some parts of the country are not far from the value of the houses that are built. Add further to costs and homes will not be constructed. Of course this isn’t true in high price areas such as Surrey but it might well reduce the number of new homes across much of England and Wales.

Whatever the reason for the watering down of what we mean by 'zero-carbon', you might expect that the dilution has now stopped. Don't be too optimistic: having agreed the latest specification, the House Builders Association, a trade body, withdrew its support for the proposal for houses. I suspect that we will eventually see regulations for 2016 and beyond that are no better than the agreed 2013 standards. This will leave new homes a very long way from having zero emissions.

Time for a renewed debate on biofuels

The latest world price index from the FAO shows that commodity foods are now more expensive than in the spike of 2008. After a generation of falling prices for basic foodstuffs, the world is now seeing substantial inflation in major crops, taking prices in real terms back up to the level of the 1970s. We are simultaneously watching an unsteady but sharp rise in the price of crude oil as industrial demand, particularly in China, continues to rise. During the inflation of 2008, observers frequently noted the connections between food and oil, focusing on the impact of the biofuels initiatives of the US and EU. These connections are now ever more obvious. Why aren't we demanding that countries rein back their food-into-fuel programmes? In particular, why are European electorates not attempting to reverse the EU's target of getting 10% of all transport fuel from 'renewable' sources by 2020, up from less than 4% today?

We used to think that the food and energy markets were separate: changes in supply and demand in one would not dramatically affect the other. The growth of state-mandated biofuels in the last decade has now intertwined the two markets. The most obvious example is the use of maize to make ethanol (a petrol substitute) in the US. About one quarter of all global production of coarse grains, including maize, is now used to make ethanol (263m tonnes out of 1,102m tonnes, source FAO). The global harvest of these grains, including in the Southern Hemisphere, is forecast to have fallen 2% in the last year while non-food uses rose 2%. These small changes reduced stock levels, helping to tighten world prices. The other main trend, almost unnoticed in the West, is the increasing use of the cassava root – better known as tapioca to middle-aged Britons like me – as a feedstock in Asian ethanol refineries, reducing the already limited supply of this important source of carbohydrate.

I hope that one simple comparison may help illustrate the close interconnection of liquid fuel and food markets. We eat foods principally for their energy content and need about 1,000 calories to survive and about 2,000 to be healthy and active. A calorie is a unit of energy. (Specialists will know that when nutritionists refer to a calorie, they are actually talking of a kilocalorie, or one thousand calories). A simple piece of arithmetic can convert calories into kilowatt hours, the most-used measure of energy use. Our daily need of 2,000 calories is very approximately equal to 2 kilowatt hours. The energy value of any foodstuff can be expressed as a number of kilowatt hours per tonne. Wheat, for example, is about 4,500 kilowatt hours per tonne of grain. At today's wheat prices (about £200 a tonne) the energy in the food is costing us about 4.5 pence per kilowatt hour. A similar calculation for oil shows a figure of almost 4 pence per kilowatt hour. In other words, the energy in oil costs very slightly less than the energy in wheat. But if oil rises to $120 a barrel, and wheat remains at the same price, both cost about the same for each kWh.
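
For anyone who wants to check this arithmetic, here is a rough sketch in Python. The wheat price, the energy content of a barrel of oil and the exchange rate are my own round-number assumptions, not figures from the FAO.

```python
# Rough comparison of the cost of energy in wheat and in crude oil.
# Prices, barrel energy content and exchange rate are assumptions.

KWH_PER_KCAL = 4184 / 3.6e6          # 1 kcal = 4,184 J; 1 kWh = 3.6 MJ

daily_need_kwh = 2000 * KWH_PER_KCAL
print(f"Daily food need: about {daily_need_kwh:.1f} kWh")     # ~2.3 kWh

wheat_kwh_per_tonne = 4500           # energy content of wheat grain
wheat_price_gbp_per_tonne = 200      # assumed wholesale price
wheat_pence_per_kwh = 100 * wheat_price_gbp_per_tonne / wheat_kwh_per_tonne
print(f"Wheat energy: about {wheat_pence_per_kwh:.1f}p per kWh")

oil_kwh_per_barrel = 1700            # assumed energy content of a barrel
usd_per_gbp = 1.60                   # assumed exchange rate
for barrel_usd in (100, 120):
    pence_per_kwh = 100 * (barrel_usd / usd_per_gbp) / oil_kwh_per_barrel
    print(f"Oil at ${barrel_usd} a barrel: about {pence_per_kwh:.1f}p per kWh")
```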

The important implication is this: when oil prices rise, food tends to get sucked from human consumption to use in ethanol and biodiesel refineries. If oil falls in price, grain ethanol becomes uncompetitive, and less of it is used. The long run downward trend in food prices has been interrupted, perhaps for ever, by the close linkage between foodstuffs and liquid hydrocarbons which seem likely to be in increasingly short supply. Our chance to reduce hunger has been catastrophically disrupted by our urgent need for rising quantities of oil and oil substitutes.

We saw a steady fall in world hunger until about 1995. Since then the number of undernourished people has increased sharply but erratically. The current serious bout of food inflation may push the number of undernourished people above 1 billion for the first time this year. There is genuine debate about the impact of biofuels on the degree of food price inflation. Nevertheless, I think that the evidence that diversion of crops into refineries reduces global food stocks and increases the likelihood of sharp spikes in the world prices of foodstuffs is strong. Global food production is relatively predictable from year to year (last year was disrupted by climate-related events but was still probably the third best ever) but the linkage with oil markets has introduced a new source of volatility into demand and thus prices. To be clear, it may be that the world needs higher food prices to encourage increases in supply, particularly in Africa, but unpredictable and reversible rises in foodstuff costs may not provide a consistent price signal for poor farmers and may just induce hunger, as is the case today.

More generally, the rich world's biofuel policies seem to be a clear case of selfishness. In order to improve their security of supply of liquid fuels the prosperous nations are increasing the price of food. As this policy proceeds, the impact on food supply is going to get worse. The world's food production today is somewhere around 4 kilowatt hours per person per day. Some of this is eaten – very inefficiently – by cattle and pigs, reducing the world supply to about 2.5-3.0 kilowatt hours per head. The daily world oil supply of about 85 million barrels is equivalent to about 18 kilowatt hours per person, perhaps six or eight times as much. In other words even if we use the world's entire food production for conversion into fuels, and at 100% efficiency, we can never hope to make more than a small fraction of the total liquid fuel needed from our foods.
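
The comparison can be reproduced with a few lines of arithmetic. The population figure and the energy content of a barrel are round-number assumptions, so the result is only indicative, but it lands close to the 'six or eight times' ratio mentioned above.

```python
# Per-person energy in the world's food supply versus the world's oil supply.
# Population and barrel energy content are round-number assumptions.

population = 6.9e9
oil_barrels_per_day = 85e6
oil_kwh_per_barrel = 1700

oil_kwh_per_person_day = oil_barrels_per_day * oil_kwh_per_barrel / population
food_kwh_per_person_day = 2.75       # mid-point of the 2.5-3.0 kWh range above

print(f"Oil: about {oil_kwh_per_person_day:.0f} kWh per person per day")
print(f"Ratio to edible food energy: about "
      f"{oil_kwh_per_person_day / food_kwh_per_person_day:.0f} to 1")
```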

Such a policy is perhaps the most regressive public policy ever initiated. It makes the petrol of the rich consumer marginally cheaper while pushing hundreds of millions of the world’s poorest into hunger.  Think back, if you will, to the cost of food expressed in pence per kilowatt hour. Today’s wholesale wheat prices tell us that wheat costs about 10p a day to feed an active and healthy person, before considering local costs. We don’t know the precise number of people living on under $1 (60p) a day but it is probably over a billion. For these people, the food price inflation of the last year has reduced their standard of living by perhaps 20%. The car driver in the rich world has probably benefited by less than 1% from the slight reduction in oil prices caused by competition from food-generated ethanol. This is not just madness, it is wicked.

Cuts at the Carbon Trust and EST - perhaps not a bad thing.

(Published on the Guardian web site on 14th February 2011) The Carbon Trust is the latest body to announce a substantial cut in its funding from government. The 40% reduction in its grant income is marginally less severe than the 50% cut imposed on the Energy Saving Trust (EST) a few weeks ago. The job of both these bodies is to reduce energy use and carbon emissions with the Carbon Trust focusing on large companies and the EST on households. They have both claimed major successes in recent years.

So should those of us worried about climate change be upset with the government’s cost savings? I suggest our reaction should be very muted indeed. Both bodies had become bloated and inefficient. I have dealt with many entrepreneurs and small businesses who have found them to be actively unhelpful. Their contribution to the climate change effort may not be worth the money we spend on them.

First of all, we should put the funding cuts in perspective. The EST had a budget of about £70m in 2009/10. But two years earlier, its funding was only £36m, just over half the current level. In other words, the cuts imposed by DECC last month simply take the EST income back to where it was in 2008. And, incidentally, the claim the EST makes for its impact on carbon emissions as a result of work carried out in 2009/10 is actually less than the figure of two years earlier, despite the much higher level of income. Similarly, the number of people employed at the agency has risen sharply in recent years with no apparent impact on the energy savings it achieved.

The position at the Carbon Trust is very similar. Its income rose from just under £100m in 2007/8 to £166m last year. As with the EST, the reduction announced today simply takes its funding from central government back to where it was two years ago. Chief Executive Tom DeLay says that the 40% cut will mean 35 redundancies but this will still leave its employee numbers substantially higher than they were only a couple of years ago.

Both bodies have large and expensive offices in central London. The Carbon Trust, for example, occupies space in an office block in one of the most desirable areas of the City. I have recently watched an entrepreneur struggling to establish his business gasp at the disparity between the conditions in which he works and the standards he saw at the Carbon Trust offices.

Other small business people have commented to me on the ponderousness of both organisations and their lack of industrial expertise. I have also heard one successful entrepreneur refuse to deal again with these bodies because of his strongly expressed fear that details of his technology had been leaked to a competitor. Perhaps these criticisms are unfair but views like these are very widely held amongst companies and individuals working in the low carbon sectors.

Both bodies can rightly claim that their jobs have got much more complicated in recent years, demanding higher allocations of funds from the taxpayer's purse and putting strain on their ability to respond quickly and efficiently. As organisations dependent on pleasing their bosses in DECC and other government departments, they are forced to focus on projects given to them at very short notice, such as the domestic boiler scrappage scheme handled by the EST with three weeks' notice.

Put together, these two agencies spent around £250m in the last financial year. No doubt there was some benefit from this expenditure: for example, the EST communicated in some way with over 3m people to give energy-saving advice and the Carbon Trust invested in some very interesting new technologies. Nevertheless the scale of taxpayers' money being spent at these organisations looks wildly disproportionate. Much of the funding seems to go on bureaucratic activities rather than real research. Contrast the £250m bill with, for example, the total spending on research into the use of deep geothermal energy in Cornwall at around £1m a year, a technology which might produce a measurable fraction of the UK's total electricity needs. If slimming the Carbon Trust and the EST means more money can go directly to pathbreaking research, I don't think we should complain.

A blow that will affect the UK renewables industry for decades

At one stroke Energy Secretary Chris Huhne has guaranteed that Britain will get very little electricity from smaller-scale renewables. He has announced that the 'threat' (his word) from ground-mounted solar panels makes it necessary to urgently review the feed-in tariffs for all solar installations above 50 kilowatts. These tariffs were meant to be stable until April 2012 and would then fall in an orderly and guaranteed way over the next few years. He hasn't said by how much the rates for larger scale PV will fall, or when the reduction will happen. But the uncertainty introduced by his intervention, and his obvious willingness to fiddle with the feed-in tariffs whenever he feels the urge, means that he has destroyed the ability of developers to raise money for any substantial renewables project. The UK renewables industry, from PV to geothermal, needs certainty and stability and this is precisely what has just disappeared.

Solar farms 

The Secretary of State’s actions have been prompted by a rash of planning applications for large ground mounted solar farms, principally in Cornwall because the financial returns will be better there than anywhere else in the UK. Perhaps 30 solar farms would have been built before the PV feed in tariff started to fall under the orderly ‘degression’ laid out by his department.  Not a single panel has yet been placed on the ground and only about four schemes have got planning permission from Cornwall council so far.

These four schemes will probably be constructed, although their financiers will be worried that the Huhne review will be implemented before the farms are completed. A small delay might mean – who knows? – a serious fall in the returns available to investors because a key component doesn't arrive from China in time.

All the other schemes fighting to get through planning processes and to arrange for connection to the grid will probably be abandoned. The DECC Press Office could not tell me when the changes will be introduced but it sounds like sometime over the summer. It is difficult to exaggerate the impact on the PV industry across Britain.

The Cornish PV industry might have built 150 megawatts of capacity in the next year, driving down costs across the industry and giving the UK vital experience of this technology. While this would only have provided about a twenty thousandth of UK electricity demand, it would have been the first time that the UK government had ever seen how to get a substantial renewables industry started and flourishing. One doesn't have to believe that PV is the right technology for Britain – and I certainly don't – to despair at the stupidity of Huhne's move.

But at the first sign of successful commercial industry developing, the government has pulled the rug and the energetic edifice of financiers, construction companies and German firms entering the UK to provide the know-how to get the equipment on the ground will now collapse within weeks. In a symbolic coincidence, one of the most important German solar firms, Conergy, had only just announced its arrival in the UK when the Huhne statement was made. The able entrepreneurs just beginning to see a role in the renewables industry will return to other sectors where their talents aren’t abused in this way.

Why is this the end of any substantial hope for smaller-scale renewables in the UK?

Most renewable projects take substantial amounts of time to get to the point at which construction is complete.

For example, the process of getting a solar farm running takes up to a year. (Wind takes far longer).

1)      The developer finds a site that is suitable

2)      The developer approaches the landowner and agrees terms

3)      The scheme is costed and planned. The developer needs to work out how the farm will be constructed and how it will connect to the local grid

4)      Planning permission is applied for and local consultation takes place. This may take four months

5)      Environmental assessment will happen. Will the site affect breeding birds, for example?

6)      The funds necessary for the farm will be raised

7)      Construction can start, and connection cables will need to be dug to the nearest electricity transformer

 To get any substantial project from the start of this process to the start of  installation takes hundreds of thousands of pounds of investment.  That is before considering the cost of the panels themselves. To get the project financed, the investors must believe in the reliability of their returns. By indicating today (February 7th 2011) that rates will go down sometime ‘in the summer’ but giving no indication of the amount of the reduction, Chris Huhne has made it impossible for any PV project above 50 kilowatts to go ahead anywhere in the country.

But, more generally, his action has demonstrated that any renewable technology which is attracting real commercial interest will be arbitrarily penalised to ensure it doesn’t develop. The Conservative 2010 manifesto said ‘Britain needs an energy policy that is clear, consistent and stable’. Today’s announcement is the opposite.

The case that the FiT scheme wasn't designed for larger-scale projects

The Huhne announcement suggests that the Cornish PV developers were somehow abusing the FiT regime and that he was forced to act against the ‘threat’ that they represented. The DECC press office told me that ‘commercial companies were exploiting’ a scheme designed for smaller projects.  It told me that larger-scale solar installations ‘weren’t anticipated’.

The response to this is obvious. Throughout its long-drawn-out evolution the Feed-in Tariff was always intended to offer a guaranteed rate to projects up to 5 megawatts. A 5 megawatt photovoltaic installation will cost about £14m today. Did the government not realise that this scale of investment can only be made by a large and well-funded company? And if it didn't want large companies to get involved, why did it offer FiTs up to the 5 megawatt level, instead of stopping, say, at 100 kilowatts?

 Huhne’s press office said that the 5 megawatt limit was a mistake made by the previous government and the current administration was merely rectifying it. But here’s what the Lib Dems (Huhne’s party) said about the FiT scheme in their 2010 manifesto -  ‘ Liberal Democrats will .. ensur(e) the feed-in tariff scheme which guarantees prices for micro-generation will benefit community-scale, small business and agricultural projects of under 5MW.’  The truth is that all parties supported the Feed in Tariff in all respects and Huhne is wrong to claim otherwise.

To make a real difference to electricity supply, renewables must be implemented on an industrial scale. If we continue to treat renewables as a charming and quaint cottage industry and refuse to allow entrepreneurs to prosper, we will leave the UK energy industry in the hands of the six oligopolists who currently dominate the industry. Today we have just thrown away another chance to get innovation and cost-reduction in the UK energy industry.

Addendum: the impact on PV

About 20,000 installations of PV have now been completed in the UK. The average size is about 2.5 kilowatts and the total capacity therefore about 50 megawatts. (Many of these smaller installations on domestic roofs will have been incentivised by FiTs of 41p per kilowatt hour, compared to the much lower  29p paid to the prospective Cornish solar farms.)  These 20,000 installations will produce about the same amount of electricity as nine of the Cornish farms that will have been abandoned in the last 24 hours.
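
The comparison is easy to reproduce, provided we assume annual yields for domestic roofs and for Cornish ground-mounted farms. The yield figures in the sketch below are my assumptions, not measured data, but they show why roughly nine 5 megawatt farms match the whole domestic fleet.

```python
# Comparing the existing 20,000 domestic installations with nine 5 MW farms.
# The kWh-per-kWp yields are assumptions, not measured figures.

domestic_installations = 20_000
average_size_kwp = 2.5
domestic_yield = 850       # assumed kWh per kWp per year for tilted roofs
farm_yield = 950           # assumed kWh per kWp per year for Cornish ground mounts
farm_size_kwp = 5_000      # one farm at the 5 MW FiT ceiling

domestic_gwh = domestic_installations * average_size_kwp * domestic_yield / 1e6
nine_farms_gwh = 9 * farm_size_kwp * farm_yield / 1e6

print(f"Domestic fleet: about {domestic_gwh:.0f} GWh a year")
print(f"Nine 5 MW farms: about {nine_farms_gwh:.0f} GWh a year")
```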

Domestic PV is almost completely irrelevant to our electricity needs. To make a meaningful difference to our renewables targets we need hundreds or even thousands of solar farms. Italy put in place 2 gigawatts of PV capacity last year and 5 gigawatts is expected this year. This is one hundred times the scale of the UK.

'Peak Travel'

A new paper suggests that the industrial world may be close to 'peak travel'.(1) After half a century of rapid increase, growth in the number of kilometres travelled per person has slowed or even reversed in advanced countries. Car travel per person appears to have peaked at around 4,000 miles per year in Japan, around 7,000 miles in Europe and about 9,000 miles in Australia and Canada. The US, at around 13,000 miles per person, may be seeing a sustained fall. While this decline is probably partly driven by rising fuel prices and economic stagnation, there is growing evidence of a saturation of the need for car travel. Similar results are seen for public transport and for domestic air travel – and indeed for freight transport. The implication is that the rich world may not have to choke off the demand for mobility by further huge increases in fuel taxes or road pricing because other forces are already depressing the amount of travel. (The authors don't say this, but I suspect that their results would not be as striking if they had included international air travel, where demand is still growing.) 'Peak travel' would be unambiguously good for emissions, as about 25% of all CO2 emissions in developed countries arises from motorised travel and up to about 5% from air flights. Improved fuel efficiency in cars and the move to electric vehicles will cut fossil fuel use rapidly if the number of miles travelled continues to plateau.

 At first sight, the flattening of the number of kilometres travelled is surprising: most econometric models have shown personal transport demand continuing to increase for decades to come. These models are driven by the assumption that as people get richer they travel more. For example, the US Energy Information Administration’s model forecasts the number of miles driven using estimates of future growth in disposable income, fuel prices and demographic adjustments for the number of elderly people and females in the driving population.

 Similarly, here is the latest forecast from the UK’s Department for Transport. It shows a 43% increase from 2003 to 2035 in the number of vehicle kilometres.

 

England, forecast change compared to 2003 – central forecast

Year | Traffic (vehicle km) | Congestion (lost time/km) | Journey time (time/km)
2015 | +7% | +6% | +1%
2025 | +25% | +27% | +4%
2035 | +43% | +54% | +9%

 Source: Road Transport Forecasts, 2009, UK Department for Transport

 The annual increase to 2035 is about 1.1% a year. This figure is consistent with the trends of recent decades (see below) and is used by the UK’s Committee on Climate Change in its central forecasts for the next few decades.

Historic Growth in Traffic, GDP and Oil Prices, Average Annual Growth

Decade Traffic Oil Prices GDP Comments
1950s 8.4% -0.5% 2.4% Strong increase in 1st car ownership
1960s 6.3% -3.7% 3.1% Strong increase in 1st car ownership
1970s 2.9% 24.4% 2.4% Oil Crises
1980s 4.7% -10.3% 2.3% Strong growth post 1982, falling oil prices
1990s 1.4% -2.9% 2.1% Early 90s recession, fuel duty escalator
2000-2007 1.2% 15.4% 2.8% Steady traffic growth, rapidly increasing oil prices

Source: Road Transport Forecasts, 2009, UK Department for Transport 

What is going on? Why are the traditional models of transport demand showing continued growth but actual distances travelled tending to plateau and fall? The authors of the paper speculate that the obvious reason is that most people don’t want to spend more and more time in cars or buses. The amount of daily time occupied in travel is, they say, about 1.1 hours and any increase is unwelcome to most people. And as congestion increases, the distance travelled in the typical 1.1 hours per day is inevitably tending to fall and we are now seeing that decrease in national travel statistics.

So it looks like the modellers had it wrong. Travel demand will probably not continue to increase because – unlike truly desirable activities, such as going out to a restaurant or buying nice clothes – most of us see travel as a chore rather than a pleasure. Motorised transport is getting more efficient as fuel economy improves and the world switches to electric cars. Even if GDP continues to increase we may see a sharp decline in transport-related emissions.

(1)   Are we reaching peak travel? Trends in passenger transport in eight industrialized countries. Adam Millard-Ball and Lee Schipper, submitted to Transport Reviews.

Oil industry sees barely a bend in the curve of global emissions

BP gave us its forecasts for world energy use in mid January. It sees global energy consumption rising by about 1.7% a year to 2030, down slightly on the 1.9% recorded over the last 20 years. Most of the increased use comes from fossil fuels. Here are BP's figures for the increase in fossil fuel use over the period to 2030:

Gas – up 2.1% a year

Oil (including biofuels) – up 0.9% a year

Coal – up 0.3% a year

While renewables are forecast to become increasingly important, growing at about 8% a year, this is not enough for low-carbon sources (including nuclear) to provide even half  of the supply to meet the growth in world energy demand.

Measure | 2010-2030 | 1990-2010
Energy use | +1.7% p.a. | +1.9% p.a.
Increment provided by fossil fuel | 64% | 83%
Increment from renewables | 18% | 5%
Expected increase in greenhouse gases from energy use | +1.2% p.a. | +1.9% p.a.

 

If BP is right, CO2 emitted from the burning of fossil fuels rises from about 29 billion tonnes in 2010 to about 38 billion tonnes in 2030. The growth all arises in the developing world and the company says that OECD emissions will be 10% lower in 2030 than they are today. Overall, the BP figures suggest that the world has no hope of achieving the goal of stopping carbon dioxide levels rising above 450 parts per million, the level that many analysts believe approximately equates to a 2 degree temperature rise.

The International Energy Agency suggests that achieving a peak of 450ppm requires the world to reduce emissions from energy use to below 25 billion tonnes within twenty years, but the oil company's figures are over 50% higher. If BP is right, the rise in temperatures is likely to be at least 3 degrees and probably 4 degrees, enough to change the distribution of world agriculture and population to an unprecedented extent.
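
A quick calculation shows how these figures hang together. The sketch below simply compounds BP's implied growth rate and compares the result with the IEA ceiling; differences from the figures above come from rounding the growth rate.

```python
# Compounding BP's implied growth rate and comparing with the IEA ceiling.

emissions_2010 = 29.0      # billion tonnes of CO2 from fossil fuel burning
annual_growth = 0.012      # BP's implied growth in energy emissions to 2030

emissions_2030 = emissions_2010 * (1 + annual_growth) ** 20
iea_ceiling = 25.0         # billion tonnes needed for a 450 ppm pathway

print(f"Implied 2030 emissions: about {emissions_2030:.0f} billion tonnes")
print(f"Excess over the IEA ceiling: about "
      f"{100 * (emissions_2030 / iea_ceiling - 1):.0f}%")
```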

BP’s estimates of greenhouse gas emissions growth are only slightly lower than the IEA’s estimate of the annual percentage change in energy demand with no further mitigation measures over the next 20 years (1.5% for the IEA figure, compared to 1.2% for BP). In other words, BP is deeply pessimistic about the impact of the developed world’s carbon reduction programmes, suggesting that they will have little impact.

As the latest report from the UK’s Committee on Climate Change points out, the national commitments made after the Copenhagen conference by OECD countries would cut emissions by about 55% by 2030, a vastly greater decrease than BP expects will actually happen in the industrial world. Governments are telling a very different story to that offered by the oil companies.

Global precipitation levels hit new high in 2010

Climate change scientists have consistently predicted that increasing greenhouse gas concentrations will increase global average precipitation levels. At the same time, many areas will see increased drought. 2010 shows early evidence for this forecast. The US National Climate Data Centre said earlier this week that the year equalled the hottest on record and this finding reached some of the newspapers. Less noticed was NCDC's calculation that global rainfall levels were the highest since at least 1900. This year's anomaly (the departure from the long-run average) was over 50 millimetres, up significantly on the two previous peaks of about 45 millimetres. (For a sense of scale, Oxford's rainfall averages between 600 and 750 millimetres a year.)

http://www.ncdc.noaa.gov/sotc/service/global/global-prcp-anom/201001-201012.gif

This information probably doesn't surprise any of us. In mid January 2011 we are seeing very serious flooding in Brazil, Sri Lanka and Australia. And the list of countries that suffered extreme rainfall in 2010 is a long one. So although UK residents – who have just been through the coldest December in living memory – have increasingly little faith that the climate is warming, they may more easily believe that rainfall is increasingly heavy.

NCDC explicitly links some of the main flooding events in 2010 with the concurrent high temperatures. Importantly, it says that the unprecedented Pakistan floods (affecting 20 million people) were connected to the extremely hot air masses holding temperatures over Russia at high levels in summer of last year.

But global weather patterns are never consistent. Although Pakistan had catastrophic floods, Bangladesh had its driest monsoon season for fifteen years. Some parts of Brazil and Peru, including the vital Amazon region, were also very dry. The UK had less rain in the first half of the year than in any comparable period for half a century. Ontario had very little spring snow and Canada as a whole had its driest winter since national records began in 1948.

One of the most troubling things I experienced during 2010 was delivering talks about climate change, mentioning the risks of extreme rainfall such as in Pakistan and being told by sceptics in the audience that the cause of floods is not high levels of rain but poor agricultural practices or increased urbanisation speeding the rate of water runoff. Both of these explanations have a grain of truth, but I hope that the increasingly obvious global threat from high levels of rainfall, and the increasing intensity of that precipitation, gets people to reconsider whether climate change is causing more floods.

A little dishonesty on future electricity prices: the new proposals for reform of the power market

The UK intends to reform the way the electricity market operates in order to encourage the growth of low-carbon power production at the expense of fossil fuel generation. The Department of Energy and Climate Change said in December that it would both increase the levy on CO2 from power stations, making coal and gas electricity more expensive, and also reward low-carbon electricity by guaranteeing high prices for nuclear and wind power. Both these changes will tend to increase power prices to businesses and households. Nevertheless, the Department claimed that its proposal would not affect customers significantly, noting 'small impacts on bills in the near term, but in the longer term bills are expected to fall by 2030' (1). Can bills really be lower as a result of changes designed to increase wholesale prices?

Let's get one thing clear to start with. Despite any impression given to the contrary, electricity bills are going to rise sharply under the government's proposals. Wholesale power changes hands today at around 5-6p a kilowatt hour. Charges for transmission, distribution, customer service and VAT, as well as retailer profit, add another 7p to this total, meaning a retail price to household customers of about 12-13p. The government's December consultation documents make clear that its purpose is to double the wholesale price of electricity to around 11-12p by 2030. If the increase in wholesale charges is simply passed on by the retailers, the cost to householders will rise to 18-19p, an increase of about 50% on today's prices. Although the government may wish to disguise this fact, the future increase in prices is a fundamental part of its policy. The average householder is going to be paying about £200-£250 more per year for electricity.
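
The arithmetic is simple enough to set out explicitly. In the sketch below the household consumption figure is an assumption of mine; the other numbers come from the paragraph above.

```python
# A minimal sketch of the household bill arithmetic.
# The annual consumption figure is an assumption.

wholesale_now = 5.5        # p/kWh, mid-point of the 5-6p range
other_charges = 7.0        # transmission, distribution, retail margin, VAT
retail_now = wholesale_now + other_charges          # ~12.5p

wholesale_2030 = 11.5      # p/kWh, mid-point of the 11-12p target
retail_2030 = wholesale_2030 + other_charges        # ~18.5p

annual_consumption_kwh = 3300    # assumed typical household electricity use

extra_cost = (retail_2030 - retail_now) * annual_consumption_kwh / 100
print(f"Retail price rises from about {retail_now:.0f}p to {retail_2030:.0f}p per kWh")
print(f"Extra cost per household: roughly £{extra_cost:.0f} a year")
```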

Why does it need to increase prices in this way? The simple fact is that gas-fired power stations are producing power at a full (‘levelised’) long-term cost of around 5p per kilowatt hour. No low-carbon technology is currently remotely competitive with this figure. Research commissioned by the Department suggests a figure of 7-8p for nuclear power and perhaps 9p for onshore wind. (And regular readers of Carbon Commentary will know that 8p for nuclear seems a suspiciously low figure).  Unless the cost of gas-fired generation rises by at least 4p, the generators will simply continue to pile their capital into this form of power station and the UK will end the next decade with relatively little offshore wind and few nuclear power stations under construction.

Current estimated costs of generating one kilowatt hour at a modern CCGT (gas) power station

Fuel 3.4p
Carbon price in ETS 0.4p
Operating costs 0.4p
Contribution towards capital costs 0.7p
   
Total 4.9p

 The National Grid’s seven year forecast from spring of last year suggested that generators are planning to build about 17 gigawatts of CCGT power stations – about one quarter of current UK generating capacity - before mid 2017, compared to only about 12 gigawatts of new wind. And because well-sited wind will typically produce only 35% of its rated capacity over the year, this 12 gigawatts is only equivalent to about 4 gigawatts of actual generation, a quarter of the new gas power station output.

The government, and its adviser The Committee on Climate Change, want to largely decarbonise the UK’s electricity production by 2030. The precise target is to reduce average emissions to less than 100 grammes of CO2 per kilowatt hour, 20% of today’s level. The aim will not be achieved if gas (producing about 350 grammes of CO2 per kWh) forms the backbone of UK electricity generation. The implication for policy-makers is obvious: gas needs to be made more expensive to price it out of the market.

The December proposals seek to achieve this objective by increasing the price of carbon dioxide from about £12 a tonne now to £70 in 2030. In addition, the government’s forecasts see the price of gas rising to about 74p per therm, compared to about 55p in winter 2010/2011. These two forces will increase the cost of gas-generated power. A countervailing reduction will arise because a) there will be small efficiency improvements in CCGT, with about 60% of the energy value of gas being turned into power, up from about 57% today, and b) power station construction costs are likely to slip slightly from their recent and unusual figures of over £1,000 a kilowatt.

My approximate assessment of the net impact is as follows.

Cost of producing electricity from gas in 2030

Current cost per kWh 4.9p
Impact of increasing CO2 tax to £70 per tonne +2.0p
Increasing fuel price +1.3p
Efficiency gains -0.3p
   
Net cost in 2030 7.9p
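
For readers who want to see where these increments come from, here is a sketch using the assumptions stated above (350 grammes of CO2 per kWh, a £70 carbon price, gas at 55p and 74p a therm, and efficiency rising from 57% to 60%). The results land within a fraction of a penny of the table, with the differences down to rounding.

```python
# Deriving the components of the 2030 gas generation cost from the stated
# assumptions. Small differences from the table are rounding.

KWH_PER_THERM = 29.3

def fuel_cost(pence_per_therm, efficiency):
    """Fuel cost in pence per kWh of electricity generated."""
    return pence_per_therm / KWH_PER_THERM / efficiency

carbon_now = 0.4                              # p/kWh, current ETS cost
carbon_2030 = 0.350 * 70 / 10                 # 350 g/kWh at £70/tonne -> ~2.45p
carbon_increment = carbon_2030 - carbon_now   # ~2.0p

fuel_increment = fuel_cost(74, 0.57) - fuel_cost(55, 0.57)     # ~1.1p
efficiency_gain = fuel_cost(74, 0.57) - fuel_cost(74, 0.60)    # ~0.2p

total_2030 = 4.9 + carbon_increment + fuel_increment - efficiency_gain
print(f"Carbon increment: +{carbon_increment:.1f}p")
print(f"Fuel price increment: +{fuel_increment:.1f}p")
print(f"Efficiency saving: -{efficiency_gain:.1f}p")
print(f"2030 cost of gas-fired power: about {total_2030:.1f}p per kWh")
```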

 

The government’s proposal to raise the carbon dioxide price to £135 a tonne by 2040 will add a further 2.3p per kilowatt hour, taking the cost of generating electricity to over 10p. If it does this, nuclear and onshore wind will almost certainly be cost-competitive, particularly if the government guarantees the prices that these technologies obtain in the electricity market. Generating companies looking at whether to invest in CCGT plants may have second thoughts.

All this is fairly straightforward, and probably sensible if you think fossil fuels need to be driven out of electricity production. But the question remains: why does the government say there will be no impact on prices from its twin promises to sharply raise the carbon price and to guarantee returns to low-carbon generators? The unfortunate truth is that the Department has employed a trick to disguise what it is doing, hoping commentators wouldn't notice.

The carbon pricing proposals it has put forward for consultation see the CO2 costs rising quite gently to 2020, followed by sharp rises to 2030 and beyond. These will be written into law. The trick is that in its December documents it assumes that the free market price of CO2 within the European trading system, currently languishing at about £12 a tonne, will anyway rise almost as fast as the UK's new mandatory carbon price in the next decade. Since power stations are all already within the ETS, gas generators would pay much higher carbon costs anyway. By 2030, the European market price and the UK's legally enforced level would be identical. The counter-factual against which the UK is presenting the impact of its mandatory minimum carbon price is a guess about the future evolution of the market price for CO2, not today's levels or even future market indications. And if the government thinks the European price will reach £70 by 2030, the carbon price legislation it proposes for the UK will have no effect.

This linguistic trick seems to me to be verging on the dishonest. People deserve the truth: under the Department's assumptions about fuel and carbon prices, electricity prices for householders will rise by about 50% by 2030 if we are to largely decarbonise generation by that date.

There is another problem not squarely faced by the Department. Participants in the market for gas see a real possibility that prices may fall, not rise, over the next decade or so. The shale gas revolution is really changing the gas market. If prices fall from today’s levels, the attempt to use the carbon price alone to drive the cost of gas generation above alternative low-carbon technologies will fail. Even the Climate Change Committee acknowledged the possibility in its recent 4th Carbon Budget. Here’s what they said:

‘The International Energy Agency has estimated the scale of unconventional gas resources and the range of costs of production. These suggest that the gas price of 76p per therm in 2030 under the central fossil price scenario is towards the high end of the range of supply costs (actually, it is above all but the prospective costs of some Artic gas sources) while the DECC’s lowest price of 35p/therm and the current price of 40-50p per therm are closer to the middle of the range’ (p265).

The unfortunate  truth is that if the generating companies really think that gas is going to cost 35p a therm in 2030, they will still want to invest in CCGT and not wind or even nuclear. No-one is very comfortable with this fact but the decarbonisation of electricity generation will not easily take place while generating assets remain privately held and utilities are completely free to continue to invest in fossil fuel power stations.

5th January 2011

Just how ambitious are the UK Climate Change Committee’s targets?

   The CCC suggested this week  (December 7th 2010) that the UK should aim to reduce its emissions by about 46% below today’s levels by 2030. (60% below 1990 figures). Its plans suggest a reduction of 1.5% a year for this decade, followed by about 4.4% per annum in the following ten years. These are very rapid changes and rely on the successful implementation of decarbonisation initiatives across all parts of society. What does data from recent history tell us about the scale of the task?

The European Environment Agency has recently published a series of factsheets on energy use and CO2 emissions across the EU-27 over recent years. I have tried to tabulate the main results to give a sense of how challenging a task the UK is setting itself.

The evolution of energy demand and emissions in four sectors of the European economy

Sector | Energy usage change | CO2 output change | 'Energy efficiency' change | Comment
Households | +0.5% | -1.1% | Up 1.0% |
Transport | +1.4% | +1.4% | Up 1.0% | Much faster improvement from 2000 onwards
Manufacturing | +1.0% | -1.1% | Up 2.1% | Energy usage is per employee; slower improvement from 2000
Services (1997-2007) | +0.6% | n/a | Up 1.3% | Efficiency is measured as kWh per unit of value added

Source: http://www.eea.europa.eu/data-and-maps/indicators/#c7=all&c5=&c0=10

What are the key messages from this table? First, energy demand tends to grow substantially less than GDP. Nevertheless, we can expect energy use to continue to increase, probably at a rate of about 1% less than the economy as a whole. Second, some sectors, including personal transport over the past few years and manufacturing in the nineties, have shown improved efficiency of energy use at rates of more than 2% a year. But complete decoupling of energy use from growth has certainly not been possible thus far. So the CCC's targets must rely on decarbonisation of energy supply.

The crucial change, of course, is moving electricity generation from fossil fuels to low carbon sources. This will enable lower emissions from heating (for example through the widespread use of heat pumps) and from transport (through the use of electric cars). The problem is that many of the CCC's cost estimates for low carbon generation look optimistic. Although this year's report has accepted, unlike some of the Committee's past analyses, that nuclear costs are going to be higher than predicted a few years ago, it still uses a figure of only 7p per kilowatt hour for fully utilised plants. The uncomfortable reality is that the cost will almost certainly be over 10p. And the risk is that the long-term abundance (and thus cheapness) of gas will always mean that the generators want to build gas turbines (as is happening at the moment). These gas plants, offering cheap, reliable electricity and relatively low carbon emissions, are almost certainly going to stop decarbonisation at the rate the CCC says we need. The Committee's insistence that we can get to the 2030 targets for less than 1% of GDP looks increasingly implausible.

Electric cars - scepticism finally starts to fade

Those who follow unusual ways of forecasting the future have got an interesting new source of information for assessing the prospects for electric cars. The first production Chevy Volt electric cars have just rolled off the production line in Michigan for sale in a small number of US markets from early next year. The first of these extraordinary vehicles has gone to the GM museum and the second was put into a charity auction, closing on 14th December. After three days, the bidding has reached $180,000, over four times the forecourt price of the car. For fans of electric vehicles, this little nugget of data suggests that at least some bidders see the Volt as potentially a stunning success. You wouldn’t bid much if you felt the car would fail lamentably. Also this week, National Grid CEO Steve Holliday said that his company’s base forecast is for 1 million electric cars in the UK by 2020. Electric cars will be about a fifth of UK car sales from 2016 onwards, he said. Note: these numbers are not consistent – current UK sales are about 2 million a year, so one fifth is about 400,000 cars, or 1.6 million vehicles sold from 1.1.2016 to 31.12.2019. But his enthusiasm was nevertheless clear. Since he is partly responsible for ensuring the UK has the electricity system available to charge these cars, his opinion matters. Other forecasts, such as from strategy consultants BCG in January of this year, also see electric and hybrid cars representing a quarter or more of total sales by the end of this decade.

The Chevy Volt wins praise from almost all of those who have driven it. Luxurious and well-styled for the American market, reviewers uniformly call it a ‘real’ car. Its European version, the Vauxhall/Opel Ampera, is similarly highly regarded. The engineering of this vehicle makes it unique. The car is always powered by electricity, as will be the Nissan Leaf, but when the batteries run low a small petrol-powered generator recharges them as the car drives. The 16 kWh battery gives about 25-50 miles of driving, depending on the temperature and how you drive, and the generator kicks in automatically after this point. A full battery and a full tank gives the driver about 300 miles of range. Farewell ‘range anxiety’.

I looked at the patterns of driving of UK cars to estimate how many miles a year the average Volt/Ampera will be powered by electrons and how many by petrol. The National Travel Survey's 2009 figures give estimates for the number of miles driven each year by the average UK car and split this into journeys of various lengths. Typically a car is driven about 8,400 miles a year (and this number is now falling). My estimate is that about 1,600 miles will be travelled using petrol, less than 20% of the total. (These figures assume that the driver typically gets 40 miles of driving before the petrol engine starts). If this figure seems surprisingly low, consider the research finding that the average driver (not the same as the average car) only takes 7 trips a year, on all modes of transport, over 100 miles.
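
The method can be illustrated with a toy calculation. The journey-length distribution below is invented for the purpose – it is not the National Travel Survey data – and it assumes the battery is full at the start of each journey, but it shows why the petrol share comes out so low.

```python
# Illustrative split of annual mileage between battery and petrol.
# The journey mix is invented; each journey is assumed to start fully charged.

electric_range_miles = 40

# (journey length in miles, journeys per year) -- hypothetical mix summing
# to roughly 8,400 miles a year.
journeys = [(3, 600), (8, 330), (20, 75), (60, 25), (150, 7)]

electric_miles = sum(min(length, electric_range_miles) * count
                     for length, count in journeys)
petrol_miles = sum(max(length - electric_range_miles, 0) * count
                   for length, count in journeys)

total_miles = electric_miles + petrol_miles
print(f"Total: {total_miles} miles a year")
print(f"On petrol: {petrol_miles} miles ({100 * petrol_miles / total_miles:.0f}%)")
```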

Very roughly, and assuming that the Volt/Ampera is charged on overnight cheap electricity, the savings will be about £600 a year in fuel costs. The list price of the car in the US is about $41,000. Translated directly into UK £, and VAT added, the figure is about £32,400, substantially more than a Nissan LEAF. A £5,000 subsidy for early buyers brings the figure down to £27,000 or so. Making a direct comparison to an equivalent internal combustion engine car is difficult because of the high quality of the Volt’s fittings. But it is probably about £7,000 above the fossil fuel competition. (Knowledge of cars is not my strong suit – different opinions very welcome). Apart from the fuel savings, there’ll be no excise duty to pay and insurance for electric cars is looking as though it is less than equivalent petrol vehicles. Industry people suggest that depreciation rates should be lower – there are far fewer moving parts and mechanical wear will be very much less.

Nevertheless, electric cars are still not quite direct competitors to internal combustion engines. The key problem is the battery, currently costing over $1,000 per kilowatt hour of storage. The Volt has 16 kWh and the Leaf 24 kWh, so the cost problem is obvious. The association for the promotion of electric car batteries in the US has set a target of $250 per kWh, but consultants BCG see a realistic number as $360-440 per kWh by 2020. If the BCG figures are achieved, the cost of the Volt's batteries will fall by about $9,000 (just less than £6,000) by 2020. Optimistic forecasts from GM's Chevy division seem to indicate that other components will also fall in price substantially, implying that by about 2020 the forecourt price of an electric car will be little more than a petrol-engined vehicle. And then there are the fuel savings.
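
The battery arithmetic, spelled out, looks like this; the exchange rate is an assumption and I have used the top of the BCG range, which gives the most conservative saving.

```python
# Battery cost arithmetic for the Volt's 16 kWh pack.
# The exchange rate is an assumption; $440/kWh is the top of BCG's range.

battery_kwh = 16
cost_now_usd_per_kwh = 1000
cost_2020_usd_per_kwh = 440
usd_per_gbp = 1.60                   # assumed exchange rate

saving_usd = battery_kwh * (cost_now_usd_per_kwh - cost_2020_usd_per_kwh)
print(f"Pack cost falls by about ${saving_usd:,} "
      f"(roughly £{saving_usd / usd_per_gbp:,.0f})")
```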

What about CO2? The Volt is only moderately low emissions when in petrol mode, delivering about 145 grammes of CO2 per kilometre. The lowest emission models are now tipping below 100 grammes in Europe. So for the typical mileage and average UK CO2 emissions from electricity generation, the savings are negligible when compared to a small fuel efficient car such as the Citroen DS3. Against a new business saloon car with emissions of 160 grammes per kilometre, the savings are about 0.7 tonnes a year, or about a third. As the percentage of low carbon electricity supplied to the Grid rises, the CO2 savings increase.
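
My reconstruction of that comparison is below. The grid carbon intensity and the car's electricity consumption per kilometre are assumptions, so treat the output as indicative rather than precise.

```python
# Reconstructing the CO2 comparison. Grid intensity, electric consumption
# per km and the petrol/electric mileage split are assumptions.

KM_PER_MILE = 1.609
annual_miles = 8400
petrol_miles = 1600                  # from the estimate above
electric_miles = annual_miles - petrol_miles

grid_g_per_kwh = 500                 # assumed average UK grid intensity
volt_kwh_per_km = 0.2                # assumed consumption in electric mode
volt_petrol_g_per_km = 145
saloon_g_per_km = 160

volt_tonnes = (electric_miles * KM_PER_MILE * volt_kwh_per_km * grid_g_per_kwh
               + petrol_miles * KM_PER_MILE * volt_petrol_g_per_km) / 1e6
saloon_tonnes = annual_miles * KM_PER_MILE * saloon_g_per_km / 1e6

print(f"Volt: about {volt_tonnes:.1f} tonnes of CO2 a year")
print(f"Saloon: about {saloon_tonnes:.1f} tonnes; saving about "
      f"{saloon_tonnes - volt_tonnes:.1f} tonnes")
```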

If the Volt is as successful among users as it has been among motoring journalists, the huge investment in this vehicle may help GM finally return to being the technology leader among world car companies, a position it probably lost in about 1965. Perhaps as important, it will provide a signal to all other manufacturers that the era of the electric car is finally here. Or rather back again since many of the most interesting vehicles of the early 20th century ran on electric batteries.

James Delingpole on DDT

 Some of the most vocal climate sceptics employ a very effective tactic when attacking those who want action on greenhouse gases. They try to show that the environmental movement, which is now so exercised by the threat of climate change, has made many substantial and costly mistakes in the past. Why, the sceptics say, should anybody believe the environmentalists on the issue of global warming when these people have been wrong about so many other things? The example of DDT is often used as an illustration of how wrong-headed scientists can cause untold suffering by their work. This propaganda needs a response. Here’s what James Delingpole wrote about this insecticide on 5th November 2010.(1)

‘The near global ban on DDT – inspired by Rachel Carson’s junk science bestseller Silent Spring – had caused millions to die of malaria.’

And then, a few sentences later

‘What ABOUT those millions and millions that Rachel Carson inadvertently massacred with her entirely unfounded claims about the effects of DDT on birdlife?’

Let’s work through these statements word by word.

a)      ‘The near global ban on DDT.’ The word order is important. Delingpole presumably knows that there is no ‘ban’ on the use of DDT for use in disease prevention but that its use is prohibited in agriculture. So when he writes ‘near global ban’ he hopes that we misunderstand this to mean that it is banned across most of the globe rather than the correct statement that DDT is subject to a ‘global near-ban’ which restricts its application to malaria prevention.

b)      ‘Rachel Carson’s junk science bestseller’. Silent Spring is one of the most influential books ever written and its publication in the US can be seen as the start of the modern environmental movement. Carson noticed a reduction in birdlife in many rural areas and blamed agricultural pesticides. Her work has been challenged many times, but no-one has ever contended that the core thesis of the book - immoderate use of pesticides had caused severe loss of wildlife - was wrong. Not even the most committed opponent of the restrictions on DDT says, for example, that the insecticide does not affect the reproductive success of birds at the top of the avian food chain by thinning the eggshell.

c)      ‘Junk science’ The words ‘junk science’ are increasingly employed by the global warming sceptics to cast doubt on the validity of claims made by climate science. It is difficult to rebut epithets like this. Delingpole is trying to link his claims about DDT to his views about those people who want to reduce greenhouse gas emissions. He wants us unconsciously to absorb the message that all environmentalism involves ‘junk science’.

d)     ‘Caused millions to die of malaria’. DDT continues in use in anti-malaria campaigns. Rapid progress is being made in many parts of the world on malaria eradication. Those countries where malaria is still not effectively controlled, principally in Africa, are losing the battle not because of a ban on DDT but because of poor public health provision and, for example, the growth of sunlit pools of stagnant water after deforestation. DDT is probably less used as an insecticide in tropical countries than it would have been had we not seen its effects on wildlife but Carson is hardly to blame for this.

e)      'Rachel Carson inadvertently massacred'. Delingpole knows that most people don't see environmentalists as wicked but he can try to portray them as gullible and dangerous fools. So he suggests that Carson didn't set out to kill millions but her benighted adherence to her erroneous views caused great suffering. We are expected to understand that environmentalists concerned about global warming are equally misguided and destructive.

f)       'Entirely unfounded claims about the effects of DDT on birdlife'. There probably isn't a single scientist alive today who contests Carson's central thesis that the effects of DDT are severe. Not all types of bird suffer from its effect but in addition to birds at the top of the food chain any creature, such as the robin, that eats earthworms is affected by the chemical. Carson was not the first to notice this. Here is the first paragraph of a 1958 scientific report by Roy J Barber:(2) 'The main purpose of this paper is to call attention to the possibility that moderate applications of DDT under certain conditions can be concentrated by earthworms to produce a lethal effect on robins nearly one year later.'

Rachel Carson was a quiet, cautious person who carried out her science with precision and care. She never suggested that society should give up insecticide use but told us of the huge, unforeseen impact of massive and indiscriminate use of under-researched chemicals, often sprayed from the air in volumes that would now seem utterly horrifying. Climate sceptics that use her work on DDT as an example of the deleterious impact of environmentalists are profoundly mistaken. She represented science at its careful, thoughtful best.

Silent Spring is dedicated to Albert Schweitzer and carries his words on its title page. ‘Man has lost the capacity to foresee and to forestall. He will end by destroying the earth’.  This sentiment is as relevant to today’s climate challenge as it was to the over-use of insecticides half a century ago.

(1)   http://blogs.telegraph.co.uk/news/jamesdelingpole/100062459/why-being-green-means-never-having-to-say-youre-sorry/

(2)   http://www.jstor.org/stable/pdfplus/3796459.pdf?acceptTC=true

Solar roads

Most people will be surprised, but Italy was the first country in the world to build motorways. In fact, the A8 "Milano-Laghi" motorway (connecting Milan to Lakes Como and Maggiore) was completed in 1926. But Italy will soon be able to claim a new "first": the A18 Catania-Siracusa motorway, a 30km addition to Sicily's 600km motorway network, will be a fully solar-powered motorway, the first of its kind. Work is well under way to complete commissioning of the road, which will be the most advanced motorway in Europe, with many outstanding features: control systems, surveillance apparatus, tarmac quality and safety (one of its new tunnels has been commended for its safety levels). Construction is complete, and a quarter of its solar photovoltaic (PV) panels were already operational by the end of September. Pizzarotti & Co., the general contractor for this project, aims to have all of them online by early December. Road testing is due in November, and on 1st January 2011 the Catania-Siracusa motorway will open to the public. By then, 100% of its electricity needs will be met by the PV panels installed along the road: 80,000 of them. Lights, tunnel fans, road signs, emergency telephones – all the services and street furniture installed on the A18 will run on solar power. Distributed over a surface of 20 hectares, the photovoltaic array was created by building three artificial tunnels on a 100m wide, 2.8km long stretch of road, at an overall cost of €60 million. Annual solar electricity production is estimated at about 12 million kWh, which will save – the constructors claim – the equivalent of around 31 thousand tons of oil and 10 thousand tons of CO2 emissions every year.
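
As a plausibility check on those numbers: 12 million kWh from 80,000 panels works out at about 150 kWh per panel per year, which is roughly what a module of a little over 100 watts would deliver at a sunny Sicilian site. The sketch below uses assumed values for the panel rating and annual yield.

```python
# Plausibility check: 80,000 panels and about 12 million kWh a year.
# Panel rating and annual yield are assumed values.

panels = 80_000
panel_rating_kwp = 0.12        # assumed ~120 Wp modules
yield_kwh_per_kwp = 1300       # assumed annual yield for a Sicilian site

capacity_mwp = panels * panel_rating_kwp / 1000
annual_million_kwh = panels * panel_rating_kwp * yield_kwh_per_kwp / 1e6

print(f"Array capacity: about {capacity_mwp:.1f} MWp")
print(f"Annual output: about {annual_million_kwh:.0f} million kWh")
```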

The Catania-Siracusa motorway is one of the first experiments in which major infrastructure and distributed power generation are integrated in a single design, and surely the first at this scale. Furthermore, all the green areas involved in this project will be subject to a major environmental renovation scheme: the contractor has provided for the planting of thousands of trees and plants, improving existing tree lines and hedges and increasing the extent of local woods.

This is not, however, the first time that renewable energy and sustainability have been central to a road project in Italy. In the last few months, again in Sicily, solar panels totalling 368 kWp were installed along the A20 Messina-Palermo: they now provide electricity for all the buildings located along the 183km motorway. A thousand kilometres away in northern Italy, the A22 Brennero motorway (which crosses the Alps towards Austria) saw the installation of a soundproofing barrier alongside a residential stretch of the route: the 1km long barrier is made of solar panels able to produce some 680,000 kWh per year, covering 20% of local electricity needs.

But this new approach to energy management is not confined to single, isolated schemes. A further, more significant example of the shift in Italy's infrastructure design can be seen in the LED road lights and photovoltaic car park shelters being rolled out across all Italian motorways by Autostrade per l'Italia (ASPI), the leading European concessionaire for toll-motorway construction and management, which operates more than 3,400km of the 6,500km Italian motorway network. Autostrade per l'Italia has launched a series of initiatives to promote the use of renewable sources for electricity production and to improve energy efficiency in its buildings and infrastructure. The plan provides for the generation of electricity from renewable sources, energy-saving measures for tunnel and service-area lighting, the replacement of heating and air conditioning systems with high-efficiency plant, the use of underground geothermal energy to produce heat and electricity, tri-generation (the production of electrical, heating and cooling energy) in the main office buildings, and finally "passive" improvements at its headquarters buildings in Rome and Florence and at outlying structures (section departments, maintenance points and snow points). These actions should cut CO2 emissions by about 40% and bring substantial savings in maintenance costs. In 2009, 6,378 lighting fixtures were replaced, and the 2010 installation programme covers a further 10,766 LED units, reaching approximately 50 per cent of the total. An extended programme to build more than 100 photovoltaic generation sites is also being completed in 2010: a first phase provided for patented PV sun-shading shelters at 87 service areas (a total of 4MWp), while phase 2 involves the design and construction of several PV sites ranging from 200kWp to 1MWp (a mix of stand-alone and integrated modules), adding a further 3MWp.

With Italy’s solar energy boom now invigorated by the renewed Feed-In Tariff scheme “Conto Energia”, which has attracted enormous investment and given the country second position in the global PV market (with a projected 1,500MW of capacity installed in 2010 alone), energy-driven design is finally making its way into mainstream thinking.

Carlo Ombello at www.opportunityenergy.org

Falling southern English wind speeds - another problem for the renewables industry

One south of England wind farm faces default on its bank loans because wind speeds have been as low as any ever recorded and electricity output has therefore been much less than expected. So far this year the power delivered by the turbines has been less than two thirds of what would be predicted in a typical year, meaning its cash flow is failing to meet even the most pessimistic projections. Statistical analysis suggests that electricity output this low should occur only once every several hundred years. Another wind farm in southern England reports similar results, with electricity delivery this year so limited that it would only be expected once a century. 2009’s figures were almost as bad. These results will make it much more difficult to get bank finance for English wind farms: if the rules of thumb for wind speeds are turning out to be incorrect, financial institutions will be much more cautious about lending. This note looks at the likely variability of wind farm output and compares it with solar photovoltaic power. Although the returns on PV are likely to be lower, does the recent decline in southern English wind speeds make solar an easier investment to finance?

(At the end of this note I ask UK PV owners to forward me their output records so that I can assemble a central database – needed if we are to persuade banks to lend large sums on solar installations. Erratic output makes financing much more difficult.)

Wind farms

When a wind farm is being planned, the developer erects a wind speed meter, usually for one year, at the site of the proposed turbines. This was thought to give investors and banks a firm indication of likely electricity output. But the recent extraordinarily low average wind speeds have made these measurements seem much less reliable.

The consulting engineers who measure wind speeds usually provide developers with figures for average speeds across the year. They translate this into typical power output for the turbines chosen for the site. They also provide a ‘P90’ figure, an estimate of the electricity output that will be exceeded in nine out of ten years. This figure is, of course, lower than the average expected output.

I have looked at the business plans for three wind farms in the UK that were constructed in the last ten years. The P90 output figure is about 87% of the mean expected output. In other words in 90% of all years, the electricity delivered will be at least 87% of the estimated average. (For those of a statistical bent, the expected standard deviation in power output is about 10%).
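For those who want to reproduce this relationship, here is a minimal sketch of how the implied standard deviation falls out of a P90 figure, under the simplifying assumption that annual output is normally distributed (the consulting engineers may well use more sophisticated methods); the 87% figure is the one quoted above.

# Back out the implied year-to-year standard deviation from a P90 figure,
# assuming annual wind farm output is normally distributed (a simplification).
from statistics import NormalDist

mean_output = 1.0   # expected annual output, normalised to 1
p90 = 0.87          # output exceeded in nine years out of ten (87% of the mean)

z10 = NormalDist().inv_cdf(0.10)     # 10th-percentile z-score, about -1.28
sigma = (p90 - mean_output) / z10    # implied standard deviation

print(f"Implied standard deviation: {sigma:.1%} of mean output")   # roughly 10%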

Two of these wind farms publish details of their actual output to shareholders. In one case, output so far this year has been well below the P90 figure. I have carried out some simple statistical analysis suggesting that output this low should only be seen once every several hundred years. Similar, but less extreme, results are seen at the other wind farm.
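The same assumption can be run the other way round, turning an unusually poor year into an approximate return period. The 72% figure below is purely illustrative and is not taken from either wind farm’s accounts.

# Convert an observed low-output year into a rough return period, again
# assuming annual output is normally distributed. Figures are illustrative only.
from statistics import NormalDist

sigma = 0.10      # standard deviation as a fraction of mean annual output
observed = 0.72   # illustrative: a year delivering 72% of expected output

prob = NormalDist(mu=1.0, sigma=sigma).cdf(observed)   # chance of a year this bad or worse
print(f"Probability {prob:.4f}, i.e. roughly once every {1 / prob:,.0f} years")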

These wind levels may just be exceptional – the unprecedented persistence of Arctic high pressure over the 2009/10 UK winter certainly reduced typical wind speeds. Some people have suggested that the summer melting of Arctic sea ice is affecting wind patterns. It could also be that general meteorological conditions have militated against high winds over the last few years. Or it may be that the engineers have simply underestimated, by a wide margin, the underlying degree of natural annual variability in southern UK winds. In other words, the average estimated power outputs for these wind farms may be correct over a period of decades, but the chance of the actual figure in any particular year being much higher or much lower is significantly greater than has been projected.

In all of these cases the implications for English wind farm developers are potentially severe – the banks will be willing to put up a much smaller fraction of the total cost. Their financial models, which require wind farms to be able to pay back the interest and loan principal even in unusually bad years, will have to be revised.

Solar

There are few month-to-month records of the output of solar PV installations in the UK. We know that the expected output has a much tighter distribution than wind power because solar insolation levels are more stable year on year. But just how much more reliable is PV? I did some simple statistical analysis of the monthly output from the installation on my roof to provide some estimates for those looking to get banks to help finance their own purchases of PV panels. This is not great data but will provide a reasonable figure for those trying to guess at just how variable the cash flow from PV will be.

I have six years of monthly figures from our house. (In one month the failure of one of the inverters affected the figures for fourteen days and I have adjusted for this.) My analysis shows that the annual P90 figure for a small installation in central southern England is about 3% below the expected average output. To give a simple illustration, if the expected output (which can be obtained from several databases on the web) is 1,500 kilowatt hours, then in nine years out of ten the actual output will be greater than 1,455 kilowatt hours.(1) In other words, the natural variability of solar PV output from year to year is only about one quarter that of wind power, at least as far as I can measure it in Oxford.
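For anyone wanting to run the same calculation on their own system, this is roughly how such a P90 figure can be derived; the yearly totals in the sketch are illustrative numbers, not my actual meter readings.

# Estimate an annual P90 figure for a PV installation from yearly output totals,
# assuming the totals are roughly normally distributed. Sample data are illustrative.
from statistics import NormalDist, mean, stdev

annual_kwh = [1480, 1520, 1455, 1500, 1470, 1510]   # illustrative yearly outputs

mu = mean(annual_kwh)
sigma = stdev(annual_kwh)
p90 = mu + NormalDist().inv_cdf(0.10) * sigma       # output exceeded nine years in ten

print(f"Mean {mu:.0f} kWh, P90 {p90:.0f} kWh ({p90 / mu:.1%} of the mean)")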

I also looked at the difference between the best and worst 12-month periods in the six years for which we have had PV. The worst yearly output (March 2009 to February 2010) was 1,388 kilowatt hours and the best was 1,552 (May 2007 to April 2008), with the lower figure being 89% of the higher. This variability is dwarfed by the figures from the English wind farms for which I have data.

In order to lend significant sums to PV developers, banks will need good records of the degree of variability of output. They cannot be expected to lend cash to installations if there is a reasonable prospect of cash flows being insufficient to service the loan. If you have any data on the weekly, monthly or yearly output from your PV installation in the UK, please may I have a copy in order to keep a centralised database tracking the variability of electricity output? Thank you.

(1) Anybody wanting the actual six year output figures from my house to carry out their own modelling is welcome to email me. (chris@carboncommentary.com)

Chris Huhne's statement on nuclear, October 18th 2010

Chris Huhne’s announcement of a further consultation on the government’s National Policy Statements on energy gave him a chance to clarify the government’s stance on the provision of ‘subsidy’ for new nuclear power. His parliamentary statement says that:

  • there will be ‘no public subsidy for new nuclear power’
  • this means ‘no levy, direct payment or market support for electricity supplied or capacity provided’
  • unless, and this is a very big unless, ‘similar support is also made available more widely to other types of generation’
  • and, moreover, he is ‘not ruling out action by the Government to take on financial risks or liabilities for which it is appropriately compensated or for which there are corresponding benefits’.

What lies behind this new declaration? It suggests that the government will actually be prepared to provide financial support in the form of

  • a guaranteed carbon price (because this will affect all electricity generators equally)
  • and/or guarantees for capital raising, probably in the form of credit insurance (because such insurance would ‘appropriately compensate’ the government for the risk)

Carbon price

This web site has previously suggested that the carbon price necessary to get new nuclear construction in the UK may be as high as £110/tonne of CO2. (A similar figure is probably needed for offshore wind, which is currently being subsidised through the alternative mechanism of Renewable Obligation Certificates – ROCs).

This high figure is driven by the cost of constructing the new Areva EPR reactor in Finland and its implications for what a similar reactor would cost in the UK. Areva recently announced that it has now made provisions of €2.6bn against the cost of finishing the Finnish contract, implying that the total cost of constructing the OL3 plant is thought to be at least €6bn. If EDF and the other potential operators of new nuclear in the UK believe that UK reactors will cost this much, then the floor for the carbon price will indeed need to be at least £110 a tonne.

Areva’s latest financial presentation admits that the second EPR site, at Flamanville in Normandy, is likely to cost almost as much as the Finnish reactor. But the two other reactors being built at Taishan in China look as though they will be constructed at much lower cost. Areva’s own estimate for the total bill in China is about €1,500 per kilowatt, or about €2.5bn for the 1.6 gigawatt plant, around 40% of the Finnish cost. If the construction cost were similar in the UK, the carbon price required to incentivise EDF would be very much lower than the £110/tonne of CO2 I suggested earlier. The figure would be nearer £50/tonne.
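A quick back-of-envelope check of the gap, using only the figures quoted above (my own arithmetic, not Areva’s):

# Compare the implied construction cost per kilowatt of the Finnish and Chinese EPRs,
# using the figures quoted in the text above.
ol3_cost_eur = 6.0e9       # at least €6bn for the 1.6 GW Olkiluoto 3 plant
taishan_cost_eur = 2.5e9   # about €2.5bn for a 1.6 GW plant at Taishan
capacity_kw = 1.6e6        # 1.6 gigawatts expressed in kilowatts

for name, cost in [("Olkiluoto 3", ol3_cost_eur), ("Taishan", taishan_cost_eur)]:
    print(f"{name}: about €{cost / capacity_kw:,.0f} per kW")

print(f"Taishan comes out at roughly {taishan_cost_eur / ol3_cost_eur:.0%} of the Finnish cost")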

Will the cost of UK reactors be more like Finland or China? Does the huge reduction in EPR cost in China arise because of lower local labour costs or because Areva has effectively learnt the lessons of the utter financial disaster at OL3? Will Areva continue to reduce the cost of the EPR as it gets more skilled at managing costs downwards? These are the critical questions that face EDF’s nuclear team in the UK.

Here’s what the respected nuclear industry construction cost sceptic Professor Steven Thomas said: ‘The future of the EPR is clearly in doubt. Construction work on the two orders in Europe has gone appallingly wrong, the process of getting generic safety approval (in the US) is long delayed and continues to throw up serious unresolved issues, and estimated costs are continuing to escalate at an alarming rate.’

Credit insurance

The UK has previously indicated a strong preference for a high carbon price to provide the umbrella under which nuclear power stations can obtain financing. The US has favoured credit insurance, with the government taking a risk on the construction costs. The last few weeks have seen setbacks for this policy as the potential nuclear operators have backed away from nuclear construction, partly in light of the high price for the insurance.

The Calvert Cliffs 3 project was a flagship for the federal government’s guarantee programme. (Credit insurance of this kind guarantees the lending banks repayment of the debt raised to construct a nuclear power station.) It collapsed in early October as the consortium backing the plant finally faced what had long been obvious: the likely cost of the government loan guarantee (said to be more than 8% of the construction cost) was crippling. Add in the recent clear declines in electricity demand and rising shale gas production, and it no longer seems to make financial sense to build new nuclear power in the US. The other three or four projects negotiating for federal guarantees are also likely to fall away.

The upshot of all this is that the UK government’s promise not to ‘subsidise’ new nuclear is looking increasingly incompatible with rapid progress on construction of Areva EPRs (or the Westinghouse equivalent) in the UK. If we want nuclear, 'subsidy' is inevitable.

Monbiot's pig

The last few weeks have seen George Monbiot write passionately in support of measures to improve biodiversity. To the surprise of many, he has also given us a reasoned defence of eating meat. Are these two themes consistent? Can biodiversity be maintained if world food production includes a significant amount of farmed meat? A new paper in the Proceedings of the National Academy of Sciences suggests strongly that George cannot both maintain an omnivorous diet and defend biodiversity.(1) Now, and far more so in the future, livestock farming imposes stresses on global ecosystems that are incompatible with maintaining species diversity. Pelletier and Tyedmers’ article demonstrates that livestock farming pushes human society over three important global ecological boundaries: biomass use, the nitrogen cycle and climate change. In each case, failure to remain within the sustainable limits creates unmanageable pressure on biodiversity.

Monbiot’s defence of meat eating is based on two assertions. First, he says that Simon Fairlie’s book Meat: A Benign Extravagance shows convincingly that livestock farming is only responsible for about 10% of world emissions. Second, a move away from the factory farming of ruminants (predominantly cattle) towards a system that fed pigs and chickens on our agricultural and domestic food waste would significantly reduce the climate change impact of livestock. It would also decrease the diversion of cereals such as maize to animal feedlots and away from human use. Both of these assertions may be true. But they don’t tell the whole story.

First, climate change. We know that temperature increases and changes in rainfall patterns are already seriously affecting the survival prospects of many types of plant, animal and insect life. The global target of keeping temperature increases below 2 degrees C above the pre-industrial level will require us to cut annual emissions to no more than about one tonne of CO2 per person by 2050. Pelletier and Tyedmers’ article says that emissions from livestock are already equivalent to about 52% of this level, rising to about 70% by 2050 as the dietary habits of the rich world are adopted by today’s industrialising nations. If we are to meet our emissions targets, large-scale livestock production, particularly of methane-producing cows and sheep, is impossible. Switching to pork and chicken makes only a limited difference to emissions levels.

The impact of livestock production on the possibility of staying within the other important boundaries is even worse. At present humankind uses about one quarter of the total net production of biomass across the world each year and this figure is rising as the planet’s population increases. The food we take from the world’s surface is perhaps one half of this figure, either directly or in the form of biomass eaten by farmed animals. Livestock farming accounts for about 58% of total food-related biomass use. Increasing numbers of farmed animals will require more food and the new paper estimates that livestock alone will use up 88% of the total sustainable amount of global biomass that can be appropriated by humankind for its own purposes. The increasing need to devote land to producing food for animals (for example, soybeans in what was the Amazon rainforest) necessarily implies a reduction in the space and plant diversity available to sustain threatened species.

Lastly, the paper looks at the impact of livestock farming on the amount of reactive nitrogen on the planet’s surface. Nitrogen atoms in the air are bonded to one another in pairs to form a stable molecule. Humankind has found ways of breaking this bond and turning nitrogen atoms into important components of artificial fertilisers. ‘Half the synthetic fertiliser ever used on Earth has been applied in just the last 15-20 years’, say Pelletier and Tyedmers. Adding fertilisers to crops increases the amount of food produced, but at a cost to the many species of plant and animal life that would otherwise live alongside the crop. The improvements in food productivity, largely driven by the need to feed huge numbers of livestock and an increasing population, have resulted in drastic ‘ecosystem simplification and biodiversity loss’. If George Monbiot wants space for threatened species to live, he cannot also allow it to be used for heavily fertilised monocultures to feed the world’s farm animals.

Argument rages over what the sustainable amount of reactive nitrogen added to the soil might be. The paper’s authors suggest tentatively that today’s use of nitrogen is about three times the acceptable future level. Cutting back will affect crop yields adversely. Today, over half the world’s corn production is fed to animals. If we continue with livestock farming at its current levels, reducing nitrogen fertiliser use will therefore reduce the amount of grain left for human use. So if we are to give priority to maintaining biodiversity and feeding the extra three billion people expected by 2050, livestock farming has to shrink dramatically. Today’s ultra-intensive production techniques for food, whether feedlot production of cattle or highly fertilised grain production, are unlikely to be possible in the not-far-distant future.

The unavoidable conclusion is that meat eating is going to have to become an unusual luxury for all of us. Or it will have to be grown in a test tube. Animal sources of protein will have to be replaced by plants such as soybeans, whose ecological impacts are up to two orders of magnitude lower than those of cattle farming.

On three separate grounds, then, the new PNAS paper says that livestock farming is difficult to accommodate without increasing the rate of biodiversity loss. Nevertheless, let’s be kind and allow George Monbiot a little meat and dairy. We’ll accept his view that pigs are better than cattle and get him a sow for his back garden, happily eating the kitchen waste. George consumes about 2,000 calories of food energy a day and we’ll assume that the unconsumed food in his household allows the sow a daily 1,000 calories that would otherwise have been thrown away. (In order to get a fat pig, he will therefore have to share the pig with a neighbour.)

A pig kept outdoors might be able to turn those 1,000 calories of food waste into 150 or 200 calories of meat. Don’t believe the figures you see suggesting higher conversion efficiencies: they assume that the pig is fed the highest quality maize and is not free to take much exercise that would burn off some of the energy value of the waste food. Even at the maximum of 200 calories of meat production a day – a couple of bacon rashers – George will only get about 10% of his energy intake from animal products, compared to the European average of about 30%. The unfortunate truth is that a little bit of meat and dairy may be compatible with keeping within global ecological constraints, but nothing like today’s levels of meat and dairy consumption.
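For anyone who wants to vary the assumptions, the arithmetic behind that 10% figure is simply this:

# Rough arithmetic for the 'one pig fed on kitchen waste' example above.
daily_intake_kcal = 2000       # George's daily food energy intake
waste_kcal = 1000              # household food waste fed to the sow each day
conversion_efficiency = 0.20   # optimistic waste-to-meat conversion (15-20% range)

meat_kcal = waste_kcal * conversion_efficiency
share_of_diet = meat_kcal / daily_intake_kcal

print(f"{meat_kcal:.0f} kcal of pork a day, about {share_of_diet:.0%} of daily intake")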

(1) Nathan Pelletier and Peter Tyedmers, Forecasting potential environmental costs of livestock production 2000-2050, PNAS Early Edition, October 2010

Nuclear power: green power?

(This is the text of a talk given at the Science Museum's Dana Centre on September 23rd 2010.) I’ve spent most of today at a prospective site for a large solar farm in Cornwall. My colleagues and I think we can find space for 2 megawatts of capacity here, costing about £7m. Cornwall has the best solar radiation in the UK, but we would need 150,000 farms like this, covering over 5% of the UK, to meet current annual electricity needs.

Last week I did some work on a very small wind farm in Norfolk, helping a parish council assess its impact on the community. The farm will provide about 3.5 megawatts at peak and probably cost around £5m. We’d need something like 50,000 installations of this size to replace annual electricity consumption. David MacKay says we’d have to devote all of Wales to wind farms to get to this amount.

The Committee on Climate Change indicates that we must almost completely decarbonise electricity production by 2030. So here are some numbers for the cost of decarbonising current levels of electricity production using the most viable current technologies: solar PV, around £1,000 billion; onshore wind, about £250 billion. By contrast, just adding more gas power stations to replace closing coal, oil (and nuclear) plants would cost the country about £50bn.
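These figures are nothing more than the two example projects above scaled up to national demand; the sketch below shows that arithmetic (the gas figure is taken as stated rather than re-derived).

# Scale the two example projects up to current national electricity demand,
# using the counts and per-project costs given in the text above.
solar_farms_needed = 150_000
solar_cost_per_farm_gbp = 7e6    # £7m for a 2 MW solar farm

wind_farms_needed = 50_000
wind_cost_per_farm_gbp = 5e6     # £5m for a 3.5 MW (peak) wind farm

solar_total = solar_farms_needed * solar_cost_per_farm_gbp
wind_total = wind_farms_needed * wind_cost_per_farm_gbp

print(f"Solar PV: about £{solar_total / 1e9:,.0f}bn")      # roughly £1,000bn
print(f"Onshore wind: about £{wind_total / 1e9:,.0f}bn")   # about £250bn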

Herein lies problem number one: renewables remain expensive. Decades of underinvestment in R+D have left us without a significant UK renewables industry. We are now investing less than a sixth of the mid-1970s level in energy research. If we’d put a billion a year into marine energy, rather than a very sporadic few tens of millions, we might now be in a different position, able to exploit the tides and the waves at a reasonable price. We’d have built an industry capable of serving the world.

But money isn’t the only problem. To get decarbonisation by 2030 we need not only

a) Large resources of private and public capital

b) Continued expensive R+D, often of dubious immediate productivity

But also

c) Huge political support, including a tolerance for expensive failure of new technologies

d) A high and reliable carbon price

e) A willingness to spend billions building stronger transmission links to Norway, Iceland, the Netherlands, Ireland and France to deal with intermittency, and to meet the need to export electricity surpluses when the winds are blowing hard

f) A ruthless programme of peak shaving, introducing schemes to ensure that demand never rises above pre-determined levels

g) Active demand management, meaning, for example, that our fridges turn off when wind speeds fall unexpectedly in Scotland

h) Huge investment in energy storage, probably including massive subsidy of electric cars and electric charging points

As a digression, my eco-friends tell me we can achieve most of what we want by ‘energy efficiency’. Well, it is true that heating and transport are highly wasteful users of energy in the UK: a petrol car is only about 25% efficient and the UK’s heat losses through buildings are a national disgrace. But the efficiency with which we use electricity cannot be increased dramatically, particularly in the home. Anything that uses electrical resistance to generate heat – an iron, a toaster, a washing machine, a heater or a kettle – is already close to 100% efficient. We can improve the efficiency of our domestic heat pumps, in the form of fridges and freezers, somewhat. Consumer electronics have some room for efficiency gains, but these tend to be wiped out by increases in the number of devices in the home. Yes, we can move from fluorescent to solid-state lighting, but lights are only 15% of home power consumption. It is worth noting that domestic electricity consumption has barely fallen in the recession.

And any future efficiency gains are going to be outweighed by the need to move transport and home heating towards using electricity. Electric cars and heat pumps for our homes are vital ingredients in national plans but will eventually add at least 50% to our electricity needs.

So my conclusion is that achieving our low carbon ends by 2030 is virtually impossible using renewables. Although offshore wind is speeding up – congratulations today to the new Thanet wind farm – we simply aren’t moving at the rate we have to. For example, we now have about 3,000 wind turbines compared to almost 20,000 in Spain. Britain – the Saudi Arabia of renewable energy sources – has made nothing like the commitment it needs.

The result is we will get more gas and, even worse, coal. Just today I got a PR release from the energy practice at Ernst and Young, banging a drum to the effect that the UK needs to let its large existing coal plants escape EU pollution rules and continue operating after 2015. Similarly, the owners of Didcot, just down the road from where I live, are beginning to soften up the local press for a campaign to keep this coal-burning dinosaur open.

This is why nuclear may be necessary. It is expensive, probably hugely expensive, and highly problematic in other ways. But it is backed by the Big Six suppliers, and it seems that if we guarantee a carbon price we can probably persuade them to provide the capital and organisational resources to make nuclear happen. Make no mistake: we need these companies, and their access to the banks and bond markets, to enable us to meet our decarbonisation objectives. Microgeneration, expensive renewables and other small initiatives simply aren’t enough.

‘But what about Olkiluoto?’, I hear you say. The new Finnish nuclear plant is years behind schedule and will almost certainly cost more than twice the contracted price. Remember, though, that China is now constructing 25 nuclear power stations, mostly using an Areva design, and expects to have more nuclear capacity by 2020 than the UK’s entire current generating capacity. China is going to iron out the design defects of the EPR for us.

We are left with no alternative but to go, perhaps slightly shamefacedly, to RWE, EON and EDF and ask them just how high electricity prices need to go to get them to start a crash programme building new nuclear. It goes without saying that the national negotiating position is not strong. We might have wished for another route, but all other options have disappeared.