Cabot Institute blog

Find out more about us at www.bristol.ac.uk/cabot

Monday, 30 September 2013

Measuring our world: Notes from the V.M. Goldschmidt Conference

Galileo Galilei
‘Measure what is measurable, and make measurable what is not so.’ - Galileo Galilei

Science is measuring.

Of course, it is about much more than measuring. The scientific approach includes deduction, induction, lateral thinking and all of the other creative and logical mechanisms by which we arrive at ideas. But what distinguishes the ideas of science from those of religion, philosophy or art is that they are expressed as testable hypotheses – and by testable hypotheses, scientists mean ideas that can be examined by observations or experiments that yield outcomes that can be measured.

Earth scientists use astonishingly diverse approaches to measure our world, from the submolecular to the planetary, from bacterial syntrophic interactions to the movement of continental plates. A particularly important aspect of observing the Earth system involves chemical reactions – the underlying processes that form rocks, fill the oceans and sustain life. The Goldschmidt Conference, held this year in Florence, is the annual highlight of innovations in geochemical methodologies and the new knowledge emerging from them. 

Geochemists reported advances in measuring: the movement of electrons across nanowires, laid down by bacteria in soil like electricians lay down cables; the transitory release of toxic metals by microorganisms; daily emissions of methane from bogs and annual emissions of carbon dioxide from the whole of the Earth; the history of life on Earth as recorded by the isotopes of rare metals archived in marine sediments; and the chemical signatures in meteorites and the wavelengths of light emitted from distant solar nebulae, both helping us infer the building blocks from which our own planet was formed.

******

The Goldschmidt Conference is often held in cities with profound cultural legacies, like Florence. And although Florence’s legacy is perhaps dominated by Michelangelo and Botticelli, Tuscany was also home to Galileo Galilei, and he and the Scientific Revolution are similarly linked to the Renaissance and Florence. Wandering through the Galileo Museum is a stunning reminder of how challenging it is to measure the world around us, how casually we take many of these measurements for granted, and of the ingenuity of those who first cracked the challenges of quantifying time, temperature or pressure.

And it is also exhilarating to imagine the thrill of those scientists as they developed new tools and turned them to the stars above us or the Earth beneath us.  Galileo’s own words tell  us how he felt when he pointed his telescope at Jupiter and discovered the satellites orbiting around it; and how those observations unlocked other insights and emboldened new hypotheses:

‘But what exceeds all wonders, I have discovered four new planets and observed their proper and particular motions, different among themselves and from the motions of all the other stars; and these new planets move about [Jupiter] like Venus and Mercury... move about the sun.’

The discoveries of the 21st century are no less exciting, if perhaps somewhat more nuanced.

******

Laura Robinson
The University of Bristol is one of the world’s leaders in the field of geochemistry. Laura Robinson co-chaired several sessions, while also presenting a new approach to estimating water discharge from rivers, based on the ratio of uranium isotopes in coral; the technique has great potential for studying flood and drought events over the past 100,000 years, helping us to better understand, for example, the behaviour of monsoon systems on which the lives of nearly one billion people depend. Heather Buss chaired a session and presented research quantifying the nature and consequences of reactions occurring at the bedrock-soil interface – and by extension, the processes by which rock becomes soil and nutrients are liberated, utilised by plants or flushed to the oceans. Kate Hendry, arriving at the University of Bristol in October, presented her latest work employing the distribution of zinc in sponges (trapped in their opal hard parts) to examine how organic matter is formed in surface oceans, then transported to the deep ocean and ultimately buried in sediments; this is a key aspect of understanding how carbon dioxide is ultimately removed from the atmosphere. The Conference is not entirely about measuring these processes – it is also about how those measurements are interpreted. This is exemplified by Andy Ridgwell, who presented two keynote lectures on his integrated physical, chemical and biological model, with which he evaluated the evidence for how and when oceans became more acidic or devoid of oxygen.

What next? Every few years, a major innovation opens up new insights. Until about 20 years ago, organic carbon isotope measurements (carbon occurs as two stable isotopes – ~99% as the isotope with 12 nuclear particles and ~1% as the isotope with 13) were conducted almost exclusively on whole rock samples. These values were useful in studying ancient life and the global carbon cycle, but somewhat limited because the organic matter in a rock derives from numerous organisms including plants, algae and bacteria. But in the late 1980s, new methods allowed us to measure carbon isotope values on individual compounds within those rocks, including compounds derived from specific biological sources. Now, John Eiler and his team at Caltech are developing methods for measuring the values in specific parts, or even at a single position, within those individual compounds. This isotope mapping of molecules could open up new avenues for determining the temperatures at which ancient animals grew, or for elucidating what microorganisms are doing deep in the Earth’s subsurface.
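Such isotope measurements are conventionally reported in delta notation: a per mil (‰) deviation of the sample's ¹³C/¹²C ratio from the VPDB reference standard. A minimal sketch of the conversion, where the sample ratio is illustrative rather than a measured value:

```python
# Delta notation sketch: carbon isotope ratios are reported as a per mil
# deviation from the VPDB reference standard. The sample ratio below is
# illustrative, not a measured value.

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB standard

def delta13C(r_sample):
    """Convert a 13C/12C ratio to a delta-13C value in per mil vs VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# A ratio slightly below the standard gives a negative delta value,
# as is typical of photosynthetic organic carbon.
print(round(delta13C(0.01094), 1))
```

Because photosynthesis preferentially takes up the lighter isotope, biological carbon carries distinctly negative delta values, which is what makes these measurements such a useful tracer of ancient life.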

Scientists are going to continue to measure the world around us.  And while that might sound cold and calculating, it is not!  We do this out of our fascination and wonder for nature and our planet.  Just like Galileo’s discovery of Jovian satellites excited our imagination of the cosmos, these new tools are helping us unravel the astonishingly beautiful interactions between our world and the life upon it.

This blog was written by Professor Rich Pancost, Cabot Institute Director, University of Bristol

Prof Rich Pancost

Jared Diamond: The World Until Yesterday – What can we learn from traditional societies?

Jared Diamond
On 27 September, Pulitzer Prize-winning polymath and author Jared Diamond gave the first talk in the Bristol Festival of Ideas series, in conjunction with the Cabot Institute, to promote the paperback release of his new book “The World Until Yesterday”. The book surveys 39 traditional societies and their attitudes to universal problems such as bringing up children, treatment of the elderly and attitudes to risk. The aim of the book is neither to idealise nor disparage these traditional societies, but to investigate what lessons can be learned from unindustrialised peoples.

Constructive paranoia

Photo © JahodaPetr.com (Papua Guide)
One such tribe was the Dani, with whom Diamond lived while studying local ornithology in Papua New Guinea. He opened his talk with a tale of the fear and trepidation that the New Guinean tribe showed when he suggested making camp under a dead tree in the jungle. Their “constructive paranoia” – while completely at odds with Diamond’s own Western attitude – was essential to their survival in an ecosystem rife with environmental dangers. Diamond mused that not only were accidents caused by the physical environment less frequent in Western society, but the consequences were less likely to be fatal or permanently disabling, thanks to our healthcare system. This alters our perception of the risks associated with hazardous behaviour.

Perception of risk

We worry too much about dangers that do not kill many people – like terrorism, nuclear accidents, plane crashes and DNA-based technologies – but are comparatively blasé about the risks of alcohol, smoking and cars. Westerners tend to overestimate the risk of things beyond our control, things that are unfamiliar, and things that kill many people at once or in a spectacular way, while we underestimate risks that we encounter every day but assume “it will never happen to me”. This can be demonstrated by comparing personal ratings of danger with the number of actual deaths, though such a comparison does not take into account changes in personal behaviour to protect against significant risks. Diamond recounted another tale of a tribe living in close proximity to a pride of lions. Though the risk of being killed by a lion was very real, few tribespeople were actually killed, thanks to the various precautions taken, such as travelling in groups and making a lot of noise so as not to startle the lions.

Conflict management

Image from Penguin Books
Jared also briefly covered further topics from the book, such as our treatment of the elderly, whose collective wisdom and knowledge has now been usurped by the rise of the world wide web. He also examines differing strategies for dealing with conflict. While in the West we concentrate on perceived wrongs and on who is “in the right”, people in more traditional societies must settle disputes with those they will continue to live and trade with for the rest of their lives. Their model for conflict management more closely resembles the idea of ‘restorative justice’, where victims and perpetrators meet to discuss the incident. The emphasis is on restoring a working relationship rather than assigning blame or exacting retribution.

“The World Until Yesterday” is published in paperback by Penguin Books on 29 October 2013.

This blog has been written by Boo Lewis, Biological Sciences, University of Bristol.
Boo Lewis, Cabot Institute blogger

Watch the Jared Diamond event again online.

Will global food security be affected by climate change?

The Intergovernmental Panel on Climate Change (IPCC) has just released an important report outlining the evidence for past and future climate change. Unfortunately, it confirms our fears: climate change is occurring at an unprecedented rate and humans have been the dominant cause since the 1950s. Atmospheric carbon dioxide (CO₂) has reached its highest level in the past 800,000 years, which has contributed to the increased temperatures and extreme weather we have already started to see.

As a plant scientist, I’m interested in the complicated effects that increased temperatures, carbon dioxide and changes in rainfall will have on global food security. Professor David Lobell and Dr Sharon Gourdji wrote about some of the possible effects of climate change on crop yield last year, summarised below alongside IPCC data.

Increased CO₂

Image credit: David Monniaux
Plants produce their food in a process called photosynthesis, which uses the energy of the sun to combine CO₂ and water into sugars (food) and oxygen (a rather useful waste product). The IPCC reports that we have already increased atmospheric CO₂ levels by 40% since pre-industrial times, which means it is at the highest concentration for almost a million years. Much of this has accumulated in the atmosphere (terrible for global warming) or been absorbed into the ocean (causing ocean acidification); however, it may be good news for plants.


Lobell and Gourdji wrote that higher rates of photosynthesis are likely to increase growth rates and yields of many crop plants. Unfortunately, rapid growth can actually reduce the yields of grain crops like wheat, rice and maize. The plants mature too quickly and do not have enough time to move the carbohydrates that we eat into their grains. 

High temperatures

The IPCC predicts that by the end of the 21st century, temperatures will be 1.5°C to 4.5°C higher than they were at the start of it. There will be longer and more frequent heat waves, and cold weather will become less common.

Extremely high temperatures can directly damage plants; however, even a small increase in temperature can impact yields. Higher temperatures mean plants can photosynthesise and grow more quickly, which can either improve or shrink yields depending on the crop species (see above). Lobell and Gourdji noted that milder spring and autumn seasons would extend the growing period into previously frosty times of year, allowing new growth periods to be exploited, although heat waves in the summer may be problematic.

Image credit: IPCC AR5 executive summary

Flooding and droughts

In the future, dry regions will become drier whilst rainy places will get wetter. The IPCC predicts that monsoon areas will expand and increase flooding, but droughts will become longer and more intense in other regions.

In flooded areas, waterlogged soils could prevent planting and damage those crops already established. Drought conditions mean that plants close the pores on their leaves (stomata) to prevent water loss; however, this also means that carbon dioxide cannot enter the leaves for photosynthesis, and growth will stop. This may be partly counteracted by the increased carbon dioxide in the air, allowing plants to take in more CO₂ without fully opening their stomata, reducing further water loss and maintaining growth.

Image credit: IPCC AR5 executive summary


These factors (temperature, CO₂ levels and water availability) interact to complicate matters further. High carbon dioxide levels may mean plants need fewer stomata, which would reduce the amount of water they lose to the air. On the other hand, higher temperatures and/or increased rainfall may mean that crop diseases spread more quickly and reduce yields.

Overall, Lobell and Gourdji state that climate change is unlikely to result in a net decline in global crop yields, although there will likely be regional losses that devastate local communities. They argue, however, that climate change may prevent the increases in crop yields required to support the growing global population.

The effect of climate change on global crop yields is extremely complex and difficult to predict; however, floods, drought and extreme temperatures mean that its impact on global food security (“when all people at all times have access to sufficient, safe, nutritious food to maintain a healthy and active life”) will almost certainly be devastating.

On the basis of the IPCC report and the predicted impact of climate change on all aspects of our planet, not just food security, it is critical that we act quickly to prevent temperature and CO₂ levels rising any further.

This blog is written by Sarah Jose, Biological Sciences, University of Bristol
You can follow Sarah on Twitter @JoseSci


Sarah Jose

Monday, 23 September 2013

Is ash dieback under control?

Image by FERA
The European ash tree is an important component of British woodlands. It has remained popular and recommended for planting due to its economic and aesthetic value, as well as its resistance to grey squirrels. In the UK, ash is estimated to make up 5.4% of the 141,000 ha of larger woodlands (>0.5 ha). However, since its first discovery in Poland in 1992, ash dieback disease, caused by the fungus Chalara fraxinea, has spread across the European continent and devastated ash populations in some areas. On 19 September, Rob Spence of the Forestry Commission came to Bristol to talk about the current state of ash dieback control in England.

Chalara fraxinea is the asexual stage of Hymenoscyphus pseudoalbidus, and also the infectious stage. Ascospores are produced from fruiting bodies on dead branches in the leaf litter and can be carried by wind for more than 10 km. Ascospores are not durable, so the infection window is limited to the summer months. The spores tend to attack young trees, which have lower resistance to the disease, causing crown necrosis and eventually death. In mature trees the effects of the disease are less severe; however, it can seriously compromise their condition and leave them susceptible to other diseases.

Source: BBC website
The current distribution of the disease in England is largely confined to tree nurseries, except in East Anglia, where a number of cases have been reported in the wild. The prevalence of the disease in nurseries across the country is thought to be because seeds are germinated outside the UK, with saplings and young trees, which may already be infected, then imported back from the continent. The large outbreak in East Anglia, however, is more likely attributable to extreme weather conditions bringing spores over from the continent.

Control efforts in the south west are focused on confining the disease; unlike in East Anglia, cases of ash dieback in the wild there are still rare. The Forestry Commission has been conducting aerial surveys to spot early infections, and two smartphone apps, Tree Alert and OPAL, can be used to photograph suspected infected trees and send the images to experts for identification. Because Forestry Commission staff numbers are very limited, it is unrealistic for them to visit the field in most cases.

It is also worth noting that around 1-2% of the natural population is resistant to the disease. Research is under way at The Sainsbury Laboratory and the John Innes Centre in Norwich, as well as at some European institutes, to identify the resistance genes and to find biological approaches that could deter the spread of the fungus. At a national level, a ban has been placed on ash imports from outside the country and on the transfer of living ash tissue within it, though timber transport is still allowed as it is regarded as low risk.

In my view, ash dieback is well controlled at this stage. Although its eventual spread is probably inevitable, selection bottlenecks of this kind have happened widely in nature throughout evolutionary history. That is no reason to reduce our efforts to protect ash trees, but as long as we maintain the genetic diversity of the susceptible populations while introducing and expanding resistant traits, the disease can be controlled at the macro scale.

This blog is written by Dan Lan, Biological Sciences, University of Bristol

Power within the rift

Lying just under the Earth’s surface, the East African Rift is a region rich in geothermal resources. Exploitation of this clean and green energy source has steadily been gaining momentum. What is the geological mix that makes the Rift Valley ripe for geothermal power, and how is it being tapped?

The East African Rift, stretching from Djibouti to Mozambique, marks the trace of a continent slowly tearing apart. At rates of about 1-2 cm per year, the African continent will one day split in two, separated by a new ocean.

When continental rifting occurs, volcanism shortly follows. As the continent steadily stretches apart, the Earth’s crust thins allowing an easier path for buoyant magma to rise up. Where the magma cracks the surface, volcanoes build up. Dotting the Rift Valley are many active, dormant and extinct volcanoes. Famously active ones include Nyiragongo in the Democratic Republic of Congo, Ol Doinyo Lengai in Tanzania and the bubbling lava lake at Erta Ale volcano in Ethiopia.

How to brew a geothermal system

The presence of volcanoes in the Rift Valley indicates one important occurrence – hot rocks under the Earth’s surface. This, combined with a crust thinned by extension, provides the first geological ingredients for a geothermal system. Active magma chambers are typically extremely warm; consequently they heat groundwater in fractures and pores in the surrounding rock to temperatures of 200-300°C.

Hence, a geothermal field can be defined as a large volume of underground hot water and steam in porous and fractured hot rock. A geothermal system refers to all parts of the hydrological system involved, including the water recharge and the outflow zone of the system. The area of the geothermal field that can be exploited is known as the geothermal reservoir and the hot water typically occupies only 2 to 5% of the rock volume. Nevertheless, if the reservoir is large and hot enough, it can be a source of plentiful energy.
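To get a feel for why a large, hot reservoir can be "a source of plentiful energy", a back-of-the-envelope "heat in place" estimate helps. Only the 2-5% water fraction below comes from the text; the reservoir size, temperatures and efficiency are illustrative assumptions:

```python
# Back-of-the-envelope "heat in place" estimate for a geothermal reservoir.
# Only the 2-5% water fraction comes from the text; everything else is an
# illustrative assumption.

rho_water = 1000.0   # kg/m^3, density of water
c_water = 4186.0     # J/(kg K), specific heat capacity of water
volume_km3 = 5.0     # assumed reservoir rock volume, km^3
porosity = 0.03      # hot water occupies ~2-5% of the rock volume
delta_T = 200.0      # K, usable temperature drop (reservoir minus reinjection)

volume_m3 = volume_km3 * 1e9
water_mass = volume_m3 * porosity * rho_water    # kg of hot water
heat_joules = water_mass * c_water * delta_T     # heat in the water alone
# (the hot rock itself stores even more heat, ignored here for simplicity)

# Convert to a continuous electrical output over a 30-year plant life,
# assuming ~10% conversion efficiency.
efficiency = 0.10
seconds_30yr = 30 * 365.25 * 24 * 3600
mw_continuous = heat_joules * efficiency / seconds_30yr / 1e6
print(round(mw_continuous, 1))  # of order ten MW from this modest reservoir
```

Even with the hot water occupying only a few per cent of the rock volume, a modest reservoir sustains tens of megawatts for decades, which is why the reservoir's size and temperature matter so much.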

To keep a geothermal system brewing you need three essential components: a subsurface heat source; fluid to transport the heat; and faults, fractures or permeability within sub-surface rocks that allow the heated fluid to flow from the heat source to the surface or near surface.

East African resources

The presence of geothermal systems in East Africa has not gone unnoticed. At present, geothermal electricity is produced in Kenya and Ethiopia, with Djibouti, Eritrea, Rwanda, Zambia, Tanzania and Uganda at the preliminary exploration and test drilling stages. Kenya is steaming ahead in terms of development with an installed capacity of 200 MW, but even so progress has been slow over the last few decades. In comparison, Ethiopia currently has a 7.3 MW installed capacity with a proposed expansion of 70 MW.

In Hells Gate National Park, just south of Lake Naivasha, Kenya’s geothermal energy is generated from Olkaria power station. Exploration at Olkaria started in 1955 but it wasn’t until the 1960s when 27 test wells were drilled that extensive exploration kicked off. At present, Olkaria I power station generates 45 MW, Olkaria II produces 65 MW and Olkaria III is a private plant generating 48 MW.  Olkaria IV power plant is under construction, due to be completed in 2014 and has an estimated potential of between 280 and 350 MW. By 2030, Kenya hopes to produce at least 5,000 MW of geothermal power.

Geological and financial risk

Whilst the East African Rift naturally provides the perfect geological conditions to meet future energy demands, the risks involved have so far prevented significant development. Geothermal exploration and development is a high-risk investment. Financially, investing in geothermal has high up-front costs followed by relatively low running costs. If drilling encounters a dry well during exploration, the financial losses can be substantial: at roughly $3 million of investment for each MW produced, dry wells can cause significant financial setbacks, deterring investors.
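The sums involved can be sketched with some illustrative arithmetic. Only the ~$3 million per installed MW figure comes from the text above; the well cost, success rate and well count are assumptions for the sake of example:

```python
# Illustrative project arithmetic. Only the ~$3 million per installed MW
# figure comes from the text; the well cost, success rate and well count
# are assumptions for the sake of example.

cost_per_mw = 3e6        # $ per installed MW (from the text)
plant_mw = 70            # e.g. the scale of Ethiopia's proposed expansion
base_cost = plant_mw * cost_per_mw

well_cost = 5e6          # assumed cost to drill one exploration well, $
success_rate = 0.7       # assumed fraction of wells that are productive
wells_drilled = 20
dry_well_loss = wells_drilled * (1 - success_rate) * well_cost

total = base_cost + dry_well_loss
print(int(total / 1e6))  # total cost in $ millions
```

The key feature is that the dry-well losses are incurred up front, before any electricity (or revenue) is generated, which is precisely what makes the investment profile so unattractive without external support.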

It’s not just financial risks, there’s geological risk too - they are volcanoes after all. In Kenya, geothermal fields comfortably sit on top of the volcanoes Olkaria, Longonot, Eburru, Paka and Menengai. The picture is similar in Ethiopia where the Alutu Langano power plant is situated within Alutu volcano. In fact, nearly every geothermal prospect site throughout East Africa is located near, or on a volcano.

Whilst many of the volcanoes have not erupted in historical times, recent satellite observations using a technique called InSAR have revealed that these volcanoes may not be as quiescent as previously thought. Menengai, Alutu, Corbetti and Longonot have all shown periods of ground deformation, both uplift and subsidence. The precise cause of these ground movements is subject to further research, with possibilities including the rise or withdrawal of magma within the crust, or perturbations to the geothermal system. What these observations do mean, however, is that geological risk should perhaps be accounted for in future geothermal development.

Overall, the outlook is bright for East African geothermal resources. The World Bank has a history of supporting and cultivating geothermal energy in East Africa; for example, since 1978 Kenya has built up its geothermal generation with $300 million in support from the World Bank. The World Bank recently announced its Global Geothermal Development Plan (GGDP), which will “scale up geothermal energy in developing countries”, bringing geothermal energy “into the mainstream, and deliver power to millions” – an initiative that will greatly benefit East Africa.

This blog has been written by Elspeth Robertson, Earth Sciences, University of Bristol

Read Elspeth's other blog post 'Geothermal workshop: Accelerating the impact of research and development in East Africa'.

Elspeth Robertson



Friday, 20 September 2013

Welcome from the new Director

Left to right: Rich Pancost, Sir John Beddington, Paul Bates
I became the second Director of the Cabot Institute on the 28th of July, taking over from Paul Bates and planning to continue making Cabot one of the world’s premier environmental institutes. The past month has been rather exhilarating in terms of the breadth and quality of my interactions. My experiences have cemented my reasons for assuming this role - the Cabot Institute represents hundreds of brilliant people, working together and working with equally brilliant government, NGO and industry partners to better understand our environment, our relationship to it and the challenges of our co-dependent future. The central aspect of my job as Director is to continue to support those individuals and especially those collaborations.

My first month also confirmed that we have vital, illuminating and challenging ideas to share and we will all benefit from improved communications. Hence, this blog post and the many to follow it.  There are many buried treasures, both clever insights and mature wisdom, on the Cabot Blog, and I encourage new visitors to explore those past posts.  For example, see recent posts on Food Security by Boo Lewis and Energy Markets by Neeraj Oak.  As for me, I’ll be bringing in a combination of personal observations and insights arising from discussions with Cabot partners, as well as ideas emerging in my own discipline.

Penn State University
As a bit of an introduction, I grew up on a dairy farm in Ohio and attended Case Western Reserve University, where I dithered back and forth between majors in political science and astrophysics before realising my heart was in Geology.... life decisions are complicated for all of us. I obtained my PhD from Penn State University, using geochemical tools to study past climates, and then continued that work as a post-doctoral researcher at the Royal Netherlands Institute for Sea Research. In 2000 I joined the Organic Geochemistry Unit in the School of Chemistry here at Bristol. Along the way, I played a fair bit of Ultimate (Frisbee).
Ultimate frisbee GB masters beach team (2007)

I examine organic compounds in a wide range of materials, from soils and plants to microbial mats to ancient rocks. Those organic compounds can be exceptionally well preserved for long periods of time, allowing us to investigate aspects of how the Earth’s biological and chemical systems interact on time scales from tens to millions of years. The topics of my research range from understanding the formation and fate of methane to reconstructing the climate history of the planet (especially during times when carbon dioxide levels and temperatures were higher than those of today). It requires working with a diverse group of people, including climate modellers, mathematicians, social scientists and petroleum geologists.  Those themes will become more prominent in this blog over the coming months, especially as I report back from a few conferences and around the release in late September of the Fifth Report from IPCC Working Group 1: The Physical Basis of Climate Change. But I will also be discussing Environmental Uncertainty and Decision Making: what it means, my personal perspectives on it, and why it is at the heart of the Cabot Institute’s mission.

Finally, this is meant to be an interactive forum.  Do use the comments section and do suggest future topics.  We especially welcome suggestions from our fellow Bristolians for potential visitors and events we could organise in our home town.

Cheers,
Rich

This blog was written by Professor Rich Pancost, Cabot Institute Director, University of Bristol
Rich Pancost

Monday, 16 September 2013

Warming up the poles: how past climates assist our understanding of future climate

Eocene, by Natural History Museum London
The early Eocene epoch (56 to 48 million years ago) is thought to be the warmest period on Earth in the past 65 million years. Geological evidence from this epoch indicates that the polar regions were very warm, with mean annual sea surface temperatures of >25°C measured from geological proxies, and evidence on land of a wide variety of vegetation, including palm trees and insect-pollinated plants. Unfortunately, geological data from the tropics is limited for the early Eocene, although the data that does exist indicates temperatures only slightly warmer than the modern tropics, which are ~28°C. The reduced temperature difference between the tropics and the poles in the early Eocene, and the implied global warmth, has resulted in the label of an ‘equable’ climate.

Simulating the early Eocene equable climate with climate models, however, has not been straightforward. There have been remarkable model-data differences, with simulated polar temperatures that are too cool and/or tropical temperatures that are too hot, or with the CO2 concentrations required for a reasonable model-data match lying outside the range measured for the early Eocene.

There are uncertainties in both geological evidence and climate models, and whilst trying to resolve the early Eocene equable climate problem has resulted in an improved understanding of geological data, there are uncertain aspects of climate models that still need to be examined. Climate processes for which knowledge is limited or measurements are difficult, such as clouds, or which have a small spatial and temporal range, are often simplified in climate models or parameterised. These uncertain model parameters are then tuned to best match the modern observational climate record. This approach is not ideal, but it is sometimes necessary, and it has been shown that the modern values of some parameters, such as atmospheric aerosols, may not be representative of past climates such as the early Eocene, with their removal improving the model-data match.

However, a climate model that can simulate both the present-day climate and past, more extreme climates without significant modification potentially offers a more robust method of understanding modern and future climate processes in a warming world. We have conducted research in which uncertain climate parameters are varied within their modern upper and lower boundaries in order to examine whether any combination is capable of the above. We have found one simulation, E17, out of a total of 115, which simulates the early Eocene equable climate and improves the model-data match whilst also reasonably simulating the modern climate and a past cold climate, the Last Glacial Maximum.

This work hopefully highlights how paleoclimate modelling is a valuable tool in understanding natural climate variability and how paleoclimates can provide a test bed for climate models, which are used to predict future climate change.

This blog has been written by Nav Sagoo, Geographical Sciences, University of Bristol.
Nav Sagoo, University of Bristol

Why the Pliocene period is important in the upcoming IPCC report

Critical to our understanding of the Earth system, especially in order to predict future anthropogenic climate change, is a full comprehension of how the Earth reacts to higher atmospheric CO2 conditions. One of the best ways to find out what the Earth was like under higher CO2 is to look at times in Earth history when atmospheric CO2 was naturally higher than it is today. The perfect period of geological history is the Pliocene, which spans 5.3 – 2.6 million years ago. During this time we have good evidence that the Earth was 2-3 degrees warmer than today, but other things, such as the position of the continents and the distribution of plants over the surface, were very similar to today.

There is therefore a significant community of oceanographers and climate modellers studying the Pliocene, many of whom were in Bristol last week for the 2nd Workshop on Pliocene climate, and one of the main points of discussion was the exact value of CO2 for the Pliocene.

80 top scientists from 12 countries gathered for the 2nd Workshop on Pliocene climate on 9-10 September 2013 at the University of Bristol
The first volume of the IPCC's Fifth Assessment Report, due for imminent release, is also expected to include sections on Pliocene climate.

Today we published a paper in Philosophical Transactions of the Royal Society A which therefore represents an important contribution to the debate. Several records of Pliocene CO2 do exist, but their low temporal resolution makes interpretation difficult. There has also been some controversy about what these records mean, as some show surprisingly high variability, given what we understand about Pliocene climate.

We sampled a deep ocean core taken by the Ocean Drilling Program in the Caribbean Sea. Cores such as this record the ancient environment as sediment collects over time, like the progressive pages in a book, and by analysing the chemical composition of the layers a history of the Earth system can be recovered. The approach we take (Badger et al.) is to use the carbon isotopic fractionation of photosynthetic algae, which has been shown to vary with atmospheric CO2.
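The core idea behind this proxy can be sketched numerically. A common simplified form models the fractionation ep as ep = ef - b/[CO2(aq)], so a measured ep can be inverted to give dissolved CO2 (converting that to an atmospheric mixing ratio then requires Henry's-law solubility). The constants below are illustrative only, not the values used in the paper:

```python
# Highly simplified sketch of the algal carbon-isotope CO2 proxy:
# the fractionation ep (per mil) recorded by photosynthetic algae
# increases with the CO2 available to the cell, often modelled as
#   ep = ef - b / [CO2(aq)]
# Inverting gives dissolved CO2. Constants are illustrative only.

EF = 25.0   # maximum photosynthetic fractionation (per mil), illustrative
B = 120.0   # term reflecting growth rate / cell geometry, illustrative

def co2_aq_from_ep(ep):
    """Dissolved CO2 (umol/kg) implied by a measured fractionation ep."""
    if ep >= EF:
        raise ValueError("ep must be below the maximum fractionation ef")
    return B / (EF - ep)

# Larger measured fractionation implies more dissolved CO2
for ep in (10.0, 12.0, 14.0):
    print(f"ep = {ep:4.1f} per mil -> [CO2(aq)] ~ {co2_aq_from_ep(ep):5.1f} umol/kg")
```

This is why the isotopic composition of algal organic matter preserved in the sediment layers can serve as a record of past atmospheric CO2.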

What this study revealed is that atmospheric CO2 was actually quite low, at around 300 ppm, for much of this warm period. It also revealed that CO2 was relatively stable, in contrast to previous work. This implies that during the Pliocene the Earth must have been quite sensitive to CO2, as small changes in atmospheric CO2 drove changes in climate. The study doesn't explicitly reconstruct climate sensitivity, but it does have important implications for future change.
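A back-of-envelope calculation (my own illustration, not from the paper) shows why low, stable CO2 plus 2-3 degrees of warmth points to high sensitivity. Radiative forcing from CO2 is commonly approximated as F = 5.35 ln(C/C0) W/m2; at 300 ppm the forcing above a pre-industrial 280 ppm is tiny:

```python
import math

# Back-of-envelope illustration of why modest CO2 plus 2-3 C of warmth
# implies a climate quite sensitive to CO2. Standard simplified fit:
#   F = 5.35 * ln(C / C0)   [W/m^2]

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) of CO2 concentration c relative to c0."""
    return 5.35 * math.log(c / c0)

f_pliocene = co2_forcing(300.0)   # ~0.37 W/m^2 above pre-industrial
f_doubling = co2_forcing(560.0)   # ~3.7 W/m^2 for a doubling of CO2

# If ~3 C of Pliocene warmth were attributed to the CO2 forcing alone,
# the implied warming per doubling would be very large:
implied_per_doubling = 3.0 / f_pliocene * f_doubling
print(f"forcing at 300 ppm:          {f_pliocene:.2f} W/m^2")
print(f"implied warming per doubling: {implied_per_doubling:.0f} C")
```

In reality much of the Pliocene warmth reflects slow Earth-system feedbacks (ice sheets, vegetation) rather than fast-feedback sensitivity alone, so this naive number is an overestimate; but it illustrates the direction of the argument: a small forcing accompanying a substantial climate change means the system responded strongly.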

The paper is published in a special volume of Philosophical Transactions of the Royal Society A, edited by Bristol scientists Dan Lunt, Rich Pancost and Andy Ridgwell, and Harry Elderfield of Cambridge University. The volume is the result of the Warm Climates of the Past – A Lesson for the Future? meeting which took place at the Royal Society in October 2011. The volume can be accessed here: http://bit.ly/PTA2001

Marcus Badger

Wednesday, 11 September 2013

Neonicotinoids: Are they killing our bees?


In April, the EU banned the use of neonicotinoid pesticides for two years, starting in December, because of concerns over their effect on bees.  These pesticides will not be allowed on flowering crops that attract bees, or for use by the general public; however, winter crops may still be treated. Fifteen countries voted for this ban, with eight voting against it (including the UK and Germany) and four countries abstaining.

Neonicotinoids were originally thought to have less of an impact on the environment and human health than other leading pesticides. They are systemic insecticides, which means they are transported throughout the plant in the vascular system making all tissues toxic to herbivorous insects looking for an easy meal. The most common application in the UK is to treat seeds before they are sown to ensure that even tiny seedlings are protected against pests.

Image by Kath Baldock
The major concern over neonicotinoids is whether nectar and pollen contain levels of pesticide high enough to cause problems for bees. It has already been shown that they do not contain a lethal dose; however, this is not the full story. Bees live in complex social colonies and work together to ensure that there is enough food for developing larvae and the queen. Since neonicotinoids were introduced in the early 1990s, bee populations have been in decline, and there is a growing feeling of unease that the two may be connected. Scientific research has provided evidence both for and against a possible link, leaving governments, farmers, chemical companies, environmentalists and beekeepers in an endless debate about whether or not a ban would save our bees.

Several studies on bees have shown that sublethal levels of neonicotinoids disrupt bee behaviour and memory. These chemicals target nicotinic acetylcholine receptors, one of the major ways that signals are sent through the insect central nervous system. Scientists at Newcastle University recently showed that bees exposed to neonicotinoids were less able to form long-term memories associating a smell with a reward, an important behaviour when foraging for pollen and nectar in the wild. 

Researchers at the University of Stirling fed bumble bee colonies on pollen and sugar water laced with neonicotinoids for two weeks to simulate field-like exposure to flowering oil seed rape. When the colonies were placed into the field, those that had been fed the pesticides grew more slowly and produced 85% fewer queens than those fed on untreated pollen and nectar. The production of new queens is vital for bee survival because they start new colonies the next year. Studies in other bee species have found that only the largest colonies produce queens, so even a small effect of neonicotinoids on colony size may have a devastating effect on queen production.

So why does the government argue that there is not enough scientific evidence to support a ban on neonicotinoids?

Image by Kath Baldock
In 2012, the Food and Environment Research Agency set up a field trial using bumble bee colonies placed on sites growing either neonicotinoid-treated or untreated oil seed rape. They found no significant difference in the number of queens produced at each site, although the colonies near neonicotinoid-treated crops grew more slowly. The study also found that the levels of pesticide present in the crops were much lower than previously reported.

I personally think that both laboratory and field studies bring important information to the debate; however, neither has the full answer. Whilst more realistic, the government’s field trial suffered from a lack of replication, variation in flowering times and the various alternative food sources available to bees. Only 35% of the pollen collected by the bees was from the oil seed rape plants, so where oil seed rape comprises the majority of flowering plants available to bees the effect of neonicotinoids may be more pronounced. The laboratory research can control more variables to establish a clearer picture; however, the bees in these studies were often given only neonicotinoid-treated pollen and nectar to eat, which clearly is not the case in a rural landscape. Flies and beetles have been shown to avoid neonicotinoids, which could mean that bees would find alternative food sources where possible. This would have a major impact on crop pollination.

We desperately need well-designed field studies looking at the effect of neonicotinoids on bees and the environment in general. Despite the EU moratorium on growing neonicotinoid-treated crops, an allowance should be made for scientists to set up controlled field trials to study the effect of these pesticides on bees during the two-year ban. It could be our only chance to determine the danger these chemicals pose to vital pollinators and the wider environment.



This blog is written by 
Sarah Jose, Biological Sciences, University of Bristol

Sarah Jose

Don’t change horses midstream: the impact of EMR on low-carbon electricity producers

The way low-carbon electricity is supported by the government is changing drastically. For the last decade, electricity generators have been supported through a certificate-based scheme known as the Renewables Obligation (RO). Current plans are to phase this scheme out completely by 2017, replacing it with a form of feed-in tariff. These changes present the industry with an entirely new range of challenges and uncertainties.

On September 10th, Alon Carmel of the Department of Energy and Climate Change (DECC) presented their case for the new policies to the major players in the South West’s low carbon energy industry. The event, hosted by RegenSW and Osborne Clarke in Bristol, set out to allay the concerns of the industry, and to give electricity generators a chance to voice their opinions ahead of the final implementation of the new scheme in 12 months’ time.

Between 2014 and 2017 generators have a choice, as both the RO scheme and the new Contract-for-Difference (CfD) feed-in tariff scheme will be in operation. Choosing between the two is potentially the most consequential decision generators will make in the next three years; the subsidy provided by both schemes is vital to the existence of many generators, and to the profit margins of all of them.

Electricity bills are a huge issue for the next election, and the degree of subsidy offered under the new CfD scheme will play a big part in keeping the cost to consumers low. That said, novel renewable energy technologies such as wave and tidal power rely on a high level of subsidy to develop their technology and lower production costs. This event was the first chance for many to see the degree of subsidy their technologies would receive under the new scheme.
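The basic settlement logic of a Contract-for-Difference can be sketched as follows: the generator sells power at the market (reference) price and receives a top-up to an agreed strike price, and pays the difference back when the market price exceeds the strike. All prices in this sketch are invented for illustration:

```python
# Sketch of Contract-for-Difference settlement: the generator is topped up
# to the strike price when the market is low, and pays back the surplus
# when the market is high. All prices here are invented.

def cfd_payment(strike, reference, output_mwh):
    """Net CfD payment to the generator (GBP); negative means payback."""
    return (strike - reference) * output_mwh

# Low market price: generator is topped up
print(cfd_payment(strike=100.0, reference=55.0, output_mwh=10.0))   # 450.0
# High market price: generator pays back the surplus
print(cfd_payment(strike=100.0, reference=120.0, output_mwh=10.0))  # -200.0
```

This two-way structure is what distinguishes a CfD from a simple fixed feed-in tariff: it stabilises the generator's revenue at the strike price, which is why the strike levels announced for each technology matter so much to the industry.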

The government is braced for a great deal of criticism over the level of support offered in the new scheme, and is prepared to consider changes where appropriate. However, a panel of academics and consultants hired by DECC has concluded that too much attention is being paid to the industry in making energy policy decisions. This suggests that electricity producers should expect a cooler reception from the government when recommending changes to the new policy.


For now, the advice to electricity producers from Osborne Clarke is not to change to using the new scheme; there is a degree of mistrust in the scheme from banks and financial institutions, who don’t wish to risk using an untested policy. This makes borrowing more expensive, and impacts both the generator’s profits as well as the cost to consumers. There is a fear that most generators will follow this advice, creating a looming crisis in 2017 when the RO scheme ends.

This blog is written by Neeraj Oak, from the Department of Complexity Sciences at the University of Bristol.

Neeraj Oak

Tuesday, 10 September 2013

Sharing the world’s natural resources

In discussions about climate justice, one particular question that receives a lot of attention is that of how to share the global emissions budget (that is, the limited amount of greenhouse gases (GHGs) that can be released into the atmosphere if we are to avoid ‘dangerous’ climate change). A popular proposal here is ‘equal shares’. As suggested by the name, if this solution were adopted the emissions budget would be shared between countries on the basis of population size – resulting in a distribution of emission quotas that is equal per capita. Equal shares is favoured not only by many philosophers, but also a wide range of international organisations (it is the second essential component, for example, of the prominent Contraction and Convergence approach).  
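The arithmetic of the equal shares rule is straightforward: the global budget is divided in proportion to population, so that quotas come out equal per capita. A toy sketch (country names and populations invented):

```python
# Toy illustration of the 'equal shares' rule: the global emissions budget
# is divided in proportion to population, so quotas are equal per capita.
# Countries and populations are invented.

def equal_shares(budget, populations):
    """Allocate the budget in proportion to each country's population."""
    total = sum(populations.values())
    return {c: budget * p / total for c, p in populations.items()}

populations = {"A": 50_000_000, "B": 200_000_000, "C": 750_000_000}
quotas = equal_shares(1000.0, populations)   # e.g. a 1000 GtCO2 budget
per_capita = {c: quotas[c] / populations[c] for c in populations}
print(quotas)        # national quotas differ...
print(per_capita)    # ...but the per-person allowance is identical
```

The simplicity of this rule is part of its appeal, and, as the rest of this post argues, also part of the problem: it leaves no room for differences in needs, wealth, historical contribution, or possession of terrestrial carbon sinks.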

I became interested in the equal shares view because it is often put forward with very little argument. Some people seem to think it obvious that this is the best way for parties to the UNFCCC to honour their commitment to deal with climate change on the basis of ‘equity’. But can things really be so simple when this budget must be shared across countries and individuals that differ greatly in terms of their needs, wealth and – arguably – contribution to climate change?

When you look more closely at the arguments that are actually given for equal shares, it turns out that many people claim this to be a fair solution to a global commons problem (for a quick introduction to commons problems, listen here). They argue that distributing emission quotas equates to distributing rights to the atmosphere. The atmosphere, however, is alleged to be a global commons – or shared resource – that no individual has a better claim to than any other. Therefore, rights to this resource – in the form of emission quotas – should be distributed to all human beings globally on an equal per capita basis.

The major problem underlying this argument is its restricted focus on fairly sharing the atmosphere. What many defenders of equal shares neglect to realise is that the atmosphere does not actually act as a sink for carbon dioxide (CO2) – thought to be the most important anthropogenic GHG – which is instead assimilated by the ocean, soils and vegetation. Whilst the argument for equal shares might seem plausible in the case of the ocean – a resource that is also often described as a global commons – it is much harder to carry it over to terrestrial sinks (soils and vegetation), which lie for the most part within state borders. This leaves the equal shares view open to objections from countries like Brazil, which could argue that they should be allocated a higher per capita CO2 allowance on the basis of their possession of a large terrestrial sink (in the form of the Amazon rainforest). Furthermore it seems that Brazil would have backing in international law for such a claim.

The question of whether countries with large terrestrial sinks should have full use rights in these resources – and should therefore be allocated a greater share of the emissions budget – leads us into an enquiry about rights to natural resources that has occupied philosophers for centuries. Roughly speaking, the main parties to this debate are statists – who often deny the existence of significant duties of international justice and attempt to defend full national ownership of natural resources – and cosmopolitans – who hold that justice requires us to treat all human beings equally regardless of their country of birth.

Cosmopolitans often argue that one’s country of birth is a ‘morally arbitrary’ characteristic: a feature like gender or race that shouldn’t be allowed to have a significant influence on your life prospects. They believe it is clearly unfair that a baby lucky enough to be born in Norway will on average have far better life prospects than a baby that happens to be born in Bangladesh. Because of this, cosmopolitans are often opposed to national ownership of natural resources, which they take to be a form of undeserved advantage. 

Cosmopolitanism can be used to defend equal per capita emission shares because according to this view, rights to natural resources – whether gold, oil, or carbon sinks – shouldn’t be allocated to whichever state they just happen to be found in.

This cosmopolitan interpretation of fairness has a certain intuitive plausibility – why should claims to valuable natural resources be based on accidents of geography? On the other hand, there are a number of arguments that can be given for taking some people to have a better claim than others to certain carbon sinks. It seems particularly important, for a start, to consider whether indigenous inhabitants of the rainforest should be taken to have a privileged position in decision-making about how this resource is used. We also need to acknowledge that preserving forests will often be costly in terms of missed development opportunities. Why should everybody be able to share equally in the benefits of forest sinks, if it is countries like Brazil or India that must forgo alternative ways of using that land when they choose to conserve? This question is particularly pertinent given that land devoted to rainforest protection cannot be used for alternative – renewable – forms of energy production such as solar or biofuel.

In addition, if cosmopolitans are correct that use rights to terrestrial sinks should not be allocated on the basis of their location, then we need to question national ownership of other natural resources as well. If countries are not entitled to full use of ‘their’ forest sinks, then can we consistently allow rights to fossil fuels – e.g. the shale gas below the British Isles – to be allocated on a territorial basis? This is how rights over natural resources – forests and fossil fuels included – have generally been allocated in the past, with many resource-rich countries reaping huge benefits as a result. If national ownership is a rotten principle, is rectification in order for its past application? And what should we then say about the natural resources that can be used in renewable energy production? Why should the UK alone be allowed to exploit its territorial seas for the production of tidal energy? Or Iceland claim rights over all of its easily accessible geothermal sources?

Commons arguments for equal per capita emission quotas go astray when they claim that the atmosphere is the sole natural resource that assimilates our GHG emissions. Once we recognise this, we should appreciate that the problem of the fair allocation of the global emissions budget cannot be dealt with in isolation, but is instead tied up with broader, difficult questions regarding how the natural world should be used and shared. The answers we give regarding each individual’s claim to use GHG sinks need to be rendered consistent with our judgments regarding the justice of the past and current allocation of rights over other natural resources – resources including fossil fuels and renewables – if we are really to deal with climate change on the basis of equity.

This post was written by Megan Blomfield, a PhD student in the University of Bristol philosophy department. It is based on her paper, published in the latest edition of The Journal of Political Philosophy, titled ‘Global Common Resources and the Just Distribution of Emissions Shares’.

Megan Blomfield, University of Bristol