
Dot Earth Blog: Accounting for the Expanding Carbon Shadow from Coal-Burning Plants

Written By Unknown on Friday, 29 August 2014 | 15.50

Steven Davis of the University of California, Irvine, and Robert Socolow of Princeton (best known for his work dividing the climate challenge into carbon "wedges") have written "Commitment accounting of CO2 emissions," a valuable new paper in Environmental Research Letters that makes the case for shifting from tracking annual emissions of carbon dioxide from power plants to weighing the full amount of carbon dioxide that such plants, burning coal or gas, could emit over their time in service.

This makes sense because of the long lifetime of these plants once built — typically 40 years or so — and the long lifetime of carbon dioxide once released. (I'd love to see some data visualization experiments on this idea from Adam Nieman, building on his work showing the volume of daily CO2 emissions from cities and the like.)

Here's Davis's "video abstract" (the transcript is appended at the end of this post, along with a rich discussion of the paper's findings and implications):

The opening section of the paper is remarkably clear and is worth posting here (minus footnotes and the like):

Each year, governments and firms estimate and report CO2 emissions from the burning of fossil fuels, and their efforts to slow climate change are measured against these annual emissions. Ultimately, though, the magnitude of warming we experience will not be determined by emissions in any one year, but by cumulative CO2 emissions. Thus, climate scientists and energy-economic modelers have developed hundreds of plausible scenarios of future emissions and used them to identify emissions pathways that might achieve climate policy goals. Such scenarios are powerful tools for connecting emissions and warming to trajectories of population growth, economic development, and energy use. However, these trajectories are constrained by tremendous socio-economic inertia (e.g., existing energy, transport and built infrastructures) that limits the rate at which CO2 emissions can be reduced and climate change avoided.

In 2010, Davis et al [background] quantified an important component of socio-economic inertia by estimating the future emissions expected from all existing fossil fuel-burning infrastructure worldwide, naming these 'committed' emissions. That paper provided a single data point (commitments as of 2009), but lacked the context perhaps most important to policymakers: how these committed emissions have changed over time. Here we provide that context. We show that, despite international efforts to reduce CO2 emissions, total remaining commitments in the global power sector have not declined in a single year since 1950 and are in fact growing rapidly (by an average of 4% per year 2000–2012).

The annual 'commitment accounting' that we demonstrate here offers policymakers an opportunity to evaluate historical trends and to quantify the long-term consequences of current actions in a new way.

Environmental Research Web has posted "Why our carbon-dioxide emissions are like credit-card debt," which explains how it grew out of earlier work by Davis and others (Davis, Caldeira and Matthews, Science, 2010). This quote from Davis is particularly helpful:

"One of the things that makes climate change such a difficult problem is that it lacks immediacy," Steven Davis of the University of California, Irvine, told environmentalresearchweb. "It's going to have huge impacts in the long run, but its effects on our day-to-day lives seem small. The way we've been tracking carbon-dioxide emissions reinforces this remoteness: the annual emissions we monitor are small relative to the cumulative emissions that will cause large temperature increases. The alternative we present, what we call commitment accounting, helps by quantifying the long-run emissions related to investment decisions made today."

I sought reactions from a host of energy and climate analysts, kicking things off with this query and thought (some email shorthand is cleaned up):

You are probably aware of "Commitment Accounting of CO2 Emissions," a valuable new Socolow/Davis paper (building on Davis, Caldeira, Matthews, 2010).

Given the near-term and enduring benefits of electric power expansion in developing countries, the other long-term effect of expanded coal-powered generation is accrued wealth and economic growth (along with health costs if they are dirty plants, of course). You could call it a gigawatt-hours commitment.

It'd be interesting to visualize all of this side by side with the emissions commitment in some way.

I think this emerging form of emissions accounting provides a valuable way to show how the growing coal (and natural gas) greenhouse-gas emissions commitment will play out, but — because of the competing social and economic values embedded in that extracted energy, along with the equity argument poor countries use against established fossil-powered industrial giants — I'm not sure it leads to a more effective strategy for cutting those emissions.

I'd love to include your thoughts on this, including links to relevant background that bolsters your points…. On complex issues the best way to pinpoint "reality" is through discourse. As in celestial navigation, the more "lines of position," the tighter the resulting area on the chart.

(Please excuse the acronyms below. For now, please use Google to find definitions. I don't have time at the moment to include explanatory links.)

Christopher Green, an energy-focused economist at McGill, was first to reply:

Without taking anything away from the importance of the Davis-Socolow contribution, your suggestion that we should also look at the benefits of the electricity generated is, in my opinion, dead on. The world as a whole (especially the emerging/developing country component) is going to require increased energy consumption for the foreseeable (and likely more distant) future. That that energy, however produced, provides enormous benefits cannot be denied.

The important question is how the world's huge and growing energy requirements are going to be met. Hoffert et al (1998) [the paper is here] provided what is still the clearest framework for establishing the huge magnitude of the energy technology challenge of meeting a growing energy commitment—a challenge measured in terawatts, not gigawatts. That challenge has been largely ignored, with the policy focus placed on emissions, emissions reduction, and the political will to reduce them, without due regard to the current limits of alternative low-carbon energy technologies. The failure to address the Hoffert et al energy technology question is reflected in the fact that since 2000 (and including 2013) the share of global energy consumption accounted for by fossil fuels has remained essentially constant at 86.5-87.0%. (Here I use BP Energy Statistics.) Globally, the "progress" made with non-hydro renewables is offset by a decline in energy from nuclear power plants. Not surprisingly, the carbon content of energy is the same in 2013 as it was in the early 1990s, and it has actually risen a little since 2000.

While Davis and Socolow are clearly right that the climate has not benefited from the lack of progress on the energy technology front, billions of people, particularly in the emerging country/developing world, have benefited substantially from the energy generated. As far as I can see the standoff will continue until there is recognition that climate change is first and foremost an energy technology problem, one that cannot be solved/resolved without a great deal more than emission reduction pledges.

David Hawkins of the Natural Resources Defense Council was concerned that I was using too wide a brush:

You say— "Given the near-term and enduring benefits of electric power expansion in developing countries, the other long-term effect of expanded coal-powered generation is accrued wealth and economic growth…"

This statement conflates three things: electrification services, electric generating capacity, and coal-fired generating capacity.  The implication is that there is a serious tradeoff between constraining cumulative global CO2 emissions and meeting the needs of developing countries for increased electrification services.  Truth is, the CO2 commitment from new generating capacity in the poorest countries will be small, even if they build mostly coal.  But building mostly coal is not the only path they or other countries need to follow to expand electrification services.  When the world's wealthiest countries begin to take climate protection seriously, there are ways in which any incremental costs of pursuing a low-carbon electrification path in poorer countries can be shared based on all countries' strategic interests in avoiding a disrupted climate.

Steve Davis, one of the paper authors, added this:

Dave is right to point out that the largest commitments we've quantified are only distantly related to meeting the energy needs of the underserved. The big commitments reflect coal-based industrialization and energy-intensive development whose value is more questionable given the climate and health impacts entailed and availability of cleaner and more energy-efficient options.

I responded with this question:

But given urbanization trends, particularly, centralized power production and the jobs and output that come with industrialization are certainly a big factor for many governments, right?

Davis replied:

Perhaps, but that's the distant relation. We point out that the world built 89 GW [gigawatts, or billion watts] of coal-fired capacity per year 2010-2012. That's roughly equal to the entire generating capacity of sub-Saharan Africa being added every year. These plants are supporting large-scale industrialization that is a world apart from providing basic energy services, even if one dreams of the other. They're related just as the merits of different graduate schools are in the minds of parents teaching their toddlers to read.

Chris Green responded to Davis and Hawkins:

Yes, Dave is talking about the underserved in the sense of the poorest, many of whom are not even hooked up to the grid. But my comment addresses the huge energy demands as people move beyond the very poorest. Think here of the many countries, especially the populous ones, that are industrializing and urbanizing. They inevitably will require a lot of energy, if for no other reason than that materials for construction (steel, cement, flat glass, aluminum, copper) are hugely energy intensive — with energy intensities an order of magnitude higher than those of most other manufactured goods. Think too about the demand for appliances, including air conditioning, as people move into the middle class. These are key factors in the rise in energy demand and the observed increase in fossil fuel generating capacity.

I also take issue with Dave's suggestion that there are good alternatives to fossil-fuel-generated energy. With the possible exception of nuclear energy, which to date has faced numerous hurdles, there certainly aren't any on anything like the scale required. And little has changed in this respect since Hoffert et al, 1998.

Alex Trembath at the Breakthrough Institute joined the general discussion:

Thanks for the note. As you know, Breakthrough and our colleagues at the Consortium for Science, Policy & Outcomes (and development/energy experts from across the world) addressed this issue in our April report "Our High-Energy Planet." As Professor Green mentioned, the singular focus on emissions is pervasive, and in some cases understandable (for instance, comparing national emissions accounts — measured in MTCO2 — is often much easier than harmonizing and comparing trade-adjusted energy consumption — measured variously in Mtoe, GWhs, bbls, EJs, etc etc). But in the case of energy in emerging economies, the laser focus on emissions is inappropriate.

There has been some movement towards a "gigawatt-hours commitment," as you write. This is the Decade of Sustainable Energy for All, after all. Unfortunately, the dominant frameworks used for understanding the energy access challenge — including those of the UN and IEA — are pretty clearly unacceptable. There are two main problems. The first is an almost exclusive focus on basic household electricity access, often in remote areas, without attendant attention to urbanizing populations, industrializing economies, etc. The second is the scale of ambition. As most on this thread will know, energy access is defined at 250-500kWh/year. This compares to the average German consumption of over 7000kWh/yr. With German levels of consumption, a planet of 9-10 billion in midcentury would require three times as much energy as the world does today.
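The arithmetic behind that last claim is easy to check. A rough sketch, using round illustrative figures (the ~7,000 kWh/yr German per-capita consumption and population numbers are from the text; the ~22,000 TWh figure for current world electricity consumption is my assumption for illustration):

```python
# Rough check of the "three times as much energy" claim.
per_capita_kwh = 7000        # approx. German per-capita electricity use
population = 9.5e9           # mid-century population, mid-range
world_twh_today = 22_000     # assumed current world electricity use, TWh/yr

future_twh = per_capita_kwh * population / 1e9  # kWh -> TWh
ratio = future_twh / world_twh_today

print(f"Implied demand: {future_twh:,.0f} TWh/yr, ~{ratio:.1f}x today")
# ~66,500 TWh/yr, about 3x today's consumption
```

Under these assumptions the multiple comes out almost exactly threefold, consistent with Trembath's figure.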

If we don't start with this very basic, fundamental point, then we're not really having a discussion about energy development or climate change. Again, as Professor Green writes, these realities are commonly ignored, even though folks like Marty Hoffert, Nathan Lewis, Richard Smalley, and John Holdren pointed them out over a decade ago.

Morgan Bazilian and Roger Pielke, Jr. have described what it would actually take for sub-Saharan Africa to reach levels of energy consumption enjoyed by South Africans or Americans. China, India, Indonesia, South Africa, and other emerging countries have of course developed largely with coal-fired power. Africa appears poised to mostly leapfrog coal straight towards large hydro and natural gas. No one has yet conceived of a feasible energy development strategy — to power cities, industries, major infrastructure, etc — with renewables or nuclear as the dominant energy sources. Fortunately, a lot of the much-needed work on delivering truly scalable and affordable zero-carbon alternatives is being done in the developing world itself. China is pursuing nuclear, solar, and carbon capture at a much greater scale than most Western economies. There are several dozen other countries investing in next-generation nuclear technologies (along with things like renewables, shale gas, coal-to-gas, etc.) to power rapidly growing demand. If decarbonization is to remain a priority — as it should — the energy innovation momentum of the developing world must be captured and accelerated.

I would finally recommend the work of Catherine Wolfram at UC Berkeley, who has both criticized the energy projections methods of the IEA and most recently introduced the very helpful framework of "under-grid" populations, which are very near but unconnected to nascent electricity grids. While some in the West have apparently decided that delivering energy access is a household event best pursued by developed-world charities, NGOs, and entrepreneurs, what we actually see with electricity is similar to what we see with food — good governance and political institutions are essential for equitable and abundant distribution.

Amory Lovins of the Rocky Mountain Institute weighed in:

On the contrary, everything has changed since 1998, making Marty's paper even less relevant today. A few quick highlights, doubtless incomplete:

- Half the new generating capacity the world has added since 2008 has been renewable. (In 2013, 68% of China's new capacity was renewable, the majority of it solar and wind; in Europe, 72%.)

- In each of the past three years, the world invested >$250b in non-hydro renewables, adding >80 GW per year. Orders for central thermal plants continue to fade because they have no business case.

- PV power is scaling faster than cellphones worldwide, and in concert with high end-use efficiency (e.g. LED lighting) has extraordinary potential to reach those with no electricity as well as in peri-urban areas.

- China in 2012 increased electricity output more from non-hydro renewables than from all fossil-fueled and nuclear sources, and in 2013, added more PV capacity than the US had added since it invented PVs in 1954. RMI's Reinventing Fire: China collaboration with ERI, EF/C, and LBNL is turning up cost-effective practical potential to raise China's 2050 carbon productivity (GDP per unit of fossil fuel) by ≥15x. The potential in India, where a revolution in efficiency and renewables is also emerging, may be even greater. Both these countries make more electricity from windpower than from nuclear power.

- Unsubsidized market prices at favorable US sites are ~$0.07/kWh for PV power and ~$0.04/kWh for windpower, with both falling rather rapidly; already they beat new combined-cycle gas, even neglecting gas's ~$2/GJ price-volatility cost.

- Grid integration of high-renewables mixes, for those paying attention, is now an interesting evolutionary opportunity, not a prohibitive technological or economic challenge. (See http://www.rmi.org/storage_necessity_myth_amory_lovins for a short nontechnical summary of why no breakthrough in bulk electrical storage is needed.) Four EU countries not especially rich in hydropower got about half their 2013 electricity consumption from renewables—Spain 45%, Scotland 46%, Denmark ≥47%, Portugal 58%—with excellent reliability and no additions of bulk storage.

- Micropower—the Economist's term for renewables, less big hydro, plus cogeneration—now produces one-fourth of the world's electricity (>2x nuclear output); see RMI's July 2014 Micropower Database update for details.

- Commonly mentioned renewable issues around land-use turned out to be a canard (http://www.rmi.org/Knowledge-Center/Library/2011-07_RenewableEnergysFootprintMyth).

- Both globally and nationally, renewables excluding big hydro have scaled at least as fast as nuclear power ever did, and without its two-decade "windup" period of building the very demanding capabilities and complex institutions required.

- Properly integrated renewables have emerged as a uniquely profitable and practical pathway to resilient grids that make big cascading blackouts impossible—without adding material cost (see Transform scenario in Reinventing Fire).

- Although U.S. utilities typically pay ~$0.02–0.03/kWh for efficiency, RMI's empirically grounded Reinventing Fire synthesis showed in detail how to quadruple U.S. electric end-use efficiency at an average cost of ~$0.007/kWh (2009 $). The average IRR for 3-4x higher energy productivity in US buildings is 33%; for doubled energy productivity in US industry, 21%. Previous analyses show higher costs and smaller savings because they left so much out, including integrative design, which often turns diminishing into expanding returns to investments in energy efficiency.

- The world is investing >$300b/y in energy efficiency (the electric fraction is unknown but significant), and some major economies like the US and Germany show pretty steady declines in electricity use even as their economies grow. US weather-adjusted electric intensity, for example, fell 3.4% in 2012 alone. It's probably reasonable to estimate that efficiency's annual addition to global electrical services is at least comparable to that of nonhydro renewables.

- It is not essential for developing countries to repeat industrialized countries' historic trajectories: on the contrary, they can leapfrog in energy supply as many did in cellphones. Some are already doing exactly that. Experience teaches that waiting for the wires to reach the villages, bringing unaffordable thermal power, is impractical. The smart solution is to skip the wires, just as in telecomms, and go efficient/renewable/resilient/distributed.

- In general, developing countries have lower end-use efficiency and have more infrastructure yet unbuilt, and they can more easily build right than fix later, so they have far more dynamic and capacious efficiency opportunities than industrialized countries.

- As Ashok Gadgil and I showed ~23 years ago from World Bank data, investing in negawatts wherever they're cheaper than megawatts cuts by ~4 orders of magnitude (3 from intensity, 1 from velocity) the capital needed by the power sector — the most capital-intensive sector, gobbling about ¼ of all development capital. Such least-cost investment could turn that sector into a net exporter of capital to fund other development needs. That's the most powerful macroeconomic lever we know for global development; yet few finance ministers have ever heard of it.

In short, convergent trends in renewable and distributed power, empowered customers, liberalized markets, transparent pricing, and radical end-use efficiency are building an energy future very different from the past. Powerful players are betting on this new horse, coming up fast on the outside. Those who can't see that horse will continue to lose value, because their preferences have more cost and financial risk than investors wish to fund. These market forces will ultimately prove more important than policy, international agreements, or the inertia of those with old ideas and trapped equity.

I haven't time to enter a protracted discussion about these facts and ideas, but was surprised by their absence from the thread, and hope their injection may prove helpful.

Chris Green responded to Lovins's arguments:

It is hard to square the energy facts presented by Amory Lovins with those I have garnered from BP Energy Statistics. While the statistics here (see below) focus on primary energy consumption and those presented by Lovins focus on electricity (a component of final energy) the differences are stark. Here are just a few of the differences:

1. Lovins says renewables (including small hydro and co-generation) account for 47, 58, and 46% of electricity consumption in Denmark, Portugal and Spain, respectively. But non-hydro renewables (NHRs) and all hydro accounted for only 5.7%, 28% and 19%, respectively, of primary energy consumption in 2013 in these three countries. If only NHRs are considered the percentages would be much lower.

2. Globally, NHRs accounted for 2.2% of primary energy consumption in 2013—up from 0.5% in 2000. If all hydro is added in, the share in 2013 rises to 8.9%.

3. All of the gain in NHRs' share since 2000 was offset by the decline in the share of nuclear.

4. Lovins's claims regarding the contribution of NHRs to electricity consumption in China in 2013 must contend with the fact that NHRs increased by only 9.4 million tonnes of oil equivalent (Mtoe) between 2012 and 2013, compared to 100 Mtoe for fossil fuels.

As indicated in an earlier intervention, low carbon energy has not made any inroads into the 86.5-87% dominant share of fossil fuels in global energy consumption since 2000. What NHRs have gained has been at the expense of nuclear.

I would note that Alex Trembath's useful intervention in this discussion provides insight into why we can expect global energy consumption to continue to grow, and, tangentially, why so much of that energy will be supplied by fossil fuels without major breakthroughs in energy technology.

Trembath built on Green's comment:

Indeed, Professor Green, the numbers for even 2013 are not nearly as optimistic as Dr. Lovins suggests. From 2012 to 2013, consumption of different energy sources rose by the following amounts, measured in Mtoe (note these are primary energy, not only electricity):

Oil: 46.1

Gas: 34.1

Coal: 103.0

Nuclear: 3.3

Hydro: 22.2

Wind: 24.0

Solar: 6.9

(Source: BP 2014)

There really ought to be a collegial rule against discussing energy consumption trends without mentioning capacity factor, which explains why 68% of capacity added in 2013 was renewable but a much smaller minority of added generation was renewable. Capacity factor of course also explains why renewables excluding large hydro (and excluding CHP) account for almost twice the installed capacity of global nuclear, yet nuclear in 2013 still generated about 20% more electricity than all those sources combined. These graphs are taken from RMI's Micropower Database; the first is capacity and the second is generation:

Of course we see here also how essential including cogeneration (typically combined heat and power using natural gas) is for drawing such optimistic conclusions about micro generation.
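Trembath's capacity-factor point can be sketched in a few lines. The capacity factors below are round illustrative assumptions, not measured statistics; the point is only that annual generation is capacity times capacity factor times hours in a year, so a gigawatt of solar and a gigawatt of nuclear are very different things:

```python
# Why capacity share and generation share diverge.
# Capacity factors are assumed round values for illustration.
CAPACITY_FACTORS = {"coal": 0.60, "nuclear": 0.90, "wind": 0.30, "solar": 0.18}

HOURS_PER_YEAR = 8760

def annual_generation_twh(capacity_gw, source):
    """Annual generation in TWh from a given capacity in GW."""
    return capacity_gw * CAPACITY_FACTORS[source] * HOURS_PER_YEAR / 1000

# The same 100 GW of nameplate capacity:
print(annual_generation_twh(100, "solar"))    # ~158 TWh
print(annual_generation_twh(100, "nuclear"))  # ~788 TWh
```

Under these assumed factors, 100 GW of nuclear generates roughly five times as much electricity per year as 100 GW of solar, which is why capacity shares overstate the generation contribution of variable renewables.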

I'm also not sure along what metric Dr. Lovins finds that "globally and nationally, renewables excluding big hydro have scaled at least as fast as nuclear power ever did." As this analysis by my colleagues shows, the best cases of nuclear have outpaced the best cases of renewables when the metric is added generation divided by national population:

Of course some of the fastest energy, not carbon, transitions have been with natural gas. The UK went from 0% natural gas for electricity in 1990 to 40% in 2000. The US deployed over 250 GW of natural gas generation capacity in the last 20 years, and here's what our own energy transition has looked like:

Certain subsidized and unsubsidized renewables projects can come in at low cost, and we should celebrate these and learn from their successes. But while Dr. Lovins cites 7c/kWh as a best case for solar, the EIA still finds an average LCOE for solar of 13c/kWh, higher than advanced nuclear and, of course, higher than natural gas. Wind does appear to have reached an all-time low, settling in at 4-6c/kWh unsubsidized, as the recent LBNL report showed. However, it must be remembered that these technologies' value to the grid decreases, and integration cost increases, with higher penetration. This is from the work of Lawrence Berkeley National Laboratory:

This dynamic is evident in Germany, where wholesale power prices are being depressed by must-dispatch, low-marginal-cost renewables, but balancing this intermittency is causing retail power prices to rise, both from increasing FIT commitments and, increasingly, from costs like capacity payments for baseload power stations and curtailment payments for excess renewables. Here are the results of a review of increasing integration costs from variable renewables:

On the efficiency side of things, we and others have long requested that RMI better incorporate rebound effects into modeling future energy consumption patterns. For instance, in Reinventing Fire (RF), Lovins et al cite 2 studies finding transportation sector rebounds at 3 and 22 percent, but ultimately exclude transportation rebounds from their analysis (Sorrell 2007's survey and the European Commission's review find transportation rebound between 10 and 30 percent; Gavankar and Geyer 2010 identify long-term rebounds between 20 and 65 percent). RF uses 10 and 5 percent rebound for heating and cooling respectively, while Sorrell 2007 finds 10-30 percent for heating and 1-26 percent for cooling. RF rejects the idea of rebound effects in industrial processes, while Saunders finds rebound in energy-intensive industries like primary metals, utilities, manufacturing, and agriculture to be in the range of 20-35 percent. The IPCC and the IEA have recently begun to incorporate rebound effects into their analyses, largely at the behest of organizations like Breakthrough, UKERC, the European Commission, and vocal scholars like Harry Saunders, Steve Sorrell, Joyashree Roy, Dorothy Maxwell, and Karen Turner.

I think it's clear from the above that the world still continues to rely primarily on fossil fuels for development and that even the richest economies are far from achieving significant scaling of renewables (or nuclear) for major decarbonization. Where renewables have scaled, they have brought national electricity prices right up with them. This is especially true for solar PV, where in Germany and Spain FIT payments have caused serious energy policy turmoil, and in the US, where net metering policies have caused public utility backlash against PV deployment in states like Arizona and California (a result of utilities being forced to pay 15-30c/kWh for PV electricity instead of purchasing on the wholesale markets for 4-6c/kWh).

To return to the point of Andy's original prompt: renewables are expanding and their costs are declining, but isolated and contextless success stories don't help us understand the true scale of both the climate and energy development challenges. Drs. Davis and Socolow's intervention with the new paper is extremely helpful because it aims to reveal the scale of the challenge. While renewables continue to develop worldwide — and while countries as varied as the US, UK, China, South Korea, Turkey, UAE, Ethiopia, Vietnam, Jordan, etc continue to develop nuclear — fossil still reigns. Now the deployment of fossil and coal generation in regions like sub-Saharan Africa is very likely a net positive for humanity, since a lack of modern electricity systems is largely what makes those locales most vulnerable to climate and other impacts in the first place. But again, if decarbonization of the global economy is the ultimate goal, then I think it's clear we still need answers on the cost and scalability of nuclear and of renewables both large and small.

At this point Rob Socolow, one of the paper authors, sought to be sure readers caught the main intent of the paper:

It would be nice to flag what Steve Davis and I have contributed which we think is new.

We are calling attention to a systematic neglect of capital investment decisions in the reporting rules related to climate change, relative to current emissions.

We introduce a concept, "committed emissions," and a methodology to quantify the carbon implications of capital investments.

We show quantitatively that, for the global power sector in any recent year, two quantities are comparable: 1) current emissions that year from all power plants, and 2) "committed emissions" from plants that went on line that year – emissions that can be expected from these plants in the future (when we assume a 40 year lifetime).

We take the concept of remaining committed emissions developed in Steve's 2010 paper with Caldeira and Matthews and work out the trajectory of that value for the global power sector each year over the past 60 years (the earlier paper reported the value for only a single recent year). We find that this index has never fallen, is over 300 GtCO2 today, and was 200 GtCO2 as recently as the year 2000.

We recommend that "committed emissions" be incorporated prominently into energy analysis, scenario making, and climate policy.
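The core calculation Socolow describes can be sketched in a few lines. This is an illustrative simplification, not the paper's actual method or data: the capacity factor, emission intensity, and plant parameters below are round assumed values, with only the 40-year lifetime taken from the text:

```python
# Illustrative sketch of "committed emissions" for a single power plant:
# lifetime emissions = capacity x capacity factor x hours/year
#                      x emission intensity x remaining lifetime.
# All parameter values are round assumptions for demonstration.

def committed_emissions_mt(capacity_gw, capacity_factor,
                           intensity_t_per_mwh, remaining_years):
    """Remaining committed CO2 emissions, in million tonnes (Mt)."""
    mwh_per_year = capacity_gw * 1000 * capacity_factor * 8760
    return mwh_per_year * intensity_t_per_mwh * remaining_years / 1e6

# A hypothetical 1 GW coal plant: ~60% capacity factor,
# ~1 t CO2 per MWh, and the 40-year lifetime assumed in the paper.
print(f"{committed_emissions_mt(1.0, 0.6, 1.0, 40):.0f} Mt CO2")  # ~210 Mt
```

Summing such a quantity over every plant that went online in a given year gives that year's new commitment; summing the remaining portion over all operating plants gives the running total whose trajectory the paper traces.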

Burt Richter, the physics Nobelist and author of "Beyond Smoke and Mirrors: Climate Change and Energy in the 21st Century," added this comment:

Sorry to jump in late, but I have been away. I think of world energy demand as follows:

Energy = (population) x (per capita income) x (energy/GDP)

We know population is going up (UN mid-level projection is about 9.5 billion by 2050 and 10.5 by 2100), and the poor want to get rich while the rich don't want to get poor, so the only way to work on global energy demand is the last term, which is really energy efficiency. If you want to worry about emissions, add another term (emissions per unit energy), which is where clean energy comes in. I use "clean" rather than "renewables" because renewables is a term designed to exclude nuclear, big hydro, and large-scale efficiency efforts. Taken with most projections for growth, energy demand will be up by nearly a factor of 4 by 2100.
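Richter's decomposition is a Kaya-style identity, and it is easy to see why the factor-of-4 conclusion follows. A minimal sketch, where the growth multipliers are illustrative round assumptions rather than actual projections:

```python
# Richter's decomposition (a Kaya-style identity):
# Energy = population x (GDP per capita) x (energy per unit of GDP).

def energy_demand(population, gdp_per_capita, energy_per_gdp):
    return population * gdp_per_capita * energy_per_gdp

def emissions(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    """Richter's extra term: emissions per unit of energy."""
    return energy_demand(population, gdp_per_capita, energy_per_gdp) * co2_per_energy

# Illustrative multipliers: population up ~1.35x by 2100, per-capita
# income up ~3x, energy/GDP unchanged.
baseline = energy_demand(1.0, 1.0, 1.0)
year_2100 = energy_demand(1.35, 3.0, 1.0)
print(year_2100 / baseline)  # ~4x: demand quadruples unless efficiency improves
```

The identity makes Richter's argument mechanical: with population and income growth effectively fixed, only the energy/GDP term (efficiency) can restrain demand, and only the CO2/energy term (clean supply) can restrain emissions.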

What kind of energy is the question. The most recent authoritative numbers for the world that I can find are from the IEA and credit big hydro and combustibles with significant contributions to the total, while wind and solar are quite small. As for Amory's numbers, Denmark is insignificant on the world scale and is connected to the Scandinavian power grid, so it has lots of backup. Germany is not so connected and is already having trouble with the stability of its grid. In discussions with Amory in the past I have always found it useful to ask for references for his numbers.

As to the world's poorest countries, they contribute negligibly to emissions and to demand. Let them start up development in any way they can.

Then came input from Nathan Myhrvold, who is best known for his time in research at Microsoft and as an inventor and investor, but has been an author on some relevant energy papers:

Burt's outlook is one that I share.

If you think of energy usage from a "rich world" perspective where there is low growth in demand you can imagine the 21st century challenge to be one where we replace existing fossil fuel energy with clean (in Burt's sense) energy.

But the reality is that the currently poor world is getting richer so by 2100 we need way more primary energy. Today's entire infrastructure will be only ~25% of the picture. The ~4X increase will be in the developing world which will put a huge premium on cost.

As Burt asks, what will that be? At present it is very hard to be optimistic that the new energy will be clean. That isn't the current course.

Amory appears to be far more optimistic than I am, based on numbers that I can't reconcile with the figures available to me.

Daniel Kammen, a professor of energy at the University of California, Berkeley, who happens to edit the journal in which the Davis-Socolow paper is published, offered some overarching thoughts:

As Editor-in-Chief of Environmental Research Letters I am delighted to see this neat paper by Davis and Socolow generating this useful discussion.

Commitment accounting as per Davis and Socolow is useful (though I dispute that these facts are not widely known), and it is always useful to be reminded of the implications of our collective decisions today.

The key point that Davis and Socolow make is that when we talk about stranded assets, their measure puts this huge looming threat into a form that the business community can readily assess in terms of risk.

When I was at the World Bank my number one goal was to get as many multinational agencies as possible to perform life-cycle accounting of emissions on all of their current assets and future potential investments.

This is a step that governments, companies, municipalities and others could commit to at, for example, the September 23 Climate Summit in New York.

Even for entities not ready to implement a price on carbon, simply making this accounting a business requirement would dramatically advance the calculation along the lines that Davis and Socolow recommend.

As an example, the University of California system has committed to eliminating fossil fuel use by 2025 — a major task given our transportation footprint.

One way to do this is to start with good, holistic accounting.

We took a step there with a recent national carbon emissions per household map where we show interactively the average footprint for each US zip code. (Yes, a finer-grained map is needed; that paper is next.)

Second, a very simple step that states, agencies, cities, and the federal government could take to bring committed emissions into focus is to do that GHG life-cycle analysis along with other project assessments.

In California, for example, we would recommend using the current market price of carbon GHG emissions (around $11/ton in CA), but this could arguably vary up to the social cost of carbon (http://www.epa.gov/climatechange/EPAactivities/economics/scc.html) or other reasonable values.

From there, agencies, financial officers, CFOs would all know the long-term implications of their decisions.
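The accounting Kammen describes reduces to multiplying an asset's life-cycle (committed) emissions by a chosen carbon price. Here is a minimal sketch under stated assumptions: the plant and its 200-million-ton commitment are hypothetical, the $11/ton figure is the California market price quoted above, and the higher social-cost figure is purely illustrative.

```python
def carbon_liability_usd(committed_tons_co2, price_per_ton_usd):
    """Dollar value of an asset's committed (life-cycle) CO2 emissions
    at a given carbon price."""
    return committed_tons_co2 * price_per_ton_usd

# Hypothetical plant committing 200 million tons of CO2 over its lifetime:
at_market_price = carbon_liability_usd(200e6, 11.0)   # ~$11/ton CA market price
at_social_cost = carbon_liability_usd(200e6, 37.0)    # illustrative social-cost value
```

Even at the lower market price, a single large plant's committed emissions show up as a multibillion-dollar line item, which is exactly the visibility in ordinary financial assessments that this accounting is meant to create.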

Kammen appended this thought focused on his years of work in developing countries:

I'd like to add a bit on the developing nation aspects of this conversation.

I have been working in Central America and East Africa for the last 30 years.

I am also just back from far northern Kenya, where I was working on grid planning and the infrastructure for on-grid wind energy (Africa's largest wind farm is being built there to take advantage of a remarkable wind site and the transmission access provided by a new Kenya-Ethiopia linking line). Photo: over Lake Turkana with the wind site at upper right (construction begins this fall).

Kenya's grid today is 1.8 GW, and the country's least-cost form of new energy is geothermal (8.5 cents/kWh) for which the economic resource is > 9 GW.

Second, off-grid solar is the fastest growing and largest component of new energy access in a country with 29% grid access today. One company with which my lab has an NDA/MOU is the largest user of mobile online money in the country.

Kenya is also on a path to replace its hydropower-dominated grid of today with one where wind and geothermal are the largest providers of energy (hydropower in the region is increasingly uncertain due to climate change).

Kenya is not unique. Many nations have this clean energy capacity. This message gets lost all the time. 80%+ clean energy paths are not at all hard to find (we are doing assessment for 10 nations right now), and other researchers are doing similar things elsewhere.

On and off-grid are both vital, and public-private partnerships and pure private sector investments are key. We need to find ways to support locally directed efforts that meet these fully decarbonized local visions.

Steve Davis circled back to the prime points of the paper:

Regardless of how well renewables are or are not doing, the point Rob and I are trying to make is that fossil infrastructure is still expanding in a big way: the total committed emissions represented by power plants is growing even faster than annual emissions. Whether or not we share Amory's rosy outlook on renewables, these fossil commitments are inconsistent with a decline in emissions any time soon.

Myhrvold reacted:

This is a key point — the fossil infrastructure is still expanding.

Ken Caldeira and I have done a lot of work recently modeling atmospheric GHG concentrations during a switchover from fossil fuels to clean energy. A general lesson from this work is that inertia in the climate system means that the "hangover" from emissions lasts for decades. Radiative forcing due to greenhouse gases and global average temperature continue to rise for a long time.

So even if you stopped emissions today, we would have climate impact for decades.

But we are not stopping today – the fossil infrastructure is still expanding. The metric of "committed emissions" makes that clear.

Indeed if one combined that metric with modeling results you could call it "committed delta T" – in effect power plant construction commits us to higher global average temperature.

As promised, here's the transcript of Davis's video presentation on the work:

Video Abstract for Commitment Accounting of CO2 Emissions, Davis and Socolow

One of the things that makes climate change an especially difficult problem is that, for the average policymaker, and certainly the average person, it lacks immediacy. As one psychologist puts it, "Climate change may ruin your future, but it won't mess up your evening."

The fact that climate impacts are a threat in the long run and will materialize slowly over time makes it hard for us to get very excited about responding to the threat, no matter how devastating the impacts may turn out to be.

The problem is, our instinct to wait and see is at odds with another characteristic of climate change, which is that it's a problem with huge inertia.

There's a bit of physical inertia—it would take temperatures a few years to catch up with all the CO2 we've been dumping into the atmosphere even if we stopped today.

But more importantly, there is a tremendous amount of social, economic and political inertia. We've spent nearly two centuries and tens of trillions of dollars worldwide building up the largest network of infrastructure that has ever existed to extract, process, and deliver fossil fuels and fossil energy to consumers. All of this long-lived infrastructure represents an enormous investment that won't be easy to walk away from.

The idea behind the new paper my co-author Rob Socolow and I have written is that it's possible to estimate future GHG emissions that are locked-in by all the existing fossil infrastructure, what we call "committed emissions." Our paper demonstrates the concept of this commitment accounting by quantifying the CO2 emissions that are expected to come from now-existing power plants.

Rather than tallying up CO2 emissions from power plants in the year they come out of the plant's smokestacks, we assume a typical plant lifetime of 40 years and allocate the lifetime emissions of each power plant to the year it was built.

What we found is that the currently existing power plants around the world—unless they are retired early or retrofitted so that their emissions are captured—can be anticipated to emit roughly 300 billion tons of CO2 in the future. That's about 10 years' worth of current emissions from existing power plants alone, and enough to put a big dent in the remaining budget of emissions we can dump into the atmosphere and still have a reasonable chance of avoiding 2 degrees C of warming relative to the preindustrial era.
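The accounting step Davis describes can be sketched in a few lines. This is a hedged illustration, not the authors' code: the 40-year plant lifetime follows the paper, but the plants in the example fleet are hypothetical.

```python
PLANT_LIFETIME_YEARS = 40  # typical lifetime assumed in the paper

def remaining_committed(annual_mt_co2, years_operated, lifetime=PLANT_LIFETIME_YEARS):
    """Remaining committed emissions (Mt CO2) of one existing plant."""
    return annual_mt_co2 * max(lifetime - years_operated, 0)

def commitments_by_build_year(plants, lifetime=PLANT_LIFETIME_YEARS):
    """Allocate each plant's full lifetime emissions to the year it was
    built, rather than to the years the CO2 leaves the smokestack."""
    totals = {}
    for plant in plants:
        totals[plant["built"]] = (totals.get(plant["built"], 0.0)
                                  + plant["annual_mt"] * lifetime)
    return totals

# Hypothetical fleet; annual emissions in Mt CO2.
fleet = [
    {"built": 2005, "annual_mt": 6.0},   # a large coal plant
    {"built": 2005, "annual_mt": 2.0},   # a gas plant
    {"built": 2012, "annual_mt": 6.5},
]
# commitments_by_build_year(fleet) -> {2005: 320.0, 2012: 260.0}
```

Under this convention, a new plant adds its whole 40-year emissions stream to the ledger in its build year, which is why a construction boom shows up immediately in the committed-emissions index even while annual emissions change only gradually.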

But even more daunting than the size of this commitment is its growth: we found that total committed emissions grew by an average of 4% per year between 2000 and 2012, as we built more coal-fired power plants over that period than in any previous 12-year period.

Power plants in the US, EU and India each represent about 10% of the current committed emissions, and the incredible expansion of coal power in China shows up clearly: Chinese plants represent 42% of global committed emissions. Power plants in Japan, Indonesia, Saudi Arabia and Iran also have a substantial and growing share of the world total.

Now to be clear, "committed" doesn't mean "unavoidable." There's nothing to stop us from shuttering a brand new power plant or retrofitting it with carbon capture and storage technology, but of course there'd be costs associated with doing either of those things. Once something is built and operating, there really is some commitment.

So, coming back to the issues of immediacy and inertia, our hope is that the sort of commitment accounting described and demonstrated in the paper will be taken up by other analysts and used to evaluate the long-run climate impacts of current capital investments, in turn allowing policymakers to confront these distant implications of their decisions in the present.

Thanks for listening, and feel free to email me with questions about the paper or the concept of commitment accounting of GHG emissions.



Dot Earth Blog: A Closer Look at Turbulent Oceans and Greenhouse Heating

Written By Unknown on Wednesday, August 27, 2014 | 15.49

Photo: A sailboat encounters a waterspout along a squall line in the Indian Ocean near the Maldives in 1984 (high resolution). Credit: Andrew C. Revkin

Updated, 6:30 p.m. | Earth's climate is shaped by the interplay of two complicated and turbulent systems — the atmosphere and oceans. (The photo above is from the two years I spent at that interface as crew on ocean-roaming sailboats.) The oceans hold the majority of heat in the system, are full of sloshy cycles on time scales from years to decades and, despite an increase in monitoring using sophisticated diving buoys, remain only spottily tracked.

It's no wonder, then, that assessing the mix of forces shaping short-term wiggles in global and regional atmospheric temperature (years to decades) remains a daunting exercise. That's why it's worth stepping back after weeks of news about studies of the role of oceans in retarding, and sometimes accelerating, global warming to reflect a bit on the difference between edge-pushing analysis and firm scientific conclusions.

What's firmly established is that the climate is warming, that the buildup of human-generated heat-trapping greenhouse gases is contributing substantially to the warming and that while the buildup of gases is steady, the rise in temperatures is not.

There's been a burst of worthy research aimed at figuring out what causes the stutter-steps in the process — including the current hiatus/pause/plateau that has generated so much discussion. The oceans are high on the long list of contributors, given their capacity to absorb heat. The recent studies have pointed variously to processes in the Pacific, Atlantic and Southern oceans (the latter being the extraordinary band of seas in the Southern Hemisphere where winds circulate around the globe unimpeded by continents).

There's important work to be done on this question but — as the oceanographer Carl Wunsch notes at the end of this post — the paucity of data on ocean heat makes it tough to get beyond "maybe" answers.

Peter Spotts of the Christian Science Monitor wrote a nice piece on the battle of the ocean basins. Here's his description of the Atlantic mechanism:

[I]n the Atlantic, the heat is carried north as part of a powerful current system known as the Atlantic thermohaline circulation. The north-flowing Gulf Stream is the most visible manifestation of this circulation.

By the time it reaches the far North Atlantic, the dense, salty water has cooled and sinks. It plunges toward the seafloor and heads south at depth, retaining some of the heat it accumulated on the surface.

In a news article in the journal Science, which published the latest paper on the Atlantic's role in decades-long global temperature fluctuations, Eli Kintisch described the Pacific argument this way: 

[I]n the 17 August Nature Climate Change study, a team led by [Kevin] Trenberth suggests that natural variability in the Pacific explains more than half of the hiatus. Based on data and climate simulations, they argue that a pattern known as the Pacific Decadal Oscillation, which shifts every 20 to 30 years, is driving the increased upwelling as well as other climate trends, including the rapid warming of the Arctic and recent cold winters in Europe.

The newest paper, in the current issue of Science, "Varying planetary heat sink led to global-warming slowdown and acceleration," argues that the Atlantic not only has shaped the current plateau, but also was responsible for half of the sharp global warming at the end of the 20th century. The paper, by Xianyao Chen of the Ocean University of China and Ka-Kit Tung of the University of Washington, has a remarkably trenchant abstract:

A vacillating global heat sink at intermediate ocean depths is associated with different climate regimes of surface warming under anthropogenic forcing: The latter part of the 20th century saw rapid global warming as more heat stayed near the surface. In the 21st century, surface warming slowed as more heat moved into deeper oceans. In situ and reanalyzed data are used to trace the pathways of ocean heat uptake. In addition to the shallow La Niña–like patterns in the Pacific that were the previous focus, we found that the slowdown is mainly caused by heat transported to deeper layers in the Atlantic and the Southern oceans, initiated by a recurrent salinity anomaly in the subpolar North Atlantic. Cooling periods associated with the latter deeper heat-sequestration mechanism historically lasted 20 to 35 years.

In an e-mail exchange, Ka-Kit Tung noted how this work can help reveal the steady warming in the background that is attributable to human activities:

The underlying anthropogenic warming trend, even with the zero rate of warming during the current hiatus, is 0.08 C per decade.* [That's 0.08 degrees Celsius, or 0.144 degrees Fahrenheit.] However, the flip side of this is that the anthropogenically forced trend is also 0.08 C per decade during the last two decades of the twentieth century when we backed out the positive contribution from the cycle….

This aspect of the work was largely missed in press coverage. I asked a range of climate and ocean scientists to weigh in on the paper. Many focused on details of the Atlantic-Pacific debate. A few took a broader view that's worth sharing:

Joshua K. Willis of NASA's Jet Propulsion Laboratory said this:

In regards to your question, if you mean how robust is the "slowdown" in global surface warming, the answer is that it is probably just barely statistically significant. If you are wondering whether it is meaningful in terms of the public discourse about climate change, I would say the answer is no. The basic story of human-caused global warming and its coming impacts is still the same: humans are causing it, and the future will bring higher sea levels and warmer temperatures. The only questions are: how much and how fast?

As far as the cause of the slowdown, I think there is still some debate, not just about the cause but about the details of what's going on. For example, there have been several studies, including this one, suggesting that some deeper layers of the ocean are warming faster now than they were 10 or 15 years ago. This accelerated warming in a deep layer of the ocean has been suggested mostly on the basis of results from reanalyses of different types (that is, numerical simulations of the ocean and atmosphere that are forced to fit observations in some manner). But it is not clear to me, actually, that an accelerated warming of some sub-surface layer of the ocean (at least in the globally averaged sense) is robustly supported by the data itself.

Until we clear up whether there has been some kind of accelerated warming at depth in the real ocean, I think these results serve as interesting hypotheses about why the rate of surface warming has slowed down, but we still lack a definitive answer on this topic.

Here's Andrew Dessler of Texas A&M University:

There are a few interesting things to note here.

First, the hiatus is an example of how science works. When it was first observed a few years ago, there were lots of theories — including things like stratospheric water vapor, solar cycles and stratospheric aerosol forcing. After some intense work by the community, there is general agreement that the main driver is ocean variability. That's actually quite impressive progress and shows how legitimate uncertainty is handled by the scientific community.

Second, I think it's important to put the hiatus in context. This is not an existential threat to the mainstream theory of climate. We are not going to find out that, lo and behold, carbon dioxide is not a greenhouse gas and is not causing warming. Rather, I expect that the hiatus will help us understand how ocean variability interacts with the long-term warming that humans are causing. In a few years, as we get to understand this more, skeptics will move on (just like they dropped arguments about the hockey stick and about the surface station record) to their next reason not to believe climate science.

As far as this particular paper goes, I think the finding that the heat is going into the Atlantic and Southern Oceans is probably pretty robust. However, I will defer to people like Josh Willis who know the data better than I do.

What's most exciting to me is that this is really a fascinating conundrum. People like Kevin Trenberth and Kosaka and Xie have argued quite convincingly in print that the action seems to be in the Pacific. So the challenge is to try to reconcile that evidence with the ocean heat data that show the energy going into other ocean basins. Ultimately, the challenge is to come up with a parsimonious theory that fits all of the data.

I do think that ocean variability may have played a role in the lack of warming in the middle of the 20th century, as well as the rapid warming of the 1980s and 1990s. But the argument that the hiatus will last for another decade or two is very weak and I would not put much faith in it. If the cycle has a period of 60-70 years, that means we have one or two cycles of observations. And I don't think you can say much about a cycle with just 1-2 cycles: e.g., what the actual period of the variability is, how regular it is, etc. You really need dozens of cycles to determine what the actual underlying variability looks like. In fact, I don't think we even know if it IS a cycle.

And this brings up what to me is the real question: how much of the hiatus is pure internal variability and how much is a forced response (from loading the atmosphere with carbon). This paper seems to implicitly take the position that it's purely internal variability, which I'm not sure is true and might lead to a very different interpretation of the data and estimate of the future.

Thus, their estimate of 1-2 more decades before rapid warming resumes might be right; but, if so, I'd consider them lucky rather than smart.

John Michael Wallace, a professor emeritus of atmospheric sciences at the University of Washington, offered these thoughts:

Back in 2001 I served as a member of the committee that drafted the National Research Council report, "Climate Change Science: An Analysis of Some Key Questions." The prevailing view at that time, to which I subscribed, was that the signal of human-induced global warming first clearly emerged from the background noise of natural variability starting in the 1970s and that the observed rate of increase from 1975 onward could be expected to continue into the 21st century. The Fourth Assessment Report of the IPCC, released in 2007, offered a similar perspective, both in the text and in the figures in its Summary for Policymakers.

By that time, I was beginning to have misgivings about this interpretation. It seemed to me that the hiatus in the warming, which by then was approaching ten years in length, should not be dismissed as a statistical fluke. It was as legitimate a part of the record as the rapid rises in global-mean temperature in the 1980s and 1990s.

In 2009 Zhaohua Wu contacted me about a paper that he, Norden Huang, and other colleagues were in the process of writing in which they attributed the stair-step behavior in the rate of global warming, including the current hiatus, to Atlantic multidecadal variability. I was initially a bit skeptical, but in time I began to appreciate the merits of their arguments and I became personally involved in the project. The paper (Wu et al.) encountered some tough sledding in the review process, but we persisted and the article finally appeared in Climate Dynamics three years ago. [See Judith Curry's helpful discussion.]

The new paper by Tung and Chen goes much farther than we did in making the case that Atlantic multidecadal variability needs to be considered in the attribution of climate change. I'm glad to see that it is attracting attention in the scientific community, along with the recent papers of Kosaka et al. and Meehl et al. emphasizing the role of ENSO-like variability. I hope this will lead to a broader discussion about the contribution of natural variability to local climate trends and to the statistics of extreme events.

Carl Wunsch, a visiting professor at Harvard and professor emeritus of oceanography at the Massachusetts Institute of Technology, offered a valuable cautionary comment on the range of papers finding oceanic drivers of short-term climate variations. He began by noting the challenge just in determining average conditions:

Part of the problem is that anyone can take a few measurements, average them, and declare it to be the global or regional value. It's completely legitimate, but only if you calculate the expected uncertainty and do it in a sensible manner.

The system is noisy. Even if there were no anthropogenic forcing, one expects to see fluctuations including upward and downward trends, plateaus, spikes, etc. It's the nature of turbulent, nonlinear systems. I'm attaching a record of the height of the Nile — 700-1300 CE. Visually it's just what one expects. But imagine some priest in the interval from 900-1000 telling the king that the Nile was obviously going to vanish…

Photo: Variations in the height of the Nile River over the centuries. Credit: Carl Wunsch

Or pick your own interval. Or look at the central England temperature record or any other long geophysical one. If the science is done right, the calculated uncertainty takes account of this background variation. But none of these papers, Tung's or Trenberth's, does that. Overlain on top of this natural behavior are the small, and often shaky, observing systems, in both atmosphere and ocean, where shifting places, times and technologies must also produce a change even if none actually occurred. The "hiatus" is likely real, but so what? The fuss is mainly about normal behavior of the climate system.

The central problem of climate science is to ask what you do and say when your data are, by almost any standard, inadequate. If I spend three years analyzing my data, and the only defensible inference is that "the data are inadequate to answer the question," how do you publish? How do you get your grant renewed? A common answer is to distort the calculation of the uncertainty, or ignore it altogether, and proclaim an exciting story that the New York Times will pick up.

A lot of this is somewhat like what goes on in the medical business: Small, poorly controlled studies are used to proclaim the efficacy of some new drug or treatment. How many such stories have been withdrawn years later when enough adequate data became available?

Addendum, 6:30 p.m. | Ka-Kit Tung responded to Wunsch and Dessler in an e-mail.

Here's his reply to Carl Wunsch's reaction:

Carl Wunsch's concern over the sparsity of the ocean data, as expressed in his recent papers, is mostly related to the part of the ocean below 2000 m (the abyssal ocean). He pointed out that the signals in the abyssal ocean are mostly at least 500 years old. The signals that we are interested in for the current hiatus of the past 15 years came down from above and have not reached the part of the ocean below 2000 m. We used only data above 1500 m, and our case was made in Figure 2 of the paper using recent data with better coverage.

And Andrew Dessler's reaction:

We did not predict in our Science paper that the current hiatus will last another decade or two. The statement that it will last another "15 years" was found in the press release by Science magazine. We were not given a chance to approve it; that probably was not their practice. In the paper itself, we discussed the fact that such cooling periods "historically" lasted 20-35 years. In our university's press release, we emphasized that it is difficult to predict how long it will last given the changing climate conditions.

Dessler mentioned that there are only 1-2 cycles of this 60-year variability in the short climate record. We discussed this issue in our paper: the global instrumental record since 1850 contains only two and a half cycles of this 65-year cycle. Tung and Zhou (2013, PNAS) extended it a few hundred years using Central England temperature data. We are currently reexamining Greenland ice-core data that extends the cycle back another thousand years. In addition, free-running models have produced these multidecadal cycles in their control runs (i.e., without anthropogenic forcing), although the latest batch of models has problems getting the period right.

Postscript, 2:13 p.m. | * In a followup chat, Tung asked to slightly expand the comment at the asterisk above.



Dot Earth Blog: A Small Island Takes a Big Step on Ocean Conservation

Written By Unknown on Saturday, August 23, 2014 | 15.49

Photo: A map of protected marine zones that are being established around the Caribbean island of Barbuda. Credit: Waitt Institute

Marine life in the Caribbean has been badly hurt in recent decades by everything from an introduced pathogen that killed off reef-grooming sea urchins to more familiar insults like overfishing and impacts of tourism and coastal development.

Some small island states are now trying to restore once-rich ecosystems while sustaining their economies. A case in point is Barbuda, population 1,600 or so, where the governing council on Aug. 12 passed a suite of regulations restricting activities in a third of the island's waters. The regulations and reef "zoning," in essence, came about after months of discussions involving fishing communities, marine biologists and other interested parties, facilitated by the Waitt Institute, a nonprofit conservation organization.

Of course, it'll take time to see if the ambitious marine zoning plan works as intended. Local fishermen are seeking help finding new sources of income, according to the Antigua Observer.

But the process, which took nearly two years of study and meetings, provides a promising template not only for other island communities but also for any region where environmental restoration efforts have to mesh with local economic concerns.

The best way to avoid resistance is to involve affected communities from the start. The initiative brings to mind the Cheonggyecheon stream restoration project in the heart of Seoul that I wrote about in 2009. The mayor's team held hundreds of meetings with merchants and residents to work out issues and explain benefits.

Here's a piece describing Barbuda's "Blue Halo" initiative, written by Arthur Nibbs, the chairman of the Barbuda Council and minister of fisheries of Antigua and Barbuda, and Ayana Elizabeth Johnson, a marine biologist and the Waitt Institute executive director (and National Geographic blogger):

How to Use the Ocean Without Using it Up

By Minister Arthur Nibbs and Ayana Elizabeth Johnson, Ph.D.

Small islands face big ocean problems, but the solutions can be simple. Set some areas aside, protect key species, and prevent habitat damage. This will benefit the economy, help ensure food security, and allow the ocean to be used sustainably, profitably, and enjoyably, for this and future generations.

A year and a half ago, the Barbuda Council and the Waitt Institute forged a partnership to envision a sustainable ocean future for the island of Barbuda and launch the Barbuda Blue Halo Initiative. Put simply, we collaborated to design a plan to use the ocean without using it up.

This month, the Barbuda Council signed into law a sweeping set of new ocean management regulations that zone the coastal waters, strengthen fisheries management, and establish a network of marine sanctuaries. Barbuda may be a small island, but we hope the big commitment represented by these new policies will set an inspiring example for the region.

The new regulations create five marine sanctuaries, collectively protecting 33% (139 square kilometers) of the coastal area, and initiate a two-year hiatus on fishing in Codrington Lagoon to enable fish populations to rebuild and habitats to recover. Catching parrotfish and sea urchins has been completely prohibited, as those herbivores are critical to keeping algae levels on reefs low so coral can thrive. Barbuda is the first Caribbean island to put either of these important, strong measures in place.

Of course, if the community didn't support this, it wouldn't work. Therefore, to ensure the new policies reflect stakeholders' concerns and priorities, there were six rounds of community consultations. The final zoning map is the fourth iteration – the boundaries have changed dramatically since the Council's initial draft. The prohibition of using nets on the reefs was included at the request of local fishers concerned with reef damage. Though there will never be 100% agreement, this has been a consensus-seeking process and the Council aimed to balance current and future needs to use ocean resources.

Why were these measures necessary? First, Caribbean-wide, communities are seeing declines in the health of coastal ecosystems and fish populations. Barbuda is no exception. On average, Barbuda's reefs are 79% covered in algae, with less than 14% living coral. This is not good for fishing or tourism; fish need habitats and tourists want to see vibrant abundance.

Second, fishers now have to go further and into deeper water to make a good catch. This is expensive in fuel and it is dangerous. The regulations aim to rebuild coastal fisheries and ensure fishers have a livelihood that will last in perpetuity. Some people will say that these policies are meant to hurt fishers, but that couldn't be further from the truth. The sanctuaries were created to replenish the surrounding fishing areas.

Third, Barbuda is highly endangered by climate change and sea level rise. The coral reefs and mangroves buffer the island from the impacts of storms, so protecting them will in turn protect the island. Healthier reefs will also be more resilient to impacts like warming sea temperatures that can't be prevented locally.

Last, but certainly not least, dwindling coastal resources threaten local culture. The people of Barbuda have a strong connection to the sea – fish fries, camping on the beaches, kids growing up learning to fish with their parents and grandparents. In order to preserve this way of life, ocean ecosystems must be protected.

Over the next several years, the Barbuda Council and Waitt Institute will continue to work closely as these regulations are implemented. The Institute will help to set up a long-term scientific monitoring program, train local staff in marine ecology and field research techniques, design enforcement approaches, provide needed equipment, and work with the schools to develop an ocean education curriculum.

Unfortunately, Barbuda is not unique in facing these challenges – degraded coral reefs, depleted fisheries, and climate change impacts are nearly ubiquitous globally. So putting strong ocean policies in place is merely the first step, and we hope that more and more nations will take this step alongside Barbuda.

Here's a helpful presentation on the Blue Halo initiative given in July at the Scripps Institution of Oceanography by Stephanie Roach, who worked with the Waitt Institute on the project over the past year:


Well: Legal Marijuana for Parents, but Not Their Kids

Written By Unknown on Wednesday, 20 August 2014 | 15.49

Photo Credit Stuart Bradford
The Well Column

Tara Parker-Pope on living well.

When the antidrug educator Tim Ryan talks to students, he often asks them what they know about marijuana. "It's a plant," is a common response.

But more recently, the answer has changed. Now they reply, "It's legal in Colorado."

These are confusing times for middle and high school students, who for most of their young lives have been lectured about the perils of substance abuse, particularly marijuana. Now it seems that the adults in their lives have done an about-face.

Recreational marijuana is legal in Colorado and in Washington, and many other states have approved it for medical use. Lawmakers, the news media and even parents are debating the merits of full-scale legalization.

"They are growing up in a generation where marijuana used to be bad, and maybe now it's not bad," said Mr. Ryan, a senior prevention specialist with FCD Educational Services, an antidrug group that works with students in the classroom.

"Their parents are telling them not to do it, but they may be supporting legalization of it at the same time."

Antidrug advocates say efforts to legalize marijuana have created new challenges as they work to educate teenagers and their parents about the unique risks that alcohol, marijuana and other drugs pose to the developing teenage brain.

These educators say their goal is not to vilify marijuana or take a stand on legalization; instead, they say their role is to convince young people and their parents that the use of drugs is not just a moral or legal issue, but a significant health issue.

"The health risks are real," said Steve Pasierb, the chief executive of the Partnership for Drug-Free Kids. "Every passing year, science unearths more health risks about why any form of substance use is unhealthy for young people."

Already nearly half of teenagers — 44 percent — have tried marijuana at least once, according to data from the partnership. Regular use is less common. One in four teenagers report using marijuana in the past month, and 7 percent report frequent use — at least 20 times in the past month.

Even in the states where marijuana is legal, it remains, like alcohol, off-limits to anyone younger than 21. But the reality is that once a product becomes legal, it becomes much easier for underage users to obtain it.

This summer, the Partnership for Drug-Free Kids released its annual tracking study, in which young people were asked what stopped them from trying drugs. Getting into trouble with the law and disappointing their parents were cited as the two most common reasons young people did not use marijuana. The concern now is that legalization will remove an important mental barrier that keeps adolescents from trying marijuana at a young age.

"Making it legal makes it much more accessible, more available," said Dr. Nora Volkow, the director of the National Institute on Drug Abuse. "This is the reality, so what we need to do is to prevent the damage or at least minimize it as much as possible."

Drug prevention experts say the "Just Say No" approach of the 1980s does not work. The goal of parents should not be to prevent their kids from ever trying marijuana.

Instead, the focus should be on practical reasons to delay use of any mind-altering substance, including alcohol, until they are older.

The reason is that young brains continue to develop until the early 20s, and young people who start using alcohol or marijuana in their teens are far more vulnerable to long-term substance-abuse problems.

The brain is still wiring itself during adolescence, and marijuana — or any drug use — during this period essentially trains the reward system to embrace a mind-altering chemical.

"We know that 90 percent of adults who are addicted began use in teenage years," Mr. Pasierb said. "They programmed the reward and drive center of their teenage brain that this is one of those things that rewards and drives me like food does, like sex does."

Studies in New Zealand and Canada have found that marijuana use in the teenage years can result in lost I.Q. points. Mr. Pasierb says the current generation of young people is made up of high achievers who are interested in the scientific evidence about how substance use can affect intelligence.

"You have to focus on brain maturation," he said. "This generation of kids wants good brains; they want to get into better schools. Talk to a junior or senior about whether marijuana use shaves a couple points off their SATs, and they will listen to you."

Because early exposure to marijuana can change the trajectory of brain development, even a few years of delaying use in the teen years is better. Research shows that young adults who smoked pot regularly before the age of 16 performed significantly worse on cognitive function tests than those who started smoking in their later teenage years.

Drug educators say that one benefit of the legalization talk is that it may lead to more research on the health effects of marijuana on young people and more funding for antidrug campaigns.

The Partnership for Drug-Free Kids plans to continue its "Above the Influence" marketing campaign, which studies show has been an effective way of reaching teenagers about the risks of drug use. The campaign does not target a specific drug, but it teaches parents and teens about the health effects of early drug use and tries to empower teens to make good choices.

"Legalization is going to make the work we do even more relevant," Mr. Pasierb said. "It's part of the changing drug landscape."

A version of this article appears in print on 08/19/2014, on page D1 of the New York edition with the headline: In Drug Fight, Erratic Cues for Teenagers.


Dot Earth Blog: Heading Down East for a Spell

Written By Unknown on Saturday, 02 August 2014 | 15.49

I'm going to be in slow blogging mode through the coming week, on a long-overdue visit to Winter Harbor, Me., where my mother-in-law was born, resides and paints. It's mostly a sleepy backwater, but once a year things get very loud and fast.

To get a feel for the place, read a couple of my past reports for The Times from the region — on the annual lobster boat races and a boom in the 1990s in sea urchin harvests bound for Japan.

This video shows you just a touch of the quiet corners out along Schoodic Point, with the soundtrack an instrumental I wrote for my wife long ago. I hope you get a break from your hubbub this summer at some point, as well. Use the comment thread below for civil discussion of the merits of quiet times, taking a breath, just being.

Here's a bit more video in which the only music is the soft call of a cedar waxwing:
