

Dot Earth Blog: A Deeper Look at a Study Finding High Leak Rates From Gas Drilling

Written By Unknown on Thursday, 24 April 2014 | 15.49

Updated below with Skype chat with the study lead author, 10:07 p.m. |
Most efforts to slow the natural gas drilling boom in the United States have focused on questions about the environmental impacts of the process called hydraulic fracturing, or fracking, which occurs deep underground after a well is drilled.

That's why a great deal of attention was paid last week to the results of a two-day aerial survey over gas fields in southwestern Pennsylvania that calculated emission rates of methane (the main component of natural gas) from two well pads still in the drilling phase. The emission rates were between 100 and 1,000 times higher than what would be consistent with Environmental Protection Agency leakage estimates.

The study, "Toward a better understanding and quantification of methane emissions from shale gas development," was published in the Proceedings of the National Academy of Sciences and undertaken by Dana R. Caulton and Paul B. Shepson of Purdue and a host of co-authors, including Anthony Ingraffea and Robert Howarth, Cornell scientists who are prominent foes of fracking, along with Renee Santoro of Physicians Scientists & Engineers for Healthy Energy, a nonprofit group that has been critical of fracking* (Ingraffea is affiliated with the group, as well).

Much of the news coverage and commentary was greatly oversimplified, implying that airplane measurements taken on two days in 2012 and showing high methane levels over a handful of wells (and nothing unusual over almost all the other wells in the region) pointed to an extraordinary new pollution and climate change risk. A case in point was this Climate Central post: "Huge Methane Leaks Add Doubt on Gas as 'Bridge' Fuel."

In fact, the study is consistent with other recent work covered here that shows there are specific and tractable issues that can be addressed, making gas production far less leaky and thus a legitimate successor to coal mining.

This section from the paper says as much (I added the paper links to the citation numbers): 

[T]hese regional scale findings and a recent national study (23) indicate that overall site leak rates can be higher than current inventory estimates. Additionally, a recent comprehensive study of measured natural gas emission rates versus "official" inventory estimates found that the inventories consistently underestimated measured emissions and hypothesized that one explanation for this discrepancy could be a small number of high-emitting wells or components (33). These high leak rates illustrate the urgent need to identify and mitigate these leaks as shale gas production continues to increase nationally (10).

There is one aspect of the new study that's worth a deeper dive. The authors noted the presence of sources of coalbed methane — a common peril in coal mines throughout the history of coal mining — near the methane hot spots they found (the supplementary information is here).

It took a bit of time for me to seek some knowledgeable input on this. (Increasingly I'm in "slow blogging" mode these days, partly because I'm chronically swamped but mostly because I try to maintain a foothold in reality.)

I sent the paper to Louis Derry, a Cornell University geologist who's been a constructive presence in the Dot Earth comment stream for a long time and, although he worked several decades ago for mining and oil companies, has provided science-based guidance on research related to shale gas. Read on for Derry's analysis, which I'm sharing with the authors [see the update below].

In the end, the best way to resolve such questions is in the peer-reviewed literature, but it's valuable to have some discourse here given the way simplified interpretations of single papers ("single study syndrome") often swamp policy debates even as the process of science grinds forward.

Here's Derry's critique, which includes a very important conclusion I'm sure these authors would agree on — that regulators should require monitoring of local air chemistry before, during and after drilling of gas wells:

 

The new study on methane fluxes by Caulton et al. raises some interesting questions. The authors report very high fluxes associated with a small set of wells in southwest Pennsylvania, while finding "little or no emission" from other wells in a larger area. The local area they identify as anomalous has a number of coal mining operations, also a potentially large source of methane. The reported gas chemistry in the Caulton study has low ratios of propane to methane (known as C3/C1, from the carbon numbers). Such low C3/C1 is characteristic of coal bed gas, but not of Marcellus gas (higher C3/C1). As the authors note, the data suggest pretty strongly that the vented gas is from coal, not from the Marcellus target horizon.

Coal bed methane is produced from many wells specifically drilled for that purpose in the area, with about 12 billion cubic feet produced in 2012 in Greene and Washington counties. An important question is how that coal bed gas is reaching the atmosphere. Is it, as the authors propose, leaking from new shale gas wells that happen to penetrate the coal-bearing horizon on the way down to their deeper intended target (the Marcellus shale)? Or, in this area with a long history of coal mining, are structures associated with past or present mining activity the main pathway? Underground coal mines (active and abandoned) are routinely vented to prevent mine explosions. There are other routes for coal gas to escape, including fractures or undocumented structures from legacy mines and abandoned wells.

Measuring gas fluxes and identifying sources associated with drilling, mining, landfills, or agriculture is not as easy as it may sound. One approach is to go out and measure gas levels "on the ground." A widely reported Environmental Defense Fund study released last year (Allen et al, 2013 PNAS) did that for about 190 gas wells. With site-specific data, they found relatively low overall leak rates but were able to identify gas-operated valves as an important leakage point. As with any "bottom up" study, extrapolating the results to large areas is difficult. Another approach is to take aircraft, tower, and other measurements, and try to infer the strength and identity of sources from anomalies in gas concentrations sampled from a wide area. This requires some important assumptions and computation, including the issue of "back tracking" air masses, both horizontally and vertically. A recent example of a large-scale "top down" study using aircraft and tower data is that of Miller et al (2013, PNAS). They identified anomalous methane fluxes from the south central United States that they tentatively ascribed to fossil fuel production there. But studies like this, by their very design, cannot identify individual sources.

Both kinds have their merits and limitations.  There's no silver bullet.  The Caulton study is, in a sense, in between these two approaches.  The study used aircraft measurements where they circled upwind and downwind of potential sources, but they only sampled during two days. Because the radius of an individual circuit was small (less than 1 kilometer), they can better identify the location of methane sources that contribute to atmospheric anomalies. For example, one well pad appears to be the source of a methane plume, since only background methane was observed upwind but anomalous levels were found just downwind. A nearby circuit showed higher methane upwind of a pad than downwind, indicating a source outside the target area, possibly from nearby mine vents. On another of their flights, a very strong gas anomaly showed up near a well pad that was also near a coal mine. The mine signal is so strong that it's not possible to resolve any contribution from the well pad.  These results give an indication of the complexity of gas sources in the area. The authors state that six other well pads gave results indicating significant leakage but the underlying data are not included in the paper, so it's hard to evaluate.  Given the history of coal mining in the area, each location needs to be treated with care so as not to convolve gas fluxes that come directly from past or present mining activities with those that are following new drill holes as the release pathway.

Unfortunately, we have no equivalent data on gas concentrations in this area (or just about any other place) from prior to the start of drilling with which to compare.  This would have been particularly valuable in an area with so much coal, where we might expect high fluxes prior to any shale gas drilling.  Because of the pace of shale gas development, scientists are basically playing catch up.

It would be very smart if local scale monitoring of air chemistry were an intrinsic part of gas development. The monitoring should include pre-drilling data so that we can usefully compare fluxes before, during and after. Real-time data would be available to monitor gas fluxes and especially to identify anomalies that could then be fixed. Data in real time can support an active QA/QC [quality assurance and quality control] program. If the methane signal jumps, something is wrong, and should be (and in most cases can be) fixed quickly. At about $50,000 apiece, modern portable atmospheric gas analyzers are small change relative to the cost of drilling, and in my opinion should be part of any well field development plan.

Related devices are already employed by some gas developers, and methane monitoring will increasingly be required as federal and state governments roll out new regulations. Methane leakage during drilling and production can be controlled, but without data nobody can know what most needs doing. For example, the E.D.F. study showed that gas-operated pneumatic valves [relevant video] were a big source. That's easy to fix, once you know you have a problem. Before that study, nobody seems to have thought about it much. The E.P.A. recently issued regulations limiting leak rates from such valves.

The hypothesis of the Caulton paper, that the new shale gas wells are enabling the escape of coal bed methane, is certainly plausible but they only clearly document one example. The drilling phase is transient, and even if there are high leak rates as the drill penetrates coal-bearing strata they are likely to persist for a matter of days, or at most until the well is cased, usually a couple of weeks. One way to assess their magnitude is to use the instantaneous flux, which implies a large effect from the newly identified sources. But if these sources are only active for a week or two, their integrated impact drops substantially. The headlines to the effect that "gas leaking at 1,000 times E.P.A. estimates" [example] might be true in a very narrow sense but do not reflect the transient nature of the process. And it may turn out that at least some of this flux is not the result of shale gas drilling at all.
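Derry's distinction between instantaneous flux and integrated impact can be made concrete with rough numbers. The sketch below uses the roughly one million cubic feet a day leak rate Derry cites, assumes a two-week leak window, and pairs it with a purely illustrative lifetime-production figure for a single well; none of these are results from the study.

```python
# Back-of-envelope: a dramatic instantaneous leak rate integrates to a
# modest total if it lasts only through the short drilling phase.
# All figures are illustrative assumptions, not values from the paper.

leak_rate_cf_per_day = 1_000_000   # ~1 million cubic feet/day, the rate Derry cites
leak_duration_days = 14            # assume the leak ends once the well is cased (~2 weeks)

total_leaked_cf = leak_rate_cf_per_day * leak_duration_days  # 14 million cubic feet

# Hypothetical lifetime production of one shale gas well, chosen only to
# set a scale for comparison; actual estimates vary widely.
lifetime_production_cf = 3_000_000_000

share = total_leaked_cf / lifetime_production_cf
print(f"total leaked: {total_leaked_cf:,} cf "
      f"({share:.2%} of assumed lifetime production)")
```

Under these assumptions the two-week leak amounts to well under one percent of what the well might produce over its life, which is why a headline rate "1,000 times E.P.A. estimates" can be narrowly true while the integrated effect stays small.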

The rates proposed by Caulton are about a million cubic feet a day, not something the industry wants to lose, and industry people are quite skeptical that drilling operations leak anything like that amount. It may be that drilling in a coal-rich area will require special precautions to prevent transient leaks. If, as Caulton et al. conclude, a small number of wells contribute heavily to the leakage flux, this actually makes fixing the problem more straightforward.

The same has been observed with automobile emissions. A few clunkers emit more than many properly maintained vehicles, so it makes sense to try and get the clunkers off the road.

Or the real message of this study may be that gas fluxes from coal operations have been underestimated, and that they are mostly responsible for the hotspots. Either way the study helps identify an anomalous local source, but more data is needed to decide which it is, and how important it is. Once identified, gas leaks in production and transportation systems can be reduced, and there is an economic as well as environmental incentive to do so. Further, vented mine gases in the same area are now being captured as an economic resource, and there may be additional opportunities for such conversion of a waste product to a resource. Methane is an attractive target from the standpoint of stemming climate change, as it has the potential for short-term climate impacts and its anthropogenic sources are easier to control than CO2. But in the long run, it is CO2 emissions that will determine the fate of our climate.

Problem number one in greenhouse gas emissions remains coal consumption, not to mention its onerous impact on air quality and public health.

Insert, 10:08 p.m. | Paul Shepson, the study's lead author and an atmospheric chemist at Purdue, said Derry's concern that the team was measuring coalbed methane coming from somewhere other than the gas wells was unfounded.

But in a Skype chat he agreed with Derry's conclusions that real-time monitoring is vital, and that, in the end, carbon dioxide is the greenhouse gas of greatest concern.

Postscript, 8:30 p.m. | * In an e-mail message, Seth Shonkoff, the executive director of Physicians, Scientists & Engineers for Healthy Energy, said the group preferred to be described as critical of fracking rather than "anti-fracking" (see the line marked with an asterisk). In the interest of nuance and engagement, I'm happy to make this change. He gave me permission to post the message in the comment string.



Well: The Limits of ‘No Pain, No Gain’

Written By Unknown on Wednesday, 23 April 2014 | 15.49

Phys Ed

Gretchen Reynolds on the science of fitness.

Exercise makes us tired. A new study helps to elucidate why and also suggests that while it is possible to push through fatigue to reach new levels of physical performance, it is not necessarily wise.

On the surface, exercise-related fatigue seems simple and easy to understand. We exert ourselves and, eventually, grow weary, with leaden, sore muscles, at which point most of us slow or stop exercising. Rarely, if ever, do we push on to the point of total physical collapse.

But scientists have long been puzzled about just how muscles know that they're about to run out of steam and need to convey that message to the brain, which has the job of actually telling the body that now would be a good time to drop off the pace and seek out a bench.

So, a few years ago, scientists at the University of Utah in Salt Lake City began studying nerve cells isolated from mouse muscle tissue. Other research had established that contracting muscles release a number of substances, including lactate, certain acids and adenosine triphosphate, or ATP, a chemical involved in the creation of energy. The levels of each of those substances were shown to rise substantially when muscles were working hard.

To determine whether and how these substances contributed to muscular fatigue, the Utah scientists began adding the substances one at a time to the isolated mouse nerve cells. Deflatingly, nothing happened when the scientists added the substances individually.

But when they exposed the cells to a combination of all three substances, many of the nerve cells responded. In living muscle tissue, these neurons presumably would send messages to the brain alerting it to growing muscular distress. Interestingly, the scientists found that different neurons responded differently, depending on how much of the combined substances the scientists added to the lab plates containing the mouse nerve cells.

Since mice are not people, however, the scientists next decided to repeat and expand the experiment in humans. For a study published in February in Experimental Physiology, they recruited the thumbs of 10 adult men and women. The volunteers showed up at the lab in their entirety, but only their thumbs were needed, since the researchers wanted to study muscles that were accessible and easily held still. Those in the thumb served nicely.

So, asking each volunteer not to move his or her hand, the researchers injected lactate, ATP or the various acids just beneath the tissue covering one of the muscles in the thumb. After the discomfort from the injection had faded, they asked the volunteers if they felt anything. None did.

They then injected volunteers' thumbs with the three substances combined and at a level comparable to the amounts produced naturally during moderate exercise. After a few minutes, the volunteers began to report sensations similar to fatigue, describing their thumbs as feeling heavy, tired, puffy, swollen and, in one case, "effervescent," although the thumbs had not been exercised at all.

In a subsequent injection, the researchers increased the amount of the combined substances until they approximated those produced during strenuous exercise. The volunteers reported intensified sensations of muscular fatigue and also some glimmerings of aching and pain.

Finally, the researchers upped the levels of the substances until they were similar to what is seen during all-out, exhausting muscular contractions. After this injection, the volunteers reported considerable soreness in their thumbs, as if the muscles had been completing a grueling workout.

What the study's findings indicate, said Alan R. Light, a professor at the University of Utah and senior author of the study, is that the feeling of fatigue in our muscles during exercise "probably begins" when these substances start to build up. Small amounts of the combined substances stimulate specific nerve cells in the muscles that, through complicated interactions with the brain, cause the first feelings of tiredness and heaviness in our working muscles.

These feelings bear only a slight relationship to the remaining fuel and energy in our muscles. They don't indicate that the muscle is about to be forced to stop working. But they are an early physiological warning system, a way for the body to recognize that somewhere up ahead lies a limit.

Each subsequent increase in the levels of lactate and other substances amplifies the sense of fatigue, Dr. Light said, until the substances become so concentrated that they apparently activate a different set of neurons, related to feelings of pain. At that point, the exercise starts to hurt and most of us sensibly will quit, staving off the muscle damage that could occur if we continued.

Of course, improvements in physical performance sometimes demand that we continue through fatigue and on to achiness. "There is some truth" to the adage about "no pain, no gain," Dr. Light said. But disregarding all the signals from your muscles can be misguided, he said.

In recent experiments at his lab, cyclists who were given mild opiates that block the flow of nerve messages from the muscles to the brain and vice versa could ride faster than they ever had before, with a sense of unfettered physical ease — until, without warning, their leg muscles buckled and, limp and nearly paralyzed, they had to be helped from their bikes. "Ignoring fatigue and pain is not a good, long-term competitive strategy," Dr. Light said.

Better, he said, to attend to the messages from your muscles and calibrate training accordingly. Should your exercise goal be to become faster or stronger, find a pace or intensity that allows you to work out near and occasionally just beyond the boundary between fatigue and pain, a line that will differ for each of us and vary day to day. If, on the other hand, your goal, like mine, is easier, pleasurable and sustainable exercise, consider an intensity at which your muscles grow only slightly heavy and tired and, if we are fortunate, effervescent.



The Well Column: The Lure of Forbidden Food

Written By Unknown on Tuesday, 22 April 2014 | 15.50

The Well Column

Tara Parker-Pope on living well.

How hard will your child work for food?

In an experiment, researchers at Pennsylvania State University gave preschool children the opportunity to "work" for a food reward. All the child had to do was click a computer mouse four times to earn a cinnamon-flavored graham cracker.

But earning additional treats required progressively more effort. A second treat required eight clicks. Then 16. Then 32.

Some children were satisfied after one cracker, while others kept clicking for a few additional crackers. Most of the preschoolers were done after about 15 minutes, but some children stayed with it, accumulating as many as 2,000 clicks before the researchers ended the task after 30 minutes.
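The escalating schedule described above is a standard progressive-ratio design, and the arithmetic explains the 2,000-click figure. Here is a minimal sketch, assuming the doubling pattern (4, 8, 16, 32, ...) continues past the numbers the article gives:

```python
# Total clicks needed to earn n crackers when the first cracker costs
# 4 clicks and each subsequent one costs double the previous cracker.
def clicks_for_crackers(n):
    return sum(4 * 2**i for i in range(n))

for n in (1, 4, 9):
    print(n, "crackers:", clicks_for_crackers(n), "clicks")
```

Four crackers cost a cumulative 60 clicks, but earning a ninth pushes the total to 2,044, which lines up with the "as many as 2,000 clicks" the most persistent preschoolers accumulated before the researchers stopped the task.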

Children who are highly motivated by food — researchers have called them "reactive eaters" — are of particular interest to childhood health experts. Were they born this way? Or do parents create reactive eaters by imposing too many food rules and imposing restrictive eating practices at home?

The answer is probably a little bit of both. Genetics and biology play a role in the foods we like and the amounts we tend to eat. At the same time, studies show that children who grow up in homes with restrictive food rules, where a parent is constantly dieting or desirable foods are forbidden or placed out of reach, often develop stronger reactions to food and want more of it when the opportunity presents itself.

In the Penn State experiments, the same preschoolers who worked for food were later offered two types of graham crackers (Scooby-Doo or SpongeBob SquarePants) during their snack time. On five occasions, one type of graham cracker treat was freely available, while the other was placed in a glass bowl with a lid and put off limits. The restricted snacks were available for only five minutes of snack time.

Not surprisingly, the graham crackers that were off limits were enticing to all the preschoolers. But the children who had worked hardest in the clicking task — the "reactive" ones — also had the strongest response to the forbidden food.

They showed more interest in the off-limit snacks, and once they were available, took more and ate more than the children who had been less interested in clicking for food during the first experiment.

"The message is that restriction is counterproductive — it just doesn't work very well," said Brandi Rollins, a Penn State postdoctoral researcher and lead author of the study, which was published in February in the journal Appetite. "Restriction just increases a child's focus and intake of the food that the parent is trying to restrict."

Leann Birch, senior author of the Penn State studies and now food and nutrition professor at the University of Georgia, said additional research has shown that parents who impose highly restrictive food rules, such as putting desirable foods out of reach, tend to have children who are the most reactive to food in the laboratory.

"It's hard to talk cause-and-effect," said Dr. Birch. "The parents are responding to kids' reactivity, and the child is reacting to the parenting and to a general genetic predisposition. The only way to break the cycle is to try to get the parents to respond differently."

While restrictive feeding practices can backfire, that doesn't mean children should have unfettered access to all foods. Instead, parents should be aware that tight control over food can set off overeating in some children. The solution is to control the quality of the food in the home.

Don't buy soda, candy and chips and place them off limits on the top shelf of the pantry. Stock the house with healthful foods, and then allow children access and a reasonable amount of control over what they eat. At snack time, for instance, give them a choice among an apple, an orange, or vegetables with different dips.

The primary food rule should be "a high quality diet for all," said Dr. David Ludwig, director of the New Balance Foundation Obesity Prevention Center at Boston Children's Hospital.

Parents should not have different rules for themselves, or allow a thin child to eat junk food freely and restrict a sibling with a weight issue. Parents typically don't have to worry about an overweight child overeating when they are serving high-quality unprocessed foods. For instance, it's almost impossible to binge on apples. But process the apple into applesauce or juice, and it becomes a junk food that is easy to overeat.

Occasional treats outside the home are fine. "Take the kid out for ice cream once or twice a week, but don't keep it in the house," Dr. Birch said. Dr. Ludwig noted that with young children, parents needed to set more limits. But adolescents should be given more freedom to eat.

"I don't like the concept of telling a hungry child you can't eat," said Dr. Ludwig. "Ultimately, we want children to gain better connection to their inner satiety cues. So if their body is telling them they are hungry, don't ignore that — just pay close attention to the quality of the foods that are offered."

A version of this article appears in print on 04/22/2014, on page D6 of the New York edition with the headline: The Lure of Forbidden Food.


Well: An Easier Way to Delay Cutting the Cord

Written By Unknown on Thursday, 17 April 2014 | 15.49

Doctors in the delivery room are increasingly urged to hold off cutting the umbilical cord of a newborn. Delayed clamping, as it's called, allows blood to continue flowing from the placenta, improving iron stores in the baby.

But the practice has been slow to catch on in part because doctors have also been advised that for it to be most effective, they also must hold the wet, screaming infant at the level of the mother's vagina for a crucial minute or longer so that gravity will help blood flow.

Doctors have long considered the maneuver awkward, and now a new study, published on Wednesday in The Lancet, has found that it is probably unnecessary. Babies who were placed on their mothers' stomachs before clamping fared just as well as those who were held lower, the researchers found.

"They found no difference whether the baby was at abdomen level or on the chest, or the baby was held at the vagina," said Dr. Tonse Raju, the chief of the pregnancy and perinatology branch at the National Institute of Child Health and Human Development, who wrote a comment accompanying the study. "It made no difference in terms of extra blood the baby got."

The authors hope their finding will convince doctors reluctant to delay cord clamping to start the practice.

"A mother would prefer to have the baby on top of her," said Dr. Néstor Vain, the lead author and a professor of pediatrics at the University of Buenos Aires in Argentina. "And that doesn't change the amount of placental transfusion, and facilitates the procedure for the obstetrician."

The study assigned 194 healthy full-term babies to be placed on their mother's abdomen or chest for two minutes and 197 babies to be held at the level of the vagina for two minutes. All of the newborns were still attached to umbilical cords, and were weighed before and after the allotted time.

The group placed on their mothers' abdomens gained 53 grams of blood, while the babies held lower gained 56 grams.

Delayed clamping of the cord remains underused despite mounting evidence that it helps reduce iron deficiency in babies and poses no added risk of maternal blood loss. (A recent analysis did find roughly 2 percent more babies whose cord clamping was delayed had to be treated for jaundice.)

One reason the practice hasn't been more widely adopted could be simply that holding a bloody, squirming newborn is cumbersome, said Dr. Raju. A minute or two in this position, he said, can feel like "an eternity" with an exhausted mother looking on.

Obstetricians also increasingly recognize the benefits of early skin-to-skin contact, said Dr. Jeffrey Ecker, the chairman of the committee on obstetric practice of the American College of Obstetricians and Gynecologists. Immediate contact helps the baby stay warm, promotes maternal-infant bonding and may even improve breast-feeding.

The new study suggests no trade-off is necessary.

"You can delay cord clamping and do skin-to-skin contact, and it's not going to affect the volume of blood that is added to a baby's circulation," said Dr. Ecker, who was not involved in the study.

Premature babies and newborns who needed resuscitation or were delivered via cesarean section were excluded from the study. Research still is needed into blood flow in the umbilical cord in these infants.

Diane Farrar, an author of a review of alternative positions before cord clamping, said some cesarean births may be different for two reasons.

"You cut through the uterus, and the uterus doesn't contract as well, so the effect on placental transfusion may be different, may be less," said Dr. Farrar, a senior research fellow at the Bradford Institute for Health Research in England.

Also, after a C-section the surgeon will sometimes hold the baby up. "If the cord is still intact," she said, "that's a long way up for baby to go, and there's a potential for blood to drain from the baby to the placenta if you do that."



Dot Earth Blog: Nations’ Handling of New Climate Report Presages Divisions in Treaty Effort

Written By Unknown on Monday, 14 April 2014 | 15.49

Justin Gillis's news story from Berlin on the latest report from the Intergovernmental Panel on Climate Change — the one on the world's options for limiting global warming — tells you all you need to know about the familiar contents. The chart of trending news in the United States above tells you all you need to know about how much people are tuning in. (Click to learn more about how Newsmap works.)

The core panel conclusion, of course, is that rich and developing nations are way behind on what would need to be done to avoid substantial and largely irreversible (on meaningful time scales) warming of the climate. His story, "U.N. Climate Panel Warns Speedier Action Is Needed to Avert Disaster," is succinct and spot on, so please read it and return.

[Insert, 3:52 p.m. | Eric Holthaus has posted an excellent summary of the economic points made in the report at Slate.]

There's an important back story — on how the final two days of negotiations between the report authors and government officials reflect global divisions that will only intensify as the world's rich and developing countries wrangle over a new climate treaty that is supposed to emerge in late 2015.

Under rules created when the climate panel was established in 1988, governments have to approve the final summary for policy makers word by word and unanimously. The detailed and voluminous underlying reports are not touched. What this means is that the summaries — in what remains and what is lost — indicate what you can foresee in the parallel treaty process.

Gillis's story gets at this here:

[T]he divisions between wealthy countries and poorer countries that are making such a treaty difficult, and have long bedeviled international climate talks, were on display yet again in Berlin.

Some developing countries insisted on stripping charts from the report's executive summary that could be read as requiring greater effort from them, while rich countries — including the United States — struck out language implying that they needed to write big checks to the developing countries.

An illuminating piece from Associated Press reporter Karl Ritter on Saturday dug in helpfully on this process:

In Berlin, the politics showed through in a dispute over how to categorize countries in graphs showing the world's carbon emissions, which are currently growing the fastest in China and other developing countries. Like many scientific studies, the IPCC draft used a breakdown of emissions from low, lower-middle, upper-middle and high income countries.

Some developing countries objected and wanted the graphs to follow the example of U.N. climate talks and use just two categories – developed and developing – according to three participants who spoke to The Associated Press on condition of anonymity because the IPCC session was closed to the public.

In earlier submitted comments obtained by AP, the U.S. suggested footnotes indicating where readers could "view specific countries listed in each category in addition to the income brackets."

That reflects a nagging dispute in the U.N. talks, which are supposed to produce a global climate agreement next year. The U.S. and other industrialized nations want to scrap the binary rich-poor division, saying large emerging economies such as China, Brazil and India must adopt more stringent emissions cuts than poorer countries. The developing countries are worried it's a way for rich countries to shirk their own responsibilities to cut emissions.

The deadlock over the graphs appeared to have ended early Saturday after 20 hours of backroom negotiations led by IPCC vice chairman Jean-Pascal van Ypersele, a Belgian.

"I offered some Belgian Easter chocolate eggs to the participants of the Contact group at midnight: they helped!" van Ypersele wrote on Twitter early Saturday.

Another snag: oil-rich Saudi Arabia objected to text saying emissions need to go down by 40 percent to 70 percent by 2050 for the world to stay below 2 degrees C (3.6 F) of warming, participants told AP. One participant said the Saudis were concerned that putting down such a range was "policy-prescriptive," even though it reflects what the science says.

If you want more, I encourage you to track Ritter's output on Twitter.

When you put this news in the context of climate treaty negotiations, it bodes poorly for a climatically meaningful treaty emerging in Paris late next year. Re-read "Climate Talks Make Way for a Design Show" and the warnings from a top Chinese climate change strategist for more.

To get a sense of underlying carbon dioxide emissions realities, here are some points from the report's summary for policy makers that nicely describe the coal boom through 2010 that is a prime driver:

About half of cumulative anthropogenic CO2 emissions between 1750 and 2010 have occurred in the last 40 years (high confidence)….

Globally, economic and population growth continue to be the most important drivers of increases in CO2 emissions from fossil fuel combustion. The contribution of population growth between 2000 and 2010 remained roughly identical to the previous three decades, while the contribution of economic growth has risen sharply (high confidence). Between 2000 and 2010, both drivers outpaced emission reductions from improvements in energy intensity. Increased use of coal relative to other energy sources has reversed the long‐standing trend of gradual decarbonization of the world's energy supply….

Without additional efforts to reduce [greenhouse gas] emissions beyond those in place today, emissions growth is expected to persist driven by growth in global population and economic activities.
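The driver language in that excerpt maps onto the standard decomposition of emissions as the product of population, GDP per capita, energy intensity of GDP, and carbon intensity of energy. A minimal sketch, with purely illustrative numbers (not figures from the report), of how population and economic growth can outpace efficiency gains:

```python
# Decomposition: CO2 = population * (GDP/pop) * (energy/GDP) * (CO2/energy)
# All numbers below are illustrative, not taken from the IPCC report.

def emissions(pop, gdp_per_capita, energy_intensity, carbon_intensity):
    """CO2 emissions as the product of the four driver factors."""
    return pop * gdp_per_capita * energy_intensity * carbon_intensity

# Hypothetical decade: population +12%, GDP per capita +30%,
# energy intensity -15% (efficiency gains), carbon intensity +5%
# (the coal-driven reversal of decarbonization the report describes).
start = emissions(6.1, 7.0, 9.0, 0.06)
end = emissions(6.1 * 1.12, 7.0 * 1.30, 9.0 * 0.85, 0.06 * 1.05)

growth = end / start - 1
print(f"net emissions change: {growth:+.1%}")  # -> +29.9%
```

The base values cancel out of the ratio, so the net change is just the product of the four growth factors — which is why a modest rise in carbon intensity, stacked on population and income growth, swamps a 15 percent efficiency gain.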


15.49 | 0 comments | Read More

Dot Earth Blog: My TEDx Talk: We Are Perfect, With a Hefty Asterisk

Written By Unknown on Sunday, 13 April 2014 | 15.49


I just gave a talk at TEDx Portland — a daylong event focused on various interpretations of the word "perfect."

I was hardly perfect, but hopefully conveyed my core conclusion: that in our variegation and imperfection, we humans — with motivation and sustained work — are perfectly suited for surviving, and perhaps thriving, in a consequential, complicated century and changing climate.

The talk was shaped around the post I recently built on a string of Twitter haikus in which I listed eight traits that, if nurtured, can help sustain human progress on a finite planet:

"Bend, Stretch, Reach, Teach, Reveal, Reflect, Rejoice, Repeat."

I started by referring to a sobering conversation I had on a grassy slope by a beautiful fountain at the lunch break. I was sitting with Yumei Wang, the director of Oregon's geohazards team (and Edward Wolf, a Portland campaigner for earthquake safety).

As regular readers will have guessed, we were talking about the highly imperfect human reaction to the profound seismic threat the Northwest faces from the Cascadia fault off the coast. Wang described the impact of the next inevitable great quake on a host of schools in the state that were built of unreinforced masonry before the danger was recognized: "They'll snap like candy canes instead of bending like licorice sticks."

Of course I also focused on another threat that is largely unaddressed — the buildup of human-generated greenhouse gases in the atmosphere. But, given the failure of decades of pledges and agreements aimed at curbing emissions, I suggested it was time to move away from a longstanding focus on numerical goals — such as 350 (parts per million of CO2), 80 percent (in emissions cuts) by 2050, a 2-degree limit on warming — and toward the goal of maximizing the suite of traits I described in those eight words.

I hope you'll listen and weigh in.

All of the other presentations — by programmers, graphic designers, chefs, wireless-electricity innovators, the rapper Macklemore and more — are archived here.

Addendum, 10:28 p.m. Pacific time | There's a nice post summarizing some highlights on the blog of KOIN, the CBS affiliate in Portland: "Six things to take from TEDx Portland."



Well: Look for Cancer, and Find It

Written By Unknown on Wednesday, 09 April 2014 | 15.50

Mammography has become a fighting word in recent years, with some researchers questioning its value and others staunchly defending it.

One especially disturbing criticism is that screening mammography may lead to "overtreatment," in which some women go through grueling therapies — surgery, radiation, chemotherapy — that they do not need. Indeed, some studies estimate that 19 percent or more of women whose breast cancers are found by mammography wind up being overtreated.



This problem occurs, researchers say, because mammography can "overdiagnose" breast cancer, meaning that some of the tiny cancers it finds would probably never progress or threaten the patient's life. But they are treated anyway.

So where are these overtreated women? Nobody knows.

They are out there somewhere, studies suggest. But the figures on overtreatment are based on theory and calculations, not on counting the heads of actual patients known to have experienced it. No one can point to a particular woman and say, "Here's a patient who went through the wringer for nothing."

Overdiagnosis is not the same as a false positive result, in which a test like a mammogram initially suggests a problem but is proved wrong. False positives are frightening and expensive, but overtreatment is the potential harm of mammography that worries doctors most, according to an article published last week in The Journal of the American Medical Association.

But the authors also say that estimates of how often overdiagnosis and overtreatment occur are among the least reliable and most controversial of all the data on mammography.

In the past, overdiagnosis was thought to apply mainly to ductal carcinoma in situ, or D.C.I.S., a breast growth that may or may not turn cancerous. Now, researchers think that invasive cancers are also being overdiagnosed and overtreated by mammography.

The concept of overtreatment is based on the belief that not all breast cancers are deadly. Some never progress, researchers suspect, and some progress so slowly that the patient will probably die of something else, particularly if she is older or has other health problems.

But mammography can find all of these tumors, even those too small to feel. And doctors and patients rarely watch and wait — once a tumor is found, it is treated, because nobody knows how to tell the dangerous ones from those that could be safely left alone.

"Everyone has an anecdote of a small spot on mammography year after year that was finally biopsied and turned out to be positive — invasive, low grade," said Dr. Constance Lehman, a radiologist at the Fred Hutchinson Cancer Center and the director of breast imaging at the University of Washington in Seattle.

Where do the numerical estimates of overdiagnosis come from? In several large studies of mammography screening, women judged to have the same risk of breast cancer were picked at random to have the test or to skip it. Early on, more cancers were expected in the mammogram group, because the test can find small tumors.

Over time, the groups should have equalized, because if small tumors in the unscreened group were really life-threatening, they would have grown big enough to be felt or caused other symptoms.

But in several studies, the number of cancers in the unscreened group never caught up with the number in the mammography group. The reason for the difference, researchers assume, is that there must have been women in the unscreened group who had cancers that were never diagnosed and never progressed — and therefore did not need treatment.

The next step is to subtract the number of cancers in the unscreened group from the number in the mammography group. The result is the estimate of how many women in the mammography group were overtreated.

"We don't know which individual women those were," said Dr. Lydia E. Pace, of Brigham and Women's Hospital, an author of the new paper. "All we know is the proportion, and a lot of people would argue that we don't really know the proportion."

This kind of calculation was used in a Canadian study of about 90,000 women, published in February in the journal BMJ. The authors found that after 15 years there was a "residual excess" of 106 invasive cancers in the mammography group. The authors attributed that to overdiagnosis, and said that it amounted to 22 percent of the 484 invasive cancers found by mammography. They concluded that for every 424 women who had mammography in the study, one was overdiagnosed.
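The arithmetic behind those Canadian figures is simple enough to check directly. A minimal sketch using the numbers quoted above — the screened-arm size of roughly 45,000 (half the ~90,000 participants) is my approximation, not a figure from the study:

```python
# Reproducing the overdiagnosis arithmetic from the BMJ figures quoted above.
excess_invasive = 106   # "residual excess" cancers in the mammography arm
found_by_mammo = 484    # invasive cancers detected by mammography
screened = 45000        # approximate size of the mammography arm (assumption)

share = excess_invasive / found_by_mammo
print(f"share of screen-detected cancers overdiagnosed: {share:.0%}")  # -> 22%
print(f"about 1 overdiagnosis per {screened // excess_invasive} women screened")  # -> 424
```

The 22 percent and one-in-424 figures in the study fall straight out of this division; the fragility the critics point to lies not in the arithmetic but in whether the "residual excess" truly reflects overdiagnosis rather than chance or follow-up differences.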

Other studies have estimated overdiagnosis in different ways, with huge variations in the results, reporting that 5 percent to 50 percent of cancers found on mammograms are overdiagnosed. To make it clear that the numbers are uncertain, some offer ranges: For example, one says that if 10,000 50-year-old women have annual mammograms for 10 years, 30 to 137 women will be overdiagnosed.

It is frightening to consider the prospect that mammography could be leading some down a slippery slope to unneeded surgery, chemotherapy and radiation, with all their risks and side effects. But the numbers on overdiagnosis are all over the map, a shaky foundation on which to base important decisions.

The best hope for resolving the confusion may lie in molecular tests that can tell the difference between dangerous tumors and those unlikely to progress — but those tests are in the future.

A version of this article appears in print on 04/08/2014, on page D6 of the New York edition with the headline: Look for Cancer, and Find It.


Well: Inside the Mind of a Child With Autism

Therapists who specialize in autism often use a child's own interests, toys or obsessions as a way to connect, and sometimes to reward effort and progress on social skills. The more eye contact a child makes, for example, the more play time he or she gets with those precious maps or stuffed animals.

But now a group of scientists and the author of a new book are suggesting that those favorite activities could be harnessed in a deeper, more organic way. If a child is fascinated with animated characters like Thomas the Tank Engine, why not use those characters to prompt and reinforce social development?

Millions of parents do this routinely, if not systematically, flopping down on the floor with a socially distant child to playact the characters themselves.

"We individualize therapy to each child already, so if the child has an affinity for certain animated characters, it's absolutely worth studying a therapy that incorporates those characters meaningfully," said Kevin Pelphrey, director of the child neuroscience laboratory at Yale.

He and several other researchers, including John D. E. Gabrieli of M.I.T., Simon Baron-Cohen of the University of Cambridge and Pamela Ventola of Yale, are proposing a study to test the approach.

The idea came from Ron Suskind, a former Wall Street Journal reporter whose new book "Life, Animated" describes his family's experience reaching their autistic son, Owen, through his fascination with Disney movies like "The Little Mermaid" and "Beauty and the Beast." It was Mr. Suskind's story that first referred to "affinity therapy." He approached the researchers to put together a clinical trial based on the idea that some children can develop social and emotional instincts through the characters they love.

Experts familiar with his story say the theory behind the therapy is plausible, given what's known from years of studying the effects of other approaches.

"The hypothesis they have put forward is sound, and absolutely worth studying," said Sally J. Rogers, a professor of psychiatry at the MIND Institute of the University of California, Davis. "If you think about these animated characters, they're strong visual stimuli; the emotions of the characters are exaggerated, those eyebrows and the big eyes, the music accompanying the expressions. Watching those characters is the way many of us learned scripts that are appropriate in social situations."

But Dr. Rogers cautioned that using animated characters is hardly the key to reaching all autistic children. Many are fascinated by objects or topics without inherent social content — maps, for instance. But for those who fixate on movies, television shows or animated characters, affinity therapy makes sense, she said.

The researchers brought together by Mr. Suskind have written a proposal for a study of the approach. It calls for a 16-week trial for 68 children with autism, ages 4 to 6. Half the children would receive affinity therapy, using the shows or movies they love as a framework to enhance social interaction, building crucial abilities like making eye contact and joint play.

The other half, the control group, would engage in the same amount of interaction with a therapist but in free play, led by the child's interest. Therapists have had some success using the latter approach, most notably in a therapy called Floortime, developed by Dr. Stanley Greenspan.

In autism therapy, progress is measured in increments and tends to be slow, especially in severely affected children, experts say. But the disorder — the autism spectrum, as it's known — includes a very diverse group of children whose prospects for improvement are unpredictable and individual. Some children develop social skills relatively quickly, while others are stubbornly unreachable.

Dr. Pelphrey said that the affinity approach would incorporate many elements of pivotal response treatment, a type of therapy being intensely studied. It incorporates a system of rewards into normal interactions between a therapist (or parent) and the child, playing together.

Sarah Calzone of Stratford, Conn., said her son, now 7 years old, became more socially adept in a pivotal response trial at Yale. "The way it works is that, for instance, one time the therapist was playing with my son, blowing bubbles," Ms. Calzone said. "Then the therapist stopped and looked away. Of course my son still wanted to see the bubbles, so he had to stop, too, and look in the same direction, then make eye contact and ask to continue."

Those two responses, making eye contact and so-called perspective taking, recognizing another person's point of view, developed quickly in the therapy. Her son, who has engaged in various therapies nearly every day for most of his life, is now in regular classes at school.

Dr. Pelphrey said that affinity therapy would deploy some of the same techniques, with the therapist playacting a favorite character and inhabiting the scenes with the child.

"Instead of watching Thomas the Tank Engine as a reward, for instance, we would have the child enter the social setting, with Thomas and Percy and the other characters," and learn through them about eye contact, joint play and friendship, he said. The scientists plan to submit their study proposal to the National Institute of Mental Health for funding.

"The whole thing has been exciting, and a little weird," said Mr. Suskind, now a senior fellow at Harvard, "having these leading neuroscientists listen to me and say, 'O.K., what can we do to help?' "

A version of this article appears in print on 04/08/2014, on page D6 of the New York edition with the headline: Door to Autistic Child's World.


Ask Well: Ankle Replacements

Written By Unknown on Saturday, 05 April 2014 | 15.49

Q

I understand that because of the number of bones and complexity of the other interacting parts, ankles aren't easy to repair back to their original condition, but how about just a ball socket replacement?

A

"We can and do" replace ankles, said Dr. Steven Weinfeld, an orthopedic surgeon and chief of the foot and ankle service at the Icahn School of Medicine at Mount Sinai in New York City. While not nearly as common as surgeries to replace a worn-out hip or knee, ankle replacement is on the rise, with as many as 25,000 replacements likely to be performed this year in the United States, according to estimates from the American Academy of Orthopaedic Surgeons. Like hip and knee replacements, the procedure treats debilitating bone-on-bone arthritis.

Until recently, though, the preferred surgical treatment for severely arthritic ankles had been a procedure called ankle fusion, in which rods are inserted into the ankle bones, fusing them and preventing them from grinding together. Ankle fusion generally eliminates arthritis pain, Dr. Weinfeld said, but it also warps how someone moves and can increase stress on knees and other leg joints.

So, for many people, ankle replacement is a better option, he said, although it too affects gait, at least at first. "The way people walk often changes when they have arthritis" in their ankles, he pointed out. They begin to hobble, and after surgery, "have to learn to walk normally again, which can be surprisingly difficult sometimes."

A more lingering concern is that today's ankle replacement devices are projected to last only 20 years or so, he said, meaning that a 40-year-old might require multiple replacements of the device during his or her lifetime. For younger patients, Dr. Weinfeld urges physical therapy, bracing, painkillers or other nonsurgical options first. But if your ankles twinge and creak, he said, consult a sports medicine specialist or orthopedist about what would work best for your situation.


Do you have a health question? Submit your question to Ask Well.



Well: Think Like a Doctor: Running in Circles Solved!

Think Like a Doctor

Solve a medical mystery with Dr. Lisa Sanders.

On Thursday, we challenged Well readers to solve the mystery of a 23-year-old man with episodes of aggressive, manic behavior that couldn't be controlled. Nearly 1,000 readers wrote in with their take on this terrifying case. More than 300 of you got the right class of disease, and 21 of you nailed the precise form of the disorder. Amazing!

The correct diagnosis is …

Variegate porphyria

The first person with the correct answer was Francis Graziano, a 23-year-old recent graduate of the University of Michigan. His major in neuroscience really gave him a leg up on this case, he told me. He recalled a case he read of a young Vietnam veteran with symptoms of porphyria. He's a surgical technician right now, waiting to hear where he'll be going to medical school next year. Strong work, Dr.-to-be Graziano!

The Diagnosis:

The word porphyria comes from the ancient Greek word for purple, "porphyra," because patients with this disease can have purplish-red urine, tears or saliva. The porphyrias are a group of rare genetic diseases that develop in patients born without the machinery to make certain essential body chemicals, including one of the most important parts of blood known as heme. This compound makes up the core of the blood component hemoglobin. (The presence of heme is why blood is red.) Patients who can't make heme correctly end up with too much of its chemical precursors, known as porphyrins. The excess porphyrins injure tissues throughout the body, but especially in the nervous system.

The disorder is characterized by frequent episodes of debilitating back or abdominal pain and is often accompanied by severe psychiatric symptoms. Patients with porphyria do not respond to most psychiatric medications. Indeed, many of these drugs make the symptoms of porphyria worse. Perhaps the most famous person suspected to have porphyria was King George III in the late 18th century — a diagnosis that remains controversial.

In this disease, when the machinery is stimulated to make heme — or any of the products that use this defective biological equipment — the precursor compounds known as porphyrins accumulate. Not only are these precursor chemicals unable to do what the final product is supposed to do, they can injure tissues throughout the body.

There are two main types of porphyrias. One primarily affects the skin, and the other affects the nervous system. A third type, which is what this patient was ultimately found to have, affects both. When the skin is affected, exposure to certain frequencies of ultraviolet light excites the excess porphyrins and causes the skin to blister, itch and swell. The forms that affect the nervous system can cause pain in the chest, abdomen, limbs or back; muscle weakness or cramping; nausea and vomiting; and personality changes or psychiatric disorders.

This patient (and supposedly King George III as well) had all of the above.

Attacks are usually caused by exposure to known triggers, including many medicines, smoking (either tobacco or marijuana), drinking alcohol, infections, stress and sunlight. These painful, often debilitating episodes can develop over hours or days and can last for days or even weeks.

This patient had many triggers for his attacks. He'd been taking antipsychotic medications, which are known to stimulate the production of porphyrins. He'd been in the sun. He'd stopped eating and sleeping – physiologic stresses that can cause attacks. He was smoking tobacco and marijuana.

Attacks of porphyria are usually treated with an artificial heme known as hematin. This drug is expensive and hard to obtain. It was the first orphan drug approved by the Food and Drug Administration and is made in the United States by only one pharmaceutical company. Before hematin was approved, porphyria attacks were treated with high doses of intravenous glucose, which works by temporarily shutting down porphyrin production.

How the Diagnosis Was Made:

Dr. Jory Goodman, the psychiatrist in this case, was intrigued by the story of this young man who had psychotic symptoms but did not respond to antipsychotic medications. He doubted that this was a psychiatric disease at all.

"I strongly suspect," he said, just 15 minutes after meeting the young man, "that your son has some type of porphyria." The patient's father dismissed the diagnosis immediately, saying: "He's already been tested for that. He doesn't have it."

"He hasn't really been tested for it until I test him for it," Dr. Goodman shot back. It's an easy test to do wrong, he told them. That happens all the time.

The patient was admitted to the hospital, and Dr. Goodman started the painstaking process of looking for a medical cause for his psychiatric symptoms. He ordered the tests for porphyria, giving explicit instructions on how the samples had to be handled so the test would be accurate.

Then Dr. Goodman ordered blood and urine tests to look for other possible causes of such symptoms in a young man. Certainly street drugs could cause at least a temporary psychosis. Heavy-metal poisoning, from lead or arsenic or mercury, is rare in the United States but can cause similar symptoms. Some autoimmune diseases can cause personality changes. So can deficiencies of vitamin B12 or folate, or disorders of thyroid hormone.

Initially, only the marijuana screening test was positive. Then one of the tests for porphyria came back abnormal. The rest of the tests had been done improperly and had to be re-sent. It wasn't enough to make a firm diagnosis, but when the patient became violent again, Dr. Goodman decided to start treatment with high-dose glucose.

The effect was immediate. As the sugar flowed into the patient's system, the shouting, the cursing, the struggling stopped. His face relaxed. His mother watched in amazement as the young madman was transformed back into the son she remembered.

After half an hour, he turned to her and said, "I don't remember the last time I felt this good." The pain in his back and his abdomen was completely gone. The snarling anger that had been his daily companion for months, maybe years, had vanished.

A second set of tests, this time done properly, finally provided the diagnosis: variegate porphyria.

A Patient Transformed:

Porphyria cannot be cured. Management is focused on avoiding triggers.

After feeling so much better from that first treatment, the patient was eager to learn how to avoid future attacks. Eating and sleeping regularly helps, he learned. He quit smoking; he quit drinking. He eliminated all the medications he could. Doing all this, he felt, finally, nearly back to normal, nine months after getting this diagnosis.

"This has changed my life more than I thought anything ever would," the patient told me recently. He's planning to go to work — something he had never been able to do — and he's hoping to return to college in the fall.

Dr. Goodman told me that this was the 19th patient in whom he had diagnosed porphyria. "This is what I do all the time," he told me. "When you see symptoms, you can't just think about the treatment. You have to think about the cause of the symptom, too. If you don't think of it, you won't look for it. And if you don't look for it, you won't find it."



Well: Turning Up the Heat on Fruit

Fruit compotes make great compromise desserts; they're sweet, but not as sweet as sorbets, and like sorbets they don't require flour, butter or pastry skills. I didn't develop any kind of knack for pastry until I began collaborating with pastry chefs on their cookbooks, but for years I managed to round out my dinner parties with fruit-based desserts (though the children of my friend Clifford Wright used to roll their eyes when I brought dessert – "She doesn't bring dessert, she brings fruit," they'd say).

I revisited some of those desserts this week, particularly various fruits poached in wine, and I still find them delightful. I find that I'm sometimes negligent about eating fruit in the colder months, but not when I have some wine-poached pears, bananas or prunes in the refrigerator. I am as likely to stir the fruit, with its luscious syrup, into my morning yogurt as to eat it for dessert, and the compotes are good keepers.

Early spring is an in-between time for fruit. Stone fruits aren't ready yet and it's not really apple, pear or citrus season either, though all of those fall-winter fruits are still available. I poached pears in red wine and bananas in white wine, and used dried fruits for two of my compotes, prunes poached in red wine and a dried-fruit compote to which I also added a fresh apple and pear. For the last compote of the week I combined blood oranges and pink grapefruit in a refreshing citrus-caramel syrup, and topped the fruit with pomegranate seeds. Even if my friend's kids wouldn't agree, this was definitely dessert.

Prunes Poached in Red Wine: Reducing the soaking time in this French bistro classic saves flavor.

Bananas Poached in Vanilla-Scented Chardonnay: Don't overcook the bananas in this easy dish, and you'll be rewarded with a fragrant, delicious dessert.

Pears Poached in Red Wine and Cassis: A classic French dessert with liqueur that adds a deep berry essence.

Dried Fruit Compote With Fresh Apple and Pear: An alcohol-free compote with a variety of dried fruit and a bright flavor.

Blood-Orange, Ruby-Red Grapefruit and Pomegranate Compote: A refreshing dessert that keeps well for a few days.

