Annan Climate Sensitivity Blog Post

James Annan’s recent musings on climate sensitivity are getting quite a bit of attention on Twitter. I’m sure many of you will be interested in reading his post.

I’m just going to add some bullet points:

  1. He’s observing that the probability that climate sensitivity lies in what has been the high end of the probability distribution must now be assessed as low.

    As I said to Andy Revkin (and he published on his blog), the additional decade of temperature data from 2000 onwards (even the AR4 estimates typically ignored the post-2000 years) can only work to reduce estimates of sensitivity, and that’s before we even consider the reduction in estimates of negative aerosol forcing, and additional forcing from black carbon (the latter being very new, is not included in any calculations AIUI). It’s increasingly difficult to reconcile a high climate sensitivity (say over 4C) with the observational evidence for the planetary energy balance over the industrial era.

  2. He criticizes some of Nic Lewis’s analytical choices in arriving at a low estimate, but it seems to me he does so rather mildly. Specifically, he appears to think Nic’s choices tend to be biased toward a somewhat low estimate of sensitivity, but he nevertheless seems to agree with Nic’s broader point: “But the point stands, that the IPCC’s sensitivity estimate cannot readily be reconciled with forcing estimates and observational data.”
  3. He comes right out and states one of the major problems with using expert elicitation to form Bayesian priors in this field: Intentional lies.

    The paper I refer to as a “small private opinion poll” is of course the Zickfeld et al PNAS paper. The list of pollees in the Zickfeld paper are largely the self-same people responsible for the largely bogus analyses that I’ve criticised over recent years, and which even if they were valid then, are certainly outdated now. Interestingly, one of them stated quite openly in a meeting I attended a few years ago that he deliberately lied in these sort of elicitation exercises (i.e. exaggerating the probability of high sensitivity) in order to help motivate political action. Of course, there may be others who lie in the other direction, which is why it seems bizarre that the IPCC appeared to rely so heavily on this paper to justify their choice, rather than relying on published quantitative analyses of observational data. Since the IPCC can no longer defend their old analyses in any meaningful manner, it seems they have to resort to an unsupported “this is what we think, because we asked our pals”. It’s essentially the Lindzen strategy in reverse: having firmly wedded themselves to their politically convenient long tail of high values, their response to new evidence is little more than sticking their fingers in their ears and singing “la la la I can’t hear you”.

    (I’d note that from time to time, I have told people who asked why I am a lukewarmer that part of the reason is the observational evidence, and part is my impression of the social dynamics in climate science. James’s revelation of brazen lying illustrates the kind of behavior that has tended to make me think sensitivity is on the lower end. It is my sense that when a scientist will not only lie during an expert elicitation he expects to influence the scientific literature, but will volunteer that information to a colleague, it is because the liar feels confident he will not be ostracized for the behavior. I have been aware of similar behaviors myself, though not this specific one.

    I do not pretend the preceding relies on “the science”.)

  4. It appears James anticipates that, notwithstanding plenty of recent literature showing the long tail on the probability distribution for climate sensitivity should be trimmed, the IPCC authors will continue to retain that long tail.

I encourage people to go read the whole thing. It’s a nice article to discuss at happy hour. Alas, I will be at the Downers Grove Fish Fry tonight. The fish is great, but the people aren’t climate addicts and aren’t likely to want to discuss the climate topic du jour! Luckily, that’s what blogs are for. 🙂

119 thoughts on “Annan Climate Sensitivity Blog Post”

  1. I find this post by Annan very refreshing and it kind of gives me hope for future discussions across the trenches. This is markedly different from his well-known post about sensitivity ‘being’ 3.0, period, full stop. (He wasn’t that dogmatic but that was how it was perceived.)

  2. So what is the Lindzen strategy? Based on his prior history with Lindzen, I’m guessing he means exaggerating the probability of low sensitivity. That’s not really a reverse strategy, but the same strategy to different ends.

  3. Reading that post (http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html), one can get a sense of just how difficult it is to publish a paper indicating very high sensitivity is improbable.

    It ends

    As for the upper limit of 4.5C – as should be clear from the paper, I’m not really happy in assigning as high a value as 5% to the probability of exceeding that value. But there’s a limit to what we could realistically expect to get past the referees, at least without a lengthy battle. It is, as they say, good enough for Government work 🙂

    And that was in 2006. It’s now 2012…. temperatures are still not rising the way projected in 2007. Of course, some say it could be an excursion due to “noise”. Of course: everything can be “noise”. But still. If you are doing probability, that’s still the observation.

  4. MikeN–
    I think that’s what James means: There is the possibility that someone lies in the opposite direction of the colleague who told him he lied on the high side.

    I’m not aware of Lindzen telling anyone he follows a strategy of intentionally misrepresenting his best estimate of sensitivity, much less doing so in an expert elicitation whose intention is to report the data as “science” or to use it as a Bayesian prior.

  5. While I am glad for reduced estimates of sensitivity, in light of the recent nonlinear sensitivity exercises (Paul_K) I don’t see how Annan’s reference to the 2000s data helps to constrain anything except near-term sensitivity. Only the paleo part does that. If I were somebody who was tempted to lie to influence a Bayesian prior based on expert elicitation, I’d be pretty interested in proving that the nonlinearities in the models are real (and in continuing to implicitly conflate transient and equilibrium responses).

  6. So it’s either ECS is high with a relatively low TCR or ECS is low with a relatively high TCR. My guess is that both (and all points in between) are equally capable of replicating observations. I’m sure this debate will continue for years and years to come.

  7. AJ, absolutely; thus the need to conflate!

    Lucia et al – I thought the Lindzen strategy was hypothesizing low sensitivity then ignoring evidence to the contrary. Thus, the reverse…

    On second reading I agree with Mike though I read “reverse” as equivalent to “to different ends” since the ends here are opposite poles of the argument.

  8. Hmm… BillC–
    I guess your interpretation is right. He’s accusing the IPCC of the Lindzen strategy in reverse.

    “reverse” as equivalent to “to different ends” since the ends here are opposite poles of the argument.

    I read reverse that way too.

  9. AJ

    My guess is that both (and all points in between) are equally capable of replicating observations. I’m sure this debate will continue for years and years to come.

    And the probability density function (pdf) for both is what?

    It’s true short term data are more useful for constraining TCF than ECF. But the two are not utterly decoupled. If TCF were very low, the only way for ECF to be high would be for the two to be decoupled.

    If you look at various lines of evidence, you could come up with your estimate of p(ECF|TCF) and then find p(ECF) = integral of p(ECF|TCF) p(TCF) dTCF, integrating over the probability density function for TCF (i.e. p(TCF)). (A numerical sketch of this marginalization follows this comment.)

    It’s fair to say that p(ECF|TCF) must approach zero as ECF becomes very much higher than TCF. That means if we get information tightening (and lowering) the distribution of p(TCF), that will suggest a lower range for p(ECF) also. The constraint isn’t tight, but it’s there.

    Moreover, to the extent that ECF is hugely different from TCF, it would mean that the time scales for change are extremely long. If so, then ECF might be high, but that fact would be practically irrelevant because we’d take forever to get to the ECF. We’d likely run out of coal, natural gas and oil and natural processes would remove CO2 before we reached ECF!!
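
    (A minimal numeric sketch of the marginalization described above, for readers who want to see the mechanics. Every number and distributional shape below is an invented assumption for illustration, not a value from any study; TCF and ECF stand for the transient and equilibrium sensitivities as in the comment.)

    ```python
    import numpy as np
    from scipy import stats

    # Purely illustrative shapes -- none of these numbers come from any study.
    tcf = np.linspace(0.1, 5.0, 400)                  # transient sensitivity grid (C)
    p_tcf = stats.lognorm.pdf(tcf, s=0.3, scale=1.8)  # assumed p(TCF)

    ecf = np.linspace(0.1, 10.0, 800)                 # equilibrium sensitivity grid (C)

    def p_ecf_given_tcf(e, t):
        # Assume the ratio ECF/TCF is lognormal with median 1.5;
        # the 1/t factor is the Jacobian of the change of variables.
        return stats.lognorm.pdf(e / t, s=0.25, scale=1.5) / t

    # p(ECF) = integral of p(ECF|TCF) * p(TCF) dTCF, approximated on the grid
    dtcf = tcf[1] - tcf[0]
    integrand = p_ecf_given_tcf(ecf[:, None], tcf[None, :]) * p_tcf[None, :]
    p_ecf = integrand.sum(axis=1) * dtcf

    decf = ecf[1] - ecf[0]
    p_ecf /= p_ecf.sum() * decf                       # normalize the result
    ```

    Tightening or lowering the assumed p(TCF) and re-running the integral pulls the implied p(ECF) down with it, which is the sense in which transient data constrain the equilibrium value: loosely, but not negligibly.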

  10. And that was in 2006. It’s now 2012…. temperatures are still not rising the way projected in 2007. Of course, some say it could be an excursion due to “noise”. Of course: everything can be “noise”. But still. If you are doing probability, that’s still the observation.

    They just need to find the adjustment algorithm for the raw data that will make it fit their will. They’ve already done it with the dustbowl era. They’ve adjusted the 1930s, the dirty thirties, so it was less severe than today’s climate! As Mark Twain often quoted Benjamin Disraeli, “there are 3 kinds of lies: lies, damn lies, and statistics”. Is there any doubt that they will just adjust the current global temperature pause to fit their beliefs?

  11. Lucia,

    I agree that a PDF is for the reasons you state. My guess, however, is that it’s probably sensitive to relatively small changes in observed forcing and temperature values. So a particular PDF observation today might steer the debate in a particular direction, but it won’t kill it.

  12. AJ– Sure. A decade of data isn’t enough to make a huge change in the pdf for either TCS or ECS. But it’s still something. There are those who want to deny that the positive long term trend suggests anything at all about the GHG effect, and others who want to deny that the recent stall also suggests something about the effect. In reality, both are observations– and should be considered when estimating the probability distribution for either.

  13. James says “I expect them to brazen it out, on the grounds that they are the experts and are quite capable of squaring the circle before breakfast if need be. But in doing so, they risk being seen as not so much summarising scientific progress, but obstructing it.”
    .
    Well, it is nice that James sees this, but he is a bit late to the party. The end of the high sensitivity meme and associated catastrophic scenarios for the future will happen, but it will be slow and difficult, since it is primarily a philosophical/political subject rather than a scientific one, as James’ story of a climate scientist saying he ‘exaggerates the sensitivity’ to force political action on emissions shows.
    .
    The fact that anybody working in climate science would have the cojones to say such a thing to their scientific colleagues at a conference, or say something like the late Stephen Schneider’s ‘truthful versus effective’ comment, or get involved in the odious and unprofessional corruption of peer review revealed in the UEA emails, and not suffer professionally as a result, just confirms, yet again, the political corruption of climate science.

  14. “In reality, both are observations– and should be considered when estimating the probability distribution for either.” Yes. I was fascinated by Nate Silver’s observations about climate sensitivity in his wonderful book on prediction, The Signal and The Noise. He is, of course, a liberal and a believer in AGW. But he points out that the data probably fit a low climate sensitivity better than a high one, and that his favorite Bayesian statistics ought to lead to that conclusion. Sadly, he doesn’t realize that this puts him in the same boat with the most important AGW opposition, the luke-warmists, because liberals tend to think that their opponents don’t believe in basic physics.
    So he was astonished when Michael Mann attacked his book. Silver didn’t realize that he had in effect joined the enemy.

  15. Lucia wrote:
    ” A decade of data isn’t enough to make a huge change in the pdf for either TCS or ECS.”

    *Which* decade makes some difference. According to some folks’ outlook, the most recent decade should have reflected both the relatively high CO2 concentrations and heat “already in the pipeline”.

  16. Brent:
    I’m not sure what you mean by “*Which* decade”. I mean: data we did not yet have and so did not incorporate into our previous estimate.

    Sitting here now, if you were to do a Bayesian calculation and were able to add the upcoming decade’s surface temperatures, one expects an additional decade won’t have a huge effect on the computed estimate of climate sensitivity. The reason is that the spread of expected values is large, probably contains the correct value, and one decade’s worth of data is just not enough to narrow the spread much. This doesn’t mean it can’t change or narrow the distribution at all. If we end up with a 23 year long trend of 0C/dec with no volcanic eruption… well… that’s going to move the high end down quite a bit, because it’s sustained, and so narrow the distribution. Or if the next decade rises at a rate of 0.5C/dec, that’s going to narrow the distribution by reducing the estimated probability of the low end. (A toy numerical illustration follows this comment.)

    The reason it will affect the estimate of the pdf of TCS only a small amount is that a decade isn’t much data and the spread of our current estimate is pretty large. The reason it won’t affect the estimate of the pdf of ECS a large amount is that transient data are rather indirect when used as evidence for ECS. So, they act as evidence for ECS only through the TCS.

    But, even just based on past data and no models, we probably won’t see either of those two outcomes.

    Mind you: Better data on aerosols, ocean heat uptake and other factors could narrow the spread relative to only adding a decade worth of surface temperatures. But still— Bayesian posteriors change with data, but the rate of change per increment of data can be slow relative to what an investigator might wish.
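
    (Here is the toy calculation mentioned above: a conjugate normal-normal update of a prior on an “underlying trend” with one decade of noisy data. All numbers are invented to show the arithmetic, not estimates from any dataset.)

    ```python
    # Toy conjugate normal-normal update: prior on an underlying decadal trend,
    # updated with one decade of noisy observed trend. All numbers are invented.
    prior_mean, prior_sd = 0.20, 0.15   # prior belief about the trend (C/decade)
    obs_trend, obs_sd = 0.05, 0.15      # one decade's observed trend and its noise level

    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / obs_sd**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + obs_trend / obs_sd**2)

    print(f"posterior: mean {post_mean:.3f} C/decade, sd {post_var**0.5:.3f}")
    # With the decade's noise comparable to the prior spread, the mean moves only
    # part of the way toward the observation and the spread shrinks modestly.
    ```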

  17. ” A decade of data isn’t enough to make a huge change in the pdf for either TCS or ECS.”

    Just how much reliable data is available? I would have thought that another decade would amount to a significant increase for many of the elements that come into play.

  18. Complicating the issue is that we have reason to expect multidecadal cycles. Meanwhile, there are ways to make the PDF of TCR/ECS broader; e.g. postulating a transient negative cloud feedback due to latitudinal distribution of warming.

  19. Diogenes–
    With Bayesian analysis, data makes a difference *relative to what you (or someone) believed prior to getting the data*. There are two sorts of “big” classes of priors: informed and uninformed. (Well… there are other divisions too. We can get ‘n’ dimensional.)

    An example of ‘informed’ priors to update based on data from 2013-2022 for the probability distribution for equilibrium sensitivity *might be*

    a) the distribution from some “respected” group of climate models.
    b) a range of answers you got from a group of “experts”.
    c) a range of answers you got from a group of “randomly sampled non-experts”.
    d) a range of answers you got from “flat-earthers”.
    e) the estimate you got based on data from 1890-2012.
    f) an estimate you got based on paleo data.
    g) anything you like and
    h) any combination of above you like.

    You are permitted to use any of these (though the journal where you might publish the results of some of the choices will vary.)

    Meanwhile, uninformed priors try to get the answer without considering any prejudgement about what the sensitivity actually is. I don’t entirely understand the discussions, but I think the idea is that you do prejudge something about the functional form of the final result, but nothing else. (Like you might decree that the functional form of the final result is normally distributed rather than binomial or Poisson or something like that.)

    When I stop being a slug, I think I’m going to write a post on how data from 1980-2013 ought to influence someone’s best estimate of the “underlying mean” if they (a) really believed the runs in the AR4 established ‘the prior’ and (b) if they really believed there was no warming. I think I have it done in a “scratch” calculation somewhere. It’s not checked for errors– but when I did it I was surprised how little things ‘moved’. (Mind you, we have data from before 1980!)
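
    (For the curious, here is a rough sketch of the kind of comparison just described: updating two very different priors on the underlying trend with the same hypothetical observed trend, on a grid. Everything here is invented for illustration; it is not the scratch calculation referred to above.)

    ```python
    import numpy as np
    from scipy import stats

    trend = np.linspace(-0.5, 1.0, 1501)      # grid over the underlying trend (C/decade)
    dt = trend[1] - trend[0]

    # Two contrasting priors (both invented): one loosely mimicking a model-based
    # spread, one centered on "no warming".
    prior_models = stats.norm.pdf(trend, loc=0.2, scale=0.10)
    prior_nowarm = stats.norm.pdf(trend, loc=0.0, scale=0.05)

    # A hypothetical observed trend and its uncertainty define the likelihood.
    obs, obs_sd = 0.15, 0.08
    likelihood = stats.norm.pdf(obs, loc=trend, scale=obs_sd)

    def posterior_mean(prior):
        post = prior * likelihood
        post /= post.sum() * dt               # normalize on the grid
        return (trend * post).sum() * dt

    print("model prior      ->", round(posterior_mean(prior_models), 3), "C/decade")
    print("no-warming prior ->", round(posterior_mean(prior_nowarm), 3), "C/decade")
    # With limited data the two posteriors stay noticeably apart: the prior still matters.
    ```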

  20. Lucia wrote:
    “Brent:
    I’m not sure what you mean by “*Which* decade”. I mean: data we did not yet have and so did not incorporate into our previous estimate.”

    I meant to emphasize that the most recent decade may provide greater constraints than any particular decade longer ago, as relatively high CO2 concentrations and (on some outlooks) “heat in the pipeline” would imply a larger signal-to-noise ratio more recently (CO2 signal relative to natural variability).

    Nothing special, just one reason each unfolding year may be more interesting than the last….

  21. Is there a common definition for what the term “high sensitivity” means relative to how much global warming occurs for a doubling of CO2 concentration?

    If one believes that the adverse impacts of global warming are now in evidence, and that a rising CO2 concentration is the fundamental cause, then what is the lowest temperature increase per doubling of CO2 which should be associated with the term “high sensitivity?”

    For purposes of more precisely defining what the term “high sensitivity” means in practical everyday use, where should the lower boundary of the high sensitivity regime be set?

    Should the boundary value be set at 1.5 degrees C per doubling of CO2? At 2.0 degrees C? 2.5 degrees C? 3.0 degrees C? 4.0 degrees C? At 4.5 degrees C?

  22. I remember that a few years ago, when I made a similar argument at Tamino’s (that lower sensitivity is more likely given the flattening of temperatures), I was told two things by the open minded one: temperatures haven’t flattened, and models show an acceleration of warming, so the high sensitivity models and low sensitivity ones would look about the same in the short term.

  23. As to reversal of strategy, if two teams are trying to win a basketball game with the same plan of victory, we don’t say that they are using opposite strategies. Reversal of strategy is like Isiah Thomas defending the Eddy Curry trade: ‘Everyone was trying to get smaller, we wanted to get bigger.’

  24. Scott,
    I would personally set “high sensitivity” at anything above ~2.5C per doubling, because reasonable best estimates of current GHG forcing, ocean heat uptake, and aerosol ‘offsets’ make any higher value unlikely. BTW, rhetorical questions are frowned on by the owner of this blog; if you ask, you have an obligation to offer an answer.
    .
    MikeN,
    The openminded one is so very open minded that his capacity to rationally evaluate climate related data long ago fell out of his head; IMO, he’s just a political hack.

  25. I wonder whether, if scientists were forced to carry ‘retraction insurance’ to pay for the costs and fines of retracted publications, we would see some change in their behaviour.
    I only ask because I work with brain surgeons, and their changes in behaviour have been driven by the amount of money they can save on their insurance premiums.

  26. Scott–
    Like SteveF, I suspect some of the statements you posted ending with ‘?’ are rhetorical.

    Let me address them:
    1)

    Is there a common definition for what the term “high sensitivity” means relative to how much global warming occurs for a doubling of CO2 concentration?

    Let me first observe that though many of us use the words “hot”, “warm”, “cool” and “cold”, there is no single specific temperature that everyone in the world will agree divides “hot” from “warm”, “warm” from “cool”, and “cool” from “cold” in all contexts. “Cold coffee” is often at a temperature one might consider “warm” for a bath or even for a glass of water.

    Nevertheless people manage to communicate; context helps.

    The IPCC range is sufficiently well known that when I (and some, though not all, other people) say “high sensitivity”, we mean the higher end of the IPCC range. However, the precise value can differ with context. In the AR4, the IPCC wrote

    “This value is estimated, by the IPCC Fourth Assessment Report (AR4) as likely to be in the range 2 to 4.5 °C with a best estimate of about 3 °C, and is very unlikely to be less than 1.5 °C. Values substantially higher than 4.5 °C cannot be excluded, but agreement of models with observations is not as good for those values”

    So, in many contexts, I consider “the high sensitivity range” to begin just above the 3C called out as “the best estimate” above. However, in some contexts, I might use “high sensitivity” to mean “above 4.5C”, another number called out above. In many contexts people actually state numerical values in addition to using adjectives. If you want to know the values James meant, click over there.

    2.

    If one believes that the adverse impacts of global warming are now in evidence, and that a rising CO2 concentration is the fundamental cause, then what is the lowest temperature increase per doubling of CO2 which should be associated with the term “high sensitivity?”

    I don’t understand this question. The structure seems to be “If one believes in the adverse effects of cats and that cats kill crows, then what is the highest percentage of body fat permitted before we say the cat is fat.” If you simply wish to know the temperature increase for doubling CO2 that separates “high sensitivity” from “not high” sensitivity, you already asked that. My answer was in (1).

    3.

    For purposes of more precisely defining what the term “high sensitivity” means in practical everyday use, where should the lower boundary of the high sensitivity regime be set?

    This seems to be the same question as 1. See the answer to 1.

    4.

    Should the boundary value be set at 1.5 degrees C per doubling of CO2? At 2.0 degrees C? 2.5 degrees C? 3.0 degrees C? 4.0 degrees C? At 4.5 degrees C?

    This entire series of questions seems to be the same as (1).

    If you think you have a specific value that everyone in the world should use as the lower boundary for “high” sensitivity, feel free to let us know. I suspect no matter what temperature value you suggest, the world will continue to use language the way it has– that is, adjectives will often have slightly different meanings in context.

  27. DocMartyn–
    No papers are going to be retracted over this. The previous estimates were what people got using the methods they used.

  28. Richard Betts mentioned this one on Twitter:

    http://www.scirp.org/journal/PaperInformation.aspx?paperID=24283

    Michael J. Ring, Daniela Lindner, Emily F. Cross, Michael E. Schlesinger
    Climate Research Group, Department of Atmospheric Sciences, University of Illinois at Urbana-Champaign, Urbana, USA

    Additionally, our estimates of climate sensitivity using our SCM and the four instrumental temperature records range from about 1.5°C to 2.0°C. These are on the low end of the estimates in the IPCC’s Fourth Assessment Report. So, while we find that most of the observed warming is due to human emissions of LLGHGs, future warming based on these estimations will grow more slowly compared to that under the IPCC’s “likely” range of climate sensitivity, from 2.0°C to 4.5°C. This makes it more likely that mitigation of human emissions will be able to hold the global temperature increase since pre-industrial time below 2°C, as agreed by the Conference of the Parties of the United Nations Framework Convention on Climate Change in Cancun [54]. We find with our SCM that our Fair Plan to reduce LLGHG emissions from 2015 to 2065, with more aggressive mitigation at first for industrialized countries than developing countries, will hold the global temperature increase below 2°C [55].

  29. SteveF: I would personally set “high sensitivity” at anything above ~2.5C per doubling, because reasonable best estimates of current GHG forcing, ocean heat uptake, and aerosol ‘offsets’ make any higher value unlikely. BTW, rhetorical questions are frowned on by the owner of this blog; if you ask, you have an obligation to offer an answer.

    lucia: ” ……. If you think you have a specific value that everyone in the world should use as the lower boundary for “high” sensitivity, feel free to let us know. I suspect no matter what temperature value you suggest, the world will continue to use language the way it has– that is, adjectives will often have slightly different meanings in context.”

    Thanks Steve, thanks Lucia.

    Yes, my original question was in part rhetorical, as SteveF pointed out; and James Annan did (of course) set his own definition for “high” sensitivity at 4 C.

    SteveF offers his own reasons for choosing to set the definition for “high” at 2.5 C or above, as opposed to James Annan’s choice of 4 C for “high.” Steve’s reasons make good sense to me, and so that is where I would set it myself.

    So what was the point in asking the question?

    In observing the AGW debates, and the behavior of those who stay largely on one side of the question, or largely on the other, in their public discourse, I think that one’s semantic choice of words in describing the estimated ranges of CO2 sensitivity as either “low”, “medium”, or “high” has impacts which go beyond the mere classification of the estimated sensitivity ranges for purposes of enhancing technical/scientific discussion.

    As has been evident for some time, there is a rough translation in effect between CO2 sensitivity ranges and the range of possible adverse consequences, as follows: ….. “low” sensitivity equates to “undesirable but survivable” consequences; ….. “medium” sensitivity equates to “very dangerous consequences with severe impacts upon humanity”; and …. “high” sensitivity equates to “the end of the earth’s ecosystem as we know it.”

    Taking today’s GMT as the baseline, if a temperature rise of between 1.0 C and 2.0 C above today’s baseline still carries very dangerous consequences, then James Annan can in good conscience be critical of CO2 sensitivity estimates that are far outside the realm of even theoretical possibility, while still retaining his public credentials as a climate scientist committed to carbon emission reductions.

    And he can do so without publicly calling into question the basic theories behind CO2-driven global warming scenarios — which is just what his detractors inside the climate science community might be tempted to accuse him of doing, given that he has put a chink in their dogmatic armor simply by raising an honest question as to how realistic their “high” estimates actually are.

  30. I think “nice” separates “warm” from “cool”. The difference between cooling and warming is simply the slope over time. I prefer, even “like” warm and warming. The alternative would truly be alarming.

  31. Scott (Please excuse my lack of formatting ability)

    In response to your categories: “low” sensitivity equates to “undesirable but survivable” consequences; ….. “medium” sensitivity equates to “very dangerous consequences with severe impacts upon humanity”; and …. “high” sensitivity equates to “the end of the earth’s ecosystem as we know it.”

    I would argue for a different classification of sensitivities.
    “low” sensitivity equates to “noticeable but negligible” consequences; ….. “medium” sensitivity equates to “undesirable but survivable” consequences; and …. “high” sensitivity equates to “very dangerous consequences with severe impacts upon humanity”; and “very high” (say above 4.5) sensitivity equates to “the end of the earth’s ecosystem as we know it”

  32. Scott Brim,

    In observing the AGW debates, and the behavior of those who stay largely on one side of the question, or largely on the other, in their public discourse,

    We’ve had discussions of what “lukewarmer” means. Sometimes people I would describe as “lukecoolers” try to co-opt it. These people predict cooling as the most likely future, but think “lukewarmer” is somehow “right” because they admit it warmed. I’m not going to describe the entire argument, but my position is that those people are not “lukewarmers”.

    as follows: ….. “low” sensitivity equates to “undesirable but survivable” consequences; ….. “medium” sensitivity equates to “very dangerous consequences with severe impacts upon humanity”; and …. “high” sensitivity equates to “the end of the earth’s ecosystem as we know it.”

    The difficulty with that definition is that then we end up arguing whether a 5C change in 50 years is “survivable” &etc. (Yes. These arguments occur as I suspect you are aware.) Also, arguably the rate of warming affects whether a certain change is “survivable”. So: 5C in 5 centuries might be not so difficult…. 5C in 10 years? Not so easy.

    he can do so without publicly calling into question the basic theories behind CO2-driven global warming scenarios

    He hasn’t publicly called into question the basic theories.

  33. @TF

    I find this post by Annan very refreshing and it kind of gives me hope for future discussions across the trenches. This is markedly different from his well-known post about sensitivity ‘being’ 3.0, period, full stop. (He wasn’t that dogmatic but that was how it was perceived.)

    JA has been saying pretty much the same thing for years, which is why I keep reading him. He says 3C, which is still defined as being dangerous (>2C) and destabilising for the world since it will mean significant change to the environment we experience.

    As someone pointed out on his blog, 3C is actually the mean of what the IPCC has been predicting. He says there is still a 5% chance that it will be >4.5C, which is not really much comfort either. A 1 in 20 chance of a major disaster is something that would normally catch most people’s attention.

  34. Scott Brim,
    I rather think the Earth and its ecosystems are quite robust to modest temperature changes…. Especially since the greatest warming is likely to be at high latitude in winter, when it is pretty damned cold anyway. I understand that there are inevitable consequences of adding a lot of GHG’s to the atmosphere, but the issue of climate sensitivity is key to rationally evaluating the magnitude of that risk and the need/urgency of public action. When climate scientists willfully inflate climate sensitivity, they do terrible damage to both the credibility of climate science and the quality of public discourse.

  35. bugs

    He says 3C, which is still defined as being dangerous (>2C) and destabilising for the world since it will mean significant change to the environment we experience.

    I think you are (a) confusing sensitivity with the realized temperature change and (b) misusing the word “defined”. 3C is not “defined” based on its effect on the earth. The magnitude is defined based on a temperature standard.

    As for the 2C– that is an arbitrary limit, principally selected for a sort of “advertising”/“communicating” campaign. Nothing magic happens at 2C.

  36. bugs,
    Here is what James actually wrote in comments at his blog, with a time stamp of 2/2/13 12:36 pm (England. No time travel involved.)

    Yeah, I should probably have had a tl;dr version, which is that sensitivity is still about 3C.

    The discerning reader will already have noted that my previous posts on the matter actually point to a value more likely on the low side of this rather than higher, and were I pressed for a more precise value, 2.5 might have been a better choice even then. But I’d rather be a little conservative than risk being too Pollyanna-ish about it.

    So evidently he now thinks that even back in 2006, if pressed, he would have said 2.5C might be a better value than 3.0. If lukewarmer is defined based on climate sensitivity, that would put him in the lukewarmer camp now, and would have put him in it then.

    We can quibble whether this level is “safe” or “not safe”. Whether it is or whether it is not depends on additional factors. Among those are ghg concentrations.

  37. Lucia, now that Annan may have inadvertently endorsed your position, can we refer to you as “Lukey Skywalker”?

  38. Toto

    a sensitivity of 2.5 deg / doubling

    Belief that sensitivity is probably 2.5 deg has always been consistent with the definition of “lukewarmer” irrespective of the basis of the belief. We’ve had plenty of arguments about that word here– since way back in.. oh… 2008.

    The only real arguments are with the people who are really coolers, believe sensitivity is very near 0, and are convinced that cooling is more likely than warming over the next 30 years, and yet want to call themselves “lukewarmer”. I’ve consistently told them that I don’t consider them lukewarmers, because a lukewarmer has to be a sort of warmer. Those who predict cooling as more probable than warming during the next 30 years are coolers.

    I admit I don’t “own” language. But if coolers call themselves warmers, then lukewarmers would need another word. And if the coolers who call themselves lukewarmers all decide they should move on to using the new word lukewarmers call themselves, lukewarmers will need to find yet another word.

    But when coined: lukewarmers are warmers. They are not coolers. I will continue to use it this way.

  39. Toto (Comment #109422)
    February 2nd, 2013 at 12:09 am
    Wait, a sensitivity of 2.5 deg / doubling after Bayesian adjustment is “lukewarmer” now?
    .
    2.5C is the median of some “adjusted” results, but there are other studies which put that median closer to 2C, with a most probable value of ~1.6C. That range may better represent the Lukewarmer POV.

  40. Lukewarming is sufficiently ill defined that to some extent one is forced to resort to the circular definition: lukewarming is what lukewarmers believe in. But it is possible to set certain limits, beyond which the term becomes essentially meaningless.

    At the lower end a lukewarmer has to believe that the climate sensitivity is strictly positive, that is CO2 does have a warming effect. One can of course believe in negative feedbacks, which reduce the sensitivity below the no-feedback value, but not in a negative or zero sensitivity.

    At the upper end a lukewarmer has to generally believe that the climate sensitivity is lower than the consensus view expressed in (say) AR4: otherwise why bother?

    So a lukewarmer believes that the best estimate of the climate sensitivity is above 0 and below 3 K/doubling. In practice the range around 2K/doubling seems popular.

    Beyond this most lukewarmers seem to be sceptical of the long tail at high sensitivity in the consensus pdf: I think they would mostly be very surprised if the climate sensitivity turned out to be above 4.5, and astonished if it turned out to be above 6.

    So is Annan a lukewarmer? Almost, but not quite: his best guess of the climate sensitivity is just too high to count. But it wouldn’t take much to make him one.

  41. Re: Mark W (Feb 2 09:09), the Climate Audit issue is being discussed at BH. The main view seems to be that Steve forgot to renew his domain name and lost the site for a few hours; supposedly he has now fixed this but it will take a few hours for the DNS update to propagate.

  42. Toto,
    Out of curiosity, what is the ‘adjustment’ in the “Bayesian adjustment”? The way the quote from James reads is that if pressed for greater precision, he would have said 2.5C even back when he wrote the 3C post.

    were I pressed for a more precise value, 2.5 might have been a better choice even then

    At least that’s how I read “even then”– as referring back to the time when he wrote the post he was talking about.

  43. Adjustment is application of the non-uniform prior. IIUC he basically downweights the highest results. I may have got this completely wrong though.

  44. Toto,
    If you call that the “adjustment”, then it seems to me you are using “adjustment” to mean “not using the result computed with a methodology widely viewed as totally incorrect by statisticians, and instead using the result from a method that is considered acceptable”? I would think nearly anyone should prefer a numerical value computed using a method that is considered acceptable rather than a value computed using a method that is widely considered to be deficient or, in a word, wrong. Because the reason Annan is saying you shouldn’t use the uniform prior is that its use is pretty much “wrong”. That’s oversimplifying a bit– but not much.

  45. Lucia: I admit I don’t “own” language. But if coolers call themselves warmers, then lukewarmers would need another word. And if the coolers who call themselves lukewarmers all decide they should move on to using the new word lukewarmers call themselves, lukewarmers will need to find yet another word. …… But when coined: Luke warmers are warmers. They are not coolers. I will continue to use it this way.

    .
    A compromise solution …. call the coolers “fluke warmers.”

  46. This comment by Steve Jewson on “adjusting” is pretty interesting too:

    Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that.

    (Read on from this point for other interesting chatter.)

    According to Jewson (and Annan), uniform priors are simply wrong. Possibly a better choice of words instead of “adjusting” would be “fixing a wrong result.”
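
    (A small sketch of the effect Jewson describes: the same toy likelihood combined with a prior flat in S versus a prior flat in the feedback 1/S. All numbers are invented for illustration.)

    ```python
    import numpy as np
    from scipy import stats

    S = np.linspace(0.1, 10.0, 2000)          # sensitivity grid (C per doubling)
    dS = S[1] - S[0]

    # Toy likelihood: suppose the observations constrain the feedback lambda = 1/S
    # roughly normally (invented numbers); that is what produces a fat tail in S.
    lam_hat, lam_sd = 1.0 / 2.5, 0.15
    likelihood = stats.norm.pdf(1.0 / S, loc=lam_hat, scale=lam_sd)

    prior_flat_S = np.ones_like(S)            # uniform in S on the grid
    prior_flat_lam = 1.0 / S**2               # uniform in 1/S, expressed in S (Jacobian)

    def post_median(prior):
        post = prior * likelihood
        post /= post.sum() * dS
        cdf = np.cumsum(post) * dS
        return S[np.searchsorted(cdf, 0.5)]

    print("posterior median, prior flat in S:  ", round(post_median(prior_flat_S), 2))
    print("posterior median, prior flat in 1/S:", round(post_median(prior_flat_lam), 2))
    # The flat-in-S prior yields the larger median and a heavier upper tail,
    # which is the upward shift Jewson warns about.
    ```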

  47. Carrick,
    Do not fear, AR5 will, um, err, continue to accept papers that assume a uniform prior, and recalculate the results of any paper that uses a non-uniform prior…. using a uniform prior. Yes, maintaining a long tail and higher mean and mode is THAT important, no matter how technically wrong the process may be.

  48. Carrick–
    Someone wrote and asked me to write some simple examples about uniform priors. I looked at it a bit and realized that to do so, I would first have to explain *subjective priors*, how those relate to data and so forth. After that, I would have to discuss the uniform prior– and discussing how to fix the uniform prior issue in an accessible way for people who don’t already know what it means requires some thought.

    But for many, it’s possible that just discussing *subjective* priors would be a useful start. I need to think of the right “toy” problem though.

  49. Scott Brim

    A compromise solution …. call the coolers “fluke warmers.”

    Unfortunately, there is a problem of self labeling. I tend to think that coolers should call themselves coolers. Or “no warmers” should call themselves “no warmers”. Each should find a descriptive term and embrace their position instead of trying to adopt one already in use and which doesn’t make any sense.

    I think “fluke warmer” could apply to the “no warmer” camp. Those who think it is equally likely to cool or warm over the next 30 years especially if they think 20th century warming was due to natural causes. That is: They think it’s a fluke. I think any adjective with “warmer” would be a poor fit for the people who actually claim that owing to ‘whatever’ the earth is at risk of entering a cooling phase, with the possibility of a relatively prompt return of the Wisconsin ice sheets. But of course, I don’t own English.

  50. That’s ok –we don’t own the ice sheets, either! 😉 However, I’ll let you know when they are coming. I will say “HEY look out for the ice sheets!”

    You’re welcome!

    Tim W. (in Wisconsin)

  51. Tim W.
    Thanks. Being in Illinois, I live in fear of the return of the Wisconsin ice sheets which could lumber on down and smush us at any time.

    My mom and sister live in Lake County and only 20 miles / 25 miles from the Illinois/Wisconsin border. I’ll need word so I can evacuate both households to warmer, milder DuPage County! That said, I would prefer if you doughty Wisconsinites steeled yourself with a meal of cheese, sausage and beer, rallied and fought them back for us.

  52. SteveF:

    Do not fear, AR5 will, um, err, continue to accept papers that assume a uniform prior, and recalculate

    I do expect them to do this, and I expect it to backfire on them in the process. This makes them look really bad… as it should.

    Expect outrage from Victor and for him to start calling them “statisticians” (with the scare quotes) at any minute now for, in his words—and this appears to be his criterion for the scare quotes—”people that judge an argument by the results….” >.<

    Lucia, I'm not an expert on Bayesian statistics by any means, but a choice of prior that isn't scale invariant and leads to large positive biases should be particularly troubling, especially when statistical experts object to its choice.

    Really I think they are mistaking uniform with uninformative.

    Here’s a link to a fairly nice commentary on uninformative versus uniform, including a quote supposedly from Fisher (that I couldn’t verify): “Not knowing the chance of mutually exclusive events and knowing the chance to be equal are two quite different states of knowledge.”

    It seems to me [whoever this quote belongs to] is right here… you are asserting that all outcomes are equally probable. That’s a statement of knowledge, not a statement of absence of it.
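
    (A tiny Monte Carlo illustration of the uniform-versus-uninformative point: a prior that is “indifferent” in one parameterization carries strong information in another. The ranges are arbitrary.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Sample sensitivity S uniformly over an arbitrary range, then look at the
    # implied distribution of the feedback parameter lambda = 1/S.
    S = rng.uniform(0.5, 10.0, size=100_000)
    lam = 1.0 / S

    hist, edges = np.histogram(lam, bins=10, range=(0.1, 2.0), density=True)
    for lo, hi, h in zip(edges[:-1], edges[1:], hist):
        print(f"lambda in [{lo:.2f}, {hi:.2f}): density {h:.2f}")
    # The histogram is far from flat: "uniform in S" piles probability onto small
    # lambda (i.e. high sensitivity). It is an informative statement once you
    # change variables, not an absence of knowledge.
    ```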

  53. Jonathan Jones (Comment #109430) 
February 2nd, 2013 at 9:29 am

    Jonathan,

    That was a good post. However…

    [“So a lukewarmer believes that the best estimate of the climate sensitivity is above 0 and below 3 K/doubling. In practice the range around 2K/doubling seems popular.”]

    I would not necessarily agree that a lukewarmer could be defined as a believer in a CS of, say, 0.1C. That figure (and maybe up to 0.9) would also fit into the sceptic viewpoint, depending on what you consider a sceptic view to be. That is, whether one is sceptical about the ‘catastrophic’ label or whether one is sceptical about the radiative GHG theory.

    The problem is that the idea of a ‘climate sensitivity’ is a theoretical construct. Until somebody provides real evidence that ANY of the 0.8C warming since 1850 (‘IPCC accurate data’ period) can be attributed to the radiative properties of CO2, the entire debate about the quantitative figure of CS is moot. In the interim, it is just argument by assertion from anyone with a ‘warmer’ label of whatever tepidity.

  54. Annan and I got into it a bit when McKitrick and McIntyre’s panel regression of models came out. Their results showed that, in general, models were statistically separable from observations. Annan seemed to believe that I didn’t understand the result, and kept making the moot point that models were sometimes inside the CI.

    It is good to see that those who want to be considered scientists are finally recognizing the obvious separation between models and measurements. Eventually, the consensus will have to change as well.

    If the science wasn’t so polluted with leftist dogma, the change would have happened already.

  55. Jeff Condon:

    If the science wasn’t so polluted with leftist dogma, the change would have happened already.

    Just say politics in general. It isn’t just the left wing agenda that distorts science & engineering choices. Plenty of blame to go around on that one.

  56. The working definition of lukewarmer is this.

    1. Accepts radiative physics. GHGs warm the planet they do not cool it.

    2. If offered a bet on sensitivity at 3C we take the under bet.
    3. single digit probability that it is less than 1.

    To be a skeptic in my book you have to be convinced that CS is below 1; that is, you think above 1 is improbable.

  57. Arfur Bryant, as you say one can get to a climate sensitivity of 0.1K/doubling from a variety of perspectives.

    If you accept standard radiative physics but believe in really strong negative feedbacks then in my language you are still a lukewarmer; I suppose that Lindzen would come into that category. If you don’t believe in standard physics and get to 0.1K/doubling by some other means, then while your sensitivity lies within the lukewarming range you would probably self describe with some other name.

  58. Lucia, we had a brief discussion of uniform vs uninformative priors over at BH a week or two back. Very wordy though, none of your elegant maths. The potential for confusing the two very similar names was indeed remarked upon.

    The standard way to start is with Bertrand’s paradox: once you realise that the meaning of “uniform” is much less clear than you first thought you can at least start to see what the problem is. Understanding the solution is trickier though.

  59. Carrick
    I think (though I’m not sure) that the “uniform” is a particular type of uninformed prior because it meets the ‘principle of indifference’. (I may need to reread.)

    But it turns out that finding something that satisfies ‘the principle of indifference’ is not sufficient to define a good prior– there are many possible candidates. So people use other methods to pick from among the candidates. For example: you’d like to get the same answer if you change units (Fahrenheit to Celsius to K) or make small algebraic changes (i.e. do your analysis using log(T) or T) and so forth. The problem with using uniform as the choice of uninformed prior among all uninformed priors is that you get different answers when you do tweaks like this. So, you want a different uninformed prior. But there are a potentially infinite number that meet ‘the principle of indifference’, so the question is then: how do you pick?

    After that.. math ensues…

  60. Jonathan–
    Isn’t one communication problem that the climate papers claiming to use uniform actually use “uniform between 0 and Tmax”? A truly uninformative uniform for sensitivity would be “uniform between [-infinity, infinity]”. That choice really does prefer nothing.

    And it could be used, I think, because it would still give a proper posterior. But I don’t think anyone uses it… right?

  61. Lucia, you can use a genuinely uniform prior between plus and minus infinity. This is, as you say, an improper prior, because it cannot be normalised, but you can get away with that as the posterior will be normalisable and so can be normalised.

    If you think about using an infinite uniform prior you realise that doing so is actually equivalent to just taking the likelihood and directly normalising that. Alternatively you can just truncate your uniform prior at sufficiently large values, essentially well beyond the point where the likelihood goes to zero. The advantage of this is that you can then normalise your truncated uniform prior, and use it in exactly the same way as any other proper prior, while you have to handle the infinite uniform prior as a special case. The disadvantage is that it is a fudge, and you may truncate too far in by mistake.

    But the real problem here is the assumption that “uniform” has any meaning when applied to a continuous variable. For discrete variables the principle of indifference can be applied fairly simply, but for continuous variables you have to choose a parameterisation with respect to which the distribution is uniform. Bertrand’s paradox considers a situation where three different parameterisations, leading to three different uniform distributions, are all apparently equally reasonable.
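
    (For anyone who wants to see Bertrand’s paradox rather than read about it, here is a quick Monte Carlo of the three standard “uniform random chord” constructions: the probability that a chord of a unit circle is longer than the side of the inscribed equilateral triangle, sqrt(3).)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    side = np.sqrt(3.0)   # side of the equilateral triangle inscribed in the unit circle

    # 1. Random endpoints: two uniform angles on the circle (classical answer 1/3)
    a, b = rng.uniform(0.0, 2.0 * np.pi, size=(2, n))
    len1 = 2.0 * np.abs(np.sin((a - b) / 2.0))

    # 2. Random radius: midpoint at a uniform distance along a radius (classical answer 1/2)
    d = rng.uniform(0.0, 1.0, size=n)
    len2 = 2.0 * np.sqrt(1.0 - d**2)

    # 3. Random midpoint: chord midpoint uniform over the disc (classical answer 1/4)
    r = np.sqrt(rng.uniform(0.0, 1.0, size=n))   # radius distribution uniform over area
    len3 = 2.0 * np.sqrt(1.0 - r**2)

    for name, lengths in [("endpoints", len1), ("radius", len2), ("midpoint", len3)]:
        print(name, np.mean(lengths > side))
    # Three perfectly reasonable meanings of "uniform" give three different answers
    # (about 0.33, 0.50 and 0.25): the point about priors on continuous variables.
    ```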

  62. Lucia,

    How was the fish fry? I’m curious, what kind of fish are they serving at a February fish Fry? In my neck of the woods it hasn’t been cold enough for the traditional fresh water fish caught through the ice that are so delicious during the winter.

  63. I noticed that Annan and Hargreaves have a discussion paper up on LGM temps and forcings. In it they give an estimate of climate sensitivity of 1.7 with a relatively small 95% confidence range of 1.2-2.4. Then they say it’s not very robust because of nonlinear effects. Still, it sounds to me as if Annan is shading his “best” estimate high so he won’t be tarred and feathered.

    http://www.clim-past-discuss.net/8/5029/2012/cpd-8-5029-2012.pdf

  64. So I’m not sure why Annan is disagreeing with Nic Lewis at all. Sounds like the old “multiple lines of evidence” doctrine mainstream climate scientists used to place so much confidence in.

  65. Lucia, I guess this is semantical at some point, so I’ll leave aside the Bayesian definitions till I understand them better.

    But suppose our prior knowledge is that any outcome (over a finite range) is equally likely, and we build that into our analysis.

    How is that treated, mathematically, differently from “we don’t know what the outcome is, so we’ll assume an equal outcome”?

    Mathematically the two have to be distinguishable or one assumption is really the other regardless of descriptive language used, if you see what I mean.

  66. SteveF

    How was the fish fry? I’m curious, what kind of fish are they serving at a February fish Fry?

    The fried fish is dramatically improved over previous years, to the extent that I could recommend that anyone within driving distance of Downers Grove consider coming to eat the fish. In previous years, volunteers cooked the fish. Now an actual trained cook cooks the fish, and it is *yummy*. And only $8 to $10 for a full fish dinner depending on what you order. (The cook’s name is Kim– and she is great! Volunteers take orders and so on.)

    I don’t think the fish are fresh caught. They are purchased by “The Moose”. But they are good.

  67. Steven Mosher (Comment #109469)
    February 2nd, 2013 at 4:23 pm

    and

    Jonathan Jones (Comment #109471)
    February 2nd, 2013 at 4:29 pm

    These two comments appear, on the face of it, contradictory. On a simple numerical chart, I might agree with Steven as to where to separate the labels. However, I also agree with Jonathan that the ‘reason’ why someone arrives at their CS figure makes the debate slightly subjective.

  68. re:David Young (Comment #109487)
    February 2nd, 2013 at 6:40 pm
    Hi David,

    Still, it sounds to me as if Annan is shading his “best” estimate high so he won’t be tarred and feathered.

    He acknowledges this himself in his comments.

    James Annan:-
    “The discerning reader will already have noted that my previous posts on the matter actually point to a value more likely on the low side of this rather than higher, and were I pressed for a more precise value, 2.5 might have been a better choice even then. But I’d rather be a little conservative than risk being too Pollyanna-ish about it.”

    I’m not sure that I’m ready to damn him for that just yet. I think he is genuinely trying to get to an honest answer.

  69. BillC, AJ, Lucia,

    You have all raised issues related to the curvilinear response in the GCMs of aggregate outgoing flux to average temperature increase. I agree with you all that short-term observational-based estimates of climate sensitivity can only place a lower bound on ECS.

    I may be wrong, but I suspect that Annan’s subjective assessment of the pdf of ECS takes into account two papers that he has recently published with Hargreaves. Both of these look at temperature change over the LGM where the observational timeframe should be long enough to exclude this particular problem, although modeling the LGM introduces a number of other questions concerning data quality.

  70. @Lucia

    As for the 2C–that is an arbitrary limit principally selected for a sort of “advertising” “communicating” campaign. Nothing magic happens at 2C.

    It’s not a ‘magic’ number, but ‘magical’ events are happening already, with only half that temperature rise. Makes me think that 2C is a valid estimate.

    Annan could be right, but those who seem to think that implies a ‘no problem’ attitude are on completely the wrong track. Even so, there is still a significant risk of higher than 3C (or 2.5C), based on his work. If I understand his estimate correctly, it is based on what we know, but the feedbacks that are yet to come won’t be present in what we know already.

  71. I understand that:

    Estimates of negative forcings have decreased
    Estimates of positive forcings have increased
    Post 2000, the temperature gradient decreased

    These suggest a decrease in our estimates for the sensitivity.

    Somethings are not clear to me:

    In terms of before and after ratios: have the new estimates of the uncertainty in the forcings scaled with the new estimates of the forcings, tightened relative to that scale, or loosened relative to that scale?

    I.e., has the scale shifted uniformly to lower estimates, or has the mode moved to lower values with tightening (could the lower tail have moved to higher values), or has the mode moved to lower values with loosening (could the upper tail have failed to move to lower values)?

    Do the changed estimates for the forcings impact our estimates for natural variability? E.g., is the post-WWII plateau still explained? Is the 1975-2000 gradient still explained?

    I believe that a large value for the aerosol effect assists in obtaining a good wiggle match for the latter half of the 20th century and hence a lower estimate for natural variability.

    Any need to increase our estimates for natural temperature variation suggests some increase in our estimates for the uncertainty in the sensitivity, compared to ignoring this factor. Additional uncertainty in the temperature increase may, where applicable, call into question whether the likelihood function for the parameter 1/S (the inverse of the sensitivity) should be considered to be normally distributed, as opposed to some ratio distribution, and hence whether the likelihood function for the parameter S should also be considered to have the form of a ratio distribution. (A small Monte Carlo sketch of this ratio effect appears after this comment.)

    It seems pretty clear that a lowering of the sensitivity estimates is indicated by a net increase in forcings and any lessening of the warming, but I wonder what a more detailed argument will have to say about the effect on the tails. Who has looked at this, and what did they find?

    Alex
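
    (A small Monte Carlo sketch of the ratio-distribution point Alex raises: if the data constrain x = 1/S roughly normally, the implied distribution of S is right-skewed with a long upper tail. The numbers are arbitrary.)

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Suppose the data constrain x = 1/S approximately normally (invented numbers).
    x = rng.normal(loc=0.4, scale=0.12, size=200_000)
    x = x[x > 0.05]          # drop unphysical near-zero draws for the illustration
    S = 1.0 / x

    print("median S:         ", round(np.median(S), 2))
    print("mean S:           ", round(np.mean(S), 2))
    print("95th percentile S:", round(np.percentile(S, 95), 2))
    # Even though x = 1/S is symmetric, S is strongly right-skewed: the mean sits
    # above the median and the upper tail stretches out -- one reason sensitivity
    # pdfs derived this way carry long high-end tails.
    ```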

  72. Lucia,
    It was SteveE who asked about the fish fry. I’m not wild about fried fish; I prefer grilled or broiled.

  73. bugs

    but ‘magical’ events are happening already

    I’m unaware of any “magical” events having already happened.

  74. SteveF–
    They also serve broiled fish. But I had the fried cod. I would never fry fish myself, as it would be awful. But Kim does a great job, and I do like the crunchy crust when it’s properly done. Last year (and all the years before) amateurs were cooking the cod. The fried fish was… “ewwwwww!!!” But it’s great now!

  75. Paul_K (Comment #109510)
    February 3rd, 2013 at 6:17 am
    —————-
    “Can the Last Glacial Maximum constrain Climate Sensitivity”.

    http://www.jamstec.go.jp/frsgc/research/d5/jdannan/Hargreaves_Geophys.%20Res.%20Lett_2012.pdf

    How can one possibly constrain CO2 climate sensitivity from the LGM without using a realistic estimate of the Ice-Surface-Cloud Albedo forcing at the LGM?

    These studies are ridiculous. They are all silent about how much impact there is from the global albedo increase, or they have a ridiculously low estimate of just -1.5 W/m2 to -4.0 W/m2.

    You can make up any CO2 climate sensitivity you want in that scenario. And that is what they are doing.

  76. I think a proper discussion of Bayesian priors is in order here, since so much of the analysis we do at these blogs is from a frequentist point of view.

    http://www2.isye.gatech.edu/~brani/isyebayes/bank/interplay.pdf

    “Statisticians should readily use both Bayesian and frequentist ideas. In Section 2 we discuss situations in which simultaneous frequentist/Bayesian thinking is essentially required. For the most part, however, the situations we discuss are situations in which it is simply extremely useful for Bayesians to use frequentist methodology or frequentists to use Bayesian methodology.

    The most common scenarios of useful connections between frequentists and Bayesians are when no external information (other than the data and model itself) is to be introduced into the analysis - on the Bayesian side, when ‘objective prior distributions’ are used. Frequentists are usually not interested in subjective, informative priors, and Bayesians are less likely to be interested in frequentist evaluations when using subjective, highly informative priors.

    We will, for the most part, avoid the question of whether the Bayesian or frequentist approach to statistics is ‘philosophically correct.’ While this is a valid question, and research in this direction can be of fundamental importance, the focus here is simply on methodology. In a related vein, we avoid the question of what is ‘pedagogically correct.’ If pressed, we would probably argue that Bayesian statistics (with emphasis on objective Bayesian methodology) should be the type of statistics that is taught to the masses, with frequentist statistics being taught primarily to advanced statisticians, but that is not an issue for this paper.

    Several caveats are in order. First, we primarily focus on the Bayesian and frequentist approaches here; these are the most generally applicable and accepted statistical philosophies, and both have features that are compelling to most statisticians. Other statistical schools, such as the likelihood school (see, e.g., Reid, 2000), have many attractive features and vocal proponents, but have not been as extensively developed or utilized as the frequentist and Bayesian approaches.

    A second caveat is that the selection of topics here is rather idiosyncratic, being primarily based on situations and examples in which we are currently interested. Other Bayesian/frequentist synthesis works (e.g., Pratt, 1965, Barnett, 1982, Rubin, 1984, and even Berger, 1985a) focus on a quite different set of situations. Furthermore, we almost completely ignore many of the most time-honored Bayesian/frequentist synthesis topics, such as empirical Bayes analysis. Hence, rather than being viewed as a comprehensive review, this paper should be thought of more as a personal view of current interesting issues in the Bayesian/frequentist synthesis.

    Inherently Joint Bayesian/Frequentist Situations

    There are certain statistical scenarios in which a joint frequentist/Bayesian approach is arguably required. As illustrations of this, we first discuss the issue of design - in which the notion should not be controversial - and then discuss the basic meaning of frequentism, which arguably should be (but is not typically perceived as) a joint frequentist/Bayesian endeavor.”
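
    A toy, grid-based sketch of why the choice of an ‘objective-looking’ prior matters so much for the upper tail of sensitivity. The pseudo-observation of 1/S and its uncertainty are invented for illustration; the point is only that the same likelihood combined with two different priors gives very different tails.

    ```python
    import numpy as np

    # Toy Bayesian calculation on a grid, illustration only. The "data" is a
    # hypothetical energy-budget-style estimate of the feedback 1/S.
    y, sigma = 0.40, 0.15                      # invented estimate of 1/S and its std dev
    S = np.linspace(0.1, 20.0, 20_000)         # grid for sensitivity (upper bound is arbitrary)
    like = np.exp(-0.5 * ((y - 1.0 / S) / sigma) ** 2)

    priors = {
        "uniform in S":   np.ones_like(S),
        "uniform in 1/S": 1.0 / S**2,          # Jacobian of the change of variables
    }
    for name, prior in priors.items():
        post = like * prior
        post /= post.sum()                     # normalise on the grid
        s95 = S[np.searchsorted(np.cumsum(post), 0.95)]
        print(f"prior {name:14s}: posterior 95th percentile of S ~ {s95:.1f} K")
    # The prior that is flat in S pushes far more posterior weight into the high
    # tail than the prior that is flat in 1/S, for identical data.
    ```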

  77. “but ‘magical’ events are happening already”

    …What can this mean?…

    Climategate?.. DRAFT SoD leak?… Lower estimates of “climate sensitivity”?….

    …Is there more to come?!?

    :-O

  78. Hi all (and especially Lucia),

    Over at my new blog (http://thelukewarmersway.wordpress.com/, and I think this is the sixth weblog I have run… sigh…) William Connelly is claiming that there is no gap between modeled and observed temperatures. When I linked to three of your posts he offered what seem like gobbledygook excuses for not trusting your numbers. If and when you have time, could you provide some assistance? The post is http://thelukewarmersway.wordpress.com/2013/02/02/matt-ridleys-third-question/#comments.

    Thanks

  79. Paul_K (Comment #109509)
    February 3rd, 2013 at 6:05 am

    I agree with you all that short-term observational-based estimates of climate sensitivity can only place a lower bound on ECS.

    How do we know that the initial transient response is not greater than the ECS? In my rock-head conceptual model, the ECS is the sum of strong, weak, positive, negative, local and temporal TCSs that occur over a time period that adequately represents the long-term mean.

    Paul_K (Comment #109509)
    February 3rd, 2013 at 6:05 am

    I may be wrong, but I suspect that Annan’s subjective assessment of the pdf of ECS takes into account two papers that he has recently published with Hargreaves. Both of these look at temperature change over the LGM where the observational timeframe should be long enough to exclude this particular problem, although modeling the LGM introduces a number of other questions concerning data quality.
    Do you think that an ECS calculated from LGM insolation, which has a low globally averaged delta-Watts and a very high regional delta-Watts that produces unique feedbacks, can be compared with LLWMGHG CO2 forcing? To my way of thinking, these two physical processes would produce different results, like getting a face tan with one of those old-school face-tanning reflectors versus lying on a beach towel.

    I can’t help but think that some of the ECS calculation is a modern version of the gradual reduction of the Millikan oil drop experimental error.

  80. “On a simple numerical chart, I might agree with Steven as to where to separate the labels.”

    Well, since I helped define the term Lukewarmer, that would probably be a good thing.

    Let’s put it this way. Someone who asserts that CS is less than 1 is not a lukewarmer. Somebody who said it was 0.8 (-0.2, +1) would probably qualify. I find it funny that skeptics who argue against settled science think that their estimate of ECS is settled. With forcings in the denominator, and forcings being the most uncertain input, you’d think they would grasp the effect that has on the uncertainty. I dunno.
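
    A numerical sketch of the denominator point above, using the generic energy-budget relation S = F2x * dT / (dF - dQ) with invented numbers; none of these values are anyone's published estimates, they only show how forcing uncertainty feeds into the upper tail.

    ```python
    import numpy as np

    # Hypothetical energy-budget illustration: with forcing in the denominator,
    # the forcing uncertainty shows up mostly in the upper tail of S.
    rng = np.random.default_rng(1)
    F2X, dT, dQ = 3.7, 0.8, 0.5              # W/m2, K, W/m2 (illustrative values)

    for dF_sigma in (0.5, 0.25):             # wide vs. tighter forcing uncertainty
        dF = rng.normal(2.0, dF_sigma, size=1_000_000)
        denom = dF - dQ
        s = F2X * dT / denom[denom > 0.1]    # crude guard against tiny denominators
        p5, p50, p95 = np.percentile(s, [5, 50, 95])
        print(f"sigma(dF)={dF_sigma}: S 5/50/95% = {p5:.2f} / {p50:.2f} / {p95:.2f}")
    # Tightening the forcing uncertainty leaves the median essentially unchanged
    # and raises the 5th percentile only slightly, but pulls the 95th percentile
    # down sharply.
    ```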

  81. F__k Me Steven. Can you just for one bleeding minute give up on uncontrollable Narcissism?

    You and Mike Mann, except you have a core of intellectual honesty.

    Stick to that.

  82. Steven Mosher, 0.8 +/- 0.2 is lukewarm; tepid water is about the same 🙂 But that assumes ~4 W/m2 of atmospheric forcing, not CO2 specifically. I mean, if the 333 to 345 W/m2 is due to atmospheric forcing, then what else would it be?

  83. Howard,

    How do we know that the initial transient response is not greater than the ECS?

    Given the way ECS and TCS are defined, and the way objects respond to heating, it’s extremely difficult to imagine that TCS is not lower than ECS. I don’t know what the sentence after your question means.

  84. @lucia (Comment #109514)
    February 3rd, 2013 at 8:11 am

    I’m unaware of any “magical” events having already happened.

    The completely unexpected Arctic ice collapse, with only a small amount of warming. The rise in weather events, confirmed by insurance company payouts. Insurers are in it for a profit, so they are either bumping up rates or dropping customers. (Payouts for other natural events, such as earthquakes, have risen also, but only about half as much as for weather, so it’s not just a matter of more risk being insured.)

  85. Howard,
    I think that I have interpreted your question a little differently from Lucia above.
    We can characterize the behaviour of climate sensitivity over time by looking at the gradient of a plot of (Forcing – Net Flux) against temperature. The gradient of the plot (which has units of W/m2/deg C) should be related to the inverse of unit climate sensitivity expressed in deg C/(W/m2). If you generate these plots for the GCMs, they almost all show a slow tailover – i.e. the gradient slowly decreases as temperature rises. (The very few exceptions show a near-linear behaviour.) Hence, the effective climate sensitivity in the GCMs quite generally shows an increase with time and temperature.

    Your question, if I understand it correctly, is whether it is possible that this plot (in the real world, as opposed to in the models) actually shows a gradually increasing slope with time. This would then give rise to a decreasing effective climate sensitivity, possibly associated with negative feedbacks which only become apparent in the longer term. I hope I have understood your question correctly.

    A decreasing effective climate sensitivity is not entirely impossible, but there are two reasons why it seems unlikely. The first relates to estimates of feedbacks from the various types of observational data which we have. The feedback values (which reflect the slope of the above characteristic plot) calculated from very short-term data are generally equal to or higher than the values calculated from the observed data over longer timeframes, lending some support to a gradually increasing effective climate sensitivity. However, I would have to acknowledge that this evidence is weak because of data uncertainties.

    Stronger evidence comes from understanding the physical model which gives rise to this curvature in outgoing net flux with temperature. I would strongly recommend you look at the following paper by Armour et al 2012.
    http://web.mit.edu/karmour/www/Armouretal_JCLIM_2col.pdf

    It offers a simple yet coherent explanation for the curvature observed in flux response in the GCMs. Basically, the tropical and subtropical regions show a rapid response but with a high feedback/low temperature gain. The high latitudes show a much slower response with a low feedback/high temperature gain. It is easy enough to show that when these responses are collapsed into a global aggregate response, it does indeed result in the sort of curve shape evident in most of the GCMs – an apparent increase in effective climate sensitivity as time progresses.

    If the opposite is true in the real world, then you would have to postulate the existence of some regions of high temperature gain and very rapid response, and some regions of low temperature gain with a very long response time. This would require a very fast high-temperature response in sub-polar regions, and a long slow response in the tropics and subtropics. This does not seem consistent with observations – poor as they are.
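
    A small numerical sketch of the two-region aggregation described above, in the spirit of Armour et al. The area fractions, feedbacks and heat capacities below are invented for illustration, not taken from that paper; they simply show how aggregating a fast, high-feedback region with a slow, low-feedback region makes the global effective sensitivity rise with time.

    ```python
    import numpy as np

    # Two regions, each a one-box energy balance under a constant step forcing.
    # All parameter values are illustrative only.
    F    = 3.7                         # step forcing, W/m2 (roughly 2xCO2)
    area = np.array([0.8, 0.2])        # area fractions: tropics+subtropics, high latitudes
    lam  = np.array([2.0, 0.6])        # local feedback parameters, W/m2/K
    C    = np.array([8.0, 60.0])       # effective heat capacities, W yr m-2 K-1

    steps_per_year, years = 10, 300
    dt = 1.0 / steps_per_year
    report = {10 * steps_per_year, years * steps_per_year}

    T = np.zeros(2)
    for step in range(1, years * steps_per_year + 1):
        T += dt * (F - lam * T) / C                        # regional energy balance
        if step in report:
            T_glob  = float(np.sum(area * T))
            lam_eff = float(np.sum(area * lam * T)) / T_glob   # = (F - N_global) / T_glob
            print(f"year {step * dt:5.0f}: T = {T_glob:.2f} K, "
                  f"effective sensitivity = {F / lam_eff:.2f} K")
    # The effective sensitivity diagnosed early in the run is noticeably lower than
    # the one diagnosed near equilibrium, i.e. the slow tailover described above.
    ```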

  86. Howard,
    Re the LGM, I think that any conclusions are subject to great uncertainty. I was mainly pointing out that Annan’s view on the pdf of climate sensitivity was likely influenced by these two recent papers.

  87. Bugs, you mention the Arctic ice collapse as an example of magical things that apparently have happened. Ice melts during summer; there is nothing magical about that, it has happened for millions of years. Last summer there was an extreme storm at the beginning of August that disrupted the Arctic ice much more than usual, resulting in a steep decline within a matter of days. The amount of sea ice has since recovered quickly; right now it is similar to the mean value of the last decade. At the same time the Antarctic sea ice level is well above its normal value, almost at a record level for the satellite era. I would not call that magical, but it is certainly remarkable.

  88. The global sea ice area is in decline, because the Arctic loss far exceeds the Antarctic gains. The ‘magical’ reference was not meant literally; it was just a figure of speech, in reference to Lucia’s use of the word. The 2C ‘magical’ figure seems to be more than magical, since we already have extreme events happening. The storm was large, but it only broke up the Arctic ice because the permanent thick ice was gone; it was just churning up the thin ice that managed to rebuild over winter. What has happened in the Arctic is remarkable, and it happened far quicker than was predicted.

    Weather events such as storms and flooding are causing more damage than before, to the extent that people are already giving in and talking of abandoning long inhabited areas.

  89. wilt –
    According to NSIDC, increased Antarctic sea ice is due to winds increased by ozone depletion.
    .
    [The webpage shows the Antarctic warming graphic from Steig et al. 2009, rather than the equivalent result from O’Donnell et al. 2010. I’ve sent a note to the author.]

  90. Bugs, I’m a little surprised to see you bringing up the example of storm damage. I assume you know that your conclusion is extremely controversial, and that plenty of authorities (perhaps even the consensus) seem to feel that the entire increase in damage is a result of random chance plus the increased amount of development in vulnerable areas. Certainly some important indicators, like the number of major hurricanes, have shown no visible trend at all. You may not agree, but bringing it up as proof doesn’t help your case. Don’t call something “magical” that has not been shown to exist at all.

  91. Bugs,

    Here is some data for you.
    http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
    It’s often easier to deal with data rather than magic. Well sometimes anyway. In this instance, a magic wand is needed to explain why the level remained absolutely flat during the main heating period, and then started to show increased variability and a slight decline as temperatures leveled off. (I don’t know the answer.)

    Here are some genuinely nice photos of the north pole for you:-
    http://wattsupwiththat.com/2009/04/26/ice-at-the-north-pole-in-1958-not-so-thick/

    Another thing I find very interesting is that the NW passage opened up “for the first time” in 2000 with a huge fanfare in the news media. It opened up again “for the first time” in 2007, according to a number of slightly flawed reports. A quiz question for you:- how many times was the NW passage successfully traversed before the start of the satellite era, and who was the first to do it?

    I know that Lucia frowns on rhetorical questions, but I honestly think you might gain some insight from a little historical research into natural fluctuations of Arctic Ice extent.

  92. Bugs, you wrote that with respect to sea ice, Arctic loss far exceeds Antarctic gains. This is just not true; apparently you have not looked at the data. Most recent data: Arctic minus 0.63 and Antarctic plus 0.66 million square km. In percentages this implies about a 5 percent loss and a 25 percent gain compared to normal values. Similar values have been reported for quite a while now; there is no cherry-picking here. Link: http://wattsupwiththat.com/reference-pages/sea-ice-page/

  93. HaroldW, you mentioned that increasing winds contribute to sea ice gain in the Antarctic area. Although you do not say so explicitly, it sounds as if we can therefore more or less ignore the striking increase. At the same time someone like Bugs ignores the role of the wind/storm in breaking up the summer ice of the Arctic. One cannot have it both ways (blaming the wind in one part of the world, and ignoring the role of the wind on the other side of the planet). Furthermore, the August storm in the Arctic was a well-defined temporary event. I am not so sure that the winds in Antarctica were more intense than usual throughout the last year, when the Antarctic sea ice was above normal.

  94. The deep summer melt in the Arctic isn’t so much a collapse as a regime change from the previous thirty years that started in 2006. In 2007, a combination of wind and ocean currents caused a lot of multi-year ice to pass out of the Arctic Ocean into the Greenland Sea and melt. This increased the vulnerability of the Arctic sea ice in the summer. But the 365 day moving average of the Arctic extent is still showing a more or less linear decline at about 0.07 Mm²/year. A new low was set this year (9.917 Mm²), but it was only a little lower than the previous lows in 2007 (9.946 Mm²) and 2011 (9.952 Mm²) especially compared to the linear trend. One could almost say that the 365 day moving average has been nearly flat since 2006.
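
    For readers who want to see how a metric like the one above is built, here is a sketch of a 365-day moving average plus a linear trend, applied to a synthetic daily extent series; the series is generated, not real data, and its trend is tuned to come out near the 0.07 Mm²/year figure quoted above.

    ```python
    import numpy as np

    # Synthetic daily extent: slow decline + seasonal cycle + weather noise.
    rng = np.random.default_rng(7)
    days = np.arange(365 * 34)                            # roughly 1979-2012
    extent = (11.5 - 0.00019 * days                       # slow decline, Mm^2 per day
              + 4.5 * np.sin(2 * np.pi * days / 365.25)   # seasonal cycle
              + rng.normal(0, 0.15, days.size))           # weather noise

    smooth = np.convolve(extent, np.ones(365) / 365, mode="valid")  # 365-day mean
    t_years = np.arange(smooth.size) / 365.25
    slope, intercept = np.polyfit(t_years, smooth, 1)     # ordinary least squares
    print(f"trend of the 365-day mean: {slope:.3f} Mm^2 per year")
    ```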

  95. Greenhouse gases emitted through human activities and the resulting increase in global mean temperatures are the most likely underlying cause of the sea ice decline, but the direct cause is a complicated combination of factors resulting from the warming, and from climate variability. The Arctic Oscillation (AO) is a see-saw pattern of alternating atmospheric pressure at polar and mid-latitudes. The positive phase produces a strong polar vortex, with the mid-latitude jet stream shifted northward. The negative phase produces the opposite conditions. From the 1950s to the 1980s, the AO flipped between positive and negative phases, but it entered a strong positive pattern between 1989 and 1995. So the acceleration in the sea ice decline since the mid 1990s may have been partly triggered by the strongly positive AO mode during the preceding years (Rigor et al. 2002 and Rigor and Wallace 2004) that flushed older, thicker ice out of the Arctic, but other factors also played a role.

    Since the mid-1990s, the AO has largely been in a neutral or negative phase, and the late 1990s and early 2000s brought a weakening of the Beaufort Gyre. However, the longevity of ice in the gyre began to change as a result of warming along the Alaskan and Siberian coasts. In the past, sea ice in this gyre could remain in the Arctic for many years, thickening over time. Beginning in the late 1990s, sea ice began melting in the southern arm of the gyre, thanks to warmer air temperatures and more extensive summer melt north of Alaska and Siberia. Moreover, ice movement out of the Arctic through Fram Strait continued at a high rate despite the change in the AO. Thus warming conditions and wind patterns have been the main drivers of the steeper decline since the late 1990s. Sea ice may not be able to recover under the current persistently warm conditions, and a tipping point may have been passed where the Arctic will eventually be ice-free during at least part of the summer (Lindsay and Zhang 2005).

    Examination of the long-term satellite record dating back to 1979 and earlier records dating back to the 1950s indicates that spring melt seasons have started earlier and continued for a longer period throughout the year (Serreze et al. 2007). Even more disquieting, comparison of actual Arctic sea ice decline to IPCC AR4 projections shows that observed ice loss is faster than any of the IPCC AR4 models have predicted (Stroeve et al. 2007).

    Last updated: 4 January 2013

    http://nsidc.org/cryosphere/sotc/sea_ice.html

  96. Bugs, you manage to write a full page on the Arctic sea ice and at the same time ignore everything I wrote on the remarkable increase of the Antarctic sea ice. I leave it to others to draw conclusions, and will not repeat the arguments that I submitted in previous contributions.

  97. @Wilt

    from the same link

    Arctic and Antarctic Sea Ice Extent Anomalies, 1979-2012: Arctic sea ice extent underwent a strong decline from 1979 to 2012, but Antarctic sea ice underwent a slight increase

    The decline is ‘strong’; the increase is ‘slight’. The Antarctic and the Arctic each have unique climate characteristics, which means they are reacting differently to the changing global conditions. The Arctic has declined far more quickly than expected, which demonstrates that even small increases in temperature can produce stronger changes than you would expect intuitively, or even from intense scientific analysis and modelling.

    He didn’t write it. That was a cut and paste job.

    Yes, it was. I think they put it well.

  98. bugs, who knew that james could be right about ecs and wrong about curry? please leave your logic lapses at the door

  99. Paul_K (Comment #109509)

    My comment about either a low ECS with a high TCR or a high ECS with a low TCR was the result of my own “analysis”. Basically, through a curve-fitting exercise I estimated how the models account for the heat in the pipeline, and then compared my model against observations at various ECS/TCR combinations.

    My model showed a clear best-fit line. Perhaps of interest to Lucia: it predicts which models will be running hot and which will be running cold. I’ll wager that my model will fail 🙂 Perhaps of interest to you: I started off from Isaac Held’s TOA flux/delta-temperature plot that you had used in your curvilinear analysis.

    Anyway, for what it’s worth, here’s my model:

    https://sites.google.com/site/climateadj/simple-model-of-models

  100. When I was creating my model I had a sneaking suspicion that I was recreating Lucia’s “Lumpy” model. Looking back at her post from ~5 years ago, it appears I did, except that I used the cube root of time, as estimated by curve-fitting to model output. So, Happy Birthday Lumpy!
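
    For anyone curious what a “Lumpy”-style lumped-parameter model looks like in practice, here is a minimal sketch of a two-timescale response to a 1%/yr CO2 ramp. The fast/slow split and the timescales are assumptions chosen for illustration; this is not a reconstruction of either model mentioned above.

    ```python
    import numpy as np

    # Two-timescale response to a 1%/yr CO2 ramp, showing how an assumed ECS
    # maps onto a TCR-like number. All response parameters are illustrative.
    F2X = 3.7                                   # W/m2 per CO2 doubling
    dt = 1.0                                    # years
    years = np.arange(0, 140, dt)
    forcing = F2X * years * np.log(1.01) / np.log(2.0)   # ramp, doubling near year 70

    def ramp_response(ecs, frac_fast=0.5, tau_fast=4.0, tau_slow=250.0):
        """Warming under the ramp for a two-exponential (fast + slow) response."""
        T = np.zeros_like(years)
        for tau, frac in ((tau_fast, frac_fast), (tau_slow, 1.0 - frac_fast)):
            x = 0.0
            for i, f in enumerate(forcing):
                x += dt * (frac * ecs * f / F2X - x) / tau   # relax toward this mode's share
                T[i] += x
        return T

    for ecs in (2.0, 3.0, 4.5):
        tcr = ramp_response(ecs)[70]            # warming at roughly the time of doubling
        print(f"assumed ECS = {ecs:.1f} K  ->  TCR-like response ~ {tcr:.2f} K")
    ```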
