Cowtan & Way Trends compared to AR5 model trends.

Yesterday in comments, I mentioned that the Cowtan and Way corrections do not make much of a difference to the sorts of things I’ve been saying pre-Cowtan and Way. Regular readers will be aware that I have been testing models against observations, and that I show three observational data sets: NOAA, GISTemp and HadCrut. I had planned (but been slow to implement) looking into masking the model runs, with a motivation similar to the one that led Cowtan and Way to incorporate the poles into their temperature reconstruction. That is: we know NOAA and HadCrut simply drop out the poles, and if the poles are warming faster than the rest of the earth, this will lead to a bias. For similar reasons, I was not planning to put my nose to the grindstone and try to write anything formal up until the models look distinctly bad relative to GISTemp (which could happen at the close of this year). The comparison with GISTemp shows some start years rejecting the model mean and others not. So… I was sort of just waiting to get a bunch of scripts ready to run things at year end.

But in the meantime, one might ask: what sort of difference does Cowtan and Way make? Well, it happens that Table IV in Cowtan and Way provides bias corrections for HadCrut. These bias corrections are amounts to subtract from HadCrut4 trends to bring them into line with the trends from their “Hybrid Method s=1”. Note that for the 16-year period 1997/1 to 2012/12, the “Hybrid Method s=1” gets a trend in GMST of 0.119 C/decade, while their kriging method gets 0.108 C/decade. So, for that period, the Hybrid Method gives the larger trend. (I don’t know what happens for other start years.)

Cowtan and Way tabulated the trend corrections required to make HadCrut4 trends computed from (Jan, start year) to (Dec, 2012) match their Hybrid method. I’ve reproduced the table below.

Table IV. Bias in HadCRUT4 temperature trends running from various start dates to the present, estimated using the hybrid data (s = 1.0), in units of °C/decade. The impact of the bias on the significance of the trend is given in the third column; this is the trend bias in units of the trend uncertainty (σ), and measures how much a significance test on the trend will be affected by the bias.

Start year   Trend bias (°C/decade)   Trend bias (in σ)
1990 -0.020 -0.39
1991 -0.020 -0.38
1992 -0.027 -0.47
1993 -0.030 -0.50
1994 -0.034 -0.54
1995 -0.036 -0.52
1996 -0.039 -0.51
1997 -0.055 -0.70
1998 -0.058 -0.67
1999 -0.056 -0.60
2000 -0.055 -0.54
2001 -0.057 -0.53
2002 -0.056 -0.46
2003 -0.083 -0.58
2004 -0.081 -0.48
2005 -0.045 -0.22

I modified my script to simply overlay the “corrected” value of the trend, computed by subtracting the correction from the HadCrut4 trend. The results for trends from Jan 1990 to Dec 2012 are shown below.

[Figure: CotwinWayCorrectionSince1990]

Note that GISTemp was just barely outside the spread of “all weather in all models”; the Cowtan and Way corrected trend is now just barely inside. So, while the result is slightly different, it is not greatly different. (It may be that if Cowtan and Way had used kriging as their main method, the corrected HadCrut4 trend would lie on top of the GISTemp trend and so make no difference at all, but I can’t be sure of that.)
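For anyone who wants to see the mechanics, here is a minimal sketch of that sort of calculation. This is not my actual script: the anomaly series below is just a placeholder, and the trend is a bare-bones OLS fit in C/decade.

```python
# Sketch: OLS trend of monthly anomalies in C/decade, then subtract the
# Cowtan & Way Table IV correction for the chosen start year.
import numpy as np

# Table IV trend biases (C/decade) for trends running Jan(start year)-Dec 2012.
trend_bias = {1990: -0.020, 1997: -0.055, 2001: -0.057, 2005: -0.045}

def ols_trend_per_decade(monthly_anomalies):
    """Least-squares slope of a monthly series, converted to C/decade."""
    months = np.arange(len(monthly_anomalies))
    slope_per_month = np.polyfit(months, monthly_anomalies, 1)[0]
    return slope_per_month * 120.0  # 120 months per decade

start_year = 1990
n_months = (2012 - start_year + 1) * 12
# Placeholder standing in for the HadCrut4 monthly anomalies.
hadcrut4 = np.random.default_rng(0).normal(0.4, 0.15, n_months)

raw = ols_trend_per_decade(hadcrut4)
corrected = raw - trend_bias[start_year]  # the bias is negative, so this raises the trend
print(f"raw: {raw:.3f} C/decade   corrected: {corrected:.3f} C/decade")
```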

Someone might ask: How do things look if we start from the year Lucia likes to use to compare AR4 projections to data, i.e. 2001? Here’s how it looks:

[Figure: CotwinAndWay_2001]

Note in the figure above that this comparison shows we could not reject the AR5 model trend range using Cowtan and Way. That said: We already couldn’t do so using HadCrut, NOAA, or GISTemp. So the Cowtan and Way correction makes no substantive difference here either.
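The comparison step itself is simple: check whether the observed (or corrected) trend falls inside the spread of trends from the individual runs. A toy version follows; the model-run trends here are made-up stand-ins, not the actual AR5 numbers, and this is only the simple spread check, not the more formal tests against the model mean.

```python
# Toy version of the spread check: is an observed trend inside the
# 2.5%-97.5% range of trends from individual model runs?
import numpy as np

rng = np.random.default_rng(1)
# Stand-in values only; real numbers would be trends computed from the model runs.
model_run_trends = rng.normal(loc=0.25, scale=0.08, size=100)  # C/decade

observed_trend = 0.119  # C/decade, e.g. the Cowtan & Way hybrid 1997-2012 value

lo, hi = np.percentile(model_run_trends, [2.5, 97.5])
print(f"model spread: {lo:.3f} to {hi:.3f} C/decade; "
      f"observed inside: {lo <= observed_trend <= hi}")
```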

Naturally, I would like to incorporate the Cowtan and Way correction into comparisons using data through September 2013. I can run that for HadCrut, NOAA and GISTemp, but the Cowtan and Way series is not yet available for those months; after all, they only just published their paper. It appears they intend to make an updated version of their temperature series, including recent months, available fairly promptly. That said, they are two authors, not an agency funded to produce a time series, so we can’t know for certain how regularly they will update it. If and when regular monthly updates become available, I plan to update my script to incorporate them more fully. In the meantime: the rhetoric in press releases may be making great hay out of the notion that this has “killed the pause”, but it hasn’t injured the “models are off” claims. If we examine longer trends, many individual AR5 models appear much too warm, and the observations remain in the lower extreme of the spread of “all weather in all models”. This strongly suggests the models are biased ‘warm’.

223 thoughts on “Cowtan & Way Trends compared to AR5 model trends.”

  1. Nice analysis Lucia. You noticed the main point of all this. That is, that their new analysis does show a slight amount of “warming” over the pause period (versus NO warming), but it does not show very much, and does not show warming anywhere close to that predicted by the models. It just allows some to say there has been warming over the pause period, implying that there is no pause, when that is not an accurate description. Alarmists use this trick often when discussing temps. Yes, you can say that a .01 deg rise over such and such amount of time constitutes warming, but it does not validate the model projections.

  2. Bruce

    It just allows some to say there has been warming over the pause period, implying that there is no pause, when that is not an accurate description.

    The press release stuff and some blogs are hammering on the “pause killing” aspect heavily and also making it read as if killing “the pause” is sufficient to say the models are looking valid. Alas, killing the pause is not sufficient for that!

    That said, I think it’s important to mention that “killing the pause” doesn’t seem to have been the authors’ motive for doing the research. Mosher knows Robert Way. Way works on polar research, and so has a specific interest in temperature changes at the poles. In that light, one can see he would have a strong motive to want a better method for filling in temperatures at the poles. That filling in those temperatures will affect the estimate of the global average is true, but that doesn’t mean that ‘fixing’ a ‘problem’ with the global average (particularly ‘the pause’) was the main motive, or even a motive. The motive was more likely “getting a better estimate of temperature changes in the part of the world that Robert focuses on anyway.”

    That said: We do need to put the changes in the context of testing models, and they don’t make a big difference. Models still look pretty bad, though maybe a tiny bit less bad. If the models are bad we can be pretty confident the divergence will increase over time, though it might take longer. OTOH: if the models are ok, the divergence will correct itself. Observing this puts us exactly where we were before C&W was published!

  3. Lucia,
    I agree that the authors did not write the paper with the intent of eliminating the pause. They just did the work to investigate the issue, as scientists should. It’s just that many of the usual suspects are using it in that way.

  4. Bruce,

    You can tell someone (eg Rahmstorf) is primarily motivated by the politics when they loudly trumpet any result, no matter how modest, which supports their preferred view of the world, while remaining silent about (or worse, criticizing without good reason) any result which conflicts with their world view. There is no real interest in finding truth. It applies to both extremes. The problem, as I see it, is that climate science, on average, does not lie in the middle of the political spectrum.

  5. Lucia,

    The press release stuff and some blogs are hammering on the “pause killing” aspect heavily and also making it read as if killing “the pause” is sufficient to say the models are looking valid. Alas, killing the pause is not sufficient for that!

    Besides which, buried over at Judy’s in the comments is this:

    http://judithcurry.com/2013/11/13/uncertainty-in-sst-measurements-and-data-sets/#comment-413386
    –snip
    Dataset trend ± stderr(trend)
    NCEP/NCAR 0.178 ± 0.107
    GISTEMP 0.080 ± 0.067
    NOAA 0.043 ± 0.062
    HadCRUT4 0.046 ± 0.063
    Null 0.064 ± 0.078
    Kriging 0.108 ± 0.073
    Hybrid s=1 0.119 ± 0.076

    the critical value of a one-tailed test at 5%, against the null that the trend is nonpositive (i.e. that there is a pause), is -1.64. The NCEP/NCAR z = -1.66, so that rejects a “pause.” That is the only one of these seven mean/stderr pairs that does so. In particular, Cowtan and Way’s null, Kriging and Hybrid s=1 results do not reject the null of a pause.
    –snip

    Maybe the Pause isn’t quite as dead as some might have hoped.
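    The one-tailed check quoted in the snip above is easy to reproduce. A minimal sketch, using the tabulated trend and standard-error pairs (same arithmetic as the quote, just written with the positive sign convention, i.e. z = trend/stderr compared against +1.645):

    ```python
    # One-tailed test of the null that the trend is non-positive ("a pause"),
    # using the trend and stderr values quoted above (units: C/decade).
    datasets = {
        "NCEP/NCAR":  (0.178, 0.107),
        "GISTEMP":    (0.080, 0.067),
        "NOAA":       (0.043, 0.062),
        "HadCRUT4":   (0.046, 0.063),
        "Null":       (0.064, 0.078),
        "Kriging":    (0.108, 0.073),
        "Hybrid s=1": (0.119, 0.076),
    }

    Z_CRIT = 1.645  # 5% one-tailed critical value of the standard normal

    for name, (trend, stderr) in datasets.items():
        z = trend / stderr
        verdict = "rejects" if z > Z_CRIT else "does not reject"
        print(f"{name:11s} z = {z:4.2f}  {verdict} the 'pause' null")
    ```

    Only NCEP/NCAR clears the 1.645 threshold; in particular, neither the kriging nor the hybrid series rejects the null of a non-positive trend, which is the quoted point.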

  6. It seems more reasonable to conclude Cowtan and Way is just an exercise in post hoc excuse making.

  7. I know I’m late to this game, since The Blackboard has gone on to other things, but I thought I’d reproduce a brief exchange I started yesterday at Climate Audit before comments were closed, and then see if anyone wanted to explore the issue further here. My comment is below, followed by replies from David Young and Paul K – all from Climate Audit – and then followed by a further comment from me here.

    On Nov. 21, I wrote:

    In his post, Steve said, regarding CW13: “It doesn’t appear to me that their slight upward revision in temperature estimates has a material impact on the discrepancy between models and observations – a discrepancy which remains, despite efforts to spin otherwise.” Well, model imperfections are well known, but here’s the point I’d like to explore. The model projections for the past 15 years or so clearly overstate surface warming. However, ocean heat content appears to have been rising at a fairly unabated pace – i.e., the surface temperature hasn’t risen much but the world has accumulated considerable heat. One apparent reason for the discrepancy is a change in the vertical distribution of heat gain within the oceans, with disproportionately more accumulating at depth compared with the earlier decades. If this redistribution had not occurred, what would have happened to surface temperature? How serious a flaw is it for models not to have anticipated this type of redistribution? More particularly, how long can this type of discordance between planetary heat gain and surface warming persist? My guess is that at some point, the surface must begin to catch up, and this will cause model projections at multidecadal intervals perhaps to conform better to observations than they have when only one or two decades are considered. There are certainly other problems with models, but is this particular failure to match observations as serious as it is sometimes made out to be?

    • David Young
    Posted Nov 21, 2013 at 8:20 PM

    Fred, This is an interesting question. In my experience in a complex nonlinear system all scales affect all other scales. So, if models don’t reproduce the recent climate system changes, this may affect the longer time scales too. My years of experience since our last discussions at Judith’s have convinced me that simple models constrained with good data may be more accurate than complex ones with many parameters.
    One common way of arguing about the models is “they don’t simulate weather, but they get the long term right.” If so, are they any better than simple conservation of energy models? Unphysical dissipation means details are smeared and damped and subtle effects are just lost. But the overall energy balance might still be right.

    • Paul_K
    Posted Nov 22, 2013 at 4:39 AM

    Fred, the models do not just overestimate surface temperature gain; they overestimate tropospheric temperature gain and simultaneously overestimate ocean heat gain. See (Troy) Masters 2013 for a simple comparison of observational data with CMIP5 models. No place to hide there.

    Here are my thoughts in response to David and Paul:

    David, I agree with your point in general, but in this particular circumstance, I’m merely suggesting that since upper ocean temperature is an important determinant of sea surface temperature, the surface would have warmed more if the heat observed to be entering the ocean had not undergone a redistribution leaving less at shallow depths and more at deeper levels. The potential importance of this lies in the expectation that the upper ocean would have to catch up sooner or later and thereby render surface warming more commensurate with ocean heat uptake.

    Paul, I’m not sure I understand how your point about tropospheric warming changes anything. Tropospheric warming is strongly determined by surface warming, and so saying that tropospheric warming was overestimated is to a large extent the same as saying surface warming was overestimated. I think you’re right that the models have tended to overestimate ocean heat gain. My point, though, was that even with the observed ocean heat gain, the surface temperature would have risen more than observed if the vertical distribution of ocean heat uptake had not changed significantly from previous decades. The model outputs would still have been higher than the observed warming curve, but the observations might well have fit within a 1-sigma spread of model means. At least, that’s my speculation based on my perception that the overestimate of planetary heat imbalance was not extreme.

    As to the reasons for the overestimate of heat gain, this is somewhat tangential to the above points, but it’s not clear to me how much is a deficiency in model skill and how much is due to flawed input data – forcing estimates in particular. In the past few years, a number of reports, including one you cite, have estimated effective climate sensitivity based on aerosol forcing values that were less negative than previously thought. This is still a very uncertain area, however, and some recent evidence suggests that the strength of negative aerosol forcing may have been underestimated rather than overestimated. An example is Carslaw et al, Nature 503:67-71 (Nov. 7, 2013) – a link to the full (paywalled) article is uncertainty in indirect forcing. I don’t claim this to be the last word, but it emphasizes how vulnerable energy balance models are to uncertainties in both forcing and heat uptake data.

  8. Fred, even if what you say is correct, it was my understanding that the ocean heat uptake is still lower than the proposed energy imbalance caused by external forcings. Is this true?

  9. Re: Fred Moolten (Nov 22 13:38),

    My point, though, was that even with the observed ocean heat gain, the surface temperature would have risen more than observed if the vertical distribution of ocean heat uptake had not changed significantly from previous decades.

    That’s a big if. The only evidence we have for this is ARGO and it hasn’t been around long enough to be considered completely reliable. I also know of no proposed mechanism for this change. Yet another reason to be skeptical.

  10. HR – Yes, based on the input values for external forcings, the CMIP5 models estimated a greater ocean heat uptake than has been observed. How much of this difference is due to incorrect values for the forcings and how much to model error is unclear to me. However, as I said above, that difference alone should not have caused as much disparity as has occurred between model predictions and the observed surface temperature change. An additional contribution to the disparity was (apparently) the change in where the heat went – more went deeper and less to the upper ocean than previously.

    I don’t think even much improved models will be able to anticipate all such changes in heat distribution. On the other hand, heat can’t keep going disproportionately into the deeper ocean forever without the upper ocean, and hence the surface, beginning to catch up. If that conjecture is correct, the question will be how long it will take for ocean heat and surface temperature to once again rise commensurately. I would guess perhaps not much more than 5 years, and very unlikely more than 10. Five years from now, for what it’s worth, we’ll be close to a nadir in the solar cycle 24 to 25 transition, but 10 years from now, we should be in the upstroke of cycle 25. Although this shouldn’t make a huge difference, it will at least slightly affect the timing and scale of a surface warming uptick.

    In none of my comments have I tried to make any quantitative estimates of how much surface temperature would have risen if the distribution of ocean heat gain over the past 15 years had not changed. It seems to me that the models might be helpful for quantifying this. In general, as I understand it, the models use forcing data to estimate ocean heat uptake, with attention to a variety of parameters affecting uptake efficiency, including diffusivity, mixed layer depth, convection, latitudinal and circulation effects, etc. Having done this, the models then utilize the uptake estimates in formulating estimates of surface warming. What would happen if instead of this, the models directly specified ocean heat uptake at the observed level, but with the other parameters unchanged from their previous values?

  11. Re: Fred Moolten (Nov 22 16:23),

    You lost me on Reanalysis. I have even less confidence in Reanalysis numbers (I refuse to call them data) than I do in the ocean heat content measurements themselves, which at least are actual data. Plots like this one make me suspect that we don’t have much of a clue what’s happening in the oceans. I see no good reason for the break in the relationship between OHC and steric sea level in 1995 other than the data are simply wrong.

  12. Fred Moolten,
    I don’t see that there has been a big change in the rate of ocean heat gain; to me it looks reasonably steady for 0-2000 meters for the last 30 years or so (Levitus et al), especially when you consider that the pre-ARGO data is quite sparse, and so comes with considerable uncertainty. To emphasize what DeWitt noted: The issue with GCM’s is that they rather grossly overestimate surface warming AND ocean heat uptake AND tropospheric warming. It is not that the ‘heat is hiding in the deep ocean’… it is that the heat is truly missing, and I suspect on its way to the next galaxy, not hiding somewhere on Earth.
    .
    The clear implication based on a simple energy balance (well, clear to me at least) is that either the true climate sensitivity to GHG forcing is considerably less than the models estimate, or the level of aerosol offsets is far higher than the current best estimates (which according to the IPCC AR5, are somewhere near 0.8 watt/meter^2 in total). Since the aerosol offsets are mainly observationally based, I consider them more credible… and the models less credible.
    .
    I suspect ten more years of relatively slow warming will sort this all out. I know that is not a very satisfying prospect to many, and probably not to you, but significant energy policy changes will for sure take longer than that to implement… and will ultimately (I hope) be based on solid science and a reasoned balance of costs and benefits. The grown-up discussion on climate and energy policy that Paul_K wants seems to me nowhere close to happening, but James Hansen’s recent public pronouncements on the need for nuclear power are at least a step in the right direction.

  13. Steve -As you say, ocean heat gain (0 to 2000 meters) has been fairly steady for the past 30 years, but that was the point I was trying to make – perhaps not clearly enough. If recent heat gain was similar to that of 30 years ago, why was the surface warming much faster 30 years ago than recently? The dissection of the levels where warming has occurred (see my link to ocean heat content above) suggests an answer – less of recent heat gain is near the surface (down to 700 meters), where it can heat the surface, and more at lower depths (700 to 2000 and perhaps below).

    The other points you made are ones I’ve already tried to address, and I hope other readers will look at what we’ve all said and chime in.

  14. Fred Moolten, “but that was the point I was trying to make – perhaps not clearly enough. If recent heat gain was similar to that of 30 years ago, why was the surface warming much faster 30 years ago than recently?”

    Some would think it has to do with the ~60 year pseudo-oscillations varying the mixing efficiency. If the current rate of uptake is on the order of 0.8C per ~300 years, then the ~0.9C cooler period formerly known as the little ice age may still have a little recovery left, since the 1816 and 1900 solar/volcanic impacts might have slowed the recovery. If it takes about as long to recover as it does to reach the minimum, then “normal” may have been underestimated by a few tenths of a degree.

    I know that is just crazy talk since there is zero evidence of a longer term secular trend /sarc

  15. Fred, I was including the deep ocean when I said there isn’t enough heat around. But ignoring that, the models matter. The models give us the scary story 50 or 100 years from now. If we are going to ignore what the models tell us now, why should we continue to take notice of what they tell us will happen in 50 years? You seem to be suggesting the models are unreliable in the present period.

    If you’re proposing that we replace the models with the what-Fred-thinks-o-meter, can you tell me what the what-Fred-thinks-o-meter is saying about 50-100 years from now before I make a judgement?

  16. I agree that one thing to look at is what do GCM’s do with ocean heat uptake specified. The exact distribution of the heat in the oceans may or may not matter that much. The rate of warming of the top layer is probably more important. I also agree that the rather poor model performance with regard to the tropospheric temperature gradient with altitude is really extremely serious because it goes to the fundamental theory of the greenhouse effect. Also, the actual rate of warming of the oceans is in fact very slow which means to me that the system has a large capacitor that will moderate the rate of any changes (hopefully). But we all knew short term trends don’t mean that much.

    The problem here is that we are talking about such small net forcing numbers compared to the overall energy flows in the system that the uncertainties are very large.

    The other problem is that modeling a nonlinear multiscale system is always very questionable in the absence of careful constraint by real data, but then our data is rather noisy too.

    So, what’s the bottom line for me? We are rather uncertain about past climate changes and we are very uncertain about the future. We know that ice ages happen for sure and they are catastrophic. We are not so sure what a warming world will look like but we had better be ready for it because of the manifest failure of climate policy.

    If I were king, I would dramatically curtail investment in GCM’s and start trying to amass better data and work on the theory for simpler subsystems. And I’d enforce stricter ethical standards on public servants such as they have in England curtailing their policy advocacy. But perhaps fortunately, I am not king . 🙂

    One other thing I know is that in the highly political atmosphere around this issue, you can’t trust most people. People here seem to be open minded and I do respect Lucia. McIntyre is similarly trustworthy I think.

  17. Fred Moolten:
    ” If recent heat gain was similar to that of 30 years ago, why was the surface warming much faster 30 years ago than recently? ”
    .
    Maybe because other processes (pseudo-cyclical and otherwise) have changed over 30 years. The key is to consider energy balance…. count the joules and don’t worry too much about the rest (and just forget the hysteria). The heat balance says that ever higher GHG forcing, taking into account ocean heat uptake, is causing less warming than high climate sensitivity demands…. unless you believe that the net aerosol offset is very high, in spite of the weight of observational evidence to the contrary. The ‘pause’ (I prefer ‘slower recent warming’) is quite real, and the consequences of that reality are inevitable, even if inertia from the leading lights in the field causes some delay. Slower recent warming just means the models almost certainly have serious problems that lead to quite wrong projections. I think the sooner the models are fixed, the sooner public policy can be based more on rational consideration of costs and benefits and less on alarming model projections.

  18. DeWitt Payne (Comment #121431)
    November 22nd, 2013 at 4:39 pm
    “I see no good reason for the break in the relationship between OHC and steric sea level in 1995 other than the data are simply wrong.”
    ——————————————————
    Which data do you think are wrong – the OHC measurements or the assignment of the steric component of sea level rise? And it looks like some sort of one-time adjustment error.

  19. The heat balance says that ever higher GHG forcing, taking into account ocean heat uptake, is causing less warming than high climate sensitivity demands…. unless you believe that net aerosol offset is very high

    Over at Troy’s site, Paul_K estimates TCR at 1.34K which is quite close to Held’s estimate at 1.4K. I guess we can look forward to his forthcoming analysis. Compared to Gillett et al range of 1.3-1.8K and CMIP3 median of 1.8K, this is on the low side but (perhaps) not grossly lower. I have no idea what that means for ECS although TCR is probably more relevant for policy. I’m not that interested in policy, but the issue of slow feedbacks/nonlinearities and what it implies for linear analysis of mean T_anomaly to TOA flux imbalance occasionally rears its head and seemingly disappears from conversation. I suspect that the question of sensitivity estimates (ECS/TCR) can be argued for decades.

  20. HR (Comment #121425)
    November 22nd, 2013 at 3:38 pm
    “it was my understanding that the ocean heat uptake is still lower than the proposed energy imbalance caused by external forcings. Is this true?”
    ————————————————–
    Is not the TOA imbalance determined almost entirely by changes in OHC (rather than from the forcings)? If so, the ocean heat uptake cannot be lower than that predicted by the TOA imbalance.

  21. RB,
    “I suspect that the question of sensitivity estimates (ECS/TCR) can be argued for decades.”
    .
    Sure, maybe more than decades. But I do think that gradually constrained/improved estimates of aerosol effects and ocean heat uptake ought to (gradually!) reduce the credible range. The arguments will continue, but the range argued about should become smaller. The thing that is weird to me is the seeming immunity of climate models to measurement data. Within AR5 you have past model projections which have diverged from reality, furious arm waving about rapid future warming projections, and aerosol specialists saying that the aerosol offsets the modelers have assumed are too high. All too “through-the-looking-glass” for my taste.

  22. Re: Owen (Nov 22 19:50),

    Which data do you think are wrong

    The OHC data, of course. The thermal expansion coefficient of water doesn’t change as indicated by the slope being the same. Either the pre-1995 data are too high or the post 1995 data are too low. It can’t be a one time measurement error. OHC is a measurement of temperature. The heat capacity of sea water doesn’t change either. That means that all temperatures measured before 1995 were too high or all temperatures measured after 1995 were too low by a constant amount. There is no excuse for this sort of error not being caught and corrected. And it is an error.
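    A rough way to see why a constant expansion coefficient pins that slope down: for heat added to a column of seawater, the steric rise per unit of heat depends only on α/(ρ·cp), not on how deep the heat is mixed. A back-of-envelope sketch, using a single representative α (in reality α varies with temperature and pressure, so this is order-of-magnitude only):

    ```python
    # Rough link between an OHC change and steric sea level rise: for heat q
    # (J per m^2 of ocean) added to a column, dh = alpha * q / (rho * cp),
    # independent of how deep the heat is mixed (the column depth cancels).
    alpha = 2.0e-4       # 1/K, representative thermal expansion coefficient
    rho = 1025.0         # kg/m^3, seawater density
    cp = 3990.0          # J/(kg K), seawater specific heat
    ocean_area = 3.6e14  # m^2

    delta_ohc = 1.0e23   # J, an illustrative multi-year OHC increase
    q = delta_ohc / ocean_area   # J/m^2
    dh = alpha * q / (rho * cp)  # m
    print(f"steric rise of roughly {dh * 1000:.0f} mm for {delta_ohc:.1e} J")  # ~14 mm
    ```

    With α and cp treated as constants, the steric rise is simply proportional to the change in OHC, which is why a break in the OHC-versus-steric relationship points to a problem somewhere in the data.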

  23. @ Owen (Comment #121444) ,
    Your explanation seems pretty much circular.
    It seems reasonable to question the energy imbalance claims. There are many variables involved in the energy budget, and any one or more of them could be off, influencing the outcome substantially.
    Additionally, the amount of energy referred to in discussions of the energy budget is less than 1% of the total in a system that has wide dynamic swings due to weather, seasons and geography.
    Frankly this underscores the trivial nature of so much that the AGW promotion industry claims as important.

  24. Re:Owen (Comment #121444)
    November 22nd, 2013 at 8:10 pm

    Hi Owen,
    There are some eggs and chickens here. We don’t know the TOA flux imbalance from radiative measurement to an accuracy of better than plus or minus 5 Watts/m2, so we end up reliant on the differential of ocean heat gain as the best “measurement” of residual net flux after a series of forced changes. There are at least two problems with this; first, as DeWitt has demonstrated very succinctly, our knowledge of ocean heat gain is itself far from perfect, and secondly the use of OHC to estimate net flux imbalance rests on an assumption that, since the OHC represents 90-94% of all energy left in the system from a forced change in radiative flux, then its scaled differential is also a valid proxy for the net radiative flux imbalance. I think the evidence is mounting to challenge this last assumption and am planning on doing so in the near future.

  25. Hi Fred Moolten (Comment #121421),

    I’m chiming in a bit late, but you raise some interesting points:

    In the past few years, a number of reports, including one you cite, have estimated effective climate sensitivity based on aerosol forcing values that were less negative than previously thought.

    If you are referring to Paul’s cite of my paper (Masters 2013, Climate Dynamics), I would clarify that the study is not based on the less negative forcings of AR5, but rather on the aerosol forcings of AR4 (which were stronger). Using the AR5 estimates (which weren’t official when I submitted the study) would lower the estimated effective sensitivity further.

    My point, though, was that even with the observed ocean heat gain, the surface temperature would have risen more than observed if the vertical distribution of ocean heat uptake had not changed significantly from previous decades. The model outputs would still have been higher than the observed warming curve, but the observations might well have fit within a 1-sigma spread of model means.

    While the different vertical distribution of ocean warming has made the models look worse in terms of surface warming, it is important to keep in mind the tradeoff between surface temperature increase and the TOA imbalance decrease that comes from the radiative response to this surface warming. That is to say, if the vertical distribution remained the same, and the surface temperature rose at the same rate as previously relative to the rate of overall ocean heat uptake, the TOA imbalance would have decreased relative to its current value. For example, if the temperature rise over the last decade+ was an additional 0.1K (to get it into the 1-sigma spread of models), a simplified 2 W/m^2/K radiative response would have resulted in a reduction of the TOA imbalance by ~0.2 W/m^2 (this arithmetic is sketched below). In hypothesizing on the effect of maintaining the same efficiency on surface temperatures (which would make models look better in that department) it is important to remember that it would also make the models look worse in other areas.

    As to the reasons for the overestimate of heat gain, this is somewhat tangential to the above points, but it’s not clear to me how much is a deficiency in model skill and how much is due to flawed input data – forcing estimates in particular… This is still a very uncertain area, however, and some recent evidence suggests that the strength of negative aerosol forcing may have been underestimated rather than overestimated. An example is Carslaw et al, Nature 503:67-71 (Nov. 7, 2013) – a link to the full (paywalled) article is uncertainty in indirect forcing.

    Personally, I think it is hard to make the case that the CMIP5 models are underestimating the influence of aerosols (as opposed to the other way around). Unfortunately, I do not have access to the Carslaw et al paper you mention, but I don’t see in the abstract that they suggest an underestimate of the negative aerosol forcing (or provide their own estimate)…if you have access, can you share a few points or figure where they discuss this? Regardless, analysis like Murphy (2012, http://www.nature.com/ngeo/journal/v6/n4/full/ngeo1740.html) suggest the aerosol forcing has stabilized since 2000 (granted, this does not include the indirect effect, but generally both effects increase / decrease as the other does), and Klimont et al (2013, http://iopscience.iop.org/1748-9326/8/1/014003/pdf/1748-9326_8_1_014003.pdf) find a decline in global sulfate emissions since 2005, down to below the 2000 level. Basically, even if one is uncertain about the magnitude of the aerosol effect and how much of the 1850-2000 warming it masked, I have not seen much to suggest that aerosols could have had a meaningful impact on the current “slowdown”/”hiatus”/”pause”.
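    The tradeoff described above is just the radiative-response arithmetic; a one-line sketch using the simplified numbers from the comment:

    ```python
    # Extra surface warming implies a larger radiative response and hence a
    # smaller TOA imbalance, for a given response coefficient.
    radiative_response = 2.0  # W/m^2 per K, the simplified value used above
    extra_warming = 0.1       # K of additional surface warming over the last decade-plus

    print(f"TOA imbalance reduced by ~{radiative_response * extra_warming:.1f} W/m^2")  # ~0.2
    ```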

  26. Hi Fred,

    Tropospheric warming is strongly determined by surface warming, and so saying that tropospheric warming was overestimated is to a large extent the same as saying surface warming was overestimated. I think you’re right that the models have tended to overestimate ocean heat gain. My point, though, was that even with the observed ocean heat gain, the surface temperature would have risen more than observed if the vertical distribution of ocean heat uptake had not changed significantly from previous decades.

    I think your point is basically correct. Troy’s qualification above is also important.
    If we look at the temperature variation in history we see a 50-70 year cycle in surface temperatures with strong evidence from long-running instrumental records and from even longer proxy records that it has been around for a very long time.
    The GCMs cannot model these cycles. They are effectively treated as stochastic aberrations. Under this assumption, the temperature gain in the late 20th century is matched to some approximation, but the ocean heat uptake in the models is seen to be high using the labs’ own best estimates of forcing series.

    Similarly, if an energy balance approach is applied to the data post-1955, numerous authors have found TCR values around 1.3 to 1.6 deg C and ECS most likely values (under assumption of constant linear feedback) in the range 1.6 to 2.0 deg C. From memory, I think that the CMIP5 models have a TCR range from 1.6 to 2.5 and ECS values from 2.3 to 5 with median around 3.4 deg C. This suggests prima facie that (still using the same assumption that the multidecadal cycles can be ignored) the models are running hot. An adjustment or prescription of their respective ocean heat uptakes must forcibly bring down their estimates of sensitivity.

    Moreover when the cycles are properly accounted for – and they do influence apparent ocean heat uptake as well as surface temperature – the apparent sensitivity of the system is further reduced and the AOGCMs look even worse!

  27. DeWitt,

    In your plot, what was the source of the data you used for the steric sea level? Is it determined independently of the change in ocean temperature or from the change in OHC?

    Thanks.

  28. Paul_K,

    From memory, I think that the CMIP5 models have a TCR range from 1.6 to 2.5 and ECS values from 2.3 to 5 with median around 3.4 deg C.

    I think that is about right. But I think even some people involved in modeling are starting, albeit with a dozen different qualifications, to admit that those numbers may not be right. IIRC, Isaac Held has hinted at a TCR of about 1.3-1.4C, in line with measurement based estimates.
    .
    Unless the true aerosol influence is much higher than the best estimates (AR5), or the ARGO OHC measurements are way off, it is obvious the models have just been ‘tuned’, probably with strongly positive cloud feedback and exaggerated ocean heat uptake, to be much more sensitive than reality. I can understand how that could happen: the models were being developed during an apparent upswing in the 50-70 year oscillation, so the rapidly rising temperatures gave some support to high sensitivity. The assumed high aerosol offsets post WWII (from high sulfate emissions in a rapidly growing world economy) were then used to explain away the downward part of the 50-70 year cycle, and presto: a clear (but biased) confirmation of high climate sensitivity.
    .
    Fair enough, I expect bias from a bunch of very idealistic people convinced they were saving the world from evil humanity.
    .
    What I don’t expect, and what I think reflects very poorly on the entire field, is a reluctance to use improved data like ARGO based OHC measurements, lower aerosol estimates from improved measurements, and a recognition of substantial 50-70 year pseudo-cyclical variation in both instrumental and proxy records, to demand that the damned models be revised to bring them in line with reality. Modelers ought to be under a great deal of pressure from others in the field to make those modifications, but nowhere is there evidence of that. I mean really, how can modelers continue with a straight face to produce projections under different emissions ‘scenarios’ out over a hundred years or more when it is already obvious the modeled OHC uptake is too high, high aerosol offsets are nothing but a grotesque kludge, and virtually every run of every model is far above reality for the last 20 years? How can those involved with writing AR5 WGI continue in good faith to obfuscate the divergence of model projections from reality, and insist that future warming will be extreme? I can only conclude that those in the field are so motivated by their desire to ‘save the world from evil humanity’ that they are willing to sacrifice good science to try to achieve that end.
    .
    They will of course fail, since reality is quite insistent on having its own way, and poor people are quite insistent on becoming less poor through economic growth. I predict that history will not treat climate science kindly when it becomes clear how much delay the field has caused in economic growth… growth needed to reduce poverty and the human suffering that inevitably accompanies it.

  29. SteveF/Paul_K,
    I’m sure you’ve seen it, but there was Otto et al. (2013) which presented a TCR of 1.3C with a 5-95% range of 0.9-2C using the most recent decade’s observations (which they also consider to be the most accurate). The TCR for data from the entire 1970-2009 period is 0.7-2.5C.
    BTW, they deduce an ECS of 1.2-3.9C compared to CMIP5 ECS range of 2.2-4.7C.
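    For reference, the energy-budget relations behind estimates of this kind are short enough to write down. A sketch with illustrative inputs (the dT, dF and dQ values below are placeholders chosen to land in the quoted ballpark, not Otto et al.’s actual numbers):

    ```python
    # Energy-budget relations of the kind used in Otto et al. (2013):
    #   TCR ~ F_2x * dT / dF          (transient response)
    #   ECS ~ F_2x * dT / (dF - dQ)   (effective equilibrium sensitivity)
    # dT: temperature change, dF: forcing change, dQ: change in the rate of
    # system heat uptake (mostly ocean), all relative to a reference period.
    F_2x = 3.44  # W/m^2 per CO2 doubling (roughly 3.4-3.7 depending on source)
    dT = 0.75    # K, illustrative
    dF = 1.95    # W/m^2, illustrative
    dQ = 0.65    # W/m^2, illustrative

    TCR = F_2x * dT / dF
    ECS = F_2x * dT / (dF - dQ)
    print(f"TCR ~ {TCR:.2f} K, effective ECS ~ {ECS:.2f} K")  # ~1.3 K and ~2.0 K
    ```

    The aerosol issue enters through dF: a more negative aerosol forcing means a smaller dF and therefore larger TCR and ECS estimates.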

  30. Forster et al. looked at 23 models of CMIP5.
    The TCR range of those models is 1.1 to 2.5 K, with a mean of 1.82 K and median of 1.80 K.
    ECS range is 2.1 to 4.7 K, with a mean of 3.22 K and median of 2.89 K.

    While the ranges overlap those of Otto et al., the models run about 40%-50% hotter. The upper end of models is about twice as sensitive.

  31. Dumb question: is it possible to download CW13 monthly anomalies over just the area of satellite coverage? I have something I want to try.

  32. Troy and Paul K – Thanks for the interesting perspective on the mismatch between models and observations. It wasn’t my intention to make the models “look good”, but rather to suggest that the duration of future mismatches in modeled vs observed temperature trends is likely to depend on why they have been mismatched for the past 15 years. If a progressive shift to greater depths of the distribution of ocean heat gain has contributed significantly to the discordance, I expect that this process will inevitably be self limited, and the shallower oceans (and hence the surface) will start to catch up perhaps within a relatively short time. I’ll go out on a limb and guess no more than five years for a significant temperature uptick, with ten years being an extreme limit. That has the virtue of at least being testable (with a little patience).

    Regarding aerosol indirect forcing, here is the Carslaw et al excerpt:
    “Figure 1 shows the annual mean first indirect radiative forcing and the associated 1σ uncertainty when assuming the 1750 reference state. The global annual mean indirect forcing is −1.16 W m⁻² (σ = 0.22 W m⁻², 95% confidence interval −0.7 W m⁻² to −1.6 W m⁻²), compared to the multi-model range reported in ref. 2 of −0.4 W m⁻² to −1.8 W m⁻² (best estimate, −0.7 W m⁻²) and an estimate (−0.6 ± 0.4 W m⁻²) based on assimilated PD aerosol optical depth.”

    Paul – I will also be eager to see your analysis of the imperfect relationship between OHC gain and planetary energy imbalance.

  33. Fred Moolten,
    ” I’ll go out on a limb and guess no more than five years for a significant temperature uptick, with ten years being an extreme limit. ”
    .
    I suggest you choose a very low limb, because the pseudo-cyclical behavior of the past century suggests to me an ‘uptick’ is unlikely any time in the next decade. 😉 Will there be slow warming? Sure there may be, but well below model projections of ~0.25C per decade. 0.1C per decade, give or take a bit, seems a reasonable estimate. My honest-best-guess is about 0.09C per decade for the next 15 years or so.
    .
    Suppose that there were no significant uptick for 5 to 10 years, and warming over the next decade were near 0.1C or even less. Would you then accept that the models are simply too high in diagnosed sensitivity? If not, what data would it take for you to be dissuaded from your (apparent) belief that the modeled climate sensitivity is correct? If the current divergence is not enough, then I really don’t know what it would take.

  34. Re: SteveF

    Aside from all of the packaging within that comment, from the above Forster et al. link:
    Given the present day large model spread and the finding that climate sensitivity has little effect on model spread, there is no indication of any tendency by modelling groups to adjust their models in order to produce observed global mean temperature trends.

    Re: Troy: “I have not seen much to suggest that aerosols could have had a meaningful impact on the current ‘slowdown’”

    I don’t think that is the suggestion being made; i.e., the possibility is a hiatus due to ocean heat uptake first, with aerosol accuracy perhaps affecting the energy budget discrepancy next.

  35. Steve – I haven’t tried to quantify climate sensitivity. We do disagree somewhat on the pace of future warming, so we’ll have to see which estimate is more accurate.

    You mention “pseudo-cyclical behavior”. Are you postulating the existence of internal climate cycles that cause surface temperature to oscillate substantially over intervals of a decade or more while ocean heat uptake remains fairly constant at about 5 x 10^22 joules/decade? How do you perceive those to operate?
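    For scale, that ~5 x 10^22 joules per decade converts to a fairly small global-mean flux; a back-of-envelope conversion:

    ```python
    # Convert an ocean heat uptake rate into an equivalent global-mean flux.
    ohc_per_decade = 5.0e22             # J/decade, the figure quoted above
    seconds_per_decade = 10 * 365.25 * 86400
    earth_area = 5.1e14                 # m^2, total surface area of the Earth

    flux = ohc_per_decade / (seconds_per_decade * earth_area)
    print(f"about {flux:.2f} W/m^2 averaged over the Earth's surface")  # ~0.31 W/m^2
    ```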

  36. RB,
    “There is no indication of any tendency by modelling groups to adjust their models in order to produce observed global mean temperature trends.”
    .
    Humm.. Odd then that a plot of diagnosed climate sensitivity versus assumed aerosol offset shows a strong inverse relationship. More sensitive models assume higher aerosol offset, less sensitive models assume lower aerosol offsets. Of course the models are tuned…. how explicitly/consciously they are tuned and exactly how they are tuned may vary, but tuned they are.

  37. The final version of Forster et al. abstract reads:
    Given the large present-day model spread, there is no indication of any tendency by modelling groups to adjust their aerosol forcing in order to produce observed trends. Instead, some CMIP5 models have a relatively large positive forcing and overestimate the observed temperature change

    Within the draft:
    In contrast to the IPCC estimate, where the spread was principally attributed to aerosols, the spread found here comes from both non-greenhouse gas forcing agents and differences in the rapid response of cloud to greenhouse gases.

  38. Clouds should respond to greenhouse gases, unless they mean water vapor. If I were a reviewer, I would have suggested that be changed to “response of cloud to warming caused by greenhouse gases.”

  39. SteveF, Yes, anyone who has any numerical experience knows that the models are tuned to match observations. I would fire the modelers if they didn’t do that.

    I think you and I may disagree a little about whether they can be “fixed.” If you mean they can be tuned to reflect recent observations, they probably can. If you mean can their predictive skill be increased, I’m not so sure. Turbulence modeling for example has seen intense work for over 50 years but in the last 20 years, not much has changed. The result of this is a thicket of models and tunings that provide practitioners a “dial an answer” capability. As one modeler said earlier this year at a NASA workshop, models are postdictive and not predictive. It’s not clear to me that this situation can be improved using current methods or theories.

  40. Fred Moolten

    Are you postulating the existence of internal climate cycles that cause surface temperature to oscillate substantially over intervals of a decade or more while ocean heat uptake remains fairly constant at about 5 x 10^22 joules/decade? How do you perceive those to operate?

    .
    I don’t think it takes a lot of imagination to conclude there is a real possibility of cyclical behavior over the instrumental record. (http://www.woodfortrees.org/plot/hadcrut4gl/plot/esrl-amo) There are also paleo records which indicate similar temperature swings over periods much longer than the instrumental record.
    As to what could cause that behavior, there have been a couple of different explanations offered. But surface temperature is controlled mainly by the current energy balance; there does not have to be an increase in ocean heat uptake to account for a slowing in the rate of warming. For example, there is a clear relationship between ENSO and global average temperature (even though most of the ENSO influence is between 30S and 30N), yet there is no clear relationship that I can see between ocean heat uptake and ENSO. The 0-700 Meter ARGO data (http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/) shows no correlation with ENSO. Changes in cloud cover/type and/or changes in ocean circulation rate could change the surface balance without necessarily changing OHC very much.
    .
    But honestly, I am puzzled: If you have not tried to quantify climate sensitivity, then why do you think there will be an ‘uptick’ in the rate of warming within the next five or, at most, ten years? If you believe there will be rapid future warming (and your comments here and elsewhere consistently indicate you do) then on what basis do you believe that if you have no opinion on climate sensitivity? I may not be right in my belief that climate sensitivity is much lower than the GCM’s diagnose, but I am at least consistent… I think future warming will be lower than the model projections because I think the models are just plain wrong on amplification. Absent any opinion on climate sensitivity to GHG forcing, how do you rationally conclude future warming will be rapid?

  41. David Young (Comment #121513),
    By ‘tuning’, I mean making adjustments in the parameters which are not part of the ‘basic physics’ (like the cloud amplification influence) and modifying the ocean part of the model so that it doesn’t grossly overestimate the uptake rate. Once that is done, they will need to drop the assumed high present and historical aerosol effects in order to reasonably match the historical temperature record. The diagnosed sensitivity and rates of future warming would then be much lower. They would still be a very long way from making accurate predictions; they can’t simulate the 50-70 year cyclical behavior, regional predictions are poor, etc. But they would at least stop being an embarrassment. Modeling any complicated system is difficult, but failing to adjust the model to conform with the best available data is just stooopid.

  42. Paul_K (#121498)
    Sorry about the busted link. I usually test them after submitting, since one can correct typos here. Obviously I slipped up on that one. Correct link is this.

    Anyway, your link goes to an earlier version of the paper, which includes only 19 models (and has slightly higher means for sensitivity).

  43. My best estimate of the short term (decadal) trend is always just the trend in the last ~thirty years. If cyclical behavior is real and goes into a more cooling phase, I’ll probably be too high; if high sensitivity is right, probably way too low.

    For the lower atmosphere, I’d guess the next decade will be about 0.14 ± 0.02 degrees warmer than the previous one. I believe that is a reasonable best estimate, and is consistent with a pretty low sensitivity but definitely not with a very sensitive climate or with the forecasts of the IPCC.

  44. RB:

    Given the large present-day model spread, there is no indication of any tendency by modelling groups to adjust their aerosol forcing in order to produce observed trends. Instead, some CMIP5 models have a relatively large positive forcing and overestimate the observed temperature change

    The fact the models are running warm certainly does not have to imply no tuning is involved in the aerosol sector.

    You can “get away” with a larger positive feedback if you have more aerosol forcings. However, I think there’s a separate tendency to run the models warm compared to observation (there seems to be internal pressure to do this) which explains this latter comment of Forster.

  45. The Forster et al paper (link above) shows that there are several of the better known models which (according to Forster et al) have been substantially reduced in sensitivity, for example:
    .
    Model/ECS/TCR
    GFDL-ESM2G /2.39/1.10
    GFDL-ESM2M /2.44/1.30
    GISS-E2-H /2.31/1.70
    GISS-E2-R /2.11/1.50
    .
    IIRC, these models had higher ECS and TCR values in the past. I specifically recall Gavin commenting a few years ago at RealClimate that E2-R had a diagnosed sensitivity of ~2.8C per doubling. If Forster et al are right (and assuming my memory continues to serve about past modeled ECS’s), that is a big change for GISS. Heck, GISS seems to be becoming a hotbed of ‘denialist-lukewarmers’. Go Gavin… only 0.3C per doubling from reasonable!
    .
    Of course there are lots of models that continue to diagnose very high sensitivity (up to 4.67/2.2 ECS/TCR). If the ‘modeling community’ would just throw out all the models above 3.5 ECS that would be progress; still too sensitive on average, but progress nonetheless. Will they do that? Probably not. The really big ECS numbers boost the model mean and add that invaluable terror factor (eg. “much of Earth’s surface may become uninhabitable”, “billions may die from starvation due to heat related crop failures”, etc.) in communicating warming projections to the press and public.

  46. Andrews and Allen reported in 2007 in their survey of CMIP3 models, that GISS-EH and GISS-ER had ECS values of 3.04 and 2.57 K respectively. [Compare to the current GISS-E2-H and GISS-E2-R, at 2.31 and 2.11.] The older TCR values were 1.6 and 1.5 K, now 1.7 and 1.5 resp. So some reduction in ECS, but not in TCR.
    .
    For GFDL, Andrews and Allen list a “GFDL-CM2.0” and a “GFDL-CM2.1” but nothing which directly matches “GFDL-ESM2G” or “GFDL-ESM2M”.

  47. HaroldW,
    Thanks for that Andrews and Allen data.
    .
    BTW, I forgot to add… billions will die (for certain!) over the next 75 years…. of natural causes.

  48. Based on this comment at James Annan’s blog, the Otto et al. paper might have under-estimated aerosol contribution. The claim is that this will raise their linear feedback ECS (“Charney sensitivity”) to 2.2C from 2C as calculated in the paper.

  49. RB,

    The point is that high sensitivity values (like the AR4 canonical value of 3.2C per doubling) are becoming ever less credible. Whether the ECS is 1.5C or 2.2C per doubling, that is much different from 3.2C per doubling. The extreme knock-on consequences (melting Greenland to species extinction to wholesale human death and even the collapse of civilization) fade from the range of plausible at lower (and more realistic) warming rates. Should people conserve fossil fuels and do their best to find alternatives (like nuclear)? Of course, and they will in response to higher fossil fuel prices driven by scarcity. Should we set artificial prices on fossil fuels, which will condemn hundreds of millions or more to continued poverty? Of course not; that would be both foolish and immoral.

  50. “The extreme knock-on consequences (melting Greenland to species extinction to wholesale human death and even the collapse of civilization) fade from the range of plausible at lower (and realistic) warming rates.”

    SteveF, I will have to respectfully disagree that acknowledging a lower sensitivity value for GHGs and a lower ultimate global temperature is going to placate those who really want the government fully involved in these matters. The newer mantra that I hear is that it is the rate of warming and not the ultimate warming that is critical. Living organisms, including humans, will not be able to adapt/evolve to rapid change, no matter that the change over time might be less than once predicted. I think the climate scientists are clued into this thinking and will use it for funding projects to show the current rate of warming is faster than any other since – you fill in the blank.

  51. Steve,
    I believe Held is saying that discussions about ECS may not be too relevant from a policy perspective; it is TCR we should be concerned about, which is also more quantifiable from observations in any case. Regarding policy, my belief on record here sometime in the past is that policy measures world-wide are likely to be mostly cosmetic and shuffling numbers around for decades to come.

    But coming back to the ECS issue, it does seem like aerosol uncertainty influences various observation-based calculations from Otto, Nic Lewis etc. e.g. other comments at Annan here and here .

    Based on all of the recent observation-based papers I’ve seen, ECS could be in the lower IPCC range. It is a weak opinion weakly held though (like the stock tips I pick up on the internet :)).

  52. Kenneth,
    What I’ve heard is that people will adapt to climate change but not to policy change 🙂

  53. RB,
    “policy measures world-wide are likely to be mostly cosmetic and shuffling numbers around for decades to come.”
    .
    Absolutely and without doubt. Nobody is going to accept poverty in the name of reduced global warming. Almost nobody is going to throw huge quantities of capital (monetary and human) at essentially speculative threats. Mr. Obama will be gone in three years, about the same time as Germany’s Angela M exits stage left; it seems to me likely that cooler heads will prevail once that dynamic duo is gone.
    .
    Kenneth,
    “I will have to respectfully disagree that acknowledging a lower sensitivity value for GHGs and a lower ultimate global temperature is going to placate those who really want the government fully involved in these matters. ”
    Nothing will placate the neo-Malthusians, short of forced rapidly reduced human population, huge reductions in global wealth, and continued extreme poverty for a couple of billion. But really, who cares about such loons? The truth is that globally, extreme poverty is falling, and is likely to continue to do so, in spite of the best Malthusian efforts to ensure the poorest remain poor forever (Michael Tobis, where are you when we need an easy demon? 😮 ).
    .
    The truth is that global CO2 emissions are rising, and will continue to do so for at least a couple of decades. China and India (not to mention sub-Sahara Africa) have a long way to go before they enter the rich world, but that entry seems inevitable over the next 50 years. Reality trumps neo-Malthusian ‘climate science’ every time. Leftist nutcake policy prescriptions (the Joe Romm type) are already dark toast (OK, they were pretty stale bread to start with), because they fail the test of congruence with reality… human and physical.

  54. This seems to be another reason why TCR might be more important than ECS; i.e., while the ocean equilibrates, there is also a slow decay of CO2 levels (seemingly due to similar processes in the ocean).

  55. SteveF, those who got us to spend on the order of a trillion dollars on wars in Iraq and Afghanistan and brought us Obamacare were certainly not considered kooks. Those wars were never advertised as killing and maiming the numbers that they did or costing what they did. Obamacare was obviously never advertised as a redistribution of wealth from the young and healthy to the not so young and not so healthy.

    Government attempts at mitigation of AGW will be advertised as actions that will save money even in the short run and in answer to some immediate climate problem – real or imagined.

    Oh my, how good it would be to be able to identify these potential failures of government because they were backed primarily by certifiable kooks and crazies.

    Pardon my cynicism and ignoring the glories of democracy promised and idealized in Civics 101, but I worry about that little old lady from the League of Women Voters who gets elected to my city council.

  56. Kenneth Fritsch (Comment #121560),
    .
    Reverend, you are preaching, if not to the choir, at least to one of the deacons. Of course many actions by government have bad unintended consequences. Heck, most everyone’s actions have bad unintended consequences. It is only a matter of scale. But as you compare the costs of really bad choices made by everyone, from your next door neighbor to heads of state, keep in mind: wars eventually end… but programs of wealth redistribution never die. Honestly, did anyone really imagine that you can give generous health insurance benefits (in the world’s highest cost health care market!) to 20 million people without someone paying a lot more? I sure didn’t. I am not certain, but I rather suspect a common thread… health care, climate change, education, well, just about everything… is better done by the federal government, right?

  57. Kenneth Fritsch (Comment #121555)-The irony being that reducing the sensitivity reduces the rate of warming, too, at least up to the point where the effect of a shorter response time becomes dominant.

    RB (Comment #121557)-I invite you to try adapting to the IRS sometime. Unless you have an army of lobbyists to get you carve outs, “adapting” is going to get you thrown in a maximum security prison.

    So the irony would probably be that those big evil corporations would “adapt” to policy change-in the sense that they alter the policy to negate its effects on them. It’s everyone else that would get screwed.

    RB (Comment #121562)-Heh, one wonders if the pun was intended.

  58. RB (Comment #121569)-Odd, your linked article makes no mention of taxes having anything to do with that trend whatsoever.

    But what I meant was pretty clear. Try “adapting” to the IRS by not paying your taxes. Go on, I’ll wait.

  59. Hi Andrew,
    Of course there are multiple reasons for that including lower trading costs etc. But taxes do play a role in investment decisions for individuals and small businesses. Another example would be the home mortgage deduction – I guarantee that people would be paying less for a house if interest were no longer deductible.

    I think I was pretty clear on the adaptation contrast too 🙂

  60. RB (Comment #121572)-Obviously people change their behavior due to tax incentives/disincentives. People responding to incentives is the whole point of policy trying to incentivize them to emit less. That’s not adaptation to policy, that’s the policy working as intended. The problem (what you interpret as “people won’t adapt to policy”) is that the allocation of capital is changed in such a way as to always be less efficient. The only way to “adapt” to that is to outright defy the policy, and, good luck, they have guns.

    And this is what you have heard; this is why these policies have negative consequences far worse than the problem they are trying to solve. You interpret this as people just sitting by and letting the policy harm them. On the contrary.

  61. GCM output is very uncertain because of aerosol uncertainty. Aerosol forcing is uncertain because of GCM modeling uncertainty. And so, like for models, aerosols are the monkey wrench for a wide range of “observation” based sensitivity calculations. At least, that’s my understanding of the situation.

    Because of the inherent complexity of the aerosol indirect effect, GCM studies dealing with its quantification necessarily include an important level of simplification. While this represents a legitimate approach, it should be clear that the GCM estimates of the aerosol indirect effect are very uncertain.

    http://www.ipcc.ch/ipccreports/tar/wg1/238.htm

  62. DeWitt,
    Except for the Chinese model, all of the models in this study (Table 1) include indirect effects. There’s the Carslaw paper linked to by Fred Moolten above as well. I remember reading an old reference where the indirect effect was estimated to be 0 to -1.7W/m^2, so Pielke Sr’s estimate is also in the range, I suppose.

  63. RB (Comment #121579),
    Sure, there is uncertainty in aerosol influence. The indirect effects are particularly uncertain, because they are the least measured and the least constrained. My personal guess (based on a lot of experience with Ostwald ripening) is that the influence will turn out to be quite small.
    .
    The rate of change in cloud droplet size distribution is very high when the droplet size is smallest, which is where there is a credible influence of number of available nuclei. Once the droplet distribution migrates to a larger average droplet size, the rate of change in the size distribution is much slower… and by then the number of initial nuclei becomes essentially irrelevant. The ‘drift rate’ of a fine initial droplet distribution towards a larger size range (say, where rain can form) is inversely proportional to the average droplet size, so even an initially very small droplet size distribution rapidly changes to a much coarser, and slower changing, distribution… and it is the slower part of the ripening process which mainly controls the cloud lifetime. I really do not expect much influence of the initial number of nuclei on the cloud lifetime.
    .
    For the same reason, I don’t expect much influence of number of nuclei on overall cloud albedo. The finer initial droplet distribution, due to more available nuclei from sulfate particles (yielding higher albedo), is just too thermodynamically unstable and so too short lived to have very much net influence. Of course, there has to be some effect, but I would be very cautious about accepting large sulfate influence on cloud albedo.
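    .
    For concreteness, here is a toy Python integration of the scaling I describe above (my own sketch, not a cloud microphysics model): the coarsening rate is taken to be inversely proportional to the mean droplet size, and the rate constant and the two initial sizes standing in for “many nuclei” and “few nuclei” clouds are arbitrary.

    import numpy as np

    k = 1.0                      # arbitrary coarsening-rate constant
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)

    def coarsen(r0):
        """Mean droplet radius vs time under dr/dt = k / r (Euler integration)."""
        r = np.empty_like(t)
        r[0] = r0
        for i in range(1, len(t)):
            r[i] = r[i - 1] + dt * k / r[i - 1]
        return r

    many_nuclei = coarsen(r0=0.2)   # finer initial distribution (more nuclei)
    few_nuclei = coarsen(r0=1.0)    # coarser initial distribution (fewer nuclei)

    # The finer distribution coarsens fastest early on, so the two curves
    # converge: the memory of the initial nucleus count fades as r grows.
    for time in (0.5, 2.0, 9.99):
        i = int(round(time / dt))
        print(f"t = {t[i]:5.2f}   many-nuclei r = {many_nuclei[i]:.2f}   few-nuclei r = {few_nuclei[i]:.2f}")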

  64. RB,

    Maybe cosmic rays have a role, but there already is an enormous abundance of natural nuclei… mainly from oxidation by ozone of dimethyl sulfide from the oceans to sulfates. There are many more (orders of magnitude) nuclei available for forming cloud droplets than droplet numbers actually observed in clouds. You need only look at “instantaneous” cloud formation from air flowing over an airplane wing at low altitude, modest speed, and high relative humidity to see how quickly and easily cloud droplets form due to adiabatic cooling. The point I was trying to make is that cloud nuclei are naturally very, very abundant, and adding more doesn’t seem to me likely to have too great an effect on the properties of clouds… albedo or lifetime.

  65. Re: Andrew_FL (Nov 26 16:18),

    The closest I could find in a quick search is this :

    Observational inferences on indirect radiative forcing do not support the large values of forcings being applied in models. I would recommend model assessments be done with/without IRF [indirect radiative forcing]

  66. Steve,
    Obviously I have much less certainty than the practitioners. But let me close with one last reference to a paper that suggests a correlation between models that have the indirect effect and observations of precipitation changes during the mid twentieth century hiatus.

    Analysis of single forcing runs from CMIP5 (the fifth Coupled Model Intercomparison Project) simulations shows that the mid-twentieth century temperature hiatus, and the coincident decrease in precipitation, is likely to have been influenced strongly by anthropogenic aerosol forcing. Models that include a representation of the indirect effect of aerosol better reproduce inter-decadal variability in historical global-mean near-surface temperatures, particularly the cooling in the 1950s and 1960s, compared to models with representation of the aerosol direct effect only. Models with the indirect effect also show a more pronounced decrease in precipitation during this period, which is in better agreement with observations, and greater inter-decadal variability in the inter-hemispheric temperature difference. This study demonstrates the importance of representing aerosols, and their indirect effects, in general circulation models, and suggests that inter-model diversity in aerosol burden and representation of aerosol–cloud interaction can produce substantial variation in simulations of climate variability on multi-decadal timescales.

  67. DeWitt Payne (Comment #121594)-Hm, interesting, at any rate.

    RB (Comment #121605)-“better agreement with observations” on precipitation is not that impressive, given how uncertain precipitation estimates over the period in question are. And it’s not like temperature, where we don’t need to know what’s going on everywhere we aren’t measuring: the lack of over-ocean measurements and significant gaps over land preclude closing the precipitation budget, so comparison with models on a global scale is impossible.

  68. RB – The current issue of PNAS has a paper analyzing the mechanisms and strength of aerosol indirect forcing – http://www.pnas.org/content/110/48/E4581.full. The microphysical effects of aerosols on droplet size appear to be a dominant influence. The paper suggests that IPCC estimates of the indirect effect have underestimated its strength:

    “Because current weather and climate models do not yet include cloud microphysics and aerosol–cloud interactions in deep convection parameterizations, the warming at the surface could be overestimated by current models of the Intergovernmental Panel on Climate Change, considering the strong net cooling (5–8 W⋅m−2 here) at the surface estimated in this study that can be produced by aerosol effects.”

    However, the reported results are for regions sampled because of their aerosol concentrations, and so it’s hard to know how to translate the above conclusion quantitatively into a global average.

  69. RB,
    I have no doubt that large aerosol offsets (both direct and indirect) can be used, and have been used, to make models better match the mid-20th century cooling. I just don’t believe those assumed aerosol offsets are anything close to right. They are nothing but a kludge, tailored to have the models match the temperature observations. Speculation and motivated reasoning, nothing more.

  70. Steve,
    So, would the alternative theory look like this? Hope I managed to get that link right. They curve-fit to 20th century temps with 2.4C sensitivity including the AMO. Aerosol forcing of -1.1W/m^2 is higher than the Otto average of -0.73W/m^2 but is in line with numbers as contested here.

  71. “The microphysical effects of aerosols on droplet size appear to be a dominant influence. ”

    it sounds as if it might be conveying meaning….

  72. RB,

    Yes, that PowerPoint lays out most of the arguments for natural variation rather than varying human generated aerosols causing the observed pseudo-cyclical pattern of the instrumental record.
    .
    Diogenes,
    Most aerosols have a very short lifetime in the atmosphere (days to a week or so). They therefore are mainly a local rather than global forcing. Most man-made aerosols are generated in the Northern Hemisphere. If those aerosols have a strong cooling influence, then it seems rather odd that much more of the warming over the past 40 years has been in the Northern Hemisphere than in the Southern. (http://www.woodfortrees.org/plot/hadcrut4nh/plot/hadcrut4sh)
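    .
    A minimal sketch of the hemispheric comparison behind that woodfortrees link, assuming the HadCRUT4 NH and SH monthly series are stored as two-column (decimal year, anomaly) text files; the file names below are placeholders, and the 40-year window is approximate.

    import numpy as np

    def trend_per_decade(path, start_year, end_year):
        yr, anom = np.loadtxt(path, usecols=(0, 1), unpack=True)
        keep = (yr >= start_year) & (yr < end_year)
        slope = np.polyfit(yr[keep], anom[keep], 1)[0]   # deg C per year
        return 10.0 * slope                              # deg C per decade

    nh = trend_per_decade("hadcrut4_nh_monthly.txt", 1973, 2013)
    sh = trend_per_decade("hadcrut4_sh_monthly.txt", 1973, 2013)
    print(f"NH trend: {nh:+.3f} C/decade   SH trend: {sh:+.3f} C/decade   ratio: {nh/sh:.2f}")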

  73. Fred Moolten,
    The key phrase from the abstract is: “By conducting multiple monthlong cloud-resolving simulations with spectral-bin cloud microphysics…”
    .
    It is a modeling study, and the ‘results’ of the study are only as good as the model. Observation-based results are very uncertain, and even conflicting, as the paper correctly notes. A model is a poor substitute for observations in a case like this.

  74. Steve – You probably should read the paper if you haven’t yet. All estimates of indirect aerosol forcing require modeling (actually, because forcing is not an observable quantity, all estimates of any forcing require modeling, but the role is particularly important for aerosol indirect effects). However, the study uses or references observational data as constraints, including cloud differences between regions with high and low aerosol concentrations.

    Even so, it’s only one study. Aerosol forcing remains one of the more uncertain elements of climate estimates, and it’s risky to try to go beyond estimating a plausible range of values.

  75. Yes, SteveF, this mystery of the hemispheric differences was something that came up a while ago at James’s in discussions with an aerosol modeling specialist, albeit a young one. He presented some detailed data on supposed aerosol forcing geographical distributions and they didn’t match at all the distributions of temperature changes. It’s a little disconcerting.

  76. SteveF,

    ” If those aerosols have a strong cooling influence, then it seems rather odd much more warming over the past 40 years has been in the Northern hemisphere rather than the Southern. ”
    ————————————-
    More than one variable. The biggest is probably the amount of land per hemisphere.

  77. Spatial pattern of warming, hm? Well there is *one* way to explain it, but you won’t like it.

    With regard to the aerosol indirect effect, one area that needs more work is the effect on supercooled clouds. Choi et al suggest this would cause an indirect *warming* effect, and in present models this effect is misparameterized.

  78. SteveF (Comment #121619)
    “If those aerosols have a strong cooling influence, then it seems rather odd much more warming over the past 40 years has been in the Northern hemisphere rather than the Southern. “

    The NH warmed, but the latitude band 45-55°N cooled recently.

  79. Nick Stokes,
    The cooling in the sub-arctic latitude band may have something to do with warming in the Arctic. An increase in exchange of heat between those bands can easily account for both. One way to help clarify would be to look at winter versus summer trends for those latitude regions; there is little aerosol influence on albedo in winter because there is not much solar energy. My guess is that the summer temperature trend remains positive for the sub-arctic, but is negative in winter, the opposite of what is expected for a cloud-albedo-driven cooling.

  80. Nick Stokes,
    I took a look at your link (very nice graphic, by the way), and as I suggested in the above comment, it is wintertime cooling in the sub-polar region, but with continued warming for summer and autumn. That seems to me unlikely to be due to cloud albedo effects from increasing aerosols… Not much sun above 45N in winter.
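    .
    A sketch of the seasonal split suggested above: compute DJF and JJA trends separately for a 45N-55N zonal-mean anomaly series. The three-column (year, month, anomaly) input file is a placeholder for however the band series is actually stored, not any particular product’s format.

    import numpy as np

    def seasonal_trend(years, months, anoms, season_months):
        """OLS trend (C/decade) using only the listed calendar months."""
        keep = np.isin(months, season_months)
        t = years[keep] + (months[keep] - 0.5) / 12.0
        return 10.0 * np.polyfit(t, anoms[keep], 1)[0]

    years, months, anoms = np.loadtxt("band_45N_55N_monthly.txt", unpack=True)

    djf = seasonal_trend(years, months, anoms, (12, 1, 2))   # winter: little sunlight, little albedo leverage
    jja = seasonal_trend(years, months, anoms, (6, 7, 8))    # summer: where a cloud-albedo effect should show up
    print(f"45N-55N trends  DJF: {djf:+.3f} C/decade   JJA: {jja:+.3f} C/decade")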

  81. Owen (Comment #121623),
    “More than one variable. The biggest is probably the amount of land per hemisphere”
    .
    Well maybe. But there is a considerable net flux of heat from south to north due to ocean currents; that could also explain the difference.

  82. Fred Moolten,

    I had already looked at the paper when I commented before. Please look at figure 2S (supplemental information) http://www.pnas.org/content/suppl/2013/11/07/1316830110.DCSupplemental/pnas.201316830SI.pdf#nameddest=SF1
    .
    Note how poorly the model simulates the measured rainfall rates for all three studied regions, for both assumed “polluted” and assumed “clean” air. Seems to me not much of a cloud model. Really Fred, it is just a model study that is poorly constrained and which doesn’t appear to simulate reality very well. AR5 has at least this much right: secondary aerosol effects are by far the least certain of all man-made forcings.

  83. Apparently the above paper was withdrawn due to an error in one of the calculations. I will try to contact the author and find out what the error was.

  84. Nick Stokes, I kinda think a trend from 1997 to the present is not very relevant to the effects of increasing aerosols. Like, at all.

    I believe you will find strong warming of that band in the last ~30-50 years, during which aerosol loading was probably more important than the last ~20 years.

  85. Upon further reading, the fit by Andronova/Schlesinger in (Comment #121618) obtains an ECS of 2.4C (with a residual AMO-related fluctuation) over the 1850-2000 period. The aerosol forcing in their model is similar to the one in the IPCC AR2 with a 1990 aerosol forcing mean of -0.3W/m^2 and indirect of -0.8W/m^2. I did earlier miss this interesting post by Zeke here showing how the residual is pretty much the AMO (whether forced in part or not).
    So, it does seem possible to account for natural variations and come up with any sensitivity number from 1.6C or so depending on the indirect aerosol assumptions.

  86. RB …”(with a residual AMO-related fluctuation)”

    I think the AMO/PDO should be retired as far as “climate” goes. At 55-65 N there is the minimum ocean-to-land ratio, which is a bottleneck for polar ocean heat transport. If you use 30N-60N SST you have an “index” for both AMO and PDO, and when you consider how the land in the 30N-60N band amplifies the impact of the 30N-60N SST you have a better indication of the actual impact of natural variability. If you use just the AMO, which relates to only about 10% of the ocean surface, you underestimate the “global” impact of natural variability by about a factor of two.

    Plus, once you consider the bottleneck, it is easier to understand why AWW is so much greater when there is improved ocean heat transport through the bottleneck, which improves NH radiant heat loss. That heat loss is really kicking butt this Turkey day, btw.

  87. Possibly the detrended AMO fluctuation is a proxy for the multi-decadal variability due to pole-to-pole overturning (whose amplitude could also be partially forced). Using inter-hemispheric temperature trends, Isaac Held makes a good case here, in combination with results from Friedman et al., for why natural variability is likely comparable to aerosol forcing for the recent 1980-2010 period (i.e., a ~ 0.5) but not dominant. This probably underlies Held’s analysis here resulting in TCR estimates of 1.4K and aerosol forcing of -0.7W/m^2 over the 1980-2010 period: numbers that are coincidentally similar to those used in Otto et al.

  88. RB, the variability in the NH pretty much overwhelms the more well mixed SH and does really “explain” the difference. “Invoking” the AMO or PDO without knowing what is actually going on is a bit mystifying to me.

    http://redneckphysics.blogspot.com/2013/11/stop-assuming-so-much.html

    The choke point is a reasonable explanation since both hemispheres would have the same Coriolis effect, just differences in the current patterns due to the asymmetric land distribution.

    So instead of assuming the AMO or the PDO does this or that I think it makes more sense to say why this or that causes the AMO/PDO. Then the actual impact includes the land amplification which explains why the impact can be 0.5C and not limited to an assumed period.

  89. I probably shouldn’t chime in again, because much of the recent discussion is only marginally relevant to the original topic. I agree with SteveF that aerosol indirect effects are highly uncertain. I view the PNAS paper I linked to above more favorably than he does, and I don’t agree that selecting one variable among many subjected to model/observation comparisons is representative. The paper (as well as others I cited) is an example of reports suggesting that the indirect forcing may have been underestimated, but there are other reports in the opposite direction. However, this is something individual readers should judge by visiting the paper themselves rather than listening to what others say.

    Very briefly, on topics well covered in the literature as well as past web discussions: There are reasons for faster NH than SH warming based on land mass that outweigh any difference in aerosol effects; an illustration can be found by noting the higher global mean surface temperature during NH summer than SH summer. Also, internal climate oscillations are known to affect surface temperature, but as discussed extensively by Held and others, ocean heat content increases since the mid 20th century appear to render it implausible for those internal variations to account for more than a small fraction of the observed warming – far less than attributable to anthropogenic GHGs; see, e.g., http://www.gfdl.noaa.gov/blog/isaac-held/2011/08/23/16-heat-uptake-and-internal-variability/ . This is too complicated a topic, however, to be adequately covered by an exchange of blog comments here tangential to the main topic. Those who disagree with the above may have good reasons, but I would recommend that an extensive discussion be reserved for a post that specifically addresses those topics, here or in other forums – Isaac Held’s site I linked to above would be a better place.

  90. Fred Moolten, ” This is too complicated a topic, however, to be adequately covered by an exchange of blog comments here tangential to the main topic. ”

    Not really. It is pretty simple to compare land amplification by latitude to explore the “other” impacts since CO2 is a well mixed gas. Aerosols are seriously complex though.

    http://www.clim-past-discuss.net/9/6179/2013/cpd-9-6179-2013.html

    That paper is still in discussion but their model results indicate that synchronous combinations of “weak” forcing and ocean inertia can be a bit of a challenge.

  91. Fred Moolten (Comment #121678)-“ocean heat content increases since the mid 20th century appear to render it implausible for those internal variations to account for more than a small fraction of the observed warming”

    Hm, haven’t we had this argument before? In which case you are already aware of the counter arguments, and don’t even acknowledge them with a dismissive handwave.

    Which, as I recall, you had on hand at the time. And they didn’t make any more sense then than they do now.

  92. Andrew – I don’t remember having an argument with you on this subject before, but if I did, and dismissed your comments with a handwave, I apologize. It’s true, though, that I’ve seen this point discussed before on many occasions, and counterarguments offered, and that I didn’t find them convincing. However, my depth of knowledge of the subject is much less than Isaac Held’s, and since he’s posted specifically on this point and is generally willing to participate in the exchange of comments, his site would be a good place for further discussions.

  93. Fred Moolten (Comment #121681)-I have tried and failed for a while to find the conversation I’m alluding to. Maybe it wasn’t you. It would have been the same talking points if it was RB or anyone else among the alarmed. No offense, but I was even less impressed by the dismissals than you would have been of the arguments.

    I just wish I could find the argument so others could judge.

  94. Andrew_FL,
    I didn’t have any conversation regarding this with you, and I don’t lose much sleep over this either. My knowledge on these matters is very likely even lower than Fred Moolten’s. His suggestion to discuss this with Held, if you are not too intimidated to do so, is a good one.

  95. You are far too modest, both of you.

    At any rate, I must confess to having little interest in being lectured or talked down to. It’s not very productive.

  96. RB (Comment #121683)

    Actually in my view Held attempts to keep it simple for those who participate at his blog. He makes simplifying assumptions clear most often and if not does so in his replies to posters. His blog would hardly intimidate anyone with a reasonable understanding of what he is doing. Gavin Schmidt can be informative in an understandable manner when it comes to climate modeling, but unlike Held, his advocacy often gets in the way of his communications and admitting to the full uncertainty of these matters.

  97. Kenneth,
    Seems to me Gavin has gotten in with the ‘wrong crowd’. At times he seems quite reasonable, unlike his several wild-eyed co-bloggers. That GISS model ER has… ahem… recently drifted towards much lower ECS values may be a reflection of Gavin’s more realistic take. Of course, there is little chance he would ever publicly distance himself from the nut-cakes. Dyed-in-the-wool advocates don’t do such things.

  98. When Isaac Held brings up first-order lagged (exponential) responses to thermal impulses, I tune out. Why they don’t follow James Hansen’s lead and do the full diffusional response, which is not a damped exponential, I don’t understand. Held’s “simplifying assumptions” are counter-productive in many respects.

    This is just a pet peeve of mine, having worked in the semiconductor industry where characterizing processes is important. Try to model diffusion as a first-order damped exponential process and all the process formulas (oxide growth, dopant concentration, etc.) become useless.

    So what happens with Held is that he tries to dumb down the math and it doesn’t cut it. Ultimately this turns into a chasm between coming up with first-order calculations with perhaps some compartmentalization, and the finite difference slab numerical calculations that are closer to the truth.

    There are ways of using uncertainty quantification to simplify the nasty Error() functions that come out of a diffusional response, and I find this more straightforward to handle.
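
    As a toy illustration of the difference (my sketch, with arbitrary parameters; not Held’s or Hansen’s actual formulation), compare a single-box exponential step response with the surface response of a semi-infinite diffusive half-space with a linear radiative feedback, which involves erfc rather than a damped exponential.

    import numpy as np
    from scipy.special import erfcx   # erfcx(x) = exp(x**2) * erfc(x)

    t = np.linspace(0.01, 200.0, 400)   # years after a step forcing
    tau = 30.0                          # assumed box time constant, years

    box = 1.0 - np.exp(-t / tau)        # first-order lag, normalized so equilibrium = 1

    # Diffusive half-space with linear feedback (classic Carslaw & Jaeger result):
    # T/T_eq = 1 - exp(a^2 t) * erfc(a * sqrt(t)); 'a' is set here only so the two
    # responses have comparable timescales.
    a = 1.0 / np.sqrt(tau)
    diff = 1.0 - erfcx(a * np.sqrt(t))

    # The diffusive response rises quickly at first but approaches equilibrium
    # far more slowly than any single exponential (a long ~1/sqrt(t) tail).
    for yr in (5, 30, 100, 200):
        i = np.argmin(np.abs(t - yr))
        print(f"t = {yr:3d} yr   box: {box[i]:.2f}   diffusive: {diff[i]:.2f}")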

  99. Of relevance to some of the discussion here regarding energy build-up vs. internal variability: Bart poses the same question on three (or perhaps four) separate occasions to Koutsoyiannis. If it was answered, I didn’t see it, although Koutsoyiannis did reply as follows

    I do not avoid answering your questions—or other people’s ones. I am just subject to energy and time constraints. There is a lot of interesting stuff to read and comment on. As you see, I am working to contribute, but I have limitations (and additional duties). Thus I must put priorities. My first priority is that we should understand each other on general principles before we can discuss more specific issues. This is not easy.

  100. Andrew FL, I have compared the Oppo 2009 to a fair number of ocean reconstructions and it holds its own with Ca/Mg and d18O. The foram concentration reconstructions have major issues with current variability, so you need a monster ocean model to sort them out.

    http://redneckphysics.blogspot.com/2013/12/2000-years-of-climate-correlations.html

    This is a bit different way of looking at things so I am trying to use methods for the statistically impaired including myself 🙂

    RB there is without a doubt long term persistence.

  101. Re: WebHubTelescope (Dec 1 13:02),

    The Error function is only nasty if you have to calculate it yourself or worry about CPU cycles. It wasn’t an available function in FORTRAN when I was in graduate school and the algorithm I used required double precision math on a 60 bit word Control Data 6600 to converge.

  102. Carl Mears’ outline of the model-observations discrepancy issue is interesting. While the modeling physics issues are well-known here, his listing of the issues with the forcings used by models is worth highlighting. Those who haven’t read this before should check out the entire comment for themselves, but I’ll take the liberty to lift out a portion of his statement.

    Is there any evidence for incorrect forcings over the past 35 years being used for model input? In fact, there is.
    One example is radiative forcing due to stratospheric sulfate aerosols, which are little droplets of SO2 and water that scatter the incoming light from the sun. It is well accepted that increases in stratospheric aerosols warm the stratosphere and cool the surface and troposphere. Both effects can clearly be seen in the MSU temperature record after the colossal eruptions of El Chichon and Pinatubo. These eruptions spewed large amount of gaseous sulfur into the stratosphere, where it oxidized to form excess levels sulfate aerosols. These events, and others before 2000, are well represented in the stratospheric aerosols datasets used to drive the 20th century simulations for CMIP-5. After 2000, the level of stratospheric aerosols in the input datasets is allowed to decay to zero. In real life, however, observations indicate that the background level of stratospheric aerosols increased over the 2000-2010 period (Solomon et al, 2011), probably due to a large number of small volcanic eruptions (Neely et al, 2013). The effect is large enough to offset about 25% of the effect of increasing CO2 over this period (Solomon et al, 2011).
    Other forcings with possible problems include solar output, stratospheric ozone, and black carbon aerosols. The sun has been in a quiet, low-output phase for longer than expected. This is not included in the CMIP-5 forcings, and thus model results should be expected to be slightly warmer than real life. Temperature changes in the upper troposphere and lower stratosphere have been shown to be very sensitive to the stratospheric ozone concentrations used (Solomon et al, 2012). These effects appear to extend below the tropical tropopause, low enough to affect tropospheric temperature trends and the tropospheric hotspot. The ozone dataset used in the CMIP-5 simulations is the one with the most conservative trends in ozone. If one of the other datasets had been used, the models would have shown less upper tropospheric warming. There are probably other similar problems that I am not aware of.
    None of these effects are large enough to explain the model/measurement discrepancies by themselves, but they are each likely to be part of the cause. The cumulative effect of all has not been evaluated.

    For some reason, updates to model inputs seem to occur way too slowly.

  103. Hi RB,

    A quick couple of thoughts. You might be interested in Ed Hawkins’s post with updated GHG, solar, and volcanic forcings for GISS simulations (which showed virtually no difference from previous simulations), although it is a small sample and Gavin mentioned there are still more updates to come : http://www.climate-lab-book.ac.uk/2013/obs-forcing/

    For recent (post 2000) volcanic eruptions, indications are a rather small effect on surface temperatures. For example, Haywood et al (2013) – http://onlinelibrary.wiley.com/doi/10.1002/asl2.471/abstract – calculate about an 0.02 to 0.03 K drop in temperature, corresponding to a very small trend difference from the 1990 start that Lucia shows in this post (maybe 0.01 K/decade?). It is perhaps even smaller if the actual TCR is less than that of the model used in the study. This is unsurprising given that we’re looking at recent volcanic forcings that are maybe 1/20th the impact of Pinatubo, if that: http://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt

    The solar difference in CMIP5 runs vs actual seems to me quite overblown as well. Rather than the predicted 11-year cycle (starting in 2000) with a minimum ~2006, we got a 13-year cycle with a minimum ~2008. And rather than peaking in 2011 we are looking at a peak this year. The peak minus minimum was about what was predicted (1 W/m^2 of TSI), and I don’t see any indication that the cycle was locked in the “minimum” state…it looks to me like the overall cycle was just elongated: http://lasp.colorado.edu/data/sorce/tsi_data/TSI_TIM_Reconstruction.txt. Maybe for trend comparisons ending in 2008 this might have some impact, but I fail to see how this would cause any sort of substantial discrepancy at this point?
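
    As a rough sanity check on that back-of-envelope trend number (my assumption, not Haywood et al.’s: the 0.03 K cooling ramps in linearly from 2000 through 2012, with a trend window of Jan 1990 through Dec 2012), one can just compute the OLS trend of the ramp itself; it comes out in the same ballpark as the figure above.

    import numpy as np

    t = np.arange(1990.0, 2013.0, 1.0 / 12.0)            # monthly time axis, Jan 1990 - Dec 2012
    cooling = np.where(t < 2000.0, 0.0, -0.03 * (t - 2000.0) / 13.0)

    bias = 10.0 * np.polyfit(t, cooling, 1)[0]           # C/decade contributed by the ramp alone
    print(f"trend bias from the assumed ramp: {bias:+.4f} C/decade")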

  104. Troy_CA,
    My experience is that really violent arm waves can cause shoulder injuries. Lots of people, including Gavin, seem to be endangering their shoulder joints. 😉
    Ex post facto explanations are a lot like weak tea… not worthy of anything more than the drain in the kitchen sink. The currency of science is honest-to-goodness accurate predictions; climate science is woefully short on those…. some might even say ‘bankrupt’.

  105. Yeah, most of the things Mears cites are definitely negligible. I am not sure about Ozone but I’d bet it is negligible, too.

    Also, the cooling effect of El Chichón is hardly what I think any reasonable person would call “clear” in *any* dataset. Like most volcanic eruptions, Pinatubo being the notable exception, it is visually hidden by noise. The stratospheric effect, yeah it’s pretty clear in the MSU data.

    Now, the effect *is* there, you *can* isolate it from the noise:

    http://devoidofnulls.wordpress.com/2013/11/07/can-you-isolate-a-volcanic-temperature-signal-in-the-temperature-data/

    But no, it is not clear by itself.
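
    One common way to do that kind of isolation (a sketch of the general approach, not necessarily the method used at that link): regress the monthly anomaly series on a lagged ENSO index and a lagged stratospheric aerosol optical depth series, plus a linear trend, and look at the fitted volcanic coefficient. The input arrays and lag choices below are placeholders.

    import numpy as np

    def isolate_volcanic(temp, enso, aod, enso_lag=4, aod_lag=6):
        """OLS fit: temp ~ const + trend + lagged ENSO + lagged AOD."""
        n = len(temp)
        lag = max(enso_lag, aod_lag)
        y = temp[lag:]
        X = np.column_stack([
            np.ones(n - lag),                    # intercept
            np.arange(n - lag) / 120.0,          # linear trend, per decade
            enso[lag - enso_lag:n - enso_lag],   # ENSO shifted by enso_lag months
            aod[lag - aod_lag:n - aod_lag],      # AOD shifted by aod_lag months
        ])
        coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coefs   # [intercept, trend C/decade, ENSO coef, volcanic coef (C per unit AOD)]

    # usage sketch: coefs = isolate_volcanic(tlt_anom, enso_index, strat_aod)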

  106. It is worth mentioning that there is fairly strong evidence that TMT trends are biased low due to cloud and precipitation effects over the oceans.

    Weng et al. 2013. Uncertainty of AMSU-A derived temperature trends in relationship with clouds and precipitation over ocean. Climate Dynamics.

    “Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit-A (AMSU-A) observations from a series of National Oceanic and Atmospheric Administration satellites have been extensively utilized for estimating the atmospheric temperature trend. For a given atmospheric temperature condition, the emission and scattering of clouds and precipitation modulate MSU and AMSU-A brightness temperatures. In this study, the effects of the radiation from clouds and precipitation on AMSU-A derived atmospheric temperature trend are assessed using the information from AMSU-A window channels. It is shown that the global mean temperature in the low and middle troposphere has a larger warming rate (about 20–30 % higher) when the cloud-affected radiances are removed from AMSU-A data. It is also shown that the inclusion of cloud-affected radiances in the trend analysis can significantly offset the stratospheric cooling represented by AMSU-A channel 9 over the middle and high latitudes of Northern Hemisphere.”

  107. Robert Way (Comment #121744)- I have no idea what point you are trying to make but I’d hardly call one more specious study trying to increase satellite warming strong evidence of anything, except perhaps the intellectual bankruptcy of climate science.

  108. Andrew_FL,
    You made the claim that the paper I just pointed to was wrong. Could you please provide some more insight into how you came to this conclusion? Perhaps you can relay some specific details from the paper’s analysis that you found unconvincing (aside from the result)?

  109. Robert Way:

    The abstract of the article you posted is interesting. Are you aware of any discussion of its merits by Mears, Wentz, Spencer, Christy or other experts in the satellite arena? I assume it came out after the Climate Dialogue blog exchange which RB references above.

    Mears said something very interesting on that exchange about the possibility that lower troposphere turbulent dynamics may inhibit the formation of the hot spot by limiting the availability of water vapor…I’ll look it up if it becomes more relevant to this discussion.

  110. Andrew_FL (Comment #121747),
    I’ve been distracted (with sewing. See Dec. 5 post).

    But I have to admit that reading your Andrew_FL (Comment #121745), I too would have thought you think the paper Robert Way cited was seriously flawed in some way. You called the study “specious”, and suggested it shows the “intellectual bankruptcy of climate science”. Specious can mean “having a false look of truth”, which would suggest you think the paper is “not true”, which many might interpret as meaning you think it is “wrong”. But evidently, you think Robert Way interpreting it that way means he lacks reading comprehension. So, presumably you don’t think it’s wrong?

    Perhaps you could explain why you think the paper Robert Way mentions does not provide strong evidence that “TMT trends are biased low due to cloud and precipitation effects over the oceans”.

    (I have no dog in this fight by the way. I’d just like to read why you are not convinced by that paper– and I suspect Robert Way may also wish to know why.)

  111. lucia (Comment #121753)-Alright, mea maxima culpa, I should have used “dubious” not specious.

    The thing is, a lot of people in the community devote their entire lives and psychic welfare to proving the satellite data wrong. Over and over again there is some paper that uses some incredibly suspicious or flat-out nonsense technique to claim to “prove” the satellite data have to be way wrong. This strikes me as intellectually bankrupt. The paper in question appears to be the next in a long, long line of such papers. If every one of them was correct, I suspect the total correction would imply an atmospheric amplification rate somewhere on the order of 100X or so.

    But, more specifically, I honestly don’t get how one could arrive at the result they are claiming in the first place. The idea that the stated effects create a long-term trend bias would seem to require some sort of trend in the variables allegedly responsible for the bias. But there really isn’t much evidence for that, certainly none that is reliable, at any rate.

    Okay, here’s an idea, send me the time series they get for TMT and/or TLT. Examining it should allow the possibility of detecting any obvious artifacts.

  112. Lucia,

    On a whim I just emailed you the paper. I can’t remember if you have access and I guess Andrew doesn’t.

  113. Hi Troy,
    Thanks.. upon reading Solomon et al. 2011, it looks like Mears might have misunderstood what was being said there since CMIP5 uses historical forcings until 2005, not 2000. In any case, Nielsen-Gammon has I believe demonstrated that even after accounting for Kosaka/Xie’s study of the tropical Pacific, model trends are still higher than observations. There was a comment there about GFDL’s aerosol assumptions which brings us back to model TCR vs aerosol forcings again I guess.

  114. BillC,
    Thanks! I’m on my way to meet Jim for lunch.
    (Have pdf slicing routine getting correct overlap so seamstress-user can tape bits together…. )

  115. On a quick read of the paper I guess I was conflating two issues: the global trend model vs. obs discrepancy, and the “missing” tropical tropospheric hot spot. Although RB’s comment quotes Mears from the “hot spot” thread on ClimateDialogue, the paper quoted by Robert Way is not about the hot spot. Rather, and interestingly, it actually shows the clear-sky (“corrected”) data producing LESS of a tropical tropospheric hot spot – and a bigger one over the mid-latitude oceans.

  116. BillC (Comment #121762)-See, that seems like an odd and unlikely effect to me.

    If you don’t mind sending me a copy:

    linkcrazy543 @ hotmail.com

    Also, it kind of sounds like the effect is only over the ocean? So one should not expect a trend difference over land?

  117. BillC,
    Yes it is interesting actually what is shown there. I actually care very little whether the data show the “hotspot” or not. I’m simply interested in seeing that we have the best available datasets for analysis and this paper provides some insight that I think is important when discussing the particularities of the discussion over at climate dialogue.

    Andrew_Fl,
    I feel that prior to making some of the statements you have, it would be worthwhile to read the paper. I think the reason a good few people have spent time studying the satellite data is that it is extremely challenging to put together this dataset, and it requires much more expertise than a single person (or group) could provide in terms of making a perfect record. I think the history associated with biases in the UAH product is a clear testament to how difficult it can be to assemble such a record. Do you feel that there are no remaining biases? When one is looking at microwave emission from oxygen molecules in the atmosphere, it is hardly surprising to see that clouds and precipitation can have an effect.

  118. Robert Way (Comment #121771)-Well, I’ve had a look; color me totally unimpressed. First, it appears there is no a priori reason to expect that the effect they are talking about *always* leads to a bias in one direction. Second, they have in fact analyzed data over a pretty short period. Third, it is obvious that the *primary* effect of excluding these radiances is to reduce the noise in the data. As far as I can tell, they have done no significance tests on the trend of the differences, which I am virtually certain would, well, not be significant.

    “I think the history associated with biases in the UAH product are a clear testament to how difficult it can be to assemble such a record.”

    I think this is a clear testament to your ideological bias.

    ” Do you feel that there are no remaining biases?”

    There are never *no* biases, the question is whether any biases actually make any significant difference, and moreover one (or rather an entire intellectually bankrupt community) should not merely look for biases in one direction over and over, they should look for all possible biases. Some of the datasets do in fact have identifiable biases. John Christy has published extensively on this, but I’d imagine you’d discount that out of hand. Because of course, “It would of course, at this and any other time, be very nice to show that UAH is wrong.”

    As for whether I think UAH in particular has any *significant* remaining biases (specifically cool ones?) for the lower troposphere, no, definitely not.

    I would note that one of the authors of this publication is the lead author of the STAR analysis, which has known and easily identifiable *warm* biases-which are actually significant in size!

  119. Robert Way,

    Did you compare the RSS to UAH lower troposphere records for places in Africa where there are inadequate surface stations?

  120. “I think this is a clear testament to your ideological bias.”

    Yes I am clearly biased towards thinking it is hard to put together a satellite record. Bravo – you’ve caught me. How dare I make such outlandish statements 😛

    “(or rather an entire intellectually bankrupt community) ”
    “…but I’d imagine you’d discount that out of hand. Because of course, “It would of course, at this and any other time, be very nice to show that UAH is wrong.”

    I’m not sure who you are talking about but you let me know when you’re ready to rejoin this conversation and have an actual discussion.

  121. Robert Way (Comment #121776)-I’d rather not have a discussion with you, ever. Let me know when you’ve bothered to read the relevant literature.

    Or rather don’t. Because again if I never speak with you again it will be too soon.

  122. SteveF,
    Not as part of the publication we have presented. It is on a to-do list, but there are other things we have looked at with consistent results. At this point we have done a great many tests and all have been strongly consistent. RSS and UAH comparisons are interesting, but I suspect, since the two groups have very consistent patterns of change, that it would not appreciably change things.

  123. I just finished reading the Weng paper and, as I understand their process, they (1) used channels 1 and 2 from the AMSU-A unit to retrieve atmospheric cloud liquid water path information using an algorithm developed by the lead author; (2) then used a model, named CRTM, which was also developed by the lead author and that, as I understand it, provides optical properties for various cloud conditions; and (3) used those properties, under certain assumed standard conditions, to determine how much the brightness of the radiation measured by AMSU-A is decreased/increased under the cloud conditions derived from these assumptions and calculations over the 13 year trend period in question.

    The authors found a significant increase in the cloud-corrected AMSU-A measured trends over the 13 year period. What really bothered me in reading this is that I could not find anywhere in the paper a reference or mention of the possible cause or detection of the changes in cloud conditions that I would have to assume would be significant over that period in order to increase the adjusted trend significantly. I could be missing something on my first read of this paper, but it read like the increased adjusted trends just happened when the adjustment for clouds was made and that the why and how of the changed cloud conditions was not important.

  124. Robert Way,
    That comparison may add some credibility, since there are enough differences between RSS and UAH to introduce some level of uncertainty.
    .
    I think the relative lack of mid-troposphere warming in the satellite record is far from explained, with plenty of data, including balloons, which reinforce a relative lack of mid-tropospheric warming. I would be cautious about concluding that the satellite data is very biased for the mid troposphere (if that is what you think).
    Just like Lucia, I liked the paper’s approach, even considering that there are some uncertainties involved.

  125. Robert Way – To what extent, if any, do you think a tendency for the Arctic Oscillation to decline since the 1990’s (i.e., become less positive/more negative) may have influenced the ratio of Arctic warming to temperature change at lower latitudes? The correlation is far from perfect, but could it have contributed to the disproportionate uptick in Arctic temperature compared with elsewhere?

  126. Fred Moolten (Comment #121785)-You are talking about a pattern the daily variations of which are almost an order of magnitude larger than the variations on an annual average basis. If that’s significantly contributing to any long term trends, I’d be rather surprised.

  127. Fred_Moolten,
    Interesting point. I believe that this is very important in the eastern Canadian Arctic winter season.
    See our FAQ on the website – in particular the comparison of 1998 and 2010. There is some interesting discussion there on the impact of the very negative Arctic Oscillation pattern during the winter of 2010 on global temperatures and, by extension, on coverage bias in Hadley. That pattern produced extreme temperature anomalies in regions with reduced observation coverage during that year, making it appear less warm in Hadley than it actually was.

    [Make sure to click more/less on the right FAQ question]

    http://www-users.york.ac.uk/~kdc3/papers/coverage2013/faq.html

  128. Kenneth Fritsch (Comment #121779)

    By my read of the paper they are finding the increased trend simply over those areas which are “not cloudy” by their algorithm, irrespective of changes in the composition or overall fraction of cloudy areas (i.e. it’s not attributable at least directly to changes in cloudiness).

    I’m going to take down the link, hopefully whoever wants the paper with respect to this discussion has it.

  129. As far as this paper (Weng et al) is concerned, I can think of several hypotheses to test using the full dataset which I suppose could be requested from the authors or maybe is available online somewhere.

    1) Is there a trend in the amount of cloud cover by region? (Well, we know there is from other similar data, but do we find it here?) Do this by taking the annual distribution of “clear sky” pixels (they show 2008 as an example in their Fig. 3) and calculating the trend in the # of clear sky pixels in each region per year; a sketch of this check follows below. My guess – ENSO effects will make the data too noisy to discern much. BUT the same might be said of the information used to draw the conclusions in the paper, judging by the low R^2 values for their trends.

    2) Do any trends in cloud cover affect the overall trends in clear-sky pixels? We do this by taking the trends in subsets of pixels which do not have a trend in the amount of cloudiness, and comparing that to the trend in all clear-sky pixels. My guess – yes, this will affect the trend significantly, but I don’t know in which direction.

    This discussion then gets into the cloud forcing/feedback issue.
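
    Here is a sketch of how check (1) might be coded, assuming one has a per-pixel table of (year, region, clear-sky flag); that layout is my guess, not anything specified in Weng et al.

    import numpy as np
    from collections import defaultdict

    def clear_sky_count_trends(years, regions, is_clear):
        """Return {region: OLS trend in clear-sky pixel count per year}."""
        counts = defaultdict(lambda: defaultdict(int))
        for yr, reg, clear in zip(years, regions, is_clear):
            if clear:
                counts[reg][yr] += 1
        trends = {}
        for reg, by_year in counts.items():
            yrs = np.array(sorted(by_year))
            n = np.array([by_year[y] for y in yrs], dtype=float)
            trends[reg] = np.polyfit(yrs, n, 1)[0]   # pixels per year
        return trends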

  130. After resolving that most of the temperature trend in the Cowtan and Way gridded series was contributed by the latitudinal zone 60N-90N, I was curious about the latitudinal zone warming difference over the two time periods of 1979-1996 and 1997-2012 that were covered in the Cowtan and Way paper.

    I looked at the gridded temperature series from GISS Infilled, Cowtan and Way Hybrid (CWH), UAH and 5 runs from the GISS model, GISS_E2_Hp1_RCP45, from the CMIP5 series. The biggest zonal differences in trends over the two periods were between 90S-60N and 60N-90N, and that is where I concentrated my analysis. These are also the contrasting global areas in what is described in the literature as the Arctic polar amplification.

    For the two time periods, I calculated (1) the area-weighted percentage of the global trend contributed by the 60N-90N zone, (2) the 60N-90N trends, (3) the 90S-60N trends, (4) the global trend and (5) a polar amplification metric calculated by dividing the 60N-90N trend by the 90S-60N trend. The results are listed in the table in the link below.

    In general, all gridded temperature series for both time periods show an Arctic polar amplification, but an amplification that varies from series to series and with the time period used in the analysis. Also, as a general case, the period 1997-2012 shows a much greater amplification than the period 1979-1996 for the three observed series, whereas the GISS model runs show the opposite, with amplification greater in the earlier time period. For the 1997-2012 period, the three observed series show the 60N-90N zone, which is less than 7% of the global area, contributing 40 to 50% of the global warming trend.

    An interesting point in these observations, for anyone attempting to understand a dominant mechanism operating to cause the Arctic polar amplification, is that the amplified warming trends in the 60N-90N zone do not consistently depend on a contemporaneous warming in the 90S-60N zone.

    I am not aware of an acknowledged warming mechanism, taken from climate models, that might account for the Arctic polar amplification. Many mechanisms explaining at least a part of the Arctic amplification have been presented in peer-reviewed papers. For example, we have the observed reduction in summer Arctic sea ice over past decades, which feeds directly into a positive albedo feedback mechanism. Using that mechanism as explanatory requires a further explanation of the warming at lower latitudes being seemingly out of phase with the 60N-90N warming. Perhaps the transport of heat from the lower latitudes put in place sea ice melting and plant growth that decreased the albedo there and became self-sustaining through a positive albedo feedback. The momentum of this process could then sustain an accelerating warming there alongside a contemporaneous decelerating warming at lower latitudes. The critical question going forward then becomes what this mechanism foretells for future warming in the Arctic under two scenarios: one with a continuing pause in lower-latitude warming, and the other with a warming rate comparable to that of the 1979-1996 period.

    If the sea ice albedo feedback is not a major contributor to Arctic polar warming, then some other major and changing conveyor of heat to that zone from the lower latitudes is required, and it must continue to operate even during a period of decelerating (paused) warming in the lower latitudes. Would we expect from this mechanism that the pause in the lower latitudes is compensated by warming in the 60N-90N zone, and that if the conveyor mechanism changes to transporting less heat northward, we would see a decelerating Arctic warming compensated by a renewed warming in the lower latitudes? Obviously the transport conveyor effects must be combined with the albedo feedback and heat-transport-to-space effects, and thus we cannot oversimplify here in determining the compensation of heat from south to north.

    I have vacillated between albedo feedback and a changing heat conveyor as the possible dominant mechanism for the observed Arctic polar amplification and am currently favoring the latter.

    Another interesting sidelight to this exercise is in comparing the zonal trends between climate models and observed temperatures, and that we might better make these comparisons and perhaps gain insights into the differences. Also of interest is that the GISS model runs vary only in the initial conditions used, and yet these runs result in very different 90S-60N and 60N-90N trends. While the amplification for 4 of the model runs is in reasonable agreement for 1979-1996, the fifth run is very different. That the differences might be the result of a prior period of warming or cooling does not appear to be the general case with these model runs.

    http://imageshack.com/a/img9/2762/b9uc.png
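
    For anyone who wants to reproduce this sort of bookkeeping, here is a sketch in Python; the anomaly array layout of (time, lat, lon) and a decimal-year time axis are my assumptions, not a description of the actual scripts used.

    import numpy as np

    def zone_mean(anom, lats, lat_lo, lat_hi):
        """Area-weighted mean anomaly time series for a latitude band."""
        band = (lats >= lat_lo) & (lats < lat_hi)
        w = np.cos(np.deg2rad(lats[band]))
        return np.average(anom[:, band, :].mean(axis=2), axis=1, weights=w)

    def trend(t, series, t0, t1):
        keep = (t >= t0) & (t < t1)
        return 10.0 * np.polyfit(t[keep], series[keep], 1)[0]   # C/decade

    def amplification(anom, lats, t, t0, t1):
        arctic = trend(t, zone_mean(anom, lats, 60, 90), t0, t1)
        rest = trend(t, zone_mean(anom, lats, -90, 60), t0, t1)
        glob = trend(t, zone_mean(anom, lats, -90, 90), t0, t1)
        # fractional area of the 60N-90N cap (~6.7% of the sphere) times its
        # trend, as a share of the global trend
        frac_area = 0.5 * (1.0 - np.sin(np.deg2rad(60)))
        share = frac_area * arctic / glob
        return arctic, rest, arctic / rest, share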

  131. BillC:

    I did not see a direct tabulation of r or R^2 for the trends listed in the Weng paper. I saw the trends and the standard deviations presented in separate tables with the SD value about 1/10 of the trend value.

    I also am having a difficult time getting my head around what is causing the trends to increase by way of the MSU cloud-corrected brightness measurements. I would assume that Weng determined a correction for every 5 degree latitude by 5 degree longitude area of the globe for every hour(?) based on his cloudiness algorithms, the uncorrected brightness measurement, and the measurements from Channels 1 and 2 on the liquid water path. Now the cloud condition for that grid could change over time such that the corrected-to-uncorrected comparison showed a corrected trend increasing or decreasing relative to the uncorrected one. I believe that Weng also confined his corrections to the 60S-60N latitude band, to measurements over the ocean, and to only the AMSU-A unit. For each 5X5 grid over 13 years there are then 13*365.25*24 corrections, or approximately 114,000 corrections. The number of 5X5 grids is 24*72*2/3, or approximately 1150 grids, for an approximate grand total of 130,000,000 corrections.

    Anyway the foregoing would result in 1150 time series based on hourly measurements. I would be interested in the difference series derived from the corrected and uncorrected series, the resulting trend, whether that trend was significantly different from zero, and the R^2 values for those trends. I suspect that one would want to convert the hourly measurements to monthly ones and work with that number of degrees of freedom.
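
    A sketch of that test for a single grid cell, once the corrected-minus-uncorrected difference series has been averaged to monthly values; the AR(1) effective-sample-size adjustment is simply one common choice of significance test, not something taken from the paper.

    import numpy as np

    def trend_with_significance(monthly_diff):
        n = len(monthly_diff)
        t = np.arange(n) / 120.0                      # time in decades
        slope, intercept = np.polyfit(t, monthly_diff, 1)
        resid = monthly_diff - (slope * t + intercept)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
        n_eff = n * (1.0 - r1) / (1.0 + r1)             # effective sample size
        se = np.sqrt(np.sum(resid**2) / (n_eff - 2)) / np.sqrt(np.sum((t - t.mean())**2))
        return slope, se, slope / se   # trend (per decade), std error, t-like ratio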

  132. Interesting work, Kenneth.

    The 60-90N band is a very complex environment with comparatively very little data. I think the range of values you get in the 60-90N trends is suggestive of the degree of uncertainty in what is happening there.

    Even the radiometric satellite data is of very poor quality due to the large look angles. So even if we could work out a plausible model linking satellite radiometric to surface temperature data, uncertainties will have to remain large.

    It is clear that C&W has a substantially larger trend for this region than the other series. Have you done a sensitivity analysis to see how that affects this conclusion? I would be interested in 2000-2009 for example (so it misses the 1998 and 2010-11 ENSO events).

  133. Meanwhile, Antarctic sea ice area has set 93 new daily highs and recorded 126 second-highest and 55 third-highest daily areas (Cryosphere Today) for the 1979-2013 record. The mean rank, ytd, is 2.4. Global sea ice area, as a result, is currently running well above the median, with ranks as high as fourth highest on 11/29. The average rank, ytd, is 16, slightly above the median of 18.

    The logical conclusion, to me anyway, is that net energy is being transported from the SH to the NH and that the effect of this is felt most strongly at latitudes greater than 60N.
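
    A sketch of the ranking bookkeeping behind those numbers; the years-by-day-of-year array of daily areas and the year list are my assumptions about how one might hold the Cryosphere Today data.

    import numpy as np

    def daily_ranks(area, years, target_year):
        """Rank of target_year's daily area for each day of year (1 = highest on record)."""
        row = years.index(target_year)        # 'years' is a plain list of years in row order
        # count how many years exceed the target year's value on each day, plus one
        return (area > area[row]).sum(axis=0) + 1

    # usage sketch:
    # ranks = daily_ranks(area, years, 2013)
    # print("new daily highs:", (ranks == 1).sum(), "  mean rank ytd:", ranks.mean())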

  134. Carrick (Comment #121803)
    December 6th, 2013 at 11:47 am

    “The 60-90N band is a very complex environment with comparatively very little data. I think the range of values you get in the 60-90N trends is suggestive of the degree of uncertainty in what is happening there.”

    The climate models have all the data – no infilling, extrapolation or kriging required. Seriously, though, the data that I used for the observed temperatures definitely show increasing noise in the trends for 60N-90N, but the trends were very significant.

    I would think, with a period like 1997-2012 where we see the 60N-90N zone contributing as much as 50% of the global warming trend, that getting a better accounting of trends up there would be important to the climate science community. In that case I would see Cowtan and Way as a good start. I have wondered whether using the AVHRR measurements, as was done in O’Donnell et al for Antarctica, could be applied to the Arctic. It is similar to Cowtan and Way in that it uses the spatial relationships and not the less reliable temporal correlations.

    I do not consider my conjectures here as anything more than a layperson’s wonderings, and I leave the hypothesizing about these matters to the peer-reviewed literature. I have not yet found any literature that deals with a mechanism for decelerating warming in the lower latitudes with contemporaneous accelerating warming in the polar Arctic – as we have been experiencing over the past 17 years or so. Perhaps the lack of serious theoretical work stems from the lack of reliable temperature data in the 60N-90N zone. I can see a theory developed and reinforced with existing data, and then along comes a Cowtan and Way changing that existing data.

  135. Kenneth, to make it clear, I don’t doubt that there is a real trend; I just suspect the differences between the series are within true uncertainty bounds.

    I was curious what would happen in your analysis, if you shifted your start and stop years for the fit from 1997-2012 to 2000-2009? I don’t have time to play with this right now, or I’d do this myself.

  136. “I was curious what would happen in your analysis, if you shifted your start and stop years for the fit from 1997-2012 to 2000-2009? I don’t have time to play with this right now, or I’d do this myself.”

    I’ll do it after the grandkids go home Sunday.

  137. Re: DeWitt Payne (Comment #121805)

    The logical conclusion, to me anyway, is that net energy is being transported from the SH to the NH and that the effect of this is felt most strongly at latitudes greater than 60N.

    That seems to be consistent with this.

  138. Kenneth Fritsch (Comment #121802)

    I agree that the analysis is confined to 60S to 60N, but I don’t see anything like the number of measurements you state. There aren’t hourly measurements… if you look at their Figs. 3a, 3b and 3c you can see that in 2008, for example, the greatest number of measurements in any 5×5 grid cell with LWP <0.5 was just over 1600 for the whole year. If we did a trend analysis of just one of the approximately 1150 grid cells, we might be looking at 500 cloudy points and 1000 clear points per year. To get the “clear sky trend” we just use the 1000 clear points, or whatever is available in each year. What’s interesting to me is there’s no information in the paper to indicate how the clear sky fraction of the total changes over the years, and I would think that would be an important part of the story.

  139. The UAH NoPol anomaly for November was -0.59C. That’s the lowest it’s been since 2004. Admittedly, it’s just one month. But Arctic sea ice area, extent and volume have recovered a lot this year from the low of 2012.

  140. BillC (Comment #121819)
    December 7th, 2013 at 5:19 am

    “There aren’t hourly measurements…”

    Hourly sampling, even with my question mark, does not make much sense. I know that UAH reports daily readings. I became curious how often the AMSU-A unit samples a given grid and I came up empty in my search. From your observation we could approximate the sampling rate as 1500 per year / 365 days per year, or about 4 times per day, not 24. The total 13-year number of observations then becomes roughly 1150 grids x 1500 per year x 13 years, or about 22,000,000, if one includes the clear sky observations that do not require correction.

    It would appear to me that a proper analysis, given the sampling conditions, requires a trend of the difference series at each grid between the corrected and uncorrected series.

    Also, I did not see any confidence intervals for the trends reported in the Weng paper, or any mention of a need to correct for autocorrelation.

    I continue not to know where you saw or deduced the R^2 values for the trends in the Weng paper.

  141. BillC, it appears that I have missed the import of the clear sky and all weather characterizations in the Weng paper. Are the authors merely comparing the trends of cloudy (all weather?) days and clear sky days, or are they making corrections for the cloudy days and then making a comparison with clear sky days?

    I just assumed that a correct comparison would require comparing the same sampling times and that would require a correction for cloudy days. The model noted in the paper under sensitivity study was capable of making that correction – I thought.

    If the authors, using the limit of greater than 0.01 kg/m^2 for cloudy (all weather), are indeed basing their conclusions about trends for all weather and clear sky on that limit, but those observed times are not coincident, I would think that the authors would be obligated to say something about temperature variation with cloudiness.

  142. Obviously All Weather should be Clear Sky plus Cloudy, and the comparison would be between all days (All Weather) and Cloudy. That does not alter my concerns expressed above.

  143. Kenneth,

    I deduced a “low R^2” from their figure 7.

    I think the comparison is between all weather and cloudy as you say in 121833. I don’t see where they’ve addressed confidence intervals for the trend or any sort of issue related to the temporal or spatial location of the different samples. Just “removed seasonality” was all I saw.

  144. Kenneth,

    AMSU-A is on NOAA-15. According to Wiki, it is on a sun-synchronous polar orbit and completes an orbit every 101 minutes. So it makes 14+ orbits a day. Near the equator at least (and I haven’t done the geometry enough to know how far N and S), that means it would sample only about 25% of the grid cells in any latitude band in a given day.
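
    A crude geometry check (ignoring swath width away from nadir and the narrowing of 5-degree cells toward the poles, so it only gives an upper bound) is consistent with a number of that order:

      # back-of-envelope sampling geometry for a sun-synchronous orbit
      minutes_per_day = 24 * 60
      orbital_period_min = 101                       # NOAA-15, per the Wikipedia figure above
      orbits_per_day = minutes_per_day / orbital_period_min     # ~14.3

      track_spacing_deg = 360 / orbits_per_day       # ascending tracks ~25 deg of longitude apart
      cells_per_band = 72                            # 5-degree cells around a latitude circle

      # each orbit crosses a given latitude band twice (ascending and descending),
      # and a nadir-only footprint touches roughly one cell per crossing
      max_cells_hit = 2 * orbits_per_day
      print(f"{orbits_per_day:.1f} orbits/day, ascending tracks ~{track_spacing_deg:.0f} deg apart")
      print(f"at most ~{max_cells_hit:.0f} of {cells_per_band} cells per band touched per day")

    That upper bound is roughly 40% of the cells in a band, so the ~25% I guessed above seems like the right order of magnitude once repeat crossings and the geometry I haven’t worked out are allowed for.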

  145. Carrick, I have completed your suggested start/end date sensitivity test for the Cowtan and Way Hybrid gridded series and put the trend results in the link below. The linked table gives the trends reported previously for the periods 1979-1996 and 1997-2012, for comparison with the periods 1979-1999, 2000-2009 and 2000-2012.

    The time periods 1979-1999 and 2000-2009 show approximately the same global and 90S-60N trends, while the 60N-90N trend is over 2 times higher in the 2000-2009 period than in the 1979-1999 period, which results in an amplification of 2.6 for the earlier period and 6.5 for the later period.

    While zonal trends are obviously sensitive to starting/stopping dates, what is of interest to me is that the 60N-90N zone warming is not contemporaneously dependent on the warming of the 90S-60N zone. In comparing the amplification of the 2000-2012 period to that of 2000-2009, we see that the global and lower-latitude warming trend has decreased on extending the end of the period from 2009 to 2012, while the 60N-90N trend has increased a little, and thus we have an amplification of 15.5. From this I picture the 60N-90N zone sucking heat from the south and then radiating it more efficiently to space. Can a more efficient transport of heat to the Arctic actually warm the Arctic while cooling the globe as a whole?

    http://imageshack.com/a/img51/843/7f9i.png
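
    For anyone who wants to reproduce this sort of table, the recipe is just area-weighted band means followed by OLS trends and their ratio. A minimal sketch (synthetic zonal-mean anomalies, not the actual CWH data) would be:

      import numpy as np

      def band_mean(field, lats, lat_lo, lat_hi):
          # area-weighted (cosine of latitude) mean of a [time, lat] anomaly field over a band
          sel = (lats >= lat_lo) & (lats < lat_hi)
          w = np.cos(np.deg2rad(lats[sel]))
          return (field[:, sel] * w).sum(axis=1) / w.sum()

      def decadal_trend(series):
          t = np.arange(len(series)) / 120.0           # months -> decades
          return np.polyfit(t, series, 1)[0]

      # synthetic monthly zonal-mean anomalies on 5-degree bands: ~0.1 C/decade
      # in the lower latitudes and 4 times that north of 60N
      rng = np.random.default_rng(1)
      lats = np.arange(-87.5, 90, 5.0)
      n_months = 14 * 12
      per_month = 0.10 / 120
      field = (rng.normal(0.0, 0.2, (n_months, lats.size))
               + per_month * np.arange(n_months)[:, None] * (1 + 3 * (lats > 60))[None, :])

      arctic = decadal_trend(band_mean(field, lats, 60, 90))
      rest = decadal_trend(band_mean(field, lats, -90, 60))
      print(f"60N-90N {arctic:.2f} C/decade, 90S-60N {rest:.2f} C/decade, amplification {arctic / rest:.1f}")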

  146. Re: Kenneth Fritsch (Dec 7 15:37),

    Can a more efficient transport of heat to the Arctic actually warm the Arctic while cooling the globe as a whole?

    I believe that’s the underlying hypothesis to the closing of the Isthmus of Panama causing the increase in glaciation over the last few million years. With the rest of the planet cooling, the Arctic can’t stay warm.

  147. DeWitt/Kenneth “I believe that’s the underlying hypothesis to the closing of the Isthmus of Panama causing the increase in glaciation over the last few million years. With the rest of the planet cooling, the Arctic can’t stay warm.”

    Toggweiler and Brierley both have papers on the impacts. Toggweiler has modeled the Drake Passage impact, which according to his estimates reduced “global” temperature by about 4C while warming the NH by 3C at the expense of the SH cooling by 3C. Brierley, in his “relative impact of meridional and zonal SST imbalance” paper, estimates a zonal impact of ~0.6C and a meridional impact of 3.2C. While closing the isthmus had an impact, the Drake Passage opening, creating the ACC and thermally isolating Antarctica, is more likely the big Kahuna.

  148. Re: dallas (Dec 7 19:17),

    Severing the connection between South America and Antarctica was indeed a more important event for the Southern Hemisphere than the connecting of North and South America. It probably led to the formation of the Antarctic ice sheet. But that happened ~30 Ma. The cycle of Northern Hemisphere glacial and interglacial periods started much later, ~3 Ma. Not coincidentally, that’s close to the estimated date for the closure of the Isthmus of Panama.

    According to Schmittner et al.:

    Closure of the Isthmus of Panama about 3 million years ago (Ma) was accompanied by dramatic changes in Earth’s climate and biosphere. The Greenland ice sheet grew to continental extent and the great cycles of ice ages commenced dominating climate variability henceforth.

  149. DeWitt, I didn’t say the closing had no impact, just that the Drake Passage had more impact. Closing of the isthmus would have increased the Gulf Stream flow and reduced the eastern equatorial Pacific SSTs. Herbert’s reconstructions of equatorial SST show a trend in the eastern Pacific starting ~4.5 Ma ago, with a blip at ~3 Ma, that gradually decreases until ~1 Ma and then starts warming again. That is the shift from a 41 ka world to an ~100 ka world, which now looks like it is shifting to a 21 ka world, which doesn’t seem to have a closure or opening as a cause. The odd blips at ~1.4 Ma and ~2.25 Ma would suggest impact events may have a bigger role in the ice age cycling periods. Schmittner could be 100% correct, but there are a few warts I would like to see explained a bit better.

    Schmittner

  150. Kenneth, thanks—interesting results. Excluding the outlier periods reduces the slope by about 20%. In any other field, that would be considered a big effect. 😉

  151. I admit the large difference between GISTEMP and C&W still concerns me. I don’t think you can view C&W as simply improving the infill technique used in HadCRUT… there seems to be something more to what it’s doing.

  152. “I admit the large difference between GISTEMP and C&W still concerns me.”

    Carrick,
    it is actually quite easy to justify – (A) they do not include the most recent SST adjustment; and (B) their smoothing method reduces the trends in some cells bordering the Arctic.

  153. Carrick, since I have been drawn to Arctic polar amplification by the CW paper, I have been more interested in the pause in the warming in the lower latitudes over the past several years while the 60N-90N zone continues to warm. As a general case, both the CWH and GISS series show an increased Arctic polar amplification, and it is nearly the same for the period 1979-2012. GISS shows a temperature anomaly series that has started bending downward in the 75N, 80N and 85N zones over the past several years, while the CWT series continues to show an upward trend in the anomaly series for 75N, 80N and 85N and a downward bend for 60N-65N, 65N-70N and 70N-75N over the same time period. The UAH anomaly series for the period after 1990 looks more like the CWT series than the GISS series.

  154. Robert Way (Comment #121848)
    December 8th, 2013 at 3:58 pm

    From the abstract of the paper you linked we have:

    “We used the recalibrated AMSU-A level 1c radiances recently developed by the Center for Satellite Application and Research group.”

    I believe that the recalibrated radiance comes from the Weng paper which BillC and I have been discussing on this thread – as Weng works for that organization. Let us see if we can make sense of that paper before going forward.

  155. It sounds like their diurnal corrections, based on models yet again, are too strong over this period, reducing warming more than necessary, actually.

  156. Kenneth,

    I think we’re on the same page…multiple satellites, each one making 14 or so orbits a day. I didn’t look into the image sizes, that’s going to factor into the number of data points per grid cell. I am imagining at most latitudes, an image (especially a “nadir” image, which is what Weng et al used) is contained within a single grid cell. I suspect each one gets multiple images along the same meridian as it passes over a grid cell. I’m sure the data is available to know exactly.

    Robert Way,

    thanks, I will bookmark it.

  157. Kenneth as a side note I thought this was interesting work highlighted by Curry recently.

    http://judithcurry.com/2013/12/06/selection-bias-in-climate-model-simulations/
    https://pantherfile.uwm.edu/kswanson/www/publications/GRL_selection.pdf

    In a nutshell, one suggestion in the paper is that models are running hot because they are being tuned to capture the warming (and ice loss) in the Arctic. That is, for the models to get Arctic conditions correct they have to overly warm the lower latitudes – and remember, this is with the GISS/Hadley levels of Arctic amplification. The C&W global temperature might show a slightly better match with the global temperature of the models, but presumably if you took the Swanson approach and looked at the latitudinal pattern the situation is going to be even worse, given how high the polar amplification is for the C&W dataset.

    (The Intro and Conclusion of the paper make great sceptical reading)

  158. A 1 C surface temperature increase plugged into MODTRAN Sub-Arctic Winter atmosphere increases TOA flux at 70 km by 2.7 W/m² at constant water vapor pressure or 2.26 W/m² at constant RH. So if the Arctic has warmed more than most models predicted, does that mean the “missing heat” was, in fact, radiated to space?

  159. HR, DeWitt,
    Nick Stokes’ area-weighted (sine latitude) plot of 1997 to 2012 anomalies versus latitude may help answer the “missing heat” question. (http://moyhu.blogspot.com/2013/11/seasonal-trends-for-infilled-hadcrut.html) The warming in the Arctic (using the C&W hybrid infilling method) shows a very large wintertime warming above ~60N, reaching ~+3C near the pole, but a concurrent drop in wintertime temperatures for 40N to 60N, reaching ~-0.9C at ~50N. An estimate of the net annual change in heat flow with MODTRAN would have to account for the relative temperatures in these adjacent regions and their respective surface areas for each season. It is not clear to me if the wintertime increase in heat loss from ~3C of warming near the pole will dominate the fraction of a degree of cooling between 40N and 60N, because wintertime Arctic temperatures are much lower than those at 40N to 60N, and so produce much lower overall radiative emission.
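
    A crude way to frame the competition is to weight a linearized blackbody response (4*sigma*T^3*dT) by each band’s share of the globe. The numbers below are only a sketch: the wintertime emission temperatures are my guesses, effective emission temperature is not surface temperature, and clouds and water vapor are ignored.

      import numpy as np

      SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def band_area_fraction(lat1, lat2):
          # fraction of the globe's surface area between two latitudes (degrees)
          return 0.5 * abs(np.sin(np.radians(lat2)) - np.sin(np.radians(lat1)))

      def olr_change(T, dT):
          # linearized change in blackbody emission for a small temperature change dT
          return 4 * SIGMA * T**3 * dT

      # assumed wintertime effective emission temperatures (guesses, not from the post)
      arctic = olr_change(T=250.0, dT=+3.0) * band_area_fraction(60, 90)
      midlat = olr_change(T=270.0, dT=-0.9) * band_area_fraction(40, 60)

      print(f"area fractions: 60-90N {band_area_fraction(60, 90):.3f}, 40-60N {band_area_fraction(40, 60):.3f}")
      print(f"global-mean OLR change: 60-90N {arctic:+.2f}, 40-60N {midlat:+.2f}, net {arctic + midlat:+.2f} W/m2")

    With those particular guesses the polar term comes out somewhat larger than the mid-latitude term, but the answer moves around easily as the assumed temperatures and the seasonal weighting change, which is really the point.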

  160. I don’t know if this helps, but RSS has absolute temperatures for the LT, and for 75N-90N the average is 250K, so that is 3.56 Wm-2/K. The difference between the 1982-2013 mean and the 2005-2013 mean is about 2.6 Wm-2. Lots of uncertainty and all that, of course.

  161. BillC (Comment #121852)
    December 8th, 2013 at 9:28 pm

    “I am imagining at most latitudes, an image (especially a “nadir” image, which is what Weng et al used) is contained within a single grid cell.”

    BillC, I noticed the limited area used in Weng on a second or third read of the paper. I have had a problem with this paper because of my anticipation of what I thought the authors were attempting to do, and then skimming through the paper – that is not an efficient way to comprehend the material.

    On background, it is my understanding that UAH uses a correction for rain in its MSU brightness readings. From that I can see where Weng et al are interested specifically in the effect of cloudy conditions on measured brightness, as that is not a condition for which UAH corrects. The threshold for rain in Weng is LWP greater than 0.5 kg/m^2, and that is not part of the comparison in Weng as I now see it. I think this means that the All Weather condition includes cloudy (but not rainy) and Clear Sky conditions.

    The fact that Weng limits their analysis to the nadir fields of view (15 and 16), to the oceans, to 60S-60N, and to a comparison of Clear Sky to All Weather conditions that fall on different days, with no attempt to relate temperature variation to these conditions, in my view means that Weng’s purpose was merely to show that there could be a significant decrease in AMSU-A brightness readings, on average, from cloudy conditions.

    A further, and perhaps the publishable, element in Weng’s trend conclusions is that limiting the AMSU-A measurements to Clear Sky conditions over the 1999-2012 period shows a global mean temperature trend higher than that derived from All Weather. That observation really means little with regard to trends derived from cloudiness-uncorrected UAH measurements without a more comprehensive analysis than what Weng has done over the 13-year time period.

    In summary I think my anticipation that this paper was making a comparison that was relevant to the trends derived by UAH (and RSS) was not borne out after more careful reading of the text and the discussion we have had on this thread. I do, however, see this work as a stepping stone to making this comparison and perhaps that is what the Wang paper, linked by Way above, has done. Remember that the 60N-90N zone, that Weng did not include in his analysis, contributed nearly 53% of the global warming trend from 2000-2012 in the Cowtan and Way Hybrid series. Weng is doing troposphere temperature trends while CWH is doing surface temperature trends, but still I would think that there should be a reasonable relationship of surface to troposphere temperatures in that zone.

  162. Re: SteveF (Dec 9 09:22),

    I probably should have added a /sarc tag to my comment since I believe that missing heat is a model artifact. The lower temperature in the NH mid-latitudes doesn’t appear to show up in the models, as would be expected if it’s mainly the result of a cyclical peak in latitudinal heat transfer, e.g. the AMO.

  163. DeWitt,

    Sorry, I thought you were completely serious. I agree that there is clear evidence of fairly long term cyclical changes (multiple decades) in latitudinal heat transfer as well as S/N hemispheric transport.

  164. Paywalled, but here’s an article about modeling issues concerning the Atlantic MOC.

    .. Fluctuations in overturning MHT are dominated by Ekman transport variability in CM2.1 and CCSM4, whereas baroclinic geostrophic transport variability plays a larger role in RAPID. .. The horizontal gyre heat transport and its sensitivity to the MOC are poorly represented in both models. The wind-driven gyre heat transport is northward in observations at 26.5°N, whereas it is weakly southward in both models, reducing the total MHT. This study emphasizes model biases that are responsible for the too-weak MHT, particularly at the western boundary. The use of direct MHT observations through RAPID allows for identification of the source of the too-weak MHT in the two models, a bias shared by a number of Coupled Model Intercomparison Project phase 5 (CMIP5) coupled models.

  165. From Trenberth and Fasullo, regarding results from Otto et al. (without invoking aerosol uncertainty):

    Climate sensitivity estimates are greatly impacted by such variability especially when the observed record is used to try to place limits on equilibrium climate sensitivity [Otto et al., 2013], and simply using the ORAS-4 estimates of OHC changes in the 2000s instead of those used by Otto et al., so that the entire system uptake changes from 0.65 to 0.91 Wm−2, changes their computed equilibrium climate sensitivity from 2.0°C to 2.5°C, for instance.

  166. Kenneth Fritsch (Comment #121858)

    I agree with most of your comment. I continue to think that if there is significant temporal variation in the location and number of cloudy cells over time, it would be worth looking at how that affects the adjusted trends. I don’t seem to have access to Wang, but it looks like I will when it becomes included in a print edition.

  167. If I as a humble blog reader and commenter have the power to do so, as granted to me by the host, I invite WebHub Telescope to come by and comment on the latest CA post.

  168. Re: RB (Dec 9 11:04),

    They get the sign wrong on MHT at 26.5N! Are the winds blowing the wrong way or what? I thought the ocean part of the models was bad, but this is worse than I thought. By the way, do I need to put a trade or registered mark on that phrase?

  169. BillC (Comment #121863)
    December 9th, 2013 at 11:56 am

    “I continue to think that if there is significant temporal variation in the location and number of cloudy cells over time, it would be worth looking at how that affects the adjusted trends.”

    I agree that could be the case. I was first going to do a toy model to convince myself what could happen to trends given various temporal variations. Or is there a nonlinear response to temperature here to worry about?

  170. RB (Comment #121862)-There are a rather large number of egregious errors in Trenberth and Fasullo’s paper. Enough that perhaps the editor ought to resign for such a flawed paper to be published. For one thing they appear, bizarrely enough, not to know the difference between the difference of the means of 1976-98 and 1999-2012, and the trend over the latter period. This is actually a rather baffling error! It leads them to say “it is the central and eastern Pacific more than anywhere else that has not warmed in the past decade or so.” That is not only something one cannot conclude from their analysis, which has nothing to do with the trends over the last decade (or 1999-2012 for that matter); it is flat wrong when one actually looks at the appropriate test for that hypothesis! The trend over 1999-2012 is *positive* in much of the ENSO region!

    They also make the risible claim that “deniers” engage in cherrypicking *while engaging in an act of cherry picking!* Once again we see someone trying to assert that a period ending in the middle of the effects of a large volcanic eruption (Pinatubo) didn’t show much warming either – shocking! But they compare, bizarrely, that period to the period from 1997-2008 *even though four additional years of data are available since then*, presumably because including the years from 2009-12 (and, naturally, the years from 1994-1997) would have *completely invalidated their conclusion based on comparing 1982-93 to 1997-2008!* And they avoid the question of whether the alleged cherry picking by “deniers” of 1998 even *matters*, by not noting that the comparison between 2001-2012 and 1986-1997 would *also* contradict their claims, *despite not taking advantage of the El Ninos of either 1998 or 1982!* *And despite Pinatubo dragging much of the end of the latter period down!*

    This is a bad, low quality paper. It should not have been published, anywhere, by anyone.

  171. BillC, I have run my toy model for brightness decreases in cloudy conditions versus clear sky per Weng. I used 10 grids with 1001 data points in each grid, with varying trends for each grid plus white noise, and then (1) randomly reduced 200 data points in each of the 10 grids by 20%, (2) reduced 500 data points in each grid by 20%, and (3) reduced 500 data points in each grid by a constant amount (not a percentage). I then compared the overall trends, and the standard errors of those trends, for these 3 reduction scenarios against those for the original, the original representing clear sky and the reductions representing all weather with 20% and 50% cloudy conditions.

    The results from the toy model make sense once you see them. Reducing brightness randomly by 20 percent for 200 data points out of 1001 reduces the trend by approximately 0.20*200/1001, or 4%, and doing the same for 500 data points reduces the trend by approximately 10%. The standard error is also reduced by increasing amounts with the reduction percentage and the number of data points. Reducing the brightness by a constant amount has no effect on the trend or its standard error.
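
    For anyone who wants to poke at it, a stripped-down version of scenarios (1) and (2) looks something like this (synthetic numbers only, nothing from the actual AMSU-A data):

      import numpy as np

      rng = np.random.default_rng(42)

      def fitted_trend(y):
          t = np.arange(len(y), dtype=float)
          return np.polyfit(t, y, 1)[0]

      n_grids, n_pts = 10, 1001
      grid_trends = np.linspace(0.5, 1.5, n_grids)         # arbitrary per-grid trends
      clear = (grid_trends[:, None] * np.arange(n_pts)[None, :]
               + rng.normal(0.0, 50.0, (n_grids, n_pts)))  # "clear sky" series with white noise

      def reduce_random_points(series, n_reduced, factor=0.8):
          # scale n_reduced randomly chosen points in each grid by `factor`
          # (a crude stand-in for cloud-depressed brightness temperatures)
          out = series.copy()
          for g in range(out.shape[0]):
              idx = rng.choice(out.shape[1], size=n_reduced, replace=False)
              out[g, idx] *= factor
          return out

      base = np.mean([fitted_trend(y) for y in clear])
      for n_red in (200, 500):
          cloudy = reduce_random_points(clear, n_red)
          ratio = np.mean([fitted_trend(y) for y in cloudy]) / base
          print(f"{n_red} of {n_pts} points scaled by 0.8: trend ratio {ratio:.3f}")

    The ratios should come out near 0.96 and 0.90, i.e. the 4% and 10% reductions noted above.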

    While the model results show that the trends can be reduced by a random percentage reduction in brightness measurements, the Weng paper shows that the standard error increases for the all-weather condition relative to clear sky, which is the opposite of the toy model result. The standard error can increase under all-weather conditions where the cloudy condition, which in general reduces brightness, instead increases brightness and makes the temperature appear higher than clear sky conditions would. That counter-effect can be seen in the zonal changes in trends, where for some zones and channel 3 we can see All Weather running with warmer trends than Clear Sky.

    What has become very apparent in this exercise is that nowhere in the Weng paper can I find a detailed summary of the mechanism by which the cloudy condition affects the brightness measurement, other than that it generally decreases it. There is a discussion in the paper of how rain can affect the brightness measurements, and then later a single sentence notes that heavy rainfall conditions were not considered in the analyses. BillC, was there an SI for this paper?

  172. Does your toy model include an annual cycle, and consider anomalies before and after adding noise?

    The effect is to lower absolutes, I believe. I can think of ways this would raise some anomalies.

  173. Andrew_FL (Comment #121869)
    December 9th, 2013 at 9:41 pm
    RB (Comment #121862) “……..they appear, bizarrely enough, not to know the difference between the difference of the means of 1976-98 and 1999-2012, and the trend over the latter period. This is actually a rather baffling error! It leads them to say “it is the central and eastern Pacific more than anywhere else that has not warmed in the past decade or so.” ”
    ———————————————————-
    I seem to be missing the error that they have made. Could you elaborate?

  174. Robert Way:

    it is actually quite easy to justify – (A) they do not include the most recent SST adjustment; and (B) their smoothing method reduces the trends in some cells bordering the Arctic.

    I’m afraid it’s not so simple for me, as the infilling from GISTEMP is potentially artificially inflating the amount of warming.

    There is, as far as I know, no expectation that the Arctic Ocean should exhibit higher temperature trends than neighboring land regions, at least when the surface is ice free.

    This is a figure I like to show, looking at the difference between amplification for land versus ocean. I believe that ocean was HadSST2 here, so updating this figure would be useful.

    It may be that during winter time, when the sea is still frozen, you get amplification, but when it’s open water you don’t. If most of the amplification is seen during winter months in the form of increased minimum temperatures, then plausibly your result is correct.

    You’ve probably seen figures like this, but here’s GISTEMP trend by season.

    So I think it would be diagnostic to break down your results by season. If you persist in seeing a large amplification for all seasons, I would consider that problematic.

  175. Kenneth Fritsch:

    The UAH anomaly series for the period after 1990 looks more like the CWT series than the GISS series.

    Thanks for that. So if they are pinning the Arctic temperature on UAH, a series formerly maligned by SkS, it makes sense that C&W would find UAH-like trends. Hopefully I’ve understood the thrust of what you are saying.

  176. Owen (Comment #121878)-Look at figure 9. The quote I gave is their interpretation of that figure. Do you understand why that interpretation of that figure is wrong?

    Here’s a key bit of help for you: go to the GISS maps page, and do a map of the trend from 1999-2012. It doesn’t look anything like T&F’s figure 9. And yet it is the figure that they should have shown if they were interested in the areas that warmed/hadn’t warmed, over the period 1999-2012. The problem is that it would not have agreed with their hypothesis expressed in their interpretation of figure 9.

  177. Andrew,

    I took their hypothesis to be that natural multidecadal oscillations like the PDO can markedly influence the average global surface temperature, and even more so the temperatures of certain zones. So when they give averages for 1976-1998 (positive phase of the PDO) and compare that average to the average for the 1999-2012 period (negative phase of the PDO), we go from a warm eastern and equatorial Pacific (76-98) to a much cooler eastern and equatorial Pacific (99-12) – even though there may be a positive trend from 1999-2012 in some of those zones. The PDO effect is greater than the current warming trend in those zones. I don’t see a problem.

  178. Carrick (Comment #121879)
    December 10th, 2013 at 11:20 am

    Carrick, the winter months definitely show the largest trends in the Arctic polar region. Nick Stokes’ graphic linked below shows the seasonal differences well and agrees with those that I have calculated and shown in non-interactive form.

    http://moyhu.blogspot.com/2013…..dcrut.html

  179. Owen (Comment #121882)-Is it or is it not *false* that the equatorial pacific “more than anywhere else” has not warmed during the pause? Thus is their claim.

  180. Andrew_FL (Comment #121877)
    December 10th, 2013 at 10:55 am

    The trend calculations should be unaffected whether one uses absolute or anomaly data. I average the trends for the 10 grids. Actually the grids have no effect until I input non-random and different brightness reductions into the different grids.

  181. Kenneth, thanks for reminding me of the link. What’s interesting, besides the fact that the amplification is largest in the winter (so this is consistent at least), is that C&W is not consistent with UAH in summer and fall. UAH shows virtually no amplification for these seasons, but C&W still has a relatively large amplification. For fall in particular, HadCRUT4 actually has a larger trend than UAH.

    I wonder if UAH is not reliable when there is mixed ice & water?

  182. Kenneth Fritsch (Comment #121876)

    The concluding paragraph of their Section 3 is

    Only the integrated effect of clouds on the temperature trend can be estimated from real data.

    This follows a discussion of how different types of cloudiness can raise or lower the effective brightness temperature. They then proceed to remove cloudy grid cells and calculate a “clear-sky” temperature trend to compare to the “all-sky” trend.

    The point I’m trying to make is that if the quantity or location of the cloudy cells changes over the same time period as they calculate their trend, this could affect the trend itself. I have no idea how large that effect would be. I didn’t see an SI.

  183. “I wonder if UAH is not reliable when there is mixed ice & water?”

    the profile curves change (as I recall) when you go from land to water. My bet is they don’t have an ice mask.

  184. Kenneth Fritsch (Comment #121885)-Ah, I misinterpreted what it was you were saying, sorry. Yes, you are correct it should not make a difference to the *trend*.

  185. Andrew_FL (Comment #121884)
    December 10th, 2013 at 1:27 pm
    “Is it or is it not *false* that the equatorial pacific “more than anywhere else” has not warmed during the pause? Thus is their claim.”
    ————————————–
    Their statement is not ***false*** (as you put it). The average regional surface temperature change (relative to a 1976-1998 baseline) over the past 13-14 years is most negative in the eastern and equatorial Pacific – see the GISS map at http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?year_last=2013&month_last=10&sat=4&sst=3&type=anoms&mean_gen=1212&year1=1999&year2=2012&base1=1976&base2=1998&radius=1200&pol=reg
    In any case, it is a minor point at best.

  186. Owen (Comment #121890)-No, their statement *is* false, and apparently you are as ignorant as they are of the difference between a level difference between two periods and a trend over the latter period. This is the plot that would be relevant to their claim:

    http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?year_last=2013&month_last=10&sat=4&sst=1&type=trends&mean_gen=0112&year1=1999&year2=2012&base1=1951&base2=1980&radius=1200&pol=reg

    *It shows their claim is false.*

    It is *not* a minor point, either; their paper is almost entirely based on the hypothesis that ENSO and PDO “explain” the pause.

    It’s actually laughable that you are so invested in Trenberth that you can’t abide that he would be so deliberately misleading or incompetent as to make such a basic error.

  187. WebHubTelescope (Comment #121874)

    You can explain the reasons for the differences between the results of your CSALT model and the TCR estimates from Otto et al and the other papers Nic is looking at. What’s your latest CSALT TCR – 2.1? That’s a lot higher than most of the others, particularly the “observationally constrained” ones, which is what yours seems to aspire to be. Thoughts?

  188. BillC (Comment #121887)
    December 10th, 2013 at 3:31 pm

    I think I know what you are saying BillC. I first wanted to see if a lowered and random cloudiness response could change the trend. It could if all cloud conditions reduced the brightness readings by a percentage amount, but that is obviously not the case with the Weng results.

    If we assume that the clear sky condition provides brightness measurements that are related to the true troposphere temperature, and that cloudy conditions can produce brightness measurements that erroneously correspond to lower or higher temperatures, then using only clear sky measurements has to give the proper trend, provided the clear sky condition is not associated with time periods that are trending differently than the cloudy time periods. It would still be possible for the cloudy condition measurements to give the correct trends if that condition occurred randomly over time and produced a constant brightness offset. All we need to be concerned with, given the assumption that clear sky gives the most valid brightness/temperature measurement, is that clear sky times are representative, trend-wise, of the all weather conditions. To analyze that proposition we need to determine how unrepresentative conditions might arise and, better, how we would test that proposition.

    Obviously one could use an independent source to test the proposition and it could not be the satellite MSU data – as that would create a circular argument. Depending on how well surface trends apply to satellite results, I would suppose we could look at surface trends of cloudy versus clear days. We would have to confine the analysis to individual stations and I am not aware whether that data exists, although I suspect it does. I have found the following linked paper that discusses cloudy versus clear condition trends in the Arctic and it finds clear conditions associated with warmer trends in the 60N-90N zone. Those results could be unique to that region of the globe.

    It appears that, if nothing else, all this discussion points to the Weng paper’s failure to discuss these issues. I might do a more leisurely search for other papers dealing with cloudy versus clear condition trends, but at this point I would think the Weng paper would motivate the UAH fathers, Christy and Spencer, to reply. Missing a biasing of their MSU data by as much as the Weng paper might indicate is a big deal. The exercise on which Weng bases their conclusions does not involve more than comparing observed results and attempting to show that difference is an artifact of the measurement process.

    http://journals.ametsoc.org/doi/pdf/10.1175/2007JCLI1681.1
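
    To make the representativeness point concrete, here is a deliberately exaggerated toy case (no real data, and note that Weng et al say they removed seasonality, so this is only meant to illustrate the condition, not to claim it applies to their analysis): if clear-sky scenes drift from favoring the cold season early in the record to favoring the warm season later, the clear-sky-only trend is spurious even though the underlying series has no trend at all.

      import numpy as np

      rng = np.random.default_rng(7)

      years, per_year = 13, 365
      n = years * per_year
      t = np.arange(n)
      season = 10.0 * np.sin(2 * np.pi * t / per_year)   # seasonal cycle, no underlying trend
      temp = season + rng.normal(0.0, 2.0, n)

      # clear-sky sampling that drifts: early years favor the cold season,
      # later years favor the warm season (a deliberately unrepresentative sampler)
      drift = np.linspace(-1.0, 1.0, n)
      p_clear = np.clip(0.5 + 0.3 * drift * np.sign(season), 0.05, 0.95)
      clear = rng.random(n) < p_clear

      def trend_per_decade(x, y):
          return np.polyfit(x / 3652.5, y, 1)[0]          # days -> decades

      print(f"all-weather trend: {trend_per_decade(t, temp):+.2f} K/decade")
      print(f"clear-sky only:    {trend_per_decade(t[clear], temp[clear]):+.2f} K/decade")

    The all-weather fit stays near zero while the clear-sky-only fit shows several K/decade of spurious warming. Whether anything like that is going on in the actual clear-sky sampling is exactly the question an independent check, such as surface cloudy-versus-clear trends, could help answer.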

  189. Carrick (Comment #121886)
    December 10th, 2013 at 3:22 pm

    “I wonder if UAH is not reliable when there is mixed ice & water?”

    The CRTM model referenced in the Weng paper under discussion here is claimed to be capable of quantifying the effects that can influence the satellite radiance measurements. I am not aware of this model being used at this point in time to make corrections to these measurements, but the model should make users aware of potential biases.

    http://en.wikipedia.org/wiki/Community_Radiative_Transfer_Model

    http://www.star.nesdis.noaa.gov/smcd/spb/CRTM/

  190. It seems to me that to the extent there is significant cloud feedback/forcing, *of course* the temperature trends in clear sky regions would not be the same as those in cloudy regions. This would certainly confound any attempt to measure a “bias” in trends due to cloudiness: part of the signal would actually be *real*.

  191. Kenneth,

    It seems to me that one way of testing would be to find a subset of the clear-sky data that has no trend in LWP over the analysis time span, and re-do the trend for that subset.

    Actually it seems like the authors of this study could have, and maybe should have, done exactly that:

    1) find the areas with no trend in LWP
    2) analyze the clear-sky trend in those areas, where sufficient info is present.

    This would be like selecting a subset of land temperature stations by station quality, which has obviously been done or attempted by several.

    Let’s take a hypothetical region in which cloudiness increased (decreasing the number of clear-sky measurements) at the same time as the trend in clear sky grid cells is higher than in the all-sky grid cells. That’s a perfect formula for accidentally hiding any negative cloud feedbacks.

    It’s worth mentioning that I’ve seen discussion of the idea that the Southern Ocean may have negative cloud feedbacks during periods of increasing forcing, then gradually lose them as the ocean heat uptake equilibrates. That was all model-based, so take it as you want, but try mixing Weng et al with that idea and you have a nice formula for measurement bias.
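
    Something like the following is what I have in mind for (1) and (2) above (all inputs made up, annual means per cell instead of individual scenes, and a simple |slope| < 2*SE screen standing in for “no trend in LWP”):

      import numpy as np

      def slope_and_se(y):
          # OLS slope of y against its index, with its standard error
          t = np.arange(len(y), dtype=float)
          X = np.column_stack([np.ones_like(t), t])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          s2 = resid @ resid / (len(y) - 2)
          return beta[1], np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

      # hypothetical per-cell annual means, shape [n_cells, n_years]
      rng = np.random.default_rng(3)
      n_cells, n_years = 1150, 13
      lwp = rng.normal(0.2, 0.05, (n_cells, n_years))      # stand-in for liquid water path
      tb_clear = rng.normal(250.0, 0.5, (n_cells, n_years)) + 0.02 * np.arange(n_years)

      # step 1: keep cells whose LWP shows no significant trend
      keep = np.array([abs(s) < 2 * se for s, se in (slope_and_se(row) for row in lwp)])

      # step 2: clear-sky trend over the retained subset
      trend, se = slope_and_se(tb_clear[keep].mean(axis=0))
      print(f"{keep.sum()} of {n_cells} cells retained; clear-sky trend {trend:.3f} +/- {2 * se:.3f} K/yr")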

  192. Steven Mosher (Comment #121888)
    December 10th, 2013 at 4:32 pm

    the profile curves change (as I recall) when you go from land to water.

    This is my recollection. Interestingly, it is not mentioned in Weng et al; I guess they assume it’s so obvious as to not need to be stated, though they state far more obvious things, IMO.

  193. Andrew,

    You have misread the Trenberth and Fasullo paper, leaving out the context of the comment you quoted. The fuller context reads as follows (CAPS ARE MINE): “If we now examine the hiatus period of 1999–2012 and compare it to the time when global warming really took off from 1976 to 1998 (Figure 9), THE NEGATIVE PDO PATTERN EMERGES VERY STRONGLY THROUGHOUT THE PACIFIC although warming prevails in the Atlantic and Indian Oceans and on land. In other words, it is the central and eastern Pacific more than anywhere else that has not warmed in the past decade or so.”

    This quote comes from their general discussion of the PDO. They clearly state they are comparing two time periods (1976-1998 [positive phase of the PDO] and 1999-2012 [negative phase of the PDO]). They are not dealing in any way with a trend within the latter period (as you seem to think). They are correct in their statement about the effect of the PDO in the Pacific, as is demonstrated by Figure 9 in their paper and by the GISS link I sent you.

    And your responses to me when I asked you about this were angry, insulting, and abusive, much like your earlier responses to Robert Way.

  194. Thanks for the explanation, Owen. It’d be interesting to see if there were a linkage between PDO and Arctic temperature, relating back to Robert Way’s findings.

  195. Owen (Comment #121899)-Oh my god you are so obtuse.

    “They are not dealing in any way with a trend within the latter period (as you seem to think).”

    NO, I seem to think that they SHOULD have been dealing with the trend over the latter period BECAUSE THAT IS WHAT IS RELEVANT TO THE CLAIM “it is the central and eastern Pacific more than anywhere else that has not warmed in the past decade or so.”

    You are perpetrating their exact same error because you are apparently as incompetent or deceitful as they are.

    The claim “it is the central and eastern Pacific more than anywhere else that has not warmed in the past decade or so” IS WRONG. You do NOT answer the question “which places, more than others, did not warm over the period 1999-2012?” by taking the difference between that period and an earlier period! THAT’S NOT HOW TIMESERIES WORK.

    “They are correct in their statement about the effect of the PDO in the pacific, as is demonstrated by Figure 9 in their paper and by the GISS link I sent you.”

    No, they are claiming “it is the central and eastern Pacific more than anywhere else that has not warmed in the past decade or so” which is WRONG. As demonstrated by the link I gave YOU.

    “And your responses to me when I asked you about this were angry, insulting, and abusive, much like your earlier responses to Robert Way.”

    I will not tolerate a refusal to admit error. Cry me a river.

  196. As an observation, it is interesting that at least some on the sceptic side of the spectrum have come to acknowledge that the deeper ocean is indeed accumulating heat, contrary to opinions two years ago. It remains to be seen whether claims that indirect aerosol forcings are underestimated are proven right. Coincidentally, Trenberth’s estimates for global area heat accumulation (0.84W/m^2) for the 2000s end up being quite similar to Hansen’s estimate of 0.85W/m^2 in 2003.

  197. RB,

    Coincidentally, Trenberth’s estimates for global area heat accumulation (0.84W/m^2) for the 2000s end up being quite similar to Hansen’s estimate of 0.85W/m^2 in 2003.

    I can’t say I have looked in-depth at the methodology of the ORAS4 reanalysis, but here’s why I would be, for lack of a better word, skeptical:

    The large estimated TOA imbalance (global heat accumulation) for the 2000s in ORAS4 comes primarily from the period from 2000-2004 (where it is ~1.23 W/m^2). From 2005-2009, the TOA imbalance is much lower, and in-line with other estimates (~0.38 W/m^2). This is a huge drop in the TOA imbalance, one that surely should appear in the satellite record but is not evidenced at all (note: the satellite record is accurate for relative changes in TOA imbalance but is not accurate for absolute TOA imbalance). The discrepancy (drop-off), which is observed to a lesser degree in observational OHC sets, caused the infamous “missing heat” quote…the fact that ORAS4 has substantially amplified this problem does not do it any credit.

    Nor is there any physical or theoretical reason for such a drop-off in heat accumulation of which I am aware. I would suggest that Hansen’s estimate of 0.85 W/m^2 (which – if I recall correctly – was simply the output of a GISS model ensemble) does not lend credibility to this number, as those runs do not show a sudden decrease in the rate of accumulation mid-decade, nor do they propose a mechanism for such a thing. (Also note that the “new and improved” CMIP5 models generally show a lower TOA imbalance for the 2000s than did the GISS CMIP3 runs).

    There is *something* that changed mid-decade, however, and that is the deployment of ARGO, which substantially improved coverage of the oceans. Given the substantially better coverage that we have in the 2nd half of the decade, and the lack of evidence of any drop-off in TOA imbalance in the satellite record, something has to give – personally I would argue that the early decade OHC data is the least reliable of the 3, and would resolve the conflict by assigning the largest uncertainty to that portion.

    For more details and specific numbers, here is a post I previously did on this topic:

    http://troyca.wordpress.com/2013/07/13/what-does-balmaseda-et-al-2013-say-about-the-missing-heat/

  198. Troy_CA,
    Yes, there are substantial issues with the non-ARGO to ARGO transition. I think that cooler heads have recognized the rather large uncertainty associated with that data input change, but alarmist? Not so much. I think analyses which cover the pre-to-post ARGO time period need to consider carefully the added uncertainty associated with the change in data. Better to look at only post-ARGO data when that is possible.

  199. BillC (Comment #121897)
    December 11th, 2013 at 11:39 am

    KNMI has cloud cover by station in a very usable form and, better, the corresponding temperature anomalies. I might do some temperature trends for clear and cloudy sky conditions with these data.

    As an aside, I note that the person most instrumental in the KNMI website, and in my opinion doing as much as any climate scientist to make climate data available to interested laypersons such as myself, and in very usable form, Geert Jan van Oldenborgh, announced at the website that due to an illness he will not be attending to the site for 6 months. I wish him well in his recovery.
