299 thoughts on “Lewis and Crok: Discuss.”

  1. I haven’t read through it either. But it seems to be getting a lot of attention.

    Here’s an interesting post by Piers Forster on Nic’s work.

    Nic gets a bit testy in the comments.

    I can’t imagine you could digest a paper of this length in just a few minutes and write a coherent rebuttal, and get your basic facts right. It doesn’t appear that this has happened here, either.

  2. Your link is broken. And I don’t just mean the fact that it ostensibly leads to the GWPF website.

  3. It seems good news to some is bad news to others. All depends on your POV, objectives, goals, and priorities. Lewis and Crok probably differ in those areas from the climate science consensus (and even more from the IPCC poobahs 😉 ).

  4. Perhaps I’m wrong, toto, but GWPF seems a bit more sane than, say, CEI in the US.

    One of the points in dispute on Hawkins’ thread is the question of how biased low the HadCrut data is. The idea is that we’re missing data in the north polar region, and potentially this region is warming faster than the rest of the Earth.

    Not including this region in your average is like assuming that it is warming at the global average (this is what HadCrut does).

    This is the formula I worked out, assuming the missing area is small:
    $latex \mu_{\hbox{global}} = 1 + (\mu_{\hbox{missing}} - 1) {\Delta S \over S}$.

    Here $latex \mu$ is the ratio of trends (actual to measured) for the global mean and for the missing region. $latex \Delta S/S$ is the fraction of missing surface area, where there is no data.

    If $latex \mu_{\hbox{missing}} = 1$ (no difference in trends), then $latex \mu_{\hbox{global}} = 1$ also.

    If we play games with the trend informed by the pattern of variation in trend with latitude, then I’d take $latex \mu_{\hbox{missing}} = 6$ and $latex \Delta S/S = 0.025$ (rough fraction of the Arctic Ocean); this gives roughly a 12% bias error for HadCRUT.
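Here’s a quick numeric sketch of that small-area relation (a few lines of my own, just to check the arithmetic; written so that equal trends give no bias):

```python
# Small-area coverage-bias relation: ratio of true global trend to the
# measured, coverage-limited trend. Illustrative only.
def coverage_bias_ratio(mu_missing, frac_missing):
    """mu_missing: missing-region/measured trend ratio; frac_missing: area fraction."""
    return 1.0 + (mu_missing - 1.0) * frac_missing

print(coverage_bias_ratio(1.0, 0.025))  # 1.0: no trend difference, no bias
print(coverage_bias_ratio(6.0, 0.025))  # 1.125: Arctic at 6x over 2.5% of the surface
```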

    By the way, we don’t know if the Arctic Ocean really warms at this faster rate or not, because the polar amplification seems to be a land only phenomenon (does it apply to north polar ice?). There are new BEST products coming out that may help answer this question.

    Is that really enough to explain the discrepancy between results or is this just another red herring? Eyeballing Lucia’s latest comparison, I see more like a factor of 2 between models and data.

    I think this correction just explains the difference between GISTEMP and HadCRUT. There are still questions about whether C&W 2013 is a valid approach. Again, BEST daily may help answer that.

  5. By the way, if you invert the formula I give above to get $latex \mu_{\hbox{global}} = 2$, assuming the missing area is contained in 2.5% of the Earth’s surface, that works out to $latex \mu_{\hbox{missing}} \cong 40$! To get to 0.2 °C/decade for HadCRUT would require a value of $latex \mu_{\hbox{missing}}$ in excess of 20.
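Inverting the same small-area relation for the missing-region trend ratio (again just a sketch of the arithmetic, not anybody’s published code):

```python
# Solve mu_global = 1 + (mu_missing - 1) * f for mu_missing, the trend
# ratio the missing region would need to produce a given global ratio.
def required_missing_ratio(mu_global, frac_missing):
    """Missing-region trend ratio implied by a global ratio over area fraction f."""
    return 1.0 + (mu_global - 1.0) / frac_missing

print(required_missing_ratio(2.0, 0.025))  # 41.0, i.e. roughly 40
```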

  6. Hopefully we’ll see more interaction between Forster and Lewis in the comments. Here is one comment I posted in response to Forster’s short reply to Nic (although it’s been in moderation):

    Piers Forster,

    I think it would be helpful for many of us if you expanded on your points of agreement / disagreement given the last comment. Do both of you agree that…

    1) The Lewis and Crok approach for predicting temperatures based on TCR is not exactly the same as Gregory and Forster (2008), primarily because an additional value is added (0.15K) for heat “in the pipeline”. Whether this method is *substantially* different from GF08 may be up for debate, as is whether this is an improvement, but the resulting projected value is modified. A 2-box model would be preferred, but this projection would then depend on other properties of the 2-box model as well.

    2) Regardless, using this method of projection will tend to underestimate the temperature in 2100 for the RCP4.5 scenario for models, though perhaps not to the degree shown in Fig. 1 above if the “in the pipeline” warming is accounted for. It seems to me this is most likely to result from 3 factors: a) changing ocean heat uptake efficiency (ratio of change in ocean heat content to surface temperature increase) in models, b) changing “effective” sensitivity (the radiative response per degree of surface temperature) in models, or c) models generally have a higher effective sensitivity / TCR ratio than used by the 2-box in Lewis and Crok. What reason(s) would either of you suggest is/are most likely?

    Now, one point of disagreement seems to be…

    3) Does the model / observation discrepancy in TCR (in Otto et al) primarily arise from the coverage bias in HadCRUTv4? Piers, you seem to suggest that this is indeed the case, although I confess I find it hard to sufficiently read the difference in Fig2 & Fig3 to see this. Nic, you argue that this is not the case.

    I also had one more question about Fig 3 above, which, as I mentioned above, I am having some trouble reading/understanding. The little red circles in the graph represent the difference between pre-industrial and the 2000s for the CMIP5 models, correct? My confusion arises because it appears, according to the figure, that most of the models have a surface temperature change of only 0.4-0.6 K over that period. Am I reading that correctly? Even the unmasked (fig 2) observations show 4 models in that range.

    As I said, I am confused by the addition of model circles to figures 2 & 3, as some appear to show substantially less surface temperature change over the 20th century than I recall when I last looked at the CMIP5 model runs.
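The projection logic discussed in point 1 can be sketched with a toy two-box energy-balance model. To be clear, the parameter values below (feedback, coupling, heat capacities) are illustrative assumptions of mine, not the values used by Lewis and Crok or GF08:

```python
import math

# Toy two-box model: fast surface/mixed-layer box coupled to a slow deep-ocean
# box. Temperatures in K, forcing in W/m^2, heat capacities in W yr m^-2 K^-1.
# All parameter values are assumed, purely for illustration.
def two_box_step(T_s, T_d, F, lam=1.3, gamma=0.7, C_s=8.0, C_d=100.0, dt=1.0):
    """Advance surface (T_s) and deep-ocean (T_d) temperatures by one year."""
    dT_s = (F - lam * T_s - gamma * (T_s - T_d)) / C_s * dt
    dT_d = gamma * (T_s - T_d) / C_d * dt
    return T_s + dT_s, T_d + dT_d

# 1%/yr CO2 ramp; surface warming at year 70 (doubling) is a TCR analogue.
F2x = 3.7
T_s = T_d = 0.0
for year in range(1, 71):
    F = F2x * math.log(1.01 ** year) / math.log(2.0)
    T_s, T_d = two_box_step(T_s, T_d, F)
print(round(T_s, 2))  # TCR-like value for these assumed parameters
```

While the deep box is still cold, the surface box effectively responds to a feedback of roughly lam + gamma, which is why the transient response sits well below the equilibrium value F2x/lam.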

  7. Posted this question on judithcurry as well:
    Using your own preferred method and Bayesian priors, how much do the TCR and ECS change for each month of flat temperatures? Can we all agree that the answer must be a non-zero negative number? How much good news is each month of the Pause?

  8. “Missing heat” in the surface temperature datasets? But models hindcast surface temperatures “accurately”.

  9. Carrick,
    The agreement between BEST and CW2014 is very strong using their preferred reconstruction method (e.g. treating sea ice as land) which our paper shows is optimal when compared to Arctic Buoy data. Realclimate has a post (by Zeke and Rohde) which has this comparison as well. We’ve also done some interesting work with the AIRS satellite data facilitated by Mosher (who has done his own comparisons with BEST) which gives credence to the approach.

    “The idea is that we’re missing data in the north polar region, and potentially this region is warming faster than the rest of the Earth.”

    The challenge with the HadCrut4 data is not simply that there are missing observations at the North Pole, but rather that the missing coverage extends across high latitudes in general. GHCN has even worse spatial coverage than HadCrut4, which makes this an equally important issue for that dataset.

  10. Hi Robert,

    Thanks for the comment.

    I was referring to this post by Kevin:

    This is fantastic work. I’ve been making constant use of the BEST/Google maps station browser to tackle the very interesting question of why our trends are higher than GISTEMP (which also extrapolates air temperatures over sea ice). It’s unfortunately far from simple, but the combination of your tools and MERRA have allowed us I think to crack an important piece of the puzzle.

    Writing it up is taking a while though. Hopefully we’ll have a rudimentary report later in the month.

    You also said:

    The challenge with the HadCrut4 data is not that it is simply in the North Pole where there are missing observations but rather that it extends to high latitudes in general.

    Do you have an estimate of the missing area?

  11. It seems good news to some is bad news to others.

    How is it good news? I thought nothing bad would happen even if the Earth warmed 5C.

  12. Oops, mis-posted on another thread. Lucia, this thread has the chance to be very informative. Please don’t let Boris ruin it. Thanks.

  13. Boris,
    “How is it good news? I thought nothing bad would happen even if the Earth warmed 5C.”
    .
    It is good news that extreme warming is less likely, if only because it will help keep folks like you from instituting foolish and wasteful public policies. But on a more basic level, surely you can appreciate that whatever the true potential for disruption due to warming (say, long-term sea level increases), a lower best estimate for climate sensitivity reduces that potential, and so ought to be considered good news…. if you are sincerely concerned about negative consequences from GHG driven warming.
    .
    I have never said that a 5C increase in GMT would not be disruptive; almost certainly it would be, if only due to the potential for long term sea level rise (a ~0.8C increase clearly has not been very disruptive). What I do say is that a 5C increase is quite implausible based on the best available data…. that is not the same thing.

  14. Troy_CA,
    “Hopefully we’ll see more interaction between Forster and Lewis in the comments. ”
    I would not count on it. Remember that engaging the deni*rs is forbidden among climate scientists… and it doesn’t matter if those deni*rs have multiple peer reviewed publications or not. You see, actually debating the issues gives too much credibility to people who are by definition mistaken… it confuses the public to hear an alternative view as well.

  15. I see that Nic states that using GISS did not change his results appreciably. GISS’s 250 km version extrapolates up to 250 km, and that results in much less missing data from the Arctic region than HadCRUT4 or GHCN. They have a version that extrapolates at greater distances.

    As I recall, the Cowtan and Way global temperature trends were higher than but close to those from GISS. Also I recall that Cowtan and Way concentrated on a recent short period of the instrumental record in their paper.

  16. Robert, thanks for the paper.

    Are you guys ever going to move away from the 1997-2012 period? That’s not a very good choice of interval because of the big ENSO event in 1998(ish).

    Can you at least publish the 2002-2012 period as well?

  17. Hooray for Climate Sensitivity!

    This is the nub of the issue!

    Yay for a thorough Summary by Nic Lewis…

    Though not sure if his estimate is any better than anyone else’s… And I don’t think his reasoning shows it is a compellingly better estimate.

    Don’t really think we can take too much joy from this – he would need to model the impacts based on his best estimate of ECS to show they were benign or some such – don’t think that will happen.

    IPCC projections of impacts seem to underestimate the reality (e.g. sea level rise and Arctic ice loss).

    It’s good that his estimate lies in the range estimated by the IPCC, so I can’t understand why he thinks they hid the ‘good news’ – it’s a bizarre title.

  18. I do have some work on AIRS I can post. Given Robert Way’s setup it will be easier for them to use it than for us to use it. But the bottom line is that the Cowtan and Way and Berkeley Earth approaches to the Arctic appear more defensible than the HadCRUT approach.

  19. “Robert Way (Comment #126225)
    March 6th, 2014 at 3:33 pm

    The agreement between BEST and CW2014 is very strong using their preferred reconstruction method (e.g. treating sea ice as land) which our paper shows is optimal when compared to Arctic Buoy data.”

    One big difference between ice and land is that there is no major annual cycle of land disappearing and appearing.
    Your reconstruction therefore just swaps ice for water, and given the drop in overall ice area, gives enormous warming. A single storm smashing the ice would give a warming signal.
    Nice if you like that sort of thing I suppose.

  20. “Your reconstruction therefore just swaps ice for water, and given the drop in overall ice area, gives enormous warming.”

    water is typically warmer than ice…

  21. Steven Mosher:

    But the bottom line is that the Cowtan and Way and Berkeley Earth approaches to the Arctic appear more defensible than the HadCRUT approach.

    Not sure it’s definitely more defensible, at least for the Arctic. You can think of HadCRUT as a lower bound; I don’t know how to get an upper bound.

  22. Tave = (Tmax+Tmin)/2 is sort of OK as a metric for a 24-hour diurnal cycle, but for the poles, which are in a 4380-hour diurnal cycle, it is not so good; at the transition, the twilight zone, it is all a bit of a mess.
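A quick synthetic illustration of the point (the diurnal curve here is made up, chosen only to be asymmetric):

```python
import math

# (Tmax+Tmin)/2 equals the true diurnal mean only when the cycle is
# symmetric about its midpoint; an asymmetric cycle breaks that.
hours = range(24)
temps = [10.0 + 8.0 * math.exp(math.sin(2 * math.pi * h / 24)) for h in hours]

midrange = (max(temps) + min(temps)) / 2  # the Tave convention
true_mean = sum(temps) / len(temps)       # hourly average
print(round(midrange, 2), round(true_mean, 2))  # midrange overshoots the mean here
```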

  23. Carrick, our alternative, which uses SST under ice, would be a better lower bound.

  24. Doc

    “You reconstruction therefore just swaps ice for water, and given the drop in overall ice area, gives enormous warming. A single storm, smashing the ice would give a warming signal.”

    Not sure you understand.

    First, remember that the temperature is an index. Folks are already adding water temps to air temps.

    In the arctic where the ice cover varies then you have a choice.

    First off, where you have open water you use it.
    Next up, where you have land you use the air temp over land.
    Now, what to do about the ice, which is varying?

    A) Don’t infill.
    B) estimate the air temp over ice.
    C) use SST under ice.

    So folks need to wrap their arguments around those three choices. Note, in one sense there is no getting the temperature “right”, since we are adding air temps and SSTs. The issue with HadCRUT and option A) is the bias it produces.

  25. Steven Mosher (#126260) –
    You omitted option D) don’t use the region at all. It’s an index, as you point out. It’s still an index if one limits to +/-60 deg latitude. One might even say that it’s a more useful index, as relatively few folks live beyond those limits.
    Why get all wrapped up about how to estimate temperatures where there are no sensors and alternate sea/ice, when there’s no need to? Just mask it.

  26. I don’t particularly see why they’re so strident in their result. Unless it’s simply that they think they’re right…
    Does anyone think their method is superior?

  27. “I don’t particularly see why they’re so strident in their result.”

    Read the paper, and Nic’s other papers and you will hopefully notice that the sheer detail and objectivity that has gone into this is beyond anything on the climate market. It isn’t friendly to the PSI’s or the Real Climate crowd, just the facts.

  28. Jeff ID

    It’s a report, rather than a paper. It’s a summary. Unless you mean the published paper they reference?

    Why do you think it is better?

    I have also read they would get very different ECS values depending on the time period of data they use… That’s not so promising.

    Also, if ECS is around 1.5, what sort of impacts do we expect? Pretty similar, just delayed by another decade or so…

  29. Nathan,
    The document states multiple times that the observationally based estimates of TCR average ~1.35, not as you say “about 1.5”, while the models have an average TCR of ~1.75.
    .
    A delay of “another decade or so” means humanity has another decade or so to accurately define the sensitivity value and, more importantly, the likely consequences of warming. It also gives more time to a) develop greater wealth and resiliency among the world’s poorest, and b) implement sensible energy policies with less economic disruption.

  30. The observational evidence points to a TCR of closer to 2C.
    Take the warming from 1880 and the growth of CO2 from 1880 and it shows a TCR of 2C.
    Take the warming from 1960 and the growth of CO2 from 1960 and it shows a TCR of 2C.

    This is based on assuming the temperature response is proportional to log[CO2]. The proportionality means that the CO2 acts as a control knob and will pull water vapor along with it.

    Compensate for all the natural variability and the continuous plot turns into a straight line, with the slope on a semi-log graph given by TCR/log(2).
    http://imageshack.com/a/img823/7237/wif.gif

    I consider this the consensus science and what someone that follows the literature would do. Lewis does something different and thus gets a low-ball estimate.
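For what it’s worth, the scaling being described reduces to a two-line calculation. The CO2 and warming numbers below are round illustrative values of my own, not taken from the linked plot:

```python
import math

# Semi-log scaling: dT = TCR * log(C/C0) / log(2), so
# TCR = dT * log(2) / log(C/C0).
def tcr_from_endpoints(dT, c_start, c_end):
    """TCR implied by warming dT (K) over a CO2 change c_start -> c_end (ppm)."""
    return dT * math.log(2.0) / math.log(c_end / c_start)

# ~0.9 K of warming while CO2 rose from ~290 to ~395 ppm (illustrative values)
print(round(tcr_from_endpoints(0.9, 290.0, 395.0), 2))  # ~2.0
```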

  31. Nathan: “Also, if ECS is around 1.5, what sort of impacts do we expect? Pretty similar, just delayed by another decade or so…”
    .
    If ECS is 1.5 we will probably not see a 2C rise in temperatures above preindustrial levels this century. Why do you think impacts will be similar to what they would be if ECS is 3 C?
    .
    If ECS is 1.5 we would need to see CO2 quadrupled to reach 3C above preindustrial temperatures.
    .
    Are you saying that CO2 in the atmosphere will double in a decade or so?

  32. HaroldW (Comment #126265)
    March 7th, 2014 at 12:03 am

    “Steven Mosher (#126260) –
    You omitted option D) don’t use the region at all. It’s an index, as you point out. It’s still an index if one limits to +/-60 deg latitude. One might even say that it’s a more useful index, as relatively few folks live beyond those limits.”

    Harold, I think in the context of Nic’s paper and review it would be important to know the entire global temperature change. Cowtan and Way do show a very accelerated warming in the Arctic that becomes even more accelerated at the highest northern latitudes. Even though those areas make up a small percent of the globe area, the effect on global temperatures was significant. GISS extrapolates and infills just as Cowtan and Way do. Nic stated elsewhere that using GISS did not appreciably (my wording) change his result.

    It would probably be best to have Nic explain all this before the discussion gets totally off track. His review paper was very well written and easy to understand.

    Also I do not believe we see any mainstream climate scientists using Cowtan and Way or Best at this point in time. In fact most favor HadCRUT – which would be my last choice.

  33. “A delay of “another decade or so” means humanity has another decade or so to accurately define the sensitivity value…” Furthermore, according to many economists, the price of renewables is likely to drop below fossil fuels, _really_, around mid-century. At that point or before, most of us are going to switch to the cheaper alternative on our own. This is not forever. The question is, how much harm will that half-century cause, short- and long-term? That depends on the sensitivity. If it’s low enough, doing nothing about mitigation becomes more attractive.

  34. “it’s simply they think they’re right…”

    Boy, there’s been none of that happening the last 20 years about AGW.

    Andrew

  35. If you read Nic’s coauthored paper you see that it talks about the IPCC and the difficulty of keeping the politics and science separate. The consensus scientists are probably going to be silent about this matter and let the IPCC political conclusions stand even when not supported by scientific evidence. With that in mind, the findings and analyses of an independent climate scientist like Nic are important. Going against the grain in such a public setting is not for the faint of heart or the lesser informed.

  36. I would have thought that the title of Nic Lewis’ report is unwise. “How The IPCC hid the Good News on Global Warming”. He is a peer-reviewed scientist with an important message, and he can have a very big impact on what the public views as the “mainstream” of climate science. The 97%. Or, he can take up arms against the warmists and get himself pigeonholed as a partisan. They are going to try to do that to him anyhow; he should be very careful to avoid giving them ammunition.
    Steve McIntyre has been so effective precisely because he never steps a toe out of line.

    I’d add that Robert Way has been very effective in the same way from the other side. Whatever nasty stuff he said in a private forum, they published their paper, came to the skeptic blogs and defended it, and did a very good job. They were far more convincing than they would have been if they were always talking about deniers.

  37. MikeR, using zippy titles is an art form. Perhaps they went a bit overboard, but it has virtually assured that his paper will be read.

    People in the climate community who want to be seen as part of “the crowd” will predictably write snotty looking replies. People defending the consensus will write more objective sounding comments without reading the paper (as happened with Piers Forster’s commentary, it seems).

    But a fair number will read the commentary and it will further undercut the current IPCC heavily politicized approach. While one can decide climate has a higher TCR than Nic is suggesting from his data, I don’t think the IPCC numbers are viable anymore, which is his real point. Mainstream people like James Annan have come down with lower values. Hopefully it will make it easier for them to comment on the ludicrousness of the situation.

  38. I guess I’d add that there are those of us for whom the “fat right hand tail” of the sensitivity is the most important part. Really catastrophic impacts, black swans, may be less likely, but they are the possible outcomes that can convince me to drop everything else and mitigate. Anything that chops off that right hand tail is important good news.
    AGW advocates tend to dwell on really catastrophic impacts for a good reason.

  39. MikeR:

    They were far more convincing than they would have been if they were always talking about deniers.

    I agree, as opposed to this stomach-turning exchange between the ever affable Dana Nuccitelli and Victor Venema.

    Victor Venema ‏@VariabilityBlog 14h
    @RogerPielkeJr stories are unbalanced & bad example but a different order of wrong as dragon slayers, WUWT or GWPF. @davidappell @dana1981

    Dana Nuccitelli
    ‏@dana1981
    @VariabilityBlog that is his goal after all, to position himself as ‘honest broker’ between deniers & ‘alarmists’
    6:01 PM – 6 Mar 2014

    Victor Venema ‏
    @dana1981 I know. While he is in between it is our job to make clear that he is not an honest broker.

    It’s their “job”??? WTF.

  40. Nathan:

    Also, if ECS is around 1.5, what sort of impacts do we expect? Pretty similar, just delayed by another decade or so…

    Are all the harms and costs linear? I doubt that. For example, a Miami Beach building may have a useful life of 40 years. If most of the beachfront is still there at the end of that time, taking it down is costless in one sense. Whereas, having to do it ten or 15 years earlier because the ocean is in the lobby is a major economic hit.

    (And of course, it goes without saying that we rule out the heretical possibility that a small warming is a net economic positive in favor of an unforgiving graphical slide to catastrophe which says that a small warming is merely a partial, delayed catastrophe.)

    A 2 degree rise by next Thursday is more costly than a 2 degree rise by the year 2214, no matter how you calculate it.

    A really slow pace to the magic harm point (2 degrees and up?) gives plants, animals, ecosystems and economic and political orders plenty of time to adapt. What always made the hype about harm from climate change scary was not just the exaggerated scale but the alleged rapidity, a pace that precluded less painful planned adaptations.

    And given that sensitivity is about non-linear (doubling) effects, lowering sensitivity presumably exponentially lowers impacts and/or exponentially increases the time required for their arrival.

    I think lower sensitivity and the end of climate porn politics means doing impact projections from scratch.

  41. kenneth

    “Also I do not believe we see any mainstream climate scientists using Cowtan and Way or Best at this point in time. In fact most favor HadCRUT – which would be my last choice.”

    Well you would be wrong.

    You have to read the fine print.

    Look at Gavin Schmidts latest paper on GCMs.

    “Figure 1 | Updated external influences on climate and their impact on the CMIP5 model runs. a, The latest reconstructions of optical depth for volcanic aerosols9,10 from the Mount Pinatubo eruption in 1991 suggest that the cooling effect of the eruption (1991–1993) was overestimated in the CMIP5 runs, making the simulated temperatures too cool. From about 1998 onwards, however, the cooling effects of solar activity (red), human-made tropospheric aerosols (green) and volcanic eruptions (pink) were all underestimated. WMGHG, well-mixed greenhouse gases. b, Global mean surface temperature anomalies, with respect to 1980–1999, in the CMIP5 ensemble (mean: solid blue line; pale blue shading: 5–95% spread of simulations) on average exceeded two independent reconstructions from observations (GISTEMP Land–Ocean Temperature Index (LOTI)6, solid red; HadCRUT4 with spatial infilling7, dashed red) from about 1998. Adjusting for the phase of ENSO by regressing the observed temperature against the ENSO index11 adds interannual variability to the CMIP5 ensemble mean (dashed blue), and adjusting for updated external influences as in a further reduces the discrepancy between model and data from 1998 (black). The adjusted ensemble spread (dashed grey) clearly shows the decadal impact of the updated drivers. As an aside, we note that although it is convenient to use the CMIP5 ensemble to assess expected spreads in possible trends, the ensemble is not a true probabilistic sample.”

    Here is the trick. They say they use HADCRUT but then note in a footnote a bias correction or spatial infilling.

    Same with another new paper coming out that I cannot share.

    The interim solution for scientists will be to “use” HADCRUT but then apply the COWTAN and WAY fixes. No reviewer will object to this because it gives them cover. They can say they used the HADCRUT data, and where its flaws make a difference they just add a footnote and use Cowtan and Way.

    That way nobody ever has to suggest that HADCRUT not be used.

    pretty sneaky

  42. WHT #126282,
    As usual, your analysis is nonsense. You ignore all other man-made forcings (equal to about half of CO2 forcing), as always. Please stop writing the same irrational things over and over.

  43. George,
    “I think lower sensitivity and the end of climate porn politics means doing impact projections from scratch.”
    .
    You may get some resistance on that… there are thousands of catastrophe papers already in print premised on high to very high climate sensitivity. Lots of people in the field, plus all the green NGOs, plus lots of famous green/progressive politicians, have a huge investment in the high CS paradigm, and especially the plausibility of extreme CS values. I would not count on any change in the consensus position unless it is forced upon the field by political reality. (“Sorry, we are going to have to cut your funding because your warming projections have consistently been the purest cr@p we have ever seen.”) IMO, ‘kicking and screaming’ is too weak a description of how the change is going to look. If control of the US Senate changes hands in the next election, the screws will start to tighten on CCS (not ‘Carbon Capture and Storage’, ‘Catastrophic Climate Science’).

  44. “HaroldW (Comment #126265)
    March 7th, 2014 at 12:03 am
    Steven Mosher (#126260) –
    You omitted option D) don’t use the region at all. It’s an index, as you point out. It’s still an index if one limits to +/-60 deg latitude. One might even say that it’s a more useful index, as relatively few folks live beyond those limits.
    Why get all wrapped up about how to estimate temperatures where there are no sensors and alternate sea/ice, when there’s no need to? Just mask it.”

    Masking it is option A). You are effectively arguing that the masked region warms as the whole warms.

    It might make more sense to do observational studies of ECS by separating out the SST, SAT, and OHC… not sure if that would make a difference or how you would do it.

  45. @ Carrick (Comment #126296)
    Dana and pals implying they are the honest people in the room, much less in the position to professionally point out the honesty (or lack of) in others (!)
    From people who pal around with Lewandowsky and whose friends dress up as cheesy WWII extras (!)
    rotf&lmfao.
    Thanks for a great Friday laugh.
    Cheers,
    hunter

  46. Carrick: “It’s their “job”??? WTF.”

    Of course he regards it as his “job” to “make clear Pielke jr is not an honest broker.” Did you believe he is in it to seek truth? 😉

  47. Lucia,
    Gavin says: “the ensemble is not a true probabilistic sample.”
    .
    Maybe that could be modified to be more informative: “The ensemble is a true probabilistic sample only of the beliefs and biases of the various modeling groups.”

  48. Steven Mosher (#126308)
    “Masking it is option A. you are effectively arguing that the masked region warms as the whole warms.”
    First, a quibble — it’s not quite the same as option A (no infilling). Masking excludes the entire region, not just the areas away from the few ground stations (or further than 1200 km in the case of GISS).

    As to arguing that “the masked region warms as the whole warms” — no, I was not trying to make such a patently false assertion. I understand the *mathematical* equivalence: assigning the global mean to areas which are otherwise N/A, and then computing the global mean, produces the same answer. My point is that there’s no particular reason why the index has to cover the entire globe. It’s an index, and at the risk of sounding like Humpty-Dumpty, the question is who is to be the master, us or the index? If you’re looking for Arctic effects, you don’t want to be using a global index anyway; you’d want a (northern) polar index.

    Yes, it would mean adjusting previous results because obviously a -60-to-+60 index will have a lower trend. But now it has a much purer observational character, without extensive extrapolation or arbitrary decisions such as sea or air temperatures where there’s seasonal ice. And isn’t that what we’re really looking for from an index? [To avoid the rhetorical-question police, my answer is: yes.]
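The bookkeeping for a masked index is simple enough; a sketch with cos-latitude area weighting (the zonal anomaly numbers are invented, purely to show the mechanics):

```python
import math

# Area-weighted mean of zonal-band anomalies, optionally restricted to a
# latitude window (e.g. -60..+60 for a masked index). Band values invented.
def banded_mean(band_anoms, lat_min=-90.0, lat_max=90.0):
    """band_anoms: list of (band-center latitude in degrees, anomaly in K)."""
    num = den = 0.0
    for lat, anom in band_anoms:
        if lat_min <= lat <= lat_max:
            w = math.cos(math.radians(lat))  # area weight for a thin band
            num += w * anom
            den += w
    return num / den

bands = [(-75, 0.1), (-45, 0.2), (-15, 0.3), (15, 0.3), (45, 0.4), (75, 1.2)]
print(round(banded_mean(bands), 3))              # full "global" index
print(round(banded_mean(bands, -60.0, 60.0), 3)) # masked index: lower here
```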

  49. SteveF,
    I think the way Gavin behaves is “They aren’t a probabilistic distribution unless letting people think they are is more convenient at the current moment in time.”

  50. I should note however: At least he is finally admitting that the trend that we have seen was not consistent with what the models actually projected. 🙂

    Of course, this has been obvious for some time, notwithstanding his insistence that the claim was “bogus”.

    Oddly enough, though, given that people are admitting warming was slower, we are now in a waiting game to see if the explanations pan out. Yes – lower solar would make a difference. And evidently, Gavin now sees part of the problem was “too much” Pinatubo but… well… we’ll see.

  51. Steven Mosher:

    Here is the trick. They say they use HADCRUT but then note in a footnote a bias correction or spatial infilling.

    If you don’t care about global, HADCRUT is a better product than GISTEMP in many ways, and certainly in terms of convenience of access to the gridded data.

    With GISTEMP you pretty much have to port the software to your platform and run it. (Tricky but not impossible.) Clear Climate Code was an option for a while… but they wrote the gridded output in some funky binary format. And it’s not supported as far as I can tell, so it breaks with newer versions of Python.

    (Which by the way is why I don’t use python—I don’t feel the desire to be constantly maintaining my legacy software to keep it running on the latest version of an interpreter.)

  52. Of course he regards it as his “job” to “make clear Pielke jr is not an honest broker.” Did you believe he is in it to seek truth? 😉

    Seems they think Pielke Jr. isn’t an “honest broker” and wish to illustrate that belief. That doesn’t make them liars–even if they’re wrong.

    Based on what I’ve read of Pielke Jr., they have a point.

  53. re HaroldW (Comment #126316) : I agree for the following reasons. The arctic is very sparsely instrumented. Interpolation is very suspect. Thus the effect of including polar climate change in the global index could be very misleading. Even if polar regions warm more than the rest of the globe, it is the rest of the globe where we all live. The panic over polar bears seems to me to be letting the tail wag the dog. If all that is going to happen with warming is the arctic melts in summer and polar bears perish (which they won’t by the way if we stop hunting them…) then “yawn”. And the arctic melting in summer does NOT raise sea level. Most of Greenland and Antarctica is high elevation and very very cold and not likely to melt. So adding the arctic into the global average has a purely political payoff, rather than being informative about what weather change most of us are or will see.

  54. Mosher, I’ve seen this infilled trick used before. Not sure if it was a paper or Tamino.

  55. Craig, they know all that. However, as there is no satellite data pre-79 they will be able to model that too and get the kind of profile they want for pre-79 reconstructions. Sea ice is now land.

  56. Craig Loehle,
    “The arctic is very sparsely instrumented. Interpolation is very suspect.”

    [RW] It is important to note that for a period of time the Arctic was fairly well-instrumented with many Arctic Buoys grounded in the sea ice etc… In our paper and the SI we compare the results of interpolation and using our hybrid approach to the Arctic Buoy data and atmospheric reanalysis and find very good results with both approaches, though interpolating sea-ice as land produces much more accurate results than interpolating sea-ice from SSTs. Many of my colleagues have weather stations operating on high Arctic ice caps and they have found that reanalysis products have performed very well for these very isolated stations.

    Now if you were to check the literature or to have looked into some of the data you might find that interpolation actually works very well at high latitudes with respect to temperature anomalies, though during the summer the interpolation ranges tend to be lower because of the heterogeneous nature of summer weather in the Arctic.

    I have actually validated our approach between the period 1880-1920 against BEST and Env Canada data for an isolated high latitude region where there are no contributing stations to the CW2014 product before 1920 but where there are stations active that are included in the BEST and Env Canada data. The datasets are nearly indistinguishable in terms of the annual anomalies leading one to conclude that the broad patterns are captured by the CW2014 approach.

    Finally, an important point is that both BEST and our data have been compared to the AIRS satellite data with very favorable results, indicating that the interpolation method used (i.e., kriging) is working well in the Arctic.

    “Thus the effect of including polar climate change in the global index could be very misleading. Even if polar regions warm more than the rest of the globe, it is the rest of the globe where we all live. The panic over polar bears seems to me to be letting the tail wag the dog. If all that is going to happen with warming is the arctic melts in summer and polar bears perish (which they won’t by the way if we stop hunting them…) then “yawn”. “
    “So adding the arctic into the global average has a purely political payoff, rather than being informative about what weather change most of us are or will see.”

    [RW] …well I am an Inuit from northern Canada. In my area people have been able to adapt to recent changes reasonably well because we have a road and airport, but in coastal Nunatsiavut there are no roads to and from the communities and sea ice keeps shipping from being able to transport goods in the winter. Therefore these local Inuit communities are completely reliant on hunting, fishing and firewood in addition to air lifted freight. Unfortunately, there have been significant impacts on people in these communities due to the recent regional warming of the past 15 years in particular. In the anomalously warm year of 2010 in one of the coastal communities, something like 1 in 5 people went through the ice on a snowmobile because of poor ice conditions. In other areas people have had difficulties accessing traditional hunting grounds and areas to collect firewood. For the Inuit and other residents of northern Canada these changes matter – even if they don’t to someone who lives in the mid-latitudes.

    “And the arctic melting in summer does NOT raise sea level. Most of Greenland and Antarctica is high elevation and very very cold and not likely to melt. “

    [RW] That depends. If sea-ice buttressing tidewater glaciers is removed it can lead to indirect impacts. More importantly taking away sea ice leads to warmer Arctic Ocean waters which in turn can have huge impacts on tidewater outlet glaciers. Remember it is warm ocean water that is the primary driver of mass losses in many high latitude outlet glaciers in Greenland and West Antarctica. I can provide you some references for this if you like.

  57. Boris,
    “Seems they think Pielke Jr. isn’t an “honest broker” and wish to illustrate that belief.”
    .
    A self-professed Obama voter and strong believer in GHG-forced warming is an odd candidate for ‘dishonest broker’. IMO, Pielke Jr. is about as honest a broker as you are going to find. The real issue is that he points out obvious errors/inconsistencies in claims of doom versus data, and that makes him just another ‘dene*er’ for the climate faithful.

  58. Robert Way,
    “Unfortunately, there have been significant impacts on people in these communities due to the recent regional warming of the past 15 years in particular.”
    .
    How many people are you talking about?

  59. Steven Mosher (Comment #126305)
    March 7th, 2014 at 10:54 am

    “Here is the trick. They say they use HADCRUT but then note in a footnote about a bias correction or spatial infilling

    The interim solution for scientists will be to “use” HADCRUT but then apply the COWTAN and WAY fixes. No reviewer will object to this because it gives them cover. They can say they used the HADCRUT data, and where its flaws make a difference they just add a footnote and use Cowtan and Way…

    ..That way nobody ever has to suggest that HADCRUT not be used.

    pretty sneaky”

    I am not at all clear here whether CW is noted in Gavin’s paper.

    Also, that tactic might be useful where HadCRUT agrees with a model – use it – and where it doesn’t, use it plus the CW correction. All this reminds me that I must go back and compare how the models handle the accelerated Arctic warming that CW finds and ratio it to the warming at lower latitudes. Currently the observed temperatures show a plateauing of the lower-latitude warming and an accelerated Arctic warming, particularly in CW’s data set. I am not sure of the details of how the models handle this, but I plan on comparing the CW gridded results with the GISS model. My analyses show the GISS models as the best of those from CMIP5 at getting the weather noise correct.

  60. re: Craig Loehle (Comment #126323)…

    It seems to me, the issue of a warming Arctic has less to do with polar bears and more to do with changes in albedo and the positive feedback created by increasing amounts of open ocean during the summer months.

  61. Carrick (Comment #126319)
    March 7th, 2014 at 1:28 pm

    “If you don’t care about global, HADCRUT is a better product than GISTEMP in many ways, and certainly in terms of convenience of access to the gridded data. ”

    Carrick, do you use the data in KNMI?

  62. Kenneth, possibly overkill, but…

    I download the published reconstructed fields from HadCRUT4 directly. The structure of their files makes it easy for me to process without having to write special scripts. They also publish ensemble data that lets you do Monte Carlo runs in addition to using the means, and you know I prefer Monte Carlo to assuming a particular probability distribution.

    If I want long-term data from individual stations, I usually go with GHCN or GHCNdaily.

    If I want recent temperature records for all stations near a particular location (and don’t care about homogenized records), I prefer weather underground. For atmospheric soundings I use http://weather.uwyo.edu/upperair/sounding.html

    I do use the KNMI site for the climate model output typically.
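
    [Ed.: the ensemble idea above can be sketched in a few lines. This is a toy example with synthetic “ensemble members” standing in for the 100 published HadCRUT4 realizations; the file download and parsing are omitted, and the trend and noise levels are invented.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the HadCRUT4 ensemble: each row is one realization of a
# global-mean annual anomaly series (a linear trend plus independent noise).
years = np.arange(1975, 2014)
true_trend = 0.016  # degC per year, purely illustrative
ensemble = true_trend * (years - years[0]) + rng.normal(0.0, 0.05, (100, years.size))

# Fit an OLS trend to each ensemble member. The spread of the fitted trends
# is a Monte Carlo estimate of the observational uncertainty, with no
# particular probability distribution assumed for the result.
trends = np.array([np.polyfit(years, member, 1)[0] for member in ensemble])
print(trends.mean(), trends.std())
```

    The same loop works unchanged on the real ensemble files once each member has been reduced to a global-mean series.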

  63. WHT:

    I really don’t have a problem with an estimate of ~2C TCR. I get ~1.5C-2.0C using my own homegrown method. I neither find your estimate nor Nic’s unreasonable. Climate science is science in slow motion. Only time will tell, eh?

  64. So is the consensus that even if you used something like Cowtan and Way it wouldn’t do much to change the values of Nic Lewis’ estimates of sensitivity?

  65. Robert Way, thanks again for the comments.

    One thing that strikes me is that, as you know, individual sites are incredibly noisy. I believe it’s a difficult problem to do well, but I think it would be made better with more reliance on regional scale weather models.

    I’m mainly troubled by the ad hoc nature of your hybrid model. Ad hoc models are useful constructs, but difficult to nail down modeling error on.

  66. HR:

    So is the consensus that even if you used something like Cowtan and Way it wouldn’t do much to change the values of Nic Lewis’ estimates of sensitivity?

    It would raise it, in my opinion, from about 1.5°C/doubling to about 1.7°C per doubling for a 70-year period.
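
    [Ed.: for a rough sense of where a number like that can come from, here is a minimal sketch using a first-order small-missing-area factor, written here as $latex 1 + (\mu_{\hbox{missing}} - 1)\,\Delta S/S$ so that equal trends give no correction. The values $latex \mu_{\hbox{missing}} = 6$ and $latex \Delta S/S = 0.025$ are the illustrative ones from earlier in the thread, not Carrick’s actual calculation.]

```python
# Illustrative first-order coverage-bias correction for a TCR estimate.
# Assumes the missing region warms mu_missing times as fast as the measured
# mean and covers fraction dS of the Earth's surface (small-area limit).

def corrected_tcr(tcr_measured, mu_missing, dS):
    """Scale a TCR estimate by the small-missing-area bias factor."""
    return tcr_measured * (1.0 + (mu_missing - 1.0) * dS)

print(round(corrected_tcr(1.5, mu_missing=6.0, dS=0.025), 2))  # 1.69
```

    Applied to a ~1.5 °C estimate, the factor gives roughly 1.69 °C per doubling, close to the 1.7 quoted above.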

  67. Robert Way (Comment #126329) thanks for your comments. One man’s “good match” to data is another’s not so good. Hansen has insisted he (GISS) does a great job of filling in missing high latitude data but many people don’t buy it, including me. One of the difficulties is that even if you guys did a great job, it is still the case that high latitude warming may give a false impression of how much the occupied part of the globe is warming (or will warm) since the arctic has been warming more.
    While your impressions of effects of warming on the Inuit are interesting, they are anecdotal. Parts of Alaska have been rapidly cooling and I don’t think your anecdotes cover that. As to the coastal villages, Steve McIntyre covered one of those cases, where the Inuit village was out on an exposed sand spit, not due to 1000 year tradition but because the government moved them there some decades ago. Such unstable locations are always subject to erosion and storms, no matter what happens with climate. Just like building on the coast in the US Southeast is not a great idea. It is also the case that warming might cause the Inuit short-term disruption, as for example when permafrost melts, but permafrost is not a social good per se, nor is hunting polar bears a sustainable way of life if the native population increases. My new paper in E&E shows why it is reasonable to believe that Canadian forests are likely to not only do fine but grow faster over the next 100 years.

  68. Fitzy, I am deeply shattered.

    So half again of the CO2 warming is due to an unnamed man-made warming not related to anthro-GHG’s.

    Piers Forster claims that the planet will show “significant warming out to 2100 of around 3C above preindustrial if we continue to emit CO2 at current levels.”

    It is kind of confusing when you read the Lewis paper and find that it is at odds with the researcher whose expert opinions Lewis relied on. So perhaps the definition of TCR needs reworking?

  69. Mosher: “You omitted option D) don’t use the region at all. It’s an index, as you point out. It’s still an index if one limits to +/-60 deg latitude. One might even say that it’s a more useful index, as relatively few folks live beyond those limits.”
    YES, it’s the more useful index.
    Also, to a greater degree than elsewhere, but not quantifiably, the Arctic melting and warming is due to albedo change from black carbon (soot). This makes it a less useful index because another fudge factor (assumptions about CO2 forcing vs. black carbon/albedo-change forcing) is introduced.

  70. WHT,
    Try to read better; nowhere did I suggest the other forcings were not GHGs. Of the 50% over CO2 forcing, most is methane, N2O, halocarbons, and tropospheric ozone (you know, like what the IPCC WGI says).

  71. Fitzy, Those get rolled into the CO2 climate sensitivity.

    If you don’t like that definition, you can start by changing the Wikipedia entry for Climate Sensitivity. Here you go, just hit the edit button.
    http://en.wikipedia.org/wiki/Climate_sensitivity#Radiative_forcing_due_to_doubled_CO2

    Start by changing the lead sentence to something you would rather see: “CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from feedbacks, positive and negative”

  72. [RW] … so many contradictions

    In my area people have been able to adapt to recent changes reasonably well because we have a road and airport [surely the road and airport are recent changes?]

    Therefore these local Inuit communities are completely reliant on hunting, fishing and firewood in addition to air lifted freight. [what did they rely on before airlifted freight?]

    Unfortunately, there have been significant impacts on people in these communities due to the recent regional warming of the past 15 years in particular. In the anomalously warm year of 2010 in one of the coastal communities, something like 1 in 5 people went through the ice on snowmobile because of poor ice conditions.
    [No, because they did not adapt to the changing conditions. A true Inuit would never go through areas of poor ice conditions; he would know to travel where the ice was better. Just like you wouldn’t walk on a thinly frozen pond]

    In other areas people have had difficulties accessing traditional hunting grounds and areas to collect firewood. [Please: less ice, more trees, more firewood to access. Less ice means more firewood, arrgh, and more room to fish.]

    For the Inuit and other residents of northern Canada these changes matter – even if they don’t to someone who lives in the mid-latitudes. [Yes, and if it warms it opens better access and more livability; perhaps they could hunt bison and elk instead of being hunted by polar bears [[whose numbers are increasing with the warmer weather, or did you forget]]]

  73. [RW] It is important to note that for a period of time the Arctic was fairly well-instrumented with many Arctic Buoys grounded in the sea ice etc
    I have actually validated our approach between the period 1880-1920 against BEST and Env Canada data for an isolated high latitude region where there are no contributing stations to the CW2014 product before 1920 but where there are stations active that are included in the BEST and Env Canada data.

    Technically speaking you have not validated your approach at all.
    You have compared some data [no, interpreted it] with very dubious means.
    I.e., how many stations did you actually use? Five or ten if you were lucky, for one small area of Canada, 130 to 90 years ago, and you have the gall, the unmitigated flummery, to dare suggest this is in any way a validation of your approach.
    Please tell us the number of stations, the type of temperature measurement they used, and then tell us what assumptions you made to get your kriged data to best resemble this tiny hodgepodge of data. And then to say it has any justification at all.
    No one can say what the temperature of the Southern Hemisphere was with any semblance of accuracy in that period, yet your method relies on comparing this data to that of the whole world.
    You should be ashamed to use the word validate in any way, shape or form in regard to small amounts of data from so long ago.
    It fits the fact that you keep claiming your method is reliable because it fits every part of the world when you “cross-reference” it.
    Let’s be blunt here: when something scientific fits perfectly, you have found the holy grail.
    Or, much more likely, something is wrong purely and precisely because it is so perfect. It fits with Africa, it fits with Europe, it always warms in the Arctic, and it can go back 120 years to even less real data than the few Arctic stations you have and it is still perfect. Your mate hasn’t got it yet but the penny will drop one day.
    Perfection demands the absence of perfection in a climate system, yet you have perfection everywhere, like the Emperor’s clothes.

  74. HaroldW & Craig,
    Excluding the Arctic isn’t quite that simple. The linear feedback equation is a statement of energy balance which should work on a global basis. One reason it has validity is that all of the internal heat fluxes are self-cancelling. If you try applying it on a latitude basis, you have to include heat flux terms between the latitudes – oceanic and atmospheric – which are not constant as temperature changes. Excluding the Arctic leaves any calculation open to the problem that the meridional heat fluxes change with temperature by an unknown amount. One can make assumptions to approximate such changes of course, but it is not straightforward.

  75. Paul_K (#126360): “The linear feedback equation is a statement of energy balance which should work on a global basis.”

    I disagree. We know the response varies geographically (most prominently with latitude, but also land vs. ocean). The full equilibrium response is
    $latex \Delta T(x) = \lambda(x)\, F$
    where $latex x$ represents surface location (latitude and longitude) and $latex F$ is the forcing.
    From this form, the usual ECS is the (area-weighted) mean value of $latex \lambda$ over all $latex x$. With a geographically limited temperature index, its ECS will be the mean value of $latex \lambda$ over its region.
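
    [Ed.: the area-weighted averaging can be made concrete with a short sketch. The λ profile below is invented purely to illustrate polar amplification; only the cos-latitude weighting and the ±60° masking follow the discussion.]

```python
import numpy as np

# Latitude band centers and a hypothetical sensitivity profile lambda(lat)
# that is larger toward the poles; the profile itself is made up.
lat = np.linspace(-87.5, 87.5, 36)
lam = 0.5 + 0.4 * np.abs(np.sin(np.radians(lat)))  # K per (W/m^2)

# Each band's area weight is proportional to cos(latitude).
w = np.cos(np.radians(lat))
w /= w.sum()

F_2XCO2 = 3.7  # W/m^2 per CO2 doubling (approx.)
ecs_global = (w * lam).sum() * F_2XCO2

# ECS implied by a geographically limited (+/- 60 deg) index.
mask = np.abs(lat) <= 60
ecs_limited = (w[mask] * lam[mask]).sum() / w[mask].sum() * F_2XCO2

print(ecs_limited < ecs_global)  # True: the limited index misses the high-lambda poles
```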

  76. I’ve backfit the highest resolution temperature estimates from the paleoclimate to the exact same timeline that the CO2 estimates are available from (and there are 2,560 from reliable methods).

    In other words, something approximating equilibrium CO2 sensitivity (holding all other factors constant – with Albedo being by far the biggest other factor – and solar in the distant past).

    Here are the results of the last 10,000 years, the last 200,000 years, the last 50 million years and the last 750 million years.

    http://s23.postimg.org/yn3htmr6j/Equil_CO2_Sensitivity_Last_10_Kys.png

    http://s14.postimg.org/wl8j1si5d/Equil_CO2_Sensitivity_Last_200_Kys.png

    http://s29.postimg.org/631jjig9j/Equil_CO2_Sensitivity_Last_50_Mys.png

    http://s29.postimg.org/tgcg4i2mv/Equil_CO2_Sensitivity_Last_750_Mys.png

    I think it is really a null (zero-sensitivity) result, given that the range is +/- 40.0C and there is no logic to the patterns.

    And/or one needs very accurate estimates for Albedo and the other forcings in order to use the Paleoclimate to estimate CO2 sensitivity. It also means that anyone can make up any number they want just by changing the assumptions on the other forcings (see any of Hansen’s recent paleo papers).

  77. Boris,

    You read of Pielke, Jr.? That’s a fine way of making a judgement. Try reading what he’s written before you write him off so casually. I suggest The Honest Broker or The Climate Fix.

  78. WHT,
    If you are going to relate historical warming to radiative forcing by man-made GHGs to evaluate climate sensitivity (essentially an energy balance calculation), then you have to include the contributions of all the man-made GHGs, not just CO2. It has nothing to do with the definition of the climate response to a doubling of CO2 (which is equal to about 3.7 watts per square meter of forcing). Current total forcing from man-made GHGs is about 3.1 watts/m^2, well over 80% of the forcing from a doubling of CO2. When you ignore the other GHGs, you grossly overestimate the sensitivity. The real uncertainty is in the magnitude of offsetting aerosol effects, which according to AR5 have been revised downward based on better measurements. As Nic Lewis has pointed out, the best measured values for aerosol influence are even lower than what AR5 says, so the most probable sensitivity value is even lower… somewhere south of 2C per doubling.

  79. Paul_K (Comment #126360) I did not say to leave the arctic out of the models, but out of our index of how much the earth is warming.

  81. As long as we are discussing what to leave out of an index, I propose leaving everything out except cell 247.

    And now you know the problem with post hoc decisions about what to include and what not to include.

    For a long time people have criticized indices because of sparseness. What if we are missing cold spots?

    A whole pile of crap on the great thermometer dropout was devoted to it.

    Here is what I find: the more data we uncover from the past, the more we find a past cooler than we thought and a present warmer than we thought.

    It could have been otherwise.

    Also, we criticized the CRU methods harshly. As we improve on them, starting with JeffId, we find one consistent thing: improving methods using more information leads to a world warmer than we thought. More data, better methods… it’s a good thing.

  84. Craig Loehle (Comment #126369) …”I did not say to leave the arctic out of the models, but out of our index of how much the earth is warming.”

    Um, then would that not ensure that observed trends don’t match models? What are the models attempting to model if not global (truly global) surface temperature?

  85. Boris,

    You read of Pielke, Jr.? That’s a fine way of making a judgement. Try reading what he’s written before you write him off so casually. I suggest The Honest Broker or The Climate Fix.

    I have read what Pielke’s written.

  86. Fitzy, The shorthand of CO2 sensitivity includes all the other GHGs bundled into the definition. You may not like it but that’s the way it is. The scientists decided long ago that they needed a single metric that they could present and not have to qualify it with estimates of sensitivity of all the other GHGs. And since CO2 is the control knob for the biggie GHG H2O, this makes logical sense.

    You can argue this all you want but this is the conventional wisdom that permeates the common knowledge of climate science.

  87. WHT,
    “The shorthand of CO2 sensitivity includes all the other GHGs bundled into the definition.”
    .
    My first reaction to this nonsense was: ‘He must be joking with me’. But after reviewing some of your earlier comments, I tend to believe that you are just very, very confused. No wonder all your calculations and historical curve fits to estimate climate sensitivity are wacko. The correct definition is (from Wikipedia):

    The equilibrium climate sensitivity (ECS) refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric (equivalent) carbon dioxide concentration (ΔTx2).

    .
    No talk there of rolling forcing from N2O, methane, halocarbons, and tropospheric ozone into the calculation. No talk of aerosols, ocean heat uptake, or anything else either, because none of those things are part of the definition.
    .
    Equivalently, the climate sensitivity can be stated as the temperature response to unit radiative forcing, with units normally degrees (C or K) per watt per square meter of forcing. This differs from the ‘degrees per doubling of CO2’ value defined above by a constant factor of 1/3.7, because doubling of CO2 generates ~3.7 watts/square meter of forcing. You really need to get your head wrapped around the basic definitions before going off into the weeds doing nutty curve-fits that generate bizarre conclusions about climate sensitivity. It might also help for you to be sure you understand heat balances, which I suspect from your many comments you do not understand.
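
    [Ed.: as a sanity check on that conversion, using the ~3.7 W/m² per doubling figure above:]

```python
F_2XCO2 = 3.7  # W/m^2 of radiative forcing per doubling of CO2 (approx.)

def per_doubling_to_per_unit_forcing(s_doubling):
    """Convert a sensitivity in degC per doubling to degC per (W/m^2)."""
    return s_doubling / F_2XCO2

# A 3.0 degC-per-doubling sensitivity is about 0.81 degC per W/m^2.
print(round(per_doubling_to_per_unit_forcing(3.0), 2))  # 0.81
```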

  88. SteveF, MTCE is a commonly used shorthand for megaton carbon equivalent. Your definition is probably including other compounds.

  89. Mosher: Given the consistent choice of all groups to rely on SST rather than marine air temp, no temperature index is going to be free of potential post hoc bias. The most appropriate choices may depend on what your index is used for. Paul correctly points out that anyone interested in energy balance cannot afford to ignore the Arctic.

    However, if one is interested in calculating economic damages and in optimal public policy, one might want to underweight the Arctic to the point where the current disagreements are unimportant. (Ignoring SLR for the moment.) When I read that there will be more warming at night and in the winter and in polar regions, I think the impact will be less than if warming were equal everywhere. Maybe one could have an index for public policy purposes where grid cell anomalies are weighted by population, and perhaps average high temperature (above 30 degC?) and lack of rainfall. Such an index would be highly subjective, more economics than science. Economists do not believe – as scientists (like you) do – that a purely objective best answer always exists. It doesn’t take a lot of thought to realize that the differences between HadCru and BEST for calculating 2 degC above pre-industrial climate are trivial compared with the gross misuse of either for public policy purposes.
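
    [Ed.: the population-weighted idea is simple to state in code. The numbers here are invented toy values, chosen only to show how the weighting downweights a sparsely inhabited, fast-warming Arctic cell; choosing real weights is exactly the subjective part flagged above.]

```python
import numpy as np

# Toy grid of four cells: temperature anomaly (degC), population, and area.
# The last cell plays the role of a fast-warming but nearly empty Arctic cell.
anom = np.array([0.20, 0.30, 0.25, 1.20])
pop  = np.array([5e6,  8e6,  3e6,  1e4])
area = np.array([1.0,  1.0,  1.0,  0.5])   # relative cell areas

area_weighted = (area * anom).sum() / area.sum()
pop_weighted  = (pop * anom).sum() / pop.sum()

# Population weighting all but ignores the Arctic cell, so the "policy"
# index comes out below the plain area-weighted index in this example.
print(round(area_weighted, 3), round(pop_weighted, 3))
```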

  90. Craig Loehle,

    I did not say to leave the arctic out of the models, but out of our index of how much the earth is warming.

    Fair enough, Craig. I can consult local temperature records to decide how much my particular bit of the world has warmed over the last 150 years. (Not much.)
    I am only cautioning in a friendly way against the definition of a temperature index that has a diminished utility as a measure of global effect. I have absolutely no objection to you or anybody else defining and using such an index, providing its limitations are clearly stated.

  91. Fitzy, I know that it is confusing:

    “Radiative modeling analyses of the terrestrial greenhouse structure described in a parallel study in the Journal of Geophysical Research (Schmidt et al., 2010) found that water vapor accounts for about 50% of the Earth’s greenhouse effect, with clouds contributing 25%, carbon dioxide 20%, and the minor greenhouse gases (GHGs) and aerosols accounting for the remaining 5%”

    http://www.giss.nasa.gov/research/briefs/lacis_01/

    When the ECS was estimated at around 3C back in 1979 for the Charney report, most of these factors were already considered as contributing to the sensitivity. Again you may not like it but this is the way it is defined.

    If it were just CO2 alone, we would be stuck at 1.2C for an ECS, but since CO2 does in fact act as a control knob, pulling along H2O for a fast positive feedback ride, this is what we have to contend with — an effective climate sensitivity where we use CO2 as an indicator of how far along we are into the 3C sensitivity.

    I am apparently just as “wacko” as the consensus climate science community. Given that, I actually do not mind at all that you call me wacko.

  92. HaroldW,
    Hi again Harold.
    Your equation form is only valid for equilibrium. The energy balance it derives from looks like this:-

    $latex \hbox{Net radiative flux} = \hbox{Forcing} - \Delta T/\lambda$

    The solution defines a transient relationship between temperature and time. As the system approaches steady state, the net flux goes to zero and the relationship becomes $latex \Delta T = \lambda \cdot \hbox{Forcing}$. Which is your starting point, I think.

    If you want to apply an energy balance to any region less than the full global surface, the energy balance equation gets a lot more complicated.
    It becomes:-
    $latex \hbox{Total heat flux into region} = \hbox{Net radiative flux} + \hbox{Adjacent heat flux} = \hbox{Forcing} - \Delta T/\lambda + \hbox{Adjacent heat flux}$, where the adjacent heat flux is the oceanic and atmospheric flow from neighboring regions.

    Now if you know the heat flux terms between the regions you can calculate the values of lambda as you suggest.

    An alternative way of showing that there is a conceptual problem with what you suggest is to consider as one of your regions a part of the tropics which has a very low positive or even a negative value of lambda; in such a region the incoming radiative flux is partially balanced by the large heat flux flowing out of the tropics.
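
    [Ed.: the transient relationship mentioned above can be made concrete with a one-box version of the same balance. This is a standard toy model; the heat capacity, λ, and forcing values are illustrative, not Paul_K’s.]

```python
# One-box energy balance: C * dT/dt = F - T/lam.
# As t grows, the net flux F - T/lam goes to zero and T approaches lam * F.
C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (illustrative)
lam = 0.8   # sensitivity, K per (W/m^2) (illustrative)
F = 3.7     # step forcing, W/m^2 (roughly a CO2 doubling)

dt = 0.01   # time step in years
T = 0.0
for _ in range(int(500 / dt)):   # integrate 500 years forward (forward Euler)
    T += dt * (F - T / lam) / C

print(abs(T - lam * F) < 1e-3)  # True: essentially at the steady state lam*F
```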

  93. WebHubTelescope (Comment #126418)

    ” . . . CO2 is the control knob for the biggie GHG H2O . . .”.

    The supposed water vapor feedback has very little to do with water being a GHG – almost all of it is attributed to changes in the altitude and fluffiness of clouds which the models somehow imagine.

    With the amount of time you spend waffling on the internet about CAGW it’s amazing how myopic you are regarding the basic details. Kindly read up on the subject – it would save everyone a lot of time.

  94. WHT,
    The influence of water vapor, which is indeed a greenhouse gas, has nothing to do with ignoring the man-made GHGs other than CO2. Ignoring those contributions to total forcing means that you are consistently overestimating the influence of CO2. Nobody, including all of mainstream climate science, does this. Except WHT. It is plainly wrong, and simply shows your lack of understanding. But it is now clear you lack either the ability or the desire to understand this most basic of concepts, so it would only be a waste of my time and yours to discuss this any further. You can lead a horse to water but you can’t make him drink. Adieu.

  95. “Fast positive feedback” for H2O and a “>17 year hiatus” do not fit together. So the ocean ate it, the wind ate it, the volcanoes that did not erupt ate it, the volcanoes that erupted ate it, there is no pause, the sun ate it, etc.
    At least some of the climate obsessed trying to explain it away are trying something new, however inconsistent. Angry obsessives are trying to deal with the problem by trying to silence skeptics or by calling skeptics names more loudly.
    The slowest of the obsessed seem to be stuck on just repeating the consensus view over and over.

  96. hunter

    At least some of the climate obsessed trying to explain it away are trying something new, however inconsistent.

    On the one hand, it’s fair enough to try to explain it in a way that doesn’t require someone to throw away notions previously held. For example: if the lights go out in my house, I tend to look for explanations other than “electricity and magnetism no longer work”. I check whether the circuit breaker flipped, then find out if the neighbors’ lights are out… downtown? And so on. All this “hunting” takes longer in climate science. Data also trickle in more slowly.

    But of course, “climate modeling not very reliable” isn’t on the same level as “electricity and magnetism works as defined by….”.

    In three or four years, we should see whether warming resumes anywhere near projected rates. Absent very large stratospheric eruptions (which one detects at the time of the eruption), none of “volcanoes”, “in the ocean” or “low solar” can really explain a failure to resume somewhere near projected rates, because we supposedly are already at “low solar”, “highish volcano masking” and “lots hiding in the ocean”. So those can’t be reasons for continued slow warming.

    Man made aerosols could be a reason for continued slow warming if some countries emit more and more and more.

  97. FergalIR (#126420)

    You quote WHT (#126418)
    ” . . . is the control knob for the biggie GHG H2O . . .”

    I can’t see where he actually said that.


  98. You quote WHT (#126418)
    ” . . . is the control knob for the biggie GHG H2O . . .”

    I can’t see where he actually said that.

    Of course I said that. I said that because I read the research of scientists such as Andrew Lacis, who explain how the CO2 mechanism works as a controlling agent:
    http://www-atm.damtp.cam.ac.uk/people/mem/co2-main-ct-knob-lacis-sci10.pdf

    “…it is clear that CO2 is the key atmospheric gas that exerts principal control over the strength of the terrestrial greenhouse effect. Water vapor and clouds are fast-acting feedback effects, and as such are controlled by the radiative forcings supplied by the noncondensing GHGs.”

    One can wind the CO2 back from the pre-industrial level of 280 ppm to the 1 ppm or so where it ceases to have a radiative impact, and so account for the full 33C of warming that the Earth experiences due to its greenhouse gases.

    To say that the effect has simply stopped in the year 2000 due to observed compensation of natural variation is quite presumptuous and more an assertion than a fully fleshed out model on the part of skeptics. OTOH, I am not asserting anything, just reiterating my understanding of the research literature and trying to model the recent observations within this context, which is why I developed a simplified representation that tries to account for all the possible factors
    http://contextearth.com/2013/10/26/csalt-model/

  99. Lucia,
    “On the one hand, it’s fair enough to try to explain it in a way that doesn’t require someone to throw away notions previously held.”
    .
    Sure but not to the point where previous notions are never revised. Reading the transcript of Ben Santer’s presentation to the American Physical Society committee (revision of the APS statement on climate change), I was actually a little embarrassed for him; he is so twisted into a pretzel rejecting the obvious (the models have serious problems which lead to too much projected warming) that I don’t think he can think straight any more. At some not too distant date folks like Santer are going to have to revise their thinking on models, or accept a rapid decline in scientific relevancy. The day of reckoning is near.
    .
    By coincidence, I am at OHare awaiting a flight to Tokyo…. Chicago is too cold for my tastes. My massive carbon footprint over the next 16 hours may help warm Chicago a few microkelvin next winter. 😉

  100. Another Water Vapor chart which is, perhaps, more important.

    A scatter plot of water vapor versus temperature going back to 1958. Yes, water vapor increases as temperature increases (as the Clausius-Clapeyron relation and global warming theory say), but it is only increasing at 4.14% per 1.0C of change, versus the 7.0% in the theory and in the C-C relation.

    That is enough change to drop the climate sensitivity from 3.0C per doubling to 1.8C per doubling (holding the other feedbacks the same as in IPCC AR5).

    http://s21.postimg.org/5g73ffe87/Temps_vs_PCWV_Scatter_1958_Feb14.png
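
    Bill’s sensitivity arithmetic can be checked with a toy feedback calculation. This is a hedged sketch: the 3.7 W/m2 forcing and 3.2 W/m2/K Planck response are standard, but the 1.6 W/m2/K water-vapour feedback (and the residual feedback chosen to reproduce 3.0C) are my own illustrative assumptions, not Bill’s inputs:

```python
# Back-of-envelope check of how scaling down the water-vapour feedback
# lowers equilibrium climate sensitivity (ECS). The feedback values are
# illustrative assumptions in the AR5 ballpark.

F2X    = 3.7   # W/m^2, forcing per CO2 doubling
PLANCK = 3.2   # W/m^2/K, Planck (no-feedback) response
WV     = 1.6   # W/m^2/K, assumed water-vapour feedback

def ecs(other_feedbacks, wv_scale=1.0):
    """ECS in K per doubling, with the water-vapour feedback scaled by wv_scale."""
    return F2X / (PLANCK - WV * wv_scale - other_feedbacks)

# Pick the remaining (cloud, lapse-rate, albedo...) feedbacks so that the
# unscaled case reproduces a 3.0 K sensitivity:
other = PLANCK - WV - F2X / 3.0

print(f"unscaled ECS:          {ecs(other):.2f} K")
print(f"WV scaled by 4.14/7.0: {ecs(other, 4.14 / 7.0):.2f} K")
```

    With these assumed numbers the scaled case comes out near 2 K rather than exactly the 1.8 quoted above; the answer depends entirely on how large a water-vapour feedback one assumes, so treat this as arithmetic, not physics.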

  101. Bill Illis,

    If you correlate the measured column water vapor (Satellite) against lagged ENSO, you can determine how to “adjust” water vapor for the influence of ENSO and so reduce weather noise. What is left can be plotted against global average surface temperature.
    .
    One point: it is true that C-C says ~7% per degree, but you need to keep in mind that the global warming which has taken place is concentrated in the high latitude north. This area is cold, so the atmospheric water vapor increase (in an absolute sense) is relatively small per degree compared to the concentration of water vapor in the tropics. If you calculate a global average total column water vapor that is area weighted, the contribution to the average atmospheric water from warming at high latitudes will always be much less than a comparable warming at low latitudes. I suspect this may explain much of the discrepancy between the C-C 7% per degree value and the measured increase of ~4%.
    .
    The model/reality discrepancy is more likely the result of incorrect model treatment of clouds and aerosols than a lack of increase in atmospheric water vapor. Net cloud feedback is ‘parameterized’ as atmospheric water vapor increases, rather than calculated from first principles, and so is easy to turn into a kludge that generates (the desired!) high sensitivity as an ‘emergent property’. Climate sensitivity in the models is anything but an emergent property; it is pretty much baked into the cake when choices for cloud feedbacks are made.
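
    SteveF’s area-weighting argument is easy to illustrate with a two-box toy model. The column-water-vapour values and area fractions below are round illustrative numbers of my own, not measurements:

```python
# Toy two-box illustration: even if water vapour follows Clausius-Clapeyron
# locally at ~7 %/K, warming concentrated in a cold region barely moves the
# GLOBAL column-water-vapour average.

TROPICS_W, TROPICS_AREA = 50.0, 0.7   # kg/m^2 column water vapour, area fraction
ARCTIC_W,  ARCTIC_AREA  =  5.0, 0.3

def global_mean(w_trop, w_arct):
    """Area-weighted global-mean column water vapour."""
    return TROPICS_AREA * w_trop + ARCTIC_AREA * w_arct

base = global_mean(TROPICS_W, ARCTIC_W)

# Case 1: 1 K of warming everywhere, 7 %/K locally -> global mean rises 7 %.
uniform = global_mean(TROPICS_W * 1.07, ARCTIC_W * 1.07)

# Case 2: 1 K of warming in the cold box only (global-mean warming is 0.3 K).
arctic_only = global_mean(TROPICS_W, ARCTIC_W * 1.07)

print(f"uniform warming:     {100 * (uniform / base - 1):.2f} % more water vapour")
print(f"arctic-only warming: {100 * (arctic_only / base - 1):.2f} % total, "
      f"{100 * (arctic_only / base - 1) / 0.3:.2f} %/K of global-mean warming")
```

    Per kelvin of global-mean warming, the arctic-only case raises the global water-vapour burden by under 1 %/K, far below the local 7 %/K rate, so concentrating recent warming at high latitudes pushes the global number in the direction Bill observed.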

  102. Heat of vaporization of H2O is 40.68 kJ/mol. At room temperature 300K, this puts the delta change in partial pressure closer to 5.5% than 7% per degree C.

    dP/P = (H/kT) * dT/T

    Go back to the fundamentals.
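
    The arithmetic in the preceding two comments is easy to check. A minimal sketch of d(ln p)/dT = L/(R T^2), using the 40.68 kJ/mol from the comment above and standard constants:

```python
# Clausius-Clapeyron: d(ln p)/dT = L / (R * T^2),
# the fractional change in saturation vapor pressure per kelvin.

R = 8.314      # J/(mol K), gas constant
L = 40680.0    # J/mol, heat of vaporization of water (value from the comment)

def cc_fractional_change(T):
    """Fractional change in saturation vapor pressure per 1 K at temperature T (K)."""
    return L / (R * T * T)

for T in (275.0, 288.0, 300.0):
    print(f"T = {T:5.1f} K -> {100 * cc_fractional_change(T):.2f} %/K")
```

    At 300 K this gives roughly 5.4 %/K, while at colder temperatures it climbs toward 6 %/K; the often-quoted 7 %/K also reflects the larger (temperature-dependent) L near 0 C, so both numbers in the thread are defensible for their chosen conditions.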

  103. WHT:

    To say that the effect has simply stopped in the year 2000 due to observed compensation of natural variation is quite presumptuous and more an assertion than a fully fleshed out model on the part of skeptics.

    This is some pretty remarkable spin. It is not that a forcing from a 3 degree sensitivity has stopped but that forcing at that level was never happening in the first place. One does not need to build a new model before noting that the performance of an existing model sucks.

    Your breezy repetition of percentage weights of forcing factors from Lacis, affirming his membership in The Consensus to Revkin, is not exactly dispositive. That the models (a) overstate the forcing from CO2 and (b) do a lousy job of accounting for cloud formation is a far more reasonable summary of the current state of climate science than pretending that the models are merely a tweak or two away from perfection.

  104. WHT,
    Wrong again. Go to the Wikipedia page on C-C. The increase is very close to 7% per degree, and the wiki page references the IPCC.

  105. Let’s start with an ice-ball Earth with no CO2 or other non-condensing GHG gases; low sea levels and mountains of ice at the poles and on the mountain tops.
    No clouds, so a full 340 W/m2 on average hits the Earth’s surface.
    At the equator this is a full 1360 W/m2.
    Few high altitude clouds, more solar in.
    Lower lapse rate.
    Reduced hydrologic cycle, water not transported such a great distance.
    Lots of dust, as less rain, lots of dusty ice.
    Can we model it?
    I think not.

  106. WHT (#126432) “Of course I said that” (“biggie GHG H2O” &c)
    Not in #126418 you didn’t.
    (I’m objecting to FergalR’s incorrect use of quotes. Maybe you don’t think that matters).

  107. Some claim that Clausius-Clapeyron only applies to the partial pressure over the liquid medium that the vapor is in quasi-equilibrium with, and thus to the temperature of that liquid. Since the global temperature is rising at a TCR of ~2C per doubling WRT CO2, of which land contributes about 3C and SST about 1.5C:
    2C ~ 0.7*1.5 + 0.3*3

    Then the global water vapor level does not look too much out of line if one uses the SST value.

    I believe in the physics and am skeptical of any claims that the macro characteristics are not following fundamental laws.
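
    WHT’s weighting can be checked in two lines, plus one more for the implication he draws. The 7 %/K Clausius-Clapeyron rate is an input assumption here, not a result:

```python
# Area-weighted decomposition of a ~2 K global TCR into ocean and land parts,
# as in the comment above: 70 % ocean at 1.5 K, 30 % land at 3 K.

OCEAN_FRAC, LAND_FRAC = 0.7, 0.3
SST_TCR, LAND_TCR = 1.5, 3.0

global_tcr = OCEAN_FRAC * SST_TCR + LAND_FRAC * LAND_TCR
print(f"area-weighted global TCR: {global_tcr:.2f} K")

# If column water vapour tracks SST (not global T) at ~7 %/K, then per kelvin
# of GLOBAL warming the apparent rate is diluted by the SST/global ratio:
apparent = 7.0 * SST_TCR / global_tcr
print(f"apparent WV rate vs global T: {apparent:.2f} %/K")
```

    That moves the expected rate part of the way from 7 %/K toward the observed ~4 %/K, which is the sense in which the global number “does not look too much out of line” against SST.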

  108. Inhomogeneous forcing and transient climate sensitivity
    Drew T. Shindell
    Nature Climate Change , (2014) | doi:10.1038/nclimate2136.
    PUBLISHED ONLINE: 9 MARCH 2014

    “Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range taking into account the reduced warming rates during the past 10–15 years during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (<1.3 °C) is very unlikely.”

  109. R.Way #126329

    Unfortunately, my comments tend to be hit-and-run owing to time constraints and the following observations have been made at least once before (somewhere!).

    I have no expertise at all with temperature datasets and unfamiliar datasets can lead to all kinds of mistakes. My impression however is that by no means all stations on the ‘Arctic periphery’ are showing the rapid warming driving your interpolations? (Details in your paper would have helped!)

    What I am much more confident about is the nature of kriging and its sensitivity to outliers. It is one of many multivariate statistical techniques based upon (linear) projection operators. These are all open to being ‘hijacked’ by a small proportion of the data. A telling example is MBH98, since singular value decompositions (principal components if you must) are also projections (as in operators). MBH reconstructions are effectively dominated by 1 dimension out of over 1000 possible.

    Use of anomalies (likely to inflate ratios relative to those for actual temperatures) and a kernel/variogram of the form exp(-d) probably do not help in this (why not exp(-d^2)?).

    Cross validation is important in assessing any interpolation, but your reference to one historical fit for the Arctic does rather underline the problems in this regard.
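
    OAS’s parenthetical about exp(-d) versus exp(-d^2) kernels can be made concrete: a Gaussian kernel cuts off distant stations far more sharply, which is one lever on how much a remote outlier can ‘hijack’ an interpolation. A minimal sketch, with distances in correlation-length units; nothing here reflects the actual Cowtan and Way variogram:

```python
import math

def exp_kernel(d):
    """Exponential kernel, exp(-d)."""
    return math.exp(-d)

def gauss_kernel(d):
    """Gaussian kernel, exp(-d^2)."""
    return math.exp(-d * d)

for d in (0.5, 1.0, 2.0, 3.0):
    print(f"d = {d}: exp(-d) = {exp_kernel(d):.4f}, exp(-d^2) = {gauss_kernel(d):.6f}")
```

    At three correlation lengths the exponential kernel still gives a station roughly 5% weight while the Gaussian gives it essentially none (a factor of ~400 difference), which is why the kernel choice matters for outlier influence.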

  110. @comment 126221 of mine
    http://www.climate-lab-book.ac.uk/2014/gwpf/#comment-104254
    Ed Hawkins made a remarkable suggestion over there.
    “I don’t know how much difference it would make to the TCR best estimate – agreed it would be slightly negative – but it would also probably widen the uncertainty in TCR in BOTH directions as the role of variability would be deemed larger.”
    Am I wrong to think that that cannot happen? My uneducated guess would have been that even if the variability goes up, the median must go down faster. Does anyone have an example?
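
    The “does anyone have an example?” question can be probed with a toy Monte Carlo for TCR = F2x * dT / dF. All the distributions below are invented for illustration; the point is only the qualitative behaviour when the dT estimate is revised down but its variance is enlarged:

```python
import random
import statistics

random.seed(0)
F2X, N = 3.7, 50_000

def tcr_samples(dt_mean, dt_sd):
    """Sample TCR = F2x * dT / dF with illustrative Gaussian uncertainties."""
    out = []
    for _ in range(N):
        dt = random.gauss(dt_mean, dt_sd)   # temperature change, K
        df = random.gauss(2.0, 0.25)        # assumed forcing change, W/m^2
        out.append(F2X * dt / df)
    return sorted(out)

def summary(samples):
    """Median and empirical 95% range of a sorted sample."""
    return statistics.median(samples), samples[int(0.025 * N)], samples[int(0.975 * N)]

narrow = tcr_samples(0.80, 0.08)   # original: warmer, tighter dT
wide   = tcr_samples(0.75, 0.18)   # revised: cooler dT, larger variability

for name, s in (("original", narrow), ("revised", wide)):
    med, lo, hi = summary(s)
    print(f"{name}: median {med:.2f} K, 95% range [{lo:.2f}, {hi:.2f}] K")
```

    In this toy setup the revised case has a lower median but a wider 95% interval on both ends, so Ed Hawkins’s suggestion is at least arithmetically possible: the extra variance fattens both tails faster than the downward shift pulls in the upper one.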

  111. This appears to be the crux of the argument:

    “Only in recent years has it become possible to make good empirical estimates of climate sensitivity from observational data such as temperature and ocean heat records. These estimates, published in leading scientific journals, point to climate sensitivity per doubling most likely being under 2°C for long-term warming, and under 1.5°C over a seventy-year period. This strongly suggests that climate models display too much sensitivity to carbon dioxide concentrations and in almost all cases exaggerate the likely path of global warming.”

    But that assertion doesn’t really get you anywhere. One could easily write the same paragraph with the opposite conclusion, i.e., “The findings of physics-based climate simulations consistently find a higher climate sensitivity, suggesting the observational studies’ methodology in almost all cases understates the likely path of global warming.”

    So here’s a fact: one type of study tends to arrive at a slightly higher climate sensitivity, another type at a slightly lower number.

    The rational response of the scientific community and the irrational response of the anti-science folks and their inactivist fellow-travelers are both predictable in such cases: the scientists consider all the evidence, and continue to look for ways to resolve the disagreement between different methods; the right-wing anti-science trolls pick the method that’s closer to what they want to believe and accept it wholeheartedly, pausing only to denounce the conspiracy to suppress the truth.

  112. I have found some data and calculations that I made when we were discussing the Cowtan Way paper and have summarized them in a table linked below. I show temperature trends for the periods 1979-2012 and 1997-2012 for the data sets Cowtan Way Hybrid (CWH), GISS infilled, UAH and 5 runs from the GISS model, GISS-E2-H_p1_rcp45. The trends shown are for the latitudinal zones of 60N-90N (Arctic), 90S-60N and 90S-90N (globe). I also show the percentage of the global trend that is contributed by the 60N-90N zone, the differences in trends between 60N-90N and 90S-60N, and the 2 standard deviations for the GISS model means, for determining which observed trends are significantly different or approaching significance. All trends are reported in degrees C per decade.

    Some points that I take away from this summary are:

    1. The CWH temperature set, and to a lesser degree the GISS and UAH sets, has a large contribution to the global trend coming from the 60N-90N zone for 1997-2012 (53%), and a lesser but still substantial percentage for 1979-2012 (35%). While these sets use extrapolation to obtain temperatures from the sparsely measured stations in this area and in adjoining areas, I do not think that the area should be ignored in attempting to estimate global trends or the Arctic amplification.

    2. With regard to the point made in this thread about Nic Lewis using CRU temperatures in his studies, noting that using GISS did not significantly change his results, and what that portends for using CWH in his calculations: it can be seen that, as you go to longer-term trends, the CWH and GISS trends tend to converge, with 1979-2012 trends of 0.172 and 0.160 for CWH and GISS, respectively.

    3. The “weather” noise in the model results is fully displayed in the table by the much larger standard deviations for the shorter 1997-2012 period. The range of model results, even for the longer 1979-2012 period, makes a comparison with the observed data sets difficult when attempting to attach statistical significance. The 90S-60N and global trends are significantly lower for all the observed sets. Where we see differences between observed sets is that CWH puts the 60N-90N trend, and the difference between 60N-90N and 90S-60N, significantly higher than the GISS model mean for both the 1979-2012 and 1997-2012 time periods – and here the model means +/- 2 standard deviations give a wide range. Of course, CWH has gone where no temperature data set has gone before.

    http://imagizer.imageshack.us/v2/1600x1200q90/829/7v99.png
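
    The mechanics behind Kenneth’s point 1 (a 53% contribution from a small polar cap) come down to area weighting. A sketch with illustrative trend numbers, not the tabulated ones:

```python
import math

# How a small polar cap can supply a large share of the global trend.
# The area fraction of 60N-90N on a sphere is (1 - sin 60) / 2, about 6.7 %.

arctic_frac = (1.0 - math.sin(math.radians(60.0))) / 2.0

def arctic_share(arctic_trend, rest_trend):
    """Fraction of the area-weighted global trend contributed by 60N-90N."""
    global_trend = arctic_frac * arctic_trend + (1 - arctic_frac) * rest_trend
    return arctic_frac * arctic_trend / global_trend

print(f"area fraction of 60N-90N: {100 * arctic_frac:.1f} %")
print(f"arctic 0.6, rest 0.04 C/decade -> share {100 * arctic_share(0.6, 0.04):.0f} %")
```

    So a ~6.7% sliver warming at 0.6 C/decade against a near-flat rest-of-globe can account for roughly half of the global trend, which is the flavour of the 53% figure, even though these particular trend values are only illustrative.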

  113. Robert (IdiotTracker) (Comment #126452)

    Idiot Tracker, may I suggest that you might be tracking only on one side of the tracks.

  114. OAS (Comment #126446)
    March 9th, 2014 at 2:10 pm
    “I have no expertise at all with temperature datasets and unfamiliar datasets can lead to all kinds of mistakes.”

    [RW] Perhaps it is best to familiarize yourself with the paper you’re discussing and the datasets in question before making assumptions about the problems with the presented datasets.

    “My impression however is that by no means all stations on the ‘Arctic periphery’ are showing the rapid warming driving your interpolations? (Details in your paper would have helped!)”

    [RW] If you re-read the paper closely you will find some of the details you’re looking for. If the details you’re searching for are not in there then you could check the 13 pages of supporting information with the paper. If they’re not there then you could check the project website which has three updates totaling an extra 28 pages of sensitivity tests and further discussions. As for the individual stations themselves if you would like to check to see if all Arctic stations have been warming over the past 17 years you can check in the BEST interface. The truth is that most have been rapidly warming.

    “What I am much more confident about is the nature of kriging and its sensitivity to outliers…Cross validation is important in assessing any interpolation, but your reference to one historical fit for the Arctic does rather underline the problems in this regard.”

    [RW] You said you read the paper but this statement you present makes me feel that it was not read closely. If you notice in Figure 3 we present cross-validation results at various ranges using three methods. My mention of an individual fit was an interesting point but should not be mistaken for cross-validation which is in the paper. Did you miss the cross validation statistics and maps perhaps in your read through?

  115. “I suggest that you might be tracking only on one side of the tracks.”
    Well, yes. The idea that “physics-based climate simulations” should be considered on an equal footing with “observational studies” is not very scientific. I doubt too many scientists would agree with his argument. Physics-based climate simulations that fail to match observations need to be fixed so they do.

  116. Kenneth Fritsch (Comment #126457)
    March 9th, 2014 at 3:46 pm

    The differences between our results and those presented by GISS will be addressed in the future. We have an ongoing project aimed at better understanding them. However, two quick responses are that we use a different SST source than GISS which *does* impact recent trends and GHCNv3 (used by GISS) has less high latitude coverage than CRU which we show here:

    http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140205.pdf

    Figure U1 and Table U2 are the relevant ones on that subject. In general if you apply our methods using the reduced coverage of GHCNv3 you get lower trends.

  117. “The idea that “physics-based climate simulations” should be considered on an equal footing with “observational studies” is not very scientific.”

    Thanks for illustrating the point, Mikey. Science illiterates like you will instantly acquire unquestioned faith in whoever is telling you what you want to hear. And of course your lack of scientific chops won’t stop you from ignorant pronouncements as to what you consider “scientific.”

    As I said, totally predictable. 🙂

  118. Here’s an exercise: there are probably a couple dozen pseudoskeptics posting on this thread. Of that n, how many think the recent observational studies, with a slightly lower climate sensitivity, are right; how many think the paleoclimate and energy-imbalance models, which tend to get slightly higher numbers, are more correct; and how many reserve judgement and say it’s hard to know which will prove to be more correct?

  119. Rbt (IT)

    Sensitivity calcs are all bollocks because the problem is too complex. Ocean cycles cannot be predicted, yet we are told that ENSO-driven winds are responsible for the pause. Or was it deep ocean heat sequestration? How much glacial melt was caused by deposition of particulate air pollution? What are the constraints on natural variability?

    Let’s get a handle on the Plio-Pleistocene and Mid-Pleistocene Transitions first. Drill, baby, drill.

  120. For a minute, I didn’t realize Robert is not Robert Way. I was going to ask what was up because he seemed to be behaving differently than normal.

  121. Robert:

    The alarmist persona is now “one who withholds judgement”, as opposed to denialists who rush to judgement? Is that new? It is so important to be au courant.

    Before the disparity between the models and actual observations grew to enormous, statistically significant levels, it was presumably OK for alarmists to bask in the sheer certainty of the model-based consensus while denialists were (by definition) those who refused to accept the obvious correctness of the models.

    So now if empirically-based studies increasingly point to a lower sensitivity range for CO2 than was incorporated into the models, the new alarmist position (according to you) is to sneer at those who rely on such studies while simultaneously proclaiming the (newfound) alarmist tolerance for a wider range of methods, inquiry and outcomes.

    The important thing is that whatever gyrations may be required, alarmists are always smarter and never wrong. Have I got the gist?

    By the way, your take on Pielke Jr.’s comment about growth was very weak. The notion that you can significantly restrict growth in the developed world while still magically promoting it elsewhere is silly.

  122. “Sensitivity calcs are all bollocks because the problem is too complex.”

    Sigh. I should have known my menu of options wasn’t flexible enough to describe the…unique…perspectives of the “skeptics.”

  123. Robert IT, thank you for identifying yourself so quickly. I will stick to discussing things with people who want to discuss ideas, and avoid people who sneer and deal out insults.

  124. Rbt (IT):
    The failure of many popular climate science celebrities and their groupies to recognize the complexity and enormity of the problem that such a new branch of science is tackling is a fatal flaw. Fortunately, many scientists are working quietly unwinding the knot without the need for political fanfare or vain certainty in their beliefs.

  125. CL/RW “Thus the effect of including polar climate change in the global index could be very misleading. Even if polar regions warm more than the rest of the globe, it is the rest of the globe where we all live.”
    More than that: the majority of the heat coming into the globe enters in the non-Arctic areas. If the Arctic is heating up, it can only be from leftovers of the tropical warming. And yet the rest of the globe is not warming.
    “Tail wagging the dog” does not do justice to this piece of hyperbole, which tries to shape the perception of world surface temperatures by looking in the fridge and saying “see, the ice is melting” when it is winter outside.
    Well, not really winter.

  126. There seem to be some people who claim empirical estimates and climate models generate only “slightly different” sensitivity values. Hmmm… by that reckoning a 4 foot 10 inch woman and a 7 foot 3 inch basketball player are only “slightly different” in height. The most probable estimate from empirical studies is a bit under 2C per doubling, while the IPCC model ensemble diagnoses ~3.2C per doubling on average. That difference is neither small nor unimportant for developing economically justified public policy.

  127. Robert (IdiotTracker) (Comment #126464):

    You imply that the “scientists” have reserved judgement about the discrepancy between the two different climate sensitivities calculated using different techniques.

    But what to make of the many scientific advocates who urge carbon taxes and to stop building coal power plants and the need to mitigate right now?

    Have they reserved judgement?

    I think not.

  128. For those wondering, (IdiotTracker) is appended to Robert’s name by the blog software. It’s a plugin I wrote because I like him to be identifiable relative to all other “Robert”s who might also comment.

  129. SteveF:

    You missed a memo. Robert has informed us that the alarmist camp is no longer defined by canonical acceptance of (a) the model ensemble and (b) sensitivity estimates that start above 2.0. The Consensus is now to let a thousand flowers bloom, accepting a wide range of uncertainty and methods, whereas denialists (who used to be those who simply did not accept the models as gospel) are now those narrow-minded enough to adhere to an uncreative notion of low CO2 sensitivity, hung up as they are on empirical crap instead of the broader view of the great minds.

    So a wide range of sensitivities and methodologies is OK, mostly because it is an excuse to continue to offer 4.5 as a viable “maybe” and thus keep disaster scenarios in play.

    So it is no longer ‘the science is settled and we have to Act Now’. It is now ‘the science is so uncertain that we have to Act Now.’

    Gotta keep up, SteveF. These are highly mobile, motorized goalposts presumably using renewable fuels.

  130. Lucia (#126490) –
    Thanks for the explanation. I agree that R(IT) doesn’t resemble the other Robert posting here in any Way.

  131. HaroldW,
    I think I wrote the plugin when Robert Way occasionally commented. Reading the two sets of comments would cause some whiplash, because one person was moderate in tone and included substance, and the other would just come in and post substance-free snark.

  132. Hi Robert(IdiotTracker),

    So here’s a fact: one type of study tends to arrive at a slightly higher climate sensitivity, another type at a slightly lower number.

    Yes. Another fact, one type of study tends to reflect what we actually see happening in reality, another type does not.

    Look, it’s silly to try to discourage your opposition with rhetoric here. Right wing anti science trolls pick the method that’s closer to what they want to believe and accept it? Not bad as far as rhetoric goes. Let’s examine it.

    I certainly want to believe our use of fossil fuels isn’t going to cause catastrophic climate change, you bet. Who doesn’t? You suggest that this desire is the basis for belief. Well, there’s all sorts out there on both sides, I’m sure that’s so in some cases. I can only speak for myself really.

    I don’t think the evidence shows I’ve got a track record for disregarding what science says so that I can instead believe whatever I want to believe, honestly. I don’t like lots of things that are scientific truths, but I accept them. I don’t like it that I can’t get a free lunch, or generate power from nothing, or even break even and build a perpetual motion machine. Seriously, I don’t like it at all. I wish that wasn’t so. It’d really be great if that wasn’t so. I really want for that not to be so. Doesn’t matter.

    Another example can be found in quantum physics. I don’t like quantum physics. I have a hard time understanding it. It’s counter intuitive. I don’t want it to be true, doesn’t fit in with my comfortable world view. Still, you won’t catch me blogging about how quantum physics is bogus. Why is that? It’s because it works of course. Not that I’ve done any of the experiments myself, but I read that every experiment run to check it verifies it. I’m no conspiracy theorist, when everybody reports that the experimental results match the theory then I accept that, and that’s what I hear, the empirical data matches the predictions of the theory for quantum physics which I dislike. So I accept quantum physics. Another big plus lies in the fact that devices built utilizing the theory work.

    What I don’t hear from quantum physics is a lot of post hoc rationalizations about why the observations didn’t match the predictions. So give me a call when the models and theories are corrected, preferably after they’ve been verified and validated with observations, and at that point I will gladly and cheerfully accept your theory.

    That’s all I ask for from any science.

  133. Robert Way (Comment #126462)

    Robert, I have read your update paper linked in your post above. I judge from the 6 different versions of your hybrid and kriging applications for producing a globally gridded temperature data set that you are interested in looking at the trend variations that are obtained from using different temperature sources and methods. I would suppose that the uncertainty that those results show could be incorporated with the uncertainties derived from estimating, what we call here at the Blackboard, “weather” noise into an overall uncertainty. I have a few questions:

    (1) The update notes that CRU had a recent update of their data set and I noticed that the 1997-2012 trends using CRU data have increased. Was that increase from the additional CRU data in their update?

    (2) Why do you not give at least equal time to looking at the 1979-2012 trends?

    (3) I notice that you provided no confidence intervals (CIs) for your trend results in your update, and thus it becomes difficult to determine where there might be statistically significant differences.

    (4) In your first paper I noticed that you took as your model for calculating CIs the ARMA(1,1) model from a Grant Foster paper. Have you attempted to determine the best ARMA model fit to the temperature series residuals and would it vary with the different series produced by using different methods and data sources?

    (5) What do you think of the benchmarking tests for evaluating and comparing various temperature data sets available that ideally provide what the producers of the benchmarking test would consider realistic, non climate conditions that require adjustment of the data set? You use this adjusted data in your methods and do not do the adjustments, but I would think this might be of interest to you and Cowtan.

    I have had discussions with those generating benchmarking tests and asked that they have a test that looks at potential non climate factors that could produce poor test results, i.e. factors that might be overlooked. I know there are non climate effects, like a slowly changing condition over time, that would be very difficult to impossible to detect. I also strongly suspect that the current GHCN algorithm for temperature set adjustments does not completely adjust all station non climate related changes. I have done that by comparing characteristics of a model of the instrumental series with the actual observed series. I am not saying that this effect will bias the current trends from these series but would have to add some amount of uncertainty.
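
    On Kenneth’s question (4), the practical effect of the noise model on trend CIs can be sketched with the standard AR(1) correction (ARMA(1,1) generalizes this): autocorrelated residuals shrink the effective sample size to n(1-r)/(1+r) and inflate the trend standard error by sqrt((1+r)/(1-r)). The series below is synthetic, purely for illustration:

```python
import math
import random

random.seed(1)

# Synthetic AR(1) residual series (the kind left over after removing a trend).
n, phi = 200, 0.6
resid = [random.gauss(0.0, 1.0)]
for _ in range(n - 1):
    resid.append(phi * resid[-1] + random.gauss(0.0, 1.0))

# Estimate the lag-1 autocorrelation of the residuals.
mean = sum(resid) / n
num = sum((resid[i] - mean) * (resid[i + 1] - mean) for i in range(n - 1))
den = sum((x - mean) ** 2 for x in resid)
r1 = num / den

# AR(1) adjustment: effective sample size and CI inflation factor.
n_eff = n * (1 - r1) / (1 + r1)
inflation = math.sqrt((1 + r1) / (1 - r1))
print(f"lag-1 autocorrelation: {r1:.2f}")
print(f"effective sample size: {n_eff:.0f} of {n}")
print(f"trend CI inflation:    x{inflation:.2f}")
```

    With r near 0.6 the confidence interval on a trend roughly doubles, which is why the choice between AR(1), ARMA(1,1) or a fuller fit to the residuals matters for deciding which trend differences are significant.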

  134. Using HADCRUT4, the 10 year period with the largest temperature trend was:

    .41C/decade Dec 1973 to Nov 1983

    Why did it go down after that?

  135. Since the alarmists want to shut down all coal plants and seem to have trouble learning from data (by which I mean the lower likely climate sensitivity and divergence of the models), I suggest some direct data for them: shut down the coal plants supporting Washington DC first. See if that cuts through the learning disability.

  136. Just a comment on why the divergence between models and data, and the wide spread of sensitivity estimates, does not seem to bother alarmists. It is due to love. You know how when two people are in love, they can both be slobs wearing goth clothing with holes in their pants who spend all day playing video games, and yet they think each other beautiful and forgive all their quirks? It is like that when alarmists look (if they bother) at model output: it goes up, and that is all they need to see beauty. The fact that it does not match actual data, and that the models can’t do clouds (cue Judy Collins song) or precipitation at all, is overlooked. They are in love and all is forgiven. You deniers just don’t understand!

  137. “For those wondering, (IdiotTracker) is appended to Robert’s name by the blog software.” Lucia, that’s awesome and extremely helpful. Can the rest of us submit requests for appellations? Such as, MikeR (the Magnificent).
    Of course, I wouldn’t want anyone to think that I’d choose such a name myself, that’s why you would do it for us automatically.

  138. MikeR,
    Reading through and appending for everyone would be sort of computationally intensive. Not horribly, but enough that it’s not generally worth it. One may elect to add an appellation themselves. I don’t remember whether the two Roberts refused to add extra appellations (like KY, FL and so on), or if it was just too late before I figured out there were two Roberts. (IdiotTracker) was selected for one because that’s the name of his blog.

  139. Kenneth:

    (2) Why do you not give at least equal time to looking at the 1979-2012 trends?

    I agree. They need to spend more time on other periods. 1997-2012 is particularly non-diagnostic because of the circa 1998 ENSO event.

  140. “I think this correction just explains the difference between GISTEMP and HadCRUT.”

    I wonder how much closer they are in the raw data? It strains credulity to imagine every set of GISS adjustments would increase the past warming trend. Maybe climate scientists should head to Vegas?

    By the by, I thought it was interesting that the latest IPCC estimate bounds now reach nearly all the way to zero, and if current trends in IPCC predictions continue the range will soon include the negative. It would certainly be amusing if global cooling re-entered the mainstream! 🙂

  141. “The findings of physics-based climate simulations consistently find a higher climate sensitivity, suggesting the observational studies’ methodology in almost all cases understates the likely path of global warming.”

    Indeed, as Feynman once said: “It doesn’t matter how smart you are or how beautiful your experiments are — if they don’t match theory, they’re wrong.” Science!

  142. I’m curious why I have not seen the following reaction from believers in AGW to Nic Lewis’s paper: “Oh, thank God! I don’t know if this is right, but I pray that it is. It would be such incredible good luck. Mitigation was failing so badly, no one is serious, no one is doing it nearly fast enough. This would be such a gift: it would be like _several_ _successful_ Kyoto accords. Just like that, we have more time, total damage is much less severe. A wonderful reprieve.”
    Why do all the accounts look like this: “Lewis’s paper just illustrates one of several possibilities, that climate sensitivity may be _slightly_ lower than we thought.” Take a look – they all add the word “slightly”, or “a little”, or “a tiny bit”. Or, “we’d have an extra _few years_.” Remember that they are describing Lewis’s value which is about a third smaller, and where very high sensitivites are almost wiped out.
    Isn’t this (potentially) great news for everyone?
    I don’t mean to be cynical. I imagine that they have already set their minds on severe mitigation, and therefore their only reaction is, “Enemy. Trying to stop us. Resist.” They can’t see anything else.
    Of course, if some of them really like the de-industrialization that serious mitigation requires, low climate sensitivity would be a really annoying setback.

  143. Robert Way wrote (Comment #126445), March 9th, 2014 at 12:33 pm:

    Inhomogeneous forcing and transient climate sensitivity
    Drew T. Shindell
    Nature Climate Change , (2014) | doi:10.1038/nclimate2136.
    PUBLISHED ONLINE: 9 MARCH 2014

    “… I conclude that the lowest end of the range of transient climate
    response to CO2 in present models and assessments (<1.3 C)
    is very unlikely."

    This paper's conclusions seem worthless to me. See my critique of it at http://climateaudit.org/2014/03/10/does-inhomogeneous-forcing-and-transient-climate-sensitivity-by-drew-shindell-make-sense/

  144. George Tobin,
    ‘the science is so uncertain that we have to Act Now.’
    .
    Yes, I think that fairly sums up the current ‘justification du jour’ for immediate and drastic fossil fuel reductions. Before there was obvious divergence of measured temperatures from GCMs, the most popular justification among the Malthusian Eco-Loons (‘MELs’) was more like ‘the science is so certain that we have to Act Now’. As one might expect, in the transition period, both justifications were offered at the same time, and often by the very same people… the humor of which seems lost on many.
    .
    If there is continued fairly slow warming for another decade (or maybe even less), which puts the catastrophic sensitivity range above ~3C per doubling beyond credible, we can count on other shrill justifications for immediate action to be continuously offered: ‘In 100 years Miami will be under water!’, ‘Clams will soon be without shells!’, ‘A sudden shift to a very hot equilibrium point is almost here, and human population will rapidly collapse!’, ‘Thousands of species will go extinct each year.’, ‘Devastating (floods..heat waves..tornadoes..hurricanes) will be commonplace!’…. etc, none of which will be supported by data, and all of which will be wildly speculative. There is no upper bound for the number of wild-eyed scare story justifications that can be offered.

    IMO, the real objective is limiting economic activity and wealth as a globally consistent and vigorously enforced public policy. The fundamental disagreement has never really been about ‘the science’. It is mainly a philosophical, moral, and political disagreement about things other than science. The political mistake the MELs make is in attempting to hype climate science forecasts to force people to accept a philosophical and moral POV which they plainly reject. The longer climate doom holds off, in spite of ever growing fossil fuel use, the less effective this approach becomes, until it is ultimately ignored by voters.

    If the MELs just wanted big reductions in fossil fuel use, then they would be vigorously campaigning for a rapid substitution of nuclear power for coal. That is not what they do, which I think more clearly demonstrates their objectives. Convincing people that they and their descendants need to be materially/economically poorer (for ever!) seems to me a fairly hard sell, so I can’t suggest what an effective political strategy in democracies might be. Short of a change to totalitarian control of the population, I think one may not exist. The dark proposals one sometimes sees (executing ‘deni#rs’, prosecuting executives at petroleum companies, classifying people as ‘insane’ based on their political views, etc.) suggest some MELs already agree with me about that.

  145. Nic Lewis,
    I am no expert on climate sensitivity – I was simply presenting a new study which seemed relevant for the chosen topic here. That being said I find the tone of your critique to be somewhat negative which makes it a somewhat unenjoyable read.

    I feel that regardless of the end value for transient climate sensitivity the point raised by Shindell’s paper is that the regional distribution of forcings can have appreciable impacts on how we estimate transient climate sensitivity. I see no discussion in your critique of this point.

    Tangentially it would be useful for comparison if your boxplot presented temperature trends for models updated with the observed volcanic, aerosol and PDO conditions which Schmidt et al (2014) present.

  146. Robert Way,
    I wasn’t criticising you for presenting the Shindell study, merely pointing out that it seemed so deeply flawed as to be worthless.

    Yes, my critique is somewhat negative. I found it depressing that a well known climate scientist like Drew Shindell should publish a study with such obvious flaws, and that it got through peer review.

    I haven’t seen much valid evidence that the regional distribution of forcings actually does impact how TCR is estimated. Can you point me to some?

    Nor can I see why the issue should be relevant to studies that use latitudinally-resolved temperature and forcing data, like Aldrin et al (2012), Ring et al (2012) and Lewis (2013), all of which give ECS estimates that imply TCR is in the region estimated by a global energy budget study like Otto et al (2013) or using the AR5 forcing data.

    It’s Steve McIntyre’s box plot, not mine. In any case, how exactly would you update the models with the observed volcanic, aerosol and PDO conditions which Schmidt et al (2014) present – even if one thought their observational estimates were correct?

  147. James Annan seemed a bit bearish about this paper of Shindell as well. I don’t know enough to make very many meaningful comments.

    It does feel like the same old recycled arguments, with a lot of energy being spent just trying to get the data to match up with the models, instead of exploring both as equally plausible alternatives.

    I mean, it isn’t that implausible that models were “over-tuned” during the period of relatively rapid temperature increase from 1985-1998, rather than that the data are underreporting the newer, slower temperature increases.

  148. SteveF (Comment #126550)

    “If the MELs just wanted big reductions in fossil fuel use, then they would be vigorously campaigning for a rapid substitution of nuclear power for coal. That is not what they do, which I think more clearly demonstrates their objectives.”

    SteveF, I am in essential agreement with what you said in your post, but are we so sure that nuclear is an economical substitute for CO2 emitting fuels?

    In IL here we have had a relatively large presence of nuclear power plants that at one time were very profitable, but recent studies indicate that most if not all nuclear power plants are no longer profitable given the competition from CO2 emitting energy sources.

    I suspect that profitability would be of little concern to our greener friends and they would probably be more favorable to nuclear if it were a government enterprise and/or at least heavily subsidized and, of course, heavily regulated by government. In that case we must be careful what we wish for.

    I suspect the long lead times and huge capital investments required for nuclear are major factors in the economic calculation. Just to decommission these plants seems to take forever.

  149. Robert Way, I posted a few questions for you in a post above. If you have the time and the desire I would appreciate hearing what you have to say about these issues.

  150. Carrick: “I mean, it isn’t that implausible that models were “over-tuned” during the period of relatively rapid temperature increase from 1985-1998”

    Not that rapid …

    HADCRUT4 13 year trends

    Sep-1985 to Aug-1998 0.232C/dec
    Oct-1932 to Sep-1945 0.257C/dec
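    For readers wondering how such decadal trends are computed: typically an ordinary-least-squares slope over monthly anomalies, scaled from per-month to per-decade units. A minimal sketch with synthetic data (the function name and test series are mine, not from any HadCRUT4 code):

    ```python
    # Minimal OLS trend calculation; the series below is synthetic,
    # constructed to rise at exactly 0.25 C/decade, so the fit should
    # recover that value.

    def ols_slope(y):
        """Ordinary-least-squares slope of y against 0..n-1 (per-step units)."""
        n = len(y)
        xbar = (n - 1) / 2.0
        ybar = sum(y) / n
        num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
        den = sum((i - xbar) ** 2 for i in range(n))
        return num / den

    # 13 years (156 months) of anomalies rising at 0.25/120 C per month:
    series = [0.25 / 120.0 * i for i in range(13 * 12)]
    trend_per_decade = ols_slope(series) * 120.0  # months -> decades
    print(trend_per_decade)  # ≈ 0.25
    ```

    Real series of course have noise on top of the trend, which is why the choice of start and end months (ENSO peaks, volcanoes) matters so much in these comparisons.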

  151. Re: Kenneth Fritsch (Mar 10 17:28),

    I seriously doubt that any currently approved nuclear power plant design, i.e. has been built and operated somewhere, is competitive with a natural gas fired plant at current gas prices, even if you didn’t need an extra decade to get a nuclear plant licensed. Coal probably isn’t competitive with gas, especially if you have to ship the coal any distance. The real problem is that we don’t have the pipeline capacity to allow a substantial fraction of the coal fired plants to be converted to gas fired.

  152. SteveF said:

    IMO, the real objective is limiting economic activity and wealth as a globally consistent and vigorously enforced public policy. The fundamental disagreement has never really been about ‘the science’. It is mainly a philosophical, moral, and political disagreement about things other than science.

    There is certainly an underlying clash of worldviews behind many of these debates. Part of the problem is that environmentalists only see the downsides of Man’s activities, without weighing it up against the benefits.

    Most of the current global economic activity – farming, logging, mining, strip-mining, fishing etc. – is done for the benefit of the 1.5 billion of us in the developed world. It would be naive to state that this activity is consequence-free. Often the consequences are terribly ugly, wasteful, disruptive, and destructive.
    With Chinese and Indian and South American development, we look like adding another 2.5 billion to that, and the rest of the world will follow. Unless some degree of sensitivity and caution is exercised during this process, it’s going to leave dreadful scars even if we DON’T break something irrevocably and end up in a heap of trouble.

    Some environmentalists are so horrified by the prospect that they seem to want to shut it down altogether. My view is that the undeveloped world has just as much right to wealth as the developed world, indeed it’s immoral bordering on disgusting to suggest they can’t have it. BUT they can’t, MUST NOT take the same path to wealth as us. It’s going to take all the ingenuity mankind has to offer to raise the world out of poverty without cutting all the trees down or ripping the last bluefin tuna out of the Pacific.

    We’re going to need a monumental nuclear rollout so we can replace all the coal mines, oil wells, gas wells, pipelines, oil tankers, LNG ships, coal trains etc. on the face of the planet with a few uranium mines. Not an exaggeration. That’s something environmentalists fail to appreciate about nuclear – the ENORMOUS reduction in resource use that a transition from fossil to nuclear would bring. Instead of transporting 300 tonnes of coal per HOUR to every single 1GW coal plant, by rail or ship, you transport a few tonnes of fuel rods every six months in a truck. That’s with current reactors: it becomes every few YEARS with current Fast Neutron Reactors (only Russia has commercial examples, but expect that to change.) Renewables can’t hope to do this: they are too low density. Even if we could power the world with wind and solar, we’d still be transporting gigatonnes of material around just replacing the bloody things as they wear out. Biomass for the backup is far less energy dense than coal – we’d be transporting gigatonnes of wood around by rail and ship. (And it would probably have to be GM wood, or it would take more energy to synthesise the nitrate fertiliser to grow the biomass than we’d get back by burning it.) Nuclear power reduces this problem by orders of magnitude. The millions of engineers currently employed in ripping gigatonnes of fossil deposits out of the ground and transporting them to enormous heat engines would be replaced by tens of thousands of nuclear engineers, leaving the rest of us (I’m a fossil fuel ripper, to my chagrin) to do something more creative with our skills, like figure out how we’re going to feed everybody. Speaking of which…

    We’re going to need widespread adoption of GM crops. GM crops that can fix their own nitrogen like legumes do, so we don’t have to dump ammonium nitrate all over the land. Crops that don’t need pesticides, crops that break up the soil with deep roots so we don’t get compaction and run-off, crops that leave root mats in the soil to restore the organic matter. That’s cost-effective solar power right there! Automatic weeding robots, directed drip irrigation to save water. Self-piloting drones to survey the fields and sample soil, so we don’t over-fertilise, don’t over-water.

    (Further outside the box, we’re probably going to have GM fungi or algae that produce meat or fish protein, but grows in shallow salt water ponds so we can still have our burgers and fish sticks without the resource waste involved in livestock farming. Blue-green algae is already used as animal feed, Quorn is already a fungi-derived meat substitute.)

    We’re going to need other cybernetics and automation. Google’s self-driving cars will lead to robot taxis. Call one with your phone from anywhere, tell it where you’re going and if possible, one already in transit will divert to pick you up. Other people going your way will also get picked up, unless you’re in a hurry and pay extra for exclusive use. Integrated central control – no congestion, no stoplights, no parking… and no need to take a tonne and half of metal with you for every damned journey. Want to go on vacation in the mountains? Hire one for the week. A 4×4, or camper, or whatever floats your boat.

    I think that just possibly, if we get it right, we can ALL live well. All of us can have sanitation and clean water and cheap food and hot showers and foreign holidays, while still keeping the forests and mountains and glaciers and jungles intact and not screwing the climate up too much. But it won’t be easy, and we won’t do it without widespread nuclear power, and we’re wasting a lot of time right now.

  153. In an attempt to rebut Lewis and Crok, Skeptical Science writes:

    Lewis and Crok make the following argument.

    “Between the Fourth and Fifth [IPCC] Assessment Reports the best estimate of the cooling effect of aerosol pollution was greatly reduced. That necessarily implies a substantially lower estimate for climate sensitivity than before. But the new evidence about aerosol cooling is not reflected in the computer climate models. This is one of the reasons that a typical climate model has a substantially higher climate sensitivity than would be expected from observations: if a model didn’t have a high climate sensitivity, its excessive aerosol cooling would prevent it matching historical warming.”

    However, according to climate modeler Gavin Schmidt of NASA GISS, this is incorrect.

    “Their logic is completely backwards. Climate model sensitivity to a doubling of atmospheric CO2 is intrinsic to the model itself and has nothing to do with what aerosol forcings are. In CMIP5 there is no correlation between aerosol forcing and sensitivity across the ensemble, so the implication that aerosol forcing affects the climate sensitivity in such ‘forward’ calculations is false … The spread of model climate sensitivities is completely independent of historical simulations.”

    http://www.skepticalscience.com/news.php?n=2443

    I don’t know where they got this quote from Gavin, Google gives me no clues. But it seems improbable to me that there is no correlation between aerosol forcing and sensitivity in GCMs. If a model has a high climate sensitivity it has to have a comparatively large aerosol offset to track historical temperature observations, no?

  154. Niels A Nielsen,

    http://www.climate-lab-book.ac.uk/2014/gwpf/

    At Ed Hawkins’ blog (link above) part of the discussion is about this topic. According to Ed the sensitivity isn’t calculated from the historical hindcast runs. Other experimental runs are included in CMIP5; one of these runs keeps everything fixed except CO2, which increases at 1% per year. TCR/ECS is calculated from these idealized runs, so aerosol forcing does become irrelevant in this case as it’s only CO2 that’s changing. It seems to make sense.

    Another criticism of Nic Lewis’ approach includes the problem that Nic has generated TCR for the ‘observed earth’ (obviously he’s using observations). This isn’t truly global, as areas such as the very high N latitudes aren’t included. Climate model TCR is truly global, covering the whole earth. This is covered in the discussion about Cowtan and Way, and Lucia has mentioned it with respect to her comparisons of models and observations; it’s about comparing apples with apples. Again, at Ed’s blog the analysis shows that when this is corrected for, the model and observation methods come into alignment. There are other criticisms as well.

    I find on the whole that the criticisms of Nic look strong, and the two important conclusions of Nic’s report – 1) that observation methods don’t match models and 2) that the IPCC range should be lowered – both look weak to me.

  155. Niels, I think Gavin isn’t being totally honest here. And unfortunately, following its historical pattern of selecting only the things that people say that aren’t totally honest but fit a desired “talking point”, SkS chooses to quote this.

    It’s the cloud feedback modeling that gives you the freedom to dial in the sensitivity you desire. This parameterization is totally phenomenological, and is subject to bias and tuning.

    If you assume a particular aerosol history, and you want to follow a particular historical temperature trajectory, you can essentially dial in that sensitivity with the cloud sensitivity parameterization chosen. It may not happen in a conscious fashion, but certainly larger aerosol forcings require larger climate sensitivities to explain the historical temperature record.

    Gavin is also not characterizing Nic’s paper honestly either. See the text starting at the bottom of page 21. It’s clear that Nic is addressing the tuning of the models, and this discussion in no way relates to aerosol tuning.

    I hate to say it, but it doesn’t look like Gavin actually read (as in comprehended) the paper before commenting on it, and, again following historical precedent, neither did SkS.

  156. Robert Way, I have a bad habit of analyzing papers dealing with climate topics, specifically and with intent attempting to determine realistic confidence limits for the evidence presented. I feel less bad about my habit when I see that Judith Curry, a prominent climate scientist, has become obsessed with uncertainties in these matters.

    Your published results with various compilation methods applied to a couple of temperature data sets show, I judge, that a topic like the instrumental temperature record, which some might consider settled science, is in actuality a work in progress.

    Sometimes the extent of the uncertainties in reporting results is not made clear. For example, your update article linked above shows 6 combinations of methods/data with global trends in degrees C per decade for the period 1997-2012 of 0.114, 0.124, 0.112, 0.092, 0.107 and 0.099. I note that the ocean temperatures for all these combinations use kriging, and it appears that the SST, and thus the ocean trends, for all 6 combinations will be nearly or exactly the same. Since the oceans make up approximately 2/3 of the global area, the global trend difference of the 6 combinations needs to be multiplied by 3 to obtain a difference for these combinations as applied to the land area. That calculation produces a significantly wider range of differences between combinations. I suppose one can attempt to “select” a best combination, but without a good independent confirmation one is left with a wide range of results.
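    The back-of-envelope scaling in that last paragraph can be sketched as follows (a rough illustration under the assumption, stated above, that the kriged ocean trends are identical across combinations; the variable names are mine):

    ```python
    # If the ocean (~2/3 of the globe) contributes identically to all six
    # method/data combinations, any difference in the global trend must come
    # from the land third, so land-trend differences are ~3x the global ones.

    OCEAN_FRACTION = 2.0 / 3.0
    global_trends = [0.114, 0.124, 0.112, 0.092, 0.107, 0.099]  # C/decade, 1997-2012

    global_spread = max(global_trends) - min(global_trends)
    land_spread = global_spread / (1.0 - OCEAN_FRACTION)  # i.e. multiplied by 3

    print(round(global_spread, 3))  # 0.032
    print(round(land_spread, 3))    # 0.096
    ```

    A spread of roughly 0.1 C/decade among land-only trends, for methods all claiming the same target, illustrates the point about needing independent confirmation.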

  157. HR:

    TCR/ECS is calculated from these control runs, so aerosol forcing does become irrelevant in this case as it’s only CO2 that’s changing. It seems to make sense.

    Again, see the text starting on the bottom of page 21. That’s not the argument that Nic is making.

    Another criticism of Nic Lewis’ approach includes the problem that Nic has generated TCR for the ‘observed earth’ (obviously he’s using observations).

    As I noted on Ed Hawkins’ blog, you can’t get a factor of two from the missing 2.5% or so of area. I estimated a limit of 15%, enough to bring HadCRUT4 in line with GISTEMP. This is barely an interesting correction compared to modeling uncertainties. Ed Hawkins has to know this.

    I find on the whole the criticisms of Nic look strong and the two important conclusions of Nic’s report 1) that observation methods don’t match models 2)and that the IPCC range should be lowered, both look weak to me.

    My objection is that the first isn’t a criticism of anything said in Nic’s paper, and the correction suggested in the second is very minor. They must have known #2 is small, which makes me question the balance of the criticism made by Piers Forster and endorsed by Ed Hawkins.

  158. Here’s the link to my comment:

    I don’t think the paleo records show much (or any) amplification in the Antarctic interior. The north polar region corresponds to perhaps 2.5% of the surface area of the Earth.

    I agree there is a bias with missing this region (if it really shows an increased trend), but the best I can get is about a 15% effect on long-term trend, using realistic upper limits on polar amplification in the Arctic Sea.

    That’s enough to explain the discrepancy between HadCRUT and GISTEMP, but not nearly enough to explain the discrepancy between warming trend of measurements and models.

    Also see my comment above.

    In the second comment, I had a typo. It should have read:

    By the way, if you invert the formula I give above to get $latex \mu_{\hbox{global}} = 2$, assuming the missing area is contained in 2.5% of the Earth’s surface, that works out to $latex \mu_{\hbox{missing}} \cong 40$! To get to 0.2 °C/decade for HadCRUT would require a value of $latex \mu_{\hbox{missing}}$ in excess of 20.

    That is, you have to assume a trend at least 20 times higher than the global trend for the missing area, assuming the area is centered in the Arctic. That value is extremely implausible of course.
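    For concreteness, here is the coverage-bias formula from my earlier comment as a small script (the function names are my own; 2.5% is the rough Arctic Ocean area fraction used above):

    ```python
    # Small-missing-area coverage bias:
    #   mu_global = 1 + mu_missing * (dS / S)
    # where mu_missing is the ratio of the missing region's trend to the
    # global trend and dS/S is the missing area fraction. Inverting:
    #   mu_missing = (mu_global - 1) / (dS / S)

    def global_bias(mu_missing, area_frac):
        """Ratio of true global trend to measured trend."""
        return 1.0 + mu_missing * area_frac

    def required_missing_ratio(mu_global, area_frac):
        """Missing-region trend ratio needed to produce a given global bias."""
        return (mu_global - 1.0) / area_frac

    print(global_bias(6, 0.025))               # ≈ 1.15, the ~15% HadCRUT limit
    print(required_missing_ratio(2.0, 0.025))  # ≈ 40, the ratio needed to double the trend
    ```

    The asymmetry is the whole point: a plausible polar amplification of ~6 buys you only 15%, while doubling the measured trend would require an amplification of ~40.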

  159. Kenneth Fritsch:

    I suppose one can attempt to “select” a best combination, but without a good independent confirmation one is left with a wide range of results.

    That’s the impression I’m left with too.

    I think doing cross validation appropriately is harder than these guys realize.

  160. Carrick, Piers Forster essentially admitted in the comments of that thread that #2 is very minor:

    You might expect as the poles are missing in HadCRUT4 that a true global average trend from a global version of HadCRUT4 would show larger trends – (e.g. Cowtan and Way, 2014). But Nic Lewis is right about other global datasets not showing greater trends than the HadCRUT4 data, so I think the jury is still out here.

  161. Matt,
    We are not terribly far apart, save for a few points. You said

    BUT they can’t, MUST NOT take the same path to wealth as us. It’s going to take all the ingenuity mankind has to offer to raise the world out of poverty without cutting all the trees down or ripping the last bluefin tuna out of the Pacific.

    Seems to me that there is no possibility they will follow the same path, since the developing world has available the technology and knowledge that allow them to leapfrog much of the development path historically followed by most of the developed world (eg cell phones with reasonable available call handling capacity completely eliminate the need for wired phone connections to every house and office). The developing world also receives very different market signals compared to 40-50 years ago, which will change significantly how they develop (eg gasoline and diesel have approximately quadrupled in price (in real terms) since my childhood)… there will be little room for inefficient “gas-guzzling” cars for most people in developing countries, or anywhere else if the price rises too much. The higher price of energy today also guides people to make choices that reduce the energy use of buildings and equipment (while modestly increasing capital investment costs). Once again, developing countries will not make the same wasteful energy and material decisions people made 40 or 50 years ago. So I guess I am a lot more optimistic than you about economic development.

    I agree that GM crops (and other plants) should continue to be developed and used to increase food production. I also agree that rapid and widespread development of nuclear power is absolutely required to slow growth in CO2 emissions. But these are a couple of the most sacred of sacred cows, so I suspect you are not going to get much support among people who say they want to reduce CO2 emissions. I can’t imagine the existing combination of Green NGO’s and climate scientists will endorse rapid growth in nuclear power any time in the foreseeable future. I agree that rational analysis says they should, but I’m guessing they won’t.

  162. When the self-tracking idiot starts posting his AGW apologia, it is clear that the mental dissonance is reaching a new level in the alarmist community.
    The problem is that the political class and embedded bureaucracies in far too many countries are not paying attention to this dissonance, and are moving towards the AGW wasteland by momentum and certainly not by informed critical thinking. In the context of industry, think of how Sir Richard Branson has told ‘deniers’ to get out of his way, even as his chief rocket scientist, Burt Rutan, is offering a comprehensive take-down of the AGW extremists.
    http://rps3.com/Files/AGW/EngrCritique.AGW-Science.v4.3.pdf
    So should Sir Richard fire Burt to get him out of the way?

  163. Niels A Nielsen at 126579

    “If a model has a high climate sensitivity it has to have a comparatively large aerosol-offset to track historical temperature observations, no?”

    It’s not inevitable. The 20th century was a period of non-equilibrium with rising forcings, so its simulation was closer to an exercise in TCR than ECS. The TCR/ECS ratio is not fixed, but varies with the efficiency of ocean heat uptake.

  164. Gavin’s argument that the model-based estimates of sensitivity don’t use aerosols, while literally true, misses the point. Without the extra high level of aerosols some models use as input, the models with high sensitivity would be running even hotter than they are compared to data. That is, the models show divergence from historical data (the pause) and without the ability to pick and choose which aerosol forcing to use this divergence would be even worse. The use of model-derived sensitivity is begging the question–it is to assume that which we wish to answer.

  165. Niels, HR, Carrick, Craig,
    Regarding aerosols and model sensitivity, thanks for the discussion on this. Gavin’s quote might as well have been specifically designed to throw people like me (lacking in depth knowledge of this specific issue) for a loop. This helped.

  166. Just so we are talking about the same latitudes when referring to the upper latitudes, the percentages of global area are: 60N-90N is 6.7%; 70N-90N is 3.0%; and 80N-90N is 0.75%.
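    Those percentages follow from the spherical-cap area formula; a quick check (the function name is mine):

    ```python
    from math import sin, radians

    def band_area_fraction(lat_lo, lat_hi):
        """Fraction of a sphere's surface between two latitudes (degrees)."""
        return (sin(radians(lat_hi)) - sin(radians(lat_lo))) / 2.0

    print(band_area_fraction(60, 90))  # ≈ 0.067, the 6.7% quoted above
    print(band_area_fraction(70, 90))  # ≈ 0.030 (3.0%)
    print(band_area_fraction(80, 90))  # ≈ 0.0076 (the ~0.75% quoted)
    ```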

    I believe Nic was using longer-term global temperature trends than what we keep getting from Robert Way, i.e. 1997-2012. Remember that the differences between data sets are much larger for that time period than for even the moderately longer 1979-2012, where CWH and GISS global trends are nearly the same. Perhaps for completeness we need to go back further with the Cowtan Way kriging temperature series and compare trends there.

  167. Carrick, here are the Cowtan Way Hybrid trends by latitude for the period 1997-2012 in degrees C per decade. This is why I commented awhile ago that my grandkids and maybe even my adult kids would enjoy vacationing in the balmy reaches of the Arctic.

    87.5 1.508529622
    82.5 1.691609199
    77.5 1.4515695
    72.5 1.196037165
    67.5 0.82480352
    62.5 0.353477644
    57.5 0.08795719
    52.5 -0.035465728
    47.5 -0.034368133
    42.5 0.101835547
    37.5 0.066423872
    32.5 0.074213269
    27.5 0.085595965
    22.5 0.090813629
    17.5 0.089566171
    12.5 0.092962588
    7.5 0.059294916
    2.5 -0.010612256
    -2.5 -0.04386644
    -7.5 -0.039717927
    -12.5 -0.003871571
    -17.5 0.009665207
    -22.5 0.063352761
    -27.5 0.092541081
    -32.5 0.140600177
    -37.5 0.094248459
    -42.5 0.074151583
    -47.5 -0.047709831
    -52.5 -0.038847034
    -57.5 -0.040860115
    -62.5 -0.008822442
    -67.5 0.098664704
    -72.5 0.566324089
    -77.5 0.645817245
    -82.5 0.684839954
    -87.5 0.900996115
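    For anyone wanting to check what the zonal table implies globally, here is a sketch that area-weights each 5-degree band (the weighting scheme and the rounding of the table’s values to 3 decimals are my choices, not anything from Cowtan and Way’s code):

    ```python
    from math import sin, radians

    # Each 5-degree band centred at latitude L spans L-2.5 to L+2.5; its area
    # weight is the spherical-cap difference (very close to cos(L), but exact).

    def band_weight(center_deg, half_width=2.5):
        lo, hi = center_deg - half_width, center_deg + half_width
        return (sin(radians(hi)) - sin(radians(lo))) / 2.0

    # The table's trends (C/decade, 1997-2012), rounded, 87.5N down to 87.5S:
    trends = [1.509, 1.692, 1.452, 1.196, 0.825, 0.353, 0.088, -0.035, -0.034,
              0.102, 0.066, 0.074, 0.086, 0.091, 0.090, 0.093, 0.059, -0.011,
              -0.044, -0.040, -0.004, 0.010, 0.063, 0.093, 0.141, 0.094, 0.074,
              -0.048, -0.039, -0.041, -0.009, 0.099, 0.566, 0.646, 0.685, 0.901]
    centers = [87.5 - 5.0 * i for i in range(36)]  # 87.5 down to -87.5

    weights = [band_weight(c) for c in centers]    # sums to 1 over the sphere
    global_trend = sum(w * t for w, t in zip(weights, trends)) / sum(weights)
    print(global_trend)  # area-weighted mean, roughly ~0.11 C/decade
    ```

    Despite trends above 1 C/decade at the top of the table, the poleward bands carry so little area that the weighted global figure lands close to the C&W global trends quoted upthread.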

  168. Yep thanks Carrick. I’ve gone from confusion to clarity, back to confusion, clarity and now thanks to you back to confusion.

    I think you’re right about the aerosol discussion Carrick, it is something of a red herring, but it is in Nic’s report, so I don’t know whether it wasn’t Nic who introduced it.

    We really seem to be in exactly the same place with sensitivity as we are with Lucia’s comparisons of temperature (obs v model). The hot end of the model ensemble looks too hot to be considered in line with observation, but there is an overlap between some models and observation given uncertainties.

    I think Ed/Piers do have a point with respect to Nic’s estimate being “observed earth” while the models are truly global; it would be better to see Nic correct for this in his estimates than guesstimate what it might be. Adding 15% on top of the 1.35 TCR best estimate is not trivial, given we’re comparing that to 1.85 for models.
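
    For what it’s worth, the arithmetic of that correction (a sketch; the 15% figure is the guesstimate under discussion here, not an established value):

```python
tcr_obs = 1.35        # Lewis & Crok best-estimate TCR (deg C)
tcr_models = 1.85     # rough model-ensemble TCR (deg C)
coverage_bias = 0.15  # the ~15% "observed earth" correction being debated
tcr_corrected = tcr_obs * (1 + coverage_bias)
print(round(tcr_corrected, 2))  # -> 1.55
```

So the correction closes some, but not all, of the gap to the model value.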

  169. Craig Loehle:

    Gavin’s argument that the model-based estimates of sensitivity don’t use aerosols, while literally true, misses the point.

    Unfortunately he didn’t just miss the point, he mischaracterized the much more carefully crafted argument actually given by Lewis.

    Because it was such a prominent part of the paper, it’s hard to see how you could miss it if you actually read the paper. And if you didn’t read the paper, posting a review as if you did is, well, disingenuous at best.

    Kenneth, can you calculate this for 2002-2013?

  170. Carrick (& everyone else)

    I just followed your link to the Ed Hawkins blog post and skimmed the thread.

    Wow, just wow –

    Fred Moolten:

    In any case, your assertion that the 1800 m value can be no more than 0.30 W/m2 is unjustified. At best, you can claim it might be true, but not that it must be true. You should reconsider that assertion.

    andthentheresphysics:

    In fairness to Nic, it has been pointed out to me – on my blog – that Table 1 in the published version of the paper is quite different to that in the final draft. The value quoted for 2004 – 2011 (0 – 1800 m) in the published version is indeed 0.29 W/m^2

    Troy Masters (Troy_CA):

    FWIW, you may want to check out the acknowledgements in the published / final version (p1954): “Nicholas Lewis pointed out an error in the accepted version as well.”

    Fred:

    Thank you, Nic, for amending your earlier statement. At this point, it’s best not to waste more time on this. Readers can make their own judgments and probably don’t care much about how the final understanding of the facts came to be agreed on as long as the agreement came about

    Simon (says ;)):

    Fred, you say “as long as the agreement came about”. Does this mean you concede that Nic was correct all along in his use of OHC uptake as reported by Lyman and Johnson and you were mistaken in saying “your assertion that the 1800 m value can be no more than 0.30 W/m2 is unjustified”? I have not seen you acknowledge this and I think it would be useful for readers to understand exactly what the facts are that you say have now been agreed on.

    As an aside, I hope you will avoid language in the future like: “Did you do that here, and also in your testimony to the UK Parliament, or are you recalculating the reported values in some way you don’t specify?” Adding this provocative wording to a legitimate question was unnecessary and you should be particularly careful when not working from the final published paper.

    That is a double whammy – a complete (and in this case justified) takedown of Fred (who I have no ill will against) and furthermore…

    0.29!!!!?!?!?!?!?!?!?!

    /rant

  171. The best I can do right now is 2006-2012. I am busy this evening so I’ll give you 2002-2012 tomorrow. I do not believe that CWH has been updated to 2013. We can check with Way – when he gets back here.

    87.5 1.187998785
    82.5 1.435928774
    77.5 1.023761348
    72.5 0.660055398
    67.5 0.467264368
    62.5 0.196413418
    57.5 -0.249694256
    52.5 -0.436194982
    47.5 -0.672988374
    42.5 -0.097674479
    37.5 0.124659242
    32.5 -0.098728376
    27.5 -0.17926771
    22.5 -0.160714235
    17.5 0.01550383
    12.5 0.051696838
    7.5 -0.126444501
    2.5 0.003609058
    -2.5 -0.156842817
    -7.5 -0.124667763
    -12.5 -0.134310047
    -17.5 -0.017145236
    -22.5 -0.066657166
    -27.5 -0.126792464
    -32.5 -0.021784769
    -37.5 0.223850393
    -42.5 0.464152813
    -47.5 0.622056866
    -52.5 0.408687455
    -57.5 0.064522696
    -62.5 -0.126797577
    -67.5 -0.214991765
    -72.5 0.001437616
    -77.5 0.297080541
    -82.5 0.931322652
    -87.5 0.740115234

  172. Carrick, it strikes me that Craig interpreted what Nic wrote on page 7/8 pretty well. The point isn’t that aerosols impact sensitivity estimates but that they allow models to reproduce historical temperature well even with wonky sensitivity. There isn’t really any other way to interpret what Nic wrote. My opinion is the report would be better without that speculation, because it does allow diversion away from the main points on sensitivity.

  173. bill_c 126607,

    I agree that the Ed Hawkins blog has been illuminating, and is continuing to shine light on the issues raised here and elsewhere. I’ve posted a number of long comments, but more importantly, I’ve benefitted from the discussions by Ed, Piers Forster, and others.

    Regarding the point in your comment: if you want to understand why, in my view, you have probably misinterpreted the exchange with Nic Lewis you mention, and why my statement was justified in the context of the exchange, you can email me – fmoolten at gmail [dot] com. I won’t waste the time of other readers here giving the background, since it’s off topic and unlikely to be of general interest.

  174. HR,
    “We really seem to be in exactly the same place with sensitivity as we are with Lucia’s comparisons of temperature (obs v model). The hot end of the model ensemble look too hot to be considered in line with observation but there is an overlap between some models and observation given uncertainties.”
    .
    Yes, that is true. But for me the more important issue is how climate science responds to that. All or nearly all runs of all the IPCC models are well above measured trends (averaging ~0.23C per decade). Seems to me the most reasonable conclusion to draw is: there is a small chance the Earth’s recent (post 1850) response to increasing GHG forcing represents an extreme outlier, and that the models are A-OK, but it is far more likely that the models are just too sensitive to GHG forcing. As Carrick and others have consistently pointed out, tuning of the models (conscious/explicit or not) via a combination of model parameters and aerosol offsets makes it possible to arrive at most any ’emergent sensitivity’ one likes. The AR5 reduction in the best estimate range for aerosol offsets (based on recent measured values) makes the very large aerosol offsets used in most GCMs less likely to be correct, and the models more likely to be simply too sensitive. The AR5 aerosol estimates are most consistent with much lower values for transient and equilibrium response, as Nic’s work (and other empirical estimates) has consistently shown.
    .
    Were this not climate science, I would expect a rapid winnowing of the models to those which are more reasonable, with the obviously wrong (absurdly hot!) ones eliminated from serious consideration, followed by an examination of why the truly crappy models differ from the somewhat less crappy ones….. to try to figure out how to improve the remaining contenders. But that is not at all what happens. Each model is stoutly defended by climate scientists, and no model, no matter how bad its results, is seriously questioned, never mind discarded. In spite of many tens of billions of dollars of public expenditure in support of climate science, and in spite of far better and much more data (satellites, ARGO, etc.), and most of all, in spite of obvious divergence between model projections and measured reality, there has been zero progress in narrowing the 3C+/-1.5C range from the 1979 Charney report. 35 years of hugely costly publicly funded effort, and not a bit of progress on the most important parameter? If this be science, then it is like no science I know. This is not a difficult and poorly understood scientific subject like dark matter or dark energy, where 35 years with little or no progress is not surprising. We have been told (ad nauseam) for 35 years that all the basic processes controlling Earth’s climate sensitivity are reasonably well understood (my favorite phrase is ‘based on physics’, followed closely by ‘state of the art model’), so the models just have to be a reasonable representation of Earth’s climate. They are not; they are almost certainly much too sensitive to GHG forcing.
    .
    IMO, the evidence is overwhelming: The field does not make progress on the most important question for public policy because true technical progress on that question would be in the direction of Nic Lewis’ work, reducing the plausible range for sensitivity, and so would reduce the urgency ‘for immediate public action’ to cut fossil fuel use. Real scientific progress is not happening because that progress would diminish the field’s raison d’être. Which is to say, I think climate science is not really science in any normal meaning of the word, but rather similar to a grotesque kabuki theater we are all forced to attend, where green political agendas motivate the characters’ actions, but are hidden from the audience’s view behind opaque masks of distorted ‘science’. I’d say it’s a good time to stop buying the theater tickets, or at least to dramatically reduce the ticket price.

  175. Kenneth F,

    Apologies, as this will probably be my final comment for a while. I am heading to the high Arctic this week for ~45 days, so I am in the process of doing preparations.

    “I judge from the 6 different versions of your hybrid and kriging applications for producing a globally gridded temperature data set that you are interested in looking at the trend variations that are obtained from using different temperature sources and methods.”

    [RW] Yes – we feel it is important to nail down the differences between different datasets and methodologies.

    “(1) The update notes that CRU had a recent update of their data set and I noticed that the 1997-2012 trends using CRU data have increased. Was that increase from the additional CRU data in their update?”

    [RW] I’ll check with Kevin whether that was the cause, but it is worth noting that in the previous updates (there are three in total) we have made improvements to the methodology used in the paper for combining SST and land data. We have also produced a long (1850-present) kriged gridded version which reconstructs the HadCRUTv4 ensemble of reconstructions. Our preferred reconstruction is the median of our reconstructed ensemble.

    “(2) Why do you not give at least equal time to looking at the 1979-2012 trends?”

    [RW] I presented a talk specifically on those trends at the ArcticNet Annual Science Meeting last year. The reason we focus on the 1997-2012 period is because that period of time emphasizes the impacts of coverage bias as a result of the rapid warming which began in the Arctic during the late 1990s and continued to present (Table 4). Coverage bias works both ways – in cool periods the lack of coverage in the Arctic can lead to an underestimate of regional cooling. Over the period 1979-2012 the trends we present are similar to the original Hadley data partially because the biases can cancel out to some degree (see Figure 6 in the original paper).

    “(3) I notice that you provided no confidence intervals (CIs) in your trend results in your update and thus it becomes difficult to determine where there might be statistically significant differences.”

    [RW] The updates are preliminary work which is aimed at documenting some of the interesting things we are seeing as we progress with this project. We will provide CIs for any additional papers that use information on these alternative reconstructions. Alternatively we have made these series readily available on the website so if you wish you could calculate them.

    “(4) In your first paper I noticed that you took as your model for calculating CIs the ARMA(1,1) model from a Grant Foster paper. Have you attempted to determine the best ARMA model fit to the temperature series residuals and would it vary with the different series produced by using different methods and data sources?”

    [RW] I surmise the noise model could change depending on the dataset used. In our experience the Foster paper method for calculating CIs is a reasonable approach but if one wanted to you could determine the best model for each. To be honest it would be on the bottom of my to do list at this point.

    “(5) What do you think of the benchmarking tests for evaluating and comparing various temperature data sets available that ideally provide what the producers of the benchmarking test would consider realistic, non climate conditions that require adjustment of the data set? You use this adjusted data in your methods and do not do the adjustments, but I would think this might be of interest to you and Cowtan. I have had discussions with those generating benchmarking tests and asked that they have a test that looks at potential non climate factors that could produce poor test results, i.e. factors that might be overlooked. I know there are non climate effects, like a slowly changing condition over time, that would be very difficult to impossible to detect. I also strongly suspect that the current GHCN algorithm for temperature set adjustments does not completely adjust all station non climate related changes.”

    [RW] Long story short this is the subject of a future research update and I would prefer not to show our results at this point as you can understand. Nevertheless, I think there are issues which can arise in terms of non-climate factors and homogenization algorithms, particularly ones which assume a slowly varying climate or which use a spatially uniform outlier detection scheme.

    “I have a bad habit of analyzing papers dealing with climate topics and specifically and with intent attempting to determine realistic confidence limits for the evidence presented. I feel less bad about my habit when I see that Judith Curry, a prominent climate scientists, has become obsessed with uncertainties in these matters.”

    [RW] I have no qualms about pushing towards having greater understanding of uncertainties so long as it is dual-sided. Unfortunately I feel that Dr. Curry has characteristically emphasized uncertainties in mainstream climate science while underplaying uncertainties in pseudo-climate science.

    “Sometimes the extent of the uncertainties in reporting results is not made clear. For example, your update article linked above shows 6 combinations of methods/data with global trends in degrees C per decade for the period 1997-2012 of 0.114, 0.124, 0.112, 0.092, 0.107 and 0.099. I note that the ocean temperatures for all these combinations use kriging and it appears that the SST, and thus the ocean trends, for all 6 combinations will be nearly or exactly the same. Since the oceans make up approximately 2/3 of the global area, the global trend difference of the 6 combinations needs to be multiplied by 3 to obtain a difference for these combinations as applied to the land area. That calculation produces a significantly wider range of differences between combinations. I suppose one can attempt to “select” a best combination, but without a good independent confirmation one is left with a wide range of results.”

    [RW] I’m running out of time to answer all of these, but it is more nuanced than what is mentioned above. However, the general point is that high-latitude land coverage is very important for recent temperature trends. Notice Table U2 in the update. Trends from 60-90N for GHCNMv3 using purely kriging are 0.705 while for CruTem4 it is 0.951. As you saw in Figure U1 of that same update, CRUTEM4 has much more coverage than GHCNv3. Personally I consider more data in high latitudes to be better than less, so selecting a “best” combination for me begins with the larger data pool.

    “Perhaps for completeness we need to go back further with the Cowtan Way kriging temperature series and compare trends there.”

    [RW] The long-kriged version is on the website. Updated to 2013.

  176. Let me know if I understand this correctly. The AR5 SPM states that: “The total anthropogenic RF for 2011 relative to 1750 is 2.29 [1.13 to 3.33] W/m^2”. Based on the central estimate and modern warming of 0.8C, Nic finds that TCR is about 0.8*3.71/2.29 = 1.3C. Sounds good to me, except the error bars are from 0.9C to 2.6C. So until we get better data on future human aerosol forcings the error bars will remain high enough for this debate to continue for decades to come. Maybe I shouldn’t be so pessimistic?
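
    The arithmetic above checks out; a quick sketch, using only the AR5 SPM numbers quoted in the comment:

```python
d_temp = 0.8   # modern warming (deg C)
f_2x = 3.71    # forcing for a doubling of CO2 (W/m^2)

def tcr(forcing):
    """Simple TCR estimate: observed warming scaled to a CO2 doubling."""
    return d_temp * f_2x / forcing

# AR5 anthropogenic forcing: central 2.29, range 1.13 to 3.33 W/m^2
print(round(tcr(2.29), 1), round(tcr(3.33), 1), round(tcr(1.13), 1))  # -> 1.3 0.9 2.6
```

The central value gives ~1.3C, and the forcing range alone spreads the estimate from 0.9C to 2.6C, which is the point about the error bars.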

  177. Robert Way, thanks for your detailed replies.

    Carrick, here are the CWH 2002-2012 latitudinal trends. I notice some negative trends here at higher latitudes. The very highest latitudes (>72.5N) continue to show higher trends. I think my calculations are correct, but I might double-check. Shorter time periods can, of course, be dominated by natural variation (“weather” noise). That is also why we need to look at trends longer than 1997-2012.

    I’ll download the kriging series Robert Way noted above and look at longer term latitudinal trends.

    87.5 1.185262935
    82.5 1.106883752
    77.5 0.862583285
    72.5 0.449320486
    67.5 0.061570317
    62.5 -0.193947873
    57.5 -0.254596158
    52.5 -0.258385733
    47.5 -0.121161852
    42.5 0.057581919
    37.5 0.11348447
    32.5 0.07625862
    27.5 0.026687599
    22.5 -0.002387114
    17.5 -0.023168626
    12.5 -0.058785818
    7.5 -0.09520564
    2.5 -0.222599407
    -2.5 -0.225988339
    -7.5 -0.105524559
    -12.5 -0.082135496
    -17.5 -0.082645471
    -22.5 -0.07180729
    -27.5 0.004957678
    -32.5 0.122076921
    -37.5 0.172914351
    -42.5 0.149610964
    -47.5 0.034556455
    -52.5 -0.090892016
    -57.5 -0.240082827
    -62.5 -0.32320377
    -67.5 -0.168715416
    -72.5 -0.023537682
    -77.5 0.169975746
    -82.5 0.125181322
    -87.5 0.009816709

  178. Fred Moolten (Comment #126611)

    “Fred, you say “as long as the agreement came about”. Does this mean you concede that Nic was correct all along in his use of OHC uptake as reported by Lyman and Johnson and you were mistaken in saying “your assertion that the 1800 m value can be no more than 0.30 W/m2 is unjustified”? ”

    Does not seem like it would take over a paragraph to answer.

  179. Nic Lewis states in his paper that it became possible around 2002 to provide an estimate of the climate sensitivity using mostly observations and inputs independent of climate models, and thus provide a measure of where these sensitivities stand with regard to the models. I would think that would be the focus of this discussion.

    Secondarily Nic discusses the uses and abuses of the Bayesian statistics in determining the distribution of sensitivities and some evidently far-fetched use of data by other scientists working on climate sensitivities – all of which should be fodder for discussion here.

    I am personally most interested in analyses of scientists’ work and where it might have errors or not admit to large uncertainties, and that piques my interest in Nic’s paper. We have the added bonus that Nic provides his own estimates of climate sensitivities, but there I leave the replies to critiques to him.

  180. Robert Way (Comment #126622)

    “Unfortunately I feel that Dr. Curry has characteristically emphasized uncertainties in mainstream climate science while underplaying uncertainties in pseudo-climate science.”

    Curry has criticized mainstream climate science (and I agree) for failing to acknowledge the full extent of uncertainties in some of its results. Understanding your reference to underplaying the uncertainties in pseudo-climate science would require knowing generally who the pseudo-scientists are and which uncertainties you mean.

  181. SteveF:

    We are not terribly far apart, save for a few points.

    Indeed we are not. For me, the three flies in the ointment are:
    (1) Sheer numbers of people. Another 5.5 billion people using quarter the resources per capita that the developed world does is still a hell of an increase in economic activity.
    (2) Coal power generation. With apologies to those involved in the coal industry, which has driven the Industrial Revolution and saved and improved orders of magnitude more lives than it has ended and ruined, it is time for coal to go. We have a viable alternative in nuclear RIGHT NOW. At least four modular 3rd-gen PWR designs are licenced in many countries, under construction in many countries, and some have operational track records in many countries. We should be building more. Taking externalities into account, they are cheaper than coal in the long run. I hope the Chinese experience as they develop their CAP 1400 and CAP 1700 reactors prompts them to rein in their coal program and hugely expand their nuclear program.
    (3) transition to meat and high-protein diets as wealth increases. Farming takes a lot of land, a lot of water, a lot of energy, a lot of chemical synthesis. Something like a third of all protein in our bodies originated from nitrogen fixed by the Haber process: we are literally made of fossil fuel! The figures become worse when you start growing crops for fodder rather than direct consumption, which is what happens when people start regarding meat as a daily snack rather than a weekly treat. Short-term thinking could destroy a lot of arable land, and food shortages are lousy for stability, peace and prosperity.

    Most days I’m optimistic that we will meet the challenge and stabilise global population at 11-12 billion, with poverty eliminated and environmental damage halted. With a high level of global wealth, population should stabilise itself and then fall by attrition if the current developed world is anything to go by. Thing is, you have to believe in progress and technology and human capability to be optimistic, and some environmentalists seem actively opposed to all three.

  182. Saw this paper:
    Climate trends in the Arctic as observed from space
    Comiso and Hall (Early View)
    http://onlinelibrary.wiley.com/doi/10.1002/wcc.277/abstract

    Interesting passage:
    “For the period 1981–2012, the trend in surface temperature using GISS data, was estimated, as shown in Figure 2(b), to be… 0.60°C/decade… for the Arctic region (>64°N)… For comparison, the corresponding trend in temperature using AVHRR (satellite) data for the same time period in the Arctic (>64°N) is 0.69°C/decade… The slightly greater trend in Arctic temperatures from AVHRR is mainly due to stronger trends in the Central Arctic Basin (see Figure 2(a)) where in situ data are very sparse.”

  183. Fred Moolten,

    I don’t know why you think your response would be off topic and not of general interest. On the contrary, it is directly relevant to the topic of this thread which is to comment on Lewis and Crok. I for one (like Kenneth Fritsch and Bill C) would be interested in your view.

    In reading the Ed Hawkins blog post, it appears you rather foolishly accused Lewis of an error (using some unfortunate language) based on your relying on an unpublished and incorrect draft of the Lyman and Johnson paper. When this was pointed out to you, you did not acknowledge your mistake but seemed to double down by again (incorrectly) accusing Nic Lewis of a “serious error”. If you would like to correct this impression, it would be good to hear your side and, in particular, why you seem to think Lewis and Crok is still wrong on this point on OHC uptake.

  184. Regarding the 0.29 W/m^2 OHC value that Lewis applies, take a look at the cumulative OHC charts out there and one can see how this value does not describe the long-term slope of these curves.
    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/

    Take care to note that this is a globally adjusted value and is not just over the ocean.

    I can see someone else interpreting this slope as 0.56 W/m^2 more easily than 0.29 W/m^2, which is why the error looked so suspicious.
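
    For anyone checking these numbers: a globally averaged flux follows from an OHC change divided by the elapsed time and the whole Earth’s surface area (since these fluxes are quoted per global square metre). A minimal sketch; the 4.7e21 J/yr figure is just an illustrative uptake rate, not a value from the papers under discussion:

```python
EARTH_AREA = 5.1e14        # m^2, full globe (flux is quoted per global m^2)
SECONDS_PER_YEAR = 3.156e7

def flux_from_ohc(delta_ohc_joules, years):
    """Globally averaged heat flux (W/m^2) implied by an OHC change."""
    return delta_ohc_joules / (years * SECONDS_PER_YEAR * EARTH_AREA)

# an uptake of ~4.7e21 J per year corresponds to ~0.29 W/m^2
print(round(flux_from_ohc(4.7e21, 1.0), 2))  # -> 0.29
```

Whether the observed OHC slope supports 0.29 or something closer to 0.56 W/m^2 is exactly the disagreement in this sub-thread.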

  185. matt (Comment #126630)

    You have presented your view of what you think is required for the future of mankind. It is difficult, in my view, to judge the exact path by which mankind will handle the challenges thrown at it, but if mankind is allowed to innovate, and the free market is allowed to operate so that the ideas that work and are most efficient are given the opportunity to be put into practice while those that do not are allowed to fail, I have great optimism for our future.

    Pricing and regulating those operations and processes that can be shown to do harm to individuals or groups of individuals through a system of torts, rather than through government taxation and general regulations, would in my view provide an optimal way of handling GHGs – if, of course, harm can be demonstrated in a court.

    I do have to admit that when I hear someone making suggestions as to what they see specifically as the requirements for the future of mankind, I get a little nervous when they do not specify how they think all that can be accomplished. Detailed and specific plans remind me too much of those past governments with command economies, whose plans were never admitted to be failures until the governments themselves failed.

  186. WHT:

    Regarding the 0.29 W/m^2 OHC value that Lewis applies, take a look at the cumulative OHC charts out there and one can see how this value does not describe the long-term slope of these curves.

    Long-term slope? Fred Moolten objected to this assertion from Nic Lewis, who was talking about 2004-11:

    The latest paper on ocean heat uptake (Lyman and Johnson, 2014, J. Clim) shows warming down to 1800 m during the well observed 2004-11 period that, when added to the AR5 estimates of other components of recent heat uptake, equates to under 0.5 W/m2.

  187. Gates on Smil

    In 2000, the dry mass of humans was about 125 million metric tons. For all domesticated animals, it was 300 million tons. That’s a total of 425 million tons, compared to just 10 million tons for all wild vertebrates.

    So, yes, the free market has taken care of the growing population, but it also remains an experiment with the biosphere.

  188. RB (Comment #126641)

    First of all we do not have a truly free market in any of the major economies of the world and thus we have to be careful when we attribute problems to the free market that are often the result of government actions directly or of those that the governments favor – as in crony capitalism.

    Secondly, while the point of your comment is not clear, it appears you might be suggesting some ratio of those dry weights that is appropriate and needs maintaining.

    Thirdly I strongly suspect that wild vertebrates are primarily controlled and in the domain of governments where the free market cannot and does not operate.

  189. RB (Comment #126643)

    Would that magic number be enforced or is that merely a suggestion that is supported by whatever consensus applies in this matter?

  190. Thanks Robert.

    I saw the Comiso stuff at AGU (as you know).

    For other folks here, one of the reasons I’ve been fairly confident that HADCRUT was underestimating the Arctic, and that CW2014 was an advance in estimating the Arctic, was the charts from the paper he cites above. A couple of them were shown at the fall AGU in the surface temperature session.

    There are ongoing projects (FINALLY) to construct consistent satellite-based gridded time series for albedo, NDVI, LST, leaf cover, etc. I’m looking at doing one for impervious area back to the early days of Landsat. To the extent that these geographic features influence surface temperature, our estimates for the satellite era will improve.

  191. RB (Comment #126641),

    In 2000, the dry mass of humans was about 125 million metric tons. For all domesticated animals, it was 300 million tons. That’s a total of 425 million tons, compared to just 10 million tons for all wild vertebrates.

    Vertebrate normally means ‘with a backbone’, which includes fish. The recent global ocean take of wild fish in documented fisheries is about 90 million tons per year. (http://www.earth-policy.org/indicators/C55/fish_catch_2012) This for sure does not take into consideration fish catch which was not documented. Converting to dry weight (factor of ~5) means the commercial take each year is ~18 million tons dry weight. Perhaps you are suggesting harvested fish have an average lifetime of a few months or less? (It is actually several years for most species.) Of course, there are serious problems with overfishing of target species, which need to be addressed ASAP to preserve and rebuild wild stocks of targeted species. But keep in mind lots of fish species are not targeted, so the ocean’s dry biomass of vertebrates has to be much larger than 10 million tons… probably 10 to 30 times that amount, and maybe more. It’s OK to argue that humankind dominates Earth’s biosphere (it does), but it’s not OK to exaggerate that domination.

  192. SteveF, no exaggeration when the link includes the breakup. Gates was referring to wild land vertebrates. The figure shows 250 million metric tons of marine vertebrates.

  193. RB,

    Wait, there’s more mass in domesticated animals than fish in the ocean? That’s kind of astounding.

    WHT,

    Of course it’s not consistent with the long-term slope. That means either that Argo is better and the long-term slope is wrong, or that the slope has changed (or that Argo is worse than the old trend, which I doubt). If the slope has changed, then the missing heat is not hiding in the deep oceans during the pause. At least not all of it…

  194. RB, sidestepping “dry mass of humanity” as my new least favorite way to devalue human life (ahead of brown babies as “mouths to feed”, or all of us as little more than “carbon emitters”), what’s your point?

    History seems to tell us that, as a species, we do OK even with near-exponential growth (Malthus must be spinning in his grave). Having grown up knowing regular famine in parts of Africa and Asia, it’s very warming to know we’ve almost completely eliminated mass famine now. It seems only when one of the things that make us human, society, breaks down do we see famine return. A bit more rational decision-making would be nice, but mostly things seem sweeter with this much dry mass.

    HumanityRules

  195. I was curious (or perhaps feeding an obsession) to see what the confidence intervals for latitude-zone trends would be for the various temperature data sets we have been analyzing. I did both GISS and CWH for the period 1997-2012; I used that period since it appears to be an obsession here as well. I took the linearly regressed residuals from the 36 latitude-zone temperature series, ARMA-modeled those residuals, and selected the best model using the lowest AIC score. I also calculated the standard deviation of the selected ARMA model residuals. Finally, using the selected ARMA coefficients and standard deviations, I ran 10,000 simulations of trend estimates to obtain the 95% CIs for trends in a given latitude zone. In the linked table below I show the ARMA model, ARMA standard deviations, the trend and the 95% CIs for each latitude zone for CWH (GISS was much the same and is not shown).

    The table shows that the CI range is much larger at the highest northern and southern latitudes than at the mid latitudes. I thought that perhaps the smoothing that could occur with the extrapolation might not show a great variability in the areas sparsely covered by stations, but that is not the case. There may be other reasons for the large variations we see in trends for these areas, like land versus sea, but with the short time period used and the very large CIs at the higher and lower latitudes it is difficult to say anything certain about trends in those regions of the globe, and particularly when comparing across methods and data sets. One can say that the warming is much faster in the most northerly and southerly latitudes for CWH for the period analyzed. Overall the CWH data set has higher CIs than those from the GISS data set. I need to do this same exercise for longer time periods.

    It should also be noted that the selected ARMA model varies with latitude. That might be telling us something about how temperatures vary in reality in these different areas or something about how those temperatures are adjusted. I would suspect we should see the SSTs with lower standard deviations but with higher autocorrelations. I should also note that different ARMA models derived from a given temperature series can give nearly the same trend CIs.

    http://imagizer.imageshack.us/v2/1600x1200q90/5
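    Kenneth's procedure (regress out a linear trend, model the residuals as an ARMA process, then simulate to get trend CIs) can be sketched as follows. This is a simplified version that uses an AR(1) residual model in place of AIC-based ARMA(p,q) selection, and the input is any zonal temperature series (the data here are not from any of the data sets discussed):

```python
import numpy as np

def trend_ci_ar1(y, nsim=10000, seed=0):
    """Monte Carlo 95% CI for an OLS trend, treating the regression
    residuals as AR(1) noise (a simplification of AIC-selected
    ARMA models)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)          # OLS trend
    resid = y - (slope * t + intercept)
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    sd = np.std(resid) * np.sqrt(1.0 - phi ** 2)    # innovation sd
    slopes = np.empty(nsim)
    for i in range(nsim):
        e = rng.normal(0.0, sd, n)
        noise = np.empty(n)
        noise[0] = e[0] / np.sqrt(1.0 - phi ** 2)   # stationary start
        for k in range(1, n):
            noise[k] = phi * noise[k - 1] + e[k]
        # refit a trend to (fitted trend + simulated noise)
        slopes[i] = np.polyfit(t, slope * t + noise, 1)[0]
    return slope, np.percentile(slopes, [2.5, 97.5])
```

    Autocorrelation widens the CI relative to the white-noise formula, which is the point of the exercise; a full treatment would select among ARMA(p,q) models by AIC, as Kenneth did.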

  196. Carrick, in a post above I inadvertently gave you GISS latitudinal trends for 2002-2012 instead of the intended CWH trends. Here are the CWH trends by latitude for 2002-2012. This shows that the northern latitudes continue to warm at a faster rate in CWH than in GISS, even though the lower latitudes warm at a lesser rate. I believe I pointed this difference out in our initial discussions of Cowtan and Way.

    Lat   Trend (°C/decade)
    87.5 2.003845957
    82.5 2.107571649
    77.5 1.719944509
    72.5 1.125558501
    67.5 0.562290394
    62.5 0.013155542
    57.5 -0.134247929
    52.5 -0.113117479
    47.5 -0.096611998
    42.5 0.123415717
    37.5 0.066400552
    32.5 0.024982216
    27.5 0.007497813
    22.5 -0.022180147
    17.5 0.011244389
    12.5 -0.034517248
    7.5 -0.138455425
    2.5 -0.219403884
    -2.5 -0.240959551
    -7.5 -0.175518576
    -12.5 -0.085128448
    -17.5 -0.094755791
    -22.5 -0.151230088
    -27.5 -0.08858863
    -32.5 0.112168846
    -37.5 0.241070194
    -42.5 0.293312632
    -47.5 0.203154995
    -52.5 0.021298097
    -57.5 -0.182402063
    -62.5 -0.270696941
    -67.5 -0.265887998
    -72.5 0.009452635
    -77.5 0.383068341
    -82.5 0.466280944
    -87.5 0.14559799

  197. Kenneth Fritsch (Comment #126653),
    Your link seems broken, or the target file contains ‘errors’ that make it impossible to display.

  198. Kenneth Fritsch,

    I plotted up your CWH trend values for 2002 to 2012 with the x-axis being the sine of the latitude, to better show the area weighted contributions. http://i61.tinypic.com/24wd479.png

    The pattern is, well, very strange, with an almost incredibly rapid increase above 70N.

    I also calculated the (approximate) area-weighted averages:
    Southern hemisphere = -0.024 C/decade
    Northern hemisphere = -0.011 C/decade
    Global = -0.017 C/decade

    So on average, global cooling. Do these numbers look right to you?

    I am shocked by the above 70N rate, since CO2 acts everywhere but the only rapid warming is in the Arctic. Are we just seeing the warming influence of more open water (less ice cover)? Maybe a look at the seasonal trends (a la Nick Stokes) above 70N would help clarify what is driving this rapid warming.
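    The area weighting SteveF describes can be sketched: on a sphere, the area of a latitude band is proportional to the difference of the sines of its edge latitudes (which is why plotting against sin(latitude) shows the area-weighted contributions). A minimal numpy sketch, checked against made-up fields rather than the actual CWH trends:

```python
import numpy as np

def band_weights(edges_deg):
    """Area weights for latitude bands on a sphere: the area between
    two latitudes is proportional to the difference of their sines."""
    s = np.sin(np.radians(np.asarray(edges_deg, dtype=float)))
    return np.diff(s)

def area_mean(band_values, edges_deg):
    """Area-weighted mean of per-band values (e.g. zonal trends)."""
    w = band_weights(edges_deg)
    return float(np.sum(w * np.asarray(band_values)) / np.sum(w))

# 36 five-degree bands from 90S to 90N (centers -87.5 ... 87.5)
edges = np.arange(-90.0, 95.0, 5.0)
```

    For 5-degree bands this weight is very nearly proportional to the cosine of the band center, and plotting against sin(latitude), as SteveF did, makes equal plot intervals correspond to equal areas.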

  199. SteveF (Comment #126656)

    I put forth this observation in our initial discussions of Cowtan and Way, which put attention on the Arctic polar amplification. I was wondering whether the fact that 90S-60N warming had plateaued while the Arctic continued to warm at an accelerated pace would tell us anything about the likely cause of the polar amplification: heat transported there or generated on site. While the feedback from open ocean areas and lower albedo has to have some effect, I thought the general opinion was that most of the Arctic warming arrives via transport from lower latitudes.

    The GISS temperature data set shows a recent bending of the trend in the Arctic during the recent global warming pause, while CWH shows little or none. UAH was intermediate, as I recall. I was attempting to determine whether CWH made sense in the scheme of things for amplification and had something new to teach us, or whether that data set was suspect.

    I think what I have learned is that there is no conclusive theory on the mechanism for Arctic polar amplification, and that the magnitude of warming at the extreme latitudes remains uncertain.

  200. I forgot to add, Steve, that we looked at seasonal temperature trends and most of the warming is produced in the winter and spring. That also agrees with the theory of melting summer/autumn ice: open water, while having a lower albedo, can store absorbed heat and thus reduce surface air warming, and during the winter and spring, with ice cover, the stored heat is radiated into the surface air. I made some graphs, but Nick Stokes took the same data and made neat animated graphs showing the seasonal changes much better. I’ll attempt to find a link to put here.

  201. Kenneth Fritsch

    I do have to admit that when I hear someone making suggestions as to what they see specifically as the requirements for the future of mankind, I get a little nervous when they do not specify how they think all that can be accomplished. Detailed and specific plans remind me too much of those governments of the past with command economies, whose plans were never admitted to be failures until the governments themselves failed.

    Oh don’t worry, I have no desire for nuclear build-out by international fiat. I hope (expect?) the market to provide.

    Consider: China is currently completing a number of Westinghouse AP 1000 reactors at about 6 billion US dollars overnight cost each. These are the first to be built; as supply lines become established and lessons are learned, expect that price to come down. Also, China is building its own version of the AP 1000, the CAP 1400, with the intention to expand to the CAP 1700. Before 2020, there should be 1.7 GW reactors being built in China for 4 billion US dollars or less. With a 90% capacity factor, an 80-year life, and low operating and fuel costs, that might be directly competitive with coal in China over the long haul. China, of course, is the driver for most of the current global CO2 trajectory; in the short term, it doesn’t matter much what the rest of the world does.

    Also consider that Russia is expecting to offer a lead-bismuth cooled small modular fast neutron reactor by 2017, an offshoot of the reactor used to power the Alfa class submarine. Nuscale and Babcock and Wilcox are developing Pressurised-Water small modular reactors. All of these units are designed to be factory-built and rail-shippable. If only a few are built, they will flop. With full order books, they could both compete directly with cheap coal and gas, and lower financial risk by allowing a nuclear site to develop capacity incrementally. We shall see.

  202. Robert Way writes in response to Kenneth’s insightful question

    “(2) Why do you not give at least equal time to looking at the 1979-2012 trends?”

    [RW] I presented a talk specifically on those trends at the ArcticNet Annual Science Meeting last year. The reason we focus on the 1997-2012 period is because that period of time emphasizes the impacts of coverage bias as a result of the rapid warming which began in the Arctic during the late 1990s and continued to present (Table 4). Coverage bias works both ways – in cool periods the lack of coverage in the Arctic can lead to an underestimate of regional cooling. Over the period 1979-2012 the trends we present are similar to the original Hadley data partially because the biases can cancel out to some degree (see Figure 6 in the original paper).

    So if the HADCRUT global warming trend from 1979 to 2012 is “similar” to the C&W “adjusted” trend for the same period, then that implies the Arctic was actually cooling (or at the very least warming much less than the rest of the world) over 1979 to 1997, so that the overall trends match.

    So much for Arctic amplification, eh. It only amplifies when we say it amplifies mkay?

  203. What Clive Best asks is important
    http://climateaudit.org/2014/03/10/does-inhomogeneous-forcing-and-transient-climate-sensitivity-by-drew-shindell-make-sense/#comment-505045


    This definition TCR(E) can be measured by experiment. It is simply the average temperature rise when CO2 levels reach 560ppm. It can be essentially measured today. This definition removes the non-CO2 anthropogenic effects (aerosols, methane, soot etc.) and avoids getting trapped by the model-centric view. These effects are essentially anthropogenic feedbacks, just like climate feedbacks such as increased H2O.

    In all other branches of physics models make predictions and experiments then test the models. Why should climate science be different?

    What Clive says is: replace Lewis’ difficult problem of estimating

    $latex \hbox{TCR} = F_{2\times CO_2} \, {\Delta T \over \Delta F}$

    where $latex F_{2\times CO_2}/\Delta F$ is model-based, with the much more tractable experimental estimate based on the differential log(CO2) sensitivity

    $latex \Delta T = {\hbox{TCR} \over \ln 2} \, {d[CO_2] \over [CO_2]}$

    Example:
    http://imageshack.com/a/img27/2007/mkx.gif
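    Clive’s differential relation is easy to check numerically: if warming follows ΔT = TCR·ln(C/C0)/ln 2, then an observed CO2-attributable warming recovers TCR without any model-based F2×CO2/ΔF ratio. A minimal sketch with invented numbers; the attribution of ΔT to CO2 alone is, of course, the hard part that the sketch assumes away:

```python
import math

def tcr_from_obs(delta_t, co2_now, co2_ref=280.0):
    """TCR(E): transient response per CO2 doubling inferred from an
    observed CO2-attributable warming, assuming
    delta_t = TCR * ln(co2_now/co2_ref) / ln(2)."""
    return delta_t * math.log(2.0) / math.log(co2_now / co2_ref)

def warming_at(tcr, co2, co2_ref=280.0):
    """Inverse relation: expected CO2-driven warming at a given level."""
    return tcr * math.log(co2 / co2_ref) / math.log(2.0)
```

    By construction, warming_at(tcr, 560) returns tcr itself, which is Clive’s point: at 560 ppm the observed CO2-attributable warming *is* TCR(E).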

  204. matt (Comment #126663)

    Your comment on the Chinese and Russian investment in nuclear brings home a point I made previously in this discussion: the distortion of the allocation of scarce resources in a command economy. Much of the capital raised for these investments in China comes through the government-controlled and -directed banking system. In Russia I would guess that investment has much to do with government connections, as in crony capitalism.

    Now some people, even in the US, admire from afar the ability of these governments to bypass, in this case, the seemingly unwarranted fears of nuclear, but that same government power allows these governments to make uneconomical decisions and never have to admit failure when the investments turn out to be uneconomical in the real world.

    When the truly free market is used to make these decisions and bad investments are allowed to fail, we get much closer to the real world. Unfortunately, in most if not all developed nations, the market is blurred by subsidies and differing tax rates for different sources and production of energy. I am also not forgetting that the market and/or a system of torts needs to be applied to energy sources that can be shown to harm individuals and groups of individuals. In command economies those issues need not be considered.

  205. “This definition TCR(E) can be measured by experiment. It is simply the average temperature rise when CO2 levels reach 560ppm.”

    And then subtracting all of the natural warming.

  206. TimTheToolMan (Comment #126666),
    While I like the Cowtan and Way effort to infill missing data in a rational/justifiable way, you are absolutely right that it should not be a means to emphasize that recent warming ‘is worse than we thought’, unless the same methodology is applied over all the applicable data, where the message turns into ‘it’s about like we thought’. Focusing only on the later period which ‘amplifies’ the Hadley trend sounds very much like a cherry pick, and even more like a cherry pick if the semi-digested pap fed to the mainstream media ignores that earlier period. I think Cowtan and Way have an obligation to note, and even to emphasize, if their method does not lead to a significantly more rapid warming trend in the Arctic compared to the Hadley trend when all the data (back to 1979) is considered. My hope is that any improved method for infilling of data would be used to advance understanding (What factors drive large differences in temperature trends in the Arctic compared to the rest of the world, and over what time scales?), not to advance an agenda of alarm. It would be wise for them to avoid even a perception of bias in how the results are presented. It falls under the general heading of ‘bending over backwards to not mislead’…. yourself or anyone else.

  207. Kenneth Fritsch (Comment #126664),
    If warming in the region above 75N is driven primarily by changes in ice cover (including albedo influences), then it ought to be possible to relate year-on-year Arctic trends to year-on-year changes in ice cover and ice volume. That is, if a summer season melts more ice, then temperatures in the late autumn and early winter, when ice cover is reforming, ought to be exceptionally warm, while the late winter and early spring ought to be less influenced by the previous summer’s melt extent. I guess looking at the October/November/December trends by latitude versus the January/February/March trends by latitude, along with the trend in the September ice minimum, might be instructive.
    .
    Looking at Nick Stokes’ seasonal trends, I am struck by the rapid COOLING between about 40N and 60N in the winter months, even while the above 70N region shows rapid wintertime warming. That fairly well screams (at least in my ear 😮 ) that the mechanism of recent Arctic warming is related to a change in the rate of heat exchange between the mid and upper northern latitudes in winter, and not directly due to GHG radiative forcing. The drop in summer ice cover then becomes a consequence of the warming, rather than a cause, although without a detailed mechanistic model it is possible that ice cover is in part both cause and effect. (Does lower summertime ice cover lead to greater heat exchange between the Northern mid latitudes and the Arctic the next winter?).
    .
    Anyway, interesting stuff.
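    The kind of test SteveF suggests (relate year-on-year seasonal temperatures to the previous September ice minimum) amounts to a lagged correlation after removing the trends from both series. A minimal sketch; `detrended_corr` is a hypothetical helper, and the series below are invented stand-ins for ice extent and seasonal anomalies, not real data:

```python
import numpy as np

def detrended_corr(x, y):
    """Correlation of two annual series after removing a linear trend
    from each, so shared trends don't masquerade as year-on-year
    covariation."""
    t = np.arange(len(x), dtype=float)
    rx = x - np.polyval(np.polyfit(t, x, 1), t)
    ry = y - np.polyval(np.polyfit(t, y, 1), t)
    return float(np.corrcoef(rx, ry)[0, 1])
```

    Applied to, say, the September ice minimum versus the following OND mean anomaly above 70N, a strongly negative value would support the ice-memory mechanism; a value near zero would argue against it.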

  208. Correction. I wrote: “I think Cowtan and Way have an obligation to note, and even to emphasize, if their method does not lead to a significantly more rapid warming trend in the Arctic compared to the Hadley trend when all the data (back to 1979) is considered.”
    .
    That should have been: “I think Cowtan and Way have an obligation to note, and even to emphasize, if their method does not lead to a significantly more rapid global warming trend, compared to the Hadley trend, when all the data (back to 1979) is considered.”

  209. SteveF writes

    That fairly well screams (at least in my ear ) that the mechanism of recent Arctic warming is related to a change in the rate of heat exchange between the mid and upper northern latitudes in winter, and not directly due to GHG radiative forcing.

    Doesn’t it just. If the Arctic is cooling while the mid latitudes warm and vice versa there is little escape from that conclusion. At least from the point of view of assigning a primary driver…

  210. How far back can Cowtan and Way go?
    An analysis of the early 20th century arctic warm period would be interesting.

  211. SteveF (Comment #126701)

    If anything, the Cowtan and Way paper got me to think about the Arctic warming and a possible mechanism for the polar amplification. My background reading has left me as puzzled as before. I was thinking along the lines that if Cowtan and Way had produced a data set that for the first time truly represented the 60N-90N temperatures, it could provide a path to better select amongst the mechanisms for amplification put forth in the peer-reviewed literature. Looking in more detail at the uncertainties that continue to exist in these data sets (particularly over short periods of time that can be dominated by “weather” noise), I am less sure that Cowtan and Way’s methods have been properly and thoroughly evaluated.

    Cowtan and Way also forced me to look harder at the recent plateauing of the warming in the mid latitudes (where everybody lives) and at seasonal versus annual trends.

    I had planned on looking at ice cover in the Arctic and attempting to relate it to the temperature variation there. I think your suggestions make sense, but I am wondering about the memory of the Arctic Ocean, with the complications of ice cover, and whether we would expect a good year-over-year effect. It is probably a good starting point, though.

  212. HR (Comment #126748)
    March 14th, 2014 at 7:53 am

    “How far back can Cowtan and Way go?
    An analysis of the early 20th century arctic warm period would be interesting.”

    Upstream in a post Robert Way said:

    ” The long-kriged version is on the website. Updated to 2013.”

    I have not yet been back to the website, but I would suspect it goes back as far as the other temperature data sets do. Cowtan and Way put all the data sets in the same format, and it is easy to download and manipulate them. I give them much credit for doing that.

  213. Hi Kenneth,

    I’ve been out of pocket the last few days…

    Thanks for the trend estimates. This showed what I expected to see, which was that the trends in the Arctic for C&W are well above the trends seen from other reconstruction methods. These sorts of novel results beg to be critically tested.

    The regions with negative trends are fairly diagnostic, I believe. This pattern is what one would expect from a redistribution of heat energy, rather than an increase in global heat energy.

    (I believe SteveF makes a similar point.)

  214. SteveF:

    That should have been: “I think Cowtan and Way have an obligation to note, and even to emphasize, if their method does not lead to a significantly more rapid global warming trend, compared to the Hadley trend, when all the data (back to 1979) is considered.”

    But that is not what seems to have happened. Instead they focus on the anomalous 1997-2012 period, one that I’ve repeatedly pointed out to them is non-diagnostic due to the presence of a significant episodic event near one end point (the circa-1998 ENSO, of course).

    Possibly Robert Way is being more subtle here in what he meant:

    The reason we focus on the 1997-2012 period is because that period of time emphasizes the impacts of coverage bias as a result of the rapid warming which began in the Arctic during the late 1990s and continued to present (Table 4).

    But yes, if you include 1997-1998 near one end point, you are almost guaranteed to observe the most extreme impact from the missing area of virtually any interval you could choose, and by emphasizing this interval you are effectively overselling the importance of the corrections. I’d even be willing to describe this as unconscious, confirmation-bias-driven cherry picking.

  215. Carrick,

    Possibly Robert Way is being more subtle here in what he meant:

    “The reason we focus on the 1997-2012 period is because that period of time emphasizes the impacts of coverage bias as a result of the rapid warming which began in the Arctic during the late 1990s and continued to present (Table 4).”

    Perhaps, but a less generous interpretation is that they are looking for data to show that ‘the pause’ is less important than it would otherwise seem. Even that would be OK, so long as they also point out that their data show the more rapid warming between 1979 and 1997 was exaggerated due to incomplete Arctic data during a period when the Arctic was much colder.
    .
    Is the apparent ~60-year oscillation in the temperature record since the mid-1800s in part explained by a lack of temperature data in the Arctic, combined with an oscillation in heat transport between mid and upper latitudes? That seems to me a perfectly legitimate (and interesting!) question to ask and to try to answer; the Cowtan and Way method may be a good approach to answering it. Ex post facto explanations for observations are perfectly OK, but ex post facto cherry picks to discount inconvenient observations are not OK.

  216. … It’s OK to argue that humankind dominates Earth’s biosphere (it does), but it’s not OK to exaggerate that domination….

    … post facto explanations for observations are perfectly OK, but ex post facto cherry picks to discount inconvenient observations are not OK. ….

    Moral policing runs strong. Must be dear leader Stevie Mac’s insinuation style influence.

  217. SteveF, I agree with your comment that it is okay to consider the 1997-2012 interval, as long as one considers other intervals too. I am bothered by the fact that C&W seem insistent on considering only that one interval. I am also bothered by the complete silence from people like Nick Stokes, who would be all over this as an obvious cherry pick were it Roy Spencer making a point inconvenient to the current climate science lore.

    Robert Way also commented that:

    “Unfortunately I feel that Dr. Curry has characteristically emphasized uncertainties in mainstream climate science while underplaying uncertainties in pseudo-climate science.”

    I have no idea what he means by “pseudo-climate science”, unless that is climate science that doesn’t conform to the current versions of the talking point memos published on Skeptical Science (realistically, this is what their posts amount to).

    But anyway, I would say the problem is the opposite:

    In my opinion, Robert is overplaying the measurement uncertainty, which I don’t think can be as large as people claim it must be to explain the discrepancy between the model sensitivity and the lower values suggested by observational constraints. At the same time, I think he and others are downplaying very real and substantive uncertainties raised by Judith Curry and many others, such as the huge fudge factor present in cloud feedbacks.

    They may be correct that the “actual values” are closer to the climate sensitivities of the models, in spite of the greater uncertainties of these estimates, but I don’t see how you get there by “adjusting” the temperature trends up by that amount.

    At the moment, I’m totally not sold on the parametrization they are using to combine surface and satellite records. I actually think this method is very unlikely to work, in the sense of producing more accurate temperature estimates in the regions where instrumental data are missing.

    In fact, it seems like people are grading the veracity of the result based on how much steeper you can make the temperature trend. To the degree this is happening, this is as wrong-a$$ed as you can possibly get.

    This would be outcome-based reasoning (degree of truth is defined in terms of what we expect) rather than process-based reasoning (degree of truth is defined in terms of how precisely we can understand the methodology). In my book, outcome-based reasoning would be an exemplar of “pseudo-science”.

  218. RB:

    Moral policing runs strong. Must be dear leader Stevie Mac’s insinuation style influence.

    What you are doing here is just an exercise in poking a finger in other peoples’ eyes rather than addressing substantive issues.

  219. Kenneth Fritsch

    Thanks for the pointer on the Cowtan and Way data

    Is there any (free) software I can download to handle the gridded data? I have a very low skill set and can only generate latitudinal data if the data set is on KNMI Climate Explorer.

  220. RB (Comment #126792),
    “Moral policing runs strong.”
    .
    You are joking right? If there is ‘policing’ going on it is by those like you who are adamant fossil fuel use must be drastically reduced ASAP, independent of cost (economic or political), and who actively try to silence anyone who might interfere. You know, like the behind the scenes pressuring of journal editors, the ostracizing of anyone (like Judith Curry or Roger Pielke Jr.) who dares question the accuracy/certainty of ‘the science’, or even is willing to point out factually false statements about the causes of extreme weather. Or the insistence that the Keystone pipeline must be blocked, even though blocking it will, if anything, increase emissions of CO2. Or the suggestion that trains carrying coal are like trains carrying Jews to concentration camps. Or that all ‘deni#rs’… AKA, anyone who opposes immediate drastic action…. are either immoral, selfish, corrupt, crazy, stupid, criminal, or a combination of these. Ya, those efforts at ‘policing’. Among devoted greens, the utter lack of self awareness, humility, or even common decency towards others is obscene.

  221. Carrick,
    I read over Cowtan and Way again. I think they should have extended Table IV (estimated trend bias by starting year through 2012) all the way back to 1979. They stopped at 1990, where the estimated trend bias had fallen to 0.02C per decade. Here is their Table IV data in graphical form (http://i57.tinypic.com/9hmgpc.png), which I think is a lot more informative than their table. I can’t be sure, but it would not surprise me if the estimate bias in the Hadley trend continues to drop before 1990… and maybe approaches zero.

  222. Kenneth, I’ve had a chance to go back and look at HadCRUT4 trends broken out by zonal averages (“latitudinal bands”).

    These turn out to be not very different from the trends you reported for Cowtan and Way, but substantially different from the values reported by GISTEMP.

    Trends are in °C/decade.

    SERIES      Zone     1959-2012   1979-2012   1997-2012   2002-2012

    HadCRUT4    80-90N     0.437       0.712       1.44        1.76
    HadCRUT4    GLOB       0.132       0.168       0.060      -0.044
    DMI/ECMWF   80-90N     0.409       0.722       1.01        1.35

    The DMI values are daily meteorological forecasting/backcasting values based on the European weather forecasting model (see ecmwf.int). My trends are computed using a variation on RomanM’s algorithm.
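    For reference, the baseline that such trend calculations approximate is a plain OLS slope on monthly anomalies, converted to °C/decade. This sketch is not RomanM’s algorithm (which handles offsets and seasonality more carefully), just the simplest version of the computation:

```python
import numpy as np

def decadal_trend(monthly_anoms):
    """OLS slope of a monthly anomaly series, in degC/decade.
    NaNs (missing months) are skipped."""
    y = np.asarray(monthly_anoms, dtype=float)
    t = np.arange(len(y)) / 120.0   # time in decades (120 months)
    ok = ~np.isnan(y)
    slope, _ = np.polyfit(t[ok], y[ok], 1)
    return float(slope)
```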

    I would consider DMI to be a better ground truth than satellite measurements.

    I’ll try and update this with GISTEMP numbers. Overall the GISTEMP trends versus latitude often look very different from the HadCRUT numbers.

    By the way, 1979-1997 is a very interesting interval to look at too (it mostly avoids the issues with the 1998 episodic event).

    If I get a chance, I’ll update these numbers with GISTEMP. Not sure what the best way to go from here is. Perhaps we need a new open thread discussing the question of Cowtan and Way cross-validation?

  223. Here is a comparison of Cowtan & Way (based on Kenneth’s calculations), HadCRUT 4 (my calculations) and GISTEMP (based on their widget).

    Figure.

    Conspicuously absent from this are error estimates. It is tricky to compare, because you’re looking at the results of different analysis algorithms applied to very similar data sets. So the differences aren’t so much a reflection of the true uncertainty in the measurements as of how the errors in the measurements are affected by the different analysis methods. Noise amplification is always an issue, and it is one of my chief concerns with the C&W hybrid method.

    I can think of ways of getting at the uncertainty associated with the analysis process, but it requires more time than I can afford to give at this point.

    SteveF, yes that is interesting, and suggestive of other things. It could be that the bias is negative by the time you get to 1979-2012 for example.

    I’ll see if I can get around to downloading the appropriate files from the C&W website and redoing the above analysis based on their numbers. Your 1979-1997 interval is probably more diagnostic than 1997-2012 IMO.

    Again having time to play with this is an issue.

  224. SteveF wrote

    I can’t be sure, but it would not surprise me if the estimate bias in the Hadley trend continues to drop before 1990… and maybe approaches zero.

    Robert Way said “Coverage bias works both ways – in cool periods the lack of coverage in the Arctic can lead to an underestimate of regional cooling,” and may have said so because it’s actually in the data. If that were the case, then the bias before 1990 could easily be positive.

  225. Has anyone here looked at the Comiso paper I referenced above? It seems odd that this important paper isn’t at all discussed.

    “Thanks for the trend estimates. This showed what I expected to see, which was that the trends in the Arctic for C&W are well above the trends seen from other reconstruction methods. These sorts of novel results beg to be critically tested”

    Well, our data agree with the AVHRR surface temperature satellite data (Comiso and Hall [2014]), Berkeley Earth (Land+Ocean), and AIRS satellite data over the last decade. GISS by contrast underestimates the warming since about 2005, and Hadley/NOAA significantly underestimate it.

    “I would consider DMI to be a better ground truth than satellite measurements.”

    There is a wealth of references which discuss how reanalysis-based datasets should not be used for trend analysis.

  226. Robert Way:

    There is a wealth of references which discuss how reanalysis-based datasets should not be used for trend analysis.

    Then by all means provide a single reference. That should be easy if there are a “wealth of references” to choose from.

    But in any case, I could probably make the same argument about other model based approaches, such as your hybrid model.

    AVHRR has its issues too, especially in the Arctic.

    GISS by contrast underestimates warming since about 2005 and Hadley/NOAA significantly underestimate the warming

    Define “significant”. Do you mean significant in a statistical sense (“resolvable difference”), or significant in the sense of “does matter” when comparing measured trends to model predictions?

  227. Put another way, just about everything in experimental science has warts. The question is only which warts matter the most, not whether they exist or not.

    I think it’s pretty interesting that Robert seems to be arguing for a conspiracy of errors of the various methods that seem to cursorily agree with each other here.

  228. Here is a first paper:

    “Annual and Seasonal Tropospheric Temperature Trend Comparisons of Radiosonde and Reanalysis Data at Regional Scales”, Davey (2009).

    Abstract:

    Data from global reanalyses are routinely used as initial and lateral boundary conditions for regional numerical weather prediction modeling. Reanalyses have also been used for longer term assessments of tropospheric temperature trends. Our study compares linear tropospheric temperature trend estimates for radiosonde and reanalysis data, both annually and seasonally, at land-based sites in the Americas and Australasia/Oceania from 1979-2001 in order to assess the quantitative agreement between the two types of data. In our analyses, we found that the average radiosonde trends generally fell in between the average reanalysis trend values and indicate that reanalyses are indeed appropriate to use for climate trend analysis.

    The most significant differences between the radiosonde and reanalysis datasets occurred during the Northern Hemisphere growing season (April – September), and at upper levels of the troposphere (200 and 300 mb). The semiannual variations in the significance of the reanalysis- radiosonde average temperature trend differences may be indicative of regional variations in these differences. Additional reanalysis-radiosonde comparisons using newer radiosonde datasets that have more global coverage are recommended to further investigate such regional patterns and better understand global properties of these trend differences.

    Emphasis mine.

  229. A second paper.

    Intercomparison of Temperature Trends in the U.S. Historical Climatology Network and Recent Atmospheric Reanalyses
    Vose (2012)

    Temperature trends over 1979–2008 in the U.S. Historical Climatology Network (HCN) are compared with those in six recent atmospheric reanalyses. For the conterminous United States, the trend in the adjusted HCN (0.327 °C dec^-1) is generally comparable to the ensemble mean of the reanalyses (0.342 °C dec^-1). It is also well within the range of the reanalysis trend estimates (0.280 to 0.437 °C dec^-1). The bias adjustments play a critical role, as the raw HCN dataset displays substantially less warming than all of the reanalyses. HCN has slightly lower maximum and minimum temperature trends than those reanalyses with hourly temporal resolution, suggesting the HCN adjustments may not fully compensate for recent non-climatic artifacts at some stations. Spatially, both the adjusted HCN and all of the reanalyses indicate widespread warming across the nation during the study period. Overall, the adjusted HCN is in broad agreement with the suite of reanalyses.

  230. A third paper.

    Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim

    Simmons (2014).

    Abstract
    Low-frequency variability and trends in temperature from 1979 to 2012 are examined. Observational improvements are noted and near-surface behaviour of the ECMWF ERA-Interim reanalysis is reviewed. Attention is then focussed on how closely ERA-Interim fits the upper-air data it assimilates, the bias adjustments it infers for satellite data, and its agreement with the ERA-40, MERRA and JRA-55 reanalyses and with model simulations.

    Global-mean fits to independently homogenised radiosonde temperatures and variationally adjusted satellite brightness temperatures are mainly within 0.1 K in the troposphere, with some degradation over time from assimilating varying amounts of aircraft and rain-affected microwave-radiance data, and from a change in source of sea-surface-temperature analysis. Lower-tropospheric warming appears to be somewhat underestimated. Temperature variations in the tropical upper troposphere correlate well with those at the surface, but amplitude is more than doubled, in agreement with modelling. Specific humidity varies in concert; relative humidity is largely uniform, but dips during El Niño events.

    Agreement with the other reanalyses is particularly close in the lower stratosphere, where radiance data and the background model constrain cooling to be slightly slower than in the homogenised radiosonde data. Perturbations to global-mean temperatures from underestimating warming following the El Chichón and Pinatubo volcanic eruptions and from assimilating recent GPSRO data are at most 0.2 K, less than 20% of the net change since 1979 at 50 hPa. Middle-stratospheric variations are more uncertain. Recent cooling appears to be underestimated by assimilating increasing amounts of unadjusted radiosonde data, but results do not support a recent reprocessing of earlier sounding data that suggests stronger middle-stratospheric cooling than previously indicated. Strong analysed upper-stratospheric cooling agrees quite well with model simulations if occasional jumps due to unadjusted bias changes in high-sounding satellite data are discounted.

    Producing ERA-Interim in two separate streams caused only minor discontinuities where streams join at the start of 1989.

  231. A fourth paper.

    Can Climate Trends be Calculated from Re-Analysis Data?

    Bengtsson (2004).

    Several global quantities are computed from the ERA40 reanalysis for the period 1958–2001 and explored for trends. These are discussed in the context of changes to the global observing system. Temperature, integrated water vapor (IWV), and kinetic energy are considered. The ERA40 global mean temperature in the lower troposphere has a trend of +0.11 K per decade over the period of 1979–2001, which is slightly higher than the MSU measurements, but within the estimated error limit. For the period 1958–2001 the warming trend is 0.14 K per decade but this is likely to be an artifact of changes in the observing system. When this is corrected for, the warming trend is reduced to 0.10 K per decade. The global trend in IWV for the period 1979–2001 is +0.36 mm per decade. This is about twice as high as the trend determined from the Clausius-Clapeyron relation assuming conservation of relative humidity. It is also larger than results from free climate model integrations driven by the same observed sea surface temperature as used in ERA40. It is suggested that the large trend in IWV does not represent a genuine climate trend but an artifact caused by changes in the global observing system such as the use of SSM/I and more satellite soundings in later years. Recent results are in good agreement with GPS measurements. The IWV trend for the period 1958–2001 is still higher but reduced to +0.16 mm per decade when corrected for changes in the observing systems. Total kinetic energy shows an increasing global trend. Results from data assimilation experiments strongly suggest that this trend is also incorrect and mainly caused by the huge changes in the global observing system in 1979. When this is corrected for, no significant change in global kinetic energy from 1958 onward can be found.

    Note they are discussing issues with MSU (satellite) rather than field reconstruction.

    These were the first four unique papers that came up with the search terms:

    temperature reanalysis trend estimate

  232. Robert Way:

    Has anyone here looked at the Comiso paper I referenced above? It seems odd that this important paper isn’t at all discussed.

    If you think it’s important to discuss it here, please feel free. This isn’t a classroom, and we don’t have to look at something just because you think it is important.

    I read parts of it, it just didn’t say that much to me. I was more interested in understanding the data sets, rather than somebody else’s reading of the tea leaves.

    But if you want to summarize the article beyond copy & pasting the abstract, Feel Free™.

  233. “Our results also suggest that studies of the Arctic climate based on reanalyses should be undertaken with extreme caution.”
    http://www.atmos-chem-phys.net/13/11209/2013/acp-13-11209-2013.pdf

    “This time-varying mix of observations can result in discontinuities in the reanalyses and induce artificial trends. Care especially needs to be taken when using reanalyses in the polar regions, where continuous long-term observations are particularly sparse (e.g., Sterl 2004; Bromwich et al. 2007).”
    http://journals.ametsoc.org/doi/pdf/10.1175/2010JCLI4054.1

    “However, the reanalyses completed to date have undesirable and in some cases very obvious (e.g., Fig. 1 in Bosilovich et al. 2006) and unphysical time-varying biases, which at best reduce their utility for long-term trend monitoring (but not real-time monitoring, for which they are an undoubtedly valuable tool) and at worst make them useless for such activities, depending on the region, variable of interest, and application (e.g., Bengtsson et al. 2004; Karl et al. 2006; Thorne 2008).”
    http://journals.ametsoc.org/doi/pdf/10.1175/2009BAMS2858.1

    “These observations further reinforce the notion that (air temperature) trends in the reanalysis products should be used with caution and that uncertainties at the regional scale are substantial.”
    Lindsay et al. (In Press) Journal of Climate. Evaluation of seven different atmospheric reanalysis products in the Arctic.

    The Lindsay et al paper goes through many other variables for the Arctic as well. There is a challenge in evaluating ERA-interim because it uses the observational datasets at the surface unlike most other reanalysis.

    Furthermore, if you look at our updates in detail you will see that there are large biases in reanalysis data in many regions. They’re great tools – I think they have their use – but it is very important to be cautious with those datasets. Furthermore, to trust the ERA-I dataset better than AVHRR is difficult to understand. Yes there are issues with AVHRR (cloud cover) but there are equally as many – if not more – issues with using reanalysis for trend analysis. This is fairly common knowledge amongst those who use the datasets regularly.

  234. Carrick,
    Humm… so are you saying that the claim “There is a wealth of references which discuss how reanalysis-based datasets should not be used for trend analysis ” is inaccurate?
    .
    FWIW, I am not much convinced by most “reanalysis” efforts I have read; too much reliance on the model used being ‘correct’, though I admit I have only looked closely at a few papers. The result seems to have too much potential to be nonsense like Balmaseda et al, where the reanalysis ‘data’ is bizarre and contrary to both measurements and any reasonable physical expectation.

  235. Robert Way, the issues with the reanalysis that you reference are due to discontinuities in the underlying observational data sets.

    To start with, it’s clear your original comment:

    There is a wealth of references which discuss how reanalysis-based datasets should not be used for trend analysis.

    Is counterfactual to the actual consensus view.

    It is certainly true that reanalysis tools can be used to examine trends and can even help with determining the bias in satellite measurements. There is even a “wealth of references which discuss” this. It’s interesting in passing that Bengtsson (2004) identifies a satellite splicing problem from the comparison with the temperature reconstruction.

    I agree that there needs to be caution, but it relates to data quality issues, not to the reanalyses per se. A reanalysis can “brush under the rug” these discontinuities, but then so can simpler reconstruction methods.

    If you look at satellite data critically, the first thing you will notice is that there hasn’t been just a single satellite orbiting the Earth since 1979. There have been many, and there are a plethora of instrument issues with these satellites as they go into and out of service that have to be controlled over time.

    Orbit decay, calibration drift, etc. These manifest as “step function”-like changes, for example, in comparing RSS to UAH, and do affect the long-term trend estimates, the same as with reanalysis methods. With AVHRR there are many other issues in relating it to surface air temperature. These aren’t “measurement errors” per se, but relate to whether in the end you are even measuring the same thing.

    If you are really measuring skin temperature with radiometric methods, well that’s a complicated quantity to relate to 1-m elevated surface air temperature. If you are measuring mid-tropospheric values (above the boundary layers), you are likely missing important surface physics.

    For mid-tropospheric measurements, you aren’t even taking a local measurement anymore, which is a big thing from an empirical science perspective. Without knowing the Green’s function connecting surface to elevated temperatures (or whether the Green’s function is even well defined), you can’t even really relate them. (And this takes a reanalysis model of the sort you are so strenuously objecting to.)

    And it typically isn’t as simple as assuming a linear combination of observational quantities at the two separate measurement locations, except in the simplest of dynamical models (e.g., it should work in memoryless systems).

    I also think you are putting an over-reliance on the black box that you don’t understand, satellite measurements, over surface measurements which you do understand.

    But splices are part of life here, both in satellite and surface data. You identify them by a combination of looking at meta data, and taking different breakpoints for your analysis to see how large the effect of them is on the data.
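    To make the breakpoint check concrete, here is a minimal sketch (synthetic monthly series; all numbers are illustrative, not any dataset’s actual values): fit the trend from several candidate start years and see how much it moves when a step is present.

```python
import numpy as np

def trend_per_decade(t, y):
    """OLS slope of y against time t (in years), scaled to degC/decade."""
    return np.polyfit(t, y, 1)[0] * 10.0

rng = np.random.default_rng(0)
t = np.arange(1979, 2013, 1.0 / 12)                    # monthly time axis, 1979-2012
y = 0.015 * (t - t[0]) + rng.normal(0.0, 0.1, t.size)  # 0.15 degC/decade plus noise
y[t >= 1998] += 0.1                                    # inject a step, e.g. an instrument splice

# Re-fit with several candidate start years; a large spread in these trends
# relative to the full-period value flags a possible discontinuity.
for start in (1979, 1985, 1990, 1997):
    seg = t >= start
    print(start, round(trend_per_decade(t[seg], y[seg]), 3))
```

    In practice you would place the candidate breakpoints at the dates flagged by the metadata rather than on a fixed grid.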

    Furthermore, to trust the ERA-I dataset better than AVHRR is difficult to understand. Yes there are issues with AVHRR (cloud cover) but there are equally as many – if not more – issues with using reanalysis for trend analysis.

    First, I never said I “trust the ERA-I dataset better than AVHRR”… actually I don’t. I distrust reanalysis products and satellite measurements about equally.

    So be careful about putting words in my mouth. My suggestion is to quote something I actually said and comment on it, rather than trying to paraphrase me and taking the risk of getting it wrong, as you did here.

    You should also remember I have a lot more experience than you do in relating time series associated with measuring similar but not identical things. I haven’t fully elucidated my concerns with your hybrid method partly because I haven’t

    This is fairly common knowledge amongst those who use the datasets regularly.

    The value of being skeptical in all things and all methods is also fairly common knowledge. 😛

    So it goes both ways:

    No one instrument or method is perfect, that’s why we prefer multiple tools, each with their own systematics. When we obtain agreement between different methods, each with its own unique set of systematic issues, that gives us some confidence in the accuracy of our measurement.

    I wouldn’t discount AVHRR skin measurements (for example) or mid-tropospheric reconstructed temperature (second example) simply because there seems to be good agreement between HadCRUT4 and DMI between 80-90°N.

    At the moment, I’m not even sure there’s a meaningful difference in trends between AVHRR and DMI or HadCRUT or C&W.

  236. SteveF:

    Humm… so are you saying that the claim “There is a wealth of references which discuss how reanalysis-based datasets should not be used for trend analysis ” is inaccurate?

    It’s just not an accurate statement of the consensus view. Robert Way may believe this, but I think he is probably in the minority. Whether other people agree with him depends partly on whether they are defending their field or trying to argue on how broken it is when applying for new funding.

    FWIW, I am not much convinced by most “reanalysis” efforts I have read; too much reliance on the model used being ‘correct’, though I admit I have only looked closely at a few papers.

    There are of course different types of reanalysis tools. ECMWF for example is physics based and highly reliable when you have a tight network of instruments that are well characterized. The issues are, as always, with what happens when you have a sparse network and inaccurate data.

    The issue with trends is that the network can change over time, producing erroneous secular trends. Of course there is a danger this will happen regardless of the method you use… HadCRUT or GISTEMP is just as susceptible in principle as any other method in this way.
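    For what it’s worth, the coverage-bias arithmetic is easy to sketch; a toy calculation (the area fraction and trends are illustrative, loosely matching the Arctic numbers discussed upthread):

```python
# Toy coverage-bias calculation: a small unobserved region (~2.5% of area)
# warming several times faster than the rest. Averaging only the observed
# cells (equivalent to filling the gap with the observed mean) understates
# the true global trend.
f_missing = 0.025        # fraction of the surface with no data
trend_rest = 0.10        # degC/decade over the observed area
trend_missing = 0.60     # degC/decade in the unobserved region

true_global = (1.0 - f_missing) * trend_rest + f_missing * trend_missing
observed = trend_rest    # what a coverage-limited average reports

bias_pct = 100.0 * (true_global - observed) / true_global
print(f"true {true_global:.4f}, observed {observed:.4f}, low bias {bias_pct:.1f}%")
```

    The size of the effect scales with both the missing-area fraction and the trend ratio, which is why the answer hinges on whether polar amplification really applies over the Arctic Ocean.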

    These types of models are usually more robust against data errors, because you can use physics constraints to filter out unphysical values. NCDC uses these physical constraints to a certain extent in their temperature reconstructions. I can’t see a good argument against this sort of use.

    There are other approaches that work well too…. wavenumber filtering is a method for saying “high spatial frequency information should not be present and is unphysical for long term averages”. Nick Stokes’ method is an example of this. NRL’s G2S is another.

    We often do cross-comparisons of ECMWF, WRF (the public domain open source mesoscale model), G2S and our own radiosonde measurements (to roughly 35,000 meters). What we typically find is a very good correspondence between the methods. This is also found in other similar comparisons that have been made.

    Oddly in the end, the climate models agree poorly with radiosonde data and these reconstructions, so guess what tune is being sung there… (e.g., Santer 2008).

    The main limitation with the reconstructions we use is the local physics that isn’t captured by the underlying physics model. For example, air flowing across the Rocky Mountains produces “standing gravity waves” on the lee side of the mountains. These gravity waves are usually smoothed out in the reconstruction methods we use (and their effects get added back in using Monte Carlo methods).

    It is very important though to recognize when you combine data sets to produce a new data set, that is a type of reanalysis. Simply because you combine data sets doesn’t mean that the errors of the reanalysis are always smaller either. If the model you use to combine them is sufficiently broken, you’ll end up inflating the errors when you regress for the parameters of the model. (OLS regression is especially notorious for this.)

  237. SteveF, here’s a link that discusses weather forecasting models (ECMWF is one of those) and the link to global climate models.

    Ironically, the claim in the link is that GCMs make weather forecasting less accurate!

    This is because it sucks the pot dry and prevents access to better computational facilities for models that actually work, and work better when you add more computational resources.


  238. Is the apparent ~60-year oscillation in the temperature record since the mid-1800s in part explained by a lack of temperature data in the Arctic, combined with an oscillation in heat transport between mid and upper latitudes? That seems to me a perfectly legitimate (and interesting!) question to ask and to try to answer; the Cowtan and Way method may be a good approach to answer that question. Ex post facto explanations for observations are perfectly OK, but ex post facto cherry picks to discount inconvenient observations are not OK.

    This may be part of the Atmospheric Circulation Index which coincides with the LOD and Stadium Wave. The index is calculated by how often the atmosphere is in a zonal circulation versus a meridional circulation.

    “The first type, zonal circulation, is characterised by increasing intensity of the zonal circulation at all latitudes and pole-ward shift of the wind intensity maximums. The circulation is accompanied somewhat by a decrease in the overall range of surface-air temperature between the equator and poles and by an overall increase in the mean global surface-air temperatures. Ocean-surface temperatures tend to increase in high latitudes. The second type, meridional circulation, is characterised by weakening in zonal circulation, shift of the main atmospheric streams toward lower latitudes, and overall decrease in global temperature (Lamb 1972). Both easterly and westerly winds increase during the zonal type of circulation and both decrease in the periods of the meridional type of the circulation.”

    References and graphics here:
    http://contextearth.com/2014/03/04/decadal-temperature-variations-and-lod/

  239. Carrick,
    Well, any demonstrable improvement in weather forecasting with better resolution (more computationally intensive) models could probably be justified based on immediate public benefit (knowing where the hurricane will hit on the coast of Mississippi a day earlier is valuable information!). But would you not need both higher resolution/more accurate input data AND a lot more computer horsepower? My understanding is that both uncertainty in the initial state and computational resolution impact how quickly a weather model diverges from reality; do I have that wrong?
    .
    I think a large part of the funding for GCM’s is a poor public investment, at least so long as climate modelers take the ostrich approach and refuse to dump the rubbish models (4.5C ECS is a bad joke!). They need to work toward real optimization….in both performance and value for the (public) funds being spent. The feeling I get is that the current crop of models are a bit like gold-plated horse drawn chariots, constantly leaving horse dung on the streets, when what we need is a Ford Model A. The modelers need to improve the conceptual approaches (like how to accurately parametrize cloud influence, accurately model ocean heat transport, figure out how to accurately simulate the temporal and spatial spectrum of variability, etc.), not simply add computational power….. which will only put a better shine on the chariot’s gold plating, but make no reduction in the amount of horse dung we deal with. They owe the public real progress on accuracy, not arm waves and political advocacy; a bit of humility wouldn’t hurt either (Ben Santer is just about insufferable).
    .
    Moving a lot of those GCM funds to weather forecasting is, IMO, a good idea if increased funding would improve the accuracy of weather forecasts…and would help focus the minds of the GCM tribe as well.

  240. Re: Carrick (Comment #126809)
    March 14th, 2014 at 9:33 pm

    Hi Carrick,
    I think you have made your point re trends from reanalysis datasets!

    The results from Cowtan and Way don’t actually change the picture in any significant way. The paper may be excellent, but modal estimates of transient sensitivity remain about the same even if the C&W temperature series is substituted for HADCRUT4, as far as I can see.

    Moreover the C&W series does very little to explain the true divergence of model results from observational data. In my opinion, discussion of that divergence is often too narrowly focused on surface temperature differences. There are two much larger problems faced by the GCMs. The first is that the observational data show an unexplained reduction in net incoming flux since the turn of the century. The second is the (abysmal) failure of the models to match the atmospheric temperature profile in the vertical.

    John Christy highlighted this latter problem dramatically during the recent APS discussions to reconsider its position on climate change, as well as making some interesting points on reconciliation of radiosonde data with satellite data. Well worth taking a few minutes to read.

    See pp. 339-355
    http://www.aps.org/policy/statements/upload/climate-seminar-transcript.pdf

  241. Robert Way:

    The AVHRR surface temperature satellite data with regard to Antarctica was discussed in detail at the Air Vent and mainly CA. It led in part to the paper by O’Donnell, Lewis, McIntyre and Condon (2010), which refuted some of the methods of Steig (2009). Those discussions strongly indicated that the AVHRR data, with cloud masking being a further complication, were suspect when used over time – as the satellite data appeared to be subject to documented satellite changes. The AVHRR data was used wholly in O’Donnell (2010) and partly in Steig (2009), much as Cowtan and Way used the UAH data in their temperature sets, i.e. the spatial relationships over the areas of interest were used with the sparsely measured station data, thus skirting around the issues of temporal problems with AVHRR.

    I’ll have to read the more recent paper you linked above.

  242. SteveF:

    Moving a lot of those GCM funds to weather forecasting is, IMO, a good idea if increased funding would improve the accuracy of weather forecasts…and would help focus the minds of the GCM tribe as well.

    Increased forecasting accuracy saves money and lives too. Lives, because improved early warnings from storms save lives. Money, because farmers (for example) rely on weather forecasting for their livelihood.

    In my opinion, there are enough uncertainties in the world we live in now that need addressing before we “overkill” on very long term projections that have unknown validity.

  243. Paul_K, thanks for the link to the transcripts. Looking like fun reading.

    It’s interesting to see some people voice so much skepticism about a physics-based product that has proven physical accuracy, but be very happy to use a globally based, azimuthally symmetric correlation (kriging) function.

    I think the inability of GCMs to reproduce observed natural variability (in a statistical sense, not a predictive one) should be added to your list of issues. People probably focus on the surface temperature, because that more directly affects us (though for changes in 30-year averages, typically much less than people assume).

  244. As promised, the transcripts proved good fun. Here’s a relevant quote:

    The IPCC said, however, we have only low confidence in the observations. And that bled into later chapters where they said well, the models and observations don’t agree, but that could largely be due to poor observations.

    I agree there are issues with observations, which you have to factor in when comparing it to models. I’ve made a living off of doing just that. But it’s an order of magnitude problem here. The issues with data quality can be solved using well established metrics for screening the good data from the bad. Christy alludes to this in his presentation.

    The GCMs, well we don’t even know just how badly they work. The uncertainties aren’t even quantifiable. It boggles the mind just how far this politicized process can jump off the rails and keep going.

  245. Probably overkill at this point, but below I linked a graph showing the CWH and GISS 1979-2012 seasonal spline smoothed series for the latitude zones 60N-90N, 60S-60N and 90S-60S, and a table with the associated trends for CWH and GISS. While there are some differences between CWH and GISS, showing mainly as a difference after 1997 in the 60N-90N region, overall there are no large differences. This paints a different story than the one from looking exclusively at 1997-2012 for this comparison.

    The global trend differences for CWH and GISS for the time period 1979-2012, 1979-1996, 2002-2012 and 1997-2012 are, respectively, with CWH denoted first: 0.17 vs 0.16, 0.11 vs 0.11, 0.02 vs -0.03 and 0.12 vs 0.08.

    I am a bit curious that the period 1997-2012 would be focused on when we have heard numerous warnings from the climate science community about how difficult it is to make statements about statistically significant differences over short periods of time, given the “weather” noise in these series. It can also be shown that observed temperature series, including CWH, produce significantly different trends than the mean of the CMIP5 climate models over the 1979-2012 time period, where degrees of freedom can be your friend (or enemy).

    I am not sure how the noise in these series should affect these comparisons of different observed temperature data sets, since regardless of that noise the realization produced should be the same for all sets. I would suppose from this we can only judge that a given data set for a given period of time produces higher or lower trends, and significant differences would depend on the method/measurement errors of each data set as opposed to the weather noise inherent in all these series. Cowtan and Way have shown that some large trend differences can occur between method modifications and data sources within the realm of their methods. The differences become larger when considering, in their update, that the SST applied to 2/3 of the global area is the same and thus all the differences derive from the land. Based on land differences, the global trend differences have to be multiplied by 3. Way has stated here that he favors a given data source by the amount of station data it provides and thus takes HadCRUT4 over GHCN in the update (GISS was not compared in the update). I have not heard his preference for hybrid versus kriging, but there are differences there also. Another thought is that if numbers count in their methods, then it might be an indication that their methods applied in the sparsely stationed polar regions are not yet optimal.

    My thoughts on independent ways to judge the Cowtan and Way data sets would have to focus on the Arctic polar region, where most of the differences with other data sets exist, and only for the most recent past time period. I was at one time thinking of using polar amplification to determine which data set could best explain that phenomenon. After looking at some background, I am much less sure that there is sufficient agreement on a mechanism to use that phenomenon as a benchmark. Maybe something along the lines of sea ice area and conditions versus temperature change, as suggested by SteveF, might be informative.

    http://imagizer.imageshack.us/v2/1600x1200q90/36/guiv.png

    http://imagizer.imageshack.us/v2/1600x1200q90/824/o3l7.png

  246. PaulK,

    The first is that the observational data shows an unexplained reduction in net incoming flux since the turn of the century. The second is the (abysmal) failure of the models to match the atmospheric temperature profile in the vertical.

    By incoming flux I assume you mean short-wave flux at the surface. It seems like an underestimate in the formation and albedo of low clouds would account for that. The failure to match the tropospheric profile seems to me more likely related to boundary layer effects…. maybe it’s not so much a problem with the expected moist adiabat as a disconnect between the 1 meter temperature and the temperature of the immediately overlying air (2 to 100 meters), which leads to an exaggerated expectation of tropospheric warming.

  247. Carrick 126835,
    Yup, the discrepancy with balloon and satellite data is large and no reasonable explanation has ever been offered. That this discrepancy can continue for more than a decade without leading to major model revisions is clear evidence the modeling community has problems differentiating model projections from measured reality. They are off in the weeds and seem oblivious to the need for models and measurements to be congruent.

  248. Steve, if the model disagrees with the data, the data must be wrong. That is the excuse I hear too often. Data is data and it needs to be explained.

  249. Carrick (Comment #126835)
    March 15th, 2014 at 9:27 am

    “As promised, the transcripts proved good fun.”

    Not finished reading, but I agree. Some good stuff in there and interesting interplay between participants.

  250. Here’s the status of my curve fitting exercise.

    link

    These are trends for a number of different periods. I’ve replaced the DMI series (which isn’t a single reconstruction method; the methods change over time, so there are issues there) with ERA-Interim. ERA-Interim is available to non-EU citizens via the ClimateExplorer website. I’m using the 2-m elevation values.

    I’ve computed GISTEMP zonal trends using the widget on their website. These trends are much larger in earlier periods than the HadCRUT or C&W values. I used the default SST data set here, ERSST.v3b, but switching to the other SST data set has only a minor effect on the trends.

    I used the versions of the files from Kevin Cowtan’s website associated with C&W’s newer publication:

    Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends, Cowtan & Way (2014).

    Here are the trends for just a few columns, given in °C/decade as usual.

    SERIES 1997-2012

    80-90 N

    HadCRUT4 1.44
    C&W-Kriging 1.63
    C&W-Hybrid 1.59
    ERA-Interim 1.93
    GISTEMP 1.91

    Global

    HadCRUT4 0.060
    C&W-Kriging 0.126
    C&W-Hybrid 0.136
    GISTEMP 0.093
    ERA-Interim 0.098
    NCDC 0.049

    and

    SERIES 1979-2012

    80-90N

    HadCRUT4 0.712
    C&W-Kriging 0.868
    C&W-Hybrid 0.839
    ERA-Interim 0.696
    GISTEMP 2.59

    Global
    HadCRUT4 0.168
    C&W-Kriging 0.181
    C&W-Hybrid 0.181
    GISTEMP 0.165
    ERA-Interim 0.121
    NCDC 0.155
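    If anyone wants to reproduce these numbers, the recipe is just an OLS fit over the chosen window; a sketch assuming a Climate Explorer-style two-column text file (time in fractional years, monthly anomaly in °C); the filename below is hypothetical:

```python
import numpy as np

def period_trend(path, t0, t1):
    """OLS trend in degC/decade over [t0, t1) from a two-column (time, anomaly) file."""
    data = np.loadtxt(path, comments="#")
    t, y = data[:, 0], data[:, 1]
    sel = (t >= t0) & (t < t1) & np.isfinite(y)
    return np.polyfit(t[sel], y[sel], 1)[0] * 10.0

# e.g. period_trend("hadcrut4_global.txt", 1997, 2013) for a 1997-2012 trend
```

    The half-open window means a “1997-2012” trend runs through December 2012.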

  251. Regarding the new Cowtan and Way publication, here is part of their abstract that I find “remarkable”:

    Temperature trends are compared for the hybrid global temperature reconstruction and the raw HadCRUT4 data. The widely quoted trend since 1997 in the hybrid global reconstruction is two and a half times greater than the corresponding trend in the coverage-biased HadCRUT4 data. Coverage bias causes a cool bias in recent temperatures relative to the late 1990s, which increases from around 1998 to the present. Trends starting in 1997 or 1998 are particularly biased with respect to the global trend. The issue is exacerbated by the strong El Niño event of 1997–1998, which also tends to suppress trends starting during those years.

    Emphasis is added of course. The second remark first: They acknowledge the reason—the big ENSO event—that using this interval is particularly non-diagnostic (outlier near one end point) in their abstract, then continue to use it. WTF?

    The first remark… the global biases aren’t simply multiplicative. As we see from Kenneth Fritsch’s analysis, the global values arise from the sum over positive and negative trend values for different latitudes. So they are taking what is largely the effect of an arithmetic cancellation and equating it to a global trend.

    If we look at the 80-90N strip, we see the differences are much smaller than a factor of two.

    In my opinion, it’s unfortunate that what is otherwise a good effort is marred by this sort of exaggerated nonsense.
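    To see why the 80-90N differences translate weakly into the global number, note that the global trend is an area-weighted sum of zonal trends (band area goes as the difference of sines of the latitude edges); a sketch with made-up zonal values:

```python
import numpy as np

# Illustrative zonal-band trends (degC/decade) on 10-degree latitude bands,
# south to north. These are invented values, not any dataset's output.
lat_edges = np.arange(-90, 91, 10)
band_trend = np.array([-0.05, 0.00, 0.02, 0.04, 0.05, 0.06, 0.07, 0.08, 0.10,
                        0.12, 0.14, 0.16, 0.18, 0.22, 0.28, 0.40, 0.80, 1.50])

# Band area weight on a sphere: sin(upper edge) - sin(lower edge), normalized.
w = np.sin(np.radians(lat_edges[1:])) - np.sin(np.radians(lat_edges[:-1]))
w = w / w.sum()

global_trend = np.sum(w * band_trend)

# Even a large change confined to the 80-90N band barely moves the global
# trend, because that band holds under 1% of the surface area.
bumped = band_trend.copy()
bumped[-1] += 0.5
print(global_trend, np.sum(w * bumped) - global_trend)
```

    With opposite-signed trends at low latitudes partially cancelling, a small absolute shift in the global sum can look like a large multiplicative factor.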

  252. This should read: “So they are taking what is largely the effect of an arithmetic cancellation and equating it to a multiplicative bias.

  253. Hi Carrick, is there any chance you can do those trends for the 1979-1997 period?

  254. Yep, here it is.

    I’m excluding GISTEMP 80-90 because it’s become apparent now there’s a software bug in their computation of the zonal trends.

    Curiously, this doesn’t affect the newer data (e.g., the 1997-2012 trend), but for the older data, the zonal trends are way too large.

    SERIES 1979-1997 (trends in °C/decade)

    80-90N

    HadCRUT4 0.429
    C&W-Kriging 0.547
    C&W-Hybrid 0.522
    ERA40-Interim -0.159

    Global
    HadCRUT4 0.108
    C&W-Kriging 0.108
    C&W-Hybrid 0.112
    GISTEMP 0.108
    ERA40-Interim 0.046
    NCDC 0.105

    There is also an issue with ERA40-Interim for 80-90N. I think it is related to missing data points between 1980 and 1990 that are being poorly handled by Climate Explorer; I probably should have left out that series too.
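    For anyone wanting to reproduce numbers like these: trends of this kind are typically ordinary least-squares slopes of the annual series, scaled to °C per decade. A minimal sketch (numpy assumed; the series here is synthetic, not one of the datasets above):

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS slope of anomalies vs. year, scaled to °C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return 10.0 * slope

# Synthetic check: a series warming at exactly 0.01 °C/yr
years = np.arange(1979, 1998)
anoms = 0.01 * (years - 1979)
print(decadal_trend(years, anoms))  # ~0.1 °C/decade
```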

  255. Ok, thanks Carrick. I thought that would make it clearer but now I’m more confused 🙁

    Did all the 1997-2012 warming happen in Africa and/or 80-90S?

  256. Tim, to me it looks like the warming from 1979-97 was pretty much global in character, with an increase/amplification/acceleration in warming as you go further North.

    Switch between 1979-97 and 1979-12. What I see is the pattern stays about the same, just shifted down. This is consistent with a reduced rate of warming, and I don’t see any evidence that improving the coverage in the Arctic changes this picture in any meaningful way.

    I think this statement of Paul_K is basically the take home:

    Moreover the C&W series does very little to explain the true divergence of model results from observational data.

    Focusing on the fact that the C&W trend is 2x the HadCRUT trend for one cherry-picked interval in no way changes that conclusion.

    One of the main real differences in warming rates seems to be an overall vertical shift in the pattern, so you get more cancellation at lower latitudes. This of course has much different implications than the “warming is hiding in the Arctic” meme that C&W seem to be trying to sell.

  257. Carrick (Comment #126860)

    Carrick, I am in essential agreement with your data analysis and comments, but I do not get the huge divergence you see in the GISS temperatures at the higher latitudes. There are two versions of GISS and one is more completely infilled, but I doubt that is the difference. I believe I used the GISS data from the Cowtan and Way web site. I have downloaded the GISS data from KNMI and could perhaps compare. I think we need to resolve that big discrepancy ASAP. My trends for 60N-90N for CWH and GISS for 1979-2012 are 0.67 and 0.59 degrees C per decade, respectively.

  258. Kenneth, the values I get for GISTEMP come from this widget.

    You can reproduce the graph I showed by setting the following:

    Map Type: Trends
    Mean Period: Annual (Jan-Dec)
    Time Interval: 1979-2012

    “Make Map”

    I am sure it is a bug in this widget, since the global trend you get is much higher than the trend using the global GISTEMP land+ocean data published by GISTEMP.

    I suspect what I would need to do, to compute the data on my own machine, is compile the latest GISTEMP, then modify the program to produce gridded data in an ascii format. (GISS’s version saves an intermediate product, but it’s a Fortran binary file, so it has undocumented fields in it.)

    I didn’t think to look at GISTEMP on Climate Explorer. I really need to write a web automation tool if I’m to do much with this, though. I generated the 5° bands for ERA40-Interim using their GUI-based tool, which is a bit of a gruesome way of doing things.
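    As a sanity check on what this kind of band averaging involves, here is a rough sketch of cos(latitude)-weighted 5°-band averaging on a synthetic gridded field. This is my own stand-in, not Climate Explorer’s actual algorithm:

```python
import numpy as np

# Synthetic gridded field (warming increasing toward the north pole) on a
# 5°x5° grid; the field values are made up for illustration.
lats = np.arange(-87.5, 90.0, 5.0)           # 36 band centers
lons = np.arange(2.5, 360.0, 5.0)            # 72 cell centers
field = np.tile((0.1 + 0.002 * lats)[:, None], (1, lons.size))

band_means = field.mean(axis=1)              # zonal mean per 5° band
weights = np.cos(np.deg2rad(lats))           # area weight per band
global_mean = np.average(band_means, weights=weights)
print(global_mean)                           # ~0.1 by symmetry
```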

  259. I used KNMI-Climate Explorer to generate the 5° averaged bands for GISTEMP, and as expected, these come out more or less in agreement with the other series.

    So here are my updated zonal trend estimates. These are new images, the old versions of the PNG files have been left in place.

    1979-1997
    1979-2012
    1997-2012
    2002-2012

    I took a bit more care to keep the scale the same for each figure (this allows you to flip between them and see absolute differences more easily.)

    Also, if somebody could do me the favor of checking whether they can replicate the problem I’m having with GISTEMP’s trend calculator, I’d appreciate it.

  260. Nic, Robert Way,

    I am a bit late to the party, but the thing that bothered me about Shindell (2014) was that it did not seem to consider the observed NH/SH warming ratio, which would likely be informative about the effects of spatial inhomogeneities. I found that for the 2 of the 6 models used by Shindell that nearly matched the observed ratio of ~1.5, the “bias” of the Otto et al., (2013) method was virtually non-existent (6% and 2% underestimates), whereas the Shindell (2014) method created huge overestimates (30%-40%). The model that the Otto et al. method performed the worst on was GFDL-CM3, but this has a NH/SH warming ratio of 0.84, which is obviously nothing like the one observed. Full write up is here: http://troyca.wordpress.com/2014/03/15/does-the-shindell-2014-apparent-tcr-estimate-bias-apply-to-the-real-world/

    Unless I am doing something wrong or badly interpreting these results (which is always possible), I don’t see how one could argue that the Shindell (2014) TCR bias is likely to apply to the real earth system.

  261. Since the thread seems to be winding down, my interest in Cowtan and Way (2013) was whether it could in practice be used to explain the difference between model-based and observational trends, as has been at least implied by some people.

    As I pointed out above, if you take the area where there is missing data, you have to assume really huge trends in the missing regions to bring the two results into agreement.

    Somehow this little factoid isn’t ever discussed.

    I realize that Robert Way has his own agenda here, but my interest in critically examining his paper was to decide whether the errors could really be inflated enough in his method to accommodate much higher trends.

    Based on what I’ve seen here, I think his results are more likely to be correct than HadCRUT4, but this can probably only explain at most about 15% of the difference in trends between models and observations, where a factor of two is needed to get agreement.

  262. Troy, thanks for the link. Did you get a chance to read PaulS’s comments on the andthentheresphysics thread?

    If you look at the 2002-2012 trend, there does seem to be a depression in the trends around 45°N in the observations.

    Isn’t that consistent with inhomogeneous NH-only forcings?

  263. I have added to my knowledge base in reading some of these blog comments from the likes of Nic Lewis and Troy and I thank them for their efforts.

    I have currently become interested (obsessed) with the polar amplification and will pursue that until I think I have found something worth reporting or the data leads me nowhere or I run out of the abilities/background to properly analyze it.

  264. Kenneth Fritsch, I can relate to you on this. I think it’s a very interesting question to explore.

    By the way, I think Christy ends up jumping the shark in his comments, starting around page 370.

    I suspect what you’ll find is tmin warms more rapidly than tmax (an actual prediction from climate modeling), that this is as true over the US as it is over Africa (it is very sloppy that he only considered Africa, e.g. page 385), and that it happens more in winter than in summer.

    So far, I’ve just considered annual averages and tavg (which may or may not be (tmin+tmax)/2, depends on the analysis). There’s a lot more that one can look at here.

    I think Mosher has a poster on this (sorry memory fails and prep’ing for a visitor atm).

    It would have been helpful to have Rohde or Muller there to counterbalance him. I don’t mind him talking even when he’s obviously wrong, but it is better when there is a bit more balance.

    I’ll see if I can find some references on tmin, perhaps somebody else remembers them off hand too. Mosher Mosher Mosher.

  265. Thanks Carrick. I did get a chance to go over some of Paul S’s comments, which were quite interesting, and it appears he had already mentioned some of my sentiments over there (namely, the discrepancy between the modeled spatial warming and that observed). He has come over and brought welcome participation on my post.

    With regards to the trends you show, I suspect it may be a bit of a short timescale to consider a signal (particularly since it seems that there has been no real increase in anthropogenic aerosol emissions over that time period), but overall I agree that there are effects from NH-only inhomogeneous aerosol forcings at different time periods. This I think was well established prior to Shindell (2014). My objection is with the implication that these effects lead to a method like Otto et al (2013) significantly underestimating TCR, when the contrary seems easily provable by running the methods on the same models that Shindell uses.

  266. Carrick, now I think I know what you are referencing in your criticism of Christy. Christy talks about the maximum day time temperature being the critical one for measuring climate changes, and further that maximum temperatures are more susceptible to non climate related effects in measurement. He starts with conjecture and anecdotal evidence, unlike some of the other contributors, who started with analysis and experimental evidence and then brought in the conjectures.

    I am on page 384 or so in my reading, but so far I like the exchanges. It appears that when the right questions are asked of the authors of papers in this type of forum you get a much better view of the uncertainty involved. I noticed that Ben Santer said more than a couple times something to the effect that we know such and such for sure or for certain, but even he admits to some basic climate uncertainties. I like that Santer is using models of observed temperatures and doing simulations. There was not a lot of detail on this matter since he requested that the graphs not be published in this review because it was pre-publication.

    Santer was very set on showing that an accumulation of aerosols from small recent volcanic events was part of the cause of the warming pause/hiatus/stasis. What was not clear or questioned was whether that background aerosol from these small eruptions is something unique to the period, or whether that same background could have been present and undetected further back in time.

  267. Kenneth, yes, he makes conjectures about how the theory behaves, which I believe are wrong, then he makes claims about the experimental observations, which I believe are also wrong. Total strike out.

    I think this is probably accurate:

    I suspect what you’ll find is tmin warms more rapidly than tmax (an actual prediction from climate modeling), that this is as true over the US as it is over Africa (it is very sloppy that he only considered Africa, e.g. page 385), and that it happens more in winter than in summer.

    I also think you will find the models predict a larger effect on tmin, as seen in the data, which Christy invokes aerosol effects to explain.

    I found the exchanges insightful (and agree with your characterizations of Ben Santer’s comments), but having Muller or Rohde there would have benefited the dialog.

  268. Troy, thanks for the comments. I see from your exchange with Dana that he’s being pretty rational here. That’s progress.

    I’ve been interested in the land versus ocean differences for a while. E.g., a long time ago, I generated this figure of land vs ocean using gridded HadCRUT3.

    I believe BEST has gridded tmax and tmin products, so it would be interesting to compute the results for these. On my list to do.

  269. Carrick,

    Interesting graphic. I am puzzled though; there is almost no land area between 45S and 70S, just mostly southern ocean. There appears to be almost zero land area between 55S and 65S, what land there is represents a few tiny islands plus the very tip of the Antarctic Peninsula…all extremely close to the ocean. So I am not sure how meaningful the land averages for that region are, especially for land at 60+/-5 degrees south; a large difference between ocean and land at those latitudes looks very suspect.

  270. “I am a bit late to the party, but the thing that bothered me about Shindell (2014) was that it did not seem to consider the observed NH/SH warming ratio, which would likely be informative about the effects of spatial inhomogeneities. I found that for the 2 of the 6 models used by Shindell that nearly matched the observed ratio of ~1.5, the “bias” of the Otto et al., (2013) method was virtually non-existent (6% and 2% underestimates), whereas the Shindell (2014) method created huge overestimates (30%-40%). The model that the Otto et al. method performed the worst on was GFDL-CM3, but this has a NH/SH warming ratio of 0.84, which is obviously nothing like the one observed. Full write up is here:”

    somewhere around here I have a complete evaluation of all the models with respect to getting these ratios and other ratios correct. Suffice to say the models suck and as I recall Shindell has selected some of the suckiest ones.. to use a technical term.

    My hope was to create a cascade of tests…

    1. get the absolute temperature correct to within 1 degree
    2. get the amplification ratio (pole to equator) right to within 50%
    3. get the land/ocean contrast right

    etc
    etc.

    The idea was that merely looking at a low-dimensional metric (global average time series) wasn’t very stressing because of potential tuning, so a higher-dimensional metric should be used, like getting the absolute temp AND the spatial pattern correct.

    by the time you add more than a couple dimensions to the metric the number of models that pass is the null set.
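    The cascade idea can be sketched in a few lines. All model names, error metrics, and bounds below are hypothetical, chosen only to illustrate how each added criterion shrinks the passing set:

```python
# Hypothetical models and screening bounds; each step filters the survivors
# of the previous step, and stacking criteria empties the set.
models = {
    "A": {"abs_temp_err": 0.8, "amp_ratio_err": 0.30, "land_ocean_err": 0.60},
    "B": {"abs_temp_err": 0.5, "amp_ratio_err": 0.80, "land_ocean_err": 0.20},
    "C": {"abs_temp_err": 2.1, "amp_ratio_err": 0.10, "land_ocean_err": 0.10},
}

cascade = [
    ("abs_temp_err", 1.00),    # absolute temperature within 1 degree
    ("amp_ratio_err", 0.50),   # pole-to-equator amplification within 50%
    ("land_ocean_err", 0.25),  # land/ocean contrast (made-up bound)
]

passing = set(models)
for metric, bound in cascade:
    passing = {m for m in passing if models[m][metric] <= bound}
    print(metric, sorted(passing))
# by the last criterion, the passing set is empty
```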

  271. Carrick,

    I can probably whip that trend-versus-latitude chart out. It’s on my to-do list.

  272. Steve Mosher,
    “by the time you add more than a couple dimensions to the metric the number of models that pass is the null set”
    .
    Which is I guess the reason some think public energy policy should be dictated by model projections of warming over the next 100 years. 😮
    .
    I think that your message (the models utterly fail to simulate reality beyond the obviously tuned metric of global average temperature) is crucially important when considering public energy policy, but somehow the only thing ever discussed is the modeled projections of average temperatures….. and how humanity is bound to be cooked in its own juices. A more sophisticated critique of the models, if accurately conveyed to the public, would be a real contribution.

  273. “I think that your message (the models utterly fail to simulate reality beyond the obviously tuned metric of global average temperature) is crucially important when considering public energy policy”

    SteveF, you evidently missed Ben Santer’s talk at the APS where he explained in detail that the dipole of stratospheric cooling and tropospheric warming can only be produced by AGW – as in attribution and detection.

    On reading the review I keep asking myself why a qualitative exposition on this is required for anyone but a total denier of any amount of AGW and of the models’ capability to at least qualitatively find it. Later, the degree (quantitative) to which the modeled tropospheric warming in the tropics misses the mark of what is observed was discussed, and one participant replied that perhaps that is simply finding a small area where the models fail and that the analysis might be missing the bigger picture.

  274. The fallacy of using ensemble models is that the promoters get to cherry-pick, and pretend that any result from any member of the ensemble that gets a point in time right somehow validates the entire ensemble. I think what is actually happening is more like a scientist running an experimental ensemble of monkeys banging away on typewriters. When a fragment of something they type out can be parsed together and read, the scientist claims that is evidence the monkeys, as an ensemble, are making excellent progress toward writing the works of Shakespeare.

  275. Kenneth,
    “SteveF, you evidently missed Ben Santer’s talk at the APA where he explained in detail that the dipole of stratospheric cooling and troposphere warming can only be produced by AGW – as in attribution and detection.”
    .
    I read Santer’s APS talk. I don’t need to be convinced about simple things like the tropospheric/stratospheric dipole. What I do need to be convinced of is that the climate models are of any utility at all for making the kinds of quantitative projections that are needed to help formulate rational public policies. I also need to be convinced that the modeling tribe, and the entire field of climate science for that matter, is capable of accepting data which is plainly correct and plainly contrary to what the models say… and not just in the divergence from recent measured surface warming. Were the general public aware of just how poorly the models do with simple things (like inability to accurately calculate the Earth’s known surface temperatures), like Steve Mosher was pointing out, I think people would quickly realize that the models are just very, very poor representations of the Earth’s real climate, and so essentially useless in making meaningful projections about the future. In short, I think Santer and many others in the field (like Isaac Held) are deluding themselves so as to avoid facing the obvious and serious problems with the models.
    .
    As someone who has worked in science and engineering for many years, I can’t even begin to imagine how modelers can begin to work on anything else when the basic stuff is all wrong. IMO, it’s crazy… and even more crazy to suggest the models can ‘inform public policy’.

  276. SteveF, I didn’t go into too much detail above, but of course the trend estimates are not very useful when there isn’t much land mass that is well away from coast lines.

    It would also be helpful to have a real theory that prescribed under what conditions (distance from nearest shore, latitude, elevation, etc) we’d expect to see certain classes of polar amplification, which would allow us to be able to predict the actual profile expected, assuming only an increase in CO2 forcing.

    Without that, there’s a limit of the utility of these sorts of graphs. For example, if we knew the relationship between the expected warming at 45°N versus 75°N, and we saw a depression in the lower latitude warming, that would be a signature for regional-scale aerosol forcings at work. Here, it’s tough to say.

    In terms of your comments about e.g. 65°S, here is a profile of percent land mass in a given latitude band. It gives some indication of how much land is in each latitude band.

    I haven’t computed, nor am I planning to, the fraction of land that is say greater than 50-km from the nearest ocean shore, in say a 10°-band. But that’s the sort of curve we really want to see.

    I should mention I computed the land-only trend using CRUTEM3 and the ocean-only trend using HadSST2. The gridding for these products is 5°x5°, and only 5°x5° cells that have at least one station have usable values.

    Possibly one way to address some of your concerns would be to throw out 5°x5° cells that have both sea and land values (e.g., cells that are nonzero for both CRUTEM3 and HadSST2). That could be reproduced without huge additional effort.
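    A sketch of that screening step, with tiny synthetic arrays standing in for the CRUTEM3 and HadSST2 grids (NaN marks cells with no data; the values are made up):

```python
import numpy as np

# Tiny synthetic stand-ins for the land and ocean 5°x5° grids.
land = np.array([[0.30, np.nan],
                 [0.20, 0.10]])
ocean = np.array([[np.nan, 0.05],
                  [0.15, np.nan]])

mixed = ~np.isnan(land) & ~np.isnan(ocean)   # cells with both land and sea data
land_only = np.where(mixed, np.nan, land)    # drop mixed cells from the land grid
ocean_only = np.where(mixed, np.nan, ocean)  # ...and from the ocean grid
print(mixed)
```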

  277. Steven Mosher, thanks for the response. Good to know the Incantation Three works for Mosher.

    I am going to work on generating these sorts of trends myself, but if you felt like an extra AGU poster was in order, something that addressed SteveF’s comments would be a good project.

    Because you guys only provide NC files, there are start-up costs for me to look at your data. The primary one being I need to write a short C program using the netcdf library to read your files.

Comments are closed.