Return of Tighter Uncertainty Intervals?

It’s not often one gets to revert to tighter uncertainty intervals due to measurement noise! But, oddly, this may be happening.

Just this morning, I posted mealy-mouthed uncertainty intervals rather than firmly decreeing the 2C/century the IPCC predicts falsified. The reason I'd gone all soft on my uncertainty intervals was that JohnV had suggested I compare to data from a volcano-free era.

I did, and got too many false falsifications: about 20%, when the definition of the statistical test says I should get roughly 5%. This was bad news for my uncertainty intervals. They weren't insane, but they needed to be a bit wider, possibly 40%-50% wider.
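
To make the 5% figure concrete, here is a minimal Monte Carlo sketch of a false-falsification check. It is purely illustrative, not my actual scripts: the 7-year window, the 0.1C noise level, and the assumption that the residual "weather noise" is white are all mine.

    import numpy as np

    # Illustrative check of a trend test's false-falsification rate.
    # Assumptions (for illustration only): monthly data, a true underlying
    # trend of 2C/century, and white residual noise with sd = 0.1C.
    rng = np.random.default_rng(0)

    n_months = 7 * 12                  # 7-year window
    t = np.arange(n_months) / 12.0     # time in years
    true_trend = 0.02                  # 2C/century = 0.02C/year
    noise_sd = 0.1                     # assumed noise level, C

    n_trials = 5000
    false_falsifications = 0
    for _ in range(n_trials):
        y = true_trend * t + rng.normal(0.0, noise_sd, n_months)
        # Ordinary least squares fit of intercept and trend
        X = np.column_stack([np.ones(n_months), t])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n_months - 2)
        var_slope = s2 * np.linalg.inv(X.T @ X)[1, 1]
        # "Falsified" if the 95% interval on the fitted trend excludes the truth
        if abs(beta[1] - true_trend) > 1.96 * np.sqrt(var_slope):
            false_falsifications += 1

    print(f"false-falsification rate: {false_falsifications / n_trials:.3f}")

With white noise the rate comes out near 5%; autocorrelated weather noise, or unaccounted-for measurement noise, pushes it up, which is exactly the symptom described above.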

But guess what?

After I agreed JohnV was reasonable to suggest I should look at the data, it appears the data describing the mean surface temperature of the earth may be about to change! Yes, the forties may be getting warmer. Maybe even a lot warmer!

You'll read about all sorts of important consequences all over. But, with respect to my morning post and my various posts over the past week, it means: I may be reverting back to the 1.1C/century uncertainty intervals after all. (Or, at least, the uncertainty bands aren't going up to 1.4C/century!)

If you want to read more about the breaking news, you can read a Nature article explaining how the Americans and British measured Sea Surface Temperatures differently before, during and after the war. This resulted in a huge amount of "instrument measurement" noise. Imagine? There is "instrument measurement" noise in the record, and it appears big enough to mask "weather noise", at least during WWII. 🙂

You may also be interested in reading Climate Audit's take on this issue. They've blogged about the likely bias in sea surface temperature measurements many, many times. Here's a chart illustrating the possible magnitude of the effect due to switches in measurement techniques:

Effect of Bucket Mistake. Original from Climate Audit.

See the spot where the black line and the red line first diverge? It appears at least some climate scientists think that sort of divergence is due to measurement errors, not “La Nina”, not “The PDO”, not any sort of pseudo-periodic oscillation. In fact, it’s likely got nothing to do with “weather noise”. It may be garden variety measurement error.

With respect to things I've been doing, the paper's implications are:

  1. If I cut that ambiguous part of the data out of the analysis JohnV suggested, my uncertainty intervals for the estimates of 30-year trends consistent with what we measure over 7 years tighten quite a bit.
  2. The possibility of instrument measurement noise of this magnitude supports my suggestion that Schwartz's analysis needed to be adjusted to account for measurement noise when estimating the time constant. (A sketch of the effect follows this list.)
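
Here is a minimal sketch of the point in item 2, with numbers I've made up purely for illustration. Schwartz-type estimates back a time constant out of the lag-1 autocorrelation of the temperature series, and adding independent measurement noise pulls that autocorrelation, and so the inferred time constant, down.

    import numpy as np

    # Illustrative only: white measurement noise biases a lag-1-autocorrelation
    # estimate of a time constant low. The time constant, noise levels, and
    # series length are assumptions, not values from any actual analysis.
    rng = np.random.default_rng(1)

    tau_true = 8.0                    # assumed time constant, years
    phi = np.exp(-1.0 / tau_true)     # AR(1) coefficient for annual steps
    n = 2000

    # AR(1) "weather" series
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, 0.1)

    def lag1_tau(series, dt=1.0):
        """Estimate a time constant from the lag-1 autocorrelation."""
        s = series - series.mean()
        r1 = (s[:-1] @ s[1:]) / (s @ s)
        return -dt / np.log(r1)

    # Same series, with independent measurement noise added on top
    noisy = x + rng.normal(0.0, 0.1, n)

    print(f"tau from clean series:      {lag1_tau(x):4.1f} yr")
    print(f"tau with measurement noise: {lag1_tau(noisy):4.1f} yr")

The noisy series gives a noticeably shorter time constant: the measurement noise dilutes the autocorrelation, so an analysis that ignores it underestimates the time constant. That is the sort of adjustment I argued the Schwartz analysis needed.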

Naturally, I'm still going to be looking at any and all factors that cast doubt on my uncertainty intervals, and on any decrees that the IPCC projections are falsified. But… well… those projections are not looking well supported!

18 thoughts on “Return of Tighter Uncertainty Intervals?”

  1. Hi Lucia – yes this is interesting; on the other hand, I don’t think we have a numerical reconstruction of temperatures corrected for this effect yet. Steve McIntyre’s adjustment doesn’t match the details of the changes claimed in the Nature paper (warm bias 1941-1945, cool bias 1945-? gradually replaced by warming bias as buckets replaced by engine room measurements, then cooling bias again from 2000? – present with buoys?)

    If this change in analysis of the measurements eliminates much of the warm peak and supposed cooling of the mid 20th century, it does bring models and observations into quite a bit closer agreement, which is probably not such a good thing for those who argue the models are flawed!

  2. Arthur Smith says:

    If this change in analysis of the measurements eliminates much of the warm peak and supposed cooling of the mid 20th century, it does bring models and observations into quite a bit closer agreement, which is probably not such a good thing for those who argue the models are flawed!

    I thought the modellers had figured out that aerosols explain cooling in the 70s, and that models which incorporate aerosols do match the record? If all those aerosols suddenly disappear because we have a correction to the SST, then we have pretty strong evidence that the model hindcasts are little better than curve-fitting exercises which tell us nothing about their predictive skill. The same issue comes up with the latest TSI estimates. Older models matched the historical record by including a TSI-related effect from 1900 to 1940, but that option disappeared as newer solar data indicated that TSI is more constant than previously thought. Newer models made up the difference by tweaking the aerosol fudge factors.

  3. Raven, do you have a reference for the models fitting 1940's data with aerosols? The Nature article seems to say otherwise:

    It is welcome news for climate modellers. The post-war temperature anomaly has been grossly outside the range of all computer-based climate reconstructions considered by the Intergovernmental Panel on Climate Change (IPCC), and it was prominently featured in the group’s 2007 summary for policy-makers.

  4. And try this one: http://www.pewclimate.org/docUploads/PewSB1-Attribution-SMALL_102606.pdf

    The best fit of the model results to the observed climate was produced when all of the forcings were included, implicating all of the forcings in producing the overall pattern of change (Fig. 4A). However, different forcings dominated at different times during the century (Takemura et al. 2006). For instance, the temperature rise in the early part of the century was dominated by natural forcings (Fig. 4B), whereas the warming after 1975 was dominated by man-made greenhouse gases (Fig. 4C). The cooling during the mid-century was consistent with a combination of natural volcanic and man-made aerosols (Nagashima et al. 2006).

    Real Climate also makes the claim: http://www.realclimate.org/index.php?p=110

    Currently, the best climate models include estimates of all these effects: anthropogenic greenhouse gas forcings, aerosols, natural solar cycles, and volcanic eruptions (see here for example). The inclusion of aerosols in climate simulations has improved the model hindcasts when tested against past climate and dimming (Wild and Liepert, 1998). The latest climate projections for the future therefore all include some estimates of aerosol changes.

  5. Raven,
    I may have slightly misinterpreted your comment, as it refers to a more general period. The Nature ref says that models were unable to match the temperature dive in the 40’s. Your plot from Meehl actually affirms that, showing a dive unmatched in the models. It’s true that without the dive, the model results will be on the low side. The later quotes are more general, saying that they got better fits overall using aerosols, without particular reference to this 40’s phenomenon. That may still be true after the correction – we shall see.

  6. Nick,

    Try http://www.climateaudit.org/?p=3114#comment-254141

    IPCC AR4 Review Comment 3-321 responding to concerns over WW2 records:
    There was a prolonged El Niño in the early 1940s and the peak in global temperature is very likely to have been real.

    RealClimate also criticised the Scafetta and West paper because their solar model did not reproduce the drop in the 1940s (i.e., they implied that their CO2 models did correctly model the drop).

    I have noticed a tendency to rewrite history when new data comes out (e.g., claims that models predicted the Antarctic cooling before it happened are not supported by any facts that I have seen). I would interpret the comment in the Nature paper as an attempt to spin the results in a positive light.

  7. But, Raven, the early 40’s peak is not in dispute. The issue is the sudden drop in temperature, which may not have been real.

  8. Mr. Stokes.

    …the early 40’s peak is not in dispute.

    It’s only a peak if there is a drop afterwards. It could instead be more like an inflection point. It’s just all speculation at this point, and possibly forever, given the state of the data.

  9. Miskolczi demonstrates the answer to the question of why the uncertainty intervals on the models are not narrowing, and that is the great uncertainty in optical path length T. Big surprise, huh?
    ======================================

  10. Arthur–
    I don’t disagree with anything you said. 🙂

    If I drop the ambiguous data from my test of uncertainty intervals, it means my first set no longer looks bad. BUT…
    1) That leaves quite a bit of warming during the 30’s.
    2) The models look better at hindcasting.

    Of course, my contention has never been that models are useless; it's simply that they appear to be overpredicting what's happening now. That could happen for lots and lots of reasons. The reason doesn't need to be lack of fidelity in the GCMs. They could be fine but driven with incorrect forcings. Those choosing the SRES may be overlooking what's happening in parts of the world, etc. (It could be problems with initial conditions, because some runs for the 21st century ignored volcanoes during the 20th!)

    Or, overall, they could just be off 30% or so. That’s still useful. But for planning purposes, it would be worth knowing or at least suspecting sooner rather than later.

  11. As is typical in this debate, certain advocates pick a metric which can be manipulated, because of a fundamental lack of precision, to say almost anything desired. This is particularly insidious in the oceanic metric, since the Argo data is so devastatingly suggestive, and the thermal contents of the oceans are so critically large. Tree rings? That was tiddlywinks; this is some serious stuff, here.
    ==============================================================

  12. Lucia – I have to take issue with one of your statements. You indicated that the divergence "may be garden variety measurement error." However, the strong dip in the HadCRUT timeline is not due to measurement error, but adjustment error. As was clearly pointed out in Steve M's article, all post-1941 temperatures were mistakenly adjusted based on the assumption that nearly all measurements were taken from ship intakes after that time. Of course, the other question that I haven't seen answered is whether anyone has actually done a side-by-side study to see what the average difference actually is between bucket and ship-intake measured temperatures, or whether the adjustment amount was just based on a statistical analysis of what they thought the data should be.

    It is increasingly clear that the biggest issue with various temperature histories is the enormous number of adjustments made to the actual measured values, based on sometimes unsupportable assumptions and on statistical analyses used to detect undocumented "changepoints". (Yes, I do believe that well-thought-out adjustments are appropriate when the metadata indicate changes in measuring location/instrumentation/recording technique.)

    Regards,
    BobN

  13. Ok—I’m seriously trying to wrap my brain around this: What’s the big, big picture? In haiku form (kidding, kidding. 😉 but seriously, I’ve read this at Climate Audit and Dr. Pielke Jr.’s site and I’m unsure as to the implications.)

  14. Terry–
    I don’t think we can be sure of the implications.

    Measurement (or adjustment) uncertainty in data causes difficulties. Recognized biases of unknown magnitude, like those indicated in this Nature paper, cause huge arguments in other fields (even absent political implications).

    If lab measurements could resolve the issue, we'd go back to the lab. But that's not possible here. So arguments over the 40s-60s adjustments are going to be fierce.

    For today and next week (while we lay the flooring….) I'm just observing that the main evidence indicating my uncertainty intervals might be too low just went out the window.

    The 40’s dip is now, officially, suggested to be ‘bad data’, not “weather noise”!

  15. There’s something I don’t understand about the changes suggested by this Nature article. As I understand it, the bucket-vs-inlet problem only affects ocean temperatures. The GISTEMP trend for *land* shows a warm peak ~1940 followed by cooling until ~1955. The *ocean* trend has a much less pronounced warm peak:

    http://data.giss.nasa.gov/gistemp/graphs/Fig.A4.lrg.gif

    Is this a difference between HadCRUT and GISTEMP?
    Can anyone point me to similar plots (land-vs-ocean) for HadCRUT?

  16. Lucia states:

    “The 40’s dip is now, officially, suggested to be ‘bad data’, not “weather noise”!”

    Actually, it is not BAD DATA!! As mentioned already, it is INCORRECT ADJUSTMENT of the data.

    Back on ClimateAudit a gentleman chimed in who claimed to have been a sailor during the later period involved. His description of the equipment and procedures would lead one to believe that the engine-intake measurement would actually be the closest to a true temp.

    http://stinet.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA035012

    The abstract of this one, from 1964, seems to support the sailor.
