HadCrut & NOAA September Anomalies: One down, one .. !

HadCrut3 Data

HadCrut GMST data are in. If you view the recent HadCrut3 (NH+SH) update, you will see they report September as cooler than August: the respective anomalies are 0.385C (August) and 0.376C (September). This would lead us to conclude the HadCrut September anomaly is down relative to August.

But is it?

Recall that according to GISSTemp, August’s anomaly rose 0.11C during September (the rise was from 0.39C to 0.50C). The change was sufficient to make it difficult to say whether the GISSTemp September anomaly was hotter or cooler than August’s.

That experience taught us an important lesson: Check last month’s record before decreeing the anomaly up or down!

So, I checked the Google cache:

Figure 1: Updated & Cached HadCrut Anomalies

According to HadCrut, August also got cooler during September, but the drop is smaller. The Google cache of the HadCrut record gives an August anomaly of 0.387C; the October update reads 0.385C, a drop of 0.002C.

Note that HadCrut reports three significant figures while GISSTemp reports only two. Under the GISSTemp convention, the August temperature reported by HadCrut would not have changed.
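
To see the rounding point concretely, here is a minimal Python sketch. The round-half-up, two-decimal convention below is my shorthand for a GISSTemp-style report; it isn’t taken from either group’s documentation.

    from decimal import Decimal, ROUND_HALF_UP

    def two_decimal(anomaly):
        """Round an anomaly (deg C) to two decimals, half up -- an assumed
        stand-in for GISSTemp's two-significant-figure reporting."""
        return Decimal(anomaly).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    # Cached vs. updated HadCrut August anomalies from Figure 1.
    print(two_decimal("0.387"), two_decimal("0.385"))  # both print 0.39: the revision vanishes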

It (almost) goes without saying that one should expect small revisions in recently reported temperatures. The task of collecting and quickly reporting data means there will inevitably be a few tallying errors, formatting errors, etc. However, I think we’ve all learned that we need to check the cache if we wish to accurately report whether the anomaly rose or fell from one month to the next.

With HadCrut, September’s anomaly is down based both on the August anomaly reported in the Google cache and on that reported in the updated record at Hadley.

You can view the most recent data at HadCrut3 (NH+SH); the Google cache is here.

NOAA/NCDC Data

NOAA/NCDC reports a September GMST anomaly of 0.4421C, up from an August anomaly of 0.4313C.

I get NOAA data from an ftp site which Google does not cache. However, according to my records, in September NOAA reported an August anomaly of 0.4425C. So, according to NOAA records, August cooled 0.0112C during September. 🙂

Now for the million dollar question: Did NOAA’s September temperature anomaly rise or fall relative to August’s?

That depends: September’s anomaly of 0.4421C is lower than the August anomaly of 0.4425C reported in September, but it’s higher than the August anomaly of 0.4313C reported in October.

Not Robust finding: August got both hotter and colder during September!
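
If you want to see the sign flip in one place, here is the arithmetic as a tiny Python snippet (the variable names are mine):

    sept_2008         = 0.4421  # September anomaly from the October report (deg C)
    august_as_of_sept = 0.4425  # August anomaly as reported in September
    august_as_of_oct  = 0.4313  # August anomaly as revised in October

    print(f"vs. September's August: {sept_2008 - august_as_of_sept:+.4f} C")  # -0.0004 -> September is down
    print(f"vs. October's August:   {sept_2008 - august_as_of_oct:+.4f} C")   # +0.0108 -> September is up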

The surface-based stations disagree on whether August’s temperature anomaly rose or dropped during September! I have not yet determined the effect on the hypothesis test using the AR(1)+white noise model based on the average of all three sets. This takes a little time to run, so I’ll likely take a look at that next week. Last month’s report is here.
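
For readers curious what an AR(1)-flavored trend test looks like in outline, here is a much-simplified Python sketch. It is not the AR(1)+white noise code actually used for these posts; it just fits an OLS trend, inflates the slope’s standard error for lag-1 autocorrelation, and asks whether a projected 0.2C/decade falls inside the 95% interval. The synthetic series at the bottom is a placeholder for the observed monthly anomalies.

    import numpy as np

    def trend_test(anomalies, projected=0.2, months_per_decade=120.0):
        """Simplified sketch: OLS trend on monthly anomalies with an AR(1)-adjusted
        standard error; returns the 95% CI (deg C/decade) and whether the
        projected trend falls inside it."""
        y = np.asarray(anomalies, dtype=float)
        t = np.arange(y.size, dtype=float)              # time in months
        slope, intercept = np.polyfit(t, y, 1)          # deg C per month
        resid = y - (slope * t + intercept)
        rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation of residuals
        se = np.sqrt(np.sum(resid**2) / (y.size - 2) / np.sum((t - t.mean())**2))
        se *= np.sqrt((1 + rho) / (1 - rho))            # inflate for AR(1) persistence
        lo = (slope - 1.96 * se) * months_per_decade
        hi = (slope + 1.96 * se) * months_per_decade
        return lo, hi, lo <= projected <= hi

    # Synthetic stand-in for ~8 years of monthly anomalies (not real data).
    rng = np.random.default_rng(0)
    noise = np.zeros(96)
    for i in range(1, 96):
        noise[i] = 0.5 * noise[i - 1] + rng.normal(scale=0.1)
    print(trend_test(0.0015 * np.arange(96) + noise))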

Some will recall I said:

What will happen in September?

Beats me! But as we get more data, the bell shaped curve in figure one will get narrower and taller. That means there will be less uncertainty in the estimate of the trend– that’s the major effect of more data. So, next month, you’ll see a tiny difference in the spread. When we get the new data, we’ll see if it stays inside or falls outside the 95% confidence interval.

Or, if you know a psychic, maybe she can tell you now!

I have to admit my psychic did not predict August getting both hotter and colder during September. Did yours?

Heh!

Evidently, though I linked to my own article, I skimmed! HadCrut’s anomaly rose. Well… if they can revise, so can I! Strikeouts now apply!

13 thoughts on “HadCrut & NOAA September Anomalies: One down, one .. !”

  1. Fred– You’re right! Heh. I should wait for the coffee to hit my system before clicking “publish”. And then wait again.

  2. UC– do you have an algorithm to predict how much warmer or cooler Sept. 2008 will get during October? That would test any forecaster’s skill! 🙂

  3. Lucia,

    You’re perhaps aware of this but Tamino and his dog pack are having a real sideswipe at you in his recent post on Lomborg.

  4. Dave– I usually don’t read comments over there. It’s the weekend, my husband is here, we have lots of doors to paint because we are perpetually redecorating. Maybe I’ll go check it out on Monday.

    Still, glad to read they are having fun over there. 🙂

  5. Dave, Tamino is a cherry picker who can’t control his language or anger. I have never seen him outside his closed mind echo chamber. I think he is still in shock after getting taken to the cleaners several times by Steve at CA.

  6. Bob B is correct: Open Mind is mainly frequented by people with an anger management problem and an inability to conduct a logical argument. An honorable exception in the Lomborg posting is Lazar, who does offer a specific argument it might be worth your while to address:

    “The IPCC projected a range of trends due to forced and stochastic variation, and model bias.
    Lucia did not estimate those sources of error so she treated 0.2 deg C/decade as an exact prediction, which is wrong.
    The RealClimate projections CI contains the bias and stochastic elements but not forcing variation, why it’s a projection not prediction.
    Even HadCRUT eight-year trend is clearly consistent with the model projections.
    Instead Lucia tests whether 0.2 deg C/decade falls within the observational error, calculating confidence intervals for the observations and not the projections is precisely the wrong way round given the data. She is inferring the true i.e. long-term observational trend by estimating it from the sampling distribution, and testing whether that trend is consistent with an exact projected figure. There is no single true long-term projection figure, and over shorter periods the projection variance blows up. The RealClimate approach is much simpler and better; the models project this range of trends over an eight-year period, does the observed trend fall within that range?”

    I would advise against reading the rest of the 100+ comments, unless you are considering doing a study on the psychopathology of everyday life.

  7. michel

    “The IPCC projected a range of trends due to forced and stochastic variation, and model bias.
    Lucia did not estimate those sources of error so she treated 0.2 deg C/decade as an exact prediction, which is wrong.”

    Currently, I do treat 0.2C/decade as an exact projection. This has been discussed here. In fact, it was discussed long, long ago.

    I disagree that it is “wrong” to do this, but I concur it is better to include the uncertainty in the IPCC projection measured using the Standard Error. However, in the AR4, the IPCC failed to state that uncertainty, showing only a graphical representation of the 1σ distribution of temperatures.

    To remedy the situation requires downloading the IPCC data runs. Initially, I did not know where to obtain them. However, eventually, I learned of the existence of “The Climate Explorer”, and I downloaded them this summer. (You can see blog posts with information from those.)

    One of my intentions is to include the Standard Error of their projection in future computations, so we can see how that compares. However, I have not done this yet. When I do, I’ll show the distribution.
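
    Just to sketch what I mean by the standard error of the projection, here is a minimal Python example. The per-run trend numbers are made-up placeholders, not actual Climate Explorer output; in practice they would be trends computed from the downloaded runs.

        import numpy as np

        # Hypothetical per-run projected trends (deg C/decade); placeholders only,
        # standing in for trends computed from runs downloaded via the Climate Explorer.
        run_trends = np.array([0.12, 0.31, 0.18, 0.25, 0.09, 0.22, 0.28, 0.15])

        mean_trend = run_trends.mean()
        se_of_mean = run_trends.std(ddof=1) / np.sqrt(run_trends.size)
        print(f"multi-run mean: {mean_trend:.3f} +/- {1.96 * se_of_mean:.3f} C/decade (95%)")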

    That said, I don’t think discussing whether or not the actual central trend falls inside the uncertainty bounds is “wrong”. You will note that Santer et al., recently published in the Journal of Climatology, does just that for individual model runs! (They do more, but they do discuss whether or not a specific best estimate of a trend, treated as a “point” estimate, falls inside the range of observations.)

    “The RealClimate approach is much simpler and better; the models project this range of trends over an eight-year period, does the observed trend fall within that range?”

    This question is a simpler question. It is just a different question. Also, it’s not clear it is a better question.

    I test whether the “best estimate” falls inside the range consistent with observations. This is a perfectly valid question. It happens to be a question addressed recently by Santer et al. (2008). (Gavin is one of the 17 authors of that paper.)

    So, presumably the question I ask isn’t an invalid question. It can be difficult to answer, but the fact that it may be difficult to answer doesn’t argue against its importance.

    The Real Climate article answers a different question which is: Do the observations fall inside the range of the projections? As you can see, this is flipped around.
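
    To make the “flipped around” point concrete, here is a minimal sketch with made-up numbers (none of them are the actual observed or modeled trends):

        import numpy as np

        # Hypothetical numbers for illustration only.
        obs_trend, obs_se = -0.05, 0.12                # observed trend (deg C/decade) and its standard error
        model_trends = np.array([0.05, 0.14, 0.19, 0.23, 0.27, 0.33])  # per-run projected trends
        best_estimate = 0.2                            # central IPCC-style projection

        # My question: does the best-estimate projection fall inside the 95%
        # interval consistent with the observations?
        projection_consistent = abs(best_estimate - obs_trend) <= 1.96 * obs_se

        # The Real Climate question: does the observed trend fall inside the
        # range spanned by the individual model trends?
        observation_in_range = model_trends.min() <= obs_trend <= model_trends.max()

        print("projection inside observational CI:", projection_consistent)
        print("observation inside model range:    ", observation_in_range)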

    Of course, it is also a valid question. However, I would like to point out that though Gavin is an author of Santer et al. (2008) and clearly thinks the test he suggested at Real Climate is valid and useful, that test does not appear in the 17-author paper.

    We could speculate why the “simple” test answering the question “Do the observations fall inside the range of the models” does not appear in Santer et al. The reasons could include:
    a) Some or all of the other authors disagree with Gavin.
    b) The authors tried to put that analysis in the paper but the peer reviewers think the question it answers is uninteresting.
    c) The authors anticipated the peer reviewers would think the question was uninteresting and so didn’t bother to write it up.
    d) Someone or some group of people think the idea is peachy, but they are writing up this other idea as a separate paper (so as not to risk Santer17 not passing peer review), or
    e) Other.

    In any case, it is clear that the question answered by the test I use is considered valid by the 17 who wrote Santer et al.; after all, they wrote a paper asking it. Also, the peer reviewers think it’s a valid question.

    The issue that remains is: Is the method I use to answer it convincing?

    The fact that Gavin at RC can answer an entirely different question using an “easy” method gives us no insight into the answer to that question.

  8. If the observations fall within the confidence intervals of the “range” of the model predictions, do we then conclude the “average” of the model predictions is still within the intervals?

    No, only the models with the lowest predictions are still within the intervals, given that the observations are far below the average.
