IPCC 2C/century: Remains in Low Confidence Region Through Sept.

This is a quick post because I plan to do more work. Still… we’re all excitable around here, so I thought many of my excitable readers would be interested in this result.

Notice the title says 2C/century is in the low confidence region? Well… sort of. When I ran the numbers for GISS and NOAA, 2C/century is in the “very low confidence region”. That is to say: if 2C/century is the “underlying trend”, and weather noise can be described using the “AR1+White Noise” model with parameters estimated from the “volcano lite” years, then the probability of trends as far off from 2C/century as the GISS Land/Ocean or NOAA/NCDC trends we’ve seen since Jan 1, 2001 is less than 10%. (The probability of trends as low as we’ve seen would be less than 5%. You can read about GISS here.)

However, to avoid picking favorites, I usually use the average of GISS/NOAA/HadCrut as my standard. When the trend of +2C/century in GMST is compared to this merge, the current trend falls below the 2.5% threshold, that is, into the “falsified” or “rejection” region at the 95% confidence level. Here’s the histogram of the distribution of trends we’d expect if the “underlying trend” were 2C/century, with weather noise as described above (and in previous posts).

The image is shown below (click for larger):

Figure 1: The Jan 2001-Sept 2008 trend based on the “merge 3” falls in the “falsified” range.
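
For the curious, here is a minimal sketch (in Python) of the mechanics behind that histogram: simulate many realizations of a 2C/century trend plus AR1+White noise, fit a least-squares trend to each, and see where the observed trend lands in the resulting distribution. The AR(1) coefficient and the noise amplitudes below are illustrative placeholders, not the fitted “volcano lite” parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MONTHS = 93            # Jan 2001 through Sept 2008
N_SIMS = 40_000          # simulated "weather" realizations
TRUE_TREND = 2.0 / 1200  # 2 C/century expressed as C/month
RHO = 0.6                # assumed AR(1) coefficient (placeholder)
SIG_AR = 0.08            # assumed AR(1) innovation std dev, C (placeholder)
SIG_WHITE = 0.05         # assumed white-noise std dev, C (placeholder)

t = np.arange(N_MONTHS)

def one_trend(rng):
    """One realization: trend + AR(1) noise + white noise; return OLS slope."""
    ar = np.zeros(N_MONTHS)
    eps = rng.normal(0.0, SIG_AR, N_MONTHS)
    for i in range(1, N_MONTHS):
        ar[i] = RHO * ar[i - 1] + eps[i]
    series = TRUE_TREND * t + ar + rng.normal(0.0, SIG_WHITE, N_MONTHS)
    return np.polyfit(t, series, 1)[0] * 1200  # slope in C/century

trends = np.array([one_trend(rng) for _ in range(N_SIMS)])

OBSERVED = -0.59  # C/century; the "merge 3" trend discussed in the comments
print(f"Fraction of simulated trends at or below {OBSERVED}: "
      f"{np.mean(trends <= OBSERVED):.3%}")
```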

Are you wondering about HadCrut? It has the lowest trend, and yes, it’s also outside the 95% confidence limits. (Only 1.8% of all possible trends would fall below the Jan 2001-Sept 2008 HadCrut trend.)

Recall that last month, only HadCrut fell inside “falsified” territory, with “merge 3” just outside the 2.5% bubble. In September the average temperature rose, but the trend barely budged. The number of months increased by one, and that reduced the uncertainty a tiny bit. (Note: I also now run 40,000 cases of simulated weather instead of 10,000. It doesn’t make a lot of difference, but since I simply count to find the 2.5% cutoff, this reduces the uncertainty in the cutoff a bit.)
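
Counting to find the cutoff amounts to sorting the simulated trends and stepping in 2.5% of the way. A sketch, assuming the `trends` array from the sketch above:

```python
import numpy as np

def empirical_cutoff(trends, alpha=0.025):
    """Lower cutoff: the value that a fraction alpha of simulated trends fall below."""
    s = np.sort(trends)
    k = int(np.floor(alpha * len(s)))  # with 40,000 sims, k = 1,000
    return s[k]

# lower = empirical_cutoff(trends)  # using `trends` from the sketch above
# print(f"'Falsified' region: observed trend below {lower:.2f} C/century")
```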

What will happen to the result of this test when another month of data arrives? Well… consult the psychic hotline. I sure don’t know! On balance, September seemed warm around here — despite the snow.

Work planned for tomorrow

Since this has been “Santer17” week, I decided I should add a new twist to the tests: I’ll throw the uncertainty in the best estimate of the trend predicted by the models into the mix, while keeping the AR1+White noise description of weather noise.

Recall that in Santer17, the authors assumed the “weather noise” was AR1, but also included the uncertainty in the best estimate of the trend based on models (“model noise”). We already know that if we apply that method to test GMST since Jan 2001, the models are found inconsistent with the data.

However, since we’ve been using AR1+White noise around here, I told you all that I don’t consider that result to necessarily falsify the models. AR1+White noise gives bigger error bars. Tomorrow (or the next day), I’ll adapt the method to use “AR1+White noise”, and see what we get. (I haven’t done it, so I don’t know. 🙂 )
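
For orientation, the Santer17-style comparison amounts to normalizing the trend difference by the combined uncertainties; the form below is a sketch of that idea, not a transcription of the paper, and the SE values are made-up placeholders. The planned exercise is to compute the observed SE from AR1+White noise rather than pure AR1:

```python
import math

def trend_difference_d(b_obs, se_obs, b_model, se_model):
    """Normalized trend difference; |d| above ~2 flags inconsistency at ~95%."""
    return (b_obs - b_model) / math.sqrt(se_obs**2 + se_model**2)

# Illustrative numbers only (C/century):
d = trend_difference_d(b_obs=-0.59, se_obs=1.2, b_model=2.0, se_model=0.5)
print(f"d = {d:.2f}")
```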

15 thoughts on “IPCC 2C/century: Remains in Low Confidence Region Through Sept.”

  1. Hi Lucia,

    I suspect you meant “since Jan 1st 2001” in your second paragraph.

    I thought I had got the SE/SD thing, but it appears I haven’t. The distribution based on the 2ºC/century trend with simulated noise makes sense to me, but it seems that you have not allowed for the possible error (SE?) in the observed merged trend, a la Santer17.

    Clearly the chance of a trend of -0.59ºC/century is given by the distribution you have calculated, but how sure are we that the -0.59 is correct? I think I am confused about what we really mean by an SE of a single time series. It is straightforward in that it is a measure of the residuals after doing a least squares linear fit, but I can’t quite imagine what population the single trend is supposed to be drawn from.

    I think I am a frequentist in that I use the word probability as a shorthand for the frequency of a particular type of event within a class and I try not to mix that up with my strength of belief or confidence in my expectations. The problem seems to be that a one off measurement of a weight or a trend does not appear to be associated with any defined class.

    The last thing I want to do is drag this topic into a philosophical debate about how we ‘know’ anything but I really am lost about the meaning of a single unrepeatable measurement.

    I think this may be related to the multiple earth idea where somehow the particular observation on our earth is to be thought of as just a random sample taken from the population of all possible earths. The variance in this population is to be applied to our confidence in a single observed trend when comparing to model trends. Sometimes it almost seems that modellers are trying to model the frequency distribution of all these imaginary earths and if our earth does not play ball with its random sampling, that is certainly not a fault in the models.

    This probably is a nonsense post, as I cannot clearly articulate something that is fuzzy in my head to start with. If you can help in any way, I would appreciate some assistance.

  2. Jorge–
    This post doesn’t “do” the Santer thing. I’ll translate this into SE/SD when I write up more. That’s why this one is brief.

    The -0.59 C/century for the actual period is pretty darn close to correct. The measurement error is fairly small. So, that observation is what happened.

    But the bell curve shows the stochastic distribution around the predicted 2C/century (that is, assuming we really knew the AR1 and white noise parameters). That matches your “frequency” idea.

    I’ll try to address your issues in the fuller post! It will include different graphs. But to understand the *idea* of Santer17, that bell curve should be placed around the -0.59 C/century to describe the possible trends consistent with the observation!
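
    A minimal sketch (in Python) of the textbook SE calculation Jorge asks about: the naive SE of an OLS trend comes from the regression residuals, and it understates the uncertainty when the residuals are autocorrelated, which is one reason for using AR1+White noise instead:

    ```python
    import numpy as np

    def trend_and_se(y):
        """OLS slope and its naive (white-noise) standard error."""
        t = np.arange(len(y), dtype=float)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        s2 = np.sum(resid**2) / (len(y) - 2)          # residual variance
        se = np.sqrt(s2 / np.sum((t - t.mean())**2))  # SE of the slope
        return slope, se
    ```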

  3. Where did you get the October GMST anomalies? I haven’t seen them released anywhere. Or should the title say through Sept?

  4. Anyone want to comment on this historical climate model I built? It covers the monthly Hadley Centre temperature anomalies from 1940 to 2008 – 725 monthly temps – no 5-year smoothing – just a straight-up, simple monthly anomaly model.

    The model is based on a global warming trend of 0.8C per century, with the “noise” in the climate based on 15% of the ENSO anomaly (with a 3-month lag). I also included a 0.3C impact for 18 months for the 3 major volcanoes which occurred over this time, but I’m not sure this is really required. (A rough sketch of this kind of model appears at the end of this comment.)

    To me, there is a shocking correlation of the global temp anomaly with the ENSO (Nino 3.4 region) anomaly of 3 months prior. The global temp anomaly is directly and continuously impacted by the ENSO anomaly – the ups, the downs and the neutrals.

    The 1997-98 El Nino, for example, peaked out at an anomaly of +2.8C in Nov 1997, and the Hadley Centre temp anomaly peaked out at +0.749C in Feb 1998 (its highest month ever). The model tracks the up and down of this El Nino very closely, as well as matching the peak at the right time.

    Now, no one can forecast the ENSO accurately (and for all we know it could be impacted by global warming as well), but the ENSO is likely to be a natural cycle resulting from ocean currents and trade wind patterns.

    I think this model also demonstrates that a large part of the increased temperature trend since the mid-1970s is really just an ENSO influenced trend with the El Ninos of 1982 and 1986 to 2006 driving the trend up (through simple math alone.)

    It is not perfect. It is not optimized. But it does demonstrate some important conclusions.

    You can download the Excel file with the data and the model charted against the Hadley dataset at this link. (go to the bottom of the page and download at “Click Here To Download”).

    http://myfreefilehosting.com/f/ee8a33f896_0.42MB
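
    For concreteness, here is a rough Python sketch of the model as described: a linear trend plus 15% of the Nino 3.4 anomaly lagged 3 months, plus a fixed -0.3C pulse for 18 months after each major eruption. The input series, eruption indices, and function name are placeholders, not a transcription of the workbook.

    ```python
    import numpy as np

    def enso_model(nino34, eruption_indices, trend_per_century=0.8,
                   enso_frac=0.15, volcano_dip=-0.3, volcano_months=18, lag=3):
        """Predicted monthly anomaly: linear trend + lagged ENSO + volcano pulses."""
        n = len(nino34)
        pred = trend_per_century * np.arange(n) / 1200.0   # C/century -> C per month step
        lagged = np.roll(np.asarray(nino34, dtype=float), lag)
        lagged[:lag] = 0.0                                 # no ENSO data before the start
        pred += enso_frac * lagged
        for i0 in eruption_indices:                        # month index of each eruption
            pred[i0:i0 + volcano_months] += volcano_dip
        return pred
    ```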

  5. John–Sorry. I meant Sept. I’m so used to writing up “last month’s” data, and I was looking at “Nov” on my calendar. Let me edit that… 🙂

  6. John–
    ENSO is a climate feature, and yes, it’s known that the MEI index helps us predict weather. If you have an idea, why don’t you write it up with words and figures? It’s difficult for people to figure out what you want to communicate by reading an .xls file. Blog posts are better for that.

  7. lucia, just check out the chart on the “Model Chart” tab which is part of the workbook xls file.

    I guess the executive summary is that global temps track the ENSO much, much closer than has previously been thought. I think that no one has charted it on a monthly basis before (they have been using annual data, 5-year moving averages, percentage changes, etc.), or no one has gone as far back as I have. I have never seen this chart on the net anywhere before.

    The other part of the executive summary is that after accounting for the ENSO-driven changes, the global warming trend is only 0.8C per century. I happen to believe (or, more accurately, accept that the data shows) that there is a GHG signal in the temp data, just that it is much less than the models predict.

    Anyway, I imagine this will come out someday. It convinced me and I’m a skeptic.

  8. John– Do you mean the trend since 1900? It wasn’t all that large anyway.

    The link between ENSO and GMST is known. But the frequency of ENSO is part of the system, so you can’t take it “out” in terms of the full trend.

  9. Other references concerning whether temperature trends can be explained by ENSO and other similar internal climate phase patterns:

    http://bobtisdale.blogspot.com/2008/04/is-there-cumulative-enso-climate.html

    Bob regards the PDO as a residual of ENSO and looks at the AMO as well; his correlations between NINO3.4 and temperature are interesting. I have a bit of a fiddle here:

    http://jennifermarohasy.com/blog/2008/10/temperature-trends-and-carbon-dioxide-a-note-from-cohenite/#comments

    I draw heavily on the good work done by our host. I note in your compilation, which you link to, that the temperature increase from 1898-2008 is 0.447C; Bob Tisdale would argue, as do I, that that is a result of there being two El Nino/+ve PDO periods and only one La Nina/-ve PDO phase during the 20th century.

  10. John Lang,

    Nice graph. Do you have any GCM outputs against the anomaly to look at for comparison? How would you test the statistical significance of your formula? I hesitate to call it a model, since it cannot function predictively without some forward-looking El Nino factors. Kerry Emanuel’s recent paper took an interesting approach to random tropical phenomena (tropical cyclogenesis) by seeding grids with model vorticities and then letting a second, coupled ocean-atmosphere model grow the seeds or kill them. The results had a realistic look to them. Perhaps something similar could be attempted to model ENSO upwelling?

  11. I have since extended the model back to 1871 and the same kind of correlation exists.

    The model is a little too high for the 1903 to 1913 period, and a little too low for the 1939 to 1945 and 1999 to 2006 periods. Looks like I will have to add the AMO index to the model. Anyone know where I can get a 2 dimensional AMO index [month, anomaly]?

  12. Okay, I’ve got the AMO index in now, and I believe all the problems are fixed.

    R2 is up to 0.753. The model is within 0.2C for almost the entire record of 1652 monthly temps (only 14 months are as much as 0.4C off the Hadley temp estimate, and there are no long periods when it runs over or under). A small sketch for checking these statistics appears at the end of this comment.

    I guess the only problem is that, using the Nino 3.4 anomaly and the AMO index anomaly, the model is already covering the temp anomaly over quite a bit of the globe.
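
    A small sketch (in Python) of the two fit statistics quoted above, assuming `obs` and `pred` are aligned monthly anomaly arrays; the function name is illustrative:

    ```python
    import numpy as np

    def fit_stats(obs, pred, tol=0.2):
        """R^2 against the observed series, plus months missed by more than tol."""
        resid = obs - pred
        r2 = 1.0 - np.sum(resid**2) / np.sum((obs - np.mean(obs))**2)
        n_outside = int(np.sum(np.abs(resid) > tol))
        return r2, n_outside
    ```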
