For those who missed it, Roger Sr. wrote a post discussing the correlation between the temperatures of the lower troposphere vs. those of the surface. Yesterday, BarryW asked me to comment on Roger Sr.'s discussion of the relationship between the reported temperature anomalies and the honest to goodness temperatures. Specifically, Roger said:
Thus the change in an anomaly value between months must be placed within the context of its percent change over this time period relative to the average expected change. For September, for example, a change of an anomaly from August to September of 0.1C is a 17% variance from the average expected change. A positive value (such as reported for this past September) means the global cooling was less than the average cooling rate for this month.
Ok, so let’s unpack this:
- Each month the services report temperature anomalies. We all discuss these.
- The actual temperature of the earth can be expressed in honest to goodness degrees C– with 0C representing the freezing point of water.
- When monthly averages are collected, the honest to goodness global mean surface temperature varies over the course of the year. For what it's worth, the models also predict that the monthly average of the global mean surface temperature varies. Here is a plot of 30 year averages for honest to goodness temperatures from two selected models:

Figure 1: 30 year average values for monthly GMST.
As you see, the temperature drops from Sept. to October.
- The anomaly for Sept. 2008 will be the difference between the honest to goodness monthly average temperature for Sept. 2008 and some 30 year average of all September temperatures. A similar process is used for each month.
Notice that GISS EH predicts October is more than 1C cooler than September on average. So, if the anomaly for October increases 0.1C relative to September, the actual temperature of the planet still fell! It means the actual temperature of the planet fell less than it usually does.
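To make that arithmetic concrete, here is a minimal sketch with invented numbers; the baselines are loosely inspired by Figure 1, not taken from any real dataset:

```python
# All values invented for illustration; real baselines depend on the
# dataset and the base period chosen.
sept_baseline = 15.0   # 30 year average GMST for September (C)
oct_baseline = 13.9    # 30 year average GMST for October (C)

sept_anomaly = 0.30    # reported September anomaly (C)
oct_anomaly = 0.40     # reported October anomaly: up 0.1C

# Recover the honest to goodness temperatures.
sept_actual = sept_baseline + sept_anomaly   # 15.3C
oct_actual = oct_baseline + oct_anomaly      # 14.3C

# The anomaly rose 0.1C, but the planet still cooled about 1.0C;
# it simply cooled 0.1C less than the usual 1.1C drop.
print(oct_actual - sept_actual)      # ~-1.0
print(oct_baseline - sept_baseline)  # ~-1.1
```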
Needless to say, these sorts of things can complicate certain analyses.
So why use anomalies?
Anomalies are useful if we want to detect overall warming or cooling. If the planet actually warms, we still expect to see an annual cycle, but we expect the temperature for all months to shift up. Examining anomalies is supposed to help us detect that shift sooner.
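Here is a quick synthetic example of the idea: a small warming trend buried under a 2C seasonal swing stands out clearly in the anomalies. All the magnitudes below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(360)  # 30 years of synthetic monthly data

# Invented GMST: seasonal cycle + small warming trend + weather noise.
seasonal = 2.0 * np.cos(2 * np.pi * months / 12)
trend = (0.015 / 12) * months            # 0.015C per year
noise = 0.1 * rng.standard_normal(months.size)
gmst = 14.0 + seasonal + trend + noise

# Climatology: average each calendar month over all years, then
# subtract it from every month to form anomalies.
climatology = gmst.reshape(30, 12).mean(axis=0)
anomalies = gmst - np.tile(climatology, 30)

# The seasonal swing dominates the raw series but is gone from the
# anomalies, so the small trend is no longer buried in it.
print(gmst.std())        # ~1.4C, mostly the seasonal cycle
print(anomalies.std())   # ~0.2C, mostly weather noise plus the trend
```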
Closing
As always, it's worth reading Roger Sr.'s full post, and I recommend you visit it.
He doesn’t support comments anymore. But there is no reason you can’t discuss his post here. 🙂
Why do the temp vs. month curves only cover 11 months? I was scratching my head wondering why the two end values and their derivatives weren’t equal before I noticed that there is a missing month.
Shoot– I must have not grabbed one of the end points! I’ll fix it.
Erhmmm. Wait– I do have 12 months. I guess it's conventional to repeat "0" and "12", so I'll do that.
You have twelve points but the plot only covers 11 months.
I assumed because the lines are curved that you had more than 12 data points. Now I guess you are using some kind of curve fitting algorithm.
It’s not a big deal. It just confused me, which isn’t hard!
Ahh… I should probably have picked a bar chart or not chosen the “connect the dots” feature in EXCEL. The points are the monthly values.
I added a month 13. Hopefully people will realize that is January repeated.
Lucia – Thank you for the link back to my website! I will be glad to answer science questions on your website with respect to this (or any other post).
Best Regards Roger Sr.
Re anomalies:
You know the good physics that goes into climate models? Why does the output appear as anomalies? Do the models produce temperatures which relate to the real surface temps as measured by GISS, Hadley, etc., and do they then get converted to anomalies, or do they output anomalies directly?
What do the models say is the absolute temperature?
JF
From what I read on the Hadley website they don’t utilize temperatures except at the site level. They appear to keep an average for each site by month and create an anomaly for the present monthly site temp based on that. They then combine the site anomalies together to create the global anomaly. So as best I can understand it, with Hadley you couldn’t recreate an actual global temperature directly from the global anomaly.
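A toy sketch of that scheme as I understand it; the numbers are invented, and this is nothing like CRU's actual code, which also grids and area-weights the stations:

```python
# Two stations, two years of September readings, in C. Values invented.
station_sept = {
    "station_A": [12.1, 12.4],   # a cool site
    "station_B": [25.0, 25.6],   # a warm site
}

# Each station's own September baseline (here just the mean on file).
baselines = {name: sum(v) / len(v) for name, v in station_sept.items()}

# Per-station anomalies for the most recent September...
anoms = {name: v[-1] - baselines[name] for name, v in station_sept.items()}

# ...then combined into a single number.
global_anom = sum(anoms.values()) / len(anoms)
print(global_anom)   # 0.225

# The absolute site temperatures never enter the combined number,
# which is why the global absolute temperature can't be recovered
# from the global anomaly.
```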
Also, is the cyclic nature of the actual temperature mostly due to the NH-SH asymmetry in land-ocean, with a component due to orbital eccentricity?
quote So as best I can understand it, with Hadley you couldn’t recreate an actual global temperature directly from the global anomaly unquote
My point is this — if all the physics is right then even allowing for parameterisation, the models must be outputting something like 15 deg. The reference period must be something like 14.5 deg (or whatever).
What does pop out at the end of the run? Is it 15/14.5 or is it 90/89.5? Is the absolute output realistic or is it only relatively realistic?
JF
Base periods for determining anomalies are a bit of a fraud; GISS's base period covers 1951-80, smack bang in the middle of the -ve PDO from 1946-1976, whereafter the +ve PDO phase took over with a large temp step-up courtesy of the Great Pacific Climate Shift of 1976. Anomalies can't cater for that sort of bias even if, especially if, arbitrary weightings are made, as GISS and HadCrut do use. The whole fallacy of a global average temperature has been critiqued by Essex, McKitrick and Andresen, and it is why GCM predictions are not relevant at a regional level, as Koutsoyiannis found. But another reason why average temp is a non-event was looked at in a prior paper by Roger Pielke Sr;
http://climatesci.colorado.edu/publications/pdf/R-321.pdf
With great regional differences in temp-based radiative emission possible by virtue of Stefan-Boltzmann, the idea that there would be a TOA bottle-neck is in itself meaningless unless those points where there is outward radiative congestion are compared with the radiative capacity at those areas where windows exist. The consequence for temp is that there may well be regional anomalies without there being a global radiative disequilibrium.
The ability of the global climate models to skillfully produce the value of the global average surface temperature (as contrasted with the anomaly) should be a priority requirement.
The model-climatological averaged variations of this temperature during the year (as well as for the layer averaged temperatures for the different layers as diagnosed by the MSU data) should be available from each of the IPCC modeling groups. These values should be made available for their "natural" model runs and for their added CO2 model runs in order to see what changes are simulated.
Roger–
Much of this is available. However, often the data are gridded, and in real temperatures. The files are huge, which makes it difficult for “the public” to download and handle.
On the one hand, it's understandable that the GCM data are available in the "rawest" of forms. You can't predict what researchers might want to look at. But, on the other hand, this leaves out many of the quantities of interest to people outside the "climatati" who want to ask questions the climatati may find uninteresting (or inconvenient).
I have been downloading monthly average GMST data from the Climate Explorer. That site does provide post-processed data for a number of metrics. I just haven't looked at them all.
The monthly GMST data is in real honest to goodness "C"– that's how I was able to compute the average for each month. As you see, "planet cgcm3_1" world is distinctly cooler than "planet GISS EH".
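For anyone who wants to reproduce the per-month averages behind Figure 1, the computation is just a group-by-calendar-month mean. A sketch, assuming a monthly GMST series in C covering whole years, saved as a plain text column; the filename is a placeholder:

```python
import numpy as np

gmst = np.loadtxt("gmst_monthly.txt")   # hypothetical file, one value per month

# Average each calendar month across all complete years on file.
n_years = gmst.size // 12
monthly_means = gmst[: n_years * 12].reshape(n_years, 12).mean(axis=0)

for month, value in enumerate(monthly_means, start=1):
    print(month, round(value, 2))
```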
To my mind, the 2C discrepancy is rather large. After all, we can estimate the temperature of the surface to within 33C when we neglect the atmosphere entirely. We can get closer using simple energy balance models for some sort of “average” atmosphere with no circulation.
So, 2C seems like a large discrepancy compared to the temperature difference that one wants to explain using a GCM at all!
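For reference, the ~33C figure comes from the standard zero-dimensional energy balance, with textbook values for the solar constant and albedo:

```python
# Standard zero-dimensional energy balance: absorbed solar equals
# emitted thermal radiation. Textbook values, not GCM output.
sigma = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
S = 1366.0         # solar constant, W/m^2
albedo = 0.3

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(T_eff)             # ~255 K
print(288.0 - T_eff)     # ~33 K short of the observed ~288 K surface
```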
Hi Lucia – Your Figure 1 is an excellent example of the type of comparisons that are needed. The modeling centers, by not making this information routinely available and not describing how it is computed, such as the level the temperatures correspond to within their model (e.g. surface, 2m, etc), are obscuring an important issue.
For example, since the outgoing radiative fluxes depend on T**4, the absolute value does matter. Also, how well the models track with respect to the observed annual cycle (such as for the lower troposphere displayed at http://discover.itsc.uah.edu/amsutemps/ ; select a level and look at the long term average) is a valuable model comparison metric.
Thank you for looking into this very important issue!
Let’s see if I got this right:
The difference between the models appears to result in a 1.5-3.0% difference in outgoing radiation. The total outgoing radiation should be roughly balanced with the incoming radiation of 1366 w/m2 which means the difference between models amounts to 20-40 w/m2.
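(As a rough check on where a percentage in that range could come from, assuming surface emission scales as T**4: a small temperature difference dT shifts the flux by about 4*dT/T. Whether surface temperature maps onto outgoing flux this directly is questioned further down the thread.)

```python
# Rough check: for E = sigma*T**4, dE/E is about 4*dT/T. 288 K is a
# nominal GMST; 1-2 K stands in for the model spread in Figure 1.
T = 288.0
for dT in (1.0, 2.0):
    print(dT, 100 * 4 * dT / T)   # ~1.4% and ~2.8%
```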
The purported CO2 effect is in the 4 w/m2 range.
It is not clear to me why models should be able to accurately model the effect of a 4 w/m2 phenomenon when they can't even agree on the magnitude of effects which are 10x larger.
Raven–
The counter argument appears to be this: “Well, they can.”
Somehow, the difference drops out due to the anomaly methods.
Mind you, it's not clear this makes sense. The fact that the models can't predict the real GMST of the earth suggests the physical processes are not fully understood. The magnitude of the deviation between models may be a sizable fraction of the contribution of global circulation to the basic greenhouse effect. So, it's not at all clear how much modeling the global circulation (the "GC" in GCM) in detail reduces the uncertainty in the prediction of GMST!
My position is– if GCMs can project, then they will prove themselves right by forecasting future temperatures measured with modern day thermometers correctly. That is: barring identifiable deviations between the applied forcings and the projections (caused by volcanic eruptions or attacks by Martians), the projections should stay within ±(earth weather noise) of the observations.
In the AR4, the IPCC method is to explain that the average of all models is a more reliable prediction than individual models, so I test the average. If we estimate "weather noise" based on a period with minimal volcanic eruptions, the models aren't currently holding up.
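In code, that test amounts to something like the sketch below; every number is a placeholder (the real comparison uses observed anomaly series, the multi-model average trend, and an empirical weather-noise estimate):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "observations": 7 years of monthly anomalies with a
# 0.01C/yr trend plus noise. Real data would be read in instead.
years = np.arange(84) / 12.0
obs = 0.01 * years + 0.1 * rng.standard_normal(84)

obs_trend = np.polyfit(years, obs, 1)[0]   # OLS trend, C per year

projected = 0.02   # hypothetical multi-model average trend, C/yr
noise_sd = 0.012   # hypothetical SD of same-length trends from weather noise

# Consistent if the observed trend sits within ~2 SD of the projection.
print(obs_trend, abs(obs_trend - projected) < 2 * noise_sd)
```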
Lucia,
The anomaly method can compensate for seasonal variations but I don't see how it can compensate for a different radiative energy budget.
The large difference implies that we really don't have any quantitative answers for many of the basic questions about climate and that modellers are grossly underestimating the uncertainties in the model outputs.
I have been of the opinion that model averages are meaningless and what we really need are many more runs of the best models, which would give us a true picture of the probability distribution produced by the model.
Raven–
I share your reservations. The anomaly method should be useful for stripping out seasonal variations. But I tend to think the claim that projections of trends can be accurate when the baseline is wrong is just that: a claim.
Ideally, we should have a model that gets all the physics right and run it over and over and over to average over all weather. The method the IPCC is using is an approximate one — and that's even accounting for all the "dice rolling" analogies we read. The IPCC appears to have a batch of dice, each of which is biased. They may all be biased high, they may all be biased low. Some may be biased high and some low. We don't know.
I do believe GHG’s cause warming. I’m much less certain the AOGCMs can predict the magnitude particularly well.
BTW: I'm also of the opinion that off just means off. Oddly enough, due to issues associated with time constants, being too low now could mean the ultimate temperatures are higher. (Alternatively, it could mean sensitivity is off, in which case predicting too high now means models will continue to predict too high. Or, the models could just be "off" because they miss something that happens to reduce the trend now, but which doesn't necessarily always do so.)
“The difference between the models appears to result in a 1.5-3.0% difference in outgoing radiation.”
I don't think it does. Surface temperature does not necessarily correlate directly with OLR. I would be very surprised if all the models don't agree to within better than 0.5% on the OLR annual budget. Differences in lapse rate and humidity profiles can change the relationship of OLR to surface temperature.
DeWitt,
Models are obviously going to compensate for the different temperature baselines by having different parameterizations for other phenomena, but the difference is still striking. In fact, adjusting the absolute temperature is likely one of the knobs that the modellers twist to get their models to produce the correct trend when measured in anomalies.
I find this surprising because the absolute GMST is something that is claimed to be known within a few fractions of a degree, so it makes no sense to accept models that are off on this parameter by 1+ deg.
Raven–
I don’t think the absolute temperature is twisted like a knob. It’s off because the models incompletely and/or inaccurately describe some physical processes of the earth’s climate system.
Lucia,
I don’t think it is adjusted directly, however, if the physics of the model is wrong but it still gets the GMST trend right then something must be adjusted to compensate for the error. Adjusting parameters that cause the GMST to decrease could be one of those compensating factors.
That said, my main issue is the fact that the model makers don't even attempt to match what is supposed to be a reliable historical record. A model that was 2 degC off on the anomalies would not be considered a useful model.