The US Global Change Research Program (USGCRP) published a report yesterday that includes a graph comparing surface temperature observations to projections. Take a look:

I copied that from page 25 of the full report which you can download here.
Squinting at that figure, it appeared to me that, according to the USGCRP, the observed temperatures have fallen below the multi-model mean projection in every year this century. So I visited the slide show and took a close look at the years 2001, 2005, and 2008:

As some may recall, I previously criticized Stefan Rahmstorf’s characterization of the relative agreement between observed temperatures and projections. As quoted in Environmental Research Letters, he claimed:
“The global temperature is rising just as expected. If you look at the trend over the last twenty years or so, of course there is natural variability, around that trend, there are some warmer years like 2001 to 2005 were above the long term trend, and then 2008 is a little below the long term trend. But global temperature is basically rising as expected, and that’s very reassuring to me as a climate modeler, because we think global temperature is easy, we understand it well, it’s simple energy balance, so we shouldn’t be too far off.”
Well, according to the graph in yesterday’s US Climate Change Research report, Rahmstorf’s quote is not just a clever use and non-use of adjectives to create a false impression of the agreement between models and observations. Rahmstorf is simply factually inaccurate. In 2001, the observations show a cooler earth than expected based on the multi-model mean of some set of climate models. In 2005, the earth was cooler than the multi-model mean projection. In 2008, the observations were not only cooler than the multi-model mean projection but also fell outside the uncertainty intervals that Stefan himself chose when communicating the projections to the public.
One presumes that back in 2007, when the projections were published but the cold 2008 temperatures had not yet been observed, Stefan thought those uncertainty intervals communicated something about what modelers and scientists “expected” to occur. If so, then temperatures are at least somewhat lower than expected. Or, even if scientists now wish to explain that they expected the uncertainty intervals to be larger than the ones they published, the public might reasonably believe that the published uncertainty intervals communicated what scientists expected, based on what those scientists wrote and published. After all, the purpose of these reports is to communicate what scientists expect, including the uncertainties.
In any case, irrespective of how Rahmstorf or others interpret this graph, I think many can see that it’s not just bloggers who find the observations falling outside published uncertainty intervals. The reason I find the surface temperature anomalies fall outside the uncertainty intervals published in the AR4 is that they do.
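For anyone who wants to check this sort of claim themselves, the comparison amounts to asking where an observed anomaly sits relative to a projection’s stated mean and interval. Here is a minimal sketch in Python; the numbers are placeholders for illustration, not values read off the CCSP graph or out of the AR4:

```python
# Minimal sketch: classify one year's observed anomaly against a projection's
# multi-model mean and stated uncertainty interval. All values are placeholders.

def compare_to_projection(observed, mean, lower, upper):
    """Return a short verdict for one year's observed anomaly (deg C)."""
    if observed < lower:
        return "below the stated uncertainty interval"
    if observed > upper:
        return "above the stated uncertainty interval"
    if observed < mean:
        return "inside the interval, but below the multi-model mean"
    return "inside the interval, at or above the multi-model mean"

# Hypothetical example values, for illustration only.
print(compare_to_projection(observed=0.40, mean=0.55, lower=0.43, upper=0.67))
```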
Something funky happens betwixt the postjections and the prejections. There seems to be an odd jump.
AndrewFL– I read the text quickly. I don’t know why the two seem to have a jump. Also, when I compute these, the multi-model mean is very close to the observations in 2001. So, I’m not quite sure what the authors of the CCSP report did.
For some reason the failure of the sunspot cycle 24 predictions popped up in my mind when I looked at that chart.
At the risk of getting “Borised”, isn’t that blue line the drastically reduced emissions model? So the observations are outside even that condition. Remember the thread you had discussing lying by saying something that’s true? Here’s a definite example. Use misdirection: show the model curves and hope they don’t see the discrepancy by zooming in too close.
I agree with Andrew_FL. The projections do not “line up” with the 20th century “hindcast” or the observations in that graph.
This chart shows the correct alignment, I believe.

http://deepclimate.files.wordpress.com/2009/05/ar4-smooth.gif
My take on observations vs projections:
http://deepclimate.org/2009/06/03/ipcc-ar4-projections-and-observations-part-1/
Summary:
“[A] comparison of the AR4 near-term projection to smoothed observed trends … shows the recent observed trend to be somewhat below the projection, but still well within a reasonable confidence interval.”
The difference in baselines has to be taken into account (I’m going to convert all this to the Hadcrut3 baseline that we are used to seeing).
AR4 used the 1980-1999 baseline (or +0.16C under Hadcrut3) and this report is using the 1960-1979 baseline (-0.1C under Hadcrut3).
The AR4 projection from the chart starts in 2002 at about +1.0F which is equivalent to +0.55C. So the chart starts with an anomaly of +0.65C in 2002 (using the Hadcrut3 baseline). The AR4 hindcasts would be around 0.55C in 2002 so it does look like they placed the line a little higher than it should be …
… but the AR4 projection has temps rising (effectively) every year since 2002 to about 0.64C in 2009 while temps have declined since 2002 to about 0.4C in 2009.
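To make the arithmetic above explicit: an anomaly is a difference, so it converts from °F to °C with the 5/9 factor alone, and switching baselines is just a constant shift by the difference between the two reference-period means. A minimal Python sketch, using the offsets quoted in this comment as illustrative inputs (their signs depend on the convention intended, so treat them as placeholders):

```python
# Minimal sketch of the two conversions used in the comment above. The
# numeric offsets are the commenter's estimates, treated as placeholders.

def f_anomaly_to_c(delta_f):
    """An anomaly is a difference, so only the 5/9 scale factor applies."""
    return delta_f * 5.0 / 9.0

def rebaseline(anomaly_c, baseline_offset_c):
    """Shift an anomaly onto another baseline.

    baseline_offset_c is the difference (deg C) between the old and new
    reference-period means; re-baselining is just a constant shift.
    """
    return anomaly_c + baseline_offset_c

start_2002 = f_anomaly_to_c(1.0)            # +1.0 F is roughly +0.56 C
on_hadcrut3 = rebaseline(start_2002, 0.1)   # offset quoted for 1960-1979
print(round(start_2002, 2), round(on_hadcrut3, 2))
```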
They keep trying to change the baselines to confuse people who would like to audit the results, but this time they confused themselves a little.
BarryW-
Yes. Over the short term, it’s the currently existing overburden of CO2 that matters. The difference in the emissions affects things later.
Deep-
A) Do you really think your dashed line indicates those are 5%-95% confidence intervals? If yes, that means 2008 was outside the 90% confidence intervals.
B) Figure 10.4 represents the IPCC projections and it’s based on the underlying models. The trend over the period you show is higher than 2 C/century.
C) If you are going to compare smoothed data to IPCC projections, it would be best to apply the same smoothing to the actual, honest-to-goodness IPCC projections. You should avoid apples-to-coconuts comparisons even if the apples-to-coconuts comparison gives you the answer you prefer.
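To illustrate the point in (C): whatever filter gets applied to the observations should be applied to the projections too before the two curves are compared. A minimal sketch, with a plain centered moving average standing in for whichever smoother is actually used (e.g., lowess) and made-up series rather than real data:

```python
# Minimal sketch of an apples-to-apples comparison: smooth both series with
# the same filter before differencing them. The data are fabricated.
import numpy as np

def smooth(series, window=5):
    """Centered moving average; a stand-in for whichever filter is chosen."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

years = np.arange(1990, 2010)
observations = 0.2 + 0.015 * (years - 1990) + 0.1 * np.sin(years)  # fake data
projections = 0.2 + 0.020 * (years - 1990)                         # fake data

# Identical treatment of both series keeps the comparison consistent.
obs_smooth = smooth(observations)
proj_smooth = smooth(projections)
print(np.round(obs_smooth - proj_smooth, 3))
```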
Ah… but Lucia, the temperature isn’t rising, it’s “basically rising.” If we know that the trend exists as the models say it does, then any particular cooling data this year can only be balanced by increases in future years. It’s a trend, after all. That’s what trends do. As a Marxist might say, you look simply at “vulgar” temperature data. While the vulgar temperatures may go down, the true temperatures must inevitably (basically) rise, out of historical necessity. Climate scientists of the world unite!
It makes my head hurt when I try to think like that.
Could they be premature projectulations?
A climate projector von Deutschland
Pronounced ‘It’s all good’ as he yawned.
But El Sol rules,
The Pacific cools,
Prediction rejection, Rahm’s p3wned.
===============================
The computers hum
And modelers spit out song.
I hear dissonance.
========================
I just saw this post. It seems like a small thing, but it’s not if you consider that lowess filtering was probably used to make sure the recent temperatures have additional positive corrections applied.
Good stuff.
I like the haiku also Kim.
“While the vulgar temperatures may go down, the true temperatures must inevitably (basically) rise, out of historical necessity. Climate scientists of the world unite!”
I found this comment funny, because I do observe a certain Whiggish certitude as to the future of climate that does remind me of Hegelian “History as Dialectical Process” nonsense. Consider the popular analogy of “the coming summer” to Catastrophic AGW. Why is it seen as comparably inevitable? I don’t know, but it is ridiculous…
Jeff– Do you mean you think the observations were lowess filtered? Or that lowess was used to define the average during the baseline period? If CCSP did something like that, there is no discussion to indicate it.
The peaks and troughs in the black curve resemble plots of annual average data.
Lucia,
I’ve posted your comment and a response to your comment at my blog addressing each point you raised, i.e.:
A) 2008 outside confidence limits.
B) Linear trends 2000-2010
C) Smoothed projection trend
For this thread, I want to stay on topic. My point was that there is a slight mismatch between observations and projections in the graph, which now seems to be agreed on all sides. It’s clear that not all years are below the projections, once the curves are properly aligned.
Exactly how many are at or above the projections depends on the baseline used, which varies from study to study (or even within the same study – one of the graphs in AR4 Chapter 9 uses a baseline of 1900-1950, which places the projections considerably lower relative to observations than using a more recent baseline).
Another reasonable baseline approach is to use the end point of the smoothed observation trend, just prior to the projection period, as was done in the Rahmstorf et al brief on TAR projections.
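To make the baseline choices discussed above concrete, here is a minimal sketch of the two alignment approaches: (a) shift each series so its mean over a common reference period is zero, and (b) pin the projection to the endpoint of the smoothed observations just before the projection period, in the style attributed to Rahmstorf et al. The series are made up, not real observations or AR4 output:

```python
# Minimal sketch of two ways to align projections with observations.
# Both series below are fabricated for illustration only.
import numpy as np

def align_to_reference_mean(series, years, ref_start, ref_end):
    """(a) Subtract the series' own mean over a common reference period."""
    mask = (years >= ref_start) & (years <= ref_end)
    return series - series[mask].mean()

def align_to_endpoint(projection, obs_endpoint_value):
    """(b) Shift the projection so its first value matches the observed endpoint."""
    return projection - projection[0] + obs_endpoint_value

years = np.arange(1980, 2010)
obs = 0.25 + 0.017 * (years - 1980)    # fake observed anomalies
proj = 0.30 + 0.020 * (years - 1980)   # fake projection

proj_ref = align_to_reference_mean(proj, years, 1980, 1999)
proj_end = align_to_endpoint(proj[years >= 2000], obs[years == 1999][0])
print(round(proj_ref[-1], 3), round(proj_end[-1], 3))
```

As the baseline offsets quoted earlier in the thread suggest, the choice between approaches can shift the projection curve relative to the observations by a tenth of a degree or more, which is why it matters in these comparisons.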
A) Deep: 2008 is the first full year after the projections were published. So, while it is admittedly true that one expects temperatures to fall outside the 90% confidence limits roughly 10% of the time, the fact that it happens the first full year after the projections are made does not inspire much confidence. (FWIW: when I come up with temperature projections, year-end 2008 is not outside the 90% confidence limits.)
B) We all agree something is weird with the CCSP graph.
C) Of course, whether or not the temperatures are outside the projections depends on the baseline selected. By definition, the 20-year average over 1980-1999 will match the projections perfectly. This is achieved by subtraction. I have mentioned this many times at the blog.
The Rahmstorf approach was idiotic because that method of determining the baseline wasn’t specified when the projections were made. It was invented afterwards and bears no relation to what anyone was thinking when the projections were published. Projections should be tested according to the definitions applied when the projections were made. For the AR4, that’s the average relative to 1980-1999. It is entirely inappropriate for people to use a different baseline when later assessing the models. One is not allowed to switch just because a different baseline might give results one likes better.