Many of you are used to seeing comparisons of observed and projected trends here. But I also sometimes cycle through to show a comparison of observed temperature anomalies to projected temperature anomalies using the baseline the authors of the IPCC AR4 selected when illustrating projections. That is: I show how temperature anomalies would appear if superimposed on Figure 10.4 in WG1 of the AR4. In today’s post I’ll
- Show and discuss a portion of Figure 10.4.
- Show a plot of annual average temperatures from HadCRUT, GISTemp, RSS and UAH superimposed on the multi-model mean of the annual average projection based on runs from the A1B scenario.
What “the scientists” projected
It’s difficult to say something like “scientists projected X”; scientists rarely all agree on anything. However, in the case of climate change, the IPCC AR4 would appear to represent a consensus document. In principle, the contents of that document should reflect what some group thought were likely projections at the time they wrote it.
The projections by the IPCC were provided in tables, graphs and narrative in the AR4. The main figure for projections of surface temperatures is Figure 10.4. An exploded view is shown below.

The caption to this figure reads:
Figure 10.4. Multi-model means of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th-century simulation. Values beyond 2100 are for the stabilisation scenarios (see Section 10.7). Linear trends from the corresponding control
runs have been removed from these time series. Lines show the multi-model means, shading denotes the ±1 standard deviation range of individual model annual means. …
As you can see, several scenarios were used to project. Many here know I focus on A1B; this is because those runs have been made readily available at The Climate Explorer. However, during the first 30 years, the projections from the A1B, A2 and B1 scenarios don’t differ very much.
The other thing you can see is that the authors elected to
- Project surface temperatures.
- Show the multi-model mean of model runs (not a linear extrapolation from data).
- Show ±1 standard deviation of individual model annual means to express uncertainty in the temperature anomaly.
- Use a baseline of 1980–1999.
What is not entirely clear from the text is the precise method of computing the uncertainty intervals. However, it seems reasonable to suppose these are either the ±1 standard deviation of annual average temperatures, of longer temporal averages, or else that the uncertainty intervals have been smoothed. Other choices would result in rougher-looking graphs and larger uncertainty intervals.
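For concreteness, here is a minimal sketch of how such an envelope might be computed. Everything in it is a stand-in: the synthetic ensemble, the model count and the 5-year smoothing window are illustrative guesses, not values from the AR4 archive.

```python
import numpy as np

# Hypothetical ensemble: n_models runs of annual-mean anomalies (deg C),
# stored as an (n_models, n_years) array. A stand-in for the real A1B runs.
rng = np.random.default_rng(0)
years = np.arange(1980, 2031)
n_models = 20
ensemble = 0.02 * (years - 1990) + 0.15 * rng.standard_normal((n_models, years.size))

mm_mean = ensemble.mean(axis=0)        # multi-model mean
sd = ensemble.std(axis=0, ddof=1)      # +/-1 SD of individual model annual means

# One guess at why the published bands look smooth: a running mean
# applied to the envelope (the window length here is arbitrary).
kernel = np.ones(5) / 5
sd_smooth = np.convolve(sd, kernel, mode="same")

print(mm_mean[-1], sd[-1], sd_smooth[-1])
```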
AR4-like figure with observed temperature anomalies superimposed
Using the assumption that all curves in Figure 10.4 above are based on annual averages, I reproduced the temperature anomaly projection for surface temperature from the A1B scenario and superimposed the observed temperature anomalies from HadCRUT, GISS, RSS and UAH. (RSS and UAH measure TLT, not surface temperature, so this is a bit inapt. But people are always curious, so these are shown.)
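For readers who want to replicate the superposition, the key step is just re-baselining each observed series so its 1980–1999 mean is zero. Here is a minimal sketch (the series is synthetic and the function name is mine, not from any group’s code):

```python
import numpy as np

def rebaseline(values, years, base_start=1980, base_end=1999):
    """Shift an anomaly series so its mean over the baseline period is zero."""
    years = np.asarray(years)
    values = np.asarray(values, dtype=float)
    mask = (years >= base_start) & (years <= base_end)
    return values - values[mask].mean()

# Synthetic stand-in for an observed annual series on some other native baseline.
years = np.arange(1979, 2010)
obs = 0.016 * (years - 1979) + 0.10
obs_ar4_baseline = rebaseline(obs, years)
print(obs_ar4_baseline[:3])
```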
Here are some observations:
- By definition, use of the “anomaly method” forces all observations and projections to exhibit identical average temperature anomalies between Jan. 1980 and Dec. 1999. However, the instantaneous temperatures can differ due to internal climate variability, both for the earth itself and for the models.
- It happened that the earth’s temperature fell below the model projections during 2000. Both the Hadley and GISS observations are near the lower 1 standard deviation (SD) of all model projections. This could be attributed to the earth having experienced a La Nina. Even absent global warming, one generally expects the earth to warm after a La Nina, and it did slightly.
- The observed temperatures reached and just barely exceeded the multi-model mean projection during 2001. We see the observed temperatures repeat this a few more times; the temperatures dropped back down during 2008. Notice that the Hadley and GISS observations veered well outside the -1 SD of the multi-model mean and nearly penetrated the -2 SD range; that is, the actual temperatures fell below nearly 95% of the range of multi-model results. This does not necessarily mean a lot, because we are comparing weather to the range of expected values from a range of models. It merely means that if plotted using the conventions adopted in the AR4, the temperatures would look “not as bad as expected” by a large margin.
- What is interesting is that the earth’s observed surface temperatures fall mostly below the multi-model mean and barely penetrate the “best estimate” even during the 2007 El Nino. It is possible to discuss this quantitatively, but I won’t here.
- Currently, the earth’s surface temperatures fall near the -1 SD value of the multi-model mean projections.
So, for now we can see that despite the El Nino, the earth’s annual average temperatures fall below the projected trend. This El Nino may last, and we may finally bust through the multi-model mean trend in some definitive way. Or, El Nino may be weak and end in a March dive. If so, the annual average temperatures may do no more than graze the mean projected values.
The future awaits. What’s your guess?

Well as record cold is being/and about to be experienced across the world (Europe, America, Australia) – one wonders where the warmth is coming from!
twawki–
Of course there must be some record cold somewhere. But there’s an El Nino out there!
Now, I do need to go shovel. The season called ‘winter’ has arrived and it looks like we may have a white Christmas.
My guess: the temperature will change.
twawki (Comment#27234)
Record cold in Australia? November in Melbourne was 6C above average. Beat the previous record by 2C. Same story through NSW and SA.
So, did any of those scenarios predict the massive increases in CO2 emissions seen in this decade?
A2 seems to be the worst. Did the increase in Chinese emissions outstrip the A2 assumptions?
Duncan–
I don’t know how well the SRES were on track.
The divergence between satellite temps and models is actually greater than it appears on this graph. If two quantities are diverging, the best way to illustrate this is to align them at the same starting point, not to use a long-term average as a baseline.
The result of using an averaged baseline is that for the first five years of the satellite data (1979-84), the satellite anomaly is almost continuously higher than the models. As a starting point, this is where the sources should be in alignment and anomalies should be zeroed.
Look at it this way: if we used 2004-2009 as a “baseline”, we could say that temperature data and models are currently in near-perfect agreement!
Any period after significant divergence begins should not be used in the baseline.
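Braddles’ point can be demonstrated numerically. In this toy sketch (entirely synthetic trends; the function name is mine), the same pair of diverging series shows a large, modest, or near-zero final-year gap depending solely on which baseline period is chosen:

```python
import numpy as np

years = np.arange(1979, 2010)
model = 0.020 * (years - 1979)   # toy model trend, deg C
obs = 0.013 * (years - 1979)     # toy observations with a weaker trend

def offset_to_baseline(series, years, start, end):
    """Shift a series so its mean over the chosen baseline period is zero."""
    mask = (years >= start) & (years <= end)
    return series - series[mask].mean()

for start, end in [(1979, 1984), (1980, 1999), (2004, 2009)]:
    gap = (offset_to_baseline(obs, years, start, end)[-1]
           - offset_to_baseline(model, years, start, end)[-1])
    print(f"baseline {start}-{end}: final-year obs-model gap = {gap:+.2f} C")
```

The underlying divergence is identical in all three cases; only the visual offset moves.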
Messy messy messy.
I hate messy graphics. Pixels are cheap.
More charts with fewer series per chart:
1. GISS, HadCRUT, 1 sd
2. GISS, HadCRUT, 2 sd
Discussion
3. UAH, RSS, 1 sd, 2 sd
Discussion
Done right for presentation, a chart should convey one idea, captured in a foot-stomping bullet at the bottom.
Braddles–
Yes. This method minimizes the appearance of divergence relative to some other methods. Of course, it’s possible to minimize the appearance even more or show it more clearly.
This method is just showing people how the data would look superimposed on the IPCC graph. The reason this is useful is that people can say “Hmm… that’s the graph scientists thought useful before the data were observed.” It may not be the best possible way to look at the data, but it has a big advantage in the sense that it limits people from “method shopping” to find the method that is most advantageous to their “side”.
Lucia,
What would happen if you took into account the extended and low solar minimum that we have been experiencing since 2007?
I know that you took some notice of this in a previous post (I think it was from 2007), and I think the conclusion was that it could have had an impact of up to 0.1 of a degree, with solar maximum being at the beginning of the decade (it depends – there were two peaks, one in 2000 – from memory, so I could be mistaken – and one in 2002). It might be instructive to have another look at it.
Steve–
I hate it when people scatter information that could be put in one graph over many graphs. Hate. Hate. Think it’s a wretched practice. Bad, bad, bad.
There are cases where it is useful, but I don’t think this is one of them.
David
We don’t know. Not only did the modelers not anticipate this, but they didn’t even include a “best estimate” based on recent 11-year solar cycles when projecting.
To even speculate, I would have to know what they did do. I gather the modelers didn’t let the solar intensity vary, but I don’t know if they left it stuck on “high” or immediately dropped it to “average” at the start of their projection period. Which they did could affect the mean result, though it might not be very obvious in an individual realization.
To Nick Stokes – I suggest using the word retard is a bit harsh. I live in Melbourne and yes, there were days that were hot! Records… according to the observational results we had a hot day! Above average for a few (4 or 5, I think). However, if you choose to ignore the beginning of November, which was well below the average, then you yourself must be considered to be a retard. There were some cold days and some warm days… but hey, that’s called WEATHER. And it don’t mean nothing…
Nick Stokes,
twawki: Well as record cold is being/and about to be experienced
I think he was guessing Aussie is “about to be.”
Lucia,
“Now, I do need to go shovel. The season called ‘winter’ has arrived and it looks like we may have a white christmas.”
We have a nice dusting of snow in the San Gabriel Mountains. Very nice. Our 50F probably won’t decline to snow down here close to sea level though!!
Maybe you will get a snow blower for Christmas??
My guess is that we will get a pointed, circular inclined plane from the politicians. This guess has a confidence of better than 95%!! 8>)
Kuhnkat–
We aren’t going to buy a snow blower. My driveway isn’t long, and we need the garage space for the radial arm saw, air compressor and various woodworking things for my husband’s remodeling “hobby”.
Lucia,
Not that I would expect you to be heard “prima facie”, but why not forward your observation under #4 above to the various interested parties currently mud wrestling at Copenhagen? And why not ask them, variously, to comment, just for fun?
🙂
My guess is that once this El Nino ends surface temperature will drop dramatically relative to current levels. Dramatically meaning in the range of 0.2 to 0.4C. Possibly even more if a La Nina develops.
Assuming the extremely low level of solar activity does have a direct impact on climate, we should see much lower anomalies in the next 2 to 3 years.
Am I the only one that has an issue with saying “cooling”? To me it’s not cooling but rather a lack of warming. A La Nina, for example, doesn’t cool the earth IMO; it simply doesn’t provide as much heat into the climate system.
Even massive volcanic eruptions simply prevent warming by preventing solar radiation from reaching the surface.
If I’m off base please correct me!
tetris–
Who? Do you really think any one of them cares? Copenhagen is about politics.
Bill Jamison– I don’t know how far the temperature will go up during this El Nino or down in the next La Nina. I also don’t know how long they will last. That’s one of the difficult things to even guess.
Lucia, I am just used to briefing mode. One slide, one point. Frankly, in the spaghetti graph I keep having to go back and forth between the text and the graph and the legend. My practice is way better than yours. So there. You are probably hiding a decline in there somewhere.
“What’s your guess?”
Well, depends on the timescale. Anything can happen on the century scale, but I believe on a longer scale we are generally cooling and have been since 1400 BC or so. It becomes very pronounced from 0 AD until today. Each warm period is cooler than the one before. The graph in this post that shows back to about 2500 BC shows this fairly clearly. Temperatures 2000 years ago were about 2 degrees warmer than today in the Greenland ice cap, and the trend is downward since.
While we will have periods of short term warming and cooling, the next glacial period might have already started two or three thousand years ago and we could already be on the slide into “ice age” conditions.
Lucia,
As you know, I am an old hand at the politics of “business”, and my suggestion was only a joke. Hence the 🙂
That said, good post.
Steve–
In PowerPoint presentations, one slide with one point per graph can make sense. In journal articles it’s not done. I don’t really think it’s useful at blogs, so I don’t do it.
I know that no matter what I do, each person has a different preference. So, in this case, I present stuff organized the way I prefer to read it. If I were reading your blog and you did it the way you say, I’d be telling you to do it my way.
Lucia,
This looks nice.
So presumably you could do some stats to show how well the distribution of observations matched the distribution of the model projections. During 1980-1999, the means are the same. How well does the rest of the distribution match? Now, what about the period 2000-present? At first glance, it seems that the projections don’t quite work as well as the emulations…thus it seems easier to fit than to predict. But perhaps you could enlighten us further as to the match.
Thanks!
-Chip
Nick Stokes (Comment#27249)
“Weather”? Events such as the persistent blocking highs in the south Tasman Sea are well known to cause temperature anomalies in the southern Australian states.
eg http://www.bom.gov.au/announcements/media_releases/ho/20080320.shtml
This has been understood for a long time, e.g. Van Loon 1956, Trenberth 1986.
Equally understood is the inverse temperature response in NZ, such as -1.4 C for October and -0.1 C for November.
The theoretical mechanism is the poleward shift of the polar jets and storm tracks. This is caused by changes in the ozone density and geopotential height, e.g. the UNEP ozone report 2006 and the expert assessment update 2008.
A2 and A1B aren’t significantly different out to 2030 and are in reasonable agreement with current measurements (Mauna Loa) and projected BAU trend. B1 is slightly low now and looks to be way below the likely path in the next few decades.
year   A1B   A2    B1
2010   391   390   388   (ppmv)
Current MLO seasonal corrected value is 388 ppmv. That would make the expected value at the end of 2010 about 390 ppmv.
“maksimovich (Comment#27340) December 9th, 2009 at 11:21 pm
Nick Stokes (Comment#27249)
“Weather”? Events such as the persistent blocking highs in the south Tasman Sea are well known to cause temperature anomalies in the southern Australian states.
eg http://www.bom.gov.au/announce…..0320.shtml
This has been understood for a long time, e.g. Van Loon 1956, Trenberth 1986.”
But they have always been happening. They haven’t caused the recent response to date. AGW is the ‘icing on the cake’ in a way. Take an extreme, then add a bit more to it.
bugs (Comment#27347)
Your URL evidence returns the following message:
The page you requested was not found on this server.
Pretty much says it all for you.
Here is something which has always puzzled me like crazy:
Why the heck does A1B go above A2 for several decades? I thought A2 was the emissions-intensive scenario?
Hm… Could it be that not every modeler has bothered to run their models for every scenario? Are the A2 models actually less sensitive than the A1B models?
Lucia,
Some models simply use a flat solar irradiance post-20C3M, as I’ve shown here, neglecting any reasonable sense of variability.
The CO2 concentrations used in the simulations can be found here.
“Some models simply use a flat solar irradiance post-20C3M, as I’ve shown here, neglecting any reasonable sense of variability.”
The variability cannot be predicted and makes no contribution to the end result.
Please correct me if I am wrong, but I believe there is a path forward (to my knowledge) not explored.
As far as I have understood, Ax, By… are socio-economic “scenarios” / “story lines”.
If I have understood Gavin Schmidt correctly, all(?) the overall projections(?) incorporate GCMs conventionally run with an annual CO2 increase of 1% (!).
However, in later years the CO2 increase has been rather linear, approaching (very roughly) 400 ppm (388 now). The annual increase is somewhere below 2 ppm. Hence, the annual increase has been closer to 0.5% than the 1% conventionally used in the models (and will continue to decrease if the linear increase continues).
I suspect that the discrepancy between models and observations could be reduced if they tried that exercise.
Is it really true that such an exercise hasn’t been done?
Cassanders
In Cod we trust
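Cassanders’ 0.5% vs. 1% arithmetic is easy to check with back-of-envelope numbers (illustrative values only, not from any scenario file). A compounded 1%/yr increase pulls away from a linear ~2 ppm/yr rise fairly quickly:

```python
# Back-of-envelope: compounded 1%/yr CO2 growth vs. linear +2 ppm/yr
# from a ~388 ppm starting point. Values are illustrative only.
c0 = 388.0
for yrs in (10, 30, 50):
    compounded = c0 * 1.01 ** yrs
    linear = c0 + 2.0 * yrs
    print(f"{yrs:2d} yr: 1%/yr -> {compounded:.0f} ppm, +2 ppm/yr -> {linear:.0f} ppm")
```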
The CO2 numbers are just below the A1B scenario currently (by 1 ppm), N2O is just under, and methane is under by a little more. So A1B is still a pretty good projection.
The El Nino is forecast to peak in early January, and the sub-surface warm temperatures look ready to surface, which could produce a short spike in the numbers around that time. And then the indicators are lining up for a La Nina event afterward – but one never knows.
I think we are going to see rising temps now for several months and then some moderation afterward.
I am sure you are aware that Tamino has posted “Riddle Me This” where he claims that current temperatures are just where climate scientists predicted because they are close to the thirty-year historical trendlines for GISS and RSS.
I tried posting some comments that seemed reasonable to me. None of my comments made it through moderation.
My first comment was off-the-cuff, but I think it makes a valid point. I wrote:
“So you are saying that we spend millions of dollars on climate models that predict that temperatures will follow their historical linear trend?”
My second comment makes the point more explicitly:
“Your post shows that temperatures have followed their historical trend line, but does not show projections made by any of the climate models.
It would be helpful to provide the slope coefficient for each of the charts. From eyeballing, my guesstimate is that the slope for the GISS chart is about .15 per decade and that the slope for the RSS chart would be about .11 per decade.
I believe that such slopes would be considerably less than the projections from models with high climate sensitivities and would be consistent with the climate sensitivities suggested by the “lukewarmers”.”
In my last comment I just cited your post and suggested that it might contain a possible answer to his riddle.
Now when I read the comments, I do find one link to your posting. It says:
“Took her a little longer, and she was much more subtle about it than I expected:
http://rankexploits.com/musings/2009/temperature-anomalies-v-absolute-projections/”
This seems to include some type of inside joke or innuendo. If you care to answer, what is that about?
Lucia,
The Climategate email thread 1255553034.txt touches on this subject (and the RC post mentioned in it is similar to what you have presented).
Looks like those boys use the analysis to conclude that the models are working just fine.
Thus my earlier inquiry as to whether the distribution of observed anomalies is statistically the same as the A1B projected anomalies.
It doesn’t seem to me that Gavin and Mike’s observation that the observations are contained in the model projections serves as a sufficient test of the projections. As you mention, the observations all fall in the lower half of the projections, which suggests to me that something is amiss (either in the observations, the models, or the emissions scenarios… or all three).
-Chip
Chip–
I think the mean trends aren’t the same, and for the choice of start year of 2001, the means aren’t the same. I haven’t shown the latter. These are climate tests.
Gavin and Mike (and a number of other papers) are testing as *weather*, not as *climate*.
As individual weather events, the climates are not yet far enough apart to always show the climate is different when we apply *weather*-type tests. But if we apply *climate*-type tests, that’s more sensitive and we get stronger rejections and more of them.
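The weather-test vs. climate-test distinction above can be illustrated with a toy calculation. The sketch below uses entirely synthetic numbers and is not anyone’s actual method; it just shows how a trend comparison (climate) can flag a discrepancy that a does-each-point-fall-inside-the-spread check (weather) misses:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2001, 2010)
n_models = 20

# Synthetic ensemble: 0.02 C/yr trend plus annual "weather" noise.
ens = 0.02 * (years - years[0]) + 0.1 * rng.standard_normal((n_models, years.size))
# Synthetic observations with a much weaker trend.
obs = 0.005 * (years - years[0]) + 0.1 * rng.standard_normal(years.size)

# "Weather"-type test: do individual observed years fall inside the spread?
inside = (obs > ens.min(axis=0)) & (obs < ens.max(axis=0))
print("years inside ensemble spread:", int(inside.sum()), "of", years.size)

# "Climate"-type test: compare the observed trend to the model trends.
obs_trend = np.polyfit(years, obs, 1)[0]
model_trends = np.array([np.polyfit(years, run, 1)[0] for run in ens])
z = (obs_trend - model_trends.mean()) / model_trends.std(ddof=1)
print(f"observed trend {obs_trend:+.3f} C/yr, z vs model trends: {z:+.1f}")
```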
Lucia,
Interesting post. I agree that we will see what happens, and it will be fun to bet on whether the AR5 generation of models predicts a short-term warming rate above or below those of the AR4 (or TAR, for that matter), given that the AR4 models raised the proverbial bar quite a bit over the TAR models.
On the SRES question raised earlier: while emissions have been a bit above the scenarios for the last few years, this has little to no effect on the projections, given the relative insignificance of a few years’ flow on the cumulative stock (as well as thermal inertia, etc.).
Zeke Hausfather (Comment#27375) – There is a more important reason why the emissions scenarios being low, emissions-wise, doesn’t matter: the extra emissions haven’t translated into greater-than-expected concentrations of CO2!
If the concentration isn’t higher than expected, then what the emissions did in relation to the projections is irrelevant.
It isn’t the emissions that do the warming. It’s what stays in the atmosphere.
Lucy,
What does it look like when you compare A1B surface air temperature models against HadCRU and GISSTEMP Surf instead of Surf+SST? Or use the available Land/Sea masks, maybe?
Au contraire, my dear Andrew, the percent of CO2 taken up by the atmosphere in the last few years is, if anything, slightly higher than in the past.
http://i81.photobucket.com/albums/j237/hausfath/Picture302.png
Zeke Hausfather:
Really interesting chart. Thank you.
Why is that? Is there some equilibrium shift at work? Oceans maxing out?
luminous–
By “surface only” do you mean land only? Why would anyone do an apples-to-oranges comparison of that sort?
The projections are for “the whole earth’s surface” i.e. “land and sea surface”. So, that’s “surface over land” and “sea surface temperature”.
Look in the IPCC / Copenhagen reports etc. and you’ll see they consistently compare these to temperatures measured over both land and sea.
If you want to compare the observed rise over the 1/3 of the earth that is covered by land, you’ll have to find projections masked to predict temperatures over land only. I don’t know if these are available at The Climate Explorer. If they aren’t, you can download gridded data from PCMDI and compute them yourself.
“Why would anyone do an apples to orange comparison of that sort?”
The nice lady lucy asked you a question, o luminous one.
George,
The Global Carbon Project has some interesting data on both land and ocean sink efficacy, as well as overall carbon flows:
http://i81.photobucket.com/albums/j237/hausfath/Picture304.png
http://i81.photobucket.com/albums/j237/hausfath/Picture303.png
I’m not sure what the proximate cause of the lowered percent uptake by the sinks is; I imagine there are some marginal saturation effects, and ocean surface temperature, for example, influences carbon uptake rates (e.g. cold water takes up more carbon).
(Lucia: if you could image the above, I’m having trouble with it again 🙁 )
Zeke Hausfather,
Is the confidence limit on the slope corrected for serial autocorrelation? My guess is not, and that the corrected confidence limits would show less significance for the slope. Knorr (2009), for one, doesn’t seem to think it’s significant.
lucia,
I asked this question on The Air Vent and Jeff Id didn’t know, but maybe you do: If the lag-1 autocorrelation is 0.997 or greater, is the Santer correction for degrees of freedom still valid?
DeWitt,
It most likely is not significant. However, it certainly doesn’t support Andrew’s claim that recent high CO2 emissions somehow have not translated into the associated atmospheric concentrations.
I believe the tas over the oceans is a projection of air temps; sst is the surface temperature of the water. It looks like a difference of about 1 C in the models I looked at. The CE does have land-only and ocean-only buttons for some models, not for others. Only a few have sst output fields.
DeWitt Payne,
I think when the serial correlation is extremely large, the various corrections (Santer, Nychka, etc.) may give unrealistic adjusted degrees of freedom (like 0 or less than 0). I’ve run into that problem in some of my numerical experiments.
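For the curious, the lag-1 correction under discussion is easy to evaluate. Assuming the usual form from Santer et al., n_eff = n(1 - r)/(1 + r), the effective sample size collapses as r approaches 1, and the adjusted degrees of freedom for a trend test (n_eff - 2) can indeed reach zero or below:

```python
# Effective sample size under lag-1 autocorrelation r, in the form
# commonly used by Santer et al.: n_eff = n * (1 - r) / (1 + r).
def n_eff(n, r):
    return n * (1.0 - r) / (1.0 + r)

n = 360  # e.g. 30 years of monthly data
for r in (0.5, 0.9, 0.99, 0.997):
    ne = n_eff(n, r)
    print(f"r1 = {r:.3f}: n_eff = {ne:7.2f}, trend-test dof = {ne - 2:7.2f}")
```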
Luminous,
The IPCC variable “tas” is 2-m air temperature. “sst” is the skin surface temperature with a land-mask applied. From what I remember, on CE you can get the sea surface temperature by selecting “tos” or “sst”. “tos” is direct from the AR4 archive while “sst” is something Geert Jan put together using the land mask.
luminous–
Feel free to share your explanation about appropriate metrics for comparing projections of surface temperature with whoever wrote chapter 9 of the AR4.
You will notice they use HadCRUT3 to compare to surface temperature. Get back to me when you convince them to use your preferred metrics.
bugs:
Do you just make this stuff up?
This is definitely not true in nonlinear systems. GCMs are an example of that, last I looked.
But it’s not necessarily true even in a linear system where the variability is due to changes in feedback strengths (parametric variability), since even in a linear system, the response of the system to a forcing will depend in a nonlinear fashion on the parameters of the system.
Simple example: consider a climate feedback parameter f; then the gain associated with this feedback is
G = 1/(1-f)
Let ⟨f⟩ = f0, a constant. Then,
⟨G⟩ = ⟨1 + f + f^2 + …⟩ = 1 + f0 + ⟨f^2⟩ + … ≥ 1 + f0 + f0^2 + … = 1/(1 - f0)
As to not being able to “predict” the fluctuations, that’s mostly BS on your part too. Many of them are associated with changes in well-defined ocean circulation patterns, and probably, even if the fluctuations can’t be predicted, the statistical moments can be.
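The expansion above can be checked by brute force. A quick Monte Carlo sketch (the feedback distribution is an arbitrary illustration, not from any model): when f fluctuates, the mean gain ⟨1/(1-f)⟩ exceeds the gain evaluated at the mean feedback, as Jensen’s inequality requires for a convex function.

```python
import numpy as np

rng = np.random.default_rng(2)
f = 0.5 + 0.15 * rng.standard_normal(1_000_000)  # fluctuating feedback
f = f[np.abs(f) < 0.9]                           # discard near-singular samples

mean_gain = (1.0 / (1.0 - f)).mean()             # <G>
gain_at_mean = 1.0 / (1.0 - f.mean())            # G(<f>)
print(f"<G> = {mean_gain:.3f} vs G(<f>) = {gain_at_mean:.3f}")
```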
Chad,
Thanks. That was helpful.
The climate models predict a downturn or stabilization in the trend when one of the “forcings” goes negative. Yeah, sure, there is some variability in the predictions, but they only flatline for 10 years or more when one of the “forcings” goes negative, or in a model that no one believed before because it had so much random fluctuation.
So what “forcing” has gone negative?
Certainly not the volcano forcing, since the last big volcano – Pinatubo – was 18 years ago and all of its effects were over long ago (unless you subscribe to the theory that stratospheric ozone and volcanoes are major drivers of the climate).
Aerosols? Well, North American, European, and Russian aerosol emissions are down considerably. And the Asian aerosol increase appears to be having a positive impact on temperatures rather than a negative one.
So, the climate scientists are left relying on the models that produce large random fluctuations (which they dismissed before).
There is no large negative “forcing” increase to explain why the “GHG forcing” is not working now. They are relying on the random models now as the “best predictors”.
Bill Illis,
One of the forcings has gone negative – the sun. It was at maximum between 2000 and 2002. It has been at minimum since 2007.
Lucia,
“My driveway isn’t long, and we need the garage space for the radial arm saw, air compressor and various woodworking things for my husband’s remodeling “hobby”.”
Your solution is obvious. Buy the newest combination snow blower/air compressor/radial arm saw/…
Zeke Hausfather (Comment#27409)-“it certainly doesn’t support Andrew’s claim that recent high CO2 emissions somehow have not translated into the associated atmospheric concentrations”
That’s because I never made that claim! I said something along the lines of: yes, emissions are higher than expected, but CO2 concentrations aren’t.
That there is a slight change in the airborne fraction over some period is not relevant to what I said. What I said was very specific: that the concentrations of CO2 have been in line with predictions, not greater than them.
If you wanna show that’s wrong, you are gonna have to find actual projected concentrations. Good luck – I know Gavin had something like this he dragged out against Monckton, but I believe it supported my view.
Hi Lucia,
I believe that the IPCC claims not to be able to make valid predictions as to how the climate will change in specific regions. However, that’s the IPCC. Apart from that, we find every day new “scientific” articles that claim that, according to this or that climate model, a given region is going to face doom in the shape of droughts, floods, hurricanes or whatever.
Currently, models are accepted if they can, more or less, correctly replicate the evolution of the global temperature anomaly for the last 30 or 50 years. But as far as I know, that could well be a way of averaging out mistakes. How well are the current models able to replicate the regional conditions experienced by the different significant parts of our world in recent years? And how well do they agree on that?
I think an analysis like that would be very interesting, and I think that the results would probably reduce the models’ credibility a lot, not to mention the credibility of those articles that claim to predict doom in specific regions because some model says so.
David Gould (Comment#27463)
Bill Illis, One of the forcings has gone negative – the sun. It was at maximum between 2000 and 2002. It has been at minimum since 2007.
The Sun is very quiet still, but the main measure of “solar forcing” in the models is Total Solar Irradiance. And even in this minimum, it has only declined to levels similar to previous solar minimums. The models have a +/-0.05 C built in for the solar cycle, and they can only realistically be using the -0.05 C right now (although they always push the limits of the forcings to match up to the historical record, so the modelers will be pushing the solar impact down as far as possible).
Bill Illis,
However, if we are looking at predictions from the models – predictions made prior to it being known that a long and deep solar minimum was going to occur – it is not likely that the models had that negative 0.05 degrees built in for 2007, 2008 and 2009. Thus, we cannot really say with 95 per cent confidence that the models have failed in their estimation of temperature, simply because they are not designed to predict unpredictable events (obviously). However, the temperatures would still be at the lower end of the model predictions.
Andrew,
If emissions are higher than expected, and the airborne fraction is not decreasing, then concentrations are higher than expected, unless the IPCC predicted a higher airborne fraction or the uncertainties in the concentration projections are larger. I’d strongly suspect the latter, in which case there is nothing interesting to see here.
When one tests a model one has made, it is normal to plot the residuals and the sum of squares on the same axis. The Mark I eyeball is remarkably good at seeing whether there is any information in the residuals. If there is any pattern in your residuals, you have screwed up.
If there is a trend in the sum of squares, you have screwed up very badly indeed.
On first inspection, especially of 1975 to 1985, which the modelers would have used to fit and tweak the model, this is quite awful.
The way the line shape of the model changes at +/- 1 and 2 SD beggars belief; it is modeling, but not as I know it.
The model appears to have data that is self-aware, such that the +1 SD to -1 SD band would actually contain information, which is odd to say the least.
It rather looks like they have used a zero-order loading component that drives a Y component above a series of barriers, but allowing some fall-back. Essentially this (model) is snakes and ladders, but with dice of increasing size; and there is a memory component so that the model, metaphorically, remembers how hot the summer was as it sulks through its winter.
Presenting a graph with links to horrid data sets rather than to a text file is rather bad form.
Put a link to a text file, please.
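The residual check described in the comment above can be supplemented with a simple statistic. One crude option (my own choice, not anything proposed in the thread) is the lag-1 autocorrelation of the residuals, which sits near zero for patternless residuals and rises when a trend or other structure is hiding in them:

```python
import numpy as np

def lag1_autocorr(resid):
    """Lag-1 autocorrelation; near zero for white residuals."""
    r = resid - resid.mean()
    return (r[:-1] * r[1:]).sum() / (r * r).sum()

rng = np.random.default_rng(3)
good = rng.standard_normal(100)            # patternless residuals
bad = good + 0.05 * np.arange(100)         # residuals concealing a trend
print(f"white: {lag1_autocorr(good):+.2f}, trending: {lag1_autocorr(bad):+.2f}")
```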
The temperature wilt fall, but if it doesn’t fall…
It will either rise or remain the same…
For now is the winter of our discontent,
Made glorious summer by this sun of Global Warming – that is Climate Change…
The models are not claimed to be able to give good regional predictions, as the grid sizes they use are too large. New generation hardware should help with this problem. The regional issues are mostly irrelevant to the models, though.
Ok here is my theory: Global temperatures from 1998 will be the same as 1998 +/-0.3C
Compare this with the AGW theory of IPCC: Global temperatures from 1998 will increase by 0.035 C every year (+/- say 20% or 0.007C)
Admittedly the IPCC AGW theory has been worked out on supercomputers, using complex GCMs. But so far my theory is more accurate.
We can check back after 5 years, Al Gore permitting.