Hadley posted their September 2010 temperature anomalies. Their NH&SH mean is reported as 0.391C, down from 0.475C in August 2010. In comments on my post discussing GISTemp’s September anomaly, Ron Broberg asked a few questions about why I select one set of graphs over another to include in a post. I gave an answer and also volunteered that I’d be happy to show the graphs beginning in 2001 when Hadley came out.
So, with no further ado, below is a graph showing the temperature anomalies since 2001 (in red), the least squares trend (black dashed), and an MEI corrected trend (yellow) with some uncertainty intervals (neither of which I’m going to explain today), along with the 0.2C/decade trend shown in orange for reference:
As you can see:
- This is not the warmest September in the record; it is not even the warmest September since 2001. (Septembers are circled in blue.)
- Temperatures have recently dropped; this is likely due to the transition from El Nino to La Nina.
- The least squares trend computed since Jan 2001 is negative.
- Despite all claims by people who think otherwise, Jan 2001 was not the top of an El Nino.
- September’s temperature falls below the ordinary least squares trend line. (This means if temperatures remain low next month, the computed least squares trend since 2001 will decline.)
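For anyone who wants to reproduce the trend number, the least squares trend is just an ordinary regression of the monthly anomalies against time, scaled to C per decade. A minimal sketch with invented data (not the actual Hadley series):

```python
import numpy as np

def ols_trend_per_decade(anomalies):
    """Least-squares slope of a monthly anomaly series, in C per decade."""
    t = np.arange(len(anomalies)) / 12.0  # time in years
    slope = np.polyfit(t, anomalies, 1)[0]
    return slope * 10.0  # C/year -> C/decade

# Invented data: a gentle cooling of 0.05 C/decade starting from 0.45 C
months = np.arange(117)  # Jan 2001 through Sep 2010
anoms = 0.45 - 0.005 * (months / 12.0)
print(round(ols_trend_per_decade(anoms), 4))  # -0.05
```

The same function applied to the real monthly series gives the slope plotted as the black dashed line.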
In answer to Ron’s question: the reason I have been showing longer term trends is that many of us have been wondering which records will be broken this year. For that purpose, it is more useful to display a graph that includes the maximum, and that means displaying a longer time period:
In this graph you can see the following additional things:
- The trend since 1980 is distinctly positive, but also falls below the reference value of 0.2C/decade.
- The Hadcrut temperature anomalies reported for 2010 never approached the peaks seen in 1998 or even a few later years. (Illustrating this has been my main reason for preferring this graph over the shorter term one when writing more recent posts.)
I’ll now turn to the 12-month lagging average graphs illustrated below; those corresponding to calendar years are circled in orange. (You may need to click to embiggen.) The multi-model mean based on A1B is shown for reference:

Examining this graph we can see that the record 12-month lagging average was set in 1998; the record calendar year average (circled) was set the same year. Neither record has been broken during the first 9 months of 2010, and it seems unlikely either will be broken this year.
You can also see the 12-month lagging mean based on Hadley remains below the 12-month lagging average of the multi-model mean. Because real earth temperatures will tend to drop with the transition to La Nina, while the Multi-Model mean largely averages out the effect of any ENSO in model runs, I think the observations are unlikely to approach the Multi-Model mean until we hit the next El Nino. (Heck, I don’t think they will even during the next El Nino, but I have less confidence in my ability to foresee that.)
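For the curious, the 12-month lagging average is nothing exotic: it is a trailing mean over the most recent 12 monthly values. A sketch with invented numbers:

```python
import numpy as np

def lagging_12mo(monthly):
    """Trailing 12-month means: entry i averages months i-11 through i."""
    a = np.asarray(monthly, dtype=float)
    # A flat length-12 kernel gives the running mean; "valid" drops the
    # first 11 entries, which have fewer than 12 months behind them.
    return np.convolve(a, np.ones(12) / 12.0, mode="valid")

vals = np.linspace(0.2, 0.5, 36)        # three years of invented anomalies
m = lagging_12mo(vals)
print(len(m))                           # 25: the first average needs 12 months
print(int(np.argmax(m)) == len(m) - 1)  # True: rising data peaks at the end
```

Finding the record 12-month average is then just `m.max()`, and the calendar-year averages are the subset of entries ending in December.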
Now, for the sake of fairness, I told Ron that if I showed the short term graph of Hadley, I’d show the short term graph for GISTemp:
What we see is that, computed since Jan 2001 near the top of an El Nino, the ordinary least squares trend based on GISTemp manages to show a slight positive trend. I’ll be discussing confidence intervals in a later post, probably in two or three months. Those are illustrated because I think they are “about right”, but I would like to use a more formal method, and that’s why I am deferring discussion.
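I said I’m not explaining the MEI correction today, but a common approach (and this sketch is only an assumption about the method, with invented data) is to regress the anomalies on both time and a lagged ENSO index, then read the corrected trend off the time coefficient:

```python
import numpy as np

def enso_corrected_trend(anoms, mei, lag=3):
    """Trend in C/decade after regressing out a lagged ENSO index.

    The 3-month lag is only a guess at the ENSO-to-temperature delay.
    """
    a = np.asarray(anoms, dtype=float)
    m = np.roll(np.asarray(mei, dtype=float), lag)  # crude lag (wraps at start)
    t = np.arange(len(a)) / 12.0
    X = np.column_stack([t, m, np.ones(len(a))])
    coef = np.linalg.lstsq(X, a, rcond=None)[0]
    return coef[0] * 10.0  # time coefficient, C/year -> C/decade

# Invented data: no underlying trend, just an ENSO wiggle plus noise
rng = np.random.default_rng(0)
mei = np.sin(np.arange(120) / 8.0)
anoms = 0.4 + 0.1 * np.roll(mei, 3) + 0.01 * rng.standard_normal(120)
trend = enso_corrected_trend(anoms, mei)
# trend comes out near zero because the ENSO wiggle has been absorbed
```

With real data the interesting question is how much the corrected trend differs from the plain OLS trend over the same window.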



The “warmest” line in your 2nd figure is misplaced (on top of the “current”). /nitpick
The short term trends for GISS vs. Hadley differ by .093 K/decade. (MEI or OLS) That’s an astounding difference, when placed in context of the longer-term trend.
Enough of this nonsense… when is the world gonna start boiling… they’re having a sale on brauts at the local market
The “record” in orange appears to go through the all-time maximum HadCrut temperature in 1998 to me. The current (green) appears to be going through the current HadCrut to me. What am I missing?
Of course, I did not ask for you to show short term trends. I said that I thought the practice was questionable. Not sure if you are pulling my leg or if you have misconstrued me, but I sense you are in a bit of a silly mood this week.
Ron–
Yep. 🙂
Actually, I’m often in a silly mood.
Reverting to a less silly response: I know you didn’t ask me to show them. But you asked why I didn’t show them, and it occurred to me some people might be wondering why and thinking not showing them “meant something.” It also occurred to me that some– not necessarily you– might think some were not being shown because something monumental had changed in the short term graphs or that I had come to think there was something “wrong” with showing them. Nothing really has changed; the trends since 2001 are still very low, etc. I don’t think there is anything wrong with showing shorter term graphs.
So, my saying I would be willing to show them actually made me think I’d like to show them. So, here they are.
I do think it’s not quite fair to only show short term for Hadley and not show it for GISS (or vice versa.)
It’s 10/16. Sea ice minimum bet results, please.
Yeah Ron. I have been asking for Lucia to publish a graph starting when there was no ice in the arctic up till present day, nothing. I want to see this graph so then I can make the claim that ice is growing at an alarming rate and we need to stop it right away.
This is really interesting Lucia. I have a least squares trend for HadCrut that runs from 98 through 2009. It is almost flat, and its value is about 4.1C. So September is now below that trend line. I also have one for UAH for the same period. It starts at about 2.4C and ends at about 2.1C. So the UAH September is currently almost .4C above the trend for that time period. This puts UAH and HadCrut about .4C apart when compared to their own longer term trends. It strikes me that something is wrong here. I’m thinking that either HadCRU or UAH or both have something going wrong. Comparing temperature trends to ENSO trends in the past shows that there is about a 4 month lag to the ENSO effect. Because of this I had expected September to either hold when compared to August or else show the start of an anomaly decline. The number that came out of UAH was a bit of a surprise. Maybe the different methods also lead to different lags. In any case, that large .4C discrepancy needs to be explained, and it should be food for those inclined to track these kinds of things down.
“Yeah Ron. I have been asking for Lucia to publish a graph starting when there was no ice in the arctic up till present day,”
Well, there is an article suggesting that the Arctic was ice free in summers about 6000 – 7000 years ago.
http://www.ngu.no/en-gb/Aktuelt/2008/Less-ice-in-the-Arctic-Ocean-6000-7000-years-ago/
So run a line from zero, 6000 years ago, to the summer low in ice today and tell the government that you need a grant for a couple of million to study the potential of snowball earth. If you can figure a way to make mankind responsible and justify more taxation, you’ll be certain to get your grant.
Lucia (#54542) –
I meant the graph of temperature anomalies since 1980 with the orange dashed line for warmest and red for current. Not the running averages one.
What about a sea ice maximum bet?
Harold– Yes. You are right! I’ll fix that later.
Well, there is an article suggesting that the Arctic was ice free in summers about 6000 – 7000 years ago.
.
Yes, that was because the Earth was closer to the Sun due to orbital variation, causing increased summer insolation at high northern latitudes. Is the same happening now?
Neven
Wasn’t the peak Milankovitch forcing about 10ka? This is what has always puzzled me about the Holocene Thermal Maximum. It came ~4ka AFTER the peak in solar forcing.
Did all the energy get stored in the climate system (seems unlikely)?
Did something else cause it, and if so, what?
Just pondering really – and very interested if there’s informed comment by any paleoclimate savants out there…
Dominic,
.
Coincidentally I read about that earlier today (when trying to find out how much W/m2 difference the orbital forcing made) in this paper by Fischer and Jungclaus 2009:
.
For the last two interglacial periods insolation maxima at high northern latitudes occurred 9000 yBP in the Holocene and 127 000 yBP in the Eemian. Due to the global climate system’s inertia, e.g., continental ice sheets and ocean heat content, there is a delay up to several millennia in climate response, thus the climate optimum of the present interglacial, the Holocene, as well as that of the Eemian occurred several millennia after the insolation maximum.
.
I’m not sure this is the correct W/m2 figure, but here it is anyway:
.
In the Holocene and the Eemian simulation, sea-ice cover in the Arctic is reduced compared to pre-industrial (Fig. 5), especially in the summer months where the insolation in high northern latitudes increases by up to 40 W/m2 in the Holocene and 65 W/m2 in the Eemian. In the winter months, where in both the Holocene and the Eemian insolation is lower compared to pre-industrial, sea ice is not building up to compensate for the reduction. This leads to an almost ice-free Barents Shelf throughout the year and a retreat of the ice margin along the east coast of Greenland. The corresponding reduction in surface albedo (Fig. 1c) leads to increased ocean heat uptake in summer that reduces sea-ice growth in winter and thus leads to a positive sea ice-albedo feedback.
Neven
Thanks for the link. Apologies for the slow response. I am making Sunday dinner this afternoon so no chance for a nice, quiet read. As soon as time permits…
My understanding is that the W/m2 figures in the second quote are widely accepted.
The attribution of climate inertia to OHC and ice sheet formation/melt is obvious and logical. The scale is humbling and the implications for the present are disturbing if all these assumptions are correct.
Shifts in the PDO and AMO since 1980 probably account for ~0.15 C of the temperature change (independent of GHG forcing) since 1980. A more realistic estimate for GHG driven warming since 1980 is in the range of 0.11C per decade.
SteveF (Comment#54649)
October 17th, 2010 at 10:10 am
Shifts in the PDO and AMO since 1980 probably account for ~0.15 C of the temperature change (independent of GHG forcing) since 1980. A more realistic estimate for GHG driven warming since 1980 is in the range of 0.11C per decade.
Evidence? or just a guess.
Stephen Richards,
Based on how the AMO has correlated with global temperatures since ~1870. The PDO has a very similar correlation, but appears to lead both the global temperature and the AMO index by about 5 years. Both indexes show a pseudo-period of ~60-70 years, which suggests that the warming from 1910 to 1945 and from 1975 to about 2005 were related in part to shifts in these indexes.
Could there be no causation at all between the indexes and global temperature (only coincidence)? Sure, but I would not bet on it if I were you. The most likely temperature trend over the next 20 years is a very modest rise in average temperature (<0.1C per decade) as these indexes swing into negative territory, partially off-setting GHG driven warming.
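The “leads by about 5 years” claim is the kind of thing a lagged cross-correlation would test: slide one annual index against the other and see which lag maximizes the correlation. A sketch on synthetic series (not the actual PDO or AMO data):

```python
import numpy as np

def best_lead(x, y, max_lag=10):
    """Lag (in years, for annual series) at which x best correlates with y.

    A positive result means x leads y by that many years.
    """
    n = len(x)
    r = [np.corrcoef(x[:n - k], y[k:])[0, 1] for k in range(max_lag + 1)]
    return int(np.argmax(r))

# Invented annual indexes: y is x shifted five years later
x = np.sin(np.arange(140) * 2 * np.pi / 65.0)  # ~65-year pseudo-cycle
y = np.roll(x, 5)
print(best_lead(x, y))  # 5
```

Of course, with noisy real-world indexes the peak is much broader and flatter than in this clean example, which is one reason the “5 years” figure should be read as approximate.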
Obviously just a guess since he doesn’t know what drives either index, and he does not know what other natural variations contributed to temp increases in the last 100 or so years, or how much that temp increase has been exaggerated by various human errors.
Neven:
“Is the same happening now?”
If the cycle time is 100,000 years or more, then we are probably not that far from the pattern that existed 9000 years ago.
And concerning the millenial lag time, how do they come up with that? Do they just look at the orbital variation and the sea ice and attribute the difference to lag time?
And concerning the millenial lag time, how do they come up with that? Do they just look at the orbital variation and the sea ice and attribute the difference to lag time?
.
This I don’t know, but I do know that they are smarter than me and you. 🙂
[stephen richards (Comment#54669) October 17th, 2010 at 12:56 pm
SteveF (Comment#54649)
October 17th, 2010 at 10:10 am
Shifts in the PDO and AMO since 1980 probably account for ~0.15 C of the temperature change (independent of GHG forcing) since 1980. A more realistic estimate for GHG driven warming since 1980 is in the range of 0.11C per decade.
Evidence? or just a guess.]
I would say a guess for the following reasons:
The two sentences are demonstrably incompatible with each other. As the total warming since 1980 is measured (not estimated) by HadCrut3 at appx 0.33 in three decades, then the measured (not estimated) trend is 0.11 per dec – from ALL forcings and ALL feedbacks. Therefore either the first sentence is wrong and ALL the warming is due to GHGs, or the second sentence is wrong because the estimate is overstated and hence unrealistic. Note the words ‘probably’ in the first and ‘estimated’ in the second. The evidence does not support the theoretical warming.
Arfur
Arfur,
Please look at Lucia’s graph above for 1980 to 2010. http://rankexploits.com/musings/wp-content/uploads/2010/10/LongTermTrend.jpg
You will see that the slopes of the linear best fit line and the MEI corrected best fit line are both very close to 0.16C per decade. I do not have any idea where your value of 0.11 comes from, but it is most certainly not the measured trend since 1980.
“but I do know that they are smarter than me and you”
Neven,
Smarter than you? That is your call, obviously. 😉
Smarter than any other particular person? A belief of yours, perhaps.
Andrew
“And concerning the millenial lag time, how do they come up with that? Do they just look at the orbital variation and the sea ice and attribute the difference to lag time?”
The Scandinavian ice-cap only melted completely about 10,000 BP and the North American one about 8,000 BP (calendar years), so you would expect a certain amount of lag in nearby areas due to this. Studies in northern Canada (more specifically the distribution of dated whale remains in the Parry archipelago) strongly suggest that sea-ice was already much reduced compared to current conditions by 10,000 BP.
High solar insolation in the Arctic as in 10K years ago was not going to have much effect until the glaciers melted from the south. Albedo of the glaciers and sea ice at that time would have been about 0.65, so the high solar insolation was mostly just reflected back to space. There was still some ice age glacier left by the Holocene Optimum.
Close-up of 65N June Solar Insolation back 10K (the peak) and going out 10K years (we probably need to get to about 470 watt/m2 to kick us into an ice age so the next one may take 50K or 150K years).
http://3.bp.blogspot.com/_nOY5jaKJXHM/S0pMeoX0DYI/AAAAAAAAAl8/ficHeOA8Jdw/s1600-h/insolation+10k.jpg
Bill Illis,
“we probably need to get to about 470 watt/m2 to kick us into an ice age so the next one may take 50K or 150K years”
How do you know this?
“There was still some ice age glacier left by the Holocene Optimum.”
How do you know this?
gosh,
people get concerned that by showing short term graphs the public will somehow be misled. Like showing pornography leads to rape. In truth, the response to a sign is not inherent in the stimulus. What’s that mean?
that means regardless of what Lucia displays as a sign, some people will always respond by talking about ice ages. Others will write limericks and some will discuss semiotics. There is no controlling it, except perhaps by blowing up those who disagree with your interpretation.
relativistically yours,
moshpit
SteveF,
I use the following data:
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
But I must admit my .11C figure, which I based upon the three-year running average and memory, is inaccurate. I have rechecked the data and find that the figure is either .123C (using the 3 year running average) or .145 using the year-so-far figure for 2010. Even if we use the year-so-far figures, for a trend of .16C to be accurate, the total warming since 1850 would have to be .48C, and it is .43C. If we then deduct your .15C for the PDO/AMO we get 0.095C as a trend. That trend has to take in all factors other than the PDO/AMO. This means that the GHG forcing trend cannot be .11C as you say.
You may think these are small differences but bear in mind that the total warming from all factors and forcings since 1850 (taken as the accepted start of anthropogenic emissions) is less than 1C (actually 0.95 using the 2010 year-so-far figure and less using the 3-year running average) which gives an overall trend of 0.06C per decade.
Arfur
Arfur,
Did you look at Lucia’s graph with the trend lines on it? You don’t need to use a three-year average, or any other particular non-standard method; the best least-squares fit trend is calculated based on all the data in the period under consideration. The trend is very close to 0.16C per decade over the past 30 years. I do hope you can see that on the graph.
Whatever forcing you want to consider in 1850 (or 1900, or 1925), that forcing doesn’t have much to do with the radiative forcing right now. The trend since 1850 is not terribly relevant when you consider that most of the increase in radiative forcing (from all GHG’s, not just CO2), which had been rising slowly since the early 1800’s, rose sharply starting in the middle of the last century; nobody suggests the rate of warming in the early 20th century and before should be similar to the later part of the 20th century until the present.
SteveF,
Yes, I did look at Lucia’s graph. The data I provided you is, as far as I can make out, official data from source. I am not sure why Lucia’s trend starts at nearly -0.2C anomaly when the HadCrut3 data clearly shows 1980 was a positive anomaly year.
You told me in another thread:
“So it is easy to show that a relatively short period of flat (or even slightly falling) temperatures does not prove anything…”
I agree. Trying to show the veracity of GHG radiative forcing over a thirty year period does not prove anything. I disagree with your dismissive attitude to the overall trend since 1850. The overall trend is the only one that counts! If your point about the radiative forcing being more effective in the latter part of the period since 1850 is valid, then the overall trend will show an increase – it has to. Unfortunately, it does not. In fact, if you look at the official data I have provided, the overall trend today – even at the end of another El Nino period – is markedly lower than its peak at around 1878!
Therefore, whilst there may have been ‘some’ warming from GHG radiative forcing, until the overall trend shows a significant rise, the argument that radiative forcing – even if it can be proved that the warming is caused by radiative forcing – makes any significant (dare I say catastrophic) contribution is not supported by evidence. We have just been through another El Nino and the global temperature has still not exceeded its high point in 1998.
Short, interim trends are meaningless until they significantly change the overall trend. You can pick any discrete thirty year period and come up with a totally different trend.
Arfur
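The window-sensitivity Arfur describes is easy to demonstrate: put a multidecadal oscillation on top of a steady rise (all numbers below are invented) and 30-year trends scatter widely around the full-record trend.

```python
import numpy as np

def trend_per_decade(years, temps):
    """OLS slope scaled to degrees C per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

# Invented series: 0.05 C/decade underlying rise plus a 65-year oscillation
years = np.arange(1850, 2011)
temps = 0.005 * (years - 1850) + 0.1 * np.sin(2 * np.pi * (years - 1850) / 65.0)

full = trend_per_decade(years, temps)
thirty = {s: trend_per_decade(years[s - 1850:s - 1820], temps[s - 1850:s - 1820])
          for s in (1880, 1900, 1920, 1940, 1960, 1980)}
# full stays near 0.05 C/decade, while the 30-year windows swing well
# above and below it depending on the phase of the oscillation
```

This cuts both ways, of course: it shows why a single 30-year window can mislead, but not that short windows carry no information at all.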
I use a different baseline from Hadley, as indicated on the graph. Each agency uses different baselines. I set all to a common baseline 1980-1999.
Of course you can get a different trend over different 3 year periods. That doesn’t mean short term trends are meaningless. For example: The short term trend after volcanic eruptions tends to decrease relative to the pre-eruption level. This is due to the eruption: i.e. we understand it based on physics. The short term trend confirms something we expect based on physics and is not “meaningless”. Other examples are possible. But anyone who runs around saying short term trends are meaningless doesn’t know what they are talking about.
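To be concrete about the baselines: setting series to a common baseline just means subtracting each series’ own mean over the reference years, which removes the constant offsets between agencies. A sketch with invented series:

```python
import numpy as np

def rebaseline(years, anomalies, ref=(1980, 1999)):
    """Shift a series so its mean over the reference years is zero."""
    years = np.asarray(years)
    anomalies = np.asarray(anomalies, dtype=float)
    mask = (years >= ref[0]) & (years <= ref[1])
    return anomalies - anomalies[mask].mean()

years = np.arange(1950, 2011)
hadley_like = 0.004 * (years - 1950)         # pretend anomalies vs. one baseline
giss_like = 0.004 * (years - 1950) - 0.10    # same series vs. another baseline

a = rebaseline(years, hadley_like)
b = rebaseline(years, giss_like)
# After re-anchoring, the constant offset between the two series vanishes
print(np.allclose(a, b))  # True
```

Note this only removes offsets; it does not change any trend, which is why trends computed on different baselines are directly comparable anyway.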
SteveF (Comment#54796)
October 18th, 2010 at 9:07 am
Glacial retreat timeline (note that the extent of sea ice is rarely shown in these maps – the majority of the Arctic ocean has 7 metre ice throughout most of this period).
http://lh6.ggpht.com/_WtnYwFZtgHI/SshOURn0q0I/AAAAAAAAAe0/nLz–G3sWUo/s800/LGM%20deglaciation.jpg
Sea level did not peak until about 5500 BP.
June Solar Insolation at 65N historically needs to drop below 470 Watts/m2 to kick us into an ice age. After that, Albedo takes over and the mass balance of the glaciers needs 2 or 3 solar insolation peaks to melt back completely. 470 is like the sunshine at 9:00 am, so in the Arctic this is about the wattage required to melt all the snow on land during the summer. The forecast only dips to 470 at 50K AD and 130K AD.
http://2.bp.blogspot.com/_nOY5jaKJXHM/S0pMWh0jC_I/AAAAAAAAAl0/jFrzGsHsWEE/s1600-h/insolation+200k.jpg
Neven:
“but I do know that they are smarter than me and you”
And how do you know that? Looking at lag times for other climate events, like ocean heat content response to surface temp; or surface temp response to solar cycle forcing or CO2 forcing, etc., a multi millenial lag time seems very long. I’m willing to bet that they decided that orbital variation was the answer, so by default, the response lag time that they saw was attributed to be the orbital forcing lag time. It’s the old, “what else could it be” answer that is so common to climate science.
Lucia,
I’m sorry to have upset you. There are two points here…
[I use a different baseline from Hadley, as indicated on the graph. Each agency uses different baselines. I set all to a common baseline 1980-1999.]
Ok, I never actually had a problem with your graph. I wasn’t sure why the anomalies were different from the HadCrut data but you have explained that. Fair enough. My problem was with SteveF’s dogmatic statement that we should attribute 0.11C per decade to GHG radiative forcing without any evidence in support of that statement. The fact that I came up with a slightly different trend was a sideshow to the discussion as to whether his statement was a ‘guess’ or not.
[Of course you can get a different trend over different 3 year periods. That doesn’t mean short term trends are meaningless. For example: The short term trend after volcanic eruptions tends to decrease relative to the pre-eruption level. This is due to the eruption: i.e. we understand it based on physics. The short term trend confirms something we expect based on physics and is not “meaningless”. Other examples are possible. But anyone who runs around saying short term trends are meaningless doesn’t know what they are talking about.]
You have taken my comment out of context. Again, I used the term ‘meaningless’ in the context of SteveF’s assumption that he could attribute a certain figure – whatever that is – to radiative forcing by GHGs simply by using your 30-year trend. Since the start of emissions (taken as 1850) there have been around 130 ‘thirty-year trends’. These can vary markedly over very short periods – as you are aware. They are, however, effectively meaningless to a discussion on radiative forcing, as SteveF seems to think the trend is increasing due to GHGs after the mid-20th Century when the steepest overall warming ‘trend’ actually took place long before the start of the 20th Century! Since then, the overall trend has decreased. How much ‘meaning’ is attributed to any one of the 30-year trends in the late 19th Century and early 20th Century today? None, because the only relevance is the total overall trend since the agreed start date. We can say “yup, the rate of warming went up, then down, then up a bit, etc…” but only the overall trend is required to give us the big picture. And only if or when that trend shows a significant steepening will we know whether the GHG radiative forcing argument holds water. How many of those 30-year trends have actually backed up the science you speak of? They are of academic interest maybe, or useful for having fun debating…
As to your point regarding volcanoes, I agree trends may help in this regard although virtually any time-series of data will suffice to show such an effect. Again, this was not the context of my post.
Regards,
Arfur
Arfur,
This graphic http://i54.tinypic.com/53wuaw.jpg shows the Hadley Crut3V global temperature trend along with the predicted temperature from a simple model that includes total forcing, 5 years of ocean lag, and ~10% of radiative forcing off-set by aerosols. The wiggle in the green trace is caused by variation in solar intensity over the 11 year solar cycle, and is based on an estimated effect of ~0.002 watt per square meter per sunspot (which comes from satellite measurements of solar intensity variation over the solar cycle).
The Hadley data does appear to oscillate a bit around the green line with a period of roughly 60-70 years. If we assume a similar pattern will hold in the future, then it would be reasonable to expect the trend in measured temperature to be a bit lower over the next 20-30 years. By the way, the overall trend of the green line since 1967 is 0.113 C per decade.
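A simple lagged-response model of the sort SteveF describes can be sketched as an exponential relaxation toward the forced equilibrium. His actual spreadsheet may differ; the sensitivity, lag, and forcing values below are illustrative guesses only:

```python
import numpy as np

def lagged_response(forcing, sensitivity=0.5, tau=5.0):
    """Exponentially lagged temperature response to an annual forcing series.

    sensitivity (C per W/m^2) and tau (years) are illustrative guesses.
    Each year the temperature relaxes 1/tau of the way toward equilibrium.
    """
    temps = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        equilibrium = sensitivity * forcing[i]
        temps[i] = temps[i - 1] + (equilibrium - temps[i - 1]) / tau
    return temps

years = np.arange(1900, 2011)
forcing = 0.02 * (years - 1900)                                     # made-up GHG ramp, W/m^2
forcing = forcing + 0.1 * np.sin(2 * np.pi * (years - 1900) / 11.0)  # ~solar cycle wiggle
temps = lagged_response(forcing)
# The response tracks the ramp with a lag of roughly tau years and
# carries a small, strongly attenuated 11-year wiggle
```

The attenuation of the 11-year wiggle relative to the slow ramp is the main qualitative behavior such a lag produces, whatever the exact constants.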
SteveF,
The graphic actually shows the measured temperature. We interpret a trend from the graph. It is interesting that you pick 1967 as a start to produce your .113 C pd trend. I say we should start at the beginning and, if we do, we get a trend of appx .05 C pd. There, we now have two trends from one graph.
The green line is a hindcast model ‘prediction’ (or should that be ‘postdiction’?) based on an input by a programmer some 150 years or so (I don’t know when the model was done…) after the start of the measured data. Funnily enough, the algorithm used by the programmer produces a line which sort of approximates to the measured data. Now, what would have been REALLY clever would have been a similar green line produced by a programmer (or soothsayer) back in 1850… As an aside, why doesn’t the green line continue well into the future if it is a prediction?
The programmer has had to make some assumptions to produce the ‘prediction’, just as you are in assuming that a similar pattern will ensue in the future.
The model is not proof that radiative forcing is the only or even the most significant factor producing the measured temperature rise. No model is.
Arfur
They use the studied forcings and other known physical processes to make the prediction, not ‘assumptions’.
They are meaningless in the sense that the short term signal will be mostly due to cycles such as El Nino that are stronger than the long term AGW warming signal.
Arfur
I agree – you are spot on.
Anyone calculating trends from data as short as 30 years ago (roughly the last cycle trough) is either fooling him/her/self and doesn’t understand what she/he is doing;
OR understands only too well,
And is trying to fool other people who know even less about the uses and misuse of statistics.
The NCDC global index (at least as the data existed a year ago), showed a long term trend since 1880 of 0.7 degrees per 100 years (0.07 degrees per decade). How much of this increase is due to UHI, measurement and thermometer siting errors and movements, processing the data in strange ways after the reading etc, etc, etc.
Nobody knows.
Through the long term average runs an approximate 65-year zigzag pattern, which explains why the temperature has risen sharply since 1975.
But when you look at many long term records for individual locations, not distorted by UHI, it is difficult if not impossible to detect any long term increase at all, over more than 150 years in many instances.
Bugs – I agree that short term trends are short term trends, not meaningless fluctuations.
But you can’t even begin to say anything about the future based on short term trends.
Many AGW alarmists pounce on short term trends and wrongly use them to warn of a hot hell future to come.
Ausie Dan (Comment#55345) October 20th, 2010 at 5:32 am
I think you just disagreed with Lucia.
bugs
Could you elaborate what you think?
bugs–
And they are meaningful in other senses. Hence “not meaningless” because they have some meaning. You do get this concept, right?
bugs
Studied or not, the levels of forcings used in model runs are “assumed”. The assumptions can be informed by science, they can be good assumptions, they can be assumptions we have pretty good confidence in, but they remain assumptions.
The parameterizations in models are also “assumptions”. Simplifications are also made based on “assumptions”. People make assumptions in science all the time. Pretending the assumptions aren’t “assumptions” is just silly.
Lucia:
“The parameterizations in models are also ‘assumptions’”
Yes, not only are the parameterizations assumptions, but the idea that we know what all the factors that effect climate are and that we have included them in the models is also an assumption.
For example, the Solomon et al paper about changes in water vapor in the stratosphere was a revelation that came out just last year. It showed that there was another element to the climate that was significant and that was unaccounted for by any models. Cosmic radiation research is ongoing, and may yield another important factor in our climate. Again, it is not accounted for in models. The assumption that we have enough of the important forcing factors identified and included in the models to predict long range climate is at this point absurd.
Now that things have slowed down a bit, how about closing out the summer minimum ice bet. I’ve convinced my 12 yr old son to take a stab at the Oct. UAH bet and need the quatloos 🙂 🙂 🙂 . His bet is toward the low side, but maybe the lag has finally ended and he’ll win. He’s already impatient to find out how it comes out. I think it’s a great way to get a kid interested in and start relating to science.
My prediction made in July 2008 ( http://www.climateaudit.info/data/uc/GMT_prediction.txt ) is still doing quite fine. I almost had to throw it away in Mar/Apr, but El Ninos are not in the model so let’s wait for the next 2-sigma event 😉
Arfur (Comment#55340),
Well the ‘programmer’ is me, though it is not really much of a program, just a spreadsheet. Now I expect you think any projection of the future based on the past is just garbage (or perhaps rubbish, if you are a Brit). Please look at this graphic: http://i56.tinypic.com/n4fu41.jpg.

The blue trace is the Hadley temperature data. The green trace is the same model that I showed you yesterday, but this time including the Nino 3.4 and AMO indexes (along with ocean lagged forcing and 10% aerosol off-set, as before). The Nino 3.4 and AMO indexes make the fit between the model and the Hadley data much better between 1871 and 1976 (this is not a surprise, since these indexes capture much of the short term variation). The ‘model’ represented by the green line is based (calibrated, if you will) on only the Hadley temperature data between 1871 and 1976… the model has no “knowledge” of the Hadley data after 1976.
The red trace is the same model output as the green line, but applied to the lagged GHG forcing and the Nino 3.4 and AMO indexes since 1976, with no further guidance from the Hadley measured temperature. The red trace (1977 to 2009) represents an honest “test” of the model, since the model constants were based only on pre-1977 data.
As you can see, the red trace follows the Hadley data pretty closely.
You can simply dismiss this sort of thing, and based on everything that you have written until now, I do expect that is what you will do. But please keep in mind what that red trace says: Based only on how the temperature, Nino3.4, AMO, and total radiative forcing varied between 1871 and 1976, you could have made an almost perfect prediction of what the temperature evolution would be between 1977 and 2009. Pure coincidence? Pure luck? Sure, maybe, but then again, maybe not. I suggest that you at least consider possible causation between GHG radiative forcing and temperature increases.
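The calibrate-then-verify procedure described here can be outlined in a few lines: fit a regression on the early portion only, then apply the frozen coefficients to the later predictors. Everything below is synthetic; it is not SteveF’s data or his exact model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 140                                   # stand-in for 1871..2010
t = np.arange(n)
forcing = 0.01 * t                        # made-up lagged GHG forcing
nino = np.sin(t / 3.0)                    # stand-in for the Nino 3.4 index
amo = np.sin(2 * np.pi * t / 65.0)        # stand-in for the AMO index
temp = 0.5 * forcing + 0.08 * nino + 0.1 * amo + 0.02 * rng.standard_normal(n)

split = 106                               # calibrate on the first ~106 "years" only
X = np.column_stack([forcing, nino, amo, np.ones(n)])
coef = np.linalg.lstsq(X[:split], temp[:split], rcond=None)[0]

pred = X @ coef                           # frozen coefficients, applied everywhere
holdout_err = np.abs(pred[split:] - temp[split:]).mean()
# If the calibration generalizes, holdout_err stays near the noise level
```

The honest-test logic is entirely in the `split`: nothing after it informs the fit, so close agreement on the holdout period is genuine out-of-sample skill, at least for the structure the model assumes.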
SteveF,
I am genuinely impressed! Why on earth didn’t you show me this graph in your last post? Given that you say the model has no knowledge of the Hadley data after 1977 the correlation is stunningly close.
Now, if you could just show me the continuing prediction of the model to, say, 2020, then we will be able to see just how good the model is at predicting the future. While you’re there – you wouldn’t care to have a stab at the Eurolottery numbers for next month, would you? 🙂
A couple of points;
I don’t think any prediction of the future based on the past is rubbish. Not at all. I predict that the winters will, on average, be colder than the summers in the UK for at least the next ten thousand years. That is a prediction based on historical data. It is a prediction of the future – not a prediction of what has already happened. The only way your model can be used successfully in this discussion is by actually making a prediction. Go for it.
What happened to the green line after about 1979? It would be nice to see what difference the change in index actually made.
I do not dismiss your work, SteveF. I am, by nature, a sceptic. You have produced a graph which, on the face of it, represents an uncanny similarity to what has actually happened. However, you have chosen all the inputs and you may have made changes to those inputs based on historical data. Until your model actually predicts the future to the same degree of accuracy as it predicts the past, I hope you’ll excuse me if I temper my genuine admiration for your modelling/spreadsheet abilities with some eager anticipation of what they hold for the future. You said: “you could have made an almost perfect prediction…”. Have a think about that statement and then give it a go using the present tense instead of the past tense.
Finally, you said:
[I suggest that you at least consider possible causation between GHG radiative forcing and temperature increases.]
I can assure you that I have considered this A LOT. I have no doubt that there is ‘some’ causation. I suspect it is the amount of causation that we will disagree about. At present, there is no way of knowing what that amount is. We can both guess and that’s about it. Your estimate of 0.11C per decade due to GHG radiative forcing is still a guess, albeit an educated and studied one.
Arfur
Arfur,
A couple of clarifications:
.
The green and red traces are one and the same; the change in color only shows the point at which the output from the ‘model’ changes from being based on the Hadley data to not being based on it, that is, the change from hind-cast to fore-cast. Since the model is based on pre-1977 data, you would of course expect the fit in that period to be good. Please keep in mind that there is some disagreement about attribution of changes in temperature to indexes like AMO and Nino 3.4… are changes in those indexes a cause or an effect? Or some of both?
A projection of the future through 2050, with no AMO, Nino 3.4 or solar cycle is shown in this graph:
http://i53.tinypic.com/281atj7.jpg
.
The green trace is the same as you saw before, but extended to 2050, without any contribution from AMO, Nino 3.4, or solar cycles. There is also a comparison line showing the IPCC AR4 near term projection of 0.2C per decade.
.
Since nobody knows a) how the emissions of GHG will change over time, b) how much of the CO2 emissions will remain in the atmosphere and how much will be sequestered, c) how the solar cycle will behave, d) how much of future radiative forcing will be off-set by aerosols from combustion of fossil fuels, or e) how natural pseudo-cycles will evolve, there has to be considerable uncertainty in any projection. I used a projection of CO2 emissions based on 1.5% per year growth for 20 years (that is, an exponential growth of CO2 emissions), followed by a gradually declining rate of growth… reaching ~0.75% per year growth in emissions by 2050. I also based future absorption of CO2 by the ocean/biosphere on the % rate estimated for the last 50 years. Lots of people will argue with these choices, but the truth is… nobody knows for sure what will happen.
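The emissions path described above can be sketched in a few lines. The shape of the taper (linear decline in the growth rate from 1.5% to ~0.75% by 2050) is my guess at what the spreadsheet does; the start year and the normalization are arbitrary choices for illustration only.

```python
def emissions_growth_rate(year, start=2011):
    """Illustrative growth-rate schedule: 1.5%/yr for the first 20 years,
    then tapering linearly to ~0.75%/yr by 2050 (assumed shape)."""
    t = year - start
    if t < 20:
        return 0.015
    taper_years = (2050 - start) - 20
    frac = min((t - 20) / taper_years, 1.0)
    return 0.015 - frac * (0.015 - 0.0075)

# Build an emissions path, normalized to 1.0 in 2010 (arbitrary units).
emissions = {2010: 1.0}
for year in range(2011, 2051):
    emissions[year] = emissions[year - 1] * (1 + emissions_growth_rate(year))
```

The point is not the specific numbers (which, as said above, nobody knows for sure) but that any projection has to commit to some such schedule.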
.
The projected rate of temperature increase is 0.11C per decade through 2040, with a diagnosed climate sensitivity of ~1.2C per doubling.
.
You should be aware that the choices made in the model (like future aerosol off-sets proportional to future CO2 emissions, overall rate of aerosol offset per unit emission of CO2, and ocean lag period) all have some effect on the projection, and none of these is very certain. A different combination of assumptions will generate a different plausible projection (not as good a fit to the Hadley data, but still reasonable). These choices are somewhat constrained by the more-or-less known rate of heat accumulation in the ocean since ~1955: you have to assume a longer ocean lag period (which gives more heat accumulation) along with higher aerosol offsets to have the calculated ocean heat accumulation match the measured heat accumulation.
.
One of the surprising things (well, surprising for me at least) is that if you assume a minimum of aerosol offset and a relatively short ocean lag, the projected temperature increase is quite a bit steeper for the next 30 years than if you assume higher aerosol offsets and much longer ocean lag periods, while further in the future (post 2050) the warming increases… especially when the rate of growth in fossil fuel use begins to decline. For example, if you assume the current aerosol offset of GHG forcing is ~40%, the projected rate of temperature rise through 2040 declines to about 0.09C per decade, but the diagnosed long term climate sensitivity increases to ~1.8C per doubling.
.
The reason I think the IPCC sensitivity estimate of ~2.85C per doubling has to be wrong is that the required level of aerosol off-sets and assumed ocean lag periods would be quite extreme (for example, >60% aerosol offset), and the fit to the Hadley data starts to look substantially worse. Everything I have looked at points to a sensitivity below 1.8C per doubling, and the best fit to the data gives between 1.2C and 1.4C per doubling.
Arfur,
One last thought…
If you look at how the Hadley data has varied around the longer term (secular) trend, you can see that the “natural variation” is quite large compared to any relatively short term trend. For example, the graphic I posted shows a trend of 0.11C per decade, but the historical Hadley variability suggests natural variation will lead to short term swings (over 1 – 2 decades) of +/- 0.2C around the secular trend. So it takes a large discrepancy between the measured temperature and what a model projects as the secular trend before you can conclude the model’s secular trend projection is wrong. It looks about impossible to show GCM projections are statistically ‘wrong’ based on measured temperatures unless you are willing to wait for 20-30 years from now. Which is why I have argued (mostly with Carrick over at the Air Vent) that much better data on ocean heat accumulation and Earth’s albedo are needed to constrain the models more quickly than 30 years. We might get lucky (well, unlucky for those who live nearby!) and have a major volcanic eruption, which would allow the models to be better constrained within a few years, but that is not likely.
SteveF,
This is fascinating. Thanks for the discussion. A few points from me in return…
You now say:
“The projected rate of temperature increase is 0.11 per decade through 2040…” If this is the overall trend, as that statement implies, then surely it must include PDO/AMO? And yet, in your original post, you stated that the 0.16C per decade trend would be offset by PDO/AMO, leaving 0.11C per decade from GHG warming. Which is true?
At the end of the second graph, the red line is going up well above the blue line. In your third graph, the red line is going down towards the blue line. Which is true?
Are you using the same baseline as Hadley? If so, why is your blue line for 2009 well below the data provided by CRU on the HadCrut3v dataset? According to that set, the average for 2009 is 0.439C. Obviously this would actually support your trend, but I like to be totally objective about this stuff.
I hate to labour the point but your projection (prediction) in the third graph is a simple trend line. Why would you not include all the other factors you mention and project a more accurate graph if you are confident of your estimates?
Back to the subject…
Of course “there has to be considerable uncertainty in any projection…” That’s my point. That’s what makes it a guess. The main assumption in these predictions/trends is that the cause of the warming is mostly GHGs. As far as I am aware, there is no empirical data to support this assumption. I realise that there is likely to be ‘a’ causal factor from GHGs but you are basing your model around that prime assumption. Then, you use differing aerosol offset figures (for example) to adjust the basic trend. What if the basic forcing from GHGs is incorrect? Do you have any concrete real-world proof that GHGs are the main culprit and that your figures are correct? I use this as an example in agreement with your conclusion that any projection/prediction has to have considerable uncertainty surrounding it.
If you have to estimate every single input factor to put into the model, because there are so many uncertainties, then how can the output be considered anything other than a guess? (Again, albeit an educated one and in your case a highly educated one…)
I guess only time will tell what the climate sensitivity is. (By the way, I happen to think your estimate is not too far off – but I’m guessing!) Until we can get more concrete evidence, I would not be in too much of a hurry to make predictions.
Arfur
Arfur,
.
The hind-cast/fore-cast test of the model was run some time ago, before I had current data. I had made an estimate for the final year based on part of a year’s data. This is the biggest reason for the discrepancy you see.
.
Another reason for small discrepancies is that for the final graph (predicting past the present year) I returned to calculating the model parameters based on all the Hadley data, not just 1871 through 1976. This does not make a huge difference, but does make some.
.
My guess with regard to the 0.16C per decade trend (what Lucia showed) is that this trend is exaggerated by the transition from the negative AMO index (in 1980) to the currently positive AMO index after about 2000. I did not want to get into this in the earlier discussion, so I noted only that the future trend is likely to be lower than that of the last 30 years. 0.11C per decade is my best estimate of what the secular trend will be. Pseudo-cyclical factors can alter that trend somewhat. Remember, nobody knows for certain, and anyone who says they do is (for certain) full of horse dung.
.
“I hate to labour the point but your projection (prediction) in the third graph is a simple trend line. Why would you not include all the other factors you mention and project a more accurate graph if you are confident of your estimates?”
.
Because we do not know what those factors will be. Nino3.4 and AMO change over time, and while we could guess what they will do in the future, that would be really just a guess. Same thing with the solar influence. Will the peak of the current solar cycle be very weak (lower solar intensity)? Will we go into another deep minimum like the Maunder Minimum? Will the next solar cycle be very strong? Nobody really knows.
.
Once you get to 5 or 10 years from now, you can put in the appropriate values for Nino3.4, AMO, and the solar cycle, and see how the model did (in detail) between now and then. Until then, all you can do is project a secular trend, and recognize that anything more is impossible (unless you figure out a way to accurately predict Nino 3.4, AMO and the solar cycle).
.
“The main assumption in these predictions/trends is that the cause of the warming is mostly GHGs. As far as I am aware, there is no empirical data to support this assumption.”
.
Well, yes and no. The spreadsheet calculated how well the temperature history is correlated with the radiative forcing. Had there really been no effect, the spreadsheet would have calculated a very low correlation and instead reported that other factors were responsible. The fact that the radiative forcing is one of the variables considered does not mean it is automatically important… it might have little or no influence. In fact, the spreadsheet says the forcing and the temperature are strongly correlated.
.
It is possible there are other factors involved that we just do not know about (which could either inflate or deflate the measured trend!). The specific question the model tries to answer is: “Assuming that the temperature trend is the result of a combination of radiative forcing, aerosol off-sets, ocean heat accumulation, and pseudo-cyclical natural variations (Nino 3.4, AMO), what is our best estimate of the influence of radiative forcing?” If someone wants to point to other factors which could explain the observed warming, then they carry the burden of showing/proving that these other factors both actually exist and that they can influence Earth’s temperature in a physically sensible way.
.
The assumption that radiative forcing is important is… well, an assumption, but there are very good physical reasons to believe infrared absorbing gases will warm the surface of the Earth, even if the amount of that warming is not accurately known. Radiative physics is pretty well accepted and proven, and has been for a very long time. For GHG’s to have no influence at all, radiative physics would have to be way wrong… and that is not very likely. The model is just an attempt to make a reasoned estimate of the amount of warming that can be attributed to GHG’s.
.
“If you have to estimate every single input factor to put into the model, because there are so many uncertainties, then how can the output be considered anything other than a guess?”
.
You don’t have to estimate all the inputs; you have some pretty good numbers, like the Hadley temperature data, measurements of CO2 (especially since 1957), accurate sunspot numbers, accumulated ocean heat measurements, etc. There is some guesswork involved, of course, but that guesswork is (we hope) educated… guided by some understanding of how things work in general. The guesses should all be pretty reasonable/realistic, even if not perfect.
.
Niels Bohr said, “Prediction is very difficult, especially about the future.” And he was right.
SteveF,
the key issue I see with your prediction is the possibility that global temperature drives AMO. AMO roughly corresponds to the general pattern of warming 1900-1940, steady temperatures 1940-1980, and warming 1980 to now. But the total forcings for aerosols+solar as used in GISS Model E, for instance, also show a similar pattern. So aerosol and solar roughly correspond to AMO. There are a few different possibilities:
The correlation is coincidence; or
AMO has caused the variation in measured solar + aerosols, i.e. AMO has driven temperature, and temperature has driven scientists to fudge aerosol measurements to fit this curve; or
aerosols+solar drive temperature, which drives AMO.
AMO is a measure of the temperature of the north Atlantic, with the long term trend removed, so it seems quite obvious that global temperature could be driving AMO. The fact that the long term trend has been removed from the AMO calculation should reduce the amount that warming has affected the AMO. The only reason I personally don’t dismiss the AMO completely as a driver of global temperature is that climate modelling studies tend to confirm that AMO variations can drive global temperature variations.
Back to your model. Your model would have been great at predicting 1980-2010 temps if the AMO, Nino and GHG forcings had been known in advance. I suspect that if we could know the AMO, Nino and GHG radiative forcings for 2010-2050 we would probably get another great prediction of temperatures. But if global temperature has an effect on AMO, and future warming drives the AMO index higher, then your assumption of no change in AMO will underestimate the future warming.
A tricky issue is the fact that when calculating the AMO index the long term trend has been removed. This means that any long term trend in global temperatures should not be affecting the AMO index. However shorter term variations in temperature will also drive the AMO if they can affect the Atlantic Ocean temperatures. As an example the AMO took a dive for both the 86 and 92 volcanic eruptions.
Due to the way AMO is calculated, then if the rate of warming due to CO2 in the future stays constant, then the AMO will not be affected. If it speeds up at all (or slows down), then this will affect the AMO, to the extent that Atlantic ocean temperatures change at the same rate as global temperatures.
Michael Hauber,
.
Yes, the issue of AMO being both cause and effect is well known, and this might influence the predicted secular trend.
.
The fact that the AMO is linearly de-trended versus time is probably the biggest issue, since the temperature trend ought to be closer to linear with respect to net forcing, not linear with respect to time. I am thinking about how to generate a non-linear de-trending of the same North Atlantic Ocean raw temperature data as is used for AMO to reduce this potential error.
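The contrast between the standard linear-in-time de-trending and de-trending against forcing can be sketched as follows. The series here are synthetic stand-ins (a quadratic-in-time "forcing" plus a 65-year oscillation), invented purely to show the mechanics; this is not the actual North Atlantic SST data or any specific proposal from the comment above.

```python
import numpy as np

def detrend_linear_time(years, sst):
    """Standard AMO-style index: remove the best linear-in-time fit."""
    coefs = np.polyfit(years, sst, 1)
    return sst - np.polyval(coefs, years)

def detrend_vs_forcing(forcing, sst):
    """Alternative: remove the component linearly explained by net forcing."""
    coefs = np.polyfit(forcing, sst, 1)
    return sst - np.polyval(coefs, forcing)

# Synthetic example: forcing grows nonlinearly in time, so a linear-in-time
# detrend leaves curvature in the residual, while the forcing-based detrend
# recovers the oscillation more cleanly.
years = np.arange(1871, 2010, dtype=float)
forcing = 1e-4 * (years - 1871) ** 2
sst = 0.5 * forcing + 0.1 * np.sin(2 * np.pi * years / 65.0)
amo_time = detrend_linear_time(years, sst)
amo_forcing = detrend_vs_forcing(forcing, sst)
```

In this toy case the forcing-based residual has smaller variance than the time-based one, which is the motivation for a non-linear de-trending: the leftover curvature in the standard index can masquerade as part of the "oscillation".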
.
That being said, the presence or absence of the AMO index in the analysis makes little difference in the projected future trend and the diagnosed sensitivity. Removing the AMO index completely increases both the projected near-term trend and the diagnosed sensitivity by about 4.5% (e.g. 0.115C per decade versus 0.110C per decade). So it seems the difference between non-linearity in the temperature trend and the AMO’s linear de-trending method does not have a huge effect. The results would be more robust if adding or removing the AMO index made no difference at all, and a properly de-trended AMO would (I think) do this, and also generate a better overall fit to the historical data.
.
With regard to aerosol forcings assumed by GISS Model E (and other GCM’s): the aerosol forcing history assumed by different modeling groups varies quite widely, and the selected aerosol history seems (no surprise!) to make each model fit the historical temperature data reasonably well. It is obvious that there is a strong correlation between the diagnosed climate sensitivity of a GCM and that model’s assumed aerosol forcing; higher sensitivity models use higher historical aerosol off-sets, and lower sensitivity models use lower historical aerosol off-sets. Since nobody knows what the historical offset really was, it is clear that aerosols are in reality something of a “kludge” that makes the model hind-casts look better.
.
I expect the members of each modeling group would become indignant (profanity might be used) if someone suggested aerosols are a kludge, and they would offer other rationales, and much arm-waving, to show why their particular choice of an assumed aerosol history is correct. They are either deluding themselves or they are somehow fudging (perhaps unknowingly) their models to fit a particular assumed aerosol history… there is no way for me to tell which it is. But for sure, the substantially varying assumed aerosol histories make all GCM projections dubious at best. The correlation between selected aerosol off-set and diagnosed sensitivity is just too strong to be coincidental.
SteveF,
Sorry, I’ve been away for a couple of days. Many thanks for your discussion points. They at least explain what you are doing with the models that you based your initial posts on.
.
[“Assuming that the temperature trend is the result of a combination of radiative forcing, aerosol off-sets, ocean heat accumulation, and pseudo-cyclical natural variations (Nino 3.4, AMO), what is our best estimate of the influence of radiative forcing?”]
.
This statement appears to explain why the model will be self-limiting. If you tell the model that the temperature trend HAS to be the result of those four factors, then the model only has those four factors from which to construct its answer. If you have a reasonable figure for three of them, the model will give you what’s left as a measure of – in this case – radiative forcing. If you called the fourth factor ‘stuff’, then the model would give you the figure for the best estimate of the influence of ‘stuff’! You are limiting the model with an assumption of yours.
I agree that there may be ‘some’ figure for radiative forcing but, as you pointed out to Michael Hauber, the effect of eg AMO on the sensitivity is very small, so the radiative forcing is probably one of the biggest factors being inputted into the model. There has to remain the possibility that some other factor COULD be responsible for at least some of the measured warming; maybe a large portion.
.
[“The assumption that radiative forcing is important is.. well, an assumption, but there are very good physical reasons to believe infrared absorbing gases will warm the surface of the Earth, even if the amount of that warming is not accurately known. Radiative physics is pretty well accepted and proven, and has been for a very long time. “]
.
The physics behind radiative forcing has been accepted, but has it been proven? Do you or anyone else know – accurately – what figure you can use? Your statement starts with an acknowledgement that the importance of RF is an assumption!
.
[“If someone wants to point to other factors which could explain the observed warming, then they carry the burden of showing/proving that these other factors both actually exist and that they can influence Earth’s temperature in a physically sensible way.”]
.
Well, not necessarily. It is not the burden of anyone to ‘prove’ another effect. It is up to those who hypothesise an effect to prove such, and in doing so come up with an accurate figure for that effect. There is no need to invent a false god to explain something we don’t know. It should be sufficient to say ‘we don’t know’ but ‘something’ is causing it…
.
[“Once you get to 5 or 10 years from now, you can put in the appropriate values for Nino3.4, AMO, and the solar cycle, and see how the model did (in detail) between now and then. Until then, all you can do is project a secular trend, and recognize that anything more is impossible (unless you figure out a way to accurately predict Nino 3.4, AMO and the solar cycle).’]
.
So the only reason your graphs so closely match the measured data is that you went back to the model and input measured data received after the period. Until then, all you had was a secular trend line. That was sort of my point a few posts ago. The model is inherently incapable of predicting accurately without some hindsight input. It is therefore of academic, hypothetical interest only. It/they certainly shouldn’t be used to give political policymakers some sort of scientific authority, which is exactly what has happened.
I appreciate your expertise and your time.
Arfur
Arfur,
Just a couple of points in reply.
“There has to remain the possibility that some other factor COULD be responsible for at least some of the measured warming; maybe a large portion.”
Sure, but on the other hand, there may be no other important factor. Progress in all of science is based not on proving theories correct (since this is impossible), but on disproving theories. The proposed theory is that increases in CO2 (and other infrared absorbing gases) could cause a rise in average surface temperature. My exercise with the model shows that the existing data does not disprove that theory of GHG warming. It is always possible to say (as you regularly do!) “your test doesn’t prove that theory is right”, and that is obviously correct, but completely beside the point. A theory (any theory) becomes “well accepted” because nobody can find a way to disprove it, not because someone “proves” it is correct. If someone says “that theory is complete nonsense”, then they DO have the burden to show that the theory is in conflict with data, or in conflict with a “well accepted” theory like radiative physics.
.
“So the only reason your graphs so closely match the measured data is because you went back to the model and inputted measured data you have received after the period.”
.
Sure. It is impossible to know how pseudo-cyclical processes like the AMO and the ENSO will evolve over time, so it is impossible to make an exact prediction of the average temperature years in the future. The model calculates how much these other variables influence the Earth’s average temperature (based on historical records). With these calculated influences in hand, you can in the future “see” the true secular trend by subtracting the influence of these variables from the observed temperatures. The projected (post 1976) trend almost perfectly matches the observed trend (post 1976), which just shows that the model’s calculated influence of these pseudo-cyclical variables and the model’s calculated secular trend are consistent with the observed secular trend being caused by GHG forcing. The real test of the model is: will the model’s calculated secular trend, based on GHG concentrations in the atmosphere, be consistent in the future with the observed temperatures, once these are adjusted to account for the influence of the pseudo-cyclical variables?
.
Or to put it in simpler terms, the model’s secular trend prediction says: the average temperature will be Y +/- the ‘normal variation’ caused by AMO and Nino3.4. The model is dis-proven if the measured trend falls outside the projected trend +/- normal variation.
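That falsification rule can be written down in a couple of lines. The band width here (the +/-0.2C natural variation figure mentioned earlier in the thread) is an illustrative number, not a formal confidence interval, and the function name is my own.

```python
def model_rejected(observed_anomaly, projected_anomaly, band=0.2):
    """The model is rejected if the observation falls outside the
    projected secular trend +/- the 'normal variation' band.
    band=0.2 is the illustrative +/-0.2C figure from the discussion."""
    return abs(observed_anomaly - projected_anomaly) > band

consistent = model_rejected(0.05, 0.11)   # 0.06C discrepancy: within the band
rejected = model_rejected(-0.25, 0.11)    # 0.36C discrepancy: outside the band
```

The wide band is exactly why, as noted above, it could take 20-30 years of data before a GCM-scale secular trend can be statistically ruled out.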
.
“It is therefore of academic, hypothetical interest only.”
Well, it is of much more than academic interest to me. It is of interest because 1) it is remarkably consistent with the observed history, and 2) the projection of the secular trend is MUCH lower than the trend projected by GCM’s. This suggests to me that there is good reason to doubt the accuracy of the GCM’s. And it is GCM’s that are being used to justify draconian public action. My little model argues just the opposite: the projected future warming is too low to be very alarmed about.
Off Topic
Just a lil’ short story of what happened this weekend.
The liberal girl whom I dated a few years ago (and made fun of because she was so Predictably Progressive)… well, to both our credits, we’ve stayed friends. Anyway, I went to visit her over the weekend because she’s a new mom now. I haven’t held a newborn baby in a while and it was a poignant moment. And where the Communist Manifesto sat on her coffee table when I met her for our 2nd date, there are now boxes of baby meds, and beautiful new eyes to gaze into, the mind poison nowhere to be seen.
Andrew
SteveF,
I thank you for the discussion. I have never sought to question the basic theory that greenhouse gasses ‘could’ affect global temperature. I question the ‘catastrophic’ nature of it as publicised by the ‘alarmists’. In that sense, I totally agree with your last sentence (as I agree with much of the rest of your post). I daresay that the next ten years or so will either support your estimate, or not.
I wish you all the best and, again, thank you for your time.
Arfur
Andrew_KY,
I sincerely hope your friend’s baby grows up in good health and in a world relatively unsullied by preconception, bigotry and dogma.
Arfur
Steve F,
I like your approach as a way of formulating hypotheses, but, as a means of testing any hypothesis in the realm of attribution, it seems to me that it suffers from the same problem to which the AOGCMs are prone. This problem relates to “completeness of understanding” of the relevant physics.
In your modelling, if I am correctly following what you have done, you have used (as regression input variables) net forcings which have been optimally fitted to the temperature series by adjusting (inter alia) highly uncertain aerosol values to offset CO2 forcing historically. You conclude that the results are “not inconsistent with” a CO2 sensitivity amounting to about 1.2 deg C per doubling. Hmmm. OK so far.
One of the major problems that exists in the IPCC attribution studies, and which, I believe, still exists in your approach, is that there are a number of “observations” which are not explained by, or taken into account by, existing climate models and which could – and very likely do – have an impact on attribution. These elements, which concern ignored or unknown physics, are accounted for in the models by kludging the aerosol values and cloud physics to achieve a history match, and notably, by sacrificing model matches to a wide swathe of important observations in favour of matching average surface temperature. By transferring the already fitted aerosol forcings into your regression model, I believe that you inherit the same attribution problem.
To illustrate, here are seven such items, not in any particular order and certainly not exhaustive:-
a) aa-magnetic index shows a very strong correlation with temperature (R=0.85 according to one recent paper). Why? Some unknown physics?
b) regression of temperature against TSI is significantly improved by the inclusion of high-energy GCR (proxies) as a variable, and estimates of climate sensitivity over different timeframes are rendered more consistent by this inclusion (Shaviv). Why? Some unknown physics?
c) Variations in stratospheric water vapour, which, according to the RTE, should have a strong effect on global heating, are not model-matched. Ignored observations and physics?
d) OLR is systematically underpredicted by the AOGCMs against the trend in measured data over the critical 1976 to 2002 period, offset by overprediction of emitted plus reflected SW. Ignored observations.
e) mid-to-upper level tropospheric temperatures in the tropics are overpredicted. Ignored observations.
f) matches to regional data (temperature and precipitation) are abysmal. Ignored observations.
g) The models do not match the significant albedo reduction observed between 1985 and 2000. Ignored observations.
I could add to the above list recent findings regarding the impact of UV on stratospheric absorptivity, but they are a little too recent.
All of the above suggests that the models are missing some physics, ignoring flux-source-dependent sensitivities, and, where necessary, ignoring observational data. The main kludging corrections to overcome these deficiencies are in historical aerosol forcings and cloud physics. I believe that by accepting net forcings, and “correcting” them using measured cyclic behaviour, you are inheriting the same problem in attribution.