In comments, people have been comparing trends from various reporting agencies. The discussions include various theories of why the trends might differ. Are political statements made? Yes. Are axes being ground? Yes.
Still, no matter whose neck you wish to chop off with an axe, it’s worth looking at the trends themselves. Because the issue of UHI had been broached, I happened to mention that, if we compute observed surface temperature trends since 1900, the trend is somewhat larger if we choose HadCrut rather than GISSTemp. This is shown below:
(Click for larger.)
- If we use 12-month averages, the Jan 1900 - Dec 2008 HadCrut trend is 0.73 C/century and the GISS trend is 0.66 C/century; a rough computation sketch follows this list. (Slightly different trends are obtained if we use the underlying averages, but the HadCrut trend remains larger than the GISS trend.)
- The difference between the two observed trends is roughly 10%.
- The trend averaged over the 26 GCM models is larger than both observed trends. However, some individual trends are larger and some smaller.
- If we select a much more recent time for the start date, the GISSTemp trend exceeds the HadCrut trend. (This is fairly widely known by climate blog addicts.)
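For anyone who wants to reproduce the descriptive numbers, here is a rough sketch of the calculation in Python. It is not the exact script behind the figure; the CSV filenames and column names are placeholders for wherever you have saved the monthly anomaly series.

```python
# Rough sketch: OLS trend in C/century from 12-month (calendar-year) averages.
# The CSV filenames and column names are placeholders, not the files used for this post.
import numpy as np
import pandas as pd

def trend_c_per_century(csv_path, start="1900-01-01", end="2008-12-31"):
    df = pd.read_csv(csv_path, parse_dates=["date"])
    monthly = df.set_index("date")["anomaly"].loc[start:end]
    annual = monthly.groupby(monthly.index.year).mean()        # 12-month averages
    slope_per_year = np.polyfit(annual.index.values.astype(float), annual.values, 1)[0]
    return 100.0 * slope_per_year                              # C/year -> C/century

for name, path in [("HadCrut", "hadcrut3_monthly.csv"), ("GISS", "gisstemp_monthly.csv")]:
    print(name, round(trend_c_per_century(path), 2), "C/century")
```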
Update: I replaced the original graphs with one highlighting the mean model trend.
Update 2: Barry wanted to see the differences. I subtracted monthly temperatures and plotted.
(Click for larger.)
Lucia,
The funny thing is that GISS is a model of transparency compared to HadCRUT. I bet Hansen et al are more than a little jealous of their peers over the pond after the umpteenth accusation of dishonesty and fraud.
Zeke–
Hansen is more transparent now. But he was dragged there kicking and screaming. Wasn’t the US Congress involved?
I do suspect Hansen is jealous that people complain about him more than Hadley. But… well…. Anyway, I do think part of the reason Hansen gets more flak is that currently HadCrut shows higher anomalies and trends. Hansen intentionally makes himself visible; that contributes. And, of course, people witnessing the resistance to transparency won’t really give him much credit for proactive transparency.
That said: anytime I’ve written Hansen (which is about 3 times) he has responded promptly, and I’ve considered the answer responsive. So, I have no bad experiences with Hansen. I’ve read the reasons for the adjustments. I think his method is complicated and may not improve anything. (I take a dim view of the idea that one can process historic data and be confident that the errors were corrected. This isn’t just climate science; poor data is difficult to fix.)
All in all: I don’t buy anyone’s conspiracies about intentional distortion by either Hadley or GISS. It’s not that I’m any less cynical than others. I just note that both groups operate in countries where people blab.
Yes, in the few cases that I have read complete texts of Hansen’s words (vs excerpts), he has sounded reasonable and his ideas resonated (given that he is speaking as though catastrophic AGW was reality).
They’re probably both envious of NCDC and Lugina et al, who don’t seem to have copped any flak for their surface records!
http://www.ipcc.ch/graphics/graphics/ar4-wg1/jpg/fig-3-1.jpg
Pretty graphs! 🙂
Simon–
There are advantages to being ignored. 🙂
A difference graph between GISS and Hadley would be interesting, to see where the divergence is, whether there is any trend in the divergence, and whether there are any other periods of note.
BarryW– I added a difference graph. Plotted over time, the graph is noisy, but you can see the trend has been down. Very recently, the short term shows a reversal. Don’t know what this means. Possibly nothing.
Crutem3 is a land only dataset. It shows higher values than the land/sea blended Hadcrut3.
http://hadobs.metoffice.com/hadcrut3/diagnostics/comparison.html
Hey Lucia. If you calculate the 95% CI, the trends for both GISS and HADCRUT are statistically indistinguishable. I got 0.656 ± 0.0747 °C/century for GISS and 0.728 ± 0.0807 °C/century for HADCRUT. I’ve been working on integrating the uncertainties in the surface/satellite temperature records to make more accurate comparisons to the model data.
Chad–
Whether or not the trends are indistinguishable depends on what question you ask.
Bear in mind that the anomalies are supposed to measure the same thing. So, each measurement is the sum of
“the true GMST” + “noise”.
The “true GMST” should be the same for both cases. Unlike comparison with a model run, this “true GMST” is for the same true earth during the same time, so it’s the same for both records.
If you do this:
1) Compute both trends, then do a test like Santer’s to see if they are different; you conclude they are not different. This is what you should get if the measurement noise was small compared to the “weather noise”. (Note: Assumptions & caveats apply. But, basically, you conclude the trends are the same.)
2) But now, suppose you want to see if the difference in noise as measured by the two services has a trend. Now, first subtract Hadley from GISS to get the difference in temperature anomalies. (So, this is “noise Hadley” – “noise GISSTemp”.)
Now do a trend analysis on that. That value is significant.
What does this mean? It suggests that something the two services have done differently results in a systematic difference, and that difference is large enough to produce a real trend in the difference series. But the difference due to uncertainty in the measurement process is relatively small compared to the weather noise.
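To make question 2 concrete, here is a rough sketch of one way to run the difference-trend test: subtract the two monthly series, fit an OLS trend to the difference, and judge significance with a lag-1 autocorrelation corrected t statistic, in the spirit of Santer’s test. This is an illustration of the idea, not the actual script used for the post; the input arrays are assumed to be monthly anomalies on a common baseline.

```python
# Sketch: is there a significant trend in (GISS minus HadCrut)?
# giss and hadcrut are assumed to be equal-length numpy arrays of monthly
# anomalies on a common baseline; this illustrates the idea, nothing more.
import numpy as np
from scipy import stats

def diff_trend_test(giss, hadcrut):
    d = giss - hadcrut                          # "noise GISS" minus "noise HadCrut"
    t = np.arange(d.size, dtype=float) / 12.0   # time in years
    slope, intercept = np.polyfit(t, d, 1)
    resid = d - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
    n_eff = d.size * (1 - r1) / (1 + r1)              # effective sample size
    se = np.sqrt(np.sum(resid**2) / (n_eff - 2) / np.sum((t - t.mean())**2))
    t_stat = slope / se
    p = 2 * (1 - stats.t.cdf(abs(t_stat), df=max(n_eff - 2, 1)))
    return slope * 100.0, t_stat, p             # slope in C/century, t, two-sided p

# Usage (arrays supplied by you): slope, t_stat, p = diff_trend_test(giss_monthly, hadcrut_monthly)
```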
One small quibble with the graph. I think showing the linear trend of the mean model is both misleading and inappropriate. If you look at it like any other data you’d think that the current spike up in the mean model could be just random variation and in the future the mean model would probably revert to the trend. But we know that it is not just noise, and it won’t revert to the trend. Until of course they recalibrate the models.
Kazinski– Often, trends are just descriptive statistics. I’m using them that way here.
But you are right, the trends don’t tell us anything about whether the observations will eventually match the model trend. (I could plot the model projections. They exist. But we don’t have the observations yet, and my main point was to compare HadCrut to GISS.)
Jorge (Comment#10155) February 10th, 2009 at 12:55 pm
Crutem3 is a land only dataset. It shows higher values than the land/sea blended Hadcrut3.
That’s correct. The GISS, NCDC and Lugina analyses in the graphic I posted above are also land only, so it is a comparison of like with like.
We know that the analysis of SSTs is different for GISS and Hadley (GISS employs satellite measurements), and also that coverage differs (GISS extrapolates the Arctic), so I’m not particularly puzzled by short term variations.
Also, the ‘popular’ comparison of GISS/Hadley trends since 1998 is as much a revelation of the fact that Hadley ‘overshot’ in 1998 as it is of any positive inclination in the recent GISS trend. It seems rather obvious to me that we need to look at these records over sufficient periods of time for their systemic inclinations to be assessed sensibly.
Lucia: HADSST data has a step change (increase) after 1997.
It’s visible in the difference with ERSST.v2 data:
http://i34.tinypic.com/2zswhac.jpg
And in the difference with ERSST.v3b data:
http://i33.tinypic.com/2j11y6r.jpg
And in the difference with OI.v2 STT data:
http://i37.tinypic.com/ighm9s.jpg
And here’s a longer term difference with ERSST.v2 data:
http://i28.tinypic.com/2ronf9w.jpg
The reason appears to be that the Hadley Centre changed data sources for SST in 1998.
“The SST data are taken from the International Comprehensive Ocean-Atmosphere Data Set, ICOADS, from 1850 to 1997 and from the NCEP-GTS from 1998 to the present.”
And from the ICOADS webpage:
“ICOADS Data
“The total period of record is currently 1784-May 2007 (Release 2.4), such that the observations and products are drawn from two separate archives (Project Status). ICOADS is supplemented by NCEP Real-time data (1991-date; limited products, NOT FULLY CONSISTENT WITH ICOADS).” [Emphasis added.]
The change of data set also helps explain why HADCRUT3 Global, Northern Hemisphere, and Southern Hemisphere data sets consistently run high since the 1997/98 El Nino when compared to other land and sea surface temperature data sets.
The links for the Hadley Centre and ICOADS quotes are in my post here:
http://bobtisdale.blogspot.com/2008/12/step-change-in-hadsst-data-after-199798.html
Regards
Bob–
Thanks. Some of the big changes do seem to be traceable to known data processing or measurement issues. If you look at the graph showing differences above, you’ll see a big downspike followed by an upspike in the early forties. There is a fair amount of wiggling in that region. That could be partly the “buckets” issue.
I know some people suspect I’m a pessimist, but I don’t think these sorts of glitches can be removed from the data.
Oops, I forgot to add. GISS used HADSST data until November 1981 then switched to OI.v2 SST data in December 1981.
Simon,
Thanks for the clarification. It is hard to keep track when different data sets go by the same name!
Lucia,
To state my point a little better, we do have the model projections out into the future. And we do know they sharply diverge from the linear trend of the model mean.
But I do realize that was not the purpose of your graph. What would be interesting is to take the model projections from 1900 out to say 2100, and then plot the GISS and Hadcrut trend of 0.66 to 0.73C per century, for comparison purposes only of course.
Kazinski– If we extrapolated the observed trend, then we would be implying a claim the trend will persist!
Sure, but isn’t that the best information we have now?
I guess that depends on who you ask. Over at RC they would say that the best information on what will happen in the future is the mean model. And I suppose you could find others who would say the observed trend since 1998 would be the best information available, that might be the other extreme.
But looking at your graph above, both GISS and Hadcrut are not inconsistent with their linear trend since 1900 (of course I’m eyeballing it without the benefit of seeing the CI). That can’t be ignored.
Lucia: Since HADCRUT and GISTEMP use the same SST data before 1981, I would think that the 1930s and 1940s wiggles are more a function of the different methods of “smoothing” of land surface temperatures, not the bucket adjustment.
Zeke Hausfather [10142]
Lucia’s comment about Hansen’s “transparency” in her reply to you [10143] is very much to the point. It took a major effort including repeated prodding via the US Congress, not to mention a couple of very public exposures at CA of the proverbial “hand in the data processing cookie jar”, to coax GISS this far. As I argued yesterday on another Blackboard thread, Hansen’s very public AGW/ACC advocacy and the use of the RealClimate website for messaging remain serious credibility issues.
Unfortunately, the UK Met Office Hadley and their partners at the University of East Anglia have not [yet] been subjected to this type of drive for transparency, and until they are somehow forced to come clean on their data gathering and in particular their “data processing” methods, HadCru data should be viewed with a healthy dose of skepticism.
By way of example, one of their key figures, Phil Jones, Director of the East Anglia Climatic Research Unit, like Hansen an avowed AGW/ACC advocate, is currently the subject of a Freedom of Information procedure to the British Government for his refusal to archive and make public the core data and algorithms used in his research published in “peer reviewed” journals. Never mind that this begs the question about which peers reviewed exactly what before the paper was accepted for publication, the data and algorithms remain unavailable.
It also so happens that Jones is closely associated with Mann, a confirmed multiple offender in the same “lack of transparency” department, which to put it mildly is not helpful in terms of one’s credibility. Suffice it to say that scientific transparency is not at the top of HadCru’s list of things to check. Bottom line: lack of transparency leads to lack of scientific credibility.
PS: Also in the scientific credibility department: Jones is on the record [as late as December, 2008] for his view that La Ninas are in fact “masking” global warming that in his view remains prevalent… My understanding is that the La Nina and El Nino phenomena are an integral part of global climate, not something you can parcel out at will and ascribe special roles whenever it suits your argument.
Kazinski [10165 and 10172]
Caveat emptor: beware of models. Their value has been greatly overstated by some. Viewed with a kind eye, they are best treated as very limited interpretations of our understanding of extremely complex phenomena [like global climate, for instance 🙂 ]
By way of example, you may want to have a look at a recent posting at Roger Pielke Sr’ weblog: it would seem that actual ocean warming is out by a factor of 2.5 compared to the GISS model.
Another interesting feature of GISS is how sensitive it is (or at least it has been for the last 10 years or so) to the level of spatial smoothing.
David Smith pointed out this interesting difference in trends, depending on whether one looks at the 250 km smooth or the 1200 km smooth.
http://www.climateaudit.org/?p=4667#comment-317252
It seems that the 1200 km smooth gives the ocean more “land character” than the 250 km smooth. At least as far as the last 10 years is concerned, this brings GISS much closer to the satellites and HadCRUT3.
The only way I have found to do this is go month-by-month at this GISS site.
http://data.giss.nasa.gov/gistemp/maps/
I don’t know whether there is an extensive tabulation of the 250 km smoothing monthly anomalies anywhere.
John M: For 250km smoothed GISTEMP data, go to the KNMI Climate Explorer website and select GISS (250km), the second field:
http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere
On the next page, select your latitudes and longitudes. Click “make time series”. The third graph down is anomalies. Click on “raw data” above it and presto! There’s your GISTEMP data with 250km smoothing.
Bob Tisdale [10178]
Having done that, how does it compare with the now 30 year run of satellite data [eg RSS and UAH]?
Bob Tisdale (Comment#10178)
Thanks! Interesting site.
It’s late, and I’m using Open Office, so I don’t completely vouch for this, and this may fall into the category of more smoke than light, but here’s what I did:
I went to the site, followed your instructions, and left lat/long blank. Got a reasonable looking curve, changed the base period to 1961-1990 and clicked on raw data.
This led me to this page:
http://climexp.knmi.nl/data/igiss_temp_0-360E_-90-90N_n19611990a.txt
After importing to Open Office (and assuming the years start in January, i.e. “2008” = Jan 2008, “2008.08” = Feb 2008, etc.), here’s what I get:
Jan 2000 – Nov 2008: slope = 0.047/decade (David Smith got 0.052 using 5 month avgs)
Jan 1900 – Dec 2008: slope = 0.057/decade
Tetris, with regard to your question:
Jan 1979 – Dec 2008: slope = 0.131/decade
Interestingly, since I’m not sure my date assignments are correct, when I shift the 2000-2008 period by just one month (Feb 2000 or 2000.08 to Dec 2008 or 2008.92), the slope goes from 0.047/decade to 0.028/decade.
Hmmm….better check that out in the morning.
Like I said, it’s late, so maybe it’s all crap. 🙂
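For what it’s worth, here is a rough sketch of the same slope calculation done in Python instead of Open Office. It assumes the Climate Explorer “raw data” file is two whitespace-separated columns (decimal year, anomaly) with ‘#’ comment lines, which is how I read the description above; check the actual file layout before trusting any numbers.

```python
# Sketch: slope (C/decade) over a chosen window from a Climate Explorer raw-data file.
# Assumes two whitespace-separated columns (decimal year, anomaly) and '#' comment
# lines; verify the real file layout first, it may differ.
import numpy as np

data = np.loadtxt("igiss_temp_0-360E_-90-90N_n19611990a.txt", comments="#")
year, anom = data[:, 0], data[:, 1]

def slope_per_decade(t0, t1):
    m = (year >= t0) & (year < t1)
    return 10.0 * np.polyfit(year[m], anom[m], 1)[0]

print("2000-2008:", round(slope_per_decade(2000.0, 2009.0), 3), "C/decade")
print("1979-2008:", round(slope_per_decade(1979.0, 2009.0), 3), "C/decade")
```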
Hmmm…
Funny how math works.
2000 0.1471
2000.08 0.3999
2008.83 0.4101
2008.92 0.3413
With the slope so close to zero, shifting one month down could indeed lead to that change.
Lucia,
Did you normalize the GISS and Hadley data to the same base period before you did the difference? If I did it right it looks like Hadley is consistently warmer than GISS (if you smooth the data before differencing), with the difference in the early part of the century being smaller.
JohnM [10187]
As the artist would say, it’s all a matter of perspective. Which, as we know in this field, is something that can be altered at will using either statistics or cherry picking or both, if necessary.
Lucia,
I followed the procedure you outlined with a few differences.
1. I took GISS/HadCrut on 1900-2008.
2. Adjusted HadCrut to have the same baseline as GISS.
3. Calculated annual averages.
4. Used uncertainty estimates for GISS and HadCrut to run a monte carlo simulation to get a more accurate fix on the differenced time series.
I found the trendline to be on the verge of being statistically significant. The critical t-stat is about 1.96. The t-stat for a zero trend is between 1.88 and 1.93. The reason for the spread in t-stats is that in Hansen’s 2006 paper he states the estimated (2-sigma) error in GISS temp “decreases from 0.1°C at the beginning of the 20th century to 0.05°C in recent decades.” The t-stat depends on what you consider “recent decades” to be. I’ve run the simulation assuming the change in error occurs in 1950, 1960 and 1970. Considering that the HadCrut errors change dramatically with time and that I modelled the GISS errors as a simple step function, I would take these preliminary results with a few grains of salt.
Chad–
When you say you did this (your step 4, the Monte Carlo using the uncertainty estimates):
I have two questions:
1) You mean you used the rms of the residuals and their lag-1 autocorrelation, then assumed that the residuals were AR(1) noise, right?
2) When you ran the Monte Carlo, did you create realizations for GISS and HadCrut with independent “noise” for the Hadley and GISS realizations?
If you read my comment #10157, using “independent” noise is the equivalent of answering question 1. But, it doesn’t answer question 2– which is a different question.
If you are trying to learn whether the measurement process seems to be introducing any bias at all, question 2 is the more powerful question to test. If we find the noise does introduce bias, then it is worth looking at question 1, as we would like to know how bad the bias is compared to the actual trend we wish to detect. (My answers are: There is bias. The magnitude is not enough to make us doubt there is a positive trend.)
Now, to explain question 2, and put it in the context of monte-carlo:
The temperature in any given month for the HadCrut and GISS are strongly correlated because most of the “noise” is from the earth’s weather noise, not the measurement. (If measurements were perfect, the correlation would be 1).
So, for example: Both GISS and Hadley were “up” in 1998 because the earth actually experienced an El Nino.
If you are going to do the Monte Carlo, you need to make sure the method includes this cross-correlation in the “noise” you use to drive the error. So, you need your noise term to have a “weather noise” component shared between the two, and then a “measurement noise” component for each. The “measurement noise” for GISS and HadCrut would be different.
Or, you could just subtract GISS from HadCrut, analyze that noise, and run the Monte Carlo on the noise properties of the difference.
If you set the cross correlation to zero, you answer question (1), not (2). Question 1 is a valid question. But it’s a different one from question (2). (Experimentalists trying to improve technique always ask both.)
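Here is a rough sketch of what such a cross-correlated Monte Carlo could look like: a shared AR(1) “weather noise” term drives both synthetic records, each record gets its own independent measurement noise, and the trend of the difference is computed for each realization. Every numeric parameter below is a made-up placeholder, not a value fitted to GISS or HadCrut.

```python
# Sketch of a Monte Carlo with shared weather noise plus independent measurement noise.
# All numeric parameters are illustrative placeholders, not fitted to GISS/HadCrut.
import numpy as np

rng = np.random.default_rng(0)
n_months, n_runs = 1308, 1000           # Jan 1900 - Dec 2008
phi, sd_weather = 0.6, 0.10             # AR(1) weather noise (placeholder values)
sd_meas_giss, sd_meas_had = 0.03, 0.04  # independent measurement noise (placeholders)

def ar1(n, phi, sd):
    x = np.zeros(n)
    e = rng.normal(0, sd * np.sqrt(1 - phi**2), n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

t = np.arange(n_months) / 12.0
slopes = np.empty(n_runs)
for k in range(n_runs):
    weather = ar1(n_months, phi, sd_weather)        # shared between both records
    giss = weather + rng.normal(0, sd_meas_giss, n_months)
    had  = weather + rng.normal(0, sd_meas_had,  n_months)
    slopes[k] = np.polyfit(t, giss - had, 1)[0]     # trend of the difference, C/yr

# With a shared weather term the difference trends cluster tightly around zero;
# with independent weather terms (question 1) the spread would be much wider.
print("2.5%, 97.5% of difference trends (C/century):",
      np.percentile(100 * slopes, [2.5, 97.5]))
```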
BarryW–
To plot the difference, I just subtracted. That won’t affect the slope on the trend line, but affects the intercept. The average difference is not zero.
Sorry, never mind; operator error in my previous comment: BarryW (Comment#10189)
The smoothed data shows Hadley less than GISS for approximately the 1900-1920 time frame and greater than GISS after about 1960. Depends somewhat on the smoothing parameters used. Hadley has started to fall below GISS since about 2005. The largest differences seem to be prior to 1920; if you take that out, the delta seems very small IMHO.
If you start at 1880 (GISS start date) the trends are very close (G = 0.005652/yr vs H = 0.00588/yr).
Lucia-
Answer to question 1: No.
Answer to question 2: I used data on Hadley’s website about the estimated error in their temp series. From that I was able to get sigma to generate random realizations. GISS is another matter. I don’t have an actual time series for GISS so I had to try to make one from a vague description of how the series changes in time. I have a post up explaining how I got the error data.