Atmoz recently computed the difference between RSS and UAH monthly temperature anomalies and reports two things:
- UAH and RSS temperatures seem to drift apart since 1979. Atmoz sees a distinct switch in 1992. (I see it too. It’s noteworthy, and worth discussing. However, I have no particular ideas, so I won’t be discussing that today.)
- There is a peak in the energy spectrum at a period of 1 year, as illustrated a bit further down. I’m going to suggest a possible cause in this post.

Original from Atmoz
Atmoz comments:
The plot clearly shows that the difference ‘noise’ clearly has a significant spike in the power spectrum at a frequency of 1 cycle per year. Neither the RSS or UAH data should have a significant peak in power for a period of 1 year. Therefore, the difference shouldn’t have a significant peak for a period of 1 year. Right? Or am I missing something fundamental here?
Here’s my guess: the peak in energy at 1 year is a left-over effect of the uncertainty in the process of calculating anomalies from noisy data. Let’s consider how the anomalies are calculated.
- A data service, say RSS, picks a 20-year period. They then compute the average temperature for each month over those 20 years. (RSS uses 1979 through 1998.) This value could be called a “baseline temperature estimate”. Bear in mind that, given measurement uncertainty and weather, even if climate were stationary, the calculated average might not be a perfect match for what might be called “the true climate mean”. So, each month’s “baseline temperature estimate” is equal to the “true climate mean” for the month, plus some “error for the month”.
- To create monthly anomalies, the service then subtracts that “baseline temperature estimate” from the value for each month. This means that each monthly temperature is adjusted by both the “true climate mean” for the month and the “error for the month”.
- Note that, whatever the “error for the month” may be, it is subtracted year after year after year. So, this error pattern recurs with an annual cycle. Although the purpose of subtracting the monthly averages is to take out the energy from the true underlying annual variations, the procedure can only reduce the amount of energy at 1 year. It cannot eliminate the energy entirely. And there it is!
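To make the mechanism concrete, here’s a minimal sketch (my own illustration with made-up numbers, not either service’s actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, noise_sd = 30, 0.1   # assumed: ~0.1 C of monthly weather/measurement noise

# A hypothetical "true climate mean" seasonal cycle
true_climate = 10 + 5 * np.cos(2 * np.pi * np.arange(12) / 12)
temps = np.tile(true_climate, n_years) + rng.normal(0, noise_sd, 12 * n_years)

# Baseline: the mean of each calendar month over the first 20 years (a la RSS 1979-1998)
baseline = temps[:12 * 20].reshape(20, 12).mean(axis=0)

# The baseline error (one fixed 12-value pattern) gets subtracted every single year
baseline_error = baseline - true_climate
anomalies = temps - np.tile(baseline, n_years)

print(np.round(baseline_error, 3))  # this same pattern recurs with an annual cycle
```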
The question remains: How much energy might we find in the annual signal?
Well, it’s difficult to say, as the magnitude of the error in the estimate of the “true climate mean” is random. However, when we fit the monthly GMST data to straight lines (so as to detrend the warming signal), we tend to get unexplained variabilities of approximately 0.1 C each month. This suggests that when we determine the “baseline temperature estimate” for any month, the uncertainty in that value, estimated using 20 values of that month, is likely to have a standard error of 0.1 C/√20 ≈ 2.2×10^-2 C.
If the error in the calculated “baseline temperature estimate” had been miraculously, perfectly sinusoidal, then we’d expect to find the square of the amplitude of the “false” energy in the annual mode equal to (0.1 C)^2/20 = 5×10^-4 C^2. (This is relative to the “true” signal, which should have no energy at a period of 1 year.)
However, if we recognize that the errors for each of the 12 months may be uncorrelated, we might conclude that the “false” energy introduced might be in the vicinity of (0.1 C)^2/(20×12) ≈ 4×10^-5 C^2.
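These back-of-the-envelope numbers are easy to check:

```python
import math

sigma, n_years, n_months = 0.1, 20, 12      # values assumed in the text above
print(sigma / math.sqrt(n_years))           # baseline standard error: ~0.022 C
print(sigma**2 / n_years)                   # sinusoidal worst case: 5e-4 C^2
print(sigma**2 / (n_years * n_months))      # uncorrelated months: ~4.2e-5 C^2
```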
Eyeballing Atmoz’s graphs, it looks like the amount of energy is about 3×10^-5 C^2, which is of the same order of magnitude as 4×10^-5 C^2. So, maybe this is the cause. Or not. 🙂
Importance to Hypothesis Testing
As readers know, I keep harping on why there are advantages to averaging over the NOAA, GISS, HadCrut, RSS and UHA data sets when trying to test hypotheses about climate trends. One of the initial difficulties in applying the hypothesis test was the “color” of the residuals that remain after trend analysis. Ordinary least squares requires “white” residuals; Cochrane-Orcutt requires “red” residuals.
Unfortunately, any systematic error with a period of 1 year is neither “red” nor “white” and, if sufficiently strong, can make it impossible to use either method.
Interestingly enough, averaging over 5 data sets, each of which is defined with its own “baseline temperature estimate” for each of the twelve months, tends to reduce this “false” spectral feature. The degree of the reduction is difficult to anticipate, but would likely fall between “no reduction” and a decrease by a factor of √5. Though this may seem small, it can be crucial when applying particular statistical tests, which can be especially sensitive to ‘noise’ with longer periodicity.
In short, this specific example illustrates one of the reasons why averaging permits us to reduce unnecessarily large error bars and improve our ability to detect the “signal” hidden in the “noisy data”.
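The √5 figure is just the usual behavior of averaging independent errors. A quick sketch, assuming (optimistically) that the five baseline errors really are independent:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sets, se = 5, 0.022   # assumed: 5 data sets, ~0.022 C baseline standard error each

# One fixed 12-month baseline-error pattern per data set
errors = rng.normal(0, se, (n_sets, 12))

single_rms = np.sqrt((errors**2).mean())                 # typical error in one data set
averaged_rms = np.sqrt((errors.mean(axis=0)**2).mean())  # error left after averaging
print(single_rms / averaged_rms)   # ~sqrt(5) when the errors are independent
```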
Update: Spence_UK generated 4 charts. I’m putting them here for now so people can discuss them. Click for larger. Also, if you want to discuss further, you can paste images in comments; if you don’t know how, I’ll fix the html.
[Spence_UK’s 4 charts]
Data: data from Spence_UK
My memory is fuzzy on this, but I believe the step change in 1992 is caused by a new satellite with little or no measurement overlap with the previous satellite. UAH and RSS have different methods of combining the new data with the old data.
A very minor point — it’s “UAH” not “UHA”. 🙂
Just a suggestion, I don’t know if it has any merit. UAH global temperature covers latitudes 82.5S to 82.5N. RSS covers latitudes 70.0S to 82.5N. The remaining difference (82.5S to 70.0S) will have a significant annual variation due to the axial tilt of the earth (i.e. the southern polar seasons).
The 1992 step is likely to be related to the merging of data from different satellites and how they are calibrated to one another.
Thanks John! (I hate acronyms. I hate spelling in English. I’d move back to El Salvador just to avoid spelling in English, but I don’t remember how to speak Spanish. GRRR!)
Spence_UK: Your reason would explain differences before the two agencies adjust for anomalies. But they supposedly find the average for the month and subtract. So, in principle, the portion of the annual variation related to real physics should be removed. That’s either the ‘weird’ or ‘convenient’ thing about anomalies (depending on your point of view).
It would be interesting to know the values of the data before the monthly average is removed. Those tend to be difficult to find. (In this case, I haven’t looked though.)
To further test my idea, I wanted to see if this 1-year noise was more or less prominent depending on whether or not the “baseline” periods overlapped. I hunted around to find UAH’s time period for calculating their standard, but I didn’t find it. (That said, I didn’t hunt very hard.) But I know that once it’s computed, it’s applied to all data. So, the uncertainty can introduce features that repeat year after year.
It will be interesting to see if we read any other reasons for this. I haven’t seen any suggestions at Atmoz yet, and I looked to see if any other blogs had commented. But, it’s an interesting thing he noticed.
Fair point, but the estimates of the anomaly correction are based on a different data set (i.e. latitudinal extent); as estimates, they are noisy, and will have one-year periodicity. I’m not claiming this is the reason, just a suggestion.
RSS certainly make “climatology” maps available with raw temperature data. Unfortunately in their ASCII data sets they don’t seem to include it. In their binary data sets here they have it. Data reading routines here. (No R script, unfortunately)
Addendum to the last post: I originally skim-read your article and noted that it did not include the different spatial extents. On reading your article more carefully, I think we are making the same point, except you give the example of possibly differing temporal extent whereas I gave the example of differing spatial extent, both of which could have a similar effect. Seems we are of a similar opinion.
Spence– Ok.
Yes, basically, I think the fact that they can’t get the baseline perfectly, and subtract a different imperfect baseline each month, means you still have some energy left in the “annual” period. Evidently, based on what Atmoz showed, it’s not insignificant.
One big difference is in the tropics. Prior to 1989, RSS data shows consistently lower tropical temperatures than UAH. After 1989, RSS shows the tropics consistently higher than UAH. i.e. the RSS data shows a pronounced warming trend in the tropics while the UAH data does not.
However, over the last few months they have converged again – and RSS and UAH both show the tropics as being the coldest in 19 years.
Hmmm. I’m not so sure about this one now.
I grabbed some of the satellite data kicking around on my hard drive (UAH about six months old, RSS a couple of months old), differenced and fft’d the data for the NH. I used Jan 79 to Dec 06 (I didn’t have a full 2007 in the UAH data set, and using full years’ worth gives me a sample spot on 1 year from a simple FT). Both data sets use the same latitude extent for the NH, so I thought these would be more comparable – and no spike occurs at 1 year. Of course, the global spike is only around 5x amplitude, so it is possible that the spike just happens to be at or below the noise floor, but it seemed to indicate that perhaps there was something in the latitude differences.
So then I plotted the global data. Hmmm, not much of a spike there either. What? Double check calculations… try a few tricks (oversampling, different window sizes), nope, I really can’t convince myself there is a spike in the global data either. There is a mix of peaks around the 1-year cycle in the data I checked, but there are similar peaks in the range of 3-year through 10-year wavelengths as well. Also, the peak at the 1-year cycle isn’t actually at 1 year but shifted slightly to the left… that doesn’t sound right. I know Atmoz removed these lower frequency terms with his running mean, but I see no point in this – they are not large enough to contaminate the rest of the data with sidelobes (and even if they were, you could rule that out through windowing). I used a range of Dolph-Chebyshev windows from 40dB to 100dB sidelobe rejection, with no real differences visible (beyond the expected loss of resolution).
A quick experiment trying repetitive noise (replicating the residual error from the monthly anomaly correction), iid Gaussian, found that (as expected) a peak exactly on 1 year occurs (not just off 1 year!), plus some very strong harmonics at 0.5-year, 0.333-year and 0.25-year wavelengths. Again, no sign of this in the processed data.
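A rough reconstruction of that experiment (a sketch only; the 0.02 C error level and the 28-year span are assumptions, and this is not Spence_UK’s actual code):

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 28                            # Jan 79 to Dec 06, as above

pattern = rng.normal(0, 0.02, 12)       # a fixed iid Gaussian "correction error" per month
pattern -= pattern.mean()               # drop the DC component for a cleaner spectrum
signal = np.tile(pattern, n_years)      # the same error repeats identically every year

power = np.abs(np.fft.rfft(signal))**2 / len(signal)
freq = np.fft.rfftfreq(len(signal), d=1/12)   # frequencies in cycles per year

for f, p in zip(freq, power):
    if p > 1e-12:
        print(f"{f:4.1f} cycles/yr  power {p:.2e}")
# All the power lands exactly at 1, 2, 3, ... cycles/yr: the 1-year peak plus
# harmonics at 0.5, 0.333 and 0.25 years (and shorter).
```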
I just quickly tried this stuff out yesterday so it is possible I’ve made a mistake, but in short, I am not convinced there is a statistically significant peak in the data sets I used at 1 year periodicity at all. Certainly not a robust one, anyway. (Plotting power on a linear Y-scale can tend to give a false impression of significance – I prefer a log Y-scale for this kind of analysis).
BTW I managed to download the RSS data (climatology and anomaly). I used the MATLAB script provided to read the data – the organisation is pretty simple: 4-byte (single-precision) floats arranged in a 3D array (144x72x324) giving longitude x latitude x month – the first eleven months contain nothing, as the file starts at Jan 1978. The data file only runs to around 2005, as they don’t seem to update it as frequently. Longitude and latitude steps are 2.5 degrees.
For some reason my PC corrupted negative numbers in the data – so the anomaly data was half garbage – but there are no negative numbers in the climatology (all in K) and this read in fine. “Missing” gridcells are NaNs. The anomaly data is per gridcell, and I confirmed (from the positive numbers!) that I could exactly replicate the anomaly data by subtracting the 1979-1998 monthly mean for each gridcell, as you surmise. By then taking an areally weighted mean of the gridcell anomalies (weighted by cos latitude at the centre of each gridcell), I get results which match the RSS global series to the precision of the text files.
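A rough numpy sketch of that read, under the layout described above (the filename, byte order and reshape order are guesses and would need checking against the RSS routines; the corrupted negatives sound like a byte-order issue):

```python
import numpy as np

# Hypothetical filename; '<f4' is little-endian single precision.
# If negative numbers come out garbled, try big-endian '>f4' instead.
raw = np.fromfile("rss_climatology.bin", dtype="<f4")

# 144 x 72 x 324 (lon x lat x month) written column-major (MATLAB-style)
# reads into numpy C order as month x lat x lon:
data = raw.reshape(324, 72, 144)

lat = np.arange(-88.75, 90.0, 2.5)            # 72 gridcell centres, 2.5-degree steps
w = np.cos(np.radians(lat))[None, :, None]
w = np.broadcast_to(w, data.shape)

valid = ~np.isnan(data)                       # "missing" gridcells are NaNs
global_mean = np.nansum(data * w, axis=(1, 2)) / (w * valid).sum(axis=(1, 2))
```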
Spence– interesting! So, maybe it’s not there? Do you have plots? If you email them to me, I could post them.
Don’t know if this provides any detail related to the discussion:
http://wattsupwiththat.wordpress.com/2008/03/08/putting-a-myth-about-uah-and-rss-satellite-data-to-rest/
The jump is from differences in merging satellite data and is best discussed in Christy et al., 2007. My two cents on the one-year cycle is that it may be from the diurnal and hot-target temperature corrections used. These corrections do have an annual cycle in them, and as each group uses a different process to determine diurnal corrections (which is used to determine hot-target corrections), it is reasonable that in the difference series Atmoz created you would see the cycle. To support this hypothesis, plot ocean and then land: you will see that the land’s oscillation has greater amplitude than the ocean’s, indicating that the diurnal correction is involved. In addition, it appears the amplitude is increasing in time. This would support our suggestion that if the diurnal temperature range was decreasing with time, and a correction was developed using a time frame early in the series, it would overestimate the diurnal correction later in the time series. Just food for thought.
Robb
Christy, J. R., W. B. Norris, R. W. Spencer, and J. J. Hnilo (2007), Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements, Journal of Geophysical Research-Atmospheres, 112.
I created a Google presentation with graphs showing UAH vs. RSS Global, tropics, and an odd rotation which appears to be applied to the RSS tropics data.
I didn’t do anything funny to get the plot.
Monthly global temperature data used: RSS and UAH.
I took the mean out of both available here (shouldn’t matter). Then I subtracted RSS from UAH. The long-term average was removed. I used a Hanning window and then took the Fourier transform and plotted it. I get essentially the same plot (but without the lower frequencies) if I don’t remove the long-term average. I get essentially the same plot if I don’t use the Hanning window. I get the same plot with a log y-scale.
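In numpy terms, that processing amounts to something like this sketch (reading the two text files into arrays is omitted, and the function name is mine):

```python
import numpy as np

def difference_spectrum(uah, rss):
    """Power spectrum of UAH minus RSS, as described above."""
    diff = np.asarray(uah) - np.asarray(rss)     # monthly anomalies, same period
    diff = diff - diff.mean()                    # remove the long-term average
    windowed = diff * np.hanning(len(diff))      # Hanning window
    power = np.abs(np.fft.rfft(windowed))**2 / len(diff)
    freq = np.fft.rfftfreq(len(diff), d=1/12)    # cycles per year (monthly data)
    return freq, power
```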
Hmm… more interesting.
As noted above, I was using an old UAH dataset. So I downloaded the current UAH dataset and compared the 6-month-old UAH dataset with the current one – and suddenly I see a big spike at the 1-year cycle. Note this isn’t comparing UAH with RSS, but UAH (6 months back) with UAH today!!!
Furthermore, because I’m comparing UAH to UAH, the noise power is much, much lower (around 1e-6 C^2, driven by quantisation in their reported data set), I get 1/2 yr, 1/3 yr and 1/4 yr harmonics as clear as day, just as I would expect from Lucia’s explanation. Have UAH changed their baseline recently?
Lucia, I did send you mail to a random e-mail address I had floating about for you, but from a dodgy freebie e-mail account that is probably at least 50% likely to get trapped by a spam filter. I might send an update with latest plots shortly…
RSS-UAH (Land and Ocean) shows that for the periods of interest here, the oscillation is larger over the ocean than over the land, in contrast to Robb’s comment above.
Spence– I’ll check my spam filters. Even if my local machine gets it, it’s somewhere. 🙂
Atmoz– Does it appear to be larger after 1995 also? It looks that way for the ocean measurements to me.
Lucia,
I’ve included in the e-mail the old UAH data set. Just try plotting (new UAH) – (old UAH) for the overlap period. It is quite revealing.
Atmoz,
The change is clearest in current UAH vs. old UAH. Comparing to RSS just blurs the issue with a bunch of extra noise. I e-mailed a copy of the old file I used to Lucia; hopefully, if she can track it down, she may be able to post it up. Otherwise the Wayback Machine might have something – I downloaded the previous file on 13 Nov 2007.
I tried plotting the UAH cumulative sum for each month from Jan – Dec, starting at 1979. The cusums zero at year 20 as expected, so there is no change to the actual baseline date range. After year twenty, the cusums have a substantial positive gradient. The old UAH and RSS data seem to keep a consistent spread; the new UAH seems to diverge somewhat. Since the baseline is the same, I assume UAH have changed some detail of their processing which has had this side effect. I know they’ve been looking at the annual cycle (due to eccentricity of the earth’s orbit, etc.) lately – e.g. this discussion at RPSr’s place – perhaps they have changed something predicated on that analysis?
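For what it’s worth, that cusum check can be restated in a short sketch (assuming a monthly anomaly series starting Jan 1979, with length a multiple of 12):

```python
import numpy as np

def monthly_cusums(anoms):
    """Cumulative sum of anomalies for each calendar month, Jan-Dec.

    Rows are years starting 1979. With a 1979-1998 baseline, row 19
    (year 20) should be ~zero for every column, because the baseline
    mean of each calendar month was subtracted out.
    """
    by_month = np.asarray(anoms).reshape(-1, 12)   # year x calendar month
    return np.cumsum(by_month, axis=0)
```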
I messed up on making the top panel the first time around. It’s fixed now, with oscillations at or around 1 year. It’s hard to tell by inspection which has larger magnitudes. That might require a statistician.
In the bottom panel, the oscillations seem to be getting larger after 1995. That’s just by eye, but the first couple have magnitudes ~0.05 while the latter ones have magnitudes ~0.1.
Spence,
I don’t have the old UAH data. Perhaps lucia can send my email address to you. Or use the contact form at my site.
Atmoz – Lucia has managed to fish my e-mail out of the spam folder and has the old UAH file, so she should be able to post it up or forward it on.
Spence– Sorry for the delay. It’s up.
Atmoz,
I’m a bit confused by your comment, or possibly I wasn’t clear on what I was trying to say, but your plots above clearly show that from min to max in the one-year cycle there is a change of ~0.2 deg or greater in the land difference and ~0.1 deg in the ocean difference. I have a plot of the land diff and ocean diff on the same graph to show what I’m talking about, but am unable to post it.
Robb
Robb,
See comment 2148. When I first made the plots, I made a mistake in the top panel, and didn’t figure it out until after I posted. You wouldn’t think it would be hard to subtract one time series from another, but somehow I messed it up. I should have been more careful, and checked before posting.
Well, you are in good company! I have a note on one of my early posts (before anyone read this blog) with an admission of a “very stupid mistake”. Yes… I’d subtracted wrong. The sorts of mistakes one makes with codes are different from the sorts one makes with hand calcs. I learned this long ago, but still make mistakes.
I now also sometimes plot nearly everything, have fiduciary “check” cells and do all sorts of stuff… and I still make mistakes sometimes.
I find the best way to find the errors is to click “publish”… go do laundry, and come back in an hour. Mortification sets in… I fix and hope no one read it! (One of the benefits of low readership is this can work. Unfortunately, leaving things as drafts just doesn’t have the same effect as clicking “publish”. Also, for this to work, you’d better not ping someone you are criticizing! 🙂 )
Atmoz,
I should have read a little more carefully!! Sorry for the confusion! FWIW I just plotted one over the other and did an eyeball, better if you just plot 1990’s and later.
Robb
30 Celsius and -40 Celsius. What is ΔT? Pls
Amena, I’m not sure what you mean by “what is ΔT?” The graph shows a value that is in units of temperature squared. Does that answer your question?
I noticed the same effect when I plotted the difference series between tropical RSS and tropical UAH in connection with a recent post. It has a very marked annual cycle, i.e. annual power. There is also a noticeable difference in relative trends by month. For June, there is little trend of RSS relative to UAH, but for DJF there is a strong drift upwards in RSS relative to UAH.
If you have different relative drifts by season, then my guess is that, when combined with an anomaly procedure as hypothesized by Atmoz above, the combination would yield the seasonal pattern in differences.
Lucia
Comment 2113 is not me. Could you remove it – or ask the person who posted it to choose a different handle please?
JM–
Sometimes people have the same handles or names. (Like, for example, Jim or Bob.) I’m not going to babysit ‘handles’. If you wish to have a unique handle, you’ll have to use something more distinctive.