Trends in Global Mean Surface Temperature: Bars and Whiskers Through May.

May temperatures are in! So, it’s time to estimate the trends based on the 89 months since January 2001, and take a look at how various other trends are tracking. Here is a quick graphical summary of results for data merged from five sources: GISS, HadCrut, NOAA/NCDC, UAH/MSU and RSS.

IPCC Projections Falsify Through May.

IPCC trends as stated in the AR4.1 Observations based on merged data.

The squares indicate the most likely climate trend consistent with the monthly weather data. The vertical whiskers indicate the 95% confidence intervals, computed using the methods described below. Caveat: I am not certain these are correct, and I have been exploring other methods of estimation. However, if they should be larger or smaller than shown, the change would apply to all tests.

OLS uncertainty intervals were computed by adjusting the standard deviation and estimating the “t” value using an effective number of degrees of freedom based on the lag-one autocorrelation, adjusting from Neff = N(1 − ρ)/(1 + ρ) to Neff = N(1 − ρ − 0.68/√N)/(1 + ρ + 0.68/√N). (See Lee & Lund.2, 3)
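
Purely as an illustration of the mechanics, here is a minimal Python sketch of that calculation (it is not the script actually used for this post): fit an OLS trend, estimate the lag-one autocorrelation of the residuals, convert N to an effective sample size, and widen the confidence interval accordingly. The function name, the floor on Neff, and the use of Neff − 2 degrees of freedom for the “t” value are my own illustrative choices.

```python
# Minimal sketch of an autocorrelation-adjusted OLS trend test (illustrative;
# not the actual analysis script). Widens the usual OLS confidence interval
# using an effective sample size in the spirit of Lee & Lund, with the
# 0.68/sqrt(N) terms as reconstructed in the text above.
import numpy as np
from scipy import stats

def ols_trend_ci(anomalies, alpha=0.05):
    """Return (trend per time step, lower bound, upper bound)."""
    y = np.asarray(anomalies, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)

    # Lag-one autocorrelation of the residuals.
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]

    # Effective sample size: classic N(1-rho)/(1+rho) with the small-sample
    # 0.68/sqrt(N) correction; floored to keep the degrees of freedom positive.
    n_eff = n * (1 - rho - 0.68 / np.sqrt(n)) / (1 + rho + 0.68 / np.sqrt(n))
    n_eff = max(n_eff, 3.0)

    # Ordinary OLS standard error of the slope, inflated for the reduced
    # effective number of independent points.
    se_ols = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))
    se_adj = se_ols * np.sqrt(n / n_eff)

    t_crit = stats.t.ppf(1 - alpha / 2, df=n_eff - 2)
    return slope, slope - t_crit * se_adj, slope + t_crit * se_adj
```

Multiplying the returned per-month slope and interval by 1200 (months per century) gives numbers in C/century comparable to the whiskers in the graph.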

Examining the graph we see that with 95% confidence we can say:

  1. Since 2001: The FAR central tendency of 3.0 C/century, the AR4 central tendency of 2.0 C/century and the SAR/TAR central tendencies of 1.5 C/century all exceed the maximum trend consistent with the data, based on either classic Cochrane-Orcutt (CO) or OLS using the method to estimate uncertainty intervals suggested in Lee and Lund. The uncertainty range is large, and the “no-warming” hypothesis cannot yet be excluded.
  2. Since 2000: All of the IPCC projections and the “no-warming” hypothesis fall inside the uncertainty intervals for OLS. The FAR 3C/century falls outside the range for CO. Therefore, these two tests indicate no statistically significant warming since 2000, but Cochrane-Orcutt says we should exclude 3.0 C/century.
  3. Since 1995/6: Since the publication of the SAR, which projected 1.5 C/century, 1.5 C/century is looking good! Using OLS, we can’t exclude 3C/century; using CO, it is excluded. We cannot exclude “no warming” with either method.
  4. Since 1990: Since publication of the FAR, which projected 3.0 C/century, we can exclude “no warming” based on either method. The prediction of 3.0 C/century is excluded based on Cochrane-Orcutt, but OLS says it falls inside the 95% confidence limits. (A sketch of how this sort of exclusion check works follows this list.)
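
For concreteness, here is how that kind of “excluded / not excluded” call could be coded up, reusing the ols_trend_ci() function from the sketch above. This is purely illustrative: the anomaly series below is synthetic, and the printed numbers mean nothing beyond demonstrating the mechanics.

```python
# Illustrative exclusion check (not the script used for this post). It reuses
# ols_trend_ci() from the earlier sketch and fabricates an anomaly series purely
# for demonstration; replace it with real merged monthly anomalies to reproduce
# the sort of comparison plotted above.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(89, dtype=float)
anomalies = 0.0005 * months + rng.normal(scale=0.1, size=89)  # fake data

slope, lo, hi = ols_trend_ci(anomalies)  # per-month trend and 95% interval
TO_CENTURY = 1200.0                      # months per century
projected = 2.0                          # C/century, the AR4 central tendency
excluded = not (lo * TO_CENTURY <= projected <= hi * TO_CENTURY)
print(f"trend = {slope * TO_CENTURY:.2f} C/century, "
      f"95% CI = [{lo * TO_CENTURY:.2f}, {hi * TO_CENTURY:.2f}], "
      f"2.0 C/century excluded: {excluded}")
```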

Additional tests

A commenter requested I test the residuals using the Jarque-Bera test for normality.

  1. Cochrane-Orcutt: The probabilities of getting a JB coefficient as large as those obtained for the data from 1990, 1995/6, 2000 and 2001 were found to be 0.8%, 5.4% (24.4%), 14.4% and 10.5% respectively. This indicates that, except for the data beginning in 1990, we cannot exclude the hypothesis that the residuals are normally distributed. We must exclude it for the data from 1990. I suspect the strong non-normality for that data was caused by the volcanic eruption of Pinatubo, and I’m surprised to see 1995 survive the test.
  2. OLS: The probabilities of getting a JB coefficient as large as those obtained for the data from 1990, 1995/6, 2000 and 2001 were found to be 0.7%, 1.2% (2.5%), 8.0% and 15.2% respectively. This indicates that, for data beginning in 2000 or later, we cannot exclude the hypothesis that the residuals are normally distributed. We must exclude it for the data from 1990 and 1995. I suspect the strong non-normality for that data was caused by the volcanic eruption of Pinatubo.
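
For anyone who wants to reproduce this sort of check, here is a bare-bones sketch of the Jarque-Bera computation (again, not the actual script used here). It applies the standard JB formula to whatever residuals remain after the trend fit and reports the probability of a value at least that large under normality.

```python
# Jarque-Bera normality check on trend residuals (illustrative sketch).
# JB = n/6 * (S^2 + (K - 3)^2 / 4), compared to a chi-squared distribution
# with 2 degrees of freedom.
import numpy as np
from scipy import stats

def jarque_bera_pvalue(residuals):
    """Return the JB statistic and the probability of a value at least that large."""
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    s = stats.skew(r)                    # sample skewness
    k = stats.kurtosis(r, fisher=False)  # sample kurtosis (normal -> 3)
    jb = n / 6.0 * (s**2 + (k - 3.0)**2 / 4.0)
    return jb, stats.chi2.sf(jb, df=2)

# Example with synthetic residuals; a probability below 5% would argue
# against normality.
rng = np.random.default_rng(0)
print(jarque_bera_pvalue(rng.normal(size=89)))
```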

So…It’s not looking good for the 2C/century

That’s an understatement. 🙂

The trends most consistent with the data fell sufficiently low that the AR4 projection of 2C/century, based on merged data, now falsifies based on both OLS and CO. I have been examining a few issues that might permit me to calculate better uncertainty estimates based on simple trend analysis. However, for now, there remains the possibility these might be too small.

That said, these uncertainty bounds compare favorably with the variability of 7 to 8 year computed trends during periods when the atmosphere was clear of stratospheric aerosols and there was no confusion associated with the bucket-jet inlet transitions. Moreover, as we know, Tamino used a method that results in smaller uncertainty intervals when testing the “no warming” hypothesis back in 2007, near the peak of an El Nino. So, those who believe that analysis was a valid method to prove statistically significant warming since 2000 back in 2007 should consider it equally valid when used to exclude 2.0 C/century. (Though, if you were dubious back then, you should be dubious of this now. 🙂 )

For what it’s worth, I still prefer Cochrane-Orcutt since that’s the one I chose in the first place and I haven’t seen any evidence it’s worse than OLS. But certain bloggers elsewhere suggested a strong preference for OLS, and will continue to insist the test must use OLS. Well, the 2C/century falls outside the OLS uncertainty intervals, just as it has for a while now!

Next week I’ll compare computed trends to monthly data for individual data sets. You’ll see lots of “rejections” of 2C/century. Guess which data set doesn’t reject 2C/century. 🙂

Notes:

1. Trends predicted in FAR, SAR, TAR are mentioned in Technical Summary of AR4; see page 68.

2. The method of estimating the uncertainty intervals for OLS used here results in larger uncertainty intervals than those we would calculate using the method used by Tamino in August 2007 when, in Garbage Is Forever, he performed a hypothesis test using 91 months of data to prove there was statistically significant warming since 2000. So, whatever the flaws or merits of the method, they applied equally back in 2007.

3. Cochrane-Orcutt uncertainty intervals are smaller because the standard method uses the total number of degrees of freedom for an OLS fit; in this case, this is N=87. I retained this value when computing the “t” value. I used the effective number of data points when applying the method of Lee and Lund, resulting in larger values of “t”.
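
For completeness, here is a bare-bones sketch of the classic Cochrane-Orcutt iteration referred to throughout the post. The function name, convergence tolerance and iteration cap are my own illustrative choices; this is not the code used to produce the results above.

```python
# Classic Cochrane-Orcutt iteration for a linear trend with AR(1) errors
# (illustrative sketch): estimate rho from the OLS residuals, quasi-difference
# the series, refit, and repeat until rho converges.
import numpy as np

def cochrane_orcutt_trend(y, tol=1e-6, max_iter=50):
    """Return (trend per time step, estimated rho)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    rho = 0.0
    for _ in range(max_iter):
        resid = y - (slope * x + intercept)
        rho_new = np.sum(resid[:-1] * resid[1:]) / np.sum(resid[:-1] ** 2)

        # Quasi-difference both sides to remove the AR(1) structure, then refit.
        y_star = y[1:] - rho_new * y[:-1]
        x_star = x[1:] - rho_new * x[:-1]
        slope, intercept_star = np.polyfit(x_star, y_star, 1)
        intercept = intercept_star / (1.0 - rho_new)  # back to original units

        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return slope, rho
```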

22 thoughts on “Trends in Global Mean Surface Temperature: Bars and Whiskers Through May.”

  1. Hi Lucia,

    “So, it’s time to estimate the trends based on the 89 months since January 2000”

    That should either be 101 months or January 2001.

    Doubt if it changes anything though.

  2. So the trends have been falling since 1990. What’s the “trend of the trends”? It looks like at least -2C per decade! At that rate we’ll be in an ice age by 2050.

  3. So the trends have been falling since 1990.

    Well… be careful drawing conclusions. Obviously, the 2001-now data is a subset of the 1990-now data. The trend appears to drop because the 2001-now data has a lower trend than 1990-2001.

    But yes, the trend has slowed down, but at least part of that slowdown is consistent with weather variability. The other part may be the result of volcanoes. From 1990-2001, part of the ramp-up was due to GHGs and part to the clearing of the volcanic aerosols. Since about 2001, the volcanic aerosols have been clear, so we lost that bit.

    So, quite likely, right now the natural is offsetting the GHGs.

  4. Lucia:

    Have you considered the possibility that all the missing heat is simply being sucked into the earth’s mantle as a prelude to an AGW-caused planetary explosion? Isn’t this illusory cooling thing a predictor of the big boom?

    Sincerely,

    Frightened in Melbourne

  5. Have you considered the possibility that all the missing heat is simply being sucked into the earth’s mantle as a prelude to an AGW-caused planetary explosion?

    Of course. I even considered buying a Tom Chalko bioresonant t-shirt to support his efforts to come up with an anti-gravity machine. This invention would permit us to do all sorts of things without use of energy, and so reduce the need for carbon generating power plants.

    However, I’m in a quandary: What if the carbon emissions associated with shipping the t-shirt here result in immediate warming, causing the earth to explode before Tom manages to invent and deploy the anti-gravity machines?

  6. Lucia, you say “the natural is offsetting the GHGs”. Given the logarithmic heating limitations of CO2, the official heating mechanism has always depended on the +ve feedback from H2O. Watts has some interesting NOAA data about atmospheric levels of specific humidity; interesting in that the levels could not possibly provide a +ve feedback, even if you accept that thesis in the first place. The observation is made that the H2O vapor levels, as shown by NOAA, correlate well with the 1977 Pacific Climate Event so detested by Tamino and others. That being the case, perhaps what we are seeing now is the natural offsetting the natural. There is a paper about the Pacific Climate Event by McLean and Quirk; it would be interesting to statistically compare what McLean and Quirk assert is the temp response from the Pacific Climate Event with the NOAA H2O data. Here is the Watts link:

    http://wattsupwiththat.wordpress.com/2008/06/21/a-window-on-water-vapor-and-planetary-temperature-part-2/

    The McLean and Quirk paper is here:

    http://mclean.ch/climate/Aust_temps_alt_view.pdf

  7. If Anthony’s data supports Miscolczi’s ‘saturated greenhouse effect’, we will have a new paradigm.

    And a mystery: why weren’t the models compared with humidity data? Scandalous.
    ================================================

  8. Kim–
    I don’t think there is much of a mystery as to why the models aren’t compared to relative humidity to the extent that they are compared to other things. I suspect there is less historic data, and the quality of the data is less well understood.

    The graphs Anthony shows start in 1950 rather than in the late 1800s we see for surface temperature. The behavior between 1950-1960 oscillates wildly. Is that weather? Or is it an artifact of measurement problems with a new system for measuring something?

    After that, it might be interesting if someone compares globally and vertically integrated water vapor to model hindcasts. (Maybe someone has or will!)

    I guess the bee in my bonnet isn’t so much which metrics are compared to the simulation predictions or hindcasts. I’d just like to see some selected by the IPCC as standard, and have the IPCC itself report how each individual model on which they base their prediction does relative to that metric. Then, it would be nice if they included the results of the evaluations in a digested form. That is, things like Taylor diagrams etc. rather than a collection of maps with colors superimposed.

    Obviously, the IPCC would have an incentive to select metrics based on things that have been measured with greater confidence rather than lesser confidence. They would also likely pick metrics the models do better on rather than worse on– but… that’s ok. It’s better than what I consider to be the vague qualitative stuff in the AR4.

  9. Lucia,

    Beautiful work as always. But one should always remember that the trend analysis says nothing about the role of increased CO2 on the observed trend. The trend is whatever it is, and its underlying cause is something else. It is still a possibility that increased CO2 contributes next to nothing to global warming, because of unknown negative feedbacks. Or it could be that it contributes much more, but that it is offset by “natural” countertrends.

    In this sense, “fingerprint” studies are more useful. IOW, what other symptoms would there be that could point to the role of CO2 (and other GHGs) in climate change? But that too is not easy. For example, increased water vapor would be a symptom of any sort of warming (or forcing), according to the water vapor feedback theory. So it’s impossible to distinguish between GHG forcing and other forcings, like solar, for example. It may turn out that in a complex system like climate, indirect forcings (or feedbacks, if you want) are as important, if not more, than direct forcings. The simplistic view that equates direct forcing to warming would then be grossly in error.

    As a matter of fact, a simple observation of the historical climate records (including paleo climate records) points to a highly nonlinear behavior that cannot be described by a simple linear model. Just look at glaciations, which result from a tiny change in insolation.

    So, even though the trend analysis tells us something about what is likely not to happen, it tells nothing about the underlying causes of climate change.

    In the end, physics rules, not statistics (and of course I say this because I’m a physicist!).

  10. Francois O:

    In the end, physics rules, not statistics (and of course I say this because I’m a physicist!).

    I agree with you– and I’m a mechanical engineer. But, also, empiricism trumps modeling, because I admire Francis Bacon more than Aristotle. 🙂

    I agree with the idea of testing fingerprints with respect to attribution. The difficulties are:
    a) Identifying fingerprints all modelers agree to be fingerprints of AGW and not just warming, and
    b) Getting hold of data climatologists agree is sufficiently accurate to test the fingerprint.

    I was about to start looking at stratosphere data when JohnV suggested I compare the uncertainty intervals to historical data. So I’ve been puzzling over that. (Particularly as at first I concluded that my uncertainty intervals might be too small. But then, it turned out the “evidence” that they might be too small all came from the wild oscillations in trends when the jet-inlet to bucket and back transitions were occurring! So, they may be just fine!)

    But, in any case, when I turn to the stratosphere, I’ll also need to read up to figure out how accurate people think those data are. (OTOH, I could just do the blog-worthy posts and show what the current data made available to the general public show. Some climatologists may not like it… but… my response to that is: So? Then be more careful with your data sets in the first place!)

  11. Yes, but still, if the difference between the direct greenhouse effect of CO2 and the enhanced greenhouse effect from water vapor has to do with humidity, wouldn’t you think someone would have wondered about it and looked to see if the postulated effect was happening? It looks like we have several decades of relatively reliable humidity data. Wouldn’t the need to supply a multiplier of 3-6 times provoke the curiosity to look at what was actually happening? Well, maybe, see no evil. And there I go again.
    ====================================================

  12. This site is so 19th century. I mean, come on, you want to compare the consensus facts with empirical data? Get with the program, dammit :P

  13. Paulidan,

    I prefer to think I take a 16th century view!

    Bacon’s philosophy of using an inductive approach to nature – to abandon assumption and to attempt to simply observe with an open mind – was in strict contrast with the earlier, Aristotelian approach of deduction, by which analysis of “known facts” produced further understanding. In practice, of course, many scientists (and philosophers) believed that a healthy mix of both was needed—the willingness to question assumptions, yet also interpret observations assumed to have some degree of validity.

    🙂

  14. Lucia

    Interesting graph. I’m curious as to why the CO number for any given year is always less than the corresponding OLS number. Looks like a systematic difference to me.

    Have you done a similar analysis for past – longer – periods, to see if that apparently systematic difference persists?

    Any thoughts?

    Have you done a similar analysis for past – longer – periods, to see if that apparently systematic difference persists?

    1) I’ve done it with randomly generated data. It is not a feature of the method.
    2) I’ve done it for many past 8 year periods while testing a question John V asked me about uncertainty intervals. CO sometimes gives larger and sometimes smaller trends when tested over the historical record.

    I suspect the reason for the very similar relation for different start dates here is that the periods of time are overlapping. So, for the recent period, CO happens to be lower in these five instances (two of which are separated from each other by only 1 year).
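
    Something along those lines is easy to check with a quick simulation. The sketch below (illustrative parameters, not the script I actually used) generates a known trend plus AR(1) noise many times and counts how often a one-pass Cochrane-Orcutt slope comes in below the OLS slope; if the CO-below-OLS pattern were a feature of the method, that fraction would sit well away from one half.

```python
# Monte Carlo comparison of OLS and one-pass Cochrane-Orcutt slopes on
# synthetic AR(1) data with a known trend (illustrative sketch only).
import numpy as np

def ar1_series(rng, n=96, trend=0.002, rho=0.5, sigma=0.1):
    """Linear trend plus AR(1) noise."""
    noise = np.zeros(n)
    eps = rng.normal(scale=sigma, size=n)
    for i in range(1, n):
        noise[i] = rho * noise[i - 1] + eps[i]
    return trend * np.arange(n) + noise

def ols_slope(y):
    return np.polyfit(np.arange(len(y), dtype=float), y, 1)[0]

def co_slope(y):
    x = np.arange(len(y), dtype=float)
    b, a = np.polyfit(x, y, 1)
    r = y - (b * x + a)
    rho = np.sum(r[:-1] * r[1:]) / np.sum(r[:-1] ** 2)
    return np.polyfit(x[1:] - rho * x[:-1], y[1:] - rho * y[:-1], 1)[0]

rng = np.random.default_rng(42)
diffs = [co_slope(s) - ols_slope(s) for s in (ar1_series(rng) for _ in range(1000))]
print("fraction of runs with CO slope below OLS slope:", np.mean(np.array(diffs) < 0))
```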

  16. Lucia,

    Please don’t take my post as a criticism of your approach. You’re absolutely right that statistical analysis may be all we’ve got. My opinion is that climate researchers should acknowledge that fact more openly, and not pretend to do physics when they don’t. Finding trends and correlations only tells you what’s been happening in the past; it is no guarantee about the future. It may invalidate some model predictions, but then again, never at 100%.

    But one thing I’ve learned with my own miserable attempt at playing with CO2 data is that it’s almost impossible to extract a meaningful conclusion from them. I thought I had one, and then I realized I was wrong. But my consolation is that everybody else seems to be wrong, in that case about the CO2 cycle response time. We see a lot of claims about a 50 year lifetime, but in no way can you get that number from the data. Any lifetime seems to fit the data just as well, and if anything, shorter ones seemed to give a slightly better fit, but only marginally so.

    So even good data may not be sufficient to allow discriminating between good and bad models.

    The real sin is to proclaim certainty when there isn’t any. So trend analysis is at least useful in that regard.
