Some of you may have rushed to your computer this morning and noticed that Gavin posted to let us know he found a “neat” way to ENSO-correct data. Amazingly to any American, he did this at 7 pm, July 4th. If you are an avid reader of “The Blackboard”, you likely immediately asked: “How will these revisions affect the most recent 89 month hypothesis tests Lucia posted in June?”
The answer is: Modifying data always has some effect. However, with respect to the questions I have been asking at my blog, the Gavin-endorsed ENSO correction has very little effect. Major conclusions based on data merged from five sources are unchanged.
Cochrane-Orcutt applied the standard way, and OLS using Tamino-endorsed Lee & Lund uncertainty intervals, still say the central tendency of the IPCC AR4 projection of 2C/century is falsified using data from Jan 2001-May 2008.
So, just what exactly happens when I apply the Gavin-endorsed ENSO correction?
The temperature trends shift a little, but every single “Reject IPCC AR4 prediction/projection” result holds steady.
Why is this? Because while Gavin’s revisions do increase trends since 2001 slightly, they also narrow the uncertainty intervals. (The Blackboard readers knew this about ENSO corrections because we all discussed this in comments in April. )
Using the Gavin-approved ENSO correction method simply confirms the previous result, showing this finding is robust to different methods of correcting for ENSO.
The graphical presentation for the 5 data source merge now looks like this:
Figure 1: The IPCC AR4 projected trend of 2C/century is illustrated in brown. The Cochrane-Orcutt trend for the average of all five data sets is illustrated in orange; ±95% confidence intervals illustrated in hazy orange. The OLS trend for the average of all five data sets is illustrated in lavender, with ±95% uncertainty bounds in hazy lavender. Note, the IPCC projected trend lies outside the range of uncertainties consistent with data, as determined using OLS and Cochrane-Orcutt.
Individual data sets were fit using Cochrane-Orcutt; those fits are illustrated with dashed lines.
The key feature of this graph is this: The 2C/century line lies outside the very large range of trends that are consistent with the earth’s GMST as measured since 2001.
What’s the difference between this graph and the previous data-through-May graph?
To create the new graph and perform the updated hypothesis test, I:
- Downloaded Gavin’s corrected and original HadCrut & GISS data.
- I subtracted the “original” data from the “corrected” data in the file of HadCrut and GISS data Gavin kindly supplied. I noticed identical corrections were applied to both HadCrut and GISS for each month. (This is what I expected.) I decided these must be Gavin’s recommended ENSO corrections for each month.
- I added the ENSO correction to each of the five data sets I’ve been using since early this year. These are: GISS, HadCrut, NOAA/NCDC, UAH/MSU and RSS.
- Adjusting the data automatically updated all Ordinary Least Squares (OLS) trends calculated last June, along with their “Tamino Recommended Lee & Lund” error bars.
- To obtain the Cochrane-Orcutt fits, I had to manually iterate the lag-1 correlation coefficients until they converged. (This generally took 3 iterations or so. Otherwise, I’d write a script!)
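For anyone who wants to reproduce the subtraction and the Cochrane-Orcutt iteration above, here is a rough sketch in Python. This is not my actual spreadsheet; the array names (`corrected`, `original`, `anomalies`) are placeholders, and the iteration is just the textbook procedure:

```python
import numpy as np

def ols(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    a, b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
    return a, b

def cochrane_orcutt(t, y, tol=1e-6, max_iter=50):
    """Iterate the Cochrane-Orcutt quasi-differencing until the lag-1
    autocorrelation of the residuals converges (usually a few passes)."""
    a, b = ols(t, y)
    rho = 0.0
    for _ in range(max_iter):
        resid = y - (a + b * t)
        rho_new = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        y_star = y[1:] - rho_new * y[:-1]      # quasi-differenced series
        t_star = t[1:] - rho_new * t[:-1]
        a_star, b = ols(t_star, y_star)
        a = a_star / (1.0 - rho_new)           # intercept on the original scale
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return a, b, rho

# enso_delta = corrected - original      # ENSO correction from Gavin's file
# t = np.arange(len(anomalies))          # months, Jan 2001 - May 2008
# a, slope, rho = cochrane_orcutt(t, anomalies + enso_delta)
# print(slope * 12 * 100, "C/century")   # monthly slope -> C/century
```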
Those concerned with propagation of uncertainty and its effect on the uncertainty intervals will wonder: What is the uncertainty in the ENSO correction Gavin posted?
Beats the heck out of me! Gavin didn’t discuss this in his article. Possibly I could flit around trying to discover how large Gavin thinks his uncertainty intervals are. But, as he seems to treat this uncertainty as insufficiently important to mention, for my analysis I am treating the correction as having no uncertainty. I’ll discuss this later.
Want more detail?
Based on the methods applied in June, the following results for mean trends and 95% confidence intervals were obtained (a sketch of the test logic appears after this list):
- Ordinary Least Squares average of data sets: The temperature trend is -0.3 C/century ± 1.6 C/century. This is inconsistent with the IPCC AR4 projection of 2C/century to a confidence of 95% and is considered falsified based on this specific test.
- Cochrane-Orcutt, average of data sets: The temperature trend is -0.6 C/century ± 1.5 C/century. This is inconsistent with the IPCC AR4 projection of 2 C/century to a confidence of 95% and is considered falsified based on this specific test for an AR(1) process.
- OLS, individual data sets: All except GISS Land/Ocean and NOAA exhibit negative trends since 2001. The maximum and minimum central tendencies for trends based on observations were 0.4 C/century and -0.9 C/century for GISS Land/Ocean and UAH MSU respectively. Based on this test, the IPCC AR4 2C/century projection is rejected to a confidence of 95% when compared to HadCrut, NOAA and RSS MSU data. It is not rejected based on comparison to GISS and UAH MSU. (That’s right: UAH with the lowest central tendency fails to reject. Why? It displays the largest scatter, and so has the largest uncertainty in slope.)
- Cochrane-Orcutt, individual data sets: HadCrut, RSS MSU and UAH result in negative trends; GISS and NOAA show slight positive trends since 2001. The IPCC AR4 2C/century is falsified by every set individually. This is because, even where positive, the trends are too small to be consistent with the 2C/century central tendency projected by the AR4.
Note however, when I say “all sets falsify”, the result is very tight for one set of ENSO corrected observations. Using Cochrane-Orcutt, the highest trend that falls within the ±95% uncertainty intervals is 1.998 C/century; this particular result applies to GISS Land/Ocean data.
- The null hypothesis of 0C/century cannot yet be excluded based on data collected since 2001. This does not mean warming has stopped. It only means that the uncertainty in the trend is too large to exclude 0C/century based on data since 2001. Bar and whiskers charts showing the range of trends falling inside the ±95% uncertainty intervals using selected start dates and data not corrected for ENSO are discussed in Trends in Global Mean Surface Temperature: Bars and Whiskers Through May. I will be updating this to show the effect of the Gavin-endorsed ENSO correction later.
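For concreteness, here is roughly what the OLS side of the test looks like in Python. I am not claiming this reproduces the Lee & Lund adjustment line for line; the AR(1) effective-sample-size correction below is a common stand-in, and `merged_anomalies` is a placeholder for the averaged series:

```python
import numpy as np
from scipy import stats

def trend_and_ci(y, per_year=12):
    """OLS trend (C/century) with a 95% interval widened for lag-1
    autocorrelation via an effective sample size -- a stand-in for the
    Lee & Lund adjustment, not a reproduction of it."""
    n = len(y)
    t = np.arange(n)
    slope, intercept, r, p, se = stats.linregress(t, y)
    resid = y - (intercept + slope * t)
    rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid ** 2)
    n_eff = max(n * (1 - rho) / (1 + rho), 4)     # effective sample size
    se_adj = se * np.sqrt(n / n_eff)              # inflate OLS standard error
    half = stats.t.ppf(0.975, n_eff - 2) * se_adj
    scale = per_year * 100                        # monthly slope -> C/century
    return slope * scale, half * scale

# trend, half = trend_and_ci(merged_anomalies)    # hypothetical merged series
# rejected = (2.0 < trend - half) or (2.0 > trend + half)
# print(trend, "+/-", half, "C/century; 2C/century rejected:", rejected)
```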
Of course, all the illustrated uncertainty intervals are computed based on the assumption that Gavin’s suggested correction contains no uncertainty and that the GMST may be treated as AR(1) noise. If Gavin had mentioned the uncertainty associated with his ENSO correction, I would have used this to expand the uncertainty intervals.
However, we know ENSO is weather noise. The variability due to ENSO was treated as weather noise in the non-ENSO corrected analysis early in June. So, should the uncertainty in his ENSO correction be large, we can always just revert to those ‘uncorrected’ values, and note Gavin’s ENSO correction does not change the results of the previous hypothesis test.
Discussion
Readers may be aware this is Gavin’s third (fourth, fifth?) attempt to explain why short-term trends cannot be used for…. Well, evidently for anything.
In the most recent two attempts he appears to be rebutting someone or something, though he doesn’t mention precisely who or what. The consequence is I feel the need to mourn the dead strawmen that must have been scattered around Gavin’s living room each time he clicked “publish”.
That said, since Gavin’s posts seem to be addressing something he believes must be rebutted, it’s worth reviewing his recent blog-viations on short term trends.
In one of his previous discussions of short term trends, Gavin suggested one could not falsify the IPCC projection of 2C/century using data beginning in 2000 because a group of models with different parameterizations and different initial conditions gave very large standard errors for the best-fit trend over 8 years. The standard error he suggested is larger than displayed by the entire thermometer record on the real earth! So, it is likely his variance is an artifact of ensemble averaging over the physical approximation equivalent of several different small planets, each intended to be “like” earth, but none precisely identical to the real earth. I show that comparison here.
Turning to Gavin’s Fourth of July post, he says:
Despite our advice, people are still insisting that short term trends are meaningful, and so to keep them happy, standard linear regression trends in the ENSO-corrected annual means are all positive since 1998 (though not significantly so). These are slightly more meaningful than for the non-ENSO corrected versions, but not by much – as usual, corrections for auto-correlation would expand the error bars further.
What an odd paragraph.
Reading it, I can’t help asking, “Gavin, are you really reverting to trying to convince thinking people that flat trends are consistent with underlying up-trends because the flat 8 year trends happen during periods that end with volcanic eruptions?!”
I thought we’d gone past that.
When I recover from my surprise, I am left asking, “What error bars?”, “Why is Gavin focusing on the idea that the trend itself is positive?” and “Why did Gavin switch to 1998?”
As we know, in a recent post, Gavin focused on trying to show one could not falsify the IPCC projections since 2000. This was to rebut… well, some un-cited person who might happen to be looking at trends since 2000. I am aware of no such person; maybe Gavin is. (Note that 2000 happens to be a local minimum in the temperature record.)
Now that Gavin has discovered a “neat” way to correct for ENSO, he switches from 2000 to 1998. While insisting the trend itself is unimportant, he proceeds to remind us that it is positive, though not statistically significantly different from zero.
The curious reader, aware of actual discussions in the blogosphere, might ask, “So, does the positive trend you find differ or agree with IPCC projections over time?”
The curious reader cannot tell based on Gavin’s post.
Had Gavin shown uncertainty estimates for the trend he calculated since 1998, or described how he calculates his uncertainty intervals, we might be able to understand the point of showing us the trend since 1998. We might be able to apply his method to other years– say since 2001 or 1990– and determine if those trends differ from zero using a Gavin-endorsed method. We might also be able to determine whether those trends differ from IPCC projections using a Gavin-endorsed method. (These were 3.0 C/century, 1.5 C/century or 2C/century in the various reports.)
Unfortunately, since Gavin doesn’t illustrate his method, and his previous methods were the “volcano eruption” method, and the “collection of trends from a bunch of models with different physics” method, the rest of us will have to content ourselves with using methods that have been described.
Later this week, I will show the uncertainty intervals associated with the temperature trend since 1998 using Cochrane-Orcutt and Tamino-recommended Lee & Lund uncertainty intervals. The Cochrane-Orcutt method is a standard method. The Tamino-recommended Lee & Lund method seems to be an idiosyncratic method described briefly in a recent lightly cited peer reviewed paper. But, since Tamino used it for hypothesis testing using 7 years of data, we’re using it here too.
When I post my analysis, readers will then be able to determine whether the positive trend Gavin shows is statistically significant to a confidence of 95% as required to reject the “no warming” hypothesis. We’ll also see if the post-1998 trend is consistent with any particular IPCC projection/prediction of interest.
Anyone want to guess what I’ll find? 🙂
Update: July 7
In comments, Gavin indicated that he thinks the correction, developed for Hadley, may be applied to land measurements, but not to satellite measurements. So, I repeated the hypothesis test for the merge of NOAA/GISS/HadCrut, eliminating RSS and UAH. The result is:
- Using Ordinary Least Squares: The best-fit trend beginning in January 2001 is -0.1 C/century. The 95% uncertainty is ±1.4C/century. This results in an upper bound of 1.3 C/century. The IPCC trend of 2C/century exceeds 1.3 C/century and so is falsified based on this test.
- Using CO: The best-fit trend beginning in January 2001 is -0.2 C/century. The 95% uncertainty is ±1.4C/century. This results in an upper bound of 1.2 C/century. The IPCC trend of 2C/century exceeds this upper bound and so is falsified based on this test.
Gavin has expressed the opinion these uncertainty intervals are too small. I consider the evidence he provides to support his view flawed. This is discussed in comments.
Lucia, Credit where credit is due – this is not my method, nor my analysis (neat or not) – it is due to David Thompson and is described in his recent paper. You unfortunately can’t use the delta’s derived from HadCRUT3v for the satellite records because they are not measuring the same quantity. I used them for GISTEMP because it nominally is. The MSU data have a much stronger ENSO signal and you’d need to adjust the corrections for them specifically. Finally, I have no idea who you are talking about in the last half of your post. You attribute to me statements and motivations that I have never stated or had. Instead of conducting conversations with an imaginary version of me, I would recommend just talking to me – I’m really not that shy. And for the record, someone asked me to put this analysis up because they were curious – nothing more, nothing less.
And I thought I was the only person ever to use an interactive interval-halving ‘numerical method scheme’ in order to avoid writing some code.
ps
I’ve had this very same experience many many times … hmmm, … where was that??? I’m sure it’ll come to me.
Gavin–
You might want to re-read your posts to see what they imply. The quote I attribute to you was cut and pasted from your post:
Who are these people still insisting on the short term trends? And in what way do you advise they are “unmeaningful”? The link in the word “advice” links to your article with Rahmstorf explaining that short flat spots happen during long time up trends. Every single flat spot or downtrend occurs as a result of a volcanic eruption.
Presumably, by linking the word “advice” and re-raising the issue of short term trends, you are bringing these arguments up again. You appear to be rebutting some unstated argument by whomever these people might be.
If you mean that statement to be understood some other way, you might want to clarify that. If you don’t wish to appear to be arguing with straw people, or supporting your claims with arguments based on volcanic eruptions, don’t link back to that argument.
I’m perfectly aware you are not shy. But quite honestly, I don’t enjoy entering comments in moderated comment blocks, returning to see whether they passed the moderation and then waiting for an inline comment response. I would prefer to post here at my blog. I know that my “ping” will appear and if you have a comment, you are free to continue the conversation either by private email (as we have conversed before), in comments here or in your blog. In any case, the course I take means your blog visitors needn’t be disturbed by our conversations over these sorts of issues.
On the ENSO: I know it’s not your correction. But you used it, and so, relative to the one we used here before, it’s the one you are endorsing by using it. I haven’t checked if it is correct. I just want to see the consequence if it’s even approximately true.
On the HadCrut/ UAH issue: Everyone here has discussed the issues with land vs satellite data, and many an argument ensued. For that reason, I sometimes deal with all five together, and sometimes with the land only measurements and also show what happens if we apply an analysis to any particular set.
As it happens, if we apply that correction to NOAA or HadCrut, we still falsify the 2 C/century trend using those sets — you’ll see this is stated above. Two land based measurements reject the 2C/century after the ENSO corrections are applied.
With regard to the idea the correction might not apply to UAH, sure. We all understand that the satellite measurements differ from the land measurements. That said, the correction might equally well not apply to GISS either. To the extent that GISS differs from HadCrut due to the pole extrapolation, the ENSO corrections might differ for the two. So, the application of a correction for HadCrut might be equally problematic for GISS.
If we collectively average NOAA HadCrut and GISS, that falsifies the 2C/century trend using either CO or OLS using the uncertainty intervals Tamino uses.
I’m amazed that there is still squabbling over the AR4 projection falsification issue. Also I thought that incorporation of the MEI as a surrogate for ENSO effects on global temperature of a couple of months back was a transparent way of correcting for ENSO influence. It also slightly lifted underlying climate trends and narrowed uncertainties, leaving the OLS and C-O falsifications intact. But hats off to Gavin for treating the issue seriously rather than falling back on the new party line: “There may not be warming for another ten years or so, but just you wait: after that we will REALLY heat up.” Also in view of the dawning realization that the earth is not cooperating with climate models and that more and more people are learning about this disconfirmation, there is the effort to shift attention to all manner of non-warming “climate changes,” such as ocean acidification etc. All this is classic Leon Festinger denial in action (see Chris Horner http://planetgore.nationalreview.com/post/?q=OGNhOTFiMDk1MzY2MzI1ZjZhYmMwYmIwODY2YTU5ZmI=). We are witnessing the contortions of a dogma faced with empirical failure…and probably just the beginning.
Lucia,
Take comfort from the fact Gavin had to reply immediately to your post. Shows you are making an impact – keep up the good work!
“We are witnessing the contortions of a dogma faced with empirical failure…and probably just the beginning.”
Exactly so. The significant indicator here is when proponents of a theory feel unable to abandon peripheral tenets. Funnily enough, this often happens while they are furiously arguing that the tenets are peripheral, and so, even if the attackers were right, the theory would not be damaged. But they still insist on defending them fiercely.
It would be so easy to just say, probably 2 degrees/century was an overestimate, in the light of new evidence, but it was reasonably close. Just as it would be very easy to say that in the light of later examination, MBH seems to have been a flawed series of studies, but it was one that advanced knowledge in the field by stimulating further work, and so remains a valuable contribution. However neither of these things are possible to admit. Just as it is not possible to reject Hansen’s latest ravings on prosecuting the expression of opinions. We cannot say that Hansen made a great contribution in the earlier stages of this topic, but now seems not to have both oars in the water. Every last word he utters must be infallible!
And so we get these crazed contortions about expanding the uncertainty bars, when really, whether the IPCC was right about 2 degrees is quite unimportant to the AGW hypothesis, and if the value is set a little lower, it would not be falsified. But the IPCC in all its details, Mann’s work and conduct in all its details, and everything that Hansen has ever said, have to be defended to the death.
Not sure if it’s in the literature of Cognitive Dissonance, but the mechanism seems to be a strong fear that once one admits the slightest revision is needed, the whole edifice will come tumbling down. Of course, ironically enough, as soon as they see the tactic of defense to the bitter end of peripheral parts of the hypothesis, informed observers do not have to know any science to be able to tell that the theory is dying. The behavior of its defenders shows they do not, at heart, really believe it themselves, so why should we?
The bets we should be taking are when it will happen that AGW will finally have died as a theory. We need an indicator, but my own bet would be that within 5 years from now, the science will be settled and the debate will be over.
Fred:
Yes. It would also be so easy to discuss what the likely level is. Or to talk about why it’s difficult to identify where the issue is– the SRES? The models themselves etc. Is the problem that the IPCC won’t kick out some “wild hare” models from some particular country for reasons of diplomacy?
And if one were to admit the current lack of support for the 2C/century projection, one might make the case that there is still a very good case for 1 C/century, which could also be a problem. That would be sufficient to motivate some action. The magnitude of warming, the level of certainty about the magnitude and our ability to predict what is going to happen are all important if we are to create rational policies.
But for some reason everything stalls, and we read things like this:
Short term trends do contain some meaning. “People”, whether named or unnamed, know this. They say this.
Is he rebutting these “people” saying short term trends have some meaning?
Well, that would be truly odd because no one believes short term trends never have any meaning!
The modelers applauded themselves at the ability of models to predict that temperatures went down rather than up after Pinatubo erupted. That was a short term event. They took credit. They deserve it. It tells us a snippet about climate. (What it tells us is stratospheric volcanic eruptions do have a cooling effect. 🙂 )
Other short term trends are also meaningful in some sense.
Or, is the purpose of that sentence to rebut whatever he was rebutting in the article he linked? Or in recent articles? Or in unnamed blog posts posted since the article he linked? Is he trying to play the politician and say something so utterly vague that alarmists can believe he’s supporting their wild claim while he retains plausible deniability should someone call him on his statement?
Who the heck knows?
My comments are open. He can clarify if he wishes. The contact Lucia button is there, he can tell me privately if he wishes. I’ll let y’all know if he does. 🙂
Fred makes a good point. A rational science process would not defend the periphery. It would accommodate empirical disconfirmations
and flawed results through continual refinement and adjustment. But that isn’t what happens. The MBH “hockey stick” is not acknowledged as flawed with a salute to the investigators for stimulating follow up work. Instead there is the Orwellian disappearance of any mention in AR4, after it was emblazoned on page after page of TAR. No mention of it; it’s just gone.
There are two reasons for this: one originates in human nature and the other is rooted in the political process. The cognitive dissonance phenomenon does apply. There is simply so much invested in the belief in the truth of the established paradigm that disconfirmations are rejected or even assimilated as supportive. Festinger writes in WHEN PROPHECY FAILS, “We are familiar with the variety of ingenious defenses with which people protect their convictions, managing to keep them unscathed through the most devastating attacks.” Then there is the stark fact that 1 C/century really is a lot different than 2 C/century, in terms of its impact on the planet and on the international process. To acknowledge that warming will likely be smaller at this point in the Bali road map process or the US Congressional process would almost certainly make it more difficult to enact the kinds of restrictions the advocates are looking for.
An honest approach would confess that we know very little about the future impact of doubling atmospheric CO2 and that there is the potential for relatively small, as well as substantial changes. And future research is not likely to narrow this range very much. Don’t hold your breath waiting for this.
Lucia-Gavin criticizes your use of his surface temperature ENSO corrections on the satellite data. Perhaps he could explain how this “David Thompson” calculated these corrections in such a way that you can easily convert them.
Andrew-
I don’t believe there is any “perfect” way to correct for ENSO, and I think doing what I did — which I see as a back of the envelope, “what if” sort of adjustment — is reasonable.
If I understand Gavin’s comment correctly, he used the results of an analysis in a published paper, and the analysis was performed specifically for HadCrut.
Then, Gavin also assumed he could apply the HadCrut specific correction with no adjustment to GISS, but feels this should not be extended to satellite data. Presumably, the application to GISS is a “what if”; otherwise, Gavin too would have delved into the method and come up with a correction specific to GISS. But, as he was posting at his blog, it appears he didn’t feel any need to go to the effort to come up with the GISS specific correction. (Which is fine with me.)
As this is also a blog, I figured I’d expend an equivalent amount of effort as Gavin expended on his blog post. I just used the same numbers and applied them to all sets in a similar “what if” type analysis.
I’m perfectly content to do back of the envelope type stuff like this, and it’s fine with me if Gavin does too. But mostly, I think we already have our answer about the falsification: The 2C/century is inconsistent with the data at 95% confidence. Either it’s an outlier (which has a 5% likelihood) or the 2C/century is incorrect.
We get this with ENSO corrections, without ENSO corrections or whatever.
FWIW, I think the subject analysis is valuable. It can say that the temperature did not increase at 2C/Century during the subject decade-long period. Moreover, the analysis casts doubt upon any specific models that “guaranteed” (within certain error bars) such an increase during the subject period.
However, it may be worth mentioning that 2C/Century (over the next century) itself is not falsified (and, as we all know, could actually happen) — because the actual physical mechanisms (natural and possibly anthropogenic) could combine to result in a stair step pattern that “averages” 2C/Century over the whole century. Of course, rising temperatures are not a certainty — global temperature may, on balance, fall during the century.
Personally, I would rather experience a warmer world. So, I hope sustained GW is a fact — but, I expect it is not.
Lucia:
“The standard error he suggested is larger than displayed by the entire thermometer record on the real earth! So, it is likely his variance is an artifact of ensemble averaging over the physical approximation equivalent of several different small planets, each intended to be “like” earth, but none precisely identical to the real earth.”
I admire your dexterity with words as well as with statistics!! ;>)
KuhnKat
“I admire your dexterity with words as well as with statistics!!”
I back up KuhnKat’s sentiments and add:
If the error is so large that you can drive a herd of elephants through the “ensemble” without notice, then what use is the “ensemble”? However don’t expect them to cull out the outliers any time soon; that would be “inconvenient”, wouldn’t it.
Way to go Lucia; fearless search for the scientific “truth”, whatever it is.
Lucia
For some reason the comment above didn’t come out the way I intended. The blockquote should have encompassed KuhnKat’s comment above and my comment follows.
I have recently posted on Tamino and RC asking for clarification of the difference between a short-term trend and a long-term trend. I had assumed that to be a long-term temperature trend a ten year moving average would show at least some consistency of direction over a period of 30 years.
In 1988, when James Hansen made his address before Congress and made much of the statistical significance of the warming trend at the time, the moving averages over the previous 50 years showed basically no warming for 20 to 40 years up to around 1978, followed by a steep rise from the late 1970s to 1988. Gavin answered my question, “Can 20 years of flat temperature trend plus 12 years of increase equal a long term trend?” with the word “Yes”. A similar reply, averaging out over 30 years the warming that occurred during 12 years, was made by a poster on Tamino’s site.
Perhaps the word “trend” has a special meaning of which I am unaware, but to my non-specialist mind it is simply silly to take a sudden steep rise over a short period and describe what has happened as a long-term trend. If in fact there was absolutely no smoothed warming trend between 1948 and 1976, how can what happened from 1976 to 1988 retrospectively make 20 years before 1976 into a period of long term warming?
The IPCC AR4 projected/predicted 2C/century as the central tendency of the trend during the first 2 or 3 decades of the current century. So, yes, I am only falsifying their prediction/projection as they describe it in the AR4.
In some posts, I show this trend superimposed on their graphical presentation of uncertainties. However they didn’t state an interval.
And no, I can’t guarantee the trend won’t increase, decrease or do something else in the future. The IPCC projected 2C/century for the early period in the century, and faster later.
Falsifying now doesn’t give us much confidence in the later predictions. But, as they say, even a broken clock is right twice a day. So, it could turn out right.
I do anticipate warming. The historic trend is up, and has been. The basic underlying theory that CO2 should cause some warming appears sound. But all in all, the IPCC projections are less precise than they claim. (That is, the uncertainties appear larger than indicated on figures.)
Patrick
Gavin doesn’t actually elaborate when answering questions at RC, does he? 🙂
Actually, what bothers me about the short answer is this:
You described the period before 1978. If we examine the forcing scenarios Model E has been using, the forcings have been increasing. So, the modelers expect the warming rate to vary.
So, the difficulty is that a full answer should be:
“Yes, what you are describing is consistent with the combination of weather variability and some warming. However, climate scientists believe, based on their current understanding of the levels of forcing, that the underlying warming trend during the early portion of the century was slow, but it accelerated during the later portion. That combined with natural weather, and the specific pattern of volcanic activity, permitted the observed trend to be relatively flat for a period, followed by accelerated warming.”
Or, if Gavin thinks there is some other explanation, he could give it.
In my opinion, the answer “yes” is unnecessarily brief. If RC’s goal is to inform and educate, it will not advance that.
That said, RC isn’t my blog, and I don’t dictate policy there.
Wow!… Looks like RC (Schmidt), might be trying to ever so slightly shift away from AGW? By the way, the way July 08 temps are going, might be lower than January 08.. and NH ice ain’t melting at all like 07 (cryosphere today)
http://discover.itsc.uah.edu/amsutemps/
I don’t think Gavin intended to suggest he is shifting away from AGW!
Why are you being so gotcha-y lately?
And the reason Gavin keeps explaining why short term trends are not very meaningful is because people still haven’t gotten the point that short term trends aren’t very meaningful…I question whether some people will ever understand forced vs. unforced variability.
It’s like using this week’s weather to disprove a seasonal signal–but we’ve been through this before…nevermind.
I notice that Gavin still only uses the GISS and Hadley CRU datasets which are meteorological weather station datasets. He continually refuses to touch the UAH and RSS satellite datasets. I would suggest that is because they have not been Hansenized and Met Office adjusted and that they are more reliable and more resilient to “adjustment”.
No. Satellite temperatures measure a broad section of the atmosphere. As Gavin notes, the MSU readings respond to ENSO events (Look at the difference between 1998 in GISS and RSS–I guess RSS was in on the conspiracy back then, but changed their mind?)
Sorry to mix facts with conspiracy theories…again.
Boris–
The reason Gavin fails to convince people that short term trends aren’t meaningful is because short term temperature trends are meaningful in many ways.
Saying things like “Short term trends aren’t meaningful,” ending with a period, is wrong. To be correct one must state the way in which they are not meaningful.
Incomplete vague claims are not meaningful. Gavin can repeat this one over and over and over, and post clearly flawed arguments to support it. People will continue to be unconvinced by his repetitious overbroad and ultimately meaningless claim.
Lucia,
Short term trends aren’t meaningful in determining climate sensitivity. Period.
Boris–
There… you added “in determining climate sensitivity”. Everyone agrees with you, and as far as I am aware, no one has attempted to use short term trends to determine climate sensitivity.
So, presumably, you aren’t suggesting Gavin is writing post after post after post trying to explain that “Short term trends aren’t meaningful in determining climate sensitivity.” and then complaining people haven’t taken his advice!
It would be like writing post after post explaining that topical application of chocolate sauce is not a cure for leprosy, and complaining people aren’t taking your advice.
What else could we be talking about? IPCC scenarios are based primarily on CS to CO2, so all these short term “falsifications” mean zero, right? Which means that short term trends aren’t meaningful with respect to climate.
fred July 5th, 2008 at 4:10 pm
Of course, ironically enough, as soon as they see the tactic of defense to the bitter end of peripheral parts of the hypothesis, informed observers do not have to know any science to be able to tell that the theory is dying.
A very insightful comment. As a scientist of a non-climate variety, I’ve never witnessed such dogmatic defense of the most minor issues, with nary a thought to acknowledge weak data or analysis when pointed out. In my field, reviewers of papers and grant proposals take a very skeptical approach to every submission. We try to tear down every argument, looking for weaknesses in the data, in the methodology, in the interpretation, and looking for every possible alternative explanation.
Boris-
Climate sensitivity has a specific definition. IPCC scenarios and the projections are not the same as estimates of the climate sensitivity.
Short term trends can be used to bracket the current magnitude of the underlying “ensemble average” climate trend. The IPCC predicted a current underlying climate trend.
And what constitutes the underlying climate trend?
Boris,
That’s normally a denialist question. If you don’t know, maybe you should ask the IPCC (or Gavin).
While you are at it, ask him the difference between weather and climate! 🙂
An exchange Daniel Klein had with Gavin last winter is relevant to this blog post:
>>>On trends starting in 1998 Gavin says:
“…all the surface records even have positive (non-significant) trends, even starting from then. The issue is significance.”
http://www.realclimate.org/index.php?p=497#comment-78104
“Daniel Klein Says:
29 December 2007 at 10:54 AM
Gavin: Are you sure about this comment you leave at number 47?
“Plus, all the surface records even have positive (non-significant) trends, even starting from then [1998].”
With 1998 remaining the record, and 2007 is the lowest since 2001 (UKMET) you certainly won’t have a positive trend (significant or not) in that record at least.
While I agree with your comment that picking a single starting date is not good statistics, I am curious as to the meaning of your assertion, “You need a greater than a decade non-trend that is significantly different from projections.” One does need to start somewhere.
Let me rephrase the question then. How long would it need to be for the 1998 record global temperature to not be exceeded (or if you prefer, a “non-trend” beginning at that date) for you worry that something has been missed in your understanding? 2010? 2015? 2020? 2030? A single year as an answer would be appreciated.
I am simply curious and mean no disrespect with the question.
[Response: Trends are not determined by picking two dates and looking at the difference. Trends instead are (usually) least squares fits to all the data – this is much more robust and why it is preferred to cherry-picking start dates. I’m pretty sure excel or other common software allows you calculate linear regressions and their uncertainty, and I suggest you find some software to try it out so that you can follow what is being discussed. 1998 was roughly 0.2 deg C above trend. If the trend is around 0.2 deg C/dec, that implies that the mean level about 10 years later would be expected to match. But the weather noise is about 0.1 deg C in any individual year and so you might need to wait awhile. Much of this is already moot though – although the differences are small, both 2005 and 2007 beat 1998 in the GISS and NOAA analyses, more importantly, the more robust longer term averages (say over 5 years) are still increasing. To answer your question though, 1998 will likely be exceeded in all the indices within the next five years – the solar cycle upswing into the next solar max will help, and the next big El Nino will probably put it over the edge. -gavin]”
http://www.realclimate.org/index.php?p=497#comment-78140
“# Daniel Klein Says:
29 December 2007 at 11:40 AM
OK, simply to clarify what I’ve heard from you.
(1) If 1998 is not exceeded in all global temperature indices by 2013, you’ll be worried about state of understanding
(2) In general, any year’s global temperature that is “on trend” should be exceeded within 5 years (when size of trend exceeds “weather noise”)
(3) Any ten-year period or more with no increasing trend in global average temperature is reason for worry about state of understandings
I am curious as to whether there are other simple variables that can be looked at unambiguously in terms of their behaviour over coming years that might allow for such explicit quantitative tests of understanding?
[Response: 1) yes, 2) probably, I’d need to do some checking, 3) No. There is no iron rule of climate that says that any ten year period must have a positive trend. The expectation of any particular time period depends on the forcings that are going on. If there is a big volcanic event, then the expectation is that there will be a cooling, if GHGs are increasing, then we expect a warming etc. The point of any comparison is to compare the modelled expectation with reality – right now, the modelled expectation is for trends in the range of 0.2 to 0.3 deg/decade and so that’s the target. In any other period it depends on what the forcings are. – gavin]”
http://www.realclimate.org/index.php?p=497#comment-78146
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Note that Gavin suggests a trend of 0.2 to 0.3 per decade which is higher than that used in the exercises done here by Lucia.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Note also that Gavin says, “more importantly, the more robust longer term averages (say over 5 years) are still increasing . . .”
So as of last December it was OK to talk about 5-year averages as being meaningful. With the June/2008 data in, there has been no significant increase (the issue is significance as Gavin quite properly claims) in the ENSO-adjusted 5-year averages from July/1998-June/2003 as compared to July/2003-June/2008 (and this is consistent with Gavin’s own analysis), and without the ENSO-adjustment the temps decline between the two periods.
So maybe one of those unidentified “people” that Gavin was responding to was himself in an earlier time, making claims about the meaningfulness of 5-year, 10-year, and 15-year periods!
I haven’t observed the climate yet.
One way to shed further light on the ‘short term’ vs ‘long term’ trend issues might be to apply Cochrane-Orcutt bounds and attempt to (or fail to) falsify the 1988 predictions?
Yep. that’s what I have been doing. Comparing modeled expectation to reality.
The IPCC says 0.2 deg/decade. I have no idea where Gavin comes up with the 0.3 upper bound. But reality says “less than 0.2 deg/decade.”
It appears Gavin recommends testing this using the OLS function (LINEST) in Excel. Excel OLS says “Not getting 0.2 deg/decade!”
So, it would appear that I am testing in precisely the same way Gavin suggests. 🙂
Lucia, your basic mistake is claiming that IPCC specifically predicted anything for the 7 year and 5 month trend. They didn’t. They predicted instead a warming of around 0.2 deg/dec for a 2 to 3 decade period. Had they wanted to predict anything over shorter time periods, they would have done something like I did – take the model results and create a histogram of the short period trends. They would have got a distribution that was roughly N(2,2) (degC/century) as you know, and therefore a ‘likely’ range (67%) of of 0 to 4 deg C/century for that short period. That wide range (which encompasses internal variability and model uncertainty) is why they didn’t bother. The range for the 30 year trends (2001-2030) is ~N(2.2,0.5), giving a likely range of 1.7 to 2.7 degC/century. Hence their projection.
Unfortunately, just using the information within any short time period doesn’t give you a very strong constraint on either what the auto-correlation or standard deviation in the real world is and so makes it very difficult to estimate the distribution for the underlying trend. To make that clearer, if I calculate the OLS confidence interval (95%, 7 years, 2001-2007) for all the IPCC simulations it ranges from 0.7 to 13! Thus the confidence you have in your sample sd or auto-correlation should be small (try calculating their error bars, or do a monte carlo simulation of a suitable random variable).
A better question to ask is how the likely trend over 30 years is informed by the trend over a shorter period. i.e. what is the conditional probability of the 30 year trends given the trend and uncertainty for the first 7 years. This can be done with the IPCC models (which are the basis for the IPCC prediction after all). What you’ll find is that the correlation is small (r2=0.11), but since it is positive, there may be some useful information. For instance, if you just take the runs with 7 yr trends of between -1 and 1 deg C/century (12 simulations), the distribution for the 30 year trends is N(2.0,0.4) – not a significantly different mean to what you get using all the simulations. (For a range of -1 to +2 (22 simulations), the 30 yr distribution is N(2.0,0.4), for all 7yr trends less than zero (11 simulations), it is N(1.8,0.7) – do you start to see a pattern here?). Thus one must conclude that there is very little information in the short term trends for judging what the long term trends will be. The reason is the substantial decadal variability in the models (and in the real world) which you neglect at your peril.
Hi Lucia – not sure I can improve on Gavin Schmidt’s last comment, but maybe a slightly different perspective…
First, the issue isn’t so much the time period involved, it’s the expected temperature change over that period. For a 2 degree per century trend, 0.2 degrees per decade, then over 7 1/2 years that’s just 0.15 degrees expected temperature change. Over 30 years it adds up to 0.6 degrees. We know there are many factors influencing Earth that bump temperatures up or down on the order of 0.1 degrees, and some of them have many-year-long autocorrelations (the solar cycle, for one, that seems not to have been mentioned yet on this thread!) If the trend expectation was 4 degrees per century rather than 2, then 7 1/2 years might be quite adequate to have good statistics on the trend whatever the other factors were up to. But the trend isn’t that high – at least not yet.
So isolating an expected temperature change of about 0.15 degrees while there are random (or not so random) 0.1 degree perturbations bumping temperatures around by that much is quite tricky. That’s at least part of the reason why 7.5 years is just too short.
But over 30 years, we have an expected change of say 0.6 degrees, and that should be easy to distinguish from the little 0.1 degree bumps. It was only when the 20th century temperature change hit about 0.6 or 0.7 degrees that IPCC found the evidence for warming conclusive.
I know this is roughly what you’ve been talking about when looking at confidence intervals in the trends, but looking at the absolute temperature change across a given time period makes the issue a lot clearer.
Second, as I and others have been saying for a while here, you have been insisting that the confidence intervals you get for trends over this short period are meaningful, but there is clearly autocorrelation (once again, at the least, the solar cycle) that inevitably makes those confidence intervals narrower than they should be. Cochrane-Orcutt only helps with short-term autocorrelation, which you make worse by looking at monthly data anyway. Either you have to completely subtract out all the long-term non-trending components in the temperature record (not just ENSO, but solar cycle, volcanoes, PDO if it exists, etc.) or you have to wait for a sufficiently long period that those oscillations average out. Decades, at least.
Technically you certainly have a valid point in your analysis, but when you phrase it as a “falsification of the IPCC prediction” you are greatly overstating your case. And we get comments around the blogosphere about how “lucia has falsified the IPCC”! as if what you’ve done actually negates the entire body of painstaking climate research of recent decades. It doesn’t, and you do yourself and the world a disservice by perpetuating that representation of the useful stuff you have done. I’m sure it’s been satisfying to see all the traffic that’s come here though…
What if the IPCC is actually wrong and the short term trends observed to date continue for the next 30+ years but governments impose draconian anti-CO2 policies anyways? The economic and social consequences of wrong policy decisions would be enormous and would completely undermine the credibility of the scientific establishment for a generation or more (perhaps even earning scientists a spot next to lawyers and journalists as the most despised profession).
For that reason the data to date should be telling everyone to be cautious, adopt policies that would make sense even if CO2 is not a threat, and wait until at least the next solar maximum. If the warming returns with a vengeance then the alarmists will be vindicated and it will be virtually impossible to oppose more radical action on CO2. If the warming does not reappear then society avoids wasting precious resources on an exaggerated problem.
Gavin,
The IPCC communicated their projections for the underlying climate trend stripped of weather noise with this graph:
Whether one likes it or not, this graphic conveys the idea that projections are being made over a continuous period of time starting in 2000 and ending in 2300. That is: The IPCC are communicating short term projections. Note the graph includes uncertainty intervals and they don’t have the shape you suggest.
One might question whether the decision to communicate this information was wise. One might question whether the IPCC fully grasped what they were communicating. But the fact is, they include these graphs, and so they communicated projections starting in year 2000.
Maybe, if you could have persuaded them, they should have communicated the sorts of uncertainties you suggest. However, they did not. The reality is they communicated those graphs.
The only question is what are they projecting. The answer appears to be the underlying climate trend stripped of weather noise.
This is why I compare earth trends stripped of weather noise to their projections.
As for the standard deviations you mention in the first paragraph of your comment:
First, it is obvious, as you say, the IPCC did not bother to communicate that sort of information. So, all we know for sure is they didn’t. And, like it or not, it is not appropriate for the IPCC to rely on climate bloggers, however trained, talented, or well informed, to re-interpret or reframe what the IPCC as a consensus group actually communicated. The IPCC communicated small uncertainty intervals.
Second, it is not at all clear to me that the numbers above mean anything for the real world. First, because models are approximate, individual models may not reproduce real earth weather. But even if one does, your full collection could produce a very wide variance because your individual trajectories are generated from:
* different models each with different parameterizations. (That is: slightly different “planets” each of which is an attempt to create an approximation for earth.)
* different histories before 2000. Some models have volcanic eruptions in the past, some don’t. Same for solar. Same for a few other details and
* slightly different forcings during 2000. (Some have solar, some don’t.)
So, the range of variances in 8 year OLS from your various model-earths should be larger for the models than for the real earth.
Since the variances in 8 year trends across all model runs should be larger than for the real earth, there is no reason to expect the consensus of the IPCC would have been to suggest that variance is the variance for the real earth. (And they didn’t in any case.)
Equally importantly, there is absolutely no reason in logic, science or statistics to use your proposed standard deviation in OLS trends to test whether the IPCC 2C/century projection matches data collected from the real earth. To determine the confidence interval in the real world trend, one should use the real confidence intervals based on real earth variances. These confidence intervals are half what you suggest. Given the current trend, they say 2C/century is not consistent with earth data.
But, one (particularly a modeler) might argue they believe the variances you came up with using runs over a collection of different models are somehow correct.
But, it can be shown they’re not. If we calculate the standard error of OLS trends for all 8 year trends in the thermometer record for the real earth, the real earth standard errors are distinctly smaller than the value of 2.2 C/century you suggest. The real earth standard errors include periods with large variations in forcing (volcanic eruptions, changes in aerosols, changes in GHGs), garden variety measurement errors and the dramatic “bucket-transition” period. So, it is rather remarkable the variance over all your model runs with similar forcings is larger than for the real earth over the full thermometer record!
In my view, given strong theoretical reasons to expect your standard deviation over all models should be larger than seen on the earth, and empirical evidence it is too large, it is rather amazing that you have convinced yourself that the variance across a collection of models is remotely appropriate to use to find the range of trends consistent with real earth data!
Do you mean you fit OLS to every model run, calculated the ±95% confidence intervals for all the runs, and you got ranges from 0.7 to 13 C/century? (Using what method?)
Did it occur to you to compare this with the range of confidence intervals for all 7 year trends in the actual honest to goodness real earth thermometer record, correcting for red noise? You might want to do so. Let me assure you it does not range from 0.7 C/century to 13 C/century! (Nowhere near this.)
The fact that your confidence intervals are so strikingly different from those in the thermometer record suggests the models are failing to replicate the features of the weather noise rather dramatically! (I’d have to check, but I think ±0.7 C/century is also pretty bad.)
So, if the weather noise in these models appears to have characteristics this far off from the real earth weather noise, why would anyone use it to obtain confidence intervals for the trend on the real earth? Why not just use the features of real earth weather, as I do? (My method has the advantage of being conventional!)
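To be concrete about what that comparison would involve, here is a rough sketch. The array name is hypothetical, and the red-noise widening below is a simple inflation factor rather than any particular published recipe:

```python
import numpy as np
from scipy import stats

def rolling_trend_ci_halfwidths(monthly_anoms, window_years=7):
    """95% half-widths (C/century) for every overlapping 7 year OLS trend
    in a monthly record, inflated for lag-1 autocorrelation."""
    w = window_years * 12
    t = np.arange(w)
    out = []
    for start in range(len(monthly_anoms) - w + 1):
        y = monthly_anoms[start:start + w]
        slope, intercept, r, p, se = stats.linregress(t, y)
        resid = y - (intercept + slope * t)
        rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid ** 2)
        inflate = np.sqrt((1 + rho) / max(1 - rho, 1e-3))   # red-noise widening
        out.append(1.96 * se * inflate * 12 * 100)          # -> C/century
    return np.array(out)

# halves = rolling_trend_ci_halfwidths(hadcrut_monthly)  # hypothetical anomaly array
# print(halves.min(), halves.max())   # compare with the 0.7 to 13 C/century claim
```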
Gavin. That question has been explored here. JohnV asked it.
Or more precisely, I started a similar exercise with real earth data, and limiting to the volcano free periods starting in the 20s. (The shortness of the one volcano free period prevents comparison to 30 year trends, so I was looking at 20.).
I stopped when we ran across the whole “jet-inlet” snafu. But basically, I was doing this for the big “no-volcano” period.
At worst, I was getting uncertainty intervals 20% larger than I get using OLS or Cochrane-Orcutt. I only got the +20% because I was getting too many rejections during the period with the jet-inlet/bucket transition. Once that got thrown out, the confidence intervals I get using the least squares fit work just fine.
I can dig that up again for you and present it in a more coherent fashion as I didn’t document it as thoroughly as I might have had the confusion over the jet-inlet data not intervened.
Honestly, Gavin. I think you are looking at a lot of model data, and not checking whether the numbers for standard variances, autocorrelations of trend and confidence intervals agree even remotely with the thermometer record.
I know modelers get used to looking at model data, and doing model inter-comparisons. It’s a reasonable and necessary thing to do. But if you want to convince other people, you really need to sit down and start looking at each number you get across your models, comparing those to what exists in the thermometer record, and communicating that to your audience. Because those standard deviations, autocorrelations and confidence intervals simply don’t match real world numbers. At all.
Gavin wrote: “They predicted instead a warming of around 0.2 deg/dec for a 2 to 3 decade period.”
Then: “A better question to ask is how the likely trend over 30 years is informed by the trend over a shorter period. i.e. what is the conditional probability of the 30 year trends given the trend and uncertainty for the first 7 years.”
Why 30 years? Wouldn’t you want to check the other limit if you’re checking the correctness of IPCC predictions? If you got really ambitious, you could do all years from 20-30 and really have some fun.
Arthur:
The current period doesn’t have volcanic eruptions, so that’s not an issue. But yes, we’ve discussed the possibility of energy at longer cycles.
As you know, I’m happy to discuss the other cycles, and we have been exploring the possible upper magnitude. JohnV thinks the solar cycle explains this–but my impression is most climatologists think the solar cycle isn’t that strong.
One of the things I have slated is trying to go through and find the amount of energy in the longer cycles by looking at control runs from the models. On the one hand, I don’t have much confidence they are necessarily right. But on the other hand, if we had spectral components from a control run, we could come up with a form for better confidence intervals, and sort of meld that with data. (Unfortunately, without doing this, I know what I’m saying isn’t at all clear.)
But, using classic methods, the trends falsify. It’s not a good sign for the models.
Lucia
that’s a smack from Arthur Smith you naughty girl. Arthur reckons you have done some “useful stuff” but you’re getting too close to the bone now and you need to be put back in your place. My goodness, they might be calling you a denier soon, I mean all that traffic you are generating, surely Arthur isn’t suggesting that you have an ulterior motive for looking at this issue.
Arthur my boy, you’re just plain creepy in that last paragraph.
It would seem that Gavin continues to have difficulty separating models from the real world in his thinking, considering his statement “The reason is the substantial decadal variability in the models (and in the real world) which you neglect at your peril.”
What does the variability in an assortment of models have to do with evaluating a predicted trend line from a fixed point in time forward? There’s an implicit assumption that the model decadal variability is reflective of the real world, which I don’t think has been established.
Julian–
I don’t think Arthur said I’m doing this for the traffic. Anyway, knitting brings more traffic. So, if that was my motive, I’d just go back to explaining that. 🙂
As for overstating: I am saying I have falsified 2C/century and nothing more.
If others misstate it here or elsewhere, I correct them. It is perfectly possible for Arthur to tell people that all I've falsified is the 2C/century, and that if they don't believe him, they can come ask me.
I’ll tell them I certainly haven’t negated “the entire body of painstaking climate research of recent decades.” (I can’t remember reading anyone who says I did that anywhere. But if they did, I would simply correct the impression. You can send them here, and I’ll say so.)
There is no reason to pretend that XC/century (for any value of X) is consistent with the data simply because some people might misunderstand. This is equally true for 0C/century, which is not consistent with the longer term data and 2 C/century, which is not consistent with data since the time the projections were made.
Given the need to make policy decisions forward, it would be unfortunate to blindly accept 2 C/century just as it is unfortunate to blindly accept 0C/century. All scientific hypotheses need to be compared against data critically. Models are complex, and predictions may be inaccurate for many reasons. If they appear flawed, one needs to say so.
It is perfectly possible for Arthur to tell people that all I’ve falsified is the 2C/century, and that if they don’t believe him, they can come ask me.
Ok, I’ll do that – in fact I did already on at least one occasion. The main culprit I’ve seen is our friend “kim”, and I’m not sure “believing me” has ever crossed her/his mind… but I’ll give it a try…
On solar cycle subtraction (and removing other spectral components) – I was looking at that, but basically gave up at the point I realized the underlying causes are not strictly periodic. Solar cycles in particular. I’ve seen some other analyses that attempted to deal with the non-periodicity (Shaviv’s cosmic ray stuff for instance) – you might be able to do some rough thing just based on the sunspot numbers. Maybe I’ll give that a try next – or maybe Dr. Thompson should plug that in along with ENSO…
Well, that was an evasive answer!
But you haven’t done this. ENSO is not the only source of unforced variability.
Boris–
I don't consider correcting for ENSO the step that "strips" out weather noise. I consider computing the best fit trend to be the step that "strips" out weather noise as best as possible.
The idea of an underlying climate goes sort of like this:
The “weather” is a stochastic process which can be measured using metrics like GMST (among others.) Any time series of weather, including that we see on earth, is a single trajectory drawn from all possible trajectories of the population of “weather” with whatever we consider the “same” initial conditions, forcings or other relevant features.
Climate is the 'expected value' of all possible weather trajectories in this population. It is sometimes called the "underlying climate" relative to any single weather trajectory.
Under the ergodic hypothesis, the long time average of weather will give identical results to the ensemble average over all trajectories. (This is a hypothesis that has never been proven for any system involving the Navier-Stokes equations. In fact, it is known to be wrong in some cases. However, it is widely used in various forms.)
So, under the ergodic hypothesis, applying a best fit regression to the temporal trend is a way to average out (or strip) the “weather” or “weather noise” and estimate the underlying climate trend.
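One way to write this abstraction down, using my own notation rather than anything taken from the IPCC or the models:

```
% T_i(t): GMST of the i-th weather trajectory;  C(t): the underlying climate.
\begin{align*}
  C(t) &\equiv \mathbb{E}\big[T(t)\big] \approx \frac{1}{N}\sum_{i=1}^{N} T_i(t)
    && \text{(climate as the ensemble mean over trajectories)} \\
  \lim_{\tau\to\infty}\frac{1}{\tau}\int_{0}^{\tau} T_i(t)\,\mathrm{d}t
    &= \mathbb{E}\big[T\big]
    && \text{(ergodic hypothesis: time average equals ensemble average)} \\
  T_i(t) &= C(t) + \varepsilon_i(t), \qquad
    \hat{\beta}_{\mathrm{OLS}} \approx \frac{\mathrm{d}C}{\mathrm{d}t}
    && \text{(the trend regression averages the weather term } \varepsilon_i \text{ toward zero)}
\end{align*}
```

The last line is only an approximation: the OLS slope estimates the climate trend over the fitting window when the underlying climate is roughly linear there; the weather term is what gets averaged out.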
I didn't answer previously because I thought you were asking a rhetorical question. After all, you discuss weather vs. climate quite frequently. So I presumed you were familiar with the difference and accept the abstraction as a valid way to learn something about climate vs. weather.
Arthur–
I don't want to try to correct data for the sun because, in my experience, if the goal is to do a hypothesis test, the uncertainties in the corrections quickly pile up to be larger than the uncertainty contributed by the effect one wishes to "correct" out of the data. To some extent, you need to recognize that data are noisy. But, noisy or not, if one correctly estimates the uncertainty intervals, and a hypothesis is outside those supported by the data, then that hypothesis, treated as "true" prior to collection of data, is falsified by the data.
So, in the end, I am more concerned with trying to obtain correct, defensible estimates of the uncertainty intervals rather than trying to make them small by correcting for effects like the solar cycle. (That said, given what uncertainty intervals mean, one can't just inflate them without justification. Doing so turns 95% confidence limits into 99.999% limits.)
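As a purely illustrative sketch of the "piling up," assuming the correction's uncertainty is independent of the regression uncertainty (made-up numbers, not from any data set):

```
# Illustrative only: how an independent correction uncertainty widens the 95%
# interval on the trend instead of narrowing it. The numbers are made up.
import math

sigma_trend = 1.1   # C/century: 95% half-width of the fitted trend (illustrative)
sigma_corr = 0.8    # C/century: 95% half-width attached to the correction (illustrative)

combined = math.sqrt(sigma_trend ** 2 + sigma_corr ** 2)
print(f"combined 95% half-width: {combined:.2f} C/century")   # about 1.36 C/century
# The corrected series only pays off if the correction's own uncertainty is small
# compared to the scatter the correction removes from the residuals.
```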
So, oddly enough, I actually do want to look at "model data" for a bit. I want to switch to examining the spectra in 'model data' control runs. That's where the "no-volcano" periods are (at least in model form).
Of course, along the way, I might manage to see what the properties of those time series look like.
But, should I ever manage it, the outcome will be a method of estimating uncertainty intervals that I believe can be applied even-handedly to both the "no warming" hypothesis and the 2C/century hypothesis. (Whether other people will believe it, I cannot say. But this is my blog, so I get to post and explain. 🙂 )
Well, here is a thought from left field. As Gavin notes, the IPCC makes explicit "projections" for 20 year periods, the first being 2011 to 2030. Their graphs show information between 2001 and 2011, but apparently these lines are not to be taken seriously. Be that as it may. So, we always have this option: wait until 2030 to see how well the models did. I'm willing to do that, is Hansen? Ok, that's a little snide. Let's look at some simple math.
We start with the IPCC:
"All models assessed here, for all the non-mitigation scenarios considered, project increases in global mean surface air temperature (SAT) continuing over the 21st century, driven mainly by increases in anthropogenic greenhouse gas concentrations, with the warming proportional to the associated radiative forcing. There is close agreement of globally averaged SAT multi-model mean warming for the early 21st century for concentrations derived from the three non-mitigated IPCC Special Report on Emission Scenarios (SRES: B1, A1B and A2) scenarios (including only anthropogenic forcing) run by the AOGCMs (warming averaged for 2011 to 2030 compared to 1980 to 1999 is between +0.64°C and +0.69°C, with a range of only 0.05°C). Thus, this warming rate is affected little by different scenario assumptions or different model sensitivities, and is consistent with that observed for the past few decades (see Chapter 3). Possible future variations in natural forcings (e.g., a large volcanic eruption) could change those values somewhat, but about half of the early 21st-century warming is committed in the sense that it would occur even if atmospheric concentrations were held fixed at year 2000 values."
Now, let's decipher this. The warming DELTA averaged for 2011 to 2030 is between 0.64C and 0.69C for SRES B1, A1B, and A2. Those scenarios postulate vastly different GHG forcings. Essentially, no matter what we do, the average increase in temp from 2011 to 2030 will be 0.64 to 0.69C OVER the average we saw from 1980-1999. Correct?
Well, what was the average from 1980-1999? According to HadCrut, the average anomaly for that period was 0.08C. That's 0.08C above the 1961-1990 baseline. So, we add 0.64C to 0.08C and we get 0.72C. And we add 0.69C to 0.08C and we get 0.77C.
So, the IPCC project that the average of 2011 to 2030 will be 0.72C to 0.77C relative to the HadCrut 1961-1990 baseline. Follow? And 0.64C to 0.69C relative to a 1980 to 1999 baseline. Follow?
Simple question: what trend in GMST do you have to assume between 2001 and 2030 to produce an average GMST over the period 2011-2030 that satisfies the prediction that the average between 2011 and 2030 will be 0.64C higher than the average between 1980 and 1999?
Hint: 0.2C per decade won't quite get you there. You have to assume a trend of higher than 0.2C per decade from 1999 onward to get to a place where the 20 year average of 2011 to 2030 exceeds the average of 1980-1999 by 0.64C.
Check my math. I have stuff to do.
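Here is a minimal sketch of that arithmetic. The key assumption (mine, for the sake of the check) is that annual GMST follows a straight line pinned to the 1980-1999 mean at that period's midpoint, so the 2011-2030 average is just the line's value at its own midpoint. All numbers are deg C relative to the 1961-1990 baseline.

```
import numpy as np

base_years = np.arange(1980, 2000)    # 1980-1999
proj_years = np.arange(2011, 2031)    # 2011-2030
base_mean = 0.08                      # HadCrut 1980-1999 mean anomaly (from the comment)
required_delta = 0.64                 # low end of the quoted IPCC 2011-2030 warming range

def proj_mean(trend_per_decade):
    """2011-2030 mean anomaly for a line pinned to base_mean at 1989.5."""
    slope = trend_per_decade / 10.0
    return np.mean(base_mean + slope * (proj_years - base_years.mean()))

gap_years = proj_years.mean() - base_years.mean()     # 31 years, midpoint to midpoint
required_trend = 10.0 * required_delta / gap_years    # C/decade

print(f"2011-2030 mean at 0.2 C/decade: {proj_mean(0.2):.2f} C")        # 0.70 C, i.e. +0.62 C
print(f"trend needed for +0.64 C:       {required_trend:.3f} C/decade")  # about 0.21 C/decade
# Under these assumptions, 0.2 C/decade falls just short of the +0.64 C target,
# and the required trend works out to roughly 0.21 C/decade.
```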
You had me at “ergodic hypothesis”……
I am confused, however. I thought the difference between “weather” and “climate” was that it’s “weather” if one’s predictions and models start looking shaky but becomes “climate” if and when one’s predictions are looking good. It is my understanding that from the mid-1970s thru 1997 we had a lot of “climate” but for this entire recent decade we have had mostly “weather” instead.
Bet RC will not reply any further
Lucia, the figure you show is exactly a summary of the data I am talking about. The uncertainties in that figure are one sigma for individual years after summing over ensemble members. Your eyesight must be significantly better than mine if you think you can derive an uncertainty bound on the 7 yr trend from that figure alone. The numbers I showed are exactly what you should want from those runs, and they have exactly the distribution I showed.
You seem to be arguing that the details of what made up the IPCC projection and what their uncertainty is are completely irrelevant to your falsification attempt. That seems like an odd stance to me.
Your point about the models variance versus the real world is worth thinking about of course. The models do not all have exactly the same amount of internal variability. However, the uncertainty in the short term trends is mostly due to the shortness of the data series (remember these are all annual means) and not because of some enormous range of internal variability.
But we can put as many prior restrictions on which models you want to use as you like. For instance, the GISTEMP OLS trend for 1975 to 2000 is 1.6 +/- 0.6 deg C/century (95%, no correction for auto-correlation). We could use that as a screen for 'sensible' model runs – so let's only consider runs that had a 1975-2000 trend of between 1.0 and 2.2 degC/century, and with confidence limits of between 0.4 and 0.8 degC/century (feel free to suggest other tests). That filter leaves 27 out of 55 simulations, and their 2001-2007 trend distribution is….. N(2.0,2.0). With tighter bounds on the 1975-2000 trend (say between 1.3 and 1.9), you get… N(2.2,1.4) – again, not significantly different. In both cases, if you impose an additional constraint that the 2001-2007 trend is < 1 degC/decade, you get the likely trend for 2001 to 2030 to be around 2.2 +/- 0.8 degC/century. Again, basically the same as before.
The problem you have is that there is one realisation of the real world and from any short period you cannot constrain the statistics of intrinsic variability that is longer than that period. Statistically it is hard to do even for longer periods because of the different forcings and such and so you need to use models to do the attribution. I have just demonstrated that the difference between selecting ‘good’ models or just using them all is pretty small, but I’m happy to try whatever recipe you suggest.
PS. do not confuse the distribution of a random variable (the trend) with the uncertainty in defining the trend in a single realisation. They are not the same. Try some monte carlo experiments with synthetic AR(1)+trend time series to see.
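For readers who want to try the experiment Gavin describes, here is a minimal sketch: many synthetic AR(1)-plus-trend series, each fit by OLS. The trend, noise level and lag-1 autocorrelation are illustrative assumptions, not values taken from any model or temperature record.

```
import numpy as np

rng = np.random.default_rng(0)
n_years, n_runs = 7, 5000        # short annual series, like a 2001-2007 trend
true_trend = 0.02                # deg C/yr (0.2 C/decade), assumed
sigma, rho = 0.10, 0.6           # innovation std dev and lag-1 autocorrelation, assumed
t = np.arange(n_years, dtype=float)

def ar1_noise(n):
    """AR(1) noise with a stationary start."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho ** 2))
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal(0.0, sigma)
    return x

A = np.vstack([t, np.ones_like(t)]).T
sxx = ((t - t.mean()) ** 2).sum()
fitted, naive_se = [], []
for _ in range(n_runs):
    y = true_trend * t + ar1_noise(n_years)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    fitted.append(coef[0])
    naive_se.append(np.sqrt(resid @ resid / (n_years - 2) / sxx))   # textbook OLS SE

print(f"spread of fitted trends across realisations: {np.std(fitted):.4f} C/yr")
print(f"mean within-realisation (white-noise) SE:    {np.mean(naive_se):.4f} C/yr")
# With positively autocorrelated noise the across-realisation spread is noticeably
# larger than the naive per-realisation standard error: the distinction drawn in
# the PS above between the distribution of the trend and the uncertainty in
# defining it from one realisation.
```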
Well, re the rate accelerating during the last part of the last century keep in mind what Josh Willis said over at dotEarth today:
There may be (well, is, actually) an underlying rise in temperature, but the rate of warming may not be entirely what the models think; it may be strongly affected by feedback from the ocean, which is sometimes positive and sometimes negative.
Maybe that’s why they call it weather and would rather ignore it. But this ocean feedback, positive or negative, can run in thirty year stretches. If Tamino or Gavin should say “but the PDO is negative now so you can’t use just ten years” we can counter with “but the PDO was positive during the faster warming at the end of the last century” which would have to remove some of the alarmism from their vocabulary. So I guess they just won’t go there and will simply rely on ‘but ten years is too short a trend’.
Just as the temperature hasn’t risen as much as projected, the warming rate hasn’t increased as much as they think, either. And for the past six months, at least, the rate has reversed.
The key bit about adjusting for ENSO is how it affects the hindcasting.
If ENSO is really currently shifting an underlying AGW trend of 2.0C/c down to roughly 0.0C/c, (and this whole debate is focusing on “how close is it really to zero?” ) then ask this question:
How much was the other hemicycle contributing to the positive growth of the nineties? And does that really remain internally consistent with an underlying trend of 2.0 C/c?
Additionally, mild temperature growth (say, 1.0C/c) isn’t incompatible with the pre-industrial temperature record. The actual size of the background trend is up for debate, but the papers focus on variations between strong-growth, growth, and stasis – none of which is ‘declining temperatures’.
The blockquote thingy got mangled. Josh Willis is the first paragraph in the blockquote; the rest is me.
I also quoted more of his, but it got swallowed.
Lucia, according to you, the data falsifies, at 95% confidence, the 2C/century (0.2C/decade) projection. What do your methods say about, say, a trend of about 1.7C/century (0.17C/decade)? That is the trend from 1977 to at least 2005, and it is about what Pat Michaels and Hansen now predict (Hansen assumes some emissions reduction to get such a forecast):
http://www.worldclimatereport.com/index.php/2006/01/31/hot-tip-post-misses-the-point/
You seem to think the characteristics of the actual weather shouldn't be used to estimate uncertainties. This seems an odd stance to me.
Actually, if I were asking the question "Would I consider the earth's single realization an outlier compared to the batch of model runs?" I would look at this your way. I think that's an important question when developing models, since you do want to know that, at a minimum, your full spread did capture the earth's trajectory.
However, that’s not the question I’m asking. It’s also not the one most people who don’t spend their lives developing climate models ask. I am asking:
Is the central tendency communicated by the IPCC consistent with the earth’s data?
To answer my question, we must look to the empirical data and find the range of trends consistent with it. And the answer to that question is "No".
I think if you compare our questions, you will likely see why I don’t give your uncertainty ranges precedence over those of the data. I also think you will understand why your model uncertainty ranges are larger than mine. In fact, your uncertainties are required to be larger than the ones I use.
If models were perfect, and people's ability to develop SRES were perfect, our uncertainties would be identical. The models' weather trajectories would simply provide the uncertainty bounds associated with the population of all weather we are trying to predict.
However, since the models are imperfect, and the SRES have variations, your uncertainties become larger.
On some other issues: I view the graphs at 800% in Illustrator. 🙂 Tomorrow I'll make a few graphs to show a few other issues associated with comparing data to that graph. But, now I need to watch the Colbert Report!
Boris,
if short term trends have little or no meaning, what was Hansen testifying about in 1988 after 10 years of warming??
We are currently at the end of ten years of neutral. Should he be testifying that there is a disaster of neutral weather ahead we have to do something about??
Basically all the hoopla is over a 20 year period. How many times have we been told there needs to be at least 30 years??
You don’t have a 30 year warming period.
The reason, of course, that all these posts are very important is that there are billions of dollars and people's lives that are starting to be affected by this "theory" (e.g., electricity bills, Australia 2010). The work being analyzed here is VERY important, and thanks to Lucia, who is analyzing the data as an unbiased scientist.
Look, a long term trend is just a bunch of short term trends one after the other. If short term trends aren’t meaningful at all then a long term trend can’t be meaningful either. People who claim that short term trends aren’t meaningful need to modify their language.
Lucia,
Figure 10.5 in Chapter 10 of AR4 WG1 report page 763 shows a spaghetti graph of the 21 IPCC scenario runs.
When I look at the A1B plot closely, 4 or perhaps 5 of the 21 projections show cooling or very little warming by 2008. It’s not obvious because not all the lines start from 0 in 2000. For some of the lines it’s hard to tell where they start.
In my view there is a yellow line, 2 green lines and a dark blue line at the bottom when you look at about 2008. The possible 5th is another dark blue one, but you can’t see its starting point.
By the end of the century, the yellow line shows 2C of warming and the others are between 2C and 3C warming.
How does that square with your statistic? It seems to me that there is more like a 20-25% chance that the observed temperatures match the scenario range.
SteveMilesworthy:

I agree that it’s very difficult to read the spaghetti diagrams. Here’s what I wrote before:
I posted A2 because it has the fewest strings of spaghetti. For the distinct ones, at the edge, you can see that those that are low stay low. Those that are high stay high.
But even at that, I'm not entirely sure that criss-crossing is entirely weather noise. We know that criss-crossing will occur if some models have different overall time constants for responding to the climate. This, coupled with rebaselining, will result in variations.
Gavin:
If we are going to discuss the statistics of Model E (which I would like to do), is it possible for me to get access to time series of individual runs? Steve Mosher told us where to get runs, but they are all averages over 5 realizations. If I had the individual trajectories, I could also show readers graphs so they would know what we are discussing too.
On this: “so you need to use models to do the attribution.” Sure. I’ve never stated models are useless. I’m just saying the central tendency of the current prediction looks like it has over-predicted the trend and that is based on statistical measures. I know the world’s weather is only one realization, but it is the only realization that we can be sure is governed by the real earth’s physics.
I ran a bunch of Monte Carlo simulations with white noise + trend and also AR(1) + trend series around June 14. I compared the distribution of the random variable (the trend) to the uncertainty in defining the trend from a single realization.
For white noise, I saw what I expected based on what my sophomore-year math book says about estimating the distribution of the trend from the uncertainty in a single realization; for red noise, I saw what the econometrics texts say.
What are you suggesting I would see?
Lucia,
Gavin might send you to the IPCC repository to get access to individual runs. I hope he doesn't.
To get access you have to write a short proposal and then they get back to you.
It would be nice if GISS made the individual runs for ModelE available. Gavin is one of the best at making stuff available. So, hope springs eternal.
Anyway, I have an avalanche hitting me now otherwise I would comment more.
Gavin,
Thanks for showing up and commenting. Leave a haiku if you get a chance, or make a climate cookie bet.
ask lucia she’ll explain
Your second answer to me was more evasive than the first. Oh well.
Sorry if this discussion has been revisited before, but in the A2 plot, the brown-orange line (coolest at 2008) shows a drop between 2000-2008, but ends up following the trend very closely. There’s a purple line that similarly drops initially and ends up about 0.3C below trend – but still gives a 3C rise. The dark blue that ends up meeting the purple line at 2100 also starts with a dip.
One is on the trend. Two are just below the 3C central trend, but they're not far off. And they're all in the shaded area of the other AR4 plot.
(Incidentally, I can see these particular lines individually in the AR4 pdf because my graphics card is slow enough that the lines are not all rendered at the same time. Double-clicking on the image before it has fully rendered helps slow the rendering.)
Steven–
I agree that Gavin is one of the best at being open and making stuff available.
If the way to get them is to write a proposal to the IPCC repository, I’d be happy to do that.
On the cookie bets: I might propose a cookie bet with Gavin, but we'd need to negotiate a trivial climate-oriented bet, as those are the best ones on which to bet a dozen home-baked cookies. It may, of course, turn out that Gavin doesn't like cookies. Worse, like my sister, he could be gluten intolerant, and we'd have to switch to something like meringues (which don't ship well). Also, I already have the bet on GISS temp with Tilo, and I can't risk having to bake a billion cookies at Christmas!
Boris,
What are you talking about? In comment 3968, I provided a lengthy qualitative discussion of the meaning of the underlying climate trend. I avoided mathematical terms like "compact support" and "manifold," and extensive dissertations on chaos and the various disagreements over whether or not the system is ergodic.
SteveMilesworthy–
To decide what I think of your verbal description of how this looks in slow-motion rendering, we would both need to be able to watch the movie, stop it, and discuss precisely when the cross-overs occur so we could make our various points. With regard to incorporating the information in the spaghetti graph into ideas about testing the hypotheses, we need to do much more than squint at those and argue about a) whether cross-overs occur, b) how often, and c) for which cases, etc.
In order for all of us to have a fruitful discussion of this, a few of us (not just Gavin) need access to the time series for those trends. I can get access to gridded data, but post-processing all that to obtain the time series in the IPCC documents is a bit much to expect of members of the public trying to make up their minds.
But, you must also remember: I base my hypothesis test in the central tendency predicted by the IPCC on the features of the real earth data. Obviously, if I believe we need to test the estimate of the mean tendency, I’m not going to blindly accept the idea that the model runs must provide correct estimates of the higher order moments (i.e. standard deviations) or other features of the time series (spectral features, autocorrelation etc.)
I might be willing to buy the idea that squinting at the spaghetti graphs is useful if the standard errors for the 8 year trends that Gavin estimated based on that output appeared remotely realistic. They don't.
His standard errors are larger than one would expect based on the full thermometer record, including periods with no volcanic eruptions, no measurement error etc. This suggests that there is something wrong with either a) the magnitude of the interannual variation or b) features like the autocorrelation or spectra of the GMST.
Of course, there may be nothing wrong, but in that case, the apparent mis-match must be explained. Until it is, we really aren’t going to agree on anything by squinting at those graphs because we disagree about the ultimate meaning of those graphs vis-a-vis falsifying the prediction for the central tendency against measured data.
So, with luck, it may turn out there is some way for people like me to access the time series. I have the time and patience to plot them, and I even take requests! So, if they are available, I'll show them and we can discuss them.
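For anyone who wants to see what that comparison against the thermometer record might look like, here is a rough sketch (my framing only): slide an 8-year window across an annual GMST series, fit an OLS trend in each window, and compare the spread of those trends with the model-derived standard errors. The file name and two-column layout are placeholders.

```
import numpy as np

data = np.loadtxt("annual_gmst_anomalies.txt")   # placeholder: columns = year, anomaly
years, anom = data[:, 0], data[:, 1]

window = 8
trends = []
for start in range(len(years) - window + 1):
    t = years[start:start + window]
    y = anom[start:start + window]
    slope, _intercept = np.polyfit(t, y, 1)
    trends.append(10.0 * slope)                  # C/decade

print(f"std dev of observed {window}-yr trends: {np.std(trends):.2f} C/decade")
# Overlapping windows are not independent, so this understates the sampling spread
# somewhat. Even so, if the model-based standard error for 8-year trends is far
# larger than this number, something about the models' interannual variability or
# autocorrelation does not match the observed record.
```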
What a breathtaking discussion. So, let's start with a quote from Gavin:
“The problem you have is that there is one realisation of the real world and from any short period you cannot constrain the statistics of intrinsic variability that is longer than that period.”
This reminds me of a paper by Hansen, Schmidt and others whose abstract goes like this:
“Our climate model, driven mainly by increasing human-made greenhouse gases and aerosols, among other forcings, calculates that Earth is now absorbing 0.85 +/- 0.15 watts per square meter more energy from the Sun than it is emitting to space. This imbalance is confirmed by precise measurements of increasing ocean heat content over the past 10 years.”
http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf
So my question would be, how can the imbalance be confirmed by “precise measurements” over a period that Gavin claims is shorter than the intrinsic variability of the climate system?
The next question that I have has to do with the idea that 10 years does not constrain the intrinsic variability, but that somehow 30 years does. How in the world do you arrive at the conclusion that 30 years covers all of the intrinsic variability? Certainly you cannot do this by looking at any multi-century temperature record, where 30 year trends vary as readily as 10 year trends.
Beyond that, I have a problem with dismissing 10 year trends for another reason. I do agree that there are strong elements of variability that affect 10 year trends. But the warming community seems to imply that they understand what these elements are. So if you have a flat 10 year temperature trend and you can show that such a trend is caused by ENSO or PDO or volcanoes, or solar, etc., then you might be able to say that the natural elements of variability have overridden the CO2 forcing trend that would otherwise be seen.
But the current flat temperature trend (which is actually 11 years long) has no explanations in natural variability. Gavin's own ENSO data clearly show that the flat trend was not caused by ENSO. Of the 0.2C per decade temp rise that one would expect from CO2 forcing, only 0.01625 C can be attributed to ENSO. So where is the missing 0.183 C? It wasn't hidden by volcanoes. The claim is that solar variability is far too weak to override CO2 forcing. So if the current flat trend is due to natural variability, what are the elements of natural variability that are giving it to us? And if we don't know enough about natural variability to answer that question, then how can we extract the climate sensitivity signal from a natural variability that we do not completely understand?
By the way, I looked at Gavin’s ENSO corrected data from 1998 to the present and compared it to the uncorrected data for HadCrut3v here:
http://reallyrealclimate.blogspot.com/2008/07/gavin-schmidt-enso-adjustment-for.html
Notice that the difference is hardly impressive. I think this soundly puts to rest the warmer claim that the data for the last decade have been cherry picked for their ENSO effect, and that minus the ENSO effect we would see the CO2 forcing signal clearly. In fact the ten year flat trend has nothing to do with ENSO.
I hacked together a method for determining the ENSO effect back in May, here:
http://reallyrealclimate.blogspot.com/2008/05/ten-year-hadcrut3-enso-effects.html
And it is worth noting that the Thompson method does not disagree with the conclusion that I reached at that point, which was that the current flat trend is not ENSO related.
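A minimal sketch of that kind of comparison: fit OLS trends to the uncorrected and ENSO-corrected monthly series over the same window and look at the difference. The file name and column layout are placeholders for however the corrected data are actually distributed.

```
import numpy as np

def trend_per_decade(time, anom):
    """OLS slope in deg C per decade."""
    slope, _intercept = np.polyfit(time, anom, 1)
    return 10.0 * slope

# Placeholder file: decimal year, uncorrected anomaly, ENSO-corrected anomaly
data = np.loadtxt("hadcrut3v_enso_comparison.txt")
year, raw, corrected = data[:, 0], data[:, 1], data[:, 2]
mask = (year >= 1998.0) & (year < 2008.5)        # roughly the window discussed above

print(f"uncorrected trend:    {trend_per_decade(year[mask], raw[mask]):.3f} C/decade")
print(f"ENSO-corrected trend: {trend_per_decade(year[mask], corrected[mask]):.3f} C/decade")
# If the two trends differ by only a few hundredths of a degree per decade, then
# ENSO cannot account for the flat decade, which is the conclusion stated above.
```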
http://www.ipcc-data.org/ar4/gcm_data.html
Here is one place to start. There is another location, but I can't find it.
Thanks Steve.
It appears time series are available. I requested a login. With luck, I’ll get one.
I have a login to another place–but I didn’t find time series. Maybe I didn’t look hard enough though! So, I’ll go back and have another look.
Lucia
Lucia,
I was looking at the wiggles in the graph above and it appears that the runs which show a flat trend for the current decade are only able to stay within the range of the other models because they have some suspiciously unphysical jumps. For example, the orange wiggle jumps 0.5 degs in 2-3 years in the 2050s and again in the 2070s. Such a jump has no precedent in the temperature record that I can find.
I think that reinforces your point about how outlier model runs are not an appropriate way to estimate the uncertainty in the IPCC projections.
So we know that the flat trend of the past decade has not been caused by ENSO, using Gavin Schmidt's own data. The elephant in the room, then, is obviously: what elements of natural variation did cause the decadal flat trend? Gavin has been asked this question on his own blog, in the ENSO thread, at least four times now. His determination to avoid giving a response is extremely revealing.
http://reallyrealclimate.blogspot.com/2008/07/gavin-schmidt-enso-adjustment-for.html
We are not talking about predictive models here; we are talking about a historic event for which we should have all of the data. The assumption behind the models is that we know all of the physics necessary to make projections for the future. But if we do not know enough physics, and if we do not know enough about natural variation, to explain what natural variability caused an event that has already occurred (the current flat trend), then how is this possible?