As many climate warming junkies are aware, a number of people began to notice a sort of “flatness” in the Global Mean Surface Temperature in the past decade. Way back in December, in an attempt to prove the recent trend in temperature has no statistically significant meaning, the Blogger formerly known as Tamino did the following:
- Generated a series consisting of 100 years of data with a known trend of 1.8 C/century and noise of ± 0.1 C. (This is approximately equal to the trend over the past two decades.)
- Manually picked out a specific data point that happened to be “the high” relative to the trend.
- Showed that if you intentionally pick a “high” outlier out of a series containing 100 points, there will be a negative trend afterwards, and
- Concluded that the recent downtrend is somehow statistically normal. (A rough sketch of this construction appears below.)
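For anyone who wants to see the mechanics, here is a minimal sketch of that construction in Python. It is a reconstruction from the description above, not Tamino's actual code; the random seed, the noise model, and the use of ordinary least squares for the fit are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)                    # assumed seed, for reproducibility
years = np.arange(100)
true_trend = 0.018                                # 1.8 C/century, expressed in C/year
temps = true_trend * years + rng.normal(0.0, 0.1, size=years.size)   # +/- 0.1 C noise

# Hunt down the point sitting highest ABOVE the known trend line...
residuals = temps - true_trend * years
high = min(int(np.argmax(residuals)), years.size - 10)   # clamp so 10 points remain

# ...and fit an ordinary least-squares trend to the 10 years starting at that point.
slope = np.polyfit(years[high:high + 10], temps[high:high + 10], 1)[0]
print(f"high point at year {high}; 10-year trend after it: {slope * 100:+.2f} C/century")
```

Run with a few different seeds, the trend fitted after the hand-picked high point often comes out negative even though the series warms by construction.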
Of course, it wasn’t too difficult for the Blogger formerly known as Tamino to do the analysis he wished to do. The difficulty is that his little “proof” hardly supports his conclusion that recent trends in GMST fall inside the bounds of what might be expected based on any statistical measures.
Yet there are those who, in comments at blogs, are suggesting Tamino’s so called proof demonstrates that looking at recent trends to assess the rate of climate change is cherry picking.
The reality is: If the IPCC is correct about its projections, the recent trends are highly unusual. There is sufficient recent data to support this statement using standard statistical techniques. These are the exact same techniques used to support the contention that the non-zero warming trend predicted by AGW is supported by empirical data.
What do statistics tell us about Tamino’s little test?
Based on analysis of 30 year trends, statistics tell us that the standard error in the 10 year trends is approximately 1.1 C/century. If the trends are normally distributed, we expect that 5% of all possible trends will have slopes less than zero. Clearly, if temperature trends stay rock steady at 1.8 C/century, if you sift through 100 years looking for one with a negative trend, you can find one. 2
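As a quick sanity check on those numbers, the normal-distribution arithmetic works out as stated. A two-line sketch, assuming the quoted mean of 1.8 C/century and standard error of 1.1 C/century:

```python
from scipy.stats import norm

p_negative = norm.cdf(0.0, loc=1.8, scale=1.1)            # P(a 10-year trend < 0 C/century)
print(f"P(negative 10-year trend) = {p_negative:.3f}")    # roughly 0.05, i.e. about 5%
```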
The ease with which one can pick out a negative 10 year trend might seem to be Tamino’s point: if the real trend is 1.8 C/century with a variability of 0.1 C around the trend, then by sifting through 100 years of data with absolutely no rule to limit your choice, you can fish out a 10 year series with a temperature trend that is less than zero.
But now one must ask this: Has anyone other than Tamino used Tamino’s technique to pick cherries? Has anyone used that technique to support the contention that the recent trend in temperatures is not exactly racing skyward?
No. Or to put it more emphatically: Absolutely not.
Constraints
In the first place, those who have examined recent trends are always compelled to select the most recent year as their endpoint: these include David Stockwell, Basil, and me.
Not one of us selected any old, entirely arbitrary, 10 year string out of 100 years to prove our point. Why would we? What could we possibly hope to demonstrate by showing there was or was not a flat point in the 10 year temperature trend back in 1940? Quite likely, no one is stupid enough to attempt that rather interesting method of cherry picking. Though, should someone do so, the proof that such a choice is silly now exists online.
In reality, what those who examine recent trends have done is to constrain our choice of strings by some rule.
David Stockwell uses the most recent 10 years. David correctly points out that the only free variable is the length of the data set used for analysis. So, to examine shortish trends he selects 10 years. Ten years may be somewhat arbitrary, but it has the advantage of being a round number. If one asked, “Why not use 11 years? Why not 9?” the answer is: “David limits himself to round numbers.”
I prefer to eliminate even this latitude. When testing the fidelity of IPCC predictions, I limit myself to data collected after they make their projections. This means that, due to statistical uncertainty, I assume the IPCC predictions are true until data sufficient to prove the IPCC projections false trickles in. (I actually anticipated the data would support their projections. But data are data, so what is one to do?)
Basil uses a standard statistical technique to select his start year.
Let us see what each of us is finding:
- David Stockwell examines the recent year to year drop. Using the IPCC definitions of “likely”, and “very likely”, he determines that the recent 10 year trend is inconsistent with the IPCC predictions of 2C/century. Depending on the data set used, the inconsistency ranges from “medium likelihood” to “very likely”.
- Limiting myself to data arriving after the IPCC made their projections (thereby limiting myself in a way that entirely prevents cherry picking), I find that, using an average of four measurement data sets, the recent trend is inconsistent with IPCC projections of 2C/century. Using the IPCC defined terminology for level of confidence, the central tendency of 2.0 C/century is “virtually certain” to fall outside the bounds of the data. The lower bound of their uncertainty intervals is “extremely likely” to fall outside the bounds of the data.
But notice that whether we use David’s method or mine, we each get similar, though not identical, results. The only difference in our conclusions is the confidence with which we falsify the most recent IPCC projections.
- Basil, guest posting at WattsUpWithThat?, is applying the Chow Test to see if there appear to be ‘breaks’ in the GMST temperature series. He identifies a structural break at 2001:2002. The Chow Test is a standard test, and Basil reports the statistical significance of the break. Using IPCC terminology, according to the test, he finds it is at least “very likely” that a break in the temperature trend occurred at 2001:2002. (A generic sketch of the Chow Test appears below.)
Using his own, more cautious terminology, Basil thinks the likelihood the statistical technique found a real hinge point falls between “probably” and “very likely”. He also notes that application of the techniques doesn’t explain the cause for the break.
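For readers who have not met it, here is a sketch of the Chow Test in its textbook form. This is not Basil's code; the synthetic series and the candidate break year in the example are placeholders chosen only to make the snippet self-contained.

```python
import numpy as np
from scipy.stats import f as f_dist

def chow_test(x, y, break_idx, k=2):
    """Chow F-test for a structural break after index break_idx (k = parameters per fit)."""
    def ssr(xs, ys):
        coeffs = np.polyfit(xs, ys, 1)                     # straight-line fit
        return np.sum((ys - np.polyval(coeffs, xs)) ** 2)  # sum of squared residuals

    pooled = ssr(x, y)
    split = ssr(x[:break_idx], y[:break_idx]) + ssr(x[break_idx:], y[break_idx:])
    n = len(x)
    f_stat = ((pooled - split) / k) / (split / (n - 2 * k))
    return f_stat, f_dist.sf(f_stat, k, n - 2 * k)         # (F statistic, p-value)

# Toy example: monthly anomalies that warm until "2001" and then flatten.
rng = np.random.default_rng(1)
t = np.arange(1980, 2008, 1 / 12)
y = np.where(t < 2001, 0.018 * (t - 1980), 0.018 * 21) + rng.normal(0, 0.1, t.size)
print(chow_test(t, y, int(np.searchsorted(t, 2001))))
```

A small F (large p-value) says a single trend line fits the whole series about as well as two separate fits; a large F says splitting at the candidate year buys a real improvement.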
We do not yet know what Basil will conclude about the predictive ability of the IPCC or statistical significance of any temperature trend. But I think it is safe to say that he will explain the basis for his choice of “start date”.
What do we find? And conclude?
No matter which major temperature measuring group we examine, or which reasonable criteria for limiting our choices we select, it appears possible that something not anticipated by the IPCC WG1 happened soon after they published their predictions for this century. That something may be the shift in the Pacific Decadal Oscillation; it may be something else. Statistics cannot tell us.
It may turn out that this something is a relatively infrequent, but climatologically important, feature that results in unusually cold weather. Events that happen at a rate of 1% do happen, at a rate of 1%. So, if the recent flat trend is the 1% event, then the 30 year trend in temperatures will resume.
For what it’s worth: I believe AGW is real, based on physical arguments and longer term trends. I suspect we will discover that GCMs are currently unable to predict shifts in the PDO. The result is that the uncertainty intervals on IPCC projections for the short term trend were much too small.
Of course, the reason for the poor short term predictions may turn out to be something else entirely. It remains to those who make these predictions to try to identify what, if anything, resulted in this mismatch between projections and short term data. Or to stand steadfast and wait for La Nina to break and the weather to begin to warm.
But back to the accusations of cherry picking?
Did Basil, David or I use the Tamino Technique for cherry picking?
Some may not like the results we are posting. But, clearly, we did not cherry pick to obtain them. We each constrained ourselves to starting from the most recent year and working back and each of us limits ourselves further by an additional constraint.
Some may dispute our methods for selecting our start dates. But everyone needs to select a start date and end date for statistical tests somehow. Some may dispute the notion of comparing IPCC predictions to short series of data. But the uncertainty intervals of hypothesis tests are expressly designed to account for the larger uncertainty in small data sets.
To those who wish to decree the time periods are cherry picked based on results they dislike, I would ask: How would you select your start dates to test the accuracy of IPCC predictions? If your choice, made to support the possibly high IPCC near term projection of warming at a rate of 2C/century, is based not on the invention of the thermometer but rather on a point near a relative minimum in the temperature series, I would say:
“Your highly processed Maraschinos look delicious. May I taste one?”
Footnote
1. Of course, it may be difficult to find a negative trend, announce it, and then see it immediately followed by a precipitous drop. The recent, rather dramatic drop in temperature was noted by many, including Instapundit, Andrew Bolt, Tim Blair and Michael Ascher. Of course that drop occurred after Tamino’s post showing the flatness was not unusual.
First off, I’m not accusing you or anyone else of cherry picking.
But consider this. We know that 1998 was a year when internal variability, specifically a strong el nino, was adding heat to the atmosphere. Further, we know that currently internal variability, specifically a la nina event, is taking heat from the atmosphere. Given that knowledge, we must also admit that if we were going to see one of those unusual trends–the 1 in 20 or 1 in 50–then the conditions are currently optimal.
So, treating this as a pure numbers problem can mislead us into premature rejections of the IPCC projections. We can also look at the physical process of internal variability to see if there is a weather explanation rather than a climate explanation.
Boris says:
The fact that this period coincides with the drop from a solar max to a solar min could also indicate that the effect of the sun on climate has been underestimated. IOW – the trend may tell us that the IPCC projections are actually wrong because they overestimated the influence of CO2.
Jorge:
Oops. I edited out the second footnote! Tamino renamed himself “Hansen’s Bulldog”. He blogged about his decision here
dear lucía:
before “The ease with which one can pick out a negative 10 year trend…” there is a number “2” but only one footnote… a mistake??
off topic but, why do you say: “Blogger formerly known as Tamino”???
thank you!
great blog
Boris–
I know you aren’t accusing me of cherrypicking. This accusation is being bandied in comments at other blogs– and not specifically at me, but at those who are looking at recent data at all.
I’ll get you the specific %s tomorrow. But bear in mind: my time period does not include the 1998 El Nino. That happened before the IPCC made its predictions. I’ve chosen to simply examine their projections.
And yes, there is the possibility this is the 1 in 100 event. That said: Typically, the 1 in 100 cool events come after major volcanic eruptions which are known to veil the sun. So, I’m not entirely sure that La Nina is explanatory. Nevertheless: Yes things that happen 1 in 100 times, do happen 1 in 100 times.
First: my point in this article is that in picking our time periods, our start and end years are constrained. No one is simply hunting for any random set of years that might contradict some particular trend.
Also, who says 1975-1985 falsifies the known trend? And which known trend? Do you mean the real 1975-1985?
Lucia and Boris,
To this non-scientist, it is far from clear what the IPCC is attempting to ‘predict’. On the one hand, the authors of Chapter 10 of AR4 (‘Global Climate Projections’) say that ‘Uncertainty in prediction of ANTHROPOGENIC climate change arises at all stages of the modelling process’ (s. 10.5.1, p. 797, EMPHASIS added). On the other hand, they say in the same paragraph that ‘[S]ome sources of future radiative forcing are YET TO BE accounted for in the ensemble projections, including those from … variations in solar and volcanic activity’ (ibid., EMPHASIS added).
If the purpose is to predict anthropogenic climate change, why try to incorporate the proximate effects of variations in solar and volcanic activity? And if these effects are considered to be beyond prediction, which would be understandable, why say that they are ‘yet to be’ accounted for?
Lucia,
“Showed that if you intentionally pick a “high” outlier out of a series containing 100 points, there will be a negative trend afterwards, and”
Actually he also showed that if you do it unintentionally it will be the case.
“The difficulty is that his little “proof” hardly supports his conclusion that recent trends in GMST fall inside the bounds of what might be expected based on any statistical measures.”
Strawman. That conclusion appears nowhere in the article you link.
You have apparently entirely missed the point of the article which is not only that some denialists really have cherry-picked 1998 as a high point (a real-world example is shown in Tamino’s article), but that they will use ANY short-term noise which appears contrary to AGW. If there is not a 10-year trend they will use a 7-year trend and they will if necessary use a 1-month trend. If there isn’t a 1-month low they’ll use a single cold day in Chicago or a picture of some snow. If you find yourself having to defend against accusations of cherry-picking (and I’m not accusing you of such), it’s those guys you need to thank.
Your examination of the recent trend is interesting, but you have done yourself no favors here by misinterpreting Tamino’s illustration as a ‘proof’, entirely missing the point of it, pretending that it argues for conclusions that it does not, and going on to both misrepresent and minimise his efforts (‘little test’, ‘little proof’).
You’ve apparently also missed the point of this paragraph:
The right approach is to look for the trends, not the wiggles, and to apply statistical significance testing to determine whether they’re real changes in the system or just accidental fluctuations. There’s always noise mixed in with the signal, and disentangling the two can be very tricky. But it can be done.
According to you, this is what you’re attempting. Why then do you respond to Tamino’s illustration as though it were an attack on you?
For a tamino post that is a bit more relevant to your discussion, see this.
Boris–
I checked the specific number to answer how often this event would be expected to occur if the IPCC 2.0 C/century were correct, and we had the weather variability that actually occurs. It’s less than 0.006 (0.6%). So, it’s the 6 out of a thousand event according to this particular statistical model, with its associated assumptions. (Normally distributed slopes and what-not.) This uses a 2-tailed distribution. Depending on the wording of the hypothesis you had in mind, a 1-tailed test would be more appropriate, in which case the probability would be even lower.
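For concreteness, this is the shape of the two-tailed calculation being described. The observed trend and its standard error below are hypothetical placeholders inserted for illustration, not the actual fitted values; only the 2.0 C/century central tendency comes from the discussion above.

```python
from scipy.stats import norm

projected = 2.0    # C/century, IPCC central tendency discussed in the post
observed = -1.0    # C/century, HYPOTHETICAL fitted trend since the projections were made
std_err = 1.1      # C/century, HYPOTHETICAL standard error of that fitted trend

z = (observed - projected) / std_err
print(f"two-tailed p = {2 * norm.sf(abs(z)):.4f}")   # a one-tailed test would halve this
```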
I usually prefer to just pick a confidence interval and give a conclusion rather than going through and trying to find the ‘break even’ confidence interval for a particular conclusion.
Ian–
It’s true they didn’t entirely account for volcanic eruptions. But volcanoes didn’t erupt during this period. The sun didn’t do anything truly odd during the recent 7 years.
Whatever caveat may be included, I think it’s still worth seeing how the projections compare to data. And, for all the caveats included, the specific short term projections are easier to find in the AR4 than the TAR! (I’m sure they are in there, but I like being able to find numbers in the Technical Summary and the guide for policy makers. I figure numbers that are crystallized down in those documents are the actual projections as interpreted by the public and policy makers. Detail in appendices is great, and supports and helps interpret what the TS and guide for policy makers mean, but the front matter is there for a reason!)
Frank Dwyer.
No. Tamino showed that if you run a random series with 100 points and a trend, a “high” point will exist. (This is well known.) Then if he hunts down that “high” point and intentionally makes it his high point, there will be a negative trend after that.
His decision to hunt down the “high” point was entirely intentional. When he did that, he applied zero restrictions to his choice. He could have selected any point he wished; he intentionally chose the high one and did his analysis.
Getting it “accidentally” would involve:
* creating a criterion for picking 1 specific 10 year trend (for example, the last one in the series),
* running one and only one series of 100 years,
* Then opening his eyes and saying: Whoa! Look! The data just, somehow, accidentally, worked out so that there is a down tick in the final 10 years!
The probability of the downtick occurring in one specific series he predefined is much lower than the probability that he creates a series, hunts through the data, and finds one.
He cherry picked, but others were not doing what he claimed.
Frank D.
I btw read You Bet. I thought it best to ignore that. It’s a fairly silly method of doing a statistical test.
Lucia,
“No. Tamino showed that if you run a random series with 100 points and a trend, a “high” point will exist. (This is well known.) Then if he hunts down that “high” point and intentionally makes it his high point, there will be a negative trend after that.”
I know, but you’re still missing the point, which is that denialists will cheerfully look for something else if there is no convenient high point. Therefore the likelihood of a randomly picked 10 year trend being negative is simply not relevant. For a start the denialists don’t limit themselves to 10 year trends, they will accept any negative trend on any scale. But more importantly, they are not looking for 10 year trends that favour their position. They are not even looking for a multi-year trend that will favor their position. They are looking for anything at all that will favor their position.
(And, yes, they will if necessary also move the end point before the present, and this has also happened, for example in illustrations of solar correlation with temp)
Again, for clarity, I’m not saying any of the above is what you have done. But it is what others have done, and that is what Tamino’s article was addressing, not your analysis.
“I btw read You Bet. I thought it best to ignore that. It’s a fairly silly method of doing a statistical test.”
Impressive rebuttal.
FrankD:
Everyone, ranging from denialists to alarmists, cheerfully hunts for data to support their view. We see reports, blog posts, and forum posts citing daily weather as being due to AGW or its termination all the time, and wondering what it all means.
And if Tamino’s point was that people look for anything at all, he could have used a different analysis to explain that. He did not. He chose to do a specific computation, illustrating a technique no one uses. As far as I can tell, Tamino’s post disproved a strawman. Had he provided a link to demonstrate that anyone, anywhere used the technique claimed, I would think otherwise.
I didn’t intend to rebut You Bet. Is there anything to rebut there? Tamino proposes a bet. To do so, he makes up his own idiosyncratic method of testing a hypothesis to place a bet. Standard methods exist to test hypotheses, and already permit us to get a conclusion based on existing data. I think it’s silly to try to prove things using his self created method and by betting, but if he wants to make bets that way, he’s free to do so.
Lucia,
“Everyone, ranging from denialists to alarmists, cheerfully hunts for data to support their view. We see reports, blog posts, and forum posts citing daily weather as being due to AGW or its termination all the time, and wondering what it all means.”
I don’t know what blogs you are reading but what I see on the ‘alarmist’ side is constant reminders that the weather is not the climate, because climate is about weather patterns. And if you really see an equivalence between the output of the scientific literature and that of the denialists then you’re not paying attention to either of them.
“And if Tamino’s point was that people look for anything at all, he could have used a different analysis to explain that. ”
Indeed he could. Or he could have used the one he did use, since it is a perfectly good illustration.
“He chose to do a specific computation, illustrating a technique no one uses.”
Completely backwards, he used a technique – more precisely, generated some data – to illustrate what can happen if you don’t do any computation.
As for strawman, go do a search for people claiming that global warming stopped in 1998 and you will find the ‘technique’ (i.e. no technique, no analysis) is used by a great many people. Or you could just read the article which specifically quotes some of them.
“I didn’t intend to rebut You Bet. Is there anything to rebut there?”
Well, let’s see:
We can also quantify the hypothesis that global temperature hasn’t changed since 2001;
Would appear to be relevant to your discussion of temp since 2001.
As would this in reference to annual averages since 1975:
As an aside, the above graph is about as clear a graph as I’ve seen showing that there’s really no evidence — none whatsoever — that global warming has stopped.
In other words, he’s clearly stating that there is an ongoing trend from 1975 through 2001 to the present and that no data since 2001 falls outside the 95% confidence interval. So, yes, there is something to rebut. If you don’t like his ‘self-created’ technique (which is apparently based on a deep understanding of standard techniques), then feel free to use something that you think is a standard technique on that same data to show just how silly his analysis is.
Lucia, I greatly enjoy this blog, and would be sorry to see it get polluted with piles of garbage. Maybe you should make a rule that all posts with the term ‘denialist’ in them will be deleted instantly? Otherwise we will be unable to think for having obsessives rant about Bush, Creationism, Intelligent Design, Evolution, Exxon, Tobacco, the Trilateral Commission and I don’t know what else. They could just be asked to go do this stuff someplace else, surely, and leave the rest of us to discuss climate?
Frank Dwyer:
“I don’t know what blogs you are reading but what I see on the ‘alarmist’ side is constant reminders that the weather is not the climate, because climate is about weather patterns.”
Here’s a couple that might have missed the memo.
http://blogs.nature.com/climatefeedback/2008/01/ice_festival_wilts_in_global_w.html
http://climate.weather.com/blog/9_14970.html
The last one is really nice. The “Forecast Earth” guy even tries to salvage data from a single cold month to be consistent with global warming.
And here’s one that, although it has your “weather is not climate” disclaimer, makes it very clear that some like to use weather as ammunition to argue climate
http://blogs.nature.com/climatefeedback/2007/07/flash_floods_a_sign_of_whats_i.html
Sorry Frank, missed the “O”, and with St. Patty’s day right around the corner too.
Fred,
“Lucia, I greatly enjoy this blog, and would be sorry to see it get polluted with piles of garbage. ”
Good idea. Please start with yourself.
For example you could try posting something relevant to the discussion instead of calls for censorship and strawmen.
John M – I read those links but notice the reference to ‘historical records’. Weather is ammunition to argue climate only in so far as it lies outside the previous patterns, or more precisely if it is consistent (or not) with the predicted changes in patterns. No single weather event can really be attributed to warming, but a pattern of events might be.
Again, for clarity, I don’t say that Lucia is making this mistake. Indeed it is clear I think that she gets the difference.
P.S. thanks for getting my name right. I’ll have a drink for you on ‘Patty’s’ day. Slainte! 🙂
Well, we can agree that you aren’t, but the argument that the Earth has stopped warming has been around. Here is Bob Carter arguing that global warming stopped in 1998. The article is from 2006, so, as you can see, he is applying no restrictions to his choice. This is a supposedly impartial scientist and he is clearly guilty of cherrypicking:
http://www.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2006/04/09/do0907.xml&sSheet=/news/2006/04/09/ixworld.html
As for your 2001 choice, you are right to say that it was not an el nino year. But the endpoint (obviously not cherrypicked) is a strong la nina. If you had done this analysis in March of 2007 you might even have come up with the conclusion that the IPCC was underestimating the trend.
Anyway, trying to assign probabilities and language such as “very likely” to this situation, without considering the role of internal variability, gives an inaccurate impression of how the IPCC projections are faring.
But surely the 07-08 steep decline is internal variability (la nina and perhaps other things–PDO is a possibility). It could not have a source in radiative forcing. There was no volcanic eruption. We’re obviously not dealing with something typical.
Thanks for your civil reply—to me anyway 🙂
And I don’t mind tipping a few to you as well (even though I’m Italian).
With regard to “historical records” and how long is long enough, at what point do we start to agree that a flattening temperature is significant? My guess is that you would say 5 years is way too short, but 10, 20, do we need 30? Is it significant if we enter into a phase of continued warming, but at a much lower rate, such as happened during the last cool phase of the PDO?
I wonder if you think this data treatment by David Smith over at CA is fair.
http://www.climateaudit.org/?p=2223#comment-224281
Just my opinion, but I’ve found his treatment of global temperature trends to be fact-based and non-emotional.
Sheez, maybe I’ve started St. Patty’s day too early.
1099 was for Frank O’Dwyer.
JohnM,
“I wonder if you think this data treatment by David Smith over at CA is fair.”
It looks like it might be a reasonable thing to be doing (i.e. subtract ENSO and see what’s left). I notice though that these are troposphere temps he is using.
Aside from that we can probably all agree that the more data the better (though even with a smaller set of data it should be possible to do some analysis, just with wider confidence intervals). As long as you remember that it isn’t just data but that there are constraints that are set by physics and what we know about the forcings etc (including all that CO2).
I also agree with the general gist of Lucia’s attempt here – it’s reasonable to compare forecasts to what actually happened and I would like to see more of that. However, some of it is already done, e.g.:
Over the eight years, 2000-2007, since the Met Office has issued forecasts of annual global temperature, the mean value of the forecast error was just 0.07 °C.
http://www.sciencedaily.com/releases/2008/01/080104091616.htm
…and anyone else who wants to should be able to do similar checks on those forecasts (admittedly only one year out, but these are based on models too).
Anyway what all of these cooling/flattening results (including Lucia’s) have in common is that they do not jibe with other results that say the rate of warming is still upward (for example, tamino’s post, or this from the same article I just linked)…
What matters is the underlying rate of warming – the period 2001-2007 with an average of 0.44 °C above the 1961-90 average was 0.21 °C warmer than corresponding values for the period 1991-2000.”
…and of course that the arctic and a portion of the animal and plant kingdoms don’t seem to have got the memo yet.
Frank O’Dwyer
Thanks again. I’m not sure why you would suggest tropospheric trends might not be the best way to look at this, since I thought that Tamino and others have shown that, once you zero properly, those trends are perfectly consistent with surface temperatures.
http://tamino.wordpress.com/2008/03/02/whats-up-with-that/#more-614
With regard to Met forecasts:
“Over the eight years, 2000-2007, since the Met Office has issued forecasts of annual global temperature, the mean value of the forecast error was just 0.07 °C.”
I’ve done an actual tabulation over at the CA BB
http://www.climateaudit.org/phpBB3/viewtopic.php?f=3&t=119#p1627
Here’s what I garnered from the Met forecasts (the “actual” is taken from the following year’s forecast, usually preliminary):

| Year | Mean forecast | Actual | Predicted warmest ever? |
|------|---------------|-----------|-------------------------|
| 1999 | 0.41 | 0.32-0.33 | No |
| 2000 | 0.41 | 0.32-0.33 | No |
| 2001 | 0.47 | 0.42-0.44 | No (2nd) |
| 2002 | 0.47 | 0.49 | No (2nd) |
| 2003 | 0.55 | 0.45 | Yes |
| 2004 | 0.50 | 0.44 | No (2nd) |
| 2005 | 0.51 | 0.48 | No (2nd) |
| 2006 | 0.45 (0.37?) | 0.42 | No |
| 2007 | 0.54 | 0.40 | Yes |
So yes, looks like within 0.07 deg is valid on average, but interestingly, forecasts are almost always on the high side. (The question mark for 2006 is because it was unclear—to me anyway—if the Met office switched to an updated HadCRUT database during that year.)
Finally, I would agree that warming during the 90s was impressive, but that average for 2001-07 has an awful lot of flatness in it.
Again, how many years of flatness would it take to be convincing?
Fred–
It’s true I will need to figure out a moderation policy. My blog is relatively new, and only hit 1000 visitors a day this week. I’m used to my almost abandoned knitting blog, which in knitting season clocks over 1000 a day, but, which did not result in disputes.
Currently, I don’t think anyone is being a troll. I want conversation, and I think I sort of occupy the vast chasm between those who deny AGW at all and those who insist one must never, ever, ever suggest that one can whisper the slightest doubt about even the most alarming predictions for AGW.
So, I don’t intend to moderate heavily, but I am adding “fightin’ words” to my SpamKarma blacklist. When I first set up the blog, I put “Soros” in the blacklist and a few other terms, because I could see those are problems. The way SpamKarma works, people can still post using these words, but new visitors are likely to see their first post held up if they use the “*&^%” words. If necessary, I’ll find a plugin to dis-embowel “bad” words. That way, I can let comments run and people can automatically see what I think of certain words.
FWIW, I’ve added both “denialist” and “alarmist” to SpamKarma’s blacklist. If you’ve posted here before, this will probably not affect you. But if it does, click contact lucia, let me know and I’ll fish you out of moderation. (As a good 50% Irish, 25% Cuban, and 25% miscellaneous, married into a Swedish family, this weekend is my weekend to have the in-laws over.)
So… hopefully people will behave. I am fine with people disagreeing, debating, etc. but I prefer no name calling, all caps etc. (Over time, if I see this, I’ll ban people.)
Boris and Frank: I disagree that what Tamino describes is similar to looking at recent data. I think we are going to continue to disagree on that, but that’s ok. 🙂
You don’t think Bob Carter was cherrypicking?
Lucia,
I truly enjoy your no nonsense approach. My main beef with Tamino is that he does not post his data or his methods, and he occasionally avoids standard statistical tests (like Chow) in favor of his own ad hoc inventions or applications, without adequate backup.

I think you have a wry sense of humour and a fresh way of looking at things that I enjoy. And Tammy boy throws like a girl (present company excepted).

Glad you like the Menne piece. I’ve been asking CA to cover that for months, since Menne type analysis will be used to adjust temp data.

(Unrelated: The climate does not exist. I loved Briggs’ piece on not being able to observe the mean. I think that is lost on so many people.)
Boris– If someone’s motive for picking 1998 was because it was warm, that is cherry picking. However, it’s not cherry picking using the method Tamino explained. Tamino’s method involves arbitrarily picking both endpoints, thereby dramatically increasing the number of cherries to select from.
If all Tamino meant was one should not do as Bob Carter does, he should have said that and spent time on an example that illustrates why Bob’s method was inappropriate. I actually discuss the difficulties in trying to falsify, and how long it can take to falsify a consensus position using annual average data here.
For better or worse, if the central tendency were 2 C/century (as the IPCC currently suggests for the near term) it could quite easily take over 15 years to falsify “no warming”. The converse also holds: if there really were no warming, but everyone believed in warming, it could quite easily take 15 years to falsify the prediction.
Clearly, picking one year knowing what data have come in and then claiming to falsify and crowing often means the next year you will suddenly be shown wrong. That is, you greatly increase the chances of making a hasty conclusion.
Sometimes, you just have to wait.
In any case, those who think there is sufficient data to disprove AGW altogether are mistaken. There is a vast distance between 2C/century and 0C/century. And, of course, as I said, this could be the 1 in 100 event. One never knows until more data come in.
I’ll post graphs about your specific questions about what we would have concluded had we started testing the IPCC projections in 2001, and the March 2007 data had just come in. I did the calculations quickly, but right now: the IPCC projections would have appeared suspiciously high, but not falsified.
John M, thank you for finding those figures. It is amusing to play around with the figures to see how good the record really is.
The Met Office average error of 0.07 per year may not seem a lot but over a century it would amount to 7 degrees Kelvin. If they are making an average error this year then their prediction of 0.37 anomaly suggests that the actual figure will be either 0.44 or 0.30. Frankly that sort of prediction record does not fill me with all that much confidence.
As you pointed out, the predictions were, on all but one occasion, that temperatures would be warmer than they turned out. Taking the signs of the errors into account, their predictions averaged 0.06 degrees a year too hot, so they are on their way to 6 degrees of falsely predicted warming this century, just about what I would have expected from the Met. (Obviously I am just making fun of the Met figures and not doing any proper analysis of them.)
If the Met had saved time (and the UK taxpayers’ money) by simply predicting that each year would be exactly the same as the year before then the average errors would have almost halved – down to an “impressive” 0.04 degrees error per year.
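Out of curiosity, here is a small sketch that redoes both averages from John M's table, taking the midpoint where a range of actuals was given. The "same as last year" persistence rule is exactly the one described in the comment above; treat the numbers as a reconstruction from the thread, not official Met Office figures.

```python
# Met Office forecast error vs. a naive "same as last year" forecast, 2000-2007.
forecasts = {2000: 0.41, 2001: 0.47, 2002: 0.47, 2003: 0.55, 2004: 0.50,
             2005: 0.51, 2006: 0.45, 2007: 0.54}
actuals = {1999: 0.325, 2000: 0.325, 2001: 0.43, 2002: 0.49, 2003: 0.45,
           2004: 0.44, 2005: 0.48, 2006: 0.42, 2007: 0.40}

met_mae = sum(abs(forecasts[y] - actuals[y]) for y in forecasts) / len(forecasts)
persist_mae = sum(abs(actuals[y - 1] - actuals[y]) for y in forecasts) / len(forecasts)
print(f"Met Office MAE: {met_mae:.3f} C, persistence MAE: {persist_mae:.3f} C")
```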
Lucia,
“Tamino’s method involves arbitrarily picking both endpoints, thereby dramatically increasing the number of cherries to select from.”
First, it’s still an illustration not some kind of statistical method for cherry picking. Second, the illustration only requires that you get to pick the start point. I think you’re overlooking that in his example data there is a warming signal by construction. You don’t need to arbitrarily pick the endpoint in such a series because the high points are overwhelmingly likely to be near the end, i.e. ‘the present’, anyway. No need to hunt through history to find them.
I just did a similar experiment myself, I generated some normal noise and added a small warming signal for 120 “years” of toy temp anomalies. Sure enough, toward the end of the series are a number of 1998 style high points. Like Tamino I only generated a single series. Finding a negative slope in such a series is easy. Even though the data is warming by construction, it took me just a few minutes to find two recent ‘cooling trends’ just by eyeballing the numbers. I didn’t need to be able to pick the endpoint, that was fixed at ‘the present’.
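Here is a compact version of that toy experiment, as I understand it; the seed and noise level are assumptions rather than the values actually used.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(120)
temps = 0.018 * years + rng.normal(0.0, 0.15, years.size)   # warming by construction

# Scan every window length from 5 to 20 "years", all ending at the present.
for length in range(5, 21):
    slope = np.polyfit(years[-length:], temps[-length:], 1)[0]
    if slope < 0:
        print(f"last {length} years: fitted trend {slope * 100:+.1f} C/century")
# Whether any 'cooling trends' turn up depends on the particular noise draw,
# but over many draws short negative windows ending at the present are common.
```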
“If someone’s motive for picking 1998 was because it was warm, that is cherry picking. However, it’s not cherry picking using the method Tamino explained.”
Sheesh. That would most likely be because he didn’t provide a method for cherry picking. He gave an example of how noise can overwhelm the signal, and demonstrated why you need to use statistical methods to extract the trends.
The statement quoted by Frank O’Dwyer that average global temperature in 2001-07 was 0.21°C warmer than in 1991-2000 was made by Professor Phil Jones of the University of East Anglia in a UK Met Office/UEA media release of 3 January 2008. There is no dispute about the point he was making – we must look at the underlying trend rather than year-to-year variations.
But four years earlier – on 16 December 2003 – Professor Jones made the same point with a different sort of comparison: ‘Globally, I expect the five years from 2006 to 2010 will be about a tenth of a degree warmer than 2001 to 2005.’ This was a prediction, and the predicted warming of 0.1°C between successive five-year periods was at the same rate as predicted for coming decades by the IPCC: 2.0°C/century.
But the current lustrum is not showing the warming compared with its predecessor that Jones predicted. According to the HadCRUT data that he and his colleagues produce, the average global temperature anomaly in 2006 and 2007 was 0.043°C LOWER than the average for the 2001-05 lustrum. Even if the Met Office/UEA forecast for 2008 is realised (it now looks to be on the high side), the average for the three years 2006-08 would be 0.057°C LOWER than that for 2001-05. On this assumption, it would be necessary for the 2009-2010 average to increase by 0.143°C compared with the 2006-08 average in order to reach THE SAME average for 2006-10 as for 2001-05. And, still using the same assumption for 2008, to get to a 0.1°C average warming in 2006-10 compared with 2001-05, as predicted by Jones in 2003, it would be necessary for the 2009-10 average to be almost 0.4°C higher than that for 2006-08.
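The bookkeeping checks out if everything is expressed relative to the 2001-05 average (whose absolute value then drops out). A minimal check, using only the anomaly quoted above:

```python
avg_2006_08 = -0.057   # 2006-08 average relative to 2001-05, assuming the 2008 forecast is met

# 2009-10 average needed for the 2006-10 lustrum to merely EQUAL 2001-05...
need_equal = (5 * 0.0 - 3 * avg_2006_08) / 2
# ...and to come in 0.1 C ABOVE 2001-05, as predicted in 2003.
need_plus = (5 * 0.1 - 3 * avg_2006_08) / 2

print(f"rise over 2006-08 needed just to match 2001-05: {need_equal - avg_2006_08:.3f} C")  # compare 0.143
print(f"rise over 2006-08 needed for +0.1 C on 2001-05: {need_plus - avg_2006_08:.3f} C")   # compare "almost 0.4"
```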
Of course it can be argued that no conclusions should be drawn from averages over successive periods as short as five years. But that was Professor Jones’s choice in 2003. He has now shifted the goalposts and has compared average temperatures for the first seven years of the 21st century with the average for the last decade of the 20th century. That’s a different comparison that doesn’t require – and has not in fact shown – any warming after 2003.
At this stage, it seems highly likely that Professor Jones’s expectation for 2006-10, as reported in 2003, will not be realised.
Frank,
you wrote: “Sheesh. That would most likely be because he didn’t provide a method for cherry picking. He gave an example of how noise can overwhelm the signal, and demonstrated why you need to use statistical methods to extract the trends.”
Good point. Can you cite any peer reviewed statistical articles where Tamino illustrated the robustness of his method with respect to finding trends in climate time series data? All of his publications are in astronomy.

So I’m just curious about the peer review that has been done on his methods. The Chow method, which Basil used, is well documented and a standard procedure. Can you point me to the Tammy method in the time series statistical literature? Or get us his code and data. That would help people test its robustness.
Thanks in advance for your help.
Steven Mosher,
“Good point. Can you cite any peer reviewed statistical articles where Tamino illustrated the robustness of his method with respect to finding trends in climate time series data?”
Is there a reason that you are changing the subject? Perhaps when I make the claim that Tamino did that I will go looking for evidence to support it.
Meanwhile I’m discussing this post here from Lucia.
“The Chow method, which Basil used, is well documented and a standard procedure.”
That’s nice, but which method did Bob Carter use? Which method did Dick Lindzen use? Which method do the hordes of denialists ‘skeptics’ use when they cherry pick 1998? Have you addressed your query for code and data to any of these people?

Hotdog.. err Frank,
Bob Carter? I don’t read him. If he used a method and did not disclose his data or methods, then I’m not interested in it.
he hasn’t made an interesting point. What, he picked 1998 and said there was no trend since then? woop! even if true, what’s the point? no data. no method. no code. no cigar.

Lindzen? Same issue. I don’t read him (oh wait, he posted something on Watts). no data. no method. no code. no credentials in statistics. I’ve seen bald claims of no warming! Again, if true, so what? We fully expect to see patches of cooling in an upward trend.

Tammy boy? no data. no methods. no code. no credentials in statistics. no identity. His stuff was trivially true or false and meaningless, like Carter’s.

Lucia? Lucia took a unique and unassailable approach to selecting a start date. She picked a date established by the people making the “projections, predictions, forecasts, etc etc.”

Very shrewd. Unassailable. And the only choice NOBODY has had a good retort to. Why? Because you know it’s a decision ROOTED in the practice of science. Not arbitrary. Not statistically biased. She picked the predictors’ start date. Super shrewd.
Bottom line: The IPCC underestimate their short term ( especially) and long term error bands. They do this for political reasons.
this cold weather may take a chunk of flesh from their overconfident backsides. (Sorry L)
Steven,
I think you have mistaken me for someone who has an issue with Lucia’s choice of start date as being cherry picking. I don’t. I hope that’s clear enough. Apparently it wasn’t when I said it the first few times.
What I have been commenting on here is Lucia’s misunderstanding of the Tamino post (which is presumably one of the trivially true ones) she links to in this post.
I’m not clear why you are having trouble finding Tamino’s data either, since he links to the sources he uses right at the top of the site. I’ll assume the rest of your observations are as accurate as that. Anything else that is not clear to you, you can ask him on his site, or write something better.
“Bottom line: The IPCC underestimate their short term ( especially) and long term error bands. ”
May I see your code and data to support your claim? Thanks in advance.
“They do this for political reasons.”
Are you a mindreader or are you doing a little projection of your own there?
Mosh pit:
Isn’t this the nature of using an ensemble mean in projections? The IPCC make climate projections, not weather predictions.
Are you arguing that they should add confidence intervals for weather noise? How would these be calculated?
Boris, I suppose what I’m angling at is: if a GCM does not produce el nino or la nina type events, that is, weather events that last a couple of years, then it is not going to be very accurate in the short run, but in the long run, at climate time scales, things would even out. Put another way, a shrewd GCM guy would argue that the time scale for comparison should be the climate scale, whatever that is.

How would I estimate weather noise on a seven year period? I’d put Lucia’s CI onto the IPCC trend estimate.

To make the point another way, the IPCC make projections that are more likely to be falsified early and less likely (if the model is correct) to be falsified later.

Anyway, tough problem.
You have apparently entirely missed the point of the article which is not only that some denialists really have cherry-picked 1998 as a high point (a real-world example is shown in Tamino’s article), but that they will use ANY short-term noise which appears contrary to AGW. If there is not a 10-year trend they will use a 7-year trend and they will if necessary use a 1-month trend. If there isn’t a 1-month low they’ll use a single cold day in Chicago or a picture of some snow. If you find yourself having to defend against accusations of cherry-picking (and I’m not accusing you of such), it’s those guys you need to thank.
What’s the problem with cherry picking? The number of times I’ve read that AGW is here, based on a cherry picked bit of news, is countless. e.g. The ice is melting – no Arctic ice cover.
The IPCC use a temperature per decade measure. Using a 10 year average is then reasonable.
Nick
Frank,
As long as you and I agree on Lucia’s choice, I think it best to drop the Tamino matter, since we won’t agree about that, and fighting on the sidelines is fun but distracting from the main game.