Can ENSO really explain away “the problem”? What about the PDO?

ATMOZ recently posted an interesting analysis that explains why one must always be aware of the properties of weather when doing statistics. I agree with that. I would go further. I would say one must always be aware of the properties of any underlying phenomenon when doing statistics.

More specifically, ATMOZ suggested that maybe we need to consider the importance of a known cyclic weather pattern when evaluating measured trends in GMST using relatively short time periods. His article seemed to be aimed at making a general point, rather than specifically discussing falsification of IPCC predictions. And as such, his point is quite sound.

However, someone else recently seems to have suggested that ATMOZ’s rather general point undercuts my recent analysis of the IPCC projections.

So, today, I will discuss ATMOZ’s point, as I understand it, and place it in the context of testing the IPCC projections.

ATMOZ’s Spherical ENSO

To illustrate a general point about being careful when calculating trends using short data strings, ATMOZ did a calculation using a thought experiment. In his thought experiment, he created something he calls “A Perfect Spherical ENSO”.

What is the perfectly spherical ENSO? The perfectly spherical ENSO weather is described this way:

Spherical Enso + Trend

… assume that our time series has a trend of 0.02 C/year. Our ENSO signal has an amplitude of 0.3C and a period of about 12 years. Since ENSO actually has a “period” of around 3-7 years, it’s clear that we’re not talking about the real ENSO. These values were chosen to highlight certain points about calculating trends.

(Emphasis mine.)

If we were to superimpose this perfectly spherical ENSO on recent weather, it might look as shown to the right. The red curve shows a sinusoidal ENSO with a 12 year period and an amplitude of 0.3C. It is positioned such that the minimum is just hitting the current temperature drop. The purple curve shows the sum of this ENSO plus the IPCC estimated trend.
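For readers who want to play along at home, this synthetic series is easy to generate. Here is a minimal sketch in Python (NumPy); the function and variable names are my own, not ATMOZ’s:

```python
import numpy as np

def spherical_enso(years, amplitude=0.3, period=12.0, trend=0.02, phase=0.0):
    """Synthetic anomaly (C): sinusoidal 'ENSO' superimposed on a linear trend."""
    t = np.asarray(years, dtype=float)
    return amplitude * np.sin(2.0 * np.pi * t / period + phase) + trend * t

t = np.arange(0.0, 24.0, 1.0 / 12.0)   # 24 years of monthly samples
y = spherical_enso(t)                  # amplitude 0.3 C, period 12 yr, 2 C/century
```

Subtracting the trend back out recovers a pure sinusoid that peaks at +0.3 C and bottoms out at −0.3 C, exactly as the quoted description says.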

I think it would take real ova to suggest the temperature data of recent weather resemble this perfectly spherical ENSO, plus the IPCC trend. (Note: ATMOZ specifically said this ENSO + Trend is not real.)

But let’s ignore the fact that the weather looks nothing like a “perfectly spherical ENSO + Trend”. Let’s just say that ENSO does contribute to part of the variability in the GMST, and so it’s useful to be aware it exists when analyzing real earth temperature data. After all, it is useful to know what can cause error in any experiment, even if the experiment is “measuring the earth’s climate”.

So, let’s discuss the amount of error the real ENSO might introduce into a calculation of the temperature variation.

Number of ENSO Cycles

To the left, I have inserted a graphic taken and modified from Atmoz’s blog post.

The black wiggly line illustrates the trend one would measure if one assumed weather followed a perfectly spherical ENSO + Trend, and the phase of the ENSO were “just so”. So, if ENSO were perfectly periodic (as is a 24 hour day), and if ENSO had an amplitude of 0.3C in GMST (which it can’t, because this significantly exceeds the estimate of all weather noise), then, if we calculate the trend over exactly 1/2 cycle (which is 6 years for his perfect ENSO), we might estimate the trend in the GMST was (0.3C * 0.8) = -2.4 C/century, even though the real trend is +2 C/century.
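The distortion from fitting over half a cycle is easy to reproduce. A sketch (the exact magnitude depends on the sampling and on the 0.8 factor read off ATMOZ’s graph, so treat the number as illustrative, not as a reproduction of his figure):

```python
import numpy as np

# OLS fit to trend + sinusoid over the peak-to-trough half cycle
# (t = 3 to 9 years for a 12-year period, amplitude 0.3 C, trend 0.02 C/yr)
t = np.linspace(3.0, 9.0, 73)                         # monthly samples
y = 0.02 * t + 0.3 * np.sin(2.0 * np.pi * t / 12.0)
slope = np.polyfit(t, y, 1)[0]                        # fitted trend, C/year
# slope comes out strongly negative even though the true trend is +0.02 C/yr
```

The sign flip is the whole point: over a half cycle chosen “just so”, the sinusoid swamps the underlying trend.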

Hhhmmm… numbers so close to the ones I got. And yet… does this make sense based on our understanding of real weather?

Let’s defer the answer to that question for a moment. 🙂

If you examine ATMOZ’s graph, notice something interesting: The total error due to measuring over only a few cycles drops dramatically with time. And, notice something else: I have drawn a square around the approximate number of “ENSOs” in the 6 year averaging period used to test the IPCC projections.

If ATMOZ’s spherical ENSO were real, but simply had an exaggerated period, I could just adjust his graph to match the real time scale of ENSO. So, I did. Or at least, I guesstimated the time scale. Using more realistic estimates of the time scale of ENSO, my test of the trend falls in the region enclosed in the box. So, in reality, we expect much less error.

And I won’t say how much less. You can read the graph, and change the (0.8) in the calculation (0.3C * 0.8) yourself. But before you even do that, continue reading, because the 0.3C is also not meaningful. Also, ATMOZ’s error curve exhibits the same behavior as the uncertainty intervals used to test hypotheses.

Real ENSOs

El Nino Figure

Here is a graphic from NOAA showing the Multivariate ENSO Index (MEI). Counting the number of transitions from “red” to “blue”, we have gone through at least 1 full ENSO, and maybe two or three. I don’t know exactly how they count these– I’m just assuming when it turns from one color to the other, we’ve had a switch.

So, I counted, and it seems to me that typical ENSO cycles are between 4 and 5 years. I figured there were 13 in 58 years. (Your mileage may vary.)
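The counting itself can be mimicked with a toy calculation. The index values below are made up purely for illustration (they are not MEI data); the idea is that one full cycle corresponds to two sign changes:

```python
import numpy as np

# Hypothetical index values standing in for the MEI: positive = "red"
# (El Nino-ish), negative = "blue" (La Nina-ish).
index = np.array([0.8, 0.5, -0.3, -0.9, -0.2, 0.4, 1.1, 0.6, -0.5, -1.0, 0.2, 0.7])

# Each zero crossing is one color switch; a full cycle needs two of them.
sign_changes = int(np.sum(np.diff(np.sign(index)) != 0))
full_cycles = sign_changes // 2
```

On this toy record there are four switches, i.e. two full cycles, which is exactly the kind of eyeball count described above.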

Equally important: Notice that my averaging period starts in a La Nina!

So, if we were to interpret ATMOZ’s graphic, false negative trends in measurements arise when we start measuring during the peak of an El Nino, and end during the trough of the very next La Nina. But, in reality, starting in a La Nina and ending in a La Nina would mean we are in a region where the ENSO cycle contributes almost zero to the measured trend. It’s where the wiggly line crosses the mean trend!
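This phase argument can be checked directly: a window that runs trough-to-trough (one full cycle) picks up essentially none of the sinusoid, while a peak-to-trough half cycle picks up the maximum distortion. A sketch, using the same 12-year, 0.3 C spherical ENSO:

```python
import numpy as np

A, P = 0.3, 12.0
w = 2.0 * np.pi / P

# Trough-to-trough window (La Nina to La Nina): one full cycle.
# Written as -cos so the window starts and ends at the minimum.
t1 = np.linspace(0.0, P, 145)                  # monthly, endpoints included
slope_full = np.polyfit(t1, -A * np.cos(w * t1), 1)[0]

# Peak-to-trough window: half a cycle, the worst case.
t2 = np.linspace(P / 4.0, 3.0 * P / 4.0, 73)
slope_half = np.polyfit(t2, A * np.sin(w * t2), 1)[0]
```

The trough-to-trough fit returns a slope of essentially zero (the sinusoid is symmetric about the window midpoint, so it cancels in the regression), while the half-cycle fit returns a large spurious negative slope.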

But that still doesn’t tell us how important ENSO might be because, of course, we may not fall at exactly the same point in the perfectly constant period of the spherical ENSO. To figure that out, we need to do something a little more complicated.

A more plausible effect of ENSO on statistics

With respect to ENSO, I did the following analysis:

  1. Found the total amount of noise (weather + instrument) remaining in the measurements after fitting a straight line.
  2. Estimated the amount of instrument noise in the averaged monthly data based on the standard deviation of the temperature measurements from all four sources (GISS, HadCRUT, UAH, and RSS). I treated this as white noise added to the total “measured weather noise”.
  3. Assumed 30% of the weather noise is “ENSO”. (This makes a huge ENSO effect. After all, in addition to ENSO, we have the PDO, the North Atlantic Oscillation, and all sorts of other things.)
  4. Assumed part of the remaining “weather noise” is red noise, with a month to month correlation of 0.8, and the rest is white.
  5. Generated a “weather string” by adding the ENSO + instrument + red noise to a trend of 2C/century. (Because the error due to ENSO differs depending on the phase shift, I picked four phases separated by 90 degrees.)
  6. Found “N” year trends using ordinary least squares (LINEST in EXCEL). (And in one case, I repeated with Cochrane-Orcutt.)
  7. Estimated the uncertainty intervals for the trend based on the average properties of measured weather. This is done the same way I estimate them for other problems. Note the uncertainties increase dramatically at small times (making falsification difficult).
  8. Ran the weather “simulation” using a Gaussian random number generator in EXCEL.
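The steps above can be sketched in a few lines. The calculations in the post were done in EXCEL; this Python version is mine, and the noise magnitudes below are illustrative placeholders, not the values fitted from the data:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_weather(phase, months=72, trend=0.02, rho=0.8,
                      enso_amp=0.06, red_sd=0.04, white_sd=0.03):
    """One synthetic monthly GMST string: trend + ENSO + red + white noise.
    All noise magnitudes here are placeholders for illustration only."""
    t = np.arange(months) / 12.0
    enso = enso_amp * np.sin(2.0 * np.pi * t / 4.5 + phase)  # ~4.5 yr "ENSO"
    red = np.zeros(months)
    for i in range(1, months):                 # AR(1) "weather" noise, rho=0.8
        red[i] = rho * red[i - 1] + rng.normal(0.0, red_sd)
    white = rng.normal(0.0, white_sd, months)  # white "instrument" noise
    return t, trend * t + enso + red + white

# Four ENSO phases 90 degrees apart; OLS 6-year trend for each, in C/year
slopes = [np.polyfit(*simulated_weather(p), 1)[0]
          for p in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
```

Each entry of `slopes` is one realization of a 6-year trend estimate; spinning many of these strings is what fills in the scatter inside the uncertainty bands.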

So, in short: I included a “perfectly spherical ENSO” in my analysis, but gave it a realistic period, and an amplitude on the high side (thus overstating its importance to calculating the trend). I included other types of weather noise, which contribute to the uncertainty when I do a falsification test.

Then, I generated four “weather” events, with four possible ENSO phases, plotted them, and illustrated them inside the sorts of uncertainty bands we use for hypothesis testing.

So, what do we learn about ENSO’s effect on estimation of 6 year trends in GMST?

Ordinary ENSOs only matter a little in the time period I considered.

The graphic below compares trends calculated from the generated weather string, with four different ENSOs added, each 90° out of phase with the next.

Estimate of uncertainty in trend due to ENSO

What is the result?
Well, the ordinary least squares fits to the weather are shown in mint green (and one in blue). The uncertainty bands calculated using OLS are shown in blue; those are known to be too small. That’s the main reason we apply Cochrane-Orcutt. The wider uncertainty intervals calculated with Cochrane-Orcutt are shown in red.
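For those curious how the Cochrane-Orcutt correction works, here is a one-step sketch of the standard procedure: estimate the AR(1) correlation from the OLS residuals, quasi-difference the data, and refit. (Again, the post’s own numbers came from EXCEL; this implementation is mine, for illustration.)

```python
import numpy as np

def cochrane_orcutt(t, y):
    """One-step Cochrane-Orcutt: estimate AR(1) rho from OLS residuals,
    quasi-difference, refit. Returns (slope, slope_standard_error)."""
    b, a = np.polyfit(t, y, 1)                 # initial OLS fit
    resid = y - (b * t + a)
    rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
    ts = t[1:] - rho * t[:-1]                  # quasi-differenced regressor
    ys = y[1:] - rho * y[:-1]                  # quasi-differenced response
    X = np.column_stack([np.ones_like(ts), ts])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    fit_resid = ys - X @ coef
    n, k = X.shape
    s2 = np.sum(fit_resid ** 2) / (n - k)      # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[1], np.sqrt(cov[1, 1])
```

Because the quasi-differenced residuals are (approximately) white, the standard error this returns is wider, and more honest, than the naive OLS one when the data are autocorrelated.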

Even adding a plausible ENSO with four different phase lags to the IPCC trend, with additional weather noise to get the “weather noise” up to a reasonable level, the simulated “weather” that would be consistent with the IPCC trend never strays outside the red C-O confidence limits drawn.

Recall: to decree the IPCC projections falsified at the 95% confidence level using Cochrane-Orcutt, the weather must stray outside the red bands in the particular test made. So, in this particular case, ENSO never causes us to make a false positive.

The fact is: whether one knows about ENSO or not, the variability due to ENSO is included in the variability that we consider when calculating the uncertainty intervals for a hypothesis test!

Notice the uncertainty intervals?

Because some readers asked, I also want to draw attention to the shape of the uncertainty intervals.
Notice how the uncertainty bands widen as fewer and fewer years are included in the average? Notice how ATMOZ’s “trend due to ENSO” mostly grew larger and larger, the fewer the years in the average?

Guess what? That variability due to ENSO is part of the variability accounted for in the statistical procedure to draw the error bands.
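That connection can be made concrete. For white noise, the standard error of an OLS trend shrinks roughly as the window length to the 3/2 power, which is why the bands fan out so dramatically at short times. A sketch (the 0.1 C noise level is a placeholder):

```python
import numpy as np

def trend_se(n_months, sigma=0.1):
    """Standard error (C/yr) of an OLS slope fit to n_months of monthly data
    carrying white noise of standard deviation sigma (C)."""
    t = np.arange(n_months) / 12.0             # time in years
    return sigma / np.sqrt(np.sum((t - t.mean()) ** 2))

se_2yr = trend_se(24)   # 2-year window
se_6yr = trend_se(72)   # 6-year window
```

Tripling the window from 2 to 6 years cuts the slope standard error by about a factor of 3^1.5 ≈ 5.2, the same qualitative shape as both the uncertainty bands and ATMOZ’s error curve.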

But some are probably still thinking there is something meaningful in the “perfectly spherical ENSO” problem. Sure. It teaches us this: No one should try to estimate the temperature trend for January by measuring at 2 minute intervals on a nice clear day from 8 am to 3 pm. That’s a measurement problem analogous to ATMOZ’s “perfectly spherical ENSO”, where the amplitude of a single predominant cyclical variation is much larger than the trend. There are others.

But the ENSO issue? Does it vitiate statistical hypothesis tests based on trends measured over 6 years? Not so much. In fact… erhmm no.

We would need an unusually strong, unusually slow ENSO that started at just the right time. But that’s the definition of an outlier. Outliers happen– but rarely. 🙂

Conclusion: Perfectly Spherical ENSO is an interesting thought experiment. . .

Discussing ENSO sounds like physics. Discussing statistics sounds like statistics.

But appearances can be deceiving to those who are too lazy to think: Testing scientific hypotheses isn’t “physics vs statistics”. Testing hypotheses in a systematic way is “physics illuminated by statistics”!

Climate and weather include many cyclical phenomena. These include the annual cycle, the diurnal cycle, ENSO, the PDO, the North Atlantic Oscillation and many other cyclical events. Each contributes some “noise” to the “weather noise”. It is important to be aware of these, but the existence of these cycles absolutely does not make it impossible to do hypothesis tests!

It is true that if a particular phenomenon has a longer time period than the one we use to estimate a trend, and that phenomenon is associated with a strong effect, it can distort our statistics.

Which reminds me: Which climate cycle is known to have a time period longer than the one used to prove AGW?

The PDO

(Figure from University of Washington: PDO)

I will close by quoting myself:

…it appears possible that something not anticipated by the IPCC WG1 happened soon after they published their predictions for this century. That something may be the shift in the Pacific Decadal Oscillation; it may be something else. Statistics cannot tell us.

49 thoughts on “Can ENSO really explain away “the problem”? What about the PDO?”

  1. This info shouldn’t impact your analysis, since your ENSO weather effect was significant, but FYI.

    Two quotes from Trenberth et al (2000) in “The Evolution of ENSO and Global Atmospheric Temperatures”:

    “For 1950-98, ENSO linearly accounts for 0.06 C of global warming.”

    There was a negative trend between 1950 and the mid-70s, so the impact of ENSO is masked by the years they selected for analysis.

    “It shows that for the 1997-98 El Niño, where N34 peaked at about 2.5 C, the global mean temperature was elevated as much as 0.24 C…although, averaged over the year centered on March 1998, the value drops to about 0.17 C.”

    A while ago, I generated a coefficient from the NINO3.4 and global temperature values listed, but it’s probably not realistic since the base of the NINO3.4 index may not truly represent its impact on global temperature. It is significant, though, when ENSO is removed from the GMST. Your 30% may not be too far off.

    I enjoy your posts and continue to learn each time I read one. Thanks.

  2. Great post. While I thought Mr Nyquist was at work when Atmoz showed his trend analysis (13-19 year window),
    figuring that if ENSO were somewhat periodic you’d want a sampling period to be 2x, etc. Anyway, nice post.
    I think the weather simulation might lose some people but I liked it.

  3. Lucia, I have read through your recent posts on IPCC falsification, ENSO etc & find your statistical approach refreshing.

    This post is outstanding; you show a particular skill for communicating quite complex statistical concepts in a very engaging, informative and good humoured way.

    You have encouraged me to brush up on my statistics!

    Keep up the good work.

  4. Lucia, I endorse the complimentary comments that have been made above about this and your other recent posts. I’ll also take the liberty of pasting in here, for the interest of your readers, an extract from a comment I’ve made on the ‘Climate Progress’ blog: I think that it’s reasonably ‘on topic’:

    “In the ‘Notes to Editors’ accompanying their climate prediction for 2008 on 3 January (‘Global temperature 2008: Another top-ten year’), the Hadley Centre and CRU said that ‘The forecast value for 2008 mean temperature is considered indistinguishable from ANY of the years 2001-7, given the uncertainties in the data’ (EMPHASIS added). In the light of that statement, I don’t understand how Joe Romm can claim that the Hadley Centre believe that their data ‘unequivocally shows we are in a warming trend, including this decade.’ If the data unequivocally showed that the global warming trend continued in this decade, the more recent years WOULD be statistically distinguishable – in an upward direction – from the earlier years of this decade. That’s what a trend means.

  5. Ian. I read Joe Romm’s screed. It appears his response to Roger Pielke Jr.’s post is to re-prove there is warming since the 70’s. There is warming since the seventies. No one denies that.

    My posts relate to what appears to have happened since 2001, and whether or not the IPCC seems able to project the correct magnitude for the trend, properly assess the uncertainty in their estimates, and convey that information to the public.

  6. Bob–
    Obviously, ENSO’s impact on temperatures can be noticeable. And when we have extremely energetic events, that can cause statistical outliers. That 1998 ENSO was a biggie. (My understanding is that El Ninos that happen during the warm phase of the PDO tend to be larger than normal. La Ninas that happen during the cold phase tend to be larger than average. But I could easily be wrong about this.)

    But, it seems to me that the ‘logic’ of “it’s caused by ENSO” is: “An ENSO that is 3-4 times as long as what we actually get, that has a smoothly varying shape (so as to give the maximum possible distortion to a linear regression), and that is the Krakatoa of all ENSOs, could result in a statistical anomaly in the 5 year average.”

    Well…yes. A very very unusual ENSO (or any very unusual streak of weather) could occur. If that very unusual weather appears immediately after a group or person makes their prediction, then the predictive ability of that group or person will be suspect.

    But, as I’ve said many times: Statistics don’t ever prove something is absolutely wrong. They can be used to show something is improbable. All “falsification to x% confidence” means is the likelihood is less than (100 − x)%.

  7. Once again, you’ve pointed out something and backed it up with solid math and statistics. And, indirectly, reminded me that I too need to brush up on my statistics, especially since I work with transportation modeling now!

  8. Good day

    I have just discovered this blog through the link at Dan Hughes’.
    So now that you are into chaotic systems too, may I suggest you consider the following papers, which are fully on topic and will certainly help you explore further the methodological domain of time series.

    1) http://www.dma.ulpgc.es/profesores/personal/jmpc/Pink.pdf
    This one is a statistical treatment of the NAO, concluding that it is a mix of white and red noise (hence pink).
    However, like most papers on the climate, it typically concludes that more evidence is needed and that it is still not sure what is the effect and what is the cause.

    2) http://linkinghub.elsevier.com/retrieve/pii/S0012821X00003290
    An excellent short paper concerning the predictability of a global temperature with associated non linear statistical analysis.
    They conclude that the global temperature is predictable, with a maximum range between 2.5 and 7 years.
    From these ranges they interestingly remark:
    “El Nino is indeed chaotic and possibly a subsystem of a grand complex system. The way in which this subsystem is connected to the grand system could explain the (range of) predictability of global temperature anomalies.”
    L.Gimeno has published much of interest about these issues – time series and non linear analysis.
    On an anecdotal note, you will notice that the first reference he quotes is Takens, whom I already had the opportunity to make you aware of 🙂

  9. Scafetta and West (Physics Today, March 2008) show a low-pass filtered and smoothed temperature trace to remove ENSO and volcano effects. It still shows a decrease since c. 2001 of ~1 deg/century. They ascribe it to the decrease of total solar irradiance.

    P.s. They do the best work on the effect of solar variability on climate.

  10. It appears Tammy has decided to ‘debunk’ your analysis by showing that a C-O regression applied to the GISS dataset manages (barely) to include 0.2/decade within the error bounds. I tried to point out that the other datasets would likely not produce that result, however, it did not make it past the censor.

  11. Steve–
    It’s Easter. My family is here. I haven’t seen a trackback, so presumably I’m not linked, which suggests Tamino must not want his readers to stray over to read alternative views.

    How much you wanna’ bet the post will exhibit:
    a) cherry picking by selecting GISS– the data set with the highest slope.
    b) ignoring the existence of other data sets. Doing this ramps up the uncertainty introduced by “instrument error”. If you measure the same thing with 4 different instruments and then average, you reduce the data error. In contrast, picking only one increases the uncertainty due to measurement error.

    Of course, picking the instrument that best exhibits what you wish after the data come in is… well… as I asked before, “Rainiers or Bings?”

  12. “Equally important: Notice that my averaging period starts in a La Nina!”

    It starts with a La Nina, but there is a prevalence of ENSO+ at the beginning of the time series and then ENSO−, so the net ENSO effect is negative.
    Here is a map showing trend between february 2001 and february 2008 computed from NOAA dataset(values in °C/decade): http://img291.imageshack.us/img291/5802/trend20012008rn7.png

    ENSO area (180W-80W and 5S-5N) trend is -0.99°C/decade.

    A plot of zonal mean trend: http://img175.imageshack.us/img175/5264/trendzonalmeanfn2.png
    black line: zonal mean trend
    red line: zonal mean trend weighted for area covered by data : ( zonal mean trend)*(area covered in that latitude band)/(global area covered by data)

    Some considerations:
    1- The time period is very short (too short!) and local trends are sometimes strongly positive, sometimes strongly negative.
    2- The world warmed north of 40N and cooled south of that.
    3- The ENSO net effect is negative and accounts for a rather strong negative trend in the equatorial latitude band (up to -0.48°C/decade in the weighted mean 0/5S); clearly ENSO effects are not limited to this latitude band and contributed to the cooling, particularly in the tropical band.

  13. gb-
    In response to the “some consideration”.
    1) No one is denying the time period is short. This could also be an outlier. However, “alpha” (the false positive rate) of a statistical test is set to a constant. So, we should expect false positives to occur 5% of the time in all time frames. (A false positive in this test would be decreeing the IPCC to have overestimated current warming.) It is false negatives that decrease with more and more data.

    2) The purpose of using data for the whole world is to obtain the average effect for the world. Picking only one portion of the globe, or one data set, or one time period to get the result you like is called “cherry picking”. This is why, for example, the fact that the south pole has cooled does not disprove AGW in the first place.

    3) Showing one ENSO plot for one time period tells us nothing about the effect on the trend. With respect to regression, what matters is whether the trend is down over the entire period. Starting low and ending low pulls down an average, but not a trend.

    I have, quite consistently, stated that this could be a statistical outlier. Things that happen 5% of the time do happen 5% of the time. If this is the case, temperatures will not only resume warming, but at the rate the IPCC predicted. (Since I believe in AGW, I suspect they will resume warming. However, it does appear the IPCC likely overpredicted the rate of warming.)

  14. For someone so defensive about cherrypicking, you sure are throwing that accusation around a lot. Perhaps there is an objective reason to choose GISS? See if you can figure it out.

  15. Boris–
    I’m not defensive about cherry picking. I think it should not be done. For this reason, I make my choices for analysis and explain them before I post my results.

    If you believe there is an objective reason to select GISS, why don’t you tell us what you think that might be?

  16. I’m not sure if you have understood the map above; it shows the linear trend in °C/decade between Feb 2001 and Feb 2008. I’ve computed the trend over this time period simply because your analysis refers to it….
    In your article you stated that:
    “But, in reality, starting in a LaNina and ending in a La Nina would mean we are in a region where the ENSO cycle contributes almost zero to the measured trend. It’s where the wiggly lines crosses the mean trend!”

    Well, looking at the map above this is not true: since the ENSO area in the equatorial Pacific shows a strong negative trend between Feb 2001 and Feb 2008 (-1°C/decade), ENSO had a net negative contribution to temperature in that time period.

  17. Boris: When people have reasons for ignoring widely used, well respected data sets that contradict your preferred point, it is up to them to provide the explanation.

    Was I correct when I guessed that Tamino did pick GISS, and ignore the other data? What reason did he give you when you asked him why he picked that set?

  18. Tamino has been consistent in his choice of GISS and he has provided reasoning which is sound. If you want to learn, you gotta do the legwork. I’ll not do it for you.

  19. Boris:
    You don’t wish to take the time to support your opinion. Duly noted.

    gp–
    You posted graphs like this:

    These have no captions, no narrative etc.

    We all know data exists. If you want to use this to make some sort of point, you need to tell us how you created it, not link to the top of NOAA’s web site and say data exists.

  20. The reason that Tamino, and others, continue to use GISS as the “poster boy” for temps keeps revolving around the “arctic extrapolation” theory (GISS does, HadCRU doesn’t). But the GISS is also averaged over an older, colder period.

    Yet, if GISS was the best at charting of the anomaly, then a simple comparison would prove their statement:

    1. Take the GISS and HadCRU data, and zero them, using the same reporting period (really doesn’t matter which one, but latest would be better).

    2. Plot the difference of the anomalies. This should show the “arctic difference”, since they both, AFAIK, use the same number of reporting stations.

    3. If GISS is more accurate, there should be a constant, positive value of GISS over HadCRU over the entire period (no cherry-picked years). In other words, does GISS, over the entire record, show constant, higher temps than the other surface measurement record? If so, fine. If not, why not?

    In my opinion, GISS is used primarily because it shows the greatest “rise” above “zero”. More dramatic charts that way.

  21. henry–
    We shall wait to see if Boris confirms the reason you have given. Had he stated that reason, I would have observed that this is a very poor reason to use only this data. The reason is that, because of the extrapolation, the GISS set in some sense uses data that are not observed!

    When comparing model predictions to data, it is best, as much as possible to compare model predictions to real observations.

    While I entirely understand, and sympathize with the need to account for the uncertainty due to lack of pole measurements, this is no reason to exclude data from well respected groups like Hadley when testing theories.

  22. I find the arctic extrapolation argument for GISS to be quite poor because:

    1) Including Antarctica below 70S should reduce the GISS trend.
    2) The Arctic patch not included in HadCRUT is very small and should not affect the average significantly (just like the Y2K bug in the US data did not affect the average).

    The reason Tamino chooses GISS is obvious: its trends, on average, warm faster than the other datasets.

    Here is a post that calculates the trends for each data set:

    http://wattsupwiththat.wordpress.com/2008/02/29/interesting-plots-of-temperature-trends-the-4-global-temperature-metrics-according-to-basil/#comments

    2002:01-2008:1

    GISS -0.00091450 (-0.110C/decade)
    HadCRUT -0.00270338** (-0.324C/decade)
    RSS_MSU -0.00208111 (-0.250C/decade)
    UAH_MSU -0.00130882 (-0.157C/decade)

    As you can see – any C-O test that barely passes for GISS would likely fail for any of the other datasets.

  23. Keep up the good work Lucia, this is a very interesting and very civil blog (makes a change).
    Just one trivial grammatical nit-pick 🙂
    “The fact is: weather one knows about ENSO or not,”
    There are three homophones for ‘weather’
    wether- a castrated ram!
    whether- a conjunction expressing a choice.
    weather- the state of the atmosphere at a given time and place.

  24. I didn’t know about that castrated ram thing.

    I wish I could blame my poor proof reading on English being my second language which technically it is. But my family moved to the US when I was in 1st grade, so clearly that would be lame!

    If you see any “it’s” where I should write “its”, let me know. I do that constantly.

  25. Measurement of arctic areas is only part of the answer. I’m surprised lucia doesn’t like the extrapolation that GISS uses. I missed the post where you rebutted Hansen and Lebedeff 1987.

    Remember, it’s the skeptics that cherrypick their datasets. UAH used to be the bomb before the error corrections. Then it was RSS before their error correction.

  26. Lucia, I was born and raised in sheep country hence my knowledge of that probably obscure word, I’m constantly being caught between the two stools of English and American spelling, American being my second language! I’m an extremely good proofreader of everyone else’s writing, not so good on my own! Interestingly the word ‘bellwether’ refers to the leading sheep in a flock and in its metaphorical use to a harbinger of change, quite relevant to this topic.

  27. Boris,

    We are discussing Lucia’s post and Tamino’s rebuttal. Lucia used all available data. Tamino only used GISS and censors posts that try to point out that any dataset other than GISS would fail his C-O test. Any rational observer can see that Tamino is the cherry picker in this case. Your ‘skeptics do it too’ defense is ridiculous.

  28. I’ve computed trend with Ferret, if you don’t trust this data you can compare 1979-2007 trend for RSS with the one available at RSS site:
    http://img397.imageshack.us/img397/4251/rsstrendmg7.png
    http://www.remss.com/data/msu/graphics/plots/MSU_AMSU_Channel_TLT_Trend_Map_v03_0.png

    And here is again the NCDC trend with more information; the first term is excluded, so trends are computed between February 2001 (not January) and February 2008 (I changed levels so colors are not the same):
    http://img397.imageshack.us/img397/4702/ncdctrendoc9.png

    The second graph is simply a zonal mean of the trend, as explained above.

    Henry, NASA GISS has a greater trend due both to Arctic interpolation and interpolation over land (particularly Eurasia), while the Hadley dataset has lots of missing pixels there. Since both the Arctic and Eurasia warmed very fast in recent years (look at the trend above), the NASA GISS vs Hadley temperature difference became larger over time and reached a max in 2007, when both the Arctic and Eurasia were very warm… you couldn’t expect a constant difference.
    bye,bye.

  29. Boris–
    I don’t dislike the GISS data. I use it along with the other data.

    However, with regard to testing theories against observations, the fact that the data rely on filling in regions using a theory rather than measurements is a shortcoming.
    I don’t need to rebut anyone’s article to point out this difficulty.

    Obviously, Hansen’s use of theory to fill in for lack of measurements is not the only possible shortcoming in data. We could argue which system is best until the cows come home. Recognizing that all measurement systems include ‘instrument’ noise, I use all sets that are considered sound.

    Assuming that each of the four (and soon five) measurement sets I use is one “realization” of a different set of climatologists’ judgement of the “real” data, I am ensemble averaging over all these realizations. This should minimize the “instrument noise” in the same way that modelers using GCMs try to average out errors due to modeling uncertainties using their different “GCM-Model-planet-Earths”.

    Of course, if the entire universe of climatologists agreed wholeheartedly with Hansen, they would all use his method. In that case, their judgement would be reflected in the average data. Right? 🙂

  30. gp– It appears you have something to contribute, and I would love to respond to you. But use words to describe what you have done: “computed with Ferret” is no more informative than if I said I did my calculations using EXCEL.

    What are the units on your chart? Degrees C? Degrees C/month? Was what you call a trend computed using OLS? Or is it just an anomaly?

    My difficulty isn’t one with your choice of data. NOAA data are fine. But you need to describe what you did.

  31. gp-
    I added the html to make the image show.

    As far as I can tell, that image shows us that a small region of water is 0.5C cooler now than in 2001. As anomalies go, that is small. (Note, the overall range is 8C during that period.)

    Equally importantly, that is a simple difference. (This is what I suspected, and why I wanted you to use words.) Simply taking the difference between two means is not the same as the trend those data would give in a linear regression.
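    The distinction is easy to demonstrate with a toy series (the numbers below are made up): the start-to-end difference per step and the OLS slope are different quantities, because OLS weights every point, not just the endpoints.

```python
# Toy anomaly series (deg C) at equally spaced times.
y = [0.0, 0.5, 0.1, 0.6, 0.4]
x = list(range(len(y)))

# Ordinary least squares slope: covariance(x, y) / variance(x).
n = len(y)
xbar = sum(x) / n
ybar = sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)

# Naive endpoint difference per time step.
endpoint_diff_per_step = (y[-1] - y[0]) / (x[-1] - x[0])

print(slope, endpoint_diff_per_step)  # 0.09 vs 0.10: not the same number
```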

  32. Raven,

    I’ve said that tamino has good reason to select GISS. You may disagree with the reason; however, accusations of cherrypicking–especially when one has not even investigated the reason for the choice–are mere rhetorical ploys to advance one’s own arguments. Can we dispense with the sophistry, please?

    What’s the difference between CRU/GISS and RSS/UAH?

  33. Perhaps it is better if we look at timeseries instead of maps; I’m going to explain this as best I can…

    The timeseries below shows the SST anomaly in the region 180W–80W/5S–5N. The data source is ERSSTv2 (not the latest version, which is used by the NOAA merged land/ocean index); I’ve computed the anomaly with respect to a 1971–2007 climatology:

    Slope is -1.03°C/decade.
    This can be verified by looking at the graph: it shows the red line decreasing from +0.56°C in February 2001 to -0.16°C in February 2008, a 0.72°C drop in seven years, which equals 1.03°C/decade.
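    The arithmetic in that last step can be checked directly:

```python
# Checking the rate quoted above: a change from +0.56 C (Feb 2001) to
# -0.16 C (Feb 2008), spread over seven years, expressed per decade.
start, end = 0.56, -0.16
years = 7.0
drop = end - start                     # -0.72 C over the period
rate_per_decade = drop / years * 10    # scale from per-year to per-decade
print(round(rate_per_decade, 2))       # -1.03
```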

  34. gp-
    Thanks. Now I understand you. 🙂

    (Graphs work great. I do take the liberty of editing to make them show.)

    What we see is that this has persisted over several cycles. The peaks are systematically dropping, and the troughs are too.

    Who is to say this isn’t the trend for that region, due to some other cycle?

    The idea of the perfect ENSO is that we haven’t sampled many. But look at that graph– we’ve sampled several. And the graph is strongly suggestive of a downtrend that is entirely separate from ENSO!

    Is there something else? I don’t know. But this graph does not look like we just happened to catch ENSO at a high and leave it at a low.

    If, on the other hand, this is due to ENSO, we should see the apparent trend reverse when we hit the top of the next ENSO. That is simply a matter of time.
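    The "perfect ENSO" effect on short-window trends is easy to reproduce numerically. This is a sketch using the parameters from ATMOZ's thought experiment (0.02 C/year trend, 0.3 C amplitude, 12-year period); the choice of sampling windows is mine:

```python
import math

# ATMOZ's idealized setup: a linear trend plus a "perfect spherical ENSO".
TREND, AMP, PERIOD = 0.02, 0.3, 12.0   # C/year, C, years

def temp(t):
    return TREND * t + AMP * math.sin(2 * math.pi * t / PERIOD)

def ols_slope(ts, ys):
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    return sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
           sum((t - tbar) ** 2 for t in ts)

# Short window sampled monthly, chosen to run from an ENSO peak (t = 3,
# where the sine is at its maximum) down to a trough (t = 9): the cycle
# swamps the trend and the fitted slope comes out negative.
ts_short = [3 + i / 12 for i in range(6 * 12 + 1)]
short_slope = ols_slope(ts_short, [temp(t) for t in ts_short])

# Over many full cycles (here 120 years) the sinusoid averages out and the
# fitted slope recovers the underlying 0.02 C/year trend.
ts_long = [i / 12 for i in range(120 * 12 + 1)]
long_slope = ols_slope(ts_long, [temp(t) for t in ts_long])

print(short_slope, long_slope)
```

    This is exactly the "caught ENSO at a high and left it at a low" scenario: the same data-generating process gives a strongly negative trend on the unlucky short window and the true trend once enough cycles are sampled.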

  35. Boris, actually, in the past Tamino has said he PREFERS the HadCRU approach of not estimating the polar regions.

    GISS estimates the polar regions from stations up to 1200 km away, and they employ a GCM to aid in this process. HadCRU
    lives with the added uncertainty in their estimate.

    But, as I said, in a reply to me Tamino has said that he PREFERS the HadCRU approach. Be that as it may, Lucia
    uses them all, so she wins and has picked no cherries.

  36. Boris, here is one you will like.

    when Tamino was proposing a bet on climate trends, here is what he said about choosing a temp series

    [Response: None of the metrics — popular or not — is 100% correct. And correcting the GISS Y2K error led to a net change in global average temperature anomaly of 0.003 deg.C.

    As I said, I’m not betting money I’m trying to establish conditions under which we can confirm or deny various hypotheses. It was framed as a bet because that seems to be popular for discussion, and it does force one to be explicit about exactly what conditions will lead to a declaration for one or another hypothesis. For a bet, I think it’s better to keep it simple and agree on a single source of data for decision.

    But for determining the outcome with highest reliability it’s better to use multiple data sets. I intend to keep track of GISS, HadCRU, and NCDC, and I’ll probably keep my eye on satellite data from RSS, UAH, UMd, and UW as well. I’ll report any significant results, regardless of the nature of the result or the source of the data. I expect they’ll end up telling the same story.]

    So Boris, didn’t Lucia do what Tamino suggested on Jan 31, 2008: use all the indexes? And isn’t this the procedure he claimed would have the HIGHEST reliability? And then, in March, when Lucia used all the indices to challenge the IPCC, did Tamino resort to picking the SINGLE index that best made his case?

    LOOKS LIKE you just pwned yourself.

    http://tamino.wordpress.com/2008/01/31/you-bet/

    Well?

  37. OOPS Boris, in fairness to you, Tamino actually says he prefers GISS to HadCRU. So I was wrong WRT that. However, in adjudicating his own bet, he said he prefers to look at all indexes. Make sense of this as you like. So, I was half wrong and half right, if you’re keeping score.

  38. SteveM
    Yep. The major difference between Tamino’s numbers and mine are:
    a) I averaged instead of relying on the metric that gives the least cooling and
    b) By averaging, I reduce the “noise” due to instrument error. (Which, as we see, is an idea Tamino actually endorsed in the past.)

    Of course, keeping the instrument noise by ignoring additional data keeps the error bars wide. That’s one of the reasons Tamino’s error bars are wider than mine. Wide error bars make it difficult to falsify anything, what with the high β (type II) error, which is the error driven down by loads of data.
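    A toy simulation illustrates the point. Here four hypothetical measurement sets are each the same "true" series plus independent instrument noise; averaging them cuts the residual scatter by roughly 1/sqrt(4). All numbers are invented for illustration:

```python
import random
import statistics

random.seed(0)
N_SETS, N_PTS, NOISE = 4, 100, 0.1

# A smooth underlying "true" series, plus independent noise per set.
truth = [0.02 * t / 12 for t in range(N_PTS)]
sets = [[v + random.gauss(0, NOISE) for v in truth] for _ in range(N_SETS)]

# Ensemble average across the sets at each time step.
avg = [statistics.mean(col) for col in zip(*sets)]

def scatter(series):
    # Standard deviation of a series around the known truth.
    return statistics.pstdev([s - v for s, v in zip(series, truth)])

print(scatter(sets[0]), scatter(avg))  # the averaged series is less noisy
```

    Less scatter around the truth means a tighter confidence interval on any trend fitted through the averaged series, which is why ignoring most of the data keeps the error bars wide.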

    So, if you don’t want to find any problems with the IPCC projections, it’s easy enough to avoid them. Just ignore most of the data.

    But, yes. If I ignored 80% of the existing data, and picked the outlier, I too, would make the same conclusion as Tamino.

    Isn’t it amazing how I guessed precisely how he would come to his conclusions? 🙂

  39. Lucia, I read it, and it didn’t dawn on me until a couple of days later, while making a comment at William Connolley’s.

  40. When it comes to GISS and the issue of interpolations and extrapolations, this comparison of the Jan 2008 anomaly (GISS default base) with both 250 km and 1200 km GISS smoothing as a 2 frame GIF animation (converted to Mollweide projection to represent equal areas) is instructive:

    GISS Jan 2007 anomaly - 250km vs 1200 km comparison - equal area

    Bear in mind that many of the more remote 250 km blocks have data from only one station in the grid cell, which is really just a minuscule dot in the image.

    As you can see, the 250 km image reveals just how little Arctic data there really is!

    Does anyone honestly think the 1200 km smoothing accurately reflects real Arctic temperatures?

  41. The arctic above 82.5 north is less than half of one percent of the surface of the earth, so neither its inclusion nor its exclusion can make much difference in the global temperature trend.
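    The "less than half of one percent" figure follows from the spherical-cap formula, which is easy to check:

```python
import math

# Fraction of a sphere's surface poleward of latitude phi:
# cap area / total area = (1 - sin(phi)) / 2
lat = 82.5
frac = (1 - math.sin(math.radians(lat))) / 2
print(f"{100 * frac:.2f}% of the globe lies above {lat} N")  # about 0.43%
```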

    However, the HadCRU data excludes a great deal more than just the Arctic. As far as I can see, they only use data from 5 x 5 degree grid cells in which there are reliable instruments. This coverage has changed a lot over the years: back in 1850 they covered just 22%; in 1989 the figure reached 89%; but it has now dropped back to 81%.

    Since – as I understand it – GISS and HadCRU use exactly the same temperature data, the only difference being the way it is dealt with, then one has to make a choice over which one prefers. Is it better to give each station equal treatment in the way that HadCRU does, or should one extrapolate the data from some stations over areas where there are no reliable thermometers? Does the GISS extrapolation risk increasing errors from marginal stations, or does ignoring 19% of the globe mean that HadCRU is not really a global average at all?
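    The "equal treatment, skip empty cells" approach can be sketched as a cosine-weighted mean over a toy grid. The latitudes, anomalies, and missing cells below are invented, and the real HadCRU processing is far more involved; this only illustrates how uncovered cells simply drop out of the average:

```python
import math

# Toy grid: latitude of cell center -> anomaly (deg C), or None if the
# cell has no reliable station.
grid = {
    75: None, 45: 0.8, 15: 0.3, -15: 0.2, -45: 0.4, -75: None,
}

num = den = 0.0
for lat, anom in grid.items():
    if anom is None:
        continue                       # HadCRU-style: leave the cell out
    w = math.cos(math.radians(lat))    # area weight of a latitude band
    num += w * anom
    den += w

print(num / den)  # mean over covered area only, not the whole globe
```

    Because the two polar cells contribute nothing, the result is an average over the covered 81% (in the toy case, four of six cells), which is the sense in which HadCRU is "not really a global average."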

  42. Does anyone honestly think the 1200 km smoothing accurately reflects real Arctic temperatures?

    To paraphrase Galileo ‘But still it melts!’

Comments are closed.