Cold Tongue SST data? (BLEG)

Because ‘people’ are talking about the 15 years of “no global warming”, I’ve been looking at “Do global temperature trends over the last decade falsify climate predictions?—J. Knight, J. J. Kennedy, C. Folland, G. Harris, G. S. Jones, M. Palmer, D. Parker, A. Scaife, and P. Stott”

This is the paper that contains the oft-quoted text

ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

Since the authors are discussing a graph showing ENSO-adjusted warming at the beginning of the paragraph, the context suggests the authors are discussing 15-year ENSO-adjusted trends. If so, it could be interesting to determine the “ENSO adjusted” trend based on the methodology used in Knight et al. It appears they used the method in “A large discontinuity in the mid-twentieth century in observed global-mean surface temperature
David W. J. Thompson, John J. Kennedy, John M. Wallace & Phil D. Jones”. The method is described in their methods section.

METHODS
All results and analyses are based on monthly-mean data. Following ref. 3, the tropical-mean surface temperature response to ENSO is modelled as:
C dT_ENSO(t)/dt = F(t) − β T_ENSO(t)

T_ENSO(t) denotes the simulated response of monthly-mean tropical-mean surface temperature anomalies to ENSO variability; F(t) is the anomalous flux of sensible and latent heat in the eastern tropical Pacific, parameterized as the average SST anomaly in the dynamically active cold-tongue region (5° N–5° S, 180° W–90° W) multiplied by (1) the fractional area of the tropics covered by the cold-tongue region (~5%), and (2) a coupling coefficient of 2.5 W m⁻² K⁻¹; β is a linear damping coefficient found by linearizing the Stefan–Boltzmann law, β = 4σT_E³, where σ is the Stefan–Boltzmann constant and the mean temperature of the tropical atmosphere, T_E, is assumed to be ~255 K; C is the heat capacity of the tropics per unit area and is determined empirically such that the correlation between T_ENSO(t) and tropical-mean surface temperature anomalies is maximized (the resulting heat capacity corresponds to the entire atmosphere and the top ~8 m of the tropical ocean; C = 5.2×10⁷ J m⁻² K⁻¹). The model was initialized with anomalies in the cold-tongue region starting at 1870 and the output T_ENSO was used from 1875 to 1995.
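Reading that methods paragraph literally, the recipe can be sketched in a few lines of Python. This is my reading, not a verified replication: a forward-Euler monthly step, with the cold-tongue anomaly series `cti` as the input I’m still hunting for.

```python
import numpy as np

SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
T_E = 255.0                          # mean tropical-atmosphere temperature, K
BETA = 4.0 * SIGMA * T_E ** 3        # linearized damping, ~3.76 W m^-2 K^-1
C = 5.2e7                            # heat capacity per unit area, J m^-2 K^-1
COUPLING = 2.5                       # coupling coefficient, W m^-2 K^-1
AREA_FRAC = 0.05                     # cold-tongue fraction of the tropics
DT = 86400.0 * 365.25 / 12.0         # seconds per (average) month

def t_enso(cti):
    """Integrate C dT/dt = F(t) - beta*T(t) over monthly steps.

    `cti` is the monthly cold-tongue SST anomaly series (K)."""
    F = COUPLING * AREA_FRAC * np.asarray(cti, dtype=float)
    T = np.zeros(len(F) + 1)
    for i in range(len(F)):
        T[i + 1] = T[i] + DT * (F[i] - BETA * T[i]) / C
    return T[1:]
```

With a steady 1 K cold-tongue anomaly this relaxes toward F/β ≈ 0.03 K, which is a handy sanity check on the constants.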

So, it seems I need a link to a resource that will provide me “the average SST anomaly in the dynamically active cold-tongue region”. I’m sure someone here knows precisely where that is. (Or knows where the gridded data are.) Anyone know where I can grab this data?

Once I have it, I figure I can compute T_ENSO and do the remaining steps to come up with an ENSO-adjusted HadCRUT4, GISTemp and NOAA/NCDC. After that, computing trends is a snap, so we’ll easily see when and if we get a 15-year ENSO-adjusted zero trend.
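Once T_ENSO is in hand, the adjustment and the trend scan look roughly like the sketch below. This is my reading of the procedure, not necessarily exactly what Knight et al. did; `gmst` and `tenso` are assumed to be aligned monthly anomaly arrays.

```python
import numpy as np

def enso_adjust(gmst, tenso):
    """Remove the component of gmst linearly explained by tenso."""
    slope, intercept = np.polyfit(tenso, gmst, 1)
    return gmst - slope * tenso - intercept

def trends(series, window=180):
    """Least-squares trend (per decade) for every `window`-month span."""
    t = np.arange(window) / 120.0   # time in decades
    return np.array([np.polyfit(t, series[i:i + window], 1)[0]
                     for i in range(len(series) - window + 1)])
```

Scanning `trends(enso_adjust(gmst, tenso))` for the first window at or below zero is then the whole game.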

Mind you– I have little confidence that the climate model in Knight et al. gives correct “weather noise”, but it will still be interesting to see if or when we get a 15-year or longer 0 C/dec ENSO-adjusted trend.

Update:
There is a question in comments about what model. I thought I’d show the graph in Knight et al.
Below I think what Knight et al plotted is:

  • ENSO adjusted observed temperature rises from “year N” to the year ending Dec. 2008. The ENSO adjustment is based on a method in Thompson et al. (2008).
  • ENSO adjusted temperature rises (black) and their 70%, 90% and 95% spread based on 10 model runs down-selected from Collins et al. (2006). Those runs were all based on HadCM3 but with parameterizations varied. So some of the spread in results is “different models” and some is “weather”. (On the other hand, the model spread is constrained by picking runs with ‘long term trends’ between 0.15 C/dec and 0.25 C/dec with a mean of 0.20 C/dec — which is slightly below the model mean in the AR4.)

[Figure: Knight et al. graphs]

I’ve superimposed vertical lines to show the start year one would use to diagnose significance of the rise from 1999 to “now” back in Jan 2009 (purple) and the start year one would use to diagnose the rise from 1999 to “now” in Jan 2013 (black). Using the “method of the eyeball”, it appears a rise of 0C for the ENSO-adjusted annual average temperature in 1999 relative to the annual average temperature in 2012 would be deemed outside the 95% confidence intervals. Note: we must ENSO adjust, and I have no idea what ENSO adjusting will do. On the one hand, 1999 was a strong La Nina– so the adjustment will kick the 1999 temperature up; 2012 was… not so strongly anything, really. So, today, I’m going to see if I can implement the ENSO correction method. I’m a bit puzzled by the wording of the method, but I suspect I can sort that out after drinking some coffee…. Or just starting the analysis often helps.

92 thoughts on “Cold Tongue SST data? (BLEG)”

  1. AJ–
    That looks like it could be it. Now … to read the description and filter appropriately! Then I can get the trends adjusted for ENSO as in Knight et al. Later, I’m going to try to get the model data. John Kennedy comments on twitter, so I may be able to get it from him. He hasn’t tweeted in a week though…. could be on vacation or something!

  2. Lucia,
    NOAA will make the time series for you; instructions are here. Hopefully this URL will get to the final page. Then you just ask to plot – it shows a plot and invites you to download the data.

  3. Quote from “Do global temperature trends over the last decade falsify climate predictions?”

    “we expect that warming will
    resume in the next few years,
    consistent with predictions from
    near-term climate forecasts”

    Their data ends in Dec 2008. Everybody hates trends from short time scales, but I thought it worth looking at what happened over the past ‘few years’.

    Using the useful SkS trend calculator 🙂 I get Jan 2009- Jan 2013 trends of

    HADCRUT4 -0.264 ±0.825 °C/decade
    GISTEMP -0.169 ±0.895 °C/decade
    NOAA -0.158 ±0.846 °C/decade

    Not quite what ‘we were expecting’


  4. Lucia,
    What model are they comparing with the adjusted trend? I remember we both noted some models have crazy levels of short term (one to ten years) variability.

  5. SteveF–

    We can place this apparent lack of warming in the context of natural climate fluctuations other than ENSO using twenty-first century simulations with the HadCM3 climate model (Gordon et al. 2000), which is typical of those used in the recent IPCC report (AR4; Solomon et al. 2007). Ensembles with different modifications to the physical parameters of the model (within known uncertainties) (Collins et al. 2006) are performed for several of the IPCC SRES emissions scenarios (Solomon et al. 2007). Ten of these simulations have a steady long-term rate of warming between 0.15° and 0.25°C decade–1, close to the expected rate of 0.2°C decade–1.

    So… it looked like Collins et al. 2006 used HadCM3, but varied the parameterizations– so effectively different “planets”. I don’t know how many runs Collins et al. ran. Then Knight et al. fished out 10 runs with ‘long term warming’ between 0.15 and 0.25 C/dec.

    The plots show spread in *ENSO adjusted* temperature for these runs. The observations are definitely ENSO adjusted. The text for the figure reads as:

    (b) ENSO-adjusted global mean temperature changes to 2008 as a function of starting year for HadCRUT3, GISS dataset (Hansen et al. 2001) and the NCDC dataset (Smith et al. 2008) (dots). Mean changes over all similar-length periods in the twenty-first century climate model simulations are shown in black, bracketed by the 70%, 90%, and 95% intervals of the range of trends (gray curves).

    (The text in the body of the report makes it clear the model traces are also ENSO adjusted.)

    What we can’t know from the paper itself:
    1) What results one would have gotten if one did NOT ENSO adjust. (Mind you, it starts in 1999, so likely it would have looked ok.)
    2) What sort of “weather” is in that model.
    3) Whether ENSO in that model is really realistic. Maybe it’s wild. Maybe it’s non-existent.
    4) Whether the ENSO correction for the model weather does “anything” at all.
    5) How much of the spread in trends was due to model parameterizations vs. being “weather”.

    All these things made me consider the paper a “mystery” paper at the time. My diagnosis of “State of the Climate” was that it was probably one of those invited things where lots of people submit sketchy papers and they have a whole bunch “peer reviewed”. But really… almost everything gets in with little review or feedback. And the number of questions left unanswered makes this less informative to a reader than a substandard poster presented at a poster session! (This is not to say the results are wrong nor that they are right. Only that… huh?)

  6. Lucia… WRT point 3, it looks like HadCM3 does a reasonable job replicating ENSO. Climate Explorer allows you to do spectral analysis under the “Investigate this time series” section in the right bar. Using this function on the detrended anomalies for the tongue region for both HadSST3 (1950-2012) and HadCM3 (2001-2063), both periodograms show peak powers of ~4 at ~4yrs. Whether HadCM3’s global temperature response mimics HadCRUT4 is another question.
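    For anyone who’d rather do this check offline than in Climate Explorer, a rough equivalent is sketched below: a plain FFT periodogram of the linearly detrended series. This is not necessarily the exact method Climate Explorer uses.

```python
import numpy as np

def periodogram(series):
    """Detrend, then return (frequency in cycles/month, power)."""
    x = np.asarray(series, dtype=float)
    t = np.arange(len(x))
    x = x - np.polyval(np.polyfit(t, x, 1), t)   # remove linear trend
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freq = np.fft.rfftfreq(len(x), d=1.0)        # monthly sampling
    return freq, spec
```

A ~4-year ENSO peak should show up near f = 1/48 cycles/month.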

  7. AJ–
    That’s useful to know. Is the *amplitude* of ENSO anywhere near right? Is the correlation between ENSO and GMST similar? Those are questions it might be useful to ask– a short discussion in the paper might have been useful. (In fairness, it might be in the cited reference though.)

  8. Using the same sample periods, I get the following standard deviations on the detrended anomalies:

    HadCM3: 0.98C
    HadSST3: 0.84C

    I would say the *amplitudes* are somewhere near right. This was very quick and dirty so caveat emptor.

    Can’t say anything about the correlation between ENSO and GMST at this point. Maybe I’ll look at it later.

  9. Just one other note. The trends for the tongue area are:

    HadCM3 (2001-2063): 0.25C/Dec.
    HadSST3 (1950-2012): 0.07C/Dec.

    Only 12 years overlap, but that looks wonky.

  10. AJ,
    I’m not sure that’s evidence of wonkiness. SST3 is predicted to warm just like everything else. It didn’t warm as much in the past.

  11. Yeah… “wonky” wasn’t the right word. “Surprising” might have been a better choice. Bob Tisdale might agree with “wonky” though 🙂

  12. (It took me ages to work out the direction of time’s arrow in the figure)

    In what sense are they using anomalies in this sentence?

    “denotes the simulated response of monthly-mean tropical-mean surface temperature anomalies to ENSO variability”

    I have always taken ‘anomalous’ in a climate science sense to mean unnatural or ‘man-made’, however this isn’t that usage. Are they trying to state that the top part of a sine-wave is an anomaly compared to the mean, or am I being dumb again?

  13. DocMartyn
    Anomaly is just “difference from some reference.” So yes, daytime tends to be anomalously warm and nighttime anomalously cold relative to the daily mean (or median) temperature. On the other hand, you could define the reference to include the daily cycle, for example, “the temperature at this exact minute of the day averaged over the entire year” in which case you might imagine summer nights to be anomalously warm and winter days to be anomalously cold.

  14. Oliver, don’t mean to make this overly complicated, but it is one of my not-so-hidden talents. 😉

    Anomalization also includes “deseasonalization”, so basically that means averaging the monthly signal over several years (the “baseline period”) to obtain a mean annual cycle, then subtract this off the full index. (You also need to detrend your annual cycle before subtracting it from your data.)

    This slightly more nuanced (maybe clear as mud) explanation reveals that if the annual cycle varies outside of the baseline period, that variation will show up as a non-zero anomaly, even if the measurements were otherwise noiseless.

    Of course this is what you want, if you want to observe a secular trend. But it could mean other things too, especially when the annual signal is typically much larger than the anomaly that is left over (so any phase shift in the annual cycle relative to the base period will show up as a nonzero anomaly).
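    A bare-bones sketch of the anomalization just described: average each calendar month over a baseline period to get the mean annual cycle, then subtract that cycle from the whole series. (This minimal version skips the detrend-the-annual-cycle refinement mentioned above.)

```python
import numpy as np

def anomalies(monthly, base_start=0, base_len=360):
    """Subtract the baseline-period mean annual cycle (12 values)."""
    x = np.asarray(monthly, dtype=float)
    base = x[base_start:base_start + base_len].reshape(-1, 12)
    cycle = base.mean(axis=0)                  # mean annual cycle
    return x - cycle[np.arange(len(x)) % 12]   # deseasonalized series
```

A departure outside the baseline period survives as a nonzero anomaly, which is exactly the point.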

  15. Carrick,

    You’re absolutely right. The “not anomaly” part in climate typically includes the “average” seasons, which as you say can throw you for a loop if there are even slight phase fluctuations (because the amplitude of the baseline signal is so large).

    And then we still have stuff left that we can never quite fold into the baseline, so we just give it a cool name (like “ENSO”). 😀

  16. P.S.: For some reason, I first read your post as saying “Anomalization also includes ‘desalinization…'”

  17. What does it mean to ENSO-adjust the output from a model simulation when many models do a poor job of representing ENSO in the first place? Are changes in mean global temperature in the model even correlated with changes in the NINO indices, as they are in the real world?

    This paper discusses how well the HADCM3 model represents ENSO, but doesn’t discuss the correlation between model mean global temperature and NINO indices.

    http://www.met.rdg.ac.uk/~mat/h3var_paper/paper.pdf

  18. Doc

    I have always taken ‘anomalous’ in a climate science sense to mean unnatural or ‘man-made’, however this isn’t that usage.

    Temperature anomalies are commonly used to mean relative to an average computed over a baseline. It’s generally not meant to convey “man-made” or “unnatural”.

  19. Ok… I have numbers to post either this weekend or Monday. There were some of the usual difficulties going from “words” to “what they really did”. I think one was an ambiguity in the paper (not sure– but I think so).

    Another was… well, as usual, a coding error. Converting frequencies in 1/s to 1/months… I, uhmm, left out the part of the conversion that accounts for 24 hours in a day. I kept looking at things saying… that’s got to be off by a factor between 10 and 100.. I mean… it’s just not right! Still… I was blind to the typo for.. oh… 30 minutes or so. Yeah. Ok… you do stuff like that too. Admit it. It’s like all the “forgot to divide by sqrt(2)” things. The rest of the time was just reading to check choices. Some choices are somewhat arbitrary — I don’t mean in a ‘bad’ way, just someone had to pick something. I want to match as much as possible and stuff is spread over 2 papers.

    Now I’m eager to try to get the “model-data”. That would help tease out some things that are puzzling me in the paper. (Note: there are discussions of over all dT over a period of time and then also trends. Testing trends is– I think– more statistically powerful relative to looking at dT shown in the figure I inserted above. But the text also discusses trends. So, I’d like to hammer through a few to get a more specific sense. )
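    (For the record, the conversion with all of its factors. The helper name is just for illustration:)

```python
# Seconds in an average month: forget the factor of 24 and everything
# lands off by a factor of ~24 -- i.e., "between 10 and 100".
SECONDS_PER_MONTH = 60.0 * 60.0 * 24.0 * 365.25 / 12.0

def per_second_to_per_month(f_hz):
    """Convert a frequency in cycles/second to cycles/month."""
    return f_hz * SECONDS_PER_MONTH
```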

  20. Frank

    Are changes in mean global temperature in the model even correlated with changes in the NINO indices, as they are in the real world?

    They aren’t thought to be, and the paper is based on the assumption they are not.

    What does it mean to ENSO-adjust the output from a model simulation when many models do a poor job of representing ENSO in the first place?

    Good question. Thanks for the reference. I’ll read that.

  21. Doc,
    My interpretation of anomaly is “difference from what you’d expect”. Very similar to residual. That’s a reason why they are so useful.

    You see it if you are trying to present a color world map of temperatures for the month. If you show raw temp for Feb 2013, you use up all the colors just to show the latitude variation. Every Feb looks the same. Whereas the anomaly plot shows what was different about Feb 2013. That’s the news.

  22. HR (Comment #111117)
    March 7th, 2013 at 10:37 pm
    AJ got comment 111111

    And here I got a poker game tomorrow night. Shit!

    It would be suspicious if I somehow managed to get six bullets though. I might lose the one friend I have 🙂

  23. Lucia says:

    I’m a bit puzzled by the wording of the method

    The method looks very similar to this homework from the “Introduction to Climate” course that the Colorado State University had in 2005.

  24. Lucia…

    2’s don’t beat 1’s in poker. I thought one only needs 1,000,000 + 111,111 more comments before I’m beat. Then I saw this:

    http://www.youtube.com/watch?v=XunAlp2azhA

    Given that there are no suits in comments, the next straight probably beats me. But, like climate science, we’ll make the rules up as we go 🙂

  25. Seeing that’s Ize getz to makes the rules, of this comment at least, the next best hand to beats me is not 1,234,567, but 1,111,111.
    So sucks it sucker!!!
    AJ rules!!!!

  26. Skeptikal

    …a time series of surface temperature anomalies averaged over the dynamically active cold-tongue region (CTI; 1870–2002). Both time series are for monthly-mean temperature anomalies (i.e., departures from the seasonal cycle)

    My question is: if this has a secular trend… do I detrend first? I think the answer is yes. But it’s not clear from the words in the paper.

    Since what I’m trying to do is mimic whatever they did, I want to know what they did. I think the answer is: Physically, that homework problem does what they think it does if you can be pretty sure there is no secular trend. But if there is a secular trend… you have to deal with that, and it’s better to do it on the *first* step. Otherwise, the “decay” and the “forcing” rates are relative to the ‘0’ that is the average value of the baseline… and that’s weird… So linearizing puts the estimate closer to the ‘0’ for that instant in time.

    If there is a non-linear secular trend… eek!
    (Eek! is not a criticism of the method per se. If you want to do something and the data are dirty, you still want to do something. And there is nothing you can do that’s perfect. But… I just want to know if they detrended everything *first* and I can’t quite tell from the wording.)

    It’s morning… I’m waiting for coffee…..I’m trying to remember if there was something else that puzzled me yesterday.

  27. Bill Illis:
    Is this a Guardian jumps the shark moment? Nick Stokes would have been a better choice!!

  28. bernie–
    Dana and John Abraham may have reached out to the Guardian or had someone bring up the idea. Nick probably didn’t contact them.

  29. So Nick, you don’t think that it is an intellectual trap to take a data series, remove from it the ‘natural signals’, and call the result an anomaly?
    It is the identification of the natural signal that bothers me.
    I always wonder why medics bother recording diastolic and systolic blood pressure, when they could just use blood pressure anomalies

    (((diastolic and systolic blood pressure)/2) minus 115)

  30. Carrick (#111137)
    Anomalization also includes “deseasonalization”
    If you deseasonalize a thyme series, won’t it be too bland?

  31. lucia (Comment #111161)

    Since what I’m trying to do is mimic whatever they did, I want to know what they did.
    …………..
    But… I just want to know if they detrended everything *first* and I can’t quite tell from the wording.

    I *think* that they would have detrended first, but anything’s possible. Your mimicry might come down to trial and error until you can successfully reproduce their results.

    Personally, I think what they’re doing is just an easy way to get rid of an inconvenient high point in the time series. ENSO has different regions, with region 3.4 mostly used to determine El Nino or La Nina conditions. You can see a graphical depiction of the regions here. The ‘cold tongue’ (not depicted) covers most of the ENSO 3 & 4 regions.

    A warm year is a warm year, regardless of the heat source. I don’t think it’s a good idea to remove the parts of the planet you don’t like in order to create the trend you do like. Next we’ll see people removing Antarctica from the temperature record because it isn’t warming enough… you’ll get a higher trend, but is it really worth anything?

  32. HaroldW (Comment #111167)

    That’s my kind of humor.

    I have made temperature series with monthly anomalies where I could still detect a small residual seasonal effect with an acf – or should I say just a hint of seasoning.

  33. Dumb questions 101: How come when climate scientists write a paper with a specific result, no-one can figure out exactly what they did to arrive at the result?
    Obfuscation? Sloppy language?
    Perhaps there should be a template of standard statistical procedures somewhere, then they can say we did (8a), then (2c), then (16j)…..
    Otherwise how do peer reviewers know they’ve used reasonable statistical techniques?

  34. Re: Skeptikal (Comment #111169)

    You make a good point. What could possibly justify removing the El Nino? If it is a pure oscillation with zero average, then keeping it will not change the underlying trend anyway, at least not over long enough times.

    On the other hand, if it does not average to zero over long times, then it does reflect an energy imbalance in the system, and that means you want to keep it!

  35. Skeptikal (Comment #111169)
    Sometimes one just has to ask the appropriate person. I’ve emailed Thompson and asked. I received an “out of the office reply”, so I’ve asked John Kennedy. Eventually, I’ll get the answer.

    As for why they do it– I think they must believe there is a part that can be explained as “not secular trend” and so they remove it. One can debate it– but there are at least two issues of interest:

    1) People (e.g. Monckton/WUWT/Others) are making claims about what Knight et al said and many others do want to know where we stand relative to that. (Heck, I want to know.)

    2) Whether or not one thinks what Knight did was convincing/made sense and so on.

    With respect to (1), we need to know *what* they did — at some level of nit-picky detail. I’m trying to find that stuff out so I can process the data and see where we stand.

  36. cui bono–
    Why? It depends. The reviewers aren’t trying to replicate. In principle “all” the details should be stated, but the authors aren’t following it like a manual, so sometimes little things are a bit ambiguous.

    I’m reasonably certain the answer will be “We detrend SST first”. But the paper doesn’t actually say so explicitly. They just say “SST anomalies”. Well… if you define “anomaly” as “relative to the trend line computed based on SST anomalies– defined as relative to monthly averages over period X”… then they did what I think they did. But if it’s “relative to the mean over the baseline”, that’s a different calculation.

    Look: Even with legal statutes where people are trying to state precisely what the law permit or prohibits, sometimes different judges disagree. So, this isn’t just a “climate research paper” thing. In this case, I can just ask. Eventually, I’ll get an answer. I’ll probably blog first– and just say which way I interpreted it. Then, if the authors get back later, I can redo. It’s not a big deal. (I think there are going to be 3 blog posts at least.)

  37. Lucia, I’m glad you’re looking at this. I expect I’m not the only lurker following this thread with great interest and curiosity.

  38. Mark Bofill–
    Right now, I’m getting that the ENSO-adjusted change in temperature since 1999 (the year Knight et al used as “start year” in their paper) does not fall outside the range of changes in temperature, if we were to eyeball it and put it on the graph.

    However, I also want to get a spread of trends– which the paper does not include a graph of, even though they mention the word “trends”. The reason I want the spread of trends is that I’m pretty sure that method is more statistically powerful. (I’ll blog about that– I may have in the past, actually, back when George Will was *criticized* for looking at change in temperature over N years and told by “many” he should look at least-squares trends. Yet, looking at total change in temperature is the method used in Knight. I don’t know why they used that– but I’m pretty sure it’s not a good method relative to looking at the spread in trends. I’ll do some Monte Carlo– and if I’m wrong, I’ll report that too.)

    Also, I’d like a number of years and I want to do more than “method of eyeball”. I’d rather recreate the graph from scratch rather than digitizing.
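    (The Monte Carlo I have in mind, sketched below: synthetic series with a known trend plus white noise, comparing the spread of least-squares trend estimates against endpoint-difference estimates. White noise only; autocorrelated ‘weather’ could change the picture.)

```python
import numpy as np

def compare_spreads(n_months=180, trend=0.2, noise=0.1, n_sim=2000, seed=0):
    """Spread of OLS-trend vs endpoint-difference estimators (white noise)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_months) / 120.0          # time in decades
    ols, endpoint = [], []
    for _ in range(n_sim):
        y = trend * t + rng.normal(0.0, noise, n_months)
        ols.append(np.polyfit(t, y, 1)[0])               # C/decade
        endpoint.append((y[-1] - y[0]) / (t[-1] - t[0])) # C/decade
    return float(np.std(ols)), float(np.std(endpoint))
```

For white noise the endpoint method throws away all the interior points, so its spread should be several times wider than the least-squares spread.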

  39. A warm year is a warm year, regardless of the heat source. I don’t think it’s a good idea to remove the parts of the planet you don’t like in order to create the trend you do like. Next we’ll see people removing Antarctica from the temperature record because it isn’t warming enough… you’ll get a higher trend, but is it really worth anything?

    They aren’t exactly removing a part of the planet. They are trying to “explain” part of the variability as “not trend” based on what happens in a part of the planet. Whether or not this really works could be debated. Also– if I get the model data we’ll be able to discuss how ENSO correcting affected the conclusions– or the statistical power. But… right now.. I don’t have the model data for the 10 runs.

  40. @julio (Comment #111172)

    March 9th, 2013 at 12:25 pm
    Re: Skeptikal (Comment #111169)
    You make a good point. What could possibly justify removing the El Nino? If it is a pure oscillation with zero average, then keeping it will not change the underlying trend anyway, at least not over long enough times.

    Over long enough times, yes. The issue is, people are demanding explanations for short term trends, when the cyclical effects are significant.

  41. bugs–
    The length of the cycles being examined doesn’t justify removing ENSO since it’s removed both from the models and the observations. The justification for removing ENSO would be creating a more powerful statistical method. I don’t actually know if that goal was realized because I don’t (yet) have access to model runs.

  42. Anyway, I’m pretty sure researchers’ interest in shorter trends doesn’t have a thing to do with “people are demanding explanations”.

    Pretty sure it’s not even on their radar, unless they are Trenberth or somebody like him.

  43. IIRC, the models do create ENSO-like cycles; they are not ‘removed’, just as they are not ‘created’. They should just be a product of coupling the ocean and atmosphere with geographic features. The ENSO-like cycles are going to occur at times that have nothing to do with the times they actually occur in the real world.

    There have been attempts to use models with initial conditions to try short term prediction of ENSO, but they don’t seem to be too successful so far.

  44. @Carrick (Comment #111184)
    March 10th, 2013 at 3:25 am

    Anyway, I’m pretty sure researcher’s interest in shorter trends doesn’t have a thing to do with “people are demanding explanations”.
    Pretty sure it’s not even on their radar, unless they are Trenberth or somebody like him.

    You haven’t seen all the ‘global warming has stopped’ threads around the blogosphere then.

  45. bugs: “You haven’t seen all the ‘global warming has stopped’ threads around the blogosphere then.”

    People who do research in general don’t pay much attention to the blogosphere (with a few notable exceptions), in terms of what topics to study. If they are studying short period trends, it’s because it’s an interesting topic, not because “people are demanding explanations”.

    There are notable exceptions of people who spend time reading the internet and worrying about what Anthony Watts (or Willis etc) says.

    But even somebody like myself who does a fair amount of reading is unaffected, in terms of my research topics, by blogosphere chatter. In fact, my colleagues would probably think I was really weird if I did let blogosphere chatter dictate what I studied.

    In the real world, people just don’t care that much what laypeople think, at least not in any sense other than “let me do the best to communicate my understanding to correct the deficiencies in your own understanding on this matter.”

  46. @Carrick (Comment #111187)
    March 10th, 2013 at 4:03 am

    bugs: “You haven’t seen all the ‘global warming has stopped’ threads around the blogosphere then.”
    People who do research in general don’t pay much attention to the blogosphere (with a few notable exception), in terms of what topics to study. If they are studying short period trends, it’s because it’s an interesting topic, not because “people are demanding explanations”.
    There are notable exceptions of people who spend time reading the internet and worrying about what Anthony Watts (or Willis etc) says.
    But even somebody like myself who does a fair amount of reading is unaffected, in terms of my research topics, by blogosphere chatter. In fact, my colleagues would probably think I was really weird if I did let blogosphere chatter dictate what I studied.

    You may be right, the politicians who make, or break, the policy seem to like to hear the blogs, including the ‘worlds most popular science blogs’ more than the professionals. I’m tipping Obama won’t get his legislation passed.

  47. bugs–
    Sure. Politicians may hear blogs. Anthony’s is very widely read so it likely will influence some politicians. Not sure what that has to do with whether or not it makes sense to correct for ENSO. And based on your wording of what you think the “reason” is, I’m pretty sure you don’t really understand why scientists would want to correct for ENSO if it can be done correctly.

  48. Carrick,

    People who do research in general don’t pay much attention to the blogosphere (with a few notable exception)

    I presume you mean a guy whom all “Hockey Stick” paper authors ought to pay attention to?

    One would hope that the latest batch of HS authors will have more sense in responding to his inquiries than those in the past.

    Already seeing things turned up by “laymen” that the referees missed.

    http://climateaudit.org/2010/02/03/the-hockey-stick-and-milankovitch-theory/?replytocom=403671#respond

    (Cue the “I don’t like McIntyre so therefore nobody should listen to him” rant from bugs.)

  49. bugs (#111188): ” the politicians who make, or break, the policy seem to like to hear the blogs.”
    I think a more accurate view is that the politicians seem to like to hear the blogs which agree with their positions. They seem to have no problem with ignoring blogs (or constituents) who present contrary views. I suppose that’s only natural; it seems to be universal to prefer positive feedback to negative.
    .
    But as Lucia says, we’re wandering from the topic of cold tongues. [To forked ones.]

  50. From lucia (Comment #111148) March 8th, 2013 at 5:29 pm

    Frank: “Are changes in mean global temperature in the model even correlated with changes in the NINO indices, as they are in the real world?”

    Lucia: “They aren’t though[t] to be and the paper is based on the assumption they are not.”

    Frank continues: Since ENSO (as quantified by the NINO indices) is associated with multi-year shifts in global temperature, we can ENSO-adjust the mean global temperature and obtain a signal which appears less “noisy” and has lower uncertainty limits when a theoretical fit to some model is performed with temperature vs time. This allows other signals in the data to be seen more clearly.

    If the same thing isn’t true for the relationship between ENSO and global temperature in the Hadley GCM, then one isn’t removing noise from the GCM results. If the model doesn’t place “ENSO-driven warming/cooling” in the locations used by the NINO indices, they won’t be removing ENSO “noise” from their global temperatures. If the global warming in the model isn’t correlated with the ENSO indices, you can’t remove any “ENSO noise”. It seems to me that ENSO-adjusting the GCM results could artificially widen the uncertainty intervals obtained from fitting the time vs temperature data to a statistical model, making the long “pauses” in warming more likely.

  51. Frank–
    Sorry. When I answered, I thought you meant is the secular trend correlated with NINO. I don’t know why I thought that…

    If the same thing isn’t true for the relationship between ENSO and global temperature in the Hadley GCM, then one isn’t removing noise from the GCM results.

    Yes. I agree.
    I want the time series for the model data for the 10 runs actually used in Knight et al. I’ve asked John Kennedy (2nd author), but I suspect he is busy currently because he normally tweets but hasn’t since early March. I need to find Knight’s email. (I wrote Kennedy because he’s 2nd author on Thompson and I got an out of the office response from Thompson.)

    you can’t remove any “ENSO noise”. It seems to me that ENSO-adjusting the GCM results could artificially widen the uncertainty intervals obtained from fitting the time vs temperature data to a statistical model, making the long “pauses” in warming more likely.

    I doubt it would widen them but it could fail to narrow them. That’s the issue.

  52. Lucia,
    I am uncomfortable with a short term (~30 years) ‘cold tongue’ ENSO adjustment, not because I think removing a known short term influence won’t improve the estimate of an underlying longer term trend (I think it will, as do other much simpler methods for accounting for ENSO effects), but because it seems to ignore the possibility of longer term (eg ~60 year) cyclical contributions. If the ‘cold tongue’ adjustment based on short term trends is used to declare GCM projections of rapid warming ‘consistent with’ observed warming, while ignoring longer term discrepancies, then it may serve a political purpose, but is otherwise not much different from F&R. I think any ‘removal’ of ENSO influence needs to be applied to the whole of the historical record, not just ~30 years.

  53. SteveF–
    Thompson et al removed ENSO from a long period. They were mostly trying to understand the 40s. Knight et al. was just looking at recent warming and comparing to models. The problem I have with Knight et al. is that, really, it is sketchier than the average blog post. For example: think about this graph

    Did the multi-model mean predict a temperature rise consistent with a rate of 0.2C/dec from, say, the trough after the eruption of Pinatubo? No, it does not. So why is the temperature rise since the early 90s even compared to that? I suspect the authors had a preconceived notion, slapped this together, and whoever the reviewers were didn’t care that, if our goal is to learn whether observations are inconsistent with models, this comparison makes no sense. Other things are just as sketchy. It’s really rather amazing.

    Of course I’m not surprised that the trend did not suddenly shoot up to look consistent with models. But presumably the authors of Knight et al. and the earlier Easterling and Wehner kinda sorta ought to be.

  54. Knight says

    Ensembles with different modifications to the physical parameters of the model (within known uncertainties) (Collins et al. 2006) are performed for several of the IPCC SRES emissions scenarios (Solomon et al. 2007).

    Hmmm… I had initially thought the 10 runs used as the ensemble here were downselected from Collins. I now think that cannot be the case. Reading the only Collins et al 2006 reference in this “State of the Climate Report” reveals Collins et al did perturbed physics ensembles, but with doubled CO2 experiments and such, not driven by SRES. So, Knight must have run their own cases fresh, and the Collins reference is merely telling us they ran perturbed physics models in the way Collins ran perturbed physics models.

    That means this documentation is even sketchier than I thought. The reader is left clueless about the range of parameter variations. (Sorry, but “within known uncertainties” is ridiculously vague!) There is no way to know how these variations might have screwed with HadCM3’s ability to have a realistic ENSO, its overall variability and so on. (And no… we can’t learn from Collins. We also can’t learn from tests of the default HadCM3!)

  55. It has not been documented that ENSO is simply a random fluctuation around 0, which is what they seem to be assuming. Their calculation looks like they subtract the ENSO effect to see if some other fluctuation is going on as a residual, but what if the solar activity/whatever cycles are affecting the pattern/freq/magnitude of ENSO?

  56. bugs:

    You may be right, the politicians who make, or break, the policy seem to like to hear the blogs, including the ‘worlds most popular science blogs’ more than the professionals. I’m tipping Obama won’t get his legislation passed.

    If he doesn’t, it won’t be because of WUWT, but rather because Obama chooses (and I think he will) to spend his finite political capital differently.

    John M, I certainly pay attention to Steve Mc’s blog. I think you’d be really foolish not to pay careful attention to issues raised on his blog.

  57. re: Craig Loehle (Comment #111199)

    ….what if the solar activity/whatever cycles are affecting the pattern/freq/magnitude of ENSO?


    Here is a ‘solar cycle domain’ comparison of the NINO 3.4 (Kaplan version) segments during solar cycles 22 and 23. One cannot see this relationship clearly in the time domain, as cycle 22 lasted 9.7 years and cycle 23 lasted 12.6 years.

    Such comparisons can be done back to solar cycle 10. Not all cycles cohere with 23 and 24 but there are many other examples of pairs of cycle segments which are obviously correlated.

  58. Bruce,
    “SO2 went down from 1980 to 2000 the equivalent of one Pinatubo.”
    .
    That’s a bit like comparing an avocado with a hand grenade. Both about the same size and shape, but a little different in behavior. 🙂 Human sulfate emissions have a short (several days) half life in the troposphere; Pinatubo’s particulates and sulfates went into the stratosphere, with a residence half-life of 6 – 12 months.
    .
    Besides, the AR5 SOD indicates that the IPCC’s best current estimates of primary and secondary aerosol influences are lower than had been assumed in the past, not higher. The aerosol kludge is just that, a kludge, not an explanation. Could aerosols explain all the discrepancies between models and measured temperature? Maybe, but since there is no aerosol data, it comes down to a “we can’t think of anything else” argument. I can think of something else: there is a pseudo-cyclical variation with ~60 years period in the historical record, and the true climate sensitivity is much lower than the IPCC’s best estimate of 3.2C per doubling.

  59. Craig–
    Whatever other fluctuations are going on are presumably what contributes to the “error bars”.

    Bruce–
    Whether one can or cannot also adjust SO2 is not particularly relevant to the issue of “what they did” or even the question Knight was addressing.

  60. Lucia (Comment #111193) said:

    “I doubt it [ENSO-adjusting] would widen them [confidence intervals] but it could fail to narrow them. That’s the issue.”

    If the model ENSO-adjustments are merely noise, adjustment should increase the confidence intervals. Noise + noise = more noise and more likelihood of observing pauses in model warming. It’s suspicious that neither Knight et al nor the paper I linked discussed the correlation between model ENSO indices and model global temperature – a fairly obvious and important subject.

  61. Frank–

    But the method doesn’t do “noise+noise”. It removes the part of the “noise” that is correlated with something. The way it’s done, the residuals relative to the time *will* be smaller.

    It is odd that Knight et al don’t discuss that, but the paper is just so sketchy it’s ridiculous. The correlation between model ENSO indices and model global temperature is just one of the many things one would need to know to evaluate whether the ENSO correction “likely works”, and the paper doesn’t discuss it.
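    To illustrate the point with synthetic data (this is only a sketch of the general regression idea, not Knight et al.’s actual flux model): adding an ENSO-like covariate to the trend regression can only shrink the residuals, never inflate them, because least squares minimizes the residual sum of squares over the larger model.

```r
# Synthetic illustration (not the Knight et al. method): regressing out
# an ENSO-like index shrinks the residuals of the trend fit.
set.seed(42)
n    <- 360                                        # 30 years of monthly data
t    <- 1:n
enso <- arima.sim(model = list(ar = 0.9), n = n)   # stand-in "ENSO" index
temp <- 0.0015 * t + 0.1 * enso + rnorm(n, sd = 0.1)

raw_fit <- lm(temp ~ t)          # trend with no adjustment
adj_fit <- lm(temp ~ t + enso)   # trend with the index regressed out

sd_raw <- sd(residuals(raw_fit))
sd_adj <- sd(residuals(adj_fit))
# sd_adj <= sd_raw always holds: OLS cannot do worse by adding a regressor
```

    Whether the adjustment narrows the trend confidence interval in a useful way depends on how much variance the index actually explains; if the model’s “ENSO” is uncorrelated with its global temperature, sd_adj barely moves.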

  62. SteveF, the human SO2 went into the atmosphere near all the thermometers, unlike the Pinatubo SO2 which was spread all over the globe and was very thin over the thermometers.

    “Besides, the AR5 SOD indicates that the IPCC’s best current estimates of primary and secondary aerosol influences are lower than had been assumed in the past, not higher.”

    Real scientists disagree.

    “A major clearing of the air has occurred in the Netherlands in the past few decades. These changes are so large that they have become very obvious when looking at the data of individual stations. Strong indications can be found linking human emissions of aerosols to the visibility changes. Coincident with the visibility changes, large trends in cloud cover, sunshine duration and temperature are found, in particular during daytime in summer, showing that these tiny particles might have a significant influence on regional climate.”

    http://www.staff.science.uu.nl/~delde102/CleanerAirBetterViewsMoreSunshine.pdf

    “According to the authors, “the average increase of [surface solar radiation] from 1982 to 2008 is estimated to be 0.87 W m−2 per decade,” which equates to 2.26 W m−2 over the 26 year period. By way of comparison, this forcing was 12.5 times greater than the surface forcing alleged by the IPCC from increased CO2 over the same period.”

    http://sunshinehours.wordpress.com/2012/10/27/huge-increase-in-sunshine-reaching-earth-12-5-times-the-co2-warming/

  63. Brandon– I ran your script but changed the trend to 0.02 from 2 (just so the errors would be larger compared to the mean trend.)

    I get

    mean(a)
    [1] 0.02009697
    > mean(b)
    [1] 0.02016038
    > apply(a,1,sd)
    c(1:100)
    0.1217226
    > apply(b,1,sd)
    x
    0.1214310

    So, for this one case the mean of a happened to be closer to correct than b, which would suggest OLS was more accurate, but I think that’s just noise. I think both will asymptotically give the correct answer, which is the definition of “accurate”. That is, I think both are equally accurate.

    I get apply(a,1,sd)
    c(1:100)
    0.1217226
    > apply(b,1,sd)
    x
    0.1214310
    So, in this case ARIMA got a smaller standard deviation. My recollection was that if we run enough of these, OLS will give the smaller sd, but perhaps I remember wrong. I’ll go try again.

    (Don’t let Mosher see you programming loops in R! 🙂 )

  64. Ok– This is a case where we know the mean. So, to evaluate you want to do this:
    > a=NULL
    > b=NULL
    > N=20
    > x = 1:N
    > NumMonte=10^4
    > for (i in 1:NumMonte){
    + e = 25*arima.sim(model=list(ar=0.3),n=N)
    + y = e
    +
    + a = cbind(a,lm(y~x)$coefficients[2])
    + b = cbind(b,arima(y, xreg=x, order=c(1,0,0))$coef[3])
    + }
    > mean(a)
    [1] 0.02441206
    > mean(b)
    [1] 0.02293534

    # you don’t want to do sample mean–sd, because we KNOW the mean is 0 in this case.
    > sum(a^2)/sum(b^2)
    [1] 0.9984205
    >

    Note that the standard error for the mean over these should be about:
    > sqrt(sum(a^2)/NumMonte)/sqrt(NumMonte)
    [1] 0.01319875

    So, the difference between mean(a) and mean(b) tells us nothing. (Both should approach zero as we increase NumMonte.)

    You need to do WAY more runs to discriminate which is better. It’s really, really close. I thought I’d done this… and the edge goes to OLS. But– as you see it’s really close. And I’ll admit I’m not sure. You’ll have to run 10^6 of these or something.
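    For what it’s worth, here is a pre-allocated sketch of the same comparison (my restatement, not the script from the thread) that can be scaled up simply by raising NumMonte; with a zero true trend, mean(est^2) is the mean squared error, so the ratio compares the two estimators directly.

```r
# Pre-allocated Monte Carlo comparing OLS and AR(1)-ML trend estimates
# on trendless AR(1) noise. Scale NumMonte up (toward 10^6) to resolve
# the tiny difference between the two.
set.seed(1)
N        <- 20
x        <- 1:N
NumMonte <- 1000                  # small here; increase for a real test
a <- numeric(NumMonte)            # OLS slope estimates
b <- numeric(NumMonte)            # arima() slope estimates
for (i in 1:NumMonte) {
  y    <- 25 * arima.sim(model = list(ar = 0.3), n = N)
  a[i] <- lm(y ~ x)$coefficients[2]
  b[i] <- tryCatch(arima(y, xreg = x, order = c(1, 0, 0))$coef[3],
                   error = function(e) NA)   # arima() occasionally fails
}
mse_ratio <- mean(a^2) / mean(b^2, na.rm = TRUE)  # ~1 means a near tie
```

    The tryCatch guard matters at large NumMonte because arima() sometimes fails to converge on short noisy series.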

  65. lucia:

    (Don’t let Mosher see you programming loops in R! 🙂 )

    I actually had a problem because of that a couple weeks ago. I wanted to do some testing, but a good sample size generated/analyzed with loops would take a long time. I was able to remove three of the four loops I used by using arrays and the like. Try as I might, I couldn’t get rid of the fourth loop. I couldn’t find a way to create multiple ARIMA sequences without using a loop.

    I could create a really long series then break it into smaller ones to avoid using a loop, but I’m not sure that’s really any better.

    You need to do WAY more runs to discriminate which is better. It’s really, really close. I thought I’d done this… and the edge goes to OLS. But– as you see it’s really close. And I’ll admit I’m not sure. You’ll have to run 10^6 of these or something.

    I’m currently running 10^6 with your changes (no trend, 20 points per series) to see what happens. It may turn out the differences I’ve seen so far are just noise. If so, it’s weird that every time I ran things before posting, the results came back the same. Screwy luck, I suppose.

  66. The results from that run were interesting. The means were 0.0070 for OLS and 0.0073 for ARIMA. That is practically identical, but deviations were not. They came in at 4.398 for OLS and 4.117 for ARIMA.

    I guess that means I was wrong about ARIMA being more accurate, but it is more precise?

  67. Brandon–
    Really, it doesn’t matter. It’s just that when I started using R, Mosher had to break me of the habit of using loops. Otherwise, I would never have used stuff like “apply” or “lapply” *at all*.

  68. Brandon–
    On the accuracy/precision issue, try this: generate the noise with a “wrong” model (say ARMA(1,1)), then use AR(1) and OLS to get trends. In that case, the noise model will be wrong for both.
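    A sketch of that experiment (details such as the MA coefficient are my choices, not anything specified in the thread): both estimators now fit the wrong noise model, and with zero true trend the mean-square of the estimates measures each one’s error.

```r
# Generate ARMA(1,1) noise, then estimate trends with OLS and with an
# AR(1) model -- both noise models are "wrong" by construction.
set.seed(7)
N   <- 20
x   <- 1:N
nmc <- 500
ols <- numeric(nmc)
ar1 <- numeric(nmc)
for (i in 1:nmc) {
  y      <- arima.sim(model = list(ar = 0.3, ma = 0.4), n = N)  # ARMA(1,1)
  ols[i] <- lm(y ~ x)$coefficients[2]
  ar1[i] <- tryCatch(arima(y, xreg = x, order = c(1, 0, 0))$coef[3],
                     error = function(e) NA)
}
# True trend is zero, so these are the mean squared errors:
mse <- c(ols = mean(ols^2), ar1 = mean(ar1^2, na.rm = TRUE))
```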

  69. Bruce (Comment #111216),
    While the influence of aerosols on albedo is relatively local (eg a few thousand km downwind from the emission source), the influence on heat balance should extend well beyond just a local effect, since atmospheric heat transport is relatively fast. Still, I am not sure that visibility trends from a couple of locations in the Netherlands imply large global influence. The regions with the highest aerosol effects are mostly where there is wind-blown desert dust and biomass burning. http://earthobservatory.nasa.gov/GlobalMaps/view.php?d1=MODAL2_M_AER_OD
    .
    The best estimates of net aerosol effects from SO2 have dropped quite a lot from AR4 to the SOD of AR5. These are estimates by ‘real scientists’.

  70. Lucia, I have not been following this thread all that closely but I did read the article under discussion. I refer to the writing as an article because it is not apparent that it is peer-reviewed. The purpose of the article would appear to be showing that, under the most restricted conditions of accounting for ENSO cycles, the authors can show that the pause in warming is not statistically significantly different from that expected from a decade’s worth of temperatures from some climate model runs. The article would be the one of choice if one wanted to refer to a publication for policy purposes in defending climate models without going into any great detail.

    The authors must assume that the climate models used do not track the cyclical ENSO effects and thus for a comparison the ENSO needs to be removed from the signal. They are evidently willing to assume that the ENSO is a manifestation of a cyclical nature of the global mean temperature and thus what they are removing is a cyclical component of the global temperature that the models fail to capture. If my inferences here are correct then the article starts by implicitly acknowledging a major weakness in the models.

    I would suppose, knowing what global temperatures have done in the years after 2008 and how close the rejection of the null hypothesis (that observed trends and climate-model-predicted trends were the same) was then, it would be of interest to either bring the analysis up to date or to ponder why none of the original authors have taken another look.

  71. Kenneth–
    Even as an article, the paper was vague. Really, it’s worse than you think. Here’s what I wrote way back when

    http://rankexploits.com/musings/2009/knight-et-al-more-questions-than-answers/

    But yes– I strongly suspect the purpose of the article was to give a policy person something to wave around. I don’t think they are implicitly admitting the models don’t contain ENSO. I think HadCM3 with “standard” parameterizations has something more or less ENSO-like. Whether ENSO remains, or is realistic, when they change parameters as they seem to have done in this paper… dunno.

    I have questions… and I’d like the model data. I’m going to try to get it from Knight. Richard Betts has forwarded my request. If I don’t get an answer from Jeff Knight by end of day, I’m going to ask “enquiries” at the met office. If that doesn’t work, I may FOIA. In which case, I’ll ask people to help me tweak my wording to maximize the chance that I will get what I want. (Wording badly can result in not getting what you want because what you asked for didn’t overlap what was available. So… I’ll ask for time series or *if that doesn’t exist* gridded data and so on. I may need to ask a shit wad of questions in one FOIA– including some I don’t currently care about but anticipate I might *come* to care about. FOIA sucks… but it’s better than not getting any answer at all!)

  72. lucia:

    Brandon–
    Really, it doesn’t matter. It’s just when I started using R, Mosher had to break me of the habit of using loop. Otherwise, I would never have used stuff like “apply” or “lapply” *at all*.

    It matters to me because loops tend to be inefficient, and some of the things I do involve a lot of computations. If not for Mosher’s commentary on loops, I might not have realized how much more efficient I could make my code.

    On the accuracy/precision issue try this: Generate the noise with a “wrong” model (say ARMA(1,1)), then use AR1 and OLS to get trends. In that case, the noise model will be wrong for both.

    I added the ma parameter to my series generation but kept my fitting the same. I haven’t run 100,000 tests yet, but the results so far have been interesting. Both OLS and AR(1) fits seem to converge to zero, but the percent difference in deviations is actually larger than before. I didn’t expect a wrong ARIMA model to be much more precise than a simple OLS regression, but it seems it is.

    I’ll look into this more, but I may not spend much time on it for a bit. I’m playing around with the new Marcott hockey stick’s data set to try to see how they got their results. Of their 73 series, a lot do not seem to mesh with their results. I’m trying to figure out which series are contributing how much of the shape, but I apparently need to spend some time with their SI to figure some things out.

  73. hmm,

    When I get time I’ll look at your code Brandon,
    hmmm.

    you can avoid using cbind() within the loop by pre-allocating space for that array outside the loop, then index within.

    Also, you might be able to do the whole thing with mapply()
    or do.call()

  74. SteveF, using data from 2005 to 2013 showing northern Europe free of aerosols corroborates the Netherlands researchers showing the air to be much, much cleaner.

    As for “from a couple of locations In the Netherlands” … that isn’t true. I posted a few other references.

    Real scientists would be interested in the effect clean air legislation has had on temperature. The “scientists” who only blame CO2 are not really scientists.

  75. steven mosher:

    you can avoid using cbind() within the loop by pre-allocating space for that array outside the loop, then index within.

    Yup. I’ve used both approaches. The reason I used cbind() here is it was simpler for me to write. Pre-allocating space always makes me have to think more. The difference in processing time wasn’t big enough to make me go back and “do it right” this time.

    Plus, a couple times the ARIMA functions borked because of unusual numbers cropping up. It would have been more work to extract the results if I had pre-allocated the space as I’d have to filter out the empty results. cbind() avoided that issue.

    Also, you might be able to do the whole thing with mapply()
    or do.call()

    I need to look into functions like those more. I know they can be very powerful, but I’ve hardly used them.

    By the way, I modified the code in the same way I did my last project: I generate the test series in a separate step and store them in an array. That lets me test as many different fits as I want without re-generating data. There’s still a loop in the data generation step, and this approach takes a bit more code, but it’s a big improvement when testing different ARIMA models. Plus, it lets me keep a copy of the data I use.

  76. Weirdly, I’m having trouble making arima.sim work with mapply().
    The other trick you mention is just running arima.sim for a large “n” and then chopping up the results.
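    A sketch of that trick (my own toy numbers). One caveat worth flagging: the AR process carries memory across the cut points, so consecutive segments are not strictly independent, though for ar = 0.3 the leakage is negligible.

```r
# One long arima.sim() call, chopped into many short series: this avoids
# the per-iteration simulation loop entirely.
set.seed(3)
N    <- 20
nrep <- 1000
long <- arima.sim(model = list(ar = 0.3), n = N * nrep)
segs <- matrix(long, nrow = N)    # one N-point series per column
trends <- apply(segs, 2, function(y) lm(y ~ seq_len(N))$coefficients[2])
# trends holds nrep slope estimates from trendless noise
```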

  77. Steven Mosher: Not sure if it’s faster, but I’ve become a big fan of the foreach package:


    library(foreach)

    mean(foreach(i=1:NumMonte, .combine="c") %do%
    {
      y <- arima.sim(model=list(ar=0.3), n=N, sd=25)
      lm(y ~ x)$coefficients[2]
    })

    Not sure if it’s any faster than the alternative for straight-up usage, but if you set it up to use parallel processing and use %dopar% it can nicely take advantage of multiple cores on your machine. (For problems that are sufficiently large, of course.)

    By the way, the `.combine="c"` is so that I get a nice vector for mean. The default is to return a list, which is more appropriate in the general case, but when returning a single value keeping it all in a vector is more convenient.
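    For completeness, here is roughly what the parallel variant looks like; it needs the doParallel package registered before %dopar% will actually fan out (the run count and core count here are arbitrary choices of mine).

```r
# foreach with %dopar%: register a parallel backend first, then the loop
# body runs on the worker processes.
library(foreach)
library(doParallel)   # backend package; install.packages("doParallel") if missing

cl <- makeCluster(2)          # two workers; adjust to your core count
registerDoParallel(cl)

N <- 20
x <- 1:N
trends <- foreach(i = 1:100, .combine = "c") %dopar% {
  y <- 25 * arima.sim(model = list(ar = 0.3), n = N)
  lm(y ~ x)$coefficients[2]
}
stopCluster(cl)               # always release the workers
```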

  78. wayne,

    ya I’ve been meaning to use foreach for sometime, but currently the biggest issues I face are memory issues.. and in some of the GIS stuff both speed and memory.

Comments are closed.