Where is the trend relative to all runs?

As many know, I often focus on whether the multi-model mean is biased. I consider the answer to that question useful in terms of planning. After all, if the collection overall is biased high, then planners might want to lean toward expecting earth's trends to fall in the lower end of the range predicted by models. But other people prefer to focus on whether or not the full spread of all runs from all models contains the observed trend. For those people, we can create a histogram of all 54 runs in the A1B collection and display the location of the three observed trends on that histogram.

A comparison of trends based on data starting in January 2001 is shown below:

As you can see, if we assume the distribution of trends is Gaussian (which it may not be), the numbers work out as follows (a short sketch of the calculation appears after the list):

  1. 3.7% of model trends fall below the observed Hadley trend.
  2. 7% fall below the observed NOAA trend.
  3. 15.5% fall below the observed GISTemp trend.
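
For anyone who wants to reproduce this sort of number, the calculation is nothing exotic. Here is a sketch in R with placeholder values standing in for the 54 run trends and the observed trend (it is not the actual data):

    # Sketch only: 'model_trends' stands in for the 54 A1B run trends (C/decade)
    # and 'obs' for one observed trend; neither is the real data.
    model_trends <- rnorm(54, mean = 0.2, sd = 0.1)   # placeholder values
    obs <- -0.053

    # empirical fraction of runs falling below the observation
    mean(model_trends < obs)

    # the same quantity assuming the model trends are Gaussian
    pnorm(obs, mean = mean(model_trends), sd = sd(model_trends))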

It is worth noting that I picked Jan 2001 as the start date long ago (i.e. in 2008); the choice was not based on the magnitude of trends and it does not minimize the computed trend. Still, it happens that right now starting the analysis in Jan 2001 gives a HadCRut trend of -0.053C/dec with p=3.7%, while the trend of -0.09C/dec starting in Jan 2002 gives p=5.2% and the trend since 2000 of +0.028C/dec gives a p of 9%. So with respect to p values, Jan 2001 gives the lowest p value, and we would get different results with different start years. Similar behavior would be seen with the other observational data sets.
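
Checking the start-date sensitivity is equally simple: fit an OLS trend over each window. A sketch in R, where 'had' is a placeholder data frame of monthly anomalies with columns 'date' and 'anom':

    # OLS trend (C/decade) from each start date; 'had' is a placeholder series.
    trend_from <- function(df, start) {
      d <- subset(df, date >= as.Date(start))
      yrs <- as.numeric(d$date - d$date[1]) / 365.25
      coef(lm(d$anom ~ yrs))[2] * 10          # slope per year times 10
    }
    sapply(c("2000-01-01", "2001-01-01", "2002-01-01"),
           function(s) trend_from(had, s))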

Going forward I’ll be showing more of these. In the next few months, the trends since 2001 are almost certain to decline (because the current temperature lies below the projection line). So, we’ll see when the computed ‘p’ value hits its lowest level for each metric. I’m guessing it will happen in 2-3 months; then both the HadCrut trend and ‘p’ value will rise.

Yes… it’s screwy to watch for lowest ‘p’ values, and no one should let you watch until you hit the lowest ‘p’ value, ‘freeze’ the analysis there, and use that as proof of anything. Similarly, you can’t just sit around and watch for the highest ‘p’ value. But that doesn’t mean we can’t watch to see what the minimum ‘p’ for this cycle will be, nor the highest during the upcoming El Nino. Since La Nina is fading, I’m expecting the HadCrut ‘p’ will hit…. hmm…. I don’t know. I’ll have to do a calculation.

Does anyone have any guesses for when we’ll hit the local minimum computed trend since 2001 for this La Nina? Or what it will be? Maybe I should let people bet quatloos on that.

88 thoughts on “Where is the trend relative to all runs?”

  1. Since the Kelvin wave which weakened the La Nina has run its course, I’d say hang onto your britches and wait til the La Nina is over … next year

  2. Lucia: Does statistical uncertainty in fitting the observed trends (HADCRUT, GISS, NOAA) affect the significance of comparisons between those trends and the model runs?

  3. Jonathan– It should. I’m just reporting where the reported trend falls within the distribution.

    I did it before for Ron on a different sort of plot shown here:
    http://rankexploits.com/musings/2011/graph-with-measurement-uncertainty/

    But I’m getting up to speed incorporating each “bit” into my R plots. I’m mostly adding the “big” stuff first, and the small stuff later.

    If you visit the link you’ll see that if we go by the agencies’ descriptions of the uncertainty in the annual average measurement, we can expand the uncertainty to create a spread of “model runs + measurement uncertainty”. It makes very little difference to the p’s– I can do that for you if you like, and we can discuss the assumption about the measurement uncertainty used to expand the standard deviations to account for the “random” part of the measurement uncertainty. (I’ll add the uncertainty to future plots now that you’ve reminded me. But it would also be useful for you to read the link to see whether you agree or disagree that the method, at least in principle, accounts for the measurement uncertainty.)

  4. Jonathan–
    As I answered, I realized you might be asking a different thing. Are you asking whether I should include “weather noise” in the observed trends? The answer is: Not on this type of plot because the distribution of runs is a distribution of “weather” type trends. The way it goes:

    1) If I test a projected mean trend (i.e. model mean or multi-model mean– black line below) against a single realization of observations, I have to include the uncertainty in my estimate of the “underlying” (i.e. forced or mean) trend associated with the observation of the weather. My estimates for those, based on the observation of earth’s weather, are shown with red and green dashed lines in yesterday’s graphs.

    2) If I want to see if a single realization of weather (i.e. an observation of earth’s trend) fits into “all weather in all models”, I just create the spread of all weather in the models. Then I show whether the earth’s realization falls inside that spread. That’s the spread in today’s graph, and I just show the earth’s realization without adding the estimated uncertainty from the trend fit. The reason I don’t add that is that those error bars are estimates of how far the mean trend might fall from the observation.

    Above, I assumed you were asking about measurement uncertainty, which ought to be smaller, and which should be added to graphs of type 2. But you don’t get that so much from the time series analysis.

    I hope this is clear. (I can make pictures to explain more clearly if it’s not.)

  5. I don’t get the significance of the enterprise. Is 10 years ever meaningful?

    Besides, the models have an ideological context analogous to El Niño/La Niña in which periods of warming ‘confirm the validity’ of the models and periods of non-warming are ‘natural variation’ (“CV/NV” rhetorical oscillation), so it is unclear to me how big a departure from the model mean is required for significance.

  6. George Tobin–

    Is 10 years ever meaningful?

    Yes.

    it is unclear to me how big a departure from the model mean is required for significance.

    It seems to me that you should like this method, then. This test isn’t looking at the departure from the model mean. It is examining whether the weather we had fits in the distribution of all possible weather from a collection of models.

  7. I think this question of departures from the model mean is most interesting in the context of testing the rigour of the null hypothesis: it either is or isn’t outside the confidence intervals of your model.

    I notice the ‘activists’ present models with very large confidence intervals indeed, allowing multidecadal ‘flatlining’ within them.

    http://www.realclimate.org/index.php/archives/2011/01/2010-updates-to-model-data-comparisons/

    This presents an interesting (reductionist) line of argument: that the anthropogenic forcing may be multidecadal or even more subtle, such that you can only observe it over centuries. And maybe it’s been happening since the end of the last ice age.

    http://www.realclimate.org/index.php/archives/2011/04/an-emerging-view-on-early-land-use/

    Is this where the warmists are going to retreat to in the face of climate modelling failure?

    Not exactly the argument proposed in the IPCC reports.

  8. Lucia: Thanks, I was asking question #1: uncertainty about the fit parameters, not weather noise in the data to which the trend was fit.

    I agree that the distribution of models should describe weather noise (that’s part of the point of running an ensemble if I understand this stuff correctly).

    I’m trying to avoid logorrhea in blog comments, but terseness can be uncomfortably imprecise.

  9. Jonathan– Then the answer is no, I don’t need to add that uncertainty to this sort of comparison.

    Basically: there is an either/or issue. In this comparison, you don’t have to worry about uncertainty in the parameters of the linear fit because you are just seeing whether the weather trend fits in a group of weather trends. This has the advantage that you don’t need to worry about arima/red noise etc. It has the disadvantage that the spread of the ensemble might be over- or under-dispersed compared to weather. So you can’t really say whether the mean trend is correct– only whether the earth trend, as weather, falls inside the spread of what I call “all weather in all models”.

    The test I did yesterday addresses the question about bias in a multi-model mean (or a model mean if applied to that) but you have to worry about whether you picked the right sort of noise to estimate the uncertainty in the estimate of the multi-model mean.

    Yes. It’s difficult to be terse in comments. In the physical world, you just get the back and forth right away and then I can answer the right question.

  10. Lucia,

    1. 3.7% of model trends fall below the observed Hadley trend.
    2. 7% fall below the observed NOAA trend
    3. 15.5% fall below the observed GISTemp trend.

    Based on these data, only a fool would bet that the GCMs have it anywhere close to right. It seems even James Hansen is scaling back the projected warming to match the current reality (and doesn’t even credit you with showing this to be the case!). My guesstimate: the estimated aerosol albedo effects are way too high, the effective ocean lag is way overstated, and the climate sensitivity is consistent with what has actually been observed; that is, much below 2 C per doubling. Reality will ultimately dominate political inclinations, proving once again that life is hard.

  11. Related question: Weather noise should be common-mode for HADCRUT, GISSTEMP, and NOAA. However, three measurements of the same thing (i.e., a trend with the same weather noise) provide three values that are separated by much more than the measurement error bars you give in the link in #74035. I’m trying to get my head around how to think about the uncertainty implied by the spread of these three measurements of the same quantity.

  12. Jonathan–

    provide three values that are separated by much more than the measurement error bars you give in the link in #74035.

    My estimates based on the computation are about 0.03 but the standard deviation in the 3 measurements is 0.055. So, the spread (0.055) is “more” but not “much more” than 0.03. If you were to do an F test, you’d find that getting a sample sd. of 0.055 when the true sd is 0.03 is not inconsistent by any means.

    But if you’d like me to add “measurement error” to the models with either 0.055 or 0.03, I can show that. Either way is worth looking at. It’s not going to widen the spread very much because, if the measurement error is uncorrelated with the weather noise, you add the variances and then take the square root. 0.055C/century is small relative to the spread of weather– so we’ll get only a small widening. It will make a difference– and I should at least add 0.03 (and I didn’t even do that). Adding a 0.055C spread will only make a little difference, so I can do that today.
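
    In case the arithmetic isn’t obvious, the widening is just addition in quadrature. A sketch in R, where the model spread is a placeholder number rather than the actual A1B value:

        # Widening the model-trend spread by measurement error, in quadrature.
        # 'sd_models' is a placeholder for the spread of "all weather in all models".
        sd_models <- 0.25
        sd_meas   <- c(low = 0.03, high = 0.055)
        sqrt(sd_models^2 + sd_meas^2)   # barely larger than sd_models either way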

    I’m trying to get my head around how to think about the uncertainty

    Well… even though I note that sd=0.055 estimated from a sample of 3 is not inconsistent with a true value of 0.03, I’ve tried to think about whether my estimated sd of 0.03 might be too small too.

    Here are the three possibilities I can think of.
    1) Hansen’s, NOAA’s and Hadley’s estimates of the random error in the annual average T could be low. I don’t know whose else to use… but still, they could be too low.
    If so, using them to compute my value of s_e will result in s_e’s that are too low. This seems unlikely to be the main reason the s_e’s could be too small, because if measurement error in the annual average temperature anomalies is much larger than Hansen, NOAA or Hadley say, we have to begin to wonder why the residuals in the fit to trends aren’t larger than they are. (After all, it seems unlikely that all of the residual in a trend fit is measurement error. Using the current estimates, about half of the residual in recent trends is already measurement error! I tend to doubt this.)

    2) In the previous post, I assumed the annual average error is uncorrelated from year to year. So, the random error in the annual average temperature anomaly GISS(2000) is uncorrelated with the random error in GISS(2001) and so on. Maybe this is wrong. But I ran a Monte Carlo for various lag-1 autocorrelations. The measurement error in trends (sd_e) peaks for an intermediate value of the ar1 coefficient and decays to zero as ar1 -> 1. I could show that– but that effect is not enough to increase my estimates of the sd_e’s very much. But this may well be part of the issue.
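
    (For the record, the Monte Carlo is along these lines. It is only a sketch in R; the number of years and the annual error sd are placeholders:)

        # AR(1) measurement error in annual anomalies, and the sd of the spurious
        # trend it induces; the marginal error sd is held fixed as phi varies.
        sim_ar1 <- function(n, phi, sd_marg) {
          e <- numeric(n)
          e[1] <- rnorm(1, sd = sd_marg)
          for (t in seq_len(n)[-1])
            e[t] <- phi * e[t - 1] + rnorm(1, sd = sd_marg * sqrt(1 - phi^2))
          e
        }
        trend_sd <- function(phi, n = 10, sd_marg = 0.03, nsim = 5000) {
          yrs <- 1:n
          sd(replicate(nsim, coef(lm(sim_ar1(n, phi, sd_marg) ~ yrs))[2]))
        }
        sapply(c(0, 0.3, 0.6, 0.9, 0.99), trend_sd)  # peaks at intermediate phi, falls toward 0 as phi -> 1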

    3) The notion I tend to favor is this: the measurement error in the annual (or monthly) values of GISSTemp and Hadley is negatively correlated. This would tend to make the spread of the trends for (GISTemp-Hadley) larger than you would expect based on an estimate of the actual random component in each individually. (The extra term is -2 <GISTemp*Hadley>.)
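
    (The algebra behind that is just Var(G - H) = Var(G) + Var(H) - 2 Cov(G, H), so a negative covariance inflates the spread of the difference. In R, with a purely illustrative correlation:)

        sd_each <- 0.03
        rho     <- -0.5                              # assumed value, for illustration only
        sqrt(2 * sd_each^2)                          # spread of G - H if the errors were independent
        sqrt(2 * sd_each^2 - 2 * rho * sd_each^2)    # larger spread when the errors anti-correlate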

    So, now, why might the errors be negatively correlated? Remember that GISTemp extrapolates over the poles. HadCrut does not. So, we know that when the region near the poles has a high anomaly, HadCrut will tend to have a negative ‘measurement error’. So, what happens with GISTemp? Either GISTemp tends to get the poles correct, or it amplifies the contribution of the temperature anomaly at the pole, or it damps the temperature anomaly at the pole.

    If it gets the contribution correct, the fact that HadCrut leaves it out will not result in any correlation <GISTemp*Hadley>. If GISTemp’s method amplifies the contribution of the temperature anomaly at the pole, then <GISTemp*Hadley> will be negative.

    I tend to think the risk of amplifying the contribution of the anomaly at the pole is large mostly because, well… that’s what filling in the whole pole with the temperatures from a ring around the pole would tend to do if the temperature at the pole is not perfectly correlated with the temperatures in a ring of land-based thermometers around it.

    So, this reasoning would tend to suggest that the spread of (NOAA, GISS, Hadley) should somewhat overstate the spread you expect from measurement error in NOAA alone, or GISS alone, or Hadley alone. Since, as estimates of noise, sd=0.03 is, from a statistical significance point of view, not much different from sd=0.055 computed from 3 samples, there is not much difference to explain, and I suspect this is the reason. (But… of course, you can see it’s somewhat speculative.)

    But either way, I can add the variabilities to the spread above, and you can see.

  13. Lucia: Thanks for the clarification. I misunderstood the post you’d linked to from #70435. I appreciate your patient explanations of your methods and your thinking.

    FWIW, lest there be any misunderstanding, there’s nothing pointed or loaded in these questions. I simply want to understand your analysis and your answers are helpful.

  14. Jonathan– I didn’t think there was anything loaded. Both q’s of type 1 and 2 come up all the time, and they are good questions. It’s just that I often read one and answer. After answering, I sometimes realize that I may have answered a type 1 when the person meant to ask a type 2. They have quite different answers.

  15. I am more into weather prediction recently as my hobby in the AGW arena. For this year’s la nina, it does appear that it will last until next Spring. Granted, you can not say this for sure, but given that, I would suggest, based on current trends and other information I use, that we will reach the low point in temperatures in approx. March or April of next year. This is very dependent on other factors though, so just using the nina is not good practice.

    From the way weather is shaping up, I expect Summer will see a slight warming. Fall will see even more warming, with winter seeing a sharp down-turn in the global anomalies starting in around November, I might add.

    This is for the N. Hemisphere by the way. We should see the coldest come in March or April, but there is a wild-card which I will not mention, for fun’s sake. Anyone who keeps track of weather patterns knows about the large wild-card… so I will leave it at that.

    In essence, with the wild-card in the deck so to speak, it really would be interesting to bet on when the coldest time comes, but I really think it would be prudent to wait until June, when we should be confident that the nina holds on. Otherwise, the bet is all about who said NOW versus LATER.

  16. This La Niña could go on for a long time. SOI is still really high and the last time it was positive for more than a year was the 1973-1976 Niña. Temperature has risen in the equatorial Pacific but the Kelvin wave which brought that has passed now.

    ENSO may go neutral but there’s no reason to expect an El Niño.

  17. Fergal– It might continue– I agree with that. Currently, NOAA predicts it will end. They’ve been wrong before. It’s also true that sometimes the weather goes La Nina-neutral-La Nina. I just want to know the basis for Ben’s prediction, particularly as it differs from NOAA.

  18. When I see a comparison of a collection of various models, in which the individual model results vary widely, I figure the average result must be wrong. Some of the models are “good” and some are “bad”, by virtue of the wide range of results. The average assumes that the bias errors will balance, with as many biased high as low. But “group think” has likely incorporated similar biases into all the models.

    Why not use the comparison of the model results to the real world measured trends to pick the winners and losers of the Greatest Climate Model Competition? Keep the winning models that are within a reasonable range of the measured results and discard the rest. Why is there a commitment that the ensemble of selected models must stay the same? ‘Cause the IPCC said so?

    I say pitch the models that are outside the range of statistical agreement with the measured trends and only run the “winners” when making any further extrapolation into the future.

  19. George– Right now, at the blog, I’m comparing to all models. This doesn’t mean that I think there exists any commitment to stick with a particular set of models. However, if we are going to see how the models in the AR4 are doing, then we do have to compare to that set, not a different set.

    Kim
    If you think a particular comment is great, could you link or quote? I’m not going on a wild goose chase to hunt down the “Revkin thread” and then sift through comments from Ben. I doubt anyone will. So if you are hoping people will read the comment you consider great, quote it or give a link.

  20. Sorry, lucia, I don’t link. It’s entirely your choice whether or not you want to go look. If it’s wasting your bandwidth, I truly beg your pardon. If it’s wasting your attention, again, I’m sorry. I appeal, in my own peculiar way, only to the curious.

    benfromMo, I’d never heard from before, and was blown away by his comment.

    Links deteriorate, and are fragile; and so are conversations which depend upon them.
    =============

  21. Kim,

    I appeal, in my own peculiar way, only to the curious.

    At my blog, please either a) link, b) quote, c) link and quote or d) don’t post that sort of comment.

    Even if the curious are here and you judge your comment useful to them, you are being disrespectful of their time by forcing them to go on a wild goose chase to find the comment you tell us is tremendous. So: don’t post “wild goose chase” comments. Link or quote.

  22. @Kim

    It’s the old wave your arms in a very reasonable manner argument.

    “Scientists should be sceptical of any theory that never validated the null hypothesis in the first place. Its actually possible that the CO2 component is negative after feedbacks.”

    CO2 is a greenhouse gas; the physics means it effectively makes the atmosphere warmer. A negative feedback will slow and even stop the warming. But that is only after it has become warmer in the first place. Just like putting the brakes on in a car doesn’t make it go backwards.

    “That in itself should point to a rather direct question: since it appears that in the past CO2 increases after temperature increases, is this possibly what we are seeing? If that is not the case, what effect is putting long-sequestered CO2 into the atmosphere actually going to have?”

    Maybe he should read up a bit more. Such as on the AGW ‘fingerprints’, like a cooling stratosphere, for example.

  23. Kim,
    I guess “tremendous” (like beauty) is in the eye of the beholder. I was underwhelmed.

  24. SteveF:
    I would guess that is so for you because he didn’t call anybody any names.

    That comment was refreshing for me to read kim, thanks 🙂

    BTW SteveF, after all this time, everyone here is still just guessing (and believing in) how much power CO2 has!

  25. Re: bugs (Apr 20 08:49),

    ““Scientists should be sceptical of any theory that never validated the null hypothesis in the first place. Its actually possible that the CO2 component is negative after feedbacks.””

    I love this comment bugs. It shows an utter lack of understanding.

    The null hypothesis that more CO2 could cause cooling after feedbacks is rejected by physics, by modeling, and by the observations of the paleo record. But yes, it’s logically possible that all this science could be illusion and monkeys are poised to shoot out my southern end.

    It’s also possible that Andrew_KY is married to liza.

  26. guesses that Arthur Smith will wait a while before he gives his totally accurate hind-cast/prediction, based on his own version of quasi-virtual-physics

  27. Bugs,

    Please show me a graph of the cooling stratosphere that is easily explained by monotonic ghg increases.

  28. “monotonic”.

    I don’t know where people get the idea that, in something as complex as the global climate, where the short term noise drowns out the long term signal, anything is going to be “monotonic”. The stratosphere cooled, as predicted. That is a greenhouse gas ‘signature’. Warming from the sun would have warmed the stratosphere too.

  29. Bugs,

    The stratosphere cooled, as predicted.

    .
    Yes, and now it hasn’t cooled any more for 15+ years, which was never predicted. I suggest that you not cherry-pick only the data that support your POV, especially where there is equally credible conflicting data which cast doubt on it; it ruins your credibility with anyone who actually looks at the data (including a lot of the people who comment here). A more measured analysis is both warranted and probably more accurate. If the technical issues were nearly so blindingly obvious as you (always!) suggest, then few who comment here would even bother. AGW is not at all neat and clear cut, and there are lots of reasons to doubt the magnitude of predicted future warming.

  30. 15+ years of no cooling is wrong, but the complete picture is, as usual, more complex than I can understand all the details of. For example, the volcanoes cause warming for a short term. If you take the initial cooling from them as your starting point, you are going to see a steep trend that is not the long term signal. There is also the complication of the ozone layer changes.

    The basic point remains, stratosphere signature is there.

    If you go up higher again, the same effect applies.

    Greenhouse gases cause cooling higher up, too

    Greenhouse gases have also led to the cooling of the atmosphere at levels higher than the stratosphere. Over the past 30 years, the Earth’s surface temperature has increased 0.2-0.4 °C, while the temperature in the mesosphere, about 50-80 km above ground, has cooled 5-10 °C (Beig et al., 2006). There is no appreciable cooling due to ozone destruction at these altitudes, so nearly all of this dramatic cooling is due to the addition of greenhouse gases to the atmosphere. Even greater cooling of 17 °C per decade has been observed high in the ionosphere, at 350 km altitude. This has affected the orbits of orbiting satellites, due to decreased drag, since the upper atmosphere has shrunk and moved closer to the surface (Lastovicka et al., 2006). The density of the air has declined 2-3% per decade the past 30 years at 350 km altitude. So, in a sense, the sky IS falling!

    http://www.wunderground.com/resources/climate/strato_cooling.asp

  31. Bugs,

    15+ years of no cooling is wrong.

    Well, are you really suggesting that the satellite data showing no cooling of the stratosphere since ~1995 is wrong? If so, please offer a reasoned explanation. Can you point to any published prediction of no net stratospheric cooling over the last 15 years (and I do mean a real prediction, not an ex post facto arm waving exercise)? I do not believe one exists.
    .
    The point is Bugs, you said my statement of no measured cooling over 15 years was wrong, without a shred of evidence to back it up. You earlier made a specific claim about stratospheric cooling, which while not wholly incorrect, is clearly misleading, and which fails to convey the entire story.
    .
    If it is true, as you said, “the complete picture is, as usual, more complex than I can understand all the details of”, then I suggest that you can do yourself and everyone else a favor by not pointing to evidence which you admit to not completely understanding. Please stop telling people they are wrong when you have nothing but your personal opinion to refute them.

  32. Show me your evidence first that there has been no cooling for 15 years then. The records show cooling, just as I claim. There is no dispute about that.

  33. Bugs,

    Look at John M’s second link to the Remote Sensing Systems web page. The graphic of stratospheric temperature trend is very clear… no cooling since about 1995.
    .
    All of the measured cooling took place in the first half of the record.

  34. [Response: This is bad even by Milloy’s own standards. While the basic physics is sometimes difficult to explain (see above!), the basic issue is that stratospheric temperatures change in response to local effects, they do not change because the troposphere does (i.e. troposphere warming does NOT imply stratosphere cooling). Thus the changes in the stratosphere are basically a function of the greenhouse gases, ozone levels and volcanic aerosols there. The changes seen in the MSU 4 data (as even Roy Spencer has pointed out), are mainly due to ozone depletion (cooling) and volcanic eruptions (which warm the stratopshere because the extra aerosols absorb more heat locally). As William points out, ozone depletion is levelling out since the Montreal Protocol, and so lower stratospheric cooling will start to attenuate, but then Milloy doesn’t appear to think that ozone depletion was a real phenomena either. – gavin]

    http://www.realclimate.org/index.php/archives/2004/12/why-does-the-stratosphere-cool-when-the-troposphere-warms/

  35. Bugs,

    An after-the-fact explanation for why a prediction of cooling turned out to be less than 100% accurate does not make my statement that there has been no cooling for the last 15 years incorrect. There has in fact been no cooling for 15 years!

  36. And a couple of more things on stratospheric temperature, ozone, and ghgs.

    Stratospheric ozone levels are approaching where they were in the late 80s.

    Stratospheric temperatures are approaching where they were in the late 80s.

    Radiative forcings from ghgs are 40% higher than they were in the late 80s.

    Let the hand-waving begin.

  37. Bugs —

    I’m a bit confused by the two quotes you link to, which seem to be at odds (although I confess I may be misunderstanding). In your first link to Dr. Masters (around 2007?), we read the following:

    However, this recovery of the ozone layer is being delayed. A significant portion of the observed stratospheric cooling is also due to human-emitted greenhouse gases like carbon dioxide and methane. Climate models predict that if greenhouse gases are to blame for heating at the surface, compensating cooling must occur in the upper atmosphere…As emissions of greenhouse gases continue to rise, their cooling effect on the stratosphere will increase. This will make recovery of the stratospheric ozone layer much slower.

    The next is from Gavin in 2005, your bold:

    Ozone depletion is levelling out since the Montreal Protocol, and so lower stratospheric cooling will start to attenuate

    My reading of this is that the models suggest stratospheric cooling is caused by CFCs (leading to ozone depletion) + GHGs. But if almost all of the stratospheric cooling previously was caused by CFCs (as Gavin seems to suggest by saying their stabilization leads to the stabilization of temperatures in the stratosphere), I’m not sure how this cooling would be a fingerprint of AGW from GHGs?

  38. Troy CA,
    No need to read too much into the apparently contradictory statements. Both are after-the-fact arm waves. Nobody foresaw 15 years of no stratospheric cooling. One of the often cited clear “fingerprints” of AGW is more than a bit smudged, it seems. Some GHG driven warming is inevitable; how much is an open question.

  39. Ah, Steve @ 9:00 AM yesterday.

    You are underwhelmed because you already ‘get it’. I’m overwhelmed because ben was clearly a naif, and now ‘gets it’.

    Those who were naive about all this stuff three years ago are now getting less naive, and they are becoming skeptics, ok, lukewarmers, rather than alarmists. ben’s missal is a bird on the wing from the continental land mass ahead.

    Your mileage varies as I see, but I get a lot of mileage out of posts like ben’s. The herd paused at the edge of the cliff, and is now mulling the position over.
    ==============

  40. lucia, I see BobN understands my protocol. And I understand yours. Thanks for all the fishes.
    ================

  41. Arthur, diogenes notes that you challenged me for a specific ‘weather’ prediction but don’t have one of your own. Try to keep up.
    =================

  42. kim–
    I don’t know whether you do or do not understand my protocol. More importantly, I haven’t read anything to suggest you intend to comply.

    Just in case you think it’s ok for you to not link because BobN will hunt around for your link for you: At my blog, do not post brief “wild goose chase” comments of the sort you did. Either a) post the link yourself, b) quote or c) don’t post the “wild goose chase” type comment at all.

  43. I see ben has little more predicting for Arthur than I would venture. He thinks La Nina’s got a little more on the ball than a fadeaway.
    ====================

  44. ‘A little more predicting than I would venture’. Hey, Arthur, it’s the bens of this world you have to reach with your ‘communications problem’.
    ============

  45. Kim, follow the link to my blog (see the blue on my name, and the underline? That’s a link), I have a prediction for the next 3 years of temperatures, it’s been up almost a month. Well, 2 predictions, but for this year both models give almost identical numbers. Where are yours?

    And I find it odd that you know what diogenes is referring to when it is a subject that came up around a week ago on a different thread. One of your sock puppets?

  46. Arthur, I can’t predict weather more than a few days out, and my climate models are informed by poor knowledge of natural cycles and of the effect of CO2 in the atmosphere. Nonetheless, they show near, medium, and long term cooling to be more likely than similar timescale warming. I’m guessing about diogenes’ meaning, too.
    ================

  47. Kim,

    Thanks for all the fishes.

    .
    I have no idea, absolutely no idea, what that is supposed to mean.
    .
    In any case, I do hope you will post links or quote in the future. I was a bit irritated to have to search for the comment you referenced, but I assumed it was just an oversight on your part, not a conscious decision. Links or quotes do save other people time.

  48. Steve, it’s a complaint about style, and sorry, I’m just too old to change. The answer to the fishes riddle is 42, but I’m getting early satiety.
    =======

  49. Re: SteveF (Apr 21 07:39),

    I have no idea, absolutely no idea, what that is supposed to mean.

    You’ve never read the Hitchhiker’s Guide to the Galaxy series? How sad. Oh well, de gustibus non est disputandum..

  50. Re: SteveF (Apr 21 06:02),

    Nobody foresaw 15 years of no stratospheric cooling.

    That would have been difficult to do as the rate of removal of chlorine from the stratosphere wasn’t known very well in advance. Still, if stratospheric temperature was modeled, which seems likely, then an increase of ozone (warming) while CO2 continued to increase (cooling) should have been expected to produce a relatively stable stratospheric temperature. I’ve looked at the stratospheric ozone data, but it’s so noisy that it’s hard to tell what’s happening on a decadal scale. I wasn’t able to find the numeric data to do the smoothing myself either. But I really shouldn’t have to do it myself. This is the sort of thing that RC should have done in more depth years ago but didn’t.

  51. Arthur,
    I read your post. A couple of things give me pause. First, your regressions generate a recent (1975 to present) “true” slope of 0.169 degree per decade. This is quite different from Kelly O’Day’s results of regression of satellite troposphere data (starting 1978) against Nino 3.4 and the SATO volcanic index, which yields a slope of 0.107 degree per decade and quite remarkable fidelity (high R^2) between the regression model and the data: http://chartsgraphs.files.wordpress.com/2011/03/uah_nino34_sato_regression.png
    I understand that GISS surface data and the satellite tropospheric data are not exactly the same, but this is quite a large discrepancy; it seems the two results can’t both be accurate representations of the ‘true’ underlying trend. I mean, the “true” GISS trend ought not be 60% higher than the “true” satellite lower troposphere trend.
    .
    Second, I noted that you also used sunspot numbers as a proxy for solar intensity variations. The intensity variation (about 0.1% for a change of 150 spots) and your regression constant (0.000518 degree per spot) suggest an almost instant response (3 months of lag) for the GISS temperatures in the range of 0.34 degree per watt/m^2. The lag seems to me far too short for that much response, considering that the ocean surface temperature (70% of the total area) can’t possibly respond very much to a small change in forcing in only 3 months. If the average well mixed layer depth is 60 meters, then a change of 1 watt per square meter over three months could warm the well mixed layer by only 0.0032 degree (assuming I have done my arithmetic right!). So unless the GISS data show a three-month land response of ~1.133 degree per watt of forcing from the solar cycle (and almost zero simultaneous ocean response), your sunspot regression result seems to me physically unrealistic.
    .
    A more physically realistic approach is a regression against radiative forcing (not time), where the solar cycle is then included in the forcing. I’m pretty sure the solar cycle is not so visible in the temperature history when you do this. It ought to mostly show up as a variation in ocean heat content.
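
    (For concreteness, the kind of regression I have in mind above, temperature against ENSO, volcanic aerosols and a trend, looks schematically like the R sketch below. ‘temp’, ‘nino34’ and ‘sato’ are placeholder monthly series, and the lags are illustrative, not Kelly O’Day’s actual choices:)

        # Regression of temperature anomalies on a trend plus lagged ENSO and volcanic indices.
        lagged <- function(x, k) c(rep(NA, k), head(x, -k))
        months <- seq_along(temp)
        fit <- lm(temp ~ months + lagged(nino34, 3) + lagged(sato, 6))
        summary(fit)$r.squared                 # how much variance the fit explains
        coef(fit)["months"] * 120              # residual trend in degrees per decade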

  52. bugs, years ago at Climate Audit I pointed out that not only did we not know the magnitude of water vapor feedback, but we weren’t even sure of its sign, and I was sneered at for doubting that it was positive.

    One thing the ice cores show for sure is that every time the CO2 went up, later on the temperature went down.
    =============================

  53. DeWitt Payne (Comment #74172),

    Sorry, I don’t read a lot of fiction. Well, maybe parts of the IPCC reports. 😉

  54. SF, Douglas Adams is probably worth your while. An additional benefit is that it is not fiction, rather history as farce. There is a character named Arthur Dent, and I always think of him when I first start to read the comments from Arthur Smith. No matter how many times it happens, I continue to be dismayed and discouraged as I read on.

    How could I be fooled, just by such a name?
    =====================

  55. I highly recommend ‘Barbarella’, too, a no punches pulled documentary.
    ===============

  56. DeWitt Payne (Comment #74173).
    Fair enough, it is possible that loss of chlorine in the stratosphere has an impact on stratospheric ozone. But the drop in halocarbons has (so far) been very modest. It is hard to see how a very small drop in halocarbons could have a large impact on ozone. I have heard people argue that there must be very long-lived “negative” stratospheric temperature impacts from volcanic eruptions (after the initial positive impact)… maybe this is involved in changes in ozone.
    .
    But the point is, it is clear that the mechanism(s) involved are not completely understood, and the data (what there is) is not very good. My original objection was having someone like bugs point to the drop in stratospheric temperatures from ~1979 to ~1995 as absolute confirmation of one of the IPCC’s “fingerprints”… it is a lot more complicated than that.

  57. SteveF – if you do that fit against RSS you get a higher number for the linear trend than GISS/Hadley etc.:

    http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/

    here are the linear trend numbers Tamino found from this kind of fit:

    Data Set   Rate (deg.C/yr)
    GISS       0.0172(13)
    NCDC       0.0172(10)
    HadCRU     0.0171(11)
    RSS        0.0183(13)
    UAH        0.0159(15)

    They’re actually all remarkably close, but the satellite trend numbers are a little less certain – given their larger up-and-down variations, that’s perhaps not surprising.

    Assuming a single-lag model for Earth’s response to forcings is a poor representation of reality – you get a better fit with a “two-box” model, which Lucia and I had a huge back-and-forth on a couple of years ago, again based on a Tamino post on the idea. That seems to require about half the response to be on a few-month time-scale, and the remainder on a decadal (20-30 year) time-scale. Ocean dynamics are almost certainly centrally involved, but the details are not well-captured by assuming a single “well-mixed” layer. It’s something I’d like to understand better and I’ve been thinking of ways to do that, hopefully I’ll post some of my thoughts on that later. In any case, finding a roughly 0.1 degree response to solar forcing changes with a few months lag is very much in line with what others have found (IPCC mentions a roughly 0.1 degree C variation from peak to trough of the solar cycle).
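
    (If it helps to see it written down, the skeleton of such a two-box response is something like the R sketch below. The time constants, weights and sensitivity are made-up placeholders, not the fitted values:)

        # Each box relaxes toward lambda * F(t) with its own time constant;
        # the surface response is a weighted sum of the two boxes.
        two_box <- function(forcing, dt = 1/12, tau = c(0.5, 25),
                            w = c(0.5, 0.5), lambda = 0.8) {
          T1 <- T2 <- 0
          resp <- numeric(length(forcing))
          for (i in seq_along(forcing)) {
            T1 <- T1 + (dt / tau[1]) * (lambda * forcing[i] - T1)
            T2 <- T2 + (dt / tau[2]) * (lambda * forcing[i] - T2)
            resp[i] <- w[1] * T1 + w[2] * T2
          }
          resp
        }
        # e.g. the response to a 1 W/m^2 step forcing over 50 years of monthly steps:
        step_response <- two_box(rep(1, 600))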

    And ocean surface temperature clearly *DOES* respond to changes in incoming sunlight with a few-month lag: if it required a much longer lag we’d get essentially no seasonal temperature variations in the ocean surface. The fact that the solar cycle-related insolation changes are small doesn’t change the fact that there will be a (small) response on that same time scale. Linear responses don’t care how large the input is when it comes to timing of response.

    Regression against radiative forcing is something we’ve all done before (go back and read the two-box discussion, and Lucia’s “lumpy” analysis). But people critique that because they don’t believe the aerosol or other radiative forcing numbers you put in. This is an attempt at modeling from a different angle, and I think the fit to the temperature data you get is pretty convincing regarding the underlying trend.

    Perhaps you have a number for 2011 global average temperature you’d like to put up against my prediction?

  58. Kim–
    Since “thanks for the fishes” clearly did not mean “good bye”, I still don’t know if you intend to comply with my request that you

    Either a) post the link yourself, b) quote or c) don’t post the “wild goose chase” type comment at all.

    Your comment immediately following my request, “Wild geese to thee are plump ducks for me,” is ambiguous but suggests you intend not to comply. Your response to Steve, “it’s a complaint about style, and sorry, I’m just too old to change,” also reads like you intend not to comply with my request.

    Please tell me if you intend to comply with my request. If you do not, I will moderate you and then delete comments I consider to be of the “wild goose” type.

  59. Re: SteveF (Apr 21 09:21),

    Chlorine increase = ozone decrease = cooling. Chlorine constant = ozone constant = constant temperature (all other things being equal). Many years ago, somewhere else (I can’t remember where and don’t feel like looking), I mentioned that it wasn’t clear how much CO2 was responsible for stratospheric cooling, as we didn’t know how much was caused by ozone reduction and how much was caused by CO2 increase. I was, of course, blown off; it was all CO2. It’s now clear that most of the cooling was caused by ozone reduction. I also believe that while the initial response to a volcano is warming, there is a long term effect, probably also related to ozone, that causes cooling with a multi-year recovery time constant.

  60. Arthur,
    This is a complicated subject (and I have done a bunch of these kinds of regressions as well). I don’t want to get into a long drawn-out argument, but let me reply to a couple of your comments.
    .

    And ocean surface temperature clearly *DOES* respond to changes in incoming sunlight with a few-month lag: if it required a much longer lag we’d get essentially no seasonal temperature variations in the ocean surface.

    .
    I don’t think that implies the solar cycle will not be heavily damped by the ocean. The solar cycle changes are tiny (about 0.2 watt per square meter peak to valley over a typical 11 year cycle) compared to the seasonal forcing, which outside the tropics can be more than 200 watts per square meter… three orders of magnitude different. The period of the solar cycle is plenty long enough for substantial damping from deeper ocean heat uptake (especially in the tropics), while the annual cycle is short enough that it ought to be less damped by the ocean. Remember also that there is a strong seasonal change in the depth of the well mixed layer at high latitudes (very shallow surface warming in the summer, deeper convective mixing in the winter… solar cycle effects at high latitude will get “swallowed” by the deep seasonal convection). There is, of course, a huge damping effect of the ocean at high latitudes on the annual cycle (compare 45 degree latitude seasonal ocean temperature changes to comparable seasonal land temperature changes), but the seasonal ocean temperature changes are in no way inconsistent with a substantial lag imposed by the ocean on a much slower and much smaller cyclical forcing like the solar cycle.
    .

    But people critique that because they don’t believe the aerosol or other radiative forcing numbers you put in.

    .
    Actually, I think there is a way to at least partly constrain the credible values for aerosol forcing, or at least the credible combinations of ocean lag and aerosols which are consistent with the observed rates of ocean heat content change. When an ocean lag is assumed, that lag automatically means a fraction of the applied forcing is being taken up by the ocean. If you assume a very long effective ocean lag (which is to say, you assume a substantial fraction of any increase in forcing accumulates slowly in the ocean for decades), then that automatically requires much higher aerosol offsets… otherwise, the accumulated heat would be much larger than what has been measured. A 20 year ocean lag seems to require that ~60% of all man-made GHG forcing has been offset by man-made aerosols, while a 9 year lag seems to require about a 20% aerosol offset. These offsets are close to the extremes of the IPCC range of aerosol offsets.

  61. Re: Arthur Smith (Comment #74182)

    Perhaps you have a number for 2011 global average temperature you’d like to put up against my prediction?

    0.58 for 2011, 0.60 for 2012, and 0.62 for 2013 🙂

    (These come from a simple regression of GISTEMP vs the log of the CO2 concentration, and a quadratic extrapolation for CO2 growth in the near future.)
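
    (For the curious, the whole thing is only a few lines of R. ‘giss’ and ‘co2’ below are placeholder annual data frames, year/anom and year/ppm, not the actual files I used:)

        d   <- merge(giss, co2, by = "year")
        fit <- lm(anom ~ log(ppm), data = d)            # anomaly vs log CO2
        co2_fit <- lm(ppm ~ poly(year, 2), data = co2)  # quadratic CO2 growth
        future  <- data.frame(year = 2011:2013)
        future$ppm  <- predict(co2_fit, newdata = future)
        future$anom <- predict(fit, newdata = future)
        future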

  62. lucia (Comment #74183),
    The Pope has just shown Galileo the rack. What will Galileo do? I don’t know.

  63. julio, Arthur,

    My guess – + 0.009 per year from now to 2020. Year by year change is too much driven by ENSO and weather.

  64. Steve,

    I agree that year by year change is just too random to spend a lot of time worrying about it. These numbers are provided, like the horoscopes in the newspapers, for entertainment purposes only. 🙂

    If you did not read The Hitchhiker’s Guide to the Galaxy when you were in college, my advice would be not to bother about it now, either. I remember it had me in stitches at the time, and yet, just the other day, I was paging through it and going “eh–”

    Still, in fairness, it is also true that just the other day I was quoting the bit about the plans (to demolish the Earth to make room for an interstellar bypass) having been on display at the local planning office in Alpha Centauri for the past 50 years, “so there is no point in acting surprised.” So some of it clearly still resonates with me (frightening as the thought may be).

  65. Nobody’s willing to predict actual cooling? I guess Kim’s in a very tiny minority then!

    DeWitt – you’re absolutely right, the seasonal variation doesn’t tell us much about response time. It does show there is *some* short-term response at least. I guess I was arguing (based on other evidence – the two-box fits etc) that a single response-time is not a good representation of the behavior of Earth surface temperature change in response to forcings. The whole Schwartz/Scafetta discussion several years back is probably better support, I shouldn’t even have mentioned seasonal change.

  66. Arthur–
    It’s probably true that single response time fits aren’t going to be terrific.

    On betting: my observation is very few people are willing to bet on actual cooling. Meanwhile, even those who want us to worry about warming beyond the wildest imaginings of the highest sensitivity IPCC model seem reluctant to bet someone who wants to make the dividing line something like 0.10C/dec.

    Obviously, lukewarmers who believe there is warming but possibly in a more intermediate range can’t find anyone to bet!

  67. Re: Arthur Smith (Apr 21 12:49),

    A multi-box model is just a crude approximation to the actual response function, which is probably more like a diffusion process. At high frequency, the penetration depth is low because it’s damped quickly. Lower frequencies go deeper with less phase lag. If, for example, you try to model the lunar surface as one well-mixed layer, it doesn’t heat up quickly enough when the sun comes up and cools off too slowly when the sun sets. You still might have to use multiple layers in multiple boxes for the ocean response, though, as the effective diffusion rate in the deep ocean is almost certainly significantly lower than in the upper layer, not to mention the difference with latitude. The spread of bomb related 14C into the ocean should be governed by a similar process.
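
    (The frequency dependence is easy to see from the skin depth of a diffusive half-space, delta = sqrt(2*kappa/omega). A quick R illustration, with an assumed effective diffusivity rather than a measured value:)

        kappa  <- 1e-4                         # m^2/s, assumed effective ocean diffusivity
        period <- c(1, 11, 30) * 3.156e7       # seconds: annual, solar cycle, multidecadal
        omega  <- 2 * pi / period
        sqrt(2 * kappa / omega)                # penetration depth in meters; deeper for slower cycles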

  68. Julio,

    The only reason the ‘Hitchhikers Guide’ series now feels dated is because it was written originally as a radio programme in 1978. Then it was truly innovative and groundbreaking. It therefore immediately generated lots of copycat forms of media and its innovation passed into the mainstream.

  69. Julio,

    Posted before I’d finished – dang!

    But as you say, elements of the HHGTTG continue to linger in people’s memory. Even my two sons, aged 22 and 16, KNOW the answer is 42.

  70. DeWitt Payne (Comment #74196),
    .
    Yes, there would have to be a continuously decreasing rate of response with depth. The land and atmosphere would seem to be very fast; high latitude ocean surface water relatively fast (reflecting seasonality); the well mixed layer in the tropics is seasonally stable and represents about 75 meters, so it would be considerably slower. The 1000+ meters of the thermocline would appear to be quite slow, decades to centuries, and the very deep ocean, many centuries.
    .
    No simple way to model that.

  71. Dewitt Payne and SteveF

    For what it’s worth, here are current values and projections for stratospheric chlorine, and the projections from 1993.

    http://www.environment.gov.au/soe/2006/publications/commentaries/atmosphere/stratospheric-ozone.html

    http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/36590/1/93-2065.pdf

    Although the absolute values from the 1993 paper are higher than currently accepted, the relative shapes of the curves are remarkably similar, and both show or project that current levels are/should be about what they were in the late 80s.

    Here’s my ranking of the level of understanding of stratospheric chemistry/behavior.

    Chlorine levels—pretty good
    Ozone response/recovery—so-so (we keep hearing “climate change” is impacting it)
    Temperature—lousy or non-existent

    And one more reference that maybe bugs should add to his collection

    http://www.jstage.jst.go.jp/article/sola/5/0/5_53/_article

  72. Re: Dave Andrews (Comment #74197)

    I’m sure The Hitchhiker’s Guide to the Galaxy was wildly innovative. The first couple of books, as I recall, were surreal lunacy. It’s just that I enjoyed them a lot more when I was (a lot) younger. (And the 3rd and 4th books were disappointing… although maybe it was just me growing old.)

  73. Re: John M (Apr 21 16:49),

    Here’s my ranking of the level of understanding of stratospheric chemistry/behavior.

    Chlorine levels—pretty good
    Ozone response/recovery—so-so (we keep hearing “climate change” is impacting it)
    Temperature—lousy or non-existent

    I think you have the last two backwards. Our knowledge of the global ozone concentration profile is lousy or non-existent. We have some handle on the total column ozone, but that’s not enough. We know the temperature fairly well. It looks like we should be determining the ozone concentration from the temperature rather than trying to measure it directly.

    I think it’s a fairly safe bet that the stratosphere temperature will be trending upward in the next ten years.
