Knight et al: More questions than answers.

Roger Pielke Jr. discussed Knight et al, which evidently appeared in BAMS. I obtained a very short PDF. The paper is rather difficult to discuss, mostly because it is rather vague. Presumably, this could be remedied by sending a volley of email to the authors, but… really, the point of a paper is to describe what the authors did and found. So, rather than disturb the authors, I’ll take a whirl at the paper, discussing it in the context of questions many of us ask ourselves here.

If anyone can clarify what they might have really meant, let me know.

The final paragraph of Knight et al begins:

Given the likelihood that internal variability contributed to the slowing of global temperature rise in the last decade, we expect that warming will resume in the next few years, consistent with predictions from near-term climate forecasts (Smith et al. 2007; Haines et al. 2009).

So, one might ask: Did the paper demonstrate that internal variability contributed to the slowing of global temperature in a way that makes the current slowdown consistent with predictions from, say, the AR4? This is the question I tend to ask, because I have been interested in whether or not modelers can come up with a “consensus” projection/prediction that is consistent with data collected after the projection/prediction methodology is frozen.

With this question in mind, I will highlight some differences between the analyses I tend to post and those in Knight et al.:

  • Their analysis is not based on analyzing any collection of IPCC AR4 runs but, rather, on runs from HadCM3 specifically. Presumably many of these runs post-date the AR4 or were, for some other reason, not used in the AR4. The relevant text appears to be:

    We can place this apparent lack of warming in the context of natural climate fluctuations other than ENSO using twenty-first century simulations with the HadCM3 climate model (Gordon et al. 2000), which is typical of those used in the recent IPCC report.

    For the time being, we can defer the discussion of whether or not these runs are “typical”. Suffice it to say they are a) a different collection from those used by the IPCC in the AR4 and b) apparently selected for this particular analysis published after temperatures from 1999-2008 were observed.

  • From some larger collection of runs, the authors appear to have selected a subset of 10 runs:

    Ten of these simulations have a steady long-term rate of warming between 0.15° and 0.25ºC decade–1, close to the expected rate of 0.2ºC decade–1.

    (Based on the later text, I suspect “long-term warming” is defined as a 70-year trend; the authors tell us the 10 simulations represent 700 years of 21st-century simulation. Of course, I’m not sure the long-term trend is a 70-year trend – but I’ll proceed on that assumption.)

    Selecting a subset of models other than those used by the IPCC is a bit different from what I’ve been doing at The Blackboard and is problematic. It becomes especially problematic if one selects a subset of model runs after the data they “predict” have been observed. This might be fine if the screening method was announced before the data were observed, or if the method were so obvious that no one could question its propriety.

    However, the motivation for the choice of these particular runs seems odd to me, because the 70-year trend (i.e. the long-term trend) averaged over 55 runs under the A1B scenario appears to be 0.27 C/decade. The trends are shown below:

    [Figure: seventy-year trends across the AR4 A1B model runs]

    So, if Knight et al define “long-term trend” as 70 years, then every single one of their 10 runs displays less warming than the multi-run mean from the A1B scenario in the AR4. Of course, maybe they define the long-term trend otherwise. Had the specific definition been provided, we might be able to evaluate the choice more specifically.

    For now, I would suggest that if Knight et al were comparing current trends to an ensemble of models with less rapid warming than projected in the AR4, their comparison tells us very little about the consistency of the current trend with projections in the AR4. What we might conclude is that their analysis showed the current trends are consistent with projections based on a subset of models chosen after temperatures were observed and which exhibit less rapid warming than the collection used to make projections in the AR4. (A small illustrative sketch of this selection issue appears after this list.)

  • Though the conclusions of Knight emphasize the notion that the variation in short-term trends across their ensemble is due to internal variability, in reality, both short- and long-term trends across their model runs differ as a result of both: a) different parameterizations and b) different forcings applied. The parameterization choices appear to post-date the AR4 projections, and may have affected both the trends and the internal variability of the models. Here’s the relevant discussion:

    Ensembles with different modifications to the physical parameters of the model (within known uncertainties) (Collins et al. 2006) are performed for several of the IPCC SRES emissions scenarios

    So, the variability across the 10 runs is not merely due to internal variability. Due to the brevity of the paper, for all a reader can determine, the variability may not even be mainly due to internal variability. At least part of the variability is due to differences in the parameterizations and the forcings.

    In this light, it is impossible for the reader to know whether the “consistency” between the observed trend and the 10 models arises because the current trend is consistent with a multi-model mean projection as low as 0.15 C/decade – which is less than the multi-model mean projection in the AR4 – or whether the current trend is consistent with projections in the AR4.
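
To make the selection issue concrete, here is a minimal sketch of the kind of calculation involved. It uses synthetic runs rather than actual HadCM3 or AR4 output; the ensemble size, trend spread, and noise level are made-up round numbers. It only illustrates how screening an ensemble on its 70-year trend changes the mean warming rate one ends up comparing against.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_years = 55, 70          # illustrative ensemble size and run length

# Synthetic runs: each has its own underlying trend (C/decade) plus "weather" noise.
true_trends = rng.normal(0.27, 0.06, n_runs)          # centered near 0.27 C/decade (assumed)
years = np.arange(n_years)
runs = (true_trends[:, None] / 10.0) * years + rng.normal(0, 0.1, (n_runs, n_years))

def decadal_trend(series):
    """OLS trend of an annual series, expressed in C/decade."""
    return np.polyfit(np.arange(len(series)), series, 1)[0] * 10.0

long_term = np.array([decadal_trend(r) for r in runs])

# Screen the ensemble the way the quoted text describes: keep runs whose
# 70-yr trend falls between 0.15 and 0.25 C/decade.
selected = runs[(long_term >= 0.15) & (long_term <= 0.25)]

print(f"Full-ensemble mean 70-yr trend:   {long_term.mean():.2f} C/decade")
print(f"Screened-subset mean 70-yr trend: "
      f"{np.mean([decadal_trend(r) for r in selected]):.2f} C/decade")
```

By construction, the screened subset warms more slowly than the full ensemble; whether the screening in Knight et al has a comparable effect on their ensemble is exactly what the paper leaves open.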

There are a few other, less important oddities in the paper. No doubt there are some who will puzzle over the apples-to-oranges statistical comparison of local trends in grid boxes, or the ENSO correction (why bother? What were the results without the ENSO correction?).

I don’t think they are necessarily worth discussing in any detail. Based on the above, it seems to me that, at best, Knight et al. showed that:

If modelers examine a set of 10 runs with different long-term trends resulting from a variety of parameterizations and SRES scenario combinations, all of which seem to have exhibited ‘long-term’ warming trends below the average 70-year trend for the multi-run mean of runs forced with the A1B SRES in the AR4, then the current slowdown in warming falls in the range of variability of those lower-than-projected-warming runs.

With some luck, a longer, more detailed paper will be published – or, if it has been published, someone can point to it. Then we’ll be able to learn a bit more about what those authors actually did.

60 thoughts on “Knight et al: More questions than answers.”

  1. In post 19 of that thread Roger gives a link to the whole supplement; it is publicly downloadable, though at 64Mb rather large. Haven’t read this yet, so no idea if it will help.

  2. Really the method of doing these model runs is problematic. They have a choice of parameters for their submodels, for example clouds. The choice of parameters has a substantial effect on the amount of warming spit out by the model, yet they never specify these parameters when they give the results of model runs.

    The old version of MIT’s EPPA model had explicit inputs for key variables of aerosols, ocean sensitivity, and cloud feedback.
    They really should be freezing and making public the model code, and reporting a parameter set with each model run. Then you can always run the same model afterwards, and see which parameter set yielded the best fit to actual data.

    Perhaps they don’t do this because they don’t want people to see the amount of variation (more than 5C) that comes from changing parameters.

  3. MikeN

    The choice of parameters has a substantial effect on the amount of warming spit out by the model, yet they never specify these parameters when they give the results of model runs.

    In the context of this paper, they might also have a substantial effect on the variability of 10-year trends.

    Presumably there is some longer document somewhere. Maybe a NOAA report describing the fuller set of models so we can see how these compare to the fuller set? Maybe an archive with the gridded model data or the monthly temperature anomalies and ENSO values? Who knows?

  4. In the PDF it is stated that 10 years of flat to lower observed temps are consistent with the models, but:

    “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

    This is the first time I have seen anyone state a specific time period for evaluating the efficacy of models.
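
A minimal sketch of where a number like “15 years” can come from, using a toy Monte Carlo rather than the HadCM3 ensemble: assume a fixed underlying trend of 0.2 C/decade plus white “weather” noise (the noise level is made up), and find the window length at which the 2.5th percentile of fitted trends rises above zero. The crossover length it prints is purely illustrative and depends entirely on the assumed noise.

```python
import numpy as np

rng = np.random.default_rng(1)
trend = 0.20 / 10.0            # assumed underlying warming, C/yr (0.2 C/decade)
sigma = 0.15                   # assumed year-to-year "weather" noise, C (illustrative)
n_sims = 2000

def trend_percentile(window, pct=2.5):
    """2.5th percentile of fitted trends over `window` years, across n_sims noisy runs."""
    t = np.arange(window)
    sims = trend * t + rng.normal(0, sigma, (n_sims, window))
    slopes = np.array([np.polyfit(t, s, 1)[0] for s in sims])
    return np.percentile(slopes, pct) * 10.0   # back to C/decade

for window in range(8, 21):
    lo = trend_percentile(window)
    flag = "zero excluded" if lo > 0 else "zero still inside 95% range"
    print(f"{window:2d}-yr windows: 2.5th pct = {lo:+.2f} C/decade  ({flag})")
```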

  5. Jack–
    This group of model runs appears to be “post AR4 (2007)”. So… yeah. They don’t seem to be discussing models that predicted anything. The comparison is between observations and an apparently ‘new’ ensemble of 10 runs.

  6. Jack–
    I should also note that one of the things that bothers me is the utter vagueness of the paper. Is a “long term” trend 70 years? How does this collection really compare to AR4 projections– or any ensemble pulled together before the trend they “predicted” was observed?

    There is nothing in this paper to permit a reader to know. This has two problems:
    1) Readers can’t know without contacting the authors. This is tedious and ridiculous. Papers are supposed to communicate the basis of their results. It would be one thing if the lapses were few and minor– but it’s just tons of stuff.

    and

    2) Any answers obtained will be scattered all over the place and, for all we know, inconsistent.

  7. Why should we give any more credence to climate models than we now give to all those oh-so-sophisticated models that the mathematicians and econometricists used to drive the world into the worst recession since the 1930’s?

  8. Dave: Models don’t drive the world into recessions. People drive the world into recessions.

  9. Dave,
    That’s a false comparison. Economic models aren’t based on physics. Whatever problems climate models have, I’d sooner put more trust into their results than an economic model.

  10. This paper is included in The State of the Climate 2008, which is longer, and evidently filed with: “Bulletin of the American Meteorological Society, Vol. 90, Issue 8”. However, the longer document is mostly a compilation of many reports. It doesn’t contain any more detail than the shorter pdf.

  11. Chad,

    “That’s a false comparison. Economic models aren’t based on physics. Whatever problems climate models have, I’d sooner put more trust into their results than an economic model. ”

    1) I don’t think basing economic models on physics would make them better

    2) Climate models are PARTIALLY based on physics

    3) do you really think we know everything about physics???

  12. As I understand it, the risk models (which enabled people to create a near economic collapse) were built by people who didn’t understand what the risks really were.

    If the climate models have been built by people who did not and do not really understand how the climate is driven, then the comparison of the two models is relevant.

    The two Russian solar physicists who in 2005 bet $10,000 with James Annan (climate modeler) that the average global temperature ten years from then would be cooler, not warmer, clearly have a different model than the ones in vogue.

  13. Don B,

    The Russians will have a dataset that wins them the bet and JA will have a dataset that wins him the bet.

    Andrew

  14. “The Russians will have a dataset that wins them the bet and JA will have a dataset that wins him the bet.”

    They agreed on a dataset before the bet. NCDC according to Annan’s blog.

  15. What do they mean by ‘average temperature’? Is the bet about whether 2015 will be warmer or colder than 2005, is it about 5 year averages centred on those years, or what?

  16. David–
    I think I found the story here: http://www.guardian.co.uk/environment/2005/aug/19/climatechange.climatechangeenvironment

    To decide who wins the bet, the scientists have agreed to compare the average global surface temperature recorded by a US climate centre between 1998 and 2003, with temperatures they will record between 2012 and 2017

    Andrew_KY–
    No one is winning or losing so far. We can’t begin to know until 2012 when the second period starts. However, right now, based on GISS, NCDC and HadCRUT, the average surface temperatures from Jan 2004 to now are roughly 0.1C lower than the temperatures between 1998-2003.

    Still.. even if the models are over-predicting warming, I suspect the temperatures will rise, and Annan will win the bet. We can wait before we wager any Quatloos on this.

  17. The 15 year number makes me suspicious of their model simulations. These are the same numbers Tamino gave, and he just assumed temperature is a linear trend plus noise. I got banned on his site for saying he is wrong in concluding a 10-11 year pause is ‘no more than expected’

  18. They are slowly coming to grips with the fact that the models are not predicting very well.

    But you still get the feeling that they just don’t believe it yet. That the models have to be right and are nearly infallible.

    I don’t know how many of you have built a model and then watched it not work very well in predicting the future. It is humbling and you lose faith that the phenomenon can even be predicted accurately. Or you just accept that it only points to a future direction that could still be way off from reality anyway or you might even just give up and discard it.

    It just does not seem to happen in this climate modeling field. There is something very different about it compared to the normal human experience.

  19. RE: Don B (Comment#20207) September 17th, 2009 at 6:44 pm
    “As I understand it, the risk models (which enabled people to create a near economic collapse) were built by people who didn’t understand what the risks really were.”

    Further to that observation see Nassim Taleb on edge.
    http://www.edge.org/3rd_culture/taleb08/taleb08_index.html

    With reference to climatology, here is something that has been bothering me for some time: has it been shown that model variability follows the same distributions as climate data variability? If it has not, can anyone explain how any credible confidence levels for “ruling out” or “confirming” hypotheses with models can be computed? Apart from that, Taleb’s arguments based on “fat-tailed” distributions would seem to suggest that such distributions are not well modeled by the usual statistical references such as Gaussian, binomial, etc. I have read that Mandelbrot found power-law distributions in such climate-related events as estuarine flooding, and I believe there are many other examples of natural processes in the literature which exhibit similar variability. If so, how is it justified to assume Gaussian or other such models of variance when attempting to find a significant “signal” in the noise?
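
One rough way to check the question raised in the comment above is to compare the empirical distributions of, say, overlapping 10-year trends from a model run and from an observational record. The sketch below does this with entirely synthetic stand-ins (the “model” series is deliberately given fatter-tailed noise; all numbers are invented) and a two-sample Kolmogorov–Smirnov test. Because overlapping windows are not independent, the p-value is only indicative.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

def overlapping_decadal_trends(series):
    """Fitted trends (C/decade) for every overlapping 10-yr window of an annual series."""
    t = np.arange(10)
    return np.array([np.polyfit(t, series[i:i + 10], 1)[0] * 10.0
                     for i in range(len(series) - 10 + 1)])

years = 120
# Stand-ins: an "observed" record and a "model" record with the same underlying trend,
# but the model is given fatter-tailed interannual noise (purely an assumption here).
obs   = 0.007 * np.arange(years) + rng.normal(0, 0.10, years)
model = 0.007 * np.arange(years) + rng.standard_t(df=3, size=years) * 0.10

stat, pval = ks_2samp(overlapping_decadal_trends(obs),
                      overlapping_decadal_trends(model))
print(f"KS statistic = {stat:.3f}, p-value (indicative only) = {pval:.3f}")
```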

  20. Jack 20182
    I had a discussion with Gavin, Ladbury and the rest at RC about 3-4 months ago regarding Lucia’s work and it came out then that a 15 year period without warming would “invalidate” the model projections. I guess we’ll find out sometime between 2012 and 2017.
    Thanks
    Ed

  21. They are slowly coming to grips with the fact that the models are not predicting very well.

    But you still get the feeling that they just don’t believe it yet. That the models have to be right and are nearly infallible.

    You have no idea what “they” think at all.

  22. bugs (Comment#20238) September 18th, 2009 at 1:18 am

    You have no idea what “they” think at all.

    In this case it’s probably best all around to simply ask, rather than speculate.

  23. edward–
    They use a test that ignores the actual variability of the earth and substitutes the variability across all runs of all models in the IPCC. So, their variability includes both “weather noise” and “different parameterizations noise”.

  24. “However,the motivation for the choice of these particular runs seems odd to me because the 70 year trends (i.e. long term trends) averaged over 55 runs under the A1B scenario appears to be 0.27 C/decade.”

    I would guess that the point was to get as many years of “0.2 degrees/decade” simulations as possible, since they are not looking at 1998-2008 specifically, but just taking any 10-year period from those model runs. One could argue that it would be better to take 70+ runs that covered 1998-2008, but I don’t know that such a dataset exists using only one model (and it might have to be much larger than 70 runs in order to actually get an equivalent amount of data). There are a number of other caveats I could come up with, and better experiment designs, but given the constraint of working with model runs that exist, this is one way of exploring the 10-15 year variability within a single model.

    As to the ENSO correction: my guess is that this makes the test more stringent. First, it reduced the 1999-2008 trend. Second, it would have reduced the variability in the model runs. I would argue that if you _can_ remove known variability in both your observed and modeled scenario, that it makes for a better statistical test. Again, one can argue about whether ENSO detrending in reality is the same as ENSO detrending in the model but I would think that it isn’t unreasonable for a short paper (and yes, for a long paper I’d hope they’d show both pre-detrend and post-detrend).
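
A minimal sketch of the windowing approach described in the comment above: take a single synthetic 70-year run (the underlying trend and noise level are made-up values, not HadCM3 output) and fit a trend to every overlapping 10-year window. With 10 such runs one would get roughly 600 overlapping, strongly correlated 10-year trends.

```python
import numpy as np

rng = np.random.default_rng(3)

# One synthetic 70-yr run: 0.2 C/decade underlying trend plus noise (illustrative values).
years = np.arange(70)
run = 0.02 * years + rng.normal(0, 0.1, 70)

window = 10
trends = np.array([np.polyfit(np.arange(window), run[i:i + window], 1)[0] * 10.0
                   for i in range(len(run) - window + 1)])

print(f"{len(trends)} overlapping {window}-yr windows per 70-yr run")
print(f"10-yr trends range from {trends.min():+.2f} to {trends.max():+.2f} C/decade")
print(f"fraction of windows with trend <= 0: {(trends <= 0).mean():.2%}")
```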

  25. Marcus–

    I would guess that the point was to get as many years of “0.2 degrees/decade” simulations as possible.

    But with 70-year-long periods and a non-linear underlying trend, this won’t accomplish that goal.

    The difficulty is that, since we expect the underlying trend to be non-linear in time, they have automatically introduced periods of time when the underlying trend is lower than 0.2 C/decade.

    In contrast, they could create the ensemble they wished by running 25 runs of 30-year simulations rather than 10 runs of 70 years. They could show the ensemble average of those runs is 0.2 C/decade. (A small worked example of this acceleration effect appears after this comment.)

    Again, one can argue about whether ENSO detrending in reality is the same as ENSO detrending in the model but I would think that it isn’t unreasonable for a short paper (and yes, for a long paper I’d hope they’d show both pre-detrend and post-detrend).

    First– I haven’t said ENSO detrending is either reasonable or unreasonable. The paper doesn’t tell us enough to know. That said: Why doesn’t the paper mention in one sentence what the results are prior to ENSO detrending? In reality, if ENSO detrending in the models makes a difference, that opens another can of worms.

    More importantly, why is the question, “What is reasonable in a short paper?” meaningful to people who want to determine what the paper actually demonstrated?

    Sure, it may be that we should not expect short papers to show anything. And maybe, given the lack of numerous data runs, we should not expect climate scientists to do analyses that support their conclusions in a convincing way.

    But… neither of those things means we have to pretend the authors did support their conclusions. As the paper stands, the authors did not support their claims in a way that a reader can see they are supported.

    Maybe if they wrote a longer paper that filled in all the answers to the questions, then people other than the authors would be able to evaluate what claims the authors’ work really supports. But in the meantime: the authors haven’t supported their conclusions about whether or not 10-year zero trends are “expected”. They’ve made a claim and published a very scanty, vague paper that tells the public nothing.

    If the climate science community continues to ‘communicate’ in this way, they will continue to foster skepticism among people who actually read their papers!
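
The acceleration point above can be shown with a noise-free worked example: build a series whose underlying forced response accelerates (the curvature value is invented purely for illustration) but whose 70-year OLS trend is exactly 0.20 C/decade by construction. Its first-decade trend then sits well below 0.20 C/decade and its last-decade trend well above.

```python
import numpy as np

# Noise-free, accelerating "forced response": quadratic in time, topped up with a
# linear term so the full 70-yr OLS trend is exactly 0.20 C/decade by construction.
years = np.arange(70)
accel = 0.0002                                             # C/yr^2, assumed for illustration
quad = accel * years**2
linear = (0.02 - np.polyfit(years, quad, 1)[0]) * years    # bring the 70-yr slope to 0.02 C/yr
series = linear + quad

def trend(segment):
    """OLS trend of an annual segment, in C/decade."""
    return np.polyfit(np.arange(len(segment)), segment, 1)[0] * 10.0

print(f"70-yr trend:        {trend(series):.2f} C/decade")
print(f"first-decade trend: {trend(series[:10]):.2f} C/decade")
print(f"last-decade trend:  {trend(series[-10:]):.2f} C/decade")
```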

  26. 0.2C per decade for another 90 years only gets temperatures to +2.4C by 2100. There is also a flattening out of the predicted trends starting about 2080, so the average warming rate has to be more than 0.2C per decade to reach the IPCC prediction.

    I pulled apart the expected trend C per decade rate awhile ago and it produced an unexpected result.

    http://img190.imageshack.us/img190/9912/warmingrates2100.png
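
A rough check of the arithmetic in the comment above; the ~0.6 C of warming already realized is an assumed round number for illustration, not a figure quoted anywhere in the paper.

```python
# Back-of-envelope check: constant 0.2 C/decade from roughly 2010 to 2100.
rate = 0.2          # C/decade
decades_left = 9    # ~2010 to 2100
already = 0.6       # assumed warming already realized relative to the baseline (illustrative)

print(f"0.2 C/decade for {decades_left} more decades adds {rate * decades_left:.1f} C")
print(f"Rough total by 2100 (with ~{already} C already realized): "
      f"{already + rate * decades_left:.1f} C")
```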

  27. Just as with an earlier (2009) short paper (can’t remember the authors), these authors fail to demonstrate that the selected model runs have average 10-year variability that is comparable to the known variability for the last ~120 years. If the selected model runs are substantially more variable than the temperature history, then concluding that 10-year periods of falling average temperature are “expected” at some known frequency is nonsense. How does such garbage even get by peer review?

  28. SteveF–
    The paper is Easterling and Wehner. It was bad too.

    We have two papers that claim to show zero trends are to be expected over 10 years. Both present very poor arguments. They will not convince anyone skeptical.

  29. lucia (Comment#20253) September 18th, 2009 at 7:40 am

    “The paper is Easterling and Wehner. It was bad too. ”

    Thanks. I wrote an email message to both asking about the relative variability of model runs versus temperature history (which seemed in their case terribly wrong), and got no reply, which is not so much a surprise, since being rude seems almost a requirement to be in the climate modeling field.

    But I still don’t understand how such garbage gets by peer review. Does the editor automatically select someone who is known to be “friendly” to the paper’s results? I would imagine editors would want to select reviewers who would very critically examine the paper for quality to ensure only good papers get published, but this does not seem to be the case.

  30. “They are slowly coming to grips with the fact that the models are not predicting very well.

    But you still get the feeling that they just don’t believe it yet. That the models have to be right and are nearly infallible. ”

    No scientist likes to be proven wrong. Admission of error is rare in the sciences, and it never happens when everyone is looking at you. A lot of times it’s easier to simply hold the line, recant nothing, and wait until the focus has shifted to someone/something else; then you can pretend it never happened. I work in the biological sciences, and I see it a lot. If you ever want to make someone squirm, remind them of a theory/position they held in the past that has since been proven false. “Hey, remember when you thought the gsf7 gene was THE cancer precursor gene?” “Um, no, I don’t think I ever said that…”

  31. “Ten of these simulations have a steady long-term rate of warming between 0.15° and 0.25ºC decade–1, close to the expected rate of 0.2ºC decade–1.”

    I’ve been saying for quite some time now that I don’t really care if a model that warms 1.5 degrees in the next century is rejected as a hypothesis or not. I would actually care if some of the more extreme models were rejected.

    So why exactly do modelers insist on only those tests which allow for either “all of the models are wrong” or “all of the models aren’t wrong”? Can’t we also be interested in the possibility that all of the models aren’t wrong, only most of them?

  32. Lucia,

    “We have two papers that claim to show zero trends are to be expected over 10 years. Both present very poor arguments. They will not convince anyone skeptical.”

    If you want some other supporting evidence for the 10-year number, I’ll gladly provide you some! For instance check out Figure 3 from Pat’s February testimony.

    While I don’t think that Knight et al. or Easterling and Wehner are perfect analyses, based on work that I have been involved with, I think that models do include zero as a reasonable expectation of an observed trend value for trend lengths up to about 10 years. The exact value, of course, depends on how you do the figuring.

    -Chip

  33. Chip–
    You are mistaking my reason for thinking E&W is a bad paper.

    I think Easterling is bad for a number of reasons. One is that they are addressing the zero trends but failing to address the temperature trends we’ve actually seen. Zero may seem like an important number, but we’ve seen much lower trends, and the more important question is not about “zero” but about trends that have been observed.

    The other difficulty with E&W is that they go so far as to claim other recent slow trends actually appear during the late 20th century – and do so in a manner that would lead people to believe those slow trends were not associated with volcanic eruptions. In reality, their examples are all associated with volcanic eruptions.

    A further difficulty is that, to support the claim that the variability over all model runs reproduces the internal variability of the earth’s weather, E&W make comparisons between the variability of modeled and observed trends over the 20th century, even though many of the models are not driven by volcanic forcings.

    So, if the models are correctly capturing variability, their 20th-century trend variability should likely be lower than the observed variability. In reality, the model variability is slightly higher than the observed variability.

    The issues above are all flaws even if the paper includes some true observations. That A2 model runs for the early 21st century do include a reasonable expectation of zero trends over 10 years is true. I’ve never disputed that.

    But does that support the conclusion that we “expect” to see that level of variability of trends on earth?

    I think a much more limited conclusion is warranted. I think it’s entirely fair to ask whether the variability of all runs in all models exceeds that for the earth. There are plenty of reasons to expect this might be so. Knight does not even try to engage this point; E&W make a very poor case.

    Before concluding that “we should expect” something, one should address this issue of whether variability of all runs over all models represents the variability of trends on earth. Otherwise, the conclusions should be limited to

    “The observed trend does fall within the range of the model runs” or “The trend does not fall within the range of the model runs”.

    Either conclusion should be reportable– but it’s a somewhat different conclusion than the one advanced in these two papers.

  34. Lucia,

    Good point.

    I agree that models run with SRES A1B statistically expect (within their 95% confidence bounds) a ~10-yr trend of zero.

    But your point is that we should not necessarily accept that the level of internal noise in the models is the same as in the real world… and you have reason to expect that the model noise is, in fact, greater than the real world’s. A situation which, if true, would shorten that 10-yr period.

    -Chip

  35. and you have reason to expect that the model noise is, in fact, greater than the real world. A situation which, if true, would shorten that 10-yr period.

    Precisely. It’s not certain the model noise is greater, but there is evidence to suggest it may well be.

    I think it’s fair to estimate the variability of 10 year trends based both on the earth’s own time series and the models. It’s not quite right to insist that it can only be estimated based on the models.

    While it is perfectly fair for people to observe what the model variability is and to compare the trend to that range, it’s not quite right to categorically decree that’s the variability we must “expect”.

  36. I argued several months back when the first paper came out, that the recent lack of warming meant the high warming projections are less likely than the low warming ones.
    Tamino said the models’ warming accelerates toward the end, so the first decade actually has a lower trend than later ones, and that there isn’t enough of a difference between high and low warming projections in the early decades for my point to be valid.
    I tried to disprove him by looking up the actual projections (which was when I discovered this site), but all of the posted projections at the Climate Explorer have a very limited range. I couldn’t find anything with warming above 4C or below 2C.

  37. Lucia,

    Early September, the WMO held a major confab in Geneva to discuss, inter alia, the status of climate models.

    Mojib Latif [Leibniz Institute at Kiel University], modeler and IPCC lead author, bluntly stated that the emperor has no clothes on. All current models are fundamentally deficient, as evidenced among other things by the fact that none of them even came close to predicting the absence of warming/cooling of the past decade. He also stated that NAO cycles alone account for a significant portion of the warming observed between the late 1970s and 1990s.

    Vicky Pope [UK Met Office] argued that one of the models’ core deficiencies is that they don’t account for natural climate variability. They simply can not because it’s too complex a bundle.

    Tim Stockdale [European Centre for Medium-Range Weather Forecasts] made the point that “model biases remain a serious problem”. “We have a long way to go to get them right. They are hurting our forecasts”.

    These individuals are certainly no “skeptics”. They are part of a growing group of scientists who have come to realize that their scientific integrity/reputation is at stake in an ideologically driven controversy where “science” is abused all around. That said, it is perplexing to see how firmly wedded Latif remains to the linear, deterministic view that [man-made] global warming will return unabated after the current hiatus, a view all the more baffling given his comments on the crucial role of the natural variability represented by NAO cycles.

  38. SteveF (Comment#20255) “Does the editor automatically select someone who is known to be “friendly” to the paper’s results?”

    Actually, many journals are letting the authors suggest “suitable” reviewers and also name people who, from the author’s point of view, could not be “objective” as reviewers.

    Of course, the editor still has to be fairly sympathetic, or just lazy and willing to let the authors dictate the reviewers.

  39. tetris (Comment#20293)-Well, as far as Vicky Pope goes, I can imagine that it is rationalized that natural variability is important, but of limited significance on time scales longer than about 15 years-that certainly seems to be the way a lot of people are going. However Latif’s comment is truly puzzling. But then again, Lindzen said of him:

    “[Latif] is actually one of the better ocean modelers. However, he used to be all over the German media proclaiming that models were perfect, and should be used to determine policy. When someone responded that since the models were perfect, there was no need for more funding, [Latif] developed a deeper appreciation for the model shortcomings. I suppose that there is a lesson here someplace.”

  40. About the E&W paper, when it came out and Chris Colose blogged about it, I looked at his graphs from the paper, and it really said the opposite of the title. The probability of a zero decade was small, and a -.05 decade <1%

  41. MikeN–
    If you look at the distribution of 10-year trends over all models, there is some non-negligible chance of a 10-year trend of zero. The difficulty is that some of those models have lunatic ‘weather’ – in particular, some that show zero trends are extremely wild – to a degree that is simply not seen on the real earth. The models all have widely different “weather noise” from each other. They cannot all be correct. Why should a test that is extremely sensitive to the outliers in “wild weather” necessarily be a good one?
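
A toy illustration of why a pooled test is so sensitive to the high-variance models: give several synthetic “models” the same underlying trend but different amounts of interannual noise (all values below are invented), then compare the chance of a zero 10-year trend model by model versus pooled across all of them.

```python
import numpy as np

rng = np.random.default_rng(4)
trend = 0.02                       # C/yr underlying trend, identical in every toy "model"
sigmas = [0.05, 0.08, 0.10, 0.25]  # interannual noise per model; the last is the "wild" one
n_runs, window = 500, 10
t = np.arange(window)

def decadal_trends(sigma):
    """10-yr trends (C/decade) from n_runs synthetic decades for one noise level."""
    sims = trend * t + rng.normal(0, sigma, (n_runs, window))
    return np.array([np.polyfit(t, s, 1)[0] for s in sims]) * 10.0

per_model = {s: decadal_trends(s) for s in sigmas}
pooled = np.concatenate(list(per_model.values()))

for s, tr in per_model.items():
    print(f"sigma={s:.2f}: P(10-yr trend <= 0) = {(tr <= 0).mean():.1%}")
print(f"pooled over all toy models: P(10-yr trend <= 0) = {(pooled <= 0).mean():.1%}")
```

The pooled probability is dominated by the noisiest “model”, which is the sense in which a test built on the full multi-model spread is sensitive to outliers.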

  42. Andrew FL [20299]
    Vicky Pope also argued that the 2007 out-of-the-norm Arctic ice loss was due to natural cycles rather than AGW.

    To your point, Pope’s boss, James Murphy, underscored that the oceans are “key to decadal natural variability”. I have to admit that I’m puzzled by what that is supposed to mean. I trust he’s not suggesting that the PDO or the NAO are “decadal” because we know they are both multi-decadal cycles.

    The key point remains that we now have up for discussion, from the “experts” themselves, that climate models and their predictive value are highly questionable. Which at an overarching level means that these climate models should not be accepted as the basis for multi-billion dollar government driven economic policy decisions.

  43. tetris (Comment#20305)-What may be even more significant is the admission that natural internal variability is a problem for the models. Remember that the attribution argument goes something like: Our models don’t come close to the observed temperature history with TSI and volcanoes only, and need a substantial push from AGW effects to match.

    But wait: if we know that internal variability can result in “pauses in warming” and we know models are deficient in creating realistic internal variability, then they wouldn’t be able to predict much natural warming even if that were the case in reality!

    So, of course, it has to be acknowledged that natural, internal variability is a problem. But the modelers MUST be extremely careful to say that such effects don’t matter on longer timescales. There is zero evidence for that, but then again, there has always been the unstated assumption that “weather” doesn’t have timescales longer than a few years…

  44. tetris,
    “Which at an overarching level means that these climate models should not be accepted as the basis for multi-billion dollar government driven economic policy decisions.”

    Trillion, not billion.

  45. MikeN,

    “About the E&W paper, when it came out and Chris Colose blogged about it, I looked at his graphs from the paper, and it really said the opposite of the title. The probability of a zero decade was small, and a -.05 decade <1%"

    Exactly the conclusion I came to from looking at the graphs. I then emailed the authors with (in part) these comments:

    "It seems to me that your data comparing model predictions with the 20th century record:

    Experiment                 Negative Trends (%)   Positive Trends (%)
    Observations, 1901-2008           1.0                   3.0
    Control                           2.65                  2.65
    20th Century                      1.97                  8.63

    shows that the variability of the ECHAM model is higher than the real variability of the 20th century; the number of statistically significant negative decades hindcast by the model is twice what the historical record shows, while the number of statistically significant positive trends is nearly three times what the historical record shows. The sum of statistically significant negative and positive trends for the "Control" case (with no underlying warming) is much larger than the sum of the significant trends for the 20th century, while it ought to be somewhat smaller, since removing the underlying warming trend (removing increases in forcing from the model) should reduce the chance of statistically significant warming trends somewhat more than it increases the chance of statistically significant cooling trends. Once again, this indicates the model is more variable than reality.

    The model hindcast does not include the substantial contribution of volcanic eruptions to climate variability, while the 20th century temperature record does. The 20th century data should therefore be more variable than the model, not less, indicating yet again that the model is more variable than reality."

    I got no reply from the authors. I do not understand why any reviewer would not have raised similar questions.

  46. SteveF (Comment#20339)- A billion here, a trillion there, pretty soon you’re talking about real money! 🙂

    SteveF (Comment#20340)- It was even worse than that suggests. I’m not sure how you calculated your figures (are periods which surely occurred from the forties through the seventies where little change or cooling occurred included? They shouldn’t be in either the denominator or the numerator.) However ALL of the periods E&W identified in the observational record of periods of no warming embedded in long term warming were associated with volcanic eruptions! So the real percent could easily be closer to zip, zero, zilch….

  47. Andrew_FL (Comment#20345),

    “However ALL of the periods E&W identified in the observational record of periods of no warming embedded in long term warming were associated with volcanic eruptions! So the real percent could easily be closer to zip, zero, zilch….”

    You are right, but I did not want to antagonize E&W. I just used the numbers they published. I was actually expecting a reply of some kind…. silly me.

  48. To the extent that the models are not incorporating natural variability elements, we are just talking about “model noise” here.

    These are not natural climate cooling periods; they are just longer periods of model noise.

    A model going off track for a seven years is not the same as the Atlantic Multidecadal Oscillation switching to a cooling phase.

    The models are based on reaching 2.4C to 4.0C by 2100 with the trend rising in lock-step with the increase in GHGs (give or take some very small expected changes in the other forcings like volcanoes and aerosols – there is no accurate volcano forecast or aerosol forecast for the future so these are mostly flatlined).

    Just because the models go off track for periods of time, does not mean they are capable of simulating the past decade of cooling/stalling in temperatures.

    The forcings the models are based on have not declined in the past decade. The forcings have increased. The models, thus, do not include all the forces which act upon the climate, or they have over-estimated the forces which they have included.

  49. Bill Illis,

    Thank you. What you are saying here is that the models have severe limitations.

    Climate scientists must know this. Why do they not acknowledge it? Why do they prefer to run computer models than measure what is going on in the real world?
