When were the models used in the TAR frozen? Around 2000.

Recently, I have been interested in the answers to these questions:

How does the IPCC really make projections? When were the tuning parameters in the models used to create projections in the TAR “frozen”?

Both questions were motivated by one of the many puzzling features of a recent paper by “The Rahmstorf Seven”, often referred to as “Rahmstorf et al 2007.”

That paper suggested that the projections in the TAR were based on physical models which are independent of observations since 1990. But, mysteriously, these projections were only published in 2001. The question is: is this entirely accurate? And even if it is, can we be sure the tuning parameters were not influenced by temperature measurements collected after 1990?

By my reckoning, the models actually used to create the projections published in the TAR are tuned to something, and those tuning parameters were frozen sometime around 2000-2001, and no earlier. It is highly unlikely that predictions made with these tuning parameters were never compared to recent data; it is quite likely that tuning parameters resulting in poor predictions would have prompted scientists to develop better methods of selecting them.

So, validation of these models should be based on data obtained no earlier than 2001; to do otherwise would result in a false validation, as the “validation” would rely on data that likely affected the choice of the tuning parameters.

What type of projections am I talking about?

The more specific question I am asking is this: how did the IPCC create projections for global mean surface temperature, such as those in Figure 9.13b from Section 9.3.3, “Range of Temperature Response to SRES Emission Scenarios”, in the TAR:

[Figure: GMST projections from the TAR]

The narrative alongside this figure explains how it was created:

Figure 9.13b shows the simple climate model simulations representing AOGCM-calibrated global mean temperature change results for the six illustrative SRES scenarios and for the full SRES scenario envelopes. The individual scenario time-series and inner envelope (darker shading) are the average results obtained from simulating the results of seven AOGCMs, denoted “ensemble”. The average of the effective climate sensitivity of these AOGCMs is 2.8°C (see Appendix 9.1). The range of global mean temperature change from 1990 to 2100 given by the six illustrative scenarios for the ensemble is 2.0 to 4.5°C (see Figure 9.14). The range for the six illustrative scenarios encompassing the results calibrated to the DOE PCM and GFDL_R15_a AOGCM parameter settings is 1.4 to 5.6°C. These two AOGCMs have effective climate sensitivities of 1.7 and 4.2°C, respectively (see Table 9.1). The range for these two parameter settings for the full set of SRES scenarios is 1.4 to 5.8°C. Note that this is not the extreme range of possibilities, for two reasons. First, forcing uncertainties have not been considered. Second, some AOGCMs have effective climate sensitivities outside the range considered (see Table 9.1). For example, inclusion of the simple model’s representation of the CCSR/NIES2 AOGCM would increase the high end of the range by several degrees C.

(Italics mine.)

When I first skimmed this text, I thought these projections represented the ensemble average of actual AOGCM runs. However, on further reading, I realized that is not the case. If it were, there would be no reference to “two parameter settings”, nor would we see discussion of “simple climate model” results that are “AOGCM-calibrated”.

The modifiers in that paragraph imply that the projections in Figure 9.13b of the TAR are the product of a simple model with a handful of adjustable parameters. “Parameters” are often referred to as “tuning knobs”; in this case, the “simple models” are “tuned” to predict what a particular AOGCM might be expected to predict if that AOGCM were actually run using a particular set of SRES forcings during a particular period in time.

So, these projections are the output of a model (the simple model) that predicts the output of another model (an AOGCM) that is thought to simulate the climate of the earth.
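
To make the idea of “tuning” concrete, here is a minimal sketch, in Python, of what fitting a cheap model to expensive-model output can look like. Everything in it is invented for illustration: the forcing ramp, the synthetic “AOGCM” series, and the toy model with two knobs (a sensitivity and an ocean heat-uptake rate). It shows the flavor of the procedure, not the actual TAR method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical forcing ramp (W/m^2), 1900-2000, invented for illustration.
years = np.arange(1900, 2001)
forcing = 0.04 * (years - 1900)

def simple_model(forcing, sensitivity, uptake):
    """Toy zero-dimensional energy-balance model with two tuning knobs.

    sensitivity ~ equilibrium warming per unit forcing (K per W/m^2)
    uptake      ~ rate at which temperature relaxes toward equilibrium (1/yr)
    """
    T = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + uptake * (sensitivity * forcing[i] - T[i - 1])
    return T

# Stand-in for an AOGCM run: the toy model with "true" knobs plus noise.
rng = np.random.default_rng(0)
aogcm_T = simple_model(forcing, 0.8, 0.05) + rng.normal(0, 0.05, len(years))

# "Tuning": least-squares fit of the two knobs to the AOGCM output.
(sens_hat, uptake_hat), _ = curve_fit(simple_model, forcing, aogcm_T, p0=[0.5, 0.1])
print(f"tuned sensitivity = {sens_hat:.2f}, tuned uptake = {uptake_hat:.3f}")
```

Once the knobs are fit, the cheap model can stand in for the expensive one under forcings the AOGCM was never actually run with, which is exactly the role the simple models play in the hierarchy described below.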

Predictions are the result of a “cascade” of models.

The process for making projections is described in Section 9.3 and Appendix 9.1 of “Climate Change 2001: Working Group 1: The Scientific Basis”.

As far as I can tell, the projections in the TAR are based on what I might call a “cascade” of models, but which the TAR calls a “model hierarchy” (see Section 8.3).

I’ll describe each type of model below.

  1. A number of AOGCMs were used to simulate the earth’s climate response under a variety of SRES forcings.

    AOGCMs are the models one often reads about in various blog-climate-war debates. These are detailed models that attempt to simulate the earth by solving the equations governing conservation of mass, momentum and energy. They include parameterizations for various processes; we often hear that difficulties associated with modeling clouds are thought to introduce uncertainty into the model predictions.

    Of course, the predictions of each AOGCM differ from those of the others. However, since these are based on approximate models for individual physical processes that occur on earth (rather than, say, astrology), one might hope that, on average, an ensemble of AOGCM results predicts the earth’s climate.

    In particular, one might hope the ensemble-average results of a collection of AOGCMs predict measurable metrics like “global mean surface temperature” (GMST) or sea level rise accurately.

    These are used only indirectly to create projections like those in Figure 9.13 of the TAR.

  2. Simpler “upwelling diffusion-energy balance” (UD/EB) models have been developed, and one is tuned to each AOGCM.

    The term “upwelling diffusion-energy balance model (UD/EB)” describes a type of simple model.

    The goal of the simpler model is to predict the results of a more complex AOGCM. So, when run, each of these models should “predict” what a particular AOGCM would predict if it were run for a particular scenario.

    As far as I can determine, the simple UD/EB models used in the TAR contain six adjustable (aka “tuning”) parameters. (See Table 9A.1.)

    Like “Lumpy”, my toy model, which contains two “tuning” knobs, the tuning knobs in the UD/EB model do relate to physical processes. The tuning knobs in the UD/EB include a climate sensitivity, a diffusivity in the ocean, and so on. Nevertheless, as with “Lumpy”, their magnitudes are obtained by fitting to a data set created by an AOGCM.

    I infer that a UD/EB model, together with its six specific tuning parameters, represents “one” model. In the IPCC process there seems to be a one-to-one relationship between AOGCMs and UD/EB (aka “simple”) models.

  3. The simpler UD/EB models are then used to predict the earth’s climate under the wide variety of forcings specified by the SRES.

    Each of the ‘simpler’ models can be driven by an SRES forcing projection. When this is done, the result is some sort of climate time series. Metrics of interest, like GMST, can be computed from the output.

    These are then averaged and used to create plots like Figure 9.13b, which can later be compared to real earth data.
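
To illustrate step 3, here is a hedged sketch continuing the toy model above: drive the same two-knob model with a few invented SRES-like forcing ramps, once per made-up “AOGCM-calibrated” parameter pair, then average across the pairs to form the ensemble curves. None of these numbers are the actual TAR calibrations.

```python
import numpy as np

def simple_model(forcing, sensitivity, uptake):
    """Same toy two-knob energy-balance model as in the tuning sketch."""
    T = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + uptake * (sensitivity * forcing[i] - T[i - 1])
    return T

years = np.arange(1990, 2101)

# Invented stand-ins for SRES forcing scenarios (W/m^2 above 1990).
scenarios = {
    "low":  0.03 * (years - 1990),
    "mid":  0.05 * (years - 1990),
    "high": 0.08 * (years - 1990),
}

# Invented stand-ins for seven AOGCM-calibrated (sensitivity, uptake) pairs.
tuned_params = [(0.5, 0.06), (0.6, 0.05), (0.7, 0.05), (0.8, 0.04),
                (0.9, 0.05), (1.0, 0.04), (1.1, 0.03)]

for name, forcing in scenarios.items():
    runs = np.array([simple_model(forcing, s, u) for s, u in tuned_params])
    mean = runs.mean(axis=0)  # the "ensemble" curve for this scenario
    print(f"{name}: warming 1990-2100 = {mean[-1]:.1f} K "
          f"(range {runs[:, -1].min():.1f} to {runs[:, -1].max():.1f} K)")
```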

So we see that the projections are influenced by AOGCM output, but indirectly. More specifically, the predictions are those of simpler models that predict what an AOGCM would have predicted if the AOGCM had been run using a particular SRES forcing.

So, when were the “simple models” created?

If we define a “simple model” as the general class of model together with its specific tuning parameters, then the specific models used in the TAR appear to have been created shortly before 2001.

In Appendix 9.1: Tuning of a Simple Climate Model to AOGCM Results, we learn, “The tuning is based on the CMIP2 data analysis of Raper et al. (2001b).” Note that there is often a lag of a year or two between final results and their publication.

As the tuning parameters were not selected until shortly before publication of Raper et al. (2001b), beginning comparisons in 2001 seems appropriate. (Susan Raper’s publication list is here.)

So, while Rahmstorf et al. claim that model predictions in the TAR are somehow independent of data since 1990, and that, in some sense, the predictions cannot have been influenced by real temperature data measured after 1990, I think otherwise.

Clearly, analytical choices are made when selecting the magnitudes of all six tuning coefficients to simulate the output of AOGCMs. Like it or not, even simple models tuned to AOGCMs can be compared to actual empirical data prior to finalizing the choice of tuning coefficients.

In fact, I would be amazed to discover that Raper et al. cloistered themselves away from real data after 1990. Such an idea is not only implausible, it is absurd.

It is more likely that, out of true, honest, scientific interest, scientists made data comparisons during the process of selecting the magnitudes of the six tuning parameters. Though a procedure exists for selecting the parameters, that procedure was itself being developed as the coefficients were selected. Like it or not, it would be very difficult for researchers to proceed with tuning constants that resulted in poor fits to the most recent measurements of global mean surface temperature.

My conclusion

The models used to create TAR projections contain tuning coefficients developed shortly before 2001. The full process for selecting the tuning constants is, itself, an ongoing research project; the method is constantly being improved. It is inevitable that the process of developing the method to select the constants, and the choice of the constants themselves, was influenced by the most recent data available. This would include nearly all the global mean surface temperature data between 1990 and 2000.

For this reason, the fidelity of the tuned models should be tested against data that arrived after the final selection of tuning coefficients. In the case of the TAR, this means validation should be restricted to observations no earlier than 2001.
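
As an illustration of what such a validation might look like, here is a minimal sketch: fit an OLS trend to hypothetical monthly GMST anomalies beginning in 2001 and compare it to a nominal projected trend. The data, the 0.2 C/decade comparison value, and the plain OLS standard error are all placeholders; a serious test would at least correct the standard error for autocorrelation, a point that comes up in the comments below.

```python
import numpy as np

# Hypothetical monthly GMST anomalies (deg C) starting January 2001.
rng = np.random.default_rng(1)
months = np.arange(84)                 # e.g., 2001 through 2007
anomalies = 0.010 * months / 12 + rng.normal(0, 0.1, len(months))

# OLS trend (deg C/decade) and its standard error.
t = months / 120.0                     # time in decades
slope, intercept = np.polyfit(t, anomalies, 1)
resid = anomalies - (slope * t + intercept)
n = len(t)
se = np.sqrt(resid @ resid / (n - 2) / np.sum((t - t.mean()) ** 2))

projected = 0.2                        # nominal projected trend, deg C/decade
z = (slope - projected) / se
print(f"observed trend = {slope:.2f} +/- {2 * se:.2f} C/decade; "
      f"z vs projection = {z:.1f}")
```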

Sections of TAR describing the process

For readers’ convenience, here are links to the most pertinent portions of the TAR: Executive Summary of 9. Projections of Future Climate Change.

33 thoughts on “When were the models used in the TAR frozen? Around 2000.”

  1. Lucia,

    Do you know when the models were frozen for AR4? I suspect it was around 2006. If so, this would impact your previous analyses.

    Running your tests against TAR data would be interesting. The error bars are wider but the average trend is 0.4 degC/decade.

  2. Lucia, Thanks for this persuasive analysis.

    There are actually seven co-authors of Rahmstorf et al (2007), not six. These seven scientists are affiliated with six leading research institutions in five countries – US, UK, Germany, France and Australia – and all of them except James Hansen contributed to the IPCC’s Fourth Assessment Report (AR4). This highly influential paper was published online in ‘Science’ on 1 February 2007 – i.e., on the eve of the publication of the SPM of AR4.

    The Garnaut Interim Report to Australian Governments relies on Rahmstorf et al (2007) to support its conclusion, which is likely to be highly relevant to its policy recommendations, that ‘Comparisons between observed data and model predictions suggest that the climate system may be responding more quickly than climate models indicate’ (p. 21). Your analysis (albeit based on a relatively short period) reaches the opposite conclusion – at least as far as observations of global mean surface temperature are concerned.

    While I can understand why David Stockwell has said in effect that it’s not worth spending much time on the Rahmstorf paper, I hope that you are able to find the time to set down your reasons for believing that these scientists made a hash of it. If you are right, the paper will not survive scrutiny from (for example) leading mathematical statisticians at the ANU (my posting to Niche Modelling refers). Ross Garnaut is an able and well-regarded economist who will not knowingly rely on an unsound analysis.

  3. No, Raven, the models used for the AR4 predictions were not frozen in 2006. In most cases the projection period begins in 1999 (HadCRU), 2000 (NCAR) or 2001 (NOAA and some others).

  4. Ian and Raven–
    I’m hoping that the next time the IPCC document makes projections, they suggest a start date for validation.

  5. Lucia

    Have you come across Fig TS.26 in “IPCC Working Group 1, Technical Summary, Final Figures, 2007”? I don’t know how to post this figure and its explanation; however, it is part of a PPT (page 27) and can be downloaded through the following Google search.

    http://www.google.com.au/search?sourceid=navclient&hl=en-GB&ie=UTF-8&rlz=1T4DAAU_en-GBAU242AU242&q=Figure+TS26

    It backdates each of the FAR, SAR & TAR projections to 1990, and they all end in 2005. “Temperature anomalies are shown as annual (black dots) & decadal average values (black line)”, which shows, like Rahmstorf et al, values near the top of the TAR projection. It also shows model scenarios B1, A1B & A2 sloping up from 2000 at approx 0.2C per decade.

  6. Lucia I posted this yesterday but it didn’t go through for some reason.

    David Stockwell in his blog Niche Modelling (see comment 1344 above) has done some nice detective work in finding the provenance of the smoothed temperature trend in Rahmstorf et al.

    http://landshape.org/enm/

    He traces the smoothing technique to a paper by Michael Mann, “On smoothing potentially non-stationary climate series”, GRL, 2004. Mann advocates the use of a “minimum roughness” constraint for the end of a time series.

    http://holocene.meteo.psu.edu/shared/articles/MannGRL04.pdf

    Steve McIntyre has some things to say about this technique, in another context.
    “As I noted in the earlier post, Mann’s “minimum roughness” constraint, when translated from inflated Mannian language, boils down to a reflection of the series both horizontally and vertically around the final value”.
    “When I wrote a little routine to implement Mannomatic smoothing, I noticed something really funny. I know that it seems bizarre that there can be humor in smoothing algorithms, but hey, this is the Team. Think about what happens with the Mannomatic smooth: you reflect the series around the final value both horizontally and vertically. Accordingly with a symmetric filter (as these things tend to be), everything cancels out except the final value. The Mannomatic pins the series on the end-point exactly the same as Emanuel’s “incorrect” smoothing”.

    http://www.climateaudit.org/?p=1681
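
    McIntyre’s cancellation claim is easy to check numerically. A minimal sketch: reflect an arbitrary series about its final point, both horizontally and vertically, then apply a symmetric filter whose weights sum to one. The boxcar weights below are only a stand-in for whatever symmetric filter is actually used; the pinning happens for any such weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.cumsum(rng.normal(0.02, 0.1, 50))   # arbitrary noisy trending series

    m = 5                                      # half-width of the symmetric filter
    w = np.ones(2 * m + 1) / (2 * m + 1)       # boxcar weights, sum to one

    # "Minimum roughness" padding: reflect about the final point, horizontally
    # and vertically: x_ext[n + k] = 2*x[n] - x[n - k] for k = 1..m.
    pad = 2 * x[-1] - x[-2:-m - 2:-1]
    x_ext = np.concatenate([x, pad])

    # The last "valid" window is centered on the original final point.
    smoothed_end = np.convolve(x_ext, w, mode="valid")[-1]
    print(smoothed_end, x[-1])                 # identical: the smooth is pinned
    ```

    Every term except the end value cancels under a symmetric filter, which is exactly the pinning McIntyre describes.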

    Looking at the series in Rahmstorf et al, which ends in 2006, this appears to be the case. I don’t think this solves our problem regarding the relationship between the series trend and the IPCC chart; however, if 2008 should turn out to be quite cool, it would be interesting to see this updated at the end of the year using this technique. Or did they use monthly data?

  7. Agreed that your reconstruction of the process is a reasonable interpretation of the rather opaque prose. But it’s a quite extraordinary situation if it is correct. We have the error due to observation from the stations. Then there is the error due to the first-level models failing to match the observations precisely. Then there’s the error from the second-level models failing to match the first-level ones precisely. Then we are invited to become seriously alarmed at what these second-level models show. If anyone proposed developing an engineering package to be used in construction or naval architecture like this, they would be thought mad. But basically what we are talking about here is something which should be a sort of Prolines or Maxsurf for the planet. Extraordinary.

    [apologies for posting this first by error as 1377 in the other thread – can you delete?]

  8. More evidence that the “start date” can’t be 1990:

    Since the SRES was not approved until 15 March 2000, it was too late for the modelling community to incorporate the final approved scenarios in their models and have the results available in time for this Third Assessment Report. However, draft scenarios were released to climate modellers earlier to facilitate their input to the Third Assessment Report, in accordance with a decision of the IPCC Bureau in 1998. At that time, one marker scenario was chosen from each of four of the scenario groups based directly on the storylines (A1B, A2, B1, and B2). The choice of the markers was based on which of …

    http://www.grida.no/climate/ipcc_tar/vol4/english/099.htm

  9. Steven–
    Yes. Note that the falsification is of the AR4 projections, not TAR projections, as printed on the figure itself.


    This figure, with data up to 2001, is taken from the Fourth Assessment.

    To contradict this, one must:

    * Ignore the clearly indicated citation to AR4 and pretend I am looking at the TAR.
    * Give the TAR a totally ridiculous backdate that includes data available LONG before the simple models were tuned.
    * Use the odd figures in “The Rahmstorf 7”.

  10. good find Lucia. another thing to consider: AR4 starts runs in 2000 and it too predicts .2C/decade for 2000-2011, and slightly greater than .2C/decade for 2011-2031.

  11. Hi Ian,

    why David Stockwell has said in effect that it’s not worth spending much time on the Rahmstorf paper

    What I mean is that the paper is attracting more attention than its speculative claims deserve. Many reservations about their own claims, due to the shortness of the data record considered, are in the paper. (Just about the same length of record as Lucia and I have been looking at, and getting opposite results only one year later, BTW.) But these reservations haven’t been picked up, in Garnaut’s report and elsewhere. It’s not an interesting and challenging paper like Miskolczi.

  12. Steven,

    He is talking about the TAR because it gives him the results he wants. I tried to explain to him that fiddling with the GHG forcings after the fact is just another form of model tuning, but he refused to accept that argument. From his perspective, the grey bars start in 1990; therefore comparisons to real data should start at that time too.

  13. Steve and Raven,
    I have no idea why Tamino thinks I was discussing the TAR. Possibly, I was not clear? Anyway, I was discussing the AR4. I just posted a new post, and actually pinged him.

    I will be learning the correction for autocorrelation, as on that point Tamino has merit. (In fact, it’s the first rational thing one of the critics has said. But… you’ll see the first to point out that issue was… well… me! 🙂 Anyway, my theory on this is: trust but verify.)

    However, he still isn’t averaging the four instruments together. We know the β error is higher if you insist on treating each instrument individually and not merging them.
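
    For anyone curious what the autocorrelation correction amounts to, a common first-order approach is to shrink the effective sample size by the lag-1 autocorrelation of the trend residuals before computing the trend’s standard error. A minimal sketch with invented data (the adjustment is a standard one; nothing here comes from a real temperature series):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(96) / 120.0                    # 8 years of months, in decades
    y = 0.15 * t + rng.normal(0, 0.1, len(t))    # invented anomaly series

    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)

    # Lag-1 autocorrelation of the residuals.
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

    # Positively correlated residuals mean fewer effectively independent points.
    n = len(t)
    n_eff = n * (1 - r1) / (1 + r1)

    sigma2 = resid @ resid / (n_eff - 2)         # residual variance, reduced dof
    se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
    print(f"trend = {slope:.2f} +/- {2 * se:.2f} C/decade (n_eff = {n_eff:.0f})")
    ```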

    The whole “models aren’t tuned”, “start verification in 1990” line is nuts. Given the lengthy discussions of tuning, the dates when the SRES were even published, and the frenetic work to improve AOGCMs (including comparisons to data) since the 90s, anyone who thinks they can convince a single doubter with that is simply deluded.

    Sure, maybe the “Rahmstorf 7” can say that in a paper and get away with it. Sometimes peer reviewers are picky; sometimes they let the wildest things through. If a reviewer is a friend of even one of the authors, the chances of them being strict, particularly on a claim like that, end up low. That’s reality.

  14. David, I agree that Rahmstorf et al (2007) doesn’t deserve the attention that others have given it. But this attention reflects the fact that it comes from the Nobel Prize winning team. Somerville was CLA of Chapter 1, Keeling a Contributing Author of Chapter 2, Parker a Lead Author of Chapter 3, Cazenave an LA of Chapter 5 and Church an LA of Chapter 6. They come from prestigious institutions: Hadley, GISS, Scripps Institution, Potsdam Institute & CSIRO. They published in a prestigious journal.

    It’s true that they entered reservations about their conclusions, but the same can be said of MBH99. That didn’t stop the IPCC trumpeting 1998 as the warmest year of the millennium, and the reservations of Rahmstorf et al haven’t stopped Garnaut and others relying on that paper to argue that ‘it’s more urgent than we thought.’

    I think it’s important that the Garnaut Review, and the nine Australian Governments to which it will be reporting, should know of the highly questionable procedures followed by Rahmstorf et al in their comparison of temperature projections with observations.

  15. Ian–
    I agree with you. That said, it’s unlikely that a blog post by me will ultimately sway the Australian government no matter what I say. Even though, right now, I seem to be unsettling people, my goals for my blog are more modest.

    I’m trying to learn a bit more about this area. I find that, for me, blogging is a good way to get myself to think critically about data, evidence, etc. I express my points of view, try to support them as best I can, and see where the chips fall.

    If what I write happens to sway people fine; if not, fine.

    Another thing I like about blogging is that a) visitors suggest questions to ask, which I can think about, and b) visitors help me find resources to learn more. So it beats just reading at home and boring my husband with my droning on about something.

    In terms of convincing “everyone”, it’s the next few years of weather that matter. Either

    a) when El Nino arrives, it will really, truly warm. Then the slopes on those curves on my graphs will pop up. If they stay up through the next La Nina, that will truly kill all the alternate theories.

    or

    b) when El Nino arrives, it won’t warm. The slopes will stay flat!

    I’m betting (a) will happen; I just doubt this decade will warm at 2C/century. But I’ve been wrong before, and I’m sure I’ll be wrong again. It’s hardly a new experience for me!

  16. Lucia, I am tending to option (b). To use an analogy, the Arctic is like a pressure relief valve on a pressure cooker, and it seems like it has blown off the excess heat in changing its configuration, and is now icing up again. The equations of Miskolczi showing the greenhouse temperatures as constant with greenhouse gas variations are plausible.

    Ian, this has been a lot of the problem with GW all along: esteemed individuals serving up statistical mutton and calling it lamb. I blog partly as education in numeracy, partly to keep track of my thoughts, and I use the worthwhile feedback to turn them into something more substantial, kind of a gestation process. So perhaps the duelling blogs on Rahmstorf7 could evolve into something more substantial. This is where SteveMc is a master.

  17. David Stockwell says:
    “The equations of Miskolczi showing the greenhouse temperatures as constant with greenhouse gas variations are plausible.”
    Has anyone independently verified his conclusions? Lubos hinted that there might be something wrong with his work.

  18. A correction: Church was an Expert Reviewer of Chapter 5, and Rahmstorf was an LA of Chapter 6. My point still holds.

    Lucia and David, I appreciate your responses to my post, but urge you not to underestimate your potential influence. Ross Garnaut is a highly regarded economist who had a major influence on Australian policy long before climate change became a significant issue. He was Australia’s Ambassador to China in the mid-1980s and is one of the world’s leading experts on the Chinese economy. As I noted in my posting to Niche Modeling, the ANU, with which Ross is affiliated, has a strong Maths/Stats School. On matters within their sphere of expertise, he should (and I believe will) listen to them rather than the IPCC milieu. You can provide the serious and measured criticism which is needed to set in motion this sort of process.

    The Australian Bureau of Statistics is also located in Canberra, and its senior figures, past and present, have or have had leading roles in the International Statistical Institute and its Sections. The Bureau’s recently retired Head, Dennis Trewin, was critical of the IPCC (I think that’s a fair interpretation) at an OECD meeting in Istanbul last July. I urged the involvement of national statistical offices and of the ISI in the IPCC’s Fourth Assessment in correspondence with its Chairman in 2002. I advised Dr Pachauri that Dennis Trewin and Len Cook (then the Heads of the ABS and the UK Office of National Statistics respectively) had told me of their willingness to assist the IPCC in its emissions scenarios work. In its scoping submissions to the IPCC in 2003, the (former) Australian Government urged the involvement of national statistical offices and of the UN Statistical Commission. The IPCC ignored all of these well-intentioned suggestions.

    His paper provides considerable empirical verification. It’s a bit early for independent verification. I am working on one aspect and hope to produce something soon. I didn’t think Lubos looked into it.

    It seems to me it would allow increased forcing due to increased GHGs, but a constant greenhouse effect and therefore no warming over most of the globe. However, the climate system needs to equilibrate somehow, and the paper is not specific about how that would happen in practice. For example, the heat gained in the tropics could be blown off in the Arctic; so in a sense CO2 could cause warming, not through an enhanced greenhouse effect, but through albedo changes in the Arctic, since heat can be dumped more easily there through direct surface emission (the atmosphere is more transparent to IR at the poles). His claim that warming could not have been due to CO2 may be overstated, as I don’t think his equations preclude it as a possibility; they just hold the greenhouse effect constant. I wouldn’t take this as gospel, as it’s early days for me in understanding this, so I could be off base.

    Ian, I appreciate your urging. It’s going to be more effective to present a specific example where Garnaut has already relied on flawed or speculative research, rather than talk in generalities about statistical due diligence.

    Thanks, David. Garnaut HAS already relied on flawed/speculative research, specifically Rahmstorf et al (2007), which is cited five times on p. 21 of his Interim Report to the Commonwealth, State and Territory Governments of Australia. Please study these references. The Review has sought comments on its Interim Report by 11 April. I’m trying to ginger up the statistics community to question Garnaut’s use of a paper which (for example) applies a ‘minimum roughness criterion’ from Mann (2004) to pad the series at the end – and the other examples cited in your analysis.

    David, I realise on re-reading your posting that using the Rahmstorf example is perhaps what you meant?

    This is one place where they quote a number, and on my figure here you can see the trend ranges between 0.3 and 0.37C per decade for HadCRU. It is unlikely both temperature series have exactly the same value, too. The problem is that the methodology is hardly described at all in the paper, so where do you start auditing it? First you would need to try to replicate it, and to get clarification from the authors. I haven’t found a source of SSA code either.
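
    For what it’s worth, a bare-bones SSA trend extraction is only a few lines of linear algebra: embed the series in a trajectory matrix, take the SVD, reconstruct from the leading component, and diagonally average back into a series. This is generic SSA, not necessarily the variant Rahmstorf et al used:

    ```python
    import numpy as np

    def ssa_trend(x, window, n_components=1):
        """Bare-bones singular spectrum analysis: embed, decompose, reconstruct."""
        n = len(x)
        k = n - window + 1
        # Trajectory (Hankel) matrix: lagged windows of the series as columns.
        X = np.column_stack([x[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Keep only the leading component(s): the smooth, trend-like part.
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components, :]
        # Diagonal averaging (Hankelization) turns the matrix back into a series.
        rec = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                rec[i + j] += Xr[i, j]
                counts[i + j] += 1
        return rec / counts

    # Usage: extract a smooth "trend" from an arbitrary noisy series.
    x = np.cumsum(np.random.default_rng(4).normal(0.02, 0.1, 120))
    trend = ssa_trend(x, window=11)
    ```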

    Thanks David. I’ve just noticed yet another odd feature of Rahmstorf7. Although the seven ‘esteemed individuals’ state without qualification that the GISS and HadCRUT measures both show that GMST rose 0.33 C for ‘the 16 years since 1990’, the paper was submitted on 27 October 2006. Thus the authors could only have had (at most) nine months of data for 2006 when Fig. 1 was produced. Yet the caption to the middle panel says the temperature data go ‘up to 2006’. So what years WERE included in the trend calculation? Does ‘the 16 years since 1990’ in fact mean the 16 years from 1990 to 2005 inclusive?

  25. This is just another example of the failure of the review process at ‘Science’. On an issue that they must have known was of critical importance to the paper, they appear not to have tried to replicate the calculations or to insist that the methodology be disclosed, at least in Supplementary Information.

    On a separate point, it’s worth noting that all of the authors of Rahmstorf7, and of Moore et al (2005), and of Ghil et al (2002) (which Moore et al cited extensively), are affiliated with meteorological/geophysical/environmental research institutions. This does not necessarily mean that they are not highly competent statisticians. But it does seem that the comments in the Wegman report about the lack of interaction between paleoclimatologists and the statistics community may apply with equal force to other areas of climate change research.

  26. Ian,

    …the paper was submitted on 27 October 2006.

    The process for peer review can be slow and aggravating. You submit, wait, get back reviews, make revisions, and if accepted, wait for galley proofs. Then, it finally gets published.

    Some journals are faster; some slower. This is a one-pager, so faster is possible. But my guess is they modified it slightly after peer review. I actually don’t see this as a problem.

    My main reservations about the paper are:
    1) They evaluated against noisy weather data using “the eyeball method” on a chart that includes the error bars for the trend but not for the weather.
    2) They re-baselined by an unspecified temperature shift, explained only vaguely, and include no uncertainty interval for the shift. These should be included even if you use “the eyeball method” to make later evaluations.
    3) They don’t show the fit in the times before 1990, which would let readers see how poor the hindcasts were further back.
    4) They make a very flimsy case for testing against data since 1990 to verify model projections. (Repeating the claim doesn’t improve it.)

    It’s a paper that makes a claim and supports it poorly. The only reason anyone would take this paper as “proving” anything is confirmation bias.

    In my opinion, this paper getting published in Science just stokes the furnaces of denialism. People who believed in AGW didn’t need this paper to convince them. The temperatures are up. The problems with the paper are obvious to most people with undergraduate educations. It got published anyway.

    Given the characteristics of most denialists’ arguments, this flimsy paper only confirms their opinion that no matter how poor a paper is, if it makes alarming warming claims it will be published.

    (Also, notice that these claims that the IPCC underpredicted warming are not echoed in the AR4. So, the Rahmstorf 7 may have convinced some people, but this is not the consensus view.)

    I agree that I shouldn’t have criticised Rahmstorf et al for incorporating data for 2006 in their analysis.

    Your statement that the problems with the paper are obvious to most people with undergraduate educations is revealing, given the work’s authorship and the amount of attention that it has received.

  28. Ian–
    Well, of course, that’s just my opinion. But I don’t see that paper as really showing much, and most of the problems have little to do with statistical details.

    After all: with respect to the “start comparison in 1990” issue, does one need to understand lots of statistics to know that having 20-20 hindsight is not the same as having 20-20 foresight?

    Many of the problems I described are of this nature.

  29. In a guest post on the website of Australian economist John Quiggin in July 2006 (‘A Critique of Wood on Global Warming’) Dr Roger Jones of CSIRO, a Coordinating Lead Author of AR4, argued against the involvement of statisticians in climate change research:

    ‘That [the Wegman Committee] want to involve statisticians in ongoing work is interesting. What level of education in statistics does one need to have? Skill in statistics does not mean a better understanding of science or even uncertainty… The idea of using statisticians without training and a publication record in the relevant science, or as an integral part of a larger team should not be given air…’

    I suspect that, contrary to Lucia’s opinion, the comparison of observations with projections in Rahmstorf7 DOES represent the consensus view of the mainstream science community.
