The IPCC Gives Their Spin!

Ok… I’m awake now. It’s time to read the SOD. I don’t anticipate it being either horrifically outrageous or inspirational. The science won’t have changed much in 5 years. I can’t help but wonder: we had all sorts of debates about “copyright” and the non-disclosure agreements with the IPCC. Is the IPCC going to file any copyright claims? What jurisdiction would apply? Did they register the things?

Someone will inevitably observe that discussing the politics is not sticking to the science. Well..Duh. No. But I think the most interesting issues in this incident will revolve around people airing opinions about the internal and external politics of the IPCC process. Certainly, that’s the most interesting aspect we can discuss in the first few days. Discussing any new science in the report will take a bit longer.

In any case, it seems to me the IPCC has jumped into the political discussion. They’ve posted their statement. Of course– as is their right– it’s heavy with their spin. I’m going to fisk it: my comments are in blue. (Then, I’m going to post who won the UAH bets. And then I’m going to read the various chapters. Then… maybe I’ll comment on some ‘science’– but later.)

Here’s my fisk of the statement:

========

GENEVA, 14 December – The Second Order Draft of the Working Group I contribution to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report Climate Change 2013: The Physical Science Basis (WGI AR5) has been made available online. This release was, of course, anticipated by anyone with more than two operating brain cells. The IPCC regrets this unauthorized posting which interferes with the process of assessment and review. Does it actually interfere? We will continue not to comment on the contents of draft reports, as they are works in progress. Meanwhile, now that the drafts have entered the realm of true transparency, those outside the IPCC will comment.

The Expert and Government Review of the WGI AR5 was held for an 8-week period ending on 30 November 2012. A total of 31,422 comments was submitted by 800 experts and 26 governments on the Second Order Draft of the Chapters and the First Order Draft of the Summary for Policymakers and Technical Summary. The author teams together with the Review Editors are now considering these comments and will meet at the Working Group I Fourth Lead Author Meeting on 13-19 January 2013 in Hobart, Tasmania, to respond to all the comments received during the Expert and Government Review. IOTW: The IPCC has a complicated, time-consuming method of creating a document by committee.

The IPCC is committed to an open and transparent process that delivers a robust assessment. This is why the IPCC maintains opacity during the long preparation period. 🙂 That is why IPCC reports go through multiple rounds of review and the Working Groups encourage reviews from as broad a range of experts as possible, based on a self-declaration of expertise. All comments submitted in the review period are considered by the authors in preparing the next draft and a response is made to every comment. This is not to say that the authors necessarily respectfully consider every comment, nor that their ‘responses’ will necessarily be to the point. After a report is finalized, all drafts submitted for formal review, the review comments, and the responses by authors to the comments are made available on the IPCC and Working Group websites along with the final report. (Along with? As in– at the exact same date, hour, second? Isn’t this a cross our hearts – hope to die sort of promise?) These procedures were decided by the IPCC’s member governments. Who– even the IPCC must admit– collectively have little to no jurisdiction over individual citizens who have every right to think the IPCC decisions are idiotic and act according to the dictates of their own consciences and opinions.

The unauthorized and premature posting of the drafts of the WGI AR5, which are works in progress, may lead to confusion because the text will necessarily change in some respects once all the review comments have been addressed. Does the IPCC operate under the delusion that keeping everything opaque during the writing process would result in zero confusion? Like it did for the AR4. </sarc> It should also be noted that the cut-off date for peer-reviewed published literature to be included and assessed in the final draft lies in the future (15 March 2013). As some outside the inner-inner-inner circle have complained, this gives the authors the opportunity to cite papers in the AR5 before bloggers are given the few days required to discover and report any obvious technical errors. The text that has been posted is thus not the final report. But the IPCC is too grumpy to thank Alec Rawls for truthfully informing readers it was a draft when he leaked it. 🙂

Anyway, we all know it’s not the final report. I don’t see anything bad in people observing what the draft says and then seeing how things changed in the final report. I don’t see anything wrong with people seeing this in real time.

This is why the IPCC drafts are not made public before the final document is approved. These drafts were provided in confidence to reviewers and are not for distribution. It is in the opinion of the IPCC regrettable that one out of many hundreds of reviewers broke the terms of the review and posted the drafts of the WGI AR5. Each page of the draft makes it clear that drafts are not to be cited, quoted or distributed and we would ask for this to continue to be respected. The IPCC could also ask pigs to fly. Ain’t gonna happen.

========

Now.. as promised… the UAH bets! Meanwhile, feel free to discuss political/copyright/etc. issues on this thread. I’m sure they are going to get discussed. 🙂

91 thoughts on “The IPCC Gives Their Spin!”

  1. lucia-

    The IPCC could also ask pigs to fly. Ain’t gonna happen.

    I don’t know…there is a lot of pork in the air.

  2. “The science won’t have changed much in 5 years.”

    No, it hasn’t changed much, but the IPCC report has. Read the tweets from Roger Pielke.
    The chapter on observations in AR4 (chapter 3) was written by notorious spin-meisters Phil Jones and Kevin Trenberth.

    The equivalent chapter (2) in AR5 seems to have been written by some genuine scientists, and is much less alarmist. Read the Summary of Ch 2, especially the stuff on floods, droughts and storms:

    “The most recent and most comprehensive analyses of river runoff which include newly assembled observational records do not support the AR4 conclusion that global runoff increased during the 20th Century.”

    “New results indicate that the AR4 conclusions regarding global increasing trends in hydrological droughts since the 1970s are no longer supported”

    “Recent re-assessments of tropical cyclone data do not support the AR4 conclusions of an increase in the most intense tropical cyclones or an upward trend in the potential destructiveness of all storms since the 1970s.”

    One reviewer of the SoD wrote of chapter 2: “Despite the few blunt comments below, I would like to say that on the whole I think this chapter is good.”

  3. As I suggested on the other thread, the whole secrecy requirement seems to me designed to protect the prerogatives of a handful of scientists in each group who actually decide what will go into the report.

  4. the whole secrecy requirement seems to me designed to protect the prerogatives of a handful of scientists in each group who actually decide what will go into the report.

    That’s precisely how documents written by a small band of authors who ‘read and respond’ to scads of comments are written.

  5. That’s precisely how documents written by a small band of authors who ‘read and respond’ to scads of comments are written.

    Good point. Anyone who has worked on major EIS’s can empathize. Consistency can be a problem for the truest of hearts.

  6. Lucia,
    You noted that the late “close date” for incorporating papers that have not yet been published removes the possibility of any critique of those papers prior to the final version of the AR5. I actually think it is much worse than that. The process is so long and drawn out that there is plenty of time for influential authors (the top dogs in each working group) to prepare, and get accepted for publication, papers which ‘refute’ papers that are being considered for inclusion. That is nothing short of corrosive to the process.

  7. SteveF

    there is plenty of time for influential authors (the top dogs in each working group) to prepare and get accepted for publication papers which ‘refute’ papers that are being considered for inclusion.

    Yep. And late papers that are not by influential authors can simply be “overlooked”. The process was designed by a certain group of people who want a certain measure of control. (These may not entirely overlap the authors list.) Features of the process likely bleed over and affect what is accepted or rejected in journals– and may damage the objectivity of climate science itself.

    Overall, I think it’s much better that the drafts be made public as soon as they are “the official draft”. That’s essentially what has happened. It’s not what the IPCC wants and they are unhappy to not get what they want. I would never have promised secrecy and then leaked it myself. But, overall, I think it is better the draft is available rather than otherwise.

    (As I said: I’m curious to see if the IPCC tries to do anything other than lament the fact of its leaking. I suspect they won’t, because they have little power to do anything about this. But we’ll see…)

  8. Lucia,
    “Features of the process likely bleed over and affect what is accepted or rejected in journals– and may damage the objectivity of climate science itself. ”
    I am utterly shocked (Shocked!) that could happen. 😉
    But seriously, contamination/distortion of the field by ‘political considerations’ has been a real problem IMO. Ultimately reality will impose itself on the field, of course. But what happens between now and then matters.

  9. The IPCC should abandon its anti-scientific, anti-public secrecy and uphold its stated policy of total transparency as strongly affirmed by the IAC. See IPCC’s Counterproductive Secrecy, which I posted at WUWT. Steve McIntyre exposed how the IPCC’s present secrecy was imposed by dishonest means. It enforces the power of the IPCC’s alarmist authors.

  10. SteveF, don’t forget that PNAS counts as a peer-reviewed journal even if a particular article is published without independent review (members may handle the peer review process for up to 4 of their own papers per year).

  11. This wording at the guardian made me chuckle

    little-known US-based climate sceptic called Alec Rawls, who had been accepted by the IPCC to be one of the report’s 800 expert reviewers, admitted to leaking the document.
    “admitted”? Doesn’t that make it sound as if the leak was initially anonymous and that he “admitted” it when questioned? I’d suggest a more appropriate reading might be that he “took credit for” the leak.

    See http://www.guardian.co.uk/environment/2012/dec/14/ipcc-climate-change-report-leaked-online

  12. lucia (Comment #107436)

    I find these little spin words and phrases in most newspaper articles I read these days. These words inform more about the writer than about what or who is being written about.

    Lucia, thanks for this post as it answers a few lingering questions I had about the IPCC process and where it was in the schedule of events. I guess even though nothing more can be added in the way of comments, peer-reviewed papers (or a reasonable facsimile) can be. I continue to have a difficult time surmising why the IPCC wants to keep all these proceedings secret, even when I consider the IPCC to be nothing more than a marketing agency for immediate and intensive government mitigation of AGW.

    My general feeling was that they might think that allowing outside comments on the material being reviewed under the auspices of the IPCC, even as indirectly as that would occur, would somehow keep the book open to more and more input from diverse sources, and that for expediency they need the lead authors to feel free to cut off the discussion even when it comes from outside the process. Surely, based on what we have seen from most lead authors, being influenced by outside sources would appear highly doubtful. I therefore can only opine that the IPCC, as any good marketing agency in a similar situation might, feels that bringing out a campaign unimpeded by advance revelation of content will have a greater immediate impact and garner those quotes and spin from the MSM that they feel will help sell the product.

  13. Kenneth – I think that Alec stated that the review period closed on 30 November. What happens between 30 November and the next release of stuff is down to the magi. It seems that they do not want hoi polloi to see what the magi do to the stuff as it existed at 30 November. I am afraid that you are becoming increasingly Henry James in your comments and I lack the patience to try to parse your increasingly tortuous syntax. If I have just said what you said, then I apologise. If I have missed an important point, please feel free to enlighten me.

  14. lucia:

    This is not to say that the authors necessarily respectfully consider every comment, nor that their ‘responses’ will necessarily be to the point.

    Or that the responses will make any sort of sense. For example, I recently discussed a case where the author changed the text of the report in response to a reviewer comment in a way that had no bearing on what the reviewer said. The author apparently misunderstood the reviewer comment to the extent he missed the main point while adopting an offhand remark as the “correction.”

    I found it especially amusing because the reviewer comment was bogus. He basically said a part of the chapter was wrong in a nonsensical way and that the authors should cite his work to show so.

    By the way, responses can be things like, “Rejected. Statements are adequate” or, “Rejected – science is correct.” It’s hard to imagine how the IPCC system could work well.

  15. I just checked some of the reviewer comments for IPCC AR4. I actually found responses that were nothing more than, “Rejected.”

    Great review system they’ve got there.

  16. From the bastion of bureaucratic double-speak:

    It should also be noted that the cut-off date for peer-reviewed published literature to be included and assessed in the final draft lies in the future (15 March 2013).

    But I suppose the writers of this IPCC communiqué would prefer that it not be noted that in order for such “literature” to be considered for inclusion in WGI, according to the IPCC’s schedule, it must have been “submitted” by July 31, 2012.

    And, if I’m not mistaken, I believe that there’s something in their rules ‘n guidelines which indicates that such “submitted” [presumably to a journal which requires peer-review prior to acceptance] papers must be made available – albeit perhaps not to the reviewers who might want to verify the claims contained therein.

    But there’s a built-in loophole here that one could drive a carbon-belching Mack truck through – or even a Gergis, Karoly et al paper (and/or facsimile thereof).

    First of all, there are no such scheduled requirements (as far as I have been able to ascertain) for non-peer-reviewed “literature” [except that publication via a blog-post is verboten and unacceptable].

    This is not a new loophole by any stretch of the imagination. As Donna Laframboise has documented in The Delinquent Teenager … the non-peer reviewed Stern Review was cited no less than 26 times in 12 chapters of AR4, notwithstanding the fact that not a single reviewer of AR4 was given an opportunity to review Stern’s claims (because no references thereto were even mentioned in the drafts presented for expert review).

    And as I had noted a few years ago, the IPCC is never loath to go back to the future in order to prop up its claims and deletions.

    And the second unmentioned point on the IPCC’s part is that the IPCC seems to have absolutely no rules – or deadlines – when it comes to using non-peer-reviewed “literature”, particularly that produced by its favoured green NGOs.

    So, the view from here, so to speak, is that the IPCC seems far more interested in controlling the message than in performing a public service that is “open, transparent, and objective”.

  17. Brandon Shollenberger (Comment #107453)

    I just checked some of the reviewer comments for IPCC AR4. I actually found responses that were nothing more than, “Rejected.”

    Great review system they’ve got there.

    Actually, it may be, well, worse than you might think!

    If you take a look at AccessIPCC’s Summary of Reviewer Comments – Second Order Draft, you will see that for WGI only 31% of the 11,381 comments could be unambiguously described as “Accepted”.

  18. hro001, I don’t see that as particularly problematic. I’ve seen the quality of the reviewer comments. 31% is more than I’d expect.

    Then again, nothing says “agreed” means the reviewer views were actually incorporated.

  19. hro001 (Comment #107472)

    If you take a look at AccessIPCC’s Summary of Reviewer Comments – Second Order Draft, you will see that for WGI only 31% of the 11,381 comments could be unambiguously described as “Accepted”.
    >>>>>>>>>>>>>>>>>
    So much for “97% of scientists agree”…

  20. Diogenes (Comment #107443)

    “If i have just said what you said, then i apologise. if i have missed an important point, please feel free to enlighten me.”

    What was leaked would have been made public by the IPCC, but only at a later date, and for all to see what was changed from that point forward. My point is that the leaking detracts from springing the entire results of the IPCC on the media and the general public, and from the positive marketing impact that can have.

  21. Brandon Shollenberger

    I don’t see that as particularly problematic. I’ve seen the quality of the reviewer comments. 31% is more than I’d expect.

    Then again, nothing says “agreed” means the reviewer views were actually incorporated.

    I don’t disagree, but I believe it should also be noted that “agreed” was not one of the IPCC/TSU authorized “responses” to the reviewer comments to which the IPCC/TSU designated hitter was purportedly “responding”.

  22. As I remember, the delegates at the Plenary Session must unanimously agree upon the precise language to be used in the Summaries for Policymakers. Then the SOD can be revised to be brought into agreement with the Summaries for Policymakers. The simplest way to do this is to delete or de-emphasize passages that “confuse” the message the IPCC wants to send. The Lead Authors also have an opportunity to insert material not in the FOD or SOD, because they didn’t want to give opponents an opportunity to officially comment on that material. This premature release of the SOD will interfere with the ability of powerful people to make AR5 say what they want it to.

    The ability to insert new material and submitted papers at the last minute makes peer review of IPCC reports a joke. Since no one knows whether “peer review” or “pal review” has taken place, effective “peer review” should include the opportunity for the whole scientific community to reply to or rebut a reference and have the IPCC consider rebuttals submitted to journals.

  23. Frank (Comment #107519)

    “Then the SOD can be revised to be brought into agreement with the Summaries for Policymakers. The simplest way to do this is to delete or de-emphasize passages that “confuse” the message the IPCC wants to send.”

    If there is nothing to prevent the IPCC from making these last minute and post-comment changes, I would think the changes would be readily determined – if the IPCC indeed published all the comments and the state of the review process up to the point before the changes were made, even if that is done after the fact. Maybe I am off track here in making these assumptions and actually the IPCC only promised to publish the comments after the fact. In that case what you say, Frank, could present a reason for the IPCC not wanting the transparency that leaking/publishing the state of the review at this point in time provides to the process.

    If my view and expectations of the IPCC are correct, I would suspect that these changes to bring the review in better agreement with the end result desired by the IPCC will be made anyway, but more creatively than otherwise would be the case. I am currently wondering what impact withholding the leaked material until after all post-comment changes were made by the IPCC authors would have had on the interested public’s view of the process.

    Was the intent of the IPCC merely to eventually, and after the fact, publish the comments and not the total review content at this point in time? Was it their intent to forever ask all participants and those making comments never to reveal the review contents at this point in the process? If so, were they being naive in expecting no one to leak these contents? Actually only one person did, but we will never know, I suppose, if other participants might have leaked the information at some point in time.

  24. Kenneth, Frank,
    I think comparison of the final version with the second order draft will be instructive of how the process works. I am especially interested to see if the much higher estimate of present day net GHG forcing, compared to AR4 (due to much lower aerosol offsets and lower aerosol uncertainties), remains in the final document. I will be very surprised if those things are not changed; my expectation is that the ‘best estimate’ for aerosol effects will be increased significantly, the ‘uncertainty range’ for aerosol effects broadened significantly to include lower net forcing, or perhaps both. The net forcing range in the second order draft appears to make the plausible sensitivity range cover from <1.5C per doubling up to a bit over 3C per doubling, with the most likely value a bit over 2C per doubling. I don't think this will be acceptable to certain influential climate scientists, nor to many involved in modeling.
    .
    If the reviewer comments and responses are in fact published, that, combined with the final document and the second order draft, may help clarify if reviewer comments actually have any influence on the process or not.

  25. Bruce,
    The sensitivity is not to CO2, it is to radiative forcing. Whatever that sensitivity is, it is for certain not zero, and for certain could not change significantly “over the last 16 years”. What changes over short periods is the average temperature, not the sensitivity.
    .
    If you are of the “Dragonslayer/back-radiation-is-impossible” persuasion, as I fear you may be, then please do not waste my time and yours: completely ignore this comment, pretend I did not respond at all, and consider that it would be best for everyone if you made comments where the readership is uninformed and/or loony.

  26. Re: Bruce (Dec 17 13:36),

    Not if you factor out the 65 year oscillation, which kinda jumps out at you.

    I regressed GISTEMP annual from 1880-2011 on the annual AMO index and then regressed the residuals of that fit from 1958-2008 on MLO CO2 ppmv. Considering the quick and dirty nature of the analysis with no lag for CO2, the fit is pretty good. There’s no significant difference if you use ln(MLO) rather than linear. The concentration range is too small.

    http://i165.photobucket.com/albums/u43/gplracerx/GISTEMPtoAMOandMLO.png
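
    For anyone who wants to reproduce this, here’s a minimal sketch of the two-stage regression in Python (numpy only; loading the GISTEMP, AMO and Mauna Loa series is left out, and the variable names here are hypothetical):

        import numpy as np

        def two_stage_fit(years, gistemp, amo, mlo_years, mlo_co2):
            # Stage 1: OLS of annual GISTEMP anomaly on the annual AMO index.
            b_amo, a_amo = np.polyfit(amo, gistemp, 1)
            resid = gistemp - (b_amo * amo + a_amo)

            # Stage 2: OLS of the stage-1 residuals on MLO CO2 over the
            # years where CO2 data exist. No lag for CO2, per the
            # "quick and dirty" caveat above.
            mask = np.isin(years, mlo_years)
            b_co2, a_co2 = np.polyfit(mlo_co2, resid[mask], 1)

            # Over ~315-390 ppmv, ln(C) is nearly linear in C, so using
            # np.log(mlo_co2) instead changes little, as noted above.
            return b_amo, b_co2, resid

        # Usage, with annual series aligned by year:
        # b_amo, b_co2, resid = two_stage_fit(np.arange(1880, 2012), gistemp,
        #                                     amo, np.arange(1958, 2009), mlo_co2)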

  27. DeWitt,
    The same thing done on GHG forcing in total (fluorocarbons, N2O, methane, estimated TSI variation, etc) shows much better correlation with the temperature record… once adjusted for short term variation due to ENSO…. I would call it almost perfect.

  28. Re: SteveF (Dec 17 16:17),

    So tell me again why we need aerosols for anything other than volcanic eruptions? Oh, yes so we can hindcast without those nasty oscillations like ENSO and AMO. Can you say: right answer for the wrong reason? And a predictive value near zero.

  29. Hi guys,

    Sorry, long time no see, and off-topic, too, but–a colleague down the hallway just came in to show me a paper in Science by Mann et al. (2009) with a hockey stick curve, and I told him not to place too much trust in it, and he asked why. He was genuinely surprised to find out that there was anything even slightly controversial about Mann or the hockey stick curve.

    On the spot, I told him that that is just one of many possible reconstructions, and that in fact a very recent one using many of the same proxies does not look very much like a hockey stick at all (http://hol.sagepub.com/content/early/2012/10/26/0959683612460791.abstract)
    but it would be nice to have a more up-to-date, comprehensive summary of the reconstruction question, preferably in the peer-reviewed literature. Can you help me?

  30. Hi julio,
    You could point him at: http://arxiv.org/PS_cache/arxiv/pdf/1104/1104.4002v1.pdf and associated references as a reasonable start. A key issue is the high chance of loss of variance. You can also point out that since the original ‘hockey stick’ paper (1998, I think) methodologies have improved/evolved and other reconstructions (including some by Mann) have ‘restored’ the existence of the medieval warm period.
    .
    Were it me, I would point out what really set people off was how the original Mann hockey stick paper was gleefully used as ‘proof’ by the IPCC that the medieval warm period never happened… in spite of a century of published work, including ice core data and analysis of written records from the period. The clear objective was to be able to say that “never in history” had it been nearly as warm as 1998. It was the politicization of science which was the problem, not that the original hockey stick used seriously flawed methods (which it did).

  31. julio (Comment #107533)

    “Sorry, long time no see, and off-topic, too, but–a colleague down the hallway just came in to show me a paper in Science by Mann et al. (2009) with a hockey stick curve, and I told him not to place too much trust in it, and he asked why. He was genuinely surprised to find out that there was anything even slightly controversial about Mann or the hockey stick curve.”

    Julio, that exchange should probably have answered your own questions about your colleague. He may well be a “truster” of science experts.

    Thanks for the link to the Melvin paper. I have corresponded with him concerning a TRW chronology method that he developed in recent years and for which he had not yet published the code. The method described in the abstract would appear to be different. I might well spend the $25 for the paper as my interest is piqued in many ways.

    I could then send it to your colleague with the hopes of not conflicting him – if indeed he is a “truster”.

  32. Hi Julio. RomanM’s recent article at CA very clearly elucidates the issue of bias arising from correlation screening. Not peer reviewed, but who better than Roman to capture what has been ‘blog reviewed’ for many years. Lucia and JeffId have also posted articles years ago. This article by Bo Christiansen at Die Klimazwiebel discusses his 2010 publication on variance loss in reconstructions.

    JeffId has written several posts, but the three articles in his series (here, here, and here) working through the Mann 07 experiments using pseudo-proxy data are real gems.

  33. Layman Lurker (Comment #107536)

    The post at CA by Roman that you cite has a discussion and comments by Nick Stokes. It illustrates my point that attempting to explain the fallacy of pre-selection of proxies for reconstructions, or other critical issues about where famous scientists can go wrong, can be a big waste of time if the person you are addressing is not willing to delve into these issues on their own or is a “truster” of science experts. I would attempt to determine the addressee’s position first.

    I personally think that we waste a lot of space attempting to explain these issues at these blogs to people simply not open to the details and finer points of the matter. Most are simply in defense mode for the status quo.

  34. Thanks, everybody! The McShane and Wyner paper could be quite useful, since it actually provides a summary of the controversy.

    My colleague is just a bit trusting when it comes to stuff published in “Science” magazine 🙂

  35. SteveF:

    Bruce,
    The sensitivity is not to CO, it is to radiative forcing.

    Is “radiative forcing” really an absolute standard? I know various effects are converted to “radiative forcing,” but I find it hard to believe the results of the conversions are all exactly equivalent. Just how accurate is the “radiative forcing” scale?

    DeWitt,
    The same thing done on GHG forcing in total (fluorocarbons, N2O, methane, estimated TSI variation, etc) shows much better correlation with the temperature record… once adjusted for short term variation due to ENSO…. I would call it almost perfect.

    I don’t think you meant to include the part I made bold. TSI isn’t a GHG forcing.

    That said, I’d be interested in seeing what you describe done.

  36. Brandon,
    “Just how accurate is the “radiative forcing” scale?”
    Well, there is some uncertainty of course, although it appears to be relatively modest. For the well mixed gases, the behavior can be pretty well described based on their concentrations, absorbance wavelengths and absorbance intensities. GHG forcing is probably the least uncertain factor.
    .
    WRT estimated TSI variation, you are right, that is not a GHG forcing. I will try to put together a quick graphic which shows how the ENSO and AMO ‘adjusted’ temperatures track forcing… I just need to find a spreadsheet from several years back which has the forcing data.

  37. SteveF, I assume greenhouse gases convert fairly consistently into a single scale. After all, they work in basically the same manner.

    What I’m more hesitant about is assuming that scale can be the same as a scale for volcanic aerosols, TSI and even ENSO effects. I get it works as an approximation, but I have no real clue as to how accurate it is.

    (It is perfectly possible other people have examined the issue and fully addressed my concerns. I’ve just seen too many glaring issues get ignored in climate science so I can’t assume the best.)

  38. julio,
    “My colleague is just a bit trusting when it comes to stuff published in “Science” magazine.”
    I was too, until I read a few articles on subjects I knew pretty well; trust in what is published in ‘Science’ is sometimes misplaced.

  39. It doesn’t help that I’ve seen Hansen say the forcing from a doubling of CO2 is 4.1, not 3.7. I never did track down why he gave a value 10% higher.

  40. Brandon,
    “What I’m more hesitant about is assuming that scale can be the same as a scale for volcanic aerosols, TSI and even ENSO effects.”
    .
    For sure volcanic aerosols are not going to be the same as GHG’s…. and there is MUCH more uncertainty associated with the net forcing from volcanic aerosols. TSI is also different, and clouded by suspicions of amplification by, er, clouds. ENSO seems mostly a process where heat is moved around, and mainly influences temperatures between 30S and 30N, just the opposite of GHG’s. Anyway, I will try to generate a graphic which does not use TSI or volcanic aerosols, and which assumes AMO and ENSO influences can be effectively subtracted from the temperature history.

  41. I went back and read what the IPCC had reported as of Oct 2012 about the review process and what they intended to reveal eventually to the public.

    http://www.ipcc.ch/activities/activities.shtml

    What I find is that only the comments and authors’ replies were to be published after the final report was issued. I would suppose that no one person would have these comments in hand (to leak) because they would only be aware of their own comments and the replies to those comments. The interim Second Order Drafts were never to be made public. These draft reports were to be made available to the participants and governments. Lots of people and organizations evidently had access to these drafts, and yet only one person leaked the information. Would an individual participant have access to all the documents that were leaked, or was the leaking a cooperative effort?

    Also their wording about the rules for using peer reviewed papers and the availability of these papers for use in the IPCC review process is rather vague and broad in meaning.

    In this context the leaking might well be a game changer for the advocacy of the IPCC and assuring it is properly inserted into the reviews.

  42. Brandon Shollenberger (Comment #107547),

    4.1 is roughly the diagnosed net forcing change to 2xCO2 in the CMIP3-era GISS models – see here.

  43. Brandon,
    As promised, here is a graph of historical temperatures versus GHG forcing (sum of CO2, N2O, methane, halocarbons, and tropospheric ozone). http://i47.tinypic.com/jt54sh.png Man made aerosols and volcanic aerosols are not considered. The temperatures came from Hadley, and were regressed against the AMO index and the 4 month lagged Nino 3.4 index. The regression constants were used to “subtract” the AMO and ENSO influence from the temperature history. The individual GHG forcings for each component gas came from historical data on the concentrations of these gases in the atmosphere (including ice cores and direct measurements) and the IPCC functions for forcing as a function of concentration.
    .
    I plotted the temperature history and the forcing history on the same graph, but with the forcing lagged 4 years (to account for the fast part of the ocean response). The correlation is pretty obvious, even if not perfect.
    .
    A couple of comments: I am not claiming that this graphic defines climate sensitivity, only that there is very good correlation between the “adjusted” temperature history and GHG forcing. For certain there is an unknown history of aerosol offsets and a largely unknown historical accumulation of heat by the oceans; these would reduce the net forcing shown on the graph, and lead to higher values of “effective climate sensitivity” than suggested by the graphic.
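
    For readers who want to reproduce the forcing side: the “IPCC functions” referred to here are presumably the simplified expressions of Myhre et al. (1998) adopted in the TAR, whose CO2 term is 5.35·ln(C/C0). A minimal sketch of that term only (the CH4 and N2O terms are analogous square-root expressions with an overlap correction, omitted here for brevity):

        import numpy as np

        def co2_forcing(c_ppm, c0_ppm=278.0):
            # IPCC (TAR) simplified expression: delta-F = 5.35 * ln(C/C0),
            # in W/m^2, relative to a pre-industrial baseline of ~278 ppm.
            return 5.35 * np.log(np.asarray(c_ppm, dtype=float) / c0_ppm)

        print(co2_forcing(392.0))  # ~1.84 W/m^2 at a roughly 2012 concentration
        print(co2_forcing(556.0))  # 5.35*ln(2) = ~3.71 W/m^2 for a doubling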

  44. Re: SteveF (Comment #107552)

    Wow! That *would* be a low climate sensitivity indeed (I’m getting about 0.9 C/doubling from your graph). You are correct in stating that we should not believe that, but it does show how much the “standard” estimates depend on a poorly quantified variable (aerosols).

  45. julio said, ” You are correct in stating that we should not believe that,…”

    Why? We have absolutely no clue how much noise there is, other than that the NH in the past can vary by tens of degrees and the tropics by a couple of degrees. The major “feedback”, ice sheet and snow cover in the NH, is now productive land that I doubt many landowners will just abandon to encroaching glacial mass. With the major feedback taken out of the equations, there is much less upside potential.

  46. re: SteveF (Comment #107552)

    Sweet graph Steve. Don’t you think by removing ENSO and AMO you might also be removing a significant portion of the aerosol offsets thereby (at least partially) accounting for them?

  47. julio,
    I do not believe such a low sensitivity. But in light of the (leaked) second order draft, the plausible current forcing range is not so terribly far below my (very simple) graphic: (Zeke (Comment #107382), “Up front and Open” thread)
    .
    As I suggested earlier, that graphic is nothing short of revolutionary, because it sets the plausible range of “effective climate sensitivity”, which is defined as:
    .
    Lambda = {(Current forcing) – (Heat accumulation) – (Aerosol offsets)}/(delta-T)
    .
    where lambda has units of (watts/M^2)/degree and delta-T is the temperature increase over the pre-industrial temperature. The highest plausible forcing is ~2.95 watts/M^2, and current heat uptake (Levitus et al) is in the range of 0.5 watt/M^2 (including a modest provision for deep ocean uptake, land accumulation of heat, and ice melt). Warming since the pre-industrial period is about 0.85C, so the lowest plausible “effective climate sensitivity” is about (2.95 – 0.5)/ 0.85 = 2.88 (watts/M^2)/degree, or about 1.29C per doubling. The smallest lambda (from the same graphic) corresponds to about 2.6C per doubling… well below the middle of the canonical IPCC range. There are lots of assumptions in there, but the concept is pretty simple.
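    .
    For concreteness, converting lambda to degrees per doubling just divides the canonical doubling forcing (5.35·ln(2) ≈ 3.7 watts/M^2) by lambda. A quick check of the arithmetic above, with aerosol offsets assumed already netted out of the 2.95 figure:

        import math

        F2X = 5.35 * math.log(2)  # ~3.71 W/m^2 per CO2 doubling

        def deg_per_doubling(forcing, heat_uptake, delta_t, aerosol_offset=0.0):
            # lambda in (W/m^2)/degree, per the formula above; aerosol offsets
            # default to zero on the assumption that they are already netted
            # out of the forcing figure used.
            lam = (forcing - heat_uptake - aerosol_offset) / delta_t
            return F2X / lam

        print(deg_per_doubling(2.95, 0.5, 0.85))  # ~1.29 C per doubling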
    .
    Many climate models suggest the effective sensitivity increases in the temperature domain after a considerable time, so the equilibrium sensitivity (several hundreds of years out) could be higher. Still, if that IPCC graphic is to be believed, it points towards 1) lower sensitivity in practical terms, and 2) far lower aerosol offsets than have been previously suggested.
    .
    Layman Lurker:
    I have no idea; I don’t see why accounting for those as pseudo-oscillations would also account for a portion of aerosols.

  48. Re: Layman Lurker (Dec 18 15:30),

    The interesting point is that volcanic eruptions no longer appear to stand out in the adjusted temperature record. Unless you can show that volcanic eruptions somehow trigger La Nina events or have the same signature, the forcing from aerosols must not be very high. You only need high aerosol forcing if you think that there really isn’t an AMO.

  49. DeWitt,
    I’m pretty sure it is El Ninos that tend to follow a volcanic eruption, whether by chance or as a response of the climate system. That messes up the ENSO-temperature regression by making it look like there is a slightly negative global influence of El Nino conditions on temperature some of the time. If you regress lagged Nino 3.4 against temperature but exclude the three years following major eruptions, the correlation is better and the effect larger. I did not bother to do that in the graphic above, but I think if that were done the volcanoes would stand out a bit more, and the trend becomes a bit smoother where there are no significant volcanic effects.

  50. DeWitt Payne (Comment #107558),

    Temperature signals from volcanic eruptions are contained within the AMO data because AMO indices are just detrended SST series. What you are doing by subtracting an AMO index from a global land+ocean dataset is subtracting the N. Atlantic volcanic response from the global land+ocean volcanic response. GCMs are not entirely consistent on this matter but they do tend towards coincidentally producing fairly similar magnitudes of response in both these diagnostics.

    Using the Trenberth & Shea definition (basically N. Atlantic SSTs – Global SSTs) would be slightly better but GCMs tend to “predict” N. Atlantic temperature response to El Chichon and Pinatubo should have been roughly twice as large as the effect on global SSTs so there’s still a problem with saying anything meaningful about volcanic effects.

  51. Re: SteveF (Comment #107557)

    Something between 1.3 and 2.6 C/doubling sounds pretty good to me, indeed (and, as you say, over hundreds of years the final cumulative effect may be larger, to make paleoclimatologists happy–not that I think that’s strictly necessary 🙂 ).

  52. HaroldW,
    I think Matt Ridley overstates the case a bit. Some amplification above the CO2-only level is almost certainly correct; exactly how much remains in doubt.
    .
    The evidence for modest warming is growing, but I would not say the argument is airtight; Isaac Held notes that low sensitivity is apparent in his GCM until well after forcing has stopped increasing (in other words, as has been discussed at this blog many times, climate sensitivity becomes non-linear in the temperature domain in the long term). Whether or not that is true is uncertain, but it may turn out to be irrelevant, since the response is modest for >100 years. By the time the higher sensitivity would become apparent, atmospheric CO2 and overall GHG forcing will have long since been declining.

  53. Re: Paul S (Dec 19 07:46),

    Volcanic eruptions don’t explain the strong peak at ~65 years in the frequency domain plot of the AMO index and GISTEMP. But you do have a point that using the unsmoothed index is probably throwing out the baby with the bathwater as far as quantitating the climate response to volcanic aerosols goes. SteveF made a similar comment about ENSO above.

  54. SteveF (#107575)-
    Thanks. Nic Lewis provides more details here. I haven’t read it yet though.

    Edit: “tl;dr” summary by Nic: “In my view 1.75°C would be a more reasonable central estimate for ECS … perhaps with a ‘likely’ range of around 1.25–2.75°C.”

  55. HaroldW,
    Nic does a good job laying out the argument. He does make one smallish misstatement: the implications of the greatly reduced aerosol effects (since AR4) were not lost on everyone; I (and others) noted the implications immediately when the information became available to us… in my case:

    SteveF (Comment #107385)
    December 13th, 2012 at 6:52 pm

    Zeke,
    Thanks for posting that graphic. I think it shows a very big increase in the best estimate of net forcing compared to AR4, due in part to falling estimates for secondary aerosol effects. Hummm.. If you subtract 0.5 watt per square meter for accumulation, you are left with ~1.8 watts yielding ~0.85C warming (effective sensitivity of <0.5 degree per watt.)

    <0.5 C per watt is <1.86C per doubling. The exact number is 1.75C per doubling. 😉

  56. SteveF
    Myself, I favor sqrt(pi) as the equilibrium climate sensitivity. [units = K/doubling, of course.]

  57. SteveF (Comment #107585),

    I’ve found another mistake, though it may be the IPCC’s rather than Nic Lewis’. A more complete post is here, but essentially the ‘best observational (satellite-based) estimate for AFari+aci’ of -0.73 +/-0.3 is strongly weighted by values which only include the first indirect effect. If you add to these estimates a reasonable value for the direct effect the mean of the best observational estimates would be -1.0 +/- 0.3 W/m^2.

    I’m not sure to what extent the AR5 draft’s best estimate of total aerosol forcing was weighted by the satellite mean figure of 0.7 – it’s plausible they took into account the differences in focus between the studies – but the best estimate from satellite observation-based studies is probably slightly larger than the stated total aerosol forcing in the draft.

  58. PaulS,
    I read the chapter once, and I didn’t get that at all, but I will look again. The narrative seemed to be mostly saying that the modeled secondary effects are grossly overstated compared to satellite measured values. This is consistent with the forcing diagram that Zeke linked to the first day the SOD was available. It is also consistent with what Nic concluded (and he was reviewing all along). Are you sure about your interpretation?

  59. Thanks for translating the “IPCC speak”. Reminds one of “Big Brother” in “1984”.

    Thankfully the IPCC has no real power other than their ability to mesmerize politicians.

  60. SteveF (Comment #107596),

    Yes, if you look at Figure 7.19 there appear to be 7 dots (though I think there may be 8). Four of these dots are around the -1 mark, and the other three/four are around -0.5. Following the references it’s clear the -0.5 ones are all from the Quaas et al. 2006 papers. However, if you read these they clearly state the given estimates only relate to the indirect effect:

    http://www.mpimet.mpg.de/fileadmin/staff/quaasjohannes/qb_grl05.pdf

    http://www.mpimet.mpg.de/fileadmin/staff/quaasjohannes/qbl_acp05.pdf

    To get a total aerosol forcing you would need to include an estimate for the direct effect, for which -0.4 seems a reasonable figure. Doing that makes the mean of the 7/8 estimates -1.0 W/m^2. This is still slightly smaller than the mean and median from the model ensemble considered in AR5, but slightly larger than the stated best estimate of total aerosol forcing.

  61. I’ll go through all the references relating to satellite observation-based estimates listed in Table 7.4, giving each study’s mean estimate and specific scope:

    Bellouin et al., 2012 = -0.9; direct + first indirect effect, no AF.

    Dufresne et al., 2005 = -0.72; given as total aerosol forcing but specifically simply the sum of direct + first indirect so unclear whether it includes AF.

    Lebsock et al., 2008 = -0.42; first indirect effect only, no direct or AF.

    Quaas and Boucher, 2005 = -0.3 and -0.4 (two separate estimates); first indirect effect only, no direct or AF.

    Quaas et al., 2008 = -1.1; direct + first indirect effect, no AF.

    Quaas et al., 2009 = -1.2; appears to be total including AF.

    Storelvmo et al., 2009 = unable to find a stated observation-based estimate, IPCC authors may have inferred a figure from something in the paper.

    Lohmann and Lesins, 2002 = -0.85; given as total aerosol forcing, assume that means including AF.

    Quaas et al., 2006 = -0.3 and -0.5 (two separate estimates); indirect effects only but including cloud lifetime so would relate to AF indirect, no direct.

    Sekiguchi et al., 2003 = -1.3; assume relates to AF.
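
    Taking the list above at face value, a short script shows both where the draft’s −0.73 mean plausibly comes from and what a −0.4 W/m^2 direct-effect add-on to the indirect-only studies does to it (which studies qualify for the add-on is my reading of the scopes above, so treat this as illustrative, not as the IPCC’s actual weighting):

        # (value in W/m^2, indirect-only?) for each estimate itemized above;
        # Storelvmo et al. 2009 is omitted (no stated estimate).
        estimates = [
            (-0.9,  False),  # Bellouin et al. 2012
            (-0.72, False),  # Dufresne et al. 2005
            (-0.42, True),   # Lebsock et al. 2008
            (-0.3,  True),   # Quaas and Boucher 2005
            (-0.4,  True),   # Quaas and Boucher 2005
            (-1.1,  False),  # Quaas et al. 2008
            (-1.2,  False),  # Quaas et al. 2009
            (-0.85, False),  # Lohmann and Lesins 2002
            (-0.3,  True),   # Quaas et al. 2006
            (-0.5,  True),   # Quaas et al. 2006
            (-1.3,  False),  # Sekiguchi et al. 2003
        ]

        DIRECT = -0.4  # assumed direct-effect adjustment, W/m^2

        raw = sum(v for v, _ in estimates) / len(estimates)
        adj = sum(v + (DIRECT if ind else 0.0) for v, ind in estimates) / len(estimates)

        print(round(raw, 2))  # -0.73: the unweighted mean matches the draft's figure
        print(round(adj, 2))  # -0.91: adjusted mean, near the ~-1.0 argued above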

  62. PaulS,
    “This is still slightly smaller than the mean and median from the model ensemble considered in AR5, but slightly larger than the stated best estimate of total aerosol forcing.”
    .
    And what are those slightly larger ensemble mean and median values? More to the point, how do the ensemble total net forcing for today (and over the recent past) compare to the AR5 SOD total net forcing estimates?
    .
    The chapter is quite clear (executive summary and within section 7) that the best aerosol estimate is ~0.9 watt/M^2. If you have in fact identified an error (and it is not clear to me that you have), then it was missed by a lot of people.
    .
    I believe you will get a lot of push-back from people who will say you are making an apples to oranges comparison, and who will claim that the best estimate of 0.9 depends on modeled response rather than observational data. If I understand correctly, the ~0.9 watt/M^2 from Chapter 7 is not equivalent to the AR4 best estimate of ~1.2 watt/M^2. AR5 includes other influences which were not included the AR4 value. ( http://en.wikipedia.org/wiki/File:Radiative-forcings.svg ) An apple to apples comparison is ~1.2 watt/M^2 versus ~0.7 watt/M^2, which is a big reduction.
    .
    The key point is this: the AR4 best estimate for net human forcing was +~1.6 watt/M^2, while AR5 SOD offers a best estimate value of +~2.3 watt/M^2. Even if you have identified an error which changes that best estimate to +~2.2 watt/M^2, that is still a big change from AR4, and one that implies a lower plausible range of climate sensitivity which remains consistent with the observed temperature history. Do you disagree?

  63. SteveF,

    I know what AR4 and AR5 (draft) state with regards to their best estimates of RF and AF, and you are correct that the AR5 (draft) RF estimate is the apples to apples comparison with AR4’s RF chart. It’s also clear that AR5 (draft) states an estimate from satellite observational studies, supposedly relating to total net AF, of 0.73 +/- 0.3 W/m^2.

    However, the AR5 draft gives references to the satellite observational estimates it uses to produce this calculation; I’ve read them and copied down all the individual stated estimates and their scope in the post above. The only way you can get a figure close to 0.73 as the mean/median from these references is to weight studies with differing scopes equally, as if they relate to the same thing. Do you think that’s a correct way to do things?

    Whether this amounts to an error on the IPCC’s part with regards the RF and AF estimates in the AR5 draft depends on whether or not they have really taken this 0.73 figure at face value and used it to inform their judgement. As I said, it’s possible they have simply stated this as the mean/median of estimates from satellite-based studies (a simple statement of fact) while taking into account the scope differences in their final judgement.

    However, using the 0.73 figure as Nic Lewis does in this case, as if it relates to an apples to apples average of satellite-based studies is clearly wrong.

  64. PaulS,

    Do you think that’s a correct way to do things?

    Certainly not, unless an explanation is given for why they weight different studies differently.
    .
    But whether the best estimate to use in an analysis like Nic’s is 0.73, 0.9 or 1.0 W/M^2, that begs the real issue. The AR5 SOD indicates a substantial increase in the best estimate of net man-made forcing compared to AR4. So I ask you again:

    Even if you have identified an error which changes that best estimate to +~2.2 watt/M^2, that is still a big change from AR4, and one that implies a lower plausible range of climate sensitivity which remains consistent with the observed temperature history. Do you disagree?

    .
    .
    You also missed my other questions (which were not rhetorical):

    And what are those slightly larger ensemble mean and median values? More to the point, how do the ensemble total net forcing for today (and over the recent past) compare to the AR5 SOD total net forcing estimates?

  65. SteveF (Comment #107628),

    Obviously a higher forcing, all else being equal, would imply a lower effective sensitivity relative to the observed temperature change over any particular period.

    Total net forcing in the models? Not sure, but Table 7.5 contains diagnosed AF values from some CMIP5 models. Mean is -1.05, standard deviation 0.3.

  66. Paul S,

    Re: Table 7.5, four of the models include positive carbon-on-snow effects, so their actual aerosol only values are higher (in the case of GISS, the carbon-on-snow effect is +0.17). The ensemble aerosol mean, absent carbon-on-snow, is for sure more negative than -1.05.
    .
    But here is what I really do not understand: Why is there any range of aerosol effects at all? It seems to me the only rational way to compare the models is for all of them to use the same aerosol values and the same net forcing values. As things stand, the comparison is not between apples and oranges, it is between grapes, bananas, apples, grapefruits, kiwis, and strawberries. If you are going to compare models, then why not make a comparison which actually shows how the models differ, rather than obscuring differences with what is nothing more than a crude kludge?

  67. Steve F,

    Ok, plugging in a value of -0.2 to the BC on snow models shifts the mean to -1.1.

    But here is what I really do not understand: Why is there any range of aerosol effects at all? It seems to me the only rational way to compare the models is for all of them to use the same aerosol values and the same net forcing values.

    Because the purpose isn’t to compare models. It’s to provide an ensemble of projections which span a plausible range of uncertainty. A significant part of that uncertainty relates to present day aerosol forcing, so it’s important that this is reflected in the ensemble.

  68. There’s pretty much no way that current net TOA forcing is as high as 1 W/m² unless ocean heat content measurements are way off. Almost all of the TOA forcing has to be going into the ocean. Even 0.7 W/m² may be too high.

  69. Just another blow for the IPCC …

    All should read the breaking news here, from which I quote:

    ” This story is huge. America’s prestigious National Academy of Sciences (NAS) and related government bodies found no greenhouse effect in Earth’s atmosphere. Evidence shows the U.S. government held the smoking gun all along – a fresh examination of an overlooked science report proves America’s brightest and best had shown the White House that the greenhouse gas effect was not real and of no scientific significance since 1979 or earlier.”

    For those who have been following the research by myself and others from among nearly 200 members at Principia Scientific International, I’d like to draw your attention to an Appendix now added to my current paper.

    Have a Happy Christmas everyone!

  70. Doug Cotton–
    That PSI article hits a level of bogosity that rivals anything in The Dragon Slayers. The logic is nuts. On the one hand we have:

    1) “The NAS study was commissioned by the U.S. government to address the best science of the day on the role of carbon dioxide in atmospheric physics”

    2) The PSI authors seem to have done a word search for “greenhouse gas effect” (or something similar).

    3) The PSI authors conclude that if the authors of the NAS study did not use a particular set of words to describe the effect of CO2 on the atmosphere, then there is no effect.

    That’s nuts. The authors of the NAS report may not have used “greenhouse gas effect” (or greenhouse. Or gas or whatever.) because they think that’s a poor word choice. Lots of people think that’s a poor word choice to describe the warming effect of CO2 on the atmosphere. What that means is that it is entirely possible to search for “greenhouse gas effect” in articles written by “hell fire and brimstone” alarmists who think sensitivity is spectacularly high and get zero word-search matches.

    The fact is: if scientists had not believed that CO2 could affect the temperature of the earth, no one would have commissioned the NAS report to evaluate the potential impact. It is certainly true that there could be uncertainty about the magnitude of the effect given lack of knowledge of sensitivity, lack of knowledge of the amount of CO2 injected into the atmosphere, lack of knowledge of the carbon cycle, lack of understanding of aerosols and so on. But that does not mean they believed that the thing often called “the greenhouse effect” did not exist.

  71. Paul S,
    If the models do not on average reflect both the best current estimate of aerosol forcing AND the best estimate of net forcing, then the average of the model ensemble is biased. If uncertainty about aerosols is what you want to include, then have each model run the same three conditions: low range, best estimate, high range. Taking the models’ projections at face value, independent of the assumed aerosol influence (and net forcing level), only gives credibility to models which do not merit credibility.
    .
    I see no possibility of separating credible models from incredible models if they are never required to all use the same net forcing history.

  72. I see no possibility of separating credible models from incredible models if they are never required to all use the same net forcing history.

    It’s not that difficult. You just analyse each model according to its individual forcing history.

    …then the average of the model ensemble is biased.

    I’ve made a similar point before that most AR4-era models had aerosol forcings much lower than the AR4 best estimate. Again, if you want to trust the AR5 best estimate you can do some simple scaling or throw out models with high and/or low aerosol forcing to obtain a cut-down ensemble with mean matching the best estimate. It’s something which perhaps should be taken into account.

    However, we’re not talking about an estimate with high confidence here. You’ve said it yourself: the apples to apples RF estimate has decreased by a large amount from just five years ago. Does that really sound like something that’s firm enough for everyone to tune their models towards?

    Up to now I haven’t mentioned the practical considerations of what you’re proposing. Forcing from aerosols is an emergent property in each model. A few papers have noted that knowing the differential anthropogenic aerosol concentrations in a set of models will not provide a good indication of which models produce the lowest and highest forcings. More important for defining the forcing are the deeper characteristics of the model – clouds, surface albedo, atmospheric transport, natural aerosol burden. I’m really just referring here to models which only simulate the direct effect from prescribed distributions of sulfate aerosols and other species. For the present generation of ESMs, which include live chemistry and microphysics modules, this lack of dependence on emissions history is even more apparent and forcing even more intrinsic to the individual system.

    So, how do you produce a specific forcing in each model in a realistic manner without altering its basic characteristics?

  73. Paul S,
    ” Does that really sound like something that’s firm enough for everyone to tune their models towards?”
    .
    Well, the models are certainly tuned to something; if the IPCC claims to have identified the most probable aerosol influences, then surely that ought to be considered by modelers, unless they think their expertise in estimating aerosol effects is greater than that of the aerosol specialists.
    .
    As to what properties are “emergent” and what are inputs: I must assume that GHG forcing and the concentration and distribution of aerosols are inputs, and responses are “emergent”. The inputs could be set equal in all the models (and/or several different levels for those inputs could be run in each model).
    .
    But I think your question about “what to tune the models to” gets to the heart of the matter. All of it (aerosols, secondary effects, cloud properties, ocean properties, heat uptake, CO2 uptake, future emissions trajectories) is terribly, terribly uncertain, and the models are each “tuned” to some set of assumed values for all those uncertain factors. I don’t see how giving each modeling group complete freedom to choose critical parameters makes matters any better; in fact I think that only makes the model projections less certain, less believable, and less connected to our best measurements of reality. There is only one reality; we can’t just say they are all ‘right’, or even all equally likely to be ‘right’. A whole lot of healthy discrimination between models seems missing from the process.
    .
    What I have observed is that each time measurement data which could constrain the models becomes available (ARGO heat uptake measurements, satellite measurements of aerosol effects, tropospheric temperature profiles), modelers seem to discount whatever data does not agree with their particular model, instead of asking themselves some obvious questions about why their model disagrees with measurements. The process is upside down.
    .
    But regardless of how models are “tuned”, I think it borders on insane to suggest that the model ensemble is capable of making accurate, or even useful, long term (or short term) projections of future temperatures. I am convinced that the continued and growing divergence between the models and reality (as our hostess regularly documents) will ultimately force substantive changes in model assumptions. I am just surprised it has not already happened; perhaps the new AR5 best estimates for net GHG forcing and aerosol effects will push the modelers to question many of their assumptions. But I am not going to hold my breath.

  74. Interesting that the 1979 NAS report referred to above (with or without use of the phrase “greenhouse effect”) already had the SAME basic estimate (1.5–4.5 °C per doubling) 33 years ago as the various IPCC reports. No progress even on the uncertainty range in 33 years.

  75. Lance (#107665) –
    “[T]he 1979 NAS report …had the SAME basic estimate (1.5–4.5 °C per doubling) 33 years ago as the various IPCC reports.”
    Well, AR4 narrowed the range slightly, to 2°C–4.5°C.

  76. Sorry for the delayed responses. Visiting family has been a bit more… exciting(?) than I anticipated.

    Paul S:

    4.1 is roughly the diagnosed net forcing change to 2xCO2 in the CMIP3-era GISS models

    That is both interesting and disturbing. I remember time and time again being told that while people may disagree about what the climate’s sensitivity is, we could all agree the forcing of a doubling of CO2 is 3.7 W/m². Looking at the table in the link you gave me, I see estimates ranging from 3.0 to 4.2 W/m². That’s a pretty huge difference for people not to have mentioned.
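    For reference, the commonly quoted 3.7 W/m² comes from the simplified CO2 expression of Myhre et al. (1998), which the TAR and AR4 adopted; the spread in the table presumably reflects each model’s own diagnosed radiative transfer rather than this formula. A quick check:

        # Simplified CO2 forcing expression (Myhre et al. 1998, used in AR4):
        #   RF = 5.35 * ln(C / C0)   [W/m^2]
        import math
        print(5.35 * math.log(2))   # forcing for doubled CO2: ~3.71 W/m^2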

    SteveF:

    A couple of comments: I am not claiming that this graphic defines climate sensitivity, only that there is very good correlation between the “adjusted” temperature history and GHG forcing. For certain there is an unknown history of aerosol offsets and a largely unknown historical accumulation of heat by the oceans; these would reduce the net forcing shown on the graph, and lead to higher values of “effective climate sensitivity” than suggested by the graphic.

    It’ll take me a little while to decide how I feel about your graph given how many parameters you have. In the meantime, would you mind saying which data you used? Your GHG curve seems a bit different from the ones I’m used to. It may just be in my head, but it could be a difference in where the data is from.

    But here is what I really do not understand: Why is there any range of aerosol effects at all? It seems to me the only rational way to compare the models is for all of them to use the same aerosol values and the same net forcing values.

    Hardly. We don’t have multiple models just to test how well different functions handle different inputs. We have different models in the hope of finding ones that explain things well. That requires a lot more flexibility. It’s especially important since we don’t know that any particular forcing history is “right.”

    Imagine if we dictated a forcing history for models that was wrong. Every model would necessarily come out wrong. A great deal of effort would be put into solving a problem that couldn’t be solved. What’s the value in that?

  77. Brandon,
    “It’ll take me a little while to decide how I feel about your graph given how many parameters you have.”
    I don’t understand. The two regression constants come from HadCRUT3 (I think that’s what they call it), regressed against Nino 3.4 and the AMO (data comes from NOAA). There is a newer Hadley history, of course, but I don’t think it is so different as to make much difference in the graph. The 4-year lag for ocean surface warming (top 100 meters or so) is a SWAG based on estimates by Stephen Schwartz (and others). The result is pretty much the same if you use a 3-year or 5-year lag.
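    Schematically, that regression looks something like this (the input file and column names are hypothetical; only the structure follows the description above):

        # Regress temperature anomalies on Nino 3.4 and the AMO, then compare
        # the "adjusted" residual series to a lagged GHG forcing.
        import numpy as np
        import pandas as pd

        df = pd.read_csv("monthly_series.csv")  # hypothetical file with columns:
                                                # hadcrut3, nino34, amo, ghg_forcing

        # Least-squares fit: T = a*Nino3.4 + b*AMO + c
        X = np.column_stack([df["nino34"], df["amo"], np.ones(len(df))])
        coef, *_ = np.linalg.lstsq(X, df["hadcrut3"], rcond=None)

        # "Adjusted" temperatures with ENSO/AMO variation removed:
        adjusted = df["hadcrut3"] - X[:, :2] @ coef[:2]

        # 4-year lag (48 months) on the forcing, per the SWAG above:
        lagged_forcing = df["ghg_forcing"].shift(48)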
    .
    The forcing curve is what I calculated based on the historical records for CO2, N2O, methane, and halocarbons. (I did this about 3 years ago.) The halocarbon atmospheric data did not go all the way back to first production, so I estimated what the early (pre-1950’s) concentrations probably were based on some estimates of historical production… halocarbons were a small forcing in those years anyway. I used published equations that relate atmospheric concentration of each component to net forcing (these equations were circa AR4). Since I did not include many of the factors that are often included in estimates of forcing history (land use change, aerosol offsets, ‘black carbon’, volcanoes, etc), that may explain the difference.
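    The “published equations” are presumably the simplified expressions of Myhre et al. (1998) that the TAR and AR4 used. For CO2 the calculation looks like this (the concentrations below are illustrative, and the CH4/N2O square-root terms with their overlap correction are omitted):

        # Simplified radiative forcing for CO2 (Myhre et al. 1998):
        #   RF_CO2 = 5.35 * ln(C / C0)   [W/m^2]
        # CH4 and N2O use square-root expressions with an overlap term,
        # omitted here for brevity.
        import numpy as np

        C0 = 278.0                             # pre-industrial CO2, ppm
        co2_ppm = np.array([290.0, 310.0, 340.0, 370.0, 395.0])  # illustrative
        rf_co2 = 5.35 * np.log(co2_ppm / C0)   # W/m^2 vs pre-industrial
        print(rf_co2)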
    .
    My only point in posting that graphic was that there is a clear correlation between forcing and temperature history, especially if you assume the AMO and ENSO account for variation around a ‘true’ underlying trend.
    .
    WRT constraining the models: I do not see how every modeling group doing whatever they want helps to identify which models are accurate and which are not; as it stands they can (and do!) adopt very doubtful forcing histories and proclaim that their model is ‘consistent’ with the temperature history. I see no way out of the confusion if there are no constraints, so I think we should agree to disagree about what constraints should be placed on the models.

  78. SteveF:

    I don’t understand. The two regression constants come from HadCRUT3 (I think that’s what they call it), regressed against Nino 3.4 and the AMO (data comes from NOAA). There is a newer Hadley history, of course, but I don’t think it is so different as to make much difference in the graph. The 4-year lag for ocean surface warming (top 100 meters or so) is a SWAG based on estimates by Stephen Schwartz (and others). The result is pretty much the same if you use a 3-year or 5-year lag.

    That is three different parameters. Not only that, but one of those parameters is a non-physical kludge.

    The forcing curve is what I calculated based on the historical records for CO2, N2O, methane, and halocarbons. (I did this about 3 years ago.) The halocarbon atmospheric data did not go all the way back to first production, so I estimated what the early (pre-1950’s) concentrations probably were based on some estimates of historical production… halocarbons were a small forcing in those years anyway. I used published equations that relate atmospheric concentration of each component to net forcing (these equations were circa AR4). Since I did not include many of the factors that are often included in estimates of forcing history (land use change, aerosol offsets, ‘black carbon’, volcanoes, etc), that may explain the difference.

    I was referring to GHG curves specifically so things like land use and volcanoes would have no effect. But to clarify, is your underlying data basically the same as something like this? If so, it could just be me misremembering things.

    My only point in posting that graphic was that there is a clear correlation between forcing and temperature history, especially if you assume the AMO and ENSO account for variation around a ‘true’ underlying trend.

    The fit you got is not compelling. There is certainly a correlation but what of it? Since you’re lagging the forcing line, one could argue the complete inverse of your position. After all, common understanding says warming will cause an increase in GHGs. Things like that are why I need to take a little while* to think.
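    A toy example of why a lagged fit alone cannot settle the direction of causality (entirely synthetic data; no real series involved):

        # With smooth, trending series, correlation stays high across a range
        # of leads and lags, so a good lagged fit does not by itself establish
        # which series drives the other. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(1200)                        # months
        forcing = 0.002 * t + 0.1 * np.sin(t / 40)
        temps = 0.5 * forcing + 0.02 * rng.standard_normal(t.size)

        for lag in (-48, 0, 48):  # negative: temps lead; positive: forcing leads
            f = np.roll(forcing, lag)[60:-60]      # trim to avoid wrap-around
            T = temps[60:-60]
            print(lag, round(np.corrcoef(f, T)[0, 1], 3))  # ~0.99 at every lag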

    WRT constraining the models: I do not see how every modeling group doing whatever they want helps to identify which models are accurate and which are not; as it stands they can (and do!) adopt very doubtful forcing histories and proclaim that their model is ‘consistent’ with the temperature history. I see no way out of the confusion if there are no constraints, so I think we should agree to disagree about what constraints should be placed on the models.

    Hardly. The constraints come in later steps. The best way to build models is to start with loosely constrained ones. As new data comes in, those models can be tested: models that perform too poorly get dropped, and workable ones get refined. This allows you to narrow the field to models that can fit the data. As the field gets narrower, you can then examine where the differences lie.
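    Schematically, that loop looks like this (purely illustrative; the “models” here are just noisy series scored against synthetic observations):

        # "Start loose, test, drop, refine": score each candidate model against
        # observations and keep only those within a skill threshold.
        # Everything here is illustrative pseudo-data, not real model output.
        import numpy as np

        rng = np.random.default_rng(1)
        obs = np.cumsum(rng.standard_normal(100)) * 0.01 + 0.01 * np.arange(100)

        # Hypothetical ensemble: each "model" is obs plus its own bias/noise.
        ensemble = {f"model_{i}": obs + rng.normal(0.0, 0.05 * i, 100)
                    for i in range(1, 6)}

        def rmse(pred, target):
            return float(np.sqrt(np.mean((pred - target) ** 2)))

        scores = {name: rmse(series, obs) for name, series in ensemble.items()}
        survivors = [name for name, s in scores.items() if s < 0.10]
        print(sorted(scores.items(), key=lambda kv: kv[1]))
        print(survivors)   # the cut-down set carried into the next round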

    The approach you suggested is unworkable. You basically suggested models should have one parameter fixed to an uncertain value. If we do that, every model will be forced to be wrong. There will be no way to refine them to a right answer.

    *It’ll take longer since I’m on vacation. I have time to devote to blogs, but not as much as usual.

  79. Brandon,
    What is compelling to some is not so compelling to others. I dug out the individual gas concentrations from ice cores and atmospheric measurements; I did not use a compiled list like the one you linked to.
    .
    “common understanding says warming will cause an increase in GHGs”.
    .
    Are you really suggesting warming causes increased GHG levels, and not the other way around? That is simply nonsense. Sure, warming of the ocean surface (e.g., el Nino) leads to a tiny (a couple of ppm at most) and brief increase in CO2, because it briefly reduces the rate of CO2 absorption, even while net absorption remains strongly positive; such shifts amount to nothing of consequence.
    .
    “The best way to build models is to start with the loosely constrained ones. As new data comes in, those models can be tested and refined. Models that are too bad should get dropped.”
    .
    Sure, but my objection is exactly because that is NOT what happens; new data comes in and modelers either arm-wave it away or ignore it. Please point to all the bad models which have been dropped over the past decade. There is no “convergence”, and the model ensemble continues to make long term projections based on current forcing values which are substantially lower than the best available current estimates; and some people are already saying that is not important. The individual models don’t even change much over multiple years; they just add new bells and whistles that don’t make much difference. Most of the computer time seems to be spent making projections of future doom rather than improving the congruence of the models with reality. The models are on average diverging from reality. Wanna bet on whether that divergence narrows or widens over the next 5 years?
    .
    Perhaps you think my approach of constraining model inputs is unworkable, but I am quite certain the current approach (which seems to be what you advocate here) does not work at all: no narrowing of the range of diagnosed climate sensitivity, no reduction in uncertainty, in fact, no perceptible progress.
    .
    Like I said, I think it would be better to agree to disagree on this one.

  80. SteveF:

    What is compelling to some is not so compelling to others. I dug out the individual gas concentrations from ice cores and atmospheric measurements; I did not use a compiled list like the one you linked to.

    Have you happened to post the data you used anywhere?

    Are you really suggesting warming causes increased GHG levels, and not the other way around? That is simply nonsense.

    Of course I’m not suggesting that. I pointed out the issue as a problem with the correlation you found. Your approach has no way of demonstrating your interpretation is any better than what you call “simply nonsense.” Pointing out the fact that your method fails to reject “nonsense” doesn’t mean I’m suggesting that nonsense is true.

    Sure, but my objection is exactly because that is NOT what happens;

    Huh? That isn’t what you said. You said:

    the only rational way to compare the models is for all of them to use the same aerosol values and the same net forcing values.

    You’ve now accepted an approach which you began by condemning. I pointed out the approach you described was unworkable and guaranteed to fail. I have no idea how pointing out that obvious fact and describing how things ought to be done makes you think I am advocating the approach currently being used. Heck, you even say the approach I described as best is not being done. If the approach I say should be used isn’t being done, how could I be advocating for what is being done? You seem to be thinking I’m saying:

    “Method X is the right method to use. They should keep using method Y!”

  81. I followed a link from the SkepticalScience Twitter feed a little while ago, and it took me to a piece written by Stephan Lewandowsky. It was about what you’d expect, but I was shocked when I followed a link in it. The link was to a piece where Lewandowsky was quoted as saying:

    “Science is one of the most transparent endeavours humans have ever developed. However, for the transparency to be effective, preliminary documents ought to remain confidential until they have been improved and checked through peer review.”

    Try wrapping your head around that.

  82. Brandon Shollenberger (Comment #107720),
    Wow, we really disagree about this. I have said for years that some constraints need to be applied, specifically because the current process is broken; the models are not improved based on new data, and bad models are never removed from the ensemble based on performance. Wacko models continue to be used. The models can only be improved if they actually are modified based on performance under realistic constraints. The realistic constraints are what is missing, and comparing models using the same set of inputs seems to me a very reasonable start to applying constraints.
    .
    Please let’s not waste any more time on this.

  83. SteveF:

    Please let’s not waste any more time on this.

    By “this,” do you mean the one point you discussed in your comment, or the discussion in general? I guess it’s fine if you don’t want to discuss what you did or how you’ve misinterpreted me, but you should let me know.

    In any event, I feel I should point out I have explained why your suggested approach is completely unworkable. You’ve done nothing to rebut my explanation so it is hard to imagine why you would keep promoting that approach. You are either intentionally promoting a stupid approach or you are simply not responding to the things I say.

    I suppose that would waste both of our time.

  84. I never actually said this conversation is a waste of time so I don’t know how you “agree” on that point. Oh well. If I can’t even get answers to simple questions/points, I guess this is a waste of time.
