Surface Temperature maps.

Paul asked:

This is a good first step. I think that a genuinely honest evaluation of the models would be to compare measured and predicted global temperature maps. The anomaly curves can hide a lot of unphysical predictions because they average over the entire earth.

As I noted in discussions in comments, the authors of the AR4 do discuss the mismatch between mean surface temperatures in simulations and the measured values. I described the comparisons between simulated and measured values provided in chapter 8 of the AR4 in a previous blog post which contains some graphics.

Here’s an IPCC graphic showing their interpretation of the measured surface temperature and the modeled surface temperatures. Does someone involved in the AR4 believe the surface temperature of the earth is sufficiently well known to bother to subtract the measured temperatures from the simulated temperatures? Yep! See below.

Figure from Supplemental Materials To Chapter 8 in the AR4.

The “hot” looking planet at top left is the measured earth in real units. The one just to its right is the difference between the mean model simulation and the earth. Bear in mind: no matter what temperature you think is ideal, a perfect simulation should match the real earth. The rms error is shown to the right.
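As a sketch of the arithmetic behind that middle panel and the rms number (the grids and the uniform 2C offset below are invented for illustration, not taken from the AR4):

```python
import numpy as np

# Invented 1-degree latitude-longitude fields of annual-mean surface
# temperature (deg C).  In the AR4 figure these would be the observed
# climatology and the multi-model mean; here they are toy arrays.
lats = np.linspace(-89.5, 89.5, 180)
observed = 14.0 + 12.0 * np.cos(np.radians(2 * lats))[:, None] + np.zeros((180, 360))
model_mean = observed - 2.0        # pretend the models run uniformly 2C cold

# The difference map: a perfect simulation would be zero everywhere,
# no matter what one considers the "ideal" temperature.
diff = model_mean - observed

# Area-weighted rms error: grid-cell area scales with cos(latitude).
w = np.cos(np.radians(lats))[:, None] + np.zeros((180, 360))
rms = np.sqrt(np.sum(w * diff**2) / np.sum(w))   # 2.0 for this toy offset
```

A real comparison would use the actual observed and simulated fields, but the weighting and the subtraction are this simple.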

Mind you, who knows what these magical graphs could possibly be meant to communicate. Evidently Gavin said this:

Response: The GISTEMP data does not compute the average annual global mean temperature. In fact, it’s not clear that is even possible to do so. They use 14 deg C as a convention, but as explained ably here, the actual value is highly uncertain. It is therefore not widely used as a tuning target, and so models end up with a range. – gavin

Since the global average surface temperature is the average over the surface, one might presume that one could determine the global average by integrating over the surface temperature. So, if HadCrut thinks they can come up with the measured values over the entire surface, one might think they could integrate over the globe.
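A sketch of what that integration would look like on an invented regular grid (the field and grid are mine, not HadCrut’s): the global mean is the surface integral of T over the sphere divided by its area, which on a uniform latitude-longitude grid reduces to a cos(latitude)-weighted average.

```python
import numpy as np

# Invented 1-degree grid of surface temperatures (deg C); any gridded
# climatology built from station data would be averaged the same way.
nlat, nlon = 180, 360
lats = np.linspace(-89.5, 89.5, nlat)                 # cell-centre latitudes
T = 30.0 * np.cos(np.radians(lats))[:, None] - 10.0 + np.zeros((nlat, nlon))

# Global mean = (1 / 4*pi*R^2) * surface integral of T dA.  On a uniform
# lat-lon grid dA is proportional to cos(lat), so the radius and the
# longitude spacing cancel, leaving a weighted average.
w = np.cos(np.radians(lats))[:, None] + np.zeros((nlat, nlon))
global_mean = np.sum(w * T) / np.sum(w)               # ~13.6 C for this toy field
```

The hard part, of course, is not this average but filling the grid from sparse, inhomogeneous station data in the first place.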

Of course, we would understand there might be a good deal of uncertainty. Nevertheless, even if the IPCC authors are attempting to do something that may be impossible, it appears these IPCC authors felt sufficiently confident in HadCrut’s values to compare the mean simulation to data and discuss this.

Or maybe the guys at Hadley think they can do this, and Gavin is only suggesting his coworkers at GISS think they can’t compute the mean temperature, and the 14 C listed on GISS’s web page is just a guess.

If Hadley feels capable of computing the surface temperature but GISS does not, that might explain why the authors of the AR4 compare the simulated surface temperatures to HadCrut. Still, maybe we should take HadCrut’s absolute temperatures with a grain of salt also. Because, who knows? Maybe they are overconfident about their abilities.

What we do know is that no matter what the earth’s mean surface temperature may be, the models don’t agree with each other, and they disagree over periods as long as a century. So, the difference between models is different physics, not weather noise.
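A back-of-the-envelope way to see why century-long disagreement can’t be weather (all the numbers below are invented for illustration): weather noise in an annual global mean is of order 0.1C, and averaging N independent years shrinks it like 1/sqrt(N), so over a century it is ~0.01C, far too small to explain offsets of a degree or more between models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "annual global mean temperatures" for two models over a century:
# each is a fixed baseline (its physics) plus weather-like noise.
n_years = 100
sigma_weather = 0.1                                  # C, interannual noise
model_a = 13.0 + rng.normal(0.0, sigma_weather, n_years)
model_b = 14.5 + rng.normal(0.0, sigma_weather, n_years)

# Weather noise in a century mean averages down like sigma/sqrt(N)...
noise_in_century_mean = sigma_weather / np.sqrt(n_years)   # ~0.01 C

# ...so a persistent ~1.5 C offset between century means is structural,
# not weather.
offset = model_b.mean() - model_a.mean()
```

The same logic applies to the real ensemble: offsets that persist for a century point at the models, not at internal variability.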

So, at least some models are sufficiently far off to get the global mean surface temperature wrong.

References:

Both Chapter 8 and the supplemental materials for chapter 8 of the WG1 for the AR4 are available here.

76 thoughts on “Surface Temperature maps.”

  1. Lucia – from the caption, the middle “planet” in the IPCC figure is the difference between the mean model earth and the observed temps, not the mean model earth. The “planet” on the right is the sum of squared errors or something like that.

    The most striking thing to me is not that the models are cooler overall but that Antarctica is way off in the models.

  2. I have real problems with the meaningfulness of the models’ mean.

    If I throw a true die one hell of a lot of times I will get a mean throw of around 3.5 and I understand what that means.

    If I throw a collection of loaded dice, mostly once each, but occasionally a few times each, and construct the mean, I have a value but not much of an idea of what that value implies.

    If the model mean did not serendipitously reflect the real world better than the models themselves, I wonder if anyone would mention it.

    Am I to believe that building a model is an outcome from a true model-building die? So if I build enough models the mean will tend to the true model (the world as we know it)?

    Alex

  3. You know, I’ve thought of another reason why not being able to get a consistent result on the absolute mean surface temperature matters: how on Earth can you do a proper study of impacts if the absolute temperature in models is not correct? What if it were significantly lower or higher than the models or Hansen’s mysterious “trusted models” suggest? They might significantly err in the damage they expect to see.

  4. Forget about AR4. It was “very conservative.” The 2014 report will “include futures with a lot more warming” http://news.yahoo.com/s/afp/20090214/sc_afp/usclimatewarming_20090214150716

    ‘…Field is co-chair of the group charged with assessing the impacts of climate change on social, economic and natural systems for the IPCC’s fifth assessment due in 2014.

    The 2007 fourth assessment presented at a “very conservative range of climate outcomes” but the next report will “include futures with a lot more warming,” Field said…’

  5. So the next question you should ask is, “So What?”.
    What does it mean for the models to not match the GMT? Does it mean anything?

  6. The most logical assumption, Nathan, is that the models do not get the physics right. The problem with this is that in chapters 8 and 9 the IPCC go to a lot of effort to set up that their Bayesian mean is a good approximation of the true physics and results. That throws another of the IPCC “legs” of their argument into the trash can.

  7. Lucia, thanks for posting this. Examining the three different models, they range from mediocre (CNRM-CM3) to bad (CGM31 (T47)), though the temperature scales are confusing. I’m assuming the right-hand scale applies to the difference maps.

    Regarding Gavin’s comment “The GISTEMP data does not compute the average annual global mean temperature. In fact, it’s not clear that is even possible to do so.” Huh? The models claim to be modeling a physical system. They darn well better be able to calculate the value of physical variables. If not, this is a sideways admission that all they can really do is fit the “anomaly” with the right amount of tweaking.

    “What we do know is that no matter what the earth’s mean surface temperature may be, the models don’t agree with each other and disagree over periods as long as century. So, the difference between models is different physics not weather noise.” This strikes at the heart of the IPCC’s utterly unscientific averaging of the models. There’s no point in having multiple models if they don’t embody different physics (and chemistry and biology!), but that simultaneously makes it illegitimate to average them. At best, one is right and the rest are wrong, though it’s certainly possible that they’re all wrong. Without seeing the other model results, I’m guessing that the three in the figure are the best of the lot and the rest are much worse.

    I’d love to see predictions from ten years ago of seasonal global temperature maps and compare them to the actual satellite temperature measurements. If these are representative, it would probably be hilarious.

  8. Lucia, if you are going to quote me and then dance around insinuating something that is never very clear, it might be useful for your readers (and perhaps you) to actually read the link that you did not bother to copy over.
    http://data.giss.nasa.gov/gistemp/abs_temp.html

    You might also like to read Jones et al 1999 and see exactly what went into their estimates.

    Then perhaps there might be something interesting to discuss.

  9. John F Pittman

    Really, is the physics wrong? Which particular physics? All of it? Way back to Newton? Or is it just the radiative part? Or is it the water vapour? What is the problem here?

    The problem I have with this post is that Lucia reaches this very obvious question, then stops. She throws her hands up and says “Must be wrong” but doesn’t attempt to quantify the error. Just saying “it’s wrong” is pretty useless. How wrong is it? Is it actually a big error? Or is it actually not important?

    And why is this important?
    “So, at least some models are sufficiently far off to get the global mean surface temperature wrong.”

    Why do all the models have to get the surface temp right? Surely there is a use for models that ‘get it wrong’.

  10. Lucia
    Ok, I think I have worked out what the problem is in the earlier post you made (that you linked to at the start), it seems to be with comprehension.

    This is the IPCC text.
    “Individual models typically have larger errors, but in most cases, less than 3C except at high latitudes (see Figure 8.2b and Supplementary Material, Figure S8.1). Some of the larger errors occur in regions of sharp elevation changes.”

    Would you not read this as:
    Individual models have large errors, but in most cases [those errors are], less than 3C except at high latitudes [where those errors are (often?) larger]. [In addition] Some of the larger errors occurred in regions of sharp elevation changes.

    so yes, the individual models have large errors, but most of those errors are less than 3C except in areas of high latitude and regions of sharp elevation changes.

  11. From the link provided by Gavin

    ” Q. What do I do if I need absolute SATs, not anomalies ?
    A. In 99.9% of the cases you’ll find that anomalies are exactly what you need, not absolute temperatures. In the remaining cases, you have to pick one of the available climatologies and add the anomalies (with respect to the proper base period) to it. For the global mean, the most trusted models produce a value of roughly 14 Celsius, i.e. 57.2 F, but it may easily be anywhere between 56 and 58 F and regionally, let alone locally, the situation is even worse. ”

    I never found that I needed anomalies. I needed to know the absolute temperature to dress accordingly : bathing suit and pareo, or parka?

    If we take this proposed relativity to other examples, it would mean that all we need is the shape of the house, not the absolute scale, to have a house to live in. The relative map of a region, not the absolute distances to navigate with, etc. etc.

    Though the plans of a house are useful, the actual implementation in meters is what counts. It works because there is a linear scaling between a house model and a house built from the model. We now have climatologists telling us that the shape of the modeled climate is correct, and if they compute a difference it should apply to the real world regardless of whether the model fits the real world. The above plots emphasize that the shape is not correct, though, so the scaling argument falls through. In addition, it is well known that climate, like weather, does not have linear approximations, being highly chaotic, so even linear scaling is a huge hypothesis that has to be proven. For example, presently in Greece we have a difference of 10C between night and day on an average temperature of 13C (8 to 18). A few days ago the difference was 15C (0 to 15) on an average temperature of 7.5C. Nothing very linear there.

  12. Gavin–
    Feel free to discuss anything you like. I’ve read Jones and had read the GISS reference. I also know the AR4 does compute the climatology and does compare models to the surface temperature.

    If you are going to answer plain William’s plain question, which was:

    If GCM’s cannot accurately describe actual current temperatures how can they accurately describe future climate?
    Thanks
    William

    With irrelevant evasions like this:

    [Response: The GISTEMP data does not compute the average annual global mean temperature. In fact, it’s not clear that is even possible to do so. They use 14 deg C as a convention, but as explained ably here, the actual value is highly uncertain. It is therefore not widely used as a tuning target, and so models end up with a range. – gavin]

    Clearly, your answer does not address the question put to you. So, why would anyone believe you are going to discuss frankly?

    But thanks for the links.

  13. Nathan–

    Ok, I think I have worked out what the problem is in the earlier post you made (that you linked to at the start), it seems to be with comprehension.

    No, you have not identified the problem with the text. The problem with the text is that it is written to studiously avoid specifically mentioning the magnitude of the “larger errors” (5C or possibly more) and instead mention a lower number (3C). That is to say: it is written to be accurate but convey a false impression. That is “accurate but untrue”.

    FWIW: Most of the problems with George Will’s column fell in the “accurate but untrue” camp.

    So, while you highlight the “accurate” part, as Will did by defending the specific facts underlying most of his column, the text remains “untrue”.

  14. Nathan (Comment#11199)

    “Why do all the models have to get the surface temp right? Surely there is a use for models that ‘get it wrong’.”

    Well, obviously they don’t have to. Getting the mean temp wrong and getting the climatology wrong does not seem to impede them.

    They will “have” to get them right when people require them to get them right. I am sure they would love to get them right. Just now they don’t.

    The claim that they, the modelers, need to make, and to justify, is that it simply doesn’t matter. Unfortunately it might have been better if that argument had been firmly stated upfront, and to the general public. I am sure they know the argument and have a justification. A simple appeal to linearity might suffice. If they made such an appeal then doubtless they might be challenged to justify it.

    There is a general argument, “well, all the models agree that it is going to get hotter, they can’t all be wrong, and that is what the basic science tells us anyway, all other things being equal,” but that is telling us little more than what the basic science tells us.

    I do not know the state of play for the CMIP3 archive but for the CMIP2 archive on the PCMDI website we have the statement:

    “A natural question to ask is whether the spread in simulated temperatures is correlated with variations in planetary albedo among the models.”

    Which kind of indicates that they did not know why the spread exists.

    So they get this wrong and perhaps they do not know why they get this wrong. It does not inspire confidence. Obviously a whole load of people do not believe it matters much. But that is a belief until proven otherwise.

    Alex

  15. Does someone involved in the AR4 believe surface temperature of the earth is sufficiently well know to bother to the subtract the modeled temperatures from the simulated temperaures?

    Subtracting modeled from simulated temperatures? Was one of those supposed to be “measured”?

    And I’m sure “well know” was supposed to be “well known”.

  16. FrankK–
    Gavin posted late in the evening. He’s not required to be beavering away on ModelE every moment of his waking life. I’m sure that ModelE is chugging right along even as we blog.

  17. For me it’s not even a question of whether the models get the science correct or not. The fact of the matter is, and many people forget this, that the models are large, complex pieces of software. It has been known for some time that software systems such as these will contain serious errors, otherwise called bugs. The software engineering community has developed quality processes to reduce these errors by a significant margin, however none of the models were designed, written, and tested using these processes. In fact, most modelers will admit that their models are research codes, not engineering codes.

    That does not mean that all these models are useless. They can be quite useful in steering future research or helping to determine if a particular theory is at least plausible. However given that their output is likely compromised by the inevitable programming errors contained in them, the models outputs can’t be used to support a conclusion that the theory is likely correct. And certainly you can’t use them as a basis to promote policy changes, especially of the magnitude that are being suggested.

    Now, if any of the models have been developed using industry accepted quality processes then I’d like to see a pointer to the following:
    1.) Requirements document
    2.) Coding standards
    3.) Full high and low level design documents
    4.) Individual peer review summaries for each CSU
    5.) Validation reports
    6.) Formal Verification test procedures
    7.) Verification Test reports

    So far I’ve not seen *any* of the above for *any* of the models. I really think this eclipses all the other discussions of the models, at least in regard to whether the outputs mean anything outside of the research groups that wrote them.

  18. Lucia:

    Although Gavin’s answer and link may appear to be ‘flip’, there’s an important implied point that’s being missed:

    The Stefan Boltzmann black body Law is the foundational equation used to model the steady-state temperature of the earth. It requires a defined ‘surface’ of the emitting body. In the presence of a greenhouse effect, that’s well above the true surface. But then one has to try to define an effective surface somewhere below for your model.

    That’s somewhere near the solid or liquid true surface, but includes a portion of the lower troposphere. Its temperature will be somewhat different from that of the true surface. I believe that just where that should be is the open question. It’s unresolved, from GCM to GCM, and that’s the point Gavin raises.

  19. You know, I’m going to stick my neck out here and suggest that Gavin is saying that it’s a bit difficult to get a ‘mean global temperature’ because the mean is a bit arbitrary, and it appears that the models calculate a delta T rather than an abs T, as in a change to the equilibrium, which should mean slightly easier physics. Please correct me if this is wrong. Maybe a model could be built (or exists) that calculates abs T, which would solve this?
    In addition I believe some modellers apply the ‘ballpark rule’, as in “well, we’re sort of in the ballpark so we aren’t completely wrong.” This can be a double-edged sword because you have to be in the ballpark on a lot of other parameters, not just T, to justify this.
    Also, there is the eternal ‘empiricists v theoreticists’ divide, which in climate science would tend to fall on the theoreticists’ side, as the Earth’s climate is anything but simple or easily understandable/testable.

  20. Len,
    .
    I would think that the model/observed GMST differences would be complicated not by the definition of where the “surface” is in the context of blackbody radiation but rather due to the number of atmospheric layers used in the model. If you are comparing models to surface air temperature, the only appropriate way to do this is to compare the model atmosphere layer that is in contact with Earth’s surface to the observed surface air temperature. The comparison is not a comparison to a theoretical blackbody emitter; the comparison is model prediction to observed. Comparing the theoretical blackbody temperature of the model to observed is apples and oranges.
    .
    Blackbody temperatures would only matter when comparing models to satellite information.

  21. Len, you risk opening up a whole new can of worms.

    I think that the important surface is where the dense stuff (land, water) meets the thin stuff (atmosphere); that is where the radiation that escapes directly to outer space originates. Dense materials tend to have high emissivities.

    Now that temperature is almost never the temperature that the weather stations measure.

    As anyone who has spent any time in a very hot clime could tell you (I spent some months in the Namib Desert), although the shade temperature might be around 40C, the ground temperature is a lot (~20C) hotter during the heat of the day. Having scalded my feet in the sand, I can testify to this.

    Now that questions what tas (Surface Air Temperature) in the models means. If it is the surface temp, it would mean that in desert areas it should on average be perhaps 10C+ more than the temperature record would indicate. If it is meant to be the temp in a weather station, how do they model that?

    I don’t know, for me it is an open question.

    Alex

  22. Len,
    .
    Also, what Gavin is discussing really doesn’t relate to that. Gavin’s discussion is more about how to define what is meant by SAT and the fact that it is a rather amorphous definition. For example, you could calculate a daily mean by using 0.5*(max+min). Or you could sample at 10-minute intervals and average. Or you could sample continuously and average over the integral. All give a different numerical representation of exactly the same thing.
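    (To make that concrete, here is a small sketch with an invented diurnal cycle; the two conventions only agree when the daily cycle is symmetric:)

```python
import numpy as np

# An invented, asymmetric diurnal cycle sampled every 10 minutes:
# a sharp afternoon peak on a long cool night.
t = np.linspace(0.0, 24.0, 145)[:-1]                    # hours, 10-minute steps
temp = 5.0 + 10.0 * np.exp(-(((t - 15.0) / 3.0) ** 2))  # deg C

midrange = 0.5 * (temp.max() + temp.min())   # the 0.5*(max+min) convention
sampled = temp.mean()                        # mean of the 10-minute samples
# midrange comes out well above sampled for this skewed day
```

    Both are legitimate “daily means” of the same day, yet they differ by more than 2C here; which one a station reports is a convention.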
    .
    Same problem exists for monthly temperatures. Different surface stations use different algorithms for calculating the temperature. This results in uncertainty in the aggregate measurement.
    .
    Not only that, but the surface measurements are not continuous across the surface. The measurements are made at discrete points – both geographically and altitudinally. The temperatures in between each of these points are interpolated by various methods. However, in the case of the models, all of the temperatures are calculated explicitly. That results in potential error between the real world (interpolated) and the model world (not interpolated, but possibly averaged by various methods).
    .
    Lastly, there are known temperature gradients within even a few meters of the surface. Strong gradients of greater than 1 C have been measured in Antarctica over the height of mere meters. So where is the sensor relative to the snow? That changes with time as snow accumulates/gets blown away. This isn’t just Antarctica, either. Two surface stations in California in close proximity may have sensors at different heights that yield different temperatures simply because of the height difference.
    .
    Gavin can correct me if I am wrong, but I believe those are the points he is trying to make.

  23. How are anomalies calculated? Are the degrees C referred to related to absolute degrees C? Aren’t there the exact same problems in determining anomalies as in determining absolute mean temperatures?

  24. Lucia,

    No, you have not identified the problem with the text. The problem with the text is that it is written to studiously avoid specifically mentioning the magnitude of the “larger errors” (5C or possibly more) and instead mention a lower number (3C). That is to say: it is written to be accurate but convey a false impression. That is “accurate but untrue”.

    FWIW: Most of the problems with George Will’s column fell in the “accurate but untrue” camp.

    So, while you highlight the “accurate” part, as Will did by defending the specific facts underlying most of his column, the text remains “untrue”.

    You imply that the authors of the IPCC report attempted to intentionally mislead readers about surface temperature simulations in the climate models. Yet I find no evidence at all of this in the report. The report states where there are problems (high latitudes, high elevations, eastern ocean basins), and provides a link to more detailed info in the supplement. The criticism about omitting the magnitude of larger errors is just nitpicking — it’s all there for anyone who is interested. Contrast this with Will who intentionally misleads by ignoring the subtleties (differences between arctic and antarctic sea ice) involved in determining sea ice trends.

  25. Len–
    My issue with Gavin’s “answer” was not the information in the answer itself. My issue is that it does not address the question asked. The question asked at RC was this:

    If GCM’s cannot accurately describe actual current temperatures how can they accurately describe future climate?

    Gavin did not answer this. He responded by discussing uncertainties in measuring the global average surface temperature of the earth during the 30 year baseline period. This is almost irrelevant to the question. It only seems relevant because someone reading quickly might think the answer is suggesting the models might be accurately describing the actual current temperatures.

    Yet, it’s quite clear that many of the models used by the IPCC don’t. We may not know which don’t, but most don’t.

    Now, I can’t begin to guess why Gavin elected to go off on a tangent about the difficulty involved in defining or measuring the earth’s temperature. The conversation could be interesting, and if he wishes to wax philosophical, I’d be happy to let him muse here in comments. Or, as he has his own blog, he may obviously post there. Or whatever.

    However, with regard to the question William asked Gavin, those discussed in my post which he linked, and the posts here: even if we chase the red herring Gavin pointed out, we will not get back to the issue William was trying to understand, which is:

    Why do so many models inaccurately “predict” the surface temperature of the earth?

    And lest anyone be confused: it is clear many of the models get this value wrong. There are three possibilities based on the information at GISS:


    * If the “best estimate” of 14C is the “correct” value, then most models are lower; at least some are way off on the low side and some are way off on the high side. A few look ok. The differences between models and the earth cannot be ascribed to “weather”. The graph looks as I showed it in my previous post (where I stated that I reconstituted the measured values using GISS’s best estimate of the surface temperature).

    * If the earth’s surface temperature is higher, say by 1C, then nearly all the models “predict” earth to be colder than it is. The model errors on the low side cannot be ascribed to “weather”.

    * If the earth’s surface temperature is lower, say by 1C, then a few models are ok, but most are still biased. The bias isn’t due to “weather”.

    So, the difficulty in determining the actual surface temperature is nearly irrelevant to the question William asked. It is only relevant to figuring out which models are wildly off. But no matter what the correct answer is, some are off.

    So: yes. Had William asked Gavin, “Can we know the earth’s surface temperature with precision?”, Gavin’s answer would be perfectly suitable. But… well… no.

  26. MC–
    The model outputs are reported in honest-to-goodness “C”, not “delta C”. It’s GISTEMP that’s in “delta C”.

    RyanO–
    I suspect you are describing the issues that concern Gavin about determining the ‘correct’ value for surface temperature. As you see above, I think that issue, while real, is a red herring vis-a-vis the question William asked Gavin, which Gavin answered at RC. I thought what I was ‘implying’ in my answer was that, notwithstanding these uncertainties in the measurements, one must question the accuracy of the models.

  27. Ryan, pretty close.

    Have a look at the top panel here:
    http://farm1.static.flickr.com/82/222920214_3d64ab4e9c_o.jpg

    It shows the surface ground temperature over New York, but any scene over land has the same kind of heterogeneity. Surface air temperature is slightly different, but you’ll see similar variations. The point is that with just 3 or 4 stations, it is very difficult to make an appropriate estimate of the scene average temperature. Jones et al. use elevation to make lapse rate corrections, but surface type variations are probably as large. For a model, all these things are perfectly known, but for the real world, they aren’t. And unlike for the anomalies, you can’t rely on large-scale spatial correlation to help you out.

  28. To All,

    My opinions on climatic change are a bit like climatic change; both go up and down.

    In the 60s I was persuaded that there was an ice age coming. About 20 years later I was persuaded that we were going to fry.

    In 1990 I was first persuaded that I should have doubts.

    Then in the late 90s I was persuaded that frying was indeed the prognosis.

    Around 2007, I got cold feet again. Since then I have convinced myself that nobody knows.

    I will get to the point:

    The 1990 doubts came from a programme whose transcript can be found here:

    http://www.angelfire.com/dc/gaudcert/globwarm3.htm

    Its interest is primarily historic but if anyone did not happen to see the UK Channel 4 programme ‘The Greenhouse Conspiracy’ it may be worth a read if only to judge how far we have or haven’t moved on.

    Alex

  29. Gavin, are you suggesting that the absolute temp is so uncertain as to contain within its bounds a good number of model results? Just curious, because the range given in the GISS reference would not appear to cover that.

    You know (this is to everyone), I think it is also an interesting question to ask, “why don’t the models agree with each other?” It might help us understand why they don’t appear to be very good at estimating the absolute GMST, which might help us figure out whether the values being incorrect means anything for the delta…

  30. Gavin, thanks. I take it the top panel is skin temperature, not air temperature . . . but it still illustrates your point nicely and adds an additional element that on local levels vegetation, presence of water, and topology in general can have a major effect on the absolute temperature measured.
    .
    With that being said, I do agree with Lucia that the response to William’s question is evasive. It does not address the fact that, regardless of difficulties in obtaining or defining SAT, most of the models’ predictions of absolute temperature must be incorrect because they are incompatible with each other.

  31. Lucia somehow did not include the bulk of William’s comment:

    However, I’ve just learned that GCM’s are not capable of modeling actual average global temperatures. By actual temperature I don’t mean the “anomalies”, I’m referring to the full average surface temp. Based on the GISS database, global temps over the last 100 years or so were about 14-14.5C. All but two GCM’s forecast temperature over the last 100 years to be between 12-14.5C.

    which is what I was responding to.

    The second part is a very familiar question, loosely paraphrased as “if models aren’t perfect, why are they useful” which I have devoted copious digital ink to over the years – most recently here and here. I don’t feel the need to repeat that every time someone asks basically the same question.

  32. Alexander: My problem with the “conspiracy” interpretation is that it connotes that the scientists involved are knowingly engaged in deception; i.e., that they do not believe their own published work. This is the same as an accusation of fraud.
    .
    While I may not agree with Hansen’s conclusions, for example, I feel that he believes what he is saying. Is there hyperbole involved? I think that there is. But does Hansen believe that AGW is not a concern but knowingly portrays otherwise? I seriously doubt it.
    .
    To me, there is a big difference between using hyperbole to get a point across and portraying something as dangerous (harmless) when the person knows or believes the answer to be the opposite.
    .
    I’ve poked fun at Hansen, and Mann, and (sorry Gavin . . . ) RC. I’ll probably continue to do so from time to time. But I do not feel that those individuals are committing fraud. I’d be willing to wager a great deal of money that Gavin, Mann, Hansen, Jones, [insert name here] honestly believe that AGW is a grave concern for humanity. I’d be willing to wager a great deal of money that much of their motivation for continuing to work in the field of climate science and their accompanying statements/books/press releases is a legitimate concern that not enough is being done.
    .
    If Gavin is reading, maybe this next part is helpful (or maybe not). Like you, Alexander, I’ve gone back and forth on what I believe. For the last few years, I’ve been a lukewarmer . . . or at least a skeptic of the catastrophic AGW. Over time I’ve developed what I feel are some legitimate scientific reasons for feeling that way, but what initially turned me into a lukewarmer was the hyperbole associated with AGW and the casting of doubters in catastrophic AGW as energy company moles seeking to destroy civilization for personal gain.
    .
    <–(Note: guilty of some hyperbole myself 🙂 )
    .
    So if Gavin’s still reading . . . I think he and others on the serious AGW front would find lay people and people with moderate scientific backgrounds more receptive if the hyperbole were minimized. I’ve read RC for over a year now, but have always been intimidated to post because – to be honest – anyone who shows any signs of being skeptical tends to get excoriated by the regular readers.
    .
    But enough preaching on my part . . . my only real point was that I feel guys like Hansen, et al., honestly believe what they say even if they use hyperbole to get their point across. That doesn’t qualify as a conspiracy, IMO.

  33. Gavin,

    FWIW (perhaps very little to you), I am what would be recognised hereabouts as being a ‘warmer’. I’m hanging out here because I like talking to people who are somewhat more sceptical than me about projections of AGW.

    I’m sorry to be so direct, but I think your comments on this thread seem dismissive and, thus, are not likely to be appealing to any who are still looking to come to a judgment of matters. I think ‘William’s question’ is a very reasonable one, which probably deserves a direct answer from anyone who wishes to explain their assessment of concerns. I think your dropping of links is a suggestion of ‘you should already know better’, which is potentially insulting.

    That is no way to win the matter of persuasion by reason.

    I am saying this to you because I share your concerns for the future. You may be acerbic in your response, if you wish, but I have shown you the respect of offering an honest view.

    Simon

  34. RE: RyanO (Comment#11301)
    I agree that I don’t see any malice in the pro-AGW guys, just too much doggedness over a theory that still does not have enough characterisation material to really bed it in. I normally don’t get the chance to be a bit holistic as to my attitudes to the models on CA (for good measure as well) but I’ll do it once here as Lucia is a bit more forgiving.
    As a physicist/mathematician/engineer my issue is the lack of recognising mathematically obvious relationships and how they are applied. A case in point: El Nino, La Nina, PDO etc. are treated as noise rather than there being any attempt to model them. Trends are then fitted assuming linear trend + noise and then, being thorough (a la Santer et al), confidence intervals are widened because of autocorrelation. OK, all good so far. Now take a step back.
    1) Look up the definitions of autocorrelation and convolution. They deal with higher-order periodic signals in the data. Periodic signals can be generated by what the ANOVA guys call AR-1, i.e. a first-order difference equation with a perturbation. A 2nd-order difference equation produces sine waves (among other things). Anybody familiar with control theory will automatically know this when programming realtime applications. If not, take a spreadsheet and have a play: x(n) = Ax(n-1) + Bx(n-2) etc. The reason is that ‘continuous’ calculus is only a limit of discrete calculus. Hence you can approximate many time-dependent functions with simpler iterative models. This is mathematically obvious if you go back and look at your theory. Hence know your tools before you use them.
    So if you get high-order structure in the residuals, there is something else going on.
    So why fit a linear trend if this is obvious? It’s because you are forcing a bias on your theory. Don’t.
    2) If you use AR-1 to model the residuals then you are accepting that an iterative difference equation common to chaotic weather models may have a part to play. If the data fits a certain behaviour, explore more.
    The models of linear trend + noise fit the assumption of CO2 rising and the rest is background. If on the other hand a more iterative, residual-based and sensitive model is built that tries to cover the PDO, El Nino etc., then other sensitivities like solar fluctuation and how this is coupled with deep sea currents may come into play. But even before that it is glaringly obvious that this linear trend + noise is not realistic.
    To put it frankly, as physicists can understand: ‘there is no beauty in it’.
    I suspect that people want a better model, but the thinking appears to be blinkered and does not actually recognise the interesting weather patterns we see. In addition we have a hammering home of the simple idea of CO2 causing warming with little regard for subtlety, and this is a shame.
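    To make the spreadsheet suggestion concrete, here is the same experiment as a short Python sketch (the coefficients are illustrative, chosen only to give complex characteristic roots; nothing here comes from a climate model):

```python
# Sketch (illustrative only): a 2nd-order difference equation
# x[n] = A*x[n-1] + B*x[n-2] with complex characteristic roots
# produces a damped sine wave, i.e. a "periodic signal" that a
# linear-trend-plus-noise fit would simply lump into the noise.
A, B = 1.9, -0.95          # roots of z^2 - 1.9 z + 0.95 are complex
x = [0.0, 1.0]             # arbitrary initial conditions
for n in range(2, 200):
    x.append(A * x[-1] + B * x[-2])

# An oscillating series crosses zero repeatedly; a monotone trend
# would not. Count the sign changes as a crude check.
sign_changes = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
print(sign_changes)
```

    With these coefficients the series oscillates while slowly decaying, exactly the kind of structure an AR-1-plus-trend fit treats as mere noise.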

  35. Ryan O,

    Please re-read what I said. I can not help the name of a Channel 4 programme containing the word “conspiracy”. As I said, I think it is of historic interest as some of the same arguments were being raised 19 years ago. I have made no reference to anyone that I currently know is engaged in the art of climate research.

    I just hoped that some who, being younger than I and so more personally involved in the prospect of an unwelcoming future, would be interested in a bit of history regarding the dispute.

    Much of your post seems to be addressed to Gavin and I could not possibly comment.

    BTW, if you do not remember the 1990 programme I suggest you have a look. It is what I intended. It was influential; I can not say if on balance it is right or wrong.

    Alex

  36. Gavin,
    .
    I think I arrived at a different interpretation of William’s question. The quote you reprinted was William explaining the reason for his question and doesn’t seem to relate to the specific discussion at hand. His question (IMO) wasn’t in the typical “models are useless” vein . . . it was directly asking why if the models don’t match observations would we believe their predictions.
    .
    However, the explanation you provided shows how the measurement method for SAT can introduce a bias. This does not address the root of the question, which is: Even if the bias could be properly accounted for and a no-kidding SAT could be obtained, most of the models would not match, and should that fact not cast some doubt on their predictive ability?

  37. Alexander: Sorry if you felt that I was calling you out. Not my intent. It was over-editorializing on my part regarding the content of the program (I do remember it vaguely . . . I know I saw it). You’ve made no statements that you believe there is a “conspiracy” . . . so I didn’t mean to attribute that to you.
    .
    It was more of a general comment that the polarization of folks into camps with regard to AGW, with both sides alleging deceit by the other, is inaccurate and counterproductive. I did not intend to attribute any of that to you. I apologize because I did not clearly state that and I can see why you may have felt that my post was directed at you. 🙂

  38. Gavin

    The second part is a very familiar question, loosely paraphrased as “if models aren’t perfect, why are they useful” which I have devoted copious digital ink to over the years

    Adding the first part only re-emphasizes that what William was asking was precisely the question I quoted, which was:

    If GCM’s cannot accurately describe actual current temperatures how can they accurately describe future climate?
    Thanks
    William

    I don’t see how you believe the second part of the bit you quote translates into “if models aren’t perfect, why are they useful”, but even if it had, you didn’t answer that either.

    I am also aware that you constantly translate questions about accuracy, precision and whether or not we can expect the projections to fall within any particular range of reality into “if models aren’t perfect, why are they useful”. You then proceed to answer this question, which you seem to see all around you and which as far as I can tell is almost never asked.

    If you choose to misunderstand perfectly clear questions and answer that other question, that is your prerogative. But you can’t expect people not to notice that you do so. You will also find that if you keep doing what you do, people will continue to repeat the questions they really wish to ask.

  39. Confirming Ryan O,

    Given that the models do not agree on the global mean temperature what is the justification that gives us confidence that they can predict climatic changes.

    To all,

    Gavin gives two links where he has answered this question:

    here:

    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    and here:

    http://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/

    could someone read these and let us know where he answers this specific question. I apologise for not doing it myself but my eyesight is not what it was and I simply can not read through two long blogs at this hour (UK). If my age bars me from joining in fully then so be it.

    Alex

  40. Joe–
    I think the authors of the IPCC word their descriptions to put the models in the most favorable light possible. Yes. I don’t simply imply this. I say it flat out. The spin in those sections is not necessarily heavy, but it exists.

    I doubt the spin is intentional. I think the sections were written by modelers, and the models are their babies, and they tend to describe them the way mothers describe their children: as glowingly as possible. Mothers do this because they are somewhat myopic about their children’s faults; modelers can be too. Neither group thinks they are being intentionally misleading when they do this. So… no. I don’t think the authors of the IPCC reports intentionally misled.

    However, though they likely did so unintentionally, I do think the authors of the IPCC wrote the paragraphs in a way that would lead most casual readers to imagine the discrepancies were roughly half as large as they are, and they put most of the figures in the appendix where only a minority of readers would look at them.

    In contrast, graphs that show models in a more favorable light are highlighted in the body of the report, not buried in an appendix.

  41. Alexander Harvey (Comment#11296)

    Hi Alex,

    When you refer to ‘The Greenhouse Conspiracy’ C4 programme, do you mean, perhaps, ‘The Great Global Warming Swindle’? –

    http://en.wikipedia.org/wiki/Global_Warming_Swindle

    If so (and sorry if I’m guessing wrong), I think it’s fair to say there’s been considerable criticism of its scientific claims. The ‘full Monty’ can be read here, in a 123 page complaint to OFCOM –

    http://www.ofcomswindlecomplaint.net/FullComplaint/TOCp1.htm

    (Ofcom didn’t find against the programme since, to simplify, it judged it to be an ‘opinion piece’).

    Personally, I completely lost all respect for the programme and all those associated with it when it was realised that a crucial graph on the correlation between solar output and temperature had been fabricated (apparently the graphic designer had ‘filled in’ data and timescale to make it look better. Hmm).

    But perhaps I’m talking about a different programme!

  42. MC: I’m with you, buddy. The nice thing about noise is it lets you be further off the mark and still have statistical significance. Call it a signal, however, and the game changes. With the heavy use of PCA in climate science of late, I would think that someone would begin applying it for the purpose it originally came about – signal processing – to at least empirically identify periodic processes in the climate that are currently considered “noise” (even if a physical cause is not assigned due to lack of information about the underlying process). That would be a decent step toward developing models to handle these phenomena.
    .
    My degree, by the way, is in physics. My work experience is nuclear, mechanical engineering, and (of late) process control and powder metallurgy. The types of analyses done on climate information wouldn’t pass muster in any of those disciplines.
    .
    When it comes to trends . . . well, I’m more of a delta-T guy than a trend guy with the T-nought and T-x being calculated over various periods and with varying timeframes to examine the effects of start/endpoint choices and higher-order processes that may not be well characterized. Personal preference, I suppose . . . but it sure seems easier to keep it straight in your head when working that way rather than working with linear trends if the underlying process is not known to be linear. And the range of statistical tools for analysis is broader for comparisons of that type.
    .
    Regardless, sounds like you know full well (probably through experience) that failure to characterize higher-order processes can lead to wildly inaccurate extrapolations. I’ve learned that the hard way myself.
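    As a toy illustration of that delta-T approach (entirely made-up series; the helper name delta_t is mine, not from any analysis package):

```python
import math

def delta_t(series, window):
    """Difference between the mean of the last and first `window` points."""
    t0 = sum(series[:window]) / window
    tx = sum(series[-window:]) / window
    return tx - t0

# Toy series: a 0.01-per-step rise plus a periodic "oscillation".
series = [0.01 * n + 0.2 * math.sin(2 * math.pi * n / 60) for n in range(120)]

for w in (5, 10, 30):
    print(w, round(delta_t(series, w), 3))
```

    Varying the window length shows how much of the apparent change comes from the underlying rise versus the phase of whatever periodic process is riding on top of it.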

  43. Alexander:

    Gavin gives two links where he has answered this question:

    here:http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    and here:

    http://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/

    could someone read these and let us know where he [i.e., William] answers this specific question.

    The first link you gave is to a list of FAQ’s here:
    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/
    The idea that info on this page answers William’s question is laughable.

    The second link is:
    http://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/
    Also laughable.

    I have commented on the practice of “answering by providing links to pages and articles that do not answer the question” before. It is a frequent practice … by …. some…. people…..

    With some frequency, the text surrounding the links suggests the person answering has been answering the same thing over and over and over etc. It’s not a good way to persuade. Needless to say, people come back and continue to ask the question similar to those they asked before but which have not been answered.

  44. Simon (Comment#11316):

    Err, no. I meant the 1990 programme, not the recent one. As I indicated, perhaps you should click the link and have a read.

    It has quotes from the likes of:

    PROF. RICHARD LINDZEN & DR ROY SPENCER
    and perhaps others who are still in the debate nearly 19 years later.

    But you were right to ask if I meant the recent Channel 4 offering, because the debate was very similar, but no doubt some details have changed.

    Alex

  45. Lucia:
    “No, you have not identified the problem with the text. The problem with the text is it is written to studiously avoid specifically mentioning the magnitude of the “larger errors” (5C or possibly more) and instead mention a lower number (3C). That is to say: It is written to be accurate but convey a false impression. That is “accurate but untrue”.”

    So you don’t like my interpretation? It fits their figures… Read it again.
    This is the IPCC text.
    “Individual models typically have larger errors, but in most cases, less than 3C except at high latitudes (see Figure 8.2b and Supplementary Material, Figure S8.1). Some of the larger errors occur in regions of sharp elevation changes.”

    Individual models have large errors, but in most cases [those errors are], less than 3C except at high latitudes [where those errors are (often?) larger]. [In addition] Some of the larger errors occurred in regions of sharp elevation changes.

    This is an entirely appropriate use of that text. You have simply decided they are trying to hide their errors. Weird. Look at their first sentence:
    “Individual models have large errors”
    Yep, they do
    “But in most cases, are less than 3C”
    Yep, in most cases the errors are less than 3 C

  46. Lucia,

    Thanks again for taking the time. I had read those “faqs” (is that pronounced facts?) before and I could not remember anything specific about the global means. But now I do remember, I mentioned them:

    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

    comments 443, 452, 453, and perhaps others. I do not recall that I solicited much comment.

    My poor tired eyes thank all who take the time to read and relay.

    Alex

  47. Alexander Harvey (Comment#11327)

    My apologies for the false assumption. I did check your link but thought it was some part of the ‘Swindle’ transcript – as you say, similar arguments. I do live in hope of one or another of the ‘alternative hypotheses’ developing some substance (because otherwise I am very worried), but we shall see. Regards, and good night.

  48. Lucia
    And this comment is hilarious:
    “to studiously avoid specifically mentioning the magnitude of the “larger errors””
    I mean they may not have written what the largest error was, but the figures clearly show the magnitudes of the errors. Do you not think that figures ‘mention’ things? Are they separate from the text?

  49. Nathan (Comment#11329)

    You seem to be in possession of a level of knowledge that I do not have.

    Do you know that what they say (regardless of which interpretation you might wish) about the discrepancies is true?

    Now I do not know; if you do, can you put some more quantitative figures to the assertions?

    Given that some of the models are 1C-2C off on average, I would be surprised if the >3C errors are largely confined to the poles and the mountains.

    But as I said, I do not know, if you do, please be more specific, or if you do not, perhaps be less emphatic.

    Alex

  50. Simon,

    No problem, just pleased to know someone read my comment.

    Alex

  51. When I took the SAT a few, er many years ago… I appended a little note to my test paper:
    “In individual questions I typically have errors, but in most cases, less than one or two letters away from correct, (a to c, or b to d, or vice versa) except on the really difficult questions, which I skipped (see Questions 32, 34, 76 and 80-126) Some of the larger errors occur in questions that contained the phrase “all of the above”. Other than that I think I did really well and worked really hard so please give me a 1400.
    Thanks,
    Mike Briant… oops misspelled my name”

  52. Nathan (Comment#11333),

    Once again, is this what you know or what you believe?

    You say: “but the figures clearly show the magnitudes of the errors.”

    Do they? I doubt it. I can not see the clarity. They imply, we infer. With that sort of clarity we can all walk away with our prejudices intact.

    Like I say I do not know but:

    “Individual models typically have larger errors, but in most cases, less than 3C except at high latitudes (see Figure 8.2b and Supplementary Material, Figure S8.1)”

    Is so poorly worded as to be next to useless.

    What does it mean?

    “Individual models typically have larger errors” ==> models normally/typically have large errors

    “but in most cases, less than 3C except at high latitudes” ==> “in most models” or “most of the errors”?

    Well, you can not enumerate contiguous temperature errors. Is an error in North Greenland and one in South Greenland one error or two? So I guess it means “in most models”, and I really wonder if that is the case.

    Alex

  53. And this comment is hilarious:
    “to studiously avoid specifically mentioning the magnitude of the “larger errors””

    .
    I am willing to entertain the notion that my eyes may need calibration, but if I just look at the plot of CRU minus the mean model, I would have to say that the IPCC’s definition of “most” must mean “infinitesimally larger than 50%”. The first two color codes comprise an error of +/- 5C . . . and if those colors cover much more than 50% of the surface then my real identity is Boris. Virtually none of the Atlantic is coded by those colors, a huge swath of the Pacific is outside those bounds, and almost all of Europe, China, and Africa are similarly outside those limits. Antarctica is totally wrong.
    .
    To say that “most” of the errors are within 3C may be technically accurate, but gives an incorrect impression of how well the models match observations. I agree with Lucia . . . the text is misleading.

  54. So you would all rather the interpretation that disagrees with the figure?
    I don’t know what to say.

    “Individual models have large errors, but in most cases, less than 3C except at high latitudes. Some of the larger errors occurred in regions of sharp elevation changes.”

    that makes perfect sense to me:

    In a model the errors are large
    (yes, we can see that they are large – unless you want to debate what ‘large’ means)

    but most of the errors are less than 3C
    (yes, most do look less than 3C on that figure – this also implies that 3C is large, but perhaps not so large as to be troubling)

    except at high latitudes
    (so at high latitudes the above note about the size of errors is not applicable; we could assume then, since they are actually talking about it, that the errors are typically larger)

    If you think it’s misleading, great. But it looks pretty simple to me. Sure you could rephrase it slightly to get rid of any ambiguity (there are too many commas), but hey that’s not a crime, and you can actually get the right sense if you think for a minute or two.

  55. Nathan, the temperature difference between models, as shown by Lucia in the other thread, is about 3.5C for the average world temperature. What a difference of 3.0C is claimed to mean for the earth (for the Greenland ice sheets, for example, and the roughly 80 feet higher sea levels that should follow) is listed below in 2 examples. Many more are available. The second is a favorite of mine. It indicates that we should have either an ice world or a hothouse world, each having the same weight in the IPCC Bayesian approach. The claim is the physics are right. If the physics are right, including albedo, the models cannot all be right, though one may be. The graph indicates either we have some at the ice-world end or some at the hothouse end, and in either case the albedo and the effect of the oceans are, according to the modellers and the physics, significantly different. That is my point.

    A recent report called Avoiding Dangerous Climate Change, produced by the Hadley Centre, one of the top world centres for projecting future climate, modelled the likely effects of a 3C rise. It warned the situation could wreck half the world’s wildlife reserves, destroy major forest systems, and put 400 million more people at risk of hunger.

    http://www.scitizen.com/screens/blogPage/viewBlog/sw_viewBlog.php?idTheme=13&idContribution=132
    James E. Hansen is the lead climate scientist and director of the NASA Goddard Institute for Space Science. He comments for Scitizen on a draft report of the Intergovernmental Panel on Climate Change (IPCC), released last month by the US government. Scientists estimate that a doubling of carbon dioxide levels would cause an increase of about 3 °C. What does it mean? Simply put, the Earth 3C warmer would be a different planet than the one that we know. There would be no sea ice in the Arctic in the summer and early fall. The ice sheets on Greenland and Antarctica would be undergoing rapid melting. The last time that the Earth was 3C warmer was at least 3 million years ago, during the Pliocene, when sea level was 25 meters (plus or minus 10 meters), i.e., about 80 feet, higher than today.

  56. If anyone is interested in the TV prog from 1990 “The Greenhouse Conspiracy” is it still available here:

    http://video.google.com/videoplay?docid=-5949034802461518010

    Personally I think it is worth the hour just to see how the debate has and has not moved on in nearly 19 years. I have just watched it again, but I have seen it before; to others who are much younger it may be an illuminating watch.

    Alex

  57. Re: My Last,

    If anyone does watch the recording, the Met model is shown predicting rainfall in parts of the Sahara similar to that of the British Isles.

    This sounds like a real howler, except it is not necessarily spatially wrong but merely temporally wrong: by about 5000 years or more, perhaps back when savannah megafauna could still inhabit that area.

    That did not stop them being confident about a 2-5C warming by 2100 then, so why should lesser discrepancies bother them now.

    Alex

  58. Nathan– What Ryan is saying is that the text does not match the figures. In any case, most of the figures are in the appendix, far from the text. So only the most motivated readers would simultaneously read the text and the figures.

    The text appears accurate (if we define “most” as 50.000000001% or more and mentally account for all the large areas subtracted in the litany of exceptions in the sentence). It is simultaneously untruthful if we go by the impression the reader would glean.

    You can keep repeating that it is accurate. Fine. But there are plenty of people who understand the meaning of “accurate but false”. Google around. You’ll see this is a widely understood concept in journalism and, for that matter, among cigarette manufacturers. “Accurate but false” is the technique behind the long discussions explaining that cigarettes weren’t proven to cause cancer that filled the airwaves back in the early era of the cigarette wars.

    So… yes… keep telling us it’s “accurate”. And I’ll keep pointing out it’s “accurate but false”.

  59. Thanks Alex. I have seen it before and agree. It is amazing, after nearly 20 years, just how much talking past one side or the other seems to be going on. Of course, in something that has taken about 100 years to change about 0.6C in a world that spans about 80C in its extremes, it is easy to understand why this occurs.

  60. Ryan (comment #11273)

    Representing the Stefan-Boltzmann simple sphere with a globe with multiple layers of atmospheric gases (somewhat differently for each model) guarantees that predicted GMST will be ‘a little’ different from model to model – and from earthly ‘surface’ observations, no matter how the observations were collected (your comment #11276). This is because in each model, what ‘layer best corresponds’ to the earth’s ‘surface’ will be somewhat different and uncertain. At the same time, it’s less surprising that the trends should be highly correlated, except for an approximately constant offset, because they’re using the same physics.

    In this sense, your answer and mine are roughly equivalent. But, despite Gavin’s approval of yours, I think mine more directly addresses the confusion that bothered William.
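    A quick sketch of why a constant offset leaves the trends untouched (toy numbers, not model output):

```python
# Toy illustration: two series differing by a constant offset have
# identical OLS trends, which is why anomaly trends can agree even
# when the models' absolute GMSTs differ by a degree or two.
def slope(ys):
    """Ordinary least-squares slope of ys against x = 0, 1, 2, ..."""
    n = len(ys)
    xm = (n - 1) / 2
    ym = sum(ys) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(ys))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

obs = [14.0 + 0.02 * yr for yr in range(100)]   # toy "observed" series
model = [y - 1.5 for y in obs]                  # same trend, 1.5C colder
print(slope(obs), slope(model))
```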

  61. Nathan/ Alexander

    You say: “but the figures clearly show the magnitudes of the errors.”

    Do they? I doubt it. I can not see the clarity.

    It’s worth remembering that while I place the images near the text in my blog post, the text is in chapter 8 and the images in the supplemental materials. On the web, the supplemental materials are a separate pdf and must be downloaded separately. So, unless a reader is motivated to download and view the additional graphs, they won’t see them.

    Those who do download them are likely to think the text does not match the figures. The argument that it’s ok for the text to mislead because it’s possible for a reader to hunt down the figures in a separate document is a bit… much.

  62. Len,

    Your:

    “This is because in each model, what ‘layer best corresponds’ to the earth’s ’surface’, will be somewhat different and uncertain.”

    I am not sure there is much question about this:

    For instance, in wikipedia world, the atmospheric model (HadAM3) of HadCM3 is said to have just 19 layers; see the right-hand scale here:

    http://upload.wikimedia.org/wikipedia/commons/f/f8/Hadcm3-jja-djf-zonal-mean-t.png

    I do not think any of the layers would correspond to where weather stations measure the temperature. I suspect (hope?) they have a surface temperature separate from any of the atmospheric layer boxes.

    Alex

  63. Lucia,

    FWIW, and I do not want to labour the point: the map projections appear to be similar to the Robinson projection, not an equal-area projection.

    In reality Greenland is less than one third the area of Australia. So I do not know how one could assess where most of the surface is let alone most of the discrepancy.

    Alex

  64. RE Gavin #11299

    I think the quote actually is

    “All models are false but some models are useful”. George Box (statistician) but then again he’s also credited with the quote “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful”.

  65. Lucia,

    However, though they likely did so unintentionally, I do think the authors of the IPCC wrote the paragraphs in a way that would lead most casual readers to imagine the discrepancies were roughly half as large as they are, and they put most of the figures in the appendix where only a minority of readers would look at them.

    In contrast, graphs that show models in a more favorable light are highlighted in the body of the report, not buried in an appendix.

    I was curious enough about the purported exaggerations in the report that I decided to run the numbers. I picked one model (because I don’t have a lot of time for this), and chose GFDL_CM2.1 because it looked to be representative of one of the cold-biased models. Here’s a table of the error temperature and the fraction of the earth’s surface where that error occurs:

    |T error| (C)   % of earth’s surface
    ———————————————————————
    >1              51.5
    >2              21.7
    >3               9.9
    >5               1.8

    And most of the large (>3) errors occur in the Antarctic where the observational data is not well sampled (as mentioned in the report)

    This seems to contradict your exaggeration claim. It would be interesting to repeat this with some of the other models, if someone has the time.
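    For anyone who wants to repeat the tabulation on other models, the essential step is weighting by cos(latitude), since on most map projections the polar cells look far bigger than they are. A minimal sketch with made-up error data (illustrative only, not the script used for the table above):

```python
import math
import random

def frac_above(err_grid, lats, threshold):
    """Area-weighted fraction of grid cells with |error| > threshold.
    On a regular lat-lon grid, cell area scales with cos(latitude)."""
    total = flagged = 0.0
    for row, lat in zip(err_grid, lats):
        w = math.cos(math.radians(lat))
        for e in row:
            total += w
            if abs(e) > threshold:
                flagged += w
    return flagged / total

# Made-up 2-degree grid of model-minus-observed errors, just to
# exercise the function; real use would read gridded model output.
random.seed(0)
lats = [-89 + 2 * i for i in range(90)]
errs = [[random.gauss(0, 2) for _ in range(180)] for _ in lats]

for t in (1, 2, 3, 5):
    print(t, round(100 * frac_above(errs, lats, t), 1))
```

    Without the cos(latitude) weight, an eyeball estimate off a Robinson-style map overcounts Greenland and Antarctica, which is exactly Alex’s point about projections above the fold.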

  66. Joe

    Interesting. I’m not sure why you consider that one “typical”.

    I’d like to point out you picked an individual case that, based on eyeball inspection of the graphs in the supplemental materials, happens to be more favorable than others to making the words in the IPCC text seem not exaggerated.

    Note also that you picked GFDL 2.1 as opposed to GFDL 2.0.

    The image above confirms that GFDL 2.1 is one of the cases that, while on average off by quite a bit, manages to be off fairly uniformly over the globe. So, obviously, that was not one of the cases I would have used as an example of how the text does not fit.

    Assuming my eyeballs are calibrated with the colors, the entire northern hemisphere is off by 4C for GFDL 2.0. Or not. Graphics are not the best way to provide quantitative information to people.

    I never said none of the models match the text. Some aren’t too badly matched. But as a generality, the text is not quantitative and is certainly worded to distract from the truth, which is that some models have very large regions that are off by at least 4C.

    As you might note in my previous post I said:

    So, my contention is this: The IPCC, at least in the AR4, tended to avoid describing the level of accuracy in easily digestible form.

    This was not always the case. Atmoz provided a link showing that a clearly labeled Taylor diagram was included in Chapter 8 of the IPCC’s Third Assessment Report (TAR). These sorts of diagrams permit readers to quickly assess how closely each model matches the observed data overall.
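    For readers unfamiliar with them: a Taylor diagram places each model at a point determined by two statistics computed against the observations, which can be calculated directly from the fields. A minimal sketch (function name and toy data are my own, not from the TAR):

```python
import numpy as np

def taylor_stats(model, obs, weights):
    """Return (pattern correlation, std-dev ratio) of model vs. observations.

    model, obs: 1-D arrays of the (flattened) fields.
    weights: area weights for each point (e.g. cos(latitude)).
    These two numbers fix a model's position on a Taylor diagram.
    """
    # Remove the weighted mean so we compare spatial patterns, not offsets.
    m = model - np.average(model, weights=weights)
    o = obs - np.average(obs, weights=weights)
    std_m = np.sqrt(np.average(m * m, weights=weights))
    std_o = np.sqrt(np.average(o * o, weights=weights))
    corr = np.average(m * o, weights=weights) / (std_m * std_o)
    return corr, std_m / std_o

# Toy example: a "model" that doubles the observed pattern has perfect
# correlation but twice the spatial variability.
obs = np.sin(np.linspace(0.0, 3.0, 50))
corr, ratio = taylor_stats(2.0 * obs, obs, np.ones(50))
print(round(corr, 3), round(ratio, 3))  # -> 1.0 2.0
```

    The point is that these are trivial to compute once you have the gridded fields, so omitting the diagram from the AR4 was a choice, not a technical obstacle.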

    Had they done so, we wouldn’t need to parse the precise meaning of “most” or “with few exceptions”. (Is 50.00001% enough to say “most”? Is 10% “few”? Is 49% “few exceptions”?)

    Beyond arguing whether “most” or “few” is precisely correct, we also wouldn’t need to parse whether the “few exceptions” refers to “few models” or to “even in those few models that don’t… there are few exceptions,” etc.

    Many models have errors larger than 3°C in regions outside the high latitudes. Do “most” have no errors greater than 3°C outside the poles? It doesn’t look that way to me. But if you want to create the Taylor diagram I thought the IPCC authors should have provided, that could clarify this.

  67. You’re splitting hairs, Lucia. The IPCC report stated:

    Individual models typically have larger errors, but in most cases still less than 3 deg C, except at high latitudes

    I think most unbiased observers would agree that 90% is equivalent to “most.”

    And even using your CM 2.0 example (which is one of the poorer performing models), I came up with 79% of surface area with temperature errors < 3 degrees — which, for reasonable people at least, is equivalent to “most.”

  68. Joe–
    No, Joe. I’m not splitting hairs. The IPCC says this:

    Figure 8.2a shows the observed time mean surface temperature as a composite of surface air temperature over regions of land and SST elsewhere. Also shown is the difference between the multi-model mean field and the observed field. With few exceptions, the absolute error (outside polar regions and other data-poor regions) is less than 2°C.

    The portion above is clearly discussing Figure 8.2a, which compares the multi-model mean to the observations. The claim of absolute error less than 2°C, with the caveat of “outside polar regions and other data-poor regions,” applies to this error. In this section, the “exceptions” applies to regions in the field, and must, because they are discussing only one case: the difference between the multi-model mean and the observations.

    The next sentence switches.

    Individual models typically have larger errors, but in most cases still less than 3°C,

    “Cases” refers back to “individual” models in this sentence. It doesn’t mean “regions”. There are now many “cases” so it’s possible to have “most cases”.

    “Larger errors” presumably refers back to errors larger than 2°C discussed in the previous sentence. So, the second sentence is saying that in most models, the errors larger than 2°C are nevertheless less than 3°C.

    That claim would be incorrect. In practically all models there are errors larger than 3°C. However, I haven’t yet applied the caveat:

    except at high latitudes (see Figure 8.2b and Supplementary Material, Figure S8.1).

    So, now we should ignore the errors in high latitudes. High latitudes are presumably the polar regions.

    But, if we look at the figures in 8.2b, almost all models still have “larger errors” (i.e., greater than 2°C) that are also larger than 3°C. In fact, if you look at the figures and draw lines to exclude the polar regions, you’ll see that practically all models have some “larger” errors, in regions outside the polar regions, that are larger than 3°C.

    To claim the statement is not misleading, you seem to be interpreting it as saying something like “If we make a histogram of the errors, in most models, more than 50% of the errors are less than 3°C.” That’s not what the paragraph said. Maybe that’s what the authors thought, but they didn’t have a good copy editor, or they miscommunicated with the copy editor.

    In the end, they said something that’s not right.

    Mind you: Had the figures been placed near the paragraph, I might cut slack for poor wording. In that case, the reader could sort it out. But the figures in 8.2b are in an entirely different section. So, they are placed in a location where readers are unlikely to mentally correct for the incorrect wording.

  69. Lucia, I agree with you.

    When I tried reading chapter 8, I was appalled. I felt like Alice through the looking glass: everything I knew about statistics and validation was completely distorted. Spaghetti graphs of models around their mean are apparently supposed to substitute for error analysis, or at least that is how I read it. I gave up on the maps.

    The question of how the trends from a model can fit the data when the model itself does not fit the data is not answered by this method.

  70. What to think of this:

    Roy Spencer announced, about 5 days ago, a +0.350°C anomaly for February, and the file updated last night shows that. At the same time RSS updated theirs, and it shows a +0.230°C anomaly. Who’s got the superior algorithm?

    http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2

    Pavlov’s dog is salivating by the bucket.

    Edit: Oddly, the current file shows no prior-month adjustment, which UAH does have, from +0.307°C to +0.304°C. An upload boo-boo, maybe ;>)

  71. Lucia
    “The next sentence switches.

    Individual models typically have larger errors, but in most cases still less than 3°C,

    “Cases” refers back to “individual” models in this sentence. It doesn’t mean “regions”. There are now many “cases” so it’s possible to have “most cases”.

    “Larger errors” presumably refers back to errors larger than 2°C discussed in the previous sentence. So, the second sentence is saying that in most models, the errors larger than 2°C are nevertheless less than 3°C.”

    I still think you have simply misinterpreted what they wrote. Why do you think they don’t mean regions?

    ” Individual models typically have larger errors, but in most cases still less than 3°C,”
    I think the ‘cases’ are the errors in regions; that is, most errors (between the model region and the observed region) in each model are less than 3°C.

    So in total:
    “Individual models typically have larger errors, but in most cases, less than 3°C except at high latitudes (see Figure 8.2b and Supplementary Material, Figure S8.1). Some of the larger errors occur in regions of sharp elevation changes”
    I read that as saying most of the errors in each model are less than 3°C, but at high latitudes, and sometimes in regions of sharp elevation changes, you get bigger errors. This actually fits their figures, and seems to be confirmed by Joe above.

  72. Nathan,

    If they meant regions, their writing is extremely poor. Ordinarily, when using an ambiguous word like “cases”, you don’t leap back past both the subject and the object of the same sentence to refer to the previous sentence.

    Moreover, under your reading, the word “cases” must now communicate something quite complex: “most regions in most individual models”.

    Anyway, why substitute the single word “cases” for the single word “regions” if they mean regions? Just to be confusing?

    I think your reading is not the one most people would take on a first reading. That said, it may be what they meant. Had the figures been placed anywhere near the text, that might have helped clarify the issue for people. But, as it stands, by assuming the ambiguous word “cases” refers back to a nearby candidate rather than one further away, the sentence is false and conveys the notion that the errors are smaller than they really are.

Comments are closed.