Fact 6A: Model Simulations Don’t Match the Average Surface Temperature of the Earth.

This morning, Alexander Harvey asked,

“I would really like to see the temperature records (we have more than one world it seems) expressed in absolute terms and comparison between them and the model outputs performed in absolute terms. [. . .] I do not know if the spread in the CMIP3 data is as broad as I have not seen the data expressed in absolute terms.”

Below, I show the 12-month average temperatures from simulations and from GISSTemp in non-anomaly degrees C:

Figure 1: IPCC Model Simulations Prediction of Earth Surface Temperature

To create the GISSTemp (i.e. measured) trace, I added 14 C to the anomalies; 14 C is what GISSTemp reports as the best estimate for the average temperature from 1951-1980. The GISSTemp value is shown in dark blue.
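For anyone who wants to reproduce the shift, the conversion is nothing more than adding the baseline to each anomaly. Here's a minimal sketch; the anomaly values are made up purely for illustration:

```python
# Convert GISSTemp anomalies (relative to the 1951-1980 mean) to
# approximate non-anomaly temperatures by adding the 14 C baseline.
# The anomaly values below are invented purely for illustration.
BASELINE_C = 14.0  # GISS best estimate of the 1951-1980 global mean

def anomaly_to_absolute(anomalies_c):
    """Shift anomalies (deg C) to approximate absolute temperatures."""
    return [a + BASELINE_C for a in anomalies_c]

print(anomaly_to_absolute([-0.10, 0.25, 0.43]))
```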

The remaining traces are based on model-run data downloaded from The Climate Explorer.

If anyone knows the baseline temperature for HadCrut, point me to it, and I’ll add HadCrut.

If most of the models were correct, the coal protestors in DC would probably be even colder today. Alas, most of the models are unable to predict the average global surface temperature.

Note that the UK Met office did not include this information when trying to support their claimed “Fact 6: Climate models predict the main features of future climate”.

Comment on the MetOffice Pamphlet

Some will recall that Alexander’s question appeared on a post discussing the Met Office’s pamphlet presenting “Facts” about global warming.

The Met Office’s Fact 6 is “Climate models predict the main features of future climate”.

This is a particularly odd sort of “fact”. It’s not clear to me how they can even begin to prove any claim that models can predict the main features of future climate. If we go by past IPCC projections, models have a less-than-wonderful track record predicting the future — as in things that happen after climate models are run.

The narrative in the UK Met Office’s “Fact” page seems to explain that the models can “predict” some select features of past climate, particularly when those features were already known before the models “predicted” them.

That said, there are a number of historical observations of climate that models “predict” poorly.

One of these is the average global surface temperature in non-anomaly degrees C for the entire 20th century.

My understanding is that letting people learn that models don’t predict the surface temperature accurately is considered “confusing”. One is not supposed to discuss “confusing” things, as it could “confuse” people, and “confused” people may come to believe that models based on physics can produce biased predictions of global surface temperatures in non-anomaly degrees C.

So, now that you’ve seen the graph shown above please forget about it. After all, remember it will just cause “confusion”.

69 thoughts on “Fact 6A: Model Simulations Don’t Match the Average Surface Temperature of the Earth.”

  1. Lucia,

    I must thank you for taking the effort. It is a graph I have been hoping to see for some while.

    I will write a little more when I can

    Alex

  2. Alexander– I already had the data and the program. So, it was just a matter of running the program to create this particular graph. I usually show anomalies. But it’s true that if we don’t use the anomaly method, the models don’t look too impressive.

    This is particularly so since we can get within 33K without accounting for the atmosphere at all. For the most part, we can explain the entire 33K without using climate models at all!
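    The 33 K figure can be checked with textbook numbers alone: the effective radiating temperature of the planet follows from the solar constant and albedo, with no atmosphere in the calculation at all.

```python
# Effective radiating temperature of an airless Earth from energy balance:
# absorbed solar = emitted thermal, i.e. S0*(1-albedo)/4 = sigma*T^4.
# Standard textbook values; no climate model involved.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

T_eff = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
T_surface = 288.0  # observed global mean surface temperature, K
print(round(T_eff, 1), round(T_surface - T_eff, 1))  # ~254.6 K, ~33 K
```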

  3. Interesting approach. Hat tip to Alex and kudos to Lucia.
    There are several features that (by eye-balling) stand out:
    1) The general properties of the models seem to describe the general direction of the earth’s temperature reasonably well (albeit quite a number of models seem to overrate the warming these days).

    2) The majority of the models seem to underestimate the historical temperature record as such.

    Can particular properties in the models that contribute to this be identified?

    Cassanders
    In Cod we trust

  4. Hadley’s updated its global average temperature to 2008. Seen anything like it before?

    I did email them and ask for the absolute values. They’re not what I want, apparently.

  5. Cassanders–
    I’m sure modelers have been trying to figure out what it is that causes any individual model to be so far off on the average temperature during the 20th century. If they’d correctly identified the problem, it would be fixed.

  6. This is just amazing, incredible. I hope I can close my mouth by dinner time, or I will be unable to chew… Climate Science has never been as corrupt as it is today. We are living interesting times indeed.

  7. Lucia,

    This is from the PCMDI website:

    “The range among the models of global- and annual-mean surface air temperature is rather surprising. Jones et al. (1999) conclude that the average value for 1961-1990 was 14.0°C and point out that this value differs from earlier estimates by only 0.1°C. ”

    &

    “It therefore seems that several of the models (which simulate values from less than 12°C to over 16°C) are in significant disagreement with the observations of this fundamental quantity. Reasons for this situation are discussed briefly by Covey et al. (2000) in the context of the CMIP1 models. A natural question to ask is whether the spread in simulated temperatures is correlated with variations in planetary albedo among the models. Unfortunately, the CMIP1 and CMIP2 database does not include the energy balance at the top of the atmosphere.”

    Here is a link:

    http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/ms_text.php

    I presume the first quote’s reference to Jones et al. (1999) might imply that Hadcrut also has a 14C offset.

    The second quote may imply a real problem for the modelers: they simply do not know the physical characteristics of the Earth well enough to get the temperature right. And to be frank, I am surprised how close they do get. The problem is unlikely to go away all that soon, as you only need to be out by a small fraction to produce unrealistic climates.

    I have pondered the question of validating the models, and I have considered whether I could tell the difference between living in a HadCRUT or GISS world and one of the model worlds. The answer, for some of the models, is a resounding “YES”.

    The view that the models can produce a realistic climate without being able to forecast the weather is, I suspect, demonstrably false. The question as to whether they can predict climatic change without being able to predict the climate is a “different question”. The modelers obviously believe so, and their reasons need to be listened to.

    Alex

  8. Alex–

    I wouldn’t call this level of agreement close. Bear in mind that you can get about this close without any detailed model of the atmosphere. We also predict warming with models that capture very few details about the physics.

    I agree that we should listen to the reasons why modelers believe they can accurately predict the trend in GMST while being this far off on their prediction of GMST.

    I’d be happy to read or listen to an explanation for the theory of getting the trend right while getting the base level wrong by modelers. Do you know of such an explanation? As far as I can tell, it’s “explained” by avoiding showing graphs like the one I posted above.

  9. Most of the models think the planet is cooler than it is, and all the models think a rapid rise has already begun. The GISSTemp has been changing much more slowly than models think it should.

  10. Lucia,

    I do what I can to be as fair as I can be. When I first found out about the spread (5 C in CMIP2) I was quite shocked, as I dare say others will be startled by your graphic. Then I had to try and think my way into the mind of a modeler, or more accurately how I would think and feel if it were my model. My first conclusion was:

    These people have a good deal of integrity that they are sometimes suspected of not having. By this I mean that if there was a knob that could be turned to get the mean temperature right lesser folks would have tuned the temperatures in.

    The second was that the debate over whether they could predict the climate without being able to predict the weather was spurious as they can do neither.

    There remains the question of whether they can predict change without being able to predict the status quo. Well it is an open question. I suspect they have this debate but not it seems in public. I have not heard the rationale but I think (hope) they must have one. I could only speculate but I won’t. I really would like to hear their thoughts on this point.

    I think you will find that I do not disagree with you all that much. I simply have a very different style. I sometimes damn with faint praise. But I do feel for them.

    Regarding simple models: I think it is a great pity that the “simple model” or “climate model”, as opposed to the very-long-range weather forecast, is not pursued with more vigour. The combination of a half-decent atmospheric model and a diffusive ocean can, as you rightly say, produce equivalent results on a global scale. Unfortunately, many examples of simple models in academic papers are so unrealistic that they give the craft a bad name.

    Alex

  11. Lucia,

    Whether it took a little effort or a lot, thank you for this thread as I think it may help the cat to get in amongst the pigeons.

    Alex

  12. Alex–

    Of course many modelers have integrity.

    That said, the evidence that they don’t tweak to get perfect agreement is not necessarily proof of their integrity. It’s not always that easy to tweak a GCM. Various parameterizations and forcings can be tweaked within the range supported by empirical evidence for things like aerosol loadings, snow/ice cover, radiative properties, levels of GHGs and so on. Oddly enough, “tweaking” within this range is acceptable science and would be in any field. In fact, some tweaking outside the range is called “sensitivity studies”. So, it’s done.

    However, they can’t tweak infinitely. Also, running and tweaking is time consuming. What I think we know, based on the range of results, is that the uncertainty in the various parameters can result in a wide range of average surface temperatures for the earth.

    However, with regard to integrity: it is also the case that the particular comparison of simulated and observed temperatures in terms of non-anomaly temperatures is rarely presented to the public by those who create tutorials of the type at the Met Office. This is not necessarily a sign of lack of integrity; it merely means the modelers have convinced themselves this is the “right” way to show their predictions.

    However, the practice does mean it’s difficult (and possibly impossible) to find an explanation for why the anomaly method is the “right” way to evaluate models.

    On the simple models: Simple models are discussed in the literature and in climate text books. AOGCMs can, in principle, predict detailed features that simple models would never be able to predict. For example, no simple model would ever have any hope of predicting how climate change might affect Nebraska differently from Australia. In principle, an AOGCM could.

    Of course, this advantage is hypothetical if the AOGCM are inaccurate overall. So, we are left with the question: Why are so many so far off on the earth’s surface temperature? (Or at least, I consider it “far off” if we consider the magnitude of the error is not small compared to what we can obtain if we use simple 1-d radiative convective models.)

  13. Yep. The cat among the birds. Even using the highest value from Chapter 9 of 2.9 W/m^2 for anthropogenic effects, there does appear to be a major problem with claiming the physics is correct. The simplistic computation of the range of W/m^2 for these models using T^4 is about 11.8 W/m^2, of which they are claiming they can attribute 2.9 W/m^2 to anthro. I guess the real problem is the claim that the physics (and the models’ average) is correct.
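    The T^4 arithmetic above can be sketched as follows. The 2.2 K spread is an illustrative value chosen to roughly reproduce the quoted 11.8 W/m^2; it is not an actual CMIP statistic.

```python
# Near the observed mean surface temperature, the blackbody flux
# sensitivity is dF/dT = 4*sigma*T^3, so a spread of a couple of degrees
# among models corresponds to a sizable spread in emitted flux.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0                 # mean surface temperature, K
spread_K = 2.2            # illustrative inter-model spread (assumed)

dF_dT = 4 * SIGMA * T**3  # ~5.4 W m^-2 per K
print(round(dF_dT, 2), round(dF_dT * spread_K, 1))
```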

  14. Lucia,

    Thank you for your thoughts, I can not say that I disagree much.

    I brought the “absolute vs anomaly” issue to your attention as I thought you had both the skills and intuition to run with it.

    Of course I think it is important. If it has been deliberately hidden that would be scandalous. I simply do not know the answer to that question.

    Actually, I think it is very important as it illustrates the terms on which we are being asked to trust the models. They can not predict the weather, they can not predict the climate, yet we are being asked to accept that they can predict climatic changes.

    My charge that they can not predict the climate is a heady one. But I suspect I am right.

    If they can predict the climate, then transporting my life from 1961-1990 in the HadCRUT world to the worlds of the models should have given me the same life experience. The same flowers would flourish in the hedgerows, and the same bugs and birds would crawl and fly in that world.

    I suspect that closer analysis of the model output will reveal that I would notice a substantive difference.

    The temperature would be different, the seasons would be different, the climate would not be the one that I recall.

    This is the important point. We are asked to trust in the anomalies even though the absolutes are often noticeably unrealistic. FWIW, the failure of the mean is probably the tip of the iceberg. I think that when the seasonal component of the models is made apparent, more worrying “anomalies” may become apparent.

    To be honest, I am very disappointed that the absolutes have not been brought more to the foreground; I felt more than a little deceived when I realised how far off they can be. But I can understand how they would not deliberately draw attention to the discrepancy.

    If this is the first time any blog has concentrated on this issue then you are to be congratulated. And I suspect that I am to be thanked a little also.

    Best Wishes

    Alex

  15. Alex–
    I don’t think the absolute vs anomaly is hidden in any absolute sense.

    The general cold bias is discussed very briefly in the AR4. But, in my opinion, the narratives are a bit asymmetric, in the sense that the authors make direct statements that we gain more confidence from some sorts of qualitative agreements, but they won’t come straight out and admit that, on the other hand, we might have less confidence based on disagreements between simulated and observed surface temperatures. So, for example, if you visit the Fact 6 page at the UK Met site, they’ll show an image of precipitation that is supposed to give us “more” confidence. The images that might give us “less” confidence…. well… they aren’t shown.

  16. Lucia,

    I suspect that the use of the precipitation graphic was opportunistic as it is a success that anyone would trumpet. As it seems you agree.

    Interestingly the “cold bias” does not seem to exist in CMIP2 world; see:

    http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/control_tseries.pdf

    Apparently it was much more balanced back then.

    *****

    I will once again express my suspicion that not only can the models fail to get the mean right, but their unique ability to describe the various regions is even more deeply flawed. (In absolute terms)

    Best Wishes

    Alex

  17. It’s harder to test whether or not regional predictions are off. The smaller the time period or spatial extent, the greater the weather noise. So, the disagreement can be more difficult to distinguish from ‘weather noise’.

  18. Lucia,

    I am currently engaged on a little project investigating the differences between how the models and the temperature record estimate the climate between 1961 and 1990, the HadCRUT baseline period.

    The baseline, as set out in abstem from the Hadley website, gives the closest approximation of what I could consider climate. By that I mean the average seasonal temperature fluctuations and mean temperature over that thirty-year period.

    This is a gridded (5×5) view of climate during that epoch.

    In particular, I am interested in the seasonal lag. The models are doomed to get this wrong as it is (I believe) just so difficult to model. If I ever complete a seasonal phase-lag graphic for just one model over the entire globe, it will not show the agreement that the precipitation graphic does. I have done a “first pass” analysis of the HadCRUT absolute temp data, which I feel is data rich, and I am just starting on the Hadley model output data.

    Sadly my methods are slow (mostly due to data acquisition problems).

    Best Wishes

    Alex

  19. Lucia, the data for HadCrut is available here in gridded form. I calculated the average monthly temperatures (°C) to be 12.1328, 12.2805, 12.9117, 13.8976, 14.9213, 15.6142, 15.8859, 15.7377, 15.1024, 14.1177, 13.0603, 12.3812. The mean is 14.0036 °C.

  20. I’ve made a tiny huge mistake. I used the absolute temperatures from an older analysis. Notice the link’s last directory is “old-temperature”. I just downloaded the correct file from the correct site but the differences aren’t huge. Here’s the correct data. Sorry for the mistake.

    12.1063, 12.2482, 12.8738, 13.8616, 14.8909, 15.5873, 15.8587, 15.7101, 15.0719, 14.0865, 13.0316, 12.3551
    mean = 13.9735
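    For anyone checking, the quoted mean is the simple unweighted average of the twelve monthly values (no day-count weighting appears to be involved):

```python
# Unweighted average of the corrected HadCRUT monthly absolute
# temperatures quoted above (deg C).
monthly_c = [12.1063, 12.2482, 12.8738, 13.8616, 14.8909, 15.5873,
             15.8587, 15.7101, 15.0719, 14.0865, 13.0316, 12.3551]
mean_c = sum(monthly_c) / len(monthly_c)
print(round(mean_c, 4))  # 13.9735
```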

  21. Lucia,

    Yes I think so too, but I am somewhat old and very tired and I doubt I have it left in me.

    On the bright side I suspect it will only take me a year or three. My estimation of when the models will produce accurate predictions for the next 50 years is about 50 years.

    Bless you and all who care enough to do more.

    BTW as a younger person (think hippy and the 60s) I was a dab hand at complex plaiting. Plaits with 5,7, etc threads. Also plaits with only one string (think those magical leather belts that scouts wore plaited but with no open end).

    Best Wishes

    Alex

  22. Alexander– Like macrame? The anti-coal crowd appears to be in serious need of macrame belts and tie-dyed clothing. How can you really shut down a coal plant without tie dye?

  23. Sleeper– As with everything, it depends on how you define “right”. On average, the slopes from 1900 to now for the models are a little bit high. I think it’s about 10% or higher, depending on whether you use HadCrut or GISS. We could debate whether this matters, but it’s not precisely right.

  24. In their defense, if you care more about policy than science, it makes sense to try to withhold “confusing” info from people. Hansen’s exact words, btw, via Steve Milloy:

    “For the global mean, the most trusted models produce a value of roughly 14 Celsius, i.e. 57.2 F, but it may easily be anywhere between 56 and 58 F and regionally, let alone locally, the situation is even worse.”

    It might be a good idea to account for Hansen’s stated range in this analysis. The weird thing, to me, is that ~models~ are relied on to calculate the 14 degree figure.

  25. Andrew,

    I did not know that. I think (in CMIP2 world) it would be 53 F – 62 F, so his “easily” is easily right.

    Lucia,

    Macrame! Oh no! Nothing so grand; we were totally unaware of such things. Tie-dye and batik would be a YES. As it happens, I was a competent wheel-thrown potter making produce for the community (ahem, commune). Those were the days (idyllic NOT); we thought that if Kennedy and brinkmanship politics didn’t kill us, an ice age would.
    That said, I will always look at people who have the power of making in a more favourable light than those who do not.

    Alex

  26. Andrew–
    So, does the batch of models used by the IPCC and shown above include not-trusted models? Because the highest ones and lowest ones are off by about ±2 C, which is ±2 C × (9 F / 5 C) = ±3.6 F, about twice the range admitted by Hansen in the quote above.

  27. Lucia,

    Your #10934 is I think an underestimate of the discrepancy.

    Are the pundits being provably deceitful? If so, what is it all about?

    If it was a lie, why? It is so easily disproven. I simply do not understand. I am by and large a kindly person. Yet I expect deceivers to be hanged by their delicate bits.

    Alex

  28. Here is a kelvin-scale graph of what the politicians want to spend trillions on in a futile attempt to stop all that horrible AGW.
    http://tinypic.com/view.php?pic=pm4ap&s=5

    The earth is just below the sweet spot in the human-habitable range (0-30 C; a degree is the same size on both the Celsius and kelvin scales), and people want to rock the financial boat based on models that don’t agree and don’t even get the global temperature correct.

    Our sun is a variable star. Discounting the sun as a major cause of the recent warming while promoting man to the major cause seems to border on hubris as to the extent of our knowledge related to climate.

  29. You have to imagine this is excerpted from a prospectus for a mutual fund. The fund is informing you about the software it uses to predict market trends. It explains that you should invest with it, it is a proper trustee of your retirement nest egg, and its unique competitive asset is its modelling software which enables it to predict stock markets. As proof it tells you it has run this stock market modelling software against the various indices going all the way back to the late nineteenth century.

    It then triumphantly offers you Lucia’s chart as evidence of the excellence of its modelling package. This, it says, is how various runs of its model compare with what really happened to the Dow and S&P. You can see, it says, that by using this, it is going to generate huge profits for you.

    Do you still feel inclined to invest your nest egg in it after seeing this evidence?

  30. I was wondering about how the climate models were built. I have no knowledge in this field. My earliest reading about climate discussed the three Hadley cells in each hemisphere. If I were to try to make a model, I would start with the six coupled Hadley cells and then try to estimate the amount of turbulence (weather) in each cell. This seems very crude, but if it matched general climate it would seem to be a place to start.

    From what I have read, the global climate models just divide up the atmosphere into finite element blocks without paying attention to some fundamentals. Or were the simple diagrams in my elementary books too idealized and the Hadley cells not as well defined as they suggested?

  31. Gary,
    The models do indeed break up the atmosphere into discrete elements and solve the relevant equations in regular time steps. Features like Hadley cells should emerge naturally from the physics of the model. I doubt it’s something that needs to be, or should be, hard coded.
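    A toy illustration of the discretize-and-step idea: a 1-D diffusion equation on a grid, advanced with explicit time steps. Real GCMs solve far richer coupled equations, but the basic structure (grid cells updated each step from their neighbors) is similar in spirit.

```python
# One-dimensional diffusion, du/dt = alpha * d2u/dx2, discretized on a
# grid with fixed endpoints and advanced with explicit time steps
# (dx = dt = 1). An initial hot spot spreads to neighboring cells.
def step(u, alpha):
    """Advance the gridded field one time step."""
    return [u[0]] + [u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

u = [0.0] * 10 + [1.0] + [0.0] * 10   # 21 cells, hot spot in the middle
for _ in range(100):
    u = step(u, alpha=0.25)           # alpha <= 0.5 keeps this scheme stable

print(round(u[10], 3), round(u[5], 3))
```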

  32. So, now that you’ve seen the graph shown above please forget about it. After all, remember it will just cause “confusion”.

    Awesomeness. 🙂

  33. Chad, thanks for the feedback.
    If the Hadley cells are distinct enough in the actual climate, hard coding them in a model would help force a better solution when there is inadequate or noisy data.

    In my lab work I measure the wear of a product and force-fit the data to a model. This works because the cumulative wear correlates very well with the model (r > 0.99), even though there can be substantial noise in the wear rate over a short time period.

    I guess the question comes down to judging if the Hadley cells are distinct features when averaged over time. Force fitting the data to a model only works if the model is correct. The current climate models do not seem to work well. I have read that they are force fitting the data to a model that forces positive feedback on any warming due to CO2. They do this by assuming constant relative humidity even though this is contradicted by the data. So how robust are these Hadley cells?

  34. There was a comment on the RC “George Will” thread that linked to Lucia’s chart above. It elicited the following comment from Gavin:
    “Response: The GISTEMP data does not compute the average annual global mean temperature. In fact, it’s not clear that is even possible to do so. They use 14 deg C as a convention, but as explained ably here, the actual value is highly uncertain. It is therefore not widely used as a tuning target, and so models end up with a range. – gavin”

    If the actual value is highly uncertain, how can the anomaly value have any certainty?

  35. Gary P–
    Gridding things up is much more conventional in fluid mechanics. Correlations do get used for some things. But the climate approach is similar to what we’d see in aerospace, mechanical engineering, chemical engineering, etc.

  36. Edward–
    Interesting. In which case, the value is simply not known. So, the scatter in the models exists, but it can’t be tested.

    It is therefore not widely used as a tuning target, and so models end up with a range.

    That’s an interesting way to say things. Does this mean if they knew it, they’d tune to match? 🙂

    Oddly, anomalies can be determined even if the absolute value cannot. Consider this hypothetical:

    Suppose you have a biased scale. You step on it, and your weight may be off by 10 lbs. But… you don’t know. The one thing you know is it’s always off by the same amount. Now, you can weigh yourself every day.

    Eventually, you can report whether you gained 1 lbs or lost 1 lbs. But, you never know your weight!
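    The scale analogy can be put in numbers: a constant bias cancels exactly when readings are converted to anomalies, so changes are recoverable even though the absolute level is not. (The weights here are invented.)

```python
# A constant instrument bias drops out when readings are expressed as
# anomalies about their own mean; only the absolute level is lost.
def anomalies(readings):
    base = sum(readings) / len(readings)
    return [r - base for r in readings]

true_weights = [150.0, 151.0, 149.5, 152.0]   # what the scale should read
bias = 10.0                                   # unknown, constant offset
measured = [w + bias for w in true_weights]   # what the scale actually reads

print(anomalies(measured) == anomalies(true_weights))  # True
```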

  37. #10976 You ask: “If the actual value is highly uncertain, how can the anomaly value have any certainty?”

    1. They tune to the trend, not the mean. 2. It’s not that the anomaly values are more certain than the mean values. It’s the trend amongst these values in which they have confidence.

    #10978 As usual, lucia gets it right.

  38. This isn’t an unusual situation at all – in fact it’s very typical of modeling. In my earlier career in physics, the essence of many of the projects I worked on was to calculate the differences in energy of various different configurations of atoms. This required highly complex computer codes that derived one-component quantum states for the electronic energy bands using “pseudopotentials”, approximate density-dependent exchange-correlation potentials, and various other tricks. The final number for each configuration was a total energy per unit cell (this was for periodic systems), often expressed in Rydbergs, which are whoppingly large numbers compared to the tiny energy differences between different configurations. Those nominal total energies were easily shown to be quite far from the actual bulk potential energies of the associated crystals; nevertheless we could use them to quite accurately calculate the differences between energies of different configurations without worrying about that large-scale discrepancy. The calculations captured the essence of the differences between the different local configurations, and what they missed about the large-scale energy situation wasn’t materially important to those differences.

    I am guessing the climate issue is somewhat similar – most behavior is locally linear in temperature change, so if everything is shifted down or up a couple of degrees it doesn’t much affect the response pattern.

  39. I want to hear Gavin talk A LOT more about “tuning targets”. Specifically, tuning to get the GHG-forced trend “right”.

  40. i.e. How does Gavin determine when he is tuning to a deterministic climatic response versus quasiperiodic/chaotic weather “noise”? Chime in any time, Gavin.

  41. Arthur–
    The big question isn’t “1) Does getting the mean wrong cause additional errors?”

    The questions are: 2) What is it about the model that causes the mean to be wrong? and 3) If that’s wrong, how do we know it doesn’t cause the trend or sensitivity to be wrong as well?

    These could have answers, but what are they? I suspect that, as in most fields, the real answers to (2) and (3) are “we don’t know” and “we don’t know”. (For that matter, the answer to (1) is “we don’t know”.) However, the idea that the shift doesn’t, itself, cause further errors is a bit more plausible than the idea that the errors that cause the inaccurate estimate of earth’s surface temperature auto-magically correct themselves when predicting trends.

    They may or may not do so. It depends on the cause of the error. And given the ability of modelers to adjust applied forcings over the 20th century, it’s not entirely clear that the relatively good match over 100 years should give us a lot of confidence in the ability to predict climate trends over periods as long as 100 years.

  42. At first sight, this raises a number of interesting questions, and maybe the modelers have the answers (they do tend to have an answer for everything, after all!…). Here are just two that come to mind.

    1) Absolute temperature IS important. Water freezes at 0 C, and that’s an incontrovertible physical fact. If the model’s absolute mean temperature is 2 degrees below the actual one, the model must have areas of the Earth that are deep frozen when they’re not in real Earth. What gives?

    2) Think tipping point. The tipping point cannot occur at some “anomaly”. It has to be at some “absolute” temperature. If models show a tipping point at some “anomaly” that is in actual fact, say, 1 degree below the actual mean temp of real Earth, it makes no sense at all! We should all be dead. Well, maybe we are and we don’t know (Ubik anyone?).

    There are probably thousand more examples where “absolute” is “absolutely” important. Do modelers ever take physics courses?

  43. Lucia,

    One can only wonder if, despite their enormous complexity, the GCMs have moved us on to any significant degree since Arrhenius.

    They can predict neither the weather nor the climate, like Arrhenius. They confirm the link between CO2 and temperature, like Arrhenius.
    They fail to predict the effect of a doubling of CO2 with any precision, like Arrhenius.

    To be honest I am not sure what to make of it. But I will say this, given the availability of data from CMIP3 and the current “stalling” of the temperature record, scepticism will increase and with that I hope more informed debate.

    Alex

  44. What I wonder is if choosing the mean as a tuning target precludes you from getting the trend correct.

  45. Lucia, #10977
    The grid used in modeling engineering problems is used for relatively “simple” problems like airflow over the wings or mechanical stresses in the wings. One does not grid up the entire aircraft and try to model fluid flow through the engine and mechanical stress in the wing with one super grid. Each basic problem is solved by itself and the result coupled to another model. When solving for air flow around an aircraft one needs to know the engine thrust and any local turbulence, but what happens inside the engine is irrelevant.

    Likewise, I suspect what is happening in the southern equatorial Hadley cell is irrelevant to what is happening in the antarctic cell, other than through some simplified coupling terms that make it through the mid-latitude cell. I am guessing that breaking the climate modeling problem down into the Hadley cells may make the problem more tractable.

    However, I believe my question has been answered. The existing climate models do not do this. Thanks.

  46. GaryP– I didn’t mean to suggest anyone tries to simultaneously solve the flow over the aircraft and the stresses and displacements of the wing itself.

    But one also doesn’t hard code in a separation bubble downstream of an aircraft and then write up some sort of separate hard-coded inviscid solution far from the solid surface, etc. Mostly, one grids things up and solves a modeled form of the Navier-Stokes equations.

  47. Do we know that the models should be modelling GIS Temp accurately?

    For instance, GISTemp is, I assume, very roughly the spatially weighted average of many thermometers placed a meter or two above the ground all over the globe. A lot of this is based on extrapolating from the nearest actual thermometer, and this may present issues in certain areas. For instance, around the Himalayas I would guess most of the real thermometers are situated in more comfortably warm (or less freezing) lower areas. Since GISTemp is designed to monitor trends, it probably does not reduce the temperature when it extrapolates into the colder, higher mountains, because it would still be getting the trend right?

    And for climate models, what is the temperature reported? From what I understand most climate models are based on separating out the atmosphere into a number of discrete layers. I assume that the actual reported temperature is the temperature of the lowest layer? In some models this may be the lowest 100 metres, and others the lowest 500m. As the temperature decreases with height, this could explain why most models are cooler than the ‘surface’. I can’t see how this effect could explain models that are warmer though…
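    Both effects can be roughly sized with the standard environmental lapse rate of about 6.5 C per km; the station heights and layer depths below are made up purely for illustration:

```python
# Sketch of lapse-rate effects on a "surface" temperature comparison.
# The 6.5 C/km rate is the standard environmental lapse rate; all
# heights and readings below are hypothetical.
LAPSE_RATE_C_PER_M = 0.0065

def extrapolate_to_elevation(t_station_c, station_elev_m, target_elev_m):
    """Shift a station temperature to a different elevation via the lapse rate."""
    return t_station_c - LAPSE_RATE_C_PER_M * (target_elev_m - station_elev_m)

# A valley station at 1500 m reading 10 C, extrapolated to a 4500 m ridge,
# cools by 0.0065 * 3000 = 19.5 C:
t_ridge = extrapolate_to_elevation(10.0, 1500.0, 4500.0)   # -9.5 C

# A model layer spanning the lowest 500 m has a mid-layer height near 250 m,
# so its mean temperature sits roughly 1.6 C below a 2 m "surface" reading:
layer_cold_bias = LAPSE_RATE_C_PER_M * (250.0 - 2.0)
```

    So an extrapolation that ignores elevation, or a model reporting a thick lowest layer, could each plausibly shift the comparison by a degree or more.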

    Also, GISTemp uses sea surface temperature for the ocean? Is the model temperature calculated on sea surface temperature for the oceans? Or on the lowest layer of the atmosphere as well?

    On comparing models with HadCRUT, this temperature series has gaps at the north and south poles. So if you average what is left, you would have what I guess would be a quite significant warm bias in the calculation of the absolute temperature of the earth.
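    The warm bias from missing polar cells is easy to demonstrate with a toy area-weighted average; the temperature gradient and cutoff latitude below are invented for illustration, not HadCRUT's actual coverage:

```python
import numpy as np

lats = np.arange(-87.5, 90.0, 5.0)       # 5-degree latitude band centres
temps = 28.0 - 0.5 * np.abs(lats)        # crude pole-to-equator gradient (invented)
weights = np.cos(np.deg2rad(lats))       # area weighting by band

true_mean = np.average(temps, weights=weights)

covered = np.abs(lats) < 70.0            # pretend everything poleward is missing
partial_mean = np.average(temps[covered], weights=weights[covered])

# partial_mean exceeds true_mean: dropping cold polar cells warms the
# absolute average, even where anomaly trends would be less affected.
```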

    And I also had a look at UAH temps, to see if there is a similar disagreement on absolute temperatures for our temperature series. The daily charts appear to show that the absolute temperature of the ‘Near Surface’ (channel 4) is something like -16 degrees C. This must be wrong? The 900 hPa level is about -3 degrees C.

  48. 900 hPa (hectopascal) is near surface. Standard sea level pressure is 1013 hPa. The original MSU channel 4 measured the lower stratosphere. The AMSU has more channels, so it’s not clear whether the channel 4 temperature you cite refers to channel 4 on the AMSU or the MSU. The temperature is likely the raw brightness temperature, which is considerably massaged before it emerges as t2lt, t2 and t4 for the lower troposphere, middle troposphere and lower stratosphere.

  49. Lucia,
    Would it be possible to draw a line on the graph delineating hindcast from forecast?

  50. GarryP– If we base the “forecast” on when the SRES were defined, the division is near 2001. But it’s easier to see the divergence on the anomaly graphs. This is true even though the anomaly method masks the fact that the models all disagree on the surface temperature.
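    A toy illustration of how anomaly alignment hides absolute disagreement; the trends and offsets below are invented, not taken from any model:

```python
import numpy as np

years = np.arange(1980, 2001)
trend = 0.02 * (years - 1980)            # identical 0.02 C/yr warming in both runs
run_a = 13.0 + trend                     # one hypothetical model runs at 13 C
run_b = 15.0 + trend                     # another runs a full 2 C warmer

# Anomalies against each run's own 1980-1999 baseline:
anom_a = run_a - run_a[:20].mean()
anom_b = run_b - run_b[:20].mean()
# The 2 C absolute disagreement vanishes entirely in anomaly space.
```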

  51. Michael (Comment#11027),

    The field used from the model data is, I think, “tas” (Surface Air Temperature).
    It is, I believe, the data used to make comparisons between the models. It is the spread in these data that is worrying. I will agree that comparing model temperatures to observational records (on land, at sea, or from satellite) has additional problems. But given the model spread, some of the models simply must have significant issues.

    Alex

  52. I have a number of issues with the temperature data. The abstem3.dat file from Hadley gives an average temperature per cell per month. There are no missing values. In theory you can add these values back to the gridded anomalies and get the original absolute temperatures. But the gridded anomalies have missing values; some cells have no data for the entire period of record. So where did the averages for the reference period in abstem3 come from?
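    In principle that reconstruction is just absolute = gridded anomaly + per-cell climatology, and missing values propagate straight through, which is exactly the snag. A toy sketch (all values invented):

```python
import numpy as np

clim = np.array([[2.0, 15.0],
                 [5.0, 18.0]])           # per-cell climatology for one month (invented)
anom = np.array([[0.3, np.nan],
                 [-0.2, 0.1]])           # gridded anomalies; NaN marks "no data"

absolute = clim + anom                   # NaN propagates into the result

# Cells with no anomaly data stay NaN: the climatology alone cannot
# tell you what was actually observed there.
```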

    The Hadley method substantially eliminates seasonal variation which, if you want to study long-term climate, is surely the right thing to do. But for GissTemp we’re told “add 14 degrees”. That won’t eliminate seasonal variation, yet the GissTemp monthly series contain no visible seasonal variation even if you plot NH or SH temperatures rather than global.
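    One resolution: if the monthly series is already an anomaly against a separate baseline for each calendar month, then adding a single 14 C offset restores a plausible absolute level without reintroducing the seasonal cycle. A synthetic sketch (the 3 C cycle and the trend are invented numbers):

```python
import numpy as np

months = np.arange(360)                                   # 30 years of months
series = 14.0 + 3.0 * np.sin(2 * np.pi * months / 12) + 0.001 * months

# Hadley-style anomaly: a separate baseline per calendar month kills the cycle.
clim = series[:240].reshape(20, 12).mean(axis=0)          # 20-yr monthly normals
anom_monthly = series - clim[months % 12]

# Single constant offset ("subtract 14 C"): the seasonal cycle survives intact.
anom_const = series - 14.0
```

    Plotted, `anom_monthly` shows only the trend, while `anom_const` still swings with the seasons, so a constant-offset series would look quite different from what GissTemp publishes.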

    None of this sounds right.

    Also, as far as I can tell, that graph in the Met Office pamphlet is the data from the “simple area weighted average” series rather than the series “recommended for most purposes” ending in 2007 – reasonable as the pamphlet was produced in 2008 – and using an early value for 2007. If you substitute the current corrected value for 2007 the smoothed values dip at the end rather than just levelling out. Oh yes, and add 0.33 degrees – no idea why. If it was me I’d want to replace it with a less misleading pamphlet as soon as I could. Perhaps they will.

  53. This is a good first step. I think that a genuinely honest evaluation of the models would be to compare measured and predicted global temperature maps. The anomaly curves can hide a lot of unphysical predictions because they average over the entire earth.

    I once joked at CA that there could be alligators at the poles and polar bears at the equator and still have correct anomaly curves. To my surprise, someone pointed out that at least one of the models made the Pacific Ocean freeze at the equator.
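    The joke is easy to make concrete: a field with every region in the wrong place can still match the global mean exactly (toy random fields below, not actual model output):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, size=(36, 72))   # toy "observed" anomaly field
model = obs[::-1, :]                        # same values, poles and tropics swapped

rms_field_error = np.sqrt(((model - obs) ** 2).mean())

# Global means are identical, yet the spatial pattern is completely wrong:
# the RMS field error is around sqrt(2), larger than the field's own spread.
```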

  54. Lucia (Comment#10924):

    “Alex– That would be very interesting!”

    Well, I have had a little go. I have one strip, N42.5, for both HadCRUT3 and HadCM3, and there is a good match. But there are some interesting discrepancies. In the model the oceans are more phase delayed than in the record (the seasons arrive about a week later). That does not seem much, but it equates to an ocean that is more tightly coupled and/or more thermally heavy than in the record.

    As it happens, even the simplest of models is likely to give quite a good match. All you need is a land/ocean mask, an approximation for the thermal mass of the land, ocean and atmosphere, and a reasonable value for the thermal coupling between each of them and to outer space.

    A heavier-than-real ocean would tend to suppress temperature fluctuations, which I think is in the data also, but more work is required. Speculatively, if the coupling and weight of the ocean are too high, it would suppress the long term trend (increase pipeline effects), and would allow a higher climate sensitivity to be compatible with the record.

    I will try to persevere. My greatest problem is that, as far as I can figure out, in order to complete just one model I must extract one file for each grid square from Climate Explorer and process it manually to produce a monthly baseline average. For HadCM3 this is around 6000 files. If anyone knows a better way to extract this sort of data from Climate Explorer I would be very pleased to hear from you.
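    If the run can be obtained as one (time, lat, lon) array instead of per-cell files (an assumption about the available download formats, not something I can vouch for), the monthly baselines for every grid square fall out of a single vectorised pass:

```python
import numpy as np

# Synthetic stand-in for a (time, lat, lon) model dump; HadCM3's grid is
# roughly 73 x 96. The random values here are placeholders, not model data.
n_years, n_lat, n_lon = 30, 73, 96
rng = np.random.default_rng(1)
data = rng.normal(14.0, 5.0, size=(n_years * 12, n_lat, n_lon))

# Monthly baseline for every grid square at once:
clim = data.reshape(n_years, 12, n_lat, n_lon).mean(axis=0)   # (12, lat, lon)

# Anomalies for all ~7000 cells in a single subtraction:
anoms = data - np.tile(clim, (n_years, 1, 1))
```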

    Alex

  55. Not only do the models run cold relative to observations, but look at Figure S8.3c, which shows zonal diurnal temperature range. The models have a tighter diurnal range by 1-4 C compared to observations, with the tropics having the largest error. Maybe something wrong with the way they are handling convection and radiation?

  56. Bob Koss (Comment#11205)

    “Maybe something wrong with the way they are handling convection and radiation?”

    Yikes – that’s about 90% of the modeling in the AOGCMs!!

    I say we make Lumpy the official NOAA climate model. It’s about as accurate as any of the climate models shown here. And think of the savings – we wouldn’t need to spend $170 million of our dwindling taxpayer dollars on mega computing facilities for climate modelers!
