Multi-Model Mean Trend: AOGCM simulations vs. observations.

Way back in February, I promised I’d show how the observed trends through Jan 2009 compare to simulated trends based on the data I downloaded from The Climate Explorer. Then, I came down with a recurring sore throat, ended up on amoxicillin… and got distracted making mugs and slippers. But today I’m going to show two cool graphs.

The first graph shows:

  1. The observed least squares trends for HadCrut and GISStemp computed for a range of start dates. All trends end with the Jan 2009 data point, but begin in January of years from 1960 to 2004 inclusive.
  2. The uncertainty intervals computed using the total standard error indicated in the denominator of equation (12) of Santer17, with the 95% confidence interval determined using the number of degrees of freedom in equation (13) of Santer17. The method results in slightly different uncertainty intervals for HadCrut and GISS. (A minimal sketch of the calculation appears just after this list.)
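
For readers who want to reproduce the arithmetic, here is a minimal sketch of the trend and uncertainty calculation. It assumes the basic recipe as I read it (an AR1 correction to the effective sample size); the function and variable names are mine, and it is not a line-by-line transcription of equations (12) and (13).

    import numpy as np
    from scipy import stats

    def trend_with_ar1_se(y):
        # OLS trend of a monthly series plus an AR1-adjusted standard error.
        # Sketch only: the lag-1 autocorrelation of the residuals shrinks the
        # effective sample size, which widens the trend's confidence interval.
        n = len(y)
        t = np.arange(n, dtype=float)
        b, a = np.polyfit(t, y, 1)                     # slope, intercept
        resid = y - (a + b * t)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
        n_eff = n * (1.0 - r1) / (1.0 + r1)            # effective sample size
        s2 = np.sum(resid ** 2) / (n_eff - 2.0)        # adjusted residual variance
        se_b = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
        ci95 = stats.t.ppf(0.975, n_eff - 2.0) * se_b  # half-width of the 95% interval
        return b, se_b, ci95

    # obs: hypothetical 1-D array of monthly anomalies, Jan 1960 .. Jan 2009
    # trends = {yr: trend_with_ar1_se(obs[(yr - 1960) * 12:]) for yr in range(1960, 2005)}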

This graph is illustrated below:

Figure 1: Multi-model mean trend compared to observations computed with a range of start years.

As I use 2001 as my “start year” for testing IPCC projections, I have highlighted 2001. When simulated and observed trends are computed beginning in that year, the model projections fall outside the ±95% uncertainty intervals. So, if that year were truly selected at random and we really had no other data, we would reject the hypothesis that this multi-model mean agrees with observations at the 95% level.

Readers will recall, however, that many have suggested this result arises only because of the short time span involved. I have mentioned in the past that I had looked at earlier years. In particular, I had looked at 1980. The reason for this choice was that the IPCC makes projections relative to the 20 years from 1980-1999 inclusive. So, 1980 might appear to be a reasonable choice for testing trends.

Readers examining the graph above are likely to notice that, for trends starting in 1980, the observations fall almost smack dab on top of the ±95% confidence intervals. In fact, on that graph, it’s very difficult to tell whether they fall inside or outside.

Solution to squinting: Mysterious variables like d*

To make it easier to diagnose whether a t-test would result in “reject” or “accept” at the 95% confidence level, I created a graph comparing the normalized error to the level corresponding to the ±95% confidence intervals. The normalized error is determined by first finding the difference between the simulated and observed trends and then dividing by the total standard error. This quantity is referred to as d* in equation (12) of Santer17. I’ve plotted this value and the ±95% and ±90% confidence intervals for d* below:

Figure 2: Normalized error as a function of start year.
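
For concreteness, here is a hedged sketch of the normalized error: the difference between the multi-model mean trend and the observed trend, divided by the combined standard error. The way the model term is formed below (inter-model spread over the square root of the number of models) is my reading of the “total standard error”, not a quotation of equation (12).

    import numpy as np
    from scipy import stats

    def normalized_error(b_obs, se_obs, model_trends):
        # d*-style statistic: (multi-model mean trend minus observed trend)
        # over the combined standard error.
        b_mod = np.mean(model_trends)
        se_mod = np.std(model_trends, ddof=1) / np.sqrt(len(model_trends))
        return (b_mod - b_obs) / np.sqrt(se_mod ** 2 + se_obs ** 2)

    # Reject at ~95% when |d*| exceeds stats.t.ppf(0.975, dof), with dof taken
    # from equation (13); the ±90% lines use stats.t.ppf(0.95, dof).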

Examining the figure shown above, we see that for trends beginning in 1980, the normalized error exceeds the 95% threshold by a slight margin. That is to say: if we consider 1980 a reasonable start date for trend analysis, and we accept the method used in Santer17 as the method to perform a t-test, we reject the hypothesis that the multi-model mean trend agrees with the data at the 95% confidence level.

When processing this information, it is worth bearing in mind that:

  1. The method in Santer17 assumes that all residuals from a linear trend are “noise” and the “noise” is well represented by an AR1 process.
  2. Rejecting a hypothesis at 95% means the statistical test is supposed to incorrectly reject hypotheses that are actually true in α=5% of tests drawn at random from the population of all possible cases. (This is called “type 1 error”; “type 2” error also exists.)
  3. If the underlying trend really is linear, and all residuals from linear really are AR1 noise, the method described in Santer17 rejects true hypotheses too often. That is, where we claim to reject true hypotheses in α=5% of cases, we may really reject more often. However, the error is relatively small and is discussed in Santer et al. (It’s also easy, but time consuming, to correct for this error by running Monte Carlo tests; a sketch of such a test appears after this list.)
  4. To the extent that the residuals from a linear fit are not “noise” but arise from deterministic factors, the method used in Santer17 will tend to reject models that are correct in fewer than α=5% of cases. That is to say: if, after running many simulations, we discover the trend averaged over all models has dips and rises due to something like… oh say… volcanic eruptions, then the 95% confidence intervals might actually reject correct models only 4.99%, 3% or even 0.1% of the time. (The correct rate can only be estimated if we know the true shape of the trend including the effects of volcanic eruptions. I estimated the effect here. It appears that for trends starting in 1979, the non-linearity in the underlying trend due to volcanic eruptions is sufficiently large to more than compensate for the effect described in the previous bullet point, and the Santer test makes the uncertainty intervals too large.)
  5. To the extent that the “unexplained weather noise” is of some form other than AR1, the method may reject “true” hypotheses at some rate other than 5%. We can’t know whether the rate is higher or lower than 5%; that depends on the actual form of the noise.
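
Here is a minimal sketch of the Monte Carlo check mentioned in item 3: generate synthetic AR1 “weather noise” around a known trend, apply the adjusted test, and count how often it rejects the true trend. It reuses trend_with_ar1_se from the sketch above; every number is illustrative rather than tuned to the actual data.

    import numpy as np

    def ar1_series(n, r1, sigma, rng):
        # Synthetic AR1 "weather noise" with lag-1 autocorrelation r1.
        e = rng.normal(0.0, sigma, n)
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = r1 * x[i - 1] + e[i]
        return x

    def rejection_rate(trend=0.02 / 12, n=350, r1=0.5, sigma=0.1, trials=2000, seed=0):
        # Fraction of synthetic series for which the AR1-adjusted test rejects
        # the (true) trend.  With an exact test this would be about 5%; the
        # point of the exercise is to see how far off it actually runs.
        rng = np.random.default_rng(seed)
        t = np.arange(n, dtype=float)
        hits = 0
        for _ in range(trials):
            y = trend * t + ar1_series(n, r1, sigma, rng)
            b, se_b, ci95 = trend_with_ar1_se(y)  # from the sketch above
            if abs(b - trend) > ci95:
                hits += 1
        return hits / trials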

So, as with any statistical test, we could argue whether the 5% uncertainty intervals are correct or not. However, if we pretend any particular start year was selected at random, and we pretend the simulations and the forcings used to drive the models were entirely uninfluenced by knowledge of the measured temperatures, and apply this t-test, we will find that for quite a few years between 1960 and 2004, the test rejects the hypothesis that the multi-model mean correctly reproduces the observed trend.

For what it’s worth, I can concoct similar graphs for trends starting all the way back to 1900, and I can partition for simulations with and without volcanic eruptions.

Will I show those? Of course I will, in time.

Update

I was sloppy and forgot to mention that my first graph can be compared to a similar graph by Pat Michaels, who presented a somewhat different analysis to Congress. The major difference between my graph and Pat’s is the method used to compute confidence intervals. I used the method described in Santer17. Chip Knappenberger describes Michaels’s method in comments here.

86 thoughts on “Multi-Model Mean Trend: AOGCM simulations vs. observations.”

  1. I like the normalized graph, Lucia. I do that a lot myself – I wasn’t aware that Santer had defined a quantity for it. Typically I normalize to the 95% CI such that > 1 or < -1 means you are outside the intervals.

    If you use anomalies rather than trends, you can make a much stronger statement than simply the percentage of points that lie outside the 95% CIs because you don’t have the problem of the end date affecting the results for all the start dates. If the differences between the models and observations arose due to noise, then the normalized plot of the differences between model anomalies and observed anomalies should oscillate about 0. If the differences lie all on one side – or show a strong trend with time – you can reject the null hypothesis much more easily.

    Maybe I’ll have a go at it.

  2. RyanO–
    I show anomalies sometimes.

    The difficulty with using the anomalies is that you need to pick a baseline, and the distribution is rather sensitive to that choice. But, I’ll cook up a distribution of the 12-month annual anomalies using the 1980-1999 baseline.

    Trends get rid of this issue. The mug showed the 12-month anomalies based on the 1900-1999 baseline. If you look at it, you’ll see the anomalies are mostly out using that baseline. On the other hand, if you use the 1980-1999 baseline, the models don’t look toooo bad. To the extent that the center year of the baseline is close to the year when you check the anomalies, the baseline method forces the models to look good due to the magic of subtracting the difference to force agreement. If they really are good, they should look good for all times. But… well, they don’t.
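
    For concreteness, re-baselining is just subtracting the baseline-period mean; a minimal sketch (the array names are hypothetical):

        import numpy as np

        def rebaseline(series, years, base_start, base_end):
            # Shift a monthly anomaly series so its mean over the baseline years is zero.
            # Subtracting the 1980-1999 mean forces models and observations to agree,
            # on average, over 1980-1999 no matter how well the models do elsewhere.
            mask = (years >= base_start) & (years <= base_end)
            return series - series[mask].mean()

        # model_near = rebaseline(model_anoms, yrs, 1980, 1999)  # flatters the models near 1990
        # model_far  = rebaseline(model_anoms, yrs, 1900, 1999)  # much less forgiving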

    The reason I switch back and forth is not inconsistency (as I sometimes read in comments at other blogs). It’s to show that no matter what method I use the models either look poor or very poor.

  3. Lucia,

    Something that puzzles me is your “no matter what method I use the models either look poor or very poor”.

    How can this be? Either they are getting something badly wrong and their shortcomings are not publicly acknowledged, or we are getting things badly wrong, or a lot of cherry-picking is going on (by either side of the argumentative community).

    Now, people who wish to contest a theory do cherry-pick, in that they simply point out what the theory gets wrong; they are permitted to completely ignore what it gets right. That is the correct position: if it gets anything wrong, it needs fixing. That does not mean that one should throw the baby out with the bath water. The models correctly (my prejudice) predict that GHGs make the globe warmer. I, as you probably also, wonder whether they get anything else right.

    On another tack, I have been puzzled for some while as to how the CMIP2 archived models can agree so strongly concerning the warming for a century of 1%/yr CO2 increase (+/- 25% about a mean of 2C), yet disagree on the long term effects of a doubling of CO2 (+/- 50% about a mean of 3C).

    I wonder if the models have been constrained by reality. I suspect I can show that for the high end models (~4.5C) to come in at 2.5C at 100 years, they must have a much heavier and more closely coupled ocean compared to the low end models. This strikes me as strange. If only best science were involved, one might expect a high end model to occasionally be coupled to a light ocean, giving a high divergence at the 100 yr mark, and conversely a low end model would occasionally be coupled to a heavy ocean and give next to no warming. Now both of these would give historically untrue versions of the 20th century. So perhaps we have ended up with the set of models that we have. It may not be deliberate, but the models have formed a set in close agreement over the 20th century and at the 100 year mark (1%/yr) due to a selection process. If so, their agreement is not worth much, as it would consist of high end sensitivities coupled to heavy oceans and low end sensitivities coupled to light oceans. They would agree because they differ in multiple ways that are constrained by the historic record, so that the errors are self compensating. I have it on good authority that the models are not “tuned” to reproduce the 20th century, but they would not have to be if they were already selected to do so, due to being constrained in the above manner.

    As it happens, I suspect that all of them have oceans that are too thermally heavy, and that leads me to be prejudiced against all but the lowest sensitivity estimates.

    Alex

  4. Zeke–
    Yes. I did get the idea from Chip Knappenberger. I should note that.

    The difference between mine and his is I compute the confidence intervals differently. Let me fish out the link to give credit to Knappenberger and Michaels.

  5. Alex–
    Many of the discussions of good agreement focus on data that are purely hindcast. So, for example, if we compare projections from 1900-1999 and use the anomaly method, the models don’t look bad. It’s when we switch to predicting the future that the projections no longer match.

    The argument has been that we gain confidence in models because they “predict” already known trends in anomalies over the 20th century ok. They do “predict” that well. However, the difficulty is that during that period, each modeling group has some latitude in tweaking parameterizations inside the models and, more importantly, externally applied forcings from things like aerosols. So, the correlation between the forcings the modelers elect to apply and the model sensitivity to CO2 is recognized and reported. So, to some extent, in papers modelers admit the agreement in the 20th century may, in part, be achieved for the “wrong” reasons. (Basically, the modelers make off-setting mistakes. Moreover, they run the models enough to kinda-sorta know whether their model has high or low sensitivity. So, it’s not like the off-setting mistakes “just happen”.)

    In some sense, there is nothing “hidden” about the current mis-match in projections. It’s just…. silence. Things weren’t quite so silent when the TAR models could be made to appear to be underpredicting warming.

  6. The University of Colorado sea level data has been updated:
    http://sealevel.colorado.edu/ The slope is 3.3 mm per year, about 33 cm per century.

    The trend is pretty linear and shows no acceleration. Yet the press is full of stories that sea level is accelerating (but if one reads the actual statements by the scientists they actually say it will accelerate in the near future).
    So since sea level rise is supposed to be the most horrible result of GW why has it not accelerated? After all there are claims that the oceans are storing the heat so they must be suffering from thermal expansion. Also the supposed melt of Greenland is supposed to have accelerated. So where’s the signature?

  7. Jack– I thought pestilence and loss of crop land was supposed to be the most horrible result.

    Well, the new graphs do get rid of the recent flat spot in the sea level rise!

    I think the issue with the sea level is warnings that all of a sudden huge amounts of ice supported by land could slide off Greenland or break off from an ice shelf and raise the ocean suddenly. So, the most alarming scenarios all involve a sudden jump.

    So far, they are predictions. But until that happens, the sea level rise should be fairly smooth (within some degree of “weather noise”.)

  8. Harvey
    The models do not model actual temperatures, only anomalies. They do not do a good job of describing actual temperatures on this planet. I think that deficiency, which is not well known, indicates that the models do not fundamentally describe accurately the mechanisms that “create” temperature or climate on the planet. Perhaps it’s a coincidence that the models got temperatures correct for a while back in the 80’s and 90’s based on a climate cycle that was not well understood. Climate seems to be cooling, perhaps as a result of some poorly understood process or cycle that the models either do not account for or understand very poorly.

  9. Lucia: It looks like there is certainly enough uncertainty that I am certain that the model results do not offer enough certainty to be certain about anything at all (or something like that). But, more importantly, I have a recurring sore throat and am wondering about the probability that I need some amoxicillin? Several other folks where I work have been out, sick as hell, for a week.

  10. Jack: 3.3 mm/year of sea level rise (globally) _is_ an acceleration compared to the average 20th century rate of increase, and much higher than the average rate of global sea level rise for the past couple of thousand years.

    However, some studies indicate that there has been a decent level of variation in the 20th century rise, so we don’t know yet if the 3.3 mm/year we’re seeing now is a temporary increase in rate, or a real new average (perhaps Lucia could do statistics on that someday. Does the AOGCM model set have sea level rise data?).

    In any case, models predict a slow acceleration (ignoring the variation around the mean) due to increasing thermal expansion plus glacial melt. To get much above 60 cm of sea level rise by 2100 would require additions by Greenland and/or West Antarctica, and we don’t yet have good enough understanding of either to really predict what sort of sea level rise rate we could get from them.

  11. Jae– If your throat is very sore and it keeps coming back, go to the doctor. It could be strep.

    Marcus– I think the AR4 did not predict sea level rise. The uncertainty was perceived as too high. If the modelers think they can’t predict something, I don’t test their predictions.

  12. Jae, you should go to the doctor. I had to meet some deadlines, so I put it off. When I went, I had to get a cortisone shot and get a lecture about first sign of pneumonia and I would be put in a hospital. Then, my body was so beat up I came down with another illness and had to go through another set of indignities and antibiotics. Don’t just adapt, be mitigating, it may be hard for you, but do it 😉

  13. I like the longer-term trendline analysis.

    Here is another way to look at these timelines based on CO2 levels (as a proxy for all the GHGs) and the formula for global warming predictions.

    CO2 increased from 317 ppm in 1960 to 386.5 ppm in February 2009.

    The global warming formula which takes you to 3.0C per doubling (or 3.25C by 2100 with an ocean lag factor) is …

    4.17 LN(CO2) minus whatever constant is required to take you to the baseline you are using (anomaly C, global average temp C, or Kelvin).

    4.17 LN(386.5) – 4.17 LN(317) = +0.826C

    The increase in temps for 1960 to 2009 that takes you to the IPCC’s predictions is +0.826C

    Actual temps from GISS and Hadcrut3 have only increased by +0.45C or so.

    So no matter what timeline is used, the models are off by close to half.
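
    A couple of lines reproduce the arithmetic above (the 4.17 coefficient and the +0.45C observed rise are the values quoted in this comment, not independently checked numbers):

        import math

        co2_1960, co2_2009 = 317.0, 386.5      # ppm, values quoted above
        coeff = 4.17                           # coefficient used above
        expected = coeff * (math.log(co2_2009) - math.log(co2_1960))
        print(round(expected, 2))              # -> 0.83 C expected warming, 1960-2009
        observed = 0.45                        # approximate GISS/Hadcrut3 rise quoted above
        print(round(observed / expected, 2))   # -> 0.54, i.e. roughly half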

  14. Science Daily reports here on a new data base of aerosol measurements (published in ‘Science’) –

    …the team notes, that their finding of a steady increase in aerosols in recent decades, also suggests an increase in sulfate aerosols. This differs from studies recently cited by the Intergovernmental Panel Climate Change showing global emissions of sulfate aerosol decreased between 1980 and 2000.

    http://www.sciencedaily.com/releases/2009/03/090312140850.htm

    Also, a report on a new technique for observation of aerosols:

    http://www.sciencedaily.com/releases/2009/03/090312134635.htm

    Once we have better measurements of aerosol forcings I think we’ll have better performing models.

  15. http://sealevel.colorado.edu/

    If I am looking at this correctly, the trend indicates about a foot of sea level rise per century. Since 2006, however, it appears that the rise has halted.
    I am selling my beach front property and moving to Denver. This is obviously the calm before the storm.

  16. “…I think we’ll have better performing models.”

    So, *now* the models will be better than the models of yesterday, which means the models will be better than:

    a) Worthless
    b) Poorly Performing
    c) Almost Relevant
    d) Asking The Magic 8 Ball/Flipping A Coin

    😉

    Andrew

  17. 1. It is a tautology to suggest that once we have better estimates of one part of a model we’ll have a better performing model.

    2. Hopefully by “model performance” we are not talking about the ability to retrofit to a global time-series, but a proper out-of-sample (i.e. into the future) validation test.

    3. You don’t need better measures of aerosol forcing sensitivity coefficients to have better estimates of aerosol quantities. So measure the aerosols, run the model with the same sensitivity coefficients as calibrated in past IPCC reports, and see how the trend deviates from expected. This does not require new research. It should take a couple of hours to do if newer, better estimates of aerosol quantities are already available.

  18. Lucia:

    All the models tend to give upper tropical troposphere temps about +0.5ºC higher than the lower troposphere, in ‘conflict’ with the very sparse tropical radiosonde data sets.

    The parameterization of lapse rates in GCMs is somewhat flaky.

    If the radiosondes were ‘right’ and models were tweaked to give environmental lapse rates that matched them more closely, the ‘predicted’ warming trend might drop from about +0.2ºC/decade to nearer to +0.1ºC/decade.

    I believe this would pull the model data in your second graph well within the 95% confidence interval for all but the most recent years.

    That would probably only mean that we have a little more time to get mitigation right.

  19. Sea level rise and no increase in ocean heat content appear to be mutually exclusive. Either the ocean has constant mass and is expanding due to increasing temperature, or the temperature is constant and the ocean mass is increasing due to land based ice melting, or a combination of the two effects. In either case, the ocean heat content (heat capacity times temperature times mass) goes up. I guess the sea floor could be rising, but wouldn’t somebody have noticed that by now? Or perhaps there is some combination of temperature change, thermal expansion coefficient and heat capacity that would allow volume to increase at constant heat content, but it seems unlikely. Thermal expansion of water varies with temperature and even changes sign at low temperature. The depths of the ocean are very near the temperature of maximum density and minimum rate of change of density with temperature.

    Maybe that’s it. Instead of sea level going up because the upper ocean is warming, it’s going up because the deeps are cooling. Yeah, that’s it. [/sarcasm]
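
    (More seriously, here is a back-of-envelope sketch of the thermal expansion piece; every number is an assumption chosen purely for illustration, not a measurement:)

        # Steric (thermal expansion) contribution to sea level rise.
        alpha = 2.0e-4     # per K, rough expansion coefficient of warm surface seawater
        depth = 700.0      # m, assumed depth of the layer doing the warming
        warming = 0.02     # K per year, assumed warming rate of that layer
        print(alpha * depth * warming * 1000.0)   # -> 2.8 mm per year from expansion alone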

  20. Simon– I have suggested the possibility that the problem is poor estimates of aerosol loadings in the past. That said, I try to separate the questions:

    1) Are the models on track or not?

    Then, if they are off track: 2) Why are they off track? And if they are on track: 3) Are they on track for the correct reasons?

    You may not be aware of the history, but there are “some” who insist the AR4 models cannot be said to be off track based on the comparisons to data. Of course, if the models are on track (as some insist) but driven by incorrect aerosol forcings (as some data now suggests), accidental agreement for the wrong reason would hardly suggest we should be confident of their predictive ability.

  21. “accidental agreement for the wrong reason would hardly suggest we should be confident of their predictive ability”

    Wrong Method + Right Answer = Bad Science

  22. Interesting. Now, do you have data for a model or set of model runs with no (or no significant) CO2 forcing in it, and how does that compare with the trend estimate evolution?

  23. Bill Illis,

    The models would only be off by half if they assumed an instantaneous equilibrium temperature response from CO2. There are good reasons why they do not.

  24. I have a suggestion to make this a bit more informative.

    1) What would the lukewarmers’ estimate of future warming be? Would it be fair to say that people who believe IPCC CS estimates are too high would say that future warming should be 0.1degC/decade, or would it be 0.05degC/decade?

    2) Once a general number is agreed upon, compare that to 2001-2009 actual temps.

    3) If possible, compare the IPCC and the lukewarmers to the 1990-2009 actual temps, accounting, of course, for the difference in forcings for the longer period.

  25. Boris-lucia, I think, still believes that, eventually (in the long run?) we’ll see .2 per decade. But I don’t think that either of those would make much sense even to lukewarmers who don’t think that we’ll eventually see .2 per decade. The best expectation, as the “skeptics” at WCR (Pat Michaels et al.) have been saying for some time, is warming at the rate shown from 1979 to the present, about .17 degrees per decade. For example, see:
    http://www.worldclimatereport.com/index.php/2006/01/31/hot-tip-post-misses-the-point/

  26. Zeke,

    I’m using the prediction trendlines of the models.

    See Hansen 1988 and all of the IPCCs. How do you get from +0.6C today to +3.0C (or +3.25C) in 90 years?

    See here.

    http://img183.imageshack.us/img183/6131/modeleghgvsotherbc9.png

    And here (with GISS temp down to 0.41C in February, the statement should be changed to GISS Model E is off by 0.38C in just five years).

    http://img175.imageshack.us/img175/2107/modeleextraev0.png

    And here.

    http://img254.imageshack.us/img254/2626/tempobsrvvsco2ct4.png

    There is a question of how long the ocean thermal response lag should be. Originally, this was just a few years, then it got changed to decades, then it got changed to 30 years, and now it is 1500 years. [Here is Hansen’s latest climate/ocean ‘lag’ response timeline before equilibrium is reached]

    http://img291.imageshack.us/img291/1200/image002.png

    As lucia has noted, the timelines keep getting changed so we should just pick a prediction at a certain date and test that.

  27. Andrew_FL–
    I don’t know what the trend will eventually be. But, I don’t see any strong reason to expect it to be in the more alarming ranges we read in various newspaper articles. I’d be stunned if it’s less than 1C for this century. I also don’t expect it to be 3C. But… who knows?

    I’m not sure why Boris thinks it would be more informative for us to identify lukewarmer projections and test those. In any case, even if it would be “more” informative, so? Testing models against new data is normal in science. So, in my opinion, reporting the results of any such tests is sufficiently informative. Certainly, it meets the minimum standard for hitting the “publish” button at a blog.

    But if anyone including Boris is aware of any other projections, I’d be happy to be told where to find them. Then I’ll test them.

  28. Lucia

    Re 11783. Ice breaking off from a shelf has no effect on sea level since shelf ice is already floating. Also there won’t be any sudden sliding of Greenland ice into the ocean for the simple reason that Greenland is bowl-shaped with a depression in the middle and elevated edges. Any large scale movement would require the ice to slide uphill.
    This is well illustrated on the map on page 10 of this monograph from the Geological Survey of Denmark and Greenland:

    http://www.geus.dk/publications/bull/nr14/index-uk.htm

    Also, DeWitt Payne (11815):

    Ordinary seawater does not have any maximum density; it grows denser until it freezes. Only fresh and slightly brackish water has a maximum density above the freezing point.

  29. tty– Ice that is literally floating will have no effect on sea level. But aren’t shelves defined as being attached to the surface in some way? So part of the weight is not offset by displaced water?

    But yes, the large sea level rises are expected to occur if ice currently on solid land ends up in the sea.

  30. No Lucia, shelf-ice is floating on water. You are probably thinking about the buttressing effect shelf-ice is considered to have on adjoining grounded ice. However this is due to the fact that it takes a great deal of energy to move a couple of billion tons of floating ice sideways, not that the grounded ice is in any way supporting the shelf-ice or vice versa. Ice hasn’t got enough tensile strength for that in any case.

  31. Lucia, when I try to get the model data from Climate Explorer it says that they can’t redistribute the data. What’s the magic handshake or sign that you use to get the data? Secondly is there a data dictionary of what the parameters are?

    Thanks

  32. I’m not sure why Boris thinks it would be more informative for use to identify lukewarmer projections and test those.

    First, it would be helpful for skeptics to make their arguments more quantitative. If skeptics are convinced that climate sensitivity is on the lower range, then they should be able to put together some sort of projection based on this range. The “skeptic movement” tends to avoid anything but sniping from the wings (still waiting on that SKEPTEMP analysis, guys), so if they want in on the “debate,” as they so often claim, then give us something to debate about. Other than canards about water vapor being 95% of the greenhouse effect, that is.

    Much has been made on this blog and elsewhere that the recent global temperatures cast doubt upon the IPCC projections. But would those who argue for a low CS be able to produce better projections? My instinct tells me they wouldn’t–that a short term trend from 2001 might fit in with their projection, but the longer trend would be way off. And I wonder how the climate skeptics–the lukewarmers–would explain the discrepancy between their projection and the long term trend? Any guesses?

  33. Boris–
    I’m not convinced we know the correct sensitivity, but have seen little empirical evidence to support the higher range. So what if I don’t come up with my own model? Why does this bother you? (Ok. That’s a rhetorical question. The answer doesn’t matter to me. It doesn’t bother me one iota to not create and run my own GCM in my spare time at home.)

    I’ve never claimed I or anyone could produce better projections than the existing ones. I’m not surprised your instinct might tell you skeptics can’t produce better predictions than current IPCC models. Mine say the exact same thing: Skeptics models would be just as poor as the IPCC models.

    I think it’s likely the state of knowledge is such that predictive ability will elude everyone from alarmists to skeptics. But the fact that we can’t develop models with decent predictive ability doesn’t mean we can’t compare the ones we have to data and admit they have problems. There is nothing in science or logic that says we can’t compare models against data unless we come up with better models. That notion would be idiotic. If anyone believed it, scientific and engineering progress would have stalled long ago!

    On the other stuff: Have I ever said anything about water vapor being 95% of the greenhouse effect, etc.? So, what in the world does that have to do with my testing models? Have I ever said I planned to develop skeptemp at home in my spare time? That’s JohnV’s project… right? He says he’s going to finish it sometime. So what does that have to do with me?

    So, in the meantime, I’ll just keep talking about how projections compare to data and show the different ways we can compare the two.

  34. BarryW–
    Pick TAS for surface temperature. Then click “select field”. You’ll get a panel that asks you for lat/long info etc. I’ll find the post that explains it better!

  35. Tried that and I got this message:

    Download miub echo g 20c3m tas
    The PCMDI does not allow us to redistribute this file. Please consult their website for further information.

    or I get

    You don’t have permission to access /IPCCData/20c3m/tas_A1_giss_aom_00.nc on this server.

    I even registered and get the same answer. Guess I’ll have to email them to see what I’m doing wrong. Thanks

  36. Boris,

    The burden of proof is on those who believe we should spend billions to “stop” AGW that may not even exist.

    Thus, the AGW camp must “prove” AGW to justify their “belief”. Moreover, they must prove AGW will truly lead to catastrophe. Finally, they must prove it is cost effective to “stop” AGW versus merely “adapting” to whatever comes.

    Skeptics need only point out the AGW camp’s foibles to justify their “skepticism”.

  37. Lucia,

    Don’t be too defensive. I was talking about skeptics in general, not you in particular. Despite what your views are, you must know that skeptics take your analysis and conclude that the models are off, even though your analysis is based on a very short period, climatologically. I think it would be informative to remind those skeptics that a low sensitivity would not explain the temperatures observed in the late twentieth century. If there are any skeptics who think that Lucia’s analysis casts doubt on IPCC sensitivity estimates, I’d like to hear your explanation.

  38. Anthony Watts has a post from the Edmonton journal that uses some record cold temps in Canada to argue that AGW is wrong. I know some of you voted this goofball best science blog. Aren’t you embarrassed?

  39. Sigh, Boris try reading instead of reacting. Did you even read the first line? He’s pointing out that climate extremes don’t prove anything. AGW fanatics take any high as being proof of AGW, but ignore any low as being irrelevant. Someone who makes snarky comments from his intellectual high perch, should be embarrassed, not the rest of us.

  40. Sigh, Boris try reading instead of reacting. Did you even read the first line? He’s pointing out that climate extremes don’t prove anything. AGW fanatics take any high as being proof of AGW, but ignore any low as being irrelevant. Someone who makes snarky comments from his intellectual high perch, should be embarrassed, not the rest of us.

    I also read Anthony’s headline: “Edmonton Canada bests all time record low by -12 degrees, columnist questions climate situation”

    If you don’t want to be accused of saying stupid things, it’s best not to say…stupid things. Yes, the article complains about some unknown journalists hyping heat waves. And also says that:

    Even the Arctic sea ice, which has replaced hurricanes as the alarm of the moment ever since hurricanes ceased to threaten, has grown this winter to an extent not seen since around 1980.

    –which takes George Will’s nonsense to a whole new level.

  41. Boris–

    Despite what your views are, you must know that skeptics take your analysis and conclude that models are off–even though your analysis is based on a very short period, climatologically.

    The model projections by the IPCC are off. We don’t know why they are off. It could be the forcings. It could be inadequate spin up. It could be the model parameterizations. It could be the poor resolution. We don’t know.

    This means the sensitivity estimates are off.

    I realize many say that the rise in temperature over the past century can’t be explained with low sensitivity. But if you examine the argument, it is based on model output. It might be wise if climate modelers sought more empirical estimates. Schwartz tried to do that; he came up with low sensitivity. The uncertainty intervals may be large; the method may be flawed. But the value he got was low.

  42. Plus, if there is no deterministic “window” in the chaotic system that is climate, then any predictions (lukewarmer, alarmist, or otherwise) are approximate at best.


  43. The model projections by the IPCC are off. We don’t know why they are off. It could be the forcings. It could be inadequate spin up. It could be the model parameterizations. It could be the poor resolution.

    It could be internal variability — and that’s the most likely explanation if you are referring to results since 2001.

    This means the sensitivity estimates are off.

    Not necessarily. But that’s a minor point.

    I realize many say that the rise in temperature over the past century can’t be explained with low sensitivity. But if you examine the argument, it is based on model output.

    I suppose that’s true. But even if models didn’t exist at all, low sensitivity would do a worse job explaining the 20th century temperature rise compared with IPCC range sensitivity.

    It might be wise if climate modelers sought more empirical estimates. Schwartz tried to do that; he came up with low sensitivity. The uncertainty intervals may be large; the method may be flawed. But the value he got was low.

    Well, there are plenty of other empirical estimates out there. Annan and Hargreaves 2006, Forster and Gregory 2006, Royer et al. 2007, Hansen 1993, Lorius 1990, Gregory 2002, Wigley et al. 2005. There are many more. When you look at all empirical estimates, you get pretty much the same result as models: CS of about 3 degC.

  44. Boris, here is a low sensitivity reconstruction of monthly temperatures going back to 1871. This reconstruction method works equally well for the NH, SH and the tropics.

    http://img14.imageshack.us/img14/8/newhadcrut3model.png

    This low sensitivity model was only off by 0.015C in February, 2009 while GISS Model E would be off by 0.36C in February, 2009.

    Here is GISS Model E’s reconstruction/hindcast from 1880 to 2003. Not as good as the above it seems.

    http://img355.imageshack.us/img355/9043/modelehindcastoz1.png

    And here is how the low-sensitivity estimate plays out over the next 100 years.

    http://img14.imageshack.us/img14/5721/newhadcrut3warming.png

    Hadcrut3 gives the highest sensitivity numbers of the different temperature series. GISS is 1.52C, NCDC is 1.4C, RSS is 1.1C and UAH is 0.8C. The lower numbers from the satellites indicate that there were likely some unnecessary “adjustments” made to the historical temperature figures which have artificially adjusted the long-term trend upward.

  45. Boris–

    It could be internal variability — and that’s the most likely explanation if you are referring to results since 2001.

    Did you read this post? They are off if we pick 1960 as the start date. The ‘hindcast/projection’ merges are off if we do the analysis starting in 1980.

    You need to cherry pick to find a period when they are *inside* the 90% confidence interval since 1960. If I post the results with volcano eruptions included in the forcing files, the results are worse.

    On the sensitivity: the 20th century could be “explained” if variability is much higher than the models say it is (and higher than we estimate using AR2) and/or some combination of the ocean time constant being much higher (or lower) with the surface time constant faster.

    Which of the four Annan and Hargreaves 2006 papers do you mean?
    http://www.jamstec.go.jp/frcgc/research/d5/jdannan/
    The one at arXiv in which he pores through the literature looking for empirical estimates and shows why sensitivities in the high (aka alarmist) range are improbable? Or the one in GRL which basically says the same thing?

  46. Lucia,

    On the sensitivity: the 20th century could be “explained” if variability is much higher than the models say it is (and higher than we estimate using AR2) and/or some combination of the ocean time constant being much higher (or lower) with the surface time constant faster.

    If variability is that high, then it would “explain” model divergence as well.

    Yes, the GRL Annan. Note that he also excludes low sensitivities and gets a range of 2.5C to 3.5C. (Actually the paper might say 2-4, but on his blog, he gave the tighter range.)

    Bill,

    Interesting–and thanks for pointing to actual projections. When did that analysis begin projecting?

  47. Boris is talking nonsense as usual and seems to be ignoring the replies from Allen and Ryan, while referring to the self-reinforcing groupthink-afflicted team. If you want my ‘projection’, here it is: global average temperatures will not vary by more than about half a degree up or down over the next 50 years. But who is going to take any notice? Nobody. I am not claiming any degree of certainty, and I’m not demanding that the entire world spend trillions of dollars and completely change its way of life. (How many times does this simple point have to be made?)
    Many skeptics believe the future climate is inherently unpredictable and so any future ‘projections’ are as meaningless as next year’s weather forecast.

    would those who argue for a low CS be able to produce better projections

    Yes, my ‘more-or-less flat’ projection is already more accurate than the IPCC 🙂

  48. Boris–

    If variability is that high, then it would “explain” model divergence as well.

    Of course. But, in that case, the models still have a problem because despite being based on physics, they get the variability wrong on average.

    Since all “lukewarmer” means is being suspicious of the higher sensitivities, Annan’s stuff falls firmly into the “lukewarm” spectrum. The fact that it doesn’t call for very low ones isn’t a problem.

  49. Bill,

    I was under the impression that the AMO was defined by the multidecadal cyclical variability left in climate models when all CO2 forcing had been removed. Given that the projected magnitude of the AMO is a result of climate models using a higher sensitivity than your approach, wouldn’t a lower climate sensitivity also imply lower (rather than higher) AMO forcings? Also, given that the AMO is defined in part by detrending past temperatures from estimated CO2 induced warming in climate models, it seems like the fit of AMO variability to temperature variability is not really an independent factor.

  50. I think you’ve clarified “lukewarmer.” Apparently I didn’t know what it meant. So, bender would agree with the median IPCC sensitivity range? Really?

    And I would be a lukewarmer because I think CS is between 2 and 4?

  51. Many skeptics believe the future climate is inherently unpredictable

    Well, they may believe that. But there is a value for climate sensitivity. Do they think it’s low or high or do they just ignore the concept completely?

  52. Zeke, the AMO is just ocean SSTs in the North Atlantic (with the slight trend upward removed). It is a measured quantity like the Nino 3.4 region anomaly rather than falling out of climate models. Any increase in the trend (which is not much by the way) is included in/is left to the CO2 induced warming.

    All of the ocean cycle indices I’m using have been detrended (the slope is Zero and the sum of all months is Zero over the period). There are lots of papers which indicate that the AMO is a natural ocean cycle, part of the Thermohaline Ocean Circulation and is independent of global warming.

    Over time, the raw (no trend removed) AMO will be going up as most of the ocean surface temperatures have increased over time (0.4C or so). But some of the AMO reconstructions show it was much higher at times in the past than the current numbers show so, once again, we are back to it being a natural cycle.

  53. Boris–
    You’d have to ask Bender what he believes specifically. I define lukewarmers as believing CO2 causes warming, but being suspicious about claims that things must somehow be on the very alarming end of sensitivity, outcomes etc.

    As with all labels, individual people who might fit the general category have different specific ideas. There is obviously some sort of range of thought or feeling about the likelihood of various ranges of warming. So, one could have

    1 hell-fire-and-brim-stone warmers,
    2 high-side of the IPCC warmers,
    3 lukewarmers,
    4 no-warming or cooling, it’s natural variation zeros,
    5 the sun is slowing down, we are entering a little ice age coolers, and
    6 stone-cold we are plunging into an honest-to-goodness glacial period coolers.

    I don’t think there are very many of (6), but who knows? There seem to be some (4-5) at blogs, forums, conferences etc. Of course, there are also people who defy categorization (e.g. Tom Chalko who has posted that warming will cause the earth to explode.)

    Then, of course within all the categories of beliefs about warmth, we have difference of opinion about a) the amount of uncertainty about their notion of the best estimate and b) what we should do about it.

    I think there is no uncertainty that CO2 causes warming and that warming will resume. (If aerosols are blocking the sun, things could get bad quite quickly if aerosols are cleaned up!) That said, with respect to figuring out if things are an absolute emergency that could result in death of all species on earth or whether it’s just going to warm a little more than it did last century…. I think there is uncertainty. I can’t say that people with any number of opinions are clearly wrong.

  54. Boris–
    I think if you firmly believe 4 is very likely… probably not. There’s a certain point where you need to be suspecting the upper part of the IPCC range is unlikely. Still, 4C is less than 4.5C!

  55. “1 hell-fire-and-brim-stone warmers,
    2 high-side of the IPCC warmers,
    3 lukewarmers,
    4 no-warming or cooling, it’s natural variation zeros,
    5 the sun is slowing down, we are entering a little ice age coolers, and
    6 stone-cold we are plunging into an honest-to-goodness glacial period coolers.”

    It’s amazing that we have this wide array of beliefs about something that is supposedly so *settled*. Facts are boring, though. Belief… imagination… faith… that’s where the exciting stuff is. The possibilities are endless, eh? What can we dream up next? 😉

    Andrew

  56. 3 is the most likely number, 2 and 4 are less likely, but quite plausible.

    The def. of lukewarmer by David Smith from one of your previous posts was:

    Also, I am a “lukewarmer” who thinks that the world is warmer than it would otherwise be due to anthropogenic gases (but doubts that the impact will be extreme).

    which isn’t very helpful either because it confuses the amount of warming with the severity of effects–and is vague about both.

    In any case, there must be a set of skeptics who argue that the true range of CS is substantially lower than the IPCC range. (Bill gives a nice example, though I wonder if aerosols are considered in his estimate.)

  57. Boris,

    In any case, there must be a set of skeptics who argue that the true range of CS is substantially lower than the IPCC range.

    Of course there is. I could probably guess at a few, but I’d rather people spoke for themselves.

  58. Lucia, thanks for explaining ‘lukewarmer’! Like Boris, I’d often wondered what you meant by that – but your definition is still a bit vague. Also, as hinted at by Boris, it would be better to expand and clarify the 2-4 range in your scale. I guess I’m around 3.8 on the Lucia scale! To answer Boris’s question, ‘many skeptics’ think that ‘climate sensitivity’ is an over-rated concept but if forced to come up with a number would say around 1 degree or less.

  59. PaulM–

    Of course the definition is vague. So are definitions of “skeptic”, “alarmist”, “denialists”, “AGW-activist”, “septic” and all the other terms flung around blogs and forums. Heck, if you can get people to decide the exact range of color that can be called “pink”, you’re a better man than I am! Most people agree Pepto-Bismol is pink. But… are fuchsias pink? Is coral pink? How light does it have to be? How dark? We know pink needs to include white and red. How much white do you need to add before you just call it red? May pink contain very light grey instead of white? Can it contain some yellow (i.e. coral)? Some blue (i.e. fuchsia)?

    With regard to lukewarmer, the minimum criteria are:

    * Must believe ghg’s will cause warming, and that the warming for doubling of CO2 would be at least detectable compared to “noise”. (I don’t know where the cut-off is. I’d say a reasonable cutoff is some small multiple of the standard deviation of variability in GMST due to “weather noise.” This permits us to argue about the cutoff, but we know the variability of weather noise is greater than 0.1C. We can argue about how large the multiple is, but if your best estimate of climate sensitivity to doubled CO2 is as low as 0.1C, you really can’t claim to be a warmer of any sort. That’s just too tepid.)

    * Must believe the warming for doubling of CO2 is likely on the lower side of the IPCC range. (You’re not required to be certain. It’s just what you’d bet.)

  60. Bill Illis (Comment#11843)

    Bill, re. the last graph in your post above (‘Climate Response Function’), of which you say “Here is Hansen’s latest climate/ocean ‘lag’ response timeline before equilibrium is reached” –

    The context in which I’ve seen Hansen recently show this graph, which is the GISS model with Russell ocean, was his AGU presentation. He shows it but then puts the case that the models may have the response times wrong. He says:

    “…we now have several reasons to believe that the climate response time of the GISS ocean model, and most ocean models, is probably too long. Most of the IPCC ocean models seem to mix too rapidly.”

    “Comparisons show that all four models have similarly long surface temperature response times.
    Unfortunately, this does not indicate that the models are right.
    On the contrary, there are numerous indications that they have a common problem.
    First, overall, they tend to mix transient tracers more than observed.
    Second, theoretical work at GISS, by Vittorio Canuto’s group, shows that mixing parameterizations, such as the common KPP approximation, cause too much mixing in the upper ocean.”

    I think Hansen indicated in this presentation a number of elements that he thinks the models may be having trouble with. My summary:

    1. Equilibrium climate sensitivity is known accurately, but “Estimates of climate sensitivity based on the last 100 years of climate change are practically worthless, because we do not know the net climate forcing. Also, transient change is much less sensitive than the equilibrium response and the transient response is affected by uncertainty in ocean mixing.” (my bold).

    2. but, net forcings and response time are both uncertain.

    3. GHG forcings are known “very accurately”

    4. whilst for aerosols, “the error bars are huge” – “the aerosol forcing might be anywhere between zero and -3 W/m2”.

    5. Therefore, “the net forcing….is anywhere between zero and +3 W/m2, probably between about +1 and +2 Watts.”

    6. Measurements of ocean heat storage are just not good enough yet, but nevertheless “it has become clear that there is a discrepancy between observations and the heat gain calculated in most models, if the models use a net human-made forcing of +2 W/m2 and if the oceans mix as deeply as most ocean models do.”

    7. However, if the climate response function (as indicated in the GISS graph you presented) is in fact too slow, then the current net forcing must be less than 2W/m2.

    8. If so, then there is less warming ‘in the pipeline’ to come (the planet is closer to energy balance than modeled) but also the climate will be more responsive to ongoing changes, including moderate changes such as solar irradiance changes.

    We won’t have accurate tropospheric aerosol measurements until the Glory mission is operative (http://glory.gsfc.nasa.gov/), and DSCOVR, the satellite that was designed to measure stratospheric aerosols, is maybe going to be used for something else, having been mothballed for so long ( http://spaceflightnow.com/news/n0903/01dscovr/).

    How can we expect models to do a good job if we can’t provide accurate measurements in the first place?

  61. Simon, that’s getting closer, but I think in the AGU presentation Hansen is saying (you might have to also read parts of the paper linked below to get what he meant about the part on the oceans mixing more slowly):

    “Temps are not keeping up with the models.

    So either the Aerosols negative impact is bigger than we thought or the oceans warm up slower than we thought in the initial 100 years of the graph [this is just the point in the graph between 40% and 60% response or up to 60% response – Hansen is saying it might have less slope over this period] [In the early part of the graph, the oceans are absorbing more of the increased forcing than we thought.]

    Hansen says, however, that the models do assume the oceans warm up much faster than they really do. The models need to be changed to perhaps an even longer climate response function than included in the graph (and he has had this semi-verified by the NOAA, NCAR and Hadley Centre centre computer models).”

    And Hansen has used this in one other presentation and in his most recent paper as well so it is not just the AGU presentation. (This starts on Page 22 here).

    http://pubs.giss.nasa.gov/docs/2008/2008_Hansen_etal.pdf

    We do know it takes a long, long time for the deep ocean to warm up. The cold dense water stays at the bottom and warmer (lighter) water rises to the top. If the cold water freezes, it becomes even less dense and, in fact, can only stay frozen if it is at the surface. This means the ocean is either very warm or frozen at the surface and gets progressively colder and colder as it gets deeper. The coldest water is the most dense and stays at the bottom and there is very little mixing at all.

    The surface ocean probably catches up to warming in a few years [the 40% part of the graph], but the deep ocean needs to complete at least one complete overturning (probably two or three) to catch up to the surface. This may take 800 to 3,000 years.

    And the deep ocean paleo-climate reconstructions shows that the deep ocean does eventually catch up. It has been 10C warmer and it has been 3C colder than now.

    Now if Hansen’s aerosols explanation is actually right, then there will be even more global warming than we thought once we clean up the air. [I don’t buy the aerosols negative forcing since the effect has been concentrated in the northern hemisphere, where temps have increased at a faster rate than in the southern hemisphere. Is China cooling off faster than anywhere else right now? Is the northern hemisphere and the northern mid-latitudes cooling off at a faster rate since China and India started producing the big brown cloud? Nope.]

  62. Simon Evans [11883]
    Your comment to the effect that it is unreasonable to expect models to perform if we don’t have accurate data to put into them is certainly to the point.

    What to do, then, about the fact that all current GCMs by definition are fatally flawed because 1] it is clear we know precious little about the actual workings of the climate system [as in whether water vapour is a positive or negative forcing] and 2] of the extreme complexity of the climate system.

    Over on the AOGCM thread I have suggested that GCMs are in many ways like proteomics, i.e. the modeling of a system with an “infinite” number of variables, none of which is “fixed”, in real time. The proteomics undertaking was abandoned in complete failure and it is doubtful we can reasonably expect any better from GCMs.

  63. Way ahead of you Boris. At Climate Audit I keep a running table of people’s beliefs about GHG sensitivity. I’ve published it twice. Search there and you will see a rather precise quantitative definition of what a “lukewarmer” is. Of the people who have posted their beliefs NONE are deniers, most are lukewarmers, a few are alarmists. FWIW I have not posted my beliefs on the matter because I have none. I am agnostic, and defer to the “experts”. I hope to hell they’re right. I reserve the right to question their logic and data. I think they are obliged to answer skeptics and lukewarmers respectfully, with real answers. The denigration that skeptics receive at Real Climate does NOT help their cause.

  64. Okay, I give. I can’t find what bender is talking about. Would you be so kind as to point us in the right direction?

  65. What’s a “lukewarmer”?

    Google this phrase at Climate Audit:
    “Two years ago I asked CA regulars to state their “priors” – what proportion of 20th century warming could be attributed to GHGs”

    Or just go here:
    http://www.climateaudit.org/?p=3819#comment-300434

    This was my reply to Joe Solters, who accused me of being a “warmie”, always “telling people what to think”.

  66. Re: Simon Evans #11807

    When the data fit the theory, it’s time to surge forward on the policy front; no time for science.
    When the data depart from the theory it’s time to do some science to rescue the hypothesis.

    I’m surprised Simon missed the double-standard. Normally he’s so good at reporting such things.

    Reporting bias is something Steve McIntyre has commented on before: when it’s warm (ca. 1998) you calculate a trend and report it. When it’s cool (ca. 2008) you sit tight and wait for warmer times. Except now the cooling has been noted so widely it can’t be so easily ignored, dismissed, or denied.

  67. Bill Illis (Comment#11888)

    ” Is China cooling off faster than anywhere else right now? Is the northern hemisphere and the northern mid-latitudes cooling off at a faster rate since China and India started producing the big brown cloud. Nope.”

    Well, you say no, but the UNEP regional assessment report says the following:

    “Annual land-average solar radiation over India and China decreased significantly during the period 1950 – 2000. For India, the observed surface dimming trend was -4.2 W m-2 per decade (about 2 per cent per decade) for the 1960 – 2000 period, while an accelerated trend of -8 W m-2 per decade was observed for the 1980 – 2004 period. Cumulatively, these decadal trends suggest a reduction of about 20 W m-2 from the 1970s up to the present, thus supporting the large dimming values inferred from modern satellite and field campaign data. In China, the observed dimming trend from the 1950s to the 1990s was about 3-4 per cent per decade, with larger trends after the 1970s. Cities like Guangzhou recorded more than 20 per cent reduction in sunlight since the 1970s.”

    http://www.unep.org/pdf/ABCSummaryFinal.pdf

    tetris (Comment#11894)

    I saw your post on proteomics. I think the comparison would apply if, for example, we were trying to predict whether it would rain or shine in New York on the first day of next month, but I don’t think that intractable complexity is the real problem for climate modelling. I don’t agree with your view that we don’t know the impact of water vapour. The satellite record is relatively short, but it supports a strongly positive feedback, in the order of a doubling of warming, as theorised. I think uncertainties regarding cloud response are more to the point, and this accounts for most of the range in current modelling.

    bender (Comment#11906)

    “When the data fit the theory, it’s time to surge forward on the policy front; no time for science.
    When the data depart from the theory it’s time to do some science to rescue the hypothesis.”

    No, it is always time to be working on the science. I referred to the DSCOVR satellite above. This was proposed in 1998 by Al Gore (termed the Triana mission at the time), and should have been launched in January 2003, but it was shelved in a context of political opposition. If it were in space now we would be able to measure stratospheric aerosols with it (and much else besides). Given its proposal in a year of record global temperature, it makes no sense to suggest that the interest in advancing research wanes ‘when the data fits the theory’.

  68. Simon-um, 1998 isn’t exactly “theory fitting” just because it was warm-it was an El Nino year, and unless you’re totally clueless about how the Pacific Ocean influences global temperatures, you know that means warm. So a single year is not theory testing evidence. A long term trend-now ~that~ can test a theory. Since 1979, it’s up, more or less as much as the models say (though the answer was known beforehand so it doesn’t mean much) but since 2001, it’s down. Of course, that’s a much shorter period, so maybe it’s due to some “variability”-but Bender has it wrong anyway. When temps are up, the tactic is to yell “fire!”, but when they’re down, the tactic is to yell “FIRE!!!1! ROFLCOPTER!1!!1!!” and then pie the opponent in the face. All the while, science may or may not be being done, but that’s not really related to trends.

  69. Andrew – I don’t think we disagree. My reference to the coincidence that the Triana mission was proposed in 1998 was simply in response to bender’s suggestion that more science is only called for when the recent evidence seems less convincing (which I happen not to think is the case anyway, if we consider all the evidence relating to the matter that has accrued this century).

    In passing, I like to remind people of this whenever the 1998 El Nino comes up –

    White House, June 8 [1998] – Vice President Gore and NOAA scientists announced today that The 1997/98 El Niño, one of the most significant climatic events of the century, produced extreme weather worldwide.

    http://www.publicaffairs.noaa.gov/stories/sir3.html

  70. Actually, the net effect of El Ninos is positive - longer growing seasons offset the costs of “extreme” weather - while for La Ninas, which cause alternate extremes in different places, the net effect is negative. And I’m not sure what business Gore had co-opting NOAA.

  71. Bender-Is that degrees or percent? I think it must be the former. In that case, you could put me down for an oddly specific 0.4752 - assuming the IPCC mean GHG forcing estimates, when summed, give the actual GHG forcing, and applying a sensitivity (based on some work I’ve been doing) of 0.18 C/W/m2.
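
    To spell out where that oddly specific number comes from, here is a minimal back-of-the-envelope sketch. The individual forcing values are an assumption on my part (roughly the AR4-style best estimates for the long-lived GHGs); only the 0.18 C/W/m2 sensitivity and the “sum the IPCC forcings” step are stated above.

        # Back-of-the-envelope sketch; forcing values are assumed AR4-style best estimates (W/m2)
        forcings = {"CO2": 1.66, "CH4": 0.48, "N2O": 0.16, "halocarbons": 0.34}
        sensitivity = 0.18                       # assumed C per W/m2, as stated above
        total_forcing = sum(forcings.values())   # ~2.64 W/m2 when summed
        warming = sensitivity * total_forcing    # ~0.475 C attributed to GHGs
        print(f"{total_forcing:.2f} W/m2 x {sensitivity} C/(W/m2) = {warming:.4f} C")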

  72. Simon Evans [11912]
    At the current state of the art, the predictive value of GCMs is indistinguishable from zero. The reason for this is precisely that they all tend to be far too broad-brush [reflecting, in that sense, our limited understanding of the details of the system] and have no ability to account for the effects of the “wheels within wheels” we know very much exist. Because of this, all GCMs tend to overstate the effects of certain purportedly well understood “key” variables, and are consistently well out. Trenberth has repeatedly argued that GCMs will never amount to anything credible until we learn to model regional climates. Which pretty much requires getting a far better handle on the role of, inter alia, time-related variables.

    As far as the role of water vapour is concerned: you are out on thin ice [no direct link with lower troposphere temperatures implied in any way..] in arguing that we know with any degree of certainty that water vapour is actually a positive forcing. It is not only water vapour in the form of clouds that would appear to be a negative forcing. Paltridge et al.’s very carefully formulated argument suggests something quite different, as does the work of Svensmark, Spencer, Christy and a few others. The reason these studies are being vehemently pooh-poohed as “iffy” [or worse] by AGW proponents is, of course, that if they are correct, not only do most GCMs conclusively collapse, but the entire CO2-as-main-climate-driver hypothesis vapourizes, as it were.

  73. Woops, posted an incomplete comment –

    Tetris, continuing –

    As for the Paltridge paper, one can’t make a good analysis from bad data, however careful the argument. I think Ryan Maue’s comments on the CA thread discussing this explain well why there would be reservations about the paper.

    Svensmark – what does this have to do with water vapour response to warming, though? Potentially it offers (or offered) an alternative explanation of recent climate evolution, but I don’t think that in itself it suggested a refutation of w.v. feedback.

    Spencer etc. – well, I read what he/they write. Btw, he’s said on his blog:

    “While I have believed for years that water vapor feedback might be negative, I will admit the latest evidence is looking more and more like the real story could on the reflected solar side instead.”

    As for such studies being “vehemently pooh-poohed as “iffy””, of course it’s possible that all those making criticisms are simply subject to their biases, but it’s also possible that the criticisms are sound. I guess we just have to read all the stuff and do our best to make our own judgment of that. I could, of course, say exactly the same about the vehement treatment of Mann & ‘the team’ over the past ten years…. I’d agree with you that less vehemence on both sides would be good for science, but I don’t think we’re going to get that!

  74. Simon Evans [12045-47]
    The Dessler and Sherwood piece is precisely that, a “perspective”. Hardly something one would want to hang one’s scientific hat on.

    I followed the discussions on CA re Paltridge et al and saw Maue’s reservations. I also duly noted that several commenters, on what seemed to me to be good grounds, disagreed with his argument.

    Svensmark’s work has everything to do with water vapour as a major driver in the system as until further notice, clouds are water vapour. How they are formed and in what quantity are fundamentally important questions.

    Spencer’s comment by no means indicates he now somehow thinks water vapour is a positive forcing. On the contrary, he argues that when everything is added up, water vapour comes out as a net negative forcing. We know from comments on another thread here that you don’t have much time for Spencer, but that doesn’t give you the right to misrepresent his thinking.

    And on the subject of “iffy” data, the Mann data set[s] certainly qualify for a major “iffy” prize, as do the GISS surface temperature data set and the Jones data sets used by the IPCC to pooh-pooh urban heat effects. On that latter note, Jones has just recently published results that in fact show a 0.1C per decade positive contribution to temperatures from urban heat effects. It seems to me that if you add that up in the context of rapid urbanization during the 20th century, you quickly arrive at the 0.6C increase we are told we have observed. Add that to all the other questions that have been raised about data contamination and “adjustments” and you have truly “iffy” temperature data.

    Not surprisingly, this major reversal/correction by Jones, which removes one of the cornerstones of the AGW/ACC edifice, got no publicity whatsoever. WUWT has a thread on it today with a link to the paper, and maybe someone should suggest to Revkin that he comment on both the findings and their implications.

  75. Tetris–

    as until further notice, clouds are water vapour.

    Clouds and water vapor are considered separately in models. Water vapor is water in the gaseous phase. Clouds are ice crystals or water droplets. These can reflect and scatter light.

    What they have in common is that they are both water and they are both suspended in the atmosphere. So, if you are trying to discover how water in the atmosphere affects things, you need to consider both. But the term “water vapor” refers to the vapor specifically.

  76. tetris,

    I did not misrepresent Spencer’s thinking. I quoted him directly, made no comment myself and included his suggestion that “the real story could on the reflected solar side”.

    Jones has just recently published results that in fact show a 0.1C per decade positive contribution to temperatures from urban heat effects. It seems to me that if you add that up in the context of rapid urbanization during the 20th century, you quickly arrive at the 0.6C increase we are told we have observed.

    I’ll read the paper first before forming a view.

  77. Lucia [12059]
    Thx for highlighting the distinction. It seems to me that treating clouds and water vapour separately may itself be problematic in the models. Not to put too fine a point on it, but be it ice or droplets, it’s still water which got up there through evaporation.

  78. Tetris–
    Yes. The water evaporates before condensing to form clouds. Models basically need to track the amount of water and also its phase. That’s because water has different radiative properties depending on the phase (a toy sketch of that bookkeeping follows below). Similarly, snow banks reflect differently from water absorbed into the soil etc.

    Water is a big complication!
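
    For illustration only, here is a toy sketch of that bookkeeping - not code from any actual GCM, and the names and numbers are made up - showing a grid cell carrying its atmospheric water split by phase, since each phase interacts with radiation differently.

        # Toy illustration only: a grid cell's atmospheric water, tracked by phase.
        # Real GCMs use detailed microphysics; these names and values are invented.
        from dataclasses import dataclass

        @dataclass
        class CellWater:
            vapor_kg: float    # gaseous water: absorbs/emits longwave (greenhouse role)
            droplet_kg: float  # liquid cloud water: reflects and scatters sunlight
            ice_kg: float      # cloud ice crystals: different optical properties again

            def total_kg(self) -> float:
                # Evaporation/condensation moves mass between phases,
                # but the total water budget must still balance.
                return self.vapor_kg + self.droplet_kg + self.ice_kg

        cell = CellWater(vapor_kg=12.0, droplet_kg=0.3, ice_kg=0.05)
        print(cell.total_kg())   # 12.35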

  79. Simon Evans[12060]

    Yes, please have a close look at the paper, because what it says is remarkable in more ways than one.

    How important is UHI in Jones’ study? 0.1C per decade for the 1951-2004 period covered, during which period “climatic” temperatures in China increased by 0.81C, i.e. 0.16C per decade. UHI therefore accounts for an astounding 62.5% of measured temperature increases in China for the 1951-2004 period. But, so Jones tells us, UHI [somehow, no explanation given] is of no consequence in the case of London or Vienna. Nor, I presume, for New York, Paris, Melbourne or any other city that comes to mind. Only in Chinese urban areas. How come there are a good number of climate “scientists” [Jones, Steig and a few others come to mind] who persist in thinking their readers are cretins?

    The implications of this paper are certainly interesting: in particular, coming from Jones himself, this should force the IPCC to revise its official position on the role of UHI. The findings also go to the heart of arguments made by Pielke Sr and Trenberth respectively: a] that land use is a key local and regional climate forcing, and b] that developing reliable regional climate models is an absolute must, without which GCMs will continue to be GIGO. UHI has to be included as one of those “wheels within wheels” variables in any climate model, as it clearly is a major component of land surface temperatures.

  80. tetris,

    I will need to get to a library which carries JGR, which I don’t subscribe to myself – that may take a while, for current personal reasons. Until then, I’ll just raise a few uncertainties –

    0.1C per decade for the 1951-2004 period covered, during which period “climatic” temperatures in China increased by 0.81C, i.e. 0.16C per decade. UHI therefore accounts for an astounding 62.5% of measured temperature increases in China for the 1951-2004 period.

    That’s not quite what I understand from the abstract. The ‘true climatic warming’ of 0.81C is on top of the UHI effect of 0.1C/decade, thus the percentage ‘reduction’ is closer to 40%. That’s my reading anyway (a quick sketch of the arithmetic behind both readings follows at the end of this comment).

    The distinction between urban development in China and urban trends in London, Vienna, etc. is surely reasonable.

    This paper was commented upon at the ‘Skeptical Science’ site back when it came out –

    http://www.skepticalscience.com/Does-Urban-Heat-Island-effect-add-to-the-global-warming-trend.html

    Make of that what you will.

    I am not clear on the extent to which (if any) UHI in developing regions has been corrected for in the surface temperature records. Nor am I clear on the impact any uncorrected trends would have upon global temperature analysis. I need to read the paper, I guess. I’d suggest that it could not account for a substantial part of the global trend, since such developing urban areas are a small proportion of the global surface – but I might be in error to presume that.
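
    Here is the quick sketch of the arithmetic behind the two readings - only a check of the disagreement above, not anything taken from the Jones paper itself:

        # Two readings of the same numbers (1951-2004, ~5.3 decades)
        decades = (2004 - 1951) / 10.0       # ~5.3
        uhi_per_decade = 0.1                 # urban contribution, C per decade

        # Reading 1: 0.81 C is the *total* measured warming over the period
        measured_per_decade = 0.81 / decades           # ~0.15 C/decade (0.16 if rounded)
        print(uhi_per_decade / measured_per_decade)    # ~0.65; with 0.16, 0.1/0.16 = 62.5%

        # Reading 2: 0.81 C of climatic warming sits *on top of* the urban effect
        uhi_total = uhi_per_decade * decades           # ~0.53 C over the period
        print(uhi_total / (uhi_total + 0.81))          # ~0.40, i.e. "closer to 40%"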

  81. Simon Evans [12085]
    I stand corrected on the math. That said, I trust you are not suggesting that 40% is an insignificant component of the observed warming.

    I disagree with you on the impact of urbanization outside of China. Urban sprawl is the hallmark of the 20th century in all developed economies, from the US to the Netherlands, where greenbelts are the exception in much of the country. Examples abound in all OECD countries of met stations that were originally located in greenfield settings winding up in high-energy-density suburban or urban settings. It is impossible to simply disregard the record of this phenomenon in Watts’ exhaustive US station database.

    No, I’m afraid that Jones understood full well the implications of his conclusions, and sought to “round the edges”, so to speak. As WUWT duly notes, it speaks favourably of Jones that he published it at all. That said, I note, without much surprise, that this paper, peer reviewed and published in a reputable journal, got absolutely no coverage whatsoever when it appeared a few months ago. Maybe just another example of editorial AGW/ACC biases: “doesn’t fit the story line, let it go”…
