GISSTemp for November: 0.68 C

The GISSTemp for November 2009 has been posted. The November 2009 anomaly of 0.68C exceeds both the current October 2009 anomaly of 0.64C and the October 2009 anomaly of 0.67C published last month.

Because a blog visitor wanted to know whether the nominal IPCC trend of 0.2C/decade still fell outside the ±95% uncertainty intervals, I’ve plotted the data and trends since 2001, showing the red-noise-corrected uncertainty intervals and a trend of 0.2C/decade: GISSTempNov (Nov. temperatures are highlighted.)

As you can see, when the residuals from a least squares trend are assumed to be AR(1), the trend of 0.2 C/decade continues to fall outside the 95% uncertainty intervals. So, given these analytical choices, the nominal projection by the IPCC remains inconsistent with observations. FWIW, the actual multi-model mean for the AR4 models exceeds 0.2C/decade, and it also remains inconsistent with the observed data.

Update: Nick was puzzled and thought Nov. and Oct. data did not appear. I’ve circled all Nov. anomalies. Note that anomalies in the graph have been re-baselined to 1980-1999, the period I use to make all model and observational anomalies fit the same baseline.

290 thoughts on “GISSTemp for November: 0.68 C”

  1. Lucia,

    I was wondering if you could direct this old plumber to an easy-to-comprehend site about these plots. I don’t want to waste anyone’s time explaining these to me if I can learn this myself. Where I am most confused is with the 95% uncertainty levels.

    Thanx.

  2. DeNihilist–
    What do you want to know about the ±95% uncertainty intervals? I use a method that assumes the residuals from the trend fit are “normally distributed” and have serial autocorrelation of a type described as AR(1).

    I can explain why I use this, and why I think it seems ok. (Although I haven’t posted my full analysis of why I think it’s ok. I’m triple checking it.)

  3. I don’t understand what the 95% is supposed to represent. Is it that the average plotted line can vary up or down by that amount and still be considered right? So if outside of the 95%, the results cannot be taken as “robust”?
    (please don’t feel that you need to take time to teach me this stuff; if I can find a good sensible site, then I can teach myself. I feel that your time is way too valuable in the pursuit of truth in your sphere. Thanx for educating/clarifying things in this debate for people like me!)

  4. Also, thanx for your efforts in helping to make this subject just a little less foggy for people like me. I will bing this subject.

  5. Lucia, I think this sentence:

    So, given these analytical choices, the nominal projection by the IPCC remains consistent with observations.

    should read:

    So, given these analytical choices, the nominal projection by the IPCC remains inconsistent with observations.

  6. DeNihilist–
    There are always a bunch of assumptions associated with any statistical test. But in this case, assuming the assumptions associated with the analysis apply, the fact that the trend of 0.2C/decade lies outside the uncertainty intervals means that that trend is probably not consistent with what we are seeing in the earth’s data. That is: the rate of warming on the honest-to-goodness earth is probably less than 0.2 C/decade.

    There are a number of caveats associated with the assumptions, but before we go there: If the assumptions are true, then this result is saying the trend of 0.2C/decade is probably wrong.
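    The kind of test described above can be sketched in a few lines. This is my reconstruction under assumed noise parameters, not Lucia’s actual code: fit an ordinary least squares trend, estimate the lag-1 autocorrelation of the residuals, and inflate the slope’s standard error for AR(1) serial correlation before forming the ±95% interval.

```python
# Sketch of an AR(1)-corrected trend confidence interval (a reconstruction,
# not the blog's actual code; the noise parameters below are illustrative).
import numpy as np

def trend_with_ar1_ci(t, y, z=1.96):
    n = len(y)
    # Ordinary least-squares slope and intercept
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # Lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    # Naive (white-noise) standard error of the slope
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    # Inflate for AR(1) serial correlation (effective-sample-size correction)
    se_ar1 = se * np.sqrt((1 + r1) / (1 - r1))
    return slope, slope - z * se_ar1, slope + z * se_ar1

# Toy usage: 9 years of monthly data with a 0.15 C/decade trend + AR(1) noise
rng = np.random.default_rng(0)
t = np.arange(108) / 120.0          # time in decades
noise = np.zeros_like(t)
for i in range(1, len(t)):
    noise[i] = 0.5 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.15 * t + noise
slope, lo, hi = trend_with_ar1_ci(t, y)
print(f"trend = {slope:.3f} C/decade, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

    (The sqrt((1+r1)/(1-r1)) factor is the standard effective-sample-size correction for AR(1) residuals; whether it is exactly what was used for the plot is my assumption.)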

  7. Lucia:

    I think DeNihilist is asking for a reference to learn about statistical testing and the interpretation thereof. Your answer “I use a method that assumes the residuals from the trend fit are ‘normally distributed’ and have serial autocorrelation of a type described as AR(1)” is terrifying to the uninitiated in statistical techno-speak. GK

  8. My understanding of the 95%, is that if the earth really was warming by 2 degrees a century over a long period of time, and the only variation from this was AR(1) noise of the same amplitude as measured in the (I assume?) last 8 and a bit years, and we picked a bunch of almost 9 year periods, then 95% of the time the trend should lie within the confidence interval, and 5% of the time it will not.

    The last 8 and a bit years is either one of those 5% of the times it did not, or evidence that at least one of the assumptions of 2 deg/century and AR(1) noise are incorrect.
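    This coverage interpretation can be checked with a quick Monte Carlo, sketched here under assumed noise parameters (the phi and sigma values are illustrative, not fitted to GISSTemp): generate many 9-year series with a true 0.2 C/decade trend plus AR(1) noise, and count how often the AR(1)-widened 95% interval around the fitted trend contains the true value.

```python
# Monte Carlo coverage check (illustrative parameters, not fitted to data):
# the AR(1)-corrected 95% interval should contain the true trend roughly
# 95% of the time (in practice a bit less for short series, because the
# lag-1 autocorrelation must itself be estimated).
import numpy as np

rng = np.random.default_rng(42)
n_months, true_trend = 108, 0.2        # 9 years of monthly data; C/decade
phi, sigma = 0.5, 0.1                  # assumed AR(1) noise parameters
t = np.arange(n_months) / 120.0        # time in decades
trials, hits = 2000, 0
for _ in range(trials):
    noise = np.zeros(n_months)
    for i in range(1, n_months):
        noise[i] = phi * noise[i - 1] + rng.normal(0, sigma)
    y = true_trend * t + noise
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    se = np.sqrt(np.sum(resid**2) / (n_months - 2) / np.sum((t - t.mean())**2))
    se *= np.sqrt((1 + r1) / (1 - r1))   # AR(1) correction
    if abs(slope - true_trend) < 1.96 * se:
        hits += 1
print(f"coverage: {hits / trials:.1%}")
```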

  9. Bugs #28184
    I made the same remark. But at the bottom right of the graph, I see Lucia has rebaselined it.

  10. Hi Lucia,

    I’m afraid I cannot tell in detail how the GISS historical (i.e. from 2008 or any time before) data changed in October or in November because I didn’t download the published versions. However I can tell you about the differences between this December version and the version they had in September. It is quite different and it raises some eyebrows.

    * A full 20% of the monthly temperature data has been changed.
    * There are slightly more warming changes than cooling ones.
    * Most of the cooling changes are concentrated between 1908 and 1950. Most of the warming changes are concentrated between 1889-1907 and, especially, 1981-present.
    * There are extremes in both cases. The year with the most cooling is 1934: 10 of the 12 months are adjusted towards cooling, most of them by a whole 0.02C, cooling the yearly average by 0.02C too. 1933 also sees cooling adjustments in 7 out of 12 months.
    * The years with the most warming are 2004 and 2005 (7 out of 12 months adjusted towards warming).
    * Some years without any apparent monthly-data adjustments nevertheless have a yearly average warming adjustment of +0.01C (1975, 1979). The same is true, but for cooling, in other years (1947, 1930).

    With respect to changes in the trends (all trends in ºC/century):
    * Global trend 1880-2008: warms from +0.560 to +0.562
    * Cooling trends 1880-1910 and 1940-1970 now cool less. 1880-1910 changes from -0.128 to -0.120. 1940-1970 changes from -0.230 to -0.225.
    * Warming trend from 1910-1940 now warms less: it goes from +1.230 to +1.222. Warming trend from 1970-2000 now warms even more: changes from +1.523 to +1.534.
    * The most recent trend 2001-2008 cools from -0.143 to -0.183 (due mainly to the cooling of many monthly records in 2008 and the warming of all the previous years).

    With respect to new records as a result of the massaging of historical data:
    * 2007 goes up in the ranking of hottest years: the new ranking is again 2005, 2007, (1998 tied with 2002), 2003 (very close behind).
    * November 2009 is the hottest November on record. This is the first time since March 2007 that a month can claim being the hottest of its kind on record.

    Best regards,

  11. DeNihilist (Comment#28170): “I don’t understand what the 95% is supposed to represent”

    Correct me someone if I am wrong, but De Nihilist I would say that if the IPCC trend falls outside the 95% uncertainty level, then there is a 95% chance that the assumptions on which this trend is based are wrong.

    And their main assumption, on which this trend is based, is that CO2 will increase by a certain amount, which is correct, and because of this increase in CO2 the temperature should go up by 0.2C per decade, which is wrong.

    Thus we could conclude, as Lucia says, that IPCC could still be correct, but there is only a 5% chance of that being so and there is a 95% chance of their being incorrect.

    Bear in mind also the IPCC says that it is “very likely” that the warming of the present period is due to Anthropogenic CO2 and very likely is defined as having greater than 90% probability.

    This doesn’t disprove anthropogenic influence on the warming, but it does make their scary scenario somewhat unlikely.

    Also one could perhaps, with some justification, argue that if they are so wrong about the warming trends, how confident can we be that anything they say is correct?

    Yet on the basis of this uncertainty the UN and western governments want to tax us for reasons unfathomable.

  12. And this despite as Nylo and others have pointed out, GISS adjusting the temperature records to show greater warming trends.

    One wonders why they have to continue to amend historical temperature data. To me this looks like scientific fraud of epic proportions.

  13. What have your interactions been like with skeptical blog sites like Climate Audit (Steve McIntyre) that are playing up these e-mails?

    I’ve been very courteous to McIntyre over the years since the committee in Washington. One time he sent me a message saying he couldn’t understand the greenhouse effect, and asked for a simple model explaining it. So I took a few hours and tried to explain it. And I sent him a simple paper I wrote many years ago that I thought might be helpful to his readers. He wrote back and asked if he might post the .pdf of the paper and I said fine. Within an hour or two there must have been 75 or so of these really insulting comments. One of these guys wrote, “North is obviously promoting his own agenda.” My answer to Steve is that no good deed goes unpunished.

    http://blogs.chron.com/sciguy/archives/2009/11/post_59.html

  14. All this puts us on track for warming of 1.0C to 1.5C by 2100.

    RealClimate put up an analysis of the Hadcrut3 data yesterday which was released by the Met Office last week.

    Numbers are anywhere from 0.042C per decade (full Hadcrut3 including ocean temperatures) to 0.060C per decade (in “random” samples taken by Eric Steig).

    0.042C times 25 decades = 1.05C by 2100

    0.060C times 25 decades = 1.5C by 2100

    While the climate scientists are patting themselves on the back for having a slight increase in (some kind of) released raw and adjusted temperatures, they have failed to do the simple math again and check if it matches their predictions.

    Now the warming is supposed to accelerate as more CO2 is added in the latter half of the century (though not much more of an increase in logarithmic terms), and then there is the issue of natural variability, which has added to the trend.

    We don’t get to the IPCC predictions of +3.0C by 2100 no matter how one looks at the numbers.

  15. Nick– I updated to help people notice the rebaselining. I do that because I super-impose things on graphs all the time and it’s easiest to just show everything on the same baseline.

    G. Karst (Comment#28178) — I realize that my answer might not help DeNihilist. But I don’t know enough about what he wants to point him to a useful answer. I don’t know if he doesn’t understand how to interpret “inside” or “outside” uncertainty intervals, or if his question is how they are computed, or whatnot.

    Richard (Comment#28196)

    I would say that if the IPCC trend falls outside the 95% uncertainty level, then there is a 95% chance that the assumptions on which this trend is based are wrong.

    No. It means that if the IPCC is correct, excursions this far from the mean would happen less than 1 in 20 times. This is fairly unlikely, so the data can be deemed “inconsistent” with the IPCC nominal projection at the stated confidence level (95%).

    But there are some assumptions involved in the statistical analysis. I just don’t know if DeNihilist means to ask about those.

  16. Twaki-

    But is the trend within the margin of error and therefore insignificant?

    It’s just outside.

    I think odds favor this happening over the next 7 years: the trend will pop inside the uncertainty intervals during El Niño and pop back out during the next La Niña. It might oscillate twice — but then it will stay out. (If you run some Monte Carlo where you create noisy series with trends of 1.5 C/century and test 2.0 C/century, you’ll see they oscillate in and out of “rejection” and finally reject and stay there.)

    There is something called “statistical power” — right now the test has little statistical power, so we can expect we’ll get some “fail to rejects” even if the IPCC 2C/century projection is wrong. Eventually, that won’t be the case. So, if the IPCC projection really is wrong, we are going to see “robust rejections”, and they will likely happen after the next La Niña — if not then, after the next El Niño.

    Now, for what I think won’t happen: even at the top of El Niño, the trend will not penetrate the upper 95% confidence interval. If it does, this will be a good indication that the confidence intervals are too narrow. (But I have reasons to believe they are ok.)
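    A minimal version of the Monte Carlo described above, under assumed (hypothetical) noise parameters: simulate monthly data with a true trend of 1.5 C/century, then test the 2.0 C/century hypothesis as the record lengthens. The rejection status typically flips in and out before settling.

```python
# Sketch of one Monte Carlo realization (assumed AR(1) noise parameters,
# not fitted to any observed series): as more months accumulate, the test
# of the 2.0 C/century hypothesis oscillates between accept and reject
# before the growing statistical power makes the rejection stick.
import numpy as np

rng = np.random.default_rng(7)
true_trend, tested = 1.5, 2.0        # C/century
phi, sigma = 0.5, 0.1                # AR(1) noise parameters (assumed)
months = 240                         # 20 years of monthly data
t = np.arange(months) / 1200.0       # time in centuries
noise = np.zeros(months)
for i in range(1, months):
    noise[i] = phi * noise[i - 1] + rng.normal(0, sigma)
y = true_trend * t + noise

status = []
for n in range(60, months + 1, 12):  # re-test yearly, starting at year 5
    tt, yy = t[:n], y[:n]
    slope, intercept = np.polyfit(tt, yy, 1)
    resid = yy - (slope * tt + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((tt - tt.mean())**2))
    se *= np.sqrt((1 + r1) / (1 - r1))   # AR(1) correction
    status.append("reject" if abs(slope - tested) > 1.96 * se else "accept")
print(status)
```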

  17. Lucia,
    What justifies beginning with 2001? How would the results differ if you began with 1991, 1981, or even earlier? Thanks.

  18. Bill Illis,
    A decade is 10 years. 2000 to 2100 is 10 decades. Where did you get 25 decades to 2100?

  19. Lucia: I cannot believe that after all the proof that most of these people are frauds, you are still going on and on with manipulated GISSTemp data, which is probably the MOST manipulated, a bit like Steig’s Antarctica stuff: just spread that warming as much as possible! Yes, more than CRU. It will all look very silly within a year when this is all over… For God’s sake, the head of CRU has been put aside and Mann is likely to disappear from the map too. But good on you for trying to be fair.

  20. VG–
    I think there is proof these guys over-juiced their case. But I don’t see any particular evidence that GISSTemp is fraudulent. So… yes… still looking at the major observations. When (or if) HadCRU posts Nov., I’ll post that too.

  21. Nylo:
    I think there is a simple explanation for the changes to the GISTEMP data. They switched to USHCN v2 on November 13:
    .
    http://data.giss.nasa.gov/gistemp/updates/
    November 13, 2009: NOAA is no longer updating the original version of the USHCN data; it ended in May 2007. The new version 2 currently extends to July 2009. Starting today, these newer data will be used in our analysis. Documentation and programs will be updated correspondingly.
    .
    I read somewhere that the switch made only very tiny changes in the results, but I can’t find that reference this morning.

  22. DeNihilist (Comment#28170),

    You can start by looking at a site like:
    http://en.wikipedia.org/wiki/Statistics

    Most comments here appear to be made by scientists and engineers, and all these folks have studied statistics and statistical testing of “significance”. Statistics is a huge field, but you can figure out the basics without too much effort (probably not more than a couple of days).

  23. I think that by any definition of ‘climate’, ca 10 years is way too little to establish trends, and using such short periods is a recipe for rather fruitless discussions – if the purpose is to establish consensus on the eventual climate changes.

    It can, however, be used, as it seems to me Lucia does, to _test_ hypotheses, and then it can make a lot of sense to look at shorter-term trends. Lucia’s analysis shows that we are very unlikely to have a 0.2 deg/decade trend IF the assumption about the structure of the temperature time series (AR(1)) is correct. And, without more knowledge, it is a fairly natural assumption to start with. It should also be noted that if it is not correct, the basis for the IPCC 0.2 estimate could also be affected.

    The trend checking also goes both ways: If we are in a cooling period now, as many seem to maintain, the trend is unlikely to be as much as -0.2 deg/decade.

    BUT: The data are not inconsistent with longer time trends of LESS cooling or warming, and if we extend the observational period backwards, and apply a better fitting ARMA(1,1) model (see http://tamino.wordpress.com/2009/12/15/how-long/ for an example), the “current” trend stabilizes at about 0.15 deg/decade.

    I’d like to ask: If someone is not willing to accept that a warming about this order of magnitude may be the current climate trend, can they still be called “skeptics”? Or is another designation then maybe more fitting?

    The UAH anomaly also makes last month the hottest November on record.

  24. denny (Comment#28207),

    As bender observed, Lucia has explained this many times.

    But let me summarize: all climate models make decent hind-casts, but that result is meaningless in terms of model prediction skill, since the models are all, shall we say, ‘optimized’ so that they match the historical temperature record reasonably well. The 2001 date represents when the models actually start predicting, not hind-casting. Prediction is the only legitimate test of model skill, and so far, they aren’t doing too well.

  25. Thanks Lucia. It seems every year gets more interesting. If the cooling PDO continues, then the likelihood of more La Niñas is greater. Also, if the sun continues to slumber, then this could drag things down as well. I also read a paper on how changes to the ocean currents mean cooling for the next 20-30 years. There seems to be a growing list of cooling signals. Any major volcanic eruptions will exacerbate this.

  26. JohnV,
    Your ‘simple explanation’ explains nothing at all.
    Nylo highlights part of a steady systematic sequence of adjustments of the historical data that lowers past temperatures, increases modern temperatures, lowers past trends (particularly 1900-1940) and increases modern trends.
    This has been highlighted on previous occasions, see
    http://wattsupwiththat.com/2009/06/28/nasa-giss-adjustments-galore-rewriting-climate-history/
    http://rankexploits.com/musings/2009/giss-updates-increase-recent-historic-trends-slightly/
    http://wattsupwiththat.com/2009/07/14/giss-for-june-way-out-there/

  27. This whole “GISS adjusts past data, proving they artificially inflate warming” meme is getting a tad stale. Yes, the algorithm GISS uses occasionally changes past temperatures minutely. However, the sum of these changes over time has not had much of an impact on the trend.

    Bill Illis: The argument that the linear trend over the last 150 years doesn’t imply 3 degrees warming in 2100 is silly. The number Steig gives corresponds almost exactly with the IPCC number for the 1850-present trend. Future warming is driven by climate sensitivity and tempered by thermal inertia. Are you arguing that observed temperature changes over the past century necessarily set an upper limit on sensitivity over the next century below the generally accepted range? If so, why?

  28. SNRatio (Comment#28225),

    There is no doubt that the best estimate of trend from 1975 to present is about 0.15C per decade. However, the 1945 to 1975 period had, if anything, a modest decline. So there has been post 1975 warming of 0.15C per decade, but it is not certain how much of this is the direct result of radiative forcing and how much might be caused by other factors.

    For example, aerosol emissions dropped in the post 1975 period due to the US Clean Air Act and similar measures in Europe. Could the fall in particulates not have contributed to the measured temperature rise post 1975 (global brightening)? Seems to me quite possible it did. Or could a significant portion of the post 1975 rise not be attributed to ‘natural’ climate variability over times scales in excess of ~30 years (like the AMO, PDO)? Once again, there is a significant chance this is the case.

    I am not at all skeptical that radiative forcing from CO2 and other greenhouse gases causes some warming. What I am skeptical of is the ability of climate models to accurately predict warming over the remainder of the 21st century based on projections of radiative forcing. Or to state it more directly, I have doubts that the climate models accurately capture cloud feedbacks, ocean heat accumulation, ocean evaporation rates, rainfall, and convective heat transport to the upper troposphere.

  29. Why should pointing out that GISS and CRU and others adjust the historical data to enhance current warming get ‘stale’, if one is at all interested in making policies based on honest information?
    Lucia’s analysis pretty much shows that the accepted gold standard of climate catastrophe prediction, as represented by the IPCC, fails.
    How much worse would it fail if the people analyzing the data were not the people adjusting the data, inevitably, it appears from the record, to make their predictions look better?

  30. To prove the efficacy of their models, why hasn’t someone “tuned” them to the time period 1850-1980 and then started predictions for 1981 and later? We could then compare their ability to predict against the actual data for the period 1981-2009.

    My guess is that the models are not capable of accurately describing the temperatures we have actually experienced over the last 28 years.

    Is there anything out there that does something like the above?
    Thanks
    Ed

  31. Zeke Hausfather (Comment#28231),

    “Future warming is driven by climate sensitivity and tempered by thermal inertia.”

    For sure. But the available historical record does not constrain these critical parameters very well. The predicted climate sensitivity for different models covers a substantial range (~1.5 C per doubling to >4.5 C per doubling), yet all the models hind-cast the temperature record pretty well.

    Ocean heat content from Argo data appears clearly inconsistent with very long ocean lag periods (very high thermal inertia), which seems inconsistent with high climate sensitivity.

  32. Bugs,

    You really should be less credulous. North’s memory seems quite faulty here. Follow the links to the CA comments and you will see a generally technical discussion on climate models where North seems to be treated with respect (certainly far better than skeptics would expect at sites such as Real Climate, Tamino etc).

    Contrary to North’s assertion, there is no comment saying “North is obviously promoting his own agenda.” The closest I could see to this was the following statement by Lubos: “Also, while I think that North is a pleasant smart guy and I agree with his general suggestions what is the best way to determine these things roughly, I think that this particular promotion of their 1993 paper is a self-promotion.” This is “really insulting”??

    North refers to “75 or so of these really insulting comments.” There are only a handful of comments that even mention him and the Lubos comment appears to be the strongest worded. If this is what North regards as being “punished” or “really insulting”, then he really needs a thicker skin. Hell, Bugs, you probably say worse on this blog five times a day.

  33. lucia:
    I’ve noticed that your language around “falsification” has softened since last winter (when I last frequented your blog). You now include caveats like “given these analytical choices” and use “inconsistent with” instead of “falsified”. Is there any particular reason that I missed?
    .
    You are using an AR(1) noise model for your consistency test. Tamino uses an ARMA(1,1) noise model. He seems to demonstrate that ARMA(1,1) is more appropriate for GISSTemp when looking at 1975 to 2009:
    .
    http://tamino.wordpress.com/2009/12/15/how-long/
    .
    As you know, ARMA(1,1) gives wider confidence limits than AR(1). Do you have a counter-argument to back up your preference for an AR(1) noise model? If I remember correctly, your prior tests for the suitability of AR(1) were based on temperatures starting in 2001. If that’s right, do those tests hold up if you use more data?
    .
    I’m not picking a fight here — just looking for the “she-said” in response to “he-said”.
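    One simple, numpy-only diagnostic for the AR(1)-versus-ARMA(1,1) question above (a sketch, not Tamino’s or Lucia’s actual procedure): for an AR(1) process the autocorrelation decays geometrically, so rho(2) should be close to rho(1) squared, whereas an ARMA(1,1) process with positive MA term breaks that relation at lag 1.

```python
# Crude AR(1)-suitability check on synthetic data. The ARMA(1,1)
# parameters below are illustrative, not fitted to GISSTemp; real use
# would apply acf() to the detrended temperature residuals.
import numpy as np

def acf(x, k):
    # Sample autocorrelation at lag k
    x = x - x.mean()
    return np.sum(x[:-k] * x[k:]) / np.sum(x * x)

rng = np.random.default_rng(3)
n, phi, theta = 420, 0.6, 0.3        # 35 years of months; assumed params
eps = rng.normal(0, 0.1, n)
arma = np.zeros(n)
for i in range(1, n):
    arma[i] = phi * arma[i - 1] + eps[i] + theta * eps[i - 1]

r1, r2 = acf(arma, 1), acf(arma, 2)
print(f"rho(1) = {r1:.3f}, rho(2) = {r2:.3f}, rho(1)^2 = {r1**2:.3f}")
# For pure AR(1) data, rho(2) would be close to rho(1)^2. An ARMA(1,1)
# series with theta > 0 tends to show rho(2) below rho(1)^2, because the
# ACF drops faster after lag 1 than a geometric decay from rho(1) would.
```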

  34. edward (Comment#28234),

    The problem is that modelers can’t empty their heads of what the post-1981 history has been. They don’t even admit (as far as I have read) that the models are in fact “tuned” to match the historical record, even though it is obvious that they are.

    To do your experiment, you would need to raise a bunch of climate model builders from childhood in isolation, never let them know anything about the temperature history after 1981, and then have them build models. Not going to happen.

  35. Zeke Hausfather (Comment#28231)
    December 16th, 2009 at 10:49 am
    Bill Illis: The argument that the linear trend over the last 150 years doesn’t imply 3 degrees warming in 2100 is silly.

    It’s only a little silly. The point is that people need to take into account what a very low number of 0.042C per decade (after it has been adjusted upward by 0.025C per decade) implies about the warming to 2100, instead of congratulating themselves that there has indeed been a temperature rise and that the adjustments are very close to Gaussian (only slightly adjusted upward, that is, if you ignore the fact that it starts out at -0.25C since 1920).

  36. JohnV–

    You are using an AR(1) noise model for your consistency test. Tamino uses an ARMA(1,1) noise model.

    Tamino persists in not treating volcanoes as exogenous when he fits. IMHO this means his fits are doo-doo. Also, I’m checking some analysis to show that we know AR(1) works well, but I will be discussing this in the future. For now, I’m checking an analysis for errors, switching from Excel to code, and checking to see that the result is ‘robust’ to a bunch of choices. But… I’m pretty darn sure Tamino’s ARMA(1,1) is doo-doo.

  37. lucia:
    “doo-doo” hey? 🙂
    If I have some time to look into this myself, what’s the most recent “clean” year that you would use for fitting? Or alternatively, which years since ~1975 would you rule out because of volcanoes? Thanks.

  38. “…the warming is supposed to accelerate as more CO2 was added in the latter half of the century…”

    The TAR figures, based on the 2000 SRES calculations, really had more of an acceleration in the first half of the 21st century, with a slowing in later decades. For example, decadal differences (in C per decade) for A1FI were as follows, with the peak rate in 2051-2060:

    2001-2010: 0.16
    2011-2020: 0.23
    2021-2030: 0.30
    2031-2040: 0.42
    2041-2050: 0.57
    2051-2060: 0.64
    2061-2070: 0.60
    2071-2080: 0.54
    2081-2090: 0.45
    2091-2100: 0.40

    http://www.grida.no/publications/other/ipcc_tar/?src=/CLIMATE/IPCC_TAR/WG1/552.htm

    I don’t know of an equivalent table published with the AR4, but have the changing rates altered so much? The best estimate for A1FI is down to 4C from 4.49C, but I suspect the pattern of change is similar.

    It’s evident from the above that the decadal rate of change at the beginning of the century (let alone the average back to 1850) gives hardly much of an indication of projected rates throughout the century.

  39. SNR–

    the “current” trend stabilizes at about 0.15 deg/decade.

    This size of difference matters with respect to quite a few plans.

    Also, if the correct trend is 0.15C/decade, that means the multi-model mean of 0.2C/decade is, indeed, biased high. That is: it’s “wrong”. Models in the lower range would be closer to the correct ultimate outcome, and modelers need to figure out why their models are, on average, over-projecting. (In fact, they would need to ask why simple extrapolation has greater predictive value than the models, because this would constitute evidence of “no skill” relative to the “simplest method”, i.e. linear extrapolation.)

  40. Simon-

    I don’t know of an equivalent table published with the AR4, but have the changing rates altered so much?

    The projected warming during the first part of the century differs in the TAR and the AR4. The AR4 projected about 0.2 C/decade, the TAR about 0.15 C/decade.

    What happens later in the century depends on the scenario. In some, warming accelerates; in others it decelerates. It depends on GHG additions.

  41. Lucia,

    The projected warming during the first part of the century differs in the TAR and the AR4. The AR4 projected about 0.2 C/decade, the TAR about 0.15 C/decade.

    Yes, I know there are differences – generally the TAR 2100 values are higher for the range of scenarios. The average projected TAR increase across scenarios for the current decade was 0.18C, in fact.

    The “about 0.2C/decade” statement comes from the AR4 SPM, written in 2007. It was never clear to me whether they meant by “for the next two decades” the period 2001-2020 or 2011-2030. You take it to mean the former, I gather; I tend to think they meant the latter. But either way, “about” is a pretty vague term, along with “next” being ambiguous. It’s a pity they didn’t present the equivalent of the TAR scenario tables, from which we could get a better idea of what the range of ‘about’ might be and how that range relates to scenario. Having said that, the differences in various rates for the TAR scenarios for this decade don’t make much immediate sense to me (with, for example, the B scenarios modelled to give more temperature gain than A1FI… huh?).

    It seems basically necessary to me to consider the changing projected rate of temperature increase for whichever scenario we reckon we’re following before extrapolating some 2100 figure based on current trend, as some others are inclined to do.

  42. It might be slightly off topic (for this post), if so maybe Lucia might consider doing another post to cover it?

    I posted a comment at Tamino’s blog (twice) but it just disappears, and I thought it was an interesting question.

    Tamino put forward the following in http://tamino.wordpress.com/2009/12/15/how-long/

    “Time and time again, d*#^%lists try to suggest that the last 10 years, or 9 years, or 8 years, or 7 years, or 6 years, or three and a half days of temperature data establish that the earth is cooling, in contradiction to mainstream climate science…”

    And then went on to show recent temperature history along with confidence intervals dependent on the length of time we were talking about.

    I had just been reading a paper by Trenberth, so I asked this question, which seemed right on topic for that blog. Strange that it got deleted (twice). Probably just a moderation snafu.

    === my question ====
    There’s an interesting paper by Trenberth, “An imperative for climate change planning: tracking Earth’s global energy”.

    “The global mean temperature in 2008 was the lowest since about 2000 (Figure 1). Given that there is continual heating of the planet, referred to as radiative forcing, by accelerating increases of carbon dioxide (Figure 1) and other greenhouse gases due to human activities, why is the temperature not continuing to go up?”

    And he goes on to investigate the earth’s energy budget and where the heat is and where it could have gone.

    But if I understand your argument correctly, and maybe I haven’t: first, you can’t compare “outliers” to work out a trend, and second, you can’t even legitimately draw a trend line (however computed) for a less-than-10-year period.

    But Trenberth’s approach makes sense to me and seems like a legitimate question.

    What am I missing?

    PS. I can’t find the exact link to the paper now, I have a local copy saved, but it’s under http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/

    ========= end of my question to Tamino ============

  43. SteveF

    Could the fall in particulates not have contributed to the measured temperature rise post 1975…

    I recall that SO2 increases were originally proposed by Jim Hansen as an explanation for the temp. dip leading up to 1975.
    Also, I was told by a designer of SO2 scrubbers for coal-fired power plants that the scrubbers increased CO2 emissions by 15%.

    For what it’s worth…

  44. Steve,

    There’s a direct link to the Trenberth paper you reference here:

    http://www.wired.com/images_blogs/threatlevel/2009/11/energydiagnostics09final.pdf

    I think his point is that we lack the observations to be able to do what he thinks we should aim to do. If we can’t distinguish, in the short term, natural from anthropogenic then we can’t make so much of short-term analysis, and thus the ‘wait twenty/thirty years for the natural cycles to even out’ point of view prevails. It would be better if we could figure out much more right now, but we need more equipment to do that.

  45. Simon–

    It was never clear to me whether they meant by “for the next two decades” the period 2001-2020 or 2011-2030. You take it to mean the former, I gather; I tend to think they meant the latter.

    Well… the authors could have used a good copy editor to improve clarity. But, in fact, it doesn’t make much difference. They did provide some tables, and it’s “about 0.2C/decade” from 2001-2030 for the A1B and A2 scenarios. So… either way would be ok.

    To be more specific requires getting the model data… which I have.

    It seems basically necessary to me to consider the changing projected rate of temperature increase for whichever scenario we reckon we’re following before extrapolating some 2100 figure based on current trend, as some others are inclined to do.

    I don’t try to do that. I just try to compare to what is predicted as the best estimate of the multi-model mean for whatever period I am comparing to data using the metric that’s appropriate.

  46. Here is a chart showing how the trend (C per decade) changes over time under the A1B scenario with warming of +3.0C by 2100. It rises to 0.3C per decade by about 2040 and then falls again.

    [I don’t post this often because no one else is producing this chart but this is how you get to +3.0C by 2100 under the ln formula and the A1B scenario].

    http://img190.imageshack.us/img190/9912/warmingrates2100.png

    Or you can think of it as we are only at +0.7C right now, so to reach +3.0C in 90 years, the average has to be 0.256C per decade over the next 90 years (and the trend starts to flatten out as one gets closer to 2100).
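    This back-of-envelope average is easy to check (a sketch assuming, per the comment above, +0.7 C today and +3.0 C by 2100, i.e. 90 years remaining; the linear average ignores the shape of the A1B trajectory):

    ```python
    # Average warming rate needed to go from +0.7 C now to +3.0 C by 2100
    # (values taken from the comment above; simple linear average).
    current_anomaly = 0.7    # C, roughly where we are now
    target_2100 = 3.0        # C, the A1B end-of-century figure cited
    years_remaining = 90

    rate_per_decade = (target_2100 - current_anomaly) / years_remaining * 10
    print(round(rate_per_decade, 3))  # 0.256 C/decade
    ```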

  47. SteveCarson–
    I’m not sure I know what you are asking me. To comment on the Trenberth paper? To comment on Tamino’s stuff? To say whether or not we are “allowed” to show a 10 year trend?

  48. “..if the IPCC trend falls outside the 95% uncertainty level, then there is a 95% chance that the assumptions on which this trend is based are wrong.”

    lucia (Comment#28205): “No. It means that if the IPCC is correct, excursions this far from the mean would happen less than 1 in 20 times. This is fairly unlikely, so can be deemed “inconsistent” with the IPCC nominal projection at the stated confidence level (95%).”

    So if the IPCC temperature trend predictions are correct, then the chances of them being so is less than 5%. The corollary to that would be that there is over a 95% chance that they are wrong. What is wrong with stating it thus? (Subject of course to the caveats of the assumptions in the statistical analysis)

    We could reason further that IF there is over a 95% chance that IPCC trend predictions are wrong, then the assumptions on which these predictions are based also carry a similar probability of being wrong.

    Further, Simon Evans (Comment#28248) has pointed out that the models in fact projected more of an acceleration in the first half of the 21st century. This is certainly not borne out in the first decade of this century, which has not accelerated beyond the trends of the past 50 years, warming that is supposed to have been caused mostly by anthropogenic forcing.

    VG (Comment#28211) – I agree with you. I do not see any reason for GISS to continually adjust (homogenise) their records. And when these adjustments consistently tend to increase later temperatures and lower earlier temperatures, thereby nudging the trends up, I don’t see how anyone can fail to be suspicious, or shrug them away by saying they don’t change the trends very much.

    The cause for suspicion gets further enhanced when they refuse to reveal the reasons or methods for these adjustments.

    However if, even with these groomed records, the IPCC trend forecasts fall significantly outside the margin of error, then the question arises: can we be all that confident about the IPCC forecasts?

  49. Lucia, I think Trenberth’s paper is fine and very interesting. I think showing trends is fine – and necessary.

    Here’s what I am interested in. Why a temperature drop/flatline over a few years would concern a key climate scientist and the same would be not just ignored but derided by a statistician. It seems as if Tamino is saying that you can’t get any useful data out of only a few years. And it seems as if Trenberth is saying yes you can and you have to explain it.

    I thought you might have a perspective on this given that you cover the statistics so much. If not, that’s just fine as well.

    Steve

  50. Richard

    So if the IPCC temperature trend predictions are correct, then the chances of them being so is less than 5%. The corollary to that would be that there is over a 95% chance that they are wrong. What is wrong with stating it thus? (Subject of course to the caveats of the assumptions in the statistical analysis)

    Why do you think that’s a corollary? It’s not. The probability they are wrong cannot be determined based on the information provided. To figure that out, you need other information.

    For example, suppose I did some sort of statistical experiment to measure the acceleration of gravity, and my instruments were noisy. I get a value of g = 9.85 m/s^2 with some sort of uncertainty intervals. Say, based on my experiment, I find 9.82 m/s^2
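
    Lucia’s point here (that the probability the null is wrong depends on prior information the t-test does not supply) can be sketched with Bayes’ rule. All numbers below are illustrative assumptions, not taken from any analysis in this thread:

    ```python
    # P(null | data) via Bayes' rule. A t-test supplies only P(data | null);
    # the prior P(null) has to come from other information.
    def posterior_null(prior_null, p_data_given_null, p_data_given_alt):
        num = p_data_given_null * prior_null
        den = num + p_data_given_alt * (1.0 - prior_null)
        return num / den

    # Same "p = 0.05" evidence, two different (made-up) priors:
    p_h0, p_h1 = 0.05, 0.50
    print(posterior_null(0.9, p_h0, p_h1))  # ~0.47: null still quite plausible
    print(posterior_null(0.1, p_h0, p_h1))  # ~0.01: null now very unlikely
    ```

    The identical test result leaves the null either plausible or nearly dead, depending entirely on the prior, which is exactly why the 95% figure is not “the probability the IPCC is wrong.”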

  51. Steve–
    I’m sure Trenberth’s paper is interesting. But… I’m doing something specific right now. So, I’m not going to engage that at this moment. 🙂

  52. Bill Illis (Comment#28268) – That graph shows seriously escalating trends till about 2050. I think any person on the street can see that there is no evidence of that happening at the moment and it would be extremely unusual for the trends to suddenly catch up to that graph.

    In fact it would be so unusual that my scepticism of AGW would be shaken by that. Though I may still not believe Al Gore on the temperatures at the centre of the Earth or that you can run steam turbines by drilling 2 kms anywhere below the Earth.

    Thank you for that Bill. A picture is truly worth a thousand words!

  53. Bill Illis (Comment#28268) – That graph shows seriously escalating trends till about 2050. I think any person on the street can see that there is no evidence of that happening at the moment and it would be extremely unusual for the trends to suddenly catch up to that graph.

    I don’t expect a 0.3/decade “soon”.
    .
    But your “any person on the street can see that there is no evidence of that happening at the moment” claim is obviously false. And if you had looked at the links to Tamino, you would know.
    .
    Here is a look at the trend over the last 2 years:
    http://www.woodfortrees.org/plot/wti/last:24/trend/plot/wti/last:24
    .
    the trend is 0.12/year (per year!). So there is plenty of evidence, using your way of analyzing data…

  54. lucia (Comment#28275) – I don’t think your example of the measurement of gravity is an apt one. The accuracies of the instruments used in measuring g could be very accurately estimated. And thus we could estimate the chances of our values being in error due to noise.

    Instruments for measuring g are quite accurate. If I got a value of 9.85 and I was over the south pole, I would not be much in error, as values vary from about 9.78 at the equator to 9.83 at the poles, at sea level.

    g is one value, and it can be measured with great accuracy, with very accurate instruments.

    In the case of the temperature trends however it depends on a myriad of assumptions, nothing can be measured with any great accuracy, and yet the IPCC come up with conclusions in which they place huge confidence (over 90%).

    Someone said that many climate scientists have migrated to that sphere from geography and many do not have a good grounding even in basic science.

  55. sod (Comment#28282): Sod, for the temperature trends to be increasing, keep the start year constant, say 1950 or 2000, and then plot the trend per year, i.e. 1950 to 2000, 1950 to 2001, … 1950 to 2009, etc., or start from 2000 if you wish.

    I think it has a bit of catching up to do to reach 3.5 C

  56. Richard–

    lucia (Comment#28275) – I don’t think your example of the measurement of gravity is an apt one. The accuracies of the instruments used in measuring g could be very accurately estimated. And thus we could estimate the chances of our values being in error due to noise.

    No matter how accurate you can make your instruments, if you use 95% confidence intervals, you will still reject a correct hypothesis 1 in 20 times. That’s the way frequentist statistics work.

    In the case of the temperature trends however it depends on a myriad of assumptions, nothing can be measured with any great accuracy, and yet the IPCC come up with conclusions in which they place huge confidence (over 90%).

    Maybe. But notice that your interpretation of the probability the null is actually wrong depends on information that is not actually contained in the t-test.

    That’s precisely what I wanted you to understand. If you really, really, really believe a null hypothesis, you will likely decide that an excursion outside the 95% confidence intervals is an outlier. But if you already thought it was wrong, you are likely to think the excursion means the null is actually wrong.

    But this isn’t information from the t-test. The t-test only tells us that if the null is right, and your confidence intervals were based on correct assumptions, there is a p chance of a result as far from the mean as you observed. It doesn’t tell you the probability the null is wrong!
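
    The “1 in 20” rate can be checked with a quick Monte Carlo run (a sketch using a simple known-sigma z interval, not the AR(1) machinery used for the actual trend analysis):

    ```python
    # Draw many samples from a distribution whose true mean IS the null
    # value, and count how often a 95% interval around the sample mean
    # excludes that true mean anyway.
    import random
    random.seed(0)

    true_mean, sigma, n, trials = 0.0, 1.0, 30, 10_000
    z = 1.96  # two-sided 95% critical value for known sigma
    half_width = z * sigma / n ** 0.5

    rejections = 0
    for _ in range(trials):
        xbar = sum(random.gauss(true_mean, sigma) for _ in range(n)) / n
        if abs(xbar - true_mean) > half_width:  # interval misses the truth
            rejections += 1
    print(rejections / trials)  # close to 0.05
    ```

    With autocorrelated residuals the interval itself widens, but the logic is unchanged: a correct null is still rejected about 5% of the time by construction.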

  57. VG–
    I’m waiting for that and any similar report to be confirmed. So… we’ll see. But, in the meantime, I still compare GISS, HadCRU, etc. to projections. Unless you want to provide me your monthly anomaly series based on… what?

  58. sod:

    but your “any person on the street can see that there is no evidence of that happening at the moment ” claim is obviously false. and if you had looked at the links to Tamino, you would know.

    What BS.

  59. lucia (Comment#28292): “No matter how accurate you can make your instruments, if you use 95% confidence intervals, you will still reject a correct hypothesis 1 in 20 times. That’s the way frequentist statistics work.” I see.

    “Maybe. But notice that your interpretation of the probability the null is actually wrong depends on information that is not actually contained in the t-test.”

    True, but my interpretation (estimation, really), or belief, that the null hypothesis is probably wrong is enhanced by the t-test.

    I believe, though this belief is considered and not blind, that the null hypothesis is wrong. The balance of evidence for this lies outside the t-test.

    But I would say that screaming mob in Copenhagen, though they may not be able to engage in a reasoned argument about the t-test, really, really, really, really, really, really, really, want to believe in the null hypothesis.

  60. The Russian points about what CRU has done strike at the heart of AGW promotion credibility.
    GISS releasing code, unless fully documented and updated to account for GISS changes in historical data, is not very useful.
    We now seem to have data that is massaged to ‘enhance’ the desired point. We now know that ‘the team’ was more worried about suppressing difficult questions than in talking about how great things are going. We know that code was tricked up to make the desired points show up better.
    And now the Russians are asking why their temp data was played with.
    Who else, and what else?
    Why is AGW- the claims of extreme climate changes- even credible?

  61. VG and hunter:
    You both seem to be quite confident in the conclusions of the Russian Institute for Economic Analysis. As you are skeptical people, I assume you have looked into them. Maybe you can answer a few basic questions:
    .
    Who are they?
    Do they have any climate-related expertise?
    On what are they basing their conclusions?
    .
    Thanks.

  62. hunter–
    Yes. If their conclusions are correct. But we need to confirm that report. There is a lot of crud reported on both warmer and cooler channels on climategate. We just don’t know yet.

    JohnV– I wouldn’t care whether they have climate expertise. People are going to have to look into these things, and now they can. So, they are going to be looked into.

    That said: I do care about the fact that we in the peanut gallery do not yet have enough details. Someone is going to have to document which stations they looked at, which CRU uses etc. The news article is much too thin to judge.

    It’s going to take at least a week to know for sure if this is some sort of made up hoax. If it’s not, it’s going to take months to know for sure whether there is an explanation. And now in the aftermath of the letters, people at CRU are going to actually have to answer in a way that is transparent to the peanut gallery.

    They may not like it. They may think they shouldn’t have to. But they are going to have to do it.

  63. SteveF (#28232):
    I will not try to speculate on any kind of attribution! But it could be noted that, with the existence of long natural cycles, and with any significant changes in forcings, the system could take >100 yr to reach equilibrium. So the candidate list is rather long… And I think both “d***ists” and “w***ists” could benefit from taking their own arguments more seriously.

    Lucia (#28249):
    I think there is little doubt that the models have been over-projecting, and I can’t really see the big difference whether your model shows the predicted slope to be outside the 95% CI of the observed, or a (possibly) better one says it’s just outside the 75% – it’s way off in any case, and there is an important difference between 0.15 and 0.20.

    But of course, it’s nice to have something statistically significant to point at. And I’ve seen gavin using that very argument over at RC – that quite a few of the model runs track the actual temperature development, so that, in a counting sense, we are not outside of 95% of models. Well, they are clearly not capturing all the important features to model temperature – but that doesn’t necessarily make them useless for other purposes. And I would be very careful about concluding anything about their ultimate usefulness from their present failures.

    Henrik Svensmark wrote on WUWT a couple of months ago that we should enjoy the warming while we still could. Maybe that will apply to ‘cooling’ too..

  64. JohnV (Comment#28304): JohnV, don’t expect to be nanny-fed the answers. Do your own research and come back with the answers, and also state if you have any point to make.

  65. SNR_ratio–
    I’d venture to say that it is impossible to tell if a model is “useful” or “not useful” if one does not answer “Useful or not useful for what purpose?”

    Models as they exist are certainly “useful” at showing that we know enough physics to explain why the poles are colder than the equator and summer is warmer than winter. They can be used to explain other things.

    Currently, they don’t seem particularly “useful” when used to project trends in surface temperatures. The multi-model mean says about 0.2 C/decade; the observations seem to suggest otherwise. The fault could lie with the SRES, or with modelers electing not to include the solar variation, or whatever. But that’s part of the process of using models to project. It appears simple linear extrapolation may be beating out the models. That’s pretty sad for models. And if this turns out to be the case, we will say “If your goal is to beat linear extrapolation of earth’s surface trends, models are not useful. If your goal is to understand why the north pole is colder than the equator, they are useful. If you have a different goal, say what it is and we can have a little chin wag about their usefulness with regard to that goal.”

  66. Richard:
    I’m staying quiet on the conclusions of the Russian IEA. I have no point to make at this time. From what I can tell, they are investigating the subset of stations that have been released (not the complete set of stations used in HadCRUT). But I could easily be wrong about that. I can not read Russian so it’s difficult to understand their report.
    .
    Hunter and VG have claimed that their study is pretty important. Perhaps they should back up their conclusions.

  67. “JohnV dont expect to be nanny fed with the answers. Do your own research and come back with the answers and also state if you have any point to make.”

    The point is the Russian IEA thing sounds like utter BS from the start, and most “skeptics” start off believing it because they believe everything that supports their own biases. It’s the new skepticism. Before you say my theory is doo-doo, it does explain Monckton and Plimer quite well…

  68. Also, I heard Lonnie Thompson was out on a glacier with a HAIR DRYER! The Tajikistan Investment League has a story on this.

  69. JohnV (Comment#28311) “I have no point to make at this time.”

    I see, well do a bit of investigation and then see if you have a point to make.

    “I can not read Russian so it’s difficult to understand their report.”

    That would make it more than difficult – that would make it impossible.

    Here is a translation excerpt from their first page:
    “It is easy to see that the meteorological stations located in Russian territory are not distributed entirely uniformly; their concentration is significantly higher in the western and southern parts of the country, while in the north and east it is notably lower.” (Used by CRU)

    Now read EM Smith’s “March of the Thermometers” and put on your Sherlock Holmes thinking cap.

    Boris (Comment#28312) – can you read Russian? Ja? Nyet?

    Believe everything? No to be a sceptic you have to doubt. Believe nothing of what you hear, very little of what you read and only half of what you see.

    That doesn’t apply to Al Gore, or Phil Jones, where you should be prepared to be shocked out of your wits if they accidentally tell the truth.

  70. Re the IEA, the founder of it (although I do not think that he is involved with the IEA organisation at the moment) is now a member of the CATO institute, for what that is worth.

  71. “Re the IEA, the founder of it (although I do not think that he is involved with the IEA organisation at the moment) is now a member of the CATO institute, for what that is worth.”

    David Gould,

    Conspiracy afoot?

    Andrew

  72. Richard:
    Let me get this straight. Hunter and VG make claims about the importance of a report from the Russian IEA. I ask them questions about the Russian IEA. You are upset with me for asking questions. Does that sound right? Is it heresy to ask questions?

  73. No conspiracy. Simply pointing out that this might not necessarily be a disinterested analysis of datasets.

  74. David Gould (Comment#28316)
    December 16th, 2009 at 5:45 pm

    It is often the case that those who suffered under communism have a great appreciation for freedom and the free market.

    Sort of like a reformed smoker. Never light up next to one.

  75. JohnV (Comment#28318) Let me get this straight. Hunter and VG make claims about the importance of a report from the Russian IEA. I ask them questions about the Russian IEA. You are upset with me for asking questions. Does that sound right? Is it heresy to ask questions?

    No, it is not. But these questions can be found out by you as well as anyone else. Then again, questions should have some point behind them, and you have said that you have none.

    The importance of a report, allegation or argument rests solely on the logic and facts presented by the report or argument. It is not disproven by the character of the person or persons making the argument. A bad character can be correct and a saint wrong.

    I am not upset with you, merely pointing out a few things.

  76. David Gould,

    “that this might not necessarily be a disinterested analysis of datasets”

    Yes. And there might not necessarily be a lot of that kind of thing going around.

    Andrew

  77. Richard,

    “The importance of a report, allegation or argument rests solely on the logic and facts presented by the report or argument. ”

    Perhaps a bit pedantic of me, but often the importance of a report, allegation or argument can heavily depend on the logic and facts *left out* of the report, allegation or argument …

    And the choice about what to include versus exclude can be influenced by personal opinion on various issues.

  78. David Gould

    Perhaps a bit pedantic of me, but often the importance of a report, allegation or argument can heavily depend on the logic and facts *left out* of the report, allegation or argument …

    Well, of course. The news article suggests that what has been left out by CRU is important to the arguments about climate change. 🙂

    Of course, we still don’t know much about the report alleging CRU left stuff out.

  79. “I think that there is more than is suspected by some.”

    I agree wholeheartedly and with whipped cream on top, David.

    Andrew

  80. Richard:
    You are right that the “importance of a report, allegation or argument rests solely on the logic and facts presented by the report or argument”. Did Hunter and VG review the logic and facts before making grand conclusions based on the report? Failing that, did they take the easier step of looking into the source? Failing that, why are they so confident that this demonstrates fraud and “strikes at the heart of AGW promotion credibility”?

  81. JohnV,

    Hunter and VG have announced the “death of the AGW scam” many, many times on this blog since I first started reading it. It is like a long running opposite of the parrot sketch. 🙂

  82. David Gould:
    Yes, I know they have. They and others just keep throwing mud, hoping that some of it will stick. Then people like Richard say it’s not ok to ask *them* questions, that I should waste my own time understanding their mud. It’s a classic FUD campaign.

  83. Yes, Global Warming is like the bad ‘B’ movie that your girlfriend likes to watch over and over again.

    The acting is terrible (really awful) and everyone knows the conclusion is always determined before the movie starts.

    Andrew

  84. Lucia,

    If I recall B movies properly, Dracula died many, many times. Also, Zombies sort of die more than once.

    That’s awesome because you get to see the hero kill the zombies multiple times!

  85. David Gould (Comment#28333) “I am hoping that AGW, as opposed to “the AGW scam”, is easier to finish off …”

    It won’t be easy. There are scientists on the multibillion-dollar gravy train. There are Govts ready to tax us into oblivion, tax the air we breathe out – how marvellous, and there are the mass of followers who really, really, really, really, really, really, really, really, want to believe they are “saving the planet”.
    It will die a slow death, difficult to kill like the Russian Rasputin, and do many Lazarus acts every time there is a heat wave.

    JohnV – read this – it made a big impression on me –

    “As the Thermometers March South, We Find Warmth
    Ask any retired person which way to get warmer. They will tell you it’s to head to the tropics.

    Well, I couldn’t sleep until I found out if my “eyeball” look at the increase in lower / middle latitude thermometers by year was right. It was.

    So what AGW has found is that if you put more thermometers in the tropics you get more warming. Who knew? (Just about everyone retired…) I first discovered this trend in some steps detailed here:..” http://chiefio.wordpress.com/2009/08/17/thermometer-years-by-latitude-warm-globe/

    According to the Russian report it’s the same in Russia: the south is overrepresented and Siberia underrepresented in the “Global” database. They say the coverage should be even and the Russian stations are there to give this even coverage, but were not taken. On the face of it this is reasonable. Let’s see what comes of it.

  86. JohnV, David Gould…

    I would point out that we are now looking at a detailed analysis by EM Smith of GISS and GHCN. This follows smaller analyses of single stations that imply similar issues. We also have a history of other analyses spanning over 10 years implying similar issues.

    Basically, it has come down to this: GISS, HADCrud, NCDC… can no longer wave their hands and tell us to TRUST THEM!! They need to fully open the procedures and data they are using and show why they are appropriate, not just hand over unknown hieroglyphics. There is no longer room for them to claim a higher moral ground where they can claim infallibility without fully showing their processes.

    As the satellite record is partially benchmarked against this ground record, it is IMPERATIVE that this openness and verification is undertaken IMMEDIATELY so we can all relax and get on with whatever needs to be done.

    Maybe this Russian group ARE financed by Putin’s KGB pals because they don’t want their oil and gas sales hurt. The best defense against this type of attack is the validation and verification in an OPEN process that everyone interested can SEE!!! If there are problems, they can be resolved in an open manner rather than in a hidden process that will not satisfy most of us.

    If anyone is afraid of an open investigation, I personally would have doubts about their intentions and agenda!!!

  87. Richard,

    I am not sure if you misunderstood my comment or were replying in reverse, so to speak. But to clarify: I am one of those believers. (I do not drive a car and am a vegetarian). When I spoke of my hope that AGW will be easier to finish off than “the AGW scam”, I was talking about my hope that we can minimise the temperature increase caused by humans more easily than Hunter and VG are finding it to kill off something that does not actually exist – that is why “the AGW scam” was in inverted commas.

    As I said, you may not have needed this clarification, but just to be sure. 🙂

  88. SNR –

    “I will not try to speculate on any kind of attribution!”

    What the heck? Tamino certainly attributes warming to radiative forcing, and it seems so do you.

    By pointing to a rise of 0.15C per decade since 1975, pointing to Tamino’s (AKA Grant Foster’s) analysis, and then suggesting that anybody who doubts this is not really a skeptic but rather a “d***ist”, you appear to suggest that near 100% attribution to greenhouse gases is the only reasoned interpretation.

    You are simply wrong. Nobody knows if 0.15 degree per decade since 1975 represents the direct effect of radiative forcing or some convolution of radiative forcing with one or several other factors. It is certainly no more than blind speculation to say that radiative forcing is the only important factor involved over the past 35 years.

    To suggest that someone who doubts total attribution of the post 1975 temperature increase to radiative forcing is not basing this doubt on a reasoned analysis is both unfair and inaccurate. You should be a lot more generous to those who disagree with your party-line attribution to radiative forcing.

  89. David Gould (Comment#28343) – If you do not drive a car and are a vegetarian then surely global warming will be controlled and temperatures will come down fast. And I will give you full credit for saving us from death by grilling and roasting – (not a pleasant thought).

    But please, please, at least start eating meat if we start descending into a little ice-age.

    Have you heard of that horrible man James Randi? He went around doubting such perfectly scientific stuff like water divining and homeopathy. Now he even doubts AGW! Imagine that. And he quotes Arthur Conan Doyle of all people – the blighter.

    “Earth has undergone many serious changes in climate, from the Ice Ages to periods of heavily increased plant growth from their high levels of CO2, yet the biosphere has survived. We’re adaptable, stubborn, and persistent — and we have what other life forms don’t have: we can manipulate our environment. Show me an Inuit who can survive in his habitat without warm clothing… Humans will continue to infest Earth because we’re smart.

    In my amateur opinion, more attention to disease control, better hygienic conditions for food production and clean water supplies, as well as controlling the filth that we breathe from fossil fuel use, are problems that should distract us from fretting about baking in Global Warming.”

    “From Sir Arthur Conan Doyle’s 1891 A Scandal in Bohemia, I quote:

    Watson: “This is indeed a mystery,” I remarked. “What do you imagine that it means?”

    Holmes: I have no data yet. It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts… “

  90. kuhnkat:
    Open code and data is good. I agree.
    .
    Richard:
    Regarding the “March of the Thermometers”. It’s a good thing that temperature trends are computed and communicated as anomalies rather than absolute temperatures. Otherwise Chiefio might have a point. Is that the same guy who had problems with rounding and precision?

  91. Richard,

    Yes, it is my belief that by not driving a car and by not eating meat I personally can save the planet. No-one else need do a thing; SuperDave to the rescue. 😉

    James Randi is a great man. On this issue, he happens to be wrong. (Surprisingly, people who are great can also be wrong. :))

    “Surprisingly, people who are great can also be wrong.”

    Note: this obviously does not include me.

  93. “Holmes: I have no data yet. It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts… ”

    It appears the scientists who create the IPCC report are thinking along the same lines: not just data, but a physical basis.

  94. David Gould (Comment#28348) “Richard, Yes, it is my belief that by not driving a car and by not eating meat I personally can save the planet.”

    Just remember to keep it on an even keel. Don’t overdo it; when the temp goes down, pull it up.

  95. Richard,

    I think that I might have to do some polling: after all, if the majority want a cooler world, I would have to go with their views; and if a majority wanted a warmer one, well – to the BBQ!

  96. JohnV (Comment#28347) : “Regarding the “March of the Thermometers”. It’s a good thing that temperature trends are computed and communicated as anomalies rather than absolute temperatures. Otherwise Chiefio might have a point.”

    If you locate more thermometers in warmer places OVER TIME (remember he said “by year”) you can establish a trend. Just like if you leave a thermometer at the same place and that place gets warmer over time. So maybe he has a point.

    “Is that the same guy who had problems with rounding and precision?” – I have no idea. But what has that to do with the point above? Or is it your habit to ask questions without a point?

  97. Richard,

    On first glance, I would question the maths on that.

    Imagine that you start one temperature station. It reports an average temperature of 100 for its first year of operation.

    Then you start a second temperature station in a warmer place.

    In this year, the first temperature station records a temperature of 101. The second temperature station records a temperature of 200.

    If we are dealing in anomalies, then the second temperature station does not yet have an anomaly to report – we have no baseline as yet.

    So the increase in temperature will be 1.

    In the third year, both stations record an increase of 1 over their previous year.

    So the increase in temperature will be 1 (the average of the anomalies).

    There is thus no bias from the warmer station being added to the temperature dataset.

    Now, if the anomaly was worked out in a different way, such as averaging the absolute temperatures and then working out an anomaly, there would indeed be a problem.

    In that case, you would have recorded an increase in global temperature of 50.5!

    I kind of doubt that scientists are making that kind of elementary mistake …
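    David Gould's worked example can be checked in a few lines of Python. The station values are his hypotheticals, and `absolute_average`/`anomaly_increase` are illustrative helper names, not anyone's actual reconstruction code:

```python
# David Gould's year-by-year example (station values as given in the comment).
stations = {
    "A": {1: 100, 2: 101, 3: 102},   # original station
    "B": {2: 200, 3: 201},           # warmer station added in year 2
}

def absolute_average(year):
    # naive average of raw temperatures from whichever stations report
    vals = [s[year] for s in stations.values() if year in s]
    return sum(vals) / len(vals)

def anomaly_increase(year):
    # mean year-over-year change, using only stations reporting in both years
    diffs = [s[year] - s[year - 1]
             for s in stations.values() if year in s and year - 1 in s]
    return sum(diffs) / len(diffs)

print(absolute_average(2) - absolute_average(1))  # 50.5 -- spurious jump
print(anomaly_increase(2), anomaly_increase(3))   # 1.0 1.0 -- no bias
```

    The naive average jumps by 50.5 the year the warm station appears, while the averaged year-over-year changes stay at the true 1 per year.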

  98. JohnV,

    ” It’s a good thing that temperature trends are computed and communicated as anomalies rather than absolute temperatures.?”

    You would think that wouldn’t you??

    Of course, those stations that are disappearing at altitude and extreme latitudes are generally RURAL stations with less TREND!!!!

    Your anomaly theory falls flat on its face.

  99. David Gould (Comment#28354) : Dont argue with me – I merely pointed out something that on the face of it makes sense. If you say his argument is wrong, pop over to his place and tell him why so. I’ll listen to both of you and, very fairly, declare a winner.

  100. kuhnkat,

    Regarding the specific accusation – that more temperature stations in warmer regions of the globe increase the measured warming – the anomaly theory indeed disproves it.

    If rural stations with smaller trends are disappearing, as you claim, then that is a completely separate issue.

  101. Richard:
    Ok, I’ll back up a little for you. To the best of my knowledge, having looked into GISTemp and written my own simple code for temperature reconstruction, new stations are added to the reconstruction carefully. That means considering and correcting for the normal temperature around the station. You convert to temperature anomalies and offset each station’s readings so that adding or removing stations from different temperature regions does not affect the trend.
    .
    I asked if he’s the same guy who had trouble with precision and rounding to make a point (and he was quoted as an example of this problem in lucia’s recent post on the topic). The point is that if he could not understand something that simple then you might be wise to be skeptical of his other conclusions. I’m not saying that he’s evil — just that he might not be fully informed.
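    The "common baseline" anomaly idea JohnV describes could be sketched like this. All station data, the years, and the helper names below are invented for illustration; this is the general principle of referencing each station to its own baseline mean, not the actual GISTemp algorithm:

```python
# Sketch: reference each station to its own mean over a shared baseline
# window, so mixing warm and cold sites does not bias the combined trend.
# All station data and names here are invented for illustration.
import statistics

def to_anomalies(series, baseline_years):
    base = statistics.mean(series[y] for y in baseline_years)
    return {y: t - base for y, t in series.items()}

cold = {y: 5.0 + 0.02 * y for y in range(2000, 2010)}    # cold site, +0.02 C/yr
warm = {y: 25.0 + 0.02 * y for y in range(2003, 2010)}   # warm site, same trend

baseline = range(2003, 2008)                  # window both stations cover
anoms = [to_anomalies(s, baseline) for s in (cold, warm)]

# average whatever anomalies exist each year (warm site is absent early on)
combined = {}
for y in range(2000, 2010):
    vals = [a[y] for a in anoms if y in a]
    combined[y] = sum(vals) / len(vals)

def slope(d):
    ys = sorted(d)
    my = sum(ys) / len(ys)
    mv = sum(d[y] for y in ys) / len(ys)
    return (sum((y - my) * (d[y] - mv) for y in ys)
            / sum((y - my) ** 2 for y in ys))

print(round(slope(combined), 4))  # 0.02 -- adding the warm site didn't bias it
```

    Even though the warm site is 20 degrees warmer and enters the record three years late, the combined anomaly trend stays at the stations' common 0.02/yr.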

  102. kuhnkat:
    Can you point me to an analysis showing that “those stations that are disappearing at altitude and extreme latitudes are generally RURAL stations with less TREND”?
    .
    If it’s true then there’s a big problem with the temperature reconstructions. Is it true? Has anyone done the analysis? If not, why not? It seems pretty important.

  103. David Gould (Comment#28359) “I have laid out the maths here. What is your take on it?”

    Too simplistic. The march of the thermometers can have an effect on the trends. Read his entire article.

    In the Chiefio’s words “30 degree zones will give a biased view of both thermometer stability and of temperature changes over time. And 30 degree bands hide more than they reveal. IMHO, the 30 degree zone choice is a large “Dig Here!” flag.”

  104. Richard,

    I have read it. I cannot see how the march of the thermometers can have an effect on the trends if anomalies are used *unless* the accusation is not that warmer areas are being picked but that areas that are *warming faster* are being picked.

    However, a march of thermometers towards the tropics should, under the global warming hypothesis, show a *decrease* (the tropics are expected to warm more slowly than higher latitudes).

    If you can explain his reasoning to me as you understand it, that would be great, as I may well be misunderstanding it.

  105. If you can lay out how my maths is wrong, for example, that would be great. “Too simplistic” is not very helpful. What complications am I missing?

  106. David Gould (Comment#28364) “I have read it. I cannot see how the march of the thermometers can have an effect on the trends if anomalies are used *unless* the accusation is not that warmer areas are being picked but that areas that are *warming faster* are being picked.”

    I agree with you there. Apparently the US has had no warming over the last 100 years as per the GISS records? Is that correct? But then Tucson was *warming faster* than any other place in the US. And the USHCN weather station there has siting problems – it is in a parking lot. So when JohnV says biases have been removed from stations, I am not very convinced.

    “However, a march of temperatures towards the tropics should, under the global warming hypothesis, show a *decrease* (the tropics are expected to warm more slowly than higher latitudes).”

    (Yep like Antarctica the fastest warming place on earth? An entire continent being represented apparently by one station, which funnily enough has the fastest warming trend on the continent. And all traceable back to that very dependable chap Michael Mann)
    http://wattsupwiththat.com/2009/12/13/frigid-folly-uhi-siting-issues-and-adjustments-in-antarctic-ghcn-data/

    That’s if one believes, as you do, in the AGW hypothesis. But then you also believe that not driving a car and being a vegetarian makes a difference to the global temperatures, so in JohnV’s words “it might be wise to be skeptical of his other conclusions”.

  107. David Gould,

    “kuhnkat,

    As far as I know, all GISS data and algorithms/code are publically available.”

    Then you can explain to us how the decisions are made to reduce the number of stations worldwide from the high of over 3000 around 1990 to their present deplorable state??

    You can also back up the ridiculous TOB adjustment magnitude and the other decisions, such as averaging an urban station with an alleged rural station (that often isn’t) at distances of 250km or more??

    HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

  108. Richard,

    So you agree that the argument re the march of the temperature stations is wrong? Excellent. 🙂

    Re not driving a car and being a vegetarian making a difference to the global temperatures, it looks to me as though you are a little sour that this particular argument has not gone the way you thought it would (especially as you threw in a whole bunch of other stuff irrelevant to the particular issue that we were discussing).

    I should perhaps emphasise – although this should hardly be necessary – that I *do not* think that my efforts by themselves will have any effect on global temperature. Your assertion that I do think this is incorrect. Just another clarification for you. 🙂

  109. The IEA document is dated December 2009 and was authored by N.A. Pivovarova and edited by A.N. Illarionov
    The paper is 21 pages, the following is a synopsis of the first few pages.
    In the introduction it is stated that the work could not be carried out until the recent (Dec. 8) posting of some of the HadCRUT data at http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20091208a.html
    under pressure from the climate science community. The goal of the study was to verify (with respect to Russia) the HadCRU claim that the 1500 worldwide temperature stations were distributed evenly and reflect the true temperature record. The means for this was to compare temperature data from all Russian weather stations with those used by HadCRUT, with the assumption that if the data are in agreement, that will go a long way to alleviate suspicions about HadCRUT data quality.
    Part 2 is dedicated to data sources and begins with complaints about unorthodox HadCRUT formats. The Russian data are compiled and accessible at http://meteo.ru/climate/sp_clim.php . This database contains data for 476 stations and goes up to 2006. Of the 1500 weather stations that Hadley reports about, 121 are in Russia. In other words, for the land temperatures Hadley used about 25% of the Russian stations (121 vs 476). This is in spite of the fact that Russia is 12.5% of the earth’s land mass and the Russian stations comprise a mere 2.4% of stations used by Hadley.
    If this is useful, I will continue as soon as I have more time.

  110. David Gould (Comment#28369) “Richard, So you agree that the argument re the march of the temperature stations is wrong?”

    That this would cause a trend simply by moving the thermometers southwards, maybe. But there do seem to be *warming faster* places in the move, which would.

  111. JohnV and David Gould,

    Obviously I have not done a detailed analysis of the stations that are included and dropped or I would be splashing it all over the web for BIG brownie points from one side or the other!!!

    I am not competent to do the task in the first place.

    On the other hand, with the difficulty in finding what stations are actually used and which data sets are adjusted how much by whom and what is actually RAW data and where is the Meta Data to determine if adjustments are reasonable…..

    Basically I feel I no longer need to show reason for a complete audit of the temperature data sets. Various people have shown individual problems that are affecting virtually every stage of the process. Those sitting back and claiming that it all AVERAGES OUT are no longer credible, especially with what has been seen behind the scenes with CRU.

    Any claim of error of a particular magnitude by me is guesswork based on poor understanding of the full problem.

    Any claim of small error by the IPCC and friends is, at best, the same. At worst it is fraud.

    IT MUST BE AUDITED NOW FOR CREDIBILITY!!!!!

  112. kuhnkat,

    From the GISS page, it seems that around 2,000 stations are currently used.

    http://data.giss.nasa.gov/gistemp/station_data/

    The general reason for using stations appears to be this:

    “In our analysis, we can only use stations with reasonably long, consistently measured time records.”

    Given that the stations are not owned by GISS, if a country shuts down a station – for whatever reason – it is also no longer possible for GISS to use it.

    What reasons do you suspect GISS has for no longer using those 4,000 stations?

    “Mine is an evil laugh.”

  113. kuhnkat,

    Given that the data and codes are available, there is nothing stopping an audit being done (unless I misunderstand what you mean by ‘audit’).

  114. Richard,

    As coverage increases, if the world is warming then we would expect to find more warming stations than cooling ones, no?

  115. denny,

    It looks as though I was wrong about the founder no longer being involved with the organisation, given that he edited the document.

  116. The very proposition of projecting beyond the point of likely human verification renders the exercise unverifiable, therefore esoteric.

    We’re arguing over the number of imagined angels with no pin on which to dance; yet the band plays on.

  117. David Gould,

    “If it’s true then there’s a big problem with the temperature reconstructions. Is it true? Has anyone done the analysis? If not, why not? It seems pretty important.”

    My understanding is that until EM Smith got GISSTemp code to run no one KNEW what stations were being used!!!!!! Would you like to do the analysis??

    This has been the issue all along with FOI’s on GISSTemper and HADCrud and others to find out what was being done to which thermometers to get the Globally Averaged Sausage Temp.

    Claiming that something is available may still be meaningless if there are mistakes in EM Smith’s work.

    Claiming that HADCru is available is still misleading as there are no distributed analyses including stations. As we have seen with GISSTemper, having the data and the code is not a magic bullet.

    With EM Smith’s station lists, shouldn’t be hard to look through the stations that were dropped and the ones that are left to see whether there is a net difference. Would you mind doing that for us??

  118. More from the Russian report:
    Part 3 deals with station selection. It describes the minimum requirements for selection that aims for data reduction while preserving the representation of the whole. These include
    -even territorial distribution
    -maximum observation duration
    -maximum continuity of data
    -steady location
    -minimum city and industrial heat island effect
    If these criteria are met, the results will have no shift, i.e. they will have the same characteristics as the whole and will be superior to the whole in excluding non-climatic factors.
    More to follow tomorrow.

  119. John V to kuhnkat:
    “Can you point me to an analysis showing that “those stations that are disappearing at altitude and extreme latitudes are generally RURAL stations with less TREND”?”
    .
    Great question, John V. No. But it would make sense to close the stations that are most remote and most expensive to maintain, keeping mostly the UHI & airport stations open. Someone has to check this.

  120. “With EM Smith’s station lists, shouldn’t be hard to look through the stations that were dropped and the ones that are left to see whether there is a net difference. Would you mind doing that for us??”

    Lol. Free the code, free the data. Ah, can’t be bothered.

  121. On first glance, I would question the maths on that.
    Imagine that you start one temperature station. It reports an average temperature of 100 for its first year of operation.
    Then you start a second temperature station in a warmer place.
    In this year, the first temperature station records a temperature of 101. The second temperature station records a temperature of 200. If we are dealing in anomalies, then the second temperature station does not yet have an anomaly to report – we have no baseline as yet. So the increase in temperature will be 1.
    In the third year, both stations record an increase of 1 over their previous year. So the increase in temperature will be 1 (the average of the anomalies). There is thus no bias for the warmer station being added to the temperature dataset.

    I believe EM Smith shows that cold stations were dropped and warm stations were added. Using your crude example:
    first year avg = 100/1 = 100
    2nd year avg = (101+200)/2 = 150.5
    3rd year avg = 201/1 = 201

    I don’t think he has problems with rounding; I think he also found problems with rounding. You could just ask him yourself…
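    Both sides of this exchange can be checked numerically. A sketch using the numbers above, assuming the cold station is dropped in year 3 (station and variable names are illustrative):

```python
# Checking the two approaches with the numbers from the comment, assuming
# the cold station is dropped in year 3.
data = {
    1: {"cold": 100},
    2: {"cold": 101, "warm": 200},
    3: {"warm": 201},              # cold station dropped in year 3
}

# simple average of absolute temperatures -- shows a spurious jump
abs_avg = {y: sum(v.values()) / len(v) for y, v in data.items()}
print(abs_avg)  # {1: 100.0, 2: 150.5, 3: 201.0}

# year-over-year change averaged over stations present in both years
def yearly_change(y):
    common = data[y].keys() & data[y - 1].keys()
    return sum(data[y][s] - data[y - 1][s] for s in common) / len(common)

print(yearly_change(2), yearly_change(3))  # 1.0 1.0
```

    So the crude absolute averages do produce an apparent +101 in two years, while the year-over-year anomaly approach still shows the real +1 per year even across the station drop.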

  122. JohnV, you seem to be realizing… Anyway, to answer your query re Hadley and Russia, it’s all here:
    http://climateaudit.org/2009/12/16/iearussia-hadley-center-probably-tampered-with-russian-climate-data/
    with the emails etc. If you bother to read it… and I would, because McIntyre has never been pro or anti anything, he just “audits” the data, and I think Lucia will agree with me on this one.
    Lucia, you must be beginning to realize now that you have been royally conned by using manipulated data from these people of the TEAM to draw your very hard-worked “real trends vs IPCC data”, which must have entailed considerable effort and work. This will not go away, and the truth will prevail, because it usually does over time. The TEAM could have avoided all this if they had not done it, and they would probably still be respected scientists with a future in a related area such as pollution or natural-variability long-term forecasting etc… BTW I fully trust UAH and RSS data, and this could be used to argue that there has been some warming but also declines over the past 10 years – in other words, probably no significant change whatsoever.

  123. OT, but you may have heard of giant icebergs floating up near the Australian coast last month and also at the same time last year. Why? The Antarctic has been increasing in ice extent considerably over the past 4 YEARS, yes, 4 YEARS (see Cryosphere Today, a totally pro-AGW site). These icebergs are a result of overextended ice in the SH. This event and the extreme COLD records experienced in BOTH the SH and NH winters suggest a global COOLING brought about by the solar minimum. BTW I would prefer warming any day over cooling.

  124. #28388
    Nothing to do with the last four years. This is one of those huge slabs that broke off about ten years ago.

    I can’t see it from my house, though. It’s about 1700 km south.

  125. N Stokes. Point taken, but from a much larger ice mass, i.e. the mass has increased and the iceberg pushed further north. It has happened before, cheers

  126. I fully trust UAH and RSS data and this could be used to argue that there has been some warming but also declines over the past 10 years, in other words probably no significant change whatsoever.

    That’s great! RSS plotted against HadCRUT from 1978:

    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1978/offset:-0.15/plot/rss/from:1978

    It’s amazing that these manipulators at CRU have been so incapable of making any difference! Not only are they FRAUDSTERS they’re also INCOMPETENT! 😉

  127. Nick Stokes (Comment#28187) December 16th, 2009 at 1:13 am

    Bugs #28184
    I made the same remark. But at the bottom right of the graph, I see Lucia has rebaselined it.

    Hide the increase?

  128. Should the lower troposphere (RSS) not show at least some amplified warming compared to the surface? It shows less warming. And UAH shows an even lower trend.

  129. Lucia, I don’t think you get it. The 0.68C GISSTemp could be anything: -0.5, +3, etc. It’s all C@#p. Haven’t you been following the news?

  130. Nylo (Comment#28394)

    Er, yes, Nylo – the HadCRUT trend is very slightly greater, perhaps 0.01C/decade greater?

    So, that’s the scale of manipulation, is it? I remain very surprised that after so many years of effort put into such a fiendishly complex scam that that’s the best they could do! 😉

  131. Simon Evans (Comment#28402)

    The difference in the trend depends a lot on the selected time period. As you can see in my comment #28398, the difference is however quite great since the beginning of the current century: HadCRUT3 cools 0.06C while RSS cools 0.11C, that’s about twice the HadCRUT3 cooling trend. On the other hand, if you want to stick with the period since 1978, UAH shows quite a different story than RSS, as shown by Niels A Nielsen.

  132. Nylo,

    Of course differences in trend depend on the selected time period! Here’s another one for you –

    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1993/offset:-0.15/to:1999/trend/plot/rss/from:1993/to:1999/trend/plot/uah/from:1993/to:1999/trend

    OMGosh, look how much steeper both the RSS and UAH trends were than HadCRUT between 1993 and 1999!!!

    What happened to the CRU manipulations during that time? Or were RSS manipulating even more then? And UAH?

    I guess you really can’t see how entirely ludicrous the whole notion is. You want to suggest that the trend since 2001 shows manipulation, but maybe you’ll have a different explanation for the 1993-1999 period? As a fall back, you’d like to point out that the UAH whole record trend is lower. Yes, we know that. Does that prove HadCRUT manipulation? If so, then I guess we have to presume RSS manipulation too? However, we also know that the UAH trend from 2001 is about the same as HadCRUT. Hmm, are these guys just taking it in turns to manipulate?

    I was responding to VG’s comment “I fully trust UAH and RSS data”. Maybe you can figure out all these devious manipulations amongst yourselves, then get back to us.
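    The window-dependence of short trends that this back-and-forth illustrates is easy to demonstrate on synthetic data. The series below is invented (a fixed trend plus a sinusoid), not real HadCRUT/RSS/UAH data, and `ols_slope`/`window_slope` are illustrative helpers:

```python
# Why trend comparisons depend so much on the chosen window: an OLS slope
# over a short span of wiggly data can swing well away from the long-run
# trend. Synthetic monthly series, 1978-2009: 0.015/yr trend + sinusoid.
import math

years = [1978 + i / 12 for i in range(12 * 32)]
temps = [0.015 * (y - 1978) + 0.1 * math.sin(2 * math.pi * (y - 1978) / 3.6)
         for y in years]

def ols_slope(xs, ys):
    n = len(xs); mx = sum(xs) / n; my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def window_slope(y0, y1):
    pts = [(x, t) for x, t in zip(years, temps) if y0 <= x < y1]
    return ols_slope(*zip(*pts))

print(window_slope(1978, 2010))  # close to the true 0.015/yr
print(window_slope(1993, 1999))  # short window: can differ substantially
```

    The full-record slope recovers the built-in trend; a six-year window mostly measures wherever the wiggle happened to be going.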

  133. #28345 SteveF
    “SNR –

    “I will not try to speculate on any kind of attribution!”

    What the heck? Tamino certainly attributes warming to radiative forcing, and it seems so do you.

    By pointing to a rise of 0.15C per decade since 1975, pointing to Tamino’s (AKA Grant Foster’s) analysis, and then suggesting that anybody who doubts this is not really a skeptic but rather a “d***ist”, you appear to suggest that near 100% attribution to greenhouse gases is the only reasoned interpretation.

    You are simply wrong. Nobody knows if 0.15 degree per decade since 1975 represents the direct effect of radiative forcing or some convolution of radiative forcing with one or several other factors. It is certainly no more than blind speculation to say that radiative forcing is the only important factor involved over the past 35 years.

    To suggest that someone who doubts total attribution of the post 1975 temperature increase to radiative forcing is not basing this doubt on a reasoned analysis is both unfair and inaccurate. You should be a lot more generous to those who disagree with your party-line attribution to radiative forcing.”

    If you had cared to ask whether you had understood me right before writing this up, I could have told you that you were plain wrong, and you could have spared yourself the work.

    If you had thought about my comment about people taking themselves more seriously, you could have gotten on the right track. Therefore, I’ll expand a little on that.

    When opponents of AGW point to non-anthropogenic factors for attribution, they almost invariably have to assume a high climate sensitivity. They can also point to long-time cycles. OK, all forcings are clearly not created alike, but why single out CO2 etc. and assert that they have zero or even negative feedback? You may find some reasons to do that, but then there are many reasons not to, and if you believe in high climate sensitivity, you should be careful about neglecting forcing factors. The same applies to long-time cycles (cf. Akasofu). If Akasofu is right, we are in for quite a lot of cooling in the next years, without that affecting the long-time warming trend he finds in the least. But how many will still refrain from saying “it’s cooling!” if Akasofu is right?

    The proponents of AGW are arguing for a moderately high climate sensitivity, but the climate system may take a very long time to reach equilibrium when forcings are changed – their own models often indicate centuries. In this process, we are also dealing with long-time cycles. In spite of this, they disregard that there may still be quite a bit of “heat in the pipeline” from the natural factors mainly responsible for the LIA recovery, and they disregard cyclicity when attributing the temperature changes since 1970.

    As for me, I really don’t want to speculate on attribution. I don’t even know how much signal there will be left when apparent long-time cyclicity is filtered out. But I think all the suggested factors that are not plainly contradicted by observations merit investigation. To pontificate on attributions without investigating all the actual possible explanatory factors is more theology than natural science to me.

  134. Simon Evans (Comment#28404)

    Yes, you were replying to VG’s statement that he trusted UAH and RSS. And I was responding to your graph pretending to represent that HadCRUT3 and RSS show the same thing. No, they don’t, and they also don’t show the same as UAH. Call it malfeasance, mistake or procedural differences – they are not the same. And they are even more different if you check the regional trends. RSS and UAH, for example, differ almost only in their tropical trends, which affects the global trends.

  135. If you are going to compare the satellite trends to Hadcrut3 or GISS Temp, you should take into account the fact that the satellites are more sensitive to the ENSO, especially compared to GISS.

    In the 1997-98 El Nino, for example:
    GISS only increased 0.4C;
    Hadcrut3 increased 0.6C;
    UAH increased 0.7C; and,
    RSS increased 0.8C.

    This additional variability from the ENSO seems to occur throughout the record (as well as for other ocean cycle influences).

    Something similar is also evident with respect to the volcanoes (where the satellite measurements decline by about 0.6C due to Pinatubo while the surface measurements only decline by about 0.4C).
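    Taking the quoted 1997-98 El Nino responses at face value, the implied amplification of each dataset relative to GISS is simple arithmetic (the figures are Bill Illis's, from the comment above):

```python
# Amplification of each dataset's 1997-98 El Nino response relative to
# GISS, using the figures quoted in the comment above.
el_nino_response = {"GISS": 0.4, "Hadcrut3": 0.6, "UAH": 0.7, "RSS": 0.8}
for name, r in el_nino_response.items():
    print(f"{name}: {r / el_nino_response['GISS']:.2f}x GISS")
# RSS comes out at twice the GISS response (0.8 / 0.4 = 2.0)
```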

  136. VG:
    I read the page at Climate Audit. It basically has the same info as WattsUpWithThat, with one additional disclaimer:

    [Further note: maybe even akin to Cato Institute or CEI. Comments on data need to be cross-examined before relying on them.]

    .
    It looks like Steve McIntyre and I agree on this one. Thanks for the link.

  137. I was responding to your graph pretending to represent that HadCRUT3 and RSS show the same thing.

    What? I linked to both graphed over the whole satellite record period! If you think the tiny difference in trend is evidence of “manipulation”, when there’s more than enough to consider in the differences in what is being observed, in the processing of those observations and in their modelling, then so be it.

    Bill Illis (Comment#28407)

    Of course – which is why I’d suggest we should be particularly careful with any short-term comparisons (such as suggesting the divergence from 2001 as evidence of “manipulation”, the absurdity of which I’d hoped to illustrate with the 1993-99 period I picked out).

  138. SNRatio :
    .
    The proponents of AGW are arguing for a moderately high climate sensitivity, but the climate system may take very long time to reach equilibrium when forcings are changed, their own models often indicate centuries. In this process, we also have to do with long-time cycles. In spite of this, they disregard that there may still be quite a bit of “heat in the pipeline” from the natural factors mainly responsible for LIA recovery, and they disregard cyclicity when attributing the temperature changes since 1970.
    .
    Actually they disregard much more.
    It does not take centuries to “reach equilibrium”.
    It NEVER reaches equilibrium.
    We are dealing here with a driven dissipative non-linear system.
    Such systems are never in equilibrium. Their trajectories are not determined by a hypothetical return to some “equilibrium” but by the most efficient energy dissipation.
    They are not cyclical either, even if pseudo-cycles may pop into existence for a limited time on all time scales.
    You are clearly right when you mention that attributions in such systems are only more or less sophisticated hand-waving.
    According to papers that I have linked several times, the LOCAL climate attractor is supposed to have about 5 dimensions in phase space.
    An attribution would mean identifying those 5 dimensions in order to be able, not to predict the evolution of the system, because that is impossible, but to describe the topology of the attractor.
    This is beyond any theoretical tool today, and it can’t be done empirically either, because we’d need data over literally hundreds of thousands of years.
    Attributing the dynamics of a 5-dimensional chaotic system to one single parameter within a primitive linear equilibrium model, with data over ridiculously short time periods, is just, as you politely say… speculation.

  139. “Lucia you must be beginning to realize now that you have been royally conned by using manipulated data from these people of the TEAM to draw your very hard worked “real trends vs IPCC data” which must have entailed considerable effort and work.”

    VG,

    Don’t underestimate the ability of the American Progressive/Liberal to cling to their Progressive ideas, even in the face of new or conflicting information.

    A con to you and me is unacceptable. A con by a Progressive/Liberal is not a con to other Progressives/Liberals, but simply the way business is done.

    Andrew

  140. Bill Illis,
    “If you are going to compare the satellite trends to Hadcrut3 or GISS Temp, you should take into account the fact that the satellites are more sensitive to the ENSO, especially compared to GISS.”

    Interesting point Bill. It is not immediately obvious why that should be. Perhaps the discrepancy is related to how sea surface temperatures affect the rate of heat transport to the troposphere. If a warmer sea surface leads to substantially greater evaporation and rainfall (as indicated by the Clausius-Clapeyron equation), then the troposphere should warm quite considerably during an El Nino. With an El Nino, the sea surface warming is relatively localized, so the satellites should see bigger tropospheric increases where El Nino increases sea surface temperature, but much less tropospheric warming where El Nino doesn’t change the sea surface temperature much. A warmer troposphere should lead to greater radiative heat loss to space during El Nino, which makes sense if the ENSO is an oscillation between periods of tropical Pacific Ocean heat accumulation (La Nina) and tropical Pacific Ocean heat redistribution/heat loss (El Nino).

  141. The Russian report.
    Part 4 analyzes the evenness of the territorial distribution of the selected stations. In climatology it is traditional to use a 5deg x 5deg geographical mesh, and data are calculated based on the density of stations in each square, their area, etc. The total land mass of the earth comprises about 1500 such mesh cells. The distribution of the Russian meteorological stations and of those used in HadCRUT calculations is shown in Scheme 1 in the world coordinate mesh. Yellow – used, blue – not used; abscissa – eastern longitude, ordinate – northern latitude.
    The distribution of the meteorological stations is uneven, but there are stations in 152 mesh cells within Russia. If all were included in global temperature calculations, their weight would be about 10% (152 vs 1500 cells). But that is not the case.
    Scheme 2 shows the distribution of used and unused Russian meteorological stations by HadCRUT along a 5deg x 5deg coordinate mesh. Yellow – 90 cells used, blue – 62 left unused. Abscissa – eastern longitude, ordinate – northern latitude. Meteorological stations are indicated in the lower left corner of the corresponding mesh cell.
    The analysis shows that a lot of data from open sources within Russia were not used. Moreover, some unused cells are next to one another, thus forming large unused zones. The used data are only from 90 cells (59.2%), which excluded about 40% of Russian territory – and this was not because of a lack of meteorological stations but for some other reasons.
    Theoretically it is possible that some exclusions in European Russia, especially along the borders, were due to the availability of abundant data from other sources (affecting 9 cells). However, practically every station north of the 70th parallel is a unique data source. Nonetheless, of 23 such stations Hadley used only 10.
    Fig 1 shows some temperature series north of the 70th parallel not used by Hadley. Abscissa – years, ordinate – deg C, legend – various stations.
    Even cursory inspection of these data from unused stations reveals the absence of a clear warming trend.
    The HadCRUT selection excluded a huge Russian territory from 50-55 deg northern latitude and 70-90 deg eastern longitude. There are 16 meteorological stations in this area and none was included. Another neighboring large area at 85-90 deg eastern longitude and 50-65 deg northern latitude also has no included meteorological stations.
    Table 1 contains information on the location of meteorological stations used by HadCRUT.
    Column headings:
    -Coordinates of location, N latitude
    -Number of meteorological stations (subheadings: total / used by HadCRUT / unused)
    -Percentage used
    (Bold): total
    The next part of the table is the same, but by E longitude.
    As a result of the selection, 355 Russian stations (74.6%) were excluded by HadCRUT. Of the 152 mesh cells, 62 were entirely excluded (40.8%), along with the 143 operating stations in them (30%). Some of these cells contain more than one station (up to 8).
    Table 2 shows the distribution of stations among all and unused mesh cells.
    Column headings:
    -Number of stations in cell
    -Total cells (subheadings: number of cells / total stations in them)
    -Unused cells (subheadings: number of cells / total stations in them)
    (Bold): Total
    When trying to explain this selection and looking at temperature trends, it is difficult to escape the impression that the excluded geographical areas as a whole did not show remarkable warming trends during the second half of the 20th and the beginning of the 21st century.
    Fig 2 shows temperature series from stations in the cell 65-70 deg n lat., 35-40 deg e lon. that was not included in HadCRUT global temperature calculations.
    Abscissa – years, ordinate – deg C, legend – various stations.
    On the other hand, Hadley sometimes uses data from several, even from all, stations within a single cell, even when they are close to one another.
    Table 3 Distribution of stations used by HadCRUT among cells
    Column headings
    Number of stations in the cell
    Used cells
    Subheadings: Number of cells/total number of stations in them
    Fig 3 shows temperature series from stations within the cell 50-55 deg n lat, 35-40 deg e lon. Included in HadCRUT calculations of global temp.
    Abscissa: years, ordinate deg C, legend names of stations incl. coordinates.
    It is no surprise that all three stations in this cell obediently show warming trends in the second half of the 20th century.
    More to follow when I have time.

  142. I have no idea why the number eight becomes a smiley. As long as you don’t get mixed up, just smile:)

  143. denny:
    Thanks for taking the time to translate. I used an automatic translator, but it was still very difficult to read.

    I have not followed CRU’s release of data closely. If I understand correctly, they recently released data from a subset of stations that were not protected by non-disclosure agreements (about 1500 stations?). Data from other stations will be released as permission is granted. IIRC, the full HadCRU temperature reconstruction uses 4000+ stations. Is that correct?
    .
    The Russian IEA document discusses HadCRU’s 1500 stations (see quote below). If my understanding of CRU’s release is correct, then IEA is analyzing only a subset of the stations (the ones for which data was released). Again, is that correct?
    .
    From denny’s translation (#28370):

    The goal of the study was to verify (with respect to Russia) the HadCRU claim that the 1500 worldwide temperature stations were distributed evenly and reflect the true temperature record.

  144. Part 5 is about the duration of the observations
    The aim is to check how much a role criterion of maximum duration of observations played in Hadley’s selection process.
    Table 4 shows the distribution of meteorological stations according to their starting years
    Column headings:
    Observations started in the year or before
    Number of stations
    Subheadings: total/used by HadCRUT/not used
    Percentage of used
    Next to last row: After 1960
    (Bold): total
    It would appear that stations that began operations in the 19th century should have priority and should be included most fully. But far from all stations with long series were included: of the 82 stations that started operating in the 19th century, 55 were included and 27 (32.9%) were not. Of the 63 stations that started operating before 1930, 31 were not selected.
    How were stations in the same cell selected? Fig 4 shows temp series of two stations (Uchur –blue and Toko -yellow). Was selection based on longevity of series or on the rate of warming?
    Abscissa: years, ordinate deg C, legend names of stations.
    Predictably, HadCRUT included only Toko, despite the longer and more continuous observations from Uchur.
    More later. Note: this is not a literal translation.

  145. My reading is that HadCRU uses about 5000 stations but data from only about 1500 were released. Nonetheless HadCRU claims that, because the selection is representative, the data from the 1500 represent the data from all 5000 well. The Russian text cites http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20091208a.html
    for this, so looking up that link should make it clearer than the somewhat cloudy rewording in Russian.
    I hope this helps.
    I do not claim that what I submit is a full translation, but I am trying to include all the information that is in the Russian text – as I understand it.

  146. denny:
    Thanks for the link, and thanks again for doing the translation work.

    So it looks like the Russian IEA document is only looking at the subset of stations that are not covered by non-disclosure agreements. As indicated by the MetOffice press release:

    This subset is not a new global temperature record and it does not replace the HadCRUT, NASA GISS and NCDC global temperature records

    .
    Any conclusions in this Russian IEA document do not refer to the HadCRUT temperature series.

  147. ivp0 (Comment#28419) December 17th, 2009 at 12:01 pm

    Re: #28409
    Thank you Simon for that handy tool for comparison. So you are suggesting that these trend differences are irrelevant?:

    What do you mean by “irrelevant”? To what? Bill Illis has clearly pointed out the greater sensitivity of the satellite analysis to the 97/98 ENSO – RSS being +0.4C greater change than HadCRUT – and the same is true, though to a lesser extent, for La Nina, so if you plot from 1998 to present then the result should be expected to be exactly the one you get!

    So no, it’s relevant to understanding the different sensitivities of satellite analysis as opposed to surface analysis, but yes, it’s entirely irrelevant to any consideration of “manipulation”.
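Simon’s point about 1998 start dates can be illustrated with a toy calculation (all numbers here are invented): a large spike at the start of the fitting window depresses a least-squares trend even when the underlying trend is unchanged.

```python
import numpy as np

# Synthetic monthly anomalies for 1995-2009: a true trend of
# 0.02 C/yr (0.2 C/decade) plus an exaggerated +0.4 C spike
# during 1998 standing in for the El Nino. All values invented.
years = np.arange(1995, 2010, 1.0 / 12)
anoms = 0.02 * (years - 1995)
anoms[(years >= 1998) & (years < 1999)] += 0.4

def trend_per_decade(start_year):
    # Ordinary least-squares slope over the data from start_year onward
    mask = years >= start_year
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]
    return 10.0 * slope

# Starting the fit on the spike year pulls the trend far below
# the true 0.2 C/decade; an earlier start is much less affected.
print(trend_per_decade(1995))
print(trend_per_decade(1998))
```

This is only a cartoon of the effect, not a reconstruction of the RSS or HadCRUT series.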

  148. JohnV: Your conclusion may be unwarranted for two reasons. 1. The document claims that Russian data are open to everyone, so I think they should have been included in any HadCRU disclosure. 2. HadCRU claims that the data from the disclosed sources represent the whole well. If they are unrepresentative for Russia, it is a problem for the whole.

  149. Boris (Comment#28313) December 16th, 2009 at 5:23 pm

    “Also, I heard Lonnie Thompson was out on a glacier with a HAIR DRYER! ”

    They also fire lasers at all the glaciers from the satellites in space. No wonder they’re all (well, some of them) melting.

  150. denny:
    I am not concluding from this that CRU is perfect. You are raising different issues that may be valid. The particular issue raised by this Russian IEA document — whether CRU cherry-picked the stations used in HadCRUT — can not be investigated using only this subset of stations.
    .
    I believe the complete list of stations used is available (just not the station data). If so, that’s what should be used for this type of study.
    .
    This Russian IEA study may be relevant for the smaller question of whether the subset of stations that aren’t covered by non-disclosure agreements is representative of all stations (either all stations in the world or all stations used in HadCRUT).

  151. Part 6 deals with the maximum continuity of temp series. Hadley claims not to evaluate cases where data are missing. In these instances more complete series should receive preference. Here the completeness of observations as defined by the ratio of actual observations to the total possible observations is examined. Table 5 details the distribution of stations according to completeness of series.
    Col. headings:
    Completeness ratio of series
    Number of stations
    Subheadings: total/ used in HadCRUT calculations/ not used.
    Percentage of used
    Distribution of numbers of stations
    Subheadings: total/ used in HadCRUT calculations/ not used.
    First line: total
    Second line Over 90
    Last line: less than 50
    It turns out that the data not included are systematically much more complete than those included. Series with over 90% completeness are used at a rate of only 23%, while two thirds (66.7%) of half-empty series (below 50% completeness) are used.
    Moreover, HadCRUT sometimes omits data in ways that are hard to explain. For example, Figs 5 and 6 show data from Sortavala and Petrozavodsk as presented by Rosgidromet (the Russian hydro-meteorological authority) and as truncated by Hadley.
    Fig 5 shows temp series from Sortavala (blue) and Petrozavodsk (yellow) presented by Rosgidromet
    Abscissa: years, ordinate: deg C.
    Fig 6 shows temp series from Sortavala (blue) and Petrozavodsk (yellow) used by Hadley
    Abscissa: years, ordinate: deg C.
    A consequence of such truncating is removal of data showing appreciable warming in Karelia during the 1930s and the replacement of a longer (positive) trend in Sortavala with a steeper and shorter one.
    More to come.

  152. JohnV: Not to belabor this, but why do you believe that the Russian part is incomplete when they assert that all data are open source? My reading is that they are dealing with the complete Russian dataset.
    SteveF: I went to school there a very long time ago. I studied electrical engineering and physics. I know very little about climate science (I assume that this is not an oxymoron :)).

  153. denny:
    I understand your point now. It is indeed possible that they are using the complete Russian data set. I don’t know where to look up the complete list of Russian stations used in HadCRUT or the list of stations released this month. Can anyone point me in the right direction?

  154. Part 7 addresses the steadiness of the location of meteorological stations (minimum number of moves, minimum distance of moves). It is important for high quality calculations that observation locations be as constant as possible. Stations move several hundred meters or a few kilometers for various reasons and, naturally, the microclimate, including the thermal regime, may differ at the new location.
    Of the 121 Russian stations used by HadCRUT, 72 (59.5%) moved during their lifetimes, some more than once. Among the 355 stations not used, only 73 (20.6%) moved. In other words, due to higher station stability, the quality of the data not used in the calculations of global temperatures is significantly higher than that of the data used.
    Table 6 shows the distribution of stations depending on the stability of location
    Column headings:
    Number of stations
    Subheadings: total/ Moved/ Not moved
    Percentage of not moved
    Row headings:
    Used in HadCRUT calculations
    Not used
    Total

    Part 8 deals with the urban heat effect. The effect of cities being warmer is well known even outside meteorology; inhabited locations are generally warmer than uninhabited ones, and large population centers are warmer still. Sometimes the difference can be several degrees C. In order to minimize this effect, preference should have been given to data from uninhabited or sparsely inhabited locations.
    Table 6 shows the distribution of stations according to the type and size of habitat.
    Column headings:
    Number of inhabitants
    Number of stations
    Subheadings; total/ Used by HadCRUT/ not used
    Percentage of used stations
    Structure of stations
    Subheadings: total/ Used by HadCRUT/ not used
    Row headings:
    In village
    Less than 20 thousand
    20-50 thousand
    Etc
    Last line: total
    About half of the stations used in the calculations are in villages. However, the percentage of stations used rises quickly from 19.1% for villages to 83.3% for cities of a million inhabitants. It appears that this was not inevitable: the potentially usable stations in villages, where the urban heat effect has no significant influence, are far from exhausted (254 were not used). An example of selection by the Hadley researchers is illustrated in Fig. 7.
    Fig 7 shows temperature series according to data from Buynaksk (blue) and Mahachkala (yellow).
    Abscissa: years, ordinate: deg C.
    According to the formal attributes of these two nearby stations (Buynaksk 42.49 deg, 47.07 deg; Mahachkala 42.58, 47.33), Mahachkala appears to be the less preferable. It is the capital city of Dagestan with over half a million inhabitants (552 thou), while Buynaksk has only 62 thou inhabitants. In addition, the station at Mahachkala was moved three times during its history. However, from Hadley’s point of view Mahachkala was preferable to Buynaksk because during the 20th century it shows a warming trend, while Buynaksk shows slight cooling. Not surprisingly, Mahachkala was used and Buynaksk was not.
    One more to come…:)

  155. However, from Hadley’s point of view Mahachkala was preferable to Buynaksk because during the 20th century it shows a warming trend, while Buynaksk shows slight cooling. Not surprisingly Mahachkala was used and Buynaksk was not.

    Is that actually what this document says? If so, then it is self-evidently a work of propaganda.

  156. So, that’s the scale of manipulation, is it? I remain very surprised that after so many years of effort put into such a fiendishly complex scam that that’s the best they could do!

    Confirmation bias doesn’t require a conspiracy just preconceptions.

    In any case, for whatever it’s worth, I tend to put more weight in GISS (even thought their trends are higher even than CRU) simply because they are a lot better documented. You can do as you please of course, but that’s my predilection.

  157. David Gould (Comment#28369) “I should perhaps emphasise – although this should hardly be necessary – that I *do not* think that my efforts by themselves will have any effect on global temperature. Your assertion that I do think this is incorrect. Just another clarification for you.”

    David Gould (Comment#28343) “I am one of those believers [in AGW]. (I do not drive a car and am a vegetarian).

    It seems that you are kind of mixed up. That you do not drive a car and are a vegetarian seemed to imply that it was related to global warming. And since mighty scientists like Al Gore and Pachauri are urging us to do such things to mitigate the effects of GW, I thought maybe you were doing so because of this. But it seems you share with JohnV the habit of saying and doing things that have no point. A characteristic that, it seems, is common among AGW believers.

  158. JohnV:

    So it looks like the Russian IEA document is only looking at the subset of stations that are not covered by non-disclosure agreements. As indicated by the MetOffice press release:

    I missed that in all of this teapot-scaled tempest.

    Where’s the evidence that the reason the other stations weren’t released is because they were actually covered by nondisclosure agreements?

  159. Richard (Comment#28446)

    You missed out on understanding the words “my efforts by themselves” .

    Rather obviously, if everyone reduced their fossil fuel usage and went vegetarian it would have an effect.

    You may think that doing a positive thing whilst others like you don’t give a toss is pointless. That’s you. Some others may think there needs to be a sense of personal responsibility for what we do, and that from that sense positive effects may grow. That’s others. It’s the difference between someone like you and some others.

  160. Carrick (Comment#28445) December 17th, 2009 at 3:34 pm

    Confirmation bias doesn’t require a conspiracy just preconceptions.

    No, but I would have anticipated it might give rise to some evident bias in outcome! So, we have “confirmation bias” which hasn’t actually affected the outcome in any distinguishable way. Cool – no problem with confirmation bias, then.

  161. Wow. Assuming denny’s translation is a fair representation, and assuming the Russian authors are fairly representing the station data/location/selection, this will be pretty damaging to the CRU’s claim of representative station selection. It seems anything but.

  162. “vegetarian”

    Did you vegetarians know that your body is also configured for the consumption of meat?

    Just checkin’

    Andrew

  163. Andrew_KY,

    It is not, however, dependent upon the consumption of meat (unlike cats, for example – I think!).

  164. Simon Evans (Comment#28450) “..obviously, if everyone reduced their fossil fuel usage and went vegetarian it would have an effect.”

    I see. I miss these “obvious” things. How much effect would it have? What are the figures? How many people would have to reduce their fossil fuel usage and go vegetarian to reduce the temp by x amount, or not have the temp go up by x amount?

  165. Andy Krause,

    I suggest that you re-read my crude example. It had nothing to do with rounding. It was about using anomalies, and how, if you use anomalies, moving temperature stations to warmer regions has no effect.

  166. denny (Comment#28441)
    If that Table 6 (part 7) is the info they used, then it’s dodgy. The most likely reason that stations haven’t moved is that the period of record is short. That doesn’t make them desirable stations. The table doesn’t seem to mention this.

  167. Richard,

    I was responding to your bizarre assertion that I believe that me not driving a car and not eating meat can, all on its own, change the temperature trajectory that the earth is on.

    Regarding numbers, as an example, transportation causes around 15 per cent of global emissions. If everyone cut their transport use by 10 per cent (and obviously this transport use does not simply include personal vehicular usage – it includes the transportation of the food and other goods that we use) we could reduce emissions by 1.5 per cent. Now, that is not all that much. But it is some.

    On a household basis, I have calculated that my family being vegetarian and not using a car has cut our per capita emissions from the Australian average of 20 down to 15, which is not a bad cut. Our per capita emissions are still high by global standards, however. 🙁

  168. Andrew_KY,

    Provided you are careful with your intake of nutrients, this is not a problem; indeed, the amount of meat that is eaten by the average westerner can be more damaging than not eating meat at all. I would argue that at least moderating meat intake would make people healthier.

  169. David Gould (Comment#28458) “I suggest that you re-read my crude example. … how if you use anomalies moving temperature stations to warmer regions has no effect.”

    If you move temperature stations to places which are “warming faster” from places that are “warming slower”, that, you admit, would have an effect. And I pointed out two stations: one in Antarctica (curiously in the northern extremity of the continent), which is warming faster than any other station in Antarctica (some are actually cooling) and which is taken by GISS to be the sole representative of the whole continent; and the fastest-warming place in the US (also in the south of the country).

    The Russian paper also says the same. The stations taken to represent the whole of Russia have the fastest-warming data and do not spatially represent the country.

  170. Oooo, lots and lots Richard! Lots to get to x amount! Oh yes!

    Everybody with any development, including the Chinese at their current per capita rates, would have to reduce emissions if we are to restrain temperature rise. Developed nations would have to reduce per capita very significantly indeed – in the general area of 90% by 2050. It’s not going to happen, of course.

  171. Simon,

    “It is not, however, dependent upon the consumption of meat”

    True.

    “(unlike cats, for example – I think!).”

    My cats get boring dry cat food everyday. -Treats on holidays. 😉

    Andrew

  172. Andrew,

    I think the dry cat food has meat protein in it – not entirely sure about this, but I think they are dependent upon it.

  173. David Gould (Comment#28460) : You have not answered my question. To say “If everyone cut their transport use by 10 per cent ..we could reduce emissions by 1.5 per cent.” is as meaningless as saying if we reduce our incomes by 50% we will be 50% poorer.

    The question I asked was by how much will this reduce global temperatures? or by how much will it reduce the increase in global temperatures?

  174. Andrew_KY

    My cats get boring dry cat food everyday. -Treats on holidays.

    Grizelda, who adopted us by bravely entering through an available cat door supplements her dry food with. . . rabbit. Adult rabbit.

  175. Richard,

    It’s figured that to have a fair chance of staying at or below +2C, emissions need to be limited to a 2020 level of 44Gt. So, you can cut that whichever way you like. This would be a limit along the way of changing the trajectory to 2050 and beyond. It makes no sense to say that cutting 1Gt equals x degrees of temperature gain avoided, since the gain would not be avoided anyway if the 1Gt is put into the atmosphere later or currently by other means.

  176. Richard,

    As far as I am aware, the assertion that GISS only uses one Antarctic station to measure the temperature of the entire Antarctic is false.

    If you go to http://www.antarctica.ac.uk/met/gjma/

    and check each individual station (there are 18), 5 are cooling and 13 are warming (that is as at the end of 2008).

  177. David Gould:
    Trolls are dependent on misunderstanding basic points, asking irrelevant questions, endlessly picking arguments, and constantly changing the topic. Do not feed the trolls.

  178. This is the last installment. I forgot to give a translation of the title:
    How warming is created.
    The case of Russia.
    The original is at
    http://www.iea.ru/article/kioto_order/15.12.2009.pdf
    Now to Part 9. This being results and conclusions, the following is actually a translation. The only caveat, it has not been proofread.
    9. Results.
    The analysis of the use of Russian climate station data by the Hadley center and UEA coworkers shows that the selection process they executed resulted in the following:
    – the share of Russian stations in the global temperature calculations is smaller than the share of Russian territory in the Earth’s land area;
    – the station selection was carried out in such a way that it left over 40% of the country not covered by data;
    – the use of the longest observation series, which are especially valuable for evaluating a century and a half of temperature trends, is far from complete;
    – during the selection of data preference was given to series with measurement gaps while more complete series were unused;
    – during the selection of data preference was given to stations that moved over stations that preserved location;
    – when selection was made between closely located stations preference was given to stations in larger settlements, including cities that clearly exhibited “urban island” effects.
    In other words, the co-workers of HadCRUT carried out a systematic selection of meteorological data, giving preference to lower quality data over higher quality: preferring shorter and less complete series, and data from stations that moved more often and are located in more populated areas. Moreover, they similarly deliberately refused to use data that characterize temperature variations over about 40% of our country.
    In order to check the extent to which this adopted approach could have influenced the calculated results, a comparison between results obtained from the narrow selection and results obtained from the full population is required.
    In order to calculate near-surface air temperature anomalies relative to the 1961-1990 baseline (as accepted in current climatology), we carried out calculations both for all 152 cells of the 5-degree mesh (476 stations) and for the 90 cells represented by the 121 stations selected by the Hadley Center. In both instances averages were calculated for each cell for every available year, the deviations from the baseline were calculated for each cell, and the deviations were averaged over all cells for each year.
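The three-step procedure described above (average stations within a cell, take each cell's anomaly against 1961-1990, then average over cells) can be sketched roughly as follows. The cell names and temperature values are invented for illustration; nothing here reproduces the actual Rosgidromet numbers.

```python
import numpy as np

# Toy input: {cell_id: {station_name: {year: mean temperature, deg C}}}.
# All names and values are invented; the real input would be the
# Rosgidromet station series on the 5-degree mesh.
data = {
    "cell_1": {"A": {1960 + y: 0.01 * y for y in range(50)},
               "B": {1960 + y: 0.02 * y for y in range(50)}},
    "cell_2": {"C": {1960 + y: -0.005 * y for y in range(50)}},
}

BASELINE = range(1961, 1991)  # the 1961-1990 baseline named in the report

def cell_anomalies(stations):
    years = sorted({y for series in stations.values() for y in series})
    # 1) average all stations in the cell, year by year
    cell_mean = {y: np.mean([s[y] for s in stations.values() if y in s])
                 for y in years}
    # 2) subtract the cell's own 1961-1990 mean
    base = np.mean([cell_mean[y] for y in BASELINE if y in cell_mean])
    return {y: cell_mean[y] - base for y in years}

def regional_anomaly(data, year):
    # 3) average the per-cell anomalies over all cells with data that year
    vals = []
    for stations in data.values():
        anoms = cell_anomalies(stations)
        if year in anoms:
            vals.append(anoms[year])
    return float(np.mean(vals))

print(round(regional_anomaly(data, 2005), 4))
```

By construction, each cell's anomaly averages to zero over the baseline years, so cells with different absolute temperatures can be combined.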
    Fig. 8. Average temperature deviations over Russian territory relative to the 1961-1990 baseline; 11 year smoothing.
    Abscissa: year, ordinate: deg C; blue -152 cells (476 stations), red 90 cells (121 stations).
    The results of Fig 8 show a significant discrepancy between the evaluations obtained by the two methods. The extent of warming over Russian territory during 130 years (from the 1870s to the 2000s) obtained from the data used by the Hadley Center (90 cells, 121 stations) is close to 2.0 deg C. However, the calculation for the more complete database (152 cells, 476 stations) gives a more limited warming of 1.4 deg C.
    For the period 1955-1995 the two temperature series are in general close, but moving further into the past, and similarly in the last decade, the separation between the two series quickly grows. At the same time it becomes clear that before the mid-1950s the temperature series built on the Hadley Center selection is characterized by lower temperatures than the series built on all 476 stations, while after 1995 the series built on the Hadley Center selection is characterized by higher temperatures.
    While during the second half of the 1940s the temperature anomalies calculated from the HadCRUT selection appear to be 0.14 deg C below the values obtained from all stations on Russia’s territory, in the 1910s they were already 0.26 deg C lower and in the 1870s 0.56 deg C lower (cf. Fig. 9).
    Fig. 9. The difference between evaluated temperature anomalies with participation of 152 vs. 90 cells of the coordinate mesh; 11 year smoothing.
    Abscissa: year, ordinate: deg C.
    Taking into account the negative difference between the temperature series before the mid-1950s (up to 0.56 deg C) and the positive difference in the mid-1990s (up to 0.08 deg C), the increase in the extent of warming achieved by the co-workers of HadCRUT for the territory of Russia between the 1870s and the 1990s can be estimated at 0.64 deg C.
    Such an estimate is at the same time quite conservative, because in these calculations of temperature over the territory of Russia all data in the Rosgidromet database were used without conducting any selection based on content and without the otherwise inevitable corrections, for example for the urban heat effect.
    Temperature distortions of this size for a country the size of Russia (12.5% of the world’s land) cannot remain without effect on the extent of global warming published by HadCRUT and used in the presentations of the IPCC. In order to clarify the extent of such an increase and to make the data on global temperature changes more accurate, the whole mass of global temperature data must be recalculated.
    If the climatic data processing procedures discovered in the case of Russia were applied similarly to data for other parts of the world, then the inevitable correction of the calculated global temperature and its changes during the 20th century may prove to be quite significant.

  179. Richard,

    Regarding temperature increases, if we (for example), continued on a business-as-usual path except for reducing our emissions by 1.5 per cent, we would have only a very tiny impact on global temperatures.

    If everyone cut their personal emissions by 25 per cent, that would have an impact. Assuming a sensitivity of 3 degrees centigrade per doubling, and assuming that business-as-usual puts us from the 390 ppm that we are at today to 550 ppm (a 160 ppm increase), then cutting emissions by 25 per cent over that period would mean only a 120 ppm increase, to 510 ppm.

    The difference would therefore be around 0.3 of a degree Celsius.

    Not enough, of course. But a start.
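For what it's worth, David's arithmetic can be checked with the usual logarithmic approximation, dT = S * log2(C/C0). The sensitivity and ppm figures below are his stated assumptions, not settled values.

```python
import math

S = 3.0      # assumed sensitivity, deg C per CO2 doubling (David's figure)
C0 = 390.0   # assumed current CO2 concentration, ppm (David's figure)

def warming(c_final):
    # Logarithmic forcing approximation: dT = S * log2(C_final / C0)
    return S * math.log2(c_final / C0)

bau = warming(550.0)  # business-as-usual endpoint
cut = warming(510.0)  # endpoint with the 25 per cent emissions cut
print(round(bau - cut, 2))  # about 0.33, i.e. "around 0.3 of a degree"
```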

  180. Simon:

    So, we have “confirmation bias” which hasn’t actually affected the outcome in any distinguishable way

    I actually would have been very disappointed if it had. There are enough controls available scientifically that confirmation bias shouldn’t affect the outcome. Making a mistake of this sort would have been completely indefensible.

    As has been said above, one should treat claims of data mishandling very skeptically.

  181. Thanks for that last part, denny – it makes clear that Tim Lambert is on the money! The HadCRUT selection demonstrates the natural pre/early 20th century warming which the non-selected data fails to demonstrate.

    So, it looks like the non-selected data, which emphasises only the post-CO2 increased warming is possibly a bit dodgy?

    Lol! 🙂

  182. Has anyone noticed how many websites are now claiming that “Russia” has accused the CRU of cherry-picking (Google it, or Bing it if you’re boycotting Google)? Not “a conservative think tank based in Russia”, but Russia itself. Like the IEA is the Russian government.
    .
    Has the Russian IEA released their data and code yet? It’s getting a lot of attention — surely it should be audited.

  183. Carrick,

    We agree, such a mistake would be indefensible. I don’t know whether you’ll agree with me that it would also have been blitheringly obvious over the periods of time we are considering!

  184. I found a map that shows two things:
    1. The stations in the subset of ~1500 that were just released (red)
    2. Stations in HadCRUT that are not in the subset (gray)
    .
    http://www.metoffice.gov.uk/climatechange/science/monitoring/locations.GIF
    (lucia — could you please inline this image? Thanks)

    .
    There are a lot of Russian stations that are used in HadCRUT but were not released in the current subset. Apparently the stations that were not released are not allowed to be released.
    .
    My previous point still stands:
    The Russian IEA report only looks at a subset of the stations. Any conclusions are not relevant for the HadCRUT temperature reconstruction.

  185. Nick Stokes (Comment#28459)
    The table things are translated closely.
    Simon Evans (Comment#28442)
    However, from Hadley’s point of view Mahachkala was preferable to Buynaksk because during the 20th century it shows a warming trend, while Buynaksk shows slight cooling. Not surprisingly Mahachkala was used and Buynaksk was not.
    Is that actually what this document says? If so, then it is self-evidently a work of propaganda.
    Yes, this is what it says and yes, there is an element of propaganda in it. As there is in about everything that the IPCC puts out. That does not make it untrue – in either case. The people who hid the data exposed themselves to this by doing so. Openness will eventually disinfect all, but it will take time. With the help of all of you who want the truth, whatever it is.

  186. GISS uses SCAR.

    “June 9, 2008: Effective June 9, 2008, our analysis moved from a 15-year-old machine (soon to be decommissioned) to a newer machine. This will affect some results, though insignificantly. Some sorting routines were modified to minimize such machine dependence in the future.

    A typo was discovered and corrected in the program that dealt with a potential discontinuity in the Lihue station record and some errors were noticed on http://www.antarctica.ac.uk/met/READER/temperature.html (set of stations not included in Met READER) that were not present before August 2007. We replaced those outliers with the originally reported values.

    Those two changes had about the same impact on the results as switching machines (in each case the 1880-2007 change was affected by 0.002°C). See graph and maps. ”

    but you never can tell what is really USED unless you get the code working and follow the data through. FWIW

    http://data.giss.nasa.gov/gistemp/station_data/v2.temperature.inv.txt

  187. JohnV (Comment#28477) “Has the Russian IEA released their data and code yet? It’s getting a lot of attention — surely it should be audited.” –

    Of course they should. But how funny that is exactly the demand that “sceptics and deniers” are making for the CRU and other temperature records.

    That they release their data and code and be audited.

  188. You all should learn by now that posting at Tamino and RC is governed by Mann’s law. The comments cannot be used as a megaphone for skepticism. So, if you have a good point to make you should avoid making it there, because they won’t let you make it there in the first place. Ask Lucia.

  189. The Rothera station has a trend of .65 per decade. So if GISS are showing a trend of .41 per decade, they are not cherry-picking Rothera to do that.

  190. Simon:

    I don’t know whether you’ll agree with me that it would also have been blitheringly obvious over the periods of time we are considering!

    That part is a bit more uncertain to me.

    I actually instrument and collect temperature data as part of my work, and a 1°C error isn’t all that large of a systematic error. While I don’t think people are tampering with data sets, I do think the systematic uncertainties associated with these analyses are understated.

  191. John V,
    Richard points out that you have proven the point all skeptics have been making for many years.
    Thank you very much.
    Welcome to the skeptical side. Prepare to be vilified.

  192. Richard and hunter:
    You two aren’t so good with the sarcasm are you? 🙂
    Yes, I realize that “skeptics and deniers” have been asking for code and data. I have always been very supportive of that.

  193. One quick question for JohnV and others.

    Let’s suppose that we have a temperature series and it increased 1C over 100 years. Then let’s suppose that we want to use that temperature series to do things like test the output of models and calibrate proxies. Ok.

    Now let’s suppose that I do a study that shows that over time we have underestimated the amount of warming. Let’s just suppose that it’s a sensor drift over time and instead of going up 1C, when we look at this sensor bias we determine that the actual temperature should go up to 1.1C. We have a cool bias of .1C.

    What would be the best thing to do.

    1. Report the temperature series as going up to 1C, calculate the error bars and then add .1C to one side of error bars around the graph.

    2. Adjust the temperature series up to 1.1C and then calculate the error bars.

    3. Model the .1C adjustment with appropriate error bars and then adjust the temperature accordingly and carry that error forward into your final error calculations.

  194. steven mosher:
    I think the answer depends on how well we understand the hypothetical sensor drift. Answers 1 and 2 are just special cases of answer 3. The real “trick” is determining the “appropriate error bars” in answer 3.
    .
    Having said that, I would go with answer 3. If little is known about the sensor drift, then the appropriate error bars for the modelled adjustment would be quite large.
    .
    I suspect you’re going to have some fun now. Where are you going with this? Time of observation corrections?
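For what it’s worth, the “carry that error forward” step in answer 3 usually just means combining independent uncertainties in quadrature. A minimal sketch, with entirely made-up numbers (the 0.15 and 0.08 values are illustrative, not from any real record):

```python
import math

def combined_uncertainty(trend_err, adjustment_err):
    # Root-sum-of-squares combination, valid when the two error
    # sources are independent and roughly Gaussian.
    return math.sqrt(trend_err**2 + adjustment_err**2)

# Hypothetical: raw-trend uncertainty of 0.15 C, drift adjustment
# modelled as +0.1 C with its own 0.08 C uncertainty.
print(round(combined_uncertainty(0.15, 0.08), 3))  # 0.17
```

As JohnV says, the hard part is justifying the adjustment’s error bars, not the arithmetic that carries them forward.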

  195. JohnV (Comment#28477) December 17th, 2009 at 5:04 pm

    Has anyone noticed how many websites are now claiming that “Russia” has accused the CRU of cherry-picking (Google it, or Bing it if you’re boycotting Google)? Not “a conservative think tank based in Russia”, but Russia itself. Like the IEA is the Russian government.
    .
    Has the Russian IEA released their data and code yet? It’s getting a lot of attention — surely it should be audited.

    Agree 100%.

  196. David Gould:
    If you go to http://www.antarctica.ac.uk/met/gjma/

    and check each individual station (there are 18), 5 are cooling and 13 are warming (that is as at the end of 2008).

    But here’s the weird thing. GISS reports a trend for the continent of 0.41C/decade and only 3 of those 18 stations have reported trends over that number. (Rothera @ 0.65, Marambio @0.57 and Faraday @0.54.) All three of those are on the Antarctic Peninsula, Rothera being the furthest south. The fourth warmest trend is at Esperanza (0.32) and it’s on the peninsula too.

    I would be interested to hear of a weighting algorithm which can produce a trend for the entire continent of 0.41 from all the stations, when the four warmest trends (and the only three over that “average”) come from stations all of which are situated outside what the average observer would consider the boundary of an otherwise mostly convex continent.
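A quick sanity check on that objection: a weighted average is a convex combination, so it can only reach 0.41 if the stations above 0.41 carry a large share of the weight. A sketch with one assumed number (the 0.10 C/decade mean for the other 15 stations is hypothetical, chosen only for illustration):

```python
def required_weight(target, warm_mean, rest_mean):
    # Weight w the warm group must carry so that
    # w * warm_mean + (1 - w) * rest_mean == target.
    return (target - rest_mean) / (warm_mean - rest_mean)

warm_mean = (0.65 + 0.57 + 0.54) / 3  # the three peninsula stations above 0.41
w = required_weight(0.41, warm_mean, 0.10)
print(round(w, 2))  # 0.64: those 3 of 18 stations carrying ~64% of the weight
```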

  197. Punch My Ticket,

    I do not know what algorithm that they used. However, given that their algorithms and codes are all publically available, I am sure that an interested individual can find out.

  198. JohnV (Comment#28489) “Yes, I realize that “skeptics and deniers” have been asking for code and data. I have always been very supportive of that.”

    Supportive? That’s nice in your heart of hearts? Did you ever make that demand yourself? You are quick to make that demand of the Russian paper, which is a completely side issue to seeing whether the vital temperature records are ok or not.

  199. Lucia,

    “Grizelda, who adopted us by bravely entering through an available cat door supplements her dry food with. . . rabbit. Adult rabbit.”

    Does she share with the housekeepers??

  200. Where is the GISS trend of .41 degrees from? I took it from JeffID’s website, but when I searched the web I found this:

    quote:

    Drew T. Shindell of the NASA Goddard Institute for Space Studies in New York, who is another author of the paper, said, “It’s extremely difficult to think of any physical way that you could have increasing greenhouse gases not lead to warming at the Antarctic continent.”

    Dr. Steig and Dr. Shindell presented the findings at a news conference on Wednesday. They found that from 1957 through 2006, temperatures across Antarctica rose an average of 0.2 degrees Fahrenheit per decade, comparable to the warming that has been measured globally.

    In West Antarctica, where the base of some large ice sheets lies below sea level, the warming was even more pronounced, at 0.3 degrees Fahrenheit, though temperatures in this area are still well below freezing and the warming will not have an immediate effect on sea level.

    end quote

    from:
    http://www.nytimes.com/2009/01/22/science/earth/22climate.html

    Note that this is GISS and note that they claim that the trend is .2 degrees *Fahrenheit* per decade, which is nowhere near .4 degrees centigrade per decade.

    So: for those claiming that GISS is cooking the books, where does GISS claim that Antarctica is warming .4 degrees per decade?

  201. Punch My Ticket,

    “I would be interested to hear of a weighting algorithm which can produce a trend for the entire continent of 0.41 from all the stations, when the four warmest trends (and the only three over that “average”) come from stations all of which are situated outside what the average observer would consider the boundary of an otherwise mostly convex continent.”

    Clearly no reasonable weighting algorithm could do it. I expect that if someone takes the time to dig through the GISS methods, they will find some “adjustments” applied to the non-peninsula station data (i.e. the Mann-like magic happens) before the data weighting is done; and presto 0.41C per decade!

  202. So, SteveF, can you please point us to the GISS claim of a 0.41C/decade trend over Antarctica?

    Looking forward to your reference.

    I didn’t think GISS covered the whole of Antarctica, but that’s just silly me – please give us your evidence of what you assert.

  203. David Gould,

    “I suggest that you re-read my crude example. It had nothing to do with rounding. It was about using anomalies and how if you use anomalies moving temperature stations to warmer regions has no effect.”

    Consensus Climate Theory tells us that the poles have higher TRENDS. If we move stations toward the equator it will lower the trends. IF, that is IF, there is a cycle of warming and cooling, where we are in that cycle would determine the actual effect moving stations closer to the equator would have on the global average.

  204. Just a simple averaging of the trends of the stations gives me a trend for Antarctica of .17 degrees centigrade per decade.

    .2 degrees Fahrenheit per decade – the GISS result – equates to .11 degrees centigrade per decade.

    It does not look to me as though there is any funny business going on at GISS re Antarctica.

    Claims refuted.
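The unit conversion in that comparison is worth spelling out: a temperature *trend* converts from Fahrenheit to Celsius by the factor 5/9 alone, because the 32-degree offset cancels in a rate of change. A one-liner to check it:

```python
def f_trend_to_c(trend_f):
    # Trends (degrees per decade) scale by 5/9; the 32 F offset
    # cancels when subtracting two temperatures.
    return trend_f * 5.0 / 9.0

print(round(f_trend_to_c(0.2), 2))  # 0.11, matching the figure quoted above
```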

  205. kuhnkat,

    So basically you are saying that the effect the trends will have depends on the trends? Amazing …

  206. kuhnkat,

    Do you have a basic understanding of anomalies being computed for grid squares? Do you understand that it would not matter if you have 100 stations in one grid square and only one in another, they would each contribute equivalently to a calculation of global anomaly? You appear, from what you are saying, to be labouring under the misapprehension that anomalies are calculated from an average of station records rather than from an average of grid squares. Is that so?
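To make the grid-square point concrete, here is a toy version of gridded averaging (equal cell weights, ignoring area weighting and everything else a real analysis does):

```python
def gridded_anomaly(cells):
    # Each cell's stations are averaged first; each cell then
    # contributes exactly one value, regardless of station count.
    cell_means = [sum(stations) / len(stations) for stations in cells]
    return sum(cell_means) / len(cell_means)

crowded = [0.5] * 100  # 100 stations reading +0.5 C in one cell
lonely = [-0.5]        # a single station reading -0.5 C in another
print(gridded_anomaly([crowded, lonely]))  # 0.0: the lone station counts as much
```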

  207. S Mosher says:

    Has the Russian IEA released their data and code yet? It’s getting a lot of attention — surely it should be audited.

    I was looking for it last night and got to

    http://translate.google.com/translate?hl=en&sl=ru&tl=en&u=http%3A%2F%2Fmeteo.ru%2Fclimate%2Fsp_clim.php

    The original is:

    http://meteo.ru/climate/sp_clim.php

    I got called away to other things so I could not follow up once I had found it. Now that I went to grab stuff to start a recon, I have trouble getting back to the data, so I am not sure what’s up.

  208. David Gould,

    “Given that the stations are not owned by GISS, if a country shuts down a station – for whatever reason – it is also no longer possible for GISS to use it.

    What reasons do you suspect GISS has for no longer using those 4,000 stations?

    “Mine is an evil laugh.””

    As apparently many of the stations are STILL in the GHCN DB where GISSTemper gets their data, and many are still operational, we need to ask Hansen and workers this question. Any suggestions on how we can get this process started??

  209. On the IEA issue, Gavin at RC noted this plot from the IEA report. It shows almost perfect agreement between the IEA nominated group of stations and the selected subset since 1960, and fairly good agreement since 1900. There’s no evidence there of a rigged selection – it seems to have been well made.

  210. David Gould,

    “So basically you are saying that the effect the trends will have depends on the trends? Amazing …”

    Yes, I am saying the TRENDS of the FEW SELECTED stations affect the Globally computed trend. We still do not know how/why those stations are selected.

    Now, don’t you feel better that I have clarified your restatement of what I said??

  211. kuhnkat,

    I do feel better. Thanks. 🙂

    Out of interest, on what basis would you select temperature stations? In other words, if for example I get a reply from Dr Hansen and he says, ‘We select stations for x, y and z reasons, and reject them for a, b and c reason,’ on what basis would you judge his reply?

  212. kuhnkat,

    And Dr Hansen just replied to my email – how prompt!

    Here is his response:

    “David, They should all be included if the records are long enough. But the long-term trend of urban stations is adjusted to the mean trend of nearby rural stations. As for quality control (checking for station mves etc.) we now rely on the National climate Data Center for that purpose, so that we won’t be acused of jiggering the data. It might also be noted, to help quell the concern of skeptics, that if all the stations are used the results are nearly the same. More detail is in the papers, specifically the references in the attachment. Jim Hansen”

    And he included a PDF attachment, which I will examine in a moment (I do not know how to post it here – I need to think about how to do that.)

  213. David–
    The few times I’ve emailed Hansen, he has always replied promptly, briefly, but with no evasion.

    You can post a link to a pdf if the pdf is online somewhere. If Hansen sent you a pdf file, that’s trickier. If it’s not a copyright violation to post, you can send to me, I’ll upload it and post the link. However, many journal articles are covered by copyright, and I can’t copy them and host the copy myself.

  214. These are the references in the PDF file:

    Fröhlich, C., 2006: Solar irradiance variability since 1978. Space Science Rev., 248, 672-673.

    Hansen, J., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, 1981: Climate impact of increasing atmospheric carbon dioxide. Science, 213, 957-966.

    Hansen, J.E., and S. Lebedeff, 1987: Global trends of measured surface air temperature. J. Geophys. Res., 92, 13345-13372.

    Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato, 1999: GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022.

    Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl, 2001: A closer look at United States and global surface temperature change. J. Geophys. Res., 106, 23947-23963.

    Hansen, J., Mki. Sato, R. Ruedy, K. Lo, D.W. Lea, and M. Medina-Elizade, 2006: Global temperature change. Proc. Natl. Acad. Sci., 103, 14288-14293.

    Hansen, J. 2009: “Storms of My Grandchildren.” Bloomsbury USA, New York. (304 pp.)

  215. Simon Evans,

    I realised I gave too short of an answer:

    “Do you have a basic understanding of anomalies being computed for grid squares?”

    Yes.

    “Do you understand that it would not matter if you have 100 stations in one grid square and only one in another, they would each contribute equivalently to a calculation of global anomaly?”

    No.

    You probably do not mean what I think you said. Let me state my understanding.

    It does not matter if you have 1 station in 1 grid and 100 stations in another. The GRID CELLS still contribute equally to the Global Sausage Average. This is also an exaggeration of part of what is wrong with the Global Sausage Average. Poor spatial coverage in an almost chaotic environment with a bias toward population centers and airports.

    “You appear, from what you are saying, to be labouring under the misapprehension that anomalies are calculated from an average of station records rather than from an average of grid squares. Is that so?”

    No.

    The issue is that there is ANY consistent trend in the loss of stations. The only trend that should be apparent is an attempt to maintain spatial coverage for background temps and balancing between rural and urban/airport.

    If the stations were NOT available in the GHCN DB, that would be a partial explanation. As they are, you need to find an explanation for their inexcusable lack of professionalism in claiming authority for their product.

    If, as Hansen claims, they simply cannot afford QC hours, they should give up their product. No system can be expected to maintain itself for long periods.

  216. So it seems that the only criteria used is that the station record is long enough. And even if you use all the stations, the trends are about the same. Another concern alleviated, right, kunhkat? 🙂

  217. David Gould,

    funny, but, I do not see a list of stations in your reply!!

    Could you also post those?? Also, what periods are used for those stations while I am being picky!!

    Another question you might ask is whether he has the time to audit EM Smith’s work!! It would be good to get feedback from someone familiar with the system!!

    At this point, the issue isn’t how I would select stations. The question I thought we were trying to deal with is WHAT STATIONS GISSTemper IS DEALING WITH!!! Once we have that, there are several people who I am sure will do analyses for us!!

    Things like this:

    http://statpad.wordpress.com/2009/12/12/ghcn-and-adjustment-trends/

  218. kuhnkat,

    “They should all be included if the records are long enough. But the long-term trend of urban stations is adjusted to the mean trend of nearby rural stations.”

    “It might also be noted, to help quell the concern of skeptics, that if all the stations are used the results are nearly the same.”

    As such, I am unclear what the issue is.

    Re specific questions, why don’t you ask him yourself?

  219. Re stations used, it is on the darn web link that I posted earlier:

    http://data.giss.nasa.gov/gistemp/station_data/station_list.txt

    “For a list of stations actually used click here: http://data.giss.nasa.gov/gistemp/station_data/station_list.txt
    , for the full list (copied from GHCN’s website and augmented from SCAR) click here: http://data.giss.nasa.gov/gistemp/station_data/v2.temperature.inv.txt

    I have my suspicions that your questions are not serious ones, kuhnkat, given how easy it is to get the answers to them …

  220. By the way, something I noticed from the page:

    http://data.giss.nasa.gov/gistemp/station_data/

    “(a)The figures below indicate the number of stations with record length at least N years as a function of N ,
    (b)the number of reporting stations as a function of time,
    (c)the percent of hemispheric area located within 1200km of a reporting station. ”

    The graph that shows the decline in stations is (b). In other words, fewer stations are *reporting*. This has nothing to do with GISS. They are not selecting fewer stations – they are using all the data they get.

  221. David Gould,

    I am not sure I understand.

    The first link is of ALL stations. What does that mean?? all stations they COULD have used?? All stations collecting data whether or not GISSTemper or HADCrud use them? GHCN or NCDC… stations?

    The second link is the list of all stations used, but, then you state it is augmented from SCAR and give a third link.

    Is the actual list of stations used a total of both the 2nd and 3rd lists??

    How have you verified that these stations are actually SELECTED by the GISSTemper routines that compute the grids and trends?? I will assume for the moment that they are the actual latest lists of stations from GHCN.

  222. kuhnkat,

    Sorry – I accidentally put the same link twice.

    This is the list of stations actually used by GISS:
    http://data.giss.nasa.gov/gist…..n_list.txt

    This is the list of all stations:
    http://data.giss.nasa.gov/gist…..re.inv.txt

    I have not verified that GISS are telling the truth about what stations they are using and not using. However, all you – and note that I use the word ‘you’ deliberately here – need to do is run the data provided through the computer programs provided and see if the answers match.

    Here is the page link to the programs:

    http://www.giss.nasa.gov/tools/

  223. By the way, I do not state that they are augmented from SCAR – those are quotes from the website.

  224. kuhnkat,

    I would also direct you to where Dr Hansen directed me: the references outlined in the PDF.

  225. You guys should check out http://www.ClearClimateCode.org
    They are re-building GISTemp in Python and are almost done.
    .
    I’ve exchanged a few emails with the project lead and I’ve been watching the dev mailing list. It’s being run like a real professional software project. Hopefully GISTemp will be easier to understand and run on your own computer when they are done.

  226. It is very interesting – and amusing – to me how various refuted claims/assertions have evolved over time in this thread. 🙂

  227. Cooking data the Russian way (borscht?). Please read carefully the wording denny has provided. Then take another look at the actual graphs: http://scienceblogs.com/deltoid/2009/12/russian_analysis_confirms_20th.php

    If Tim Lambert is right, and several details indicate that he is, we can sum up a little:

    1. They try to uncover another “smoking gun”, as they continually accuse the HADCRU team of cherry-picking etc.
    2. They clearly don’t know what they are doing, as they haven’t even bothered to check against the final gridded temperature series. If you are checking the construction of temperature series, checking against the final product has to be step one.
    3. They’re not very well versed in the logic of the problem, as they come up with a “HADCRU” time series that weakens the case of AGW, without reflecting on whether this could indicate they are on the wrong track.
    4. Taking their temperature series as the “correct one”, one has to say that the CRUTEM3 estimate is a very good first approximation, so, as data cookers, those guys behind that estimate are really a bunch of losers.
    5. The deviation between estimate and raw temperature series may still be of significance and in some respects questionable, but this clearly has more to do with the algorithms used than the selection of stations.

    Still, I don’t “trust” the CRUTEM3 estimate. Why? Because it’s not transparent, well documented, principles and decisions clearly stated etc etc. Lack of clarity makes me suspicious, knowing that Phil Jones et al also had agendas does not make that better. That does not have to imply misconduct at all – it may just be the inherent tendency to see what we expect to see.

    And – the most important thing: As this Russian soup shows, many people around the world will NOT hesitate in skewing things, if they see it as their interest. Lies and accusations are flying all around the world, and lots of people seem to like it that way, as long as it supports their point of view. THAT’s extremely scary to me, and transparency and open, informed discussion is the only safe remedy against it.

    It’s time Steve McIntyre looks into this, instead of passively citing that
    “Hadley center probably tampered with Russian climate data”:
    http://climateaudit.org/2009/12/16/iearussia-hadley-center-probably-tampered-with-russian-climate-data/

    I’m sure IEA will give him their data and algorithms when he requests it, and, hopefully, we can get the final word on how they could get it so wrong.

  228. Just have to correct myself in the last comment – I got the impression from reading too fast that Lambert had plotted the Russian CRUTEM3 grid data, but it’s the whole world on the actual latitudes – so discrepancies _should_ turn up, really – but it still gives good indications of the actual estimates for Russia.

    This “reality check” has given me a lot more faith in the HADCRU series – it was rather low.

  229. David Gould,

    “I have not verified that GISS are telling the truth about what stations they are using and not using. However, all you – and note that I use the word ‘you’ deliberately here – need to do is run the data provided through the computer programs provided and see if the answers match.

    Here is the page link to the programs:”

    What relation to the computation of GISSTemper do these tools have?? You appear to have gotten lost in your success with exchanging e-mails with the man who related Coal trains with Death Trains!! Throwing this list of tools at me, that have nothing to do with the discussion, and pretending that I have to run them to prove something is dumb or disingenuous. Or, is it that you never check the claims of those you BELIEVE????
    .
    Since you now have the station list YOU should be able to refute my claims of stations biased URBAN/AIRPORT. Of course, I would actually expect you to provide me with an independent verification rather than doing it yourself.
    .
    I gave you a link to RomanM’s GHCN analysis showing bias in the adjustments. Here is the link to EM Smith where he has done his GISSTemper analyses:
    .
    http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/
    .
    http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
    .
    Here is the list of posts he has done on the issue:
    .
    http://chiefio.wordpress.com/category/agw-and-gistemp-issues/
    .

    PS: Which of those claims were mine that were refuted, and by what??

    HAHAHAHAHAHAHAHAHAHAHA

  230. kuhnkat,

    I understand that using google can sometimes be tricky – googlegate and all that – but there are other search engines available. However, given that I have linked to all of the datasets and to the tools which GISS uses to generate its temperature series, you do not even have to use any search engines. 🙂

    As to refuting your claims, sorry: if you make claims, *you* need to provide evidence that they are true. If you provide no evidence, then there is no reason to think that they might be true in the first place, and hence no need to refute them.

    As to the links that you have provided, given how bad the analysis of the march of the thermometers was, I do not trust that web site. (And the problems with rounding in another analysis documented on this site.) If you can provide me with an independent source, then I might consider responding.

    As to a claim of yours that I have refuted, I have refuted the claim that GISS has reduced the number of sites that it uses. It was easily done: by going to a webpage and reading one line of text.

    As I said previously, “Mine is an evil laugh.”

  231. kuhnkat,

    As to me not checking the claims of people who I believe, rather it is that I tend to trust people and organisations who have proved themselves trustworthy to me over time.

    Have you checked, for example, the claims made on the chiefio site? How did you check them?

  232. kuhnkat,

    Have you read the pdf from Hansen yet? Have you looked at the references? (If you have trouble with search engines, I can link to some if you like. ;))

  233. David Gould – you said – “If everyone cut their transport use by 10 per cent we could reduce emissions by 1.5 per cent.”
    Then you say “Regarding temperature increases, if we, continued on a business-as-usual path except for reducing our emissions by 1.5 per cent, we would have only a very tiny impact on global temperatures.”

    When you say “everyone” do you mean every driver on Earth? And this will have only a very tiny impact on global temperatures? What is this “very tiny impact” in terms of numbers?

    Then you say “If everyone cut their personal emissions by 25 per cent, that would have an impact.” By “everyone”, do you mean every man, woman and child on Earth? And is this in addition to cutting their transport use by 10%?

    You say this will make a difference of about 0.3C.

    So let me get this straight. If every man woman and child cuts their personal emissions by 25% and then stick to that rate through our population increases, after a century or less we will have reduced the temperature rise, according to you, as very accurately computed by the AGW theory, by 0.3C.

    The best estimate of IPCC is a 3.5C rise. So instead of 3.5C we will have increased global temperatures by only 3.2C. Is this correct?

    What would be the economic cost of reducing personal emissions by 25%? What would it mean in terms of standard of living? Practical things. Could I use my fridge and toaster? Could I go to Europe for the holiday I’m planning? (I live in NZ.) What about the guy who lives in Burkina Faso? (Have I spelt that right?) And by reducing the rise by 0.3C how much sea level rise could we save? Could I still holiday in Fiji or the Cook Islands or would they cease to exist?

  234. Talking about thermometers marching south, I am unconvinced. I have discussed this with Smith on the Jennifer Marohasy site, and I did not get the feeling that he could explain the effects of this on gridded temperature.

    He is writing extremely long posts that don’t do the final step: looking at how this affects the temperature in the end (a lot like Anthony Watts, actually).

    If there was a massive effect of thermometers marching south, we would see a sharp divergence from satellite temperature. I don’t see that.

  235. Jimmy Haigh (Comment#28314) :Can I just say what a brilliant name “DeNihilist” is?

    It’s really very good, isn’t it. They’re a canny bunch, though, those plumbers.

  236. sod (Comment#28574) December 18th, 2009 at 12:35 am

    Talking about thermometers marching south, I am unconvinced. I have discussed this with Smith on the Jennifer Marohasy site, and I did not get the feeling that he could explain the effects of this on gridded temperature.

    He is writing extremely long posts that don’t do the final step: looking at how this affects the temperature in the end (a lot like Anthony Watts, actually).

    If there was a massive effect of thermometers marching south, we would see a sharp divergence from satellite temperature. I don’t see that.

    Sod, it’s not about the consistency of the denialism, it’s about the quantity.

  237. bugs (Comment#28578) – Bugs the term “denialism” or “deniers”, as applied to people who are sceptical of the AGW hypothesis, is a misnomer.

    “Warmers” is also a misnomer. “Warmers” are actually “believers”. Those who believe, fanatically, wholeheartedly, come what may, in the AGW hypothesis, from its sophisticated but half-done science, to its projections of catastrophe, to its mitigation by the use of dollars, vegetarianism, throwing sand over one’s left shoulder while standing on ones right leg, during full moon.

    And “deniers” are all who do not believe this belief. They range from questioners, to doubters, to people who simply do not have the faith and wait for more evidence. Some of these even doubt that we do not have time to wait for more evidence, as we are placed precariously on the razor’s edge of catastrophe. They think maybe this is not true.

    I personally do not think this is true and I reason it out thus. If it were true, then in our long climatic history, this comparatively minor amount of CO2, would have sometime or the other, tipped us over the edge, when CO2 was rising and “adding to further Global warming”, and sent us hurtling towards a heated extinction.

    The Earth has never behaved like that. It has tended to be cold, and always pulled back from warming. This more so in the last 30 million years.

    But then that is just Nature and history. Against this we have the consensus wisdom of the “2500 scientists” of the IPCC, the policy statements of the Scientific bodies, Obama, Nancy Pelosi, Hillary Clinton, Al Gore, Gordon Brown, Kevin Rudd, Penny Wong and our very own John Key, so I may be wrong and I wait for further evidence.

  238. Steve Carson, my own answer, based on the spectral content of “weather”, is that you need a 30-year average. I suspect most of the modelers would agree with me, but that’s my opinion too.

    The inconsistency of climate models with fluctuations on time scales shorter than that is also not interesting, unless the climate models are tooled to include these shorter time-scale forcings.
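The 30-year figure can be illustrated with a toy series (a linear trend plus alternating “weather” noise; the numbers are invented, chosen so the noise cancels over an even window):

```python
def running_mean(series, window):
    # One value per full window; no padding at the ends.
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# 60 years of a 0.02 C/yr trend plus +/-0.3 C alternating noise.
raw = [0.02 * y + (0.3 if y % 2 else -0.3) for y in range(60)]
smooth = running_mean(raw, 30)
print(round(smooth[0], 2))  # 0.29: the +/-0.3 noise averages out, the trend stays
```

Real weather noise is not this tidy, which is exactly why a long window is needed before model-observation comparisons become interesting.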

  239. Richard,

    By everyone, I mean everyone. The 25 per cent includes the 10 per cent cut in transport – it is not in addition to it. As to costs, it has not cost my family a darn thing – in fact, it has saved us money, given how expensive meat is in comparison to fruit and vegetables and how cheap using public transport is compared to running a car. Obviously, it has meant minor sacrifices of a non-monetary nature, though. I do miss eating meat at times and not having a car is sometimes inconvenient.

    And yes, .3 degrees is not a big cut in temperatures. This is why we need bigger cuts than 25 per cent – 50 per cent globally on 1990 levels by 2050 is the usual number cited. And my family needs to cut their per capita emissions by a further 2/3 or a touch more over that period. This is going to be difficult, but can be achieved through the use of solar power for our electricity needs and by public transport going electric, along with food transport issues in particular being addressed.

    This *will* result in increased costs – there is no point pretending otherwise.

  240. Simon Evans (Comment#28476)
    JohnV (Comment#28480)
    SNRatio (Comment#28540)
    Sorry for the tardy response. My reading of this story is that Tim Lambert inadvertently strengthens the point that the Russians were making. The essential fact is that he shows that the CRU data follows well both lines (total and selected) of the Russian paper. However, the Russian paper makes the (correct) point that the total data should not be taken at face value: corrections, esp. for urban heat island effect, should be made but have not been made. If the CRU data had NOT been cherry-picked but correctly selected, it should show definitely LESS warming than the total. Thus, by accepting the data the Russians put forward and making an, in my view, incorrect argument, Lambert significantly strengthens the case the Russians were making. Ironic…

Comments are closed.