Initial thoughts on the Watts et al draft

While a detailed analysis of the results is not possible until the actual station siting classifications are released, I'll offer an initial set of thoughts and discuss areas where the paper could be improved.

As everyone who follows climate blogs is well aware by now, Anthony Watts released a draft of his new paper yesterday, which claims that well sited stations have a mean warming trend in the raw data that is about 0.1 C per decade less (0.155 C/decade) than poorly sited stations (0.248 C/decade). This is a significantly different result from prior papers that have looked at the issue of station siting (e.g. Menne et al 2010, Fall et al 2011, Muller et al), which all found no significant differences in mean temperature trends between well and poorly sited stations.

The Watts et al draft differs from prior papers in that it uses the classification scheme from Leroy 2010 rather than the older Leroy 1999 criteria. The difference between the old and new Leroy papers is the inclusion of the total surface area of heat sinks, rather than the simple distance to heat sinks. This actually results in less strict criteria than those of Leroy 1999, and considerably more stations are rated as class 1 or 2 (160) than under the prior classification scheme (~80).

This should by itself raise a yellow flag: if using a more strict classification criteria found no difference in trend, why would a more lax classification criteria result in very significant differences in trends (at least in the raw data)? Intuitively the opposite should be true; if good station siting is correlated with lower trends, then more restrictive groups of good stations should result in lower trends, all things being equal.

The Watts et al draft focuses the majority of its analysis on the raw data, comparing the results obtained there to those from the adjusted data to suggest that the adjustments are biasing the trend upwards. This differs from Fall et al, which examined raw, time-of-observation-adjusted, and fully adjusted data but primarily used fully adjusted data in its analysis.

There are good reasons not to use the raw USHCN data for analysis, at least without additional work to control for potential biases. During the period Watts examines, 1979-2008, a significant portion of USHCN stations were converted from Liquid-in-Glass (LiG) thermometers in Cotton Region Shelters (CRS) to electronic instruments in Maximum-Minimum Temperature Systems (MMTS). These instrument changes also generally involved a change in instrument location, as MMTS sensors must be wired to an electrical source. There is good evidence that the conversion to MMTS introduced a significant negative bias in max temperatures and a modest negative bias in mean temperatures, as shown in the figure below. By looking at 1979-2008 trends in stations whose current equipment is MMTS, Watts incorrectly concludes that MMTS stations have a lower trend; in effect he is looking at a record that was mostly CRS from 1979 to around 1990 and mostly MMTS from 1990 to 2008. The correct approach would be to examine records from MMTS stations only after the date at which the MMTS instrument was installed.
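To make the suggested fix concrete, here is a minimal sketch (the station record and conversion year below are invented for illustration) of restricting a station to its MMTS era before computing anything:

```python
# Sketch: restrict a station's record to its MMTS era, so pre-conversion
# CRS years do not contaminate the "MMTS" trend. Data here is invented.

def mmts_only_record(record, conversion_year):
    """Keep only (year, temp) pairs from the MMTS era of a station."""
    return [(yr, t) for yr, t in record if yr >= conversion_year]

# A toy station: CRS through 1989, MMTS from 1990 onward.
record = [(yr, 10.0 + 0.02 * (yr - 1979)) for yr in range(1979, 2009)]
mmts_era = mmts_only_record(record, conversion_year=1990)

assert mmts_era[0][0] == 1990 and mmts_era[-1][0] == 2008
```

Trends computed on `mmts_era` then reflect the MMTS instrument only, rather than a mixed CRS/MMTS record.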

Additionally, during the period from 1979 to 2010 about 250 USHCN stations changed their time of observation (TOBs) from near sunset to early morning. This change in TOBs results in a significant step change in temperature measurements for the stations affected, biasing trend calculations over the period if it is not corrected for. These TOBs changes may be more likely to occur at rural stations than urban stations, as many urban stations had earlier transitions to MMTS instruments with automated readings.
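A quick toy calculation shows how an uncorrected step change of this sort gets read as a trend; the step size and dates below are invented for illustration, not taken from any station:

```python
import numpy as np

# A flat series (no underlying trend) with a -0.3 C step at 1995,
# mimicking a sunset-to-morning TOB change; an ordinary least-squares
# fit over the whole 1979-2008 period reads the step as a trend.
years = np.arange(1979, 2009)
temps = np.where(years < 1995, 0.0, -0.3)

slope_per_decade = 10 * np.polyfit(years, temps, 1)[0]
assert slope_per_decade < 0  # a spurious cooling "trend" from the step alone
```

A single uncorrected step of a few tenths of a degree thus produces a trend bias of roughly comparable magnitude to the siting effects under discussion.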

Watts et al find no significant differences in trends between well sited and poorly sited stations in the adjusted data, but significant differences in the raw data. Given the two major biases known to have occurred during this period, they should be extremely careful to control for sensor type and observation time between the two groups. It may well be that the observed differences are driven more by these factors than by actual station siting, given that prior station siting analyses using stricter classification criteria found no significant differences.

Another useful analysis for Watts et al to do would be to compare Class 1/2 and Class 3/4/5 station records to those from the U.S. Climate Reference Network (CRN), a set of pristinely sited stations, during the period from 2004 to 2012 where CRN data is available. While the period is relatively short, it may be sufficient to yield interesting results.
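As a sketch of what such a comparison might look like, one could compute the trend of the group-minus-CRN difference series and ask whether it drifts; everything below is made-up data, just illustrating the mechanics:

```python
import numpy as np

def residual_trend(group_anom, crn_anom, months):
    """OLS trend (per month) of the group-minus-CRN difference series."""
    diff = np.asarray(group_anom) - np.asarray(crn_anom)
    return np.polyfit(months, diff, 1)[0]

# Made-up anomalies for 2004-2012 (96 months): this "group" drifts warm
# relative to CRN by 0.001 C/month, so the residual trend is positive.
months = np.arange(96)
crn = np.zeros(96)
group = 0.001 * months

assert residual_trend(group, crn, months) > 0
```

Run separately for the Class 1/2 and Class 3/4/5 composites, a nonzero residual trend against CRN would point to a bias in that group over the overlap period.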

—————————————————————————————–
It's also worth highlighting Kenneth Fritsch's summary from the prior thread, which covers some similar points and makes a number of additional ones:

After a 300 plus post thread I think it would be time for someone to attempt to summarize the importance and weaknesses of the Watts prepublication. I have not read all 300 plus posts and I have only skimmed through the Watts paper. I am familiar with this subject so skimming can impart a fairly accurate picture.

What I see is that with the newer criteria for classifying stations, the raw USHCN temperature data show larger differences in 1979-2008 trends when stations are grouped by the newer criteria. That finding, if it can be verified, would be worth a publication. For the bigger picture, however, the question has to be whether that difference exists in the adjusted data. It does not, but Watts attempts to make the point that the data from the higher quality stations with the lower trends is adjusted upwards towards the poorer quality stations with the higher trends. That is an occurrence I have seen with my own calculations using the older classification criteria with TOB and adjusted data. Here I do not see Watts making the connection between that observation and an error in the adjustment methodology, but perhaps my skimming of the article was not sufficient to find it.

Firstly, as I recall, the USHCN data adjustments start with the TOB data and not the raw data, and secondly, I recall the TOB correction is a major part of the adjustment from raw to adjusted temperatures. The adjustments between TOB and adjusted data are made for USHCN with the change point algorithm, which is invoked from both meta data and measured change point calculations between nearest-neighbor stations. Once the non-homogeneity is established in this manner, the data is adjusted using the nearest neighbor stations. It is at this point that Watts would have to show the over-homogenization of the algorithm that would lead to the error he claims for the USHCN adjustment process. What I have found by calculating and finding change points between nearest neighbors after the USHCN data has been adjusted with the Menne algorithm is that it is under-adjusted, although these observations could arise from separate problems.

The problem with using a current station rating to predict temperature trend effects is that it says nothing about the past history of the quality of that station, and when you only look at the past 30 years of data, a station change that predates that period and remained more or less constant should not affect a temperature trend. Obviously in principle a change point algorithm can look into the past and would be better equipped to find these changes. Unfortunately those change point algorithms are limited, particularly by noisy data. Attempts have been made to use simulated and realistic data to test or benchmark the performance of these algorithms. It might be instructive to determine how one might realistically simulate a changing station criterion for testing the adjustment algorithm. For example, would a slow decay in station quality (rating) be detected with a change point and without meta data?

506 thoughts on “Initial thoughts on the Watts et al draft”

  1. TOBS adjustment is necessary, but can it be done without invoking neighbor comparisons, which might mix in effects of siting? There probably aren’t enough stations that neighbor one another of the same siting quality to do so.

    An important issue is whether some of the adjustments in question are adjusting not just for the real problems they are trying to address, but whether they are also resulting in the mixing in of biases. IMAO, the issues with trends in the US are probably overestimated by Watts et al. but not nonexistent. I am also pretty sure that the various problems that are present in US data will be even worse, and compounded by other problems, in other parts of the world.

  2. This should by itself raise a yellow flag: if using a more strict classification criteria found no difference in trend, why would a more lax classification criteria result in very significant differences in trends (at least in the raw data)?

    It doesn’t raise any flags for me.

    A more strict classification criteria forces some of the good data into the not so good data basket, effectively diluting or reducing the “badness” of the not so good data. By relaxing the classification criteria and ensuring that all the good data goes into the good data basket, the poorness of the poor quality data becomes more apparent.

  3. So basically Watts is saying: the raw, uncorrected data needs to be corrected.

    This is news?

  4. Being a total layperson and scientifically really illiterate, I still have a problem with this statement: "The difference between the old and new Leroy papers is the inclusion of total surface area of heat sinks, rather than simple distance to heat sinks. This actually results in a less strict criteria than that of Leroy 1999, and considerably more stations are rated as class 1 or 2 (160) than in the prior classification scheme (~80)." The logic of that statement escapes me. It seems to me that if a station is 10 feet from a 1 foot wall, and the same distance from a 20 foot wall or paved area, then it seems only logical that more stations can be included in the 1/2 classification. The rest of the "initial thoughts" are dependent on that statement. Besides, it is WMO approved. I agree the TOB is important and should be addressed. What I do find fascinating is that a lot of people have a problem seeing the shift change. Instead of comparing USHCN to "Watts", one should look at the converse: start with "Watts" and see if USHCN measures up. Or as the other paper indicates: http://www.agu.org/journals/pip/jd/2012JD017578-pip.pdf
    How many geologists had difficulty adjusting to Wegener and for how long? How many physicists had trouble adjusting to Newton or Einstein and for how long? Not mentioning Galileo.
    Since it now becomes more “fashionable” to question the current orthodoxy a lot more papers of this type will be done. This is going to be fun.

  5. Zeke (Comment #100567)-I can't comment on the first paper, which appears to be a statistical analysis. However, the analysis of reanalyses is really unimpressive for several reasons. For one thing, at least some reanalyses use station temperature data and are thus not independent of them. Besides that, many reanalyses have known-and unknown-biases that affect their trends. That being said, atmospheric data from, say, satellites does suggest biases in temperature data over the US probably can't be very large, at least over the period in question, which is what I would expect. If we take satellite data sufficiently seriously so as to consider it to more or less validate the US trends in the recent period (a very common argument for why Watts et al must be wrong), we have to also take seriously that the same data suggest significant biases are present elsewhere.

  6. “How many geologists had difficulty adjusting to Wegener and for how long?”

    Wegener wasn’t the first to observe how the continents appear to fit together so closely, just as Watts wasn’t the first to notice the effect of homogenization on trends.

    What held back Wegener was the fact that his proposed mechanism of continental drift (continents more or less slicing through the ocean floor like icebergs through the sea above) was and is physically impossible.

    Once sea floor spreading was observed and it was recognized that continents and the sea floor plates move together adoption was quick.

    If Watts can show why the homogenization algorithms that are published and have been subjected to a great deal of scrutiny are wrong, then he’ll have a point. He’ll have his mechanism, and if it holds up to scrutiny, he’ll have scored a great victory.

    Thus far, he just asserts that homogenization leads to *spurious* results, and that the raw data – which is known to have problems due to TOBS changes, sensor changes, etc – yields the "real" trend. This is vaguely similar to Wegener's assertions about continental drift, which were backed mostly by assertion and, beyond that, a physically impossible mechanism.

    McIntyre, listed #4 on the paper (to his surprise, apparently), has already stated that ignoring TOBS issues is a mistake, and promises a full statistical analysis.

    My guess is that when this is done, the results are going to look a lot like previous studies that have looked into this, including the earlier one with Watts’ name on it.

    And my guess is that Watts will continue to refuse to believe it …

  7. Joe Prins,

    The statement that Leroy 2010-based classifications are less strict and include twice as many stations in Class 1 and Class 2 vs. Leroy 1999 is taken directly from Watts's paper. It makes sense when you realize that Leroy 1999 excluded any station from Class 1 or Class 2 if it had any heat sink within a specified distance, whereas Leroy 2010 only excludes locations with heat sinks above a certain size.

    I would be surprised if there were any Class 1/2 Leroy 1999 sites that are not also Class 1/2 Leroy 2010 sites, but I can’t actually know this because I don’t have access to the Leroy 2010 classifications.

  8. Skeptikal,

    I’d imagine that most of the new Class 1/2 sites per Leroy 2010 were Class 3 sites in Leroy 1999. It would not have diluted comparisons to Class 4/5 sites.

    Also, there are many more poorly sited stations than well sited stations, so arguing that the poorly sited station set was diluted with well sited stations doesn't really work that well.

  9. Zeke,
    I think another effect of using the Leroy 2010 classification is that the data needed to apply it is harder to get, and so only a subset of 779 out of 1007 stations could be reclassified. Of course there's very little detail on how they did get and apply that needed data anyway. But there is scope for bias in the way those 779 were selected.

  10. Zeke, were you able to clearly pull out the spatial treatment? There seems to be a mix of simple averages over 9 regions and ALSO the use of 6×6 grid cells (also using simple averaging?). This is more apparent in the two powerpoints than in the paper.

  11. Kap:

    So basically Watts is saying: the raw, uncorrected data needs to be corrected.
    This is news?

    I think the news is Watts is saying it now, too. 😉

  12. Zeke
    "This should by itself raise a yellow flag: if using a more strict classification criteria found no difference in trend, why would a more lax classification criteria result in very significant differences in trends (at least in the raw data)? Intuitively the opposite should be true; if good station siting is correlated with lower trends, then more restrictive groups of good stations should result in lower trends, all things being equal."

    Your logic here is flawed. Leroy 2010 is not less strict than Leroy 1999; it is merely different. L1999 implies that the size of the sink does not matter, while L2010 implies that it does. To anyone with even a rudimentary understanding of thermo, of course the size (mass) matters, as does the specific heat.

    The takeaway should be that L2010, which takes into account the area (a proxy for mass), yields different results than L1999, which ignores mass. So which methodology seems more physically plausible?

    It would be nice if Watts had controlled for all variables other than classification instead of giving us an apples-to-oranges comparison. But there are no flags in the classification part.

  13. dhogaza,
    Whether or not Anthony has identified issues which have caused an overstatement of land warming, the impact on the global average trend will be very modest. That doesn’t strike me as any kind of great victory. What it might do is add to our understanding, since much of the divergence between ocean and land temperature trends would then be explained. I do not pretend to know if Anthony is right about introduced biases, but I am anxious to see the details.

  14. SteveF, there is no reason to believe that global wx stations are better situated than US wx stations. If the argument can be made that US trends are overestimated due to poor stn siting, it can be expected that global (land) trends are likely to be similarly overestimated.

  15. A first clue to whether the analysis may be altered by adjustments would be to go back and look at Fall et al and ask whether the trend differences by siting in the raw data in that paper were larger or smaller than the claimed effect in raw data in this paper. I haven't checked this; anyone else?

  16. Andrew_FL,

    In Fall et al CRN12 stations (well sited) and CRN12 proxies (poorly sited) had identical raw temperature trends.

  17. Here is a problem I have with this analysis. A consequence of a 2 to 1 difference in rate means either the temperature today is not what we think it is, or the temperature at the beginning of the series is not what we think it was. You cannot change the rate over a known period without affecting one end point or the other. Which is it?

  18. Zeke (Comment #100582),

    Also, there are many more poorly sited stations than well sited stations, so arguing that the poorly sited station set was diluted with well sited stations doesn't really work that well.

    According to Watts's figures and tables, 117 stations went from Non-Compliant to Compliant and only 13 went from Compliant to Non-Compliant. That's a net of 104 stations diluting the poorly sited station set.

  19. “Whether or not Anthony has identified issues which have caused an overstatement of land warming, the impact on the global average trend will be very modest. That doesn’t strike me as any kind of great victory.”

    I mean a great victory in the sense of vindication for his surface stations project, which he started to prove that reported US temperature trends are inflated.

    College drop-out with a demonstrated lack of knowledge of algebra who just started teaching himself statistics last friday takes on and kicks professional science ass, c’mon, that would be a great victory on a personal level.

  20. Ron Broberg,
    My point is that the oceans dominate. Even a 0.15C adjustment downward for all land area globally will change the recent trend by only -0.05C per decade or less. Yes, if true it is significant. No, it doesn’t mean GHG driven warming is not happening.

  21. I think we should pause to recognize that the science of climate is in an early stage. Pretty much at a definitional stage with respect to temperature measurement.

    We have endless discussions on UHI, TOBS, and other adjustments…which means that the fundamental metric of regional/global temperature has not been established…scientifically. That is, by careful experimentation.

    For example, UHI can be determined experimentally…we don’t need to statistically estimate using large databases.

    I note from Zeke’s citation of Williams et al. 2012

    “Changes in the circumstances behind in situ temperature measurements often lead to biases in individual station records that, collectively, can also bias regional temperature trends. Since these biases are comparable in magnitude to climate change signals, homogeneity “corrections” are necessary to make the records suitable for climate analysis. ”

    That's interesting from 2 perspectives. First, that measurement error is large relative to the climate signal. And second, that the climate community is still nailing down exactly what that error is, what all of its sources are, and how, if possible, to minimize it relative to the signal that the physics models tell us should be there.

    In sum…we’re in the early days of climate science

  22. dhogaza,
    Generally it is better to indicate to whom you are responding; I will assume me in this case.
    .
    I do not know if Watts et al will hold up to close inspection. I do know that his coauthors are not just learning statistics over the last week. I am puzzled by your apparent hostility toward Anthony. Perhaps dhogaza should offer a formal refutation of Watts et al…. Hopefully one that shows your superior command of statistics.

  23. Based on my analysis of the relationship of anomalies over the US in LT data from UAH, the surface varies by a little more than a factor of two relative to the atmosphere. This suggests that over the US there is a very slight cooling bias relative to satellites adjusted to surface variability. This same kind of analysis done globally would, however, imply a strong warming bias in the surface data. As such I really don't think the US data can possibly have as large a warming bias as Watts et al. imply.

    The regression coefficient for surface temperature anomalies (12 month moving averages) over the US (USHCN) (Y) against LT, the same, (X), is about 2.32, detrended it’s about 2.23. This implies cooling bias, albeit less than about .02 per decade. This same analysis, globally, would imply a warming bias of something close to .1 per decade.

    Note, it’s either this, or the processes that determine lapse rate variations are reversed over long timescales versus short ones.
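The amplification-ratio calculation described above amounts to two OLS regressions, one raw and one detrended. A sketch with synthetic series (the 2.32 and 2.23 figures above come from the commenter's actual USHCN/UAH data, not from this toy):

```python
import numpy as np

def regression_coef(y, x):
    """OLS slope of y regressed against x."""
    return np.polyfit(x, y, 1)[0]

def detrended(series):
    """Remove the linear-in-time trend from a series."""
    t = np.arange(len(series))
    return series - np.polyval(np.polyfit(t, series, 1), t)

# Synthetic stand-ins for smoothed LT and surface anomaly series: the
# surface varies twice as much as LT, plus a small extra linear trend.
rng = np.random.default_rng(0)
t = np.arange(360)
lt = np.cumsum(rng.normal(0.0, 0.02, 360))
sfc = 2.0 * lt + 0.0005 * t

full_coef = regression_coef(sfc, lt)
detrended_coef = regression_coef(detrended(sfc), detrended(lt))
```

Comparing `full_coef` with `detrended_coef` is then the bias diagnostic: if the trend-inclusive ratio exceeds the detrended one, the surface series is warming faster than its short-term covariance with LT would predict.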

  24. Re: dp (Comment #100596)

    Here is a problem I have with this analysis. A consequence of a 2 to 1 difference in rate means either the temperature today is not what we think it is, or the temperature at the beginning of the series is not what we think it was. You cannot change the rate over a known period without affecting one end point or the other. Which is it?

    I'd vote for keeping the present-day temperatures, since we currently have instrumentation with a lot better spatio-temporal coverage than in ye olden days (we have satellites and automated measuring devices which don't need to limit data recording to 2 temps per day at a very few sites).

  25. It seems that the "skeptics" who actually try and study climate usually end up confirming what was already known. Still, it would be nice for Anthony if he did end up adding to the sum total of knowledge.

  26. Oliver – Anthony’s claim covers only the period of the satellite record. From Anthony’s site:

    “The new analysis demonstrates that reported 1979-2008 U.S. temperature trends are spuriously doubled, with 92% of that over-estimation resulting from erroneous NOAA adjustments of well-sited stations upward.”

    This is the perplexing part. How can we not know what the end point temperatures were and hence the trend? Obviously I’m missing something.

  27. dp (Comment #100606)-There is no single satellite instrument monitoring surface weather over the period which is not subject to possible drifts or other temporal biases. In short, believe it or not we don’t know the “end points” in absolute terms with the accuracy necessary to determine the trend in that way. That is why it is necessary to make adjustments in data for known sources of bias and drift.

    But which direction to adjust depends on the source of the trend bias. If it is, say, UHI, presumably leading to spurious warming, the “old” reading is more accurate in the sense of being representative of the surrounding area, and thus the current temp for that location ought to be adjusted downward. If the bias is due to an instrumentation change, presumably the new instrument is the more accurate one and thus old data should get changed.

    However, "which" data is getting adjusted-new or old-only makes a difference to your estimation of the surface temperature's actual value (i.e., on a temperature scale), not its change over time. Of the groups assessing temperature trends for the globe, few attempt to create an actual temperature record, preferring anomalies. NCDC does offer global "normals" for turning their anomalies into one (and at a regional scale, the USHCN data comes in "actual" form from NCDC) but I'm not sure how they are determined. BEST actually seems to attempt to calculate the actual temperature, which I find a bit surprising.

  28. SteveF:

    Perhaps dhogaza should offer a formal refutation of Watts et al…. Hopefully one that shows your superior command of statistics.

    😛

    Somehow, I don't think that's dhogaza's speciality.

  29. Re: dp (Comment #100606)

    Oliver – Anthony’s claim covers only the period of the satellite record.

    Well, cr*p. I missed that point, so what can I say? 🙂

  30. “Perhaps dhogaza should offer a formal refutation of Watts et al…. Hopefully one that shows your superior command of statistics.”

    Nah, I’ll let Steve McIntyre do it.

    I’m lazy …

    Somehow, I don't think that's dhogaza's speciality.

    True, I only have a degree in mathematics, which makes me incompetent to criticize Watts who says he taught himself statistics, starting last Friday …

  32. Perhaps dhogaza should offer a formal refutation of Watts et al…. Hopefully one that shows your superior command of statistics.

    I mean, c'mon, McI is already starting to disassociate himself from the paper.

    You guys are going to throw Mac under the bus?

    (probably so, just as Mosher’s being raked over by those calling him a traitor)

  33. Another point about satellite monitoring-for whatever reason, different instruments often appear to have offsets from one another. Not a problem for determining changes when you can stitch them together during overlaps, but which instrument's level do you bring the other ones to? That matters for knowing the actual level but not the change. Presumably the most recent, most technologically advanced instrument. This has recently happened with measurements of Total Solar Irradiance. The recent TIM instrument gives lower readings than previous satellites indicated for solar brightness in absolute terms. This has led to a significant reduction in the estimated value of the solar constant, but has not caused either ACRIM or PMOD to revise their estimates of past changes in TSI over the last three cycles~ish.

  34. SteveF:

    I do not know if Watts et al will hold up to close inspection. I do know that his coauthors are not just learning statistics over the last week. I am puzzled by your apparent hostility toward Anthony.

    And McIntyre admits there’s a huge flaw in the paper, which he didn’t catch over the weekend.

    If I were McIntyre, I'd have a far deeper and more persistent hostility towards Anthony for having listed me as a co-author of a paper that McI says "I didn't have time to parse", without first asking for permission.

    I’m hostile to Watts because he’s a fucking idiot who lists people on his papers without first asking them if they want to be so listed …

  35. Perhaps dhogaza should offer a formal refutation of Watts et al…. Hopefully one that shows your superior command of statistics.

    Actually I expect author #4, McIntyre, to do so. He’s hinting at it. He’s indicated that he’s surprised at being listed as a co-author (which implies he agrees with the conclusions), since he’d had no time to properly parse the paper.

  36. I wonder if dhogaza is still a fan of McIntyre when he's opposing another AGW pulp fiction..? Won't hold my breath on that, you can't change hypocrites…

  37. Re: dhogaza (Jul 31 17:51),

    What held back Wegener was the fact that his proposed mechanism of continental drift (continents more or less slicing through the ocean floor like icebergs through the sea above) was and is physically impossible.

    Once sea floor spreading was observed and it was recognized that continents and the sea floor plates move together adoption was quick.

    Wegener’s proposed mechanism was indeed crap and was used as an excuse to reject his hypothesis. But you’re wrong about the order of things. I was in graduate school at the time. The seminal paper was a geology paper that identified exactly matching formations in Africa and South America. Once it was proved beyond a doubt that the continents must have been joined the search for the real mechanism was on. Researchers soon identified sea floor spreading and subduction as the probable mechanism for continental drift, not the other way around.

    It turns out that oil companies probably knew all about this decades before, but it was considered proprietary information.

    I wonder if dhogaza is still a fan of McIntyre when he's opposing another AGW pulp fiction..? Won't hold my breath on that, you can't change hypocrites…

    Actually, I'm pleasantly surprised that McI is showing some sign of intellectual honesty.

    But he's not off the hook for blessing Watts' effort until it was pointed out to him that he'd missed a huge mistake.

    Anything from the “team” gets micro-analyzed by McI, but he gave Watts a light-handed pass until he was called on it.

    Tch tch.

  39. And why is it hypocritical when he gave Watts a pass on something so sloppy he’d just laugh if it came from the consensus side?

    The hypocrisy is, of course, McI letting himself be suckered by Watts, and now being an author of record.

  40. The seminal paper was a geology paper that identified exactly matching formations in Africa and South America. Once it was proved beyond a doubt that the continents must have been joined the search for the real mechanism was on.

    Yes, I admit to being overly simplistic. The point stands, though, that Wegener’s proposed mechanism was “crap” (as you say) or “physically impossible” (as I say).

    Researchers soon identified sea floor spreading and subduction as the probable mechanism for continental drift, not the other way around.

    One could still argue against the geological record until a mechanism was found. Fossil evidence predated Wegener, IIRC (and I’m not going to bother to look it up, the point is, his mechanism was crap, and was RIGHTLY discarded, despite those who argue that somehow Wegener was right).

    It turns out that oil companies probably knew all about this decades before, but it was considered proprietary information.

    I do not know this, but I would not be surprised.

    Still doesn't rescue Wegener's proposed mechanism, which was "crap" and so long supposedly "wrongly" opposed by geologists. It was just crap, and rightly opposed.

    Fortunately science moves on …

  41. “TOBS adjustment is necessary, but can it be done without invoking neighbor comparisons, which might mix in effects of siting?”

    The TOBS adjustment is created something like this.

    Hourly stations all over the US were collected. A portion were held out as out-of-sample stations.

    The hourly data was used to create a model of how changes in the observation time introduce a bias. That model takes latitude, longitude, month, and sun position into account. Different regions and different times of year get different adjustments.

    Like so: in location X, a change from sunset to morning will cause a 0.12 C change in TMAX. This value is calculated from empirical data, and the biases (positive and negative) are expressed as the output of a function given the parameters listed above.

    Then the model was tested and verified with the out of sample stations.

    There is no mixing of stations. So for example all stations in Zone X that moved from sunset to sunrise will get a -0.1 adjustment.
    Stations in zone Y that moved from noon to sunrise might get a different adjustment, say +0.2. The adjustment is made on the basis of a model that was developed using hourly data; that model was tested and verified against stations that were held out from the model-building process.

    We know this. A change in time of observation changes the TMAX and TMIN recorded. We know that because we have hourly data and we can simply show what happens as we change the time of observation.

    Changing the TOB is a change in observation practice. That change introduces a bias. This is known and measured, and there is a correction for this bias. If a station underwent other changes, say we changed the exposure from a sunny place to a shady place, everyone at WUWT would say: hey, that change introduces a bias.
    TOBS is no different. Change the time of observation and the TMIN and TMAX change. The raw data has been corrupted by the change in TOB. That corruption can be fixed in one of two ways:

    A) treat it as two stations.
    B) make an adjustment to the station.

    What you can’t do is leave the corruption in the raw data. That would be like ignoring a change that you know makes a difference.
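    The description above of the adjustment as "the output of a function given the parameters listed above" can be sketched as a lookup keyed on zone and observation-time change. A minimal sketch, assuming invented zones, hours, and bias values; the real model fits the biases from hourly station data, and none of the numbers below come from it.

```python
# Toy sketch of a TOBS-style correction: a lookup keyed on
# (zone, old reset hour, new reset hour) -> bias in deg C.
# All values here are invented for illustration.
TOBS_BIAS = {
    ("zone_x", 18, 7): -0.12,  # e.g. sunset -> morning reset in zone X
    ("zone_y", 12, 7): +0.20,  # e.g. noon -> sunrise reset in zone Y
}

def correct_series(tmax, zone, old_hour, new_hour, change_index):
    """Remove the bias from all readings recorded after the station
    changed its observation time at position `change_index`."""
    bias = TOBS_BIAS.get((zone, old_hour, new_hour), 0.0)
    return [t - bias if i >= change_index else t
            for i, t in enumerate(tmax)]

raw = [20.0, 21.0, 19.5, 20.5]                   # daily TMAX, deg C
fixed = correct_series(raw, "zone_x", 18, 7, 2)  # obs time changed at day 2
# readings before the change are untouched; later ones shift by +0.12
```

    Note there is no neighbor comparison here: each station's correction depends only on its own metadata (zone and reset hours), which is the "no mixing of stations" point.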

  42. Zeke, when I think about the Leroy 2010 approach I am reminded of the work we did on impervious surface area.

    The notion that the surface area of impervious material is related to UHI is basically the same as Leroy’s approach, and what I would like to see is the same kind of sensitivity study done on the percentage that we did, not breaking things into 5 arbitrary bins.

  43. Mosh,

    You mean the sensitivity of urban-rural differences based on different ISA cutoffs? We did that for our USHCN paper, though I don’t think the figure made it into the final paper due to length constraints. You also run into issues with a spatial gridding model when you get too strict in your criteria and start losing spatial coverage.

  44. dhogaza (Comment #100611)

    True, I only have a degree in mathematics,

    An associate’s degree, no doubt.

    (Sorry you set that up with years of ad hominem attacks with no substance.)

  45. Andrew, actual temperatures don’t allow you to compare trends at different places because of stuff like latitude and altitude. If you want an example of how it is more useful to consider anomalies there is always this.

  46. I may not have understood any of this, but just in case: Anthony’s paper seems more about station selection than all this stuff you guys are discussing. Anthony’s sieve (Leroy 2010) selects a different population than the others. How do his stations look when the TOB effects are corrected even by the same methods used on the other sets?

  47. Mosher,
    “We know this. A change in Time of observation changes the tmax and tmin recorded. We know that because we have hourly data and we can simply show what happens as we change the time of observation.”
    My understanding was that most stations (even old ones) used max/min recording (analog) thermometers. Am I mistaken about that? With a recording thermometer the time of observation ought not matter.

  48. I think you need to examine exactly which stations switched classes due to Leroy 1999 to 2010. Saying 2010 is less strict because the top tier list got bigger is not defensible unless the top tier from 1999 is largely intact in the 2010 list.

    I suspect a lot of airports got the boot from the top tier because they had tarmac which was far enough distant to not be excluded by Leroy 1999, but in Leroy 2010 the size of the tarmac excluded them. I suspect the expansion was due to a lot of non-airport stations which had some small problem in Leroy 1999, like a driveway or shed too close, but made the top tier in 2010 because the heat sink was actually so small it would have little influence.

  49. Re: the TOB “corruption” and ways to handle it:
    A) treat it as two stations.

    How does BEST know when the time of observation has changed so that it can scalpel the station time series?

  50. SteveF (Comment #100631)
    August 1st, 2012 at 7:07 am

    Mosher is right. The min/max recording thermometers are reset at time of observation. It’s the time the reset is done that makes the difference. In theory, from what I know, the TOBs adjustment is valid because, as Mosher points out, you can prove it is correct by taking hourly data and analyzing it as if it were a min/max recording, picking different times for the pretend reset.

    What I question about TOBS is what Watts brought up: that it depends on complete and true metadata. I don’t believe the metadata has sufficient quality, human nature being what it is. Some station keepers, when asked what they do when they are away and can’t get a recording, say they get the temperature data out of their local newspaper and fill it in from that. How much of that kind of data pollutes the record?

    In general this instrument network has not had the discipline, precision, accuracy, or on-site oversight to do what is being asked of it in identifying decadal trends with accuracies in the hundredths of degrees. Sometimes you just have to face up to the fact that your data isn’t good enough.

    Donald Rumsfeld said “You don’t go to war with the army you want, you go with the army you have.”

    In this situation we have a parallel by the AGW pundits “You don’t advocate policy with the data you want, you advocate policy with the data you have.”

    I believe it is possible there might have been a bit of mischief, or maybe just innocent confirmation bias, involved in turning the data they had into the data they wanted. This is what we seek to discover.

  51. SteveF:

    My understanding was that most stations (even old ones) used max/min recording (analog) thermometers. Am I mistaken about that? With a recording thermometer the time of observation ought not matter.

    Hourly data is available from some.

    I have data from temperature sensors that record every second; in fact, I can point people to a large online collection if interested. This is the resolution you need for studying turbulence in the boundary layer.

    From a pure signal theory point of view, most met stations on the network drastically undersample.

  52. Zeke (Comment #100625)
    August 1st, 2012 at 12:44 am

    Question: In the NASA urban/semi-urban/rural classifications you used, can airports be considered rural?

    I don’t think an airport should ever be considered rural because of all the tarmac and structures. They are urban-like islands even if they are well away from dense population centers.

    In my experience as both a private pilot and air traveler many if not most airports are located as far outside of dense urban areas as practical because nobody wants loud planes right over their heads at low altitudes. I seem to recall that NASA does their urban not-urban classification by looking down through a clear sky at night and looking at how many artificial lights there are. Most small airports don’t have their lights on at night. They have a radio that listens on a published frequency and a pilot can turn on the runway lights by setting his radio to that frequency and double clicking the PTT button on his microphone.

  53. Zeke (Comment #100625)
    August 1st, 2012 at 12:44 am

    Mosh,

    You mean the sensitivity of urban-rural differences based on different ISA cutoffs?

    ####

    No, the poster, where you did a sensitivity from like 0 to 25%.

  54. Carrick should write a pixel reading program to take the maps in Watts and try to figure out the stations used from the locations.

    I bet you 10 quatloos, Carrick, that you can’t do it.

  55. Re: Berkeley doesn’t use metadata to scalpel; it’s done based on properties of the time series.

    So is there some file on the Berkeley website that shows for every station the detail of when and how many times the time series was scalpeled? I’m looking now, but not finding anything obvious yet.

  56. Zeke or Mosh,

    Was the scalpel algorithm tested against known station moves to see whether it could detect them?

  57. Joshy, I was saying precisely that temperature level is not what we are interested in, but anomalies. Methinks you are directing your comments in the wrong place.

    Steven Mosher-Does TOBs assume a static daily cycle? From the sound of it, it does. This is clearly a problem although it is not obvious to me that it would create a bias of a particular kind to do so.

    I’m thinking I prefer the idea of treating it as a new station.

  58. j ferguson,

    We don’t know how his stations look when TOBs and sensor type are corrected for. One of the points of this post was to suggest correcting for both as a way to improve the paper, since I can’t do the analysis myself at the moment.
    .
    dallas,

    Indeed, both the scalpel and Menne’s PHA seem to do a good job of removing TOBs biases. Hence the fact that Berkeley and NCDC find identical CONUS anomalies over the past 30 years or so.
    .
    David Springer,

    I’d be happy to examine exactly which stations switched classes due to Leroy 2010. Unfortunately, this data is not available as of yet.
    .
    JR,

    Berkeley detects TOBs changes by looking for break points in the station compared to surrounding stations. It’s the same way it deals with any potential inhomogeneity.
    .
    David Springer (Comment #100638),

    Airports can be rural if the station is far enough from the buildings. There is good reason to think that airport stations are actually pretty well sited, as they get lots of wind and tend to be in the middle of the grassy area between runways. We did a comparison awhile back and found no real differences in trends between airport and non-airport stations: http://rankexploits.com/musings/2010/airports-and-the-land-temperature-record/
    .
    BillC,

    I believe so, but I will have to check with Robert. The original breakpoint detection work was done before I joined the project.

  59. Zeke (Comment #100648)

    Can’t do TOBs because we don’t have his station list.

    He likely has good reasons, maybe from bad experience, but it seems that there would be more worthwhile analysis of what he says he’s done if he shared the list.

  60. It’s just a shame that, way back in the 20th century, they didn’t have the advantage of computers, or electronic measuring devices, or some sort of communication-by-wire. Otherwise they could have built a small network of (say) several thousand land stations, similar to Argo in the oceans, and probably for a tiny proportion of the cost. It would have made this much less tendentious.

  61. j ferguson,

    Anthony has explained his reasons not to release the station rankings until publication. While I don’t necessarily agree, it’s his choice and I won’t press him on it too much. That said, when folks ask why I haven’t done a particular analysis myself, I do need to point out that I don’t have access to the classifications.

  62. cui bono,

    They could also have launched satellites back in 1900 and developed robust ways to account for instrumental drift and satellite transitions. Alas, we don’t live in this fictional world of perfect climate data :-p

  63. Re: Zeke
    Is there a way to know when BEST detected breakpoints in a station timeseries? I.e. – do they have a file somewhere that shows all of the detected breakpoints and dates by station number?

  64. All I want to say is, good luck guys on working through all this data! Just make sure to take a breather every now and then.

  65. Interesting thread.

    I’m gratified to see dhogaza mixing it here without gratuitous insults. Not unreasonable comments either.

    The inclusion of Steve Mc as an author seems worthy of a fair amount of ridicule as well as a chance for him to demonstrate some non-partisan analysis.

    “Like so: in location X, a change from sunset to morning observation will cause a 0.12C change in TMAX. This value is calculated from empirical data, and the biases (positive and negative) are expressed as the output of a function given the parameters listed above.”

    The idea that a man behind his desk would actually KNOW the actual time of observation is ludicrous. Also, the law of large numbers would suggest that the bias rumbles around zero, not displaying a trend the size of the signal.

  67. Rich,

    The observers record the time of observation as part of the data. It might not always be correct, but it likely won’t be systematically incorrect over time (years, decades).

  68. JR (Comment #100659)
    August 1st, 2012 at 10:50 am
    Re: Zeke
    Is there a way to know when BEST detected breakpoints in a station timeseries? I.e. – do they have a file somewhere that shows all of the detected breakpoints and dates by station number?

    ###############

    We are working on getting that data into a format that is usable.

    Hmm, basically, if you look at the file format, every station has a unique ID, a sequential number from 1 to 44,000.

    So, when a station gets split there are a couple of choices.
    One: change the station ID, so

    23567.12 would be the 12th segment.

    Or: just add another field called the segment field and repeat the station IDs, so for a station with 12 segments you’d have 12 entries.

    The “sliced” data has been on the to-do list for some time. You are talking about 179K “stations”; not so bad on the Berkeley cluster. For PC folks I’ve done some hacks that can help.
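    The two labeling options described above can be sketched like this. Illustrative only, and not the actual Berkeley Earth file format; the station number 23567 is taken from the comment itself.

```python
# Two ways to label scalpeled fragments, per the comment above.
# Illustrative only -- not the actual Berkeley Earth file format.

def segment_id(station_id, segment):
    """Option A: encode the segment number in the ID itself,
    e.g. the 12th fragment of station 23567 -> "23567.12"."""
    return f"{station_id}.{segment:02d}"

# Option B: keep the integer station ID and carry a separate
# segment field, repeating the ID once per fragment.
records = [{"station_id": 23567, "segment": s} for s in range(1, 13)]
```

    Option B is friendlier to flat-file tools (group by station_id, iterate segments); Option A keeps one ID column but forces string parsing downstream.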

  69. Rich (Comment #100666)
    August 1st, 2012 at 12:38 pm
    “Like so: in location X, a change from sunset to morning observation will cause a 0.12C change in TMAX. This value is calculated from empirical data, and the biases (positive and negative) are expressed as the output of a function given the parameters listed above.”
    The idea that a man behind his desk would actually KNOW the actual time of observation is ludicrous. Also, the law of large numbers would suggest that the bias rumbles around zero, not displaying a trend the size of the signal.

    ################

    The effect can be quite large. Over at CA a reader has done what I suggested a while back: go get 5-minute data or 1-hour data from CRN and write a program to see for yourself.

    I started out as a TOBS critic. Go see the CA thread on TOBS.
    A skeptic, Jerry B, shared his data with me (actually from John Daly’s site). After plowing through those numbers I had to admit that my skepticism about TOBS was wrong. Save yourself the embarrassment and actually look at some data.

  70. Steven Mosher-Does TOBs assume a static daily cycle? From the sound of it, it does. This is clearly a problem although it is not obvious to me that it would create a bias of a particular kind to do so.

    ######

    Not that I recall. The best thing I can suggest is to read the papers. I said all manner of stupid things before I did that. I think carrot eater (as I recall) was kind enough to send or post a copy of the paper.
    Sorry, I don’t have it at hand; probably on my Mac somewhere.

    The biggest issue with TOBS in my mind was the standard error of prediction. The verification, as I recall, was pretty good and unbiased, but the SE for certain cases was on the order of 0.2F? (bad memory, sorry)

    At the time of that debate my primary beef was that these errors were not being propagated.

  71. BillC (Comment #100644)
    August 1st, 2012 at 9:17 am
    Zeke or Mosh,
    Was the scalpel algorithm tested against known station moves to see whether it could detect them?

    ############################

    Hmm, I’m not sure. There is a TOBS field in the station data, where that is known, so I suppose once I have the fragments I could do a check of that without bugging Robert for more code.
    There was some sensitivity work done on scalpeling; some of the early work didn’t use the scalpel. Hmm, Anthony was quite pissed that some of the early work was pre-scalpel.

  72. David Springer (Comment #100635),

    Humm… so that suggests the process of reading and resetting of the min/max recording analog thermometers changes the min and max values in the next 24 hour period? I will have to read a couple of papers I guess, since that doesn’t on its face make much sense. It makes perfect sense that a change from one type station to another would change the results, but I am surprised by any influence of time of observation.

  73. Zeke, Mosher, David Springer,
    I found this link at CA: http://www.john-daly.com/tob/TOBSUM.HTM which explains how TOB biases happen. Pretty counterintuitive unless you look closely at it. Seems to me that the location of the station (continental, moderated by the ocean, cold or warm on average) would all come into play. Very messy.

  74. Reading the Karl et al paper, the results and data are pretty interesting. It seems TOB (midnight-midnight climatological day versus noon-noon, for this paper) more often overestimates the minimum (too hot) than underestimates the maximum (too cold). Looking at Table 3 seems to show this (hourly highs are -0.39 C away from the true high annually, and hourly lows are 0.47 C away from the true low), along with Figure 8 of the model data.

    Overall though, it seems the effects are pretty flattened out when one takes both the separate minimum TOB and the maximum TOB together. Still a small amount left over from the more-often-than-not too-high minimums (0.08 C too hot annually all together for their Bismarck, ND data).

    At least that’s how it seems from this paper, if I am reading it right.

  75. “Can’t do TOBs because we don’t have his station list.
    He likely has good reasons, maybe from bad experience,..”

    If it’s that he’s reticent to release data because he doesn’t trust the motives of some ‘auditors’… oh the sweet everloving irony.

  76. SteveF:

    Pretty counterintuitive unless you look closely at it. Seems to me that the location of the station (continental, moderated by the ocean, cold or warm on average) would all come into play. Very messy

    It would come into play. I’m assuming they are using some averaged version of the TOBS correction that may not be right for any given site but would work for an “average” station in the ensemble (but then if the weighting of coastal-to-urban-to-rural stations changes over time, the TOBS correction would have to follow this).

    As you said “very messy.”

  77. Steven Sullivan (Comment #100683)
    August 1st, 2012 at 4:05 pm
    “Can’t do TOBs because we don’t have his station list.
    He likely has good reasons, maybe from bad experience,..”
    If it’s that he’s reticent to release data because he doesn’t trust the motives of some ‘auditors’… oh the sweet everloving irony.

    Not at all. I believe the complaint was more toward someone else publishing work based on his data – which had been acquired at great time and expense – before he could get his own work to press.

  78. Andrew FL replied:
    “In short, believe it or not we don’t know the “end points” in absolute terms with the accuracy necessary to determine the trend in that way. That is why it is necessary to make adjustments in data for known sources of bias and drift.”

    So we don’t know what the temperature is or was, nor presumably even what it will be, we don’t know what the change has been over a period of time even though we’ve spent billions to do just that, we think though that it is bad and will only get worse – we definitely don’t think it will get better and nobody knows why (/sarc), and we’re thinking about spending trillions in hard earned income to treat this as a problem that we need to fix.

    And they call this science.

  79. @D Hogaza

    McIntyre, listed #4 on the paper (to his surprise, apparently), has already stated that ignoring TOBS issues is a mistake, and promises a full statistical analysis.

    My guess is that when this is done, the results are going to look a lot like previous studies that have looked into this, including the earlier one with Watts’ name on it.

    And my guess is that Watts will continue to refuse to believe it …

    His amazing auditing powers failed him when he started to investigate TOBS, and let the issue lapse. Yet they didn’t fail him chasing up FOI requests for years. Those powers seem to fail him when he is not interested in the answer.

    The temperature record has been substantially correct from the start; the confirmation by BEST does not add anything new to what we already know, but it does use a refinement of existing methods to arrive at the same conclusion.

    Which is irrelevant to the ‘sceptics’, since the vast majority have decided to cut Muller loose, and ignore him. The truth was inconvenient.

  80. SteveF (Comment #100674)
    August 1st, 2012 at 1:47 pm

    Zeke, Mosher, David Springer,
    I found this link at CA: http://www.john-daly.com/tob/TOBSUM.HTM
    ————————————————————————
    I read that a couple of years ago. Presumably if you have good enough metadata you might improve your data at the margins. The problem I have is that the metadata is notoriously poor on the face of it, and we don’t know how much poorer than on the face of it, because station keepers aren’t paid and there is no oversight. Watts talked to a great many of them and found things like: when they couldn’t do the reading at the normal time they’d write down the normal time anyway; when on vacation they’d just fill in from newspaper reports of high/low temp; or when it was too cold or they were sick they’d get it from the paper.

    On top of that, you would expect that out of millions of log entries the number of entries taken near the high temp of the day would about equal the number taken near the low temp of the day, and they would essentially cancel out. The low temp is generally around breakfast and the high temp around supper. Six of one, half dozen of the other.

    But that isn’t the result of the TOBS adjustment. The TOBS adjustment accounts for half of the twentieth century warming trend. Given how much room there is for mischief (or mistakes or human error) in exactly how the adjustment is applied, and given the demonstrated proclivity of the usual suspects to WANT alarming warming, it’s just a big red flag.

    What we should do is use the raw data, warts and all, because that can’t be claimed to have been purposely biased. But there’s no warming trend in the raw data! None!

  81. SteveF (Comment #100672)
    August 1st, 2012 at 1:25 pm

    David Springer (Comment #100635),

    Humm… so that suggests the process of reading and resetting of the min/max recording analog thermometers changes the min and max values in the next 24 hour period?

    ——————————————————————

    It’s really pretty simple in concept. Say the high temperature of the day happens at 3pm and it’s 100F. Now say you take your reading at 3pm and reset your thermometer. It immediately climbs back up to 100F because it’s still 100F outside. Now say at 3pm the next day it’s only 95F. Your thermometer will still have yesterday’s 100F reading. So you’ll record 100F for two days in a row. If you’d taken your reading at noon that would not have happened, and you’d have one day’s high at 100F and the next day’s at 95F. This can go either way. You can also reset near the low temperature for the day. No time is perfect, because when cold or warm fronts blow through, the high/low temp can occur at a time quite different from the average.

    I say screw the adjustments and use the raw data, warts and all.

    But if we do that there’s no warming trend and the climate boffins have to find some other way to earn a living. See the problem?
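    The 3pm double-count described above can be reproduced with synthetic hourly data, in the spirit of the "write a program and see for yourself" suggestion upthread. The sinusoidal daily cycle below is made up for illustration; the point is only that a reset near the daily peak lets a warm day's maximum carry over into the next day's reading, while a morning reset does not inflate the maxima.

```python
import math

def hourly(day_bases):
    """Synthetic hourly temps: one sinusoidal cycle per day, peaking
    at 15:00 local time, with a per-day base temperature."""
    return [b + 10.0 * math.sin(math.pi * (h - 9) / 12.0)
            for b in day_bases for h in range(24)]

def recorded_maxes(temps, reset_hour):
    """Pretend a max thermometer is read and reset once a day at
    reset_hour. Each reading covers the span since the previous reset,
    *including* the temperature at the moment of that reset -- the
    instrument climbs right back to the current temperature."""
    readings = []
    i = reset_hour
    while i + 24 < len(temps):
        readings.append(max(temps[i:i + 25]))  # carry-over included
        i += 24
    return readings

temps = hourly([25.0, 22.5, 25.0, 22.5])  # warm/cool alternating days
pm = recorded_maxes(temps, 15)            # reset right at the daily peak
am = recorded_maxes(temps, 7)             # reset in the morning
# The 3pm reset repeats the warm days' peak on the cool days;
# the 7am reset recovers the true alternating maxima (shifted a day).
```

    With these made-up numbers, the afternoon-reset record shows a flat run of warm-day peaks, while the morning-reset record preserves the alternation; averaged over a long record, the afternoon reset is biased warm.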

  82. Zeke and Mosh,

    You do realize that TOBS is fraught with problems, right? Different places and different seasons have different times that are going to give the greatest or least error, and in which direction the error lies. The bias was obtained by modeling a number of hourly stations and comparing the results from picking different hours to perform the reset. The selection of the stations used to get the modeled results is one source of error, and which stations in the actual run on the real min/max dataset get which kind of adjustment is also critical. There is a great deal of room for error in the process of applying a TOBS correction to a huge dataset. I’m not going to argue with you that time of observation will make a difference; what I’m arguing with you about is whether a reliable correction for it can be made.

    This is basically a case of the raw data being a sow’s ear and the adjusted data being purported to be a silk purse. I’m saying you can’t make a silk purse from a sow’s ear, and you’re saying you can.

    One thing is for sure. The sow’s ear doesn’t have any warming trend in it, so if you’ve got a vested interest in a global warming controversy you have to defend the silkiness of that purse. Maybe we should turn it all over to people who have no vested interest in the result.

  83. I love the appeal for “raw” data. Including readings of 15000C?
    Including the same measurement for 1 month straight?

    Now, if you really want to piss people off, don’t use USHCN; it’s 1200 stations. Instead use the other 140,000 stations in the US.

    But seriously, we should extend the worship of not accounting for changes in observing practice to UAH.

    Springer: ever look at UAH before corrections?

  84. Zeke,

    “Airports can be rural if the station is far enough from the buildings. There is good reason to think that airport stations are actually pretty well sited, as they get lots of wind and tend to be in the middle of the grassy area between runways. We did a comparison awhile back and found no real differences in trends between airport and non-airport stations: http://rankexploits.com/musing…..re-record/”

    Say what? A well sited station would be somewhere where humans have altered nothing. Somewhere wind patterns have been altered by human development is NOT representative of most of the earth’s surface.

    I can’t believe you actually wrote that.

  85. Moshpup should be careful, straw houses burn so easily!!

    “I love the appeal for “raw” data. Including readings of 15000C?
    Including the same measurement for 1 month straight?”

    The man is talking about junk adjustments, like TOBS, smeared indiscriminately across a DB and you want to talk about simple data entry issues that can be handled by simple screening??

  86. KuhnKat (Comment #100700)
    August 1st, 2012 at 9:14 pm

    Moshpup should be careful, straw houses burn so easily!!
    ——————————————————————–

    Glad someone else noticed the straw man arguments. When you see someone making those you know you hit a nerve. 🙂

  87. dhogaza,

    “And McIntyre admits there’s a huge flaw in the paper, which he didn’t catch over the weekend.”

    Go back and reread what McIntyre wrote. You will not find the word HUGE anywhere. That is your bias speaking.

  88. David Springer,

    That graph hasn’t been used since ~2008 when USHCN v2 came out. Anthony keeps linking it for some strange reason, but apart from the TOBs adjustment, the rest of the adjustments it shows are no longer used.

    Personally I like the Berkeley approach to adjustments. Detect breakpoints, cut series at the breakpoints to create separate stations, and combine the station fragments into a spatial field with correlation-based weighting for any particular period of time.
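    A toy version of the first two steps of that pipeline, detect breakpoints against neighbors and cut at them, might look like the following. The crude jump detector is a stand-in for BEST's actual statistical test, the numbers are invented, and the final spatial-field combination step is omitted entirely.

```python
def detect_breaks(station, neighbor_mean, threshold=1.0):
    """Flag indices where the station-minus-neighbors difference jumps
    by more than `threshold` (a crude stand-in for BEST's statistical
    breakpoint test against surrounding stations)."""
    diff = [s - n for s, n in zip(station, neighbor_mean)]
    return [i for i in range(1, len(diff))
            if abs(diff[i] - diff[i - 1]) > threshold]

def scalpel(values, breakpoints):
    """Cut one record into independent fragments at the breakpoints,
    so each fragment is treated as its own station afterwards."""
    cuts = [0] + sorted(breakpoints) + [len(values)]
    return [values[a:b] for a, b in zip(cuts, cuts[1:]) if b > a]

station = [10.0, 10.1, 12.1, 12.0, 12.2]    # a +2 C step after index 1
neighbors = [10.0, 10.1, 10.1, 10.0, 10.2]  # neighbors show no step
breaks = detect_breaks(station, neighbors)
frags = scalpel(station, breaks)
```

    The key design point is that no metadata is consulted: the cut is triggered purely by the divergence of the series from its neighbors, which is why the scalpel can catch undocumented moves, sensor swaps, and TOB changes alike.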

  89. Zeke (Comment #100648)
    August 1st, 2012 at 10:07 am

    “I’d be happy to examine exactly which stations switched classes due to Leroy 2010. Unfortunately, this data is not available as of yet.”

    Leroy 2010 is the new WMO-ISO standard superseding Leroy 1999. It’s a public standard. I would suggest that if you aren’t using the latest standard for site classification, you don’t have an up-to-date reanalysis that is copacetic with the new standard. If you don’t want to put in the work that Watts did in applying the new standard to the metadata, it would probably be best if you simply remained quiet about your out-of-date analysis.

  90. Zeke (Comment #100704)
    August 1st, 2012 at 9:23 pm

    “That graph hasn’t been used since ~2008 when USHCN v2 came out. Anthony keeps linking it for some strange reason, but apart from the TOBs adjustment, the rest of the adjustments it shows are no longer used.”

    Your site selection criteria are from 1999. New criteria came out in 2010. You hardly have room to talk about dated material.

    But two wrongs don’t make a right. Just sayin’ pot:kettle:black.

    Do you have any evidence that the 2008 graph is no longer accurate? And if it is why has NOAA not updated it?

  91. David Springer,

    If there were an objective way to apply Leroy 2010, sure. Iteratively looking at 1,000 station images in the hope that my interpretation matches those of Watts et al really isn’t ideal, as I have no way to be sure that my classifications will actually match his. It’s not like Leroy 2010 actually rates (or even mentions) USHCN stations…

    Also, on the subject of airports, there is a good reason why CRN12 stations in Fall et al were disproportionately located at airports. Compared to most stations, they tend to be well sited with good modern instruments.

  92. Zeke,

    “Compared to most stations, they tend to be well sited with good modern instruments.”

    Umm, stations with good instruments sited close to miles of concrete that is definitely NOT similar to the previous environment is now well sited?? I guess you wouldn’t have a chance of coming up with the same classifications as Watts.

  93. KuhnKat,

    Which part of “there is a good reason why CRN12 stations in Fall et al were disproportionately located at airports” did you not understand? That’s not my classification system. Oddly enough, most airport stations are a reasonable distance from the concrete, and airport stations show no higher trend than non-airport stations, even controlling for urbanity.

  94. Zeke (Comment #100708)
    August 1st, 2012 at 9:29 pm

    “If there were an objective way to apply Leroy 2010, sure.”

    Watts certainly has you at a disadvantage there. He and over 500 volunteers visited just about every USHCN station and photographed them from all angles, noting the problems. Not even NOAA has information that good about its own network.

    “Also, on the subject of airports, there is a good reason why CRN12 stations in Fall et al were disproportionately located at airports. Compared to most stations, they tend to be well sited with good modern instruments.”

    You really don’t understand why mown fields of grass and runways and hangars aren’t very representative of most of the world’s surface? Seriously? I hardly know what to say to that. Believe it or not most of the earth’s surface is not mowed grass and tarmac and large buildings. Seriously. It isn’t.

  95. dave Springer.

    There is no evidence whatsoever that Watts applied the new standard. Or if he did that he did so accurately.

  96. KuhnKat

    “I say screw the adjustments and use the raw data, warts and all.”

    strawman?

    more like springer

  97. Zeke

    Leroy 2010 supersedes Leroy 1999 precisely because the older criteria did not properly address heat sink issues. You keep referring to 1999 as if it didn’t have a problem, i.e., airports don’t show a lot of difference from other stations even accounting for urbanity. Unless you use Leroy 2010 you aren’t accounting for urbanity properly. That’s the whole point!

  98. Re Steven Mosher (Comment #100698)

    ‘I love the appeal for “raw” data. Including readings of 15000C?’
    ________________

    Unlike you, David Springer is willing to take the bad as well as the good. I suppose if you and David were picking apples out of a barrel, you would eat only the good ones, while David would eat some of the good and some of the rotten, and end up with a bad taste in his mouth.

  99. Mosher,

    Watts specifically stated he used Leroy 2010. As to whether he applied it properly I guess we’ll find that out. Given Watts or a volunteer contributing to the surface station project visited and photographed every one of those stations he’s in a better position to apply those standards than anyone else. Judging by your increasing acrimony I think you realize that BEST has very likely been bested by hundreds of citizen scientists doing this for no compensation other than wanting to be a part of getting to the truth. I bet McIntyre’s name on the paper is really disconcerting. I love it!

  100. David Springer,

    At least 10% of Leroy 2010 Class 1 and Class 2 stations are airports. I’d wager it’s closer to 20% to 30% once we see the ratings.

  101. Can someone do a check for me? I tossed together a quick program to try to see if I could find (at least some of) which stations went from Compliant to Non-Compliant in this Watts paper. It crashed after running through ~30 stations in the list I had, but by that point, it had already picked out 14 as missing. There should only be a total of 13 missing, so that number is way too high after checking less than half the list.

    I figured something was wrong with my program, so I took the list of missing stations and checked by eye. When I did, I couldn’t find any of them. I then tried to find the ones my program didn’t mark as missing, and I found all of them. Assuming I haven’t just screwed up badly, that means Anthony’s map of Class 1/2 Stations is missing stations it shouldn’t be missing. Either something went wrong when making the figure, far more previously compliant stations became non-compliant than he says, or there is something wrong with his underlying data set.

    So, could someone check to see if I just screwed up? All you have to do is see if you can find any of these 14 stations:

    15749 34.74 -87.6 164.6 AL MUSCLE SHOALS AP
    42941 34.7 -118.43 932.7 CA FAIRMONT
    44232 36.8 -118.2 1204 CA INDEPENDENCE
    45532 37.29 -120.51 46.6 CA MERCED
    46506 39.75 -122.2 77.4 CA ORLAND
    47195 39.94 -120.95 1042.4 CA QUINCY
    48702 40.42 -120.66 1275.3 CA SUSANVILLE 2SW
    80611 26.69 -80.67 6.1 FL BELLE GLADE
    120200 41.64 -84.99 307.8 IN ANGOLA
    121873 40 -86.8 256 IN CRAWFORDSVILLE 6 SE
    123513 39.64 -86.88 224 IN GREENCASTLE 1 W
    124181 40.86 -85.5 221 IN HUNTINGTON
    129253 38.65 -87.2 152.4 IN WASHINGTON 1 W
    176905 43.65 -70.3 13.7 ME PORTLAND JETPORT

    in the first panel of Figure 3. You can also check to see that station information is correct here.
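    A minimal sketch of the kind of cross-check Brandon describes: compare a list of expected stations against the set actually found, and report the gap. The station IDs and the “found” set below are illustrative stand-ins, not his actual data or program.

    ```python
    # Hypothetical cross-check: which expected Class 1/2 stations are
    # missing from a second listing? IDs here are illustrative only.
    expected = {
        "015749": "MUSCLE SHOALS AP",
        "042941": "FAIRMONT",
        "044232": "INDEPENDENCE",
    }

    # Stand-in for the IDs matched by eye on the figure's map.
    found_in_figure = {"042941"}

    # Any expected station not found is flagged as missing.
    missing = {sid: name for sid, name in expected.items()
               if sid not in found_in_figure}

    for sid, name in sorted(missing.items()):
        print(sid, name)
    ```

    If the count of missing stations exceeds the number the paper says dropped out, something is wrong with the figure, the classification, or the underlying data set.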

  102. Max_OK (Comment #100715)
    August 1st, 2012 at 9:52 pm

    Re Steven Mosher (Comment #100698)

    ‘I love the appeal for “raw” data. including readings of 15000C?’
    ________________

    Unlike you, David Springer is willing to take the bad as well as the good. I suppose if you and David were picking apples out of a barrel, you would eat only the good ones, while David would eat some of the good and some of the rotten, and end up with a bad taste in his mouth.
    ————————————————————–

    Nah, I’d make apple cider and apple pies and have a party while Steve was still trying to figure out how to find apples guaranteed to not have a worm hidden in them. Rotten apples can be identified with a quick glance just like transcription errors that show 1500C highs or -150C lows can be identified easily.
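    The “quick glance” screening Springer describes can be sketched as a simple plausibility filter. The thresholds below are illustrative assumptions, not NOAA’s actual QC limits:

    ```python
    # Flag readings outside any physically plausible surface range
    # before using the raw record. Bounds are illustrative: just below
    # the coldest and just above the hottest credible surface readings.
    PLAUSIBLE_MIN_C = -90.0
    PLAUSIBLE_MAX_C = 60.0

    def is_transcription_suspect(temp_c):
        """True for values like 1500 or -150 that can only be typos."""
        return not (PLAUSIBLE_MIN_C <= temp_c <= PLAUSIBLE_MAX_C)

    readings = [23.4, 1500.0, -150.0, 31.2]
    clean = [t for t in readings if not is_transcription_suspect(t)]
    ```

    This screens out only the “rotten apples” that are obvious at a glance; it is not a substitute for homogenization adjustments, which is the point of contention in the thread.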

  103. Oh, and for reference, these were the stations I did find:

    23160 35.27 -111.74 2239.4 AZ FT VALLEY
    28619 31.71 -110.06 1405.1 AZ TOMBSTONE
    32444 36.1 -94.17 387.1 AR FAYETTEVILLE EXP STN
    40693 37.87 -122.26 94.5 CA BERKELEY
    41048 32.95 -115.56 -30.5 CA BRAWLEY 2 SW
    42728 38.33 -120.67 217.9 CA ELECTRA P H
    51564 38.82 -102.35 1295.4 CO CHEYENNE WELLS
    80211 29.73 -85.02 6.1 FL APALACHICOLA AP
    83186 26.59 -81.86 4.6 FL FT MYERS PAGE FLD
    86997 30.48 -87.19 34.1 FL PENSACOLA RGNL AP
    88758 30.39 -84.35 16.8 FL TALLAHASSEE WSO AP
    97847 32.13 -81.21 14 GA SAVANNAH INTL AP
    107386 48.35 -116.84 725.4 ID PRIEST RIVER EXP STN
    123418 41.56 -85.88 266.7 IN GOSHEN 3SW

  104. Zeke (Comment #100717)
    August 1st, 2012 at 10:01 pm

    “At least 10% of Leroy 2010 Class 1 and Class 2 stations are airports. I’d wager its closer to 20% to 30% once we see the ratings.”

    Does it follow from that that airports are good representatives of the earth’s surface?

    I don’t doubt that airports are still a good fraction of the stations, as there are far more little airports with a single runway out in the boonies at the end of a two-lane road than there are airports that can land big commercial aircraft. Presumably the big ones got the boot and small ones that were excluded are now included. See, in Leroy 1999, if there was a cobblestone sidewalk and outhouse too close to the temp station it wouldn’t make the top tier, but an international airport with a mown field in the middle of it would make the grade. Leroy 2010 would allow the cobblestone sidewalk and outhouse to make the top tier and throw out the international airport. Leroy 2010 uses the actual magnitude of the heat pollution regardless of distance rather than mere distance alone.

  105. David Springer said in (Comment #100719)

    ” Rotten apples can be identified with a quick glance just like transcription errors that show 1500C highs or -150C lows can be identified easily.”
    _______

    I thought you wanted to use all data, including crazy things like 1500C highs and -150C lows.

    Earlier this evening In Comment #100694 you said:

    “I say screw the adjustments and use the raw data, warts and all.”

    But if you changed your mind, I’m glad.

  106. Zeke (Comment #100704)
    August 1st, 2012 at 9:23 pm
    David Springer,

    That graph hasn’t been used since ~2008, when USHCN v2 came out. Anthony keeps linking it for some strange reason, but apart from the TOBs adjustment, the other adjustments it shows are no longer used.
    —————————-

    Well, how can we use an updated version of the adjustments chart when no one knows what the adjustments are?

    Is it 0.5C or 0.84C?

    Please publish the (annual or monthly) data you have downloaded from the NOAA which shows the Raw, TOBs adjusted and Final data.

    Then we will all know what the numbers are.

  107. Max Ok,

    Springer has no principles. he was for the raw data before he was against it. forget that Steve Mc says that TOBS is required. Forget that we had a whole CA thread on it years ago. forget that there is actual empirical data showing the necessity of it. forget that Spencer himself in his temperature series had to correct his data for differences in TOBS. forget that Japan has to do it, and Canada and Norway, they are part of the conspiracy. Forget that skeptic jerryB, from John Daly’s site and CA, had his own analysis with independent data showing this. Forget that a commenter on CA (mt) has shown the problem with code posted in the comments.

    Here is what they know. Anthony knows this because we have talked about it. TOBS is required. And when you apply TOBS these types of effects shrink dramatically. I have no doubt that there may be a microsite effect. but after you apply TOBS it will be cut down dramatically.

    Oh.. class 5 stations.. have a trend that is like class 1 and 2.
    go figure that..

  108. @KuhnKat (Comment #100703)
    August 1st, 2012 at 9:21 pm

    dhogaza,

    “And McIntyre admits there’s a huge flaw in the paper, which he didn’t catch over the weekend.”

    Go back and reread what McIntyre wrote. You will not find the word HUGE anywhere. That is your bias speaking.

    Absolutely right. If it was someone named Mann or Schmidt, he would have been weeping tears of blood. Not so with Tony; you have to read between the lines.

  109. Here is a quick map I threw together with the stations I listed above marked. All of them were listed as ranked 1 or 2 on the Surface Stations website, yet as you can see, quite a few are nowhere near anything marked on Anthony’s map for rank 1/2 stations. Supposedly only 13 of those stations dropped out under the new classification, yet I found that many after checking just 30 (less than half).

    Either I’m way off on my locations somehow, or there is something seriously wrong with Anthony’s figure. And since he hasn’t released a station list, there’s no way to know if it is just his figure that’s wrong or if the problem extends further.

    Either somebody needs to tell me I’m wrong, or Anthony needs to release a station listing.

  110. Here’s the same thing as above, but with the location of all 71 stations marked.

    Does anyone have an explanation?

  111. Steven Mosher, but is TOBs bi-directional? There needs to be some adjustment for TOBs, but the issue is different for different instrumentation. An LIG max/min thermometer just records the max/min, not the time of the max/min. A digital sensor records the time of the max/min. If you correct LIG to MMTS, and then MMTS to LIG, you should get the same answer if the correction is right. If you “pick” MMTS, that biases the adjustment because you “believe” it is more accurate. Since MMTS required separate adjustments and appears to drift with aging, it is likely to be the wrong reference.

  112. dallas,

    From what I’ve read, the BEST method doesn’t adjust for TOBS or instrument changes. It looks for break points in the time series and splits the station record at whatever points are found. The two “stations” that emerge from this splitting are then treated as separate entities so no adjustments need to be made. Therefore they aren’t “picking” anything as being more accurate: the method is blind to the instrumentation used for measurement.
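    The splitting idea described above can be sketched as follows. This is an illustration of the concept only, not the actual Berkeley Earth code, and the breakpoint is assumed to be already detected:

    ```python
    # The "scalpel" concept: rather than adjusting a record at a
    # suspected discontinuity, cut it there and treat each piece as
    # an independent station record. Values are made up.
    def split_at_breakpoints(series, breakpoints):
        """Split a time series at the given indices; each non-empty
        segment then enters the analysis as a separate 'station'."""
        segments, start = [], 0
        for bp in sorted(breakpoints):
            segments.append(series[start:bp])
            start = bp
        segments.append(series[start:])
        return [s for s in segments if s]

    record = [10.1, 10.3, 10.2, 11.9, 12.1, 12.0]  # jump after index 3
    pieces = split_at_breakpoints(record, [3])
    ```

    Because each segment is treated independently, no additive correction is ever applied, which is why the method is blind to whether the break came from a TOBs change, an instrument swap, or a station move.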

  113. Hello !

    Sorry for my very bad English, I am French.

    Zeke says: “The difference between the old and new Leroy papers is the inclusion of total surface area of heat sinks, rather than simple distance to heat sinks. This actually results in a less strict criteria than that of Leroy 1999”

    I am an observer for Météo France (MF) and I have classified some weather stations.

    In France, the total surface area of heat sinks is also in Leroy 1999. I have used these surfaces since 2000. Perhaps the problem is that your national weather service does not communicate all the details of the Leroy 1999 paper, as on your USCRN (the best network in the world!) website.

    You can see the original Leroy 1999 (sorry, only in French) in this MF PDF:
    http://meteo.besse83.free.fr/imfix/35-1999.pdf
    But in this PDF there is an error in a picture for “classe 1” (there is no error in the original printed book); here you can see the correct picture:
    http://meteo.besse83.free.fr/Divers/Classite_fichiers/image004.jpg with a scan of the 1999 printed book. I used the surfaces here in my old web page for the 1999 temperature classification: http://meteo.besse83.free.fr/Divers/Classite.htm

    New features in the Leroy 2010 classification for temperature, and the differences from Leroy 1999:
    In 2010: “The primary objective of this classification is to document the presence of obstacles close to the measurement site. Therefore, natural relief of the landscape may not be taken into account, if far away (i.e. >1 km). A method to judge if the relief is representative of the surrounding area is the following: does a move of the station by 500 m change the class obtained? If the answer is no, the relief is a natural characteristic of the area and is not taken into account.”

    In 1999, if the relief was a natural characteristic of the area and was far away (> 100 m), it was acceptable.

    Class 1 2010 : Away from all projected shade when the Sun is higher than 5°.
    Class 1 1999 : 3°, but if that is the only detail that is not class 1 (class 2, class 3, … for the respective solar heights), the station stays in class 1 (class 2, class 3, …) provided the shade stays near the hour of sunset and sunrise in one to four seasons.

    Class 2 2010 : Away from all projected shade when the Sun is higher than 7 °
    Class 2 1999 : 5°

    Class 3 2010 : Away from all projected shade when the Sun is higher than 7 °.
    Class 3 1999 : 5°

    Class 4 2010 : Away from all projected shade when the Sun is higher than 20 °.
    Class 4 1999 : > 5°

    For example :
    I have a natural site classified by MF. With Leroy 1999 the site was class 2 (with shade for the sun at 14° near sunset/sunrise, from the forest at 50 m), and in 2010 it is class 4 with the same shade.

    For some weeks I installed a weather station 80 m from my station (altitude: −8 m), in the garden of my neighbor, at a site that is class 4 in Leroy 1999 and class 5 (shade with sun > 20°) in Leroy 2010.

    A bad temperature class at a natural site, with the big and long shadows, is often a source of coolness. The class 5 site of my neighbor is very cold (°C), with shadows even at 12 h solar:

    http://meteo.besse83.free.fr/imfix/ecart267m_275m.png
    http://meteo.besse83.free.fr/imfix/t275m_267m.png
    Conditions at my weather station for the maximum deviation from the neighbor’s station on these days:
    http://meteo.besse83.free.fr/imfix/cond10012012.png

    To analyze what the shade class means for the average temperature, it is important to see whether the shadows affect the hours of T max and, if not, how big the shaded surface is.

    This classification was created to estimate the biases on the instantaneous measurements in the RADOME AWS network in France (one measurement every minute or every 6 minutes); it does not deal specifically with the possible differences in T min and T max, where the biases are often smaller and not always in the same direction as the differences in the instantaneous measurements.
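    ChristianP’s solar-elevation thresholds for the Leroy 2010 shade classes can be sketched as a simple lookup. The numbers follow his comment, so treat them as his reading of the standard rather than an authoritative restatement:

    ```python
    # max_shade_elev_deg is the highest solar elevation at which the
    # sensor is still shaded by nearby obstacles. Thresholds follow
    # the commenter's summary (classes 2 and 3 share 7 degrees there).
    def leroy2010_shade_class(max_shade_elev_deg):
        if max_shade_elev_deg <= 5:
            return 1
        if max_shade_elev_deg <= 7:
            return 2
        if max_shade_elev_deg <= 20:
            return 4
        return 5

    # His example: shade up to a 14 degree sun was class 2 under
    # Leroy 1999 but falls to class 4 under these 2010 thresholds.
    ```

    This illustrates why the same physical site can change class between the 1999 and 2010 schemes with no change in its surroundings.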

  114. Does BEST preferentially split records where the break-points exhibit decreasing temperatures versus records where the break- points are Increasing?

    Haven’t heard anyone talk about how that was tested.

  115. Mosher,

    Earlier you stated that Spencer had to correct for TOBs in his series. That surprises me because I thought Spencer (the latest) uses 6-hourly sampling from hourly reporting stations. Are you referring to something different?

  116. Bill Illis,

    I can’t imagine it does. You can read the methods paper (the version available online) and see it talking about changes in variance etc. But they haven’t said that they tested it against known moves; in the paper they talk about not having the station-move metadata from USHCN (?). It would be a nice test.

    ChristianP, the one big exception to your comments that I can find is mountain- and coastal-generated weather, for which even a 10-km separation from the coast or a mountain range can cause significant local weather effects. These are of course well studied in the boundary-layer meteorology community.

    The US West is arid basin and range, with substantial prominences rising above flat plains with little vegetation (so very little surface friction, and large fetch). A very special and interesting example of this is Long Island, NY, which also mixes in urbanization, which for that island really started in earnest circa 1970.

    You can get diurnal effects from these that can cause significant excursions from “typical” regional weather (100-km mean radius). It’s also not clear that anomalizing the data is a fix for this in a warming climate, since the boundary layer effects are driven by temperature differences, which will likely change in a warming climate.

  118. Max_OK (Comment #100724)
    August 1st, 2012 at 10:37 pm

    “I thought you wanted to use all data, including crazy things like 1500C highs and -150C lows.”

    No. Transcription errors are not part of the USHCN raw data.

  119. Doghozer is a simple-minded and vicious little wind-up warmist troll. He seems particularly agitated by Anthony’s recent efforts. The warmistas must be worried.

    Brandon Shollenberger

    The website you listed is classified as a threat by Norton Anti-Virus.

    symres:C:\Program Files (x86)\Norton Internet Security\MUI\19.7.1.591\coUICtlr.loc/PAGEBADREDIRECT.HTML

  121. Steven Mosher (Comment #100728)
    August 2nd, 2012 at 12:26 am

    Max Ok,

    Springer has no principles. he was for the raw data before he was against it.
    ——————————————————————

    Presumably Mosher knows that USHCN raw data does not include transcription and typographical errors. In saying I have no principles Steverino is obviously projecting a quality in himself onto others.

  122. Steven Mosher (Comment #100728)
    August 2nd, 2012 at 12:26 am

    Does BEST use the TOBS correction?

    I’m reading here from others that they do not. Yet you insist that all prior analyses do so. Which is it?

  123. Does BEST use the TOBS correction?

    I’m reading here from others that they do not. Yet you insist that all prior analyses do so. Which is it?

    ##########

    Huh. Do some reading. We don’t use TOBS data.
    And I don’t insist that ALL prior analyses do. I’ve seen many bad studies, usually at WUWT, that try the same old trick.

    For your benefit there are two choices.

    A. SPLIT the station record when the TOB changes
    B. correct for the change.

    Berkeley does A; most everybody else (unless they are trying to hide something) does B.
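    The two options Mosher lists can be contrasted in a toy example. The observation-time change point and the 0.3 C offset below are made-up values for illustration only, not an actual TOBS estimate:

    ```python
    # A record whose observation time changed at index 3, introducing
    # a step. Option A splits there; option B corrects across it.
    record = [15.0, 15.1, 15.2, 14.6, 14.7, 14.8]
    change_idx, tobs_offset = 3, 0.3  # illustrative values

    # Option A: two independent segments, no adjustment needed.
    segment_a, segment_b = record[:change_idx], record[change_idx:]

    # Option B: one continuous record, later portion corrected.
    corrected = record[:change_idx] + [t + tobs_offset
                                       for t in record[change_idx:]]
    ```

    Both routes aim at the same thing: keeping the artificial step from leaking into the computed trend.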

  124. Bugs, I have a feeling you “read between the lines” more than you read the lines.

    ChristianP, thanks for the links and info. Measuring every 6 minutes looks like it supports the idea that increased granularity does allow higher temperatures to be recorded, in principle. I would have expected that with modern electronics the daily maximum should be recordable within nanoseconds. I think that new internet/wifi Nest thermostat can even do that for your house.

  125. David Springer said in Comment #100744

    “Transcription errors are not part of the USHCN raw data.”
    ________

    So when you say “raw data” you don’t, strictly speaking, mean all raw data, but mean raw data adjusted to eliminate obvious transcription errors.

    If the objective is to identify raw data that are misleading, I see no reason to stop after finding some transcription errors.

  126. David Springer:

    Brandon Schollenberger

    The website you listed is classified as a threat by Norton Anti-Virus.

    symres:C:\Program Files (x86)\Norton Internet Security\MUI\19.7.1.591\coUICtlr.loc/PAGEBADREDIRECT.HTML

    That’s weird. I posted a link to a Surface Stations page and two images I put on Photobucket. None of those should have tripped an AV. If I had to guess, I’d say Norton is overreacting as it is wont to do.

    If it gives you the option, you could try going to the page anyway. The links are easy to verify, so it’s not like there should be any risk.

  127. Oh, as an update, I may have found an answer to my own question. The reason more stations are missing than ought to be may be that some airports weren’t included in Figure 3. There’s nothing to indicate that in the document, but it might explain things.

    Things like that are why being more open would be good. People shouldn’t have to manually find locations on a map in order to try to figure out what is being done.

  128. Steven Mosher (Comment #100728)
    August 2nd, 2012 at 12:26 am

    “forget that Japan has to do it, and canada and norway they are part of the conspricay.”

    Steven, I have been attempting to track down documentation of which countries’ temperature data sets are adjusted for TOBS, and to date I have only found that to be the case for the US. I asked the question of GHCN, and it was the only question I asked of that organization that was not answered or replied to. Can you give a link to, or a reference for, your reason for indicating that Japan, Canada and Norway use TOBS?

    Here it would also be important to know whether the TOBS-adjusted data is the data that NCDC collects as raw data from each country, since I know that GHCN does not make the adjustment for any country’s data except the US.

  129. Brandon Shollenberger (Comment #100755)
    August 2nd, 2012 at 12:32 pm

    “That’s weird. I posted a link to a Surface Stations page and two images I put on Photobucket.”

    I don’t pay for NAV so I can ignore the warnings. There are exceedingly few malicious website warnings. How about you figure out what the problem is and fix it?

  130. Max_OK (Comment #100754)
    August 2nd, 2012 at 12:29 pm

    “So when you say “raw data” you don’t, strictly speaking, mean all raw data, but mean raw data adjusted to eliminate obvious transcription errors.”

    I mean raw data as GHCN means raw data. Obviously you didn’t know or you wouldn’t be on my ass about it. Raw or unadjusted data in NOAA context means transcribed from paper to electronic storage and scrubbed of typographical mistakes. Write that down.

  131. Tip of the hat to Zeke.

    Tip of the hat to Steve Mosher.

    WUWT post, Update on Watts et al. 2012:

    …An issue has been identified in the processing of the data used in Watts et al. 2012 that was placed online for review. We thank critics, including Zeke Hausfather and Steve Mosher for bringing that to [our] attention. Particular thanks go to Zeke who has been helpful with emailed suggestions. Thanks also go to Dr. Leif Svalgaard, who has emailed helpful suggestions.

    The authors are performing detailed reanalysis of the data for the Watts et al. 2012 paper and will submit a revised paper to a journal as soon as possible, and barring any new issues discovered, that will likely happen before the end of September…

    Admirable conduct. Science progresses.

  132. Steven Mosher (Comment #100749)
    August 2nd, 2012 at 10:08 am

    Okay then. So “most everybody” uses TOBS correction unless they are trying to hide something. BEST does not use TOBS correction but not because they are trying to hide something. Watts doesn’t use TOBS correction because he is trying to hide something.

    That’s your story and you’re sticking with it?

  133. David Springer,

    Berkeley Earth corrects for TOBs, though not explicitly. Like any other discontinuity, it feeds into both the scalpeling and the correlation-weighting during the kriging process. It turns out that this actually matches the NCDC adjustments pretty well, as we noticed, despite there being no separate TOBs correction. It’s also worth noting that Menne’s PHA also seems to do a reasonable job of fixing TOBs when applied to the raw data (as opposed to being used after TOBs adjustments, as is currently the case).

  134. Steven Mosher (Comment #100749)
    August 2nd, 2012 at 10:08 am

    Huh. Do some reading. We dont use TOBS data.

    ————————————————————–

    Who is “we”?

  135. Watts doesn’t even focus on what is happening in the Northern Arctic. He is always spouting off about skewed temperature data around “Urban Sprawl” (concrete and blacktop) skewing the temperature. He needs to ask himself why the Northern Arctic is getting so hot. I don’t see any blacktop or concrete there at all, but yet it is still warming profusely.

    Mr. Watts needs to look at the current dot charts and ask why are the red dots more massive and more numerous than the blue dots?? I don’t see concrete or blacktop “Urban Sprawl” there.

    Example for Mr. Watts:

    http://www.ncdc.noaa.gov/sotc/service/global/map-blended-mntp/201206.gif

  136. David Springer,

    Alas, as is most often the case with scientific endeavors, both Mosh and myself are unpaid volunteers.

  137. “Its also worth noting that Menne’s PHA also seems to do a reasonably job of fixing TOBs when applied to the raw data (as opposed to being used after TOBs adjustments, as is currently the case).”

    I had forgotten that Menne indeed shows that their algorithm can adjust the data reasonably well (or arrive at reasonably the same result) without first making the TOBS adjustment, which would imply that with the use of that algorithm, other countries’ failure to use TOBS becomes less important.

  138. cyclonebuster (Comment #100767)
    August 2nd, 2012 at 2:49 pm

    “Watts doesn’t even focus on what is happening in the Northern Arctic. He is always spouting off about skewed temperature data around “Urban Sprawl” (concrete and blacktop)skewing the temperature. He needs to ask himself why is the Northern Arctic getting so hot? I don’t see any blacktop or concrete there at all but yet it is still warming profusely.”

    Black carbon, not black top.

    http://pubs.giss.nasa.gov/abs/ko06100c.html

    Koch, D., and J. Hansen, 2005: Distant origins of Arctic black carbon: A Goddard Institute for Space Studies ModelE experiment. J. Geophys. Res., 110, D04204, doi:10.1029/2004JD005296.

    Back before IPCC AR1 Hansen listed black carbon as producing half as much forcing as CO2. He was spanked by the bandwagon and retreated from that position since then. Personally I believe he was correct in 1988 about black carbon and bowed to political pressure. I can refer to some of my articles on black carbon and Hansen written back in 2006 if you like.

  139. Zeke (Comment #100769)
    August 2nd, 2012 at 2:53 pm

    “Alas, as is most often the case with scientific endeavors, both Mosh and myself are unpaid volunteers.”

    Most often the case? Curry’s salary is $228,000 per year according to public records in Georgia. Wise up.

  140. Kenneth, here is a recent paper claiming to document the bias in Canadian observational data. It is not apparent from the abstract that the situation documented in Canada would confound a siting analysis in the manner suggested by Zeke.

  141. cyclonebuster (Comment #100767)
    And where do you think those measurements were made? By some Inuit in an igloo? WUWT has had numerous entries dealing with the effects of growth in the towns where those measurements were made. Try doing some research before you mouth off.

  142. Yes Carrick, it is true, but in my message I do not speak of those big differences within some km of the sea and/or the mountains (where do you think I said that? Thank you). I am only talking about differences from micro-site effects in the classification: here, a class 4 (L1999) natural site, where the trees of the forest cut the sea breeze at 2 m above the soil over only 20 m2 (very micro-site), gives average T max in the summer months about +1.5°/+2 °C hotter than the other station, class 2 (L1999), 40 m away.

    I know the strong differences within a few m or km of the sea and the mountains; I am “homotopoclimatus”, always with a weather station “in the pocket” 🙂 :
    http://meteo.besse83.free.fr/imfix/etalonhomotopoclimatus2.jpg
    http://meteo.besse83.free.fr/imfix/etalonhomotopoclimatus.jpg
    http://meteo.besse83.free.fr/critique_vantage_fichiers/station.jpg
    http://meteo.besse83.free.fr/vtgmobil.jpg
    http://meteo.besse83.free.fr/imfix/toit1besse.jpg

    MRE, at nanosecond resolution there is too much noise. That temperature is not good for weather: it is the micro-scale inside the radiation shield, not the true temperature of the free air (with a sensor with a short time constant, the temperature may vary by several °C in one minute without any real weather change, from convection with a small wind and strong radiation in summer over hot soil).
    6 min is the archive interval in the database, with T max/1 min, T min/1 min, instantaneous T every 6 min, and average T/6 min (instantaneous T, T max and T min, for WMO and for MF = the average over 1 min of all the samples, for example 10-s samples within one minute, as is common; see the CIMO guide).

    In the US, with USCRN and its ventilated radiation shield, the instantaneous T, T max and T min are 5-min averages of all the samples, because the time constant is much shorter with several m/s of airflow inside the ventilated shield. (In France, in our natural radiation shields, the wind inside the shield is about 1/10 of the wind speed at 2 m. Short of a big storm, it is difficult to always get 6 m/s inside the natural shelter (that would need 60 m/s over 10 min at 2 m!), as in the Young 43502 ventilated shield.)
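    The sampling-and-averaging scheme ChristianP describes (1-minute averages of sub-minute samples, per his summary of the WMO CIMO guide) can be sketched as follows; the values are illustrative only:

    ```python
    # Each "instantaneous" temperature reported per minute is the
    # average of the sub-minute samples (e.g. six 10-second samples),
    # and T max / T min are taken over those 1-minute averages.
    def one_minute_average(samples_10s):
        """Average of the 10-second samples within one minute."""
        return sum(samples_10s) / len(samples_10s)

    minutes = [
        [20.1, 20.2, 20.4, 20.3, 20.2, 20.0],
        [20.5, 20.7, 20.6, 20.8, 20.9, 20.7],
    ]
    minute_means = [one_minute_average(m) for m in minutes]
    t_max = max(minute_means)
    t_min = min(minute_means)
    ```

    Averaging first is what keeps a single noisy sub-minute spike from becoming the recorded daily maximum, which bears on MRE’s point about granularity above.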

  143. Kenneth Fritsch,

    Assuming, of course, that networks outside the U.S. are sufficiently dense to detect discontinuities associated with TOBs. Something I’m less than completely convinced of.

    By the way, I’ll answer your more detailed questions over on the other thread after work; only have time for quick replies at the moment.

  144. David Springer,

    Curry gets paid that salary as a professor. If she volunteered to work on an IPCC report, she wouldn’t be paid a cent apart from any travel expenses.

  145. ChristianP,

    Thank you for the helpful clarifications on the ratings. I’ve primarily been going by the descriptions of Leroy 1999 and 2010 from the Watts et al draft manuscript. I should have time to read Leroy 2010 tonight (1999 seems only available in French).

  146. Zeke (Comment #100776)
    August 2nd, 2012 at 3:14 pm

    Assuming, of course, that networks outside the U.S. are sufficiently dense to detect discontinuities associated with TOBs. Something I’m less than completely convinced of.

    ———————————————————————

    What do you mean by “detect” discontinuities? Surely you’re not telling me that algorithmic guesses are used to determine when a station changes the time of day it makes recordings? The time of each observation is supposed to be recorded by the station keeper. There should be NO “detection” involved whatsoever.

    The more I learn about this the more hokey it becomes.

  147. Zeke (Comment #100778)
    August 2nd, 2012 at 3:17 pm

    “Curry gets paid that salary as a professor. If she volunteered to work on an IPCC report, she wouldn’t be paid a cent apart from any travel expenses.”

    You don’t have a very good understanding of what salary means then, Zeke. In the area of climate science University of Georgia owns her ass. Whatever she does in relation to that field is on the clock for Georgia. She’s not an hourly employee. I repeat, wise up.

  148. ChristianP – I think that Carrick was only pointing out differences between North America and Eurasia in terms of temperature variability. He will no doubt clarify.

  149. On second thought, Zeke, I might be mistaken about Curry’s fiduciary responsibility to the University of Georgia. My experience is entirely in the private sector and I probably don’t know how deep the rot and corruption goes in public universities. It may be true that a university professor can work on private interests in the field the university pays them for and use their office space, computer resources, laboratories, and materials owned by the university for private gain. That would be grounds for termination in the private sector and probably get you sued as well. The public sector may overlook it. So is that what you’re telling me, that people like Curry can use public resources for private gain?

  150. BarryW (Comment #100774)
    August 2nd, 2012 at 3:10 pm

    cyclonebuster (Comment #100767)
    And where do you think those measurements were made? By some Inuit in an igloo? WUWT has had numerous entries dealing with the effects of growth in the towns where those measurements were made. Try doing some research before you mouth off.

    Your analogy is laughable. Show me the “Urban Sprawl” of the Inuits in their igloos. The dot chart would show more heating if concrete and blacktop surrounded those igloos, but yet it is warming without it, isn’t it?

    http://www.ncdc.noaa.gov/sotc/service/global/map-blended-mntp/201206.gif

  151. Re: David Springer (Aug 2 15:25),

    In the area of climate science University of Georgia owns her ass.

    Try to avoid the ad hominem.
    sarc
    Oh, wait, you must be one of those Big Oil funded deniers I’m always hearing about.
    /sarc

    So far, your comments have been a waste of bandwidth. You should wise up and try saying something substantive instead of whinging.

  152. David Springer (Comment #100771)
    August 2nd, 2012 at 2:56 pm

    cyclonebuster (Comment #100767)
    August 2nd, 2012 at 2:49 pm

    “Watts doesn’t even focus on what is happening in the Northern Arctic. He is always spouting off about skewed temperature data around “Urban Sprawl” (concrete and blacktop)skewing the temperature. He needs to ask himself why is the Northern Arctic getting so hot? I don’t see any blacktop or concrete there at all but yet it is still warming profusely.”

    Black carbon, not black top.

    http://pubs.giss.nasa.gov/abs/ko06100c.html

    Koch, D., and J. Hansen, 2005: Distant origins of Arctic black carbon: A Goddard Institute for Space Studies ModelE experiment. J. Geophys. Res., 110, D04204, doi:10.1029/2004JD005296.

    Back before IPCC AR1, Hansen listed black carbon as producing half as much forcing as CO2. He was spanked by the bandwagon and has retreated from that position since then. Personally, I believe he was correct in 1988 about black carbon and bowed to political pressure. I can refer you to some of my articles on black carbon and Hansen written back in 2006 if you like.

    I am fully aware of the difference between black carbon and blacktop…. However, both will melt more ice…

  153. Too bad the killfile script doesn’t work here. I’ve found another name I’d like to add to the list.

  154. My sensors detect a clear discontinuity in David Springer’s blood pressure. I believe this type of variable can be efficiently homogenized by the eat-more-chocolate algorithm (of which I happen to be a seasoned practitioner).

  155. I was right the first time, Zeke.

    http://www.physast.uga.edu/policies/PromotionPolicies/ProceduresPromotiontoProfessor

    Activities such as publishing in peer-reviewed journals, getting invited for speaking engagements, and so forth are all things that go into performance evaluations used for promotion and salary.

    Curry and Muller are personally profiting from BEST-related activities, as those activities are considered in annual performance reviews.

    But I already knew what they were getting out of it. I was wondering what you and Mosher are getting out of it? Bullet items for your resumes?

  156. Oh lovely. Now the peanut gallery thinks they know what my blood pressure is doing. It’s probably down. To whoever it was that wanted a killfile for blog entries: you and me both, buddy. The trick is learning to skip over the names you don’t want to read. I know, it’s almost impossible not to read me. It’s a common problem. Be advised it won’t hurt my feelings if you don’t. In fact I’d prefer it. I’d rather just make my prognostications in the total absence of annoying feedback from my inferiors. 😉

  157. dhogaza,

    In the e-mail from Joelle Gergis September 27 2011: “..contributors of non-publicly available data used in this study are automatically considered coauthors unless you’d prefer acknowledgement rather than the responsibility of coauthorship”.

  158. Roald Amundsen navigated the Northwest Passage in 1903-1906. The only thing that’s changed today to make the passage any easier is satellite imagery to pinpoint navigable channels through the ice.

  159. David Springer (Comment #100793)
    August 2nd, 2012 at 4:10 pm

    “Roald Amundsen navigated the Northwest Passage in 1903-1906. The only thing that’s changed today to make the passage any easier is satellite imagery to pinpoint navigable channels through the ice.”

    Did he also traverse the N.E. Passage in 1903-1906?

  160. ‘But I already knew what they were getting out of it. I was wondering what you and Mosher are getting out of it? Bullet items for your resumes?”

    My resume is 3 pages long, and that’s the short version. Berkeley Earth is not on my resume. It is on my LinkedIn.

    What I get out of it.

    Well the only way I can explain that is by comparison.

    A non-Berkeley paper is headed toward publication. I helped a little with the data. It was fun. I like data.

    I write packages for R. That’s an open source community. We are dedicated to open science. We show our commitment by volunteering our time. I blog on R. I answer questions on the help list. It’s fun. I enjoy it, and I think if you believe in open science you show your belief through action.

    I’m working with a couple of grad students on their research, helping with R and learning about their particular fields of interest. I like learning. If I wasn’t learning something new every day, I would be unhappy. I also like helping people. Well, helping people who want to know the answer. Folks who just ask questions to disrupt class never did too well at grade time.

    I’m working with a couple of start-ups. What a different group of people. I like meeting and working with people I have never worked with before. It’s challenging.

    So, a few months back when Berkeley did their first release, I wrote code to read their data and I gave it to them. I like data and I like writing code. Later, they asked if I would like to join the weekly meeting. Zeke was already part of the team. The code is Matlab, which I always wanted to learn. So I learn by diving into the deep end of things. That was a plus. I get that out of it.

    The people are really different. I like going to the lab. It’s like nerd heaven. Long ago when I worked with lab guys I also liked it. It’s fun for me because they teach me stuff I don’t know, if I ask questions and remain teachable. I also like getting out of SF.

    I always had an animus about Berkeley. I’m a conservative. But I think I need to get over that silly attitude, so I like going there and looking for the good. It’s easy to find the bad and be narrow-minded. I think my attitude toward Berkeley is silly, so I go there to change my mind. I did the same thing a while back by going to a very progressive church here in SF. I’m not progressive and not religious, but I figured I could challenge my own bigoted view of things by spending time experiencing things I didn’t agree with. Very fruitful. Learning tolerance.

    Most of the reasons come down to my personal enjoyment and personal growth. My resume doesn’t need padding; it’s already all over the map, from teaching to engineering to statistics to marketing to product development. Plus, you know there are far more significant things I’ve done than diddle Berkeley data.

    You see the problem, Mr. Springer, with asking a question when you don’t know the answer.

  161. Can you give a link to, or a reference for, your reason for indicating that Japan, Canada and Norway use TOBS?
    Here it would also be important to know whether the TOBS-adjusted data is the data that NCDC collects as raw data from the country, since I know that GHCN does not make the adjustment for any country’s data except the US.

    ###################

    I believe I have posted these several times, typically when Tilo asks for them. I did it once here and another time over at WUWT, when he came there to demand the same links and then slink away.

    Let’s try this.

    Japan: time of observation bias

    http://sciencelinks.jp/j-east/article/200005/000020000500A0108818.php

    For the Canadian records, go to CRUTEM4, look at the sources, click on the link for Canada, and read the paper.

    For Australia, go look at WUWT, find the post about the reports on Australian weather stations, and read that paper; when you get toward the end you will see the paragraph about the changes made there.

    Hmm, I think that source also has the note about Norway.

    For the most part, generally speaking, the US is unique in this regard.

    Also, the long series, that go back to the 1700s are all tobs adjusted.

    However, if people don’t like TOBS in the US they can just use thousands of stations that take temperatures 4 times a day.

    easy peasy.

    Or if you are a real masochist you could use hourly data, forget min/max, and look at the trend in the Tmean figure.

    There are any number of ways to prove how good or bad the TOBS data is. But it requires some work.

    That is something that I might do for you, Kenneth, because you ask honest questions and follow where the data leads. But I’m presently behind on two other projects.

  162. Kenneth

    “Here it would also be important to know whether the TOBS adjusted data is the data that NCDC collects as raw data from the country since I know that GHCN does not make the adjustment for any country’s data except the US.”

    Yes, that’s true. A while back I was spending some time comparing the Canadian homogenized data to Berkeley raw. Instructive.

    So, the cases where it may matter are when you are using CRU data, as CRU takes data from national weather services.

    A good comparison would be

    Canada according to various sources.

    So: Canada using GHCN
    Canada using CRU4
    Canada using Env Canada
    Canada using GHCN Daily
    Canada using Berkeley

    Again, these types of comparisons are all easily done. But here is what you will rarely find: you will rarely find somebody Springing up to actually get their hands dirty with data or with reading papers. They Spring up and want to know who is paying you.
    They Spring up and spout all manner of nonsense, and then they slink away when they are called to account. Does anyone Spring to mind?

  163. Hm… cutting the data off at 2000.

    😕

    Is that Dana’s “artwork” again? /sigh

  164. Mosher 100795,
    Thanks for laying out what you do and why you do it. I too am a “conservative”, or at least a libertarian, which these days ends up meaning “a conservative” to most people. One of the great challenges faced by people of any political POV who really want to understand things is separating the political issues from the rest. We are constantly assaulted by those who demand political loyalty and adherence to the ‘party line’, even when loyalty is not deserved. Perhaps before we are both too old to care, this horrendous question of climate sensitivity will finally be clarified.

  165. cyclonebuster (Comment #100784)
    As I said try doing some research instead of just looking at dots.

    For example, try googling “barrow heat island”.

    Here’s just the first hit I got

    Barrow

    If you checked Anthony’s site you’d see they’ve discussed this a number of times, but it seems as if you’d rather stick with your dots than learn something.

  166. BarryW (Comment #100802)
    August 2nd, 2012 at 7:02 pm

    cyclonebuster (Comment #100784)
    As I said try doing some research instead of just looking at dots.

    For example, try googling “barrow heat island”.

    Here’s just the first hit I got

    Barrow

    If you checked Anthony’s site you’d see they’ve discussed this a number of times, but it seems as if you’d rather stick with your dots than learn something.

    LOL. How about you try to figure out what those dots represent: things like size related to temperature, and the number of red dots compared to blue dots. If you did that, you would realize how uninformed you are about the temperatures they represent. Apparently, that’s all YOU did with the dots: just looked at them.

  167. Re David Springer Comment #100761)

    “I mean raw data as GHCN means raw data. Obviously you didn’t know or you wouldn’t be on my ass about it. Raw or unadjusted data in NOAA context means transcribed from paper to electronic storage and scrubbed of typographical mistakes. Write that down.”
    ___

    David, it doesn’t matter what I know or don’t know. Raw data are raw data, and raw data adjusted for transcription errors or other reasons are adjusted data. I doubt GHCN fails to make the distinction between raw and adjusted.

    In Comment #100694 you remarked “I say screw the adjustments and use the raw data, warts and all.” Thank you for clarifying that by “raw data” you meant raw data adjusted to remove transcription errors. Again my question is why do you not want to use data adjusted to correct other kinds of errors?

  168. steveF

    Thanks for laying out what and why you do what you do. I too am a “conservative”, or at least a libertarian,

    Yes. In SF, if I describe myself as a libertarian, people ask what that is. I explain it as a conservative who doesn’t care if you smoke pot or dance naked at the Folsom Street Fair. They are puzzled.

  169. Steven Mosher, SteveF, anyone else who cares: I realize this is drifting far afield of the original topic, but as self-described conservatives or libertarians, what, if anything, do you feel should be “done”, and by whom, about the climate “problem”:

    A. If the “mainstream” view of science is correct.

    B. If the “lukewarm” view of science is correct.

    If they are different actions by different actors, why?

  170. BarryW (Comment #100802)
    August 2nd, 2012 at 7:02 pm
    cyclonebuster (Comment #100784)
    As I said try doing some research instead of just looking at dots.
    For example, try googling “barrow heat island”.
    Here’s just the first hit I got
    Barrow
    If you checked Anthony’s site you’d see they’ve discussed this a number of times but it seems as if you’d rather stick with your dots than learn something.

    ###################

    There is actually a CRN station at Barrow, so folks who want to look for UHI can look at that versus the other sites. As Anthony considers CRN a gold standard, that would be the place to start.

    On August 4th, go to this link and look up monthly data
    (services are currently out):

    http://www.ncdc.noaa.gov/crn/station.htm?stationId=1007

    Or you can look at 5-minute data and see the effect of TOBS
    (requires programming; the script is on Climate Audit).

    The Arctic is warm. Look at SST and look at the reports of rain. Or, if you like, look at the web cams in the Arctic, or look at the buoys to get real-time data above and below the ice.

    The problem is that the observations and observation networks that are available have moved beyond the tired old stuff that people always dredge up in these debates.

    More folks should spend time with data than posting on blogs.

    With that, I’m out to look at more data.

  171. Re cyclonebuster Comment #100799

    I bet Watts thinks this is an upward trend in ice loss over Greenland………

    http://www.skepticalscience.co…..ly2011.gif
    ______

    I don’t know what Watts would think, but I count about 10 starts of an upward trend in Greenland ice.

    BTW, Watts’ attention has turned to street lamps melted by a burning dumpster in Stillwater, Oklahoma, which some silly AGW alarmists think has something to do with global warming. There are photos at WUWT.

    Elsewhere in Oklahoma, heat from a burning dumpster could be what buckled a bridge, closing Highway 33. See photo in link.

    http://newsok.com/buckled-bridge-closes-oklahoma-highway-33-near-downtown-guthrie/article/3697532

  172. Lucia
    You seem to have changed your policy to try to get the same number of contributions as Climate Etc., which makes the latter unreadable.
    So: Is it OK with you if I call Dzoga a wanking moron?

  173. Thank you, Steven! I am often here in silence; I read this blog every week. I do not have the time, it is difficult for me, and it takes too long to write comments in bad English. I also read your blog. I appreciate your scientific rigor, like that of Lucia, Zeke, …, (you are all well known among weather/climatology enthusiasts in France).

    Zeke, I saw that it was in the project, but I say this here because you listen to scientific arguments more than they do on A. Watts’ blog.
    The 1999 classification will not be in English; at the time it was intended for France.
    The original paper is here:
    http://comprendre.meteofrance.com/publications/collections/techniques_d_observations_et_de_prevision/techniques_d_observations_et_de_prevision?page_id=2912&document_id=4071&portlet_id=18736

  174. Steven Mosher (Comment #100798)
    August 2nd, 2012 at 5:07 pm

    “Again, these types of comparisons are all easily done. But here is what you will rarely find: you will rarely find somebody Springing up to actually get their hands dirty with data or with reading papers. They Spring up and want to know who is paying you.
    They Spring up and spout all manner of nonsense, and then they slink away when they are called to account. Does anyone Spring to mind?”

    That’s so very clever! No wonder your resume is “all over the map”. ROFL

  175. @AMac (Comment #100762)

    August 2nd, 2012 at 2:35 pm
    Tip of the hat to Zeke.
    Tip of the hat to Steve Mosher.
    WUWT post, Update on Watts et al. 2012:

    …An issue has been identified in the processing of the data used in Watts et al. 2012 that was placed online for review. We thank critics, including Zeke Hausfather and Steve Mosher for bringing that to [our] attention. Particular thanks go to Zeke who has been helpful with emailed suggestions. Thanks also go to Dr. Leif Svalgaard, who has emailed helpful suggestions.
    The authors are performing detailed reanalysis of the data for the Watts et al. 2012 paper and will submit a revised paper to a journal as soon as possible, and barring any new issues discovered, that will likely happen before the end of September…

    Admirable conduct. Science progresses.

    Do we know who the authors are?

  176. Max_OK (Comment #100806)
    August 2nd, 2012 at 10:52 pm

    “In Comment #100694 you remarked “I say screw the adjustments and use the raw data, warts and all.” Thank you for clarifying that by “raw data” you meant raw data adjusted to remove transcription errors. Again my question is why do you not want to use data adjusted to correct other kinds of errors?”

    Because the other kinds aren’t “errors”. It’s not an error to make an observation at 7am. It’s not an error to move a station to a new location. It’s not an error to upgrade from a CRS station to an MMTS station.

  177. bugs (Comment #100822)
    August 3rd, 2012 at 5:46 am

    “Do we know who the authors are?”

    Watts indicated there are some as yet unnamed authors. The named authors are Anthony Watts, John Christy, Steve McIntyre, and an employee of Watts’ whose last name is Evans and whose first name escapes me.

  178. Alexej Buergin (Comment #100816)
    August 3rd, 2012 at 2:17 am

    “So: Is it OK with you if I call Dzoga a wanking moron?”
    ——————————————————————–

    It’s an insult to wanking morons. Dhogaza is a vicious little atheist troll with no balls. It’s unlikely he’s a wanker due to the lack of balls.

    “Roald Amundsen navigated the Northwest Passage in 1903-1906. The only thing that’s changed today to make the passage any easier is satellite imagery to pinpoint navigable channels through the ice.”

    Took him three years, in a ship that was designed to survive being frozen in over the winter. Get with the program.

  180. Dave Springer,
    TOBS clearly can be tricky to account for accurately, especially if there is inaccurate or incomplete metadata. But I don’t really understand your objection to at least examining and trying to account for TOBS when adequate metadata is available. Clearly it is not an ‘error’ to collect minimum and maximum values at a specific time (like 7:00 AM). But it is also clear that collecting data from the same instrument at 11:00 AM instead of 7:00 AM will sometimes cause the recorded maximum and minimum temperatures for the same station to change, on average, in one direction… that is, the time of data collection introduces a slight bias in the reported daily average temperature. Do you doubt this is in fact a potential bias? You can yourself look at hourly station data and calculate how the daily average temperature would change with a change in “time of observation” at a specific station, and lots of people have already done this. Critically examining the accuracy and uncertainty of any method used to “correct” for time of observation bias is perfectly sensible, but I don’t see that ignoring a potential bias from time of observation is justified, any more than ignoring UHI bias due to local development is justified.
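[Editor's note: the suggestion above, that anyone can check this from hourly data, can be sketched directly. The toy below is a hypothetical illustration, not real station data: it generates a synthetic hourly series (a diurnal cycle plus a slow warm/cool "weather" swing, with invented amplitudes), then simulates a min/max thermometer that an observer resets at a fixed hour each day. An afternoon reset lets a hot day's 5 pm reading carry over into the next day's maximum, so the afternoon-reset series runs warm relative to the morning-reset one.]

```python
import math

def hourly_temps(days):
    """Synthetic hourly series: a diurnal sine cycle around 15 C plus a
    slow 10-day warm/cool swing (illustrative values, not real data)."""
    temps = []
    for d in range(days):
        synoptic = 5.0 * math.sin(2.0 * math.pi * d / 10.0)  # warm/cool spells
        for h in range(24):
            diurnal = 8.0 * math.sin((h - 9) * math.pi / 12.0)  # peak ~3 pm
            temps.append(15.0 + diurnal + synoptic)
    return temps

def tobs_daily_means(temps, reset_hour):
    """Simulate a min/max thermometer reset at reset_hour each day and
    return the (Tmax + Tmin) / 2 'daily' means it would record."""
    means, tmin, tmax, seeded = [], None, None, False
    for i, t in enumerate(temps):
        if i % 24 == reset_hour:
            if seeded:
                means.append((tmax + tmin) / 2.0)
            tmin = tmax = t  # the reset reading seeds the next day's extremes
            seeded = True
        elif seeded:
            tmin = min(tmin, t)
            tmax = max(tmax, t)
    return means

series = hourly_temps(30)
morning = tobs_daily_means(series, 7)    # observer resets at 7 am
evening = tobs_daily_means(series, 17)   # observer resets at 5 pm
morning_mean = sum(morning) / len(morning)
evening_mean = sum(evening) / len(evening)
print(round(morning_mean, 2), round(evening_mean, 2))  # 5 pm reset runs warm
```

Nothing in the sketch depends on a warming trend: the bias appears purely from day-to-day variability interacting with the reset hour, which is the mechanism behind the TOBS adjustment.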

  181. David Springer:

    I don’t pay for NAV so I can ignore the warnings. There are exceedingly few malicious website warnings. How about you figure out what the problem is and fix it?

    How exactly do you suggest I do that? None of my links are to suspicious sites. None of my links are to material that is remotely malicious. There is no good reason for Norton to give a warning regarding any of them.

    The only “problem” is Norton doesn’t like a link for some reason. Do you suggest I change a website I have no control over? Or do you perhaps suggest I change Norton? Exactly what do you expect me to do?

    As for ignoring warnings, if you want to believe every warning from Norton should be listened to, you can. False positives are guaranteed to happen so I don’t know why you would, but it’s your call. However, I’m not going to try to change websites or programs I have no control over just to get rid of a false warning.

  182. Carrick:

    Hm… cutting the data off at 2000.

    Well, it’s GRACE data. There’s no data before 2002. The original post has other figures (in Basic) and links to papers.
    .
    All: The troll. Don’t feed it.

  183. Steven Mosher (Comment #100810)
    So when did the rest of the world get CRN?

    And I never said the Arctic wasn’t warming, nor the rest of the world (though the 80N graph doesn’t show any summer warming above average this year). Anchorage was having a cold summer as I remember, too. Yeah, it’s weather, not climate.

    The original complaint was that Watts didn’t look at the Arctic. That’s like condemning BEST because Muller didn’t look at buckets vs. engine inlet temps. Irrelevant to the discussion.

  184. I noted in Zeke’s graph above that there is a group of stations which have maintained a midnight time of observation for a very long time. There also appears to be a fraction of stations which have maintained the same time of observation, even while a lot of stations have changed. Perhaps it would make sense to examine the trends only for stations which have maintained the same time of observation, and see whether these stations show a different trend in the raw data than all stations do.

  185. Steven Mosher (Comment #100798)

    Steven:

    I have looked at the Japanese link you provided and at CRUTEM4 for Canada as you suggested, and have not found anything by way of references to how Norway or Australia adjust their station data following your directions. The Japanese link shows that the raw Japanese data can be affected by changing the TOBS, but does not say anything about what adjustments are actually made to the Japanese temperatures. CRU says nothing about how Canada internally makes temperature adjustments, but I did find this article with reference to Vincent, who I believe was a pioneer in applying change points to the homogenizing process for adjusting temperatures. It would appear that the Canadian temperatures would be adjusted for TOBS via the change-point algorithm. I continue to assume that GHCN and BEST start with raw Canadian data and make adjustments from there.

    http://adsabs.harvard.edu/abs/2010EGUGA..12.2964V

    I am most interested in how these data sets are adjusted, and have been assuming that GHCN and GISS (actually GISS currently uses adjusted GHCN data) use only raw data from the countries of origin and then do their own adjustments. In my email exchanges with Phil Jones I was not able to clearly pin down the form in which CRU received data other than GHCN data, which they use in adjusted form. He tended to wave me towards the Met Office and the adjustments made there, but I was not able to determine exactly what adjustments they made to the data. I believe CRU makes only minor quality-control checks, looking for outliers and transcription errors in the data it receives. Any links that you might have that would clear up these issues would be helpful, particularly with regard to TOBS.

    Is my assumption correct that the data used by BEST starts with the raw data from the various sources, i.e., that BEST is not doing a second iteration of adjusting data that has already been adjusted? I hope you noted the point that Zeke made about the GHCN algorithm evidently being able to account for the TOBS without an initial adjustment for it.

  186. SteveF,

    I’d do a pairing approach, creating pairs of stations with no TOBS changes and TOBS changes over a given period (say, 1979-2010) that also have the same instrumentation, then create a plot of the pre- and post-change differences across all the pairs. I did something very similar for MMTS transitions recently and got some interesting results.

  187. DaveScot:

    It’s an insult to wanking morons. Dhogaza is a vicious little atheist troll with no balls. It’s unlikely he’s a wanker due to the lack of balls.

    I’m not an atheist, but even if I were, I fail to see the relevance.

  188. DaveScot:

    Because the other kinds aren’t “errors”. It’s not an error to make an observation at 7am. It’s not an error to move a station to a new location. It’s not an error to upgrade from a CRS station to an MMTS station.

    But it is, of course, an error not to account for changes in recorded temps that result from such changes.

    It seems like the argument is something like: good station siting leads to a lower trend than bad station siting, but if a station is moved and changes classification, it makes no difference at all …

  189. David Springer, @100693

    “What we should do is use the raw data, warts and all, because that can’t be claimed to have been purposely biased. But there’s no warming trend in the raw data! None!”

    *None?* REALLY?? Please explain this, then, willya?

    http://postimage.org/image/ecnzbd3ev/

  190. David Springer said in Comment #100823

    ‘Because the other kinds aren’t “errors”. It’s not an error to make an observation at 7am. It’s not an error to move a station to a new location. It’s not an error to upgrade from a CRS station to an MMTS station.’
    _____

    I believe it would be an error to change the time of observation if the change adversely affected the accuracy of the temperature anomaly.

    Webster’s definition of error: “an unintentional departure from truth or accuracy.”

    If we are interested in temperature anomalies, I don’t understand why you would want to disregard changes that could bias these anomalies.

  191. The semantic part is that in this case the “adjustments” are really intercalibrations. Ask John Christy about the problems you can get into without intercalibrations when you have different instruments.

  192. Zeke,
    Based on your graph, there appear to be something over 500 stations which have not changed time of observation, and about 500 which have changed from evening to morning. That number of stations ought to be enough to demonstrate the influence of the change in time of observation. More sophisticated approaches like your pair-wise comparison might be technically better, but would (I think) inevitably introduce some analytical choices which might raise doubts, at least for some people. If Anthony’s results really are substantially biased by changes in time of observation, as is claimed by some, then comparing stations with the same quality rating with and without a change in time of observation ought to yield an obviously higher trend for stations which have not changed time of observation. There is something to be said for minimizing complexity in analyses of contentious subjects.
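[Editor's note: the group comparison proposed above is straightforward to set up. The sketch below is a toy illustration with synthetic anomaly series, not real USHCN data; the station counts, underlying trend, noise level, and step size are all invented assumptions. One group keeps a stable observation time, the other carries an unremoved step change from a TOBS switch, and the two groups are compared by their mean least-squares trend.]

```python
import random

def ols_slope(y):
    """Ordinary least-squares slope of y against 0..n-1 (per time step)."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def group_mean_trend(stations):
    """Average the per-station trends across a group of anomaly series."""
    return sum(ols_slope(s) for s in stations) / len(stations)

random.seed(0)
YEARS = 32  # e.g. annual anomalies over 1979-2010

def synthetic_station(trend_per_year=0.02, noise_sd=0.2):
    """One invented station: linear trend plus Gaussian year-to-year noise."""
    return [trend_per_year * t + random.gauss(0.0, noise_sd)
            for t in range(YEARS)]

# Group A: stable observation time. Group B: the same climate signal plus an
# unremoved +0.3 C step where the observation time changed mid-record.
stable = [synthetic_station() for _ in range(50)]
changed = [[v + (0.3 if t > YEARS // 2 else 0.0)
            for t, v in enumerate(synthetic_station())]
           for _ in range(50)]

print(group_mean_trend(stable), group_mean_trend(changed))
# The step alone inflates the second group's apparent trend.
```

As the comment notes, the appeal of this design is its simplicity: no pairwise matching choices, just two group means whose difference directly exposes any step-change contamination.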

  193. I’m guessing that many do not have a clear idea of what “raw” temperature data means. If we define the raw temperatures to be those that were recorded by the observers and then used to compute a monthly mean value, then the GHCN unadjusted data are not necessarily always “raw”.

    Prior to 1951, GHCN unadjusted draws heavily from World Weather Records. There are numerous cases where WWR reports homogenized temperatures because of station moves or changes in the way Tmean was computed. These are all documented and people can read about them at http://www.archive.org, but it is extremely tedious. Even for the U.S., many stations have adjusted (to a 24-hour mean from (Tmax+Tmin)/2) data in WWR, and sometimes GHCN used WWR and sometimes they used the true “raw” data; you have to compare to the B91s to figure that out.

    The best that can be said for GHCN is that the unadjusted data are “NOAA-unadjusted”. It is not true that there are absolutely no adjustments of any kind in GHCN “raw”.
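[Editor's note: the parenthetical above is worth unpacking: a "24-hour mean" and (Tmax+Tmin)/2 are genuinely different statistics, and they diverge whenever the diurnal cycle is asymmetric. A minimal sketch with one invented day of hourly temperatures; the profile shape is an assumption chosen to exaggerate the effect.]

```python
import math

# One synthetic day: a short, sharp afternoon peak over a long, flat night
# (cubing the positive half of a sine sharpens the daytime peak).
temps = [10.0 + 12.0 * max(0.0, math.sin((h - 6) * math.pi / 12.0)) ** 3
         for h in range(24)]

true_mean = sum(temps) / len(temps)            # 24-hour integrated mean
minmax_mean = (max(temps) + min(temps)) / 2.0  # (Tmax + Tmin) / 2

print(round(true_mean, 2), round(minmax_mean, 2))
# The min/max mean runs several degrees warm for this profile.
```

Which convention a record used, and whether it was later homogenized to the other, is exactly the kind of undocumented difference that makes pre-1951 "raw" data ambiguous.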

  194. Re: Eli Rabett (Comment #100828)

    “Roald Amundsen navigated the Northwest Passage in 1903-1906. The only that’s changed today to make the passage any easier is satellite imagry to pinpoint navigable channels through the ice.”

    Took him three years, in a ship that was designed to survive being frozen in over the winter. Get with the program

    From the NOVA page for the episode “Arctic Passage,”

    5. Winter 1903-Summer 1905

    On the southeast coast of King William Island, Amundsen finds a protected bay in which to drop his anchor. He names the area Gjoa Haven, and the expedition stays on King William Island until August 1905. During this time, Amundsen learns Arctic survival skills from the Netsilik, a band of Inuit people. He and his men also fulfill the scientific aims of their mission during these two years; they take many geographical measurements and locate the magnetic north pole. (emphasis added)

    So yeah, it took Amundsen’s crew a while, but they had to find their own way through, and they weren’t exactly sailing for 3 years straight.

  195. @JR/Max_OK: 1951 is a long time ago. Just noting the obvious, one of the things I have been thinking is that Anthony should do his analysis starting around 1997, when most of the known adjustments had subsided.

  196. Kenneth.

    Here is what I posted on WUWT .

    Using Google and doing some reading is all it would take. I’ve explained this and cited it on several occasions. I get tired of doing other people’s work. I get tired of lazy people who are not interested in the truth, who expect others to do their work for them.

    A SIMPLE GOOGLE ON TIME OF OBSERVATION BIAS will get you the mere
    beginnings of the literature on this. Then read the papers, because the papers have bibliographies going back decades.

    But here goes yet again. The US is SOMEWHAT unique in this matter because, unlike other countries, we had no standard observation time. However, we are not entirely alone in this regard. There are several other countries that have the same issue. A few follow.

    Japan

    http://sciencelinks.jp/j-east/display.php?id=000020000500A0108818

    Or if any of you had started to look at the sources of CRUTEM4 (none of you have), you would have found this right away:

    Canada:

    http://www.ec.gc.ca/dccha-ahccd/default.asp?lang=en&n=70E82601-1

    This website provides monthly, seasonal and annual means of the daily maximum, minimum and mean temperatures from the Second Generation of Homogenized Temperature datasets which now replace the first generation datasets.

    A First Generation of Homogenized Temperature datasets were originally prepared for climate trends analysis in Canada. Non-climatic shifts were identified in the annual means of the daily maximum and minimum temperatures using a technique based on regression models (Vincent, 1998). The shifts were mainly due to the relocation of the station, changes in observing practices and automation (Vincent and Gullett, 1999). Adjustments for the identified shifts were applied to monthly and daily maximum and minimum temperatures (Vincent et al. 2002). Observations from nearby stations were sometimes combined to create long time series that are useful for climate change studies.

    The Second Generation of Homogenized Temperature datasets were recently prepared to provide a better spatial and temporal representation of the climate trends in Canada. In this new version, the list of stations was revised to include stations with long-term temperature observations covering as many of the recent years as possible. New adjustments were applied to the daily minimum temperatures in order to address the bias due to a change in observing time (Vincent et al. 2009). Techniques based on regression models were used to detect non-climatic shifts in temperature monthly series (Wang et al. 2007; Vincent 1998). A new procedure based on a Quantile-Matching (QM) algorithm was applied to derive adjustments (Vincent et al., 2012; Wang et al. 2010).

    Want More?

    Well, there is also Australia, which was posted here. They changed observation time in
    1964, and in their latest product I believe they made a few site-specific adjustments for TOBS.

    See section 8 here

    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    And Norway.

    Nordli, P.Ø. 1997. Adjustments of Norwegian monthly means of daily minimum temperature.
    KLIMA Report 6/97, Norwegian Meteorological Institute, Oslo

    And almost every long temperature series (see the 4 long European stations) has TOBS adjustments.

  197. “The best that can be said for GHCN is that the unadjusted data are “NOAA-unadjusted”. It is not true that there are absolutely no adjustments of any kind in GHCN “raw”.”

    yes, if you read the essay Zeke and I posted on Judith’s you will see that we acknowledge that all we really know is this.

    we know when a dataset says it was adjusted.

    in all cases we used data that was not claimed to be adjusted by the data provider.

    If we have concerns about undocumented corrections in GHCN, we have a simple way to test that. Throw all the GHCN data out.

    OR work with daily data. when you look at daily data you see all sorts of bogus values. This is evidence ( not proof) that it is unadjusted. Using daily data only we get the same answer.

    We can also move to GSOD data and other sources.. Again, the same answer.

    Sadly, we must agree that it is warmer now than in the LIA.

  198. toto, thanks for the link. Without that I had no clue where the source was (and didn’t really feel like searching, given I’ve been putting in 18-hour work days leading up to next week’s deployment).

    As you know, issues with Grace aside, 12 years isn’t long enough to establish trends in climate. It’s useless to argue that what you are seeing over such a short period is a trend as opposed to a trend plus natural climate oscillations.

  199. Steve, thanks for your synopsis on TOBS etc. For those of us who haven’t a chance to invest the time you’ve spent, we appreciate your willingness to share what you’ve learned.

  200. Max_OK (Comment #100853)
    August 3rd, 2012 at 9:58 am

    “If we are interested in temperature anomalies, I don’t understand why you would want to disregard changes that could bias these anomalies.”

    Because for CONUS from 1979-2009 the “adjusted” USHCN record has a warming trend 50% larger than the satellite record for the same period of time 0.3C/decade vs. 0.2C/decade respectively. SHAP and TOBS combined account for the entire warming trend as the unadjusted data has no trend.

    Adding insult to injury, on top of the huge disagreement, the “fingerprint” of AGHG warming is more warming with altitude. The satellites measure temperature at a higher average altitude than the 1.3 meters off the ground at which USHCN measures. Therefore, if the satellite record is the better one, then it should show more warming than the surface stations, not less.

    There is therefore a problem somewhere. I suspect the problem is in the pencil whipping (SHAP and/or TOBS) that raises the USHCN trend from 0.0C/decade unadjusted to 0.3C/decade. The problem could also be that the warming trend is not due to anthropogenic GHGs, or that the theory saying GHG warming should be greater with increasing altitude in the troposphere is wrong.

  201. Kenneth

    “Is my assumption correct that the data used by BEST starts with the raw data from the various sources, i.e. BEST is not doing a second iteration of adjusting data that has already been adjusted.”

    go to the sources page. or download the code and look at the python parts which download the data.

    ‘unadjusted’ or ‘raw” data is used. whereever possible. there are 16 sources. CRU is a source and it is used as the source of last resort.. its data is used only if other sources have no data.
    The sources.txt file is enormous, interested people can start there.

    step 0 is to put daily into monthly

    step one is a merge process where all records are merged.
    That’s the multi-value dataset. Every value reported by every source for every month of every site is there.

    Then duplicates are removed and you are down to the single value file.

    Then outliers are removed per the methods paper.

    Then slicing

    then kriging with reweighting.

    At some point when I come up for air, I think I will be working on the data paper. In Steve’s world the data paper would have come first, but life isn’t Burger King and I cannot have it my way.

    as it stands, I have little advantage over anyone else in understanding ALL the details of the code and data. I start with the papers. Then I look at the code, and if I get stuck I can ask Robert.
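    The pipeline steps described above can be sketched in outline. This is a hypothetical simplification in Python (BEST's actual code is Matlab, with Python downloaders); the function names and data layout are invented for illustration, and the real merge, outlier, slicing, and kriging steps are far more involved.

```python
from collections import defaultdict

def daily_to_monthly(daily):
    # Step 0: average each station-month's daily values into a monthly mean.
    # `daily` maps (station_id, year, month) -> list of daily temperatures.
    return {key: sum(vals) / len(vals) for key, vals in daily.items() if vals}

def merge_sources(sources):
    # Step 1: merge records from all sources into the multi-value dataset:
    # every value reported by every source for every station-month is kept.
    multi = defaultdict(list)
    for monthly in sources:
        for key, value in monthly.items():
            multi[key].append(value)
    return dict(multi)

def deduplicate(multi):
    # Collapse exact duplicate reports to get the single-value file.
    # (The real process also reconciles conflicting values; omitted here.)
    return {key: vals[0] for key, vals in multi.items() if len(set(vals)) == 1}
```

    Outlier removal, slicing, and kriging with reweighting would then follow, per the methods paper.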

  202. SteveF (Comment #100857)
    August 3rd, 2012 at 10:15 am

    “There is something to be said for minimizing complexity in analyses of contentious subjects.”
    ————————————————————————–

    +1*10^6

  203. Zeke, why do you need to ask about SHAP? This is what Watts 2012 is contending is wrong. Below is from NOAA’s description of USHCN data.

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    SHAP – Station History Adjustment Program.

    The homogeneity adjustment scheme described in Karl and Williams (1987) is performed using the station history metadata file to account for time series discontinuities due to random station moves and other station changes. The debiased data from the second adjustment are then entered into the Station History Adjustment Program or SHAP. The SHAP allows a climatological time series of temperature and precipitation adjustment for station inhomogeneities using station history information and is the third adjustment. The adjusted data retains its original scale and is not an anomaly series. The methodology uses the concepts of relative homogeneity and standard parametric (temperature) and non-parametric (precipitation) statistics to adjust the data. In addition, this technique provides an estimate of the confidence interval associated with each adjustment. The SHAP program debiases the data with respect to changes other than the MMTS conversion to produce the “adjusted data”. Specific details on the procedures used are given in “An Approach to Adjusting Climatological Time Series for Discontinuous Inhomogeneities” by Karl and Williams, Jr., 1987, Journal of Climate and Applied Meteorology 26:1744-1763.


  204. @Zeke

    The description of SHAP above is for v1 data. The USHCN v2 SHAP adjustment, which has itself been adjusted (adjustment^2 🙂 ), is here and was modified as recently as 2007 from the original Karl 1987.

    It has been modified several times. Sort of like toothpaste: every few years it’s new and improved. Whether it is the best it can be now, even David Springer doesn’t know.

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/#homogeneity

  205. Zeke (Comment #100874)
    August 3rd, 2012 at 12:36 pm

    David Springer,

    SHAP hasn’t been used for USHCN for three years now. Catch up with the times…
    ftp://ftp.ncdc.noaa.gov/pub/da…..al2009.pdf

    ———————————————————————–
    It’s what Watts 2012 compares to. I thought the thread was about the Watts paper. Sorry. What are we talking about?

  206. Steve_F (100857), and maybe Zeke:

    I got curious about this and poked around on NOAA’s website ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/. Trying to do the simplest of analyses, I grabbed the raw monthly mean file and the “tobs-adjustment-only” monthly mean file. About 1000 stations report annual means in 1970, though that drops to about 800 by 2007 (and that’s as far as the record goes), and the count varies somewhat from year to year.

    Although it looks from Zeke’s figure like there should be 500 or so stations that didn’t change Tobs, when I compare the annual raw means to the annual Tobs means from 1970 to 2007, I get a pretty consistent number of about 50 stations whose mean values don’t change. About 18 of those stations overlap with the 71 or so which are rated 1 or 2 versus 3, 4, or 5. The linear least-squares trend of the simple average of those 50 stations is about 0.15C/decade, whereas the same treatment of the 18 which are 1+2 gives 0.09C/decade; the whole raw dataset is 0.06C/decade and the whole Tobs-adjusted dataset is 0.12C/decade.

    wtf

  207. Zeke (Comment #100874)
    August 3rd, 2012 at 12:36 pm

    SHAP hasn’t been used for USHCN for three years now. Catch up with the times…
    ftp://ftp.ncdc.noaa.gov/pub/da…..al2009.pdf
    ————————————————————————

    It’s still station homogenization. A rose by any other name…

    This is just one more in a series of “new and improved” homogenization procedures. Given how many times the procedure has been modified, one might wonder if they ever really knew what they were doing, since it had to be repaired or replaced so many times.

    A history like that doesn’t bother you? Karl 1987 was good for like 20 years; then, right when the temperature trend flattened out and went into decline following the 1998 El Nino, the pencil whipping started getting fast and furious. That doesn’t appear the least bit suspicious to you? Along with the growing disagreement between USHCN, the satellite record, and the theory that the greenhouse gas warming anomaly is supposed to grow with altitude, while the satellite shows it shrinking compared to USHCN measured 1.3 meters from the ground? Seriously? This all looks perfectly copacetic to you?

  208. BillC (Comment #100878)
    August 3rd, 2012 at 12:52 pm

    Imagine that. I’m shocked. Shocked I tell you.

  209. Trying to clarify that last comment because I hope somebody actually reads it and tells me what I did wrong:

    USHCN V2, simple average of all stations reporting in a given year, for 1970-2007, linear least squares fit:

    Type, count, trend deg C/decade
    Raw, all stations, 0.06
    Tobs, all stations, 0.12
    Raw (=Tobs), all stations with no change from raw->tobs, 0.15
    Raw (=Tobs), 1+2 rated from above, 0.09
    Raw (=Tobs), 3+4+5 rated from above, 0.30

    wtf?
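    For what it’s worth, the trend numbers in a table like the one above reduce to a least-squares fit on a simple (unweighted) average of stations. A minimal sketch, assuming the annual station means have already been parsed out of the USHCN files (file parsing and station selection omitted):

```python
import numpy as np

def decadal_trend(years, annual_means):
    # Ordinary least-squares slope in deg C per year, scaled to deg C/decade.
    return 10.0 * np.polyfit(years, annual_means, 1)[0]

def simple_average_trend(years, station_series):
    # Average across stations for each year (no spatial weighting),
    # then fit a line to the averaged annual series.
    return decadal_trend(years, np.mean(station_series, axis=0))
```

    A simple average like this ignores geographic distribution, which matters most for small subsets like the 18-station group.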

  210. BillC (Comment #100878)

    Between TOBS and Adjusted, the missing monthly data are infilled, and a station without a TOBS adjustment could change because of infilled data.

  211. Continuing:

    Type, count, trend deg C/decade
    Raw, all stations 1+2, 0.10
    Raw, all stations 3+4+5, 0.06 (no surprise, it’s the vast majority)

    Tobs, all stations 1+2, 0.12
    Tobs, all stations 3+4+5, 0.12 (actually agree to about +-0.002)

  212. Kenneth,

    Absolutely, but I’m not using Adjusted, specifically to avoid all adjustments other than Tobs, including infilling. The fields with actual values in both datasets are the same; just the values differ from the Tobs adjustment. So lots of stations don’t have an annual summary if a month is missing.

    So my numbers won’t line up with anyone using the Adjusted.

  213. CORRECTION (from 100881)

    Raw (=Tobs), 3+4+5 rated from above, 0.17

    grabbed the F number not the C

    still 2x the 1+2 rated where Raw = Tobs

  214. BillC

    Interesting, huh? See Watts 2012 for a possible explanation. 🙂

    Site selection means everything. Anthropogenic influences change what time of day min/max temperatures occur.

    How little anyone cares about this is illustrated by me pointing out to Zeke that an airport on flat land with mown fields in and around it is hardly representative of the average land surface on the earth, yet he has no problem at all with 30%+ of the class 1 and 2 stations being airports. Zeke defended airports as being really good sites. This is nonsense. A really good site is one with the least anthropogenic disturbance.

    A new study just came out showing airports are worse than we thought. The nighttime microclimate on a natural landscape, undisturbed by aircraft wingtip vortices, prop wash, jet wash, smooth tarmac, lowered evapo-transpiration, very short groundcover, and leveled ground, is much cooler than at the airport. Remove airports from the list of class 1 & 2 stations and watch the trend drop even more.

  215. BillC,

    Interesting. Perhaps there are some minor time-of-obs drifts (e.g. a few days or months where it changes) that do not represent systemic time-of-obs changes for those stations but still involve some correction, so as to preclude a perfect match?

    If there are only 50 matches, you can’t have a particularly large Class 1/2 subset of those 50, and you need to take trends from a very small number of stations with a grain of salt due to potential biases in spatial coverage, instrumentation changes, etc.

  216. Re David Springer Comment #100869

    David, over at STOAT, Paul S (in a July 31, 2:12 pm post) says the Watts paper uses the wrong amplification factor for the UAH Lower Troposphere CONUS trend.

    Watts: “Depending on the amplification factor used, which for some models ranges from 1.1 to 1.4, the surface trend would calculate to be in the range of 0.17 to 0.22, which is close to the 0.155°C/decade trend seen in the compliant Class 1&2 stations.”

    Paul S: “This is wrong of course – the 1.1 – 1.4 figure would relate to global land+ocean TLT amplification. The global land relationship in CMIP3 models centres on a 1:1 relationship (0.8 to 1.2 between 1979 and 2005, 0.9 to 1.1 between 2010 and 2100), and land temperatures are what is at issue.”

    For more, see

    http://scienceblogs.com/stoat/2012/07/30/why-wattss-new-paper-is-doomed-to-fail-review/

  217. Max_OK (Comment #100888)
    August 3rd, 2012 at 3:20 pm
    Re David Springer Comment #100869
    David, over at STOAT, Paul S (in a July 31, 2:12 pm post) says the Watts paper uses the wrong amplification factor for the UAH Lower Troposphere CONUS trend.
    Watts: “Depending on the amplification factor used, which for some models ranges from 1.1 to 1.4, the surface trend would calculate to be in the range of 0.17 to 0.22, which is close to the 0.155°C/decade trend seen in the compliant Class 1&2 stations.”
    Paul S: “This is wrong of course – the 1.1 – 1.4 figure would relate to global land+ocean TLT amplification. The global land relationship in CMIP3 models centres on a 1:1 relationship (0.8 to 1.2 between 1979 and 2005, 0.9 to 1.1 between 2010 and 2100), and land temperatures are what is at issue.”

    ###############

    It’s actually worse than that. Amplification over land is latitude dependent, so you need to look at the amplification over CONUS. That’s <1.

    Long ago, here on Rank Exploits and over at CA, one could get skeptics to accept this approach to bounding the bias problem in the land surface record, basically because Pielke used the method.

    You start with the UAH record, apply the amplification, and you have an estimate for the land. Then you compare the land record with the estimate from UAH. The residual (according to Pielke and Watts) is the bias in the land record. It’s a good cross check, and most skeptics will agree to this cross check… UNTIL you point out that they use the wrong amplification numbers. When you use the right numbers, that bias vanishes or becomes very small.

  218. “Zeke defended airports as being really good sites. This is nonsense. A really good site is one with the least anthropogenic disturbance.”

    Airports can vary between being cooler than undisturbed sites and warmer than undisturbed sites.

    Understanding why is simple once you look at where some airports are built.

    The other thing you have to be careful about is assuming that a site that is listed as an airport site is actually at the airport.

    Pielke’s latest paper should give you a clue about cool airports.

  219. Mosher – “Amplification over land is latitude dependent…”

    Is there a paper which discusses this, or a table/chart somewhere? Klotzbach et al. only talks about the average amplification, presumably a global average.

  220. of course.. Watts 2012 relies on an amplification of 1.1-1.4.. which is wrong. I suspect they will drop this argument and attack UAH as being wrong.. or they will attack TOBS as being wrong..

    Gosh.. looks like unfalsifiable beliefs to me

  221. Max_OK, Steven Mosher- Regarding the amplification factor, I raised this issue in this thread:

    Andrew_FL (Comment #100603)

    The amplification over the US is indeed less than 1, which does in fact suggest, for the US, there is little bias in the surface data. The best part is I determined the amplification factor empirically, not using a climate model.

    I guess you could say I am a skeptic that agreed to this “cross check”-and in fact you must admit I still do. So I resent your implication a bit, Mosher.

    Also:

    “When you use the right numbers… that bias vanishes or becomes very small.”

    This is almost certainly true over the US. It is not true globally. Globally the amplification factor is greater than one, and the UAH trend is less than the surface trend-meaning that if you accept this cross check, if you accept that it proves that there is little bias in the US record, you must also accept that it shows a significant bias globally.

  222. Steven Mosher (Comment #100865)
    August 3rd, 2012 at 11:39 am

    “using google and doing some reading is all it would take. I’ve explained this and cited it on several occasions. I get tired of doing other peoples work. I get tired of lazy people who are not interested in the truth, who expect others to do their work for them.

    A SIMPLE GOOGLE ON TIME OF OBSERVATION BIAS will get you the mere”

    Steven, you made some claims here at this blog about other nations adjusting temperatures for TOBS and listed those countries in an exchange with David Springer, so I think I had every right to judge that you had reasonably accessible documentation and references to provide when I made my request. I do not think attempting to turn the issue around does you any credit. Once you make a claim, and one in the terms you did, you should stand ready to document it with readily accessible material.

    My main intention was to obtain some information on what historically was done with TOBS by the major three temperature data sets. It was the one remaining area in my review and analysis of the three majors where I was frustrated in not finding information. I knew that GHCN did not adjust for TOBS for any country’s data except the US, and I had to assume that the data GHCN started with for other countries was raw data. Since we know that TOBS should be adjusted for when observation times change, it is puzzling why more effort was not expended on attempting that adjustment for all the data used, or alternatively on showing why it might not be critical given other countries’ changes, or lack thereof, of observation times. Since currently all the GHCN data, and thus the GISS data, receives the change-point treatment, and it has been shown that that process can reasonably well adjust for TOBS, the situation may currently not be that important.

    My questions to you and my searches were in an effort to determine whether raw data that GHCN received might actually have been adjusted. It does not bother me one whit that a second adjustment be made on the same data that has been adjusted once, and in fact I suggested to Menne that this might improve the GHCN algorithm, as I could show that change points could be found between nearest-neighbor stations in adjusted form.

    I found the Canadian adjustment after I looked at CRUTEM4, but I did not find it from that site.

  223. “of course.. Watts 2012 relies on an amplification of 1.1-1.4.. which is wrong. I suspect they will drop this argument and attack UHA as being wrong.. or they will attack TOBs as being wrong..

    Gosh.. looks like unfalsifiable beliefs to me”

    Why do the major 3 temperature data sets, and maybe the fourth now, more or less ignore the UAH and RSS measurements in confirming or not their measurements?

    Also, it is puzzling that if Watts and his people were looking for potential problems with the instrumental record, they would continue to look at the last 30 years, where you have independent measures and where the instrumental record is probably much better than it was earlier. Limiting one’s searches to looking for trend changes in that 30-year period due to micro-site changes does not make a whole lot of sense either if those changes occurred more than 30 years ago.

  224. Kenneth Fritsch (Comment #100897)-“Limiting ones searches to looking for trend changes in that 30 year period due to micro site changes does not make a whole lot of sense either if those changes occurred more than 30 years ago.”

    Analyzing the record for bulk, uniform trend differences from siting differences doesn’t make sense to me period. The only way to possibly say anything about the impact of siting would be to identify when stations changed their siting, and to look for evidence that this resulted in a change, and how much of one. One. Station. At. A. Time.

  225. If the globally averaged tropical amplification is in fact positive (as seems to be the consensus among modelers) then the satellite record ought to show a somewhat greater trend than the surface. But that is not the case: http://www.woodfortrees.org/plot/rss/plot/hadcrut4gl/from:1979/trend:1/plot/hadcrut4gl/from:1979/plot/rss/trend
    So either the estimate of tropical amplification is wrong, or there are substantial other factors involved. The most plausible other factors seem to be boundary layer warming (Pielke Sr. and associates) and/or some inaccuracies in the surface record. I suppose the satellites could be wrong as well, but as closely as those records have been examined, that seems to me less likely.

  226. The question on whether the data comes to GHCN raw or adjusted is answered here.

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/README

    V3 contains two different dataset files for each of the three elements. “QCU” files represent the quality controlled unadjusted data, and “QCA” files represent the quality controlled adjusted data. The unadjusted data are often referred to as the “raw” data. It is important to note that the term “unadjusted” means that the developers of GHCNM have not made any adjustments to these received and/or collected data, but it is entirely possible that the source of these data (generally National Meteorological Services) may have made adjustments to these data prior to their inclusion within the GHCNM. Often it is difficult or impossible to know for sure if these original sources have made adjustments, so users who desire truly “raw” data would need to directly contact the data source.

    The “adjusted” data contain bias corrected data (e.g. adjustments made by the developers of GHCNM), and so these data can differ from the “unadjusted” data.

  227. That should have been tropospheric amplification, not tropical (damned autocomplete).

  228. “The only way to possibly say anything about the impact of siting would be to identify when stations changed their siting, and to look for evidence that this resulted in a change, and how much of one. One. Station. At. A. Time.”

    Agreed. And in fact I have said the same many times. The only way a snapshot would work would be if all stations started as pristine and the snapshot quality deterioration occurred near instantaneously. But even that does not work if the change occurred before the start of the period used to measure trend effects.

    Obviously these attempts are aimed at finding something, or not finding it, at the margins. I guess one could say that, given these quality levels of stations at this point in time, the trends over the last thirty years were not significantly affected. Not sure that really says much, but it is antithetical to the proposition that it would make a difference.

  229. Why do the major 3 temperature data sets, and maybe the fourth now, more or less ignore the UAH and RSS measurements in confirming or not their measurements?

    no clue. seems basic to me, but its hard to get folks interested.

  230. Kenneth.

    I copied and pasted a response from another thread.
    The thread question there was who else uses TOBS.

    That’s not your specific question.

    If I understand your question it is: does ghcn take data from other NWS that use TOBS?

    Answer: maybe.

    Procedure: First identify the countries that may use TOBS:
    Norway, Australia, Japan, Canada.

    1. get the ghcn record
    2. go research the NWS and see what you can get from them

    Compare.

    Or drill down into daily data.

    Mind numbing.
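    The compare step can be sketched as a month-by-month difference of the two series; everything here (the dict layout, the names) is hypothetical, and fetching the GHCN and national-service records is left out:

```python
def compare_records(ghcn, nws):
    # Difference the two series over their common months. A constant
    # offset suggests a baseline difference; step changes suggest an
    # adjustment (e.g. TOBS) applied by one provider and not the other.
    common = sorted(set(ghcn) & set(nws))
    return {month: ghcn[month] - nws[month] for month in common}
```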

  231. Steven Mosher, here’s the published version.

    I think it’s fair to say there is substantive disagreement between models and data trends once you go beyond the global mean average. Heck, even the model global mean average (which they “sort of” tune for) is at the ragged edge of what is acceptable model performance.

  232. Steven Mosher (Comment #100906)-It’s okay, just don’t paint “skeptics” writ large with such a broad brush-I’m not quite a “lukewarmer” but I’m not an unreasonable guy! 🙂
    SteveF (Comment #100899)-“If the globally averaged tropical amplification is in fact positive (as seems to be the consensus among modelers) then the satellite record ought to show a somewhat greater trend than the surface.”

    Here’s the thing: you don’t need models to tell you what the amplification is. Models can tell you why they do it, namely that it’s a consequence of the lapse rate following the moist adiabat. This will be the case in models with “any warming”, Real Climate told us so. So we can ask the question: in the real world, what lapse rate change pattern is exhibited when the surface cools and warms due to ENSO and a couple of volcanoes? It turns out LT variations are in sync with, but larger than, the surface variations, globally, by about (maybe a little more than) the factor models say long term trends should be amplified. Ben Santer noticed this; in the climategate emails you can see them discussing a “timescale” argument for why they believe that amplification must really be real. Santer later published on this in 2005, I believe. So amplification seems to be right. But as you say, the long term trends don’t display this behavior. I think that is because of the surface data. The fact that the satellite data, when adjusted by empirically determined “amplification” factors, agrees with the US data trend to within 0.02 per decade (this was my high estimate of cooling bias) but disagrees with the global data by something like 0.1 degree per decade suggests to me that the US data is unlikely to be wrong, but, as I have long suspected, the data in other parts of the world is much lower in quality.

    Any adjustment to the satellite data would destroy this good agreement with the US data to improve agreement with the global data. That strikes me as ludicrous.

    Steven Mosher (Comment #100903)-Mears is on the RSS team. He may believe his team’s product is biased, but that would surprise me. Personally, I think in recent years it is biased cool, but this is largely serving to erase the earlier warming bias from the NOAA 11 transition to NOAA 12. But lots of people are always saying the satellites must be wrong. I’ll believe it when I see it! 🙂

  233. Willis long ago on CA also did a post on amplification.
    No time to look for it..

    The GUI is almost done.. arrg, GUIs in R.. my code looks like Lucia’s!

  234. Steven Mosher, I said I was going to ignore you for some time, but I noticed you said something earlier I needed clarification about:

    go to the sources page. or download the code and look at the python parts which download the data.

    Could you clarify where the code for downloading data is? I downloaded a zip file I found on the BEST website, but I didn’t see any Python code in it. I may have just missed it as there are many subdirectories and the Readme file is basically useless, but it’d be helpful if you could offer an actual source rather than telling people to look things up.

    I’m not familiar with Matlab at all, and my internet connection is bandwidth limited so I probably won’t be downloading the full data set any time soon, but I would like to be able to review the code used. Some sort of guidance or roadmap would go a long ways.

  235. http://www.drroyspencer.com/2012/05/u-s-temperature-update-for-april-2012-1-28-deg-c/

    According to UAH data for CONUS 1973-2012 the trend is 0.22C/decade. USHCN shows 0.26C for the same period. UAH trend for same period with population adjusted data (see article for population adjustment method) is 0.13C/decade.

    The USHCN defenders can kick and scream all they want but it isn’t going to change the evidence of a mistake somewhere.

    Possibilities are:

    1) the satellite temperature record is wrong
    2) the surface temperature record is wrong
    3) CO2 is not responsible for the warming

    It could be some combination of all three, but my bet is that most of the error lies behind doors #2 and #3 above. The rural warming left in the satellite record after population adjustment matches the surface warming which Watts 2012 found in USHCN data with altered station discrimination. Both of those match the warming trend in Atlantic SST, and the warming trend in the Atlantic SST matches the upside of a cyclic 60-year warming/cooling oscillation in the Atlantic. All of which essentially points to no CO2 warming but rather to natural global temperature oscillation.

    This shouldn’t be difficult to accept for an objective observer. The earth has been ringing like a bell on a 100,000-year glacial/interglacial oscillation for several million years. That there exist harmonics at higher frequencies is no surprise. It would be more surprising to find no harmonics.

    AGW faithful and those with vested emotional interests in one narrative or another are not objective.

    The AGW narrative is collapsing under the weight of contrary evidence. Get used to it.

  236. Satellite temperature data is checked and confirmed with balloon soundings.

    What are USHCN network adjustments checked and confirmed with?

  237. Kenneth Fritsch (Comment #100900)
    August 3rd, 2012 at 7:00 pm

    The question on whether the data comes to GHCN raw or adjusted is answered here.

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/README

    V3 contains two different dataset files for each of the three elements.
    “QCU” files represent the quality controlled unadjusted data, and “QCA” files represent the quality controlled adjusted data. The unadjusted data are often referred to as the “raw” data.
    —————————————————————————-
    Precisely. My bold. I had anticipated the commenters here already knew that. Max_whoever and Steven Mosher apparently did not or they would never have embraced the straw man about me wanting to include transcription and typographical errors.

    This is either incompetence or intellectual dishonesty on display on Mosher’s part. I’m going with intellectual dishonesty because the level of incompetency required given the time he’s devoted to this seems absurd.

  238. Steven Mosher (Comment #100904)
    August 3rd, 2012 at 8:43 pm

    Why do the major 3 temperature data sets, and maybe the fourth now, more or less ignore the UAH and RSS measurements in confirming or not their measurements?

    no clue. seems basic to me, but its hard to get folks interested.
    ——————————————————————————-

    Culture wars. Spencer and Christy both believe in God. The general academic position is that anyone who believes in God cannot be intellectually competent. At the most liberal institutions like Stanford and Berkeley this is triply true which is why I instinctively knew Muller could not be trusted as an objective participant and warned Watts from the outset that he was going to get f*cked by Muller. Lay down with dogs get up with fleas.

  239. Mosher,

    Simple averages – yes, as I said. It was meant to be quick. I had no idea what to expect to find, so I’m a little surprised. Before I’d do any spatial averaging, I’d look at
    1) (qualitatively) the geographical distribution (to make sure 15 of the 18 aren’t in the same region)
    2) length and timing of trends in each case
    3) the distribution of the trends by station.

    Particularly if the distribution is all over the map, I’d be more inclined to think random error.
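    As a rough illustration of check (3), a quick look at the spread of per-station trends might go like the sketch below. The trend values are synthetic stand-ins drawn from a normal distribution, not the actual 18 stations.

```python
import random
import statistics

# Illustrative only: stand-in per-station trends for a small sample,
# drawn from a normal distribution rather than taken from USHCN.
random.seed(0)
station_trends = [random.gauss(0.25, 0.10) for _ in range(18)]  # deg C/decade

mean_trend = statistics.mean(station_trends)
spread = statistics.stdev(station_trends)

# If the spread across stations is large relative to the difference
# between group means, a gap like 0.1 C/decade between groups could
# plausibly be random error rather than a real siting signal.
print(f"mean = {mean_trend:.3f} C/decade, stdev = {spread:.3f} C/decade")
```

    With a real sample one would compare the group-mean difference against this station-to-station spread before concluding anything.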

  240. Carrick 100908,
    Wow, that paper is as mealy-mouthed as any I have seen recently!
    The whole thing is a search for why the satellite data MUST be wrong. So, the satellite data must be wrong, the balloon data confirming the satellites must also be wrong, and there is no discrepancy…. at least after you change the satellite data to make it “right”. Sad indeed.

  241. McIntyre here

    http://climateaudit.org/2012/07/31/surface-stations/

    describes CONUS-only satellite trend for the period as 0.29C/decade which is sharply (to say the least) higher than the global land-only trend of 0.17C/decade. I’ve no reason to distrust McIntyre.

    I was not aware of how much CONUS differed from global land-only trend.

    I’m curious as to what makes CONUS so much higher than the global land trend. CO2 is ostensibly well-mixed, so this still points away from anthropogenic warming from CO2 emission, but it certainly does still point to anthropogenic warming from some other cause such as land use change.

    Ostensibly there should not be much additional warming anomaly with increasing altitude over land due to less evaporation over land surfaces, but it should not be totally absent either, so I still suspect that surface temperature adjustments to USHCN raw exaggerate the warming to some extent, although perhaps not as much as Watts 2012 would have us believe. USHCN is given at 0.26C/decade compared to satellite 0.29C/decade. This does not strike me as being much out of agreement with latent amplification at increasing altitude.

    The question I have is why CONUS is so much higher than global land-only. It screams anthropogenic, but not anthropogenic CO2 unless it is not well mixed. And in any case the global land+ocean satellite trend is still 0.14C per decade and falling, and the IPCC AR1 prediction of 0.3C/decade is quite wrong at this point according to our best sensor system.

  242. bugs (Comment #100918)
    August 4th, 2012 at 5:48 am

    Hansen believes in god.
    ————————————————————————–

    Got a link describing exactly which god that would be for Hansen? I couldn’t find a single thing about Hansen’s religious beliefs, if any.

    For Spencer and Christy, on the other hand, one need only go to Wikipedia. Both quotes below are from their Wikipedia biographies:

    “Prior to his scientific career, Christy taught physics and chemistry as a missionary teacher in Nyeri, Kenya from 1973 to 1975. After earning a Master of Divinity degree from Golden Gate Baptist Seminary in 1978 he served four years as a bivocational mission-pastor in Vermillion, South Dakota, where he also taught college math.[1]”

    Spencer is perhaps more objectionable as he embraces the hated Intelligent Design:

    “Spencer is a proponent of intelligent design as the mechanism for the origin of species.[34] On the subject, Spencer wrote in 2005, “Twenty years ago, as a PhD scientist, I intensely studied the evolution versus intelligent design controversy for about two years. And finally, despite my previous acceptance of evolutionary theory as ‘fact,’ I came to the realization that intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism. . . . In the scientific community, I am not alone. There are many fine books out there on the subject. Curiously, most of the books are written by scientists who lost faith in evolution as adults, after they learned how to apply the analytical tools they were taught in college.”[34] In The Evolution Crisis, a compilation of five scientists who reject evolution, Spencer states: “I finally became convinced that the theory of creation actually had a much better scientific basis than the theory of evolution, for the creation model was actually better able to explain the physical and biological complexity in the world… Science has startled us with its many discoveries and advances, but it has hit a brick wall in its attempt to rid itself of the need for a creator and designer.”[35]”

  243. David Springer (Comment #100923)

    I’m wondering if it is related to land use change, à la Dr. Pielke. Notice the regional differences that show up in Watts’ charts. The industrialized northeast has the highest trend. I haven’t looked at a map of the sat data to see if it shows urbanized heating. Temp trends vs night lights?

  244. http://curry.eas.gatech.edu/currydoc/Agudelo_GRL31.pdf

    Take a gander at fig. 1 above which is a global map of satellite derived decadal temperature trends for 1979-2001.

    I wasn’t aware there was this much regional discontinuity. Vast regions with anywhere from -0.4C/decade to +0.4C/decade.

    US, Europe, and Northeast Asia are hotspots with the latter two warming faster than CONUS. Africa, Central and South America, south Asia, Australia, and Antarctica are all cooling.

    This doesn’t look like a well-mixed atmosphere if CO2 is responsible.

  245. Barry – check out Curry’s paper. It would appear that industrialization beginning at about 35 degrees north latitude is the common factor and the farther north the more warming there is. This correlates fairly well with soot emission from industrial smokestacks and accumulation on snow & ice.

    I wrote back in 2005 that I thought soot was more responsible for warming trends than anything else. I’d like to see the same map as fig. 1 in Curry 2004 split into two versions April-September and October-March to better determine if the effect is greater when there is more snow on ground or not.

  246. RE David Springer Comment #100916

    David, you were misunderstood because your statement “I say screw the adjustments and use the raw data, warts and all” could reasonably be interpreted as meaning raw data before editing for transcription errors. Perhaps your use of “warts and all” made misunderstanding more likely, since transcription errors could be seen as warts.

    I trust our conversation, and the comments of others, have been as edifying for you as they have been for me.

  247. BarryW

    I considered land use change but I think there’s enough land use change in Africa, South America, and Australia so we should see warming there and there’s not much land use change in northeastern Canada which has a lot of warming. It seems to correlate best with smokestack soot emission and the distance soot can travel before it settles out.

  248. David Springer (Comment #100930)

    Interesting. Have you looked in the India/China downwind areas? I would think they would show a large spike.

    Just looked at the Curry paper. China lights up like a xmas tree. I would think the overall distribution is very suspect for CO2 causation assuming it’s “well mixed”.

  249. Max – not edifying but rather a complete waste of time for me. I knew the difference between USHCN adjusted and raw all along as well as quality control in going from individual station reports on paper to raw USHCN data. Nowhere did I even hint that I had a problem with quality control correcting transcription and typographical errors. That was a strawman animated through reductio ad absurdum by Mosher and reanimated by you numerous times.

    BarryW – I can hardly see the continental outlines in that map. Wind patterns and industrial smokestack concentration would no doubt be interesting to see. In general there’s little wind-driven exchange across the equator. There’s also a lot less snow cover in the southern hemisphere (excluding Antarctica) because there’s twice as much ocean in the SH and the ocean moderates summer/winter temperature differences. Antarctica is possibly a special case due to the polar vortex, which tends to isolate the interior of the continent from the ocean surrounding it. However, when it comes to CO2, the polar vortex doesn’t stop that from mixing. Antarctica’s CO2 concentration lags Mauna Loa by 1-2 years at most IIRC.

  251. BarryW – I’ve seen some claims that CO2 isn’t well mixed but they didn’t appear credible to me. Mauna Loa agrees with almost any other sampling site (including my own in Austin, TX) in the NH almost instantly and with everywhere but Antarctica in the SH once you adjust for seasonal difference between NH and SH. The “well-mixed” claim for CO2 appears very well grounded in reality. The claim of anthropogenic warming from CO2 is grounded in theory but not in reality.

  252. SteveF,

    As opposed to the search for any reason why the surface record MUST be wrong? And there are massive differences between the various radiosonde analyses depending on the homogenization method used, so it’s a bit hard for them to “confirm” massively adjusted satellite readings, which also have very large zonal differences between even themselves. Not to mention massive past corrections.

    A reminder for Mosher that a Berkeley style analysis of radiosonde data would be awesome, given all of the instrumentation changes. This assumes the measurements are dense enough to detect change points. Yes, another “honey do.”

  253. Steven Mosher (Comment #100905)

    Mind numbing is a subjective measure. Now that I am convinced that GHCN (and GISS, by way of currently using GHCN data, and CRU, by way of Phil Jones’ email replies on the matter) do not really know how or how much the raw data they received is processed, I think a future project for myself would be to simply compare the GHCN data in unadjusted and adjusted form for various country regions of the world. It could be instructive.

    At this point I see no reason to think that adjusting already adjusted data is going to be problematic. It simply should mean that the unadjusted and adjusted data will have only small differences.

  254. ob (Comment #100934)
    August 4th, 2012 at 8:47 am

    re: soot etc. what about the hot spots over the southern oceans and the North Pacific?

    —————————————————————————

    Ninety percent of the mass of the global ocean is a nearly constant 3C. Ten percent of its mass, called the mixed layer, averages about 14C. The mixed layer floats atop the frigid depths. Averaged, the global ocean temperature is 4C, which probably represents the average temperature of the earth over a glacial/interglacial period.
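    For what it’s worth, the 4C figure is just the mass-weighted average of the two layers described above:

```python
# Mass-weighted mean ocean temperature from the figures in the comment:
# ~90% of the ocean's mass near 3C (the deep ocean), ~10% near 14C
# (the mixed layer).
deep_frac, deep_temp = 0.90, 3.0     # fraction of mass, deg C
mixed_frac, mixed_temp = 0.10, 14.0

mean_temp = deep_frac * deep_temp + mixed_frac * mixed_temp
print(round(mean_temp, 2))  # 4.1, i.e. roughly the "4C" quoted
```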

    People usually jump up and say “Dave, the deep ocean is 3C because that’s the temperature where H2O reaches highest density”. Well, that’s true for FRESH water but not for salt water. Salt water at ocean salinity keeps on increasing in density right up to its freezing point, which is -1.9C. When it reaches freezing the salt is excluded, and that’s a really good thing; otherwise the ocean would freeze from the bottom up and we’d be in a permanent ice age.

    Anyhow the point is that different factors must be considered in land vs. ocean. Rate of mixing between surface and deep water is one factor. Gyres that exclude expanses of water from north/south conveyor belt circulation are another. Trade winds speed up, slow down, and shift latitudes. The trades accelerate or decelerate evaporation which cools the surface layer and account for El Nino, La Nina, and La Nada. We know those indirect effects from changing trade wind behaviors have global consequences in both temperature and precipitation patterns.

  255. David Springer (Comment #100935)

    My comment was meant sarcastically. If CO2 is the major causation of warming then one would expect a uniformity of the warming at least hemispherically given the differences between land/ocean NH and SH. The warm spots on Curry’s maps seem to line up well with prevailing wind patterns relative to industrialized areas both in the Northern and Southern hemispheres. Do the CO2 proponents have an explanation for that?

  256. David Springer (Comment #100923)

    The question I have is why CONUS is so much higher than global land-only. It screams anthropogenic but not anthropogenic CO2 unless it is not well mixed.

    The impact of the Pacific Multidecadal Oscillation is more pronounced in the US. Prevailing winds in the northern hemisphere are ‘west to east’.

    Thru June 2012 the USA 48 trend is now .23C, which agrees with the Northern Hemisphere trend of .23C, which is higher than the global trend.
    http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

    Changing the ‘endpoint’ by 4 years changes the trend in the US lower 48 by 0.06 deg. The differences between the USA48 trend, northern hemisphere land trends, and global land trends scream ‘natural variation’.

  257. Kenneth Fritsch (Comment #100937)
    August 4th, 2012 at 9:03 am

    Or you can just do like I do and discount the land surface temperature record entirely, because it just wasn’t designed to tease out decadal temperature trends accurate to a few hundredths of a degree over large regions, and no matter how much pencil whipping you do it isn’t going to fix the fundamental problem that it’s not adequate for what is being asked of it. There is far too much room for mischief, mistake, and confirmation bias in the complex attempts to massage the individual station data into a composite different than what it is. If you want to know the daytime high and nighttime low temperature of Possum Trot, Kentucky on 12 June 1963 to plus or minus 1 degree C, then USHCN is the place to go (or the local newspaper), but it just isn’t up to the task of a continental US average trend in the hundredths of a degree per decade. And CONUS is the best ground sensor network, so any other region is even more problematic.

  258. harrywr2 (Comment #100940)
    August 4th, 2012 at 9:06 am

    “The impact of the Pacific MultiDecadel Oscillation is more pronouced in the US. Prevailing winds in the northern hemisphere are ‘west to east’.”

    Australia is a CONUS-sized fly in your ocean influence ointment. Prevailing winds are east-to-west in the SH, Pacific east of Australia is a hot spot, yet Australia is not warmed like the US is. How do you rationalize that?

  259. David Springer said in Comment #100932

    “Max – not edifying but rather a complete waste of time for me.”
    ____

    David, I surrender. I can’t beat you at wasting time.

    harrywr2 – actually there’s just a vast difference between NH north of the 30th parallel and the rest of the world.

    The difference is there whether looking over ocean or over land.

    http://curry.eas.gatech.edu/currydoc/Agudelo_GRL31.pdf

    The amount of land surface south of the 30th northern parallel must be greater than that above it, and almost without exception every bit of that southern surface is cooling, which makes any surface north of it look like there’s a space heater blowing across it.

    I can’t see how CO2 can possibly explain this. Can you?

  261. Max_OK (Comment #100943)
    August 4th, 2012 at 9:26 am

    David, I surrender. I can’t beat you at wasting time.

    Fixed that for ya.

  262. David Springer (Comment #100941)

    “Or you can just do like I do and discount the land surface temperature record entirely because it just wasn’t designed to tease out decadal temperature trends accurate to a few hundredths of a degree over large regions and no matter how much pencil whipping you do it isn’t going to fix the fundamental problem that it’s not adequate for what is being asked of it.”

    Naw, I’ll just continue to do what I think is reasonable and instructive.

  263. BarryW (Comment #100939)
    August 4th, 2012 at 9:03 am

    There have been some studies (or at least A study I read) showing that soot is active in temperature modulation while in transit as well as after it settles out on the ground. The gist was that it is a negative surface forcing while airborne (it shadows the surface). This was made, IIRC, as a rebuttal to Hansen 1988 forcings from various causes that had soot at about 50% the positive forcing of CO2.

    IPCC couldn’t tolerate soot as a major source of warming since the U.S., beginning with the Clean Air Act of 1963, has made huge strides in reducing soot emission while virtually no other region did the same, including Europe. The primary political objective of global warming is to somehow punish the US for becoming the world’s only military superpower and single largest economic power. US economic, military, religious, and cultural leadership scares the bejesus out of many others. You don’t see many people who manage to emigrate to the U.S. leaving it after they arrive. You don’t see people in the U.S. abandoning blue jeans, Coca-Cola, and McDonald’s for whatever is fashionable in other parts of the world, but you see almost everywhere else in the world taking up those things.

    Downsizing the United States of America was, is, and continues to be the political agenda driving the CAGW charade. Therefore the only mechanism for global warming has to be something the U.S. does more than anyone else, and that doesn’t include particulate emissions from fossil fuel combustion; it’s only CO2 emission. Can’t even blame methane emission because most of that comes from rice growers. Nothing poor countries or Europe does that pollutes the air is politically acceptable. Interestingly, a few years ago China took over as the #1 source of CO2 emission, and they’re no slouch on rice/methane production either. Exempt from Kyoto too. Isn’t that just precious? Unlike the U.S., however, China doesn’t play the politics-of-guilt game and won’t hesitate to tell the rest of the world to go pound sand. I wish the liberals in the US resembled the Chinese government in more ways than just fascist inclination, particularly in the realm of wanting to grow their economy and influence in world affairs.

  264. Kenneth Fritsch (Comment #100947)
    August 4th, 2012 at 9:36 am

    Naw, I’ll just continue to do what I think is reasonable and instructive.
    ———————————————————————

    Without letting facts get in the way of your beliefs.

    Understood.

  265. David Springer (Comment #100948)

    The UN agenda is basically how do we cut the US off at the knees to bring them down to our level. Liberals in the US think the same.

    While it may shadow the ground when airborne, it will still absorb SW and radiate at LW. CO2 would still react to the LW whether it’s at ground level or even a few kilometers above ground. So its overall effect on albedo would be much the same, I would think.

  266. cce (Comment #100936),
    The conundrum is that there is substantially conflicting evidence of tropospheric amplification versus the models over anything but the very short term. Yes, there are ‘adjustments’ everywhere, and yes, there is noise/uncertainty in all of the data. I objected to the paper because the authors seem to assume that the modeled amplification MUST be correct and start working from there… hence a search for reasons why tropospheric temperature data is probably wrong. A more measured approach would be to accept that modeling of the atmosphere’s behavior may be a lot less than perfect, and that could also lead to the discrepancy between short and long term amplification. My personal guess is that relatively fast ENSO driven warming/cooling of the troposphere mimics “amplification” of sea surface temperature changes in the short term but not in the long term… and that produces the conflicting information: short term ENSO driven SST changes appear amplified, even while the longer term trend shows little or no amplification. So both the surface and tropospheric data may actually be correct.

  267. David Springer (Comment #100942)
    August 4th, 2012 at 9:22 am

    harrywr2 (Comment #100940)
    August 4th, 2012 at 9:06 am

    Australia is a CONUS-sized fly in your ocean influence ointment. Prevailing winds are east-to-west in the SH, Pacific east of Australia is a hot spot, yet Australia is not warmed like the US is. How do you rationalize that?

    The Rocky Mountains. The highest mountain on the Australian mainland is 2200 meters. In the Western US we have a whole slew of mountains over 4,000 meters.

    A small change in Pacific Ocean energy is going to have a big impact on the ‘weather’ on either side of the ‘Rocky Mountain Air Dam’. Since the land mass west of the Rockies is comparatively small, the trend doesn’t average very well.

    If we look at Watts’ pretty picture of surface temperature trends in the US, regardless of whether we refer to the pictures using Watts’ ‘purified’ data or NOAA’s adjusted data, the trend differences east and west of the Rockies are pronounced.
    http://wattsupwiththat.files.wordpress.com/2012/07/watts_et_al_2012-figure20-conus-compliant-nonc-noaa.png

    The trends in the pictures use state borders rather than the actual mountain ranges. IMHO if you used the actual mountain ranges the difference in trends would be even more pronounced.

  268. Steven Mosher (Comment #100910)- Yes, that was an interesting post: he showed that on very short timescales there is “amplification” less than one on average. This is because, IMAO, the month to month changes are mostly uncorrelated; measurement noise and various seasonal effects “pollute” the signal. The “amplification” peaked at greater than one at something like two years, IIRC, and decayed thereafter. This is consistent with what I mentioned above: that ENSO and volcanoes show amplification, but the long term trend does not. He also showed that amplification varies with altitude on various timescales in ways different from models, but that analysis depended mostly on the radiosondes. Either way, that is why it makes the most sense to empirically determine the amplification, since it is not obvious that models are correct about the exact nature of the vertical structure one should expect. Since the long term trends are what are in question, the empirical determination should use inter-annual changes.
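    A minimal sketch of that empirical approach, using synthetic anomaly series with an amplification of 1.2 built in (not real satellite or surface data), could look like this:

```python
import random

# Synthetic anomaly series: a surface series with a trend plus noise,
# and a tropospheric series constructed with amplification 1.2.
random.seed(1)
surface = [0.017 * i + random.gauss(0.0, 0.15) for i in range(30)]
tropo = [1.2 * s + random.gauss(0.0, 0.05) for s in surface]

# First differences isolate inter-annual changes, as the comment
# suggests, so the estimate reflects variability, not the long trend.
d_surf = [surface[i + 1] - surface[i] for i in range(len(surface) - 1)]
d_trop = [tropo[i + 1] - tropo[i] for i in range(len(tropo) - 1)]

# OLS slope through the origin: amplification = sum(ds*dt) / sum(ds*ds)
amplification = (sum(s * t for s, t in zip(d_surf, d_trop))
                 / sum(s * s for s in d_surf))
print(f"estimated amplification ~ {amplification:.2f}")
```

    The same regression run on raw (undifferenced) series would instead be dominated by the long-term trends, which is exactly the distinction at issue.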

  269. @David Springer


    Downsizing the United States of America was, is, and continues to be the political agenda driving the CAGW charade

    You have no evidence of that whatsoever. In all the ‘climategate’ emails, where is there one mention of any intention to do so?

  270. David Springer said in Comment #100948

    “I wish the liberals in the US resembled the Chinese government in more ways that just fascist inclination particularly in the realm of wanting to grow their economy and influence in world affairs.”
    ____________

    David now wants Americans to be Commies.

    David, you are a hoot!

  271. bugs (Comment #100954)- Funny, I didn’t know United Nations bureaucrats and high officials had any email exchanges in the climategate files.

  272. bugs (Comment #100954)
    August 4th, 2012 at 11:46 am

    @David Springer

    Downsizing the United States of America was, is, and continues to be the political agenda driving the CAGW charade

    You have no evidence of that whatsoever. In all the ‘climategate’ emails, where is there one mention of any intention to do so.

    You’ll need to pore through KGB files… not ‘climate gate’ files. It’s not so much a ‘climate concerned’ goal as a goal of the ‘hard left’.

    In the ‘Great Cold War’ the Achilles heel of the ‘Capitalist Pig West’ was ‘energy’. If you could deny the West energy, then you could bring it to its knees in short order.

    The reality is that the issue of ‘Climate Change’ is easily exploitable by those with political agendas that have nothing to do with ‘Climate Change’.

    Andy Lacis for example seems to me to be incapable of producing a ‘scientific paper’ that doesn’t include his personal opinion of the Bush Family.

    Here is a guest post he did at Pielke Sr’s.
    http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/
    We currently seem to be operating under the ‘no regrets’ climate policy first formulated under the first Bush administration, which basically states that if anything undesirable should happen because of global climate change, we will then deal with that problem after the fact.

    Is it right to question whether someone who can’t avoid interjecting his ‘political opinions’ into a scientific discussion involving atmospheric physics might also be allowing his political beliefs to interfere with his scientific objectivity?

  273. FWIW,

    The reason my trends were lower on the whole is that I did 1950-2007 when I said I was doing 1970-2007. Since I had to change, I thought why not just do 1979-2007 to bring it in line with the satellite era. So for the 160 stations in USHCN having entries every year 1979-2007, here are the stats:

    Station type                      #stations   trend (deg C/decade)
    Raw,  Tobs \= Raw, Class 1+2          7            0.29
    Raw,  Tobs \= Raw, Class 3+4+5       93            0.28
    Tobs, Tobs \= Raw, Class 1+2          7            0.29
    Tobs, Tobs \= Raw, Class 3+4+5       93            0.35
    Tobs = Raw,  Class 1+2               16            0.24
    Tobs = Raw,  Class 3+4+5             44            0.25
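    For reference, the per-station number behind a table like this is presumably just an OLS trend over the annual values, scaled to degrees per decade. A minimal sketch (the annual series here is synthetic, not USHCN data):

```python
def trend_per_decade(years, temps):
    """OLS slope of temps against years, scaled to deg C per decade."""
    n = len(years)
    y_mean = sum(years) / n
    t_mean = sum(temps) / n
    slope = (sum((y - y_mean) * (t - t_mean) for y, t in zip(years, temps))
             / sum((y - y_mean) ** 2 for y in years))
    return slope * 10.0  # per year -> per decade

# Synthetic annual means for 1979-2007 with an exact 0.2 C/decade trend.
years = list(range(1979, 2008))
temps = [10.0 + 0.02 * (y - 1979) for y in years]
print(round(trend_per_decade(years, temps), 3))  # 0.2
```

    The group figures in the table would then be averages of this per-station quantity over each class/Tobs category.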

  274. BillC,
    Can you clarify what the above station descriptions mean? I really can’t follow.

  275. I think it reads like this for the first line

    Raw, where Tobs is not equal (\=) to raw, for classes 1 and 2, with seven sites having a trend of .29.

  276. The “no regrets policy?” Waiting until you know what the problem is and then fixing it? Bush 1 thought of that?

    Lacis must be very young indeed.

  277. Re: j ferguson (Aug 4 14:57),

    Gee. I thought it was about mitigation vs. adaptation. The party line being that adaptation is not to be mentioned. But, that’s really a cost/benefit net present value calculation. Stern was only able to make mitigation economically attractive compared to adaptation by using an extremely low interest rate to calculate the net present value of spending money now instead of spending money then. Of course current interest rates are extremely low, but I don’t think that can last.

    The real problem is that we don’t, in fact, have the money to spend now. The weaker economies, Greece, Italy and Spain, e.g., are going bankrupt already. Unless there’s considerable economic reform (read reduced real spending now and in the future) the stronger economies will soon follow the weaker ones. In the US, California and New York are already on the brink. Their unfunded pension and medical care liabilities are enormous on a per capita basis and growing rapidly. Making tax rates more progressive will just reduce the tax base one way or another so revenues will not increase much, if they increase at all. Warren Buffet assumes everyone thinks like he does and that they will earn just as much regardless of how much of it they get to keep. He’s just as wrong, and for the same reason, as was Pauline Kael about Nixon (“I don’t know anyone who voted for him.”)
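    To put illustrative numbers on the discount-rate point above (the figures are hypothetical; ~1.4% is the low Stern-style rate usually quoted, 5% a more conventional one):

```python
# Present value of a hypothetical cost of 1000 (arbitrary units) paid
# 50 years from now, under two discount rates.
future_cost = 1000.0
horizon = 50  # years

for rate in (0.014, 0.05):
    present_value = future_cost / (1 + rate) ** horizon
    print(f"rate {rate:.1%}: present value = {present_value:.0f}")
```

    At the low rate the future cost keeps roughly half its face value today; at 5% it shrinks to under a tenth. That gap is the whole mitigation-now versus adaptation-later argument in miniature.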

  278. BarryW,
    Thanks for the explanation.
    .
    BillC,
    Hmm… So those results at least raise a question about the TOB adjustments…. the poorly sited stations have the same trend as the well sited stations if there is no TOB adjustment?!? What’s up with that? Do you know the locations?

  279. The contrast between soot and sulfate aerosols is that sulfate aerosols are reflective in the SW, so solar energy is lost to space because the albedo increases. Soot isn’t reflective in the SW, so the energy is absorbed in the atmosphere rather than at the surface and the albedo, to a first approximation, doesn’t change. If you had enough soot, then the troposphere might become more like the stratosphere, with a reverse greenhouse effect: the temperature would increase with altitude, with the atmosphere warmer than the surface. That would be the nuclear winter scenario. But the soot from the oil well fires in Gulf War I threw a monkey wrench into the nuclear winter hypothesis, the reason being that soot also absorbs and emits in the LW, so the ratio of SW absorption to LW absorption doesn’t change and the surface doesn’t cool relative to the atmosphere.

  280. DeWitt,
    Warren is perhaps right about the ultra-wealthy (making money becomes like an entertaining card game when you already have all you could plausibly want or need), but he is almost certainly wrong about normal people. My wife made a choice to retire early specifically because her income was effectively taxed at our last-dollar rate. Had she not faced high marginal rates, she would probably have continued working; the consequence is that the US government is going to collect less tax from her. Warren just can’t appreciate the marginal value of income when you must balance how hard you work against how much money you actually take home.

  281. DeWitt Payne (Comment #100962)- You are much too kind to Buffett; he has no intention of paying the “Buffett tax”: they’ve already carved out the charitable giving exemption in it. Guess who’s giving his fortune away? Guess whose corporation owes the IRS billions and is fighting them in court? Buffett has no intention of paying more taxes. He just wants to say he does, and make other people do so.

  282. David,

    nobody estimates the land record to within 1/100ths of a degree.

    what is reported as the average is mathematically the best estimate of the temperature at unobserved locations.

    The best estimate is one that minimizes the error.

  283. David Springer (Comment #100949)

    “Without letting facts get in the way of your beliefs.

    Understood.”

    Sorry David I thought that draft I was feeling was from hand waving.

  284. BarryW – yep. SteveF – from this sample (stations recording an annual total every year from 1979-2007):
    1) Stations where there were no Tobs adjustments had a lower trend than the general average of raw and adjusted, no matter what class.
    2) Tobs adjustments didn’t affect Class 1+2 stations much, but they affected Class 3+4+5 stations.
    3) Not much difference in trends between Class 1+2 and Class 3+4+5 in raw, but a difference in Tobs (as you said, and corollary to #2).

    Things to look at:

    Geographic distribution
    Trend distribution within the sets
    Instrumentation changes

    Interesting that 2/3 of the Class 1+2 stations did not have a Tobs adjustment.

    “DeWitt Payne (Comment #100962)- You are much too kind to Buffett; he has no intention of paying the “Buffett tax”: they’ve already carved out the charitable giving exemption in it. Guess who’s giving his fortune away? Guess whose corporation owes the IRS billions and is fighting them in court? Buffett has no intention of paying more taxes. He just wants to say he does, and make other people do so.”

    What Buffett wants is for everyone wealthy to pay the wealth tax to the government, and thus he must think that government can spend money better than the wealthy. There is no doubt that he is a proponent of big government. He obviously could donate his money to the government but would rather it go to charities, which is somewhat of a contradiction.

    He is somehow held up as a special advocate since he is wealthy and got that way through some very wise and shrewd business decisions and investments. That makes him no more competent to give advice on government or economics than any other wealthy business person.

    The books on Buffett I have read indicate that he is very much occupied with making his businesses grow, and the money those businesses make is his way of keeping score. The money he has personally accumulated along the way seems much less important to him. I suspect he would be very much more concerned about what the government might do to slow his business growth than what it would do with his personal fortune.

    What I see of Buffett of late is that he has changed from making some wise comments on and criticism of government actions (and some not so wise) to becoming a cheerleader for the economy, with no-brainer comments like (and I paraphrase here) “regardless of this problem of (fill in here with debt, unemployment, regulations, etc.) the American economy will do just fine as it always has”. Which sounds a lot like it will because that wise old sage Warren Buffett says it will.

  286. BillC,
    When a TOB correction is applied, that probably means the station type was changed as well, which probably also means the station is more likely urban/suburban rather than rural. The geographic distribution of stations without any TOB adjustment would be very interesting to examine.

  287. Kenneth Fritsch (Comment #100971)

    There is no doubt that he (Buffett) is a proponent of big government

    BNSF’s (Buffett’s railroad) revenue from coal is down; revenue from hauling oil and related equipment is way, way up.

    Warren is one smart dude. Credit where credit is due…the owner of the nation’s largest fleet of ‘death trains’ managed to manipulate the ‘climate concerned’ into going out and protesting against the Keystone Pipeline, which would have put a major dent in his oil-hauling revenues if it were built.

  288. @harrywr2 (Comment #100957)

    August 4th, 2012 at 12:45 pm
    bugs (Comment #100954)
    August 4th, 2012 at 11:46 am
    @David Springer

    Downsizing the United States of America was, is, and continues to be the political agenda driving the CAGW charade
    You have no evidence of that whatsoever. In all the ‘climategate’ emails, where is there one mention of any intention to do so?

    You’ll need to pore through KGB files…not ‘climategate’ files. It’s not so much a ‘climate concerned’ goal as a goal of the ‘hard left’.

    In other words, neither David nor you have any evidence of scientists scheming to bring down the West and Capitalism.

  289. bugs in his head,

    “In other words, neither David nor you have any evidence of scientists scheming to bring down the West and Capitalism.”

    Absolutely right, bugs, because you used the general term scientists. All or most scientists would obviously not be able to hide such a wide-ranging conspiracy, so a claim at that scale cannot possibly be correct. Of course, a smaller number could. Your statement is a quite common instance of creating a STRAWMAN. MoshPup, Stokes and others seem to be fond of it also.

    Now, the “no evidence” claim is a real joker. All we have to do is look at all the mitigation schemes of the environmentalists and the Gorebull Warming Scientists to see that they share, if not an exact match, at least a general pattern: they would ALL shrink the economy, and with the economy the number of people supportable on the planet. We also see that there is NO effort to enforce these mitigation schemes on the WORST contributors to the problem outside of the West.

    Basically bugs in the head, you reject the evidence that has been parading in the papers, IPCC reports, news releases, talks… for the last 20 years and more!!!

    DENIER!!!

    HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

  290. Buffett wants a strong guvmint. When he bought the $5 billion GoldSacks stake he fully expected the government would bail out the likes of AIG, guaranteeing a 100% payout. He had full confidence in Bernanke and Paulson.

  291. In these station analyses, is there an adjustment to the readings for altitude? I was taught that the reason Death Valley and the Dead Sea were so hot was partly due to the fact that they are below sea level…. meaning that they had just that much more compressing atmosphere overhead.
    Observe an expanse of cumulus clouds on a summer’s day and tell me that altitude doesn’t have an important bearing on air temperature. All the vapor from below precipitates at just the same height, producing a field of flat-bottomed clouds. If you are modeling climate at a particular place, it would seem you could create an expected temperature for any location based on altitude, latitude, sun position, albedo, ground heat capacity, ground moisture, air humidity, air pressure, wind speed, existing ground or water temperature and the profile of the atmosphere above; add to this cloud conditions and where the air came from the day before, and you should have an expected air temp. It might be that there is also a factor for ground heat conductivity … I’m a little unclear on the particulars of how ground heat works. Also, ocean currents would have a bearing on air temps at any position.

    Bottom line: if you wanted to know the typical and stable *surface* temperature of such-and-such a place at such-and-such a time, wouldn’t you just look for a lake and take readings of the temperature of the water? Air temperatures are too transitory, whereas water temperatures integrate the changeable temperature of the air into a stable reading.
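
    For a rough sense of the altitude effect asked about above: the standard-atmosphere lapse rate is about 6.5 K per km, so elevation differences translate into sizable absolute temperature differences (though anomaly-based trend analyses largely sidestep this, since each station is compared against its own baseline). A minimal sketch, using that standard value rather than any measured local lapse rate:

```python
# Standard-atmosphere lapse rate; real local lapse rates vary with humidity
# and weather, so this is illustrative only.
LAPSE_RATE_K_PER_KM = 6.5

def expected_temp_offset(elevation_m, reference_m=0.0):
    """Approximate temperature offset (K) relative to a reference elevation."""
    return -LAPSE_RATE_K_PER_KM * (elevation_m - reference_m) / 1000.0

# Death Valley sits roughly 86 m below sea level:
print(expected_temp_offset(-86.0))  # ~ +0.56 K warmer than at sea level
```

    This only accounts for the compressing-atmosphere part of the comment’s question; the rest of Death Valley’s heat comes from geography and surface properties.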

  292. bugs (Comment #100976)-Individual scientists may have commitments to partisan politics or political correctness-and academia is overwhelmingly left-wing. This doesn’t require that they understand or desire the consequences of the policies they advocate or help others to advocate. While ignorance on their part is arguably risible, it can’t really be helped. But just because scientists themselves do not hold an agenda of a self-aware nature, does not mean that they are not being used by people that do possess agendas of a sinister, fully self aware nature. Many scientists probably really believe their own hype. I’ve seen a pro-AGW presentation given to first year engineering students and I note that the presenter made the woefully incorrect claim that people can do something on the cheap-risible on the part of an engineer who ought to know better, but I recognized it-he had cribbed it from Tom Karl. Naïveté is the problem with most people taken in by “progressive” politics. This is the case for those scientists taken in by it too.

    But the agenda that is being discussed doesn’t cease to be of concern just because scientists don’t realize they are serving it.

  293. The scientists didn’t create the mitigation schemes. The link lists all the contributors to the mitigation report.

    Can’t see any of the usual climate specialists there. Looks like it does include a lot of economists and environmental economics specialists.

  294. no, moderated again, must be the link to the IPCC WG4 list of authors. None of the usual climate specialist scientists in there. It’s a completely different list. That conspiracy would have to be ridiculously big to hold together.

  295. Andrew_FL (Comment #100981) —

    Well stated. Your comment’s link may see some use in the coming months, as it is a thoughtful rejoinder to the oft-stated “So, you’re claiming Conspiracy!” argument in defense of consensus-favoring papers (no matter how silly their methods may be).

    And to go on to state the obvious: that mainstream climatologists say something, doesn’t make it wrong. (E.g., routine claims don’t require extraordinary proof; most will turn out to be correct.) Each assertion has to be evaluated on its own merits.

  296. So far, the science has been correct. Muller has only come up with what everyone else knew already.

  297. bugs (Comment #100988) —

    > Muller has only come up with what everyone else knew already.

    That’s one way of putting it. Personally, I give the BEST team’s accomplishments a lot more credit.

    > So far, the science has been correct.

    “The science” is a vague term that calls out for a definition. Judging by how bugs has used this notion in prior threads, his claim is not so.

  298. bugs 100988,
    The basic ‘science’ has never really been in doubt: increasing infrared-absorbing gases in the atmosphere is expected to cause warming. Few who have evaluated the technical substance doubt that. That doesn’t mean the whole of ‘the science’ is correct, especially the further away from the basics of radiative effects you venture: the net radiative forcing (net of aerosol effects) is very uncertain, computer models are uncertain (they diagnose a wide range of climate sensitivity to forcing based on a series of assumed parameters), and the consequences of warming, uniformly described by the concerned as looming catastrophes that will ruin the Earth’s biosphere and kill billions of people, are wildly uncertain.
    .
    I took the time to read a couple of threads at Eli’s blog yesterday (subject: morality of global warming), and it is obvious that the fundamental disagreement between Eli (and most of his readership!) and me is a disagreement about moral responsibility. While I see global warming as a subject of some concern, and one which MAY require action for the public good, proportional to the extent and future cost of reasonably well defined long term consequences, Eli and most of his readers clearly believe that it is immoral to not act now, independent of cost, financial or otherwise, even in the face of very large uncertainty. Much of Eli’s readership also appears to believe substantial differences in wealth, whether between countries or between individuals, are similarly immoral, and so want to address the ‘moral imperative’ of dealing with global warming in such a way that it simultaneously addresses the ‘moral imperative’ of reducing differences in wealth via a colossal redistribution. They appear to pretty much uniformly reject the entire global economic and social structure, based entirely on their moral/political POV. This joining of global warming and “social injustice” (leftist politics) is so common that I suspect even you would acknowledge it is real.
    .
    It is that simple. No amount of discussion is going to change those two incompatible moral POV’s. Ultimately, the choices will be mainly political in nature, and I suspect, will be based on projections of the consequences of global warming which are a whole lot more solid than what we currently have. My only concern is that the moral views of those who practice climate science could (and IMO already do) grossly bias the ‘scientific’ process and its conclusions. That is the only reason I participate in the discussion.

  299. “both Mosh and myself are unpaid volunteers”

    When people get on the internet and tell everybody they are Saving The World For Free, I can’t help but get a little tingly inside.

    You guys are my heroes.

    or

    Please leave a link as to where we can donate to the Unpaid Global Warming Propagandizers Fund.

    Andrew

  300. bugs (Comment #100988)-“So far, the science has been correct.”

    I concur with Amac that this is a vague statement. What is “the science” anyway? That there is a greenhouse effect? That there has been some warming? That humans are increasing greenhouse gas concentrations, thereby contributing to warming? If that is “the science” then I don’t think anybody ever doubted that those points were correct.

    But if “the science” encompasses sweeping claims about climate impacts, all manner of disasters and doom to come, then bugs’ claim is just completely indefensible. On this, “the science” is frequently demonstrably wrong.

    If “the science” refers to the magnitude of future changes or sensitivity…well, we’ll see won’t we? “So far” the projections aren’t right, but while at this point it looks like somewhat more modest future warming is most likely, there is no guarantee that will remain true. There are no guarantees in science anyway.

  301. Re: SteveF (Aug 5 07:38),

    What’s immoral in my opinion is that the mitigation schemes that are proposed do nothing to raise the standard of living of the people who currently don’t even have access to clean water, much less energy. Global warming will not have much effect on these people because they are already at risk. That’s the point that Lomborg raised years ago that caused him to be ostracized.

    Wealth transfer schemes to the less developed world have never resulted in improved living standards for the people who need it. The money always ends up in someone’s secret bank account in the Cayman Islands or wherever. Look at ‘poverty’ programs in the US. If we simply took the money we spend on those programs and gave it to the people we’re claiming to help, they would be better off. But a lot of bureaucrats would be out of their jobs.

  302. Re SteveF (Comment # 100951 – 8/4 – 11:10 a.m.:

    This little gem of a comment by SteveF, as short as it is, deserves some careful thought, especially the latter part which begins, “My personal guess is that…” This “guess” may have great importance not only for the global-tropospheric and tropical amplification issues, but also for the entire (yet related) issue of the water vapor feedback. Go take a look at Andrew Dessler’s 2009 paper on that feedback using AIRS satellite data, which he admits has a 10% error range right off the bat. He examines several Dec-Jan-Feb periods during intense El Ninos and compares them to similar periods of very strong La Ninas. He of course finds a relatively large SST change in the tropical Pacific. He also finds that during the El Nino periods, tropospheric water vapor increases, especially in the equatorial regions of deep convection. His spatial plots show this, but also show a decrease in water vapor in tropical/lower-latitude subtropical regions. But there is a globally averaged net increase in WV during the El Ninos, and this in turn is extrapolated into a long, slow but inexorable increase in the WV feedback in conformity with the GCM outputs.

    Is there anything troublesome about this methodology? We know that tropical deep convection is not a linear phenomenon. And do we know what is happening to the surface energy during El Ninos? Compare with, e.g., Wentz et al. (2007), “How Much More Rain Will Global Warming Bring?”; R.W. Spencer et al. (2007), “Cloud and Radiative Budget Changes Associated with Tropical Intraseasonal Oscillations,” GRL Vol. 34; and a number of papers by A. Sun and colleagues comparing observations vs. model outputs for WV LW positive forcing, Cloud LW positive forcing, Cloud SW negative forcing and atmospheric heat transport negative forcing in various equatorial Pacific regions of deep convection. [Sorry about the lack of references for the latter papers, but I don’t have them with me. The upshot of these papers is that net forcing is strongly negative and yet grossly underestimated in atmospheric and coupled models.]

    Steve, if you would care to “amplify,” be my guest. If I’m heading in a different direction than you had in mind, I apologize.

  303. SteveF (Comment #100972)

    The geographic distribution of stations without any TOB adjustment would be very interestIng to examine.

    Here’s the geographic (at least regional) distribution of raw trends in deg C from 1979-2007, for the 160 stations that had an entry for every year in this period, and here’s the geographic distribution in trend adjustments (linear fit to [Tob adjusted minus raw]). Er, x,y = long, lat.

    The highest Tob adjustments are in the midwest. The distribution of raw trends looks pretty random to the eyeball. [Update: well, the highest ones aren’t near coastlines.] I also did a scatterplot of Tob adjustments vs. raw trend, and there was a very slight, probably insignificant, positive correlation (higher raw trends correlated with higher Tob adjustments). Very small, and r^2 only 0.1. If this correlation had been significant, I’d be suspicious of a bias in the Tob adjustments. I guess so far, I’m not.

    This doesn’t address station siting or urban/rural, but I’m not inclined to do anything further because I noticed that the classified stations on the USHCN website are less than half the total, and I wouldn’t even begin to trust the urban/rural thing. Best to leave that to AW. I wouldn’t be shocked if at the end of the whole thing, it turns out that there is say 10% inflation of the trend – but in this morass, it will be hard to find someone who would spend the necessary time and not be subject to major confirmation bias.
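
    The per-station exercise BillC describes (raw OLS trends for 1979-2007, plus a correlation between raw trends and TOB-adjustment trends) can be sketched as below. The station series here are synthetic stand-ins, since the actual USHCN files aren’t reproduced in this thread; the trend magnitudes are chosen only to be plausible, not to match the real data:

```python
import numpy as np

def ols_slope_per_decade(years, values):
    """Least-squares slope of values vs. years, in deg C per decade."""
    return np.polyfit(years, values, 1)[0] * 10.0

years = np.arange(1979, 2008)  # 1979-2007, stations with an entry every year
rng = np.random.default_rng(0)

raw_trends, adj_trends = [], []
for _ in range(160):  # 160 stations, as in the comment
    # Stand-in raw annual means: a modest trend plus weather noise
    raw = 0.02 * (years - years[0]) + rng.normal(0.0, 0.5, years.size)
    # Stand-in TOB adjustment series (TOB-adjusted minus raw), drifting slowly
    adj = 0.004 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
    raw_trends.append(ols_slope_per_decade(years, raw))
    adj_trends.append(ols_slope_per_decade(years, adj))

# Correlation between raw trend and TOB-adjustment trend across stations;
# a small r^2 (BillC reports ~0.1) argues against a bias in the adjustments.
r = np.corrcoef(raw_trends, adj_trends)[0, 1]
print(f"r^2 = {r**2:.3f}")
```

    With the real data, `raw` and `adj` would come from the NCDC raw and TOB-adjusted station files, and one could plot the trends against station longitude/latitude to reproduce the geographic maps.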

  304. Since we’re allowed to speculate here, I wonder if changing from CRS to MMTS prompted a shift from good to less-good sites in some cases, due to the need for an electrical hookup? That could be a source of some bias that might be hard to correct for, since you’d be switching location, instrumentation and maybe Tob all at once…?

  305. DeWitt Payne in Comment #100993 said:

    “What’s immoral in my opinion is that the mitigation schemes that are proposed do nothing to raise the standard of living of the people who currently don’t even have access to clean water, much less energy.”
    ______________

    You could say that about a lot of things. Air bags in American cars, for example, don’t raise the standard of living in poor countries, but I don’t think we are immoral for spending extra on air bags instead of giving the money to the world’s poor.

  306. Max_OK:

    Air bags in American cars, for example, don’t raise the standard of living in poor countries, but I don’t think we are immoral for spending extra on air bags instead of giving the money to the world’s poor.

    Doubt anybody would argue with you on that, but at some point, spending more e.g. on one’s own nations health care program and leaving other poorer nations on their own (countries whose poverty was created in part by one’s historical colonialist policies) does start getting into gray territory.

  307. BillC (Comment #100995)

    Wouldn’t you have to account for the variation in total station density within the CONUS when you do an analysis like this one?

  308. Kenneth,

    Absolutely, if I were trying to come up with a CONUS average, but I’m not; I’m just trying to look at trend patterns of individual stations or groups.

  309. Re: Max_OK (Aug 5 15:43),

    The primary argument for mitigation is that we need to do it because it will make the rest of the world better, or at least keep it from getting worse, not so much for our own benefit. Btw, I can make what I consider a reasonable argument that air bags do more harm than good and can’t be justified on a cost/benefit basis. In other words, wear your lap and shoulder belt. It might keep the airbag from killing you.

    They don’t put airbags in race cars where the probability of a high speed crash is far greater.

  310. BillC,
    ” That could be a source of some bias that might be hard to correct for since you’d be switching location, instrumentation and maybe Tob all at once…?”
    Sure would seem so. Messy.

  311. Leigh Kelly 100995,
    Not yet ready to amplify much on this; still thinking about it. But a couple of things are clear: 1) the groups who regularly calculate the satellite microwave temperature trends (UAH and RSS) have heard all the supposed “improvements” in calculation methods that would increase the lower tropospheric trend to be more in line with modeled amplification, and find them wanting. 2) the long term satellite trend for CONUS, where amplification is expected to be minimal, is very close to the CONUS surface station trends… If the satellite calibrations are so terribly wrong over the oceans, how do they manage to return to good calibration over the CONUS?
    .
    The most reasonable interpretation is that tropospheric amplification over the ocean must be on average near zero in the long term, and something else like ENSO is driving the apparent short term amplification of ocean surface warming. It may end up being a case of presuming an effect is a cause.

  312. SteveF (Comment #101004)-“the long term satellite trend for CONUS, where amplification is expected to be minimal, is very close to the CONUS surface station trends… If the satellite calibrations are so terribly wrong over the oceans, how do they manage to return to good calibration over the CONUS?”

    I’m not sure how you figure there is minimal amplification (i.e. about one) or that the CONUS trends suggest long-term amplification of about one. The trend and variability in the satellites were lower in the analysis I recently carried out, and if amplification were about one, that would suggest a substantial warm bias in the USHCN data. Based on inter-annual fluctuations, the amplification of LT was substantially less than one, and the trends adjusted for that factor are consistent with each other. I think you are right that the USHCN data supports the satellite analysis not being “too cool,” so to speak. But the USHCN data seems to imply, at least there, that amplification factors don’t vary with timescale, which seems contrary to your idea that both the surface data and satellite data are correct globally, since that idea requires that short-term amplification is an ENSO artifact or something.

    In short, I don’t think the interpretation that long-term amplification is different from short-term is the “most reasonable.” It’s certainly not more reasonable than surface data bias outside the US.

  313. Re: TLT amplification.

    Most of my current thinking on this subject was triggered by this blog posting by Isaac Held.

    It suggested a few points:

    1) There doesn’t seem to be much missing in the physics of how surface temperatures affect the lower troposphere

    2) Amplification in the tropics appears to be inline with expectations*

    3) Land surface temperatures border on being irrelevant to large scale land-ocean TLT averages. It’s the ocean that matters.

    Further to this, the key to my thinking lies in the fact that expected TLT/surface amplification is very heterogeneous geographically. Land is distinct from ocean, but also amplification over oceans differs by latitude.

    I would suggest that there may not be a significant discrepancy between surface and TLT observations: that they are consistent with what models would expect. The main reason for the apparent discrepancy at global average scale between what coupled model runs predict and what is observed could be that warming at the surface has occurred in a different spatial distribution than suggested by those models.

    There are three key areas in which the models don’t encompass observed spatial behaviour – the magnitude of the land-sea warming contrast, the extent of warming bias towards NH high latitudes, and the lack of TLT/surface amplification. It’s my contention that these three are linked: the lack of TLT/surface amplification is primarily caused by a disproportionate amount of warming being concentrated in areas which are expected to have “amplification” of less than 1.

    * The land-ocean tropics trend in UAH is roughly half of that in RSS, so that wouldn’t be so clearly consistent. One of the major problems with interpreting this stuff is that inter-annual variance is so dominant just getting a gauge on basic trends is difficult with any useful degree of certainty.

  314. Paul_S,

    I think I mostly agree with you but:

    * The land-ocean tropics trend in UAH is roughly half of that in RSS, so that wouldn’t be so clearly consistent.

    What!?? Where can I go to see this? I know where to get UAH data by region…what about RSS? UAH has trends in deg C/decade as follows: Tropics 0.07, Tropical land 0.09, Tropical ocean 0.06.

    Also UAH global ocean trend is 0.11, which squares with some amplification compared to the global satellite SST trend of 0.084-ish.

  315. BillC,

    RSS Land-ocean TLT data by zones. Second column is tropics (I’m assuming, possibly incorrectly, that UAH also defines tropics as -20 to 20º).

    Isaac Held’s post says he calculated the 1979-2008 tropics trend at 0.13ºC/Dec. Over the same period I got ~0.06ºC/Dec for UAH. For the full record up to June 2012 I make the RSS trend 0.12ºC/Dec.

  316. BillC
    RSS publishes monthly TLT series for land+ocean, land only, and ocean only. Second column of data is 20S-20N, which seems to correspond to UAH’s “tropics” region. Over the common period Jan1979-Jun2012,
    I get OLS trends (in K/decade) as follows:
    UAH tropics L+O: 0.068
    RSS tropics L+O: 0.115
    UAH tropic ocean: 0.055
    RSS tropic ocean: 0.111
    UAH tropic land: 0.093
    RSS tropic land: 0.125
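
    The trend numbers above come from ordinary least squares on monthly anomaly series over the common period Jan 1979 - Jun 2012 (402 months). A minimal version of that calculation, run here on a synthetic series standing in for the actual RSS/UAH downloads:

```python
import numpy as np

def trend_k_per_decade(monthly_anoms, start_year=1979.0):
    """OLS trend of a monthly anomaly series, in K per decade."""
    t = start_year + np.arange(monthly_anoms.size) / 12.0  # decimal years
    return np.polyfit(t, monthly_anoms, 1)[0] * 10.0

n_months = 402  # Jan 1979 through Jun 2012
t = np.arange(n_months) / 12.0
rng = np.random.default_rng(1)
# Synthetic tropics series: a 0.11 K/decade built-in trend plus monthly noise
series = 0.011 * t + rng.normal(0.0, 0.2, n_months)
print(f"{trend_k_per_decade(series):.3f} K/decade")
```

    Applied to the real RSS and UAH zonal files, the same function reproduces the kind of K/decade figures tabulated above (differences between the two datasets then come from the data, not the method).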

  317. Paul S, HaroldW – I’ve confirmed what you guys said. I didn’t know about this variability and it puzzles me. I always thought the two agreed better than that (based on the apparent agreement in global trends at 0.13-0.14K/decade). I know there’s been lots written about this and there is a paper discussed above, but why the differences? UPDATE: UAH CONUS +0.23, RSS CONUS +0.19. It would be pretty cool to analyze the trends and variability in the two satellite series over CONUS vs. USHCN or Berkeley (Andrew_FL – you did this?)

  318. Right now it seems to me that RSS is agreeing better with the models, whereas UAH is agreeing better with the land surface record? (thinking about dist. by latitude)

  319. Paul S (Comment #101006)-Claim 1 I don’t know about, but claim 2 is just baffling and flat wrong. The common approach taken by those who don’t want the models or the surface data to be wrong seems to be to take whatever atmospheric data agrees, regardless of accuracy, and ignore the data that disagrees. To that end:

    “The land-ocean tropics trend in UAH is roughly half of that in RSS, so that wouldn’t be so clearly consistent.”

    The assumption that amplification is “there” in the tropics seems to depend on the assumption that RSS is the superior satellite dataset and therefore UAH is wrong and may be disregarded. On the contrary, there is a clear shift in the RSS data around about 1992 relative to numerous other datasets, and it is that shift which primarily causes the difference between UAH and RSS-that shift produces a warm bias in the RSS trend. The “agreement” of RSS with models is therefore completely spurious.

    BillC (Comment #101011)-I compared NCDC’s USHCN data to UAH: the trend and variability in UAH were lower. When I multiplied the UAH anomalies by the factor by which variability was amplified at the surface, the trend difference disappeared; in other words, UAH agrees well with the CONUS trend once enhanced surface variability is accounted for. If I had used RSS I might have found a warm bias in the surface data, but I didn’t do this because of RSS’s known biases. It appears those biases (due to diurnal drift corrections that appear excessive) are cool over the US.
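
    One way to read the rescaling step described above: estimate the interannual amplification factor by regressing LT anomalies on surface anomalies, then scale the LT series by that factor before comparing trends. A sketch with synthetic annual anomalies; the 0.8 factor is illustrative only, not a measured CONUS value:

```python
import numpy as np

def amplification_factor(surface, lt):
    """Regression slope of LT anomalies on surface anomalies
    (the interannual amplification factor)."""
    return np.polyfit(surface, lt, 1)[0]

rng = np.random.default_rng(2)
n_years = 34
x = np.arange(n_years)
# Synthetic annual anomalies: surface with trend plus ENSO-like noise,
# LT varying at ~0.8x the surface (illustrative number only)
surface = 0.02 * x + rng.normal(0.0, 0.3, n_years)
lt = 0.8 * surface + rng.normal(0.0, 0.05, n_years)

amp = amplification_factor(surface, lt)
lt_rescaled = lt / amp  # scale LT up by the interannual variability ratio

# After rescaling, the trend difference should be near zero by construction
residual_trend = np.polyfit(x, surface - lt_rescaled, 1)[0]
print(f"amplification ~ {amp:.2f}, residual trend {residual_trend:+.4f}/yr")
```

    With real USHCN and satellite series, a residual trend near zero after this rescaling is what “agrees well, accounting for enhanced surface variability” amounts to; a large residual would indicate a genuine trend discrepancy beyond the variability difference.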

  320. RSS’s known biases. It appears those biases (due to diurnal drift corrections that appear excessive)

    Andrew_FL – some here may disagree, but thank you for a reason (seriously).

  321. Andrew FL – there’s nothing baffling or wrong about it. The consistency with RSS is laid out in Isaac Held’s post, and I’ve clearly noted that the same wouldn’t clearly apply for UAH.

    Meanwhile, you’re attempting to paint this as ‘baffling and wrong’ simply by dismissing the data that disagrees with you.

    Note that Held’s post is about a model fed with prescribed SSTs from observations. The agreement between the model output and RSS suggests a good degree of agreement between the SST dataset and RSS.

  322. Paul S (Comment #101015)

    Is there a link here to comments Held made on the troposphere warming-trend amplification as derived by model and compared to the RSS and UAH observations?

    When replying to Douglass et al. on the amplification factor in the tropics, Santer et al in their paper presented lots of satellite and radiosonde observational data to be compared with model data. Santer’s point was basically that, given the huge CIs, it cannot be hypothesized that there is a significant difference between models and observations. However you come down on this debate, it is rather obvious that the CIs are large and need to be stated when discussing these issues.

    I see a discussion here that appears to neglect these CIs entirely. Given the “variety” of data available in this analysis, I would think it would be rather easy to “pick” some data to make one’s point.

    Also, I must go back and find the reference, but I saw an article by Christy and Spencer noting that the RSS people had used an arbitrary choice of diurnal correction to their data because it fit better with the models.

  323. Paul S,

    I never looked in much detail at the Isaac Held post. I especially never read the responses, especially Christy’s (and the one that comes after it). Isaac Held is good at cutting off discussions that appear to be strongly chaotic! Wonder how many comments he had to disappear (note: nothing wrong with that, it is consistent with “heavy” moderation on his blog as he states).

    The agreement between the model output and RSS suggests a good degree of agreement between the SST dataset and RSS.

    If the model is correct.

  324. Paul S,
    I had read that Isaac Held blog post once before, but it was worthwhile reading again. In the comments that followed, John Christy and Isaac had a rather testy exchange where Christy pointed out that adopting the RSS data (which better supports agreement with the models) was not justified due to known issues with the RSS data, and that substituting the UAH data shows a clear discrepancy with the models.
    .
    So which is correct: RSS or UAH? I don’t know, but I think Richard Lindzen’s observation about climate science can be suitably applied here: whenever there is a discrepancy between data and models, reasons are always searched for (and more or less always found!) for why the data is wrong. Kuhn noted that a dedicated search for reasons why the data must be wrong is quite normal… just prior to a paradigm shift.

  325. Paul S (Comment #101015)

    I do see the link to Held – which I will now read.

  326. Note the comment by Chris Holloway and response by Held – hinting at SteveF’s comment above which Leigh Kelly amplified.

  327. I read the Held blog linked above and came away with a feeling of potential cherry-picking of the data and the means of presenting it. I agree that the trends should be normalized to ratios or differences between troposphere and surface temperatures.

    Below is the link and excerpt I found of the critique of the RSS diurnal correction.

    http://www.appinsys.com/GlobalWarming/SatelliteTemps.htm

    “The RSS scientists Mears and Wentz (“The Effect of Diurnal Correction on Satellite-Derived Lower Tropospheric Temperature”, Science, September 2005) made the following statements
    [http://www.sciencemag.org/cgi/content/full/309/5740/1548?ijkey=ab2f73117791e29249a57fe02ac0664e0a0b2746]
    • “Much of the surface warming of Earth observed over the past century is understood to be anthropogenic”
    • “TLT, or temperature lower troposphere showed cooling relative to the surface in many regions of Earth, particularly in the tropics. This finding is at odds with theoretical considerations and the predictions of climate models.”
    • “We used 5 years of hourly output from a climate model as input to a microwave radiative transfer model to estimate the seasonally varying diurnal cycle”
    • “Satellite-based measurements of decadal-scale temperature change in the lower troposphere have indicated cooling relative to Earth’s surface in the tropics. Such measurements need a diurnal correction to prevent drifts in the satellites’ measurement time from causing spurious trends. We have derived a diurnal correction that, in the tropics, is of the opposite sign from that previously applied. When we use this correction in the calculation of lower tropospheric temperature from satellite microwave measurements, we find tropical warming consistent with that found at the surface and in our satellite-derived version of middle/upper tropospheric temperature.”

    In other words, if observations don’t match the theoretical models, adjust the data.”

  328. Kenneth,

    Isaac Held seems pretty evenhanded. It would be interesting to replicate his analysis with UAH. I suspect he’d let me try if I asked for the data from his model runs.

    Bill

  329. Zeke, you are surprised that the newer siting classification criteria would increase the number of stations classified as well sited. But observe that many stations will be near such small heat sources as sidewalks, and thus under the simpler criteria were grouped with those near a large heat source such as a parking lot. Thus stations previously classified as poorly sited may now be rated better. (I have not looked at the break points in the five-step scale specifically regarding sidewalks; I just make the general point of principle.)

    You raise questions in other areas that merit investigation, notably the possible relocation of stations when they were automated (to be closer to power). Watts has questioned why data from the new stations was adjusted upward to match old stations. Off the top, one would expect locations closer to power to be closer to heat sinks/sources thus more in error. (Though they could be solar powered with low data rate RF transmission, at a cost of course.) I think Watts’ logic is that the new equipment should be more accurate, so should not be calibrated (in effect) to the old – thermometry can be calibrated independently. Have Muller et al justified their approach?

    I am very wary of all these “adjustments” of various kinds, be they inter-calibrations (an apt term from the Rabett Run blather-blog) or homogenization (see McIntyre’s questions on that) or whatever – there is high risk of contaminating good data with bad, and high risk of errors from extrapolating from one location to another.

    I have to take the Missouri position that none of the instrumental data is useful for climate analysis. (Missouri is the “Show Me” state in the US. 🙂 )

    The real question in the debate is whether or not humans are to blame for most of any warming (or cooling, which alarmists were claiming in the 1970s) that may be occurring. Even Muller does not provide proof in his 2012 paper; he just concludes there is a correlation between temperature and the logarithm of CO2 concentration. He then assumes humans are the source of the increase in concentration – ignoring serious questions about sources of the increase in CO2. Serious questions have been asked about the physics of CO2’s effect on the atmosphere – a good case is made from physics and empirical engineering formulae that it is limited to an amount almost reached at the current concentration. Questions have been asked about the relative timing of CO2 and temperature increases. (Keep in mind that correlation does not prove causation, though Muller says he tested a few other theories such as solar, but his data source seems sparse and he seems to omit some solar theories such as cloud seeding.) Serious questions have been asked about alarmist claims of past temperatures – indeed, in 2004 Muller pointed to McIntyre and McKitrick having shown that Michael Mann’s “hockey stick” claim is false, and since then several scientific reports and archaeology digs have confirmed that there was a Medieval Warm Period.

  330. Meanwhile blather continues, such as Nick Stokes’ meaningless “there is scope for bias” in http://rankexploits.com/musings/2012/initial-thoughts-on-the-watts-et-al-draft/#comment-100584. Reminds me of “talking heads” on TV.
    Worse is “dhogaza” playing the “argument from authority” smear in http://rankexploits.com/musings/2012/initial-thoughts-on-the-watts-et-al-draft/#comment-100599.

    Thank you “rk” in http://rankexploits.com/musings/2012/initial-thoughts-on-the-watts-et-al-draft/#comment-100601 for pointing to the immature state of climate science. The real revelation in the various findings about temperature measurement is that it is not easy to do right for purposes of global climate. Most people assume it is easy, as they think of a simple thermometer (and don’t think of its calibration, or location such as a few feet away from their house, in the sunshine).

  331. “In other words, if observations don’t match the theoretical models, adjust the data.”

    Satellite “data” is not an observation. It is a data product that results from raw sensor inputs and a very long processing chain that relies on models and assumptions. It is model output.

  332. Andrew_FL (Comment #101005)
    August 5th, 2012 at 10:04 pm
    SteveF (Comment #101004): “the long term satellite trend for CONUS, where amplification is expected to be minimal, is very close to the CONUS surface station trends… If the satellite calibrations are so terribly wrong over the oceans, how do they manage to return to good calibration over the CONUS?”

    ##################

    There are two curves used for turning brightness into temperature. The curves differ depending on whether you are over ocean or land… hmm, can’t recall the reason.

  333. Captain,

    Don’t you want sea surface temperatures, not upper 300M heat content?

    BTW anyone know if you can get gridded UAH data? I’ve run across some not-so-old blog posts where it was used, but I can’t find it currently.

  334. BillC, With RSS and UAH tropics being an issue, I takes whats I can gets 🙂 Though it would seem likely that if tropical sea surface temperature was warming, the top 300 meters would be warming.

  335. Steven Mosher (Comment #101028)

    “Satellite “data” is not an observation. It is a data product that results from raw sensor inputs and a very long processing chain that relies on models and assumptions. It is model output.”

    Steven, your quip here indicates that we cannot differentiate between climate model output and satellite output when it comes to model output versus observation. It has been my experience that most climate scientists refer to the satellite measurements as observations. Would you care to provide some details for why you term satellite data as model outputs? And how would your definition of satellite output differ from surface station output after it is adjusted?

  336. BillC (Comment #101024)

    “Isaac Held seems pretty evenhanded. It would be interesting to replicate his analysis with UAH. I suspect he’d let me try if I asked for the data from his model runs.”

    I doubt that his runs were made exclusively for his blog post. Are they not available at some repository? That Held is even-handed has nothing to do with the analysis, since I am not questioning motivations.

  337. Easy, Kenneth: because I’ve looked through the code that takes you from a sensor input to a data product output. It would include code that you also find in a GCM.

    The fact that for convenience people refer to satellite data as if it were an observation doesn’t bother me, except when people try to give that data some kind of epistemic PRIORITY based on the word “data”.

    In short you have two products, both the outputs of models. You can’t use the model/data categorization to determine ON ITS FACE which is correct. You are comparing the outputs of models in both cases.

  338. Held’s model runs are in the PCMDI database, under CMIP5, AMIP experiment, gfdl hiram180 model. I’ve downloaded one run, 4 GB across 5 NetCDF time slices for the atmospheric temperature data. Currently trying to build some scripts to check the spatial amplification distribution and hopefully provide a means to better quantify my hypothesis.
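    A per-gridcell trend field of the kind those scripts would compute can be sketched in a few lines of numpy. This is a toy sketch with a synthetic array standing in for the NetCDF temperature field; all names here are my own, not CMIP5 conventions:

```python
import numpy as np

def gridded_trend(temps, years):
    """OLS warming trend (deg C / decade) at each grid cell.

    temps: array of shape (time, lat, lon); years: array of shape (time,).
    """
    nt, nlat, nlon = temps.shape
    flat = temps.reshape(nt, nlat * nlon)
    slopes = np.polyfit(years, flat, 1)[0]    # deg C / yr, one per cell
    return 10.0 * slopes.reshape(nlat, nlon)  # convert to deg C / decade

# Toy field: every cell warms at exactly 0.02 C/yr (0.2 C/decade)
years = np.arange(1979, 2012, dtype=float)
temps = 0.02 * (years - years[0])[:, None, None] + np.zeros((len(years), 3, 4))
trend = gridded_trend(temps, years)
print(np.round(trend[0, 0], 3))  # → 0.2
```

    With real data the `temps` array would be read from the downloaded NetCDF slices (e.g. with the netCDF4 or xarray packages), and an amplification map would be the element-wise ratio of a tropospheric trend field to the surface trend field.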

    Re: rss vs. uah, I think Kenneth makes a timely point about CIs. It’s why I referred to uah data as ‘not clearly consistent’ rather than ‘inconsistent’: even though me no brain enough to calculate appropriate CIs I suspect even the low trend uah data would be consistent with tropospheric amplification when uncertainties are taken into account. Of course, that doesn’t say much for simply being consistent.

  339. Mosher,
    Sure, and the ‘output’ of a mercury-in-glass thermometer is also a ‘model’, if you insist. The issue is more a) the complexity and b) how many degrees of freedom the modeler has in interpreting the ‘raw’ data. Microwave temperature data certainly does not approach mercury in glass, but neither is it based on a multitude of modeling choices as in GCMs. Drawing an equivalence between these doesn’t really address Kenneth’s points in any meaningful way.

  340. SteveF,

    and insisting that you can decide the question merely by saying that one is “model” while the other is “data” is mistaken.
    They are both models; therefore, the decision cannot be made strictly on the basis of “model” versus “data”, but will depend entirely on a comparison of the models as models.
    Further, the number of modelling choices you have is also an important factor, but one cannot simply read off the answer from the number of modelling choices.

    In short you actually have to do the work. Not:
    A) call one a model and the other data and think you’ve done anything
    B) count or estimate which is more complex and think you’ve done anything.

  341. Steven, you might want to reprimand this august list of coauthors who had the temerity to use “observed” and “modelled” with reference to observed satellite measurements and climate model output.

    https://www.llnl.gov/news/newsreleases/2008/NR-08-10-05-article.pdf

    Consistency of modelled and observed temperature trends in the tropical troposphere

    B. D. Santer, P. W. Thorne, L. Haimberger, K. E. Taylor, T. M. L. Wigley,
    J. R. Lanzante, S. Solomon, M. Free, P. J. Gleckler, P. D. Jones, T. R. Karl, S. A. Klein,
    C. Mears, D. Nychka, G. A. Schmidt, S. C. Sherwood, and F. J. Wentz

    But, seriously, Steven, in giving you the benefit of the doubt I am attempting to decipher what you are saying, to understand the modeling implications for the satellite output. The measurement is of radiation detected from a satellite in space against a calibrated reference. In order to classify the measurements as originating from a defined lower and middle troposphere and stratosphere, the data are weighted. Adjustments are or were made for satellite orbital decay and diurnal considerations. From that I see nothing much different from what has to be done in general when measuring and adjusting surface temperatures. What am I missing? In fact the satellite measurements, like the surface ones, suffer from the original intended use being for weather and not climate. You almost sound as though you are saying there are model choices to be made in how the satellite data is handled, and that those choices can affect the final adjusted result.

    Would you consider the BEST algorithm/approach a model?

  342. If I didn’t know better, I’d think Steven Mosher prefers the “data” that gives the answer he likes the most.

    Seemingly petty and unscientific… if I didn’t know better.

    Andrew

  343. Re: SteveF (Aug 6 17:14),

    I suggest that you read something about remote observation by passive microwave emission. It’s nothing at all like mercury in glass. In fact, the problem of turning the observed microwave intensities into temperature is ill-posed. There are an infinite number of solutions that will produce the same readings. Now you can get rid of a lot of those solutions by using other criteria like smoothness, but there will never be a unique solution as there is with, for example, a liquid in glass or thermocouple thermometer.

    First, the emission from each microwave band doesn’t come from a specific altitude. It comes from a broad range of altitudes that overlap. And if that weren’t bad enough, there’s noise too. Until the AQUA satellite was launched, which has station-keeping capability (and which RSS doesn’t use, AFAIK), the orbits drifted both in the time of equator crossing and in altitude. At the lowest level, TLT, radiation from the ground is included in the reading. That’s why you don’t get satellite data from Antarctica: the surface is too high. So to derive the TLT numbers, you have to model emission from the surface and subtract it. And of course the surface emission varies with temperature and time of day.
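    The ill-posedness is easy to demonstrate numerically: with only a couple of broad, overlapping channels and many atmospheric layers, distinctly different temperature profiles reproduce exactly the same readings. A toy sketch, using invented Gaussian weighting functions rather than real MSU weights:

```python
import numpy as np

# Two broad, overlapping weighting functions over 10 atmospheric layers
z = np.arange(10)
W = np.vstack([np.exp(-0.5 * ((z - 3) / 2.0) ** 2),
               np.exp(-0.5 * ((z - 6) / 2.0) ** 2)])
W /= W.sum(axis=1, keepdims=True)      # normalize each channel

T1 = 290.0 - 6.5 * z                   # a plausible lapse-rate profile (K)

# Any perturbation in the null space of W leaves both readings unchanged
_, _, Vt = np.linalg.svd(W)
null_vec = Vt[-1]                      # orthogonal to both weighting rows
T2 = T1 + 5.0 * null_vec               # a visibly different profile

print(np.allclose(W @ T1, W @ T2))     # → True: identical brightness temps
```

    With 2 channels and 10 layers there is an 8-dimensional family of profiles the instrument cannot distinguish, which is why extra constraints such as smoothness are needed to pick a solution.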

    The radiosonde data also has problems so there is no ASTM or NIST or whatever standard reference method for comparison. So there is no solid reason to prefer either RSS or UAH and especially to use either to cross-validate with the results of GCM’s even if they use measured SST’s rather than an ocean model in the GCM.

    Here’s a plot of RSS-UAH monthly data for the tropics land plus ocean. Looking at the plot, guess when the AQUA satellite was launched.

  344. Mosher,
    “In short you actually have to do the work.. ”
    I suppose you do. The microwave brightness is linearly proportional to temperature at frequencies far from the emissive peak frequency, and all atmospheric temperatures seem to satisfy this requirement. That part seems not in doubt. How things like diurnal variation and orbital decay influence the results is more complicated, and for sure angle of view also comes into play. But there isn’t anything I’ve read about that is remotely like cloud parameterizations and arbitrary aerosol influences. In the spirit of avoiding work (as you seem to suggest is my wont), why not rely on some work others have already done?
    .
    Wiki
    “Globally, the troposphere is predicted by models to warm about 1.2 times more than the surface; in the tropics, the troposphere should warm about 1.5 times more than the surface.”
    Pretty clear. And at odds with both RSS and UAH, which show on average essentially no tropospheric amplification globally except in the short term (and yes, I have downloaded the data and ‘done the work’). The globally averaged net ‘amplification’ is a bit under 1. Perhaps there are reasonable explanations for the discrepancy, but calling the microwave data ‘a model’ doesn’t contribute much to figuring out why the discrepancy exists.
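    The amplification comparison described here is just a ratio of OLS trends. A minimal sketch with synthetic monthly anomaly series; the 0.15 and 0.18 C/decade slopes are fabricated inputs chosen to mirror the ~1.2 factor quoted from Wiki, not measured values:

```python
import numpy as np

def trend_per_decade(series):
    """OLS slope of a monthly series, in units per decade."""
    t_years = np.arange(len(series)) / 12.0
    slope_per_year = np.polyfit(t_years, series, 1)[0]
    return 10.0 * slope_per_year

# 30 years of synthetic monthly anomalies (time axis in decades)
t = np.arange(360) / 120.0
surface = 0.15 * t          # surface warming 0.15 C/decade
tlt     = 0.18 * t          # troposphere warming 0.18 C/decade

amp = trend_per_decade(tlt) / trend_per_decade(surface)
print(round(amp, 2))  # → 1.2
```

    On real UAH/RSS and surface anomaly series the fits are noisy, so the ratio of trends comes with wide confidence intervals, which is part of why stated amplification factors are hard to pin down.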

  345. DeWitt,
    Thanks. I was actually already aware of those issues, including the “altitude overlap” for different frequencies. None of that makes the microwave temperature measurement “remotely” like a climate model. 😉

  346. SteveF:

    And at odds with both RSS and UAH, which show on average essentially no tropospheric amplification globally except in the short term (and yes, I have downloaded the data and ‘done the work’).

    Indeed, it’s at odds with all of the data sets, AFAIK, including surface-based sounding data.

  347. Carrick,
    At odds until you twist the data into a pretzel to make it fit the models. IMO, this controversy is symptomatic of where climate science goes off the rails. My experience is that lots of data are very seldom wrong; it is our understanding that is most often far from perfect. If you look at the number of pretzel-twisting papers published on this single issue (the data/model discrepancy on tropospheric warming) you begin to see it is a field that is struggling to come to grips with the data. Maybe Mosher is right and we can chalk it all up to the effects of ‘post normal science’. But whatever the cause, it makes my BS antenna go on high alert.

  348. Guys, if you think RSS is better, look at what they do over the US. If you’re a skeptic and like that their diurnal corrections make the USHCN data look bad, it is perfectly intellectually honest for you to eschew the UAH data… as long as you also do so on a global basis. For my part I took the UAH side long ago – even when many skeptics switched to RSS recently because it is (INCORRECTLY!) cooling relative to UAH – which is either out of ignorance or intellectual dishonesty. The reason RSS must be incorrectly cooling relative to UAH in recent years is that UAH’s backbone during the period has been the station-keeping, thruster-equipped AQUA, which needs minimal diurnal correction. RSS is using a satellite which is drifting (warm, as I understand it, but they have excessive diurnal corrections that turn that into cooling).

    One argument that has often been heavily implied about satellite data is that it agrees with radiosondes, which proves it’s wrong! Well, not in so many words. But everybody knows radiosondes are crap, so I’ve seen it strongly implied that agreeing well with radiosondes is not a point in their favor but against them. Nevertheless, the 1992 shift of RSS relative to UAH is also present relative to several other datasets. RSS even seems to shift relative to surface data at that time. Several old posts at Jeff’s examined this issue, but digging up the background on those would be a bit too much effort for me. A good start in the literature is:

    http://journals.ametsoc.org/doi/abs/10.1175/JTECH1937.1

    Which explains UAH’s hypothesis for why the step is present in the data. Another paper:

    http://www.agu.org/pubs/crossref/2008/2007JD008864.shtml

    Independent of the UAH group, confirms that RSS’s corrections for diurnal drift appear to be inferior to UAH’s.

    This:

    http://pielkeclimatesci.files.wordpress.com/2010/09/r-358.pdf

    Examines several more datasets and carefully discusses issues with each of them. The step difference with RSS gets attention, and it can be seen that it is relative to basically all datasets.

    Brightness information goes through a theory-based model, as I understand it, to calculate temperatures. This is, I have assumed, relatively solid physical theory; but climate models are not mere mathematical expressions of physical theory – if they were, they wouldn’t disagree with each other about important things. If not, I’d be interested to know what the strengths and weaknesses of the models are.

    But I think Steven Mosher is often carrying baggage from WUWT arguments with the unwashed masses over to here. Some over there express sentiments that if something is a “model,” it is a priori wrong. That’s only true in economics. 😉

  349. See here for a comparison of satellite SST and RSS MSU trends (K/decade) by latitude.

    phi – got a handy decoder ring for that UAH gridded data? It makes no sense to me and I can’t find the bit in the readme file where it tells me what’s what.

  350. And I’d be interested in hearing why HadSST2 (per WFT) is 0.13K/decade from 1982-current where the satellite SST is 0.083. Yes I understand there are some places where satellites don’t get it right, and the satellite SSTs are only at night. Maybe when I have time I’ll look at the HadSST2 data by latitude. Meanwhile you’ve got skeptics (Tisdale and Pielke) using the satellite data while Held’s post referenced earlier uses HadSST2 to prescribe the HIRAM model. Imagine my surprise if the satellite SSTs line up better with UAH (gimme the decoder ring) than with RSS. But since Carrick told me GISTEMP LOTI uses satellite SST data since it’s been available, my head hurts.

  351. Pardon me for being a little late to the discussion. But I think Dave Springer’s statement “Prevailing winds are east-to-west in the SH, Pacific east of Australia is a hot spot, yet Australia is not warmed like the US is. How do you rationalize that?” is not correct. Prevailing winds in the SH are west to east, like in the NH. I believe our Highs and Lows and water down the plug-hole rotate in different directions, though. A look at the weather maps will confirm what I say, unless, of course, I completely misunderstand the meaning of “prevailing”.

  352. BillC,
    Where do you get a satellite SST trend of 0.083? From the Climate Plotter I get (1982-2011):
    UAH All Ocean 0.138 °C/decade
    HADSST2 0.145 °C/decade
    The odd one out is actually
    NOAA Global Ocean 0.115 °C/decade.

  353. BillC (Comment #101062),

    The reading code is given in this file: ftp://ghrc.nsstc.nasa.gov/pub/data/msu/docs/readme.msu

    I have a reading tool but unfortunately it is neither handy nor easily exportable.

    Given the subject of the thread, what I find most interesting are the comparisons for land, and especially for the northern hemisphere (better coverage, few areas beyond the 60th parallel). In the latter case, the trend ratio UAH/CRUTEM3 is about 0.7, and it must be even worse for RSS/BEST. I see no physical explanation for this divergence, so I guess there is a serious problem of measurement (TLT or T2M?). This divergence is about 0.1 °C per decade. What’s amazing is that this value of 0.1 °C also corresponds to the divergence of MXD for the Northern Hemisphere!

  354. Bill Illis:

    The differentials between satellite lower troposphere and the surface is clearly due to the surface record adjustments.

    That would make logical sense only if you didn’t have to make adjustments for TOBS or for station moves (etc.), which you do, and further requires that what you are actually sampling with satellite lower troposphere is the same thing you are measuring with surface temperature, which it’s not.

    Put another way to prove this claim you have to show:

    1) TOBS adjustments required by the data result in a nil effect on temperature trend.
    2) Adjusting for station moves, changes in instrumentation (etc) will have a nil effect on temperature trend, and
    3) Measuring at an effective elevation well above the surface boundary should result in the same measured value as measuring in the surface boundary layer.

    The more likely explanation is that “3” is false – they don’t measure at the same effective height and aren’t expected to exactly agree.

    (I’ll note that the satellite temperature measurements show a decreasing temperature trend as you increase the effective height of the temperature measurement. This is consistent with measurements on the surface showing a larger trend.)

  355. Nick Stokes,

    I get 0.083 from the Reynolds OIv2 SST (NOAA) data. It matches what Bob Tisdale gets, so I must be doing something skeptical. I can’t think of a reason why you would get a different trend, maybe you can? Just an OLS fit…

  356. Re: SteveF (Aug 6 19:39),

    The satellite data is like climate models in one respect, it’s been tweaked to make it agree with the surface record more or less. All UAH data before 2002 and AQUA is suspect. And since RSS doesn’t use AQUA, all their data is suspect. Any trend measurement assumes that the effective emission altitude has remained constant over 30 years. The fact that RSS and UAH don’t agree, and worse, the disagreement isn’t consistent over time as you can see from the graph I posted (and I have lots more that look as bad or worse) makes all this discussion of amplification factors nugatory even if you believe that the GCM’s can calculate an amplification factor accurately, which is another stretch.

  357. Carrick,

    (I’ll note that the satellite temperature measurements show a decreasing temperature trend as you increase the effective height of the temperature measurement. This is consistent with measurements on the surface showing a larger trend.)

    I don’t think that is possible to determine directly from the satellite data. The satellite products don’t refer to effective heights, but rather contributions with different weightings from all vertical levels in the atmosphere.

    As Isaac Held states in the blog post I linked earlier:

    The [T2 / TMT] model trends are now (0.138, 0.125, 0.129) with a mean of 0.131, while the RSS trend over this period is 0.102. These trends are smaller than the T2LT trends, in both the model and the observations, despite the fact that T2 weights the lower troposphere less strongly than T2LT. The model trends actually increase with height through the troposphere. The problem, long appreciated, is that T2 has significant weight in the stratosphere, where there is a cooling trend in both model and observations…

  358. Re: Paul S (Aug 7 08:19),

    The problem with that hypothesis is that the LS temperature has hardly changed for nearly 20 years. Most of the cooling has been in the upper stratosphere anyway. We should, in fact, be starting to see stratospheric warming as the CFC’s continue to decline and ozone starts to recover. If the LS isn’t changing, then I would expect to see exactly what the satellites observe, a decreasing trend with altitude.

    It’s a little fishy anyway. If the trend is increasing through the troposphere, then I would expect to see the temperature at the tropopause increasing too. But that temperature is also determined by the temperature in the stratosphere. It’s a little hard to imagine a cooling stratosphere with an increasing temperature at the tropopause.

  359. DeWitt Payne (Comment #101074)

    “The satellite data is like climate models in one respect, it’s been tweaked to make it agree with the surface record more or less. All UAH data before 2002 and AQUA is suspect.”

    DeWitt, you make a very cogent point here, and one that I was willing to give Steven Mosher in my reply to his model comment. I have heard variations of your claims about the satellite data being “adjusted” to fit the surface temperatures, but I have never seen the details of how that is accomplished. You point to a general consideration whereby you state that the model is ill-posed, with evidently multiple/arbitrary choices that can be made to “fit” the data. I have heard similar claims put to Spencer and Christy, and they always claim that the satellite measurements are entirely independent of the surface measurements.

    This is a very important claim since if the satellite data is dependent on the surface measurements then we can no longer use the satellite and surface measurements for mutual confirmation. DeWitt (or Steven) could you relate some specific details for judging dependency? We could pose those specific questions directly to Spencer and Christy.

  360. Re: Kenneth Fritsch (Aug 7 08:58),

    The climate modellers claim that their results are independent of the surface temperature record too. I don’t believe their claim either. The boilerplate you get from Spencer is that the temperature references are a temperature controlled hot plate on the satellite and deep space. That is true, but only goes so far. It’s the algorithms and specifically the weighting curves that are important and C&S have never published their algorithms. You can’t derive a weighting curve without knowing the atmospheric temperature profile in absolute terms. So there’s at least one link.

  361. Another reason this discussion is moot is that C&S will be releasing version 6 of their data Real Soon Now. We have no idea what effect that will have on calculated trends.

  362. Dewitt Payne,

    The problem with that hypothesis is that the LS temperature has hardly changed for nearly 20 years.

    Held’s model TLS trends are higher (i.e. less cooling overall) than RSS TLS. Over the post-Pinatubo period it looks like there may even be a slight warming trend in the model.

    I don’t think it’s an easy thing to consider what certain altitude trends should be just by looking at the satellite data. The vertical weightings seem to hold complexities that are difficult to appreciate.

    What I can say is that Isaac Held’s model produces:
    1) larger tropospheric warming trends over the tropics as altitude increases.
    2) a TLS with very little post-Pinatubo trend, perhaps even slight warming.
    3) a TMT which shows slightly less warming than its TLT.

  363. PaulS:

    I don’t think that is possible to determine directly from the satellite data. The satellite products don’t refer to effective heights, but rather contributions with different weightings from all vertical levels in the atmosphere.

    It’s what they report:

    TLT 0.170 °C/decade
    TMT 0.135 °C/decade
    TTS 0.057 °C/decade
    TLS -0.294 °C/decade

    I’ll let you argue with them over whether it’s “possible.” 😉

    I understand the point of your argument, but I think it’s not valid the way you stated it.

  364. Paul S:

    The vertical weightings seem to hold complexities that are difficult to appreciate.

    That’s just a bit too hand-waving for me to appreciate your argument. >.<

    I have experience with surface atmospheric temperature profilers, which work on pretty much the same principle, and overall they are able to achieve decent agreement with direct radiosonde/tethersonde data.

    The technologies basically work, the question is in making the comparisons to other types of measurements.

    In any case, one can't argue on one hand that surface temperature and satellite temperature reconstructions should agree (as Bill Illis seems to believe) and on the other that they are basically meaningless (which I admit is an over-simplification of what you are trying to argue).

    I have great respect for Isaac Held, but models, especially simple ones, are just models. They don't carry any weight until they've been validated by data, especially in a system as complex as this.

    For all of the issues with satellite radiometric temperature measurements, this is still a vastly simpler problem than modeling climate (I think SteveF made a similar point). I find the idea we should put our weight on the climate models rather than data … baffling.

  365. Carrick, hey! That’s scientific hand waving 🙂 The differences in the weighting look to be the biggest difference. RSS could change their filters a little and get about the same results. That tends to make me prefer UAH, because with the AQUA channels you can customize the weighting. They both are a little sucky pre-1990, as best I can tell.

  366. Carrick,

    I understand the point of your argument, but I think it’s not valid the way you stated it.

    I guess that depends on what you meant precisely by: ‘decreasing temperature trend as you increase the effective height of the temperature measurement’

    My take was that you were arguing satellite data suggested a steady and continuous decrease in temperature trend going upwards through the troposphere. If that’s not the case then we may not have a dispute.

  367. I have linked a review of the theory behind the UAH measurements here:

    http://wattsupwiththat.com/2010/01/12/how-the-uah-global-temperatures-are-produced/

    DeWitt, I would think that determination of the weighting functions could be based on empirical radiosonde data and not on model results or surface measurements. Radiosondes might not be practical for measuring changes over time without breakpoint adjustments, but could still be adequate at a given point in time for looking at the troposphere at various altitudes. Unfortunately I have not heard a dissertation by either Spencer or Christy on the weighting functions.

    I did find these links to weighting functions, which would appear to be rather straightforward and not something unique to Christy and Spencer.

    http://www.skepticalscience.com/news.php?n=657

    This paper Grody 1983 (section 2) contains a discussion of the science of Microwave Sounding, and in particular the existence of Weighting Functions derived from solving the Radiative Transfer Equation. “…the temperature weighting function…defines the contribution of temperature at different altitudes to the brightness temperature.”. These functions are produced by summing the contribution at each frequency of microwave emissions from multiple levels in the atmosphere, taking into account the radiating behaviour of the atmosphere, pressure, temperature, path length etc.

    http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281983%29022%3C0609%3ASSOUTM%3E2.0.CO%3B2
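    The weighting-function construction the Grody quote describes (the contribution of each altitude to the brightness temperature) can be illustrated with a toy single-channel calculation, where W(z) is the vertical derivative of the transmittance. The absorption profile here is invented for illustration and is not a real oxygen-band coefficient:

```python
import numpy as np

z = np.linspace(0.0, 30.0, 301)        # height (km), 0.1 km steps
dz = z[1] - z[0]
k = 0.3 * np.exp(-z / 7.0)             # hypothetical absorption per km

# optical depth from each level up to the satellite, then transmittance
tau_above = np.cumsum(k[::-1])[::-1] * dz
transmittance = np.exp(-tau_above)
W = np.gradient(transmittance, z)      # weighting function W(z)

# The weight peaks at mid-levels and is spread over a broad altitude range,
# which is why a single channel never samples one specific height.
peak_km = float(z[np.argmax(W)])
print(round(peak_km, 1))
```

    Summing such channel weights against a temperature profile gives the brightness temperature, which is the structure of the radiative transfer solution discussed in Grody 1983.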

  368. Steven Mosher (Comment #101089)

    Sorry, Steve but I do not believe your recommended reading list provides anything new to this discussion. It might help if you added a comment of your own about the contents of these posts and why you listed them.

  369. I don’t know, Steven. BEST looks like a very reasonable representation of land temps and is biased higher than satellite temps over the last 20 years. Satellite temp trends are essentially in agreement with SSTs (where 90% of the heat is) over the same period: http://www.woodfortrees.org/plot/best/from:1978/to:2012/offset:-.2/plot/uah/from:1978/to:2012/plot/hadsst2gl/from:1978/to:2012

    There are no perfect thermometers but it still looks a lot like UHI whether John Cook likes it or not.

  370. The lower stratosphere cooled in 1992 (by 1.5C) as the sulfate aerosols from Pinatubo dissipated out and left less ozone in their wake (every stratospheric eruption that we have measurements for has left a similar pattern).

    The lower stratosphere has been mostly stable since that time (although we are starting to see more temperature spikes in the Tropics in particular).

    Daily measurements from UAH to end of June (July is not out yet).

    http://img688.imageshack.us/img688/7889/dailyuahstrattempmay201.png

  371. Paul S:

    My take was that you were arguing satellite data suggested a steady and continuous decrease in temperature trend going upwards through the troposphere. If that’s not the case then we may not have a dispute.

    So I guess there’s not a dispute. My point was only that the satellite record suggests a decrease in trend with increasing elevation, and that this is consistent with a surface temperature record having a larger trend than the satellite data; not whether there are “fine grained” features in the change in trend with altitude.

    As I indicated above, for people who want to equate surface temperature with TLT, it seems that the burden is on them to provide a physics justification for that expectation. A priori I wouldn’t personally expect them to be the same.

  372. Steven, the list of “complexities” with satellite records is really a very small one, especially compared to other climate-related problems, including the complexities of the surface land air temperature measurements.

    What’s good for the goose is good for the gander here (and of course I’m a bit dismayed that you’d put any weight in a site that specializes in papering over problems with climate science as a primary source for framing the issues in a fair manner.)

  373. Bill Illis. So Pinatubo aerosols flattened out the Stratosphere trends?

    There are some that think in a bi-stable system that heat capacity limits also play a role. When a limit is approached, there can be changes in the way the system responds.

    http://www.columbia.edu/~lmp/paps/polvani+solomon-JGR-2012-inpress.pdf

    It would be nice to have a simple closed system, but some non-equilibrium thermodynamic systems can be dissipative, even cumulative, over a sufficient period of time, requiring thinking just outside the box to get a handle on the complex interactions.

    http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/whatsnormal.png

    Like why would a system appear to approach an asymptotic limit before and after a perturbation?

    http://2.bp.blogspot.com/-1PqP7oo8z9M/UA15toiROjI/AAAAAAAACls/VCy7DOfW4IE/s1600/stratosphere+step.png

    Why would the stratosphere tend to shift to a similar asymptotic approach at the same time?

    https://lh3.googleusercontent.com/-rRs69Ekl9Zc/T_7kMjPiejI/AAAAAAAAChY/baz0GHWEGbI/s917/60000%2520years%2520of%2520climate%2520change%2520plus%2520or%2520minus%25201.25%2520degrees.png

    Could it be the system is strangely attracted to a physical limit?

  374. I’ll say again, but perhaps posting the analysis on my blog would help people see this:

    You can’t believe USHCN is accurate and disbelieve UAH is accurate.

    Heh, I’m on vacation, and the internet connection I’m getting is lousy. I’ll try to put the analysis on my blog later.

    A couple of other things: the flattening of lower stratosphere temps mirrors the flattening of CFC concentrations. So, as expected, the lower stratosphere is dominated by ozone effects.

    Also, it is not the case that UAH exclusively uses AQUA. AQUA has been used to serve as the “backbone” of recent temperature anomalies, because it doesn’t need diurnal drift correction, and diurnal corrections for other satellites can be calculated relative to it thanks to that feature. If RSS is now using AQUA, the problem is that it is not treating AQUA as inherently superior but as an equal to other, drifting satellites, once those satellites have had RSS’s diurnal drift corrections applied. They assume that their diurnal corrections are correct, so any drifts between AQUA and drift-corrected contemporaries are supposedly due to… random errors, I guess? Eliminating noise is the reason for using multiple satellites.

  375. Carrick (Comment #101086)

    That’s just a bit too hand-waving for me to appreciate your argument.

    I argued that less warming in TMT than TLT is plausibly consistent with increasing trends by altitude up through the troposphere because TMT is substantially influenced by the stratosphere. This argument was supported by that being the case in Held’s model.

    DeWitt dismissed this by pointing out that there has been little TLS trend for nearly twenty years.

    I checked the model TLS and found there was little trend in that either over the same period, yet TMT and TLT in the model are what they are. I then suggested understanding how different satellite channels behave is probably more complex than simply looking at one channel and guessing what parts of that might influence another.

    Where is the hand-waving here?

    …and on the other that they are basically meaningless (which I admit is an over-simplification of what you are trying to argue).

    I don’t think that’s getting anywhere near what I’m saying. I’m arguing that the satellite data have a very specific meaning which needs to be understood before trying to assert further meaning to differential trends between satellite channels and those at the surface.

    I have great respect for Isaac Held, but models, especially simple ones, are just models. They don’t carry any weight until they’ve been validated by data, especially in a system as complex as this.

    Pretty much all of Isaac’s post was a comparison of the model output with satellite data. I’m not sure what your criteria would be for validation.

    I find the idea we should put our weight on the climate models rather than data … baffling.

    What you seem to be missing is that anytime you attempt to ascribe meaning to data you are using a model. Those arguing that the various satellite channels indicate steadily decreasing trends by altitude are applying a model… it’s not a model with any physical basis but I’m not sure that should be a selling point. It’s essentially a linear regression model using three or four datapoints from non-independent sampling through a system (the atmosphere) which is known to have considerably different behaviour by altitude.

    What I’m doing is looking at the data and comparing with what physical models say, looking for consistencies and inconsistencies and drawing conclusions. The fact is that Held’s model appears to be consistent with the satellite data on several counts, which suggests the lower warming trend in RSS TMT versus TLT is actually consistent with increasing temperature trends up through the troposphere, since that is definitively what occurs in the model. The satellite data does not have fine enough vertical resolution (if that’s the right word) to actually discern what’s happening from the surface to the upper troposphere, hence ‘not possible’, so we can only say what’s consistent.

    Obviously a steadily decreasing trend all through the atmosphere is also consistent with the satellite data, but to argue that, it’s important to understand you are applying a model; it’s not just “the data”, as you seem to suggest.

    Note that Held does apply some mathematical manipulations to the various RSS channel data to try and isolate particular tropospheric heights and their trends. He finds a similar picture to the model – increasing trend by altitude.

  376. Paul S (Comment #101099),
    ” Note that Held does apply some mathematical manipulations to the various RSS channel data to try and isolate particular tropospheric heights and their trends. He finds a similar picture to the model – increasing trend by altitude.”
    Sure, but note also that he uses RSS data and ignores UAH data for no apparent reason except that he likes the agreement with the model he uses. I think the point is that the issue is VERY far from resolved, in spite of multiple publications declaring the discrepancy ‘resolved’. It makes sense that in tropical regions (especially over oceans) tropospheric warming should be mainly controlled by the moist adiabat. But there is a lot more to the story than simply making that argument (which is I think pretty much what Isaac Held says in his post). There is a substantial discrepancy between the globally averaged measured tropospheric amplification and the modeled tropospheric amplification expected from the measured surface warming. Something is not right. I do not claim to know for sure what is not right, but for sure there are problems.
    .
    My current thinking is that the models do not properly deal with surface boundary layer physics, and so there is a disconnect between the measured surface warming and the measured tropospheric warming (as already claimed in a couple of papers by Roger Pielke Sr and some of his associates).
    .
    The implication of lower globally averaged tropospheric amplification is (of course) less water vapor amplification of warming from well-mixed GHGs, and perhaps lower equilibrium sensitivity than the GCMs predict. I suspect that this is the real issue being contested, and why so very much energy/publications/time has been expended trying to refute the existence of an obvious discrepancy between models and data. Seems a lot like ‘post normal science’ to me.

  377. Re: Andrew_FL (Aug 7 13:20),

    You can’t believe USHCN is accurate and disbelieve UAH is accurate.

    True. I don’t believe either of them is fit for the purpose of quantitatively determining changes in long term trends in the climate. Qualitatively, yes. IOW, we’re reasonably certain it’s warming. But, as Carrick points out, they’re still better than climate models because they are observations of reality, not fairly crude (coarse grid, simplified physics models for radiative transfer, other kludges to ‘solve’ the Navier-Stokes equations, etc.) mathematical models that have never been properly validated.

    Also, it is not the case that UAH exclusively uses AQUA.

    And you know this how? According to Spencer:

    The UAH global temperatures currently being produced come from the Advanced Microwave Sounding Unit (AMSU) flying on NASA’s Aqua satellite.

    http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/

    That would seem to be fairly definitive. OTOH, later in the same article he says:

    I then pass the averages to John Christy, who inter-calibrates the different satellites’ AMSUs during periods when two or more satellites are operating (which is always the case).

    Inter-calibrate is somewhat ambiguous. He could be comparing the AQUA data to another AMSU to see if there’s some change, or the data could be combined to provide the final result. But considering the first statement, my reading is that the other satellite(s) are control(s) not data sources. But of course, without the code, only C&S know for sure.

    Then there’s this comment:

    For those channels whose weighting functions intersect the surface, a portion of the total measured microwave thermal emission signal comes from the surface. AMSU channels 1, 2, and 15 are considered “window” channels because the atmosphere is essentially clear, so virtually all of the measured microwave radiation comes from the surface. While this sounds like a good way to measure surface temperature, it turns out that the microwave ‘emissivity’ of the surface (its ability to emit microwave energy) is so variable that it is difficult to accurately measure surface temperatures using such measurements. The variable emissivity problem is the smallest for well-vegetated surfaces, and largest for snow-covered surfaces. While the microwave emissivity of the ocean surfaces around 50 GHz is more stable, it just happens to have a temperature dependence which almost exactly cancels out any sensitivity to surface temperature.

    So then are the data from the window channels used to correct channel 5, which also gets a lot of emission from the surface? That would be something nice to know as well.

  378. Re: Carrick (Aug 7 11:10),

    I’ve experience with surface atmospheric temperature profilers, which work on pretty much the same principle, and overall they are able to achieve decent agreement with direct radiosonde/tethersonde data.

    Indeed. But, since the inversion of the intensities to temperature is an ill-posed problem, don’t you need to have a good idea what the temperature profile is already? I thought that was the purpose of the TIGR database (Thermodynamic Initial Guess Retrieval). My understanding is that you can do the same sort of thing if you use more of the microwave channels on the AMSU. But I remember reading a comment from someone that if you didn’t have a recent nearby sounding, the results weren’t very good (I believe whoever it was actually said “useless” rather than “not very good”).

  379. Re: DeWitt Payne (Aug 7 14:48),

    I don’t believe either of them is fit for the purpose of quantitatively determining changes in long term trends in the climate.

    That statement is too strong. It should read: “I don’t believe either of them has been demonstrated to be fit for the purpose…”

    USHCN could be completely accurate and still not be telling us what it appears to be telling. There’s still the problem of how much quasi-periodic oscillations like the AMO and PDO contribute to the variability. The AMO, for example, bottomed out in the early 1970’s and has been increasing ever since. It was also near its peak for the severe US droughts of the 1930’s and 1950’s. That could be a coincidence, but it seems unlikely to me.

    [edit]
    The previous time before that that the AMO peaked was in the 1870’s. Wanna guess what happened then? Hint: The Great Famine of 1876-78 caused by intense drought.

  380. DeWitt Payne (Comment #101101)-I’m happy to talk to people with consistent views. Unless UAH is calibrated somehow based on USHCN specifically, I find it unlikely they coincidentally vary almost exactly in proportion over annual and longer timescales. Unlikely, but not impossible.

    As for how I know UAH doesn’t exclusively use AQUA, it’s because I read the readme files…

    http://vortex.nsstc.uah.edu/data/msu/t2lt/readme.01Dec2011

    “Update 7 Apr 2010 ***********************************

    With March 2010 we have now included AMSU data from NOAA-18 beginning in June 2005 to the present. This is an operational satellite in the afternoon (about 2 p.m.) orbit, so it will experience drifting into cooler diurnal temps as time goes on. At this point, the equatorial crossing time is only about 30 minutes from the nominal time, so has very little impact. AQUA, with its stationary crossing time is still the backbone satellite for this period. You will notice some slight changes in anomalies as NOAA-18 is added into the mix. In particular, the gridded maps that had depended only on NOAA-15 patterns will now be a bit smoother as another satellite is added.”

    Note also the last two updates. And several others. It is evident that UAH does make use of the other AMSUs for various reasons. AQUA is the “backbone” since it shouldn’t have significant diurnal drift. It is not all the data.

  381. DeWitt:

    But, since the inversion of the intensities to temperature is an ill-posed problem, don’t you need to have a good idea what the temperature profile is already?

    The surface atmospheric profilers don’t make any assumptions about the vertical or temporal variation in the atmosphere, and for these we have “ground truth” in the form of met towers and radio & tethersondes, so I guess I’d need some convincing that vertical profiling is necessarily an ill-posed problem. It’s my understanding that it isn’t.

  382. Paul S:

    Where is the hand-waving here?

    Thanks for elucidating your argument. That certainly is not handwaving in the sense that your original comment (“The vertical weightings seem to hold complexities that are difficult to appreciate”), which I branded as handwaving, was. 😉

    Pretty much all of Isaac’s post was a comparison of the model output with satellite data. I’m not sure what your criteria would be for validation.

    The standard ones. Comparison with data is usually referred to as “verification” not “validation”. Validation involves testing for internal consistency, for which the model-based steps of the satellite reconstructions pass with flying colors, and the climate models do not.

    What you seem to be missing is that anytime you attempt to ascribe meaning to data you are using a model

    Of course I’m not missing that.

    The use of models is not the issue. Nobody (sane) should be objecting to the use of Planck’s radiation law, which in any meaningful sense is well tested and well understood. It’s what we call a “solved problem.”

    Satellite-based measurements do contain some complexity, but largely it’s a series of steps, each involving well-known physics. (I am a bit bemused by the notion put forward on this thread that they are wildly more complex than, say, reconstructing surface temperature from surface temperature measurements.)

    OTOH, the use of climate models is in no way similar to the types of models used in satellite-based profiling. Hopefully you can see why that would be without me needing to explain it in detail.

  383. A polite suggestion, if I may.

    Where there are specific technical questions concerning UAH why not ask Roy Spencer directly and then provide the response here? I’m specifically speaking of things like the role of Aqua vs. the other satellites, calibration based upon USHCN, and so forth.

    Specific, definitive answers to these questions would seem to me to be of benefit to all involved.

  384. Schnoerkelman, Roy’s not the best in terms of responding to questions, which is to say I’ve never had any luck. YMMV.

  385. SteveF:

    My current thinking is that the models do not properly deal with surface boundary layer physics, and so there is a disconnect between the measured surface warming and the measured tropospheric warming (as already claimed in a couple of papers by Roger Pielke Sr and some of his associates).

    This of course is the claim I’ve been making too. The global models don’t incorporate boundary layer atmospheric physics in any meaningful way. Any response of the boundary layer to warming that is different than that of the troposphere above it will be missed by the global climate models.

    In the same vein, satellites measure temperature in broad swaths that include (in some channels) contributions from the boundary layer, but because these weighting functions are kilometers thick (even for the near surface channels), temperatures derived from these profilers will dominantly be measuring the temperature above the boundary layer, rather than the temperature in the boundary layer (let alone the surface air temperature).

    To nearly the same extent that global models don’t accurately model surface temperature, neither do satellite temperature profilers measure surface air temperature.

    It would be incumbent on those who want to claim they should be the same to demonstrate that the differences in physics of the lower troposphere and the surface boundary layer “don’t matter”.

    Don’t/doesn’t matter seems to be the mantra in climate science. Nothing ever “matters” including grave errors in your methodology as with Gergis et al.’s flawed paper. They’ve pretty much guaranteed to their colleagues that the conclusions will be the same, regardless of the errors.

    They might consider saving people time by writing a post-modern climate science article generator similar to this one for other post-modernist essays.

  386. By the way here’s a decent brief exposition of the satellite inversion method. I was going to write up something similar myself (however concentrating on surface profiler measurements, which I know more about), but found this instead.

    I don’t see anything particularly challenging here in obtaining an inverse solution. The near linearity of the problem is interesting because it means you can likely find the solution to the fully nonlinear problem using iterative methods.

    It’d be interesting to see Held’s code so we could get a feeling for what he’s actually doing. It’d also be interesting to see what happens if you use the much finer resolution AQUA weighting functions instead of the AMSU ones.
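    To illustrate the near-linearity point: if the forward problem y = f(x) is close to linear, you can find the fully nonlinear solution by iterating on the linearized problem. A toy one-dimensional sketch (the forward model is invented; real retrievals do this with a linearized radiative transfer operator):

```python
# Iterate x_{k+1} = x_k + (y - f(x_k)) / A, where A approximates the Jacobian.
def f(x):
    return 2.0 * x + 0.05 * x ** 2   # toy nearly-linear forward model

A = 2.0                  # linear part, used as the approximate Jacobian
y_obs = f(3.0)           # pretend observation; the true x is 3
x = 0.0
for _ in range(50):
    x += (y_obs - f(x)) / A

print(round(x, 6))       # converges to 3.0
```

    The iteration converges quickly because the nonlinear correction is small compared with the linear part, which is exactly the situation the ECMWF-style expositions describe.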

  387. Carrick,
    Yes, boundary layer behavior (especially over land) ought to be hugely important in the diurnal surface temperature cycle. Even over the ocean there is some difference between night and day in convective transfer to the atmosphere, due to the formation of a warmer thin skin layer during the day from solar heating (this disappears at night as the surface cools and mixes convectively with the underlying “well mixed layer”).
    The suggestion to compare daytime maximum temperature trends with satellite lower tropospheric temperature trends seems to me a good one, since daytime convection ought to link the troposphere and surface temperatures more closely.
    Collecting temperature data at multiple locations at differing heights (up to a couple hundred meters) would be a low-budget means to better define boundary layer influences; there are hundreds of transmission towers already in existence which would be suitable, and the cost would be minuscule compared to “big science” programs.

  388. SteveF:

    The suggestion to compare daytime maximum temperature trends with satellite lower tropospheric temperature trends seems to me a good suggestion, since daytime convection ought to link the troposphere and surface temperatures more closely

    This is true. Nighttime surface temperatures are almost completely decoupled from higher-altitude temperatures. Unfortunately, even the daytime boundary layer is only weakly coupled to the troposphere above it (you can see this when you take off in a plane during the daytime: as your plane emerges above the boundary layer, you see a sharp transition associated with a layer of haze composed of aerosol particles diffused into the boundary layer from the surface).

    I of course like your idea of putting sensors in transmission towers. They’d rather spend billions on supercomputers that have little chance of advancing the art than on actual data. Typical post-modern scientists.

    These days, with the prevalence of broadband wireless, the main problem of “getting the data back” has now been solved. Ideally you’d want sensors at multiple elevations, e.g., 1-m, 3-m, 10-m, 30-m, 100-m, etc. (this distribution is ideal for the typically logarithmic profile seen in wind speed vertical profiles). At these low data rates, on the tower, you could use a ZigBee network to a single network access point to transmit the data back (ZigBee has much lower power costs than WiFi does).

  389. Carrick,
    Thanks for the link. Linearity makes any inversion problem more tractable, and successive approximation methods (like Richardson-Lucy), IIRC, are a lot less computationally intensive to arrive at a reasonable solution. Noise amplification from the inversion could still be a problem, but there are probably good physical arguments for smoothing to reduce noise effects. In any case, the inversion method lends itself to both validation and verification; I see no parallel with CGCM’s.
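    For concreteness, here is a minimal 1-D sketch of the generic Richardson-Lucy iteration (the smoothing kernel and “truth” are invented; this is the textbook algorithm, not anyone’s retrieval code):

```python
import numpy as np

# Richardson-Lucy: u_{k+1} = u_k * K^T( d / (K u_k) ), for nonnegative
# data d and a kernel K whose columns sum to 1 (conserves total "flux").
n = 30
K = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
K /= K.sum(axis=0, keepdims=True)

truth = np.zeros(n)
truth[10], truth[20] = 1.0, 0.5     # two spikes to recover
d = K @ truth                       # noiseless "observed" data

u = np.full(n, d.sum() / n)         # flat nonnegative first guess
for _ in range(200):
    u = u * (K.T @ (d / (K @ u)))

print(round(float(np.max(np.abs(K @ u - d))), 4))  # data-space residual
```

    Each iteration is just two matrix-vector products, which is why successive approximation is cheap; the multiplicative update also keeps the estimate nonnegative automatically.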

  390. Carrick (Comment #101116)

    “In the same vein, satellites measure temperature in broad swaths that include (in some channels) contributions from the boundary layer, but because these weighting functions are kilometers thick (even for the near surface channels), temperatures derived from these profilers will dominantly be measuring the temperature above the boundary layer, rather than the temperature in the boundary layer (let alone the surface air temperature).”

    Would not radiosondes be better suited for obtaining the required data here? Surely that method has a finer resolution with altitude than satellite MSU? And if so, should not there be observed data that can be compared with modelled results?

  391. Carrick,

    I visited the POMO generator you linked to. Neat stuff, and I traced back to the Sokal article, since the reference to it triggered some long-lost memories. This happened while I was in college btw.

    Some interesting quotes from Sokal’s own exposition of his parody:

    Politically, I’m angered because most (though not all) of this silliness is emanating from the self-proclaimed Left. We’re witnessing here a profound historical volte-face. For most of the past two centuries, the Left has been identified with science and against obscurantism; we have believed that rational thought and the fearless analysis of objective reality (both natural and social) are incisive tools for combating the mystifications promoted by the powerful — not to mention being desirable human ends in their own right. The recent turn of many “progressive” or “leftist” academic humanists and social scientists toward one or another form of epistemic relativism betrays this worthy heritage and undermines the already fragile prospects for progressive social critique. Theorizing about “the social construction of reality” won’t help us find an effective treatment for AIDS or devise strategies for preventing global warming. Nor can we combat false ideas in history, sociology, economics and politics if we reject the notions of truth and falsity

    I say this not in glee but in sadness. After all, I’m a leftist too (under the Sandinista government I taught mathematics at the National University of Nicaragua). On nearly all practical political issues — including many concerning science and technology — I’m on the same side as the Social Text editors. But I’m a leftist (and feminist) because of evidence and logic, not in spite of it. Why should the right wing be allowed to monopolize the intellectual high ground?

  392. And BTW I ‘like’ the ecmwf link you posted. Everyone in this thread should at least skim that; it concisely answers for me why at first glance it seems like an ill-posed problem and how the near linearity reduces it back to something solvable.

    Though, I am still a bit concerned about (I think it was) DeWitt’s note on changes in height of the emitting layers. But I thought the best ground truthing was the use of sondes to measure pressure (because we need to know the # of emitting molecules), and that the sondes are much more accurate in this regard than w/r/t temperature (the sensor adjusts much faster). If there are large areas of the world, or even latitudes, where the sonde data is sparse, this could raise some concerns (??)

  393. @Carrick

    “I of course like your idea of putting sensors in transmission towers. They’d rather spend billions on supercomputers that have little chance of advancing the art than on actual data. Typical post-modern scientists.”

    I think ‘they’ have a better idea of what they are doing than you do. The new supercomputers have a better ability to model clouds, which will give them a better idea of what effect clouds will have on climate sensitivity. The only real issue now is climate sensitivity. As for being ‘post modern’, that’s just a cheap shot, but you get plenty of those from the ‘skeptic’ blogs. They aren’t idiots.

  394. Carrick (Comment #101117)

    Thanks much for these links Carrick – just what I was looking for.

  395. Re: Andrew_FL (Aug 7 18:58),

    The Spencer article I quoted was dated January 6, 2010, which pre-dated the addition of NOAA-18 to the mix. I should have checked for more recent information.

    You can see an increase in seasonal variation in the RSS-UAH plots after 2002, which does seem to cast doubt on the accuracy of RSS’s methods. But by the same token it casts doubt on the accuracy of the data from both before 2002. I’m not saying it’s wrong. I’m saying that it’s just not good enough for long enough, and that the confidence levels for the trends generated from the internal noise of each source likely underestimate the true variability.

  396. BillC (Comment #101124)-“If there are large areas of the world, or even latitudes, where the sonde data is sparse, this could raise some concerns (??)”

    Above the boundary layer, near the equator, sampling is probably a minor issue, due to the Rossby radius of deformation being inversely proportional to the Coriolis parameter. Temperature and other variables are smoothed out over long distances near the equator.
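    As a rough scale argument (the numbers are illustrative only, using the external radius L = sqrt(gH)/f with f = 2Ω sin(lat)):

```python
import math

Omega = 7.2921e-5      # Earth's rotation rate, rad/s
g, H = 9.81, 10.0e3    # gravity; ~10 km scale height for a crude estimate

def rossby_radius_km(lat_deg):
    f = 2.0 * Omega * math.sin(math.radians(lat_deg))
    return math.sqrt(g * H) / f / 1000.0

# The radius grows rapidly toward the equator (f -> 0), so tropical
# temperature fields are smooth over very long distances.
print(round(rossby_radius_km(45.0)))   # mid-latitude
print(round(rossby_radius_km(5.0)))    # near-equatorial
```

    The near-equatorial radius comes out roughly an order of magnitude larger than the mid-latitude one, which is why sparse sonde coverage matters less there.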

  397. Re: Carrick (Aug 8 05:43),

    Interesting link. But it still doesn’t relieve my concern that TLT in 1979 may not be measuring exactly the same thing as TLT in 2012 even without the orbital drift problems and the changes from MSU to AMSU and then AMSU-A.

  398. SteveF:

    Noise amplification from the inversion could still be a problem, but there are probably good physical arguments for smoothing to reduce noise effects

    Of course, noise amplification from the inverse solution is another example of a “solved problem”. (Singular value decomposition and other matrix regularization schemes are examples of this.)
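    A minimal sketch of what truncated-SVD regularization does for a noisy, ill-conditioned linear inversion (the smoothing operator and signal below are invented; this is the generic technique, not a real retrieval):

```python
import numpy as np

# d = A x + noise, with A an ill-conditioned smoothing operator.
rng = np.random.default_rng(0)
n = 20
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
d = A @ x_true + 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
# The naive inverse amplifies noise by 1/s for tiny singular values s;
# truncating those modes regularizes the solution.
k = int(np.sum(s > 1e-2 * s[0]))          # keep well-determined modes only
x_hat = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

print(round(float(np.max(np.abs(x_hat - x_true))), 3))
```

    The truncation threshold trades noise amplification against resolution, which is the “good physical arguments for smoothing” point in practice.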

  399. DeWitt:

    Interesting link. But it still doesn’t relieve my concern that TLT in 1979 may not be measuring exactly the same thing as TLT in 2012 even without the orbital drift problems and the changes from MSU to AMSU and then AMSU-A.

    I hope it’s clear I was addressing your concerns about the problem being “ill-posed”. That is a different issue than the question of splicing together the records, which I would contend is a solvable issue, and a well-posed one.

    Whether it has been done right “now” is different than whether it can be done right in principle. And even that “in principle” question can be answered in a relatively straightforward manner.

  400. Kenneth Fritsch:

    Would not radiosondes be better suited for obtaining the required data here? Surely that method has a finer resolution with altitude than satellite MSU? And if so should not there be observed data that can be compared with modelled results?

    Radiosondes are more useful for probing upper-tropospheric temperature than the boundary layer: they are relatively expensive to use, and you are given, e.g., T as a function of the trajectory of the radiosonde (and generally the trajectory is only known in terms of its barometric height). Due to their nature (one discrete measurement at each height) and the turbulent nature of the atmosphere, ideally you’d like an averaged version of the meteorological quantities you’re measuring.

    I’d say the best choice here would be an array of surface profilers, which could give you, e.g., 15-minute averages of temperature and wind speed as a function of altitude.

    Towers, if they already exist, give you even finer temporal resolution for meteorological information at each height, and for studying problems like atmospheric turbulence they are really the only tool available.

    It’s true that the limitation of these is that you can only put instruments at fixed heights, but over long-enough time averages (30 minutes is usually sufficient) you can employ Monin-Obukhov similarity theory to interpolate over the entire sampled range of the boundary layer profile.

    Plus instrumenting a tower is going to be a lot cheaper than sounders (which start around $50k and go up from there depending on what quantity you are measuring).
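    In the neutral limit, that similarity-theory interpolation reduces to the familiar log law, u(z) = (u*/κ) ln(z/z0), so two fitted parameters describe the whole profile. A sketch with invented tower heights and winds:

```python
import numpy as np

kappa = 0.4
z = np.array([1.0, 3.0, 10.0, 30.0, 100.0])       # sensor heights, m
u_star_true, z0_true = 0.35, 0.1                  # invented "true" values
u = (u_star_true / kappa) * np.log(z / z0_true)   # synthetic measurements

# u = a*ln(z) + b is linear in ln(z): fit by least squares,
# then recover the friction velocity u* and roughness length z0.
a, b = np.polyfit(np.log(z), u, 1)
u_star = a * kappa
z0 = np.exp(-b / a)

print(round(u_star, 3), round(z0, 3))   # recovers 0.35 and 0.1
```

    With real (turbulent) data you would fit time-averaged winds, and in non-neutral conditions the profile picks up a stability correction term, but the fixed-height-sensor idea is the same.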

  401. Carrick,

    Whether it has been done right “now” is different than whether it can be done right in principle. And even that “in principle” question can be answered in a relatively straightforward manner.

    Or, given the data available “now” for the whole history of the satellites, whether it can be done “in practice”.

    BTW the tower instrumenting thing sounds like a good academic proposal. I’m in.

  402. BillC, I think Sokal hits the nail on the head in a way that applies equally to the “post-modern” approach that some climate scientists take.

    To amplify that comment, it isn’t just climate scientists who do this, it’s practically everybody… it’s a natural tendency to want a model with a few knobs that you can twist to get the answer you want.

    Data collection is tough and demanding; it’s costly, both physically for the researchers and budget-wise, and just like good theorists, there aren’t that many experimentalists who are really good at data collection. Sometimes researchers are injured during field deployments; rarely, but it does happen, they die. Discomfort and fatigue are extremely common (where I’m deploying we’re facing near-record highs, over 110°F for the next several days, and some planning and coordination is needed to prevent the very real threats associated with heat exposure in these extreme conditions).

    Anyway, I see this same post-modern thinking in nearly every branch of science and engineering. Post-modernism is just a meme used to excuse avoiding the hard work of doing something correctly. And the harder the work, the more people involved, and the greater the stakes in the outcome, the more likely you’ll find people taking short-cuts and finding plausible-sounding explanations for (bluntly) why they failed to do what they needed to do to demonstrate what they set out to demonstrate.

  403. Carrick, I have previously proposed using layered instruments suspended on a cable between two barrage balloons.
    The actual, mobile, set-up would be very cheap.

  404. Carrick, the towers sound like a very interesting idea, well worth pursuing.
    Fluxnet have a slew of towers around the world already, so there may be some usable data out there already. I was involved in a hyperspectral scanner survey over the Kruger Park tower some years ago, which was when I first heard about them.

    http://earthdata.nasa.gov/featured-stories/featured-research/ground

    http://fluxnet.ornl.gov/

    http://globalchange.nasa.gov/KeywordSearch/Metadata.do?Portal=NASA&KeywordPath=%5BKeyword%3D'AIR+TEMPERATURE'%5D&OrigMetadataNode=GCMD&EntryId=s2k_knp_met&MetadataView=Full&MetadataType=0&lbnode=mdlb3

  405. Steven Mosher, thanks for the link. Urban/rural boundary is of great interest these days.

    Regarding rural, this is one of the longest rural data sets I know of: CASES-99 and ABLE.

    For people who want to play with things like diurnal cycles, see how nocturnal boundary layers form and “dissolve” into ordinary day-time boundary layers, this is the place to go.

  406. DocMartyn

    Carrick, I have previously proposed using layered instruments suspended on a cable between two barrage balloons.

    We’ve looked into that approach too. The biggest problem we’ve had is in tracking the position of the instruments, since the balloons move up and down (and left and right etc) relative to each other. We have an engineer who specializes in GPS type measurements, who has studied how to use individual GPS systems to accurately track the relative spacing of closely spaced elements.

    Recently we’ve had great success with placing a single sensor on an observation balloon (same class as used in Afghanistan). This may resurrect interest in our group in putting an acoustic array on a balloon.

    Chuckles, thanks for the link.

  407. Carrick (Comment #101117)

    You do know that both your links go to the same article.

  408. Welcome, Carrick. I’ve been poking around in their stuff for a few years… thanks for your link as well.

  409. Re: Carrick (Aug 8 11:11),

    Ill-posed problems are solved all the time. X-ray crystallography is a classic example. The question isn’t whether the problem can be solved in principle. The question is how well it has actually been done with the last 34 years of data from different satellites carrying microwave detectors of very different capability. Obviously not well enough, or UAH wouldn’t need to release a new version. I still say that we don’t have enough information to say which of RSS, UAH or STAR is more accurate. In fact, the differences probably reflect the real uncertainty of the process.

  410. bugs 101128,
    Climate sensitivity is a very important issue, of course. The net feedbacks from clouds remain very uncertain; faster computers alone will not answer the question of cloud feedbacks, because there will likely remain some parameterizations which come from ‘expert judgement’. There is also uncertainty in climate sensitivity that comes from uncertainty in atmospheric moisture change with rising temperatures.
    But even if climate sensitivity really were well defined, the issue of what consequences warming will bring also needs to be reasonably well defined. As things stand there is remarkably little that can be projected in the way of net negative consequences; or at least consequences which would justify large current expense.
    I have little doubt that humanity will devise solutions for whatever future consequences occur, at least so long as it has not been impoverished by imprudent expenditures. Where there probably is room for a political consensus today is in modest ‘no regrets’ policies like Pielke Jr and others have been suggesting for some time. Something is better than nothing.

  411. SteveF (Comment #101167)-“Where there probably is room for a political consensus today is in modest ‘no regrets’ policies like Pielke Jr and others have been suggesting for some time. Something is better than nothing.”

    Fantasy designer policies may look better than doing nothing in some analyses. But Congress does not adopt bills as written by policy designers that perfectly balance the “necessary” bad with offsetting good. Even if such perfect policies were really knowable, short of making Pielke Jr. (assuming he, or a follower of his such as Nordhaus, is that perfect designer) dictator for life, the policy Congress would implement if it did “do something” would inevitably be worse than the “perfect” policy, and, I contend, inevitably worse than nothing.

    Something is almost always worse than nothing.

  412. Andrew FL,
    No regrets means that the cost is not huge, and potential benefits significant, independent of how serious a problem global warming turns out to be. Some things government does poorly, and the rest very poorly. But some government is needed, and some actions can only be done by government, as inefficient, wasteful and flawed as that process may be. If global warming were to turn out to have serious negative consequences, governments and diplomacy would necessarily be involved in any response, as wasteful (distasteful) and inefficient as the process would likely be.
    By the way, a political consensus does not presume that all would agree, just a substantial majority. My belief is that a substantial majority could coalesce around some modest no regrets steps; agreement by all is certainly not required nor expected.

  413. SteveF (Comment #101169)-If I believed that a real world government would institute a policy that really constituted “no regrets” I would be all for it. As it is, I don’t believe that’s possible in the real world.

  414. DeWitt, as far as I can tell, this is an excursion from the discussion at hand. You raised a point about the problem being ill-posed, my comment was addressing that.

    I’ll allow we can move the discussion to the one of whether the problem has been set up correctly or solved correctly, but that’s a different discussion.

  415. Re: Carrick (Aug 8 19:08),

    No, it isn’t a different discussion. It’s exactly the point I raised initially. Let’s go back to the beginning. Here’s the post from Steve F replying to Mosher to which I was replying when I mentioned ill-posed:

    Sure, and the ‘output’ of a mercury in glass thermometer is also a ‘model’, if you insist. The issue is more a) the complexity and b) how many degrees of freedom does the modeler have in interpreting the ‘raw’ data. Microwave temperature data certainly does not approach mercury in glass, but neither is it based on a multitude of modeling choices like in GCM ‘s.

    But of course it is based on a multitude of modeling choices, an infinite number, in fact. So I replied:

    In fact, the problem of turning the observed microwave intensities into temperature is ill-posed. There are an infinite number of solutions that will produce the same readings. Now you can get rid of a lot of those solutions by using other criteria like smoothness, but there will never be a unique solution as there is with, for example, a liquid in glass or thermocouple thermometer.

    Did I imply directly or indirectly that there was no solution? I certainly didn’t intend that. Are either of those sentences incorrect? There is a best solution, but it’s fairly clear that nobody knows for sure what that is or there would be far less disagreement between the different camps.

    Sounding from the surface upward is different. For one thing, you know the surface temperature. But unless there is no significant overlap of the weighting functions, the problem is still ill-posed. There are methods for dealing with that to obtain temperature profiles suitable for use in short term weather forecasting. But as with surface temperature stations, which were also designed for short term weather forecasting, using satellite temperatures for long term trend analysis may well be pushing the system beyond its physical limits. In quality control terms, the system may not be fit for that purpose. Someday we may be able to show that it is fit-for-purpose, but that isn’t today.
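The overlapping-weighting-function point can be made concrete with a toy retrieval. This is a sketch, not any group’s actual algorithm: three broad, overlapping channels observing a 50-level atmosphere give a rank-3 linear system in 50 unknowns, so infinitely many profiles reproduce the readings exactly, and a smoothness (Tikhonov) penalty is one standard way to pick a single one. All numbers here are illustrative.

```python
import numpy as np

# Three broad, overlapping weighting functions (rows of W) over a 50-level
# atmosphere: many temperature profiles T give the same brightness
# temperatures y = W @ T, so the inversion is ill-posed.
z = np.linspace(0.0, 20.0, 50)                   # height, km
peaks = [4.0, 8.0, 12.0]                         # illustrative channel peaks
W = np.array([np.exp(-0.5 * ((z - p) / 4.0) ** 2) for p in peaks])
W /= W.sum(axis=1, keepdims=True)                # normalize each channel

T_true = 288.0 - 6.5 * z                         # simple lapse-rate profile
y = W @ T_true                                   # simulated radiometer readings

# Tikhonov regularization: among the infinitely many profiles matching y,
# pick the smoothest, i.e. minimize ||W T - y||^2 + lam * ||D T||^2
# where D is the second-difference operator.
D = np.diff(np.eye(len(z)), n=2, axis=0)
lam = 1e-3
T_hat = np.linalg.solve(W.T @ W + lam * D.T @ D, W.T @ y)

print(np.linalg.matrix_rank(W))                  # 3 equations, 50 unknowns
print(np.max(np.abs(W @ T_hat - y)))             # recovered profile fits readings
```

The regularized answer reproduces the readings, but the choice of penalty (and its weight) is exactly the kind of extra criterion the ill-posedness forces on you.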

  416. DeWitt Payne (Comment #101177)-The differences between the different groups’ trends seem to be due mostly to differences in diurnal drift corrections, how the separate satellites are stitched together, etc., more than to differences in the application of theory to calculate temps.

  417. DeWitt, the question e.g. of splicing that you brought up in response to my comments has nothing to do with inverting radiative measurements to obtain temperature, so yes they are separate discussions, which you are now conflating.

    I’d be interested in seeing you go through the notes that I linked and explain how one gets an infinite number of solutions from the inverse problem for satellite temperature profiling.

    I think the problem is much better behaved than you suggest.

  418. Andrew_FL, differences arise from the algorithms used for splicing the temperature series from the satellites, but also of course from the use of different data sets, in particular the AQUA data used by the UAH group.

  419. @SteveF

    I have little doubt that humanity will devise solutions for whatever future consequences occur, at least so long as they have not been impoverished by imprudent expenditures. Where there probably is room for a political consensus today is in modest ‘no regrets’ policies like Pielke Jr and others have been suggesting for some time. Something is better than nothing.

    Why wouldn’t the solutions to the consequences make us impoverished? The proposals to limit CO2 emissions are not designed to make us broke, but just to speed up innovation. Already, alternate energy sources are getting much cheaper.

  420. bugs,
    The main point of no regrets policies is to facilitate the technical innovations that will help make non-fossil energy less expensive and energy use more efficient. The fact that solar and wind costs have come down just shows what modest price incentives, combined with a globally competitive market economy, can do to improve technology. But certain key technologies (like fail-safe uranium based breeders, or fail-safe thorium based reactors) are only going to be developed through government approved, and probably government funded, research and development.
    There remain a couple of billion very poor people on Earth who need cheaper energy (usually along with better government!) to become more wealthy, and to enjoy all the improvement in quality of life that permits. There will be another couple of billion by the time population peaks. We need cheaper and more abundant energy, and in the long run, fossil fuels will become more scarce and expensive. Fossil fuels are not the long term answer to humanity’s energy needs, independent of the impacts of CO2 on climate. If the consequences of warming turn out to be costly, a wealthier world will be in a better position to deal with those consequences. Which is why policies to promote research and development for non-fossil energy are sensible…. and no regrets.

  421. DeWitt Payne (Comment #101177),

    I wish that I had initially written: “a multitude of parameter values for uncertain factors like cloud influences and aerosols in GCM’s” instead of “multitude of modeling choices like in GCM‘s”. There is a qualitative difference in the things that make GCM’s and microwave temperatures uncertain; the best solution to an ill-posed problem can be demonstrated, and has little to do with “expert judgement”. Whatever uncertainty there is in microwave temperatures is due to other factors, as have already been discussed here and elsewhere.

  423. The idea of the policies that have actually been proposed is that they make CO2-emitting energy sources more expensive, so that alternatives look attractive by comparison. The cost of those alternatives is actually so much greater than current sources that such a policy alone would accomplish little without massively increasing the costs of current energy sources. But such policies would presumably be coupled with existing and additional subsidies for “alternative” energy, which shift the vast majority of the real cost of those energy sources from current users to future taxpayers. Now, supposedly, if the “alternatives” have a competitive advantage, those who produce them would have more money to invest in the development of improved, cheaper versions, and this innovation would, supposedly, offset the negative effects of increased energy costs. To begin with, this theory confuses cause with effect: investment doesn’t produce innovation, innovation attracts investment. Throwing money at the problem isn’t going to cause anyone to be more likely to have a flash of insight. The second problem with this theory is that the description of the consequences is just flat wrong. The truth is that the magical innovation can never offset the lost wealth from increased energy costs, which is lost permanently. The time spent with a weakened economy not producing what it could produce is lost permanently. Economic growth happens like compound interest: contrast an economy that grows at 2% continuously with one where the 2% growth is interrupted by 0% growth for even two years; the lost opportunity, the lost wealth, grows more and more massive as time goes on.
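The compounding claim is easy to check with arithmetic (illustrative numbers, not a forecast):

```python
# 40 years at 2% growth vs the same 40 years with two flat (0%) years:
# growth afterward compounds from a permanently lower base, so the gap
# never closes.
steady = 1.02 ** 40
interrupted = 1.02 ** 38          # two of the 40 years contribute no growth
shortfall = 1.0 - interrupted / steady
print(f"steady: {steady:.2f}x  interrupted: {interrupted:.2f}x  "
      f"permanent shortfall: {shortfall:.1%}")
```

The shortfall works out to 1 − 1/1.02², about 3.9% of output, and it persists in every subsequent year of the interrupted path.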

    Do not confuse the intentions of a policy with its actual results.

    Either way, though, I believe that people need to fully comprehend the magnitude of the policies being proposed, to see why, even if the fairy dust innovation story were true, it could not conceivably reduce emissions as much and as fast as the suggested policies propose. Waxman-Markey would have taken us to per capita emissions levels of the mid to late 1800s in about 40 years. What the heck kind of innovation is possibly going to accomplish that, while sustaining an economy 2.2 times the size of the present one?

  424. Andrew FL,
    Just to be clear: ‘no regrets’ policies have nothing to do with monstrosities like Waxman-Markey, which are stupid, destructive, and wouldn’t do anything like what they are claimed to do in any case. No, forced reduction in energy use via government mandate (high taxes or Cap’n Trade) will never work, and I have never suggested it would. A low energy tax (like $10-20 per ton of carbon) dedicated by law exclusively to supporting technological development of cheaper non-fossil energy is nothing like Waxman-Markey in scope or cost, and would actually have a good chance of substantive long term benefit. Would such a program be as efficient as privately funded research? Heck no! But on the other hand, publicly funded research can focus on things the private sector won’t (or legally can’t). We have huge existing public funding of all kinds of research (NIH funding alone is enormous), so there is plenty of precedent.
    .
    Could we trust politicians to not divert funding to rubbish commercial projects like Solyndra? Of course not. Public diligence would be needed to avoid subsidized commercial activities (which are almost always a boondoggle) and to minimize the really stupid stuff…. of which we must accept there will be some (this is government funded activity, after all). Could we trust politicians (especially those on the political left) to not try to expand the low carbon tax into a huge energy tax used mainly for wealth redistribution? Of course not; public diligence would be needed to avoid cancer-like growth of any energy tax.
    .
    But in spite of all the caveats, IMO there is a need for government involvement, certainly in nuclear power technology, and arguably in research on a range of energy production technologies and energy efficiency technologies. I don’t expect perfection from government programs, but I recognize that fossil fuels, independent of global warming, are a relatively short term energy supply. Looking beyond the next election cycle, or even beyond our own limited lifetimes, makes perfect sense to me.

  425. Carrick, GPS is rather low resolution and rather expensive.
    You can do the job far cheaper using laser distance measurement. You can have a number of units operating at the same time.

    http://www.fluke.com/fluke/usen/products/Laser-Distance-Meters?trck=distance

    A friend of mine used a pair on either end of a gallows crane, pointing to the base, to establish that it was horizontal. He had originally wanted to use GPS, but GPS is +/- 1 meter and laser is +/- 1 mm.

  426. DocMartyn,

    GPS used for civil surveying is accurate to the millimeter, but it’s more expensive than the less accurate “retail” GPS, which I think is what you must mean. I’m not sure how it stacks up against lasers; in my last experience it was fairly competitive.

  427. The issue of what the unadjusted GHCN data set represents came up in discussion on this thread. GHCN notes for Version 3 that the unadjusted temperature series are impossible for them to track in full and might or might not be adjusted. The adjusted temperatures are obtained from the unadjusted temperatures by processing that data through the GHCN quality check and homogenizing algorithm. I thought it might be instructive to look at the Adjusted versus Unadjusted temperatures by country, since I assumed that the handling of the original unadjusted data would be homogeneous by country. To that end I downloaded the most recent versions of GHCN mean temperature as given under Adjusted and Unadjusted for the period 1900-2011. I separated the data by country for Adjusted and Unadjusted, subtracted the Unadjusted data from the Adjusted data by month for each station, then averaged all station data by year and obtained a time series of this overall average for the years 1900-2011.

    I plotted 28 of these series by country with the number of stations noted on the plot and linked the plots below. The US difference plot is rather unique in that it shows a relatively large and monotonic upward trend over the entire period after 1940. Canada by contrast shows only small differences over the entire time period, as is more or less the case for Australia, France, Germany, Austria, Sweden, Ukraine, Indonesia, China, Japan and the UK. Other countries like Finland show a plateau of constant difference, then a period of changing differences, and then a step to a different-level plateau of constant difference.

    What we see in these plots could be accounted for, at least in part, by the unadjusted data in some countries’ cases having been adjusted to the point of eliciting no adjustments from the GHCN adjustment process. It should be clearly noted here that we are looking at averages, and those averages could contain variations of differences both positive and negative; the lack of a trend in the differences merely indicates that the adjustments result in no biases. I should probably go back and look at absolute differences by country. I do not have an explanation for those cases where the difference is not 0 but is a plateau of a constant amount lasting over several years. Further, I have always assumed that over the past years GHCN received what they would term and label as unadjusted temperature data. The adjusted amount could change, particularly when GHCN changed its homogenizing process as it did on going from V1 to V2 and V2 to V3, but I assumed that the unadjusted data received in past years would remain the same. I doubt that the original data received from most countries was always adjusted in the same manner, and thus why do I not see early data, as in the case of Germany, Austria, Sweden, Finland, Denmark, New Zealand, Ukraine and Iran (where long stretches back in time show exactly zero differences), being different from the GHCN V3 adjusted version?

    We have had major discussions about the CRU temperature data where the original data was lost and what that means to the integrity of the data set. If GHCN were to receive changing original data I would think that would create an integrity problem whether that original data were adjusted or not before going to GHCN. I would think that the integrity issue and GHCN’s awareness of it would rule out the changing of the unadjusted data over time and thus the questions of the patterns I see in the difference plots remain open. Can anyone here help me with my questions? It might be time to email GHCN again.

    http://img38.imageshack.us/img38/6665/ghcnadjunadjcountry1.png

    http://img266.imageshack.us/img266/1125/ghcnadjunadjcountry2.png

    http://img98.imageshack.us/img98/9044/ghcnadjunadjcountry3.png

  428. “But in spite of all the caveats, IMO there is a need for government involvement, certainly in nuclear power technology, and arguably in research on a range of energy production technologies and energy efficiency technologies. I don’t expect perfection from government programs, but I recognize that fossil fuels, independent of global warming, are a relatively short term energy supply. Looking beyond the next election cycle, or even beyond our own limited lifetimes, makes perfect sense to me.”

    SteveF you have covered a couple issues critical to current political philosophy and the problems it portends with possible attempts at AGW mitigation and other issues.

    Nuclear power has been, and is, very much in the hands of government through research and development, licensing, regulation and waste disposal. I see all these functions as being tainted by politics, not only in the US but worldwide, to the point where it could obliterate the entire industry.

    When talking about looking ahead I always like to bring forth the issue of Medicare and Social Security and the problems those government programs will impose on the economy (and long before the mythical trust funds run out of IOUs, i.e. when the outgo exceeds the income and the difference must be covered by taxes), and the reaction of politicians and intellectuals who should know better refusing to face the problems. That situation is not unique to the US. Further, we have the recent example of the Euro zone with the rather irresponsible short term fixes being proposed and implemented there. I do not see any relief from these issues without a major change in the intelligentsia’s thinking on them, which probably means a change of the intelligentsia.

  429. Re: Carrick (Aug 8 21:43),

    I don’t think it’s fair to say that stitching the different satellites together is a separate issue. Noise in the data contributes to the problem being ill-posed. Orbital drift and cross-calibration between different satellites are noise in the signal, just not random noise. In that respect, it’s no different than station moves and TOB corrections for the surface record. Sure, you can demonstrate that for the ideal case, say one location and one satellite at a single time, you can calculate a best solution and prove it. But if the case were ideal, i.e. one satellite with no drift in orbit and no change in the hot target temperature with satellite orientation for 34 years, then RSS, UAH and STAR would be getting the same results today. But they’re not. Judgement is still involved.

  430. Re: DocMartyn (Aug 9 11:19),

    Lasers require line of sight. GPS only requires a reasonably clear view of the sky at each location. Real Time Kinematics is good to ~2cm in real time for objects moving wrt the base station (if you spend enough money). You can even do fun stuff like measuring the yaw angle of a moving car caused by differing slip angles at the front and rear wheels. I don’t think you can do that very easily with lasers.
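The yaw measurement mentioned above reduces to simple geometry once RTK gives you two antenna fixes. A hedged sketch (the function name and the local east/north coordinate convention are my own assumptions, not any vendor's API):

```python
import math

def yaw_deg(front_xy, rear_xy):
    """Heading of the front-to-rear antenna baseline of a rigid body.

    front_xy, rear_xy: (east, north) positions in meters from RTK fixes.
    Compass convention: 0 deg points along +north, positive clockwise.
    """
    dx = front_xy[0] - rear_xy[0]   # east component of the baseline
    dy = front_xy[1] - rear_xy[1]   # north component of the baseline
    return math.degrees(math.atan2(dx, dy))
```

With ~2 cm fixes and a baseline of a couple of meters, the angular resolution is on the order of a degree, which is why closely spaced elements push the limits of the technique.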

  431. Kenneth Fritsch (Comment #101210),

    I said somewhere up thread that government does most things very poorly, and I believe that unfunded liabilities in Social Security, and a host of other looming financial/social issues, pretty much prove that. We get the government we deserve, and substantial reductions in future benefits for social welfare programs (not just Social Security, and not just in the USA!), and the bankruptcy of several States, all of which are as certain as PV = nRT, are only what is well deserved… by an apathetic and intellectually lazy public. Politicians get away with misleading people about factual reality because we, on the whole, do not want to hear the truth, especially about taxes, spending, and borrowing.
    .
    The ‘intelligentsia’ (an oxymoron if there ever was one), which finds endless excuses and defenses for all this politically motivated nonsense, is not going to change. Political change can come only from people insisting that politicians behave responsibly and give straight answers to direct questions; if a majority of the voters insist that politicians change, then the ‘intelligentsia’ will not matter much.

  432. DeWitt Payne (Comment #101211)-with regard to the various issues that are sources of differences between the data products by different groups, I think you need to identify the differences specifically and try to determine their origins, and determine which data products are more accurate. Of the people putting together satellite datasets, only one seems at all interested in identifying the sources of the differences: John Christy. He has presented evidence and done research which argues, convincingly IMAO, that RSS and STAR have significant biases.

    SteveF-Thinking back to the debate that started on Jeff’s blog a couple years back when I said I don’t believe in public funding for R&D, this is not a discussion I’m eager to have. Suffice it to say I am disheartened to think that there seems to be no one who is willing to agree with me. Well, fine, I’m a crazy extremist. I’m used to ideological isolation. But I’ll say again: investment doesn’t lead to innovation; innovation attracts investment.

  433. “Political change can come only from people insisting that politicians behave responsibly and give straight answers to direct questions; if a majority of the voters insist that politicians change, then the ‘intelligentsia’ will not matter much.”

    I think this is where we will disagree, as I judge that it is from the intelligentsia that voters obtain their views. I also do not see that the change required, or that I think is required, is embraced by any of the current politicians and their parties.

    “Thinking back to the debate that started on Jeff’s blog a couple years back when I said I don’t believe in public funding for R&D, this is not a discussion I’m eager to have. Suffice it to say I am disheartened to think that there seems to be noone that is willing to agree with me. Well fine, I’m a crazy extremist. I’m used to ideological isolation. But I’ll say again, investment doesn’t lead to innovation-innovation attracts investment.”

    My political philosophy would also disagree with public funding for R&D. I really do not look for anyone who might agree with my politics, as I know they would be few and far between at this point in time. I do believe that our politics are eventually much more determined by intellectuals and intellectual discussion than by voters’ choices. Most voters are much more informed about those matters that require them to make decisions in the marketplace about things that affect them personally than about political issues. It has to do with market decisions being voluntary and political decisions being imposed by force.

  434. DeWitt:

    Noise in the data contributes to the problem being ill-posed

    Stitching is definitely a different can of worms than your comments about multiple solutions for the same set of radiance values (and I don’t think the lack of uniqueness of the inverse solution is as big a problem as you intimate, but whatever).

    I don’t see any reason to conflate all of these together; they are separate issues, separate steps. The only overlap at all is the amount of noise from one step feeding into the next. Noise doesn’t make a system of equations ill-posed. Rank deficiency, for example, does that (but it is independent of the noise present).

    The fact that your error bars are bigger when you have more noise has nothing to do with ill-posedness. I think this has dragged on enough. You’re welcome to a final comment if you want, but I’ve communicated what I wanted to, as well as I could, already.
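The rank-deficiency distinction can be demonstrated in a few lines of NumPy. A sketch with made-up matrices: the rank-deficient system has infinitely many exact solutions even with zero noise, while the full-rank one stays uniquely solvable when noise is added (noise only perturbs the answer).

```python
import numpy as np

# Rank deficiency, not noise, is what makes a linear system ill-posed:
# a rank-deficient matrix has a nontrivial null space, so if x solves
# A x = b then so does x plus anything in that null space, with zero noise.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # second row = 2 * first: rank 1
x0 = np.array([1.0, 1.0, 1.0])
b = A @ x0

null_vec = np.array([2.0, -1.0, 0.0])      # A @ null_vec == 0
also_solves = np.allclose(A @ (x0 + 5.0 * null_vec), b)

# A full-rank (tall) system remains uniquely solvable under noise;
# the least-squares answer is merely perturbed, i.e. wider error bars.
rng = np.random.default_rng(0)
A_full = rng.normal(size=(10, 3))
y = A_full @ x0 + 0.01 * rng.normal(size=10)
x_hat, *_ = np.linalg.lstsq(A_full, y, rcond=None)
```

Here `also_solves` is true despite noiseless data, which is the ill-posedness; `x_hat` is unique and close to `x0` despite the noise, which is just uncertainty.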

  435. Andrew_FL:

    Thinking back to the debate that started on Jeff’s blog a couple years back when I said I don’t believe in public funding for R&D, this is not a discussion I’m eager to have. Suffice it to say I am disheartened to think that there seems to be noone that is willing to agree with me

    I think part of the disagreement comes from your view that people would be willing to invest in basic research, when they are most definitely not interested in doing so.

    You’d have to make an argument that basic research serves no positive role in society, or come up with an argument for how private investment would fund it in spite of the uncertainty in any given outcome, before we could arrive at any agreement on this. I don’t think the disagreement brands you a radical, just as somebody not seeing things realistically.

    I split R&D into 1) basic research, 2) applied research, and 3) development, with private funding generally being attracted to (3) only. To me, having the government fund (1) is equivalent to other infrastructural funding, as with bridges, roads, etc. Government spends very little on (3); most of it is privately funded (and when the government gets involved, it’s usually the equivalent of pouring water down a hole). Regarding (2), I see a role for “seed” money in public dollars, but think it works better if it’s generally controlled by the private market (except for really long term applied research programs that have applicability to e.g. the military or the medical community; the human genome project is an example of the latter).

  436. “I think part of the disagreement comes from your view that people would be willing to invest in basic research, when they are most definitely not interested in doing so.

    You’d have to make an argument that basic research serves no positive role in society, or come up with an argument for how private investment would fund it in spite of the uncertainty in any given outcome, before we could arrive at any agreement on this. I don’t think the disagreement brands you a radical, just not somebody seeing things realistically.”

    I think what you are saying is that government must force people to do what they would not do voluntarily because they do not see the benefits of R&D. That approach and thinking contain a lot of mischief for other pursuits of government not so noble as R&D. Without public consent for R&D I do not see how a voting majority would decide favorably on funding R&D. Privately funded R&D would not require a voting majority to see the benefits of R&D; in fact it would only require a few, who might either see it as a path to making money or do it as a matter of satisfying their intellectual curiosity. Once the purse strings of R&D are dominated and made totally or largely dependent on government funding, it is those uninformed and short-sighted voters who would be deciding whether that funding is continued or what is funded.

  437. Carrick (Comment #101224)-The way I see it (then and now) a lot of people care deeply about science and technology. I would personally be willing to allocate some of my income to research, and I think that anyone who cares deeply about science and technology would be willing to do the same, so I am not convinced it would be difficult to find people willing to fund research. My concern is really with the idea that everyone who pays taxes is obligated to pay for scientific research.

    But as I said, it’s not a conversation that seems to get very far. At any rate, you can comfort yourself knowing that my ideas are sufficiently unrealistic that they pose no real threat.

    As for whether I’m radical, well I’m sure that I am, I have been for a very long time. After all, I got into a lengthy debate at Jeff’s once about whether a progressive income tax is inherently socialist in nature. Not even Jeff agreed with me, in spite of the fact that it was a simple question of definitions. My conservative parents think I’m too rightwing. Yup, I’m radical alright.

  438. Carrick, Andrew, Kenneth,
    I really don’t think we are very far apart on this. The question of justifying public expenditure on basic research (as opposed to private investment) is best analyzed in terms of diffuse benefits and concentrated costs; at a certain level of uncertainty, private capital will not voluntarily participate. It has as much to do with the finite lifespan of a human being as anything else; when the potential return lies beyond the horizon of a person’s existence, public investment is the only thing that can be attracted (loathsome as the political process may be).
    I think Carrick is right about what attracts private investment… It is always the later stages that are attractive, not the ‘gleam-in-the-eye’ stages. And this is evident even in an industrial setting; blue/violet LEDs came into existence in spite of Nichia’s management, not because of it. I have myself hidden important research from ‘management’, lest it otherwise be killed. Andrew is right that innovation attracts capital; that is the whole problem. Innovation does attract capital, when it reaches the point where the probability of return can be determined. But capital does not often sponsor fundamental innovation. The benefits are too diffuse and uncertain compared to the up-front costs.
    So, I think there is a place for publicly funded research. It is NOT funding losers like Solyndra, it is funding a gleam in someone’s eye.

  439. Andrew_FL 101219,
    You should not be discouraged. You are discussing research and innovation with (mostly) old war horses who, through often painful experience with the real-world process, have reached a different perspective than yours. At 25, my views were more idealized than they are today, approaching 62. That does not mean I am right about any specific subject, of course, but it does mean that I have seen quite a lot. I suggest only that you allow yourself the benefit of some doubt about your own views, given limited data.

  440. SteveF:

    It is NOT funding losers like Solyndra, it is funding a gleam in someone’s eye.

    Yep. It should stay out of (3) development. The only reason it is involved in (1) basic research is because, as you said, nobody is going to fund that in the private sector…and this in spite of the fact that government is horrible at picking the “best” basic research, not because it is at all competent at it.

    (In a similar vein, they s*ck at building levee systems, and not just in New Orleans. But we still need levees, just don’t expect private entrepreneurs to volunteer to pick up the costs and “do it right”.)

  441. Kenneth Fritsch:

    Once the purse strings of R&D are dominated and made totally or largely dependent on government funding it is those uninformed and short sighted voters who would be deciding whether that funding is continued or what is funded.

    Except they aren’t. Most R&D is in (3) development and is privately funded. Check the numbers.

    I think what you are saying is that government must force people to do what they would not do voluntarily because they do not see the benefits of R&D

    How did you possibly get that from what I was saying?

    The public in general favors federal funding of basic research. Otherwise it wouldn’t happen.

  442. SteveF (Comment #101228)-Pretty sure I asked you guys never to use my age against me like that. 😉

    That said I don’t know if I appreciate being aged about 4 years. I’m younger than you remember, I think.

    Well, I can’t say for sure what I’ll think when I’m much older. I do sometimes imagine how enraged my younger self would be with me. But that’s got little to do with politics. That said, showing me I’m wrong would probably require me to see what I advocate actually tried. Ain’t gonna happen. This is relatively small potatoes anyway. Given that I’m prepared to live with the fact that Americans are never going to accept real capitalism-when even the Republicans will nominate a socialist and pass him off as a champion of freedom-I can live with what I feel is a misplaced sentiment in the funding of research.

  443. @- SteveF (Comment #100990)
    ” While I see global warming as a subject of some concern, and one which MAY require action for the public good, proportional to the extent and future cost of reasonably well defined long term consequences, Eli and most of his readers clearly believe that it is immoral to not act now, independent of cost, financial or otherwise, even in the face of very large uncertainty.”

    There are complex issues involved in intergenerational ethics, but
    I don’t think there are any credible arguments against the claim that we DO consider inflicting harm on future generations to be a moral failing. That a drug like thalidomide – which damages the next generation rather than the primary consumer – was withdrawn despite its benefits to that consumer shows as much.
    But to characterise the two sides as consisting of one that would take action “…proportional to the extent and future cost of reasonably well defined long term consequences” and the other advocating action “…independent of cost, financial or otherwise, even in the face of very large uncertainty” is at the very least mistaken.

    Both ‘sides’ regard the actions they advocate, or oppose, as proportional to the extent and future cost of climate change. Both sides face significant uncertainty as an inherent part of the issue. How that uncertainty is integrated into any plan of action is one of the problems that intergenerational ethics has to deal with. It is not accurate to say that one side will only respond in the case of certainty while the other ignores uncertainty.

    In the case of sulphates and CFCs the benefits and damages were contemporaneous. The case for control of emissions of these chemicals could be made on the grounds that the immediate costs were sufficient to override the benefits.
    But suppose that the immediate decline in ozone was much smaller, but the persistence of CFCs was much greater so that damage was far more weighted to the future while the benefits were still immediate. Would it still have been justified to ban CFCs if the danger and damage was to our grandchildren rather than ourselves ?

    @- “Much of Eli’s readership also appears to believe substantial differences in wealth, whether between countries or between individuals, are similarly immoral, and so want to address the ‘moral imperative’ of dealing with global warming in such a way that it simultaneously addresses the ‘moral imperative’ of reducing differences in wealth via a colossal redistribution.”

    There is some confusion between the issue of social equity and emission controls of CO2. But it is incorrect to portray the right, or the Libertarian individualist, as entirely unconcerned with issues of wealth differentials. Even the most enthusiastic of the Ayn Rand acolytes claim that some sort of ethical equity emerges, via an unidentified ‘invisible hand’ process, from the exercise of unremitting individual greed, although I know of no real-world example of such a process. It is more a case of whether resource inequality is regarded as a primary target of political action or only ever as a secondary effect of civic governance.

    The main problem with wealth differentials is that it multiplies the moral harm. The rich benefit far more from CO2 emissions than the poor, but the harms are predominately going to impact the least well off.

    It is difficult to deal with intergenerational ethics, one way to consider the issues involved is to look at how we view previous generations for the choices they made that may have been of great immediate benefit, but have significantly damaged resources in the present.
    What do we think of the people and institutions that fished out the cod on the Grand Banks or the herring in the North Sea ? Both were resources capable of sustainable exploitation, but short-term advantage has completely destroyed those resources.
    The history of hard rock mining carries a similar message. It has taken repeated efforts by central governance to ensure that mining does not cause far more damage over the medium/long term than can be met by the short-term gain. The complaint from the mining industry is that by imposing on them the cost of their extraction externalities, many resources have been rendered economically unviable. The result is that many of the metal elements crucial for modern technology are mined in countries with very lax regulation, which enables extraction without paying for the environmental damage caused.

    I do not know any simple solution to the problem of intergenerational ethics. But I am pretty sure that denying there IS such a problem by claiming that future discounting in cost/benefit analysis negates the need to consider uncertain future damages is unwarranted.

  444. Izen:

    The main problem with wealth differentials is that it multiplies the moral harm. The rich benefit far more from CO2 emissions than the poor, but the harms are predominately going to impact the least well off.

    Although this might seem self-evident, I question the implication that the poor do not benefit significantly from CO2 emissions. Maybe that wasn’t your thought, but the third-world proliferation of motor-scooters, motorcycles, and pick-up trucks – all fossil-fueled – suggests that those emerging from poverty don’t agree with you.

    I don’t think we benefit in any way from continuation of third-world poverty, nor would we, or they, benefit from throttling fossil-fueled transportation just as they are beginning to dig their way out of the middle-ages – if that’s a fair characterization.

  445. izen (Comment #101235),
    There is so much in your comment that I disagree with that it is hard to know what to address. I do agree that both ‘sides’ believe their position is the morally correct one.
    .
    With regard to externalities of economic activity, which seems to be at the bottom of most of what you are saying: These need to be addressed individually, since each is different. One of the great problems with ‘external costs’ is that the value assigned often ends up more related to the political views/values of the person calculating those costs than the factual effects involved. In cases where the effects are clear (like acid rain damage to the bio-systems of lakes), it is possible to discuss the external costs compared to the cost of remediation. But in other cases the external costs (a mountain top disappears due to coal mining, ‘ruining the view’) are often in the eye of the beholder. There is, IMO, too much conflation of the knee-jerk reaction “you are damaging the environment” with quantified damage that can be rationally discussed and weighed against the cost of remediation.
    .
    The issue of overfishing strikes me as a red herring in any discussion of global warming. The depletion of fish stocks due to overfishing is a documented factual reality, and there are lots of laws and international agreements already in place to deal with overfishing. These are by no means perfect, but there are plenty of cases where existing regulations on fish take have already allowed recovery of depleted stocks. (Consider for example the sea scallop recovery in Atlantic coastal waters.) I know of nobody who argues that overfishing is a good thing, nor anyone who opposes efforts to avoid overfishing. The arguments, and there are lots, are about the details.
    .
    The issue of “intergenerational ethics” is fraught with wildly divergent opinions, and finding common ground is never going to be easy. Consider for example the political hot potato of unfunded public liabilities and ever mounting spending on social programs, which leads inevitably to unsupportable government debt. I would feel a lot more confident having a discussion about intergenerational ethics with people who agree first to cut unsustainable public spending.
    .
    In the specific case of global warming, the great challenge is to rationally evaluate future harm, and not forget to include potential future benefits of warming in any evaluation. As far as I can tell, there is no evidence of actual harm, only projections of future harm. The ethics of leaving a couple of billion people in horrific poverty today due in large part to a lack of inexpensive energy seems pretty clear to me, especially when the comparison is made to speculative harm long in the future. I think a good argument can be made that reducing poverty today will have far greater positive effects, now and in the future, than a hugely expensive effort in the developed world to force reductions in CO2 emissions… and any effort in the developed world is doomed to be ineffective anyway (China and India are not going to stop rapid economic expansion and CO2 emissions over the next two or three decades, much as some might like them to).

    The main problem with wealth differentials is that it multiplies the moral harm. The rich benefit far more from CO2 emissions than the poor, but the harms are predominately going to impact the least well off.

    I have no idea what ‘moral harm’ means. I do know that poor people suffer far more from expensive energy than do rich people. Rich people can buy a Prius and feel self-satisfied and smug about their environmental sensibilities. If they had a shred of good sense they would calculate the large difference in price between the Prius and a comparable small sedan, buy the small sedan, and use the difference to help feed and educate half a dozen poor kids in Africa for a couple of years. And yes, I do think the lives of poor kids are more valuable than the extra CO2 the comparable sedan would emit.
    .
    As I have noted in other comments (this thread and elsewhere), fossil fuels are at best a short term source of energy; they will for certain become more scarce and expensive over time. A prudent course of public action is to underwrite research on long term energy sources; we are going to need energy, and lots of it, long after fossil fuels are no longer important. Private enterprise does not have a long enough time horizon to invest in the kinds of research that are needed. The common political ground, if one is found, almost certainly will not include considerations of “environmental justice”, “intergenerational obligations”, “income distribution”, etc. It almost certainly will include continued efforts to clearly define future impacts of warming and to find economically competitive alternative energy sources (like fail-safe breeder reactors and fail-safe thorium reactors).

  446. “Even the most enthusiastic of the Ayn Rand acolytes claim that some sort of ethical equity emerges via an unidentified ‘invisible hand’ process from the exercise of unremited individual greed. Although I know of no real world example of such a process.”

    Advocates for climate concern would have more credibility if they didn’t spew dreck like this.

    First of all, the invisible hand of the marketplace is a term used by Adam Smith. By it, he meant that outcomes not intended by individuals seeking their own self-interest are nevertheless the result of them doing so. Leftists believe such a thing too – they just believe every such outcome hurts other people. To say there is no real-world process, however, betrays an ignorance of the concept of a market. Such is a typical characteristic of an aging Marxist, although it could be you are just an unfrozen caveman.

    The problem of fisheries is not “short term advantage.” That’s a SYMPTOM. The real problem is that fisheries are PUBLIC PROPERTY. People thus have every incentive to exploit it, but there is no owner to concern himself with the upkeep of the property, maintaining it for future use. The solution is simple, it’s called PROPERTY RIGHTS.

  447. Carrick (Comment #101232)

    I am aware of the amount of private research versus government-funded research. My point was that, as the linked article below states: “The second implication is that government R&D directly crowds out private inventive activity”. The link and excerpt below, authored by a well-known and not radical economist, detail some of my thoughts on the topic. I would not agree with the author’s Chicago School economic theory, but he does present some interesting facts.

    Abstract
    Conventional wisdom holds that the social rate of return to R&D significantly exceeds the private rate of return and, therefore, R&D should be subsidized. In the U.S., the government has directly funded a large fraction of total R&D spending. This paper shows that there is a serious problem with such government efforts to increase inventive activity. The majority of R&D spending is actually just salary payments for R&D workers. Their labor supply, however, is quite inelastic, so when the government funds R&D, a significant fraction of the increased spending goes directly into higher wages. Using CPS data on wages of scientific personnel, this paper shows that government R&D spending raises wages significantly, particularly for scientists related to defense such as physicists and aeronautical engineers. Because of the higher wages, conventional estimates of the effectiveness of R&D policy may be 30 to 50% too high. The results also imply that by altering the wages of scientists and engineers even for firms not receiving federal support, government funding directly crowds out private inventive activity.

    http://faculty.chicagobooth.edu/austan.goolsbee/research/r&daer.pdf

    When you say “The public in general favors federal funding of basic research. Otherwise it wouldn’t happen.” in response to my “I think what you are saying is that government must force people to do what they would not do voluntarily because they do not see the benefits of R&D”, let us first of all recognize that government action is by force whether or not it is approved by a majority of the voting public, and that it is different from voluntary exchanges in a free market. In other words, a majority in favor of an action does not change the issue of force. The issue then becomes one of noting that the public favors, or better, appreciates the benefits of and need for R&D, and yet the claim is that they would not act on that favorability through investments or donations, i.e., that they need to be forced.

    My point would be that government action in these matters requires little convincing or analysis of the benefits of R&D and of who might benefit; it would appear sufficient merely to claim that a proper level of R&D requires government subsidies – end of analysis. In a truly free marketplace, showing the benefits of R&D – and in specific instances – would be required to attract investors or, in some cases, donations.

  448. “Private enterprise does not have a long enough time horizon to invest in the kinds of research that are needed. The common political ground, if one is found, almost certainly will not include considerations of “environmental justice”, “intergenerational obligations”, “income distribution”, etc. It almost certainly will include continued efforts to clearly define future impacts of warming and to find economically competitive alternative energy sources (like fail-safe breeder reactors and fail-safe thorium reactors).”

    SteveF, while I disagree with most of what izen (Comment #101235) says, I unfortunately would have to say that his views are much more in line with those of the prevailing intelligentsia than yours or mine. Environmental justice, intergenerational obligations and income redistribution are very much in tune with modern-day intellectuals’ selling points for government-administered attempts to mitigate AGW.

    I would also take issue with your statement about the shorter time horizon of private enterprise by asking: compared to what? Modern governments and the prevailing intelligentsia have, in my view, demonstrably very short time horizons. Their general solution to the looming crises in current government programs and finances seems to be that when the crisis hits we will, and can better, impose higher taxes and/or print more money. In a truly free-enterprise system of justice, any proven detrimental past, present or future effects on individuals or their property from AGW would be handled through a system of torts. Not that that system will be used any time soon.

  449. Re: Kenneth Fritsch (Comment #101247)

    I unfortunately would have to say that his views are much more in line with the prevailing current intelligentsia than yours or mine. Environmental justice, intergenerational obligations and income redistribution are very much in tune with modern day intellectuals selling points on government administered attempts to mitigate AGW.

    A quite compelling argument can be made that income redistribution is the current state of the game, so I’m not sure why that would be counted as a point against government-administered mitigation. In fact, I’m not sure what is particularly “unfortunate” about any of the three “selling points” you mention, so I must be misunderstanding your point. Please elaborate if you would.

  450. Re: SteveF (Comment #101240)

    Rich people can buy a Prius and feel self-satisfied and smug about their environmental sensibilities. If they had a shred of good sense they would calculate the large difference in price between the Prius and a comparable small sedan, buy the small sedan, and use the difference to help feed and educate half a dozen poor kids in Africa for a couple of years. And yes, I do think the lives of poor kids are more valuable than the extra CO2 the comparable sedan would emit.

    SteveF,
    I think you’re exaggerating a bit here. Several calculations I’ve seen show that the Prius (for example) does recover its cost over a comparable small sedan within a span of a few years, given reasonable assumptions about usage pattern and opportunity costs.

    If you change the condition from “ridiculously overpriced for the sake of smugness” to “a small difference in price which may range from slightly positive to slightly negative,” then suddenly a decision to buy a Prius isn’t quite the `moral harm’ (TM) that you make it out to be.

    I sympathize with your point, but maybe this was not the greatest example.

  451. Oliver,
    I have not read any cost estimates, but I have done my own rough estimate.

    Prius: ~US$24,000
    Comparable size standard car (say a Camry or similar): ~US$18,000

    Difference: US$6,000

    Driving 12,000 miles per year, fuel consumption: Prius ~240 gallons, standard car ~345 gallons. Assuming US$4.00 per gallon, that is a payout of 105 * US$4.00 = US$420 per year. Assuming a 12 year lifetime to near-zero value (and comparable maintenance costs), the Prius is STILL going to cost about US$1,000 more in total, not counting extra capital costs.
    .
    I don’t know how to monetarily value feeling pleased with oneself, so maybe the Prius is worth it.
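
    SteveF’s back-of-the-envelope numbers above can be checked with a few lines of arithmetic. The prices, annual fuel figures, and 12-year lifetime are his stated assumptions, not market data – this is a minimal sketch of his calculation only:

```python
# Rough 12-year total-cost comparison, using the figures assumed above.
GAS_PRICE = 4.00          # US$/gallon, assumed constant over the lifetime
YEARS = 12                # assumed lifetime to near-zero resale value

PRIUS_PRICE, SEDAN_PRICE = 24_000, 18_000   # US$, assumed sticker prices
PRIUS_GALLONS, SEDAN_GALLONS = 240, 345     # gallons/year at 12,000 mi/yr

annual_saving = (SEDAN_GALLONS - PRIUS_GALLONS) * GAS_PRICE
lifetime_saving = annual_saving * YEARS
net_extra_cost = (PRIUS_PRICE - SEDAN_PRICE) - lifetime_saving

print(f"Annual fuel saving:  ${annual_saving:,.0f}")    # $420
print(f"Lifetime saving:     ${lifetime_saving:,.0f}")  # $5,040
print(f"Net extra cost:      ${net_extra_cost:,.0f}")   # $960
```

    The US$960 result matches the “about US$1,000 more” conclusion, but note it ignores discounting, maintenance differences, and resale value.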

  452. Oliver (Comment #101260)

    “In fact, I’m not sure what is particularly “unfortunate” about any of the three “selling points” you mention, so I must be misunderstanding your point. Please elaborate if you would.”

    Oliver, I am looking at the problem as a Rothbardian libertarian.

    Environmental justice, as I think of it as used in its current context, is this rather vague and general view that we need to save the environment, and to do so with draconian government regulation and without a lot of concern for who the responsible parties are or for the specific damage inflicted. The Rothbardian solution deals with protecting individual property rights and proving under tort law that harm has been inflicted.

    Intergenerational obligations are used, I judge, by the prevailing intellectual classes with regard to AGW as a marketing ploy; otherwise, why would these same people seem so little concerned about the looming government financial crises, including those emanating from Medicare and SS? They seem to have no problem laying off those responsibilities onto future generations. Some of it is in the same vein as what I see when advocates for more government regulation start with the need “because we have to protect the children.”

    Redistribution of income is very much in the realm of the current intellectual regime, as you say, and they use any and every venue to increase that redistribution. They hear the arguments at these blogs about AGW mitigation adversely affecting the less well off of the world, and they turn that around by proposing that, since that is indeed the case, we must buy them off with – well, what better than redistribution of income and wealth.

  453. “I think you’re exaggerating a bit here. Several calculations I’ve seen show that the Prius (for example) does recover its cost over a comparable small sedan within a span of a few years, given reasonable assumptions about usage pattern and opportunity costs.”

    My son and I had this discussion and it depends heavily on the assumptions. Do you care to elaborate?

  454. Re: SteveF (Comment #101280)

    Oliver,
    I have not read any cost estimates, but I have done my own rough estimate.

    Prius: ~US$24,000
    Comparable size standard car (say a Camry or similar): ~US$18,000

    Difference: US$6,000

    Driving 12,000 miles per year, fuel consumption: Prius ~240 gallons, standard car ~345 gallons. Assuming US$4.00 per gallon, that is a payout of 105 * US$4.00 = US$420 per year. Assuming a 12 year lifetime to near-zero value (and comparable maintenance costs), the Prius is STILL going to cost about US$1,000 more in total, not counting extra capital costs.

    SteveF,

    For the typical car owner, who will trade in or sell the car after 3-8 years, you need to take into account the depreciation over the ownership period. Over 5 years you’ll lose a bit more than half of the original value, meaning you need to cut your $6k initial difference nearly in half. In fact, the Prius has somewhat more favorable depreciation than a Corolla, so you’ll come out ahead after about 3 years or so, assuming fuel stays around $4/gal.

    On the other hand, if you keep the car for 12 years and assume it goes to zero value, then sure, the above analysis doesn’t apply. But then, I don’t think you can reasonably assume that fuel will remain at $4/gal for the next 12 years. You also start running into very uncertain maintenance costs which at some point will dominate over the fuel costs, and we can’t really say much at all about the relative costs.

    I don’t know how to monetarily value feeling pleased with oneself, so maybe the Prius is worth it.

    I’m sure for some it would be worth it even for a significant premium.

    Re: Kenneth Fritsch (Comment #101282) 


    My son and I had this discussion and it depends heavily on the assumptions. Do you care to elaborate?

    See the above. Relatively well-equipped Corolla for $18k, $24k Prius II, 12k mi/year, sale sometime between 3-8 years after purchase. The exact break-even point will obviously vary if you tweak the numbers, but it stays within reason.
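
    Oliver’s depreciation argument can be sketched the same way. The resale values below are illustrative assumptions only (not quoted market figures), chosen to reflect his claim that the Prius holds its value somewhat better:

```python
# Break-even sketch for a finite ownership window: the effective premium
# is the difference in depreciation, not the difference in sticker price.
GAS_PRICE = 4.00                 # US$/gallon, assumed constant

PRIUS_PRICE, SEDAN_PRICE = 24_000, 18_000
PRIUS_RESALE = 12_000            # assumed value retained after 5 years (~50%)
SEDAN_RESALE = 8_100             # assumed ~45% retained (steeper depreciation)

prius_depreciation = PRIUS_PRICE - PRIUS_RESALE          # 12,000
sedan_depreciation = SEDAN_PRICE - SEDAN_RESALE          #  9,900
effective_premium = prius_depreciation - sedan_depreciation

annual_fuel_saving = (345 - 240) * GAS_PRICE             # 105 gal/yr saved

print(f"Effective premium: ${effective_premium:,}")                       # $2,100
print(f"Break-even: {effective_premium / annual_fuel_saving:.1f} years")  # 5.0 years
```

    With these particular resale assumptions, break-even lands at five years rather than three; nudging the resale gap or the fuel price moves it in either direction, which is consistent with the point that the exact break-even depends heavily on the inputs.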

  455. @- Andrew FL
    “The problem of fisheries is not “short term advantage.” That’s a SYMPTOM. The real problem is that fisheries are PUBLIC PROPERTY. People thus have every incentive to exploit it, but there is no owner to concern himself with the upkeep of the property, maintaining it for future use. The solution is simple, it’s called PROPERTY RIGHTS.”

    That kinda makes the point I was trying to convey.

    Who gets the property rights for the atmosphere, and in what jurisdiction can they sue for damage to its long-term composition ?

  456. I see a great future for lawyers.

    Reminds me of the kids book, “The King, The Mice and the Cheese”. We will get rid of all the politicians and find we have replaced them with lawyers.

  457. “We will get rid of all the politicians and find we have replaced them with lawyers”

    Last time I looked, a lot of politicians were lawyers.
