When results aren’t bad

A random comment on a recent blog post over on Bad Astronomy drew my attention to an article back in November by Willis Eschenbach titled “When Results go Bad”. In the article, he highlights a critical email received by Phil Jones complaining that the temperature data shown in the IPCC chart below does not match actual temperature data from Scandinavia.

AR4 WG1 Fig 9.12

While it is true that this figure diverges from Scandinavian temperature data, the explanation for the divergence is simple: it’s not purporting to show Scandinavian temperatures.

While the graph in question lies on top of Scandinavia, it’s actually labeled “NEU”, and it appears where it does primarily to avoid overlapping the other European temperature chart, “SEM”.

If we look at the IPCC documentation for the figure, we find out that “NEU” stands for Northern Europe, and covers the whole land area between 10W to 40E and 48N to 75N. This is a much larger area than just the Scandinavian countries, especially since the projection used to produce these maps severely distorts the land area in question.

We can use raw GHCN data (v2.mean) to find all stations from 10W to 40E and 48N to 75N, calculate the anomalies relative to the 1961-1990 baseline period, assign them to 5×5 grid cells, and apply a land mask to determine the monthly and annual anomalies for the entire area. This gives us around 170 temperature records in the region to work with. The annual anomaly chart looks like this:

(Click images to embiggen)
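For concreteness, here is a minimal sketch of the gridding step just described. The data structures are hypothetical (stations carrying lat, lon, and precomputed 1961-1990 anomalies), and a cos(latitude) weight stands in for the land-mask area weighting; the actual processing is more involved.

    import numpy as np

    def grid_cell(lat, lon):
        """Assign a station to a 5x5 degree grid cell."""
        return (int(np.floor(lat / 5.0)), int(np.floor(lon / 5.0)))

    def regional_anomaly(stations, year):
        """Average stations within each 5x5 cell, then combine the cells with
        a cos(latitude) area weight (standing in for the land-mask weighting)."""
        cells = {}
        for st in stations:  # st = {"lat": ..., "lon": ..., "anoms": {year: anomaly}}
            if year in st["anoms"]:
                key = grid_cell(st["lat"], st["lon"])
                cells.setdefault(key, []).append(st["anoms"][year])
        if not cells:
            return None
        num = den = 0.0
        for (ilat, _), vals in cells.items():
            w = np.cos(np.radians(ilat * 5 + 2.5))  # weight by cell-center latitude
            num += w * np.mean(vals)
            den += w
        return num / den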

The chart includes a 10-year running mean to help smooth out the annual variability. The 10-year mean is similar to, but not identical to, the chart shown in the IPCC report. If we read the legend of the IPCC chart, we discover that it shows discrete 10-year means connected by a line, rather than a running 10-year mean. There are two ways to interpret each 10-year period, either:

* The full decadal period (e.g. 1980-1990)
* The start of the decade +/- 5 years (e.g. 1985-1995)

Both are plotted in the chart below:


As we see, the latter (+/- 5 years) is effectively identical to the chart shown in the IPCC report.
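For concreteness, a minimal sketch of the two windowing schemes (assuming annual is a dict mapping year to annual anomaly; the example years mirror those above):

    import numpy as np

    def decadal_full(annual, start):
        """Mean over the full decade beginning at start (e.g. 1980-1989)."""
        vals = [annual[y] for y in range(start, start + 10) if y in annual]
        return np.mean(vals) if vals else None

    def decadal_centered(annual, start):
        """Mean over the decade's start year +/- 5 years (e.g. 1990 -> 1985-1995)."""
        vals = [annual[y] for y in range(start - 5, start + 6) if y in annual]
        return np.mean(vals) if vals else None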

It seems that Willis, who argued that

“Now, I have not taken a stand on whether the machinations of the CRU extended to actually altering the global temperature figures. It seems quite clear from Professor Karlen’s observations, however, that they have gotten it very wrong in at least the Fennoscandian region. Since this region has very good records and a lot of them, this does not bode well for the rest of the globe …”

might have been a tad premature in his criticism.

Bonus:

I’ve been working a bit with GSOD data (processed from daily values into monthly means by Ron Broberg), and we can compare the results we get with GSOD data as opposed to GHCN data. Note that GSOD has many more stations available (> 500) for the NEU region post-1970 than GHCN.


We can also compare the decadal anomalies:

85 thoughts on “When results aren’t bad”

  1. Zeke: Thanks for clearing this up! It would have been nice if Jones and Trenberth had actually taken the time to explain all this properly to Karlen back in 2008, though…

  2. Strong detective work, Zeke.

    Willis’ article is long and seems to cover multiple issues; I’ll have to leave that for another day. I wanted to leave a note at that WUWT post so that those readers could see what you show — but comments are closed (I guess because the article is months old).

    Following in the line of the efforts of the trend-setting Keith Kloor, it would be nice if it were possible for Willis and you to engage in a civil exchange on the merits of the case. Keeping, y’know, personalities and such to the side. A fella can dream…

  3. But Zeke, you had to go all the way to the Appendix in the Supplementary Material to find the caption for that figure. And then you had to do some math. That’s too much to expect for a busy blog scientist. There’s so much fraud to uncover, one can’t stop to do the math or read the caption. In light of that, I think Willis was quite justified in making his conclusions about the ‘machinations’ of the CRU.

  4. Carrot,

To be fair, Kevin Trenberth didn’t really look at it in depth either in the email chain. He assumed that the figure did represent Scandinavia, and passed it on to Phil Jones to respond to; Jones dashed off a quick and somewhat vague answer, as he was traveling at the time.

    AMac,

    I think this is the only substantive issue covered, though Willis does spend a bit of time criticizing Trenberth’s (admittedly somewhat poor) response.

  5. Zeke,
    Glancing at the emails, it does look like they went way, way off into the weeds. I’ll give you that.

    Instead of sending rushed and confused emails while on vacation, they should have just forwarded the question to the person most able to accurately deal with a detailed question – the guy who made the figures in the first place.

    So yes, that is a major mitigating factor for Willis – if the emails went into the weeds, and as a result he ended up there as well, that’s not entirely his fault.

    This mainly goes to show the usefulness of having your code handy. You can quickly clean up this sort of mess.

  6. Shame on Willis for paying any attention to what those climate scientists wrote in their e-mails. He should have known that they would screw everything up.

  7. Now I’m curious. Is the result that Karlen calculated available anywhere? He didn’t chase down the caption to see how NEU was defined, but I wonder if he got the right wrong answer for the area that he was using instead. Meaning, did he combine the stations and find the anomalies in any sensible way, or did he EM Smith it?

  8. How did they estimate the natural variability?

    The annual anomalies are +/- 1.5C. Monthly anomalies would have an even higher range.

    I understand these charts are based on forced model runs, which do not exactly build in appropriate levels of natural variability, since internal variability is not a forcing to start with.

  9. Illis

    See the caption, in the supplementary material for Ch9

    The red bands represent approximate ranges covering the middle 90% of 58 simulations of the climate of the 20th century with prescribed anthropogenic and natural forcings from 14 climate models that did not exhibit excessive drift in their control simulations (no more than 0.2°C per century). The blue bands were determined similarly using 19 simulations with prescribed natural forcings only from 5 models.

    It’s all coming from 20th century model runs – fed with natural forcings, and with whatever internal variability the different models spit out.

    You can get a sense for the magnitude and period of that model internal variability by looking at control integrations.
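    For illustration, a minimal sketch of how such a band can be computed from an ensemble (made-up data; the actual AR4 processing also screens for model drift):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2000)
        # 58 hypothetical simulations of the regional anomaly, one per row
        ensemble = rng.normal(0.0, 0.5, size=(58, years.size))

        # The band covering the middle 90% of runs in each year: 5th-95th percentiles
        lo = np.percentile(ensemble, 5, axis=0)
        hi = np.percentile(ensemble, 95, axis=0)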

  10. stan:

    Shame on Willis for paying any attention to what those climate scientists wrote in their e-mails. He should have known that they would screw everything up.

    He could do what Zeke did and figure out how to get the right answer from the data, not generate more confusion and obfuscation. Willis IMO does more of the latter than the former, which is the true shame of the matter.

  11. Ron,
    Sounds like a good quick project. I’m away from the computer for the holiday weekend but I’ll look at it Tues. You could also ask Nick or Chad to run the numbers in the meantime if you don’t want to wait 😛

  12. Huh, in all the fuss over Darwin, that part kind of got lost. But yes, that’s the lead-in. I wonder how the website Willis was using does the station combining.

    That’s still sort of a loose end in all of this – we’ve seen what little impact the GHCN adjustments have on the global record, but nobody’s bothered to emulate the exact v2.0 process for that exact station, to see precisely why it did whatever it did. In the end, I think Willis’s claim was that the adjustments couldn’t have come about if NOAA had used their procedure as published. He was basing that on some completely inconsequential adjustment in the 1920s that didn’t have any real impact, saying there weren’t enough neighbor stations during the 1920s to use the v2.0 method, while the audience was oohing and aahing at the bigger adjustments later on.

  13. This phenomenon (“proof” by misrepresentation of what a source is referring to) is endemic over at WUWT. I blogged recently on another fine example, Steve Goddard constructing an entire post around misidentifying “pack ice around Iceland” as “the Arctic.” And as here, he used his (lazy and sloppy? Or cunning and dishonest?) mistake to accuse someone ELSE of contradicting themselves:

    [Edited to remove url and hopefully escape spam filter; it’s the second post from the top, “How it works.”]

  14. “If we look at the IPCC documentation for the figure, we find out that “NEU” stands for Northern Europe, and covers the whole land area between 10W to 40E and 48N to 75N. This is a much larger area than just the Scandinavian countries, especially since the projection used to produce these maps severely distorts the land area in question.”

    Which raises the question of why the particular projection was chosen and why the information was presented in the way it was. To a casual reader it would appear that the NEU data was indeed referring to Scandinavia. Even Trenberth thought it was.

    Yet again sloppiness by IPCC.

  15. He could do what Zeke did and figure out how to get the right answer from the data, not generate more confusion and obfuscation. Willis IMO does more of the latter than the former, which is the true shame of the matter.

    i am impressed by this comment! (and i agree, of course)

  16. The interesting thing to note about the attribution studies is the different models used for natural forcing only versus those used for anthro+natural. Odd methodology.

    Also interesting to note how they apparently decimated the model output when matching against observations. nice work there.

  17. The interesting thing to note about the attribution studies is the different models used for natural forcing only versus those used for anthro+natural. Odd methodology.

    I noticed that. Probably only those 5 groups did the natural forcing only 20th C runs.

  18. These results use the NOAA/NCDC GHCN v2 unadjusted data (as opposed to the GHCN v2 adjusted).

    You can download it here:
    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
    The unadjusted data is v2.mean.Z
    The adjusted data is v2.mean_adj.Z
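    A minimal reader for these files might look like the sketch below. It assumes the usual v2.mean fixed-width layout (a 12-character station/duplicate ID, a 4-character year, then twelve 5-character monthly values in tenths of a degree C, with -9999 for missing); check the format notes on the FTP site before relying on it.

        def read_v2_mean(path):
            """Yield (station_id, year, monthly_means) from a GHCN v2.mean-style file."""
            with open(path) as f:
                for line in f:
                    station = line[0:12]   # country + WMO + modifier + duplicate
                    year = int(line[12:16])
                    monthly = []
                    for m in range(12):
                        raw = int(line[16 + 5 * m : 21 + 5 * m])
                        monthly.append(None if raw == -9999 else raw / 10.0)  # tenths of C
                    yield station, year, monthly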

    For the purist, I’m not sure that there is such a thing as a raw wx data set. All weather data sets are compiled from data read from instruments, encoded (manually or by software) into different reporting formats and sent to various collection points. When received by a wx processing center, QC is usually applied at that point to throw out apparently miscoded records and records not correctly read off the wx network. (Not always successfully; there are times when bad records get through.) This includes the ISH, GSOD, and GHCN data sets.

    Nevertheless, the results above do not include adjustments such as TOBS, UHI, and homogeneity.

  19. > Nevertheless, the results above do not include adjustments such as TOBS, UHI, and homogeneity.

    It is my impression that, for most or all of the analyses that Zeke has posted at The Blackboard, inclusion of these adjustments (e.g. TOBS (time of observation), UHI (urban heat island) and homogeneity) would not alter his findings in a major way.

    This obviously couldn’t hold true for studies of the adjustments themselves!

    Zeke, Ron, Nick, Majicjava — does that sound about right?

  20. Amac,

    The way I look at it, the work of Zeke and others answers the question of methodological bias in the approach of GISS or CRU or NCDC. GIVEN a dataset, the methods employed by the three majors do not bias the result in any appreciable way. To be sure, there are MINOR differences in the trends reported by various methods. These differences (let’s stipulate less than 10%) are due to various choices an analyst makes: which ocean dataset to use, how to select and combine stations, how to grid the world, how to handle missing data. All those decisions are defensible, but different choices lead to slightly different answers. I’d account for this by calling it a methodological uncertainty, hitherto unexpressed in the literature.

    the next step is to push down through the data to source data
    ( its not ‘raw’, there is no such thing as raw data) and account for all the choices and uncertainties there. Some are unaccounted for in final statements about the certainty of the temperature record. in the end, folks will find that it’s warmer now than it was in 1900. Significantly so, but less certain than is generally expressed. Same means, wider CIs. How that plays out in the various studies that use the temp record ( reconstructions, attribution, hindcast skill, forecast skill) is TBD. The science won’t change, just our certainty.

  21. RE: steven mosher (Comment#47832)
    “The science won’t change, just our certainty.”
    Regarding your comment about the effects of coming to a better understanding of the accuracy of the temperature records: the uncertainty bars should narrow if we determine that the data can be cleaned better – maybe not by batch processing, but by individual tweaks. The uncertainty of the global trend would be in its interpretation if it turns out that there is a significant regional variation – as several authors have suggested for the Antarctic region and, maybe, large land masses. What a global average or trend actually means, other than being a mathematically derived number, is starting to bother me. Is the trend showing a warming of winter, but not summer? Night but not day? Water but not land, or the other way around?

    As to the 30/70 split of land/water on the planet, if the 70% water has a 0.12°C/decade warming trend, and, because of the improperly corrected UHIE (and other physical causes), the 30% land has a 0.18°C/decade rise, the global average is 0.11°C/decade. If, in fact, the seas reflect the better average, we would have a 0.132°C/decade global average. If the sea is closer to “reality”, then we have a 15% error which, by IPCC water vapour feedback mechanisms, is a 45% error (for a 3X feedback) going forward. The errors compound.
    The assumption that putting all the data into the mix gives a reasonable average is based on a randomness of error, the pluses cancelling the minuses. So far the work by not just Watts but others who display individual station histories, not just stews of stations, shows that there is NOT a random distribution of “errors” or adjustments, but a pronounced early cooling/warming bias.

    Adjusted and merged global data is apparently handled in a systematically justifiable way. A moderated temperature history would not invalidate the modeling done but stretch out the time frame for the end result. Which then diminishes both the sense of crisis and the strength of the alleged primary driver, CO2 and feedback mechanisms.

    The one unquestionable component in all these discussions is the rise of pCO2. Any work done that reduces the global (night and day, winter and summer alike) increase in temperatures brings the importance of pCO2 into question. Models must first match history and then, as time goes on, the present. Any work that questions the global nature and quantity of the temperature data must, perforce, bring into question the APPLICABILITY of the models on which global warming alarmism is based.

    One might say that I argue about trivial changes that might be involved. But I counter that the current temperature rise of 0.7–0.9°C over the last hundred years, within the context of global temperatures, their daily/seasonal variance and yearly variability, IS a “trivial” amount. Only statistical analysis is capable of detecting that this change is “anomalous”. We suggest that the trend is not trivial in its effects, and perhaps that is true. But how much of a variation in temperature records and time/locations would it take to sink pCO2 as the primary culprit? I suggest, not much. Is the global temperature trend, in fact, a value that tells us very much, or lets us with confidence tell people what it means?

    The models curve match the dataset and project similarly and consistently. But how sensitive are they to data values, how much regional variability do they fail to reveal (and so cover up inappropriate assumptions)?

    The hardest aspect of climate change is not in the calculations but in collecting the necessary real-world data that lead to models and can be used to verify the model’s usefulness. Climbing mountains tells you that many routes get you up, but there are a lot that take you places you should never have gone.

  22. Mosher says

    The assumption that putting all the data into the mix gives a reasonable average is based on a randomness of error, the pluses cancelling the minuses. So far the work by not just Watts but others who display individual station histories, not just stews of stations, shows that there is NOT a random distribution of “errors” or adjustments, but a pronounced early cooling/warming bias.

    They have not demonstrated anything. They have done a lot of arm waving, made some lunatic conspiracy theories, presented a lot of errors, asked a lot of ‘what if’ questions, not actually done their homework, but leave people like Zeke to do it for them, resulting in their ‘questions’ being shown to be wrong, but they have not actually demonstrated anything.

  23. bugs (Comment#47839)
    July 3rd, 2010 at 6:01 pm

    Zeke has also produced this chart of the adjustments done to Raw land temperature records.

    http://rankexploits.com/musings/wp-content/uploads/2010/03/Picture-114.png

    [Technically, someone needs to see if the Raw GHCN record is indeed the actual raw unadjusted data, which I suspect it is not].

    Now what adjustments have been done to the Ocean temperature record? Well, we don’t really know that, but I can tell you what they were going to do with the new HadSST3 (before climategate effectively derailed the effort). Here is a chart showing the proposed change from a draft paper. In the top graph, the red line is the current HadSST2 and the green line is the proposed HadSST3. [If you remember a certain climategate email from Tom Wigley to Phil Jones about HadSST3, these lines will make sense: except Jones, Rayner, Kaplan, Reynolds and Brohan were proposing to adjust the record even more].

    http://img16.imageshack.us/img16/1205/hadsst3.png

  24. Doug Proctor (Comment#47835) July 3rd, 2010 at 3:21 pm

    I think you have failed to understand my point. I’m saying absolutely nothing about CO2. I’m noting this. More than two years ago skeptics and others (I’m not a skeptic) raised a whole host of POTENTIAL issues with the way that the GTI was calculated.
    Issues about the source data. Issues about adjustments. Issues about how the average was calculated. Data, adjustments, and the method to compose a final number.

    There are OTHER tangentially related issues to this topic. However, over the past 2+ years the public has had the ability to review the code, run the code, and construct their own methods.
    At least two of these methods are novel (Nick Stokes and Roman/Jeff) while the others (Chad, Tamino, Zeke) can be seen as ‘replications’ of described methods. The result of this effort is a clear and convincing demonstration that the METHODS of CRU and GISS and NCDC are not biased.

    Now, you may take issue with the whole notion of a GTI, but if you grant that such an average is meaningful (some don’t), then you have to conclude that there is nothing substantively wrong with the WAY GISS calculate their average or the way CRU calculate their average.

    Don’t confuse this question with questions about adjustments, or UHI, or coverage. Don’t confuse that conclusion with conclusions about .6C or .7C being important. I am making an observation PURELY about the methods. Nothing more.

  25. Bugs:

    “Mosher says
    The assumption that putting all the data into the mix gives a reasonable average is based on a randomness of error, the pluses cancelling the minuses. So far the work by not just Watts but others who display individual station histories, not just stews of stations, shows that there is NOT a random distribution of “errors” or adjustments, but a pronounced early cooling/warming bias.”

    They have not demonstrated anything. They have done a lot of arm waving, made some lunatic conspiracy theories, presented a lot of errors, asked a lot of ‘what if’ questions, not actually done their homework, but leave people like Zeke to do it for them, resulting in their ‘questions’ being shown to be wrong, but they have not actually demonstrated anything.

    ****************************************************************
    Can you please quote the right person? I said no such thing. How could you make such a fundamental error? Don’t answer that. I know.

  26. Bill,

    WRT the SST data. I think folks would do well to press on down to the BOTTOM of GHCN data. Quagmire ahead.

    Then the SST stuff, or you can go start down that path and put resources together.

  27. After the scam mosher tried to play with me re the parker stations, there is nothing he can say that is credible… especially when it comes to the temperature record or anything in that regard… anything he offers should be considered as completely phony and utter bs

  28. steven mosher said “All those decisions are defensible, but different choices lead to slightly different answers. I’d account for this by calling it a methodological uncertainty, hitherto unexpressed in the literature.”

    It is expressed in the literature, but referred to as ‘structural uncertainty’; see for example:

    http://journals.ametsoc.org/doi/abs/10.1175/BAMS-86-10-1437

  29. Can you please quote the right person? I said no such thing. How could you make such a fundamental error? Don’t answer that. I know.

    my bad.

  30. Mike

    “After the scam mosher tried to play with me re the parker stations, there is nothing he can say that is credible… especially when it comes to the temperature record or anything in that regard… anything he offers should be considered as completely phony and utter bs”

    Right Mike.

    First it was the Peterson Stations, not the parker stations.

    Second, you failed to write carefully or read carefully. Kinda like Bugs. You made a claim that ALL but ONE of Peterson’s stations had issues with TREES. I looked up Peterson’s stations. Posted Google views of the lat/lon reported by NCDC. As I said then, post up the evidence to support the claim that all but ONE are affected by trees. TREES. Don’t go changing the conditions. You made a claim. I posted up the best evidence I could find. There might be better evidence. Go ahead, provide it. Until then, you are making a claim without supporting evidence.

  31. Ron, there’s an obvious problem with the distribution in that analysis (leptokurtic).

    As I’ve said elsewhere, “The high amount of kurtosis itself is also a bit of a warning sign—usually it is an indicator that non-random processes are involved in the observed residuals. (Think about the extreme example of histogramming a sine wave… you’d get a symmetric distribution, but something that is very non-gaussian.)”

    I’ve not seen anybody who’s embraced this analysis who has addressed the implications of a non-normal distribution to the adjustments. In fact the non-normal distribution comes from this:

    Roman’s plot of adjustment versus time.

    Plotting the residuals over time is a standard procedure. The fact that gilest didn’t do it, nor react to the warning sign of the leptokurtosis, was a bit of a red flag to me by itself.

  32. Carrick, thanks for the pointer. I’ve been working through the script to reproduce gilest’s results. Roman’s is interesting too, and now I have an additional idea to test… 😀

  33. Carrick, thanks for your comment. NicL showed the same thing in his analysis at tAV. Craig Loehle also posted at tAV musing about detection mechanisms in the adjustment algorithms which may account for these patterns. It is something which should be revisited at some point.

  34. Thanks Layman. There is also this guest commentary by Craig at tAV.

    That said, systematic variations in adjustments over time don’t prove malevolent intent… in fact, if there is a systematic bias in the original data, the process of removing that bias will produce systematic variations in the adjustments.

  35. Ron, the issue WRT adjustments has more to do with the lack of any accounting of error propagation.

    I hate repeating it, but take TOBS. The TOBS adjustment is required. However, the SE of prediction of TOBS is higher than the sampling error that Jones, for example, accrues. Simply, according to Jones, the error in a monthly mean is roughly .03C. If you apply a TOBS adjustment to the monthly mean, the adjustment may not bias the mean, but it would appear to increase the error, since the error of prediction of the TOBS algorithm is larger than the error due to sampling.
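    A minimal sketch of that error-propagation point (the TOBS prediction error below is a made-up number, purely for illustration):

        import math

        se_sampling = 0.03  # C; rough sampling error in a monthly mean, per Jones
        se_tobs = 0.05      # C; hypothetical standard error of the TOBS prediction

        # If the two error sources are independent, they combine in quadrature,
        # so applying the adjustment widens the uncertainty even if it removes the bias:
        se_combined = math.sqrt(se_sampling**2 + se_tobs**2)
        print(round(se_combined, 3))  # 0.058, dominated by the TOBS term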

  36. Now you’re lying, mosher… you posted up 51 stations then refused to give the station ID’s… and it’s no wonder you didn’t provide the station ID’s… because your 51 stations turned out to be duplicates of 13 stations.
    Those 13 stations were cherry picked for geography… since the main issue being discussed was the presence of trees, you went right to the arid parts of the country.
    On top of it, most of those station lat/longs you directed readers to were not the locations during the relevant years.
    So I’ll repeat… mosher has no credibility when it comes to temperature monitoring discussions after the scam he tried to pull on me

  37. Mike

    I explained exactly what I did.

    You can do it too.

    Go to the CA thread listing the stations. I gave the link

    Copy the name.

    Look the name up at NCDC.

    They will give you the lat/lon.

    Here is what you shouldn’t do: don’t try to copy all the links into a comment. It will get hung up in the spam filter, and if you then try to cut it up and repost you’ll get some duplicates.

    You claimed that only ONE of the stations was FREE of issues with TREES.

    I told you exactly what I did. You can go duplicate it. Go to CA,
    get the station names. Look them up at NCDC. You get the lat/lon.

    IF you have a better fix on the lat/lon, then post that. But what you did not do was simple: post your evidence that backs up your claim that ONLY ONE of Peterson’s stations was free of problems with TREES.

    And again, I explained the issue I had posting them all in one post (too many links).

    Finally, you are sore because

    you claimed that ONLY ONE of Peterson’s sites was FREE of issues from TREES. And I was lucky enough to pick a site in Arizona (first letter of the alphabet should give you a clue) that was treeless. After that it made perfect sense and was quite good fun to visit the other sites in Arizona, New Mexico, and Utah and find other sites that appear (on that evidence) to be clear of TREE issues. Maybe you have other evidence. Maybe not. Who knows. Anyway, post up your proof for your claim. OR, as I explained, you can post up the lat/lons that you prefer and people can see for themselves.

    To recap:

    You can follow my instructions or supply your own. The claim:
    Only ONE of Peterson’s stations is free from problems with TREES.

    So, ya, post up the proof or the lat/lons of all Peterson’s stations.

  38. We are cooling, folks; the ram has pulled the wool over his own eyes.
    =============

  39. The work by gg/gilest has been totally superseded by that of Zeke, etc. gg looked at how the trend at any given station is affected by NOAA adjustments. That was an easy first step. But it’s just a stepping stone to the big picture, which is how the overall global record is affected by NOAA adjustments. That is what Zeke has done, and it’s what matters in the end.

    By the way, I don’t think Roman at all understood what gg did. This was partially due to some ambiguous phrasing used by gg. I think Roman thought gg was looking at individual adjustments, by themselves. He was not. He was looking at the effect of all adjustments at any given station on the trend at that station. So something big got lost in translation there, between what gg did, and how Roman replied.

    But you can see how things are progressing. Zeke shows the impact of adjustments in the big picture. WUWT continues picking out individual stations, saying they don’t understand the adjustments made there (without any effort at investigating or understanding the methods used), implying some malfeasance and a large impact on the big picture, and trying to extrapolate from there to saying the whole thing is unreliable. But is there any attempt by that crowd to actually see the influence on the big picture? I haven’t seen it.

  40. Oh, and I’ll add that gg’s initial analysis, the difference in trend over the lifetime of each station, wasn’t the most informative. Better to do the same thing over better defined time periods, and if I recall correctly, Nick Stokes jumped in and did that.

    But again, it’s all superseded by Zeke’s work on the global means after spatial gridding.

  41. I understand that Zeke’s (and others) work demonstrates the adjustments in the big picture are negligible.
    .
    I’m slowly working my way in the other direction, to a better understanding of the individual adjustments and especially homogenization. gg’s was a first step. Roman’s was better. I’ll wait for Stokes’ take on it.
    .
    Question: Should we homogenize GSOD (assuming that ISH is not already)? Having the unhomogenized data is good, but there is value in homogenization. I need to better understand why that is and how to do it.

  42. Ron

    If I understand what Roman did, it was not better; it does not add clarity. Again, I think this is due to a basic misunderstanding of what gg did. Look for Nick Stokes’ comments at gg’s; his approach keeps gg’s basic idea but limits it to certain time periods of interest.

    I’d encourage anybody to dip their toes into the homogenisation pool, to explore the issues therein. The easiest of course would be for you to use the GISS code you’ve got and apply the GISS UHI adjustment. All you’d need would be to add those stations to the inv file and add the satellite brightness for each location. The rest of the inv file doesn’t matter for GISS, I don’t think.

    The second level would be to try more sophisticated methods like that of NOAA. For that, I’d read Menne (2009) very carefully, on pair-wise homogenisation. This method will be applied to GHCN in the next release, so you could wait for that paper and release, too, in case they change the method for the GHCN. You could then write your own code that did something like this, or something different if you think something else is better.

    What you will have trouble doing is applying homogenisation that requires a ton of historical metadata, because it just isn’t available all in one place. You could try this on a country-by-country basis to the extent those countries publish station histories, but even those will be necessarily incomplete, and anyway, statistical methods will remain inevitable.

    It’s very outdated now, but I highly recommend this review paper to anybody starting to explore the ideas of homogenisation:
    Peterson et al., “Homogeneity adjustments of in situ atmospheric climate data: a review,” Int. J. Climatol. 18: 1493–1517 (1998).
    Just keep in mind that it is outdated, but you have to start somewhere.

  43. This seems to have missed the point of both Karlen’s objections and Eschenbach’s post. Eschenbach specifically notes the extent of the NEU area the graphic is drawn from and Karlen notes “[i]t is also difficult to find evidence of a drastic warming outside urban areas in a large part of the world outside Europe. However the increase in temperature in Central Europe may be because the whole area is urbanized (see e.g. Bidwell, T., 2004: Scotobiology – the biology of darkness. Global change News Letter No. 58 June, 2004).

    So, I find it necessary to object to the talk about a scaring temperature increase because of increased human release of CO2. In fact, the warming seems to be limited to densely populated areas.”

    Karlen didn’t argue that the IPCC graph was wrong because it was of Fennoscandia, but that it was wrong because Fennoscandia didn’t match the trend noted in the graph (that it’s the graph for the region which includes Fennoscandia is one reading, but Karlen actually seems to be talking about the global graph). Karlen refers to the Nordic area not because it is where he assumes the graphic refers to, but because he has already done a survey of that area; he also refers to Africa & Australia. Karlen (and Eschenbach) are concerned that what the IPCC report calls ‘global’ is in fact localized to urban areas and perhaps even an artifact of which stations are included in the analysis. We’ve seen over and over again that station choice can make a difference in the trend observed – the question is whether the stations chosen are representative of a global trend or only an abnormality which applies only to those stations.

    The proper refutation would be to disaggregate the NEU temperature records which show warming significantly greater than that of the 1930s and see that those are widely distributed and representative of the NEU area and not primarily in urban/rural/mountain/coastal/whatever areas, and that the Fennoscandian lack of warming is an abnormality in the NEU trend.

  44. As far as I can tell, gg didn’t even understand the implications of what he had done. His core argument:

    How do you prove that? Not by looking at the single probes of course but at the big picture, trying to figure out whether adjustments are used as a way to correct errors or whether they are actually a way to introduce a bias. In science, error is good, bias is bad. If we think that a bias is introduced, we should expect the majority of probes to have a warming adjustment. If the error correction is genuine, on the other hand, you’d expect a normal distribution.

    (Emphasis added.)

    Actually both of these arguments are just wrong.

    In order to get a warming bias, you can cool temperatures in the first half of the century and warm them in the second half. E.g.,

    T_adj(t) = T_orig(t) + alpha * (t-t0)

    t0 = (tmax+tmin)/2, where t is time and [tmin, tmax] is the time interval. T is of course temperature. This clearly produces adjustments that are symmetric around t0… in fact the process of baselining pretty much ensures the distribution will be nearly symmetric. It is easy enough to change this to something that would produce a quasi-normal distribution (just add a noise term to the adjustments, for example). Nor is a normal distribution even expected from the adjustments:

    If there are secular variations in bias over time (this is expected) you don’t get a normal distribution, which is what gg found, namely a highly leptokurtic distribution (peaked at the center). Not only is this not disturbing to me, it’s expected.
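    A minimal simulation of the example above (alpha is a made-up number): the adjustments histogram as symmetric around zero even though every station’s trend is inflated by exactly alpha.

        import numpy as np

        alpha = 0.002                    # C/yr of deliberately introduced trend bias
        t = np.arange(1900, 2001)
        t0 = (t.min() + t.max()) / 2.0   # midpoint of the record

        adj = alpha * (t - t0)           # T_adj(t) - T_orig(t)

        print(adj.mean(), np.median(adj))  # both ~0: a symmetric distribution
        print(np.polyfit(t, adj, 1)[0])    # ~0.002: the trend added to the station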

    As I see it, the real point is straightforward. There are biases in the data that need adjustment. Many of these biases vary over time in a systematic fashion, and as a result any corrections to these biases should also vary systematically in time. This systematic variation will yield in general a non-normal distribution, which is actually what was found.

    The other point that is interesting is that the magnitude of the adjustments does not “swamp” the observed climatic variation. There is just no way to ascribe all or most of the observed warming to manipulation of data, a point gg and I both agree on, and one that is further fortified by Zeke’s analysis and other prior peer-reviewed studies.

    Roman’s plot of the residuals over time is a demonstration that the adjustments are systematic and further explain the leptokurtic nature of gg’s distribution. Quite obviously this is highly explanatory, namely it explains the odd shape of gg’s distribution.

    Why anybody thinks that “doesn’t add clarity” is beyond me.

  45. Carrot, I’m not sure where you are coming from in suggesting that Roman didn’t understand gg’s post or that his response does not add clarity. You mention Nick – yet I do not see that Nick has any problem with Roman’s attempt to portray adjustments over time (to ‘clarify’ gg’s post) at all. On the contrary, Nick pointed out that gg’s work shows a global trend in the adjustments of 0.0175C/decade over the period 1905 to 2005. Roman’s shows 0.023C/decade over the same period. He also suggested that a weighting factor for the number of stations in a given year may actually make up the difference.

  46. Just how useful are all these supposedly ‘independent reconstructions’ when we have no real idea of how accurately global temperature is being measured?

  47. What I’m saying is that when I read Roman’s page, it looked like he thought gg was making a distribution of individual adjustments, and responded in kind, making the whole thing much, much more complicated than it needed to be.

    That is not what gg did.

    gg made a distribution of how trends at each station changed, due to all adjustments at any given station.

    Big difference. And a simple/elegant way of capturing the exact effect all the sceptics were worried about – adjustments downward followed by adjustments upwards. But because so many people misunderstood what gg did, this was lost upon them.

    In any case, the definitive answer comes from Zeke, having done the gridding. (Well, there was also a relevant graph in a paper from Peterson over a decade ago, but blog science has to go through the process of re-inventing the wheel). This gg/Roman stuff is all old hat, now that much more direct analysis is at hand.

    It took Zeke’s analysis to properly show the actual effect on the big picture. What gg’s analysis showed is that a station like Darwin (with strong upwards adjustment) was something of an outlier, with opposite stations (strong downwards adjustments) about equally prevalent. gg’s analysis was not showing that adjustments have no effect. It was showing that it’s highly unlikely that adjustments are part of some sort of intentional fraud, given that distribution, and that Darwin has to be seen in the context that it’s unusual.

  48. Carrot, I get that gg plotted a histogram of individual residual station trends, but what was it that gg was trying to show with his demonstration? I submit that gg was attempting to show that GHCN adjustments were random and offsetting with no + or – bias.

    Roman was responding to gg by pointing out that the histogram was not sufficient to prove this case. Just because the station residual trends appeared normal and 0 centered (leaving leptokurtosis aside) does not mean that adjustments were random. The only way to properly look at this question was to look at adjustments over time.

    I found gg’s post (and his responses in comments) very confusing and contradictory. He argued that the histogram is – for all intents and purposes – normal and 0 centered. That it was only slightly off center with an average adjustment of +.017C/decade. He argued that this was insignificant compared with the global trend over the last century of .2C/decade (yes, that is 2C for the century). IOW the whole argument gets turned on its head when one recognizes that he badly overstates the GW trend and therefore fails to recognize just how significant his average adjustment finding actually is.

  49. Carrot, I get that gg posted a histogram of station adjustment trends. But what was the point of gg’s post? I submit his purpose was to demonstrate that GHCN adjustments are random, offsetting, and therefore unbiased.

    Roman’s response was to point out that gg’s histogram is insufficient for this. Just because the distribution of station residual trends appeared to be normal and 0 centered (leaving lyptokurtosis aside) does not mean that adjustments were random. Roman’s graph showed that these adjustments weren’t random (I should state that I recognize that one can’t conclude from this that the adjustment patterns over time were improper).

    What’s more, gg contradicts his own analysis with a bad mistake. Under his histogram he states the following:

    Not surprisingly, the distribution of adjustment trends is a quasi-normal distribution with peak pretty much around 0 (0 is the median adjustment and 0.017 C/decade is the average adjustment – the planet warming trend in the last century has been of about 0.2 C/decade). In other words, most adjustment hardly modify the reading, and the warming and cooling adjustments end up compensating each other

    IOW he suggests here that his finding of +0.017C/decade average adjustment is insignificant and therefore his histogram depicts a normal distribution that is – for all practical purposes – 0 centered and unbiased. By badly overstating the GW trend he fails to recognize just how significant and off center the +0.017 value actually is.

  50. Among the many typos above, the word “lyptokurtosis” should read “leptokurtosis”.

  51. Layman

    Carrot, I get that gg posted a histogram of station adjustment trends.

    Good, because judging from the responses at the time, this was lost on many. And from the text of his comment at gg’s, and the substance of his response, I thought Roman was among them. And thus, I think Roman’s response was unnecessarily complicated, and lost the simple elegance of gg’s method.

    But what was the point of gg’s post?

    In my opinion, to show the big picture. We have at WUWT a constant obsession with highlighting individual stations that they find to be noteworthy in some way, and then the implication that the whole record must be messed up. At the time, the latest example was Darwin. In particular, Darwin was claimed as a “smoking gun” for fraud. A quite serious charge that got some widespread attention; it got out beyond climate blogs into general political blogs.

    Darwin was a station with some sizable adjustments. Without getting into the reason why those adjustments were made, gg wanted to see whether that was representative of the whole. His histogram shows that it simply is not. Darwin is out on the tail, and there is just as much of a tail on the other side – stations where the trend was lowered strongly by adjustments. So just by looking at his histogram, we can see that it is rather unlikely that adjustments like Darwin are part of some sort of crude fraud to enhance global warming. The histogram gives you context, the big picture – something you rarely get from WUWT.

    That, to me, is what gg was trying to show, and indeed did show. His analysis was not necessarily showing that adjustments were exactly random, or exactly self-canceling, though it is close (less so in the US). Its purpose was to show Darwin amidst the context of the whole, to show that adjustments just don’t look like the crude fraud to invent global warming, as has been often claimed (implicitly or explicitly) by the sceptics.

    IOW he suggests here that his finding of +0.017C/decade average adjustment is insignificant and therefore his histogram depicts a normal distribution that is – for all practical purposes – 0 centered and unbiased. By badly overstating the GW trend he fails to recognize just how significant and off center the +0.017 value actually is.

    He didn’t badly overstate it at all; he made the mistake of not lining up time periods.

    If you take the land-only trend since the mid-70s, you get 0.2 C/decade. Just what he said. Land-only is what’s relevant here; we aren’t really looking at oceans here. His mistake was to not limit his analysis to that time period when making that comparison.

    That’s what Nick Stokes came and cleaned up. Look at the change in trend due to adjustment, for better defined time periods.

    Of course, both gg’s and Roman’s estimates of the bias are made completely and totally irrelevant by Zeke, who actually calculated that bias directly. Though as I mentioned before, what Zeke did (comparison of spatially averaged anomalies using unadjusted and NOAA-adjusted data) appeared in the literature over a decade ago. Perhaps unfortunately, plots using only unadjusted data are not widely published; I think the guys at NOAA should start doing so, so people can see what the unadjusted data actually do. I think those guys are catching onto this, as they made a point of using the unadjusted data in their response to Watts’ surfacestations heartland publication, in Menne 2010.

    Of course, none of this goes into the details of how the GHCN adjustments work, and whether that method is any good. That’s a separate question from simply quantifying what effect the adjustments have.

  52. DA: … when we have no real idea of how accurately global temperature is being measured?

    Interesting point. How do you think someone should go about determining the answer to: “how accurately is global temperature being measured?”

  53. I assume, or I hope, he means “how accurately temperatures are being measured at each weather station?”.

  54. You can’t lie your way outta this mosher… I did look up the stations you provided (they are no longer linked by CA as you claim, I had to cross the names on his list with your lat/longs), that’s how I knew you were using screwed up lats and longs, and that your dates were wrong most of the time… I told you long ago that you are using coordinates that were estimated from a topo map by the station manager, that’s why you had so many problems when you set up the google earth program… that’s how I knew you were aware of what you were doing, you have dealt with and complained about this problem on many different occasions.
    Next, your 51 links were not organized as if they were caught up in a spam (or whatever) catcher. I’ve dealt with that too many times and know better… they do not repeat 4 or 5 times in different order with different comments (maybe twice, sure, but 4 or 5 times???). You used 51 links which were duplicates of 13 stations, period… that ol’ big lie strategy.
    And that’s why you did not want to provide the station ID’s, because I tracked down the first three in Arizona and found them all incorrectly located by you and all within the distance > 4 X height rule from both vegetation and structures… hell, two of those were in building complexes with significant tree / wind blocks where a UHI canopy would be expected to regularly exist.
    Your claim that you selected the stations alphabetically (Arizona starts with an A) is pretty ridiculous. Go to Peterson’s station list and notice that Arizona is in group 3 (just like they are on Steve Mc’s list, which you have obviously looked at), after Alabama and Arkansas. And let’s not forget that Utah and New Mexico are nowhere near the beginning of the alphabet. However, your stations are pretty much from arid areas since the primary issue was trees (but you keep trying to forget that I was discussing structures as well), i.e., they were cherry picked to try to prove your point… yeah, let’s skip Alabama and Arkansas, too many trees there.
    Now, if you want the information on obstructions and exposure then you’ll have to call NCDC as they have removed that information from the metadata file, just like they did the information for the observer… again. Unlike the lat/longs you provide (most with the wrong dates), that information was observed by the station manager on site and recorded on the B44 forms before being entered onto the MMS.
    It’s the sure fire sign of an AGW advocate… lie and lie… when caught hide the data… then lie and lie some more.
    And by the way, none of your 13 turned 51 stations were the one without the wind problem… but they all had wind problems during the relevant time frame.

  55. Carrot, since gg’s 0.017C/decade figure relates to the time scales of his station trend histogram, he explicitly (and correctly) related this figure to the century warming trend, which is a comparable time scale. Even land only – he simply got the number wrong.

    Nick’s adjustment trend post-1950 was much higher at 0.0276C/decade. Still significant compared to the .2C/decade warming you speak of.

    I’m not going to argue this wrt Zeke’s analysis with you, other than to say that I don’t see how Roman’s analysis reconciles with it yet (not to say it doesn’t 🙂 )

  56. I’m not going to argue this wrt Zeke’s analysis with you, other than to say that I don’t see how Roman’s analysis reconciles with it yet

    One piece of that puzzle: Roman’s analysis has no spatial component.

  57. “You can’t lie your way outta this mosher… I did look up the stations you provided (they are no longer linked by CA as you claim”

    Mike. Learn to read. I did not claim that CA LINKED THEM. I WROTE

    “Go to the CA thread listing the stations. I gave the link
    Copy the name.”

    I gave you the LINK to the CA page. The CA page NEVER linked to the sites. “I gave you the link” does not mean “CA linked to them”.

    They appear in a post. Then you have to COPY THE NAME, switch to the NCDC site, paste in the name, copy the lat/lon, paste into Google Maps, and copy the link from there. It was quite time consuming.

    Anyways, when you’re ready to post up the evidence that ONLY ONE of Peterson’s sites is free of the effects of trees, we can resume.

    If you are going to pick a site randomly, as I did, then you can employ a bunch of different methods. I picked A, to start a search at the front of the alphabet, then Z to start at the last of the A’s. And as I explained, once I found the FIRST site that didn’t have any discernible effect from trees, looking at other arid sites was good hunting practice.

    So, anytime you want to post up the EVIDENCE you used to make your claim, I’d be glad to look at it.

    Like how you determine there is a wind problem. Probably the same way you determined that all but one had a tree problem.

    When you have a list of lat lons post em up.

  58. Layman

    Carrot, since gg’s 0.017C/decade figure relates to the time scales of his station trend histogram,

    But therein lies the problem, and this is where his analysis needed cleaning up. There is no fixed time scale for his original histogram. It’s done over the lifetime of each station, whatever that might be. Some stations might be 1885 to 2010. Some might be 1940 to 1990. Some might be 1980 to 2006. All sorts of station lifetimes are in there. So the exact number you get from his analysis doesn’t have much meaning.

    Add to that the fact that there is no gridding, which is the missing piece that Zeke put in.

    For example, stations in the US tend to have adjustments that have non-negligible effects on the trend. If your data set is overpopulated with US stations, as the GHCN is, then that will bias the overall result.

    If you want to see the actual effect on the global, then you must grid. Which is why the estimates of gg and Roman must be left aside as inferior, at this point. Why bother with crude estimates when you have the actual number you are looking for?

  59. Hi Zeke. nice work.

    Difference in trends?
    1900-present
    1979-present (to compare with satellite)
    1998-present

    Should be illuminating. Cherry picked the last one, of course.

  60. Agreed Carrot. His average adjustment number does not have any meaning therefore the claim that his histogram is normal, 0 centered, with random and offsetting errors through time does not have any meaning. Nor does his claim that the average adjustment is insignificant when compared with the century land only warming have any meaning.

  61. Mosh,

    Looks like the v2.mean_adj file on the GHCN server hasn’t been completely updated since 2005 (there are only ~150 stations available for 2006-2009), so it’s not really fair to compare the two after that. Here is the data from 1900-2005:


    http://i81.photobucket.com/albums/j237/hausfath/Picture460.png

    1900-2005 trend (C per decade):
    v2 = 0.062
    v2_adj = 0.088

    1979-2005 trend (C per decade):
    v2 = 0.290
    v2_adj = 0.307

    1998-2005 trend (C per decade):
    v2 = 0.098
    v2_adj = -0.152

    Take-away seems to be that adjustments have a pretty big impact pre-1950, not so much in recent years.

  62. You still can’t lie your way out of this mosher… CA used to link the spreadsheet that Peterson sent Steve Mc… that spreadsheet is no longer there in the same state.
    And you have finally walked yourself where I was waiting for you to walk yourself to… which is that you did not look up your stations by ID number, you looked them up by name. You know just as well as I do that there are many stations with the same name in the same state… so you have to have the ID number if you want to look at the correct station. So in reality, you don’t even know if those are the Peterson stations you linked to. So, you’re not just a liar, you’re an incompetent liar.
    However, when I get through with my trip and get to the casa computer I’ll pull up that old spreadsheet from a few years ago and get the correct station ID’s just to dig it in a little bit more.

  63. Layman

    Agreed Carrot. His average adjustment number does not have any meaning therefore the claim that his histogram is normal, 0 centered, with random and offsetting errors through time does not have any meaning. Nor does his claim that the average adjustment is insignificant when compared with the century land only warming have any meaning.

    Now you’re overshooting. The quantitative statistics of his histogram can’t be used at face value, because of these issues. But the shape of the histogram has a very strong qualitative meaning, especially after fixing the time period. I bet you that people who were impressed by Willis’s Darwin article, and his claim that there was fraud in the adjustments, would never guess that the histogram would look like that. What lacks meaning is Willis’s histrionics over there.

    But again, this is all outdated, superseded, in the past. gg’s analysis was easy and elegant, and gave a nice quick idea (especially once Nick cleaned up the time periods), but Zeke’s analysis replaces all that stuff.

  64. carrot,

    I also have a quick script to calculate the trend for all v2.mean and v2.mean_adj stations over a given period of time (from start date to end date), if anyone has a particular period in mind for which you want to see the distribution of adjustments.
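    In outline, a sketch of that kind of calculation (simplified; it assumes annual means per station have already been computed from each file as station_id -> {year: mean}):

        import numpy as np

        def station_trend(annual, start, end):
            """Least-squares trend in C/decade over [start, end] for one station."""
            yrs = [y for y in sorted(annual) if start <= y <= end]
            if len(yrs) < 10:  # arbitrary completeness cutoff for this sketch
                return None
            return np.polyfit(yrs, [annual[y] for y in yrs], 1)[0] * 10.0

        def adjustment_deltas(raw, adj, start, end):
            """(adjusted - unadjusted) trend for stations present in both sets."""
            out = []
            for sid in raw.keys() & adj.keys():
                tr = station_trend(raw[sid], start, end)
                ta = station_trend(adj[sid], start, end)
                if tr is not None and ta is not None:
                    out.append(ta - tr)
            return np.array(out)  # histogram this to see the adjustment spread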

  65. I’ve never been a fan of people drawing linear trendlines through time periods like “1900-2005”.

    Anything looks linear if you take a small enough chunk of data, but this isn’t a small enough chunk.

  66. Zeke,
    use that script for whatever time period, I don’t care; find the station with the most downward adjustment, and write the anti-Darwin blog post. Accuse the NCDC of fraud. Throw up your hands, wondering what on earth could justify such a strong cooling adjustment.

  67. more seriously, why not make histograms for the same periods you considered above – 1900-2005, 1979-2005, 1998-2005, and maybe toss in 1900-1950.

    I wonder how close the median or mean adjusted trend in the histogram would be to the overall gridded result you give. It can’t be that far off, but I don’t know how to bound how far off it could be.

    “You still can’t lie your way out of this mosher… CA used to link the spreadsheet that Peterson sent Steve Mc… that spreadsheet is no longer there in the same state.”

    MikeC. You really have outdone yourself with your failure to read.
    I never claimed that CA linked to the sites. I POINTED you at the CA page that contained the names:

    here is my comment from June 1st.

    steven mosher (Comment#44556) June 1st, 2010 at 9:59 pm
    “I pulled that metadata for each station in Petersons UHI study… and all but one of his rural stations had tree problems…”

    Thats according to Mike C.
    So lets check…
    I picked some peterson site randomly
    https://mi3.ncdc.noaa.gov/mi3qry/locationGrid.cfm?fid=1422&stnId=1422&PleaseWait=OK
    https://mi3.ncdc.noaa.gov/mi3qry/identityGrid.cfm?setCookie=1&fid=899&PleaseWait=OK
    https://mi3.ncdc.noaa.gov/mi3qry/identityGrid.cfm?setCookie=1&fid=921&PleaseWait=OK
    Google earth shots. hmm I see catcus. I see low shrubs. I see fields.
    Not so many tree problems.
    ***********HERE IT IS MIKE READ THIS*****************
    For peterson sites you can start here
    http://climateaudit.org/2007/08/03/petersons-urban-sites/
    *****************************************************************
    Then look them up. Then google earth. Then over to MikeC
    how come you said all but 1 peterson rural had a tree problem and the first three I pick don’t appear to… one might have some low trees to the east.
    Mike, if you want just send the lat lon for peterson sites and I can make google earth tours. Takes a few mintues to confirm what you said.. or disconfirm

    Here is the page I used and linked to:

    http://climateaudit.org/2007/08/03/petersons-urban-sites/

    here are my other comments, telling you what to do:

    steven mosher (Comment#44950) June 7th, 2010 at 3:45 am
    easy Mike, do what I did.
    go to the climate audit page.
    copy the name
    Go to NCDC. do the rest.
    Or post all the correct lat lons since you claim to have determined that only ONE station is tree free.
    Easy.

    *************************

    So, you have claimed that “I claimed the sites were linked in CA.”
    I never made that claim. I gave you the page where the sites are listed, in TEXT. Then I explained to you what I did:
    Copy the name, switch to the NCDC site, paste in the name, search.
    NCDC will return the lat/lon.
    Then you copy that into Google Maps.
    Then you hit the link button and you’ll get the links I provided to you.

    OR you can provide the lat lons and I will provide a google tour for you as I promised.

    It will help tremendously if you learn to read and follow directions. So, for the benefit of everyone, if you can point out the comment where I said that “CA linked to the Peterson sites”,
    it would help your case. Because as it stands, the record proves that you cannot read, or count the number of sites that are free from problems from trees.

    Cheers

  69. You might want to go back to the thread on the Bad Astronomy Blog to see what Lonny Eachus is saying about this rebuttal. He’s calling it a straw man, though he has demonstrated many times that he doesn’t know what the straw man fallacy is.

  70. Bill Illis,
    Thank you for that, I hadn’t seen that before. I never knew about the attempt from 1881, though that must have really been a stretch to call ‘global’. Can’t imagine that guy had truly global data.

  71. Re: Bill Illis (Jul 7 08:15),

    The concordance of the different reconstructions from about 1895-on is very striking in that figure. The significant departure of the latest trace (Brohan, 2006) from the earlier ones in the ~1875-1995 era is interesting, too. Presumably some compelling reasons for adjustment of numerous early instrumental readings were uncovered.

    It’s worth noting that such systematic post hoc upwards adjustments to the earliest records are evidence against the notion that the instrumental record has been jiggered to unduly emphasize historical warming. (These corrections have the opposite effect.)

  72. Now you’re overshooting. The quantitative statistics of his histogram can’t be used at face value, because of these issues. But the shape of the histogram has a very strong qualitative meaning, especially after fixing the time period.


    I think we are going to have to agree to disagree on this one. First of all, you can’t have it both ways – how can inferences be drawn from a value that has no meaning? As for “qualitative”, how can the shape of a histogram bringing together trends from undefined time periods have any “qualitative” meaning to infer there is no bias? The negative trends could come from different periods than the positive trends. Therefore you are quite right that it would take comparisons of different time periods – i.e., Nick’s cleanup method. And what happens with Nick’s cleanup method? Nick goes to great lengths to make it clear that it confirms Roman’s analysis, that the changes in adjustments through time using the two methods track each other.

    Roman points out the deficiencies of gg’s analysis. Nick’s analysis confirms it. How is it that Roman does not add clarity? How is it that it was “not better”? How is it that he fails to understand? There is nothing elegant about gg’s analysis; it is riddled with errors of fact, of logic, and of method. Corrections only confirm Roman’s criticism.

    I leave the last word to you if you wish because it will start to become repetitive (but it’s been fun 😉 ).

  73. sorry mosher, you still cannot lie your way out of this.
    Here again you practiced cherry picking. I said trees in that one sentence but in that post I also discussed structures.
    Then you posted three stations and I corrected you by doing research, came up with the correct lat longs and demonstrated that in fact all of those stations had problems with trees and structures. (not to mention that I showed your claim of random selection of the stations to be false)
    Your emotional and flamboyant response was to post up 51 stations which I discovered were really duplicates of 13 stations. Having been caught at that stunt you tried to lie your way out and blame it on the spam (or whatever) catcher… a problem, since it does not resemble the spam (or whatever) catcher problems we are all familiar with on this blog. Once again you cherry picked stations by choosing stations in arid areas. You claimed that those stations were selected by the alphabet, which was also shown to be false.
    After having been caught lying on these points it was also shown that you are incompetent because the stations you selected were from a name list rather than station ID, a real problem since most stations share names with several other stations. So once again, mr mosher, you are not just a liar, you are an incompetent liar.
    But don’t worry, I have Peterson’s list with station IDs on the house computer, and when I’m done with this trip I’ll post your stations with the station manager info on obstruction and exposure and show that, in fact, Peterson stations have wind problems which (among other problems) render his UHI study completely bogus.

Comments are closed.