Chad recently finished his own land temp reconstruction using raw GHCN data, joining the growing list of bloggers who have helped validate the fact that raw land temps are roughly in line with the land temp records provided by GISS, Hadley, and NCDC.
Chad’s method is similar to that of Nick Stokes and Jeff Id/Roman M: all of them use the Tamino/Roman approach of computing a monthly offset for each station such that, once the offsets are applied, the sum of the squared differences between station records is at a minimum. The same offset step is used both when combining multiple imods into a single wmo id and when combining multiple wmo ids into a single grid-cell anomaly.
His innovation was to use a land mask to weight each 5×5 lat/lon grid cell by its land area (instead of its total area). As he discovered, skipping the land mask produced the odd result that, in some years, more than 100% of global land area was being assigned a temperature. Globally the effect is small, introducing only a slight cooling bias, but it is particularly significant in the tropics.
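For anyone who wants to see the mechanics spelled out, below is a rough Python sketch of the two steps described above: one common way to set up the least-squares offset fit used when combining records, and a land-area-weighted average of grid-cell anomalies. It is purely my own illustration (not Chad’s, Nick’s, or anyone else’s actual code), and the data layout, the land_frac mask, and all function and variable names are assumptions.

import numpy as np

def combine_with_offsets(series, n_iter=50):
    """Combine overlapping records (rows of a records x months array, NaN for
    missing data) by finding one offset per record that minimizes the squared
    differences from the common monthly signal. The alternating update is
    simple block coordinate descent on that least-squares problem; the overall
    level is arbitrary, so the offsets are re-centered each pass."""
    offsets = np.zeros(series.shape[0])
    for _ in range(n_iter):
        combined = np.nanmean(series - offsets[:, None], axis=0)
        offsets = np.array([np.nanmean(row - combined) for row in series])
        offsets -= np.nanmean(offsets)
    return np.nanmean(series - offsets[:, None], axis=0)

def land_weighted_mean(cell_anoms, cell_lats, land_frac):
    """Average 5x5 grid-cell anomalies, weighting each cell by cos(latitude)
    times the fraction of the cell that is land (the land mask). Setting
    land_frac to 1 everywhere reproduces the total-area weighting discussed
    above."""
    w = np.cos(np.radians(cell_lats)) * land_frac
    ok = ~np.isnan(cell_anoms)
    return np.sum(cell_anoms[ok] * w[ok]) / np.sum(w[ok])

The same offset routine can be applied at both levels mentioned above, first to merge imods within a wmo id and then to merge wmo ids within a grid cell.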
If we compare temp reconstructions using similar methods (Nick’s, Jeff/Roman’s, Chad’s, and my own), we see that Chad’s reconstruction trends slightly higher over both the past century and the last few decades:
Similarly, looking at the trends and (non-autocorrelation-corrected) confidence intervals:
You can find all the land, ocean, and land/ocean reconstruction data from everyone who has tried it so far (as well as GISTemp, Hadley, and NCDC) here: http://drop.io/0yhqyon/asset/temp-comps-global-xls-2



One more – I love it.
Who all has taken the further step to do ocean so far? Enough for a land-ocean spaghetti graph yet?
Nick Stokes’ algorithm has mutated to being a bit more sophisticated than what you describe here, hasn’t it?
Also Zeke, do you know if anybody in this group is working on homogenisation yet? You were headed in that direction with the search for UHI, but I don’t think what you’ve put out on that so far lends itself to an obvious strategy for UHI correction.
Carrot,
I’m checking to see if there are any updated outputs from Nick; I was under the impression that most of his recent stuff was geared towards testing spatial correlation and visualization rather than changing the underlying method, but I’ll double check (or he will show up and correct me 😛 )
As far as homogenization goes, it’s definitely worth diving into, but it’s also pretty damn complex. I’m still trying to wrap my head around the Menne and Williams (2009) approach, and it would probably be better handled by someone versed in R (as it’s a real programming/modeling language).
Re: Zeke (May 19 15:02),
Zeke, I think that’s pretty much right. I’ve been distracted lately with some other things, but I’m planning to release V2 real soon, which will have the spatial capability. The time series treatment (as in v1.4) hasn’t changed.
I had thought about really getting into Menne 2009 as a summer project.. maybe it’ll be a fall project.
I think it’s going to expose some rifts, though. Some people (probably Mosher) will want a purely historical meta-data driven approach, with basically each adjustment made manually somehow or another. On the other end of the spectrum are fully automated objective methods that could run without any historical metadata at all (which I think is Menne 2009). Or you can be someplace in between.
Really, we need other countries to step up and build CRN networks like NOAA has, so we can have data that won’t need any adjusting.
OK then, I must have confused myself on what Nick was up to. Apologies.
I’m planning on working on homogeneity adjustments in the near future. I think the Roman/Tamino method would be useful for correcting for spurious step changes by using nearby stations without the step change to bring the spurious station into alignment with its neighbors. Still have to reacquaint myself with Menne 2009.
Carrot-
The automated methods would be, I think, a lot easier to implement. It would be a real pain in the neck to have to assimilate meta data!
It doesn’t help that metadata (apart from a basic snapshot view) is simply absent from a good portion of GHCN stations… Individual MET offices might have temporal station records (e.g. for moves or sensor changes), but I imagine their completeness varies dramatically from country to country. An automated method might be ideal, provided it performs well at catching step changes and other inhomogeneities.
Having a global CRN would be nice, but it still only gives us records going forward.
Chad,
I agree that automated objective methods are the way to go here. As Zeke points out, international historical metadata are not compiled anywhere yet, and are necessarily incomplete. Even if the country’s met bureau meticulously logged each station move, there are other microsite issues that may have gone unnoticed. And even if you had field notes, you often wouldn’t know how to adjust for the described issues unless you compared with the neighboring stations. Which brings you back to the automated statistical methods.
I think the thing to do is go automated, and then for those stations where you do have lots of field notes, see if the automated adjustments made sense. Which is what Menne did. He also tested against synthetic flawed data. And at this point, somebody on here is going to start complaining about how the method doesn’t pick up on smaller flaws. I don’t think that can be helped, and I don’t know that it matters in the end.
It would be nice to replicate Menne’s method, try it out on various biased synthetic data, and see how it performs. If it catches and corrects for warming biases just as well as cooling ones, it would help put to rest some of the more egregious gripes about adjustments.
Sounds like a good project for Chad or Nick 😛
I’m vaguely curious what the new method will do with Darwin.
Though I can already predict that Eschenbach will HATE an objective, fully automated method that doesn’t rely on metadata. He already hates the GISS UHI step, and that one is easy to understand and emulate, even if you don’t like it. So we’ll probably be set for a continuous future stream of Eschenbach highlighting an individual station he doesn’t understand, and then making a huge deal out of it.
Congratulations, Chad. It’s very meaningful to see another skilled, disinterested party offer an analysis like this.
Zeke, thanks again for constructing these graphs to highlight areas of agreement and disagreement amongst/between the analysts. They are a model of clarity.
It would seem that the only major possible flaws and weaknesses to chase down would be prior to the data-input step. “Raw” GHCN means “unadjusted”, doesn’t it? Or are there still cases where “pre-adjustments” transformed a temperature reading to some other, later, not-quite-raw number? Sorry–I know this has been hashed out here before.
This ought to serve as the jumping off point for a pretty definitive treatment of UHI. That looks like it will be detectable, but will account for a very modest portion of the observed warming.
Are there other issues beyond UHI that this approach can solve? Probably…
This approach ought to do an excellent job of putting individual errors and discrepancies into the proper context. If an individual Darwin-type discrepancy doesn’t budge the overall picture much, and if there aren’t many of them, then they wouldn’t seem to be that important. Is it likely that there are a lot of similar problems?
Carrot,
Agreed. I think trying to adjust for microsite issues (tree gets cut down, more sunshine, hotter, …) is overkill. I did some experiments with the first difference method to see how well it could detect a step change. It basically consisted of finding outliers in the series and from that identifying the month when the series jumped or dropped. The problem is that if the change is relatively small, the statistical method won’t be able to discern it from the noise. Even if it was identified and corrected, I, too, doubt it would matter in the end.
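For concreteness, here is a toy version of the kind of first-difference outlier test Chad describes; the threshold, the names, and the (nonexistent) missing-data handling are made up for illustration, and, as he says, steps that are small relative to the noise will simply not be flagged.

import numpy as np

def find_step_change(series, z_thresh=4.0):
    """Flag a candidate step change by looking for an outlier in the
    month-to-month first differences of an anomaly series. Returns the index
    of the month right after the largest jump, or None if nothing exceeds
    the threshold."""
    d = np.diff(series)
    z = (d - np.mean(d)) / np.std(d)
    i = int(np.argmax(np.abs(z)))
    return i + 1 if abs(z[i]) > z_thresh else None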
Zeke-
Yes, I’m planning (have been for a long time) on tackling the issue in depth. I saw somewhere (I think Mosher showed me) that Menne posted some Fortran code for the adjustments. I had the code on my computer but it’s gone and I’m still looking for it. So if anyone knows the url, I’d appreciate it.
AMac,
As far as I know, GHCN v2.mean comes directly from CLIMAT reports (and retroactive record compilations for older stations) with no adjustments apart from some basic QA tests (e.g. is Dallas reporting a temp colder than that of Siberia, as happened last year). Carrot and Nick have delved a lot deeper into CLIMAT reports, but you can double-check more recent data before NCDC gets its hands on it at the WMO GSN website: http://gosic.org/gcos/GSN-data-access.htm
.
GHCN does provide an adjusted dataset called v2.mean_adj, but as far as I know none of the blogger-driven reconstructions used it (and globally the adjustments are fairly small). It’s often somewhat confusing, since adjustments in the U.S. (TOBS and inhom) are quite large, while globally they are much smaller.
.
Having these models is quite useful for looking at UHI, and Ron Broberg’s work over at The Whiteboard in creating an independent metadata assembly is invaluable. We’ve done a few initial looks at UHI, but there is still a lot more work to do on that front, and now that basic anomaly and spatial gridding tools are widespread there should be a lot more analysis in the near future.
.
As far as non-UHI issues go, well, looking at the station dropout issue was the impetus behind creating these in the first place. I’m sure other neat applications will crop up with time.
Scratch that. I found the code. ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/software/
Didn’t I post that URL in a comment on your webpage earlier this year, Chad? Stumbled across it wholly accidentally, while looking for something else.
My approach was going to be: Don’t look at their code at all. One should be able to understand and reproduce the method without looking at it.
Amac,
To add to what Zeke said,
What is in the GHCN raw are averages. GHCN keeps files for max, min and mean.
The max is the monthly average of daily max temperatures. The min is the monthly average of daily min temperatures. And the mean is the average of max and min.
So it is raw, so far as an average is raw. So if somebody messes up the math, that’ll propagate unless they mess up so badly that it throws a QC flag. Also, there’s your basic issues with incorrect data entry, etc.
Carrot,
I’m not sure all means are based on the GHCN max and min values, since I recall from Peterson and Vose ’97 that, historically, there are a lot fewer stations with min/max records than with mean records. That said, I imagine all the stations reporting regularly via CLIMAT reports (e.g. not retrospective compilations) provide both max and min.
carrot-
Yes it was you who posted the link. I didn’t look at it in depth but it seems well documented enough to consult as a last ditch effort to answer a question or clear up an ambiguity.
Zeke,
Correct, in that the monthly mean is not calculated from the max and min by NOAA or WMO. It’s coming in on the CLIMAT, or was logged in the older archives. Presumably in either case, it was calculated by the host country.
Chad,
Do as you choose, but if you do go down that rabbit hole, I think it would be a good blog experiment to see if you could figure it all out, without looking at the code. This would (a), show whether the paper was well-written, and (b) underscore the point that you don’t need the code if the paper is well-written.
Yes carrot, here I am to tell you about those small flaws…
but first let me say that I can see that no one here has read AND understood Menne yet. Menne (in the US) has the luxury of over 10,000 stations to use which are outside of his network of 1200 plus USHCN stations. So the only areas you will be able to do any useful analysis with his method are those areas with dense temperature networks.
As for those small flaws… they are not small when compared to the end result. The average missed discontinuity is .4, the end result is .6. The average station has about 7 caught discontinuities and about 6 caught trend discontinuities. The warm discontinuities outnumber the cool discontinuities. Feel free to do the math yourself by looking at his work on the USHCN; the final result is that the algorithm is biased to the cool side by about .3, meaning that AT LEAST half of the land warming IN THE US is caused by artificial changepoints that have gone undetected.
The problem you will run into doing a global analysis with a Menne style approach is that as the density of the network is reduced, the accuracy of the algorithm is also reduced.
HOWEVER, you can improve on his work by manually comparing the records and detecting discontinuities. You can then look at various other records available such as metadata (location and equipment changes especially)… and make some adjustments manually… then be sure to include some kind of UHI adjustment (Hansen’s still looks to be the best).
But the best place to start is simply by reading the literature and see what others have done and think of ways to improve what they did.
… I said cool side, that should be warm side, sorry
There is no Darwin type discrepancy. The only discrepancy is Willis’ ignorance of the history of the site.
Quite pointless when you are using the same adjusted data.
They’re all using the same data, but it isn’t adjusted.
Just to clarify — not because I want to cast aspersions or anything — but because the word gets used in ways that confuse me…
When you say “raw” you mean straight from the thermometer, i.e. no “corrections”, no “adjustments”, no “homogenization”: just the number as the person standing in front of the thermometer would have read it. Is that right?
Thanks
Margaret,
No correction, no adjustment. But between the thermometer and the data used here is the math step described at Comment#43660.
The input here is the average temperature for each month. So somebody had to take the raw thermometer readings, and get an average out of them.
Often the thermometer that is used will record the high and low temperature seen over the previous 24 hours. So the guy who walks out to the thermometer will write down those two numbers. The average for the day is the average of the high and the low. Then average over the entire month. That’s finally what Chad used above. [I forget the exact guidelines on how to do the averaging, but this’ll do for now]
So if you trust somebody somewhere to add and divide, to take an average, it’s essentially raw.
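To put the bookkeeping in one place, here is a minimal sketch of that chain; the real guidelines add rules for missing days, rounding, and observation time that are glossed over here, so treat it as illustration only.

import numpy as np

def monthly_mean_from_minmax(daily_max, daily_min):
    """Monthly mean as typically reported: average the daily (max+min)/2
    values over the month."""
    daily_mean = (np.asarray(daily_max, float) + np.asarray(daily_min, float)) / 2.0
    return float(np.mean(daily_mean))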
Re: Carrot Eater – You continue to be wrong about the GHCN raw data. In many cases, particularly in earlier years, the “raw” GHCN data are corrected, homogenized, and/or use different methods for calculating the monthly means at the same station. And the mean is not always the average of the mean max and mean min.
That, is true, yes.
That, please document. There should not be known cases of such in v2.mean.
Re: That, please document. There should not be known cases of such in v2.mean.
Go to http://archive.org and in the search box, type “Smithsonian 79” and you will get a link to the World Weather Records up to about 1920. There are other volumes as well. At the top of the temperature tables for each station, there is information on how the monthly means were computed. Make sure you read the station notes for the station, if they exist, because that will have additional information on the means. Be prepared, because it is a lot of reading, but very interesting.
P.S. – Thanks to a poster at Jeff Id’s a week or two back on Jeff’s surface temperature post. That poster is the one who found the digitized copies at archive.org.
torn8o,
The word “weather” does not appear anywhere in the search results, and none of the 175 results seem to have anything to do with weather records based on their title and description. Could you provide a more direct link?
I wouldn’t be surprised if some of the older retroactively collected temperature records had different QC than new data, but I doubt there were any fancy inhomogeneity corrections that far back…
I couldn’t find it either. But I suppose I could make the trek to the library and see the hard version.
Here is a direct link:
http://www.archive.org/stream/smithsonianmisce791927smit#page/n5/mode/2up
Ahh, interesting. I don’t see anything in there apart from the usual QA; the first section is errata (presumably added after publication), and the method described for checking transcription errors (e.g. comparing readings against neighboring stations) isn’t homogenization per se, unless the records for stations were actually changed to be a function of their neighbors, something not indicated in the document as far as I can tell.
Frankly, I wouldn’t be completely surprised if some of the pre-1960 data had inconsistent QC procedures, but it is still probably the “rawest” data that exists today.
Agreed with Zeke. None of that looks like homogenisation, as in adjustments for station moves and the like.
Mainly just errata, in that the number got sent in wrong. Surely you can correct typos, and still be considered ‘raw’.
Glancing through, I did see something like a TOB issue, moving into how the average was computed. That was amusing.
“the usual QA”
Yeah. All these temp monitoring devices were set up not by “climate scientists” looking for global temperatures. They were set up by people; for people; near people so they could tell them what kind of clothes to wear that day.
I’m certainly not going to claim that temp records pre-1920 (or before the mid 1940s for that matter) are particularly good, though they are the best we have. Since then, thankfully, there has been a lot more standardization.
All that said, I wonder if weather reporting naturally attracts people who are diligent with detail. In some sense, looking through those old notes, it’s surprising they bothered to try as hard as they did.
I like to think that in the age predating electronic data storage that there was often a sense of stewardship among many caretakers that is not as prevalent in our throw-away society. They knew that their data was relatively rare and of some value to future generations.
Re: archival data, it seems to me that it’s less useful to assert that the records are good/bad. To a degree (heh), both statements will be true.
It’d be more useful to come up with tests that will tend in one direction in the case of certain quality problems, and in the other in their absence. The frequency of very large discontinuities would be one possible indicator.
This was a theme of the original “Freakonomics” (not referring to the various controversies Levitt has engendered). If certain kinds of cheating took place on a multiple-choice test, what patterns might be seen in the answers? If real-estate agents tended to serve their own interests, how might transaction records reflect that?
A mix of ingenuity and green-shade caution is needed. And we outsiders have to trust that investigators will call ’em as they lay. Transparency can be fostered by harnessing web-based tools.
It’s my impression that this isn’t impossibly far off from the current state of the field. It’s contentious, and some ‘players’ seem likely to end up with much egg on their faces. But the records exist, and are mostly accessible (it seems), and both amateurs and pros have developed a lot of interest in these issues. So it seems as though there are grounds for optimism.
Sorry for the hand-wavey tone of this comment. Mostly an outsider’s impressions from lurking on various threads on the subject. Perhaps those who take issue will cite specific instances that suggest that such optimism isn’t warranted.
oh how dreamy. lol
Well Amac, with the really old data you often get issues when they install the Stevenson screen – big discontinuities there.
oh how dreamy. lol
IZZA ROMANTIK 😉
——
Check out slide #40
http://www.ncdc.noaa.gov/oa/usgcos/NCDCGCOSReview/NCDCGCOSMeetingDiamondIntroAug2006.pdf (6MB)
Re: liza (May 20 08:37),
This is not entirely true. At least during WWII, weather prediction was important to the military who wanted to anticipate conditions when planning attacks or just doing more ordinary logistics.
Godwin’s law alert….
Anticipating weather conditions was sufficiently important that a Nazi U-boat landed in Labrador to set up a secret weather station. See Weather Station Kurt.
Temperature measurements at airports support aviation. Pilots want to know the air density and compute this based on weather data.
These uses are sufficiently important to motivate those funding the weather collection to encourage some attention to detail.
(SteveMcIntyre told me about the Nazi weather station during a conversation where we were challenging each other to come up with the few situations where bloggers can mention Nazis without it being an obvious attempt to smear anyone.)
Ron Broberg (Comment#43756) May 20th, 2010 at 10:10 am
I can’t tell which slide is numbered what.. But, I like the one picture of the two guys in brown shirts, one opening and grinning at that old paperwork that came from all that stuff heaped on the shelves and in cardboard boxes in the background. Is that the one you mean? lol 😉
lucia (Comment#43757) May 20th, 2010 at 10:32 am
true! And like I said, they were set up for people’s needs (not to collect “global average” temperature down to a half of a degree for climate models and climate scientists)
Been reading this:
http://climateaudit.org/2009/01/20/realclimate-and-disinformation-on-uhi/
If all this is so matter-of-fact and all so good, why is it like pulling teeth getting the complete code and data information?
Liza–
It’s true that no one designed the system to measure global average temperature to within ±anything. However, the needs did often translate into data collectors wanting to maintain better precision and accuracy than one would need to merely decide what clothes to wear on any particular day. That was the need you actually suggested, and it would tend to give the impression that very little precision or accuracy would have been desired.
lucia (Comment#43760) May 20th, 2010 at 10:55 am
Whatever. I mean no disrespect by that. my comment was in general and works just fine to make the point. Climate scientists are the folks that need to be more precise! My link to CA illustrates how much they try not to be.
Well, well, look what Liza accidentally dragged out.
d’Aleo was on the station drop out meme before any of us had ever heard of EM Smith.
McKitrick 2003
http://scienceblogs.com/deltoid/2004/04/mckitrick.php
.
——
.
Liza, yeah, that’s the one.
Ron Broberg (Comment#43764)-Lambert has never admitted that this issue was shown to be irrelevant:
http://www.uoguelph.ca/~rmckitri/research/Erratum_McKitrick.pdf
I roll my eyes.
Zeke –
I remember an earlier graph you produced with the US data where the stations were split depending on type of screen in use in the late 90s. The interesting thing was that there were some deviations even in the early years before the introduction of the new screens started.
At the time I thought it would be worth repeating the difference exercise using two station sets chosen randomly. Would this be useful to do with the global set to get an idea of how station selection can affect the result? I doubt if the difference will be large but it might give a lower limit to the amount that is worth looking for when comparing methods or digging for UHI etc.
I am afraid my few remaining grey cells are not adequate for this task so I am reduced to asking someone else to undertake my homework if it is of general interest.
Jorge
AndrewFL
How are you rolling your eyes? That nonsense that McKitrick is peddling is what turned into the nonsense that EM Smith/d’Aleo/Watts were peddling, which was shown to be absolutely ridiculous by a whole bunch of people, including Zeke here.
Looking at it, Lambert gave a really primitive, but on the right track, treatment of station dropout.
carrot eater (Comment#43768)-Was Lambert going on about station drop out? Sorry, I thought that was a different post.
Lucia,
If it had been just a German WW2 submarine, you could have let Godwin sleep.
Yeah, station dropout (combined with the stupidity of not using anomalies) originates with McKitrick, circa 2001 I think. I guess Lambert took a stab in 2004; I had no idea. Probably mostly forgotten except d’Aleo, and then EM Smith came and it blew up. Though I have a hunch McKitrick got the idea by looking at a plot made by Willmott back in 1991 or so.
Jorge,
Some difference is inevitable due to stochastic factors, though you can minimize it with a large enough sample size. Other times difference can be due to spatial coverage. The approach I take tries to minimize the latter, by only looking at grid cells that have both types of stations that I am looking at (e.g. both MMTS and CRS, both rural airports and rural non-airports, etc). I discussed it awhile back in this post: http://rankexploits.com/musings/2010/in-search-of-the-uhi-signal/
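As a rough illustration of that paired-cell idea (not Zeke’s actual code; the array layout and names are assumed, and missing-data handling is minimal):

import numpy as np

def paired_cell_difference(anoms, cell_id, is_type_a):
    """anoms is a stations x months array of anomalies, cell_id gives each
    station's grid cell, and is_type_a marks one station class (e.g. airport).
    Only cells containing BOTH classes contribute, so differences in spatial
    coverage don't masquerade as a UHI-type signal."""
    diffs = []
    for cell in np.unique(cell_id):
        in_cell = cell_id == cell
        a = in_cell & is_type_a
        b = in_cell & ~is_type_a
        if a.any() and b.any():
            diffs.append(np.nanmean(anoms[a], axis=0) - np.nanmean(anoms[b], axis=0))
    return np.nanmean(np.vstack(diffs), axis=0)  # mean A-minus-B series across cells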
Carrot Eater:
This could be cleared up if somebody were to get the stomach to download the GHCN daily file and process it.
Presumably things like means that torn8o was alluding to are not relevant here. If you could verify that you can recover v2.mean from this, that removes one more talking point.
I’m not sure what the GHCN daily has to do with it; the archive all looks monthly to me.
I might go to the library and page through sometime, and do some spot checks against v2.mean. Reads like they do some corrections to pressure data. If temperature is messed with, it’s to correct typos or …. you see TOB issues mentioned repeatedly, so I wonder what that’s about.
oh, you want to recover v2.mean from the daily database?
If so, that won’t be a perfect match. I think the daily has less QC done on it, and is less complete.
Yeh, I think a good chunk of GHCN stations don’t have daily records available through NCDC. That said, looking at the differences for the stations that have data might be interesting, though I wouldn’t expect it would be particularly significant either way.
Thanks Zeke –
I had rather assumed that splitting the stations randomly would leave the spatial coverage pretty much intact. Half the stations in a grid cell would be in each group. Clearly that would not always work out and I would be quite happy to only use gridcells that contain members of both random groups.
Jorge
Well, I’m downloading the daily file. This will remove any speculation, at least on my part, for what is really there vs not there.
Chad, carrot showed me the Menne file and I pointed you at it.
I have not looked at the code. As carrot notes, there is a benefit to reading the paper first. No paper that I have ever read gave enough instruction on how to construct code. The best example of this is Hansen’s papers, which don’t even describe certain steps in the code (having to do with the “quality” of certain data). Eventually I’ll get around to reading it all, but it really does put the cart before the horse, at least from my perspective; others can disagree.
WRT adjustments and “objective” versus “historical” approaches.
I think there is a benefit to both, especially if you want to test the objective methods; you can’t really do that without a good solid historical basis. Regardless of the method selected, the important point from my perspective is a proper and full accounting of all statistical decisions made and the proper accounting of the uncertainty. For the historical stuff I’m more interested in documenting the LACK OF CHANGES rather than trying to estimate the effect of changes. Take UHI for example. I’d rather find those sites with minimal changes than do one of two things:
1. apply some kinda hocus pocus UHI factor based on f(pop, etc etc)
2. Smooth it away by averaging with other stations.
I think the governing principle here is that you can’t get information where there is none. I don’t expect different methods to produce hugely different means, but the accounting of uncertainty hasn’t been very diligent.
Hmm, how do I put this. Let’s take TOBS. It does a good job of recovering the mean for a monthly temp, but it adds uncertainty, in the form of a standard error of prediction. That error of prediction EXCEEDS the uncertainty you would have had if you had taken the measurements at the right time.
Simply: if you have 60 measurements in a month (TOB 7AM) with a mean of 10C, your error is mousenuts (like .03C), error due to sensor accuracy. IF you want to adjust this mean to a mean with a TOB of midnight… then you have to use a model, a TOBS model. That model has an error of prediction, an order of magnitude bigger than mousenuts. You get a nice prediction of the mean, BUT that information doesn’t come for free; it comes with an error. Account for that in a final answer and I am happy. The takeaway is this: methodologically I don’t have any hobby horse (I rather like change point analysis and have suggested it since 2007, see the relevant CA threads… I took some abuse for this of course, but what the hell, it seemed like a cool approach). The only hobby horse is open data, open source, full accounting.
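A toy version of the accounting being asked for, using the illustrative numbers from the comment above (the 0.3 is only “an order of magnitude bigger than mousenuts”, not a real TOBS model error) and assuming the two error sources are independent:

import math

def adjusted_mean_uncertainty(sensor_se=0.03, tobs_prediction_se=0.3):
    """Combine the measurement-based standard error of a monthly mean with
    the standard error of prediction of a TOBS adjustment model, added in
    quadrature (i.e. assuming independence)."""
    return math.sqrt(sensor_se**2 + tobs_prediction_se**2)

# adjusted_mean_uncertainty() -> ~0.30 C, versus 0.03 C for the unadjusted mean:
# the adjustment model, not the sensor, dominates the stated uncertainty.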
Carrick… don’t go near the daily… you’ll go blind…
Zeke (Comment#43638) May 19th, 2010 at 4:22 pm
“It would be nice to replicate Menne’s method, try it out on various biased synthetic data, and see how it performs. If it catches and corrects for warming biases just as well as cooling ones, it would help put to rest some of the more egregious gripes about adjustments.”
That would be a great project. The biggest issue, however, is in characterizing what “artificial” changes “look like.” We know what a “TOBS bias” looks like because we have stations that sample continuously. We know what a change in elevation looks like. We kinda have an idea what instrument changes look like.
But what does the evolution of UHI look like? What does changing land use over time look like? Does it look like “climate”?
Still, it would be good to know the kinds of signal that Menne can detect and at what thresholds… just to bound the system.
liza (Comment#43746) May 20th, 2010 at 8:37 am
“the usual QA”
Yeah. All these temp monitoring devices were set up not by “climate scientists” looking for global temperatures. They were set up by people; for people; near people so they could tell them what kind of clothes to wear that day.”
Actually not. Some of the longest records were set up as scientific records. Avoid generalizations, generally speaking.
Right. Just because in your head, you made a synthetic flaw to be a flaw, doesn’t mean it’s quantitatively any different from a real climate feature that you don’t want to screen out.
In the case of UHI, your only hope is to compare with the neighbors, I think. It can’t be ascertained by looking at a station in a vacuum. Then again, that’s the basis of pretty much all non-metadata adjustments. MikeC I think makes a big deal of looking at diurnal patterns, but I don’t think that’s an unambiguous sign; diurnal patterns can be changing anyway.
Though I have a hunch McKitrick got the idea by looking at a plot made by Willmott back in 1991 or so.
.
Could be. Or McKitrick just took it out of Hansen 1999. More stuff “hidden” in the sci lit.
.
Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato, 1999: GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022, doi:10.1029/1999JD900835.
http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal.pdf (15mb)
Ron, pretty much every paper about the surface record has a plot of station count. Even H&L 1987 does, I think – before the 1990 drop, there was some other drop. History repeats.
But it takes some talent to look at that, and then decide to do the idiot task of taking a simple mean of all the temperature records.
Which is why I suggest the Willmott 1991 paper. To this day I can’t figure out why, but it’s got the same style graph in it as McKitrick – station history overlaid with a simple mean of temperatures. At least I think in Willmott’s case they were gridded, but still, I can’t figure out why he didn’t use anomalies for what he was trying to do (test for undersampling bias).
Got a link?
I don’t see a free copy floating around, but you never know
Willmott and Robeson, “Influence of spatially variable instrument networks on climatic averages,” GRL 18: 2249-2251 (1991)
Ron: just shoot me an email if you want a copy, and I’ll see what I can do.
If I have it correct, all the land data sets have been validated, and by people who can be trusted. Excellent. The Earth has warmed over the last 130 – 160 years about .07 C.
OK. Here’s my question. And I don’t mean this to be sardonic or crass. I really want to know. So what? Even if we human beings were responsible for the entire .07 C–which I don’t believe for a second–what is it we are supposed to be doing about it?
I’m sorry. I’m just not getting the horrifying implications of this finding. Color me stupid if you want. What’s the big deal? What am I missing? And please don’t tell me the models tell us we are all going to bake like in an oven or drown like in a flood. I can get that from reading the Bible.
“And please don’t tell me the models tell us we are all going to bake like in an oven or drown like in a flood. I can get that from reading the Bible.”
That’s just it, Titan28. The Bible needs to be replaced by IPCC reports. They write their own scripture. Climate Scientists are the new prophets for our era. Or so they think they are.
If you thought the age of the Court Magician was over… it’s just gettin’ started, baby. 😉
Andrew
Titan28,
It’s 0.7 C; you are off by an order of magnitude or so 😛
As to how much it matters? Well, that really depends if we end up at 1.5 C or 4 C by the end of the century, or somewhere in between.
Lucia aka Thelma… and how many years of the temperature record will WWII account for? And how many years will the secret temperature stations be in existence? There is a lot of study on the WWII years because of all of the temperature stations that may have been bombed out, observers drafted etc… it’s been a tough period of time for the folks doing global temps
Liza is correct, there was little interest in climate or temperature records for climate monitoring until the late 70’s. Until then, weather observing was more for… well… weather, day to day weather… still is for most weather observers.
carrot,
“MikeC I think makes a big deal of looking at diurnal patterns, but I don’t think that’s an unambiguous sign; diurnal patterns can be changing anyway.”
That’s funny, so does every published climate scientist who analyzes temperature records. Not taking advantage of all the data is like studying the spectrum and leaving out half of the colors.
Titan28,
“OK. Here’s my question. And I don’t mean this to be sardonic or crass. I really want to know. So what?”
Good point, because after you take away the Polar Bears, Hurricanes, Coral Reefs and all of the other ridiculous claims made by the alarmist sector, all we are left with are lower Canadian and Siberian heating bills and plant food.
… and it’s .6C
By the way, would anyone like to wager some large quatloos that the Nino 3.4 region will break the La Nina threshold within 3 weeks?
CE
“Right. Just because in your head, you made a synthetic flaw to be a flaw, doesn’t mean it’s quantitatively any different from a real climate feature that you don’t want to screen out.
In the case of UHI, your only hope is to compare with the neighbors, I think. It can’t be ascertained by looking at a station in a vacuum. Then again, that’s the basis of pretty much all non-metadata adjustments. MikeC I think makes a big deal of looking at diurnal patterns, but I don’t think that’s an unambiguous sign; diurnal patterns can be changing anyway.”
I think you are right in this. Supposedly Anthony asked Dr. Menne this question in person and Menne said that the method could not detect certain types of changes. In the end I suppose some people will refuse to be convinced of anything. I think, however, that a diligent look at things can take some silly claims about the size of UHI (or microsite bias) off the table. That is, there is a view that says there is a bias, but it’s small. Not finding any difference (as Peterson 2003 notes) IS a mystery. FWIW I think we are talking small numbers here, but that’s part of the fun.
Mosher,
Menne didn’t just tell Anthony; he made it clear in his paper, which it appears you still haven’t read. He and Williams provide histograms which make this point in their papers (08 and 09), which it appears you still haven’t read… and how does Menne deal with these sorts of issues? The same way Quayle dealt with the question of what caused the discontinuities when the equipment was changed from CRS to MMTS: “It’s beyond the scope of this study.”
On top of it, Williams made it clear in his AMS presentation… the algorithm was designed to retain low frequency variation such as UHI.
This is getting to be like romper room science in here… Mosher is convinced he understands what he has not read and Carrot understands a movie he hasn’t seen.
… oh, and Peterson had his grip on the whole database and cherry-picked the stations… it’s why most of them were not even USHCN… he knew exactly what he was doing; he was prepping for the IPCC report
What I took from Peterson 2003 was that you shouldn’t forget about all the other differences between two stations, and that microscale effects can overcome the meso.
CE: what I took from Peterson was that he solved the mystery by making two postulates:
1. urban sites were in cool parks.
2. micro site issues with rural sites were limited to rooftop installations.
Just structurally, if you diagram his argument, that is the logic flow.
In fact he uses the word postulate. But I don’t want to re-argue that paper here. The biggest issue was his site selection. Now that folks have their own code and metadata, folks can have another look. But again, unlike others, I don’t expect to see anything that is TOO out of line with the literature… if people buy Jones’ .05C figure then .1C or maybe .15C is not that far fetched. Skeptical dreams of all the warming vanishing or half of it vanishing are not going to come true. Too many lines of mutually supporting evidence FOR warming for that dream to be anything but a dream.
Ah… yes CE, Peterson did move things forward WRT eliminating the other differences between sites. So, points for that, of course.
MikeC,
I’m convinced about nothing. I’m tending to believe however that your contributions to our understanding are less than you think they are. I’m pretty certain that when carrot and zeke and nick and jeff and I decide to discuss menne that we will have a conversation where we help each other understand, even if we disagree.
Titan28 hits the nail on the head: If the results are as presented, and the Earth land temps have gone up by ~0.0X °C in ~100 years, why should we care, much less be spending billions of dollars doing something about it?
hunter: It’s not where it’s come; it’s where it might go in the next 100-200 years that’s of interest in terms of impacts.
“carrot and zeke and nick and jeff and I”
The Self-Appointed Blackboard Council of Climate Orthodoxy And YOU Aren’t A Member 😉
Happy Friday Everyone! 🙂
Imagination Is More Important Than Knowledge -Einstein
Andrew
Thanks for backing me up MikeC! I don’t understand the bullying!
Speaking of weather…
The Farmer’s Almanac page says this:
People who follow the Farmers’ Almanac’s weather predictions say they’re about 80-85% accurate (keep in mind, we make our predictions almost two-years in advance!).
It’s a wonder how they managed almost 200 years without using all those “CO2 makes it warmer” calculations.
What I took from Peterson is that he needed to sow some doubt into the UHI argument before AR4… he couldn’t use but a handful of USHCN stations so he had to go into the rest of the NWS network to find stations which would prove his point. I still cannot believe that he did analysis on a network and barely used any stations from that network. How does that old saying about cherry pie go?
Mosher,
“if people buy Jones’ .05C figure then .1C or maybe .15C is not that far fetched. Skeptical dreams of all the warming vanishing or half of it vanishing are not going to come true.”
But .15 is 25% of the warming and that is a real possibility here. The land temps in the past 30 years have outpaced ocean temps by a large margin when in the past they tended to run together. So 25%… and the models… and all of the equations… pretty bad. Now we move into how much warming is from natural causes… the models still do not have ocean circulation down, the level of scientific understanding of how the sun affects climate is low (TAR)… still no consensus on feedbacks… sounds like the science is quite far from settled… not to mention that the end result of any amount of anthropogenic warming, regardless of how small it may be, is a slightly warmer Canada and Siberia (OMG WHATTA FLIPPIN NIGHTMARE!) and plant food.
Mosher,
“MikeC,
I’m convinced about nothing. I’m tending to believe however that your contributions to our understanding are less than you think they are. I’m pretty certain that when carrot and zeke and nick and jeff and I decide to discuss menne that we will have a conversation where we help each other understand, even if we disagree.”
and Andrew,
“The Self-Appointed Blackboard Council of Climate Orthodoxy And YOU Aren’t A Member”
Regardless of who is a member of any club, if you have a conversation on a public blog in the public comments section… especially on the blog “where climate talk gets hot”… then you should expect others to point out their thoughts when you do the unthinkable, such as discuss papers and such without reading the papers and such…
now let’s all sing… can you tell me how to get, how to get to…(nevermind)
hunter,
I think there is some confusion arising from my slightly heterodox display of trends in degrees C per year instead of degrees C per decade.
To wit, the 0.007 C per year trends from 1900-2009 would translate into 0.7 C per century. Similarly, the 0.02 C trend from 1960-2009 would translate into 2 C per century. Granted, these are land-only trends and the ocean is warming more slowly. Also, they aren’t projections per se, and linear extrapolations based on them will be problematic since the forcings in question aren’t linear. If you really want a projection of what will happen over the next 100 years, you need a climate model, like it or not 😛
Liza,
“Thanks for backing me up MikeC! I don’t understand the bullying!
Speaking of weather…
The Farmer’s Almanac page says this:
People who follow the Farmers’ Almanac’s weather predictions say they’re about 80-85% accurate (keep in mind, we make our predictions almost two-years in advance!).”
I just call em as I see em… you get into the habit of that in Texas… guns on the table… well… for those with a carry and conceal permit it relieves the little bruise on the lower ribs for a bit too 😉
That’s one thing about the Farmers Almanac… they have a reputation to maintain and they have to have a good record of accurate forecasts. I have clients who require the same, and if I was wrong then my financial value would drop, as well as the extra income that is building my retirement.
MikeC
“But .15 is 25% of the warming and that is a real possibility here. The land temps in the past 30 years have outpaced ocean temps by a large margin when in the past they tended to run together. So 25%… and the models… and all of the equations… pretty bad.”
I don’t think you get the math. The land is 30% of the total, so EVEN IF the land is wrong by 50% (.4C warming instead of .8C) you still have the warming in the ocean, or 70% of the total. As you note the land has outpaced the sea; if you look at the difference historically you’ll note that it follows an interesting pattern. Maybe I’ll get back to that, but for the UHI argument you have to understand that the math is against you. I will make it simple:
Land is up by 1C
Sea is up by 1C
( for example only)
Total = .7*1 + .3*1 = 1C
Suppose the land is WRONG by 50% then you have this
Land is up by .5C
Sea is up by 1C
total = .7*1 + .3*.5 = .85C
Do you get it now? The land has NO LEVERAGE in the final average.
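The same arithmetic as a one-liner, for anyone who wants to plug in their own numbers; the 0.3/0.7 split is just the approximate land/ocean area fraction used above.

def global_change(land, ocean, land_frac=0.3):
    """Area-weighted combination of land and ocean temperature changes."""
    return land_frac * land + (1.0 - land_frac) * ocean

# global_change(1.0, 1.0) -> 1.0 C; global_change(0.5, 1.0) -> 0.85 C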
Next, DON’T confuse issues with the models with this issue. Stick to one point until you demonstrate that you understand the problem and that others understand your point.
” Now we move into how much warming is from natural causes… the models still do not have ocean circulation down, the level of scientific understanding of how the sun affects climate is low (TAR)… still no consensus on feedbacks… sounds like the science is quite far from settled… ”
How much warming is from “natural” causes? Why, all warming is natural; the supernatural has nothing to do with it. The question is how much warming is due to EXPLAINABLE and quantifiable causes, and then what physical model best explains that warming. WRT the settling of the science, I think you get nowhere by constructing a strawman argument. Various parts of the science are expressed with various degrees of certitude. Certain things are “settled.” Listen to Lindzen to get some idea of the things that even skeptics should agree with.
“Total = .7*1 + .3*1 = 1C
Suppose the land is WRONG by 50% then you have this
Land is up by .5C
Sea is up by 1C
total = .7*1 + .3*.5 = .85C
Do you get it now? The land has NO LEVERAGE in the final average.”
I’m no math whiz, but it looks like it has .15C worth of leverage in the final average…?
Andrew
Mosher,
Where are you getting your numbers… the Romper Room grab bag?
Let’s use Hansen’s numbers, 1880 – current, and using the 5 year smoothed line, land is 1.3C and ocean is .6C… so using your math, 50% of the land warming being artificial (assuming ocean is correct, but I’m not advocating that here either way, just for chits and giggles) leaves you with a total warming of about .6, not .85… a number also published by NOAA. Now let’s look at ocean warming, what is it from? Why, the world’s leading climate scientists don’t know, but then that’s the travesty which Trenberth talked about (I believe you understand this one since you wrote the book). And how much is from solar? Why, the world’s leading climate scientists don’t know (TAR indicated a low level of scientific understanding). And just how much warming is from GHGs? Why, they know that based on models which do not understand the ocean and have to use elevated aerosol cooling to counter GHG warming (the point here is that they are tuning aerosol cooling which is actually ocean cooling… a product of changing equatorward winds and upwelling) and models which have a programming bias for positive feedbacks, i.e. man-made warming.
So let’s try this, the warming is not just one big thing, it is a lot of little mistakes… little things… .06 here, .12 there… and before you know it, you have very little AGW (in the .2 range if my recollection of the direct effects of GHG’s without enhancement is correct).
But, those folks who have denied themselves knowledge because they did not read the material… well, they get to sing
… can you tell me how to get, how to get to… (nevermind)
MikeC
“Regardless of who is a member of any club, if you have a conversation on a public blog in the public comments section… especially on the blog “where climate talk gets hot”… then you should expect others to point out their thoughts when you do the unthinkable, such as discuss papers and such without reading the papers and such…”
MikeC. In all fairness I should explain. Let’s see, back in 2007 I started to study a variety of homogenization algorithms. You can visit some RC threads and CA threads and read my comments if you doubt that. Anyways, I had some thoughts about change point analysis and suggested that as an idea. I believe Kenneth Fritsch picked up on that suggestion; you can ask him for verification. Anyways, I looked around at a variety of approaches, downloaded some software (again, go search the CA threads where I point people at this). So, then one day I became aware of Menne’s work, this is PRIOR to his publication, and I informed people that a change in doing things was on the way. The point being this: I’m fairly well acquainted with the variety of methods used. I don’t need to read Menne’s paper to have an opinion on a class of methods, which is all I have done. For example, if Menne said that he used kriging, I would get to express my concerns about that AS A METHOD prior to reading his paper. If you read me carefully as opposed to emotionally you will see that is all I have done.
WRT homogenization, let me suggest that you start where I started nearly 2 years ago:
http://www.climahom.eu/AnClim.html
Let me suggest that you acquaint yourself with Alexandersson’s SNHT. Without even reading Menne’s paper I will bet that this method either plays a role or is improved upon. Then understand that Menne is talking about homogenizing series with an AUTOMATED PROCESS. Generally speaking there are two approaches to homogenizing a series:
1. The tedious approach of station-by-station comparison: a by-hand process with a data analyst supervising. In this approach a questionable data point has to have a documented change in the metadata; basically you see a divergence in the data, you track it back to a change in instrument, and you adjust.
2. Various automated approaches that look for these divergences and adjust.
You could also do a hybrid.
So first: you’ve read Menne. I haven’t. When I do decide it’s worth my time to go back and look at homogenization, I will bet that knowledge of Alexandersson’s SNHT is going to be important.
I’d also put it in the context of other approaches… hmm, here’s a nice place to start for you:
ams.confex.com/ams/pdfpapers/70228.pdf
When you acquaint yourself with the primary method (Alexandersson), then we might be able to have an intelligent discussion of Menne. I’m just gunna bet that he either compares to that method or improves on that method. Could be wrong of course. So, without reading his paper, how good was my guess?
Uses Alexandersson I bet, in some form or other.
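Since the SNHT keeps coming up, here is a bare-bones sketch of the single-break version of the test statistic. The standardization against a reference series, the proper critical values, and the pairwise/recursive machinery described in Menne’s paper are all left out; this is only meant to show the shape of the test, and the names are my own.

import numpy as np

def snht_single_break(z):
    """Standard Normal Homogeneity Test statistic for one step change.
    z is a candidate-minus-reference difference series already standardized
    to zero mean and unit variance. Returns (T_max, break_index)."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    t_stats = []
    for k in range(1, n):            # try every possible break position
        z1, z2 = z[:k].mean(), z[k:].mean()
        t_stats.append(k * z1**2 + (n - k) * z2**2)
    k_best = int(np.argmax(t_stats)) + 1
    return t_stats[k_best - 1], k_best

# A break is declared only if T_max exceeds a critical value that depends on
# the series length (tabulated in Alexandersson 1986 and follow-up papers).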
Mosher,
Quite wrong on both, he straight out uses it. (Then sort of correct on the last sentence.) But that notwithstanding, you still need to read the papers and the underlying material because there is so much more. For example, you can see the effect of Peterson’s influence on both Menne’s and Williams’ work over the years as they progress, and that UHI, which was intended to stay in the record, now does not exist in the record, “it is accounted for”… BWAAAAAAAAAAAAAAAAAAAAAHAHAHAHHA… I’ve heard Williams praise how UHI is retained in the record, “That’s not a bad thing.” Then they are a little miffed by the fact that some of the UHI does get removed by the program.
And, just for chits and giggles, SNHT is not an issue… it’s the lack of sensitivity. The misses increase as the discontinuity gets smaller… and they did a count, and they did simulations… and you can do the math and see that the record is warm by .3
The program is also less accurate as the density of the temp network decreases… so it’s not something you can use in most of the rest of the world.
And they screw up in the process of selecting stations to use for adjusting the USHCN station, they select the closest 100, then, and here is where it gets problematic… of that 100 they select the 40 with the most similarities to the target (USHCN) station… ensuring that common errors are more likely to get through…
but Mosh, I shouldn’t have to tell you this, you should know it all by now because you should have read it all by now.
… and I told Lucia aka Thelma that I’m the worlds leading typo-ologist
Carrot Eater said: hunter: “It’s not where it’s come; it’s where it might go in the next 100-200 years that’s of interest in terms of impacts.” But, we do have a fairly well defined sense of those impacts. They are: 1) a minor expansion and/or movement poleward of the temperate zone with its concomitant changes (many positive). 2) some small to moderate rise in sea level, 3) a probably small improvement in mankind’s ability to cope with food shortages.
All of this prior to the next nearly inevitable glaciation.
Have we gone down a dead-end path of adding precision to the average temp calculations? We seem to be having a discussion of how many angels can dance…? From the commentary it appears we can calculate/estimate within +/- 15-??%. I’m not sure this level of precision adds anything to the policy discussion, but it will hopefully add to the understanding of climate.
Moreover, none of this understanding of how to calculate average temps from noisy data and adding precision to it seems to add value and clarity to the CAGW theory. That is where the underlying questions lie.
MikeC, you need to calm down.
As you confirm, Menne uses/improves/changes the method I suspected he would focus on. Why am I not surprised? You would do well to acquaint yourself with the primary literature and the math involved before writing so much confusing stuff.
You really are having a hard time constructing an argument that anyone can cogently respond to. For example:
“And they screw up in the process of selecting stations to use for adjusting the USHCN station, they select the closest 100, then, and here is where it gets problematic… of that 100 they select the 40 with the most similarities to the target (USHCN) station… ensuring that common errors are more likely to get through…”
I suspect what they do is collect the nearest 100 stations and then perform a correlation test. Or alternatively they could select the 100 best-correlated stations using some threshold for correlation. Then they probably down-select to the best 40.
In the end they would down-select to an even finer set of comparison stations. OF COURSE errors “get through”; the method will have a false alarm rate and a failure-to-detect rate. It will detect differences where there are none, and miss differences where there are some. That is the nature of this beast.
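A sketch of the kind of down-selection being guessed at here; nothing in it is taken from Menne’s actual code, and the distance metric, the use of first-difference correlations, and the 100/40 counts just mirror the comment above. It also assumes complete, aligned series, which real station data are not.

import numpy as np

def pick_neighbors(target_series, target_xyz, cand_series, cand_xyz,
                   n_nearest=100, n_keep=40):
    """Take the n_nearest candidate stations by distance, then keep the
    n_keep whose first-difference series correlate best with the target.
    cand_series is candidates x months; *_xyz are Cartesian coordinates."""
    dist = np.linalg.norm(cand_xyz - target_xyz, axis=1)
    nearest = np.argsort(dist)[:n_nearest]
    dt = np.diff(target_series)
    corr = [np.corrcoef(dt, np.diff(cand_series[i]))[0, 1] for i in nearest]
    keep = np.argsort(corr)[::-1][:n_keep]
    return nearest[keep]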
Let me put it a different way. You want me to read a paper so that I can have a cogent discussion with you, and you cannot talk about the paper without spewing a bunch of groundless accusations about Peterson. Like I said, when carrot and zeke and others want to discuss the paper I will read it, because while I disagree with them on some things, we do know how to read and come to meaningful disagreements and hopefully some agreements.
Steven,
Reading Menne (briefly), it looks like Alexandersson’s SNHT is used. I finally got the program to work. Going to look at the results in detail now.
“While no one test clearly outperforms others under all circumstances, the standard normal homogeneity test (SNHT; Alexandersson 1986) has been shown to have superior accuracy in identifying the position of a step change under a wide variety of step and trend inhomogeneity scenarios relative to other commonly used methods (DeGaetano 2006; R07). For this reason, the pairwise algorithm uses the SNHT along with a verification process that identifies the form of the apparent changepoint (e.g., step change, step change within a trend, etc.). In fact, the pairwise testing procedure is similar to the Vincent (1998) and R07 forward and backward regression methods, respectively, but is more easily adaptable to a recursive testing approach for resolving multiple undocumented changepoints, and at the same time retains the higher power of detection of the SNHT.”
As I said, Mike, I bet that Menne bases his work on SNHT. I think we are done.
Chad,
I would expect him to use it. Other than a Bayesian approach I can’t see any other mathematical approach being superior (hehe, maybe Roman or Nick or Carrick will suggest one).
Issues are going to be the false alarm rate and the detection thresholds.
The cool thing will be to see how well it does on some blind tests
I’m having fun learning R… did you have any luck with the GISS data I uploaded?
Thank you to those believers who have answered me with civility.
I would point out that nothing in your answers or the evidence indicates anything dangerous is happening or will happen.
There are only models.
As pointed out over at Pielke Jr’s blog, many of those predictions of doom have been shown in the actual event to be wrongly overstating the risk in a substantial way.
Why should anyone care about yet more declarations of doom from catastrophic AGW promoters?
On an unrelated note, I took the GISS v2.mean_comb file you provided Mosh and ran it through my model:
http://i81.photobucket.com/albums/j237/hausfath/Picture390.png
Adding the extra stations really doesn’t do that much. GISTemp is also still fairly different (even from an only dark station series), which suggests to me that the differing gridding method and anomaly generation (RSM vs CAM) makes a pretty big difference. That said, other series using variants of RSM (e.g. Nick Stokes, Chad) find results quite similar to my CAM results. It’s slightly mysterious why GISTemp land is so low over the past few decades.
http://i81.photobucket.com/albums/j237/hausfath/Picture391.png
The trends (in °C per year) turn out to be:
1900-2009 trends:
GISTemp 0.0651
GISS All 0.0768
GISS Dark 0.0612
GHCN All 0.0746
1960-2009 trends:
GISTemp 0.1679
GISS All 0.1999
GISS Dark 0.1889
GHCN All 0.2024
So all things being equal, adding the additional GISS stations increases the trend slightly over the century but decreases it slightly in the last few decades.
Zeke,
To make sure I follow what you’re up to
You’re taking the stations GISS uses, putting them through your program, and comparing the result to GISTEMP, and seeing GISTEMP running a bit cooler than your version, recently?
This is strictly the land index, no ocean?
The obvious thought is GISS’s UHI step, but eyeball tells me your difference is bigger than what that does.
Carrot,
Yep, land only. I also include a reconstruction using the GISS stations that are dark (and dropping the B and C nightlight stations). GISTemp land still runs quite a bit lower in recent decades than only GISS dark stations run through my model.
The GISTemp land record I’m using is from http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt
carrot.
I posted the output of GISTEMP after step zero.
In step zero GISS does the following:
1. Add in 100 or so stations from the SH (Antarctic, islands, etc.).
2. They “munge” USHCNv2 into the mix. I haven’t taken the time to fully understand how they combine GHCN (for the US) with USHCN for the US. GHCN has like 1,800 US stations and USHCN has a subset of that, like 1,200, so these two get spliced together… see the Y2K problem for some background. Nothing dramatic happens here anymore.
3. They fix a couple of records; very, very minor stuff.
Anyways: If everybody uses that comb file, then the only differences in GISS will come down to this:
A. GISS uses a different method for COMBINING DUPLICATES. Right now everybody just averages the duplicate records in GHCN. Hansen does NOT.
B. Spatial methods.
Anyways, whatever differences there are between Zeke running the “comb” file and GISS’s final output are ONLY methodological differences and not data differences. Same input.
Put another way: GISS has an initial step, step zero, in which they do a bit of source combining and data cleanup. The output of that is a file that everybody can use; it includes more stations and cleans up some warts.
Zeke, do a hemisphere cut: NH versus SH, since the additional GISS stations are mostly SH.
Also, do a USA comparison; that will show you what the changes entail there.
It’s known that NCDC uses all of GHCN for the US, and they show more warming than GISS, which does some different things in the US.
Right, I’m familiar with the comb file. Was just making sure what Zeke was looking at.
If you run ccc, you can get GISTemp land without the UHI adjustment. Again, I don’t think that’s the issue here, but it’s worth taking it out.
Steve,
Yes, learning R (or any new language) is fun. Yes, I cleaned up my program and processed the data. I can confirm Zeke’s statement that ‘Adding the extra stations really doesn’t do that much.’ Here’s a comparison:
http://treesfortheforest.files.wordpress.com/2010/05/v2-mean-and-v2-mean-comb.png
Chad broke the site 😛
Fixed now.
Mosh: running N and S Hems as we speak.
Zeke,
Ha. I was wondering why the ‘edit this comment’ wasn’t working too well. Even after I cleared my cache, the site still knew it was me and wouldn’t let me post a new comment.
CE.
I was thinking of a couple of things for a run of GISTEMP.
First I was gunna upload the data after step 1. In step one Hansen combines duplicates. Again, there should be differences in the 1/100s place but nothing bigger than that; I’d be shocked if there was. So anyways, then I was gunna run GISS with no urban adjust. Probably need to check with the ccc guys on that. I would be WAYYY more comfortable with them working directly on this intercomparison as I don’t want to upload something that is out of date.
Anyway, next I’ll upload the step 1 output.
Funny, if everybody shows that GISS is low in the last few years, that will be something for Zeke to write about. I’ll suggest that people look at how models compare to Chad or Nick rather than CRU or GISS. Seriously. I mean in the past I preferred GISS to CRU because GISS was at least open. Now with several open approaches showing more warming than GISS…
Imagine the hooting and hollering if Judith Curry started using the community’s temperature series… ha, especially if it was warmer…
I’ll look tonight, but I’m pretty sure that STEP2 pumps out a set of files without the periurban adjustments alongside the PA files.
Ya, was just looking at that. Ron, you run GISTEMP more frequently than I do; it would be good to post up the various intermediates for people to use to do a proper benchmarking.
Err, also Chad found a minor annoying thing with some issues with GISSINV… I could figure out the Antarctic problem, but you might want to have a look as well.
So the answer is, yes, there is an associated set of ‘non-periurban’ files. Just popped a short post and the associated files. I’ll push up the complete set of intermediaries sometime this weekend.
http://rhinohide.wordpress.com/2010/05/21/gistemp-town-mouse-and-country-mouse/
.
What did Chad find in GISS v2.inv?
Mosher, you can change tone, make a few more accusations, avoid answering issues, frame the debate, appear to become the useful busy bee, play the strawman or demonstrate your clairvoyance all you want, you still need to read the literature.
Ron,
I found that there are stations listed in the GISS inventory that aren’t actually in the v2.mean_comb file. I found it because one step in my program assumed that all the stations listed will be found in the data file and crashed. I’ve posted it in the comments here.