Now that the data, which news-report soundbites tell us have always been available, are finally accessible, people are looking at them. Admittedly, everyone is starting by just looking at the ones they care about. But on the heels of Willis’s report on the temperature adjustment at Darwin, we have Dr. Richard Keen of the University of Colorado examining some central Alaska temperatures. These are discussed at The Air Vent, in a post titled “Alaska Bodged Too”.
By the way: If you are aware of anyone posting data analysis or giving particularly good coverage to Copenhagen, Climategate, the new temperature series, etc., let me know. I added Bishop Hill to my Climate Today (my aggregator), but I’d like to add anyone who breaks lots of stories so I can squeeze in anything new. (Both climate-cooler and climate-warmer sources welcome.)
Update: Dec. 9.
Some maps have appeared in comments here or at Jeff’s. These are useful when discussing geographic weighting, the effects of station adjustments etc.
Terry_MN posted a map of the stations:
Map of AK station locations created by Terry_MN
Sod posted a map to show UAH comparisons:
The obvious question is: How do the results in Dr. Richard Keen’s analysis compare to the UAH results? Mind you, we don’t expect the lower troposphere temperature trend to exactly match the surface, but we do expect some similarities. (I see the coastal stations in a different “band” of temperature from the inland stations. So, it might not make sense to average these together.)
You might look at Benny Peiser’s site
http://www.thegwpf.org/
if you haven’t already seen it.
Thanks– Added. I also added the UK green web site. (If it turns out to be non-climate green stuff I’ll eventually delete that one.)
“Both climate-cooler and climate-warmer sources welcome.”
Well, you know.. how about Science Daily? –
http://www.sciencedaily.com/news/earth_climate/climate/
When did you last lead with anything that wasn’t anti-warmer snark?
If you want to blog about the science, Lucia, it’s out there….
I notice that Jeff Id didn’t take the elementary step of weighting geographically. I have to wonder if the majority of his stations are on or near the coast. Not weighting correctly could easily mess up the results.
Simon:
(a) The article you cited does not pertain to temp data analysis, and (b) it’s by Stefan Rahmstorf, an RC charter member of The Team. When you get similar sea-level projections from some independent scientists unaffiliated with movement academics, be sure to let us know.
You can always start your own blog Simon.
You’re kind of scraping the bottom when you start complaining that other people aren’t posting on things you personally find interesting.
JohnV:
There may be lots of issues with the station data but the overriding mystery is still this: where do the homogenizing adjustments come from and why are they invariably upward for newer data and downward for older?
JohnV:
If the stations don’t individually show a trend, then a geographical weighting of them isn’t going to magically produce a trend.
(Also, the first sentence is “I recently completed a study of central Alaska’s climate,” and this is a guest post by Richard Keen.)
George Tobin (Comment#27027)
I didn’t cite an article, I linked to Science Daily’s site. It just happened to have a particular article leading, but if you look tomorrow it will have another one – very possibly one which questions the one from the day before. That’s the way of science.
No problem, George, if you don’t want to keep in touch with scientific papers that are being published. Maybe you prefer the snark-talk as a means of figuring out what’s happening with the climate.
Carrick:
You’re right, but only if the individual stations don’t show a trend. Jeff Id didn’t seem to check that. Did you?
John V.
First, it would be a good idea for you to get the authorship correct.
Second, why don’t you then (like Steve McIntyre) show how your arguments/adjustments impact the conclusions? Then we can have some good discussion.
“Second, why don’t you then (like Steve McIntyre) show how your arguments/adjustments impact the conclusions?”
That’s interesting, jef – how does Tiljander impact the conclusions? Numbers please.
jef:
You’re right — I saw that it was posted by Jeff Id and missed the authorship. That’s my mistake.
I merely suggested that any analysis of geographical temperature trends has to be done right. If someone is going to make conclusions they should probably do the analysis right.
Like Steve McIntyre, I am pointing out sources of error. Like Steve McIntyre says, there is no need for me to quantify them. The analysis was not done correctly. I am merely the auditor in this case. 🙂
JohnV:
I’m asking them to post the original temperature series.
I was just pointing out the obvious: 1) no coastal stations were involved, and 2) geographical weighting isn’t an explanation of the difference by itself.
As to checking that… You asked them, not me.
folks, this “no-adjustments” stuff is garbage, and you should know this by now.
here is a look at the last 3 decades by UAH (among all sources!)
http://climate.uah.edu/25yearbig.jpg
correct me if i am completely wrong, but the satellites seem to indicate a +0.3°C trend per decade.
and JohnV had it right (again). stations on the coast might have missed that one…
sod:
I’m not “right” about anything. I’m just asking questions because Dr. Keen’s accusation is pretty serious, and because Jeff Id and Lucia both thought it was worth promoting.
JohnV: “I’m just asking questions”

JohnV: “The analysis was not done correctly.”
That isn’t a question, John, it’s an assertion. If you stand by it, take the data, rerun the analysis “correctly,” and post your code and results. Until then, you should withdraw your assertion.
JohnV–
Yes. Not weighting properly could affect the results. Obviously, things are going to take a little time to settle. This data that has always been available has only just become accessible, after all.
Right now, I’m pretty much linking everything of interest on “climategate” and when something interesting happens, Copenhagen. With respect to any temperature series analysis: Obviously, these are all somewhat preliminary. We’ll see more with specific issues addressed later.
I am trying to filter out the really boring stuff or the stuff that makes Copenhagen sound like some sort of trivial joke. On the convention, I wish there were more interesting stuff. But really… it’s mostly boring. The aggregator picked up things like: the convention opened with a really boring YouTube video showing a little girl in the playground; then it rained; then she went home; then she had a weird nightmare in which a huge fissure opened in the earth and her teddy polar bear was almost lost in the crack; then…. (Which made me think: Global warming will cause the earth to split in two?! This falls between boring and such ham-handed PR that… well… if I ran it, Neven would probably complain I was being too hard on those who want action.) There is also the story on free sex for Copenhagen attendees from prostitutes, and Nature’s exciting breaking story reporting that Copenhagen attendees chat with each other in the hallway.
There are so many links, I’m trying to squeeze in almost everything right now.
TerryMN:
I guess I am doing a little more than just asking questions. 🙂
It’s true that the analysis was not done correctly. I do not know how much error it will cause. I am asking questions of Dr. Richard Keen — perhaps you should also ask him to post his raw data, code, and results. Are you not concerned that he may have made a mistake?
TerryMN–
JohnV’s comment is permitted. If he thinks the Alaska analysis isn’t convincing, he gets to say so and why. People are doing things quickly right now– I’m sure all will fail to live up to perfection during the initial data release period.
well JohnV, looking at that UAH image, i think you are asking the right questions.
———————–
lucia, here is a question for you:
do these attempts of an analysis based on unadjusted temperature data make sense?
sod, as you probably are aware, surface stations show a systematic positive bias in temperature trend relative to satellite, even after you correct for the difference in their elevations.
As to the numbers, it’s about 0.13°C/decade for UAH (fit from 1978 to current), 0.16°C/decade for GISS, and 0.17°C/decade for CRU.
Also, comparing the adjusted to unadjusted data is part of the verification process.
And since you don’t know anything about data analysis, I’d advise keeping your mouth shut so your foot doesn’t end up in it.
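For reference, decadal trends like the ones quoted above are just ordinary least-squares slopes fit to the anomaly series. Here is a minimal sketch using synthetic data; the 0.15°C/decade trend below is invented for the demonstration, not taken from any actual record:

```python
import random

# Build a synthetic 30-year monthly anomaly series with a known trend
# of 0.15 C/decade plus noise (purely illustrative numbers).
random.seed(0)
months = list(range(360))
trend_per_month = 0.15 / 120.0          # 0.15 C/decade -> C/month
anoms = [trend_per_month * m + random.gauss(0, 0.2) for m in months]

# Ordinary least-squares slope: cov(t, y) / var(t)
n = len(months)
mt = sum(months) / n
my = sum(anoms) / n
slope = sum((t - mt) * (y - my) for t, y in zip(months, anoms)) \
        / sum((t - mt) ** 2 for t in months)

print(round(slope * 120, 3))   # recovered trend in C/decade (close to 0.15)
```

With real station or satellite data the same fit applies once the series is expressed as anomalies; the differences between the quoted UAH, GISS, and CRU numbers are differences in these slopes.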
JohnV
here is someone posting on 16 other stations in Northern Territory of Oz
http://joannenova.com.au/2009/12/smoking-guns-across-australia-wheres-the-warming/
Lucia:
If this was my blog I would probably do a basic check on the validity of a result and accusation before promoting it to a front page article. I would do that to preserve my own credibility.
The Alaska analysis by Dr. Richard Keen could be right. It’s possible that his choice of stations and use of simple averaging will lead to the correct result (using incorrect methods).
JohnV, it’s not your blog, and Lucia isn’t losing credibility simply for running things that may be wrong. Seriously, lighten up. Nobody here is taking anything linked to as the gospel according to Bruce.
The way she would lose credibility would be if, like some blogs, she ran erroneous papers then deleted comments and banned commenters that were critical of them.
In any case, we learn more from mistakes than we do from things that are done correctly. So having a forum to discuss them and pick them apart is a Good Thing[tm].
and here is a site to get temps from Australia that Jo Nova (link above) is using
http://www.bom.gov.au/climate/data/weather-data.shtml
have at it JohnV
oops sorry, should have put the above posts in thread about Darwin
JohnV–
If I’m not mistaken, you installed blog software at your blog but then never post, even when people have been begging you to collect together your many analyses published in comments across multiple blogs. In my opinion, your habit is not specifically tailored to improve your credibility, and I see no particular reason to imitate you.
Blogs foster conversation. Conversation is occurring. I think that’s good.
i am really sorry for stating my opinion here, even though i don’t know anything about data analysis.
but the article says:
My averages show that the past three decades have shown no warming (since the PDO shift in 1977)
but a look at UAH over Alaska seems to give a trend above 0.3°C per decade, over the last 25 years.
(please correct me if i get the borders of Alaska or the information of that graph wrong. i am not living on the american continent, and i don’t know anything about data analysis.)
http://climate.uah.edu/25yearbig.jpg
now, a 0.3°C-per-decade trend gives about 1°C of extra warming over those 30 years.
and the article concludes:
One can only guess what “corrections” were applied to the GHCN and IPCC data sets, but I can easily guess their magnitude – about 1 degree.
looking at UAH data, GISS seems to be doing a pretty good job with those “corrections” that we don’t understand at all….
ps: and please enlighten me: when i do an adjustment (for example for a height change of the station, or for an equipment change, or for a time-of-observation change), i will simply modify the numbers. what good would it be to compare the modified to the raw data? are you expecting the software to make calculation errors?
Lucia,
I have not had time to post. You’re right. But I try to avoid talking about work I did before. I can’t support it so I stay quiet. (I’m not suggesting that anyone here is doing anything different).
I’m definitely not suggesting that you should imitate my inability to gather my work together. That’s not something I’m proud of. I do think you should consider a little more balance, but your regulars disagree and that’s fine.
I’ll do my best to sit on the sidelines and wait for Dr. Keen’s raw data, code, and results.
John,
I’ve plotted the stations / station combos here:
http://maps.google.com/maps/ms?hl=en&gl=us&ie=UTF8&oe=UTF8&msa=0&msid=116463264361123954466.00047a3ee7551823fe9db&ll=62.492028,-144.228516&spn=27.545866,69.960937&z=4
3 coastal, 8 interior (by my count). How would you suggest they be weighted? That way we can move forward – thanks!
John V, the adjustments are well known in the field. This is not new information, and the magnitude of the adjustments is not of an unusual degree. My opinion is that since it is an Alaska set of thermometers, it’s unlikely that the trend would be affected very much by area weighting.
Certainly you have identified a source of potential error; however, I find the hypothesis that the study would be reversed by it ‘very’ unlikely. I think the criticism that perhaps more stations need to be examined, or conditions around stations à la surfacestations, is a stronger one.
TerryMN, I’d love to see the individual temperature traces for the interior sites considered by Richard Keen.
That’s the place to start, IMO.
I’d do it, but I’m getting ready for a field expedition. (Ending up in San Francisco for AGU.)
thanks for that map Terry.
now look at the UAH map again:
http://climate.uah.edu/25yearbig.jpg
if the 7 most southern stations give a trend of 0.1°C per decade (from the UAH data) and the 4 northern stations give 0.3°C per decade, then a simple unweighted average would give an Alaska figure of 0.17°C per decade
even though the UAH map suggests a trend around 0.3°C…
with the GISS result being very close to the UAH result, i would suggest a regional weighting that is similar to what GISS is using…
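The arithmetic above is easy to make concrete. The sketch below uses the hypothetical station counts and trends from the comment (not actual data) to compare a straight average against a crude two-band weighting:

```python
# Hypothetical per-station trends in C/decade, following the comment:
# seven southern stations at 0.1 and four northern stations at 0.3.
south = [0.1] * 7
north = [0.3] * 4

# Straight (unweighted) average over all 11 stations
stations = south + north
unweighted = sum(stations) / len(stations)
print(round(unweighted, 2))   # 0.17

# Crude area weighting: treat each latitude band as half of the region,
# so each band's mean gets equal weight regardless of station count.
band_means = [sum(south) / len(south), sum(north) / len(north)]
weighted = sum(band_means) / len(band_means)
print(round(weighted, 2))     # 0.2
```

A real gridded average generalizes this: stations are binned into grid cells, cell values are averaged, and cells are weighted by area, so a cluster of nearby stations cannot dominate the regional mean.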
Carrick: “I’d do it, but I’m getting ready for a field expedition. (Ending up in San Francisco for AGU.)”
Have fun! I’m riding out a potential blizzard, so may have time to plot the individuals (other than the 100K other work-related things still on my plate 🙂 )
TerryMN:
Thanks for the map.
I agree that there are 3 coastal stations. I’m confused by the total count — Dr. Keen said he used 9 stations but there are 11 in your map. I’m also curious why the stations are all in Eastern Alaska. Does anyone know if there are stations in the rest of the state?
My original concern about coastal vs interior has now been replaced by concern about eastern vs western. Why these stations? And why compare a trend from eastern Alaska stations to the IPCC’s trend which was presumably for the whole state?
As skeptics, I’m sure we all want to know the answers to these questions. Perhaps Dr. Keen can provide some answers (along with raw data and code of course).
John,
There are 12 (McCarthy and Kennecott are on top of one another at higher zoom levels). My method was to plug each station into google maps (there were 14 stations total, merged to 9 separate temp trends because of different periods of coverage). I couldn’t get Central (combined w/Ft. Yukon) to plot correctly, and there was one station that didn’t plot – can’t remember off-hand which it was.
Here is Keen’s explanation of site selection:
My study and the GHCN use the same stations, because there are no other long-term stations in the regions.
I don’t know enough about weather stations in AK to know if this is correct or not, but suspect the info could be found fairly quickly on surfacestations.org or one of the GHCN affiliated sites, but (afaikt, only in tabular format).
Oops, sloppy acronym – as far as I can tell, not the nonsense afaik + t
John V,
You are not serious. You have said for years, yes years, that you will write up your blog but it never happens. You appear occasionally and snipe from the sidelines then disappear for long intervals. Your repeated comments about “along with raw data and code of course” just prove that you are not really serious.
JohnV:
Here’s a map I made a while ago of GHCN version 2 stations.
Link.
I don’t have a database of which stations are operating in 2009.
But you can see the network is quite sparse in Alaska (and has been getting sparser over time, unfortunately). Apparently nobody thinks that instrumenting the planet for temperature is sexy anymore.
Dave Andrews:
You’re right — I do keep meaning to write things up properly in a blog. I do keep promising to do so when pressed. Life gets in the way.
As for sniping from the sidelines — that’s what we all do as blog commenters.
I always provided my raw data and code when doing any analysis. That’s what everybody should do so that their work can be checked and mistakes found. (And yes, that includes CRU). We shouldn’t need to do the basic work to check Dr. Keen’s conclusions — he should have made the data and code available immediately. Right?
Would this be an opportunity for some crowdsourcing work? Post details on how to do this correctly and a link to the released data and let everybody chart their own neighborhood?
“JohnV (Comment#27036) December 8th, 2009 at 3:27 pm
jef:
You’re right — I saw that it was posted by Jeff Id and missed the authorship. That’s my mistake.
I merely suggested that any analysis of geographical temperature trends has to be done right. If someone is going to make conclusions they should probably do the analysis right.
Like Steve McIntyre, I am pointing out sources of error. Like Steve McIntyre says, there is no need for me to quantify them. The analysis was not done correctly. I am merely the auditor in this case. 🙂 ”
Like Steve, you too seem to have a strange idea of what constitutes an audit. An auditor does not just report on errors, but also on what is correct. By only reporting errors, you may just inadvertently give the impression that AGW only consists of errors. Not intentional, I’m sure.
I second JohnV’s request for the raw data and code. BTW good to see you again John!
I think its funny how people get this burden of proof thing all backwards. JohnV has every right to question results and demand the data and code BEFORE he gives his consent or renders a judgement. He is under no obligation to do his own analysis. That “put up” or “shut up” mentality has seen its day.
WRT Lucia posting on things before vetting them: well, that’s a danger. Just ask Gavin, who posted results from Tom P, Gavin’s guru. Let’s face it, if bloggers only posted stuff with data and code to back it up… we would all be reading CA, and you CO2 skeptics would not be able to annoy me with your silly Woods experiment. Hmm.
JohnV:
In my opinion, if you’re going to make assertions in a public forum, you either need to publish the code or describe the algorithm well enough for it to be vetted.
In Lucia’s cherry picking example, she doesn’t provide the code, but describes all the steps needed to replicate what she had done. Of course that example was easily replicated; in more complex problems explicit code would be better.
steven–
There would be no law blogs, few political blogs, no fashion, knitting or cooking blogs. No travel blogs and no…
People can post whatever they want. Readers can each decide what they find convincing. I give the same argument against “It’s not peer reviewed”, “I don’t have the code”, “I don’t have the data”.
There isn’t “one rule to bind them all”.
If you are headed to AGU in San Francisco this December then LOOK ME UP.
In 2007 Steve Mc, Anthony, CTM, and I had a great dinner as recounted on both of their sites.
This year I can probably pull in Tom Fuller as well.
http://www.agu.org/meetings/
Carrick–
I would have given it to anyone who wanted it. But I don’t routinely upload all the code. It’s a small PITA because WordPress often refuses to let you upload certain things. So, you have to leave WordPress, ftp to the directory, add the link, etc. In most cases, if the algo is straightforward, no one wants the code. Anyone who wants to can replicate based on the narrative, and they know they can.
The problem we’ve had with the Phil Jones/Briffa/Mann stuff is that it’s quite clear the information required to check it simply was not provided. Period. Moreover, the claim that it was not provided is plausible because the instant the stuff was provided, we started seeing blog posts with people having a look and telling us some preliminary observations.
Are those preliminary observations fully documented? Proof-read, checked, copy edited? No. Could these people find errors? Yes.
But whether or not JohnV or others think these should be revealed without full code, referencing, etc., I think it’s ok. The reason is this: It shows that the moment the data were available, people who wanted it actually started looking at it.
That was the purpose of asking for it, and that’s why the requests should not have been refused. In my opinion, the fact that they are poring over the data is a reportable story in and of itself.
Lucia,
I’m not saying that people can’t post whatever they want. I’m saying this: If you want to persuade me with respect to a question of data and the analysis of the data, the BEST EVIDENCE is a copy of the code and data. And further, if you are not willing to present your best evidence, then I’m probably going to reserve judgement. Just telling you up front.
I can’t think of a more fair way to debate an issue. I tell you up front the conditions you have to meet to change my mind. You have the means at your disposal to meet those conditions; they are not onerous. If you don’t want to meet those conditions, why then we can have a fun discussion, but not a serious one. I don’t object to having fun.
steve mosher
Sure. I have no problem with that. I think we’ve been here before. I think people have a right to reserve judgement for whatever reason of their choosing. Each one will weigh factors differently.
Your desire for code makes some sense, but I don’t weigh it as heavily.
Sure. And other people tell me theirs, and I basically decide how much effort to expend in different areas. After that, we can all agree to disagree. I do what I think and judge best. You do and judge what you think best. Not a problem for me.
This is such crap. People demand data and code, then as soon as some data is available some of the same people post critiques without any data and code. This is such hypocritical shit. A shame on all of those houses that would engage in such hypocrisy; shame on you, shame on you.
I am not surprised, though.
I’m not sure I understand the complaint, Simon. The data and code represent an unambiguous way to determine what was done, and to what. If criticisms are found, it is quite possible they can be conveyed using mathematical or other descriptions, and they would be understood to apply to the method which had been documented by the data and code. What would be hypocritical about this?
Sod,
“but a look at UAH over alaska seems to give a trend above 0.3°C per decade, over the last 25 years.”
What is your reference for the precise accuracy of UAH?? Because UAH is relatively close to GISS, and we know some of what GISS and NCDC does to the data, I am more likely to doubt all three!
You should look into the many difficulties with calibrating the satellite instruments. It would be easy for them to have ended up with a slight bias keeping them close to the GISS/HadC sausage.
Then again, they could be spot on!!! We just don’t KNOW.
oliver (Comment#27108)
I’m not sure I understand your point, Oliver. Where’s the data and code to check Keen’s analysis?
JohnV,
If you’re that concerned, why not contact Dr Keen and ask him yourself, or draw his attention to your blog, this blog, Jeff Id’s blog, or any of the many blogs currently discussing this? His contact details are not difficult to find. If you read his pdf, he’s also inviting comments.
JohnV,
” We shouldn’t need to do the basic work to check Dr. Keen’s conclusions — he should have made the data and code available immediately. Right?”
DITTO!!
oops, gotta do something about that knee.
I thought that was what was being asked for in order to settle some of the criticisms unambiguously. If that’s wrong, sorry — my mistake!
I find the logic hard to follow for this type of story. Luke Warmers all say it’s warming, but then come up with story after story that says it’s not.
John V did some good work on the GISS US temperature dataset. At the end of the day, I think he just showed that GISS’ homogeneity adjustment for the US is probably good enough. The other adjustments like TOBs done by the NCDC were not really the focus.
But John V also showed all his work and if someone wanted to double-check it and verify it, they could do so (although that, in itself, would have taken a lot of effort just like John’s original work did). I’ve also posted up larger-scale analysis and ensured to make the basic data and methods available.
So, that is what we should be doing when we are providing our own independent work.
There is no way lucia or Anthony or Steve can check everything that is provided to them by posters. Like I said, that often takes a great deal of effort and research.
And again, that is why the basic data and methods need to be made available in easy-to-use formats for anyone who has the interest and time to do the double-checking.
And that is also why CRU and the NCDC and the NSIDC and GISS and all climate researchers need to do the same. Having a Phd behind your name does not make you exempt from having to be clear about your data.
bugs–
huh?
The linked article gets a 0.69 C/century trend in Alaska. That’s lower than GHCN, but it’s warming. Admittedly, the author attributes that to a PDO switch– but we don’t have to buy that. Also, the computed magnitude might change for the reasons JohnV and others indicated.
But the article does not show “no warming”. So where do you get that?
Lucia
“Here is the GHCN annual temperatures for the same region. The GHCN data is dominated by an upward trend. My analysis gives an upward linear trend of 0.69 C/century (due to starting during a cold PDO and ending during a warm PDO)”
He is essentially saying that if you take out the PDO, there is nothing.
bugs–
As I said “but we don’t have to buy that.” He’s found a positive trend. We don’t have to accept his interpretation that it’s due to the PDO. We don’t even have to accept that he’s made no mistakes yet, nor that a small area in Alaska represents the world.
I think what is significant is that people are diving into the story. Of course we are going to hear stories where the data are off first. I even suspect people are picking the areas to look at based on where the outlier warming trends appear in the homogenized data. At a later time, after some thought, someone will sit down and do something systematic.
But in the meantime: Yes. I’m linking to the first reports. I think it’s interesting to watch what people do.
Lucia:
Actually I was siding with you on this one. 😉
In a case like this, people should write their own codes. It’s more instructive.
Carrick–
I know. I just like to mention that if someone asked promptly, I’d usually give it to them. Mind you, I might not have something for a year old blog post. I probably do, but I might not remember the name of the file etc.
simon sez:
“That’s interesting, jef – how does Tiljander impact the conclusions? Numbers please.”
.
Great question. The same question we’ve been asking at RC for over two months.
Lucia
“But in the meantime: Yes. I’m linking to the first reports. I think it’s interesting to watch what people do.”
It’s interesting in the same way it is interesting watching people represent themselves in court. What it adds to the science, I don’t know, except to raise a constant chorus of disapproval. Unless that’s what you were actually after.
bugs:
Actually, we’re interested in knowing answers.
I take it you’re more interested in preserving your Faith.
bugs–
I’m interested in many things. Now that more raw data are available, I’m interested in more public discussions of how the data were homogenized, and I appreciate seeing specific examples.
You know what, though? I don’t even know what the constantly spewed phrase “adds to science” is supposed to mean. We can develop theories and hypotheses; we can test against data. We can also reason to see how new theories and data fit with previous theories and explanations that seemed well supported. We look at data and try to interpret what it says in a larger picture.
I think people looking at raw data and trying to figure out the long term trend and our confidence in that is a worthy endeavor. If you think it doesn’t “add to science”… well… ok.
bugs,
“adding to science” is simply a code phrase used by alarmists which really means “not doing science in a forum where we can suppress what we don’t like”
Carrick
So much so that there is only one person trawling the GISS code.
It is clear that this does contribute to the science. Instead of treating heavily massaged “homogenized” data sets as if they were unquestioned pure measures, there will need to be an examination and complete disclosure of what adjustments are being made and why. The result will be more reliable data and tested methods.
For example, Willis Eschenbach’s work on the Australian data posted today was amazing and justifies a substantive demand for answers about methodology which can only be a good thing and “add to science.”
If these adjustment practices are found to contain many scientifically unjustified ingredients, then the issue for bugs will be whether the “chorus of disapproval” is actually more scientific in its disposition than the bad methodology of which the chorus disapproves. Possible?
My guess is that Osama bin Laden is more likely to send out for a rack of pork ribs than bugs is to break with the Faith but this will all “add to science” nevertheless.
The CRU debacle is turning out to be somewhat like the loose thread hanging from the side of a sweater. Now that it’s there for all to see, curious people everywhere are starting to pull on it and the whole sweater is coming apart before our eyes. Both bugs and Simon seem to have a hard time dealing with this.
Bugs
GISS is 10K LOC. What part of this don’t you understand?
here bugs.
http://chiefio.wordpress.com/gistemp/
Watch what 1 guy can do.
Also, you will get a flavor of what kinds of things matter to geeks like us.
http://chiefio.wordpress.com/2009/07/30/gistemp-f-to-c-convert-issues/
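For a flavor of the kind of issue that post digs into: records of this sort store temperatures as integer tenths of a degree, and converting tenths of °F to tenths of °C behaves differently depending on whether intermediate results are truncated or rounded. This is a generic illustration, not GIStemp’s actual code:

```python
def f_to_c_truncate(tenths_f):
    # Convert tenths of a degree F to tenths of a degree C,
    # truncating toward zero (what naive integer conversion does).
    return int((tenths_f - 320) * 5 / 9)

def f_to_c_round(tenths_f):
    # Same conversion, rounding to the nearest tenth of a degree C.
    return round((tenths_f - 320) * 5 / 9)

# 32.1F: truncation throws the fraction of a tenth away entirely
print(f_to_c_truncate(321), f_to_c_round(321))   # 0 1
# 33.0F: truncation gives 0.5C, rounding gives 0.6C
print(f_to_c_truncate(330), f_to_c_round(330))   # 5 6
```

Each such choice is tiny on its own, but applied across a whole network of stations it is exactly the sort of systematic detail that matters to geeks like us.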
I think people looking at raw data and trying to figure out the long term trend and our confidence in that is a worthy endeavor. If you think it doesn’t “add to science”… well… ok.
lucia, i am really shocked by your take on this. and even though it was not addressed at me, it looks like an answer to the question i asked in (Comment#27050).
do these attempts of an analysis based on unadjusted temperature data make sense?
and of course, in contrast to what you said in other posts, your answer is missing the most important first word: NO.
if we do not adjust for stuff that we know influences temperature (like moving the station uphill, time of observation, or change of equipment), then we just produce a very bad analysis.
when we just average data, instead of using a grid to give different weight to different stations, then we just produce a bad analysis.
if we claim that a dataset shows too much warming, because of adjustment errors (GISS), and we find another dataset (satellite UAH) that is NOT dependent on similar adjustments but is in good agreement, then we KNOW, that we have done bad analysis.
i am really shocked that the best reply i have seen from sceptics so far is your “well, i am just posting random stuff, and it might be interesting”. this is very bad practice!
ps: thanks for including that image of the map in my comment above!
With regards to Richard Keen’s short note on Alaska temperature trends, I don’t think it was intended to be considered a full fledged research paper (though there may be a more detailed examination that backs it up – it’s not clear). Anyway, given JohnV’s points (geographical weighting, raw data and “code”) and some of the other issues raised here, I dropped the good doctor an email and suggested he visit (if so inclined) and respond to some of the issues.
One can but ask…
bugs:
Give people time.
Unlike Phil Jones and his $22 million, we’re doing this for free.
You think the GISS code is interesting to look through, and you have the time? Go for it.
sod, you really don’t know that one compares data before and after adjustments as part of any normal verification process?
I wish you would quit lecturing people who are obviously a lot more experienced at this stuff than you are. It kind of makes you resemble a gnat after a while.
I’m not shocked you can’t get it. Like most of the “advocates” on this website, you are basically clueless about how science really works.
“Unlike Phil Jones and his $22 million, we’re doing this for free.”
It’s not ‘his’ $22M. There are copious amounts of abuse for free.
Lucia,
Am I missing something?
“Now that the data which news report soundbites tell us have always been available are finally available in a way that makes them accessible..”
What are you referring to? Do you mean the Jones-adjusted data released by the Met Office?
sod:
How is it a “bad analysis” to identify the difference between raw data and the final reported product in order to determine the methods and reasons for the adjustments? Is it a sin to peek at that man behind the curtain? Does one have to be a Believer to have the right to examine the methodology?
Your response was unscientific and kneejerk. You presume the correctness of the adjustments rather than subject them to reasonable scrutiny.
sod, you really don’t know that one compares data before and after adjustments as part of any normal verification process?
that is not what he does. he does an analysis, as if this was a sensible thing to do. it is not.
a school, which is measuring the height of its pupils, decides in a certain year that they should no longer wear their uniform shoes while being measured.
calculating the “trend” over the unadjusted data simply makes no sense at all. i am curious, how you would explain those shrinking kids…
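sod’s shrinking-kids analogy is easy to put in numbers. Below is a minimal sketch (all figures are hypothetical, and `ols_slope` is just a throwaway helper): pupils grow 1 cm/year, but uniform shoes worn during the first five measurements add a step, so a naive trend fit over the unadjusted record understates the growth rate.

```python
# sod's school analogy in numbers (all figures hypothetical).
# Pupils grow 1 cm/year, but for the first five years they were
# measured with uniform shoes on (+3 cm). Fitting a trend to the
# unadjusted record understates growth; removing the documented
# step recovers the true rate.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

years = list(range(10))
true_height = [140 + y for y in years]                      # steady 1 cm/year
measured = [h + (3 if y < 5 else 0) for y, h in zip(years, true_height)]
adjusted = [m - (3 if y < 5 else 0) for y, m in zip(years, measured)]

print(round(ols_slope(years, measured), 2))  # 0.55: biased low
print(round(ols_slope(years, adjusted), 2))  # 1.0: true growth rate
```

The same arithmetic applies to a station move or an equipment change: the size of the step term tells you how much the adjustment decision moves the trend.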
How is it a “bad analysis” to identify the difference between raw data and the final reported product in order to determine the methods and reasons for the adjustments?
ignoring obviously useful adjustments, like time of observation, is simply stupid.
replacing an area-weighted approach with an obviously biased simple average is also simply not a clever thing to do.
as JohnV said, if he is right, then it is only by chance. i call that bad “analysis”.
and the UAH data demonstrates that he is NOT right by chance.
————————–
can anyone here explain to me, why the UAH data shows massive warming over Alaska over the last 25 years, while his analysis shows zero warming?
“Also, you will get a flavor of what kinds of things matter to geeks like us.”
The temperature is tracked as anomalies. If there is a 10C bias up, it doesn’t matter, because there will still be a baseline from which anomalies will be calculated.
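That claim is easy to check in a few lines. This is only a sketch with hypothetical numbers: adding a constant 10 C bias to a record shifts the baseline by the same amount, so the anomalies come out unchanged.

```python
# Sketch of why a constant bias cancels in anomalies (numbers hypothetical).

def anomalies(series, baseline):
    """Anomalies of `series` relative to the mean of `baseline`."""
    base = sum(baseline) / len(baseline)
    return [t - base for t in series]

temps = [10.1, 10.3, 9.9, 10.4, 10.6]
biased = [t + 10.0 for t in temps]        # same record, +10 C instrument bias

a1 = anomalies(temps, temps[:3])          # baseline: first three values
a2 = anomalies(biased, biased[:3])

print([round(x, 2) for x in a1])
print([round(x, 2) for x in a2])          # same anomalies: the bias cancels
```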
The US temperature trend has been adjusted upward by 0.45C.
The rise in the adjusted temperatures in the US since 1900 is just a little more than this.
We absolutely need to check how Thomas Peterson and the NCDC did these adjustments.
The Homogenization adjustment (which should catch the UHI for example and adjust the urban records DOWN to match the rural records) has instead adjusted the average maximum temperature UP by 0.45C – exactly opposite to that which should be expected (minimum is unchanged).
Here is the TOBs adjustment for the US.
http://img69.imageshack.us/img69/6590/ustobs.png
And here is the Homogenization adjustment.
http://img109.imageshack.us/img109/7312/ushomogenizationovertob.png
These are shown in a paper published in the Bulletin of the American Meteorological Society in 2009 (and a free version is here).
http://ams.confex.com/ams/pdfpapers/141108.pdf
Bill, reading the text of the document, I think that the adjustment in the second fig you post, fig 7 in the document, is more to do with a correction for change in instrumentation, not a homogenization adjustment.
Though quite why changing the instrument should lead to such a large adjustment, I dont know.
I agree that a detailed check on all the adjustments is needed. And can we have the raw data please? How many times do we have to ask?
Paul M, it is the homogenization adjustment that the NCDC is using for US temperatures now. The fact that it is called homogenization and doesn’t do what we normally think of as homogenization is an issue but this is how they are adjusting the raw records.
Here is the abstract to the paper describing it which was published in the Journal of Climate in 2009. I haven’t been able to find a full version (there is one on the findarticles site but I don’t like using that site).
http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F2008JCLI2263.1
To state the obvious, we are starting to see some very preliminary analysis of raw data that is interesting. It would be foolish to draw broad conclusions at this time regarding any particular adjustments, let alone the entire data set that goes into determining global average temperatures. I can’t fault this blog for posting what in essence are early discussions. I am interested in reading this material and apparently so are many others.
Sod–
First, I was addressing bugs– starting with “bugs”, and my answer has nothing to do with your question, which, frankly, I never even noticed.
I don’t know why you think the question of whether or not something is “newsworthy” is the same as whether or not computations on unadjusted values “make sense”.
* Do temperatures need to be adjusted for things like station moves? Yes.
* Since you want to focus on this question: Does it ‘make sense’ to compute the trend before adjusting? YES. It always makes sense to know what the values are before adjusting. If nothing else, knowing how much your adjustments change the result gives information about the likely uncertainty in the final result. (As you see, I utterly disagree with you on this.)
* Should we consider the trends based on the unadjusted data “true” or “final”? No. If there are obvious or advisable adjustments, we should apply those and use the adjusted data. The trends based on adjusted data may turn out different from those based on unadjusted data. If the adjustments make sense, then that’s fine.
* Is it worth airing the magnitude of the adjustments and the uncertainties associated with deciding to adjust? Abso-friggin-lutely, Yes.
People are going to make lots of comparisons. You might not like it, but ….well.. tough. Put on your big boy pants and deal with it.
If the other data set is supposed to measure the same thing, of course. So, when someone shows you their results, and you think they are bad for this reason, say so. To have this conversation, we must have the conversation, which on blogs involves linking, reading etc.
So, the initial result is “newsworthy” as far as “blog newsworthiness” goes. Do I think the New York Times is going to run the Alaska and Darwin analyses as they stand? No. But they are blog newsworthy and I’m going link them. Don’t know why you think this is shocking.
Huh? First: I didn’t say I’m posting random stuff. I said I’m posting things I find interesting, but I am not verifying that everything is fully vetted, the final results are all accurate etc. That’s not the same as “random stuff”.
Linking to stuff for the purpose of discussion is not saying it’s right or wrong.
In any case, it’s not bad practice to show preliminary results and let people discuss approaches to analysis, flaws, pitfalls, possible comparisons etc. This is done by collaborative groups all the time.
Because the stuff is being aired, even you have an opportunity to explain what you think might be wrong with the Alaska analysis, using words, graphs etc. You can do that here, or you can start your own wordpress blog (which will give you the ability to write top of the fold and pull together graphs, words, links etc.).
If you think you have a fully pulled-together counterargument, and don’t want to start a blog, I’ll let you guest post. If the Alaska thing is clearly wrong, then your showing that the results at Jeff Id’s are clearly wrong would be useful.
Merely expressing incoherent shock is not useful.
On the issue of geographic weighting– I think for Earth’s surface temperatures, or even temperatures in a “region”, that’s required. You can’t put 1000 thermometers in Nebraska and 1 in Portland, Oregon and average to get the average for the US. Also, I found a link to the google map terry_MN created, and I’m going to insert the image so people can see. That, along with your map for UAH, could clarify– and also tell us things the Alaska analysis should be considering. (Like, UAH shows central AK temperatures warming more rapidly than those at the coast, so it might make sense to just keep these separate before comparing to UAH.)
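The Nebraska/Portland point can be shown with a toy average. This is only a sketch, with hypothetical station values: a simple mean is dominated by the dense “coast” cluster, while averaging grid-cell means first gives each region one vote.

```python
# Toy illustration of area weighting (station values hypothetical):
# four clustered coastal stations and one inland station.
from collections import defaultdict

stations = [                      # (grid cell, temperature anomaly in C)
    ("coast", 0.1), ("coast", 0.2), ("coast", 0.0), ("coast", 0.1),
    ("inland", 1.0),
]

# Naive average: every station counts equally, so the cluster dominates.
simple = sum(t for _, t in stations) / len(stations)

# Gridded average: average within each cell first, then across cells.
cells = defaultdict(list)
for cell, t in stations:
    cells[cell].append(t)
cell_means = [sum(v) / len(v) for v in cells.values()]
gridded = sum(cell_means) / len(cell_means)

print(round(simple, 2))   # 0.28: pulled toward the coastal cluster
print(round(gridded, 2))  # 0.55: each region weighted once
```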
Sod–
I’ve added an update grouping Terry’s map and the UAH map you provided. I think showing the two together is useful, as it emphasizes JohnV’s point. We can’t have much confidence that weighting a whole bunch of thermometers on the south/east coast of AK with a few in the center gives an average for AK.
thanks for showing the two maps lucia. i think together they really make a strong argument.
looking at your long response, i agree with some of the points you made, and i strongly disagree with others.
i hope i’ll find some time later for a longer response.
Just for fun I went over to the GISTEMP site and had a look at the temperature trends in Alaska. I generated a plot of the trend from 1900 to 2009 using a 250km smoothing, GISS analysis for land and no data for the oceans.
You can see what it looks like from this graph:
http://preview.tinyurl.com/yc6tfnc
(click through the TinyURL preview to see the site — the preview is there so you can verify where you’re being sent)
The southeast corner of Alaska (the area where all of Dr. Keen’s stations are located) shows a very small trend. The rest of Alaska (which Dr. Keen ignored) shows much more warming.
Many of these adjustments are made on the basis of a need to homogenize the data. Triggered by Willis’ excellent piece, Matt Briggs has started an interesting and informative discussion on homogenization at his blog. See http://wmbriggs.com/blog/?p=1459
Interesting piece started by Matt Briggs today on adjustments and uncertainty. I recommend it.
Bill Illis (Comment#27177) December 9th, 2009 at 6:27 am
The US temperature trend has been adjusted upward by 0.45C.
The rise in the adjusted temperatures in the US since 1900 is just a little more than this.
We absolutely need to check how Thomas Peterson and the NCDC did these adjustments.
Long ago I requested the program for SHAP, TOBS, Filnet etc.
Also asked Menne for his code for USHCN v2
I think the time is ripe for some more requests. Followed by FOIA if they dont cough up the bits.
Here’s a sweet little bit of research:
http://www.youtube.com/watch?v=F_G_-SdAN04
How refreshing, science without the coarse manners.
sod (Comment#27174) December 9th, 2009 at 6:01 am
How is it a “bad analysis” to identify the difference between raw data and the final reported product in order to determine the methods and reasons for the adjustments?
ignoring obviously useful adjustments, like time of observation, is simply stupid.
TOBS: Have you ever read Karl’s 1986 paper? Ever seen his Code?
Hint: TOBS is a MODEL for adjusting for changes in TOBS. The paper, if you can find it, is not that impressive. I’m not even sure that the modelling covers Alaska. What I do know for sure is that the TOBS model HAS ERRORS. However those errors are not propagated properly in the final error calculation. Watch what Briggs writes over the next few installments.
peace out
sod:
Yes it does. It doesn’t have direct physical meaning, but it’s part of the verification process for your adjustment procedure.
Sod,
Damn, I have a good memory. The TOBS model is untested for Alaska. Actually, Lucia, the whole TOBS adjustment is a topic I wanted Steve McIntyre to pick up, but he was far too busy.
here ya go folks:
http://ams.allenpress.com/archive/1520-0450/25/2/pdf/i1520-0450-25-2-145.pdf
Bill Illis (Comment#27181) December 9th, 2009 at 8:18 am
Paul M, it is the homogenization adjustment that the NCDC is using for US temperatures now. The fact that it is called homogenization and doesn’t do what we normally think of as homogenization is an issue but this is how they are adjusting the raw records.
Here is the abstract to the paper describing it which was published in the Journal of Climate in 2009. I haven’t been able to find a full version (there is one on the findarticles site but I don’t like using that site).
That is Menne’s paper. When Anthony visited Asheville he met with Menne. I think he asked for the code; basically it’s a change point analysis (watch out). Anthony said he was planning on a little test of Menne’s method. I should ping him and ask if he’s still following that trail.
George Tobin (Comment#27029) December 8th, 2009 at 2:58 pm
JohnV:
There may be lots of issues with the station data but the overriding mystery is still this: where do the homogenizing adjustments come from and why are they invariably upward for newer data and downward for older?
They’re not. Here’s an example of a station which has a significant influence on regional analysis owing to its island location. Before adjustment:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=147619010002&data_set=0&num_neighbors=1
After adjustment:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=147619010000&data_set=2&num_neighbors=1
You can see that the pre-1976 data has been adjusted upwards by 1C, thus hugely reducing the false ‘warming trend’ that would be indicated by the unadjusted records.
What seems ‘invariable’ is not the one way adjustments you claim but rather the one way selection of examples by so-called skeptics.
Simon Evans (Comment#27227) December 9th, 2009 at 1:21 pm
Simon u must have missed McIntyre’s post on rural adjustments.
http://climateaudit.org/2009/01/20/realclimate-and-disinformation-on-uhi/
read all the posts there and get back to me.
Simon,
Statistically speaking, the adjustments on average do warm the record. We can say this with certainty for the US record, as NOAA used to post the intermediate outputs of all their adjustment programs (Filnet, TOBS, SHAP). Lord knows what happens in the rest of the world. Ha, you don’t know. Uncertainty, it’s a lovely thing.
The temperature thing is a data junkie’s heaven: multiple data sources, multiple error sources, missing data, biased data. It’s turtles all the way down.
steven mosher (Comment#27229) December 9th, 2009 at 1:45 pm
Simon Evans (Comment#27227) December 9th, 2009 at 1:21 pm
Simon u must have missed McIntyre’s post on rural adjustments.
http://climateaudit.org/2009/0…..on-on-uhi/
read all the posts there and get back to me.
What has any of that got to do with my post above, which simply shows that George Tobin’s assertion that “homogenizing adjustments…. are….invariably upward for newer data and downward for older” is demonstrably false?
“One can only guess what “corrections” were applied to the GHCN and IPCC data sets, but I can easily guess their magnitude – about 1 degree.”
One can only guess at how much money the oil companies are paying Richard Keen, but I can easily guess its magnitude, about a million dollars.
See, it’s that easy.
First of all, everyone, slow down!
Let’s look closely at this issue of UAH and Alaska. The trend people are talking about is not “the last 25 years” and it is ridiculous that they keep saying it is. The “last twenty five years” is apparently December 1978 to November 2006 now. The last nearly three years? Pfffffft. Shah.
Anyways, it is what it is, and what is reasonably unambiguous is that after a sudden shift in 1976, not much change has occurred in Alaskan climate. You can look at official data and see that!
http://climate.gi.alaska.edu/ClimTrends/Change/TempChange.html
“Actually, we’re interested in knowing answers.
I’ll take it, you’re more interested in preserving your Faith.”
When I read the blogs, there’s far more interest in attacking personalities, attacking climate scientists, snide asides, lunatic conspiracy theories and praising McIntyre for raising the level of scientific debate.
Simon
What has any of that got to do with my post above, which simply shows that George Tobin’s assertion that “homogenizing adjustments…. are….invariably upward for newer data and downward for older” is demonstrably false?
Wrong sentence mate
“What seems ‘invariable’ is not the one way adjustments you claim but rather the one way selection of examples by so-called skeptics.”
Steven,
Sure, we know that adjustments warm the US record. I’m not sure about the rest of the world! I dunno what’s happened with TOB in China or wherever!
The fact remains that George Tobin’s assertion was untrue, and also the fact remains that we’ll continue to get ‘skeptic’ sites writing up stuff on apparently warm-biasing instances whilst ignoring any responsibility to take a scientifically disinterested view of the whole.
Did anybody see the UHI “analysis” on the front page of WattsUpWithThat? I’m curious what the luke-warm tribe thinks of it.
Simon has a (minor) point: the adjustments are not invariably upward, they are merely almost invariably one-sided. However, Simon, your level of hypocrisy would reduce from gargantuan to merely huge if you also criticized the paleoclimatology community for their “invariable selection” of a few cherry-picked proxy records and their biased defence of same.
Oh and Bugs, Steve’s knowledge of an auditor’s job is clearly better than yours, as a one time auditor I can assure you it is not an auditor’s job to give the right answer but merely to point out the right way to reach the answer.
“Oh and Bugs, Steve’s knowledge of an auditor’s job is clearly better than yours, as a one time auditor I can assure you it is not an auditor’s job to give the right answer but merely to point out the right way to reach the answer.”
No, I said it is his job to say what he sees is right and wrong. McIntyre is very quiet on what is correct in the IPCC reports.
JohnV (Comment#27276)-There is no lukewarm “tribe”. As far as I can tell it is totally incoherent. I think lucia is a lukewarmer, in the sense that she believes the IPCC temperature projections are overstating the warming. Steven Mosher claims that he is a lukewarmer, and I’m under the impression that he is more skeptical than lucia, though his views on most of the science mostly concern openness; from what I’ve heard of his guess at sensitivity, it’s higher than I reckon most self-described lukewarmers would say.
Some of the more extreme skeptics might, if they heard my views, call me a lukewarmer. But I deny being a lukewarmer.
As far as I can tell, “lukewarmer” means “people thinking for themselves that are pigeon-holed nonetheless, and don’t fit into the denier versus alarmist dichotomy”.
Andrew FL:
Some of the people here regularly call themselves lukewarmers. I’m not sure it has a solid definition.
I’m asking because the crowd at WattsUpWithThat is mostly gobbling up the video un-critically. The regulars here are less, um, rabid than some of the regulars at WUWT. I’m curious about their response to the video.
Nice contempt.
What I think is…I can’t know if it’s any good without trying to reproduce it myself. So naturally I say, let’s see, if it’s really bad someone will come out and find something wrong with it. That’s how things work. In the mean time, it is neither to be believed, nor denied, but perhaps reacted to with a kind of “curiouser and curiouser” feeling. Whatever that means.
I have just started to read about the major instrumental temperature records, so I am a total newbie, but is there a GISS for dummies somewhere I can read so I can understand this discussion? Such as — why are the readings “adjusted” — for what purpose and how? What is the justification? How much of an effect does this adjustment have on the results? I don’t even know enough to ask the right questions, but there has to be a primer somewhere for someone like me to read. TIA. I’d of course like a balanced one, or one from each camp.
Greenaway,
Start with Hansen’s 2001 paper. It’s on the GISS page.
Go onto Climate Audit. Read threads there.
But really, start with Hansen’s paper. Then just ask questions. I’ll do my best to answer. Nick and others here can keep me honest.
JohnV. I saw it. Not impressed.
Basically they rely on the GISS coding for rural/urban. That said, from a methodological standpoint it was rather like Kenneth F’s approach. The paired test (also used by Peterson) makes some sense. But the difference they showed appeared to be outside the range that you and I were seeing… That was around 0.1C to 0.15C, or kinda close to the edge of the noise band, if I recall.
mosher:
I sent Anthony Watts a detailed study a year or two ago. I did the same kind of paired test but restricted the analysis to high quality stations. I found a real UHI effect but it wasn’t as large as this video suggests. Surprisingly, I never heard back from him.
For a laugh, I’ve decided to reproduce the analysis from the video. I get a trend from the rural sites of about 0.8C/century — about 10x larger than is shown in the video. “Curiouser and curiouser”.
I’ll see what the urban sites show.
Steven–
Hansen’s paper doesn’t discuss all the adjustments– does it?
Greenaway,
There are tons of adjustments. Qualitatively all make sense. For example:
Suppose back in 1910 the thermometer for Chicago was located in what was then an open cow pasture. Then the city grew, and someone decided to move the thermometer. So, they moved it to a cow pasture in Oak Park. Then that grew, so they moved it again.
Each time it moves, the average temperature of the new location might be different from the previous one. This generally didn’t matter much at the time these were moved. Even sudden jumps of 1C for the average temperature in January didn’t affect weather prediction or reporting much.
But now people are trying to figure out whether the climate is warming at a rate of 0.2C/decade or so, and if the thermometer was moved someplace 1C warmer or cooler on average, that matters.
There are a bunch of other reasons:
1) We know, or at least strongly suspect, that even if the globe doesn’t warm, when a location goes from being rural to urban, it warms. So, if lots of thermometers used to be in cow pastures which were turned into airports, then the temperature will rise. If this dominates the signal, it will overestimate the warming averaged over the globe.
2) There have been changes in how things are measured. One is called TOBS (Time of observation.) Changing the time of day when things are measured has an effect, so there are corrections for that. There are other changes.
3) I talked about moving a thermometer. But what if, instead of moving it slightly (like from Chicago to Oak Park), a new one appeared? Or an old one vanished without being replaced? They try to figure out how to deal with this.
But basically, nearly everyone agrees the temperatures need to be adjusted or homogenized. The question is how they should be. The other question, which generates a lot of the “free the code/free the data” stuff, is that the specifics of the adjustments have not been available to the public. Given the amount of detail, this isn’t really something that can be specifically explained in a brief journal article. Lots of stuff is just nitty-gritty and… well… not the sort of details people write up in journal articles.
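The station-move example above can be sketched in a few lines. To be clear, this is a toy illustration, not any agency’s actual procedure, and the move dates and offsets are hypothetical: each documented move introduces a known jump, and earlier segments are shifted onto the scale of the current site.

```python
# Toy station-move adjustment (dates and offsets hypothetical, not any
# agency's actual method). jump = new-site mean minus old-site mean.
moves = [(1960, -0.8), (1985, 0.3)]   # (year of move, documented jump in C)

def adjust(records, moves):
    """Express every (year, temp) record on the current site's scale by
    adding the jumps of all moves that happened after that year."""
    out = []
    for year, temp in records:
        offset = sum(jump for move_year, jump in moves if year < move_year)
        out.append((year, temp + offset))
    return out

# A flat climate observed through three sites; the raw record shows
# spurious jumps at each move.
records = [(1950, 10.8), (1970, 10.0), (1990, 10.3)]
print([(y, round(t, 1)) for y, t in adjust(records, moves)])
# every year becomes 10.3: the spurious steps are removed
```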
bugs and Simon,
Pls read back through your own comments since yesterday for reference and take a deep breath. You’re both so excited you’re getting tied up in your own shoelaces.
That said, the AGW “sweater” is unraveling; sorry to say but difficult to argue with that..
greenaway (Comment#27321)
Here’s GHCN’s own guidance document. The USHCN equivalent is here. And here’s Hansen.
mosher:
I checked the urban stations. The trend is 0.94 degC/century vs 0.81 degC/century for the rural stations. I think that agrees with what we found a couple of years ago.
It’s too bad WUWT posted the video on the front page since it’s likely very wrong.
Willis Eschenbach once again illustrates why you don’t let a baby play with razor blades.
http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
JohnV:
Actually I’m glad he did. That’s the only way junk gets debunked.
Carrick (Comment#27343)
Actually I’m glad he did. That’s the only way junk gets debunked.
Yes, but the junk is blasted with a megaphone. The debunking starts about comment #200. And there’s hardly ever a proper retraction.
Aside from this UHI stuff, on the front page now there is:
1. An ice core post with a HS “blade” which is said to be less than the MWP etc., but the data stops in about 1850
2. A post explaining how Cape Naturaliste data is all wrong because of bad thermometer siting, except that the thermometer they show is at Cape Leeuwin (a long way away)
3. A post which claimed Lindzen was leading a campaign at APS, when Lindzen had nothing to do with it
plus Willis waving a whole lot of IPCC plots and talking about GHCN adjustments to Darwin, when the IPCC did not use GHCN adjusted data, and the data they did use, CRU, showed no unusual adjustments to Darwin.
And OK, there was a kinda retraction on 2 and 3.
lucia (Comment#27332) December 9th, 2009 at 10:21 pm
Steven–
Hansen’s paper doesn’t discuss all the adjustments– does it?
It’s best to start top down. Hansen ingests GHCN and USHCN v2
raw (but raw is adjusted).
He does some additional quality checks
He does a UHI adjustment.
That’s about it.
USHCN v2 is called raw, but it’s not raw. Prior to v2 there were
these adjustments (from my feeble memory):
Filnet: missing values
SHAP: station history ( lat lon height change)
TOBS; time of observation.
MMTS: instrument change
In v2, Menne’s change point analysis method is used to find undocumented station changes. Reference his paper.
So, start with hansen. understand that. Then drill down deeper.
Just a suggestion.
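Menne’s change point method is considerably more sophisticated, but the core idea can be sketched. This is a toy with hypothetical data, not his algorithm: try every split and keep the one that best explains the series as two constant levels; a big improvement at some split suggests an undocumented station change there.

```python
# Toy change-point finder (NOT Menne's algorithm; data hypothetical):
# pick the split that minimizes the two-segment sum of squared errors.

def best_breakpoint(series):
    """Return (index, sse) of the split minimizing the two-mean SSE."""
    best = None
    for k in range(2, len(series) - 1):
        left, right = series[:k], series[k:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((x - ml) ** 2 for x in left)
               + sum((x - mr) ** 2 for x in right))
        if best is None or sse < best[1]:
            best = (k, sse)
    return best

# A flat record with an undocumented +1 C shift starting at index 5.
series = [10.0, 10.2, 9.9, 10.1, 10.0, 11.1, 11.0, 11.2, 10.9, 11.1]
k, _ = best_breakpoint(series)
print(k)  # 5: the shift is found at the right spot
```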
JohnV (Comment#27338) December 9th, 2009 at 10:43 pm
mosher:
I checked the urban stations. The trend is 0.94 degC/century vs 0.81 degC/century for the rural stations. I think that agrees with what we found a couple of years ago.
Ya, sounds in the ballpark WRT what we found. I’ll stick with the same recommendation I made 2 years ago. Select the BEST STATIONS; live with spatial gaps / wider uncertainty if you have to. Let those chips fall where they may. This quest to adjust for bias and correct a record gives one a false sense of certainty, and it opens the door to just the kind of quality issues that have huge PR play.
Nick, the process is self-correcting.
I do wish Anthony would place an update at the top of the article pointing out the criticisms of it, then bump it when he makes the update so that people get a chance to see it.
Put another way, the lack of proper retractions bothers me too and does affect my opinion of his blog.
(It’s an opinion I share of RC too, although at least Anthony occasionally posts things that are contrary to his own beliefs. I can’t think of any RC posts that are “off message.”)
Of course if you don’t like what he posts, put your criticisms of him on your own blog.
It’s a lot easier to just poke people in the eyes from comments, rather than put yourself on the front page and risk being exposed as human beings too. I figure that’s what’s really stopping you here.
Nick:
He links the ice core data. You didn’t bother following it? The stated start date is 49 KYrBP, not 1850 AD.
He’s simply showing what happens if you take the same data and progressively zoom out with it.
You get it wrong occasionally too.
steven mosher (Comment#27345)
No, Filnet, TOBS etc are all part of the documented USHCN adjustments.
Carrick (Comment#27349)
No, the most recent data is, as he puts it, 0.095 Kyear BP. And BP should mean before 1950. So, OK, 1855.
Nick, the data he is plotting is here.
If you download it, the start date is 49 KYrBP re 2000 AD. Never mind, by “start date” you mean the most recent age of the core. It took me a while to figure out what you were criticizing, sorry.
I don’t think they mean BP in the wikipedia sense, could be wrong there. They clearly state in their documentation that the end year (not the start year) is 2000 AD.
Please disregard the last comment. I understand your point (finally)… been a long day. I’m punch drunk, but not drunk.
You’re just saying that obviously this proxy doesn’t extend to 2000 AD, so the “hockey stick” is a complete misrepresentation.
Yes, absolutely you are right.
Guys, come on, does Nick seriously believe that Greenland wasn’t way warmer during the MWP than today? That’s pretty well established mainstream scientific knowledge.
With regard to the BP issue, it doesn’t make the damnedest bit of difference. The fifties were the last decade of a warm regime in Greenland which was like that of the present day. Which, again, is mainstream scientific knowledge, which no warmer or skeptic who knows anything disputes.
Carrick (Comment#27352)
Well, I think of ice cores as “starting” at the top. The start/end year can’t be 2000; 1855 or 1905 both miss the current warming.
Nick
u r right nick. TOBS et al are in V2; menne was added just recently
(2008). I think (but am not sure) that v1 also had TOBS and other adjustments.. v2 was a horror coming out (just ask GISS).
I think v2 hit beta in 2006.. then menne was added in 2008.
anyways greenaway is down in the weeds with ushcn adjustments.
bugs–
As far as I can tell, all Tim Lambert manages to tell us is that the Australian BOM agrees. So? He’s also sort of doing the strawman thing. Willis shows a number of graphs, and also has quite a long discussion.
Qualitatively, the quote from the BOM discusses issues Willis also discussed. Willis discussed them quantitatively, but the quote Tim found discusses them qualitatively. Tim doesn’t even acknowledge that Willis discussed these things — even though Willis even supports the ancillary discussions with graphs.
For example Tim quote:
“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious.”
So, in principle, we should see this drop– and we do see drops. But is the drop spurious and due to the change in shelter? Tim doesn’t go to the trouble to confirm this judgement. Maybe it’s right– or not. He doesn’t even go to the trouble to tell us if the drop is the 1941 drop. If so, Willis discusses the possibility that the drop is real and shows the effect of correcting for it here:
As for Tim’s narrative that would lead us to believe that the “Darwin Airport” graph is somehow all measurements at one location: the narrative says the thermometers were moved. Of course they moved. Darwin airport probably didn’t exist before the Wright Brothers flew at Kitty Hawk. Do you think an unmoved thermometer was conveniently sitting at just the right spot even before the airport was built?
There is another issue: Do we know if the Aust BOM graph is any higher quality than GHCN? Are the adjustment choices really independent? Or did two groups who communicate with each other convince themselves those are the “right” choices? Maybe their choices are right– or not. But if so, it should be possible to explain this.
But Tim doesn’t address this in any way.
Mind you, it could be Willis is wrong for the reason Tim suggests. Or Tim might be wrong. Or the truth may lie somewhere in between– the choices made by both GHCN and the Australian BOM can be justified provided we recognize there are large uncertainties in any trends for Darwin airport.
I don’t think we’ll know for a few years. I suspect this is why, in the wake of Climategate, CRU has decided to actually make their choices more transparent. Doing so will give people more confidence, which is a good thing.
Lucia,
It transpired last week that the same type of temperature “adjustments” [same direction and in the same quantitative ball park] flagged by Willis E. for Australia have occurred in the New Zealand climate records as well.
In many ways Watts’s comprehensive review of the NOAA network at http://www.surfacestations.org , which showed that data set to be deeply corrupted, started this line of inquiry, and as more and more national data sets come under scrutiny we may well hear more along the lines of what Willis and the kiwis have found.
It is this process of heightened scrutiny, which is accelerating due to the CRU debacle, that I have likened to the proverbial sweater that starts unraveling when enough people start pulling on the loose ends they find.
[and bugs, no razor blades involved..]
tetris:
Anyone who has looked at Watts’ surface stations ratings has come to the conclusion that site ratings do not have much effect on the temperature trend. That includes an analysis of the best stations by Steve McIntyre:
http://climateaudit.org/2007/10/04/gridding-from-crn1-2/
Watts has hidden his data and threatened to sue people who attempt any analysis using his station ratings (which were published on his website and promoted widely). The SurfaceStations project is not as transparent as it used to be.
John V
I have re-read the CA thread and do not see the conclusion you attribute to SteveM.
Unlike CRU or GISS, which operate with public monies and are subject to FOI, Watts’s project was privately funded, so he is free to do with the results as he pleases.
Are you suggesting that Watts is hiding something? If so, that is odd, because he shared his findings with NOAA at their request, having been invited by NOAA to their offices, which resulted in a publicly stated undertaking by NOAA to start rectifying the situation. I am not aware that that has actually happened.
Fact remains that when you see example after photographic example of US data sampling units surrounded by, e.g., multiple AC exhausts [or similar enormities], with the resulting temperature data used without “adjustment”, and it turns out that a good majority of the sampling stations are compromised in some way or other, it doesn’t make for high comfort levels in the NOAA “product”, conclusions on AGW/ACC, or their projections.
GIGO. And by the way, this sort of “contamination” shows up everywhere: this summer it was shown that the Dutch KNMI’s very own temperature sensor at headquarters had been showing 2C high for years…
Willis’s conclusion is that, based on what is known about data contamination and the totally discretionary “adjustments” [always up] at a slew of stations, it is time for a complete review of all the data and its sources, all 130 years of it. That is something the UK Met Office, against the wishes of politicians who are afraid that skeptics will make something of it, is now proposing to do in the wake of the CRU train wreck. We’ll have to wait 3 years, but it should be interesting to see the results.
tetris:
I’ve always been supportive of the Surface Stations project. Identifying and eliminating the worst stations is a good idea.
When it comes to quantifying the effect of the bad stations, all we have are casual studies. McIntyre did not draw any conclusions himself but you can compare his trend using the best stations to NOAA and GISTEMP to reach your own conclusions.
When I first started working on it (in Sept 2007), Watts thanked me a few times for doing analysis work. His attitude towards using the SurfaceStations data changed sometime later.
I didn’t suggest Watts is hiding anything. I only said that the SurfaceStations project is not as transparent as it used to be. There may be justification but it is a fact.
Lucia:
“But basically, nearly everyone agrees the temperatures need to be adjusted or homogenized.”
Is homogenized a synonym for adjusted? I would think that pasteurized data would be preferable to homogenized data. By no accounts do we want adjusted data to become fortified data 🙂
Seriously though, I’ve been working through the videos that Dr David Archer posted over at RealClimate under the thread “An Offering.” In the first few lectures he is working through a very simple formula that is supposed to model planetary temperatures. It occurred to me that when people talk about the global average temperature, what they really mean is the global average temperature IN THE SHADE, since temps are taken in those little Stevenson huts. I suppose this amounts to what is supposed to be the air temperature. It makes me wonder how the average temperature of the moon or Mars is measured and guessed at. And also, what would the global average temperature become UNDER OPEN SKY?
Hank–Yes. It’s supposed to be the air temperature. Temperature measurement devices need to be shaded because otherwise they absorb solar (or other) radiation and can end up warmer than their surroundings. So, if you want to know the temperature of the air, you need to shade them. We do the same thing in building-science work in engineering, or when measuring temperatures in blast furnaces, etc.
You really do want those Stevenson huts, and ideally, ventilation is useful too.
Simon Evans #27227,
“You can see that the pre-1976 data has been adjusted upwards by 1C, thus hugely reducing the false ‘warming trend’ that would be indicated by the unadjusted records.
What seems ‘invariable’ is not the one way adjustments you claim but rather the one way selection of examples by so-called skeptics.”
your point is well made on the specific claim that all adjustments SEEM to be down for old data.
One slight issue, what do you think is the rationale for adjusting 80 years of data UP 1C or more when it would appear that the modern 25 years of data is HOT about 1C and should be adjusted DOWN??
After seeing John V’s latest posts at WUWT, I withdraw my comments earlier in the thread about John V doing good work and showing his work.
There is a major flaw in his code and he should start over and find out what the problem is.
The comments above by John V should be taken in that context.
Bill Illis:
I replied in detail over at WUWT. Here are the highlights:
– I used a simple Excel spreadsheet (no OpenTemp)
– It took about 90 minutes
– I am in communication with “Peter’s Dad”
– We are collaborating to reconcile the differences
– I changed my algorithm to match his more closely
– I immediately posted my new results after doing so
– I look like a jerk because I didn’t realize Anthony was replying inline (my fault)
Any questions?
JohnV–
I hate inline replies… They have their uses, and can more clearly connect a reply to a comment, but they are so easy to miss.
lucia:
I don’t like them either, but I still should have looked more closely.
You know what I do like?
Threaded comments like at the new Climate Audit. They are so much better for containing and following sub-conversations. Have you ever thought about turning on threaded comments?
JohnV–
There seem to be two camps on threaded comments– those who love them and those who hate them. I hate them and find comments at blogs using threaded comments are often confusing to follow because people often don’t really stick to them.
JohnV, you have to reconcile why these select stations (which seem rather random) produce temperature increase numbers (according to your analysis) which are much higher than the NCDC and GISS trends.
I have a hard time believing you downloaded and collated all those records in 90 minutes. (In fact I don’t believe that).
Furthermore, when something that detailed takes 90 minutes, there are always inadvertent errors.
I spent enough time doing so to know that is the case.
Bill Illis:
The stations are indeed rather random. You’ll have to take that up with “Peter’s Dad”. I was just trying to replicate his results.
You’re calling me a liar? Ouch. (But I’ll be ok)
I spend a lot of time working in spreadsheets though. I really am that fast. 🙂
Here’s the procedure:
1. Make a list of the rural stations
2. For each station:
2a. Copy name into the GISTEMP station selector (3 keystrokes)
2b. Verify proper latitude and longitude
2c. Click through to the graph (1 click)
2d. Download as text (1 click)
2e. Copy text into Excel (5 or 6 keystrokes)
2f. Convert data to text (1 click and 1 keystroke)
2g. Copy annual average to station column (1 click, 3 keystrokes)
3. Repeat steps 1 and 2 for urban stations
4. Put in a few formulas to average the data
5. Create a couple of graphs
That’s about 4 clicks and maybe 10 keystrokes per station. There were 56 stations. I can easily do 4 clicks and 10 keystrokes in a minute.
However, you’re right that there might be errors. That’s why I sent my spreadsheet to Anthony and “Peter’s Dad” immediately. We’ll see how it comes out.
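The averaging step at the end of that procedure (steps 4 and 5) can be sketched in a few lines of Python. This is a toy illustration, not JohnV’s actual spreadsheet: the station names and numbers are invented, and the only assumption carried over from the thread is that 999.9 marks missing data in the GISTEMP text downloads.

```python
import numpy as np

def average_stations(series_by_station, missing=999.9):
    """Average annual values across stations, skipping missing data.

    series_by_station: dict mapping station name -> list of annual values,
    all covering the same years. Values equal to `missing` (999.9, as in
    the GISTEMP text downloads discussed in this thread) are ignored.
    """
    data = np.array(list(series_by_station.values()), dtype=float)
    data[data == missing] = np.nan   # the code analogue of a find-and-replace on 999.9
    return np.nanmean(data, axis=0)  # mean over stations, one value per year

# toy example: two stations, three years, one missing value
stations = {
    "StationA": [1.0, 2.0, 999.9],
    "StationB": [3.0, 4.0, 5.0],
}
print(average_stations(stations))  # [2. 3. 5.]
```

In the third year only StationB reports, so the “average” is just that one station’s value, which is exactly how a nan-aware (or blank-cell-aware) mean behaves.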
lucia:
I wondered about what would happen when people don’t use the threaded comments properly. I spend a lot of time in developer forums where everybody knows what to do, but I can see how they would be a problem in non-computer-geek discussions.
When (and if) I ever get my own blog started, I think I will start with threaded comments and see how they work. It should be an easy experiment at the start when there are only 2 or 3 people reading.
JohnV, did you think I wasn’t going to try your algorithm?
How did you convert the matrix of temperatures by month from the first station (Hemlock) into an Excel data format which would allow the calculation?
What did you do with all the 999.9 ‘s?
It would take me at least 15 minutes to make this first station usable (and I imagine I am as good as you in spreadsheets).
Bill Illis:
This is fun!
If you want to remove the 999.9, do this:
Press Ctrl+H, type 999.9 in the top box, leave the second box empty, click “Replace All”. I did that between steps 3 and 4. It took about 5 seconds so I forgot to include it.
Not to be rude — but maybe you’re not as good with spreadsheets as I am. 🙂
Bill Illis:
Helpfully, Glenn over at WUWT noticed the timestamps between when the station list was created and when I posted my first results. I sent my spreadsheet to Anthony Watts. I don’t mind if he forwards it to you.
JohnV, thanks for coming out.
Next time, show your work a little better so we can know when blank cells are involved in your calculations (which Excel seems to like).
I suggest you head over to:
http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html
and use your skills to sort this out.
Bill Illis:
Excel treats blank cells as null values. It computes trends and averages over them just fine. The spreadsheet from “Peter’s Dad” also has blank cells, so I reproduced his method pretty accurately.
You called me a liar a little while ago. I have demonstrated that I did what I said I did. Do you have the integrity to retract?
I have never looked at the file format from the Met Office. It looks pretty clear though. The monthly temps could be processed pretty easily with Excel. If I wanted to extract the normals and standard deviations I would probably write a little C# program and save the results in a spreadsheet. (There are better options than C#, but it’s what I work in all day).
Do you have a specific question about the format?
Bill Illis:
I should have said I would save the Met Office data in a database (not a spreadsheet).
I’m done for the night. Have a good one.
kuhnkat (Comment#27396) December 10th, 2009 at 5:30 pm
One slight issue, what do you think is the rationale for adjusting 80 years of data UP 1C or more when it would appear that the modern 25 years of data is HOT about 1C and should be adjusted DOWN??
It makes no difference whether the record is plus or minus a consistent amount, since all that matters is the anomaly relative to the baseline. In the case of the station I referenced (St. Helena), the adjustment arises from a significant change in altitude. I suppose you could adjust the modern readings downwards to give a ‘virtual’ reading of the temperature at the old altitude. That would seem an odd approach, and would not alter the station’s contribution to the analysis.
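The point that a constant offset drops out of the anomaly can be checked in a couple of lines. The temperatures below are made up; the only thing being demonstrated is the arithmetic.

```python
def anomalies(series, base_start, base_end):
    """Anomalies relative to the mean over the base period [base_start, base_end)."""
    baseline = sum(series[base_start:base_end]) / (base_end - base_start)
    return [t - baseline for t in series]

temps = [10.0, 10.5, 11.0, 11.5]
shifted = [t + 1.0 for t in temps]  # the whole record adjusted up by a constant 1 C

# the anomalies are identical, so the station's contribution is unchanged
print(anomalies(temps, 0, 2) == anomalies(shifted, 0, 2))  # True
```

The offset raises the baseline by exactly the same amount it raises every reading, so it cancels; only a *time-varying* adjustment changes the trend.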
I apologize to John V for my assertions above. It seems that someone else has replicated his work (the last version that is).
Although I originally posted in this thread semi-defending him, my later posts were out of line.
While I would still like to see an explanation for why this probably random sample of stations runs higher than the US in general, my accusatory tone was out of place.
Willis’s argument refuted. It seems someone has done the right calculation to refute this cherry-picking of stations which may have been adjusted to show a positive trend. Giorgio Gilestro says he has done the calculation that should have been done in the first place. Instead of picking out one station, he has looked at all the stations in the GHCN set for which adjustments were performed. He shows the distribution of the effects of the adjustment on trend. It looks quite symmetric. Adjustments are just as likely to cool as to heat. And his Python code is available.
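As described, Gilestro’s calculation amounts to: for each station with both a raw and an adjusted series, fit a linear trend to each, take the difference, and look at the distribution of those differences across all stations. A minimal sketch of that idea follows, using synthetic data rather than the actual GHCN files; it is an illustration of the method, not a reproduction of his code.

```python
import numpy as np

def trend_per_decade(years, temps):
    """Ordinary least-squares slope in deg C per decade, ignoring NaNs."""
    ok = ~np.isnan(temps)
    slope_per_year = np.polyfit(years[ok], temps[ok], 1)[0]
    return slope_per_year * 10.0

def adjustment_effects(years, raw_by_station, adj_by_station):
    """Trend(adjusted) - trend(raw) for each station, in deg C per decade.

    A histogram of this list is the distribution discussed above: values
    above zero mean the adjustment warmed the trend, below zero cooled it.
    """
    return [trend_per_decade(years, adj_by_station[s])
            - trend_per_decade(years, raw_by_station[s])
            for s in raw_by_station]

# synthetic demo: one station whose adjustment steepens the trend slightly
years = np.arange(1900, 2000, dtype=float)
raw = {"A": 0.005 * (years - 1900)}             # 0.05 C/decade raw trend
adj = {"A": raw["A"] + 0.001 * (years - 1900)}  # adjustment adds 0.01 C/decade
print(np.round(adjustment_effects(years, raw, adj), 3))  # [0.01]
```

Run over the real GHCN raw/adjusted pairs, the mean of this list is the overall adjustment bias that Nick and John M quote further down the thread.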
Bill Illis:
Thanks. I appreciate that. Apology accepted. I realize I can be a little aggressive sometimes but I am very careful to be honest and transparent.
Nick Stokes:
That’s a cool analysis by Giorgio Gilestro. I will try to find some time to validate his analysis this weekend.
Nick Stokes:
Have you posted that link at WattsUpWithThat or sent it to Anthony? He should be interested in a comprehensive analysis of temperature adjustments at all stations. I realize that Giorgio Gilestro’s results haven’t been independently verified, but that shouldn’t be a problem.
JohnV (Comment#27762)
Have you posted that link at WattsUpWithThat or sent it to Anthony?
GG himself posted there ( gg (00:09:11) : ) – I commented. Willis is apparently in the South Pacific.
Richard Keen posted a bit of a response on Jeff Id’s tAV. While I doubt he answered the questions raised here, I think it’s fair to say his “analysis” was never intended to be definitive.
See: Alaska Bodge Answers on tAV.
JohnV (Comment#27762)
I have now verified Giorgio’s calcs. I tried to post at his site, but it hasn’t appeared yet, and may have sunk into a spam filter. So I’ve posted the code at the Air Vent.
The histogram is here. I get the same mean, 0.0175 deg C/decade, and standard deviation, 0.189 C/dec.
Nick Stokes (Comment#27822)
December 13th, 2009 at 4:45 am
Posted this at the long Willis thread at WUWT, but Nick might be the only one who actually looked at it. It’s a blow-up of gg’s symmetric curve. (Click on the image itself for a clearer view.)
Looks like Nick’s reproduction has similar skewness, and both agree on an adjustment bias of 0.175 deg/century
http://img191.imageshack.us/img191/448/histogram.jpg
Now let’s see the progression of hockey sticks.
IPCC 2001:
http://noconsensus.files.wordpress.com/2009/08/synthesis-report-summary-tar-hockey-stick1.jpg
IPCC 2007:
http://www.worldclimatereport.com/wp-images/gore_hockeystick_fig3.JPG
So first, we have “a little wobble” in the shaft. Now, we clip about 20% off the blade?
I know Nick, that you commented at WUWT that the “bias” is not “sustainable” because of some sort of movement to MMTS instrumentation. I guess I’m less concerned about “sustainability” than historical measurements.
Also, odd that gg seemed to be under the impression that 20th century warming was 0.2 deg/decade, and unless I missed it, nobody but skeptics questioned him on it.
Roman M has an illuminating analysis of GHCN adjustments here:
http://statpad.wordpress.com/2009/12/12/ghcn-and-adjustment-trends/
John M (Comment#27827) December 13th, 2009 at 7:48 am
If you look at the scale, the blade is growing, and will be off the scale before long. So much time and effort expended on something that really isn’t that important. How warm the MWP was is of no consequence when you look at the big picture.
Well, bugs, the blade stops in 2000 or so in the graphs I linked to.
How much has it grown since then?