[7/7 Update]
Note to newcomers, you can find all land temp reconstructions in a single chart in this post: http://rankexploits.com/musings/2010/another-land-temp-reconstruction-joins-the-fray/
As well as a more recent update of land/ocean reconstructions incorporating a land mask and different SST series (HadISST/Reynolds and ERSST): http://rankexploits.com/musings/2010/replication/
[end update]
Nick Stokes took the initiative last week to add Hadley sea surface temperature (HadSST2) data to his land temp reconstruction to produce the first amateur global temp reconstruction (well, apart from CCC’s replication). I followed suit with my model, and we can compare our results to the big three (GISTemp, HadCRUT, and NCDC):
It’s interesting to note that they are all pretty close. The slopes for 1900-2009 (in degrees C per decade) are:
- Zeke: 0.072
- GISTemp: 0.066
- HadCRUT: 0.073
- NCDC: 0.070
- Nick Stokes: 0.070
And for 1960-2009:
- Zeke: 0.139
- GISTemp: 0.135
- HadCRUT: 0.135
- NCDC: 0.134
- Nick Stokes: 0.150
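For anyone who wants to check numbers like these against the spreadsheet, here is a minimal Python sketch of the usual calculation: an ordinary least-squares fit to annual anomalies, with the slope multiplied by ten. The anomaly series below is a random stand-in, not any of the actual reconstructions.

import numpy as np

years = np.arange(1900, 2010)                      # 1900-2009 inclusive
# stand-in anomaly series; substitute a real annual series from the spreadsheet
anoms = 0.007 * (years - years.mean()) + np.random.normal(0, 0.1, years.size)

slope = np.polyfit(years, anoms, 1)[0]             # OLS slope in deg C per year
print(f"{10 * slope:.3f} C per decade")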
We can also compare ocean-only temperatures. Here unfortunately I was unable to find any data for GISTemp, who only provide land and land/ocean combined on their website:
All ocean temp series are quite close, with mine being somewhat low pre-1915 (likely due to issues with my model dealing with sparse monthly coverage).
And we can revisit our previous land temp chart:
Close observers will notice a few interesting things. Land temps are warming much faster than ocean temps, especially in the last 50 years (close to 0.2 C per decade for land versus 0.1 C for ocean). My results and those of Nick Stokes are quite close across the land, ocean, and land/ocean series.
Let’s compare my results to Nick’s for all three categories:
And look at each separately:
As always, click on images to embiggen.
For those interested in all the data shown here, you can find it in a spreadsheet available here.
[UPDATE 4/20] I used GISTemp Land instead of Land/Ocean in the first go, which was a big blooper on my part. If we use the correct land/ocean temps, we see that the trends from both 1900-2009 and 1960-2009 for GISTemp are in line with (or, for 1900-2009, slightly lower than) NCDC/HadCRUT.
Both my and Nick’s reconstructions run higher than any of the major temp indices for global land/ocean over the past 50 years.
Good going.
Some thoughts (and don’t take any suggestions as requests): It could be useful to get an Arctic-less series from GISS. They’ve got it lying around, as they have the graph on their website, or one could reconstruct it from their L+O gridded data. Of course, this would be re-inventing the wheel, as I think it was Hansen who already diagnosed this divergence (such as it is) between GISS and CRU.
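For what it’s worth, a rough Python sketch of what reconstructing an Arctic-less mean from gridded L+O data might look like: an area-weighted average with everything poleward of some latitude masked out. The grid and its latitude axis here are hypothetical stand-ins for the actual GISS gridded product.

import numpy as np

lats = np.arange(-88.0, 90.0, 4.0)              # hypothetical 4-degree grid centres
grid = np.random.normal(0, 1, (lats.size, 90))  # lat x lon anomaly field (stand-in data)

w = np.cos(np.radians(lats))                    # area weight ~ cos(latitude)
w[lats > 64] = 0.0                              # mask out cells poleward of 64N

arcticless = np.average(grid, axis=0, weights=w).mean()
print(arcticless)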
Also, GISS now offers two different ocean sources; I’ve not really looked at what they are and how they’re different, but if it’s there, it’s maybe interesting to look at.
Was it really 2010 before some amateur came up with their own land+ocean global? Somebody somewhere must have given it a go beforehand.
And can you specify what the common baseline is for all the plots? Always good to know, to see if your eye is playing tricks on you.
Are the inputs into these reconstructions the same GHCN data sets or have you found others?
carrot eater,
The charts do have a y-axis label 😛
.
j ferguson,
Nick, myself, and maybe NCDC only use GHCN + HadSST. HadCRUT and GISS use a few other stations (Antarctic ones in particular) that aren’t in GHCN (GISS also uses all USHCNv2 data, and NCDC may as well…).
Guys, if you talk to Nick Barnes or run GISS you can pull out a common dataset to work with:
After step 1 in GISTEMP Hansen has:
1. Added Antarctica
2. Dropped strange stations.
3. Combined stations (dups) according to his unique method.
4. Substituted USHCNv2 for GHCN in certain instances
Can’t recall the name of the work file ???comb???
bloody… yes, there it is in the label. sorry, zeke.
Mosher: yes, I think you can weed out what you want to exclude from the intermediate file v2.mean_comb.
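A quick sketch of that kind of weeding, assuming the standard v2 line layout where the first three characters are the GHCN country code (700 is, I believe, the code for Antarctica):

# drop every line whose GHCN country code marks it as Antarctic
with open("v2.mean_comb") as src, open("v2.mean_comb.noant", "w") as dst:
    for line in src:
        if not line.startswith("700"):
            dst.write(line)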
Ack, I goofed. Accidentally used GISTemp land instead of land/ocean in the first image. Fixing it first thing tomorrow (since I don’t have privileges to edit the post at home).
It should look like this:
http://i81.photobucket.com/albums/j237/hausfath/Picture118.png
Zeke:
This has GISTemp land and ocean broken out separately. I haven’t looked into it recently, but it may be possible to run ClearClimateCode with just ocean.
Nick Barnes. Nick Barnes. Nick Barnes.
That should do it.
How are the different ocean datasets GISS now offers as options different? If you want one guy’s opinion:
http://bobtisdale.blogspot.com/2010/02/when-did-giss-add-ersstv3b-data-to.html
May one ask a simple question? Why use specifically a 1961-90 baseline; in fact, why use a baseline at all?
Zeke: The HADSST2 data has an upward bias starting in 1998 that appears when it’s compared to other Global SST anomaly datasets. It will make its presence known in your short-term comparisons. Here’s HADSST2 minus ERSST.v3b:
http://i49.tinypic.com/20qgmk5.png
And HADSST2 minus ERSST.v2:
http://i49.tinypic.com/j090g1.png
And HADSST2 minus OI.v2:
http://i47.tinypic.com/2gx2nwm.png
And compared to the Hadley Centre’s own HADISST:
http://i45.tinypic.com/f3e5vo.png
The bias is likely caused by the Hadley Centre’s splicing of two SST datasets in 1997/98. I discussed this here:
http://bobtisdale.blogspot.com/2008/12/step-change-in-hadsst-data-after-199798.html
And here:
http://bobtisdale.blogspot.com/2009/12/met-office-prediction-climate-could.html
Zeke,
I am sure that I was able to grab the GISS ocean-only data at their web site some months ago; I remember that it was not where I would have expected to find it. Unfortunately, I don’t now remember exactly where it was (age is a terrible thing!), but I think you had to first click a link to a graphic, and then there was a link to the data that formed the graphic.
Anyway, it is reassuring to see that everyone gets similar trends. The difference between land and ocean is a bit odd. The changes in anomalies (both negative and positive) were always larger for land than for ocean by a fairly constant factor (no surprise), which you can see clearly by plotting land versus ocean anomaly values. But somewhere around 1970-1975, the land-versus-ocean proportionality constant increased sharply (it looks like a ‘break’ in the slope).
Dave (Comment#41051) April 20th, 2010 at 2:17 am
Anomalies from a baseline are used rather than absolute temperatures, since anomalies can be meaningfully compared and averaged across stations and grid cells with very different absolute climatologies.
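To make that concrete, here is a minimal sketch of the anomaly calculation against a 1961-90 baseline. The temps dictionary, mapping (year, month) to degrees C, is a hypothetical stand-in for a real station series.

import numpy as np

def to_anomalies(temps, base=(1961, 1990)):
    anoms = {}
    for month in range(1, 13):
        # climatology: mean of this calendar month over the base period
        clim = np.mean([t for (y, m), t in temps.items()
                        if m == month and base[0] <= y <= base[1]])
        for (y, m), t in temps.items():
            if m == month:
                anoms[(y, m)] = t - clim
    return anoms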
Sorry, guys, I’m really, really, really busy with the day job at the moment, and preparing a CCC presentation for OKCON this Saturday. Maybe next week? Of course, feel free to grab the code and run it. It should be the case that if you feed None as the land data to step5.py then you’ll get an ocean-only series, but I’m not sure whether we put that in (the reverse is how we generate land-only series).
Land and ocean data are not treated symmetrically by GISTEMP. First land and ocean anomaly series are computed for each grid sub-box. The 1200km radius is used for the land series (step 3), but only SSTs within the sub-box itself are used for the ocean series (step 4). Then the land and ocean series are “combined” (step 5), in code which takes a parameter to allow interpolation, but this parameter is set to 0 in GISTEMP, which fact we have used in ccc-gistemp to simplify the code. The effect is that in a given grid sub-box, the land-sourced series is used unless (a) the centre of the box is more than 100km from the nearest station used for the land series, and (b) there are more than 240 valid monthly values in the ocean series. So, for instance, in the sub-boxes over the Arctic Ocean you get the land series.
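In code form, the step-5 decision rule Nick describes amounts to something like the function below, applied per grid sub-box. The names are illustrative, not ccc-gistemp’s actual API.

def combined_series(land, ocean, dist_to_nearest_station_km, valid_ocean_months):
    # use the land-sourced series unless the box centre is far from any
    # used station AND the ocean series has enough valid monthly values
    if dist_to_nearest_station_km > 100 and valid_ocean_months > 240:
        return ocean
    return land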
GISS supplies a binary file as part of the input files for steps 4-5. It’s called oisstv2_mod4.clim, and they provide a Fortran program convert.HadR2_mod4.upto15full_yrs.f to read it. Bob Tisdale discussed what makes up the file. It’s a Hadley file to 1981 although, Bob says, different from HADSST2. Thereafter it’s the NOAA OI data set.
My preference would be to actually run the GISS fortran somehow rather than trying to read the binary in R.
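If someone did want to read the binary directly in Python rather than R, classic Fortran sequential files wrap each record in length markers. A hedged sketch, assuming little-endian 4-byte markers; the actual record layout of oisstv2_mod4.clim would need the Fortran source to confirm.

import struct

def fortran_records(path):
    # yield the raw payload of each Fortran sequential record
    with open(path, "rb") as f:
        while (header := f.read(4)):
            (n,) = struct.unpack("<i", header)   # leading length marker
            yield f.read(n)                      # record payload
            f.read(4)                            # trailing length marker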
Andrew_FL (Comment#41049): GISS intends to remain with the combined HADISST/OI.v2 SST data as their official SST dataset. GISS writes, “Until improved assessments of the alternative SST data sets exist, the GISS global analysis will be made available for both HadISST1 and ERSST, in both cases with these longterm data sets concatenated with OISST for 1982-present. HadISST1+OISST will continue to be our standard product unless and until verifications show ERSST to be superior.”
Discussed that here with links to the GISS documents:
http://bobtisdale.blogspot.com/2010/03/giss-acknowledges-addition-of-ersstv3b.html
The .clim file is the climatology. It is subtracted from the monthly data to calculate anomaly values, before the monthly data is combined into the sub-box series.
Older SST data is in the SBBX.HadR2 file. Step 4 does the anomaly calculation on new SST data, grids it by sub-box, and makes a new SBBX.HadR2 file incorporating it. There is no code in GISTEMP to either generate the .clim file or to make an SBBX.HadR2 file from scratch.
Original post should be fixed now. Working on including GISTemp’s ocean temps later today.
Especially looking at the graph for the ocean anomalies:
What was the forcing that caused the warming from 1910-1945?
What happened in 1945 (the abrupt cooling)?
Compare it with this picture:
http://www.appinsys.com/globalwarming/RS_Alaska_files/image020.gif
carrot eater (Comment#41033) April 19th, 2010 at 9:04 pm
bloody… yes, there it is in the label. sorry, zeke.
Mosher: yes, I think you can weed out what you want to exclude from the intermediate file v2.mean_comb.
It’s MORE than exclusions.
Ya, that’s the file. My understanding is that this file is the one people should use as INPUT to do the best comparison to GISS. By that I mean this. Let’s take Nick as an example:
1. Nick uses GHCN
2. He doesn’t use Antarctica
3. He does a simple average of all “duplicates” for each station.
GISS, on the other hand:
1. Use GHCN but drop certain stations (the list of dropped stations is on their ftp)
2. In the US they do some reconciling of GHCN and USHCN
3. They add Antarctica.
4. They handle the duplicates differently.
At the end of this they output v2.mean_comb.
SO:
A. It would be GREAT if CCC posted this intermediate file as a standard reference (intermediate data is a great thing to have)
B. Somebody else could run and post it.
if that’s what you want, I think the ccc guys would encourage you to just run ccc
Ibrahim (Comment#41078)
“What happened in 1945 (the abrupt cooling)?”
You might want to go to ClimateAudit and do a search on “Bucket Adjustments”.
Personally, when doing any regression analysis, I don’t bother going back past 1950.
AJ
AJ (Comment#41111) : I don’t know that bucket adjustments explain it all. The discontinuity in 1945 also shows up in the cloud cover, marine air temp…
http://bobtisdale.blogspot.com/2009/03/large-1945-sst-discontinuity-also.html
…and inverted in wind speed:
http://bobtisdale.blogspot.com/2009/03/part-2-of-large-sst-discontinuity-also.html
Bob Tisdale:
I agree with Bob.
In any case, “step-like” functions in temperature and other variables are often associated with shifts in atmospheric-ocean circulation patterns. This is an observation I know Bob has made in the past and I think he is right.
AJ
see:
http://www1.ncdc.noaa.gov/pub/data/cmb/temperature-monitoring/image001.jpg
The figure shows that the impact of the adjustment to remove the cold bias from bucket sea surface temperature measurements warms the historical data, decreasing the amount of global warming the data indicate.
Further: in climate science we go way back past 1950, and I’ve read enough to know that the warming from 1910-1945 is real; nobody seems to have an answer about what forced it.
Nice work. With such a good visual correlation between land and ocean, what does that say about climate thermodynamics? Looks like the increasing air temps are coming from the ocean, not the other way around.
CE:
I’m too lazy to run their code. Actually I did, but had some issues with my version of Python (stupid Mac) and I haven’t gone back to try it again. I was hoping they would have posted the file as part of their regression testing. On principle, I think all intermediate files should be part of any release, but folks can argue about that. Maybe I’ll try again.
Howard,
It only says that the ocean surface temps reach a near-equilibrium with the surface air temperature rather quickly. The fact that land temps are increasing much faster than ocean temps (as well as the absence of a compelling physical mechanism) suggests that the oceans are not driving atmospheric warming.
Ibrahim (Comment#41119)
see:
http://climateaudit.org/2008/05/28/nature-discovers-another-climate-audit-finding/
There’s a nice discussion on the disruptions WWII had on the SST monitoring system (not that there was such a thing back then).
Sure you can go back further than 1950, but I wouldn’t pick start or end dates straddling WWII.
Bob Tisdale (Comment#41116)
I’m sure there’s more to the story than the “buckets” issue. Given the SST graph, however, I don’t think the “buckets” issue has been adequately resolved. There may have been a natural “step” function, but I’d bet that most of it is due to WWII disruptions.
WWII was a time of arguably the greatest social upheaval in human history. I would argue that it had a significant impact on most aspects of the global climate monitoring system. One would have to be cautious about comparing different metrics to resolve these impacts.
I’m also of the opinion that we have not yet heard the last word on WWII-era and pre-war SST adjustments. Ultimately we may just have to live with some larger level of uncertainty in the SST numbers from that time period.
Carrot Eater:
Maybe.
But it’s pretty much impossible to attribute the warming or the sudden cooling to SST adjustments when it is observed in the land-only record as well as in other meteorological parameters.
Carrick (Comment#41205)
“But it’s pretty much impossible to attribute the warming or the sudden cooling to SST adjustments when it is observed in the land-only record as well as in other meteorological parameters.”
Not unless the land-only measurements were similarly affected. In addition to obvious physical factors like burning cities and forests with increased aerosols, there were other great socioeconomic disruptions. McKitrick demonstrated that socioeconomic factors significantly affect the climate data.
My preference is to treat the WWII data as unreliable and ignore it. That may be convenient, but as far as hand-waving goes, I believe it does have some merit.
Re: AJ (Apr 21 17:57),
I extracted this graph to show that one clear effect of WW1 and WW2 on SST was to greatly reduce the coverage. In effect, in my method at least, that largely sorts itself out. War-influenced readings are down-weighted by the reduction in observations.
Thanks Zeke,
This is contrary to my experience living near and swimming in the ocean most of my life: ocean temperatures don’t seem to change a whole lot while the air temps swing wildly. Also, the ocean is always warmer at night when the air cools down 😉 From a physical perspective, water takes more energy and more time for each degree of increase. Therefore, if the air were heating the water, I would expect some sort of lag. Perhaps the lag is there but is not resolved at the scale plotted?
I can understand how volcanic cooling of air and water would be coincident because it is due to lack of direct sun power rather than thermal conduction between a liquid and gas phase.
What about the 1998 air temp spike? I thought the consensus said the air was heated by the ocean due to the super El Niño. Is that true?
BTW, I’m sure you are right and I am wrong, but just don’t see it yet.
Nick Stokes (Comment#41209)
“War-influenced readings are down-weighted by the reduction in observations.”
I wouldn’t argue that the reduction in observations down-weighted the anomaly. I would simply argue that the reduction in observations is evidence of a disruption. This in turn disrupted the weighting of methods (bucket vs. intake), which caused a good portion of the drop. Spatial coverage would also be a concern.
Ibrahim (Comment#41119) April 20th, 2010 at 4:12 pm
AJ
They do. See 9.4.1.2 of the IPCC AR4WG1 report. Natural causes. If it wasn’t for AGW, we would be a lot cooler now.
AJ:
They may affect the data, but it’s almost certainly a real effect. (The purpose of measurement is to measure what is really there, not what would have been there were humans not interfering with climate.)
Of course, that’s a different thing than e.g. invoking buckets to explain the drop.
Nick Stokes:
Greatly? You go from roughly 1500 to 1000 stations; since the variance of the mean scales as 1/N, that’s a factor of 1.5 in variance, i.e. sqrt(1.5) ≈ 1.22, about a 20% increase in standard error.
That’s not “greatly”.
(It looks a lot more significant in your figure, because you didn’t set the y-minimum to zero.)
bugs:
I’m not sure where your logic is coming from.
First, until roughly 1980, anthropogenic CO2 and sulfates almost perfectly balanced. Thus spake the models. And not all of the warming since 1980 is explained by CO2 either.
Secondly, the models are way off in explaining the mid-20th-century warming period. Of course “natural causes” is a good explanation, if you start with the premise that the models have it right that anthropogenic CO2 and sulfates were mostly balanced forcings.
No reason to wall-paper over the problems with the models. Nothing to be learned from that.
According to the IPCC AR4:
the global (!) warming from about 1910 to 1940 (being 30 years) was 0.5 C. After that it cooled 0.2 C (just like from 1900-1910, which I also can’t explain). Then it stayed about flat until about 1976, when the real warming began (by the way, the start of the last warming corresponds with the phase shift of the PDO).
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/fig/faq-3-1-figure-1-l.png
I’ve uploaded a very simple excel simulator to the public domain. All are welcome to give it a try and modify as you wish.
http://rapidshare.com/files/378678110/tempSim2.xls
This uses a coin toss to simulate temperature change: H=+1, T=-1. The resultant temperature is the net (cumulative) sum of the H and T outcomes.
The results are graphed. Each time you press F9, the graph is recalculated. Ignore the scale, it is arbitrary. Look at the graphs produced each time you press F9.
I believe you will find that this simulator produces plausible temperature records. Press F9 repeatedly and you will get records much like the past 100 years, 1000 years, etc. This suggests to me that temperature could well be a random walk, which would explain the difficulty models have in predicting the future.
I’ve added some scaling and a 3 state coin (-1,0,1) to allow for temperature to remain constant, and scaled the results.
http://rapidshare.com/files/378702867/tempSim5.xls
This is a random time series after a couple of dozen F9 recalcs. It looks reasonably close. With a supercomputer pressing F9 and comparing the differences, I expect I could get very close in only a few days.
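For those without Excel handy, the same coin-toss walk takes only a few lines of Python; each call is one press of F9, and the scaling is just as arbitrary as in the spreadsheet.

import random

def temp_walk(n=1200, states=(-1, 0, 1), scale=0.02):
    # 3-state coin (down/flat/up), net sum accumulated as in tempSim5
    t, series = 0.0, []
    for _ in range(n):
        t += scale * random.choice(states)
        series.append(t)
    return series

print(temp_walk(120)[-1])   # net "temperature" after 120 steps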
carrot eater (Comment#41204) April 21st, 2010 at 5:15 pm
“I’m also of the opinion that we have not yet heard the last word on WWII-era and pre-war SST adjustments. Ultimately we may just have to live with some larger level of uncertainty in the SST numbers from that time period.”
Ditto: we know there are some papers in the pipe. For my part, I’ve been much more accepting of a record with cleaner provenance and more uncertainty than one with sketchy adjustments and tighter CIs.
With historical data it just is what it is (for the most part). I know that the UK guys have been digitizing some new (old) records… that’s the interesting stuff to me.
So very sad, to see people wasting their time on the homogenized, adjusted data. The “raw” data certainly has problems, but the adjustments turn them into fiction.
Gene,
The land data used in my model and that of Nick Stokes is the raw unadjusted unhomogenized GHCN data. HadSST isn’t raw per se (since “raw” satellite data isn’t very meaningful in this context).
Gene,
Fiction is what some are more comfortable with. Call it science and it justifies. Pretty fancy.
Andrew
Zeke,
V2 is adjusted. The “raw” GHCN data is in the other folder ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily Take a close look at your graph. Who could have ever reached the conclusion that there was an alarming temperature drop from 1945-1978? Now, go look at my workup of the raw GHCN land surface data http://justdata.wordpress.com/ Sure looked like we were headed into the deep-freeze way back then 😉
BTW, there’s no satellite data in HadSST2…
http://badc.nerc.ac.uk/data/hadsst2/
“HadSST2 is produced by taking in-situ measurements of SST from ships and buoys” (also adjusted & homogenized) 🙁
Gene,
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean.Z is just as “raw” as the daily data, and much more useful to work with. Also, there wasn’t an alarming cooling trend from 1945-1978.
I was under the impression that HadSST2 was supplemented with satellite data post-1979. I could be incorrect about that however.
In general, I’d suggest looking into the various reconstructions that different groups have done (Jeff Id, Nick Stokes, Residual Analysis, Tamino, CCC, and myself) over at http://rankexploits.com/musings/2010/comparing-global-land-temperature-reconstructions/
When you start with the same data, why would you be surprised that your results agree? Many people have compared individual stations’ raw data vs v2.mean. The v2 data is fudged. Joe D’Aleo did a workup on Central Park a while back: http://icecap.us/images/uploads/NOAAroleinclimategate.pdf
Gene,
D’Aleo was comparing raw (v2.mean) to adjusted (v2.mean_adj). All our reconstructions use raw (v2.mean).
Quoting D’aleo: “I downloaded the Central Park ‘unadjusted’ data from GISS and did a comparison of annual mean GHCN with the raw annual mean data downloaded from the NWS New York City Office web site here.”
Simple check to see if your dataset is adjusted: check the 1895 annual mean temperature of Central Park. Is it closer to 49F or 53F? The data in ghcn/daily agrees with NWS New York City: 52.7F. StationID=425725030010
Re: Gene Zeien (Apr 22 14:17),
Gene, the mean for that year (1895) station 72503 in my dataset is 53.12F.
Gene,
There are two imods associated with station 72503 for 1895 that have complete records for that year.
imod 001 has a mean annual temp of 53.27F
imod 005 has a mean annual temp of 48.59F
Since 425725030010 represents:
country: 425
wmo_id: 72503
imod: 001
I get 53F for the station in question.
(not sure why my annual number is a tiny bit off Nick’s; might have to do with the way we calculate the annual number from monthly means).
Re: Zeke (Apr 22 15:29),
Zeke, just following up on that – 72503001 is indeed Central Park,
72503005 is West Point
42572503001 NEW YORK CENTRAL PARK 40.78 -73.97 39 18U19342FLxxCO 5x-9 1821 2006 186 C
42572503005 WEST POINT 41.38 -73.97 97 146R -9HIxxno-9x-9 1824 2006 178 C
The lines in the v2.mean file are
4257250300101895 -13 -42 20 101 176 232 229 246 221 113 89 36
4257250300501895 -49 -77 -2 85 154 220 204 222 198 85 55 1
Units 0.1C
I think D’Aleo may have mixed them up.
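For anyone following along at home, here is a sketch of parsing the v2.mean lines quoted above, assuming the standard GHCN v2 layout (3-digit country + 5-digit WMO id + 3-digit modifier + duplicate digit + year, then twelve monthly values in 0.1 C). It reproduces Nick’s day-weighted 53.27 F for the Central Park duplicate; note the quoted line has collapsed whitespace, so split() is used here where a real parser would read fixed 5-character monthly fields.

DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]   # 1895 is not a leap year

def annual_mean_f(line):
    station, year = line[:12], int(line[12:16])
    temps_c = [int(v) / 10.0 for v in line[16:].split()]   # twelve monthly means, deg C
    mean_c = sum(t * d for t, d in zip(temps_c, DAYS)) / sum(DAYS)
    return station, year, mean_c * 9 / 5 + 32

line = "4257250300101895 -13 -42 20 101 176 232 229 246 221 113 89 36"
print(annual_mean_f(line))   # -> ('425725030010', 1895, ~53.27)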
Nick,
Or he could be using some GISS or USHCNv1/v2 F52 output that replaces the Central Park station data with that from West Point to correct for discontinuities of some sort. I recall Matt Menne referring to a cooling bias in the Central Park sensor due to a tree growing over it 😛
Re: Zeke (Apr 22 15:41),
Could be. 72503001 in 1895 (and before 1919) was not in Central Park at all, but in the Arsenal Building on 5th Ave.
Zeke, you might want to compare the different SST data sets, especially the COADS data set. There are interesting discrepancies prior to 1960. I’m hoping that people will take a look at the SST datasets; there are significant issues here.
Nick,
My bad, I was getting 52.7F from one of my graphs. Got the Central Park ID# from http://data.giss.nasa.gov/gistemp/station_data/
Parsing the daily data file USC00305801.dly, I get 53.3 as a mean temp for Central Park in 1895.
USC00305801 40.7800 -73.9700 40.0 NY NEW YORK CNTRL PRK WSFO HCN
Glad to hear V2.mean is actually usable data.
Re: Zeke (Apr 22 15:29),
Zeke, just on that small puzzle, I got the annual average by adding the months and dividing by 12. If I allow for the months being of different length (Feb etc), I get 53.27.
Judy,
Great idea for a project. I’ll take a stab at it over the weekend if Nick doesn’t beat me to the punch, so to speak.
.
Nick,
Figured it was probably just that 😛
I do like the concept of starting with daily records, instead of dividing it into 12 unequal partitions. There’s also QC stuff you can do with daily data you can’t do with monthly, like automagically filter missing minus signs on individual dates. I’d probably run the data through a 3-point median filter before computing the monthly average (if I were disposed to do so at all).
However, it doesn’t appear that GHCN has it set up so you can do incremental updates. It appears you have to download the bloody 1.8+ GB file each time.
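As an illustration of the 3-point median despike Carrick mentions, here is a toy version; a dropped minus sign shows up as a one-day spike and gets replaced by the neighbourhood median.

def median3(xs):
    out = list(xs)
    for i in range(1, len(xs) - 1):
        out[i] = sorted(xs[i-1:i+2])[1]   # middle value of each 3-point window
    return out

print(median3([10.2, 10.5, -10.4, 10.8, 11.0]))   # the -10.4 spike is suppressed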
RE: Carrick (Comment#41294)
Saw that piece on missing minus signs on WUWT the other day. Planning to have a go at finding some obvious ones after I finish repairing my car. Perhaps this weekend…
RE: Nick Stokes (Comment#41290)
My method divides by the number of days with recorded temps and yielded 53.282527 (which I’d rounded to a more rational 53.3)
Gene #41289
v2.mean is usable, but so is GISS. I think one of the things Zeke and others have been showing is that, not only do the results agree with CRU and GISS, but they do so despite the fact that GISS uses USHCN adjustments, say. Zeke found that using GHCN adjustments didn’t change the result much either.
It’s always possible to find a reasonably large individual adjustment as with Darwin or Central Park and draw some scary graphs. But the adjustments go both ways, and pretty much average out.
So why make them? Well, take Central Park. It was moved in 1919 from a 5th Ave location. USHCN has the metadata for that, and probably overlapping records so that the site difference can be well estimated. Why do you think data which ignores this is better?
RE: Nick Stokes (Comment#41297)
By USHCN’s own reckoning, the effect (+0.5C) of the adjustments on US temperatures is nearly as large as the global effect attributed to CO2. http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif as seen on http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html
I wanted to see what the raw data looked like, and spent my holidays finding out. Honestly, I was quite surprised by the end result.
Gene, #41297
I think it’s 0.5F not C.
Gene,
Ahh, yes, the U.S. is a bit of a special case, and there is a notable difference between U.S. raw and adjusted temps. However, the bulk of these adjustments correct for changes in time of observation and sensor type, and while we can argue about the specifics, it’s difficult to support the position that no adjustments are needed to correct for these known biases. We discussed them a bit in previous threads here:
http://rankexploits.com/musings/2010/a-detailed-look-at-ushcn-mimmax-temps/
http://rankexploits.com/musings/2010/uhi-in-the-u-s-a/
http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
It’s always good to have more folks working on temp reconstructions. A good first step is to see if you can replicate what has already been done; after that you know you are on the right track to pursue more novel analysis.
RE: Nick Stokes (Comment#41302)
0.5F, not C? Uh, oh… 0.6 C ≈ 1.1 F; 1.1 F − 0.5 F = 0.6 F ≈ 0.33 C.
So, of the 0.6 C total global warming, within the USA only about 50% is not due to adjustments. USHCN graph (ends in ’99): http://www.ncdc.noaa.gov/img/climate/research/ushcn/mean2.5X3.5_pg.gif Now that’s a noisy graph, with 3 F jumps/drops over 2-3 year periods.
Carrick,
I haven’t looked too closely at GHCN daily, but I was under the impression that some stations had only monthly data available (via CLIMATs), so you might be dealing with a smaller set of stations.
Zeke, I haven’t looked at it either (other than a few spot checks).
If it turns out that the number of daily stations is much lower, then it might have value as a quality control on the monthly records.
But from Ron Broberg’s comment it kind of looks like GHCN already has some QC software in place for things like sign errors.
Unless you’re interested in really high-frequency signals (by climate standards) or in nonlinear processing (median filtering and flipping the sign on samples with minus-sign errors are both examples of that), there isn’t any advantage to using the daily records over the monthly ones.
The short version is linear operators like monthly averages and spatial averages commute, and as long as they commute, the monthly average of the spatially averaged signal is equal to the spatial average of the monthly averaged signal (the answer is independent of the order of operation… so we might as well reduce the size of the problem by monthly averaging before doing any spatial processing.)
You might use a better low-pass filter than a simple monthly average before decimating the data… technically data decimation is nonlinear (e.g., monthly averaging but only retaining 12 values out of 365 in a year), but that only matters if you have nonzero frequency components above the Nyquist limit. (As an aside to this aside, if you perform a running monthly average and don’t decimate, then the temporal averaging operation is strictly linear, and the answer is again truly independent of the order of operation, no caveats.)
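A tiny numerical check of that commutation argument, with stand-in data (3 stations by 30 days):

import numpy as np

daily = np.random.normal(0, 1, (3, 30))   # station x day (stand-in data)
a = daily.mean(axis=1).mean()             # time-average each station, then spatial average
b = daily.mean(axis=0).mean()             # spatial average each day, then time average
assert np.isclose(a, b)                   # order of the two linear averages doesn't matter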
Gene:
Looks like a roughly 5-year ENSO-like signal to me (not noise), with a roughly 60-year amplitude modulation period in that signal.
(I’ve seen this modulation in some of the more detailed temperature fluctuation analyses I’ve done.)
I’ve put up here a new GHCN analysis which looks at spatial distribution of trends.
Zeke (Comment#41369) April 23rd, 2010 at 4:55 pm
Carrick,
I haven’t looked too closely at GHCN daily, but I was under the impression that some stations had only monthly data available (via CLIMATs), so you might be dealing with a smaller set of stations.
there is a HUGE worldwide dataset of daily data. CRU even does a version of it.
If I had a dollar for every time somebody looked at that old USHCN sum of adjustments graph, and mixed up C and F, and then extrapolated from US to global…
That said, I wish the NCDC wouldn’t publish results using F. I know they’re trying to speak to Americans, but anybody doing anything with the data just has to convert it back to C anyway.
As for GHCN daily: I did not think it had as wide a global coverage as the monthly. At least, not yet. I haven’t attempted to quantify the difference, though.
Regarding sign errors: The Peterson 1997 paper describes how they look for outliers. I haven’t seen anybody sit down and apply those tests, and see under what conditions a sign error could slip through.