[10/21 – Updates at Bottom]
Today the Berkeley Earth Surface Temperature project headed by Muller and Rohde released all their data and code, along with draft versions of four papers submitted for publication. These papers include:
- Berkeley Earth Temperature Averaging Process, describing their new method for station combination, homogenization, and spatial interpolation. This is by far the most detailed one.
- Influence of Urban Heating on the Global Temperature Land Average, analyzing the impact of urbanization on global temperature trends. They find, interestingly enough, that rural stations are actually warming faster (by 0.02°C ± 0.02 per decade) than urban areas.
- Earth Atmospheric Land Surface Temperature and Station Quality in the United States, which does an analysis similar to that in Fall et al 2011 and Menne et al 2010 and finds similar results, e.g. that there is no significant effect of station siting on mean temperature trends.
- Decadal Variations in the Global Atmospheric Land Temperatures, which argues that the AMO has a strong role in decadal (2-15 year) global land temperature variability.
The figure above, from the first paper, shows how the BEST land temperature record compares to those of GISTemp and HadCRUT. Unfortunately, they do not correct for the differing spatial weighting of GISTemp and HadCRUT (discussed in detail here), which means that the comparisons to the GISTemp and HadCRUT records are rather misleading. They do show quite similar trends to NCDC (and perhaps ever so slightly higher in recent years), though it is difficult to tell how much of the difference is due to the new homogenization method vs. differing spatial coverage.
In general, the new method for station combination and homogenization seems quite robust, and produces a temperature record with error bars considerably smaller than other temperature reconstructions. It will be interesting to see how NCDC's existing pairwise homogenization algorithm compares to the BEST scalpel approach in more direct tests on data with synthetic (blinded) inhomogeneities.
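To make the scalpel idea concrete, here is a toy sketch in R (my own illustration, not BEST's released Matlab code): rather than adjusting a series at a suspected inhomogeneity, the record is simply cut at the breakpoint, and each fragment is treated as an independent station in the averaging.

```r
# Toy sketch of the "scalpel" idea (not BEST's actual code): cut a station
# record at suspected breakpoints instead of applying an adjustment.
scalpel <- function(temps, years, breaks) {
  # temps:  numeric vector of anomalies; years: corresponding time values
  # breaks: times at which metadata or statistical tests suggest a discontinuity
  edges <- c(min(years), sort(breaks), max(years) + 1)
  fragments <- list()
  for (i in seq_len(length(edges) - 1)) {
    in_frag <- years >= edges[i] & years < edges[i + 1]
    # each fragment becomes an independent "station" in the averaging,
    # so no explicit adjustment is ever applied at the breakpoint
    fragments[[i]] <- data.frame(year = years[in_frag], temp = temps[in_frag])
  }
  fragments
}

# Example: a 1950-2010 record with a suspected station move in 1980
set.seed(1)
yrs  <- 1950:2010
anom <- 0.01 * (yrs - 1950) + rnorm(length(yrs), sd = 0.2)
anom[yrs >= 1980] <- anom[yrs >= 1980] + 0.5   # artificial step change
frags <- scalpel(anom, yrs, breaks = 1980)
sapply(frags, nrow)   # two fragments: 30 and 31 years
```

The appeal of the approach is that the step change never has to be estimated explicitly; the averaging step sees two shorter, internally consistent records instead of one adjusted one.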
The UHI paper is also quite interesting, if much less detailed than the process paper. They use the MODIS remote sensing data to classify stations as urban or rural, and take two approaches to analyzing UHI: simple averaging of all station trends, and global reconstructions of separate urban and rural sets. Both of these approaches likely suffer from bias in spatial coverage (the first more than the latter, as Rohde's spatial interpolation method will somewhat correct for spatial coverage issues), and I'd be surprised if the conclusion that the UHI effect is negative holds up under more nuanced analysis. That said, these results make it increasingly unlikely that a large UHI effect will appear in the land record. Stay tuned for much more on this subject after the AGU conference this year, where Mosh, Nick Stokes, Matt Menne, Claude Williams, and I are presenting a paper on this.
The surface stations paper takes an approach similar to that of Menne et al, but is not as nuanced as Fall et al in its treatment of the spatial coverage problems with well-sited (CRN 1 and 2) stations. The timeframe issue (1950-2010 instead of 1979-2010) that Anthony raises is something of a canard, as it won't affect the results particularly much, and BEST provides annual differences between CRN123 and CRN45 stations from 1900 to present, which are quite similar to those in Fall et al (who, it's worth noting, provide an analysis of 1895-2010 trends as well as 1979-2010 trends). It also should be relatively trivial to replicate the BEST analysis for 1979-2010 now that they have released their code; a sketch of what that might look like follows below. Overall the results of BEST are quite similar to those of Menne et al and Fall et al, namely that station siting does not have a large impact on temperature trends.
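As a rough sketch of what such a replication might look like (the data frame layout and column names here are my assumptions, not BEST's released code), one could fit per-group trends over the window of interest and difference them:

```r
# Hypothetical sketch of a CRN123-vs-CRN45 trend comparison; `df` is assumed
# to hold columns year, anomaly, and crn (the station's CRN siting rating).
compare_siting <- function(df, start = 1979, end = 2010) {
  df <- subset(df, year >= start & year <= end)
  trend <- function(d) unname(coef(lm(anomaly ~ year, data = d))["year"]) * 10
  good <- trend(subset(df, crn %in% 1:3))   # well-sited stations
  poor <- trend(subset(df, crn %in% 4:5))   # poorly-sited stations
  c(good = good, poor = poor, diff = poor - good)   # degC per decade
}
```

A real replication would of course need the spatial weighting applied before the group trends are compared, which is exactly the point Fall et al were careful about.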
As we start to play around with the newly released BEST code (and port it to more accessible programming languages like R), I'm looking forward to trying to replicate some of the results shown in these papers, as well as trying some new variants of their analysis. Regardless, these new papers should finally lay to rest some of the sillier claims about the reliability of the land temperature record.
Update #1: It looks like the BEST website has a comparison of reconstructions that solves the apples-to-oranges GISTemp issue (not sure about HadCRUT, though) and includes all 39,000 stations (and not just GHCN). Unsurprisingly, it shows results quite similar to the other indices:




I’m curious:
“Overall the results of BEST are quite similar to those of Menne et al and Fall et al, namely that station siting does not have a large impact on temperature trends.”
On temperature trends, yes, but how about on accurate temperature measurements?
IIRC the gistemp “land-only” index is actually a global estimate, made using only land stations. So there’s no mystery in its having lower trends than a land temp reconstruction. In fact a reviewer could well ask why on Earth they chose to compare the two.
Muller et al. allude to this in their first paper, but state that the literature is unclear as to what exactly gistemp does.
John: If you refer to the differences in absolute temperatures between “poor” and “good” stations, they say that the differences found by Fall et al. are not significant. They compute their uncertainty by using a 50-year record rather than one starting in 1979, which apparently aroused Anthony’s ire. Personally I find it hard to believe that restricting the method to 1979 onwards would somehow reduce the error estimate.
I tried downloading their data, but the text version is corrupted, and the Matlab version is incomprehensible. I’ll look for more documentation.
The first two pictured graphs are a model of clarity. They are interesting just to look at and reflect on.
I’m intrigued that HADCRU, GISS, and NOAA all “run hot” (wrt Berkeley), 1850-1900. I recall that weather station placement was very sketchy for most of that time, especially ex-US and a few other developed countries. There would have been a much greater emphasis on certain temperate geographies at the expense of tropics and boreal regions. Presumably those are the conditions where differences in homogenization have the greatest effects on the curve.
And then HADCRU and GISS “run cold” for the past two decades. I know you’ve discussed things like HADCRU/GISS vs. NOAA… if only I could keep these alphabet-soup comparisons straight in my head.
The Berkeley error range looks, intuitively, as though it “makes sense”. Those of too many reconstructions don’t seem to pass the eyeball plausibility test. Though whether that’s the fault of the eyeballs or the recons, who can tell.
toto:
What exactly were you trying to say here? It doesn’t make any sense to me, sorry. The others aren’t a “global estimate”, or they don’t just use “only land stations”???
Zeke – You wrote
“Overall the results of BEST are quite similar to those of Menne et al and Fall et al, namely that station siting does not have a large impact on temperature trends.”
This is not correct with respect to Fall et al – http://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf
We found that
“Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends.”
Roger
You (and Muller) are drawing conclusions just with respect to the mean.
“Regardless, these new papers should finally lay to rest some of the sillier claims about the reliability of the land temperature record.”
————————————
They should, but I doubt that they will. The strong agreement between the surface and tropospheric records since 1979 should also have been a big clue that the surface record was sound.
Thanks, Zeke, for keeping interested parties posted on the latest in analyzing the instrumental temperature data.
I have been looking at the data and combining the analyses of Fall et al and that of the Menne and the algorithm used in the latest version of USHCN. My work is far from complete, but I find, in general, the same phenomena noted by Pielke Sr in this thread and as reported by Fall. I used several time periods, namely, 1920-2011, 1940-2011, 1960-2011 and 1980-2011. I did see differences in the relative trends for the 5 CRN ratings depending on time period.
I also found that the 9 regions used in Fall have variations region to region that are larger and more significant than those from the CRN ratings, and that adjustments have to be made for region, either by using the region factor along with the CRN factors in a model or by subtracting out the region effect, depending on what region the CRN-rated station resides in. The temperature data are readily seen to be noisy and require large differences to be present before calling them significant. I have not yet combined the data into CRN rating groups but will do that soon.
At the same time I have been analyzing the breakpoints of the TOB and Adjusted data and comparing them on the basis of differences in trends for the TOB and Adjusted data for a given station. I have been assuming that those stations which have the same breakpoints for the TOB and Adjusted series should have little or no difference in trends, or at least significantly smaller trend differences than stations where the TOB and Adjusted series have different breakpoints. In my initial analysis that has not been the case.
I have also assumed, as noted by Menne in his paper on the topic, that a station that has breakpoints in the difference series with its correlated neighbors will have breakpoints in its own non-differenced series.
Doing breakpoints in R with the function in the strucchange library is a very lengthy calculation and precludes me from doing difference series with the 50 nearest neighbors. Does anyone know what program Menne used, and on how powerful a computer? Also, I have not been able to locate the breakpoints that were found using the Menne algorithm and wondered whether anyone can link me to that information.
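To illustrate the kind of calculation I mean, here is a minimal example on synthetic data (the series setup is invented; real difference series would come from station pairs):

```r
library(strucchange)

# Synthetic example: breakpoints in the difference series between a target
# station and one correlated neighbor (monthly anomalies, 50 years).
set.seed(42)
n <- 12 * 50
target   <- arima.sim(list(ar = 0.3), n) + c(rep(0, 300), rep(0.6, n - 300))
neighbor <- arima.sim(list(ar = 0.3), n)   # no break in the neighbor
diff_series <- ts(target - neighbor, start = 1961, frequency = 12)

# Fit mean-shift breakpoints; the underlying dynamic program is roughly
# O(n^2) in the series length, which is why repeating this for 50
# neighbors per station gets so expensive.
bp <- breakpoints(diff_series ~ 1)
summary(bp)
breakdates(bp)   # estimated break dates, hopefully near observation 300 (~1986)
```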
Owen:
Well they’re not identical (nor should they be, since they don’t measure the same thing) but broadly I agree with you here.
Based solely on a quick glance (all I have time for), the paper “Decadal Variations in the Global Atmospheric Land Temperatures” appears to be a statistical analysis, and fails to consider the underlying processes. It also seems to fail to acknowledge that:
1. the PDO does not represent the detrended SST anomalies of the North Pacific north of 20N,
2. the AMO does represent the detrended SST anomalies of the North Atlantic,
3. and that ENSO is represented by NINO3.4 SST anomalies (not detrended)
4. that their analyses are comparing apples to oranges to pineapples.
Carrick: What exactly were you trying to say here? It doesn’t make any sense to me, sorry. The others aren’t a “global estimate”, or they don’t just use “only land stations”???
GISS “land only” is actually an attempt at reconstructing global temperatures (including much of the oceans) from land data only. This implies a lot of extrapolation from island and coastal stations over the oceans. This is different from trying to reconstruct a land-only temperature series. In particular, it gives an awful lot of weight to islands and coasts. The Galapagos seem to account for maybe 1 or 2% of the globe in figs. 7-8 of Hansen 2001.
CRUTEM applies a 5×5 gridding, which still involves a lot of extrapolation and increases the weight of islands and coasts. Muller says that both he and the NOAA product he uses apply strict land masking (at least that’s how I understand the discussion on pages 30-31 of his first paper).
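To make concrete why the gridding gives islands so much leverage, here is a toy sketch (my own illustration, not CRUTEM's actual code) of a 5×5 gridded average with cosine-latitude area weights; a station that is alone in its cell carries the whole cell's weight:

```r
# Toy 5x5-degree gridded average with area weighting (not CRUTEM's code).
grid_average <- function(lat, lon, anom) {
  cell      <- paste(floor(lat / 5), floor(lon / 5))   # 5x5-degree cell id
  cell_mean <- tapply(anom, cell, mean)                # average stations per cell
  cell_lat  <- tapply(lat, cell, mean)                 # representative latitude
  w <- cos(cell_lat * pi / 180)                        # area weight ~ cos(latitude)
  sum(w * cell_mean) / sum(w)
}

# One island station alone in its cell gets as much say as a cell full of
# continental stations (numbers invented for illustration).
set.seed(3)
lat  <- c(rep(47.5, 30), -0.7)            # 30 continental stations + one island
lon  <- c(runif(30, 0, 4.9), -90.3)       # continental stations share one cell
anom <- c(rnorm(30, mean = 0.8, sd = 0.1), 0.1)
grid_average(lat, lon, anom)   # pulled well below 0.8 by the lone island
```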
See also this other post by Zeke.
At any rate, since Muller doesn’t seem to cite exactly which products he uses for comparison (or maybe I’m just too dense to find it), it’s difficult to be sure.
Thanks for the links, Toto.
Toto
I got the same corruption. I was given access to the data and wrote a package to read it, but it looks like the data has changed substantially. It was also pretty much of a mess back then, but readable. A dataflow diagram would be nice; anyway, I like the method, so I’ll put up with the programming infelicities.
But crap, they didn’t even check the zip. More later, I’ve got other fish to fry.
Regarding the recent claims that warming has stopped in the past 10-12 years, neither the NOAA nor BEST reconstructions show the more pronounced recent leveling-off that is seen in Hadley and GISS records (see the first figure above).
Lucia – I provided this response to Zeke at WUWT
“Zeke Hausfather – I realize that he has a much larger set of locations, but many of them are very short term in duration (as I read from his write up). Moreover, if they are in nearly the same geographic location as the GHCN sites, they are not providing much independent new information.
What I would like to see is a map with the GHCN sites and with the added sites from Muller et al. The years of record for his added sites should be given. Also, were the GHCN sites excluded when they did their trend assessment? If not, and the results are weighted by the years of record, this would bias the results towards the GHCN trends.
The evaluation of the degree of independence of the Muller et al sites from the GHCN needs to be quantified.
Perhaps they have done these evaluations. However, from my reading of their work, I have not yet seen it.”
BEST took the NOAA GHCN database and applied a different methodology (and then inexplicably applied a 12 month moving average to all the series) and then came up with broadly similar results to the NOAA, GISS and Hadcrut.
When they randomly chose sites from the 39,000 sites not used in GHCN or by GISS, they get a different result.
Why is this not the headline? I thought this was what they were supposed to be doing.
Figure 1: Berkeley Earth
http://berkeleyearth.org/Resources/Berkeley_Earth_Decadal_Variations
Dr. Pielke,
You are correct in your first comment, I should have written that:
“Overall the results of BEST are quite similar to those of Menne et al and Fall et al, namely that station siting does not have a large impact on mean temperature trends.”
As far as BEST data goes, GSOD data alone (which provides one of the main sources of additional data) gives you a good independent check on GHCN over the past 30 years. See Nick Stokes’s work on it last year, for example: http://moyhu.blogspot.com/2010/07/global-landocean-gsod-and-ghcn-data.html
Bill Illis,
It looks to me like they get broadly similar results with the full dataset: http://berkeleyearth.org/analysis.php
The graph in the paper is a random 20% subset of stations not in GHCN, whereas the graphs at the link above are all stations in their database.
Zeke #84234,
Thanks for mentioning the GSOD analysis. I’ve made a variety of KMZ and KML files to show station locations in Google Earth, which can be downloaded here. The most recent is best.kmz, and when I crack the big data file, I’ll be able to include info on duration of data. There is also a GSOD.kmz file and numerous GHCN files. I’ve just put up a new post about the BEST file here.
Zeke – I still would like to see an analysis of how much added geographic coverage the other (non-GHCN) stations provided, as well as their time record length. Roger
Why do we appear to be so content to know about agreements of various temperature series over the last 30 years? If we had only a 30 year temperature record our conclusions about climate change would be significantly different or at least much less certain.
The Menne et al 2010 and Fall et al (2011) papers covered only the past 30 years. If any of you have worked with the available temperature data series that go back in time you would be aware that much missing data exists. The USHCN latest version of adjusted temperatures fills in the missing data from the TOB series using neighboring stations. I am not sure what uncertainties arise from that process, but I am attempting to analyze that issue.
As an aside, I need to add that the analyses of Fall and Menne make some implied assumptions that need further analysis as well. Both papers seem to imply that what is being measured trend-wise arises from a change in station CRN rating during the period studied, and further, for the results to have quantitative meaning, that the changes occurred for all the stations in a rating group, starting from the same other rating group. An alternative could easily be that no significant amount of CRN rating change occurred for the stations analyzed during the time period studied (my work indicates that most of the changes probably occurred much earlier than 30 years ago), or that an unknown portion of the stations changed to the measured CRN rating (when the snapshot was taken by the Watts team) from some unknown CRN rating in the past. The second alternative would imply that the measured changes in trends capture only a portion of the change that would be expected from a station going from one CRN-rated group to another.
Kenneth Fritsch:
That’s the record we have, it’s good the various sets agree over that time, and moreover that included a period of rapid temperature change, so it does test something meaningful.
Secondly, that’s more or less the time period over which anthropogenic global warming is said (by GCMs, which give the interval to be 1975-now) to be resolvable from natural forcings. Going further back in time tells us nothing useful (according to the models) about anthropogenic global warming. We need to go further forward in time (25 years) before we can incontrovertibly separate out anthropogenic forcing from natural climate fluctuations.
Kenneth: apparently their graphs (and the agreement) seem to include much more than the last 30 years. In fact I heard someone is complaining precisely about that 🙂
Updated the OP with a graph that has a fixed GISTemp record and (as far as I can tell) includes all 39,000 stations.
Now we have to explain why Land temperatures have increased by twice as much as Ocean surface temperatures … and,
… why the Land surface temperatures are increasing so much faster than the lower troposphere (the UAH/RSS lower troposphere level is supposed to be warming at 1.27 times the land/ocean surface while it seems to be increasing at only about 0.5 of the Land temperatures trend since 1979).
So since 1850 CRU has warmed by 1°C and Berk has warmed by 2°C. Can we deduce anything except that it has warmed a bit?
(Berk’s 2°C would be just what the IPCC allows us before we have to completely stop emissions of CO2.)
Bill Illis (Comment #84254)
October 21st, 2011 at 12:51 pm
“the UAH/RSS lower troposphere level is supposed to be warming at 1.27 times the land/ocean surface”
—————————————–
Is that supposition generally agreed upon? Would you happen to have a reference to the 1.27 factor?
“Secondly, that’s more or less the time period over which anthropogenic global warming is said (by GCMs, which give the interval to be 1975-now) to be resolvable from natural forcings. Going further back in time tells us nothing useful (according to the models) about anthropogenic global warming. We need to go further forward in time (25 years) before we can incontrovertibly separate out anthropogenic forcing from natural climate fluctuations.”
This is a hypothetical, so do not get excited, but if going further back in recent time showed temperatures warmer than we currently measure, or nearly as warm, we would have to deal with that baseline for warming, and I would think that would change our thinking on the current warming. Obviously models go back further than 30 years, and for good reason. Reconstructions depend on instrumental data going back further than 30 years. Of course, if we judge that the models are correct standing alone, then we should be adjusting the instrumental records to what the models reveal, or better, simply use the model output.
Tropical storm analyses on temperature effects go further back than 30 years and I would guess that might have motivated Judith Curry on the BEST analysis.
What I am most interested in is part of what the BEST group is attempting to determine, and that is the uncertainty of temperature measurements, particularly as we go back in time. The BEST group quotes the IPCC as identifying the 1950s forward as the period most likely affected by AGW.
By the way, I find the BEST paper the most comprehensive approach to temperature averaging and infilling of those that I have read to date. I have not had time to read and understand all the assumptions made, but the paper does attempt to deal with most of the issues that I have thought were important. I think much of my current skepticism lies with how well the homogenization algorithms capture discontinuities in the temperature series and the uncertainties that process involves.
I think we need an audit of the auditors. I’m still not convinced.
Bugs, I agree. Please start, and get others who wish this to be done to join you. That is what the auditors did.
Bill Illis says:
Maybe because you seem to be the only one, including the authors of the figure, who interprets the figure in this way.
Isn’t that general disparity between land and ocean temperatures something that the climate models that you love to hate actually predict ought to be the case?
What? In one part of the sentence, you compare one thing. In the next part another. It is almost as if you are getting really desperate.
Owen (Comment #84257)
October 21st, 2011 at 1:08 pm
—-
From Peter Thorne (UK Met) and Tom Peterson (NCDC) who are in charge of the temperature records. See Figure 7.
http://www.webpages.uidaho.edu/envs501/downloads/Thorne%20et%20al.%202010.pdf
Joel Shore (Comment #84268)
October 21st, 2011 at 6:18 pm
——–
Isn’t that general disparity between land and ocean temperatures something that the climate models that you love to hate actually predict ought to be the case?
… why the Land surface temperatures are increasing so much faster than the lower troposphere (the UAH/RSS lower troposphere level is supposed to be warming at 1.27 times the land/ocean surface while it seems to be increasing at only about 0.5 of the Land temperatures trend since 1979).
What? In one part of the sentence, you compare one thing. In the next part another. It is almost as if you are getting really desperate.
———-
What do the climate models predict for ocean surface temperatures?
And Part III, those are just the trends – emotions, such as desperation, are not part of how I view science.
Part I, it appears that BEST provided an updated chart using all of the 39,000 sites not used by others (rather than just a random sample) which was not included in their original papers (and Zeke has updated the post at the top with that new data).
Bill Illis (Comment #84270)
October 21st, 2011 at 6:45 pm
Thank you for the reference.
Zeke, in one of your comments you say:
If you’re talking about the figure I think you’re talking about, Figure 1 in the paper on decadal variability, it wasn’t 20% of the stations not in GHCN. It was just 2,000 stations. There were about 30,000 total, so the actual amount is closer to seven percent. The only place I remember them using 20% of data was in Figure 6 of their methods paper.
It doesn’t change anything, but I thought you’d like to know.
Bill Illis,
Land vs ocean. Ocean has more water (obviously) than land for it to carry heat into the troposphere and beyond (heat transfer by convection). Look at NH vs SH trends using RSS data. No trend for SH (essentially flat), but the highest temp trend is NH (excluding tropics for both SH and NH). I always thought it was because NH had more land (and more ocean for SH). My theory however doesn’t explain if the heating is due to CO2 or natural causes (like lack of cosmic rays). The same observations would be seen in either case.
I agree with Illis that the point of the BEST project is to distract attention away from the satellite dataset, which will be the kiss of death to CAGW. Assuming 2012 is cold (due to La Nina), you could be looking at 17 years of non-warming (1996-2012). The alarmists are whistling past the graveyard.
Chris Nelli:
Even with your cherry picking, not likely.
This doesn’t seem to follow your hopes either (°C decadal average UAH TLT)
All records show warming, satellite and surface. The satellite shows more variability from oceanic-oscillations, but even over a 10-year average the trend is still there.
Bill Illis:
That probably has something to do with the ocean maintaining approximately the same rate of heating from equator to ice cap.
Land has feedbacks (e.g., albedo changes associated with ice loss) that accelerates warming (and cooling) in the high arctic.
Figure.
Based on models which IMO lack the resolution to make this sort of prediction. Certainly not to three decimal places! TLT is “well mixed” compared to the boundary layer, so comparing the land-only surface boundary layer to, e.g., 8 km above it at best shows a serious lack of basic understanding of the differences in the two classes of measurements.
Carrick, FWIW I don’t think 17 years of non-warming is that unlikely in RSS:
http://www.woodfortrees.org/plot/rss/from:1996/plot/rss/from:1996/trend
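For anyone who wants to check this locally rather than on woodfortrees, a minimal R sketch (the file name and two-column layout are assumptions about a locally saved copy of the RSS monthly TLT anomalies):

```r
# Minimal sketch: OLS trend of RSS TLT anomalies from 1996 onward.
# "rss_tlt.txt" is a hypothetical local file with two columns:
# decimal year and anomaly in degC.
rss <- read.table("rss_tlt.txt", col.names = c("year", "anomaly"))
rss <- subset(rss, year >= 1996)

fit <- lm(anomaly ~ year, data = rss)
coef(fit)["year"] * 10   # trend in degC per decade

# Caveat: naive OLS ignores autocorrelation, so its confidence interval
# is too narrow to declare "no warming" from a short window.
```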
Thanks Niels! Carrick, please plot 1997-2011 data in RSS. No trend! That’s the last 15 years. Also, both of you ignored an important facet of my argument: assuming 2012 is cold. Where in your graphs do you plot assumed data for 2012? By the way, I consider the trend insignificant if the trend is <0.5 C per century.
You remind me of Atmoz, whom I had a similar argument 3 years ago on his blog. I said then that it was setting up to be 15 years of non-warming (it was 12 up to that point). I also predicted that this would be a growing issue in the debate (which it has). Well, I'm telling you NOW it is setting up for 17 years if 2012 is cold (which is a good bet at this point). In short, the argument is not "no warming for the past decade", but more like, "no warming for 20 years". A huge difference that the warmists aren't prepared for.
BEST posted the monthly data in a text file at:
http://berkeleyearth.org/analysis.php
So, here are the highly variable Monthly Land Temperature anomalies and the 12-month moving average. (Now we see why moving averages were used.)
http://img202.imageshack.us/img202/3230/berkeleymonthlylandanom.png
And the 5 Year Moving Average and the 12 month moving average.
http://img838.imageshack.us/img838/2748/berkeleymovavglandanom.png
The warmest month in history is March, 1822 at +2.431C
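For anyone reproducing those smoothed curves, a centered 12-month moving average is a one-liner in R (a sketch; the variable names are assumed, and the BEST file has more columns than this):

```r
# Centered k-month moving average of a monthly anomaly series
# (what the smoothed curves above show; not BEST's own code).
moving_avg <- function(x, k = 12) {
  as.numeric(stats::filter(x, rep(1 / k, k), sides = 2))
}

# Hypothetical usage with a monthly anomaly vector `anom`:
# anom_12mo <- moving_avg(anom, 12)
# anom_60mo <- moving_avg(anom, 60)   # the 5-year moving average
```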
Carrick,
Who’s cherry picking? The point is to compare the last 15-20 years of satellite data to the output of the climate models. For example, the IPCC claims a warming trend of 3-4 C per century. Over 20 years, that would be 0.6-0.8 C. How do you square that against little to no warming over the past 15-20 years?
Even if you consider all 33 years of the satellite dataset, the warming should be 1.0-1.33 C according to the IPCC. Again, a cold 2012 possibly brings the temp anomaly to the zero baseline. At that point, the models are off by a century’s worth of warming (which is approx. 1 C per century if the surface data can be believed between 1800-2000).
Talk about missing heat!
What source are you using? On woodfortrees, RSS has a trend of 0.4-0.5degC over the entire record, which seems quite compatible with what you call the “IPCC” prediction (which I’m pretty sure applies to the next century, not the previous one).
Interestingly, UAH has pretty much the same trend over the entire record. This is despite UAH having a much higher trend than RSS over the last 15 years (so perhaps you might avoid the word “satellites” when you are really referring to RSS only).
What Carrick is trying to tell you here is that a flat spell over a 15 year periods in some records (but not others) tells you nothing about the long-term behaviour.
I suggested you have a look at this post for a better way to confirm or falsify “global warming”. Did you?
At CA, Steve has some first thoughts.
http://climateaudit.org/2011/10/22/first-thoughts-on-best/
A point he made was how cold it was in the early 1800s, as if there had been a Little Ice Age. 🙂
Chris:
Pretty much you. You started the beginning of your series with a positive outlier (the ENSO event), and even doing that we still get a positive slope.
This represents a logical fail as well as a science fail on your part. The science doesn’t say what you claim it says, as toto points out, and your interval choice is highly suspect and appears designed to favor a particular interpretation.
toto:
I absolutely agree on this point. You really need a longer interval than 15-years to reliably separate natural variability from anthropogenic forcing.
Figure.
Don B:
Rob Wilson weighs in on that.
I made a comment on that which pretty much echoes something I said to one of the noobs on this blog:
My comment was in reference to Rob’s comment:
Only 48 comments on this post. Surprising. If BEST had found lower surface temperature increases and confirmed the UHI, I’d bet (up to 5 full quatloos) that we would have >250 comments by now.
From TPM: “When we began our study, we felt that skeptics had raised legitimate issues, and we didn’t know what we’d find,” Muller wrote in a Friday Wall Street Journal op-ed. “Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections. Global warming is real. Perhaps our results will help cool this portion of the climate debate.”
Such a vote of confidence for climate scientists is met here largely by silence and a few random attempts to diminish…..
Owen,
for many of us the BEST results are largely what we expected.
Joel: “Maybe because you seem to be the only one, including the authors of the figure, who interprets the figure in this way.”
Maybe you don’t understand the data he is talking about.
Look at this paper by BEST.
http://www.berkeleyearth.org/Resources/Berkeley_Earth_Decadal_Variations
The BEST record there represents 2000 previously unused and randomly picked records. If you look at the last dozen years, it looks very much like it has a negative trend for that time period.
The BEST data in the chart above represents 39,000 records. I downloaded the data for that chart and plotted and trended the last twelve years. The trend was strongly positive.
I can’t find the data for the 2000 samples used in the “Decadal Variations” paper. But, from eyeballing it, its 12-year trend looks negative. This means that there is a huge divergence between the randomly selected and previously unused sample of 2,000 and the 39,000-record chart.
Both charts are used in their papers. An explanation is called for.
Owen: “Only 48 comments on this post. Surprising. If BEST had found lower surface temperature increases and confirmed the UHI, I’d bet (up to 5 full quatloos) that we would have >250 comments by now.”
Try Judith. She has 655.
“Such a vote of confidence for climate scientists is met here largely by silence and a few random attempts to diminish…..”
Could be due to the BEST group coming up with an idiotic result like negative UHI effect.
Toto and Carrick,
You guys have proved my point that people aren’t ready to defend 20 years of non-warming. You are still working on the 10-15 year non-warming meme. Time to move on to 18-20 years as it is right around the corner. Regarding cherry-picking, this is sort of moot for the following reason: if a climate model can’t match a 20-yr period (regardless of where it resides within a 60-yr time frame, for example), then the model is flawed.
A few clarifications:
– I prefer satellite data over surface temp data (don’t need to list the reasons)
– I prefer RSS to UAH due to the way the data is presented/broken down (nothing more sinister than that)
– I said 2012 may bring the temp anomaly to the zero baseline (not zero slope). If you graph RSS data, you will see the temps in 1980 were around the baseline. Thus, this is a comparison of the “ends” versus a trending exercise. By the way, how appropriate is it to apply a linear trend to a “hump”?
– Regarding the ENSO events, looking at the trend from 1995-2014 (20 yrs), for example, neutralizes 1998 to a large extent.
Are you sure either of you is not Atmoz? I know you guys have a vested interest in keeping this debate going for years or decades, but in two years, assuming no significant warming between 1995-2014 (20 years), I hope you guys have other interests to participate in besides the global warming debate because that debate will be over. Remember, it wasn’t that long ago that people thought that housing prices only go up.
Carrick: “Certainly not to three decimal places! TLT is “well mixed” compared to the boundary layer, so comparing the land-only surface boundary layer to, e.g., 8 km above it at best shows a serious lack of basic understanding of the differences in the two classes of measurements.”
So, are you saying that the boundary layer is unaffected by mixing for 32 years? Or that it remains unbalanced due to a lack of mixing for 32 years? I might believe a couple of months.
“Land has feedbacks (e.g., albedo changes associated with ice loss) that accelerates warming (and cooling) in the high arctic.”
The same ice-loss albedo changes apply to the Arctic ocean. And to the ocean around the Antarctic.
“This doesn’t seem to follow your hopes either (°C decadal average UAH TLT)”
Big divergence problem happening over there right now. Hard to tell if those numbers mean something or not.
Chris: “Time to move on to 18-20 years as it is right around the corner.”
Not yet Chris. You’ve got 12+ years. With the La Nina, 14 are likely. Beyond that is a guess.
“- I prefer RSS to UAH due to the way the data is presented/broken down (nothing more sinister than that)”
I prefer the satellite data because the UHI problem is solved. But between RSS and UAH it’s an unknown. Right now they are diverging and we don’t know who is wrong. The TMT divergence is even bigger than the TLT divergence that we see. Presentation means squat.
Chris:
It’s not 20 years of non-warming. It’s been warming over that period. You can’t even get the basic facts right.
Bye.
Tilo:
Where did that question come from and why did you think I was saying that or anything remotely associated with it?
The 8-km height does not respond like measurements taken 1-m off the ground in surface level meteorology. (It’s not even well-modeled in large-scale eddy simulations, let alone trying to make connection with 100-km grid-size GCMs.)
But not the majority of the ocean basin, which is well mixed over those time scales.
I’ve noticed it’s always “harder to tell” when facts don’t align with expectations. Regardless, what Chris said was factually wrong.
It may be that RSS or UAH or both are wrong, but you haven’t proven that, you’ve just speculated it might be the case.
The degree to which a “divergence problem” is happening is also open to interpretation.
Owen:
Since, unlike us unwashed RankExploit denizens, you apparently have already fully digested their four papers, perhaps you could give us a 10-paragraph technical summary of their work, including a cost/benefit evaluation of kriging versus EOF-based interpolation schemes?
(Also Friday posts typically have the lowest popularity rates, so there is that too.)
Yet again the media misrepresents the situation. Typical is the ‘i’, here in the UK, with its “global warming is real, admits climate sceptic”.
Not only does this headline misrepresent Muller and where he was coming from, it also misrepresents the majority of sceptics who have never denied that warming is happening. The dispute has always been about the causes of warming and the possible effects.
As usual the MSM sucks.
Carrick: “The 8-km height does not respond like measurements taken 1-m off the ground in surface level meteorology.”
Agreed. But why should 8 km have a lower trend over a long time period? I don’t see how mixing will affect the long-period trends.
“But not the majority of the ocean basin”
Ice-albedo change doesn’t affect the majority of the land area either.
“It may be that RSS or UAH or both are wrong, but you haven’t proven that, you’ve just speculated it might be the case.”
Okay, it could be one or the other that is wrong. Or it could be that both are wrong. With a divergence we know that they are not both right. So why use either, unless you like their result?
“The degree to which a “divergence problem” is happening is also open to interpretation.”
Both UAH and RSS have admitted that there is a divergence problem. And they think that the degree is large enough to worry about.
Tilo:
I’d flip the question: why is it that being in the land boundary layer can result in amplification of warming signals? Part of the answer is that surface land temperatures see a larger trend in minimum temperature than maximum temperature.
They also typically measure over a 24-hour period and either take the mean over 24-hours, or perform the average (Tmax+Tmin)/2. (In spite of claims by some people on this blog that it’s always (Tmax+Tmin)/2, the evidence appears to support the statement that the reporting gets done differently depending on the data provider.)
The reason why there is a larger effect on minimum temperature is likely related to an increase in water vapor in the atmosphere, and of course, this is an effect that shows up at night.
In comparing with satellite measurements, it’s important to note that the satellite data sample the surface more or less continuously in space, but at the same local time of day (typically 2 pm local time). Surface data provide continuous temporal sampling, but non-uniform spatial sampling, with some regions of the world undersampled.
That’s saying something different than saying whether the divergence is big enough to matter for comparison with the other data sets.
I can add RSS to this comparison if you like, but largely the data sets agree with each other. The question is “does it matter” since it is inevitable that some disagreement will be observed.
There are differences in long-term trends, but they aren’t huge. The fact they compare as well as they do, given these differences is to me, a statement of just how relatively minor some of the effects (e.g. UHI) some skeptics are jonesing about.
For comparison of the ground to TMT and TLT, it almost certainly is the case that the global climate models simply do not model the effects of temperature change on land surface measurements well enough to draw any meaningful conclusions.
(It’s my opinion that RSS has been largely given a free pass and UAH has received all of the scrutiny, so if I had to bet on which I’d pick as closer to truth, it would be UAH).
Watts Up is a trainwreck on this. Well, an even bigger trainwreck.
Boris, this is a Watts Up train wreck?
Sounds more like the train wreck happened in Berkeley on that one.
Seriously, BEST posted four preprints, started a huge publicity campaign before the papers went through any vetting process, and WUWT is the train wreck!?
Tilo Reber (Comment #84309)
October 22nd, 2011 at 1:48 pm
“Could be due to the BEST group coming up with an idiotic result like negative UHI effect.”
—————————————
From paper:
“We observe the opposite of an urban heating effect over the period 1950 to 2010, with a slope of -0.19 ± 0.19 °C/100yr. This is not statistically consistent with prior estimates, but it does verify that the effect is very small, and almost insignificant on the scale of the observed warming (1.9 ± 0.1 °C/100yr since 1950 in the land average from figure 5A).”
The “almost insignificant” effect they measure is consistent with the very small contributions estimated by GISS, NOAA, and HADCRUT (0.01 to 0.06 C per century).
Why do you think it’s idiotic?
Carrick:
Sorry, I didn’t understand the answer about why 8 km should have a lower trend over a long time period than the surface. Or, if you like, I didn’t understand why the boundary layer should have an amplification of the warming signal.
“Part of the answer is that surface land temperatures see a larger trend in minimum temperature than maximum temperature.”
And how does that explain a difference in trend with 8 km?
“The reason why there is a larger effect on minimum temperature is likely related to an increase in water vapor in the atmosphere, and of course, this is an effect that shows up at night.”
Could also be hot concrete and asphalt releasing heat through the night.
“In comparing with satellite measurements, it’s important to note that the satellite data sample the surface more or less continuously in space, but at the same local time of day (typically 2 pm local time).”
So then are you saying that the satellite data should trend the same as Tmax? Also, I was under the impression that their polar orbits had them sampling both on the daytime side and the nighttime side.
Then the next question is, on what physical basis did the models compute that things should heat up faster at 8 km, and why are the models’ assumptions wrong?
“There are differences in long-term trends, but they aren’t huge.”
That would seem to be a subjective judgement. When the total temperature variation for the last 150 years is +0.8C, and when the IPCC-predicted rate of rise is 0.2C per decade, then 0.1C of divergence in a decade is huge in my mind.
Boris: “Watts Up is a trainwreck on this.”
Don’t like that one. Try the one Judith has going. You should like it more since I’m in it. 😉 But seriously, I think the UHI discussion there brought up some valid concerns.
Owen: “Why do you think it’s idiotic?”
The test that they performed has nothing to do with measuring or quantifying a UHI effect. But I’m tired of writing about this for today. Don’t want to redirect Lucia’s visitors. But go and see what I said about it at Climate Etc.
Tilo Reber:
Part of the answer is that surface temperature stations measure the mean temperature (or (Tmin+Tmax)/2), whereas satellites only measure something that is close to Tmax. Because there is a larger effect on Tmin (with warming, the difference Tmax–Tmin gets smaller), if you only measure the daytime near peak temperature, you’ll see a smaller slope in the satellite data compared to surface measurements.
Did that make more sense?
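A toy simulation of the point (numbers invented purely for illustration): if Tmin trends faster than Tmax, the (Tmax+Tmin)/2 mean trends faster than anything sampled near Tmax.

```r
# Invented numbers: Tmin warming at 0.25 C/decade, Tmax at 0.15 C/decade.
set.seed(7)
yrs   <- seq(1979, 2010, by = 1 / 12)
tmin  <- 0.025 * (yrs - 1979) + rnorm(length(yrs), sd = 0.3)
tmax  <- 0.015 * (yrs - 1979) + rnorm(length(yrs), sd = 0.3)
tmean <- (tmin + tmax) / 2

coef(lm(tmean ~ yrs))["yrs"] * 10   # ~0.20 C/decade: surface-station style mean
coef(lm(tmax  ~ yrs))["yrs"] * 10   # ~0.15 C/decade: closer to a Tmax sampler
```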
Only if the amount of hot concrete and asphalt were positively correlated with latitude, and of course they’re not.
Put another way, for this model you’ve proposed to be explanatory,
it would need to explain this.
Whether something is important is intrinsically subjective—it depends on how large it is compared to what you’re trying to measure. If you’re trying to establish that global warming has happened, that’s a done deal. If you’re trying to establish an anthropogenic component, again pretty much a done deal. If you’re trying to disentangle how much is manmade versus natural, that’s where the rubber meets the road (for example, my oft-repeated observation that prior to the 1970s, the IPCC AR4 modeling suggests that net anthropogenic forcings were not resolvable from natural ones).
And of course, there’s the little point of “measurability”, which has to do with whether what you are trying to measure is getting confounded by other factors, such as natural variability. (Which is why you have to hang around longer than 15-years if you want to establish divergence between two series for example.)
Chris Nelli:
There is so much wrong with your posts that it is hard to wade through so much nonsense packed into just a few paragraphs. Others have pointed out your complete cherrypicks and reliance on predicting the future and counting your chickens before they’ve hatched…However, you have also gotten the IPCC projections totally wrong.
The scenarios that project a warming trend that high do not project a linear trend. The IPCC has been very clear that over the near term, the projections are for about 0.2 C per decade (i.e., 0.4 C over 20 years, not the 0.6-0.8 C that you state):
( http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html )
Of course, the IPCC is talking about long-term trends too and not what you might be able to cherry-pick over certain intervals.
Tilo Reber:
Well, it might be worthy of a small comment, but for the most part, you are just making way too much over short-term trends. Over the longer period, the agreement is very good.
Carrick:
“if you only measure the daytime near peak temperature, you’ll see a smaller slope in the satellite data compared to surface measurements.”
Still looks to me like AMSUs can take data in the day portion of their orbit and the night portion of their orbit. See:
http://idn.ceos.org/KeywordSearch/Metadata.do?Portal=idn_geoss&KeywordPath=Parameters%7CSPECTRAL%2FENGINEERING&OrigMetadataNode=GCMD&EntryId=GES_DISC_AIRABRAD_NRT_V005&MetadataView=Full&MetadataType=0&lbnode=mdlb2
Look under “Data Resolution” “Temporal Resolution”.
But back to my other question. Do you think that the satellite data should trend with surface Tmax?
“Only if the amount of hot concrete and asphalt were positively correlated with latitude, and of course they’re not.”
Seems to only be positively correlated going north.
“Which is why you have to hang around longer than 15-years if you want to establish divergence between two series for example.”
If they are both reading the same thing, natural variability doesn’t play into it. They should both be affected the same by any natural variability.
Carrick: have a look at Willis’ latest piece on WUWT. “Train wreck” doesn’t begin to describe it.
toto:
“Carrick: have a look at Willis’ latest piece on WUWT. “Train wreck” doesn’t begin to describe it.”
No, the trainwreck is still at BEST.
They had two charts: one of 39,000 stations that had a strong positive trend, another of 2,000 previously unused stations that looked to have a strong negative trend. Willis digitized the one with the best resolution, not realizing that there were two charts from BEST with trends since ’98 going in opposite directions. And of course BEST never bothered to explain the difference. Then BEST had a data page, but the data for neither chart was there. Only the 39,000-station chart had a link tacked on to the end of their analysis chart page. When I said that I had found it at Judith’s, other people asked me where it was because they couldn’t find it. Where the data for the 2,000-station chart is remains anyone’s guess. So if BEST had actually put their data on their data page, and if they had actually labeled it, Willis wouldn’t have had to digitize it.
A repost seems in order for the BEST response 🙂
Facts that we know:
1. It was colder in the mid-1800s (LIA) than today
2. It was warmer in the 1930s than the 1970s
3. It’s warmer now than in the 1970s
4. Temps from about 2000 to date have been pretty flat
The argument is mostly over what date one starts the line on the graph. Start it in the 1850s or 1975 and the world is coming to an end. Start it in the 1930s and people shrug.
If “skeptics have never denied that warming is happening,” they should tell the half of Republicans and majority of Tea Partiers who don’t believe it.
http://environment.yale.edu/climate/files/PoliticsGlobalWarming2011.pdf
http://publicreligion.org/research/2011/09/climate-change-evolution-2012/
Yeah, the MSM sucks, which is why huge portions of the population are ignorant about such facts.
And really, anyone who wants to see a non-peer reviewed, agenda-driven PR campaign need only read such awesome manifestos like “Surface Temperature Records: Policy-Driven Deception?” and “Is the U.S. Surface Temperature Record Reliable?”
http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf
http://heartland.org/sites/default/files/SurfaceStations.pdf
Peer review is an inconvenient distraction when there are plenty of astroturf groups and “think tanks” who will publish anything that fits an agenda.
Those trainwreck spotters out there may wish to take their notepads and head here!
http://wmbriggs.com/blog/?p=4530
I have a bunch of charts on the Berkeley Earth land temperatures (downloaded from the Analysis page yesterday). These are the monthly values and the moving average values using the full 39,000 site database (this is not the individual stations database, just the monthly averages – I also corrected one clear error in one month).
The Monthly Berkeley land temperatures going from 1800 to May 2010.
http://img200.imageshack.us/img200/3230/berkeleymonthlylandanom.png
There is huge variability in this data. I am almost certain they have underestimated the uncertainty involved here (and I start to wonder how the NCDC, Crutemp and GISS can have much more stable temperatures from month to month when they are using a smaller dataset). Here is a scatter (rather than lines/columns) of the same data.
http://img35.imageshack.us/img35/8241/berkeleymonthlylandscat.png
Look how much variability there is versus just the 12 month mean. Even today it is +/- 0.6C and in the early 1800s, it was +/- 2.0C. Any rising trend has to be viewed in that light. Any kind of error or change in measurement techniques is going to produce a trend all by itself.
http://img854.imageshack.us/img854/8255/berkeleyvariancescatter.png
Let’s compare Berkeley to the NCDC and Crutemp3 (GISS doesn’t really have a published comparable value given it includes some ocean temperatures). They are pretty close, at least on a 12 month moving average basis.
http://img21.imageshack.us/img21/7583/berkeleyncdccrutemp3.png
Berkeley, however, is very different on a monthly basis and has a higher increasing trend than both Crutemp3 and NCDC. Crutemp3 first.
http://img822.imageshack.us/img822/7093/berkeleyvscrutemp3.png
Berkeley is much closer to the NCDC.
http://img28.imageshack.us/img28/4859/berkeleyvsncdc.png
Finally, some comments about the early 1800s. Darn, it must have been very cold, and it must have been very difficult to grow crops in regions where frost is a concern today. 1807 to 1820 was a full 2.0C lower than today. I note the year without a summer, 1815, does not actually show up as a cold period in this data; other years around it are much colder.
http://img828.imageshack.us/img828/2748/berkeleymovavglandanom.png
The coldest month on record was January 1809 at -4.2C (really?) and the warmest month on record was March 1822 at +2.4C.
Berkeley’s trend is close to twice that of HadSST2 ocean temperatures.
http://img585.imageshack.us/img585/2064/berkeleyvshadsst2.png
Tilo, thanks for the link. In retrospect it’s obvious they do two measurements a day (polar orbit). At that point, all I can suggest is that they aren’t measuring the same quantity (and I’m thinking TLT). This is probably a good thing, because it would have predicted a much bigger difference than is seen.
You’d have to model the differences between the two environments to explain any differences.
There isn’t much land-mass in the Southern hemisphere, so the ocean influence is larger (and of course, since the oceans are liquid, you would expect approximately the same rate of warming throughout the ocean basins, remember for oceans we’re using sea surface temperature to estimate surface air temperature).
Regardless this variance is something that any model you propose has to explain.
Bill Illis (#84344)–
Minor point–I’m guessing that the “one clear error in one month” in the BEST product is April 2010, which shows up as -1.035 degC anomaly while its neighbors are around +1 deg C. Note that the uncertainty for that month is given as more than 2 deg C (as is the one for May 2010), far higher than the typical 0.07 to 0.1 deg C. I conclude that BEST’s raw data for those two months are substantially incomplete, and the monthly estimates are best ignored entirely. BEST’s moving averages do not take into account the higher uncertainty, and therefore also should be discarded if they include April 2010.
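One way to see this point: an inverse-variance-weighted moving average (a sketch of the general technique, not BEST's method) would automatically downweight the nearly empty months instead of letting them contaminate the smooth.

```r
# Sketch: inverse-variance weighted k-month moving average, so months with
# huge stated uncertainty (like April/May 2010) barely contribute.
weighted_movavg <- function(anom, sigma, k = 12) {
  w   <- 1 / sigma^2
  num <- stats::filter(anom * w, rep(1, k), sides = 2)   # moving weighted sum
  den <- stats::filter(w,        rep(1, k), sides = 2)   # moving sum of weights
  as.numeric(num / den)
}

# Hypothetical usage with columns from the BEST monthly text file:
# smooth <- weighted_movavg(best$anomaly, best$uncertainty, 12)
```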
Carrick: “You’d have to model the differences between the two environments to explain any differences.”
Such modeling is beyond my paygrade. I’m just a coder, remember. But as I understand it, the modeling that was done showed that TLT should be warming faster than the surface.
“There isn’t much land-mass in the Southern hemisphere, so the ocean influence is larger (and of course, since the oceans are liquid, you would expect approximately the same rate of warming throughout the ocean basins”
What I find interesting is that NOAA SST Anomaly maps seem to show the southern oceans as cold and the northern oceans as hot. This has persisted for a long time. And not what I would expect from an evenly mixed addition of CO2 around the world.
http://www.osdpd.noaa.gov/data/sst/anomaly/2011/anomnight.10.20.2011.gif
“Regardless this variance is something that any model you propose has to explain.”
I don’t think that a 20% or so boost in temperature measurements due to UHI would contradict that chart. Although I can see where any claim of UHI being a dominant effect would contradict the chart in the northern hemisphere.
Tilo:
I’d say that’s a problem with any model that claims warming with height. What is seen is a monotonic cooling instead.
That’s pretty much my view. It can’t explain the overall effect (“the world is warming”), but it could still be important, especially if you’re trying to match satellite to surface data, or understand regional effects in global warming.
cce,
My reference to sceptics was to the scientists who doubt and those who have regularly contributed to climate change blogs etc, not to the mass of Tea Partiers/Republicans/Environmentalist followers who, like the majority, tend to believe what the MSM tell them.
But be assured that the distortions of the MSM have been pro AGW for almost two decades.
I think this thread, one at CA, and the BEST temperature series bring to the fore some interesting potential rethinking that BEST might cause. Maybe even Lucia’s threads on recent warming and models will have to be re-interpreted.
First, the graphs indicate some rather significant differences between temperature series. When these differences reach statistical significance, as the graphs appear to indicate, one has to wonder whether proper uncertainty limits should have been wider on the preceding series, or whether something about the constructions is fundamentally different – or if none of the series has it correct yet.
The reconstructions could well need re-interpretation given the acceptance of the BEST continuing rapid warming in most recent times. Recall that reconstructions, when properly viewed without the spaghetti and the spliced instrumental record, all suffer from the inconvenient divergence problem in the modern warming period. It would appear that the BEST series would make the divergence larger – and of course more susceptible to interpretation as the proxies failing to respond to higher temperatures now and in the past.
The early 19th century cool period in the BEST series, if accepted, would also almost certainly affect the good agreements that scientists might have thought were established for reconstructions and climate models.
Kenneth, the real test will be whether or not people rework their science based on the new series. More often than not people will just continue to cite work that has been calibrated to a record that we know is not the best. It stuns me, for example, that NCDC has been largely ignored. A while back Martin Vermeer on RC said that it was simple due diligence to check various datasets. He said that because Spencer had not.
Goose meet gander.
Recompile the damn science and show what effect, if any, using a BEST series or an NCDC series has on the final results.
Has anybody found out exactly which data source is used for the other “land-only” series (GISS and HadCrut) in the BEST papers?
Zeke writes:
“analyzing the impact of urbanization on global temperature trends. They find, interestingly enough, that rural stations are actually warming faster (by 0.02°C ± 0.02 per decade) than urban areas.”
Please plot the shape of the first derivative of log, and consider how sketchy it is to claim they found a negative UHI signal. Sounds like they found evidence of a –positive– UHI signal.
Dave,
Read the links I helpfully provided. Or if that takes too much time, read this:
“Like [Muller et al.], I have no idea whether it will show more warming, about the same, no change, or cooling in the land surface temperature record they are analyzing.”
http://wattsupwiththat.com/2011/03/06/briggs-on-berkeleys-best-plus-my-thoughts-from-my-visit-there/
See? Seven months ago, Watts had “no idea” if the answer was going to be warming, cooling, or no change. No idea. Now he says warming is as obvious as the Pope’s religion. The reason that so many people refuse to believe something that has been shown to be true time and time again is due to the “mainstream media” giving credence to people like Watts whose apparent goal in life is to make people stupid. There was a reason for taking all of those pictures of air conditioners, and it wasn’t to analyze the change in diurnal temperature.
Laura S. “Sounds like they found evidence of a –positive– UHI signal.”
Laura, they didn’t find any evidence of anything, actually. They asked the question, “what is the difference between anomalies found in areas that are classified as rural as opposed to areas that are classified as urban?” But areas that are classified as rural can have as much as 49.9% build. And areas that are classified as urban can have as little as 50.1% build.
Also, if a thermometer gets put into a city that is already built, then the UHI that built up prior to the thermometer being put there will not show up in that city’s anomaly. Only the build that happens after the thermometer is put there will register. On top of the 50% build requirement between rural and urban, an urban area must have one square kilometer of contiguous built area that meets the 50% requirement. So if you have two 0.75 square kilometer areas of 80% built area separated by a square kilometer of 40% built area, then the whole thing is rural. It’s easy to see how there could be more new construction going on in areas classified as rural than in areas classified as urban. In fact, Roy Spencer has a paper on the subject that shows exactly that.
Bottom line is that the BEST UHI test is a total fail that achieves nothing in quantifying UHI. But common sense should have told them that when they got the result.
I think their reconstruction method is also a fail, beyond just UHI. Not in the sense that there is no warming, but rather in the sense that HadCrut3 is much closer to the truth than BEST. But that is going to be a little harder to demonstrate.
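For concreteness, here is a minimal sketch of the classification rule described above – a pixel counts as built only if more than 50% of it is built, and a spot is urban only if it sits in a contiguous built patch of at least 1 sq km (four 500 m pixels). The grid and all values are invented for illustration; this is not BEST’s or MODIS’s actual code.

import numpy as np
from scipy import ndimage

# Toy grid of 500 m MODIS-style pixels; True = pixel is more than 50% built.
built = np.zeros((40, 40), dtype=bool)
built[10:13, 10:13] = True   # a 1.5 km x 1.5 km built patch
built[30, 5] = True          # one isolated built pixel

# Group contiguous built pixels into patches (4-connectivity).
labels, n = ndimage.label(built)
sizes = np.asarray(ndimage.sum(built, labels, index=range(1, n + 1)))

# Urban = inside a patch of at least 4 pixels (1 sq km of contiguous build).
urban = np.isin(labels, 1 + np.flatnonzero(sizes >= 4))
print(urban[11, 11], urban[30, 5])   # True False: the lone pixel stays rural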
Anybody got a copy of Matlab they don’t need? 2K per copy, plus more than a hundred for every tool kit – whew.
Tilo:
Octave is free.
cce:
There’s juicier meat here.
But hey, if we can get people in general to finally agree that it is actually warming (except for a few dead-enders), we’ve gotten past a major hurdle.
Carrick: Octave is free.
Yep, and I’ve got a dual boot that includes Linux. I had already looked into that, since I’ve used open products like LessTif before with good effect. Unfortunately they say that Octave is not completely compatible. And having downloaded the BEST code, it looks like there is a lot of it, so conversion could be a real pain. I’ve got a student copy of Matlab lying around that my wife used about 9 years ago. But I’m a little concerned about the lack of upgrades and the lack of toolkits. I guess it all depends on how esoteric the BEST coders were. If they were pretty basic I might get away with Octave or the old student Matlab.
It appears to stay away from object-oriented extensions, so making it work without a huge effort is a possibility.
I plan on using Matlab because I have a pre-existing license of course.
Tilo
‘But areas that are classified as rural can have as much as 49.9% build. And areas that are classified as urban can have as little as 50.1% build.”
That’s entirely misleading. In MODIS a GIVEN pixel could be 49% built and be classified as “unbuilt”.
When you look at 11 km around the site, as they did, you are looking at a large number of pixels: roughly 400. The likelihood that the station will fall on a 500 meter pixel that is 49% built is diminishingly small.
Let me take some examples.
Let’s suppose I take 18,000 random locations. For each of those random locations I look around the location for 11 km.
Let’s suppose I find no built pixels. I’ll call that area “unbuilt”.
Now let’s suppose I find 1 pixel that is “built”; that’s 1 out of 400.
I’ll call that area “built”.
Your concern is that somehow, simply because MODIS has a 50% rule on a given pixel, you will get many built pixels in the same area.
I can test that.
I selected 18,000 locations around the globe.
For every location, I looked 11 km (0.1 degree at the equator) around the location. OK?
I created two piles:
Pile A: NO built pixels (but Tilo worries about that 49% pixel)
Pile B: 1 or more built pixels within 11 km
Can we determine if these areas are really free of urbanization?
Let’s pick a different variable – one that has nothing to do with the MODIS sensor. A variable like nightlights.
Will the lights in Pile A be higher or lower than the lights in Pile B?
Well, if you are worried about the 49% pixels in pile A, what do you say?
Here are the facts:
Of the sites I picked, about 7500 had no built pixels within 11 km.
What about the average nightlights? Recall that according to Imhoff a nightlights value of 0 to 30 is associated with rural areas. What’s the average nightlights value in those areas?
About 12. In fact, 93% of those areas have lights below the rural cutoff.
What about Pile B? Pile B, remember, had at LEAST one built pixel.
The mean “lights”? Recall that Imhoff said that a value over 80 designates urban areas. What’s the mean of pile B?
132. Yup, more urban in pile B.
Let’s take a different measure: population density. Population density matters because when you have a dense population you have to build big buildings.
Pile A: mean population density, 13 people per sq km.
Pile B: mean population density, 503 people per sq km.
Let’s take another measure – the percent of the area that is grassland:
Pile A: 28%
Pile B: 19%
Let’s switch again and think about how your 49% pixel could occur. You are looking for a structure that is less than 250 meters wide. For this one I’m going to look at the exact location; the EXACT location shows UNBUILT. What could be there that is less than 250 meters wide?
A runway!
So out of my 7500 unbuilt sites, guess what? I find 234 airports, and on checking you see that these are largely dirt and grass runways (I have a database of 40K airports).
Pile B? Over 700 airports.
Is MODIS perfect in picking out rural? No. However, its accuracy can be cross-checked with other data sources, and in the end you end up with two piles of stations. One pile, by all measures, has less of the things that cause UHI: less impervious surface, less electrification, less population, less built surface, more natural landscape. The other pile has more of the stuff that causes UHI.
Again, when you download MODIS and spend a year with it, let me know. Next up will be the MODIS albedo dataset; as we know, urban locations tend to have lower albedo, and we can use albedo and Tskin to calculate a UI – an urban index. Check the literature on that, Tilo, it’s fascinating.
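To make the two-pile cross-check concrete, here is a rough sketch. The data are fabricated – the distributions are tuned only so the pile means land near the 12 and 132 quoted above – so it illustrates the logic of the test, not the actual MODIS/nightlights processing.

import numpy as np

rng = np.random.default_rng(0)
n = 18000

# Fabricated per-station data: built-pixel count within 11 km, plus an
# independent nightlights value (bright wherever something is built).
built_pixels = rng.poisson(1.0, n)
lights = np.where(built_pixels == 0,
                  rng.gamma(2.0, 6.0, n),     # dim where nothing is built
                  rng.gamma(4.0, 33.0, n))    # bright where something is

pile_a = built_pixels == 0   # no built pixels in the 121 sq km window
pile_b = ~pile_a             # at least one built pixel

print("pile A mean lights: %.0f" % lights[pile_a].mean())   # ~12 (rural < 30)
print("pile B mean lights: %.0f" % lights[pile_b].mean())   # ~132 (urban > 80)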
Mosher: “The likelihood that the station will fall on a 500 meter pixel that is 49% built is diminishingly small.”
I don’t really care if it’s 49% or 25%. What counts is the delta build in an area where you are trying to see what the UHI effect on the slope is.
Mosher: “One pile, by all measures, has less of the things that cause UHI: less impervious surface, less electrification, less population, less built surface, more natural landscape. The other pile has more of the stuff that causes UHI.”
There is not, and has never been, any argument about that from me. Maybe you need to understand what my argument actually is.
When you put a thermometer into a city and start measuring the temperature, what determines the slope of the anomaly measurements is the change in temperature. Any UHI that already exists when you put that thermometer there is not going to be a part of the anomaly for that city. This means that only the additional UHI that occurs as the city builds up is going to be a part of the anomaly record. And since many of those cities are already highly built when you put the thermometer there, there is limited opportunity for more build. In many cases you have to tear down a structure to put up another. You will undoubtedly get a lot of new build around the outer edges of the city, but that build is far from the thermometer, and while it affects it, it doesn’t affect it as much as build that is nearby.
On the other hand, build in small cities and towns and suburban communities that are not classified as urban can be both close to the thermometer and significant. This means that while their UHI effect may not be as great as the UHI effect of a fully urban area, their delta UHI effect, the thing that affects the slope of the temperature anomaly, may actually be larger. And both Spencer’s paper and the actual results from BEST confirm that this is in fact the case.
Regarding the rest of your comment, it was hard to follow. Your effort at being cute needs to be replaced by more effort at being clear. And I don’t care about things like lighting, since it’s not what was used in this test. I also don’t care about population since it is only loosely correlated with build and since it was not used in this test.
If you are going to do a UHI test that actually quantifies UHI you have to do one of two things: either you compare the absolute temperature of a heavily urban area with the absolute temperature of areas nearby that are completely devoid of build, or you follow the trend of a city for its entire build life and compare it to the trend of an area with zero build. Unfortunately, thermometers in pristine areas are scarce. And thermometers that have followed the entire build life of a city are also scarce.
Regarding my discussion of percentages, the point was not to show that rural was equally built with urban. The point was to show that there can be, and is, a great deal of build in areas that are classified as rural, or even areas that are classified as “very rural” by BEST – and that the delta build over some period of time near thermometers in rural areas can actually exceed the delta build near thermometers in urban areas.
Carrick: “I plan on using Matlab because I have a pre-existing license of course.”
The BEST people say in the README file for their code that all modules and data required to get it to run “may” not be present; their purpose at this point is to let people review the code to see what they did. But they promise a more complete package later. I need to get the code to run, and I need to reproduce their results. From there it should be easy to do the tests I want to do, which I think will show that they have some problems.
“Recompile the damn science and show what effect, if any, using a BEST series or an NCDC series has on the final results.”
And at the same time bring those reconstructions up to date by going back to the original reconstructions that have measurements up to the recent past and extending the measurements to the current time. The divergence of temperature proxies presents a pressing question (amongst many others) about their capability to capture higher temperatures. If BEST is indicating no respite in the recent warming, and it is accepted (probably very willingly by the climate modelers) by the climate community, this issue looms larger; it would appear to make divergence, or a lack thereof, easier to determine. A considerable area for out-of-sample testing is being ignored, and thinking observers want to know why.
cce,
Get real. Anthony Watts can look after himself, but picking apart every word in a blog post as if it were definitive is stupid. We all know, based upon many years of following this issue, that most contributors to the various blogs accept that warming is occurring. Why wouldn’t it be, as we come out of the LIA?
First off, this was not a huge surprise given the amount of work already done on this subject by Zeke, Mosher and others.
That said, here are a few things I find possibly noteworthy with this reconstruction:
* HadCrut is not looking very good
* There seems to be a warming trend of about 0.5 C/century in the early part of the record that (presumably) has nothing to do with anthropogenic greenhouse gases. If we subtract that, we get the following picture:
http://i54.tinypic.com/24l0bo4.png
This suggests that only about 0.5 C of the about 1 C of warming we have seen over the past century is anthropogenic.
* In this picture there seems to be a large amount of “natural variation” in the early part of the record, and less towards the middle. I think the early part is probably measurement error, and the “real” natural variability is likely to be what we see towards the middle…
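A sketch of the subtraction julio describes, with a made-up series standing in for BEST (0.5 C/century throughout, plus an extra 0.8 C/century after 1950), so the numbers are illustrative only:

import numpy as np

# Made-up annual anomalies, 1900-2010.
years = np.arange(1900, 2011)
temps = 0.005 * (years - 1900) + 0.008 * np.clip(years - 1950, 0, None)

# Fit the early (pre-1950) trend and remove it from the whole record.
slope, intercept = np.polyfit(years[years <= 1950], temps[years <= 1950], 1)
residual = temps - (slope * years + intercept)
print("early trend: %.1f C/century" % (slope * 100))       # 0.5
print("residual warming by 2010: %.1f C" % residual[-1])   # ~0.5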
Carrick said:
“Sounds more like the train wreck happened in Berkeley on that one.
Seriously, BEST posted four preprints, started a huge publicity campaign before the papers went through any vetting process, and WUWT is the train wreck!?”
Are you seriously defending WUWT by criticizing others for making loud pronouncements before the final results were peer reviewed? Really?
Posting preprints is par for the course for physicists. Which is what most of the BEST crew are.
The publicity hoopla, that’s less defensible..until one considers the massive amounts of hoopla that surrounded ‘Climategate’.
As Mosher said, goose, meet gander.
Julio 84440
“That said, here are a few things I find possibly noteworthy with this reconstruction:
* HadCrut is not looking very good”
LOL, *yeah*, if by ‘not looking very good’ you mean, HadCRUT *underestimated* global warming.
Steve Sullivan:
I’m not quite sure how a PR blitz preceding the peer-review process is equivalent to findings of malfeasance on the part of climate science.
The first, after all, is intentional publicity that you are generating, the second is unwanted publicity surrounding your unethical behavior. They appear to be polar opposites to me.
Boris:
I was thinking more along the lines of the heavy PR of papers that contain glaring mistakes, like the 60-year mistake. Not really defending WUWT, it’s a blog after all run by a commercial meteorologist. Muller is a fellow scientist, so I hold him to higher standards of conduct.
If you’re going to publicize, better make sure you get it right first!
Otherwise…trainwreck.
Steve Sullivan:
If it did, not by enough to matter, and it does “better” than GISTEMP:
1950-2009 °C/decade trend.
ncdc.temp 0.115
hadcrut3gl 0.116
giss 0.107
I’ll also remind people that BEST is land only, and HADCRUT is land+ocean.
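For anyone who wants to reproduce numbers like these, the trend is just an ordinary least-squares slope over the monthly anomalies, converted to degrees per decade. A sketch with a stand-in series (the real inputs would be the published monthly files):

import numpy as np

def decadal_trend(t, anoms):
    """OLS slope of anomalies against time in years, in C per decade."""
    return 10.0 * np.polyfit(t, anoms, 1)[0]

# Stand-in monthly series, 1950-2009 (720 months), built to trend at ~0.115.
t = 1950 + np.arange(720) / 12.0
anoms = 0.0115 * (t - 1950) + np.random.default_rng(1).normal(0, 0.1, 720)
print("%.3f C/decade" % decadal_trend(t, anoms))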
Berkeley’s Land temperatures have twice as much increase since 1979 as the Lower Troposphere satellite temperatures and the Ocean temperatures.
http://img189.imageshack.us/img189/4432/berkeleyuahrsshadsst2.png
In the climate models, the Lower Troposphere, at the level UAH and RSS measure, is supposed to warm by up to 1.27 times more than the surface temperatures. But it seems to be warming at about 50% of the Land temperatures and about 80% of the weighted-average Land/Ocean temperatures.
It is also noteworthy that the Lower Troposphere is very consistent month to month with the Ocean temperatures while Berkeley’s Land temperatures are extremely variable and are mostly inconsistent with the other metrics.
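The arithmetic here is simple enough to write out; the trend values below are placeholders for illustration, not the actual numbers from any of the series.

# Placeholder decadal trends (C/decade) to show the amplification check.
land, lt = 0.28, 0.14
print("LT expected at 1.27x the land trend: %.2f C/decade" % (1.27 * land))
print("LT observed as a fraction of land: %.0f%%" % (100 * lt / land))  # 50%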
Re: julio (Comment #84440)
Zeke writes about why comparisons with Hadcrut and Gistemp are misleading.
Bill Illis:
Again…you need to do apples-to-apples comparisons, and it’s not clear just yet how to do that (land compared to land+sea is going to be noisier, and for land versus other land only, different groups do the global weighting differently).
But even against other land-only series, BEST appears to have a lot of high frequency noise, which could be an artifact of their previously untested algorithm. (Train wreck anybody?)
Tilo, you don’t get it.
You are concerned that MODIS is not capturing “built” because of a 50% threshold. That worry can be addressed by looking at other sensors.
Again.
Pick 18,000 locations. Divide them into two piles:
A. These have no built pixels within 11 km.
B. These have 1 or more built pixels – that is, 1 or more 500 meter squares that have some built area.
I say A picks out rural pretty well.
You worry that A might have 49% pixels lurking about and that it might be urban.
Test that. How? Look at OTHER INDICATORS of urbanity in the same area.
Pile A: guess what – population densities below the urban threshold, 13 people per square km.
Pile B: 500 people per square km.
Test it another way: look for lights, using a different sensor.
What do you find? That’s right: Pile A has lights below the urban threshold. Pile B? Lights above the urban threshold.
Test it yet another way, with a different approach: look at impervious surface. Guess what? Pile A has values below the urban threshold; pile B, values above the urban threshold.
By all measures MODIS has picked out areas that are less urban.
Now, let’s talk about growth. I can also tell you the population in both piles from 1900 to today. So if you’re worried about urban areas being “thresholded”, that too can be addressed. The rural areas are totally rural, with none of the features that DRIVE UHI.
What drives UHI? What is the #1 feature that results in a high UHI?
Building height. Not small buildings, but buildings tall enough to disrupt the boundary layer and tall enough to create radiative canyons. Google that. Google the BUBBLE study if you want to see how quickly city UHI diminishes as you get to the suburbs. Go look at MODIS LST data products to see how wide it spreads. Go look, before you assume.
Here’s a comparison of the land-only series, 1950-2009 inclusive. I start at 1950 because IMO the other series DON’T have enough geographical coverage prior to that for their land-only comparisons with BEST to be reliable. I stop at 2009 because my versions of a couple of the series (clear-climate-code land only and JeffID/RomanM) stopped there.
I still haven’t gotten Nick’s, Zeke’s, Steve Mosher’s, etc. versions incorporated – sorry, guys. If anybody wants to toss me a link to their land-only “global” series (it needs to be sampled monthly or faster), I’ll include it in this spaghetti graph.
[Figure: spaghetti comparison of the land-only series, 1950-2009.]
What the figure clearly shows is that there is a lot of uncorrelated noise at short periods (sub-annual) in BEST. It even appears to have some “upwards curvature”, which is usually a symptom of an aliasing problem. (This could be caused by noise amplification from their spatial interpolation scheme.)
Of these reconstructions, JeffID/RomanM’s method is the only one I know of that avoids the problems associated with the anomalization method used in most of the global reconstruction algorithms. I think this is why the sub-annual harmonics are so far above their (comparatively low) noise floor in this frequency band.
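The kind of check being described can be sketched with a Welch periodogram. The series below is fabricated (a slow trend, an annual-cycle remnant, and white noise), but it shows what sub-annual harmonics standing above a flat noise floor look like:

import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
t = np.arange(720)   # 60 years of monthly data

# Fabricated anomalies: slow trend + annual-cycle leakage + white noise.
series = 0.001 * t + 0.05 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, 720)

# Welch PSD at 12 samples/yr, with per-segment linear detrending so the
# trend does not swamp the low-frequency bins.
f, psd = signal.welch(series, fs=12.0, nperseg=240, detrend='linear')
print("spectral peak at %.2f cycles/yr" % f[np.argmax(psd)])   # the annual line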
“I agree with Illis that the point of the BEST project is to distract attention away from the satellite dataset, which will be the kiss of death to CAGW”
As a card carrying skeptic to the proposition that CO2 is evil, I see the great value in the BEST work as helping to put to rest the “world is not warming” viewpoint. Many skeptics started paying attention in the mid-2000s (for obvious reasons) and then the trends going back to 1998 were negative or flat. I even noticed Hadley got rid of their 5 year running average as it started to turn over. That period is behind us but for those who are not checking in often, that fact may have escaped notice. So I do not even object to the pre-review media blitz. The wide ranging discussions are a good thing. As a regular reader of WUWT, I have even been disappointed by the quality of the commentary. This is all good.
Hopefully we can get to where all participants in the discussion can agree that yes, the atmosphere is warming and get to the more important discussion – so what?
Mosher:
“Pick 18,000 locations. Divide them into two piles:
A. These have no built pixels within 11 km.
B. These have 1 or more built pixels – that is, 1 or more 500 meter squares that have some built area.”
Mosher, you are truly losing your mind. And after I have given you the definitions of rural and urban several times now, you still don’t understand them, and it’s beginning to irritate me.
First, the idea of 18,000 random locations is dumb. The majority of the world is unpopulated, with no thermometers. So your sample of 18,000 is hugely overrepresented by places with no thermometers.
Then your idea about two piles is also dumb, because it tells us nothing with regard to the distinction between rural and urban. The definition of urban is not “one built pixel”; it is an area greater than 1 square kilometer that has greater than 50% contiguous build. And the definition of rural is not an 11 kilometer area with no built pixels. The definition of rural is everything that is not urban. So in your 11 kilometer area there could be 100 built pixels and it could still be classified as rural, depending on the distribution of the pixels.
For that reason, your tests on pile A and pile B have no relevance to our discussion. Pile A is not representative of a rural area where there are thermometers. Pile B only needs to have one built pixel, but an urban area must have a minimum of four contiguous built pixels. So many, if not most, of your pile B samples can be rural and can have the characteristics of lights and population that you claim can only be seen for urban areas. Your test is a failure and is irrelevant to the discussion about the usefulness of the BEST UHI test.
RobertInAz: That period is behind us but for those who are not checking in often, that fact may have escaped notice.
Nope, it’s not. BEST got it wrong. They suffer from the same error as GISS: they are kriging from areas with data to areas without data. In the Arctic this means that they are kriging the warming seen at shore stations, which is driven by melting sea ice, inland over land areas that have no warm water nearby and are therefore actually much colder than the temperature assigned to them by kriging. There are a lot of stations near the water in the Arctic and very few inland in places like Siberia, Greenland, and Northern Canada. But BEST takes the error a step further. Their method of correcting outliers and discontinuities depends on checking with nearby thermometers to see what reality looks like. But in inland Siberia, Greenland, and Northern Canada there are few thermometers, and they are greatly outnumbered by coastal thermometers. So in the kind of “democratic” modeling method that BEST uses, the thermometers that actually give the correct results for the inland temperature are outvoted by the shore thermometers, which have incorrect numbers for inland temperatures. So the correct thermometers are down-weighted. BEST’s methods simply don’t work in areas that are very sparse in thermometers and that have thermal discontinuities in the terrain. I still firmly believe that HadCrut3 is the closest to reality of any of the major surface temperature data sets. And HadCrut3 should probably be about 0.2C lower due to UHI.
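Tilo’s “outvoting” worry is easy to illustrate with a toy interpolation. BEST actually uses kriging; the inverse-distance weights below are only a crude stand-in, and every number is invented.

import numpy as np

# One inland grid point; five coastal stations 600 km away reading a warm
# anomaly, one inland station 200 km away reading a cold one.
dists = np.array([600.0] * 5 + [200.0])   # km to the grid point
anoms = np.array([1.5] * 5 + [-0.5])      # station anomalies, C

# Crude inverse-distance stand-in for kriging weights.
w = 1.0 / dists
w /= w.sum()
print("interpolated: %.2f C" % (anoms @ w))   # 0.75: the coast outvotes inland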
Tilo
What if I told you the 18,000 locations were actually the locations of thermometers?
For 7500 of them, the entire 121 sq km around the site has no built pixels.
For 11,500 of them, the 121 sq km around the site has at least 1 built pixel, and as many as 100% built pixels.
Would you call Pile A more rural than pile B?
Understand?
Mosher: “What if I told you the 18,000 locations were actually the locations of thermometers?”
Then they were not randomly selected, as you claimed.
“For 7500 of them, the entire 121 sq km around the site has no built pixels.”
I don’t believe that. I think what you mean is that there are no pixels with more than 50% built.
“For 11,500 of them, the 121 sq km around the site has at least 1 built pixel, and as many as 100% built pixels.”
Since only about 27% of thermometers are in urban areas, and since your 11,500 is well over half, that means that the majority of your 11,500 are in areas that are classified as rural. And of course 1 built pixel does not give even that pixel the status of urban, much less the 121 sq km. So your conclusions about one pile versus the other are not equivalent to conclusions about urban versus rural.
“Would you call Pile A more rural than pile B?”
Yes, I would call pile A more rural than pile B. But that is not the issue and never has been. You simply go ranting on and on without ever having a clue as to what I’m trying to tell you. You show zero comprehension of what I have said. Instead you keep ranting with the idiotic idea that all you need to do is show that one pile is more urban than the other. But that isn’t even close to being the issue. So until you actually understand what the issue is, don’t bother me any more. In the post that you responded to at Judith’s I laid it out so plainly and simply that even you can understand it if you try real hard. Now go back and really read it this time instead of popping off like an out-of-control troll. And the next time that you have something to say, show me that you understand. Otherwise, stop wasting my time.
Carrick says (#84244): “Going further back in time tells us nothing useful (according to the models) about anthropogenic global warming. We need to go further forward in time (25 years) before incontrovertibly we can separate out anthropogenic forcing from natural climate fluctuations.”
Carrick, that’s only because the models assume almost all recent warming is anthropogenic forcing and almost all previous warming was natural variability. Therefore the models show what is assumed – duh. I think a much more reasonable assumption is that the warming caused by natural variability (who knows what) is continuing at around the same rate as before CO2 forcing was significant. Therefore we subtract the 200+ year instrumental temperature trend of natural variability warming, perhaps ~0.7 per century, from the current warming to infer what the present CO2 forcing is. Do we have to go to the playground – “my assumption is better than your assumption”? Isn’t it clear in everybody’s crystal ball that to assume natural variability warming stopped just when anthropogenic forcing began is an absurd assumption, and much worse than cherry picking?
Carrick writes later: “I absolutely agree on this point. You really need a longer interval than 15 years to reliably separate natural variability from anthropogenic forcing.” Reliable – ha! – as if there have been no 20 year and 30 year warming periods before the attribute-it-to-CO2 era! BTW, I agree we have to go further forward 25 years or more (I think Judith Curry has suggested 30 years on occasion) to separate out the natural variability from the anthropogenic forcing, but even then it won’t be incontrovertible unless it’s gotten awfully cold or awfully hot. Incontrovertible evidence requires A/B testing and replication. Not likely.
Doug:
I think you missed a step there…
If you’re going to test a particular model, you have to test it against what the model assumes, not a particular “favored hypothesis” of your own.
In the end, it’s not my assumptions against yours; it’s the model’s assumptions against its predictions that you test. Otherwise you aren’t testing that model but your model.
Actually, this figure, which is a result from my own Monte Carlo study of climate variability, just looks at the effect of measured variability on trend estimate. It has nothing to do with CO2 per se.
From my perspective, the models (and model forcings) need to be a lot better understood before we reach “incontrovertible”.
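Not the actual study, but the idea behind that kind of Monte Carlo can be sketched quickly: generate pure AR(1) “weather noise” and compare the spread of fitted trends for 15-year and 30-year windows (the AR parameters here are arbitrary).

import numpy as np

rng = np.random.default_rng(3)

def ar1(n, phi, sigma):
    # AR(1) noise: x[t] = phi * x[t-1] + e[t]
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

# Spread of fitted trends (C/decade) from noise alone, two window lengths.
for years in (15, 30):
    t = np.arange(12 * years) / 12.0
    trends = [10.0 * np.polyfit(t, ar1(12 * years, 0.6, 0.1), 1)[0]
              for _ in range(2000)]
    print(years, "yr window: trend sd = %.3f C/decade" % np.std(trends))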
Carrick,
I appreciate what you and Zeke and Lucia and others are doing statistically. I’m envious that I don’t have the background to join in the fun, and I keep threatening to take an online statistics course. Any suggestions? However, I think that we agree that nothing incontrovertible, or even close to it, can be inferred when the “forcing” variables and their relative magnitudes are unknown. Without isolating variables, I don’t see how it’s possible to test any model that attempts to model something as complex and poorly understood as climate. Humans are usually uncomfortable with ambiguity. Religion is one example. Isn’t climate science another?
BTW, I don’t really have a favored hypothesis in terms of outcome. However, from many decades of weather observation, stock market observation, and observation of changes in the distribution of flora and fauna, I know that, minus specific information and understanding, persistence forecasting works better than any alternative. For climate, persistence forecasting is the assumption that the longer term trends we identify in the recent past are probably continuing. I think we are in a long term warming trend (~300 years) that will probably be accelerated by anthropogenic forcings. Since there is nothing unprecedented about recent temperature and climate changes, it appears that thus far the acceleration caused by anthropogenic forcings (and therefore their contribution) is minimal. The one anthropogenic forcing that appears most clear is soot (carbon black) affecting the albedo in Arctic regions (but not in the Antarctic, where the increase in soot has been minimal).
As for the very recent temperature flatlining, there are several untestable hypotheses, all, to my mind, less likely than the assumption of continued long term warming (~300 years) from unknown forcings (natural variability), with anthropogenic forcings playing a very minor part and suggesting low climate sensitivity – though it would take at least 15 or 30 more years of little temperature change before we could be very confident in an inference of low climate sensitivity. One such less likely hypothesis is that we could presently be in a short term cooling trend (or even at the beginning of a long term cooling trend; can you say ice age?) that is overpowering the very high climate sensitivity that Hansen and others assume. There are other presently untestable possibilities, each with their cheerleaders.
My conclusion is this: certitude is not evidence of certainty, and the certitude of Hansen and Gore, or the “very likely” 95% certitude of the IPCC, is based on assumptions, not evidence. Cheerleaders of alternative forcing assumptions are likewise pulling a rabbit from a hat and not practicing scientific method.
Carrick,
I already posted that link, plus Watts’s Heartland “paper.” I presume Dave didn’t read it.
Dave,
I agree that paying attention to Watts is stupid. A pity so many “skeptics” do.
By the way, here’s Fred Singer yesterday:
But unlike the land surface, the atmosphere has shown no warming trend, either over land or over ocean — according to satellites and independent data from weather balloons. This indicates to me that there is something very wrong with the land surface data.
http://www.nature.com/nature/journal/v478/n7370/full/478428a.html
Here’s Fred Singer 13 years ago:
[Trenberth] claims that “global mean temperature is rising.” Not so. The weather satellite data, the only truly global data set we have, actually show a global cooling trend during the past 19 years. . . . And he should have also mentioned that balloon-borne weather sondes provide an independent set of data that confirm the satellite results of ongoing global cooling. . .In fact, it is the surface data that are suspect, and especially the data that purport to measure the temperature of the sea surface.
http://naturalscience.com/ns/letters/ns_let06.html
Here he is again yesterday:
And finally, we have non-thermometer temperature data from so-called “proxies”: tree rings, ice cores, ocean sediments, stalagmites. They don’t show any global warming since 1940!
****
See? No warming, no warming no warming. Doesn’t matter when or where or by what instrument or method. No warming!
Kenneth
We don’t have BEST land ocean yet – so that limits what tests we can do. Obviously, I think it’s worth at least pushing the available data through the “meat grinder” we’ve put other data through – just to see what we get. It will be nice when the BEST land ocean stuff comes through.
Tilo: Instead you keep ranting with the idiotic idea that all you need to do is show that one pile is more urban than the other.
.
Whereas you keep ranting with the ideas that:
.
1- Areas far away from any 50% built spot have urbanized as much as the rest (in your opinion, how many of these 50%+ built spots had any kind of buildings in 1900?)
.
2- The amount of built land is so large that UHI must have a major impact on global land temperatures. Most of us who have taken planes and looked out the window do not find that quite as obvious as you do.
.
The real problem is that you can’t get around to actually testing those ideas of yours, which should not be too difficult. Using Google Earth with Nick Stokes’s KMZ file of all stations, you can find a bunch of stations that are not located too close to any major structure. For added precision you can confirm their exact location with climate-charts or something. Then you could actually run those stations through some kind of gridding software and see what you get – or just provide the set of stations to Nick or Mosh or JeffID and ask them.
.
To be blunt about it, the burden of proving some fringe theory rests on the fringe theorist.
.
cce: you missed Singer’s remarkable statement about the “myth of rising temperatures”. Bonus: look what that piece was about… 🙂
JohnWho (comment 84206)
In my home town there are two weather stations, one manual and one automatic. They are only 300 m apart, with virtually no elevation difference.
The manual w/s is near a tarred road with buildings nearby; the AWS is on a grassed oval with no buildings or roads within 50 m.
Over the past 16 years since the AWS was opened, the manual w/s has an av max temp of 26.4C and the AWS an av of 25.8C.
Av mins are 13.5C and 13.3 respectively.
That’s a 0.6C difference in av max temps.
Siting can therefore have a quite significant impact on actual temps.
kasphar (Comment #84874) – Looks like global warming to me.
“Overall the results of BEST are quite similar to those of Menne et al and Fall et al, namely that station siting does not have a large impact on temperature trends.”
Small impact then? How small?
Why does this statement not fit with the Graph 2 visual?
From ~1975 to ~2010 the temps of HadCrut and GISS seem to be steadily diverging from BEST, by anywhere from 0 to ~0.4C. That’s a divergence of 0.4C in 35 years. Isn’t that significant?
Or is that cooling?
Carrick #84487
I’ve uploaded Zeke’s original Excel file (zipped) with all the reconstructions he put together here. It’s called Temp Comps Global.zip
kasphar writes
“Over the past 16 years since the AWS was opened, the manual w/s has an av max temp of 26.4C and the AWS an av of 25.8C.”
Offsets between thermometers are common. What is important is:
a. Do the two sites track each other over the short term?
b. Are the differences between the anomalies similar?
c. Be careful to compare the AWS readings with the manual ones only for the same time of day.
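A sketch of checks (a) and (b) with fabricated co-located series – same weather signal, a constant 0.6 C offset at one site – showing that the offset appears in the absolute averages but drops out of the anomalies:

import numpy as np

rng = np.random.default_rng(4)
months = 16 * 12
weather = rng.normal(0.0, 1.0, months)   # shared month-to-month signal

# Fabricated co-located stations; the manual site carries a 0.6 C warm bias.
manual = 20.0 + weather + 0.6
aws = 20.0 + weather + rng.normal(0.0, 0.2, months)

def anomalies(x, nyears=16):
    # Subtract each calendar month's own 16-year mean (a simple climatology).
    clim = x.reshape(nyears, 12).mean(axis=0)
    return x - np.tile(clim, nyears)

print("(a) correlation: %.2f" % np.corrcoef(manual, aws)[0, 1])   # ~0.98
print("(b) mean anomaly difference: %.2f C"
      % (anomalies(manual) - anomalies(aws)).mean())              # ~0: offset gone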