Lucia’s recent post inspired me to undertake a project that I’d been planning for a while but never got around to working on: using a Monte Carlo approach to estimate sampling error in the global temperature record. A common argument raised by commenters on climate blogs is that the limited number of surface stations available is insufficient to estimate the global temperature field with any accuracy. To test whether this is in fact true, we can randomly select subsets of stations and see how global reconstructions built from different random subsets compare with one another. Specifically, I ran 500 iterations of a process that selected a random 10% of all stations available in GHCN v3 (which ends up giving me 524 total stations to work with, though potentially far fewer for any given month), created a global temperature record from those stations via spatial gridding, and examined the mean and 5th/95th percentiles for each resulting month.
It’s worth noting that this examines only sampling error, not any systematic bias like UHI. It also shows the confidence intervals for 10% of all stations; they would likely be somewhat narrower with a larger number of stations selected. We can tell from this, however, that uncertainty from sampling error is relatively small, as no matter which set of stations is selected we get similar results. This simply tells us that anomalies are strongly spatially correlated, something that should be unsurprising to most. We can also look at 500 runs of a random 5% of stations (262 total, fewer for any given month).
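For anyone who wants to poke at this themselves, here is a minimal sketch of the procedure in Python (illustrative only, not the code I actually ran). It assumes the station anomalies already live in a months-by-stations pandas DataFrame `anom_table` and the station coordinates in a DataFrame `meta`; all of these names are hypothetical.

```python
import numpy as np
import pandas as pd

def grid_weighted_mean(anoms, lats, lons, cell=5.0):
    """Mean of 5x5 degree grid-cell averages, weighted by cell area (~cos latitude)."""
    df = pd.DataFrame({"anom": anoms,
                       "lat": (lats // cell) * cell,
                       "lon": (lons // cell) * cell}).dropna()
    if df.empty:
        return np.nan
    cells = df.groupby(["lat", "lon"])["anom"].mean()
    weights = np.cos(np.deg2rad(cells.index.get_level_values("lat") + cell / 2))
    return np.average(cells.values, weights=weights)

def monte_carlo(anom_table, meta, frac=0.10, n_iter=500, seed=42):
    """Subsample `frac` of all stations n_iter times; return the per-month mean,
    5th and 95th percentiles of the resulting global (land) series."""
    rng = np.random.default_rng(seed)
    n_pick = int(frac * anom_table.shape[1])
    runs = np.empty((n_iter, anom_table.shape[0]))
    for i in range(n_iter):
        ids = rng.choice(anom_table.columns, size=n_pick, replace=False)
        lats = meta.loc[ids, "lat"].values
        lons = meta.loc[ids, "lon"].values
        for j, month in enumerate(anom_table.index):
            runs[i, j] = grid_weighted_mean(anom_table.loc[month, ids].values,
                                            lats, lons)
    return (np.nanmean(runs, axis=0),
            np.nanpercentile(runs, 5, axis=0),
            np.nanpercentile(runs, 95, axis=0))
```

The plotted curves are then just a 60-month centered running mean of each of the three returned series (e.g. `pd.Series(mean).rolling(60, center=True).mean()`).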


Zeke, could you expand on your statement that you
> examined the mean and 5th/95th percentiles for each resulting month.
These are the upper and lower gray lines in the graphs.
Are you showing the 5/95 bounds for a particular tranche of 524 stations? The “Monte Carlo” in the title suggests not. Can you describe the origins and meaning of those two gray lines, for the non-adepts in the audience?
AMac,
I started out selecting 524 stations using a random number generator. I calculated the global temperature anomaly each month using a spatial gridding method for those stations. I repeated it for 500 random selections of 524 stations.
The results show the mean (black line), 5th percentile (bottom grey line), and 95th percentile (top grey line) of the 500 runs for each month (actually they show a 5-year running mean of each to make it less noisy, but the takeaway is the same).
Thanks. And the black line must be superimposable upon the anomaly derived from the entire set of stations.
Well, each random selection of 524 stations already has all its records converted into anomalies relative to a 1961-1990 baseline.
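In code terms, that baseline step looks roughly like this (a pandas sketch of the common anomaly method, not my actual code; the guard of at least 10 baseline years per calendar month is the requirement I describe further down the thread):

```python
import pandas as pd

def to_anomalies(temps: pd.Series) -> pd.Series:
    """Convert one station's monthly temperatures (DatetimeIndex) to anomalies
    relative to its own 1961-1990 monthly means."""
    base = temps["1961":"1990"]
    clim = base.groupby(base.index.month).agg(
        lambda m: m.mean() if m.count() >= 10 else float("nan"))
    return temps - temps.index.month.map(clim)
```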
OT but interesting carrick, steveF, lucia
http://ams.confex.com/ams/91Annual/flvgateway.cgi/id/17273?recordingid=17273
Zeke, Nice.
I’m getting ready to do something similar with Tamino’s method for a reader.
This is pretty convincing stuff. I think a skeptic would have to fall back on a second line of questioning that almost sounds paranoid–are there other, known sources of bias in one or more classes of measurement instrumentation, and were they selected/deselected on a systematic basis because of that known bias?
Anyone know if this has been looked at carefully?
The proper way to do this is called bootstrap, and it’s very easy.
Tom,
This approach only estimates sampling error due to the particular stations sampled; it doesn’t address any potential sources of systematic bias (UHI, etc.).
Acthof,
This is a bootstrapping method. Though I realize I should have shown 2.5/97.5 CIs instead of 5/95.
Zeke:
So for the laypeople among us (like me), what does this graph mean?
I see about a .25C difference between the mean and the 95% lines, which seems to mean that about 1/3 of the .8C anomaly difference between 1850 and today could be due to station sampling error? Or maybe about a .5C difference between the 95% and 5% numbers?
Or does it mean something totally different.
Thanks for any help you can provide in interpreting the graph, because its meaning doesn’t jump up and slap me in the face.
RickA,
This simply shows the sampling error for temperature estimates for any given month. The sampling error in the trend is a somewhat different question, and cannot be answered by these charts.
All the chart means is that for a given time period, the 90% confidence intervals of the global temperature estimates are shown by the dotted lines (the 95% CIs are slightly larger, but not meaningfully so).
Thanks Zeke.
So for any given month, the global temperature estimate could be off by as much as .25C, depending on which group of 524 stations I used?
Re: RickA (Jul 14 16:29),
“So for any given month, the global temperature estimate could be off by as much as .25C, depending on which group of 524 stations I used?”
No, you are using language imprecisely. The probability that it is off by .25C or more is low: namely, there is a 5% chance it’s off by more than .25C.
Re: Tom Fuller (Jul 14 15:40),
Let’s see if I can run down the claims of “bias” and their status:
1. High latitude stations were dropped
2. High altitude stations were dropped
3. Stations were moved to Airports
4. The sample is too small. Sample bias
5. the sample has Urban Bias
6. the rural stations are biased by land change
7. there is microsite bias
8. the records are biased by adjustment
A. The rounding adjustment
B. The Tobs adjustment
C. The MMTS adjustment (change of instrument)
D. The (min+max)/2 adjustment
E. Homogenization
F. Hansen’s UHI adjustment
G. Averaging loses information
That’s it, as far as I can recall.
In every case the pattern of argument is the same:
A few cases are brought out to illustrate the issue.
The problem is then claimed to be important and general (to infect the whole).
A test is then performed (A/B).
No (substantial) bias is found.
The claim of bias is repeated or the argument is changed.
Rinse, repeat.
All of the above have been addressed, some better than others obviously.
In the end, there is no documentable case for widespread significant measured bias.
Zeke–
I assume if you repeated for annual average data you’d get tighter confidence intervals. Am I assuming correctly?
Did something I said make you think that?
And you’d get the right answer (instead of the sampling distribution for some arbitrary percentage of stations) if you did a proper bootstrap.
Zeke – Now that you’ve got this handy-dandy Monte Carlo dataset, I would be very interested in how it translates into uncertainty in the linear trends, say from 1900-present and 1970-present.
If the errors in the monthly anomalies are serially uncorrelated, the trend estimates will be much more precise than the spread of the individual monthly anomalies might imply.
If the errors in the monthly anomalies are serially correlated with a correlation coefficient near 1.0, the range of trend errors would correspond to the change in spread over time, still quite small.
If the errors in the monthly anomalies are serially correlated with a time scale of decades (worst case), the range of trend errors would be rather large.
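A toy simulation of those three cases (purely illustrative, with made-up numbers): generate AR(1) monthly errors with a fixed marginal spread and watch the spread of fitted OLS trends grow with the persistence parameter.

```python
import numpy as np

def trend_spread(phi, n_months=1200, sigma=0.1, n_sim=2000, seed=1):
    """Std dev of OLS trends (per decade) fitted to pure AR(1) error series.
    phi ~ 0: serially uncorrelated; phi -> 1: decadal-scale persistence."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_months) / 120.0            # time in decades
    innov_sd = sigma * np.sqrt(1 - phi ** 2)   # keeps the marginal variance fixed
    trends = np.empty(n_sim)
    for k in range(n_sim):
        e = np.empty(n_months)
        e[0] = rng.normal(0, sigma)
        for i in range(1, n_months):
            e[i] = phi * e[i - 1] + rng.normal(0, innov_sd)
        trends[k] = np.polyfit(t, e, 1)[0]
    return trends.std()

for phi in (0.0, 0.9, 0.99):   # uncorrelated .. long persistence
    print(phi, trend_spread(phi))
```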
Promising approach but I would prefer to see the monthly data versus a 5-year running mean.
I’ll admit I am biased against most smoothing procedures – they should only be used if the data is too noisy to be useful. In the case of regional/continental monthly series, they are almost always too noisy to be useful. But the global numbers are rarely too noisy to be useful on a monthly basis.
And the climate does not smooth itself over 5 years or even one year. It does what it does at the speed that energy moves through its various components (anywhere from the speed of light to the speed of deep ocean circulation to the speed at which continental glaciers melt – but today the average transit time of energy moving through the system is just 44 hours).
Is this the homogenized data set or raw+TOBS? It looks like it’s been homogenized, so is the selection really random? Picking a station that has been adjusted using a neighboring station is just picking the neighboring station, isn’t it?
So from 1880 to 1935 the temperature was rising about 1.2 degrees per century.
From 1935 to 1975 the temperature was falling by about 0.6 degrees per century
From 1975 to 2000 the temperature was rising about 1.6 degrees per century.
Now the temperature is beginning to fall?
Is that correct?
I am not sure that this is entirely logical. You have looked within the dataset and shown that subsets of it are in relatively good agreement, whereas all the questions are surely about what is outside the dataset. So, for example, the lack of coverage in the Arctic, or at high altitudes, etc., does not seem to be something that can necessarily be addressed from within the database, as Steve Mosher seems to suggest. I am left wondering what the overall point of this subset analysis is.
Well there you go, simple proof of what we know.
I wonder if you could project the error bars from these tests to the whole dataset. It stinks when comments ask for more work, but a projection of that sort would seem to have quite a bit of value in comparison to the obscure CRU style projections of error. It would likely be publishable if done thoroughly – unless it has already been done somewhere.
Shouldn’t be blogging this late but the kids woke me up.
The type of analysis you choose depends on the use intended for the data.
These techniques here are, in general, the stock in trade for ore resource evaluation for mining. The information within an ore deposit can be refined to tighter and tighter tolerances by drilling more holes, but holes are very expensive and the principle of diminishing returns operates.
The present analogy is to using few or many weather stations to make an average.
There are many ways to express similarities and differences between this climate approach and any of several mining approaches, but maybe the nugget effect is worth a mention.
In gold mining, the presence of a single large nugget intersected in a drill hole can have a quite significant effect on the calculation of grade in the mine and also on the confidence of the estimation of grade. The more holes that are drilled, the higher the probability of hitting a nugget. (It is not usual to make an a priori assumption that the mine will contain big nuggets.)
The climate approach is more aligned to smoothing away effects like the nugget effect. I use the 1998 hot year as an example. I can give you 100 land weather stations whose combination will not show the 1998 hot year. It is just discernible in the top graph with the choice of stations used, which includes lower-contrast ocean stations. Going further, I could give you 100 more land stations that would show it as rather more prominent. (I have done more work with land than ocean data, so I am not so confident with ocean data here.)
For this reason, I think it dangerous to state that a small subset of climate records can adequately represent a large one. They need not, if you are looking for a nugget effect.
So what might a nugget effect do in climate? A single very hot year will affect the rates of competing chemical reactions in biological systems, probably in a non-linear way – dare I mention coral bleaching as a possible outcome, ozone concentration in the stratosphere as another – causing an effect that can linger. You can make your own choices, but I dare to say that it would be imprudent to dismiss the effect of a single very hot year entirely.
That is why it should not be smoothed away. That is why more stations are better than few, with the by-product that they lead to better estimates of confidence. If they don’t, then the math assumptions are wrong; or alternatively, you have gone way past the no-return point on the diminishing returns plot.
Zeke, plot the range, delta 5-95%, vs year and tell us why the line-shape has the form it does.
It cannot be changes in station numbers.
Lots of comments here to respond to. I’m about to board a flight from NYC to SF, but I’ll answer folks when I arrive (unless I get lucky and the flight has wifi).
steven mosher (Comment #79089)
“in the end, there is no documentable case for widespread significant measured bias”
——————————
The close agreement between satellite-based tropospheric anomalies and the surface measurements was another huge hint that the surface temperature record (at least since 1978) was pretty solid. That agreement was just ignored by certain skeptics who were convinced that the surface record was full of problems. And there may have been legitimate issues in the surface methodologies- it’s just that they were not significant in the end.
Zeke-It seems to me that this is an interesting look at the uncertainty in the average from further subsampled populations of stations, but the historical population of stations is itself a subsample of the “true” coverage needed for a “perfect” average. These seem to me to be somewhat different problems, and I don’t think that your method makes sense for answering the question of how much uncertainty there is due to the actual historical subsampling. For that, I personally think one would need to take samples of complete populations that are analogous to (or even identical to) the historical sampling of data, which calls, I think, for some kind of synthetic data on which to experiment, since at no point is the observational data complete. I’d love to hear what you think of this, thanks!
steven mosher (Comment #79089)-Would you say then that there are no significant biases in the global land surface temperature data, since all claims have been “refuted” and therefore the remaining signal is purely climatic?
bootstrap
Michael Lowe is right: re-sampling can’t tell you if your sample is representative. Put differently: all samples are biased.
If the field has strong spatial correlation, then you should be worried about how good the spatial coverage is; a convenience sample that’s very “lumpy” (which is a feature of random samples too; that’s why they converge so slowly compared to designed samples) will have problems. Maybe you could use some sort of “coverage” statistic to support the argument.
Owen (Comment #79118)-As usual we have someone who spouts nonsense that the satellite data prove the surface data is unbiased. This is incredibly stupid, so it’s no wonder this incorrect argument is “ignored by certain skeptics”. The troposphere and the surface are totally different things and there is absolutely no reason to think it reasonable for the surface data and satellite lower troposphere data to warm at the same rate-in fact we should expect the troposphere to change faster. Beyond that, the best analyses of the satellite data do NOT warm at the same rate but at a slightly LOWER one. Even with RSS’s documented warming bias in about 1992, it only has a trend about the same size as the surface data, which again is too small! It is infuriating to hear this nonsense that the satellite data don’t contradict the surface data, they absolutely DO!
Owen, Andrew,
Just watch the values:
http://imageshack.us/m/10/9439/cguo.jpg
Andrew FL, “It is infuriating to hear this nonsense that the satellite data don’t contradict the surface data, they absolutely DO!”
In your mind they do, anyway. I don’t believe most unbiased observers would say that the satellite data contradict the surface measurements. Quite the contrary. http://www.woodfortrees.org/plot/uah/from:1978/plot/gistemp/from:1978/offset:-0.34
Phi,
I have no idea what your mishmash of plots is supposed to show. They appear to be different types of plots for each of the temperature series.
Owen,
Oh, it’s very simple. These are annual temperature anomalies over the land areas of the northern hemisphere according to UAH, GISS and CRU.
A temperature review simply cannot be done on a global basis, because the components of the surface temperatures are too inhomogeneous (SST, stations, Arctic, etc.). The good agreement in your woodfortrees chart is an artifact.
Owen (Comment #79125)-Your comparison is meaningless for reasons I already stated above! “most unbiased observers” could only come to your erroneous conclusion if they were as ignorant as you are about what the relationship between the surface and tropospheric variations SHOULD be. A proper comparison would in fact show that the warming in the troposphere is smaller than it should be compared to the surface.
“Is this the homogenized data set or raw+TOBS? It looks like it’s been homogenized, so is the selection really random? Picking a station that has been adjusted using a neighboring station is just picking the neighboring station, isn’t it?”
You should probably see some of Zeke’s past posts on the new homogenization techniques. In any case, long ago I did something similar with raw+TOBS. The answer doesn’t change.
“steven mosher (Comment #79089)-Would you say then that there are no significant biases in the global land surface temperature data, since all claims have been “refuted” and therefore the remaining signal is purely climatic?”
No, I would not go that far.
Part of the wiggle room lies in what we mean by significant. Significant for WHAT? for doing a temperature recon? for seeing that we have had warming? for testing a GCM? valid for what purpose?
Areas that could be interesting.
1. UHI. If I look at all the literature on UHI and my work and, more importantly, some of Zeke’s new work, I’ll say that I think the truth lies somewhere between Jones’ estimate of .05C and McKitrick’s ~.3C for the land. I’d be more inclined toward the Jones end of things. I think the UAH and RSS records constrain this bias to an even narrower range, but that requires another look.
2. Rural contamination from land use changes. One effect I’m interested in is the changes that happen when damns are built.
Future work when I get the chance, but a couple of cases I’ve seen are interesting.
3. Spatial Sampling. There is always the possibility that the unsampled places are different than the sampled places. That is an argument from ignorance. We have no reason to believe they are different, every time we add new stations the answer stays the same, and there is no physical reason to believe that a century long spatial field will have persistent regions with different trends. And UAH does put a limit on this as well.
There is never no bias. The question is how big a bias is important, and whether we can measure small biases.
“I wonder if you could project the error bars from these tests to the whole dataset.”
Conceptually the issue is between these two assumptions:
1. Those who think the unsampled regions are going to be similar to the sampled regions (duh, basic physics)
2. Those who believe that the unsampled regions somehow have different trends than the sampled regions. That somehow those regions are impervious to the changes that happen around them. That they are counter-currents.
I posted at TAV, and now I have had a chance to look at the details of what Zeke did here. This is an interesting analysis, and I guess it shows the relationship of uncertainty to the coverage of stations.
Zeke, you say you select 10% and 5% of the stations available at GHCN v3. Do you mean the stations available for some minimum period of time that allows the stations to be listed in GHCN? This selection would mean that you can select from stations not currently being used and would, I think, account for the wider uncertainty limits in the early and late instrumental periods.
Since there is a heavy concentration of stations in the US, N America and W Europe, a random selection is going to draw heavily from those areas (as does the population in GHCN). You say you use grids, but even with huge grids greater than 10 degrees by 10 degrees you will end up with many empty grids. How did you fill empty grids?
Would it be more instructive to select an area of the globe that is reasonably well covered by stations and then draw repeated random samples of differing sizes?
Temperature correlation over distance is what makes it possible to get reasonable mean temperatures with smaller uncertainty bands, but those correlations are affected by such variables as elevation, proximity to large bodies of water and latitude.
What I have seen in determining the effects of incomplete spatial and temporal station coverage required looking at grids that were assumed to be covered sufficiently to give a true mean temperature, and then drawing various-size samples from those grids. These analyses, as I recall, were accompanied by a model for infilling missing stations that included distance, latitude, altitude and proximity to water as independent variables, and that model was used to estimate uncertainties.
Owen,
UAH and GISS follow a very similar pattern, of course, but they do yield OLS trends that differ by about 20% (http://www.woodfortrees.org/plot/uah/from:1978/plot/gistemp/from:1978/offset:-0.34/plot/uah/from:1978/trend/plot/gistemp/from:1978/offset:-.34/trend)
The difference is probably not statistically significant to date, but worth keeping track of to see if the difference increases.
Comparing UAH with Hadley shows less difference in trend:
http://www.woodfortrees.org/plot/uah/from:1978/plot/hadcrut3vgl/from:1978/offset:-0.25/plot/hadcrut3vgl/from:1978/offset:-.25/trend/plot/uah/from:1978/trend
Steven Mosher,
“UHI. If I look at all the literature…”
What do Jones and the others do?
We have a beam balance and two identical locked boxes. The first is black and labeled “Urban”, with a pumpkin inside. On the label of the second we read “Rural”, no pumpkin inside. The researchers would like to know the weight of the cucurbits, so they place each box on a tray and weigh: they find 1 kg for the pumpkin. It’s a bit silly, but there were also 5 kg of cucumbers in the black box and 15 kg in the white box; in reality the pumpkin weighed 11 kg and there was a total of 31 kg of cucurbits.
phi (Comment #79141),
The point of your comment about pumpkins in boxes is not at all clear to me. What are you trying to say?
Andrew_FL (Comment #79119)
July 15th, 2011 at 7:54 am
You are never going to get the ‘true’ and ‘perfect’ answer. Science specifically says that we never can get a ‘true’ and ‘perfect’ answer to anything.
Zeke has been demonstrating how good the answer that we do get is, and it’s not too bad.
SteveF,
In principle, what we would like to know is the perturbations due to environmental changes (the cucurbits). The UHI (the pumpkin) is only the specifically urban form. The studies of Jones and others, like the GISS process, are based on a distinction between urban and rural and on a comparison of the differential trend (the beam balance). But other forms of disturbance (the cucumbers) are ignored, which necessarily leads to underestimating the disruption (and probably also the UHI itself).
“Zeke has been demonstrating how good…”
Pardon me, but Zeke hasn’t “demonstrated” anything.
Andrew
steven mosher said: “I wonder if you could project the error bars from these tests to the whole dataset.”
Conceptually the issue is between these two assumptions:
Your conception of the issues is incorrect. Calculating the confidence intervals correctly using a resampling method for this data set and statistic is a separate issue from whether the sample is representative (which is what you seem to be trying to address with your two assumptions).
steven mosher (Comment #79135)-I think we actually are more in agreement than I thought. I’ll address your comments thoroughly if you don’t mind. 🙂
“Part of the wiggle room lies in what we mean by significant. Significant for WHAT? for doing a temperature recon? for seeing that we have had warming? for testing a GCM? valid for what purpose?”
“Significant” can mean a lot of things indeed, and I am perhaps not that precise with my language. I was, I think, vaguely implying “statistical significance”, but that’s a little bit of a slippery issue. When I said significant I had in mind biases that would be significant for testing GCMs, and possibly for doing temperature recons, although ideally it would, in my mind, only be an issue for calibration. I did not mean they were significant for “seeing if we have had warming”, but how much warming is an issue.
“UHI. If I look at all the literature on UHI and my work and, more importantly, some of Zeke’s new work, I’ll say that I think the truth lies somewhere between Jones’ estimate of .05C and McKitrick’s ~.3C for the land.”
This is a little strange, as Ross has not done work looking specifically at UHI but rather at “anthropogenic surface processes”, which are broader in scope. Michaels likes to mention that in Chad they probably have higher priorities than keeping the Stevenson Screen painted. 😉 Additionally, your claim of how large the effect is in McKitrick’s findings is unfamiliar to me, but I assume you are talking about over the course of a century, since you compare it with Jones’ estimate. I don’t think one can/should do that, because Ross has only studied the recent period of satellite data. The bias may also have existed in earlier decades, but we can’t directly estimate it in the way he does without such data.
“I’d be more inclined toward the Jones end of things.”
I’m not clear on your reasons why, but some of what Ross is picking up may in fact include land-use effects, and thus in some sense be “climatic”, although not representative of the atmosphere as a whole.
“I think the UAH and RSS records constrain this bias to an even narrower range, but that requires another look.”
This is Michaels’ “hard deck” beneath which one can’t drop the surface trend. For what it’s worth, I agree. But keep in mind that RSS has somewhat of a warm bias relative to UAH, although the reverse situation in recent years has mitigated this tendency.
“Rural contamination from land use changes. One effect I’m interested in is the changes that happen when damns are built.
Future work when I get the chance, but a couple of cases I’ve seen are interesting.”
I’ve heard of some work on how dams (no n 😉 ) change precipitation patterns, but I’m unaware of the effects on temperature. Intriguing enough that perhaps you could direct me to the information you have seen?
“Spatial Sampling. There is always the possibility that the unsampled places are different than the sampled places. That is an argument from ignorance.”
I think it is only an argument from ignorance if one is talking about periods over which one has no data to compare with. I really don’t necessarily expect that this will have a systematic one-directional effect, but consider the satellite period (admittedly not century scale): there is very little data from interior Africa in the surface records for this (or earlier) periods, and during the satellite era that part of Africa shows very little temperature change.
“every time we add new stations the answer stays the same”
I’m not clear which “answer” you are talking about or what new stations are being added, but when John Christy examined East Africa, the Sierra Nevada and Central Valley, and Alabama, he found that the trends from his data, gotten by “supersampling”, tended to differ significantly from what were, at the time, the official records for those regions based on much less data. In Alabama I think he managed to get a lot of the new data included in the record, but I am unsure about California and very doubtful about East Africa. When one examines areas one at a time, gathering more data and carefully homogenizing, it appears significant changes can occur to the local picture.
The papers I am talking about are:
Christy, John R., William B. Norris, Richard T. McNider, 2009: Surface Temperature Variations in East Africa and Possible Causes. J. Climate, 22, 3342–3356.
Christy, J.R., W.B. Norris, K. Redmond and K. Gallo, 2006: Methodology and results of calculating central California surface temperature trends: Evidence of human-induced climate change? J. Climate, 19, 548-563.
Christy, J.R., 2002: When was the hottest summer? A State Climatologist struggles for an answer. Bull. Amer. Met. Soc. 83, 723-734.
“there is no physical reason to believe that a century long spatial field will have persistent regions with different trends.”
Does observational evidence that you are wrong about this qualify as a “physical reason”? Because the Southeastern US has been cooling when you look at the entire length of the record. Imagine if that part of the US were as badly sampled as Africa! I agree that there is no reason to think that this potential bias goes in only one direction; indeed, logically it could go either way, or even completely cancel out. Still, I cannot agree that over the century scale substantial areas can’t depart from the behavior of areas elsewhere. Unless of course one decides that the evidence of this actually occurring must be evidence of substantial bias!
bugs (Comment #79143)-I appreciate you putting words in my mouth and lecturing me like I’m a simpleton, but I’m afraid you have totally missed my point. In fact, I’ve discussed my point with Jeff, and we agree that the implication of my point is that the true uncertainties are smaller!
But the issue of uncertainty does have to do with “perfection” and “truth”, in terms of determining the likely range around the estimate in which the “truth” lies. Of course one can never have the absolute perfect “truth”; that’s not the point. The point is how close you are to it. And in this case the point isn’t even that; it’s “how close are we likely to be to the true average of this data?”, not “how close are we likely to be to the ‘true’ surface temperature anomalies?” And Zeke has, in this post, kinda addressed the first question (I now think his method is making this estimate look too uncertain), but not the second.
SteveF (Comment #79140)-The comparison is still not appropriate, you have to account for the fact that the troposphere is supposed to warm more than the surface, and on that count there is a substantial difference.
Andrew_FL:
I’m interested in your physics-based explanation of why it’s obvious that the surface should warm at a slower rate than the lower troposphere. (Don’t bring up the AOGCMs; none of them have a surface boundary layer in them.)
It doesn’t exist.
Carrick (Comment #79152)-I don’t believe I ever said this expectation is a physical law. But just looking at how the troposphere warms and cools more rapidly in general for ENSO and volcanoes, it seems to me that the odd thing out is the long-term trend. Why is it different? To me the explanation that makes the most sense is that the surface trend is overestimated. If you think it makes more sense that the relationship between these two variables is fundamentally different for multidecadal variations than for short-term fluctuations, then it becomes useless to compare surface and troposphere at all, and the troposphere could not indict the surface data, but it sure as heck wouldn’t “confirm” it either.
bugs (Comment #79153)-Much as I’d love to discuss the existence of an objective reality with you, I don’t think this is the time or the place, and it would take us quite far afield of the point. But consider this: if there doesn’t exist a perfect “true” average, then what are we estimating? What are we trying to minimize our error toward? That we can’t ever be sure that we have the “exact truth”, or even have it at all, does NOT mean that it doesn’t exist. Otherwise there is nothing to strive to be close to. Estimating is pointless. Science is pointless. You are pushing postmodernist drivel.
Andrew, I’m trying to get an explanation for this:
The surface layer is closer to anthropogenic forcings such as land use changes (influence of UHI, farming, irrigation, etc.) as well as other feedbacks like ice-albedo change. It seems reasonable that it should warm faster, not slower. In that sense, it’s not an “overestimation”; it’s a real effect.
I was supposing you were confusing this with the prediction by some climate models that the mean temperature trend in the troposphere increases with elevation. But the point is that it is the troposphere the modelers are talking about (above the atmospheric boundary layer), and that does not include the influence of the ABL on measured temperature.
I agree with you that they should be different (by how much is an ongoing research question, AFAIK).
Carrick (Comment #79156)-I think it is fair to suggest it may be related to land use and thus be “real”, but I don’t think feedbacks explain it (in the case of ice, this feedback is negligible in the tropics, where this problem is largest, and the feedback processes should cause the effect for ENSO and volcanoes too, right? But the fluctuations show more tropospheric variability), although this would mean the effect of land use is quite different from what modelers currently believe (even in sign!), and they are ascribing land-use effects to CO2. Fair enough, that’s possible.
“I agree with you that they should be different (by how much is an ongoing research question, AFAIK).”
Of course, if we don’t know how different they should be, then Owen is still wrong, in claiming that the surface data are supported by the satellite data. If we don’t know how they should be related, they can’t be compared at all.
Just to reinforce that 5 year moving averages can present a completely different picture, here is the Monthly US Temp Anomalies versus a 5 year Moving Average (and for most of this period, there are more than 1000 stations being used in the values).
http://img19.imageshack.us/img19/5335/ustemps5yearmovavg.png
The monthly anomalies are +/- 5.0C but the 5 year moving average is a nice smooth line which just exhibits several similar cycles to Zeke’s chart above.
Bill Illis (Comment #79165)-I think that by smoothing over the anomalies like that you miss some interesting features. Part of the reason monthly US temps are so noisy is that the recent period of warming is mostly in winter, whereas earlier in the record the seasons varied more together, but with a lot of inter-annual variation. Try looking at five-year moving averages of individual months.
A good reference for the interesting differential behavior of the early and later warming in the US:
http://www.int-res.com/articles/cr/17/c017p045.pdf
The early warming was throughout the year, the later warming mostly in the coldest days.
Andrew Fl: Having done some sorts, there is certainly more variation in the US temps in the winter months than in summer, but I don’t see that it has changed over time, though.
Andrew:
You are definitely conflating surface measurement with (above ABL) tropospheric temperature trends now.
You’re also assuming all of the models disagree with the apparent lack of warming in the troposphere, that’s also not true.
It’s risky to try and make too many inferences in the absence of a model, but that certainly doesn’t stop you from correlating the two, absent a model, given that we know they should be related. (Looking at how they are related is in fact a common step prior to building a model.)
Andrew_FL (Comment #79150),
I think the discrepancy is more for the mid-troposphere, not the lower troposphere… which extends all the way to the surface. Don’t get me wrong here; I agree that there does appear to be a significant discrepancy between models and observations WRT tropospheric ‘amplification’. I just don’t think the lower troposphere vs. surface trends comparison can show it very well.
Bill Illis (Comment #79168)-I probably wasn’t clear. I was saying that the recent warming period is mostly in the winter months (due in large part to some very cold winters in the seventies), whereas the earlier warming was more uniform throughout the year.
Carrick (Comment #79170)-“You’re also assuming all of the models disagree with the apparent lack of warming in the troposphere, that’s also not true.”
You have to normalize their upper-air trends to their surface trends, since some of the models also show very little or way too much surface warming. This substantially reduces the scatter of how models relate surface warming to warming aloft:
Christy, J.R., B. Herman, R. Pielke Sr., P. Klotzbach, R.T. McNider, J.J. Hnilo, R.W. Spencer, T. Chase and D. Douglass, 2010: What Do Observational Datasets Say about Modeled Tropospheric Temperature Trends since 1979? Remote Sensing, ISSN 2072-4292.
Indeed, when this is done, at least in the tropics every model has a ratio of TLT to surface trend greater than one.
“that certainly doesn’t stop you from correlating the two, absent a model, given that we know they should be related.”
But claiming that they “confirm” one another is at the very least not a conclusion to which one can jump. As for how they are related, I have analyzed a lot of this, and as far as I can tell, except for the trend, the troposphere amplifies surface change. So far, land use is the only good explanation I have heard, besides biased data, for why only the trend is a different story.
SteveF (Comment #79172)-The difference in the mid-troposphere is larger, but the data for it are also more uncertain, since there is greater variation among radiosonde and satellite estimates. I think it is worth looking at, though. And the difference is detectable in the lower troposphere data.
Andrew:
But we’re back to the problem that the AOGCMs don’t actually model temperature within the ABL. At all.
I think the picture is complicated, but there is an overall latitudinal variation in the land temperature record that is nigh impossible to explain with either measurement bias or land-usage changes.
(I agree with you on this: confirm is too strong a word.)
Carrick (Comment #79175)-Yes, but I was not saying the models are physically realistic in this regard. I was showing that they pretty much agree amongst themselves about what the amplification should be, and that it is different from the observations, at least on a multidecadal timescale. It may be that they all agree and are all wrong on this because of the boundary layer issue. This implies that the surface agreement must be spurious, that is, at least partly “right” for the wrong reasons. But the fact that they agree with one another fairly well, yet disagree with the observations, implies something is wrong. We clearly have very different ideas about what that something is, and I don’t see the question getting resolved here or anytime soon.
Andrew, I just don’t get the feeling you have a very strong understanding of the position you are trying to argue. Sorry.
You can’t normalize by the ABL values to compare against models if the models don’t include the ABL… it’s inconsistent, regardless of who made that comparison.
The best you can do is look at TLT, TMT, TTS, and TLS trends and compare those to models, and as I’ve said, unlike your claims, not all models are in statistical disagreement with the measurements.
Also the lack of a specific model to connect TLT to ABL doesn’t imply that the ABL agreement is spurious at all. The agreement merely says yes there is a connection, but we just don’t fully understand it. That’s it. Nothing ominous in that.
Carrick (Comment #79179)-Now I am positive I have absolutely no clue what you are getting at. The models have a surface trend in them, which much like TLT can be “compared” with observations, but the variability amongst models is pretty large, so one can always argue that not all the models are inconsistent at every altitude. But this is ridiculous, as the models’ upper-air trends are NOT independent of their surface trends! If you look at the trends in this way, you will conclude that different models are consistent or not consistent with the observations at different altitudes. How this can be taken as meaning that some of them are right (because they are wildly wrong at the surface, but not so bad higher up, in absolute terms) I cannot fathom. You appear to insist that the lack of an ABL model in the AOGCMs somehow means one cannot compare the way their surface trends relate to their upper-air trends with the same metric in the observations. I am sorry, but this makes zero sense. If there is a difference in this metric, one could potentially attribute it to the lack of a proper ABL model in the GCMs, but then that is a physically meaningful difference that is caught by this analysis. There is no reason whatsoever to “forbid” this kind of analysis. I can see you taking issue with any physical interpretation of this fact, but I do not understand why you think that one can’t compare this metric at all.
I have gone through about twenty iterations of the next several sentences of this comment, so I think I’d better stop for now, as I feel I cannot continue in a productive manner at this time.
Zeke, do you have any idea what the relationship between the variance and n is? You used 10% of the dataset, some 500 stations.
What happens to the error bars of, say, 1940, 1970 and 2000 as you change the size of your sample from 50, 100, 250, 500, 1000 stations?
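For spatially independent stations one would expect the 5-95% band to shrink roughly as 1/sqrt(n); a toy check (fabricated station values, illustrative only; real stations are spatially correlated, so the real band should shrink more slowly):

```python
import numpy as np

rng = np.random.default_rng(0)
stations = rng.normal(0.0, 1.0, size=20000)    # toy anomalies for one month
for n in (50, 100, 250, 500, 1000):
    means = [rng.choice(stations, n, replace=False).mean() for _ in range(500)]
    lo, hi = np.percentile(means, [5, 95])
    print(n, round(hi - lo, 3))                # 90% band width ~ 1/sqrt(n)
```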
Lucia and Zeke, I must say that I get more than a little frustrated by some of these thread topics getting taken over by posters who want to debate other topics. Sometimes the sidebars are related, and sometimes they are just as interesting or more interesting than the intended topic, but nevertheless they impede discussing the intended topic.
This occurrence is not unique to this blog by any measure and the sidebars can be informative, but I asked a few questions about Zeke’s analysis above that could get lost amongst the sidebars. I particularly like the analyses that you and Zeke present here, but I sometimes need more details to obtain a better understanding of what the analysis means.
Dismissing for this exercise the assumption that proxies can quantify temperature changes over time, I think that Zeke’s analysis or a modification of it could be extrapolated to better understand the uncertainty of temperature reconstructions given the lack of spatial coverage and the fact that the essential distance correlations are severely degraded by the noise in the proxies.
Kenneth Fritsch (Comment #79195)-For what it is worth, I regret my role in this discussion getting sidetracked. I also had a question for Zeke above that I wanted to make sure wasn’t forgotten amid the ensuing lengthy discussion.
So, if you please, Zeke, look into the questions raised by:
Kenneth Fritsch (Comment #79138)
and
Andrew_FL (Comment #79119)
Sorry for the delay in getting back to everyone. I’ll write a follow-up post next week containing the various additional analyses folks have been asking for, as I have somewhat limited time this weekend and the Monte Carlo approach takes about 90 minutes to run 500 iterations. Also, everyone should bear in mind that this analysis only covers surface stations (i.e. the global land record), not the true global temperature.
As for specific comments:
.
Lucia (#79092),
Applying this method to annual average data would indeed yield tighter confidence intervals. Even though the graphs show a five-year smooth, that’s still a running mean of the monthly 5%/95% CIs, so while it filters out some of the noise it doesn’t really affect the medium- to long-term magnitude of the uncertainty.
.
John N-G (#79095),
Great idea! I’ll test the CIs of the trends when I have a chance next week. Hopefully I can figure out how to do a chart of the trends to date from every past month in the record, with the CIs for each period.
.
Bill Illis (#79097),
The monthly values are here, though it’s a lot harder to make out longer-term variations due to the noise: http://i81.photobucket.com/albums/j237/hausfath/Montecarlomonthly.png
.
BarryW (#79098),
This analysis used homogenized data. I’ll rerun it with unadjusted data next week, but I don’t expect that much difference.
.
DocMartyn (#79101),
For land temps shown here, pretty much, though “beginning to fall” implies a future prediction that I’d not agree with personally.
.
Michael Lowe (#79102),
Certain times will be subject to some coverage-related uncertainty that this analysis can’t quantify simply because if there is no coverage in an area we can’t subsample it. From the 1950s on, however, there is good enough global coverage that this sort of analysis should reflect sampling error (e.g. some runs will have numerous arctic stations, others might have one or none).
.
DocMartyn (#79112),
Station numbers certainly have something to do with it. Here are the monthly standard deviations over time, for example; you can see the ~1992 dropoff in station numbers clearly: http://i81.photobucket.com/albums/j237/hausfath/Montecarlosds.png
.
Andrew_FL (#79119),
Yep, this method won’t get around uncertainty in time periods with -no- coverage in an area. But in the modern time period, at least, the spatial bias introduced by subsampling (especially in the 5% case) should mimic the potential bias due to absent coverage. E.g. some runs will have no arctic or african stations, others will have many.
.
Acthof Unimty (#79121),
To do a proper bootstrap, should I pick a (normally distributed?) random number of stations for each run instead of a fixed one?
.
Kenneth Fritsch (#79138),
There is a minimum time period imposed by the common anomaly method that I use (at least 10 years of data available for each month during the 1961-1990 baseline period), and I can’t easily get around that to use stations that GHCN doesn’t include. Using an alternative set of stations like GSOD (or those that Berkeley Earth has been assembling), one could do a true alternative analysis, and folks like Nick Stokes and Ron Broberg have done some work to that end.
It is true that subsamples will tend to favor U.S. and European stations, all things being equal, though the large 5×5 grids ensure that if multiple stations are chosen in the same grid they won’t be overweighted. It will, however, tend to bias the spatial coverage toward those areas. Empty grid cells aren’t filled per se; rather, the global temperature is estimated as a grid-size-weighted average of all available grid cells for each month.
An alternative approach that Mosher took a while back was to do a Monte Carlo analysis choosing one station at random in each grid cell when more than one was available; I recall that he found the differences were relatively small.
Focusing on a small area with high spatial coverage is a good idea; I’ll do it with the USHCN data for the US later this week, as we have 1200+ stations in a small area.
.
Let me know if I missed anyone’s questions, and I’d be happy to answer. I’ll likely be away from the computer for most of today though.
Re: Zeke (Jul 16 12:37),
Yes. Just to reiterate Zeke’s point.
One of the tests I did a long while ago with my CAM approach was to look at every 5-degree bin that had more than one station. If that was the case, I randomly selected one station to represent the entire 5-degree cell.
As Zeke notes, that made no difference to the final tally.
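Roughly, in Python (a sketch, not my actual code; `meta` is a hypothetical table of station coordinates indexed by station id):

```python
import numpy as np
import pandas as pd

def one_station_per_cell(meta, cell=5.0, seed=0):
    """Randomly keep a single station in each occupied cell-degree grid box."""
    rng = np.random.default_rng(seed)
    boxes = meta.assign(glat=meta["lat"] // cell, glon=meta["lon"] // cell)
    keep = boxes.groupby(["glat", "glon"], group_keys=False).apply(
        lambda g: g.sample(1, random_state=rng))
    return keep.index   # station ids to use for this run
```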
Using Tamino’s new method I recently did an average for the state of Texas.
100 stations: his method computes an optimal set of offsets for combining stations in a region into one time series. On completion I looked at the cor() between each of the 100 stations and the “reference” series: .97+ for all the stations.
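The core of the offset idea, sketched (this is not Tamino’s actual code; the alternating fit converges to the least-squares offsets, which are only determined up to an overall constant, so the sketch pins their mean to zero):

```python
import numpy as np

def combine_with_offsets(data, n_iter=50):
    """data: (months x stations) array with NaN for missing values.
    Iteratively choose per-station offsets so overlapping stations agree,
    then average the offset-adjusted series into one regional series."""
    offsets = np.zeros(data.shape[1])
    for _ in range(n_iter):
        regional = np.nanmean(data - offsets, axis=1)      # current combined series
        offsets = np.nanmean(data - regional[:, None], axis=0)
    offsets -= offsets.mean()                              # pin the free constant
    return np.nanmean(data - offsets, axis=1), offsets
```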
I’m also working on a totally different way of aggregating data prior to averaging, one based on climate zones. That’s a huge piece of work that will require some new spatial algorithms, so don’t expect anything right away.
What’s behind this?
As Zeke and others have shown, the distribution of trends is fairly normal. Still, over a 100-year period we do see some stations that exhibit zero trend or negative trends. Is this just a random thing, or is there something “special” about these stations? If there is a factor which explains some of these “less than mean” stations, then the NEXT question is:
how much of the unsampled world has these characteristics?
make sense?
Mosher: “UHI. If I look at all the literature on UHI and my work and, more importantly, some of Zeke’s new work, I’ll say that I think the truth lies somewhere between Jones’ estimate of .05C”
Has anyone produced a list of stations whose temperature has risen more than the top error bar, and those below the bottom error bar? Or maybe 25% more and 25% less than the average global anomaly.
Phoenix, for example, is about 5C warmer than 1880. Would you suggest .05C of that is UHI and the rest is CO2?
Andrew_FL, as I said, I think our problem is inherent in your lack of facility with how one combines models with data. Discussion between us won’t fix this; training and experience are the only cure for that.
But I’ll try one last time… surface temperature measurements are made in the atmospheric boundary layer, which contains physics not modeled by AOGCMs. This means there is no theoretical expectation of whether surface temperature trends should be larger or smaller than TLT values.
It also means you can’t normalize tropospheric temperatures by surface temperatures in order to compare them to models which lack the physics of the ABL. Anyone with experience in modeling data would recognize this statement as tautological.
However, this doesn’t mean that the ABL temperatures are unrelated to TLT values, or that one shouldn’t expect some sort of correlation between them; just that the AOGCMs can’t predict what relationship we might find experimentally.
Further, we do expect to find a correlational relationship between surface and TLT measurements, and the absence of one would be… disturbing. Finding a good correspondence (even if the trends are different) tells us something important about the reliability of both data sets, even if you disagree with Owen’s exact word choice. I suspect this is what he was trying to say.
Carrick (Comment #79206)-If you “forbid” normalization to the surface, then fine: normalize everything to a layer above the boundary layer. Same result: models that have the same trend aloft as the observations (multiply their profile by the trend in the atmospheric layer you normalized to), instead of previously having the same surface trend, would have surface trends lower than the observations; and if we force them all to have the same trend aloft, i.e. multiply their normalized profiles by the model mean for the layer they were normalized to, they will consistently have higher trends aloft than the observations but a surface trend that matches the data. All you appear to be saying is that you think the only reason for this is that models lack ABL physics, and therefore it is unfair to expect them to get the surface-to-troposphere relationship right (they can’t, according to you, except for the wrong reasons, since, according to you, one needs boundary layer physics to get it right). Well, fine, that’s your explanation for that feature of the data. But I was not responding to that by saying you had to normalize! I was responding to your statement that the models and observations don’t disagree. Your response was basically “of course they don’t agree, because they can’t; your comparison is unfair”. That’s a separate issue. The point is that the models have a different relationship of their warming from layer to layer than the observations do. You can argue that they shouldn’t, or that this difference is because of the boundary layer physics. Fine, maybe. But you can’t argue that the models aren’t wrong because they are wrong (you are literally defending the idea that models don’t entirely disagree with observations on the basis that they have woefully wrong physics. What?) There is a disagreement; you are confusing the question of why one exists with whether one exists.
Re: Bruce (Jul 16 13:48),
“Has anyone produced a list of stations whose temperature has risen more than the top error bar, and those below the bottom error bar? Or maybe 25% more and 25% less than the average global anomaly.
Phoenix, for example, is about 5C warmer than 1880. Would you suggest .05C of that is UHI and the rest is CO2?”
That’s doable. But you have to explain what you mean by “top” error bar.
I’ve looked at the top warming stations (typically at northern latitudes, where science says they should be). I spent a little time looking at ‘cooling’ stations, even cooling URBAN stations. You can find all sorts of oddball things.
Phoenix:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722780003&data_set=2&num_neighbors=1
Not seeing 5C… unless of course you are looking at the bad unhomogenized data! (I guess you were.)
Phoenix of course is a wonderful example since UHI has been studied there by several researchers. We can clearly see that the effect of UHI there, prior to homogenization, is high. However:
1. We can exclude cities like Phoenix from our analysis. When we do this, when we eliminate huge urban centers, what do we see? We see the final average doesn’t change much. Why? Because places like Phoenix are rare in a database of 40K stations. So, when I say the effect is small, what I am pointing to is the effect IN GENERAL. As I noted above, you can even find urban centers that cool.
2. We can homogenize them or adjust them.
In short, you have to find HUNDREDS of Phoenix-like situations to impact the mean. There aren’t hundreds. In fact I can throw out ALL urban stations and get the “same” answer. So, yes, Phoenix has a UHI in the raw data that is large. But: A) we can leave it out; B) there aren’t enough cases like Phoenix to matter; C) we can homogenize it (Hansen does); so
D) Phoenix and cities like it do not have a huge impact on the final mean. They just don’t.
Re: Bruce (Jul 16 13:48),
Is the rest CO2?
Huh? The temperature is a function of ALL forcings; CO2 is but ONE. You have internal forcing (natural quasi-periodic ups and downs), non-anthro forcings, and anthro forcings. So, I don’t believe (no one does) that CO2 is the cause of all the temperature rise. Burn that strawman.
No. Just take random samples (with the same size as the number of stations for that month) with replacement. This will give repeats of stations, obviously, but that’s how the bootstrap works. It’s biased for small sample sizes, but that isn’t a problem here (for small samples it’s tractable to calculate the statistic under all permutations of the residuals). This is directly applicable to answering the question raised by John N-G about slope CIs; now your statistic is the slope instead of the monthly mean. No distributional assumptions required.
Once you have that up and running you have a useful tool to look at how robust time correlation statistics are given our measurement uncertainty.
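A minimal sketch of that resampling for a single month’s station values (illustrative names; for the gridded statistic you would resample station ids and re-grid each replicate, and `stat` can equally be a slope once you resample whole station series):

```python
import numpy as np

def bootstrap_ci(values, stat=np.mean, n_boot=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap CI: resample the stations with replacement,
    keeping the sample size equal to the number of stations that month."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    reps = np.array([stat(rng.choice(values, size=values.size, replace=True))
                     for _ in range(n_boot)])
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```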
I see Zeke’s chart of monthlies …
http://i81.photobucket.com/albums/j237/hausfath/Montecarlomonthly.png
… and I see great cherrypicking opportunity. One could easily make the chart go up or down or flat by selecting the stations that follow what one is looking for or simply de-selecting/discontinuing/losing the data of the stations that do not follow what one is looking for.
steven mosher, with regard to your Northern climes statement.
Is it possible to do a quick look at the data and see the annual span?
See which sites have a narrowing of J-J-A minus D-J-F over the course of the last century.
The thing is, I come from England, and I know every square mm of it is artificial. Now this is not quite the case in the USA, but we are getting there; for instance, the spread of the European earthworm is almost complete and has changed the whole soil ecosystem of the majority of the US. Irrigation has increased by about 90% since 1950, wetlands have been drained, rivers straightened, and houses now cover every shoreline, lake or sat, and dot all water courses. Before Federal insurance, people didn’t do this.
Doc,
The package is online. I have two choices every day. Work to make the package better so that others can do work with little to no effort or run studies for people. My experience is this: I run a study or Zeke runs a study and we get more honey dos. Honey do this, Honey do that. Now, if somebody wants to say
I have a theory and I am willing to give that theory up if your test shows X, then I’m probably willing to do a study. Or if Im curious about a thing I’ll do a study. But I did my package for a reason: to give others the power. Currently I’m perfecting some interfaces and its taking all of my attention. except for a pee break or a break to read Lucia.
Maybe in the future if you lay out a thesis and have willingness to give it up if the data shows otherwise.
Hehe. Steve’s in a pissy mood.
Mosher, how many GISTEMPs are there?
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722780003&data_set=1&num_neighbors=1
19.5 to 24.5
CO2 isn’t the cause?
“because places like Phoenix are rare in a database of 40K stations”
40,000 with data up to 2011?
Mosher: “In fact I can throw out ALL urban stations and get the ‘same’ answer.”
Then there is something wrong with the data.
Bruce, you showed Phoenix data that have not had the homogeneity adjustment applied. It's just one example, but what you need to keep in mind is that the data you linked to are not used to produce the global record. This is the homogeneity-adjusted record:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722780003&data_set=2&num_neighbors=1
Maybe this is a proper adjustment, or maybe not, but the trend is substantially smaller than in the original data, so for this specific station, bias has been removed to a greater or lesser extent.
Re: Bruce (Jul 16 21:20),
How many GISTEMPs are there?
There is one. One of the benefits of reading the charts, reading the code, and actually working with the data is that you get to understand what data is
INPUT and what data is not, and the various steps the data analysis goes through.
Go to this page
http://data.giss.nasa.gov/gistemp/station_data/
1) Select a specific data set from the pull-down menu below.
See that instruction? See those three options? Then go read the documentation and understand what each of them means.
Then you'll understand what you don't understand, perhaps.
Bruce, you can see all the data for tens of thousands of stations if you know where to look. Phoenix (a city of 2 million or more) is rare: it's rare in the GHCN inventory (7,000 stations), it's rare in GSOD, it's rare in COOP.
But if you think that cities of 2 million are COMMON in the database, then go ahead and prove it. And yes, the larger databases have more recent data.
Steven Mosher and Bruce:
"In fact I can throw out ALL urban stations and get the "same" answer."
Yup, because the so-called rural stations probably have higher UHI or contamination than the urban stations.
http://wattsupwiththat.com/2010/03/03/spencer-using-hourly-surface-dat-to-gauge-uhi-by-population-density/
Oh yeah, why is the Jones UHI paper still being discussed? If all the data isn't available to support the station selection, it is pointless.
http://climateaudit.org/2010/11/03/phil-jones-and-the-china-network-part-1/
http://climateaudit.org/2010/11/04/phil-jones-and-the-china-network-part-2/
http://climateaudit.org/2010/11/06/phil-jones-and-the-china-network-part-3/
Do read all the way thru the end of the third part. Perusing S. McIntyre’s looks at the other papers on UHI linked in these articles is educational also.
By the way, I do not offer Dr. Spencer's work as definitive, just as an example of the lack of serious work in the area. This is one of several areas that need clarification before all the hard work Mosh and the Berkeley project are doing is useful. Building your science on the backs of snakes doesn't get you to the stars. Statistics cannot make a silk purse out of a sow's ear. Garbage in, garbage out, yada yada yada.
Steven Mosher at 79202: Still, over a 100-year period we do see some stations that exhibit zero trend or negative trends. Is this just a random thing or is there something "special" about these stations? If there is a factor which explains some of these "less than mean" stations, then the NEXT question is:
how much of the unsampled world has these characteristics?
This is an important point. So is the point that the selection of CIs is dependent on the use to which the data are to be put. In medical terms, the stats need to be a lot sharper for a life-or-death drug dose than for a common aspirin.
Re your interest in dams, why not look at Lake Eyre in Central Australia, which is mostly dry salt pan but fills in some years? Interesting that the water source catchments can be 1,500 km and many months away, so the effect of lake water on local climate is lagged from the climate at the time the catchments started to run. Here's a quick look at 2 stations. Seems like water in the lake rather lowers Tmax. http://www.geoffstuff.com/Lake%20Eyre.jpg
Re: Bruce (Jul 16 21:23),
I see. Because you think:
A. that UHI is HUGE (based on one of the most severe cases of UHI studied),
B. these huge effects are common, and
C. these huge effects are uncorrected,
you are going to reject observations and stick with your theory?
It could be:
A. that UHI isn't always huge,
B. that the effects are not so common, and
C. that some folks correct for the effects.
Could be your theory that UHI is huge is wrong.
How do we test whether UHI is widespread and Huge?
Hmm. We look through all the stations and only select those that are really rural. Like zero people, zero concrete, zero electric lights.
When we do that, what do we find? The same answer, give or take .1C.
Why? Because UHI is not always huge, and because it's not widespread.
It's there, UHI is there, but it's not the explanation for all the warming we see. Could it explain .1C or .2C of the warming over land? Possible.
Do we have proof of that? Nope. All the evidence from looking at ALL the stations, not just Phoenix, says it's not that big.
But if you prefer to say the data is wrong, I'm fine with saying that Phoenix is wrong. Hehe.
“How do we test whether UHI is widespread and Huge?
Hmm. We look through all the stations and only select those that are really rural. Like zero people, zero concrete, zero electric lights.
When we do that, what do we find? The same answer, give or take .1C."
Do you have any sources for that?
I think there's a warming bias (UHI, but also RHI) in almost all station records. Just yesterday I was in a very small village (~1000 people) and I noticed it was ~3 °C hotter than in the countryside only ~200 m away.
One way to test this is to re-evaluate the stations and give them some kind of grading, for example 10 for the best of the best and 1 for the worst. Then see if there's a correlation between station quality and the trend.
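In code the test itself is tiny (Python; the numbers below are made up purely to show the mechanics, not real station data):

import numpy as np
from scipy import stats

# Made-up illustrative numbers: one siting grade and one trend per station.
grades = np.array([9, 7, 3, 5, 8, 2, 6, 4, 10, 1])    # 10 = best siting
trends = np.array([0.8, 0.9, 1.3, 1.0, 0.7, 1.5,
                   0.9, 1.2, 0.6, 1.6])                # deg C per century

r, p = stats.pearsonr(grades, trends)
print("grade-trend correlation r = %.2f, p = %.3f" % (r, p))
# A significantly negative r (worse siting, higher trend) would point to
# a siting/UHI bias; r near zero would mean siting doesn't matter much.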
Re: Edim (Jul 17 00:35),
I was in a very large city today and it was chilly. I went to the countryside and it was much warmer. That proves exactly nothing. Believing that there is a warming bias and PROVING IT are two entirely different things. Having spent the better part of a year looking for the proof and not finding it has led me to the conclusion that the effect must be small. When someone can propose a test that proves it is large (impacts the mean by more than .15C), then I'll change my mind. No trouble there, because I do believe that UHI is real. However, the person proposing the test must be willing to accept the results and give up their beliefs if the test proves them wrong. Pretty simple.
As for grading stations, that's been tried: around 1,000 stations were graded, and there was no difference in the mean. There are 7,000 sites in GHCN; start grading them. Establish objective criteria before you start grading, however.
Steven,
Yes, it does not prove anything, but it indicates that there might be a warming bias. Anthropogenic local warming and its impact on instrumental records is too easily dismissed, IMO.
Do you have any links for the station grading test?
I agree, objective criteria should be established before grading.
I don’t have time for any work on this, unfortunately.
Edim (Comment #79229)
July 17th, 2011 at 2:10 am
Watts himself showed there was no significant warming bias, if there is one. Not that he made much of it.
bugs,
I will look into what Watts found.
I think urban/rural can be misleading. For example, some rural stations can have more warming bias than urban if there was more development in the surrounding environment of the rural stations.
That's why I would like to see an analysis of absolutely the best stations, even if there are only a few. I don't think meat-grinding thousands of stations is necessary.
For the record, I believe that there’s been some warming in the early and late part of the 20th century, with some cooling in the middle.
Steve, if I am a pain, it is only because I wish to know what the ‘A’ in AGW is.
As we do not have the luxury of positive and negative controls (sites we know have had no changes in surrounds or sensors for the past 100 years, or which are unaffected by [CO2]-driven photon recycling), it all gets very complicated.
I suspect, that there should be changes in daily, monthly and seasonal temperature span in data-sets with large changes in UHI effect, compared with those without.
Given that the data we have are essentially daily max and min, we really can only look at means and distributions. If the distributions for a site in 1940 and 1990 are hugely different, it means that either the sensor, the site or the atmospheric physics has changed. For some sites we might have knowledge of when sensors and sites have changed, and so may be able to tease out their signature.
Ignoring my requests is not 'pissy'; we all have lives.
Andrew FL (#79176),
“I don’t believe I ever said this expectation is a physical law. But just looking at how the troposphere warms and cools more rapidly in general for ENSO and volcanoes, it seems to me that the odd thing out is the long term trend.”
————————————————
You seem to be saying that your expectation that the troposphere will show a greater rate of warming than the surface is based entirely on your own analysis of the situation.
Owen (Comment #79234)-Actually, the point I was making was also made by a paper authored by Ben Santer (among others) in 2005. Of course, he decided it meant that the satellite data was wrong. For instance, from the climategate emails, Phil Jones and Tom Wigley discuss the paper:
http://devoidofnulls.wordpress.com/2009/11/30/nice-to-critique-you/
“The timescale argument is quite convincing. It is a pity that there is only Pinatubo that you can test it on. El Chichon ought to work but it is confused by ENSO. Does the amplification work well for the 1997/98 El Nino?…Anyway my thought is as Pinatubo gives the amplification then ENSO ought to as well.”
Indeed, I have confirmed that short term variations are amplified, for ENSO and volcanoes:
http://devoidofnulls.wordpress.com/2011/07/16/relating_lower_tropospheric_temperature_variations_to_the_surfac/
Again, the trend is the odd man out. Note that Phil Jones makes reference to "thermodynamic theory", so evidently they believe there are physical reasons to expect such amplification (and models do the same thing, but Carrick asked that I not reference them, and I obliged), but I believe that their "theoretical" expectation rests on too many assumptions to be a satisfying reason why there "must" be amplification. So I gave my reason for thinking there should be amplification.
"One of the tests I did a long while ago with my CAM approach was to look at every 5-degree bin that had more than one station. If that was the case I randomly selected one station to represent the entire 5-degree cell.
As Zeke notes, that made no difference to the final tally."
Steve Mosher, what does "made no difference in the final tally" mean? I would think that, unless you studied an area of the globe where temperatures had near-perfect correlations, you would have computed error bars, as Zeke has done in this exercise, that would show some difference in uncertainty.
Zeke, I thought a bit more about your problem. Since what you are calculating is a weighted average, it probably makes more sense to resample the residuals rather than the stations directly. This way you can keep the same weights for each station without worrying about how to treat weights for repeated stations. Not a big deal conceptually, just a minor tweak.
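In code it's barely different from the plain station bootstrap. A sketch, where anoms and weights stand in for one month's station anomalies and their grid weights:

import numpy as np

def residual_bootstrap_ci(anoms, weights, n_boot=500, seed=0):
    """Resample residuals around the weighted mean, keeping each
    station's weight fixed, then recompute the weighted mean."""
    rng = np.random.default_rng(seed)
    w = weights / np.sum(weights)
    mean = np.dot(w, anoms)
    resid = anoms - mean
    boot = np.empty(n_boot)
    for i in range(n_boot):
        # same weights every draw; only the residuals are reshuffled
        boot[i] = np.dot(w, mean + rng.choice(resid, size=resid.size))
    return mean, np.percentile(boot, [2.5, 97.5])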
Essentially what you are saying, Mosher, is that no matter how many 5C UHI cities I find (or 4C or 3C or 2C), the average of the "data" will always be within .1C.
Bzzzt. Not believable. Something stinks.
Re: Kenneth Fritsch (Jul 17 10:23),
Of course if you decrease N, your uncertainty increases.
Estimations of the mean didn't change.
Basically, the spatial field is highly correlated.
OK, OK, you guys have suckered me into discussing an off-topic subject here. I have not followed your discussion well on, I think, the reasoning behind expecting the troposphere to warm faster than the surface, i.e. higher trends in the troposphere, but I remember an exchange a long time ago, in blog terms, between Isaac Held and Steve McIntyre where Held held that the physics of his models called for greater temperature trends in the troposphere than at the surface.
As I recall, Held explained it in thermodynamic terms involving moisture content in the atmosphere. McIntyre was attempting to pin Held down on the matter where, again, as I recall, Held indicated that the measurements might be wrong. I think the discussion started with theories on the dynamics of hurricanes.
Now I will have to go back and attempt to find that thread just to determine how good/bad my memory is these days.
If the Watts CRN study revealed anything, I think it showed that the microclimate changes over time of individual stations could well overwhelm any effects of UHI, and additionally that shorter-term trends would not necessarily be affected by changes that occurred and were realized further back in the past.
I sometimes think that the UHI arguments get in the way of more meaningful discussions and analyses of what microclimate changes mean in terms of measuring uncertainty on long-term temperature trends. I certainly do not think that looking at trends from 1980 forward is sufficiently long term – unless it can be shown that the microclimate changes occurred within that period.
One should also be aware that the adjusted GISS temperature records in the US are all based on rural stations and thus urban temperature trends will be nearly the same as rural stations – by construction.
What Anthony showed was that if one aggregates stations by their current siting, the mean temperature trend is about the same for good stations as bad stations. Interestingly enough this was not the case for max and min, but the differences were opposite in sign and mostly canceled. This is hardly proof that siting doesn’t matter. The problem is that unfortunately without knowing how the stations were sited in the past one cannot see if siting changes matter. They may, they may not. Who knows?
Anthony hasn’t shown anything about UHI in published work, although I hear his group is working on something.
Also, the results were only for the US. It looks to me like the US has the best data quality and record in the world, so I don’t think one should expect to find large biases here.
steven mosher (Comment #79217)
July 16th, 2011 at 8:07 pm
So, given the choice, which of those two alternatives gets priority? Just curious. 🙂
bugs (Comment #79230)
July 17th, 2011 at 3:19 am
That would be no significant warming bias in mean temperatures according to Leroy’s station siting classification. We didn’t look at urbanization. Anything beyond 100 ft is irrelevant to the classification.
Andrew_FL v. Carrick:
I think you would both agree that GCM surface temps and tropospheric temps are closely related, just not by the same physical laws that cause actual atmospheric surface temps and tropospheric temps to be closely related. I think you would also both agree that GCMs predict a tropospheric amplification of trends, and whether or not this has anything to do with the actual (lack of) amplification depends on how well the GCMs mimic the interaction between the surface and the atmosphere. Finally, I think you would both agree that while GCMs do not prove that the lower troposphere should warm faster than the surface, they do demonstrate that it's fully plausible that the lower troposphere and the surface might not warm at the same rate. Okay?
Mosher: “Phoenix is rare”
I admit I'm just eyeballing, but let's look at the data before homogeneity adjustment for the Phoenix region (only stations with 2011 data):
http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=33.43&lon=-112.02&datatype=gistemp&data_set=1
Eyeballing
Sacaton 3C +
Roosevelt 4C +
Wickenberg -3C
Prescott 4C
Ajo 4C
Tucson 3.5C
Whiteriver 2C +
Williams 2C +
Fort Valley 3C from 1915 to 1935 and then back down
Seligman 2.5C or so
Safford 3C
Blythe 4C
Phoenix isn’t rare at all.
I would still be curious, Mosher/Zeke: how many stations with data up to 2011 have pre-adjustment warming of more than 1.5C sometime during the period for which there is data?
"Of course if you decrease N, your uncertainty increases.
Estimations of the mean didn't change.
Basically, the spatial field is highly correlated."
Not only N, but also the variation in temperature correlations of the individual stations with the mean temperature of the grid in which the stations reside. Obviously, if I use one station to represent a grid and the station I randomly select agrees well with the mean, I get a good estimate, while if the randomly selected station does not agree, I get a poor estimate.
Of course, what one is required to do with a gridded system is to obtain estimates from nearby grids for sparsely populated or unpopulated grids. It then becomes a process of using correlations between grids and estimating how well the station population represents the true mean of a grid.
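For reference, the gridding step itself is simple enough to sketch (Python; empty grid boxes are simply skipped here, which is exactly the infilling problem at issue):

import numpy as np

def gridded_global_mean(lats, lons, anoms, cell=5.0):
    """Average station anomalies into cell-degree boxes, then
    area-weight the boxes by cos(latitude). Empty boxes are skipped."""
    boxes = {}
    for lat, lon, a in zip(lats, lons, anoms):
        i = min(int((lat + 90) // cell), int(180 / cell) - 1)  # clamp pole
        j = int((lon + 180) // cell) % int(360 / cell)         # wrap dateline
        boxes.setdefault((i, j), []).append(a)
    num = den = 0.0
    for (i, _), vals in boxes.items():
        center = -90.0 + (i + 0.5) * cell     # box-center latitude
        w = np.cos(np.radians(center))        # area weight
        num += w * np.mean(vals)
        den += w
    return num / den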
Here is what I was able to come up with for Isaac Held's views on preferential warming of the troposphere. I remembered that Held mentioned humidity and lapse rate in his discussion with McIntyre; unfortunately, I had forgotten the proper spelling of his first name.
http://en.wikipedia.org/wiki/Isaac_Held
“Held’s research has had two major themes. The first is understanding how the earth’s climate responds to changes in the amount of solar radiation hitting the planet or to the concentration of greenhouse gasses in the atmosphere. His early work showed that a warmer planet would tend to have a more stable tropics, as the higher concentration of water vapor would result in more precipitation and disproportionate heating of the upper troposphere.[2]”
Held, I., 1978: The tropospheric lapse rate and climatic sensitivity: Experiments with a two-level atmospheric model. Journal of the Atmospheric Sciences, 35(11), 2083-2098.
John N-G (Comment #79246)-I mostly agree; however, I would note that in the observations the connection between surface and troposphere variations depends on timescale. I think this is potentially important, although I admit there is no one explanation that necessarily must follow. It just means that the way the models connect multidecadal trends at the surface to the troposphere is different from what the observations indicate (assuming they are right), but for short-term variations the model physics does not seem too unrealistic.
Re: Kenneth Fritsch (Jul 17 11:55),
Well, propose an experiment.
I've been playing around with Tamino's regional estimator. Just looking at various states and selecting stations based on that rather than grids. Finding stations that don't correlate is a PITA. Now, of course, we know that at some great distances this won't be the case.
If your theory is that unsampled areas don't correlate with sampled areas, then what's a good approach to test that theory?
Re: Bruce (Jul 17 11:22),
Bruce. You want to look at data that doesn't get used? Fine. But I fail to see what that has to do with anything. Next, have a look at the stations you listed.
Divide them between urban (like Phoenix) and rural (<10,000 in population).
Do you see anything odd, even in the bad data?
Re: John N-G (Jul 17 11:15),
Hehe. If I short out the keyboard I'll blame it on this site. Up to now, pee breaks take priority. TMI, I know.
Re: Bruce (Jul 17 10:31),
No, I'm saying something different. You can and will find individual stations that show UHI effects. You will, because UHI is real. In some cases UHI is large. The question is how many stations have these levels of UHI? And further, how many urban stations actually have COOLING? I found bunches of those. Why do they cool? Well, one THEORY is the "cool park theory". There are always micro-siting issues that can cool an urban location, like shading. For example, in the study of UHI in Portland, the number 1 regressor was 'canopy cover', which was shown to cool urban areas below the surrounding rural areas. Water use also drives the difference between urban and rural, as Grimmond and Oke show. Very simply, if you look at isolated cases you will be misled into believing that ALL urban sites are like Phoenix. They are not.
The next issue is you persist in looking at data that doesn't get used. That's raw, unhomogenized data. For example, in GISS the urban stations get forced to match the rural stations.
The real issue you want to raise is the homogenization issue. It DOES work to remove a large component of the UHI, but does it remove it all? There's reason to believe that it doesn't, but that residual is small.
Make sense now? You are looking at bad data. That bad data gets corrected.
The question is: how good is the correction, and what's the uncertainty that should be added to the final result? You don't address that question by looking at the bad data.
Finally, if you do look at really rural sites you will see that the average over the whole century doesn't change. That's a really interesting finding. Kinda odd.
In the end you have to answer this question: if UHI is so clear in an individual case, how does its effect vanish in the final result? You can't answer that with your eyeballs.
Mosher: "Divide them between urban (like Phoenix) and rural (<10,000 in population). Do you see anything odd, even in the bad data?"
These are < 10,000
Sacaton 3C +
Roosevelt 4C +
Wickenberg -3C
Ajo 4C
Whiteriver 2C +
Williams 2C +
Fort Valley 3C from 1915 to 1935 and then back down
Seligman 2.5C or so
Safford 3C
Blythe 4C
Prescott 4C is 27,000
Mosher: "You don't address that question by looking at the bad data."
You mean uncorrected data. Uncorrected data says UHI is real, and it affects stations in places with fewer than 10,000 people.
Blythe is interesting. 1000 people in 1930 to 20,000 in 2010.
Marked as < 10,000 in GISS.
“Blythe has a classic low desert climate with extremely low relative humidity and very high summer temperatures. On the average, it receives less than 4 inches of precipitation a year. Stores, shops, restaurants, theaters and homes are air-conditioned much of the year.”
And the corrected data is about a 3C rise.
Peter Cook, humorist, once said something like "… they invented an instrument so sensitive that it detected itself". The thermometer has a problem in this regard. In the most common use, we average the highest and lowest historical thermometer readings each day and use that as a proxy for many events, one being UHI estimation. More recently, near-continuous monitoring has allowed a distinction between a mean daily temperature and a mean daily energy flux. The two can be vastly different, especially when considering UHI in the IR and different instrument enclosure designs.
Personally, I have reached the conclusion that UHI can never be reconstructed and can never be corrected in almost all historic records. The micro-environment around the instruments, like shade from buildings and trees, air conditioners running at night, etc., is often missing data.
Some 'studies' I have read are defective because of poor use of changes over time, causing a difficulty in determining when UHI might have started or finished or reached a plateau at a given place. Of course UHI exists; but it is now beyond fruitful to spend much more time on the tip of a pin with angels.
I think all modern temperature changes are a result of the penetration of A/C.
“1946 The demand for room air conditioners began to increase with more than 30,000 units produced on this year.
1953 Room air conditioners sale exceed 1 million units. This is another key milestone in the history of air conditioner.
1998 Unitary air conditioners and heat pumps set a sale record of more than 6 million units.”
http://www.airconditioning-systems.com/history-of-air-conditioner.html
"I've been playing around with Tamino's regional estimator. Just looking at various states and selecting stations based on that rather than grids. Finding stations that don't correlate is a PITA. Now, of course, we know that at some great distances this won't be the case.
If your theory is that unsampled areas don't correlate with sampled areas, then what's a good approach to test that theory?"
Mosher, what I am saying is that handling the issue of correlation by treating it as if stations either correlate or do not is not very helpful. Correlations vary between stations, and that is the gist of the matter for uncertainty with regard to spatial infilling. Temporal infilling requires extrapolation in time or in space or both, and that depends on correlation variation in space and/or time. Correlation decreases with distance, and it is also affected by changes in station elevation, latitude and proximity to large bodies of water, among other potential effects.
I have not read what I would consider a satisfactory method in the peer-reviewed literature for estimating sampling error, but I am aware of attempts, and I also know that climate scientists are concerned with the implications of the uncertainty that derives from infilling missing data in space and time. I think 1997 was the first time a paper covered sampling uncertainty of regional and global temperatures arising from spatial and temporal infilling.
The link below discusses station temperature correlations over distance but, as I read it, never shows uncertainties of infilling temperatures.
http://homologa.ambiente.sp.gov.br/proclima/publicacoes/publicacoes_inpe/ingles/largescalechanges2006.pdf
This second link actually presents a method for determining sampling error and is the best I can find at the moment:
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI4121.1
The article comments:
"The authors' error estimation is determined by two parameters: the spatial variance and a correlation factor determined by using a regression. The error estimation procedures have the following steps. First, for a given month for each grid box with at least four station anomalies, the spatial variance of the grid box's temperature anomaly, sigma^2, is calculated by using a 5-yr moving time window (MTW). Second, for each grid box with at least four stations, a regression is applied to find a correlation factor, alpha, in the same 5-yr MTW. Third, spatial interpolation is used to fill the spatial variance and the correlation factor in grid boxes with less than four stations. Fourth, the sampling error variance is calculated by using the formula E^2 = alpha * sigma^2 / N, where N is the total number of observations for the grid box in the given month…
…Two 5° x 5° station-dense boxes in the United States are selected to validate our error-estimation theory. They are (45°–50°N, 120°–125°W) in the western United States and (40°–45°N, 70°–75°W) in the eastern United States."
The third link is referenced in the second link.
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI4121.1
This article “was the first publication on the systematic calculation of the error variance of gridded data on the decadal scale by estimating two parameters: the average variance, S^2, of all the stations in a grid box, and the average intercorrelation dimensionless percentage (r) of these stations based upon the output of general circulation models (GCMs).”
The weaknesses, as I see them, in these models are dealing with nonstationarity and testing the model against "known" mean temperatures of grids.
I am not sure why climate models are preferred over empirical data from satellites.
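To make the quoted recipe concrete, the final step is a one-liner once alpha and sigma^2 are in hand. A sketch in Python (the regression for alpha and the moving-time-window bookkeeping are left out; the inputs would come from the paper's procedure, not from here):

import numpy as np

def box_sampling_error(window_anoms, alpha, n_obs):
    """E = sqrt(alpha * sigma^2 / N) for one grid box in one month.
    window_anoms : station anomalies in the box over the 5-yr moving
                   time window (at least four stations), for sigma^2
    alpha        : correlation factor fit by regression in that window
    n_obs        : number of observations in the box for the month"""
    sigma2 = np.var(window_anoms, ddof=1)   # spatial variance
    return np.sqrt(alpha * sigma2 / n_obs)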
Bruce (Comment #79264)
July 17th, 2011 at 7:17 pm
You can think whatever you want. You have no evidence that what you think is correct.
bugs, I do have some evidence.
“The waste heat resulting from air conditioners has increased the air temperature by 1°– 2°C or more on weekdays in the office district in Tokyo. This result demonstrates the importance of considering the waste heat resulting from energy consumption with air-conditioning in the calculation of urban canopy temperature. Moreover, by comparison with a mesoscale meteorological”
http://journals.ametsoc.org/doi/pdf/10.1175/JAM2441.1
The ubiquitousness of A/C units in all the towns I mentioned a few posts ago might explain the temperatures showing 2, 3, 4 and 5C rises.
“On top of massive energy consumption, the use of many air conditioners can and does affect the local temperature. As the cool air is created inside, intense hot air is pumped outside via the condensers. This creates heat zones, multiply these zones in a city and you have what the science world call an urban heat island. This is the name given to describe the characteristic warmth of both the atmosphere and surfaces in cities (urban areas) compared to their (non-urbanized) surroundings.
Take for example blocks of apartments, office blocks or large commercial buildings, pumping out 40C to 60c per unit to the already hot external air temperature. When you add up a city’s worth of air conditioners, you can understand why it is that cities are hotter than the countryside in summer. This additional heat also creates a microclimate convection system whereby the hot air rises swiftly in pocketed areas, creating many new abnormal localized weather patterns.”
http://www.greentimes.com.au/housing-building/the-impact-of-air-conditioners.html
"…increasing towards the centre of urban areas. Temperatures in urban areas can easily be 3-5 degrees C above outlying rural areas, but at times of long heat waves this can rise to 10 degrees. In Hong Kong, urban heat island effects are exacerbated by traffic and air conditioning systems. And as the heat rises so cooling systems add to the problem."
http://www.csr-asia.com/report/report_cc_challenges_hk.pdf
"The analysis of temperature records shows that Hong Kong has been warming up during the past 118 years, in line with the global warming trend. This is also consistent with the warming trend in China mainland in the past 50 years. In the period 1989 to 2002, the rural areas of Hong Kong have been warming up at a rate of about 0.2°C per decade. At the Hong Kong Observatory Headquarters in the heart of urban Hong Kong, the corresponding rise was about 0.6°C per decade. The difference of 0.4°C per decade between temperatures in urban and rural areas may be attributed to the effects of high density urban development."
http://www.weather.gov.hk/publica/tn/tn107.pdf
Warming has been measured by satellites (both troposphere and SST), radiosondes, and ships and buoys. That's in addition to other measurements like sea level rise, global sea ice, the ice balance of the ice caps and alpine glaciers, and the movement of climatic zones. A/C did not cause these things.
cce, I think you shouldn't bring up SST because the bucket/inlet thing is a joke. As for sea level … it's been dropping or level for the last 7-8 years, and pinning down what is normal is quite difficult.
Many Greenland temperature records were set in 1929 and have yet to be broken.
etc etc
But this imaginary global average temperature is probably mostly, if not all, UHI. And I think 100s of millions of A/C units pumping heat into the air probably have some effect. I think 1-2C in Tokyo on weekdays is not something that should be dismissed.
And it isn't just A/C units. It's heaters when it's cold, and pavement, and waste heat from cars and planes and factories, etc. etc.
UHI is real. It is not something to be dismissed.
Are the AVHRR, MSU, AMSU, TOPEX, SSM/I, and GRACE instruments affected by UHI and bucket adjustments? Isn't it strange that every method of measuring or inferring temperature change points to warming? And if you believe that SLR stopped a few years ago (which it didn't), what, if not a rise in temperature, caused it to rise before then?
cce (Comment #79273)-Climatic zones are determined in part by surface temperature, so if there is a problem with the measurements thereof, this is not an independent line of evidence, but subject to the same problems. Glaciers and all that are more complex matters than functions of just temperature, and many of our records (ie for sea ice) are too short to tell us much (besides also not being directly tied to temperature/heat. However, at least for the periods they cover it is true that satellites confirm that some warming has taken place. But all this extraneous stuff is bringing in unnecessary lines of evidence of dubious utility to assess the issue at hand, which looks desperate as if looking for any possible thing that will remove all doubt. Sorry to break it too you but there will always be doubt. Best just to stick with arguing from the best lines of evidence and if you can’t convince people, so be it, they won’t be convinced.
By the way, Steven Mosher, you have done analysis involving excluding urban stations, what method do you use to define “urban”? GHCN classification is arbitrary and out of date, which potentially biases the conclusions. Have you tried stratifying the stations by level of population growth instead of level? To me that makes more sense, as we are interested in trends.
Bruce (Comment #79271)
July 17th, 2011 at 8:54 pm
Not of UHI, but that it is the only source of warming. UHI has been studied and taken into consideration for many years. You appear to be claiming it is the only reason we are measuring warming.
Re: Bruce (Jul 17 16:51),
Oh my, it's everywhere? With a UHI of 3C even at stations in the middle of nowhere, why, every station must have 3C of UHI... huh? Why is the land average only up ~.8C?
Maybe Arizona happens to be one of the states that saw the most warming at rural and urban sites. Maybe that 3C you see at rural sites is the real deal, which of course means Phoenix isn't really that infected. In fact, Arizona is one of the states that saw the most warming.
And Wickenburg... that's a class 5 CRN site according to Anthony's paper.
Class 5 CRN... -3C. Hmm.
Bruce, for A/C to impact a record, the A/C has to raise the temperature ABOVE the Tmax recorded for the day.
Tmax typically happens around the middle of the day.
A/C typically goes on 2-4 hours after the heat of the day (look at the power loading curves of power stations).
One finding of Anthony's study that was interesting (see his talk at the conference) was that Tmax was lowered at CRN 5 sites: basically the heat sink effect.
It's damn hard for A/C to impact a Tmax record.
Re: Bruce (Jul 17 21:03),
Nice, except Hong Kong is represented by the Royal Observatory in GHCN.
Looks pretty flat.
Oh well, it's on the other side of the island.
Re: Andrew_FL (Jul 17 22:42),
I used several methods to classify urban and rural:
1. rural: no lights within 100 km
2. rural: no built surfaces at the site, within 5 km of the site, within 10 km of the site
3. zero population density by all current estimates (3 different sources)
4. zero population and zero growth from 1900 on
etc. etc. etc.
Basically, everything I could get my hands on. I'd never use GHCN classification, except perhaps their lights code, which is 4 sq km of no lights at the site.
I Google Earthed every site, drew maps with overlays of metadata, historical land use, land type, blah blah blah, MODIS data at 500-meter resolution, population at 1 km, historical population, population density.
It's very simple. You can and will find examples where UNDER CERTAIN CONDITIONS UHI is high. It doesn't happen everywhere all the time.
As it turns out, given the stations and methods we have, the UHI signal is either
1. in every fricken station, so we will never 'find' it by 'splitting' data into various sets, or
2. small enough that it is ON or AROUND the uncertainty boundaries of the data itself... < ~.2C.
If #1 were true then we'd see the difference when we compare NCDC's new CRN network (7+ years of data) with the overlapping data. Guess what?
We don't. If you don't like comparing to UAH, then compare the "old" surface stations to the new CRN stations.
No difference.
http://www.ncdc.noaa.gov/crn/
The logic is pretty clear. If UHI was large we would find it quickly by comparing rural to urban. We don't. Not with population, lights, urban extent data, impervious surface data, land use data, vegetation data, every indicator you can think of.
That means one of two things: it infects everything OR the effect is small.
We have several ways to show that it doesn't infect everything. The new 114-station CRN network is just one.
Here is the real brain buster: even if it was only HALF as warm as we think, AGW would still be true. CO2 warms the planet; how much is the question. The surface record isn't that important for answering that question.
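The comparison itself is trivial once you have a defensible rural mask. A sketch (numpy arrays, complete series with no missing years, which real station data never gives you):

import numpy as np

def rural_vs_all_trend(years, anoms, is_rural):
    """Compare the least-squares trend of a rural-only subset against
    the full network. anoms is (n_stations, n_years); is_rural is a
    boolean mask over stations."""
    def decadal_trend(series):
        return 10.0 * np.polyfit(years, series, 1)[0]  # deg C / decade
    t_all = decadal_trend(anoms.mean(axis=0))
    t_rural = decadal_trend(anoms[is_rural].mean(axis=0))
    return t_all, t_rural, t_rural - t_all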
"CO2 warms the planet."
Not always. Sometimes the Squiggly Line goes down.
Andrew
Mosher: "Oh my, it's everywhere? With a UHI of 3C even at stations in the middle of nowhere, why, every station must have 3C of UHI... huh? Why is the land average only up ~.8C?"
Good questions. However, I was responding to the point you made that only places like Phoenix have UHI, when in fact it appears every station (with data to 2011) near Phoenix has UHI (except one).
A/C units have become ubiquitous in warmer regions. And pavement. And all kinds of other UHI-inducing changes.
You claim UHI is minuscule, but looking into it for just a few minutes suggests it is widespread.
You may claim that your adjustments turn it into .05C, but trying to minimize it looks kind of silly.
Mosher: "Bruce, for A/C to impact a record, the A/C has to raise the temperature ABOVE the Tmax recorded for the day.
Tmax typically happens around the middle of the day.
A/C typically goes on 2-4 hours after the heat of the day (look at the power loading curves of power stations)."
Every place I worked that had A/C and needed it turned it on an hour or so before most workers were due to arrive and then turned it off or down 30 minutes or so after most of them left.
As someone who worked flex hours I can attest to the peace and quiet and general lack of decent fresh air that occurs when it all goes off. The same goes for heating.
Anyone who suggests that all the towns around the stations I list only turned the A/C on in the stores/offices/etc well after the stores opened for business has trouble with reality.
"Nice, except Hong Kong is represented by the Royal Observatory in GHCN. Looks pretty flat."
Not really
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=205450050000&data_set=1&num_neighbors=1
From the paper by the "HONG KONG OBSERVATORY" (I don't think Royal is used anymore):
"At the Hong Kong Observatory Headquarters, temperature readings are available for the 118 years period from 1885 to 2002, apart from a break during the World War II from 1940 to 1946. Analysis of the annual mean temperature data showed that there was an average rise of 0.12°C per decade in that 118 year period (Figure 2). The annual mean temperature rose from 22.0°C in the late 19th century (1891 – 1900) to a mean of 23.5°C in the most recent 10 years (1993 – 2002). In post-war years from 1947 to 2002, the average rise amounted to 0.17°C per decade, similar to the finding of 0.15°C per decade at the Hong Kong Observatory Headquarters from 1947 to 1999 by Ding et al. (2002). The warming at the Hong Kong Observatory Headquarters has become significantly faster in the period 1989 to 2002, at a rate of 0.61°C per decade."
Mosher: “in every fricken station, so we will never ‘find’ it by ‘splitting’ data into various sets.”
Now you get it.
As for Wickenburg … it sure fits in with the rest of the data if you check the adjustments. (That's one wicked adjustment!!!)
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722780090&data_set=1&num_neighbors=1
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722780090&data_set=2&num_neighbors=1
Mosher: “And wickenburg.. Thats a class 5 crn site according to Anthony’s paper.”
Really? Doesn’t look like it!
http://gallery.surfacestations.org/main.php?g2_return=%2Fmain.php&g2_formUrl=%2Fmain.php&g2_authToken=4ef56627ec1e&g2_view=search.SearchScan&g2_form%5BformName%5D=search_SearchBlock&g2_form%5BsearchCriteria%5D=wickenburg&g2_form%5BuseDefaultSettings%5D=1
steven mosher (Comment #79284)-Thanks. In response to my post, most of what you said was unnecessary. I know, for instance, that CO2 has a warming effect, so it's a little condescending that you "point it out" to me. I was merely curious whether the way urbanization is classified makes a difference. Apparently not. 🙂
Bruce,
Plus, if your building has a data center, A/C runs “all the time”. 😉
Andrew
Re: Bruce (Jul 18 08:34),
I do believe Wickenburg is class 5. See below, from the paper:
“33.9792 -112.7403 29287 5 WICKENBURG AZ 10 x R 0.695470267 0.791711453 0.697202746 ”
That's from the SI of the published paper. Apparently you didn't read the paper, or download the SI, or check the Excel spreadsheet.
http://gallery.surfacestations.org/main.php?g2_itemId=13239
Re: Bruce (Jul 18 08:32),
Bruce, if you believe UHI is in every site, then predict how CRN (the Climate Reference Network) will look compared to the rest of the network.
The CRN may someday tell us about the climate … once it gets going. But I doubt it will tell us anything about historical climate.
Mosher, try putting the 2 GISS links in separate tabs in your browser and then clicking back and forth. It's like an animated seesaw …
Considering how badly sited Wickenburg is, where is the justification for adjustments that cool all the old data, changing a downward trend into an upward trend?
Aside from that … UHI is real and appears huge.
Bruce,
UHI has NOT been studied in any reasonable way at this point. Here is S. McIntyre's work on Jones and Wang's UHI study using Chinese data. Keenan found they did NOT have the metadata to support the study. In the posts, McIntyre links to where he looked at other UHI studies. None of them appeared to be solid:
http://climateaudit.org/2010/11/03/phil-jones-and-the-china-network-part-1/
There is a part 2 and 3 also to click through. I think the links to the other UHI studies are embedded in part 2.
Steven Mosher makes a number of claims. All he SHOULD be claiming is that the data he has selected to use does not show significant differences in trend between most stations. Taking it a step further and suggesting there is minimal UHI is simply not supported by the work they are doing. And as you pointed out, there are stations with high trends that must be getting averaged out by stations with low trends. This is the type of double talk that has so many reasonable people saying bad things about the Climate Community and the temperature records.
Dr. Spencer, using a different data set, differenced stations in close proximity to each other to determine if there were significant differences due to population size. He found that low-density-and-growing stations had higher trends than high-density-and-growing ones.
http://www.drroyspencer.com/2010/03/the-global-average-urban-heat-island-effect-in-2000-estimated-from-station-temperatures-and-population-density-data/
He used NOAA’s International Surface Hourly (ISH) temp data:
http://www7.ncdc.noaa.gov/CDO/cdo
and the Socioeconomic Data and Applications Center (SEDAC):
http://sedac.ciesin.columbia.edu/gpw/
for the population data.
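As I understand it, the pairing logic amounts to something like this (a rough sketch only, not Spencer's actual method; the 150 km cutoff is a placeholder):

import numpy as np

def paired_trend_diffs(lats, lons, trends, pop_density, max_km=150.0):
    """For each pair of nearby stations, record the trend difference,
    signed so a positive value means the denser station warms faster.
    A mean well above zero would suggest a density-related warm bias."""
    diffs = []
    n = len(lats)
    for i in range(n):
        for j in range(i + 1, n):
            # haversine great-circle distance in km
            dlat = np.radians(lats[j] - lats[i])
            dlon = np.radians(lons[j] - lons[i])
            a = (np.sin(dlat / 2) ** 2 + np.cos(np.radians(lats[i])) *
                 np.cos(np.radians(lats[j])) * np.sin(dlon / 2) ** 2)
            if 6371.0 * 2 * np.arcsin(np.sqrt(a)) <= max_km:
                hi, lo = (i, j) if pop_density[i] >= pop_density[j] else (j, i)
                diffs.append(trends[hi] - trends[lo])
    return np.array(diffs)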
They seem to be studying it in Tokyo, as per the paper I referenced. Lots of references in the back of the paper.
http://journals.ametsoc.org/doi/pdf/10.1175/JAM2441.1
The Hong Kong report is interesting.
I'll agree the papers studying UHI aren't making headlines the way scare stories about 6 m sea level rise do. They should, though.
Paris thinks waste heat from lots of small A/C units is heating up the city, and they are developing a centralized massive A/C system with less waste heat.
"Cooling systems installed in built-up areas generate heat island effects to which district cooling systems can provide a solution.
De Monck et al in 2010 produced a scenario simulating the air-conditioning needs in the city of Paris and the heat island phenomena. It presents the results of three scenarios:
– The current real needs (REAL)
– The current phenomenon if the air-conditioning needs are covered exclusively by dry air-conditioning systems (DRY AC)
– The heat island created by a doubling of the needs with the use of dry systems
The results for the least favourable scenario (DRY ACx2) predict local night temperature increases of up to 3°C. In heat wave periods this heat island effect could reach much higher values, with temperature increases of up to 8°C!
By centralizing the production and associated cooling systems, and by opting for varied cooling systems (wet cooling towers and river Seine water), use of the Climespace DCS can limit these effects."
http://www.districtenergyaward.org/download/awards2011/District_Cooling_France_Paris_2011.pdf
kuhnkat,
Spencer doesn't provide code and data.
1. ISH has not been quality controlled.
2. He used elevation from station metadata for lapse rate adjustments (elevation data is just as bad as the lat/lon data, which is 1000s of km off in some cases).
3. He used GPW population density. Also not the best or most accurate, and it depends upon having an ACCURATE station location, more accurate than exists for ISH.
So, non-reproducible results don't interest me. Not when Mann did them, not when anyone does them.
If you guys are really interested in UHI, start here:
http://www.urbanheatisland.info/
There is MUCH better evidence for your claims there. And look at the Bubble study if you want some real physics.
When you have a TESTABLE hypothesis, then we can talk.
Andrew,
If the radiosondes say that it has warmed, and satellite SST has said that it has warmed, and satellite tropospheric measurements have said that it has warmed, and ship and buoy measurements have said that it has warmed, and thermometers on land have said that it has warmed, and sea ice, land ice, tide gauges, satellite altimetry, and flora and fauna back up these claims, then it has warmed. People can doubt these things just as they can doubt any number of well-established facts, but let's not treat them as serious people. All of these observations are germane to the issue at hand, which is the absurd belief that warming is "mostly or all" the result of UHI. It is not. And if our doubter believes that a few years of lower SLR (but not flat or negative) somehow bolsters his claim, then I think 30 years of data is long enough to establish the contrary.
“X has not been studied in any reasonable way at this point” = “X has been studied to death by numerous unconnected people, all of whom have failed to confirm my a priori misconceptions, which entirely justifies my ignoring them”.
cce, satellites tell us nothing about whether it is warmer than the 1930s (as an example).
As late as 2007, the warmest 5 years in US history were 1934, 1998, 1921, 2006 and 1931.
If UHI has added even .1C to modern temperatures, then the top 5 would have been 1934, 1921, 1931, 1998 and 2006.
So UHI is important. VERY important.
It ain’t global if the USA’s warmest decade was the 1930s.
Bruce,
It very well may have been warm in the 1930s, especially in the Arctic. It may have been warmer in the MWP.
Neither of those has very much to do with the question of sensitivity, except that if it was warmer then the climate is likely MORE sensitive to CO2 than we think.
UHI doesn't matter that much to the current record. But more interestingly, the current record can't tell us how sensitive the climate is to a doubling of CO2. If we had 150 years of perfect measures, that would only give us insight into the transient climate response, not the equilibrium response.
Bruce: Nice, an industry proposal that cites a simulation study; you of course quote the worst scenario from that simulation, and the paper is apparently missing from the bibliography.
Further, raising the nighttime temps by 3C doesn't necessarily raise Tmin. Tmin happens in the early AM, so typically what you see is that A/C kicks in and runs in the early evening. Why? Because during the day the concrete acts as a heat sink (lower Tmax); then after the sun goes down the concrete gives up its heat, and of course your A/C kicks in. But Tmin happens later than that, in the early morning, when the excess heat dispersed in the early evening has mixed skyward. So you have to demonstrate an increase in Tmin.
Pretty simple. The sources I pointed you at are better than industry proposals. Suggest you dive into that data.
WRT the simulation cited…
No code, no data; Mannian science strikes again.
Mosher: "except that if it was warmer then the climate is likely MORE sensitive to CO2 than we think."
Or CO2 has nothing to do with it.
Or CO2 no longer has any effect.
Or it is naturally cooling and CO2 has just enough of an effect to avoid cooling.
Mosher: "Further, raising the nighttime temps by 3C doesn't necessarily raise Tmin."
I was pretty sure it means Tmin is 3C higher than it would be otherwise.
And I'm pretty sure A/C in office buildings runs all day long in the summer, even if it isn't on full blast. Remember, the outside air is now 3C warmer than it should be, meaning the A/C has to be on.
And 8C was possible during heat waves.
Mosher: “No code, no data, mannian science strikes again.”
Did you ask her?
http://www.cnrm-game.fr/spip.php?article244&lang=en
http://www.nasa.gov/pdf/505252main_demunck.pdf
NASA thinks the research was worth mentioning:
“Ramped up air conditioning usage may have even exacerbated the problem, other data presented at the meeting suggests. Cecile de Munck, of the French Centre for Meteorological Research of Meteo-France, conducted a series of modeling experiments that show excess heat expelled onto the streets because of increased air conditioner usage during heat waves can elevate outside street temperatures significantly.
“The finding raises the question: what can we do to design our cities in ways that will blunt the worst effects of heat islands?” said de Munck, who notes also that her research shows that some types of air conditioning exacerbate heat islands more than others.”
http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html
They also said:
“Summer land surface temperature of cities in the Northeast were an average of 7 °C to 9 °C (13°F to 16 °F) warmer than surrounding rural areas over a three year period, the new research shows. The complex phenomenon that drives up temperatures is called the urban heat island effect.”
cce (Comment #79307)-Wow, you didn't pay much attention to my pointing out that bringing up information irrelevant to the question doesn't help. Radiosondes are something extraneous to bring up, not necessary. Again, ice is not directly connected to temperature/heat. Tide gauges can tell us some things, but it's difficult because the land also moves around, up and down, unrelated to warming. We do have satellite altimeter data, but not very far back, so it's difficult to establish long-term rates precisely. It's certainly true, again, that there has been some warming in the last thirty years. But qualitative information is not particularly useful. How much does matter. Now, from what I can tell from your endlessly quoting information to "prove" there has been "some" warming, it seems to me you think the amount is not important. It does matter. It is crucial, as is all quantitative information. Without it, we have no science.
steven mosher (Comment #79329)-"Neither of those has very much to do with the question of sensitivity, except that if it was warmer then the climate is likely MORE sensitive to CO2 than we think."
This is not quite an accurate statement. Assuming we know the forcing responsible for the MWP and its magnitude, and assuming that the currently accepted (by the "community") temperature history (Mann?) is consistent with the currently accepted (by the "mainstream") sensitivity, then this statement could be true. But if we don't know the forcing that caused the MWP, then the MWP tells us no more about climate sensitivity (or "TCR") than does present warming. BTW, recent warming doesn't tell us the "TCR" either, because that would require accurate knowledge of the forcing, which we don't have.
For the armchair meteorologists who think it is of no consequence that the SAT and satellite temperature trends are similar, particularly GISS and UAH, consider this comment by Roy Spencer:
” if the satellite warming trends since 1979 are correct, then surface warming during the same time should be significantly less, because moist convection amplifies the warming with height. ”
Also, why did Santer et al. 08 go to such lengths to get a paper published that supposedly refuted Douglass et al. 07 and supported climate model predictions, only to be ferreted out by McIntyre & McKitrick for the deception that it was? This after ~18 months of obfuscation by the Team and blocking by so-called "peer review" journals.
The contortions one must go through to defend the indefensible…..
Bruce,
Temperatures in the 1930s, during the LIA, the MWP, or when the Earth was a molten ball of rock are irrelevant to your claims about UHI. We have measurements from instruments unaffected by urbanity and they show warming. Air conditioners have not warmed the troposphere or the sea surface or raised the sea level.
Andrew,
It has been suggested in this thread that "most or all" of the warming is due to UHI. If you think that warming measured by radiosondes is "extraneous" to that argument, then I don't know what I can tell you. The fact of the matter is that multiple lines of evidence show or imply warming. With respect to SLR, the measurements from satellites are consistent with SLR measured by tide gauges for the period of overlap. SLR prior to that is coincident with warming as measured by thermometers. If one dataset says that the world has warmed, another dataset must say that sea levels have risen due to thermal expansion. They do.
Maybe measured sea level rise is explained by plate tectonics. Maybe the increase of surface air temperature is explained by UHI. Maybe the increase in SST is explained by bucket adjustments. Maybe the decrease in global sea ice is explained by the wind. Maybe the decrease in ice balance of Greenland, Antarctica, and glaciers on every continent is explained by precipitation. Maybe the data is too short to tell us if the world is warming, but long enough to tell us that it is cooling. But I don’t think so.
cce: "We have measurements from instruments unaffected by urbanity and they show warming."
Yes they do. Up to 9C warming in cities.
"Summer land surface temperature of cities in the Northeast were an average of 7 °C to 9 °C (13°F to 16°F) warmer than surrounding rural areas over a three year period, the new research shows. The complex phenomenon that drives up temperatures is called the urban heat island effect."
“Air conditioners have not warmed the troposphere or the sea surface or raised the sea level.”
Troposphere should be warmer. It isn’t.
SST is a joke. Read about buckets. Engine intakes.
Sea level rise is decelerating at tide gauges.
http://buythetruth.wordpress.com/2009/07/13/missing-fingerprints/
http://climateaudit.org/2011/07/11/more-misrepresentations-from-realclimate/
http://wattsupwiththat.com/2011/03/28/bombshell-conclusion-new-peer-reviewed-analysis-worldwide-temperature-increase-has-not-produced-acceleration-of-global-sea-level-over-the-past-100-years/
Did you know that in the US (according to GISS, which is very biased) 1998 was 1% warmer than 1934?
1936 – Warmest Summer
1963 – Warmest Fall
1910 – Warmest Spring
2000 – Warmest Winter
Even a minuscule unaccounted-for UHI would put the 1930s as the warmest decade.
And the adjustments did it:
"Here is a graphic showing how, over time, GISS has reported US 48 temperatures for the current six warmest years since 1998."
http://i52.tinypic.com/14mfgr8.gif
— Bob Koss
Thanks for the posting, Zeke!
Here’s my first attempt to make a quick summary of the Historical Roots (1945-2011) of Climategate available.
http://dl.dropbox.com/u/10640850/20110722_Climategate_Roots.doc
I will try to post a pdf file later today.
What a strange, strange world we live in! Despite all of the politicians and their secret deals, I am pleased to report that
Today all is well,
Oliver K. Manuel
Mosher,
the TESTABLE hypothesis is to use actual rural data that has not been contaminated by anthropogenic waste heat and heat-absorbing materials.
You ask what proof I would accept. This is it. Until you can show us data that is reasonably uncontaminated, you are singing the same BS that we have heard for 20 years. Y'all have the CO2 from carefully controlled stations. We need the same for temperatures, not these cowboy stations.
Over at the Air Vent you claimed that Spencer's data was not quality controlled. Make up your mind: either he didn't provide data or it isn't quality controlled. I believe Spencer selected that data due to it being from more modern stations that do not NEED as many garbage adjustments from your "quality control experts". What good is that "quality control" when the data is contaminated and you have not done studies to identify how bad it is or isn't??
Your excuse is the same junk we have been fed before. The adjustments are necessary, except there have been few station surveys to prove that point. You have simply spread these one-size-fits-all adjustments over the whole population with no regard for the physical conditions.
Yup, it makes no difference what grouping of stations you use, because you will always have enough stations showing the .5C+ trend due to the adjustments. Y'all are playing video games with algorithms, code, and garbage data.