Population II

The last post brought up some interesting issues that will help guide the exploration of the population at stations over time. Rather than add to the first post, I'll do another post. If folks raise interesting comments and I can quickly put up clarifying graphics, I will append them to the post and announce that in the comments. First, a little roadmap to where I hope to go. I'm putting together a repository of some new code for working with Berkeley data, so this project is a way of testing that out and adding the functionality one needs to do an actual project. In general terms I want to test out some new metadata and some different ways of classifying stations and looking for UHI. In the end I want to see how we end up adjusting or not adjusting places that we would classify as urban. How well, if at all, does the adjustment process work on this problem? We of course have a global answer, but how does it look in detail, station by station? And are there ways to improve it?

From 100K feet the plan is to build a filter that can hopefully divide urban from rural, or provide some categories or a continuous measure of urbanity. In the past I've built filters that just combined everything: population density, built area, nightlights, airports. I've also experimented with "buffers" around these elements to ensure that rural stations are truly isolated. You can think of the filter as having two stages. In the first stage we try to classify sites that we have good observational reasons for suspecting of UHI. From observational studies we know that large cities with tall buildings and many people suffer the most UHI: the combination of tall buildings, built surfaces, and human activity creates UHI. As we move away from the city toward a more pristine environment, the causes of UHI (namely tall buildings, dense impervious surfaces, and human activity) diminish, and it should follow that UHI diminishes.

At the limit, as the number of buildings shrinks and the surface transformations become smaller, our concerns become microsite concerns. Or we could refer to three different scales: the city (meso) scale, the neighborhood scale, and finally the backyard or microscale. The approach I want to take is to first categorize the easy-to-categorize stations using population. After the first filtering we end up with two piles: one pile that is clearly urban and has all the causes of UHI present, and another pile that is necessarily going to be more arguable. The second filtering will be applied to these remaining stations. It will use some new high-resolution satellite products. Another way to look at this is that in the first pass we will build a pile that has all the known causes of UHI (tall buildings, many people, dense development), and in the second pile we will have far fewer people and little development. At least that's the plan. As always, if interesting sidelines come up I will take a look at them, but some things may get naturally diverted to the second stage.

The first post raised some interesting questions, namely about the number of stations, the number of stations over time, and what to do about stations that are close to urban cores or in the transition zone (TZ). A good example of a TZ station is De Bilt (http://onlinelibrary.wiley.com/doi/10.1002/joc.902/abstract). Until recently this was one of the few empirical studies of UHI at the outskirts of cities. A more recent study is here: http://onlinelibrary.wiley.com/doi/10.1002/qj.2836/pdf. More on those two studies toward the end.


Stations:

The stations used come from Berkeley Earth's dataset, which ingests data from 14 or so different source decks. Many of these sources contain duplicate stations, stations lacking metadata, or shorter series not collated in the usual inventories like GHCN-Monthly. I'll point out some of that as we look at the maps. For the most part, series like GISS and HADCRUT depend upon an anomaly period (1951-80, 1961-90), such that if a station doesn't have data during that period it isn't used. After merging our source decks we end up with ~43K different stations. This is prior to any "slicing". Many of the stations are shorter series; for example CRN, the gold standard in the US, starts around 2005, and other data products don't use this data. What that means is that stations come and go, such that when we look at them over time we will see that there is no time at which all 43K are present. Below I've collated all the stations that appear within a given 30-year period. Note, this doesn't mean they are all at least 30 years long. The "Pre" period is stations that exist before 1850. For reference, the current station count (May 2016) was ~19K.

[Figure: StationsPerPeriod, station counts by 30-year period]
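To make the counting rule concrete, here is a minimal sketch in Python. The inventory layout, column names, and period edges are illustrative assumptions for this post, not the actual Berkeley Earth schema or the repository code:

```python
import pandas as pd

# Hypothetical inventory: one row per merged station, with first and last
# year of data. Column names are assumptions, not the Berkeley schema.
stations = pd.DataFrame({
    "id":    ["s1", "s2", "s3"],
    "first": [1840, 1903, 2005],
    "last":  [1879, 2016, 2016],
})

periods = [("Pre", -9999, 1850), ("1850-1880", 1850, 1880),
           ("1880-1910", 1880, 1910), ("1910-1940", 1910, 1940),
           ("1940-1970", 1940, 1970), ("1970-2000", 1970, 2000),
           ("2000-2016", 2000, 2016)]

# A station "appears" in a period if its record overlaps it at all;
# it need not span the full 30 years.
for name, start, end in periods:
    n = ((stations["first"] < end) & (stations["last"] >= start)).sum()
    print(f"{name}: {n} stations")
```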

Geographically, the stations are distributed like this:

[Figures: station maps for 1850-1880, 1880-1910, 1910-1940, 1940-1970, 1970-2000, and 2000-2016]

With these 43K stations, the next step is to geolocate them in the Hyde 3.1 population density grids. As we discussed, the Hyde dataset goes back to the beginning of our data and comes in 5-arc-minute grids. For what follows we will only be looking at post-1850 data, although we can go back earlier if a relevant question comes up. Extracting the data left roughly 3K stations on the floor: they had no population data. Some of this is due to the Hyde dataset lacking population for Antarctica, and some of it is due to the fact that we also have data from buoys, oil platforms, small islands, atolls, and stationary ships. Some of it is due to location errors, where a coastal station has a latitude and longitude that falls in the water. For now I set these aside, so we are working with ~40K stations.
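For those following the code, the geolocation step is just index arithmetic on a global grid. A minimal sketch, assuming a Hyde layer has been loaded as a numpy array with row 0 at 90N (the random array below is a stand-in for a real file, and loading the actual Hyde files is left out):

```python
import numpy as np

# Point-in-grid lookup for a global 5-arc-minute grid (4320 x 2160 cells),
# the layout of the Hyde 3.1 population density files.
cell = 5.0 / 60.0                       # 5 arc minutes, in degrees
grid = np.random.rand(2160, 4320)       # stand-in for a real Hyde layer

def pop_density(lat, lon):
    row = int((90.0 - lat) / cell)      # rows run north to south
    col = int((lon + 180.0) / cell)     # columns run west to east
    row = min(max(row, 0), 2159)        # clamp stations on the grid edge
    col = min(max(col, 0), 4319)
    return grid[row, col]

print(pop_density(52.1, 5.18))          # roughly the De Bilt grid cell
```

In the real files, ocean and Antarctic cells carry a nodata value rather than a density, which is how the ~3K stations above fall out.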

After collating the population, I decided to recode it into bins for display. Since the log of population confused some folks, I figured a descriptive binning might aid the discussion. So I used a slightly modified version of the approach from RPA, which sets up the following categories (a quick sketch of the recoding follows the list):

Natural: < 25 per sq mile
Rural: 26 to 150
Exurban: 151 to 500
Sprawl: 501 to 2500
Dense: 2501 to 10000
Urban Core: 10K+
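A minimal sketch of that recoding, with the Urban Core threshold already lowered from 10K to 5K as described in the next paragraph; the bin edges come straight from the list above:

```python
import numpy as np

# Recode population density (persons per sq mile) into the modified RPA
# classes used in this post (Urban Core threshold lowered to 5K).
bins   = [0, 25, 150, 500, 2500, 5000]
labels = ["Natural", "Rural", "Exurban", "Sprawl", "Dense", "Urban Core"]

def pop_class(density):
    return labels[np.digitize(density, bins) - 1]

for d in [3, 90, 400, 1200, 4000, 60000]:
    print(d, "->", pop_class(d))
```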

Since there are so few urban core sites in the data, I lowered the threshold from 10K to 5K. To give you an idea of how the station locations change over time, I've sampled them at 1850 and at 2005. This doesn't mean these locations were all populated with stations in 1850; rather, it just shows the population "class" of the grid cells that over time will have stations in them. For example, in 1850, 35K of the locations have populations less than 25 people per sq mile (~10 people per sq km). Over time those locations get developed and transition to other classes. In 2005, 25K of the locations are still "Natural" by this classification scheme. Note that doesn't mean humans haven't altered that landscape; it suggests, however, that they haven't turned it into NYC. Also, in 1850 there are a small number of locations that already have dense populations ("Sprawl" class).

[Figure: GridPopulationClass, population class of station grid cells in 1850 and 2005]

At this point we could simply divide our stations into two piles: one pile of Urban Core, Dense, and Sprawl, and a second pile of Exurban, Rural, and Natural, and then start to look at higher-resolution datasets for other changes to the surface. Conceptually, one pile of "urban" sites, where we know most of the causes of UHI are present, and a second pile where the changes to the surface are much less dramatic; let's call it VHI (village heat island) and potentially microsite (which can happen anywhere).

Or we could make some refinements at the population screen. One concern is that rural and natural areas can occur adjacent to urban areas; UHI doesn't know about city borders or grid cell borders. There are a couple of approaches to handle this. One would be to start with all known cities and build buffers around the cities. The other approach is to start with the locations and determine if they are adjacent or close to any urban areas. The question, of course, is how big should we make these buffers? And do we have any observational evidence that helps us set the buffer? Because the Hyde data is 5 minutes, there is already a buffer of sorts built in, but it is at most 4-8 km. Modelling of UHI can provide some guidance, and a couple of relevant studies give empirical guidance: the De Bilt study and the recent BUCL study.

In the Birmingham (BUCL) study, a dense network of stations was studied for 20 months to determine how UHI can spread from the urban core to surrounding areas. Birmingham is the second largest city in England, with a population of 1.1M; there are ~250 cities in the world the same size or larger. The study is listed below. One important takeaway: for a city of this size, the data indicated that "rural" sites 12 km away could be affected.

Below is a map of Birmingham population and three station locations. Two stations are in the urban core and a third is located outside the core.

[Figure: Birmingham2, Birmingham population and three station locations]

So we can add a condition to our filter and "cull out" similar situations. To do this culling, a buffer was created around every station. The population density of Birmingham was used as a guide, and the population classes were then recoded to indicate whether a station was close to a dense population zone. The BUCL study seems to indicate that in the worst-case conditions (wind dependent), rural sites 12 km away could be affected, so to be on the safe side I used a 20 km buffer. Below, stations have been recoded (_U) to indicate whether they are within 20 km of an urban core that is as dense as or denser than Birmingham.


[Figure: Adjacent2, stations recoded (_U) by proximity to Birmingham-scale cores]

For the "Natural" sites (~25K total), 66 locations were within 20 km of an urban core. Of the roughly 8K Rural sites, ~200 were close to urban cores. Of the roughly 4K Exurban locations, around 10% were within 20 km of urban cores. And lastly, roughly 1/3 of the 2500 "Sprawl" sites were close to cores the size of Birmingham.
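The culling itself is just a nearest-distance test against the dense cells. A minimal sketch, with made-up coordinates standing in for the real station list and the Birmingham-density cells:

```python
import numpy as np

# Flag any station whose distance to the nearest Birmingham-density cell
# (or urban core) is under the 20 km buffer. Coordinates are illustrative.
R = 6371.0  # mean Earth radius, km

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))

cores = np.array([[52.48, -1.90]])        # e.g. Birmingham's core cell
stations = np.array([[52.45, -1.74],      # ~11 km out: recoded _U
                     [52.10, -2.90]])     # ~80 km out: left alone

for lat, lon in stations:
    d = haversine_km(lat, lon, cores[:, 0], cores[:, 1]).min()
    print(f"({lat}, {lon}) nearest core {d:.1f} km", "_U" if d < 20.0 else "")
```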

The other empirical study is the De Bilt study. Below is the population grid for the De Bilt area.

[Figure: DeBiltPop, population grid for the De Bilt area]

The study is listed below. The population density here is somewhat lower: Utrecht has on the order of 260K people and Zeist roughly 60K. They indicate that over 100 years, the UHI at De Bilt amounts to 0.1C ± 0.06C, or roughly 10% of its trend over time. (For reference, our adjustment code lowers the trend at this site.) Zeist, with its population of 60K, is roughly 4-5 km from the site. The next step will be to see what using these figures as a further filter does. As a quick look at the issue, I took the locations and populations of 66,000 cities and villages around the world, reduced that list to the cities with a population of 50K or more, and then calculated distances from every site to find which sites are close to cities of this size. In the next step I'll apply this filter as well. The following gives you an idea of how many sites are close to smaller population centers; four distance classes are used for display purposes only (0-20km, 20-50km, 50-100km, 100+km). For example, there are ~4000 locations that are "Natural" and located 20-50 km from any city of 50K or more.

[Figure: Distance&PopClass, station counts by distance class and population class]
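The same distance machinery handles this screen; only the target list changes. A sketch, with a two-city placeholder standing in for the ~66,000-place gazetteer:

```python
import numpy as np

R = 6371.0  # mean Earth radius, km

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))

# Placeholder gazetteer rows: (lat, lon, population).
cities = np.array([[52.09, 5.12, 340000.0],    # e.g. Utrecht
                   [50.85, 4.35, 1100000.0]])  # e.g. Brussels
big = cities[cities[:, 2] >= 50000]            # keep cities of 50K or more

bins, names = [0, 20, 50, 100], ["0-20km", "20-50km", "50-100km", "100+km"]
d = haversine_km(52.10, 5.18, big[:, 0], big[:, 1]).min()  # roughly De Bilt
print(f"nearest 50K+ city: {d:.1f} km -> {names[np.digitize(d, bins) - 1]}")
```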

Discussion.

I'd like to keep the discussion focused on the issues of population and leave the discussion of adjustments for later.


Reading suggestions

Stewart ID. 2011. A systematic review and scientific critique of methodology in modern urban heat island literature. International Journal of Climatology 31: 200-217, doi:10.1002/joc.2141.

Brandsma et al. 2003 (the De Bilt study): http://onlinelibrary.wiley.com/doi/10.1002/joc.902/abstract

Bassett et al. 2016 (the BUCL Birmingham study): http://onlinelibrary.wiley.com/doi/10.1002/qj.2836/pdf

523 thoughts on “Population II”

  1. H.
    Continuity can't be asserted. It has to be demonstrated. The argument for continuity rests on several assumptions. I'll do a post on that in the future. Basically, shorter records have a lower probability of having discontinuities in observation practice or changes to the environment.

    All part of the myth that length matters

  2. Urban sites may have a long record, but they are also associated with "frequent" moves. A site associated with an agricultural research establishment would, as a guess, probably have a) a long continuous record, b) good metadata, and c) infrequent relocations.

    Those might be an interesting backbone.

  3. "For a city of this size, the data indicated that "rural" sites 12 km away could be affected."

    Many bodies of water have even larger effects. And many large cities are near large bodies of water. Wouldn't one have to control for wind direction to make a meaningful comparison? I.e., Chicago in a summer with breezes predominantly off Lake Michigan will be cooler than in a summer with breezes predominantly from the south or west. Yet 10 or 20 miles inland the lake effect may not be noticeable.

    If atmospheric circulation patterns change, then a warmer/cooler comparison may well be confounded by changing wind patterns. The question then becomes: why are we looking at this data?

  4. Urban sites may have a long record, but they are also associated with "frequent" moves. A site associated with an agricultural research establishment would, as a guess, probably have a) a long continuous record, b) good metadata, and c) infrequent relocations.
    Those might be an interesting backbone.

    Yes. As I plow through the stuff I always keep my eyes open for good backbones. Ideally you want good length and no changes to the surface. The tradeoff between long stations at places that change and short stations at places that don't change is tied to

    temporal variability in the field and spatial variability in the field. I don't think it's an easy problem to solve analytically
    (like Shen's work on the optimal number of stations), but my old math bones say you might be able to express something about it.

    and longish stations are needed, I would think, to do any kind of credible adjusting.

  5. yes kevin, if you look at the BUCL data, given certain wind directions even the UHI disappeared, so I focused on worst case.

    Worst case: given a high UHI day, given the right wind conditions,
    a 4.5C effect at the CORE reduces to a 0.5C effect 12 km away
    over rural land. Beyond 12 km there was zero correlation.

  6. This post writes:

    In the Birmingham (BUCL) study, a dense network of stations was studied for 20 months to determine how UHI can spread from the urban core to surrounding areas. Birmingham is the second largest city in England, with a population of 1.1M; there are ~250 cities in the world the same size or larger. The study is listed below. One important takeaway: for a city of this size, the data indicated that "rural" sites 12 km away could be affected.

    The BUCL study seems to indicate that in the worst-case conditions (wind dependent), rural sites 12 km away could be affected.

    But you’ll note it does not quote the study at any point. Similarly, in a comment the post’s author writes:

    Worst case: given a high UHI day, given the right wind conditions,
    a 4.5C effect at the CORE reduces to a 0.5C effect 12 km away
    over rural land. Beyond 12 km there was zero correlation.

    Again, no quotation was given to provide the basis for this claim. I point this out because the study being cited clearly contradicts what this author says. The study separated its data into three groups:

    Temperature data are split equally into three wind-speed groups, WG1 [...], WG2 [...] and WG3 (>3 m s−1). Only night-time observations (based on daily sunset and sunrise times) with low cloud cover (<4/8 oktas) are included, to focus on conditions most favourable for strong UHI development.

    With some days' data being screened out due to cloud cover. For these three groups, the study found:

    For WG1, correlations are strongest at 4–9 km from the stations, i.e. UHA advection from distant sources will be diminished. For WG2, the highest correlation is at 7–9 km and it remains high until 10–12 km. For WG3, the highest correlation is shifted to 13–15 km, i.e. increased wind speed transports heat further. As such, each wind speed group has its own characteristic UHA distance: the higher the wind speed, the larger the distance. With peak UHA observed in WG2, the UHA distance analysis shows that the downwind UCL warming from the city in this group will be experienced in rural areas up to 12 km away. These distances are calculated based on the mean UHA pattern, therefore on an individual night these distances would be variable.

    This of course makes discussions of "worst case scenario" rather misleading. If one defines the "worst case scenario" as the strongest effect possible, averaged over nights with low cloud cover, then yes, perhaps 12 kilometers is the maximum distance. However, if one uses a different definition of the "worst case scenario," such as perhaps the greatest distance at which there would be a significant effect, this study clearly indicates distances greater than 12 kilometers on average were observed.

    And that’s only for average patterns, not for the worst case scenario that could be observed for a given day. Indeed, it’s not just daily patterns that could be greater. As the study says:

    Additionally, because the UHA components presented in Figures 5 and 6 are temporally averaged over 20 months, it is suggested that UHA could be higher under certain meteorological conditions, for example a heatwave (Heaviside et al., 2015).

    The study suggests the effect could be greater than the average result during heatwaves, which I would suggest means the “worst case scenario” cannot be fairly described by merely looking at the averages. However, regardless of what one considers the “worst case scenario,” the simple reality is this post’s author is wrong when he writes:

    Beyond 12 km there was zero correlation.

    The quote I provided from the study above specifically talks about one of its data groups (WG3) having its greatest correlation at 12-15 kilometers, meaning it simply cannot be true there is no correlation at distances greater than 12 kilometers. Even if we only look at the data group with the strongest effect (WG2), we find the greatest correlation at a distance of 9-12 kilometers, but a significant correlation still exists at greater distances. This is clearly shown in the study’s Figure 7b:

    http://www.hi-izuru.org/wp_blog/wp-content/uploads/2016/07/7_18_Population_Figure7.png

    As this figure's first panel shows, there is no correlation at distances greater than 12 kilometers between normalized UHI and the urban fraction (Ufrac). While that might be a result worth discussing, it tells us nothing about how heat created by UHI is advected to other locations.

    To examine that issue, which is the subject of these remarks, we need to look at Urban Heat Advection (UHA). That is what the second panel shows. It indicates one of this study’s data groups (WG2) correlates at a distance of 12-15 kilometers “at the 0.05 level,” a drop in both strength and significance from its results for the 9-12 kilometer distance. Another data group (WG3) correlates at a distance of 12-15 kilometers “at the 0.01 level,” with no drop off from shorter distances. In fact, its correlation strength is greater at that distance than any previous one.

    I have no explanation for why the results of this study were misrepresented in such an obvious way, but I think it would be fair to suggest this post’s author did not do a good job of reading this paper. Perhaps he should, as he has told others:

    read more. comment less and make fewer mistakes.

  7. Brandon
    thanks for the information on SUHI. There was some confusion in the comments about surface and suburban.
    If you need cheering up, look at the end of comment 149522: "and longish stations are needed."
    And compare to 145912's risqué end comment: "all part of the myth that length matters."
    Surely some mistake?

  8. Liver improved.
    Time for sensible helpful comments and some arguments about station numbers.
    Steven,
    Though you disagreed with McKitrick and his attempts to fill in a database with so many conflicting variables, you could use the idea on the Arctic in reverse.
    Just bin it as not having any UHI effect at all.
    Same for the Antarctic.
    True there are a number of stations, 89 you quoted in the Antarctic, but no UHI.
    Microclimate effects?
    Buckets.
    But as you said we are discussing UHI.
    You can safely wipe out all those stations with simple population density and height of buildings.
    Agreed?

  9. Secondly there is a way to estimate UHI at all urban sites simply and effectively from BEST’s climate factor.
    As you said at JC's (28/6/2016) and elsewhere, you can estimate any site's temperature by its latitude and elevation.
    These factors dictate the temperature at all the sites you use.
    Hence you can dig out what the actual temperature for any day and time of the year for every urban site has been and should be.
    Then you simply compare it to the actual temperature observations at the urban site.
    Hey Presto instant UHI.
    Now before we get all quibbly like Eli: yes, there are confounding features like adjacent hills, seas, and lakes, and the error range, but these are all factored in, right?
    A Quibble of my own.
    You must have already done this and know the answers.
    When you state the UHI is not demonstrable, this can be from it either not existing or from using the wrong c factor (latitude and elevation) for that urban site.
    I.e., the basic strict c factor for each site shows a bias.
    There may also be some factor in there presuming a certain amount of heat accumulation over the last few hundred years and more recently with CO2 increase?

  10. 43K stations? 14 data sets?
    Berkeley Earth itself said 16 data sets (for accuracy) and 39K archived stations.
    The current count is said to be 19,000 stations.
    If this were true, why not say we are using 19,000 stations?
    Much more respect.

    But then the issues…..The Berkeley Earth method splits the records at such points, creating essentially two records from one. This procedure, referred to as the scalpel, was completely automated to reduce human bias. The 36,866 records were split, on average, 3.9 times each to create 179,928 stations

    So there are really 179,928 stations. So why state 43k?
    Of which only 19000 are active at the current time
    BE counts all this as raw data unless notified that the data has been adjusted, but
    BE uses GHCN, which includes USHCN, which has 1,218 purported stations but fewer than 700 active stations, the rest being infilled with data by GHCN.
    The decline in the number of active USHCN stations has been well documented at Moyhu by Nick Stokes.
    GHCN only has 2,475 active (real) stations (see SM's second graph at JC, 29/7/2012, "a new release from BE surface temperatures"), but BE's figures purport 3 times this number, 7,500.
    Removing fake GHCN stations the number of real stations drops to 11,975.
    But are they real, are they reliable, and are they real time?
    GHCN uses the most up to date systems it can and still suffers station drop out and late non daily results from other stations.
    If we use GHCN as the measure possibly only 6000 stations are real and capable of giving data for UHI.

  11. "There are 27,000 stations taken from GHCN daily, which is available in a form prior to the application of any QA procedures."
    29/7/2012 paper.
    How can GHCN, a mere subset of BE, provide more stations than the whole of BE's active stations, 19,000?

    If the statement was correct, does this imply use of a Gergis filter?
    In fact this statement on its own must be wrong, for BEST claims to use all available data, and no amount of filtering would reduce this number of stations.

  12. Angech, I support your point that if transparency is being claimed, it makes sense that all the cards be laid on the table so no distortion or cherry picking is attempted. Regarding this point on stations, wouldn't it be nice if a non-profit organization chartered to inform on this topic had all the station history organized and updated on a web page known as a reliable reference? I think it would. (hint 🙂
    .
    Brandon, Steven is regretting his advice to us to read more. Your comment was powerful. It makes sense that UHI from large urban centers could be blown to their neighboring towns. I would think the case where atmospheric inversions keep the heat from rising and dissipating as it travels would be especially bad.
    .
    “Longish stations”
    Remembering that the point of the whole exercise is to plot GMST from pre-industrial to present, one needs to voice why there is an advantage to having a single 166-yr continuous station over 166 individual 1-yr stations, one for each year. The key is in the scientific control. With the single station, given accurate metadata on all the factors surrounding the instrumentation, siting, and time of observation, it is difficult enough to correct for UHI influences. But being able to resolve the effect of all the possible influences biasing temperature at 166 locations would be impossible. One could attempt to use matrices to solve all of the variables statistically, but calculating the number of stations required to do that is in itself a task.
    .
    To make such an analysis transparent and sensible to public audit is even harder. Thank you, Steven, for attempting this. And we will continue our appreciation for understanding your challenge if you continue to understand ours.

  13. DeWitt Payne:

    If the wind is blowing, you can’t have an inversion.

    Uh… no. Wind can certainly disrupt a temperature inversion or prevent one from forming, but it is not some binary thing where wind blowing = no inversion. There are a ton of factors involved, and temperature inversions are commonly observed along with measurable wind speeds. In fact, some areas find temperature inversions tend to coincide with greater wind speeds (to a point, of course), with calm winds tending not to produce them.

    Not only is it wrong to claim you can't have a temperature inversion if the wind is blowing, it is possible to observe temperature inversions with average wind speeds greater than 5 m/s. The highest wind-speed group in the study this post misrepresents, which I discuss above, is only 3+ m/s. Given that study finds urban heat advection can displace warmth by 12+ kilometers given 3+ m/s wind, Ron Graf's remark is a reasonable thought.

    I can't say I know enough about the Birmingham area to say much about specifics of temperature inversions in and around it, and I am sure temperature inversions cannot exist there given wind speeds of a certain magnitude (and possibly direction), so I'm not trying to draw quantitative conclusions here. I'm just trying to stress that the idea you cannot have temperature inversions when the wind is blowing is just nonsense.

  14. Ron Graf:

    There is an overwhelmingly powerful reason for preferring a 166-year station record to 166 one-year records in the same vicinity that is often overlooked: measurement location and datum level are then controlled variables. Otherwise, one has to rely upon entirely unproven assumptions of spatial homogeneity and temporal stationarity (in effect, ergodicity) in order to stitch together a sufficiently long time-history of temperature variation in an area to establish any secular trend. This is what ultimately makes long, vetted records, along with representative geographic coverage, an absolute necessity for rigorous scientific work.

    Sadly, none of the GAT index makers manifest any concept of professional vetting of station records via spectral techniques. And the claim of thousands of useful station records worldwide overlooks the fact that there are only hundreds of century-long, virtually intact, UHI-uncorrupted records available, most of them clustered in the USA, NW Europe, and Australia. Aside from satellite records, all the GAT indices are largely computational exercises in prejudicial speculation.

  15. Steven,

    Do we agree that the relevance of population/development classifications for our discussion is strictly limited to what each particular station's classification was at particular points in time, especially at the time of its start and decommission?

  16. Brandon S.

    A temperature inversion doesn’t trap hot air. You can’t have an inversion unless at least the potential temperature increases with altitude. That means the air above the inversion layer is hotter than the air below it if brought to the same pressure and may even have a higher actual temperature.

  17. DeWitt Payne, I have no idea what the point of your comment is. It doesn’t contradict anything I said, and it doesn’t defend your claim which I highlighted as bogus.

    If you don’t want to admit what you said was completely wrong, that’s your call. It’d just be better if you chose not to say anything then.

  18. DeWitt, I think I have pointed out to you before that inversions do not need the higher air to be warmer, only cooler by less than the lapse rate. If I recall correctly, you were debunking a comment speculating on the GHG trapped in the air above a city, which I thought was a valid point then and still think could add to the UHI sprawl effect.

  19. Steven, in BEST’s Wickham et al (2013) it states:

    The result of the [Hansen] adjustment on their [GISS] global average is a reduction of about 0.01C in warming over the period 1900-2009.

    But going to the referenced Hansen (2001, 2010) papers indicates the adjustment to be 0.1C per century. Did the application of the Hansen GISS UHI adjustment result in adjusting only 10% of the Hansen-diagnosed UHI, or is the 0.01C in Wickham a typo?
    .
    Do you believe that NOAA’s automated pairwise comparison procedure is sufficient for the task of removing UHI bias from GHCN-M?

  20. Regarding inversions and wind, consider the types of inversions:
    Radiative inversions: caused by intense cooling during winter. Very intense and shallow. The intensity of radiative inversions means vertical motion is intensely suppressed, and the shallowness of the layer near the surface means calm winds.
    .
    Frontal inversions: caused by a cold(er) air mass undercutting a warmer air mass. This is a deeper inversion, typically accompanied by windy conditions.
    .
    Subsidence inversions: caused by descending air above which has not reached the surface. This suppresses vertical motion, which tends to reduce winds, but the depth is much greater than that of the radiative inversion, so winds are not necessarily calm.
    .
    Many other types: stratosphere, thermosphere, turbulence inversion, maritime inversion, et al.
    .
    .
    Regarding inversions and trapping heat:
    .
    Inversions can morph. Subsidence also tends to take place behind a frontal system. And one of the great ironies is that during the warmer part of the year, in latitudes experiencing daily net positive radiance, a cold front passing means clear skies and thus an increase in the heating rate. So a cold front passing may mean warmer temperatures.
    .
    DeWitt's point that potential temperature increases up to the top of the inversion is correct. However, inversions do 'trap heat' precisely because of this point. While the inversion is in place, vertical mixing past the top of the inversion is suppressed, so within the inversion layer, sensible heat accumulates. This can occur until the limit of stability imposed by the inversion is overcome, at which point mixing across the former inversion can take place and transfer surface heat to the atmosphere above (usually in the form of a thunderstorm). Usually, another air mass displaces the eroded anti-cyclone before this occurs.
    .
    I’ve been considering this a bit lately.
    .
    Extreme heat in the US ( or at least CONUS ) has not increased, in spite of either AGW or UHI.
    .
    Summertime droughts, which have become less frequent, certainly have a lot to do with that.
    .
    But extreme heat tends to be associated with heat waves, which are, amazingly, still ill-defined. Nevertheless, heat waves are often associated with a strong inversion. Typically, a summer cold front's associated anti-cyclone stalls and enables a subsequent subsidence inversion.
    .
    AGW works because increased radiative forcing at the tropopause implies warming of the troposphere as a whole. But during an inversion, the layer beneath is decoupled from the mixing of heat in the troposphere at large. There is still radiative forcing from 2x CO2, but the relevant height to consider during an inversion is no longer the tropopause but the layer of the inversion. This number is still positive, but is less than the 3.7 W/m^2 per doubling.
    .
    So the rate of warming of extreme heat events might be less than the rate of warming for the average.
    .
    There are many other factors, of course, including secular changes in the large scale circulation, changes in latent heat and potential temperature. But what happens beneath the inversions is of interest, especially with today’s migrating heat wave.

  21. this may well be a stupid question, but how great an effect have energy use changes had in urban areas over the temperature record? are they even quantifiable?

    these days everyone in the colder parts of the planet has central heating, at least one car per family, and fridges and freezers for food storage as opposed to larders, and in the warmer areas similar changes have occurred in terms of air conditioning to keep people cool and a similar shift toward electrical appliances.

    for me the uhi effect should have increased in the large cities over time due to the increase in energy use. however i am perfectly prepared to accept it may be by an insignificant amount.

    i understand the number of cities/towns with populations over 50,000 has increased significantly in the last 100 years, so theoretically there should be a greater suburban/rural area seeing the effects of uhi downwind?

    i would like to echo the sentiment of others by thanking steve mosher for taking the time to make this series of posts. i am beginning to get some understanding of the mammoth task undertaken by best when they set out to try and decipher the data in all the forms it existed.
    it will certainly make this member of the peanut gallery comment in a more considerate fashion in the future.

  22. how great an effect have energy use changes had in urban areas over the temperature record ? are they even quantifiable ?
    .
    Oke considered this a long time ago, and remarkably, many cities had energy use on the order of 100 W/m^2!
    .
    Since by current reckoning 100 W/m^2 would mean much more warming than is observed, something else must be happening.
    .
    That something else is the wind blowing, which limits UHIE, dispersing heat to the broader atmosphere.
    .
    Globally, again IIRC, the number comes out to around 0.04W/m^2, which is less than noise and less than the heat escaping from the lithosphere.

  23. There will probably always be chunks of manure in all the analyses, UHI being just one.
    .
    But land is a small portion of earth’s surface, and measured land surface a portion of that, and urban measured land surface a portion of that.
    .
    Whatever the residual contribution of UHIE to temperature trends, the comparison with MSU, RAOB, and SST means that UHIE is not the sole or even the major cause of warming in the global mean.

  24. Turbulent Eddie:

    Whatever the residual contribution of UHIE to temperature trends, the comparison with MSU, RAOB, and SST means that UHIE is not the sole or even the major cause of warming in the global mean.

    Yes, I agree with this. I believe there are in fact multiple arguments that could be made against UHIE being a major factor in the measured rate of warming. Any of these would be sufficient by itself to reject the hypothesis.

    IMO, together they should be enough to allow us to give the "UHIE is responsible for a significant component of observed warming" hypothesis a tasteful burial, and move on. Of course some won't, but that shouldn't prevent the rest of us from moving on to more fruitful topics.

    Nice summary of inversions by the way.

  25. There's a glaring conceptual disconnect between what is actually happening on a GLOBAL basis as a result of UHI effects and what is perceived to be happening globally as a result of GAT projections based on a strongly urban LOCAL station database worldwide. The ~0.7C/century discrepancy seen in USA urban temps during the 20th century is not something that can be hand-waved aside as a minor effect in GAT indices just because there's no comparably dense network of non-urban stations worldwide to reveal the actual UHI discrepancy. Nor can the far more egregiously flawed SST record, which has troubled oceanographers over the past century, be credibly invoked as confirming the UHI-corrupted GAT land record.

    No doubt, there's been a sharp upward swing in global temperatures since the 1976 climate shift, evident particularly in marine data and confirmed worldwide by tropospheric satellite data since 1979. But that is simply far too short a record to draw credible distinctions between multi-decadal (and longer) natural climate oscillations and UHI-induced trends, let alone supposed "CO2 forcing." The cross-spectral results obtained from available longer, vetted records usually show low-frequency incoherence between urban and neighboring non-urban records and a lagging phase relationship for CO2.

    What richly deserves to be buried is the notion that climate science, as currently practiced, has a scientifically credible basis for its grandiose projections.

  26. There’s a glaring conceptual disconnect…

    Lot of that going around in science these days.

    Andrew

  27. Carrick: “… but that shouldn’t prevent the rest of us from moving on to more fruitful topics…”
    .
    Exactly what fruitful topics would those be? The ocean record? That has a much murkier database, and it relies on the land record as its cornerstone of support. Millennial-scale variability? Murky speculation. If you have a strong street lamp, search there first. The information to be gained is as much about the modus operandi of the investigation as about what was missed or found.
    .
    UHIE was known even before Oke (1972). Sixteen years later Karl looks at it and says yep, but it's only 0.06C because 85% of the stations are rural now. Peterson (1997) finds zero, nada. Hansen (2001) finds 0.1C/century but then in practice sneaks in 50% negative adjustments to offset the positive ones. Today HadCRUT makes no UHI adjustment, but millions are still spent on analysis to see if it is affecting the historical record. Do I see a pattern? Yes I do.

  28. Ron Graf, as far as I’m concerned UHIE is, if not dead, mortally wounded. So it’s as interesting as discussing what the pavement looks like under a streetlamp.

    There are too many lines of inquiry which lead to the same seemingly inevitable conclusion that UHIE just isn’t as big an issue as some people would like it to be, at least for global temperature trend (land only or land & ocean).

    There are undoubtedly biases associated with UHI on the regional scale for a fixed site. However, when you fold this into the global system, where stations move over time from urban to more rural ones, I don't think you should anticipate a constant positive bias.

    And, as I've said before, if you want to claim that UHIE is even an important (rather than dominant) effect, you still have to replicate the observed geographic influence on warming trend using a model of UHIE.

    The problem you run into is that UHIE should be concentrated around 30°N, whereas the real warming is most extreme near the North Pole. That’s a non-starter as far as I can see.

    I do think there are plenty of interesting problems to look at in climate science (how to: accurately measure the climate sensitivity; predict future warming; more accurately reconstruct the surface temperature field from the available data; etc.). So I don't think it's a good argument that "this is all we've got".

  29. A "global system, where stations move over time from urban to more rural ones" is an article of faith among those unacquainted with the actual history of stations worldwide. And only those unacquainted with the multi-factor physics of UHI would conjecture that it "should be concentrated around 30°N," while missing the basic fact that the same change in thermal energy produces greater temperature changes in colder climates.

    Enough of bald conjectures by those who apparently have never taken a scientific thermometer into the field!

  30. sky,

    You should be a little more cautious about what you write; Carrick does lots of field research.

  31. Yup, and installed met stations. Studied bias effects. Constructed my own aspirated thermometers, etc.

    And I do metrology (science of measurement) too, from my Ph.D. thesis on.

    Doesn’t mean I’m right here, but what Sky is doing is a form of appeal to authority. As far as I know, he’s never taken data himself, and his logic is “neither has Carrick”, making his argument the superior one. That’s pretty silly.

    But this doesn’t make much sense:

    while missing the basic fact that the same change in thermal energy produces greater temperature changes in colder climates.

    Population is still concentrated near 30°N and UHIE is supposed to be an urban effect that influences the estimate of temperature trend.

    The idea that it's all, or even that UHIE constitutes a significant proportion of the warming trend, is inconsistent with the fact that zonal bands that have almost no population have about 5x the rate of observed warming as zonal bands that are heavily populated.

    If sky or others want to propose a physically based numerical model and can show they can replicate the observed warming patterns, that would at least move the theory out of the intensive care unit. Right now it’s all hand waving and empty claims of authority as far as I can tell.

  32. Carrick, I never meant to claim that UHI is the main force behind the observed warming. My current best guess would be land use + UHI + SUHI + microsite = 0.2-0.3C/century.
    .
    As far as your 30°N argument, I do not understand why one forcing effect cares about another. We could have global cooling and still have that downtrend muted by the upward UHI.
    .
    Carrick:

    So I don't think it's a good argument that "this is all we've got".

    .
    The land record is really the foundation of the observed warming. If it does not hold up to scrutiny then nothing else does. UHI is a real thing. Everyone admits that. But somehow we are supposed to believe it leaves no trace in the record.
    .
    I am trying to untangle variables, looking to untangle the easiest or clearest ones first. If UHIE is unimportant to observations, why take the stations out of the cities and into parks and airports? And, after doing this, does it solve the contamination of the record?
    .
    Why did Peterson find no UHI in the record? His plot of rural stations matched exactly that of the whole database.
    .
    Why did Hansen find 0.1C/century UHI but quietly make adjustments of both positive and negative UHI that cancel each other?
    .
    Why does Steven Mosher say a negative UHI has to be the "jackalope [a Scooby-Doo episode] of climate science" in 2008? Why is he today kind enough to take the time to show us the breakdown of classification of existing stations, but not make it clear what is happening to population growth around these stations (on average) over time, which is the relevant question?
    .
    I feel SM was “those meddling kids,” unmasking the inappropriate enterprises in 2008, but no longer.
    .
    I think we can get to the bottom of UHIE. Will you help us Carrick?

  33. JD, nothing new: the Vasa, a Swedish warship of the highest specifications and cost in the 1600s, sank on its maiden voyage because of poor design.

  34. I am prepared to accept that the temperature charts are correct today.
    That is, the measuring devices are working accurately, and the land global temperature today is as accurate as it can be when pasted together.
    There is a discrepancy with the satellites in comparing their anomaly changes.
    No one asks what the anomalies are being worked off, i.e. what the daily real estimated temperatures for the stations used are.
    This is constantly changing as the bases are added to by new measurements, and by changes in station location and existence and changes in satellites.
    Hence the problem with UHI on one level.
    The urban measurements are used in the global land measurements; they are not adjusted out.
    As Mosher says, if you are measuring an anomaly, what does it matter if you are on an airport tarmac or next to a barbecue? As long as the measurements are consistent, the anomaly change should be the same.
    Yes, you can detect 2C warming in Tokyo; it is not UHI, it is the natural variation of the earth's temperature at that site with time.
    And, sorry Ron, he and Carrick might be right on the overall effect.
    An average 1C rise at urban sites over a hundred years, with 27% urban sites, would only be 0.27C. If you use the claim of only 1% of 19,000 sites, it drops to 0.01C, strangely about what they claim.
    The more little spliced-in snippet rural stations you manufacture, the less UHI to talk about.
    The real problems are more the terrible adjustment, homogenization, and projection issues; note BEST does not homogenise under that name.
    It uses homogenization records from other groups and applies its own process of weighted adjustment.
    (If a station is clouded over, or has one of TE's cold inversions come in for a day, but the other stations around miss out and it is 1.55C out of their average, it gets up to a 26-times increase in its anomaly to bring it into the other stations' range.) But NB: no homogenization.
    The end result is twofold: the hot cities of today as used bias the temps today upwards, and the bias from these records goes into adjusting the past lower, giving a spurious anomaly increase.
    Steven mentioned tests he could do.
    An obvious one would be checking the spread of anomalies in neighbouring records daily.
    The effect of the bias I am suggesting would be smooth changes across countries, especially where severe weather patterns should be seen.
    It would explain why extreme heat days are less common in the US: the records are blurred down, apart from the headline one.

  35. Angech: "Steven mentioned tests he could do."
    .
    Karl (1988) did pairwise tests using the USA record to produce the chart that Frank Lansner shared on the first post on population.
    .
    Brussels saw a UHIE trend of 1.7C/century in the 50-year Hamdi (2011) study. Tokyo saw 2C/century. These studies assumed, as did Karl's, that rural stations are unaffected by humanity. We know now that this is untrue due to LULCC. This is why Steven separated out rural from "natural." Also, the record is not 100 years, it's 166 years.
    .
    With these kinds of numbers it does not take much effect to add up to 0.2-0.3C/century, which is 0.3-0.4C over 166 years. Even Hansen put it at 0.16C in 2001. Karl put it at 0.1C in 1988, though I am not so sure his study was "pre-political."
    .
    Seoul, Korea, from Kim(2002):

    The difference between the temperature at Seocho and the three-station-averaged temperature can be approximately regarded as the maximum urban heat island intensity in Seoul… The average maximum urban heat island intensity over the 1-yr period is 2.2°C… The average maximum urban heat island intensity is strongest at 0300 LST (3.4°C) and is weakest at 1500 LST (0.6°C). At 0300 LST, the number of days with an urban heat island intensity ≥ 5°C is 66.

  36. Why the continuing discussion of UHI effects? Steve Mosher has indicated that he will eventually give us test results for the adjustment algorithm with regards to adjusting for UHI. I assume that means that he has independent measures of UHI to test against or perhaps against a simulated UHI.

    The important point here, which at times seems to get lost in these discussions, is how well the adjustment algorithm can find and adjust for any non-climate effects, or even very localized climate effects, on a temperature reporting station. UHI may affect a larger area than most micro non-climate effects, but it will be treated in the same manner as a micro effect by most of the current adjustment algorithms. Even a reporting station in an urban heat island needs to be tested for microclimate effects that might even overwhelm the UHI effect.

    Treating the UHI effect separately from a microclimate effect might be required if the UHI were found to be spread over adjacent stations that would in turn be used for determining the adjustment required for the station in the urban heat island being adjusted. I would think that simulations could be run that account for various conjectured levels of UHI spreading, to determine how this would affect the urban heat island station adjustment.

  37. Ron Graf (Comment #149568)

    Ron, can you tell us what the UHI trends for these urban areas were for the 1975-and-later time period? That would be the important period with regards to AGW.

  38. And just so no one misses it, Mosher has often tried to defecate on what he has called “blog science”… when it’s skeptical, of course.

    Andrew

  39. Ron Graf (Comment #149568)

    Also, can we determine whether the UHI effects found in these studies have been taken into account by the adjustment algorithm?

  40. Might be a coincidence (summer, holidays…) but strange… Mosher's last comment on this site was on July 18 at 5:06, then came Brandon's comment at 6:19, and the up-to-that-point extremely active Mosher has completely disappeared since.

  41. Kenneth Fritsch:

    Why the continuing discussion of UHI effects? Steve Mosher has indicated that he will eventually give us test results for the adjustment algorithm with regards to adjusting for UHI. I assume that means that he has independent measures of UHI to test against or perhaps against a simulated UHI.

    I can’t speak for other people, but I know I don’t feel some need to postpone discussions on the assumption somebody, at some point, will provide something. I certainly wouldn’t feel the need to when the person I’d be waiting on is a person who has written a number of demonstrably false things on the subject in question.

    That’s particularly true when the person in question disappears from the discussion for four days. It’s even more true if that person happened to disappear just after embarrassing errors he made were pointed out. Such a timing may not mean anything, but it certainly doesn’t make me feel waiting for that person to provide some unspecified content would be a useful approach to the discussion.

    Andrew_KY:

    And just so no one misses it, Mosher has often tried to defecate on what he has called “blog science”… when it’s skeptical, of course.

    He also repeatedly talks about the importance of sharing one’s data and code while defending the group he works for – BEST – in its decision not to share certain data, code and even basic information. I believe his justification for that is something along the lines of, “Don’t let perfection become the enemy of good.” My term for it is hypocrisy.

    Sven:

    Might be a coincidence (summer, holidays…) but strange… Mosher's last comment on this site was on July 18 at 5:06, then came Brandon's comment at 6:19, and the up-to-that-point extremely active Mosher has completely disappeared since.

    He’s done the same thing many other times. People are free to believe it’s a coincidence or not, but whatever the reason for it, I find it makes discussions with him pointless. He routinely says things that are wildly untrue and when confronted about his errors, almost never corrects them.

    I’m still amazed he actually said “slicing” series by assigning breakpoints cannot affect the calculated trend in the last thread. The numerical example he provided even shows he is wrong. But… hey, it apparently works for him. Quite a few people seem to think what he writes is, generally speaking, credible and accurate.

  42. This is getting a bit silly. In the context of discussing UHI-corrupted urban records, I would think that my incredulous snark about "those who apparently have never taken a scientific thermometer into the field" would have been understood to mean "never performed UHI-related experiments." Carrick tells us, however, that he has "installed met stations. Studied bias effects. Constructed my own aspirated thermometers, etc.
    And I do metrology (science of measurement) too, from my Ph.D. thesis on."

    That experienced geophysical researchers usually leave such purely metrological tasks to engineers and technicians, concentrating themselves on the design and siting of experiments and incisive analysis of data, seems to be totally unrecognized. Ironically, he then concludes that "what Sky is doing is a form of appeal to authority. As far as I know, he's never taken data himself…"

    The claim that population is "concentrated near 30°N and UHIE is supposed to be an urban effect that influences the estimate of temperature trend" is somewhat misguided. There is no special concentration of urban records near 30°N, and the apparent temperature trend is a function not of population alone, but also of the exact station siting and relative density of UHI-corrupted records in a region. In the Arctic, even a small town such as Barrow has a sizable UHI effect in winter. Given the highly non-uniform geographic coverage available and the highly different amplitudes of regional multi-decadal oscillations, it should come as little surprise that "zonal bands that have almost no population have about 5x the rate of observed warming as zonal bands that are heavily populated." Such are the vagaries of ultra-sparse coverage with deeply flawed records.

    Despite diligent efforts here to elucidate the inadequacies of short, UHI-corrupted station records in establishing truly secular trends in the presence of multiple natural cycles, many mistaken notions patently still remain. I'll try to clear up some more tomorrow.

  43. Ron Graf:

    As far as your 30°N argument I do not understand what one forcing effect cares about another. We could have global cooling and still have that down trend muted by the upward UHI.

    I believe the general argument is that UHIE affects the estimate of trend, but that it does not significantly influence the global mean trend. If it’s the former, it’s a local effect (that’s why discussing station moves is important).

    If it’s the latter, it’s a real effect influencing true global mean temperature, so it’s not causing a measurement error.

    So I don’t think it’s a good argument that “this is all we’ve got”.

    .
    The land record is really the foundation of the observed warming. If it does not hold up to scrutiny then nothing else does. UHI is a real thing. Everyone admits that. But somehow we are supposed to believe it leaves no trace in the record.

    That, I think, is the issue—it stands up to real scientific scrutiny.
    Amorphous arguments that have no testable component aren’t an example of real scrutiny.

    Also, nobody is claiming it doesn’t leave a trace on the record. Obviously it does. The question being addressed is whether it has a substantial effect on trend.

  45. sky:

    That experienced geophysical researchers usually leave such purely metrological tasks to engineers and technicians, concentrating themselves on the design and siting of experiments and incisive analysis of data, seems to be totally unrecognized

    You obviously have zero experience with actual research.

    I’m a scientist, but I do many things. This includes instrument installations in addition to other tasks. We usually have other crews working with us on installations, but I definitely do the “full monty” there, including miles-long hikes through back country carrying around 100 pounds of equipment (batteries, solar panels, sensors, cables).

    There is no “concentrating” on anything in real world labs. Other than for temporary disabilities, if you can’t do the field component, you don’t have a job (it’s part of the job description). But you also have to be able to do experimental design, data analysis, software writing, physical modeling, and many other tasks, some of which are very mundane.

    The claim that population is “concentrated near 30°N and UHIE is supposed to be an urban effect that influences the estimate of temperature trend” is somewhat misguided. There is no special concentration of urban records near 30°N, and the apparent temperature trend is a function not of population alone, but also of the exact station siting and relative density of UHI-corrupted records in a region.

    If the issue is influence of UHIE on measured trend, it will track with population, population growth or similar variables that relate to local-scale anthropogenic activity. Getting that to morph into the observed pattern of warming, with the large amount of polar amplification, seems to me to be nigh impossible.

    If you want to propose a physics based model capable of numerical prediction, many of us would be happy to test it. This handwaving you are doing is a waste of everybody’s time though, including yours. It leads to nothing except an endless “yes it is, no it isn’t, yes it is, …”

  46. Carrick,
    “You obviously have zero experience with actual research.”
    .
    Most likely you are right about that. The imaginations of non-scientists about what scientists do, and for that matter, what the practice of science is actually like, tend to be wildly wrong. Today I discussed which of three temperature sensors in an instrument most closely represents reality, a Monte Carlo simulation of Brownian diffusion, and why a troublesome customer has so far not sent any actual data…. only ‘summaries’. Science is sometimes not what is described in 8th grade.

  47. Why 3 temp sensors?
    If all the same, obviously to give a backup to tell which of the first 2 is right, in essence admitting that at least one is wrong half the time.
    Logically this means the third is wrong half the time as well, so …
    Your “right” result is only right 50% of the time
    (if there are only 2 options).

    If all 3 are different, I suggest choosing the one closest to the fourth reality test: yourself.
    Sounds fun. Sack the customer or get the cheque banked in advance.

  48. I hope Mosher comes back.
    I appreciate someone of his stature and background trying to get the message across.
    I disagree with the way he uses numbers, but when you’re locked into a position you have to put the best effort in.
    If he is right, most likely the figures will make sense to me one day.
    If wrong he may yet have a Damascus moment.
    CO2 rise is real.
    Current temps at real stations are real.
    Where has all the heat gone?
    Not in the oceans, or the atmosphere would be warmer as well.
    At current CO2 levels the atmosphere should be 0.5 degrees warmer than it currently is. Gases don’t take centuries to react; they respond to the daily heat cycle at the CO2 levels of the day.

  49. Let’s stop the quibbling. Can we all agree on a few things?

    1) There is the potential for investigator bias in every study.
    2) In fields of study like climate science, point 1) is especially true.
    3) Bias has been successfully demonstrated to have affected the veracity of claims in climate science.
    4) A significant part of the audit of climate science comes from career scientists, engineers and mathematicians who are not paid for it.
    5) We all have a lifetime of worthy accomplishments that deserve respect.
    .
    I am reading a number of UHI papers in the limited time I have for that. I have posted them and would love to discuss any of them. I am particularly interested in ways to isolate UHIE from climate, like meteorological record analysis or diurnal temperature range trend, as in Hamdi(2011). I am looking at Karl’s UHI data correlation to DTR.
    .
    Carrick, I agree that UHIE does not affect GMST, which is why we must make sure it is not recorded as GMST. Same for LULCC, SHUI and micro-site. I disagree with the assumption that UHIE has a firm correlation with population; it’s actually how the population changes the landscape that’s important. And technology has steadily enhanced the power per person to impact it.
    .
    I think we all appreciate very much Steven M’s willingness to engage and inform. I think we all must respect that he is a professional in this field and knows more than all of us put together about UHI study. I am hoping that he will be open to the fact that even the less educated can still have the advantage of fresh eyes and a view from atop the shoulders of giants (or something).
    .
    If Steven comes back I would hope he could supply visuals on average population growth in concentric zones around stations, from their commission date to their decommission date, or to the present if still in operation. Because it’s the delta of the UHI and other influences that is relevant.

  50. Ron Graf, I asked a couple of questions up thread to which you have not replied.

    My points in asking those questions were:
    1. If the unadjusted station temperatures show UHI effects and the adjusted ones do not, or show a considerably lesser effect, then UHI vis a vis the adjusted temperature becomes a non issue.

    2. If the UHI effects stay essentially the same with regard to temperature over the AGW related warming period from approximately 1975 to the present, then UHI and its effects on AGW related trends become a non issue.

    I think if we could all urge Steven Mosher to get to the punch line and address these issues and forego all the debating contests and personalities this would be a more enjoyable and informative thread.

    Long ago I was involved with a blog on investing and investing strategies where every comment was rated by the readers by posting likes. I quickly noticed that a poster who attempted to post something informative received low to middling numbers of likes while a poster who blasted another poster and got personal would often receive a large number of likes. I think that preference of those participating on blogs is a weakness of blogs in general, but something that I am willing to some extent to tolerate if I think I can learn something after sorting through the chaff.

  51. Ron Graf:

    Let’s stop the quibbling.

    People talking about things you don’t find interesting isn’t quibbling.

    The fact remains that sky, given ample opportunity, has only been able to claim something along the lines that people who defend the accuracy of the temperature record simply don’t have any experience with field measurements.

    This is actually a legitimate criticism to raise, were the premise true, so it’s not quibbling to either raise the issue or to respond to it.

    But it also turns out to be false. Nobody here raised their experience level as a point of argument. It was raised as a counterpoint to sky’s argument. That’s a legitimate thing to do, facing the criticism, and it’s not quibbling either.

    Sky then goes on with a particular description of the experience of scientists and engineers who engage in field work. That we somehow leave all of the hard work to grunts. If we did, we’d have exactly the lack of experience that sky was criticizing.

    That description also turns out to be completely false. It’s important to have senior people on site during installations precisely so they will understand field measurement issues.

    It’s not an appeal to authority, since it is a direct response to a criticism.

    I understand you don’t find this an interesting exchange, that’s fine… but it’s still not quibbling.

  52. Beyond that Ron, I don’t think your series of stipulations lead anywhere. [Where do we go from there?]

    But I think the point that also needs to be stipulated is that a scientific criticism is not valid if it cannot lead to testable predictions.

    (It’s a case of what Pauli pejoratively described as “not even wrong”.)

    All we can ever say about a particular experimental record is whether a series of testable assumptions (a “model”) is self-consistent. So in order to test your concerns, we need to be able to frame them in terms of empirically testable assumptions.

    So… would you be willing to frame your concerns in terms of testable hypotheses? I haven’t seen much to test so far.

  53. Kenneth Fritsch:

    1. If the unadjusted station temperatures show UHI effects and the adjusted ones do not, or show a considerably lesser effect, then UHI vis a vis the adjusted temperature becomes a non issue.

    What we know is that adjustments don’t make too much of a difference. Pretty much everybody who’s done a reconstruction has tried “with adjustments” and “without adjustments” at this point. It’s part of why there is skepticism over the significance of UHIE to the GMST trend.

    See for example:

    https://moyhu.blogspot.com/2015/02/homogenisation-makes-little-difference.html

  54. Zeke, as many of you know, has done more exhaustive sorts of studies, which look at regional scales.

    [If you read what I’ve said on the topic, I’ve never claimed UHIE didn’t have important regional scale effects. I simply think it’s not an interesting effect for GMST trend, which is the quantity of interest for AGW.]

    One of them is summarized in this article cowritten by Matthew Menne:

    http://www.realclimate.org/index.php/archives/2013/02/urban-heat-islands-and-u-s-temperature-trends/

    I believe this figure is apropos too.

    He also says:

    We conclude that homogenization does a good job at removing urban-correlated biases subsequent to 1930. Prior to that date, there are significantly fewer stations available in the network with which to detect breakpoints or localized trend biases, and homogenization does less well (though the newly released USHCN version 2.5 does substantially better than version 2.0).

  55. Carrick (Comment #149586)

    Carrick, the link to Nick Stokes’ blog talks about overall GMST averages of adjusted and unadjusted series, whereas my question here is whether we can determine the UHI effect at individual stations and then how well the adjustment algorithm detects and adjusts for it. It was a point that Steve Mosher stated above that he intended to discuss and show evidence for. It is of particular interest to me because I would like to see how well an adjustment algorithm performs in adjusting for slowly changing non climate effects, such as I would expect a UHI effect to be.

  56. By the way, I noticed that Steve Mosher has posted recently at CA. Even if he is hard at work, he did have time to post over there.

  57. Carrick (Comment #149587)

    I am familiar with the contents of the links in this post, and they are the reason I brought forth my question. I would still like to see the results for individual stations where the non climate effect could be primarily related to UHI.

    I do think we need to be discussing UHI effects in terms of adjusted temperatures and the effects over the AGW warming period.

    I have looked at UHI in the US and found that its existence is limited to a low percentage of stations, but that where present it was significant. This analysis result was based on groupings of stations and did not look at individual stations in detail. The overall effect was very small because of the low percentage of affected stations. I looked at adjusted data, but the adjustments were not made using the more current algorithms, as I recall.

    The other issue with these findings, besides the effects on the mean trends, which could well be minimal, is what the findings do in terms of the uncertainty of those means.

  58. The entire notion that demonstrably strong UHI effects somehow disappear in GAT indices rests entirely upon the premise that the available data base of station records is representative of the overwhelmingly rural globe. It is only then that the rationale that UHI is but a drop in the bucket can hold. In fact, just the opposite is true: the overwhelming majority of records come from locations where civilization has left its ever-growing footprint upon the landscape and introduced sources of man-made heat on some scale or another. Even in only-recently-developing China, there is growing recognition of regional UHI effects; see:

    http://www.nature.com/articles/srep11160
    http://www.forestthreats.org/products/publications/Spatiotemporal_trends_of_urban_heat_island_effect.pdf

    Population size per se is but a proxy for these gradual physical changes, which vary enormously in timing and strength from location to location. That is what produces a widespread inconsistency of apparent “trend” between neighboring station records, while the intra-decadal variations remain highly correlated. These inconsistencies are further compounded in many studies by the lack of uniform specification of the interval of time over which the regressional trend is fitted.

    Sadly, the approach taken in recent decades by “climate scientists” is to accept the basic premise without any serious examination and to “homogenize” records on an ad hoc basis, using demonstrably unrealistic “red noise” models of long-term temperature variations. (See:
    http://s1188.photobucket.com/user/skygram/media/graph1.jpg.html?o=2)
    This in effect bakes the clear urban bias of the data base into the results presented to the public. The radical effects upon apparent trend of Ver. 3 GHCN homogenizations were starkly shown by “Smokey’s” flash images of many station records at WUWT a few years ago.

    As a result of such radical adjustments, an unmistakable ~0.5C decline below the 20th century mean to a widespread 1976 low in global temperatures that had some climatologists speculating about a possibly ending interglacial largely disappears in all post-modern GAT indices. Such stilted time-series–more a product of manufacture than of reliable measurement–satisfy the simplistic preconceptions of a linear trend and noise in “science” that relies upon visual impressions rather than incisive signal analysis techniques. And the UHI corruption is further masked by cherry-picking “trends” over post-1976 intervals of rising natural variations that affect urban and rural records alike.

    No doubt that many who have never seen how different regional average temperatures look when obtained from vetted, century-long non-urban records will continue to believe that UHI exerts little effect in estimating GAT. That belief, alas, is as vacuous as the rank speculation here about my professional work in the absence of any substantive knowledge.

  59. KenF:

    My points in asking those questions were:
    1. If the unadjusted station temperatures show UHI effects and the adjusted ones do not, or show a considerably lesser effect, then UHI vis a vis the adjusted temperature becomes a non issue.

    Ken, if I understand the question, you are asking if any adjustments have been shown to successfully remove UHI. Karl(1988) did the most comprehensive study I’ve found into how UHI affects land records. Karl tested hundreds of stations pairwise, urban to rural, for four population categories (as of the 1980 census). He found UHI in all of them, including the 2k-10k group (~0.1C Tavg over the year 1984). Although UHI was an order of magnitude higher in the >100k group, Karl set UHI’s overall effect in the US stations at 0.06C of bias in 1984. His reasoning for such a small amount was that the station network in 1980 was overwhelmingly rural. As you can guess, my immediate question is: “What was the makeup of the record over time, and how many of the urban stations from earlier got closed or moved but whose data still makes up the trend?”
    .
    KenF:

    2. If the UHI effects stay essentially the same with regard to temperature over the AGW related warming period from approximately 1975 to the present, then UHI and its effects on AGW related trends become a non issue.

    .
    Karl recommended future study. Perhaps S Mosher is going to surprise us by publishing the comprehensive revisit of UHI analysis that is 30 years overdue. Karl made recommended adjustments for the US record but could not say they were valid for the world, even if we had had the same metadata then to apply them. Karl doesn’t mention LULCC, and may have been unaware that his assumption that rural is pristine is false. He does mention microsite, but has no suggestions for it. I asked Steven Mosher what is being done for UHI adjustments and he told me to go research it. Wickham(2013) has information on that. And I am reading Jones(2016) now.

  60. Yes, Mosher’s been active at aTTP as well today where he ended his otherwise good and honest comment with

    “As to results. You dont have results from doing it correctly. So speculation about the answer is just denier crap from our side.”

    OUR SIDE??
    Too funny

  61. sky (Comment #149591)

    I have looked at US station classifications from rural to suburban to urban for USHCN stations with the only significant difference in trends being for the urban stations. You, I believe, are conjecturing that most stations have a UHI effect to some degree. You would be obliged to find some pristine stations without any micro climate effects colocated with stations expected to have UHI and then compare the unadjusted and adjusted temperature series for the complicated stations.

    By the way, finding statistically significant trend differences for individual stations is made difficult by the red and white noise in these series.

  62. KenF: “You [Sky], I believe, are conjecturing that most stations have a UHI effect to some degree. You would be obliged to find some pristine stations without any micro climate effects colocated with stations expected to have UHI and then compare the unadjusted and adjusted temperature series for the complicated stations.”
    .
    Ken, I think you are falling into the Mosh trap of Peterson(1997,2003): finding that the rural trend is the same as the urban trend. This seems to be true, so you must choose one of these conclusions:
    1) UHI effect is universal, grows in >80% of stations and is not isolated to just cities.
    2) UHI does not exist.
    .
    Karl found that a pretty good proxy for UHI was a smidgen off the square root of population:

    $latex \Delta T_{urban-rural} = a \, P^{0.45}$

    where P is population and “a” is a coefficient determined by the least squares method.
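
    For concreteness, a minimal sketch of fitting that one-parameter power law by least squares; the population and temperature numbers below are invented for illustration, not Karl’s data:

        import numpy as np

        # Hypothetical (population, urban-minus-rural dT) pairs --
        # illustrative values only, not Karl's data.
        pop = np.array([2e3, 1e4, 5e4, 2e5, 1e6])
        dT = np.array([0.10, 0.22, 0.45, 0.80, 1.60])

        # Model: dT = a * pop**0.45. Linear in the coefficient a, so the
        # least squares solution has a closed form.
        x = pop ** 0.45
        a = np.sum(x * dT) / np.sum(x * x)
        print("a = %.4f, predicted dT for a town of 10,000: %.2f C"
              % (a, a * 1e4 ** 0.45))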

  63. Sky, I respect your knowledge and thank you for your participation and any good factual support you can supply.
    .
    Carrick, I think respect for your knowledge goes without saying around here. I feel badly we have been disagreeing on UHI and hope you can give us a chance to do a thorough cold case investigation for the missing UHI before declaring “case closed.”
    .
    SteveF, I know you are with Carrick, if you guys want to ask Lucia for another open thread I’ll still speak to you. 😉

  64. Ken, there is another proxy for UHIE besides population, and that is the diurnal temperature range (DTR). Karl(1988) found a clear trend, since the vast majority of UHIE is a rise in Tmin. I would think that the use of CRN stations and “natural” stations could provide regional baselines for DTR from climate influences such as cloudiness, wind and precipitation. This mask could be overlaid on homogenized station data to see the DTR anomaly, compare it with the Tavg anomaly and population, and gain a two-hit approach to diagnosing UHIE quantitatively for each station.
    .
    When I proposed this to Steven a few months ago he said it couldn’t be done because of too much variability. But one can look at Karl’s plots and they are impressive. BTW, Karl noted, and you can see, a WWII dip in DTR across all four population groups. Perhaps the SST record’s infamous WWII dip was real and not an artifact at all.
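
    A minimal sketch of that two-hit screen, assuming annual-mean Tmax/Tmin arrays and a regional baseline are already in hand; the function name and the 0.05 C/decade threshold are placeholders of mine, not anything from Karl or CRN:

        import numpy as np

        def uhi_two_hit(tmax_ann, tmin_ann, dtr_base, tavg_base, thresh=0.05):
            """Flag the classic UHI fingerprint: DTR falling while Tavg
            rises, both measured against a regional baseline (e.g. built
            from CRN or 'natural' sites). Inputs are annual-mean arrays
            of equal length; the threshold (C/decade) is arbitrary."""
            dtr_anom = (tmax_ann - tmin_ann) - dtr_base
            tavg_anom = (tmax_ann + tmin_ann) / 2.0 - tavg_base
            decades = np.arange(len(tmax_ann)) / 10.0
            dtr_slope = np.polyfit(decades, dtr_anom, 1)[0]   # C per decade
            tavg_slope = np.polyfit(decades, tavg_anom, 1)[0]
            return (dtr_slope < -thresh) and (tavg_slope > thresh)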

  65. Ron Graf,
    Save for my family, I am ‘with’ nobody but myself. I do expect people to be reasonable, and not assume they can divine the expertise, experience, and knowledge of people they do not know without evidence. I expect that sort of thing from the wingnuts at Ken Rice’s echo chamber, but I object to it there, here, or anywhere else I see it. I happen to know Carrick does lots of field research, and has a long publication record showing that; anyone who suggests he never does field research is simply wrong. Whether you choose to speak to me or not makes no difference. I am not looking for an open thread.

  66. SteveF, if I was looking for an echo chamber I wouldn’t be here. I even visit Rice’s occasionally for a (likely misguided) hope of bringing balance.
    .
    I read Wickham or Rhode(2013), as Jones refers to it, a paper Mosher co-authored. I do not believe its conclusion that UHI is non-existent in the record. If Mosher comes back I have a bunch of questions for him on Wickham/Rhode(2013):
    1) Are the “very rural stations” mostly in inhospitably arid climates? Because I think those could be susceptible to a low humidity amplification bias.
    .
    2) Does the BEST weighting method discount anomalously cool stations as well as warm ones? If so, how is it theoretically justified? What could cause false cooling?
    .
    3) If the very rural group contains rugged mountainous area, is it possible that temperature extremes and more micro-climate-like character could be causing more adjustments by BEST’s iterative weighting comparison method? Could these adjustments be down-weighting high anomalies more than low ones?
    .
    4) If the BEST iterative weighting process was done to the data, then one is already expecting the large city UHI to be mostly reduced. So is the paper a measure of UHI in the record, or of BEST’s ability to eliminate its bias?
    .
    5) Do you have the method that Karl(1988) used to come up with 0.06C/century UHIE in the USA record? Did it weigh the length of service of the stations?
    .
    6) Did Hansen cancel most all of the positive UHI adjustments with negative ones, leaving only a net of 0.01C/century UHI adjustment?

  67. Kenneth Fritsch:

    Long ago I was involved with a blog on investing and investing strategies where every comment was rated by the readers by posting likes. I quickly noticed that a poster who attempted to post something informative received low to middling numbers of likes while a poster who blasted another poster and got personal would often receive a large number of likes. I think that preference of those participating on blogs is a weakness of blogs in general, but something that I am willing to some extent to tolerate if I think I can learn something after sorting through the chaff.

    This isn’t a matter of blogs, but of people in general. Regardless, I think it is telling that people who talk about how everyone should focus on X almost always decide X is something specific they want to talk about, while intentionally choosing not to deal with substantial matters in the same discussion.

    Notice how, aside from me, not a single person has remarked on the multitude of factual errors Steven Mosher has posted over the last two threads. Either nobody here has bothered to read what he has written, nobody here has any substantial knowledge of the topics and thus did not spot the obvious errors, or nobody here actually cares to correct the errors they spot. Or it could be some combination of the above. Whatever the exact reason, it is almost insulting for you to write:

    Why the continuing discussion of UHI effects? Steve Mosher has indicated that he will eventually give us test results for the adjustment algorithm with regards to adjusting for UHI. I assume that means that he has independent measures of UHI to test against or perhaps against a simulated UHI

    I think if we could all urge Steven Mosher to get to the punch line and address these issues and forego all the debating contests and personalities this would be a more enjoyable and informative thread.

    While intentionally choosing not to examine or discuss the many errors Mosher has made. Rather than “urge Steven Mosher to get to the punch line,” maybe we could all urge one another to try to actually resolve issues, settle disagreements and figure out what is and is not true. That seems better than letting anybody say anything they want and hoping the myriad biases and interests might lead to errors getting corrected.

    Which has demonstrably failed in every discussion of UHI I’ve seen.

  68. Carrick:

    But I think the point that also needs to be stipulated is that a scientific criticism is not valid if it cannot lead to testable predictions.

    I wouldn’t stipulate to that, as not all valid criticisms lead to testable predictions, for much the same reason it is wrong to say, “If you don’t have a better solution, you have no room to criticize mine.”

    All we can ever say about a particular experimental record is whether a series of testable assumptions (a “model”) is self-consistent.

    We can also examine if other models would be consistent with the record in order to attempt to estimate how informative any such consistency might be.

    What we know is that adjustments don’t make too much of a difference. Pretty much everybody who’s done a reconstruction has tried “with adjustments” and “without adjustments” at this point. It’s part of why there is skepticism over significance of UHIE to GMST trend.

    I hate how people use vague phrases like “too much of a difference” to minimize things. What exactly counts as “too much of a difference”? The BEST temperature data set has its overall trend altered by ~15% due to adjustments. Would it be wrong for somebody to think that significant and worth examining, or would that be not “too much of a difference”?

    Personally, I think any discernible effect is worth examining, and I don’t think anyone can make a compelling case that UHI doesn’t have one. People like to say we can know UHI doesn’t have an effect that matters on global temperatures, but what exactly “matters”? Personally, when dealing with a trillion dollar issue, I’d say everything matters.

  69. Ron Graf, it is critical to these discussion to keep in mind exactly what it is that we are talking about.

    Firstly, the Karl work you reference is from the late 1980s and I am still not clear whether the urban differences he found were for adjusted or unadjusted station data. The breakpoint based adjustments came later. The Karl results that you reference appear very much the same as I found in my analysis using adjusted station data that was adjusted with the methods used prior to the use of breakpoints and neighboring station differences.

    We must keep in mind that the UHI effect, regardless of the size, is an issue only to the extent that it cannot be detected and homogenized with the adjustment algorithm.

    If all stations showed sufficient UHI-like effects (no matter for this argument how implausible that might be), the adjustment algorithm would have some difficulty sorting this out. It would be an impossible task if all the UHI-like effects occurred contemporaneously. That occurrence makes no sense at all, and I would hope that rational people would agree. If all stations had UHI-like effects, the term UHI would lose all meaning. UHI-like would refer to micro climate changes at any station regardless of classification.

    The way out of this seeming dilemma (again, no matter how implausible) is to use an adjustment algorithm that deals with station changes on a temporal basis using breakpoints, and with a comparison against several neighboring stations that avoids the problem of coincidental changes. That is the basis of the currently used adjustment algorithm. The biggest hurdle for these algorithms is detecting slowly changing micro climate effects.

    Further, the adjustment algorithm needs to be tested using benchmarking simulations, where known station micro climate effects, including UHI-like effects, are added to a known error-free climate. Even implausible and less plausible station conditions could be tested, and the plausibility issue handled separately.

    I think the delay in the current benchmarking effort for testing station adjustment algorithms might well be a result of taking a more imaginative view of micro climate events and effects.

  70. Brandon:

    I wouldn’t stipulate to that as not all valid criticisms lead to testable predictions for much the same reason it is wrong to say, “If you don’t have a better solution, you have no room to criticize mine.”

    But I’m not saying that. What I’m saying is your criticism has to have a testable component, not that you come up with a new, better solution. That is, you don’t need a new theory or hypothesis, you need to be able to explain what’s wrong with the current one in a way that is testable.

    I hate how people use vague phrases like “too much of a difference” to minimize things. What exactly counts as “too much of a difference”?

    This would be an entirely valid criticism, were it all I wrote on the subject, and were I not available for follow up. As it happens, I am both available and happy to expand on what I mean in this case. (As it happens, I have already said much of this before. It is of course impractical to repeat everything I think on every thread.)

    When I say something “makes a difference” here (i.e., is “significant”), what I mean is that it significantly influences the outcome of policy decisions. The metric I’ve chosen to look at is the GMST trend from 1970 to the present (what I consider the anthropogenic warming period).

    Given the other uncertainties, such as modeling uncertainty, which is about a factor of 2, if you end up with less than a 10% effect on the GMST trend from adjustments, it is my judgement that this effect need not be considered for policy purposes. If it doesn’t influence your decision making, it really “doesn’t matter”.

    Personally, when dealing with a trillion dollar issue, I’d say everything matters.

    Everything matters??? Speaking of hyperbole…

  71. I have to agree that the surface temperature record has seen enough replication that the error bounds are probably small. There are many, many other issues that need work. I’ve often wondered about the effect of the boundary layer and the temperature gradients. There are effects like irrigation and so forth where there is a lot of work to do.

  72. Kenneth L Fritsch:

    If all stations showed sufficient UHI-like effects (no matter for this argument how implausible that might be), the adjustment algorithm would have some difficulty sorting this out. It would be an impossible task if all the UHI-like effects occurred contemporaneously. That occurrence makes no sense at all, and I would hope that rational people would agree.

    Whether I am rational or not can be debated, but I certainly agree with you.

    You’d need a mechanism for UHIE influence on measured temperature trend that didn’t depend on how urban the station was, on how well sited it was, etc.

    The only mechanism that comes to mind is the influence of waste heat on GMST (which is global and real in nature), but that’s not really UHIE. Nor is it thought to be very large, as I understand it.

    The point I’ve been arguing is that once you say the site matters, and the level of urbanization matters, your model for UHIE influence on measured temperature trend is going to have a “fingerprint” associated with it, in terms of a geographical pattern in trend, and one that is not observed in the real measurements, which dominantly show a “polar amplification” pattern.

    That limits the possible magnitude of the influence of UHIE, probably to a level where it doesn’t significantly impact policy on AGW amelioration.

  73. David Young:

    There are many, many other issues that need work. I’ve often wondered about the effect of the boundary layer and the temperature gradients.

    The lack of a proper boundary layer in the models worries me the most. It is unfortunate that the satellite-based measurements are not more reliable, because ideally you’d want to compare GCMs to those rather than to 1-m above-surface measurements, which are much harder to properly model.

    What are you thinking about in terms of temperature gradients though?

    I also think the inaccuracy of the numerical solution (even just grid size) is one people often don’t take seriously enough. One thing people who don’t do numerical work often don’t understand is that, if you generate a finite element version of Euler’s equation, typically you don’t just get a less accurate version of Euler’s equation; you end up with new (and wrong) physics.

    That is, things that were physically impossible in the original continuous equation are now possible in the finite element version. This shows up in some cases as exponentially growing “error terms”, and is one of the reasons that forecasting with numerical models is challenging.
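
    A toy illustration of that point, in the simplest possible setting: the linear advection equation preserves amplitude exactly, but the naive forward-time/centered-space (FTCS) discretization amplifies every Fourier mode, growth the continuous equation forbids:

        import numpy as np

        # u_t + c*u_x = 0 on a periodic domain: the exact solution just
        # translates, so max|u| should stay constant forever.
        n = 200
        c, dx = 1.0, 1.0 / 200
        dt = 0.4 * dx / c                  # an innocent-looking time step
        u = np.sin(2 * np.pi * np.arange(n) * dx)
        for step in range(1, 4001):
            u = u - (c * dt / (2 * dx)) * (np.roll(u, -1) - np.roll(u, 1))
            if step % 1000 == 0:
                # FTCS has an amplification factor > 1 for every mode, so
                # the "new physics" appears as unbounded growth in max|u|.
                print(step, np.abs(u).max())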

  74. KenF, Karl(1988) data was adjusted according to Karl and Williams(1987) for continuity, Karl(1986) for TOB, and all metadata available at that time.
    .
    Carrick:

    Given the other uncertainties such as modeling uncertainty, which is about a factor of 2, if you end up with less than a 10% effect on GMST temperature trend from adjustments, it is my judgement that this effect need not considered for policy purposes.

    .
    Carrick, I am finding bias in every paper I read, mostly omissions. There is so much information that one can cherry-pick what one cares to highlight. To paraphrase the late Senator Everett Dirksen: 10% here, 10% there, and pretty soon you are talking real bias.
    .
    Jones(2016), in its section on the urbanization effect, makes no mention of Karl(1988), the most comprehensive study on the topic ever done. Instead they acknowledge that automated homogenization techniques are ineffective at correcting gradual influences. But:

    Despite the difficulties in correcting for urbanization effects, there are two strong arguments that indicate that any residual urbanization effects in the standard (homogenized) temperature datasets are probably very small. The first of these is that SSTs are not affected, so the similarity of warming trends from land and marine regions argues against the effect being important. Second, datasets can be constructed using only rural locations.

    .
    This is the argument Steven Mosher, Best and the establishment makes dating back to Peterson (1997). The record is somehow immune to UHIE, microsite and LULCC because you can’t find it by selecting any subset and getting a different trend. At least Steven Mosher admits this is somewhat of a mystery.
    .
    The other consensus argument against UHI is that the land record matches the sea record, and Jones (2016) makes it. But they also argue in the same paper for using the land record to check the SST record. It seems circular. Here is a quote justifying marine adjustments in that section:

    If the adjustments were not applied then century timescale warming would be greater, and there would be a major discrepancy between the land and marine components prior to about 1940…

    .
    The confirmation checks are somewhat funny:

    The fact that four different organizations have made such corrections independently is a testimony to the robustness and accuracy of the resulting homogenized data.

    As if they don’t pay attention to each other. BTW, Jones’ paper makes no mention that there is a satellite or radiosonde record. Omissions, oops!

  75. Carrick:

    But I’m not saying that. What I’m saying is your criticism has to have a testable component, not that you come up with a new, better solution. That is, you don’t need a new theory or hypothesis, you need to be able to explain what’s wrong with the current one in a way that is testable.

    It’s called a simile. Saying you said something like X is not the same as saying you said X. It’s fine not to get the point of a comparison, but rather than say, “I didn’t say X,” you should say, “I don’t get how X is similar to what I said.”

    This would be an entirely valid criticism, were it all I wrote on the subject, and were I not available for follow up. As it happens, I am both available and happy to expand on what I mean in this case.

    Saying you hate when people do something is expressing a personal feeling, not making a criticism. I could justly hate when someone does that even if I knew what their quantified opinion would be.

    When I say something “makes a difference” here (i.e., be “significant”), what I mean is that it significantly influences the outcome of policy decisions. The metric I’ve chosen to look at is GMST temperature trend between 1970 to current (what I consider the anthropogenic warming period).

    I wish you would understand why people might not agree that something only matters if “it significantly influences the outcome of policy decisions,” particularly since the politicized nature of the global warming debate means very little will do such.

    Under the standard you describe here, I’d wager 90% of the work on global warming could be disregarded even if the issue weren’t politicized at all. I think there’s a strong argument politicization increases that number to 97+%.

    Everything matters??? Speaking of hyperbole…

    I don’t know who was speaking of hyperbole as the topic hadn’t come up, but no, that was hyperbole. When it comes to trillion dollar issues, I think everything that has a discernible influence is relevant. You may feel only things which greatly bias the global surface temperature are relevant, but that doesn’t mean everyone else feels the same way.

  76. Kenneth L Fritsch:

    The way out of this seeming dilemma (again, no matter how implausible) is to use an adjustment algorithm that deals with station changes on a temporal basis using breakpoints, and with a comparison against several neighboring stations that avoids the problem of coincidental changes. That is the basis of the currently used adjustment algorithm. The biggest hurdle for these algorithms is detecting slowly changing micro climate effects.

    This isn’t a viable solution to the dilemma you propose. You suggest using breakpoint algorithms to account for the implausible idea all stations contain a similar UHI component. Breakpoint algorithms comparing stations to one another look for differences in pattern between stations. They cannot remove patterns that are shared between stations.

    This is already recognized as a problem, simply because nearby stations can have similar non-climatic influences, which hampers the effectiveness of breakpoint algorithms in the world we have now. If it can be a problem in a world with a realistic UHI effect, it would certainly be a problem in a world where the UHI effect was far more widespread.

    In fact, if every station had a meaningful UHI component, breakpoint algorithms such as that used by BEST would likely exacerbate the problem.
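
    A toy demonstration of the point, under the deliberately extreme assumption that both stations carry an identical slow bias; pairwise methods operate on difference series, and the shared component simply cancels:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(100.0)
        shared_climate = np.cumsum(rng.normal(0, 0.1, 100))  # common regional signal
        shared_bias = 0.01 * years              # same slow "UHI" at BOTH sites
        sta_a = shared_climate + shared_bias + rng.normal(0, 0.2, 100)
        sta_b = shared_climate + shared_bias + rng.normal(0, 0.2, 100)

        # A pairwise breakpoint test sees only the difference series,
        # where the shared bias has already cancelled -- nothing to detect.
        print("trend of station A: %+.4f C/yr" % np.polyfit(years, sta_a, 1)[0])
        print("trend of A minus B: %+.4f C/yr" % np.polyfit(years, sta_a - sta_b, 1)[0])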

  77. Ron, you have to be careful not to conflate some poorly thought through comments by Jones with the conclusions drawn.

    The climate models and the observed temperature series show a divergence in temperature trends between land and ocean that is a near linear function of the land warming rate.

    I also agree that the major observed temperature data sets use much the same data and have similar adjustments.

    The main point of my previous post was that these questions about station micro effects, including UHI, and how well the algorithms adjust for them, can be addressed in a straightforward manner using the proper analyses and methods.

    I also expect that the confidence limits used for temperature trends will be changed more by future analyses than the mean trends will be.

  78. Here is a semi-candid response from Steven regarding UHIE back in January.

    [Ron Graf:] “UHI is a dangerous influence that must be mitigated and adapted to but on the other hand it does not affect the urban temperature record to any statistical significance.”
    .
    [Steven M:] Yes, that is the mystery, as Peterson termed it. Go ahead and read back to everything I wrote about Peterson. We all acknowledge that UHI is real. Some argue it is potentially dangerous. To make that case they do tend to focus on worst case UHI.
    Since we see UHI in individual records it seems OBVIOUS that the global record should show some sign of it.. after all there are plenty of urban sites in the total average. But when we use the same method that found the signal in individual cases, and apply it to global data, the signal gets attenuated.. to the vanishing point. To me it is still a mystery of sorts. I was relieved when I found a small signal (.03C/decade) in the US. When I unleashed that same method on the globe….. POOF, that small effect disappeared.
    Ross had a different method he thought could work to pull out the signal.. I still play around with that. But it is frustrating to work for a few months.. load up the data and POOF, get the same answer.
    For me here is where it stands:
    A) Our expectations for average UHI over the life of a station may be too high. We are probably conditioned to expect to see values like 2C.
    B) Studies that show UHI in a city get published more easily than those that don’t. But also see a study of 419 large cities using LST as opposed to SAT. The average effect is less than 2C.
    C) If a UHI of 2C hits Tmin… and Tmax is unaffected.. then Tave will see 1C.
    D) If 2C is the UHI max and it hits Tmin, but only for 10% of the year, you are down to a .1C effect in Tave.
    E) Rural/urban isn’t either/or. That is, you can have rural sites that are biased by micro site and urban sites that are in cool zones.
    Point E is especially important.
    Suppose you have 20 sites:
    10 urban with a trend of 1C
    10 rural with a trend of zero
    Now compare them and the difference is apparent.
    Now suppose that you misidentify a rural as urban and an urban as rural.
    Urban is now 9 with a trend of 1C and one with a trend of 0.
    Rural is now 9 with a trend of 0 and one with a trend of 1C.
    Now do the comparison..
    Next imagine the difference isn’t 1C but .1C, and you see that errors in identification or misclassification can obscure the difference.
    Bottom line: it looks like a simple problem. Divide the raw data into two piles and compare. Oops.. how the hell did that NOT WORK?
    So people can’t ask me to be quiet about the test and what didn’t work. I went into this thing to prove Peterson and Parker wrong.
    Look at all I wrote about Parker’s study at Climate Audit. I went into this thinking they were wrong, obviously wrong, and so showing that would be easy. Two piles.. easy peasy. Rural only.. easy peasy. Only I failed. YEARS OF FAILURE.. finally I found something with Zeke and Nick.. sure, it’s the US only.. so take that same method and use it on Berkeley Earth: DOH! They invited me to visit.
    I will tell you a funny story.
    After I gave Rhode my classifier he went away and did the study. I forget the numbers so I’ll just use X and Y, say X is .1 and Y is .15.
    He walked to the board. He wrote down:
    Berkeley Earth UHI -.1C
    Mosher ?.15C
    and then he said.. guess the sign on Mosher’s approach.
    Haha, it was negative. So the same filter that found UHI in the US didn’t find it globally. Imagine how I felt.
    Anyway.. it’s a mystery. There are some explanations. I’d love to find .15C of UHI in the land record.. that would be .05C in land and ocean.. err, ya.. not the kind of result that changes science.
    Suppose I found .3C? That would be .1C in land and ocean. Not a game changer.
    Suppose I found 1C.. now, that would change land and ocean by .3C, and
    that is an interesting result.
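
    The attenuation arithmetic in point E above is easy to reproduce; a minimal sketch with the same hypothetical piles of ten stations:

        import numpy as np

        def urban_minus_rural(urban_trend, rural_trend, n_swapped):
            """Difference of pile means when n_swapped stations are
            misclassified in each direction (ten stations per pile)."""
            urban = [urban_trend] * (10 - n_swapped) + [rural_trend] * n_swapped
            rural = [rural_trend] * (10 - n_swapped) + [urban_trend] * n_swapped
            return np.mean(urban) - np.mean(rural)

        # With a true 0.1C urban-rural difference, each misclassified pair
        # eats 20% of the signal -- before any noise is even added.
        for swaps in range(4):
            print(swaps, round(urban_minus_rural(0.1, 0.0, swaps), 3))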

  79. Brandon Shollenberger (Comment #149610)

    Brandon, the adjustment algorithms would be less effective if the micro climate changes occurred at nearly the same time, but time works in favor of breakpoints finding differences when this does not occur. If the UHI can be spread over large areas from some UHI center, the effect would be contemporaneous, but I would doubt that it would be of near the same intensity over the neighboring stations used in looking for differences. If the UHI effect can be spread out over large areas and affect temperature trends in those stations, it can no longer be considered a localized effect. Not adjusting for it completely then becomes an issue and a problem for allocating warming to natural phenomena, to total anthropogenic effects, and to GHGs, but not necessarily for regional and global warming overall.

  80. Reading a CA post I saw a comment suggesting the rural stations in the US may be represented to a higher degree by the volunteer-run COOP network, which is more likely to have had micro-site issues like bar-b-q grills parked under the sensor screen, being downstream of an HVAC condenser or chimney, being on paved walkways, etc.
    .
    I see the UHI as small but provable, whereas the unfalsifiable model issues are big but… well, unfalsifiable.

  81. I should add here that I have done a lot of breakpoint analysis using temperature series that were already adjusted using breakpoint algorithms. The upshot is that some very large trend differences can be measured between near neighbor stations, with some being significant. It was also evident that the white and red noise in these station series puts large confidence intervals on the trends, even when using difference series, and particularly when the series lengths are shorter.

    An interesting observation came from this analysis, namely that near neighbor stations can have highly correlated series whose difference series nevertheless has a statistically significant trend. In other words, the annual regional temperature changes can be followed among near neighbors, but on top of that the individual stations have different low frequency changes (trends).
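
    A sketch of why the red noise matters for those confidence intervals; the lag-1 adjustment of the effective sample size below is the standard Santer-style correction, applied here to a generic annual series:

        import numpy as np

        def trend_and_cis(y):
            """OLS trend with naive and AR(1)-adjusted ~95% intervals.
            n_eff = n*(1 - r1)/(1 + r1), where r1 is the lag-1
            autocorrelation of the residuals; red noise shrinks n_eff
            and widens the interval."""
            n = len(y)
            t = np.arange(n, dtype=float)
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
            se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
            n_eff = n * (1 - r1) / (1 + r1)
            se_adj = se * np.sqrt((n - 2) / max(n_eff - 2, 1.0))
            return slope, 1.96 * se, 1.96 * se_adj

        # On an AR(1) red-noise series the adjusted interval is much wider:
        rng = np.random.default_rng(1)
        y = np.zeros(100)
        for i in range(1, 100):
            y[i] = 0.7 * y[i - 1] + rng.normal(0.0, 0.1)
        print(trend_and_cis(y))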

  82. Ken: “In other words the annual regional temperature changes can be followed among near neighbors but on top of the individual stations having different low frequency changes(trends).”
    .
    Neighbors showing correlation of response but different long-term trends is the precise fingerprint of station bias. The only site flaw that can produce erroneous cooling would be a sensor enclosure parked under a shade tree, which could be tested for by seasonal trend.
    .
    Jones and the consensus guys claim that as few as 100 stations could robustly sample the world, and that 1000 is certainly plenty. With this logic one could set up 20 stations in the US (each with a triangle of sensors), only in the most natural sites in each grid cell, and run them for 5 years. Make them automated to report over IP, along with a live web cam to monitor 24/7. I wonder how expensive this could be. Keep Karl away.

  83. Carrick, yes, discretizations can easily be over- or under-stabilized. Both are bad, but over-stabilization is perhaps more common. It fixes virtually all “superficial problems.” It also gives wrong answers, of course.

    The other problem is sub-grid models, of turbulence for example. These can also over-stabilize the model, in addition to being totally wrong outside their calibration set.

  84. David Young:

    Both are bad but over stabilization is perhaps more common. It fixes virtually all “superficial problems.” it also gives wrong answers of course.

    Without claiming anything close to your experience… yes, I have noticed that bias: if it doesn’t blow up, it must be right.

    Of course overstabilization is more common—in reported results at least…

    Getting $latex +\infty$ in your integration usually isn’t publishable.

  85. Brandon:

    Under the standard you describe here, I’d wager 90% of the work on global warming could be disregarded even if the issue weren’t politicized at all. I think there’s a strong argument politicization increases that number to 97+%.

    To be clear, I think there are good reasons to look at these problems even if they don’t influence policy decisions. I don’t advocate against studying the influences of UHIE; there is no lack of scientific reasons to do so.

    I could be selling people short, but it’s my impression most of the interest from the skeptical community comes from the belief that UHIE is responsible for a significant portion [*] or all of the measured warming. My comments are directed towards that meme, rather than towards whether there is scientific value in studying it.

    Of course, scientific interest or not, if there weren’t the impetus from AGW we wouldn’t be seeing such massive funding for otherwise arcane issues, and more immediate problems like severe storm warnings or flooding amelioration would take a higher research precedence.

    [*] In the sense of influencing policy.

  86. Kenneth L Fritsch:

    Brandon, the adjustment algorithms would be less effective if the micro climate changes occurred at nearly the same time, but time works in favor of breakpoints finding differences when this does not occur.

    Microclimate changes are a separate (if perhaps somewhat related) issue from UHI, so I’m not sure what you mean by this.

    If the UHI can be spread over large areas from some UHI center, the effect would be contemporaneous but I would doubt that it would be of near the same intensity over the neighboring stations used in looking for differences.

    All homogenization does is make stations more similar to one another based upon whatever criteria/parameters you choose. The primary thing examined in current approaches is trend. If UHI is increasing the trends in stations over the same time period, it can’t be detected by examining how stations are different from one another.

    The only way one could find UHI in your example via the current breakpoint algorithms is if stations were influenced by it over different time periods, with there being significant periods in stations where UHI did not influence their trend. If all that varied was the intensity, then the algorithms might reduce or increase the effect of UHI depending on the distribution of the UHI effect (and any underlying trends).

    An interesting observation came from this analysis, namely that near neighbor stations can have highly correlated series whose difference series nevertheless has a statistically significant trend. In other words, the annual regional temperature changes can be followed among near neighbors, but on top of that the individual stations have different low frequency changes (trends).

    The difference between how series correlate at high frequencies as opposed to low frequencies is an interesting one that can provide a great deal of insight into some questions. It’s a particularly important issue for paleoclimatic reconstructions, but it can also be informative in modern temperature constructions as well. Personally, I think such an approach tends to show homogenization algorithms in a poor light.

  87. Ron Graf:

    This is the argument Steven Mosher, Best and the establishment makes dating back to Peterson (1997). The record is somehow immune to UHIE, microsite and LULCC because you can’t find it by selecting any subset and getting a different trend.

    I don’t think it’s necessarily a mystery.

    The religiously made assumption of the skeptics is that UHIE only manifests itself in data with one sign.

    What this observation indicates is that either the UHI influence on temperature trend is very nonlocal (which I think is highly improbable), or the effect appears in the data with both signs (I think there is already evidence of this).

    Even so, there’ll be an influence… just not necessarily on the long-term temperature trend. For example, I wouldn’t be surprised if a properly corrected record would have less variance. That would be a useful outcome of this research, even if it doesn’t significantly affect the best estimate of trend.

  88. Carrick:

    I could be selling people short, but it’s my impression most of the interest from the skeptical community comes from the belief that UHIE is responsible for a significant portion [*] or all of the measured warming. My comments are directed towards that meme, rather than towards whether there is scientific value in studying it.

    [*] In the sense of influencing policy.

    Whether or not people feel this way, the idea is incoherent to any rational policy debate. Policy decisions on an issue like global warming (that have any real relevance) couldn’t rationally be based on the global temperature trend as that trend conveys so little information about the potential risks of global warming.

    Issues like UHI matter for rational policy making. What you refer to, which they may not matter for, is the irrational social argument that’s more about which side “wins” than any coherent position or plan. Limiting our focus to that argument is little better than limiting the focus to, “Global warming is real!” vs “Global warming isn’t happening!”

    The reason I don’t get this sort of view of the UHI debate is global warming is a trillion dollar issue. If we’re going to go off the global temperature record and UHI has affected it by just 10%, then UHI is at least a hundred billion dollar issue.* That doesn’t seem like the sort of thing people should just wave off because UHI doesn’t disprove global warming (or things like that).

    *This calculation is obviously an oversimplification, but you get the point.

  89. Carrick:

    The religiously made assumption of the skeptics is that UHIE only manifests itself in data with one sign.

    I can’t say I’ve tried to gauge precisely what “the skeptics” believe on this issue, but I don’t think what you say is fair or true. I don’t think it is fair because UHI can, by definition, only manifest itself in data with one sign. The potential for cooling from UHI is not actually due to UHI, but rather to how the data is examined.

    I don’t think what you say is true because if you frame the issue in that way, I suspect many of “the skeptics” would agree.

    What this observation indicates is either UHI influence on temperature trend is either very nonlocal (which I think is highly improbable), or that the effect appears in the data with both signs (I think there is already evidence of this).

    I hope we could all agree these are not the only two possibilities, especially since Ron Graf referred to more than just UHI.

  90. Carrick:

    I don’t think it’s necessarily a mystery.
    The religiously made assumption of the skeptics is that UHIE only manifests itself in data with one sign.
    What this observation indicates is that either the UHI influence on temperature trend is very nonlocal (which I think is highly improbable), or the effect appears in the data with both signs (I think there is already evidence of this).

    .
    Carrick, Steve Mosher claimed in the beginning of the first Population post that I was wrong about UHI being anything but a big city urban issue. I think it’s clear that Karl (1988), which has been swept under the rug by the establishment, supports UHI being a gradual developmental issue. Karl could plot UHIE as a function of population starting at towns of 2000 souls. I agree with you that UHIE can have both signs (sorry, Brandon), but as S Mosher said in 2007, this would be the jackalope of climate science. Suburbs in a desert climate that introduce irrigation and lawns could do that. Carrick, I don’t believe that is the solution to our mystery.
    .
    Personally, I see adjustment regimes a little like parameterization, in that the operator “knows” the reality, just needs to get the data to match it, and once things reach that end, declares mission accomplished. So the solution could be a chimera of several biases tied up in warming rural and cooling urban.
    .
    The one thing I am most curious about is what happens when a station closes. Any UHIE or micro-site issues get “solved” from the perspective of some. But their bias is permanently cooked into the record without ever being corrected. The new replacement site starts with a clean slate, a new baseline for its anomaly. This is worse for the record than continuing the old station without fixing it, because there is the potential for urban creep and microsite issues to pop up all over again. This was my concern when reading Karl (1988) say UHIE has only a 0.06C/100yr effect because of the overwhelmingly rural makeup of the network. Reading Peterson, I knew that relatively few rural stations have remained unchanged for 100 years. Most of the rural sites in 1988 were not there in 1950. Also, Karl never showed his work for the 0.06C figure, although he showed the math on lesser issues in his paper.
    .
    Sky, I am looking at your Eastern China link. Jones 2016 says 68% of E China stations are in urban centers.

    The simple average of all the stations shows a greater warming than the land-use weighted series because urban areas (which constitute less than 1% of the total area of the country) are where 68% of stations are located. In summary, there is an urbanization effect in eastern China, but its impact could be considerably reduced by using a network of rural sites.

  91. I go away for a few days and there’s so much wasted bandwidth it’s not worth scrolling back through the whole thing.

  92. Ron Graf:

    I agree with you that UHIE can have both signs, (sorry Brandon,)

    It really can’t though. By definition, UHI deals with heat islands. You can have areas where UHI is positive but smaller than the UHI around it, making it negative relative to the city as a whole. It’s still positive though.

    There is another thing people have examined, called UCI, for “cool islands.” I get there may be some temptation to combine these two things into one, but doing so merely creates unnecessary confusion, as UHI is defined in terms of heat. There are even “heat islands” separate from “urban heat islands,” which are just areas warmer than the surrounding areas.

    This is sort of like how people conflate microclimate issues with UHI. It’s wrong, and it just leads to unnecessary confusion.

    DeWitt Payne:

    I go away for a few days and there’s so much wasted bandwidth it’s not worth scrolling back through the whole thing.

    I know, right? People keep saying things like:

    If the wind is blowing, you can’t have an inversion.

    It’s such a waste of bandwidth.

  93. Lucia, you never did get round to that September 2016 ice extent.
    There is a group trying to circumnavigate the North Pole to whom this is vitally important.
    And a group at Neven’s demanding that it go low.
    It might.
    But it may end up blocked.
    My guess: 4.6, and they get through.

  94. Brandon: “This is sort of like how people conflate microclimate issues with UHI. It’s wrong, and it just leads to unnecessary confusion.”
    .
    You are correct that there are different terms coined for different specific effects. I am going to coin NCE, non-climate temperature effects, to encompass all the effects that reach a sensor but are not representative of a grid cell on the planet’s surface.
    .
    Perhaps part of the answer to the “mystery” of the non-traceability of NCE in the record is that pristine sites are more sensitive to NCE than urbanized, NCE-saturated sites. Karl (in 1988) believed the relationship to population/development was steeper than Oke’s. Oke (1973) proposed that urban warming increases as the logarithm of the population, at about 0.73 log(pop): a village with a population of 10 has an urban warming of 0.73 C, a village with 100 has a warming of 1.46 C, a town with a population of 1000 people already has an urban warming of 2.2 C, and a large city with a million people has a warming of 4.4 C. If Oke had the more realistic model, then many of the “rural” stations are already half infected with NCE.
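    For concreteness, here is a minimal sketch of that relation (Python, purely illustrative; the base-10 logarithm is assumed, since it reproduces the quoted figures):

    import math

    def oke_uhi(pop):
        # Oke (1973) log-population relation as quoted above; base-10 log
        # gives 0.73 C at pop 10, ~2.2 C at pop 1000, ~4.4 C at a million
        return 0.73 * math.log10(pop)

    for pop in (10, 100, 1000, 1000000):
        print(pop, round(oke_uhi(pop), 2))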
    .
    NCE is also episodic, its impact depending on the proximity of events to the sensor. Changnon and Kunkel (2006) examined discontinuities in the weather records for Urbana, Illinois, a site with exceptional metadata and concurrent records when important changes occurred. They identified a cooling of 0.17°C caused by a non-standard shelter height of 3 m from 1898 to 1948, a gradual warming of 0.9°C as the University of Illinois campus grew around the site from 1900 to 1983, and an immediate 0.8°C cooling when the site moved 2.2 km to a more rural setting in 1984. Most COOP sites in the USA are not that well documented.
    .
    The zero or negative effect of NCE adopted by the consensus through Peterson’s papers is just ridiculous. The eastern China study shows NCE going from 0 to 4C in just 10 years for a score of urban sites.

  95. Eli,

    Steve Mosher has left the building.

    Since there’s nothing going on but hand waving, it seems like a good idea to me.

  96. “This is sort of like how people conflate microclimate issues with UHI. It’s wrong, and it just leads to unnecessary confusion.”

    Brandon, while microclimate and UHI are different phenomena and cover areas of different size, the adjustment algorithms used to homogenize the station temperature series deal with both effects as if they were the same.

    For most observers the effects are similar in that the adjustments are used in attempts to make the station temperature series representative of the area the station is assumed to cover. Now if UHI were shown to affect that assumed area more or less uniformly, no adjustment, or only a small one, would be required.

    I do not know whether published work on the areal extent of UHI has been cited here, but I would be interested in knowing how much of this discussion is conjecture and how much is backed with evidence and proper analysis.

  97. The high correlation and different trends for some near-neighbor station pairs that I have found in my analyses of adjusted temperature series were so pervasive that I considered that they might well be a natural phenomenon. I have not found any published work that deals with this issue, and I have not attempted to conjecture how this condition occurs. At the time of my analysis, which was several years ago, I thought that this condition could cause problems for adjustment algorithms if it was natural, or could itself be a problem resulting from the adjustment process. I presented my work at the time to the authors of the GHCN adjustment algorithm but did not get any satisfactory response from them.

    I need to go back and repeat this analysis using the latest adjusted station data and perhaps show my results here.

    All these complications that are seen when digging sufficiently deep into the unadjusted and adjusted temperature series are a primary reason for me to want to see the benchmarking process put into use in determining, through simulations, how well the adjustment algorithms can handle these conditions.

  98. Eli,

    Steve Mosher has left the building.

    Since there’s nothing going on but hand waving, it seems like a good idea to me.

    Actually, there’s no bigger Hand Wave than moving all your limbs and leaving altogether, like a bad comedian who ran out of material.

    Andrew

  99. Ron Graf:

    You are correct there are different terms coined for different specific effects. I am going to coin NCE non-climate temperature effects to encompass all the terms that affect a sensor but are not representative of a grid cell on the planets surface.

    I wouldn’t use “grid cell” in this as you don’t have to use grid cells to produce a temperature record, but other than that, I approve of this idea.

    Also, NCE is sporadic to events according to proximity to the sensor.

    Years back Steven Mosher and I were supposed to work together on a project to examine UHI. One of the things I found interesting in our discussions is he felt we should only look at how UHI affected the temperature trends.* I thought we should attempt to quantify the potential effects of UHI on the data and results, regardless of the nature of those effects.

    One of the ideas I proposed was to try to create UHI profiles, where we’d attempt to create models of how UHI affects areas under given circumstances (season, precipitation, wind, etc) and apply those to stations to estimate what effect UHI might have. We’d then examine how the various adjustment methodologies are affected by those UHI profiles (by running the code on the data with each model’s effect removed).

    I also had a second test I wanted to apply where we would perturb stations with synthetic UHI profiles we created, simply to test the adjustment methodologies so we could see exactly how they handle various situations (that may or may not exist for real); a sketch of this idea follows the footnote below. I am genuinely surprised this approach isn’t a popular one. I’ve never understood the near-obsession with how UHI affects global temperature trends.

    *Actually, he switched between saying we should examine how UHI affects trends and how it affects the total rise in temperature. Strangely, he had trouble understanding when I explained those two things are not the same. Discussions of UHI are strange.
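    That second test could look something like this minimal sketch (all profile parameters are hypothetical, and the adjustment step is left to whichever code is under test): perturb a synthetic network with known UHI-like signals and check what the algorithm recovers.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1900, 2011)
    n_stations = 50

    # shared regional signal plus independent station-level noise
    regional = np.cumsum(rng.normal(0.005, 0.02, years.size))
    raw = regional + rng.normal(0.0, 0.15, (n_stations, years.size))

    def uhi_profile(onset, rate, ceiling):
        # hypothetical logistic-growth urban signal
        return ceiling / (1.0 + np.exp(-rate * (years - onset)))

    perturbed = raw.copy()
    urban = rng.choice(n_stations, 15, replace=False)
    for i in urban:
        perturbed[i] += uhi_profile(rng.uniform(1930, 1980), 0.1,
                                    rng.uniform(0.3, 1.5))

    # feed `perturbed` to the adjustment code under test, then compare the
    # recovered regional series against the known `regional` answer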

  100. Kenneth Fritsch:

    Brandon, while micro climate and UHI are different phenomenon and cover different size areas, the adjustment algorithms used to homogenize the station temperature series deals with both effects as if these effects were the same.

    Yes, but that the same adjustment methodologies are used for both phenomena doesn’t mean we should assume they will be handled equally well. They affect the data in different ways, and as a result, adjustments will be made differently for them. It is theoretically possible an algorithm could handle both issues equally well, but the opposite is also true (and perhaps more plausible).

    The difference is especially relevant since there are a variety of different approaches to adjusting data to account for things like these, and it is likely the different approaches have different rates of success for each.

    I do not know whether published work on the areal extent of UHI has been cited here but I would be interested in knowing how much of this discussion here is conjecture or backed with some evidence and proper analysis.

    I’m not entirely sure which parts of the discussion you have in mind when you say this, but pretty much everything I have said is based on published literature. My discussion of how the effect of UHI can be affected and/or spread via things like wind is certainly tied to published literature, as it relies in no small part on the very papers this post cites. And misrepresents. I don’t know about what other people have said though.

    The high correlation and different trends for some near neighbor station pairs that I have found in my analyses of adjusted temperature series was so pervasive that I considered that it might well be a natural phenomena. I have not found any published work that deals with this issue and I have not attempted to conjecture how this condition occurs.

    For what it’s worth, there is literature on this topic in paleoclimatology. It’s an interesting topic in that field because different proxies can respond differently to the same environment, but the differences can also be caused by the methodologies used or differences in location. Trying to figure out which explains a particular difference is both tricky and interesting.

    There are actually quite a few parallels between creating the modern temperature record and creating paleoclimatic reconstructions. I’m surprised that doesn’t receive more attention.

  101. Ron Graf:

    Carrick, Steve Mosher claimed in the beginning of the first Population post that I was wrong about UHI being anything but a big city urban issue.

    I believe this claim is very wrong.

    I like the notion of NCE, but like any concept it has to have a way of being quantified. It’s impossible to numerically test something that you can’t calculate.

  102. Brandon:

    It really can’t though. By definition, UHI deals with heat islands. You can have areas where UHI is positive but smaller than the UHI around it, making it negative relative to the city as a whole. It’s still positive though.

    If you are talking about the effect on true temperature at a spatial location in the unperturbed temperature field, UHI always results in a positive bias. However, we don’t know the true temperature; rather, it’s the quantity we’re trying to infer from limited data containing flaws in the measurement process.

    So to make it clear, when I discuss effects, I’m generally referring to effects on measured quantities as well as actual ones. When I say error, it’s the difference between the measured value and the actual one. When I say measure, I mean taking “raw” numbers and crunching them in a model to obtain a value. That model could be as simple as a model for thermal expansion of mercury in a thermometer or as complex as the trajectory of a spacecraft. When I say measured, that could refer to unadjusted or adjusted, with adjustment methodologies being considered part of the model being used.

    We could also either be discussing the measurement at one site, or the inferred regional scale temperature, which is to be related to an average over a comparable area of the surface temperature field. [*]

    Having said all of that, I will claim that UHI can have either sign on measured temperature, which is the thing we have access to.

    For example, if you have a station move over time from urban to rural, that results in a “saw-tooth” pattern in the error in measurement rather than a monotonic increase in error over time. Depending on when the break point occurred, the influence on trend from that can be negative.
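    A toy illustration of that saw-tooth (made-up numbers; the “true” climate is held flat so only the error contributes to the trend):

    import numpy as np

    years = np.arange(1900, 2001)

    def measured(move_year, uhi_rate=0.02):
        # UHI error grows linearly, then resets to zero when the station
        # moves to a rural site in `move_year`
        err = uhi_rate * (years - years[0])
        err[years >= move_year] = 0.0
        return err

    for move_year in (1930, 1970, 1995):
        slope = np.polyfit(years, measured(move_year), 1)[0]
        print(move_year, round(slope * 10, 3), "C/decade")

    An early move leaves a negative trend contribution; a late one, positive.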

    Urban development has other effects besides trapping heat energy (it affects wind flow, humidity, and nocturnal subsidence, for example, through the transfer of mechanical energy from elevated locations to the surface). These effects aren’t the same as UHIE, but they are very difficult to disentangle from UHIE itself.

    Land usage changes are intertwined with UHIE. Rural stations near farms could have a negative net effect on actual temperature from land usage changes. So when you use the comparison method to remove the UHIE from the urban site, you could again end up with an adjustment with the wrong sign. Here it isn’t UHIE that is causing the error, the error is in the model constructed to remove UHIE. In the attempt to fix one problem, you’ve introduced another.

    When you use correlational methods to automatically correct for UHI bias, those methods have noise, and can at least in principle result in corrections that have either sign:

    For example, a site that is affected by UHIE might be mis-categorized as “rural” and used to “fix” rural stations that have been categorized as “urban”. If the person writing the algorithm starts with the personal belief that only large cities can have UHIE, it’s very easy to see how you can end up with adjustments with the wrong sign. This falls under the rubric “model error”.

    [*] Note this is more complicated than it sounds: in a given month, the fetch area is affected by the mean wind speed and direction. Generally people visualize the averaged temperature quantities, e.g. from gridding, as if the area of integration is constant over time… in practice the effective area of integration changes over time, and there is a strong geographical dependence to it. Relating measured to actual quantities is often more challenging than it looks on paper.

  103. What kinds of temperatures are available from Berkeley, and what is used? I think of max, min, or the average, and at what time it is measured if these are max-min results.
    TOB is real, and if the observation time changes it can shift the result by up to ±0.5 K for different reading times. It is worst if the station has large daily or weekly temperature swings.
    It might not be a problem, but take some care; the signal is of the same size.

  104. Svend, TOB seems relatively easy to correct compared to NCE because the observation time (metadata) is much more likely to be kept and there’s a formula correction. And it can be applied uniformly all the way back to the start of the time series.
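    A toy illustration of the mechanism (wholly made-up numbers): a max-min thermometer inherits the value standing at reset time into the next day’s window, so an afternoon observer double-counts hot afternoons and a morning observer double-counts cold mornings.

    import numpy as np

    rng = np.random.default_rng(0)
    days = 3650
    hours = np.arange(days * 24)
    # synthetic hourly temps: diurnal cycle peaking mid-afternoon
    # plus day-to-day weather noise
    diurnal = 5.0 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)
    weather = np.repeat(rng.normal(0.0, 3.0, days), 24)
    temp = 15.0 + diurnal + weather

    def mean_of_maxmin(obs_hour):
        # 25-hour windows sharing the reset-time sample, which is what
        # lets an extreme at observation time count in two days
        means = []
        for k in range(days - 2):
            w = temp[obs_hour + 24 * k : obs_hour + 24 * (k + 1) + 1]
            means.append((w.max() + w.min()) / 2.0)
        return float(np.mean(means))

    base = mean_of_maxmin(0)            # midnight reader as reference
    for h in (7, 17):                   # morning vs afternoon observer
        print(h, round(mean_of_maxmin(h) - base, 3))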
    .
    NCE, on the other hand, is mostly undocumented, must be diagnosed and quantified among multiple possible sources, and should not be applied uniformly to the past except for clearly known events, like a new nearby building construction. NCE may take multiple adjustments.
    .
    Carrick, I am pretty sure S Mosher feels that UHIE is not an issue except for large cities like Tokyo and has said as much many times. When I wrote, “Carrick, Steve Mosher claimed in the beginning of the first Population post that I was wrong about UHI being anything but a big city urban issue,” I was addressing you, not including you. Sorry.

  105. Ken Fritsch says: “You, I believe, are conjecturing that most stations have a UHI effect to some degree. You would be obliged to find some pristine stations without any micro climate effects colocated with stations expected to have UHI and then compare the unadjusted and adjusted temperature series for the complicated stations.”

    Finding relatively pristine stations without any microclimate effects is a task that requires cross-spectral inter-comparisons. That is precisely how I vetted the non-urban stations used to establish a benchmark average series of yearly temperature deviations from the 20th century mean in each of 22 roughly equal-area climatic regions of the contiguous USA. Urban records were selected largely on the basis of availability of intact series over the same interval, 1896-2005. This criterion along with great regional differences in urbanization limited the choices to only two urban sites per region, one over half a million and another ~100 thousand in population. Only because Billings is the largest city in Montana was I forced to accept smaller cities (> 50 thousand).

    Because of high regional signal coherence at all but the lowest frequencies, simply subtracting the benchmark from the urban average readily reveals the difference attributed to UHI. The aggregate average discrepancy is what I showed in my first graph on the previous thread. Any need to fit an arbitrary linear trend is obviated by determining such benchmarked discrepancies.

    BTW, finding statistically significant trend differences for individual stations is complicated not so much “by the red and white noise,” which is relatively weak, as by the presence of strong multi-decadal oscillations in station series of uneven length. Even greater complications arise when the data is very inexpertly adjusted to compensate for flaws real or imagined. Sadly, much time is wasted working on misleadingly adjusted data series.

  106. Sky’s comment from the 1st Population post:

    Karl’s un-politicized results [1984] on UHI in the USA are very much in line with what I found, not by pairwise comparisons, but by constructing geographically representative estimates of the real average temperature for the lower 48 states based upon exclusively urban and non-urban records, each of which covered the entire period 1896-2005. The ~0.7C rise of the urban – non-urban [chart] discrepancy would correspond to Karl’s UHI for a city of roughly half a million. Your [Frank Lansner’s] remark “Would not hurt to skip such [UHI-corrupted urban] stations” [NYC, LA, Chicago, etc.] would be spot on, were it not for the fact that on most of the continents there are precious few non-urban, intact, century-long stations whose records could provide the 20th-century backbone of the historical record. In fact, I cannot even find fifty such records in all of Asia, Africa and S. America combined. All of the methods of obtaining “global average temperature” are simply ad hoc schemes for covering these glaring lacunae in geographic coverage.

    .
    The periodic site changes and undocumentable NCE make pairwise referencing a nightmare in the USA, which has half the world’s stations and the longest series except for the UK. But we are reassured by S Mosher that UHI does not exist outside the USA except for Tokyo. We’ll have to discuss eastern China if he comes back.

  107. Wickham(2013), the paper Mosher co-authored, has an error in the passage below (the 0.01C figure):

    The approach of the GISS team is to identify urban, “periurban” (near urban) and rural stations using satellite images of nighttime lights. Urban and peri-urban stations are then adjusted by subtracting a two-part linear trend based on comparison to an average of nearby rural stations. The result of the adjustment on their global average is a reduction of about 0.01C in warming over the period 1900-2009.

    .
    Hausfather(2013) and Hansen(2001) support the conclusion that Wickham’s figure is off by an order of magnitude.
    Hausfather(2013):

    For the U.S. data contribution to the NASA GISS analysis, Hansen et al. [2001, 2010] use the USHCN data that has been adjusted by NOAA/NCDC for time of observation and station history changes, but apply their own UHI adjustment. The GISS urban adjustment reduced the otherwise adjusted USHCN version 1 temperature trends by an additional 0.15°C/century, more than twice that of Karl et al. [1988] method [Hansen et al. 2001]…

    .
    It seems that NOAA did indeed use Karl 1988’s urban adjustment until Menne(2009). NOAA supplies GISS with GHCN data adjusted for everything except the urban adjustment, which GISS applies itself, according to Hansen (2001, 2010).

  108. Carrick:

    For example, if you have a station move over time from urban to rural, that results in a “saw-tooth” pattern in the error in measurement rather than a monotonic increase in error over time. Depending on when the break point occurred, the influence on trend from that can be negative.

    Certainly, that’s why I said in a response to you:

    I don’t think it is fair because UHI can, by definition, only manifest itself in data with one sign. The potential for cooling from UHI is not actually due to UHI, but rather, how the data is examined.

    Unlike you, I think most “skeptics” would acknowledge this point. UHI can only cause temperatures to be higher, but that doesn’t mean it is impossible to examine the data in a way which causes the effect to have an apparent negative influence. It’s the difference between using absolute and relative measures.

    The motivation of my remark is simply if people want to have a meaningful discussion of technical subjects, they should accurately describe what they’re talking about. As an example of why this matters, you refer to cooling in results caused by UHI due to how things are calculated, but people have also referred to cooling caused by UHI due to what is actually UCI. The two are not the same. The result is when people refer to UHI having a cooling impact, they create confusion as to what they’re referring to.

    In my experience, that sort of confusion dominates discussions of UHI to the point it is rare people actually understand what everyone is saying.

  109. sky (Comment #149640)

    Did you use adjusted or unadjusted data? Did you take altitude differences into account? How did you determine whether there were no microclimate effects at the pristine stations?

    When you say the red and white noise are not a major issue in looking at trend differences, are you talking about the station series or the difference series between paired stations?

  110. “But we are reassured by S Mosher UHI does not exist outside the USA except for Tokyo.”

    Are you talking about adjusted or unadjusted station temperatures here? That difference is, as I noted previously, critical to the discussion.

  111. Brandon:

    The motivation of my remark is simply if people want to have a meaningful discussion of technical subjects, they should accurately describe what they’re talking about. As an example of why this matters, you refer to cooling in results caused by UHI due to how things are calculated, but people have also referred to cooling caused by UHI due to what is actually UCI. The two are not the same.

    Yep. I agree it’s important to make the distinction between the actual field values and the measured ones. I generally focus on measured quantities rather than actual ones, and most of the dialog on this thread relates to measured rather than actual quantities. At times people seem to get confused about which quantity they are referencing.

  112. Ron—yes I got you were addressing me. I was making sure you had me on record as saying I think the view is wrong.

  113. Ken, the general impression that S Mosher and P Jones, etc., try to convey is that UHI is a minuscule problem that is adjusted for, and that the public health/urban planning millions are being spent out of concern for a few hot summer days. They even have the audacity to say scientists in that field are exaggerating the issue. Imagine that.
    .
    It seems the central point about NCE is that it’s mostly undocumented, variable in time and place and grows gradually as well as sporadically.
    From Parker(2010)

    Exclusion of urban sites, or selective use of rural sites, requires information (‘metadata’) about the site and its surroundings. The maintenance of metadata, as well as the actual observational data, is a key GCOS principle. Some forms of metadata, such as city population statistics, must be used with care because they may not be representative of the immediate vicinity of the observing site. Thus, Hansen et al did not find a close correspondence between population metadata and satellite-observed night-time surface lighting, which, if at high geographical resolution, is likely to be more representative of the site.

    .
    Hansen(2001) found aerial night-light analysis a better proxy than population. He also found that Karl had underestimated UHIE in the overall USHCN by 2.5X (0.06C to 0.15C). —So much for “pre-political” Karl. One thing I am still tracking is a CA post where McIntyre finds that almost all of Hansen’s adjustment gets canceled by an opposing negative UHIE adjustment scheme. That’s when Mosher scratches his head and says negative NCE can certainly exist but it would have to be the jackalope of climate science.
    .
    Hansen(2001) also found that Karl’s assumption that rural could be a zero baseline for NCE was wrong. Hansen found 0.1C of UHIE in periurban stations and several hundredths in rural stations.
    .
    Here is some more detail from Parker(2010):

    Hansen et al.’s adjustments consisted of separate changes to the trend before and after a flexible date. This date and the changes to the trends were chosen to minimize the mean square difference between the adjusted urban (or peri-urban or small town) record and the mean of its rural neighbors. As of September 2009, 53%, 19%, and 28% of the stations used globally by Hansen et al. were designated rural, small town, and urban by population. The adjustment made the trend cooler in 58% of cases but warmer in 42% of cases. From this, Hansen et al. inferred that the urban effect is often less than the combination of regional variability in temperature trends, which affect the comparisons with neighbors, and errors and other heterogeneities remaining in the data after quality control. In other words, despite their prior adjustments, their urban adjustments are likely to have incorporated other residual heterogeneities. Hansen et al.’s adjustments removed about 0.15°C of urban warming over the United States during the 20th century. Their adjustments removed about 0.1°C more warming than those applied to the USHCN by Karl et al., who used a population based empirical equation.
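    One plausible reading of that two-part adjustment, sketched with guessed details (the actual GISS break-date search and weighting surely differ):

    import numpy as np

    def two_leg_adjust(urban, rural_mean, years):
        # fit offset + slope + hinge term, scanning candidate break years,
        # to minimize the squared urban-minus-rural difference
        resid = urban - rural_mean
        t = years - years[0]
        best = None
        for b in years[5:-5]:
            A = np.column_stack([np.ones_like(t), t,
                                 np.maximum(years - b, 0)])
            coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
            sse = float(((resid - A @ coef) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, b, coef)
        _, b, coef = best
        fitted = (coef[0] + coef[1] * t
                  + coef[2] * np.maximum(years - b, 0))
        return urban - fitted, b

    years = np.arange(1900, 2001)
    rural = np.zeros(years.size)
    urban = 0.01 * np.maximum(years - 1950, 0)   # warming starts 1950
    adjusted, brk = two_leg_adjust(urban, rural, years)
    print(brk)                                   # recovers the 1950 break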

    .
    How many feel that Hansen is not political? Raise your hands. If in doubt read the conclusion of Hansen(2010).

  114. The comment editor here seems to reject repeated attempts to insert phrases or entire sentences into the original text. Thus the second paragraph in my comment 149640 lacks the introductory sentence: “Even in the USA, which is rich in long small-town records like no other country, truly pristine stations are rare.” Likewise, the last paragraph should conclude with the phrase “whose apparent trend is not substantially a direct product of in situ measurements.” Finally, my comment 14975 failed to delete after being replaced with 14976 in order to accommodate similar insertions.

    I’ll try to find some time after work to reply on more substantive issues here.

  115. In order to show examples of near-neighbor stations having paired high correlations and significant trends in the difference series of the pairs, I went to KNMI and obtained mean temperature series for 10 near neighbors around 39N and -94E from the adjusted GHCN monthly station data set. I used the data from 1950-2015. There were a small number of missing years in some of the station series, and in those cases the data for that year were not used in the pairings. All 45 pairs of these stations were correlated, with a p-value determined for each correlation, and a linear trend, with its own p-value, was regressed on each paired difference series. The trend values are given in degrees C per decade.

    The results in the table linked below show that paired near-neighbor stations can have a very high and significant correlation together with a large and significant trend in the paired difference series. I did not calculate autocorrelation coefficients in order to adjust the p-values for that, but with such very low p-values, that adjustment would not raise the difference-series trend p-values above 0.05.

    I did an additional analysis with the 10 near-neighbor stations around 42N and -94E that I do not show here, because the noise in those series produced few significant paired difference-series trends, even though the trends of the difference series were in a number of cases large.

    http://imagizer.imageshack.us/v2/1600x1200q90/922/QnR05L.png
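    For what it’s worth, a minimal sketch of that kind of calculation (the KNMI download step is omitted, and the demo data are synthetic, not the actual stations):

    from itertools import combinations
    import numpy as np
    from scipy import stats

    def pairwise_stats(stations):
        # stations: name -> aligned annual series, missing years dropped
        rows = []
        for (n1, s1), (n2, s2) in combinations(stations.items(), 2):
            r, p_corr = stats.pearsonr(s1, s2)
            fit = stats.linregress(np.arange(s1.size), s1 - s2)
            rows.append((n1, n2, r, p_corr,
                         fit.slope * 10.0,     # deg C per decade
                         fit.pvalue))
        return rows

    rng = np.random.default_rng(0)
    common = rng.normal(0.0, 1.0, 66)          # shared regional signal
    demo = {"A": common + rng.normal(0.0, 0.2, 66),
            "B": common + rng.normal(0.0, 0.2, 66)
                 + 0.02 * np.arange(66)}       # one station drifts
    for row in pairwise_stats(demo):
        print(row)   # high correlation, yet a significant difference trend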

  116. Ken Fritsch:

    Since century-long records provide close estimates of the long-term mean temperature at any station, I work with the yearly deviations from the 20th-century mean, rather than with recorded temperature levels per se. While this approach obscures the actual temperature differences in pairwise comparisons, it totally eliminates the need to consider the effects of individual station elevation, while still showing the clear discrepancy in the aggregate temporal evolution between urban and non-urban records. That’s why, unlike a pairwise discrepancy of actual temperatures, my graph shows a negative beginning and a positive end (with a zero-average 20th-century anomaly).

    The matter of micro-climate effects (MCE) is more complicated. Clearly, when coherence is very high and the cross-spectral phase is effectively zero, there’s little doubt that MCE is not a confounding factor. That is often the case in the Great Plains, where the coherence levels throughout the frequency range give the lie to the notion that noise, red or white, is a significant factor in properly vetted records. But in mountainous areas, in particular, the range of uniformly high coherence narrows sharply and one has to include reduced-coherence stations to obtain a duly representative sample of the microclimate quiltwork in the region. Availability of clean station records along with professional knowledge of real-world spatial temperature variability necessarily enters into that selection. BTW, none of my benchmark USA stations are located on mountain summits, ridges, gorges or otherwise unrepresentative sites.

    Finally, while for reasons unknown some small-town records may manifest quasi-linear trends that rival those of large cities, I have yet to find any such record that is strongly coherent at the lowest frequencies with any neighboring station among thousands of cross-spectral analyses of paired station records I’ve done world-wide. Since any linear trend is perfectly coherent with any other, this is mathematically unquestionable evidence of an entirely local effect or of spurious data.

  117. Sky, Ken, I think Pielke Sr. (2007) agrees there is evidence of local site influences in the record. Pielke uses LULC much as I define NCE.

    The Trends Project temperature analysis (Hale et al. 2006) examined “normals” (National Climatic Data Center 2002) temperature data for stations near sample blocks in which LULC has been determined for five dates during the period from 1973 to 2000. The normals temperature data have been adjusted for time-of-observation biases based on results of Karl et al. (1986) and have also undergone quality control (Peterson et al. 1998a). Within this dataset, inhomogeneities in the temperature data have been addressed based on recommendations of Peterson and Easterling (1994) and Easterling and Peterson (1995). Hale et al. (2006) examined temperature trends at the normals stations before and after periods of dominant LULC change. Temperature trends were primarily insignificant prior to the period during which the greatest single type of LULC change occurred around normals stations. Additionally, those trends that were significant were equally divided between warming and cooling trends. However, after periods of dominant LULC change, significant trends in minimum, maximum, or mean temperature were far more common, and 95% or more of these significant trends were warming trends. Although the LULC changes have not been identified as the causative factor in the exhibited temperature trends, there is substantial evidence for such speculation. This issue is relevant to the Peterson (2006) analysis because the photographs in Davey and Pielke (2005) suggest that the landscape (and thus the microclimate) around the poorly-sited measurement location (and even the well-sited locations) is not likely to be static.

  118. Pielke Sr. in 2009 suggests using the evolving remote-sensing products to classify station surroundings and track the growth of LULC since the era of Landsat (we are waiting, Steven).
    .
    Like Hansen, Pielke Sr. suggests that regional LULC not be adjusted for, since it is climate change. I disagree that GHG AGW should be conflated with LULC. At the very least it would need to be deducted from EfCS calculations and from radiative model comparisons for SST record validation.

    This database could also potentially provide assessment of regional LULC change associated with station locations such that if the LULC change observed at or near a station was, in reality, a change that is taking place on a regional basis, then the station temperature record might be considered indicative of the true climate change of the region and not an anomalous change in a trend specific to an individual station. Thus, temperature adjustments may not be appropriate for a station that truly represents the LULC change that has occurred within a region, rather than LULC that is site specific at or near a single station.

    .
    Frankly, this is a surprising stance by Pielke, considering that he argues for using only Tmax as a measure of climate since Tmin is so easily biased by ground (boundary-layer sampled) artifacts.

  119. sky (Comment #149652)

    You are evidently talking about temperature anomalies. That does not answer my question about whether you used adjusted or unadjusted series – whether they be for anomalies or absolute temperatures. Elevation can affect the temperature trends, and that is not an issue of using anomalies – or not. Trends and difference series can be used with the same results whether anomalies or absolute temperatures are used for the calculations.

  120. Ron Graf, I see you are doing a lot of background work on the topic at hand for this thread. I think you would do better to avoid personality conflicts with Steven Mosher and rather clearly state what the published papers mean to you and the discussion. Do not include him or his past comments in your replies. Do not let past hurt feelings influence what you have to say. These analyses are too much fun to get sidetracked into something less important and meaningful.

    Sometimes Steve Mosher’s mission to criticize skeptics in a very general way gets in the way of a good analytical discussion. I once told him that he could and maybe at times should criticize my analysis and comments and all I wanted from him was a mention in any forthcoming books he might write.

  121. “Additionally, those trends that were significant were equally divided between warming and cooling trends. However, after periods of dominant LULC change, significant trends in minimum, maximum, or mean temperature were far more common, and 95% or more of these significant trends were warming trends.”

    Even if an effect like this one gets cancelled out with cooling and warming at the stations, to the extent that the effect is not completely handled (or homogenized) by the adjustment algorithm it will widen the trend confidence intervals (CIs) without affecting the mean trends. CIs are an important aspect of judging climate change in the observed period and require close attention in order to get the intervals as correct as we can.
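    A quick Monte Carlo sketch of that point (all magnitudes invented): random-sign steps leave the mean trend essentially alone but fatten the spread of station trends, which is what widens the CIs.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(100)

    def trends(with_steps, n=500):
        out = []
        for _ in range(n):
            series = 0.01 * years + rng.normal(0.0, 0.2, years.size)
            if with_steps:
                # one unhandled step per station, warm or cool equally
                series[rng.integers(10, 90):] += rng.choice([-0.5, 0.5])
            out.append(np.polyfit(years, series, 1)[0])
        return np.array(out)

    for label, tr in (("clean", trends(False)), ("stepped", trends(True))):
        print(label, round(tr.mean() * 10, 3),
              round(tr.std() * 10, 3))       # mean and spread, C/decade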

  122. Ken, can you supply a DTR reference from model realizations at RCP4.5 for land/SST over the last 100+ years? Do you have data for the CMIP5 ensemble or selected models?
    .
    Whereas UHIE-LULC is repeatedly found to favor raising Tmin over Tmax by about 10:1, there is good potential for a proxy there if the background climate can be screened. Pielke pointed out that there is a large reduction in DTR seen in the polar regions, which makes one scratch one’s head.
    .
    It seems that the UHIE could be resolved by using multiple proxies: DTR, population density, Landsat IR, and surface morphology. If the aerial/remote-sensing proxy can be correlated with population and DTR, then those two could be used to quantify UHIE in the pre-aerial/pre-satellite record era.
    .
    With its overkill in stations, the USA is the perfect vessel for studying UHIE from pairs and station moves. If only Karl, Vose, Hansen, and others would work with Pielke Jr. and Sr. on a comprehensive study like Karl(1988), it seems the UHIE-LULC piece of the puzzle is ready to be solved now. Once that’s done, microsite issues would stand out more clearly and could be identified as well.

  123. IF I WERE IN CHARGE – I would spend some of the money ‘earmarked’ for climate change research or carbon control and invest in doing a bottom up reanalysis of the global data. It would include a UHI scheme like Oke or Karl 1988 that adjusts the raw data, already corrected for time of observation and site moves. I would assemble a staff recreating the metadata of station siting and population for the world’s stations and assemble a team to determine the best approach or blend for an ocean reanalysis (similar to what is described here). A few hundred thousand or even a million dollar investment in trying to get this right could save us spending a trillion dollars more on a non-issue at a time when real issues are already threatening our economic future.

    Above from Ron’s link. Nice thought, but the powers that be would rather spend the trillion on the non-issue.

    Andrew

  124. Andrew, one of the main points in Pielke(2007) was showing that satellite and model reanalysis is an effective tool for validation of station records.
    .
    From the icecap article:

    Jones et al 1990 (Hadley) had concluded that UHI bias in gridded data could be capped at 0.05 deg C (not per decade, per century). Peterson et al 1999 agreed with the conclusions of Jones et al. (1990) and Easterling et al. (1997) that urban effects on 20th century globally and hemispherically averaged land air temperature time-series do not exceed about 0.05°C over the period 1900 to 1990. Peterson (2003) and Parker (2004) argue urban adjustment thus is not necessary.

    .
    I found it an interesting coincidence that Karl(1988), after the most exhaustive study to date, found 0.06C/century in the USHCN, right in line with the best guesses of the establishment. But Karl’s adjustment was replaced with Menne’s pairwise method in 2007 for NCDC V2, and the article shows that this actually warms the trend versus the raw data. Very confusing. Perhaps this partly motivated Hansen to study UHIE, and it may be why NASA does not accept the Menne adjustment and creates its own.
    .
    In light of the above, the fact in itself that Hansen tripled the urban adjustment to 0.15C, according to Parker(2010) and Hausfather(2013), really demonstrates, if nothing else, that this issue is not under control.
    .
    Among all this confusion I find it inexcusable that Wickham(2013), taking a page and a half to review the history of UHIE, cites the Hansen adjustment as 0.01C/century (typo?) and then finds zero statistical UHIE in the NCDC after a Menne-type adjustment.
    .
    I find it already criminal that researchers must practically re-do the work themselves to find out what was done. If glaring errors creating warming or covering over UHIE are found somebody should go to jail.

  125. Ron Graf,

    Back in the day, when I first started examining Global Warming information, Pielke Sr.’s old site is what came up in my searches. I didn’t know how the game worked then, but looking back, I see some of the same Warmer Trolls who were at it then are still doing the same thing today.

    Andrew

  126. I have a question about formatting. How do I get a quote to appear indented and with a color background?

  127. David,

    You use HTML tags. The word ‘blockquote’ without the quotation marks is placed inside the less-than and greater-than symbols before the quoted text, and then ‘/blockquote’ (again without the quotation marks) inside the same less-than and greater-than symbols follows the quote to end the block.

  128. Next thing, Ron will discover the march of the thermometers and put more people into jail.

  129. Ken, can you supply a DTR reference from model realizations at RCP4.5 for land/SST over the last 100+ years? Do you have data for the CMIP5 ensemble or selected models?

    I have been working with mean RCP 4.5 temperatures of individual runs. If you want a look at the ensemble means of the RCP 4.5 run minimum and maximum temperatures, that is something readily extracted from KNMI. If you want a look at all the individual runs, that would take a bit longer.

  130. Ken Fritsch:

    Having previously indicted Ver.3 adjustments and written that “even greater complications [in trend estimation] arise when the data is very inexpertly adjusted to compensate for flaws real or imagined. Sadly, much time is wasted working on misleadingly adjusted data series, whose apparent trend is not substantially a direct product of in situ measurements,” I thought that my exclusive use of unadjusted data was amply clear. All of my data series, in fact, are from prior versions of GHCN, and were vetted individually. They are comparative gems found in a pile of gravel.

    Unlike cross-spectrum analysis, regressional trend fitting is not a general, incisive tool of time-series analysis. It doesn’t reveal any inherent features of the underlying signal when its spectral structure is not very simple. With short records that are mere segments of longer oscillations, in particular, it can easily force the false impression that a persistent, secular trend has been found. And, unless the residuals are demonstrably i.i.d., it tells us little about actual noise levels.

    Such simplistic spectral structures as white noise or red noise, which figure prominently in academic training, are almost never found in the actual power densities of geophysical variables. Properly estimated power spectra of temperature records do not decline monotonically from a peak at zero frequency, as with red noise. Nor, despite wide spectral continuums often seen at intra-decadal frequencies, do we find entirely flat power densities, as with white noise. Nature is much more richly structured than that. And it is to the ceaseless pursuit of understanding its complexities that I must now return.

  131. Forgot to mention that there are no significant systematic differences of elevation between my vetted benchmark stations and the urban ones. Thus there are no imagined elevation effects upon trends in the aggregate.

  132. I find it already criminal that researchers must practically re-do the work themselves to find out what was done. If glaring errors creating warming or covering over UHIE are found somebody should go to jail.

    That kind of talk is what puts these threads into something other than a rational discourse. Making suppositions and then threatening to put people in jail for it is a bit too Trumpish – or Clintonish for that matter.

  133. David Young:

    I have a question about formatting. How do I get a quote to appear indented and with a color background?

    To follow up on SteveF, here’s an example:

    <blockquote>Put text here</blockquote>

    generates:

    Put text here

  134. KenF:

    That kind of talk is what puts these threads into something other than a rational discourse.

    .
    I would think such behavior as conspiring to falsely manipulate data by a NOAA or NASA director in order to alter policy should at least be punished with forced retirement with full benefits.
    .
    Okay, I’ll cut it out if you translate all of what Sky said.

    Never mind.
    .
    On the DTR I am looking for some type of consensus baseline range over land vs SST, polar vs tropical, and arid vs humid in the 19th, 20th and 21st centuries, and also whether temperature or altitude has any effect. If the AGW signal is not a great confounder, a mask can be created for the climate DTR trend, to be subtracted from the UHI effect on DTR. Such a proxy could then be validated by other independent means like population, Landsat IR, and non-natural surface density.
    .
    RB, if you are talking about the dropping of stations, that could be justified if they are not valuable. One would have to look.

    https://wattsupwiththat.com/2010/03/08/on-the-march-of-the-thermometers/

    If you are talking about the MMTS adjustment. Yes, I am looking at that too.

  135. Okay, I’ll cut it out if you translate all of what Sky said.

    It is beyond my communication skill level and time allotment to extract much of value to me from the exchanges with Sky. Nothing personal. What I got from him is that he used unadjusted data and did not bother to compare adjusted data because a priori he knows it is wrong, and evidently that the unadjusted data he used is correct.

    He talks of spectral analysis of the temperature series but is vague about the results of his analysis. I have used Singular Spectrum Analysis (SSA) and some Empirical Mode Decomposition to analyze both observed and modeled temperature series. I find in almost all cases that the series with SSA can be decomposed and reconstructed as a trend plus white and red noise with no good evidence for any periodic components. It is the residuals that remain after removing the SSA derived trend that have red and white noise.
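    For readers unfamiliar with the technique, a bare-bones SSA sketch (the window length and component count are arbitrary choices here; an illustration, not Ken’s actual procedure):

    import numpy as np

    def ssa_components(x, L, k):
        # embed x in an L-lagged trajectory matrix, SVD it, and return the
        # first k elementary components via diagonal (Hankel) averaging
        N = len(x)
        K = N - L + 1
        X = np.column_stack([x[i:i + L] for i in range(K)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for j in range(k):
            Xj = s[j] * np.outer(U[:, j], Vt[j])
            comps.append(np.array([np.mean(Xj[::-1].diagonal(i - (L - 1)))
                                   for i in range(N)]))
        return comps

    # toy check: trend plus red-ish noise; the leading components act as
    # the adaptive trend, and what remains is the noise residual
    rng = np.random.default_rng(0)
    t = np.arange(120)
    x = 0.01 * t + 0.1 * rng.normal(0.0, 1.0, t.size).cumsum()
    trend = sum(ssa_components(x, L=40, k=2))
    resid = x - trend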

    If you want a quick look at the models with regard to DTR go to KNMI here:

    https://www.google.com/#q=knmi+climate+explorer

    and look at the CMIP5 scenario runs, and from there select the CMIP5 mean for RCP4.5 and either tas minimum or maximum. From there you can choose latitude and longitude boundaries and whether you want to look at sea points or land points. If you need SST data you go the same route, except instead of using “surface variables” you choose ocean, ice and upper air variables, and from there select minimum or maximum for the tos variable. Once you have latitude and longitude and sea or land, you select “make time series” and you will be presented with monthly absolute and anomaly series. If you want annual data, go toward the bottom of the page and select “New Time Series” after determining what month you want that series to begin. Finally, select raw data from above the graph for the anomaly and you will obtain a table that can be copied to an Excel spreadsheet.
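    If you prefer to skip the spreadsheet step, a minimal sketch for reading such an export (assuming, hypothetically, a plain-text file of two whitespace-separated columns with “#” comment lines; check the actual download format):

    import numpy as np

    data = np.loadtxt("series.txt", comments="#")  # hypothetical filename
    years, anom = data[:, 0], data[:, 1]
    print(years[0], anom[0])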

    What I got from him is that he used unadjusted data and did not bother to compare adjusted data because a priori he knows it is wrong, and evidently that the unadjusted data he used is correct.

    That grossly mischaracterizes what I did and substitutes blind faith in radical adjustments. In reality, it is precisely because I examined the adjusted data thoroughly with advanced signal-analysis techniques and compared it widely with thoroughly vetted records (some of them very professionally obtained outside the purview of GHCN) that I rejected the all-too-patently tendentious adjustments made by the index manufacturers.

    It’s sheer circular reasoning to cite some spectral characteristics of radically adjusted time-series, generally not found in nearly pristine records, as evidence of any geophysical reality.

  137. Ken, thanks v much.
    .
    Sky, even if every impression is that the game is rigged, one still must chip away at the evidence. Unlike some, who dismiss pieces one by one, I like to hold onto the pieces and periodically re-evaluate. I feel there is no shame in changing conclusions if the balance shifts. I think Ken is the same. Keep chipping away, but in simpler sentences. Thanks much.
    .
    On the effect of the “march of the thermometers,” it goes back to the question of the value of long stations. I had asked previously whether it mattered to have 166 one-year stations versus one 166-year station. Thinking more on it, if the only variable were temperature and quality control for NCE were perfect, then there would be no difference. But for there not to be other variables in a weather station, it pretty much has to be the same station. Though it’s true you could start adjusting to correct for changed variables, once investigators get their hands on the data, their judgment enters too. Judgments are biased. So as a practical constraint it becomes impossible to audit the quality of 166 one-year stations.
    .
    If one is going to do a study that introduces adjustments into the public historical record, I believe there should be a scientific protocol, if not an ethical one, to bring skeptical experts into every step of the study. Better yet, run a second team concurrently in parallel to see if the same results and conclusions are found.

  138. sky (Comment #149673)

    What spectral analysis did you use for the adjusted and unadjusted series and what were the differences?

  139. GISS FAQ on adjustments: Q. Why are some current station records different from what was shown before 2012?

    A. UK media reports in January 2015 erroneously claimed that differences between the raw GHCN v2 station data (archived here) and the current final GISTEMP adjusted data were due to unjustified positive adjustments made in the GISTEMP analysis. Rather, these differences are dominated by the inclusion of appropriate homogeneity corrections for non-climatic discontinuities made in GHCN v3.2 which span a range of negative and positive values depending on the regional analysis. The impact of all the adjustments can be substantial for some stations and regions, but is small in the global means.

    I can’t get over that the consensus still maintains that positive NCE cancel out negative ones. If this were true it would be sheer coincidence, because they come from different sources. TOB is unidirectional unless the station changed from morning to afternoon observation time. The MMTS is unidirectional unless a station went back to its old thermometers. The UHI/LULC/MS is unidirectional except for the few cases where a station was in open natural desert which later became irrigated. Station moves were almost always made to escape outside NCE warming (temporarily).
    .
    I would love to see BEST input just Tmax data adjusted for TOB and MMTS and see what the trend is. This should eliminate 80-90% of the NCE. It would simply be a qualitative test. If the trend were less than half that of Tavg, I think we would need to investigate a lot more.
    .
    News Flash April, 2015: INQUIRY LAUNCHED INTO GLOBAL TEMPERATURE DATA INTEGRITY

    The London-based think-tank the Global Warming Policy Foundation is today launching a major inquiry into the integrity of the official global surface temperature records.

    We hope that people who are concerned with the integrity of climate science, from all sides of the debate, will help us to get to the bottom of these questions by telling us what they know about the temperature records and the adjustments made to them. The team approaches the subject as open-minded scientists – we intend to let the science do the talking. Our goal is to help the public understand the challenges in assembling climate data sets, the influence of adjustments and modifications to the data, and whether they are justifiable or not.

    .
    Ken, have you heard of this? If so, have you considered submitting your analyses?
    http://www.tempdatareview.org/

    I have used Singular Spectrum Analysis (SSA) and some Empirical Mode Decomposition to analyze both observed and modeled temperature series. I find in almost all cases that the series with SSA can be decomposed and reconstructed as a trend plus white and red noise with no good evidence for any periodic components. It is the residuals that remain after removing the SSA derived trend that have red and white noise.

    SSA’s vaunted adaptive decomposition of series into (nonlinear) trends, oscillatory components and noise might seem unequivocal. But, like with any orthogonal mathematical expansion, the physical significance of its results is always moot. And with short records, the results can depend heavily upon the initial choice of the number of eigenvalues to be used.

    That the consequent distinction between signal and noise has little physical basis becomes evident via the high cross-spectral coherence of neighboring temperature records throughout all frequencies, which leaves far less of the total variance attributable to non-signal components than SSA analysis usually attributes. That’s why I never rely upon SSA for that purpose and consider only the residuals from linear trends.
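    For concreteness, a minimal sketch of estimating magnitude-squared coherence between two synthetic neighbor series (Welch-averaged via scipy; all parameters arbitrary, and the cross-spectral phase sky also uses would come from scipy.signal.csd):

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    n = 1200                                 # say, 100 years of monthly data
    shared = 0.05 * rng.normal(0.0, 1.0, n).cumsum()   # common signal
    a = shared + rng.normal(0.0, 0.3, n)               # "station" A
    b = shared + rng.normal(0.0, 0.3, n)               # "station" B

    # expect coherence near 1 at low frequencies, falling off where the
    # independent station noise dominates
    f, Cxy = signal.coherence(a, b, fs=1.0, nperseg=256)
    print(f[:3], Cxy[:3])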

  141. 149676 Ron.

    A 7/29/2012 JC article by Mosher has a map of GHCN dropout vs Berkeley station increase.

    Still disturbed by Mosher’s refusal to give an exact link to his 43 K stations Berkeley Earth Data set.
    No one here really knows which stations he is referring to
    There is a “New estimate of the average earth surface land temperature spanning 1753 to 2011” which most closely approximates his claims.
    14 databases but 44455 stations mentioned, only 36866 useful for analysis neither of which fit 43 K.
    Of course these stations are fixed in location though “new” stations are created at these sites when TOB, thermometer, or aberrant readings occur.
    The life of such stations is on average a mere 5.9 years though that would not have been much of a dustraction back in the 1750’s when there were very few.
    One has to be very careful when talking stations to make sure that they have thermometers. 2/3 rd’s of all BEST weather stations used only record rain, not rain and temperature.
    Though BEST tries very hard to use raw data most of the data it gets is not raw.
    Due to known station dropout GHCN and its subsidiaries use made up stations to “preserve” the integrity of their data base.
    USHCN reports over 1200 stations in it’s makeup but at times less than 700 are real . The missing ones are infilled from data averages from the nearest available stations to maintain the overall data integrity.
    Of special interest among Steven’s comments at the start was one I missed initially.
    Buoys, oil platforms, and stationary ships may well give data, but it should not be included in “land temperatures”, should it?
    From which buoys, hopefully not ARGO?
    How much data is actually ocean data in the 43k stations, and why are you including it, Steven?
    Can anyone else give me a simple reference to the 43k stations quoted?
    It should exist outside of this forum.
    Perhaps ATTP or Eli could help.

  142. Ron,
    “News Flash April, 2015: INQUIRY LAUNCHED INTO GLOBAL TEMPERATURE DATA INTEGRITY”

    My comments on the anniversary of this “News flash” are here.

  143. angech,
    “Can anyone else give me a simple reference to the 43k stations quoted?”
    In 2012 I made a Google Maps viewer of various station sets, including BEST. You can select according to time intervals of available data, and click on station markers for a plot of the data and more metadata.
    It’s here.

  144. Nick, from his April 2016 post:

    So after a year, what has happened? Nothing more to report. The inquiry web pages are still up; submissions have not been published. No further news.

    .
    It is very hard to audit someone when they cannot be compelled to cooperate and you have no subpoena power. As I said, there should be a protocol when public records are changed to include multiple adversarial participants, either integrated, or better, working separately in parallel. The temperature records are what all other decisions are rooted in. Mistakes shouldn’t be tolerated. As Don Monfort said once:

    This is not some inconsequential little paper in a backwater science, like entomology. We wouldn’t be talking about an obscure journal publishing speculative research full of unexplained assumptions, on why green eyed gnats prefer diddling on Thursdays.

  145. Every adjustment should be accompanied by clear documentation of how and why it was made. There is no excuse for skeptics to need to scrape data off a government database and then attempt to decipher why stations in South America should be getting 2-3C negative urban adjustments (cooling the past). We should not have to scratch our heads as to why the USA gets over twice the rate of urban adjustments compared to the rest of the world, and why, in both groups, the adjustments are split almost equally between positive and negative directions. https://climateaudit.org/2008/03/01/positive-and-negative-urban-adjustments/
    https://climateaudit.org/2008/03/01/hansen-and-false-local-adjustments/

  146. sky (Comment #149678)

    Are you going to reply to my question about what spectral methods you used on adjusted and unadjusted temperature series and what differences you found?

    I would not need “cross-spectral coherence of neighboring temperature records throughout all frequencies” to determine that a number of near-neighbor temperature stations have high high-frequency correlations and significantly different trends in the difference series, as I showed in the linked table above.

    I am not clear on what station data you were looking at when you claimed good cross-spectral coherence of neighboring temperature records. Adjusted or unadjusted, and can you give a link to these station data?

  147. Ron Graf (Comment #149683)

    Most data sets use an adjustment (homogenizing) algorithm, and that would make it difficult to document each station individually. GHCN provides both adjusted and unadjusted data, which allows finding the changes made.

    Further to your point, in my mind, is the need for a benchmarking evaluation of these rather complicated adjustment algorithms, where the climate and the non-climate station effects are known. We know that process is being put in place by the benchmarking effort to which I have linked and in which Victor Venema is involved. Recall that the current progress report for that benchmarking alluded to the setting up of that process taking longer than expected because the process is complicated (and, I think, more so than was initially thought). It would be good to hear more about exactly which parts of the process are causing the delay in applying it.

  148. The climate Taliban will never willingly permit transparent climate data.

    When the real Taliban took over in Afghanistan, they got rid of all the meteorologists because predicting the weather meant – predicting the future.

    And the future was god’s domain.

    I’m guessing they aren’t very enamored of climate models either.

    But if they only knew the fallibility of such models, they probably wouldn’t be too concerned.

  149. Re: Ron Graf (Comment #149682)

    The BLS makes CPI adjustments. The methodology is public. The data has policy consequences. There are no formal “multiple adversarial participants, either integrated, or better, working separately in parallel”. But still there is enough material for the ShadowStats wackos to work with and for the gold bugs to feel validated.
    My guess is that no amount of transparency with the temperature records will satisfy the skeptics.

  150. “It is very hard to audit someone when they cannot be compelled to cooperate and you have no subpoena power.”
    Well, Lamar Smith thundered in, bristling with subpoena powers. He hasn’t come up with anything either. But you don’t need to audit anyone. The arguments for the adjustments are published. You just need to read them, and if you think they are wrong, give a scientific response. That’s what isn’t happening.

    “there should be a protocol when public records are changed”
    Public records are not changed.

  151. RB: “My guess is that no amount of transparency with the temperature records will satisfy the skeptics.”
    .
    RB, I do believe in evolution.
    .
    Actually, I have been devoted to science since age 7. It’s funny watching Hillary Clinton imply that I am a religious, gun-clinging, Bible-thumping redneck. I would love to be in a scientific debate with John Kerry, Barack Obama, or Hillary. I hope somebody does that some day.

  152. Nick: “Public records are not changed.”
    .
    I guess that would be a matter of legal interpretation. My feeling is if you are making public proclamations of “the record” and it is the adjusted record, and you are claiming you did not legally, technically, change the record, most would be very misled and confused.
    .
    Nick, RB, I try to be as open-minded as possible. I have no religious point of view. I think we should be going gangbusters on alternative energy. I think those that think they have to rig the deck or lie to persuade the public to go along are flat wrong in motive and in method to achieve their aims.
    .
    I think McKitrick and Michaels (2007) seems a little steep for NC effects. Do you think Peterson and Wickham are too steep in the opposite fashion? If so, where did Peterson err?
    .
    BTW, do you have anything critical to say about Joelle Gergis’s reconstruction, or MBH for that matter?

  153. Ron,
    “My feeling is if you are making public proclamations of “the record” and it is the adjusted record.”
    The “record” is a record of observations. That isn’t changed. People do calculations based on these, to get a global average temperature etc. That’s a calculation. We know that, because different organizations (and even I) publish different results, and no-one is fazed by that. And if they think of a way of improving their calculation, then they should do that.

  154. I just want to take a moment to say remarks like:

    The climate Taliban will never willingly permit transparent climate data.

    Should not be tolerated. Anyone who likens people to terrorists simply for being on the other side of a “debate” should be laughed out of the room.

  155. The arguments for the adjustments

    It seems like these arguments are assumed to be superior based on an appeal to authority – “they are published”.

    Lots of flawed arguments get published.

    I don’t think any contrary evidence offered would be sufficient for the arguments to be retracted, based on the behavior of people like Nick Stokes.

    So, thanks for nothing.

    Andrew

  156. And when Nick Stokes says this:

    give a scientific response

    I can only chuckle.

    Andrew

  157. Nick,

    If no one has told you yet, you are a propagandist, not a scientist.

    Andrew

  158. Andrew_KY,

    That’s a wonderful straight line, but I’m not going to touch it.

  159. The most fascinating thing is that any view, warm, lukewarm, or cold, is attached to a stigma by the others, displaying a natural animalistic tendency to dehumanize and thus de-legitimize the holders’ thoughts, and ultimately their lives.
    .
    Nick, I would guess you, like Steven M, are afraid to cite any flaw in the consensus for fear it will be seized upon to de-legitimize “climate science”. Ironically, I think it has the opposite effect. I am more concerned with preserving the brand name of all science. And, with such concern, I would be relieved and re-assured that science was being done if mistakes could be acknowledged.

  160. Ron,

    Stop trying to read people’s minds, i.e. attribute motives. It’s a waste of bandwidth.

  161. When I quoted McKitrick from 2010 saying, “Fewer than one-third of the weather stations operating in the 1970s remain in operation,” Nick replied:

    Again just untrue, and ignorant. He is talking about GHCN Monthly, which consists essentially of two parts. One is a collection of historic records, made by a grant funded project in early 1990’s. The other, continuing from then, is a subset of those stations which report monthly under the CLIMAT system….

    .
    I would say this reply was informative but neither representative of McKitrick’s point nor acknowledging of its context. It made it sound as though McKitrick had just made something up, by tortured semantics; something maybe legally true but not true in spirit.
    .
    Then I found on my own investigation the issue of the “March of the thermometers,” which was a real issue and legitimate controversy. Nick, I know you were aware of this because I see your comments about it all over the WUWT threads in 2010. Your argument then was that the grid would not be affected. I’ll put the question to you: Do you see any advantages to having one 166-yr station versus 166 1-yr stations? If so, what?

  162. I would very much like at some point to have a discussion on these blog threads about testing the adjustment algorithms, without the silly tit-for-tat exchanges. I am linking a post from Victor Venema’s Variable Variability blog that points to some of the issues that need testing and explains in general terms how some of the algorithms operate.

    What Victor Venema states in the linked excerpt below kind of/sort of gets to the point of my observation from the table I linked above, whereby paired temperature stations with high correlations can have very significant trends in their difference series. The correlation comes from, I would suppose, the local weather and the complicated regional signal to which Victor refers in his post below, and once that is removed in a difference series there should not be a significant trend left. My calculations in the linked table show that in some pairs the trend remains – and with adjusted series. A toy numeric illustration of the difference-series idea follows the excerpt below.
    I have had blog discussions with Victor about breakpoint algorithms’ capabilities in finding slowly changing trends, and while we did not go tit for tat, we agreed to disagree. I believe slowly changing trends are a consideration of the benchmarking effort of which Victor has been a part.
    I have also had discussions with the GHCN group about my findings, without getting the idea that they were particularly interested in it, at least at that time. We also had a discussion about finding breakpoints in the difference series of adjusted station pairs. I could find breakpoints using my method, and I think GHCN thought that I found them because my method was different from theirs. I do not recall now whether I asked them to look for breakpoints in the difference series of adjusted pairs with their method, but if not, I should have.
    None of this speaks to quantifying biases or uncertainties in the adjusted temperature data (that would be better handled by the benchmarking effort) but rather brings forth questions that should eventually be addressed.

    http://variable-variability.blogspot.com/2013/07/five-statistically-interesting-problems.html

    As I see it, there are five problems for statisticians to work on. This post discusses the first one. The others will follow in the coming days. UPDATE: they are now linked in the list below.

    Problem 1. The inhomogeneous reference problem
    Neighboring stations are typically used as reference. Homogenization methods should take into account that this reference is also inhomogeneous

    Problem 2. The multiple breakpoint problem
    A longer climate series will typically contain more than one break. Methods designed to take this into account are more accurate than ad-hoc solutions based on single-breakpoint methods

    Problem 3. Computing uncertainties
    We do know about the remaining uncertainties of homogenized data in general, but need methods to estimate the uncertainties for a specific dataset or station

    Problem 4. Correction as model selection problem
    We need objective selection methods for the best correction model to be used

    Problem 5. Deterministic or stochastic corrections?
    Current correction methods are deterministic. A stochastic approach would be more elegant

    Relative homogenization

    Statisticians often work on absolute homogenization. In climatology relative homogenization methods, which utilize a reference time series, are almost exclusively used. Relative homogenization means comparing a candidate station with multiple neighboring stations (Conrad & Pollack, 1950).

    There are two main reasons for using a reference. Firstly, as the weather at two nearby stations is strongly correlated, this can take out a lot of weather noise and make it much easier to see small inhomogeneities. Secondly, it takes out the complicated regional climate signal. Consequently, it becomes a good approximation to assume that the difference time series (candidate minus reference) of two homogeneous stations is just white noise. Any deviation from this can then be considered as inhomogeneity.

    The example with three stations below shows that you can see breaks more clearly in a difference time series (it only shows the noise reduction as no nonlinear trend was added). You can see a break in the pairs B-A and in C-A, thus station A likely has the break. This is confirmed by there being no break in the difference time series of C and B. With more pairs such an inference can be made with more confidence. For more graphical examples, see the post Homogenization for Dummies.
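
    To make the excerpt concrete, here is a minimal sketch of the difference-series idea, assuming numpy; the step size, noise levels, and the simple max-of-t scan are invented for illustration and are not GHCN's or Venema's actual algorithms.

        # Relative homogenization sketch: a step change stands out in a
        # difference series once the shared regional signal is removed.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 240                                       # months
        regional = np.cumsum(rng.normal(0, 0.2, n))   # shared weather/climate
        A = regional + rng.normal(0, 0.3, n)
        B = regional + rng.normal(0, 0.3, n)
        A[150:] += 0.8                                # inhomogeneity in A only

        def best_break(d, guard=12):
            # Index maximizing the two-sample t statistic of a mean shift
            stats = []
            for t in range(guard, len(d) - guard):
                a, b = d[:t], d[t:]
                se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
                stats.append(abs(a.mean() - b.mean()) / se)
            return guard + int(np.argmax(stats))

        # A - B is (nearly) white noise plus the step, so the break near
        # index 150 is recovered cleanly.
        print(best_break(A - B))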

  163. DeWitt: “Stop trying to read people’s minds, i.e. attribute motives.”
    .
    I’m pretty sure there is a whole science devoted to understanding human and animal behavior. If anyone was taking personal offense, they were reading my mind rather than my words.

  164. Ron,
    “which was a real issue and legitimate controversy”
    I don’t think it was. Yes, I commented a lot at the time, and blogged here. There are two main safeguards which ensure that changing the mix of stations won’t bias the result. One is gridding, the other is the use of anomalies. Anomaly is designed to ensure that the expected value of each station is similar (near zero), i.e. to create a near-homogeneous population for averaging.

    And that is a major reason why a 166-year record is better than 166 one-year records. If I tell you that the average was 17°C in Moyhu in 1971, that is useless information for climate unless you can tell me – well, what is it normally? With 166 one-year records, you can’t answer that for any of them. You can’t form anomalies. None can help you with trend. But a long record can.
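
    A toy numeric check of the safeguard described above, assuming numpy; the station counts, climatological normals, and the 1990 dropout are all invented purely to show the mechanism.

        # Dropping warm-normal stations steps a raw average but not an
        # anomaly average, because anomalies have near-zero expected value.
        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1950, 2011)
        signal = 0.01 * (years - 1950)            # common warming, 0.01 C/yr

        def station(normal):
            return normal + signal + rng.normal(0, 0.2, len(years))

        warm = [station(17.0) for _ in range(50)] # dense warm network
        cold = [station(5.0) for _ in range(10)]

        def alive(i, yr):                         # 40 warm stations drop out in 1990
            return i < 10 or yr < 1990

        for mode in ("raw", "anomaly"):
            out = []
            for j, yr in enumerate(years):
                vals = [s[j] if mode == "raw" else s[j] - s[:40].mean()
                        for i, s in enumerate(warm) if alive(i, yr)]
                vals += [s[j] if mode == "raw" else s[j] - s[:40].mean()
                         for s in cold]
                out.append(np.mean(vals))
            # Step across the 1990 dropout: large for raw, ~0.1 for anomalies
            print(mode, round(out[45] - out[35], 2))

    The raw average drops by roughly 4 C purely from the changed station mix, while the anomaly average shows only the real warming between the two years.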

  165. Kenneth Fritsch, if you want to try an interesting experiment, here’s one I like. Imagine if you wanted to use breakpoint algorithms to detect problems in station data, but you couldn’t compare any station to any other.

    It’s interesting because you can detect several types of changes in station data just by examining it like you would any time series. With some stations, you can actually tell there was some physical change that affected the data.

    What makes this so interesting to me is despite using breakpoint algorithms for this experiment, your results will be wildly different than if you had used breakpoint algorithms to compare stations to one another. The breakpoints identified by the two approaches are often wildly different.

    Side note: it annoys me that groups like BEST claim their breakpoint algorithms detect things like station moves. If you look at the breakpoints BEST “detects” with its “empirical breakpoint” approach, it is readily apparent they often have no discernible physical basis.

    One could justify the tentative conclusion that these approaches homogenize the data but do not actually account for microclimate influences and the like.
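
    A compact sketch of that experiment, assuming numpy and reusing the toy construction from the sketch under the previous comment; the only point is that the lone-series scan and the pairwise scan can flag different places, not that either scan is anyone's production algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 240
        regional = np.cumsum(rng.normal(0, 0.2, n))   # shared regional walk
        A = regional + rng.normal(0, 0.3, n)
        B = regional + rng.normal(0, 0.3, n)
        A[150:] += 0.8                                # the only 'physical' break

        def best_break(d, guard=12):
            stats = [abs(d[:t].mean() - d[t:].mean()) /
                     np.sqrt(d[:t].var(ddof=1) / t +
                             d[t:].var(ddof=1) / (len(d) - t))
                     for t in range(guard, len(d) - guard)]
            return guard + int(np.argmax(stats))

        print(best_break(A))      # lone series: often pulled to regional swings
        print(best_break(A - B))  # pairwise: lands near the real step at 150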

  166. RB (Comment #149688)

    For sure that conspiracy stuff just gets in the way of legitimate concerns, much like some of the superficial discussions that proceed at these blogs at times.

    There can be legitimate questions about the inflation index using the wrong basket of goods or not considering asset inflation or not even being such a great indicator for the Federal Reserve to act on.

    With the rather dismal GDP numbers and the fact that the Federal Reserve cannot hit its inflation targets, even with near-zero interest rates, if the government could arbitrarily adjust the numbers it certainly would have adjusted those numbers. China is another matter and shows that, given sufficient power, a government is not above such shenanigans just to protect that power. I believe Greece was doing some mighty covering of its debt, and that led to their crisis a while back. Also recall that even with all the legitimate data available to their agency, the Federal Reserve missed the call on the past Great Recession.

    I think some of the conspiracy stuff emanates from the large adjustments that are made to data after the initial offering, even though those changes are transparent; and that goes for government statistics and evidently for temperature adjustments.

  167. Nick Stokes writes:

    I don’t think it was. Yes, I commented a lot at the time, and blogged here. There are two main safeguards which ensure that changing the mix of stations won’t bias the result.

    But he knows full well this isn’t true. The safeguards he refers to are designed to mitigate the effect of changing the mix of stations so that it won’t bias the results, but it is trivially easy to see they cannot completely prevent that from happening.

    I get that most people may not care about this distinction, given that in most cases the potential effect would be small and we can check any changes that happen for potential biases, but it is sad that people overstate the certainty of results like this.

  168. Brandon Shollenberger (Comment #149705)

    I think you have to consider that a temperature series can have breakpoints that are derived from natural events. Using multiple station comparisons and difference series can help get around this problem if the natural event is sufficiently widespread.

    I thought that Best used a weighting method for stations and that a breakpoint (derived in an approach much the same as what GHCN uses) would reduce the weighting. I also recall that a station move is considered an automatic reduced weighting and that the series from before and after the move are considered separate stations for their calculation purposes. I believe that Best is unique in the temperature data set business in using a weighting approach. The method is more complicated than this weighting and further depends, I think, on a station’s differences from the field in which it resides.

  169. Nick, first, I respect that your knowledge of climate science far exceeds mine. And I would want to avoid slinging mud, as I have seen others do at you. In fact I admire your thick skin and the adroit responses you give rather than returning mud.
    .
    I would like to disagree with you on your point that anomalies cannot be used on 1-yr stations. From my understanding of Mennian/BEST-type homogenizations, they use the group to do a spatial baseline rather than a temporal one. So BEST does not compare single stations to their own past trend except in how it relates to the group. So BEST could use 1-year stations. And if they were all of perfect quality they could produce a high confidence trend. The problem comes in diagnosing quality as Ken and Victor point out.

  170. Nick, whether you thought the controversy was legitimate or not, do you think I am wrong to be disappointed that you did not mention it in your lengthy explanation responding to my quote? (I would actually appreciate a humble response.)

  171. Brandon,
    ” but it is trivially easy to see they cannot completely prevent such from happening.”
    Well, OK, nothing’s perfect. So let’s say, strives to ensure. But it’s very effective, and my point is that the effect of these measures was completely ignored in the original discussion. In particular, the use of anomaly. The claim was that “stations were marching southward”. Meaning that as time went on, there were fewer stations in more northerly regions (of the NH).

    But anomaly counters this, by subtracting the station mean (over a period). There is no reason why the anomaly should increase because stations are in warmer places. In fact, there is a bias in HADCRUT, not necessarily from changes, but from under-representation at the poles and not very effective gridding (lots of empty cells). Arctic regions are warming (a systematic change in anomaly), and this isn’t reflected adequately in the global average, as Cowtan and Way showed. Had HADCRUT included more Arctic stations, this would have had a warming effect.

    The reasons for the “march” are typified by Turkey and Brazil. The first stage of GHCN was a collection of whatever good historic records they could find and digitize. Turkey had 253; Brazil about 50. When it came time to set up ongoing maintenance, they didn’t need 253 stations in Turkey. They still have more than they need (about 50). But they needed to keep everything they could in Brazil. This didn’t change the global average. Turkey was not warmer because of the dropped stations.

  172. To revise my comment regarding 1-yr stations: if all 166 of them ran consecutively you could not use a BEST-type analysis, but you could use each station as an anomaly relative to the others and assume that site variation would cancel out as long as it was random. The bigger point is that if you have the same quality of detection you get the same information from 166 years of data no matter how you slice it. Optimal sampling in space as well as in time is another consideration, though. But the poor ability to audit for quality control is by far the largest point.

  173. Nick: “…When it came time to set up ongoing maintenance, they didn’t need 253 stations in Turkey…”
    .
    You claimed that McK’s comment was ignorant because the stations continued on, just not reporting. Was their quality control also dropped substantially? This would be relevant to your point that the stations continued.

  174. Should surface station climate data be homogenized?
    .
    I would think only if one believed the actual distribution was homogeneous.
    .
    But satellite temperature analyses indicate at least some inhomogeneity.
    .
    I keep going back to this comparison, noting that UDEL, with the highest resolution, also has the highest range of variance. That may mean the net effect of homogenizing is not large, but if the little positive and negative variations are real, then they should be captured and preserved.
    .
    I don’t have a feel for how large the effects may be, but while solving some biases, it does seem plausible that homogenizing introduces new biases.

  175. Ron,
    ” you could use each station as an anomaly to the other”
    You could if they were in the same place. GISS/Hansen does something like that, building up an anomaly base within a grid cell. I’m not a fan of that. The point is to reduce the expected value at each station to a common value (zero, as best you can). That takes the pressure off requiring samples to represent different parts of the population equally. The parts are not (very) different.

    “just not reporting. Was their quality control also dropped substantially?”
    I’m sure they reported to somebody. There’s no reason to expect that GHCN’s decision about inclusion reduced quality of local readings, although it may be that CLIMAT pushed standards up. On the general point of “ignorant”, it relates to this issue of gridding and anomaly. You can argue about their effectiveness, but to ignore their effect entirely is ignorant. But I was also referring to the wrong understanding of how GHCN developed.

  176. Ken Fritsch:

    You write:

    Are you going to reply to my question about what spectral methods you used on adjusted and unadjusted temperature series and what differences you found?

    While my unedited syntax may run longish under the press of time, I cannot imagine how much more clearly my use of unadjusted data can be stated than:

    I thought that my exclusive use of unadjusted data was amply clear. All of my data series, in fact, are from prior versions of GHCN, and were vetted individually. They are comparative gems found in a pile of gravel.

    Nor, after all my references to cross-spectral coherence, should it come as a surprise that I rely heavily upon cross-spectrum analysis in my work.

    This necessarily involves the estimation of the power spectra of both time-series. With short record lengths militating against direct FFT decimation algorithms, I resort to the Wiener-Khinchine theorem, utilizing more robust estimators of auto- and cross-covariance than those employed by Blackman and Tukey. These model-free spectral results generally are quite consistent with those obtained by Burg’s MEM algorithm, which relies upon high-order AR models of the time-series.

    What I usually find in cross-spectral comparisons of adjusted and unadjusted station records is that the low-frequency spectral content has been changed, but without any consistent improvement in low-frequency coherence with neighboring vetted records. In other words, the spectral components that create the impression of trend have been manipulated without bona fide scientific basis. That’s why I remain skeptical of all “homogenizations,” especially the egregious low-frequency butchery performed by BEST’s “scalpel.”

    For finding close-by station records exhibiting high coherence throughout the entire baseband range, I would suggest non-mountainous regions such as the Deep South, where major land-use changes were made centuries ago and the station coverage is dense. Having spent much valuable time on vetting my benchmark stations in the USA, I am not willing to disclose them on a blog that treats all unpublished material in comments as its own intellectual property.

    BTW, to gain more insight into some of the issues you raised, here’s an excellent paper by Vautard et al., who analyzed an earlier version of IPCC’s global record via SSA methods:

    http://s3.amazonaws.com/academia.edu.documents/31042283/BCCT_424.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1469744824&Signature=WeLKCp4TTYNSsj8SoWSIM72W9lo%3D&response-content-disposition=inline%3B%20filename%3DSingular-spectrum_analysis_A_toolkit_for.pdf

    For the IPCC temperature data, they found that “oscillatory components account for about 50% of the variance, after trend removal.” Along with trend manipulation, the suppression of strong oscillatory components often proves to be a concomitant effect of inexpert homogenization throughout the Ver.3 GHCN data base.
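
    For readers who want to poke at coherence themselves, here is a minimal sketch using ordinary Welch estimates from scipy; this is the garden-variety tool, not the Wiener-Khinchine or Burg MEM machinery described above, and the toy series are invented.

        # Squared coherence between two synthetic 'neighboring' records
        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(3)
        n = 1200                                      # 100 years, monthly
        shared = np.cumsum(rng.normal(0, 0.1, n))     # common low-freq signal
        x = shared + rng.normal(0, 0.2, n)
        y = shared + rng.normal(0, 0.2, n)

        f, Cxy = coherence(x, y, fs=12.0, nperseg=240)  # fs = samples/year
        # High Cxy across the baseband means the records share most of their
        # variance; an adjustment that alters low-frequency content in one
        # record but not in its coherent neighbor would degrade Cxy at low f.
        print(Cxy[:5].round(2))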

  177. Nick: “I’m sure they reported to somebody.”
    .
    I thought the point was that they stopped reporting their data to GHCN monthly, the international standard record.
    .
    Nick: “There’s no reason to expect that GHCN’s decision about inclusion reduced quality of local readings, although it may be that CLIMAT pushed standards up.”
    .
    As anyone who runs anything knows, standards do not keep up by themselves. I suspect the Anthony Watts-organized citizen station audit circa 2008 proved that. The “March” was likely the bureaucratic response, two years in the making.

  178. “I thought the point was that they stopped reporting their data to GHCN monthly, the international standard record.”
    They never did. Stations report to their Met offices; the MOs, since about 1996, report selected station results via CLIMAT to the WMO. GHCN monthly collects information from the CLIMAT forms.

    “The “March” was likely the bureaucratic response two years in the making.”
    Hardly. The change complained of happened around 1990.

  179. Kenneth Fritsch:

    I think you have to consider that a temperature series can have breakpoints that are derived from natural events. Using multiple station comparisons and difference series can help get around this problem if the natural event is sufficiently widespread.

    Depending on the algorithm you use, sure. But in the same way, comparing multiple stations can lead to a breakpoint being assigned to a location simply because one station happens to be in a spot that doesn’t reflect a particular change caused by some natural event.

    This goes back to the problem of how one defines what these “breakpoints” are. BEST specifically claimed its breakpoint algorithm finds things like station moves and microclimate issues, but in reality, that description is wrong. Not only is it not particularly good at finding those; what it actually finds is any inhomogeneity – regardless of the cause. If one’s goal is to homogenize the data, that’s fine, but if one’s goal is to fix problems in the data… that’s not necessarily the same thing.

    I thought that Best used a weighting method for stations and that a breakpoint (derived in an approach much the same as to what GHCN uses) would reduce the weighting.

    I’m not sure where you got this idea. When BEST identifies a breakpoint, it simply splits the series at that point. The weight assigned to the separated segments is independent of that process (though perhaps whatever caused the breakpoint to be identified might also lead to de-weighting).

    I also recall that a station move is considered an automatic reduced weighting and the series from before and after the move are considered separate stations for their calculation purposes.

    Station moves are treated the same way as any other breakpoint – the record is split and the resulting segments are treated the same as any others.
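
    A minimal sketch of the splitting step as described here, assuming numpy; it mirrors the description in this comment, not BEST's actual code, and the breakpoint indices are supplied by hand.

        import numpy as np

        def scalpel(record, breaks):
            # Cut a record at the given indices; each segment is then treated
            # downstream as if it were an independent station, with its own
            # baseline estimated during the fit.
            return np.split(np.asarray(record, dtype=float), sorted(breaks))

        segments = scalpel(np.arange(100), [40, 75])
        print([len(s) for s in segments])             # [40, 35, 25]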

  180. Nick Stokes:

    Well, OK, nothing’s perfect. So let’s say, strives to ensure.

    Yes, please do. It is hardly appropriate to criticize people for not understanding/discussing a point while knowingly misrepresenting that point. The near-universal exaggeration of certainty that comes from people defending the temperature records is a significant part of why these discussions are often so fruitless.

    One of the most remarkable things to me is having seen people responsible for making temperature records acknowledge sources of uncertainty in discussion which they knowingly didn’t include in their published uncertainty levels or in any text given with those numbers. I get you can’t always estimate the full effect of all sources of uncertainty (though some of the ones I’ve seen can be estimated), but to not even write anything about them to warn the reader when you know they exist is just dishonest.

    And yet, nobody gets up in arms about it. The reason? Because the debate is polarized between two “sides” that often exhibit little interest in describing things accurately as it would complicate their message.

    In fact, there is a bias in HADCRUT, not necessarily from changes, but from under-representation at the poles, and not very effective gridding (lots of empty cells). Arctic regions are warming (a systematic change in anomaly), and this isn’t reflected adequately in the global average, as Cowtan and Way showed. HAD included more Arctic stations – this had a warming effect.

    I’ve always found this argument interesting because there is no such thing as a “global temperature.” The global products being used don’t measure a real thing. As such, there is no inherent reason they must include any particular region of the planet, nor is there any reason all parts of the planet must be weighted evenly or according to any particular scheme. One could take the temperature of two regions and call that the “global temperature” if they wanted, and it would have as much empirical basis as anything any other group produces.

    So ultimately, this argument comes down to definitions. Originally, “global temperature” was (effectively) defined in one way and most discussion/work was based off that. Then, decades later, people decided to change the definition being used because it was more “right.” It is not objectively better though, and there are many arguments for not making the change. You won’t hear people talk about that though.

    Of course, the near-obsession with global temperatures is silly anyway as it is a terrible metric for any meaningful decision or knowledge about global warming. Still, if people want to do it, I think a more coherent approach would be better. As in, people should actually come up with a precise definition of what they are trying to measure rather than constantly change the parameters of what’s being measured while using the same words to describe it.

    Personally, I like the idea of using a measure of total energy. It isn’t that difficult to convert temperature fields into energy estimates which can then be summed across the planet’s surface. That gives you a consistent, objective standard to use. I don’t think we’ll ever see it done though.
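
    As a back-of-envelope sketch of the total-thermal-energy idea, assuming numpy; the 100 m layer depth, the constants, and the synthetic 5-degree temperature field are illustrative assumptions, not a proposal for an actual product.

        # Convert a gridded temperature field into a single energy number
        import numpy as np

        R = 6.371e6           # Earth radius, m
        rho, cp = 1.2, 1004   # near-surface air density (kg/m^3), c_p (J/kg/K)
        depth = 100.0         # illustrative layer thickness, m

        lats = np.deg2rad(np.arange(-87.5, 90, 5))
        lons = np.deg2rad(np.arange(2.5, 360, 5))
        T = 288.0 + np.random.default_rng(4).normal(0, 2, (len(lats), len(lons)))

        dlat = dlon = np.deg2rad(5)
        cell_area = (R ** 2) * dlon * dlat * np.cos(lats)[:, None]  # m^2 per cell
        E = np.sum(rho * cp * depth * cell_area * T)                # joules
        print(f"{E:.3e} J")

    Tracking changes in such a sum over time, rather than its absolute value, is what would matter for a warming metric.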

  181. Brandon S:

    When BEST identifies a breakpoint, it simply splits the series at that point. The weight assigned to the separated segments is independent of that process (though perhaps whatever caused the breakpoint to be identified might also lead to de-weighting).

    Brandon, thanks for sharing your knowledge on this. It would be my guess that the whole reason to split off the station into a new segment would be so that it could then be selectively de-weighted, like a surgeon cutting with his scalpel and subsequently ablating the offensive tissue.
    .
    It would be natural for BEST to have run tests to find what degree of known issues could be caught by their algorithm. This should be no secret. For that matter there shouldn’t be any secrets held by a publicly funded non-profit organization. The public are their shareholders.
    .
    TOBS (0.2-0.5C) and MMTS (0.2C) breaks should be child’s play to find. MS: possibly, if the offense is sudden and overt. UHI: probably not going to be caught. Rural LULC: not a chance.

  182. On second thought, the de-weighting is unnecessary since the two new segments have their own independent baselines. MS could be de-weighted, but how does the algorithm know it’s MS vs. TOBS? De-weighting must be the alternative to splitting, in order to handle spurious temporary anomalies, not trend breaks. If the MS is constant for a period in a way that affects trend, the algorithm might just split the station and preserve the MS issue.

  183. Ron,
    “On second thought, the de-weighting is unnecessary since the two new segments have their new independent baselines.”
    The thing is, you are less certain of each new baseline average, so subtracting them adds more uncertainty to the anomaly.
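
    A quick numeric illustration of that point, under the simplifying assumption of independent, equal-variance monthly values; the segment lengths are invented. For a single value and a baseline mean over n values, the variance of the anomaly is sigma^2 * (1 + 1/n), so shorter post-split baselines inflate the uncertainty of every anomaly computed against them.

        # Anomaly variance vs. baseline length for independent values
        sigma2 = 1.0
        for n in (360, 120, 36, 12):   # baseline lengths in months
            print(f"n={n:4d}  var={sigma2 * (1 + 1/n):.3f}")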

  184. Read at JC that Steven has been very unwell lately.
    May explain difficulty in posting here.
    I, and I imagine everyone here, wish him a speedy recovery and future good health.
    Thanks again for the two posts as well.

  185. Ron Graf:

    Brandon, thanks for sharing your knowledge on this. It would be my guess that the whole reason to split off the station into a new segment would be so that it could then be selectively de-weighted, like a surgeon cutting with his scalpel and subsequently ablating the offensive tissue.

    It would be natural for BEST to have run tests to find what degree of known issues could be caught by their algorithm. This should be no secret. For that matter there shouldn’t be any secrets held by a publicly funded non-profit organization. The public are their shareholders.

    On second thought, the de-weighting is unnecessary since the two new segments have their own independent baselines. MS could be de-weighted, but how does the algorithm know it’s MS vs. TOBS? De-weighting must be the alternative to splitting in order to handle spurious temporary anomalies, not trend breaks.

    The BEST algorithm doesn’t know anything about any type of error. It just detects what it believes to be inhomogeneities and splits stations at them. BEST claims the process is good at detecting problems, but there’s basically no published work demonstrating such. All we have is their word and their presentations making the claim. Personally, I don’t think that’s how science is supposed to be done, but… eh, it seems nobody really cares about BEST’s many failings. I know I gave up discussing it due to lack of interest. (For the record though, the public doesn’t fund BEST.)

    As for how splitting records and de-weighting interact, they are not really alternatives designed to catch different sources of error. In theory, the de-weighting could catch single-point outliers that wouldn’t be caught by breakpoint analysis, but BEST already filters extreme outliers within the data. Besides which, the entire station record (or the segment with the outlier, if the record were split) would be de-weighted, meaning that a single month’s data would cause years of data to receive less weight.

    Realistically, the de-weighting does one thing – homogenizes the data. At a high level, it is just a form of smoothing the data set.

    Nick Stokes:

    The thing is, you are less certain of each new baseline average, so subtracting them adds more uncertainty to the anomaly.

    While this might be true, it has nothing to do with any aspect of the BEST methodology being discussed. BEST doesn’t de-weight proxies based on this notion.

  186. angech:

    Read at JC that Steven has been very unwell lately.
    May explain difficulty in posting here.

    Given the number of comments he’s written at other sites during the time he’s been absent here, it is clear he prioritizes jumping into other discussions over commenting here, even just to say it will be a while before he can have a substantial discussion. His medical problems are no explanation for why he hasn’t commented here.

    I get medical issues can suck, and I have had my own to deal with before, but people are too quick to excuse poor behavior.

  187. Nick, thank you for your reference to your helpful list of stations.
    I still lack a simple reference to the 43k BEST station list referred to in this paper.
    I am surprised at people going on about station numbers and other matters without a clear idea of what stations are actually being referred to here.
    References to other subsets like GHCN and their problems merely muddy the waters.
    I must take it that, like everyone else here, you do not know the source of the 43k.
    Either that, or if you do know, you do not wish to divulge it because (mind reading, sorry DeWitt)
    the actual data set referred to, if it actually exists, could be probed and prodded to reveal some inconsistencies you would not be happy with.

    Brandon
    149707
    Nick knows fully well that this isn’t true.
    Well said, but why is it only you pointing out this obvious fact?
    Carrick and DeWitt should be jumping all over these statements of misinformation.

    149704
    “I commented a lot at the time, and blogged here. There are two main safeguards which ensure that changing the mix of stations won’t bias the result. One is gridding, the other is use of anomaly. Anomaly is designed to ensure that the expected value of each station is similar (near zero), ie to create a near homogeneous population for averaging.”

    Wrong.
    Anomalies are not designed, ever, to ensure any expected values are similar.
    Anomalies are anomalies, variations from the mean.
    Designing a programme to force station variations to agree with each other is one of the cute ways of torturing data to a specific aim.

    “And that is a major reason why a 166-year record is better than 166 one year records. If I tell you that the average was 17°C in Moyhu in 1971, that is useless information for climate unless you can tell me – well, what is it normally. With 166 1 year records, you can’t answer that for any of them.”

    Wrong,
    So very wrong.
    They actually give you the same information.
    In fact a 166 year record is made up of 166 1 year records.
    Legalistically the 1971 record is better than the 166 year record.
    If you have only one record ever then that record is the most exact information you could ever have to theorise from.
    Once you introduce a 166 year record you will have all sorts of changing trends and confusion in the data.

  188. Brandon S:

    … Personally, I like the idea of using a measure of total energy. It isn’t that difficult to convert temperature fields into energy estimates which can then be summed across the planet’s surface. That gives you a consistent, objective standard to use. I don’t think we’ll ever see it done though.

    Wouldn’t total energy include elements not usually quantified by temperature? Atmospheric motion? Tides?

    Maybe there is no way to quantify all of this, nor historical records to which a comparison could be made.

  189. angech (Comment #149725)

    I would also wish Steven Mosher the best in his recovery from his illnesses.

  190. I’m on the road and limited to my phone, so I can’t respond to everything I would like to. For the moment:

    angech, I find people rarely point out errors that exaggerate things in the direction they “like.” How rare it is tends to follow a function based on the person and severity of the exaggeration. That’s just “human nature.” People suck.

    jferguson, I guess I should have been more clear. I meant thermal energy, not all energy. Looking at all energy would require silly things. I’m not asking people to look at gravitational and nuclear forces or calculate the energy one would get if they converted all mass of an area into energy. I’m just talking about heat.

  191. Brandon Shollenberger (Comment #149720)

    Brandon, there are a number of issues and concerns about the limitations of temperature adjustment algorithms that I judge could be tested using the benchmarking concept: a realistic climate free of non-climate and micro-climate effects is used, known non-climate and micro-climate effects are added, and the algorithm is applied to determine how well the adjustments get back to the original climate. Doing this benchmarking is not a simple matter when the issues that can affect a station are not well known, and further when the limits of the algorithm to adjust temperatures are tested with a separate set of conditions.

    For those who have thought sufficiently deeply about the adjustment processes, the complications become apparent. I think it is those complications that may well be delaying the current benchmarking process from being applied.

  192. Mosher needs prayers regardless of the status of his current physical health. If he is indeed ill, I hope he gets better. A simple one-sentence comment from him could help us, but it’s not his M.O. to clarify things.

    Andrew

  193. Read at JC that Steven has been very unwell lately.
    May explain difficulty in posting here.

    .
    When you don’t feel well you stay closer to home. We likely seem to be hostile territory, I guess. The thing is, we appreciate the source of knowledge; I even appreciate it when I find upon checking that only the factual portion supported a point of view. It keeps us on our toes and motivates homework, which is obviously his stated aim. Get well soon, Steven.
    .
    Brandon:

    For the record though, the public doesn’t fund BEST.

    .
    BEST is a 501(c)(3) not-for-profit organization. All grants and donations are non-taxable due to the requirement that the organization serve a benevolent public purpose. Although many NFPs are closely held, like BEST, technically the public owns them. So holding proprietary information can be justified by an NFP that is competing with for-profit companies, for example, selling salad dressing to raise money for the needy. But in the absence of private competition I do not see any justification for secrecy about creating a public temperature index.
    .
    On weighting and slicing, it makes sense that the smaller the slice of a time series, the less certain the baseline for trend. Perhaps BEST takes this uncertainty into account in weighting. In this way a newly created break would be effectively muted by the weighting, but then as years passed its influence would gradually re-emerge.

  194. Kenneth Fritsch, benchmarking the various methodologies in use would be a good thing to do, but the problem will be getting people to use meaningful tests and then publish their results in a way that can be examined.

    The latter is particularly important for both you and Ron Graf. BEST claims to have done certain benchmark tests which prove certain things about the quality of their work, but they’ve chosen not to publish their results in any way that can be checked. The most detailed work is small presentations – posters.

    Ron Graf, BEST doesn’t do anything in particular to account for the baseline issue you refer to. The issue only arises in that shorter segments tend to have greater uncertainty overall.

  195. Brandon Shollenberger (Comment #149720)

    Your representation of the BEST methods, in correcting my faulty recollection of the general approach, makes sense to me. I now recall they slice the station data into shorter segments, as you noted. I am attempting to recall exactly how the weighting works. They compare the individual station to a field, but what is the extent of that field? Is it an iterative process?

  196. Hi Brandon,
    I concede that trying to quantify total energy, not just heat, may be silly, but in my ignorance I had supposed that some solar heat input winds up as mechanical energy, moving air, vapor, water, and so forth. Maybe this form of energy shows up in heat measurements, but I can’t imagine how. Or maybe this mechanical energy has no significance in these discussions.

    Or maybe my mind is going faster than I thought.

  197. jferguson,

    It shows up as heat eventually because all air and water movement, and particularly turbulent flow, is inherently dissipative. Without a constant energy flow, it would stop. IIRC, and I’m not going to bother to look, something on the order of 1% of incoming solar energy powers the circulation of air and water.

  198. angech,

    Carrick and DeWitt should be jumping all over these statements of misinformation.

    To do that, I would have to care about this. I don’t. I’m on record that the surface and satellite temperature records are not fit for purpose for climate studies. No amount of mathturbation will change that. I have a similar opinion on current climate models.

    However, I have a great deal of respect for Ken, Nick and Steven because they have gotten their hands dirty. Most of the rest of you haven’t.

    To put it another way, pace Andrew_KY, this has much the same interest for me as the medieval debates about whether a particular class of hypothetical supernatural beings have a finite volume and pass through the intervening space when traveling from point A to point B. This is often ridiculed by referring to it as debating how many angels can fit on the point of a pin.

  199. Thanks much, DeWitt. I’ve been under the misapprehension that mechanical energy shouldn’t be ignored in these considerations. I’m glad to be corrected. Thanks again.

    jf

  200. jferguson, unfortunately DeWitt Payne has given you an incomplete answer. While it is true mechanical motion like you refer to is dissipative, there is an influx of energy that is not necessarily matched by a dissipative outflux. That is, as global warming increases, it is possible for the amount of such mechanical energy to increase. If that happens, global warming will have increased the amount of energy within the planet’s system in a way the idea I suggested would not capture.

    That is a valid concern, and it would be good to have an estimate of the magnitude of energy we’re talking about. I suspect it would be rather small in comparison. Whether or not it is though, I wouldn’t include it in a measure of global *warming* as mechanical energy is not warmth unless or until it is converted into thermal energy.

    A similar issue is the idea of the planet “greening.” As plant life on the planet increases, more energy absorbed by photosynthesis is trapped within the Earth’s system. That can cause the planet’s total energy to increase even if its temperature does not. I don’t think that should be included in a measure of global warming.

    Of course, one could create separate measures which capture different things. It would be perfectly appropriate to have a “total planetary energy” value, an “atmospheric thermal energy” value, a “near surface energy” value, and whatever other grouping you might care to use. The key is just to define the measure you’re using in a clear and precise manner.

  201. jferguson, to add to what Brandon and DeWitt wrote, I would note that the planet’s surface is in a dynamic, turbulent, steady-state energy balance. The mechanical dynamics of wind and weather are reacting to the energy and attempting to dissipate it. If that state shifts to a higher energy, due to GHG interference with the dissipation of energy off the planet, then it makes sense that the temperature would rise, and, in concert, the mechanical energy dynamics. I believe this is why some climate change predictions warn of increased and more powerful hurricanes and tornadoes. We have yet to see that, which is either an indication that the predictive model is wrong or that our surface is not warming according to the record. Personally I think it’s a little of both.
    .
    An important point that I think DeWitt would agree with is that increased mechanical dynamics should aid in dissipation and thus be a damping influence on temperature rise, a negative feedback. This is in contrast to the repeated consensus claims that the overall atmospheric effect (ECS) is positive in feedback at a best estimate of 3X. Nic Lewis and others, like the Otto study, say ~2, but they assume HadCRUT is right (with no UHI correction) and that there is no warming out of the LIA. MIT’s Richard Lindzen says ~1 ECS.
    .
    The first time I saw DeWitt question the consensus was on this topic, 16 months ago:
    http://rankexploits.com/musings/2015/new-thread/#comment-136106

    There is a reason that most, if not all, models diverge from reality: cloud feedback. Clouds are a negative feedback. It’s very simple to demonstrate that clouds make the Earth cooler than it would be otherwise. I believe that alone is sufficient to prove that cloud feedback must be negative. What got me started on this is someone posted a comment at Science of Doom about how Venus has clouds and it’s hot. Consider how much hotter Venus would be if its albedo were lower and more sunlight reached the surface. Any model with a positive cloud feedback is not only wrong, all models are wrong, it’s not useful.

  202. The atmosphere is extremely inefficient at converting thermal energy into kinetic energy.
    .
    I think DeWitt’s estimate was being conservative (0.01% IIRC).
    .
    That’s why kinetic energy doesn’t show up in the energy diagrams.
    And why thermal energy doesn’t appear in the equations of motion.
    .
    It is a common mistake to assume that an increase in global warming means an increase in motion of the atmosphere.
    .
    Motion of the atmosphere is determined not by the total thermal energy, but by gradients of thermal energy (in addition to friction, gravity, orbital energy, and the pressures from the surface). This occurs meridionally because of the sun shining on a revolving, rotating, nearly spheroidal earth. It also can occur vertically (thunderstorms).

  203. TE, would you hypothesize an amplification of AGW in arid regions, due to the lack of prior GHG (vapor) and thus a larger GHG contrast with the past relative to humid regions? If I recall correctly, DeWitt attributed polar amplification mostly to this effect.
    .
    I ask because most of the pristine stations would be in desert towns with no growth. Yet they may be experiencing the most reduction in DTR of the non-urban sites.

  204. Total energy measurement, temperature fields.

    The energy in other, non-heat forms, atmosphere and ocean movement, does not need to be measured, as it does not affect the actual thermal energy measurement.
    So much heat from the sun will give x thermal and y mechanical energy.
    We are only interested in the amount of thermal energy present.
    No matter how extreme the atmosphere and ocean movements are, they can only come from the heat that comes in. They do not produce new energy.

    BTW is it right that higher temps lead to more hurricanes?
    I recall, probably wrongly, that there were claims early in the AGW debate that it would lead to fewer hurricanes?

  205. DeWitt, your comment that you have no interest is admirable.
    Your comment that you do not care about people making statements of misinformation, ditto.
    But you did raise the issue of Nick Stokes being admired because he has got his hands dirty doing the work.
    Admirable.
    However, when the same hard-working man attempts to mislead with his data and comments, it still needs calling out.
    This is not the first time, and I am not the first person to note that Nick does not play fair.
    Neither does ATTP (who prefers that you do not use his real name as it is disrespectful).
    Mind reading is useless here.
    You and I try to be honest and true to ourselves and our different viewpoints and would not dream of using information misleadingly and knowingly selectively.
    Yet there is no acknowledgement of this tactic, just a proud chalking up on the wall of another victory every time they get away with known (to them) misleading statements.
    43k stations?
    Do you know which of the BEST data sets this comes from? No.
    Do you care? No.
    So that’s all right then.

  206. angech,
    I don’t know why I’ve been brought into this. Firstly, I don’t care if you use my real name, I simply choose not to and ask people not to if they comment on my blog. Secondly, maybe rather than calling people out when you think they’ve said something misleading, why not assume that they’ve said something they believe to be true and present an argument as to why it is not?

  207. Given that I have no interest in your answer, feel free to treat it as one.

  208. Then you are breaking this site’s rules. Please try to respect site owners by following simple rules about what is and is not allowed.

    In case you are somehow unaware, rhetorical questions are forbidden here (unless accompanied by one’s own answer).

  209. BTW is it right that higher temps lead to more hurricanes?
    .
    Hurricanes are episodic and multi-factorial, so it’s not surprising that the statistics don’t indicate much correlation with anything.
    .
    And the number of hurricanes and the intensity of hurricanes are distinct things.
    .
    But with respect to temperature versus gradients of temperature, consider that hurricanes are also governed by gradients, though not obviously so.
    .
    Hurricanes are distinct from mid-latitude cyclones in that they are warm-core lows. But the unit of action in a hurricane is the convective cell (thunderstorm). The buoyancy of this convection occurs because the air mass within the cell becomes warmer than the air above it. So with hurricanes, as with jet streams, it is the gradient of thermal energy (in the vertical for convection, not horizontal as for jet streams), not the total of thermal energy, that matters.
    .
    Now, when a given hurricane passes over relatively warmer waters, that hurricane does intensify, as Katrina did. But that is consistent with the principle of gradients (relatively cooler air over relatively warmer waters is more unstable).
    .
    So, heating the ocean at a faster rate than heating the atmosphere would mean increased intensity, while heating the atmosphere faster than the oceans would mean decreased intensity.
    .
    Consider that in the context of the Hot Spot. Warming over the tropics is supposed to be greater (by a lot) aloft. That argues for increased stability in the tropics. Of course, the Hot Spot isn’t observed for the satellite era and things get scrambled by the atmospheric circulation.

  210. Ron Graf (Comment #149738)

    Mosher’s explanations fit well with my current recollections after my memory was jogged by Brandon. I still do not know the extent of the field(s) used in weighting, but I can go back to BEST to get that information.

    The unique approach of BEST is good to have for an eventual benchmarking test, particularly if the limits of the adjustment algorithms are to be tested.

  211. From Steve Mosher at JC blog.

    Dont expect to hear back from me. Burst appendix.. recovering

    I had a similar appendix complication when I was much younger, when my wife was due to give birth to our youngest child. I was in the hospital for 11 days, and in the meantime my wife came into and left the same hospital for delivery within a two-day period in the middle of my stay. I could see them only as they were leaving, through the open door of the elevator on the floor where I was staying. My wife said I looked like death warmed over.

  212. Turbulent Eddie (Comment #149753)

    I thought that some scientists, back before the North Atlantic saw a reduction in hurricane frequencies, were predicting that wind shear was going to become a factor in reducing hurricanes with increasing SST. I have heard the story of wind shear interfering with the development of tropical storms and hurricanes many times over the past several years.

  213. ATTP:

    …maybe rather than calling people out when you think they’ve said something misleading, why not assume that they’ve said something they believe to be true and present an argument as to why it is not?

    .
    I would ask ATTP if he is aware that the surface record is under contention, as well as the 100% attribution of warming to GHG. If his answer is yes, then I would ask that he point to any of his blog posts in the last year that discusses climate sensitivity while acknowledging that those two factors cannot be assumed a priori.
    .
    Take this post, for example, from April of this year, where ATTP scrutinizes Nic Lewis’s work by questioning the plausibility of a high (0.8) ratio of TCR to EfCS. While ATTP searches for a flaw and finds none, he concludes his post with this dismissal of Lewis:

    I should also add that focusing only on climate sensitivity can be a bit misleading, as how much we will actually warm will depend on climate sensitivity and on carbon cycle feedbacks. These can be combined into a single quantity called the transient response to cumulative emissions [TCRE], which is thought to be between 0.8°C and 2.5°C per 1000 GtC. Since we’ve warmed by almost a degree after emitting just under 600 GtC might suggest that the lower end of this range is rather unlikely.

    .
    First, the link ATTP provides is to his own earlier post on the subject, where the range is acknowledged to be 1.0-2.0°C, while Nic Lewis claims in his post that it is closer to 1.0°C.
    .
    Second, ATTP in his linked post concludes that Lewis is projecting a TCRE that is not linear but waning. But ATTP neglects to mention this.
    .
    Third, of course, there is no mention of the 1°C possibly being inaccurate, or of it not all being caused by AGW.
    .
    Here is another post from the next month, where ATTP blasts a new paper placing ECS at 1°C and claims that it’s mathematically impossible (forgetting to mention the assumptions that would make it plausible: record bias and attribution bias).
    .
    ATTP, will you acknowledge that these two assumptions are under contention by those skeptical of CAGW advocacy? If so, will you commit to reflecting that in your posts characterizing the debate from here forth?

  214. Brandon,
    I did not say my question was rhetorical.

    Ron,
    I don’t claim to be perfect myself. However, with respect to Nic Lewis’s work, I think I’ve always made clear that I find it interesting and valuable and that there isn’t anything intrinsically wrong with what he does. There are, however, assumptions that mean that one cannot rule out certain possibilities and hence – in my view – those estimates are not nearly as robust as some would have you think. I also think your arguments about the possibility of an ECS of 1K are extremely weak. That you continually repeat them does not make them stronger. We’ve been through this so many times, that I have no great interest in doing so again. I’m happy to simply disagree with you.

  215. Ron,

    It is of course OK to be skeptical about the surface temperature record, especially given the satellite data. However, I believe the surface record has seen a tremendous amount of work, including BEST, which started with a skeptical attitude. I personally doubt that there are big inaccuracies. That said, continued work is of course always a good idea.

    There is also lots of other evidence admittedly not as scientific. I live at about 1100 feet in the Cascades. 20 years ago, we often had heavy snowfalls that sometimes stayed around for weeks. The last 5 years, we have had essentially no snow. That could be a natural cycle perhaps, but I think snowfall records would show it too. That’s a pretty sensitive proxy for temperature in these transition zone areas.

    Cliff Mass I think has something on melt-out dates at Stevens Pass showing no change in melt-out date. However, Stevens Pass is high enough that it’s not in a transition zone during the snow season.

    If I were going to look at this area, I would look at the boundary layer effects: temperature gradient, the effect of irrigation, etc. That is a much more fertile field for further work.

  216. Another area badly in need of work is ocean models and the ocean/atmosphere interface, where climate scientists acknowledge problems. Of course it’s a very hard problem.

    Ken Rice’s post on Abraham’s paper shows that even for gross global energy measures, the model spread is huge. Ken tries to claim that because some of the models get the global rate of change right, they “might” be getting global energy balances roughly right. Very, very weak stuff. By comparison, it makes global energy balance models look sophisticated and accurate.

    It’s a huge hole in our current science, but it appears to me that people are too busy running inadequate models to look at it at a fundamental level.

  217. Anders, I asked you a direct question and you responded. I was charitable and assumed your response was meant to actually answer the question. If you were actually just playing games and being intentionally unhelpful, that’s your choice, but that speaks very poorly of you.

    In the future you should try to refrain from intentionally causing problems and actually answer the questions you respond to. Or to put it more concisely, stop trolling.

  218. ATTP, I think you missed my point. It’s not the plausibility of Lewis or Lindzen’s work, it’s that your analysis of it is always devoid of acknowledging two major points the other side questions: 100% attribution and 0% bias in adjustments. Point to where I am wrong on this. It seems that many educated skeptics who comment on your blog catch this. Nic Lewis accepts your 100% and 0%, I think, because he knows that is a prerequisite for getting published in a very biased environment.
    .
    DY, who exactly is the skeptic on the BEST team?
    .
    BTW, Steven, I hope you get well soon with full recovery.
    .
    There are two arguments against UHIE that the consensus and BEST makes.
    1) The effect is negligible except in urban stations, which are <10% of the stations.
    2) Urban stations have the same trend as rural and pristine. [See Peterson (1997, 2003), Wickham (2013).]
    .
    These two explanations are not compatible with each other.

  219. Ron,
    I don’t think that I’m obliged to acknowledge “major points from the other side”. The obligation is on the “other side” to make the argument for their “major points”, not for me to acknowledge them. However, you’re also not quite right. I don’t think that it’s 100% attribution and 0% bias. I realise that those are assumptions. However, if you go through my blog, you might notice that I’ve written about internal/natural variability. It is extremely difficult for it to play a major role on multi-decade timescales. Not impossible, but unlikely. Also, internal/natural variability can work both ways. Currently, the best estimate for the period 1950-2011 is that it had a net cooling effect of about 10%. So, if you think it could play a major role on multi-decade timescales, you should be willing to acknowledge that it could have enhanced the warming, or introduced cooling.

    Of course there may be biases in the adjustments to temperature data, but – again – this could work both ways and we have multiple datasets that are broadly consistent. Until such time as someone makes a compelling argument for a significant influence from some kind of bias, we work with the best data we have. Also – globally – UHI, for example, has to be small given the energy involved.

  220. Ron, initially Muller was skeptical. I think he’s pretty trustworthy and not an activist or ideologue.

  221. ATTP:

    …Also – globally – UHI, for example, has to be small given the energy involved.

    .
    I can’t believe I am reading you making the same mistake that anoilman and others on your site make. It takes very little energy to bias a temperature sensor: just place it next to an air conditioner condenser unit, next to a building, or near pavement. UHI is the same thing but further away. Irrigation and rural LULC are the same.
    .
    It would only take a sentence or two each time you blog about ECS to mention that Lewis assumes 100% attribution and 0% bias and uses HADCRUT. For those who might use UAH satellite data and question the MBH(98,99) and Gergis(2016) claim that there is no significant variability on a millennial timeframe, the analysis would lead to Lindzen and Choi’s result of ECS ~1.
    .
    DY, Elizabeth Muller was never skeptical, and it’s always been her show as the exec director. Dad had good reason to change.

  222. I have heard the story of wind shear interfering with the development of tropical storms and hurricanes many times over the past several years.
    .
    Yes, occurrence of shear impacts the formation and intensity of tropical cyclones, but I don’t believe there’s a compelling reason to believe that shear, which fluctuates quite a bit as it is, should change appreciably with AGW.

  223. Ron,
    Total global energy consumption is about 5×10^20 J per year, which equates to about 0.03 W/m^2. Without feedbacks, this would produce a surface warming of around 0.01 K. If you think it could trigger feedbacks, it might be 0.02-0.03 K. Globally it is negligible.
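
    If anyone wants to check the arithmetic, here is a quick sketch; the 0.3 K per W/m^2 no-feedback response is a standard ballpark figure, assumed here rather than stated above:

      # Back-of-envelope: global energy consumption as a climate forcing.
      SECONDS_PER_YEAR = 3.156e7
      EARTH_AREA_M2    = 5.1e14   # total surface area of the Earth
      ENERGY_J_PER_YR  = 5e20     # global energy consumption, ~5x10^20 J/yr
      NO_FEEDBACK_SENS = 0.3      # K per (W/m^2), assumed Planck-response ballpark

      flux_wm2 = ENERGY_J_PER_YR / SECONDS_PER_YEAR / EARTH_AREA_M2
      print(round(flux_wm2, 3))                     # ~0.03 W/m^2
      print(round(flux_wm2 * NO_FEEDBACK_SENS, 3))  # ~0.01 K of warming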

    Again, I don’t really need to mention what you think I should mention. However, I think I have mentioned HadCRUT, and I’m pretty sure I’ve mentioned the attribution somewhere. That internal variability is unlikely to be significant on multi-decade timescales has little to do with MBH98/99, although they are consistent with this. How can you use UAH? It’s not measuring the surface!

  224. Turbulent Eddie (Comment #149767)

    https://www.wunderground.com/blog/JeffMasters/the-future-of-wind-shear-will-it-decrease-the-number-of-hurricanes

    Could global warming increase wind shear over the Atlantic, potentially leading to a decrease in the frequency of Atlantic hurricanes? Several modeling studies are now predicting this, and it is a reasonable hypothesis. The most recent study, “Simulated reduction in Atlantic hurricane frequency under twenty-first-century warming conditions”, was published Sunday in Nature Geosciences. The authors, led by Tom Knutson of NOAA’s GFDL laboratory, showed that global warming may reduce the number of Atlantic tropical storms by 27% and hurricanes by 18% by the end of the century. However, their model also found that the strongest hurricanes would get stronger.

    An important reason that their model predicted a decrease in the frequency of Atlantic hurricanes was due to a predicted increase in wind shear. As I explain in my wind shear tutorial, a large change of wind speed with height over a hurricane creates a shearing force that tends to tear the storm apart. The amount of wind shear is critical in determining whether a hurricane can form or survive.

  225. ATTP,
    Wow, you’re doubling down on your assertion with an energy equation? If you are talking about the AGW of non-GHG heat, then we are talking about a tiny shift in the steady-state flux. But even that shouldn’t be counted against ECS, since it’s not GHG, and it is beside the point: the issue we are discussing is warmth local to the sensor or station, along with biased corrections for other non-climate effects. It’s things like not maintaining a weather station that bias the temperature record.

  226. Ron,
    I’m pointing out that the entire world’s energy consumption could only produce a warming of a few one-hundredths of a degree. Therefore, something like UHI cannot be producing a significant amount of warming globally. You appear to be responding to something I didn’t say.

  227. ATTP,
    I agree with your statement that it amounts to hundredths of a degree. This is precisely the point. UHI’s warming of localities is not representative of true GMST.
    .
    Have you been mistaken about the whole nature of the UHI question and debate? (not rhetorical)

  228. Ron,
    Try considering what I originally said:

    Until such time as someone makes a compelling argument for a significant influence from some kind of bias, we work with the best data we have. Also – globally – UHI, for example, has to be small given the energy involved.

    I was making two largely separate points. Highlighting that there might be some kind of bias influencing the data does not mean that there is actually a significant influence from bias. Maybe there is, but the possibility of this is not something that we can easily take into account if someone hasn’t actually quantified it. And, I added that something like UHI cannot be making a significant global contribution.

  229. Whether UHIE bias can have a significant influence on the global land surface record is the question under debate. Your facts and arguments are not about that. Read my comments again.
    .
    This thread itself could be fascinating to a professor studying the psychology of bias.

  230. I think the reason the satellite record is relevant is that it seems to agree with balloon data pretty well and is consistent between two different groups with different processing algorithms. It does seem to disagree with GCMs, however. This is just one of a huge number of issues in climate science that need serious attention.

  231. Ron,

    This thread itself could be fascinating to a professor studying the psychology of bias.

    Oh well, I had expected better, but probably shouldn’t have.

  232. ATTP: “Oh well, I had expected better, but probably shouldn’t have.”
    .
    My apologies. Got impatient.
    .
    Can you acknowledge understanding my point that UHIE is about local bias of the sensor, not about warming the planet? (not rhetorical)

  233. KF: Yes, shear events are weather, and I don’t believe climate models can (and so probably shouldn’t even try to) predict weather.
    .
    The reality of the storm numbers means that, even if warming were changing shear, and shear were changing storms, it is lost in the noise: not significant.

  234. Sorry, Ken Rice, for my poorly timed call for deeper reflection on psychology.
    .
    I’m sure I will have a major oops with you some time and you can be more charitable than I was.
    .
    Ken Fritsch, I will try to be more reserved and polite as you advised.
    .
    I will be commenting less for a while and revert to lurking. Thanks to all the regulars who were not that interested in the surface record.

  235. David Young:

    Ron, initially Muller was skeptical. I think he’s pretty trustworthy and not an activist or ideologue.

    Well, Muller stated BEST would publish all sorts of things to create a tractable product. He has since given interviews in which he claimed BEST published certain data. In reality, BEST has never published that data. That is true even though it has been more than a year since the falsity of his claim was demonstrated.

    See here for an explanation of what I’m talking about, including a journalist’s failure to correct Muller’s error even after it was pointed out. Pay particular attention to how she had updated her piece after this issue was pointed out to give a false explanation of why the data wasn’t available – an explanation she was given by BEST itself. (Also notice how the latest update simply fails to address the issue.)

    My point is whatever Muller may be or may have been, his behavior and that of BEST along with him has not met the standards skeptics promote. He hasn’t even met the basic standard of, “Don’t claim you’ve published things your group apparently has no intention of ever publishing.”

  236. Re: Comment #149749
    Thanks for the clarification, ATTP. My problem was that one of your regulars was upset with me when I used his real name at your blog, and I thought, mistakenly, that you felt the same way.
    Someone else brought you into this conversation, along with Nick, as examples of people who had done the hard yards and deserved respect.
    I was upset at Nick being helpful but not really being helpful, in a style that he has made his own, and I set out to explain why.
    I will try to follow your advice, but Nick, despite his excellent blog and charts, has serious form, as you should well be aware.
    Thanks to all for explaining further re hurricanes.
    A shame that, in the desire to blame hurricane damage on AGW, the link between warming and fewer hurricanes had to be abandoned by the warmists.
    ATTP may have had some discussions in the past on papers by Pielke and hurricanes which were worth looking at.

  237. angech,
    I don’t really like talking too much about others. I’m completely unaware of Nick having any serious form, other than form for being very well-informed and being capable of engaging in discussions without responding in kind to some of the comments aimed at him. I was simply suggesting that constructive discussions normally involve avoiding saying things that make it sound like you’re accusing the other party of doing something nefarious. I’m not claiming perfection myself, to be clear.

    Ron,
    No worries.

  238. Out on the Gold Coast for tea at the slurpy panda or slushy penguin, the former I think, near the fish house, with Chris Hemsworth and Mark Ruffalo rumoured inside.
    Completely unaware of who they are but my sister says I should know.
    Ruffalo may have something to do with climate science data apparently in a recent film.
    Following the Arctic Death Spiral at Real Science and the Great White Con.
    Yacht racing is so slow but full of interest. Polar Ocean Challenge.
    Will Tony or Jim win?
    Who can trade the best insult?
    Same area but two completely different maps, one with no ice and the other out of Frozen, the movie.

  239. I was simply suggesting that constructive discussions normally involve avoiding saying things that make it sound like you’re accusing the other party of doing something nefarious.

    Constructive discussions also involve other things besides playing nice. But, everyone already knows that.

    Andrew

  240. angech,

    I meant Kenneth Fritsch, not Ken Rice in my comment about getting one’s hands dirty.

  241. I am impressed with the knowledge and research time put in by almost everyone who posts here. I think that sometimes it’s precisely that investment that causes conflicts. Criticism hurts as much as it helps us to correct course.
    .
    One contrast I see here at Lucia’s is less tribalism than I see at other sites. Independent thought and loyalty to the search for truth are held as higher values than cohesion. The cause here is cutting through #$%&, not ideology.
    But that said, I would really like to see Sky and Ken work together.
    .
    Brandon’s sleuthing on Gergis was nothing less than McIntyrian genius. Bravo.
    ============================================
    .
    Next time I need to explain UHIE bias to a CAGW enthusiast I will take a breath and hearken back to the scene in E.T. where Elliott gets out of school by placing the thermometer up against the lamp while mom is not looking.

  242. DeWitt,
    thanks for the explanation re Ken and Ken.
    Egg all over my face but nothing new.
    Kenneth Fritsch is very good, polite and way above my pay grade in maths.

    As an aside, I remember Carrick putting up a link to the earth to show it was quite irregular in shape. I was hoping he might link it again, but only if it is not too much trouble. Latitude, altitude and gravity all have effects on local temperature expectation, but though BEST has a gravity factor I do not think it is variable; obviously it should be, and it might account for some of the anomalies that BEST corrects for, perhaps when it shouldn’t.
    ATTP certainly adds spice and brings the best in arguments out when he appears.

  243. I won’t let anyone take us backward, deny our economy the benefits of harnessing a clean energy future, or force our children to endure the catastrophe that would result from unchecked climate change.
    Hillary, November 29, 2015

    Looks like Hillary and RB are going to lead us into a glorious future and save the world from CAGW and everything and The Donald and deniers.

    Andrew

  244. angech: What you’re asking for is the “geoid”, the shape of the Earth’s surface. It is quite irregular.

    http://geomatica.como.polimi.it/elab/geoid/geoidViewer.html

    It isn’t thought to have a significant direct effect on temperature, but it has an important effect on the geostrophic flow of ocean currents (and thus an effect on climate and on temperature).

    An important related concept is isostatic rebound, which is the lifting of the Earth’s plates in response to the melting of the glacial sheets that occurred at the end of the last ice age. (Incidentally, this is yet another example of the problems for “young Earth” people, since the Earth has a “memory” of an event that occurred supposedly before the Earth was created.)

    While we tend to think of this as just causing an uplift, it also tilts the local land forms relative to the direction of gravity. This leads to a change in rivers over time, so that rivers that once flowed north, for example, now flow south. It also causes very long-period changes in ocean circulation.

  245. Hi Carrick,
    Not sure I understand what that animated image represents. The units appear to be meters, but all the km-high mountains on the surface don’t show up at all. Is the depicted “non-spherical” shape just the deviation from what it would be if the crust were at gravitational equilibrium? Can you explain (just a little)?

  246. SteveF,

    The geoid is, effectively, the true sea level if there were no wind or tides. It varies in height from an ellipsoid because of variations in local gravity due to things like density. Strictly speaking, it’s not defined where there are continents, but you can still calculate what it would be by assuming there was a very narrow canal across the land mass connecting with the oceans on either end. GPS altitude is corrected for the geoid in modern GPS receivers.

  247. To follow up on DeWitt, the geoid represents the shape of the surface of constant gravitational potential with an average radius equal to that of the at-rest (relative to the rotating Earth) mean sea level. So technically it is defined for continents (but you have to know the density profile under the surface of the continent to calculate where it would be).

    Conceptually you can think of it the way DeWitt describes it though.

    One utility of high-resolution surface gravity maps is that they provide constraints on the subsurface mass density field. By combining these with satellite gravitational observations, you can obtain accurate “inverse solutions” for the mass density field under the surface of the Earth. As we improve satellite measurements, the resolution of the geoid is steadily improving over time.

    GPS receivers typically correct for MSL relative to a reference ellipsoid, the most common of which is WGS 84. Not to quibble with DeWitt, but I think this has always been true.
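
    To make the height bookkeeping concrete, a minimal sketch in Python, assuming only the relation H = h - N (orthometric height equals ellipsoidal height minus geoid undulation); the sample numbers are illustrative, not from any real geoid model:

      def orthometric_height(h_ellipsoidal_m, geoid_undulation_m):
          """Height above mean sea level: H = h - N, where h is the GPS
          height above the reference ellipsoid (e.g. WGS 84) and N is the
          geoid undulation (geoid height above the ellipsoid) there."""
          return h_ellipsoidal_m - geoid_undulation_m

      # Illustrative numbers only: undulations range from roughly -105 m
      # (south of India) to about +85 m (near New Guinea).
      print(orthometric_height(52.0, 47.0))  # 52 m above the ellipsoid is ~5 m above MSL here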

  248. DeWitt, Carrick,
    Thanks for the explanation. The chartplotter (navigation system) on my boat references WGS84, so now I have a better idea of what that means. Of course, on a boat in the ocean you tend to follow sea level, so if the GPS system did not correct for the geoid’s effect on sea level, the chartplotter might think the boat had taken to the air. Or turned into a submarine! 😉

  249. Ken Rice,
    “I was simply suggesting that constructive discussions normally involve avoiding saying things that make it sound like you’re accusing the other party of doing something nefarious. I’m not claiming perfection myself, to be clear.”
    .
    That comment improved my opinion of you.

  250. Ron Graf,
    “One contrast I see here at Lucia’s is less tribalism than I see at other sites. Independent thought and loyalty to the search for truth are held as higher values than cohesion.”
    .
    Absolutely, and it’s why I have invested the time to write guest posts and comment regularly here. Lunatic rabble blogs (‘rising CO2 doesn’t come from burning fossil fuels!’) and echo chamber blogs (‘the science is irrefutable, billions will die!’) are a waste of time… few in those places are interested in (or even capable of?) critically examining their opinions and actually learning something new. To paraphrase: truth is the first casualty of climate blog wars.

  251. SteveF: “…truth is the first casualty of climate blog wars.”
    .
    Actually, I came to the blogs because of how little truth makes it to the news (or school curriculum). With reporters’ spin and misunderstanding of investigators’ slant, there is a wicked “whisper down the lane” effect. There’s a high likelihood the answer is lukewarm, which is too boring, I guess.
    .
    Did we move the needle at all on your curiosity on surface record bias?

  252. Ron,
    No, there may be small biases of course, but it seems to me unlikely those change the overall warming trend very much. I am much more interested in climate sensitivity, because that is where there is great uncertainty and, IMO, clear bias toward exaggerated sensitivity. The whole ‘future catastrophe’ meme, and the draconian (and leftist!) economic and social changes that future catastrophe ‘demands’, is based on a bunch of models that are filled with kludges and biases, and simply not credible. Avoiding very negative social and economic outcomes due to foolish energy policies depends on people regularly pointing out that the GCM predictions are clearly divergent from reality. The dedicated greens who dominate climate science, and climate science funding, are not going to do it… they are like foxes in charge of the hen house.

  253. SteveF,
    I agree determining CS is the most important goal. If I had headed the IPCC at the start I would have added more stations, buoys, kept any existing ones (for long station references) and increased quality control in all of them 10X. Putting the most resources into creating GCMs that can’t be validated until the game is half over seems ridiculous.
    .
    Had we better data today, I believe we would have already separated the signals of non-climate biases and natural variability from AGW. Otto and Lewis-type energy balance models could then diagnose TCR with a narrow range and high confidence (without needing to assume 0 bias and 100% attribution).
    .
    So I am interested in NCE (non-climate effects) because they should be the easiest to diagnose and quantify (even with poor data).

    because that is where there is great uncertainty and, IMO, clear bias toward exaggerated sensitivity

    Neither sounds uncertain nor unbiased. While multiple rounds of GCM analysis, observation updates, and energy balance model-observation reconciliation efforts proceed over the years, so also do multiple literature assessments by sceptics, with a sceptic ECS anchored at 1.5C arrived at years ago using much simpler models and outdated observations.

  255. RB, have you read the current CA post about the suppression of the Law Dome proxy data? When Steve Mc requests data a general alarm is emailed to “the team” to strategize how to block, delay and deceive.
    .
    Climategate should be an eye-opener to all, not just skeptics. Their behavior is not that of scientists but of propagandists.
    .
    Instead of solving the mysteries of nature we have to spend our time uncovering the nefarious actions of those wielding power. The oil companies do not have tax authority or guns.
    .
    Also, since you have been following the current post, I will ask your opinion of how UHI can both be an important bias in large urban stations (Karl, Hansen) and yet produce a 100-yr trend that is no different from rural (Peterson)?

  256. Consider that Jones(2016), the consensus declaration of the state of the surface record, has a section on urbanization effects yet leaves out the most comprehensive study done to date, Karl(1988). Instead, Peterson(2005) and Wickham(2013) are cited to establish there was no urban influence.
    .
    Perhaps the skipping of Karl and Hansen was calculated to avoid the complicated history: Easterling’s guesstimate of UHI of 0.05C/cen getting etched in stone with Karl(1988)’s 0.06C/cen, only to be blindsided by father Hansen(1999)’s 200% increase. So Hansen’s (NASA) GISTEMP asks for NOAA’s (Karl’s) product less the UHI adjustment so they can apply Hansen’s 0.15C/cen. But HADCRUT gives zero UHI adjustment. So Jones, a Hadley man, relies on Wickham and Peterson to justify zero. Steve Mosher, feel free to comment on this when you recover.
    .
    Jones(2016) does not throw out UHI altogether. Jones points at China as a problem, having 68% urban stations and less than 1% urban land use. Need some more rural in there. Even there, it’s funny he sees it as a representation problem, not a false thermometer warming one. Maybe I should write him about Elliott putting the thermometer up to the lamp to get out of school.

  257. Yes, SteveF, sensitivity to greenhouse forcing is critical to our choices, as is the rate of change. And that’s, I think, where climate science has the most serious challenges. I’m not even sure ECS is fully predictable. You can perhaps predict for small perturbations about some steady state, but when nonlinearities come into play, such as ice feedbacks, it’s very difficult, I think. However, some of these things are very slow processes.

    In the long term, geo-engineering is in mankind’s future as the sun becomes more energetic. And that’s another political nightmare.

  258. Ron,
    While I haven’t got my hands dirty analyzing the temperature records, based on what I’ve read from people like Zeke, I don’t share your views on nefarious intent in suppressing UHI effects. Since land is 30% of the surface, I believe trend impacts are also likely not significant over 60+ year time scales. Lastly, adjustments overall are made in a way that adjusts sea surface temperature trends downwards by a similar magnitude as the land adjustments in the opposite direction, resulting in an overall reduction in trend. As I remarked about BLS data, people such as you will always see nefarious intent regardless of the transparency.

  259. SteveF, I saw your comment at Judith’s about the model tuning paper. I would expand a little on this here as Judith’s can be very distracting.

    1. You are correct that this information makes it seem ridiculous to try to treat a model ensemble in a rigorous statistical way. The current CMIP5 runs are now known (thanks to this paper) to dramatically understate uncertainty. One would want to generate a new larger ensemble varying the parameters within their range of justifiable values to do that. There are a lot of parameters, so the spread would likely be very large.

    2. It shows that Gavin Cawley’s falsification criterion for a model, viz., that the data must lie outside ALL reasonable runs of the model, will NEVER falsify a climate model (see the sketch after this list). I am surprised that he seriously argues for it. Perhaps in some fields of modeling, the models are equally lacking in predictive power and academics nonetheless try to argue there is some value in their modeling activity. Using this criterion, no such model could ever be falsified.

    3. This model tuning paper is a step in the right direction. I do marvel however that it took so long to come clean. It almost certainly had to do with the role these models play in IPCC reports and projections and the fact that a lot of climate scientists depend on these models, either building and maintaining them, or a far larger number who run them to generate “research.”

    4. It also shows that, as we argue in our recent AIAA Journal paper on “Implementing a Separated Flow Capability in TRANAIR,” there is a strong bias in science toward ever more complex models to model “more physics”, a bias that is not supported by much scientific evidence. Well-constrained simple models can be much more predictive and in some cases more accurate too.

    5. We also in that paper show some very surprising data comparing complex RANS models with Drela’s simpler model for a very old test case that is very widely reported in the literature. However, the literature always shows agreement of the models with the easier case. What we found was that for the more challenging cases, the RANS models seemed to fail rather badly. Beware of the literature, in CFD and in climate science.

    6. There is a bad replication crisis in science that urgently needs to be addressed. I continue to be amazed that it took so long for this to be realized outside medicine and that so many continue to be in denial about it.
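
    On point 2, a toy numerical illustration (my own construction, not anything Cawley wrote; all numbers are arbitrary): the larger and wider the ensemble, the less often an observation falls outside all runs, so the criterion becomes vacuous.

      import numpy as np

      rng = np.random.default_rng(0)

      def fraction_falsified(n_runs, n_trials=10_000):
          """Fraction of trials where an 'observed' trend lies outside ALL
          ensemble runs. Toy setup: runs are biased warm (0.2 +/- 0.1
          K/decade) relative to a 'truth' of 0.1 K/decade."""
          count = 0
          for _ in range(n_trials):
              runs = rng.normal(0.2, 0.1, size=n_runs)  # perturbed-parameter ensemble
              obs = rng.normal(0.1, 0.02)               # observed trend
              if obs < runs.min() or obs > runs.max():
                  count += 1
          return count / n_trials

      for n in (5, 20, 100):
          print(n, fraction_falsified(n))
      # Even this badly biased toy ensemble is "falsified" less and less
      # often as it grows.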

  260. what I’ve read from people like Zeke

    An appeal to Zeke beats an appeal to Mosher 95% (or better) of the time.

    Andrew

  261. RB
    “outdated” observations?
    Sounds nefarious and biased surely.
    All observations , by their very nature, are purely observations.
    They are all outdated because once made they exist in the past.
    All paleo is outdated yet you argue by it frequently.
    All satellite and BEST is “outdated”.
    Worse though is that it has been altered.
    Now as Steven would say this is with good intent.
    Intent of course means bias.
    In his case “good” bias.
    Again all bias is good bias if it agrees with where you want the data to go.
    Like in Gergis.
    Brandon they will scrap this paper, won’t they?

  262. RB:

    …I don’t share your views on nefarious intent in suppressing UHI effects.

    .
    RB, I would posit that 95% of nefarious activities are without nefarious intent. And about 90% of that is because of the intent to fix the behavior of “the people like” (blank) for the greater good.
    .
    I asked two questions of you but you answered a question I did not bring up regarding a point that I didn’t make. Nonetheless I am confident you are a good person. Thanks for caring. People like blank care too.

  263. KenF comment on CA today:

    The lack of those analyses [of Gergis(2016)] and the wrongheaded post fact selection of proxies can only be taken together as an unscientific approach of assuming a “correct” answer and working backwards in attempts to show evidence. If looking deeper with more detailed analyses shows the evidence might not support the assumed answer – as should be the practice of the true scientist – apparently these scientists are willing to forgo it in their haste to support what they have already concluded as the truth.

    .
    Ken, I am glad to see you spell it out. I read every word of your Singular Spectrum Analysis (SSA) piece twice and got most of it, I think, but nothing compares to then spelling it out in plain terms.
    .
    Anyone who has worked in science or studied the history of science knows how easy it is to “fool yourself.” This goes double when a team consensus supports it and a cause must be furthered. Even those of high virtue, when encouraged by team approval, can do things that would appear to outsiders unwise or even nefarious.
    .
    The answer can only be equal funding for teams studying the counter-hypothesis in any analysis. It’s the hounding nips of competition that keep us from resting after we’ve found the answer we were looking for. It’s the specter of exposure and failure that motivates self-adversarial study approaches and prudence in claims.

  264. Ron, Brandon: ATTP has a slightly old post on Gergis.
    I wrote “Joelle Gergis got her PhD in 2006.”
    She was the lead researcher on the paper.
    David Karoly was the lead author, hence one of her supervisors on the paper.
    Karoly made 2 elementary errors.
    The first was in setting up the paper with the past data compared to a long term trend from 1920 to 1990.
    The paper was then presented as using the detrended data but actually used the trended data, and as the lead author, along with others, he missed that.
    –
    The second error by David was still in the new paper, which took four years to correct. There was a lot of reworking done to get the data with long-term trending used to match the original paper.
    This reworking by necessity included Karoly’s error and was referred to by Mosher in a cryptic appeal to Dikran earlier.
    As such it might invalidate the new paper.
    –
    That is why I said it is a shame the parameters for the study were not set out and followed in a routine way, which would have produced a routine scientific result that no skeptic would have been upset with.”

    Mosher has read Brandon’s post on the 1920 problem and hinted at it to Dikran who has ignored the comment.
    The problem is, Brandon says that the Gergis reanalysis sticks to the script of the original paper, but the new paper apparently states the data is detrended from 1930 instead.

    Easily fixed, of course.
    Tell the editor
    “A simple error was made in ascribing the dates used”
    and we can change the wording in a jiffy.
    After all, we have done this before.

  265. Angech:

    “A simple error was made in ascribing the dates used” and we can change the wording in a jiffy.

    .
    It’s just a “typo.”
    .
    Wait: the validation test period 1900-1930 overlaps the calibration test period 1920-1990, as Brandon found. Maybe we won’t see a fix for a few more years. Gergis is young. But so is Brandon; I don’t think she can outlast us all.
    .

  266. Steven Mosher agrees with you, Ron:
    “July 23, 2016 at 4:55 pm
    “It’s still a hockey stick, despite all the years of contrarian fussing. And that’s what counts in the end.”
    Ah no. The problem is it authorizes the use of several suspect practices.
    A) comparing a proxy to multiple temperature fields to find a correlation WITHOUT correcting for this in uncertainties. Its pretty bad with 5 degree bins out to 500km, it would be hilarious with 1 degree bins out to 500km.
    B) using “leads”. Physically that means the proxy predicts temperature. Unphysical is bad.
    c) screwing up your calibration and verification time periods. Basic stuff.”
    I thought he was channeling Brandon. (A sketch of point A follows at the end of this comment.)
    Then he said
    “That said, boneheaded mistakes in Paleo, don’t change the physics.”
    Of course, “if” without the mistakes the paleo shows great natural variation in temps without CO2 rise, it suggests that jumping on short-term temperature rise as proof of the physics of CO2 warming would be a similar * mistake.
    Tho it don’t change the physics.
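
    To make point A concrete, here is a toy Monte Carlo (my own sketch, nothing from the paper): screen pure-noise “proxies” against many independent noise “grid cells” at p < 0.05 and count how many get selected.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(1)
      n_years, n_cells, n_proxies = 60, 20, 500  # e.g. a 60-yr window, 20 nearby cells

      passed = 0
      for _ in range(n_proxies):
          proxy = rng.normal(size=n_years)       # pure noise, no climate signal
          for _ in range(n_cells):
              cell = rng.normal(size=n_years)    # independent noise "temperature"
              r, p = pearsonr(proxy, cell)
              if p < 0.05:                       # "significant" -- proxy gets selected
                  passed += 1
                  break

      print(passed / n_proxies)  # roughly 1 - 0.95**20, i.e. ~64% pass by chance alone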

  267. I disagree with SM’s last point (which I’ve often seen made) that high variability is evidence of high sensitivity, because that is only true if it’s feedback amplifying the swings. Unforced variability, on the other hand, is independent of sensitivity. For example, half the heat content of the ocean is below 2000m, a dormant lurking monster able to radically drop SST if disturbed. Or a slowdown in the AMOC could put the high-latitude NH into a deep freeze, increasing albedo and triggering a glacial advance. Of course, there could also be variability in volcanic, solar or cosmic-ray seeding of clouds. This is what the consensus now believes caused the LIA.

  268. Yes Ron, the idea that high variability must imply high sensitivity in a chaotic nonlinear system is shown to be wrong by a moment’s reflection. Vortex shedding behind a cylinder is highly variable, but also a very stable semi-periodic state. In any case, such vague pseudo-scientific statements would require a huge research effort to even understand fully.

  269. angech,

    That said, boneheaded mistakes in Paleo, don’t change the physics.

    is one of those pseudo-scientific statements that means almost nothing. What he means is that the greenhouse effect is not changed. But of course, a lot of our so-called “understanding” of climate is based on paleoclimate work. The appeal to “it’s just physics” is just a tool to convince people who are not familiar with the science that our predictive capability is better than it really is.

  270. Ron Graf (Comment #149814)

    Just to be clear my comment was aimed at those doing temperature reconstructions without an a prior selection of proxies based on some reasonable physical criteria and than using all the proxy data selected. I judge that what is done in these reconstructions is just plain wrong on a very basic and general level. My only reasonable explanation that I can formulate is that these workers/authors are sure they know the conclusion and only have to provide evidence for it without doing the more difficult work of finding and validating proxies that are truly reasonable thermometers that are reliable going back in time and applying rigorous statistics to the results.

    I want to distinguish this criticism from that which I apply to other areas of climate science, where the problem is most often a lack of sensitivity testing and incomplete analysis. The problem there is not necessarily as fundamentally wrong as with reconstructions, but it has a similar weakness: stopping the investigation when a more or less foregone conclusion can be drawn from the analysis, no matter how incomplete that analysis might be.

    As an example, I will give some details of a discussion I had with the Karl (2016) authors concerning their recent paper that was supposed to be aimed at the warming pause or slowdown by way of a newly constructed NCDC temperature series. I had a problem with the time periods selected to show a warming slowdown, in that the authors used 1950-2014 as a comparison to the periods 2000-2014 and 1998-2014. I judged that the periods should start with the advent of accelerated GHG levels in the 1970s, and that the compared periods should be the before and after periods of the suspected slowdown, not two periods that both covered the slowdown period. This part of my criticism was taken seriously, as I noted from the periods that the authors planned to use in their follow-up paper.

    I also attempted to make a case with the authors for using trend measurements that could handle non-linear trends, and thought that I might have convinced them when their follow-up paper was to include trends estimated using Empirical Mode Decomposition (EMD), which is similar to the Singular Spectrum Analysis (SSA) that I had suggested they use (a minimal sketch of SSA trend extraction follows at the end of this comment). On reading the entire write-up I found that they would continue to use linear regression for trends and only show the EMD results as secondary evidence. I was not well familiar with EMD and I studied its applications and history. It turns out that EMD was “invented” within the organization of which Karl is a director and that organization actually has patents on its applications. I also found in my research that the Karl authors were not correctly applying EMD and the correct application would have given results similar to those I derived from using SSA trends. These methods and results made it easier to show that a statistically significant warming slowdown had occurred even when applying it to the newer Karl NCDC temperature data set, while that applied by the Karl authors using linear regression and/or their incorrect version of EMD did not.

    Further, I was able to find that the use of night-time marine measurements for adjusting ship ocean temperatures, and in turn buoy temperatures, could be misleading by making the temperature data set acknowledged as SST (tos) more like an ocean air temperature (tas). It was in this discussion, and the one on using EMD, that I lost contact with, and interest from, the Karl authors.

    My takeaway from this example, and others I have had when analyzing climate science papers, is that the authors’ analysis, as far as it goes, is not necessarily in error, but that a more complete analysis would have either produced different results or at least more uncertainty in the paper’s results. I somehow do not think this would be a recurring situation if the science were being carried out in a manner disinterested with regard to politics and results.
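
    As flagged above, a minimal sketch of SSA trend extraction (a bare-bones illustration, not Kenneth’s actual code; the window length here is an arbitrary choice):

      import numpy as np

      def ssa_trend(x, window, n_components=1):
          """Minimal singular spectrum analysis: embed the series in a
          trajectory (Hankel) matrix, take the SVD, and reconstruct the
          leading component(s) by diagonal averaging. Used here only to
          pull out a smooth, possibly non-linear trend."""
          n = len(x)
          k = n - window + 1
          traj = np.column_stack([x[i:i + window] for i in range(k)])
          u, s, vt = np.linalg.svd(traj, full_matrices=False)
          approx = sum(s[j] * np.outer(u[:, j], vt[j]) for j in range(n_components))
          trend = np.zeros(n)
          counts = np.zeros(n)
          for i in range(window):          # diagonal averaging (Hankelization)
              for j in range(k):
                  trend[i + j] += approx[i, j]
                  counts[i + j] += 1
          return trend / counts

      # Toy usage: a rise with a wiggle in it, plus noise.
      t = np.arange(120)
      x = 0.01 * t + 0.2 * np.sin(t / 20) + np.random.default_rng(2).normal(0, 0.1, 120)
      print(ssa_trend(x, window=40)[:5])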

  271. Off-topic, but the last “open thread” has closed, and this one, like Mae West, has drifted.

    Here’s another instance of characterizing RCP8.5 as “business as usual”: this article about surface mass balance at Camp Century in Greenland, abandoned 50 years ago and now buried beneath more than 100 feet (36 m) of snow and ice.

    Media description of the paper is worse. The Smithsonian claims that “Climate change could uncover the toxic and radioactive waste left behind at Camp Century as early as 2090.” However, the paper actually predicts that the camp will be over 200 feet deep (67 m) in 2090. [What the paper says is that 2090 might see the surface mass balance in the area change to negative.]

  272. Kenneth,

    Just to be clear my comment was aimed at those doing temperature reconstructions without an a prior selection of proxies based on some reasonable physical criteria and than using all the proxy data selected.

    Nitpick: It’s a priori, the literal translation from the Latin is “from the earlier.”

    There’s also the question of whether that’s the correct term. I would say the sentence reads better if you left out the ‘an’ and didn’t change the spelling of ‘prior’: …without a prior selection of proxies….

    Which is not to say that everybody who reads that sentence doesn’t know exactly what you mean. I said it was a nitpick.

  273. HaroldW,

    What’s a few hundred feet of ice between friends? 🙂
    .
    So maybe if the models are right about warming, and the glacial mass balance estimate is close to right for extreme warming, AND the projected warming is very large through 2175, the remains of the camp could theoretically become visible at the surface. Then again, almost certainly not. Green advocates have no level to which it is embarrassing to stoop for a ‘good’ cause.

  274. KenF:

    It turns out that EMD was “invented” within the organization of which Karl is a director and that organization actually has patents on its applications. I also found in my research that the Karl authors were not correctly applying EMD and the correct application would have given results similar to those I derived from using SSA trends.

    .
    This is some serious seaweed. Don’t they know when they improperly use a patented method that invalidates their patent? (sarcasm)
    .
    Ken, what are you going to do with your finding? I think S Mc would allow you to guest post if he does not have one planned already. BTW, I think you meant Karl(2015).
    .
    On SST bucket vs. intake measurements, Matthews and Mathews(2013) is an excellent rebuttal to Hadley’s Thompson(2008) and to Karl(2015), “Possible artifacts of data biases in the recent global surface warming hiatus.” Possible? You unilaterally changed GISTEMP, Tom Karl, on “possible artifacts.” And you are not even using your patented method correctly.
    .
    News Flash 8-3-16: Thomas Karl resigns after 41 years of service.

  275. New paper
    “For predictor selection, both proxy climate and instrumental data were linearly detrended over 1931–90. As detailed in appendix A, only records that were significantly (p < 0.05) correlated with temperature variations in at least one grid cell within 500 km of the proxy’s location over the 1931–90 period were selected for further analysis-"
    –
    Old paper
    "The lead author, Professor David Karoly, told The Conversation: “The actual method … included the long-term trend in the temperatures over the period from 1920 to 1990."
    –
    The problem is obvious.
    Two different date ranges are claimed, but both papers actually calculate the trends over the same single original base period.
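
    A toy illustration of why the trended/detrended distinction matters for screening (made-up numbers, my own sketch): two series that share only a linear trend, and are otherwise independent noise, screen as highly “significant” until the trend is removed.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(3)
      years = np.arange(1931, 1991)
      trend = 0.01 * (years - years[0])          # shared warming trend

      # Independent noise around the SAME trend -- no real relationship.
      proxy = trend + rng.normal(0, 0.1, len(years))
      temp  = trend + rng.normal(0, 0.1, len(years))

      r_raw, p_raw = pearsonr(proxy, temp)
      detrend = lambda y: y - np.polyval(np.polyfit(years, y, 1), years)
      r_det, p_det = pearsonr(detrend(proxy), detrend(temp))

      print(r_raw, p_raw)   # strong, spuriously "significant" correlation
      print(r_det, p_det)   # correlation largely disappears once detrended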

  276. Ron Graf (Comment #149825)

    What I wrote about the Karl (2015) authors was primarily from an exchange of emails about a paper they were proposing as a follow-up to Karl (2015). A serious analysis should be based on a published paper, in my mind anyway. Communication with the authors was easy and congenial. It was only after the EMD issue was raised that the communications abruptly ended. I have searched the literature for papers by the authors that might have used EMD and have not found any to date. The issue of the night air correction for SST measurements was not answered by the Karl authors, but as I recall they did refer me to the Huang paper.

    I find interesting and curious the time spent looking at and investigating the land temperatures when the ocean temperatures come from 70% of the global area, and even a cursory look at the related literature indicates that the uncertainties in the ocean temperature data sets will be greater than those for the land.

  277. DeWitt Payne (Comment #149823)

    DeWitt, I have used the term both ways. I think my most current usage will be determined by how I last saw the term used in the literature. My next usage will definitely be framed by your comment here. It might even be more lasting given the authoritative source.

  278. This is rather off-topic, but like HaroldW above, I don’t think people will mind a quick remark. It turns out Watts Up With That has a new post which praises and promotes Richard Tol. I know it’s probably pointless, but I want to stress how absurd that is. I wrote a comment expressing that sentiment which hasn’t appeared yet, but basically, Richard Tol is a dishonest hack who does terrible work and is only liked by “skeptics” because he says things that sound good to them. If he were on the other “side,” the Watts Up With That crowd would crucify him for how horrible his behavior is.

    That’s all. Feel free to ignore me. I just wanted to point this out since it’s been something like five years since everyone should have started shunning Tol.

  279. Brandon, Tol seems to have a lot of publications. It is hard to get a feeling for his work if you are only familiar with one or two. Certainly your language is pretty harsh.

  280. Brandon, thanks for keeping the skeptics honest. http://www.desmogblog.com/2015/08/03/richard-tol-s-gremlins-continue-undermine-his-work
    .
    If only the climate justice enthusiasts had the same types.
    .
    Ken, I agree the SST dominates the GMST. That makes it all the more surprising how little rigor was put into calibrating instruments back when this was realized in the 1950s to 1970s.
    .
    Apparently the WWII instrument shift from buckets to engine-room intake was known for many years and corrected for in HADSST2. Then Kennedy(2008)’s study of the metadata led to the corrected adjustments contained in HADSST3. Then seven years go by before Karl writes:

    …there was a large change in ship observations (i.e., from buckets to engine intake thermometers) that peaked immediately prior to World War II. The previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata reveal that some ships continued to take bucket observations even up to the present day.

    SEVEN YEARS. It’s bizarre enough that it took until 2008 for Hadley to check. SKS wrote a good summary about it in 2012.
    .
    The more troubling mystery is why after the correction there is still a huge bump in the WWII SST.
    .
    I will have to read Huang(2015) and Liu(2015).
    .
    What is also curious is that NOAA has only now decided on the final adjustments to the history of SST, just when they will no longer have any of those legacy instruments affecting observations. Karl(2015) announces the switch to almost complete use of buoys and severely down-weights the older two methods even as they are phasing out.
    .
    If I were doing the science I would have made certain of my calibrations before the start of my data collection rather than at the completion. I think it makes a difference.

  281. Ron, your link is unreadable because of the childish name-calling. Ian Forrester seems particularly vacuous and vicious.

  282. David Young:

    Brandon, Tol seems to have a lot of publications. It is hard to get a feeling for his work if you are only familiar with one or two. Certainly your language is pretty harsh.

    There’s an interesting comparison to be made with how people handle the work of climate scientists, particularly in the choice of language, but I don’t know that we need to go there. I am familiar with much of Tol’s work. Conversely, none of the people I’ve seen praise and promote him are (as far as I can tell).

    One sad part of all this is that the problems with Richard Tol’s work and writings aren’t remotely difficult to understand. They’re simpler and more obvious than the problems with work by people like Michael Mann. It doesn’t matter though. People just ignore it all or find excuses to defend it. Heck, we had people here (I won’t name any names) defend Tol’s claim that finding patterns in sorted data proves the data was generated in a biased manner.

    I kid you not. Tol repeatedly defended the idea that finding patterns in sorted data showed the data is bad. That is the level of argument we’re talking about. If you can understand why finding patterns in sorted data doesn’t prove the data is bad, you are competent enough to understand the problems with Tol and his work. Heck, just understanding that it is bad to secretly alter publications in order to cover up errors is enough in at least some cases.

    Ron Graf:

    Brandon, thanks for keeping the skeptics honest.

    I believe you mean “trying to keep.” I would rather not discuss my success rate.

  283. The reason I was surprised, Brandon, is that even my descriptions of Ken Rice are far more generous, even though I think he’s pretty dishonest on quite a few things. And he has become far more moderate over the lifespan of his blog, I think making a conscious effort not to insult people. Still wrong frequently, though.

  284. David Young:

    The reason I was surprised, Brandon, is that even my descriptions of Ken Rice are far more generous, even though I think he’s pretty dishonest on quite a few things.

    I agree, and I describe Anders less harshly as well. That should say a lot about how bad Richard Tol is. He is certainly far worse than Anders. I’ve called him “Mini-Mann” before, and I think the comparison is apt. I’d say he is every bit as bad as Mann (he’s even used threats of lawsuits to try to frighten people into silence!). He just hasn’t had the same popularity.

    Heck, there’s an argument to be made that Tol is worse than Mann. He’s certainly a bigger buffoon.

    By the way, my comment at WUWT still hasn’t appeared.

  285. I know, Brandon; it is frustrating when technical comments don’t appear. My experience at Rice’s is the same. I tried to post a comment on the recent GCM tuning paper, which is really a vindication of what Judith Curry and I have been saying all along; it never appeared. Yet he says I am not banned there. Seems he is not telling the truth. Rice is, as I say, very dishonest and uses every tool to discredit when the technical content is right. Pretty shabby. The response to the model tuning paper is very instructive. No admission that they were wrong, nothing but a recitation of the paper contents.

  286. BTW, what do you make of the paper on temperature datasets highlighted by Judith? It seems pretty persuasive to me and I’ve seen no rebuttals of it.

  287. Put some of my posts here referencing Brandon and Steven up at WUWT.
    Hope you don’t mind too much.
    Referenced Gergis as “the fortune teller”.
    Hope it sticks.
    Interesting to see Anthony mentions Lucia as playing a vital role in addressing the first paper.
    Any thoughts on the second?

  288. Way OT, but…

    Both the Washington Post and the New York Times have articles today about the Zika virus Miami mosquito locus. They each report that the area of concern is 500 square feet, “ground zero” it’s called in one of these pieces.

    to wit:
    https://www.washingtonpost.com/news/to-your-health/wp/2016/08/08/why-the-zika-travel-warning-in-florida-is-so-narrow-and-what-it-means-for-rest-of-u-s/

    .

    http://www.nytimes.com/2016/08/09/health/zika-virus-florida.html?_r=0

    This seemed a bit precise on first reading. Where the articles go astray is that their calculation starts with the belief that a typical representative of the mosquito species of interest has a lifetime (short, alas) possible excursion of a maximum of 500 feet. We’re discussing a single mosquito here. I would have thought that the coverage of this mosquito would be more like pi*r^2, or (500^2)*pi ≈ 785,398 square feet.
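    Here is that arithmetic as a quick check (a minimal sketch; the 500-foot range is the papers’ figure, not an entomological fact I can vouch for):

```python
# Area swept out by a 500-foot flight range -- the figure from the news reports.
import math

flight_range_ft = 500
area_sq_ft = math.pi * flight_range_ft ** 2
print(f"{area_sq_ft:,.0f} square feet")  # ~785,398 sq ft, not 500 sq ft
```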

    But that’s not all: ground zero was extended by a factor of five because there may have been more than one mosquito, and even further to establish boundaries at streets, the better to be understood by the locals.

    It amazes me that no-one on the editorial staff of either of these fine papers balked at the 500 square feet and asked to see the calculations.

  289. It amazes me that no-one on the editorial staff of either of these fine papers balked at the 500 square feet and asked to see the calculations.

    Unfortunately, we are in an era where just because someone can do a calculation, people think there is knowledge present. But it might be that the calculation is just numbers and doesn’t represent any kind of physical reality. Almost the entire climate science enterprise is like this.

    Andrew

  290. The 500 square foot figure seems to have originated with a statement by CDC’s Tom Frieden: “There is no information to suggest there is a risk anywhere else in Miami. In fact, the area we are concerned about is about 500 square feet right in the middle of the one-mile radius where there have been infections.” 500 square feet is about the size of a two-car garage. Either CDC knows the location quite precisely (a pool?), or Frieden was misquoted. Or mistaken.

  291. When we put the CDC Zika statement and the press’s credulity into context, the inference is that we are in for a world of hurt, and we will be poorly informed about the facts of it. Over the next few weeks tens of thousands of people who are now visiting Brazil will be returning to their homes. How many of those will be Zika carriers, and how many will be arriving in areas where mosquitoes that can carry the Zika virus live and will bite the human carriers?
    The calculation is too full of unknown variables to be easily estimated.

  292. 500sf was obviously from the PC4 statistic on the multivariate proxy for mosquito’s life in a desirable (reasonably cushy) habitat at 28C avg temp.

  293. I suppose most of you are aware of Bob Carter, an Australian professor who is a climate skeptic speaker. I am wondering if anybody can watch him here and disagree with any of his observations that we face a real threat to global society (and it’s not climate change). https://www.youtube.com/watch?v=5NinRn5faU4
    .
    I want to be skeptical of skeptics, and I disagree, for example, that there is “a pause” in trend. But on the political implications of what is happening to science, the thought that Ken Rice and thousands like him are our children’s trusted educators scares me.

  294. Ron,
    Thanks, good to know. If anyone might want to understand why some might prefer to remain pseudonymous, reading the comments here probably provides a good illustration.

  295. David Young:

    I know Brandon, it is frustrating when technical comments don’t appear. My experience at Rice’s is the same. I tried to post a comment on the recent GCM tuning paper, which is really a vindication of what Judith Curry and I have been saying all along; it never appeared. Yet he says I am not banned there. Seems he is not telling the truth. Rice is, as I say, very dishonest and uses every tool to discredit others even when their technical content is right. Pretty shabby.

    While some people like to claim moderation is difficult, it really isn’t. Since we’ve been talking about Anders and I’ve been visiting his site over the last few days to read the discussions there, I’ll give an example. In a recent post at Anders’s place, you will find this:

    [Mod: Sorry, I’m not really keen to have another discussion about Steven Schneider. I think he is one of the most unfairly maligned scientists on climate blogs and since he can’t defend himself, I don’t really have any great interest in allowing it here.]

    There was no notification the subject wasn’t supposed to be discussed at Anders’s site. The person submitted a comment with no reason to think it’d be a problem, and it just got deleted without any warning. This is despite the fact that a couple of comments on the subject had appeared prior to the one that got deleted. In fact, there have been multiple comments on that topic since this comment got deleted, including at least one by Anders himself.

    I have no problem with a blog proprietor deciding certain subjects are off-topic and not allowing them to be discussed. That’s just not what’s happened here. What happened here is a blogger decided to take advantage of his administrative powers to give himself an advantage during discussions. I’ve seen the same sort of thing at tons of blogs, often by people who complain about how difficult moderation is.

    BTW, what do you make of the paper on temperature datasets highlighted by Judith? It seems pretty persuasive to me and I’ve seen no rebuttals of it.

    I haven’t even looked at it. The subject bores me. I’ve rarely seen discussions of modeling that weren’t filled with errors and misunderstandings. As an example of why I just don’t care to bother, take a look at Anders’s latest post. Read it and the paper it promotes, and it should be obvious why I scoff at this being the level of discussion we have.

    And no, I’m not talking about the paper taking RCP8.5 as the business as usual scenario. Or that it says it uses the estimates of effective GHG potentials from the IPCC Second Assessment Report, as opposed to estimates generated in the last two decades. I’m not even talking about how Anders writes:

    There are few interesting things in the figure. For example, if we have emissions in 2030 of around 50GtCO2, then we could likely keep warming below 2°C if we get to emission neutrality between 2050 and 2070. However, the white contours indicate the final emission level. They show that the earlier we reach emission neutrality, the lower (more negative) the final emission level needs to be. This is a little counter-intuitive…

    Which is not “counter-intuitive,” but simply wrong and not what the paper shows. Nope. None of that is what I’m looking at. What I’m looking at is how the results all depend upon a fundamental assumption, one for which absolutely no basis is provided.

    But of course, there’s no way to discuss that since Anders banned me from his site despite me not having done anything wrong at his site.

  296. Ron Graf:

    I suppose most of you are aware of Bob Carter, an Australian professor who is a climate skeptic speaker. I am wondering if anybody can watch him here and disagree with any of his observations that we face a real threat to global society (and it’s not climate change).

    I’d rather not have to read or hear anything Bob Carter has to say ever again. He was the author of Chapter Five of Climate Change: The Facts, a book “skeptics” demonstrated their lack of skepticism by promoting without question. In his chapter, Carter claims an:

    analysis revealed worldwide errors in the range of 1-5°C for individual sampled area-boxes, i.e. errors that far exceed the total claimed twentieth century warming of ~0.7°C.

    And uses that to justify saying:

    Though global average temperature may have warmed during the twentieth century, no direct instrumental records exist that demonstrate any such warming within an acceptable degree of probability.

    Even though the analysis he refers to clearly says those uncertainty levels are for “sampled area-boxes,” boxes the cited study says are 1° × 1° boxes. Using a 1° × 1° grid means there would be 64,800 individual area-boxes, meaning the uncertainty levels stated in that analysis are for when you examine less than 0.2% of the globe.
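    The grid arithmetic is easy to verify. A quick sketch (a single box is an even smaller share of the globe than the 0.2% bound above):

```python
# Count of 1-degree x 1-degree boxes over the globe, and one box's share.
n_boxes = 360 * 180
print(n_boxes)                  # 64,800 boxes
print(f"{100 / n_boxes:.4f}%")  # ~0.0015% per box -- well under 0.2%
```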

    I’ve pointed this out multiple times in the past, such as here. It’s not the only ridiculous thing I’ve seen Carter say, but it was the last straw. Now I actively avoid listening to his thoughts in order to protect my peace of mind.

  297. Ron,
    Yes, there is potential for educators to do real harm AND real good. I have seen both. The harm usually comes from those educators who think they should instill values and political priorities in their students, rather than knowledge and critical thinking. The leftist speech police now trying to control everything said on college campuses are but one example of those harms. Graduates who know nothing of practical use, in spite of a degree and a couple hundred thousand dollars invested, are another example. What the problem comes down to, I think, is a wholly unjustified sense of moral superiority among educators who are on the political left. Which is, unfortunately, most of them.

  298. Ron,

    Carter spouts mostly rubbish, because he usually doesn’t understand what he is talking about.

  299. Brandon,
    In that paper’s introduction, the authors write about existing studies of whether or not the 2°C warming limit over pre-industrial temperatures can be achieved, and note:

    Invariably, results of such studies depend crucially on assumptions about two factors: first, the maximum feasible rate of decarbonization which is difficult to bind because it is implicitly tied to complex questions of geopolitics and economics.

    When studies are ‘crucially dependent’ on those assumptions, rather than on assumptions about the uncertainty in sensitivity to GHG forcing and historical GHG forcing, then you know right away that those studies, along with any papers citing those studies, are all rubbish.

  300. Oh, hey. Would you look at that. Some 24 hours after I submitted my comment, and long after most people stopped following comments in the thread, my comment at WUWT was approved. And because the timestamp shows when the comment was submitted, not when it was actually posted, nobody seeing it could ever know it was held up for about 24 hours. That’s a problem, particularly since I, as a commenter, have no way to know when the comment might get released. This allows people like Anthony Watts himself to ambush me by submitting responses I would never see unless I checked back at the page every few hours.

    I submitted a single comment to follow up on the responses I got as Richard Tol posted false information, but again, it has disappeared into moderation. I don’t intend to submit any more comments. I know Watts would likely defend this as just a feature of the automatic moderation, but if your site holds up comments for as much as 24 hours, you could at least send people a message when their comments are approved. Failing that, if you’re going to respond to the comment, you could at least include a remark like, “This comment was held up in moderation. I’ve released it.” Then people will at least understand what is going on.

    There are some difficulties to moderation, but it is not difficult to at least attempt to treat people fairly. Similarly, I think it’s telling Steven Mosher still hasn’t commented here since I pointed out the obvious error in what he wrote way back when. It appears some people just want to get to express their views, not have a real discussion.

  301. Brandon S:

    Similarly, I think it’s telling Steven Mosher still hasn’t commented here since I pointed out the obvious error in what he wrote way back when.

    Or maybe the rumor of his current health problems is true.

  302. Or maybe the rumor of his current health problems is true.

    He posted a fairly lengthy comment at Climate Etc. a couple days ago. So I guess his illness causes selective participation.

    Andrew

  303. I have no reason to doubt Steven Mosher is recovering, and I sympathize with how that could affect his life (both online and off). But given he has posted a couple thousand words on other sites over the last couple weeks, it is clear he could have commented here as well.

    Mind you, that doesn’t mean I fault him for failing to engage in detailed or lengthy exchanges here and now. But even a short comment informing people he would be absent would have gone a long way.

    (Without even looking for comments by him, I’ve seen Mosher comment at Climate Audit, Climate Etc. and Anders’s place. His absence from this one is certainly conspicuous.)

  304. Agree w/Brandon. My guess is that he enjoys being “preachy” and it’s tougher to do here. That said, if you’re going to do guest posts, own up to the content. JMO YMMV.

  305. TerryMN, it’s a shame because he made a very obvious error in this piece (I honestly don’t think he bothered to read that paper), and he got a lot of things wrong in the comments thread of the last post. Correcting these errors and discussing the details could be a great way for people to learn. Instead, we’ve had yet another case of a useless discussion in the blogosphere.

    By the way, I still think his claim in the comments on the last thread that the “scalpel” method used by BEST can’t affect the calculated trend is mind-blowing. Anyone with the slightest understanding of using breakpoint algorithms for this sort of problem should know better. That he not only said it, but had nobody point out it was nonsense (save me), shows why I usually prefer to avoid these discussions.

    I think things like UHI are interesting and enjoy learning about them. I just find it is nearly impossible to have a useful discussion about them.

  306. I just had an interesting idea for testing breakpoint algorithms and other methods of homogenization. What if one changed the location data first? As an example, what would happen if you randomly assigned a location to each station? Would that affect the global temperature trend? What about regional ones? How much would they change?

    I’m curious if anyone else has considered this before. With the BEST approach you might have to subtract out the “climate field” first, but other than that, it could make for a useful test of the “homogenization” it does. And I know it’d be an interesting test for other methodologies I’ve seen.

    Has anyone seen this done or tried doing it themselves?
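    To make the idea concrete, here is a minimal sketch of that shuffle test. It is not BEST’s pipeline (or anyone else’s); it uses a toy gridded-average estimator on synthetic data, and every name in it is hypothetical:

```python
# Sketch of the location-shuffle test: randomly reassign station locations,
# then recompute a (toy) gridded-average trend and compare. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

def grid_average(lats, lons, trends, res=30.0):
    """Average station trends within lat/lon cells, then average the cells."""
    cells = {}
    for lat, lon, tr in zip(lats, lons, trends):
        cells.setdefault((int(lat // res), int(lon // res)), []).append(tr)
    return np.mean([np.mean(v) for v in cells.values()])

# Toy network: 1000 stations with a latitude-dependent trend plus noise.
lats = rng.uniform(-90, 90, 1000)
lons = rng.uniform(-180, 180, 1000)
trends = 0.02 - 0.0001 * np.abs(lats) + rng.normal(0, 0.01, 1000)

print("true locations:    ", grid_average(lats, lons, trends))

# The proposed test: give each station a randomly chosen location, recompute.
perm = rng.permutation(len(lats))
print("shuffled locations:", grid_average(lats[perm], lons[perm], trends))
```

    If a method’s regional results barely moved under this kind of shuffle, that would say something uncomfortable about how much the location data actually constrains them.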

  307. me: “…our children’s trusted educators, [Anders]…”

    Anders, you seem like a regular guy; I share a lot of your passions for planetary science; we’ve had nice discussions. But my primary fear for the next generation is not that we leave them a cesspool environment; it’s that we leave them a society that doesn’t value freedom of thought and the other human rights that spring from it, without which civilization becomes a cesspool.
    .
    I have seen your site support banning or undermining the speech of your political opponents. Everyone here has experienced your censorship. We fully understand that you are not alone in this practice. That progressive consensus behavior puts the value of consensus enforcement above all others, including robust debate, is what Bob Carter was analyzing.
    .
    I just had a comment deleted today from an article on ABC News.com about ice cores. There was no reason given. It was pointing out that the chart Gore labeled ice cores and called “Dr. Thompson’s Thermometer” was actually Mann’s hockey stick. Not to be foiled, I posted it again 30 minutes ago and it’s still up under “Ron”.
    .
    SteveF:

    Carter spouts mostly rubbish, because he usually doesn’t understand what he is talking about.

    .
    That may be so, but the video I linked does not go into the sort of skydragon claims that offend you and Brandon. Carter simply makes a compelling case against the censorship and doctrine enforcement. It’s good. You would like it. https://www.youtube.com/watch?v=5NinRn5faU4
    .
    We have all experienced this censorship, and I am almost curious enough to try to comment on ATTP now just to see if it uploads. I actually think Anders is probably a mild example of what we are talking about. He certainly is more thoughtful and polite than most commenters at his site.
    .
    Ken, I read a very entertaining thread on an SkS post by Zeke where a commenter, Dazed and Confused, presents himself as a newbie investigating the Karl(2015) adjustments. Clearly he is feigning his ignorance, as he butters them up and then makes devastating points that make all the other enforcers look like monkeys. Zeke jumps in to try to save the thread, but this guy was very good. Even if you’re only interested in technical ideas about Karl(2015), at least read the last several comments. He makes great points for further investigation.
    .
    Brandon:

    Would that affect the global temperature trend? What about regional ones? How much would they change?

    .
    On one hand Jones(2016) says you can get a representative sampling of the globe with 100 grid points. On the other hand, they keep heavily infected urban stations. If you had the data and tools, I would love to see the trend of the 100 most pristine land stations. Also, the bucket record versus ERI versus buoy.

  308. Brandon et al, it is a good thing to hold people accountable regardless of whether they are on “your side.” In this sense, ATTP and WUWT are flip sides of the same coin. Dan Hughes pointed out a comment at Rice’s on model tuning that has to be quoted to be believed. First William Connolley:

    ATTP: what makes you think I ever *read* the SoD post😕 But now I look I must have, since I commented on some of the details. I think it does a fair job of conveying the complexity of the issue.

    The answer, I think, is that it is very detailed, and very model / centre specific. There’s also a lot of “implicit” tuning, in the sense that if something works out, you leave it alone; and if it doesn’t, you tweak it. It is also such a multi-layered process (even picking one thing I’m vaguely familiar with, like the sea ice) that you’d be hard pressed to go back afterwards and work out what all the tweaks even were (which is why the rather naive Hourdin recommendation for “better documentation” is rather naive; that’s the sort of recommendation anything like this always comes up with). The Curry post is useless, except for its recommendation to read the Hourdin paper.

    He basically admits that tuning is not documented because it’s almost impossible to do in a complex model setting. That’s very self-serving of course, and an excuse for low standards. The literature in turbulence modeling, for example, is far more reflective of what goes on to tune the models. And of course the obligatory swipe at Judith for being right all along.

    But then our friend Ken R. joins in to slight Judith. And then in the preceding comment, Rice states a flat-out untruth about those who knew the truth about GCM’s all along.

    However, these are complex models in which having parameters that will need tuning is unavoidable; can obviously aim to make it clearer how they are tuned and also maybe develop procedures that are more robust (although I suspect you’ll never get everyone to agree on the optimal procedure). I also get the sense that some seem to judge these as if they should be engineering tools (being used to design something), rather than as scientific tools that are mainly used to understand how a complex system responds to changes.

    I’ve never heard this line, but it’s just more mind reading and excuse making. Of course, there are engineers who design things and real scientists who seek understanding. Classic prejudice. GCM’s are used by the IPCC for numerical projections that are used in most climate papers about impacts, including the one Rice posted on just a day or two ago that you were critiquing. Is he really so uncritical a thinker??

  309. David Young,
    “Is he really so uncritical a thinker??”
    .
    More accurate I think is “selectively critical thinker”. This is a common trait among those who are biased. Nick Stokes, an obviously educated and smart person, suffers from the same thing… gives a free pass to published rubbish when that rubbish supports projections of thermal doom, but criticizes even the smallest uncrossed ‘t’ or un-dotted ‘i’, when thermal doom is not supported. Politics and science don’t form a good mixture.

  310. I went to a work seminar on unconscious bias. The speaker commented that the real problem is with those who think they are without bias.

  311. Ken Rice,
    Who claimed to be without bias? Everyone has normal scientific biases (read ‘The Structure of Scientific Revolutions’, Thomas Kuhn, if you haven’t already). Some are more aware of political bias than others, and think political biases are a bigger problem in some scientific fields than in others. I don’t think astronomy or quantum chemistry have much political bias, but the closer science gets to practical consequences (GMOs, chemical toxicology) the more political biases influence science. Then there is the big tamale…. climate science, where political bias is so obvious that one regularly has to blink in disbelief at the bias on display.

  312. Anders:

    …the real problem is with those who think they are without bias.

    .
    Every teacher should stress the importance of critical thinking. One of my teachers was fond of intentionally misleading the class, with a wink, to force them to find the flaw.
    .
    The rewards of conformity are the enemies of science.

  313. I went to a work seminar on unconscious bias. The speaker commented that the real problem is with those who think they are without bias.
    .
    Yes, Eddie’s been telling you that.
    .
    And of course, the wonderful irony of Blind Spot Bias is that when we think others are guilty of it, we tend to think ourselves above it and are most susceptible.
    .
    I suspect that for all our reasoning capacity, learning is an emotional response, as anything that triggers the reward centers of the brain probably is.

  314. The one thing everyone can agree on is that the other side is biased.
    .
    Unlocking the reality thus becomes an investigation on its own. Lewandowsky found skeptics are prone to delusional conspiracy ideation. However, his study was later found to be biased.
    .
    Better answer:
    1) Fund investigation teams with competing hypotheses simultaneously.
    .
    2) Have protocols of investigation and reporting established and revisited by an independent body weighted toward statistical experts.
    .
    3) Make all data created from public funds to serve the public available to the public, along with a full audit trail of adjustment methodologies.
    .
    4) All press reports of results should include comment not only from the authors but also from expert competitors and critics after review.
    .
    5) Frequent and open debate, particularly with experts from all sides on a stage at once.
    .
    Anders, do you disagree?

  315. Ron,

    Reasonable suggestions all.
    .
    But accepting them means accepting even the possibility that GW alarm is not really justified. I doubt you would get any acceptance of that on “the other side”.
    .
    It is pretty simple; climate science is dominated, and its funding largely controlled, by people with a strong green orientation who are determined to reduce human influence on Earth’s ecosystems. They would want this result independent of global warming. For these folks the magnitude of future warming doesn’t really matter; what matters is reducing ‘human caused harm to the environment’ (1C? 2C? 3C? Doesn’t matter; stop using fossil fuels!). That is why the focus is always on the extreme tail of the PDF for sensitivity, why it is claimed impossible to significantly reduce uncertainty, why only extreme climate model projections are used, why extreme sea level rise projections are considered… scare stories all.
    .
    If there were public funding of non-mainstream research on GW, then the desired ‘green’ outcome of less human impact becomes less likely. They just won’t agree to it.

  316. Ron, if you want proof of what SteveF is saying, just look at some of Anders’s posts, for example the one on climate models where I tried to sustain a series of comments. There are a lot of comments from his regulars about how to deal with someone like me in a way that doesn’t undermine the political goal. Also see the post by Richard Betts about using the “denier” word. A lot of that there too.

  317. Just to reiterate, we can throw around the bias word all we want. However, it is becoming increasingly well documented that science has a severe bias problem. I’ve given some of the references at Climate Audit recently. It’s pretty clear that climate science is at best no better than other fields. Clearly the GCM literature (which includes perhaps the majority of climate science papers) is worse than the CFD literature, which is really very bad in its positive results bias. The issue here is far bigger than any of us, or the lecturer and director of public relations for an Astronomy department who just can’t seem to get his facts straight or discuss an issue directly and with complete honesty.

  318. I will simply point out that in a sentence in which DY complains about others not getting their facts straight, he fails to get two – arguably three – of his facts straight. Well, that’s assuming that he’s referring to me.

    DY: “It’s pretty clear that climate science is at best no better than other fields.”
    .
    I would agree that the people are no better or worse. I think this is where many get it wrong, saying “people like” such and such are the problem. People as a group are almost never the problem. Culture, values, education, and standards of behavior are the problem; they are slow to correct, and only correct with good leadership. Powerful corporate interests can be just as corrupt as powerful government interests. It amazes me that people on the alarm side can feel private money is corrupting but government money is not.

  320. Anders, please know that you are mainly of interest since you are sincere enough to engage. Just tell us your opinions. And, if someone misrepresents them feel free to correct.

  321. Ron,

    please know that you are mainly of interest since you are sincere enough to engage.

    After reading the most recent comments on this thread, my interest in engaging is largely non-existent.

    Just tell us your opinions. And, if someone misrepresents them feel free to correct.

    It’s never worked before.

  322. After reading the most recent comments on this thread, my interest in engaging is largely non-existent.

    And yet you continue to engage. If the quality of your recent posts is a guide, you’d be doing us all a favor by saving the time and pixels.

  323. Ken Rice, I got your title from the Poptech writeup on you. It appears that your title has recently changed, as last year I did verify Poptech’s attribution. Your CV says you are a Reader in Astronomy, but your website says you are a professor. I would be happy to know what your current position is. What is really disingenuous of you is to just say someone else is wrong but never state the real facts as you see them. This is a pattern of deception at best. If you want to say something of value of a technical nature, I would entertain it in a serious and scientific way, but you seem to be a master of evasion on CFD, models generally, engineering and science, the replication crisis, and perhaps most amazingly what models are used for. Hint: using them for “understanding” is only a very small fraction of their usages, and not interesting in areas where health and safety are issues.

    Ron, the people in climate science are almost certainly no worse than in CFD. However, the history here of “science communication” is just appallingly bad. We have people like that in CFD, who use “colorful fluid dynamics” to sell CFD to the public and funders. It is a dark pseudo-science, using half truths and the power of suggestion to imply things that are simply not true. ATTP has of course had forays into climate science himself, mostly with Cook and Lewandowsky on “consensus,” which he later admitted was not optimal as a communication strategy. Now ATTP, feel free to correct the record. To be credible you must say more than just “DY is wrong.”

  324. Ken Rice,
    .
    If you think my comment #149867 is not a fair representation, then please explain why. I don’t mind being shown to be wrong. But I find the “Oh, why should I even try!” act both tiresome and not constructive. We face what I think is mainly a political disagreement, and this is interfering with both a substantive discussion of science and any compromise on policy. Refusing to participate will not resolve the political disagreement.

  325. SteveF:

    They would want this result independent of global warming. For these folks the magnitude of future warming doesn’t really matter, what matters is reducing ‘human caused harm to the environment’ (1C?, 2C?, 3C?, doesn’t matter; stop using fossil fuels!).

    .
    I agree that the anthropogenic aspect of the debate seems to be out of proportion to logic. If warming is harmful, the harm would be the same whether or not it’s man-caused. The harm of an asteroid strike has an even larger tail of potential danger, yet there is little talk or money being put into preventative measures despite our technological capability to do so now.
    .
    I am disappointed Anders would not take the opportunity to clarify and help us mutually understand each other in this wonderful, non-politically-censored forum Lucia provides. Yet I almost think we can take his place in his absence. As the viewpoints are not unique to Anders, we can de-personalize his position into a general category. And, to be fair, we should categorize the skeptics’ motivating views as well.
    .
    Greens mistrust private enterprise, see all the ills of civilization as coming out of modern western culture, understand the planet’s resources are limited, and fear that there is no pilot to the ship. They can point to many historical proofs of this, including wars, overpopulation, pollution, and wealth disparity. In this context AGW is an extension of the greed of the wealthy and powerful, turning a blind eye to their collective abuse and exploitation of nature and humanity. Power must be wrested from their control.
    .
    Skeptics form a somewhat more disorganized coalition, but the most affected seem to be libertarians, who are suspicious of pretexts for centralizing power, as history has shown many examples of catastrophic abuses by authoritarian regimes. Thus skeptics understand that authority can easily corrupt truth, including the practices and reporting of scientific study. In fact, climate skeptics believe that climate science failed its early audits (MM03, 05) miserably with the hockey stick affair and Climategate (2009).
    .
    Even those from opposite camps who agree on 2C warming by 2100, and also agree that sea level rise would present a problem, have opposing views about the way science should be conducted and reported, and about what degree of governmental intervention is most prudent when balancing all the consequences of each type of action. Greens want to strictly and quickly phase out fossil fuel. Others would favor fossil fuel being replaced as technologically feasible and, in any case, developing more adaptive mitigation strategies for all possible outcomes.
    .
    Greens generally oppose nuclear energy as an alternative. They are not even crazy about clean and sustainable nuclear fusion. Skeptics are OK with nuclear; while they are not pressing for alternatives like fusion in a national-priority fashion, they have no opposition to reasonable research funding. Personally, I would like to see X-prize-type awards put up for benchmark achievements.

  326. Ron,
    “Thus skeptics understand that authority can easily corrupt truth”
    .
    And they understand Hillary still can be elected President despite 25 years of blatant corruption, lies, and deception. To some, especially those on the green-tinged left, the ends justify the means… whether that is supporting Hillary or supporting “the cause,” as one of the UEA email authors so aptly described it. That, IMO, is pretty much the whole of the problem with climate science: it’s not really science at all, it’s “a cause”…. AKA “politics.”

  327. I don’t care for this much focus on Anders, as he is clearly being a troll here, and trolls are best ignored. Instead, I’d like to go back and remark on a comment way upthread where Kenneth Fritsch wrote:

    I would sometime very much like to have a discussion about testing the adjustment algorithms on these blog threads – and without the silly tit for tat exchanges. I am linking a post from Victor Venema’s Variable Variability blog that points to some of the issues that need testing and explains in general terms how some of the algorithms operate.

    Then quoted Victor Venema:

    Statisticians often work on absolute homogenization. In climatology relative homogenization methods, which utilize a reference time series, are almost exclusively used. Relative homogenization means comparing a candidate station with multiple neighboring stations (Conrad & Pollack, 1950).
    There are two main reasons for using a reference. Firstly, as the weather at two nearby stations is strongly correlated, this can take out a lot of weather noise and make it much easier to see small inhomogeneities. Secondly, it takes out the complicated regional climate signal. Consequently, it becomes a good approximation to assume that the difference time series (candidate minus reference) of two homogeneous stations is just white noise. Any deviation from this can then be considered as inhomogeneity.

    The reason I want to look at this is I find it instructive how BEST and other groups claim their breakpoint algorithms detect inhomogeneities from microsite influences, station moves, and things like that, even though there is absolutely no evidence they actually do so. I think “testing the adjustment algorithms” is a great idea, but as far as I’ve seen, such tests have always focused on things like global mean temperatures. They shouldn’t.

    Under the BEST methodology, “breakpoints” are detected even where there is no discernible difference in the station’s own record. The reason is that when using relative homogenization, you’re not actually looking for things like station moves and microsite influences. All you’re looking for is times where stations differ from one another in a certain way. Without linking such a difference to a physical cause, this is only homogenization. It is not a treatment of microsite and other unwelcome influences.

    I bring this up because it is possible these methodologies could be used to address the problems we want them to address, but it is simply false for groups like BEST to claim their “empirical breakpoint” algorithm detects the things they claim it detects. There is no evidence the BEST algorithm detects anything it claims to detect, and to whatever extent it may succeed, it certainly fails at a far greater rate. I can’t speak to the false negative rate as I don’t have a good way to detect these influences myself, but the false positive rate appears to be something like 99%.

    I don’t understand why almost nobody discusses this topic. It seems a rather important one to me. Homogenizing your data to get the “right” answer without detecting real breakpoints in your data is very different from correctly identifying unwelcome influences in your data and removing their effect.
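    To make the quoted mechanism concrete, here is a minimal sketch of relative homogenization on synthetic data: difference a candidate against a reference and scan for a mean shift. A generic illustration only, not BEST’s actual algorithm:

```python
# The difference series (candidate minus reference) should be ~white noise if
# both stations are homogeneous; a step in it flags a possible inhomogeneity.
import numpy as np

rng = np.random.default_rng(1)
n = 240                                   # 20 years of monthly anomalies

regional = rng.normal(0, 1.0, n)          # shared regional "weather"
reference = regional + rng.normal(0, 0.2, n)
candidate = regional + rng.normal(0, 0.2, n)
candidate[120:] += 0.8                    # artificial step, e.g. a station move

diff = candidate - reference

def step_score(x, k):
    """Crude t-like contrast between the means before and after index k."""
    a, b = x[:k], x[k:]
    s = np.sqrt(x.var(ddof=1) * (1 / len(a) + 1 / len(b)))
    return abs(a.mean() - b.mean()) / s

scores = [step_score(diff, k) for k in range(12, n - 12)]
print("most likely break at month", 12 + int(np.argmax(scores)))  # ~120
```

    Note what the detector finds: a statistical shift, nothing more. Nothing in the method ties that shift to a station move, a microsite change, or any other physical cause, which is exactly the point above.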

  328. If USHCN shows no warming for the USA over the last 20 years, where is all the currently claimed warming coming from?
    Which other countries?
    Or is it just from ocean warming?
    If the homogenizing is working, why does the influence of the USA stations not lead to a concentric spreading wave of normal temperatures around the world?
    Or does homogenization stop at national borders and at the edge of the sea?

    On a different subject, will the melting season run late this year and lead to the second lowest extent of Arctic sea ice?

  329. I’d like a Super Smart Warmer Analogy Wizard to explain all the ways the earth is not like a pot on a hot stove.

    Andrew

  330. angech, clearly, they’ve just stopped fraudulently altering and creating data to fake a warming trend in USHCN.

    Yes, I may be a little bitter over watching that Steven Goddard presentation yesterday.

  331. “I have no explanation for why the results of this study were misrepresented in such an obvious way, but I think it would be fair to suggest this post’s author did not do a good job of reading this paper.”

    Yup, I confused figure 7a and 7b.

    When I recover – it’s looking more like 2 more months –

    I will have to get the data from the authors.

    Essentially, I will want to know.

    1. Over the 20 months, how many actual days of UHI did they observe? In their first study they only had 11 days where they could observe SUHI. (Yes, that’s different, I know.)
    2. What is the distribution of wind directions?

    3. What is the distribution of wind speeds?

    Or you guys can write them while I take it easy.

  332. An important and bigger issue here is the disagreement, highlighted in the recent paper at Judith’s, between the tropospheric record from satellites, balloons, and the reanalyses used for weather forecasts on the one hand, and the GCM predictions of lapse rate on the other. It seems something is wrong here. Could the lapse rate theory be wrong? In any case, this paper argues that one should pay more attention to these tropospheric datasets for estimating climate sensitivity. There are a host of issues that could affect the surface, such as land use changes, irrigation, etc. UHI is probably the best understood of these.

  333. BrandonS: “…correctly identifying unwelcome influences in your data and removing their effect.”
    .
    If there are enough distinguishing characteristics to a signal, artificial intelligence could be used to target it and respond in a programmed fashion. I agree, I don’t see that level of sophistication in BEST and Menne-type relative homogenization.
    .
    I am with you and others who scratch their heads wondering how the world can be depending on data so lackadaisically handled that it took Hadley 50 years to ask where their SST data were coming from (buckets versus intakes or buoys), and seven more years before NOAA noticed that someone at Hadley had written a paper changing HADSST2 to HADSST3, which is the reasoning for the SST portion of Karl(2015). It just seems incredible. But it is also incredible that a retired mining consultant debunked a chart whose graphic almost became the IPCC logo.
    .
    There is no government paper I could find doing the obvious field tests to confirm bucket vs intake vs buoy calibration. The only paper is Matthews and Matthews from the University of Victoria in Canada. They do not support Hadley’s Thompson(2008).
    .
    Brandon, I know that Anthony Watts’s paper, submitted but not yet published, looks for pristine stations to compare for UHI influence. Watts’s team actually called the stations to confirm metadata. Many said they had never been interviewed before. I’m not sure if Watts did USHCN only. I would think UHIE would be worldwide, though Mosher claims he could only find it in the USA. The obvious rural-to-urban comparison studies find whole-number differences in average temp. Jones(2016) cites Peterson and BEST that urban trends are no different from rural ones. CRUTEM does not adjust for UHI. Hansen is the outlier authority, claiming >0.1C/century UHIE in the raw GHCN. It is still an open question, though, whether the GISS adjustment functions to do this. Climate Audit found negative UHI adjustments canceled out positive ones in 2008.

  334. I would actually like Mosher to respond to the dataset paper. What I see is a tremendous amount of wasted effort among “scientists” at trying to implement their agenda for action, and precious little serious effort on the remaining problems, which are bigger than they admit. From GCM’s to the lapse rate to land use effects other than UHI, there is so much to do to improve the science and possibly change our policy-relevant predictions. There is a lot of denial out there that these issues are significant.

    Mosher, isn’t it time for BEST to move on to some more challenging and important problems?

  335. Steve Mosher,
    Abdominal surgery really sucks, especially in the first couple of weeks. You should start feeling a lot better within a week or two.

  336. “I would actually like Mosher to respond to the dataset paper. What I see is a tremendous amount of wasted effort among “scientists” at trying to implement their agenda for action, and precious little serious effort on the remaining problems, which are bigger than they admit.”

    Their conclusions about ECMWF are unwarranted.

    http://onlinelibrary.wiley.com/doi/10.1002/qj.828/full

    “From GCM’s to the lapse rate to land use effects other than UHI, there is so much to do to improve the science and possibly change our policy-relevant predictions.”

    Not much of this is policy relevant. Or rather, what you think is important is probably not important.

    “There is a lot of denial out there that these issues are significant.
    Mosher, isn’t it time for BEST to move on to some more challenging and important problems?”

    Hmm, you might be confused. We actually finished our work on temperature years ago and have moved on to other things.

  337. “Steve Mosher,
    Abdominal surgery really sucks, especially in the first couple of weeks. You should start feeling a lot better within a week or two.”

    Ya, I had to be readmitted last week. And then another trip to the ER yesterday when I stood up, passed out, and face-planted. Not too many stitches.. haha

  338. Steve Mosher
    Of course, being old doesn’t help. Still, you will likely get better, even if more slowly than you might hope. 😉
    .
    Beats the crap out of the alternative.

  339. SM,

    Not much of this is policy relevant. Or rather, what you think is important is probably not important.

    I looked at the ECMWF paper abstract, and there are issues to be resolved, as with all datasets. This reanalysis is used to initialize weather models in preference to other sources of data. They could just use GCM lapse rates and surface observations, so that must be inferior. I think the point is that there is consistency among the 3 datasets here, and that’s a reason for a second look and perhaps for using them in estimating sensitivity.

    As for GCM climate projections, it seems they are pretty important as most impact studies are based on them. That would seem critical to policy at least to me.

    I am not as knowledgable about the lapse rate but my understanding is that the lapse rate determines the height of the tropopause which is closely related to climate sensitivity. That’s pretty important for policy I would argue.

    Of course it all depends on what you mean by policy I guess.

    What has BEST moved on to? You didn’t say.

  340. DY

    “What has BEST moved on to? You didn’t say.”

    After the temperature work, everything has been policy related

    1. Natural Gas and Fracking
    2. Communications work
    3. PM2.5.
    4. Newest thing, folks will have to wait

  341. Steve Mosher,
    Sorry to hear about the ab surgery and complications.
    I hope the rest of your recovery is uneventful.
    In my experience, pacing yourself a little slower than what feels like your “max” is a worthy goal until you really get some healing accomplished. Take it easy, in other words.

    Steve F sums it up rather well:
    “Beats the crap out of the alternative.”

    The blogosphere needs you.

  342. SM, I think all (except the greens and lecturers in astronomy) would agree that natural gas is a very good option. I personally don’t understand why the US is not converting its transportation fleet to it. It is abundant and very cheap right now. The pipeline infrastructure is very good and the conversion is easy. Just give an incentive for every gas station to install a compressor. My question is why alarmists are ignoring it. Perhaps they want to keep all remaining fossil fuels in the ground. Pretty pathetic.

  343. David,
    The investment in point-of-sale distribution is significant. The tank volume for limited range is significant. There is a need for both fuel systems in each car until natural gas is widely available. The current cost difference between natural gas and gasoline probably won’t justify the change economically.

  344. After the temperature work, everything has been policy related

    I suspect the temperature work is policy related too. Call it a hunch.

    Andrew

  345. Perhaps, SteveF. As you say, there is an issue with vehicle range: reasonably sized natural gas tanks give less range than gasoline tanks of comparable size.

    A while ago, a friend of mine who works in the oil industry and I did a cost calculation comparing natural gas and gasoline on an equivalent energy basis. Natural gas was about $4 per million BTU and gasoline about $2.50 per gallon wholesale. As I recall, natural gas was ~8 times cheaper. Natural gas has recently fallen to roughly $2.50 per million BTU.
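    A rough check of those numbers, assuming ~114,000 BTU per gallon of gasoline (a common figure; actual energy content varies by blend):

```python
# Wholesale cost per million BTU: gasoline vs natural gas (assumed prices).
GASOLINE_BTU_PER_GAL = 114_000  # assumption; varies by blend

gasoline_per_mmbtu = 2.50 / (GASOLINE_BTU_PER_GAL / 1_000_000)  # ~$21.9/MMBtu
print(gasoline_per_mmbtu / 4.00)  # vs gas at $4.00/MMBtu: ~5.5x cheaper
print(gasoline_per_mmbtu / 2.50)  # vs gas at $2.50/MMBtu: ~8.8x cheaper
```

    On these assumptions, the “~8 times cheaper” recollection lines up better with the recent $2.50 price than with the earlier $4 one.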

  346. “I suspect the temperature work is policy related too. Call it a hunch.
    Andrew”

    Nobody who does policy or pays for policy support work agrees with you.

    Your debate is with them.

    I’ve got a post on why… maybe I will post it, and you guys can debate with the wall.

  347. David,
    There is no doubt on a BTU basis natural gas is much cheaper, especially wholesale vs wholesale. But a lot depends on the local retail situation. In Massachusetts, retail prices for natural gas are not all that low; my understanding is that there are local capacity issues and (perhaps) a desire to reduce fossil fuel use in general, so not much motivation to offer very low natural gas prices.

  348. SteveF, as much as it goes against my instincts, the government could make a difference here with incentives. Just as they built the interstate system, cutting our oil usage by a factor of 2 is, I think, partially a national security issue.

  349. Steven Mosher:

    “I have no explanation for why the results of this study were misrepresented in such an obvious way, but I think it would be fair to suggest this post’s author did not do a good job of reading this paper.”
    Yup, I confused figure 7a and 7b.

    Yes, and Gergis et al just made a “typo” in their paper. It’s not like Mosher’s claim there was 0 correlation at distances greater than 12km in the worst case scenario was contradicted by anything other than a picture. I’m sure there were no words describing the results which might have clued him in that he was making a strong claim without any basis. /sarc

    I really don’t get how anyone lets this sort of comment slide. Mosher’s repeated claim (and even a calculation based upon it) wasn’t wrong because he “confused figure 7a and 7b.” The only way that claim would make sense is if we believed he didn’t read the text of the paper, or if he read it then somehow forgot what it said. You see, Mosher didn’t just say there was no correlation at distances greater than 12 kilometers, he said that was true in the worst case scenario. Neither Figure 7a nor 7b shows the worst case scenario. The paper actively describes scenarios worse than what those figures depict.

    Unless Mosher just opened the document and looked at the pretty pictures, his explanation of this error is intentionally misleading.

    But oh well. I’m out for the rest of the weekend. I’m out of town, was just at a banquet, and now I have a party to go to (dart tournaments are fun, yo). I just happened to see Mosher’s ridiculous excuse on my phone and had to pull out my tablet to scoff at it.

  350. Yes Brandon, the paper describes potential worst case scenarios.

    1. What might happen in a heat wave (no supporting data).
    2. What simulations suggest.

    In any case you might want to take a look at the distance from W04 to the edge of the urban area.

    Bottom line: none of the locations they tested would pass as a rural site for my filters.

    Hint: paradise is the center of the city.

    Second hint: they used NDVI for ufract.

  351. Keep on refusing to truly acknowledge your errors, Steven Mosher. I’m sure that will only encourage the fruitless discussions you like to complain people have.

    When people want to actually have a real discussion of things like UHI or how to construct a temperature record, I’d be happy to join in. Until then, this sort of thing is just a waste of everyone’s time.

  352. Brandon, you are correct that when people acknowledge their errors, whether it be a confused motorist or a miscalculating president of the USA, it allows everyone to quickly move on. So it’s the mature thing to do.
    .
    The world is not very mature.

  353. Ron Graf, I’ve largely stopped using the phrase “behave like an adult” because I’ve come to realize adults do the same things. I don’t know if maturity increases with age on average, but if so, I haven’t been able to tell.

    That’s not what worries me though. What worries me is it seems people don’t want other people to be mature. Immaturity seems far more popular and desirable. I don’t get it. Even the people who say they dislike childish behavior will often engage in/encourage it.

  354. A dictator/monarch never has to acknowledge mistakes because those around him risk their well-being by insisting on an acknowledgment. Moving down the power scale, an authority is less compelled to acknowledge errors because they can wield favor or disfavor based on others’ shows of respect, reinforcing their authority.
    .
    Steven can withhold his interesting hints or give a nasty snipe about your next mistake.
    .
    Anyway, it’s a balance between showing personal respect and demanding respect for the truth, an eternal conflict.

  355. Ron Graf, I agree with your first two paragraphs. I do not agree with the third. The “respect” you’re talking about in that paragraph isn’t respect, it’s obsequiousness. There are people I hold in great respect whom I criticize vehemently in the search for truth, and there are people I hold no respect for whom I refrain from criticizing because they do a good job of getting at the truth.

    The problem is people often conflate any disagreement with being disrespectful. On top of that, they treat respect as an imperative without regard for whether or not it is merited. The result is bad behavior gets tacitly approved of.

    If we can move past all that sort of nonsense, the simple point is Steven Mosher has posted all sorts of misinformation in this thread and the last. People should recognize that and correct it. That “people” includes Mosher himself. I doubt it will happen though.

    For instance, we’ve still had nobody (save me) point out that Mosher’s claim the BEST scalpel methodology cannot affect the resulting trend is utter nonsense. If that sort of nonsense can go unchallenged, what can’t?

  356. If that sort of nonsense can go unchallenged, what can’t?

    Lots of nonsense goes unchallenged in climate science. The air is thick with it.

    Andrew

  357. Brandon S: “I do not agree with the third. The ‘respect’ you’re talking about in that paragraph isn’t respect, it’s obsequiousness.”
    ob·se·qui·ous·ness: obedient or attentive to an excessive or servile degree.
    .
    I agree, much of what is framed in one’s mind as appropriate respect may be corruption. Criticism without belittlement is a good thing. Without criticism, nobody comments when the emperor has no clothes. Of course, nowadays people can parade with no clothes I think — hmm.
    .
    If the BEST scalpel methodology did not change the trend, it would still have the raw trend. The obvious question is how one knows the method is weighting reality over unreality. I suppose the answer would be to find real, known examples of biased stations and see how the method handled each one. Then create every combination of bias and test again. I believe Steven has indicated this has been done, but it would be nice to see.
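    Something like that test is easy to sketch with synthetic data: inject a known bias and count how often a simple mean-shift detector fires. A generic illustration only, not the BEST algorithm:

```python
# Monte Carlo: how often does a crude step detector fire on (a) clean noise,
# (b) an abrupt 0.8-sigma station move, (c) a gradual UHI-like drift?
import numpy as np

rng = np.random.default_rng(2)
n, trials, crit = 240, 200, 4.0  # series length, runs, detection threshold

def max_step_score(x):
    def score(k):
        a, b = x[:k], x[k:]
        s = np.sqrt(x.var(ddof=1) * (1 / len(a) + 1 / len(b)))
        return abs(a.mean() - b.mean()) / s
    return max(score(k) for k in range(12, len(x) - 12))

def detect_rate(bias):
    hits = sum(max_step_score(rng.normal(0, 1, n) + bias) > crit
               for _ in range(trials))
    return hits / trials

step = np.where(np.arange(n) >= 120, 0.8, 0.0)  # abrupt station move
drift = np.linspace(0.0, 0.8, n)                # slow, additive warming bias

print("false positives (no bias):", detect_rate(np.zeros(n)))
print("abrupt step detected:     ", detect_rate(step))
print("gradual drift detected:   ", detect_rate(drift))
```

    In runs like this, the abrupt step is caught reliably while the gradual drift mostly slips through, which illustrates why slow, additive changes are the hard case for breakpoint methods.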

  358. Ron,

    If the BEST scalpel methodology did not change the trend, it would still have the raw trend.

    I don’t believe that is correct. The scalpel is applied after weighting, IIRC. So the trend in question is the weighted trend. Part of the data is then removed and the trend recalculated. That’s the trend that doesn’t change. I seriously doubt that a ‘raw’ trend has any meaning, what with one thing and another.

  359. DeWitt, I admit that I have not studied the code in BEST yet. I am doing an online course in R first — at a slow pace.
    .
    I hope you agree it would be nice to have one of the BEST developers diagram the code for us step by step at some point, and also show the results of synthetic data testing.

  360. Ron Graf:

    If the BEST scalpel methodology did not change the trend, it would still have the raw trend.

    That’s not quite correct. There are other steps involved in the BEST methodology which could affect the trend.

    The obvious question is how does one knows the method is weighting reality over unreality. I suppose the answer would be to find real known examples of biased stations and see how the method handled each one.

    In the cases I’ve examined, the answer is, “Poorly.” Relative homogenization might be good at removing the net effects of various issues (or it might not be), but it seems rather bad at removing the effects at specific stations. Or at least, this particular approach to it seems rather bad at it. Maybe someone can come up with a better one.

    I believe Steven has indicated this has been done but it would be nice to see.

    Good luck with that. Last I checked, BEST still refuses to publish its unadjusted temperature fields online for the public to examine, even though Richard Muller claimed they’ve been published. There are all sorts of tests they’ve supposedly done whose results they have chosen not to disclose for years. And somehow people still call BEST completely transparent.

  361. DeWitt Payne:

    I don’t believe that is correct. The scalpel is applied after weighting, IIRC.

    No. That is exactly backwards. Breakpoints have to be determined first because the weight adjustments BEST uses are done as part of their iterative least-squares minimization process. That is, BEST re-weights series over and over until the final results converge. You can’t do that before determining breakpoints. I’m not sure what the approach you describe could even accomplish.

    So the trend in question is the weighted trend. Part of the data is then removed and the trend recalculated. That’s the trend that doesn’t change.

    We actually have graphs published by BEST itself showing meaningful net changes caused by the BEST “empirical breakpoint” approach. That is, BEST published figures showing its scalpel method does in fact change trends. Despite that, Mosher came here and wrote things like:

    “But anyone who understands how using breakpoint analysis to detect discontinuities knows slicing/breaking longer series into shorter ones does in fact have an effect. That is the entire point of doing it.”
    No breaking series doesnt have any effect.

    if you slice and dont weight… then you will have ZERO effect.
    because slicing does nothing.

    Which is complete nonsense. Even the numerical example he gives makes that obvious:

    You have a series
    A) 0 0 0 0 0 0 0 1 1 1 1 1 1
    Your breakpoint analysis tells you that this might be two stations
    A) 0 0 0 0 0 0 0
    A1) 1 1 1 1 1 1
    That is what slicing does.

    The first series has a trend. If you break it into the two series Mosher offers, it no longer has a trend. Clearly, breaking longer records into shorter ones can have a real effect on your calculated results. That’s true even if you aren’t changing the individual 1s and 0s.

    Though in a sense you are, since each shorter segment gets its own individual baseline value assigned to it, meaning you eventually do shift the numerical values of each segment up or down by different amounts.
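
    To make the point concrete, here is a quick sketch (my own illustration in R, using ordinary least-squares slopes; the slope helper is hypothetical, not anyone’s published code):

    # OLS slope of a series against its index
    slope = function(y) coef(lm(y ~ seq_along(y)))[2]
    a = c(0,0,0,0,0,0,0,1,1,1,1,1,1)
    slope(a)        # about 0.115: the unsliced series trends upward
    slope(a[1:7])   # 0: the first segment is flat
    slope(a[8:13])  # 0: the second segment is flat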

  362. Ron,

    I hope you agree it would be nice to have one of the BEST developers diagram the code for us step by step at some point, and also show the results of synthetic data testing.

    In the best of all possible worlds, perhaps. In this one, no. For one, who’s going to pay this person for their time? If the code was properly documented when it was written, the information is already there in notes in the code. If it wasn’t, it never will be. Reverse engineering undocumented code is problematic at best.

    And finally, I don’t care. It’s all mathturbation anyway, since the original data is hopeless for the purpose of documenting temperature trends with sufficient precision and accuracy over the full instrumental record. Angels and pins.

    Ah, yeah that’s an entirely different thing. BEST only uses jackknifing to (under)estimate the uncertainty in its results. Its scalpel process, where station records are split apart by determining “empirical breakpoints”, is the main aspect of how they homogenize their data. Very different things.

  364. Brandon, this seems really obvious but I don’t see it pointed out:
    Station moves out of the city, time-of-observation changes to mornings, changes to MMTS: all are cooling, non-climate events that produce breakpoints.

    Land use changes, city growth, minor microsite issues: all small, additive, gradual and ubiquitous, with no breaks.
    .
    So the warming non-climate events must rely on the second half of the BEST process to catch them. If it can be reasoned that UHI is limited to urban centers, then there is some plausibility that those stations would be exposed to comparisons with non-urban ones. But Peterson and Steven Mosher say that urban trends are the same as rural and even CRN stations. La mystery.
    .
    DeWitt, I know you are being a bit cynical. I hope you agree that the land station record, along with radiative transfer, forms the core of climate science. All paleo proxies are verified by correlation tests against the land record. The sea record to a large part relies on the land trend, and the models rely on both.

    the land station record, along with radiative transfer, forms the core of climate science

    Ron Graf,

    So the core is rotten. Science proceeds upon good information, not bad information.

    Andrew

  366. I have a serious question for you, Ron Graf. Do you think Climate Science is too big to fail?

    Andrew

  367. Ron,

    All paleo proxies are verified by correlation tests against the land record.

    Not all, only the useless ones. The validity of the paleo record depends on the method used. Treemometers are useless with the current method of ex post facto calibration. Isotope ratio methods, δ18O and δD in ice cores, for example, are calibrated using the local temperature and elevation where snow samples are collected from the surface for analysis.

    The instrumental surface temperature record is simply not precise and accurate enough to tease out things like UHI. GCM’s are even worse. Risk analysis could deal with these uncertainties if the economic data on damages and costs of mitigation and adaptation, which are, IMO, far more important than the temperature data for policy, weren’t so biased.

    All we really know with some confidence is that the average surface temperature appears to be increasing. Period.

  368. OT:

    It looks like Arctic Sea ice extent isn’t going to crater this year. We’re well into August and well above 2012. The minimum should be close to the same as 2015. I doubt the Northwest Passage will be open very long, if it opens at all.

  369. Ron Graf,

    I give paleo climatology about as much credibility as an old lady with a crystal ball. There is just too much unknown about past conditions to draw meaningful inferences. The big picture stuff is probably OK (e.g. the sea level high stand 5.5-7.5 meters above today 125,000 years ago), but that is about it.

  370. SteveF,

    I give paleo climatology about as much credibility as an old lady with a crystal ball.

    I don’t think it’s quite that bad. There are confounding influences on isotope ratio methods, but I think they get the big picture reasonably well, events like the Younger Dryas, for example. Dating of cores past the 14C range is likely as much of a problem as, or more than, the calculated temperature.

  371. DeWitt Payne (Comment #149923)
    “It looks like Arctic Sea ice extent isn’t going to crater this year.”

    One of the warnings Neven gives is never to make a prediction about the Arctic.
    I think both he and WUWT have fallen into traps with past predictions.
    I am watching the Polar Challenge crew battle their way through the ice with great interest.
    Real Climate and the Great White Con are putting up a side show.
    Tony was ahead on points but Jim and the boat have made a comeback.
    “I doubt the Northwest Passage will be open very long”
    Rud implies that the ship may not be fast enough to make it through, but that will depend very much on the NW passage refreeze, if and when it occurs.
    Shame Lucia did not open up a guess on the minimum; it was heading to an absolute low but is now very much a coin toss.
    There must be more multi-year ice, as the ice that is left is very thick. Will this help it refreeze and extend quickly? It did not do so last year.

  372. Brandon, I am not sure whether it is your sympathetic and caring attitude towards Steven or my questioning of the actual numbers and types of stations he quotes that has led to his continued abstinence since his illness.
    I prefer to think it is the dodging on the station issues.
    Just had a major scare today, riding behind a mate who was sideswiped badly by a truck.
    It is times like this that make the problems we talk about seem so trivial, as DeWitt says.
    One of the points about UHI is that at some stage surely any contrived forcing of the global temperature, if it exists, must level out?
    Then it would represent the actual trend, albeit half a degree higher in perpetuity?

  373. his continued abstinence since his illness

    Interestingly, someone named Steven Mosher is commenting over at Steve Goddard’s Vitally Important Not Science Because It’s Blog Science Site.

    Andrew

  374. Steven Mosher: best wishes for a rapid recovery. I’ve had my appendix out, although it didn’t reach the point of bursting first. In fact the closest I came to dying was the day they discharged me when I made the mistake of watching an episode of “A Bit of Fry & Laurie.”

    I’ve enjoyed these two posts and the comments beneath, although I can’t say that I’m any clearer about UHI. It seems there’s a tension between anecdote (which says UHI can be up to 5C or sommat) and analysis (which says it has no effect on estimates of global temp). Because habitat to some extent “engineers” local climate, I’m not really clear what defines a “pristine” site either. Here in the UK, the entire main island used to be forest from one side to the other, including the tops. The entire country is different now and so would its climate be, even absent any change imposed from elsewhere. But having said that nowhere (in the UK) should be considered “pristine”, maybe it doesn’t matter, because we live where we live, not in the wildwood.

  375. DeWitt Payne (Comment #149925)

    Isotope ratios can track temperature changes of the magnitude that corresponds with interglacial periods, but getting things reasonably correct for the relatively small mean changes of the modern warming period is a different story. I do think that the much better understood physical basis behind the chemical approaches to temperature reconstruction holds more promise for future gains in applying proxy responses to reconstructions than do tree rings and other less understood (studied) proxies.

    A related topic for me currently is attempting to estimate the trend, sampling and measurement errors for proxy responses in temperature reconstructions. I think I have a reasonable handle on the trend and sampling error, but am still searching for data I could use for estimating the measurement error. I was considering using the variation in multiple cores for tree rings. I recall seeing expositions on the measurement errors for isotope ratios but I cannot remember if that was qualitative or quantitative.

    Any suggestions would be much appreciated.
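
    One simple version of the multiple-cores idea, offered only as a sketch: treat the spread between cores at the same site in the same year as the measurement noise. Here cores is a hypothetical years × cores matrix of ring widths for one site.

    # Per-year measurement error as the standard error across cores
    # ('cores' is a hypothetical years x cores matrix for one site)
    core_se = apply(cores, 1, sd, na.rm = TRUE) / sqrt(rowSums(!is.na(cores)))
    mean(core_se, na.rm = TRUE)  # average per-year measurement error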

    Ken, Brienen(2012) says in its discussion that tree cores are mostly selected by investigator preference, which introduces a bias toward large, healthy trees. They suggest random sampling – duhhh. If only….
    .
    Ken, what are the headings of the last two columns in your table on the CA Gergis data torture thread? I think they are standard errors, but it would help to give what the expected numbers should be for pass/fail. I’m sure Steve Mc appreciates all your good work though he is not very expressive in that dept. Everyone else certainly is.

  377. Interestingly, someone named Steven Mosher is over at Climate Etc., defending the Democratic Party platform.

    Andrew

  378. I too wish Mosher the best with his health. However, perhaps he needs to have a verbositectomy before returning full time to the climate wars.

    Ron, if you are referring to my latest post with a table at CA, the last 2 columns are the latitude and longitude of the proxy sites, where + is north and – is south for latitude, and + is east and – is west for longitude.

    Here is a link to Briffa and Cook describing the problems of using tree rings for temperature reconstructions and the uncertainties involved. It shows that the dendroclimatology community is aware of the problems that are frequently discussed at these blogs. The authors do not, in this article or in other dendro publications, address the post facto selection issues, but they point to measurement problems that, combined with an acknowledgment of the post facto selection, would raise a very red flag. So far I have found only a single paper that deals with measurement error quantitatively (the second link below).

    https://www.ncdc.noaa.gov/paleo/reports/trieste2008/tree-rings.pdf

    http://people.bu.edu/dietze/manuscripts/ClarkEcolAppl07.pdf

    I write up my analysis summaries for very selfish reasons. Firstly, a write-up forces me to think through the analysis and quickly points to where more analysis is required. Secondly, I am hoping that those who might read it will provide constructive criticism that gives a basis for improving the analysis, or, as in the case of HaroldW, points to an error in the data source (which in this case was the latitudes and longitudes of 3 proxy locations). It also gives me some pleasure when I see someone else using a similar approach to analyzing a paper and the data used in that paper.

    Ken, I am not even sure how one could accurately account for the error in the low-frequency tree ring signal, which is what climate science is interested in, even though the calibration is done with the high-frequency annual rings. The Gedalof(2010) paper found that CO2 fertilization affected every tree species they studied, but all to varying degrees. I already mentioned that Brienen(2012) found an “old slow growth” bias and a “large tree selection” bias. I think these are in addition to the natural growth-progression variation specific to each species. Each one of these biases must be accurately quantified and adjusted for, each with its own error. Yet, if the science is done meticulously, I believe there is promise. A big if. And, of course, there need to be benchmarks and conventions set. Right now it’s a curve-fitting investigator’s paradise.
    .
    Ken, I hope you didn’t mind my 2 cents but I would add a header row to any Excel charts.

  381. Ron,

    I would say that selecting large trees isn’t a bias, it’s a requirement. In general, large trees of a given species are older. If you want data from 100 years ago, you need a tree that’s older than 100 years. Replace large with old and see if you still think it’s a bias.

    I’m almost finished with Montford’s “The Hockey Stick Illusion”. The boldness of the deceit and coverup by Mann, Briffa and the hockey team is just incredible. It is hard to put down. Putting aside all of the statistical failures, post facto selections, CO2-affected proxy contamination, false weighting and data infilling, the worst is the data truncation to “hide the decline.” The “divergence problem” is not just about concealing an inconvenient section of data; it reveals a false fundamental assumption: that the proxy was able to detect temperatures as warm as today’s, which is necessary for the IPCC to declare current temperatures unprecedented in the modern era.
    .
    The bravery and persistence of McIntyre and McKitrick in pushing against the level of corruption they encountered is unimaginable. I think it’s no accident that it was Canadians, considering the free-speech-chilling PC environment in the USA now. To know that the National Academy of Sciences could fail so miserably in its review of the M&M claims, acting like uninformed jurors trying to “split the baby” rather than investigators of whistleblower claims, just further chills the auditing of science or any other government-sponsored activity.

    DeWitt, 100 years old is young for a tree in paleoclimatology. But they need a random selection in order to properly average natural growth progressions and normalize the population. This is particularly important in regional curve standardization (RCS). If all the trees are the same age, then one cannot tell whether the low-frequency changes are due to changes in GMST or to the normal growth progression of that species. If one chooses large, healthy trees, one skews the population toward trees with the most successful recent growing seasons, when those trees might have had a tough childhood with lots of competition for sunlight or root space. This would falsely make recent times look more favorable to the population as a whole.

  384. DeWitt,
    Thanks for that link. I had been wondering about tree lifetimes, because carbon sequestration by trees is (apparently) a significant net sink, and one which will grow with rising atmospheric CO2. An order of magnitude estimate for the “saturation” of that sink would be the average lifetime of a tree. (Pay no attention to the arm waves about CO2 not increasing growth: free air enrichment studies in forests consistently show increased growth.)
    .
    Then there is the greening of arid areas due to reduced water usage at higher CO2… but we don’t like to speak of such things. 😉

    DeWitt, I agree that 100 years is old. I disagree that one need sample only large trees. First, dendrochronologists choose forest species that are long-lived, like Bristlecones and Huon Pines. Second, they need sampling of all ages to statistically account for natural life-cycle growth progression. Sampling trees of all ages eliminates bias in this analysis.
    .
    Steve, regarding CO2 fertilization: the Brienen(2012) and Gedalof(2010) papers I linked above both acknowledge it is obviously a factor, but they try to minimize it, saying it’s not too confounding for the reconstructions. Interestingly, I just read in The Hockey Stick Illusion that the stripbark Bristlecones were 20th-century high growers not from CO2 fertilization as assumed, but because they grew in an oval shape due to one side having no bark. They were cored the long way. When cored at 90 degrees, the short way, the 20th-century blade droops.
    .
    I know I am reading one side of the story, but Montford makes the whole field look ridiculous in every way. Mann(2008) could claim to be robust to the darn Bristlecones only by also feeding in an upside-down lake sediment (Tiljander) proxy. The mistake was repeated later by one of Mann’s students, Kaufman, who acknowledged and corrected it, but Mann never did. How embarrassing.
    .
    Mann’s response:

    The claim that “upside down” data were used is bizarre. Multivariate regression methods are insensitive to the sign of predictors. Screening, when used, employed one-sided tests only when a definite sign could be a priori reasoned on physical grounds. Potential nonclimatic influences on the Tiljander and other proxies were discussed in the SI, which showed that none of our central conclusions relied on their use.

    https://agwobserver.wordpress.com/2010/06/28/tiljander/
    .
    The consensus defends Mann on this: after admitting the proxy was “accidentally” used upside-down, they say M&M’s criticism was wrong and misleading because the data itself was upside-down, in that larger values meant cooler. It was the data’s fault, and M&M were rude not to point this out more clearly (I guess).

  386. Ron,
    “Steve, regarding CO2 fertilization: the Brienen(2012) and Gedalof(2010) papers I linked above both acknowledge it is obviously a factor, but they try to minimize it, saying it’s not too confounding for the reconstructions.”
    .
    I am not much interested in the reconstructions; I think they are very dubious at best, and more typically based on rubbish. The strip bark pines are but one instance of the rubbish that goes into them. The Yamal ‘dirty dozen’ larch trees Briffa used to ‘show’ a very long temperature history is another. The field so lacks intellectual rigor that it is a bit shocking anyone, including people who work in the field, takes it seriously.
    .
    I am, however, interested in how the oceans and land plants will impact the trajectory of atmospheric CO2. The lifespan of trees impacts that. Funny how there aren’t many studies of tree-ring width where the trees are not at the limit of their temperature range. You know, where CO2 fertilization would be expected to have an impact.

  387. Hi SteveF,

    “where the trees are at the limit of their temperature range” ?

    Could you explain what limit means in this usage?

    SteveF, the Gedalof paper does suggest study is needed to understand tree biology at different possible future CO2 concentrations, up to 1000 ppm. Gedalof gives no answers though.
    .
    I don’t see the use of a carbon sink unless it’s accompanied by a plan to keep the sequestered carbon from decaying back to the atmosphere.
    .
    If the modern warming is partly a cousin of the theorized MWP and RWP, and/or the temperature record has been inflated with bias, and thus ECS is only 1.2-1.5C, I am not sure we want to sequester CO2.
    .
    Even if ECS is 3+ I would favor engineering a temperature control knob we could control rather than sequester CO2.
    .
    I agree it’s worth studying the CO2 effect on growth-limiting factors in various long-lived tree species. I haven’t given up hope on dendrochronology even though it’s been abused with poor science practices to date.

  389. Ron,

    You’ve heard of the timber line? That’s the elevation above which trees don’t grow because it’s too cold. For tree-ring temperature studies (which I agree with SteveF are bogus), you sample trees growing at elevations just below the timber line. The timber line elevation varies with species, but the principle is the same.

    Since most plants don’t have a monotonic growth curve with temperature (they grow slowly at both high and low temperatures), you want to be well away from the optimum growth temperature.

    This is another subject that has been done to death, IMO. I don’t believe that tree rings can ever be a reliable source of temperature data. There are too many confounding variables.

  390. J ferguson,

    See DeWitt’s reply. Trees are chosen for study based on being very close to the lowest temperature at which they can survive, so even a small change in temperature is supposed to change their growth rate… then the individual cores are cherry-picked according to post facto screening criteria to give ‘the right results’. It’s rubbish. As Ken Fritsch has often noted, screening criteria, if legitimate, must be set up BEFORE any samples are collected and must be based on a rational (and demonstrable!) connection to temperature.

  391. Ron,
    “..the Gedalof paper does suggest study is needed to understand tree biology at different possible future CO2 concentrations, up to 1000 ppm. Gedalof gives no answers though.”
    .
    Ya well, maybe (just maybe) that work has not been done because there is a great reluctance in climate science to admit ANY positive influence of increased atmospheric CO2. More rapid plant growth is an obvious positive influence the field seems to be trying very hard to ignore or discount. Greening of arid areas (lower plant moisture requirements at higher CO2) is another. As usual, it’s politics influencing how ‘the science’ is done. Do I sound jaded? If so, that is because I am jaded by the politics in climate science.

    In light of the extensive discussions that were had with respect to nuclear power in the past, some may be interested in this article from the LA Times detailing what seems to me to be a comparatively small initial accident that has cost $2,000,000,000 to remediate. I am not strongly opposed to nuclear, but I am skeptical of the government’s competence to regulate it. See http://www.latimes.com/nation/la-na-new-mexico-nuclear-dump-20160819-snap-story.html

    JD

  393. JD Ohio,

    Plutonium production waste, which is what Hanford did/does, is nothing like nuclear electric power plant waste. You have to reprocess massive amounts of uranium that has spent very little time in a reactor to isolate plutonium. Hanford produced, IIRC, hundreds of thousands of gallons of highly acidic liquid waste containing the relatively short half-life fission products left over from this process.

    IIRC, the acid used is nitric acid, which is a powerful oxidizer. The decision to use an organic material as an absorbent was a mistake along the lines of Takata using ammonium nitrate in their air bags, Japan putting the diesel fuel tanks on the ocean side at Fukushima, and running the Chernobyl reactor at conditions known to be unstable. I’m betting a bean counter made that decision, overruling chemists and engineers who probably knew better.

    France, OTOH, which reprocesses fuel from breeder reactors, solidifies its waste to a very small volume. We could do that too, but we don’t, thanks to that idiot Jimmy Carter.

    $2 billion is peanuts. Elon Musk has received $5 billion in subsidies so far.

  394. DP: “I don’t believe that tree rings can ever be a reliable source of temperature data. There are too many confounding variables.”
    .
    Unlike the hydrosphere, trees can be experimented on in controlled or well-known conditions. The variables could be untangled if there were good administration immune to corruption by political aims. Though I agree it is possible that tree ring widths are reliable in too few cases to give the statistically needed Earth-grid sampling.

  395. Ron,

    Maybe, if you have a century or so to spare. You have multiple independent variables you can’t control or measure ex post facto. But you only have one dependent variable, the ring width or density, that is probably neither a linear nor a monotonic function of any of the independent variables. IMO, the only reason that work is done in this field is because the data analysis can be tweaked in subtle and not so subtle ways to yield the desired results.

  396. DeWitt: Good to know that the most recent accident involved waste different than that generated by nuclear electric plants. On the other hand, your explanation of the incompetence in New Mexico dovetails with my experience. One of my father’s business associates was appointed to the Ohio Public Utility Commission about 15 years ago, and he knew no more about Utility regulation than your aunt Hilda or your uncle Fred. He undoubtedly made a contribution to the Governor’s political campaign and was rewarded with the appointment. My father passed away about 5 years ago, so I can mention this.

    JD

  397. DeWitt: “IMO, the only reason that work is done in this field is because the data analysis can be tweaked in subtle and not so subtle ways to yield the desired results.”
    .
    You do realize that you are questioning the integrity of dozens of PhDs and top university departments from around the world, on top of the National Science Foundation and the National Academy of Sciences? I think you are. A Nobel Prize was awarded to the IPCC in large part due to the MBH reconstruction. What you are spouting is conspiracy buff fantasies, DeWitt.
    .
    OK, I have to admit I’m on your side. Welcome to the camp. 😉

  398. Ron Graf,
    You are supposed to grab a tinfoil hat at the entrance. 😉
    .
    You want to know how crazy climate science is? Increases in atmospheric CO2 increase plant growth under virtually all circumstances (drought conditions, high-fertility soil, low-fertility soil, in greenhouses, free-air field studies, etc.), and this fact has been known (and used commercially!) for 60 years. Material balance shows clearly that a substantial fraction (about 25%) of CO2 emissions is being taken up by plants; satellite surveys conclusively confirm a large increase in chlorophyll worldwide. Yet does climate science try to figure out how future plant growth will impact atmospheric CO2?? Hell no! Just about the only papers published on plant growth focus on how terrible CO2 and future warming will be for plants…. all of which is contrary to both factual reality and common sense. The field is sick, very, very sick.

    Thanks DeWitt and SteveF for the reminder that the ring chronologies depend on trees at the treeline. I would have thought that the entire concept of rings memorializing temperatures would have foundered on this alone, but alas.

    Is there a chance that this horse has died? Does any of the CAGW madness still depend on ring chronologies?

    I ask because a commenter at CA mentioned that he had been unable to obtain funding for an (apparently physical) investigation of isotope-premised proxies. A review comment was that there were enough proxies out there already. I thought it a bit strange that this comment went unremarked.

    Does anyone here know of any other possible proxies which are suspected but have to date gone uninvestigated?

  400. J ferguson,
    Reconstructions have become, IMO, a backwater of climate science, with good cause. The vast uncertainty in those reconstructions (many times highlighted by evil people like Steve McIntyre) appears to have forced the IPCC to downplay reconstructions compared to when the ‘hockey stick’ graph was everywhere. I have not heard of new proxies being refused funding, but as I said, it’s a backwater, so it is possible.

  401. “Does anyone here know of any other possible proxies which are suspected but have to date gone uninvestigated?”
    .
    Good point, in that funding that went forward on poorly tested proxies perhaps poisoned the well or dried up funding for good potential candidates. I am aware of oxygen and hydrogen isotope proxies for ice, but I wonder why they can’t use them in trees, which are absorbing water (and CO2) directly from the atmosphere. Cellulose is C6H10O5. Even if the resolution is very poor it could be used as a background correlation.
    .
    DeWitt, Steve, or anyone, why can’t they do isotope analysis of cellulose? Maybe they have.

    In the link below you will find the various proxies used for temperature reconstruction. Most reconstructions post facto select proxies, whether those proxies are tree rings or something else. Using proxies selected a priori on a reasonable, physically understood basis is required of all proxy types. All these proxies are susceptible to other climate variables and to sampling and measurement errors, and these issues have to be sorted out before attempting valid temperature reconstructions.

    https://www.ncdc.noaa.gov/data-access/paleoclimatology-data/datasets

    What I found very interesting and informative was the Mann (2008) temperature reconstruction, where the authors noted divergence not only with tree ring proxies but with non-tree-ring proxies also. It was commented on in the paper only in passing, but to me it indicates a possible symptom of the post facto selection of proxies, and that any number of these proxies are not valid or reasonable thermometers to use for historical temperatures – or at least not by the methods used in these reconstructions.

  403. Ron,

    The range of temperature in ice cores is a lot larger. It’s possible that δD and δ18O wouldn’t be sensitive enough to detect temperature differences at annual resolution in tree rings. I suspect it’s way more expensive than measuring tree ring density or thickness too.

    Ken,

    Dating of sediment cores is somewhat ad hoc, but you don’t have hundreds to choose from, so isotope ratio measurements aren’t subject to the same sort of ex post facto selection. Or at least I don’t think they are. As I remember, Loehle put together some data from measurements that didn’t require the mathematical shenanigans needed for tree rings, speleothems, for example.

  404. DeWitt Payne (Comment #149959)

    If all the proxy data available is used and the series length goes back sufficiently far in time to make a reasonable comparison with modern times, then, of course, post facto selection is not an issue.

    Without sorting out the measurement and sampling error, and without some independent evidence that the proxy responds to temperature without overwhelming interference from other climate variables biasing the results over time, even avoiding post facto selection does not mean that the resulting temperature reconstruction is valid.

    Look at the recent posts at CA whereby someone claiming to use all the proxy data is not actually doing so, since the critical out-of-area proxy data remains exclusively post facto selected. There are all sorts of ways of gaming the proxy selection system.

  405. Excerpted from PaulDennis2014 comment on the first (recent) JoelleGergis post at CA

    The climate change industry is vast and populated by people with different skills and backgrounds. Most are not in the natural sciences. e.g. geographers, social scientists, economists, medical practitioners etc. At present it is these groups that are framing the debate. I have made at least four attempts in the past five years, all without success, to secure funding for investigations into the properties of isotope based geothermometers. All external review comments have been strongly positive and supportive, with the exception of one – ‘why do we need another palaeothermometer when we have enough already?’. It leads one to question who is making decisions and on what basis?

    I see it now has an observation from kennethF.

    I’d taken Paul’s meaning to be that there was no support for discovery and qualification of new proxies, only support for data-mining – torturing the existing datasets.

    Am I missing something?

  406. Kenneth Fritsch:

    Look at the recent posts at CA whereby someone claiming to use all the proxy data is not actually doing so, since the critical out-of-area proxy data remains exclusively post facto selected. There are all sorts of ways of gaming the proxy selection system.

    Have you been able to reproduce the results described over there? I’m not sure if I’m just doing something wrong, but I get very different results. I’ve remarked on this several times (including in an e-mail discussion), but so far, I haven’t been able to find any explanation. I haven’t even gotten an answer as to just what data is being used.

    Again, the problem might well be on my end, but as far as I’ve seen, nobody has tried to replicate the results. It seems most people just read something in a post and assume it is true.

    In regard to better proxies: if the science is “settled”, the last thing wanted by the climate committed is something unsettling.

    News Flash – CNN reports that scientists are now 97% agreed that man is warming the planet and we are “experiencing temperatures never seen before.” [In the video but not in print, where they simply imply “never before” by citing the 14 out of the last 15 years.]

  409. Ron,

    And throwing in “shattered” a couple of times in the story really makes for great news coverage.

    Just add in “zombie”, “massive”, “major disaster”, and such and this guy should win a major journalism award.

    Andrew

  410. Ron,

    As the saying goes: Never and always are two words you should always remember never to use.

    It was a whole lot warmer at the peak of the Eocene, not to mention the Paleocene-Eocene Thermal Maximum. Even the claim that the current rate of temperature increase is unprecedented may not be true. The resolution of the timeline for the PETM, at 55 Mya, isn’t all that good.

  411. Brandon Shollenberger (Comment #149962)

    Brandon, I have been working with the original 27 proxies that Gergis used in the original, rejected submission. I think most of the proxies carried through to the current published paper.

    The easiest way for me to combine the proxy data was to standardize all the individual series by taking anomalies over the base period in which the proxies all had common data points and then dividing by the standard deviation of the anomaly series. The composite, final combined series then merely required taking the row means of the matrix, with the series in columns and the years in rows. (A sketch of this step is below.)
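
    A minimal sketch of that standardize-and-composite step (my own illustration, not Ken’s actual script; proxies and base are hypothetical names for a years × series matrix and the rows of the common base period):

    # Standardize each proxy over the common base period, then composite
    standardize_composite = function(proxies, base) {
      anom = sweep(proxies, 2, colMeans(proxies[base, ], na.rm = TRUE), "-")
      std  = sweep(anom, 2, apply(anom[base, ], 2, sd, na.rm = TRUE), "/")
      rowMeans(std, na.rm = TRUE)  # row means across series give the composite
    }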

    No matter how the Gergis data and methods are analyzed, in my view and in the realm of temperature reconstruction where data torture is accepted and commonly employed, Gergis has to be guilty of extreme data torture. In some of these discussions where a poster concentrates on only one or two of these torture methods the totality of the problem does not get its due. I even read blog exchanges where the arguments were about the official definition of data torture and the number of google hits on that term.

    If the observer knows the statistics involved in these attempted temperature reconstructions, like Gergis, the only real question about Gergis is how the authors get away with it.

  412. Ron Graf (Comment #149964)

    That GHGs cause global temperatures to rise is something with which almost all scientists, meteorologists and knowledgeable skeptics would agree. When Chad says it has become more difficult to be skeptical about global warming and to fail to see the urgency of the problem, he is taking the vague consensus argument about AGW and what it means for the future of mankind and spinning it in a very political manner. That probably makes for job security at CNN but does not provide anything new – unless you thought these cable stations were politically neutral.

  413. Kenneth Fritsch:

    Brandon, I have been working with the original 27 proxies that Gergis used in the original, rejected submission. I think most of the proxies carried through to the current published paper.

    .

    Quite a few of the proxies were carried over, but there were additions. More importantly, the proxies which were screened out were included in the data set this time. As you may recall, only the data that passed screening was published for the 2012 paper.

    But my problem is much simpler. Steve McIntyre posted results which I simply cannot replicate. He claims an important proxy was screened out by the arbitrary requirement that there be 50 years of data over the 1921-1990 period (in the corresponding “local” temperature series), but I find well over 50 years of data. When I do the correlation test he performed, I get far lower values, ones which wouldn’t pass screening. I’ve been trying to figure out why my results are so wildly different from his, without any success, since before his post went online.

    I’m guessing it is something I am doing wrong, but… I don’t know. In my experience, people are quick to accept any criticism of something they dislike without putting any effort into verifying it. And while you say:

    No matter how the Gergis data and methods are analyzed, in my view and in the realm of temperature reconstruction where data torture is accepted and commonly employed, Gergis has to be guilty of extreme data torture. In some of these discussions where a poster concentrates on only one or two of these torture methods the totality of the problem does not get its due. I even read blog exchanges where the arguments were about the official definition of data torture and the number of google hits on that term.

    I don’t think it helps matters to focus on the 2012 paper. People try to change topics all the time as a simple diversionary practice. The best way to combat that is to try to keep one’s focus as narrow and on-point as possible. Or you can just ignore people who do that. Diversionary tactics only work if people actually get diverted.

  414. Brandon Shollenberger (Comment #149969)

    Brandon is your issue about replication with the Law Dome ice core proxy? I believe there was issue of correlation depending on whether the instrumental series used for correlation was local or teleconnected to the Australasia temperatures.

    The original Gergis submission remains important because that paper was well on its way to publication when Jean S at CA tried to duplicate the detrended correlations claimed by the Gergis paper and reported the failure at CA. I believe that the Gergis 27, for which I had data and on which I have been reporting, passed the correlation test that the Gergis authors thought was performed on the detrended residual series but was not. I had also commented at CA that I was surprised by the high correlation the Gergis paper was claiming between the proxy series and the instrumental series, and that such a correlation is more typical of a series that has not been detrended. The detrended correlations dropped the selection rate to something like 6 or 7 out of 27.

    I also commented at CA about the 7 proxies that were actually outside the original Gergis stated boundaries for their reconstruction. Forgetful Ken Fritsch then used the Gergis latitude and longitude locations in an analysis reported here that had to be corrected when HaroldW pointed out the error.

    I think what piqued SteveM’s interest in Gergis 2016 is that through data torture, i.e. very improper statistical manipulations, the Gergis authors were able to reclaim, as I recall, nearly all of the original 27 proxies through other arbitrary selections.

    As an aside, I am working on including sampling and measurement errors, along with the usual time variation error, in confidence intervals for the Gergis 27. I have the trend and sampling error, and I think I have a good handle on estimating the measurement error for the tree ring proxies at least. In looking first at the Mt Read tree ring measurements in the database here:

    http://www.ncdc.noaa.gov/data-access/paleoclimatology-data/datasets/tree-ring

    I found that the tree cores were mislabeled, with the same suffix used for 2 different cores from the same tree in some cases. The measurement unit for tree ring widths also went inexplicably from thousandths of a millimeter for some trees to hundredths of a millimeter for others.

  415. Brandon (#149969) –
    Looking into the question of Law Dome d18O correlation against local temperature, I don’t get either your answer or Steve McIntyre’s!

    For local temperature, I’m using the HadCRUT3v gridcell with NW corner at 65S, 110E. Temperatures are present from Jan 1957 on, so I don’t agree with your contention that there are more than 50 years of data in 1921-90. Assigning the temperature average for the 1957-58 summer (SOND’57, JF’58) to 1958 – per McIntyre’s comment – I get a correlation with the Law Dome proxy slightly above 0.54, not quite McIntyre’s stated 0.529. [Law Dome detrended over 1931-90, HadCRUT detrended over 1958-90, although the un-detrended correlation coefficient is much the same.] While I didn’t figure the degrees of freedom for the t-statistic, McIntyre claimed 37 degrees, which puzzles me because there are only 33 years of data in 1958-90.

    Your turn…

  416. HaroldW, thanks for the comment. It looks like I may have gotten North and South flipped due to the HadCRUT ASCII and NCDF files being flipped opposite of one another. I looked at the file format description on the page I downloaded the data for, and it clearly showed the numbers were labeled for North to South. I hadn’t considered that it might only describe the ASCII file with the NCDF file being organized differently.

    I’ll post again when I’ve had time to verify that. It might help me replicate some of the other Gergis et al numbers too. I’ll also see if I can figure out why the NCDF file I have is missing nearly all the winter data if I use the correct spot. That doesn’t matter for anything related to this paper, but it is an oddity. The version 4 data set I have isn’t missing those values.

    For what it’s worth, when examining my correspondence with Steve McIntyre, I see that in my original efforts I had used the opposite orientation to the one I’ve been using. My results didn’t mesh with McIntyre’s though, and apparently, after a power outage destroyed all my work on this paper, I mixed up the orientations on my next attempt.

    By the way, I believe McIntyre is using data for any year in which there is at least one month of summer data. Doing the same, I get data for the years 1947, 1948, 1956 and 1957 that you don’t seem to have included. That’s still only 37 years of data though, so I don’t know how he got 37 degrees of freedom, especially not after adjusting them for autocorrelation. When I try doing the same, I only get a correlation of .369. I assume I’m still doing something wrong, which is why I asked for the specific data McIntyre used several weeks ago. He’s recently said he plans to tidy up his code and post it once he gets back from a trip, so hopefully we can all reconcile our results at that point.

    By the way, my code shows if I use a lag of -1 or +1 for the Law Dome series, I get negative correlations. I don’t know that that means anything, especially not when I’m getting different results from you two, but it did seem interesting that the correlations might change that much just by lagging a series like the authors did.

  417. Kenneth Fritsch:

    Brandon is your issue about replication with the Law Dome ice core proxy? I believe there was issue of correlation depending on whether the instrumental series used for correlation was local or teleconnected to the Australasia temperatures.

    If one uses the field mean for screening purposes, I can replicate the authors’ correlation scores nearly perfectly. I believe there were one or two proxies where the results inexplicably differed, but otherwise, I’ve reproduced the authors’ results. It’s the local correlation scores that I’m having difficulty replicating. My response to HaroldW discusses the example I was referring to.

    The original Gergis submission remains important…

    Maybe, but if you’re going to discuss it, you should clearly delineate what you’re discussing. When people are primarily discussing results for the 2016 paper, jumping in with discussions of the 2012 paper will muddle things if you’re not careful. That’s particularly true when your comments appear to be responses to people discussing the 2016 results, responses directed at what they’ve said.

  418. Brandon (#149972) –
    You’re correct that I was only considering the period of continuous temperature entries, omitting the sporadic earlier ones. My effort was off-the-cuff in Excel, and I shied away from the mass of unmeasured months. That probably accounts for the slightly different correlation coefficient I obtained, too.

    It’s possible that the autocorrelation is so low that it doesn’t reduce the degrees of freedom. But 37 or 33, I doubt that it makes a material difference to the significance.

  419. HaroldW:

    You’re correct that I was only considering the period of continuous temperature entries, omitting the sporadic earlier ones. My effort was off-the-cuff in Excel, and I shied away from the mass of unmeasured months. That probably accounts for the slightly different correlation coefficient I obtained, too.

    It probably explains the small difference between your results and those Steve McIntyre reported, but the difference between your results and mine seems too large for that to explain it. Besides, my results don’t match McIntyre’s either, so I’m probably messing something up again.

    It’s possible that the autocorrelation is so low that it doesn’t reduce the degrees of freedom. But 37 or 33, I doubt that it makes a material difference to the significance.

    The autocorrelation in the residuals of the two series (as I’ve calculated it) is quite small, so your instinct may be right. That doesn’t explain things though. Even with the four added years of results, you still wouldn’t have 37 degrees of freedom. The degrees of freedom for a correlation test are, if nothing else is affecting them, two less than the number of observations. That means 37 degrees of freedom would require 39 years of data.

    I don’t know that these sorts of things would affect any calculations in any meaningful way, but it bugs me that there are unresolved questions. I think serious accusations should be based on clear evidence and be easy to check. Details like how one determines what data to include shouldn’t be left to the imagination, especially not in a discussion where difficulty replicating results is a theme.

    Oh well. Even if I ignored all that, I don’t see how one justifies making serious accusations based on the fact that a person didn’t include a proxy series that only had ~30 years of data in the calibration period. Sure, Gergis et al could have extended their calibration period beyond 1990, but that would have meant something like a third of their proxies ended before the calibration period. It seems pretty reasonable to me to stop the calibration period at 1990 rather than have that.
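
    A quick sanity check of that degrees-of-freedom convention (a sketch; the data here are random, used only to read off the reported df):

    # cor.test on n complete pairs reports df = n - 2
    set.seed(1)
    x = rnorm(38); y = rnorm(38)
    cor.test(x, y)$parameter  # df = 36, i.e. 38 - 2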

  420. “I don’t think it helps matters to focus on the 2012 paper”?

    I think Ken was actually referencing the 2016 paper.

    The whole circus here is about the Gergis focus on the 2012 paper and its seeming phoenix-like rebirth.
    Your comment “By the way, my code shows if I use a lag of -1 or +1 for the Law Dome series, I get negative correlations” confirms Ken’s comment that “Gergis has to be guilty of extreme data torture.”

    You have pointed out other examples at your blog. Would it be too much to ask for a potted summary of the examples over here for comment on this site?

  421. angech:

    “I don’t think it helps matters to focus on the 2012 paper”?
    –
    I think Ken was actually referencing the 2016 paper.

    I’m not sure what makes you think that, but pretty much nothing he’s said has been true of the 2016 paper. The graphs he made were strictly for 2012 data, and most of his commentary has been about that data. Then again, a fair portion of what he’s said has been muddled or wrong, so maybe it’s understandable that people reading it might get confused.

    Your comment “By the way, my code shows if I use a lag of -1 or +1 for the Law Dome series, I get negative correlations” confirms Ken’s comments of “Gergis has to be guilty of extreme data torture.”

    I don’t see why you would think this. One of the main topics people have chosen to discuss is that Gergis et al (2016) didn’t use the Law Dome proxy. I don’t see how the Law Dome correlation to a particular grid cell changing in such a manner would confirm “extreme data torture” when, no matter what happens, the proxy wouldn’t get used anyway.

    You have pointed out other examples at your blog. Would it be too much to ask for a potted summary of the examples over here for comment on this site?

    I don’t know what examples you might have in mind, but to be honest, the main reason I haven’t written more on the Gergis et al paper is that I haven’t been able to replicate the results Steve McIntyre has posted, despite having talked to him about those results prior to him posting them online.

    It’s difficult enough to try to replicate Gergis et al’s work. I can’t find the motivation to try to replicate the work of people criticizing it as well. And if most of the discussion being had involves results I can’t replicate, I don’t see much value in me participating. It’s even worse when I then have to spend time correcting all sorts of mistakes from people criticizing the papers apparently without having even bothered to read them.

    I’m still working on trying to replicate the paper’s results, but my motivation at this point is low. It just seems like people are only interested in finding ways to dismiss the work, not in understanding it or why it is actually bad.

  422. It’s difficult enough to try to replicate Gergis et al’s work.

    Everyone join in with me:

    CLIMATE SCIENCE SUCKS

    Andrew

  423. Brandon (#149976) –
    A quick update: I included all years with at least partial winter observations, and the 1931-90 correlation of (detrended) local temperatures and Law Dome d18O now stands at r=0.530, very close to McIntyre’s reported 0.529. The difference probably is due to my using a HadCRUT3v dataset from mid 2012. (Saved when I was looking at Gergis et al. 2012.)

  424. HaroldW, thanks for the update. It seems I am still the outlier. I’m not sure what’s going on. I’m using the last version of HadCRUT3v published on their site (downloaded this month), and I only get a correlation of .366. If I switch to HadCRUT4, I get a correlation of .439.

    I’d prefer to use the newer version of HadCRUT since the older one I have is missing all the values for this grid cell from May to September, meaning I’m missing at least one month of data for each winter season. I don’t know if Gergis et al’s data set was missing the same values. Are yours? If for some reason you and Steve McIntyre are using a HadCRUT3v data set which isn’t missing that data, that might explain why you guys get similar results to one another.

    By the way, I’ve checked to make sure this isn’t just because of a corrupted download. I get the same result any time I download the file from here:

    https://crudata.uea.ac.uk/cru/data/crutem3/HadCRUT3v.nc

    Load it into R and check the series for grid cell 59, 5. I’ve repeated the process three different times on two different machines. I don’t know what’s going on. I can’t figure out why five months of each year would be missing.

    If anyone wants to check to see if they have the same issue, I can provide the code I use. I’m really baffled by all this. So far, I haven’t even been able to confirm people are using the same data as one another.

    Actually, I’ll just go ahead and post the code right away. Mind you, this isn’t a tidied script or anything. I mostly use R through its command line. The result is that when I combine the various snippets of code, they’re not very pretty.

    The first step is to read the data. I’m assuming you already have the NCDF file from the link in my comment above in your working directory.

    library(ncdf4)  # nc_open() and ncvar_get() come from the ncdf4 package
    HAD = nc_open("HadCRUT3v.nc")
    t = ncvar_get(HAD, "temperature")
    lon = ncvar_get(HAD, "longitude")
    lat = ncvar_get(HAD, "latitude")

    This gives you the full, gridded temperature data set in the variable t. It has three dimensions, with the first being longitude, the second being latitude and the third being time. We need to figure out which grid cell to use, and we figure the center of the grid cell for the proxy should be 112.5E, -67.5N. This gives:

    lo = which(lon == 112.5)
    la = which(lat == -67.5)
    dump = ts(t[lo,la,], freq=12, start=1850)

    The first two lines there figure out the grid cell to use. I’ve started using that approach to make sure I don’t make an error when manually looking up coordinates. The third line extracts the data for that grid cell and puts it in a time series so it’s easy to examine.

    I know there should be a simple way to extract the winter data from here, but I couldn’t think of it so I wrote a quick loop:

    # NB: dump2 must already exist; it is defined in a follow-up comment below
    # (dump2 = window(dump, 1920, 1991))
    scratch = NULL
    for (i in 1:70){
      scratch[i] = mean(dump2[c(9:14) + i*12 - 12], na.rm=TRUE)
    }

    Perhaps someone more experienced with R can tell me how terrible this is and provide a better solution (one possible alternative is sketched at the end of this comment). Regardless, this leaves you with 70 winter averages for the grid cell in question. You can then download the proxy table from Gergis et al, or, as I prefer, just copy it to your clipboard and read it in. Either way, you can then run a correlation test on the unscreened data:

    raw = read.table("clipboard", sep=";", header=TRUE)
    # cor.test drops incomplete pairs itself; the na.rm argument is ignored
    cor.test(scratch, raw[923:992,3], na.rm=TRUE)

    923 and 992 correspond to the years 1921 and 1990. It would be tidier to assign these to variables, but since I was testing different lags, this was quicker. Definitely something which can be improved, but it gives the results:

    data: scratch and raw[923:992, 3]
    t = 2.3298, df = 35, p-value = 0.02571
    alternative hypothesis: true correlation is not equal to 0
    95 percent confidence interval:
    0.04810524 0.61715980
    sample estimates:
    cor
    0.366413

    We then perform a linear fit on these two series and rerun the correlation test on their residuals to produce a detrended correlation:

    lm_t = lm(scratch ~ c(1:70))
    lm_p = lm(raw[923:992,3] ~ c(1:70))
    cor.test(lm_t$residuals, lm_p$residuals[as.numeric(names(lm_t$residuals))], na.rm=TRUE)

    Because of missing values in the instrumental temperature record, we have to subset the proxy residuals to only include matching years. The code as.numeric(names(lm_t$residuals)) produces a numerical index with the appropriate subset. The results this produces are:

    data: lm_t$residuals and lm_p$residuals[as.numeric(names(lm_t$residuals))]
    t = 2.3475, df = 35, p-value = 0.02468
    alternative hypothesis: true correlation is not equal to 0
    95 percent confidence interval:
    0.05088891 0.61888447
    sample estimates:
    cor
    0.3688263

    I know this code isn’t turnkey or pretty, but it should produce the same results for anyone. If it’s needed, I can tidy up the code and include parts to handle the data downloads. I’m hoping someone will just spot an error in this somewhere though.
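
    For what it’s worth, the winter-averaging loop above can be written more compactly. This sketch reproduces the same 70 values with the same indexing:

    # Same Sep-Feb averages as the loop above, via sapply
    scratch = sapply(1:70, function(i) mean(dump2[(9:14) + (i - 1)*12], na.rm = TRUE))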

  426. Brandon (#149982) –
    Haven’t tried your R code yet, but I re-ran my Excel spreadsheet with the latest HadCRUT3v file. (https://crudata.uea.ac.uk/cru/data/crutem3/hadcrut3v.zip) I obtained a detrended correlation of 0.369581, close to yours. [Perhaps a difference between Excel’s & R’s regression.] So your script is probably accurate, if not pretty. 😉

    Then I went back to look at what I had done earlier, and realized that I had been using HadCRUT3, not HadCRUT3v. Apparently there’s a significant difference between the two datasets for this gridcell. Among others: as you noted, HadCRUT3v lacks values for May-Sept inclusive, while HadCRUT3 does provide values for those months.

    HaroldW, thanks. That’s reassuring. I wonder if McIntyre did the same thing. Concerns about data versions are part of why I asked both McIntyre and Gergis for the specific data they used. That hasn’t happened yet (I didn’t even get a response from Gergis). I’m hoping it will.

    By the way, I know how easy it is to accidentally use the wrong version. I think I’ve downloaded the land-only data set something like four times by mistake.

    Oh, and I checked what happens if one adjusts the significance level for autocorrelation. I need to check to make sure I wrote the code correctly, but if I did, accounting for autocorrelation causes the Law Dome proxy to have a non-significant correlation* with the grid cell it is in. And it doesn’t appear to have a statistically significant correlation* with any grid cells nearby either.

    *Using the Gergis et al definition for “statistically significant.” I don’t agree it is correct, but my primary interest is to examine the standards they used. Besides, an accurate calculation would make it harder for things to pass screening, meaning Law Dome still wouldn’t make it.
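
    For readers wondering what such an adjustment can look like, one common recipe (a sketch only; not necessarily the adjustment used here or by Gergis et al) shrinks the effective sample size using the lag-1 autocorrelations of the two series:

    # Effective sample size under lag-1 autocorrelation (Bretherton-style);
    # the t-test then uses df = n_eff - 2 instead of n - 2
    n_eff = function(x, y) {
      r1x = acf(x, lag.max = 1, plot = FALSE, na.action = na.pass)$acf[2]
      r1y = acf(y, lag.max = 1, plot = FALSE, na.action = na.pass)$acf[2]
      length(x) * (1 - r1x * r1y) / (1 + r1x * r1y)
    }
    # t = r * sqrt((n_eff - 2) / (1 - r^2))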

  428. Oh, and sorry. I forgot a line of code. The dump variable holds all the temperature data for the grid cell as a time series. We only want to look at correlations over the 1921-1990 period though, so I create the dump2 variable:

    dump2 = window(dump,1920,1991)

    The reason it says 1920 instead of 1921 is that September, October, November and December are assigned to the following year. So 1921’s winter has four months that were measured in 1920. Obviously, without that line the code won’t work, as you won’t have a dump2 variable to take the averages from. I really should create a tidied script with more descriptive names.
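
    To illustrate, here is a reconstruction of the averaging step from the description above (not my exact script):

    # Shift Sep-Dec observations forward one year, so e.g. Sep-Dec 1920
    # count toward the 1921 proxy year, then average within shifted years.
    dump2 <- window(dump, 1920, 1991)
    m   <- cycle(dump2)                      # month of each observation (1-12)
    yr  <- floor(time(dump2)) + (m >= 9)     # September onward belongs to the next year
    ann <- tapply(c(dump2), yr, mean, na.rm = TRUE)
    scratch <- ann[as.character(1921:1990)]  # the 70 yearly values used above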

    By the way, if you’re wondering why I use the 1921-1990 period for correlation tests instead of the 1931-1990 period, it’s because that is the period the authors used when doing their correlation to the regional average even though they claimed to have used the 1931-1990 period. I assumed they did the same with the local correlation screening, and so far the results have been promising.

    That said, I am far from fully replicating their local correlation tests. Because they don’t identify which “local” grid cell was the one a proxy passed screening against, it can be difficult to tell why results don’t match up at times. For instance, I get the exact same result Gergis et al gets for the local non-detrended screening of Mount Read, but my results for detrended screening are different. Does that mean the proxy passed non-detrended screening with the grid cell it was in and detrended screening with a different grid cell, or is there some other explanation? I don’t know yet. Given the number of combinations involved, it’ll take time to figure out.

  429. SteveF, I happened across an index of studies of the CO2 fertilization effect for tropical forests. Looking at one of them, the authors concluded there is no CO2 benefit to the tropics. High temperature apparently is a growth inhibitor there according to the consensus hypothesis. More warm = bad ergo CO2 = bad.
    .
    The premise for the funding is concern for a disturbance to an “important natural CO2 sink.” But as I said before, there is no importance to a sink without a final sequestration step.

  430. Ron,
    There are articles shouting alarm. But the weight of evidence (based on things like O2 trends and ocean data) is that plants have responded quite a lot to higher CO2, and this is consistent with observed atmospheric trends. See for example,
    http://www.pnas.org/content/111/44/15774.full.pdf.
    .
    Evaluation is complicated by things like deforestation vs reforestation, and there are plenty of chicken littles clucking about certain doom, but the weight of evidence is for significant and continuing increases of sequestration by plants in response to rising atmospheric CO2. It would be good to know for how long that rising sequestration will continue.
    .
    I think you can safely count on seeing many more quite speculative claims that plants are not increasing their CO2 uptake, or that they will soon stop, often based on models, not data. SOP for climate ‘science’: always claim it’s worse than we ever imagined. The track record of projections by climate ‘science’ suggests most projections are little more than speculative dung.

  431. Just to give some amusement, here is what Dr Jeff Harvey says on this month’s Deltoid thread:

    “The current destruction of natural systems for short term profit that primarily benefits the privileged few is not a mistake. It’s a calculated process carried out in full knowledge that it threatens the medium to long term survival of our species and many others across the biosphere. In that context it’s a vast crime. And whereas there are those like Stu2 who think that we can work within this selfish, ecocidal system in ways to correct the immense damage that it is doing, there are many others finally waking up to realise that the system is essentially so corrupted by greed and power that it cannot be reined in. It is broken beyond repair. The very fact that banks and corporations determine US policy across the board proves this. The US has long ceased being a representative democracy and is now a fully fledged plutocracy. Why are fossil fuel corporations investing billions in lobbying governments, funding think tanks and AstroTurf groups to downplay or even deny climate change, and thus waging a propaganda war to influence public opinion? How informed is the general public about un/anti democratic bodies like the WTO, IMG, WB and the agendas they support? “

  432. SteveF, when you are saying sequester how do you mean? I would think the tiny fraction of vegetation that does not burn or decay and instead gets buried anaerobically is non-significant.
    .
    Dr. Harvey: “The US has long ceased being a representative democracy and is now a fully fledged plutocracy.”
    .
    Things are trending poorly but the “progressive” movement wants to solve it by expanding government, which in turn can expand corruption. A middle class in the information age can keep corporate power in check just because corporations rely on customer good will — the government much less so. Government should strive to be simple, transparent and unintrusive.

  433. Ron,

    Any persistent increase in total biomass is equivalent to sequestration of the carbon in the increase. Only if biomass stops increasing does the mass rate of CO2 released by decay again equal the mass rate of CO2 absorbed by growth.

    As I’ve said before, if we grew fast growing trees, converted them to charcoal and buried the charcoal in sealed landfills, we could increase biologic sequestration. Paper products are quite stable in sealed landfills too.

  434. Oh yes, cut down the trees and bury them. I can see the conservationists loving that idea.
    .
    I say there is nothing wrong with high CO2. It is our insurance policy against a glaciation tipping point in the case of an asteroid strike or nuclear winter. At the same time it makes some Canadian, Scandinavian and Russian real estate more habitable, not to mention Alaska. CO2 also increases farming yields.
    .
    The only issue would be to cool the poles. I’m sure we could figure that out, cloud seeding or reflective chem trails.
    .
    Sea level rise is less of a problem than storm surges. Rather than worrying about a foot or two of sea level rise, line the coasts with submerged sea walls that would dampen high surf in storms. That would save billions with, or without, sea level rise.

  435. Tamino has posted that he is actually considering hosting discussions.
    My problem/question suggested for debate: if warming is occurring at a faster rate now, but that rate is still slow in terms of human experience, why should people look for climate changes to ascribe to global warming when they know, as Tamino has said, that the small average changes that have occurred are not and will not be large enough to cause demonstrable climate change occurrences for at least 60 to 100 years?
    Could we stop blaming tornadoes, hurricanes and coral bleaching on El Nino and concentrate on assessing the trend and when it will actually become serious?

  436. Ron Graf,
    The total inventory of carbon held in plants, plant debris, and soils depends on how the rates of plant uptake and biodegradation (releasing CO2) compare. Until a tree dies, more rapid growth implies an increasing inventory of bound carbon relative to what it would have been without that increase in growth. The overall greening of Earth (expanding area of plant growth) and more rapid plant growth combine to sequester about 25% of current emissions. A reasonable question is how will that process evolve in response to rising CO2 in the future? I don’t know the answer, but the average lifetime of trees is an important factor to consider.
    .
    Another sequestration option is biochar formation (essentially charcoal) which is worked into the soil. Biochar improves soil productivity and water retention, and ties up carbon for many thousands of years. Biochar formation from farm debris has been proposed as a relatively low cost way to “permanently” sequester a large amount of carbon, should that be needed. I’m not at all worried about Earth entering an ice age… people are not going to let that happen. You can always release CO2 by heating limestone, and there is a huge inventory of limestone available. There are also large coal deposits available. Use coal to heat limestone and you get a CO2 release ‘two-fer’. 😉

  437. Ron,

    Do you think conservationists would react well to pumping CO2 at high pressure into salt domes?

    Every time I see ‘saves trees’ on a hot air hand drier, I can’t decide whether to laugh or cry. The greens need to get over themselves and look at the world as it is.

    Speaking of conservationists, I see a renewed emphasis on Smokey the Bear and preventing wildfires. That’s in spite of strong evidence that we’ve been going overboard on preventing fires so much that when they happen they’re true wildfires and do far more damage.

  438. That’s in spite of strong evidence that we’ve been going overboard on preventing fires so much that when they happen they’re true wildfires and do far more damage.

    DeWitt,

    My brother is a monk in MA and the monastery can’t do anything with tree cutting and such without permission from the local authorities. Over the last few years, he has been fortunate to talk to the foresters that come every once in awhile to check out what needs to be/can be done with the trees on the property. My brother told me that one forester told him that MA needs a “good forest fire”. There’s a lot of old, dead timber in MA that needs to go.

    Andrew

  439. angech,

    “Tamino has posted that he is actually considering hosting discussions.”
    .
    I looked at his proposal. He insists on being able to block any opposing statement which includes ‘false’ information… which is just another way of saying that it would never be a fair and honest debate, since he (the faultless, all-knowing, and god-like Tamino, or perhaps one of his chosen archangels) would decide which statements are true and which are false. Earth to Tamino in Heaven: real debates don’t work that way. Like many of the things he writes at his blog, that post at least was worth a good laugh. He takes himself so seriously, and appears to so lack perspective and self awareness, that you can only rationally respond with laughter.
    .
    If he wants a fair and honest debate, he should comment where the blog host doesn’t censor ‘false’ statements… say, somewhere like this blog…. or agree ahead of time to not censor anything an opposing debater writes on his blog. I’ll bet Lucia would be happy to have him make a guest post. I doubt he ever will participate where he doesn’t control what is written. Seems to me he either has not the courage to match his convictions, or understands that a fair format would not be to his advantage.

  440. Graeme,
    I wonder if Dr Jeff Harvey believes the moon landings were faked? Seems consistent with his ‘conspiracy ideation’. Someone call Lewandowsky.

    There are also large coal deposits available. Use coal to heat limestone and you get a CO2 release ‘two-fer’. 😉

    SteveF, for that comment you get a card to go straight to Green hell

  442. Kenneth,

    ‘Green hell’ has been my destiny for a very long time. I caught some summer flounder (AKA fluke) the other day, which surely sends me to an even deeper level of Green hell. But I am an atheist, so it doesn’t much bother me.

  443. SteveF,

    I’ll bet Lucia would be happy to have him make a guest post. I doubt he ever will participate where he doesn’t control what is written. Seems to me he either has not the courage to match his convictions, or understands that a fair format would not be to his advantage.

    Sure. If he asked, I’d let him. I doubt if he would ask. But I might first have to look into figuring out how to make sure I can prevent an author who is not me from deleting comments on the posts they write. (I assume they can? I can delete anything because I own the blog.)

    I agree with you that one of the difficulties he would have hosting “discussions” is that he has a reputation of censoring views he does not agree with. People aren’t going to participate in discussions if he can ban/delete and so on, because they don’t trust him to not-delete their comments or not ban them. In such cases, visitors who disagree with him don’t think it worth their time to participate there.

    And obviously, some people either have their own platforms already or like discussing things elsewhere already.

  444. Lucia,
    “But I might first have to look into figuring out how to make sure I can prevent an author who is not me from deleting comments on the posts they write. (I assume they can? I can delete anything because I own the blog.)”
    .
    I think being an author under WP lets you add in-line comments and even delete a comment. I have never deleted a comment when I have guest posted, but I think it is possible to do that. More to the point: If Tamino (or anyone else) gives you their word that they would not delete comments after a guest post, I suspect that would be enough… but it would be prudent to first get that word. Worms, after all, are still worms.

  445. Tamino’s comment
    “Here in the U.S. we’ve had 8 once-in-500-years weather events in the last 12 months. Without the climate change that has *already* accumulated, that wouldn’t have happened”.
    Yes?
    Attribution to climate change of any once-in-500-years event, let alone 8 of them, is a difficult and fraught mathematical concept at best, which I am sure that you are very well aware of.
    For at least 2 mathematical reasons you are aware of, though there are probably a lot more.
    If you feel you can/should use rare events as proof I would be happy to discuss it further.
    Anyone else have any comments on using rare events occurring seemingly frequently as a sign of climate change, in case he is prepared to have a discussion?

  446. lucia, there are five different levels offered to you when adding a new user for your site. The Author level shouldn’t give any sort of power to edit/delete comments. Contributor and Subscriber won’t either, but they’re not good for people you want to let write posts. Subscribers can’t write posts at all, and Contributors can’t do things like upload images.

    (Editor and Administrator levels will let the person edit/delete comments.)

  447. “500 year event”

    Rhetorical device employed to make the presenter look like he or she knows what they’re talking about, because people are dumbed down and ‘500’ looks important.

    Andrew

    “Here in the U.S. we’ve had 8 once-in-500-years…”

    I know mother nature doesn’t check her calendar when invoking these things.

    Every year is a 500th year somewhere, right?
    .
    But other than taking Hansen to task once, Tamino has proven himself ideological and closed-minded, and so, not worth the time of day.

  449. I caught some summer flounder (AKA fluke) the other day, which surely sends me to an even deeper level of Green hell. But I am an atheist, so it doesn’t much bother me.

    Me too, but if we ever are saved it should be noted that Green hell is actually only 1.5 degrees hotter than it is currently. The problem is that those dwelling in that hell are constantly retreating from rising oceans, caught in terrible storms, always sweating and on the verge of starvation from a dwindling food supply. Oh, and furthermore, everyone has malaria.

  450. “Here in the U.S. we’ve had 8 once-in-500-years weather events in the last 12 months. Without the climate change that has *already* accumulated, that wouldn’t have happened”.

    Given autocorrelation, one could argue the 8 “events” are really 1 “event”

    Also, before one can interpret what 8 once-in-500-year events even means, one needs to know how many events happen in a month, year or what have you. If there are a billion events a month, having 8 500-year events a month would be… well… a fairly low rate.

    That said: it is a very hot year. The recent 20-year trend is up, and will continue rising until temperatures dip below the current trend line. (That’s the way these things work.) That indicator is more robust than something like “there were 8 once-in-500-year events”, which is rather loosey-goosey.

  451. Attribution is a double edged sword and can be an extremely difficult and fraught mathematical concept at times.
    That is why remarks about
    “Here in the U.S. we’ve had 8 once-in-500-years weather events in the last 12 months. Without the climate change that has *already* accumulated, that wouldn’t have happened.”
    are misleading.
    The US is a very big place.
    “weather events” unspecified are extremely common.
    Once-in-500-year weather events of a specific nature at a specific location are rare in any one year [0.2%].
    But given enough events and enough locations they become extremely common.
    There are hundreds, thousands, or more one-in-500-year weather events in the U.S. in a single year, which means that having chosen to identify only 8 of them says nothing about accumulated climate change at all.
    Any other help please.
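
    A quick back-of-envelope in R (the 4000 is purely an assumed count of independent location/event-type combinations, for illustration):

    p <- 1 / 500                   # annual chance of any one such event
    n <- 4000                      # assumed independent location/event combinations
    n * p                          # expected "500 year" events per year: 8
    qbinom(c(0.025, 0.975), n, p)  # typical yearly range: roughly 3 to 14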

  452. SteveF, I read on the CE post that part of the Paris conference was the hoped-for plan for carbon sequestration through biomass combustion. I suppose from an engineering view it is plausible to use the energy to make electricity that could, through electrolysis, make caustic and some other commercial products. (That’s how we make chlorine gas, hydroxides and peroxides now.) Caustic can be used in smoke stacks to scrub the CO2 by neutralizing it to bicarbonate. I have no idea if this is the plan. A better solution would be to create a planetary albedo control knob. I think reflecting daylight is an easier and more direct proposition than scrubbing CO2 or burning limestone. The promise of nanotechnology is the incredible power of exploiting individual-particle surface area. Buoyant bio-reflective colloidal suspensions could be cultured to survive only in water above 28C, which would keep them in the tropics. Alternatively, we could disperse a trail of nano-aerosol in low-Earth orbit around the equator, giving the Earth a glowing ring.
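
    For reference, the chemistry in question (balanced equations):

    2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH   (chlor-alkali electrolysis)
    CO2 + NaOH → NaHCO3                  (caustic scrubbing to bicarbonate)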
    .
    The neat thing about colloids is they can be designed to be annihilated by the introduction of a nano-interference with their charge-created stability. They would then agglomerate into droplets by molecular attraction.
    .
    If humanity were life-and-death serious about saving the Earth it could solve any problem in short order with the intellectual resources on hand. I suspect that many here agree that humanity’s largest threats are in the man v man conflict, not the man v nature one.
    .
    Lucia, I think offering guest posts alternately to each side would be an interesting niche for a non-censored site.

  453. My son says the rules are different on enemy sites. He is green and AGW through and through. I have transgressed again and lost my comments elsewhere. So sad.
    I did not regard the other site as an enemy site, just one to discuss in, but so be it.
    Such a pithy takedown of snark rates will never see the light of day.

  454. Ron Graf,

    Electrolysis of sodium chloride to yield Cl2 and NaOH followed by sequestration of CO2 is an option, but then what do you do with the chlorine? There is a limit to how much polyvinyl chloride the world needs. 🙂
    .
    You are right that there are lots of options to control surface temperatures if that should be needed. I would not presume to even guess which of these will be the most practical in 100 years, and the best approach may be none that we envision today. The point is that there will be technological options, and those are likely to be lower in cost than today’s options.
    .
    The real impediment is not practical, nor technological; it is the explicit rejection of technology to mitigate/control GHG driven warming. It is motivated by the same philosophy that leads to rejection of nuclear power for generating electricity and insistence on solar and wind: a desire for only ‘natural’ solutions, but even more, an explicit rejection of human influence on ‘nature’ simply because that influence is ‘immoral’. Using technology to mitigate GHG driven warming is the most egregious affront to that philosophy. Those folks find technological solutions ‘immoral’, and always will. If you read what those who call for drastic and rapid reductions in fossil fuel use write, you will often see calls for large reductions in human population, which is consistent with the large reductions in human influence on nature those people desire.
    .
    As I have often said, the disagreement is not, nor has it ever been, primarily about science or technology. It is, has always been, and will very likely remain a philosophical disagreement about ‘right and wrong’. Is a newborn human more valuable than a newborn hyena? For me, the answer is “Of course”, but it seems to me that many greens consider the hyena far more valuable. The rejection of technological solutions by the same people who want to reduce fossil fuel use is not going to change, for there is little room for compromise when someone thinks that compromise is immoral. They simply don’t like humanity nor what humanity does.
    .
    If the human cost to adopt green demands for fossil fuel reductions is understood by the voting public, then I am confident voters will reject those demands. We need only make sure people understand the human costs.

  455. SteveF, I think you are exactly correct that the basis of the climate debate is not science or cost-benefit analysis of types of action; it’s about differing assumptions pertaining to the legitimacy of human domination of the planet. In the 100,000-year struggle of man against nature it is clear now that nature is on her knees, and there is quite a bit of guilt by some about that. I think those who have achieved fame and fortune from good looks, acting or singing tend to feel the guiltiest. This might be the basis of what we call liberal elitism or “white guilt.”
    .
    My answer for personal karma preservation involves volunteerism, charity and envisioning a worthy purpose for humanity. I have observed that progressives see themselves as futurists, but they have no vision for the future. When I have brought up the topic at ATTP about what people envision for humanity in 100 years there is mostly blankness. There is no consideration that we should be able to use technology to engineer our own habitats off this planet or within it, and that we could make them attractive to live in.
    .
    I occasionally point out to liberal friends that the primary research for space colonization would be closed loop recycling of resources, the same technology we would need in order to remain on the planet — the one with quakes, hurricanes and pandemics.
    .
    So I agree with everyone that Earth would be best left alone from human influence.
    .
    Regarding unneeded chlorine gas, one can avoid that issue by using a chloride free electrolyte; phosphate would be my choice. But again, I don’t see a problem with high atmospheric CO2.

  456. I’ve come late to the discussion, which I suspect is a good thing as I might have gotten intemperate … while reading the thread all at once has given me a better overall perspective.

    First, I have to congratulate Brandon on his even-tempered tone in the face of all manner of peculiarities.

    Next, I am mondo depressed that Mosher continues to deny the problems with the scalpel method. It’s been over two years now since I wrote a post asking BEST to show that their scalpel method does NOT introduce a bogus trend into data … crickets.

    I did like Brandon’s bozo simple demonstration of the trouble with the method …

    Even the numerical example he [Steven] gives makes that obvious:

    You have a series
    A) 0 0 0 0 0 0 0 1 1 1 1 1 1
    Your breakpoint analysis tells you that this might be two stations
    A) 0 0 0 0 0 0 0
    A1) 1 1 1 1 1 1
    That is what slicing does.

    The first series has a trend. If you break it into the two series Mosher offers, it no longer has a trend. Clearly, breaking longer records into shorter ones can have a real effect on your calculated results. That’s true even if you aren’t changing the individual 1s and 0s.

    Duh … but for at least two years now, Berkeley Earth and Mosher have failed to grasp the nettle.

    Next, I was also bummed that Mosher flat refused to admit he’d made a serious mistake, and instead tried to handwave it away with some story about mistaking one figure for another. I judge a man not only by how he handles success but also by how he handles failure … fail.

    Conclusion? Mosh started out with a bang, but his series has fallen on its face. Too bad, at the start it looked like it might be interesting. But instead of being here to defend his claims, or to finally answer the scalpel question two years late, he’s decided to go over to WUWT to attack not my post or anything in it, but his fantasies about how I went about writing the post.

    Sadly,

    w.

  457. Some time ago DeWitt wrote

    You’ve heard of the timber line? That’s the elevation above which trees don’t grow because it’s too cold. For tree ring temperature studies, on which I agree with SteveF that they’re bogus, you sample trees growing at elevations just below the timber line. The timber line elevation varies with species, but the principle is the same.

    All sounds great in theory, however in practice… well, look for yourself using Google Earth at Mt Read, the site of the Tasmanian Huon Pines. This area is heavy into mining too… in case you were inclined to believe “bare patches” were somehow treelines.

    https://www.google.com.au/maps/place/Mount+Read/@-41.8404532,145.5358703,1563m/data=!3m1!1e3!4m5!3m4!1s0xaa7ad3f737670883:0x993eb6b1a57280cc!8m2!3d-41.84!4d145.54?hl=en

  458. TTTM, Timberlines are real. One should be able to plot a trend of dwarfism against elevation and wind exposure and predict the natural tree line.
    .
    The problem with proxies is there are too many places to “help” the analysis. I would suggest that tree line movement could be analyzed from historical aerial stereoscopic imagery to gain fairly quick data without having to go backpacking. This might be a way to study the last 70 years to compare against the tree rings for calibration and divergence clues.

  459. “Next, I am mondo depressed that Mosher continues to deny the problems with the scalpel method. It’s been over two years now since I wrote a post asking BEST to show that their scalpel method does NOT introduce a bogus trend into data … crickets.”

    because it doesnt.

    the scalpel merely changes metadata.

    the scalpel doesnt touch the data.

    The data is never touched.

    How to make this clear for people who dont read code:

    Series

    A) 0 0 0 1 1 1

    That is before

    After

    A1) 0 0 0
    A2) 1 1 1

    What has changed?

    NOT the data. What has changed is the station ID

    there is no more station A

    there are two stations A1 and A2

    Their data doesnt change

    Then what?

    Then
    A1 is compared to the field
    A2 is compared to the field

    In both cases the station will get a weight

    A1; weight is 1
    A2: weight can ALSO be 1 as it depends on minimizing the error but suppose, we give a weight of .8

    What happens to the data?

    A1) 0 0 0
    A2) 1 1 1

    Still no change

    Why?

    Because we change the weight.

    Then what.

    Then we recalculate the kriging coefficients.. THEY MAY CHANGE

    we re weight

    we rekrig

    What is being changed?

    the station weights and the kriging coefficients

    whats the data?

    A1; 0 0 0
    A2 1 1 1

    When the error is minimized you have new kriging coefficients for the interpolation of weather.

    Then you calculate the final field: Climate + Weather

    Climate doesnt change. The weather field is changed, really smoothed to minimize the errors.

    Then you output the field.

    lastly.. Suppose you wanted to know this..

    What would a station's temperature have been, IF it matched the expectation of the field?

    So you run a prediction.. and that prediction outputs changed station data.

    So the data is changed.. only as an output option after the global average has been calculated.

    Splicing ( which is just fixing meta data ) does not do anything by itself. It doesnt change trends. it allows a weighting procedure to treat segments of a record differently.. to give the segments weights.

    Does it produce bogus trends? The only objective test we can run
    ( testing against blinded data where we KNOW the true state ) says that the method reduces trend errors. So, if the true trend was 1, and the raw trend was 2, we will move it toward 1.
    Suppose it was moved to 1.5.. That I suppose would be bogus and improved.

    The other way we can test this is by seeing what the method does to well calibrated stations (like CRN): does it pollute them or leave them be..

    or, the funnest thing to do is to take all the stations that are weight 1. that is unadjusted.

    there are about 15,000 of these..

    Guess what you get when you average those?
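
    Schematically (a simplification in R, not the actual Berkeley code), only the weights enter the averaging; the segment values are never altered:

    a1 <- c(0, 0, 0);  a2 <- c(1, 1, 1)   # the sliced segments, untouched
    w1 <- 1;           w2 <- 0.8          # weights from comparison to the field
    # weighted contribution to the field estimate; the data never change
    (w1 * mean(a1) + w2 * mean(a2)) / (w1 + w2)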

  460. Ron Graf,
    The movement of tree lines has been looked at, I think. I believe this is not a popular ‘proxy’ because the long term trend in the tree line, at least in the northern hemisphere, has been toward the south… consistent with falling summer temperatures in the northern hemisphere through most of the Holocene due to gradual movement of maximum solar intensity from the northern summer to the northern winter (maximum intensity is now in January). IIRC, the tree line in northern Eurasia was just about on the arctic coast in the early Holocene, but is now well south. I suspect ‘northern summers were warmer 8,000 years ago’ is not a story that interests AGW activists. In addition, movement of the tree line may be too slow to tell much about warming over a relatively short period (under a century).

  461. Still sick, but I saw Steven Mosher’s comment above in my RSS reader and I had to get up to comment. Then I realized it’s a waste of time to rehash things so I’ll just point out Mosher’s comment clearly shows the semantics he abuses to claim breakpoint calculations don’t have an effect. His argument is still that the individual 1s and 0s don’t change, thus “slicing does nothing.” He then goes on to write:

    Splicing ( which is just fixing meta data ) does not do anything by itself. It doesnt change trends.

    Yet his own numerical example shows this is wrong. Why? Because he is still making the incorrect assumption that if the individual data points are not changed, the results will not change. That is not correct. When you re-organize data, your results can change whether or not the individual data points change. That is because time series have a temporal element which is contained in more than just the individual numerical values.

    If we allow ourselves to stray into the mathematical for a moment, this is easy to demonstrate. I don’t see that there’s any point though. Mosher seems uninterested in even attempting to have a real discussion, and I think everybody else understands the trends you get with:

    A) 0 0 0 1 1 1

    Will not be the same as the trends you get with:

    A1) 0 0 0
    A2) 1 1 1

    So I’m going back to bed. If anyone needs me to plot them a linear fit of the A series along with linear fits for the A1 and A2 series to prove slicing temperature stations does change your results, let me know. I’ll be happy to show you going from 0 to 1 means you get an increase while going from 0 to 0 or 1 to 1 doesn’t mean you get an increase.
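
    For anyone who would rather run it than see a plot, a minimal sketch (toy series, nothing to do with BEST’s actual pipeline):

    a <- c(0, 0, 0, 1, 1, 1)        # the full A series
    t <- seq_along(a)
    coef(lm(a ~ t))[2]              # full series: slope ~ 0.26, an upward trend
    coef(lm(a[1:3] ~ t[1:3]))[2]    # A1 segment: slope 0, no trend
    coef(lm(a[4:6] ~ t[4:6]))[2]    # A2 segment: slope 0, no trend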

  462. Before I go back to bed, I just want to highlight this as it is a gem of beauty:

    or, the funnest thing to do is to take all the stations that are weight 1. that is unadjusted.

    That’s right folks. According to Steven Mosher, changing:

    A) 0 0 0 1 1 1

    To:

    A1) 0 0 0
    A2) 1 1 1

    Leaves your data unadjusted.

  463. Ron writes

    The problem with proxies is there are too many places to “help” the analysis.

    The problem with Mt Read is that there is no treeline and so the “proxy” that is the Mt Read Huon Pine has no reason to be primarily influenced by temperature.

  464. Surely the key point is this (maybe Steven can clarify)

    Splicing ( which is just fixing meta data ) does not do anything by itself. It doesnt change trends. it allows a weighting procedure to treat segments of a record differently.. to give the segments weights.

    If you didn’t do this, then each record could only have a single weight. If, however, there was some issue with some portion of the record, then you’d be giving too much (or too little) weight to that portion of the record. So, presumably this method does change the trend relative to what you would get if you didn’t use this method, but this is done in order to get a better estimate of the trend.

  465. Anders, what you highlight as the key point is simply wrong. Whether or not being able to give less weight to specific sections of a data series is good, splitting the series into multiple parts can and often will change its trend.

    How important that issue is isn’t something which has been a focus. The focus has simply been that Steven Mosher is consistently telling people untrue things. This was just an example that is particularly easy to verify.

    A discussion of what effect splitting these temperature records has, and what effect it would have without weighting each segment individually, could certainly be useful. It’s just not going to go well if the person leading discussions in these posts consistently posts false information.

    Personally, I’d much rather be discussing how the parameters chosen by BEST are so large as to cause problems for the scalpel/re-weighting they do, particularly with latitudinal biases and greatly diminished resolution in the results. These posts aren’t about BEST though, so I don’t want to hijack them too much.

  466. By the way, I should point out there is actually a potentially significant effect the BEST empirical breakpoint approach has which often gets overlooked. While one of BEST’s selling points is its ability to use shorter records, it does still have a minimum length requirement. When it splits a station into multiple segments, some segments may get discarded for being too short. That is, the scalpel method will effectively delete some data.

    There are some stations where the practical effect of the scalpel method is to pretty much delete them. I’d argue that’s because far too many breakpoints are “found” in them as BEST assigns breakpoints far too freely, but whatever the reason, I think it’s clear (effectively) deleting a station’s data would change things from leaving it untouched.

  467. You’re missing the temporal aspect. The series 0 0 0 1 1 1 should become:

    A1) 0 0 0 * * *
    A2) * * * 1 1 1

    But I don’t see why you’d weight one more or less than the other.

  468. I would think that, in order to determine what the scalpel method does in terms of temperature trends where station data has 2 flat but different mean temperatures over several years before and after the break or slice point, the data would have to be put through the entire field comparison and weighting process. That process result would in turn have to be compared to some other adjustment algorithm, or better, be compared in a benchmarking test whereby known station errors can be put into known background temperatures produced from a reasonably realistic simulated climate.

    The BEST adjustment process I would think would be difficult to compare directly to another adjustment algorithm on a station by station comparison due to the way most other algorithms adjust station data directly and BEST does not. This situation points to the importance of using a benchmarking test and comparison for the various available station temperature adjusting algorithms.

  469. Kenneth Fritsch:

    I would think that, in order to determine what the scalpel method does in terms of temperature trends where station data has 2 flat but different mean temperatures over several years before and after the break or slice point, the data would have to be put through the entire field comparison and weighting process.

    No. That is exactly how you shouldn’t test what effect it has. If you want to know what effect a specific something has on a specific example, you should test what effect it has. You should not test what effect there is when you throw a multitude of other factors in. You certainly shouldn’t examine the effect the scalpel methodology has on a single station by carrying out steps which require the use of dozens of other stations.

    The topic at hand was not whether or not using this approach on one specific station would have any effect on temperature trends calculated for an area. The topic was some people believe the BEST scalpel affects the data in a way that changes calculated trends. Steven Mosher responded to such people by saying it cannot because it doesn’t change the data.

    In that context, all we need to see is whether or not splitting a data series into multiple parts can in fact change the results one would get with it despite the individual numerical values not changing. As we’ve seen, it can.

  470. But I don’t see why you’d weight one more or less than the other.

    You couldn’t if that were the only station in the record. But it’s not. Which is also why looking at the effect of splitting records on an individual station trend is rather meaningless.

  471. DeWitt Payne:

    You couldn’t if that were the only station in the record. But it’s not. Which is also why looking at the effect of splitting records on an individual station trend is rather meaningless.

    If you only look at one station, it is rather meaningless (save to demonstrate that introducing breakpoints can in fact change your results even without weighting segments differently). It can, however, be interesting to look at how many stations are split up and how that affects their trend. In fact, if a person were interested enough, they could figure out why each station had each of its breakpoints assigned. It’d be pretty cool.

    It’d also be a crazy amount of work I doubt anyone would want to do. Still, it’d be neat.

  472. TTTM writes: “The problem with Mt Read is that there is no treeline and so the “proxy” that is the Mt Read Huon Pine has no reason to be primarily influenced by temperature.”
    .
    You are correct, as I understand it, that Mt. Read is not at the edge of temperature limited survival. But I have wondered for some time why the consensus thinks being at the edge of survival is ideal. I’m not convinced the logic that temperature is continuously the limiting growth factor is valid. Such inhospitable places may have very poor soil or undependable summer precipitation.
    .
    Mt. Read has very dependable precipitation and likely rich soil. So it may present a more temperature dependent growing season. Although this still says little about pests and competitive vegetation, Mt. Read has shown a reverse divergence problem the last 15-20 years, growing faster than the observed trend in temperature.
    .
    When Mt. Read data is processed correctly according to protocol using RCS correction there is a pronounced MWP, LIA and a warm period at 500-300 BC, warmer than modern temperatures. https://climateaudit.files.wordpress.com/2016/08/compare_allen_fig2c_to_rcs.png

  473. #150093 You’re missing the temporal aspect.
    Perhaps.
    I thought the effect of slicing was to create a new station.
    The old station dies or freezes in time as a record.
    The new station starts where the old one stopped but is a different station.
    We can assume a temporal relationship but cannot compare the two stations directly.
    Imagine a station at the top of a cliff compared to one at the bottom. Map coordinates the same, temporally in sequence, but utterly different readings.

    (Comment #150088)
    A) 0 0 0 1 1 1 before; A1) 0 0 0, A2) 1 1 1 after.
    Their data doesn’t change. Then what?
    A1 is compared to the field; A2 is compared to the field.
    In both cases the station will get a weight.
    A1: weight is 1 [data does not change]
    “A2: weight can ALSO be 1 as it depends on minimizing the error but suppose, we give a weight of .8” !!!

    That is known as a data change, which is what was claimed not to happen.
    Worse, it compromises the record for the two stations permanently.

    First the A2 station takes a real temperature change and incorporates it into the new [A2] world temperature graph.
    The actual data does not change, but the trend changes, using a new hotter station.
    Honolulu airport reads 37 degrees where it previously read 32 degrees, before the airport.
    Changes its weighting?
    Well, it should not.
    Note the best weighting is 1 for the actual new temperature; it minimizes error and the reading is true for that bit of the surface of the earth.
    But there is a problem:

    “A2: weight can ALSO be 1 as it depends on minimizing the error but suppose, we give a weight of .8”

    Implies so much.
    A2 weight can be anything we want it to be.
    It should be one [minimizing the error, always a good thing to do]
    but we can make it 0.1, 0.3, 0.8, anything BEST desires.
    Why?
    The original station had a weighting of 1 at a lower temperature.
    It agreed with the neighboring stations. Now the new station, at a weighting of 1, demands an upgrade in the temperature of all the surrounding stations.
    So Mauna Loa goes from 15 degrees to 17, out of synch with its real temp. Kahului goes from 27 to 29.
    Everyone notices that their local temps are out of whack with the adjusted ones.
    Or you keep those records and down-weight the Honolulu effect, to 0.8 in Steven’s example or 0.4 in this example, or whatever it takes to keep the new records in without upsetting the old records.

    “A2: weight can ALSO be 1 as it depends on minimizing the error but suppose, we give a weight of .8”
    A quote worth framing in gold.
    It summarizes the scientific method at work.

  474. “Steven Mosher responded to such people by saying it cannot because it doesn’t change the data.”
    That’s actually true. What it does do is discard the metadata that the two segments are from the same station. It does so in the belief that that information is unreliable – ie may not be true (in climate terms). If it isn’t true, they are doing the right thing.

    For my part, I think that is too black and white. You actually think there is a likelihood that the jump is a spurious change. But it may not be. Many jumps discarded would have been real, and that information is lost. I think one should look at the bias it induces. That doesn’t settle it; if there is a bias you may be cancelling a really spurious trend – stations were being moved out of urban areas, for example. But it may also be that a particular real trend (warming, say) was expressed by such jumps, and is being washed out.

  475. Nick Stokes, I’m not sure if you’re cherry-picking quotes to pretend they mean something they don’t mean, not bothering to read what people say or what, but your latest comment is stupid. Here is what I said, with the portion you quoted bolded:

    The topic at hand was not whether or not using this approach on one specific station would have any effect on temperature trends calculated for an area. The topic was some people believe the BEST scalpel affects the data in a way that changes calculated trends. Steven Mosher responded to such people by saying it cannot because it doesn’t change the data.

    You responded to say:

    That’s actually true. What it does do is discard the metadata that the two segments are from the same station. It does so in the belief that that information is unreliable – ie may not be true (in climate terms). If it isn’t true, they are doing the right thing.

    For my part, I think that is too black and white. You actually think there is a likelihood that the jump is a spurious change. But it may not be. Many jumps discarded would have been real, and that information is lost. I think one should look at the bias it induces….

    The portion of the paragraph you failed to quote clearly explains the issue was the (correct) belief that splitting data series into shorter segments can change calculated trends in the final results. The very process you describe is how such changes could be introduced. You even say you “think one should look at the bias it induces.”

    If you had shown the full paragraph rather than selecting that one sentence for display even though it doesn’t express the issue being discussed, your comment wouldn’t have made any sense. Whether that was intentional or some sort of mistake, it was stupid.

  476. Brandon: “It was stupid.”
    .
    Sometimes the desire to find support for an argument blinds one into making points that seem stupid or dishonest to those of the other point of view. I have to remind myself of the power of bias, and that intelligent people with good intentions can make stupid oversights. Example: ATTP argued that UHI was not an issue because it did not significantly heat the global surface. It had to be a mental hiccup. I regretted not pausing to give him a graceful out. So I am working on this too.
    .
    Nick Stokes is not someone that comes to mind when I think of “stupid.” Ditto for Brandon S.

  477. “The portion of the paragraph you failed to quote”
    I wasn’t talking about your paragraph. I was talking about what Steve M said. It’s true.

    It’s also true that it changes the trend. That is a consequence of the decision that the jump was not due to climate, but due to them being functionally different stations. The belief is that that contribution to the trend is spurious. As I went on to say, I think the scalpel method as applied here is too black and white on that. But the trend change is a logical consequence of that metadata decision.

  478. Steven Mosher (Comment #150085)
    September 5th, 2016 at 6:02 pm

    “Next, I am mondo depressed that Mosher continues to deny the problems with the scalpel method. It’s been over two years now since I wrote a post asking BEST to show that their scalpel method does NOT introduce a bogus trend into data … crickets.”

    because it doesnt.
    the scalpel merely changes metadata.
    the scalpel doesnt touch the data.
    The data is never touched.

    Dang, miss the point much? It was never about the data, it’s about the TREND.

    The point is that before applying the scalpel method, the dataset contains one series, “0 0 0 0 1 1 1 1”. That one series has a trend. This is obvious in that the sum of the deltas is plus one.

    After the scalpel method, we have two series, which are

    “0 0 0 0 * * * *
    * * * * 1 1 1 1”

    Neither of those two resulting series has a trend, nor does their combination when using the Berkeley Earth method.

    And that is the point. As I discussed here, the scalpel method transforms e.g. a trendless sawtooth wave into a series with a trend. And as Brandon showed, it transforms a square wave containing a trend into a trendless series.
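
    To make the sawtooth case concrete, a sketch with toy numbers (again, not BEST’s pipeline):

    saw <- rep(c(0, 1, 2, 3), times = 3)   # trendless sawtooth: rises, then resets
    t   <- seq_along(saw)
    coef(lm(saw ~ t))[2]                   # full series: slope ~ 0.1, shrinking toward 0 with more teeth
    seg <- rep(1:3, each = 4)              # cut at each reset, as a breakpoint would
    sapply(split(seq_along(saw), seg),
           function(i) coef(lm(saw[i] ~ t[i]))[2])   # every segment: slope 1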

    That’s the issue that you guys have never really acknowledged.

    w.

  479. And it transforms the series 1 1 1 1 1 0 0 0 0 0 too. The effect of the scalpel method on the overall trend cannot be evaluated in isolation.

  480. Ron Graf, I called what Nick Stokes said/did stupid because it was. That doesn’t mean he is stupid. It just means he said/did something stupid. What one should make of that is up to them.

    I prefer not to focus on people when I can. I find things work better when people focus on what each other say/do rather than who/what they are.

  481. Nick Stokes, please stop acting dumb. You quoted my discussion of what Steven Mosher said, in which I described what he said. You claim he was right while discussing a different point than the one that had been discussed.

    If you wish to claim Mosher was right, please quote what he said with the appropriate context. Don’t quote someone referring to what he said and claim he said something else.

  482. Sorry for the triple post, but updates are being slow right now. Willis Eschenbach, neither you nor I have shown anything about what the BEST methodology will do for specific examples. I haven’t even attempted to. My remarks have simply been to show an example of how the scalpel method could cause trends to change, not to make claims about how specific cases would be handled.

    As for you, you haven’t done anything to show the BEST methodology would handle sawtooth patterns the way you describe. What you describe is a possibility, but it is not something anyone can know to be true given the work which has been done so far.

  483. For Law Dome it would be informative for me to see, for the detrended residuals, the t value, degrees of freedom, and p-value for both the HadCRUT3 and HadCRUT3v temperature series for the grid cell in question when correlated against the Law Dome series, and also the ar1 coefficients for the detrended residuals used to adjust the confidence intervals for the correlation values.

    My questions are:

    1. Does the 50 year criterion of Gergis effectively eliminate Law Dome as a proxy for testing? (I see that the degrees of freedom used would indicate that to be the case.)

    2. Are the correlations of the detrended series of HadCRUT3 versus HadCRUT3v for the grid cell in question with the detrended Law Dome series significantly different given the adjusted CIs for both cases?

    3. Are the linear regressed trends adjusted for ar1 from HadCRUT3 and HadCRUT3v series for the grid cell in question significantly different?

    4. How do the linear regressed trends of these 2 instrumental series for the grid cell in question compare with the Law Dome linear regressed trend after adjusting the trend CIs for ar1?

  484. Kenneth Fritsch, the code provided in my post should make it easy to obtain that information. You’d only need to write a few extra lines of code.
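
    A sketch of those few extra lines (reusing the lm_t, lm_p and keep objects from the earlier code):

    ct <- cor.test(lm_t$residuals, lm_p$residuals[keep])
    ct$statistic; ct$parameter; ct$p.value          # t value, df, p-value
    acf(lm_t$residuals, plot = FALSE)$acf[2]        # ar1, temperature residuals
    acf(lm_p$residuals[keep], plot = FALSE)$acf[2]  # ar1, proxy residuals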

    For what it’s worth, I’m not convinced this Law Dome proxy would have been included without the 50 year requirement. Because I can’t reconcile my results for most proxies with those reported by Gergis et al, I can’t be sure one way or the other. I’m not sure they used the current HadCRUT3v dataset, and if they didn’t, it’ll likely be impossible to tell with certainty.

    I can redo the calculations on the current HadCRUT3v to make sure, but I don’t think the Law Dome proxy would have passed screening with the current HadCRUT3 data set. If it did, it’d be by a narrow margin. It wouldn’t be with the high significance Steve McIntyre described.

    Anyway, the differences between HadCRUT3 and HadCRUT3v are interesting, and things like that are why I think the Gergis et al authors should have archived the gridded data they used (I’ve e-mailed them requesting it with no success). I don’t understand why they’d archive their proxy data but not their instrumental data.

  485. “I don’t understand why they’d archive their proxy data but not their instrumental data.”
    .
    This is especially relevant since the instrumental data set is not fixed but gets constantly updated with late reporting and corrections.
    .
    CRU discusses variance adjusted data here.

    Updating includes not just data for the last month but the addition of any late reports for up to approximately the last two years. In addition to this the method of variance adjustment (used for CRUTEM3v and HadCRUT3v) works on the anomalous temperatures relative to the underlying trend on an approximate 30-year timescale. With the addition of subsequent years, the underlying trend will alter slightly, changing the variance-adjusted values. Effects will be greatest on the last year of the record, but an influence can be evident for the last three to four years…

    CRU also says Jones (2001) describes the full details of the making of HadCRUT3v.

  486. Brandon (#150137) –
    It’s not so much that I was right, as I made the same mistake. Comes from not having had dance lessons.*

    *From Molière’s “Le bourgeois gentilhomme”:
    DANCING MASTER: When a man has committed a mistake in his conduct, in family affairs, or in affairs of government of a state, or in the command of an army, do we not always say, “He took a bad step in such and such an affair?”

    MONSIEUR JOURDAIN: Yes, that’s said.

    DANCING MASTER: And can taking a bad step result from anything but not knowing how to dance?

  487. Brandon Shollenberger (Comment #150110)
    September 8th, 2016 at 6:05 pm

    Willis Eschenbach, neither you nor I have shown anything about what the BEST methodology will do for specific examples. I haven’t even attempted to. My remarks have simply been to show an example of how the scalpel method could cause trends to change, not to make claims about how specific cases would be handled.

    Agreed.

    As for you, you haven’t done anything to show the BEST methodology would handle sawtooth patterns the way you describe. What you describe is a possibility, but it is not something anyone can know to be true given the work which has been done so far.

    Also agreed. What both of us have done is to show that there are situations where the scalpel method changes the trend. What I have asked the Berkeley Earth folks for and haven’t received is details about, in your words, “what the BEST methodology will do for specific examples”.

    What I’ve asked for is some kind of testing of their exact methodology against a variety of test patterns to see whether what we have shown is a possible problem actually does affect the results, and if so, how much.

    Regards,

    w.

  488. I’m calling the minimum for this year’s JAXA Arctic sea ice extent (seven day moving average) at 4.036 million km² on 9/6/2016. That’s the second lowest since the record started in 1979. Lowest was 2012 at 3.201 and third lowest was 2007 at 4.084. Eyeballing the shape of the minimum, I’m expecting a fairly rapid freeze now that the satellite Arctic temperature anomaly has returned to normal levels.
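
    For the record, the smoothing is just this (extent being the daily extent vector from JAXA’s published data; the download step is omitted):

    ma7 <- stats::filter(extent, rep(1/7, 7), sides = 2)  # centered 7-day mean
    min(ma7, na.rm = TRUE)                                # this year's minimum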

  489. I don’t know how you’d create specific examples to test with the BEST methodology since their breakpoint algorithm can compare up to, what was it, 200 stations when determining breakpoints. BEST did do benchmark testing though, and they even presented a poster about it. Steven Mosher likes to talk about the results for the tests as proving BEST is better.

    For some reason, they’ve never published any data or results for those tests though. I don’t get that. It’s been over five years, and apparently everyone is supposed to just accept a poster as all they’ll ever get. Very strange.

  490. Ron Graf, ATTP has an excellent piece up on attribution in climate change. Any serious thoughts welcome.
    Polar Challenge is through on its way around the pole. Plain sailing home.

  491. angech,

    JAXA has Antarctic sea ice extent at a record low for the date.

    Polar Challenge didn’t exactly circumnavigate the pole. They sailed from Murmansk to the North Atlantic through the Northeast and the Northwest Passages. It doesn’t look like they plan to return to Murmansk before returning home. OTOH, if they sailed to Murmansk this year from their home port, they would complete a circumnavigation when they arrived at their home port.
