The Great GISTemp Mystery – Solved!

Update: Mystery Solved! See below.

One thing that has always been something of a mystery for those of us in the land temperature reconstruction community is why the GISTemp land record is so much lower than our reconstructions. We know that our reconstructions are pretty much in line with, or somewhat below, NCDC land temps (likely somewhat below due to not using a land mask, apart from Chad, and using GHCN v2.mean instead of v2.mean_adj), but they are quite a bit above GISTemp in recent years (and have higher trends over the whole period).


(Click to embiggen)

At first I wrote it off as a difference in data used; after all, GISTemp uses data from Antarctica as well as the full USHCN set in addition to GHCN v2.mean. However, Steve Mosher recently provided the full STEP0 datafile for GISTemp, and using that with the GISTemp inventory file we can compare reconstructions using the full set of stations used by GISTemp. Somewhat surprisingly, the temperatures produced by the GISS station set are not very different from the temperatures produced by just GHCN stations. However, both differ fairly substantially from the GISTemp land temperatures:

Now, my second thought is that the anomaly calculation method could be the primary cause of the disagreement. After all, GISTemp uses the Reference Station Method (RSM), while my approach uses the Common Anomaly Method (CAM). However, other reconstructions by Chad, Nick Stokes, and Jeff Id/Roman use a version of RSM, and their results are pretty much in line with mine.
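For readers who have not seen the two approaches side by side, here is the CAM idea in toy form (a sketch only, not the actual reconstruction code; the array layout is made up for illustration):

```python
import numpy as np

# Common Anomaly Method (CAM), sketched: express each station relative to
# its own monthly means over a common base period, then average the
# resulting anomalies across stations.
def cam_anomalies(temps, years, base=(1961, 1990)):
    """temps: (n_stations, n_years, 12) monthly means, np.nan where missing.
    years: 1-D np.array of calendar years covered.
    In practice, stations lacking base-period data get dropped first."""
    in_base = (years >= base[0]) & (years <= base[1])
    clim = np.nanmean(temps[:, in_base, :], axis=1, keepdims=True)
    return np.nanmean(temps - clim, axis=0)  # (n_years, 12) mean anomaly
```

RSM, by contrast, aligns each station to a reference series over their period of overlap rather than to a fixed common base period.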

With different station sets and anomaly calculation methods mostly ruled out, we are left with gridding methods and inhomogeneity correction as possible causes. Since GISTemp uses nightlights to correct for UHI, I figured if this was the culprit we would expect the final GISTemp land record to be similar to the record constructed from only “dark” nightlight stations used by GISS. However, this turns out not really to be the case:

What about gridding methods? After all, GISTemp uses equal-sized grid boxes, while the rest of us use 5×5 lat/lon grids. Well, I have no easy way to replicate GISTemp’s gridding method, but I can look at N. Hemisphere and S. Hemisphere records separately to try to get a sense of any gridding effects. This produces a particularly odd result:

N. Hemisphere

S. Hemisphere

What is interesting is that the discrepancy in recent years we saw in the global land temps mostly disappears! Looking at the differences verifies this:

So, I’m stumped. Anyone more familiar with the GISTemp code have any thoughts?

Update: Mystery Solved!

Like most mysteries, the solution to this one appears to have been pretty mundane. Dr. Ruedy responded to an email I sent him explaining that GISTemp doesn’t really attempt to model land-only temperatures. Rather, the table in question is an approximation of global temperatures using only land stations. This means that it is -not- weighted in proportion to the land area in each hemisphere.

As our commenter gp2 discovered, the answer to the mystery was “hidden in plain sight” in chapter 3 of the IPCC AR4 WGI:

Most of the differences arise from the diversity of spatial averaging techniques. The global average for CRUTEM3 is a land-area weighted sum (0.68 × NH + 0.32 × SH). For NCDC it is an area-weighted average of the grid-box anomalies where available worldwide. For GISS it is the average of the anomalies for the zones 90°N to 23.6°N, 23.6°N to 23.6°S and 23.6°S to 90°S with weightings 0.3, 0.4 and 0.3, respectively, proportional to their total areas. For Lugina et al. (2005) it is (NH + 0.866 × SH) / 1.866 because they excluded latitudes south of 60°S. As a result, the recent global trends are largest in CRUTEM3 and NCDC, which give more weight to the NH where recent trends have been greatest.

Indeed, when we calculate those three bands separately (90°N to 23.6°N, 23.6°N to 23.6°S and 23.6°S to 90°S) and create a weighted average, our results are pretty much in line with GISTemp:
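In code, the zonal bookkeeping is just this (a sketch based directly on the AR4 passage quoted above; the inputs are the already-computed band anomalies):

```python
# GISS-style global mean from three zonal-band anomalies, per the AR4
# description: 90N-23.6N, 23.6N-23.6S, 23.6S-90S weighted 0.3/0.4/0.3
# (proportional to the bands' total areas).
def giss_style_global(north, tropics, south):
    return 0.3 * north + 0.4 * tropics + 0.3 * south

# CRUTEM3's land-area-weighted hemispheric sum, for comparison:
def crutem3_style_global(nh, sh):
    return 0.68 * nh + 0.32 * sh
```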


248 thoughts on “The Great GISTemp Mystery – Solved!”

  1. Andrew FL,

    As of January its applied worldwide:

    “January 16, 2010: The urban adjustment, previously based on satellite-observed nightlight radiance in the contiguous United States and population in the rest of the world (Hansen et al., 2001), is now based on nightlight radiances everywhere, as described in an upcoming publication. The effect on the global temperature trend is small: Based on the 1900-2009 period, that change reduces it by about 0.005 °C per century.”

    Via http://data.giss.nasa.gov/gistemp/updates/

  2. Zeke, this is very interesting. Have you thought about trying to break this down further by latitude band for example? OK I don’t know what I’m saying – perhaps this is a lot of work.

    Or what happens when you split the Dark/all data (graph 3) by hemispheres?

  3. Zeke,
    Okay, you eliminate HadCRU, which is adjusted and is very similar to GISS.
    You notice small differences after considering the anomaly method.
    Small differences when considering homogenization.
    Onward, you cannot consider differences in gridding but you can somehow get a feel for it by looking at the hemispheres… which leads to hemisphere numbers which do not take into account the difference in the amount of land between the hemispheres
    And there was no smoothing to help eliminate the noise.
    There are few differences between the constructions until after the mid-80’s.
    I’m lost…

  4. Zeke,
    I checked all land stations vs rural with my code and got a result like yours – not even in the right direction to explain the Gistemp deviation.

  5. On what baseline are all the graphs drawn?

    The global deviation right now isn’t that much greater than the SH deviation was in 1930-1950.

  6. carrot eater has a point. Some people have produced anomaly plots indicating that the 1930s were as warm as today, but you are showing today’s temperatures to be 0.6 to 1.0 C (??, units please) warmer. Can you explain the difference?

  7. bob sykes (Comment#43883) May 22nd, 2010 at 7:00 am

    carrot eater has a point. Some people have produced anomaly plots indicating that the 1930s were as warm as today, but you are showing today’s temperatures to be 0.6 to 1.0 C (??, units please) warmer. Can you explain the difference?
    ————————————————-

    I’m unaware of global datasets which show the 1930’s as warm as today. There are certainly some regional data — such as in the US — that show that. Could you be transferring what you might have seen, based on US data, to what is being shown here based on global data?

  8. bob sykes: those people are generally showing data only for some particular region, where the 1930s were particularly warm. You can get this in the US, or perhaps arctic areas. Other parts of the world, you won’t get it.

    Always take note of whether the chart you’re seeing is global, hemispheric, national, central england, etc…

  9. First off, I would caution folks about confusing this issue with the issue of UHI. What we should be looking at at this stage is a PURE comparison of methods on the same data. I’ll describe the flow in short form:

    GISTEMP:

    Input: GHCN, USHCN, Antarctic data,

    OUTPUT: StepZero “comb” file.

    Now that everyone is using that, we have the same data IN.
    I would suggest that Ron B rerun GISTEMP and repost the file, as
    I did. Why? He RUNS the actual GISTEMP source on a linux box;
    I ran the CCC version. I expect NO DIFFERENCE, but I want to eliminate any issues (I ran on a MAC). Ron has tested his results
    against official GISTEMP results.

    Now, GISTEMP reads in the “comb” file, and from that point there are methodological choices.

    ftp://data.giss.nasa.gov/pub/gistemp/GISS_Obs_analysis/GISTEMP_sources/gistemp.txt

    Step 1 Choice: combining duplicates. See this description:

    Step 1 : Simplifications, elimination of dubious records, 2 adjustments (do_comb_step1.sh)
    ———————————————————————–
    The various sources at a single location are combined into one record, if
    possible, using a version of the reference station method. The adjustments
    are determined in this case using series of estimated annual means.

    Non-overlapping records are viewed as a single record, unless this would
    result in introducing a discontinuity; in the documented case of St.Helena
    the discontinuity is eliminated by adding 1C to the early part.

    After noticing an unusual warming trend in Hawaii, closer investigation
    showed its origin to be in the Lihue record; it had a discontinuity around
    1950 not present in any neighboring station. Based on those data, we added
    0.8C to the part before the discontinuity.

    Some unphysical looking segments were eliminated after manual inspection of
    unusual looking annual mean graphs and comparing them to the corresponding
    graphs of all neighboring stations.

    Result: Ts.txt

    This STEP is the first place where Giss does something different than all the other methods.

    Let me explain and then suggest something.

    When GHCN creates a file, for every given GHCN ID there can be multiple records. In fact there are on average 1.89 duplicate records per GHCN ID. Many have no duplicates; others have up to 9. These duplicates are really NOT duplicates; more precisely, they are records that GHCN has decided not to treat as duplicates. PRIOR to getting into GHCN there is a screening process. That process looks at multiple records for the same station and decides if they are duplicates (something like 90% the same is considered a duplicate). A duplicate might be something like two PHYSICAL records of the same instrument that just happen to differ because of scribal errors. Or not. Anyways, GHCN preserves those records that fail to meet their test.
    Bored yet?
    So, for example, you will have records for a GHCN ID as follows:
    42512345002 0 1980: 23 23 24 25 NA 36 37 23 NA 24 26 28
    42512345002 1 1980: 23 22 25 25 16 36 37 23 NA 24 26 29

    That’s TWO records for the same location at the same time which are not judged to be duplicates by GHCN. So they preserve both in the record.

    The first difference between GISS and everybody else is that GISS combines these records with the RSM; see the description above.
    Everybody ELSE does a simple average of the duplicates.
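    A sketch of that simple averaging, using the toy duplicate records above (illustration only, not anyone’s actual code):

    ```python
    import numpy as np

    # Month-by-month average of the duplicate records for one GHCN ID,
    # ignoring missing (NA) months; months missing in every duplicate
    # stay missing. Values taken from the toy records above.
    dup0 = np.array([23, 23, 24, 25, np.nan, 36, 37, 23, np.nan, 24, 26, 28.0])
    dup1 = np.array([23, 22, 25, 25, 16,     36, 37, 23, np.nan, 24, 26, 29.0])

    merged = np.nanmean(np.vstack([dup0, dup1]), axis=0)
    # -> [23, 22.5, 24.5, 25, 16, 36, 37, 23, nan, 24, 26, 28.5]
    # (numpy warns about the all-NaN September column; the result is NaN)
    ```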

    Here is my suggestion. You want to eliminate all possible causes of the differences. At the end of GISS step one, GISTEMP outputs the result: “v2.step1.out”.

    If Ron can post that for you all from his run of the code, that would be great. If he can’t, I will post the result from CCC.

    Run your code using that file as input and you will have an idea of what GISTEMP step one processing does. (Meanwhile I will compare the files record by record, arrg.)

    I do not expect a huge difference, but my approach would be to eliminate possible causes in a sequential order.

    Also, Ron can turn off the urban/rural adjust. I suggest that he turn OFF the urban adjust for the purpose of this pure comparison. Talking about UHI at this stage just muddies the waters and brings out the worst in people.

    So: if you take step 1 OUTPUT as your input, and compare against a GISTemp output with NO urban adjust, we will have a FIRST:

    the first REAL comparison of gridding methods and averaging. This will give you an idea of the uncertainty introduced by methodological choices.

    Finally, when everybody is running with step 1 output, I would suggest looking at the gridded product for the last year.
    Find the grid square with the biggest difference between you and GISS, and plot the stations of that grid. Grunt work from there on out.

  10. So Zeke, just to be clear, there are THREE methodological choices.

    1. Handling duplicates
    2. Gridding and averaging
    3. UHI adjust.

    By using the Step ONE output as I suggest you are down to two culprits.

    2 & 3. Ron has recently run GISS without the urban adjust.

    So if everyone uses step1 OUTPUT and then compares against GISSTEMP with no urban adjust, you are left with one difference:

    GISS’s choice of an equal-area approach versus everybody else’s. I’m assuming the people who use 5×5 grids do an area weighting scheme.

    It’s funny that this choice should drive the answer only in the later years.

  11. Mosh,

    I don’t think your description of how our methods deal with single wmo_ids with multiple imods (to use the GHCN jargon) is strictly true.

    I know Chad, for one, said that he uses RSM both for combining imod into wmo_ids AND combining wmo_ids into grid-cells.

    And yes, everyone who does 5×5 gridding uses area weighting. Not all use a land mask, however, but using one increases rather than decreases the land temp trend.
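    For the curious, area weighting on a 5×5 grid amounts to something like this (a sketch, not anyone’s actual gridding code):

    ```python
    import numpy as np

    # A gridbox's area is roughly proportional to the cosine of its
    # central latitude, so weight each box accordingly and skip empty ones.
    lats = np.arange(-87.5, 90, 5)                  # 36 box-center latitudes
    w = np.cos(np.radians(lats))[:, None] * np.ones((1, 72))  # (36, 72)

    def area_weighted_mean(grid):
        """grid: (36, 72) gridbox anomalies, np.nan where a box is empty."""
        have = ~np.isnan(grid)
        return np.sum(np.where(have, grid, 0.0) * w) / np.sum(w[have])
    ```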

  12. And we should not lose sight of the point that Briggs has made repeatedly: when we compare models (GCMs) with observations (say, HadCRU), we are really comparing models with models. The differences in how one chooses to combine observations and derive the MODEL of the means carry with them uncertainties that are largely unaccounted for.

  13. Mosh, I don’t think anybody here is talking about UHI as a physical feature; we’re just talking about what you’re talking about – wanting to remove points of difference in the analysis, to see which processing step is causing the difference.

    As you’ve said, the differences in processing are
    – the GISS UHI step, which doesn’t change the results all that much
    – RSM instead of CAM for combining stations (I don’t think this can be it, and anyway some of the other guys are using a variant of the RSM), and something like RSM for the duplicates (again, I doubt this is it)
    – the spatial methods – interpolating to a point instead of averaging within a box, and different size grid boxes. And there’s no land/ocean mask, but that’s true for most everybody, isn’t it?

    Unless there’s an arithmetic error someplace or an optical illusion, it’s got to be in the spatial methods. But if it is, then how are the hemispheric means OK? That’s what makes this so weird.

  14. Zeke, OK, I stand corrected on Chad. Are you sure he uses RSM on duplicates?

    Inventory file
    425,12345,001

    countrycode,wmo,mod

    Datafile:
    425,12345,001,0
    425,12345,001,1
    425,12345,001,2

    countrycode,wmo,mod,dup. Maybe we should ask him directly, or somebody who has looked at his code.

  15. By the way, has tamino put out his results in digital form, so it can be added to the spaghetti graph?

  16. Don’t worry Mosh, I am too. I suspected that the divergence in recent temps would appear more prominently in either N Hem or S Hem to help us narrow down our search, not vanish entirely 😛

    Carrot,

    As far as I know, Tamino has never provided the numerical outputs, just the graphs. I’d like to add his reconstruction if he has and I somehow missed it…

  17. I think the hint is in the NH/SH plots.

    If you look at those, there is SH deviation between Zeke and GIStemp in the 1940s, but not now. But in the global, it’s the other way around.

    Let’s think about GISS. It uses RSM to find the temperature record interpolated to a point in each grid subbox. It then uses RSM again to combine the boxes together. At no point is there a well-defined baseline. I can’t imagine that this matters, but.. something has to matter.

  18. Mosh,

    I hope it wouldn’t, given the fact that there is a lot more land in the N. Hemisphere. If you knew the ratio of the total land area covered by GISTemp’s gridboxes every year in both the N. Hemisphere and the S. Hemisphere, you could probably recreate the global record with a weighted average, but that’s a much harder task.

    Carrot,

    I suspect it’s not just baselines, since over the century period (1900-2009) GISTemp has a notably lower trend than the other series.

  19. It probably goes without saying (or maybe it has been said), but I presume that GISTEMP’s 1200 km smoothing has been turned off for these comparisons, right? Otherwise, you are extrapolating coastal temperatures over large areas of the ocean. This will dramatically increase the area represented by those stations (usually not as much warming) compared to those in the interior (usually more warming). For the Land-Ocean analysis, the effect is the opposite, since all of the extrapolation occurs in continental interiors or over sea ice.

  20. Zeke,
    But there’s also baselines in the middle of the RSM algorithm, when the stations are combined and when the boxes are combined. Unlike the display baseline, these actually affect the calculation. The baseline is basically the period of overlap between the things to be combined. If things are weird enough, the choice of baseline there can affect how things are stitched together. Also, the GISS RSM is dependent on ordering (they pick the longest record first), and I think everybody else removed that weakness.

  21. cce, I’m pretty sure this is all with GISS’s interpolation, and that’s being listed as one of the possible reasons for divergence. But conventional wisdom says that would make GISS go up faster because of the Arctic, not slower.

    But there’s something to be said for your thought. But if your idea was the answer, then why doesn’t it show in the hemisphere graphs?

  22. Zeke,
    I suggest you re-write this post with some incendiary language (“independent scientists cannot reproduce fraudulent gistemp”), and send it to Watts for cross-posting.

    Can anybody recommend which paper is most current, in terms of exactly how NCDC does it?

    Also, I can’t work out whether they have a land-only index, but if they did, one could add some udon noodles to the spaghetti.

    http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/map/download.html

    These guys use GHCN up to some point, and then take in CLIMATs after that without using the GHCN middleman.

  23. Carrot,

    For that to work I’d have to switch the NCDC and GISTemp lines in the first graph 😛

    Unfortunately, like most mysteries, I suspect the explanation for this one will turn out to be something rather mundane.

  24. Folks, as Carrot eater says: “(“independent scientists cannot reproduce fraudulent gistemp”)” Is the goal to show an independent replication of GISSTemp?

    What is/are your goals beyond replication? Are you then going to analyze the value of their processes?

    I have many more questions, but will stop with that set in hopes of getting answers.

  25. cce is correct: GISS interpolation favours islands and coasts compared to the land interior. This is also explained in the IPCC AR4:

    “Further, small differences arise from the treatment of gaps in the data. The GISS gridding method favours isolated island and coastal sites, thereby reducing recent trends,”

    Also note that:
    “Most of the differences arise from the diversity of spatial averaging techniques. The global average for CRUTEM3 is a land-area weighted sum (0.68 × NH + 0.32 × SH). For NCDC it is an area-weighted average of the grid-box anomalies where available worldwide. For GISS it is the average of the anomalies for the zones 90°N to 23.6°N, 23.6°N to 23.6°S and 23.6°S to 90°S with weightings 0.3, 0.4 and 0.3, respectively, proportional to their total areas. For Lugina et al. (2005) it is (NH + 0.866 × SH) / 1.866 because they excluded latitudes south of 60°S. As a result, the recent global trends are largest in CRUTEM3 and NCDC, which give more weight to the NH where recent trends have been greatest.”

    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-2-2.html

  26. gp2 appears to have the answer. I should be able to calculate the anomalies for those zones and average them, and see how the results look.

  27. Zeke,
    I suggest you re-write this post with some incendiary language (“independent scientists cannot reproduce fraudulent gistemp”), and send it to Watts for cross-posting.

    Hehe, I was thinking the same thing.

  28. CoRev

    “Is the goal to show an independent replication of GISSTemp?
    What is/are your goals beyond replication? Are you then going to analyze the value of their processes?”

    Everybody has different goals, I suspect. I think the goal of this exercise is to understand why several “independent” programmers get results different from GISS’s.

    Nobody has tried a pure replication (working from the paper only, constructing code that follows the description in the paper). That kind of exercise, which many people think is important, merely tests…
    A. the author’s ability to write good instructions
    B. the coder’s ability to follow them.
    It has no scientific value.

    What people are doing here is looking at the problem and trying different methods for calculating the average.

    Issues: how do you average over an irregular area on a sphere with a non-uniform series, non-uniform in space and time? It would appear that choices in methods lead to non-trivial differences in certain time periods (if you consider .1C to be non-trivial). Which method is best? That’s the final question. But first we eliminate the possibility that somebody (or everybody) made a math error.

    I think our expectation was that different methods would lead to trivial differences (like RSM and CAM being very close, or Nick’s approach and Jeff’s approach being very close), say less than 5%,
    but the differences post-2000 are not trivial.

  29. Zeke and Steve,
    If I understand this post correctly, you are using the v2.mean_comb as input into ‘alternate’ surface record anomaly generators, and then noting that GISTEMP land seems to run lower than the results from the ‘other guys.’
    .
    But the final GISTEMP land output includes homogenization, ushcn fiddling, and tob adjustments (STEP1) not included in the v2.mean_comb. The combo file from STEP0 is still ‘raw.’ The head-to-head comparison is GISTEMP output against the ‘other guys’ running GHCN v2.mean_adj.
    .
    STEP0 takes the raw input (v2.mean) from GHCN and combines it with USHCN (post 1980) and Antarctica data. This generates v2.mean_comb.
    .
    STEP1 takes ‘raw’ input (v2.mean_comb) from STEP0 and does the station combo, more ushcn fiddling, tob adjustments, and some ‘mcdw’ adjustments. The output is Ts.bin and Ts.txt. These output files are the equivalent of v2.mean_adj, but they are in a completely different (and difficult) format.
    .
    STEP2 takes Ts.bin, breaks into zones, adds the periurban adjustment, and creates a set of zonal files for further processing.
    .
    STEP3 takes the zonal files and prepares “land only” results. One of these files can be kicked down to STEP4-5 for merging with sea data.
    .
    My previous post shows that dropping the periurban adjustment in STEP2 doesn’t change much. I suspect that most of your ‘divergence’ is in STEP1.

  30. Ah, so the answer was in AR4 all along. I thought it was only us sceptics who were ‘guilty’ of not reading AR4 closely enough:-)

  31. It also explains, conveniently enough, why the issue didn’t appear on the hemispheric charts.

    Trying again to embed that image…

  32. Mosher, thanks for the response. After you folks have developed semi-independent (to each other and NASA/NOAA) approaches do you intend to evaluate the value of those (NASA/NOAA) steps? Or perhaps to evaluate the HADCRU approach to determine why they are diverging, etc?

  33. cce,

    Smoothing doesn’t matter much, it’s the zonal weightings that were the real culprit.

  34. Let me clear up some confusion. I use the same method to combine duplicates and different stations. I wouldn’t call it an RSM because it’s not aligning a number of stations with a single (reference) station. It’s producing offsets so all the stations are aligned with each other.
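    In sketch form, that kind of offset alignment looks something like the following (a toy illustration of the idea, not the actual code):

    ```python
    import numpy as np

    def combine_with_offsets(series, n_iter=50):
        """series: (n_stations, n_months) array, np.nan for missing months.
        Iteratively estimate one offset per station so the shifted stations
        agree as well as possible over their overlaps, then average them.
        (Assumes every station overlaps the rest of the network somewhere.)"""
        offsets = np.zeros(series.shape[0])
        for _ in range(n_iter):
            shifted = series - offsets[:, None]
            consensus = np.nanmean(shifted, axis=0)  # current combined series
            # nudge each offset by the station's mean departure from consensus
            offsets += np.array([np.nanmean(s - consensus) for s in shifted])
        return np.nanmean(series - offsets[:, None], axis=0)
    ```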

  35. Zeke, it’s more than just “smoothing.” Data from the land is extended over the ocean, dramatically increasing the area represented by these coastal/island stations. For example, eyeballing the GISTEMP maps, data from Hawaii represents close to half of the area of the entire lower 48 states.

  36. cce,

    Look at the graph above. Both use the data at the end of STEP0 of GISTemp. The black line is the standard GISTemp record produced from land-only stations. The green line takes the data, uses CAM for anomalies, and grids on 5×5 lat/lon cells. The difference is fairly negligible for most years.

  37. Ron

    I was just trying to get everybody on the same INPUT DATA page;
    the post-STEP0 comb file does that… err, ya.

    Everybody else was using GHCN, not folding in Antarctica or USHCN.
    By using the comb file, everybody is on the same input page.

    From there, the differences were all processing.

    AND as we found out, GISS does not produce a ‘land only’ product,
    as Zeke and the good doctor Ruedy have explained.

  38. CoRev,

    “Mosher, thanks for the response. After you folks have developed semi-independent (to each other and NASA/NOAA) approaches do you intend to evaluate the value of those (NASA/NOAA) steps? Or perhaps to evaluate the HADCRU approach to determine why they are diverging, etc?”

    There isn’t any plan. I think everybody doing this has their own little angle on things. Mostly the guys are DIY types (if I can surmise and project). There are a bunch of ways forward. Let me just list the issues that I see at play.

    1. The issue of independently verifying the professionals.
    Note one thing: everybody doing constructive work is some
    sort of believer in AGW (some strong believers, others weak).
    The doubters don’t dare to suggest a solution in code, much
    less put in the couple or three weeks of work it takes. This work
    is pretty much done.

    2. Reconciling differences. Work with HadCRU would fall in here,
    as would better understanding how the Arctic is handled. To reconcile with HadCRUT we would need to change datasets/formats. Not sure if people are up for that, but it might be easy to switch to the HadCRU data subset.

    3. Math geek tests for which method is BEST. Angels-on-the-heads-of-pins
    debates: vicious, but nobody gets hurt. This is boring except for some of us.

    4. Metadata WARS. Ron B has a bunch of work on that which could be expanded.

    5. Then comes the investigation of the source temperature data, adjustments, etc.

    6. Looking at issues like UHI.

    BUT if people think that the GISS approach is deeply flawed, they are deeply wrong.

  39. My next project is to add in Chad’s land mask, and calculate land/ocean temps using the GISTemp zonal averaging method. I suspect the result will be fairly close to the GISTemp land/ocean record, but we will see.

  40. I’m still stuck on the N/S hemisphere versus global. Carrot Eater said it was WTF territory but there was no followup – on what physical basis (I know Mosh – GisTemp is just a model…) can the global temperature be higher than BOTH the Northern Hemisphere and the Southern Hemisphere?

  41. Thanks again, Mosher. I think I’ll go back to lurking until you folks start the comparison(s) and best methods discussions.

  42. carrot eater (Comment#43935) May 22nd, 2010 at 3:32 pm

    Well, that was fun.

    No, it’s not fun unless some scientists are named and pilloried, along with backhanded accusations of fraud and incompetence.

  43. Zeke,

    I get it now: the zonal weighting, in addition to weighting differences caused by the smoothing/extrapolation. FWIW, according to the GISTEMP map tool (which can calculate trends), setting “smoothing” to 250 km instead of 1200 km increases the 1980 – 2009 trend of the “traditional” analysis by about 9%, while it decreases the Land/Ocean trend by about 10%.

  44. David Jay,
    I touched on it way back in the beginning of the thread where I said in part…

    MikeC (Comment#43874)
    May 21st, 2010 at 9:13 pm
    “… which leads to hemisphere numbers which do not take into account the difference in the amount of land between the hemispheres”

    In the end it was an email from Dr R at NASA GISS that settled the issue (nothing to do with the back slapping for a job well done on a contrived post)… There is more land in the Northern Hemisphere than in the Southern Hemisphere; the GISS numbers do not take that into account.

    mosher,
    “1. The issue of independently verifying the professionals.
    note one thing. everybody doing constructive work is some
    sort of believer in AGW ( some strong believers, others weak)”

    That from a man who still does not read the studies

  45. bugs said, {No, it’s not fun unless some scientists are named and pilloried, along with backhanded accusations of fraud and incompetence.}

    Ah, we leave that part to pros like you bugs!

    :)~

  46. Thanks again, Mosher. I think I’ll go back to lurking until you folks start the comparison(s) and best methods discussions.

    Thanks, CoRev. Since this is all volunteer work, people who help work at their own pace on stuff that interests them.

  47. [delurk] Nicely done, Zeke et al. I’ve little to add beyond the kudos, but enjoy learning by reading. [/delurk]

  48. [delurk]AMac , darn it! Now I have to change my commenting format. And Boris, I also had to look up a definition for cromulent. Yes, it was! The rest of you are on your own for the def.[/delurk]

  49. Saw this blog link posted on CA.
    Blog says: “A few years ago, I tried to get an idea of the extent and scope of GHCN data by plotting the temperature series for each individual station.
    This time, I decided to not worry about the actual measurements, but just lay out the locations of temperature stations with data in the unadjusted mean file v2.mean.Z every month in the years spanned by the GHCNv2 database.”

    http://blog.qtau.com/2010/05/dude-where-is-my-thermometer.html

    Interesting dude!

  50. Liza… Ha, sinar is an old-time regular at CA. Good guy.

    What’s interesting to note is that we can now show that dropping thermometers has no appreciable effect on the mean. We can show that by doing a test that I suggested to EM Smith back in January,
    a test he agreed was the best test. In GHCN the number of stations rises from the beginning (in 1701) to a peak in the 1990 period, reaching a total of 7280. After that time the number decreases. What I suggested was the following: let’s say that today we have only 2000 stations in the database, down from our peak of 7000. What we can do is run the average with ONLY those 2000,
    then compare that to the case where we added more stations (up to 7000) and then dropped some after 1990. That is, we can answer the question “What does adding stations do?”, which gives us an understanding of what dropping stations does.

    Now, I will have to admit that the first time I saw that chart of dropping stations, ALL sorts of red lights went off. EVERY PERSON who has worked with data would raise an eyebrow. But after some careful readers at CA pointed out that the anomaly method was built to handle this kind of change in sampling, I was convinced. Still, some people want to see the code actually run the test I described. I know I did. Anyway, that test has now been run. It’s been run with actual GISTEMP code, and with a refactoring of that code. It’s been run with OTHER methods (Zeke’s method, Tamino’s method). The result: the great drop-out does NOT change the answer in any significant way. The methods worked! We can now report that result. People who want to question it can download the code and run it for themselves. If they choose not to, then I don’t know what to say to them. Here is an example of open code and open data answering a doubt.

    Also, we can test how sensitive the answer is to the total number of stations. Like I said, there are 7280 stations in that database.

    You think that changing the number of stations would have a big effect? After all, the weather here is different from the weather 60 miles away. Seems reasonable, right? Seems intuitive? Seems to make sense? Common sense! Want to test your common sense?
    You can. You can test the idea that fewer stations leads to a different answer. You could even, as Nick Stokes has done, reduce the dataset to 60 STATIONS!! 60 stations to cover the whole world? Impossible! Well, if we are curious we can test such a wild notion. That’s fun! It also shows us that our common sense sometimes misleads us. Now, that doesn’t mean we should only use 60 stations, but if we had to, we could STILL get a decent result (not the exact same result as with 7280 stations) from that small a number. And guess what: there are theoretical papers that argue that 60 is probably too small a number, but 120 or so is plenty.
    Those are fun questions. We get to actually test those notions now. (A sketch of the drop-out test follows below.)
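    In sketch form (hypothetical arrays, not the actual GISTEMP run), the drop-out test is just:

    ```python
    import numpy as np

    def dropout_test(anoms, last_year, cutoff=1992):
        """anoms: (n_stations, n_years) station anomaly series, np.nan-padded.
        last_year: (n_stations,) last year each station reports.
        Compare the series built from ALL stations with the series built
        from only the stations that survive the 1990s drop-off."""
        survivors = last_year >= cutoff
        full = np.nanmean(anoms, axis=0)
        kept = np.nanmean(anoms[survivors], axis=0)
        return full - kept  # near zero everywhere if the drop-out is benign
    ```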

  51. “You can test the idea that fewer stations leads to a different answer. you could even as Nick stokes has done, reduce the dataset to 60 STATIONS!!”

    The fewer the stations, the fewer the amount of numbers that need to be faked. Why fake the numbers for 1000 stations when you only need to fake 60 and ignore the rest? 😉

    Andrew

  52. Mosher, I don’t have time to chat. I’ve got to go.
    Basically, as I have said, that “result” number holds no meaningful information. It really doesn’t tell you anything, even though you keep saying it does. You can’t really be representing “The Northern Hemisphere,” for instance, because Canada and Russia are hardly represented at all compared to the USA, as I can see from that slide show. They are located at higher latitudes, and the reasons for climate in those places are vastly different than in the USA and other places on earth, and not just because they are farther north, but that matters. Same goes for the SH not being represented in the data. Midwest temperatures in the USA depend on all kinds of factors that are different to those on the coasts, etc. That “mean” number tells you none of that. Besides, keeping in mind how vast and diverse the climate on this planet is, you make your charts in 1-degree increments instead of tenths of a degree: it’s a flat line.

  53. Steven Mosher,

    Climate Science has chosen to present information with unverifiable numbers and fakeable graphs. It seems to me that you should be spending your time looking at these problems and perhaps ways to fix them, rather than ignoring them and talking about the moon.

    Andrew

  54. Nice work, cce, gp2, zeke.
    .
    I need to make some corrections to my above post.
    .
    Ts.txt is not a difficult format. I was confusing it with the Ts.{label}.[1-6] which are harder to decode by eyeball (when written to text)
    .
    Ts.bin is created in STEP2 from Ts.txt – not together with it.
    .
    I suggested that there are TOB adjustments in STEP1. Probably wrong. There are adjustments based on a file named SUMOFDAYS. Probably not TOB adjustments.
    .
    Sorry for the errors – too much stuff off-the-cuff and wrong. I’ll try to be more careful next time. Plus I just spent more hours in that code this weekend than I did during the previous part of the year. New stuff coming out tonight!

    “Basically, as I have said, that “result” number holds no meaningful information. It really doesn’t tell you anything, even though you keep saying it does. You can’t really be representing “The Northern Hemisphere,” for instance, because Canada and Russia are hardly represented at all compared to the USA, as I can see from that slide show.”

    On the contrary. Let’s start with the US, which has 1800 stations.
    ACTUALLY there are way more than 1800; there are multiple thousands of stations that don’t get selected for GHCN. When we look at those thousands of stations we get an average. Let’s take the
    average between 1950 and 1960, and say that average is 14.3C. Do you want to know something? If we take those 1800 stations and ONLY select
    600 of them… guess what? The average between 1950 and 1960 will be… close to 14.3. In fact, if we randomly select 1000 different sets of 600 stations, they will all fall in the range of 14.3C ± .05C. That’s because temperature is correlated over distance. And what is COOLER STILL is that we can take just 200 stations, and you know what? The average will be 14.3C ± .1C. Doesn’t that insult your common sense? I know it insulted mine. But those are the facts. Now, we can also test this for Russia and Canada. Again, we take data from the many stations (they are in a database that reports daily and hourly data); there are thousands of these. And we can do the same thing. We sample the country with many stations and then we “decimate” our sample: choose fewer and see what happens. Guess what? The means are stable. One of my favorite studies uses tens of thousands of stations around the world. Now, that data only exists from the 50’s to the present, but it lets us TEST our theories about sampling: how much do we need to sample, and how close do the stations need to be? There is no need to huff and puff and wave your arms. You just stick 1000 thermometers in your bathtub. It’s 78 degrees. You start to remove them: 100, 200, 300, 400,
    and you watch what happens to the mean. Guess what? It doesn’t change.

    Now, the less SPATIALLY UNIFORM the area is, the more thermometers you will need. For example, in the US there are parts of the country where 1 thermometer per 2 degrees is fine; for other parts you need 2 or 3. Anyways, there is even ANOTHER WAY to show that what I say is true, and that involves 24,000 stations. Guess what? Well, you know the answer. Sampling the climate field is a pretty straightforward problem. The work has been done and redone. You should read some of it. Here’s a nice one (when they say 30 arc-second data… there is also 10-minute data… lots of stuff):

    http://www.worldclim.org/tiles.php

    The climate field is diverse, just like populations of people are diverse. But we have well-known ways of sampling a small number and inferring what the total population looks like. And we can even test that sampling. Which we have.
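    The bathtub experiment is easy to run in code. The numbers below are entirely made up, chosen only to mimic the behavior described (stations sharing a common signal, plus site offsets and weather noise):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(14.3, 0.5, size=120)        # shared monthly "climate"
    site = rng.normal(0.0, 1.5, size=(1800, 1))     # persistent site offsets
    noise = rng.normal(0.0, 1.0, size=(1800, 120))  # weather noise
    stations = signal + site + noise                # 1800 stations x 120 months

    print(f"full network: {stations.mean():.2f}")
    for n in (600, 200, 60):
        means = [stations[rng.choice(1800, n, replace=False)].mean()
                 for _ in range(1000)]
        print(f"{n:4d} stations: {np.mean(means):.2f} +/- {np.std(means):.3f}")
    # The subset means stay pinned near the full-network average, and the
    # spread widens only slowly as the sample shrinks, because the stations
    # share the same underlying signal.
    ```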

  56. It all makes sense now. NOAA faked those 60 stations knowing that one day Nick would choose them. Or maybe, NOAA faked them out of general malice, and Nick chose them knowing they were faked. And climate scientists should try to fix problems, but when they do, they should be accused of faking data.

  57. cce,

    You’re missing the point. If there was one faked number or 1,000,000,000 faked numbers in the data, how would you know the difference?

    Andrew

  58. Easy, Andrew. I accept that the data is not and will never be perfect. I accept that there are ways to identify flawed data by comparing stations to their neighbors and by parsing the data in different ways as Ron is doing. I accept that there will be some amount of uncertainty due to incomplete coverage and the various ways of correcting and combining data. Finally, and most importantly, I exclude crazy conspiracy theories.

  59. There is absolutely no TOB done by GISTemp, Ron. Though it does import TOB already done in USHCN.

  60. I know just enough about stats to get myself in trouble, but…
    .
    The dropping of large numbers of sites to a very small number of sites should have an effect.
    .
    Yes… you could get an answer to the question with as few as 60 stations worldwide, as noted above. You could get an answer with only 1 station.
    .
    Now… what happens to your error bands? They get wider as you drop from larger sample sizes to smaller ones, do they not?
    .
    If the +/- error range is large…the answer has no real meaning. You would get something like..”..Yes, the world has warmed +0.5C, +/- 2.0C..”
    .
    So if you are going to say you see a change in temp, give me the range of error to go with it or it means nothing. It might still mean nothing if the error band is too large.
    .
    Dropping stations and then using a WAG to infill the area dropped…? Why? You lose precision for no good reason.

  61. Steve Mosher,
    A couple of other things besides what Ed Forbes said!
    The vast majority of the land in the NH is in Russia, Asia and Canada. That’s where you should be getting most of your data. Those data could have the least UHI effect too: vast areas of land that are less populated because it is too cold (and the water is frozen)! How many stations are in Asia, Russia and Canada, and how are they dispersed? Let’s see an exact map.

    Also, what you (and a lot of people) are doing by accepting the current surface station data is believing that it has accurate corrections for UHI (I know you don’t want to talk about it). Without an accurate representation of the UHI for every station, or for a good number of them, that ± .05 error could be much higher.

    Bottom line too, Phil Jones said:
    “I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. ”

    He goes on to say it is “quite close” to the significance level… close, but as they say, you get no cigar!

  62. In millions of square miles
    Russia: 6.5
    China: 3.7
    Canada: 3.8
    United States: 3.5
    .
    Stations In GHCN (total count, not current)
    Russia: 66 (europe) + 222 (asia)
    China: 422
    Canada: 847
    United States: 1921
    .
    Those countries are larger than the US, but not incredibly larger.
    There are hundreds of stations in each.

  63. Yes Ed, of course you lose precision, as I showed in my example.
    But the point was really to show Liza how the math works.

    For example, in the US with 600 stations your resolution is about .05C; at 250 stations it’s about .1C. Perhaps you can explain to Liza the concept of diminishing returns. The point was simply this: Liza persists in the false notion that you need stations every 6 inches to capture the climate. We know that not to be true. Is more better? Of course. But past a certain point you are not gaining much information. Obviously, with fewer stations you have larger error bars. But that point is lost on Liza.

  64. liza —

    This builds on a question I directed to Bad Andrew at tAV.

    One way of looking at is to ask you, “Liza, is there any evidence that would make you see things differently?”

    The answer may be “no.” It is, for many people, about certain things. When that is the case, we should enter into a dialog with realistic expectations.

    Or, the answer may be, “yes, it’s possible that I could change my mind.”

    You’ve been laying out objections to certain data and ideas–but saying “no” doesn’t really address the question. At some point, it seems to me that you could give equal thought to what sort of information would be meaningful. In other words, you propose tests.

    It might take learning computer coding to actually answer them, but it would be a start.

    For instance, I think sometimes you’ve said that the instrumental temperature record isn’t good enough to draw solid conclusions. In which case, in defined terms, what would be good enough?

    Other times, I think you’ve said that you don’t believe in the entire idea of “temperatures,” or averaged temperatures, or temperature (or anomaly) values that represent an entire area. Which would presumably mean that there’s no point seeing if the database Zeke uses is good enough, or if his code is robust enough — because it can never be convincing, no matter what.

    The two positions seem the same, but I think they are different. (Though I wouldn’t hold out much hope for a meeting of the minds in either case.)

  65. Thanks Ron. Liza also doesn’t get the concept of spatial correlation.

    Liza: the SIZE isn’t the only important factor. It’s the relative homogeneity of the area that matters. In any case, more stations would be better; past a certain point you have overkill, and below certain numbers you have wide confidence intervals.

  66. I do not know how the others’ reconstructions are gridded and interpolated, but the difference between NCDC and GISS can be explained by interpolation over the ocean: it matters if the anomaly is computed as an area average over the entire world, while computing it by the 3 latitude bands greatly reduces the effect of interpolation.

    In the image below, nasa1200 is gistemp computed as an area average over the entire world (as NCDC does), nasa1200b is gistemp computed with the 3 latitude bands (as GISS does), and land1200 and land1200b are the same but with grid pixels over the ocean removed; all the data are 24-month running means.

    http://img375.imageshack.us/img375/4798/nasagiss4.png

  67. “what would be good enough?”

    AMac,

    By asking this question you are erroneously placing the burden on liza to give a definitive answer that no one else has given so far. This is an unknown threshold, given the state of climate knowledge. The burden does not belong on her. The burden is on the person making the AGW claim to provide the evidence. Squiggly lines, unverifiable numbers and speculations about how the climate works are not evidence. How do you expect liza to answer this question when you can’t answer it yourself?

    Andrew

  68. I understand your frustration with me (and my husband) guys.
    I understand the math fine. It’s a math/number thought experiment. I tend to think in physical realities, however. I live in Southern California, and I can drive north from San Diego to the city of Santa Clarita, over a hundred miles, or east a hundred miles toward the desert, and pretty much never see any open land. It’s all urban sprawl. I’ve experienced temperature differences between being in the city and traveling just a few miles out of it in tens of degrees, not fractions of degrees. There are mountains with snow on them right now here; the biggest one is called Mt Baldy, just an hour away. I can look at this mountain while I am mowing my lawn in shorts and a tank top (when it is warmer out! It’s cold here now!). Is that mountain not part of the “globe,” and doesn’t it matter to the average temperature? How many thermometers would give a true “average” temperature between me and that mountain?

    AMac, you are right; I’ve said all those things in one way or another. I’ve also said: it would be like saying you know what the “global average rainfall” should be, down to fractions of an inch, by clumping and massaging data from thousands of places with totally different geology, and then claiming “global average rainfall” is broken/different/changing from “what’s on record” or “what’s normal”.

    I’ve also asked “What’s the average temperature of the Pacific Ocean?” and do you think that number would mean anything? No answer.

    you said:
    “I think sometimes you’ve said that the instrumental temperature record isn’t good enough to draw solid conclusions. In which case, in defined terms, what would be good enough?”

    What solid conclusions do you mean? That the climate is in a warm phase on this planet? I don’t think you need all these people playing with all these massaged data to know that. It’s called an interglacial period (really, we are still “in” or coming out of a glacial period when half of the USA was buried under miles-thick ice and snow!), also referred to as a glacial minimum.

    Ron Broberg (Comment#44002) May 23rd, 2010 at 9:36 pm
    Stations In GHCN (2009)
    Russia: 24 (europe) + 97 (asia)
    China: 73
    Canada: 49
    United States: 587

    Ah… so that’s good coverage? “Liza also doesn’t get the concept of spatial correlation.” I don’t think you understand the geology, i.e. the physical real world. See Mt. Baldy.

    Every part of AGW theory lives in a computer.
    In the world I live in even the moon warms the atmosphere, 0.03 of one degree to be exact say some scientists; and it also breaks up sea ice when it is full! Is the moon in your computer?

  69. Andrew_KY (Comment#44013) May 24th, 2010 at 6:42 am
    Good morning Andrew! Thank you. 😉

  70. My gosh, I just checked the temp outside because I had to turn on the furnace this morning… again. It says 48°F, brr! However, the “official surface station” (at the airport) for my area on my homepage website is reporting 56°F! Big difference!

  71. Good Morning liza. 🙂

    The propaganda is thick and hard for some to cut through. But I think AMac is sincerely looking for answers and I think he’ll come around eventually.

    Andrew

  72. Re: JSwift,

    It is useless to attempt to reason a man out of what he was never reasoned into.

    I think JSwift’s comment about the AGW debate is quite perceptive.

    There are many things that must be so — or not so — thus, they are. Or aren’t.

    It was incredulity about the particulars of the Tiljander proxies that got me to look at AGW in the first place. Not the varves themselves, which are quite ordinary, but the defense of their every use, provided it’s for the Right Cause.

    The general phenomenon crops up over and over again, from d’Aleo and ChiefIO to RealClimate and the Team.

    Zeke’s posts are one of the islands that Feynman would recognize as science in substance as well as form.

  73. How many thermometers would give a true “average” temperature between me and that mountain?
    .
    We don’t rely on ‘average’ temps, do we?
    .
    You do understand the difference between anomalies and averages, question mark?
    .
    Are 50 stations in a 3.5 million square mile territory sufficient? I suppose that’s a statistical question with answers that include varying degrees of confidence. I’m not sure where the answer lies, but I’m willing to follow up the question and learn the answer. Are you?

  74. Thanks Andrew: “I think AMac is sincerely looking for answers …”
    Sorry if I was not understanding that this morning AMac. 🙂

  75. I’m not sure if the observation of x number of stations representing x amount of surface area is as important an observation as the 8 degree difference between Liza and the airport a short distance away.

  76. MikeC, I know! We’ve been paying attention to this… our thermometer and the one at the airport are always a lot different. It’s not like this isn’t important to us… our whole family, in various places around the “southland” here, plays at this game of temp observations too; plus my father-in-law in Hawaii. He likes to make fun of how cold we are.

    Ron, I am as open as anybody should be. I think you have to first convince me what “normal” is!

  77. Ron Broberg:

    You do understand the difference between anomalies and averages, question mark?

    I do think that’s the issue here.

    What is important for Liza’s location re: the airport is, e.g., the monthly average at the airport minus its value over some normalization period (e.g., 1961-1980), versus the monthly average at Liza’s home minus its value over the same normalization period.

    When centered on this same period, does Liza’s thermometer show a different long-term trend than the airport? That’s what is most important for measuring climate, not whether there’s just an offset, or even different RMS noise (though that is important too, for a different reason: it sets the number of stations you need in one region to get an optimal estimate of climate for that region).
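    A toy illustration of that point (entirely made-up numbers):

    ```python
    import numpy as np

    # Two sites with a constant offset but the same underlying trend give
    # identical anomaly series: the yard-vs-airport offset drops out.
    years = np.arange(1961, 2011)
    trend = 0.02 * (years - 1961)   # a made-up 0.2 C/decade warming
    home = 15.0 + trend             # cooler site
    airport = 19.5 + trend          # warmer airport, constant +4.5 C offset
    base = (years >= 1961) & (years <= 1980)

    home_anom = home - home[base].mean()
    airport_anom = airport - airport[base].mean()
    print(np.allclose(home_anom, airport_anom))  # True: same anomaly series
    ```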

  78. Carrick, my “region” is all urban sprawl. 🙂 I am closer to the ocean, by about 4 or 5 miles, than the airport is. It’s still constant buildings and trees, pavement, etc., all the way to Pacific Coast Highway from my house. This area used to be mostly wetlands and oil fields not too long ago. Edit add-on: and farms!

    Here’s comments from my facebook page when I asked the question:

    “Where the heck is summer?” yesterday:

    My friend in Illinois said : It is here in Illinois..FINALLY hee hee

    My Canadian friend: How cold was it? 70 degrees? hee hee
    45 Min away from here, we had an inch of snow.

    My friend who lives about 20 mins south from me in San Juan Capistrano said:
    I was freezing today Liza…and for me to be freezing at this stage of my life is amazing

    My friend in the UK said: It’s been really hot here this weekend Liza, a cloudless summer blue sky and a blazing sun. Think we swapped weather with you in CA!

    Another friend in Illinois: You said it! It was 90 degrees here yesterday.

    Friend in the UK again says: It was in the 80’s here….nothing to do with global warming, just my hot flushes keeping England hot!
    🙂

  79. Liza, summer is definitely here for me. It was in the mid-90s yesterday (the heat index was around 103), and it being the weekend, I had the “joy” of working outside in it all day.

  80. torn8o:

    Is this what you mean when you say:
    temperature anomalies are correlated over distance?

    This is a problem a lot of us want to study next.

  81. Here’s an old picture of where I live, a view from the pier:
    http://www.yesterdayla.com/Graphics/huntington4.jpg

    I wish I knew what year it was taken!

    Another:
    http://www.yesterdayla.com/Graphics/huntington1.jpg

    Here’s the same pier and the surrounding area a little later in history:

    http://www.yesterdayla.com/Graphics/huntington2.jpg

    And now:
    And an aerial view of the same pier:
    http://www.stockteam.com/huntington-beach-pier-aerial.html
    This picture is kind of weird (I think it sticks when loading)…the pier has the red dome. That’s Ruby’s restaurant.

    I hope these show up as only links. Sorry if I mess up!

  82. Bugs,

    Comparing the GLOBAL OHC with the local Arctic ice extent is not particularly illuminating. To get a better understanding you’d plot the variables that matter: the local SST and, most importantly, the wind. To the extent that the global represents the local, there is some small point in showing the global number, but the case would be clearer if you plotted the local SSTs.

  83. Amac:

    “The general phenomenon crops up over and over again, from d’Aleo and ChiefIO to RealClimate and the Team.
    Zeke’s posts are one of the islands that Feynman would recognize as science in substance as well as form.”

    Ditto on that. At times I think both sides are imitating the worst behavior they see on the other side: “You think that science is bad? Watch THIS.”

  84. Carrick, on trends: one funny little “study” I did a while back was to look at trends in deserts. With Ron B’s improved metadata and Zeke’s tools, that might be a fun one to redo. And Zeke has already done airports versus non-airports. Generally speaking, I would think airports are going to be a fairly stable place to establish a trend… after the initial period of development.

  85. Steven Mosher:

    I would think airports are going to be a fairly stable place to establish a trend.. after the initial period of development

    They are ideal in many respects… great horizontal fetch, so fully developed turbulence (and maximal vertical mixing in the ABL) is a huge plus. And the land is maintained in the same state, so you don’t get shifts in vegetation over time.

    Some of the things people complain about, like heat sources from airplanes, amount to almost nonexistent effects (unless you have planes stacked up beside the instrument all day, of course). And even then, if you do a single measurement from that site per day, you could time it so it falls in between pushes.

  86. The wiki page for the airfield with the weather station that reports for my area says:

    The Los Angeles area is also subject to phenomena typical of a microclimate. As such, the temperatures can vary as much as 18°F (10°C) between inland areas and the coast, with a temperature gradient of *over one degree per mile* (1.6 km) from the coast inland.

    This is just one small blip on the planet, with one surface station representing more than one city here too! Between all that said above and the urban development (and also added vegetation) I provided pictures of, how can you possibly know if any kind of “trend” is real climate or not “normal,” or know if you adjusted for the UHI correctly, etc., especially on such small timescales?

    I think you are just guessing.

  87. Goofy question of the day –

    As air travel has cheapened, airports have become busier, thus maybe leading to higher anomaly readings of the temps.

    What happens if air travel once again becomes basically a rich person’s pleasure and flights in and out of airports drop? Do we see some runways then being converted back to “natural” habitat? Dropping ground travel to and from said airports? Etc.?
    And if this then counters some of the heat island effect, will we not see anomalies drop back towards the base temps?

  88. DeNihilist (Comment#44037) May 24th, 2010 at 10:49 am
    The wiki page for my airfield also said the lowest temperature ever recorded (even in this age of AGW) was 30°F, in 2002! lol

  89. Re: liza (May 24 10:45),

    > I think you are just guessing.

    I think you’re pretty smart 🙂 and could design experiments to test this notion. With an eye to simple statistics, I think you could figure out how to put quantitative bounds around your results. Like, “This test of the records of 50 stations of X and 50 stations of Y over 40 years shows that the difference in 40-year trends between Xs and Ys is 95% likely to be between 0.1 degree and 0.3 degrees.”

    If you tried an approach like that, what do you think your results would show, compared to what Zeke finds? Compared to what Jeff Id/RomanM find?
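
    For what it’s worth, the bookkeeping for that kind of test is only a few lines. Here is a sketch in Python with synthetic data (the station groups, noise levels, and trends are all made up, purely to show where the “95% likely” bound would come from):

    import numpy as np

    def trend_per_decade(years, anoms):
        # OLS slope of an annual anomaly series, in degrees per decade
        return np.polyfit(years, anoms, 1)[0] * 10.0

    def group_trend_difference(years, group_x, group_y):
        # Difference in mean trends between two station groups, with a
        # rough 95% confidence interval from the standard error
        tx = np.array([trend_per_decade(years, s) for s in group_x])
        ty = np.array([trend_per_decade(years, s) for s in group_y])
        diff = tx.mean() - ty.mean()
        se = np.sqrt(tx.var(ddof=1) / len(tx) + ty.var(ddof=1) / len(ty))
        return diff, (diff - 1.96 * se, diff + 1.96 * se)

    # 50 stations of X and 50 of Y over 40 years, as in the example above
    rng = np.random.default_rng(0)
    years = np.arange(1970, 2010)
    xs = [0.020 * (years - 1970) + rng.normal(0, 0.3, years.size) for _ in range(50)]
    ys = [0.015 * (years - 1970) + rng.normal(0, 0.3, years.size) for _ in range(50)]
    print(group_trend_difference(years, xs, ys))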

  90. Liza:

    The Los Angeles area is also subject to phenomena typical of a microclimate. As such, the temperatures can vary as much as 18°F (10°C) between inland areas and the coast, with a temperature gradient of over one degree per mile (1.6 km) from the coast inland.

    Coastal sites are “special” because of the marine-land atmospheric boundary layer. Add in the fact you are surrounded by coastal mountains, and there are a lot of unusual micrometeorological effects for your area.

    But, here’s the issue. What happens when you anomalize each site and compare temperature trends between different sites around HB?

    How does the variability in temperature trends between sites compare to the variability at a given site?

    If the variability in temperature trend associated with weather at one site is large compared to the variability between sites, it’s just common sense you can neglect the multiple site issue in favor of the big driver on the variability in the measurement of climate, which is weather itself.
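
    In Python-ish terms (synthetic series and a made-up noise level, just to show the shape of the comparison), that check might look like:

    import numpy as np

    def slope_and_se(t, y):
        # OLS slope and its standard error: the single-site trend
        # uncertainty due to year-to-year weather noise
        A = np.vstack([t, np.ones_like(t)]).T
        coef, res, _, _ = np.linalg.lstsq(A, y, rcond=None)
        sigma2 = res[0] / (len(t) - 2)
        se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
        return coef[0], se

    rng = np.random.default_rng(1)
    years = np.arange(1980.0, 2010.0)
    sites = [0.02 * (years - 1980) + rng.normal(0, 0.5, years.size)
             for _ in range(10)]

    slopes, ses = zip(*(slope_and_se(years, s) for s in sites))
    print("spread of trends between sites:", np.std(slopes, ddof=1))
    print("typical trend uncertainty at one site:", np.mean(ses))

    If the second number dominates the first, the between-site differences are noise-limited and the micro-siting worry is second order.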

  91. Denihilist:

    As air travel has cheapened, airports have become busier, thus maybe leading to higher anomalous readings of the temps.

    Extremely unlikely due to the large mixing volumes for the air around airports. Remember in the daytime you’re talking about a layer of air 2-km plus thick that is overturning every 20-30 minutes or so. Plus there’s a very low surface friction around airports, so your mean wind velocity should be higher than in the middle of town.

  92. Thanx, Carrick. I can accept that (how big of me eh? 🙂 )

    So then, even if the other possibilities were to happen, you would expect not much of a divergence?

  93. DeNihilist:

    So then, even if the other possibilities were to happen, you would expect not much of a divergence?

    I actually suspect that parking the instruments at airports might underestimate the magnitude of the impact of urbanization on climate (larger mixing fraction of rural atmosphere at airports than in city parks for example).

  94. carrick,

    “Plus there’s a very low surface friction around airports, so your mean wind velocity should be higher than in the middle of town.”

    Except that you are measuring the day’s high and low temps. The air is calmest during the coolest part of the 24-hour period, just before sunrise. That’s when UHI has its strongest effect, and the reason that UHI is detected most in Tmin (there is a much milder effect in Tmax). The station at the airport will usually show a lower reading than downtown because the airport station is properly sited more often and is located in the cooler area of the heat island. Towns and cities will also have varying levels of UHI depending on the building materials.
    There has been a bunch of research on this topic, especially by the Swedish.

  95. DeNihilist,
    “So then, even if the other possibilities were to happen, you would expect not much of a divergence?”

    There is a lot of divergence, especially on Tmin. But the airport is usually going to be much cooler than the downtown areas, even if the downtown area is in a park. But then, it is a common-sense issue: if the station at the park is 200 feet from the urban objects (roads, buildings, etc.) then it will be cooler than the station 50 feet away from similar objects.

  96. MikeC: If I’m not mistaken, most modern airport met stations do hourly measurements and the daily average is based on that.

    For example, see this.

    I’ve looked at the Tmax/Tmin bias issue with a station that has 1-second measurement intervals; admittedly, this is just at one site. I didn’t find a significant effect for computing (Tmax+Tmin)/2 versus Tavg.

    Interesting aside: for certain types of distributions, (Tmax+Tmin)/2 is a more robust central tendency estimator (mode estimation) than is mean temperature. I didn’t get a chance to look at this yet, but it would be interesting. (Humans don’t average temperature over 24-hour periods to decide what is “typical”; (Tmax+Tmin)/2 may be a better metric for a human, and maybe even for a GCM, than Tavg.)

    There are 50 stations worldwide with 1-s measurement intervals that I have access to, all well sited (including one in Antarctica). When things clear up for me this summer, I plan on doing a 5-year x 50-station study.

    I guess it would be useful to have access to some “poorly sited” locations to compare against.
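
    For anyone who wants to try the same check, a sketch with a synthetic diurnal cycle standing in for real high-rate data (the sinusoid and noise level are invented):

    import numpy as np

    rng = np.random.default_rng(2)
    seconds = np.arange(86400)
    # crude stand-in for one day of 1-second samples: diurnal sinusoid
    # plus sensor noise
    temps = (15.0 + 8.0 * np.sin(2 * np.pi * (seconds / 86400.0 - 0.25))
             + rng.normal(0, 0.2, seconds.size))

    t_avg = temps.mean()                          # full-resolution daily mean
    t_minmax = (temps.max() + temps.min()) / 2.0  # the traditional estimator
    print(f"Tavg={t_avg:.3f}  (Tmax+Tmin)/2={t_minmax:.3f}  "
          f"diff={t_minmax - t_avg:.3f}")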

  97. MikeC:

    But then, it is a common-sense issue: if the station at the park is 200 feet from the urban objects (roads, buildings, etc.) then it will be cooler than the station 50 feet away from similar objects

    I’d love to know how big of an effect this is in practice. Any ideas? E.g. 0.1°C?

    I suspect the bigger effect is local albedo from parking lot vs grassy knoll.

  98. carrick,
    “MikeC: If I’m not mistaken, most modern airport met stations do hourly measurements and the daily average is based on that.”

    In the context of this conversation (what GHCN and USHCN use) it’s gonna be the average of Tmax and Tmin, just like in the backyard stations.

  99. carrick,

    “I’d love to know how big of an effect this is in practice. Any ideas? E.g. 0.1°C? I suspect the bigger effect is local albedo from parking lot vs grassy knoll.”

    It entirely depends on the individual circumstances. How far is the station from the nearest heat source… how big is the city or nearby city… the direction of the normal airflow… air conditioners are seasonal machines… are there any trees blocking wind (that’s a real biggie, I have found out). But one thing is for certain: there is a predictable pattern depending on the circumstances. When I looked at pairs of Menne’s CRN 1 and 2 stations which were close to each other, the size, distance, and wind direction of the heat island in relation to the station did appear to be a factor… and the differences were significant, in the 0.5 to 1.0 °C range.

  100. carrick,
    “But, here’s the issue. What happens when you anomalize each site and compare temperature trends between different sites around HB?”

    It is the same, it depends on the circumstances… but the biggest influence on the trends in that area is PDO, very visible.

  101. Sorry carrick, I was distracted when I wanted to discuss the albedo issue. Think Tmax and Tmin. The strongest effect of UHI is on Tmin. Albedo will be a Tmax issue. The grassy knoll will absorb more heat since it is watered and is darker than the mesquite grasslands in the surrounding countryside if it is the dry time of the season. The parking lot: is it black asphalt or light colored concrete? Darker will absorb more heat, lighter will reflect more… like I said, depends on the circumstances.

  102. MikeC:

    In the context of this conversation (what GHCN and USHCN use) it’s gonna be the average of Tmax and Tmin, just like in the backyard stations.

    Reading their descriptions, that’s not obvious to me. Do you have a link that definitively states this? In the documentation I’ve found, they describe taking the mean, max and min values from a time series. If they use “mean” in its standard usage, that is not (Tmax+Tmin)/2. I know that Watts and others claim differently, but they aren’t authoritative IMO.

    It is the same, it depends on the circumstances… but the biggest influence on the trends in that area is PDO, very visible.

    Of course that’s a regional scale disturbance. If that swamps micrositing issues, then that just points to too much importance being placed on micro-site effects.

  103. Ya carrick,
    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

    Anyone working on finding the correct temps who has not read AT LEAST all of the studies at the bottom may as well put on their wizard hat and play Merlin the Mosher… and guess at how it works.
    …AND THE AIRPORTS ARE IN ds 3206 IF MEMORY SERVES ME CORRECTLY… damn caps, but I was too lazy to retype it all 🙂

  104. carrick,
    “Of course that’s a regional scale disturbance. If that swamps micrositing issues, then that just points to too much importance being placed on micro-site effects.”

    No way carrick… you need to break it all down, you are looking at a small signal riding on a larger signal… and while I am pretty sure that there will be no flooding or malaria outbreaks, you need to know everything that goes into the trend… besides, the recent cold outbreaks after years of global warming brain-washing are padding my retirement fund 😉

  105. Well, applying a land mask to both land and ocean stations turned out to be a much larger problem than I originally anticipated. I did have a chance to look at the effect of zonal averaging on GISS Step 0 land/ocean data, however:

    Zeke, I’m not so sure why you are applying a land mask. Didn’t you know that pretty much all land is affected by ocean temps? Have you ever plotted land and ocean temps on the same graph and wondered why they have similar long and short term signals?

  106. And just looking at the last 20 years (global land/ocean):

    MikeC,

    I wanted to apply a land mask so that in 5×5 cells that have both land and ocean stations, each is weighted by the relative proportion of land in the cell. Otherwise land stations tend to be overweighted in overlapping cells, since there are more of them.
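
    For the curious, the per-cell blend itself is trivial. A minimal sketch (the function and argument names are mine; land_frac would come from the mask):

    def cell_anomaly(land_anom, ocean_anom, land_frac):
        # Combine land and ocean anomalies for one 5x5 cell, weighting by
        # the fraction of the cell's area that is land rather than by the
        # number of stations falling in it. Either component may be None
        # when that cell has no data of that type.
        if land_anom is None:
            return ocean_anom
        if ocean_anom is None:
            return land_anom
        return land_frac * land_anom + (1.0 - land_frac) * ocean_anom

    # e.g. a mostly-ocean cell with a few warm land stations:
    print(cell_anomaly(land_anom=0.9, ocean_anom=0.4, land_frac=0.2))  # 0.5, up to float rounding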

  107. Fair deal Zeke, would you be better off using just the land stations and weighting those grid cells? That way you do not bias your results by removing the land stations which are most affected by the oceans.

  108. MikeC:

    …AND THE AIRPORTS ARE IN ds 3206 IF MEMORY SERVES ME CORRECTLY… damn caps, but I was too lazy to retype it all

    I don’t buy it. I can find online feeds that are hourly for many US sites, no problem. I linked you a feed that was obviously hourly for Memphis.

    There is no logical reason to assume that when somebody says “mean” they mean (Tmax+Tmin)/2, UNLESS they come out and say precisely that. I believe this to be true for sites that only report Tmax/Tmin from instrumentation that only reports daily excursions.

    As to reading all of the links… I could probably give you 50 you haven’t read either. Not sure what the point of that snark is, especially if the articles AREN’T germane to this question.

  109. No way carrick… you need to break it all down, you are looking at a small signal riding on a larger signal… and while I am pretty sure that there will be no flooding or malaria outbreaks, you need to know everything that goes into the trend… besides, the recent cold outbreaks after years of global warming brain-washing are padding my retirement fund

    Order of effect applies here, and application of data is important too.

    If you’re looking for global temperature trends, then uncorrelated effects that are much smaller than your largest source of variability aren’t players. They get booted off the team bus.

  110. The (Tmax+Tmin)/2 versus Tave is a non-issue.

    If you like you can go hunt down the CA threads where we discuss that. There’s a file with 190 stations where both are calculated over a 12-year period.

    Since the data exists, people are free to go download it and write something up.

    Basically, you will see that if you are measuring the TREND OVER TIME, sampling the min and max and averaging them does not distort the trend.

    Zeke> I’ve been thinking we ought to make a list of the REAL problems with the record, as opposed to the spurious issues.

  111. carrick.

    http://www.john-daly.com/tob/TOBSUMC.HTM

    JerryB put this together. You’ve got hourly station data for a bunch of stations. He’s already done the calculation comparing Tave to (Tmax+Tmin)/2. Lots of years, lots of stations.

    then there is CRN data as well.

    I recall reading one paper that said that (Tmax+Tmin)/2 was an unbiased estimator… can’t recall which.

  112. In the context of monthly surface record discussions, Tmean is the average of Tmax and Tmin, unless otherwise stated. I don’t know what would give somebody an impression otherwise.

  113. In the context of monthly GHCN surface record discussions, Tmean comes from a variety of different methods and frequently is NOT the average of Tmax and Tmin, even in the U.S. In the end though, I don’t know why that matters so much. If a station gets its monthly mean from the average of all the noon temperatures, who cares, as long as that method is in use for the whole time series. Remember that we are using anomalies and not absolute temperatures.

  114. Carrot Eater:

    In the context of monthly surface record discussions, Tmean is the average of Tmax and Tmin, unless otherwise stated. I don’t know what would give somebody an impression otherwise.

    Because “mean” doesn’t mean average of max and min. That’s one reason.

    I’m checking with NOAA.

  115. torn8o:

    In the context of monthly GHCN surface record discussions, Tmean comes from a variety of different methods and frequently is NOT the average of Tmax and Tmin, even in the U.S. In the end though, I don’t know why that matters so much. If a station gets its monthly mean from the average of all the noon temperatures, who cares, as long as that method is in use for the whole time series. Remember that we are using anomalies and not absolute temperatures.

    This was my impression too: 1) that a variety of methods are used to compute it and 2) it doesn’t matter very much, though 3) some people claim that tmin gets affected more than tmax, but 4) that would affect the “true” tmean too.

  116. Mosh,

    As far as real problems go, this would be my list (in respective order of importance):

    1) Sensor changes
    2) Station moves
    3) TOBs
    4) UHI/microclimate changes

    USHCN does a decent job of dealing with the first three. As far as GHCN goes, who knows? Hopefully we will find that the upcoming v3 does a reasonable job correcting for the larger inhomogeneities automatically.

  117. Stephen Mosher:

    If you like you can go hunt down the CA threads where we discuss that. There’s a file with 190 stations where both are calculated over a 12-year period.

    If you could dig up that link, it would be appreciated.

    I looked through the Daly file… I admit I was in a hurry (a lot going on at work right now), but it looked to be related to time-of-observation bias, not comparing central tendency estimators.

  118. Zeke

    Hopefully we will find that the upcoming v3 does a reasonable job correcting for the larger inhomogeneities automatically.

    v2 was meant to, as well. I never really looked into how well it actually might do that, by testing with synthetic data.

  119. Carrot,

    Fair enough, I must admit having never really looked into GHCN v2 adjustments (apart from noting that their net effect was rather small).

    On an unrelated note, I just added in Chad’s land mask to my model and found similar results. I think I’ve found an elegant way to incorporate the land mask into land/ocean series, and I’ll play around with it a bit more tomorrow.

  120. Carrick

    Because “mean” doesn’t mean average of max and min. That’s one reason.

    If you want to be picky about it, any method of coming up with something called Tmean is only an approximation of the true mean. But you go with the jargon in use.

    It varies from country to country. Some average Tmax and Tmin. Some average observations at four or more times each day. Some do weighted averages of various sorts.

    But I think the average of Tmax/Tmin is the most common method.

    If you go digging in boring old WMO publications, you can find guidelines on these things.

    In the context of monthly GHCN surface record discussions, Tmean comes from a variety of different methods and frequently is NOT the average of Tmax and Tmin, even in the U.S.

    Yes… but I think Tmax/Tmin average is the single most common method. I could be wrong.

    In the end though, I don’t know why that matters so much.

    It matters if a given station changes its method. Then you have a TOB sort of inhomogeneity.

    If each station always uses the same method, it doesn’t much matter.

  121. (Off topic, but the original topic is long since resolved) I forget who, but somebody asked me for links once when I said Watts often gets all creepy and tries to out (or at least put out any information he can figure out about) any anonymous commenters who criticise him. It’s hard to search for comments on wuwt, but there’s a mild instance on a current sea-ice thread, with Watts giving somebody’s location. Meanwhile his numerous anonymous regulars get to carry on.

    You think some scientists get defensive when criticised… Watts gets pretty weird in this regard. He gets all worked up about the fact that somebody on the internet isn’t using their full name – as if that is keeping him from discussing the substance of a complaint. This was also the basis of Watts’ response to Tamino over the station drop thing – waah, he’s anonymous and waah he didn’t post any code.

    Speaking of which – so Lucia, what did Watts say about that? Watts seems impressed with himself for putting his full name on things he does. Well, that’s great, but that still leaves the matter of the level of scholarship therein.

  122. carrick,
    “I don’t buy it. I can find online feeds that are hourly for many US sites, no problem. I linked you a feed that was obviously hourly for memphis.”
    Memphis is not a USHCN station.

    “There is no logical reason to assume that when somebody says “mean” they mean (Tmax+Tmin)/2, UNLESS they come out and say precisely that. I believe this to be true for sites that only report Tmax/Tmin from instrumentation that only reports daily excursions.”

    It is stated on the page I linked you to. That is the USHCN v2 page.

    “As to reading all of the links… I could probably give you 50 you haven’t read either. Not sure what the point of that snark is, especially if the articles AREN’T germane to this question.”

    Those papers are everything that led to the USHCN v2; it’s the foundation, the building blocks, everything.

    “Order of effect applies here, and application of data is important too.
    If you’re looking for global temperature trends, then uncorrelated effects that are much smaller than your largest source of variability aren’t players. They get booted off the team bus.”

    Oh no, everything gets considered. It was one of the reasons they went to the automated algorithm in version 2; many of the events which led to discontinuities were not recorded in the station histories.

  123. Carrot Eater:

    Yes… but I think Tmax/Tmin average is the single most common method. I could be wrong.

    That’s historically true for sure, but only because they had thermometers that could track max/mins long before they had thermometers that could automatically log temperature over time.

    It’s my impression that people haven’t distinguished between (Tmax+Tmin)/2 and Tmean simply because the two don’t vary that much wrt each other.

  124. torn8to,
    “In the context of monthly GHCN surface record discussions, Tmean comes from a variety of different methods and frequently is NOT the average of Tmax and Tmin, even in the U.S.”

    Please provide the name and station number of one station in the US which fits that claim.

  125. MikeC:

    It is stated on the page I linked you to. That is the USHCN v2 page.

    Then provide the bloody quote.

    NM, I’m guessing you’re referring to this, but wonder if your copy/paste buttons are broken.

    First, daily maximum and minimum temperatures and total precipitation were extracted from a number of different NCDC data sources and subjected to a series of quality evaluation checks. […] Daily maximum and minimum temperature values that passed the evaluation checks were used to compute monthly average values

  126. carrick,
    “NM, I’m guessing you’re referring to this, but wonder if your copy/paste buttons are broken”

    Naw, just real lazy after a long day and a double cheeseburger and great big bowl of chili… I’m old too

  127. carrick,
    “That’s historically true for sure, but only because they had thermometers that could track max/mins long before they had thermometers that could automatically log temperature over time”

    They do it for uniformity… they actually had to work on that issue because of the worry that the new HGOs (in ASOS) were too sensitive.

  128. Carrick, the file on Daly (I linked to it) does have all the info you need… here, this is JerryB’s work, an old CA regular:

    http://www.john-daly.com/tob/TOBSUMC.HTM

    Format of the DAT files:

    One line for each hourly observation. On all lines: year, month, day, hour, temperature (in degrees Celsius, converted from degrees Fahrenheit and rounded to tenths). At midnight, and at several hypothetical times of observation, there will be seven additional numbers:

    min (minimum,low) temperature of the past 24 hours
    max (maximum,high) temperature of the past 24 hours
    (min+max)/2 (mean) of the past 24 hours
    average (smoothed) hourly temperature of the past 24 hours
    number of consecutive hourly observations in the past 24 hours
    hour of most recent occurrence of min temperature in the past 24 hours
    hour of most recent occurrence of max temperature in the past 24 hours

    Those descriptions need further qualification. “The past 24 hours” will include the observations at both the beginning and the end of those 24 hours, and so will include 25 observations unless some data are missing. The “average (smoothed) hourly temperature of the past 24 hours” uses half of the first, and half of the last, of those observations (plus all of the other 23 observations). The number of consecutive observations will usually be 25, and if it is not, some data are missing, and 24 hour periods that are missing data will not be used in the summaries. If the “hour of most recent occurrence” is 25, it indicates that that occurrence was the observation at the beginning of that 24 hour period (i.e. it was 24 hours old).

    Records look like so:

    A sample monthly summary:

    YR M F 24:00 06:00 07:00 08:00 09:00 16:00 17:00 18:00

    86 05 T 17.676 17.216 17.556 17.713 17.794 18.358 18.265

    86 05 A 17.743 17.599 17.601 17.609 17.617 17.679 17.691

    You would compare T and A.

    86 05 L 0 12 7 4 0 0 0 0 0
    86 05 H 1 0 0 0 0 7 7 5 3

    Source of hourly data: http://www.epa.gov/scram001/surfacemetdata.htm of which data from 190 stations were used, including 9 years of data for 171 stations, and 8 years of data for 19 stations.

    http://www.john-daly.com/tob/SUMDATC.HTM
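
    If anyone wants to play with those files, a line parser is short. A sketch under the format described above (the plain whitespace splitting is my assumption about the layout):

    def parse_dat_line(line):
        # every line: year, month, day, hour, temperature; lines taken at
        # midnight or at a hypothetical observation time also carry the
        # seven 24-hour summary numbers described above
        parts = line.split()
        rec = {"year": int(parts[0]), "month": int(parts[1]),
               "day": int(parts[2]), "hour": int(parts[3]),
               "temp_c": float(parts[4])}
        if len(parts) >= 12:
            rec["min24"] = float(parts[5])
            rec["max24"] = float(parts[6])
            rec["minmax_mean"] = float(parts[7])    # (min+max)/2
            rec["smoothed_mean"] = float(parts[8])  # smoothed hourly average
            rec["n_obs"] = int(parts[9])
            rec["hour_of_min"] = int(parts[10])
            rec["hour_of_max"] = int(parts[11])
        return rec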

  129. Carrick (Comment#44034) May 24th, 2010 at 10:29 am

    Ya, the horizontal fetch is key. Much as I disliked aspects of Parker’s UHI paper, the selection of airports made some sense after I considered the wind.

    Also, the waste heat argument is pretty lame. I suppose with hourly reports from a station, an hourly station close by, and landing times…

  130. Carrick (Comment#44043) May 24th, 2010 at 1:41 pm said:

    “Coastal sites are “special” because of the marine-land atmospheric boundary layer. Add in the fact you are surrounded by coastal mountains, and there are a lot of unusual micrometeorological effects for your area.
    But, here’s the issue. What happens when you anomalize each site and compare temperature trends between different sites around HB?
    How does the variability in temperature trends between sites compare to the variability at a given site?
    If the variability in temperature trend associated with weather at one site is large compared to the variability between sites, it’s just common sense you can neglect the multiple site issue in favor of the big driver on the variability in the measurement of climate, which is weather itself.”

    Good morning everyone.

    Okay. Coastal cities are special? Well yeah. “There are a lot of unusual micrometeorological effects for your area.” Yes, as I pointed out first. I don’t think Mt. Baldy is considered a “coastal mountain”, but ok…

    “What happens when you anomalize each site and compare temperature trends between different sites around HB?”

    What sites? How many? What locations? The surface station I mention reporting for my area isn’t in HB; and I don’t actually live in HB either. My house is in a different city zone, yet I am closer to downtown HB and the beach than my own address’s town hall is. And my town isn’t where the surface station is located either. That’s a completely different city zone too.

    As for “coastal sites are special”: the whole western side of California, where most people live, borders the Pacific Ocean. Some cities that are a few miles inland, like where my surface station is, or downtown Los Angeles, or Hollywood, aren’t considered “coastal”, are they? Yet you can get to a beach from all of them in just a few minutes. Disneyland is a 15 or 20 minute drive from my house on city streets. Do you think Disneyland is “on the coast”? Mt. Baldy is much farther away and much more inland from me than Disneyland, and you said “coastal mountains”.

    Anyway, I continue to believe you guys are blowing off vast, complicated areas of land as if they are nothing. In fact, you are treating the whole planet this way. I am just bringing up my own small yet complicated tiny, tiny sliver of California, one of the biggest states in this nation, in the NH, on this planet Earth, just to illustrate this.
    And you believe you’ve got all the correct representation down to a few sites and down to tenths of a degree, even with all the added people/urban changes to the land, without any errors (errors that can propagate)? I don’t think so.

    (BTW, besides the ocean and the “marine-land atmospheric boundary layer,” we also have an Island Effect here (called “IE”) because of Catalina. Look it up; as far as weather goes it could become complicated. ;))

  131. Re: MikeC – Eureka, CA data in GHCN prior to 1951. Search the inventory to get the station number.

  132. Liza:

    Okay. Coastal cities are special? Well yeah. “There are a lot of unusual micrometeorological effects for your area.” Yes, as I pointed out first. I don’t think Mt. Baldy is considered a “coastal mountain”, but ok…

    Yep. It is considered part of a “coastal range.” I’m not sure that “coastal mountain” is a term that gets used.

    Anyway, I continue to believe you guys are blowing off vast, complicated areas of land as if they are nothing. In fact, you are treating the whole planet this way. I am just bringing up my own small yet complicated tiny, tiny sliver of California, one of the biggest states in this nation, in the NH, on this planet Earth, just to illustrate this.

    Coastal is special everywhere… because it is not typical. The entire West Coast of the US (the part controlled by coastal physics) only represents 0.05% of the total area of the Earth.

    And you believe you’ve got all the correct representation down to a few sites and down to tenths of a degree, even with all the added people/urban changes to the land, without any errors (errors that can propagate)? I don’t think so.

    The question can be pretty easily framed mathematically. The results aren’t nearly as absurd as you suggest, and you suggest it without having done a lick of actual work to study the problem.

    You claim good math skills, so you should be able to write down the problem and examine it for yourself. Plenty of other people have done this and, having done the hard work, come to a different conclusion than the one you’ve reached.

    This is really one of those problems where no one will be able to convince you. So if it is that important, you’ll have to do this yourself.

    There is a right answer to the question you raise, it is addressable with currently available science, it has been studied extensively, and the result is at odds with your intuition.

    In the end, intuition is a lousy substitute for good science.

  133. Liza,

    > you guys are blowing off vast complicated areas of land as if they are nothing.

    You’ve brought this up more than once. But… nobody (here) is arguing that your own tiny sliver of California doesn’t feature complicated climate regimes and microclimates. Or that many other places don’t have their own complexities. In fact, every reader nods their head when you make that point, in words or with pictures.

    As an exercise, try flipping the problem around. (I don’t think you’ve done this.)

    You convene a meeting of people from… some geographical area. Your neighborhood? Huntington Beach? Orange County? California?

    You folks start talking about the weather. Someone observes that, “Compared to the winter of 2009, in my experience, the just-ended winter of 2010 sure was…”

    * Rainy (or dry)
    * Hot (or cold)
    * Sunny (or cloudy)
    * Windy (or calm)

    Most of the people in the room pipe up, “Yeah, it sure was!”

    But some dissent… “Naw, I’d say it was just the opposite!”

    And a few note, “Gee, I think it was just about the same!”

    So… if you wanted to explore the weather and compare winter 2009 to winter 2010, how would you go about it?

    * Maybe you embraced cultural relativism as a liberal arts major, and it’s served you well ever since. “Different people have different ways of knowing; no one perspective can be privileged over any other. Attempts to describe weather in quantitative terms are merely poorly-masked maneuverings by the patriarchy. They are constructing a pseudo-objective high ground in order to warrant their own power, by delegitimizing the experiential reality of marginal members of society. There is no ‘weather’ or ‘trend’ to speak of, only each individual’s subjective experience of it.”

    * Since everyone at your gathering is a weather enthusiast, they all have home weather stations. In fact, each of them is connected to Weather Underground, so the observation records are conveniently archived! Perhaps there’s a way to make use of all this information, and make a general statement about how it’s gotten wetter/hotter/sunnier/windier in your neighborhood/Huntington Beach/Orange County/California. But… assuming you’re not a deconstructivist ideologue… how would you go about the task?

    Earlier, you did allow that the depths of the last Ice Age were “colder” than the current interglacial. If some time in the past (e.g. 18,000 BP) can be a lot colder than now, then presumably some other time in the past (e.g. the MWP, 1910, last year) can be a little colder than now (or warmer, drier, wetter, etc.).

    Is Zeke on the right track by trying to measure such trends, quantitatively? I’m not asking whether his answer is exactly or even approximately correct–I’m asking whether his entire approach of “measuring” is legitimate or not, in your eyes.

    If it’s not, then it seems to me that we’ve all got to get used to cultural relativism–a bunch of people arguing round and round, endlessly.

    If Zeke’s physical-sciences philosophy is legitimate but the particulars of his implementation are faulty: then the concept of “progress” is possible.

    * He’s overreaching by graphing “land temperature” for the entire world (or the NH, or the continental US): What geography is small enough to be meaningful?

    * His source of data isn’t good enough: What source would be more reliable?

    * His methods of analysis aren’t adequate: What methods would be better?

    And–most of all–how could we test our claims (“not good enough!”) and Zeke’s claims (“is too good enough!”)?


    So the exercise would be: Can you re-frame your objections to Zeke’s work, so that the key elements of your narrative are expressed in the terms of a falsifiable hypothesis?

  134. “So the exercise would be: Can you re-frame your objections to Zeke’s work, so that the key elements of your narrative are expressed in the terms of a falsifiable hypothesis?”

    AMac,

    You continue to suffer from some logical fallacies:

    1. Appeal to Zeke. It’s up to Zeke to demonstrate that his work is meaningful. You can’t assume Zeke’s work is ‘good enough’ unless you verify it yourself. You haven’t done so, as far as I can tell.

    2. liza doesn’t have to assemble a narrative of climate science, if climate science can’t provide enough information to make such an accurate narrative possible. You are looking at the problem backwards.

    Andrew

  135. > Appeal to Zeke

    Is Zeke a lonely fellow, adrift on an isolated piece of intellectual flotsam?

    After enjoying Razib Khan’s essays on the fall of Rome, I’ve been reading histories of that era by Peter Heather and others. Sitting in a room in the Northeast US: what standards should I hold Heather to, in demonstrating that his work is meaningful? I wasn’t around in AD 400, and neither was he. Maybe this whole “Roman Empire” thingy was concocted by Gibbon to spur book sales?

    I haven’t run code on GISS, or audited archaeological digs in Wessex. Other people have. Who to trust? It’s a complex question. But I don’t see how it would be possible to go beyond simple variants of village life, if “DIY” was to be the hard-and-fast answer.

    >liza doesn’t have to assemble a narrative of climate science, if climate science can’t provide enough information to make such an accurate narrative possible.

    1. Liza doesn’t have to do anything she doesn’t want to.

    2. I’m not defending an amorphous “climate science,” I’m discussing the interpretation of Zeke Hausfather’s temperature-anomaly graphs, which was the topic of Liza’s comment, and of this thread.

    3. if, emphasis added.

  136. 2. I’m not defending an amorphous “climate science,” I’m discussing the interpretation of Zeke Hausfather’s temperature-anomaly graphs, which was the topic of Liza’s comment, and of this thread.

    AMac,

    Then place the same scrutiny on Zeke’s contributions as you do on liza’s. Do that, and I might shut up. 😉

    Andrew

  137. Thankfully I have folks like Jeff Id, Nick Stokes, and Chad to make sure my modeling isn’t too far off the mark. We have small methodological differences, but the results are largely in line with each other.

    Liza,

    As far as measuring temp goes, a fairly easy experiment would be to collect data from all the various weather stations near your house, calculate the anomalies for each, and see how well they are correlated.
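
    In code the whole experiment is only a few lines once the downloads are in hand. A sketch with synthetic stand-ins for the station series:

    import numpy as np

    def monthly_anomalies(temps):
        # temps has shape (n_years, 12); subtract each calendar month's
        # own mean so the seasonal cycle drops out
        return (temps - temps.mean(axis=0)).ravel()

    rng = np.random.default_rng(3)
    shared = rng.normal(0, 1.0, (20, 12))              # common regional weather
    stations = [shared + rng.normal(0, 0.3, (20, 12))  # plus local noise
                for _ in range(4)]

    anoms = [monthly_anomalies(s) for s in stations]
    print(np.corrcoef(anoms).round(2))  # off-diagonals: station-pair correlations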

  138. Andrew, the chances of you shutting up appear to be non-existent.

    But your saying hugely vague things does not advance matters. Zeke does some analysis, and the rest of us discuss and explore it in whichever ways that are interesting to us. If something doesn’t look quite right, we explore that.

    You, on the other hand, simply assert that Zeke’s work may not be meaningful. That adds nothing. You have to tell us, in what way do you think it is not meaningful.

    If somebody gives a seminar, and in the question period I say, “I don’t think this is meaningful”, and just stop there, the whole room will just stare at me. It would be expected that I explain what my objection is. If I responded to the staring by only saying, “well, it’s up to you to prove to me that it is meaningful”, that will only result in more stares – how does anybody know how to do that, when I never made clear my objection in the first place?

    Simply saying “I object” without being able to even articulate why: this makes you a denier, not a sceptic. But you seem OK with that label, so perhaps this is fruitless.

  139. I’ll tell you why it’s not meaningful…
    My house wasn’t even here until 1960. This graph starts at 1880.
    The 1880s here were farmland – then the pier (1914) was built to establish Huntington Beach (on the beach) as a resort.

    History page says:

    At the same time in 1914, an Americana (encyclopedia) salesman bought land from the Huntington Beach Company to subdivide into small lots and give away premiums with the purchase of their book sets. As land sales to individuals were slow, the land developers were delighted to be rid of the surplus land which was unsuitable for housing because of its deep gullies. Their relief probably turned to dismay when oil was discovered on the property, known as the “encyclopedia lots”, in 1920. Overnight, the composition of the community was changed.

    Housing of all kinds developed rapidly for the incoming population. A tent city was erected on the abandoned Methodist campground. Tiny cottages were built on 25 foot lots to house oil workers and their families. Second stories of commercial building were remodeled from office space to rooming houses for single laborers, and even barns and garages were converted into rental housing.

    Large homes had been built along the ocean front earlier, representing the choice residential neighborhood. Now this section expanded inland between 17th and 23rd streets. As the oil field behind the neighborhood became defined, speculators and residents realized that there was probably oil there too. Bowing to public pressure, the City Council agreed to allow drilling of this “town lot” area in April of 1926.

    Within a short time some 300 dwellings were moved, some as far as Fullerton, old-timers say, to make way for the oil rigs and production equipment.

    You saw my pictures right? The one with all the oil rigs on the beach?

    Not saying what Zeke is doing is “wrong”. My objection is not with the math. I am saying this math holds no meaningful information.

    What does this “trend” on your graph tell you about where I live over time? Or the whole of Southern California for that matter exactly?

    What do you do to the numbers to adjust from farms, to a town, to oil fields, to a town again, to a big city, to wall-to-wall urban sprawl? …buildings, streets, trees, lawns, freeways, houses back to back as far as the eye can see, all the way to the mountains? Is it one calculation or many to adjust for this?

    Did people back then even write down tenths of a degree? Can you show me, please? I know some people that didn’t write down “cents” in their checkbook register. They just rounded up to the nearest dollar.

    These graphs seem, to my eye, to go up around 1980. Pretty much when a “regular old person” like me started to use a computer too, instead of writing data down in logs (or check registers) by hand.

  140. “prove to me that it is meaningful”

    CE,

    It is the job of science and scientists to do this, FYI.

    This is why smart people as opposed to not smart should become scientists. It’s difficult.

    “Believe in it because scientists claim it” is as anti-science as you can get.

    Andrew

  141. Liza,
    Thank you for the thoughtful and well-written reply. So in short, you are worried that changes in the local environment (between agricultural, urban, rural, industrial, residential) at any place will make it difficult to detect with confidence any long-term climate trends. These environmental changes could have large and difficult-to-predict consequences on the measured temperature.

    That’s a reasonable starting place. But it isn’t a reasonable ending place. This is where one gets to work. With the data we have, we can try to explore these issues, and see what impact they might have. We will never get perfect answers, but we see where we can get. As it happens, you aren’t the first person to think of this, and so lots of people have tried looking at the problem in different ways. Including our own Zeke.

    One simple way of going about it is to identify rural stations, and look at their trends. A site that is rural now probably always was rural, and if it wasn’t (say, there used to be Detroit there, or there used to be some oil industry work there), you’d know about it.

  142. Why can’t I look at these graphs and see, as time goes by, humans being able to look at “the climate on Earth” in more detail as we advance our technology; like the difference between looking at an organism with your naked eye and looking at it under a microscope?

    Everything I’ve read about the LIA, including what Wikipedia says:

    The Little Ice Age (LIA) was a period of cooling that occurred after a warmer era known as the Medieval Warm Period. While not a true ice age, the term was introduced into scientific literature by François E. Matthes in 1939.[1] It is conventionally defined as a period extending from the 16th to the 19th centuries,[2][3][4] though climatologists and historians working with local records no longer expect to agree on either the start or end dates of this period, which varied according to local conditions. It is generally agreed that there were three minima, beginning about 1650, about 1770, and 1850, each separated by intervals of slight warming.

    Maybe all these graphs are showing me is time going by, more people in more places; and thus more detailed temp reconstructions (down to tenths of a degree) (because of advanced measuring devices and technology) of a natural interval of warm on Earth that’s been happening since the LIA!

  143. Andrew,
    You entirely missed the point. If you cannot even articulate what specific objection you have to some idea, theory or analysis, then we cannot even begin to talk.

  144. carrot eater (Comment#44118) May 25th, 2010 at 10:56 am
    Thanks back at you carrot.
    All I want to do is swim in my pool finally for Memorial Day!
    Apparently 30 yrs ago it was worth it for somebody to build a pool on this property. 🙂

    I think I know what Andrew is frustrated over.

    You say to me:
    One simple way of going about it is to identify rural stations, and look at their trends. A site that is rural now probably always was rural, and if it wasn’t (say, there used to be Detroit there, or there used to be some oil industry work there), you’d know about it.

    That’s exactly what I meant when I mentioned Canada, Asia and Russia upthread in regards to the whole NH. I said most of the data for the NH should come from those places. (They are rural, and I mentioned the water is frozen too. No water milling around causing havoc like my ocean.) Do you think what I said was true, as opposed to what everyone else thought? (I didn’t get the impression others were impressed with what I said! 😉 )

    I am off; have to do more prep in the back yard for a swim party BBQ that probably isn’t going to happen with the “swim” part (or so the weatherman says: high 64°F today, *maybe* 70s for the Memorial Day weekend).

  145. Liza,

    I agree with CE, #44116 was a well-written response. If you agree that it is possible to get useful information about the weather by measuring the temperature — and I think you do — then it’s possible to connect the dots between your perspective and Zeke’s posts. For instance, AFAICT, his approach doesn’t deny the importance of microclimates, or changing land-use patterns, or UHI, or temp averaging or rounding, or dropped stations. Instead, it provides an avenue for investigating these and other influences on the record.

  146. AMac,

    Well, only as far as those influences can be measured. We can run a set of stations by, say, urbanity and CRN rating, but the latter is only useful to the extent that CRN ratings are actually indicative of micro-climate.

  147. “After enjoying Razib Khan’s essays on the fall of Rome, I’ve been reading histories of that era by Peter Heather and others. Sitting in a room in the Northeast US: what standards should I hold Heather to, in demonstrating that his work is meaningful? I wasn’t around in AD 400, and neither was he. Maybe this whole “Roman Empire” thingy was concocted by Gibbon to spur book sales?”

    Sorry AMac,

    Believing in the Roman Empire (or not) has nothing to do with whether or not I should believe climate science squiggly lines.

    Andrew

  148. AMac,

    I read and loved L.Sprague de Camp’s fantasy books when I were a lad. 😉

    Andrew

  149. Torn8to,
    I’m away from home at the moment and all I have is this crappy Netbook which does not have my data downloads, which is why I asked for the station ID. However, I assume you are talking about

    Eureka, CA
    Latitude 40.8097, Longitude -124.1603
    Elevation 6.1 Meters

    NCDC Station ID 10001150
    NWS coop # 042910
    GISS ID 42572594000
    WMO ID 72594

    Oak Ridge has min and max temps going back to 1890. NCDC has free copies of the station originals going back to 1948 and for 3 bucks each they’ll sell you station originals going back to 1906, all with daily and monthly min, max and mean temps.
    At first I kinda giggled when you brought up pre-1951 data, thinking it should be no wonder. But they have all of that data. I’m now wondering about the source of the data you have; where it came from, do you understand it correctly, did you read the literature which accompanies that data, is it possible that the data is from one of the files where they combine data from different stations… there are a bunch of explanations… but one thing is for sure: NCDC has min, max and mean for that data going back to at least 1906… and NCDC runs the show on GHCN.

  150. Liza

    Maybe all these graphs are showing me is time going by, more people in more places; and thus more detailed temp reconstructions (down to tenths of a degree) (because of advanced measuring devices and technology) of a natural interval of warm on Earth that’s been happening since the LIA!

    The scientists are way ahead of you. Chapter 9 in the IPCC report, “Understanding and attributing climate change”:

    http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf

    Santer et al http://www.nature.com/nature/journal/v382/n6586/abs/382039a0.html

  151. Andrew_KY (Comment#44117) May 25th, 2010 at 10:41 am

    “prove to me that it is meaningful”

    CE,

    It is the job of science and scientists to do this, FYI.

    This is why smart people as opposed to not smart should become scientists. It’s difficult.

    “Believe in it because scientists claim it” is as anti-science as you can get.

    Andrew

    The scientists have supported their claims. The problem appears to be one of your being able to understand the evidence provided to support their claims. I have the same problem; a lot of it goes over my head when they start talking about the advanced physics and mathematics involved. That just means I am an average person. In such a situation, where the evidence is more complex than I can understand, the only rational approach is to defer to the recognised authority. When you go to a doctor, do you try to understand the exact chemical properties and effects a medication you are prescribed uses to achieve its intended goal, and exactly how those properties can cause side effects? I’ve asked a few times, and it doesn’t take long before I realise I have no idea what an ‘ion channel’ is. So I take the pill, and it usually works.

  152. bugs, I don’t think the climate impact is anywhere near as tight as you are making it out to be.

    The truth is they largely ignore what the physical science actually tells them is happening, especially if (joy of joy) things are “much worse than we expected :-(“.

    This is the only branch of science I know where data that is inconsistent with the model predictions is seen as confirmation of the underlying theory.

  153. Liza, When will you finally come to grips with the fact that the great big Pacific Ocean outside your doorstep is going to boil away and your neighborhood is going to be a vast desert wasteland by 2030?

  154. MikeC (Comment#44133) May 25th, 2010 at 4:09 pm
    Never! The ocean is still a few miles away. So, I am still waiting for the “Big One” to hit (a chunk of California is going to fall into the ocean don’t you know?!) so my house will really become beach front property!

  155. Carrick (Comment#44132) May 25th, 2010 at 4:05 pm

    bugs, I don’t think the climate impact is anywhere near as tight as you are making it out to be.

    The truth is they largely ignore what the physical science actually tells them is happening, especially if (joy of joy) things are “much worse than we expected 🙁 “.

    This is the only branch of science I know where data that is inconsistent with the model predictions is seen as confirmation of the underlying theory.

    The model predictions are not confirmation, but supporting evidence. I don’t know how you think science works, but IIRC, it’s about providing evidence that supports your case. If models were all they had, I would not be convinced. That is why the IPCC report provides multiple, independent, sources of supporting evidence.

  156. While you are waiting, learn how to resuscitate a dolphin, because according to a study in the Journal of Greenpeace they will be flopping helplessly on the sand by 2048

  157. MikeC (Comment#44136) May 25th, 2010 at 4:52 pm
    Got it covered! My step daughter is in her first year of jr college heading toward becoming some kind of a marine biologist. (she is certified for life saving also because she was a surf camp instructor for two summers) She will take up the cause! (LOL I have a story for everything!)

  158. bugs:

    The model predictions are not confirmation, but supporting evidence

    One is supporting evidence for the other.

    You like many others have put the cart in front of the horse.

    If we can’t explain it with the basic science, then all we can say is “we can’t explain it”.

    This is especially an issue because we know there are short-period climate fluctuations that the models simply don’t do a great job of “capturing”, and that these fluctuations can produce large apparent “warming signals”. The 2007 Arctic melt-off by any “reasonable person’s measure” was related to one of these fluctuations, yet at the time it was used as proof that “things were proceeding even faster than we anticipated”. Same goes with the Australian drought, and a dozen similar effects that you guys cherry-pick to “demonstrate” the severity of the effects of human interference with climate.

    The “multiple independent lines of evidence” are in many cases cherry-picked natural fluctuations. It is often the case that 10-year out-of-date results (where the supposed “proof” has long since “evaporated”) are juxtaposed against the “crisis du jour”.

    I know how to disentangle the hype from the truth. Do you?

  159. In April 2010, the GISTemp Land temperature anomaly is 0.64C at the 250 km smoothing and 0.80C at the 1200 km smoothing (using a 1901 to 2000 base period).

    The NCDC land temperature anomaly for the same period is 1.286C (1.395C in March).

    The 1951 to 1980 versus 1901 to 2000 base period shouldn’t make much difference because they are very close to Zero in both datasets.

    Can anyone explain why they are so different in April 2010 – more than 0.4C in the last few months?

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=4&sat=4&sst=0&type=anoms&mean_gen=04&year1=2010&year2=2010&base1=1901&base2=2000&radius=1200&pol=reg

    ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land.90S.90N.df_1901-2000mean.dat

    There have to be errors in the NCDC database.

  160. Bill, perhaps most of the difference was explained in this post… GISS does not weight for the difference in land surface area between the hemispheres.
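
    A toy illustration of how much that can matter (the anomaly values below are made up; the only real input is that roughly two thirds of the land area sits in the NH):

    nh_anom, sh_anom = 1.4, 0.7   # hypothetical April land anomalies, deg C
    nh_land_frac = 2.0 / 3.0      # roughly two thirds of land is in the NH

    equal_weight = (nh_anom + sh_anom) / 2.0
    area_weight = nh_land_frac * nh_anom + (1.0 - nh_land_frac) * sh_anom
    print(round(equal_weight, 2), round(area_weight, 2))  # 1.05 vs 1.17

    When the NH is running much warmer than the SH, the two conventions diverge by a tenth of a degree or more.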

  161. And anyway, any errors in the NCDC database are going to propagate through to GISS… the differences between GISS and NCDC are in the processing, not the source data.

    Carrick and bugs: I think specific examples of drought are an example where pop-attribution by media and activists gets ahead of the published literature. There always were droughts; there always will be droughts; that’s not the point. The point goes back to the loaded dice – droughts may become somewhat more frequent or severe.

  162. MikeC, both the SH and NH from the NCDC are substantially higher than the GISTemp Land number so that is not it.

    The NCDC’s land temperature anomalies can change by more than 1.5C from month to month. Someone needs to investigate these errors.

    The Ocean temperatures do not change by more than 0.15C from month to month yet the Land temperature numbers can change by 1.76C.

  163. Bill, can you put together a map with the same anomaly period, etc., as GISS?

  164. For 2010 GISS seems to be on a different planet compared to HadCRUT. When GISS ends up being the only product of the Big 4 declaring 2010 as the warmest year in history, how will that be explained?

  165. Carrick (Comment#44138) May 25th, 2010 at 6:01 pm

    bugs:

    The model predictions are not confirmation, but supporting evidence

    One is supporting evidence for the other.

    You like many others have put the cart in front of the horse.

    If we can’t explain it with the basic science, then all we can say is “we can’t explain it”.

    This is especially an issue because we know there are short-period climate fluctuations that the models simply don’t do a great job of “capturing”, and that these fluctuations can produce large apparent “warming signals”. The 2007 Arctic melt-off by any “reasonable person’s measure” was related to one of these fluctuations, yet at the time it was used as proof that “things were proceeding even faster than we anticipated”.
    Same goes with the Australian drought, and a dozen similar effects that you guys cherry-pick to “demonstrate” the severity of the effects of human interference with climate.

    We have basic science that explains AGW, radiative physics, and that is well understood. The short-term chaotic noise will never be predictable; the long-term climate, more so.

    The ice melt in 2007 was extraordinary, and worth a close look. http://psc.apl.washington.edu/ArcticSeaiceVolume/images/BPIOMASIceVolumeAnomalyCurrent.png This seems to indicate that 2007 was indicative of something more long term than short term.

    The Australian drought in terms of rainfall was not unprecedented; in terms of the lack of rain and higher temperatures, you get a lot less runoff. So, yes, consistent with AGW.

  166. Zeke, your discussion was monthly (April) but your graph was annual… if it’s not too much trouble, how do the monthlies look for the same time period?

  167. You also have to remember that NCDC releases its data as preliminary and will finalize everything within a few months.

  168. MikeC,

    The graph three posts up is the N. Hemisphere land April anomaly for each set for the past 50 years or so. It’s not showing an annual anomaly.

  169. bugs:

    We have basic science that explains AGW, radiative physics, and that is well understood. The short term chaotic noise will never be predictable, the long term climate more so.

    Not everybody agrees it’s chaotic (at least not long-term climate variability). And you don’t have to reproduce identical weather for it to be statistically identical climate-wise.

    The 2007 ice melt was extraordinary, but it was weather, not climate. The trend in loss of arctic ice over 30+ years is climate related (IMO), not weather related. It is predicted by the models.

    The problem with bringing up droughts is multifold: we are nowhere close to being able to distinguish between droughts from AGW (or even NGW) and droughts from shifts in ocean-atmospheric circulation patterns, the longest of which is nearly 60 years in length. Secondly, the model expectation is that average land precipitation increases, so more land becomes arable, not less (growing seasons are extended too). This is also confirmed.

    The dry belts expand at the same time, that’s true, but there’s this thing called “irrigation” that helps in adaptation to that. The world has had no major food shortages due to drought in developed nations. (The problems in developing nations are multifaceted; climate plays a back-seat role compared to war, rapid population growth and poor governance.)

  170. The dry belts expand at the same time, that’s true, but there’s this thing called “irrigation” that helps in adaptation to that.

    What if your irrigation is already fully developed and rainfall is going to be reduced by the dry belt? Because that is the situation.

  171. Zeke, here is the “monthly” NCDC temperature chart (your numbers are the annual averages).

    http://img39.imageshack.us/img39/6703/ncdcmontrend.png

    This is how it changes from month to month (Ocean temps are reasonably stable but the Land anomaly is far too variable for how the real climate works and for how big this dataset is supposed to be).

    http://img401.imageshack.us/img401/3200/ncdcmonthlych.png

    GISTemp and CRUTEM3 do not show the same level of variability, although there is certainly more than there should be.

    http://data.giss.nasa.gov/gistemp/graphs/Fig.C.lrg.gif

    http://hadobs.metoffice.com/crutem3/diagnostics/global/nh+sh/monthly.png

    I think there is a level of precision being ascribed here that simply does not exist. The global average Land temperatures do not change by 1.5C from average from month to month. If they do, then we should be increasing the error bars and rewriting the climate models because there is far more variability than believed. (Either that, or there are large, wide-ranging errors in the databases which are not being discovered.)
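
    A minimal sketch of how one could check this claim directly, assuming a whitespace-separated “year month anomaly” text file; the file name and column layout are hypothetical stand-ins, not the actual NCDC format:

      # Sketch: quantify month-to-month changes in a monthly land anomaly
      # series. Assumes "year month anomaly" rows; file name is hypothetical.
      import statistics

      anoms = []
      with open("land_monthly.txt") as f:
          for line in f:
              parts = line.split()
              if len(parts) >= 3:
                  anoms.append(float(parts[2]))

      # Successive month-to-month changes in the anomaly.
      deltas = [b - a for a, b in zip(anoms, anoms[1:])]
      print("std of monthly change: %.3f C" % statistics.stdev(deltas))
      print("max absolute monthly change: %.3f C" % max(abs(d) for d in deltas))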

  172. Zeke, here is the “monthly” NCDC temperature chart (your numbers are the annual averages).
    .
    The fact that Zeke’s chart does not show 12 points per year does not mean that he used annual anomalies. It means he used *just* April, as he has clearly stated twice above.
    .
    He provided the link to the data he used and you can look up the April values yourself:
    2010 4 1.4230
    2009 4 1.1114
    2008 4 0.9358
    2007 4 1.5613

    2000 4 1.6676
    .
    The global average Land temperatures do not change by 1.5C from average from month to month.
    .
    How much does it change by? How do you know that?
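
    A companion sketch of how the April-only series quoted above can be pulled from the same kind of “year month anomaly” file (file name and layout again hypothetical):

      # Sketch: extract a single calendar month (April) from a
      # "year month anomaly" file, as plotted in the April-only graph.
      april = {}
      with open("land_monthly.txt") as f:
          for line in f:
              parts = line.split()
              if len(parts) >= 3 and int(parts[1]) == 4:
                  april[int(parts[0])] = float(parts[2])

      for year in sorted(april):
          print(year, april[year])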

  173. Re: MikeC

    I was wrong about Eureka. That data was published after 1950, so the Bigelow corrections did not get applied. It also looks like NCDC caught and reversed those corrections for some other stations in GHCN, but not all. Abilene (72266) is corrected prior to 1951 but raw from 1951 on. Also Denver and, I think, Red Bluff. There are likely others. It takes a lot of cross-checking between GHCN, the station forms, and the original data source.

  174. Ron Broberg,

    I noticed the same thing, but I think he is talking about the anomalies between datasets…

    Bill,

    Can you please clarify what is changing from month to month? And if it’s between datasets, grid cells, or what?

    Zeke, sorry, I meant month to month, not April to April… if it’s not too much trouble, are you able to do that?

  175. Torn8to, it’s not that you got it wrong, it’s that you are double-checking your understanding and the possibilities… which means one thing… YOU ROCK!

    As for the datasets and how hard it is to track all that stuff, yup… there are how many datasets, all using one variation or another of either the same thing or portions of the same thing with their own adjustments… such a pain

  176. torn8o,
    “It also looks like NCDC caught and reversed those corrections for some other stations in GHCN, but not all.”

    The metadata says this station moved that year from its previously undisclosed location to a rooftop downtown before it was moved to its current location on the island. Quite often, they will take the records from an old station in the area and combine them with a replacement station or a station which was set up by the old weather bureau a year or so after the old one was shut down. Some of the datasets such as USHCN will combine these station records and call them one station, then use the FILNET program to estimate the missing data for the time between when the first station shut down and the next one started. (It is common because observers quit, die, etc.) But before they are combined, the records are actually kept in separate files as if they were different stations.
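
    A minimal sketch of the record-combining step described above, with a deliberately crude stand-in for the infilling; FILNET’s actual estimation procedure is more involved and is not reproduced here. Data structures and names are hypothetical:

      # Sketch: splice an old and a replacement station record, then fill
      # the gap between them from a neighboring station. The offset-based
      # infill is a crude stand-in for FILNET, not its actual algorithm.

      def combine(old, new):
          """old, new: dicts mapping (year, month) -> temperature."""
          merged = dict(old)
          merged.update(new)  # prefer the replacement station where both exist
          return merged

      def fill_gaps(record, neighbor):
          """Estimate missing months from a neighbor plus the mean offset
          computed over the months the two records share."""
          common = [k for k in record if k in neighbor]
          offset = sum(record[k] - neighbor[k] for k in common) / len(common)
          for k in neighbor:
              if k not in record:
                  record[k] = neighbor[k] + offset
          return record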

  177. NCDC is both warmer during the warm months and cooler during the cool months… anyone know what that’s about?

  178. Very warm/very cold months are usually due to warm/cold conditions over the land interior, particularly Eurasia during the January–March season (see for example the two warm spikes in January 2007 and March 2008). Obviously temperature variance is much stronger there compared to coastal regions and islands, so the main reason is again GISS interpolation over the ocean…

    The black line is GISTemp computed as an area average over the entire world; the red line is the same but with grid pixels over the ocean removed:
    http://img229.imageshack.us/img229/6916/giss.png
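
    A minimal sketch of the comparison gp2 describes: an area-weighted (cosine of latitude) mean over all grid boxes versus land boxes only. Array names, shapes, and the random stand-in data are hypothetical:

      # Sketch: area-weighted global mean with and without ocean cells.
      import numpy as np

      nlat, nlon = 36, 72                    # e.g. a 5x5 degree grid
      lats = np.linspace(-87.5, 87.5, nlat)  # grid-box center latitudes
      w = np.cos(np.radians(lats))[:, None] * np.ones((nlat, nlon))

      anom = np.random.randn(nlat, nlon)          # stand-in anomaly field
      is_land = np.random.rand(nlat, nlon) < 0.3  # stand-in land mask

      whole_globe = np.average(anom, weights=w)                  # "black line"
      land_only = np.average(anom[is_land], weights=w[is_land])  # "red line"
      print(whole_globe, land_only)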

  179. Zeke,
    O/T, but you suggested on another thread converting the ERSST 2×2 data to v2.mean format. That’s not so easy, because the dataset is 6 times bigger. For HADSST I just imported and transposed the whole thing, although it was near my memory max. With ERSST, they’ve had to split the data set for similar reasons.

    The task is doable by splitting the files further, but not with the same software.
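
    A minimal sketch of the memory-constrained approach Nick describes: stream the file in fixed-size chunks rather than loading and transposing it whole. The file names and the pass-through “conversion” are hypothetical placeholders:

      # Sketch: process a large gridded file in chunks to stay within memory.
      def read_in_chunks(path, rows_per_chunk=100000):
          chunk = []
          with open(path) as f:
              for line in f:
                  chunk.append(line)
                  if len(chunk) >= rows_per_chunk:
                      yield chunk
                      chunk = []
          if chunk:
              yield chunk

      with open("ersst_v2mean.out", "w") as out:
          for chunk in read_in_chunks("ersst_2x2.txt"):
              for rec in chunk:
                  out.write(rec)  # placeholder for the v2.mean conversion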

  180. bugs, then there’s this:

    climate model attribution

    That is as good as it gets, IMO. What it says is that, prior to 1980, we don’t need to invoke human forcings to explain temperature changes.

    Not surprisingly, Gavin and I agree on this:

    You always need a model of some sort.

    If you don’t have a model to explain the effect, that means you can’t explain the effect. What you guys miss (not Gavin) is that you can’t invoke “worse than we expected” as proof that we understand what we are talking about.

  181. Carrick, the other issue is the selection of models for attribution studies. It’s not the same set as for projections. They use ALL the models, good and bad, for projections (democracy of models); that gives you a wide spread that is hard to falsify. But for attribution, where you want a tight spread, they select a subset of models. That’s how I understand the description. So what you want to see is how that subset does on projections.

  182. Carrick (Comment#44178) May 26th, 2010 at 8:29 pm

    bugs, then there’s this:

    climate model attribution

    That is as good as it gets, IMO. What it says is that, prior to 1980, we don’t need to invoke human forcings to explain temperature changes.

    Not surprisingly, Gavin and I agree on this:

    You always need a model of some sort.

    If you don’t have a model to explain the effect, that means you can’t explain the effect. What you guys miss (not Gavin) is that you can’t invoke “worse than we expected” as proof that we understand what we are talking about.

    You make it a false dichotomy. It’s a matter of how skillful the models are, how well we can explain climate. Our explanation will never be perfect; that is the nature of science itself. This is not an exercise in engineering. The total ice volume is reducing, as predicted. There is not complete understanding of how easily ice can break up. I don’t see how that is a matter of comfort or reason for complacency.

  183. Every week I look at how the models which forecast ENSO are doing. Despite the fact that this area is the most observed on earth, the models miss more often than a blind man shooting at a flea at 1000 yards.

  184. bugs:

    You make it a false dichotomy. It’s a matter of how skillful the models are, how well we can explain climate. Our explanation will never be perfect; that is the nature of science itself. This is not an exercise in engineering. The total ice volume is reducing, as predicted. There is not complete understanding of how easily ice can break up. I don’t see how that is a matter of comfort or reason for complacency.

    I am not sure what the false dichotomy is here.

    We agree that the ice is decreasing, that this is what the theory suggests should happen.

    My point is pretty simple: Things that the models don’t predict should not be used as confirming evidence for that theory. It’s not a dichotomy, it’s just the difference between good science and very bad science.

  185. Stephen Mosher, there are at least two sides to this story.

    You should use the model(s) that most accurately characterize the effects you are interested in (or at least meet a minimum requirement of accuracy based on the quality of the data that are available).

    It’s certainly true that they have “kitchen sink” graphs where everybody’s plot gets included. For some things, like the graph above, you’re just trying to show that something is a robust feature of the models. That is useful information.

    However, there isn’t any “defined accuracy” implied by aggregating models. The most accurate models trump the least accurate ones, and other than considering systematic biases between model choices, I think there’s little else to be learned.

  186. MikeC:

    Every week I look at how the models which forecast ENSO are doing. Despite the fact that this area is the most observed on earth, the models miss more often than a blind man shooting at a flea at 1000 yards.

    It remains to be seen how much predictability there is in ENSO, but the models clearly lack the fidelity to resolve these features. You’d probably need better than 50 km × 50 km resolution to accurately depict the physics associated with this process.

  187. Carrick (Comment#44182) May 26th, 2010 at 9:58 pm

    I am not sure what the false dichotomy is here.

    We agree that the ice is decreasing, that this is what the theory suggests should happen.

    My point is pretty simple: Things that the models don’t predict should not be used as confirming evidence for that theory. It’s not a dichotomy, it’s just the difference between good science and very bad science.

    I don’t think I would be wrong in guessing that the rate at which long term sea ice can break up is one of the less well understood aspects of AGW. What it indicates is that the better understood aspects of AGW are proceeding as expected, but there is the potential for it to be worse than expected.

  188. What you guys miss (not Gavin) is that you can’t invoke “worse than we expected” as proof that we understand what we are talking about.
    .
    I’m not sure who/when/what … but I suspect that ‘worse than we expected’ is not invoked as ‘proof of understanding’ so much as ‘risks in tinkering with systems we don’t understand.’ If things are slowly changing more or less in accord with models, that bolsters confidence that the system is more or less well modeled. If there are rapid changes occurring outside of modeling, that suggests that the risks of tinkering with the climate system may be greater than we know. If nothing else the envelope of uncertainty increases, and while a wide uncertainty includes scenarios where nothing much happens, it also includes scenarios like Huber 2010.

  189. bugs:

    I don’t think I would be wrong in guessing that the rate at which long term sea ice can break up is one of the less well understood aspects of AGW. What it indicates is that the better understood aspects of AGW are proceeding as expected, but there is the potential for it to be worse than expected.

    While it’s true it “could be worse”, that’s just speculative, meaning there is absolutely no confidence in the attribution of the effect to any particular cause.

    The issue here is that we pretty much know why the ice loss accelerated starting in 2005 (shifts in ocean current patterns). If the ocean current patterns shift back, we could potentially be in for a long term period of “rebuilding” (wrong slope).

    A similar view from Michael Tobis.

    I think the take home message here is climate alarmism undermines the science. If people are getting pushback from scientists, it’s not because they’re being “unscientific” but because the alarmists are.

  190. The issue here is that we pretty much know why the ice loss accelerated starting in 2005 (shifts in ocean current patterns).

    .
    I didn’t know this (there’s a lot I don’t know). It sounds interesting. Can you lead me to some more info on this, Carrick?

  191. climate alarmism undermines the science
    .
    That’s just silly. ‘The science’ is neither bolstered nor undermined by public opinion or political debate. Science has its own rules for advancing or discounting scientific opinion.

  192. Ron Broberg (Comment#44192) May 27th, 2010 at 8:34 am
    Clearly you haven’t worked as a government-employed scientist.

  193. Mosher:

    Carrick, the other issue is the selection of models for attribution studies. It’s not the same set as for projections. They use ALL the models, good and bad, for projections (democracy of models); that gives you a wide spread that is hard to falsify. But for attribution, where you want a tight spread, they select a subset of models. That’s how I understand the description. So what you want to see is how that subset does on projections.

    Yes, and how those models do on simulations with paleodata would be nice to see too. With the most recent findings of negligible solar irradiance changes over centuries, we would no doubt see that the models agree on an essentially flat global temperature for the past couple of millennia (an extended version of the flat ‘only natural forcings’ IPCC graph that Carrick linked to). You would get to see that a warm MWP would falsify those models.
    What you have in the IPCC report are ‘kitchen sink’ (thanks Carrick) graphs of simulations, some with outdated paleo data (irradiance).

  194. Liza:

    Clearly you haven’t worked as a government-employed scientist.

    Well stated.

    To the degree that the science is immune from public scrutiny, what Ron says is true. However, publicly funded research is 100% driven by government policy, which in turn gets affected by public perception.

    Alarmist rhetoric that gets contradicted by later events can have a huge effect on public perception of the credibility of the research, and in turn affect the amount of funding and direction of the research. (Funding agencies have a huge role in where the focus of the research is. And they very much are affected by current administration policy.)

  195. Niels:

    You would get to see that a warm MWP would falsify those models.

    Yes and no.

    If the warming of the MWP was due to shifts in atmosphere-ocean oscillations, and this shift is one that is “known” not to be included in the model, it would only confirm that the model couldn’t explain it. But logically that forcing is a different thing than CO2 forcings.

    So you could accurately predict CO2 forcings and not accurately predict e.g. the PDO, and the model could still be a useful tool for making policy decisions on fossil fuel usage, for example.

    (There are some caveats to this that I don’t have time to relay.)

  196. OK, I came late to this discussion, but I followed a previous discussion where Lucia, Zeke, Mosher and others showed that the station dropout doesn’t affect the end result of the temp calculations if all of the stations have the same trend (or close enough to not matter). Is this discussion indicating that we now know that all of the stations in fact DO have the same trend, including the ones that were “dropped”? Or will that have to wait for the USHCN.v3 analysis? Most of what relates to my question was from Mosher’s comment #43984 and before.
    Thanks.

    Carrick, I’m not aware that it is acknowledged by the IPCC that atmospheric and ocean oscillations like the PDO could have a large effect and cause an MWP. I don’t think it is. Their effect on global temperature is believed to be small.

    A warm MWP would not only confirm that at least one natural factor is missing in the models. It would confirm that an _important_ natural factor is missing. And it would remove trust in attribution studies if we know that the ‘natural forcings only’ runs are likely wrong.

    Would you use models that are known not to account for the most important natural climate factors for projections and policy decisions?

  198. Zeke:

    Thanks for the clarification, and thanks for the work itself (and to Lucia, Mosher, Roman, and others). It’s nice to have people around who are willing to do this kind of thing–the math is beyond me, and probably always will be, but I can usually get the concepts if the explanation is clear.

  199. Carrick, I’m not aware that it is acknowledged by, e.g., the IPCC that atmosphere-ocean oscillations like the PDO could have a large effect and cause an MWP. I don’t think it is. Effects on global temperature are believed to be small.

    A warm MWP would not only confirm that at least one natural factor is missing in the models as you say. It would show that an _important_ natural factor is missing. And it would remove trust in attribution studies if we know that the ‘natural forcings only’ runs are likely wrong.

    Sure, CO2 could still be an important climate driver – or not.

  200. Bill S, not sure I fully understand your question.

    Every bit of evidence we have (the understanding of the maths, the tests with fewer or more stations) confirms the assertion that station drop will only impact the bias if the dropped stations have trends which deviate substantially from the norm. In short, you can’t conclude anything adverse about drops in stations merely by looking at the number of stations dropped, or whether they were at high altitudes or high latitudes. I suppose somebody could make an oddball argument that we have to wait for GHCN 3 to absolutely nail this down, but the prospect of GHCN 3 changing the answer is next to nil.
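
    A minimal sketch of the kind of test behind that assertion: synthesize stations sharing one trend plus noise, drop half at random, and compare trends. All numbers here are illustrative:

      # Sketch: station dropout does not bias the trend when the dropped
      # stations share the network's trend. Synthetic, illustrative data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_stations, n_months = 1000, 600
      t = np.arange(n_months) / 12.0                         # time in years
      data = 0.02 * t + rng.normal(0, 0.5, (n_stations, n_months))

      full = data.mean(axis=0)                               # all stations
      sub = data[rng.random(n_stations) > 0.5].mean(axis=0)  # ~half dropped

      print("trend, all stations: %.4f C/yr" % np.polyfit(t, full, 1)[0])
      print("trend, after dropout: %.4f C/yr" % np.polyfit(t, sub, 1)[0])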

  201. Niels A Nielsen (Comment#44230) May 27th, 2010 at 3:31 pm

    Carrick, I’m not aware that it is acknowledged by, e.g., the IPCC that atmosphere-ocean oscillations like the PDO could have a large effect and cause an MWP. I don’t think it is. Effects on global temperature are believed to be small.

    Temperature, for sure; climate, not so much. A cycle brings you back to where you started.

  202. Carrick (Comment#44199) May 27th, 2010 at 10:43 am

    Liza:

    Clearly you haven’t worked as a government-employed scientist.

    Well stated.

    To the degree that the science is immune from public scrutiny, what Ron says is true. However, publicly funded research is 100% driven by government policy, which in turn gets affected by public perception.

    Curiously, the science has stayed consistent across changes of government.
