Steig’s Antarctica: Part Two

Earlier I had said that I didn’t think Harry would make much of a difference to Steig’s results. I added the caveat that I had concerns about the paper – they just weren’t with Harry. This post describes a portion of them.

Supporting files for this post: the R SCRIPT; ant_recon_names1 (save this as "ant_recon_names.txt"); and table-s1-inconsistencies.

Steig’s Antarctica Part Two: AVHRR vs. AWS Reconstructions

My understanding of Steig’s analyses:

  1. Reconstruction using RegEM with the manned surface station data for pre-1982 and processed AVHRR data (infrared satellite measurements) for post-1982 times (ant_recon.txt). This is the main reconstruction.
  2. Reconstruction using RegEM with 15 of the 42 manned surface stations for pre-1982 and processed AVHRR data for post-1982 times (reconstruction data not provided by Steig).
  3. Reconstruction using PCA with manned surface station data for pre-1982 and processed AVHRR for post-1982 (reconstruction data not provided by Steig).
  4. Reconstruction using RegEM with the manned surface station data for pre-1980 and AWS data for post-1980 times (ant_recon_aws.txt).

The purpose of #2 seems to be an effort to show that the results are not overly sensitive to the stations used for the reconstruction. The purpose of #3 seems to be an effort to show that the results are independent of the method (i.e., both RegEM and PCA reconstructions yield similar results). The purpose of #4 is to show that the late period (1982-2006) results can be confirmed by AWS data.

The reason for these is to show that processing the AVHRR data in the manner described in the paper has physical validity. #2 through #4 provide the benchmarks against which the main reconstruction is compared, inasmuch as ground thermometers measure surface air temperature directly while the infrared satellite data is only a proxy for it.

Without the reconstruction data, I have no means of analyzing #2 and #3, and Steig did not spend much time discussing those results; he provided some maps that visually show a degree of correlation with the main reconstruction. However, Steig did provide the reconstruction data for #4, and he spent the most time discussing #4 in the Supplemental Information. From this, we might infer that the AWS reconstruction provided the benchmark that best matched the TIR data. With that in mind, let's look at how well the satellite reconstruction holds up against the AWS reconstruction.

In order to compare, we need to find the grid location in the full reconstruction (ant_recon.txt) that is closest to each AWS station. This is accomplished by taking the latitude difference (in minutes) and multiplying by 1.852 km, and the longitude difference (in degrees) and multiplying by (pi/180)*cos(latitude)*6398 km. Square the two, sum them, take the square root, and sort the distances. The ant_recon_names.txt file has these collated: station name, closest full-recon column number, and distance in km.
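For reference, here is a minimal R sketch of that search (the function and object names are mine, not Steig's):

nearest_grid <- function(st_lat, st_lon, grid_lat, grid_lon) {
  # 1.852 km per minute of latitude; longitude scaled by cos(latitude)
  dlat <- (st_lat - grid_lat) * 60 * 1.852
  dlon <- (st_lon - grid_lon) * (pi / 180) * cos(st_lat * pi / 180) * 6398
  dist <- sqrt(dlat^2 + dlon^2)
  c(column = which.min(dist), km = round(min(dist), 1))
}

Calling this with a station's coordinates and the vectors of grid-cell coordinates returns the closest reconstruction column and its distance.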

SOURCE ANALYSIS

Subtitle: What made it into the paper (and what didn’t)?

The first thing we note is that, of the 63 AWS stations used in ant_recon_aws.txt, only 26 appear in Table S1. The caption explains that the remainder either had insufficient data or demonstrated poor verification skill.

TABLE S1 EXCLUSION CRITERIA

According to the caption, stations that do not have enough calibration data (less than 40% complete) or that demonstrate insufficient verification skill are not included. Sidebar: though this seems reasonable on the surface, there is no discussion of why the cutoffs were set at these levels. If results are to be discarded, some justification ought to be provided; at the very least, the results that barely missed the cut should be mentioned, to allow the reader to judge how sensitive the final results are to the cutoff criteria. However, a closer examination shows the completeness criterion to be applied loosely at best:

  1. Stations are included that have data less than 40% complete (Enigma Lake, LGB20, LGB35, Larsen Ice Shelf, Nico).
  2. Stations are excluded that have data greater than 40% complete (Byrd, Mt. Siple, pre-correction Harry).

Just to cover all bases, since Steig stated that there were both early (1982-1994.5) and late (1994.5-2006) calibration/verification periods, we should check whether the caption refers to being >40% complete in just the early period, just the late period, or both. No matter how we look at it, some stations that do not meet the criterion are included, and some that do meet it are excluded (verification skill notwithstanding).

Early period, less than 40% complete, included in Table S1: Cape Ross, Elaine, Enigma Lake, LGB20, LGB35, Larsen Ice Shelf, Marilyn, Nico, Pegasus South, Tourmaline Plateau. Of special note, LGB35 has a total of 6 months of data in this period and 3 other sites are less than 20% complete.

Early period, greater than 40% complete, excluded from Table S1: Byrd, Uranus Glacier. 11 other excluded stations are greater than 20% complete – in other words, more complete than many stations that were included.

Late period, less than 40% complete, included in Table S1: D_10 (25% complete).

Late period, greater than 40% complete, excluded from Table S1: 17 stations. Another 10 excluded stations were more complete than D_10.

A complete listing of the stations, including the number of months present for the overall period and sub-periods is contained in “Table S1 Inconsistencies.txt”.
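As a cross-check, the completeness numbers behind that table can be reproduced along these lines. This is only a sketch: 'aws' is assumed to be a months-by-stations matrix of the AWS data with NA for missing months, and 'yrs' a matching vector of decimal years.

# Fraction of months with data in a given sub-period, per station.
completeness <- function(x, yrs, from, to) mean(!is.na(x[yrs >= from & yrs < to]))
early <- apply(aws, 2, completeness, yrs = yrs, from = 1982, to = 1994.5)
late  <- apply(aws, 2, completeness, yrs = yrs, from = 1994.5, to = 2006)
# Stations under the stated 40% cutoff in each sub-period:
names(which(early < 0.4))
names(which(late < 0.4))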

MISSING DATA POINTS

Since RegEM does not replace actual data with imputed values, we should be able to tell if any data from any time series was not included by subtracting the reconstruction from the actual values. Because the reconstruction is in anomalies, we first must make anomalies out of the actual data.

The easiest way to do this is simply to find the average value for each calendar month where data is present and subtract it. Unfortunately, when this is done, the differences between our anomalies and the reconstruction show persistent nonzero offsets.

This means that – for whatever reason – Steig did not use all available data to calculate the anomalies. These offsets will result in noise when doing point-by-point comparisons between actual and reconstructed data.

To fix this, instead of using the monthly averages for the anomalies, we will subtract RECON from ACTUAL and use the mode of the differences for each calendar month as the baseline. That method yields anomalies that match the reconstruction's baseline exactly.
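In R, the approach looks something like the sketch below, where 'actual' and 'recon' are one station's raw series and matching reconstruction column; I assume the series starts in January, and the rounding is only there so that repeated offsets bin together.

# For each calendar month, ACTUAL - RECON is constant wherever real data
# was retained, so the most frequent difference (the mode) recovers the
# monthly baseline used in the reconstruction.
month_mode <- function(x) {
  x <- round(x[!is.na(x)], 2)
  as.numeric(names(which.max(table(x))))
}
mon     <- rep_len(1:12, length(actual))
offsets <- tapply(actual - recon, mon, month_mode)
anoms   <- actual - offsets[mon]   # anomalies on the reconstruction's baseline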

Now that we are using the same anomalies as the reconstruction, let’s take a quick look at how the READER data compares to the AVHRR reconstruction:

Some of the stations have very little data, so the trend estimates carry substantial error. However, it does appear that the READER data does not match the AVHRR reconstruction well. For now, we'll leave it at that; we may look at the actual data vs. the AVHRR reconstruction in more detail later.

Next, we want to see whether any data is missing. Because our anomalies now match Steig's exactly, this is simple: we just look for any points on a plot of ACTUAL – RECON that do not lie on zero.
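A sketch of that check, continuing with the objects from the previous sketch (the 0.05 deg C tolerance is my guess, chosen only to absorb rounding):

# Nonzero residuals mark months that were dropped before running RegEM.
resid   <- anoms - recon
dropped <- which(!is.na(resid) & abs(resid) > 0.05)
data.frame(month = dropped, actual = anoms[dropped], recon = recon[dropped])

After doing this, we come up with the following list: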

Stations with a significant number of missing points: Bonaparte Point (~half) and D_47 (14 points).

EDIT: The issue with Bonaparte Point arises because the data set at BAS changed between the time Steig did his analysis and the time I downloaded the sets. BAS did not post a correction notice for this.

Stations with 4 – 10 missing points: Cape Denison, Hi Priestley Gl, and Marble Point (which appears in S1).

Stations with 1 – 3 missing points: Butler Island, Clean Air, D_10, D_80, Dome CII, Elaine, Ferrell, GEO3, Gill, Harry, Henry, LGB20, Larsen Ice Shelf, Manuela, Marilyn, Minna Bluff, Mt. Siple, Nico, Pegasus North, Pegasus South, Port Martin, Possession Island, Racer Rock, Relay Station, Schwerdtfeger, and Whitlock.

That is a total of (2 + 3 + 26 =) 31 stations where some data points were excluded from the analysis. Of note, the excluded Harry, Elaine, and Gill points are present in both the pre- and post-correction READER sets, so the corrections did not affect them. Additionally, the excluded Ferrell points were all late-period points that were significantly cooler than earlier ones. The 2005 Clean Air data that was excluded was later removed by BAS as errant.

While there is nothing unusual about removing points that seem to be errant, this highlights the importance of Steve McIntyre's quest to have data provided AS USED. In some cases with the AWS stations, the entire series has ~20 data points; removal of 2 points is removal of 10% of the data, which could quite conceivably have a significant impact on verification skill for that station.

The other thing this indicates is that Steig did go through the raw data and remove outliers, despite having implied at RC that such a quality-control task was too arduous to be practical. Almost half of the station data sets were modified in some fashion from what is posted on the READER site – including Harry. Rather than assume that Steig was deliberately manipulating data to achieve a desired result, a more likely and reasonable assumption is that he succumbed to the all-too-human tendency not to look quite as critically at data that fit his view of warming in Antarctica. This, too, highlights the importance of posting data sets AS USED – especially prior to publication, which gives the researcher an opportunity to correct mistakes and oversights before the words are permanently emblazoned on paper.

Sidebar: my personal opinion is that there is nothing untoward here – the Ferrell points do indeed look odd and I probably would have excluded them myself. I haven’t yet looked at the rest of the actual data to see if the removed points truly seem to be outliers. Regardless, the number of removed points is small, and while it may affect verification scores for some stations with little data, I seriously doubt it affects the overall reconstruction in any observable way.

TABLE S1 vs. S2

Since Byrd, pre-correction Harry, and Mt. Siple do not appear in Table S1 yet had more than 40% of their data complete, they must have failed to show sufficient verification skill. However, all three (along with Siple) are later used in Table S2 to show good correlation to the 15-predictor reconstruction. If they could not show sufficient verification skill within their own reconstruction, then comparing them to a different reconstruction is meaningless. Pulling out pieces that DO NOT pass the verification criteria and using them to show a correlation is tantamount to admitting that the verification criteria have no power.

The other obvious question is why other stations were not included in the comparison. Elaine, Gill, Lettau, and Schwerdtfeger all passed verification and are in the same area as Harry and Byrd. For the peninsula, Uranus Glacier has more data over a longer period than Siple. The reader is left to wonder at the reasons behind the station selection.

TREND ANALYSIS

Subtitle: What the AVHRR/AWS comparison tells us (and what it doesn’t)

Let us start this off with a couple of graphs: on the top, the 1957-2007 trends for the 26 AWS stations in Table S1; on the bottom, the 1957-2007 trends for the corresponding grid points in the AVHRR reconstruction. (Note that there is a typo in the caption for Table S1: it says the listed trends correspond to the full (AVHRR) reconstruction, but they do not – the trends are for the AWS reconstruction.)

They look pretty similar. The AWS data has a mean trend of 0.172 deg C/decade and the AVHRR data a mean trend of 0.127 deg C/decade – fairly close. Let us do a paired t-test on the station-by-station trend differences:

 t = 2.0415, df = 25, p-value = 0.05189
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.0003933995 0.0895627379
sample estimates:
mean of the differences
 0.04458467

Note that the 95% confidence interval includes zero, so we cannot say at the 95% confidence level that there is a statistically significant difference in the means.
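For reference, all of the trend and t-test numbers in this section can be reproduced along these lines. This is a sketch, not Steig's code: 'aws_rec' and 'avhrr_rec' are assumed to be months-by-stations matrices holding the matched AWS-recon and AVHRR-recon series.

# Least-squares trend in deg C/decade for each column, then a paired
# t-test on the station-by-station trend differences.
dec_trend <- function(y, yrs) 10 * coef(lm(y ~ yrs))[2]
yrs      <- seq(1957, by = 1 / 12, length.out = nrow(aws_rec))
aws_tr   <- apply(aws_rec, 2, dec_trend, yrs = yrs)
avhrr_tr <- apply(avhrr_rec, 2, dec_trend, yrs = yrs)
t.test(aws_tr, avhrr_tr, paired = TRUE)
# The period splits below reuse the same code on row subsets, e.g. pre-1980:
pre <- yrs < 1980
t.test(apply(aws_rec[pre, ], 2, dec_trend, yrs = yrs[pre]),
       apply(avhrr_rec[pre, ], 2, dec_trend, yrs = yrs[pre]), paired = TRUE)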

Just for fun, let’s look at all 63 stations:

AWS mean: 0.138 deg C/decade. AVHRR mean: 0.125 deg C/decade. T-test:

 t = 0.7398, df = 62, p-value = 0.4622
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.02202794 0.04791103
sample estimates:
mean of the differences
 0.01294154

An interesting thing happened: the AVHRR mean trend is relatively insensitive to the inclusion/exclusion of grid points, while the AWS mean trend is highly sensitive to station inclusion/exclusion. This is one more indication that justification for the choice of cutoff points should be required.

Now let’s dig a little deeper. We should remember that the AWS data only applies after 1980 and the AVHRR data only applies after 1982. Prior to that the reconstruction is driven by the manned surface stations for both reconstructions. So if we intend to use the AWS reconstruction to show concordance with the AVHRR reconstruction, we would be better served to break up the trends into pre-1980 and post-1982 buckets:

In the pre-1980 period, the AWS recon (mean: 0.209 deg C/decade) shows much greater warming than the AVHRR recon (0.117 deg C/decade). To see how robust the difference in means is, let us do a paired t-test on the trends:

 t = 3.0137, df = 25, p-value = 0.005843
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.02924094 0.15547439
sample estimates:
mean of the differences
 0.09235766

Note the small p-value, and that this time the confidence interval excludes zero.

Now let’s look at post-1982:

The AWS recon (mean: 0.0078 deg C/decade) shows much less warming than the AVHRR recon (0.260 deg C/decade). Our t-test tells us:

 t = -6.35, df = 25, p-value = 1.202e-06
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.3338020 -0.1703029
sample estimates:
mean of the differences
 -0.2520525

Curiouser and curiouser.

Just to check, let's look at the post-1982 trends for all 63 stations (not just the ones in Table S1):

The AWS mean is 0.039 deg C/decade and the AVHRR mean is 0.255 deg C/decade. Our t-test says:

 t = -8.8007, df = 62, p-value = 1.634e-12
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.2660813 -0.1675805
sample estimates:
mean of the differences
 -0.2168309

Pretty much the same as the 26 station test.

While the AWS recon and AVHRR recon have a similar 1957-2007 linear trend, how they get from 1957 to 2007 is wholly different: the AVHRR shows fairly steady warming throughout, while the AWS shows strong warming to 1980 and a flat trend afterward. Our t-tests indicate that in both the early and late periods, the trend data come from different populations.

So now we ask ourselves, does the AWS reconstruction provide any meaningful “reality check” on the AVHRR reconstruction? The statistics tell us that it does not.

CORRELATIONS

The last thing we will look at (for the moment) is the correlation between the data sets. To start, let us look at a scatterplot of the AVHRR grid points vs. the 26 AWS stations in the AWS recon:

t = 128.6379, df = 15598, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.7097751 0.7250059
sample estimates:
 cor
0.7174762

At first glance, there appears to be a decent correlation between the two sets, with the range of the AWS set about twice that of the AVHRR set. Now let's look at the period from 1957-1980:

t = 154.6585, df = 7798, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.8628489 0.8737656
sample estimates:
 cor
0.8684124

The correlation here appears much stronger. Now 1980-2007:

t = 64.8594, df = 7824, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.5767256 0.6055481
sample estimates:
 cor
0.5913256

The correlation is much weaker. In fact, as time progresses, it continues to degrade. Here's 2000-2007:

t = 28.1629, df = 2624, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.4518528 0.5106172
sample estimates:
 cor
0.4817765

Note that the correlation visually seems to be dominated by a few series; I have not yet had time to determine which ones. Additionally, none of this takes into account the autocorrelation reported at CA, which was significant (see Roman's Deconstructing thread).
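For completeness, the correlation outputs above match what R's cor.test produces when the matched series are pooled into two long vectors; a sketch, reusing the assumed matrices from the trend sketch:

# Stack the 26 matched series column-wise and correlate the stacked vectors.
# Changing the window reproduces the 1957-1980, 1980-2007, and 2000-2007 runs.
win <- yrs >= 1957 & yrs < 1980
cor.test(as.vector(aws_rec[win, ]), as.vector(avhrr_rec[win, ]))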

CONCLUSION

(Sort of)

Superficially, the AVHRR recon and AWS recon appear to match reasonably well. The match is driven primarily by the 1957-1980 timeframe – where neither AWS data nor AVHRR data is present – and it degrades significantly post-1980. This result should not be surprising, since the 1957-1980 timeframe uses manned surface station data in both reconstructions.

This does not mean that Steig’s AVHRR reconstruction is an inaccurate picture of Antarctica, but it does mean that the provided benchmark does not match within the certainty needed to call the AVHRR trends accurate. Since they differ substantially from each other, one or the other (or both) must differ substantially from reality. They happen to end at about the same value in 2007 – which gives reasonably well-matched linear trends from 1957 – but they approach the end value in very different ways.

NEXT

If I have time, there will be a Part Three to this. I am intending to:

  1. Do a comparison of all of the actual data (AWS and manned) to the reconstruction values in the AVHRR reconstruction.
  2. Break out the manned stations for the AVHRR reconstruction and compare their trends vs. the rest of the reconstruction.
  3. Determine the amount of actual data retained from the manned stations.

And a few other odds and ends (like seeing which series dominate the positive correlations).

Cheers! 🙂

23 thoughts on “Steig’s Antarctica: Part Two”

  1. Lucia,

    Interesting piece. I’m left with a couple of questions however: 1] your analysis notwithstanding, what to do with Trenberth’s [certainly no skeptic as we all know] very poignant comment to the effect that “data can not be made where there are none”, and 2] the academic and analytical aspects aside, does the Steig paper matter at all other than for AGW/ACC PR purposes, as it was subsequently used by Mann [no surprise there, I might add]?

    As I posted on CA at the time, at the rate of "warming" posited in the Steig, Mann, et al. paper, it would take over 2500 years for Antarctica to reach temperatures approaching the melting point of ice. That does not take into account that there is an average of over 8000 ft of ice over the continent, and it does nothing to address the question of where all the calories necessary to melt it would come from. Given that physics teaches us that the warming caused by CO2 occurs on a falling log scale, that warming could not come from anything homo sapiens would be capable of achieving by burning "fossil fuels": the best available data tell us there simply are not enough hydrocarbons on earth to effect that increase.
    Something to ponder for your readers maybe?

  2. Tetris
    You need to examine why there is meltwater on the surface in Antarctica (and Greenland) during summer. That will inform you as to why your comment about Antarctica requiring 2500 years of warming is wrong.

  3. Nathan,

    about how many gigatons of ice a year do you think direct radiation and sublimation actually melt?

  4. Kuhnkat
    There is lots of info out there; you can read for yourself.

    Here, use google scholar and search for “Antarctic melt water”

    Tetris’ point about 2500 years to get temps high enough to melt ice is simply wrong; temps get that high periodically now.
    It’s also irrelevant, as the danger from Antarctic ice isn’t from it melting in situ. It would be from the West Antarctic ice sheet floating off if sufficient water got under it.

  5. Fantastic piece, Ryan. Needless to say, after you finish up your research, you should edit it and publish. I’m looking forward to seeing comments at RC.

  6. Nathan,
    Tetris’s point was eminently ignorable, and your response was meaningless. I don’t think anybody would dispute that temperatures get warm enough in both the Antarctic and Greenland to melt ice, but I seem to recall it occasionally gets cold enough to freeze it again.

  7. Kazinski,
    I’m sorry you didn’t understand my post, I thought the meaning was clear:
    Tetris – too cold to melt ice in Antarctica
    Nathan – no it isn’t, it does melt.

    I don’t understand why you are responding. Do you have a point?

    “I don’t think anybody would dispute that temperatures get warm enough in both the Antarctic and Greenland to melt ice”

    It would also seem that Tetris and Kuhnkat disagree with you, perhaps you should be talking to them? It’s also a commonly held belief around the traps.

    As to it re-freezing, yes of course it does. However recently on Greenland the water has been able to escape.

  8. Tetris–
    This is a guest post by RyanO. I need to change the theme to make that clear automatically! (I’ll do that now.)

  9. The AVHRR data always shows warming, even in locations where the ground instruments report cooling. Could there be an uncorrected warming bias? Has the satellite data been adjusted for orbital decay?

  10. Tetris and Scooter–
    You are both answering some general questions that are outside the scope of what RyanO is doing. Ryan is specifically trying to examine what he can figure out by reading Steig’s paper. Others who read the paper can possibly guide him to answers to ambiguities, etc. But questions that basically ask whether CO2 could even hypothetically result in warming are, strictly speaking, outside the scope of an analysis to determine whether or not it warmed.

    One of the wonderful things in both science (and even outside science) is that we can sometimes isolate a specific question and answer it without worrying about the rest. Whether the Antarctic warmed since 1959, and whether Steig actually showed this, is in some sense a separate question from whether or not CO2 could be the cause (or non-cause) of any warming.

    They are related in the sense that if warming occurred, that tends to support theories that increases in CO2 can cause warming. If warming did not occur, that may suggest the effect of CO2 is either small or doesn’t happen (or it’s insufficiently strong to overcome some other factor). If we can’t tell whether warming occurred, that might suggest we just can’t tell from this data.

  11. Mostly I’m just trying to see if the words in the paper are supported by the data. So far, I don’t think the words are.
    This doesn’t mean that Antarctica isn’t warming, nor does it mean the satellite reconstruction is bad. I don’t have enough information (nor do I have enough statistical knowledge even if I had the information) to tell that. Others at CA with a much better knowledge base are already investigating that, however. 🙂
    The big red flag I had with the paper was that most of the data that forms the individual time series is imputed (manufactured by RegEM or a PCA analysis), and there was no benchmark provided to indicate that such an imputation is valid over this large of an area with very little actual data.
    So what I decided to do is compare the consistency of one imputation (the AWS recon) versus another (the AVHRR reconstruction). If they correlate well, then I would feel that the imputation has potential (albeit unproven) validity. If they do not correlate well, then it seems hard to argue that an imputation on this scale has systematic validity. This is something that is within my abilities to do (at least a rough comparison).
    The result of doing this is not any confirmation or disproof of the results; it is simply a measure of how much confidence I have that the results reflect reality.
    So far, my personal confidence in this is low. It is quite possible that the AVHRR reconstruction is a valid picture of Antarctica, but I frankly do not have enough confidence to believe that conclusion. Antarctica may be warming more, or less, or exactly the same. There simply doesn’t seem to be enough information to tell at this point.

  12. Nic L pointed out that the Bonaparte Point file at BAS has changed since Steig ran the AWS reconstruction. No notice of this change was posted, and the old file is gone. So the problems I noted with the Bonaparte Point data are due to changes in the data at BAS.

  13. Ryan wrote –“The big red flag I had with the paper was that most of the data that forms the individual time series is imputed (manufactured by RegEM or a PCA analysis), and there was no benchmark provided to indicate that such an imputation is valid over this large of an area with very little actual data.”

    This is why people conclude that the paper is all about politics. The data quality is too weak, the coverage of the area is too sparse and the statistical techniques fail to cover the necessary bases for anyone to be able to express confidence in the results.

    As science, it’s sloppy. As politics, it offers “results” that the authors desired. Job well done.

  14. Nathan,

    the last 30 years are a perfect example of what happens when a large ice sheet melts. The Arctic has done a really good job of “floating off and melting” yearly.

    Could you please point out the negative consequences of this??

    Do it quick cause it appears to be returning to what the Warmers consider “normal!!”

  15. Kuhnkat
    The Arctic is not an example of floating off and melting, as it is already floating.
    The West Antarctic ice sheet lies mostly below sea level but is not ‘floating’.
    The negative consequence of that is, of course, sea level rise.

    What appears to be returning to “normal”?

  16. Ryan – Interesting post, particularly with respect to Steig et al (S09) possibly not adhering to the stated exclusion criteria. My understanding of what they did, however, is a little different from what you stated at the outset, and I would like to check what others’ understanding is.

    For the primary reconstruction, my understanding is that S09 used the manned-station data (and possibly 4 AWS?). Since there are significant gaps and/or incomplete time coverage for the manned stations, they used the AVHRR satellite data to develop a covariance structure for the period of satellite coverage. Then, using the satellite-derived covariance structure for 1982-2007, they used RegEM to infill missing data at the manned stations for the entire period from 1957-2007. Finally, using the infilled data records, they (1) calculated trends at individual stations and (2) looked at temperature trends over Antarctica as a whole (presumably using some other algorithm to interpolate grid-point data from the station data).

    For the AWS reconstruction, they used the AWS data instead of the satellite data to develop the covariance matrix.

    Is my understanding the same as yours, and is it the correct understanding of what they did?

  17. Bob,
    Gavin made some comments at RC that indicate it was done in the manner you describe. Eric didn’t contradict those statements. However, to me, the paper indicates otherwise.
    .
    From my reading of the paper and supplemental info, one of the things Steig was trying to do was develop a screening/processing method for the AVHRR data that could be used to obtain surface temperatures . . . relieving the need to rely on sparse and poorly maintained ground instrumentation. That would mean that the AVHRR data would have been used directly and not simply to obtain the spatial profile of temperature.
    .
    There is a way to prove whether the AVHRR data was used directly or only for covariance. Hopefully I’ll be able to post about that fairly soon (though I do this in my spare time, so it’s time dependent). In the meantime, though, I think there are some indications from the comparison of the AWS/AVHRR recons that the TIR data was used directly; namely, the fact that the trends from the 1957-1980 subsets of the reconstructions differ. If the satellite data were used for the spatial profile alone, I would not expect a large impact on the trend (since the source data for both reconstructions is exactly the same in that period).
    .
    I know most of the folks at CA think it was used the way you describe. I should be able to know for sure in a day (or three). 🙂

  18. RyanO:

    I’ve had to read this three times, but I think I’m following your argument. I’ve learned quite a bit. Thank you.

  19. Ryan – I went back and re-read the actual paper as available at the meteo.psu.edu site. Unfortunately, it does not clear up exactly what they did. The description of the methodology in the first few paragraphs of the paper seems to support my interpretation, whereas the description in the additional on-line methods section (not the supplemental materials) strongly supports your interpretation. This is especially confusing, since the first interpretation indicates the reconstruction is primarily dependent on actual station data while the second indicates it is primarily dependent on the interpreted satellite data.

    Unfortunately, this means that, despite protestations to the contrary, one cannot simply read the methods section and figure out what they have done. This provides a perfect example of why the “code” should be released so an interested party could go step-by-step through the code and figure out exactly what was done.

  20. Bob,
    .
    I am no longer sure what they did except insofar as the AVHRR recon does not seem to contain ANY actual ground station data. Everything looks imputed. Everything . . . even from 1957-1982.
    .
    I found the grid locations in the AVHRR recon that corresponded to the station locations. I had expected to find a near-perfect match in the 1957-1982 range (since supposedly the station data was used for that range) and a not-so-perfect match afterwards (since my guess was that they used the satellite data as more than just a spatial map). When I actually graphed it, though, I was rather surprised.
    .
    There’s no match. The trends are different, and display the same general shape as the AWS to AVHRR comparison – i.e., the actual station data shows essentially no warming after 1969, but the AVHRR recon shows accelerating warming during that period – even on the gridpoints that match the station locations. I don’t see any actual station data anywhere in the AVHRR recon. Absolutely everything looks imputed.
    .
    Regardless of what they actually did, the claim that the satellite information was used for covariance alone is not supported by a direct comparison between the station data and the reconstruction. Not a single one matches.
    .
    I posted these over at CA as well – these are the AVHRR recon minus the station data at the appropriate grid point for stations with essentially complete records over the analysis period.
    .
    Edit: Pics didn’t work . . . not sure why . . . here’s the CA link for them:
    .
    http://www.climateaudit.org/?p=5151#comment-327342
    .
    Here’s also some plots that show the difference in trend by period and location on Antarctica:
    .
    http://www.climateaudit.org/?p=5151#comment-327282
    .
    So, to be honest, I don’t know what they did. All I know is that the surface data does not match their reconstruction, either temporally or spatially.

  21. Bob,
    .
    Without a doubt, the entire AVHRR recon is imputed. Not just part of it. The whole thing – even the satellite data. I’ll ask Lucia if it would be okay for me to do a part 3 post on it. 🙂

  22. Ryan O [10353]
    Liked your analysis and your comments in [10215]. I would like to comment on your opening observation that maybe Harry wasn’t that important and wouldn’t make much difference to Steig’s results. I would like to suggest that Harry was very important indeed to Steig and the Team, although not in the way you thought of it: Harry was in fact so important that Gavin Schmidt went so far as to appropriate [steal, plagiarize, or any other euphemism that fits] the results that Steve McIntyre had posted on CA immediately after having reviewed the paper, showing that the data didn’t compute, and then passed them off as a correction by the Team. When a researcher goes to those lengths to obfuscate something, that tells us that Harry was ever so important – not so much at the science level, but at the level of causing damage to the proponents by allowing the “message” to be undermined, which in the end is precisely what happened.

  23. Nathan:

    But your (Hansen’s actually) entire premise about Arctic ice extents and global warming fails – in practice, sea ice extent from one year to the next is NOT related to CO2 levels nor reflected heat energy.

    Summer 2007 sea ice extent was lower than summer 2006 – but temperatures had DECLINED between 2006 and 2007. Temperatures in 2008 “should have been” warmer than in 2007 because reflectivity was lower- but they were not. Global temperatures continued declining between 2008 and 2009, and for the past ten years they have gotten lower the higher CO2 has gone. (If 27 years of 1/2 of one degree temperature change creates a world-wide disaster, what does ten years of declining temperature indicate? (Other than failed theories, that is.))

    Summer sea ice in 2008 should never have been able to recover from the lows in 2007. But it did. 2009 winter sea ice now (mid-Feb 2009) is at near-record highs. Why? According to Hansen (and those who follow his propaganda (er, religion) of AGW), it could not be. Once sea ice has re-frozen over an area, the surface reflectivity is restored – regardless of the depth of the ice (one year, two year, etc.). (Granted, shallower ice will melt again quicker the next spring, but that didn’t prevent this recovery either.)

    You are claiming feedback based on year-to-year changes in the earth’s reflectivity based on changes in sea ice extent, but your claims are false. You are predicting long-term disaster over many centuries, but cannot make ten years of predictions correctly.
