NSIDC Prediction: Submission to SEARCH

In comments, crandles asked if I was going to submit my forecasts of NH ice extent to SEARCH. Though I usually discuss forecasting the 7-day JAXA average (on which we bet quatloos), I could easily tweak this for the NSIDC September NH extent average. I did so in August and submitted. I was then surprised to learn they also solicit forecasts for their September report. The forecasts were due today, which is too early to know the NSIDC August report. But no matter; I sent mine in, making projections based on JAXA, Cryosphere Today and PIOMAS data. I thought some of you might want to read the first sentence of my most recent submission:

1. Extent Projection: 4.47 million square kilometers September NH sea ice extent minimum.

So, I’m predicting a lower extent than I did a month ago, when the first sentence in my submission read:

1. Extent Projection: 4.57 million square kilometers September NH sea ice extent minimum.


Given the size of my uncertainty intervals, the people at SEARCH made the reasonable decision to round that to 4.6 when displaying the range of submissions they received:

Presumably, the submitted forecasts will tend to cluster as we get closer to the minimum. We’ll have to wait to see if they cluster sufficiently for SEARCH to post 3 digits.

Interestingly enough, my first submission was based on a method that extrapolated from extent only. If I were to use that method today, I would be forecasting 4.58 million square kilometers. With the most recent reported values for the regressors I use, that particular regression happened to provide the best single regression between NSIDC extent and any predictive variable I tried.

However, as many know, I’ve been folding in other regressors and have switched to a method based on weighting multiple candidate regressions. Today, the candidates that filtered to the top based on their low corrected Akaike information criterion (AICc) were (a sketch of the weighting calculation follows the list):

  • NSIDC extent vs. (current JAXA extent, current CT area), w = 58.4%
  • (NSIDC extent − current JAXA extent) vs. July 31 PIOMAS volume, w = 6.4%
  • (NSIDC extent − current JAXA extent) vs. current CT area, w = 35.2%
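
For the curious, here is a minimal sketch of the weighting calculation in R, assuming hindcast data in a data frame d with hypothetical column names (nsidc_sept, jaxa_now, ct_area_now, piomas_july). This illustrates standard AICc weighting, not my actual script; the weighted forecast is then the weight-averaged prediction from the individual fits (adding the current JAXA extent back for the two “loss” regressions).

    # Candidate regressions, one row of d per hindcast year
    fits <- list(
      lm(nsidc_sept ~ jaxa_now + ct_area_now, data = d),
      lm(I(nsidc_sept - jaxa_now) ~ piomas_july, data = d),
      lm(I(nsidc_sept - jaxa_now) ~ ct_area_now, data = d)
    )

    # AICc adds a small-sample correction to AIC; k counts the
    # residual variance as a parameter, matching R's convention
    aicc <- function(fit) {
      k <- length(coef(fit)) + 1
      n <- length(residuals(fit))
      AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
    }

    # Akaike weights from the AICc differences
    a  <- sapply(fits, aicc)
    dA <- a - min(a)
    w  <- exp(-dA / 2) / sum(exp(-dA / 2))
    round(100 * w, 1)   # e.g. the 58.4 / 6.4 / 35.2 split above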

I have to admit to finding the latter two a bit odd, but if I’m using a “method”, I think I have to stick with “the method”. Afterwards, when things like “volume” or “area” pop out as strong regressors, I can ponder whether there might be a physical reason or whether they are acting as proxies for something important not included in the regression. (FWIW: speculating, I can think of reasons why high area could help reduce ice losses. But I was expecting (extent − area) to matter more.)

For now I’ll observe that it barely matters what we include in the regression: we are nearing the minimum and uncertainty intervals are narrowing. If I define “loss” as (NSIDC extent − current JAXA extent), then based on years 1979-2010, the mean “loss” between the current 7-day JAXA value and the NSIDC September extent is only 0.13 million square kilometers. (For those eyeballing the JAXA graph and thinking this sounds awfully small: it should look small. The Sept. NSIDC NH extent mean is generally– though not always– higher than the minimum based on the JAXA 7-day average. This is not too surprising, since the September average includes 21 days with extents mostly larger than those that contribute to the 7-day average minimum.)
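
For concreteness, here is a sketch of the “loss” bookkeeping in R, with hypothetical column names (nsidc_sept for the NSIDC September mean, jaxa_now for the 7-day JAXA value on this date in each hindcast year, ct_area_now for the current CT area, ct_today for today’s value). The sign convention follows the definition above; the quoted 0.13 million square kilometers is the magnitude of the mean.

    # "Loss" between the current 7-day JAXA value and the September
    # NSIDC mean, over the hindcast years
    d$loss <- d$nsidc_sept - d$jaxa_now
    mean(d$loss)   # the 0.13 Mkm^2 quoted above (in magnitude)

    # The regression discussed two paragraphs below
    fit <- lm(loss ~ ct_area_now, data = d)
    predict(fit, newdata = data.frame(ct_area_now = ct_today))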

It turns out that if we regress this loss against Cryosphere Today’s current 7-day average area, we predict a loss of 0.53 million square kilometers. While the difference looks large, even achieving a record loss doesn’t result in a huge difference in what I predict. For those wondering how “loss” varies with area, here’s the graph:


The black dot illustrates the ‘loss’ predicted based on CT area for the upcoming month. Notice the prediction is for a near-record loss given the short amount of remaining time; this regression is included in my weighted model with a weight of 35.2%. So, at least one model that survives the selection process suggests we may see record losses during September. I can only begin to imagine the chatter at Neven’s! (And WUWT, and even here!)

Overall prediction
Turning to overall predictions: my forecast for the NSIDC September ice extent is 4.47 million square km; this value is shown along with values from other years below:

Last year’s value is shown as a solid black circle. Note that it lies outside the 95% uncertainty intervals based on the weighted model (i.e., outside the slate-blue horizontal dashed lines). Assuming residuals to that fit are Gaussian, the probability of exceeding last year’s September extent is estimated at less than 1%. Meanwhile, the 2007 minimum still lies inside the uncertainty intervals, and I estimate the probability of overtaking that minimum at 17.9%. That’s pretty high!
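
For those who want to reproduce the tail probabilities, here is a sketch under the Gaussian assumption. The 1σ of 0.1825 appears in my comments below; the 2010 and 2007 NSIDC September means (roughly 4.90 and 4.30 million square kilometers) are filled in from memory rather than from this post:

    mu    <- 4.47     # weighted forecast, Mkm^2
    sigma <- 0.1825   # hindcast 1-sigma (see the comments below)
    1 - pnorm(4.90, mu, sigma)   # P(exceeding last year's extent): < 1%
    pnorm(4.30, mu, sigma)       # P(beating the 2007 minimum): ~ 17.9%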

Tomorrow, I’ll post on the horserace and tell you who is in the lead for the Quatloos. That’s based on JAXA.

78 thoughts on “NSIDC Prediction: Submission to SEARCH”

  1. All I can say about the <4.1 and >5.0 predictions is: I’d like to get some of whatever they’re smoking.

  2. Dewitt-
    Those were submitted in late July, so that cuts them some slack. Still, 3.96 or 5.4? The 3.96 is a heuristic method and the 5.4 “a combination of methods”. Those both sound like they could mean anything ranging from “gut feeling” to something rather complicated.

  3. “Those both sound like they could mean anything ranging from “gut feeling” to something rather complicated.”

    Admittedly, the 3.96 forecast comes with a pretty terse explanation but the 5.4 one comes with one running to 3 pages including figures. See http://www.arcus.org/files/search/sea-ice-outlook/2011/06/pdf/pan-arctic/wadhamspanarcticjune.pdf and http://www.arcus.org/files/search/sea-ice-outlook/2011/08/pdf/pan-arctic/shibata_etal_panarctic_august.pdf

    I’m a bit surprised, Lucia, that you apparently haven’t read the other contributors’ explanations of their methodology.

  4. HR,

    I can’t give you a full rundown of the different area/extent products from memory but the differences I know of are as follows:

    – different passive microwave sensors (AMSR-E, SSMI)
    – different grid resolutions (25km, 12.5km, 6.25km)
    – different concentration thresholds (15%, 30% – BTW, WUWT seems to have a labeling error there since DMI uses a 30% threshold as you can see at http://ocean.dmi.dk/arctic/icecover.uk.php)
    – different algorithms for interpreting the passive microwave data (sorry, the ins and outs of this are way beyond me to say anything meaningful about).
    – different methods of smoothing the data (averaging over periods from 2-5 days, whatever Norsex does, perhaps others)

    A single authoritative data set would be nice but passive microwave sensing of sea ice apparently isn’t that cut and dried.

  5. >I’m a bit surprised, Lucia, that you apparently haven’t read the other contributors explanations of their methodology.

    I didn’t even know they were online. Oddly, the notes to contributors say they won’t be.

  6. Jon–
    I guess when I’d read the brief synopses way back when, I must have seen the links– but forgot. Anyway, I know I never downloaded them all, and only read the summaries Arcus extracted. Having read the ones you suggested, I think I was wise to not read them. It let me go ahead and do it my way. 🙂

  7. A single authoritative data set would be nice but passive microwave sensing of sea ice apparently isn’t that cut and dried.

    We’ve got several global surface temperature products. No reason not to have several extent/area products.

    HR– At Neven’s, I recall Larry Hamilton also said that U of Bremen isn’t calibrated all the way back in time. I think that’s part of the reason they only present graphs and not data. I don’t know what they consider to be involved in calibrating, but take that for what it’s worth.

  8. Re: lucia (Aug 30 16:35),

    The authors of the algorithm used for the Uni-Bremen data are at Uni-Hamburg. The data are here:

    ftp://ftp-projects.zmaw.de/seaice/AMSR-E_ASI_IceConc/area-extent/

    Unfortunately, they only update monthly around the middle of the next month. July’s data was posted August 9, for example.

    An interesting question on area is whether the pole hole (the gray area in the center) is included. It is for extent because it’s a safe assumption that the concentration in the hole is > 15%. NSIDC does not include the hole in their area data.

    1) The “extent” column includes the area near the pole not imaged by the
    sensor. It is assumed to be entirely ice covered with at least 15%
    concentration. However, the “area” column excludes the area not imaged by
    the sensor. This area is 1.19 million square kilometers for SMMR (from the
    beginning of the series through June 1987) and 0.31 million square
    kilometers for SSM/I (from July 1987 to present). Therefore, there is a
    discontinuity in the “area” data values in this file at the June/July 1987
    boundary.

    I don’t know for sure about anyone else.

  9. Dewitt–
    Hmmm… Now I have to hunt for the discussion about calibration at Neven’s. That’s not easy since Neven’s blog uses paged comments. Even if I remember which post, I can’t just use the search feature on my browser to find all the comments from Larry!

    If you start reading through the documentation on extent and ice measuring with a skeptical eye, it’s amazing what you find.
    I wouldn’t make too much of records.

  11. Re: lucia (Aug 30 17:23),

    That caveat was added after JeffId posted an article with conclusions based on the uncorrected NSIDC area data. One of the things I like about Jeff is that he freely admits it when he makes an error, unlike numerous other people who shall remain nameless.

    I wouldn’t make too much of records

    No. But even with noisy data, setting new records with some frequency can only happen if the amount of ice really is trending down. The records themselves are just horse-races– it’s the same with global surface temperatures.

  13. I believe the metric for Arcus Search sea ice outlook is the September average sea ice extent as reported by the NSIDC for the whole month.

    I note that over the past 8 years of Jaxa data, the NSIDC September average has been 222,000 km2 lower than the Jaxa data as of day 241 (or August 29 in most years).

    But this has varied from 78,000 km2 higher in 2004 to 483,000 km2 lower in 2008.

    This range would then place the NSIDC September 2011 value at anywhere from 4.41M km2 to 4.97M km2, with 4.67M km2 as the expected value.

    lucia’s number is looking pretty close but the numbers can vary a bit from now on depending on the weather.

  14. Carrick,

    I ran the monthly NOAA extent anomaly figures for 1/2007-7/2011 through the decompose function in R.

    Graph here

    I don’t see that there is any more real information than in plotting the 2007-2011 (max − min) minus the 1979-2006 average (max − min) (in this case area rather than extent).

    Graph here

    The decompose function makes it look like there’s a smooth trend curve, but that’s all filtering (a 13-point symmetrical filter of some sort, meaning only about 5 actual degrees of freedom), not to mention that the seasonal correction probably has substantial uncertainty and may not even be constant.
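
    A minimal sketch of that decompose run, with ‘anom’ standing in for the monthly NOAA anomaly series:

        x   <- ts(anom, start = c(2007, 1), frequency = 12)
        dec <- decompose(x)   # MA trend + fixed seasonal + remainder
        plot(dec)
        # The trend component is a centered moving average spanning 13
        # months, which is where the smooth-looking curve comes from.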

    DeWitt, I’m not sure why you’d use that sort of decomposition instead of a spectral-based one like this.

    It’s obviously a periodic phenomenon; you seem to be fighting that fact tooth and nail. When you have a periodic phenomenon, the “standard” approach is the Fourier transform, not something based on moving averages. I’m baffled why you’d even want to try that.

    /// not to mention that the seasonal correction probably has substantial uncertainty and may not even be constant

    I’m not sure what you mean by “the seasonal correction probably has substantial uncertainty.”

    What “seasonal correction” and why does it have “substantial uncertainty”? I have no idea what you are even trying to say here…

    The “standard” method (anomaly based) is to assume that the seasonal variation is the same between years, average over a base range of years, then subtract from each year’s data. E.g., the “anomaly method”.

    If the seasonal variation is the same, the seasonal dependence in the anomalized series will be substantially suppressed. That works pretty well until around 2007.
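
    A minimal sketch of that anomaly calculation, with a hypothetical data frame ext holding columns year, doy and extent, and a pre-2007 base period:

        clim <- aggregate(extent ~ doy, data = subset(ext, year <= 2006), mean)
        ext$anom <- ext$extent - clim$extent[match(ext$doy, clim$doy)]
        # If the seasonal shape really is constant, ext$anom carries little
        # annual cycle; a residual cycle after 2007 flags a changed pattern.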

    Again: if you compare the spectra of the pre-2007 to the post-2007 data, you see exactly this pattern: the seasonal variation is suppressed in the anomalized pre-2007 data, but the anomaly method fails to suppress the seasonal variation post-2007.

    What other explanation is there for the anomaly method working up to 2007, besides the assumption holding that the seasonal variation is the same from season to season? And what other explanation is there for the failure of the anomaly method post-2007, other than “the seasonal variation changed”?

    At some point, I’m going to assume you just don’t grok frequency domain and drop this.

    Carrick, this change is something that Willis noted as well. I argued that it voided his null hypothesis. He agreed, but argued that it might be a sensor problem (nice try).

    I can look up the links, but something’s going on with that.

    DeWitt, before we talk past each other on this any more than we already are, I’m using phase in this sense:

    $latex E(t) = \Lambda(t) + A(t) \cos[2\pi t + \Phi(t)]$,

    where $latex \Lambda(t)$ is the DC offset (secular variation that Rob gets all excited about with his 10-year trends), $latex A(t)$ is the variation in amplitude and $latex \Phi(t)$ the variation in phase and $latex t$ is in years (so $latex A(t)$ and $latex \Phi(t)$ are measured relative to the modulation frequency of $latex 1\hbox{yr}^{-1}$.)

    Not precisely defining what I meant has certainly led to confusion here. I also should be saying a change in the annual phase behavior, not just “phase shift”.

    What I’m trying to differentiate is a change in the seasonal pattern, as opposed to a simple scaling of its amplitude. That is, post-2007, I think the change is more complex than just multiplying the seasonal variation by a constant, e.g., in math

    $latex y(t) = \Lambda(t) + \alpha A(t) \cos[2\pi t + \Phi(t)]$.

    So far I haven’t gotten to the point of being able to narrow down the tests satisfactorily enough for me. Looking at max – min simply won’t answer this sort of question.

    Hopefully this will clarify things a bit, and I admit I’m still muddling through the best way to answer the underlying question I’m looking at.

    Steven, do you have a link for that discussion with Willis? I certainly don’t believe that either the secular trend or any putative changes in the seasonal variation can be explained by “sensor malfunction”. (heh.)

  19. Re: Carrick (Aug 30 21:40),

    At least we agree on what phase shift means.

    If I regress the daily averages for 2003-2006 against the daily averages for 2007-2010, the sum of squares for the regression minimizes at a shift of only one day (standard error of the regression of 0.112 Mm²). That’s not much of a phase shift (~1 degree) compared to the change in offset (1.31 Mm²) and amplitude (0.898).
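
    A sketch of that shift search, with ‘early’ and ‘late’ as hypothetical 365-point daily means for 2003-2006 and 2007-2010:

        lags <- -30:30
        rss <- sapply(lags, function(k) {
          idx <- ((seq_along(late) - 1 + k) %% length(late)) + 1  # circular shift
          deviance(lm(early ~ late[idx]))  # residual sum of squares at lag k
        })
        lags[which.min(rss)]               # minimizes at a one-day shift here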

  20. DeWitt, and yet we know the date of minimum extent is shifting later into the year (by about 10 days from 1970 to now).

    There’s a term in statistics that applies here: “efficiency”. If you aren’t seeing any effect of a change in season (that we can see in the raw data), I’d suggest you need to look for a better metric.

  21. Re: Carrick (Aug 31 00:11),

    Ten days seems like a lot. The day of minimum area from CT shows no trend at all from 1979-2010.

    I don’t have the full set, but the mean day of year for the minimum extent using NSIDC data from 1979-2000 showed no trend. The mean day of year for minimum extent for JAXA from 2002-2010 is only 4 days later, which is barely significant given the variances.

  22. Following the wonderful presentation on Fox Business network by Bill Nye, respected member of the Union of Concerned Scientists, a non-partisan, extremely objective group, I started realizing the effects of climate change. Since about 1979, the year satellite measurements began, I have been noticing subtle changes in the air. My nose hairs are becoming more itchy, and the trend is rising. Globally, nose picking rates are all trending up. People between the ages 25-42 are noticing a 7% decrease in total nose hair coverage per annum.

    @Lucia

    Any chance we can bet some serious quatloos on Rick Perry thumping Obama in 2012? The good doctor would like to wager his membership to this site.

  23. DeWitt:

    The mean day of year for minimum extent for JAXA from 2002-2010 is only 4 days later,

    Well it passes the smell test…assuming a linear trend (you do): (2010-1972)/(2010-2002) * 4 = 19

    If you agree 4 days in 8 years is barely significant, then 10 days in 38 years is more than “barely”.

    I’m not looking at area, I think it’s close to meaningless.

    Following up on area, it should be a bit of a clue that almost no group publishes area, and the only one that we can find a digital link for buries the link.

    Regarding the shift in the minimum, I think the right way to do it is a Fourier (or lag-based) analysis (I’m still working on this, or more to the point, planning on working on this). Even looking at the minimum still only tells part of the story, because it doesn’t address things like potential steepening of the sides. I think you can get at that by looking at the amplitudes and phases of the Fourier components, or similar approach, I’m not sure just looking at the lag in days tells us the full story (but we can still look at that too).

    When finding minimum dates, just picking the raw minimum is a very noisy way to go about it. Fitting, e.g., a quadratic around the minimum over a period of several weeks to a month might give a much less noisy estimate (we want to filter out day-to-day weather effects for sure, if what we want to look at are long-term trends).
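
    Something like the following is what I have in mind (hypothetical doy and ext vectors for a single year; the window width is illustrative):

        win <- abs(doy - doy[which.min(ext)]) <= 15   # ~a month, total
        fit <- lm(ext[win] ~ doy[win] + I(doy[win]^2))
        b   <- coef(fit)
        -b[2] / (2 * b[3])   # parabola vertex = smoothed minimum date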

  25. Re: Carrick (Aug 31 11:31),

    The four days difference is between the mean of 1979-2000 and the mean of 2002-2010. That’s 4 days over the span of 1989-2006. That would be 8 days from 1970-2010, but that’s almost certainly an overestimate. I’ll try quadratic fits. 8/15-10/15 should be an adequate range.

    Of the four sites I check, two publish area data as well as extent, NOAA (monthly) and Uni-Hamburg. JAXA only publishes extent and CT only publishes area. Well, CT has a quarterly extent graph, but I don’t really believe it as it disagrees with the other three sources for extent where it overlaps. Arctic-ROOS also has extent and area graphs, but I don’t trust them either.

    I don’t think area is any worse than extent. Extent is much more subject to weather than area. Area has accuracy problems near the minimum, but that should be more of a systematic error that goes away when you look at anomalies. I can do a linear transform between CT and Uni-Hamburg area. There’s a pattern in the residuals near the minimum, but it seems consistent from year to year.

    Uni-Hamburg extent is tracking their 2007 data much more closely than JAXA. Uni-Hamburg extent looks like a 50:50 shot for a minimum below 2007.

  26. steven mosher (Comment #80992)
    > “I can look up the links, but something’s going on with that”

    This pretty much has to happen as the extent declines. That’s because maximum extent in the Arctic Ocean is limited by the shorelines of Siberia and Canada. This acts to suppress what would otherwise be a greater winter extent, and this suppression acts in turn to suppress what would otherwise be a larger annual variation: i.e., it flattens the top of the extent curve in wintertime.

    If all we had were a seasonal variation and a linear trend superimposed, the shape of the seasonal curve wouldn’t change and no change in seasonal variation would be noted. But as the secular change lowers the curve from year to year, the flattened top of the curve now has more room to become less flat. In other words, the earlier springs and later autumns mean that the ice extent hits the limiting shorelines later, and recedes from them earlier. So the edge of the extent spends more time at sea and less time attached to the continents, making it more variable.

  27. DeWitt, when I get back to it, I’ll look at extent and area both, but I think it’s dangerous territory to use a metric the pros don’t seem to favor. Right now, I’m looking at different metrics, and different ways of measuring things.

    While I use annual average + max − min myself, I consider it a sanity check, not a final product. A nonparametric method is good, since it is only weakly model dependent, but it tends to be the noisiest approach. Just as the median is less sensitive to outliers than the mean, but doesn’t have the 1/sqrt(n) noise suppression of the latter.

    Done properly, adding more data never hurts (even if it’s correlated), and hopefully we can both agree that there is information in the seasonal cycle not contained in the max − min of the seasonal cycle.

    KAP, I like your model. “Flat-topping” is something you can test for using harmonic analysis, so I’ll look at it.

  28. Carrick,

    How do you know the minimum extent has shifted by 10 days since 1970? I really don’t see the logic in such a shift. You do get higher rates of freezing, but not until the middle of October, well past the minimum extent date.

    I used a quadratic fit over 90 days, which is probably too much, and can see no trend from 1979 to 2010. Prior to 1987, the extent data was every other day, which probably biases the calculated minimum.

  29. I don’t think there has been a change in the average minimum date of September 12th.

    2007 had the latest minimum date, at around Sept 24, but 2009’s was September 13. The earliest was August 24.

    Some years have different trends depending on the weather.

  30. crandles–
    I saw your post but I’d drunk too much wine to comment. (It’s a holiday in the US. 🙂 )

    I don’t send as much detail as you do to search. (It looks like different people send them more or less information.)

    My uncertainties are all based on the hindcasts. R has the advantage of quickly and easily giving both RMSEs and confidence intervals, so I use confidence intervals. I told SEARCH ±0.1825 as the 1σ. I didn’t write down the specific RMSEs for each fit. (I can do that later– it’s early morning. We’re going out to breakfast, and– oddly– I need to add a line to the script so it doesn’t automatically use today’s numbers, and so it tells me the RMSEs if I use data available today!)
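
    For instance (a sketch, where ‘fit’ is one of the hindcast regressions and ‘current’ is a hypothetical one-row data frame of today’s regressor values):

        summary(fit)$sigma                    # residual standard error
        predict(fit, newdata = current,       # forecast with 95% bounds
                interval = "prediction", level = 0.95)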

  31. KAP

    I’m thinking the increased annual variability has more to do with the destruction of MYI. Think of the ice as oscillating between a hard stop (the physical geography) at the top end and a soft stop at the lower end (MYI).

    Change that lower-end bound and you’ll see the kind of difference that Carrick’s chart shows.

  32. Re Bill Illis’s
    “I don’t think there has been a change in the average minimum date of September 12th.

    2007 was the longest minimum date at around Sept 24 but 2009 was September 13. The earliest was August 24.

    Some years have different trends depending on the weather.”

    If my minimum prediction method (linked above) isn’t just a fluke or wildly overfitting, then the lower the ice area in June, July and August, the more energy is captured by the ocean and the longer it takes before freeze-up begins at the ice edges. That effect is unlikely to be insignificant.

    Why would there be no change in the average date? Your example of the 2007 minimum being much later than 2009’s seems to support the idea of a change in date depending on area earlier in the melting season.

  33. Re: crandles (Sep 4 12:39),

    Why would there be no change in average date?

    The better question is why would there be any change in average date. Thawing earlier and freezing later is potentially symmetric about the same minimum date. If thawing occurred earlier but freezing was the same or earlier, the minimum date would be earlier. Conversely, if the thawing was the same or later and freezing was later, then the minimum date would be later. I have seen zero evidence that the average minimum date has shifted at all, much less the ten days that Carrick asserted.

  34. DeWitt–
    It seems to me that if the atmosphere becomes less transparent to radiation loss when the sun has set, this would reduce the rate of radiative cooling at night. On the fringes, this could shift the average day when radiative cooling overtakes melting due to heat addition from other sources.

    I don’t know when this might become apparent– but it seems to me the date could shift.

  35. Re: lucia (Sep 6 08:59),

    It seems to me that if the atmosphere becomes less transparent to radiation loss when the sun has set, this would reduce the rate of radiative cooling at night.

    The effect should be the same when the sun starts to rise above the horizon in the spring as when it starts to set for the winter. It should cause earlier melting as well as later freezing. I see no reason for the effect to be particularly asymmetric.

  36. Dewitt–
    I see what you are saying. I guess I wasn’t following along enough; I thought you were suggesting the end date for melt shouldn’t change at all– but that’s not what you meant.

    As for symmetry– I guess whether shifts are symmetric might depend on the full assortment of mechanisms and their balance. I don’t know them, so I’m not going to speculate on symmetry vs. asymmetry based on physics.

  37. Lucia:

    As for symmetry– I guess whether shifts are symmetric might depend on the full assortment of mechanisms and their balance. I don’t know them, so I’m not going to speculate on symmetry vs. asymmetry based on physics.

    That’s where I am too… it’s not exactly a “no brainer” that you’d expect symmetry here. That smacks of a spherical-chicken approximation to climate.

    Bill Illis, even in your extent series there is a visible trend in the minimum if you look at the amount of shift from the beginning to the end of the series; it’s just superimposed on a lot of noise. I don’t think picking minima straight off the daily data is the way to go, because you’re superimposing weather noise onto the seasonal pattern.

    Recall it is the question “is there a shift in the seasonal pattern?” that is being raised here, not the seasonal pattern plus some other large source of measurement error (with respect to the measurement of the seasonal pattern). But even doing it the way you did it, you’re getting about 9 days (rounded to the nearest whole number), based on your estimated trend and the number of years, which is “about 10” (and not “zero”).

    It’s a fair question of whether this trend is statistically significant, but adding weather noise to the estimate of the minimum is going to inflate the uncertainty in any estimate of the trend in the shift of the minimum in the seasonal pattern…this is true regardless of whether the underlying trend is positive, negative or zero.

    It is certainly the case that not all methods of analysis are going to be equally efficient in estimating things like the minimum in the seasonal ice extent pattern. I do mean to go back and look at different methods for estimating the minimum (and other spectral properties), but I don’t have any time for fun at the moment… I’m out in the field right now.

    Finally, I believe that DeWitt is continuing to look at sea ice area rather than extent. In effect he is saying “I don’t see any shift in the minimum in a metric that Carrick isn’t using, and in fact disputes whether it is even a reliable metric, so I can’t understand why he would see a different result using a different metric, and I clearly don’t see any shift when I am using a metric he’s not looking at, nor making any claims about.”

  38. Carrick (Comment #81158)
    September 6th, 2011 at 6:42 pm
    ———-

    To detect a change in the seasonal pattern would require a lot of computational power – there are over 14,000 datapoints, so checking just the daily minimum date over 38 years takes enough effort.

    I’ve run data analysis programs that took 5 hours to run on a modern computer (and mine would die today if I tried to do something similar).

    Having said that, the seasonality of 2007 was clearly different from that of 1980 (or the other unusual year, 1996). I don’t ascribe it to open water in September because the albedo of open water rises to that of sea ice once the solar inclination falls to 10 degrees or so (and it is only 23.4 degrees on June 21 at the north pole).

    September weather is about all we can ascribe the changes to. Once October rolls around, low sea ice conditions catch up very quickly toward the average, with the highest sea ice accumulation periods occurring in years when the September sea ice is lowest (this is one situation where the seasonality has changed).

    Bill Illis, I was thinking of something more like a Fourier decomposition method. I don’t think it’s a particularly CPU-intensive method… the issue is “efficiency”, and I’m not sure it’s the appropriate basis for expansion given the high max/min asymmetry.

    I think if you’re going to do a full analysis, not only do you have to remove the effect of short-period fluctuations associated with ordinary weather, I think you’d probably want to include the effects of the Arctic Oscillation index in your regression analysis. (See e.g. this.)

    What effect all of this has on the annual trend, and whether there is any net phase shift in the seasonal pattern after you’ve done the appropriate analysis… I couldn’t possibly say, because I have yet to do it myself.

  40. Re: Carrick (Sep 6 18:42),

    Finally, I believe that DeWitt is continuing to look at sea ice area rather than extent.

    Nope. I’ve done it with both extent and area. The absolute minimum data is very noisy. It’s highly questionable to justify your 10-day shift using a linear trend that isn’t even close to being significantly different from zero. As I posted earlier, I used a quadratic fit of NSIDC and JAXA extent data, which has much less noise, and still didn’t see a statistically significant trend in the minimum date. Those data are clearly not linear.

    I wasn’t sure what I was looking at there, DeWitt. Thanks for the explanation; it wasn’t obvious to me last night. I wouldn’t personally use this result to argue that the Arctic ice minimum is constant over time.

    I’m not sure about the step function pre-1985, looks artifactual to me (like maybe they changed how they measured ice extent).

    Also, are you really arguing that 1985-current in your set is consistent with a constant minimum? Eyeballing it, the slope doesn’t look that different from what Bill Illis or I obtained with the other data set.

  42. Re: Carrick (Sep 7 06:19),

    I wouldn’t personally use this result to argue that arctic ice minimum is constant over time.

    The null hypothesis is that the minimum is constant. That hypothesis cannot be rejected at the 95% confidence level. What more needs to be said?

    Pre-1987, the extent data is every other day. That may have biased the quadratic fit.

  43. Re: Carrick (Sep 7 06:19),

    To put it another way: I don’t need to argue that there has been no change in the minimum date. You are the one who asserted that there had been a ten day change in the minimum extent since 1970. That assertion continues to be unsupported by quantitative evidence or literature citation.

  44. DeWitt # 81165,

    The null hypothesis is that the minimum is constant. That hypothesis cannot be rejected at the 95% confidence level. What more needs to be said?

    Well, one might say that 95% confidence is not always the best level to think about with very noisy data. If the null hypothesis can be rejected at some lower level (say, 80% or 90%) then that might be informative in this case.
    Mind you, I am not saying that is the case here, I am just saying there is nothing magical about 95%. Sort of like Santer et al arguing that the tropospheric temperature data is so noisy that you can’t state with 95% confidence that the models are wrong about tropospheric amplification… even though his analysis showed the chance was ~90% that they are in fact wrong.

  45. Sort of like Santer et al arguing that the tropospheric temperature data is so noisy that you can’t state with 95% confidence that the models are wrong about tropospheric amplification… even though his analysis showed the chance was ~90% that they are in fact wrong.

    I printed that out yesterday and now need to read it. What a huge number of authors! 🙂

  46. Re: SteveF (Sep 7 08:40),

    If the null hypothesis can be rejected at some lower level (say, 80% or 90%) then that might be informative in this case.

    For the quadratic fit, the P value of the slope is 0.5. The quadratic fit data also has a negative slope. Let’s cherry pick and throw out the data before 1987. That does give you a statistically significant positive slope, ignoring serial autocorrelation. But that slope predicts that minimum extent in 1979 should have been on day 247 ±3 instead of the actual day 264 for both NSIDC extent and CT area and day 256 for the minimum of the quadratic fit to the NSIDC extent data. Even then you don’t get ten days difference from 1970 to 2011. That doesn’t pass the smell test. Carrick has failed to cite a source other than “we know” for his assertion that there has been a change of ten days from 1970-2010.

  47. lucia (Comment #81172),
    “What a huge number of authors”
    Appeals to authority may seem more credible to some when there are lots of authors. Question: how many qualified climate scientists does it take to refute an “obviously flawed” paper in climate science? Answer: If it is really very flawed, then it takes exactly one… and he/she doesn’t even have to be a climate scientist.
    .
    BTW, I just noticed that the small animal in your cat’s mouth does not look happy, and maybe even dead! Kind of gives one a new perspective about tigers. 😉

  48. Speaking of cats, I got a new kitten from the feline rescue (he was found abandoned in a parking lot). His name is Mazel Tov and he had one infected eye removed, but was cleared by the vets for adoption. Orange and white and spunky. 😉

    Andrew

  49. SteveF–
    Yes. “The General” was bringing it to the cat door. Which I locked (mostly for fear the chipmunk was still alive. “The General” liked to play catch and release.)

    Then, I opened the window as he climbed the step to show me his prize. He was our favorite cat ever. He was also quite an amazing hunter, with a wide variety of hunting techniques which included waiting on a tree branch and landing on chipmunks.

    Neither of our current cats are chipmunk killers. Duma specializes in bunnies, fairly large rabbits and squirrels. (No photos yet. But we know he eats plenty of rabbit based on the leavings. ) ‘Mo doesn’t hunt anything beyond shadows.

  50. Lucia, # 81179,
    My place on Cape Cod has lots (dozens!) of chipmunks that live around the yard… they are remarkably brazen. An outdoor cat would be very happy hunting there, save for the local populations of foxes and coyotes. I have been told cats that go outdoors often seem to ‘disappear’ for no apparent reason. It is a hard, cruel world out there.

  51. I have been told cats that go outdoors often seem to ‘disappear’ for no apparent reason.

    Maybe they just find a better family. 🙂

    As you may recall, Duma adopted us by breaking in the cat door one November and then gradually decided these were good digs. When Duma first began adopting us, the neighbors used to see him chasing off local raccoons. He’s not really that large of a cat, but evidently, he can scare off raccoons!

    We know where he used to live; it was about 1 mile away. He was chipped and the vet called the previous family. Since he’d pretty much moved in with us, they agreed to just let him be our cat. He must have really been looking for a “better” (from a cat POV) family. The previous house had numerous cats, dogs, kids. No cat door. Clearly, he thinks this place with a cat door and less competition for attention is a better deal.

    He still goes out. We sometimes get worried if we hear weird animal noises at night. Lucky for us, he’s never come home smelling like skunk! Unfortunately, chipmunks don’t interest him. Too small, I guess.

    Mo (the diabetic one) doesn’t go out.

  52. DeWitt:

    The null hypothesis is that the minimum is constant. That hypothesis cannot be rejected at the 95% confidence level. What more needs to be said?

    There’s a lot more than can be said, as you probably are aware. You’ve got an obvious trend in your data, and are happy with calling it “null”. If that makes you happy… great.

  53. DeWitt:

    To put it another way: I don’t need to argue that there has been no change in the minimum date.

    And nobody asked you to, either. You have a point in there somewhere?

  54. Carrick, DeWitt,

    Humm… seems senior scientists of all types can be prickly. 😉
    .
    I think it is pretty clear the data is at least mildly suggestive of a trend toward a somewhat later minimum. IMO, there is no possibility of accurately quantifying that based on the data available. A carefully reasoned physical argument (dare I say it… an ice melt model) would be, to me at least, more convincing than statistical arguments. I haven’t seen that so far.

  55. Lucia,
    “he can scare off raccoons!”

    In my youth, I saw a raccoon (a mother of 5 babies) clamp onto the neck of a 70 lb hound dog and nearly kill it. The dog’s owner saved the dog by clubbing the raccoon to death (the babies went to the local animal rescue folks).
    It is true that raccoons do not like confrontations (they seem to run away when possible), but I have no doubt they could easily kill a cat… if needed.

  56. SteveF, I agree a model based argument would be an improvement over purely statistical ones (that is usually the case). One simple model is “it’s getting warmer”, so the first freeze date occurs later in the season as the arctic temperature warms. In fact, I don’t see how that couldn’t be the case (unless the temperature data are that wrong, which I doubt).

    Pushed and prodded, here’s my updated analysis.

    First my version of GSFC + JAXA spliced data. (This is updated daily.)

    As I’ve explained a number of times by this point, this is the data set I’m working from. Whether that’s good, bad or otherwise is a point of debate, but it’s the data I’m talking about with respect to temporal trends, not any other random data set.

    Secondly, here’s my minimum data.

    And here is the fit.

    I tested it with Akaike’s information criterion, and this test strongly prefers a linear to a quadratic fit. At the 95% CL, the number of days shift from 1972 through 2010 is 3.4 ≤ days ≤ 11.0.
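
    In outline, with hypothetical vectors year and minday holding the fitted minimum dates:

        f1 <- lm(minday ~ year)
        f2 <- lm(minday ~ year + I(year^2))
        AIC(f1, f2)                            # linear wins here
        confint(f1)["year", ] * (2010 - 1972)  # 95% CI on the total shift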

    My original guesstimate of “10 days” is admittedly high, but it does appear, at least with the data set I was using, that there is a trend with year in the minimum ice date towards later in the season.

  57. I have no doubt they could easily kill a cat… if needed.

    That’s why I hope he gave up the raccoon fights. He’s got the neighbor’s cat cowed. Oddly, he defers to old, fat, slow, clawless ‘Mo.

  58. Side note on how I fit the data: I used a 90-day period centered on the “picked” minimum value, and fit it to a quartic function rather than a quadratic function.

    I also looked at serial autocorrelation, and found it to be around 0.07. Correcting for this moves the p value to around 0.002 or 0.003 depending on which method you use.
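
    One common recipe for that correction (a sketch, not necessarily the exact one I used): shrink the effective sample size by the lag-1 autocorrelation and recompute the slope’s p-value.

        r     <- 0.07                       # lag-1 residual autocorrelation
        n     <- length(residuals(f1))
        n_eff <- n * (1 - r) / (1 + r)      # effective sample size
        t_adj <- summary(f1)$coefficients["year", "t value"] * sqrt(n_eff / n)
        2 * pt(-abs(t_adj), df = n_eff - 2) # corrected two-sided p-value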

  59. Lucia,

    Oddly, he defers to old, fat, slow, clawless ‘Mo.

    I too have been puzzled by odd animal dominance outcomes. A lot of times I think it has more to do with willingness to fight than with capability to win.

  60. SteveF–
    Jim’s and my shared theory is that he knows Mo “belongs” here and the humans will step in. Also, he’s not perpetually hungry like ‘Mo.

  61. Carrick,

    OK, you have convinced me: the most likely outcome is that there has been some shift to a later date for the minimum. Still, I would not bet my life on it. Too many other (not accounted for) variables may enter the picture. Still, I would like to know why DeWitt’s arguments for equal changes (earlier melt, later freeze) are not correct.

    Lucia,

    he’s not perpetually hungry like ‘Mo.

    I will assume that Mo is of the ‘large’ persuasion… among cats, that may be enough to dissuade him from fighting.

  63. Mo is very, very fat and doughy. He is afraid of everything; he has no claws. But he really, really, really wants to eat all the time. If Duma wanted to take Mo, he could take Mo. But Duma doesn’t.

  64. Lucia,

    really, really, really wants to eat all the time

    Like lots of overweight, late life diabetic people the world over.

    Second thought: maybe that is why Michael Tobis has so many of his soylent green thoughts.

  65. SteveF:

    Still, I would like to know why DeWitt’s arguments for equal changes (earlier melt, later freeze) are not correct.

    IMO, because those have to do with the inflection points, not the extrema.

    …. And there is an inherent asymmetry between ice & water, so that may not even hold true. Something to think about… how does the inflection point relate to the vernal and autumnal equinoxes? Is it on the same day, or is there an offset? If there is an offset, that speaks to thermal inertia, and the thermal inertia of the Arctic system is certainly undergoing change right now….

  66. Carrick (Comment #81193)
    September 7th, 2011 at 4:49 pm
    ———-

    I don’t get the same minimum dates when I look through your data.

  67. Bill, were you fitting to a quartic function, or picking the “raw” minima? (See this note on how I obtained my minima. This relates to a discussion that DeWitt and I had about removing the effects of weather noise.)

    Bill, if this helps any, this is my fit for year “35” (1971 + 35 = 2006). Year selected at random, other than I like the number “35”. (J/K.)

    Note that some fitting programs may blow chunks when trying to fit this data to a quartic function: It’s ill-conditioned if you don’t center the data before performing the fit, or use some other scheme than LU decomposition to fit the data. (See this comment thread for more info on that. Note emphasis on may.)
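
    In R specifically, lm() uses a QR decomposition, so even raw powers usually survive; orthogonal polynomials sidestep the conditioning issue entirely (a sketch, with hypothetical doy and ext):

        raw  <- lm(ext ~ doy + I(doy^2) + I(doy^3) + I(doy^4)) # can be ill-conditioned
        orth <- lm(ext ~ poly(doy, 4))  # orthogonal basis, well-conditioned
        # Fitted values agree; only the coefficient parametrization differs.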

  69. A few notes “added in proof”:

    I tweaked slightly the shell script that computes the fit minima, and updated my fit-minimum data file accordingly (I just increased the resolution of the search for the minimum). This has minimal effect on the results. If I get around to writing a quartic fit routine in e.g. awk, I’ll post the script.

    As it is right now…it’s not particularly useful, because it uses my private set of UNIX commands, the main problem being my “linfit” program, which contains proprietary code that I am not allowed to share.

    Secondly, I added a quadratic fit line to the figure I posted for Bill Illis, to show why I went to quartic rather than quadratic smoothing.
