Weighted Prediction of Ice Melt

Wednesday, I posted an update on the probability of a new record for the minimum 7-day average JAXA ice extent and also discussed the possibility of developing a prediction based on weighting a number of candidate predictive models using their corrected Akaike information criterion (AICc). Today, I’m going to explain how I weight predictions from a range of candidate models. I think the discussion will also let readers see that the choice of model can affect the estimated value of the ice minimum.

For those who want the short story though: today, my estimated probability that we will break the 7-day average JAXA sea ice extent minimum is 11%. If we were betting on the minimum today, I would enter 4.63 million km^2.

Recall that in the previous post I decided to attempt to come up with the best predictor of the minimum 7-day average JAXA ice extent using the time series for ice extent and ice area and no other information. In that post, I said I was limiting my method to single regressions. Oddly enough, the main reason for that limitation was that, on Wednesday, none of the multiple regressions achieved statistical significance for all parameters, and I thought I could simplify the blog discussion by ignoring them.

But yesterday that changed, so today I will be including a multiple regression in my list of candidate models. The candidate models, which meet the criteria of being based only on the time series for Ice Extent or Ice Area and having all fitting parameters other than the constant shift statistically significant, are:

  1. Min. 7-day JAXA ice extent v Current 7-day JAXA ice extent.
  2. Min. 7-day JAXA ice extent v Current 7-day CT area.
  3. Remaining melt v Current “Vulnerable ice” area (i.e. the difference between the Current 7-day JAXA ice extent and the Current 7-day CT area).
  4. Min. 7-day JAXA ice extent v Current 7-day CT area and Current “Vulnerable ice” area.

My method of selecting candidates and determining the best estimate for the minimum 7-day average sea ice extent was to (an R sketch of the full procedure appears after the second table below):

  • Regress the regressand (i.e. the item I wish to predict) against the regressors (i.e. the items I hope will predict the regressand).
  • Excluding the intercept, check whether the coefficients for the fit are statistically significant at the 95% level. If all coefficients are significant, the model is included in the list of candidate models.
  • Find the corrected Akaike information criterion, $latex AICc_i $, for each candidate model ‘i’.
  • Find the minimum, $latex AICc_{min} $, over all candidate models. The model with the minimum AICc is the most probable model based on AICc.
  • Compute the probability that a model ‘i’ is “best” relative to the most probable model using $latex p_i=\exp(-(AICc_i - AICc_{min})/2) $.
  • For each candidate model, determine the best estimate for the minimum 7-day ice extent, $latex y_i $, and the standard error in that estimate, $latex s_i $. To compute the standard error, I use the sum of the squares of the residuals and also include the uncertainty arising from the uncertainty in the estimated fitting parameters.

    Processing up to this point yields the following results for each of the four candidate models:
     

    Candidate Models and Predictions for NH Ice Extent Minimum
    (Prediction and Standard Error in million km^2)

    Regressors              AICc     Prediction   Standard Error   $latex p_i $
    Extent                  900.1    4.80         0.32             0.014
    Area                    895.2    4.49         0.30             0.161
    Vulnerable Area         891.6    4.70         0.26             1.000
    Extent and Vulnerable   891.7    4.58         0.28             0.935

    (Note: The most probable model is the one predicting remaining ice loss from the “vulnerable ice” discussed on Wednesday. The prediction of total extent based on both the current area and the current amount of vulnerable ice is assessed as 93.5% as likely to be the best model.)

  • Exclude any candidate model with $latex p_i<0.05 $ from further consideration. Based on this, I removed the model predicting the minimum NH ice extent based on the current extent from my list of candidates.
  • Compute weights for each model using $latex w_i=p_i/\sum(p_i) $ where the sum is computed over the models remaining after the previous step. These weights are estimates of the probability that model ‘i’ is “true” under the assumption that the collection of models considered includes the true model.
  • Because each model ‘i’ is assumed to have probability $latex w_i $ of being “true”, the best estimate based on information from all models is $latex y_{weighted}= \sum{( w_i y_i)} $.
  • Estimating the uncertainty requires first determining the bias between my stated best estimate, $latex y_{weighted} $, and the true value if a particular model is true, $latex b_i=y_i- y_{weighted} $. The conditional rms of the errors if model ‘i’ is true is then $latex sc_i=\sqrt{(s_i^2 + b_i^2 )} $. The uncertainty for the weighted model is $latex s_{weighted}= \sqrt{\sum( w_i sc_i^2)} $.
     

    Best Estimate Based on Weighting of Candidate Models
    (Prediction and Standard Error in million km^2)

    Regressors               AICc     Prediction   Standard Error   $latex w_i $
    Area                     895.2    4.49         0.30             0.077
    Vulnerable Area          891.6    4.70         0.26             0.477
    Extent and Vulnerable    891.7    4.58         0.28             0.446
    Weighted (final model)            4.63         0.29
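
    For readers who want to reproduce the arithmetic, here is a minimal R sketch of the procedure above. It is an illustration rather than my actual script: the data frame ‘d’ and its columns (min_ext, ext, area; all 7-day averages) are hypothetical stand-ins for the JAXA/CT time series.

    d$vuln <- d$ext - d$area                     # "vulnerable ice"

    aicc <- function(fit) {                      # corrected Akaike criterion
      k <- attr(logLik(fit), "df")               # parameter count incl. sigma
      n <- nobs(fit)
      AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
    }

    models <- list(
      lm(min_ext ~ ext, data = d),               # 1. extent
      lm(min_ext ~ area, data = d),              # 2. area
      lm(I(ext - min_ext) ~ vuln, data = d),     # 3. remaining loss ~ vulnerable
      lm(min_ext ~ area + vuln, data = d)        # 4. multiple regression
    )

    a    <- sapply(models, aicc)
    p    <- exp(-(a - min(a)) / 2)               # probability relative to best model
    keep <- p >= 0.05                            # drop implausible candidates

    today <- tail(d, 1)                          # freshest 7-day averages
    pr <- lapply(models, predict, newdata = today, se.fit = TRUE)
    y  <- sapply(pr, function(z) unname(z$fit))
    y[3] <- today$ext - y[3]                     # model 3 predicts remaining loss
    s  <- mapply(function(z, m) sqrt(z$se.fit^2 + summary(m)$sigma^2),
                 pr, models)                     # parameter + residual uncertainty

    w   <- p[keep] / sum(p[keep])                # model weights
    y_w <- sum(w * y[keep])                      # weighted best estimate
    s_w <- sqrt(sum(w * (s[keep]^2 + (y[keep] - y_w)^2)))  # weighted uncertainty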

On Wednesday, I posted a graph illustrating the prediction of the remaining ice extent loss based on the amount of vulnerable ice. Today, I’ll post the minimum 7-day ice extent against the current 7-day average ice area (blue), along with the best-fit prediction based on that model (orange circles):


The orange caption indicates that if the model based on current ice area is true, the best estimate of the minimum 7-day ice extent is 4.49 million km^2 and there is a 24% probability of breaking the 2007 ice extent minimum. This is the highest such probability among the candidate models used to create the weighted model.

The best-fit minimum ice extent based on the probability-weighted model is 4.63 million km^2 and is illustrated with slate-blue triangles. Note that the predicted minimum based on the weighted model exceeds that based on ice area alone, and using the weighted model the probability that the 2007 minimum will be broken is estimated at 11%.
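
Where does the 11% come from? It is consistent with treating the prediction error as normally distributed; a one-line sketch, with ‘record_min’ standing in for the 2007 record 7-day-average minimum (a number not quoted in this post):

    pnorm((record_min - 4.63) / 0.29)   # P(this year's minimum < 2007 record)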

Also note the predicted minimum exceeds my Wednesday estimate, when my best estimate for the minimum ice extent was 4.7 million km^2 and the probability of a new record minimum was 9%. Because new data are available every day, my estimates will change each day. However, the main reason for today’s change is that I added a candidate model when the multiple regression based on both area and vulnerable ice achieved significance. This model predicts a lower extent than the method based on vulnerable ice alone and is assessed as nearly as probable. The consequence is a reduced estimate for the minimum extent.

How are blackboard bettors doing?
I added purple traces to the graph to show how Blackboard bettors are doing. The solid purple line passing through the black circle shows the average of blackboard bets, excluding Shoosh/Jay’s ridiculous bet. The upper dashed line denotes the upper 95% cutoff for bets; the lower dashed line denotes the lower 95% cutoff.

The mean blackboard bettor placed a bet approximately equal to last year’s ice extent minimum. Comparison with the upper line indicates that a few blackboard bettors have placed bets that currently fall well outside my ±95% uncertainty intervals based on the weighted model. Interestingly enough, the lower 95% uncertainty interval for the weighted model happens to exactly match the lower 95% cutoff for bets.

Going forward, I will be using this general method to provide a best estimate of the ice minimum. As we get nearer and nearer the minimum and use fresher data, we should expect the uncertainty intervals to collapse toward zero. I’ll also be showing where my current estimate for the upcoming minimum falls relative to the bets placed, and since a few people placed very high bets, we can probably start reporting who is now out of contention on this bet.

55 thoughts on “Weighted Prediction of Ice Melt”

  1. I looked to see if there was a trend in loss to minimum for CT Arctic area. There isn’t. Or at least the 95% confidence limits on the slope included zero. I’d been using the loss to minimum for the average to project the minimum area. That may underestimate the projected loss. If I use the average loss to minimum, the projected minimum is 2.751 Mm², well below the 2007 low of 2.92 Mm². The uncertainty is large, though. For ±2σ, the upper and lower limits are 3.24 and 2.26 Mm².
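
    A minimal R sketch of that projection, with ‘hist_loss’ and ‘current_area’ as hypothetical stand-ins for the past losses to minimum and today’s CT area:

    proj <- current_area - mean(hist_loss)   # projected minimum area
    proj + c(-2, 2) * sd(hist_loss)          # approximate +/-2-sigma limits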

  2. DeWitt–
    Do you mean remaining CT Area loss to minimum? That is: did you fit

    “remaining CT area loss ~ year”?
    I wouldn’t be surprised there is no trend with year because there was no detectable trend with year for remaining (JAXA/GSFC) extent loss to minimum. The fitting parameter with year is nowhere near statistically significant. (If I ignore the lack of significance, my uncertainty intervals are huge and include zero trend with time.)

    Have you tried
    “remaining CT area loss ~ ‘vulnerable extent’”, where ‘vulnerable extent’ is the difference between the extent and area? That’s what I found hindcasts remaining extent loss well.

    If you are using R and try several fits, you can get the AIC this way:

    fit <- lm(y ~ x)   # fit the linear model y = m*x + b
    AIC(fit)           # Akaike information criterion for the fit

    Of course, you can also read off the statistical significance using
    summary(fit) and see the probability that m, the fit coefficient in y = m*x + b, happened by random chance.
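
    For instance, the slope’s p-value can be pulled straight out of the summary (a one-liner using the same fit):

    coef(summary(fit))["x", "Pr(>|t|)"]   # probability the slope arose by chance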

    I’d run all this for minimum area… but I haven’t generalized by turning everything into subroutine calls. (I haven’t because I’ve been thinking about how I decide which possible models to include. So, while this post discusses only area, extent, and the difference between area and extent as regressors, I have manually checked time (i.e. year). Currently, for the upcoming minimum, time doesn’t make it into the mix of regressors that usefully hindcast past extents or ice extent losses. For remaining loss, it doesn’t qualify because it’s not even statistically significant. For extent, its AICc-based relative probability is so low compared to area and extent that it’s not worth typing the numbers into my first table above.)

  3. The average loss in extent from today (day 224) to the average minimum on day 255 (Sept 12) is 965,525 km2.

    The last four years have been a little higher than that but there is no correlation of this remaining 31 day melt rate with the year.

    The highest loss over this period was 2008 with 1,540,000 km2 and the lowest was 1975 at just 150,000 km2, though 1972 had a big number as well. It seems to be random, depending on the weather, I guess.

    So take a million off of today’s JAXA value of 5.827 million km2.

  4. I like the approach of showing predictions using various combinations of area, extent, and “vulnerable” ice (extent minus area). If I were making predictions, and could figure out where the data are, I think the next variable to use would be the various basin areas; e.g., if I know that the minimum 7-day average has historically never included any extent in, say, the Greenland Sea, and yet I see some area in the Greenland Sea in August, I might be able to assume that the Greenland ice will melt out to zero.

    (I think this approach would have more value in earlier months: sometimes the Hudson Bay melts out early and sometimes late, but either way the extent of ice in the Hudson Bay isn’t really telling us much about what the September extent is likely to be. So if it melts out early, and we were making a prediction based on total ice area in all basins, we’d be predicting too much September loss, and if it melts out late, we’d be predicting too little September loss.)

  5. Re: M (Aug 13 12:31),

    The Greenland Sea would be a bad choice. At this time of year, much of the ice in the Greenland Sea is ice from the Arctic Basin that has been transported through the Fram Strait. If anything, it may be inversely correlated or the correlation has changed sign over time. The Canadian Archipelago might work if you had the data.

  6. Re: lucia (Aug 13 10:47),

    Do you mean remaining CT Area loss to minimum? That is: did you fit

    “remaining CT area loss ~ year”?

    Yes.

    I was curious to see if remaining area loss was a function of area and used time as a proxy for area. Obviously, I should have used area rather than time. I doubt the results will be significantly different, though.

    The current area is already lower than in any year before 2007 and should go below 2009 within a week.

  7. M–
    Does anyone report extent or area for various basins on a daily basis? I think for short term predictions, using the freshest data possible tends to be useful.

  8. DeWitt–

    If anything, it may be inversely correlated or the correlation has changed sign over time. The Canadian Archipelago might work if you had the data.

    Negative correlations add predictive value too. 🙂

    Obviously, I should have used area rather than time. I doubt the results will be significantly different, though.

    I think you are correct that area won’t give any better predictions than time. Remaining area isn’t predictive for remaining extent loss either.

  9. DeWitt–
    I quickly tried to shove in the stuff to estimate the minimum ice area. Using the vulnerable ice method, I find that the upcoming area loss is *negatively* correlated with the amount of remaining vulnerable ice, but the coefficient is not statistically significant. So, using the mean loss, I’m predicting the record for the area minimum will be broken; the probability of breaking that record is about 65%.

    I haven’t looked at any other methods, so I don’t know how this would change if I used current ice area, ice extent, etc. and fit that to the upcoming minimum.

    Looks like we might break the record for the area minimum but not the extent minimum.

  10. lucia (Comment #80369)
    August 13th, 2011 at 3:04 pm
    —————-

    The NSIDC started reporting their daily numbers earlier in July (the report only includes the last 29 days on a rolling basis), and the individual regions are also contained in the csv spreadsheet. The last 29 days is probably not going to help much (what is wrong with these guys) but …

    The home page for the data is here, where regional breakdown charts back to 2007 are also available. The csv file is at the top right of the page. All the charts and data come from their FTP site, which is notoriously slow or broken (what is wrong with these guys, for the 100th time).

    http://nsidc.org/data/masie/index.html

    The Greenland Sea, which is where the majority of the ice gets exported out of the Arctic to melt in the south, has been well above all other years for most of 2011, although only the last 29 days are shown (don’t need to say it).

    ftp://sidads.colorado.edu/DATASETS/NOAA/G02186/plots/r07_Greenland_Sea_ts.png

  11. Bill-
    Thanks.
    I clicked the link called “Comma Separated Values (CSV) file”, but that only has data for 2011. That’s not suitable for testing how a statistical model might hindcast.

  12. Bill.

    Mechanical stuff really starts to matter

    http://www7320.nrlssc.navy.mil/hycomARC/navo/arcticicespddrfnowcast.gif

    There is an interesting balance going on between ice melting in place, moving and compacting, and being exported.

    with little MYI to speak of anymore .. fascinating problem.

    Lucia, I don’t think a statistical approach is merited. Now it’s down to where is the thick ice, where is the thin ice, who is moving where, and then water temps and sunshine.

    Cool problem.

  13. Mosher– I see graphs, but no links to data. I’m looking for numbers.

    Lucia, I don’t think a statistical approach is merited. Now it’s down to where is the thick ice, where is the thin ice, who is moving where, and then water temps and sunshine.

    Cool problem.

    If you are suggesting a mechanistic model would be interesting, I agree. But that doesn’t mean that statistical models will have no predictive ability. Phenomenological models for heat transfer rates are great, but statistical fits often give pretty good estimates too. In the case of ice: yes, ice is melting in place, moving, compacting, and being exported. This is happening right now, and will happen from now through mid-to-late September. We don’t know the future weather; we would have to predict that to then predict how the ice is going to move, melt, etc.

    So, no matter what method is used, we are going to create a best estimate and a spread around that estimate. Potentially a mechanistic/phenomenological model would give more efficient estimates, but that doesn’t mean a curve-fitting approach based on what’s happened in past years isn’t merited. It just may have bigger uncertainty intervals than some other method.

    But I don’t have a mechanistic model and can’t possibly develop one in time to apply to this year’s minimum ice extent predictions. (I’m not going to bother to try to develop one for next year either.)

    So, I’m going to give estimates based on what’s happened in other years using data easily available on the web. If you have a better method, have at it! 🙂

  14. Steve Mosher– Also, to place the images of ice in those basins in context, I would need data back in time; ideally, back to at least 1979, since that’s how far back the CT area data goes.

    uhmmm– what’s “MYI”?

  15. Today’s preliminary JAXA extent is 5713594 km². That’s 113906 km² lower than yesterday. Could be a fluke, of course, but it could also be the long overdue first step off the cliff.

  16. Hmm, OK… I’ll go hunting, maybe in places I shouldn’t be looking…

    WRT the mechanical versus statistical: maybe “merited” was the wrong word. I don’t disagree with anything you said.

    I’m merely expressing a hunch: it looks like a mechanical model would outperform the statistical one from this point on. If I had one, I’d use it.

  17. DeWitt– That’s high. I added a black dot to indicate the melt/week *if* today’s rate remains constant for 7 days.

    Daily melt is noisy, but that is very high for this time of year.

  18. Steven Mosher–
    If I had a mechanical model, I’d use it too. Heck, if I had a metric that quantified the velocity of ice in the direction of the Fram Strait with daily values back to 1979, I’d see whether that had a good correlation. (I need the old values to know whether the velocities on the graph you show are very fast or just a little fast, and also to know how much difference they make.)

    But…. I don’t have that. So, I’m using what I have.

  19. Here is the daily melt rate in 2011 versus the average from 1972 to today.

    It looks like the daily numbers are variable but they are not any more variable than a 7-day, 2-week, or 6-week average.

    It also looks like the melt rate primarily just varies around the average (even 2007 followed the pattern closely). It can be off for a week or so at a time, and the minimum can occur on a different day than typical (earlier or later), but the daily melt rate follows the average (I looked at other years and they are all the same). Perhaps the peak size in March is a better indicator.

    http://img546.imageshack.us/img546/5020/nhsiedailymeltrate.png

  20. Re: lucia (Aug 14 06:32),

    The final number was higher: 127,187 km². If it were that high for 7 days, 2011 would easily pass 2007. I expect the rate to be below average (more negative) for a while, but I would be surprised if it averaged -100,000 km²/day.

    Not that anyone seems to care this year, but the southern Northwest Passage has been open for a while. It’s looking like the Parry Channel may open this year too.

  21. DeWitt– Yes. There is still a fighting chance we could get below 2007. Mosher’s graph showing the wind speeds suggests we really might break the ice extent minimum. However, I don’t know how to incorporate that into projections. I don’t know how often in the past we’ve had wind speeds and directions just like these. But I do know that the formation has just got to be favorable to ice loss.

  22. Mosher– Is that graph automatically updating? I thought last night it was showing ice flushing through the Fram and now it’s not!

  23. lucia,

    In the last three days, CT Arctic area has lost 0.38 Mm². It’s now below 2009 with only 2007, 2008 and 2010 lower. There is also far more vulnerable ice this year than in 2007, 1.81 Mm² in 2007 vs 2.49 Mm² in 2011.

  24. Yes, Lucia, that graph of Mosher’s must be changing; just take a look at the archive posted below:

    http://www7320.nrlssc.navy.mil/hycomARC/navo/arc_list_arcticicespddrf.html

    Though I must admit your math skills in this field go way above my head, I must say I admire your (and Neven’s et al.!) fighting spirit! These last years in the Arctic can’t be easy to fit into any models available. Sort of like trying to solve today’s global financial problems with yesterday’s tools!

    The slush ice of this year, mostly created by a continuing drop in both thickness and volume, will be the true joker in the last weeks of the decrease. Remember that the show runs till the sea temperature drops back to -1.7 C…

  25. DeWitt–

    There is also far more vulnerable ice this year than in 2007, 1.81 Mm² in 2007 vs 2.49 Mm² in 2011.

    Yep! But I think with vulnerable ice, it’s a race with time. The best fit formula says

    remaining extent loss = 5.349e-01* vulnerable + 1.968e+04

    With the time window remaining, on average, the remaining extent loss (r_e_l) is 53% of the vulnerable ice. (Of course, this doesn’t mean that only the vulnerable ice melts. The phenomenology involves compression, expansion, melting, etc. But this is the fit.)

    Since the final extent is

    final = current - (r_e_l)

    we have

    final_ext = current_ext - 5.349e-01 * vulnerable - 1.968e+04

    We have more ‘current_ext’ than on the similar day in 2007; this favors not breaking the minimum. But based on the fit, we expect to lose more than in 2007. However, owing to time, we may not lose all the vulnerable ice before the water starts refreezing.

    BTW: algebraically, this looks like:

    final_ext = 5.349e-01 * CT_area + (1 - 5.349e-01) * current_ext - 1.968e+04

    ——–
    (For those puzzling over the fact that I seem to have two very similar regressions: I sort of do. Note the one multiple regression I include is of the form:

    final_ext = A * CT_area + B * current_ext + const.

    which means, in some sense, my multiple regression and my “loss”-based regression are similar fits. But in the case of the multiple regression, A and B are not constrained to satisfy B = 1 - A. So these two end up being different fits with slightly different predictions for the minimum extent.)
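
    A minimal R sketch of the distinction (hypothetical data frame ‘d’ with columns final_ext, current_ext, ct_area):

    d$vuln <- d$current_ext - d$ct_area
    # loss form: fitting the loss forces B = 1 - A on the extent terms
    fit_loss  <- lm(I(current_ext - final_ext) ~ vuln, data = d)
    # multiple regression: A and B free to vary independently
    fit_multi <- lm(final_ext ~ ct_area + current_ext, data = d)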

  26. Christoffer–
    Thanks. This looks similar to what Steven posted:

    The red/black arrows indicated ice flushing rapidly out the Fram.

    Here’s how the image looks now. (The 20110819 looks like a time stamp; if so, it suggests the updating graph is a forecast. I’m not entirely sure what this is telling us.)

  27. Re: Christoffer L (Aug 14 11:10),

    All things being equal, slush ice should freeze over faster than open water, so the recovery from the minimum should be faster than 2007. Of course, lots of slush ice could mean more gets flushed through the Fram Strait…

  28. The US Navy Arctic ocean/sea ice system is at this link (they have the highest resolution ocean models, not surprisingly I guess). You can run 30-day animations (365-day as well), go back in the archive several years, etc. No data though.

    The ice thickness animation is probably the best and shows the ice export best (the ice direction animation depends too much on the low and high pressure systems and just moves around too much to be useful).

    http://www7320.nrlssc.navy.mil/hycomARC/navo/arcticictn_nowcast_anim30d.gif

    Sea Surface Temperature is also useful.

    http://www7320.nrlssc.navy.mil/hycomARC/navo/arcticsst_nowcast_anim30d.gif

    Home page here.

    http://www7320.nrlssc.navy.mil/hycomARC/arctic.html

  29. Yes, Lucia, the chart updates.

    Watching the ice flow change has been a real eye opener.

    Basically the ice exits the strait to die in the Greenland Sea. Ice in the basin dies a slow melt death… if it takes too long, then it survives. Very interesting dynamic problem.

    Some mornings I wake up and wonder if it will all be gone in a flash, or if the slush will just linger on long enough to refreeze. Either way, some major damage has been done to volume.

    I lurk at Neven’s fascinating place. It’s quite a contrast to WUWT when Goddard used to talk about ice.

    I’ll stick to what I said in 2007.

    When searching for arguments for or against AGW, do NOT run for the ice. Don’t base anything on the status of ice. It’s wicked hard.

  30. steven:

    When searching for arguments for or against AGW, do NOT run for the ice. Don’t base anything on the status of ice. It’s wicked hard.

    I’ll echo that… ice loss depends on weather, and weather is highly unpredictable in the Arctic, even over the span of a week, let alone a season.

    That said, there are long-term correlations in weather patterns, and the right model might be able to pick these out without needing a detailed dynamical model. You’d probably need wind speed, direction, temperature, and pressure fields over the Arctic to do it… basically I’m thinking of an empirical orthogonal function approach.
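
    If anyone wants to experiment: EOFs are just the principal components of the anomaly fields. A sketch, with a hypothetical matrix ‘field’ whose rows are times and columns are grid points:

    eof <- prcomp(field, center = TRUE)    # EOFs = principal components
    pcs <- eof$x[, 1:3]                    # leading PC time series
    patterns <- eof$rotation[, 1:3]        # corresponding spatial patterns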

  31. Steve Mosher–
    I prefer global temperature as an indicator of AGW. Still, long-term ice levels must be lower if the earth warms and higher if the earth cools. The difficulty is that the level seems very noisy, and the level is less amenable to prediction in the first place. So it’s not clear how long “long” has to be.

    It is a lot more exciting for gambling though! 🙂

  32. http://www.the-cryosphere-discuss.net/5/1311/2011/tcd-5-1311-2011-print.pdf

    Figure 3 here (and the text) suggests that winter and spring ice export may be more influential. Surprisingly, now is a low period for ice export.

    “The yearly cycle in area export is pronounced. Figure 3 shows that the major export occurs between October and April, and that there is close to zero export in July and August”

    Also see the whole of section 4.4, “Implications for summer sea ice extent”.

  33. Yes Lucia.

    I’m speaking only of rhetoric here. I noticed a pattern in the debate back in 2007: when people questioned the temperature record, nearly everybody on the AGW side ran for the ice. That seemed unwise then and unwise now. Unwise because, as Carrick notes, weather can dominate: things like soot, and wind, and weather patterns and currents. Funny, I told Goddard the same thing. DON’T hang your argument on the ice. Watch it, bet on it, it’s fascinating, but if you hang your hat on it as an argument, the weather may kick your rhetorical ass.

  34. Here’s the current version of the image at mosher’s link:

    hmm…. what I have doesn’t seem to animate. It’s stuck at 7/22.

    Correction… it animates if you click to enlarge.

    You know what I hate about these animated GIFs though? People don’t add a blank frame for ‘begin and end’. So, I have a hard time shifting my eyes back and forth to the time stamp to get a sense of the relationship between time and what’s happening. With this one, I have to watch over and over and over before I can get any notion of what’s happening now.

    Figure 3 here (and the text) suggests that winter and spring ice export may be more influential. Surprisingly, now is a low period for ice export.

    Yes. But for the purposes of predicting the upcoming minimum using data available now, what happened in winter and spring is already incorporated into the current ice area and extent. If we want to predict the remaining ice loss, we need to predict what happens between now and mid-to-late September.


  36. (The final black circle is the loss rate if we use the most recent daily rate of change and assume it is sustained for a week. The smooth curve is the loss rate computed by taking the difference of the 7-day smooth at points separated by 7 days.)
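
    A sketch of that smoothing and differencing, with ‘extent’ as a hypothetical vector of daily extents:

    ext7 <- stats::filter(extent, rep(1/7, 7), sides = 1)   # trailing 7-day mean
    weekly_change <- diff(ext7, lag = 7)                    # smooth values 7 days apart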

  37. I noticed over at Neven’s that the exponential fit to ice volume goes to zero in September 2015. Since extent and area don’t seem to be decaying that fast, I would guess that the exponential model may be unrealistic. The Gompertz curve seems more reasonable. I have some doubts about the accuracy of the volume measurement.
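
    For anyone curious, one common Gompertz parameterization can be fit with nls. A sketch with hypothetical year/volume data in a data frame ‘d’ and guessed starting values:

    # volume decays from a plateau a toward zero as year passes c
    fit_g <- nls(volume ~ a * exp(-exp(b * (year - c))),
                 data = d, start = list(a = 17, b = 0.1, c = 2020))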

  38. Lucia:

    You know what I hate about these animated GIFs though? People don’t add a blank frame for ‘begin and end’. So, I have a hard time shifting my eyes back and forth to the time stamp to get a sense of the relationship between time and what’s happening. With this one, I have to watch over and over and over before I can get any notion of what’s happening now.

    Here’s a slowed-down version with a pause at the end.

    It looks to me like what we are seeing is the circulation of polar Rossby waves.

  39. Perfect speed! I can look at the graphic and glance at the dates at the appropriate speed! Thanks, Carrick.

    According to that gif, there is high velocity out the Fram right now, but it’s slowing, and then we should see a slowdown in the loss rate. Of course, prediction is hard, especially about the future. Still, that’s interesting.

  40. You’re welcome, Lucia. The too-fast animation bothered me at least as much as not having a cue when it was restarting.

  41. Lucia,

    Yes. What I’m saying is that if you follow those scientists’ reading of the impact of mechanistic processes, then summer ice export may be of minor importance; it may not be worth worrying too much about daily ice export in August as a predictor of minimum ice extent. Having said that, I’ve been curious to see real-time ice export data myself but can’t find any online. I may bother these scientists with an email to see if they have any advice, or get their reading of the 2011 data so far.

  42. Ya, cyclone. Ouch.

    But we don’t have metrics for things like area of ice at 100% concentration.

    Will the slush puppy die? Get smooshed together? Exit the scene?
    Hang on long enough to refreeze and stay above the 15% level?

  43. Check out the phytoplankton bloom north of Norway in the Barents. (It happens often enough, but this one is very dense and has been there for more than a month now.)

    http://rapidfire.sci.gsfc.nasa.gov/imagery/subsets/Arctic_r02c04/2011227/Arctic_r02c04.2011227.terra.2km.jpg

    http://rapidfire.sci.gsfc.nasa.gov/imagery/subsets/Arctic_r02c05/2011227/Arctic_r02c05.2011227.terra.2km.jpg

    The Northwest Passage daredevils can probably start their run now.

    http://rapidfire.sci.gsfc.nasa.gov/imagery/subsets/Arctic_r04c02/2011227/Arctic_r04c02.2011227.terra.1km.jpg

  44. About the Tunnels.

    Seems they can also weaken hurricanes like I thought, according to this very, very smart professor. So if they can weaken a hurricane, then I claim they can also bring back our northern summertime Arctic ice if left in cooling phase for longer time periods. There is a HUGE and GIGANTIC power factor associated with them also! That will pay the bills and make them profitable also!

    quote:
    Yes, I have spoken with Patrick, and, yes, a scheme somewhat like the one he describes could weaken hurricanes threatening places like Miami that have strong western-margin currents just offshore. There are, however, numerous qualifications.

    The scheme that we discussed involved an array of several rows of devices across the Gulfstream. Each device would be a rectangular duct 140 m long and 10 by 14 m in cross section. Normally the devices would be moored horizontally at a depth of 100 m with their long axes aligned with the current flow. They would be nearly neutrally buoyant. When a hurricane approached, ballast at the downstream end of the channel would be released, allowing the device to float up to a 45 deg angle. Cold water entering the upstream end would flow up to the surface and mix with the warmer water there. Since the mixture would be negatively buoyant, it would sink. But mixing due to several (3-10) lines of these devices could cool the surface waters of the Gulfstream by 1-2 C, enough to weaken an Andrew-like hurricane from category 5 to category 3. A rough calculation indicates that a device every 100 m on each line of moorings (~1000 devices per ~100 km line) and 3-10 lines of moorings would be required. My guess is that it would cost $250K to fabricate and deploy a single device, but there might be economies of scale. One might also be able to optimize the size and spacing of the devices.

    Let’s say that careful calculation told us that 4 lines of 1000 devices each would do the trick. At $0.25M per device, the cost works out to 4*1000*($0.25M) = $1000M. The actual cost might range from a few hundred million to a small multiple of a billion (US billion = 1000M). One would want to do a detailed simulation before defining the scope of the project, but the basic notion is conversion of some of the kinetic energy of the Gulfstream into gravitational potential energy of the mixed water column. Again, I’ve not done that detailed simulation, only back-of-the-envelope calculations.

    Activation of the array would require accurate forecasting since it would take several days for the effect to make its way from south of the Dry Tortugas (optimum location for protecting the maximum amount of shoreline) to the landfall point.

    South Florida gets hit by a category 4 or 5 hurricane every few years, but the really damaging ones like Andrew tend to be once-a-generation events, or less frequent. The array would need to be deployed and maintained for a long time between activations that actually safeguard property, although false alarms would not be particularly costly. Annual maintenance could easily exceed 10% of the initial deployment cost. Bear in mind that Key West to Jacksonville is the only stretch of US coastline where this strategy would work. The other vulnerable sites, Houston-Galveston and New Orleans, lack the necessary strong offshore currents. While Georgia and the Carolinas also experience many hurricane landfalls and have the Gulfstream offshore, most of these cyclones are already weakening because of vertical shear of the horizontal wind, so a second installation north of Jacksonville would be much less useful.

    There has been a lot of talk about using wave and current energy to cool the ocean ahead of hurricanes. My general conclusion is that while these ideas might be made to work, the proponents underestimate the scope of the required effort, as well as the political will and recurring cost necessary to keep the project going in the long intervals between really damaging hurricanes. Skeptic that I am, I think that wiser land-use policy and more rigorous building standards are much more cost-effective and more politically feasible. A proof-of-concept that might entail deploying a half dozen devices has some appeal, but I think that there are more promising ways to spend disaster-prevention money.

    Best regards,

    Hugh Willoughby

    Department of Earth Science

    http://www.fiu.edu/%7Ewillough/PUBS/HEW_VITABREV.pdf

  45. “I noticed over at Neven’s that the exponential fit to ice volume goes to zero in September 2015.”

    Neven is a loser. The only reason he did that is because he knows it will start a controversy and then when it doesn’t happen he’ll blame Chinese coal emissions and push the date back to 2020.

  46. @Cyclonebuster

    This idea of harnessing the power of hurricanes is intriguing. However, I believe the future is lightning capture. There has to be enormous power in a single bolt of lightning. Actually, I’ve even read that an average 20-minute lightning storm can power the entire country for several minutes.

Comments are closed.