Time for the scheduled Wednesday NH ice update, which includes the new method I concocted to predict the minimum. Based on my full new method, my best estimate of the 7-day minimum ice extent is currently 4.64 million km^2, and the probability of breaking the 2007 sea ice minimum is approximately 9%. Because the method involves weighting the results of two candidate models by their relative probabilities based on the corrected Akaike criterion, the explanation of how I estimate this requires multiple blog posts. One post will explain the estimate based on the candidate model that appears most likely; other posts will cover the less likely model, explain how I weight the predictions from the two models, and estimate the uncertainty in the weighted result.
The new method was motivated mostly by my desire to have better and better short term models as we move toward the time of the NH sea ice minimum. More specifically, the choice of predictor grew out of a discussion with CRANDALS, who suggested that current ice area would be a better predictor of the ultimate ice extent minimum than current ice extent.
I admitted this might well be true. In my first cut for a short term fit, I’d limited myself to finding the best fit based on daily sea ice extent only and using only 1 predictor. This results in fits like the ones discussed in my previous NH Ice Extent Update.
As I would like to improve that model by considering current ice area, I asked for a link to daily NH ice area data. Neven pointed me to the Cryosphere Today NH ice area data starting in 1970. Yesterday, I fiddled around to find the best predictor of the 7-day smoothed JAXA minimum ice extent, limiting the analysis to data since 1979. For ice extent, I used a merge of JAXA and NASA-GSFC. For ice area, I used the Cryosphere Today data.
Once I had the data, I needed to think of candidate models to test. Currently, loads of people are trying to predict the NH Ice Extent minimum, and each has discussed statistical models with predictors based on different notions of what ought to plausibly predict ice extent or ice melt rate. My initial goal is to consider candidate models that use one predictor to predict the ice extent. That is: for now, I am avoiding multiple regressions.
With the available data, I computed the corrected Akaike criterion (AICc) for simple linear regressions for the following candidate models (a small R sketch of the comparison follows the list):
- Minimum 7-day Ice Extent predicted by Current 7-day Ice Extent. Motivation: for short term predictions, the best predictor will be the current value of the thing predicted. AICc=901.9
- Minimum 7-day Ice Extent predicted by Current 7-day Ice Area. Motivation: For slightly longer term predictions, the area gives a better estimate of the “less vulnerable” ice. AICc=894.2
- Remaining Extent Losses predicted by the difference between the Current 7-day Ice Extent and the Current 7-day Ice Area. Here I define “Remaining Extent Losses” as the difference between the 7-day average NH ice extent on the current day number (counted from Jan 1) and the minimum for a particular year. I will call the difference between the ice extent and ice area the amount of “vulnerable ice”. Note that using this model, the best estimate of the ice minimum is the difference between the current NH ice extent (which is known) and the predicted melt. Motivation: The difference between extent and area describes the amount of “vulnerable ice”. AICc=892.0
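For anyone who wants to play along at home, here is a minimal sketch of how a comparison like this can be set up in R. This is not my actual script (which is far messier); the data frame 'd' and its columns ('min_extent', 'cur_extent', 'cur_area', 'remaining_loss', all in million km^2) are placeholders standing in for the merged JAXA/GSFC extent and Cryosphere Today area series.

```r
# Sketch only: corrected Akaike criterion for a fitted lm, and the three
# one-predictor candidate models.  Column names in 'd' are hypothetical.
aicc <- function(fit) {
  k <- length(coef(fit)) + 1          # parameters, counting the residual variance
  n <- nobs(fit)
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
}

d$vuln <- d$cur_extent - d$cur_area   # "vulnerable ice"

fits <- list(
  min_extent_on_extent = lm(min_extent     ~ cur_extent, data = d),
  min_extent_on_area   = lm(min_extent     ~ cur_area,   data = d),
  loss_on_vulnerable   = lm(remaining_loss ~ vuln,       data = d)
)
sapply(fits, aicc)                    # lower AICc = more efficient model
```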
Based on AICc, of the three candidate models, predicting remaining extent loss as a function of the amount of vulnerable ice is the most efficient. Note that I find CRANDALS is correct that the current area gives a better prediction of the NH Ice Extent minimum– though I suspect as we near the actual minimum this must change. In the limit that we reach the minimum in the ice extent, the current ice extent must give a very good prediction! Still, currently: the best predictor is amount of vulnerable ice, the second best is area.
The fit based on vulnerable ice is shown below:

The open black circles are data for remaining losses as a function of the amount of vulnerable ice. Note that greater losses are correlated with larger amounts of vulnerable ice. The solid blue line indicates the best fit estimate for the remaining extent loss based on the current amount of vulnerable ice; the dashed lines indicate the ±95% scatter about the mean assuming the residuals are normally distributed (which they appear not to be). The filled orange circle indicates the best estimate of the ice loss based on the current amount of vulnerable ice (i.e. a remaining loss of 1.55 million square km); the open circles indicate the ±95% confidence intervals based on scatter about the best fit curve and including the uncertainty in the trend and constant term for the model.
The (+) indicates the extent loss that, if achieved, will result in a tie with the 2010 ice extent. The (x) indicates the extent loss that will result in a tie with the 2007 minimum. Both fall inside the ±95% uncertainty intervals, indicating that, according to this model, we may well see a new record low NH ice extent, or we may see the NH ice extent above that seen in 2010. Based on this model, the probability of a new low is estimated at 7%, while the probability of a NH ice extent that exceeds 2010’s value is 28%. These are noted in black in the lower right-hand corner of the figure.
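For concreteness, here is a sketch of how the 7% drops out of the fit, assuming (imperfectly, as noted above) normally distributed residuals. The inputs 'vuln_now' and 'loss_to_tie_2007' are placeholders for the current amount of vulnerable ice and the remaining loss needed to tie 2007; again, this shows the idea rather than my production script.

```r
# Sketch: prediction and record probability from the vulnerable-ice fit.
fit     <- fits$loss_on_vulnerable
p       <- predict(fit, newdata = data.frame(vuln = vuln_now), se.fit = TRUE)
sd_pred <- sqrt(p$se.fit^2 + p$residual.scale^2)   # fit uncertainty + scatter

p$fit                                       # best-estimate remaining loss (~1.55 here)
p$fit + c(-1, 1) * qnorm(0.975) * sd_pred   # approximate 95% interval
1 - pnorm(loss_to_tie_2007, mean = p$fit, sd = sd_pred)   # chance of a new record low
```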
Some of you will note that other probabilities are listed in blue. The second set of probabilities takes into account that while the “vulnerable ice” model appears to explain the past data better than the other two models, it is not sufficiently better to discount the possibility that the other models are in reality better predictors. Taking the predictions of the other models into account, I estimate the probability of a new minimum at 9%. I will discuss the estimates based on the other two fits, and how I weighted the estimates, in a future post planned within 2 days.
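Roughly speaking, the weights are the relative likelihoods implied by the AICc values; the sketch below shows the standard Akaike-weight arithmetic. The details of how the per-model probabilities are combined will be in the follow-up post.

```r
# Sketch: relative model probabilities (Akaike weights) from the AICc values.
aicc_vals <- c(extent = 901.9, area = 894.2, vulnerable = 892.0)
delta     <- aicc_vals - min(aicc_vals)
w         <- exp(-delta / 2) / sum(exp(-delta / 2))
round(w, 3)
# The weighted record probability is then roughly sum(w * p_record), where
# p_record holds each model's own estimate (7% for the vulnerable-ice model).
```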
If you have other methods of predicting the current ice, or know of other troves of daily data, point me to them. Right now I know lots of people are fascinated by the horserace, and I like to quantify what I think is going to happen. Even if I guess wrong, I enjoy this more than just looking at the JAXA chart.
Links to data sources:
- JAXA ice extent
- gsfc.nasa.gov
- Cryosphere Today.
The anomalies in the Cryosphere Today ice area linked above are based on the 1979-2000 averages. The anomalies posted on the site are based on the 1979-2008 averages (30 years). The link to that dataset is:
http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008
Antarctic area and anomalies (1979-2008):
http://arctic.atmos.uiuc.edu/cryosphere/timeseries.south.anom.1979-2008
Global area (sum of Arctic and Antarctic for the same day):
http://arctic.atmos.uiuc.edu/cryosphere/timeseries.global.anom.1979-2008
The probability of a new record low for Cryosphere Today Arctic ice area is still about 50%.
I dunno. There’s two big tongues of 2+ year-old ice out there in the basin. They may completely melt. But they might just break up into irregular chunks. With ~10% of each of them above water while the portion below keeps them nicely separated you might see 15% JAXA doing okay while CT area is massacred.
Yeah, geometry isn’t my thing, is it? :/
Dewitt–
I haven’t done any calculations for probability of ice area lows. I don’t use the anomalies for my prediction. I use the ice areas.
FergalR–
The disadvantage of looking at images is that I don’t know how often in the past there might have been two big tongues of 2+ yo ice out there, and I don’t know how fast that sort of thing might melt.
The image method could work if someone figured out some systematic way of interpreting all the detailed and complicated stuff going on in the image. I think glaciologists also have mechanistic models, possibly including weather prediction and known weather conditions. My fit doesn’t do that. It’s just trying to identify a good proxy that somehow can deal with both changes in climate and weather in a simple way.
@Lucia
I am wondering if you can also post monthly temperatures of both southern and northern hemisphere ice caps. If you’re going to attribute melting ice to global warming, then temperatures should be going up, correct? Otherwise, any lost ice should be attributed to natural variation, and you should note this.
Jay/Shoosh–
Feel free to post any temperature information you have, along with a link.
Lucia,
I’m curious how this works. As we go forward in time, does the vulnerable ice become a better predictor of the final ice extent? I assume this is the case, and I assume going back in time it becomes worse. Do the dotted blue lines just get tighter or wider? Is there a window in the year when this is the better predictor? I’m sure I’m asking a lot here.
HR–
All three of the predictors checked here will be better as time goes forward. The dotted lines will get tighter.
I’m not sure when the cut-off for each predictor being best happens. But I suspect:
1) Around January or earlier, the best predictor will be a fit involving year– which is not discussed here.
2) Near the 2nd week of September, the 7-day average extent will tend to be a better predictor than area.
Dr. Jay Cadbury, phd.,
After detrending the data and using temperature and ice anomalies with the same base period, there is no statistically significant correlation between monthly UAH north pole temperature data and monthly NSIDC extent. That seems to show that ice extent does not respond much to month-to-month changes in air temperature.
Two seminal studies [Polyakov et. al 2003/2004] conclude that “mechanical” variables such as wind and currents have at least as much if not more bearing on Arctic ice than temperature variation. They also point out that this is true in particular for what they call the “peripheral seas” of the Arctic basin.
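For what it’s worth, the check described above is of this general form (a sketch in R; 't', 'uah_np' and 'nsidc_ext' are placeholders for a common monthly time axis and the two anomaly series):

```r
# Sketch: detrend both monthly series over the same base period, then test
# whether the residuals are correlated.
r_temp <- residuals(lm(uah_np ~ t))
r_ext  <- residuals(lm(nsidc_ext ~ t))
cor.test(r_temp, r_ext)   # inspect the p-value and confidence interval
```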
david. SST melts the ice, not the air. Using hot air to melt ice is not very efficient: ever try to de-ice something with hot air?
‘In general, we find that melting ice with hot air is a very inefficient process, with not much more than 5 percent of the energy stored in the fuel going to melting the ice. Tests conducted at CRREL using the exhaust gases of a gas turbine engine for melting ice yielded similar results with maximum efficiencies never exceeding 8 percent. Since modern combustion chambers are highly efficient, yielding fuel conversion efficiencies on the order of 85 percent or more, we attribute no more than 15 percent of the loss of energy to incomplete combustion. This means nearly 80 percent of the fuel energy is lost through heat transfer effects such as heat losses through the heater housing and duct work. In addition, incomplete heat transfer between the hot air and ice surface reduces melting efficiency.’
What works better is warm water:
http://sharaku.eorc.jaxa.jp/cgi-bin/amsr/polar_sst/polar_sst.cgi?lang=e
When the ice is melting, what do you think happens to air temps above it? Start looking at these things. It’s fun:
http://imb.crrel.usace.army.mil/2011C.htm
http://nsidc.org/arcticmet/factors/temperature.html
Here, look at July when the ice is melting.
http://iabp.apl.washington.edu/data_satemp.html
steven mosher,
I agree. I was simply answering Dr Jay Cadbury’s question. I am an Arctic alarmist because of the warm waters flowing into it. But good to have those posted – perhaps he will read them … 😉
Re: steven mosher (Aug 10 20:59),
In the case of Arctic sea ice, it’s all about radiative transfer. The water temperature under the ice doesn’t change much from summer to winter. But in winter, the net radiative heat loss is more than enough to remove enough heat to freeze the surface. Then ice thickness builds up and the surface temperature drops until the thermal transfer by conduction through the ice matches the radiative heat loss. That’s about 2m thickness maximum. In the summer with the sun up 24 hours/day, the bottom of the ice melts because heat is no longer being radiated away. The sun also melts the surface when there are no clouds. Melt pools on the surface also absorb more solar radiation, speeding up the surface warming.
We know that radiative heat loss is a major effect because you get a large temperature inversion over the ice pack during the winter. That happens when radiative heat loss from the surface exceeds heat transfer from the air.
Thanks dewitt.
Here’s a nice animation for the doctor:
http://www7320.nrlssc.navy.mil/hycomARC/navo/arcticsst_nowcast_anim365d.gif
tetris
Two seminal studies [Polyakov et. al 2003/2004] conclude that “mechanical” variables such as wind and currents have at least as much if not more bearing on Arctic ice than temperature variation. They also point out that this is true in particular for what they call the “peripheral seas” of the Arctic basin.
###
thanks. I was reading those papers (or one that was similar) the other day and they apportioned it about 50/50, thermal versus mechanical.
Suffice it to say, if the temperature were colder, mechanical wouldn’t matter much. Watching the ice getting ready to export out the Fram, it’s clear that mechanical plays a role in driving ice to its death. It’s kinda fun to watch the concentration move.
For your reading pleasure: “Dramatic interannual changes of perennial Arctic sea ice linked to abnormal summer storm activity”, Screen et al. (2011) JGR Atmospheres
Steve/Tetris
The way I interpret that is:
In any given year, mechanical effects influence the summer minimum much more than local temperature variations in the arctic. But this does not imply that the warming trend does not have a noticeable effect. Clearly, if the planet were warmer (or colder) for a long long long time, we’d have more (or less) ice on average in the arctic. In either case, we’d still see quite a bit of variability due to mechanical effects during the summer.
@Lucia
“But this does not imply that the warming trend does not have a noticeable effect.”
Okay, now the warming trend you are referring to, is this sea surface temperature or air temperature?
@Mosher
Thanks for the link, Steve but it was not what I was looking for. I need hard numbers of sea surface temperatures from 1979-present.
Jay– I mean the global mean surface temperatures and in particular surface temperatures in the regions adjacent to the arctic.
In terms of thermal, as opposed to mechanical, effects, the water under the ice, and at its margins, determines melt/loss. Relating the air temperature presumes that this is a proxy for the water temperature. I think what Lucia is saying is that while air temp is clearly a poor proxy for water temp on short timescales, on long timescales they should behave similarly, so one should expect the air temp to be a decent proxy for the trending effect of water temps. I can’t say for sure; I’m not sure anyone can, since AFAIK water temps under the ice can’t be measured (if anyone knows of sub-ice data, a link would be appreciated). However, if the relationship between “Marine Air Temperature” over most of the world is any indication, she is probably correct (indeed, at least away from ice, I seem to recall that the near-surface air temps behave similarly to the sea surface even on an interannual scale). The exception to the overall rule seems to be the tropics, where, for reasons apparently unknown, marine air temps haven’t followed the sea surface temps well. (Note here that this is a different question than the upper-air discrepancy issue!)
http://www.agu.org/pubs/crossref/2001…/2000GL011167.shtml
I doubt whatever goes on in the tropics, if a physical effect, is also going on in the Arctic, so air temps long term being a decent proxy for the sub-ice water temps seems reasonable. The problem of attributing trends to thermal factors then becomes a problem of determining the separate possibility of trends in mechanical factors like wind, and of course wind, if trending, is not necessarily independent of the temperature trend or of AGW. Thus the thermal contribution is probably not really what we should concern ourselves with, so much as the causes of changes in all factors themselves.
Thanks for the link, Steve but it was not what I was looking for. I need hard numbers of sea surface temperatures from 1979-present.
Google is your friend and I ain’t your data chimp.
Andrew_FL–
I think you are describing something near what I think. Quite generally, I think that
1) The change in radiative balance is causing the warming trend for the earth as a whole. So, that warming trend is a symptom of the radiation balance. Changes in radiative balance toward the heat gain end at the pole will tend to reduce ice formation and increase ice melt. This means over the long term, we expect ice loss while the earth warms.
2) There is heat and mass transport between the poles and the rest of the planet. To the extent that the mid-latitudes and tropics warm, some warm water and air will move to the poles. This will inhibit freezing and encourage melting. In this case, there is a link between warming in places where we have thermometers and melting of ice at the poles.
I’m not sure what temperatures Jay/Shoosh is trying to look at. Evidently, he doesn’t want to hunt for them himself. Like mosher, I’m not going to look for them. I couldn’t if I wanted to, because I don’t even know what temperature Shoosh thinks he wants to see. But if Jay/Shoosh knew, I’d probably suggest he search for them himself.
Whatever data you hunt down it will be the wrong data.
If somebody doesn’t have a hypothesis to test, I fetch no data.
lucia (Comment #80310)-I basically agree, although I would add that the relative importance of the local energy balance versus latitudinal heat transport is highly seasonally dependent. Temperatures during the cold portion of the year vary wildly in the Arctic because of periodic penetration of the Arctic Circle by cyclonic disturbances from further south. During the summer, temperatures very closely follow a smooth path expected from the dominance of the local incident solar energy. Doubtless there is still transport of heat in summer, but it’s less sporadic and I suspect weaker, as the overall temperature gradient is weaker. For climate trends and variability, I suspect similar behavior would occur in the seasonally separated data.
Andrew FL,
The interesting thing about the Arctic is that it is so very different from the Antarctic, which I think stems mainly from the difference in heat transport in the northern versus southern hemispheres. The arctic is surrounded by land masses with low heat capacity; this allows warmth from lower latitudes to arrive at high northern latitudes pretty easily… strongly promoting melting over the warm part of the year. Antarctica is surrounded by only (very cold!) ocean, and so is mostly protected from intrusions of warm air in the southern summer. The same ease of transport also (of course) facilitates intrusion of chilled Arctic air even to relatively low northern latitudes… like Florida, and simultaneous transport of heat far to the north in winter; this doesn’t happen very much in the southern winter. Average local warming of a few degrees has little or no impact on ice in the Antarctic, but a substantial impact in the Arctic.
SteveF (Comment #80315)-Yes, another feature of Antarctica is that it is itself at a very high altitude, being a continent, and quite mountainous. Thus heat moving from the equator must also move up through the atmosphere in addition to just going southward. Purely horizontal transport will be from the higher altitudes that have less overall heat.
Mosher,
Have another look at the final paragraph on page 2084 [2003]. “At least as much”: which means at least 50/50 and likely a bit more, according to their sensitivity analysis. If you insist on reading that as 50/50 only, so be it.
Lucia [80299]
What do you make then of DeWitt Payne’s point @ [80295]?
tetris–
I think what Dewitt says @ [80295] is perfectly consistent with my (1) at 80310.
tetris. I don’t even insist that we were reading the same paper. As I noted the other day, I read something like that which apportioned things about 50/50. If you insist on saying we in fact read the same paper, then by all means do.
Re: tetris (Aug 11 21:58),
What she said.
An increase in average air temperature and humidity will increase the downwelling IR. That will decrease the net upward loss in heat and result in thinner ice.
@Steve Mosher
Okay, well I have a hypothesis. My hypothesis is that Lucia will not post the average arctic temperatures 1979-present because there is almost no change. I further hypothesize that if the temperatures are not fluctuating significantly, the measurements of ice growth and loss are incorrect.
I have an alternative hypothesis. The arctic ocean in summer is one large ice-bath and it is literally impossible for the surface temperature to deviate significantly above or below the melting point of water; 0C.
Of course this says nothing about how much ice is melting or what the temperatures *would* be if the melting ice didn’t constrain it.
A corollary to my hypothesis is that Dr. Jay Cadbury, phd must have received his diploma via mail-order; or in a subject far-removed from the physical sciences.
Difference as a measure of vulnerable ice – very interesting, thanks.
By 31 August, I find extent has become a better measure than area but area can still usefully be used.
You say you wanted a single measure and use extent, area and the difference between the two as your three single measures. If the difference is a single measure, what about the average of extent and area? Or maybe a weighted average favouring area, as that seems a better predictor? Or, for that matter, a weighted average plus some multiple of the difference? Are these also single predictors and are they likely to do better?
Thanks to Lucia and the readers who pointed out the GFSC Nasa daily extent numbers. My attempt at a merge with IJIS data is at https://docs.google.com/spreadsheet/ccc?key=0AjpGniYbi4andElTczhVS2t2VW5Ka0sySnFrcndTTkE&hl=en_US#gid=0
The thanks made it into my contribution but I don’t think the link from there works.
Lucia, are you going to submit a SEARCH outlook contribution?
At the point in time where I wrote this post, yes.
I didn’t try that. But it would qualify.
I would consider a weighted average a single predictor only if I didn’t pick the weights by fitting to data. At any time of year, there is a weighting that gives the ‘best’ predictor. But if I picked the weights by finding the best fit weighting, that fit is not really different from using a multiple regression. So I consider that a multiple regression in disguise.
The link seems to work for me. Thanks for that.
I did submit a search outlook. I saw you and Anthony did. Unfortunately, this method won’t be my submission because it’s too late. I submitted based on extent. I guess I should have started all my fiddling with fits earlier.
You’re predicting the Sept minimum, right? It’s interesting to me that extent doesn’t become the best until it’s nearly predicting itself. By Sept 1, the current extent is contained in the September average. That’s why I figured it would eventually have to be the best predictor.
7-day averages for area and extent have something like a 90% correlation. Also, the fit looks pretty close to:
Area = 1*Extent + noise + a constant.
That is: the coefficient between the two is statistically indistinguishable from 1. (I think I got something in the 0.9X range.)
So, yes, if one is a decent predictor, the other one will usually also be a good predictor.
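If you want to run the same check on your own series, the quick version is just the regression slope’s confidence interval; a sketch, with 'area7' and 'extent7' standing in for aligned 7-day averages:

```r
# Sketch: is the area-vs-extent slope distinguishable from 1?
fit_ae <- lm(area7 ~ extent7)
cor(area7, extent7)            # roughly 0.9 for the series discussed above
confint(fit_ae)["extent7", ]   # if this interval contains 1, the slope is
                               # statistically indistinguishable from 1
```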
Thank you for the reply.
31 August is only a datapoint where extent is better. It could become better earlier. Also note my method may not agree with when others find extent becomes better than area, due to my Gompertz fitting’s data preparation.
Another datapoint is August 15th where I still find area to be better based on regression factors of .9787 for area vs .296 for extent.
My Aug 15th prediction for September NSIDC average extent is 4.438+(CT Area for 15/8 – 3.623)*0.9787 + (IJIS for 15/8 – 5.824)*0.296
(or maybe that ‘is’ should be ‘was’ because of your difference = vulnerable extent suggestion… nah, better let you submit that.)
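Written out as a small function (R here just for concreteness; my own version lives in a spreadsheet, and inputs are in million km^2):

```r
# crandles' 15 Aug relation for the September NSIDC average extent.
predict_sept_nsidc <- function(ct_area_15aug, ijis_15aug) {
  4.438 + (ct_area_15aug - 3.623) * 0.9787 + (ijis_15aug - 5.824) * 0.296
}
```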
Yes. It would be interesting to see AIC coefficients for different fits, including your gompertz fit for time– but I haven’t taken the time to read how you do it and then implement it into my script.
(My current script is a nightmare of fiddles. All the “free the code” people would laugh — and also discover that one can write spaghetti code in R. Spaghetti code isn’t limited to fortran. I need to write some functions instead of using ‘cut-paste-edit’ for repetitive tasks. But I didn’t know which path was going to look promising and so right now it’s just a nightmare.)
Crandles— can we still submit? Is there a mid-August submission potential?
SEPTEMBER REPORT (brief updates based on August data). Deadline for contributions: 30 August. Publish reports online: 14 September.
http://siempre.arcus.org/4DACTION/wi_ai_getArcticInfo/4850
That was only a tentative timetable it may have changed…
(No matter how spaghetti your R code, that isn’t much competition, my spreadsheets would rightly be viewed as worse.)
crandles

I thought you might be interested in this plot.
Black: Extent v. area all time.
Yellow: Includes March until day 220 only, which is more or less the “melt period”. Note how mostly the ‘yellow’ is in the top half, suggesting that during melt, the (extent-area) tends to be a little higher than during re-freezing.
Blue: day 220.
Red: 2011 from March through day 220.
This graph doesn’t indicate the time, but the period of slow extent loss corresponds roughly to the period where the extent was not dropping, but area was. So, the red line has a shallow inclination.
Nah. I’ve done spreadsheets. I’ve done R.
I can do things in R I can’t do in spreadsheets, but I can create true spaghetti using Fortran, R, Excel, what have you. The secret to creating spaghetti is not knowing precisely what you plan to do before you start, doing something that seems interesting, and organizing for that task. Then, finding something interesting, and then tweaking the ‘code’ to simultaneously keep the old stuff (to build on) and look at new stuff.
Because Dewitt asked me something, wedged in the *middle* of a bit of script that tries to look at the best fit to predict extent, I have a snippet of code to look at how it predicts area. I’ve also got snippets of code to check out time etc.
All these have notes telling myself to clean it up later– when I write a function! (This is planned now because I think I know my general notion of how to create the ‘best’ predictor given a certain batch of ‘fresh’ data. I couldn’t do it before because I wasn’t sure precisely *what* I was going to do.)
>”but I haven’t taken the time to read how you do it”
If you want that part:
Gompertz fit of average September extent per NSIDC is 4.438.
Gompertz fit through 15th Aug data extrapolated to 2011 gives the 3.623.
Similarly gompertz fit through my GFSC-IJIS JAXA data merge for 15th Aug extrapolated to 2011 gives 5.824.
If we are on the gompertz fit for area and extent at 15th Aug, I take that as being on target for the gompertz fit for September. (Scatter plots of residuals from the gompertz fits also suggest linear and proportional relationship.)
Then it is just multiple regression after eliminating any data that isn’t significant to find the regression factors.
Effectively I am detrending all the data using gompertz fits and using residuals to estimate residuals.
My attempts to ascertain whether this is better than multiple linear regression suggest that it is. I am waiting for someone to say there is a better way of doing non-linear regression so I can say OK you do it and I can then give up. ;o)
(err., I mean leave it to someone more expert.)
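If a rough sketch of the detrend-then-regress idea helps, here is one in R. This uses one common Gompertz parameterization with made-up starting values; my actual curve form, data preparation and fitting all live in spreadsheets and differ in detail. 'yr' and 'sep_ext' are placeholder vectors of year and September extent.

```r
# Sketch: fit a declining Gompertz curve to September extent vs. year and
# keep the residuals as the "detrended" series.
gfit      <- nls(sep_ext ~ a + b * exp(-exp(k * (yr - t0))),
                 start = list(a = 2, b = 5, k = 0.2, t0 = 2010))
sep_resid <- residuals(gfit)
# The 15 Aug area and extent series get the same treatment; the September
# residuals are then regressed on those residuals (multiple regression),
# dropping any predictor that isn't significant.
```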
crandles,
It’s cheating if I just take your numbers. I need to read how you actually do it, shove that into my spaghetti code, and figure out how to get the corrected Akaike (AICc) values.
That’s what I thought you were doing. Replicating and then overlaying the other stuff I want to do is on my “to do” list. I just haven’t done it.
I don’t know which way is best. But eventually, I’m just going to report AICc values. I just don’t know if I’ll get around to it before the ice minimum — it’s just a matter of prioritizing re-organizing the spaghetti code vs. adding something that would be interesting to know but make it even more spaghetti-ish.
I’m sure I’ll want to make a prediction for next year after the minimum. In that case, I strongly suspect fits to year will have the best AICc values, so I’ll compare linear, quadratic and your Gompertz for AICc.
>”I’ll compare linear, quadratic and your Gompertz for AICc.”
I predict a win for exponential ;o) but only marginally, and I think I prefer to stick to Gompertz despite exponential being better, as the Gompertz shape allows for a slowdown in the rate of loss as models have predicted, whereas exponential doesn’t.
Mosher
Are you telling me -after your comment about my reference to the Polyakov papers- that we are not talking about the same papers?
DeWitt Payne [80328]
“as she says”
“will decrease the net upward loss in heat and result in thinner ice”,
“A net upward loss in heat results in thinner ice?” Pray tell.
Quite apart from the fact that sea ice has approx 85% of its volume in the ambient water [temp] and therefore is much less subject to variations in air temps, how does a “net upward loss in heat” explain anything?
An “upward loss of heat” assumes a source of heat losing calories from below, energy/heat transferring to something colder above. Does this mean that the arctic water is warmer than the air above it and the ice is melting because it is the intermediary layer?
If that holds, you have found the answer to Trenberth’s complaint about where the warming is “hiding”. In the waters of the Arctic Ocean, melting the ice from below.
Re: tetris (Aug 14 22:03),
Radiative heat loss to space. It’s quite large at high latitudes in the winter when the sun isn’t shining, ~70 W/m². You can freeze a lot of ice when you’re losing 70 W/m² 24/7. But an increase in CO2, even at the same temperature will reduce that loss rate. An increase in average atmospheric temperature will reduce it even more.
The temperature directly below the ice is fixed by the freezing point of sea water. The surface temperature of the ice is lower than this so heat will be conducted through the ice and radiated to space. It’s an equilibrium process that depends on the thickness of the ice among other things.
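A back-of-envelope way to see how much freezing 70 W/m² could drive (my own rough numbers; it deliberately ignores the conduction bottleneck, which is exactly why the practical maximum comes out nearer 2 m):

```r
# Sketch: upper bound on ice frozen over a ~180-day dark season if all of the
# net radiative loss went into latent heat of fusion.
q     <- 70       # W/m^2, net radiative loss
days  <- 180      # rough length of the polar night
L_fus <- 334e3    # J/kg, latent heat of fusion of ice
rho   <- 917      # kg/m^3, density of ice

(q * days * 86400) / (L_fus * rho)   # about 3.5 m of ice, ignoring conduction
```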
Of course. At night, radiation is either to overhead clouds or, when and where the sky is clear and the atmosphere transparent at relevant radiating wavelengths, to space. The temperature of space is near absolute zero. That’s colder than the sea surface or ice surface.
This sort of calculation is done in mechanical engineering too– we account for both convection and radiation heat transfer on objects in many instances. (In some instances, one is clearly much larger in magnitude than the other and we can neglect the smaller one.)
If CO2 makes the atmosphere less transparent at relevant wavelengths, then the heat loss to 0K space is reduced, and you would expect ice to form at a slower rate. The thickness accumulated over winter will be less than otherwise.
Tried to use your Ext-Area = vulnerable ice with my technique but didn’t find it much use, certainly not even approaching significance.
Decided to see if I could learn anything from replicating your method but I am either going wrong or am puzzled.
Std deviation of extent reductions was .248
Standard error when using year to predict drops .222
Standard error when using vulnerable area = extent-area at 15/8 to predict drops .234
Standard error when using area at 15/8 to predict drops .217
The conclusions would seem to be:
1. Whether I use my method or your method, vulnerable ice = extent – area does not seem a useful technique. Area again seems a better predictor.
2. You have shown the method does well, so it must be coming from something other than the vulnerable ice = extent – area ‘trick’.
So I am puzzled by why
1. You have used this extent-area predictor, and
2. Why that method does better than your simple linear regression methods with extent and area. I am thinking those simple linear regressions with extent and area do badly because the data is curved, not linear, with time, and predicting the change after 15/8 has this built in, whereas the linear regressions with extent and area do not.
If my thoughts on this are correct, that would seem strong evidence that a non-linear regression method (perhaps like mine), or the extent reduction from some dates, are better methods than linear regressions.
My method probably still needs testing against average reduction in extent and reduction predicted by area. However my standard error was down to .219 which is close to the .217 above.
OK, with a bit of testing I am now much more sure that predicting the drop using area is a better technique than the one I have been using. This is because I have tried predicting the previous 20 years without using data unavailable at a true prediction time. The RMSE is .217, i.e. despite changes in the regression slope and intercept, the error does not increase.
Using my technique the error does increase.
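For anyone wanting to reproduce that sort of out-of-sample check, a sketch of the hindcast loop (assumed setup; the data frame 'd' with columns year, area_15aug and drop is a placeholder, and my own version is a spreadsheet):

```r
# Sketch: for each of the last 20 years, refit the area regression using only
# earlier years, predict that year's post-15-Aug drop, and collect the errors.
hindcast_rmse <- function(d, n_test = 20) {
  test_years <- tail(sort(d$year), n_test)
  errs <- sapply(test_years, function(y) {
    fit <- lm(drop ~ area_15aug, data = subset(d, year < y))
    obs <- subset(d, year == y)
    predict(fit, newdata = obs) - obs$drop
  })
  sqrt(mean(errs^2))
}
```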