Those of you who read WUWT will be interested to learn that DMI seems to have found the ice they lost yesterday:

I first read of the dropout at WUWT, where it is immortalized:

I also read about it in comments at Neven’s. It seems to me that pretty much everyone from stone-cold cooler to hell-fire and brimstone warmer recognized this as sensor error.
These sorts of short term data glitches number among the reasons I wait a little while before calling the NH sea ice minimum winner.
Humm. Looking more and more like no record minimum this year. Oh well, maybe we’ll get to hear the blaring MSM headlines and the rants about the end of the world next year or the year after; surely sometime soon… unless our lukewarmer coffins have been nailed fully closed before then. 😉
Neven pointed me to area data from Cryosphere Today. At the current time of year, the 7-day smoothed area gives a better prediction than the 7-day smoothed extent. So, my prediction method has changed, and so has my estimate of the probability of a record minimum. My new method says the probability of a record minimum is 20%. (If I used my old method, predicting based on extent, I’d get p=4%.)
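The smoothing step itself is simple enough to sketch. Below is a minimal illustration of a trailing 7-day mean, with made-up daily area values — this is not Lucia’s actual prediction code, just the kind of smoothing the comment describes:

```python
def smooth7(values):
    # Trailing 7-day mean: one output value per day once 7 days are available.
    return [sum(values[i - 6:i + 1]) / 7.0 for i in range(6, len(values))]

# Hypothetical daily area values (millions of km^2), purely for illustration.
daily_area = [5.0, 4.9, 4.8, 4.8, 4.7, 4.6, 4.6, 4.5]
smoothed = smooth7(daily_area)
```

A prediction would then be fit to the smoothed series rather than the noisy daily values, which is why a one-day sensor glitch matters less than it first appears.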
So…. we’ll see.
The ice is currently very spread out given its area. I’ll show an interesting graph of that soon. But anyway, because it’s spread out, we can lose extent at constant area due to compaction. Of course, we can also lose extent and area if ice blows out the Fram or just melts.
Lucia:
You do have the most interesting posts of all the blogs I visit!
Roy
Re: SteveF (Aug 8 16:10),
I don’t pay any attention to DMI. They’re pretty much hopeless.
Uni-Bremen, OTOH, still looks like about even odds. This time of year gets tricky because it’s hard to tell melt pools on the surface of ice from open water. There’s a whole lot of low concentration ice on the Alaskan side of the Arctic Ocean between 150 and 180 W longitude. We still have at least a month of melt season left. That could all go. Even if it didn’t melt, winds could consolidate it a lot.
DeWitt–
Does Uni-Bremen have data available? Or just the graph?
DeWitt–
If all the less-than-100% concentration ice goes or compacts, it’ll be a blowout. It would be interesting to have a look at the volume of ice that was lost in 2007 from now until the end.
It would be awesome if this was polar bears armed with flamethrowers, though.
I don’t know if this post by Steven Goddard has been raised here.
http://stevengoddard.wordpress.com/2011/08/08/1947-temperatures-in-the-arctic-have-increased-by-10-degrees-fahrenheit-since-1900/
10F increase in air temps & a 3-5F increase in SST.
If you go down to p. 24 & beyond here, http://climate4you.com/Text/Climate4you_March_2011.pdf you see the huge leap in temps, especially winter ones, starting around 1916. An increase far more rapid & of greater magnitude over the course of a decade than anything since.
steven–
If we count extent lost from the current day to the minimum, 2007 is not the worst year. It’s something like 1999. In fact, 2008 was worse than 2007. I’ll be adding a “most lost from here on in” projection. If it happens, it will be a blowout.
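The “most lost from here on in” idea can be sketched with a few lines of code. The extents and minima below are made-up numbers for illustration — real values would come from the JAXA/NSIDC archives:

```python
# Hypothetical Aug-8 extents and eventual September minima (millions of km^2).
# These numbers are invented for illustration, not actual observations.
aug8_extent = {2007: 6.0, 2008: 6.8, 1999: 7.2}
minimum = {2007: 4.3, 2008: 4.7, 1999: 5.7}

# Extent lost from the current date to that year's minimum.
loss_from_here = {yr: aug8_extent[yr] - minimum[yr] for yr in aug8_extent}
worst_year = max(loss_from_here, key=loss_from_here.get)
```

Applying the worst observed loss-from-here to the current extent gives the “if it happens, it will be a blowout” projection the comment mentions.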
From WUWT, a reply to another sensor dropout.
Anthony thinks such trivial events are still news. It’s like reporting, every time it happens, that a passenger jet took off with some number of components not functioning correctly — which is probably true of every flight.
“the QC is partly manual”
I don’t find this particularly comforting.
Andrew
The QC on a jet involves plenty of manual checks.
bug–
People are watching the ice now. Those at Neven’s blog also noticed the dropout and chatted too.
I think partly manual QC is often wise. I strongly suspect it’s wise in this case. The main thing is that if you see a sudden change in data reported real time, it’s worth bearing in mind that it might be a sensor issue. That’s what I do.
That said: I don’t think bearing this in mind means people — including Anthony– can’t notice and talk about it. People would be talking about it whether or not Anthony blogged– they already were noticing and talking before Anthony blogged. Anthony conveyed their question to Walt; Walt answered. Anthony told his readers.
I do sympathize with Walt– it’s got to be tough to have a zillion readers looking at fresh data as it comes out. But by the same token, they are going to ask questions, and I think it’s better for Walt that Anthony asks rather than having one bajillion different emails arriving asking.
bugs:
Personally I thought Anthony’s post and Walt’s response pretty interesting.
Maybe bugs could start his own blog and see how many people he can get to visit it daily.
lucia:
ditto. You shouldn’t trust an automated algorithm more than you would a human.
Re: lucia (Aug 8 19:49),
Uni-Hamburg posts data monthly in the area-extent folder here. Unfortunately, that always puts them several weeks behind real time. Uni-Hamburg says they use the same algorithm (Spreen et al.) to convert the raw observations to area and extent.
Cryosphere Today screws up occasionally too. But I don’t write a post. I send a note to William Chapman and it gets fixed. Speaking of William Chapman, CT has updated their extent graph through 2010. Unfortunately, they haven’t updated the digital archive yet. I should probably ask him about that.
DeWitt, having everybody write a post would be annoying. But having somebody write a post and show the responses does expose the QC process to the rest of us. Anybody who finds Anthony’s posts boring is free to skip them, of course. I’m a dataphile, so naturally I like them.
I’m not sure why you think things like this are worth blogging about. Data is not perfect, especially near real-time data. That’s not news.
Precisely, Bugs. And the fact there is less than half the total number of surface stations now is not news either and it doesn’t matter. If you buy that then I have some fine mortgages that are gold plated according to the S&P that I would like to sell you, clown.
I would think that pointing to these errors speaks to the QC and attention that is paid to the data. That it might be embarrassing to those responsible, and that they shrug it off as nothing new, speaks to their view of things. Better would have been a response that said: here is what caused the error and this is what we did to correct it. Here also is an explanation of why this error has (no) bearing on the longer term measurements.
Kenneth–
I wouldn’t take this as shrugging off. I suspect that ordinarily, the data compiled for the graph would not be mission critical for anyone. So, the graph could be created once a month after manual QC. Owing to interest, they post data prior to full QA/QC. Sometimes, it gets changed. So what?
Sure. Not posting is fine too. But I don’t think there is much wrong with Anthony posting. People were discussing the issue in comments and at other blogs, where questions and answers were hard to find. Anthony asked the question and got the answer. Now we all know and don’t have to ‘hear it through the grapevine’.
While I can understand why Walt might find it annoying and pesky… well… so? He could just delay data posting a week. People who are used to the graphs being up to date would howl, but he could just say that under current funding QC is partly manual and they have their guy do it once a week. Period.
Lucia does Walt find it all that pesky?
Carrick–
I don’t know. But I take this to suggest he very well might find it pesky:
Lucia, I’d find it flattering that so many people cared. 😉
For what it’s worth, I found Anthony’s article interesting. Nothing wrong with having another layer of QC, and in a way it reassures me that the satellite data is being faithfully relayed.
Lucia: “I do sympathize with Walt– it’s got to be tough to have a zillion readers looking at fresh data as it comes out. But by the same token, they are going to ask questions, and I think it’s better for Walt that Anthony asks rather than having one bajillion different emails arriving asking.”
That’s why they pay him the big bucks.
I would seriously doubt Walt is paid “Big Bucks”. The amount he would get paid wouldn’t get an investment banker out of bed in the morning.
bugs–
Get a grip on reality. Lots of baristas, store clerks, hamburger flippers, cab drivers, and cashiers consider what a scientist like Walt makes ‘big bucks’.
@Bugs
Sadly Bugs, the rich have gotten poorer under Obama. 18,394 people reported income of over 10 million. By 2009, this number was down to 8,274. Nice job.
“Owing to interest, they post data prior to full QA/QC. Sometimes, it gets changed. So what?”
Not to be pedantic – well, maybe a bit – I have not taken much time to read the comments on this error, but after seeing an error this obvious, regardless of how quickly it was corrected, my only interest would be to hear the author of the error explain it. I like to correct and explain my errors as quickly as I can.
Sloppy reporting can in some instances go to credibility and I want those who might bother to read what I say at least to know that I know I was sloppy in that particular case.
Also, DeWitt Payne had a scenario that could produce a quick collapse of the Arctic ice extent. Would the error excursion on this graph look similar to that from such a collapse? If so, the error, as I noted above, would not necessarily be so obvious, but would rather look like something of a monumental nature.
Too much coffee this AM, but one more item that might be pertinent to the subject here. The Feds in Washington are asking S&P to explain a “significant” error when they make one and what was done to correct it. While I think the Feds are more motivated by bullying than QC here and S&P and the other financial rating agencies have problems with due diligence and biases, I think the call for corrective action would not be unfamiliar to those who have spent time with QC/QA in private businesses. It could apply to public organizations also, but I am not familiar with what they do for QA.
Kenneth–
Sure. But even in private industry QC, people must recognize that some errors are “insignificant” and spending too much time dwelling on them is counter-productive.
The NH sea ice extent graphs, as products, are mission critical to no one. This dropout wasn’t going to imperil ocean-going ice-breakers, or even cause anyone to cancel their family picnic. People ‘out there’ already knew the answer– and knew it quickly. Polar maps showed a big “blind spot” in an area, and nearly all the curious people found this out very quickly.
I don’t think posting a long detailed explanation is necessary. If someone else thinks so– well, ok. But I really don’t see why the people involved in providing that for-information-only graph need to treat this as some sort of priority.
Lucia,
Really, all that is needed is an appropriate disclaimer (front and center) on the site, explaining that the data is not fully checked and that satellite problems can sometimes cause (brief) anomalies. If they wanted to be extra clear, they could even show a couple of examples. Then Walt would not likely receive emails from Anthony. 🙂
SteveF–
You are likely correct. Maybe they’ll put one up. 🙂
Lucia, my point is – and it comes from my experience dealing with customers and vendors – that an error not caught, no matter how small or how quickly corrected, leads to a perception of sloppiness, and that perception is best changed by admitting the mistake and showing corrective action. Admittedly the relationship we are discussing here is not the same as a customer/vendor one, but I know that it is not only the response that is important; it is the attitude evidenced by the responder.
In a former life I dealt with customers who would complain about aspects of our product’s quality that would never be an issue for their customers: the end users of the integrated product. I would never get away with telling them that the problem was a non-problem; instead I would have to explain our field test results in detail. That did not mean that what they initially saw as a quality issue did not become, in their eyes, an issue about our general attitude toward quality– and therefore an issue that needed attention and correction. This scenario played out in reverse with our vendors.
Kenneth-
In his email to Anthony, Walt admitted sensor error and took corrective action. The graph is now fixed.
I realize that some might not see this as “enough”– and in the private sector, some customers and vendors might not have seen your explanation of field test as “enough”. But in every endeavor, there is a limit to how much time should be sunk into each individual mistake or error.
In my view, Walt’s letter is pretty much enough for this specific error. Walt would be wise to do as SteveF suggests– put a disclaimer on the web page. But beyond that I don’t see any need for more. If someone else does see a need for more, they are entitled to their opinion. But in my opinion, they may be expecting a response that is out of proportion to what happened, which is that non-critical data, displayed as information for the curious, was out of whack for about a day. Most of the curious could tell it was out of whack, and had access to other data that let them figure out how it was out of whack, and why. Moreover, those same curious outsiders can easily grasp– without any additional info from Walt– what likely happened and how this doesn’t propagate into the long term data.
So, I think Walt’s response is sufficient with respect to anyone understanding what happened here. But if you think otherwise, ok.
“I would never get away with telling them that the problem was a non-problem” Indeed, Kenneth. Responsiveness is a big issue at my place. Of all things, perceptions of non-responses or dismissive responses (about pretty much anything) by the staff do not go over well with the owners (for obvious reasons).
Andrew
The following excerpt from Dr. Walt Meier’s reply shows nothing disconcerting from an attitudinal perspective, but obviously what caused the problem remains a mystery. I doubt from the reply that Dr. Meier is in a position to deliver the details of the problem or the corrective action that would prevent future recurrences. Perhaps there are more recent accounts of what caused the problem that I have not read.
“This is quite clearly a data issue. We don’t work with the F15 satellite anymore – we’ve been using the sensor on the newer F17 satellite, so I can’t say if it is a sensor problem or a processing issue at DMI. It could be the CME, though it doesn’t seem to have affected the F17 sensor. From the image, it looks to be a missing swath of data, perhaps from the CME, perhaps from some other issue. A missing swath is not particularly unexpected. Sometimes the data can be recovered later and added in, sometimes not. The AMSR-E issue in the Antarctic also appears to be due to one or more missing swaths of data on Aug. 5:”
“This is quite clearly a data issue”
Ummmm, to HIM it might be “quite clearly” a lot of things. Doesn’t help US too much, though.
Andrew
Kenneth–
Do you mean we have not been given a detailed diagnosis of precisely what caused this:

We haven’t. But I don’t really see any need to know precisely how and why this dropped out. I think Walt has taken the appropriate corrective action, which was to remove the faulty data. I think he has communicated what went wrong at the level of detail that is appropriate for the type of error and the quality status of the data (which is “for information only” type data).
I guess some people can want more, but I think someone elevating reporting requirements to that level would be mis-prioritizing efforts to get good data.
“I guess some people can want more, but I think someone elevating reporting requirements to that level would be mis-prioritizing efforts to get good data.”
In an attempt to get the last word:
I think it is rather obvious that there are no formal reporting requirements, and at this point I would want more information merely to satisfy my scientific curiosity. In my business experience, technical types like scientists and engineers were more than happy to share this kind of information. I suspect in this case we are not in contact with those who are technically involved.
Kenneth–
I am sure that if you had a relationship with the person dealing with the sensor, he or she would be more than happy to explain what went wrong in almost gory detail. But that doesn’t mean they would be eager to spend the day writing up a formal document describing it, formatting it, and having the document pass through their boss and an editor, all for posting to the web.
The latter is what would need to be done if the “explanatory” document is expected to answer more questions than it raises among the curious.
There is a difference between describing something informally to a person who asks and writing up a document that will be circulated. I can’t imagine that Walt Meier wants to instruct his staff to do the latter now, or in the future when they have finally figured out precisely what happened.
“The latter is what would need to be done if the ‘explanatory’ document is expected to answer more questions than it raises among the curious.”
I think you have hit upon the conundrum: too much work for a formal reply, and insufficient information from an informal administrative reply. The reply I read did pose more questions than answers, such as how the missing data is filled in later.
Your picture with the blacked out missing area reminds me of an eclipse shadow and thus I am leaving this issue as my ancestors of many generations before did: The ice gods were angry that day.
As far as I can tell, Anthony didn’t ask that. I don’t think Walt should waste time answering unasked questions. Also, I don’t know why you are suggesting that Walt’s reply raises this question. It seems to me that whether data are infilled is a question a user should ask if they were using a data product. They should ask it whether or not they see this dropout– so that’s not the same as saying the reply raises the question.
I suspect if you asked the question, the answer is that they will follow whatever their current procedure is and that procedure is described somewhere. I don’t know DMI’s method, but JAXA seems to include -9999 error flags. That’s a pretty common procedure for people reporting data. Analysts can later infill in whatever way they deem useful for their particular analysis.
Re: Kenneth Fritsch (Aug 12 08:43),
If you look at the daily sea ice maps at the Uni-Bremen site you see missing swathes all the time. That’s why they average over more than one day. My guess would be that DMI has an automatic script that runs daily. Ideally, it should flag for attention any time there is data missing in a large area over the whole averaging period. But as pointed out, this isn’t mission critical data for anyone and people don’t always pay attention. There was a post that got a lot of attention at WUWT a few years back over an adjustment at Cryosphere Today when several days of data that were missed because an automatic script broke were put back in.
“A missing swath is not particularly unexpected. Sometimes the data can be recovered later and added in, sometimes not.”
I am curious as to what happens when the data are not recovered and how often that occurs, but not as curious as I would think someone modeling minimum ice extent or running a gambling emporium on ice extent would be.