I see people are still discussing the ZODs. Obviously, they needed a new home that didn’t look like this:

They are now hosted at: David Appell.com
He announced this recently at CA. He also reports on his blog that he has received a letter from the IPCC asking him to take down the files from his site, and that he does not plan to comply with the request.
My own experience with the “open and transparent” IPCC is mentioned under David Appell’s comment on the “Stocker’s Earmarks” thread at CA.
Kneel before ZOD!
PaulM–
Yes. He emailed me and I figured if anyone has trouble finding it, we might as well help them. So… here it is. We’ll see whether the IPCC can really do anything about it being available. I don’t see much harm in it being available even if “the” IPCC would prefer it not be.
Boris– Some people want to read it. I don’t see any harm in that and believe it should be readily available. Of course, when people discuss it, they should say it’s the ZOD so that everyone understands the text isn’t from the final version. But I think anything can be discussed: emails, conversations overheard on buses, letters from John Adams to Abigail Adams, what a politician might have quipped during a flight, etc. There is no reason to pretend that which is not “published” in some “formal” sense does not even exist.
Lucia,
I am pleased the ZOD is available, primarily because it is a useful diagnostic of the IPCC process. Even a quick read of the existing sections on the ocean (for example) shows that there is no mention at all of the flattening of ocean heat uptake since 2003, but repetitive discussion of heat accumulation from the 1970s through 2003. It is as if the document was written in 2003, and the last 8 years of data (of much higher quality) don’t even exist! On its face this looks to me like people who are choosing to ignore data which are inconvenient because they conflict with the conclusions of AR4 and/or make the story line of pending environmental doom more complicated than they would like it to be. It will be interesting to see if those involved in the process can bring themselves to meaningfully address data which conflict with earlier IPCC assessments. My honest guess: there is not a chance they will.
Sorry, I should have included the reference.
As for this ZOD, I really don’t have a problem with people posting it or the IPCC trying to protect their “property.”
Wow. The believers really are going to do it.
This is sad: Lysenkoism on a worldwide basis.
I bet that many who enabled Lysenko thought they were doing the progressive thing as well.
Solzhenitsyn had it right:
To do great evil, one has to be convinced they are doing a great good.
Hunter’s comment raises a recurring question: should Stalin be included in Godwin’s law?
Shame it was the ZOD and not the FOD that was leaked… The FOD is greatly improved in many, many areas.
No, that’s a separate one. I deem it “Toto’s Law of Convergence.” Any skeptical discussion of AGW will always converge to a discussion of Stalinism.
Zeke (Comment #89044),
Well, in that case, maybe you should be the one to leak it and show everyone the improvements that have been made. You could optically scan the pages and release untraceable files. 😉
.
But seriously, you appear to be toying with us. What kinds of improvements? Do you mean there are graphs in the FOD where there were none in the ZOD, or something more substantive?
SteveF,
Alas, I’m not really at liberty to talk about them, but they are significantly rewritten in many places.
Zeke,
Funny, in my copy of the FOD, very little seems changed….. mostly just missing sections and graphics added. Looks just as bad or worse to me in terms of the science.
.
OK, I am only joking, but I think your comment about the content of the FOD was more than a bit unfair; how can anyone assess the accuracy of your glowing evaluation of a document only you have, but are not at liberty to talk about?
.
“Significantly rewritten” is a lot more fair to the rest of us than “greatly improved in many, many areas.” Better yet would be if someone would just publish each draft version on the way to the final version.
It is interesting to me that Mike Godwin’s observation has morphed into one of the only grassroots social laws of the internet. It seems unlikely that Godwin’s law was meant to suppress conversation, since Mike was a founder of the Electronic Frontier Foundation and served as chief legal counsel to Wikipedia for a while. Yet it has certainly been used in an attempt to do just that. FWIW, I was only focusing on how peer groups can cluster around ideas and shared beliefs and then not only self-censor but seek to impose restrictions on others. History is replete with examples of this. Are we going to expand Godwin’s law to prevent any historical references to people doing this? That would seem to make even the threatened invocation of Godwin’s law itself a means to squelch conversation, which seems recursive and circular.
FOD is much improved
1. graphs included in high and low res
2. very upfront WRT uncertainties
3. Some very specific complaints made about Ar4 are addressed openly and fairly.
I’m only talking about the one area I care about.
SteveF– It’s true no one can assess the accuracy of Zeke’s claim. Nevertheless, if he’s receiving copies after making an agreement not to make the FOD public, you can’t fault him for keeping his word about that. I suspect in due course FODs will be leaked by someone– even if it’s through carelessness. Though by the time they leak, other drafts or the final may be available.
To those on the AR review team – please can you ask for a fully referenced, transparent account of the basic temp sensitivity to CO2 derivation to be included. If you have an existing reference to hand please post it. I’m thinking a self contained and consistent reference piece which includes the basic model assumed, governing equations, physical measures and definitions, calculations and supporting evidence. Thanks.
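For concreteness, the zero-dimensional relation usually quoted as a starting point is sketched below; the logarithmic forcing fit is the Myhre et al. (1998) one, while the sensitivity parameter here is purely illustrative, which is exactly why the fully referenced derivation being requested would be useful:

```python
import math

# Zero-dimensional sketch of the basic CO2 sensitivity relation.
# Forcing fit from Myhre et al. (1998); lambda_ is an assumed,
# illustrative value, not a derived one.
C0 = 280.0      # reference CO2 concentration, ppm
C = 560.0       # doubled CO2, ppm
lambda_ = 0.8   # climate sensitivity parameter, K per (W/m^2) -- assumed

dF = 5.35 * math.log(C / C0)   # radiative forcing, W/m^2
dT_eq = lambda_ * dF           # equilibrium surface warming, K
print(f"2xCO2 forcing: {dF:.2f} W/m^2 -> equilibrium warming: {dT_eq:.1f} K")
```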
Lucia,
I was not suggesting that he betray his word, nor would I. I was suggesting that claiming a document which is not available to most of those reading this blog is “greatly improved in many, many areas” is pretty much meaningless. Not even an “IMO” qualifier, just “greatly improved”…. come on Zeke!
Steven Mosher,
No need to be coy Steven; which area is that?
I figure Mosher is interested in temperature reconstructions.
I thought for Mosher sensitivity was the only question?
Maybe Steve should clarify–maybe he’s only interested in one thing at a time…
I think SM is distracted by the near-completion of his 7th generation Turing-bot..
Or maybe the bot is putting the finishing touches to SM7?
The FOD and the ZOD are indeed significantly different in some areas. Those who have access to the FOD have agreed that it should not be “cited, quoted or distributed”. Mosher may already have slightly overstepped the mark.
The reason for saying this is to point out that a detailed analysis and critique of the ZOD would be a bit of a waste of time. The FOD is currently being analysed by ‘experts’ (that is, those of us who ticked the ‘I am an expert’ box on the IPCC form).
Detailed blog discussion is not a good idea because this could skew the review process. Suppose an influential blog picks out one issue from the ZOD and makes a big fuss about it. This could influence reviewers to focus on this one issue at the expense of others.
I thought I knew the one thing Mosher was interested in, but I doubt the AR5 covers Old Pulteney.
PaulM – FWIW I think the whole process should be open, transparent and online. IMO it would result in a better quality document which overcame invalid objections as part of its development whilst accepting and acknowledging valid ones.
“Detailed blog discussion is not a good idea because this could skew the review process. Suppose an influential blog picks out one issue from the ZOD and makes a big fuss about it. This could influence reviewers to focus on this one issue at the expense of others.”
Are we talking science here or advocacy and politics? Sorry, but your reasoning here seems more like a lame excuse for something less than complete transparency. I think, in essence, you are saying that people in the process may overreact to transparency and open discussions. Now why would they overreact?
The semi-transparency that evidently exists in this process currently allows comments like those from Zeke and Steve Mosher, which raise the question of who is to judge the process.
I am thinking that what we have here is the IPCC’s version of Freemasonry. Unfortunately it involves a governmental body, unlike the Freemasons.
PaulM,
I’m with curious on this one. Sure, each draft is a work in progress, and people should treat it that way, but those involved should have absolutely nothing to hide. If the ‘experts’ writing the drafts fail to cite or improperly discount a relevant reference, there is no reason to not have this pointed out, on a blog, or by some other means.
.
More importantly, if you really are going to trust the quality of the sausage, you ought to be familiar with the sausage making process. The way in which the final draft of AR4 came into being caused a lot of people to question if those involved (especially those in control of the final content, many of whom had a vested interest in proving their own publications ‘correct’ and conflicting publications ‘wrong’) were acting as dispassionate experts or as advocates.
.
An open process could only help AR5 gain credibility…. if the process honestly addresses ‘the science’, and not ‘the politics’.
PaulM:
As opposed to the way it works now…where the anointed inner circle gets to skew the focus on one issue at the expense of the others. (To be crystal clear, by “anointed inner circle”, I mean people like Michael Mann and Eric Steig. The ones who’ve “paid their dues” get precedence here.)
Regardless, consensus statements are always highly politicized (internal science politics) as well as being skewed by the potential policy outcomes associated with different issues. I see no reason that making it more transparent wouldn’t reduce to some extent the inevitable distortions associated with a document like this.
Good post at CA on this issue. A comment from “hereinsd” claims a link to a .rar of the full ZOD:
http://climateaudit.org/2012/01/26/another-ipcc-demand-for-secrecy/
Over the last few years there has been plenty of interesting science relevant to climate. For example SteveF cites ocean related work that has essentially been ignored in the ZOD.
If there is any discussion of CERN’s “CLOUD” experiment in the ZOD I have yet to find it. Maybe the FOD will address that?
PaulM says that those of us who are working our way through the ZOD are wasting our time. He is probably right but we may unearth something of interest. It seems likely that the FOD and all future versions will be leaked at some point so the unwashed public won’t have to wait until the final version is handed down from Mount Olympus.
Dave Appell is fighting for the public’s “right to know”. The IPCC can play “whack a mole” as much as it likes but the ZODs will stay on the Internet until the FODs render them irrelevant.
But not at Tamino’s.
toto–
Who is Tamino criticizing? I couldn’t find a link. Does Tamino’s article have anything to do with the work SteveF cited? (Real questions.)
Toto #89071,
I have no idea why you think Tamino’s post has anything to do with my observation of a complete lack of recent ocean heat data in the ZOD. I am well aware that there exists the potential for relatively short term variation in ocean heat around a secular trend. (see for example my post from last July 26th which contained the following graphic: http://rankexploits.com/imageDiversion.php?uri=/musings/wp-content/uploads/2011/07/Fig3.jpg.)
The recent (post 2003) rate of heat uptake is clearly less than it was between ~1990 and 2003 (http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/), and this is confirmed by a lower recent rate of rise in sea level (see my post: http://rankexploits.com/musings/2011/estimates-of-mass-and-steric-contributions-to-sea-level-rise/).
.
So please explain what you think is relevant in Tamino’s post to the lack of information on recent ocean heat content in the ZOD.
toto (Comment #89071) —
In the linked post, “Fake Predictions for Fake Skeptics,” Tamino condemns a skeptic for taking 2003 as the starting point for a consideration of the rate of ocean heat accumulation. In the post’s graphs, Tamino shows why 2003 is a cherry-picked starting point that will lead to an aberrantly low estimate, since that year’s and the following years’ values were unusually high.
Tamino doesn’t identify which skeptic he has in mind; regulars in the comments are enjoying the exercise of unveiling the target.
Toto, does Tamino have SteveF in mind? If not, are you implying that SteveF’s findings depend on placing the start of the analysis near 2003? It would be helpful to have the target make his or her case: could there be other considerations that weigh against a 2003 (or so) start being an exercise in post-hoc cherry-picking?
Tamino’s semi-Socratic style is not to my taste. Certain previous attack-posts of his turned out to have much less substance than initially seemed to be the case. So that’s not a good motivator to play Sherlock Holmes.
“PaulM says that those of us who are working our way through the ZOD are wasting our time.”
The information from the ZOD may change in form in the later and final versions, but the process that all this reveals is very relevant to interested parties. Is it the process and the outside influences on that process that the IPCC wants to avoid? And why would it want to avoid those influences?
I also asked the question at CA about anyone knowing where the influential MSM organizations like the NYT and Washington Post stand on the attempts by the IPCC in keeping the process (in real time) secret.
Lucia:
As far as I can tell, the tapeworm in his belly.
It’s just another nonsense blog post refuting nonsense arguments that nobody actually makes.
AMac:
Yep, like most of that crowd, he makes up with bluster what he lacks in substance. As with his blog denizens, birds of a feather and all that.
Re Toto #89071:
Come on now Toto, surely there are some things there that don’t pass your smell test? What Tamino has shown is that if you arbitrarily pick the period from which to a) linearly extrapolate, and b) baseline, you can end up with a line that lands near current observations (why 1993-2003?). Imagine if a skeptic had done such a thing to show a greater divergence (and I’m sure they have, and they would rightfully be criticized).
If this is related to the previous Bob Tisdale/Tamino argument and GISS, I actually calculated the OHC from the CMIP3 model run outputs rather than simply linearly extrapolating, and noted that baselining based on the overlapping 20th century period would indeed seem to show 2003 as the point where they start to diverge. But of course there are different tactics to use with baselining, as Lucia has gone over countless times before. My only complaint would be (as she mentioned) that you SHOULDN’T baseline based on one period to verify that the 20th century hindcast looks pretty good, then rebaseline to make the current projections look better (while chopping off the 20th century hindcast that would now look poor).
Anyhow, the issue has been the recent flattening in the upper 700 m OHC anyway. In the updated Katsman and van Oldenborgh (2011) paper, which uses one of the models (ECHAM-MPI) with some of the largest interannual variability, they have noted (after correcting their math) that (according to this model) there is a 25-30% chance of getting a similar 8-year flattening over a 30-year period (other models, such as the GISS ones, would likely project a 0% chance due to poor variability). Even if we were to accept this methodology (and there are several issues I point out) and think that such a flattening has a decent chance of happening over a 30-year period, it certainly would seem to be of interest that it happened in the first 8 years of the projection, and right when the ARGO network greatly increased its coverage.
.
Only if his tapeworm is called Bob and posts on WUWT.
.
.
Because that’s what Bob chose! He took the model trend over the 1993-2003 period, and then drew a line with the same trend, but starting from the 2003 point, instead of just continuing the curve. Tut-tut.
.
That being said, Tamino’s comparison is not entirely apples-to-apples, because (IIUC) he takes the trend from the observations, rather than the models as Bob does. But I understand that models do pretty well over 1993-2003 for surface temperature, so I’m not sure that would make much of a difference.
.
My own take from this is that OHC may well have “flattened”, but I’ll wait a couple more years to be sure.
Troy_CA,
The divergence of ocean heat uptake rates from model projections is a very sensitive subject, because it has important knock-on implications, including confirmation that the ocean parts of the models consistently predict too much heat uptake. If the real ocean heat uptake is lower than the models predict, then the sensitivity to forcing has to be lower than stated by the models, since more heat is being lost to space for the same increase in temperature. In addition, getting the response to volcanoes right when the ocean heat loss is lower than thought demands a substantially lower sensitivity, since the short term forcing level associated with volcanoes is not so loosey-goosey as the “aerosol adjustments” to the GHG forcing history.
.
Which is a complicated way of saying: the combination of lower than modeled ocean/atmosphere heat exchange and reasonably well constrained volcanic forcing levels will indicate the model sensitivity is far from correct (too high). And that is indeed a touchy subject for Tamino and company.
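The bookkeeping behind that argument can be sketched in a few lines (all numbers are assumed, for illustration only):

```python
# Toy energy-budget bookkeeping (illustrative numbers, not fitted values):
#   N = F - lam * dT   =>   lam = (F - N) / dT
# Lower ocean heat uptake N at the same forcing F and warming dT implies a
# larger feedback parameter lam (more heat shed to space per degree of
# warming), and hence a lower equilibrium sensitivity.
F = 1.8    # assumed net forcing, W/m^2
dT = 0.8   # assumed warming, K

for N in (0.9, 0.5):                # "modeled" vs. lower "observed" uptake, W/m^2
    lam = (F - N) / dT              # feedback parameter, W/m^2 per K
    S = 3.7 / lam                   # implied warming for 2xCO2 (3.7 W/m^2), K
    print(f"N = {N:.1f} W/m^2 -> lam = {lam:.2f} W/m^2/K, sensitivity ~ {S:.1f} K")
```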
Re Toto #89079:
Oh, how I wish Tamino had linked to the actual argument he was trying to refute (as this would explain that the 1993-2003 originally came from Hansen 2005). Of course, he doesn’t really counter Bob’s argument, instead (as you mention) comparing trends from observations to observations in order to discredit Bob’s comparison of models to observations. And it’s not clear why Tamino’s analysis would be any more valid than Bob’s, or why this was brought up to refute SteveF’s point about the ZOD (or even the flattening in general?). If anything, the Tamino gripe with Bob is about baselines (see my previous comment for what you get with the 20th century baseline), which has nothing to do with the difference in trends.
Toto #89079,
.
{Slaps forehead and groans} Of course they match the history… they have been ‘tuned’ using aerosol offsets!
Models (and not just climate models) are only valid when they make valid predictions, not when they accurately match known historical data. I kind of hope you are joking, but I fear you are not.
toto:
Got it. Tamino is too big of a wuss to link to the post he’s criticizing. Speaking of Bob, isn’t he comparing with ARGO, which starts in 2003?
The only real difference between Bob and tamino is Bob doesn’t pretend to be a statistician, whereas tamino sometimes does.
Very observant Carrick, very observant. 🙂
http://t.qkme.me/59u2.jpg
Carrick–
When I read the post I both wondered who made the “fake” prediction and who made the “real” prediction that supposedly panned out. Back in 2003 or before, did anyone make a “real” prediction that temperatures would follow the 1993-2003 trend?
Toto– I don’t know why Tamino doesn’t link to the analysis he is criticizing. It’s difficult to think of a valid reason. The one that seems most probable is that if he links to the argument, then people might be able to see that Tamino’s own argument is engaging a strawman, or is just a red herring. Or it might be possible for someone to clarify that Tamino didn’t really engage the argument by the unnamed fake skeptic. The fact that Tamino’s fans might guess who he is criticizing doesn’t really help much since they aren’t likely to dig through anyway.
Lucia, that’s why posts like this are difficult to critique (or even pointless).
I can think of one reason other than not wanting to give WUWT any extra traffic why he might not label the person he’s critiquing by name: He can say anything he wants about a hypothetical “composite person” without violating his terms of usage.
Remember what happened when he did that nice little attack piece on Donald Rapp?
Anyway, if it’s transparent enough that toto can say who he was talking about, pretending that it’s a generic criticism probably wouldn’t fly if somebody had a legitimate complaint (in fact it would look worse, because it could be seen as evidence you were aware that what you were saying might be in conflict with your TOS agreement).
Chuckles, great pic, lol.
Actually this photo reminds me of politicians on the campaign trail who go out and try and act like they’re “normal people.” The only one I know who could pull it off was Bill Clinton, ’cause IMO you always look pretty cool blowing on a sax.
SteveF:
Even if they were, that’s my point. Replacing observations with models shouldn’t change much.
.
Carrick
Dude. You know better.
.
1- His data is the same as Tamino’s. He just restricts it to the “ARGO era”. Nothing wrong with that. However…
2- …He then takes the 1993-2003 trend, and “extends” it from 2003 onwards – but silently shifts it up by zeroing it at 2003 (i.e. within an up-fluctuation in the series). Which he can get away with, because he doesn’t show the pre-2003 data.
3- He then claims that the post-2003 ARGO data is incompatible with the (silently up-shifted) “trend”. Ta-da!
.
Tamino just shows the pre-2003 data, and the real 1993-2003 trend. This real trend, without the spurious up-shift, looks fully compatible with the ARGO data.
.
Lucia:
They might, if it were.
Re Toto:
Toto, Bob has not done anything to the *trend*. He has shifted the intercept of the line on the chart. There is nothing more “real” about Tamino’s choice of baseline/intercept. You understand that because the absolute ocean temperatures output by the models are not correct, and because the NODC OHC is calculated as an anomaly, graphing the two together must be done relative to some common baseline? Tamino has chosen to use 1993-2003 as the baseline, Bob has chosen 2003 to zero at. As I pointed out, if you use the whole overlapping period in the 20th century and GISS-ER, Bob’s graph would appear the more correct of the two. I agree that Bob’s intercept was likely designed to show the maximum discrepancy for his purpose, and that he may have ended up with a correct alignment according to my description above due to luck. But this has nothing to do with any adjustment to trends…indeed, no amount of up or down-shifting the line will affect the discrepancy in trends.
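The point that shifting a series never changes its trend is easy to verify numerically; a minimal sketch with a synthetic series:

```python
import numpy as np

# An OLS trend is unchanged by shifting the whole series up or down;
# only the intercept (where the line sits on the chart) moves.
rng = np.random.default_rng(0)
t = np.arange(2003.0, 2012.0)
y = 0.3 * (t - 2003.0) + rng.normal(0.0, 0.5, t.size)   # synthetic anomaly series

slope1, intercept1 = np.polyfit(t, y, 1)
slope2, intercept2 = np.polyfit(t, y + 5.0, 1)          # same series, baseline shifted up

print(np.isclose(slope1, slope2))   # True: identical trends
print(intercept2 - intercept1)      # ~5.0: only the offset moved
```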
Toto:
Not really. No insult meant to Bob, but I generally don’t read his posts.
I don’t claim any particular knowledge of his post or tamino’s unrelated comment to a pretend skeptic.
toto:
Trends aren’t affected by baseline shifts, “spurious” or otherwise.
I think you need to tune this discussion a bit. I’m still not planning on reading Tamino’s post regardless. Pointless, since it was written about a fictional person or tapeworm.
(And if he was actually writing about a real person, I have no admiration or respect for a coward who can’t admit this.)
Also, Tisdale does publish the slopes in his figure and he does specify that he’s comparing against the Hansen (2005) model.
Is that a bad thing to do because their trends really don’t agree very well? (So the real problem is you aren’t a “team player” by noticing it and you’re not “helping the cause.”)
I have no particular comment over where the baselines should be … because it’s just visual candy here and tests nothing. (I will say that during the “verification period”, same 1993-2003, the model and data set should be shifted to have the same baseline, and the data and model ensemble mean should be shown for the full period.)
I can understand the point of comparing data to model predictions, which comparing e.g., 1990-2000 doesn’t necessarily do (I suspect I would get the same trend using a zero dimensional “two box” model and the same assumed forcings…so it’s my opinion that this comparison isn’t much a test of the skill of a GCM).
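A toy version of such a zero-dimensional two-box model is below; all parameters are assumed round numbers for illustration, not tuned to any GCM or to the actual forcing history:

```python
import numpy as np

# Toy two-box energy balance model: a shallow upper-ocean box exchanging
# heat with a deep box, forced by a prescribed ramp F(t). Parameters are
# assumed, illustrative values only.
dt = 0.1                      # time step, years
t = np.arange(0.0, 30.0, dt)
F = 0.04 * t                  # assumed linearly ramping forcing, W/m^2

Cu, Cd = 7.0, 100.0           # box heat capacities, W*yr/m^2/K
lam, gamma = 1.2, 0.7         # feedback and box-exchange coefficients, W/m^2/K

Tu = np.zeros_like(t)         # upper-box temperature anomaly, K
Td = np.zeros_like(t)         # deep-box temperature anomaly, K
for i in range(1, t.size):
    dTu = (F[i-1] - lam * Tu[i-1] - gamma * (Tu[i-1] - Td[i-1])) / Cu
    dTd = gamma * (Tu[i-1] - Td[i-1]) / Cd
    Tu[i] = Tu[i-1] + dt * dTu
    Td[i] = Td[i-1] + dt * dTd

print(f"Upper-box warming after 30 yr: {Tu[-1]:.2f} K")
```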
In any case, what you’d like to do is look at the difference in trends between model and data relative to their uncertainties.
So it is a legitimate question: “If the model mean continues to diverge from the observations, how many years are required until the models can be said to have failed?”
I’m sure tamino didn’t address this, since his comment was about a fictitious person or tapeworm, obviously unrelated to Tisdale’s post and thereby safe from any TOS issues.
Apologies for not finding the right thread. This is back to interpretations of Hansen ’88 – although I don’t think specifically his Congressional testimony, so maybe there is no public announcement of BAU to take into account…
This is a Michael Mann lecture just linked at Bishop Hill. http://e-education.mediasite.com/mediasite/Viewer/?peid=526d1c0752384b8ab2bf8ab4cd47603f1d
I seriously started listening to the lecture with an open mind – I just thought I’d listen to his side of the story, and thought I’d end with some sympathy for the personal abuse he’s received. Well, I couldn’t listen beyond 10 minutes, because it seemed such a warped version of reality. If anybody is interested, there is an astonishing graph at 9 minutes showing how fantastically accurate Hansen’s predictions were. They look even more wonderful than they did at SkS [with no need for the excuses about why they were a bit ‘off’].
There are various issues, but apart from only showing scenario B [cf Michaels?], the observed data stops at 2005 – even though the lecture was within the last week 😉 Can anybody suggest why that might be?
Quite apart from that, I would simply characterise [the part that I watched] as dishonest. I can’t put it any other way. It boggles my mind.
Hansen provides his own update of the 1988 predictions versus the temp record here. (Mann is using the land-station data only and cutting it off – which he is prone to do of course – wherever he sees fit).
http://www.columbia.edu/~mhs119/Temperature/T_moreFigs/PNAS_GTCh_Fig2.gif
Carrick (Comment #89093) January 27th, 2012 at 5:27 pm
“Also, Tisdale does publish the slopes in his figure and he does specify that he’s comparing against the Hansen (2005) model.”
But he isn’t. He’s just taken a simple statement by Hansen that the model got a value of 0.6 W/m2 for 1993-2003 (which seems reasonable) and claimed that same number is a prediction for some future time, which he has falsified. His lame justification is
“It appears to be common practice by the climate science community in their presentations of data at their blogs to extrapolate model projections and to shift or offset data.”
Regarding Toto’s comment that I should know better (about using post-2003 versus all of the data).
Here’s a quick movie I made of the coverage by year.
Clearly the coverage is much less complete prior to the ARGO network being introduced. Toto is simply wrong on this account.
1991 is here at the NODC site.
Nick, how does that have absolutely anything to do with what you said: “He’s just taken a simple statement by Hansen that the model got a value of 0.6 W/m2 for 1993-2003 (which seems reasonable) and claimed that same number is a prediction for some future time, which he has falsified”?
What did he “falsify”?
Put it another way: what should he have done instead in order to compare the model ensemble output with observations? Wouldn’t it be the case that, given that Hansen generally assumes (as is correct) that CO2 forcing is increasing over time, using this number is, if anything, a conservative approach?
Put a third way, are you actually claiming Hansen would have predicted a lower OHC uptake for 2005-2011? If not, how is this a substantive misrepresentation of the models (or Hansen’s account of them)?
Nick Stokes (Comment #89101),
Actually Nick, James Hansen continues to claim an overall energy imbalance of ~0.6 watt/M^2 for 2005 to 2010. (http://www.columbia.edu/~jeh1/mailings/2011/20110415_EnergyImbalancePaper.pdf)
He manages this by dismissing all the lower calculations of ARGO heat from 0-700 meters, which are in basic agreement with each other, and relies instead on the single outlier high value from von Schuckmann. Why does this not surprise me? Maybe it is because Hansen has claimed high ocean heat uptake for a decade (“in the pipeline”, “unavoidable” and all that)… no matter what the actual data happen to be saying.
So Tisdale doesn’t seem all that far off with a continued “0.6 watt/M^2” imbalance. I can’t speak to any claimed justification, but I think the one who is really in need of justification is James Hansen.
Carrick (Comment #89103)
Here is Bob’s original quote, which I raised with him.
“I zeroed the data for my graph in 2003, which is the end year of the Hansen et al (2005) graph, to show how poorly the model projection matched the data during the ARGO-era, from 2003 to present.”
“show how poorly the model projection matched the data” sounds like falsified to me, but substitute whatever word you prefer. But what he has isn’t a model projection. No model calculated that. Hansen wasn’t making any projections. There weren’t any scenarios. He was using known forcings.
If he were to make a projection, it would have been based on a scenario for future forcings, and who knows what he would have chosen? The point is, he didn’t do it. It’s Bob’s creation.
Maybe I need to drunk-blog to follow your logic Nick.
This figure generated by Tisdale is probably the most representative of the “real picture.”
If I understand Nick’s sole objection, it’s that Tisdale extrapolated the GISS Model E from 2005 to 2011.
The problem I’m having with Nick’s logic is a) this isn’t an unreasonable thing to do, and b) as long as Tisdale doesn’t represent what he’s doing as “Hansen’s personal projection”, there is no problem with Hansen not having personally done the “laying of hands” on the projection before Tisdale put it on a graph. c) I’m in a bit of disbelief that Nick really wonders “who knows what he would have chosen”.
…
I’m pretty sure it wouldn’t have been a horizontal line.
That’s the part where I started feeling a need for booze.
Carrick (Comment #89107)
Missing link.
“as long as Tisdale doesn’t represent what he’s doing as ‘Hansen’s personal projection’”
He represented it as a model projection. It wasn’t.
A physically based model, starting from 2003 and aware of the recent rapid rise in OHC to 2003, might well have decided that there would be a pause. We don’t know. All we know is that Hansen cited a model result that in the decade to 2003, OHC would rise at 0.6W/m2. And the measured rise was rapid.
In the years following 2003, the trend was lower. Bob says that because the model got 1993-2003 right, and then the trend changed, that this change represents a failure of the model. But all we know is that the model got it right in the period cited. That’s not a sign of a model failing.
SteveF (Comment #89104)
“Actually Nick, James Hansen continues to claim an overall energy imbalance of ~0.6 watt/M^2 for 2005 to 2010.”
That’s energy imbalance, not OHC. For OHC, he cites von Schuckmann and Le Traon (2011) who deal with 60S-60N and say (Hansen):
heat gain 0.45, 0.55 and 0.60 W/m2
for ocean depths 0-700 m, 10-1500 m, and 0-2000 m, respectively, based on 2005-2010 trends. Multiply by 0.7 for global imbalance.
Bob was describing 0-700m, so the vS&LT figure, obtained after filtering, is 0.31 W/m2.
Nick Stokes:
Fortunately, the PCMDI CMIP3 archive contains the GISS model projections. I calculated the OHC for those here:
http://troyca.wordpress.com/2011/10/13/giss-er-ocean-heat-content-20th-century-experiment-and-a1b-projections/
It does not appear that the GISS model projected such a flattening. In fact, none of the 5 GISS-ER A1B runs included in the archive suggested the likelihood of such a flattening over the next 100 years. Of course, the GISS models do not include much variability in decade-scale trends…whether this represents a “failing”, and whether such a “failing” results from poor reproduction of natural variability or a general over-sensitivity can be argued, but I would not say there’s anything in the model projections that greatly diverges from a linear extrapolation of the trend during this time period.
Troy,
What you don’t have is a proper way of relating the A1B calcs to the 20Cen calcs. RC also did a linear extrapolation here (they didn’t say it was a model projection). You can see that the effect is very different from Bob’s, and it depends on where you start.
You aligned the A1B to a 1955-99 baseline by subtracting, not the means for those runs for this period, but the mean for a different set of runs. I don’t think that is reliable.
It’s true that the A1B runs have less variability than obs. But the 20Cen runs don’t. I suspect the reason is in the variability of the forcings, known in one case and not in the other.
Re: Nick Stokes:
The models output absolute ocean temperatures (from which I calculate OHC anomalies), so aligning them is simply a matter of determining the offset for each month. And I believe those A1B runs are simply a continuation of the 20th century runs…of the 9 20th century runs, 5 continue on to A1B, 1 goes to A2, 1 goes to B1, 1 goes to “committed” scenario, and 1 goes to another scenario (2xCO2?). For some reason the archive is simply missing the period from 1999 to 2003 in ocean temperatures (it’s there for other parameters). So I don’t think using the five exact runs is going to move the intercept much.
I doubt this is the reason. GISS poorly simulates ENSO, which has an effect on ocean temperatures and the TOA radiative flux. Compare GISS runs to ECHAM-MPI or GFDL, which include much more “unforced” variability. A cynic might think that the 20th century runs better simulate variability because the forcing history was influenced by attempting to simulate this aspect.
Regardless, my point was that using the linear extrapolation of the 1993-2003 trend to estimate the model projection was not too far off for GISS…indeed, you show Gavin doing the same thing (obviously, he picks a baseline that makes the projections look best, contrasted with Bob, although I think Gavin has also mis-labelled the GISS run as GISS-ER).
Nick Stokes:
It is a reasonable extension of the model projections.
He should have stated upfront what he did, other than that BFD.
Of course it wouldn’t, as you should be aware: the models lack the skill to explain why you get periodic deviations from the global mean trend. This deviation is understood to be (goes under the rubric) “short-period climate fluctuations”, and you can see a similar excursion between 1970 and 1980, not explained by the models.
If it’s anything, it’s evidence of model tuning.
Hopefully you are well enough trained to understand you don’t use the agreement between a model and prior measurements as validation of a model.
Nick Stokes,
Here is what Hansen et al wrote in 2005:
(Bold is mine.) And now Hansen’s best estimate is ~0.6 watt/M^2… and that is using a highball calculation of upper ocean warming which is in conflict with most other published values. This drop in imbalance has happened in spite of continued rapid increase in well mixed GHG forcing… more than 0.2 watt/M^2 higher in 2011 than in 2003. Please note that in 2005 Hansen et al claimed that a 10 year period (1993 to 2003) of rapid heat accumulation, based mostly on less certain XBT data, was enough to confirm a high value for imbalance. So I guess James Hansen (and maybe even Nick Stokes?) will agree we can safely look at 2001 to 2011 measured heat accumulation, based mostly on more certain ARGO data, to confirm a much lower estimate of imbalance.
.
What sticks in my craw Nick is that when the data support hysteria, people like James Hansen (and there are a multitude) always use those data as absolute confirmation of the extreme danger of CO2 emissions, but when the data don’t support hysteria, those data are either ignored or, if ignoring them is impossible, discounted with rapid hand waves and contorted explanations. As a rational person, Nick, surely you can see that this obvious shading of the analysis in favor of data which support your preexisting views reduces credibility…. and suggests an awful lot of confirmation bias in the field.
It appears as though the modelers want to edge their way past the concept of Post Hoc. None are saying, “Yeah, this is an instance of post hoc analysis.” But none are making a compelling case that these models aren’t in any way tuned with respect to aerosols and other forcings.
I was attempting to post these questions while the BlackBoard was off-line (for me anyway). I think these questions have been addressed in the meantime, but I continue to think we are not certain what we are looking at and why it appears as it does.
Interesting divergence problem discussed here in yet another area of climate science. I have not been following the discussion as closely as I should have to ask intelligent questions, but I’ll ask some questions anyway.
In the linked graph shown in the post Troy_CA (Comment #89110), the projection of the model(s) from 2003 into the future shows essentially a straight line. Before 2003 the model appears to follow the observed ocean heat content undulations reasonably well. The divergence of the model from the observations in the period from 2003 to present is obviously the point of the discussion here. It is obvious that the post-2003 model series is a projection using the A1B scenario. Was the pre-2003 period modeled using known forcings and GHG concentrations? Is the A1B scenario representative of the conditions existing between 2003 and the present? If not, can the difference explain the divergence? Or is there some other known limitation of the models that allows them to capture the real undulations of ocean heat content after the fact but produce rather straight line projections into the future?
Kenneth–
Dreamhost was down. It’s been down quite a few times in the past two weeks.
Just for the record here’s the link missing from Nick’s quote
It is customary to provide a url when selectively quoting another person, to allow others to see the context in which the quote was made; it is no better scholarship to extend somebody else’s projection without stating up front that you did so…even if it was obvious this is what he had done. What’s good for the goose is good for the gander, and it is a bit in poor taste to so vociferously attack another person’s work while having similar levels of scholastic lapse in one’s own work. It is also in very poor taste to never admit when you made errors in your criticisms. We need to hold our own words up to the same microscope that we hold others to.
I believe this was the missing link. Sorry, had I been drunk blogging I wouldn’t have made that error. 😉
Notice that the models don’t do that well with intervals of less than say 20 years. This is hardly surprising. They have a similar problem with short-period fluctuations in the atmosphere. It’s also understood that the same short-period fluctuations are observed in the ocean data, but with larger magnitude (as they probably originate in the coupling of the atmosphere and ocean systems, that is hardly surprising).
Tisdale asked in this oft-criticized figure the legitimate question, “if the model mean continues to diverge from the observations, how many years are required until the models can be said to have failed?”
Too bad he didn’t attempt to answer it. 😉
Nor did anybody really try and rebut his comment based on discussions of significance. When people choose to generate figures as Tisdale did, that seems to me to be a better and more constructive approach than trying to nickel and dime him to death.
Troy #89115
“And I believe those A1B runs are simply a continuation of the 20th century runs…of the 9 20th century runs, 5 continue on to A1B, 1 goes to A2, 1 goes to B1, 1 goes to “committed” scenario, and 1 goes to another scenario (2xCO2?).”
Yes, but as I understand your code, you align by subtracting from A1B the 1955-99 mean of all 9 20Cen runs, not just the 5 ancestors of the A1B (if that’s what they are).
Carrick
“It is a reasonable extension of the model projections.”
I don’t have any objection to seeing what an extrapolation would look like. But if it fails, you can’t say that the model failed. Bob’s logic goes like this:
We have these known trends:
s1: observed 1993-2003
s2: modelled 1993-2003
s3: observed 2005-2010
s3<s2, so "the model projection poorly matched the data".
But equally, s3<s1 – no model there. The only thing you can deduce is that the trend changed. The model matched s1 (s1=s2) approx, which is all that can be expected.
SteveF,
“surely you can see that this obvious shading of the analysis in favor of data which support your preexisting views reduces credibility”
No, Hansen’s estimate of imbalance came down from 0.85 W/m2 in 2005 to 0.6 W/m2. That seems to be an outcome of the recent lower OHC rate of increase. Maybe you think it should have come down more, but I don’t see that the vS&LT estimate is an outlier.
> The model matched s1 (s1=s2) approx, which is all that can be expected.
Well.
If that’s how you look at things when the outcome favors your friends’ ideas, consistency would demand that you extend the same leeway to your adversaries, when similar circumstances obtain.
Which, of course, they often do. Since all that’s required to find a model s2 that plausibly matches observation set s1 is to look hard enough. Or tune extensively enough.
Will you be awarding points to the various coolers and lukewarmers for their s2/s1 matches? I doubt it. This strikes me as closer to advocacy than inquiry.
Nick:
I’ll start by pointing out again, I don’t think Tisdale actually showed that anything failed. (See my comment above about other periods with similar excursions from the model mean.)
I agree that he should have said something like “this is my projection based on the ensemble mean of the models, and I believe that this extrapolation is valid for the following reasons_____.”
However, to say that the models would have given a flat response for that period is I think a bit historically blind…. we know the state of the models in 2011 and in 2003, and we know the state of the assumed forcings in 2011 and 2003, and there is nothing in either of these that could have given a flat response in the ensemble mean of the models between 2003 and 2011.
I suspect one could reproduce the OHC from 2003-2010 from available model runs if one cared that much. I don’t care that much and think that Tisdale’s extrapolation is reasonable, and I would be very surprised if, after going through the extra effort of downloading and computing OHC from those model outputs, we found it to be that much different.
If you want to be overly legalistic about what is “proven” and “not proven” you can continue to do so. Personally I don’t find that a very constructive way forward.
Carrick,
No, nor would I expect them to, any more than I would expect a straight answer from a ‘progressive’ to the question: “What is a fair absolute maximum tax rate for the rich?”. Directly answering a question like Bob’s puts you on record. The climate science community seems to never put themselves at risk of being proven wrong with clear answers to clear questions about projections. That’s why there are so very many mealy-mouthed “may/might/could/potentially” projections in climate science. The truth seems to be 1) they don’t know, 2) they know they don’t know, and 3) they want the public to think they do know.
SteveF, getting them to give straight answers here is a bit like getting straight answers to some of the issues being raised on noconsensus about the eventual effects of warming (adverse versus beneficial). It seems the more they know, the less they want to talk about the details. (They leave village id*ots like Joe Romm to do the blathering about harmful effects, and don’t contradict him, even when they know he’s wrong–wouldn’t help the cause if they corrected him.)
Then there’s the part of me that’s astonished about them being astonished by how their credibility is tanking as the process gets more transparent (like that unfortunate release of behind-the-scenes emails that showed the real BAU scenario.)
AMac (Comment #89124)
“Will you be awarding points to the various coolers and lukewarmers for their s2/s1 matches? I doubt it. This strikes me as closer to advocacy than inquiry.”
No, Amac, you’re reading stuff into my comment that just isn’t there. I’m not saying the agreement between s1 and s2 proves it is a good model. I’m just saying it isn’t ground for criticism. It’s the only result there was, and the model was criticised.
“I doubt this is the reason. GISS poorly simulates ENSO, which has an effect on ocean temperatures and the TOA radiative flux. Compare GISS runs to ECHAM-MPI or GFDL, which include much more ‘unforced’ variability. A cynic might think that the 20th century runs better simulate variability because the forcing history was influenced by attempting to simulate this aspect.”
In the name of science, should we not be discussing the points that Troy made here and looking for further elucidation from other climate models? Bob T’s points are a starting point for a reasonable discussion, so why do we get into a he-said, he-said discussion with the defense attorney for the consensus?
Nick Stokes (Comment #89133)
> No, Amac, you’re reading stuff into my comment that just isn’t there. I’m not saying the agreement between s1 and s2 proves it is a good model.
OK, re-reading your comment, I see that your claim could be a very limited one. Apologies for over-interpreting.
Kenneth:
Well I agree, and I’ve raised a number of points, which for brevity I’ll repeat here. These are comments I made previously in an email, and maybe they bear repeating:
I think our defense attorney prefers to obfuscate the real issue, which is do the models agree with the OHC data, and if not, why not. I’m interested in hearing your comments.
OK, now that we are past Bob T’s errors, let us get on with the points made by Troy. Are the other model outputs available for OHC? I know that KNMI has much climate model data available at their site – maybe I should look there.
Kenneth, I think he makes similar comments to mine, which are interspersed with my critique of Bob. Also, Bob did start the ball a’rolling, so we can use what he did, guided by our 20-20 hindsight, to work out how to improve on it. Here are my comments:
1) There is more variability in the data than in the individual model runs.
2) Much of this variability is likely associated with short-period climate fluctuations, which the models do not accurately reproduce (even in an ensemble sense).
3) We know by looking at the spatial scale of *most* of the models and the spatial scale of the atmosphere-ocean coupled oscillations that these models actually shouldn’t do a good job (rather they should produce a low-pass filtered version of it, because their spatial scale is too coarse… there are a few models coming online which are supposed to correct this).
4) The ensemble model mean will be a “smoothed out” version of the model response, regardless of whether the individual models accurately reproduced the variability seen in climate, so you need to plot both the model mean and the ±2 sigma variation about the mean in any graphical comparison.
5) You still need to estimate the uncertainty in the OHC trend based on the variance observed in the OHC and the duration of the trend estimate.
6) Finally you need to actually look at trend_{OHC} – trend_{model} and test it against the null hypothesis (that they are equal).
What is the p value for that test?
And that’s the question that should be answered from any study.
In doing this one needs to be cognizant of the shift in the geographical distribution of OHC measurements over time, and correct e.g., for the step function observed between 2003-2005. (However, this needs to be done in an objective fashion…for which I have some ideas…and not just by “eyeballing it”).
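As a rough illustration of step 6, here is a minimal difference-of-trends test with a common AR(1) inflation of the trend uncertainties; the series below are synthetic stand-ins, not the actual OHC or model data:

```python
import numpy as np
from scipy import stats

# Sketch: test trend_obs - trend_model against the null that they are equal,
# inflating each trend's standard error for lag-1 serial correlation
# (a common rule of thumb, e.g. the sqrt((1+r)/(1-r)) factor).
def trend_with_se(y):
    t = np.arange(y.size, dtype=float)
    res = stats.linregress(t, y)
    resid = y - (res.intercept + res.slope * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    infl = np.sqrt((1.0 + r1) / (1.0 - r1))         # variance inflation factor
    return res.slope, res.stderr * infl

rng = np.random.default_rng(1)
obs = 0.2 * np.arange(16) + rng.normal(0.0, 1.0, 16)   # toy "observations"
mod = 0.6 * np.arange(16) + rng.normal(0.0, 0.3, 16)   # toy "model mean"

b_o, se_o = trend_with_se(obs)
b_m, se_m = trend_with_se(mod)
z = (b_o - b_m) / np.hypot(se_o, se_m)
p = 2.0 * stats.norm.sf(abs(z))
print(f"trend difference z = {z:.2f}, p = {p:.3f}")
```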
Comments?
Carrick, nobody said it would be an easy analysis. I have been duly forewarned.
KNMI does not have the OHC data, but it did give me a lead as to where I can obtain it. And that is the easy part. Maybe we can impose on Nick to undertake this project at his blog.
Ciao Lucia. Lots of talk on this thread about Tamino’s latest disagreement with my simple short-term ARGO-era OHC graph. Troy CA, toto, SteveF, Carrick, Bill Illis, Nick Stokes, Kenneth Fritsch, and maybe a few others have been discussing it.
As far as I can tell, and sometimes it’s tough to tell with Tamino’s complaints, his primary concern is where I have shown the intersection of the ARGO-era OHC data and an extension of the linear trend of the model mean from the GISS-Model ER that was used to simulate OHC in Hansen et al (2005). He definitely does not discuss the observed ARGO-era trend.
Hansen et al did not illustrate the data back to the 1955 start of the dataset for a number of reasons. They only showed the model and observational data from 1993 to 2003. The ensemble members and ensemble mean data for the Model-ER simulations (hindcasts and projections) are not available in an easy-to-use format (like at the KNMI Climate Explorer). So I’ve based my depiction of the model mean on what Hansen has stated was the trend during that 1993 to 2003 period. There have been comments in the past about my simply extending the trend of the GISS model mean to cover the ARGO-era period. I’ve rebutted those by linking the 2009 and 2010 model-data posts at RealClimate to show that even Climate Science by Real Climate Scientists presents the data the same way, as an extension of the linear trend for the decade starting in 1993. See here: http://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/
And here:
http://www.realclimate.org/index.php/archives/2011/01/2010-updates-to-model-data-comparisons/
The paper does not state what anomaly base period was used for their Figure 2, but it looks as though they’ve simply zeroed the model mean and the OHC data at 1993. One of my arguments has been that their zeroing of the two datasets in 1993 and my zeroing them in 2003 are no different. Regardless, where the extension of the model mean intersects with the ARGO-era data would depend on the base years used for anomalies. If you look at the two RealClimate posts, Gavin did some base year jockeying to accommodate the NODC’s 2010 revisions to their OHC data. That OHC update lowered the long-term (1955-2010) trend by about 9% if memory serves, so Gavin needed to shift things around. Notice that he also stopped showing the first decade and a half of data in the 2010 update. Wonder why? So shifting base years is not uncommon.
Further to that, I replicated the long-term GISS Model-ER ensemble mean for OHC from a presentation Gavin gave in 2008. See Gavin’s OHC graph on page 8:
http://map.nasa.gov/documents/3_07_Meeting_presentations/Schmidt_MAP.pdf
I’ve discussed my replication of that data in a post that I’ve linked to the three recent OHC updates, but apparently few people read links.
In the following graph, I’ve replaced the older version of the NODC OHC data with the current version. It presents the OHC data and my replica of the model hindcast/projection with the base years set to the full term of the data 1955-2010. That way no one can accuse me of cherry picking the base years:
http://oi41.tinypic.com/117fx1c.jpg
And here’s the graph of the OHC data and the model projection when I shorten the two datasets to the ARGO-era:
http://i41.tinypic.com/eklz6x.jpg
The presentation’s just about the same as the graph that Tamino complains about, except that it’s shifted upwards instead of sitting at zero anomalies.
If and when GISS makes their Model-ER simulations of OHC for the top 700m available in an easy to use format—at the KNMI Climate Explorer would be nice—I will be able to present the actual GISS Model-ER ensemble mean data and trend based on that data. Since that’s not likely to happen, I’m stuck with the other way that for some reason attracts so much attention.
Regards
Bob Tisdale:
Remember not to fixate on the mean value and trend value. There are uncertainties in these quantities that you need to show as well (in both data and model mean ensemble).
OTOH, this is a decent graph you made. I like it far better than the truncated version, for reasons I gave above.
(Note the step function when ARGO came on line too… probably this is an artifact of the shift in geographical distribution of the measurements, and in any case needs to be corrected for in some, hopefully objective, manner).
Nick:
Yes, this is what I did, but this has little effect, and I don’t think it is fair to say that this would render the result not “reliable” (your comment #89112). Even if the 5 20th century runs that went on to become the A1B runs just happened to be clustered rather than interspersed among the 9 runs (an extreme case), this would shift the baseline a maximum of 0.48 (10^22 J) from the 9-run average I actually used. Compare that to the approximately 7 (10^22 J) discrepancy we see (in 2010), and even that extreme case doesn’t cause a shift of the same magnitude. This makes sense, as GISS-ER doesn’t produce much unforced variability in OHC on interannual scales, much less on 45-year scales.
Re Kenneth #89136:
I calculated the OHC for ECHAM-MPI here (I think there were only 2 A1B runs archived), and you might find the KO11 paper interesting as it kind of sort of tries to determine the probability of getting an 8-year flattening based on the ECHAM-MPI model runs (although it reads more like a defense of why such a flattening is not exceptional, a recent correction to their math suggests that it is less likely than originally published).
All of the CMIP3 runs are archived at PCMDI, so it is possible to get a good representation for OHC by processing the thetaO (ocean temperature by depth) files. The only problem is that a) the files are large, and b) it can take a while to write custom processing for each of the different models, since they use different horizontal and vertical resolutions. If you can find the finished global OHC anomaly product elsewhere I would recommend that instead (and pointing us all to it!)
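For anyone considering it, the core of that processing is just a volume-weighted integral; the sketch below assumes a toy regular lat/lon/depth grid, whereas the model-specific grids and cell volumes in the real thetao files are exactly the tedious part Troy describes:

```python
import numpy as np

# Schematic 0-700 m OHC anomaly from a gridded ocean temperature field.
# Assumes a regular grid for illustration; real CMIP3 files need
# per-model cell weights.
RHO_CP = 4.1e6   # volumetric heat capacity of seawater, J/(m^3 K), approximate

def ohc_700m_anomaly(theta, lat, dz):
    """theta: [time, depth, lat, lon] temperatures (NaN over land);
    lat: latitudes in degrees; dz: thicknesses (m) of the layers above 700 m."""
    w_depth = dz[None, :, None, None]                       # layer thickness weights
    w_area = np.cos(np.radians(lat))[None, None, :, None]   # crude area weights
    heat = RHO_CP * theta * w_depth * w_area                # weighted heat per cell
    total = np.nansum(heat, axis=(1, 2, 3))                 # integrate each time step
    return total - total.mean()                             # anomaly about record mean
```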
Troy_CA:
That may be what you need though, if you want to compare model output to an incomplete geographical coverage. That is, make the geographical coverage of the model over time equal that of the data.
This is not a ridiculously hard problem, and you almost certainly can gain access to other code outputs by writing the groups, but it’s beyond the level of “hobbyist” that I’m willing to do. I suspect once you were done with it, you’d have a peer review quality publication out of it.
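Schematically, that masking step might look like the sketch below, assuming the model field and a monthly observation mask are already on a common grid (which is itself part of the work):

```python
import numpy as np

# Subsample the model with the observing system's time-varying coverage mask
# before averaging, so model and data "see" the same geography each month.
def coverage_matched_mean(field, mask, lat):
    """field: model values on a [time, lat, lon] grid;
    mask: True where observations exist that month, same shape."""
    w = np.cos(np.radians(lat))[None, :, None] * mask   # area weights, zeroed where unsampled
    return np.nansum(field * w, axis=(1, 2)) / w.sum(axis=(1, 2))
```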
Thanks for the info, Troy, that is where I was headed based on what I saw linked at KNMI.
Carrick, I know you are addressing your comment to Troy, but I would probably not even be considered at the hobbyist level in some of the analysis that I attempt to do. I do them first because as a retired guy I have time to do them – or not depending on my mood. And second because it helps me with my understanding of the matter at hand. Finally my mind is challenged by the process of collecting and manipulating the data.
Ken, good luck on your endeavor, and if you want to run ideas past us, hopefully we can help with that.
I need to contact Oldenborgh at KNMI and ask whether they plan on placing OHC data on their site. Troy has indicated that the data handling requires downloading and processing lots of data, and I can understand that: my assumption is that to obtain an accurate picture of OHC I would need to download the temperature data all the way down to 700 meters. It may also be that the climate model projections merely extrapolate in a more or less linear fashion and would thus miss a flattening period. Then we would be talking about the trend projections of the models versus the observed trends, and at what point the trends would be significantly different. I believe Troy has referenced a paper, said to be under review, that attempted to estimate the probability of seeing the observed flattening in model projections, with that probability revised downward in the process. I need to look at the papers that Troy suggested reading. In the end I may have my current questions answered without doing my own analysis.
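The trend-versus-trend comparison Kenneth has in mind can at least be prototyped simply. The sketch below uses fabricated series throughout; the idea is just to fit the same-length trend to each model run and to the observations, then ask where the observed slope falls in the ensemble spread.

    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(2003, 2011, dtype=float)           # the ~8-yr window in question
    nruns = 17
    runs = (0.7 + rng.normal(0, 0.3, (nruns, years.size))).cumsum(axis=1)  # fake warming runs
    obs = rng.normal(0, 0.3, years.size).cumsum()        # fake flat observations

    model_trends = np.array([np.polyfit(years, r, 1)[0] for r in runs])
    obs_trend = np.polyfit(years, obs, 1)[0]
    frac = (model_trends <= obs_trend).mean()
    print(f"obs trend {obs_trend:+.2f}/yr; fraction of runs at or below: {frac:.2f}")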
Also, in light of the discussion of obtaining funding for research (on another thread?), I should note that a retired guy's project funding usually comes from She Who Must Be Obeyed, in the form of time away from other projects.
When I was a graduate student many years ago, federally funded projects were just becoming a major source for covering research costs. My research professor obtained most of his research funding out of his own university funds, and he would complain about a biochemist on the staff who wrote creative proposals for funding cancer research, which was a big push at the time. The biochemist did have a very high success rate, which I attributed as much to his being at the right place with the right specialty as to his writing skills and creativity.
Keep in mind that when it comes time to pay publishing charges, we can probably scrape together enough donations to cover them.
If I wanted to match the model to data, I’d want the full 3-d ocean data, because I’d want to match the shift in geographical distribution over time of the actual measurements.
The step function between 2003-2005 in OHC should be predictable from the models, since it most likely comes from a shift in geographical distribution. The biggest change was the inclusion of the latitudes between 30-60°S.
Once again here’s an animated gif I put together for that. Lunch is over…back to meeting day.
Another quick thought, then I have to run.
It’s possible that part of the flattening was due to the change in geographical sampling, too.
Is the 3-d data needed to compute OHC between 1991 and the present available?
Carrick, Kenneth,
FWIW, much of the rapid rise in OHC post-1992 can be attributed to recovery from El Chichon and Pinatubo cooling; dial out those effects and there seems to be a much smoother rise in both temperature and OHC from the early 1970s to the early 2000s. The other (unpleasant) potential problem is the accuracy/biases/adjustments of the XBT data versus the Argo data. The rapid run-up in ocean heat from 2000 to 2003 corresponds to the transition from XBT data to Argo data; it looks to me like there may still be "issues" with the accuracy of the transition.
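One crude way to "dial out" the volcanic effects SteveF mentions is to regress the series on a volcanic index and subtract the fitted term. The sketch below invents both the index (a simple exponential decay after each eruption year) and the OHC series; a real analysis would use an aerosol optical depth reconstruction instead.

    import numpy as np

    rng = np.random.default_rng(6)
    years = np.arange(1970, 2011, dtype=float)
    volc = np.zeros(years.size)
    for yr in (1982, 1991):                              # El Chichon, Pinatubo eruption years
        dt = years - yr
        volc += np.where(dt >= 0, np.exp(-dt / 3.0), 0.0)  # crude decaying-impact shape

    ohc = 0.4 * (years - years[0]) - 3.0 * volc + rng.normal(0, 0.4, years.size)  # fake series

    X = np.column_stack([np.ones_like(years), years - years[0], volc])
    beta, *_ = np.linalg.lstsq(X, ohc, rcond=None)
    cleaned = ohc - beta[2] * volc                       # series with volcanic term removed
    print(f"fitted volcanic coefficient: {beta[2]:.2f} (true value -3.0)")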
Kenneth:
To clarify a few things: the paper that I mentioned (Katsman and van Oldenborgh 2011) has already been published, but a correction to it appeared a few months later. Basically, looking at the relevant period (1990-2020), KO11 ran a bunch of ECHAM-MPI simulations and estimated what percentage of the runs over this period would be expected to show an 8-year flattening similar to what we've seen. Originally they determined it was 57%, deeming the event "not exceptional". However, they failed to take into account autocorrelation (which is why I originally posted on it, at the link given earlier), and when they fixed this in the published correction the probability came down to 25-30%. I'm not sure if this is the question you wanted answered? You might also ask "why a 30-year period?", or note that the ENSO conditions required to produce such a flattening in the model (an El Nino-dominated decade) did not seem to hold for the current real-world flattening.
Anyhow, it would not surprise me if *none* of the CMIP3 model runs showed such a flattening over the recent decade. The 2 ECHAM-MPI CMIP3 runs in the archive do not show such a flattening (KO11 ran their own 17 simulations), and I believe that's the model with the most unforced variation… the 5 GISS-ER runs certainly show no such flattening (it may be up to GFDL, then).
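The overlapping-windows point Troy raises is easy to demonstrate with a toy Monte Carlo. Everything below is made up (the AR(1) noise level, the warming rate, the window count), but it illustrates the KO11-style calculation: how often does at least one 8-year window in a 31-year run come out flat, given that overlapping windows are far from independent?

    import numpy as np

    rng = np.random.default_rng(5)
    nyears, win, nsims = 31, 8, 2000              # 1990-2020, 8-yr windows
    t = np.arange(win, dtype=float)

    hits = 0
    for _ in range(nsims):
        noise = np.zeros(nyears)
        for i in range(1, nyears):                # AR(1) "weather" noise
            noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.5)
        series = 0.5 * np.arange(nyears) + noise  # steady warming plus noise
        trends = [np.polyfit(t, series[s:s + win], 1)[0]
                  for s in range(nyears - win + 1)]
        hits += any(tr <= 0 for tr in trends)     # any non-positive 8-yr trend?
    print(f"P(at least one flat 8-yr window) ~ {hits / nsims:.2f}")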
Sounds like someone could (should) be opening a can of worms.
I obtained a reply from KNMI, and they do have plans for eventually including some OHC data at their site. They also wondered whether I had read the recent article in Nature Geoscience on OHC trends inferred from satellite radiation observations, as opposed to the upper-ocean OHC that is observed directly with XBTs/Argo.
Geert Jan van Oldenborgh, who manages the KNMI data, has always been good about replying to my emails.
An aside: Phil Jones has also replied in a timely fashion to my emails about the use of GHCN V2 and V3 data. I have been attempting to determine how much value-added product is involved in the CRU temperature data set. I now know that the GHCN data, which make up a significant portion of the CRU data, are used directly by CRU, and that CRU continues to use the V2 data as adjusted by GHCN even though GHCN stopped updating V2 in June of 2011. CRU had not updated beyond those data as of a couple of weeks ago.
I had thought that CRU was mixing V2 and V3 data, but later found through communication with GHCN that I was confused: V2 can carry multiple versions for the same station, while V3 uses only a single version for each station. GHCN sent me a huge file of data I was unaware existed, and I was able to resolve the problem. Actually, I have to credit Nick Stokes with knowing about the multiple station versions.
Jones was not entirely clear about what they do with the data they receive outside of GHCN, but I do not think they adjust that data internally either – though I need to look further into that. The only value-added features I attribute to CRU are some quality-control procedures and the gridding of the station data.
Regarding the Nat Geo paper, I think he was referring to Loeb et al.
Thanks, Troy, for all the background information and links on OHC. It shows I have more reading and analyzing to do to get up to speed on this subject matter – which I do find interesting and challenging.
Interesting that the correction points to a 0 to 5% probability for a 9-year flattening in the 1990-2020 period. Also, failing to realize that using overlapping 8-year periods will produce autocorrelation is a bit disquieting.