For those who like to read the Zero Order Drafts of the AR5, Galloping Camel has posted the WGII ZOD. Links to both the WGI and WGII are in a grey banner under a turquoise banner near the top of his site:
Galloping Camel Links to AR5 WGI and WGII ZODs. You can get individual chapters for AR5 WGII ZOD and AR5 WGI ZOD.
———–
Open thread. If you have questions about seeing a 404 error…. it was a typo. Thanks John and Steve for emailing quickly.
Yes, but what did Mann have for lunch on Dec 16th, 1997?
They are hiding the Pastrami!
Boris–
I think in Chapter 3, you will learn that, violating the rules of low-fat, low-carb and low-calorie diet plans, Mann ate a Croque-monsieur prepared with ham, as traditional, for lunch on Dec. 16, 1997.
He also ate a huge plate of french fries fried in a 30%-30%-30% blend of traditional hydrogenated vegetable oil, beef tallow and lard, slathered them with both ketchup and mayonnaise, and washed the whole meal down with a Diet Cherry Coke. After eating he burped.
Unfortunately, the video documenting was lost when the detective following him around was hit by a bus.
(For the humor-impaired, that was irony… bad irony, but still….)
Returning to the issue of the ZODs: Many of us agree these aren’t smoking guns. But everyone does wonder what’s going into them. So, some are curious enough to read them. I skim them. Lots of people have the day off. They might as well know where to find them!
Somehow that looks more like a “Croak-Monsieur”
See: A blaze of unwelcome light
No mention of Don Easterbrook or Timothy Patterson
They say the modern period is warmer than the Medieval Warm Period.
Compare the NIPCC 2011 interim and 2009 reports at:
http://www.nipccreport.org/reports/reports.html
Readers may be interested in commenting on a topic at a time, comparing the IPCC vs NIPCC reports.
R. Timothy Patterson has developed a very high resolution microtome for sediment analysis that gives annual resolution temperatures:
Macumber, A.L., Patterson, R.T., Neville, L.A., Falck, H., 2011. A sledge microtome for high resolution subsampling of freeze cores. Journal of Paleolimnology, v. 45, p. 307-310. DOI 10.1007/s10933-010-9487-4.
Patterson, R.T., Swindles, G.T., Roe, H.M., Kumar, A., Prokoph, A., 2011. Dinoflagellate cyst-based reconstructions of mid to late Holocene winter sea-surface temperature and productivity from an anoxic fjord in the NE Pacific Ocean. Quaternary International 235, 13-25.
This gives very high resolution temperature fluctuations:
Patterson is predicting decades of cold weather based on the PDO shifting negative plus short, low-intensity solar cycles. Similar to Don Easterbrook.
David: yeah, I notice they don’t even cite Oliver K. Manuel either. 🙂
lucia, slight typo:
You had to know Mann et al. would get a reference:
I’m not sure if anyone will care, but I’m uncertain at how to feel about Figure 5.7 in the WG1 ZOD. I didn’t find any reference for the temperature reconstructions used in (a), but it seems to me they just copied Figure 6.10 from the AR4 WGI report. If that’s the case, it disturbs me. There are eight reconstructions in that figure which go back to before 1400, and one of them is none other than the original hockey stick. That’s right. It seems MBH is once again being used in an IPCC paper. This problem is somewhat mitigated though, as there is a note for the figure saying:
While I’m curious just what this “adjustment” is (I didn’t see any discussion of it), I’m more interested in the idea MBH99 might be dropped, but only because it would be replaced by Mann 2008. It would seem the flaws of both papers may simply be ignored.
By the way, on the subject of temperature reconstructions, it says:
It would seem nothing ever changes.
Still my fave bit: Chapter 10, p18:
“In summary, while the trend in global mean temperature over the past decade is not significantly different from zero..” Followed by a list of dubious excuses why this might be so.
Is there a special IPCC statistical definition of “not significantly different from zero”?
Looks delicious, Lucia. I was more making fun of the galloping camel’s commentary than any discussion of the ZOD, which I think is harmless fun.
“It would seem the flaws of both papers may simply be ignored.”
Mann 98/99 was in the AR4 and the issues were discussed. As for Mann ’08, the IPCC is limited to the peer reviewed literature and won’t bother with stuff on blogs.
Re: Boris (Comment #88123)
Boris, I believe the game is called hide the salami and it has nothing to do with food; nonetheless, someone gets screwed. 😉
Boris:
I am quite aware MBH was included in the AR4. In fact, I said it was in the comment you’re responding to.
More importantly, the AR4 did not discuss the flaws of MBH. All it did was report that criticisms of MBH had been made, then dismiss those criticisms. There was no discussion of them, and indeed, the AR4 didn’t even mention the majority of the issues with MBH. Even worse, the AR4 contains false claims used to dismiss the criticisms of MBH.
I hope you don’t expect anyone to believe this.
The IPCC may not “bother with stuff on blogs,” but it has shown itself to be happy to use non-peer reviewed material of less reliability than what it could find on blogs.
Besides which, it’s not like criticisms of Mann 2008 are absent from peer reviewed literature. Given that, your response to me is wrong, and not just because you’re making things up in it.
Unfortunately journals like Nature have given people like Michael Mann cover to include nice little sentences like this “None of the errors affected our previous reported results.”
Which turns out to be flat wrong—the errors do affect the results… indeed, correcting for the errors eviscerates the results.
This is a sentence that was allowed to be added to the Mann corrigendum after completion of the review process, something that was intellectually dishonest in its own right (in addition to the fact that what it stated was flat wrong).
Oh well, we can blame all this on McIntyre and Watts too, can’t we?
For those who are interested in more than talking points and blog drama, Chapter 5 is actually remarkably informative. There seem to be many more players than the usual suspects in the paleo game.
Carrick, that reminds me, I still have no idea who or what to blame for all this. The strategies used to keep the hockey stick alive are so devious I feel like it’d have to be the work of geniuses, but at the same time, so much of what they do seems so stupid. I don’t understand them, and I really don’t understand how they get away with it.
Toto, I am definitely “interested in more than talking points and blog drama.” Indeed, nothing I brought up was a talking point or blog drama. However, given what I did bring up, I have to ask something. How exactly do you conclude the chapter “is actually remarkably informative”? I certainly understand thinking it seems informative, but if you know the authors are willing to include untruths, doesn’t that make you hesitant to accept what they say about subjects you don’t know well? Are you perhaps really informed about the topics, and thus can tell us with certainty they did a better job elsewhere?
It seems research misconduct issues in climate science always bring out the immediate (pandering) charges of politicization, fossil-fuel funding, mindless talking points, or whatever other brush-offs of facts people are uncomfortable dealing with.
For some reason, the CAGW crowd are unable to admit that research misconduct happens in climate science, just as it happens in other fields, and that it is often people from the group doing the most politically influential and “high impact” research who are in the thick of it.
Watch how the CAGW types brush this off, deliberately misinterpret this, or anything else other than admitting fundamental truths about academic research, and the very human limits of the individuals involved.
At least what happens in other fields gets noticed and acknowledged (even if the party central to the scandal keeps his retirement benefits and a potential for future employment in academia).
At last we have scientists speaking out in public against the IPCC’s crazy idea that CO2 drives Earth’s climate.
Steve Goddard may be right. This year the whole shabby fraud may be buried:
http://www.real-science.com/2012-global-warming-report-card#comments
Including Penn State, the NRC and the NSF, who have all looked for, and failed to find, any such “misconduct”.
I guess that “CAGW crowd” includes a lot of people. 🙂
You know, next time you feel the urge to castigate Mann about “research misconduct”, you might want to try and read the actual corrigendum first.
gallopingcamel (Comment #88152)
January 3rd, 2012 at 12:20 am
“At last we have scientists speaking out in public against the IPCC’s crazy idea that CO2 drives Earth’s climate.
Steve Goddard may be right. This year the whole shabby fraud may be buried”
————————————
Haven’t heard Goddard’s name in some time – was that an appeal to authority?
toto, appeal to authority if it’s what keeps your Faith pure.
There are many types and levels of misconduct. If somebody is claiming that Mann changed values in a data file used as an input to his study, then attempted to block access to the modified files, then didn’t tell people (Mann did in MBH), I think that would qualify.
Where, in any of the whitewashes investigations you mentioned, was this issue raised?
Inserting a sentence that contextually changes the meaning of the article (“None of the errors affected our previous reported results.”) would in most fields be academic misconduct, both on Mann’s part and on the part of the editor from Nature who allowed this change to be inserted without review.
Again, where is it mentioned?
Since you’re completely deaf to the possibility that these guys act in any other capacity than as saints, I’ll skip the list. If you want a longer list, it is easy enough to provide.
All I’m illustrating is the questions the panels ask are very carefully pared to avoid asking awkward answers. It’s a reoccurring pattern in all fields.
I’ve already read the corrigendum, and know the history of how that little absurd claim “None of the errors affected the results” got inserted after referee review in flagrant disregard for journal rules, and about 100x more on the subject in addition to this.
Another common trick of yours toto: accuse the other side of your own sins (not doing your own research).
Re: gallopingcamel (Jan 3 00:20),
I would rank Steve Goddard above Nasif Nahle on the credibility scale, but not by much. Neither has anywhere close to a positive ranking.
meh… make that “to avoid awkward answers.”
Wow.
Bring out the pitchforks!
OK, back to reality. The real “journal rules”, as opposed to the ones that you seem to have made up on the spot, are that corrigenda don’t need to be peer-reviewed at all. Barring disagreement among authors, editors may or may not contact outside reviewers, whose opinions they may or may not take into consideration.
Never mind that if you have actually read the corrigendum, then the sentence is obviously true, and your original statement is utterly bonkers. The only corrections made in the corrigendum are clarifications of what was actually done in the original paper. “Correcting” for these could not possibly affect anything in the results, let alone “eviscerate” the conclusions, since the paper already includes these “corrections”.
My guess is that you are conflating the actual corrigendum with other faults, real or imaginary, that you think should have been included in it.
Consider the gap between the wide range (and stridency) of your accusations, and the evidence that you back them up with. Do you really think that the only persons who could fail to be convinced are the “CAGW crowd” who are “completely deaf to the possibility that these guys act in any other capacity than as saints”?
Boris:
Your faith in the IPCC is touching. However, not everything they rely upon is peer-reviewed. They have famously included papers not yet published (e.g., the “Jesus paper” saga) and press-release-quality items from advocacy groups (e.g., the Pachauri humiliation over the imminent demise of the Himalayan ice cap).
The credibility of the IPCC suffers precisely because its policy is not to seek out peer-reviewed material or its substantive equivalent but instead to filter for insider-club-approved contents to the exclusion of qualified but non-echo-chamber-suitable work. The IPCC is about the preferred narrative first and the science second. Only suckers believe otherwise.
toto:
Why does anybody bother with Mann? From the Climategate emails it is pretty clear there is regret even within the inner circle for supporting the data-release stonewalling and for the circle-the-wagons response to the rather obvious questions as to whether Mann’s paleo work is, shall we say, rather goal-oriented. From the choice to hide-the-decline to letting bristlecone pines or disturbed Finnish lake bottoms dominate the analysis, there is the steady odor of a preselected outcome working its way.
The fact that the errors acknowledged in the corrigendum are said not to affect the paper’s conclusion is not reassuring when one of the M&M accusations against the ’98 hockey stick was that the Mannomatic methodology is such that it almost does not matter what data was tossed in or out — the stick would reappear regardless.
The credibility of climate science as a whole suffered because of a shared petulant defense of Mann. I don’t think he should have been subjected to subpoenas or any inference of unlawful behavior. But the petty, arrogant, thuggish response to legitimate questions about his highly publicized work is the essence of the Climategate problem and Mann should be held responsible in large part for setting that tone.
toto, it was the decision of the journal editors that this corrigendum go through peer review. That is public record.
There is no such eigenstate “peer reviewed until it isn’t.” That category doesn’t exist.
It is certainly not obviously true since you have to run a fairly complex set of algorithms not fully spelled out in the paper on a large set of files in order to verify these corrections don’t affect the results.
That was not done.
If you have a peer reviewed document (and regardless of other journal policies, this one was), it is not ethical to modify the conclusions of your paper after the peer review process has been completed without the reviewers’ consent.
Nothing these guys do is ever wrong. Anybody who points out the slightest taint to what they do is carrying a pitchfork and is being strident. WTFE.
In retrospect, I agree with you that a purely legalistic reading… none of THESE errors… implies that Mann is only speaking to the data errors that he mentioned in his corrigendum. Interestingly in other contexts, you and Boris both seem to argue that we include both the corrigendum and the supplementary information. In this part of the corrigendum, the issue of short-centered PCs is brought up.
In particular see this comment contained in the supplementary note:
An admission of short-centering, but not a direct admission of error. It’s a bit like saying “on line 10 we divided the result by zero.”
So if you want to be completely legalistic in your approach (and I’m sure you do), the statement is in the narrowest sense technically true. In a more general scientific framework for interpretation, however, the statement is misleading, and ultimately the errors in the paper admitted to by Mann (including the important ones he didn’t admit were true errors) do change the results, and rather dramatically.
MBH’s results were just plain wrong.
My reading of it, and that of many others, includes the errors reported by others, such as the use of strip bark bristlecones and the short-centered PCs. In fact, when you see the statement used, it is typically used as cover for the point that short-centering + incorrect proxies = wrong answer.
Given that there have been multipage corrigendums published in Nature previously, and given the high importance placed on this work by the IPCC, the climate community in general and the public, it is surprising that the editors decided “space constraints” prevented publication of these substantive issues in the main body of the corrigendum.
First there isn’t that big of a ‘gap’ between what I say and the publicly available evidence. Secondly, there isn’t any level of evidence that you aren’t going to twist with words like “wow”, “talking points”, “pitchforks” and “stridency” so it’s completely obvious I’m dealing with somebody who knows the Truth so simple facts won’t change their mind. Thirdly this is a blog, not a research paper or a book.
If you have any specific questions about the basis for any comments I’ve made, I can point you to them… many of them are based on reading the tons of work that McIntyre has done on this and on reading other available sources (including the emails). I can probably dig up the specific webpage that discusses it (at least the figures).
In the meantime, your reactions remind me a lot of the types of brush-offs I was seeing here:
In other fields, mistakes are admitted to, analyzed and prodded, and their relevance noted and accepted. In climate science, it’s all about “circle the wagons” and it has been since about 1990… well before McIntyre or the “mad” Watts showed up on the scene.
toto: Just curious, what is the proper way to correct a published paper when it is discovered that you used proxies with larger uncertainties and documented biases and the removal of those proxies from the reconstruction changes the conclusions of your paper?
CAGW ANS: Hide it in the SI to a corrigendum, admit you did it (so CYA) but don’t ever admit it’s wrong.
You guys need to see Salzer’s paper on the Bristlecone pine issue. Maybe you’ll find scientific misconduct there, too! In fact, I’m sure you will find it just about everywhere you look. Oh, except Wegman’s plagiarized mess.
Boris, how did the plagiarism by Wegman’s coauthor affect the conclusions of the paper, especially the sections that weren’t involved?
Let’s do it this way, you chastise Mann for his sloppy academic standards, I’ll do the same for Wegman. You go first.
I don’t think there’s much left to be said about strip bark bristlecones and their growth patterns, but point me to the article. I’ll read it
Just in case there are those of you reading along who don’t know how the game is played, here’s a step by step process:
1. Allege misconduct.
2. Offer nonspecific examples of “misconduct.” (Don’t forget to broaden the term misconduct to include doing things differently than you would do and maintain a tone of disgust for maximum effect.)
3. Make broad statements about an entire field based on your claims of “misconduct.”
4. Demand investigations.
5. Dismiss the results of those investigations, with reference to highly publicized sexual abuse cases, if at all possible.
6. Register your disgust once more.
7. Never actually look at the literature on a subject such as divergence or Bristlecone Pines.
8. If you do ever look at such research, go to 1.
http://www.pnas.org/content/106/48/20348.long
Thanks I found/remembered it.
Thanks for the piffle too:
What I talked about was very specific and directed.
You are basically lying when you say otherwise.
I never asked for an investigation, but I noted that the investigations that were conducted failed to address any of the issues that had been originally raised. That’s why I called them “white washes”.
This is easy enough to verify: Look at the specific questions that were posed then try and find them in the investigative reports.
Or we can just have a big cry about how mean everybody is to climate scientists, instead of directly acknowledging and addressing problems as they crop up and trying to improve the process.
This thesis has one of the more complete treatments of the problem. Oddly, the data used by Salzer et al (Malcolm Hughes, Linah Ababneh’s advisor, is a coauthor) is an obsolete version of the data included in Ababneh’s thesis.
Interestingly, and in complete harmony with how climate scientists practice their trade and earn distrust from other researchers, the data from that study has to date not been publicly released.
There is, by the way, unpublished data on strip bark growth patterns more recent than Salzer et al. that’s been discussed at the AGU. The main upshot of it is that the strip bark process is associated with plant stress. Technically it’s a “die back” process that results in the loss of tree bark… as a result you end up with asymmetric growth patterns. Along one axis (transverse to the direction of the stripping) you see a very non-hockeystick-like pattern. Along the direction of the stripping, an accelerated growth pattern is observed (a “hockey stick”, just like the one you get if you plot number of firetrucks in San Francisco versus year… “right” shape but not an AGW proxy).
But it almost certainly has nothing to do with temperature! It is just the response of tree growth post-damage.
But hey, Boris. You know everything right? We’re just all stupid here.
Various hockey sticks
http://www.skepticalscience.com/broken-hockey-stick.htm
“Interestingly, and in complete harmony with how climate scientists practice their trade and earn distrust from other researchers”
We need a ‘rolls eyes’ smiley.
bugs – yes! Such a great reference! I always liked these comments from Ch 9:
“Some of these criticisms are more relevant than others, but taken together, they are an important aspect of a more general finding of this committee, which is that uncertainties of the published reconstructions have been underestimated. Methods for evaluation of uncertainties are discussed in Chapter 9.”
“Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods, and because not all of the available proxies record temperature information on such short timescales. However, the methods in use are evolving and are expected to improve.”
Looking forward to those improved methods hitting the headlines and reassuring everyone the spirit of science is alive and well. Such a shame they dropped the Tiljander book on the way to the bench.
(apols to hostess – snip as req.)
Bugs #88198 —
The Skeptical Science post on the Hockey Stick that you link includes a reproduction of a Mann et al (2008) figure as its own Figure 6.
That paper (Mann08) has so many deficiencies that it seems very unwise to rely on it as a defense of any thesis.
For instance, one of Mann08’s central, high-profile conclusions was that paleo recons built from other-than-treering proxies confirmed the general hockey-stick shape that Mann and others had constructed from treering proxies.
This conclusion is why these results were featured in a super-high-impact general-science publication — PNAS — rather than in a specialty journal.
This finding is entirely dependent on the indefensible use of the Tiljander proxies. Without Tiljander, this conclusion collapses. Link.
Note that this statement is based on the work of Prof. Mann and co-authors in Mann09. Their stance is exactly as described by Carrick, upthread. In other words, it’s a non-concession concession. The commentaries of Drs. Mann, Schmidt, et al are strange stews of non-standard grammar, where nothing can be taken to be stated clearly or consistently.
Also as Carrick wrote, this sort of messiness and fallibility isn’t at all unusual in the scientific enterprise. The closed ranks of the mainstream climate-science community and its supporters: that’s worthy of note.
Bugs, Boris: You guys complain about handwaving and vagueness by critics of the mainstream. That’s one reason I focus on specifics, boring though they can be. By the way, it’s kind of meta, when you make vague complaints about the vagueness of specific critiques. The offer of a guest post at my blog is still open, if you want broader dissemination of your ideas.
“4. Demand investigations.
5. Dismiss the results of those investigations, with reference to highly publicized sexual abuse cases, if at all possible.”
I believe the investigations were not in response to any skeptics’ demands but rather were initiated by organizations wanting to stop the bleeding by “investigation”. Looking at the results from any side of this issue, one must think that these organizations would have done better not even to have made the attempt – although they do provide cover in the MSM, where writers can say that the scientists were completely exonerated without noting the quality of the investigations.
Skeptical Science as a source, bugs? Get serious. When have they ever played anything down the middle?
If you want to know (and I’m sure you do >.<) what I think the “best” reconstruction looks like, it would be something like this:
ensemble proxy reconstruction
I used Moberg 2005, Loehle 2008, Mann 2008 EIV (which Mann himself admits is the better of the reconstructions from that paper) and Ljungqvist 2010 for the ensemble, and included MBH1998 for comparison. The 95% CLs were obtained by looking at the variation between ensemble elements; this does not fold in the individual reconstruction uncertainties (which I don’t have handy in all cases), so the “true” 95% CL is probably somewhat larger than shown in this figure.
Because of loss of variance, I “recalibrated” the other series in the ensemble to be consistent with Ljungqvist, and kept a constant baseline over the period 1000-1900. MBH98 was not rescaled or shifted. The post-1900 period lines up with the others, so there was no need for a baseline shift, and because it has zero correlation with the other series pre-1900, obtaining a rescaling factor was impossible.
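Concretely, the combination step looks roughly like the following minimal Python sketch. The placeholder series, the regression-based rescaling and the 1.96-sigma spread band are illustrative assumptions, not the actual script used for the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1000, 1981)

# Placeholder red-noise series standing in for the real reconstructions
# (the actual data are not reproduced here).
def fake_recon():
    x = np.cumsum(rng.normal(0.0, 0.02, years.size))
    return x - x.mean()

ensemble = {"Moberg2005": fake_recon(), "Loehle2008": fake_recon(),
            "Mann2008_EIV": fake_recon(), "Ljungqvist2010": fake_recon()}

ref = ensemble["Ljungqvist2010"]              # reference series for recalibration
base = (years >= 1000) & (years <= 1900)      # common baseline period

rescaled = {}
for name, series in ensemble.items():
    # Least-squares scale factor against the reference over the baseline
    # (one way to undo variance loss), then remove the 1000-1900 mean.
    scale = np.polyfit(series[base], ref[base], 1)[0]
    s = scale * series
    rescaled[name] = s - s[base].mean()

stack = np.vstack(list(rescaled.values()))
mean = stack.mean(axis=0)
# Band from the spread between ensemble members only; it ignores each
# reconstruction's own published uncertainty, so the real band is wider.
spread = 1.96 * stack.std(axis=0, ddof=1)
lower, upper = mean - spread, mean + spread
```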
Note that these reconstructions suggest that it is currently warmer than at any time in the last 2000 years. I consider this the most probable outcome based on current data… but it’s not a hockey stick either.
I don’t have any problem saying that MBH is contradicted by the newer reconstructions, and I believe the reasons relate to the interplay between the short-centered PCs used in that paper and the inclusion of erroneous series (GIGO); it was this interaction between these two errors that led to a “flat” hockey stick handle. (Manual editing of one of the non-proxy data files to extend the series back to 1400 contributed to this loss of any MWP too.)
Regarding this comment:
Everything I’ve said is readily falsifiable. If you can falsify it do so, don’t spam random articles unless you can show how that addresses any of the issues related to responsible conduct of research that I’ve been raising.
The issue for me here isn’t about whether the MWP is warmer or cooler than the current climate. I think there is no doubt it was globally warmer during the MWP than during the following LIA, and the data seem to show this reliably.
My problem is with people who blindly support anything climate scientists do without questioning it, and with the misbehavior on the part of the scientists themselves, because when they get caught, as they invariably do, it damages science as a whole. It affects me personally, so naturally I take it personally.
AMac:
Actually it’s plain ironic. 😉
I think they don’t like that you and I give detailed and nuanced enough criticisms, since none of them have done their homework or are really familiar enough (beyond reading true believer blog hagiographic posts on the subject) to be able to respond intelligently to anything that’s been said.
By the way, Mann 08 EIV probably should have been left out of my ensemble; I left it in just to avoid arguments over that.
(In comparison, Mann 08 CPS blows chunks btw.)
Without Mann 08 EIV, the statement “it is warmer now than any time in the last 2000 years” is no longer true at the 95% CL. So including that one reconstruction does have big repercussions.
Here’s the reconstruction without Mann 2008 EIV.
Carrick,
You already know both things I’m gonna say 🙂
1 – Cool reconstructions. With/without Mann08 EIV, doesn’t really matter. What’s significant is that you mush together a bunch of different recons, none with any great claim to “validity,” by a method that lacks (as best I can tell) a prior commitment to a particular shape. Hockey Stick!! Not-Hockey-Stick!! Superwarm MWP!! FrostyFreezy LIA!!
2 – I suspect that the 95% confidence levels are way too narrow, due to…
— any guesses? —
Yep, the pitfalls of Post Hoc Analysis. These proxies are for the most part artisanal: their methods of selection or construction bias them — intentionally or not — to deliver a Warm Present, Cooler Past sort of shape.
How should the confidence intervals reflect this bias? As the Wikipedia entry points out, in some cases it’s nearly impossible to apply an algorithm like the Bonferroni Correction to arrive at a defensible and quantitative answer. “Hard to quantify” isn’t at all the same as “not worth discussing.”
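(For context, the Bonferroni correction mentioned here is the simple rule that if you are effectively making $latex m$ selections or tests and want an overall false-positive rate of $latex \alpha$, each individual test has to pass at $latex \alpha/m$. The difficulty with post hoc proxy selection is that $latex m$, the number of candidates effectively screened, is unknown, which is why a defensible quantitative adjustment is hard to produce.)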
@Carrick
“Note that these reconstructions suggest that it is currently warmer than at any time in the last 2000 years. I consider this the most probable outcome based on current data… but it’s not a hockey stick either.
I don’t have any problem saying that MBH is contradicted by the newer reconstructions, and I believe the reasons relate to the interplay between the short-centered PCs used in that paper and the inclusion of erroneous series (GIGO); it was this interaction between these two errors that led to a “flat” hockey stick handle. (Manual editing of one of the non-proxy data files to extend the series back to 1400 contributed to this loss of any MWP too.)”
The claim never was that there was a ‘flat’ hockey stick handle. The error bars from the start indicated to me that they weren’t making any serious claims to flatness. It never surprised me that MBH was going to be but the first in a long series of developments in paleoclimatology reconstructions. I asked a scientist about the issue, and he pointed out that:
1) The case for AGW is not built on the ‘hockey stick’, it is just one part of several lines of independent research that support AGW, and is about 10% of the case for AGW.
2) It’s the ‘blade’ that is important. A relatively steady climate is now shooting up at a rapid rate, as your graph shows. That is what is important.
Scientists taking personal attacks personally. Amazing. Who would have guessed they were human. Reminds me of the usual tactic of bullies. Provoke a response, then take that as an excuse to beat up the victim.
Re: bugs (Jan 4 14:47),
Which graph are you referring to? The one linked in Carrick’s comment #88207, or in his #88209?
If we’re discussing the graph that leaves out Mann08 EIV (#88209), I’m not so sure about the “relatively stable climate.” And my estimate of the date at which temperature starts “shooting up at a rapid rate” is ~ 1600 AD.
What’s your estimate?
Re: Carrick (Comment #88209)
Wow!
bugs:
Actually, it was a big deal at the time, and I’ll remind you that Soon and Baliunas got eviscerated for suggesting that there was a real MWP. Loehle also got dinged for daring to show a MWP. This point has been hotly debated since MBH appeared and showed an absolutely flat “handle” to the hockey stick.
To be truthful, I think the real problem with MBH is that it isn’t really a reconstruction: the reconstructed data prior to 1900 are basically just red noise and contain no temperature signal at all. This belief is supported by the complete inconsistency of MBH with respect to “modern” reconstructions.
Demonstrating that cries of victimization and being bullied are what you guys do best, anytime criticism is leveled at your saint/martyrs. WTFEx2.
Regardless, the mistakes and misconduct Mann made largely were done in the absence of, and generally preceded, any supposed personal attacks, so this excuse making is just all wet.
We agree on this (I don’t even think you need the instrumental temperature record to make a case for AGW, but that’s another story) .
Unfortunately the IPCC, Al Gore and endless other sources do not. See David Hagen’s count of references above: Mann is by far the most-referenced author in that section of the IPCC. Somebody there obviously thinks his work is important.
(I think it is important too, even if the importance is somewhat inflated).
If you can trust the tree ring proxies as temperature, that is indeed what it shows (regardless of Mann 08 EIV, which grafts on the instrumental temperature data to the end of their series).
If you notice, however, I labeled the ordinate axis “pseudo temperature”, or if you want, “proxy temperature”; that is because the scale of the proxy temperature isn’t 1 to 1.
To illustrate the issues, the relative calibration for Mann 08 EIV is about 0.5, Moberg is 1.07 and Loehle is about 1.2, relative to Ljungqvist. So there is quite a bit of variability between reconstructions in the scale factor. (Loehle IMO is least likely to experience variance deflation and so is probably closer to real temperature… except he doesn’t correct for the latitudinal amplification effect in his reconstruction, so even his is probably low.)
The other point is we don’t know the relationship is linear. It may saturate at the high temperatures associated with the AGW period and the MWP, as is one possible interpretation of the divergence problem for tree rings.
Of course that doesn’t explain Loehle’s reconstruction, which doesn’t use tree rings, and stops in 1935—before the AGW period. If you graft the instrumental record onto his measurements, you get the statement “it is warmer now than any period in the last 2000+ years.”
Oddly, it is the consistency of Loehle’s measurements with the other series that gives me more confidence than I might otherwise have in the tree ring based reconstructions (especially Ljungqvist’s and Moberg’s).
I don’t think the proxies contain enough high-frequency information for us to definitively say “it has been warming more rapidly now than at any time in the last 2000 years.” If I am wrong on that, this is another important point, and one that does not depend on the flat hockey stick seen in MBH1998.
Here is the Pearson correlation coefficient between the different series and Ljungqvist, done for 500-year sliding windows.
Note that the correlation for MBH98 approaches zero previous to the calibration period. Mann 08 CPS is essentially junk, while Mann 08 EIV generally agrees with the other series, including Loehle’s.
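The sliding-window correlation itself is simple to reproduce. A minimal Python sketch follows; the window length, step and synthetic stand-in series are placeholders rather than the actual data:

```python
import numpy as np

def sliding_correlation(x, y, years, window=500, step=10):
    """Pearson correlation between two annual series over sliding windows.

    Returns the window-center years and the correlation in each window.
    x, y: equal-length arrays on the annual grid given by `years`.
    """
    centers, r = [], []
    for start in range(0, len(years) - window + 1, step):
        sl = slice(start, start + window)
        r.append(np.corrcoef(x[sl], y[sl])[0, 1])
        centers.append(years[start] + window // 2)
    return np.array(centers), np.array(r)

# Example with synthetic stand-ins for two reconstructions:
rng = np.random.default_rng(1)
years = np.arange(500, 2001)
a = np.cumsum(rng.normal(0, 0.02, years.size))
b = 0.7 * a + rng.normal(0, 0.1, years.size)   # partially correlated series
centers, r = sliding_correlation(a, b, years)
```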
next time, bugs, look at the graphics…the pictures they show would make Mann’s beard fall out if they ever got close to the IPCC
The ZODs are of course already out of date. The WG1 First Order Drafts are now out for review, and you can sign up to be a reviewer here.
The WG2 FOD is currently being prepared and will be out for review in June. There will be a similar open invitation to sign up to review it.
(Please note that to be effective, review comments should be specific and evidence-based, ie: don’t just say things like “insert ‘not’ here”, say “evidence supporting a different conclusion is provided by XXXX(date)” )
Physicist David Ritson in Climategate 2 email 1667 (Feb. 2005):
Jonathan Jones, in a recent comment at Judith Curry’s blog (January 4, 2012 at 5:40 am):
The more I reflect on the assumptions and difficulties associated with correlating tree ring characteristics with the temperature aspect of paleoclimate, the more I tend to agree with Jones’ radical stance.
A history of honest and frank critiques from within the field would have offered some assurance that science’s self-correcting mechanisms were operational. But that didn’t happen, and doesn’t seem to be happening now. Maybe at some future point there will be grounds for renewed optimism in the value of multiproxy-based paleoclimate reconstructions.
AMac:
I am pretty sure Craig Loehle’s reconstruction does not suffer from post hoc analysis by the way.
The variability between series is an honest way of testing the uncertainty of the series, assuming they didn’t do further post hoc analysis, to try and get their series to match with the previous ones (there doesn’t seem to be a lot of evidence for that).
The difference between non post hoc analyses (unfortunately there is just one) and the others that cull from a larger set of proxies is a measure of the uncertainty introduced by the culling process. That is something I know how to rigorously calculate the CLs for—whether I do or not, depends on time and interest.
I suspect you’ll find the error bounds are tighter than you had expected. (The lack of promise of a novel result is a disincentive for me here, I’ll admit.)
From the start of Craig’s method section:
Using just non-tree-ring series that have already been converted into temperature seems to be a legitimate thing to do. It might be worth revisiting his data set and seeing if I can come up with a somewhat more accurate series that properly incorporates arctic amplification. His combined series is just an average over these 20. It also doesn’t include Vostok, which would be an interesting series to add.
Carrick #88226,
Thanks for the remarks. I’ll have to defer to you, as in this sort of discussion calculation trumps word-based handwaving.
As far as concordance among recons: it does strike me that Loehle2008 seemed to avoid post hoc problems, but I haven’t re-read the paper with that in mind.
Some of the other recons (MBH1998, Moberg 2005, Mann 2008 EIV) may share many of the same proxy series, I believe (but am not sure). Ljungqvist 2010 is built on 30 proxies, where few seem to be common to the earlier recons (PDF link).
AMac:
I get tired of using MBH as a punching bag, but this is another misconduct issue. Mann claims that R2 is not a valid statistic (that it would be “silly” to use it). There is zero support for this claim.
Moreover, we now know that he computed R2 for his reconstruction, that he obtained nearly zero values when he did so, and that he refused to report these values of R2 (McIntyre later had the pleasure of doing so in one of many “personal attacks” [*]), arguing that the RE statistic that he calculated was more reliable. (But his computation of the value needed for RE verification was also wonky; that’s another story that involves pure incompetency and not misbehavior.) OTOH, refusing to report or release adverse results is always an ethics breach in science.
[*] personal attack (n) — pointing out an unpublished adverse result from a published climate science paper. [climate science lingo.]
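For readers who haven’t followed the R2-versus-RE argument, here is a minimal sketch of the two verification statistics as they are conventionally defined. The function and array names are illustrative; whether MBH computed them exactly this way is a separate question:

```python
import numpy as np

def verification_stats(obs_ver, rec_ver, obs_cal):
    """Verification r^2 and RE for a reconstruction.

    obs_ver: observed (instrumental) values in the verification period
    rec_ver: reconstructed values in the verification period
    obs_cal: observed values in the calibration period (its mean is the
             'no-skill' benchmark used by RE)
    """
    r = np.corrcoef(obs_ver, rec_ver)[0, 1]
    r2 = r ** 2
    sse = np.sum((obs_ver - rec_ver) ** 2)
    sse_clim = np.sum((obs_ver - obs_cal.mean()) ** 2)
    re = 1.0 - sse / sse_clim          # RE > 0 is usually read as some skill
    return r2, re

# A reconstruction can track the mean level well enough to get RE > 0 while
# its year-to-year correlation (hence r^2) in verification is near zero,
# which is why reporting only one of the two statistics can mislead.
```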
AMac:
Well if it will push the debate forward, I’d be willing to sit down and grind this out.
Regarding Ljungqvist 2010, it may also qualify as a no-post-hoc-culling study too. I remember really liking the study at the time, in part because of the relatively large and uniform coverage, and not liking so much the method of combining points (simple averaging).
Unfortunately (again the norm) the data he used are not publicly available.
I found this comment interesting:
How many researchers need to agree that you shouldn’t use strip barked bristlecones, before we’ll all agree here that Mann made an error by including them?
Ignorance can’t be pleaded here, since Mann links to Graybill’s own work on bristlecones and of course is using the strip bark bristlecone tree ring data collected by Graybill, and certainly would have been aware of this paper (which he did not cite in his 1998 paper):
This is shades of Tiljander: Take a data set that the original authors say are not temperature proxies and use them as temperature proxies anyway.
Carrick & Amac. Great conversation guys. More please 🙂
DeWitt Payne,
Sorry to hear you don’t like Steve Goddard.
You are one of the people I particularly respect, so I was hoping you would volunteer to participate in a review of the leaked IPCC documents.
The IPCC secretariat has (politely) asked me to remove the files from my web site. What would you advise me to do?
“…participate in a review of the leaked IPCC documents.
The IPCC secretariat has (politely) asked me to remove the files from my web site. What would you advise me to do?”
##############
Ask him politely to resign.
seriously. Post his email.
Explain that you believe the IAC was correct, that the process needs to be open. And send a letter to andrew revkin.
@Carrick (Comment #88221)
January 4th, 2012 at 4:43 pm
bugs:
S&B got eviscerated for writing garbage. Loehle was criticised for making his own mistakes. MBH actually claims
“suggest that each of these factors has contributed to the climate variability of the past 400 years, with greenhouse gases emerging as the dominant forcing during the twentieth century. Northern Hemisphere mean annual temperatures for three of the past eight years are warmer than any other year since (at least) AD 1400.” Nothing wrong with that, and it has been borne out in your own officially approved reconstruction.
@Carrick
“How many researchers need to agree that you shouldn’t use strip barked bristlecones, before we’ll all agree here that Mann made an error by including them?”
I have no doubt that errors have been made in climate research, I have no doubt more will be made. MBH has been shown to be more correct than S&B “It’s the sun”.
Gallopingcamel @88232 – Good to know the request was a polite one. Did they explain their reasoning? I think you should resist the request and reply in polite terms explaining the value and benefit of a more open and transparent process. If you need a legal view before responding it might be worth dropping a comment at Tallbloke’s asking for Stephen Wilde’s contact details. Best wishes
Bugs
The ZOD quotes papers using the same proxies as S&B making the same points that S&B made
dont you find that odd
carrick you should read the proposal briffa wrote to study the divergence problem..
why, according to his proposal, divergence could mean that all dendrochronology is crap.
funny, he didn’t write much about that in AR4
“why, according to his proposal, divergence could mean that all dendrochronology is crap.”
I’ll eat me a strip bark sandwich if that’s the conclusion he reaches…
@Mosher
“Bugs
The ZOD quotes papers using the same proxies as S&B making the same points that S&B made
dont you find that odd”
S&B decide it’s all because of the sun. Which is odd.
#88243
“funny, he didnt write much about that in Ar4”
Well, someone did. Sec 6.6.1.1:
Of course, the original work was done last century, and covered in the AR3:
Sec 2.3.2.1
Nick Stokes,
Given the context of the discussion upthread, I am not sure that you carefully read the AR4 and AR3 material you quoted in Comment #88247. (If you see your job as Counsel for the Defense as obtaining a Not-Proven verdict, you may not find that to be necessary.)
Mann08’s reconstructions serve as poster children for the flaws described in your excerpts, as well as others. Yet this paper’s findings are used as building blocks in the zero-order draft of Section I of the 5th report. This is more evidence that these authors don’t take their own warnings seriously. Mention, then disregard: business as usual.
Re: gallopingcamel (Jan 4 23:01),
Unfortunately, I don’t have the time right now. I’m working on experiments to show that Wood was wrong and that the LW IR transmission characteristics of the glazing do change the internal temperature in the expected way, i.e. an IR-transparent windowed box has a lower temperature than a box with a less IR-transparent window like glass.
Does this all mean that trees make lousy thermometers??? I had no idea;)
“Mann08′s reconstructions serve as poster children for the flaws described in your excerpts, as well as others. Yet this paper’s findings are used as building blocks in the zero-order draft of Section I of the 5th report. This is more evidence that these authors don’t take their own warnings seriously. Mention, then disregard: business as usual.”
Excellent point here, AMac, because that is a major problem that I see in some climate scientists’ writings and in their more tenacious defenders. Most papers allude to their own possible weaknesses in their findings and methods, sometimes rather vaguely and sometimes in the SI rather than the paper proper, and these admissions can be played either way: ignored when attempting to make an IPCC-type point, or pointed to in answer to criticisms of ignoring the weaknesses, since they are there for that purpose. Please note that Mann (08) clearly showed the divergence problem in not only the dendro proxies but also the non-dendro ones. Have you seen any serious conversations in the climate science community about what the divergence in non-dendro proxies implies for arguments and conjectures about the cause(s) of divergence? In the future, expect that further developments in proxy divergence, and particularly non-dendro ones, will be able to point to Mann (08). In the meantime expect those weaknesses to be pretty much ignored or minimized.
I think you can judge just how serious these statements of weaknesses are taken by looking at the evidence for the scientists’ attempts to answer or at least study them further. The divergence problem appears to receive attention only by way of studies to show that the problem is one of methodology and/or arises from anthropogenic causes – and thus saving the work for proxies as thermometers measuring temperatures in the past.
Experimentally and statistically, a good approach to divergence would be to bring the proxies used in past reconstructions up to date, and here I am talking about dendro and non-dendro. Another approach would be to use an a priori selection process for proxies that is founded in reasonable science and then use that process to select proxies for a reconstruction and let the chips fall where they may. In fact, one might never get beyond the calibration stage. Now if one did this, say, 20 or more times and then used the best performers as evidence for proxies as thermometers, we would have a statistical problem. A situation like this one shows that the experiment would be required to be run under very controlled and certifiable conditions.
bugs:
Many of the criticisms of S&B were errant, and their basic conclusions about the climatology were right. One of the chief complaints about S&B was the use of precipitation proxies. First, in S&B that wasn’t an error, since they were using the proxies as a “climatological index”, not as a temperature proxy; and secondly, Mann himself uses precipitation proxies with the assumption that there could be a regional-scale correlation between precipitation and temperature.
The only legitimate complaint about Loehle’s reconstruction was that he didn’t do an uncertainty analysis originally; that was admitted to and corrected. (Already we have a difference in pattern from MBH.) In spite of the bloviating from your side, it does appear that his method of reconstruction has been vindicated. [It’s really ironic seeing the cries of anguish over that error, given how often the CAGW types don’t publish CIs themselves.]
I’m willing to stipulate at this point that none of you guys will ever admit that the MBH reconstruction is meaningless. New reconstructions aren’t an improvement on it, other than in the sense of correcting a wrong answer. The answer is in the correlation: MBH has essentially zero correlation pre-1900 and so does Mann 08 CPS.
I’m also willing to stipulate that none of you are capable of eating crow and admitting that Loehle’s reconstruction has been vindicated by newer reconstructions.
And wrong. But it doesn’t affect their results sections.
Conclusions sections are often FOS.
re: Nick Stokes (Comment #88247)
Nick, before you go too far down this road, have a read of the project description here. I think that Mosher’s point had to do with the following quote:
I don’t believe you will find any discussion of methodological bias potentially accounting for the DP in TAR or AR4.
The big mistake that S&B made was a PR blitz after its publication, including making statements that weren’t supported by their work.
While I agree that this should not have happened, I detect a cloying smell of hypocrisy here from those who became outraged by it, because many of them follow a similar pattern themselves. (Witness BEST’s news blitzkrieg before their papers had even been provisionally accepted.)
hehe.
you know the hilarious thing
When I wrote the book on this I actually counted the words briffa wrote on this in AR4.
I counted the words because I wanted to say the picture of a divergence is worth a thousand words and briffa only gave us 264 words..
So, Nick, yes I know Briffa wrote that in fact I had it committed to memory.. as in word for word recall. see around page 159 of our book. I believe I quote the whole paragraph.
But as I said… In his proposal to get money……he writes
like all divergence could mean that all dendro is crap.
Oh my god! All dendro might be crap, send me money
On the other hand, when you have some data to show that might cast doubt on dendro.. well, it would be “inappropriate” to show that graphic.
Here is briffa writing to get funded:
“The existence of divergence casts doubt on the uniformitarian assumption that underpins a number of important tree-ring based (dendroclimatic) reconstructions. It suggests that the degree of warmth in certain periods in the past, particularly in medieval times, may be underestimated or at least subject to greater uncertainty than is currently accepted. ”
AND does he explain to the funders
“and the possibility of investigating them further is restricted by the lack of recent tree ring data at most of the sites from which tree ring data discussed in this chapter were acquired. “
Nope!
And lurker above points out the last bit.
By the way, bugs, why not provide a list of errors that “eviscerate” S&B (especially given that their basic conclusions, mainly that the MWP was global, are many-times-over confirmed now).
Feel free to point out problems with their discussion and conclusion sections too, but be as specific as possible.
It’ll be an interesting psychological experiment: What you guys consider reasonable criticism, and whether you are capable of listing specific errors or just resort to the usual ad hominems, and how people here respond to legitimate criticism. (I think the only unfair part of the comparison here is nobody is particularly wed to Soon on this blog the way you guys are to Mann.)
I read the linked white paper by Briffa and Cook some time ago and thought it could have been just as easily written by a skeptic – except I do not think such an animal exists in the dendro world.
http://www.ncdc.noaa.gov/paleo/reports/trieste2008/tree-rings.pdf
What are the Sources of Uncertainty in the Tree-Ring Data: How can They be Quantified and Represented?
White paper on tree rings submitted by Keith Briffa and Ed Cook
Thanks for the link Kenneth. Why was the white paper submitted? Did they say?
Re: Carrick #88264
I don’t hold out high hopes for this experiment. Recall bugs’ comment on your paleotemperature recon, upthread at #88216,
I asked bugs (Comment #88218),
No response yet.
I think that means that if you contribute an image with a consensus-pleasing shape like the one in #88207, then you’re doing Important Science. But if you come up with an ugly shape like the one in #88209, that’s a trivial finding.
Carrick = important and Carrick = trivial. Hmmm, confusing. So let’s drop this topic and move along to another subject.
One other comment bugs, then I’ll allow you to vindicate yourself after making such a series of brash statements or slink away perhaps leaving a slime trail, whichever you prefer.
S&B say in their abstract:
This is largely in agreement with proxy-only reconstructions.
I think much of the criticism of this basic conclusion is all wet.
Also, the discussion of the relationship of the sun to climate is not found in S&B (which appeared in Journal of Climate), it is in a followup paper in the ever enigmatic E&E, that included three additional authors.
I’ve gone back through some of the criticisms, and most of them are frankly just wave-offs, indicating to me that these critics hadn’t even read the paper. It is also interesting that much of the controversy and animus surrounds the political impact of the paper.
The paper’s fundamental “flaw” from their view is apparently that it “doesn’t help the cause,” not that anything substantive was in error in it.
Amac, I have no hope that bugs will be able to actually generate a list of the errors that “eviscerate” S&B, or even a list of potential errors.
The only substantive criticism I’ve ever seen (that didn’t relate to how this paper impacted the hockey stick meme and its utility for political persuasion) was itself flawed, namely the use of precipitation proxies.
I actually like the notion of using tree rings to generate a “climatological index”, sort of like MEI for ENSO. Basically it would just measure rate of tree growth during different periods of climate, and would indicate which climate was more favorable to tree growth, rather than trying to produce a calibrated proxy temperature.
I do know that if you look at the “raw proxy” data, there generally isn’t a divergence problem.
See this figure.
(Note the predominant use of precipitation proxies in this study to try and measure temperature!)
It’s only after you run it through a series of algorithms that try to correlate tree-ring growth with temperature that the divergence problem flares up.
I thought the histogram of trends useful too.
[Note this study was promoted by a constructive comment made by Boris. He and I get along like fire and water, but this does illustrate that people who are critical of each other’s views can lead to progress, and not just virtual food fights.]
By the way, for people who are interested in more technical topics, I don’t use a simple non-overlapped bin counting algorithm for generating my histograms.
What I do is divide the range $latex x_{min}$ to $latex x_{max}$ into bins of $latex \Delta x$. In a “classic” histogram, you would end up with $latex N_{bins} = (x_{max}-x_{min})/\Delta x$, with each bin centered at $latex x_n = x_{min} + (n+1/2) \Delta x$, $latex n = 0, 1, \ldots, N_{bins}-1$.
[Usually $latex \Delta x$ is chosen to be exactly divisible in the range $latex x_{max}-x_{min}$. ]
What I do instead is generate $latex N_{bins} = (x_{max}-x_{min})/\Delta x \times n_{over}$, where $latex n_{over}$ is the “over sampling” amount. (Typically I use $latex n_{over} = 4$.)
In this case, you end up with bins of width $latex \Delta x$ that have a spacing (center of bin to center of bin) of $latex \delta x = \Delta x/n_{over}$, and the center of each bin is $latex x_n = x_{min} + (n+1/2) \delta x $, and the new range of values binned is $latex x_{min} – \Delta x/2 + \delta x/2$ to $latex x_{max} + \Delta x/2 – \delta x/2$.
[In order to preserve the original area under the curve, for each data point that falls in a bin, I increment the bin’s value by $latex 1/n_{over}$.]
Since I frequently use this algorithm for generating histograms, I thought it was worth explicating it here at least once. If anyone is interested, I can post AWK code that implements exactly how I do it, since semi-verbal descriptions don’t always do full justice to the original algorithm.
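In the meantime, here is an equivalent minimal sketch in Python of the over-sampled binning just described (bin width $latex \Delta x$, center-to-center spacing $latex \delta x = \Delta x/n_{over}$, and each hit incremented by $latex 1/n_{over}$ so the area is preserved). It is a sketch of the idea, not the AWK actually used:

```python
import numpy as np

def oversampled_histogram(data, x_min, x_max, dx, n_over=4):
    """Histogram with bins of width dx whose centers are spaced dx/n_over apart.

    Each data point falls into n_over overlapping bins and contributes
    1/n_over to each, so the total area matches a classic histogram.
    """
    ddx = dx / n_over                                  # center-to-center spacing
    n_bins = int(round((x_max - x_min) / dx)) * n_over
    centers = x_min + (np.arange(n_bins) + 0.5) * ddx
    counts = np.zeros(n_bins)
    for x in data:
        hit = np.abs(centers - x) < dx / 2             # bins whose center is within dx/2
        counts[hit] += 1.0 / n_over
    return centers, counts

# Example: 1000 synthetic trend values binned with dx = 0.1 and 4x oversampling.
rng = np.random.default_rng(2)
centers, counts = oversampled_histogram(rng.normal(0, 1, 1000), -4.0, 4.0, 0.1)
```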
Re: Carrick #88269,
[handwaves]
Interestingly perhaps, data-gazing has led me to believe that Tiljander03’s Lightsum and Darksum are both proxies for precipitation. I.e. the temperature-proxy claim of that paper’s authors (which Mann08 picked up on) was wrong.
There’s a nice temperature proxy for Finnish Lake Hamptrask that goes back a few centuries and shows the LIA nicely. Tiljander data series don’t.
One can get insight into these series by looking at median values rather than at running averages (as, effectively, Tiljander03 and Mann08 did). Because most varves are thin and a few are thick, very thick, or very very thick, the curves produced via medians are much smoother than those produced by averaging. They show long-term trends more clearly.
I suspect that the thick varves are produced by years with unusual events that lead to lots of silt-sized particles being mobilized in the watershed: hurricanes, torrential rains, rapid snowmelts, forest fires. Most years, nothing like that happens, and there’s a record of a baseline level of sedimentation activity. This baseline varies as a function of “climate” (precip, temp) and climate-influenced recent history (e.g. type and extent of vegetation).
Tiljander asserted that, pre-1720, Lightsum was inversely related to temp while Darksum was directly related (Mann08’s poor due diligence meant that they missed this point). However, the median curves of Lightsum and Darksum exhibit a nice direct correlation to each other, 200 to 1720.
Is all this true? I dunno. But at least it’s plausible. It’d be nice if the folks who search the world for “proxies” could approach data series with a bit more interest in their possible subtleties.
[/handwaves]
Amac:
Do you have these figures posted on your web page?
Thanks.
Also, simply because people think that precipitation correlates with temperature doesn’t always make it so (in some places, e.g. low-precipitation regions, I believe it anti-correlates). I’m not necessarily defending the objective correctness of S&B, by the way, just noting that what they did was consistent with practice in the field, and, following common practice in the field, the conclusions they arrive at are reasonable.
The point being that you can’t self-consistently single out S&B for using common practice without leveling the same charges at all the other papers (which is most of them) that use the same methodology.
Self-consistency has never been a hallmark of the CAGW crowd, of course, otherwise they’d beat down anybody who tried to use examples of extreme weather as evidence for CAGW. (In other words, Joe Romm should have a lot more footsteps on his face than he does, but he’s perceived to further the cause and hence is immune from the bevy of criticism leveled at those who don’t help the cause.)
One amusing side-note, then I’ve got to get back to serious work (proposals to review, etc.): I heard an NPR reporter (I think it was on Ira Flatow’s Science Friday) use the Tōhoku tsunami as an example of extreme weather in the same breath as she was discussing AGW and berating the ignorance of Republicans.
First of all, tsunamis are geophysical phenomena: they involve movement of the sea floor and are unrelated to meteorological phenomena, which occur in the atmosphere. Secondly, the idea that an earthquake is in any way related to the tiny bit of warming we’ve seen is completely absurd. Thirdly, who’s the one that is really scientifically illiterate here?
Her other claim, made in almost the same breath, was that the 2011 tornado season was “record breaking”… well, not in terms of the total number of tornadoes it wasn’t. Nor in terms of violence: the 1974 season produced as many EF5 tornadoes on April 3-4, 1974 alone (six) as we had during the entire 2011 season. In case you’re interested, the April 1974 outbreak had 24 EF4s while the entire 2011 season had 16 EF4s. EF4s and EF5s are the “killer” tornadoes (being inside your house in a safe room won’t necessarily save you), so these are in a category of violent weather unto themselves.
Carrick, you should be careful with your criticisms of Mann and MBH. You say:
Mann did not refuse to report the values he got for R2. He happily reported the value for the 1820 step of his reconstruction (in the caption of Figure 3). This means you’re wrong. Mann only refused to report the R2 values which were bad.
That’s obviously much better!
Brandon, ouch, I stand corrected. He only refused to report adverse results, which is way better. 😉
Carrick, what are you thinking? Mann published a “good” R2 result, refused to publish the multiple adverse R2 results, claimed he never calculated any R2 results and even said calculating those results would be “silly.” This is perfectly simple and sensible, hence why he receives full support, either vocally or tacitly, for all of this from so many people.
I don’t know why this is so hard for you to understand.
I’m not sure why it is so important for you guys to resurrect the S&B paper. I thought skeptics had given up on that paper as dead after Climate Research repudiated it and Hans von Storch (not exactly a Mann acolyte) resigned over the poor peer review process and the refusal to publish a refutation alongside the paper in CR. Not to mention the rebuttal from most of the authors cited in S&B.
http://www.geo.umass.edu/faculty/bradley/mann2003a.pdf
Boris, I’m not resurrecting anything. I’m asking for specific criticisms of the paper, which as far as I can tell isn’t that controversial, nor inconsistent with more recent studies.
Can you provide any substantive, detailed criticisms?
Resignation of editors is generally not evidence of anything other than public pressure, and in any case it is just an appeal to authority on your part. If editors resigned every time a badly written paper was accepted for publication, there’d be nobody left for the job.
Also, do you really want to get me started on criticizing yet another craptastic paper by Mann? (Imagine him taking umbrage at what S&B say… and at all later proxy reconstructions… which is that his original reconstruction is completely wrong.)
I agree that your criticisms are excruciatingly specific.
But then I wasn’t referring to you, because I don’t think you allege “misconduct” the way Carrick does.
Boris:
Twist in the wind, Boris.
What was non-specific about any of the issues I raised?
If you want, I can take AMac up on his generic offer and write them up in as excruciating detail as you would like.
I expect you will just perform the CAGW brush-off dance in response, so this does act as a disincentive to engaging you on this.
Carrick (Comment #88266)
January 5th, 2012 at 12:02 pm
Thanks for the link Kenneth. Why was the white paper submitted? Did they say?
The following excerpt from the paper might give a clue. As I recall I found this white paper at a dendro site – maybe the one where tree ring measurements are archived. Take a look at the last line in the excerpt about tree ring samples dying with their collectors.
Excerpt from white paper:
However, it is not just the measurement data that should be highlighted in this discussion.
As an example the following is a quote from Jonathan Palmer:
“Major crisis looming here are the physical samples. We are loosing the trees. Steady can tell you about his efforts in SE-Asia. In NZ, we have 40,000 year old ancient kauri being mined. I reckon it will be exhausted within 10 years. The holocene sites in 5 years. Saw-millers are already starting to buy farms so that they can secure some future supply. We have set-up an archive at a local museum for biscuits of kauri for future research programs. In other words I have adopted a fire-fighting approach – save as many samples as I can and hope there might be funding to work on them later. Steady has funded me over the last 5 years to collect silver pine (Halocarpus biformis) from the West Cost. We have multi-millennial chronos thanks to that investment – but some sources have been completely destroyed by the land being converted to dairy pastures. The other area is now a kiwi habitat sanctuary so the permit process for further sampling has become much harder. So, data archiving is vital, but I’m first trying to save samples!”
Many dendro people, in different parts of the world, could tell similar stories. PAGES highlighted this problem once, but little came of it. The sources of old tree-ring material are disappearing around the world and as old dendrochronologists whither away, their sample collections often disappear with them!
Re: Boris #88280,
> I agree that your criticisms are excruciatingly specific.
LOL
If you read the link you’d find your answer. Your claim is that S&B should be free to use precipitation proxies as temp proxies because that’s what everyone else does. But when other climate papers use a precip proxy to infer temperature, they do so based on actual evidence.
S&B use the following definition:
Do we really need to go into detail on how this definition is useless in determining whether the 20th century is warmer than the MWP?
Also, S&B average the 20th century together, so their conclusions are misleading within the context of the discussion of MBH’s and other reconstructions’ conclusions about late 20th century warmth.
re: steven mosher (Comment #88263)
Briffa privately floats the possibility of potential methodological bias in his network in the lead-up to AR4. Here is a quote from email #1055 (from Briffa in March, 2006) explaining why (IIRC) there will be no discussion of this in AR4:
After I did my post at tAV back in November, my speculation was not unlike Briffa’s: that artifacts of standardization methodology might actually explain the DP (at least insofar as his own work was concerned). However, after a refreshed look at Roman’s plot of the temperature trend due to homogeneity adjustments, and its similarity to my graph of NH observations minus non-truncated Briffa01, I wanted to have a closer look at the unadjusted observations vs non-truncated Briffa01. I am still working on this, but I can show (borrowing some of Ryan O’s and Steve McIntyre’s R scripts) this 1900 to 1993 plot of non-truncated Briffa01 alongside a composite series of unadjusted GHCN v3 stations (computed as annual values) which have at least 1000 (out of a possible 1128) monthly data points. Subtracting Briffa01 from the unadjusted GHCN composite yields a time series of white noise with an insignificant positive linear trend of 0.18C per century. Caveats are that stations with at least 1000 monthly data points are heavily weighted towards the US and are polluted with obvious step errors in the station series. The key question, of course, is whether these errors across all stations are biased and account for the temperature trends due to adjustments as shown by Roman. The fit of unadjusted GHCN with Briffa01 suggests that one must also be open to the possibility that the adjustment process itself introduces the bias.
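[A bare-bones Python sketch of the comparison described above. The 1900-1993 window, the 1000-of-1128 monthly-value threshold, and the subtract-and-fit-a-trend step follow the description; the station count, noise levels, and the randomly generated stand-ins for the GHCN v3 data and the non-truncated Briffa01 series are arbitrary.]

import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 1994)                       # 94 years -> 1128 months

# Stand-ins for the real data: fake monthly station anomalies (with gaps) and a
# fake annual "Briffa01" series.
station_monthly = {}
for sid in range(50):
    m = rng.normal(0.0, 1.0, years.size * 12)
    m[rng.random(m.size) < 0.05] = np.nan           # some missing months
    station_monthly[sid] = m
briffa01 = rng.normal(0.0, 0.2, years.size)

def annual_means(monthly):
    return np.nanmean(monthly.reshape(-1, 12), axis=1)

# Keep stations with at least 1000 of the 1128 possible monthly values.
keep = [m for m in station_monthly.values()
        if np.count_nonzero(~np.isnan(m)) >= 1000]
composite = np.nanmean([annual_means(m) for m in keep], axis=0)

resid = composite - briffa01                        # GHCN composite minus Briffa01
trend_per_century = np.polyfit(years, resid, 1)[0] * 100.0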
Boris:
Actually they don’t. They do it based on correlation, not prior knowledge of causation. Post hoc analysis at its prime.
Secondly, if you want to invalidate the use of precipitation proxies in S&B, you actually need to show they can’t work as temperature proxies, not just say they “may not”. [You can ding S&B for a methodological flaw, assuming that they did not validate that their precipitation proxies could be used as temperature proxies.]
I hope this isn’t your best argument. S&B weren’t just looking at temperature, they were looking at climate change, which includes changes in precipitation patterns in addition to temperature.
These other papers mixed instrumental records together with proxy measurements, which may or may not be a consistent statistical procedure, but it is not one used by all other proxy reconstructions, as I’ve shown above.
It is not misleading to say that, based purely on proxy measurements, the late 20th century was not anomalously warm.
Next?
Carrick,
Your link to the discussion of the Harvard biology professor, for starters. You wanted that to prove something about CAGWers and how we won’t admit that misconduct occurs in academia or something. So because I don’t agree that your unproven claims about the R2 statistic amount to research misconduct, I must be ready to deny that misconduct takes place anywhere. Right.
Of course, the NAS looked into the verification statistics in 2006 and I don’t remember any mention of “misconduct” then. Far from it. I’m sure McIntyre also brought that up when he talked with the NSF. Again, bupkis. Perhaps instead of multiple investigations being “whitewashes” you have a bizarre definition of misconduct where you assign the worst possible motives to people with whom you disagree.
Boris:
I hope your brain is more ordered than this; you’ve meshed together multiple issues, misunderstood the point I was making regarding the Harvard ex-professor, and generated a strawman conclusion (and not one that I agree with).
Withholding of the R2 statistics is easy to prove: Mann’s Fortran source used in MBH calculates them. The fact, as Brandon observed, that he reported the 1820 step proves he looked at this output. He not only had access to these R2 values, he refused to release them when pressured. That too is public record.
Can you do anything besides appeal to authority? Since you bring up NAS, show us where Boehlert’s original questions tasked to the panel were actually addressed. (It’s a classic political game of non-responsive answer. You’re given a list of four direct questions, and none get answered.)
Anyway the facts speak for themselves. I would request that you not mention any investigation except as it directly addresses any issues that I’ve raised.
Link for me one place where I’ve discussed motive.
@Mosher
The whole IPCC report, as large as it is, is still incredibly light on detail. It’s not dendro, or paleo, it’s everything. For the detail, there are numerous references. The whole climate/AGW thing seems to be one of the most complex scientific issues we have. People still argue about the finer details of how the greenhouse effect even works, with many still getting the basics wrong.
http://rabett.blogspot.com/2012/01/indelible-dumbness-of-physicists.html
Carrick,
You appeal to your own authority on the issue of misconduct and my only point in bringing up the NAS and NSF investigations is that they disagree with your subjective definition of “misconduct.”
But please link to the r2 discussion.
Boris, your excuse for continuing to appeal to authority is just lame.
I’m not appealing to authority here. I’m stating facts that are generally in the public record, and ones that I believe in each case, if questioned, I can substantiate.
Regarding the “r2 discussion”… The statement that calculating R2 was silly comes from the NAS panel discussion, see here:
As I mentioned, that statement (“that would be silly”) is patently false, and it has been widely criticized since (I can provide you with links if you have trouble finding them yourself).
If you need me to, I can come up with specific examples where Mann denied computing R2. Do you want to see Mann’s Fortran code too? (That also is public record.)
So here is what I claim (based on memory now):
1) Mann computed R2 and RE for different time periods in his reconstruction.
2) He reported the RE statistic, which he claimed verified his reconstruction (but he got the threshold for verification wrong, and his RE statistic failed to verify, another story for another time).
3) He withheld the adverse (essentially zero) values for R2.
4) When pressed he both ridiculed the use of R2 and claimed the superiority of RE in comparison to it (based on an erroneous reading on his part of a particular textbook).
5) He also denied ever having computed R2, even though he reports it in his paper, when the value was favorable (h/t Brandon).
6) It is demonstrable that his Fortran code computes R2, so he had access to those values and could have, had he chosen, included them in his original paper or his corrigendum.
7) It is unethical to withhold adverse results, and so Mann’s refusal to release his adverse results was an ethics breach.
Note that #7 does not assume motive. If you have trouble figuring out how that could be, I can ‘xplain that to you as well. (Hint to get you started: If you break a law that you were unaware of, was the act unlawful? If a child performs an unethical act that they are unaware is unethical, does that mean they are a “demon”?)
Oh and:
8) All of the above is in the public record. Boris does not need me to hold his hand to find it.
Make that “If you need me to, I can come up with other examples where Mann denied computing R2”. Note that he denies computing R2 in the quoted statement.
Surely there’s a transcript of the hearing you can source instead of McIntyre?
But, yeah, show me where Mann said he didn’t calculate the R2.
The point is that S&B don’t even base their results on correlation. An “anomalous” period could have been very wet during the MWP and very dry in the 20th century.
That was in the transcript provided by Steve. I misspoke.
As to a transcript, I’m sure there’s a real transcript somewhere; I haven’t bothered looking for it, since I’ve seen nobody (including Mann, who will sue for libel, and I don’t see him letting Steve off the hook here) disputing McIntyre’s account.
Technically you don’t know what they base their results on, or whether they had other a priori reasons for assuming the precipitation records behave as temperature records.
If they fail on this account, it’s a methodological flaw, but that doesn’t by itself invalidate the study.
In order to show this potential methodological error invalidates their results, you would need to show the precipitation proxies really don’t behave as a temperature proxy in a climate field reconstruction sort of way. Has anybody done this? (Given the large number of precipitation proxies used by Mann 08, my guess is they’re fine here.)
The other refusal to provide R2 that I can immediately think of, was Mann’s response to the request by McIntyre during his review of the corrigendum that R2 be included. This is part of the “not generally public” information.
McKitrick has a copy of Mann’s response to this request if you are interested, there is a published source for it too, so if it is libelous, again Mann should take actions to remediate it.
I believe the record will show that this pattern of refusing to release the R2 values was pervasive—other people like Casper Ammann only did so under duress—as was the pattern of incorrectly computing the values of RE needed for verification. I suspect they based their methodology for RE verification (and arguments for not including R2) upon that of Mann’s.
Social network theory anybody?
Mann gave his reasoning for preferring the RE statistic in his response to the Barton letter, including references. If you’re going to prove misconduct, you’re going to have to show that he intentionally misused those sources.
Boris:
There are different levels of misconduct. Some of those require proving motive, others do not. Again if I break a law I don’t know about, is what I did illegal? (ANS: Yes. Motive generally doesn’t affect whether the law was violated.)
I’m not alleging, nor have I ever, that Mann is intrinsically evil (I would describe him as inordinately biased in how he treats data).
To prove that Mann committed an ethics breach, you merely need to demonstrate that he withheld information adverse to his study. The motives or reasons he gives for withholding the adverse information are not relevant. The act itself is unethical.
In this particular case, his conclusion about RE is wrong, but I don’t think he was intentionally giving a wrong conclusion; rather, I think his personal bias affected his judgement (gee, imagine that ever happening), and he selected RE because it appeared to verify his reconstruction when he incorrectly calculated the threshold needed for verification, and because R2 was near zero as a verification statistic. I think Mann really believed in his reconstruction (got “too close to it”, imagine that happening too, right?), so there had to be something wrong with R2.
This preceding paragraph is speculative, but I’m giving it so you can at least see how I’m approaching this. I haven’t seen anything he’s done that approaches the bar of removal of tenure for example.
Carrick,
“I think Mann really believed in his reconstruction (got “too close to it”, imagine that happening too, right?), so there had to be something wrong with R2.”
So it seems Mike Mann has something in common with aging Hollywood actors and actresses… they start to believe their own press releases too. 😉
Oh, and they also support drastic reductions in fossil fuel use, except when they are flying about in their personal jets.
Bugs
‘The whole IPCC report, as large as it is, is still incredibly light on detail. It’s not dendro, or paleo, it’s everything. For the detail, there are numerous references”
What has that got to do with the issue I raise.
It’s pretty simple: the divergence is unexplained. The best practice would be to SHOW the divergence and explain its implications in the text. The implications are: dendrochronology as a science may be bankrupt, the trees may be responding differently in a fashion that is locally driven (in time and space), some of the data may be bad, the methods may be imposing this pattern.
Instead, Briffa hides the decline and papers over the difficult issues. He even argues that there isn’t data to resolve the issue.
On the other hand, when it comes time to request money, something closer to the truth is told: the science may be wrong, the methods may be wrong, it’s damn worthy of investigation.
If it belongs in a summary of the science, it does so with big old caveats, not caveats squirrelled away in documents 5 years old.
What you see is what anyone trained in rhetoric would see in an instant. A message crafted for a particular effect given the author’s intent and his audience.
Jolliffe’s review comments on the use of RE versus r (made as an anonymous reviewer, but he later identified himself as the author of them):
Admission in MBH that they did compute r and R2:
We can turn this into a discussion of verification statistics, but I suspect that will close this thread. 😛 (Might be mercy.)
SteveF:
I suspect Mann gets access to the State P… oops Penn State private jet when he’s fund raising, so yeah, even this applies to him too.
(Wonder if Mann would appreciate one of these?)
Steven Mosher:
Give him a break. He’s just practicing CAGW logic. Insult the author then address an unrelated issue as if it were germane. CAGW Politics 1.0.
Also interesting from Carrick’s link:
As to withholding the R2 statistic: it isn’t really important if Mann actually believes that the R2 statistic is useless for these kinds of data. In other words: would Mann have withheld the R2 statistic if it had helped his case? No one can truly answer that. It’s certainly possible that Mann justified the withholding of the R2 after he saw that it wasn’t good. There’s no way to know this for sure, and thus no real evidence of misconduct. And, I might add, no way for me to prove that Mann didn’t misbehave.
steven mosher (Comment #88233) ,
Thanks for your comment. I am one of your fans in spite of your occasional grumpiness.
curious (Comment #88240)
Good to know the request was a polite one. Did they explain their reasoning?
I posted a link to the actual email here:
http://www.gallopingcamel.info/IPCC.htm
Clearly it was not written by a lawyer; that may be the next step if I fail to grab my ankles.
While a response from the IPCC was expected it took much less time than I anticipated. At least nobody can accuse them of being “asleep at the switch”.
DeWitt Payne,
Good luck with that project.
It sounds interesting but a little more information would be appreciated.
curious (Comment #88240)
“If you need a legal view before responding it might be worth dropping a comment at Tallbloke’s asking for Stephen Wilde’s contact details.”
I am hanging out at the Talkshop. They seem to be my kind of people. It would be great if some of them would join my review team:
http://tallbloke.wordpress.com/2012/01/05/i-am-spartacus-stand-up-and-be-counted-for-science/#more-4146
My wife is afraid that legal action will be taken if I refuse to comply with the IPCC’s “Cease & Desist” request. That possibility should not be taken lightly given what is happening to Tim Ball and John O’Sullivan:
http://joannenova.com.au/2012/01/john-osullivan-puts-his-house-on-the-line-more-than-any-skeptic-ought-to-be-asked-to-do/
Boris, R2 is a widely accepted verification statistic. Mann can invent his own statistical theory that says otherwise (though as you know he admits to not being a statistician), but regardless of motive, it is inappropriate to withhold it, especially having calculated it and known the results to be adverse.
And as I pointed out above, he did report R2 when it was favorable. He only withheld the adverse values.
In science, we have rules of conduct for a reason—people can be wrong. If it was obvious that Mann was right about R2 being defective, then it would be obvious to others as well. You have to allow other people to decide.
I see this as an ethical breach of the order of tearing a page out of a lab book. You are essentially arguing over whether the page is important, I’m arguing removing the page is wrong, regardless of motivation and regardless of what is written on the page.
To censor or not?
Verify the rising gorge.
Know it when see it.
===========
Carrick,
Yes, but I understand that if someone is being mean to you, then normal rules of scientific conduct don’t apply any longer. 😉
Re: Boris #88309,
Boris is quoting Referee #3 and not Dr Jolliffe, whom Carrick quoted in #88306. This is the explanation of the seemingly-contradictory opinions on offer in the two excerpts. Just to be clear.
Re: Carrick #88272,
I’ll put up a file with median values for Lightsum and Darksum in a day or two and link it in this thread. Sorry for the delay, couldn’t get to it last night.
SteveF:
Actually if you suspect they will be mean towards you at some future date..say 5 years after publication, that also absolves you from following ordinary rules of scientific conduct, doesn’t it?
AMac:
Thanks AMac, my brain was fried from proposal reading by last night, so I never followed up on the source of Boris’s comment. It probably would have been better if he had mentioned it was written by a different reviewer (whose name I believe is public information at this point too, but I don’t remember it offhand; maybe CA?)
stacking the deck of the NAS/NRC review.
Gerald North on “due diligence” on an expert panel:
Funny, I’ve served on expert panels… and that wasn’t ever our approach. (The idea that nobody had even tried putting their hands onto the data tells you what you need to know.)
Referee #3:
Is it just me, or does anyone else find this comment to be bizarre?
Layman, yes it is bizarre.
It appears odd but Referee #3 seems to have made a greater effort towards due diligence than the North panel did. He at least actually analyzed data, even if he missed some important issues (IMO).
I’ve gotten access to data from papers before, as a referee, when there was a dispute over how the data were being analyzed. It isn’t very common, because generally the disputes tend to be interpretational rather than methodological.
This particular panel failed miserably in the stated reasons for their assembly; plus, they failed the science by not addressing the questions they were originally tasked to answer, and by coming to an “expert panel conclusion” on issues where contention between researchers clearly existed.
Here are the questions Boehlert posed to the panel, by the way:
Not only were they not answered, it is quite apparent that they were never actually posed to the panel. Such is the nature of a white-wash. Go through the motions as if you are responsive, while making sure consensus opinion isn’t affected by the outcome.
Here’s Sherwood’s complete letter, via the Wayback Machine.
What is bizarre is that the real issue here is the public pillory being used against Mann. Mann hasn’t committed any fraud, and he is the type of person who is not about to admit to a mistake. If you think you know better, publish a paper that does it right. Christy and Spencer have made several mistakes; people point them out and move on. One of the earliest discoveries about the scientific method had nothing to do with science itself, but with people and personalities. Don’t personalise science, don’t attack individuals. I think we can see the wisdom in that right now.
“steven mosher (Comment #88242)
January 5th, 2012 at 4:49 am
Bugs
The ZOD quotes papers using the same proxies as S&B making the same points that S&B made
dont you find that odd”
S&B quote papers in ways that the authors of those papers find incorrect. That is now a common trick for ‘skeptics’ such as Monckton.
@Mosher
“on the other hand, when it comes time to request money, something closer to the truth is told. The science may be wrong, the methods may be wrong, its damn worthy of investigation.”
What happened to the new, caring, sharing Mosher? Did he never really exist?
Bugs: “Don’t personalise science, don’t attack individuals. I think we can see the wisdom in that right now.”
When and where Michael Mann has committed scientific misconduct, he – as a person – should be “attacked” for that. When and where his work is wrong, it should be “attacked”.
Michael Mann, Climategate mail from 2007:
“I have been talking w/ folks in the states about finding an investigative journalist to investigate and expose McIntyre, and his thusfar unexplored connections with fossil fuel interests. Perhaps the same needs to be done w/ this Keenan guy. I believe that the only way to stop these people is by exposing them and discrediting them.”
Oh the irony…
Bugs,
Whoa!
.
I suppose that applies just as much to climate scientists who routinely attack people with whom they disagree, right?
.
FWIW, I agree there has been way too much focus on the disagreeable behaviors of certain climate scientists (Mike Mann is certainly one of them), and not enough focus on the technical weaknesses of the arguments they make (O’Donnell et al and several other papers being notable exceptions). The UEA email messages do show a lot of petty personal hostility, wagon circling, scheming to “punish” people who get out of line, and other behind-the scenes activities that many find objectionable, but IMO people focus too much on that.
.
Climate science can do itself a huge favor and gain credibility if they do AR5 right. But my skim of the ZOD in a few subject areas is best described as discouraging. For example, the analysis of ocean heat content seems to stop in 2003…. and completely ignore the 2003 to 2011 drop in heat accumulation. I ask: how is that possible!?! That is the sort of thing that hurts the credibility of climate science, and makes people think it is far too politically informed/motivated an activity. It doesn’t have to (and shouldn’t!) be that way. I really wish they would see the light, but I fear they never will.
Re: bugs #88334,
Whether by design or accident, it is very difficult to have a constructive exchange with you. For instance, sometimes you make a remark that raises issues of its own. But you seem to be in the habit of ignoring requests for further discussion, as you fancy. E.g. see upthread #88267.
What’s “here”? Remarks made by Carrick, SteveF, me, etc. on this thread? Another person at a different website? A Straw Man in some field somewhere?
Please refer to the comment you are discussing or rebutting (if you are).
Yes. Readers do not need to publish papers in Mann’s specialty to come to that insight. Are you suggesting that being thin-skinned is a reliable indication of scientific excellence?
.
Well, apparently it’s not just you, it’s also Carrick. 🙂
.
For others, it goes like this:
.
1- Take a noisy trend, with large fluctuations.
2- Divide the series by any number you like. Of course this will strongly alter the trend. You may even make it almost entirely flat.
3- Calculate the RE of the altered series against the original series. It grows worse and worse as the divisor increases, reflecting the difference between the trends.
4- Calculate the r2. It stays at one (i.e. “perfect match”) no matter how different the two trends are, not reflecting the difference between the trends.
.
So yeah, if you’re more interested in tracking the long-term behaviour than the short-term fluctuations, you’d need some hard justification for preferring r2 over RE.
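[A quick numerical check of the recipe above, as a minimal Python sketch with made-up data; for simplicity the RE reference mean is taken over the same period rather than a separate calibration period.]

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
obs = 0.01 * t + rng.normal(0.0, 0.3, t.size)   # noisy trending "original" series

def RE(obs, est):
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - obs.mean()) ** 2)

for k in (1, 2, 10, 100):
    est = obs / k                                   # step 2: rescale the series
    r2 = np.corrcoef(obs, est)[0, 1] ** 2           # step 4: stays at exactly 1.0
    print(k, round(r2, 3), round(RE(obs, est), 3))  # step 3: RE drops as k grows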
Toto, your example has no meaning. You compare two time series which differ only by scale. Of course the R2 value is one. You and the “others” might want to try testing the sensitivity of R2 to a change of trend by detrending rather than rescaling.
toto:
I think you would find Jolliffe agreeing with me too, and based on his comments other statisticians in addition to him.
Isn’t your argument backwards here though?
In your example, r2 will give a spurious significance, whereas RE does not. Of course that’s why Jolliffe says you need to use multiple verification statistics…to avoid this issue, and this is something I would endorse too.
In Mann’s case, r2 was basically zero, and Mann was finding significance with RE, which is the opposite of your case. [And to remind, what I was dinging him for was the failure to report the adverse values of R2, which should have been an alarm bell for him, but instead he worked up a pretzeled-up theory for why R2 wasn’t an appropriate verification statistic.]
The problem with RE is there is no underlying statistical distribution associated with it, so you are stuck with performing simulations to compute the threshold for e.g. the 95% CL, and you have no easy way to check that you’ve done it right.
I would claim that Mann didn’t compute this correctly (neither did a number of later papers which appeared to follow his methodology), and if you compute the RE threshold correctly, his RE verification statistic also fails prior to 1900.
This argument is supported by a calculation performed by McIntyre and later confirmed by Casper Ammann: Mann used a value of 0 as his RE threshold of significance, McIntyre found it to be 0.54 and Ammann found it to be 0.52. The actual MBH hockey stick had an RE of 0.48, meaning the hockey stick failed to verify even using Mann’s preferred statistic.
(Which is why his reconstruction has absolutely nothing in common with more modern reconstructions, outside of the calibration period. IMO, the MBH reconstruction is meaningless.)
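[For the curious, here is the Monte Carlo threshold calculation stripped to its bones, as a minimal Python sketch: score many skill-free red-noise series against a target with RE and take the 95th percentile as the benchmark. The AR(1) coefficient, series lengths, and the plain-noise null are arbitrary stand-ins; in practice the null series would be pushed through the full reconstruction procedure.]

import numpy as np

rng = np.random.default_rng(2)
target = rng.normal(0.0, 1.0, 100)          # stand-in verification-period series

def ar1(n, phi=0.7):
    # Skill-free red noise as the null "reconstruction".
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

def RE(obs, est, ref_mean=0.0):
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - ref_mean) ** 2)

null_RE = np.array([RE(target, ar1(target.size)) for _ in range(5000)])
threshold_95 = np.quantile(null_RE, 0.95)   # an RE must exceed this to claim skill
print(threshold_95)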
Again you use multiple verification statistics… your example is extremely artificial even if technically correct. It’s rare that we have long term trends that are strictly constant in nature.
Moreover, if you are trying to determine whether a particular proxy behaves as a temperature proxy, you’re probably better off with Pearson’s correlation… for exactly the reasons you gave: they have different scales, and if you just used RE to compare them, you would always reject significance.
toto
in that case, why would you NOT show the R2?…just as a casual credential for passing “Go” rather than because it means anything? Just asking
toto, perhaps you can come up with a concrete example where r2 spuriously fails to verify and RE doesn’t.
“Yes. Readers do not need to publish papers in Mann’s specialty to come to that insight. Are you suggesting that being thin-skinned is a reliable indication of scientific excellence?”
I am saying people are people. I would also be correct in saying that Mann knows a lot more about climate science than anyone who posts in this blog. You think he has something wrong, publish it, advance the science. Another part of the development of the scientific method. Random writing at random places just causes confusion and advances nothing. Look at the net contribution of ‘skeptics’. Who can know what they have contributed, when the vast majority of it is rubbish. How do you sort the wheat from the chaff. We still get garbage like this.
http://rabett.blogspot.com/2012/01/indelible-dumbness-of-physicists.html
bugs (Comment #88353)
> How do you sort the wheat from the chaff.
It’s a good question. If one that’s already been discussed often here, and elsewhere.
Obviously, “place a lot of weight on the peer-reviewed literature” is one part of the answer. Who disagrees? Obviously too, “rely on the peer-reviewed literature” isn’t a complete answer for any thinking person.
To repeat (from other Blackboard threads), scientists don’t see reliance on the peer-reviewed literature as an adequate answer. Consider conferences, talks, Happy Hours, funding proposals, posters, scienceblogs…
If you take the time to contest these points — in this forum — well, that’ll be another amusing meta anecdote to add to the ones I’ve already witnessed.
You might learn a lot from choosing some dinky climate-sci issue and working it through. (I’ve done that, y’know.) It’d leave you with a different perspective than the one you seem to have now.
> We still get garbage like this. [URL]
Bugs, life is short. With a lead-in like that, why in the world would I bother clicking on that link?
Bugs,
One of the maddening things about climate science (even more than most fields of science) is that lots of different knowledge is relevant. Does Mike Mann know more than me about, say, the various reconstruction techniques for climate proxies? Sure. But what I think you miss is that there are lots of people who comment at this blog (and elsewhere) who know quite a lot more about relevant subject matter than Mike Mann. I am certain DeWitt knows a heck of a lot more physical and analytical chemistry. I am sure I know a lot more about light scattering (aerosols), AMac sure seems to know a fair amount about (dare I say it?) lake varves. Carrick knows a lot about data treatment and error analysis. Paul_K and Julio know a lot of physics. Lucia knows a great deal about laminar and turbulent fluid flows… not to mention statistical treatment of data.
.
When you suggest that Mike Mann knows more about “climate science” it suggests that you view climate science as some special field, disconnected from other technical fields; nothing could be further from the truth, because all fields of science are (and must be!) internally and externally consistent, which is to say, consistent with all other fields. When you say that practicing scientists and engineers (some of whom are professors at universities!) should not critique the technical content of climate science, here or elsewhere, it just shows your lack of understanding about how science works.
Agreed… that is probably the most pathetic thing the self-professed rabett has ever posted. But I think that it attracted the attention of Pierrehumbert, so more kudos points for Rabett. Not a lot to dispute in what Brown was saying, though.
“Bugs, life is short. With a lead-in like that, why in the world would I bother clicking on that link?”
It’s a joke made by a chemist. Yet another physicist, a PhD no less, has come up with another theory about the greenhouse gas effect that is totally wrong.
“You might learn a lot from choosing some dinky climate-sci issue and working it through. (I’ve done that, y’know.) It’d leave you with a different perspective than the one you seem to have now.”
People work a lot of things through on blogs, and come up with nothing, even if they think they all agree that they have. Fact is, the IPCC report is substantially correct, which is an amazing achievement for such a complex issue.
Read the link I gave; the subtleties of the greenhouse effect are far more complex than a lot of people realise.
SteveF:
Well said.
I will note that black and white thinking (“Mann knows more…”), such as bugs exhibits here, seems to be a prerequisite for accepting CAGW as fact.
Science is democratic: a thing is true regardless of who says it. A mistake is still a mistake regardless of who made it.
bugs (Comment #88358) —
> Read the link I gave,
Again, you misunderstand. This is (supposed to be) a conversation. If you believe the link you offered moves the dialog forward, explain how you think it does so.
My backlog is too full of interesting material to put “garbage” at the head of the queue.
By the way, as regards conversation, you never responded to #88267.
Bugs,
“the IPCC report is substantially correct, which is an amazing achievement for such a complex issue.”
It would be amazing if a consensus report on a field was not ‘substantially correct’, at least on the basics. It is pretty clear that a consensus report on oncology, or particle physics, or protein folding would also be ‘substantially correct’; that hurdle is not a high one to clear. The key issues (climate sensitivity, aerosol effects, model accuracy, rate and extent of future warming, consequences of future warming) are poorly defined, or hardly defined at all. The devil is in the details, and the details matter when potentially massive public expenditure is involved.
There seems to be a lot more scrutiny going into climate research than, say, went into WMD before the Iraq war.
Climate research is still at the bleeding edge in several areas. That is not the fault of the scientists, that is the nature of the subject.
bugs:
Again this black and white thinking…
I don’t blame climate science for the complexity and relative youth of the field.
I can blame them, as I would any other scientist whom I hold to exactly the same standard, if they aren’t transparent with respect to methods and data, and withhold adverse results.
Truthfully it is because of the interest in this topic by laypeople like yourself that so much confusion over what constitutes appropriate behavior seems to arise. If people like you could start seeing this as less of a “good guys versus bad guys” and more like an antagonistic relationship between people who have different views, neither of which is immediately obviously correct (which is a healthy thing), I think that would advance the dialog more than berating other people because they aren’t worthy of debating Michael Mann.
Carrick,
“I will note that black and white thinking (“Mann knows more…”)”
.
And that is the foundation of all appeals to authority. Bugs seems to miss the blindingly obvious: whether someone thinks Mike Mann is a saint or a tool, a genius or a moron, issues like: inverting the lake varve data, insisting on the validity of questionable strip-bark data, and contorting statistical analysis to conform with his conclusions, are far more important issues.
bugs,
“There seems to be a lot more scrutiny going into climate research than, say, went into WMD before the Iraq war.”
Say what?
The WMD data was much less than clear, with great uncertainty. The Bush administration (and the leadership of the countries who joined the Bush administration) took the “precautionary” position with respect to Saddam’s WMDs, and they clearly got it wrong. That error cost a huge amount of money that could have been better spent, not to mention thousands of lives.
.
Humm…. Yes, errors of judgement, even when motivated by the precautionary principle and good intentions can cause terrible harm. Just as adopting the precautionary principle for low probability outcomes like extreme future warming could do terrible harm.
My point was the level of scrutiny. I don’t think that any other recent area of science or public importance has been so heavily scrutinised. To what end? Nothing really, the science is still saying pretty much the same thing it was saying twenty years ago. MBH98, whatever its faults, still makes a claim that stands up. You only have to look at the melting of what has up to now been ‘permanent’ ice to realise that.
For bugs – not sure what your definition of ‘permanent ice’ is but you might enjoy this article:
http://noconsensus.wordpress.com/2009/06/16/historic-variation-in-arctic-ice-tony-b/
MBH98 was indeed excellent, though upside-down varves would have made it even better. Oh, wait…
bugs:
Medical science….definitely more heavily scrutinized..
Something to do with the trillions being advocated to be spent by the advocates.
Actually it’s definitively and unquestionably wrong. (Of course you didn’t spell out which claim you thought still stands up… but if it’s based on science fiction styled mathematics, who cares?)
Not related to MBH to start with. Secondly we have no records for arctic ice extent for the MWP.
There is this
http://tamino.wordpress.com/2011/12/02/1400-years-of-arctic-ice/
I was aware of that paper on arctic ice reconstruction, but it’s one of those results that bears replication, and I have no idea what to make of it without further professional review. Do you notice how Tamino is completely uncritical of anything that supports his prior held beliefs? Skeptical Science is as bad in their treatment.
More examples of black and white thinking…
[But again it’s only peripherally related to MBH98]
Not really, more supporting evidence that MBH98 was right. Let’s assume that Mann is found guilty of scientific fraud (I don’t think it would ever be possible to prove, and I don’t think he has ever committed any); what is achieved then? Nothing. Over the course of twenty-odd years, what has happened is that ‘skeptics’ have either (depending on their bent) had to concede that AGW science is fundamentally correct, with known areas of uncertainty, or have just continued to serve up drivel that is fundamentally wrong.
Interesting mail
http://tomnelson.blogspot.com/2012/01/unsettled-science-warmist-tom-wigley.html
I’m sorry, but you’ll have to let us know what is interesting about it. Nice to see the Mosher of old back again, though. Cheers.
Bugs, I’m sorry, class is not in session for you.
The issue is sensitivity. Look it up.
I know sensitivity is an issue. What else is new?
bugs, near-zero correlation doesn’t make it right, it makes it dead wrong. If you make conclusions based on a wrong analysis, the conclusions are irrelevant.
Sorry you can’t figure out simple things like that on your own.
Bugs,
Which is much the same as saying that radiative physics is fundamentally correct. But that begs the real issue: the areas of uncertainty are the only ones that really matter for public policy. The whole debate is not and has never been about the basics. It has always been about the important stuff like climate sensitivity, accurate projections of warming, and accurate projections of subsequent effects. And those are the weakest areas of climate science, not the strongest. The endless emphasis on wild-eyed projections of doom (1.5 meter sea level rise by 2100, much of the Earth soon to be uninhabitable by humans, etc.), which are almost certainly wrong, is what is always used to justify the need for ‘profound changes in how people live their lives’.
.
Of course, it seems to me pretty clear that those who are most dedicated to the need for ‘profound changes in how people live their lives’ would be pushing for that outcome independent of global warming. They were saying the same things on the first Earth Day in 1970 as they are saying today. (And yes, I did attend the first Earth Day activities on my college campus; I was just as repulsed by the rubbish then as now.) Before that, the Club of Rome said much the same. Before that, Rachel Carson, and a host of others. Neo-Malthusian nonsense has been around for most of my life… and I am not young. Climate science has been far too influenced by and connected to the nut-cakes.
I could have come to the same conclusion as Mann using a SWAG.
SteveF:
The most egregious example of this is schizophrenic street people joining the Occupy Wall Street movement (probably for the warmth and protection from the police), ending up being given a chance to talk… and people responding enthusiastically as if what they were saying was anything besides word salad.
That’s the problem with environmentowackoism in a nutshell. In a sort of reverse Turing test, when you can’t tell the difference between what a supposedly rational person is saying and what a schizophrenic one is saying, you have a movement that has jumped the rails.
Based on historical reports, I think this underestimates the amount of Arctic sea ice loss during the late MWP, the period when it was possible for Vikings to sail in a direct line between Iceland and Greenland during the summer.
It doesn’t even get the increase in ice that prevented the Vikings from visiting Greenland after around 1450 or so. (It incredibly shows less arctic ice during the LIA.)
In other words, probably red noise with recent arctic sea ice loss spliced onto the end.
“Carrick (Comment #88384)
January 7th, 2012 at 9:04 am
bugs, near-zero correlation doesn’t make it right, it makes it dead wrong. If you make conclusions based on a wrong analysis, the conclusions are irrelevant.
Sorry you can’t figure out simple things like that on your own.”
It’s warming, and this is despite a quieter than normal solar cycle, and the extent to which the Southern Ocean sucks in heat.
http://theconversation.edu.au/climate-change-and-the-acidifying-freshening-warming-southern-ocean-4489
I think this illustrates perfectly the problem I have with Carrick’s positions on climate science. He’s well versed in the minutiae of MBH98 (at least in McIntyre’s arguments), but then he uses that to extrapolate out to “climate science” in general. The irony of this bogus extrapolation whilst he chastises others for “black and white thinking” is not lost on me.
Carrick isn’t the first person to try to argue the big picture of climate science from minor statistical disagreements in a 13 year old paper. (Lest anyone think that the statistical critiques are as devastating as Carrick makes them out to be, remember that the NAS report agreed that MBH contained mistakes, but said that those mistakes did not invalidate the research. This is the same NAS report that skeptics happily quote when it comes to the use of Bristlecones.)
Mosher’s post is also instructive as it is more about the subtitle of a presentation than any substance. But the cadre of skeptical blogs are rarely about actually investigating the science. It is more often about uncovering some “misconduct.” Does anyone believe they actually think Mann being sanctioned in some way would illuminate the science?
Let’s say they do think that, well that would be stupid.
Let’s say they don’t think so, then why is there a fixation on Mann and tiny things he may or may not have done wrong when Clinton was president?
The answer is obvious: these pseudo-skeptics can point to wrongdoing as a way to argue that all of climate science is corrupt. That’s a lot easier than actually showing the science to be in error.
Re: Turing tests — I believe that ‘bugs’ is unknown to the rest of us as a person. We only know him or her by typed comments such as these. (Much the same obtains for me and others, obviously.)
It strikes me that ‘bugs’ meets Turing-like criteria for being a committed and subtle climate skeptic, in disguise. In this thread and others, ‘bugs’ repeats half-baked talking points, and fails to address the cogent lukewarmer concerns that arise during discussion. By doing so, ‘bugs’ intends to get the uncommitted reader to question the merits of Mainstream views, and their proselytizers.
I don’t actually think this is the case. But, if I were ‘bugs’, the plausibility of this hypothetical would give me pause.
“There’s nothing I like less than bad arguments for a view that I hold dear,” wrote Daniel Dennett. His subject was different, but the point holds.
[Edit: I composed this comment prior to reading Boris’ #88391, immediately supra.]
Boris:
I’m not arguing the big picture from “minor statistical disagreements in a 13 year old paper”. You’re simply mistaken on that point. I agree with the main conclusions regarding AGW, but obviously have doubt re the impact of that warming.
I’m simply discussing a paper that neither you nor any other CAGW person will admit is completely wrong (and the errors went well beyond merely “statistical” issues; they had to do with the debate over what is acceptable conduct in science).
The NAS position was basically “well it isn’t that different from other papers [that also use strip bark bristlecones and Mann’s erroneous verification statistics], so the errors must not have mattered that much.” This was a result obtained by “winging it” in North’s own words. Some analysis that was. As to whether a 13 year old paper is relevant—it’s published and it’s not been retracted so of course it’s relevant. (I seem to remember some 1905 papers that still have a big impact on science.)
Try the dismissive hand-waving act on somebody who is a bit more easily fooled.
For the record, since I think Boris may have missed it, I think the “modern” reconstructions [figure omitted] are basically right, and I don’t expect future reconstructions to dramatically change the picture (post 800 AD at least).
It is clear from this (and even more obvious if you just compute the cross-correlation between MBH98 and the newer reconstructions) that MBH98 simply has given erroneous results. The newer ones aren’t an update, they are a contradiction (they lie outside of the stated uncertainties of MBH98).
I also think if you graft on the instrumental temperature record (assuming that’s appropriate to do; I’m agnostic on that, to borrow Nick’s word), the conclusion is that it is warmer now than at any time in the last 2000 years, with the majority of the difference due to temperature increases post-1970 (what I view as the AGW period).
Identifying mistakes that are made in a paper, especially one that influences a slew of future papers, is important regardless of claims to the contrary.
The “feet to the fire” that Mann has “endured” (never seen that happen to a scientist before :rolls eyes:) has made him a better scientist: he is much more open in what he does and in providing access to his data and code.
He still has problems with clearly labeling adverse issues as such (preferring to split them between the paper and the author-maintained SI), and with ignoring the advice of the people who collected the proxies on their relevance as temperature proxies.
I think Briffa sums it up nicely:
(Briffa also called criticisms of Mann “fraudulent”, I suppose that’s SOP CYA, since how a criticism can be fraudulent is a bit difficult to follow.)
Boris, did you read the mail? It’s not about the subtitle.
Not at AGU.
I missed this gem:
As far as I know, nobody on this blog has called for Mann to be sanctioned.
So who were you thinking of?
Mann, as the victim, would be more influential as an advocate for immediate AGW mitigation, so only the silly people on the other side of the issue would push for a sanction or even an investigation.
I personally judge that how Mann and his works are viewed by other climate scientists and defenders gives me better insight into the consensus community. Mann has provided some interesting insights and evidence of the weaknesses of issues such as temperature reconstructions and the history and predicted future of tropical storm intensity and frequency.
It would be kind of like hearing that some group purporting to support Ron Paul made that very distasteful piece about Huntsman’s adopted daughters.
Carrick [88400]
You may be wrong there. I don’t post much here these days but I follow the threads. I for one would be happy to see anything and everything that can be obtained -for all to see once and for all- concerning the innards of Mann’s work, be it through FOIA or any other legal means available. No holds barred.
Because purporting to do science (with public funds, no less) and refusing, for whatever reason, to share raw data and analytical methods [as expected in the oh so basic “materials and methods” section of any proper scientific paper] is simply and fundamentally unacceptable. Full stop.
And anyone, who for whatever reason, thinks that asking for that basic information to be made available -because the authors obviously and repeatedly will not- is somehow not kosher, should have a serious discussion with self about the fundamentals and ethics of science.
If it turns out that Mann engaged in anything remotely resembling obfuscation, manipulation or any other form of generally understood fraudulent behaviour with the facts/data [as in the Enron corporate or the Vioxx clinical/medical science cases], he should be sanctioned to the full extent of the law.
Because, if it turns out that Mann’s work was in fact fraudulent, and crucially since it was used for years as a core argument in the IPCC’s “advice to decisionmakers” to reshape far reaching socio-economic policies, with multi billion dollar consequences, I would like to learn how it might be possible to sue [class action] him and anyone else involved in the scheme [including the IPCC, if possible].
Mann is innocent until proven guilty; of course. But if shown guilty, he and his fellow travellers should be sanctioned to the full extent of the law. In order to ascertain that, we should have every access to his work. [“No shit Holmes… Keep on digging, my friend”..] Anyone who thinks/argues [not necessarily you] that any scientist should somehow be above those fundamentals, has entered intellectual and legal never neverland.
“Anonymous” left a link at my blog to a new manuscript. “Climate Change: Where is the Hockey Stick? Evidence from Millennial-Scale Reconstructed and Updated Temperature Time Series,” by Guido Travaglini of the Law and Finance faculty at the University of Rome. PDF. I do not know its publication or review status (it’s dated 2011).
This author tackles many of the questions concerning multiproxy reconstructions that Carrick has addressed on this thread, including whether it is legitimate to infill data beyond the end of a time series (he believes that it is).
Carrick,
Here are the Tiljander03 data series “Lightsum” and “Darksum”, viewed as median values, as discussed in Comment #88271, upthread. The period is set for 25 years. The picture is not as distinctive as I had recalled: the overall pattern is similar to that presented by performing an 11-year rolling average. JPEG.
Data file is here: .xls.
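For anyone who wants to poke at the same comparison, here is a minimal Python sketch. It assumes the series have been exported from the .xls into a plain CSV with ‘year’, ‘lightsum’ and ‘darksum’ columns; the file name and column names below are placeholders, not the actual layout of the file linked above.

```python
# Sketch: compare a 25-year centered rolling median with an 11-year rolling
# mean for the Tiljander03 "Lightsum" and "Darksum" series.
# "tiljander03.csv" and its column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("tiljander03.csv").sort_values("year").set_index("year")

fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
for ax, col in zip(axes, ["lightsum", "darksum"]):
    ax.plot(df.index, df[col].rolling(25, center=True).median(),
            label="25-yr rolling median")
    ax.plot(df.index, df[col].rolling(11, center=True).mean(),
            label="11-yr rolling mean", alpha=0.7)
    ax.set_ylabel(col)
    ax.legend()
axes[-1].set_xlabel("year (varve date)")
plt.tight_layout()
plt.show()
```

A rolling median is less sensitive to the occasional extreme varve than a rolling mean, which is probably why the two smoothers give somewhat different pictures of the same series.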
Re: tetris (Jan 7 14:52),
There were reasons, not excuses mind you, for the Vioxx screwup at Merck. But note the response when things became clear. Merck immediately pulled the drug from the market. Btw, the cardiotoxicity of the other COX-2 NSAIDs may only be slightly better than Vioxx. Celebrex is, in fact, still on the market. I know of an employee at a major drug company who was recently caught fudging data. That person’s employment was terminated immediately.
Big Pharma simply has too much at stake. Fictions like The Constant Gardener or a recent episode of Person of Interest where senior drug company officials order the murder of individuals who might blow the whistle on a bad product are just that, fictions.
Thanks AMac. Beautiful hockey stick, I can see why Mann would gravitate towards it. 😉
tetris, I don’t see prosecuting Mann for his obdurate behavior to be particularly beneficial. If anything it would just lead to a sense of scapegoating individual researchers.
I see getting the field to change its attitudes as a whole in terms of data and code sharing more important, and to a large degree it has improved.
The biggest crisis they are suffering in paleoclimatology is that the original series aren’t being routinely archived at a site equivalent to GHCN, so as the authors retire, in some cases, their data disappears with them. In other words, their own stubbornness about not sharing data is coming back to bite them.
Kenneth Fritsch:
My thoughts exactly. But beyond that, I’m far from being convinced that there is anything worthy of an investigation. IMO the so-called academic reports “vindicating” Mann have been embarrassing puff pieces. North with his admitting to “winging it” totally loses any respect or credibility IMO.
More academic investigations would be equally pointless, be performed at the same abysmal level, and I think I’ve seen enough senior researchers publicly humiliate themselves shilling for a “fellow” researcher.
@Carrick
“I see getting the field to change its attitudes as a whole in terms of data and code sharing more important, and to a large degree it has improved.”
What data or code is not being shared? IIRC, it’s only UAH. And that Piers Corbyn guy keeps everything secret as a matter of policy.
bugs:
You can’t actually be serious/I nominate this for the silliest comment on this blog thread award.
Neither Ljungqvist’s proxies nor his code is public (though he is cooperative about sharing the data under an agreement of confidentiality). Moberg 2005, Briffa 2006. Briffa is legendary for his stonewalling of attempts to gain access to his data and/or metadata (Yamal anyone?). etc etc etc
The problems with the satellite data start with NASA, well before they reach UAH. Some details about AQUA are still under wraps, and they’ve danced around FOIA with the best of the paleodendrologists.
etc^30.
I’d like to see UAH release their code, of course; I’ve heard rumors that RSS, at least, is going to do so. I don’t see anybody doing anything with either of these. (While we’re at it, why do you suppose HadSST doesn’t receive more scrutiny than it does?)
Isn’t all Piers Corbyn’s stuff proprietary code that was privately funded? That isn’t the same as data or code collected under US grants with explicit data-sharing clauses in them. Obviously we draw the line at commercial products; nobody probably even wants to see the Windows 7 code, for example (ick).
Even Apple has portions of its code in the public domain. You know you’ve got transparency issues when a software giant is more “open” than you are.
> I nominate this for the silliest comment on this blog thread award.
But the competition is stiff…
> But the competition is stiff…
Yeah if you count comments…Heads, not so much…
Why is it a silly comment to ask about what data isn’t available? It’s hard to keep track of the Plen-T-Plaints.
“This author tackles many of the questions concerning multiproxy reconstructions that Carrick has addressed on this thread, including whether it is legitimate to infill data beyond the end of a time series (he believes that it is).”
AMac, I have not read the paper in question here, but does the infilling beyond the end time of the series mean attaching the temperature series to the end of a proxy reconstruction? In my mind the only way you could legitimately do this would be where you have established an equivalency of the instrumental thermometers to the proxy thermometers.
If these papers were to show individual proxies (and without the spaghetti effect) without the attached instrumental record, the visual impact would be entirely different than what is normally presented. That the intent of the authors is something different than pure science is in my mind clearly evident by their using these graphs in their papers. Science is intended to make the picture clearer and not more cloudy.
“More academic investigations would be equally pointless, be performed at the same abysmal level, and I think I’ve seen enough senior researchers publicly humiliate themselves shilling for a “fellow” researcher.”
I would have thought that most of these institutions would be pointing to “academic freedom” if someone were to suggest an investigation of scientists without due cause, and would be firmly on the side against such actions. That these institutions instead embarked on these puffy investigations rather gives away the rationale for doing them, i.e. to put a positive, or at least less negative, spin on the revelations from climategate.
I personally do not favor these investigations because you could get persecutions of academics with unpopular views or a puff investigation used for damage control when views are more popular.
Re: Kenneth Fritsch (Jan 8 10:50),
Travaglini (in #88406) is, like some of the other recent entrants in the multiproxy recon arena, coming from a more mature field where statistical treatment of time series is a commonly-used practice. Like them, he seems to have been motivated to publish after thinking, “Gee whiz, there’s a lot of bitter controversy here, but the underlying problems seem pretty old-hat, at least from a statistics point of view. Let’s at least get the beginner-level mistakes out of the way, and see what sorts of reconstructions result, and what power they have.”
Being a stats dunderhead myself, I would need more hand-holding than the paper provides to answer your questions about it. However, I think it’s likely that Travaglini has not made the sorts of silly and obvious errors that underlie many of the canonical Climate-Science recons (as Carrick has discussed in this thread and elsewhere).
.
> …you could get… a puff investigation used for damage control when views are more popular.
Now that you mention it, that does seem like a possible outcome.
There’s also a reconstruction method being developed by a set of statisticians that is much more rigorous in its approach. I had it at my finger tips before I had a hard-drive crash… I’ll see if I can dig it up on Monday.
I’ll admit one of the more amazing things about Mann’s early work is just how much of the statistics he does is so totally ad hoc. There’s nothing wrong with ad hoc methods if you put them under enough scrutiny, but using “tried and true” methods is certainly less risky… (Anyway, IMO, Mann’s real problem was he was producing junk, and he was going through contortions trying to prove to himself that the reconstruction worked, rather than accepting that it was failing to verify and what that meant.)
“You can’t actually be serious/I nominate this for the silliest comment on this blog thread award.”
I don’t think so. I also wonder what it achieves. There is usually enough data out there from the raw sources to replicate a paper independently. Which is what people usually do anyway, since someone’s code or logic could be wrong. Oh wait, that’s not the point, the point is to burn a witch at the stake.
The other point is that when code that has been demanded of people is released, for example the GISTEMP code, everyone pretty well ignores it.
Re: bugs (Jan 8 14:48),
That’s a new contender for silliest statement on this thread. I’ll leave it as an exercise for the reader to figure out why.
“There is usually enough data out there from the raw sources to replicate a paper independently”
So, if you happen to want to see just which data the scientists selected out of all the available data, you should be prevented? Surely, part of the interest is in the criteria for selection and then how the extremely sophisticated people actually handled the data.
bugs, the empirical research on reproducibility shows that you are on crack.
Funnily, some researchers have not been able to reproduce their OWN work.
bugs:
Wow, another amazingly ill-informed comment from bugs.
Just because YOU ignored it, doesn’t mean we have. Jeez.
Most people use the replication of GISTEMP produced by the Clear Climate Code people. I run it when doing numerical experiments, and have occasionally posted results here (like the land-mask study based on KevinC’s work).
How would they have been able to replicate the original FORTRAN + shell scripts in Python without access to the code?
steven mosher, it’s funny how the people who know nothing about code verification and validation are always the first to defend the practice of hiding code and data.
I have a student who sent me his code, but didn’t send the wired-in data files associated with it or the output it is supposed to produce. I have to find him tomorrow and knock on his noggin. How am I going to be able to verify it’s performing the same as his version without some sort of verification suite?
Carrick:
It is a waste of time to engage bugs as if the exchange is about the merits. All bugs’ comments derive from these two principles and these two principles only.
Axiom (I): If a published article is deemed loyal to the CAGW Narrative, mistakes are always insignificant, spin is OK and overt data manipulation, selective omission, deletions or anything else inconsistent with academic professionalism cannot be condemned because of Axiom II.
Axiom (II): Any writing that proceeds from a state of disloyalty to The Narrative means by definition that (a) the writer is motivated by some form of malice and therefore (b) the writer has no standing to make any criticism. And of course, making such a criticism is itself proof of disloyalty.
I suspect that bugs is actually a North Korean chat room bot that has been slightly modified to focus on climate issues.
Actually, I think Bugs has made some progress: he seems to have (finally) ended his monomania about Steve McIntyre… that was really getting old.
.
Now, if he could just wrap his head around the reality that much of what is published in every field is either flat wrong, has serious problems, or is irrelevant, then people would stop making comments like #88435. There are papers that make real contributions (in the sense that they advance understanding) of course, but these are a minority. If he could see that, he could (finally!) stop trying to defend every paper in climate science…. no matter how bad.
SteveF:
I’ve seen numbers like 85% of all peer reviewed publications being totally wrong, grossly in error or irrelevant.
Actually I don’t think it gets any better in terms of quality for what gets approved for funding. If anything, due to nepotism and the highly (within field usually) politicized nature of how proposals get reviewed, the percentage of “useful” grants actually approved is likely even lower than that.
The miracle of climate science is that no errors are ever made, or, if they are made, they never matter. It is immune from the foibles of other fields, apparently, and we should spend trillions of dollars based on economic policy recommendations from scientists with no training in finance, or in any field that would qualify them to make these recommendations.
Here’s a recent publication from the medical field, which is arguably much more transparent and more heavily scrutinized than climate science. It suggests the percent of correct publications is very low indeed. No comment on the veracity of it….following the author’s lead, it has a very small chance of actually being right. 😛
(Let’s face it, nobody is going to make billions based on Michael Mann’s latest PC-based dendro reconstruction software.)
The reason mistakes made in Climate Papers never, ever affect the main conclusions of Climate Papers is that the main conclusion of Climate Papers is always “it is getting warmer and it’s man’s fault”.
The only error which could change that conclusion is one which contravenes it.
Bugs,
http://www.youtube.com/watch?v=dF1-nkqwmjI
@SteveF
“SteveF (Comment #88436)
January 8th, 2012 at 7:55 pm
Actually, I think Bugs has made some progress: he seems to have (finally) ended his monomania about Steve McIntyre… that was really getting old.
.
Now, if he could just wrap his head around the reality that much of what is published in every field is either flat wrong, has serious problems, or is irrelevant, then people would stop making comments like #88435. There are papers that make real contributions (in the sense that they advance understanding) of course, but these are a minority. If he could see that, he could (finally!) stop trying to defend every paper in climate science…. no matter how bad.”
I’m not trying to defend every paper in climate science. I have already said that mistakes are made and mistakes will be made. What I object to is this public pillory, which is used to incite a hatred akin to that of Goldstein in 1984. McIntyre is the one who cannot move on past Mann; the word “mannian” is one of his favourite derogatory terms.
“Axiom (I): If a published article is deemed loyal to the CAGW Narrative, mistakes are always insignificant, spin is OK and overt data manipulation, selective omission, deletions or anything else inconsistent with academic professionalism cannot be condemned because of Axiom II.”
What is wrong with the science to date? As time passes, it has only been demonstrated to be fundamentally correct. Research continues.
@Mosher
Are you trying to tell me that code and data should be available? I agree it should. I’m not going to burst a blood vessel over it, Moberg or UAH. If you’re not trying to tell me that, then just tell me what you think, it saves a lot of messing around.
Bugs,
Fundamental is an interesting word. A fundamental difference between religion and science is that in science when observations are in conflict with your thinking, you change your thinking. In religion, you call on faith to support your theories despite the evidence.
As a religion, climate science has nothing wrong at all. In fact all it needs is faith to believe that it is innately perfect. Of course, no apostasy can be tolerated.
As a science, it looks very tarnished. The attempts by various individuals to find fig-leaves to cover the defects – noble cause corruption – rather than allowing full exposure and open debate does no service to the science, and actually inhibits the natural corrective mechanisms which apply in most other sciences from bringing about improvements in understanding. The science becomes mired in dogma through its own inability to examine criticism objectively, and then to change/evolve as necessary. Eventually the science and the dogma are indistinguishable, and they both become increasingly removed from evidential support.
Your statement “As time passes, it is has only been demonstrated to be fundamentally correct.” seems to me to be already removed from reality. Almost every key indicator is diverging from prediction, and not a month goes by without new data appearing which challenges basic precepts. In a healthy science, this should be seen as an opportunity for renewal or complete bottom-up rebuild. Which bit of your fundament are you still attached to?
Flectamus genua. Verbum Domini (et Phil Jones et Michael Mann) dictum est.
(Quod erat demonstratum est per #88435)
I’ll expand on some recently-raised points, while trying not to overstate the obvious.
The reader can place most comments on this thread on a scale ranging from fully supportive of Mainstream climate science views, to entirely opposed to them.
There’s also an orthogonal scale, which I’ll call “X-factor.” Commenters who are high-X-factor engage in this discussion to teach readers, and also to learn about the subject matter. They value the intellectual traditions of science, including acceptance of the idea that their views could turn out to be partly or even largely wrong. They try to evaluate current and past views on climate in light of relevant data and data-processing. The latter includes consistency of ideas with underlying physical process, and the maths used in analysis — particularly the application of statistics. “Meta” issues also matter, such as transparency, the rules of publication (e.g. peer review), and archiving.
Commenters who are low-X-factor typically pay lip service to these matters. Their remarks typically focus on criticisms of work that questions their stated beliefs on climate, or on defenses of work that supports those beliefs. The criticisms are ad hoc, wide-ranging (e.g. including ad hominems), and inconsistently applied. The critiques that are most challenging to the stated beliefs are the ones that these commenters ignore, or address only on a superficial level. To low-X commenters, science-blogging isn’t a search for truth. Instead, they seem to see an analogy between the process of engagement and the adversarial legal model, with a “defense” battling a “prosecution” to obtain acquittal or conviction in the court of public opinion.
I suspect that the gulf between high-X and low-X is as great as that between Mainstream-accepting and Mainstream-questioning. Communication is helpful, but it becomes progressively more difficult beyond a certain point.
bugs:
I don’t see McIntyre engaged in public pillory of Mann, in fact the opposite may be true—both of Mann and of you. People who dare to question the consensus get viewed as The Enemy. You have to come up with a reason, other than a logical one, for why McIntyre would be scrutinizing Mann. It must be all about emotions and prejudices. (After all, that’s how you decide what’s true here, why shouldn’t everybody else?)
I would certainly hope not.
What you don’t get (and a lot of climate scientists don’t either, apparently), is that complete external scrutiny is a requirement for validation of methodologies and conclusions in any field. A lot of fields don’t get this, because nobody cares (e.g., literary criticism). What you see as intensive scrutiny, I see as “just about the right level given the stated importance of the work.”
Even then, it’s a paucity compared to what happens in medical science. Even in terms of PR, you have fossil fuel companies to generate whacked-out conspiracy theories about; they have big pharma companies to vilify, even though those companies are responsible for about 90% of the financial contributions to the development of new drugs.
Can you imagine how you’d be if climate science were primarily funded by fossil fuel companies rather than governments?
The other big point here is—upon external scrutiny, it has become clear that climate science still suffers from a transparency problem. Abuse of FOIA, hiding of data and actual methods used, distortion of conclusions to fit in with policy objectives—these are just a few of the problems they have. No excuse you can generate justifies these behaviors, especially CAGW talking points about being pilloried and being inundated by requests from the public.
Boris wondered what the point of MBH98 is… one answer is seeing what SOP looked like before any real scrutiny occurred. Tracing the history of Mann’s responses to his various critics, and the eventual evolution to releasing all his data and (only when the Barton committee compelled it), teaches you which is the chicken and which is the egg. The public criticism followed the discovery of methodological errors, and the criticism intensified when Mann “pushed back” against the criticism with half-truths.
MBH98 may not be very useful as a research paper, but as a history-of-science object, it is extremely informative—we’re back to the meta issues that some of us discuss, and others strenuously object to.
Lest you think I’m solely attacking climate science here, e.g., nutritional science has many of the same problems, especially on policy driven conclusions. And climate scientists generally look like rocket scientists compared to the quality of much of the nutritional science that’s been done. And in terms of immediate quality of life, shoddy nutritional science has had an immense deleterious impact on human life.
AMac, here’s the other study I was referring to. This has six authors and holds promise for advancing the state of art.
As I said above, what was sorely missing when statistics amateurs like Briffa, Jones and Mann were running things was any formal methodology applied to the model reconstruction. This brings some hope the field may eventually mature to the level where everything isn’t quite so seat-of-the-pants.
Paulo_K, AMac, Carrick,
Good comments all.
Bugs,
Ok. Let’s talk specifics: do you agree that Mann et al (98) and Steig et al 2009 (calculated Antarctic warming distribution, refuted by O’Donnell et al 2010) both suffer from serious errors of methodology which make their conclusions largely incorrect?
.
Public pillory? Nah, honest technical evaluation. The anger/frustration which leads to hostile comments (and there are many of these) comes not from the papers themselves, but from certain people (especially certain authors) refusing to ever acknowledge obvious errors and consistently refusing to release adequate information to allow duplication.
SteveF, if you want to see how far the public pillory of critics of status quo can go, read this measured commentary by John Christy, then this idiotic response by Skeptical Science.
SkS money quote: “By his testimony, Christy did no service to his country.”
LOL. Morons!
Carrick,
Some of SkS’s arguments have merit (an ENSO index like Nino3.4 is a better way to account for the impact of ENSO on the temperature history), some… not so much… the discrepancy between the modeled ratio of surface to mid-tropospheric warming and the measured ratio sure seems both real and most consistent with the models not having it quite right. But the comical thing with the SkS post is that the models really do project 0.27C/decade warming (on average), while the data, even accepting Tamino’s ‘corrections’ at face value, show only 0.17C/decade. Only at SkS would such a glaring discrepancy be ignored. Could Tamino’s corrections be tilted in any way? Not sure, but could be. And don’t forget: an adjustable dose of aerosol effects fixes all discrepancies.
SteveF, you’ll note they largely skirted any of the comments that Christy made with respect to extreme weather.
For example, the Australian flooding and the London flooding, and the issues with the definition of what it means to be “extreme”. (Definitionally, there is always extreme weather each year, so how do you generate a metric of “how extreme” weather is, so you can establish whether a measurable trend is present or not?)
I thought this criticism was pretty salient by Christy:
It’s always harder to measure the tail of the distribution than the centroid. Any experimentalist worth his salt knows this. Even if you measure it, the unquantifiable aspects of error uncertainty often bite you in the b*tt (which is why CERN “discovers” so many non-existent particles). The basic issue is the central limit theorem, which states that the mean of a series of values from any stationary process will converge to a normal distribution (making knowledge of the underlying distribution irrelevant).
The trouble is the convergence isn’t uniform—it occurs more rapidly at the center of the distribution than the tails, and further out on the tails you go, the closer you get to your original distribution, for any finite number of measurements.
So when you try to characterize extreme events, the CLT does not protect you, and in fact it becomes critical to know the underlying probability distribution associated with a single measurement in order to model the confidence limits associated with your measurement.
Shorter version: Trying to use extreme weather as evidence of climate change is a very s*ucky approach. Or “what Christy said”.
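A minimal simulation sketch of that non-uniform convergence, assuming an illustrative lognormal parent distribution (the distribution, sample sizes and seed below are arbitrary choices, not anything taken from the testimony):

```python
# Sketch: means of skewed (lognormal) draws look Gaussian near the center of
# their distribution long before they look Gaussian out in the tails.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_per_mean, n_means = 30, 200_000
means = rng.lognormal(mean=0.0, sigma=1.0,
                      size=(n_means, n_per_mean)).mean(axis=1)
z = (means - means.mean()) / means.std()

# Compare empirical quantiles of the standardized means with the Gaussian
# prediction at roughly 0, +1, +2 and +3 sigma.
for q in (0.5000, 0.8413, 0.9772, 0.9987):
    print(f"q={q:.4f}   empirical z={np.quantile(z, q):+5.2f}   "
          f"Gaussian z={norm.ppf(q):+5.2f}")
```

The two columns agree near the median and drift apart toward +3 sigma, which is the point about the tails converging last.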
Carrick,
Subtle evaluation is not SKS’s forte; hell, it doesn’t seem to exist in their universe. Critical thought, if that might conflict with advancing their agenda, seems verboten. A perfect example of some fairly smart people wasting their own and other people’s time. They would be better off leaving the advocacy to the crew at RealClimate and applying honest skepticism to their analyses. Gonna happen’? Not on your life!
Carrick,
I would argue that the dispersion of the centroid, extrapolated to the extremes, is the best way to judge the “rarity” of extreme events…. accounting for the vast number of locations where an extreme event might occur. I rather suspect that ‘extreme’ events will turn out to be not so rare after all.
.
Back in the 1960’s, I remember reading about the probability of catastrophic flooding in New Orleans in the event of a strong hurricane striking with onshore winds to the east of the city (raising the level of Lake Pontchartrain). Conclusion: terrible flooding highly likely unless the levee height is substantially increased. New Orleans then becomes the poster child for extreme events due to global warming when the projected event happens. Yikes! And people wonder why the public is put off by claims of extreme weather being caused by global warming.
SteveF…#88462 goes into the public policy sphere. In the 1950s, after a catastrophic flood, the Dutch looked at the consequences of various extreme weather events in the future and began to construct the Delta Works, based on probabilistic modelling and the consent of the public…
http://en.wikipedia.org/wiki/Delta_Works
It seems that New Orleans, Louisiana, and the federal government did not follow such a pathway. In terms of public policy, it is interesting to consider why the two communities took such different approaches. Probably a key thing was that, I imagine, there had not been any catastrophic event in New Orleans – Mississippi flooding tended to affect cities further upstream, and there had been no disastrous hurricane.
diogenes,
I think it was a lack of funding brought about by political expediency (AKA politics, as usually practiced). I mean, it wasn’t like global warming in the sense that people thought the risk of flooding was overstated (I remember talking with people in Louisiana about the danger in the late 1990’s; lots of people understood)…. it was that politicians don’t get a lot of votes out of spending money on important public safety infrastructure like levees… unless the voters have already suffered from flooding. The Dutch knew that the sea would and could reclaim much of their country if the seawalls were breached, so there was probably a lot more urgency than in New Orleans.
@Carrick McIntyre not hung up on Mann?
http://lmgtfy.com/?q=site%3Aclimateaudit.org+mannian
bugs:
I believe you said “pilloried” not whether somebody was paying attention to Mann.
Are you getting low on oxygen over there?
Re: bugs #88465,
Amusing link, thanks for that (lmgtfy = Let Me Google That For You).
I think bugs has demonstrated that Prof. Mann’s papers attract a lot of attention from blogger Steve McIntyre.
bugs, here’s my suggestion for the next step.
Pick one of the ten McIntyre posts linked on the lmgtfy.com first page. Then explain why McIntyre’s technical arguments are in error, or irrelevant to the scientific points the paper in question is making. If you wish, we can stipulate that McIntyre’s tone is unpleasant; that might save you from spending time on an unimportant detour.
(Not that I can prove it, but this isn’t a post hoc proposal — I haven’t looked at the search results to see whether these particular ClimateAudit posts are among that blog’s stronger or weaker offerings.)
Here they are:
climateaudit.org/2008/12/02/emulating-mannian-cps/ Dec 2, 2008
climateaudit.org/2011/01/07/uc-on-mannian-smoothing/ Jan 7, 2011
climateaudit.org/2009/05/22/steig-in-antarctic-a-mannian-algorithm/ May 22, 2009
climateaudit.org/2008/11/25/replication-problems-mannian-verification-stats/ Nov 25, 2008
climateaudit.org/2009/05/09/mannian-collation-by-santer/ May 9, 2009
climateaudit.org/2008/11/01/mannian-cps/ Nov 1, 2008
climateaudit.org/2009/08/07/bs09-and-mannian-smoothing/ Aug 7, 2009
climateaudit.org/2008/11/07/mannian-cps-stupid-pet-tricks/ Nov 7, 2008
climateaudit.org/2008/03/10/mannian-pca-revisited-1/ Mar 10, 2008
climateaudit.org/2006/06/28/bloomfield-and-the-mannian-average/ Jun 28, 2006
bugs, if this is an reasonable suggestion, can you explain why?
Carrick, again you should be cautious. You say:
First, you shouldn’t just refer to MBH98. The followup paper, MBH99, is very important for these discussions (normally they’re referred to collectively as MBH). It extended Mann’s reconstruction farther back (from 1,400 to 1,000), and it is a serious source of problems.
Second, and perhaps more importantly, Mann never released all of his data and code (I assume that was the missing word in your sentence). For example, to this day, nobody knows how Mann calculated the confidence intervals for MBH99. He never released that.
(Mind you, you weren’t actually wrong about that since you did limit your comment to MBH98.)
Brandon, thanks. I had just been focusing on MBH98 as that’s the one that has the most documentation. It gets very frustrating trying to discuss papers with big holes in them. MBH99 qualifies for that, whether it was included in the 2001 IPCC report or not.
AMac,
UNreasonable suggestion… ?
Carrick:
I understand. I only brought it up because you rarely see MBH98 used on its own. It only goes back to 1,400, so it’s mostly meaningless on its own.
In theory, MBH99 shouldn’t have introduced many new issues. It was supposed to just be an extension of MBH98’s methods to more data. Of course, it came from Mann, so…
j ferguson (Comment #88473)
Yes, typo:
“reasonable” should have read “UNreasonable” suggestion. Typo or not, I doubt bugs will respond, as s/he hasn’t upthread (e.g. #s 88202, 88218, 88264, 88267, 88341). This is a form of “low-X” discourse (#88451).
[Edit: respond meaningfully]
Brandon, it’s my understanding that, like MBH98, MBH99 fails verification when the verification is done properly. So its results are basically meaningless. Add that to the paucity of detail as to how MBH calculated their CIs, and you clearly have a useless paper.
Never mind that; it keeps getting used in spite of it. Useless in a technical sense doesn’t mean it doesn’t have value as a political instrument for those dedicated to the cause and those gullible enough to swallow it.
Skeptical Science does a pretty good job pointing out the flaws in Christy’s testimony, particularly the “fingerprint” issue that we discussed at The Blackboard a couple years ago and the ENSO and sensitivity stuff.
I agree that using extreme weather events as a metric is wrong, but that doesn’t mean we shouldn’t look for trends in extreme events to get an idea of what might happen in the future.
Pretty lame, but not the lamest statement I’ve seen on a site that claims to be about the science.
AMac (Comment #88482),
It seems pretty clear that bugs will never admit any specific paper or finding in climate science is incorrect, nor admit that specific climate scientists have ever committed errors (whether scientific or otherwise). It is the nature of being a political advocate.
Boris,
The key issue in the SkS review of Christy’s testimony is not that they point out where his statement was technically weak, it is that they ignore where his statement is technically strong. In other words, they aim to discredit him rather than honestly evaluate his statement. They are acting as advocates, nothing more. (“By his testimony, Christy did no service to his country.”) Do you think that statement belongs at the end of a technical analysis? If so, why? Here is what they say about their blog:
Well, much of what Christy said does indeed have a scientific basis, and is supported by the literature (No significant trends in hurricanes, strong tornadoes, flooding, etc.), yet there is not one single word acknowledging that anything Christy said is correct. The other bizarre issue is that SKS chooses to critique Christy’s testimony because they see him (a long practicing climate scientist!) as a ‘denier’, only because he disagrees with projections of extreme future warming and doomsday consequences.
.
They are a bunch of political hacks, nothing more. I find their technical analysis so consistently tilted by their political goals as to be comparable in value to chicken dung.
SteveF:
Since the main topic was on extreme weather, the whole question of how robust the warming is, is a bit of a side show, and I think they conflated things Christy said in his presentation with things he said in other contexts to boot (I happen to generally agree with their analysis, and I don’t typically agree with some of the claims made by Christy in that context, though they miss the real punchline—which is that the long warming trend shown by the data is not really consistent with the model predictions, even after the data’s been taken to a Cuban prison by Tamino and tortured till it confessed).
If they wanted to talk about extreme weather, which was the point of Christy’s commentary, they should have stuck with that. And on that, I think Christy’s testimony was pretty solid.
I think I understood you to say above that one should concentrate on central values of tendencies (like changes in temperature and precipitation patterns); that is something I would agree with. People often get lost in what “uniform convergence” means and think the central limit theorem protects them in places where it really doesn’t.
[This is the basis of why there are so many “3+ sigma” results that turned out to be spurious later. You really need wompingly huge numbers before you can trust the tail to +3 sigma to look Gaussian, unless you’re starting with normal data to begin with.]
Carrick (Comment #88488),
I meant that people seem to try to divine the tail distribution when they perceive a point or two that seem unusual, rather than rely on the statistics of a large representative population. The problem often seems to be that people don’t get a representative population before concluding that a single data point represents an extreme event. I mean, if a rainfall event is going to be judged “extreme”, you need a lot of reliable historical data for the region for the seasonal period in which the event happens, and you need to define very well the areal variation of rainfall events, to know what constitutes ‘extreme’. If a 20 cm downpour from a large thundercloud is a common but very localized event, then the day it happens over a weather station it may be declared “extreme” even though it is fairly common, but seldom recorded.
I guess you have a point, but the same is true for just about every skeptic site out there: focus on possible errors and never point out what’s right. WUWT is much worse in this regard than SKS.
Skeptical Science used to be a lot better in terms of their tone. I guess some of the people changed or something.
SteveF, here’s a good money quote from Wikipedia:
To give an idea: if you want to measure the mean of a distribution to a certain resolution given your S/N, let’s say you need 100 values. If you want to establish the 95% confidence limit, you’ll need as many as 20x more points than that, roughly 2000. Circa 2000 is the typical population size in medical science when they are doing 95% CL studies, chosen for this reason.
The whole point of this is that while extreme weather is more noticeable than “typical” weather, the long term variation in “typical” weather is much more informative for climate. If you need 30 years to measure a climate trend, likely you’ll need several hundred years of measurements of extreme weather events to establish a trend in them.
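A rough way to see the flavor of that 100-versus-2000 argument is to compare the sampling scatter of the mean with the scatter of the 95th percentile at different sample sizes. The lognormal parent below is purely illustrative; the exact factor depends on the underlying distribution.

```python
# Sketch: at a given sample size, the 95th percentile of a skewed distribution
# is estimated much less precisely than the mean, so pinning down the tail
# takes far more data.
import numpy as np

rng = np.random.default_rng(1)
for n in (100, 500, 2000):
    draws = rng.lognormal(mean=0.0, sigma=1.0, size=(5000, n))
    scatter_mean = draws.mean(axis=1).std()
    scatter_q95 = np.quantile(draws, 0.95, axis=1).std()
    print(f"n={n:5d}   scatter of mean = {scatter_mean:.3f}   "
          f"scatter of 95th percentile = {scatter_q95:.3f}")
```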
Boris (Comment #88490),
Clearly there are lots of posts (probably a majority) at WUWT which are essentially nonsense from a purely technical POV. There are some posts that are much better. But yes, the tone at WUWT (unfortunately) is pretty uniformly hostile toward climate science in general; SKS has in the last couple of years (also unfortunately, IMO) moved in the direction of outright hostility towards anyone who does not accept the consensus view 100%.
I have raised the issue of max and min temperatures before. The number of record maximums is double the number of record minimums.
bugs (Comment #88493),
Which means, of course, that with rising average temperatures there will ALWAYS be more record maximums than record minimums. This is only relevant for those who claim there is no warming; when you are talking to people who expressly say there is warming, this is a straw man argument.
.
Too bad you don’t address AMac’s questions listed at #88482, and my question about two specific doubtful papers (Mann and Steig). I was actually holding out some hope for you… but alas, my hopes seem to be dashed again.
An additional issue wrt record max/min temps is that, unlike anomalies, there is no magic formula applied to adjust the data to account (purportedly) for siting issues. Thus, UHI and other issues simply get included in the measured temperature, which, naturally enough, gives a bias toward more record highs and fewer record lows.
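The record-count point is easy to illustrate with synthetic station data: add a modest trend to white noise and running record highs immediately outnumber record lows, with no change in variability at all. The trend size, noise level and station count below are arbitrary illustrations.

```python
# Sketch: a small upward trend in the mean, with constant variance, is enough
# to make running record highs outnumber running record lows.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2012)
n_stations = 1000
trend = 0.007 * (years - years[0])       # ~0.7 C per century, illustrative
temps = trend + rng.normal(scale=1.0, size=(n_stations, years.size))

# A year is a "record" if it equals the running max (or min) up to that year.
highs = (temps == np.maximum.accumulate(temps, axis=1)).sum()
lows = (temps == np.minimum.accumulate(temps, axis=1)).sum()
print(f"record highs: {highs}   record lows: {lows}   ratio: {highs / lows:.2f}")
```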
Re: SteveF (Comment #88492)
There’s no “probably” about it, IMO.
And yet, I still look at it more or less regularly, since every now and then there is a post like this one:
http://wattsupwiththat.com/2012/01/10/the-climate-science-peer-pressure-cooker/#more-54575
drawing attention to a just-published paper in GRL (Gillett, N.P., et al., 2012. Improved constraints on 21st-century warming derived using 160 years of temperature observations. Geophysical Research Letters, 39, L01704, doi:10.1029/2011GL050226.) which apparently concludes that the Transient Climate Response is a “relatively low and tightly-constrained” 1.3–1.8°C.
The backstory is also interesting (or infuriating, depending on your POV).
It seems that now that Mann and his thugs do not completely control the narrative, the lukewarmer estimates (TCR in the 1.3-1.8 C range, equilibrium climate sensitivity almost certainly less than 3 C) are gaining traction in the peer-reviewed literature. I find this hopeful, in more ways than one.
julio:
No doubt about it.
For all that I cut SkS no slack, most of their posts are technically literate, even if written from such a narrow point of view as to undermine their own credibility.
(My chief lament with SkS is they are blowing a golden opportunity by focussing exclusively on “debunking skeptics”.)
bugs:
To echo SteveF a bit, this is just evidence that the Earth is warming, and not at a particularly alarming rate compared to natural variability (otherwise you’d have more than twice the number of record maximums compared to minimums).
If you want to make the claim that the Earth’s climate is becoming more variable, what you really want to look for is heteroskedasticity (you should try that as a pickup line some time); that would be objective and quantifiable evidence for an increase in the variability of climate, and it doesn’t suffer from this problem of degradation of the “normality” of the distribution at the tails.
At the moment, any such signal is swamped by the loss of stations prior to 1950, so … not much information there at least in the global temperature reconstruction. And you have to be very careful in looking at individual stations because the seasonal effect swamps any potential signal from climate-induced increase in variance.
Not saying it isn’t doable, but it would be a project that needs some careful thinking to get the methodology right.
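A crude sketch of that kind of check: remove the trend in the mean, then ask whether the squared residuals themselves trend with time (Breusch-Pagan in spirit, not a full test). The synthetic series below is just a placeholder; as noted above, a real analysis would have to handle seasonality, station moves and autocorrelation.

```python
# Sketch: a first-pass check for a time trend in variance (heteroskedasticity)
# after removing the trend in the mean. Not a substitute for a careful test.
import numpy as np
from scipy import stats

def variance_trend_check(years, temps):
    slope, intercept, *_ = stats.linregress(years, temps)   # detrend the mean
    resid = temps - (slope * years + intercept)
    return stats.linregress(years, resid ** 2)              # trend in variance?

# Placeholder synthetic station series with constant variance:
rng = np.random.default_rng(3)
years = np.arange(1900, 2012, dtype=float)
temps = 0.007 * (years - 1900) + rng.normal(scale=1.0, size=years.size)

res = variance_trend_check(years, temps)
print(f"variance trend slope = {res.slope:+.4f}, p-value = {res.pvalue:.2f}")
```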
SteveF (Comment #88495)
I was addressing the point that we can’t tell anything from extremes.
As for Tiljander, I am not qualified to say. From what I have read, there are arguments both ways. Which is hardly abnormal in science. There are science wars over every issue. http://en.wikipedia.org/wiki/Archaeopteryx
Another example: I was reading that you can never use smoothed data, under no circumstances, no threat, blah blah blah. Then BEST uses smoothed data, and says that’s OK, we know what we are doing.
omg…is bugs resurrecting Hoyle’s weird attempts, very late in life, to claim that archaeopteryx was a fake…cement on the fossil plates, etc?
apologies but I could not be arsed to retrace bugs’s trains of thought…but the mention of archaeopteryx brought it all back
julio,
I’m a bit too old, I think, to harbor much hope for the future of climate science. I see too many relatively young climate science researchers who are recruited straight from the ranks of the enviro-loonies; no hope for them. But I am pretty damned sure the true climate sensitivity lies at or below the low end of the IPCC range. You may be young enough to see how it plays out by 2040; I am not.
.
I hold out hope only that the most nutty of projections (like extreme sea level rise!) will be discarded before I am gone.
Oh, c’mon, Steve, 2040 is just around the corner! I certainly plan to be around, and I don’t know why you wouldn’t.
Here’s my inspiration:
http://en.wikipedia.org/wiki/Carl_Barks
julio,
An inspiring example, yes, but you have to choose your parents very well, and be more than a bit lucky. The dour empiricist in me notes the age at death of my immediate relatives… and that is a sobering reality, which says clearly I will almost certainly not be around in 2040, unless in an urn on some mantle.
Bugs,
“I am not qualified to say.”
Then why are you posting on a technical blog?
.
Spending your time saying endlessly that “you folks are mean to climate scientists” serves no useful purpose that I can see. Some are no doubt wonderful human beings, some utter a**holes; none of which matters. What does matter is the quality and rigor of their analyses.
.
Let me make a gentle suggestion: offer your best evaluations, back them up with arguments, and be willing to accept reality if you turn out to be mistaken. You will see (if you look) that those who offer technical arguments here almost always follow that path…. AKA learning.
bugs (Comment #88503)
> As for Tiljander, I am not qualified to say.
OK… Then what SteveF said. I’ll let the invitation for a guest post lapse.
> From what I have read, there are arguments both ways.
There are notions about how green cheese contributes to the composition of the Moon, but they are unserious. To my knowledge, no substantial arguments exist that justify Mann08’s use of the
two uncalibratable and upside-down Tiljander03 data series. Please link the posts you have in mind. Unless they are already listed here — in that case, don’t bother. Because the defenses of Tiljander03-in-Mann08 that were offered up in those posts and threads have already been discussed, and been found wanting.
SteveF (Comment #88508)
There is a fair dollop of political content here, not to mention a pointless game about guessing temperatures. And most of the participants are not actually technically qualified to understand the more complex aspects of the climate science (I make no statement about their capacity to learn it).
Bugs,
Isn’t this one of the greatest challenges we face in an age in which the questions can be highly technical – how to divine whether, or which of, the disputants actually understands the nuances of what he/she is contending?
I take it that you don’t – no crime there, I don’t either.
I’ve come to think that my comprehending 80% of a post is insufficient to conclude anything since there may be dragons in the other 20%.
Piling up understandings based on 80% comprehension of each of a series of arguments may be a lot like .8 * .8 * .8 * etc. A very frustrating possibility.
bugs:
Ignorance in its purest form.
If you admit not knowing it yourself, how the **** do you know this?
bugs,
“And most of the participants are not actually technically qualified to understand the more complex aspects of the climate science”
.
What the?!? A bunch of practicing scientists and engineers aren’t up to understanding it? Unadulterated rubbish.
I give up on you bugs. Bye.
Bugs, re: “a pointless game about guessing temperatures”
How dare you sir! Apologise or forever be gone!
… as if there was ever a point to any game besides having fun. 🙄
(And building a sense of community, another alien concept to CAGW obsessed losers.)
Games can prevent climate change too!
http://www.psfk.com/2012/01/al-gore-climate-change-games-video.html
SteveF’s query to bugs in #88508 was “[bugs, if you write, ‘I am not qualified to say,’] Then why are you posting on a technical blog?”
bugs’ response in Comment #88510:
bugs, I take it that you are not embarrassed by what you wrote. When it comes to Kinsley gaffes (accidentally telling the truth), anonymity is definitely your friend.
As far as your weaselly claims about the lack of technical qualifications of “most participants,” I claim that I am fully qualified to have made every technical remark that I have made at the Blackboard.
Notes:
(1) I can’t argue from authority as I, too, contribute pseudonymously. For all you know, I flunked out of junior high. But I’ve posted the work behind every original comment, and addressed criticisms as directly as I could.
(2) Only a silly person would claim to be right all the time. That’s different from being scientifically-literate and qualified.
Other “regulars” can make as-strong and stronger cases.
You should put up (by linking to the comments that you have in mind that support your assertion) or retract. Your default style — tossing off a charge and then moving on to a new grievance — is evidence of laziness. You’d do better to focus on improving your skills; that might or might not alter some of your opinions about CAGW.
By the way, that last paragraph is an opinion, not a technical remark.
“What the?!? A bunch of practicing scientists and engineers aren’t up to understanding it? Unadulterated rubbish.
I give up on you bugs. Bye.”
I explicitly said you may well be up to understanding it. What I said was you aren’t specialists trained in it.
Re: bugs Comment #88536,
> I explicitly said you may well be up to understanding it.
Statement of the obvious? Damning with faint praise?
> What I said was you aren’t specialists trained in it.
Search the thread for the string “specialists”.
Your comments verge on silly.
bugs:
There is no “specialization” called climate scientist.
Anyway, if you are trained in physical sciences or engineering, your knowledge is portable to many different disciplines.
Re: bugs (Comment #88536)
Even so, how would you know this?
Bugs,
I had always thought that dogs, pigeons, but not cats were “trained.” Everyone else is “helped.” The absurdity of imagining scientists “trained” always seemed naive, or maybe insulting.
Perhaps scientists can be trained…
Thanks to AMac and K. Fritsch for pointing out my effort on testing for the existence of the Hockey Stick in millennial-scale records. I’ve done precisely what Fritsch says: “hey guys, I’m an outsider in climate matters, but since many scientists and climatologists use statistics, why not enter the arena?”. My conclusions were unexpected even by myself when I started the analysis, as the instrumental variable used for updating past series is by no means stationary; in fact, it is a Hockey Stick itself. Yet most of my results do not support that hypothesis; red noise has a vindicating side after all! In other words: data mining does not pay off. Regards to everybody on this board. P.S. The paper has not yet been submitted to a peer-reviewed journal, due to minor changes made in the last few days.
Guido:
.
I guess I also need a bit of explanation for your work.
.
At first sight, it seems you have taken 10 reconstructions, most of which do not show a hockey stick, extended them by a complicated regression with the recent instrumental record, and verified that most of them still aren’t hockey sticks after that extension.
.
I’m probably wrong.
julio:
Looks like SkS has caught Michaels deleting data he doesn’t like from a graph. Again.
Wake me when WUWT posts a correction of the misleading graph.
Does Chip Knappenberger still read The Blackboard? I’d love to see someone defend Michaels here. Blow my mind.
Re: Boris Comment #88665,
Thanks for the link. Those look like serious instances of misrepresentation on Michaels’ part. Too bad that SkS is an unreliable source (IMO), but “unreliable” doesn’t mean “wrong”. Presumably Michaels will defend his conduct at some other venue, or retract.
Please link additional episodes of this story, if and as they arise.
Julio’s link to WUWT seems to have rotted. I believe it went to the 1/10/12 post The climate science peer pressure cooker, which is a reprint of the Michaels article “Will Replicated Global Warming Science Make Mann Go Ape?” The WUWT post includes as a figure, one of the three instances that SkS discusses as Michaels’ misrepresentation of others’ work.
You mean Michaels (who most don’t pay any attention to anyway) was caught cherry picking, by the king of cherry pickers, SkS. Boy, that’s a real controversy.
Please link me to the post where SkS deletes data from a skeptic’s graph or anything similar.
RE: #88665
Boris, do you mean to tell us that someone has ignored, deleted, pasted over, or inverted climate science data that didn’t agree with their hypothesis or conclusions??? I am shocked!! How very unscientific. I wonder if anyone has ever done this before in climate science? AMac? Anyone?
/sarc off
Yes Boris, even some misguided skeptics cross over the line once in a while. It is a poor reflection on the state of climate science, don’t you think? I wonder if Michaels was able to squeeze this garbage through peer review like so many IPCC authors before him.
To toto (Comment #88660):
You’re absolutely correct. The ten NOAA millennium-scale series needed some updating, and this was performed by means of an instrumental variable, the HADCRUT series. Only three of the updated series manifested global maxima in the most recent years; the others didn’t.
Boris:
Please point me to the university where you learned your straw-man arguing. I said something about cherry picking and SkS, which is true and verifiable (their use of GISTEMP to compare against the model ensemble).
Anyway, Mann did exactly this. In Mann07 he deleted the portion of the MXD data that he didn’t like, then replaced it with artificial (what he jokingly calls “infilled”) data using RegEM.
Or does peer reviewed disqualify it from consideration?
Re: Guido Travaglini #88683,
Thanks for coming by.
I have a basic question… perhaps too much so, such that the best advice you can give me is to hit the textbooks. It relates to a point that Carrick and I have discussed with respect to paleo recons.
What does it mean to “update” a candidate proxy data series? In other words, suppose one has access to a proxy that runs 1200 to 1980. Treering MXD, lake sediments, ice d-O18, whatever.
It seems to me that “what you have is what you have.” I understand that it would be very helpful to extend the data series from 1980 to the present (2010, say). It would appreciably expand the calibration period as far as one of the instrumental temperature records (CRUTEM3v, BEST, etc.).
Carrick, who knows something about real-world applications of such issues, isn’t troubled by this (sorry, I can’t fish out the link to the prior discussion right now). Intuition is a limited guide, but intuitively I’m troubled by the notion of such extrapolation. Which, as best I can tell, is what you did with the data series under consideration in your MS.
My exposure to this was via Mann08’s use of RegEM to extend certain of their proxies from the stop-date to the end of the calibration-validation period. In the case of the Tiljander03 varves, this was 1986 through 1995. In the event, the Mann08 implementation of proxy extension was clearly erroneous, in that Mann et al lacked a physical understanding of the numbers that they were manipulating, thus leading themselves astray.
But this is a different, more general question. Is it justifiable at all to try to improve calibration to candidate proxies by “synthetically” extending them past the point where data collection or archiving stopped?
“Anyway, Mann did exactly this. In Mann07 he deleted the portion of the MXD data that he didn’t like, then replaced it with artificial (what he jokingly call “infilledâ€) data using RegEM.”
Carrick, Mann(99) removed and added back other data because a proxy showed too much growth later in time and Mann(08) removed data and added back other data because it showed too little growth. I have not looked at Mann(07) recently, but did that paper remove and add back data?
By the way, we had a discussion on another thread with Nick Stokes where Stokes implied that perhaps in Mann(08) the added-back portion of the proxy, Schweingruber’s MXD series, never made it into the final reconstruction. I thought that I had shown good evidence that indeed it was used in the reconstruction and in the selection process for r ≥ 0.13. Nick just walked away from that thread and never answered. I was surprised by this behavior from someone who is touted to be posting to keep skeptics on the straight and narrow.
Re: Boris (Comment #88665)
Thanks for the link. I’m not going to comment on the other instances of Michaels’ alleged misconduct (I really don’t have time to go over all that), but only on the specific instance I brought up.
Boris, what Michaels did in this case is called “spin,” and what SkS is doing is also spin. So let us look at who’s spinning faster.
The title of the paper Michaels was discussing, as anyone could see from the bit quoted at WUWT, was “Improved constraints on 21st-century warming derived using 160 years of temperature observations” (emphasis mine).
In the abstract, the authors state: “Our estimate of greenhouse-gas-attributable warming is lower than that derived using only 1900–1999 observations. Our analysis also leads to a relatively low and tightly-constrained estimate of Transient Climate Response of 1.3–1.8°C, and relatively low projections of 21st-century warming under the Representative Concentration Pathways.”
So, the authors clearly believe that it is better to use 160 years of temperature observations, that this leads to an improved result (that is, a better result than just using “only” 1900–1999 observations), and that this “improved” result is clearly the “take home” message of the paper.
So what does Michaels do? He puts up a figure showing (1) the unconstrained model (yes, it is there), and (2) the constrained, improved (in the authors’ opinion) projection. He does delete the result “derived using only 1900–1999 observations.”
On the face of it, is this unreasonable or deceptive? Not at all. The figure shows what the authors believe is the main result of their paper, namely, the improved projection. It shows that using 160 years of observations strongly constrains the model, which is again the point of the authors. It does not show that using only 1900–1999 observations does not constrain the model very much, a point of some technical interest, but irrelevant, in the final analysis, if you accept the authors’ premise that it is better to use more data than less.
In short, Michaels’ figure does not misrepresent the paper’s results or conclusions. It leaves out part of what could be called the technical details, which is perfectly acceptable when writing a popular summary.
So how do SkS “spin” their story? “Patrick Michaels: Serial Deleter of Inconvenient Data”
I don’t know about you, but I know whose credibility comes out in worse shape from this incident, and it is not Michaels’.
“What does it mean to “update†a candidate proxy data series? In other words, suppose one has access to a proxy that runs 1200 to 1980. Treering MXD, lake sediments, ice d-O18, whatever.”
AMac, the only real update of a proxy is to go to the same source of the original data and take measurements and use the same calibration to determine how well the missing years compare with the instrumental temperature record. If one has a substantial time period for the update some questions concerning the selection of the original proxy with in-sample data can be answered, since we would now have something closer to out-of-sample data.
Adding the instrumental record onto the end of the reconstructions is just plain wrong and misleading (I do not need a text book to tell me that). To make the attachment at the end one has to assume that the instrumental and proxy thermometers are equally valid and that just is not the case in anyone’s estimation.
Using other schemes to do this, like Guido did, is to merely extend the instrumental record depending on the proxy response, I think, and never answers the questions/assumptions about the validity of the proxy as a thermometer, or the selection process used, in the first place. I would suppose there are all kinds of schemes to extend a time series, but one has to judge those methods based on what is assumed.
If you play with time series generated by ARIMA models, and particularly those with long-term persistence (LTP), you will immediately appreciate how easy it is to obtain a series that has a hockey-stick blade facing upwards or downwards towards the end. Now if your selection process was made, not a priori with reasonable selection criteria, but rather a posteriori, it is easy to see how the downward hockey blades (and flat ones) are eliminated.
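For anyone who wants to see how easy it is, here is a minimal sketch of the effect Kenneth describes (my own toy illustration, not anyone's actual screening code). It uses a high-persistence AR(1) process as a crude stand-in for LTP, and an arbitrary r > 0.1 screen against a rising "instrumental" ramp:

import numpy as np

rng = np.random.default_rng(0)

def persistent_series(n=1000, phi=0.95):
    # simple AR(1) stand-in for a long-term-persistence "proxy"
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n_series, blade_len = 1000, 100
ramp = np.linspace(0.0, 1.0, blade_len)   # rising "instrumental" period

up = down = kept = 0
for _ in range(n_series):
    blade = persistent_series()[-blade_len:]
    slope = np.polyfit(np.arange(blade_len), blade, 1)[0]
    up += slope > 0
    down += slope <= 0
    # a posteriori screen: keep only series that correlate with the rising ramp
    if np.corrcoef(blade, ramp)[0, 1] > 0.1:
        kept += 1

print(f"upward blades: {up}, downward or flat blades: {down}")
print(f"series surviving the r > 0.1 screen: {kept}")

Roughly half the simulated blades point down or are flat, and the after-the-fact screen keeps only upward-pointing ones, which is the selection effect described above.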
.
You don’t mean Mann 2008, right?
Re: Kenneth Fritsch (Comment #88691),
Nick Stokes strikes me as an intelligent, thoughtful, learned person, with a very well-balanced personality. As much as an online persona can effect such impressions, anyway.
As far as global warming, though, I recall that Nick noted in a recent thread (sorry, no link) that he sees the issue as being one where “the prosecution” has to make a “beyond a reasonable doubt” type of case against the prevailing wisdom on CAGW, while “the defense” should look to attain a “not guilty” verdict. He seems to clearly self-identify as a member of the defense team.
IMO, this courtroom model doesn’t map particularly well onto technical discussions of scientific issues — say, of the “journal club” variety back in grad school. Sometimes vigorous legal-style “defense” borders on spouting nonsense, from the perspective of somebody interested in understanding the science. I criticized Nick for this last summer in a ClimateAudit thread on Tiljander03-in-Mann08, because some of the points he was making were plainly factually incorrect, while others were spurious. Link.
That said, Nick makes some of the best technical arguments in climate blogland. Just not on a consistent basis.
Guido: thank you for your response!
.
.
But wait. We already know that these series do not show the recent warming. The reason, for at least some of them, is that they diverge from the instrumental record over the recent decades.
.
Since we already know that they don’t reflect recent warming, what do we gain by extending them (based on their current correlation, or lack thereof, with instrumental temperatures) and comparing the extension with past temperatures?
.
I wish you good luck for your journal submission, but if I understood what you are trying to do (probably not!), then I’m a bit pessimistic.
Oh, so your comment wasn’t an attempt at false equivalency? If you say so.
AH, but at least Mann let you know.
Nice try at a defense. But Michaels didn’t even make his own graphic. He just erased lines from the original authors’ figure. Without mentioning it.
One thing you notice if you look at the paper is that the TCR numbers for the 1851-2010 regression period and the 1901-2000 period barely overlap. One of them is almost surely wrong. Gillett also points out that the study does not take into account uncertainties in observations, and the 1851-1900 data have larger uncertainties, which might be the cause of the discrepancy.
I have no problem with the paper or its conclusions, but skeptics are making far too big of a deal about a paper that has a lot of uncertainty and only uses one model. In the case of Michaels, these skeptics are deceptively downplaying the uncertainty to suit their own goals. I look forward to further research in this area, though.
“Since we already know that they don’t reflect recent warming, what do we gain by extending them (based on their current correlation, or lack thereof, with instrumental temperatures) and comparing the extension with past temperatures?”
Yes, as toto notes here, you may have trouble with the reviewers, since the standard practice is to use a spaghetti graph of proxies/reconstructions, with or without the offending divergence data, and then tack the instrumental record onto the end of the series. I suppose equally problematic for a publication would be to show the reconstructions/proxies without the instrumental record attached to the end.
Better to mention – even if only in passing – that there is a divergence problem and then proceed to tack that instrumental record onto the end. It is the best of all worlds.
Concerning Guido Travaglini’s MS and the pros and cons of extrapolation of datasets: The prior Blackboard discussion on such issues started halfway down a long comment thread in late December 2011. As is often the case, it’s interwoven with other conversations, but the main points can still be gleaned. Here’s the link, if it’s useful.
Boris:
That's Boris for you. Can't address what a person actually says, so he has to invent things to argue against that they didn't say, and then he still impugns the motives of the person he's making his r-tard replies to. In this case, what do you mean by false equivalency?
They’re both equally bad.
Me:
Boris:
Actually he didn’t “let you know”. It is possible to deduce it, but he certainly never stated that’s what he did.
I won’t use the words “he hid it” because he’s actually not very clever and he probably didn’t even know that’s what he was doing (meaning deleting pesky data that shows a divergence and replacing it with artificial data that increases in temperature with year).
Re: Boris (Comment #88699)
Oh, I can do better. Please note that the bars that Michaels deleted, and that SkS apparently wants reinstated, represent results obtained with fewer data than the ones he kept. (1990-1999 vs. 1851-2010). So in this case it is SkS that wants to delete inconvenient data.
I’d characterize the SkS post as pure FUD.
AMac: I’m not sure the main problem here is with the extrapolation itself. The way I understand it, the result obtained from the extrapolated series is entirely expected from the non-extended series. That’s the problem (under the pretty dicey assumption that I got it right).
.
We already know that the series in question do not track recent warming. Therefore, we would expect a priori that extending the series (by utilizing their covariation with the instrumental record) will also fail to track this warming. Yet the latter point seems to be the main conclusion of the paper.
.
So it’s not clear what contribution is made there. Though it probably arises from my own confusion.
Er… if they want to delete data, why are they showing it?
toto–
I think the claim is SkS is showing a derived result that was computed from an underlying set from which data are deleted. That constitutes “deleting data”.
In contrast, deciding to limit a graph to computed results only if computing those results did not involve the analyst deleting data is not 'deleting data'. It is simply showing what the computed results are if you limit your graph to results computed without deleting data.
Michaels acknowledges simplifying the graph in the allegedly offending post:
written right below his edited version of the graph.
Focusing on that graph to make some sophomoric j'accuse rather than address the substance of Michaels' take on the Gillett paper is rather typical of SkS posts. They would do well to study Michaels' lighter touch. It makes for more palatable reading and is more persuasive than ad hominem screeds.
RE: lucia (Comment #88706)
Thanks, Lucia. But anyway, toto is smarter than that; he knows what I mean. I was using very slight hyperbole to make a point, and the point is, in this case, it is SkS that is attempting to spread FUD. What they want the casual reader to think is “oh, look, that evil Michaels wants us to believe that the result of the paper is the lower curve, when in fact it might just as likely be the upper curve.” But that, of course, is precisely not what the paper says. The paper says “if you only look at 1900-1999, you get the upper curve, but if you look at more data you get the improved lower curve”.
Fear, Uncertainty and Doubt, indeed.
julio:
And deliberate sowing of confusion, and distortion of the truth, and other sleight-of-hand parlor tricks.
I don’t doubt that toto is prevaricating here either when he so innocently asks “Er… if they want to delete data, why are they showing it?”
Playing games with truth.
Amac, I excerpted this gem from toto in the Blackboard December 2011 discussion of cutting and replacing data in Mann (08) that you linked:
“You bet they did. And on top of that no one bothered to submit even a comment on this supposedly fraudulent method. Doesn’t that make you pause for even a second?”
I suspect we all paused for second. Now what?
Actually I agree with toto on his impression of what Guido did in his extension of reconstruction/instrumental series beyond the reconstruction data. I think he merely takes the instrumental record beyond the reconstruction data and "calibrates" the reconstruction beyond that point. It is one step back from merely tacking the instrumental record on the end. I suppose you could employ that method to show how misleading tacking the instrumental record on the end can be when viewing a graph, but in the end all you really have is the reconstruction up to an end date, and a proper and illuminating graph should show the reconstruction as a standalone.
I spent some time awhile back at Nick Stokes' blog when he was doing some good graphical work using an R program to highlight one reconstruction at a time in a spaghetti graph. Those graphs unfortunately did not distinguish between the instrumental and reconstruction parts of the series. I finally got Nick to show the graphs without the instrumental record attached. What a difference a couple of decades makes (without the instrumental part).
Michaels can avail himself of the "decline defense":
since the line in question was present someplace else in the literature, and since it was discussed, nobody is misled.
Further, in defending the WMO case SkS argued that the WMO briefing wasn't as important as IPCC stuff.
So, Michaels did a blog post. Not as important as the IPCC. The line isn't hidden because it appears elsewhere, and everybody knows this.
Standard "hide the decline" defense.
Of course, as reviewers, we can suggest that Michaels change the chart back to what it was. Everybody can judge for themselves.
It's just a click away. With AR4? I dunno, how many readers could easily access the data that was deleted… err, hid… err, not presented.
Note: the reference period that Michaels deleted was 1901-2000, not 1990-1999. (I see this is corrected later.)
This is some first class spinning. Firstly, SkS presented the figure as it appeared in the original paper. It's pretty amazing how you twist that around to deleting data. I suppose that when Michaels erased the lines from the graph he was heroically restoring data?
Secondly, and most importantly, the authors chose to show two different periods to test whether the choice of period made a difference. Guess what? It does. What's more: they barely overlap.
Oh and last let’s RTFR:
Ah, the irony of people who think climate models are incapable of telling us anything useful and who then fall in line behind the one climate model they like. Only in “skepticism.”
I love all this anger and frothing about me inventing things that you’ve said and then two sentences later, you admit that I was right and you were trying to draw a (false) equivalency between SkS and Michaels.
I mean, I get the strategy. If you can maintain this act of utter disbelief about what is written by the “CAGW types” then people probably won’t notice that you haven’t linked to anything that SkS has done as bad as deleting lines from published graphs. Or maybe they’ll just accept that erasing lines from a graph is okay, but reproducing the graph as it appears in the literature is “spreading FUD.” I’m sure I’m “spreading FUD” by quoting from the actual research. What spreads deceit more than actually backing up what you say with sources? Nothing does, in Skepticland.
Boris, you are not making any sense.
To get the upper curve you need to delete data. Period. To present them both as if they were equally possible alternatives (“here are two possible choices of data to look at, we do not favor either one”) is profoundly misleading, and contrary to both the spirit and the letter of the paper by Gillett et al., who clearly (for good reason) favor one alternative–the one Michaels highlighted.
And who are you trying to impress by quoting the bit where they say their estimates are consistent with previously derived ranges? Who do you think you are talking to, anyway? Flat-earthers? All lukewarmers I know agree that the warming is probably within the lower half of the IPCC range. Of course anything within that range is consistent with the IPCC range. D-uh.
No, you aren't making sense. It's like you don't understand that the original figure that SkS shows comes from Gillett et al. Are you accusing them of deleting data too?
Also, please show the quote from the paper where Gillett et al. say explicitly that the 1901-2000 regression period is better. None of the quotes you have shown so far claim this. What's more, your inference is at odds with the authors showing the supposedly useless regression in the first place.
Apparently I’m talking to people who think it is FUD to show a graphic from a paper that they themselves think is pretty good. It’s very weird.
No, that wasn’t the point of what I quoted. I quoted that passage to show that this paper is based on a single model and, thus, has larger uncertainties than you are letting on.
What do you think, friends? Should I keep trying to talk to Boris or is it time to give up?
Should I, for instance, try to point out that it is perfectly possible to mislead by presenting a literal quotation (or in this case, a figure) out of context?
Should I repeat that the title of the paper explicitly refers to an Improved result based on 160 years of data, and the abstract explicitly distinguishes between what they call “our” result and the result based on only 20th century data, making it clear (to anybody other than Boris, apparently) that what they consider their (“improved”) result is the 160-year one?
Finally, should I politely point out that I don’t give a damn what the uncertainty of the model is, since it has nothing to do with the subject we are discussing, namely, the relative merits of the (mis)representations of Michaels and Skeptical Science, respectively?
Nah…
It’s a waste. Boris never admits being wrong about anything, even when it’s bloody obvious.
Thanks to toto, AMac and K. Fritsch for the concerns they expressed about my MS. Criticism is always welcome if it is suitably grounded, and this is their case. The reason for updating (i.e. extrapolating) NOAA millennial series which end well before the last decade(s) lies simply in the attempt to test the Hockey Stick (HS) hypothesis, which by definition assumes GW has occurred in the very latest years.
So if some of the NOAA original series do not exhibit HS behavior in the eighties or in the nineties, it is by no means granted that they will/would behave in this manner also well into the 21st century. I'm afraid that the contention that "we already know that they don't reflect recent warming" and thus extrapolation is worthless is methodologically feeble from a statistical viewpoint. A volcanic or earthquake time series may be stationary for centuries but then eventually explode!
Anyway, it's simple curiosity that compelled me to perform updating of those series by means of instrumental variable(s) that are highly correlated (i.e. cointegrated) with them. HADCRUT was selected due to its renowned reliability, and the correlations were significant.
The procedure I've utilized for the entire thing is the Kalman Filter, somewhat different from the EM algorithm and maybe more complicated, hopefully more efficient as it embodies two-stage estimation at each update step: prior and posterior parameters. Also, the implied variances converge, although I didn't report them in the MS.
Obviously, extrapolation produces artificial series, but I accounted for this by running Monte Carlo (MC) simulations, thereby letting the outcomes move within a certain confidence band. And if MC is applied to the entire series, original+updated, more than one maximum can be identified in probability. Oddly enough, the Mann & Co. series stand out by exhibiting abnormal maxima!
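For readers unfamiliar with the approach, here is the crudest possible sketch of the Monte Carlo band idea (my own toy illustration with synthetic numbers and placeholder names, not Guido's actual procedure or data): fit the proxy against the instrumental series over the overlap, then extrapolate many times with resampled residuals and take percentile bands.

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1850, 1991)                      # proxy X ends in 1990
instr_full = np.linspace(0.0, 0.8, 2012 - 1850) + 0.1 * rng.normal(size=2012 - 1850)
proxy = 0.5 * instr_full[:t.size] + 0.05 * rng.normal(size=t.size)

# calibrate proxy against the instrumental series over the overlap period
b, a = np.polyfit(instr_full[:t.size], proxy, 1)
resid = proxy - (a + b * instr_full[:t.size])

# Monte Carlo: extrapolate 1991-2011 many times with resampled residuals
future_instr = instr_full[t.size:]
sims = np.array([a + b * future_instr + rng.choice(resid, size=future_instr.size)
                 for _ in range(1000)])
lower, upper = np.percentile(sims, [2.5, 97.5], axis=0)
print("2011 extrapolated proxy: %.2f to %.2f (95%% band)" % (lower[-1], upper[-1]))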
You know, julio, you can't produce a quote and have to rely on a possessive pronoun to build your case. That's fine. The real issue is the utterly bizarre claim that SkS is being deceptive by reproducing the unaltered graphic from the Gillett et al. paper. How dare they not also erase lines from the graph! Those bastards!
The fact that you don't care about the uncertainty of the model is pretty unsurprising. I mean, why would a "lukewarmer" or whatever it is you call yourself actually want to understand the Gillett results in context when you can just point to the lower TCR numbers, wipe your hands on your pants and claim victory? You even get Carrick to put on his cheerleading uniform for you. I get it.
Some readers, myself included, have taken only a cursory look at the Michaels/SkS/Gillett source material. The reason: Life Is Short. We're reading the back-and-forth to get a bit more education on the subject, at least to the level of identifying the key points of contention. It is helpful to have them sketched out by commenters with diverse perspectives.
Issues not personalities, please.
Boris:
Like I said.
Being Boris means never having to admit you’re wrong.
Julio makes a good point and you’re either too stubborn or just not smart enough to follow it. I’m assuming stubborn, but admit I could always be wrong.
SkS is filled with id*ots, and you’re cheerleading for them. Grand times.
Guido, thanks for your Comment #88720.
Earlier, Kenneth Fritsch and I had each expressed the concern that your procedure is fatally flawed. That argument is simple: the various time-series that can (or might) serve as temperature proxies only extend as far as the last year that they were collected. The data points for the following years are unknown.
While it is possible to estimate what the unknown values of a time-series X are, the method of estimation must employ some guideline. The favored guideline appears to be “search for a correlation between series X and some other relevant time-series (HADCRUT) during the period of overlap, then extend that correlation past the end-date of X.”
However, this seems to run a risk of circular reasoning.
The most contentious question is: How reliable is the observed correlation between series X and the instrumental temperature record? Can this correlation be used for the pre-instrumental period (say, 1000 to 1850)? Would this correlation hold for the period between the end of series X and the present?
If the correlation between series X and (say) HADCRUT that was established for (say) 1850 to 1960 can be shown to hold for (say) 1961 to 2011, this helps to validate the correlation-based model, and strengthens the case that the same correlation can be applied to other time periods (say 1000 to 1850).
That, in turn, helps to answer the question: What is the best estimate of the shape of the long-term global average temperature curve? (Is it a Hockey Stick?)
In this context: what is the point of using the period of overlap between time-series X and HADCRUT to estimate the recent values of time-series X? It seems to me that to do so is to make an elementary logical error, by assuming that the observed correlation extends to later periods (and by extension, to earlier periods).
But those are just the things that we do not know.
For some time-series, it has turned out that collection of post-1960 data showed that they did not continue to correlate with instrumental records. Hence the Divergence Problem.
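The split-period check described above is simple to state in code. A minimal sketch, with synthetic data and placeholder names (nothing here is an actual proxy or the actual HADCRUT series): calibrate on the early window, then score the reconstruction on the held-out later window.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2012)
instr = 0.005 * (years - 1850) + 0.1 * rng.normal(size=years.size)  # toy "instrumental" series
proxy = 2.0 * instr + 0.2 * rng.normal(size=years.size)             # toy proxy X

cal = years <= 1960          # calibration window
ver = years > 1960           # verification (held-out) window

# calibrate: proxy -> temperature via simple linear regression on 1850-1960
slope, intercept = np.polyfit(proxy[cal], instr[cal], 1)
recon = slope * proxy + intercept

# verification statistics on 1961-2011
r_ver = np.corrcoef(recon[ver], instr[ver])[0, 1]
ce = 1 - np.sum((instr[ver] - recon[ver])**2) / np.sum((instr[ver] - instr[ver].mean())**2)
print(f"verification r = {r_ver:.2f}, CE = {ce:.2f}")
# A proxy with a post-1960 divergence problem would show poor r and negative CE here,
# which is exactly why extending X by its calibration-period correlation proves nothing.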
Amac
You’re probably right. Snippy, angry post deleted.
re: AMac (Comment #88723)
Michaels has responded to SkS here. WRT the Gillett part, it echoes what Julio has suggested.
Boris (Comment #88729)
And yet, they have clearly succeeded at deceiving you.
LL #88728 —
In the linked Michaels post, he attempts to rebut the three "serial deleter" charges leveled by SkS: Hansen 1988, Schmittner et al 2011, and Gillett et al 2011. Looking at the first case:
Hansen 1988
Hansen’s original graph shows three projections 1988-forward: Scenarios A, B, and C. Michaels retained only the highest plot, A. He writes that this was “Hansen’s (in his words) ‘business-as-usual’ (BAU) scenario.”
SkS writes, “Scenario A assumed continued exponential (accelerating) greenhouse gas growth. Scenario B assumed a reduced linear rate of growth, and Scenario C assumed a rapid decline in greenhouse gas emissions around the year 2000. Hansen believed Scenario B was the most likely to come to fruition, and indeed it has been the closest to reality thus far.” (link is SkS’)
So, Michaels says that Scenario A was Hansen’s 1988 BAU projection. SkS doesn’t address this contention. They say Scenario A was continuing exponential GHG growth 1988-on, and Scenario B was reduced linear GHG growth 1988-on.
This raises two questions of fact.
(1) Was Scenario A indeed Hansen’s 1988 view of likely temperature rise, assuming BAU?
(2) Did the actual record of GHG emissions correspond more closely to the projections behind Hansen's 1988 Scenario A, or his Scenario B?
(I don’t know, maybe somebody knowledgeable can weigh in.)
The SkS link is interesting in its own right. It goes to a 2009 RealClimate post authored by Gavin Schmidt. Dr Schmidt indeed shows that the temperature record 1988-2009 is more consistent with Hansen’s Scenario B than with his Scenario A. (My Mark I eyeball analysis is that the temp record is as consistent with Hansen’s Scenario C as with B, a point elided by SkS.)
However, go back and read the SkS sentence that includes the link. The unwary reader might think that the RealClimate post is a look back at the GHG emissions assumptions of Hansen’s Scenarios A, B, and C. It isn’t — that post is only about temperature trends. The words “greenhouse” and “GHG” aren’t used (they are in the 908 comments, which I didn’t read).
My interpretation is that SkS’ interpretation of Hansen 1988 is supported by the RC post if GHG emissions 1988-2009 corresponded to Scenario B. Michaels’ interpretation is supported by that RC post if GHG emissions 1988-2009 corresponded to Scenario A.
Which is it?
.
Schmittner et al 2011
Michaels claimed to reproduce the Schmittner et al abstract in its entirety. Pulling the MS via SkS: he did so. SkS provides a lengthy quote from a Schmittner co-author expressing dismay at Michaels’ treatment of the paper’s figure. However, ISTM that Michaels is correct in saying that his alteration/simplification of the figure is consistent with the main thrust of the paper as expressed in its abstract.
AMac:
Gavin answers your question in the RC post:
From when this was big in the blogosphere, I recall that some people wanted to look only at the CO2 emissions in Hansen’s scenarios and not total forcing, which is a dumb way of evaluating a climate model. So yes, the total forcing was closest for scenario B, and even that was a little high.
And there’s a reason we don’t cite abstracts, people.
Re: AMac (Comment #88733)
You ask a good question about Hansen’s paper and I hope someone will answer it.
Re: Schmittner, you write “ISTM that Michaels is correct in saying that his alteration/simplification of the figure is consistent with the main thrust of the paper as expressed in its abstract.”
The same is true, only more so, of his treatment of the figure in Gillett et al.’s paper.
The figure shows, in solid bars, the result of a calculation with the full set of 1851-2010 temperatures, and, in dotted bars, the result of a calculation with only 1901-2000 temperatures. Michaels deleted the dotted bars. This is consistent with
(i) the paper’s title, which claims an improved constraint derived using 160 years of temperature observations. This alone establishes that, in the authors’ view, the solid bars are, in fact, an improvement over the dotted ones.
(ii) statements in the abstract that repeatedly refer to the 160-year calculations as *the* paper’s result, first by contrast with the estimates derived using only 1900-1999 observations (the bit I reproduced earlier), second by clearly stating that
“Our analysis […] leads to a […] Transient Climate Response of 1.3–1.8°C”
which is what figure 3c shows for the calculation based on 160 years, not for the other one.
All of this already should be enough to make it clear that if one wanted to present just the main result of the paper, presenting the solid bars would be the way to go.
If one also wanted to show the dotted bars, one should label them clearly as not endorsed by the authors. For instance, one could label one set as “old studies” and the other set as “this study”, or “reduced dataset” and “full dataset”, or “not so good constraint” and “improved constraint”.
Reproducing them, as SkS does, without these caveats, is misleading and deceptive.
Finally, here’s the authors’ position on the rationale for using the full dataset:
Boris:
As SteveF likes to point out, we don’t actually know the total forcings. To a degree, indirect aerosol forcings are used as a fudge factor to make the models look “good” during the verification period.
Even if we assumed the "best" numbers on aerosol emissions were correct, the models don't handle anthropogenic aerosol emissions correctly. (The real aerosol forcings are very short-lived and are much more "lumpy" than the models assume, e.g. globally constant forcings.)
If you dig around RC, you can find where Gavin says this too about the models.
Re: Boris (Comment #88734),
> the total forcing was closest for scenario B, and even that was a little high.
Good catch on Gavin writing “The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%)”. I missed it in my scan of the RealClimate post.
So it looks like the forcings assumptions behind Hansen’s 1988 Scenario B turned out to be closer to what actually transpired, 1988 – present, than Scenario A. (The issue of whether aerosols are used as a kludge (and if so, how important they are) remains.)
Aerosols aside, that would mean that Michaels’ choice of comparing Hansen’s 1988 Scenario A temperature predictions to the real-world temperature record was in error. Either he made the mistake unwittingly (i.e. equating A to “BAU” to “what actually happened”), or wittingly (i.e. in order to mislead readers).
These “caveats” are your own inventions and do not represent the authors’ views. Here’s what the authors say about the figure in question:
The fact that the 1901-2000 period is shorter is already represented in the graphic as presented on SkS.
Wow Boris.
“If you can’t dazzle ’em with brilliance…”
OK, I officially give up now.
Boris, you have never actually written a scientific paper, have you? You certainly don’t know how to read one.
Amac –
I'm not sure that is entirely fair. The original Hansen paper described scenario A as a 1.5%/year increase in emissions, which would correspond to about a 23% increase from 1990 to 2004. According to the AR4, this seems to be about what happened, and is perhaps even an underestimate (I'm not sure about the data in recent years).
So, if Gavin is correct about the forcing change being closer to scenario B, this would seem to be based on (as Carrick pointed out) uncertain estimates of the aerosol forcing stemming from these emissions (with perhaps a tiny bit of solar thrown in). Scenario A does not include any stratospheric aerosols from volcanic eruptions, but the effect from Pinatubo is likely negligible on surface temperatures today.
Furthermore, if you lay out an emissions scenario, are you to be judged on your estimates of the future forcings from these emissions (which would include aerosols), or purely on the modelled temperature responses to forcings? If only the latter, both Gavin and SKS have acknowledged that the climate sensitivity of Hansen’s model used in the paper is too high, so what is left to argue about? While modellers may view being judged on the former (temperature response to an emissions scenario) as unfair, this is what is currently available for influencing policy. I don’t think there is an easy answer, but saying Michaels was wrong for comparing the emissions scenario which best tracked the real-world emissions to the real-world temperatures does seem a bit unfair (although I would grant that he was trying to show that which would best advance his POV, and I do not condone deleting the other scenarios).
Re: Troy_CA (#88747),
> Furthermore, if you lay out an emissions scenario, are you to be judged on your estimates of the future forcings from these emissions (which would include aerosols), or purely on the modelled temperature responses to forcings?
I don’t really understand what you mean. Hansen in 1988 couldn’t predict the future trend of GHG emissions, so he offered a set of them, Low-Medium-High. ISTM that his prediction should be judged on the basis of which (if any) of the three scenarios ended up corresponding with what’s happened since 1988. Michaels and SkS seem to agree with this — it’s just that Michaels says Scenario A is the right choice, while SkS and Gavin say it should be Scenario B.
Part of the question is who-actually-wrote-what. From the Hansen 1988 PDF you link (pg 9343 col. 2):
I googled for a quick and dirty recent estimate of the rate of GHG rise over the past few decades and found “Climate and Capitalism’s” reprint of an 11/21/2011 WMO press release, Greenhouse gas concentrations reach new high; rate of increase accelerates. According to the attached Fig. 3b, CO2 has had an average annual rate of increase of between 1.6% and 1.9% from 1988 to 2010.
This is strong evidence that Hansen’s 1988 Scenario A is the correct choice for the comparison that Michaels and SkS want to make.
This would mean that Michaels is right on this issue, and that SkS and their source RealClimate are in error.
AMac –
The complicating factor is that while the GHG emissions growth of scenario A is the closest real-world match, the real-world net forcings (WMGHG + aerosols) match up closer with scenario B (at least according to Gavin, I haven’t checked). This may be because Hansen overestimated the GHG forcing change associated with the emissions scenario, or (most likely) because he did not properly include the off-setting forcing of aerosols associated with the increase in emissions.
This was what led to my question – you point out that because he couldn’t predict emissions growth, that’s what led to his different scenarios. But once you’ve moved along the path of a particular scenario (e.g., 1.5% growth/year), is it fair game to criticize the modeller/estimator if the projected forcings associated with the emissions don’t match up with the real-world forcings? Both SKS and Gavin seem to suggest that this is not fair…one should only compare to the scenario with the closest projected forcings. If this were solely the result of unpredictable events (e.g. volcanic eruptions), I would agree, but it is trickier here because modelling forcings based on emissions is part of “the science”.
For instance, if thirty years from now we had continued on the “business as usual” path, but experienced significantly less warming than the “business as usual” projections, how would you feel about the modellers claiming that the models were right, except that the forcings path actually followed closer to the “drastically cut reductions” scenario? I think it would be of little comfort, especially if there had been drawn-out fights over policy actions based on the scenarios.
I think that you could compare to either scenario A or B based on what you want to address, but I certainly wouldn’t tell someone that scenario A is irrelevant to the real-world path so far.
You have to look at total forcing increase, not just CO2. Scenario A had increases in other gases, particularly CFCs that didn’t come to pass.
Re: Boris (Jan 18 17:14),
> You have to look at total forcing increase, not just CO2. Scenario A had increases in other gases, particularly CFCs that didn’t come to pass.
Boris, you don’t supply a link or a reference for your assertion. I can’t tell if you’re serious or hand-waving. Hansen estimated that CO2 alone was 80% of the forcing; do you disagree that that was the basis of his Scenario A? What was the quantitative impact on Scenario A of the CFCs that didn’t come to pass?
Re: Troy_CA (Jan 18 17:08),
The RealClimate crew has an unfortunate weakness for claiming something is true, when it's actually more like kinda-sorta-maybe true. Um, okay, how much kinda sorta maybe? Gavin wrote "the total forcing was closest for scenario B, and even that was a little high." Well, now we know the CO2 forcing and thus, it seems, the total GHG forcing wasn't "closest for Scenario B" — it was closest for Scenario A (recall that Hansen 1988 claims CO2 forcing as ~80% of total GHG forcing). See Comment #88751 above.
As far as non-GHG forcings, remember that we — that is, Michaels & SkS & Gavin & me & Boris & you — are talking about what Hansen wrote in 1988, not about what people may now think about the effects of the aerosol inputs that Hansen estimated in 1988.
So, what did Hansen actually write in 1988 about his projections of the magnitude of aerosol forcings in Scenario A?
My guess is, not much. The answer should be in the 1988 paper’s Appendix A.
If my guess of “not much” is correct, that leaves Michaels in the right, and SkS/RealClimate in error, ISTM. If someone shows that this guess is wrong, I’ll check back tonight and confess my mistake.
To AMac (Comment #88726):
The correlation between a NOAA proxy, call it X, that ends in 1990, and the HADCRUT series can only be established for the period 1850-1990. It cannot be computed backwards, because other instrumental records before 1850 are unavailable as far as I know. Hence, we must rely on the reliability of the correlation between the two series for that period only. However, the correlation for the more recent period 1991-2011 can be computed, which I haven't done. Good point of yours, thank you! In any case, the KF procedure has some built-in "security measures" that enable the operator to avoid gross errors, like the explosion in variance of the state variable (X in this case) and of the feedback variance with the measurable variable (HADCRUT). Although unreported in the MS, these parameters for all of the NOAA series converge toward zero as canonically expected. Regards, Guido.
P.S. Think of the KF as a method for guiding a missile (X) toward a given target by adjusting its trajectory step by step through an optimal control device (HADCRUT).
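For readers who have not met the Kalman filter, here is the generic scalar predict/correct recursion that the missile analogy describes, in a toy setting. This is only a sketch of the idea with made-up numbers, not the procedure in Guido's MS.

import numpy as np

rng = np.random.default_rng(2)
n = 50
truth = np.cumsum(0.02 + 0.05 * rng.normal(size=n))   # toy underlying temperature
obs = truth + 0.1 * rng.normal(size=n)                 # toy "instrumental" measurements

q, r = 0.01, 0.1**2        # process and measurement variances (assumed)
x, p = 0.0, 1.0            # initial state estimate and variance
estimates = []
for z in obs:
    # prior (predict) step: random-walk state model
    x_prior, p_prior = x, p + q
    # posterior (update) step: blend the prediction with the measurement
    k = p_prior / (p_prior + r)          # Kalman gain
    x = x_prior + k * (z - x_prior)
    p = (1 - k) * p_prior
    estimates.append(x)                  # keep the filtered path

print("final state estimate %.3f, final variance %.4f" % (x, p))
# The error variance p settles down as the filter assimilates data, which is the
# convergence property Guido says he checked (though did not report) in the MS.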
AMac et al.
The Hansen scenarios, in terms of GHG concentrations, are available here thanks to Gavin. I converted them to forcings in a spreadsheet using the formulas here attributed to Myhre et al. 1998.
Let's look at concentrations first. Here are the forecasts for scenarios A & B, along with actuals for 2010:
gas     2010A    2010B    2010 actual
CO2     391.51   389.09   392      ppmv
N2O     0.33     0.33     0.323    ppmv
CH4     2.58     2.22     1.788    ppmv
CFC11   1.18     0.54     0.24     ppbv
CFC12   2.03     0.94     0.54     ppbv
and forcings (relative to 1984 values in the Hansen dataset) per formulas, all in W/m2
gas     2010A   2010B   2010 actual
CO2     0.70    0.66    0.70
N2O     0.09    0.08    0.06
CH4     0.28    0.16    0.01
CFC11   0.24    0.08    0.00
CFC12   0.53    0.18    0.05
Total   1.83    1.16    0.83
So his forcing projections were too high, even scenario B. And his sensitivity was too high. Not surprisingly, his projected temperatures were too high.
Let’s look at why Hansen’s forcings are too high. He nailed CO2; both A&B are extremely close to actuals. And CO2 constitutes the lion’s share of actual forcing (>80%). But the others, not so close. I can’t really blame him for overestimating CH4; its growth has slowed drastically from what seemed apparent in the 80s. For the CFCs, exponential growth was never a reasonable model; the data showed at most linear growth over 10 years prior to 1988. And naturally the Montreal Protocol reduced CFC growth strongly.
I should add that scenario A, in which CFCs account for 0.77 W/m2 (40%) of the forcing change between 1984-2010, was “juiced” by Hansen. He doubled the CFCs’ concentration, to approximate “potential effects of several other trace gases”. This has a noticeable effect by 2010; without it the forcing change of scenario A decreases by about 0.5 W/m2, to 1.35 W/m2. The effect is even more pronounced at the end of the projection period in 2050, because the forcing is linear in the CFC concentrations. By 2050, CFC-12 forcing exceeds that of CO2 (in scenario A) & CFC-11 forcing is about half CO2’s.
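HaroldW's spreadsheet numbers for the simpler gases can be roughly cross-checked with the standard Myhre et al. (1998) simplified expressions: logarithmic for CO2, approximately linear for the CFCs. The 1984 baseline concentrations below are my own approximate assumptions, not values pulled from Hansen's dataset, so expect small differences in the second decimal.

import math

def co2_forcing(c, c0):
    # Myhre et al. 1998: dF = 5.35 * ln(C/C0), in W/m2
    return 5.35 * math.log(c / c0)

def cfc_forcing(c, c0, per_ppbv):
    # CFC forcing is roughly linear in concentration (W/m2 per ppbv)
    return per_ppbv * (c - c0)

co2_1984 = 344.0     # ppmv, approximate assumption
cfc11_1984 = 0.21    # ppbv, approximate assumption
cfc12_1984 = 0.37    # ppbv, approximate assumption

print("CO2, scenario A 2010:", round(co2_forcing(391.51, co2_1984), 2), "W/m2")
print("CO2, actual 2010:    ", round(co2_forcing(392.0, co2_1984), 2), "W/m2")
print("CFC-11, scenario A:  ", round(cfc_forcing(1.18, cfc11_1984, 0.25), 2), "W/m2")
print("CFC-12, scenario A:  ", round(cfc_forcing(2.03, cfc12_1984, 0.32), 2), "W/m2")

Run as-is, this gives roughly 0.69-0.70 W/m2 for CO2 and about 0.24 and 0.53 W/m2 for the CFCs, in line with the table above.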
Apologies for formatting of table in above post. My last edit to fix it must have extended past the allowed edit time.
HaroldW, that’s a brilliant post. Thanks for all the legwork involved in digging up the figures and plowing through the formulas.
A small comment: methane looks “off” to me.
1.8 ppmv -> 0.01 w/m2
2.2 ppmv -> 0.16 w/m2
2.6 ppmv -> 0.28 w/m2
What sort of formula gives a curve like that? Or, was methane around 1.8 ppmv in 1984, thus giving negligible added forcing if at much the same level in 2010?
Also, in Comments #88751 & #88755, I stated that Hansen 1988 claimed ~80% of GHG forcing was from CO2, which is inconsistent with the figures you show. I’ve re-read the PDF (unfortunately it’s scanned and not searchable), and don’t see “80%” — so I must’ve gotten that wrong. Apologies for the confusion.
As far as the “serial deletion” charge that brought us here: It looks as though Hansen’s Scenario A — which Michaels selected on the basis it was the “BAU” model — had little to do with the way that GHG concentrations actually played out, 1988-2010.
Scenario B would be a better match to the actual atmospheric history, even though it isn’t particularly close in important respects.
So, a demerit to Michaels for his choice of A over B. It looks like this SkS criticism has legs, after all.
What a flip-flopper I’ve turned out to be!
I’ve enjoyed reading the discussion here so far – without feeling I have anything to add. Here, I would say that my feeling would depend on whether Hansen made a statement that A was a BAU scenario. Michaels made a big deal about it over at WUWT – and I agreed with him [and Chip who was doing the numbers :)]
*
I think this is quite important and, if you like, lets Michaels off the hook. Hansen, if so, was making claims about "if we don't do X, then Y" and it doesn't matter whether his error was in the emissions or the sensitivity or whatnot – BAU (which transpired, in Michaels' interpretation) somehow failed to follow Hansen's prediction – which was for political and public consumption.
*
As it happens, I think exactly the same about the IPCC FAR. It makes a clear point that BAU is what will happen "if few or no steps are taken to limit greenhouse gas emissions". Of course, some other things are specified too [equivalent doubling of CO2 by around 2025].
*
I think the definition given above, if you like, takes precedence – it says if X, then Y where X is doing nothing or little, and Y is 0.3 degrees C warming per decade for the foreseeable. Exactly how the prediction (as it was called in those days) gets to be about 100% too high doesn’t matter if we accept that BAU is what has transpired. There is of course, always going to be a reason why the prediction was wrong – but I don’t think it matters what it is.
AMac –
1) The forcing for CH4 varies primarily according to the square root of the concentration, forcing (in W/m2) ~ 0.036*sqrt(M), M being the methane concentration in ppbv. E.g., going from M=1751 ppbv (in 1984) to scenario A’s 2580 ppbv is an increase of approximately 0.036*(sqrt(2580)-sqrt(1751)) = +0.32 W/m^2. However, there’s an interaction with N2O, presumably because the absorption bands of CH4 & N2O overlap. You’re going to regret asking, because the entire expression is:
forcing = 0.036*(sqrt(M) - sqrt(M0)) - (f(M,N0) - f(M0,N0))
where
f(M,N) = 0.47*ln[1 + 2.01E-5*(M*N)^0.75 + 5.31E-15*M*(M*N)^1.52]
and M0, N0 represent initial concentrations of CH4 & N2O resp. in ppbv.
[This is straight from Myhre et al. GRL 1998, Table 3; a short coded transcription follows after this comment.]
2) I came to the opposite conclusion that you did; that is, I believe that Hansen’s 1988 prediction should be judged by comparing to his “A” curve. He described it as “business as usual”, and there having been no great mitigation steps, I think that’s what we’ve seen. [I’d give him some leeway for Pinatubo, but that’s not a big adjustment, nor is it relevant when looking at trends.] Let me put it this way — Hansen was attempting to influence policy by suggesting that if nothing happened, things would turn out like scenario A. If you’re a policy maker, that’s what you go by.
Hansen forecasted CO2 remarkably accurately. However, his prediction for CFC forcing in particular was extremely high. This is partly due to his doubling to account for completely unknown “other trace gases”. The other factor was the exponential growth in CFCs; given that the Montreal Protocol had been agreed in 1987, it was entirely illogical to expect that. Indeed even without Montreal I would not have hazarded an exponential fit to CFCs. So he got the GHG forcing high by a factor of 2. Saying “well, the forcing is closer to B, so we should use the B curve” is allowing him to get half of the prediction business wrong. [I’m reminded of “if my aunt had wheels, she’d be a bicycle”, or something like that.] The model’s sensitivity, 4.2 K per CO2 doubling, also seems high. Harder to say what it should be, of course, but I think the consensus (!) is moving to the 2-3 K range.
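For completeness, here is point 1) turned into code: a direct transcription of the quoted Myhre et al. (1998, Table 3) CH4 expression, including the CH4-N2O overlap term. The 1984 CH4 value is the 1751 ppbv cited above; the fixed N2O value of 304 ppbv is my own approximate assumption.

import math

def overlap(m, n):
    # overlap term f(M,N); M = CH4 and N = N2O, both in ppbv
    return 0.47 * math.log(1 + 2.01e-5 * (m * n)**0.75 + 5.31e-15 * m * (m * n)**1.52)

def ch4_forcing(m, m0, n0):
    # dF (W/m2) for CH4 going from m0 to m ppbv at fixed N2O = n0 ppbv
    return 0.036 * (math.sqrt(m) - math.sqrt(m0)) - (overlap(m, n0) - overlap(m0, n0))

m0, n0 = 1751.0, 304.0   # 1984 CH4 per the comment above; N2O assumed
for label, m in [("actual 2010", 1788.0), ("scenario B", 2220.0), ("scenario A", 2580.0)]:
    print(label, round(ch4_forcing(m, m0, n0), 2), "W/m2")

With those inputs it comes out close to the CH4 entries in #88759 (about 0.01, 0.16 and 0.28 W/m2 for actual, scenario B and scenario A respectively), which answers the "what sort of formula gives a curve like that" question: the square root plus the overlap correction.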
Hansen's Scenario B assumed forcing would increase by 1.31 W/m2 from 1984 to 2011. The upcoming IPCC AR5 is showing that forcing increased by between 1.28 and 1.31 W/m2 from 1984 to 2011.
So Scenario B is almost bang on in terms of what really happened in terms of forcing but off by quite a bit in the actual temperature response (if it is depicted objectively of course).
http://www.realclimate.org/data/H88_scenarios_eff.dat
http://www.pik-potsdam.de/~mmalte/rcps/
Anteros, HaroldW —
Did Michaels’ selection of Hansen88’s Scenario A to the exclusion of B and C make him a “serial deleter,” as SkS claims?
It seems to me that the principal parties are phrasing the issue to suit their preferences and preconceptions.
As so often in climate science, the framing of this question is as important as the question itself. To say nothing of the answer(s).
If we wish to evaluate Hansen’s 1988 vision of what “business as usual” would look like, and what the implications for policy (i.e. temperature rise) would be, then we should look to Scenario A.
If we want to get a sense of Hansen’s 1988 understanding of the connections between rising GHGs (ppmv), forcings (W/m2), and temperature rises, then we should look at A, B, and C, with the focus on B (because B’s summed forcings (1984-2010) of 1.16 W/m2 are closest to the actual value of 0.83 W/m2; see #88759).
If we want to understand the 1988 consensus view of the likely influences of the various potential major-player GHGs to forcings in the 1984-2010 period, we should look at the listings of gasses in A and B.
What’s typical and frustrating is that both Michaels and SkS have the knowledge, interest, and technical proficiency to perform the analysis that HaroldW showed in #88759. That breakdown is necessary for the scientifically-literate layperson to gain an understanding of the underlying issues.
Yet, each for their own reasons declined to do so, preferring to skip ahead to broad-brush self-congratulation and condemnation of the enemy.
Are we approaching the point where we are arguing that Hansen is “right” because he predicted the wrong forcings?
If anything, this is illustrative that climate models have no real-world predictive capability, because that would require knowing the unknowable: future anthropogenic and natural forcings.
As we see, it's not hard to get reasonably close to the correct CO2 curve, but that's because the curve is very slowly varying (it contains little high-frequency content outside of the seasonal component). Just extrapolating a curve like that introduces much less error than, e.g., temperature, which in any given period is dominated by short-period climate fluctuations. Or anthropogenic aerosols, for which we not only have no easy way of knowing future forcings, we really don't even have a good handle on historical forcings.
AMac:
Which is why neither is a reliable source.
They are great fodder for blog writers, because depending on your audience, you can always find fresh meat on one of these blogs that you can use to stir up a conversation amongst your blog denizens.
Amac –
I very much agree, and you sum up the relevant nuances well. It’s a pain when you know everyone is spinning everything they can. In a way I expect it from Michaels, which I know is a bit odd, but I don’t think he pretends that much that he’s impartial, so I don’t feel conned. SkS pretend much more vigorously so I feel a bit more irritation. But still, some dispassionate honesty would be refreshing once in a while.
*
On the SkS subject, Bishop Hill has a link to Shub’s site with a particularly unedifying take on some of their twisting (vis a vis Michaels).
*
Am I the only one who thinks the FAR predictions are at least as interesting as Hansen’s from the same period?
Anteros, thanks for the link. It does look like SkS has deliberately misquoted Michaels. (But I'd want to look at the edit history on Michaels' article to make sure he didn't change what he said, in which case the tomato throwing can recommence.)
AMac (#88766) –
I’m not in a position to comment on the “serial deleter” charge, because I’ve only looked at one item.
As far as that one item, I agree with Michaels that scenario A is the curve that should be compared to the historical record. However, I would have reproduced the entire chart, and then made the case for selecting curve A as opposed to B. [I don’t think anyone’s backing C.] But then I’m not trying to score rhetorical points; Anteros’ comment in #88771 above is spot on.
I propose the following as common ground that all can agree on: Hansen got everything right, except those things which he got wrong. 😉
I think this discussion of GHG emission scenarios makes two major points:
1. It really does not matter that much that some choose to use incomplete information, as long as the reader is willing to put in the effort to look at all the data, and in this case all the scenarios, to gain a more complete understanding of the results and their implications.
2. The discussion also points to an added uncertainty in using models to predict future temperatures and that is the uncertainty of finding the proper scenario to use.
Seems to me a bit too much to critique Hansen based on his projections of future emissions; after all, he could not foresee the Montreal protocol on halocarbons, nor the flattening in methane due to greater care in production, transport, and use (more expensive materials are handled more carefully, no surprise there).
.
Hansen’s predictions can be justifiably critiqued because he predicted a rate of temperature rise that was too high, even for scenario B. He used (and continues to use) a high value for climate sensitivity….. so in spite of a partially compensating overstated ocean heat uptake, he must also use post-hock assumed aerosols offsets to match temperature records. Hansen’s latest screed continues the charade by claiming higher than measured ocean heat uptake and aerosol offsets which are exactly high enough to match the temperature record (surprise!). This does not (IMO) qualify as rigorous science. So long as climate science gets a pass on obvious arm waves and kludges, the claimed high climate sensitivity will never be revised downward, no matter what the future evolution of temperatures; too much fright potential (and funding) is at stake.
Amac and others, I looked at and collated forcing data a couple of years ago – see here
http://climateaudit.org/2008/01/17/hansen-ghg-concentration-projections/
http://climateaudit.org/2008/01/18/hansen-scenarios-a-and-b-revised/
http://climateaudit.wordpress.com/2008/01/27/hansen-and-hot-summers-in-the-southeast/
http://climateaudit.org/2008/07/28/hansen-update/
Also tagged as http://climateaudit.org/tag/scenario-b/
At the time, Gavin Schmidt, unsurprisingly given his employment, was functioning very much as Hansen’s other bulldog and was very quick to counter the slightest criticism of his boss.
Re: Steve McIntyre (Jan 19 11:48),
Whew, a lot of work (and controversy) about Hansen88 that passed me by in 2008. Rip van Winkle. The graphic at the end of the linked Climate Audit post Hansen Update allows one to get a good feel of the relative contributors to the total forcing projections for Scenarios A, B, and C.
SteveM,
Thanks. I hadn't appreciated the importance of OTGs (other trace gases).
re: SteveF (Comment #88782)
January 19th, 2012 at 11:46 am
“he must also use post-hock (sic) assumed aerosols…”
More post-prandial perhaps?
Paul_K,
Post hoc, not hock.
Steve McIntyre –
Thanks for the links; I should have thought to look through CA’s archives. I found the discussion here very interesting.
It is much more reasonable to describe scenario A as “plausible worst-case”, especially given the speculative (& exponential!) allowance for “other trace gases”. And even Hansen acknowledged that the exponential extrapolation must eventually over-estimate. However, Hansen’s Congressional testimony explicitly calls scenario A “business as usual”. In that context, I interpreted his curves to mean:
A = no policy changes
B = moderate policy change
C = strong, immediate policy change (Hansen used the phrase “draconian emissions cuts”)
/speculate on
I wonder if Hansen found himself in a “double ethical bind” when preparing for his Congressional appearance.
/speculate off
Re: HaroldW (Comment #88815),
> Hansen's Congressional testimony explicitly calls scenario A "business as usual".
Michaels and others have claimed that Hansen called scenario A “BAU”, but it wasn’t clear to me that that is actually the case. It is: some Googling brought up a condemnation of Michaels from a 2007 Deltoid post by Tim Lambert:
So there seem to be competing concepts about the idiom “BAU.” In Michaels’ favor on this point, his understanding tracks the dictionary definition.
AMac –
Your comment leads me further back towards Michaels' interpretation. I think what Hansen can be surmised to have meant on re-analysis after the event [or even at the time] is very much irrelevant. What is important is the impression Congress had – or was likely to have had – given the circumstance. Which is "if we don't do anything this is what is going to happen" – scenario A.
Sure, you can blame the model or expectations of methane etc but from the point of view of the relevant audience and the circumstances, I think Michaels was right.
If Hansen said B was “most likely” all he is saying is he expected some emissions controls. Not that B was most likely if we didn’t do anything at all.
I would think that we spend way too much bandwidth on Michaels' and Hansen's personas and too little on what can be learned from this experience. Give Hansen credit for going before Congress and boldly making some predictions/scenarios and, in the end, showing just how difficult it is to get the models and scenarios correct.
The testimony of so-called expert witnesses called before Congress with the intention of affecting policy should be taken with a grain of salt and plenty of your own background study and analysis. How many expert witnesses called the recent financial crisis and falling home prices? Certainly not the consensus of experts.
Yes, Carrick, climate models cannot predict how emissions will unfold. But Hansen's model got pretty damn close considering the forcings. That's what Patrick Michaels wants to hide, and that's why he hides behind the "business as usual" phrase and deletes the rest. No skeptic wants to admit that a climate model has skill in predicting temperature rises with known forcings.
I have posted the ZOD for WG1 Ch4 at:
http://www.davidappell.com/ZODS/
and am looking to host the other chapters. If you have copies and will share, please click on that link and write me.
AMac –
Thanks for the Deltoid quotation. For one thing, it spares me having to type in Hansen’s testimony, because that part is accurate. Lambert’s interpretation of Hansen’s use of “business as usual” would make Humpty-Dumpty proud.
Anteros, in #88823, made the same interpretation as I did in #88815, based on the plain meaning of what Hansen said in his testimony, and the context. It would have been quite easy to say that B is a reasonable guess at future emissions (barring changes in policy), and A is a plausible upper bound; and that would not be taken incorrectly.
Now, in the Hansen et al. paper, it’s described rather differently:
As we’ve seen, even scenario B over-estimated the greenhouse gas forcing, notably that of methane and CFCs. The discussion at CA indicates that the scenarios predate the testimony by several years (which finally explains to me why the scenarios diverged after 1984 rather than closer to 1988), and the Montreal Protocol’s effect on CFCs would not necessarily have been anticipated at that time. By the time of Hansen’s 1988 testimony & paper, however, the Montreal Protocol had been agreed, and it would have been fairer to add the caveat that the CFC trajectory was overstated in both A&B.
@HaroldW
“As we’ve seen, even scenario B over-estimated the greenhouse gas forcing, notably that of methane and CFCs.”
From what I can tell, he didn't. The forcings he considered are reasonable; he just didn't, at the time, have the ability to predict how important other forcings would be, and how they would all pan out. The problem with relying on particulates as a negative forcing is that they only have a short lifetime in the atmosphere, while CO2 has a long one.
The relevant question about Hansen’s 1988 testimony is not whether he got it right, but whether it had more skill than a naive extrapolation. Obviously it had more skill than a prediction of no change, but then so would a linear extrapolation of the previous ten or twenty years.
bugs (#88829)
"As we've seen, even scenario B over-estimated the greenhouse gas forcing, notably that of methane and CFCs."
From what I can tell, he didn’t.”
See #88759 above. Scenario B predicted 1.16 W/m2 in ghg forcing over 1984-2010; actual was 0.83.
One reasonable and interesting question is, “what are the best estimates of future GHG concentrations?”
Another is, “Given particular GHG concentrations (ppmv), what will the forcings (W/m2) be?
A third is, “Given particular forcings, how will future temperatures differ from present ones?”
I think Michaels’ 1998 testimony was misleading in that he conflated these questions, leaving the unwary listener to assume there was only one. Along the lines of, “Do climate scientists like Hansen have a clue about what they are doing?”
Hansen’s skill in 1988 in estimating ppmv’s and thus W/m2 out 20 years in a “business as usual” scenario was poor: Scenario A didn’t come to pass.
But Hansen's 1988 ability to translate ppmv's into W/m2 forcings was presumably pretty good. His Scenario B estimate of total forcing turned out to approximately match reality, and his Scenario B estimate of temperature did, too (about the same as Scenario C did).
We don’t know whether these predictions surpassed contemporaneous naive predictions, such as linear extrapolation (viz #88830).
So Hansen’s performance can be judged good (RealClimate) or bad (Michaels), depending on how you want to look at it. It would have been more respectful of listeners/readers for Michaels to have left Hansen’s figure intact, rather than stripping out B and C.
Re: Amac #88835:
Regarding the discussion of the Hansen scenario, one interesting aspect is that it gives a blueprint of the type of debate we may see 40 years from now regarding who was “right” vs. “wrong” with respect to the climate change issue. Essentially, the 1988 Hansen paper used a model with too high a sensitivity, and described a BAU scenario (A) which overestimated the future forcings.
Now suppose that 40 years from now we encounter a "lukewarming" scenario, where we find that the current set of GCMs yields too high a sensitivity (with actual ECS closer to 1.5 C), and that the current BAU projections overestimate forcings because of future energy technology breakthroughs, so that the climate change issue is deemed of little significance. Apologists for the consensus/IPCC at that time may note that 1) the ECS of 1.5 C actually was within the set of predictions, and that models were skillful other than being a bit too sensitive, and 2) they could not have foreseen these technological breakthroughs. This is essentially the argument we see WRT Hansen 88.
Skeptics may point out that they were right about climate change just being another doom-and-gloom prediction (showing the vast discrepancy between projected and actual warming for BAU), despite many of them being wrong in their predictions of global cooling and/or of CO2 having no effect on climate at all. The lukewarmers will be wishing they had wagered more quatloos.
Troy_CA,
My honest guess is that it is not (and has never been) about the exact extent of warming. Those who support immediate and drastic action do so independent of future warming. Global warming/climate change/climate disruption has always been about a very different view of the Earth and humankind’s relationship to the Earth than the majority view. Global warming is a convenient means for a neo-Malthusian minority to achieve its goals; the real objective is to force people to “change the way they live their lives”. Which is why technical engagement on the technical substance (uncertainty, conflicting evidence, evidence of modest sensitivity, etc.) is so very carefully avoided by those ‘concerned’ about global warming.
.
The real disagreement is about philosophy, morals, values, and the like. The actual outcome (the true ECS, for example) being different from projections will in no way change the nature of the conflict…. there will always be another reason to be concerned about how horrible humankind is for the Earth. The best we can do is make sure the worst of the rubbish is technically refuted, so the majority is not coerced into taking idiotic actions.
I’m late to the party, and I apologize if interest has waned, but I just started looking into the accusations from SkepticalScience (SKS) Boris linked to, and I wanted to share my impression, and see how it meshes with that of others. For the moment I’m only looking at the second and third examples (I want to read up a bit more on the Hansen before commenting on it).
In regard to the Schmittner 2011 paper, SKS says:
Much to my surprise, I agree with SKS. Pat Michaels said this of his figure:
These two quotes are perfectly in line, as is the caption for the figure:
In regard to Gillett 2012, SKS says:
Again, I agree with SKS. Michaels said:
And caption to his figure says:
In all of this, I see absolutely no contradiction. Everybody agrees Pat Michaels eliminated things from figures because they were “inconvenient” to his narrative. Nobody claims he hides his deletions, just that he makes them to give the narrative he wants. That’s my understanding of the situation.
Given this, my impression is SKS is making an issue out of nothing. So what if somebody deletes things from a figure to give the narrative he wants? There is nothing inherently wrong with that. If there are three lines in a figure, but he’s only talking about one, he has no obligation to show all three. He is obligated to inform people he is making alterations, but that’s all he’s obligated to do, and he did it.
Sure, it might be nice for him to be explicit about what is changed and why. It could be helpful if he referred to the various caveats a paper he references makes. But failing to do so isn’t dishonest. He accurately discussed the results of Schmittner 2011. He accurately described a conclusion drawn from Gillett 2012. There was no deception involved in the alterations he made to the figures. Despite this, SKS says (emphasis added):
This is simply untrue. The data Michaels deleted did not contradict his arguments. For example, the individual components of Schmittner 2011’s averaged series do not support/contradict what Michaels says on their own. They can only be taken as a combined result. Moreover, since Michaels removed both individual components, you’d have to say Michaels also deleted data which supports his arguments.
To me, it seems SKS has made an issue out of nothing just so they could smear Pat Michaels, and in the process, they’ve made at least one claim that is both false and nonsensical.
In regard to the Hansen example, this is my impression:
Hansen provided three scenarios. His Scenario A was his “prediction” for what would happen assuming “business as usual.” Michaels discussed the flaws with this scenario. Michaels is now being criticized for not discussing Scenario B or C, the results for which assume things would happen that didn’t actually happen.
As best I can tell, the criticism is Michaels didn’t make it clear in what way Hansen was wrong. Moreover, he didn’t point out if you correct for what Hansen got wrong, you could get a reasonably accurate answer. In other words, Michaels said Hansen was wrong, but he didn’t say if you change Hansen’s prediction, his prediction is kind of right…
Please tell me my impression is wrong.
Sorry for the triple post, but I just saw a comment on SKS I had to share. It approvingly quotes Paul Krugman saying:
Mind you, the user who made that comment is Albatross, a moderator for SKS. The absurdity of his comment is too much for words.
Brandon Shollenberger-
Sorry – no can do. I’m not in a position to tell you it’s right either – but it is remarkably similar to the impression I have myself. Like you, I’ve spent some time at SkS looking at their interpretation of the kerfuffle.
I went over there after wondering if I was as open-minded as I like to think. The discussion here seemed so admirably non-partisan I thought I’d better visit SkS with some goodwill and see if their perspective is merely an equally justifiable narrative to Michaels’.
I honestly don’t think it is. Michaels is being a political operator and making a case for something that suits his agenda. I think he is careful to avoid dishonesty, but makes his case with what is available. Mostly I think it is actually quite fair [particularly regarding the public statements about ‘BAU’, which are the three most important letters in Hansen’s Congressional testimony (and that is the relevant, political, context)]
In sharp contrast, SkS’s agenda is, as Brandon says, to smear Michaels – and also to give the impression of carrying the torch for scientific integrity and the spirit of true scepticism. I find that a little bit repugnant, and it makes me more inclined to give Pat Michaels some leeway when he’s painting his particular picture, which in any event, is not unreasonable.
To re-visit my pet enthusiasm, the IPCC’s first assessment takes Hansen’s public message and makes it even more explicit. In their Overview they say that their (model-based) prediction is that temperatures will rise by 0.3 degrees C per decade (0.2–0.5 range of uncertainty).
They say that this will [because it needs to be spelled out, perhaps] lead to a temperature rise, before the end of the century, of 4 degrees C above pre-industrial levels.
The take-home (or send-home) message is contained in the Overview’s definition of BAU – it says
And I think that it is in the context of this political – and policy relevant – message that the prediction should be assessed.
Re: Brandon Shollenberger (#88843),
If I were the referee judging the SkS – Michaels mud wrestling bout, I would award the Hansen88 round to SkS. Lest SkS celebrate too much, it would be on points, and the decision would be close.
In his Congressional testimony (link in #88835), Michaels paints a picture that is likely to mislead the unwary listener. The focus on Scenario A and the failure to mention Scenario B is a part of that; see #88835.
Michaels’ overarching theme was that Mainstream climate scientists’ projections are total and abject failures. It seems clear to me that any balanced discussion of this issue would have required him to have discussed B as well as A. Per SkS, he did not.
Michaels writes (speaks) well: I don’t have a smoking-gun quote to buttress this contention. However, I think a read of his testimony supports this interpretation.
None of this is to deny SkS’s evasions of what, exactly, Hansen claimed in 1988, or the distortions in their own discussion of Michaels’ distortions.
In SkS’s favor, they provide more than Michaels does in the way of links that allow the wary reader to get beyond their preferred narrative of events. Still, it’s a decision on points.
Re: AMac (Jan 21 09:30),
Michaels is an advocate of a policy. I don’t believe that he hides this under the guise of being policy neutral, but I could be wrong about that. SkS is also an advocate. However, my impression is that they try to hide this by claiming that they’re just correcting false statements. That would make them stealth advocates in Pielke, Jr. (The Honest Broker) terminology. But as advocates, whether stealthy or not, one has to weigh what either of them says by realizing that they are advocates and will leave out of their arguments anything that is inconvenient to their cause.
SteveF (Comment #88841)
I agree with your assessment of the motivations for immediate AGW mitigation, and with that assessment as a basis it becomes a much simpler proposition to make sense of the arguments put forth by the warmists and members of the consensus on AGW.
Primarily, they see no unintended consequences or other detrimental effects coming out of immediate and intense mitigation; further, it is important to note that they apparently do not need much evidence about the extent of future warming and its consequences in order to move forward with mitigation policies.
I think the Michaels case being discussed here raises the issue of whether the so-called skeptic camps have members who put forth counter-arguments to the consensus in the manner of the IPCC, where both sides of the issue are not represented, or at least where no attempt is made to represent all sides of the debate equally. In my view, Michaels indeed should have presented a more complete picture of the scenarios and Hansen’s predictions.
Michaels and Hansen both, I think, missed an opportunity to make the point that predicting climate a century ahead requires not only a reliable climate model but also a scenario that will hold into the future. It is not sufficient to talk only of climate models and their capability – given a scenario – to predict climate; the scenarios themselves must also be gotten right. The take-away from the Hansen lesson is that both are difficult to do, even over the short haul.
These discussions also show that if one is to get a clear view of the issues involved in these debates, it is best to look behind the information being presented in testimonies and in peer-reviewed papers, as something important may have been left out or not given the prominence it deserves.
An interesting sidelight to this discussion would be what Lucia presents in her tracking of the climate models’ capability to forecast global temperatures. The two aspects of that prediction are the models’ capability given a scenario and the scenario itself. Interested individuals are forced to choose amongst scenarios because climate model results are not updated with actual data – not that I am aware of, anyway.
“But as advocates, whether stealthy or not, one has to weigh what either of them say by realizing that they are advocates and will leave out of their arguments anything that is inconvenient to their cause.”
DeWitt, I posted before I saw this in your post, but the point is that the IPCC or skeptics can provide honest information that can be added to one’s information banks – but with the proviso that you are probably not getting all the information, depending on how partisan they are.
I also agree that the stealth issue is real and embraces more controversial issues than just AGW. What I found humorous is that partisans on one side of an issue often claim their sources of information are neutral and the other sides are biased.
Kenneth Fritsch,
I agree that both Michaels and Hansen act primarily as advocates, and shade their presentation of data (including leaving out data that complicates their stories) in every case. IMO, neither is worth listening to if you want to know the truth…. no advocate ever is.
AMac, I’m curious about this comment of yours:
You say Pat Michaels did not give a “balanced discussion.” Of course not. Nobody has claimed he did. Maybe you’d like for him to give one, but he has no obligation to do so. SKS didn’t give one when it wrote that smear piece. Hansen didn’t give one when he testified before Congress. Congress didn’t even expect one from Michaels.
This leaves me with two questions. First, so what? Second, how do you go from saying Michaels didn’t give a “balanced discussion” to saying SKS won “on points”? As I quoted before, SKS said:
Nothing Michaels “deleted” contradicted anything he said. This statement is simply untrue. How can SKS come out as the victor when they made an untrue claim? There were two parts to the prediction Michaels discussed. He said the prediction was wrong without discussing whether both parts* were wrong. The worst you could say of this is Michaels didn’t distinguish between the projection made by the model and the prediction made by Hansen based on that model. That’s not a very damning criticism, especially not when you consider Michaels discussed the possibility that future emissions were overestimated, asking, “Was the increase in greenhouse gases overestimated?”
In a separate issue, I poked around SKS for a while, and I noticed it has discussed the Michaels testimony in a number of different articles on their site. It’s remarkable to see them accusing Michaels of misrepresenting Hansen’s model given how many untrue things they say about it. For example:
In reality, there is absolutely no difference in the predicted growth rates of CFCs between Scenarios A and B. Despite that, SKS offers paragraph after paragraph discussing this issue based upon a complete fabrication. I especially like:
Apparently Scenario A wasn’t BAU because BAU was more like Scenario B, which was identical to Scenario A… Deep stuff.
*For the record, both parts were wrong. Hansen’s model overestimated warming for all three scenarios because it had too high a climate sensitivity.
SteveF:
The only big difference is Hansen arguably has a lot more insight than Michaels does. And there are topics he writes about (like determination of the mean temperature of the Earth) where he stays relatively neutral. Even when he isn’t neutral, he’s usually pretty good at labeling where that boundary is, and disclosing the assumptions he’s made. (That is, whatever else he is, he is not a sloppy-a$$ed hack, unlike certain paleoclimatologists.)
I’ve gotten pretty good insight by reading Hansen carefully. No slam meant on Michaels (it’s just a statement of my experience), but that hasn’t happened for me from reading him. He’s purely an advocate as far as I can tell, who doesn’t do any original research, just spins data.
Anteros:
To me, this is the most important comment one can make. The only two questions I view as important in regards to Michaels are:
1) Is he wrong?
2) Is he dishonest?
If you say no to both of those, he didn’t do anything wrong. SKS’s pieces on him fall apart. Pretty much every conversation about him falls apart.
Brandon:
I think it’s clear that Michaels is an advocate for a particular political/policy strategy, and as far as I can tell, makes no claim to be anything else. SkS is also an advocate, but pretends neutrality, and it is this cognitive dissonance generated by the difference between their words and deeds that is a natural irritant.
I think this is why I ignore Michaels but he doesn’t bother me, and I have trouble ignoring SkS and they do bother me—like many warmingists, pretense of neutrality and the associated higher intellectual ground is part of their appeal to authority.
(It’s almost as grating as the farcical “reality based community” meme that some were spewing during the Bush-Kerry campaign.)
Brandon Shollenberger (#88862)
Your link in the above post to SkS doesn’t work for me. [And the phrases in your quotations don’t ring any bells for Google.]
Can you fix please?
HaroldW, my apologies. I didn’t notice an oddity about that page. Apparently, it is “EMBARGOED UNTIL 24 January 2012.” Presumably, the link was blocked sometime after I accessed the page. Fortunately, I still had the page open, so I can still view it. Perhaps more importantly, I was able to take screenshots of everything I discussed. I suppose this may qualify as breaking an embargo, but I never agreed to it, so meh.
http://i259.photobucket.com/albums/hh289/zz1000zz/SKS1.png
That’s the first image. You can reach the rest by changing the number in the URL. It goes up to ten, and it should cover the entire article. You’ll find both quotes I provided in SKS3.png. By the way, SKS10.png really reinforces the fact I have no obligation to refrain from posting these screenshots. In it, you’ll find a note which says, “Note: This post has been used to update the Advanced rebuttal to ‘Hansen’s 1988 prediction was wrong.'” At the point they’re modifying other articles based on it, it’s definitely fair game.
But yeah, I am sorry about that. I saw the note at the top of the page when I read it, but it didn’t occur to me the date was set in the future. I figured I could follow the link so everyone would be able to.
Carrick, I understand what you mean, but you ought to be careful:
I don’t think any “cognitive dissonance” is actually generated here. Cognitive dissonance is not when two conflicting beliefs are simultaneously held as true. Cognitive dissonance is the distress caused by that happening. If no such distress is caused (such as by people not noticing the conflict), there is no cognitive dissonance. You know SKS is putting up pretenses, so you shouldn’t have any cognitive dissonance in regard to this issue. I suppose people at SKS may be suffering from it, but you wouldn’t be able to know that to be true, and I can’t imagine it would cause you much irritation if you did.
And er… sorry. That really doesn’t matter. I just kind of like definitions, and that’s a phrase which gets misused all the time.
It’s amazing to me how much time and energy Hansen’s supporters are spending trying to prove he was “right”.
In the end, their argument amounts to “Hansen’s scenarios were completely accurate, except he got the climate sensitivity wrong.”
Remember that old joke where the punchline was
“Other than that Mrs. Lincoln, how did you enjoy the play?”
@Brandon Shollenberger (#88872)
Thanks, I was beginning to wonder if I had lost my marbles — the selections certainly *sound* like SkS but why couldn’t I find it?
Presumably their post should be considered “draft reports, prior to acceptance, to be pre-decisional, provided in confidence to reviewers, and not for public distribution, quotation or citation.” [(c) IPCC]. From your selections, it doesn’t sound as though SkS has made any progress on a more balanced perspective, not that I harbor any illusions that they are striving for that. I’ll just leave it at that.
What? No one is saying that. Hansen’s scenarios were obviously wrong, but B was closest. Hansen’s model has a sensitivity of 4C, which might account for his model overshooting reality. Or it might not, because a sensitivity of 4C would still be well within the error bars for the time period in question.
Hansen’s 1984 model was wrong in a lot of respects, but it had skill.
Carrick,
“The only big difference is Hansen arguably has a lot more insight than Michaels does.”
.
Well, maybe. But my impression is that he is an advocate (and a rather extreme one at that) through and through. Trains carrying coal are ‘death trains’… crimes against humanity for energy executives, etc.
.
My carefully studied analysis: he is a total nut cake… and this makes his analyses either outright rubbish or extremely doubtful.
Brandon:
You’re absolutely right… wrong term here.
SteveF:
I’m just saying there are things I’ve learned by reading him. And even when he’s done something I think is wrong in his analysis (like his analysis of variance over time in this paper), it’s still thought provoking.
I can’t think of an example where I’ve learned anything from what Pat has written or even found it particularly thought provoking. This is just my opinion and experiences, so YMMV.
SteveF/Carrick –
I can readily agree with you both. I think the moment Hansen is in a position to use his imagination [ie whenever he thinks about the future] he is utterly off the rails – deluded and cranky. As a climate scientist he may deserve every accolade he’s ever received, but when he starts talking about ‘ineffable disasters’ he sounds deranged – he certainly has no idea what he’s talking about.
Who is claiming that Michaels is a great scientist, or even quite a good one? That’s not his role in life – he is a pragmatic political operator selling something reasonably down to earth. And something that is often a healthy antidote to the feverish claims [by guess who] that coal…
Pat Michaels is probably a bit too right wing for me but I still enjoy his books. Hansen should be corralled into a ‘science-only’ pen, if nothing else, for the sake of the AGW movement…
Carrick,
I will grant that Hansen is more of a ‘real scientist’ than Michaels.
.
But that doesn’t change the painful reality: the West Side Highway in New York is not going to soon be under water due to rising sea level (it probably averages 4-5 meters above sea level!). Executives at energy companies are not going to be tried for crimes against humanity… nor should they be. Trains carrying coal to power plants are not equal, or even remotely similar, to trains heading to Auschwitz loaded with Jews. Hansen is so disconnected from reality that you trust anything he says at your peril…. he is a nut-cake, pure and simple, and I have no doubt that his nuttiness influences everything he thinks and says.
.
Personally, I have no interest at all in dealing with the ravings of such crazies.
When I disparage Hansen, I tend to be saying nothing about his science. Probably I’m not in a position to legitimately judge.
Similarly, when I think of someone I think of in the same light – Paul Ehrlich – I don’t evaluate him as a biologist. He may well have been the most insightful, brilliant biologist of the 20th century. I know him for his stupid, wrong-headed, fear-drenched prophecies concerning demographics.
Hansen’s Scenario C forcings all peaked between 1997 and 2000. The total forcing grew by only 50% of the estimates of what actually happened (despite dana1981’s unending ability to make up new data).
Scenario C temp prediction for 2011 was +0.588C
GISTemp for 2011 (which is on the same baseline) was +0.52C
So forcings grew twice as much as Scenario C, yet Scenario C is higher than the temperature to date.
Why does 50% show up so often?
Bill Illis –
Uncanny. The FAR BAU prediction was 0.3C per decade leading to 2 degrees above pre-industrial by 2025. Guess what? The warming rate after 22 years is… 50% of that.
Boris,
Folks can decide for themselves if the extensive discussion threads linked below “are consistent with” (sorry) my comment.
http://wattsupwiththat.com/2012/01/17/a-response-to-skeptical-sciences-patrick-michaels-serial-deleter-of-inconvenient-data/#comment-867910
http://wattsupwiththat.com/2011/06/07/creating-an-agw-quotation-collection/#comment-678273
But, it’s always helpful to have a graph to look at, especially when folks say things like “but B was closest”.
http://i41.tinypic.com/169i49d.jpg
(Hope my links don’t trigger the spam-o-matic)
John M
Great links. Cogent argument. Very convincing graph.
And they insist Hansen was ‘prescient’?
Too funny 🙂
The FAR did not account at all for aerosols, though. And 0.3/decade was an average for the whole 21st century, so that’s not a valid comparison to the last 22 years.
How can you determine which scenario was closest in terms of forcing by looking at a graph of temperature? Please explain.
Boris –
The FAR BAU prediction was about 0.2K increase from 1990-2000, and 0.3 K /decade for the entire 21st century. The graph is quite linear — did you even look at it before commenting?
Anteros –
Well, I make the FAR’s prediction for 2 K above pre-industrial to be closer to 2030, but your point is correct.
Actually, taking the OLS trend of the HadCrut3 series Jan. 1990 – Nov. 2011, I get around 0.015 K/yr, so about 0.33 K over 22 years. The FAR BAU prediction was about a 0.56 K rise over 22 years, taking it as 0.02 K/yr for 1990-2000 and 0.03 K/yr thereafter.
So by that metric (Hadcrut3 OLS), the actual is about 60% of the FAR BAU projection.
However, when estimating *changes* (as opposed to trends), I’ve never believed that OLS is a sensible method. In this case, the OLS trend is increased by Pinatubo so close to the start of the interval, and decreased by the unusually low 2008. So let’s try it another way. The average temp [Hadcrut3] for the 12 months preceding 1990 was 0.18 K; the average temp for the most recent 12-month period is 0.34 K; a difference of 0.16 K. A far greater discrepancy from the FAR projection with this method.
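For anyone who wants to reproduce the two estimates, here is a minimal sketch of the calculations described above – mine, not HaroldW’s actual script – assuming a hypothetical file hadcrut3_monthly.csv with “date” (YYYY-MM) and “anomaly” (K) columns:

```python
# Sketch: compare two ways of estimating the 1990-2011 change in HadCRUT3
# global anomalies. The file name and column names are assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("hadcrut3_monthly.csv", parse_dates=["date"])
window = df[(df["date"] >= "1990-01-01") & (df["date"] <= "2011-11-30")]

# Metric 1: OLS trend over the whole interval, scaled to a 22-year change.
years = window["date"].dt.year + (window["date"].dt.month - 0.5) / 12.0
slope, intercept = np.polyfit(years, window["anomaly"], 1)   # K per year
ols_change = slope * 22.0

# Metric 2: difference between 12-month means at each end of the interval
# (the 12 months preceding 1990 vs. the most recent 12 months).
start_mean = df[(df["date"] >= "1989-01-01") & (df["date"] < "1990-01-01")]["anomaly"].mean()
end_mean = window.tail(12)["anomaly"].mean()
endpoint_change = end_mean - start_mean

print(f"OLS-based change over 22 yr:      {ols_change:.2f} K")
print(f"12-month endpoint-average change: {endpoint_change:.2f} K")
print("FAR BAU projection over 22 yr:    0.56 K (0.02 K/yr to 2000, 0.03 K/yr after)")
```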
Harold, I’m not sure I follow your argument on using or not using trends. OLS isn’t ideal, because of its sensitivity to impulsive events like Pinatubo, especially when they occur near end points, but you can minimize the sum of absolute deviations (or “MAD”, sometimes also called a “median fit”). But what you find isn’t so big of a difference.
Using HadCrut3gl over the full interval, I get 0.144 °C/decade using OLS and 0.142 °C/decade using MAD. So in this case, the difference isn’t so much. It would be a much bigger deal if you were just using, say, 1990 to 1999, because of Pinatubo near the beginning and the major ENSO event in 1998:
OLS 0.253°C/decade
MAD 0.148°C/decade
The big advantage of regression over e.g. differences in end points is that with regression, as you increase the distance between the end points, you improve rejection of short-period climate noise. With simple differencing, there is almost no effect on the uncertainty in the difference in the interval chosen (the only way the signal to noise improves is the AGW signal presumably gets larger as you increase the interval).
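A minimal sketch of the OLS-versus-MAD comparison Carrick describes, using synthetic monthly data in place of HadCRUT3gl (the trend, noise level, and 1998-style spike below are illustrative assumptions, not the real series):

```python
# Fit a line to monthly anomalies by least squares (OLS) and by minimizing the
# sum of absolute deviations ("MAD" / median fit), as in the comment above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = 1990 + np.arange(120) / 12.0                      # 1990-1999, monthly
y = 0.015 * (t - 1990) + rng.normal(0, 0.1, t.size)   # linear trend + noise
y[95:100] += 0.4                                      # a 1998-style ENSO spike near the end

# OLS: closed form via polyfit.
ols_slope, ols_icpt = np.polyfit(t, y, 1)

# MAD: minimize sum |y - (a*t + b)| numerically, starting from the OLS solution.
def sad(params):
    a, b = params
    return np.abs(y - (a * t + b)).sum()

mad_slope, mad_icpt = minimize(sad, x0=[ols_slope, ols_icpt], method="Nelder-Mead").x

print(f"OLS trend: {10 * ols_slope:.3f} K/decade")
print(f"MAD trend: {10 * mad_slope:.3f} K/decade")
```

The design point is that the least-absolute-deviations fit is far less sensitive to a spike near the end of a short interval, which is why the 1990–1999 OLS and MAD trends diverge while the 1990–2011 ones barely do.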
Boris,
If you meant “B was the closest in terms of forcings”, maybe you should have written “B was the closest *in terms of forcings*.”
But anyway, if the argument is about climate sensitivity, I kinda sorta think temperature has to be in there someplace.
Or maybe Hansen is actually a climate psychologist, and he meant something else by climate “sensitivity”.
Carrick –
Perhaps it’s nitpicking, but if someone asks what the change is from (time)A to B, I don’t think it’s appropriate to provide a metric which depends on the path taken. (Which OLS, or MAD, do.)
This isn’t a perfect analogy, but let me try…You want to know whether your net worth is more now than it was, say, twenty years ago. Does the answer really depend on how high the value of your portfolio was during the tech bubble, or how low after the credit crunch?
Basically, it comes down to this: OLS (or MAD) is an excellent metric if the subject variable has a linear trend (plus noise, or noise-like, effects). If someone asks for a trend, well, that presupposition is already present; let’s go ahead and use OLS. But if one is asking for a change, I don’t see the reason for making that assumption. I take your point about wanting to gain some protection against fluctuations by including more data points. Which is why I took an average (in this case, 12 months) rather than a difference of individual months. Perhaps a longer interval would be superior; I happened to have the 12-month running averages handy…
At any rate, knowing that the OLS method would be the one least likely to raise eyebrows, I included that calculation first. And in retrospect, the question really is more about the trend over the interval than the absolute change.
So, as Gilda Radner (as Emily Litella) put it so well…never mind. Go with the OLS-based figure.
HaroldW:
Yep I agree. And the other point is that if you don’t have a definite trajectory that the temperature curve follows (as is the case here), then there will necessarily be additional things you have to specify, such as start and stop intervals as well as the function you fit to.
HaroldW@88920
The quoted prediction wasn’t my take on a graph, but a paraphrase of the words they used in the Summary. They did actually specify 2025 but I admit that in a moment of caution they did say ‘about’ 2 degrees…
The caution is welcome but I think given the audience [policy makers of the world] the ‘about’ wasn’t something bandied about in subsequent years, and that is why I tend to ignore it. I don’t think it is dishonesty!
Just a quick update. It seems the SkepticalScience article I linked to before is now open for public viewing. It appears to be the exact same as when I viewed it initially.
I don’t intend to post on SKS (I don’t trust them to treat my comments fairly), but I think it’ll be interesting to watch the comments section of that page. It already provides this little gem from dana1981:
I got a kick out of that. I know people have argued about just what BAU means, but this would be a bit of an extreme attempt at redefining the phrase.
Of course, it’s just a mistaken word choice. His next sentence makes it clear he doesn’t actually mean “BAU” is having the same “rate of emissions” as in previous years.
Edit: I spoke too soon. The link did change, and apparently so did some of the content. That said, both of the quotes I provided are still in the article, so it shouldn’t affect anything.
So, it turns out SKS decided to split that article into two parts. That’s one big change from what I first saw. However, that wouldn’t change any of the material.
On the other hand, some of the content did change. The second figure in the article was modified, and some text was added to the section it’s in (there may be other changes as well). I won’t dwell on the changes, but I have to point out this doozie:
This is a dumbfounding remark. There is no meaningful divergence in the predicted CO2 forcings between the three scenarios provided by Hansen until the year 2000. If Pat Michaels had done “this simple check” recommended by dana1981, he would have found all three scenarios were equally consistent with the CO2 record.
So here we have SKS making a baseless accusation which is extremely stupid in order to criticize someone it doesn’t like. Raise your hand if you’re surprised.
Boris –
Not quite, or not only. The FAR pointedly says that the increase [under BAU] will lead to 2 degrees above pre-industrial by 2025. They have the caveat that they say ‘about’ 2 degrees, which is fair enough but as far as I know has never been repeated by politician, newspaper article or blogger – it’s not very sexy/dramatic.
They also have the caveat that the 0.3 degrees per decade will not be even (which is reasonable), so give an interim point at 2025 – 35 years for the noise to even out.
Still, how is that prediction looking two thirds of the way there?
***
Don’t forget that the definition of BAU was “few or no limits to the emissions of greenhouse gases“
P.S. I’ve lost my FAR report, so quote from memory. The word ‘limits’ might be incorrect. ‘Restrictions’?
Brandon Shollenberger (Comment #88983),
What I find interesting about the revised article is that at the end they make a blanket statement that climate sensitivity is not low. They choose of course (as is their wont) to ignore clear evidence, such as much lower than expected ocean heat uptake (ARGO), which points to considerably lower sensitivity than the middle of the IPCC sensitivity range, and consistently discount any peer reviewed publication which suggests lower sensitivity.
.
So long as SKS chooses to ignore and/or discount the most convincing evidence contrary to their position, it is impossible to take their claim of ‘honest skepticism’ seriously. Truly skeptical people will consider them nothing more than the advocates they behave like. If they were honest about their analysis, they would accept that there is considerable evidence in support of much lower sensitivity than the middle of the IPCC range, and that we do not know for sure what the climate sensitivity is, but that evidence is mounting that it is not nearly so high as many have claimed.
.
One thing SKS does not need to be at all skeptical of is the reality that confirmation of lower climate sensitivity would drastically reduce the interest in and urgency for immediate draconian changes in energy use. I rather suspect that is the issue they are most concerned with.
Predicting 2 degrees above preindustrial in 2025 would be over .4C/decade from 1990. That doesn’t really jibe with their other projections. Not saying you are wrong, but I’d need to see the quote in context to really evaluate it.
Of course, we know a lot more than we did then, so I’m not sure why the FAR is important now.
SteveF
They actually just say that there is a lot of evidence that CS isn’t low, which is true.
This is generally true of warmers. (Skeptics obviously have the opposite problem.) The truth is there is some evidence for low sensitivity, a lot of evidence for the IPCC range and some for the high sensitivity. Personally, I don’t find the arguments for high or low sensitivity very convincing, but I think it’s important that we acknowledge such evidence exists. I’d like to see more articles at SkS and elsewhere taking down really high sensitivity estimates, but James Annan does that pretty well.
I don’t think this is true at all. But I will agree that SkS is an advocacy site. I liked it better when it was more neutral. But then it was more focused on popular arguments against AGW (e.g. “Mars is warming.”)
I for one would welcome such confirmation. I’m pretty sure the folks at SkS would too, even if it means the climate skeptics would get to crow about getting the right answer. 🙂
Brandon:
Let me guess…without notification that it had been changed.
That sir is SkS BAU.
Boris,
I could be mistaken, but I think they would hate it if climate sensitivity were shown to be low, because I think their primary motivation (just as with many people who are involved with global warming/climate science) is a desire to substantially reduce humanity’s “environmental footprint” on the Earth as a purely moral issue. Greatly reduced energy consumption is a key part (indeed, probably THE key part) of that process. I am reminded of many fundamentalist Christians’ early reaction to the spread of AIDS: God is punishing the wicked. Much the same for catastrophic global warming and material wealth: nature will punish wicked, wasteful humanity.
.
So on this, we will probably have to agree to disagree. 🙂
Boris –
This is the source of my info. It’s the Overview of the FAR, and I do find myself at odds with people with other sources. The BAU prediction is on page 2, I think.
http://www.ipcc.ch/ipccreports/1992%20IPCC%20Supplement/IPCC_1990_and_1992_Assessments/English/ipcc_90_92_assessments_far_overview.pdf
I agree that we know much more than we did in 1990. I suppose the reason I think the FAR is still relevant is that the mean best estimate of the 5 most likely AR4 scenarios [i.e. not including B1] is exactly 3 degrees by 2100, which is uncannily identical to the prediction from 22 years ago.
That’s not a problem, but the observational record over the last 22 years in the light of that repetition is quite relevant. Or, if you like, it seems to need a fair bit of explaining.
It perhaps puts a hefty burden on the unexpected effects of aerosols.
Carrick:
To be fair, I apparently wasn’t supposed to see the earlier version of the article. Because of that, there was no deception here. I stumbled upon a rough draft, and it got changed before it was published. No big deal.
That it got changed doesn’t bother me. What bothers me is it was changed to include a baseless accusation used to criticize someone. It seems strange to me they would update an article “at the last minute” by adding such a stupid paragraph. Did someone read the article, decide it wasn’t harsh enough, and just add the first random thing which came to mind? It sure seems that way.
Boris (#88992):
“Predicting 2 degrees above preindustrial in 2025 would be over .4C/decade from 1990. That doesn’t really jibe with their other projections.”
From Anteros’ link in #88997, the FAR overview said:
“This [BAU emissions] will result in a likely increase in the global mean temperature of about 1 degC above the present value by 2025 (about 2 degC above that in the pre-industrial period), and 3 degC above today’s value before the end of the next century (about 4 degC above pre-industrial).”
In the FAR Summary for Policymakers, Figure 8 contains the temperature projections for business-as-usual. [By the way, the words just above that figure match exactly the Overview quote above.] The figure isn’t easy to read, as it lacks grid lines, but by enlarging the figure one can extract reasonably accurate numerical values. According to Figure 8, the temperature by 1990 (date of the FAR) was already 0.9 K above that of 1765. The temperature curve (for the “best estimate” of climate sensitivity) rises about 0.2 K for 1990-2000, and around 0.3 K/decade over 2000-2100. Using those slopes, 2025’s temperature is projected to be 0.2 K + 2.5 decades*0.3 K/decade = 0.95 K above 1990’s, or 1.85 K above 1765’s (hence, “about 2 degC”). [As I wrote above in #88920, I made it closer to 2030 that the projected global temperature would reach 2 K above their baseline of 1765 temperature. But I think the IPCC was more interested in saying what would happen at a certain year, namely 2025.]
The values I give above are only approximate, and you may well be able to make more precise measurements, but I think the words in the Overview do jibe with the figure from the SPM.
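As a quick cross-check of that reading, here is a back-of-the-envelope sketch using the approximate slopes extracted above from Figure 8 (my own approximation, not the IPCC’s numbers):

```python
# Approximate FAR best-estimate BAU warming above the 1765 baseline, in K,
# using ~0.9 K already by 1990, ~0.02 K/yr to 2000, and ~0.03 K/yr thereafter.
def far_bau_above_1765(year):
    if year <= 2000:
        return 0.9 + 0.02 * (year - 1990)
    return 0.9 + 0.2 + 0.03 * (year - 2000)

print(far_bau_above_1765(2025))  # ~1.85 K, i.e. "about 2 degC" by 2025
# First year the curve reaches 2 K above pre-industrial:
print(next(y for y in range(1990, 2101) if far_bau_above_1765(y) >= 2.0))  # ~2030
```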
Part 2 has now been posted. The comments section is pretty ridiculous. You’ll remember, in the first part of the article, dana1981 said:
As I pointed out before, this is a stupid statement. The difference in CO2 values for the three scenarios was negligible. If the actual CO2 record was consistent with one, it would be consistent with all three. The stupidity is heightened now as in a comment on the new article, dana1981 said:
dana1981 freely admits there is no meaningful difference in the three values, yet he criticized Pat Michaels for not checking which value was consistent with reality… But it gets better. The conversation actually continues with Tom Curtis saying:
And dana1981 agreeing:
Apparently, a 2.03 ppm difference really matters to these people. Imagine how they’d feel if Michaels had testified six months later. Then they would have to look at the 1998 data instead of 1997 data:
Scenario A: 367.18
Scenario B: 366.90
Scenario C: 364.81
Measured: 366.65
Would you look at that? With the 1998 data in, the actual values were more consistent with Scenario A than Scenario C! Apparently six months is all it takes to greatly alter the criticisms SKS would level. It just goes to show, if a completely negligible difference exists which offers no meaningful support for something SKS likes, SKS has no problem promoting it as being significant. If it fails to offer such support, SKS won’t discuss it.
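A quick way to see the point, using the 1998 concentrations quoted above (a sketch only; the scenario values are taken from the comment, not recomputed from Hansen’s paper):

```python
# Rank Hansen's three scenario CO2 values (ppm) by distance from the measured
# 1998 value, as listed in the comment above.
scenarios = {"A": 367.18, "B": 366.90, "C": 364.81}
measured = 366.65

for name, value in sorted(scenarios.items(), key=lambda kv: abs(kv[1] - measured)):
    print(f"Scenario {name}: off by {abs(value - measured):.2f} ppm")
# B is nearest (0.25 ppm), then A (0.53), then C (1.84) -- and the spread
# between scenarios is so small that "consistency" with any one of them
# says very little.
```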