Unfortunately, I was only able to attend one day of the AGU this year, due to my current job thinking less highly of extracurricular activities than my prior company did. Conveniently enough, however, both my poster and talk were scheduled for Tuesday.
Our AGU Poster. Click to massively embiggen.
The poster we presented will likely be of interest to folks here. We compared the effectiveness of the Berkeley and NCDC homogenization processes both on real-world data (where only raw data with biases is available, but no ground truth is known) and eight synthetic worlds created by sampling the temperature fields of six different GCMs at the locations of the 7000-odd U.S. CO-OP stations to generate synthetic station data. This synthetic station data had artificial biases added to it, with different worlds having different types of biases (big breaks, small breaks, positive trend bias, negative trend bias, documentation, no documentation, false documentation, etc.). Peter Thorne created the synthetic worlds for the Williams et al. (2012) paper that came out last year (well worth a read!), and we adapted them for this additional test. We also set up a system such that we were blinded to the true trends of each world and the types of artificial errors introduced until we finished our analysis. That way we could not consciously or subconsciously tune the algorithms to obtain a better outcome.
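For readers who want a concrete picture of what “adding artificial biases” means, here is a minimal sketch in Python. It is not the actual benchmark code (the real worlds come from GCM fields sampled at the CO-OP locations); the series, break counts, and break sizes below are made up purely for illustration.

```python
# Illustrative only: inject step-change inhomogeneities into a synthetic monthly
# anomaly series, roughly in the spirit of the Williams et al. (2012) analog worlds.
import numpy as np

rng = np.random.default_rng(42)

def add_artificial_breaks(series, n_breaks, break_sd):
    """Add step-change offsets at random times; each offset persists to the end."""
    biased = series.copy()
    break_times = sorted(rng.choice(len(series), size=n_breaks, replace=False))
    for t in break_times:
        biased[t:] += rng.normal(0.0, break_sd)  # e.g. a station move or new sensor
    return biased, break_times

# A fake 60-year "true" monthly anomaly series: ~0.1 C/decade trend plus weather noise
months = 60 * 12
truth = 0.01 * np.arange(months) / 12.0 + rng.normal(0.0, 0.5, months)

# Two hypothetical "worlds": a few big breaks vs. many small ones
big_break_world, big_times = add_artificial_breaks(truth, n_breaks=3, break_sd=0.8)
small_break_world, small_times = add_artificial_breaks(truth, n_breaks=12, break_sd=0.15)
```

The documentation / no-documentation / false-documentation variants are then, loosely speaking, a matter of which of those break times get handed to the algorithm as station metadata.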
The overall results were quite promising for both algorithms, though the Berkeley method did not perform as well when metadata was excluded. However, after the poster went to print Robert Rohde made some improvements to the Berkeley method’s breakpoint detection that brought it more in line with NCDC in the no-metadata case. These tests are only an initial step, as they only really consider step-change breakpoints rather than more gradual trend biases. I am happy to relate, however, that our JGR paper examining U.S. UHI and the extent to which it is handled by homogenization has been accepted for publication, so I will have much more on that soon.
The next steps for the project we presented this year are to separate out the breakpoint detection schemes (Berkeley, NCDC, and the new Bayes Factor approach) from the homogenization methods (Berkeley and the PHA) to create six permutations of possible homogenization/breakpoint detection methods. This will allow us to better evaluate which method performs best in detecting breakpoints and homogenizing the detected breakpoints. We would also like to see how well the results hold up (especially in the no-metadata case) when the network is much sparser, to get a better sense of how well these methods will work in the rest of the world, which (by and large) is more sparsely sampled.

Hi Zeke,
One issue that came up when I discussed the results with Steve Mc. and Anthony at dinner on Tuesday was the concern about bad stations swamping good stations.
This gets to the nub of the issue with station quality, for example.
If you have one ‘good’ station and 10 bad ones, what does homogenization do?
Thinking through our algorithm, I’m pretty sure the scalpel would leave the ‘good’ stations alone (need to test). But if it didn’t, if it cut a good station, then the reweighting process would have the opportunity to reweight it, and potentially shift it toward the regional expectation.
Anyway, maybe I can persuade Anthony to give me at least a couple of his newly defined ‘good’ stations and we can test that.
In a nutshell, the basic concern is that if you have 1 good station and 10 bad ones, ‘empirical’ approaches that work to minimize differences between stations will effectively move the good toward the bad.
Of course the problem is how one decides that a station is ‘good’. Hmm.
Zeke,
“due to my current job thinking less highly of extracurricular activities than my prior company”
Surely jobs do not think. Bosses, yes (at least sometimes), ‘management’, yes (in a group-think sort of way), but jobs… nah. 😉
Zeke,
I did not make it to your poster session, sorry.
I attended the AGU session chaired by John Cook (the climate blogger) entitled “Facebook, Twitter, Blogs: Science Communication Gone Social ‘The Social Media 101’”. I thought his chairing an AGU session on climate science communication amusingly ironic . . . to say the least. A SIDE NOTE: Lewandowsky was scheduled to do a poster session at the AGU meeting. I tried to see Lewandowsky but couldn’t connect with him. He seemed to be in association with John Cook because of Cook’s unusually heavy citing of Lewandowsky in a Cook presentation a couple of days later at the AGU meeting.
I heard your talk at that Cook session which you entitled ‘The Role of science blogs in communication and collaboration (invited)’.
I think it would make for interesting discussion if you made a post on one of your presentation charts. I suggest you post the chart with the varied-sized blue circles representing all the blogs on climate, including IPCC-supporting ‘consensus’ ones, skeptical ones, very technically statistical ones and so-called ‘lukewarmer’ ones. That chart’s interpretation might be enjoyable here at Lucia’s place.
Also, I noted what appeared to me to be your very positive reviews of the SkS site relative to what I think you implied was the lesser value of the more open skeptical sites. Would you please clarify what you said in that regard? I may not have perceived what you said correctly because you were moving right along in the all too short ~13 minutes you were allowed to talk. And my note-taking ability is wanting, for sure.
I saw Steve McIntyre in the audience at Cook’s session, but he later said to me at the conference that he had unfortunately missed your talk because you were first in the session and he arrived later.
Enjoyed my first-time AGU experience . . . although I think the climate blog discussions are a better format for meaningful dialog than the AGU meeting.
John
Is there something I’m missing? I keep looking at the bottom of the middle section and seeing panel d show BEST producing warmer trends than NCDC or the true trends. That seems like it can’t be right.
Mosh,
Only if all 10 bad stations had a simultaneous inhomogeneity that was not present in the good station. As long as inhomogeneities are relatively temporally stochastic, it should be possible to alias out each one separately.
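A toy illustration of that point (my own quick sketch, not the Berkeley or NCDC code; noise levels and break sizes are made up): staggered breaks show up in every pairwise difference series against the offending station, while identical simultaneous breaks cancel between the bad stations, so the only apparent “break” is in the good station’s comparisons, which is exactly the pathological case Mosh describes.

```python
# Toy pairwise-difference check: staggered vs. simultaneous breaks among neighbors.
import numpy as np

rng = np.random.default_rng(0)
n = 600                                      # months
climate = rng.normal(0, 0.5, n)              # shared regional signal

def station(break_month=None, offset=0.8):
    s = climate + rng.normal(0, 0.2, n)      # local weather noise
    if break_month is not None:
        s[break_month:] += offset            # step-change inhomogeneity
    return s

def largest_step(diff, window=60):
    """Crude break statistic: biggest jump between window-month means of a difference series."""
    m = np.convolve(diff, np.ones(window) / window, mode="valid")
    return np.max(np.abs(m[window:] - m[:-window]))

good = station()                                         # no break
staggered = [station(b) for b in (150, 300, 450)]        # same-size breaks, different times
simultaneous = [station(250) for _ in range(3)]          # identical breaks at the same time

print("staggered:    good vs bad", [round(largest_step(good - b), 2) for b in staggered])
print("staggered:    bad vs bad ", round(largest_step(staggered[0] - staggered[1]), 2))
print("simultaneous: good vs bad", [round(largest_step(good - b), 2) for b in simultaneous])
print("simultaneous: bad vs bad ", round(largest_step(simultaneous[0] - simultaneous[1]), 2))
# In the simultaneous case the bad-vs-bad differences look clean, so the pairwise
# evidence points at the good station -- the 1-good/10-bad concern in a nutshell.
```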
.
Steve F,
Fair enough. I should have said the somewhat cantankerous billionaire CEO.
.
Brandon,
That graph is slightly misleading, since Berkeley has a much lower spatial resolution and the color schemes don’t have that many gradations. The middle left trend graph labeled World 1 shows the results more clearly, and both Berkeley and NCDC perform almost identically in terms of overall CONUS trends for that particular synthetic world.
.
John,
I’ll post that graph at some point; I got a fair amount of feedback from folks on it (it was pretty qualitative to start with!), and want to make a few tweaks. Regarding skeptical science, I complimented its approach to having Basic, Intermediate, and Advanced sections for each topic area as a good way to engage a broader audience. It serves a very different role than more technical blogs like this one.
Zeke, I’m having trouble understanding how BEST’s figure could show universally higher temperature trends yet get the answer right. Could you explain? I would think limited temperature gradations would increase imprecision not introduce spurious warming.
When I look at those maps, the impression I get is BEST is less precise and less accurate than NCDC. I don’t think that was the intent.
@Mosher
“This gets to the nub of the issue with station quality, for example.
If you have one ‘good’ station and 10 bad ones, what does homogenization do.”
The ten “bad” ones are all going to have the same rise in temperature that is too high over the same period of time. This pattern will be repeated globally. So while the world is not warming, the cryosphere is retreating. Weird. How could that be.
bugs (Comment #107222) —
sorry too cryptic
Mosher offered the hypothetical of 1 ‘good’ & 10 ‘bad’ stations to illustrate a technical concern. Zeke gave a general response in Comment #107218.
Zeke gave a response, but the ‘technical concern’ is also known colloquially as ‘clutching at straws’ or ‘arm waving’. I don’t know why McIntyre and Watts even bother any more.
Why do you not present your plots as difference spectra?
You use model map minus known map, and this shows only your bias/error.
Both reconstructions are pretty crap. Indeed, these look like the MRI reconstructions of phantoms in the ’80s, but they had the excuse of only having 64K.
Zeke,
“the somewhat cantankerous billionaire CEO”
If my company’s operating profit looked like that I might be somewhat cantankerous too. 🙂
But I’m no billionaire; for him it is probably more like a hobby.
Bugs
“The ten ‘bad’ ones are all going to have the same rise in temperature that is too high over the same period of time. This pattern will be repeated globally. So while the world is not warming, the cryosphere is retreating. Weird. How could that be.”
That is not the issue. I am trying to represent as best I can the concern that others voiced. It is a technical issue. In the end I think it is best answered by defining a test case and running the algorithms.
“When I look at those maps, the impression I get is BEST is less precise and less accurate than NCDC. I don’t think that was the intent.”
Correct. The resolution of the map is just a consequence of the gridding that Robert happened to select for display. Hmm, I think he used 1/4 degree.
To compare the methods you really want to look at the tables.
DocMartyn,
The four maps on the bottom are primarily instructional, to make it easy for passersby to grok what we were doing. The more interesting results are the 8 world trend plots in the middle, since what we were really interested in is CONUS-wide reconstructions.
One other somewhat interesting (albeit expected) result is that homogenization doesn’t really do anything when the data has no breakpoints and perfectly reflects the underlying climatology (e.g. World 8).
Zeke, we have millions of years of evolution to thank for our pattern recognition software and we might as well use it.
The major discontinuity is where land meets water; this does not appear to be tested for in any of your ‘worlds’.
I would just run the two again but make Illinois, Missouri, Arkansas and Louisiana lakes. Then see if you have the same temperature differences.
Doc.
Yes, the regression equation only considers latitude, longitude and altitude.
I suppose one refinement would be to add a factor for ‘distance from water’. Some folks who krige temperature do in fact use this as part of the regression. The complication there is defining what constitutes a “body” of water. Still, I’ve noted in some regressions that if you add that term you get a higher r^2, which would result in tighter bounds on the estimate of temps.
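A rough sketch of what that refinement looks like (synthetic numbers, plain least squares rather than the actual kriging, and a made-up distance-to-water effect), just comparing r^2 with and without the extra term:

```python
# Synthetic illustration: regress station climatology on lat/lon/elevation,
# then add a hypothetical distance-to-water predictor and compare R^2.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
lat = rng.uniform(25, 50, n)           # degrees
lon = rng.uniform(-125, -65, n)
elev = rng.uniform(0, 3000, n)         # metres
dist_water = rng.uniform(0, 500, n)    # km (made-up predictor)

# Fake "true" mean temperature: cooler poleward and aloft, plus a continentality effect
temp = 30.0 - 0.7 * lat - 0.0065 * elev + 0.004 * dist_water + rng.normal(0, 1.0, n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print("lat/lon/elev only:      R^2 =", round(r_squared([lat, lon, elev], temp), 3))
print("plus distance-to-water: R^2 =", round(r_squared([lat, lon, elev, dist_water], temp), 3))
```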
Zeke, below I have linked a table that summarizes my analysis of the condition that I had found rather puzzling when I calculated the differences (amount of adjustment) between the GHCN Adjusted and Unadjusted temperature series for various countries. I found that the average amount of adjustment varied significantly by country. I did further calculations where I categorized the absolute difference found between the Adjusted and Unadjusted temperatures for all GHCN stations. I used 7 categories where the measure derived was calculated by subtracting the monthly mean Adjusted and Unadjusted GHCN series and taking the mean value (over months) of the difference series. In this manner we are not merely looking at differences that would be expected to be greater with greater length of the difference series but in effect looking at differences per month.
I further calculated the average length of the difference series between the reference station series and its 40 nearest neighbors for each category. In addition, for each category I calculated the standard deviations of the difference series of the Adjusted and Unadjusted series with their 7 nearest neighbors and the average distance to the reference station’s 40 nearest neighbors.
Without going into a lot of detail on the table linked below, I would simply like to point to the relationship between the difference categories and the average overlap of the difference series of the reference station with its 40 nearest neighbors, and the length of the reference series itself. Knowing the limitations of detecting breakpoints, and particularly smaller and trending breakpoints, with shorter time series, my analysis points to short station series perhaps “hiding” legitimate breaks due to non-climate effects. I have been able to use simulated data without breaks, and then introduce breaks of various types and sizes, to show that these shorter series do indeed lead to a loss in the breakpoint method’s ability to find the introduced breaks.
Of course, my analysis would bear primarily on the issue of breakpoint detection and consequential adjustments and not those adjustments made from documented station changes. I emailed the GHCN people mainly to determine whether they planned in the near future to provide, by individual station, the number of adjustments made and whether they arose from undocumented or documented changes. I asked the same question of Steven Mosher about BEST providing this information, and am now asking you also.
When you benchmark these homogenization algorithms are the station lengths critically taken into account and/or tested?
I was rather surprised to learn that GHCN has official historical data that would allow them to make documented changes to US stations only. Does BEST have historical data from non US stations that allows for making adjustments for documented changes?
http://imageshack.us/a/img819/7646/ghcndiffvsoverlap.png
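Zeke, in case it helps to see the statistic concretely, here is a small Python sketch of the per-station measure I describe. The data layout is hypothetical and the numbers random; a real run would read the GHCN adjusted and unadjusted files, and the bin edges are illustrative, not the categories in my table.

```python
# Hypothetical sketch: mean (over months) of |adjusted - unadjusted| per station,
# then binned into magnitude categories. Not an actual GHCN file reader.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Stand-in for a tidy table of station-months with adjusted and raw values (deg C)
df = pd.DataFrame({
    "station": np.repeat(["ST001", "ST002", "ST003", "ST004"], 240),
    "tavg_adj": rng.normal(10.0, 5.0, 960),
    "tavg_raw": rng.normal(10.0, 5.0, 960),
})

def mean_monthly_adjustment(group):
    """Average absolute adjustment per month over the station's record."""
    return (group["tavg_adj"] - group["tavg_raw"]).abs().mean()

per_station = df.groupby("station").apply(mean_monthly_adjustment)

# Magnitude categories (illustrative bin edges)
bins = [0, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, np.inf]
print(pd.cut(per_station, bins=bins).value_counts().sort_index())
```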
All this is fun intellectually. But does any government really take global warming seriously enough to take significant action? Not if the results of COP 18 in Doha, which ended yesterday, mean anything. Did you even hear or read about it? Probably not. I heard a brief mention on NPR, and that was probably on the BBC news hour. It was the usual disaster. The major offenders in increased emission growth, China and India, excuse themselves from taking action until the rich countries like the US, Canada and Japan take action. But emissions from the rich countries are becoming less and less important compared to China and India. As of 2010, the combined emissions from those two countries nearly equaled twice the emissions from the US, and in 2011 they emitted more than twice as much. And their emissions are increasing rapidly (see China, for example), faster than the developed world could possibly cut theirs. At 11 Gt CO2/year, China and India in 2011 emitted nearly half what the entire planet emitted in 1990.
Some people think the Kyoto Protocol was a useful first step in controlling global carbon emissions. To me it shows the complete lack of seriousness towards the issue as most of the signatories never met their goals. Those goals, btw, are trivial compared to what would actually be needed to stabilize the atmospheric CO2 concentration. Mitigation simply isn’t going to happen any time soon.
A far more pressing and interesting “technical issue”. One point of interest, China is also the biggest driver of lowering the cost of alternate sources of energy. The price of solar panels has plummeted thanks to the Chinese, for example. IIRC, the biggest renewable projects in the world are being built by the Chinese.
Wikipedia (I know, this article is putting a positive point of view), but China is doing a lot towards energy independence.
http://en.wikipedia.org/wiki/Renewable_energy_in_the_People's_Republic_of_China
Re: bugs (Dec 8 16:01),
You don’t get it. China is doing the same thing with solar panels they did with rare earths. They’re selling them at a loss to drive everyone else out of business. China takes a long view on this sort of thing.
That Wikipedia article reads like a press release so it probably is. They pretty well maxed out on hydroelectric power a long time ago and the rest is a drop in the bucket. Those numbers for wind and solar are likely peak capacity. Actual power generated will be much less. The only way to balance the grid using wind and solar without actually increasing emissions is to vary the amount of hydroelectric power generation. So the net renewable power probably hasn’t changed much. I’ll believe they’re serious about reducing CO2 when their CO2 emissions stop increasing exponentially, they stop commissioning new coal fired power plants at a rate of 1/week or so and they stop using lack of action in the developed world as an excuse to increase their emissions.
A pretty immature standoff; as adults maybe we should just do the right thing. Then the other side has no excuse any more.
China isn’t making a loss on panels. They have just created a volume of manufacturing that benefits from economies of scale. One of the problems that has been plaguing renewable energy is that a vital part of the implementation has been missing, so it is constantly dismissed for being too expensive.
Let’s factor the impact on human misery into the cost of cheap Chinese solar panels, bugsy.
http://www.chinahush.com/2009/10/21/amazing-pictures-pollution-in-china/
@Mosher
“That is not the issue. I am trying to represent as best I can the concern that others voiced. It is a technical issue. In the end I think it is best answered by defining a test case and running the algorithms.”
I think from past performances on their blogs, a “technical issue” becomes another headline scandal. Or it would be if Michael Mann’s name was on the paper.
“China isn’t making a loss on panels”
Bugs knows. He has the data. Mike Mann told him.
bugs.
“I think from past performances on their blogs, a ‘technical issue’ becomes another headline scandal. Or it would be if Michael Mann’s name was on the paper.”
That doesn’t mean it’s not a technical issue. Can you stop being stupid for two seconds? Unlike you, I choose to put aside the political and personal issues. They raise an interesting issue, and I think that if I look at it I will be more prepared to answer questions in the future from other people who may have the same issue.
“You don’t get it. China is doing the same thing with solar panels they did with rare earths. They’re selling them at a loss to drive everyone else out of business. China takes a long view on this sort of thing.”
It’s clear to anyone who has done business in China. Looking at fiascos like Solyndra I just shake my head. What’s worse is that the guys running that company knew damn well what it is like to compete against the Chinese. But then they were playing with OPM: other people’s money.
Actually, most Chinese module makers are hurting.
“In many eyes, particularly of those of foreign entities, LDK is already bankrupt and only artificially held upright by the government.”
“Yingli Green Energy has posted a substantial loss for Q3 2012. The net loss of U.S. $152.6 million is an increase on the loss of $92 million in Q2, and $29 million in Q3 2011.”
“The analysts forecast that about 180 existing module manufacturers will either expire or acquiesce to acquisition by 2015. The report estimates that 54 of the 180 ill-fated firms will come from China. Most of these are so-called “solar zombies” – companies with manufacturing capacities of less than 300 MW that have operated with the advantage of government. China’s number of ill-fated firms could be much higher if not for an aggressive downstream build-out that will prop up select domestic suppliers.”
Read more: http://www.pv-magazine.com/news/details/beitrag/dead-manufacturers-walking–study-predicts-solar-firms-survival_100008863/#ixzz2EWAFZvFD
Read more: http://www.pv-magazine.com/news/details/beitrag/domestic-module-shipments-hoped-to-lift-q3-results-for-the-chinese-solar-industry—-_100009128/#ixzz2EW9bneTS
That is how it works, Tom.
Check out cell phones. Overproduce with government help. Crush the market so foreign competitors see no chance at ROI. Shake out. The strong or well connected survive.
It’s a formula. Product development and volume are not demand driven, they are capacity driven. Keep the plants running at full capacity.
Mosh: cell phones are demand driven. Everybody needs one, or believes they do. Solar products are not so much. They benefit from artificial demand created by governments and others who hope that forces other than the market will drive demand. I know you will disagree, but it seems to me that CAGW was at least in part conceived to help create this artificial demand, which is based on incomplete and/or inconclusive science, no matter what the IPCC says.
@Mosher
“That doesn’t mean it’s not a technical issue. Can you stop being stupid for two seconds? Unlike you, I choose to put aside the political and personal issues. They raise an interesting issue, and I think that if I look at it I will be more prepared to answer questions in the future from other people who may have the same issue.”
You act that way, then you start the old routine on blogs about irrelevant garbage about FOI requests or other nonsense whenever you lose the nice, new, reasonable persona you have been trying to create. If you could actually rise above that, I might believe you, but the old Mosher keeps re-appearing.
Bugs: “You act that way, then you start the old routine on blogs about irrelevant garbage about FOI requests or other nonsense whenever you lose the nice, new, reasonable persona you have been trying to create.”
I don’t see two Moshers. I suspect you only see two Moshers when you are as ekstremely myopic as you are, Bugs.
And “nice, new” are not adjectives I would use to describe Mosher anyway 🙂
What I am most interested in is the distribution of the break-points/discontinuities.
Are there more flagged break-points that are going down versus going up?
What is the temporal distribution of the down and up discontinuities?
If there is an underlying warming trend, does that mean more down break-points will be flagged than ones going up? Does that then imply the scalpel discontinuity/break-point algorithms themselves will just result in an accelerated up trend versus the true raw trend?
The Berkeley Earth methods paper says they scalpeled the 7,280 GHCN stations into 44,840 “effective” stations. The methods paper says this is expected to be neutral in terms of affecting the trend. But is that actually true?
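To be concrete about the bookkeeping I have in mind, something like the following Python sketch (the ‘detected break’ list is random stand-in data, not actual scalpel or PHA output) would show the up/down counts, their sizes, and how they are spread in time:

```python
# Stand-in diagnostic: tally the sign, size and timing of detected break offsets.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical detected breaks as (year, offset in deg C); negative = step down.
# A real check would take these from the scalpel / PHA output instead.
breaks = list(zip(rng.integers(1900, 2012, 500), rng.normal(-0.05, 0.3, 500)))

offsets = np.array([o for _, o in breaks])
print(f"down breaks: {(offsets < 0).sum()}, up breaks: {(offsets >= 0).sum()}, "
      f"mean offset: {offsets.mean():+.3f} C")

# Trend neutrality requires the mean offset to stay near zero in every era,
# not just overall, so split the tallies by period as well:
for lo, hi in [(1900, 1955), (1955, 2012)]:
    era = np.array([o for y, o in breaks if lo <= y < hi])
    print(f"{lo}-{hi}: {len(era)} breaks, mean offset {era.mean():+.3f} C")
```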
Zeke, any chance of your replying to my post above.
I think perhaps Bugs, like a number of others I hear, implies that the instrumental temperature record is set in stone while those like myself see it as a work in progress. I see the evidence for the recent warming and plateauing as rather good but still see a need for better uncertainty limits being applied for that time period. Going back in time, let us say past 1920, I see more uncertainty in the temperature records and an uncertainty that is not well defined.
I think the work that I see being done at GHCN and BEST is critical to other studies, even when the changes found might appear to be at the margins. Recently GHCN found a 10% greater trend in the global temperature from 1900 to present in going from the previous version to the current one. I have not gone back and made this calculation with the newest GHCN version or with BEST, but the controversial subject of tropospheric warming being less than the surface warming, counter to what most climate models predict, could be closer to significance with the newer temperature versions and data sets.
I am hoping that more critical analyses and studies are made of the instrumental temperature data that go beyond merely attempting to confirm what has already been found.
Testing these algorithms as Zeke’s poster describes is no easy task although a necessary one if we are to place reasonable uncertainty limits on the data and methods.
One can, I would suppose, determine a reasonable background temperature series for these benchmarking tests, but the difficulty is in determining and placing realistic non-climatic temperature effects onto the background. Obviously one is attempting to find these effects by using the algorithm/method, and one is now required to put these same effects into simulations of the temperature.
It might not be well appreciated, but one can devise non-climatic effects that, while not necessarily realistic, could make the algorithm fail badly in finding the truth. It is important to point to those kinds of potential effects and determine under what conditions they might apply to the real world. To the extent that I see this not being done in benchmarking tests, or given only cursory attention, I am disappointed.
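As a toy example of the sort of effect I mean (my own Python sketch with a crude split-point t-test, not SNHT or the PHA), the same 0.6 C of non-climatic bias is easy to flag when it arrives as a step and much harder when it arrives as a slow ramp:

```python
# Toy failure mode: step bias vs. the same total bias applied as a gradual ramp.
import numpy as np

rng = np.random.default_rng(3)
n = 480  # 40 years of monthly values

def max_t_statistic(x, min_seg=24):
    """Largest two-sample t statistic over all candidate split points."""
    best = 0.0
    for k in range(min_seg, len(x) - min_seg):
        a, b = x[:k], x[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        best = max(best, abs(a.mean() - b.mean()) / se)
    return best

noise = rng.normal(0, 0.5, n)
step = noise + np.where(np.arange(n) >= n // 2, 0.6, 0.0)   # abrupt 0.6 C shift
ramp = noise + np.linspace(0.0, 0.6, n)                      # same 0.6 C, spread gradually

print("no bias:", round(max_t_statistic(noise), 1))
print("step   :", round(max_t_statistic(step), 1))   # large and sharply located
print("ramp   :", round(max_t_statistic(ramp), 1))   # roughly half the signal, no clean break date,
                                                     # and a single-step correction leaves residual trend bias
```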
Bugs,
It is an economic and political impossibility that developed nations are going to fall on the sword to set a good example for the Chinese, Indians, and to a lesser extent, a host of other developing countries. Voters will not impoverish themselves for no measurable reduction in future warming. If you are half as concerned about the influence of rising atmospheric CO2 levels as you purport to be, then you should dedicate your life to pushing for improved energy efficiency and rapid nuclear development everywhere, but especially where CO2 emissions are growing most rapidly. So long as you keep to the “must be renewables” story line, few will take your concern seriously. Your shtick is amusing, yes, but superficial and silly.
“Voters will not impoverish themselves for no measurable reduction in future warming.”
Tell that to the Germans and British and Spanish electricity consumers who are paying way more for electricity than they should if not for the big green renewables scam.
Add Oztralia and the broke nation-state of California to your list, Bruce.
“theduke (Comment #107249)
December 8th, 2012 at 11:36 pm
Mosh: cell phones are demand driven. Everybody needs one or believes they do. Solar products are not so much.”
####################
You misunderstood. Part of the way the Chinese will operate in a market is to build to capacity, NOT to demand. They open the plants and then run them at capacity. Build as much as you can.
”
What I am most interested in is the distribution of the break-points/discontinuities.
Are there more flagged break-points that are going down versus going up?
What is the temporal distribution of the down and up discontinuities?
If there is an underlying warming trend, does that mean more down break-points will be flagged than ones going up? Does that then imply the scalpel discontinuity/break-point algorithms themselves will just result in an accelerated up trend versus the true raw trend?
The Berkeley Earth methods paper says they scalpeled the 7,280 GHCN stations into 44,840 “effective” stations. The methods paper says this is expected to be neutral in terms of affecting the trend. But is that actually true?
##############################
As I recall it was trend neutral. Over the next couple of weeks we will be dropping a new release. The code should be easier to run, so any detailed questions you have can be answered by looking at the code or running whatever you like.
SteveF (Comment #107254)
“It is an economic and political impossibility that developed nations are going to fall on the sword to set a good example for the Chinese, Indians, and to a lesser extent, a host of other developing countries. Voters will not empoverish themselves for no measurable reduction in future warming.”
SteveF, I am not so sure that you have captured the political realities of the matter. Take the issue, seemingly contradictory to AGW, of the huge government programs of Social Security and Medicare, with those failing programs’ even larger unfunded liabilities. Around the globe, nations of the world and the leading intellectuals in those countries are in big-time denial about the need to fix these systems. The issue of AGW is approached by the ruling intellectuals as a matter of saving the globe for future generations. That contradiction is rather easily understood when, in the case of failing government programs, the big-government intellectuals do not want to admit those failings, whereas AGW is an opportunity for enlarging the role of already big government.
AGW mitigations by governments will not be sold on the basis of impoverishing the voters, but rather probably on the basis of some crisis the intellectuals can connect, no matter how tenuously, to AGW. In the US the immediate influence of voters has been avoided by the recent rulings that will allow the EPA to regulate CO2 emissions and the consequences thereof. The EPA will not fall from the graces of the voting public anytime soon, or to the extent that its charter will be changed, since such changes would be touted by its defenders as a step back into the pre-EPA days of pollution.
Obviously the easiest approach to mitigation by governments is to front-load the regulations and back-load as much as possible the cost to the taxpayer and voting public. Worse comes to worst, I could see the US government, and probably other big-government nations of the world, subsidizing the detrimental effects of mitigation and paying for it through indebtedness to future generations and/or printing money. If someone wants to point them to the morality of off-loading AGW mitigation, as we have SS and Medicare, onto future generations, the intellectuals have the middle-finger Keynesian reply of “we are all dead in the long run”.
Bugs.
Only you would fail to understand the importance of FOIA.
To keep the discussion technical, we need the code and the data. If you don’t give the code and the data when you publish, then we ask nicely. If you say no, we ask the journal. If the journal says no, we use FOIA. If you break the law fighting FOIA, then YOU have made it a non-technical discussion.
You guys have never understood this.
Re: Kenneth Fritsch (Dec 9 11:34),
The problem is that you can’t do that with mitigation. The effect of any regulations that would have a significant effect on emissions is negative and immediate while the benefits are future and uncertain. In quality management positive reinforcement terms, this is the worst possible case. Somewhat the same goes for fixing Social Security and Medicare. A large part of the unfunded liability in Social Security could be fixed by reductions in benefits to future retirees rather than pushing Grandma off the cliff today. But Medicare was pretty much hopeless even before the passage of Medicare Part D and adding the Affordable Care Act to the mix will only bring on the collapse sooner. But of course that’s the plan, just like the so-called fiscal cliff. Another recession would be an excuse to expand government even more. In case you hadn’t noticed, that was the real purpose of the ‘stimulus’. Any increase in economic activity was purely coincidental and unintended.
Zeke, Mosh, someone – I’d be interested in a comparison of BEST and NCDC data for areas where historical adjustments have become notorious, eg: Iceland. Is that a simple job for we simple folk?
DeWitt Payne (Comment #107261)
“The problem is that you can’t do that with mitigation. The effect of any regulations that would have a significant effect on emissions is negative and immediate while the benefits are future and uncertain.”
Actually if you subsidize the negative effects of regulations by borrowing against future generations you would have the rationalization that that borrowing would put the cost, or at least some of it, onto the generations that would benefit the most – by proposing a scary future scenario that if allowed to occur would be devastating to future generations.
Now you could logically reply that deficit spending and onerous regulation to the degree required could either result in ultimate bankruptcy or bring the economy to a grinding halt, but that is definitely not what the intellectuals of our day judge would happen. A prime representative of those media-favored intellectuals, by the name of Paul Krugman, has certainly advocated for more debt and government spending as a cure for our current economic ills. He talks about his hopes for a threatened space invasion (and avoids the politically incorrect reference to a war here) to promote a crisis that would create more government spending on a cause with no return on investment, assuming a real invasion is not threatened. Could not AGW mitigation serve his purposes here?
Oh, and by the way, the current intellectual answer, that is evidently acceptable to the media and the public, is that no matter what occurs following government intervention and action “it would have been worse without the intervention/action”. With that reasoning these interventions/actions never fail.
DeWitt Payne (Comment #107240)
December 8th, 2012 at 4:17 pm
“They(China) pretty well maxed out on hydroelectric power a long time ago and the rest is a drop in the bucket.”
The ‘max’ of China’s hydro resources is about 560 GW. They plan to have 400 GW online by 2020, up from about 220 GW now. That has been their plan since 2010 and they are on schedule. First go up the big dams with big reservoirs, then you add ‘run of the river’ later.
See Columbia River Hydro Dams for an example of how it’s done.
http://en.wikipedia.org/wiki/List_of_dams_in_the_Columbia_River_watershed
Grand Coulee is about 7 GW. There is around 20 GW on the Columbia River (US side).
Grand Coulee involved flooding lots of land… most of the rest took advantage of river flow with some minor flooding. You could hit a golf ball across the ‘lake’ formed by Chief Joseph Dam; flooding an area smaller than a golf course isn’t that big a deal.
Total US new generating capacity demand, absent retirements, between now and 2040 is 200 GW (about 7 GW/year) according to the latest US EIA figures, which have historically always been high (better to have too much capacity than too little!).
So if we split that 7 GW/year between the various technologies (1 GW for wind, 1 GW for solar, 1 GW for nuclear, 1 GW for offshore wind, 1 GW for geothermal, 1 GW for ‘clean coal’ and 1 GW for gas), everyone would starve to death. The fact that we would have to split each of the technologies between multiple manufacturers in order to have ‘competition’ makes it even worse.
So to make sure the ‘pie’ would be big enough for all the manufacturers to achieve economies of scale, we would have to force massive retirements of existing generating capacity (lots of screaming from the owners of existing capacity and lots of screaming from people who are going to have to pay for ‘new’ generating capacity they don’t need) or pick ‘winners and losers’ up front. Picking winners and losers up front creates lots of screaming as well. Of course some of the winners will be chosen based on ‘political considerations’ and we will spend a lot of money deploying technology that makes no sense. (See UK solar panel subsidies… they are a ‘winter peak’ country… the solar panels don’t work when they need the electricity most.)
In China… the size of the ‘electricity generating capacity pie’ is so big (50+ GW/year) that every technology can get a slice of pie big enough to achieve economies of scale.
At the moment…anyone with an even marginally viable energy technology is ‘building away’ in China and having a nice meal. The ones without a ‘financially viable’ technology are whining and crying in Washington that there isn’t enough money in the US Federal pork trough for them to eat and that the world isn’t fair.
2012 Capital Investment in Electric Generating Capacity in China was running $2 for clean capacity for every $1 in coal capacity.
The Chinese can easily say ‘solar’ should get a minimum of 10% of the pie(they have) and that will keep the most efficient of their solar manufacturers in business. 5GW per year for solar is a pretty big pie. The least efficient manufacturers will go under and there is plenty of talk about ‘consolidation’ in the Chinese Solar panel industry.
On the nuclear side in China… Areva, Westinghouse and Rosatom are all eating well. Just last week the Russian energy minister managed to flog China another pair of VVER-1000s and left behind some sales brochures for a ‘fleet’ of floating nuclear power plants. Rosatom’s ‘demonstration’ floating power plant should start operation next year.
Windmill installations in China are running at 15+GW per year with a 2020 target of 200 GW.
The Chinese have also set aside some money for a ‘wave’ energy demonstration project and an offshore wind demonstration project.
They just started looking at the potential of geothermal this year and will in due course set aside a small portion of their massive energy pie so geothermal can have a nice meal as well. Probably not until the 13th power plan, which begins in 2016.
Sorry, but the Chinese domestic ‘all of the above’ energy plan is impressively well thought out given the resources they have.
They are burning a lot of coal, but ‘spinning up’ alternative energy industries and figuring out workable grid integration strategies takes time. They’ll build coal to fill in the shortfall until such time as there is no shortfall. Coal is expensive in China… it costs about $60/ton to ship a ton of coal from Wyoming to China. Not exactly cheap considering the ‘average’ price of US coal delivered to electric utilities is about $40/ton.
.
bugs, Steven Mosher, and others —
I’m typically delighted to encounter one of Zeke’s analyses of temperature records (it was an earlier post of his that arrested my development as an ‘unbeliever,’ and pushed me into the lukewarmer camp).
I’ve added my two cents to the side discussion (FOIA etc.) at the recent open thread. (You need not submit a bet on UAH to post!)
Re: harrywr2 (Dec 9 14:43),
And if you believe all that central planning is actually going to work, I have this bridge I’d like to sell you…
I sense a massive bubble just waiting to burst.
Kenneth Fritsch,
I think the political realities of higher energy costs and concurrent loss of competitive ability take some time to make their way to the ballot box, but ultimately that can’t be avoided; I even expect it is going to happen in places like Germany, the UK, and Australia. The parallel with Social Security and Medicare is not a good one. Those programs will be modified to reduce costs as the baby boomers rapidly drive the ‘trust funds’ toward insolvency, and make clear to all that politicians have simply promised more than can be paid for.
DeWitt Payne (Comment #107268)
December 9th, 2012 at 5:29 pm
“I sense a massive bubble just waiting to burst.”
Chinese per capita domestic steel consumption is about 2 times US domestic steel consumption. Their per capita domestic cement consumption is about 5 times US consumption. Steel and cement account for about 1 billion tonnes of their coal consumption.
There are lots of bubbles that are going to burst.
State-run industries frequently run with zero return on investment.
Only about 20% of installed solar panel costs are the solar panels. The rest is due to ‘labor intensive’ things. Labor is cheap in China. The domestic solar panel business will do okay, probably with no return on investment… but what’s the return been on ‘Government Motors’?
Medicare is fully funded until 2024 according to the most recent trustees’ report. Before the ACA was passed, that date was expected to come 7 years earlier.
http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/ReportsTrustFunds/Downloads/TR2012.pdf
http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/ReportsTrustFunds/downloads/tr2009.pdf
cce,
The trustees make clear up front that they must calculate based on implementation of current law as written, but go on to basically say that is very uncertain, since it will require reductions in the growth of costs which are historically unprecedented (AKA, very unlikely). The most likely outcome is exhaustion of the fund long before the current official projection. The only way to avoid that is to set the benefits age at 67 and then index the age for benefits to average lifespan. Even with those changes, other measures to control costs (limiting services) and higher taxes will be needed. There is no free lunch.
cui bono (Comment #107262)
December 9th, 2012 at 1:20 pm
Zeke, Mosh, someone – I’d be interested in a comparison of BEST and NCDC data for areas where historical adjustments have become notorious, eg: Iceland. Is that a simple job for we simple folk?
############
Sure. Go look at our version of Iceland and pull the data.
What we can do is take any shapefile (err, see ArcGIS) and then grab that portion of the whole field. We are adding provinces and states for the largest countries as well.
Let me add that there is another dataset in preparation from Peter Thorne, called ISTI.
This will probably be a bit larger than BEST, because it looks like they might use the Environment Canada data, which I showed them how to scrape off the web (7000 stations and no FTP… yikes).
Hmm, as it stands they are at 39,000 stations. I haven’t finished comparing them with the BEST collection.
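If you want to play with the “grab that portion of the whole field” step yourself, here is a bare-bones Python sketch (not our code): mask the grid cells that fall inside a polygon and take an area-weighted mean. The ‘Iceland’ polygon below is just a crude box; a real run would load the outline from an actual shapefile (e.g. with pyshp or geopandas).

```python
# Mask a lat/lon gridded field with a polygon, then area-average the masked cells.
import numpy as np
from matplotlib.path import Path

# Hypothetical 1-degree global anomaly field for one month
lats = np.arange(-89.5, 90.0, 1.0)
lons = np.arange(-179.5, 180.0, 1.0)
field = np.random.normal(0.0, 1.0, (lats.size, lons.size))

# Very rough stand-in for an Iceland outline, as (lon, lat) vertices
iceland = Path([(-24.5, 63.3), (-13.5, 63.3), (-13.5, 66.6), (-24.5, 66.6)])

lon2d, lat2d = np.meshgrid(lons, lats)
inside = iceland.contains_points(np.column_stack([lon2d.ravel(), lat2d.ravel()]))
mask = inside.reshape(field.shape)

# Cosine-of-latitude weighting for the regional mean
weights = np.cos(np.radians(lat2d))
regional_mean = np.average(field[mask], weights=weights[mask])
print(f"cells in region: {mask.sum()}, regional mean anomaly: {regional_mean:+.2f} C")
```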
I’ve usually seen plots of how mean temperature has changed with time, but not how Tmin and Tmax have changed. I was surprised to see that the 1998 El Nino produced an appreciably larger change in Tmin than in Tmax. Is there anything interesting to learn from a graph of Tmax-Tmin vs. time?
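For what it’s worth, the graph Frank asks about is just the annual mean of Tmax minus Tmin (the diurnal temperature range). A tiny Python sketch with stand-in data, not any particular station:

```python
# Diurnal temperature range (Tmax - Tmin) averaged by year; replace the stand-in
# series with real station or gridded Tmax/Tmin data.
import numpy as np
import pandas as pd

dates = pd.date_range("1950-01-01", "2011-12-01", freq="MS")
seasonal = 10 * np.sin(2 * np.pi * dates.month / 12)
tmax = pd.Series(15 + seasonal + np.random.normal(0, 1, len(dates)), index=dates)
tmin = pd.Series(5 + 0.9 * seasonal + np.random.normal(0, 1, len(dates)), index=dates)

dtr = tmax - tmin
dtr_annual = dtr.groupby(dtr.index.year).mean()
print(dtr_annual.head())
# dtr_annual.plot() would then show whether the Tmax-Tmin gap is widening or narrowing
```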
The future, of course, is uncertain, but the “most likely outcome” based on current law and available evidence is the date given in the report. This is beside the point, since contrary to popular belief, the ACA extended the life of Medicare the old-fashioned way: by raising revenue and cutting spending. It also tries a lot of pilot programs to “bend the cost curve” but they are given little credence in these reports.
FWIW, here is a chart of “projected years of solvency” since 1990. Medicare has been projected to be “solvent” an average of 14 years from the dates the reports were issued, with a low of 4 years in 1997 and a high of 28 in 2001 and 2002. The current estimate is 12 years.
http://healthaffairs.org/blog/wp-content/uploads/Goldberg-Figure-1.jpg
Steven Mosher (Comment #107260)
December 9th, 2012 at 11:35 am
You have never understood that the aggression and hatred has set up a totally dysfunctional situation. Every release of code and data has not advanced the science. Zeke has done what any scientist would: some hard yards, his own work, and verified completely independently the work of the scientists so publicly and ferociously vilified. You don’t need anyone’s code and data. In fact, when data and code have been released, because this was an emerging technology, which was evolving as the science progressed, code and data management was in its infancy. Much has been lost, not due to a massive conspiracy, but just due to the state of the art. Data was proprietary. Much of the FOI has nothing to do with code and data. The Model E code was released, and promptly ignored. One brave soul muddled around with it, had no idea what he was doing, and everyone lost any interest in it.
To date, we have seen no vicious and relentless attacks on the UAH satellite code. It has nothing to do with research; it’s about hatred, conspiracy theories and a group madness that has obsessed a vociferous few.
Look at what Zeke has done. Not an unkind word, some hard work, and some results that to a large extent confirm what we already knew. That is not a waste of time, confirmation is important. But he has achieved far more with his posts here, alone, than McIntyre ever has or ever will.
Bugs writes “You have never understood that the aggression and hatred has set up a totally dysfunctional situation.”
The aggression and hatred stems from the fact that someone would dare to audit their work simply to find flaws with it. How dare they question an honest scientist’s work and put his credibility into doubt!?!
The scientific method is a continual process of research and questioning. This wasn’t questioning how to advance science, but attacking individuals. Model E: how many people looked at the source code? Very close to none. If the questioning had been honest, it would have wound up at the same place Zeke and BEST arrived at. How many FOI applications did Zeke have to lodge to write his poster?
Thanks Steven. I’ll take a look.
Re: cce (Dec 9 19:35),
Can you say “smoke and mirrors”? It’s fully funded because the ACA cuts Medicare reimbursements by $716B through cuts to hospital reimbursements, gutting Medicare Advantage and many other smaller cuts. There are additional cuts to physician reimbursements that were put in place in 1997 but have always been overridden by Congress. If they aren’t overridden, expect to see a lot of doctors dropping out of Medicare altogether because their Medicare reimbursements would drop by 27%. My bet is that none of these cuts, except possibly Medicare Advantage, will actually happen. Smoke and mirrors.
Re: bugs (Dec 10 05:09),
Correct. The only thing you neglect is that the individuals that were attacked first were McIntyre and McKitrick. The climategate emails prove this beyond a shadow of a doubt.
cce,
Only a government program would use a bogus measure like years of solvency. The standard is unfunded liability, which was $38.6T in that same 2012 trustees’ report. And that’s only for people alive today, i.e. 75 years into the future.
DeWitt:
I think he’s neglecting a lot of other things here besides that. For example, when McIntyre and others were trying to get CRU to release their data and code via FOIAs, that data and code obviously were not available yet. They are now, so Zeke is able to make use of the work that other people have done in getting these organizations to become more open (that includes GISTEMP in my opinion).
bugs is also now in the awkward position of attacking the people who made it possible for people like Zeke to do what he does, while simultaneously praising Zeke as if he solo’d all this by himself, instead of relying in part on the hard work of other people, including Steven Mosher.
The hypocrisy of bugs’ crowd never ceases to amaze.
I suppose I should feel bad about being so off topic here, but I do admit I feel less bad since Zeke has apparently shown us some of his work and then disappeared – at least temporarily.
The talk of an actual trust fund for handling future Medicare and Social Security payments is a myth that allows politicians and those either ignorant of the facts or merely stringent defenders of these programs to talk about funding data that is misleading to the uninformed public. The trust funds contain IOUs for revenues that the government spent on other programs. The only critical issue for the funding of SS and Medicare is the amount of revenue required to fund them from general revenues currently and in the future.
Anyone interested in the truth of the matter can read the results from the trust fund report linked and excerpted from below and Table II.B1 in the link. That table shows that Medicare in 2011 required $222 billion from general revenues while at the same time having a shortfall of revenues to expenditures of $35 billion.
I did not find a reference to it on skimming this link, but I know that the CBO discusses this issue in its reports and that issue is that every year Medicare payments to providers are supposed to be reduced and every year Congress does not enforce these planned reductions. Yet these reductions are used to determine the future financial condition of the funding of Medicare and the life of the “Trust Fund”. And a reduction in those payments could result in many provider participants leaving the program.
I also disagree with SteveF’s implication that these programs will be fixed by simple fine-tuning of eligibility age or increased payroll taxes. These programs are not referred to as the third rail of politics as some exaggeration. Does one see the NYT or Washington Post talking about these problems and carefully explaining to the public the trust fund myth and the real burden on current expenditures and debt? There is a rather extensive denial in the intellectual community about any deficiencies of big government. Even proposals to reduce benefits for the wealthier participants in these programs are rejected by those who evidently feel this to be too much of an admission of failure.
“HI expenditures have exceeded income annually since 2008, and projected amounts continue doing so through the short-range period until the fund becomes exhausted in 2024.
..The SMI trust fund is adequately financed over the next 10 years and beyond because premium and general revenue income for Parts B and D are reset each year to match expected costs.
..The difference between Medicare’s total outlays and its “dedicated financing sources” reaches an estimated 45 percent of outlays in fiscal year 2012, the first year of the projection. Based on this result, Federal law requires the Trustees to issue a determination of projected “excess general revenue Medicare funding” in this report. This is the seventh consecutive such finding, and it again triggers a statutory “Medicare funding warning” that Federal general revenues are becoming a substantial share of total financing for Medicare. The law directs the President to submit to Congress proposed legislation to respond to the warning within 15 days after the date of the Budget submission for the succeeding year.
..Transfers from the general fund are an important source of financing for the SMI trust fund and are central to the automatic financial balance of the fund’s two accounts. Such transfers represent a large and growing requirement for the federal budget. SMI general revenues currently equal 1.5 percent of GDP and would increase to an estimated 3.0 percent in 2086 under current law (or to 4.4 percent under the full illustrative alternative to current law).”
http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/ReportsTrustFunds/Downloads/TR2012.pdf
Kenneth,
Sorry for the delay in getting back to you. I was in a cabin in Tahoe over the weekend and was subject to many distractions from blogging :-p
Your analysis looks quite interesting, and I suspect that your conclusion (regarding the shorter series hiding breakpoints) is on the mark. I haven’t explicitly looked at how the length of the record affects breakpoint detection, but I will bring it up on our next call as an area to explore as we work to turn this poster into a more detailed paper.
On the metadata subject, it is used by USHCN. I’m not sure if non-U.S. GHCN stations use metadata in their adjustments, though in general station histories are rather meager in most of the world.
You might also be interested in a new method of breakpoint detection (the Bayes Factor approach mentioned in the poster) that the NCDC folks are pursuing:
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00052.1
http://www.cicsnc.org/science-meeting-2011/pdfs/presentations/Zhang,%20Jun%20Presentation.pdf
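For a rough flavor of what a Bayes-factor-style test does (this is only a BIC approximation I put together for illustration, not the method in the Zhang et al. paper linked above), you compare the evidence for a one-break model of a difference series against a no-break model:

```python
# BIC-approximated "Bayes factor" for one break vs. no break in a Gaussian series.
# Purely illustrative; the linked paper computes proper Bayes factors.
import numpy as np

def gaussian_bic(residuals, n_params):
    """BIC of an i.i.d. Gaussian fit, using the MLE variance of the residuals."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + n_params * np.log(n)

def approx_bayes_factor(series, min_seg=24):
    """Evidence for 'one break somewhere' over 'no break', via exp((BIC0 - BIC1)/2)."""
    bic0 = gaussian_bic(series - series.mean(), n_params=2)          # mean + variance
    best_bic1 = np.inf
    for k in range(min_seg, len(series) - min_seg):
        resid = np.concatenate([series[:k] - series[:k].mean(),
                                series[k:] - series[k:].mean()])
        best_bic1 = min(best_bic1, gaussian_bic(resid, n_params=4))  # 2 means, variance, break time
    return np.exp(0.5 * (bic0 - best_bic1))

rng = np.random.default_rng(11)
clean = rng.normal(0, 0.3, 360)
broken = clean.copy()
broken[200:] += 0.5

print("BF, no break   :", round(approx_bayes_factor(clean), 2))    # near 1
print("BF, 0.5 C break:", f"{approx_bayes_factor(broken):.2e}")     # enormous
```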
Kenneth,
“I also disagree with SteveF’s implication that these programs will be fixed by simple fine tuning of eligibility age or increased payroll taxes.”
What about if the age for benefits is moved to 74 and means tested? Things which can’t possibly continue will not continue, and these are things which can’t possibly continue in their present form. The only question is the extent of benefits reductions needed to make the system sustainable in the future, with ‘the future’ being limited to the potential political lifetimes of politicians. So maybe 10-15 years max is the window of concern about solvency for political whor… um, er… ‘politicians’.
.
Since giving medicare and social security to wealthy people is the easiest change to make (politically), means testing will be a first step, but that will not be nearly enough, since there are not so many wealthy people; increasing the age to start receiving benefits (a lot!) is inevitable.
Zeke,
Back on topic:
I noted that the performance of the Berkeley algorithm in accounting for trend bias (at least with the synthetic data) depends very much on the use of metadata. What fraction of CONUS stations and stations outside the CONUS actually have suitable metadata? Based on that, how well do you expect the Berkeley algorithm does with biases in CONUS data? With biases in world-wide land data?
#107295,
Sorry, “Since giving medicare and social security to wealthy people” should have been “Since NOT giving medicare and social security to wealthy people”…
SteveF,
As I mentioned in the original post, “after the poster went to print Robert Rohde made some improvements to the Berkeley method’s breakpoint detection that brought it more in-line with NCDC in the no metadata case.”
We will likely have final reruns of the Berkeley method later this week, but from what I’ve seen so far it will produce no-metadata results comparable to NCDC.
Zeke (Comment #107294)
Sorry, Zeke, but as an older man I do not always appreciate those activities that might (pre)occupy a younger person. Thanks for the reply.
I have been of a mind, and stated it many times on these blogs, that benchmarking, where these algorithms and methods of temperature adjustment are tested with simulated truths, is the single best way of determining the measured temperature uncertainties, and for that reason I am most interested in work in that area. If one could produce a synthesis of the truth that introduces and contains truly real-world non-climate effects, and a method was found to perform well in that test, it would give the kind of confidence in these data for which I am looking. The critical part is, of course, determining those real-world non-climate effects.
With a little playing with breakpoint calculations on temperature series and introducing artificial non-climate effects that could have a very large effect on the end result – such as trends – one readily understands the limitations of breakpoints in finding these artificial effects. Large abrupt non-climate changes in longer temperature series are rather easy to find, but continuous and incremental changes are not so easy to find, particularly when the series are shorter.
In order to address these problems, a study would have to be done that provided the types of non-climate changes that would make a given method fail, and fail badly, and then determine whether those changes could occur, and under what circumstances, in the real world. I have my doubts that such a negative approach would be appealing to most researchers in the field.
Zeke, I am having a difficult time following why a modification of the breakpoint method would bring the BEST algorithm result more in line with GHCN in the no-metadata case when the results with documented metadata for BEST and GHCN are already more closely matched. It appears the bigger differences in no-metadata results between BEST and GHCN are with Tmax for the periods 1950-2011 and 1979-2011. Also, these old eyes cannot find the yellow X for the 1979-2011 period denoting the GHCN default method.
Kenneth,
The gold X in the 1979-2011 period Tmax is not visible because it is precisely under the blue X.
The reason the “without metadata” case is more divergent for Berkeley in the TMax data is that there are significantly more breakpoints in the TMax data, and Berkeley’s old method did not effectively detect them. You can see similar behavior in the World 1-8 trend charts, where the Berkeley method without metadata often corrects only a portion of the bias vis-à-vis the NCDC method.
I do like the idea of trying to find cases where the algorithms fail badly and seeing how likely these occurrences are in the real world. World 6 in our poster tries to do this to some extent with the many small breakpoints, though we could do much more (e.g. with artificial gradual biases in trends).
SteveF (Comment #107295)
Back to off topic. In a rational world I suspect one would admit the failures of these programs and get on with better ideas, but the politics here are truly one of denial. Obviously, means testing down to lower wealth categories and increasing the age requirement to include only the truly old would merely be a way of putting the program on a welfare footing. With the politics of denial I do not see that happening, but if it did you would still face a couple of very large problems.
People have come to depend on these programs and our culture has changed because of them. People in general have not saved or invested sufficiently to survive very well without these programs. That culture would be required to change again if benefits and age requirements are changed. I think that would be good thing, i.e. less dependence on government, but I doubt that the general population does.
Further, if payroll taxes are increased to continue benefits at the same current level or even near that level and with the Ponzi approach to financing these programs, the required taxes would have to be at an economically ruinous rate when fewer workers are supporting more retirees.
Finally, what is often not mentioned in these discussions is that all revenue for these programs is spent as soon as the government gets its hands on it. There are no true trust funds and proposed changes do not consider true trust funds – as doing so would require coming up with money even the government cannot get its hands on to fund future payments that have already been forfeited.
Therefore part of these fixes maintains the fiction of a trust fund full of government IOUs that would continue to require the use of current general revenues to pay off.
Zeke (Comment #107300)
Breakpoint detection would account for a major part of the adjustments due to undocumented non-climate changes, but I was under the impression that documented non-climate changes were adjusted directly and bypassed any breakpoint detection calculation. In a no-documentation case, I would suppose the breakpoints that were previously documented in the mixed simulation would have to be found directly, and thus the result you show, Zeke, would mean that a goodly portion of the documented changes in the mixed simulation produced smaller breaks that were more difficult to detect, and further that documented changes contribute significantly to the trends that result from the adjusted data. Would that be what we think represents the real world?
Bugs:
“You have never understood that the aggression and hatred has set up a totally dysfunctional situation. ”
Yes, and you need to ask yourself why Mann and Jones engaged in this hatred and aggression toward Warwick Hughes, Willis Eschenbach, Steve McIntyre and the rest. Read my book. The mails will give you a clue. Mann stupidly thought that McIntyre was an oil-shill conservative. He poisoned Jones, when Jones HAD ALREADY SHARED DATA WITH MCINTYRE IN 2002.
“Every release of code and data has not advanced the science. Zeke has done what any scientist would, some hard yards, his own work, and verified completely independently the work of the scientists so publicly and ferociously vilified. You don’t need anyone’s code and data. In fact, when data and code have been released, because this was an emerging technology, which was evolving as the science progressed, code and data management was in it’s infancy. ”
Nobody claims that every release will advance the science. We do know that failure to release almost always retards science versus what could have been. Go to Judith’s; read about stratospheric uncertainty and the fact that everyone has been using data that has not been peer reviewed and CANNOT be reviewed because the methods were not described. Read about Nic Lewis and the Forrest disaster. A paper at the heart of sensitivity estimates has its data gone missing. Further, in some cases you will see Jones refusing to share code, NOT because he has lost it, but because the code will reveal that the method in the paper doesn’t match the method in the code. You cannot get away with the “data processing was in its infancy” bullshit, because we asked for the same damn data that Peter Webster was given. Get your facts straight.
Look, when I started at Northrop in 1985, there were cases full of punch cards, racks full of 1/4-inch streamers, libraries full of 9-track, and paper records. But hey, we were not saving the planet, so our data wasn’t that important. Defending people by insulting their technical competence is not a winning strategy.
“Much has been lost, not due to a massive conspiracy, but just due to the state of the art. Data was proprietary. ”
1. We asked for the subset of data that was NOT proprietary. CRU responded that it was too hard to do this. A month after Climategate they posted the segregated data. They were economical with the truth.
2. The data may have been proprietary; however, the law allowed for its release if it was in the public interest.
3. CRU had policies for obtaining proprietary data. They violated those policies.
“Much of the FOI has nothing to do with code and data. ”
Wrong. Up until CRU lied about confidentiality agreements, the requests were mostly limited to data requests.
“The Model E code was released, and promptly ignored. One brave soul muddled around with it, had no idea what he was doing, and everyone lost any interest in it.”
You don’t mean ModelE (which has always been available); you mean GISTEMP.
1. It was not promptly ignored. I worked on it with two other guys for about two weeks. Because of some compiler issues that I could not sort out (remotely), I talked to E.M. Smith about getting a system that had the same OS as the GISTEMP people. The shop “Weird Stuff” in Sunnyvale has virtually everything. E.M. decided he would rather just work on a Linux version. He completed that. Peter O’Neill also worked with the code. You can find Hansen giving him credit for finding problems. Peter’s version is really cool. Since you don’t know what you are talking about, you’ve never heard of Peter. Steve McIntyre used the code to write his own emulations in R. I ran the code to get intermediate datasets. I also used the code to figure out several things about metadata and nightlights and to understand the UHI adjustment approach.
“To date, we have seen no vicious and relentless attacks on the UAH satellite code. It is nothing to do with research, it’s about hatred, conspiracy theories and a group madness that has obsessed a vociferous few.”
You want to know the reason? Going after the on-board code, we hit a wall called ITAR. You don’t know what ITAR is, but I’ve worked in the ITAR world, and UNLIKE CRU, who tried to hide behind bogus confidentiality agreements, ITAR is a whole different matter. Folks who have ITAR protection don’t have to lie like CRU did. The other issue is that there is no good FOIA lever to use on Spencer or RSS for their off-board code.
Trust me, if one appears, I will be doing a FOIA. And if they try to lie and tell me they have a confidentiality agreement that restricts release to “non-academics”, I will FOIA those agreements. And if they lie, I’m hoping that they get punished before the statute of limitations expires.
“Look at what Zeke has done. Not an unkind word, some hard work, and some results that to a large extent confirm what we already knew. That is not a waste of time, confirmation is important. But he has achieved far more with his posts here, alone, than McIntyre ever has or ever will.”
How do you even make such a comparison? You have apples and oranges. We have been lucky and grateful that Matt and Claude post all their data and their PHA code. I downloaded it years ago.
They are the example that everyone should follow. Stop defending the Manns and Joneses of the world and start praising the guys who are using best practices. Every time Rahmstorf or Tamino posts code I try to praise them. You should too. You can believe in AGW AND believe that Mann and Jones were wrong to fight data release. Gavin believes that, you dolt. That is basically all I am saying. The release of data and code says NOTHING about the truth of the science. That entails that you can believe in AGW and be critical of people who don’t release data and code. The only reason that some idiots fight the release of code and data is because THEY saw it as a political fight. THEY handed the skeptics a club. Crap, even Briffa realized this in February 2005.
Kenneth,
“People have come to depend on these programs and our culture has changed because of them. People in general have not saved or invested sufficiently to survive very well without these programs.”
Sure, but I repeat, what can’t possibly continue will not continue. The details of how it all shakes out in the world of politics is not terribly important: ultimately (less than 15-20 years), benefits will be substantially reduced compared to the level people enjoy today. The rest are details.
SteveF (Comment #107296)
December 10th, 2012 at 1:14 pm
Zeke,
Back on topic:
I noted that the performance of the Berkeley algorithm in accounting for trend bias (at least with the synthetic data) depends very much on the use of metadata. What fraction of CONUS stations and stations outside the CONUS actually have suitable metadata? Based on that, how well do you expect the Berkeley algorithm to do with biases in CONUS data? With biases in world-wide land data?
#######################################
One way to understand this is to classify the various types of metadata changes. There are three:
1. Station moves, specifically UNDOCUMENTED station moves; documented station moves are easy to find. The typical case is a station that has the same identifier (USAF1234567) but two different locations or two different altitudes. In the ROW these are found by the duplicate identification code. We don’t count these as “breaks”, but very often, where in the past you might have had one series (through station combining in GHCN), we end up with two different stations.
2. TOBS changes. The US is almost alone in the need for this data. Norway, Canada, Australia, and Japan may also require a TOBS adjustment. In some cases where you have hourly data, this change is “in” the data. For US daily and monthly data it is “in” the metadata file.
3. Instrument changes. Only the US has this data.
All of these changes are typically step changes. You don’t need metadata to find SOME of them. But take TOBS, for example: some of the step changes are very small, 0.1 C or less. Instrument changes? We know what they are in the US. Exposure changes, such as moving to the CRS? Typically not documented, from what I have seen.
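For the first category, a rough sketch of the idea (a hypothetical inventory with made-up column names and tolerances; not Berkeley Earth’s or NCDC’s actual code) is simply to group an inventory file by identifier and flag identifiers that appear with more than one location or elevation:

```python
# Hypothetical sketch: flag possible undocumented station moves by finding
# identifiers that appear with more than one reported location or elevation.
# Column names (id, lat, lon, elev) and tolerances are assumptions for illustration.
import pandas as pd

inv = pd.DataFrame({
    "id":   ["USAF1234567", "USAF1234567", "USAF7654321"],
    "lat":  [40.01, 40.12, 35.50],
    "lon":  [-105.25, -105.25, -97.40],
    "elev": [1650.0, 1712.0, 380.0],
})

# A station is flagged if any of its reported coordinates or elevations differ
# by more than a small tolerance across records sharing the same identifier.
TOL = {"lat": 0.01, "lon": 0.01, "elev": 10.0}   # degrees / degrees / metres

def flags_move(group):
    return any((group[col].max() - group[col].min()) > tol for col, tol in TOL.items())

moved = inv.groupby("id").filter(flags_move)["id"].unique()
print("possible undocumented moves:", list(moved))   # -> ['USAF1234567']
```

Any flagged identifier would then be treated as two (or more) separate station records rather than a single series, which is the point above about ending up with two different stations.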
OT, but it looks like Lucia’s model validation posts are closed to comments.
Have climate change projections come true?
http://www.climatespectator.com.au/commentary/have-climate-change-projections-come-true
So about a third of the rise since 1990 is attributable to “natural variations”. I wonder how much of the warming since 2001 has been due to “natural variations.”
If only I had been smart enough to retroactively “allow for that” with my own data through the years.
Steven Mosher (Comment #107305),
Thanks. So what I am hearing (please correct me if I am wrong) is that the algorithms tend, in the absence of metadata, to not completely correct for bias (either positive or negative), but with good metadata, the failure to capture bias is minimal. And we can probably say with some confidence that the well known temperature histories are accurate to within a modest fraction of the measured warming over the last century, including any likely biases. Is that fair?
DeWitt,
“Only a government program would use a bogus measure like years of solvency. The standard is unfunded liability which was $38.6T in that same 2012 trustees report.”
Hear, hear!
This needs to be repeated often and loudly. If we all ran our personal and company finances as the Federal government does, the country would already be bankrupt. I can almost see the eyes of cce et al. rolling back in their sockets.
.
Yes, I know, governments are not like individuals, etc. etc. But here is the key: government does not make the pie any bigger, it only changes the way the pie is divided. If/when government becomes a means by which to inhibit the creation of wealth by making that process too difficult, not worth the effort, or even impossible (think Venezuela), then a decline in national wealth for sure follows. Mr. Obama is, IMO, on a reckless course.
I haven’t made any comment on their practices. You say ‘praise the guys who are’. Hooray for that. What has me scratching my head is why what is not best practice is reason for a lynching. This at a time when best practice was still being developed, when technology was much more primitive, and far less reliable. They didn’t have a fancy IT department to use for the menial housekeeping tasks. ITIL was still decades away.
Strangely enough, they resented being treated like a public pillory, and people wonder why they saw no need to co-operate with their tormentors.
@Mosher
The Gistemp code was supposed to reveal gross incompetence and fraud, that the warming record had been deliberately manipulated. Nothing like that happened.
Smith has this to say of AGW.
“So basically, all the major temperature series agree because they are built on the same ‘cooked books’:”
Hardly a contribution to the science. Just another conspiracy theorist. That’s all this has been about. Free the code, for what?
http://chiefio.wordpress.com/2009/07/30/agw-basics-of-whats-wrong/
bugs you said:
Actually, GISTEMP has been replicated using ClearClimateCode.
That’s hardly the same as ” One brave soul muddled around with it, had no idea what he was doing, and everyone lost any interest in it.”
In fact many of us are interested in this project and it is ongoing.
So when your absurd prattle is challenged, you find one person who says nonsense just as absurd as yours to try and buttress your original nonsense???
I have a suggestion, the only person your comments are reflecting poorly upon is yourself. Instead of spewing out spittle-flecked nonsense, perhaps you need to a) work out that GISTEMP isn’t Model E, b) do a little googling and find out your preconceptions were wrong, then c) not post wrong preconceptions.
Maybe people won’t just laugh at you then.
Re: John M (#107306)
Is there a non-paywalled version of Frame & Stone 2012? One can get a preview of the first page only here, which hints that the reconciliation between prediction and observation is done by adjusting forcings.
The article claims “During this period [1990-2010] global mean temperature has risen by 0.35 degC according to the HadCRUT3 data set of land and ocean temperatures or 0.39 degC if we use the GISTEMP data set.” Over the (nearly) 22-year period Jan1990-Oct2012, I get 0.29 and 0.36 for those two indices. [Which would have to be compared with about 0.56 for the FAR prediction, if I remember correctly.] Smells a little of cherries. I’d like to see the full article.
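For anyone who wants to check that sort of number, the calculation is just an OLS slope times the period length. A minimal sketch, assuming a local two-column file of decimal year and monthly anomaly (the filename and format here are hypothetical, not any particular site’s export):

```python
# Minimal sketch: total warming over Jan 1990 - Oct 2012 as OLS slope x period length.
# "hadcrut3_monthly.csv" is a hypothetical two-column file: decimal year, anomaly (deg C).
import numpy as np

t, anom = np.loadtxt("hadcrut3_monthly.csv", delimiter=",", unpack=True)

mask = (t >= 1990.0) & (t < 2012.0 + 10.0 / 12.0)     # Jan 1990 through Oct 2012
slope, intercept = np.polyfit(t[mask], anom[mask], 1)  # deg C per year
period = t[mask].max() - t[mask].min()
print(f"trend: {slope * 10:.3f} C/decade, total rise over period: {slope * period:.2f} C")
```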
@bugs
“The Gistemp code was supposed to reveal gross incompetence and fraud, that the warming record had been deliberately manipulated. Nothing like that happened. ”
No. I asked for the code (recall that I started posting “free the code” on RC) because I wanted to do a variety of things with the code.
Things like: change the nightlights, change the urban adjustment code. From an engineering perspective it makes more sense for me to get their code and change it rather than write code in the dark (the papers do NOT describe the code).
Of course some people thought they would find massive errors. They are stupid. By doing simple averages before we ever got the code, we ruled out huge errors. Duh.
Your problem, bugs, and Mann’s problem, is thinking that everyone who argues against you is connected in a network of people with similar ideas. We’re not.
Zeke, I notice in the eight World representations that the algorithm result for the trend will always fall somewhere between the trends with and without non-homogeneities, depending, evidently, on how well the algorithm detects the breaks corresponding to the non-homogeneities. The algorithm will thus bias the result in the direction of the raw data, i.e. the data with the non-homogeneities included.
Unfortunately the differences were not kept the same across all Worlds, and thus a casual viewer may not, at a quick glance, see and appreciate what I described above.
You also mention varying some parameters in the benchmark testing but not the temperature series lengths. Was that done here or do you plan to look at it in future testing?
SteveF (Comment #107304)
“The details of how it all shakes out in the world of politics is not terribly important: ultimately (less than 15-20 years), benefits will be substantially reduced compared to the level people enjoy today. The rest are details.”
If we look to the Europe example I think that “how it all shakes out” is very important. The population’s resistance to change and the politicians’ submission to that reaction could put a nation on the brink of bankruptcy and/or slow the economy drastically.
The US does have an arrow in its quiver that the EU nations, as individual nations, do not have, and that is the capability of printing money, but that way leads to high or even hyper inflation.
Mosher: “Your problem, bugs, and Mann’s problem, is thinking that everyone who argues against you is connected in a network of people with similar ideas. We’re not.”
That’s why this blog is so interesting to read and why RC and SkSci and similar places are so …..yawn.
Mann: “the important thing is to make sure they’re [the skeptics] loosing the PR battle. That’s what the site [Real Climate] is about.”
Kenneth,
“If we look to the Europe example I think that ‘how it all shakes out’ is very important.”
Sure, the economic and social damage politicians can do (and often have done!) is almost without limit. I would never argue otherwise. Still, one way or another (with or without economic damage) Social Security and Medicare will decline in benefits over the next 15-20 years.
@Mosher
No, I have not thought that. The so-called ‘skeptics’ seem to be a broad group of people with mutually contradictory scientific beliefs, united in the belief that scientists like Jones should be treated with contempt.
McIntyre has outlined his motivation: he doesn’t like anything that looks like a hockey stick, believes it to be a fraud, and has dedicated a ridiculous amount of time to proving it to be so. His only problem is that he has wasted a large part of his life doing so. AGW is real, and the hockey stick is only one small part of the case for AGW, even if it is the easiest to understand. Anthony still has hordes of people thinking the science is wrong, and still can’t even decide if it is warming or not. McIntyre has used a loopy know-nothing like Lucy Skywalker as a source of information.
McKitrick has said “I have been probing the arguments for global warming for well over a decade. In collaboration with a lot of excellent coauthors I have consistently found that when the layers get peeled back, what lies at the core is either flawed, misleading or simply non-existent.” He even wonders if a global temperature exists.
Maybe if some skeptics would audit each other, then they would be better occupied with their time. The net effect is that people think the science is dubious, while the skeptics don’t have to prove anything, or even hold themselves to their own standards. As long as it is anti-agw, you are in the club.
bugs,
“united in the belief that scientists like Jones should be treated with contempt”
Nah, even though some of the behavior revealed in the UEA emails was at least somewhat contemptuous.
.
I actually have a fair amount of respect for Jones. His work on global temperature history has stood up pretty well, he showed the good sense to pull back from the political spotlight and focus on science when his embarrassing behind-the-scenes activities came to light, and he has finally released all (or nearly all) the data he refused to release prior to the end of 2009. Hard to say for sure if he is still trying to twist the arms of publishers and editors to block publications he doesn’t like, but I doubt it; any one of them could just make his pressure tactics public, and he would look REALLY, REALLY bad. Unfortunately, that same good sense seems lacking in a number of others who were involved in Jones’ shenanigans.
.
With regard to skeptics auditing each other: read over any of the long technical threads following posts at this blog and take note of how the commenters treat the content of the posts. You might note that there is quite a lot of ‘skepticism’ in those threads…. which is the normal process of scientific discussion, by the way.
.
“Anthony still has hordes of people thinking the science is wrong, and still can’t even decide if it is warming or not.” Hmm… you seem to imagine that “skeptics” are somehow united in misunderstanding; you are mistaken. Yes, there are nutcakes everywhere; visit a frenzied alarmist blog some time and you will see examples of obvious nutcakes on the other side as well. OK, so there be nutcakes in the world. But read back over Paul_K’s recent thread, and you will see that the Dragonslayers are told they are nutcakes and would be better off just talking to other nutcakes.
.
For the record, I personally believe that rising atmospheric GHG’s must warm Earth’s surface, I believe that there has been about a 0.8C rise in average temperature over the past 100 years, that heat is accumulating in the oceans, that glaciers and sea ice are melting in response to warmer temperatures, and that sea levels will continue to rise due to a combination of melting glaciers and warming oceans. See, no disagreements with main stream climate science. The things I doubt are the projections of rates of future warming, future rates of sea level increase, and a host of other projected consequences, ranging from more acne to more malaria, to the extinction of corals. I doubt those things because the science used to develop those projections seems to me remarkably weak and remarkably doubtful… in many cases, it is pure rubbish. I will resist as best I can people using rubbish projections of future doom to force policy changes that I think are costly, damaging, and politically (rather than scientifically) motivated.
.
So enough of the broad-brush characterizations of skeptics, OK?
Re: bugs (Dec 11 14:29),
That’s your interpretation. I seriously doubt that you could find actual quotes from McIntyre that affirm your position without torturing the language. I can’t read his mind so I don’t know whether he believes that MBH98 was fraudulent, but he has gone to great lengths to separate himself from those that have taken that position publicly. He’s retired. He has nothing but time.
McIntyre has never said otherwise. He, like a lot of others, would like to see better documentation of how some of the numbers involved are derived. That would be his request for engineering level documentation, something with which most academics are unfamiliar.
In that regard, it was a real PITA for me to find decent documentation without spending a ton of money on papers behind moneywalls and expensive textbooks. R. Caballero’s lecture notes on Physical Meteorology (free) and Grant Petty’s A First Course in Atmospheric Radiation (cheap if bought directly from the publisher) were extremely helpful in that regard. The Science of Doom site didn’t exist when I started and RC was nearly useless.
McIntyre has often stated that his original motivation was simple curiosity as to how the math of MBH98 worked. If his request for data had been granted and people had been civil, ClimateAudit would likely not exist.
“Skeptics” have widely differing interests and objectives.
My bag is to try to get someone, anyone, in the MSM or Canadian government to tell us what the expected outcome of expensive emissions programs is in DEGREES of global warming averted, rather than tonnes of CO2.
From my perspective, the “selling” of “green energy,” for example, is intellectually dishonest in the extreme as long as governments conceal the expected results behind a CO2 tonnes figure that is meaningless to the general public that understands temperature in degrees!
Oil companies are invited to send cash.
Seems Wood for Trees.org has some problem… going to the Wood for Trees site takes you to the “Apache Tomcat” home page.
Re: Political Junkie (Dec 11 16:22),
You don’t actually need degrees. Just ratio the tonnes of CO2 saved to current global emissions. As of 2011, that was 31.6 E9 metric tons/year and growing. Odds on, the savings will amount to significantly less than 1%.
One last comment on unfunded liability. The meaning is how much money we would need right now in addition to future revenues to pay scheduled benefits for the next 75 years. The alternative is an income stream that has a net present value of the unfunded liability. That depends on the interest rate used to discount the future income. Let’s be completely ridiculous and assume a discount rate of 1% and an unfunded liability of $39E12. In round numbers, a discount rate of 1% means that you will need to collect about twice as much money over 75 years as the liability. It’s basically paying off a loan of the amount of money in question. A 1% annual percentage rate will double your money in ~70 years. So, again in round numbers, we would need a tax increase of ~$1E12/year right now to keep Medicare solvent for the next 75 years assuming that all the proposed cuts in outlays are implemented, which is even less likely than a discount rate of 1%. As of 2010, there were 117,538,000 households in the US. That’s $8,500/household/year. And that’s just Medicare. Double the discount rate and approximately double that amount. There simply isn’t that much money. Period.
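For what it is worth, the round-number arithmetic above can be laid out in a few lines; this simply reproduces the rule-of-thumb reasoning in the comment (roughly a doubling at ~1% over ~70-75 years), not an actuarial calculation:

```python
# Round-number sketch of the reasoning above: at ~1% a sum roughly doubles over
# ~70-75 years, so servicing a $39e12 unfunded liability over 75 years takes roughly
# twice that amount in total, spread over 75 years and ~117.5 million households.
liability = 39e12          # unfunded Medicare liability, USD (figure from the comment above)
years = 75
households = 117_538_000   # US households, 2010 (figure from the comment above)

total_needed = 2 * liability                  # "double your money in ~70 years" heuristic
per_year = total_needed / years               # roughly 1e12 per year
per_household = per_year / households         # roughly 8,800 per household per year
print(f"~${per_year/1e12:.1f}T/yr, ~${per_household:,.0f}/household/yr")
```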
The longer we delay, the worse the problem gets.
If you believe that AGW will be a disaster and are into severe depression, run the same calculations on the percentage reduction in CO2 emissions/year required to stabilize atmospheric CO2 at some constant level. That isn’t going to happen either.
DeWitt,
“…run the same calculations on the percentage reduction in CO2 emissions/year required to stabilize atmospheric CO2 at some constant level. That isn’t going to happen either.”
I am more sanguine than you about the maximum levels of atmospheric CO2. For sure there will be 30-40 years of fairly rapid increases (e.g. 2.5 to 3.5 ppm per year), but by that time I suspect global energy efficiency, the technology of nuclear power, and (bite my tongue) even solar power should be considerably improved, and the relative cost of fossil-based energy simultaneously higher. At some point, it will just make economic sense to stop burning reduced carbon for energy, and CO2 emissions will drop. I also note that a large fraction of CO2 emissions will continue to be taken up by the ocean (let’s not argue again about the long-term accuracy of the Bern model!), so bending the curve to flat, followed by a gradual decline, seems to me likely to happen by the second half of this century. CO2 may reach 600 ppm give or take a bit, but the really high numbers some have discussed (900 ppm or more) strike me as very unlikely.
DeWitt,
Probably the best we can hope for is that we burn up, before we go broke.
DeWitt,
Compared to Medicare, Social Security is in financially good shape, with an unfunded liability of ‘only’ $8.6 trillion. Scaling your calculation to include Social Security, and assuming a 2.5% discount rate (much more realistic), the annual tax increase needed to cover both over the next 75 years is about US$26,000 per year per household. What we need to balance the books on Medicare and Social Security is about $3 trillion a year in new taxes. Not going to happen.
.
But I think Mr. Obama and his supporters will try to take as much money as they can from ‘the rich’ in the next decade or so to put off benefit cuts for a short while (and feel very smug knowing how righteous they are to be soaking the rich). But even if 100% of the income of the top 1% of households were confiscated, that would generate only about $400 billion… and outright confiscation has the very bad side effect of making people stop trying to earn money. Very high rates, short of outright confiscation, say 60% of the total income of the top 1% would only generate $150-200 billion or so more than today… and would lead people to adopt tax avoidance strategies… so that $150-200 billion would be temporary. Major benefit cuts, especially for Medicare, in the not too distant future is the only possible outcome.
DeWitt (Comment #107332)
My point is that for the man on the street the absolutely valid question is:
You are raising my electricity bill by x dollars.
By how much, in degrees, is this reducing global warming?
DeWitt, I don’t know what you think “smoke and mirrors” means, but ACA cuts spending for various programs and raises taxes. That is how you pay for things. Most of that money and savings goes toward paying for the ACA itself. Some of it goes toward Medicare.
Healthcare costs cannot compound at a rate higher than wages forever. It is a mathematical certainty. If doctors and hospitals don’t like that fact, they need to talk to someone in real estate. All of the projections of financial doom for the US revolve around impossibly high healthcare costs. The “unfunded liability” of Medicare depends on projections of healthcare costs that cannot occur because there will not be enough revenue to pay for it. Regardless of whose “plan” you choose, doctors and hospitals aren’t going to get all of that mythical healthcare money of the future, either from individuals or government.
Kenneth, the reason that the trust funds are “solvent” is because the payroll tax has collected more money than those programs have paid out in benefits. In addition, the government pays interest for the use of that money. That money, which was collected on the first dollar of income and thus regressive, was spent on all kinds of things that “regular” taxes are supposed to pay for. If it was OK to raise payroll taxes while simultaneously cutting income and capital gains taxes as was done in the ’80s, and then proceed to spend that money as if it were ordinary revenue, then it is similarly OK to raise income and capital gains taxes today to pay back money that subsidized the artificially low rates of the past 30 years. There is no free lunch, as they say.
As was mentioned, fixing Social Security is easy from a policy standpoint. Raising the income cap so that it applies to 90% of all income (as was the case prior to the ’80s) and changing the cost of living adjustments can fix any shortfall in revenue for any reasonable time period. Actually doing that will require political cover from both parties — Democrats who created Social Security, and Republicans who are supported by it.
You are right cce. Doctors and hospitals are just going to have to suck it up and work harder for a lot less money. If this ACA thing works out as promised, we will get the Affordable Mercedes Act, the Affordable Rolex Act, etc.
@De Witt
http://www.spiegel.de/international/world/climate-catastrophe-a-superstorm-for-global-warming-research-a-686697-druck.html
Oh Bugs, you accidentally removed a quotation mark when you copied from that article. Let me fix that for you:
But McIntyre was suspicious. “In financial circles, we talk about a hockey stick curve when some investor presents you with a nice, steep curve in the hope of palming something off on you.”
The stubborn Canadian pestered one scientist after another to provide him with raw data — until he hit pay dirt and discovered that the hockey stick curve was, in his opinion at least, a sham.
cce,
You are correct that health care costs can’t increase faster than wages forever. But I think you have cause and effect confused. Health care costs have exploded because people, mostly people receiving Medicare benefits, but also people with generous private health insurance, are in a position to select health care options without concern for costs…. they are receiving benefits without personal cost (spending other people’s money), so only the very best treatment options will do. The explosion in health care costs, much of which turns into huge incomes for medical doctors, drug companies, and others, is an automatic consequence of this disconnect between who pays and who benefits. There are only two ways out of this morass: you can limit costs by forcing providers to accept lower payments and limiting treatment options for beneficiaries (the European-style plans all do this), or you can align the interests of the patient so that they will want to reduce health care expenditures and make more financially prudent choices for care. People of the left usually want European-style systems, with greater public control of private activities, which is at least consistent with everything else they support (e.g. government taking > 50% of GDP). What you can’t do is continue with a system that allows costs to increase without control. Which unfortunately seems to be what Mr Obama plans to do.
bugs: too funny! You just confirmed that your side can’t have an honest conversation. It does not matter what you protest; the problem is the evidence. Just as there is evidence that motivated reasoning occurs in “skeptics”, it occurs in “alarmists”. The answer is always: what is the evidence?
P.S. I do know that the argument above is not technically correct w.r.t. representing all “alarmists”; your post and Niels were just too funny to pass up.
You brought a smile to me today and I thank you.
Bugs,
You only have the one quote and it’s a statement of fact. The second sentence is the opinion of the author of the article, not McIntyre. You do seem to be more or less correct that McIntyre’s original motivation was that a hockey stick curve was suspect. But that’s irrelevant. The data and code should have been freely available to anyone. That’s the way science is supposed to work. Articles are published in journals so the work can be replicated and verified. By anyone. It’s not art where you either like it, don’t like it or are indifferent and it’s not meant to be replicated.
cce,
You missed my point about the ACA cuts in reimbursements. Sure they reduce the unfunded liability somewhat if they are actually implemented. But that’s unlikely given the history of trying to cut Medicare reimbursements to physicians. And if they were to be implemented, there will be unintended consequences. Just like costs can’t increase faster than economic growth forever, you can’t cut reimbursements to further and further below actual costs forever either. At some point hospitals will have to drop out of the system or go bankrupt.
Bugs says
“McIntyre has outlined his motivation, he doesn’t like anything that looks like a hockey stick, believes it to be a fraud, and has dedicated a ridiculous amount of time to proving it to be so.”
Interesting, and Mann seems to be motivated mostly to make the hockey stick handle straighter – see his AGU New Fellows presentation and the claimed refutation of Esper et al. 2012 (very end).
cce (Comment #107337)
“Kenneth, the reason that the trust funds are ‘solvent’ is because the payroll tax has collected more money than those programs have paid out in benefits. In addition, the government pays interest for the use of that money. That money, which was collected on the first dollar of income and thus regressive, was spent on all kinds of things that ‘regular’ taxes are supposed to pay for. If it was OK to raise payroll taxes while simultaneously cutting income and capital gains taxes as was done in the ’80s, and then proceed to spend that money as if it were ordinary revenue, then it is similarly OK to raise income and capital gains taxes today to pay back money that subsidized the artificially low rates of the past 30 years. There is no free lunch, as they say.”
Sorry, cce, but this is the sanitized version of a government program that is seen in Civics 101 and has nothing to do with political reality or the damage that can be done by that reality. The trust funds were raided by politicians because initially the income was greater than the expenditures, and the politicians used those funds to grow government with the political expediency this situation provided.
Benefits were added to the programs, and demographics that would affect future income to these programs were known to be changing, and yet politicians were allowed to borrow from future generations in order to make these failing government programs look good to the voting public. This situation is by no means unique to the federal government, for state and local governments have done exactly the same thing. When we talk of unfunded government liabilities we should include those sources also, as doing so makes the sum total even more ridiculously huge.
The trust fund myths and the Ponzi approach to financing these programs were another political expediency perpetuated to fool the voting public and give the defenders of these programs grist for their disinformation mills. What people such as you either miss or ignore is that people have come to depend on these programs, and thus any reduction in them is the third rail of politics. The Democrats blithely know this and have successfully shown that ignoring the problem works – at least as a political expediency. The experience in Europe shows that fixes to government programs that are not affordable can be delayed by the voting public to the point of national bankruptcy and economic desperation.
I really get a kick out of hearing that a “solution” for Social Security is eliminating the cost-of-living factor on payments, when the law was changed precisely because even the less informed voting public and users of the program figured out that by inflating the dollar the government could in effect reduce the payments (or at least their worth) to recipients. Under those circumstances the government and the politicians have every reason to promote inflation.
Taxing for Social Security based on all income, or 90% of it, would under current law provide huge monthly retirement payments for the wealthy when they retire, and thus do nothing in the way of balancing income and outlays. Using the extra income to balance the system would require a complete severing of payments into the system from payments out of it. It would then be more readily recognized as a welfare program, and the myth of a retirement savings program would be gone forever. It is interesting to put this suggested “improvement” in context with what the politicians currently have going. They reduced the payroll tax into Social Security in the name of helping recovery from the recession, and currently appear to want to perpetuate that reduction by daring other politicians to even suggest reinstating it at the former rate. And all this in the face of a current deficit of income to expenditures for Social Security of around $150 billion in 2012.
If one can face the reality that there is no trust fund and that none of the additional revenues over expenditures, for the immediate to near future, goes into a mythical trust fund to fund future expenditures, where the rate of expenditures once again exceeds the rate of income, one can readily see that all these current added revenues do is fund bigger government for the near future and continue to create crises for the time when current expenditures once again outrun income. And none of this says anything about the detrimental effects of the additional taxes on the economy – think fiscal cliff and the fact that we are supposed to be well on our way out of recession.
Zeke, I see you commenting on other threads here and was wondering whether you might address the query I put to you some time ago.
Kenneth Fritsch (Comment #107317)
December 11th, 2012 at 12:41 pm
“You also mention varying some parameters in the benchmark testing but not the temperature series lengths. Was that done here or do you plan to look at it in future testing?”
Further, I am very interested in what kinds of changes one would predict might occur from slowly changing micro-site conditions and whether one would expect to be able to capture those changes with breakpoint algorithms and adjustments. Here I am thinking of the changes that might occur over (long) times to cause the CRN rating changes that Watts and his team came up with.
I have commented previously that the several papers whose authors have attempted to address those CRN ratings have been woefully inadequate, both as to the 1980-to-current time period studied and as to the fact that the current CRN rating says absolutely nothing about when the change occurred or over what time period it was changing.
I am currently continuing my detailed analyses of the GHCN adjusted versus unadjusted temperature series and am finding what I judge to be very major impediments, in the real world, to finding the non-homogeneities via breakpoint methods and properly adjusting for what is found. I am hoping that if you publish a paper based on your poster displayed here, you go into great detail on the parameters you used to benchmark-test the algorithms. I am developing some ideas of my own, using real temperature series and the resulting difference series with nearest neighbors, that would truly test the performance of these algorithms.
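For concreteness, here is a toy sketch of the generic pairwise idea I have in mind (my own illustration, not the PHA or any GHCN code): difference a target series against each nearby neighbor so the shared climate signal largely cancels, then search each difference series for a mean shift.

```python
# Generic pairwise-difference illustration (not the PHA): subtract neighbor series
# from a target so shared climate variability cancels, then look for a mean shift
# in each difference series with a simple maximum-t split search.
import numpy as np

rng = np.random.default_rng(1)
n = 80                                       # years
climate = np.cumsum(rng.normal(0, 0.1, n))   # shared regional signal (red noise)

target = climate + rng.normal(0, 0.15, n)
target[50:] -= 0.4                           # inhomogeneity: 0.4 C drop at year 50
neighbors = [climate + rng.normal(0, 0.15, n) for _ in range(3)]  # homogeneous neighbors

def max_t_break(x, min_seg=8):
    best_i, best_t = None, 0.0
    for i in range(min_seg, len(x) - min_seg):
        a, b = x[:i], x[i:]
        t = abs(a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t

for k, nb in enumerate(neighbors):
    diff = target - nb                       # climate signal largely cancels here
    i, t = max_t_break(diff)
    print(f"neighbor {k}: break candidate at year {i}, t = {t:.1f}")
```

In real networks, of course, the neighbors carry their own inhomogeneities, which is one of the impediments I am referring to above.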
@Pittman
Here is the cut/paste I originally did
————————————–
But McIntyre was suspicious. “In financial circles, we talk about a hockey stick curve when some investor presents you with a nice, steep curve in the hope of palming something off on you.” [print-version page header: Climate Catastrophe: A Superstorm for Global Warming Research – SPIEGEL ONLINE – News – International]
http://www.spiegel.de/international/world/climate-catastrophe-a-superstorm-for-global-warming-research-a-686697-druck.html
The stubborn Canadian pestered one scientist after another to provide him with raw data — until he hit pay dirt and discovered that the hockey stick curve was, in his opinion at least, a sham.
—————————————
I tried to cut the middle out, and one of the characters accidentally deleted was a quote symbol. You assume it was intentionally done to deceive. Says more about you than me.
Mosher, somewhere up there:
You have some response from Christy and Spencer?
Eli
Be honest. Your objection has been countered so many times that it gets to be boring when you keep bringing it up. Maybe that is a definition of “senility”. Meanwhile, Jim Bouldin has done a series of posts saying that deriving temperatures from tree-rings is simply impossible.
Diogenes (Comment #107765)
“Meanwhile, Jim Bouldin has done a series of posts saying that deriving temperatures from tree-rings is simply impossible.”
When and where was this done?
Kenneth Fritsch (#107819) —
See Bouldin’s blog. So far, seven posts. Best to read them in chronological order, I should think. The series seems to summarize a paper on the topic of the reliability of dendro-based temperature reconstructions, which Bouldin has so far not been able to get published.
He provides some R code in post five of the series, analyzing the effect of RCS calibration. His approach is similar to what Lucia did with her analysis of correlation-based screening. That is, generate ideal synthetic data with known attributes, apply the algorithm, and see how well the estimate(s) matches the known underlying parameter(s). In Bouldin’s case, the simplest form of synthetic data is tree growth having a linear relationship to temperature, with no confounding factors. The temperature is a linear trend plus noise, and he looks at the resulting estimate of trend. Unfortunately, I haven’t had a chance to run his code yet.
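I have not run Bouldin’s code either, but the general framework HaroldW describes is easy to sketch in the simplest no-confounding case (a toy Python version of my own, not Bouldin’s R code): simulate a temperature series as a known trend plus noise, make ring width a linear function of it, calibrate on a late “instrumental” window, reconstruct, and compare the recovered trend with the known one.

```python
# Toy version of the benchmarking framework described above (not Bouldin's code),
# restricted to the simplest no-confounding case: ring width responds linearly to a
# temperature series that is a known trend plus noise. Calibrate the response over a
# late "instrumental" window, reconstruct the full period, and compare trends.
import numpy as np

rng = np.random.default_rng(7)
n = 300                                            # years
true_trend = 0.004                                 # deg C per year, known by construction
temp = true_trend * np.arange(n) + rng.normal(0, 0.3, n)

ring = 1.0 + 0.5 * temp + rng.normal(0, 0.2, n)    # linear growth response plus noise

calib = slice(n - 100, n)                          # last 100 years act as the instrumental era
b, a = np.polyfit(ring[calib], temp[calib], 1)     # inverse regression: temp on ring width
recon = a + b * ring                               # reconstructed temperature, full period

true_slope = np.polyfit(np.arange(n), temp, 1)[0]
recon_slope = np.polyfit(np.arange(n), recon, 1)[0]
print(f"trend in synthetic truth: {true_slope:.4f} C/yr; in reconstruction: {recon_slope:.4f} C/yr")
```

Even in this best case, simple inverse regression tends to attenuate the reconstructed amplitude; the harder problem Bouldin describes is when a size/age effect is confounded with the climate signal.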
One of his citations is a brief white paper by Briffa and Cook (2008), which says, “Frequently, the much vaunted ‘verification’ of tree-ring regression equations is of limited rigour, and tells us virtually nothing about the validity of long-timescale climate estimates.”
HaroldW (Comment #107826)
I followed your link and have skimmed the contents of Bouldin’s dissertation on TRW chronologies. What I have excerpted below sounds like it well could be heretical to TRW reconstructionists and thus be a reason for Bouldin’s disappointment that these people are not joining the discussion.
The discussions I have had with Bouldin had to do with low- and high-frequency validation of TRW reconstructions and the validity of selecting TRW proxies based on a posteriori high/low frequency correlation with no a priori criteria. At that time I thought he was working on a better chronology for relating TRWs to climate, and specifically temperature.
His simulation models assume first that there is a valid linear response of TRWs to climate, but even with that assumption, he finds major limitations to the use of any available (I think) TRW chronology.
“To summarize briefly, the essential problem is that changes in tree size and long-term (“low frequency”) climate changes (~ century scale and longer) both affect ring response and can and do occur concurrently.
Therefore, there is no obvious way to determine how much of each year’s ring response is due to tree size vs climate–a classic example of statistical confounding.
The RCS method was developed to address this issue, by estimating the “expected” (~ mean) ring response for any given tree age (but size is actually the better predictor and my analyses use it, rather than age). This expected response is assumed to apply to every tree, and the deviations of each tree ring from it are therefore assumed to represent the effects of climate. For this concept to work however, there must be a good mixture of tree ages in the sample, so that (hopefully) each calendar year is sampled by rings from many different tree sizes (and conversely, each tree size occurs across a large part of the range of climate states experienced over time). I mentioned before that the method also requires that trees have as similar responses to climate as possible, and that they also experience similar non-climatic environments (e.g. soils, topography, competition etc.).
This seems at first glance like a reasonable solution to a potentially thorny problem. There is one small hitch though: it doesn’t work. A second small hitch is that there are no other known solutions to the problem. Only in certain highly optimal situations, completely unrealistic for most real tree populations, does it approximately remove the tree size effect and return an adequate long term trend estimate. Otherwise it will fail badly, usually with a definite directional bias. Big time problem. As in, leading to an uncertainty that is fatal to confidence in resulting chronologies and climate reconstructions. I’m not exaggerating; keep reading.
So why do people use it? The first answer is that a lot of times they actually don’t, they still use the older ICS method instead (described briefly in part two), with its known severe weaknesses in recovering long term trends. When they do use it, it’s presumably because it’s thought to be better than the ICS method, which it is, and because there are no other methods available (other than variants of RCS which have the same set of problems).”
HaroldW, thanks for pointing out Bouldin’s blog. I wasn’t aware it existed until today. I’ve added it to my blog list and will start working through it.
Kenneth Fritsch:
Technically that makes it two rather small hitches (it doesn’t work and it can’t be fixed).
I was always impressed with Cook’s and Briffa’s white paper and its forthrightness and have quoted from it often. Those people appear to be aware of the limitations or at least some of the limitations of TRW reconstructions, but never seem to push the issue with others doing the reconstructions. I suppose these scientists could consider that they still have work to be done by attempting to continue to come up with better methodologies. I had thought that was what scientists, like Bouldin, Briffa and Melvin were attempting to do. Bouldin appears to be saying that it is an impossible task.
I have exchanged emails with Melvin about a new and improved TRW chronology that he and Briffa had published a paper on. Gergis et al. claimed to be using that method, or at least footnoted it, in their withdrawn paper. Melvin claimed that that was unlikely to be the case because the Gergis authors had not contacted him and the details of the method have not yet been made public. Melvin said he would inform me when the code for the method was ready for public consumption. I have not heard from him for about three months.