Some years ago, I attended a climate change debate at Argonne National Lab. The “cooler” side was represented by Fred Singer; the “warmer” side by Michael Schlesinger of the University of Illinois. Recently, during the discussions of climate sensitivity triggered by various reactions to Nic Lewis’s work and that of “the Norwegians”, and to James Annan’s blog post, which has been covered by Revkin, Richard Betts mentioned that Schlesinger’s group had recently published Causes of the Global Warming Observed since the 19th Century. This paper happens to also provide an estimate of Equilibrium Climate Sensitivity:
Additionally, our estimates of climate sensitivity using our SCM and the four instrumental temperature records range from about 1.5°C to 2.0°C. These are on the low end of the estimates in the IPCC’s Fourth Assessment Report. So, while we find that most of the observed warming is due to human emissions of LLGHGs, future warming based on these estimations will grow more slowly compared to that under the IPCC’s “likely” range of climate sensitivity, from 2.0°C to 4.5°C. This makes it more likely that mitigation of human emissions will be able to hold the global temperature increase since pre-industrial time below 2°C, as agreed by the Conference of the Parties of the United Nations Framework Convention on Climate Change in Cancun [54]. We find with our SCM that our Fair Plan to reduce LLGHG emissions from 2015 to 2065, with more aggressive mitigation at first for industrialized countries than developing countries, will hold the global temperature increase below 2°C [55].
For those wondering about the sorts of things Michael Schlesinger has written in the past, I point to
The Dangers of Climate Change, Michael Schlesinger (Part 4 of 6), TUC Radio (transcript), where he warns of a shutdown of the Gulf Stream, and a book chapter discussing climate sensitivity.
It is also worth noting that even with the lower climate sensitivities Schlesinger now estimates, he thinks the earth’s temperature rise will exceed 2°C if we follow the “Representative Concentration Pathway 8.5 (RCP-8.5) greenhouse gas (GHG) emission scenario” (‘the highest of these scenarios’). His estimates of temperature anomalies relative to 1765, emissions, and concentrations under this scenario are provided in A Fair Plan to Safeguard Earth’s Climate and shown below:
Eyeballing the third graph, I’m seeing a projected temperature rise of 3°C to 3.5°C by 2100. Whether equilibrium sensitivities of the magnitude Schlesinger now suggests result in temperature rises of that size depends on the assumed emissions path. I’d say it’s worth taking steps to rein emissions in some– but then I’ve always said that.
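To make the interplay between sensitivity and emissions path concrete, here is a minimal back-of-envelope sketch (mine, not Schlesinger’s SCM) using the standard logarithmic forcing approximation. It counts CO2 only, ignores the lag before equilibrium is reached, and the 900 ppm figure is just an illustrative round number for end-of-century RCP8.5:

```python
# A back-of-envelope check, NOT Schlesinger's SCM: equilibrium warming for a
# given CO2 concentration, using dT = ECS * ln(C/C0) / ln(2).
import math

C0 = 280.0  # assumed pre-industrial CO2, ppm

def equilibrium_warming(ecs, co2_ppm, baseline_ppm=C0):
    """Equilibrium temperature rise (deg C) at a given CO2 concentration,
    assuming forcing scales with the log of concentration."""
    return ecs * math.log(co2_ppm / baseline_ppm) / math.log(2.0)

# 900 ppm is an illustrative round number for RCP8.5 late this century.
for ecs in (1.5, 2.0, 3.0):
    print(f"ECS {ecs}C per doubling -> {equilibrium_warming(ecs, 900):.1f}C at 900 ppm")
```

At an ECS of 2°C this gives roughly 3.4°C at equilibrium, the same ballpark as the eyeballed 3°C to 3.5°C; the rise actually realized by 2100 would be somewhat lower, since the system would not yet be at equilibrium.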

I’m imagining that this isn’t really moving the goalposts; probably different scientists had different estimates of sensitivity, and different estimates of how much of a problem each sensitivity implied.
But it sure _feels_ like moving the goalposts. This is tremendously good news: The sensitivity is on the smaller side of what we thought, plus the really high numbers are now considered very unlikely. But no – it’s still just as urgent to do everything now as it was last year.
Seems that people like Bjorn Lomborg ought to get a lot of mileage out of this: the real disaster scenarios may be out of play now. We can all sit back and do a little more careful analysis of just what the impacts of two or three degrees C would really be, and what it’s worth to us to mitigate them.
By moving the goal posts– do you mean Schlesinger discussing actual temperature rise?
I don’t think that’s moving the goal posts. What we really care about is temperature rise. But low sensitivity means, all other things being equal, lower temperature rise. Higher GHG emissions: all other things being equal, higher temperature rise.
But right now, in response to Revkin’s article about the drift down in estimated climate sensitivity, it looks like “the warmer troops” are out there trying to make it sound like there is no drift down in estimated sensitivity, while “the cooler troops” are trying to make it sound like the drift is some sort of collapse. In fact, the recent estimates are trickling in– and tend to be lower. And it’s not just people frequently labeled as “skeptic/deniers” coming in with lower numbers. It isn’t even just people like Annan, who have always vocally criticized the plausibility of the very high end– but equally (or more) vocally criticized very low values.
Lower estimates of sensitivity are coming from people like Michael Schlesinger, whose warmer creds are– I assure you– ironclad. These appear in papers whose main message is “climate change is real”– but the lower sensitivity IS reported in the body of the paper.
Well, I don’t know what I mean – that’s why I come here, to find out. But if I understand right, what’s usually called CAGW is dependent on a number of parts. Some of them are climate science, some are ~ biology and ecology and anthropology, and some are economics. If one of the climate science pieces gets smaller, that should decrease the pressure on some of the other pieces.
However, what we conservatives really would like to know is a yes-or-no question: Is the pressure down far enough that we don’t need to do anything? Obviously, YMMV on the answer to that. Or, as a somewhat watered-down version, Is the pressure down far enough that we can afford to spend the next decade gathering data and refining models – because it makes a lot more sense from a Bayesian perspective to find out more first and then plan, before we start messing with the world economy in a panic mode.
Different question, a la Steve Mosher: Is the decrease and tightening of sensitivity estimates simply a matter of good luck – now we have more data, and the results could have ended up anywhere in the earlier pdf, and this is where they did? Or is it more of a Millikan oil drop type change, where we are gradually squeezing out the bad science with the good? I don’t see why my politics has to determine my answer to this question.
Annan seems to be arguing for a little of both: We are lucky that the sensitivity is turning out somewhat lower, and we are also struggling to deal with some people who insist on the higher numbers for the wrong reasons.
His baseline emission pathway has upwards of 1700 ppm. While that is probably theoretically possible in a future world with massive coal use and rapid economic growth, I suspect that we will phase out most coal use even in the absence of climate-driven mitigation simply due to air quality and cost issues. But I’m a bit of a technological optimist.
Also, most BAU scenarios I’ve seen are closer to 750-950 ppm. That still gets you a lot of warming even under a 2°C sensitivity, so it’s probably best avoided.
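Running Zeke’s numbers through the same sort of back-of-envelope as above (my arithmetic, CO2 only): at 850 ppm and a 2°C sensitivity, the equilibrium rise is 2 × ln(850/280)/ln(2) ≈ 3.2°C above pre-industrial, before counting non-CO2 forcings.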
MikeR–
Some tightening is to be expected over time because people are interested in the problem, collect data, and moreover, techniques in independent fields advance.
For example: Hargreaves and Annan’s analysis depends on being able to get estimates of Last Glacial Maximum (LGM) temperatures. Those estimates depend on better understanding of chemistry, which permits people to get estimates based on… (dunno what. Biological stuff?) (Note: Discussion at Real Climate suggests those estimates might be revised– if so, expect H&A’s climate sensitivity to pop up.)
Meanwhile, statistical methods like those H&A use are developed, and explained to some extent, based on what data are even available. (They wouldn’t have tried to do anything based on a fit between tropical temperatures in the LGM and climate sensitivity in climate models if there were no climate models, or if there were no estimates of temperature during the LGM.)
As more data become available, or more thought is put into the problem, people can start doing a larger variety of studies, and eventually you should get tighter and tighter answers. But it takes time.
That said: the fact that you get a tighter answer can’t help you guess whether the correct answer is more likely to go up or down.
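A toy illustration of both points (my own sketch, not Hargreaves and Annan’s actual method, and the sequence of estimates below is made up): combining independent estimates always narrows the spread, but which way the central value drifts depends entirely on the data that come in:

```python
# Toy Bayesian updating: combine independent Gaussian estimates of climate
# sensitivity by precision weighting. The posterior width shrinks with every
# new estimate, but the mean can move either way.

def combine_gaussians(mu1, sd1, mu2, sd2):
    """Precision-weighted combination of two independent Gaussian estimates."""
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sd = (w1 + w2) ** -0.5
    return mu, sd

# Hypothetical ECS estimates (deg C per doubling) with 1-sigma uncertainties.
estimates = [(3.0, 1.5), (2.6, 1.0), (3.3, 1.2), (2.0, 0.8)]

mu, sd = estimates[0]
for m, s in estimates[1:]:
    mu, sd = combine_gaussians(mu, sd, m, s)
    print(f"posterior so far: {mu:.2f} +/- {sd:.2f} C")
```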
Zeke
Fracking. . .
I personally think his projected CO2 levels are insanely high. Eyeballing it, I’d say he’s off by at least a factor of two.
I agree with Carrick. Recall the Hansen 1988 business-as-usual scenario. We are now told that he badly overestimated CO2 remaining in the atmosphere (at least at Skeptical Science). Seems like the biosphere is better at absorption than people thought. Perhaps this author just needs to catch up with recent data.
David Young (Comment #109631)
As I recall, looking at Hansen’s scenarios, he had high levels of chlorofluorocarbons (CFCs) going forward, and that might account for his scenarios being wrong. I think Hansen also claimed that if we ever wanted to avoid another ice age, a small plant manufacturing CFCs and spewing them into the atmosphere would do the job nicely.
Lucia,
Fracking certainly makes massive coal use less likely, at least if we invest in LNG technology for transport.
David Young (#109631) –
As I recall, one of the things Hansen predicted very well in his 1988 paper was CO2, so I don’t understand why SkS are arguing that he “badly overestimated CO2 remaining in the atmosphere.” But then there are many things which I don’t understand about SkS’s arguments. Including their interpretation of “business as usual”.
Harold, could be. I will double-check it. I think even Schmidt has made this point, namely that Hansen’s assumed forcing was far too high.
David Young (Comment #109631)
“We are now told that he badly overestimated CO2 remaining in the atmosphere (at least at Skeptical Science).”
Do you have a link? I looked at their latest and couldn’t find that. They did say he overestimated sensitivity.
One thing often overlooked about Hansen 1988 is that he compared his model data to Hansen/Lebedeff 1987, which was a land only index. That’s all they had then, and is presumably what he was predicting. Yet people now compare to land/ocean indices.
David Young,
Going by memory, I’d agree that total forcing was too high. Certainly scenario A over-estimated forcing considerably. [In Hansen et al. 1988 Scenario A seems to be more of a worst-case scenario, but it was described by Hansen in his Congressional testimony as “business as usual”.] As I recall, scenario B over-estimated forcing in CFCs and methane. But they got CO2 right.
[Edit: updated link; the GISS website gives a “403” error.]
It’s complicated. I believe scenario A is “business as usual.” But actual GHG concentrations were in fact somewhat below scenario B. Where did all those GHGs go? They must have gone into the biosphere or the oceans. Skeptical Science is, I believe, actually trying to obfuscate this point.
David Young (#109640) –
I won’t go further down the SkS rathole as it’s been “done to death” here; but obfuscation is one of their strengths.
.
As to “where the GHGs went”, well, it’s not so much that they went somewhere as that they weren’t created. Scenario A posited a wholly-unjustified doubling of CFC forcing due to “other trace gases” which might, hypothetically, contribute significantly to forcing. Never existed, except in scenario A’s hypothetical world. CFC concentrations were assumed to continue to increase exponentially; the Montreal Protocol ensured that not only was there no increase, but significant decreases. Methane had been growing but for some reason — that is, I don’t know the reason — rather levelled off. So the predicted forcing wasn’t realized, but it wasn’t because GHGs were removed from the atmosphere faster than anticipated — they just weren’t put into the air as fast as anticipated.
Nick Stokes (#109638) –
Hansen & Lebedeff 1987 is not a land-only evaluation. And Hansen et al. 1988 consistently says “global mean surface air temperature.”
HaroldW (Comment #109642)
“Hansen & Lebedeff 1987 is not a land-only evaluation.”
From the final Sec 6 of H&L:
“We have shown that the network of meteorological stations which measure surface air temperature is sufficient to yield reliable temperature change for both the northern and southern hemispheres, despite the fact that most stations are located on the continents.”
They called it a global index and filled boxes on a global grid. But there was no SST data. Met stations only. It’s the ancestor of their current Met stations index.
On Montreal, I’d note that it came into operation at the start of 1989. It’s not surprising if Hansen underestimated its eventual success.
Nick Stokes –
Yes, H&L used land measurements only; they didn’t use SSTs. But it purported to estimate global averages — it was not just a land-only index (like BEST). And in fact H&L compare their estimate to Jones’ global average, which included marine measurements.
.
Your previous (#109638) complained that Hansen et al. 1988 was being unfairly compared to global (land+ocean) indices when it should be compared to land-only. No, it was intended as a global average and therefore should be compared to global indices. If you’re trying to establish that Hansen et al. 1988 was a land-only estimate, it would be more convincing to give a quotation from the paper to that effect.
It might be possible to use land only stations, if you used coastal and island stations as proxies for SST. Couldn’t be worse than using a few proxies to estimate global mean temperature…
The point is, they have maintained the Met stations index to now, and it has risen quite a lot more than LOTI. Since it’s all GISS used at the time, and used in the comparison plots, it’s reasonable to expect that those are the numbers he was predicting.
Nick–
Not necessarily. It’s reasonable to expect Hansen predicted what he said he predicted. He then compared to whatever he thought was the closest approximation of the thing he predicted.
Had he said, then or now, that he predicted land only, it would be reasonable to believe that’s what he predicted. Presumably he knows whether he computed the temperature over land only or over land and sea. Has he said he computed temperature only over land? Or not? Because if he has not, I would tend to believe he computed over the full globe, including over the oceans.
Lucia,
I was inexact in my earlier reference to a land-only measure – I meant land stations only. No, I’m sure he computed over sea as well. It’s quite likely that he computed mean air temperature for the lowest layer of grid cells, maybe with a correction to extrapolate to the surface.
But that isn’t LOTI either, which uses SST as a proxy for SAT.
So I think the right thing to do is to simply compare with the index he did, which has been maintained.
Nick
This is an odd standard. If he had wanted to predict land stations only because that was the observation he had at hand, he could have picked out the temperatures at land stations and predicted that specific thing. But he chose not to do that.
To say that we should make an apples-to-oranges comparison because the “orange” index existed, when he, knowing it existed, chose to compute “apple” instead of “orange”, is just odd.
What makes sense is to compare to the index that one believes most closely represents the thing he actually computed and predicted, whether or not that index existed at the time.
I should add: it’s especially odd to say that we should compare what he predicted to the index that existed at the time and not to GISTemp, because he both computed a global temperature projection and then worked to create GISTemp, a global series that might be suitable for comparing to his projection. It makes no sense to say we should think he wasn’t predicting the series he later labored to create!
” It makes no sense to say we should think he wasn’t predicting the series he later labored to create!”
No, it makes very good sense. You predict the index you have, because you know how it behaves. You may aspire to a better index in the future, but you can’t know how it will behave or what tradeoffs will be made.
H&L are well aware of the deficiencies of ocean measurement. SST gives coverage, at the expense of measuring a proxy. Most people think that is a good tradeoff – I certainly do. Ocean SAT we simply can’t record to a sufficient extent. Hansen is no doubt in his model estimating at some distance above the surface, maybe with subsequent correction. Whether that measure corresponds best to his MET stations index or to a later index with SST is a judgment that I don’t think should be made for him.
Hi Zeke–I’m a techno-optimist, too. But… 3,000 quads per year. Over half from coal… That’s what 2075 looks like to me.
I’m not clear what exactly folks are trying to evaluate.
Hansen’s ability to forecast forcings? Probably outside his area of expertise.
His model’s ability to get things right?
It would be nice if they had kept the old model and we could just run it with updated forcings.
I suppose he didn’t keep his old code…
@lucia (Comment #109650)
February 4th, 2013 at 11:02 pm
Scientists didn’t build the LHC without the expectation of discovering the boson they had predicted.
Steven Mosher (#109653): “would be nice if they kept the old model and we’d just run it with updated forcings.”
I agree that it’s unlikely that the old code is available anywhere (but it would be fun to find a huge fan-fold listing in someone’s file drawer somewhere). Hansen does say that the model’s equilibrium sensitivity for doubled CO2 for global mean surface air temperature is 4.2 K, which suggests that it would significantly overestimate response if run with historical forcings.
.
As a bit of an aside, they note that this ECS value is near the mid-range of contemporaneous GCMs. But it would sit at the high end of the AR4 models, which averaged 3.2 (max 4.4). I haven’t seen a survey of the CMIP5 models’ ECS, although Gavin reports that the GISS CMIP5 models have sensitivities in the range of 2.4 to 2.7. [GISS-EH and GISS-ER in AR4 both reported ECS of 2.7, so it suggests not much change there.]
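As a crude first-order check (my arithmetic, and it assumes the warming trend scales roughly in proportion to ECS, which is only approximately true for a transient run): 4.2/3.2 ≈ 1.3, so the 1988 model run with realized forcings would be expected to come out on the order of 30% high in trend relative to a mid-range AR4 model.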
Nick Stokes, “SST gives coverage, at the expense of measuring a proxy. Most people think that is a good tradeoff – I certainly do.”
It is a good trade off, but you have to be careful mixing databases. You could end up with 0=1 if you are not careful 🙂
Zeke (Comment #109626)
February 4th, 2013 at 4:43 pm
“I suspect that we will phase out most coal use even in the absence of climate-driven mitigation simply due to air quality and cost issues”
Yes… even if we look at the Chinese case… the amount of non-fossil generating capacity being built is substantial. Wind, hydro, and nuclear compete well against coal in China on a cost basis. Various non-economic reasons are slowing deployment of alternatives to burning coal. Viewing global emissions through an American cost lens is one of the largest ‘pet peeves’ I have with the ‘warmist’ side.
Nick,
Who is ‘you’? Different people make different choices. The way to figure out what he predicted is to either (a) read the paper and find out how he computed his temperature anomalies or (b) if the former leaves ambiguity, ask Hansen. The paper, and Hansen himself, have always given the impression he computed the temperature over the full surface including over the ocean. It doesn’t make sense to then say “[anonymous/you] predict the index [anonymous/you] have”. Whoever this “you” of yours is, whatever you think dictates the logic of this “you”, and whatever we would do if Hansen had done what you think this “you” ‘would’ have done: if Hansen didn’t predict over land only, then it doesn’t make sense to compare his predictions to land only. You seem to be torturing “logic” to get the result you “want”.
This is pretty irrelevant to how we decide which metric is most suited to comparing to Hansen’s projections.
Oh? Look, when testing party “A’s” prediction, party “B” gets to make judgements based on what makes sense. Hansen’s predictions are global.
Nick Stokes –
I confess that I haven’t paid any attention to GISS’ met station index; I’ve only looked at their global land-ocean temperature index. As you say, the met index is a global estimate based solely on meteorological station data. [That is, no marine SAT or SST included.]
.
The GISS page describes their met-station index (MSI?) as an update of figure 6(b) of Hansen et al. (2001). Looking at said figure 6(b), one sees that (relative to a 1951-1980 baseline), the temperature anomaly as of 2000 was a shade shy of 0.4 K. The current graph shows anomalies around 0.6 K. These are calculations based on recent measurements, so there shouldn’t have been a lot of new meta-data available since 2001. And the prior calculation was “modern” enough that large effects such as Tobs should have been fully compensated in it. Such a large change, relative to recent temperatures, and relative to a “modern” index, seems improbable on its face. Now Mosher tells me that such adjustments are kosher, and I’ll take his word for it. But it doesn’t look right.
Roy Spencer has posted UAH January 2013: +0.51. It does sort of look like an El Nino.
Wow! Didn’t see that spike coming. From Bob Tisdale’s posts it does not look like El Nino. Interesting!
http://bobtisdale.files.wordpress.com/2013/01/preliminary-global.png
bugs:
Important nuance: the LHC was built to test the sector of the standard model that predicts the Higgs boson. The way you describe it, they went looking for confirmatory evidence, which would indeed be a badly flawed research objective.
Climate modelers do not have to get the scenario right, but they should be interested in applying their old code to provide out-of-sample testing of their models. What we sometimes overlook in these predictions of future climate change, I think, is that we need not only the models to get it right (and be verified with out-of-sample tests) but also the scenarios to be right. I suspect the former will be an easier task than the latter.
I don’t think it means an El Nino. The E Pacific was cool. Mainly just a recovery from the big freeze last month in Russia and NW North America.
HaroldW (Comment #109661)
Harold, here is a plot of Hansen’s original with GISS LOTI and MSI to end 2011 superimposed. MSI tracks LOTI pretty well, but doesn’t have as much cool-side divergence in recent years.
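For anyone who wants to redo that comparison, here is a sketch using pandas. It fetches only the two GISS indices (overlaying Hansen’s scenarios would additionally require the digitized model output), and the CSV filenames and layout below are the ones GISS publishes as of this writing, which is an assumption that may change:

```python
# Sketch: compare GISTemp LOTI (land + ocean) with the met-stations-only
# index. Assumes the current GISTemp CSV layout: one title row, a header
# row, anomalies in deg C, and "***" marking missing values.
import pandas as pd
import matplotlib.pyplot as plt

BASE = "https://data.giss.nasa.gov/gistemp/tabledata_v4/"

def load_annual(filename):
    df = pd.read_csv(BASE + filename, skiprows=1, na_values="***")
    # "J-D" is the January-December annual mean anomaly.
    return df.set_index("Year")["J-D"].astype(float)

loti = load_annual("GLB.Ts+dSST.csv")  # land + ocean (LOTI)
msi = load_annual("GLB.Ts.csv")        # met stations only

ax = loti.plot(label="LOTI (land + ocean)")
msi.plot(ax=ax, label="Met stations only")
ax.set_ylabel("Anomaly (°C, 1951–1980 base)")
ax.legend()
plt.show()
```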
Nick Stokes (#109683)
Thanks! It’s a very good match, rather remarkable to think of it. But I think if you update your database to GISS’s latest, it won’t match as well.
I think the estimated contributions of QPOs (which include the AMO) are not plausible.
http://www.eoearth.org/files/172601_172700/172603/amo_timeseries_1856-present-1.jpg
1904-1944 increasing AMO -> QPOs the main driver of warming.
1944-1976 decreasing AMO -> QPOs the main driver of cooling.
1976-2010 increasing AMO -> QPOs almost no influence.
And this is not only an “Atlantic” issue, because it is the same story with the PDO, which appears not to be included in the QPOs.
I think the misjudgement of AMO and ENSO contributions to warming will be the next big issue.
In a previous post from Zeke Hausfather, an estimate was given for the contribution of the AMO to recent warming, which is also much higher than Ring et al.’s result.
http://rankexploits.com/musings/2011/the-atlantic-multidecadal-oscillation-and-modern-warming/
And even Zeke’s approach has a flaw, and it is an underestimate.
The error is that he assumes the increase in the rest of the world’s oceans is anthropogenic, as he does not consider a long-term effect of ENSO.
ENSO is totally misrepresented in climate science, where typically the temperature effect of the ENSO process is assumed to be linear in the ENSO index. This is false!
El Ninos’ leftover warm water pools continue to linger for years and are driven polewards out of the ENSO index region. This is clearly visible in sea surface plots and in anecdotal evidence, such as tropical fish off Alaska after El Ninos. Once the pools have left the ENSO index area, their warming is no longer mirrored in the ENSO index. This is the long-term warming effect of El Ninos.
Bob Tisdale has shown that recent warming can be split up into step-ups through El Ninos and constant temperature thereafter.
I would suggest that the constant temperatures are the result of a slow decay of the leftover warm water superseded by a warming trend. A significant part of the overall trend is then due to the positive PDO since 1976/77.
It should further be noted that La Ninas do not produce leftover cool water pools. Cool water sinks down again once the upwelling stops, due to gravity.
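A toy sketch of the distinction being drawn (my own construction, not Tisdale’s analysis; the synthetic index, the amplitudes, and the 36-month decay time are all made-up assumptions): a linear mapping scales the ENSO index instantaneously, while a “leftover warm pool” mapping convolves only the El Nino (positive) phases with a multi-year decay, producing the step-up-then-slow-decay behavior described above:

```python
# Toy contrast between two ENSO-to-temperature mappings; purely
# illustrative, with made-up parameters.
import numpy as np

rng = np.random.default_rng(0)
months = 600
enso = rng.normal(0.0, 1.0, months)  # stand-in for an ENSO index

# (1) The usual assumption: temperature effect linear in the index,
# with no memory.
linear_effect = 0.1 * enso

# (2) The "leftover warm pool" picture: only El Nino (positive) phases
# leave warm water behind, decaying over years; La Ninas leave nothing
# because the cool water sinks again.
tau = 36.0                               # assumed decay time, months
kernel = np.exp(-np.arange(120) / tau)   # 10-year causal decay kernel
el_nino_only = np.clip(enso, 0.0, None)
lagged_effect = 0.02 * np.convolve(el_nino_only, kernel)[:months]

# The linear version averages back toward zero; the lagged version
# ratchets up after each El Nino and only decays slowly afterwards.
print(f"linear mean: {linear_effect.mean():+.3f}  "
      f"lagged mean: {lagged_effect.mean():+.3f}")
```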
Hansen et al. 1988 had a climate sensitivity that was too high (4.2°C per CO2 doubling) and forcing scenarios which were on the low side, which somewhat compensated. But if you actually READ the 1988 paper you find: “Therefore climate sensitivity would have to be much smaller than 4.2 C, say 1.5 to 2 C, in order for us to modify our conclusions significantly.”
More detail at RR
Eli, “Therefore climate sensitivity would have to be much smaller than 4.2 C, say 1.5 to 2 C, in order for us to modify our conclusions significantly.”
I must have missed the press conference.