I came up with a simple empirical model to estimate climate sensitivity based on a global mean surface temperature record and estimated values of climate forcing. Using Land-Ocean GMST values from NASA GISS, I obtain a climate sensitivity of 1.7 C. This is below the range of climate sensitivity predicted by most GCMs.
Better yet, I actually get a prognostic tool to predict future temperatures based on expected forcings. I’ll make predictions in a future blog post. For now, here’s a chart showing how my “model” compares to Land/Ocean data from GISS.

Figure 1: GISS land/ocean data are indicated in purple; error bars indicate uncertainty intervals of 0.05 C. An estimate of real-world heat flux forcing between 1880 and 2005, provided by Gavin Schmidt at RealClimate, is illustrated in light blue. Model predictions with a time constant of 14.5 years and an effective heat capacity of 1000 MJ/m2 are indicated in red. The correlation between the model and the data is R2 = 0.89.
Model Description
This predictive curve is based on a two-parameter model that treats the climate as a point mass with a single temperature and heat capacity. This basic model for the planet’s climate is described in Schwartz (2007); when estimating the climate time constant, Schwartz assumed the forcings acting on the planet were spectrally white.
To create a predictive model, I replaced the assumed white noise with the historical estimates of forcing Gavin provided at RealClimate, and I also explicitly accounted for the heat capacity of the climate.
According to this model, the ‘climate temperature anomaly’ θ is governed by:
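In the notation defined just below, this is the standard single-lump energy balance. (The original equation images are not reproduced here, so equations (1)-(3) and the steady-state relation further down are reconstructions consistent with the surrounding text.)

$$\frac{d\theta}{dt} + \frac{\theta}{\tau} = \alpha\, q(t) \qquad (1)$$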
where α⁻¹ is the effective heat capacity per unit area, τ is the time constant for the climate, and q is a forcing (a heat flux).
If we know the temperature anomaly at time zero, θ(0), and we know the forcing over the period from 0 to Δt, we can obtain the temperature at time Δt by integrating:
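$$\theta(\Delta t) = \theta(0)\, e^{-\Delta t/\tau} + \alpha \int_0^{\Delta t} e^{-(\Delta t - t')/\tau}\, q(t')\, dt' \qquad (2)$$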
where integration is from time 0 to Δt.
If the forcing is held constant at q0 during the short time period, this becomes:
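$$\theta(\Delta t) = \theta(0)\, e^{-\Delta t/\tau} + \alpha\, \tau\, q_0 \left(1 - e^{-\Delta t/\tau}\right) \qquad (3)$$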
Determination of τ and α
Given a series of values for the forcing as a function of time, q0(t), an initial condition for the temperature, and values for τ and α, it is possible to step (3) forward in time numerically to obtain model values for the temperature anomaly, θ.
Since this is possible, I was able to determine estimates for the parameters τ and α by finding the values that minimized the mean square error between the predictions and the GISS Land/Ocean temperature data. I obtained τ = 14.5 years and α⁻¹ = 1000 MJ/m^2. (This corresponds to a body of water with a depth of 150 m; Hansen et al. 1988 report that the globally averaged mixed layer depth of the ocean is 240 m.)
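A minimal sketch of how such a fit can be set up in Python (this is an illustration, not the actual spreadsheet used; the array names, the one-year time step, the zero initial condition, and the use of scipy's Nelder-Mead minimizer are all assumptions; the update rule is just equation (3) applied year by year):

```python
import numpy as np
from scipy.optimize import minimize

SECONDS_PER_YEAR = 3.156e7

def model_anomaly(tau_years, alpha_inv, q, theta0=0.0):
    """Step equation (3) forward with annual forcing values.

    tau_years : climate time constant (years)
    alpha_inv : effective heat capacity per unit area (J/m^2/K)
    q         : annual-mean forcing series (W/m^2)
    """
    tau_s = tau_years * SECONDS_PER_YEAR
    decay = np.exp(-SECONDS_PER_YEAR / tau_s)   # e^(-dt/tau) for dt = 1 year
    theta = np.empty(len(q))
    prev = theta0
    for i, q0 in enumerate(q):
        # theta(t+dt) = theta(t) e^(-dt/tau) + alpha*tau*q0*(1 - e^(-dt/tau))
        prev = prev * decay + (tau_s / alpha_inv) * q0 * (1.0 - decay)
        theta[i] = prev
    return theta

def mse(params, q, obs):
    """Mean square error between model and observed anomalies."""
    tau_years, alpha_inv = params
    return np.mean((model_anomaly(tau_years, alpha_inv, q) - obs) ** 2)

# q   = annual forcing series, e.g. the RealClimate values (W/m^2), 1880-2005
# obs = GISS Land/Ocean anomalies (C) for the same years
# fit = minimize(mse, x0=[15.0, 1.0e9], args=(q, obs), method="Nelder-Mead")
# tau_hat, alpha_inv_hat = fit.x
```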
Determination of Climate Sensitivity
The climate sensitivity to a known level of forcing, q_2xCO2, can be obtained by setting the time-dependent term on the left-hand side of (1) to zero. This results in:
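$$\theta_{2\times\mathrm{CO_2}} = \alpha\,\tau\, q_{2\times\mathrm{CO_2}}$$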
Using the time constant and heat capacity obtained above, and a forcing of q_2xCO2 = 3.75 W/m2 for a doubling of CO2, I determined a steady-state temperature anomaly associated with a doubling of CO2 of θ_2xCO2 = 1.7 C. This is greater than Schwartz’ estimate of 1.1 C, and just brushes the low end of the IPCC range of 1.5 C to 4.5 C.
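As a quick consistency check on these numbers: τ = 14.5 yr ≈ 4.58×10^8 s, so θ = ατq = (4.58×10^8 s × 3.75 W/m^2) / (10^9 J/m^2/K) ≈ 1.7 C, where the heat capacity is written here with its full units of J/m^2/K.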
Can we take this model seriously?
Who knows? This is a rather simple model for the climate system. That said, quite often, something can be learned from simple zero dimensional models that average out the effects of the internal workings of a system. Also, the model has only two adjustable parameters, and resulted in a correlation coefficient of 0.87 when compared to measured data. This high correlation coefficient suggests that the dominant physical processes may average out, rendering the simple model a reasonably faithful approximation.
Predictions
Based on this model, I can make projections for future temperatures (assuming I can guess the level of GHGs!). I’ll provide some estimates of projected values soon. (If someone tells me how to determine uncertainty intervals, I’ll calculate those too!)
===
Updates: 2/19/2008. Incorporating better volcano data raises the sensitivity of this model. I’m still exploring, but the sensitivity is now around 2 C.
lucia,
I’ve been playing with a lumpy-like model, but the heat capacity is throwing me off.
You state above that you get 1000 MJ/m2/degK and that this is equivalent to 150 m of water. (I assume you just missed the degK part of the heat capacity.) If I start with 1000 MJ/m2/degK I get a slightly different answer:
(1000 MJ/m2/degK) / (4180 J/kg/degK) / (1000 kg/m3) = 239m
Does that seem right?
John– Your numbers are right.
I actually do (1/α)/(ρ cp) with ρ = 1000 kg/m^3 and cp = 4200 J/kg-K. But the spreadsheet I opened quickly has 1/α = 475 MJ/(m^2-K), and that matches h = 110 m.
The exact values of α and τ vary from analysis to analysis. I’ve since learned to upload the Excel spreadsheets to the server, filed nicely by WordPress, but I’m afraid back in January I hadn’t learned to do that. I figured I’d just remember which spreadsheet went with which post.
So, I don’t know if my typo was the 1000, or the water depth associated with the particular calculation in this post! :(. Things changed when I did the monthly version and as I learned a few more things (like getting monthly data for forcing, etc.).
The only factors that control the effect of CO2 on climate are the amount of thermal radiation from the Earth in the 13.5 to 15 micron band and the saturation of this band at the concentration levels of CO2.
At our current concentration of 385ppmv the band is 85% – 90% saturated. At 770ppmv it will be about 88% – 92% saturated.
A doubling of CO2 from our current level of 385 ppmv to 770 ppmv will only increase the global temperature by something less than 0.3°C, not the 3°C of the climate models, which use a relationship based on the 100 ppmv increase from the preindustrial 280 to 380 ppmv producing the entire 0.6°C of warming in the IPCC 2001 report.
If nothing else, since the MBH98 temperature proxy was discredited, the IPCC can no longer use the 0.6°C value but has to reduce it by the observed natural warming of 0.5°C/century that has occurred since the Little Ice Age.
This leaves only 0.1°C of warming possibly due to CO2, and when this is factored in, the IPCC models will yield results of no more than 0.5°C for a doubling of CO2.
This is nothing to worry about.
The observed cooling of the past 6 years, and the prediction that this will continue to 2030, is however of concern because while global warming is beneficial, global cooling brings hardships to a large segment of the population, both human and wildlife.
Norm – the absorption saturation level of the band is not what determines the temperature response to GHG levels. Rather, it is the overall effective thickness of the layers that already have close to 100% saturation in those bands, plus the effects of pressure broadening that increase absorption outside of those bands. (Some complicated numerical analysis shows the response is roughly logarithmic in CO2 concentration at close to present levels.)
Think of a layer of dirt covering some structure (below ground, say). The dirt is 100% absorbing (of light and infrared) at even very low thicknesses; nevertheless, a thicker coating of dirt still increases the insulation of the below-ground structure from temperature changes above. This is because the thicker layer reduces the rate at which heat can transfer from above to below.
Same thing here, except that the GHG layers are transparent to incoming radiation (sunlight) but opaque to much of the outgoing thermal radiation. Thicker layers (even if already at 100% opacity) result in slower heat transport from surface to space, so you get heating of the surface beyond any “saturation level” effect.
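For reference, the simplified expression commonly used to quantify “roughly logarithmic” is ΔF ≈ 5.35 ln(C/C0) W/m2 (Myhre et al. 1998), which gives ΔF ≈ 5.35 ln 2 ≈ 3.7 W/m2 for a doubling, essentially the q_2xCO2 = 3.75 W/m2 used in the post above.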
Not that I understood the model very well, but doesn’t it assume that the observed temperature and the equilibrium temperature are the same? How well does your model hindcast?
Either way, I’ve written a completely different analysis of my own:
http://residualanalysis.blogspot.com/2008/07/heres-how-you-can-estimate-co2-climate.html
The result I get is 3.46C, and the hindcast looks better than I expected. I’ll write about that some other time. Scrutiny is welcome.
Joseph– No it doesn’t assume the observed temperature is the same as the equilibrium temperature. Not by a long shot. In fact, the way I’ve scaled the “forcing”, I expressed that as the equilibrium temperature at any particular time “t”. You can see it’s not the same as the current temperature.
Well, I stand corrected. But 1.7C is impossibly low, isn’t it?
Let’s see. At 1.7C, equilibrium temperature would be given by
T’ = 5.65 log C + b
where C is the CO2 concentration and b is a constant. If we consider a likely point of equilibrium from the past, say, 285 ppmv and a temperature of -0.2C, then
b = -0.2 – 5.65 log 285 = -14.07
So
T’ = 5.65 log C – 14.07
What would be the equilibrium temperature at present? Let’s say CO2 concentration is 380 ppmv (I think a bit of an underestimate). Then
T'(present) = 0.5 degrees
So the equilibrium temperature would be slightly lower than the observed temperature. That is, observed temperature would have to be in a downward trend.
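A minimal Python sketch of this arithmetic (the function name and anchor-point defaults are mine, chosen to match the numbers above):

```python
import numpy as np

def equilibrium_anomaly(co2_ppmv, sensitivity=1.7, c0=285.0, t0=-0.2):
    """T' = a*log10(C) + b, anchored so that T'(c0) = t0,
    with a chosen so that a*log10(2) equals the sensitivity."""
    a = sensitivity / np.log10(2.0)           # 1.7 / 0.301 ~= 5.65
    b = t0 - a * np.log10(c0)                 # ~= -14.07 for c0 = 285
    return a * np.log10(np.asarray(co2_ppmv)) + b

print(equilibrium_anomaly([285.0, 380.0]))    # -> [-0.2, ~0.5]
```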
Joseph–
You need to use more words. Where does the 5.65 come from? Are there units in that?
As for the 1.7 C, it would be on the low side. But the sensitivity to doubling of CO2 discussed here would include feedbacks due to things like water vapor, clouds, ice albedo, etc. So, if the real sensitivity is lower than that due to CO2 alone, that would suggest the net effect of everything else is to lower the overall sensitivity.
That said: This whole method could be flawed. It’s based on a very simplified model of the earth’s response. But the purpose of this exercise was to see what happened if I used the simplified model Schwartz suggested and applied it in a different way.
Sorry for skipping the explanation. If you consider the concept of doubling, there’s clearly a logarithmic relationship between equilibrium temperature and CO2, as follows:
T’ = a log C + b
(This would have to be an approximation, a good one though, but I’m not going to get into that.)
Sensitivity is thus:
(a log 2C + b) – (a log C + b)
or
a log 2
Your estimate says:
1.7 = a log 2
Which means
a = 5.65
To calculate b you just need to assume an equilibrium point. What this means is that we could come up with a time series of equilibrium temperatures, and I believe you would see that this time series is below the observed temperature time series on average. What should happen instead is that the difference between the equilibrium temperature time series and the observed temperature time series should be proportional to the net forcing (or a log of it, perhaps).
I can’t say if there’s an error in your analysis, but from the data I’ve seen, the estimate seems lower than plausible.
Joseph–
The Schwartz paper itself came out with sensitivities that were even lower. Most sensitivities are based on models. This one is an empirical estimate.
However, there are assumptions in the estimate. For this blog post, the key tenuous assumption is equation (1). The next one is that the forcings from GISS are OK. The third is that the temperature series is accurate.
That’s pretty much it.
btw, this is the hindcast based on my 3.46 C model. You might be interested to know that the hindcast predicts a temperature change rate of 2.2 C/century from 1998 to the present. And yet, the predicted temperature trend is not too far below the observed temperatures for the same range. I’m quite confident the trend will rejoin 2 C/century sooner or later.
Joseph– So you neglect all aerosols, albedo etc? I guess we’ll see which matches best in a while!
I do, and I think that makes the model all the more surprising 🙂
Obviously, CO2 concentrations probably confound all sorts of other anthropogenic things. The formulas are theoretically for CO2 only, but they must be proxying other forcings in there, imperfectly of course. This is a limitation if we were trying to find out the precise effect of CO2 only, but when it comes to a hindcast, it probably helps.
Joseph,
I wonder if there is some confusion about whether one should use log with base 10 or natural log with base e. Log10 of 2 is 0.301 but ln 2 is 0.693. Most of the calculations and formulas that I have seen about CO2 concentration use the natural log function. I did some sums a while back trying to relate temperature changes to the log of the CO2 ratio and found my answers were different to those normally mentioned. I traced it to the fact that I was using log10 instead of ln.
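For what it’s worth, the two conventions give the same sensitivity if used consistently: since log10(x) = ln(x)/ln(10), a coefficient a fitted with log10 corresponds to a/ln(10) ≈ a/2.303 with natural logs, and a·log10(2) = (a/ln 10)·ln(2) either way. Joseph’s figures above are internally consistent in base 10: 1.7/log10(2) ≈ 5.65.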
Greetings from Spain
Lucia,
Joseph– No it doesn’t assume the observed temperature is the same as the equilibrium temperature.
I think it’s interesting if we assume the current flattish period is the temperature in equilibrium plus weather.
If we use the average temperature anomaly (0.54) and average CO2 concentration (376.68) for 2001-2007 and utilize Joseph’s formulae:
“Equilibrium constant” b
b = -0.2 – a log 285
or
b = 0.54 – a log 376.68
so with:
-0.2 – a log 285 = 0.54 – a log 376.68
a = 0.74 / (log 376.68 – log 285) ≈ 6.11
sensitivity = a log 2
sensitivity ≈ 1.84
Though I probably find that interesting because I am easily amused. 🙂
Lucia, while I think discussions of climate “sensitivity” are important, I think that they far too often distract from an even more important sensitivity, namely, how quickly the environment responds to a given forcing.
It seems rather clear to me that our environment is responding very quickly to the forcing we’ve made so far, and we can expect even more changes ahead. Ocean acidification is proceeding apace; we have had an increase in drying El Niños, an increase in heavy/rapid downpours, and pronounced warming and tundra/glacial/ice sheet melt in high latitudes. Lloyd’s and other insurers agree with the scientists who report this, as I note here: http://mises.org/Community/blogs/tokyotom/archive/2008/07/18/marlo-lewis-cei-serves-up-refreshingly-distracting-climate-science-and-policy-distortions.aspx.
It’s time to start thinking how to prepare our societies for the risks we face.
Tom–
The other sensitivity you mention may be related to the time constant.
The “Lumpy” analysis also seeks to determine the time constant. The fit in this particular blog post says the time constant is short compared to the value predicted by modelers.
If this model were correct, (a) we’d expect whatever happens to happen more quickly than the modelers predict, but (b) ultimately, the total effect (amount of melting, etc.) would be less.
The first prediction (a) would fall right in line with what you are seeing.
That said: This model is likely oversimplified and/or doesn’t use the right data to get the time constant of the ocean.
Actually, Lucia, it doesn’t seem that your attention to the time constant addresses my point at all, which is not how rapidly temperatures may rise, but how quickly various other manifestations of climate change – ice shelf/sheet/glacier melting, sea level rise, ocean pH changes – occur.
Tom–
Actually, time constant is the right parameter– in the context of this particular physical model!
This model was based on an exploration of Schwartz’s paper, which proposed the simplest possible physical model of the earth. In this context, the time constant and sensitivity are the “only” parameters, and there is just one of each. So the time constant sort of does encapsulate all those things, but smears everything together.
The problem is the physical model is greatly simplified. The whole planet has only one temperature. So, a physical model of this sort could never begin to say whether one bit of the earth might respond faster than another.
This particular model, fit to data, suggests the time constant is quick. That would mean when we apply forcing, the earth heats up quickly.
But the real issue is: The model could be totally wrong! (Or not.) I would certainly not trust it unless it forecasts. I shlepped up the fit near the beginning of the year. I’ve gotten diverted on “perfecting” the fit. I may go back in a bit.
I’d be happier if I could at least have a “two-lump” model with an atmosphere and an ocean. But for that, it would be better to have good ocean temperatures and good atmosphere temperatures since the 1800s. I don’t think I have them.