All posts by SteveF

Human Caused Forcing and Climate Sensitivity

Some recent exchanges in comments on The Blackboard suggest some confusion about how a heat balance based empirical estimate of climate sensitivity is done, and how that generates a probability density function for Earth’s climate sensitivity.  So in this post I will describe how empirical estimates of net human climate forcing and its associated uncertainty can be translated to climate sensitivity and its associated uncertainty. I will show how the IPCC AR5 estimate of human forcing (and its uncertainty) leads to an empirical probability density function for climate sensitivity with a relatively “long tail”, but with most probable and median values near the low end of the IPCC ‘likely range’ of 1.5C to 4.5C per doubling.

We can estimate equilibrium sensitivity empirically using a simple energy balance: energy is neither created nor destroyed.  At equilibrium, a change in radiative forcing will cause a change in temperature which is approximately proportional to the sensitivity of the system to radiative forcing.

The IPCC’s Summary for Policy Makers includes a helpful graphic, SPM.5.  This graphic is shown in Figure 1.

Fig1

The individual human forcing estimates and associated uncertainties are shown in the upper panel, and the combined forcing estimates are shown in the lower panel.  Combined forcing estimates, relative to the year 1750, are shown for 1950, 1980, and 2011.  These forcing estimates include the pooled “90% uncertainty” range, shown by the thin black horizontal lines, meaning there is a ~5% chance that the true forcing is above the stated range and a ~5% chance it is below the stated range.  The best estimate for 2011 is 2.29 watts/M^2, and there is a roughly equal probability (~50% each) that the true value is above or below 2.29 watts/M^2.  Assuming a normal (Gaussian) probability distribution (which may not be exactly right, but should be close… see central limit theorem), we get the following PDF for human forcing from SPM.5:

Fig2

Figure 2 shows that the areas under the left and right halves of the probability distribution are identical, and the cumulative probability (area under the curve) crosses 0.5 (the median probability) at 2.29 Watts/M^2, as we would expect.  The short vertical red lines correspond to the 5% and 95% cumulative probabilities.  The uncertainty represented by Fig 2 is in net human forcing, not in climate sensitivity.  The shape of the corresponding climate sensitivity PDF is determined by the uncertainty in forcing (as shown in Fig 2), combined with how forcing translates into warming.  When the forcing PDF is translated into a sensitivity PDF, the median value for that sensitivity PDF curve must correspond to 2.29 watts/M^2, because half the total forcing probability lies above 2.29 watts/M^2 and half below.
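For readers who want to reproduce a curve like Figure 2, here is a minimal Python sketch (illustrative only, not the code used to produce the figures) that builds a Gaussian forcing PDF from the AR5 numbers quoted in this post; the standard deviation is simply back-calculated from the 1.13 to 3.33 watts/M^2 5%–95% range discussed below, so treat it as an assumption.

import numpy as np
from scipy.stats import norm

# AR5 best estimate and 5%-95% range for net human forcing in 2011 (watts/M^2)
F_MEDIAN = 2.29
F_05, F_95 = 1.13, 3.33

# Back out a standard deviation assuming a Gaussian: the 5%-95% interval
# of a normal distribution spans about 2 * 1.645 standard deviations.
sigma = (F_95 - F_05) / (2 * norm.ppf(0.95))     # ~0.67 watts/M^2

forcing = np.linspace(0.0, 4.5, 901)             # grid of forcing values
pdf = norm.pdf(forcing, loc=F_MEDIAN, scale=sigma)

# Sanity checks: half the probability lies below the median; the probability
# below 1.13 comes out near 5% (not exactly, since the AR5 range is not
# perfectly symmetric about 2.29)
print(round(norm.cdf(F_MEDIAN, F_MEDIAN, sigma), 2))   # 0.5
print(round(norm.cdf(F_05, F_MEDIAN, sigma), 2))       # ~0.04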

We can translate any net forcing value to a corresponding estimate of effective climate sensitivity via a simple heat balance if:

1) We know how much the average surface temperature has warmed since the start of significant human forcing.

2) We assume how much of that warming has been due to human forcing (as opposed to natural variation).

3) We know how much of the net forcing is being currently accumulated on Earth as added heat.

The Effective Sensitivity, in degrees per watt per sq meter, is given by:

ES = ΔT/(F – A)      (eq. 1)

Where ΔT is the current warming above pre-industrial caused by human forcing (degrees C)

F  is the current human forcing (watts/M^2)

A  is the current rate of heat accumulation, averaged over the Earth’s surface (watts/M^2)

For this post, I assume warming since the pre-industrial period is ~0.9 C based on the GISS LOTI index, and that 100% of that warming is due to human forcing. (That is, no ‘natural variation’ contributes significantly.)

Fig3

Heat accumulation is mainly in the oceans, with small contributions from ice melt and warming of land surfaces.   The top 2 Km of ocean (Figure 3) is accumulating heat at a rate of ~0.515 watt/M^2 averaged over the surface of the Earth, and ice melt of ~1 mm/year (globally averaged) adds ~0.01 watt/M^2. Heat accumulation below 2 Km ocean depth is likely small, but is not accurately known; I will assume an additional 10% of the 0-2KM ocean accumulation, or 0.0515 watt/M^2.  That comes to: 0.577 watt/M^2.  The heat accumulation in land surfaces is small, but difficult to quantify exactly;  for purposes of this post I will assume 0.02 watt/M^2, bringing the total to just under 0.6 watt/M^2.

Climate sensitivity is usually expressed in degrees per doubling of carbon dioxide concentration (not degrees per watt/M^2), with an assumed incremental forcing of ~3.71 watt/M^2 per doubling of CO2, so I define the Effective Doubling Sensitivity (EDS) as:

EDS = 3.71 *  ES  =  3.71 * ΔT/(F – A)        (eq. 2)

If we plug in the IPCC AR5 best estimate for human forcing (2.29 watts/M^2 as of 2011), a ΔT of 0.9C and heat accumulation of 0.6 watt/M^2, the “best estimate” of EDS at the most probable forcing value is:

EDS = 3.71 * 0.9/(2.29 – 0.6)  = 1.98 degrees per doubling     (eq. 3)
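As a quick sanity check on equation 3, here is a one-function Python sketch of the heat balance (illustrative only), using the values above: ΔT = 0.9C, F = 2.29 watts/M^2, A = 0.6 watt/M^2.

F2X = 3.71   # incremental forcing per doubling of CO2, watts/M^2

def eds(delta_t, forcing, accumulation):
    """Effective doubling sensitivity (eq. 2), in degrees C per doubling of CO2."""
    return F2X * delta_t / (forcing - accumulation)

print(round(eds(0.9, 2.29, 0.6), 2))   # ~1.98 C per doubling, as in eq. 3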

This is near the lower end of the IPCC’s canonical 1.5 to 4.5 C/doubling range from AR5, and is consistent with several published empirically based estimates.  2.29 watts/M^2 is the best estimate of forcing in 2011 from AR5, but there is considerable uncertainty, with a 5% to 95% range of 1.13 watts/M^2 to 3.33 watts/M^2.  By substituting the forcing values from the PDF shown in Figure 2 into equation 2, we naively translate (AKA, wrongly translate) the forcing probability distribution to an EDS distribution, with probability on the y-axis and EDS on the x-axis.  Please note that according to equation 2, as the forcing level F approaches the rate of heat uptake A, the calculated value for EDS approaches infinity.  That is, any forcing level near or below 0.6 watt/M^2 is likely unphysical, since the Earth has never had a ‘thermal run-away’; we know the sensitivity is not infinite.  The PDF shown in Figure 2 shows a finite probability of forcing at and below 0.6 watt/M^2, so if we are reasonably confident in the current rate of heat accumulation, then we are also reasonably confident the PDF in Figure 2 isn’t exactly correct, at least not in the low forcing range.

Fig4

Figure 4 shows the result of the naive translation of the forcing PDF to a sensitivity PDF.  To avoid division by zero (equation 2), I limited the minimum forcing value to 0.7 watt/M^2, corresponding to a sensitivity value of ~33C per doubling.  There is much in Figure 4 to cast doubt on its accuracy.  How can a sensitivity of 18C per doubling be 10% as likely as 2C per doubling?  How can it be that 50% of the forcing probability, which lies above 2.29 watts/M^2, corresponds to a tiny area to the left of the vertical red line?  Figure 4 seems an incorrect representation of the true sensitivity PDF.

We expect from Figure 2 that there should be a 50:50 chance the true human forcing is higher or lower than ~2.29 watts/M^2, corresponding to ~2C per doubling sensitivity (shown by the red vertical line in Figure 4).  Yet the area under the PDF curve to the left of the vertical line in Figure 4 (corresponding to human forcing greater than 2.29 watts/M^2, and so lower climate sensitivity) is very small compared to the area to the right of the vertical line (corresponding to human forcing below 2.29 watts/M^2, and so higher climate sensitivity).  Why does this happen?

The problem is that the naive translation between forcing and sensitivity using equation 2 yields an x-axis (the sensitivity axis) which is “compressed” strongly at low sensitivity (that is, at high forcing) and “stretched” strongly at high sensitivity (that is, at low forcing).  By “compressed” and “stretched” I mean relative to the original linear x-axis of forcing values.  Compressing the x-axis at high forcing makes the area under the low sensitivity part of the curve smaller than correct, while stretching the x-axis at low forcing makes the area under the high sensitivity part of the curve larger than correct.   The result is that relative areas under the forcing PDF are not preserved during the naive translation to a sensitivity PDF.  The extent of “stretching/compressing” due to translation of the x-axis is proportional to the first derivative of the translation function:

‘Stretch/compress factor’ = δ{1/(F-A)}/δF = -1/(F-A)^2                      (eq. 4)

The negative sign in eq. 4 just indicates that the ‘direction’ of the x-axis is switched by the translation (lower forcing => higher sensitivity, higher forcing => lower sensitivity).  If we want to maintain equal areas under the curve above and below a sensitivity value of ~2C per doubling (that is, below and above 2.29 watts/M^2 median forcing) in the sensitivity PDF, then we have to divide each probability value from the original forcing PDF by 1/(F-A)^2, the inverse square of the forcing less the heat being accumulated (equivalently, multiply each value by (F-A)^2), at each point on the curve.  That is, we need to adjust the ‘height’ of the naive PDF curve to ensure the areas under the curve above and below 2C per doubling are the same.  For consistency of presentation, I renormalized based on the highest adjusted point on the curve (highest point = 1.000).

{Aside: In general, any similar translation of an x-y graph based on a mathematical function of the x-axis values will require the y values be divided by an adjustment function which is based on the first derivative of the translation function:

            ADJy(x) = δG(x)/ δx                                           (eq. 5)

where ADJy(x) is the adjustment factor for the y value      

             G(x) is the translation function.}
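To make the adjustment concrete, here is a minimal Python sketch (illustrative, with the same assumed Gaussian forcing PDF as above) that produces both the naive curve of Figure 4 and the area-preserving curve of Figure 5; the ‘adjusted’ curve simply divides each point by |δ(EDS)/δF|, which is the standard change-of-variables rule for probability densities.

import numpy as np
from scipy.stats import norm

F2X, A, DT = 3.71, 0.6, 0.9        # watts/M^2 per doubling, heat uptake, warming (C)
F_MEDIAN, SIGMA = 2.29, 0.67       # assumed Gaussian forcing PDF (watts/M^2)

# Forcing grid, truncated at 0.7 watt/M^2 as in the text to avoid division by ~zero
forcing = np.linspace(0.7, 4.5, 2000)
p_forcing = norm.pdf(forcing, F_MEDIAN, SIGMA)

eds = F2X * DT / (forcing - A)     # eq. 2 applied point by point

# Naive translation (Figure 4): same heights, new x-axis
p_naive = p_forcing / p_forcing.max()

# Adjusted translation (Figure 5): divide by |d(EDS)/dF| = F2X*DT/(F-A)^2,
# which is the same as multiplying by (F-A)^2 / (F2X*DT)
p_adjusted = p_forcing / (F2X * DT / (forcing - A) ** 2)
p_adjusted /= p_adjusted.max()     # renormalize so the highest point is 1.000

print(round(eds[np.argmax(p_naive)], 2))      # naive peak, ~2 C per doubling
print(round(eds[np.argmax(p_adjusted)], 2))   # adjusted peak, near the ~1.55 C quoted above

Narrowing SIGMA in this sketch (as discussed near the end of the post) moves the adjusted peak back up toward the ~2C median and pulls in the high sensitivity tail.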

 

Fig5

The good news in Figure 5 is that the areas under the curve left and right of the vertical line are now the same, as we know they should be.  But the peak in the curve is now at ~1.55C per doubling, corresponding to a forcing of 2.75 watts/M^2, rather than at ~2C per doubling, corresponding to the most probable forcing value of 2.29 watts/M^2 (from Figure 2). What is going on?  To understand what is happening we need to recognize that the adjustment applied to maintain consistent relative areas under the curve is effectively taking into account how quickly (or slowly) the forcing changes for a corresponding change in sensitivity.  To examine how much the original forcing value must change for a small change in sensitivity, let’s look at a change in sensitivity of ~0.2 in both the high and low sensitivity ranges.

Sensitivity (C per doubling)     Corresponding Forcing (watts/M^2)

1.5041                           2.82
1.7036                           2.56
Difference: 0.2005               0.26   =>  ~1.3 watts/M^2 per (degree/doubling)

4.512                            1.34
4.281                            1.38
Difference: 0.231                0.04   =>  ~0.173 watt/M^2 per (degree/doubling)

 

For the same incremental change in sensitivity, it takes a ~7.5 times greater change in forcing near a sensitivity of 1.6 than it does near a sensitivity of 4.4.  A large change in forcing at a high forcing level corresponds to only a very small change in sensitivity, while a small change in forcing at a low forcing level corresponds to a large change in sensitivity.   But the fundamental uncertainty is in forcing (Figure 2), so at low sensitivity (high forcing) a small change in sensitivity represents a large fraction of the total forcing probability.  That is why the peak in the adjusted PDF for sensitivity shifts to a lower value; it must shift lower to maintain fidelity with the fundamental uncertainty function, which is in forcing.

If you have doubt that the adjustment used to generate Figure 5 is correct, consider a blind climate scientist throwing darts (randomly) towards a large image of Figure 2.  The question is: What fraction of the darts will hit left of 2.29 watts/M^2 and below the probability curve and what fraction will hit to the right of 2.29 watts/M^2 and below the probability curve? If the climate scientist is truly blind (throwing at random, both up-down and left-right), the two fractions will be identical.

If enough darts are thrown, and we calculate the corresponding sensitivity value for each dart which falls between the baseline and the forcing probability curve, we can count the number of darts which hit narrow sensitivity ranges equal distances apart (equal width bins), and construct a Monte Carlo version of Figure 5, keeping in mind that uniform bin widths in sensitivity correspond to very non-uniform bin widths in forcing.  The blind climate scientist throws darts at wider bins on the high side of the forcing range than on the low side, even though each bin corresponds to an equally spaced range of sensitivity values.  The most probable bin to hit, corresponding to the peak on the sensitivity PDF graph, will be the bin with the greatest total area on the forcing graph (that is, where the height of the probability curve times the forcing bin width is the maximum value).  If many bins are used, and enough darts thrown, the Monte Carlo version will be identical in appearance to Figure 5.
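The dart-throwing thought experiment translates directly into a short Monte Carlo sketch (again using the assumed Gaussian forcing PDF from the earlier sketches): draw forcing samples, convert each to a sensitivity with equation 2, and histogram the results in equal-width sensitivity bins.

import numpy as np

rng = np.random.default_rng(0)

F2X, A, DT = 3.71, 0.6, 0.9
F_MEDIAN, SIGMA = 2.29, 0.67                  # assumed Gaussian forcing PDF (watts/M^2)

# Each sample is one 'dart': a forcing value drawn from the forcing PDF
forcing = rng.normal(F_MEDIAN, SIGMA, size=1_000_000)
forcing = forcing[forcing > 0.7]              # drop the (unphysical) very low forcing tail

eds = F2X * DT / (forcing - A)                # sensitivity for each dart (eq. 2)

# Equal-width sensitivity bins correspond to very unequal forcing bins
counts, edges = np.histogram(eds, bins=np.arange(0.5, 10.0, 0.1))

print(round(edges[np.argmax(counts)], 2))     # most populated bin, near the PDF peak
print(round(np.median(eds), 2))               # median, ~2 C per doubling
print(round(np.quantile(eds, 0.95), 2))       # long upper tail: high EDS is possible but unlikely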

 

Comments And Observations

Based on the AR5 human forcing estimates and a simple heat balance calculation of climate sensitivity, the median effective doubling sensitivity (EDS) is ~2C, and the most probable EDS is ~1.55C.  There is a 5% chance that the EDS is less than ~1.25C and a 5% chance that it is more than ~6.3C.  These values are based on the additional assumptions:

1) The PDF of the forcing is approximately Gaussian.  This seems likely based on the uncertainty ranges for each of the many individual forcings shown in SPM.5 and the central limit theorem.

2) All warming since pre-industrial times has been due to human forcing.  If 0.1C of the long term warming is due to natural variation, then the median value for sensitivity falls to ~1.76C per doubling.  If there is a long term underlying natural cooling trend of 0.1C, which partially offsets warming, then the median sensitivity increases to ~2.2C per doubling.

3) Total current heat accumulation as of 2011 was ~0.6 watt/M^2 averaged globally (including ocean warming, ice melt, and land warming).

If any of the above assumptions are incorrect, then the calculations here would have to be modified.

 

Relationship of EDS to Equilibrium Sensitivity

The EDS calculated here is a good approximation for Earth’s equilibrium climate sensitivity to a doubling of CO2 only if the equilibrium temperature response is linear, or nearly linear, in forcing (that is, linear in the “forcing domain”).  This seems to be a reasonable expectation over modest temperature changes.  There is some disagreement between different climate modeling groups (and others) about long term apparent non-linearity.  For further insight, see for example: http://rankexploits.com/musings/2013/observation-vs-model-bringing-heavy-armour-into-the-war/.

 

Impact of Uncertainty in Forcing

The width of the AR5 forcing PDF means that calculated sensitivity values for the low forcing “tail” of the distribution reach implausibly high levels; eg. a 2.5% chance of EDS over 10C, which seems inconsistent with the relative stability of Earth’s climate in spite of large changes in atmospheric carbon dioxide in the geological past.  I think the people who prepared the AR5 estimates of forcing would have been well served to consider the plausibility of extreme climate sensitivity; a slightly narrower uncertainty range, especially at low forcing, seems more consistent with the long term stability of Earth’s climate.

A reasonable question is: How would the sensitivity PDF change if the forcing PDF were narrower?   In other words, if it were possible to narrow the uncertainty in forcing, how would that impact the sensitivity PDF?  Figure 6 shows the calculated sensitivity PDF with a 33% reduction in standard deviation in total forcing uncertainty, but the same median forcing value (2.29 watts/M^2).

Fig6

The peak sensitivity is now at 1.72C per doubling (versus 1.55C per doubling with the AR5 forcing PDF), while there is now ~5% chance the true sensitivity lies above 3.6C per doubling, indicated by the vertical green line in Figure 6 (versus 6.3C per doubling with the AR5 forcing PDF).  Any narrowing of uncertainty at any specific forcing estimate will lead to a relatively large reduction in the estimated chance of very high sensitivity, and a modest increase in the most probable sensitivity value.  Since most of the uncertainty in forcing is due to uncertainty in aerosol effects (direct and indirect), it seems prudent to concentrate on a better definition of aerosol influence to improve the accuracy of empirical estimates of climate sensitivity; replacing and launching the failed Glory satellite (global aerosol measurements) would be a step in that direction.

Finally, there is some (smaller) uncertainty in actual temperature rise over pre-industrial and in heat accumulation.  Adding these uncertainties will broaden the final sensitivity PDF, but the issues are the same: the dominant uncertainties are in forcing, and especially in aerosol effects.  Any broadening of the forcing PDF leads to an ever more skewed sensitivity PDF.

Note:

I am not interested in discussing the validity of the GISS LOTI history, nor anything having to do with radiative forcing violating the Second Law of thermodynamics (nor how awful are Hillary Clinton and Donald Trump, or any other irrelevant topic).  The objective here is to reduce confusion about how uncertainty in forcing translates into a PDF in climate sensitivity.

More on Estimating the Underlying Trend in Recent Warming

This post is an update on my earlier post on the same subject.  In my earlier post I regressed linearly detrended Hadley global temperature anomalies since 1950 against a combination of low pass filtered volcanic forcing, solar forcing, and ENSO influence.  For ENSO influence, I defined a low pass filtered function of the detrended Nino 3.4 index, which I called the Effective Nino Index (ENI).

My regression results suggested the rate of warming since 1997 has slowed considerably compared to the 1979 to 1996 period, contrary to the results of Foster & Rahmstorf (2011), which showed no change in the rate of warming since 1979.  There were several constructive (and some non-constructive) critiques of my post in comments made here at The Blackboard…. and elsewhere.  This post will address some of those critiques, and will examine in some detail how the choices made by F&R generated “no change in linear warming rate” results; results which are in fact not those best supported by the data.

Limitations of the Regression Models

It is important to understand what a regression, like that done by F&R or in my earlier post, can and can’t do.  If each of the coefficients that come from the regression is physically reasonable/plausible, then the quality of the regression fit indicates:

1) if the assumed form of the underlying secular trend is plausible, and

2) if there are important controlling variables not included in the regression

The difference between the regression model and the data (that is, the model ‘residuals’) is an indication of how well the regression results describe reality: Is the form of the assumed secular trend plausible? Do the variables included in the regression plausibly control what happens in the real world?   However, if the coefficients from the regression output are not physically reasonable/plausible, then even a very good “fit” of the model to the data does not confirm that the shape of the assumed secular trend matches reality, and the regression results may have little or no meaning.

Some Substantive Critiques from My Last Post

The fundamental problem with trying to quantify and remove the influences of solar cycle, volcanic eruptions, and ENSO from the temperature record was pointed out by Paul_K, who noted that selection of a detrending function, which represents the influence of an unknown secular trend, is essentially circular logic.  The analyst assumes the form of the chosen secular function (how the secular function varies with time: linear, quadratic, sinusoidal, exponential, cubic, etc, or some combination) in fact represents the ‘true’ form of the secular trend.  The regression done using a pre-selected secular function form is then nothing more than finding the best combination of weightings of variables in the regression model which will confirm the form of the assumed secular trend is correct.

Hence, any conclusion that the regression results have “verified” the true form of an underlying trend is a bit circular… you can’t verify the shape of an underlying trend, you can only use the regression to evaluate if what you have assumed is a reasonable proxy for the true form of the secular trend.  In the case of F&R, the assumed shape of the secular trend was linear from 1979; in my post the assumed secular trend was linear from 1950.  Both suffer from the same circular logic.   F&R also allow both lag and sensitivity to radiative forcing to vary independently, which allowed their regression to specify non-physical lags and potentially non-physical responses to forcings, which together led to the near perfect ‘confirmation’ of their assumed linear trend.   All of the regressions in this post, as well as in my original post, require that both solar and volcanic forcings use the same lag, though that lag is free to assume whatever value gives the best regression fit, even if the resulting lag appears physically implausible.

Nick Stokes suggested substituting a quadratic function (with the quadratic function parameters determined by the regression itself) and went on himself to compare the regression results for linear and quadratic functions for 1950 to 2012 and 1979 to 2012.   Like me, Nick used a single lag for both solar and volcanic influences.  Nick concluded that with a quadratic secular function, there is some (not a lot) deviation from a linear trend post 1979, which varies depending on what temperature record is used.  Nick’s results are doubtful because simply choosing a quadratic secular function is just as circular as choosing a linear function.   Some of the lag constants Nick’s regression found for the 1975 to 2012 period (eg. ~0.11) appear physically implausible (much too fast).

Tamino (AKA Grant Foster of F&R) made a constructive comment at his blog: a single lag constant for solar and volcanic influences (a “one box lag model”) was not the best representation of how the Earth is expected to react to rapid changes in forcing like those due to volcanoes, and that a two-box lag model with a much faster response to account for rapid warming of the land and atmosphere would be more realistic.  I have included this suggestion in my regressions.

Commenter Sky claimed that basing ENI on tropical temperature responses was “a foolishness” (I strongly disagreed) but his comments prompted me to look for any significant correlation between the ENI and non-tropical temperatures at different lags, and I found that there is a very modest but statistically significant influence of the ENI on non-tropical temperatures at 7 months lag.  Incorporating both ENI and 7-month lagged ENI slightly improves the regression fit in all cases I looked at, and generates an estimated global response for ENI (not lagged 7 months) which is close to the expected value of half the response for the tropics.   (I will describe the (modest) modifications I made based on Tamino’s suggestion and on a 7-month lagged ENI contribution in a postscript to this post.)

Finally, Paul_K suggested that a way to avoid logical circularity was to try a series of polynomials in time, of increasing order, to describe the secular trend (with time=0 at the start of the regression model period) and with the polynomial constants determined by the regression itself.  The resulting regression fits can then be compared using a rational method like AIC (Akaike Information Criterion) to determine the best choice of order for the polynomial (the minimum AIC value is the most parsimonious/best).  For a linear regression with n data points and M independent parameters, the AIC is given approximately by:

AIC = n * Ln(RSS) + 2*M

Where Ln is the natural log function and RSS is the Residual Sum of Squares from the regression (sometimes also called the “Error Sum of Squares”).  M includes each of the variables: solar, volcanic, ENI, Lagged ENI, secular function constants, and the constant (or offset) value.  Higher order polynomials should allow a better/more accurate description of secular trends of any nonlinear shape, but each added power in the polynomial increases the value of M, so a better fit (reduced RSS) is ‘penalized’ by an increase in M.   A modified AIC function, which accounts for a limited number of data points (called the AICc) is better when the ratio of n/M is less than ~40, but this ratio was always >40 for the regressions done here.
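Here is a minimal sketch of the AIC comparison (the RSS numbers are placeholders; in practice each would come from the regression for that polynomial order).

import numpy as np

def aic(n, rss, m):
    """Approximate AIC for a least squares fit: n data points, residual sum of
    squares rss, and m fitted parameters (lower is better)."""
    return n * np.log(rss) + 2 * m

# Hypothetical comparison of secular-trend polynomial orders 1 through 4.
# Each added order costs one parameter, so a lower RSS only 'wins' if it
# more than offsets the 2*M penalty.
n = 456                                  # e.g. monthly data, 1975 through 2012
base_params = 5                          # solar, volcanic, ENI, lagged ENI, constant
rss_by_order = {1: 9.8, 2: 9.1, 3: 8.6, 4: 8.58}   # placeholder RSS values

for order, rss in rss_by_order.items():
    # with these made-up RSS values the 3rd order polynomial gives the lowest AIC
    print(order, round(aic(n, rss, base_params + order), 1))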

AIC Scores for Polynomials of Different Order

The ‘best’ polynomial to use to describe the secular trend, based on the AIC, depends as well on whether or not you believe that the influences of volcanic forcing and solar forcing are fundamentally different on a watt/M^2 basis.  That is, if you believe that solar and volcanic forcings are ‘fungible’, then those forcings can be combined and the regression run on the combined forcing rather than the individual forcings.   In this case, the best fit post 1975 is quadratic.  Troy Masters (Troy’s Scratchpad blog, based on a suggestion from a commenter called Kevin C) has showed that summing the two forcings improves a regression model’s ability to detect a known change in the slope of secular warming in synthetic data.

If a regression is done starting in 1950 (as in my original post) with solar and volcanic forcings treated separately, then it appears the best, or at least most plausible polynomial secular trend is 4th order, which represents a ‘local minimum’ in AIC… lower than 5th order.  AIC scores of 6th order and above are smaller than 4th order,  but the regression constants for solar and volcanic influences do not “converge” on similar values for each higher order polynomial; they instead begin to oscillate, indicating that the higher order terms (which can simulate higher frequency variations) are beginning to ‘fit’ the influences of volcanoes and solar cycle, rather than a secular trend.  In any case, using  a 4th order polynomial for the regression starting in 1950 generates a much improved fit compared to an assumed linear secular trend.
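For concreteness, here is a rough Python sketch (not the code actually used for these results) of how such a regression can be assembled: lag-filtered volcanic and solar forcings, the ENI, the 7-month lagged ENI, polynomial time terms, and a constant are stacked into a design matrix and fit by ordinary least squares.  The one-box filter is used here for brevity; the regressions in this post actually use the two-box variant described in the postscript.

import numpy as np

def lagged(forcing, k):
    """One-box low pass filter: EF(n) = EF(n-1)*(1-k) + F(n)*k."""
    out = np.zeros_like(forcing, dtype=float)
    for n in range(1, len(forcing)):
        out[n] = out[n - 1] * (1 - k) + forcing[n] * k
    return out

def fit_secular_model(temps, volcanic, solar, eni, k, order):
    """Regress detrended temperatures on lagged forcings, ENI, 7-month lagged ENI,
    and a polynomial secular trend of the given order.  Returns (coefficients, RSS)."""
    n = len(temps)
    months = np.arange(n, dtype=float)
    eni_lag7 = np.concatenate([np.zeros(7), eni[:-7]])        # 7-month lagged ENI
    columns = [lagged(volcanic, k), lagged(solar, k), eni, eni_lag7]
    columns += [months ** p for p in range(1, order + 1)]     # secular polynomial terms
    columns.append(np.ones(n))                                # regression constant
    X = np.column_stack(columns)
    coeffs, resid, _, _ = np.linalg.lstsq(X, temps, rcond=None)
    rss = float(resid[0]) if resid.size else float(np.sum((temps - X @ coeffs) ** 2))
    return coeffs, rss

In practice the lag constant and the polynomial order would each be scanned, keeping the combination that minimizes the AIC, as described above.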

1975 to 2012

Figure 1 shows the AIC scores and lag constants for regressions from 1975 to 2012.

Figure1a 

The minimum AIC score is for a third order polynomial.  The corresponding regression coefficients (with +/- 2-sigma uncertainties) are shown below:

 

Volcanic                                0.14551 +/-0.0248

Solar Cycle                           0.47572 +/-0.2034

ENI                                         0.08253 +/-0.0143

7 Mo Lagged ENI                 0.03179 +/-0.0144

Linear Contribution             0.01335 +/-0.00897

Quadratic Contribution      0.0003753 +/- 0.000542

Cubic Contribution              (-8.655 +/-9.18)*10^(-6)

Constant                                0.02384 +/-0.0383

R^2                                         0.828

F Statistic                               306

Figure 2 shows the regression model overlaid with the secular component of the model.  The secular component is what is described by the above linear, quadratic, cubic contributions plus the regression constant.  Figure 3 shows the regression model overlaid with Hadley temperature anomalies.   The model matches the data quite well.  The residuals (Hadley minus model) are shown in Figure 4.  The residuals are reasonably uniform around zero, and show no obvious trends over time.

Figure2a

Figure3a

Figure4a

Figure 5 shows the individual contributions (ENSO, solar cycle, volcanic) along with their total.

Figure5a

Figure 6 shows the original and ‘adjusted’ Hadley data, where the influences for ENSO, solar cycle, and volcanoes have been subtracted from the original Hadley data.  I have included calculated slopes for 1979 to 1997 (inclusive) and 1998 to 2012 (inclusive).  The best (most probable) estimated trend for 1979 to 1997 is 0.0160 C/yr, while from 1998 to 2012 the best estimate for the trend is 0.0104 C/yr, corresponding to a modest (35%) reduction in the rate of recent warming.  (edit: 0.0160 should have been 0.0171 and 0.0104 should have been 0.0108; the reduction is 37%)

Figure6a

 

1950 to 2012

Figure 7 shows the AIC scores and lag constants for regressions from 1950 to 2012.

Figure7a

The local minimum (best) AIC score is for a fourth order polynomial.   At orders 6 and above the AIC score continues to fall, but without convergence of solar and volcanic coefficients, which suggests to me that the higher order polynomials are beginning to interact excessively with the (higher frequency) non-secular variables we are trying to model, and the continued fall in AIC score is not indicative of a true improvement in accuracy of the higher order polynomials as a secular trend.  I adopted the fourth order polynomial as the most credible representation of the secular trend.  The corresponding regression coefficients (with +/- 2-sigma uncertainties) are shown below:

Volcanic                                0.1346 +/-0.0234

Solar Cycle                           0.3283 +/-0.170

ENI                                         0.0972 +/-0.0122

7 Mo Lagged ENI                 0.0245 +/-0.0121

Linear Contribution             0.00804 +/-0.00828

Quadratic Contribution       -0.000772 +/- 0.000542

Cubic Contribution              (2.97 +/-1.32)*10^(-5)

Quartic Contribution           (-2.7 +/-1.06)*10^(-7)

Constant                                -0.0137+/-0.0374

R^2                                         0.83691

F Statistic                               473

Figure 8 shows the regression model overlaid with the secular component of the model.  The secular component is what is described by the above linear, quadratic, cubic, and quartic contributions plus the regression constant.  Figure 9 shows the regression model overlaid with Hadley temperature anomalies.   The model matches the data quite well.  The residuals (Hadley minus model) are shown in Figure 10.

Figure8a

Figure9a

Figure10a

Figure 11 shows the original and adjusted Hadley data, where the influences for ENSO, solar cycle, and volcanoes have been subtracted from the original Hadley data.  I have included calculated slopes for 1979 to 1997 (inclusive) and 1998 to 2012 (inclusive).  The best estimate for the trend from 1979 to 1997 is 0.0164 C/yr, while from 1998 to 2012 the best estimate for the trend is 0.0084C/yr, corresponding to a 49% reduction in recent warming.

 

Figure11a

 

But what if radiation is fungible?

The divergence between the regression diagnosed ‘sensitivity’ to changes in solar intensity and volcanic aerosols is both surprising and puzzling.   The divergence is reported (albeit to a smaller extent) in the 1950 to 2012 regressions as well as the 1975 to 2012 regressions. In each case, the regression reports a best estimate response to solar cycle forcing which is more than twice as high as volcanic response on a watts/M^2 basis.   Lots of people expect the solar cycle to contribute to total forcing in a normal (fungible) way.  Figure 12 (from GISS) shows that for climate modeling, the folks at GISS think there is nothing special about solar cycle driven forcing.

Figure12a

For the diagnosed divergence between solar and volcanic sensitivities to be correct, there must be an additional mechanism by which the solar cycle substantially influences Earth’s temperatures, beyond the measured change in solar intensity.  I think convincing evidence of such a mechanism (changes in clouds from cosmic rays, for example) is lacking, although I am open to be shown otherwise.   But if we assume for a moment that there really is no significant difference in the response of Earth to solar and other radiative forcings, which seems to me plausible, then the above regression models ought to be modified by combining solar and volcanic into a single radiative forcing.

When the regressions are repeated for 1975 to 2012 with a single combined forcing (the sum of individual solar and volcanic) the minimum AIC score is for a quadratic secular function (rather than cubic when the two are independent variables), but the big change is that the regression diagnoses a longer lag for radiative forcing and a stronger response to radiative forcing (which is of course dominated by volcanic forcing).  Figure 13 shows the AIC scores and lag constants for polynomials of different orders when solar and volcanic forcings are combined.  The minimum AIC score with the combined forcings (quadratic secular function, 649.2) is slightly higher than the minimum for separate forcings (cubic secular function, 648.2), which lends some support to higher sensitivity for solar forcing.

Figure13a

Figure 14 shows the regression model and Hadley data, and Figure 15 shows the Hadley data adjusted for ENSO and combined solar and volcanic forcing.

Figure14a

 

Figure15a

 

The 1998 to 2012 slope is 0.0072 C/yr, while the 1979 to 1997 slope is 0.0165 C/yr; the recent trend is 44% of the earlier trend.

Why did F&R get different results?

The following appear to be the principal issues:

1.  Allowing volcanic and solar lags to vary independently of each other.

2.  Accepting physically implausible lags.

Treating solar and volcanic forcing as independent variables, combined with number 1 above, seems to have some unexpected consequences.  Figure 16 shows the lagged and un-lagged volcanic forcing along with the un-lagged solar forcing.  The two major volcanic eruptions between 1979 and 2012 (El Chichon and Pinatubo) happen to occur shortly after the peak of a solar cycle.

 

Figure16a

The solar and volcanic signals are partially ‘aliased’ by this coincidence (that is, both acting in the same direction at about the same time), while the decline in solar intensity following the solar cycle peak in ~2001 did not coincide with a volcano.  Since there was a considerable drop in rate of warming starting at about the same time as the most recent solar peak, and since the regression can “convert” some of the cooling that was due to volcanoes into exaggerated solar cooling due to aliasing, the drop in the rate of warming after ~2000 can be ‘explained’ by the declining solar cycle after ~2001.  In other words, aliasing of solar and volcanic cooling in the early part of the 1975-2012 period, combined with free adjustment of ‘sensitivity’ to the two forcings independently, gives the regression the flexibility to increase sensitivity to the solar cycle by reducing the sensitivity to volcanoes, so that the best fit to an assumed linear secular trend corresponds to a larger solar coefficient.  Allowing the solar and volcanic forcing to act with different lags further increases the ability of the regression to increase solar influence and diminish volcanic influence.  All of which contributes to the F&R conclusion of “no change in underlying warming trend.”

Of course, the same aliasing applies to the regression for 1950 to 2012, but since there are more solar cycles and more volcanoes in the longer analysis, and those do not alias each other well, the regressions for the longer period report a smaller difference in ‘sensitivity’ to solar and volcanic forcings.  For example, with an assumed linear secular trend (similar to F&R, but using one lag for both solar and volcanic forcings), the 1975 to 2012 regression coefficients are: volcanic = 0.1294, solar = 0.5445, while for the best fit regression from 1950 (4th order polynomial secular function) the coefficients are: volcanic = 0.1346, solar = 0.3283.

It will be interesting to see how global average temperature evolves over the next  6-7 years as the current solar cycle passes its peak and declines to a minimum.  If F&R are correct about the large, lag-free influence of the solar cycle, this should be evident in the temperature record…. unless a major volcano happens to erupt in the next few years!

Conclusions

There is ample evidence that once natural variation from ENSO, solar cycles and volcanic eruptions is reasonably accounted for, the underlying ‘secular trend’ in global average temperatures remains positive.   But it is also clear that the best estimate of that secular trend shows a considerable decline in the rate of warming after 1997 compared to the 1979 to 1997 period.  The cause for that decline in the rate of underlying warming is not known, but factors other than ENSO, volcanic eruptions, and the solar cycle are almost certainly responsible.

 

Postscript

In my last post I showed how the low pass filtered Nino 3.4 index correlates very strongly with the average temperature between 30N and 30S, and can account for ~75% of the total tropical temperature variation.   To check for the possible influence of ENSO (ENI) on temperatures outside the tropics, I first calculated the “non-tropic history” from the Hadley global history and Hadley tropical history (non-tropics = 2* global – tropics).  I then checked for correlation between the non-tropics history and the ENI (which is calculated from the low pass filtered Nino 3.4 index) at each monthly lag from 0 to 12 months.  Significant correlation is present starting at ~4 months lag through ~11 months lag, with the maximum correlation at 7 months lag from the ENI.  I then incorporated this 7-month lagged ENI as a separate variable in the regressions discussed above, and found very statistically significant correlation in all regressions.  The coefficient was remarkably consistent in all regressions, independent of the assumed secular trend function, at ~1/3 that of the global influence of ENI itself.  (The ENI coefficient itself was also remarkably consistent.)  The 7-month lagged ENI influence was significant at >99.9% confidence in all regressions.  The increased variable count was included in the calculation of AIC.

I used a simple dual-lag response model (two-box model) instead of a single lag response because land areas (and atmosphere) have much less heat capacity than ocean, and so react much more quickly to applied forcing than the ocean.  The reaction of land temperatures would be expected to be in the range of an order of magnitude faster based on relative (short term) heat capacities, which on a monthly basis makes the land response essentially immediate (lag less than a few months).   If land and ocean areas were thermally isolated, we would expect the very fast response of land to represent ~30% of the total, and the slower response of the ocean to represent ~70%.  That is, in proportion to the relative surface areas.  However, land and ocean are not thermally isolated, and the fast land response is reduced by the slower ocean response because heat is exchanged between land and ocean fairly quickly.  Modeling this interaction would appear to be a non-trivial task, but I guessed that a simple approximation is to reduce the relative weighting of land and increase that of the ocean, and assumed 15% ‘immediate’ response and 85% lagged response.  The lag constants optimized in the regression applied to only 85% of the forcing; 15% of the forcing was considered essentially immediate.  The above appears to improve R^2 values a bit in nearly all regressions I tried, but does not impact the conclusions: the underlying secular trend appears lower from 1998 to 2012 than from 1979 to 1997.
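Here is a minimal sketch of that two-box treatment (illustrative; the 15%/85% split and the single lag constant for the slow box are the assumptions described above, and the lag constant shown is just an example value).

import numpy as np

def one_box(forcing, k):
    """Single-constant low pass filter: EF(n) = EF(n-1)*(1-k) + F(n)*k."""
    out = np.zeros_like(forcing, dtype=float)
    for n in range(1, len(forcing)):
        out[n] = out[n - 1] * (1 - k) + forcing[n] * k
    return out

def two_box(forcing, k, fast_fraction=0.15):
    """Two-box response: ~15% of the forcing acts essentially immediately
    (land and atmosphere), the remaining ~85% is lagged (ocean dominated)."""
    return fast_fraction * forcing + (1 - fast_fraction) * one_box(forcing, k)

# Step response to a 1 watt/M^2 forcing switched on at month 1 (example k value)
step = np.ones(240)
step[0] = 0.0
response = two_box(step, k=0.031)
print(round(response[1], 2), round(response[12], 2), round(response[120], 2))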

 

Estimating the Underlying Trend in Recent Warming

Introduction

Foster & Rahmstorf (1) used a multiple regression model based on solar variation, volcanic aerosols, and ENSO to estimate how those factors have influenced surface temperature since 1979; the paper is basically a rehash, with some changes, of earlier published work by others (see for example http://www.agci.org/docs/lean.pdf and references). F&R adjusted measured changes in Earth’s surface temperature based on the results of their regression model, and claimed that the apparent slowdown in warming over the past 10+ years is entirely the result of natural variation, and that there has been absolutely no change in the underlying (secular) rate of warming since 1979. Oh yes, they also concluded that it is critical that people stop burning fossil fuels immediately…. though it is not immediately obvious how a multiple regression model on global temperatures leads to that conclusion.

Many people found the F&R paper to be technically weak, and its conclusions doubtful; my personal evaluation was that the paper was little better than a mindless curve-fit exercise. In spite of the coverage the paper got in some publications, I would normally prefer to ignore such things. But since the F&R paper seems to now have taken on the character of an urban legend, and is pointed to by warming-concerned folks whenever someone notes that warming has been much slower recently, I figured any reasoned critique of F&R is a useful endeavor.

F&R considered the influence of the solar cycle, a change of about 0.1% in solar intensity from peak to trough of the cycle, separately from the effects of stratospheric volcanic aerosols, even though both are expected to change the intensity of solar radiation reaching the Earth’s troposphere and surface. (Why should solar intensity change and volcanic aerosol forcing not be fungible?) Like some earlier publications, F&R also (strangely) limited their analysis to post 1979, even though data on volcanoes, solar cycles, and ENSO over longer periods is available. F&R concluded that variation in solar intensity has much greater influence on surface temperature, on a degree/watt/M^2 basis, than an equivalent change due to stratospheric volcanic aerosols, and further conclude the response of surface temperature to solar intensity variation is essentially instantaneous (no lag!), while stratospheric aerosols influence surface temperature only with considerable lag. Odd, very odd.

Here I offer what I believe is a more robust regression analysis of the same three variables (volcanic aerosols, ENSO, and solar cycle) on temperature evolution since 1950. I will show:

1) An improved index for accounting for ENSO.

2) The best regression fit is found when volcanic aerosols and solar intensity variation are lagged considerably due to thermal inertia of the system. The estimates for the influence of both (on a degrees/watt/M^2 basis) are very similar, not dramatically different.

3) After taking ENSO, volcanic aerosols, and solar cycles into account, the best estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996.

 

 

I. A Slightly Improved Method for Estimating ENSO Influence on Temperature Trends

The Nino 3.4 index is the monthly average temperature anomaly, in Celsius degrees, for the roughly rectangular area of the Pacific ocean bounded by 120 and 170 degrees west, 5 degrees north and 5 degrees south. This region represents only about 2.5% of the surface area of the Earth’s tropics (~30 north to ~30 south), yet is known to be strongly correlated with the ENSO and with changes in average temperature in the tropics. (For a more complete description see: http://www.ncdc.noaa.gov/teleconnections/enso/indicators/sst.php). Some months ago in a comment at The Blackboard, Carrick showed that Nino 3.4 shows little or no correlation, at any lag period, with temperatures outside of the tropics.  That is, ENSO strongly influences tropical temperatures but does not influence temperatures outside the tropics very much.  I concluded that if one is going to “account for” the influence of ENSO on global average temperatures using Nino 3.4, then it would be best to estimate the influence based on the variation in temperature anomaly for the tropics only. Eliminating uncorrelated temperature data from higher latitudes ought to improve signal to noise ratio, and yield a more accurate estimate of ENSO driven temperature changes.

The Nino 3.4 index, lagged two or three months, correlates reasonably well with temperature variation in the tropics, and can account for ~65% – 70% of the measured variation in average temperature. But can Nino 3.4 actually provide more information than gleaned from the 2 or 3 month lag correlation?

The answer seems to be that there is a bit more information available. If we consider ENSO to be a cyclical redistribution of heat that accumulates in the tropical Pacific, then it becomes clear the response to a change in ocean surface temperature in the Nino 3.4 region can’t be immediate, nor is the influence going to be accurately described by a specific lagged monthly Nino 3.4 value. During La Nina, stronger trade winds push warm surface water westward toward the Pacific warm pool, and that water is replaced with cooler water which upwells, mainly in the eastern Pacific. When the trade winds drop, an El Nino begins, with warm water flowing eastward from the Pacific warm pool, while the rate of upwelling in the eastern Pacific drops, which leads to warming in the eastern Pacific. The temperature response of the tropics ought to be something other than a simple lag of the Nino 3.4 index, since it takes time for heat to be distributed throughout the tropics.

So how can we use a direct measure of the tropical Pacific temperature anomaly (Nino 3.4) to better estimate the response of global average tropical temperature to ENSO? I reasoned as follows: The temperature rise in the tropics that is associated with an increasing Nino 3.4 temperature takes time to be distributed over all of the tropics, so any response should be gradual. As the tropical temperature rises, heat loss to space increases, and the warming influence for any single month should decay gradually to nothing. The influence of an instantaneous change (eg. a rise in the Nino 3.4 index from 0 C to 2C for only one month, followed by a flat Nino 3.4 index of 0 C for many months) ought to show an exponential-like decay from an initially strong influence.  There is not a single monthly Nino 3.4 influence at an optimal lag time, but rather a continuously evolving influence over some extended period. A strong El Nino or La Nina continues to have influence on tropical temperatures even after the Nino 3.4 index has returned to a neutral state.

I modeled the evolution of Nino 3.4 influence by iteratively calculating a new monthly index I call the “Effective Nino Index” (ENI):

ENI(n) = k * ENI(n-1) + (1-k) * Nino34d(n-1)

where ENI(n) is the Effective Nino 3.4 Index
n is the current month
(n-1) is the previous month
Nino34d(n-1) is the detrended Nino 3.4 index for the previous month
k is a constant between zero and one

ENI(n) is essentially a low pass filtered representation of all earlier Nino 3.4 values.   I tested several values of k to see what value generates an ENI which best correlates with temperature evolution in the tropics. Since ~1997, the temperature trend in the tropics has been relatively flat and not influenced by major volcanic eruptions, so I ran a regression of ENI against the detrended tropical temperature anomaly for 1997 to present (I used the Hadley Hadcrut4 tropics history, downloaded from Wood For Trees).  The best correlation between ENI and average tropical temperature is at k = 0.703. In other words, in any single month, the running history (2 and more months past, represented by ENI(n-1)) contributes 70.3% of the ENSO influence on average tropical temperature, and the previous month’s Nino 3.4 index contributes 29.7% of the influence on average tropical temperature. Figure 1 shows how the relative influence of any single month of Nino 3.4 declines over time, with zero months lag meaning the current month.  (Click on any image to view at the original resolution.)
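A minimal sketch of the ENI recursion (illustrative) is below, using the k = 0.703 value found above; the detrended Nino 3.4 series is a placeholder here and would in practice be the monthly detrended Nino 3.4 index.

import numpy as np

def effective_nino_index(nino34_detrended, k=0.703):
    """ENI(n) = k*ENI(n-1) + (1-k)*Nino34d(n-1): a low pass filtered Nino 3.4."""
    eni = np.zeros_like(nino34_detrended, dtype=float)
    for n in range(1, len(nino34_detrended)):
        eni[n] = k * eni[n - 1] + (1 - k) * nino34_detrended[n - 1]
    return eni

# Placeholder input; in practice, use the detrended monthly Nino 3.4 anomalies
rng = np.random.default_rng(1)
nino34d = np.sin(np.linspace(0, 20, 240)) + 0.3 * rng.normal(size=240)
eni = effective_nino_index(nino34d)
print(eni[:5].round(3))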

Figure1

A comparison of Nino 3.4 with ENI is shown in Figure 2. The lagging effect of the low-pass filter function is clear.  Please note that ENI is not a temperature index per se, but an index that represents the weighted contribution of all past Nino 3.4 temperatures, with the relative influence of earlier Nino 3.4 values falling rapidly in importance the further back in time you look.

Figure2

Figure 3 shows the ENI and the detrended tropical temperature for 1997 to 2012 (Hadcrut4, downloaded from Wood for Trees) on the same graph, and Figure 4 shows the detrended tropical temperatures and ENI for 1950 to 2012.

Figure3

Figure4

There is very good correlation in the 1997 to 2013 period, where volcanic influences are minimal. You may note in figure 4 that periods of significant deviation between the detrended tropical temperature anomaly and the ENI correspond to the aftermath of major volcanic eruptions, which is consistent with significant aerosol cooling. The “adjusted” tropical temperature model based on the ENI regression against tropical temperatures is:

Tadj = Torg – (0.1959 +/- 0.016) * ENI

Where Torg is the original Hadley temperature anomaly for the tropics. +/-0.016 is the two sigma uncertainty for the model coefficient.

For the 1997 to 2012 period, the model’s F statistic was 594 (very highly significant), and the R^2 value was 0.756, meaning 75.6% of the total variance in tropical temperatures is predicted by the ENI value. It is important to note that ‘predicted’ is a suitable word, since the ENI influence is due to the combination of all earlier months’ Nino 3.4 values, not the present Nino 3.4 value.  Since the ENI is based on the detrended Nino 3.4 index, there is no net contribution to ENI from any general warming of the ocean surface over time.

Figure 5 shows the above ENI adjustment applied to all the Hadcrut4 tropical temperature data since 1950. As we might expect, the influence of volcanic aerosols from Pinatubo shows up much more clearly than in the unadjusted temperature data.

Figure5

I will use the ENI in the combined regression analysis that includes volcanic and solar effects.

 

II. About Those Natural Forcings

NASA GISS provides data for their estimate of aerosol influences from 1850 to present (http://data.giss.nasa.gov/modelforce/strataer/). The data is in the form of Aerosol Optical Depth (AOD at 550 nm wavelength), which is converted into a net forcing value (watts/M^2) by multiplying the AOD by a constant of 23 (NASA’s value). The GISS volcanic aerosol forcing since 1950 is shown in Figure 6.

Figure 6

Direct measurements of solar intensity over the solar cycle are only available since 1979 (via satellites), but the correlation between sunspot number (SSN) and measured changes in solar intensity is good, so it is possible to estimate the historical variation in solar intensity based on SSN records. To estimate solar intensity variation, I used the following empirical equation, which comes from regressing measured solar intensity with sunspot number (data from a spreadsheet by Leif Svalgaard):

Solar intensity = 1365.45 + (0.006872 * SSN)  watts/M^2

Where solar intensity is measured above the Earth’s atmosphere, and SSN is the monthly sunspot number. The R^2 for this regression was 0.984.  The variation of solar intensity about an average value is then:

Variation = 0.006872 * (SSN – AvgSSN) watts/M^2

Where AvgSSN is the average number of sunspots over the period being studied (in this case from 1950 to 2012).

If we assume Earth’s albedo is 30%, and average over the entire surface (a factor of 4 compared to the cross-sectional area Earth presents to the Sun), the variation in solar energy reaching the Earth (including the troposphere) is:

Variation = (0.7/4) * 0.006872 * (SSN – AvgSSN) = 0.001203 * (SSN – AvgSSN) watts/M^2

Since the solar cycle is ~11 years long, we expect solar forcing to generate a temperature response with peaks separated by ~11 years. Figure 7 shows the calculated solar forcing since 1950.
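A minimal sketch of that conversion is below (illustrative; the SSN series is a placeholder, and the 0.006872 watts/M^2 per sunspot slope is the regression value quoted above).

import numpy as np

def solar_forcing_variation(ssn, albedo=0.3, slope=0.006872):
    """Globally averaged solar forcing variation (watts/M^2) from monthly sunspot
    number, relative to the mean SSN over the analysis period."""
    variation_toa = slope * (ssn - ssn.mean())     # variation above the atmosphere
    return (1.0 - albedo) / 4.0 * variation_toa    # spread over the sphere, less albedo

# Placeholder SSN series with a crude ~11 year cycle (756 months: 1950 through 2012)
ssn = np.abs(100.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 756)))
print(solar_forcing_variation(ssn)[:3].round(4))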

 

Figure7

 

III. Regression Model

The regression model has three independent variables: the ENI, with nominal units of temperature (as described above), lagged volcanic forcing, and lagged solar cycle forcing (both with nominal units of watts/M^2). We do not expect an instantaneous temperature response to volcanic and solar forcing, since the thermal mass of the Earth’s atmosphere, land surface and ocean surface are expected to slow the response… that is, to introduce lag between the applied forcing and the response.

A very accurate estimate of the global temperature response to solar and volcanic forcing history would require an accurate model of ocean heat uptake at different latitudes over time, as well as an accurate model of heat transport between high and low latitudes, between land and ocean, and between Earth and space. Since this type of model arguably doesn’t exist, I am forced to use a much simpler lag-type model. The lag model is based on a single constant value with a repetitive monthly calculation that approximates a low pass filter function:

EF(n) = EF(n-1) * (1 – K) + F(n) * K

where:
EF(n) is the effective forcing for month n (solar or volcanic)
F(n) is the actual forcing for month n (solar or volcanic)
K is a decay constant

When K =1, the effective forcing is identical to the actual current forcing. Smaller values of K introduce increasing lag in the response. This type of function is essentially equivalent to the expected response of a “slab” type ocean, or to Lucia’s ‘Lumpy’ model response.  Please note that the lag applies to both solar and volcanic aerosol forcings, since these are both radiative forcings.

Since I did not a priori know the best value of K, I tried different values of K and found the value which gave the best fit regression (that is, the highest R^2 value) for the three variables against the detrended monthly Hadley temperature series from 1950 to 2012. The best fit for K was 0.031. Figure 8 shows the “step response” of the lag function with K = 0.031, with F(n) starting at zero and then stepping to constant value of 1 at month 1.
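A minimal sketch of the lag function and its step response (the basis of Figure 8) is below, using the best-fit K = 0.031 found above; illustrative only.

import numpy as np

def effective_forcing(forcing, K=0.031):
    """EF(n) = EF(n-1)*(1 - K) + F(n)*K  -- a simple low pass (lag) filter."""
    ef = np.zeros_like(forcing, dtype=float)
    for n in range(1, len(forcing)):
        ef[n] = ef[n - 1] * (1 - K) + forcing[n] * K
    return ef

# Step response: forcing jumps from 0 to 1 at month 1 and stays there.
# After m months the response is 1 - (1-K)^m: roughly 31% after one year
# and roughly 68% after three years with K = 0.031.
step = np.ones(360)
step[0] = 0.0
ef = effective_forcing(step)
print(round(ef[12], 2), round(ef[36], 2))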

Figure8

Detrending of the temperature series was used prior to regression because the underlying long-term secular trend, whether due to GHG forcing alone or in combination with other long term influence(s), can’t be accurately modeled by the three variables in the regression, since these three variables are all expected to have relatively short term influence. Using the original temperature data (not detrended) distorts the regression fit by essentially forcing the regression to explain all the temperature change, including any slow secular trend, using the three short-influence variables, and so yields very poor (even physically nonsensical) results.

Figure 9 shows the original and lagged volcanic aerosol forcing, and Figure 10 shows the original and lagged solar forcing.

Figure9

Figure10

The best fit regression (with K = 0.031) yields the following constants:

ENI:         0.1099 +/-0.0118 (+/- 2-sigma uncertainty)
Volcanic: 0.2545 +/- 0.0277
Solar:       0.233 +/- 0.231

R^2 for the regression was 0.445 (44.5% of the variance was accounted for by the model).

The much greater uncertainty in the solar influence is due to the solar forcing being quite small compared to the other two. Still, it is encouraging that the regression shows the best estimates for response to both radiative forcing variables are very similar… just as one might expect, since radiation is fungible.

Figure 11 shows the temperature influence of the three variables and their combined influence based on the regression constants for each.

Figure11

Figure 12 shows an overlay of the detrended Hadley temperature series and the sum of the three adjustments (both offset to average zero, which makes visual comparison easier), and Figure 13 shows the adjusted and unadjusted Hadley global temperature series.

figure 12

Figure13

I have added the slope lines for the adjusted series from 1979 to 1996 (inclusive) and from 1997 to 2012 (inclusive). The slope since 1997 is less than 1/6 that from 1979 to 1996.

 

IV. Comments, Conclusions, Caveats, and Uncertainties

Warming has not stopped, but it has slowed considerably. This analysis can’t prove the cause for that change in rate of warming, but any suggestion that solar cycles, volcanic aerosols, and ENSO are completely responsible for the recent slower warming rate is not supported by the data. Some may suggest long term cyclical variation in the secular warming rate has caused the recent slow-down, but this analysis can’t support or refute that suggestion.

It is encouraging that the influence of the ENI on global temperatures (as calculated by the global regression analysis) is just slightly more than half the influence found for the tropics alone (30S to 30N): 0.1099 +/- 0.0118 global versus 0.1959 +/- 0.016 tropics. Since Carrick showed almost no correlation of ENSO with temperatures outside the tropics, and since 30S to 30N represents exactly half the Earth’s surface, we could reasonably expect the regression constant for the entire globe to be about half as large as for the tropics… and it is indeed very close to half (and within the calculated uncertainty limits).

The analysis indicates that global temperatures were significantly depressed between ~1964 and ~1999 compared to what they would have been in the absence of major volcanoes.

Here are a few caveats and uncertainties. First, the analysis is only as good as the data that went into it. Historical volcanic forcing from GISS is at best an estimate for all eruptions before Pinatubo; if the GISS volcanic forcing is wrong, then this could distort the regression results. The same is true for all other data, including the Hadley temperature series and the sunspot number model used to calculate solar forcing. While sunspot number is an excellent proxy for solar intensity over the last 3 solar cycles, that does not guarantee sunspot number has always been an equally good proxy for solar intensity.

Second, the single constant low-pass filter function used to calculate lagged solar and volcanic forcings is a fairly crude representation of reality. While the true lag function is almost certainly similar in shape, it will not be identical, and this too could distort the regression analysis to some extent. The reality is that there are a multitude of lag constants associated with heat transfer to/from different locations, especially different depths of the ocean.

Third, it is tempting to infer very low climate sensitivity from the regression constants for volcanic aerosols and solar cycle forcing (these constants have units of degrees/watt/M^2, and the values correspond to a climate sensitivity of a little less than 1C per doubling of CO2). This temptation should be resisted, because the model does not consider the influence of (slower) heat transfer between the surface and deeper ocean. In other words, the calculated impact of solar and volcanic forcings would be larger (implying somewhat higher climate sensitivity) if a better model of heat uptake/release to/from the ocean were used.

 

Request for only constructive comments:  Skydragon slayers and rabid catastrophic warmers should not feel their comments are required or requested.

 

 

(1) Grant Foster and Stefan Rahmstorf 2011 Environ. Res. Lett. 6 044022

Pseudo-Cyclical Contribution of the PDO to Earth’s Recent Temperature History

Over the past several months there have been a number of exchanges at The Blackboard about a natural cyclical or pseudo-cyclical contribution to Earth’s temperature history, with the primary point of contention being whether or not natural cyclical variation has misled us about how much of the post-1975 warming has been due to man made GHG forcing. Most of these exchanges have been good natured, but the underlying disagreement is real enough: was the relatively rapid warming from the late 1970’s to the early 2000’s simply the result of GHG forcing combined with man made aerosol effects, or was some (or even all) of the warming over that period due to natural cyclical processes? In other words, is the recent warming representative of Earth’s true response to forcing, or has the warming been significantly ‘overstated’ by the contribution of a naturally occurring positive cyclical component to the measured warming? There is some evidence for both POV’s; my personal position has always been that a significant cyclical contribution seems likely, if based only on the evolution of temperatures since the instrumental record began.

Some have pointed to the de-trended AMO as a reasonable proxy for natural cyclical variation, while others have noted (fairly, I think) that the AMO may in fact be only a proxy for the actual global average temperature; implying that regressions of the AMO against global temperatures are nothing more than regressions of something against a proxy of itself, and so always yield an uninformative correlation.

Is there a natural cyclical contribution? Certainly we need to look beyond a simple temperature proxy like the AMO to decide.

A clever analysis of Milankovitch forcing is perhaps an illustrative example of the sort of thing that is needed. It is obvious that Earth’s orbital variations cause substantial changes in solar forcing at high latitudes, and in the 1940’s Milankovitch already claimed these were responsible for ice age cycles. But even though it has been widely accepted there have been great changes in Earth’s total ice volume over time, and it has been widely believed that these changes were related to orbital forcing, it was only recently recognized that the rate of change in total ice volume is what best correlates with orbital forcing at high latitudes, not total ice volume. (http://earthweb.ess.washington.edu/roe/GerardWeb/Publications_files/Roe_Milankovitch_GRL06.pdf) The Milankovitch forcing correlation with total ice volume is not good, but there is excellent correlation against the rate of change in ice volume. (I note that the paper’s author Gerard Roe was one of Richard Lindzen’s doctoral students… but that is a different discussion.)

So in the spirit of Gerard Roe’s paper, I suggest the following hypothesis: The Pacific Decadal Oscillation (PDO) does not directly correlate with cyclical variation in Earth’s average surface temperature, but the cumulative influence of the PDO over fairly long periods does correlate strongly with Earth’s historical temperature variation, and perhaps is in large part responsible for the observed cycle-like variation in the historical temperature record.

 

A bit of information about the PDO

The name “PDO” was coined by Mantua et al (Mantua, Nathan J. et al. (1997), “A Pacific interdecadal climate oscillation with impacts on salmon production”, Bulletin of the American Meteorological Society 78 (6): 1069–1079).

Wikipedia (http://en.wikipedia.org/wiki/Pacific_decadal_oscillation) notes:
“The prevailing hypothesis is that the PDO is caused by a ‘reddening’ of the ENSO combined with stochastic atmospheric forcing. A PDO signal has been reconstructed to 1661 through tree-ring chronologies in the Baja California area.”
and goes on to list several proposed physical mechanisms for the PDO, none of them related to human activities. So it seems unlikely that human GHG forcing is a causal influence on the state of the PDO. The PDO is described by JISAO (http://jisao.washington.edu/pdo/) as:

“a long-lived El Niño-like pattern of Pacific climate variability. While the two climate oscillations have similar spatial climate fingerprints, they have very different behavior in time.”

JISAO has produced a PDO monthly index, from 1900 to the present, with the PDO index defined as the leading principal component of North Pacific monthly sea surface temperature variability pole-ward of 20N for the 1900-93 period. JISAO presents this graphic to depict the difference between “warm phase” (left) and “cool phase” (right) PDO states:

pdo_warm_cool3

 

Wood For Trees includes the JISAO PDO index among their sea surface temperature indexes. (http://www.woodfortrees.org/plot/jisao-pdo)
jisao-pdo

It is pretty clear from inspection of the above graphic that there is rather poor correlation between the PDO index and Earth’s recent temperature history. But let’s suppose that the PDO index influences Earth’s average surface temperature through a cumulative, rather than an immediate, effect. That is, let’s suppose that an integral of the PDO index over time can influence Earth’s average surface temperature. What could we look at to evaluate the cumulative effect of the PDO over time? I can think of two reasonable choices: 1) the continuous time integral of the PDO from 1900 forward, and 2) the trailing average (over a specified time, say 25 – 30 years) of the PDO. The figure below shows the cumulative total (the integral) of the monthly PDO index starting in 1900, along with the Hadley HADCRUT4 temperature history.
PDO1
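Both cumulative measures are easy to form from the monthly index; a sketch assuming pdo is a pandas Series of monthly JISAO PDO values with a DatetimeIndex starting in 1900:

```python
import pandas as pd

def cumulative_and_trailing(pdo: pd.Series):
    """Running integral of the monthly PDO index, and its 25-year trailing average."""
    cumulative = pdo.cumsum()                           # time integral of the monthly index
    trailing_25yr = pdo.rolling(window=25 * 12).mean()  # 300-month trailing mean (first value in 1925)
    return cumulative, trailing_25yr
```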

The mid 1940’s peak in temperature correlates well with the peak in the cumulative PDO index, and the slight decline in temperature from the mid-1940’s to the mid 1970’s tracks the cumulative PDO index almost perfectly, while the rapid warming post 1976 corresponds perfectly to a rapid increase in the cumulative PDO index. Finally, the recent “plateau” in temperature corresponds to the leveling off and decline in the cumulative PDO index.

The graph below shows an adjustment of the HADCRUT4 data based on the cumulative PDO index.

PDO2

The adjusted HADCRUT4 data no longer shows a significant mid-1940’s peak in temperature, shows no decline in temperature from the mid 1940’s to the mid 1970’s, and shows slower warming since the mid 1970’s, all of which seems more consistent with estimates of the historical man made GHG forcing. (The constant of 0.00125 was chosen to maximize consistency with that forcing.)
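The adjustment itself is a single subtraction of the scaled cumulative index; the 0.00125 constant is the value quoted above, and hadcrut4 is assumed to be the monthly HADCRUT4 anomaly aligned to the same dates as the PDO index:

```python
def adjust_for_cumulative_pdo(hadcrut4, cumulative_pdo, scale=0.00125):
    """Remove the assumed cumulative-PDO contribution from the temperature series."""
    return hadcrut4 - scale * cumulative_pdo
```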

One doubt about the cumulative PDO index is that it implicitly assumes the state of the PDO index in 1900 (and all times since then) continues to influence temperatures today. To avoid this assumption (and limit the period of influence), a trailing average of the PDO index can be used instead. The graph below shows the 25 year trailing average along with the HADCRUT4 historical record.

PDO3

The graph begins in 1925 because the PDO index data starts in 1900, and 25 years is needed to generate the first data point for the 25 year trailing average. Once again, the variation in the HADCRUT4 trend seems to track the trailing average of the PDO index quite well. The graph below shows an adjustment of the HADCRUT4 data using the 25 year trailing average of the PDO.

PDO4

Once again, the adjusted temperature trend appears to reasonably follow the historical man made GHG forcing. Finally, the graph below shows the slope of the adjusted temperature trend since 1975.

PDO5

Observations and Comments

Unlike the AMO index, which tracks the Earth’s average temperature closely, the PDO index itself is much more variable, and does not correlate well with average temperature. However, it is clear that a long term cumulative measure of the PDO index correlates quite well with changes in the slope of the temperature trend over the recent past. It is of course possible that this is just coincidence, but the correlation is awfully good, so a causal relationship seems plausible (and IMO likely).  Nobody appears to suggest that the PDO is driven by man-made GHG forcing or man made aerosol effects; it is by all accounts a natural process.

Assuming there is a causal relationship, what mechanism is responsible? The honest answer is that I do not know, but it must be related to gradual changes in ocean heat content in the first few hundred meters of the ocean, since this is the part of the ocean which is substantial enough in thermal mass to account for relatively long cumulative effects, and which also has immediate influence on Earth’s average surface temperature. Changes in heat content at great ocean depth would not be expected to have direct influence on the Earth’s surface temperature except on multi-century time scales. So I would expect variations from any secular trend in ocean heat content for the top ~300 meters to be a reasonable measure of the cumulative influence of the PDO over 2 – 3 decades, with much less (and much slower) influence of the cumulative PDO on heat content at greater depth.

Since the state of the PDO tends to persist over multi-decade periods, and since 1998 the index has been mostly negative, it seems likely that the influence on the trend in global temperatures will continue to be negative, at least for the next 15-20 years. Based on past behavior, if the 25 year trailing average index reaches -0.6, then the trend in temperatures could be reduced by about 0.12C over that period, compared to what that trend would otherwise have been. Unless the PDO changes unexpectedly to consistently positive, we can reasonably expect the recent slow rate of global temperature increase to continue for some time. My personal SWAG is a rate of warming over the next 15-20 years of about 0.06-0.07 C per decade, with a true (underlying) secular trend of about 0.12C per decade.  If this happens, then the IPCC’s climate model projections are going to look even worse in the coming years than they do today.

Estimates of Mass and Steric Contributions to Sea Level Rise

An important issue for the future is the potential for significant sea level rise due to global warming. Sea level rise consists of two parts: a steric contribution (volume increase due to an increase in average ocean temperature, which reduces average seawater density), and a mass contribution (volume increase due mainly to melting of land supported ice… glaciers, ice caps, and ice sheets). It has been projected by some (Vermeer & Rahmstorf, and others) that the mass contribution will accelerate rapidly in the next few decades as average surface temperature increases; these projections suggest a rapidly accelerating sea level rise, starting very soon. While the IPCC AR4 projections of sea level rise through 2100 are modest, some more recent projections suggest increases of well over 1 meter by 2100. My guest posts at the Blackboard in July 2011 addressed this issue, and the results of my model suggest future sea level increases in reasonable agreement with the AR4 projections, for a wide range of assumed rates for surface temperature increase.

The steric contribution is clearly defined if warming of the ocean is known. The ARGO system of floats measures ocean temperatures between the surface and 2,000 meters depth, with the highest quality data between the surface and 700 meters. ARGO became fully operational in ~2003. But some earlier ocean heat data exists, with the best data set (Levitus et al) dating from 1955. The mass contribution to sea level rise since 1955 is less certain, due in part to uncertainty in the available tide gauge data. However, accurate satellite measurements of total sea level rise are available from 1993 to present (~18 years).

The graph below shows the satellite measurements of sea level, with seasonal variation removed. Seasonal variation in sea level is caused by seasonal accumulation of precipitation on land (ice, water) as well as seasonal variation in total ocean heat content (seasonal steric contribution).

Note that there has been an apparent reduction in rate of sea level rise starting ~2003-2004. What caused this reduction? It is well known that ocean surface temperatures have not risen as much over the last decade as in the 2 decades before, and this has reduced the rate of heat accumulation in the oceans. So a reasonable expectation is a lower steric contribution. The regularly updated ocean heat data from Levitus et al (http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/), shown in the graph below, indicates that the 0-700 meter steric contribution fell to near zero starting in 2003.

If you subtract the 0-700 meter steric contribution from the total sea level change, the net is an estimate of the mass contribution alone, that is, the contribution mostly from the melting of land supported ice. The graph below shows the satellite data with the steric contribution removed.

Note that the steric data is only available as yearly averages, while the satellite data points represent a few weeks each, so to subtract one from the other I had to use annual average trends for the steric contribution.
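A sketch of that subtraction, interpolating the annual steric estimate onto the altimetry sample dates before differencing; the array names are placeholders for the satellite and Levitus series:

```python
import numpy as np

def mass_component(alt_time, alt_sl, steric_year, steric_mm):
    """Satellite sea level minus the (annual) 0-700 m steric contribution.

    alt_time, alt_sl       : altimetry sample times (decimal years) and sea level (mm)
    steric_year, steric_mm : annual 0-700 m steric sea level equivalent (mm)
    """
    steric_on_alt_dates = np.interp(alt_time, steric_year, steric_mm)
    return alt_sl - steric_on_alt_dates
```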

There are a couple of interesting things in this graph. First, it is clear that once the change in steric contribution is accounted for, the trend in the remaining data (that is, mostly ice melt contribution) has remained relatively constant for the last 18 years, at about 2.3 mm per year. There is no indication of an acceleration (nor deceleration) in melt contribution over that 18 years. Second, the influence of the ENSO becomes more obvious once the 0-700 steric contribution is removed: there are clear positive and negative deviations from the overall (linear) trend corresponding to the 1998 and 2010 el Ninos and the 2008 la Nina. In fact, it looks like much of the overall remaining variation correlates quite well with an ENSO index like Nino 3.4. This suggests that either a) short term variation in average temperature is causing small but rapid changes in snow/ice melt contribution, b) there is short term ocean heat content change which is not captured well by the Levitus data, or c) a combination of these effects.

The graph below shows the 0-700 meter steric adjusted satellite sea level compared to the Nino 3.4 index; the heavy blue line is an 11 point centered moving average filter. The Nino 3.4 index was multiplied by a factor of 2.5 and a linear slope was added to match that of the satellite data. Most (but not all) variation from a linear trend in the satellite sea level data is matched by the Nino 3.4 index. There is essentially no time offset between the Nino 3.4 variation and the sea level variation, unlike global average surface temperature, which tends to lag 3-6 months behind changes in the Nino 3.4 index. The lack of lag suggests that short term changes in ocean heat content are the most likely explanation for the correlation, rather than rapid changes in melting of ice/snow.
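For the overlay, the smoothing and scaling are straightforward; a sketch of the 11-point centered moving average and the factor-of-2.5 scaling, with the added linear slope left as a free parameter (all names are placeholders):

```python
import numpy as np

def overlay_series(mass_sl, nino34, slope_per_sample, scale=2.5):
    """11-point centered moving average of the steric-adjusted sea level,
    and a scaled Nino 3.4 index with a linear slope added for visual comparison."""
    smoothed = np.convolve(mass_sl, np.ones(11) / 11.0, mode="same")
    scaled_nino = scale * nino34 + slope_per_sample * np.arange(len(nino34))
    return smoothed, scaled_nino
```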

Conclusions, Observations, Cautions
The satellite sea level data shows no evidence of acceleration in the rate of ice melt over the past 18 years, and the observed reduction in the rate of sea level rise since ~2003 is consistent with a much reduced rate of ocean heat accumulation. The relatively short period of the satellite record means that one should be cautious about drawing too many conclusions from the data; the measured trend of the last 18 years could be influenced significantly by unaccounted factors. The good news is that continued satellite measurements of sea level and ARGO measurements of ocean heat content will allow more confident estimates of any change in the long term trend over the next decade or two. I personally expect the sea level trend to not accelerate nearly as much as some have suggested, and I expect that projections of extreme rise in sea level by 2100 will be largely refuted within 15- 20 years.

Any accumulation of heat below 700 meters is not included in the Levitus et al data, though it is reasonable to believe that at least some deeper accumulation is taking place (and there have been some estimates which range up to ~0.35 watt/M^2). Therefore, the contribution due to mass increase is almost certainly somewhat lower than 2.3 mm/year. 0.35 watt/M^2 of “deep” accumulation would be contributing up to ~ 0.4 mm per year to total expansion, so assuming 0.35 watt/M^2 deep heat accumulation, the net mass contribution to sea level rise is likely to be somewhere near ~1.9 mm per year. This is consistent with Cazenave et al (2008), who concluded (based on GRACE satellite measurement of ocean mass) that the mass increase in the ocean between 2003 and 2008 was 1.9+/- 0.1 mm/year.

The satellite sea level data is adjusted to compensate for estimated growth in ocean basin volume (about +0.3 mm per year is added to the measured level) due to slow glacial rebound in regions where the heavy ice sheets of the last ice age depressed the local surface by hundreds of meters. But the satellite data is not adjusted to account for potential accumulation of water in man-made reservoirs, which reduces the measured rate of rise, nor for pumping of groundwater (‘ancient water mining’), which increases the measured rate of rise. Recent estimates of these contributions are a significant fraction of the measured change in sea level. The graph below (from one of my earlier posts) shows my rather crude graphical presentation of two published estimates of these factors (Wada et al and Chao et al).

The estimated rate of reservoir accumulation was more rapid between ~1955 and ~1985, yielding a net year on year removal of water from the oceans, but from 1985 to present, the estimated rate of groundwater pumping has been greater than reservoir accumulation, yielding a net year-on-year addition of water to the oceans. The cumulative net contribution year-on-year since 1993 is about +7 mm, or approximately 0.39 mm per year. In other words, during the satellite period, the true overall sea level change due to warming could be significantly lower than the measured trend; ~2.71 mm/year instead of ~3.1 mm/year. (For reference: the average rate of rise since 1910, based on tide gauges, is about 1.9 – 2.0 mm per year.)

Combining the potential effects of deep ocean heat accumulation, reservoir accumulation, and ground water pumping, the true average melt contribution during the satellite period may be as low as: 2.3 – 0.4 – 0.4 = 1.5 mm per year, while the true average steric contribution for that period may be as much as 1.2 mm per year.

References:

Global ocean heat content 1955–2008 in light of recently revealed instrumentation problems; Levitus et al, Geophysical Research Letters (2009)

Global sea level linked to global temperature; Martin Vermeer and Stefan Rahmstorf, PNAS (2009)

Sea level budget over 2003–2008: A reevaluation from GRACE space gravimetry, satellite altimetry and Argo; Cazenave et al, Global and Planetary Change (2008)

Global depletion of groundwater resources; Wada et al, Geophysical Research Letters (2010)

Impact of Artificial Reservoir Water Impoundment on Global Sea Level; Chao, et al., Science (2008)

Update to A First Order Estimate of Future Sea Level Rise

Several comments from my original post suggested using the more recent (2010) Church & White (C&W10) sea level data set instead of the 2006 data set. Others raised questions about the potential impact of water accumulation in reservoirs (which would artificially reduce the measured sea level rise) and ground water depletion (which would artificially inflate the measured sea level rise). Vermeer and Rahmstorf (2009, V&R) included a contribution due to water accumulation in reservoirs, but not ground water depletion. B. F. Chao, et al (2008) estimated the impact of reservoir accumulation on sea level. Since V&R was published, Wada et al (2010) published an estimate of ground water depletion from 1960 to present, along with an estimated impact on sea level.

Impact of C&W10 Data Set
The graph below shows the 2006 and 2010 data sets, but with the 2006 data set adjusted with a constant off-set to match the basis of the 2010 data set.

The principal difference is in the 1915 to 1945 period, where the 2010 data set has less of a downward deflection in the trend. The newer data set extends to 2009, but drops the 1870 to 1879 data that was present in the original (2006) data set. To simplify my calculations (since everything was already set up to calculate starting in 1870), I (rather rashly?) added the original 1870 to 1879 data to the more recent data set, with an appropriate off-set. I then used the combined set (1870 to 2009) to redo the calculations from the original post. The figure below shows the model fit, along with the mass-only portion of the model.
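A sketch of the off-set and splice just described: shift the 2006 series so it matches the 2010 series over their overlapping years, then prepend the 1870-1879 values. How the off-set was actually chosen is not stated in the post, so matching the means over the overlap is my assumption.

```python
import numpy as np

def splice_cw(years06, sl06, years10, sl10):
    """Prepend the 1870-1879 portion of C&W 2006 to the C&W 2010 series,
    after off-setting the 2006 data onto the 2010 basis (mean-matched over the overlap)."""
    offset = sl10[np.isin(years10, years06)].mean() - sl06[np.isin(years06, years10)].mean()
    early = years06 < 1880
    return (np.concatenate([years06[early], years10]),
            np.concatenate([sl06[early] + offset, sl10]))
```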

The fit to the updated C&W data yields somewhat less acceleration (compare the quadratic fits to those from the original Figure 5). The R^2 of the fit was identical to the original model – 0.987. The figure below shows the resulting projections of sea level rise based on the newer data set.

All projections are reduced somewhat, but with the largest reductions for the higher assumed warming rates (especially 0.25 and 0.3 C per decade). For no assumed warming, the rise is approximately 180 mm (about 7 inches), versus about 250 mm (about 10 inches) in the projection based on the earlier data set. The projection for 0.3C per decade warming rate is about 550 mm (about 22 inches), instead of about 690 mm (about 27 inches) based on the earlier data set. The lower rates are the result of a smaller contribution of melting to the total rise.

Adjusting for Reservoir Accumulation and Groundwater Depletion
I (rather crudely) converted the graphical sea level contribution data from Chao et al into a four-segment linear approximation to their original curve. The Wada et al estimate of the groundwater contribution covered only 1960 to 2000, so I extrapolated the (almost linear) annual contribution trend backward to 1925 (where it was ~0) and forward to 2009, and then integrated to yield a (rough!) estimate of the contribution to sea level over time. The graph below shows the two separate contributions, along with their sum.
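A sketch of the two approximations: a piecewise-linear (four-segment) reading of the Chao et al reservoir curve, and a linear extrapolation of the Wada et al annual groundwater rate, integrated into a cumulative sea level term. The breakpoint arrays are placeholders for the digitized values; clipping the extrapolated rate at zero is my assumption.

```python
import numpy as np

def reservoir_contribution(years, breakpoint_years, breakpoint_mm):
    """Four-segment piecewise-linear approximation to the Chao et al cumulative
    reservoir impoundment (mm of sea level equivalent; negative = water removed from the ocean)."""
    return np.interp(years, breakpoint_years, breakpoint_mm)

def groundwater_contribution(years, wada_years, wada_rate_mm_per_yr):
    """Linearly extrapolate the (nearly linear) Wada et al annual rate back to 1925 and
    forward to 2009, then integrate to a cumulative contribution (mm)."""
    slope, intercept = np.polyfit(wada_years, wada_rate_mm_per_yr, 1)
    rate = np.clip(slope * np.asarray(years) + intercept, 0.0, None)
    return np.cumsum(rate)  # annual steps, so the cumulative sum is the time integral
```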

The sum of the two adjustments was added to the C&W10 data set to generate a “corrected” C&W10 data set.

The adjustment makes a relatively modest difference, with the biggest change in the 1980’s. I then repeated the calculations for the model using the adjusted C&W10 data, yielding the results in the following two graphics.

The model R^2 remained 0.987. Adding the combined adjustments for reservoir accumulation and groundwater depletion increases the projected sea level rise only slightly compared to the unadjusted C&W10 data set. In any case, substitution of the more recent C&W data yields a significant reduction in projected sea level increases through 2100. The maximum projected increase is only slightly greater than the IPCC AR4 range of values. None of these results supports the extreme sea level increases (1+ meter) suggested by V&R, among others. Extreme rates of sea level rise would appear unlikely based on all the above results. Of course, these projections are only as reliable as the model formulation and the C&W10 data set.

Steric Only Contribution
Steve Mosher asked me to generate the steric contribution (that is, total sea level increase less the mass increase calculated by the model). The graph below shows this data.

References for adjustments to Church and White:
Global depletion of groundwater resources; Wada et al, Geophysical Research Letters, 37, L20402, doi:10.1029/2010GL044571 (2010)
Impact of Artificial Reservoir Water Impoundment on Global Sea Level; B. F. Chao, et al., Science 320, 212 (2008)

A First Order Estimate of Future Sea Level Rise

Introduction
One of the credible threats of warming of the Earth’s surface is a rise in sea level. If of sufficient size and rate, future increases in sea level could cause substantial economic and social disruption by inundating many low lying regions. It is important to note that both the magnitude and rate of future rise need to be known to evaluate the potential cost/disruption, since economic costs and social disruption very far in the future may have very little current economic value.

Sea level rise is reasonably well documented from about 1870 to 1993 based on tide gauges, and since 1993 based on a combination of tide gauges and satellite altimetry; the total rise since 1870 is about 225 mm (~8.9 inches). But estimates of future rise vary widely. It is common to consider sea level to at least the year 2100 in discussing sea level rise, and estimates of rise to year 2100 range from as low as ~175 mm (~7 inches), at the low end of the IPCC AR4 range, to as high as 1500 mm (~59 inches); even higher estimates (2+ meters, ~79+ inches!) are routinely suggested by people like James Hansen. Sea level increases in the upper range of currently published estimates would indeed be very disruptive in many regions, requiring large Netherlands-like dike projects and/or abandoning many coastal regions to the sea. So sea level increase is of considerable interest.

Some Earlier Studies
Many studies of sea level increase have focused on whether or not the rate of sea level rise is accelerating or is reasonably constant. The work of Church and White (2006, hereafter C&W) suggested considerable acceleration in the rate of rise between 1870 and 2001, but other authors (eg. Houston and Dean, 2011, hereafter H&D) have suggested that the rate of rise is not accelerating, at least not since 1930, although acceleration prior to 1930 seems clear from the available data. Vermeer and Rahmstorf (2009, hereafter V&R) proposed a “semi-empirical” model, where they fitted the historical data from C&W (with some adjustments), and concluded that future acceleration of sea level increase would be very high, leading to most probable estimates of 1 to 1.4 meters by 2100, depending on which IPCC emissions scenario is assumed.

This post was motivated by my dissatisfaction with the quality of analyses of both V&R and H&D; I found neither very credible, though for very different reasons. As Vermeer and Rahmstorf pointed out in a recent published refutation of H&D (as well as in a very critical post at RealClimate) the appearance of no acceleration was the result of a “cherry pick” of starting date (1930), which seemed designed to minimize the appearance of acceleration. It is a valid critique, although I would not accuse the authors of ‘cherry picking’; they were, however, a lot less than complete in their analysis.

V&R suffers from a different kind of problem. Their semi-empirical model does yield extreme rates of sea level rise, but it uses what seems like an odd combination of one physically justifiable, but incorrectly formulated function, and one physically unreasonable function, to generate that very high projected rate. V&R assume that the rate of sea level rise consists of two components. The first is that the rate of net melt of land-locked glaciers/ice sheets will increase in proportion to the rise in temperature over that temperature where glaciers/ice sheets would be in equilibrium (neither net melt nor accumulation). This is then the mass component of the increase in sea level. At any constant temperature above the equilibrium temperature, there should be a fairly constant rate of melt for a very long time (since glaciers/ice sheets hold a lot of ice and are expected to melt only very slowly).

But V&R appear to include thermal expansion (due to warming) with the increase in mass (due to melting of landlocked glaciers and ice sheets), which strikes me as simply incorrect, and bound to lead to errors. An increase in melt rate due to an increase in temperature ought to continue to have approximately constant continuing influence, since ice will keep melting unless the temperature falls. Thermal expansion is limited in contribution; once the temperature of the upper layers of the ocean increase in response to surface warming, thermal expansion should slow rapidly. These two factors can reasonably be expected to behave quite differently over time, yet V&R appear to have not attempted to separate their effects.

The second component in V&R’s semi-empirical model is the first derivative of temperature with time. Now you might expect that an increase in the rate of warming (positive first derivative) would have some positive impact on sea level rise, though it is not immediately obvious to me what the magnitude of that effect would be. But it is certainly reasonable to expect an increase in warming rate would cause an increase in rate of sea level rise. In fitting their semi-empirical model to historical data, V&R concluded that the best fit corresponds to the first derivative of the temperature with time having a strongly negative effect on the rate of sea level rise. In other words, an increase in the rate of warming will tend to reduce the rate of rise. I found that to be a remarkably non-physical result, and one which is contrary to any reasonably expected behavior of warming water and melting ice. V&R noted that it was indeed unexpected, and offered a few possible explanations. I found none of them even slightly convincing; had it been me, I would have concluded that the approach was not connected to reasonable physical behavior, and tried to determine what was wrong.

At the time I read V&R, I expected that they would be criticized widely by the climate science community for proposing a model which was so physically unreasonable (and so contrary to common sense!).

Getting Started
As far as I know, that did not happen. Perhaps my expectations were too high. But it motivated me to start thinking about a physically sound way to model sea level rise.

My “A-ha!” moment came a few weeks back when I noted that the physically reasonable part of the V&R model (ice melting increases in proportion to temperature rise over an equilibrium value) was a good starting point; all that was needed was a way to quantify the ocean heat content over the past 140 years or so, and use that to estimate how the average density of water in the ocean has decreased (warming) or increased (cooling) over the past 140 years, and into the future. In other words, if a reasonable estimate of the steric (density change) contribution to the sea level could be developed, then that contribution could be subtracted from the measured historical sea level trend, to yield an isolated glacier/ice sheet melt contribution trend. Once so isolated, the melt-only curve could then be fitted (like V&R) to the known temperature history, yielding an explicit function for how the melt contribution can be expected to change under any assumed warming projection. But how to estimate past (and future!) ocean heat content (OHC), when the only good data available (Levitus et al) cover just 1955 to present? Clearly a model relating sea surface temperature (with data available starting 1850) to OHC was needed.

Modeling Ocean Heat Uptake
It has been recognized for some time that an adjustment following World War II was made in the Hadley SST2 data set to try to account for an assumed sudden decrease in the fraction of ship engine intake readings of temperature and simultaneous increase in bucket sampling, following the war. It has recently been concluded that this adjustment was not accurate. The most recent Hadley data set (SST3) has an off-set of ~+0.3C after WWII (compared to SST2) which decreases linearly to 0C by ~1970. Since I did not have the SST3 data set available, I manually adjusted the SST2 set to reasonably match the new data set.

ENSO causes confusion when trying to connect changes in ocean surface temperature to changes in heat content, because ENSO appears to mainly move ocean surface heat around, not increase or decrease it, while at the same time causing significant changes in the measured average sea surface temperature. ENSO therefore adds a lot of noise to the analysis. Many have noted that average sea surface temperature rises about 1 year following an El Nino event, and falls about 1 year following a La Nina event. A correlation of the one-year lagged Nino 3.4 value against the trend in sea surface temperature showed that an increase of 1C in the Nino 3.4 value corresponds to an increase of about 0.08 -0.1C in the average sea surface temperature 1 year later. (La Nina does approximately the same but in the opposite direction.) So I made a further adjustment in the ocean surface temperature to account for the ENSO (using an assumed constant of 0.1C per C in Nino 3.4). Both of these adjustments are shown in Figure 1.

The Nino 3.4 adjustment is a short-term adjustment in SST, because the Nino 3.4 index varies between positive and negative values, but averages out to near zero. Long term (decadal and more) changes in OHC should not be influenced much by the Nino 3.4 adjustment, but short term changes in OHC will be influenced.
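A sketch of the ENSO adjustment to the SST series: shift the Nino 3.4 index forward by 12 months and remove 0.1C per unit of the lagged index (the constant assumed above). Padding the first year with the initial index value is my choice; sst and nino34 are assumed to be aligned monthly arrays.

```python
import numpy as np

def enso_adjust_sst(sst, nino34, lag_months=12, coef=0.1):
    """Remove the assumed ENSO contribution: SST responds ~1 year after Nino 3.4,
    at roughly 0.1 C per 1 C of the index."""
    lagged = np.concatenate([np.full(lag_months, nino34[0]), nino34[:-lag_months]])
    return sst - coef * lagged
```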

To model heat diffusion into the ocean, I set up a series of 21 spread sheet columns, each column representing ~56 meters of ocean depth, with the temperature value in the first cell of the first column being the Nino-adjusted surface temperature in 1850, the second cell in that column being the Nino-adjusted surface temperature in 1851, so forth down the column until reaching 2010. The first cell in every other column (corresponding to the temperature at that corresponding depth in 1850) was set to -0.4 C (these are anomaly values like the Hadley SST’s, not absolute values!), which was my estimate of the approximate 1850 average temperature anomaly.

The second cell in the second column was calculated by averaging the first cell in the first column with the first cell in the third column. The second cell in the third column was calculated by averaging the first cells in the second and fourth columns, and so forth. Each step downward then represents a 1 year step forward in time, and changes in surface temperature (first column) are damped as their influence migrates and is “averaged” into the columns which represent deeper water. (A very small amount of damping, giving slightly slower response than simple averaging, was needed to reduce the tendency for the calculations to oscillate.) The resulting trends in temperature at several different depths are shown in Figure 2.
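The spreadsheet scheme translates directly into a small explicit “averaging” model; a sketch is below. The insulated bottom boundary and the exact amount of extra damping are my guesses at details the post does not spell out.

```python
import numpy as np

def diffuse_ocean(surface_temp, n_layers=21, initial=-0.4, relax=0.95):
    """Yearly averaging scheme for heat diffusion into the ocean.

    surface_temp : Nino-adjusted surface temperature anomaly, one value per year from 1850
    n_layers     : 21 layers of ~56 m each (~1176 m total)
    initial      : assumed 1850 anomaly for all sub-surface layers
    relax        : relax = 1 is pure neighbor-averaging; slightly less than 1 adds
                   the small damping mentioned in the post (the value is a guess)

    Returns an array of shape (n_years, n_layers); column 0 is the surface.
    """
    n_years = len(surface_temp)
    T = np.full((n_years, n_layers), initial, dtype=float)
    T[:, 0] = surface_temp
    for n in range(1, n_years):
        for j in range(1, n_layers):
            below = T[n - 1, j + 1] if j + 1 < n_layers else T[n - 1, j]  # insulated bottom (assumption)
            neighbor_avg = 0.5 * (T[n - 1, j - 1] + below)
            T[n, j] = T[n - 1, j] + relax * (neighbor_avg - T[n - 1, j])
    return T

# Mean of the first 13 layers (~0-728 m), the predictor used in the OHC regression (Eq. 1):
# upper_mean = diffuse_ocean(adjusted_sst)[:, :13].mean(axis=1)
```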

Note in Figure 2 that the variation in surface temperature year-on-year is rapidly damped at greater depths; indeed, there is only a very modest change in temperature between 1850 and 2010 at the maximum depth of 1120-1176 meters. The final step in developing the steric part of the model was running a regression between the measured changes in 0-700 meters OHC (Levitus et al) and the average temperature of the first 13 columns (nominally, 0-728 meters). This yielded the model equation:

OHC = 90.05(+/- 3.7) * (average temperature) + 19.8 (+/- 0.8) (Eq. 1)

The regression gives an R^2 value of 0.918. Figure 3 shows the modeled trend back to 1871, along with the measured trend from 1955 to 2011.

The calculations of heat diffusion started in 1850, but I treated the 1850 to 1870 period as a “windup”, because choosing an assumed initial temperature (-0.4C) could lead to significant inaccuracies in calculated heat, especially in the first two decades. Once the calculations have been based on actual surface temperatures for a couple of decades, the influence of any inaccuracy in the assumed initial temperature is greatly reduced. Figure 4 shows the scatter plot of the OHC model against the measured OHC.

While the fit is by no means perfect, it is at least reasonable, and gives a first order estimate of ocean heat content before 1955. The same combination of heat diffusion calculation and an assumed future surface temperature trend can (of course) be used to project the trend in future OHC.

Figure 3 also shows the estimated steric contribution to sea level change based on the total change in heat from the surface to the deepest water (~1150 meters). The pattern of oscillation in ocean heat and the resulting contribution to steric sea level change is clear. The estimate of steric contribution is a little uncertain because the coefficient of expansion of sea water changes substantially with temperature; colder water expands less for a given temperature change than warmer water. A more exact calculation would require knowledge of the absolute temperature profile in addition to how the profile changes with time.

Determining the Mass Contribution
Subtracting the calculated steric contribution from the historical measured sea level should yield a much improved estimate of the true mass increase (increase due to melting of glaciers/ice sheets). The steric contribution from Figure 3 was subtracted from the C&W sea level data. The Ho, A, and To parameters in the following equation were then hand optimized to give a reasonably good match to the curve for level increase due to increasing mass:

H(t) = Ho + A * ∫ (T(t) – To) dt      (Eq. 2)

Where: t is time
H(t) is mass related sea level as a function of time
Ho is a constant
A is the “melt rate constant”
T(t) is the average ocean surface temperature over time
To is the “equilibrium” (no melt) temperature

Integration is from 1870 to time t, with t between 1871 and 2001 (the period covered by C&W data).

The resulting fit values were:

A = 2.30 mm/deg-yr
Ho = -49.3 mm
To = -0.722
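With annual time steps, Eq. 2 reduces to a cumulative sum; a sketch using the fitted constants above, with sst standing in for the (adjusted) average ocean surface temperature anomaly by year from 1870:

```python
import numpy as np

def mass_sea_level(sst, A=2.30, Ho=-49.3, To=-0.722):
    """Discrete form of Eq. 2: H(t) = Ho + A * sum of (T - To) over whole years."""
    return Ho + A * np.cumsum(np.asarray(sst) - To)
```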

Note that the above equation says the “no-melt” average sea surface temperature is 0.722C below the zero point of the Hadley SST2 data (or about 0.722C below the temperature in 1980), which seems a reasonable value. The results are shown graphically in Figure 5, where the upper curves are the melt model and the steric adjusted measured sea level from C&W.

The lower curves in Figure 5 show the original C&W data along with the combined model: mass contribution model plus the steric contribution from the SST model. The fit appears quite good. The model estimate of sea level trend from 1993 to 2010 is almost exactly the trend independently measured by satellite altimetry. The quadratic fits for the mass increase part of the model and the mass portion of the C&W data are essentially identical. The mass increase portion of the model clearly shows net acceleration (0.0106 mm/yr^2), contrary to the claim of no acceleration by H&D.

Figure 6 is a scatter plot of the C&W data against the combined model. The fit is quite good. This suggests that the model is capable of simulating sea level changes with reasonable accuracy.

Future Sea Level Projections
(Drum roll please…)

A Simple Analysis of Equilibrium Climate Sensitivity

Lots of people (including me!) have made estimates of climate sensitivity based on linear regressions.  These include regressions of observed historical temperatures against an assumed forcing history, as well as regressions against time, taking into account known factors like ENSO, volcanic eruptions, AMO, PDO, and more.  Others have made estimates based on simple models, like Lucia’s ‘Lumpy’ model and Arthur Smith’s two time constant model (and Tamino’s various efforts).  And of course, there are climate models of great complexity which also generate estimates of climate sensitivity.  All of these analyses depend on a set of assumptions, and many/most of those assumptions can be legitimately questioned, if only because the data being used is doubtful.  It really is not known what the man-made aerosol forcing history was over the past century.  It is not even known what today’s aerosol forcing is.  It is not very accurately known what historical ocean heat uptake has been, especially before Argo started operating.  So, not a lot of confidence can be assigned to the resulting estimates of sensitivity.

But I think there is a simpler approach to estimating basic climate sensitivity, one that requires only these assumptions:

 

  1. The Stefan-Boltzmann equation is accurate.
  2. The sun’s intensity (as measured in space) is reasonably accurate.
  3. The measured surface temperature today is reasonably accurate (within a degree or two).
  4. Earth’s albedo (as measured from space) is reasonably accurate.
  5. Radiation is fungible.

 

The average solar intensity above the atmosphere is ~1366 watts per square meter.  Earth’s average albedo is ~32%, so the absorbed intensity, averaged over the Earth’s surface, is ~1366/4 * (1 – 0.32) ≈ 232.2 watts/M2 (the factor of 4 is the ratio of the Earth’s surface area to its cross-sectional area).  According to the Stefan-Boltzmann equation, this corresponds to a blackbody temperature of ~253oK.  Earth’s average surface temperature is ~288oK (15oC).  The difference is the total “greenhouse effect” of Earth’s atmosphere, about 35oC.  So passage of an average of ~232.2 watts/M2 through the atmosphere to space requires an average of ~35oC of ‘delta-T’, and the resistance constant of the atmosphere to the passage of heat is ~35/232.2 = 0.1507oC/watt/M2.

Now suppose that we were to increase the net absorbed radiation by 1 watt/M2 by increasing the sun’s intensity.  The blackbody equivalent temperature would then be expected, based on Stefan-Boltzmann, to rise by 0.270oC.  But heat flow through the atmosphere would also have to increase by one watt, so we could expect the total surface warming at equilibrium for a 1 watt/M2 increase in absorbed solar intensity to be approximately:

0.270 (blackbody) + 0.1507 (atmosphere) = 0.4207oC/watt/M2

This is a rough estimate of the Earth’s current sensitivity to radiation. Please keep in mind that this value has nothing to do with the accuracy of historical temperature records, it is insensitive to assumed ocean heat uptake (assuming ~0.5 watt/M2 current ocean heat uptake makes little difference), and does not assume the existence of aerosol offsets.  It does not even have much sensitivity to the accuracy of today’s measured surface temperatures.  If the true surface temperatures are a little higher or lower, that will not change the estimated resistance constant for the atmosphere very much.  A one degree change in the assumed value for surface temperature changes the diagnosed sensitivity by only about 1%.

I assume here that radiation is fungible, which means it should not matter very much if net solar intensity increases by 1 watt/M2 or increases in GHG’s generate 1 Watt/M2 of “back-radiation”, the average temperature rise at the Earth’s surface should be comparable.  While the Earth’s effective blackbody radiation at equilibrium as measured from space can’t change due to added GHG’s in the atmosphere, the apparent temperature of space, as viewed from earth, does change due to GHG’s and so the effect on average surface temperature should be comparable to an equal increase in solar radiation.   [I will not participate in a food fight about whether ‘back-radiation’ from GHG’s is real, but I will be happy to participate in a discussion about the fungibility of radiation.]

So based on this very simple analysis, the diagnosed equilibrium climate sensitivity to a doubling of CO2 (which is ~3.71 watts/M2 of forcing), absent any feedbacks arising from the change in forcing, is ~1.56oC per doubling of CO2 (or 0.4207oC/watt/M2).
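The arithmetic behind this estimate fits in a few lines; the blackbody response follows from the Stefan-Boltzmann law F = σT^4, which gives dT/dF = T/(4F). A sketch (the small differences from the figures above are just rounding):

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 1366.0         # solar intensity above the atmosphere, W/m^2
ALBEDO = 0.32
T_SURFACE = 288.0      # average surface temperature, K
F_2XCO2 = 3.71         # forcing from a doubling of CO2, W/m^2

absorbed = SOLAR / 4.0 * (1.0 - ALBEDO)              # ~232.2 W/m^2 averaged over the sphere
t_blackbody = (absorbed / SIGMA) ** 0.25             # ~253 K
resistance = (T_SURFACE - t_blackbody) / absorbed    # ~0.151 C per W/m^2 through the atmosphere
blackbody_response = t_blackbody / (4.0 * absorbed)  # ~0.27 C per W/m^2
sensitivity = blackbody_response + resistance        # ~0.42 C per W/m^2
print(round(sensitivity * F_2XCO2, 2))               # ~1.57 C per doubling of CO2
```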

Positive or negative feedbacks would show up as changes in the resistance constant of the atmosphere with changes in surface temperature.  One can argue the true sensitivity to changes in GHG’s may be higher or lower than this value.  But the above value is a reasonable starting point; meaningful feedbacks (negative and positive) have to be shown to move the atmosphere away from its current average resistance value in order for the sensitivity to forcing to be substantially higher or lower.

 

Weaknesses

Like almost all simplified calculations, the above very simple analysis has weaknesses, of course.

There is not one blackbody radiation temperature for Earth, but a range; different places on Earth have very different blackbody radiation rates.

 

 

A change in overall forcing will not produce the same change in blackbody temperature everywhere.  For example, a change of 1 watt where the emission rate is 200 watts/M2 changes the blackbody emission temperature by ~0.30oC, while that same change of 1 watt where the emission rate is 300 watts/M2 changes the emission temperature by only ~0.23oC.  Applying a uniform forcing is not expected to cause a uniform change in blackbody temperature; the change should be somewhat higher where the surface is on average cold than where it is on average warm (which is consistent with observations of greater average warming at high latitudes than at low).  The use of a single “average” blackbody temperature (253oK) for Earth may somewhat overstate the blackbody contribution to the overall sensitivity.

Another weakness is that there is not a single resistance value for the atmosphere, but rather a range.  The resistance in the tropics (with a deeper troposphere and higher atmospheric moisture) is certainly higher than at high latitudes, where temperatures are lower, the moisture content of the air is generally low, and the troposphere is thinner.   Much heat absorbed in the tropics is transported by the oceans and atmosphere to higher latitudes, where it can more easily escape.  It is very likely that there would be some increase in average resistance with higher temperatures (a positive feedback), which would mean the above analysis could understate the overall sensitivity.

A third weakness is that the Earth’s albedo is implicitly assumed to be constant with different levels of forcing, which may not be accurate.  If warming temperatures significantly increase atmospheric moisture (mostly due to higher ocean evaporation rates), then it is reasonable to expect convective clouds and average albedo to increase with forcing… a negative feedback.  Which would mean the above analysis could overstate the overall sensitivity.

Finally, GHG forcing is not exactly the same as solar forcing, because solar forcing is much stronger in the tropics, and much weaker at high latitudes, while GHG forcing should be fairly uniform.  Once again, the expected effect is that, compared to solar, GHG forcing is expected to cause relatively more warming at high latitudes than at low.

If you see other weaknesses I have not noted, please feel free to point these out in comments.

 

Final Thoughts

The observed warming since pre-industrial times is somewhere near 0.9oC (although some will argue about the accuracy of this value!).  If the above estimate of sensitivity is reasonably accurate, and if we accept 0.9oC as a reasonable estimate of warming, then this means that only about:

0.9/0.4207 = 2.14 watts/M2

‘extra’ GHG forcing is currently warming the earth (compared to pre-industrial times).  Since current total GHG forcing is estimated to be ~3.05 watts/M2, the difference (0.91 watt/M2) must be accounted for by increases in albedo (aerosols) or by accumulation in the ocean.  Argo data and some recent estimates of deep ocean heat accumulation suggest a current ocean accumulation of ~0.25 – 0.35 watt/M2, or a bit less, leaving an estimated aerosol “offset” of ~0.56 – 0.66 watt/M2.  This is almost exactly the lower limit of the IPCC’s uncertainty range for aerosol effects in the 4th Assessment Report.  So the above estimated sensitivity is not inconsistent with the best available aerosol data and its associated uncertainty range.

Much higher sensitivity (say 0.81oC /watt/M2, or 3oC per doubling) is only consistent with much higher current aerosol off-sets.  3oC sensitivity requires a current aerosol offset of ~ 1.63 watts/M2, while 4.5oC sensitivity requires ~2 watts/M2 of current offset, which is (I think) at or above the upper limit of the IPCC uncertainty range.