Some recent exchanges in comments on The Blackboard suggest some confusion about how a heat-balance-based empirical estimate of climate sensitivity is done, and how that generates a probability density function for Earth’s climate sensitivity. So in this post I will describe how empirical estimates of net human climate forcing and its associated uncertainty can be translated into climate sensitivity and its associated uncertainty. I will show how the IPCC AR5 estimate of human forcing (and its uncertainty) leads to an empirical probability density function for climate sensitivity with a relatively “long tail”, but with most probable and median values near the low end of the IPCC ‘likely range’ of 1.5C to 4.5C per doubling.
We can estimate equilibrium sensitivity empirically using a simple energy balance: energy is neither created nor destroyed. At equilibrium, a change in radiative forcing will cause a change in temperature which is approximately proportional to the sensitivity of the system to radiative forcing.
The IPCC’s Summary for Policy Makers includes a helpful graphic, SPM.5. This graphic is shown in Figure 1.
The individual human forcing estimates and associated uncertainties are shown in the upper panel, and the combined forcing estimates are shown in the lower panel. Combined forcing estimates, relative to the year 1750, are shown for 1950, 1980, and 2011. These forcing estimates include the pooled “90% uncertainty” range, shown by the thin black horizontal lines, meaning there is a ~5% chance that the true forcing is above the stated range and a ~5% chance it is below the stated range. The best estimate for 2011 is 2.29 watts/M^2, and there is a roughly equal probability (~50% each) that the true value is above or below 2.29 watts/M^2. Assuming a normal (Gaussian) probability distribution (which may not be exactly right, but should be close… see the central limit theorem), we get the following PDF for human forcing from SPM.5:
Figure 2 shows that the areas under the left and right halves of the probability distribution are identical, and the cumulative probability (area under the curve) crosses 0.5 (the median) at 2.29 watts/M^2, as we would expect. The short vertical red lines correspond to the 5% and 95% cumulative probabilities. The uncertainty represented by Fig 2 is in net human forcing, not in climate sensitivity. The shape of the corresponding climate sensitivity PDF is determined by the uncertainty in forcing (as shown in Fig 2), combined with how forcing translates into warming. When the forcing PDF is translated into a sensitivity PDF, the median value for that sensitivity PDF must correspond to 2.29 watts/M^2, because half the total forcing probability lies above 2.29 watts/M^2 and half below.
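For readers who want to reproduce Figure 2, here is a minimal sketch. It assumes the Gaussian form discussed above and uses the 5%–95% bounds given later in this post (1.13 and 3.33 watts/M^2); the 1.645 factor is the standard normal 5%/95% quantile, and the variable names are mine:

```python
import numpy as np
from scipy import stats

# AR5 (SPM.5) total human forcing in 2011, watts/M^2
F_MEDIAN = 2.29          # best estimate (median)
F_5, F_95 = 1.13, 3.33   # 5% and 95% bounds of the 90% range

# For a Gaussian, the 5%-95% range spans about 2 * 1.645 standard deviations
sigma = (F_95 - F_5) / (2 * 1.645)   # ~0.67 watts/M^2

forcing_pdf = stats.norm(loc=F_MEDIAN, scale=sigma)

# The AR5 range is slightly asymmetric about the median, so a Gaussian
# centered at 2.29 reproduces the stated bounds only approximately:
print(forcing_pdf.ppf(0.05))  # ~1.19 (AR5 states 1.13)
print(forcing_pdf.ppf(0.95))  # ~3.39 (AR5 states 3.33)
```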
We can translate any net forcing value to a corresponding estimate of effective climate sensitivity via a simple heat balance if:
1) We know how much the average surface temperature has warmed since the start of significant human forcing.
2) We make an assumption about how much of that warming has been due to human forcing (as opposed to natural variation).
3) We know how much of the net forcing is being currently accumulated on Earth as added heat.
The Effective Sensitivity, in degrees per watt per sq meter, is given by:
ES = ΔT/(F – A)     (eq. 1)
Where ΔT is the current warming above pre-industrial caused by human forcing (degrees C)
F is the current human forcing (watts/M^2)
A is the current rate of heat accumulation, averaged over the Earth’s surface (watts/M^2)
For this post, I assume warming since the pre-industrial period is ~0.9 C based on the GISS LOTI index, and that 100% of that warming is due to human forcing. (That is, no ‘natural variation’ contributes significantly.)
Heat accumulation is mainly in the oceans, with small contributions from ice melt and warming of land surfaces. The top 2 km of ocean (Figure 3) is accumulating heat at a rate of ~0.515 watt/M^2 averaged over the surface of the Earth, and ice melt of ~1 mm/year (globally averaged) adds ~0.01 watt/M^2. Heat accumulation below 2 km ocean depth is likely small, but is not accurately known; I will assume an additional 10% of the 0–2 km ocean accumulation, or ~0.0515 watt/M^2. That comes to ~0.577 watt/M^2. The heat accumulation in land surfaces is small, but difficult to quantify exactly; for purposes of this post I will assume 0.02 watt/M^2, bringing the total to just under 0.6 watt/M^2.
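The bookkeeping is simple enough to mirror in a few lines (all values are the assumptions stated above, not measured constants):

```python
# Assumed heat accumulation terms, watts/M^2 averaged over Earth's surface
ocean_0_2km = 0.515                # 0-2 km ocean (Figure 3)
ice_melt    = 0.010                # ~1 mm/year globally averaged ice melt
deep_ocean  = 0.10 * ocean_0_2km   # assumed 10% of the 0-2 km value
land        = 0.020                # assumed land-surface warming

A = ocean_0_2km + ice_melt + deep_ocean + land
print(A)  # ~0.5965, i.e. just under 0.6 watt/M^2
```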
Climate sensitivity is usually expressed in degrees per doubling of carbon dioxide concentration (not degrees per watt/M^2), with an assumed incremental forcing of ~3.71 watt/M^2 per doubling of CO2, so I define the Effective Doubling Sensitivity (EDS) as:
EDS = 3.71 * ES = 3.71 * ΔT/(F – A)       (eq. 2)
If we plug in the IPCC AR5 best estimate for human forcing (2.29 watts/M^2 as of 2011), a ΔT of 0.9C and heat accumulation of 0.6 watt/M^2, the “best estimate” of EDS at the most probable forcing value is:
EDS = 3.71 * 0.9/(2.29 – 0.6) = 1.98 degrees per doubling    (eq. 3)
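For readers who want to play with these numbers, here is a minimal sketch of equation 2 as a function; the defaults encode the assumptions stated above, and the names are mine:

```python
def eds(forcing, delta_t=0.9, accumulation=0.6, f2x=3.71):
    """Effective doubling sensitivity (eq. 2), degrees C per doubling of CO2.

    forcing      -- net human forcing, watts/M^2
    delta_t      -- warming attributed to human forcing, degrees C
    accumulation -- current rate of heat accumulation, watts/M^2
    f2x          -- assumed forcing per doubling of CO2, watts/M^2
    """
    return f2x * delta_t / (forcing - accumulation)

print(eds(2.29))  # ~1.98 degrees per doubling, matching eq. 3
```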
This is near the lower end of the IPCC’s canonical 1.5 to 4.5 C/doubling range from AR5, and is consistent with several published empirically based estimates. 2.29 watts/M^2 is the best estimate of forcing in 2011 from AR5, but there is considerable uncertainty, with a 5% to 95% range of 1.13 watts/M^2 to 3.33 watts/M^2. By substituting the forcing values from the PDF shown in Figure 2 into equation 2, we naively translate (AKA, wrongly translate) the forcing probability distribution to an EDS distribution, with probability on the y-axis and EDS on the x-axis. Please note that according to equation 2, as the forcing level F approaches the rate of heat uptake A, the calculated value for EDS approaches infinity. That is, any forcing level near or below 0.6 watt/M^2 is likely unphysical, since the Earth has never had a ‘thermal run-away’; we know the sensitivity is not infinite. The PDF shown in Figure 2 shows a finite probability of forcing at and below 0.6 watt/M^2, so if we are reasonably confident in the current rate of heat accumulation, then we are also reasonably confident the PDF in Figure 2 isn’t exactly correct, at least not in the low forcing range.
Figure 4 shows the result of the naive translation of the forcing PDF to a sensitivity PDF. To avoid division by zero (equation 2), I limited the minimum forcing value to 0.7 watt/M^2, corresponding to a sensitivity value of ~33C per doubling. There is much in Figure 4 to cast doubt on its accuracy. How can a sensitivity of 18C per doubling be 10% as likely as 2C per doubling? How can it be that 50% of the forcing probability, which lies above 2.29 watts/M^2, corresponds to a tiny area to the left of the vertical red line?  Figure 4 seems an incorrect representation of the true sensitivity PDF.
We expect from Figure 2 that there should be a 50:50 chance the true human forcing is higher or lower than ~2.29 watts/M^2, corresponding to ~2C per doubling sensitivity (shown by the red vertical line in Figure 4). Yet the area under the PDF curve to the left of the vertical line in Figure 4 (corresponding to human forcing greater than 2.29 watts/M^2 and lower climate sensitivity) is very small compared to the area to the right of the vertical line (corresponding to human forcing below 2.29 watts/M^2 and higher climate sensitivity). Why does this happen?
The problem is that the naive translation between forcing and sensitivity using equation 2 yields an x-axis (the sensitivity axis) which is “compressed” strongly at low sensitivity (that is, at high forcing) and “stretched” strongly at high sensitivity (that is, at low forcing).  By “compressed” and “stretched” I mean relative to the original linear x-axis of forcing values.  Compressing the x-axis at high forcing makes the area under the low sensitivity part of the curve smaller than correct, while stretching the x-axis at low forcing makes the area under the high sensitivity part of the curve larger than correct.   The result is that relative areas under the forcing PDF are not preserved during the naive translation to a sensitivity PDF. The extent of “stretching/compressing” due to translation of the x-axis is proportional to the first derivative of the translation function:
‘Stretch/compress factor’ = d{1/(F-A)}/dF = -1/(F-A)^2                     (eq. 4)
The negative sign in eq. 4 just indicates that the ‘direction’ of the x-axis is reversed by the translation (lower forcing => higher sensitivity, higher forcing => lower sensitivity). If we want to maintain equal areas under the curve above and below a sensitivity value of ~2C per doubling (that is, below and above the 2.29 watts/M^2 median forcing) in the sensitivity PDF, then we have to divide each probability value from the original forcing PDF by 1/(F-A)^2, or equivalently, multiply it by (F-A)^2, for each point on the curve. That is, we need to adjust the ‘height’ of the naive PDF curve to ensure the areas under the curve above and below 2C per doubling are the same. For consistency of presentation, I renormalized based on the highest adjusted point on the curve (highest point = 1.000).
{Aside: In general, any similar translation of an x-y graph based on a mathematical function of the x-axis values will require that the y values be divided by an adjustment function based on the first derivative of the translation function:

           ADJy(x) = dG(x)/dx                                          (eq. 5)

where ADJy(x) is the adjustment factor for the y value and G(x) is the translation function.}
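For the numerically inclined, here is a minimal sketch of this adjustment. The Gaussian width and the 0.7 watt/M^2 truncation follow the choices described above; this is an illustration of the method, not the exact script behind the figures:

```python
import numpy as np
from scipy import stats

F2X, DT, A = 3.71, 0.9, 0.6              # eq. 2 constants
forcing_pdf = stats.norm(loc=2.29, scale=0.67)

# Forcing grid, truncated at 0.7 watt/M^2 as in Figure 4
F = np.linspace(0.7, 5.0, 2000)
S = F2X * DT / (F - A)                   # eq. 2: sensitivity at each forcing

# Change of variables for densities: p_S(S) = p_F(F) * |dF/dS|.
# Since F = A + F2X*DT/S, |dF/dS| = F2X*DT/S^2 = (F - A)^2 / (F2X*DT);
# up to a constant, this is dividing p_F by 1/(F-A)^2, as described above.
p_S = forcing_pdf.pdf(F) * (F - A) ** 2 / (F2X * DT)

# Renormalize so the highest point is 1.000, as in Figure 5
p_S = p_S / p_S.max()

print(S[np.argmax(p_S)])                 # peak near ~1.55-1.6C per doubling
```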
The good news in Figure 5 is that the areas under the curve left and right of the vertical line are now the same, as we know they should be. But the peak in the curve is now at ~1.55C per doubling, corresponding to a forcing of 2.75 watts/M^2, rather than at ~2C per doubling, corresponding to the most probable forcing value of 2.29 watts/M^2 (from Figure 2). What is going on? To understand what is happening we need to recognize that the adjustment applied to maintain consistent relative areas under the curve effectively takes into account how quickly (or slowly) the forcing changes for a corresponding change in sensitivity. To examine how much the original forcing value must change for a small change in sensitivity, let’s look at a change in sensitivity of ~0.2 at both high and low sensitivity ranges:
Sensitivity (C per doubling)     Corresponding Forcing
1.5041                           2.82 watts/M^2
1.7036                           2.56 watts/M^2
Difference: 0.1995               0.26 watt/M^2  =>  ~1.3 watts/M^2/(degree/doubling)

4.512                            1.34 watts/M^2
4.281                            1.38 watts/M^2
Difference: 0.231                0.04 watt/M^2  =>  ~0.173 watt/M^2/(degree/doubling)
For the same incremental change in sensitivity, it takes a ~7.5 times greater change in forcing near a sensitivity of 1.6 than it does near a sensitivity of 4.4. A large change in forcing at a high forcing level corresponds to only a very small change in sensitivity, while a small change in forcing at a low forcing level corresponds to a large change in sensitivity. But the fundamental uncertainty is in forcing (Figure 2), so at low sensitivity (high forcing) a small change in sensitivity represents a large fraction of the total forcing probability. That is why the peak in the adjusted PDF for sensitivity shifts to a lower value; it must shift lower to maintain fidelity with the fundamental uncertainty function, which is in forcing.
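These forcing values follow from inverting equation 2 (F = A + 3.71*ΔT/EDS); a quick check of the table, using the assumptions above:

```python
def forcing_at(eds, delta_t=0.9, accumulation=0.6, f2x=3.71):
    # Invert eq. 2: F = A + f2x * delta_t / EDS
    return accumulation + f2x * delta_t / eds

# Low-sensitivity pair: ~0.26 watt/M^2 over a ~0.20 step => ~1.3
print(forcing_at(1.5041) - forcing_at(1.7036))

# High-sensitivity pair: ~0.04 watt/M^2 over a ~0.23 step => ~0.173
print(forcing_at(4.281) - forcing_at(4.512))
```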
If you have doubt that the adjustment used to generate Figure 5 is correct, consider a blind climate scientist throwing darts (randomly) towards a large image of Figure 2. The question is: What fraction of the darts will hit left of 2.29 watts/M^2 and below the probability curve and what fraction will hit to the right of 2.29 watts/M^2 and below the probability curve? If the climate scientist is truly blind (throwing at random, both up-down and left-right), the two fractions will be identical.
If enough darts are thrown, and we calculate the corresponding sensitivity value for each dart which falls between the baseline and the forcing probability curve, we can count the number of darts which hit narrow sensitivity ranges equal distances apart (equal-width bins), and construct a Monte Carlo version of Figure 5, keeping in mind that uniform bin widths in sensitivity correspond to very non-uniform bin widths in forcing. The blind climate scientist throws darts at wider bins (each corresponding to equally spaced sensitivity values) on the high side of the forcing range than on the low side. The most probable bin to hit, corresponding to the peak on the sensitivity PDF graph, will be the bin with the greatest total area on the forcing graph (that is, where the height of the probability curve times the forcing bin width is at its maximum). If many bins are used, and enough darts thrown, the Monte Carlo version will be identical in appearance to Figure 5.
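Here is a minimal Monte Carlo sketch of that experiment. Sampling forcing values directly from the Gaussian PDF is equivalent to counting only the darts that land under the curve; the bin width and truncation point are my choices:

```python
import numpy as np

rng = np.random.default_rng(0)
F2X, DT, A = 3.71, 0.9, 0.6

# Each "dart" is a random draw from the Gaussian forcing PDF (Figure 2)
darts = rng.normal(loc=2.29, scale=0.67, size=1_000_000)
darts = darts[darts > 0.7]           # truncate low forcings, as in Figure 4

# Convert each dart's forcing to a sensitivity via eq. 2
eds = F2X * DT / (darts - A)

# Histogram in equal-width sensitivity bins: the Monte Carlo Figure 5
counts, edges = np.histogram(eds, bins=np.arange(0.5, 10.0, 0.1))
print(edges[np.argmax(counts)])      # peak near ~1.5-1.6C per doubling
print(np.median(eds))                # median near ~2C per doubling
```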
Comments and Observations
Based on the AR5 human forcing estimates and a simple heat balance calculation of climate sensitivity, the median effective doubling sensitivity (EDS) is ~2C, and the most probable EDS is ~1.55C. There is a 5% chance that the EDS is less than ~1.25C and a 5% chance that it is more than ~6.3C. These values (checked numerically in the sketch after this list) are based on the following additional assumptions:
1) The PDF in forcing is approximately Gaussian. This seems likely based on the uncertainty ranges for each of the many individual forcings shown in SPM.5 and the central limit theorem.
2) All warming since pre-industrial times has been due to human forcing. If 0.1C of the long term warming is due to natural variation, then the median value for sensitivity falls to ~1.76C per doubling. If there is a long term underlying natural cooling trend of 0.1C, which partially offsets warming, then the median sensitivity increases to ~2.2C per doubling.
3) Total current heat accumulation as of 2011 was ~0.6 watt/M^2 averaged globally (including ocean warming, ice melt, and land warming).
If any of the above assumptions are incorrect, then the calculations here would have to be modified.
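For reference, the numbers quoted above follow directly from equation 2 evaluated at the SPM.5 forcing percentiles; here is a short, self-contained check (small differences from the quoted values reflect rounding):

```python
F2X, A = 3.71, 0.6   # watts/M^2 per doubling; assumed heat accumulation

def eds(forcing, delta_t=0.9):
    """Effective doubling sensitivity (eq. 2), C per doubling."""
    return F2X * delta_t / (forcing - A)

# Median and the 5%/95% forcing bounds from SPM.5
print(eds(2.29))               # median EDS: ~1.98C per doubling
print(eds(3.33))               # low tail (high forcing): ~1.2C per doubling
print(eds(1.13))               # high tail (low forcing): ~6.3C per doubling

# Alternative attribution assumptions (item 2 above)
print(eds(2.29, delta_t=0.8))  # ~1.76C if 0.1C of the warming is natural
print(eds(2.29, delta_t=1.0))  # ~2.2C if natural cooling offsets 0.1C
```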
Relationship of EDS to Equilibrium Sensitivity
EDS is a good approximation for Earth’s equilibrium climate sensitivity to a doubling of CO2 if the equilibrium temperature response is linear, or nearly linear, in forcing. This seems to be a reasonable expectation over modest temperature changes. There is some disagreement between different climate modeling groups (and others) about long term apparent non-linearity. For further insight, see for example: http://rankexploits.com/musings/2013/observation-vs-model-bringing-heavy-armour-into-the-war/.
Impact of Uncertainty in Forcing
The width of the AR5 forcing PDF means that calculated sensitivity values for the low forcing “tail” of the distribution reach implausibly high levels (e.g. a 2.5% chance of EDS over 10C), which seems inconsistent with the relative stability of Earth’s climate in spite of large changes in atmospheric carbon dioxide in the geological past. I think the people who prepared the AR5 estimates of forcing would have been well served to consider the plausibility of extreme climate sensitivity; a slightly narrower uncertainty range, especially at low forcing, seems more consistent with the long term stability of Earth’s climate.
A reasonable question is: How would the sensitivity PDF change if the forcing PDF were narrower? In other words, if it were possible to narrow the uncertainty in forcing, how would that impact the sensitivity PDF? Figure 6 shows the calculated sensitivity PDF with a 33% reduction in the standard deviation of the total forcing uncertainty, but the same median forcing value (2.29 watts/M^2).
The peak sensitivity is now at 1.72C per doubling (versus 1.55C per doubling with the AR5 forcing PDF), while there is now a ~5% chance the true sensitivity lies above 3.6C per doubling, indicated by the vertical green line in Figure 6 (versus 6.3C per doubling with the AR5 forcing PDF). Any narrowing of the uncertainty around a given forcing estimate will lead to a relatively large reduction in the estimated chance of very high sensitivity, and a modest increase in the most probable sensitivity value. Since most of the uncertainty in forcing is due to uncertainty in aerosol effects (direct and indirect), it seems prudent to concentrate on a better definition of aerosol influence to improve the accuracy of empirical estimates of climate sensitivity; building and launching a replacement for the failed Glory satellite (global aerosol measurements) would be a step in that direction.
Finally, there is some (smaller) uncertainty in actual temperature rise over pre-industrial and in heat accumulation. Adding these uncertainties will broaden the final sensitivity PDF, but the issues are the same: the dominant uncertainties are in forcing, and especially in aerosol effects. Any broadening of the forcing PDF leads to an ever more skewed sensitivity PDF.
Note:
I am not interested in discussing the validity of the GISS LOTI history, nor anything having to do with radiative forcing violating the Second Law of thermodynamics (nor how awful are Hillary Clinton and Donald Trump, or any other irrelevant topic). The objective here is to reduce confusion about how uncertainty in forcing translates into a PDF in climate sensitivity.