Some recent exchanges in comments on The Blackboard suggest some confusion about how a heat balance based empirical estimate of climate sensitivity is done, and how that generates a probability density function for Earth’s climate sensitivity. So in this post I will describe how empirical estimates of net human climate forcing and its associated uncertainty can be translated to climate sensitivity and its associated uncertainty. I will show how the IPCC AR5 estimate of human forcing (and its uncertainty) leads to an empirical probability density function for climate sensitivity with a relatively “long tail”, but with most probable and median values near the low end of the IPCC ‘likely range’ of 1.5C to 4.5C per doubling.
We can estimate equilibrium sensitivity empirically using a simple energy balance: energy is neither created nor destroyed. At equilibrium, a change in radiative forcing will cause a change in temperature which is approximately proportional to the sensitivity of the system to radiative forcing.
The IPCC’s Summary for Policy Makers includes a helpful graphic, SPM.5, which is reproduced here as Figure 1.
The individual human forcing estimates and associated uncertainties are shown in the upper panel, and the combined forcing estimates are shown in the lower panel. Combined forcing estimates, relative to the year 1750, are shown for 1950, 1980, and 2011. These forcing estimates include the pooled “90% uncertainty” range, shown by the thin black horizontal lines, meaning there is a ~5% chance that the true forcing is above the stated range and a ~5% chance it is below the stated range. The best estimate for 2011 is 2.29 watts/M^2, and there is a roughly equal probability (~50% each) that the true value is above or below 2.29 watts/M^2. Assuming a normal (Gaussian) probability distribution (which may not be exactly right, but should be close… see central limit theorem), we get the following PDF for human forcing from SPM.5:
Figure 2 shows that the areas under the left and right halves of the probability distribution are identical, and the cumulative probability (area under the curve) crosses 0.5 (the median) at 2.29 watts/M^2, as we would expect. The short vertical red lines correspond to the 5% and 95% cumulative probabilities. The uncertainty represented by Fig 2 is in net human forcing, not in climate sensitivity. The shape of the corresponding climate sensitivity PDF is determined by the uncertainty in forcing (as shown in Fig 2), combined with how forcing translates into warming. When the forcing PDF is translated into a sensitivity PDF, the median value for that sensitivity PDF curve must correspond to 2.29 watts/M^2, because half the total forcing probability lies above 2.29 watts/M^2 and half below.
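For readers who want to reproduce something like Figure 2, here is a minimal sketch (Python, using numpy and scipy) that builds a Gaussian forcing PDF from the 2.29 watts/M^2 median and the 1.13 to 3.33 watts/M^2 5%–95% range quoted later in this post; the standard deviation is backed out of the 90% range, treating the slightly asymmetric published range as symmetric.

```python
import numpy as np
from scipy import stats

# AR5 best estimate and 5%-95% range for net human forcing in 2011 (watts/M^2).
F_MEDIAN = 2.29
F_LO, F_HI = 1.13, 3.33

# Back out one standard deviation from the width of the 90% range (1.645 sigma
# each side of the median); the AR5 range is slightly asymmetric, so this is
# only an approximation.
sigma = (F_HI - F_LO) / (2 * 1.645)      # ~0.67 watts/M^2

forcing = np.linspace(0.0, 5.0, 1001)    # forcing grid, watts/M^2
pdf = stats.norm.pdf(forcing, loc=F_MEDIAN, scale=sigma)

# The implied symmetric 5%-95% range is ~1.19 to ~3.39 watts/M^2, close to
# (but not exactly) the published 1.13 to 3.33 watts/M^2.
print(stats.norm.ppf([0.05, 0.95], loc=F_MEDIAN, scale=sigma))
```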
We can translate any net forcing value to a corresponding estimate of effective climate sensitivity via a simple heat balance if:
1) We know how much the average surface temperature has warmed since the start of significant human forcing.
2) We make an assumption about how much of that warming has been due to human forcing (as opposed to natural variation).
3) We know how much of the net forcing is being currently accumulated on Earth as added heat.
The Effective Sensitivity, in degrees per watt per sq meter, is given by:
ES = ΔT/(F – A)     (eq. 1)
Where ΔT is the current warming above pre-industrial caused by human forcing (degrees C)
F is the current human forcing (watts/M^2)
A is the current rate of heat accumulation, averaged over the Earth’s surface (watts/M^2)
For this post, I assume warming since the pre-industrial period is ~0.9 C based on the GISS LOTI index, and that 100% of that warming is due to human forcing. (That is, no ‘natural variation’ contributes significantly.)
Heat accumulation is mainly in the oceans, with small contributions from ice melt and warming of land surfaces. The top 2 Km of ocean (Figure 3) is accumulating heat at a rate of ~0.515 watt/M^2 averaged over the surface of the Earth, and ice melt of ~1 mm/year (globally averaged) adds ~0.01 watt/M^2. Heat accumulation below 2 Km ocean depth is likely small, but is not accurately known; I will assume an additional 10% of the 0-2 Km ocean accumulation, or 0.0515 watt/M^2. That comes to 0.577 watt/M^2. The heat accumulation in land surfaces is small, but difficult to quantify exactly; for purposes of this post I will assume 0.02 watt/M^2, bringing the total to just under 0.6 watt/M^2.
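The total above is just the sum of the assumed components; a trivial sketch for anyone checking the arithmetic:

```python
# Heat accumulation components, averaged over the Earth's surface (watts/M^2),
# using the assumptions stated in the text.
ocean_0_2km = 0.515                   # 0-2 Km ocean heat uptake
ice_melt    = 0.01                    # ~1 mm/year globally averaged ice melt
deep_ocean  = 0.10 * ocean_0_2km      # assumed: 10% of the 0-2 Km accumulation
land        = 0.02                    # assumed land surface warming

A = ocean_0_2km + ice_melt + deep_ocean + land
print(f"{A:.4f}")                     # 0.5965, i.e. just under 0.6 watt/M^2
```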
Climate sensitivity is usually expressed in degrees per doubling of carbon dioxide concentration (not degrees per watt/M^2), with an assumed incremental forcing of ~3.71 watt/M^2 per doubling of CO2, so I define the Effective Doubling Sensitivity (EDS) as:
EDS = 3.71 * ES = 3.71 * ΔT/(F – A)       (eq. 2)
If we plug in the IPCC AR5 best estimate for human forcing (2.29 watts/M^2 as of 2011), a ΔT of 0.9C and heat accumulation of 0.6 watt/M^2, the “best estimate” of EDS at the most probable forcing value is:
EDS = 3.71 * 0.9/(2.29 – 0.6) = 1.98 degrees per doubling    (eq. 3)
This is near the lower end of the IPCC’s canonical 1.5 to 4.5 C/doubling range from AR5, and is consistent with several published empirically based estimates. 2.29 watts/M^2 is the best estimate of forcing in 2011 from AR5, but there is considerable uncertainty, with a 5% to 95% range of 1.13 watts/M^2 to 3.33 watts/M^2. By substituting the forcing values from the PDF shown in Figure 2 into equation 2, we naively translate (AKA, wrongly translate) the forcing probability distribution to an EDS distribution, with probability on the y-axis and EDS on the x-axis. Please note that according to equation 2, as the forcing level F approaches the rate of heat uptake A, the calculated value for EDS approaches infinity. That is, any forcing level near or below 0.6 watt/M^2 is likely unphysical, since the Earth has never had a ‘thermal run-away’; we know the sensitivity is not infinite. The PDF shown in Figure 2 shows a finite probability of forcing at and below 0.6 watt/M^2, so if we are reasonably confident in the current rate of heat accumulation, then we are also reasonably confident the PDF in Figure 2 isn’t exactly correct, at least not in the low forcing range.
Figure 4 shows the result of the naive translation of the forcing PDF to a sensitivity PDF. To avoid division by zero (equation 2), I limited the minimum forcing value to 0.7 watt/M^2, corresponding to a sensitivity value of ~33C per doubling. There is much in Figure 4 to cast doubt on its accuracy. How can a sensitivity of 18C per doubling be 10% as likely as 2C per doubling? How can it be that 50% of the forcing probability, which lies above 2.29 watts/M^2, corresponds to a tiny area to the left of the vertical red line? Figure 4 seems to be an incorrect representation of the true sensitivity PDF.
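For concreteness, here is a minimal sketch of the naive translation that produces a Figure 4-like curve, assuming the Gaussian forcing PDF described above and the ΔT, A, and 0.7 watt/M^2 cutoff values stated in the text: equation 2 is evaluated at each forcing value, and the same probability heights are simply re-plotted against the resulting EDS values.

```python
import numpy as np
from scipy import stats

F_MEDIAN = 2.29                             # AR5 median forcing, watts/M^2
sigma = (3.33 - 1.13) / (2 * 1.645)         # sigma backed out of the 90% range
DT, A, F2X = 0.9, 0.6, 3.71                 # assumed warming (C), heat uptake, forcing per doubling

F = np.linspace(0.7, 5.0, 1001)             # cut off at 0.7 watts/M^2 to avoid F = A
pdf_F = stats.norm.pdf(F, loc=F_MEDIAN, scale=sigma)

eds = F2X * DT / (F - A)                    # equation 2, evaluated pointwise
naive_pdf = pdf_F / pdf_F.max()             # "naive" curve: same heights, re-plotted against EDS
```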
We expect from Figure 2 that there should be a 50:50 chance the true human forcing is higher or lower than ~2.29 watts/M^2, corresponding to ~2C per doubling sensitivity (shown by the red vertical line in Figure 4). Yet the area under the PDF curve to the left of the vertical line in Figure 4, corresponding to human forcing greater than 2.29 watts/M^2 (and so lower climate sensitivity), is very small compared to the area to the right of the vertical line, corresponding to human forcing below 2.29 watts/M^2 (and higher climate sensitivity). Why does this happen?
The problem is that the naive translation between forcing and sensitivity using equation 2 yields an x-axis (the sensitivity axis) which is “compressed” strongly at low sensitivity (that is, at high forcing) and “stretched” strongly at high sensitivity (that is, at low forcing). By “compressed” and “stretched” I mean relative to the original linear x-axis of forcing values. Compressing the x-axis at high forcing makes the area under the low sensitivity part of the curve smaller than it should be, while stretching the x-axis at low forcing makes the area under the high sensitivity part of the curve larger than it should be. The result is that relative areas under the forcing PDF are not preserved during the naive translation to a sensitivity PDF. The extent of “stretching/compressing” due to translation of the x-axis is proportional to the first derivative of the translation function:
‘Stretch/compress factor’ = d{1/(F-A)}/dF = -1/(F-A)^2                     (eq. 4)
The negative sign in eq. 4 just indicates that the ‘direction’ of the x-axis is switched by the translation (lower forcing => higher sensitivity, higher forcing => lower sensitivity). If we want to maintain equal areas under the curve above and below a sensitivity value of ~2C per doubling (that is, below and above the 2.29 watts/M^2 median forcing) in the sensitivity PDF, then we have to divide each probability value from the original forcing PDF by the magnitude of eq. 4, 1/(F-A)^2 (equivalently, multiply by (F-A)^2), at each point on the curve. That is, we need to adjust the ‘height’ of the naive PDF curve to ensure the areas under the curve above and below 2C per doubling are the same. For consistency of presentation, I renormalized based on the highest adjusted point on the curve (highest point = 1.000).
{Aside: In general, any similar translation of an x-y graph based on a mathematical function of the x-axis values will require the y values be divided by an adjustment function which is based on the first derivative of the translation function:
           ADJ_y(x) = dG(x)/dx                                          (eq. 5)
where ADJ_y(x) is the adjustment factor for the y value
            G(x) is the translation function.}
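In code, the adjustment amounts to multiplying each naive probability height by (F – A)^2, which is the same as dividing by the magnitude of eq. 4, and then renormalizing; a sketch, continuing from the snippet shown after the Figure 4 discussion:

```python
# Continuing from the snippet above: apply the change-of-variables adjustment.
# Divide the naive heights by |d{1/(F-A)}/dF| = 1/(F-A)^2, i.e. multiply by (F-A)^2.
adjusted_pdf = naive_pdf * (F - A) ** 2
adjusted_pdf /= adjusted_pdf.max()          # renormalize so the highest point = 1.000

# With these assumptions the peak lands near ~1.55-1.6 C per doubling; the exact
# location depends on the sigma assumed for the forcing PDF.
print(eds[np.argmax(adjusted_pdf)])
```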
The good news in Figure 5 is that the areas under the curve left and right of the vertical line are now the same, as we know they should be. But the peak in the curve is now at ~1.55C per doubling, corresponding to a forcing of 2.75 watts/M^2, rather than at ~2C per doubling, corresponding to the most probable forcing value of 2.29 watts/M^2 (from Figure 2). What is going on? To understand what is happening we need to recognize that the adjustment applied to maintain consistent relative areas under the curve is effectively taking into account how quickly (or slowly) the forcing changes for a corresponding change in sensitivity. To examine how much the original forcing value must change for a small change in sensitivity, let’s look at a change in sensitivity of ~0.2 at both the high and low ends of the sensitivity range (see the table below).
Sensitivity (C/doubling)    Corresponding forcing (watts/M^2)
1.5041                      2.82
1.7036                      2.56
Δ 0.2005                    Δ 0.26   =>  ~1.3 watts/M^2 per (degree/doubling)

4.512                       1.34
4.281                       1.38
Δ 0.231                     Δ 0.04   =>  ~0.173 watt/M^2 per (degree/doubling)
For the same incremental change in sensitivity, it takes a ~7.5 times greater change in forcing near a sensitivity of 1.6 than it does near a sensitivity of 4.4. A large change in forcing at a high forcing level corresponds to only a very small change in sensitivity, while a small change in forcing at a low forcing level corresponds to a large change in sensitivity. But the fundamental uncertainty is in forcing (Figure 2), so at low sensitivity (high forcing) a small change in sensitivity represents a large fraction of the total forcing probability. That is why the peak in the adjusted PDF for sensitivity shifts to a lower value; it must shift lower to maintain fidelity with the fundamental uncertainty function, which is in forcing.
If you have doubt that the adjustment used to generate Figure 5 is correct, consider a blind climate scientist throwing darts (randomly) towards a large image of Figure 2. The question is: What fraction of the darts will hit left of 2.29 watts/M^2 and below the probability curve and what fraction will hit to the right of 2.29 watts/M^2 and below the probability curve? If the climate scientist is truly blind (throwing at random, both up-down and left-right), the two fractions will be identical.
If enough darts are thrown, and we calculate the corresponding sensitivity value for each dart which falls between the baseline and the forcing probability curve, we can count the number of darts which hit narrow sensitivity ranges equal distances apart (equal width bins), and construct a Monte Carlo version of Figure 5, keeping in mind that uniform bin widths in sensitivity correspond to very non-uniform bin widths in forcing. The blind climate scientist throws darts at wider bins (each corresponding to equally spaced sensitivity values) on the high side of the forcing range than on the low side. The most probable bin to hit, corresponding to the peak on the sensitivity PDF graph, will be the bin with the greatest total area on the forcing graph (that is, where the height of the probability curve times the forcing bin width is at its maximum). If many bins are used, and enough darts thrown, the Monte Carlo version will be identical in appearance to Figure 5.
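Here is a minimal Monte Carlo sketch of the dart-throwing argument, using the same assumed median, sigma, ΔT, A, and cutoff as in the earlier snippets: sample forcing values from the Figure 2 Gaussian, convert each sample to an EDS with equation 2, and histogram the results in equal-width sensitivity bins.

```python
import numpy as np

rng = np.random.default_rng(0)
F_MEDIAN = 2.29
sigma = (3.33 - 1.13) / (2 * 1.645)                   # same forcing PDF as before
DT, A, F2X = 0.9, 0.6, 3.71                           # same assumptions as before

# Throw "darts": sample forcings from the Figure 2 Gaussian, discard those at or
# below the 0.7 watts/M^2 cutoff used for Figure 4, and convert each to an EDS.
darts = rng.normal(F_MEDIAN, sigma, size=1_000_000)
darts = darts[darts > 0.7]
eds_samples = F2X * DT / (darts - A)                  # equation 2 applied to each dart

# Histogram in equal-width sensitivity bins; the shape should reproduce Figure 5.
counts, edges = np.histogram(eds_samples, bins=np.arange(0.5, 10.0, 0.1))
print(edges[np.argmax(counts)])                       # most populated bin, near ~1.5-1.6 C per doubling
print(np.median(eds_samples))                         # ~2 C per doubling
```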
Comments And Observations
Based on the AR5 human forcing estimates and a simple heat balance calculation of climate sensitivity, the median effective doubling sensitivity (EDS) is ~2C, and the most probable EDS is ~1.55C. There is a 5% chance that the EDS is less than ~1.25C and a 5% chance that it is more than ~6.3C. These values are based on the additional assumptions:
1) The PDF in forcing is approximately Gaussian. This seems likely based on the uncertainty ranges shown for each of the many individual forcings shown in SPM.5 and the central limit theorem.
2) All warming since pre-industrial times has been due to human forcing. If 0.1C of the long term warming is due to natural variation, then the median value for sensitivity falls to ~1.76C per doubling. If there is a long term underlying natural cooling trend of 0.1C, which partially offsets warming, then the median sensitivity increases to ~2.2C per doubling.
3) Total current heat accumulation as of 2011 was ~0.6 watt/M^2 averaged globally (including ocean warming, ice melt, and land warming).
If any of the above assumptions are incorrect, then the calculations here would have to be modified.
Relationship of EDS to Equilibrium Sensitivity
Earth’s equilibrium climate sensitivity is linear or nearly linear in equilibrium temperature response to forcing in the “forcing domain”, which is the same as saying EDS is a good approximation for Earth’s equilibrium climate sensitivity to a doubling of CO2. This seems to be a reasonable expectation over modest temperature changes. There is some disagreement between different climate modeling groups (and others) about long term apparent non-linearity. For further insight, see for example: http://rankexploits.com/musings/2013/observation-vs-model-bringing-heavy-armour-into-the-war/.
Impact of Uncertainty in Forcing
The width of the AR5 forcing PDF means that calculated sensitivity values for the low forcing “tail” of the distribution reach implausibly high levels; e.g., a 2.5% chance of EDS over 10C, which seems inconsistent with the relative stability of Earth’s climate in spite of large changes in atmospheric carbon dioxide in the geological past. I think the people who prepared the AR5 estimates of forcing would have been well served to consider the plausibility of extreme climate sensitivity; a slightly narrower uncertainty range, especially at low forcing, seems more consistent with the long term stability of Earth’s climate.
A reasonable question is: How would the sensitivity PDF change if the forcing PDF were narrower? In other words, if it were possible to narrow the uncertainty in forcing, how would that impact the sensitivity PDF? Figure 6 shows the calculated sensitivity PDF with a 33% reduction in standard deviation in total forcing uncertainty, but the same median forcing value (2.29 watts/M^2).
The peak sensitivity is now at 1.72C per doubling (versus 1.55C per doubling with the AR5 forcing PDF), while there is now ~5% chance the true sensitivity lies above 3.6C per doubling, indicated by the vertical green line in Figure 6 (versus 6.3C per doubling with the AR5 forcing PDF). Any narrowing of uncertainty at any specific forcing estimate will lead to a relatively large reduction in the estimated chance of very high sensitivity, and a modest increase in the most probable sensitivity value. Since most of the uncertainty in forcing is due to uncertainty in aerosol effects (direct and indirect), it seems prudent to concentrate on a better definition of aerosol influence to improve the accuracy of empirical estimates of climate sensitivity; replacing and launching the failed Glory satellite (global aerosol measurements) would be a step in that direction.
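The Figure 6 experiment is the same Monte Carlo calculation with the forcing standard deviation cut by a third; a sketch under the same assumptions as the earlier snippets:

```python
import numpy as np

rng = np.random.default_rng(0)
F_MEDIAN = 2.29
sigma = (3.33 - 1.13) / (2 * 1.645)
DT, A, F2X = 0.9, 0.6, 3.71

# Same median forcing, standard deviation reduced by a third (the Figure 6 experiment).
narrow = rng.normal(F_MEDIAN, 0.67 * sigma, size=1_000_000)
narrow = narrow[narrow > A]                 # drop the (now very rare) unphysical samples
eds_narrow = F2X * DT / (narrow - A)

print(np.percentile(eds_narrow, 95))        # roughly 3.5-3.7 C per doubling, in line with Figure 6
print(np.median(eds_narrow))                # still ~2 C per doubling
```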
Finally, there is some (smaller) uncertainty in actual temperature rise over pre-industrial and in heat accumulation. Adding these uncertainties will broaden the final sensitivity PDF, but the issues are the same: the dominant uncertainties are in forcing, and especially in aerosol effects. Any broadening of the forcing PDF leads to an ever more skewed sensitivity PDF.
Note:
I am not interested in discussing the validity of the GISS LOTI history, nor anything having to do with radiative forcing violating the Second Law of thermodynamics (nor how awful are Hillary Clinton and Donald Trump, or any other irrelevant topic). The objective here is to reduce confusion about how uncertainty in forcing translates into a PDF in climate sensitivity.






Prepare to be unamazed by elementary cost-benefit and business economics.
Outstanding timing SteveF!
Now I’ll shut up and read it. 🙂
Thanks for this. I’ve been studying (frequentist, I believe it’s called) stats. I’ll have to spend some more time with this, but I appreciate the detailed explanation.
Nice post SteveF. This actually makes sense to me, at least up through the translation function. I’m still chewing on the rest.
Thanks so much!
SteveF said:
1) The PDF in forcing is approximately Gaussian. This seems likely based on the uncertainty ranges shown for each of the many individual forcings shown in SPM.5 and the central limit theorem.
*****
I need to look at SPM.5. However, I’m a bit confused about your reference to the central limit theorem as motivation for the assumption that the forcing PDF is normal.
If I understand correctly, even if the real forcing PD took the form of a three-humped camel, the application of the CLT would result in a normal distribution if enough means of samples of the real distribution were taken and plotted.
So how does the CLT affect your judgement of the shape of the real forcing PDF?
jim2,
The CLT says that when an average (or composite) of many statistically independent variables is formed, which is what the net forcing is, the average (or composite) will tend to be normally distributed, even if the individual variables are not all normally distributed. Of course, the individual forcings (from SPM.5) don’t look all that far from normally distributed individually, so the composite is probably pretty close to normal. Slight variation from normal doesn’t change much in this case; the PDF in sensitivity still represents a transformation of the PDF in forcing, even if it is not perfectly normal.
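A quick numerical illustration of this point (the component distributions below are arbitrary choices, purely for illustration): summing several independent, individually non-Gaussian components gives a total whose skewness and excess kurtosis are much closer to the Gaussian values of zero than those of any individual component.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000

# Several independent, individually non-Gaussian "components" (arbitrary units).
components = [
    rng.uniform(-1.0, 1.0, n),              # flat
    rng.triangular(-1.0, 0.5, 1.0, n),      # skewed
    rng.exponential(0.3, n) - 0.3,          # long right tail
    rng.uniform(0.0, 2.0, n),               # flat, offset
]
total = sum(components)

# Skewness and excess kurtosis of the sum are much closer to the Gaussian
# values of 0 than those of the individual components.
print(stats.skew(total), stats.kurtosis(total))
print(stats.skew(components[2]), stats.kurtosis(components[2]))   # e.g. the exponential piece
```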
“Earth’s equilibrium climate sensitivity is linear or nearly linear in equilibrium temperature response to forcing in the “forcing domain”, which is the same as saying EDS is a good approximation for Earth’s equilibrium climate sensitivity to a doubling of CO2. This seems to be a reasonable expectation over modest temperature changes.”
I’m not so sure. I have the feeling that you are following AR3, Ch 9 here. What they say is:
“The effective sensitivity becomes the equilibrium sensitivity under equilibrium conditions with 2xCO2 forcing. The effective climate sensitivity is a measure of the strength of the feedbacks at a particular time and it may vary with forcing history and climate state.”
What they don’t say is that the effective CS (EfCS) stays the same as you approach equilibrium. So I don’t think you can identify current effective CS with equilibrium CS (EqCS).
An analogy to your formula is passing a current (forcing) into an earthed R1 and C in parallel. EqCS is the DC resistance. R1 is estimated by dividing V (T) by net current in (F – CdV/dt). So far so good – should work.
But what if the forcing passes through another resistor R2 before reaching R1//C, and you start the forcing with no charge on C? Then initially, your formula, with current into C subtracted, will return EfCS=R2. This will increase over time to the DC (EqCS) value of R1+R2.
The AR5 is definite on this. The Glossary (Annex III) says, under Climate Sensitivity:
“The effective climate sensitivity (units: °C) is an estimate of the global mean surface temperature response to doubled carbon dioxide concentration that is evaluated from model output or observations for evolving non-equilibrium conditions. It is a measure of the strengths of the climate feedbacks at a particular time and may vary with forcing history and climate state, and therefore may differ from equilibrium climate sensitivity.”
Nick Stokes,
Of course the two may differ by some amount (I referenced a discussion of Armour et al… maybe you didn’t read that part). The effective climate sensitivity is a reasonable proxy for equilibrium sensitivity. As for AR5 being ‘definite’ on this:
“and therefore may differ from equilibrium climate sensitivity.” (emphasis is mine). Seems to me they are just invoking the ever present uncertainty of climate science. Not sure how ‘may’ becomes somehow ‘definite’.
Nick Stokes,
Thanks for this and for the discussion. I’d [strike always] [occasionally] wondered what the heck the difference between effective and equilibrium sensitivity was but never bothered to go track it down.
Thank you for this very clear explanation. As always, if my questions invoke a term of art, assume naive usage.
Nick’s observation seems to me to suggest that your explanation assumes no unknown actors on the stage, nor actors whose effect shows up for first time as temperature converges on equilibrium. Is that what you think he’s saying?
I also read that warming for this explanation is entirely human-perpetrated, and that the underlying natural change, if any, could be warming, none, or cooling, and then the sensitivity number would change. Nick mentioned underlying cooling in a note on WUWT a week ago.
Does it make sense that the change can be driven by an ensemble of warming by CO2 increase, cooling by associated particulates (aerosols), and other human generated drivers, AND natural warming or cooling and that we, so far, have no way to isolate and quantify each of these drivers. Was that what the study of efficacies was attempting?
Again, thank you for writing so clearly. I’m reminded of the story of Napoleon and the Hessian general.
Recapping
” a heat balance based empirical estimate of climate sensitivity can be translated to climate sensitivity.”
“At equilibrium, a change in radiative forcing will cause a change in temperature which is approximately proportional to the sensitivity of the system to radiative forcing.”
–
ES = ΔT/(F – A) = 0.9/(2.29 – 0.6) = 0.5325
Is this lower than what most people quote?
–
ΔT of 0.9C
[“I assume warming since the pre-industrial period is ~0.9 C based on the GISS LOTI index, and that 100% of that warming is due to human forcing.”].
ok.
“Heat accumulation is just under 0.6 watt/M^2.”
Does that correspond to an actual ocean temperature increase of, say, 0.001 degrees Centigrade? Would you have the real figure available? I would presume the ocean temperature could not have gone up more than 0.1 degree Centigrade, and all estimates of ocean heat content carry such a large degree of uncertainty that one could not be 95% confident that the ocean heat content has not actually dropped.
–
“The best estimate for 2011 is 2.29 watts/M^2.”
ok.
Thank you for providing the formula and assumptions that most people should and would use.
I am not sure of the way the probability distributions work to give such a fat tail. If it does, and it is implausible, then the formula leading to it might be faulty?
–
SPM.5, listing natural and anthropogenic forcings, seems to miss out on H2O, which is the main but variable greenhouse gas and whose presence in different concentrations could prevent well mixed gases at lower, important levels. This could give much higher natural variability and might impinge on the ability to calculate the true EDS.
jferguson,
Nick is saying that there are some people who suggest true equilibrium sensitivity is higher than empirically derived sensitivity due to how feedbacks change at higher temperatures, in particular due to different rates of warming at different latitudes. This mostly comes from climate model projections of how temperatures approach equilibrium. Seems the differences are usually modest, and some models don’t show much effect at all.
.
Of course, considering the divergence of model projections from reality, using models to cast doubt on empirical determinations is for me an eye-roller. The recent Marvel et al paper is a little different, invoking low “efficacies” for certain forcings and high for others, to justify a relatively low net measured warming relative to forcing…. and so defending the honor of models which are calculating much higher sensitivity relative to empirical estimates. Once again, model divergence from reality makes this a dubious argument in my mind.
SteveF,
As I remember, it wasn’t different warming rates at different latitudes that caused non-linearity, but something to do with clouds. A lot of the models have clouds as a positive feedback. Or they hand wave and say some types of clouds have a positive feedback and those are the types that will increase in a warmer climate, but not right away.
Since models don’t do clouds particularly well, I don’t buy that.
SteveF
If the PDF represents our understanding of forcing, it seems to me that the PDF in forcing is not Gaussian in the tails. We are relatively certain it is not zero. So it certainly dies off abruptly at 0. But likewise, it dies off rapidly at very high forcings. We really aren’t uncertain about whether forcing has been 1000 W/m^2; we are just as certain we haven’t done that as we are certain it’s not less than 0 W/m^2. So, in some sense, truncating the PDF on both the very high and the below zero end would make some sense.
The difficulty is, of course, precisely where to truncate on the high end and how to do it at the low end. Zero is such a firm boundary it’s easier to propose that as a place to break – if breaking is even what you want to do. But sadly, while it is difficult to decree an upper range of forcing for truncation, the fact that it is not done dramatically distorts the high end of the sensitivity PDF estimate using the “naive” method you describe above.
I’m not sure if just shifting is the most elegant way to adjust for this problem. But it’s simple and at least fits the level of the other simple features.
Possibly, a pdf for forcing with a sufficiently “thin tail” on the high and low ends might fix the problem. (But of course “some” would be grumpified by that choice. That said: unlike the case where our uncertainty in something is due to a set of uncertainties each of which is itself Gaussian, there is nothing truly magic about Gaussian uncertainty. Some things do have fatter or thinner tails. Our uncertainty in forcing almost certainly does have a truncation on the low end and a thin tail on the high end.)
SteveF, thanks for a well written explanation. I realize your main focus is on uncertainty, so I apologize for being a bit off track here.
We’ve had the discussion about transient versus equilibrium climate sensitivity before, so you probably remember my thoughts—my intuition is you need to measure the system for a period comparable to the known relaxation time of the system before you could accurately measure the equilibrium climate sensitivity.
So I’m basically agreeing with Nick and I think his analogy is a good one. My guess is, for your median value of 2, you’ll need to multiply your distribution by a factor of 1.55 to get to the equivalent ECS.
(This is just a bit more than a SWAG, I’m using this model to estimate the correction.)
I’ll point out this blog post by Isaac Held, which discusses the issue differently, but I think generally points to the same conclusion.
As we’ve also covered before, the time scale for true equilibrium climate sensitivity to be realized is something on the order of 2000 years. It’s not very likely that the CO2 transient associated with humans burning of fossil fuel will be long enough to fully achieve that amount of increase.
Anyway, in practice, if you wanted to predict the temperature change associated with a CO2 pulse that has a duration of a few hundred years, TCS is probably a better starting point.
Lucia,
“If the PDF represents our understanding of forcing, it seems to me that the PDF in forcing is not Gaussian in the tails. We are relatively certain it is not zero. So it certainly dies off abruptly at 0. But likewise, it dies off rapidly at very high forcings. We really aren’t uncertain about whether forcing has been 1000 W/m^2; we are just as certain we haven’t done that as we are certain it’s not less than 0 W/m^2. So, in some sense, truncating the PDF on both the very high and the below zero end would make some sense.”
.
Yes, the forcing PDF can’t logically extend to any value close to the current rate of heat uptake, or else the calculated sensitivity range extends to infinity, which we know is not correct. So it is absolutely required to truncate the forcing PDF in the low forcing range. There is less need to do so on the high end of the forcing range, because the high end forcing does not lead to un-physical sensitivity values (or at least not very un-physical values). This is because the sensitivity becomes ever less responsive to increases in forcing at high forcing levels. If you were to simply cut off the forcing distribution above 3.4 watts/M^2 (set the probability of forcing above 3.4 watts to zero), it would eliminate only the probability of the true sensitivity value being between ~1C and ~1.25 C per doubling. Do the same thing on the low forcing side, and you eliminate all possible sensitivities above ~6.3C per doubling… you eliminate all the very un-physical values for sensitivity. So yes, I agree that it makes sense to use a non-Gaussian forcing distribution in both tails of the forcing curve; it just doesn’t make much practical difference at the high end of the forcing range. Nobody cares much if the sensitivity is 1.25 C per doubling instead of 1.1C per doubling.
lucia:
My guess is if you looked at all phenomena predicted by a given climate model, you’d find gross inconsistencies between observations and models with ECS’s above about 4.5 °C/doubling.
In other words, people focus too much just on the global mean temperature metric, when trying to validate the models. One other metric to look at (which is again covered ground) would be ENSO.
Maybe it’s okay to have really fat tails for the ECS PDF, as long as you are only explicitly considering the mean temperature course over the last 130 years. Never mind that some higher ECS models might have the surface of the Earth melting in 50 years (that’s a bit of an exaggeration).
If the focus of the analysis is to get the “correct” PDF given just the comparison to global mean temperature, it’s likely that otherwise absurd models could still be admitted as solutions.
The real question is, “what does the joint PDF look like?”, once you’ve incorporated other prior knowledge, such as the fact that the Earth has never gone into runaway greenhouse effect scenarios, or what happens once you’ve rejected models that have clearly inconsistent ENSO phenomenology.
My guess is you end up with a much shorter tail, with a definite cutoff at some higher value of ECS. (There must be an ECS value, above which the Earth would have already gone into catastrophic and irreversible runaway global warming… )
I remember reading Lubos write once of quantum mechanics that you can’t just truncate the probability distributions or you’ll violate unitarity. You need to include the “absurd” improbable probabilities to get accurate average answers.
Carrick,
The value I am calculating is not an estimate of TCR, because heat uptake is included in the calculation. The possible difference between the empirically calculated value and the equilibrium value that Nick Stokes refers to is in fact not large, at least not anything like 1.55. That value is a typical ratio between equilibrium sensitivity and TCR in GCM’s. If you ignore the heat uptake in the empirical calculation, the resulting estimate is the ‘transient sensitivity’ (or maybe ‘pseudo-transient’ is better); the pseudo-transient value based on AR5 is 3.71 * (0.9/2.29) = 1.45C. That makes the effective sensitivity/pseudo-transient sensitivity ratio 2.0/1.45 = 1.38. This is lower than the ratio in GCM’s (about 1.55 like you said), but that is expected, because the equilibrium/transient ratio is closely related to the overall sensitivity.
Nic Lewis has said recently that in the GCM’s he has looked at, the average equilibrium sensitivity value is about 10% higher than the effective value you would calculate from the GCM behavior, using the above heat balance method.
SteveF:
Thanks, that makes sense.
For your environmental climate sensitivity of 1.45C, I get a multiplicative factor of 1.4±0.3 with my model, which translates into a central value for ECS between 1.6-2.5°C/doubling.
It’s surprising to me that the value is this close. It’d be nice to see the values for the full (or a good subset of the) CMIP ensemble.
If you measure the response of a system over a finite period of time—especially if the measurements are very noisy—I still don’t see how you can expect to be reliably measuring the long term response of the system. That goes against my experience with inverse solutions (which typically are problem spaces where there is a lot of model and measurement uncertainty).
But perhaps this is a case of my intuition based on prior experience failing.
Andrew_FL:
That’s probably true, but with classical systems, you can often set hard upper limits. E.g., we can say that the maximum stable lapse rate is the dry adiabatic one.
My guess, if you used the known paleoclimate forcing history, for a sufficiently high or low ECS climate model, you’ll end up with a very alien looking modern Earth. Either “Venus-Earth” or “slushy ice ball”.
I suspect Nic only looked at those GCMs with ECS closer to the lower 2C median to arrive at his 10% ratio while eliminating models with higher ECS as unreasonable, but it is true for the GISS E2-R and E2-H models. My notes for this paper are as follows, excluding the CH4 feedbacks:
GISS E2-R
ECS (Andrews et al.)    ECS (actual)
2.1 (NINT)              2.3
2.2 (TCAD)              2.3
2.4 (TCADI)             2.4

GISS E2-H
2.3 (NINT)              2.5
2.3 (TCAD)              2.4
2.5 (TCADI)             2.5
The first column is explained as follows:
Looks like there is a paper by Miller et al. showing the TCR/ECS ratio for GCMs.
SteveF, I would call your post a very good teaching moment. Sometimes when we have a post related to empirical ECS or TCR estimates there is not the space to explain the more basic details as you have done in this post. I see that you have avoided the quick criticism that some might pose about the estimation of ECS (or a close facsimile) by clearly stating the assumptions you have made in your calculations. Nevertheless there are some daunting uncertainties in these calculations which I have no problem with as long as an attempt is made to estimate the uncertainties. In fact I think the work being performed to obtain a reasonable empirical estimate of ECS/TCR is critical in judging the climate modeling estimates.
If there is an argument about the linearity of the ECS (the AR5 rendition of calculating ECS involved extrapolating a linear regression of net TOA radiation versus change in surface air temperature to 0 net TOA radiation) I would think it would be prudent to run some climate models for at least a few thousand years. Just as with climate models that have only a single run or a few runs, whereas clearly defining an individual model’s natural variation requires numerous runs, the cry is always the expense of making extended runs, whether in time or in number. We appear to be in a much bigger hurry to expend huge funds on a problem than to spend much less determining how big that problem actually is.
TCR appears a more viable measure to both model and use for policy than ECS. Can the models tell us with any degree of assurance where the heat increase going to the oceans will reside and how long it will take to appear in the atmosphere?
A bit off topic but the attempts to determine ECS uncertainty in AR4/AR5 should be discussed at every turn of discussions on ECS. “Most of the observational instrumental-period warming based ECS estimates cited in AR5 use a ‘Subjective Bayesian’ statistical approach.iv The starting position of many of them – their prior – is that all climate sensitivities are, over a very wide range, equally likely. In Bayesian terminology, they start from a ‘uniform prior’ in ECS. All climate sensitivity estimates shown in the AR4 report were stated to be on a uniform-in-ECS prior basis. So are many cited in AR5. Use of uniform-in-ECS priors biases estimates upwards, usually substantially. When, as is the case for ECS, the parameter involved has a substantially non-linear relationship with the observational data from which it is being estimated, a uniform prior generally prevents the estimate fairly reflecting the data. The largest effect of uniform priors is on the upper uncertainty bounds for ECS, which are greatly inflated.”
http://www.climatedialogue.org/wp-content/uploads/2014/05/Nic-Lewis-guest-blog-def1.pdf
Nic Lewis has a recent post on ECS/TCR updating at Climate etc.
SteveF,
What you’re talking about is the change of variables formula for transforming a PDF, correct?
I think I actually get this now. I don’t follow all of the other subsequent observations you make, but maybe if I play with this for a while the rest of what you’re saying will eventually make sense.
mark bofill,
I don’t have time to read your link in detail, but it looks right. I don’t know any formal name for the transformation process, because I figured it out many years ago when faced with a “transformation of x-axis” problem (long before the days of easy internet searches). I do know that transformation of the x-axis causes lots of confusion….. until you ‘see’ the need for preserving relative areas under the curve.
Dang [strike dude]. I’d have never worked that out for myself. Kudos.
[Edit: My apologies for calling you ‘dude’. Chalk it up to incredulity.]
mark bofill,
Necessity is the mother of invention. If you really need to do something, turns out you very often can.
It’s surprising to me that the value is this close. It’d be nice to see the values for the full (or a good subset of the) CMIP ensemble.
chapter 9 Ar5
SteveF,
“Seems to me they are just invoking the ever present uncertainty of climate science.”
No, they are saying that there is no reason to expect EfCS to stay the same as equilibrium is approached. And so give no basis for your claim that “The effective climate sensitivity is a reasonable proxy for equilibrium sensitivity.”. I think you need to substantiate that claim.
I think it won’t be. Sensitivity is the measure of what has to happen for the net flux (your F-A) to exit to space. Something has to get warmer. But heat is radiated from TOA, from the surface (via atmospheric window) and from intermediate levels of the atmosphere. Why is it OK to relate it to just one temperature (surface)?
The reason is that at equilibrium, those various temperatures are linked. The enforcement is that if they get out of line, a corrective flux will occur. The surface has to warm; it can’t just be high atmosphere, for example.
But in the transient case, there is such a flux – your A. That passes from some level in the atmosphere where IR is absorbed by GHG, down to the ocean. And it passes via various modes of transmission, some involving a temperature gradient. So temporarily, the upper air is warmer relative to surface. As A diminishes this temperature difference fades, and the surface then warms. This process is what I was getting at with the RC analogy.
Nick Stokes,
“No, they are saying that there is no reason to expect EfCS to stay the same as equilibrium is approached. ”
.
I must admit that you are at least predictable: anything which suggests sensitivity is unlikely to be high is something you do your best to criticize and discount; anything which suggests sensitivity is high to very high, you do your best to support. Comically transparent bias seems to me the best explanation.
.
Putting that aside, who said it would stay the same as equilibrium is approached? Not me. But the effective sensitivity is still a reasonable proxy for equilibrium sensitivity, especially considering that the approach to equilibrium takes well over 200 years, by which time atmospheric CO2 will certainly have long since been falling…. so the Earth’s climate will NEVER approach equilibrium very closely. As many have noted before, from a policy perspective, transient sensitivity is actually a more important parameter. The quibbling about the difference between effective and equilibrium sensitivity is small potatoes (as they say in Queens, New York), since equilibrium is never going to happen.
.
Yes, there are plenty of climate models which say the equilibrium sensitivity is higher than the effective sensitivity, but others which say the two values are not very far apart (GISS, for example). A more important observation is that the difference between empirical estimates (Otto et al, Lewis, Lewis & Curry, and several others) and the mean GCM calculated sensitivity is far larger than can be reasonably explained by the difference between effective sensitivity and equilibrium sensitivity.
.
And finally, the continuing divergence between GCM projections and reality suggests that the models are simply too sensitive to forcing; the higher the sensitivity of the model, the greater the chance it will diagnose a big difference between effective and equilibrium sensitivity. I would not hang my hat on models which don’t match up with reality very well if I were you. IMO, in 20 years the question will be moot, because most of the model projections will be so far from reality that they won’t be at all credible.
Nick,
I think your analogy gets at a part of the story, that is, the equilibration being a slow process. But as explained here, you’d have to go to multiple time constants as opposed to a single one in your analogy to arrive at the apparent non-linearity due to spatial variations for an underlying dynamics that is still linear.
The way I understand it is this – whether you say v = i0*R2 or v = i0*R1 + i0*R2*(1 - exp(-t/tau)), if R1, R2, tau were the same in every region, the global mean would have a linear relationship with i0. It is the spatial variation in R1, R2 and the taus that gives a non-linearity for the relationship of the global mean to the radiative imbalance. On the other hand, I believe Nic and SteveF are arguing that tau is small (fast equilibration), so that when you globally average over a century, you get very close to a linear relationship for the global mean. I understand that to be the underlying assumption behind a linear fit as described by Gavin.
Carrick,
I noted earlier that “the pseudo-transient value based on AR5 is 3.71 * (0.9/2.29) = 1.45C.” I failed to add one important observation. The reason I called it “pseudo-transient” is because transient response is defined (as you are probably aware) as the surface warming after 70 years of linearly increasing CO2 forcing from a 1% per year increase in CO2 concentration, or equivalently a (3.71 watts/M^2)/70 = 0.053 watt/M^2 per year increase. The real Earth has been subjected to an increase in GHG forcing (net of aerosol effects) of ~2.29 watts/M^2 (from AR5) over a period of >70 years, so the rate of increase in forcing has averaged somewhere less than 2.29/70 = 0.033 watt/M^2 per year. So, the Earth has had more time to respond to the increased forcing than had the rate of increase been 60% higher. So, the pseudo-transient value I calculated is very likely to be somewhat higher than the correct transient response, and the ratio between effective sensitivity and pseudo-transient response I calculated (1.38) is almost certainly somewhat lower than the correct value.
RB,
“I believe Nic and SteveF are arguing that the tau is small (fast equilibration)”
.
I never specifically made that argument, but it is certainly correct that lower sensitivity is consistent with faster equilibration and a smaller ratio between transient response and equilibrium (and effective) sensitivity. See Carrick’s graph here: https://dl.dropboxusercontent.com/u/4520911/Climate/GCM/winton2010-ratio-vs-teq-v2.pdf
Right. And if I recall, Nic makes an allowance for ‘warming still in the pipeline’ to calculate his ECS. For reasons explained here, the discussion of ECS seems relevant mostly in the context of emissions never going to zero in the future. But it would also imply that future emissions only need to compensate for the atmospheric CO2 decay rate to eventually achieve ECS – which then leads to the other discussion of decay rates.
RB,
“I think your analogy gets at a part of the story, that is – the equilibration being a slow process.”
It’s more trying to illustrate that if you set up a quasi-steady process, with a flow A running through, you can get an EfCS, but the flow A through the system has an effect that won’t be there at equilibrium. Incidentally, I don’t think any of that is really non-linear.
I’m not saying that EfCS is necessarily a bad number to think about. It may be more relevant to what will happen to us than EqCS. I’m just saying that they aren’t the same, and identifying the two can lead to error. EqCS is conceptually easy, and is rightly regarded as something that should be discounted (scaled down) for practical use. It would be inappropriate to discount EfCS in the same way.
RB,
Warming in the pipeline has two parts:
1) heat accumulation in the ocean (and smaller sinks)
2) aerosol offsets associated with combustion of fossil fuels
.
Were all aerosol emissions to stop tomorrow, and atmospheric GHGs to remain constant (neither going to happen!) then warming would increase first from the rapid loss of most all man-made aerosols (over a few weeks) and then more slowly as accumulation of heat asymptotically approached zero. The best estimate of the additional ‘net forcing’ from these totals about 1.4 watts/M^2, based on AR5. At a sensitivity of 0.53 C/(watt/M^2), that is about 0.75C additional warming ‘in the pipeline’. Of course, stopping fossil fuel use is inconsistent with constant GHG forcing, so that 0.75C is not ‘real’. How much warming is actually ‘in the pipeline’ depends on a number of assumptions.
.
Nic Lewis certainly takes heat accumulation into account (just as I did for this post), but I don’t think he considers aerosols except as how they influence the current net forcing.
angech,
“ES = ΔT/(F – A) = = 0.9/(2.29 – 0.6) = 0.5325
Is this lower than what most people quote?”
The mean value from GCM’s is much higher, about 0.86 C/(watt/M^2). 0.5325 is a little higher than some other empirical estimates, at least in part because I used the GISS LOTI warming instead of Hadley warming.
.
“Does that correspond to an actual ocean temperature increase of say 0.001 degrees Centigrade?”
It doesn’t correspond to a specific warming value. The surface layer (the ‘well mixed layer’), where a stable surface layer exists, has increased by a large fraction of a degree. Below 2 Km the warming is minuscule. Warming varies in between, but most of the accumulation has been in the top 700 meters. The profile of warming over the entire surface and over all depths is where the estimated heat accumulation value comes from.
James’ Empty Blog has been down a path similar to this already.
http://julesandjames.blogspot.com.au/2013/02/a-sensitive-matter.html
He arrives at 3C up to 4.5C and has published a paper on it. 3C is still enough to cause serious problems.
Bugs,
In that comment when asked if he still thinks it’s 3K James writes
Lucia,
Reality usually insists it is right at some point. Pollyannaish or not, it seems James sees approaching reality pretty clearly; some folks appear a lot more near-sighted, for reasons which I honestly don’t understand.
SteveF
In the post bugs linked, Annan linked to his post discussing the paper. This is what James has to say about the 4.5C number in his own paper.
http://julesandjames.blogspot.jp/2006/03/climate-sensitivity-is-3c.html
In comments he writes
So bugs’ claim that “[James] arrives at 3C up to 4.5C and has published a paper on it” is rather misleading. Or at least bugs’ numbers don’t line up with what James actually says.
As Steve Mosher noted, the AR5 chapter 9 has this to say about actual model ECS vs the ECS calculated using heat balance methods.
Hope ultra high sensitivity is true. Bring on the deluge!
lucia:
It seems to be worse than that to me.
Not only does bugs completely screw up the actual interval for ECS quoted by James Annan, but James is also admitting there is referee pressure to “highball” the ECS range. That’s pretty much cutting bugs’ argument off at the knees.
My take home from this is that James is conceding that there is institutionalized bias towards higher ECS values. The existence of such a bias is hardly a shock to most of us on this blog.
So, more likely than not, there’s a relationship between the fact that climate models are running too hot and an institutional bias towards higher ECS values.
While there aren’t specific control knobs for the various feedbacks, it’s widely admitted within the climate modeling community that the climate models can be tuned.
A similar problem occurs in physics in the measurement of fundamental constants. There’s a strong tendency for new measurements to fall closer to prior ones than can be explained by the admitted uncertainty of the measurements.
Humans have bias, and bias influences outcomes. Some researchers introduce an offset to their final computed value in their software (this offset isn’t known until after they’ve finished with their measurements) in order to prevent them from “fishing for errors” that move their answer closer to the expected one.
Carrick
Absolutely. If you read the post bugs linked, James Annan makes it very clear there are people who take pains to promulgate and maintain high ball estimates. Mind you, he thinks some push the low ball side too. But I’d say the drift is pretty much that the IPCC is being “conservative” in the sense of sticking to high estimates even as evidence indicates the high ball end can be ruled out.
I remember James Annan’s blog entry ‘a sensitive matter’ here where he talked about IPCC resistance to his criticism of using uniform priors:
Which brings us round full circle to the discussion that caused SteveF to write this post in the first place:
[Edit: It’s funny, on the one hand there’s a ‘97% consensus’. On the other, a respected scientist intimate with the process speaks of a ‘private opinion poll’. But I’m sure John Cook and the rest of the PR hacks are totally right and scientists like James Annan are totally mistaken. 🙂 ]
I have to go back to proposal writing, so some other quick comments:
Thanks again SteveF for your comments. I have a much better picture now of the distinctions between EfCS and TCR (I had been erroneously conflating them).
RB & Steven Mosher—I know I wasn’t making myself clear, but when I said “It’d be nice to see the values for the full (or a good subset of the) CMIP ensemble,” I was referring to Nic’s methodology. I was already aware of the AR5 Ch9 work, though it’s still good to remind people of its existence, I suppose.
Mark Bofill, even more funny/ironic is that Gates has not commented once on this thread… What was the discussion about when he showed up? Ah yes, that Brandon S was a hacker, until he was shown that was not true. We also learned that Gates admires Willard and only hopes to emulate his trolling… Like I said on another thread, he is a “Fan of more…something”, like someone was at Curry’s.
Sue,
.
Oh yeah. ‘A Fan Of More Intercourse’ or something similar. I’d forgotten about that person.
.
Yeah. I’m still glad SteveF posted this, despite everything else. I’m slowly picking up some scraps of the math and science I ought to have learned or learned and forgot way back when. At this rate I might eventually die knowing nearly as much as I should have had down 25 years ago. 🙂
The classic example of this is the value of the charge on an electron. Millikan used an incorrect value for the viscosity of air and was slightly low. The accepted value drifted upward over time.
I’m guessing that’s a pretty big fan club.
TE,
.
True. I am Spartacus!
Carrick,
“Thanks again SteveF for your comments. I have a much better picture now of the distinctions between EfCS and TCR (I had been erroneously conflating them).”
.
You are most welcome. And thanks to you for your critiques of a (much) earlier draft of the content of this post.
marc bofill,
” I’m still glad SteveF posted this, despite everything else. I’m slowly picking up some scraps of the math and science I ought to have learned or learned and forgot way back when.”
.
Truth be told, and as Ken Fritsch observes, there is really nothing much complicated in this post. You just have to think about things a little differently to see it more clearly.
marc bofill,
“Which brings us round full circle to the discussion that caused SteveF to write this post in the first place”
.
The first draft of this post was actually written before Nic Lewis wrote a much more elegant version of pretty much the same thing…. my motivation here was not to add to Nic’s efforts, which wasn’t needed, but to clarify the process of generating a PDF in sensitivity from a forcing PDF, which can be obscure if you haven’t gone through it before.
“If you have doubt that the adjustment used to generate Figure 5 is correct”
I am still bemused by Figure 4.
Due to my lower grade of maths.
The tail seems to flatten out at around 10%,
hence 10% is the likelihood at 8, 18, 80, or 800 degrees.
Something is obviously wrong with the formula giving this outcome.
A Gaussian distribution could imply a negative response, which according to Lucia would not be right either, although it would take into account the fact that side products of CO2 formation [aerosols] could conceivably inflict more downside than CO2 upside on rare occasions.
I am glad you got a Figure 5 going.
Nick Stokes (Comment #146618)
” Sensitivity is the measure of what has to happen for the net flux (your F-A) to exit to space. Something has to get warmer.”
True, note surface layer air will generally be the warmest layer as it is closest to the IR producing land and sea and is also the densest and most GHG rich layer.
” heat is radiated from TOA, from the surface (via atmospheric window) and from intermediate levels of the atmosphere. Why is it OK to relate it to just one temperature (surface)?”
Because we live in the surface layer?
Because the models model the surface layer?
Because we have temperature measuring devices in the surface layer?
“at equilibrium, those various temperatures are linked. The enforcement is that [when] they get out of line, a corrective flux will occur. The surface has to warm”
Well it has to warm, cool, or be neutral actually, as they get out of line all the time. It’s called day and night, convection, wind and currents.
” it can’t just be high atmosphere, for example.”
It can never be just the high atmosphere in Earth’s atmosphere; although it does get “warmer” with height at certain high levels, the high atmosphere is much colder than the surface and excels in letting more IR out the higher it goes, till you reach the so-called TOA.
” there is such a flux – your A. That passes from some level in the atmosphere where IR is absorbed by GHG, down to the ocean. And it passes via various modes of transmission, some involving a temperature gradient. So temporarily, the upper air is warmer relative to surface.”
Nick, regarding this explanation:
I am surprised no one else has commented.
Heat from the sun comes in varying wavelengths, and incoming IR is a very small component.
The higher-energy radiation and most of the incoming IR reach the ocean and heat it up, and the large amount of IR produced by the ocean heats the adjacent lowest surface air. The temperature gradient is a drop of roughly 6 degrees per kilometer going upwards.
There is no magical warmer higher layer sitting over colder lower layers.
Yes, the higher layers are warmer than they would otherwise be due to all that ricocheting IR, but 1 km higher they are about 6 degrees colder.
I do not understand why you would want to use this argument.
“As A diminishes this temperature difference fades, and the surface then warms.”
–
Sorry for disagreeing with the conclusions you have drawn.
A belated thank you for a helpful post.
I know your target was the probability-density aspect, but I’m embarrassed to confess that (1) I had never run down the distinction before between effective sensitivity and similar concepts and (2) before I scratched my head over your Equation 1 for a couple of minutes my initial reaction had been to challenge it.
So thanks again; the rare occasions when I can pick up nuggets like this are why I still occasionally lurk at the climate blogs.
Excellent post Steve and also a very persuasive argument for more research in aerosol forcing.
David.
SteveF is always clear and convincing.
Joe Born, I am glad you found the post helpful.
Thanks to David Young and Steve Mosher for your kind words. I do try to be clear; I’m glad to hear you thought I succeeded this time.
Interesting post. I think this post by Isaac Held might also be relevant. It's not only that the feedback response might be non-linear; it's also possible that the spatial structure of the response may vary with time. As Nick Stokes points out, one has to be careful of assuming that the EFS is going to be a reasonable proxy for the ECS. Personally, I think it's probably quite reasonable if you're happy with a ballpark figure. If you're looking for something more precise, then I think you have to be more careful.
Ken Rice,
As I said earlier in this comment thread, equilibrium sensitivity is only approached hundreds of years after constant GHG forcing is applied. And that is just not likely to happen; the Earth will almost certainly never approach an equilibrium response. From a public policy POV, the transient response is probably more relevant than either effective sensitivity or equilibrium sensitivity.
.
WRT Isaac Held's post: Of course, a globally averaged measure (or a global average projection) is a simplification, and one that may well obscure important behaviors. The problem is, the models are even worse at matching regional behaviors than global averages; comparing modeled average temperature to measured average warming probably makes the models appear more capable than they really are. Besides, empirical estimates are the only easily understood reality check we have.
.
A simple heat balance calculation like I have done here yields estimates for transient and effective sensitivities you can use to project future warming, and see how that compares to model projections. Based on empirical estimates like this one, I would project average warming will likely fall in the range of 0.13-0.14C per decade over the next few decades. The models are saying ~0.23C per decade.
SteveF,
It’s almost as if you think my comment was some kind of criticism. It wasn’t.
Well, yes, but the non-linearity, or spatial variability, issue probably applies to the transient response as well as the equilibrium response. We haven't yet doubled atmospheric CO2.
Possibly, but that still doesn’t change that spatial variability may play a role in determining the response.
Ken Rice,
“It’s almost as if you think my comment was some kind of criticism. It wasn’t.”
.
Seemed that way to me. ‘Interesting post….. but wrong’ will often be interpreted as criticism.
RB,
Actually, no. The comparison made in AR5 is between, on the one hand, an ECS estimate derived from the Andrews methodology – which involves a regression of the net flux vs temperature relationship for a fixed step-forcing, given by the 4xCO2 experiment, extrapolated to the zero net flux intercept – and, on the other hand, an estimate derived from the total GCM feedback estimated a la Soden et al.
This latter does not, repeat not, correspond to an “ECS estimated using heat balance methods”. In fact, the AR5 comparison merely confirms a serious methodological problem arising from the enforced linearisation of a curve in the estimation of GCM feedbacks using Soden’s radiative kernel approach. This approach uses a secant gradient to approximate an upwards-concave curve and leads to an arbitrary dependence on the period chosen for the analysis, but always with a systematic bias to overestimation of the “true” long-term feedback in the majority of GCMs. This then translates into an underestimate of the “true” ECS for that GCM.
For a more appropriate comparison of energy balance approaches under the assumption of constant feedback with GCM results, it is necessary to apply the actual energy balance approach used to the GCM flux and temperature data, and then compare the forecast ECS with the reported ECS for each GCM tested. Nic Lewis's comparisons are based on this latter approach.
It is, however, very easy to get sucked into a conversation where it is assumed that the GCM results yield a more correct answer for ECS than the estimation of effective ECS from observations under a constant feedback assumption. I would strongly recommend the work of Andrews et al 2014 http://centaur.reading.ac.uk/38318/8/jcli-d-14-00545%252E1.pdf
Andrews confirms systematically that the majority of GCMs display a curvature in the net flux–temperature relationship. However, in nearly all cases the curvature is largely driven by cloud feedbacks – one of the most dubious features in the GCMs. When warming is prescribed, the relationship between net flux and surface temperature is very close to linear. In other words, as yet, there is no physics-based reason to believe that the assumption of constant feedback in the real world is ill-founded.
SteveF,
I didn't say it was wrong. I can't see anything wrong with it. I was simply highlighting something that might be relevant but typically can't be included in such analyses.
SteveF,
A very useful educational article.
What you describe often goes under the heading of "Transformation of Random Variables", and there are indeed a number of treatments of the subject in stats texts. It can be explained in succinct form by reference to the CDFs of the RVs.
Suppose we have a random variable X with PDF fx and CDF Fx, and you want to find the distribution of a new RV, Y, related via the functional relationship Y = G(X), where G(X) is monotonic decreasing and continuously differentiable (as in your case). Then:-
G(X) is invertible over the range of interest in this example.
Hence, we can define the inverse function, H, such that X = H(Y)
Pr (X<x) = Fx(x)
Fy(y) = Pr(Y < y) = Pr(X > H(y)) = 1 – Fx(H(y))
The inequality is reversed in the above because for your relationship the new transformed Y value (equivalent to your EDS) is monotonic decreasing against the original X variable (equivalent to your F).
Differentiation of the above CDF yields the pdf for Y. That is:-
fy(y) = -fx(H(y))* dH/dy
Hopefully, this should give the same answer as you found (Smiley). If the relationship had been monotonic increasing, then the minus sign disappears, so the above can be generalised to any monotonic relationship by throwing in a modulus sign round the dH/dy and eliminating the minus sign.
For what it is worth, this process can be extended to multivariate functions, and the differential term is then replaced by a Jacobian matrix – the multidimensional equivalent of the differential term. Nic Lewis did this to good effect in his paper correcting Frame et al and other works.
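To make the transformation above concrete for this post's case, ES = dT/(F – A), here is a minimal numerical sketch (Python with NumPy/SciPy). The dT = 0.9 C and A = 0.6 W/m^2 values are the ones used elsewhere in the thread; the 2.29 W/m^2 mean and the assumed 5-95% forcing range of roughly 1.13 to 3.33 W/m^2 are illustrative, so treat this as a sketch rather than a definitive calculation.
import numpy as np
from scipy import stats

dT, A = 0.9, 0.6                       # assumed warming (C) and heat uptake (W/m^2)
F_mean = 2.29                          # best-estimate human forcing (W/m^2)
F_sd = (3.33 - 1.13) / (2 * 1.645)     # sigma implied by an assumed 5-95% range

def sens_pdf(s):
    # fy(y) = fx(H(y)) * |dH/dy|, with H(y) = dT/y + A and |dH/dy| = dT/y^2
    F = dT / s + A
    return stats.norm.pdf(F, F_mean, F_sd) * dT / s**2

s = np.linspace(0.1, 3.0, 2000)
print((sens_pdf(s) * (s[1] - s[0])).sum())   # ~0.98; the shortfall is the small Gaussian tail near F = A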
SteveF,
My equations in the above have been turned to gobbledegook by WordPress. I think it is because I had less than and greater than inequalities on the same line. WordPress has interpreted them as HTML brackets and eliminated the bit in the middle! The line should have read:-
Fy(y) = Pr(Y is less than y) = Pr(X is greater than H(y)) = 1 – Fx(H(y))
Paul_K:
In case you were wondering, there is at least one of us out here whose brief dalliances with probability are so infrequent as to require him to re-derive that transformation whenever the need for it arises, so your comment did render a service.
Paul_K,
For future reference if you want to include < and > in equations and not have them interpreted as HTML brackets, use the HTML special characters format: ampersand lt semicolon. Replace lt with gt for the greater than symbol. Watch out if you edit your post, though. They’ll turn back into brackets.
Here’s one list of those characters:
http://www.degraeve.com/reference/specialcharacters.php
If you can’t find what you’re looking for in that list, search on ‘HTML special characters’.
Another option is to use Latex, which this site does support.
The nonbreaking space character ampersand nbsp semicolon can be particularly useful for formatting data in columns as it isn’t removed when you post like extra regular spaces.
Latex can be very useful, but for simple equations, it’s too much bother as, since I don’t use it regularly, I always have to relearn the conventions.
Thx DeWitt. You are a gentleman and a scholar, sir.
Paul K,
Thanks for the link to Andrews et al. Their CMIP5 ensemble graphic of TOA flux versus temperature increase (Figure 1C) suggests an equilibrium sensitivity value of about 7C for an applied CO2 forcing of ~7.6 watts/M^2, or ~0.92 degree/(watt/M^2), based on the ensemble response from 21 to 150 years. The corresponding implied sensitivity from the response over the first 20 years is 5.8 degrees, or ~0.76 degree/(watt/M^2), and 83% of the 'equilibrium sensitivity'.
.
There are several interesting things about this graphic.
The first is that the implied sensitivity for the first 20 years response (0.76 degree/(watt/M^2)), if correct, corresponds to a best estimate warming today of about 0.76 * (2.29 – 0.6) = 1.28C over pre-industrial, which is a long way from the observed warming of ~0.9C…. 43% too high. And the measured response is for a long period of slowly increasing forcing, not a step change; that to me suggests the models are WAY too sensitive. Were I a climate modeler, this would make me a little nervous. And especially so because the high model sensitivity comes mainly from the least clearly understood factor (clouds).
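For concreteness, that arithmetic is just eq. 1 rearranged to dT = ES * (F – A), using the numbers as quoted (a quick Python sketch):
ES_20yr = 0.76            # deg C per W/m^2 implied by the first-20-year model response
F, A = 2.29, 0.6          # forcing and heat uptake used above (W/m^2)
dT_implied = ES_20yr * (F - A)
print(dT_implied)               # ~1.28 C
print(dT_implied / 0.9 - 1)     # ~0.43, i.e. ~43% above the observed ~0.9 C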
.
Second, the “break” in the curve near 20 years is at ~3.9C warming response. My immediate reaction to this is: does the change in the slope happen because of reaching that level of warming (~3.9C) or is it due mostly to dynamic processes which require ~20 years to kick in? If the 4X CO2 were instead 2X CO2, would the same break in slope happen after ~20 years? I don’t know the answer to this question, but my guess is that it is a dynamic response, and the same thing would happen with a 2X step in CO2, but at a smaller temperature increase.
.
Third, the average model response to the step change after 20 years is ~75% of the response at 150 years, reinforcing, for public policy decisions, how much more important the transient response is than the equilibrium (or effective) sensitivity.
.
At the end of the paper, the authors write:
Which suggests to me they understand equilibrium responses are not so meaningful for public policy.
The blogger Anders has a new post about how “Maybe we really are screwed” in which he writes:
Something about this seems wrong. I wonder if anyone else can see it.
Brandon S,
.
The structure of the post is interesting to me. The rhetorical style is convoluted. It seems to consist of a series of apologetic statements each of which Anders promptly contradicts:
1.A.
Contradiction:
1.B BUT:
.
2.A
2.B. BUT:
.
3.A.
3.B. BUT:
.
4.A.
4.B. BUT:
.
Only straight statement in the post, the conclusion:
.
.
My response would be:
1.A
1.B. BUT:
.
Now it’s entirely possible that these phrases:
are being completely misunderstood and therefore grossly misrepresented by me. That's the risk someone takes by not speaking straight, I guess, or maybe the risk I take in trying to read and find meaning in something that's not spoken straight.
~shrug~
There are other, entirely innocent interpretations. Perhaps he is advocating for more engagement? Perhaps he is advocating for more discussion. It’s impossible to know because he hasn’t provided enough information to uniquely identify the bounds or expectation of conduct or constraints of the system he is referring to.
.
My trouble with these more innocent interpretations is this – if his meaning was innocent, why didn’t he speak plainly? My first thought is that he’s not speaking plainly because his meaning is not innocent.
.
Of course there is room for error in this as well. Maybe Anders just doesn’t talk that way. Maybe he just hasn’t gotten around to clarifying yet. Maybe unicorns are stomping on his hands whenever he tries to explicitly type his meaning. Who can know.
.
~shrug~ But whatever he means, what he typed is ambiguous. Readers can read it as supporting stepping over whatever ‘normally acceptable bounds’ or ‘expectations of conduct’ or ‘system constraints’ they’d like to.
Brandon S.
Let’s see. If the radiative forcing from CO2 is 4.4W/m² per doubling, then to get to 24W/m², or 20 W/m² after subtracting where we already are, would mean the CO2 concentration, ignoring feedbacks, would have to increase 23 fold. That’s 9000ppmv. I don’t think there’s that much fossil carbon. Even if the feedback is a factor of three, that’s still 3,000ppmv or 1,500ppmv for 10%.
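A quick check of the 23-fold figure (Python, taking 4.4 W/m^2 per doubling and ~400 ppmv today as in the comment above, and ignoring feedbacks):
doublings = 20 / 4.4            # extra forcing needed divided by forcing per doubling
factor = 2 ** doublings         # ~23.4-fold increase in CO2
print(factor, 400 * factor)     # ~23.4, ~9300 ppmv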
Mark Bofill:
Anders has a long history of not being clear then relying on his lack of clarity to claim he “didn’t say that.” Then again, he also has a long history of refusing to acknowledge what he did say when he was clear, so I don’t know what’s up. My favorite example is when he came to my site a while back and responded to a user by, according to him, intentionally being curt. Curt means rudely brief. I suggested he not be rude to other commenters. He responded by denying having been rude. I still can’t figure out what he was thinking when he said he was “rudely brief,” not “rude.”
I think the best thing to do is just not try to be confident in how you interpret anything he says unless he is perfectly clear, at which point I’d still be somewhat hesitant. Still, the part I quoted is quite clear and pretty entertainingly wrong.
DeWitt, Mark,
Strikes me as typical Ken Rice worrying about impending climate doom. Seems most of his posts are at least similar in that respect (paraphrasing): ‘Oh my God! What can we do? What should we do? Our great great grandchildren will think we didn’t do enough. Why won’t people listen?! OH… MY… GOD!’ And I bet he thinks Paul R. Ehrlich is a visionary.
.
I’m not interested in such ranting.
DeWitt Payne:
We have to use CO2 equivalent since there are other greenhouse gases (like methane), but yeah. What I thought was really funny is the IPCC uses projections of emission growth to consider what may happen in the future. They’re called Representative Concentration Pathways (RCPs). The four notable RCPs are RCP2.6, RCP4.5, RCP6.0 and RCP8.5. The numbers after each represent the increase in radiative forcing since pre-industrial times (e.g. +8.5 W/m2) projected to be reached at 2100.
When the RCP8.5 scenario is extended to 2300, it only reaches +12 W/m2. That is, in the highest emission scenario considered by climate scientists, we’d only reach Anders’s 10% in 2300. I’d argue that scenario is completely unrealistic, but even if we accepted it as gospel truth, the idea we’d reach Anders’s 20% of +24 W/m2 isn’t close to anything we would ever reach.
I don’t know how Anders came up with those numbers, but the moment I read them, I immediately thought, “How would we reach triple the radiative forcing of the worst case scenario?” (The answer lies in his calculation, which is wrong. A better estimate for the current radiative forcing by humans would be something like +1.7 W/m2.)
Thanks Brandon, Thanks SteveF.
I think I was wrong in my interpretation anyways. What he does say is along the lines of "try harder to convince people of the seriousness of this situation."
Maybe he’s feeling guilty about promoting the Climate Feedback [dot org] thing. I don’t know. Possible but unlikely. 🙂
By the way, I should clarify the problem isn’t in the math of Anders’s calculation, but rather in the inappropriate assumptions his calculation relies upon. What he’s done is basically say: “Without feedbacks, a change of ~3.7 W/m2 would cause ~1 degree of warming. We’ve seen ~1 degree of warming, so that tells us the change in forcing is ~4.0 W/m2.” This, of course, assumes humans have caused ~1 degree of warming and no feedbacks have been at play.
That’s not a direct translation of what he wrote, but it’s the basic gist. The problem should be obvious. Even if we accept humans have caused 1 degree of warming (not something like .7C due to uncertainty/natural influences), the calculation would only work if there were no feedbacks. Anders frequently says people who argue climate sensitivity is lower than ~3C seem to be ignoring science, yet here he relies upon a climate sensitivity of ~1C in calculating the anthropogenic radiative forcing.
I have no idea why one would decide to use this approach instead of a more direct calculation based upon the estimated radiative forcings of greenhouse gases which are readily available and used by little groups like the IPCC. But if one wanted to use this approach, they certainly couldn’t justify doing it while arguing climate sensitivity is most likely 3 or greater.
TL;DR: Anders's calculation uses a variety of questionable assumptions. The strangest of these is the assumption climate sensitivity is ~1C per doubling of CO2 even though he frequently criticizes those who say the climate sensitivity is less than 3C per doubling of CO2.
Brandon S.
I think you’re including aerosol offsets to get 1.7W/m². Using the formula ΔF = 6.3ln(C/Co), the forcing change from 279ppmv to 400ppmv CO2 alone is 2.27W/m². The IPCC FAR calculated 2.45W/m² total ghg forcing change from 1765 to 1990. In 1990 the CO2 level was 354ppmv. Ignoring other ghg’s, 354 to 400 ppmv is worth another 0.77W/m², for a total of 3.22W/m².
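The ln formula above as a one-line check (Python). The 6.3 coefficient is the old FAR value; a later comment in this thread notes the more common modern coefficient is 5.35:
import math
dF = lambda C, C0, k=6.3: k * math.log(C / C0)
print(dF(400, 279))             # ~2.27 W/m^2, CO2 alone from 279 to 400 ppmv
print(dF(400, 354))             # ~0.77 W/m^2, 1990 to present
print(dF(400, 279, k=5.35))     # ~1.93 W/m^2 with the modern coefficient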
Using the temperature to calculate the forcing is what’s wrong. Even the IPCC thinks that some of the increase in temperature was not anthropogenic in origin.
By the way, if you want to see the IPCC projections of future radiative forcing changes, you can find them in Working Group 1 Chapter 12 of the latest IPCC report. Or you can just look at this figure:
http://www.climatechange2013.org/images/figures/WGI_AR5_Fig12-3.jpg
I’m not sure why Anders is unaware of things like this.
Hey I wasn’t wrong. Well, I might have been wrong about what Anders meant, but there’s John Hartz:
The revolution awaits…
DeWitt Payne:
I'm actually not doing the calculations myself. I'm just going off the IPCC numbers. They do include aerosol offsets though. If you exclude those, the combined forcings of everything else is something like 2.7 W/m2. I can provide an exact breakdown of the forcings as used by the IPCC later when I'm on my computer again. In the meantime, I should caution against using the IPCC FAR numbers as things have changed a fair amount since that report came out. For instance, I think the current mainstream formula uses 5.35 instead of 6.3.
By the way, while I get many people disagree with the IPCC’s estimate for the radiative forcing caused by aerosols, you do have to account for aerosols when estimating the anthropogenic radiative forcing. You cannot compare the effects caused by all anthropogenic forcings to the size of some anthropogenic forcings and expect to get an accurate result (unless you have some compelling reason to believe the forcings you’re ignoring are very small in size).
Brandon,
I think you are still missing the funniest part of ATTP’s post.
Just consider the calculation here:-
Really?? I am seriously concerned that any physicist could so misunderstand the radiative balance. Since the sum of the temperature-dependent (restorative) flux at TOA and the TOA net flux imbalance should yield the forcing to date, ATTP seems to believe that to date we have already had a TOA forcing of 4 Wm-2, which he then confuses with a net flux perturbation. Moreover, the further good news is that ATTP believes that there are no positive feedbacks to offset the Planck response and whacks in a full 3.2 Wm-2 K-1. We should then be able to reassure him that his fears of imminent calamity are unwarranted. On his calculation basis, a doubling of CO2 should yield about 1.2 deg K total temperature rise.
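For reference, that ~1.2 K figure is just the canonical ~3.7 W/m^2 per doubling divided by the full 3.2 W/m^2/K Planck value, e.g. in Python:
print(3.7 / 3.2)    # ~1.16 K per doubling if the Planck term is the only feedback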
In answer to the question "What is the maximum sustained net radiative perturbation induced to date?", most IPCC climate scientists would arrive at the answer 0.6-0.8 Wm-2 on ATTP's figures, i.e. the current radiative imbalance, which should represent the cumulative applied forcing less the restorative flux associated with today's temperature. The latter (restorative flux) of course is, according to the IPCC, far less than ATTP's assumed Planck value of 3.2 Wm-2 K-1 because of significant offsetting positive feedbacks.
Somewhat ironic to include this in an article on representing good science.
Brandon,
I see you had already picked up the point I have just made. (Your comment #146893 was not visible on my screen.)
Paul_K, Brandon –
I was about to make a similar comment, that aTTP takes an ~1 K warming and converts to forcing using ~1K/doubling, but predicts dire results based on 3 K (or higher)/doubling ECS. Eating one’s cake and having it, too.
If he really wanted to talk of current forcing, it’s in innumerable CMIP5 and AR5 documents. Total anthropogenic forcing is currently about 2.2 Wm-2 according to the RCP6.0 scenario. But then his article isn’t about science at all, is it? [Oops, RQ. My answer – no it isn’t.]
I’ll also respond to “we could produce a perturbation that is 10-20% of the total Greenhouse effect. This is no longer small, and a large perturbation of a non-linear system can produce big, and unexpected, changes.”
Perturbations of *stable* non-linear systems tend to produce diminishing returns.
Paul K,
The sad part is that the 'emotional distress' of global warming seems to short-circuit rational analysis. Then very silly things are said, and hysteria begins.
Harold W,
No, it has little to do with science. It has to do with emotion, politics, and personal preferences and priorities.
HaroldW,
Replace the question mark after “is it” by a period and it’s no longer an RQ.
Paul_K, HaroldW, if you want some additional humor, check out this comments by Anders:
So on a post where he performs calculations which require him to assume the planet's climate sensitivity is ~1C per doubling of CO2, Anders comments to say the TCRE has a range of 0.8 to 2.5C per 1000 GtC. TCRE and TCR (or ECS) are obviously not the same thing, but it would be impossible to have a range for TCRE like that (which I think is questionable anyway) while the climate sensitivity was only ~1C.
Now I want to see TCRE related to other measures of climate sensitivities so we can convert between them. It’d be entertaining to know just what climate sensitivity range one gets if they convert Anders’s TCRE range since his post (effectively) says TCR is ~1C. Anyone up for the challenge?
Apparently Brandon R. Gates has decided to defend Anders’s post, writing (I removed a malformed link Gates tried to post to my comment introducing this topic):
That thread is up to 80 comments, and now Gates is linking to here. Maybe somebody over at Anders's place will catch onto the silly error of his post. It's kind of embarrassing that after 80 comments, nobody there has. And Gates is even speaking out against us for pointing it out!
Oh, and for the record, I don’t think this is Anders resorting to “tricks” like Gates claims. I think Anders just has no understanding of what he is doing.
Sorry for the triple post, but I just found out Anders has updated his post to add this note:
This doesn’t do anything to justify what he said in his post:
In fact, it doesn’t address what he said in his post at all. Anders seems to have realized he screwed up big time and tried to find some other way to justify his numerical range without dealing with his huge mistake.
After all, people who can recognize his error for what it is are just “confused” by his point. It’s not that what he said is wrong and foolish.
There are so many ways to calculate a number and call it the magnitude of the greenhouse effect that it’s not really a very good measure.
1. One can calculate the increase in emission to space if all the ghg’s were removed from the atmosphere. That’s about 130W/m² for the 1976 US Standard Atmosphere. But that’s not valid because if you remove water vapor, you also remove clouds. That changes so much that this measure can’t mean much.
2. The 33K figure suffers from exactly the same problem as 1.
3. Another way is to look at the energy balance. The gross flow of energy upward from the surface is 493W/m² (TFK09), 396W/m² from radiation and 97W/m² from latent and sensible convection, and upward to space is 238.5W/m². One could then argue that the greenhouse effect is the difference between those numbers, or 254.5W/m². The advantage of this measure is it doesn't require calculating some arguably non-physical hypothetical. The disadvantage is that it's difficult to calculate how a change in ghg's would change the number at steady state. One can, of course, easily calculate the change for an instantaneous change in forcing.
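The bookkeeping for option 3 (Python), using the figures as quoted:
surface_up = 396 + 97                     # surface radiation plus latent/sensible flux, W/m^2
toa_up = 238.5                            # emission to space, W/m^2
print(surface_up, surface_up - toa_up)    # 493, ~254.5 W/m^2 by this measure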
DeWitt,
We already know what the sun sends us.
We apparently have a measure of the earth radiance from earthlight reflected off the moon.
Surely someone, somewhere can reconcile them and say the earth is gaining in energy.
Or not.
Sorry, looking for moonbeams. There was one, this, which was interesting.
“This proxy shows a steady decrease in Earth’s reflectance from 1984 to 2000, with a strong climatologically significant drop after 1995. From 2001 to 2003, only earthshine data are available, and they indicate a complete reversal of the decline. [due to albedo changes/clouds].
Pallé E, et al. Science 2004;304(5675):1299-1301.”
Koonin calculates the anthropogenic influence as a percentage of the ~340 Wm-2 downward thermal flux (which would be zero in a transparent atmosphere).
HaroldW,
A perfectly transparent atmosphere means no clouds. Same problems as my 1. and 2. You have an unphysical hypothetical for comparison, as there’s no such animal as a perfectly transparent atmosphere. Even a noble gas atmosphere would have collisionally induced absorption/emission. It wouldn’t be much, but it wouldn’t be zero either. There wouldn’t be absorption of incoming solar in the atmosphere, not to mention the albedo change from no clouds.
Brandon S.
It's a little weird to discuss ATTP's posts over here.
Childish at worst, weird at best.
There was a time I didn't like that blog. Then I just decided to ease up a bit. It got better. Then I decided to do a post there.. or two. It got better. Not perfect. Better.
I’ve said a lot of shitty things to ATTP. he had no problem ignoring that and giving me a venue.
just sayin..
Steven,
To be fair, I don’t let Brandon comment on my blog, so maybe he feels like discussing it here. I don’t really care. He also appears to have completely made up his mind (as seems usual) so actually discussing it with him (unless you happen to agree) seems entirely pointless.
I don’t really mind when people sometimes say shitty things. I’ve done the same myself. It’s a contentious topic and it’s easy to say things that you might not say if you were face-to-face. My issue is really with those who seem incapable of saying anything other than shitty things, or who seem incapable of letting bygones be bygones and continually judge you for the shitty things you might have said in the past, even if you acknowledge their shittiness.
Steve Mosher, “you’ve come a long way baby”…
Steven Mosher:
It isn’t weird for people to discuss a blog post someone has written that is relevant to the post at hand. This post discusses the relation between radiative forcing and climate sensitivity, the exact aspect of Anders’s post I’ve been discussing here.
Perhaps it’d be better to discuss his post on his site, but given a number of commenters here have been banned from it, that’s not possible.
Anders:
I have made up my mind that using the Planck response to calculate radiative forcing while completely disregarding feedbacks is wrong if you believe such feedbacks have a strong effect. I’ve also made up my mind you believe such feedbacks have a strong effect since you’ve argued feedbacks more than double climate sensitivity many times.
You can paint this as me being close-minded if you want, but the reality is anybody who understands the Planck response gives a “no-feedback” sensitivity will think using it while believing feedbacks are at play is wrong. That’s not being close-minded. It’s having a basic understanding of things.
If you could, by some chance, show estimating radiative forcing by using a “no-feedback” sensitivity while believing there are strong feedbacks was right, despite it giving results far out of line with all of mainstream science, I’m sure people would change their minds. But people recognizing that would be impossible to do doesn’t make them close-minded.
I didn’t.
I hate it when Steven Mosher’s right.
.
Apologies for my remarks Anders.
Mark,
No worries.
Anders:
This sort of comment is completely unhelpful. Believe it or not, people here are willing to discuss topics to try to resolve disagreements and even acknowledge when they’re wrong if such a discussion shows that they are. It does require people saying more than, “Nuh-uh” though.
PaulK, thanks for the link to the Andrews paper. I have only glanced through the paper at this point in time. I want to check the paper's results against those that I derived some time ago. Are you satisfied that the paper statistically shows non-linearity? The paper appears to show not so much non-linearity in the curve but rather two straight lines with different slopes and a rather arbitrarily drawn breakpoint. I am wondering what a breakpoint analysis will show.
I just went back and reviewed my OLS regression plots of net TOA radiation versus surface air temperature for the CMIP5 4XCO2 experiment and do not see what the Andrews paper shows in its plots. The closest my plots come is with Figure 1 (a), where you can see the relatively sparse points from the rapid change in temperature and net radiative energy over time in the early years, and then the dense points of the later years where the change slows down precipitously. My plots do not show the flattening tail that the Andrews paper shows. In fact, the data points start piling up at the end of the line used for extrapolation in my plots, as would be expected from the slowing of temperature and net radiative changes with time. I need to read this paper more carefully, but at this point I do not see how the tail of more or less single-file data points shows up in Figure 1 (a).
I believe this is the paper about which, or the contents of which in another paper, I asked Andrews whether he had looked at TLS regression and he told me no.
I’ve said a lot of shitty things to ATTP. he had no problem ignoring that and giving me a venue.
just sayin..
Completely different experience – I can't recall posting anything uncivil at ATTP, but my comments are excluded. Appears to be a lost cause of ideology to me.
Mark,
Mosher is not right, I have no idea what you are apologizing for. Ken apparently blocks comments on his blogs and blocks most ppl on twitter who question him. His blog is for the “chorus”, just like WUWT and unfortunately Bishop Hill now. For Mosher to say it “got better” is comical and sad…
TE,
IIRC, I suggested that I would delete comments of yours that were simply figures without explanations. You also peddle the same stuff over and over again, which does get rather tedious. You don’t have to like it, but I don’t run my blog for your benefit. You are free to whine away, though.
Brandon,
I wasn’t really trying to be helpful.
However, here is what I was describing. The standard energy balance formalism is
$latex N = dF + \lambda dT,$
where $latex N$ is the planetary energy imbalance, $latex dF$ is the change in forcing, $latex dT$ is the change in temperature, and $latex \lambda$ is the feedback response.
You can separate the feedback response into the Planck response and the rest, and rewrite the above as
$latex N = dF + W_{\rm feed} dT – 4 \epsilon \sigma T^3 dT,$
where the last term is the Planck response, and the second to last is all other feedbacks. Therefore if you can estimate the Planck response and the planetary energy imbalance, you can estimate the change in forcing plus the non-Planck feedback response, which given that we’ve warmed by about 1C and still have a planetary energy imbalance of 0.6 – 0.8W/m^2, is about 4W/m^2. Of course there are uncertainties, etc, etc, etc, but it is just a ballpark figure.
If we now consider the Greenhouse effect, the equivalent radiative impact is about 120 W/m^2. So, we’ve enhanced this by about 4 W/m^2, or a few percent.
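For anyone following along, the arithmetic behind that "~4 W/m^2" is simply N plus the Planck term, taking the quoted numbers at face value (a Python sketch, not a full treatment):
planck_term = 3.2 * 1.0              # the 4*eps*sigma*T^3 term times dT of ~1 K, W/m^2
for N in (0.6, 0.8):                 # assumed planetary energy imbalance, W/m^2
    print(N + planck_term)           # dF + W_feed*dT = N + Planck term: ~3.8 to 4.0 W/m^2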
I don’t know if the latex is going to work, but I’ll try anyway.
Now that I have actually explained this, you can – of course – feel free to continue to claim that I’m wrong, or I’m not wrong but you’re still correct, or I’m rude because I didn’t write my comment as you would have done, or I’m dishonest because I didn’t immediately agree with you about something, or some combination of all of these, or something entirely new and unexpected.
Sue,
.
Yeah. I sort of jumped on the opportunity Steven Mosher’s post provided more because I was ashamed of my remarks than any other reason. The thing is, I know better than to indulge in … derogatory speculation, let’s say, about people I disagree with in an environment where others may tend to agree with me. It’s polarizing and I think there’s too much divisive behavior going on already, so why am I contributing to the problem? No good reason, I was just being a jerk mostly. It’s like I was saying earlier; do I want to talk or do I want to fight? Can’t do both.
.
I had a point, and I could have made it in a more constructive way. My point mostly boils down to the simple observation that it's not conspiracy theory at all to note that there are some on Anders's side of the fence who appear to support the idea of radical political change or moving towards one world government [edit, add: in the context of dealing with climate change or global warming].
.
Thanks Sue, I always appreciate hearing your thoughts.
Mark, I am moving my comments to the other thread so these guys can talk math/physics without us interfering. Hopefully others do the same…
aTTP: “you can estimate the change in forcing plus the non-Planck feedback response, which given that we’ve warmed by about 1C and still have a planetary energy imbalance of 0.6 – 0.8W/m^2, is about 4W/m^2…”
Or you could look up the anthropogenic forcing in (say) the CMIP5 files, as 2.2 Wm-2 in 2016.
Adding part of the response term is, well, meaningless.
Anders:
I get making unhelpful comments and writing things like this may be the approach you wish to take. I have no intention of doing the same. I have been trying to discuss the physics problem you got wrong. On that note, I’d like to discuss your description of what you were doing. Specifically, I want to focus on where you say:
This isn't exactly how I'd write it, but it is equivalent to what I'd work with. The problem arises from the fact that dT exists in both the feedback and Planck response terms. That's because the planet's equilibrium temperature for the amount of forcing at any given time is determined by the amount of warming that has been caused by the Planck response, the amount of warming that has been caused by feedbacks, and the planet's energy imbalance. So when you write:
You cause the problem I’ve been highlighting. There is no way to estimate the Planck response without considering the feedback parameter. You wrote:
But this is based upon you only plugging a value for dT into the Planck response parameter even though the term appears in the feedback parameter as well. Plugging a value into both the feedback parameter and the Planck response would result in you having two unknown variables, making it impossible to come up with a single numerical value. Put simply, your calculation completely overlooks the feedback parameter, which, to save on the need for LaTeX, I'll label P.
If one wants to create a correct formalism for this problem, they have to account for the time component and what portion of the feedback parameter will have manifested in whatever timeframe is being considered. In other words, what portion of the warming we’ve seen (assuming it is all anthropogenic) is due to the direct Planck response, and what portion is due to feedbacks? Without an answer to that question, it is impossible to isolate the forcing caused solely by the Planck response.
For a direct demonstration, this is what we get if we plug values used in your post into the formula you gave:
4W/m^2 = 0.8W/m^2 + P + 3.2W/m^2
It only balances out if one assumes the feedback parameter is 0.
By the way, I hate using LaTeX and always screw it up. I hope people will forgive me for avoiding it as much as I can.
HaroldW:
Or you could even just look at the pretty figure created by the IPCC, which I provided upthread. Here it is again. It gives a lower number than you give though. Do you happen to know why? I could probably figure it out, but I haven’t looked at the table you refer to.
Brandon (#146944) –
I suspect the difference is that in Figure 12.3(a), the forcings are efficacy-adjusted, while the CMIP5 forcing file which I used (link in #146901) is not.
No, it isn’t.
Anders, I get you may be happy to intentionally make unhelpful comments, but I think things would go better for everybody if you didn’t respond to people’s reasoned discussions with, “Nuh-uh.”
If I misapplied the formula you gave, it should be easy to demonstrate by simply plugging the numbers in yourself. Given you already did these calculations, doing so should take only a trivial amount of effort more than what you put into your unhelpful response.
Given that I'm in a discussion with you, I seriously doubt it. I don't think there is a way in which I could respond that would allow things to proceed in a manner that anyone reasonable would describe as "better".
Anyway.
$latex N = dF + W_{\rm feed}dT – 4 \epsilon \sigma T^3 dT.$
The LHS is the planetary energy imbalance, so about 0.8W/m^2. The first term on the RHS is the change in external forcing, mainly anthropogenic, so about 2.3 W/m^2. The next term on the RHS is the non-Planck feedbacks. The last term is the Planck response, so about 3.2W/m^2/K. Solve for $latex W_{\rm feed} dT$ using $latex dT = 1$.
.
Error bars?
.
Trenberth's energy budget was based in part on CERES data, but from more than a decade ago.
.
The complete CERES series of net radiance, likely with very large errors, is…
0.0 W/m^2.
.
OHC? Yes, Levitus indicates an increase. But, error bars? Probably larger than let on. It fails to measure the ice-covered polar areas where most of the known mixing occurs. It fails to measure the very large amounts of deeper waters.
aTTP –
You haven’t said why adding a partial response term to the forcing (or alternatively, a partial response term to the net energy balance) yields a meaningful value. What meaning do you ascribe to it?
TE,
There’s a constraint on the total rate of OHC increase, sea level rise. Due to the physics of saline solutions, the thermal expansion coefficient of sea water doesn’t change much with depth. The thermal expansion part of the sea level rise is largely explained by the change in the OHC from 0-2000m. There is very little chance of significant amounts of heat being gained or lost below 2000m.
TE,
Funny that they're important to you now. If you read my earlier comment on this, I did say
Of course there are uncertainties, etc, etc, etc, but it is just a ballpark figure.
Right:
.
+0.8 W/m^2 imbalance is possible.
.
But the satellite-measured 0.0W/m^2, and even a negative imbalance, are also possible.
Not according to the OHC.
TE,
The oceans have to be thermally expanding, i.e. OHC increasing, because sea level rise cannot be explained by the increase in mass from land ice melting and fossil water recovery. There are, of course, uncertainties in GRACE mass measurements, TOPEX/Poseidon, Jason-1 and Jason-2 satellite altimetry, glacial isostatic adjustment and ARGO OHC measurements, but when you combine them, they tell the same story: OHC is increasing, which means that the radiative balance at the TOA is not zero. By the way, the satellite energy balance measurements have far and away the worst precision and accuracy compared to the others listed.
DeWitt,
.
.
I always thought that was the case anyway. My impression was that while the seas are rising, we pretty much have to be in radiative imbalance to explain that fact. If that’s wrong, I’d love to find out how and why.
[Edit: I didn’t always think that was the case. I’ve thought this was the case for a while now.]
After reading Anders’s response, I realized I accidentally switched two parameters around as he places the energy imbalance on the left side while I typically place the change in forcing on the left side. That’d change:
To:
Which is basically the same thing, just rearranged (the only difference being the sign in front of P). However, according to his latest comment, it should actually be:
This doesn’t solve to anything like what he said in his post:
His post claims “the net radiative perturbation is about” 4W/m^2. I don’t see how that arises from this formula. This formula shows the change in forcing is 2.3W/m^2 (plus 0.6 – 0.8W/m^2 when you add in the energy imbalance term).
Moreover, I can see no explanation of how Anders’s 3.2W/m^2 is arrived at given it. The formula does not give rise to that value at all.
So Anders, could you please provide the calculation you used to get the radiative forcing you attributed to the Planck feedback in your post? I know your latest comment tells us to solve for the feedback parameter given the Planck response is 3.2W/m^2, but that tells us absolutely nothing about how the value of 3.2W/m^2 was derived.
By the way, I’m not sure why I chose P for the variable I’m assigning to the non-Planck response feedback parameter. I wish I would have picked something else as I worry it’d make someone think “Planck response” instead of “non-Planck response feedback.” I know it shouldn’t change anything, but… eh.
It'd probably be better for me to just refresh myself on the LaTeX syntax and use that instead. That's just way more work. I've never found LaTeX intuitive, and as far as I can see, this should be very easy to resolve.
Speaking of which, if anyone can figure out how Anders derived his value of the Planck response’s contribution to the energy balance (for the 1C of warming he uses for his calculations) as 3.2W/m^2, feel free to speak up. As alluded to in Anders’s last comment, if we know the contribution of the Planck response, the net change in forcing and the energy imbalance, we can just plug the numbers in to get the climate sensitivity value. That’d be quite useful.
Ken Rice has shaded his “estimate” of net radiative imbalance by selecting values which are at the upper end of the plausible range. 1C warming (not 0.9C), 0.8 heat uptake, not 0.6. His calculation of “4 watts perturbation” is actually more like:
.
0.6 = 2.3 + 0.9 * (non-Planck) – 3.2 * 0.9
.
leading to a “non-Planck” value of 1.31. Adding this 1.31 to 2.3 (man-made forcing) yields a “perturbation” of 3.61 watts/M^2.
.
I don't see the advantage of even thinking about it this way, save for the bigger "perturbation" number sounding scarier. For what it is worth, it seems to me it just says that the feedbacks are equivalent to 1.31 watts/M^2 of additional forcing on top of 2.3 actual forcing, and so the measured warming is higher than the expected 'black-body warming' by a factor of about 3.61/2.3. It's hard to get too excited by this observation.
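Solving that line for the non-Planck term explicitly (Python, with the numbers as quoted):
N, dF, dT, planck = 0.6, 2.3, 0.9, 3.2
non_planck = (N - dF + planck * dT) / dT
print(non_planck)              # ~1.31 W/m^2/K
print(non_planck * dT)         # ~1.18 W/m^2 of feedback flux at 0.9 C of warming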
Brandon S.
If you use MODTRAN with the 1976 US Standard Atmosphere 100km looking down and everything else default, emission to space is 260.118W/m². Now add a surface temperature offset of 1 degree and recalculate with water vapor pressure constant, the new output is 263.446. That’s a difference of 3.328W/m². But that’s not entirely fair as that’s for clear sky. Add cumulus cloud cover, first option, the output is 222.971W/m² with a 0 degree offset. Change the offset to 1 degree and you get 225.954W/m² for a difference of 2.983W/m². If cloud cover is 60%, then the average increase is 3.121W/m². That’s not 3.2, but it’s pretty close.
But that's the no-feedback case. Holding relative humidity constant instead of water vapor pressure will reduce the increase in emission. I leave that calculation as an exercise for the reader.
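The 60% cloud average quoted above is just a weighted mix of the clear-sky and cloudy-sky differences (a Python one-liner check):
clear = 263.446 - 260.118          # clear-sky increase, W/m^2
cloudy = 225.954 - 222.971         # cumulus-covered increase, W/m^2
print(0.4 * clear + 0.6 * cloudy)  # ~3.12 W/m^2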
SteveF:
But again, where do you get the 3.2 number from? Anders doesn't provide any derivation of it. As DeWitt Payne seems to suggest, the value is roughly what you get if you estimate the flux from the Planck response assuming that it, the Planck response alone, caused 1 degree of warming. That would be completely inappropriate if you're then going to use the value derived from assuming no effect from feedbacks to estimate the strength of feedbacks.
Now, I don't think Anders's formalism in his latest comment matches what he used in his post. I don't think he used ~2.3 as the change in forcing. What I think he's done is somehow taken the original formalism where he attributed the total warming effect to the Planck parameter to calculate the forcing associated with it (~3.2), and plugged that result into this new equation, allowing him to solve for the feedback parameter. And in the process, he's suddenly plugged the value of 2.3 in for his change in radiative forcing as opposed to the 4.0 he referred to in his post (which actually balances his equation if we attribute all warming to the Planck response).
The reality is that with dT showing up in both the Planck response parameter and the feedback parameter, there is no way to solve for either, at least not for a numerical value. You could solve for the parameters in that it is possible to do things like specify a ratio between the two parameters as being equal to the remainder of the formula. This would let you plot potential answers in which one parameter increasing would mean the other parameter decreased.
TL;DR: When you have two variables like this, you cannot solve for a single numerical solution.
By the way, I want to stress I don’t know what Anders’s thinking behind these comments is. I may be misunderstanding him somehow. That is why I wish he would avoid the sort of unhelpful comments he’s been prone to. This could all be resolved very easily. All he’d have to do is explain how he derived the value for the Planck response parameter he used in his post.
I've already stated what I believe he did to derive it. It seems he might disagree with what I said, but he hasn't offered an alternative explanation. If and when he does, there should be no further uncertainty about what was done for his argument.
Brandon S.,
You have it exactly backwards. The Planck response is caused by the surface temperature increase. That’s why it’s a negative feedback.
The point here that is most interesting is the peer pressure to shade sensitivity estimates to the high side. Annan, being more honest, had the temerity to point it out. And of course Annan and Lewis seem to have finally won the uniform prior debate, though without any gratitude expressed. And Lewis gets accused of shading his estimates low! 🙂
And that’s the thing about ATTP’s site. In its slavish devotion to the science as truth narrative, any discussion of the very real and important issues such as the replication crisis, the aforementioned publication and peer biases (and the list gets longer every day), is deleted or denied.
Oh and I forgot that just like models of planetary formation, CFD and GCM’s main use is in gaining qualitative understanding. Except when they are not.
mark bofill,
.
.
This is me again breaking a promise to not engage, but I’m sorry, I just can’t lay off.
.
It’s difficult to do both at the same time with the same person. But completely possible to do one or the other at the same time with multiple people as the situation with individuals warrants.
.
Nobody is perfect, we all get out of hand from time to time. There is no shame in my book for being human. That is all.
SteveF,
Make it 3.6W/m^2 then. That’s still a perturbation of the natural greenhouse effect of a few percent. That’s really all I was trying to get at. Apparently acknowledging that is still beyond some people.
Brandon,
Thanks for proving me right, once again. I kind of have, but you haven’t bothered thinking, and it’s a pretty well understood number.
Here’s a derivation for you, that I’m sure you’ll find reason to criticise (or my tone, or something else).
Given that we have an average surface temperature of about 288K and an average planetary temperature (non-Greenhouse) of 255K, the ratio of the outgoing flux, to surface flux, is 288^4/255^4=0.61.
Therefore you can write the outgoing flux as
$latex F_{\rm out} = \epsilon \sigma T_{\rm surface}^4,$
with $latex \epsilon = 0.61$ and $latex \sigma = 5.67 \times 10^{-8} W m^{-2} K^{-4}$ being the Stefan-Boltzmann constant.
If we perturb the system, we can write the Planck response as
$latex dF = 4 \epsilon \sigma T^3 dT.$
If you solve the above using the numbers I’ve already given and using $latex T = 288K$, you get 3.3W/m^2/K. Not exactly 3.2, but pretty close.
And I wish you would avoid calling people liars and dishonest, even if you believed it to be true. Since you appear to lack the basic common decency that most would regard as reasonable, I don't care how I respond to your comments and I don't care what you think of my comments. If anything, I shouldn't really give you the time of day. I'm amazed that you have the gall to complain about my tone. It almost seems that you think the world revolves around you. Everything you do is fine because you believe you're right, and anyone you choose to criticise still has to respond politely because that's what you want. Bizarre.
Correction: 255^4/288^4 = 0.61.
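A quick numeric check of that effective-emissivity Planck term (Python), using the corrected ratio:
sigma = 5.67e-8                    # Stefan-Boltzmann constant, W/m^2/K^4
eps = 255**4 / 288**4              # ~0.61
print(4 * eps * sigma * 288**3)    # ~3.3 W/m^2/K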
aTTP has still not given a reason why his combination of terms means anything. The Planck / feedback decomposition is an abstraction; the system responds as a whole.
Let me try an analogy. You’re a book publisher. You pay $500 to the author and printer, to print up 1000 copies of a book. You give the books to a bookseller with the agreement that he pays you $1 for each book sold. He sells 600 books, giving you $600 and recycling the remainder. At the end of the day, you’ve netted $100. Most people would describe the book as costing $500 and earning $600. But one could add up all of the negative items, saying that the book “cost” $900 ($500 actual expense and $400 of unsold books) and “earned” $1000 (potential value of all books). That would, to my mind, be a bizarre way of accounting, but it leads to the same result mathematically.
So it is with aTTP’s accounting. The Planck response didn’t happen. [Just as the 1000 book sales didn’t happen.] All that one can see in the climate is the net response ($latex \lambda dT $ in aTTP’s formulation).
The entire goal of the forcing approach is to be able to sum up in a single number the anthropogenic effect, and that’s F. Period. Not F plus a partial response term.
Harold,
I haven’t given you a reason because I can’t make head nor tail of what you’re getting at. This, for example, doesn’t make any sense.
You’re, of course, free to describe the system in a different – but equivalent – way, but suggesting that the Planck response didn’t happen is just slightly odd.
aTTP,
What I’m saying is that the Planck response was not realized. The system had a transient response of dT; that’s what happened, and it offset the forcing by $latex \lambda dT$. Decomposition of that term into “Planck” and “feedback” components is a human invention.
Harold,
Science is a human invention. The surface warmed by dT, and that produced a change in outgoing flux that one can express as
$latex dF = 4 \epsilon \sigma T^3 dT.$
This is typically referred to as the Planck response. It doesn’t really matter what caused the change in temperature for this to be the case.
On the other side of the equation we have the change in external forcing and the non-Planck feedbacks. Combining it all together can allow one to estimate how out of balance the system will be (assuming that it started in balance).
aTTP:
F is the measure of how “out of balance” the system is. Or would have been, without a change in temperature.
Feedbacks — both Planck and non-Planck — arise from the dT caused by F. Adding the partial feedback to F is double-counting.
Harold,
No, F is not the measure of how out of balance the system is.
No, it is not.
ATTP,
The problem with your derivation is that the ratio of surface emission to TOA emission is not a constant. Absent a change in albedo or incident solar radiation, emission to space is nearly constant, but ghg’s can change the surface temperature.
Assuming the pre-industrial temperature was ~1K less than now, or 287K, emission to space was still, as far as we know, 239W/m², ~255K. But using an emissivity of 0.98 for the surface, the emission was 377W/m² for a ratio of 0.63.
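Checking the pre-industrial ratio quoted above (Python, surface emissivity 0.98):
sigma = 5.67e-8                      # Stefan-Boltzmann constant, W/m^2/K^4
surface_up = 0.98 * sigma * 287**4   # ~377 W/m^2
print(surface_up, 239 / surface_up)  # ~377, ~0.63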
DeWitt,
Yes, I realise that. I wasn’t trying to build a full three-dimensional climate model and do a time-dependent simulation. I was simply trying to illustrate that we’ve perturbed the natural Greenhouse effect by a few percent. Make it 0.62 if you wish.
Ken Rice,
I think Harold is getting at the fact that the Earth’s infrared emission to space has not increased. If the sun’s flux had increased, then considering the increase in blackbody radiation would make physical sense. Point is, the whole “greenhouse forcing” construct is an abstraction, not real. What is really happening is the insulating effect of the atmosphere has increased. If the greenhouse effect is stated as a temp increase over the blackbody temperature of an earth without atmospheric influence, then the increase in surface temperature at equilibrium relative to that value is how much the greenhouse effect has changed. So far it is a few percent.
Hooray. FWIW, I said this in an update to my post a few days ago. You can also get the same basic result doing it my original way, but I guess it would be a bit much to expect people to acknowledge this. I’ll just be thankful for small mercies.
I will say that I don’t know what this means
What does “real” mean?
Ken Rice,
By ‘not real’ I mean the GHG influence is treated as equivalent to an increase in solar intensity (an increase in surface flux). When people talk about ‘forcing’ I think it mostly confuses the issue. The resistance of the atmosphere to heat flow is what is changing as GHG concentrations rise. Heat accumulating on Earth is actually decreasing the rate of loss to space.
Possibly, but if properly defined it shouldn’t be a problem.
Sure, but you can think of this as changing the energy balance, which – at a basic level – is the same as if there were an equivalent change in solar insolation. Of course, the details can be different (stratospheric cooling, for example), but if we’re simply considering basic energy balance formalisms, the difference is not that relevant.
This doesn’t sound right. What do you mean by this?
Ken Rice,
“This doesn’t sound right. What do you mean by this?”
.
Ummm… unless the amount of net energy from the sun has changed, the total heat lost to space plus accumulating in the Earth system has to be constant. So if heat is accumulating in the system, and we know that it is, less heat is being lost to space. And this kind of confusion (even among technically trained people) is why I say the conventional abstraction of “GHG forcing” is confusing… it complicates the discussion more than it clarifies, especially among the public. GHG’s restrict the rate of heat loss to space, and so lead to a surface temperature increase. That’s all there is to it. The only really important question is how much restriction.
SteveF,
Yes, but it's not the heat accumulating on Earth that is decreasing the rate of loss to space. Heat/energy is accumulating because we're losing less energy per unit time than we're gaining. At the moment (as you've said, I think) this is because we're increasing the amount of GHGs in the atmosphere, which acts to reduce the rate at which we lose energy to space, causing the surface to warm until we regain energy balance. Using "forcings" is simply a way to describe the impacts of this increased GHG concentration.
ATTP,
Radiative imbalance is also a way to describe the effect of increased ghg concentration and, IMO, is less confusing than ‘forcing’ even if it’s two words rather than one. To put it another way ‘forcing’ is scientific jargon for radiative imbalance.
DeWitt,
No, it’s not. A ‘forcing’ and a ‘radiative imbalance’ are not the same thing. If we apply a change in forcing and determine the radiative imbalance prior to any response to this change, they will be the same, but they are not – in general – the same.
Ken Rice,
” Heat/energy is accumulating because we’re losing less energy per unit time than we’re gaining.”
You can say it that way if you want. I prefer to think of it as heat accumulating in the system because the surface is warming, and the surface is warming because GHG’s are increasing, restricting heat loss to space. In any case, accumulation of heat only happens as a result of a change in surface temperature, and that means until a new equilibrium surface temperature is reached, the loss of heat to space is reduced by the amount being accumulated.
.
“Using “forcings” is simply a way to describe the impacts of this increased GHG concentration.”
.
Sure, and as I said, it is the conventional abstraction of the real process. I just don’t think it is in any way ‘simple’. I think it mostly confuses people.
DeWitt Payne:
That’s not how I see it used in discussions of climate sensitivity. The Planck feedback is a negative one, to be sure, but the Planck response is described as the warming caused by an increase in radiative forcing (or cooling if radiative forcings decreased) absent feedbacks. I’ll provide two quick results from Google to highlight this:
And:
The Planck response is the temperature change caused by a change in forcings absent feedbacks. The reason for this is when discussing feedbacks, one needs to have a reference system defined as your “no-feedback” system. What is typically used in discussions of climate sensitivity is the idealization of a blackbody planet.
The Planck response is not a feedback. It’s what we use to determine our reference system for the “no feedback” sensitivity. There is also a Planck feedback, but that is called the Planck feedback.
I guess Anders might have used the wrong phrase though. If he had meant “Planck feedback” when he wrote “Planck response” then I would have misunderstood him. I don’t know why he wouldn’t have pointed out what he meant when I made it clear what I interpreted “Planck response” to mean though.
SteveF,
Okay, I agree.
I didn’t say it was necessarily simple, but we are all people who’ve discussed this topic for some time, so I wasn’t expecting it to confuse people here.
Ken Rice,
“I wasn’t expecting it to confuse people here.”
.
And I doubt it confuses most people here. I think it does confuse the public… in fact, if you wanted to find a way to “explain” the influence of rising GHGs which would maximize confusion among the public (and facilitate lunatic blog discussions about things like back-radiation among those who have absolutely no understanding of the concept…. on both sides of the political divide) then the “equivalent to a forcing of” abstraction is, IMO, just about optimal.
Brandon,
I used Planck response pretty much exactly as in your quoted text.
If you invert 0.3 to get it into units of W m-2 K-1, you get a value of 3.333.
Not according to your own quoted text it’s not. You’ve misunderstood the final part of your quoted text, which is from here. The final bit is simply describing the no-feedback climate sensitivity (multiply 0.3 by 3.7 to get about 1.1C).
Anders:
You clearly referred to the “Planck response,” which is a very well understood term. As I explained to DeWitt Payne just above, it is defined as the reference system considered to be the “no feedback” system when looking at climate sensitivity. The Planck feedback is a very different thing. That I was unable to realize you meant one thing while using a term which referred to a different thing in no way suggests I “haven’t bothered thinking.”
Perhaps I could have realized what you meant if I had thought about it enough, but I would certainly have realized what you meant if you had simply chosen not to keep making unhelpful comments. If we wanted to be needlessly combative and try to disrupt communication, we could say it is only because you “haven’t bothered thinking” that you didn’t realize why I was saying things so different from you. I don’t see a point in that sort of behavior though.
The simple reality is you used the wrong phrase, and because I wasn’t familiar with how the Planck feedback is written out, I didn’t recognize your mistake. Consequently, I took your writing as literally referring to the Planck response when you actually meant the Planck feedback. That should be no big deal. Minor issues with communication like that are natural and easy to solve. You can choose to behave in a manner that makes minor issues like these far bigger deals, and then write:
But remember, you wrote:
When a person makes a genuine effort to understand you without any hostility or even snark, to the point they don’t even respond to it coming from you, I believe the best thing to do is try to help them understand you. Responding to people who are genuinely trying to understand you by treating them like garbage and writing unhelpful comments just serves no good purpose. It’s just being rude and obnoxious for no reason.
Brandon,
Not by you, it isn’t. Look at the units, for goodness sake!
I don’t believe that you are genuinely trying to understand me and I have every reason to be obnoxious and rude towards you.
Anders:
The text goes on to say:
The “no feedback” sensitivity is called the Planck response. That’s why it is compared to the “with feedback” sensitivity. As for the numerical values, of course they’ll come out the same. The same formula is being used.
That said, I suppose it is possible “Planck response” is used in more than one way, and I’ve inadvertently been exposed to only one usage. I haven’t seen the phrase in a glossary or other source for a formal definition. I just keep seeing papers that use it the same way, like this one:
I chose this one as an example because there is no possible ambiguity in what it says the Planck response is, but I’ve seen plenty of others which use it the same way. I don’t think I’ve ever seen it used a different way. Maybe I’m wrong about what the phrase means, but if so, it’s because I haven’t seen it used any other way.
Regarding OHC, I don’t doubt that there is some amount of turbulent mixing, and with warming at the surface and lower atmosphere, some amount of heat gain.
.
But there are uncertainties ( observation bias because we’re not measuring under the sea ice where significant exchange takes place in Polynyas ).
.
Further, I don’t believe anyone actually knows what the relative exchanges are.
.
The temperature of the oceans at depth is a lot closer to zero than it is to the average temperature of the surface. This indicates to me that the convection of cold waters formed at the mostly unsampled poles is much larger than turbulent mixing. Indeed, mixing is least during summer, when the surface waters are warmest, because of stability. Also, the waters in the deep North Pacific are very old (a millennium?). The contribution of turbulent mixing to these waters must be close to zero.
.
That doesn’t mean the turbulent mixing term is zero globally, and the OHC increase in the sampled areas is positive.
.
But there too, there is uncertainty.
.
The reference 2012 Levitus paper indicates: “This warming [ 0 to 2000m, 1955 thru 2010 ] corresponds to a rate of 0.27W/m^2 per unit area of earth’s surface.”
.
0.27 W/m^2 is not the CERES 0.0 W/m^2 and also not the 0.8 W/m^2 listed above.
.
As far as Ken’s formulation and its relevance to the hypothetical ECS versus observed response go, I have no problem with the OHC term (though of uncertain value).
Whatever contribution to OHC is within the ocean atmosphere system. The question becomes at what rate is the OHC available to the atmosphere? Pielke argues that the duration is long ( four centuries? ) and that the rate would be slow.
Brandon,
I’m clearly wasting my time, but I’ll try and explain this.
The Planck response (according to the text you quoted) is 0.3 K/W/m^2. If there were no other feedbacks, then if we doubled atmospheric CO2 (which would produce a change in forcing of 3.7W/m^2) then the Planck response alone would act to return to equilibrium and the resulting warming would be
0.3 K/W/m^2 x 3.7 W/m^2 = 1.11 K
If, however, there are other feedbacks operating, then returning to equilibrium will require warming that is different to this and will depend on whether the net effect of these other feedbacks is to amplify the forced warming or not. However, this does not change the magnitude of the Planck response. It is still 0.3K/W/m^2.
If you really want to understand this, you can always try reading Soden & Held (2006). In particular Table 1. You may note that their Planck term is -3.2, but that’s because they’ve included the negative sign and also written it as W/m^2/K, rather than as K/W/m^2.
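For anyone following the arithmetic, here is a minimal Python sketch of the calculation described above; the 0.3 K/W/m^2 Planck response, the 3.7 W/m^2 forcing for doubled CO2, and the 1.95 W/m^2/K non-Planck feedback figure are the values quoted in this thread, and the feedback split is illustrative rather than a definitive estimate:
# Minimal sketch of the no-feedback vs with-feedback arithmetic discussed above.
# All numbers are the ones quoted in the thread; treat them as illustrative.
planck_response = 0.3    # K per W/m^2 (equivalent to a Planck feedback of ~ -3.2 W/m^2/K)
dF_2xCO2 = 3.7           # W/m^2 change in forcing for a doubling of CO2

# No-feedback warming: the Planck response alone restores balance.
dT_no_feedback = planck_response * dF_2xCO2
print(round(dT_no_feedback, 2))            # ~1.11 K

# With other (non-Planck) feedbacks the net feedback parameter changes,
# but the Planck response itself does not.
planck_feedback = -3.2                     # W/m^2/K
other_feedbacks = 1.95                     # W/m^2/K (illustrative)
lambda_net = -(planck_feedback + other_feedbacks)   # 1.25 W/m^2/K
print(round(dF_2xCO2 / lambda_net, 2))     # ~2.96 K once feedbacks are included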
Anders:
Explaining what people have already indicated they know does tend to be a waste of time. The issue here is not the formula being used, but how the phrase “Planck response” is used. My contention has been that it is used in a particular way when discussing climate sensitivity, which is different from how you used it, causing me to misunderstand what you meant.
Which I consider a minor matter. If my misunderstanding of what you meant had been pointed out at the start, I wouldn’t have thought anything of any of this. I’d have dropped the matter as just a minor communication problem. So while you can say things like:
The reality is the only reason this discussion has extended beyond a couple comments is you’ve repeatedly chosen to engage in an unhelpful manner rather than just have a simple and reasonable discussion. You can choose to justify that behavior based upon your negative opinions of me, but your beliefs about me are wrong.
As far as I know, the phrase “Planck response” is used to refer to the “no feedback” sensitivity of the planet. Based on that, when I read your post which referred to the “Planck response,” I took it to mean a reference to the “no feedback” sensitivity. That’s all. There was nothing nefarious about that.
TE,
Levitus, 2012 is out of date.
Using the current data from 2005-2015, the linear fit has a slope of 1.07E22 Joules/year. Converting to W/m² over just the ocean surface area gives 0.94W/m². If we assume that the OHC change is actually the global imbalance, then it’s 0.66W/m². Eyeballing says the slope is increasing, but I don’t think four years is long enough for statistical significance.
Even looking at 2005-2011, the global imbalance is 0.46W/m², not 0.27.
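For those checking the unit conversion behind those numbers, a few lines of Python; the 1.07E22 J/yr slope is the value quoted just above, and the surface areas are the standard figures used later in the thread:
# Convert an OHC trend (Joules/year) into an average flux (W/m^2).
slope_J_per_yr = 1.07e22        # 0-2000 m OHC linear-fit slope, 2005-2015 (as quoted)
seconds_per_year = 3.156e7
ocean_area_m2 = 3.619e14        # world ocean surface area
earth_area_m2 = 5.101e14        # total Earth surface area

power_W = slope_J_per_yr / seconds_per_year
print(round(power_W / ocean_area_m2, 2))   # ~0.94 W/m^2 over the ocean only
print(round(power_W / earth_area_m2, 2))   # ~0.66 W/m^2 averaged over the globe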
Brandon S.
Then you should use the opportunity to use it correctly rather than repeat the mistakes of others.
Or in terms of parent to child: If Billy jumped off a cliff, would you jump too?
If we assume that the OHC is actually the global imbalance, then it’s 0.66W/m².
Yes, I think that’s the relevant area to consider.
Even looking at 2005-2011, the global imbalance is 0.46W/m², not 0.27.
Right, so we’ve got:
.
0.80W/m² above
0.66W/m² (2005 thru 2015)
0.46W/m² (2005 thru 2011)
0.27W/m² (1955 thru 2010) and
0.00W/m² (2000 thru 2015, CERES)
.
They’re all different periods and probably reflect some natural variability. But is there a compelling reason to use a particular one of these values?
DeWitt Payne:
If the publishing scientists of a field use a phrase a certain way, I would say using it the same way is, in fact, using it correctly.
Brandon,
1. You can use – by definition – the Planck response to get the no-feedback climate sensitivity. This does not mean that it is only relevant to the no-feedback climate sensitivity.
2. This discussion went exactly as I expected. You started with a claim that someone else was wrong (not possibly wrong, but wrong). That person points out that they’re not wrong. You double down and then complain because they aren’t explaining things properly – ignoring that if you’re so certain then they shouldn’t need to. It then turns out that your initial claim was wrong, but you then go on to complain that they didn’t use the terminology as you think it should be used, therefore it is still their fault.
3. You have a post on your site calling me a liar. This is based on a draft of a paper that you found and publicised despite it saying confidential and you having no right to make it public. Despite this you still have the gall to complain about my tone when responding to you. While that post exists you won’t get better. If you have the decency to correct your post, maybe I will respond in a better way. I don’t hugely care. You’re the one complaining.
Brandon S.
Cite please.
“I have made up my mind that using the Planck response to calculate radiative forcing while completely disregarding feedbacks is wrong if you believe such feedbacks have a strong effect. I’ve also made up my mind you believe such feedbacks have a strong effect since you’ve argued feedbacks more than double climate sensitivity many times.”
HUH?
Here is my official position on ECS. It has not changed in years.
ECS falls somewhere between 1.5C and 4.5C
IF
YOU
ASK
ME
TO
TAKE
AN
UNDER/OVER
BET
at 3C…….
I will take the under.
Ideally I would rather just talk about lambda.
and feedbacks.
DeWitt Payne (Comment #147017),
.
.
Since the IPCC estimate is that 0.93 of the warming goes into the oceans, would it not make sense to do 0.66 W/m² / 0.93 = 0.71 W/m²?
.
.
I would tend to agree, but my warmist eyeballs have difficulty ignoring what looks like an acceleration.
.
.
Sure, but again, six years is probably not enough for statistical significance. Also somewhat dated now, but Stephens et al. (2012) is the latest energy budget paper I know of. I assume you’ve seen it, but I thought I’d throw it into the mix. Your back of envelope calcs make sense to me, especially as they are close enough to my own, and are consistent with the result of 0.6 +/- 0.4 W/m² at TOA from the paper.
Steven Mosher (Comment #147025),
.
.
I know, right?
.
.
S’posed to be baked into lambda, innit?
.
1) ΔTs = λ ΔF
2) ΔF = α ln(C/C₀)
3) ΔTs = λ α ln(C/C₀)
.
ΔTs = 0.8 * 5.35 * ln(2) = 2.97 K
.
You win the under. But … non-linearities …
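As a sketch, that back-of-envelope is just the following (the lambda and alpha values are the ones assumed in the comment above, not settled numbers):
import math
lam, alpha = 0.8, 5.35                 # K/(W/m^2) and W/m^2, as assumed above
dTs = lam * alpha * math.log(2.0)      # warming for a doubling of CO2
print(round(dTs, 2))                   # ~2.97 K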
Brandon Gates,
0.66W/m² is calculated by converting the slope of the linear fit of the annual OHC change from joules to watts and dividing by the entire surface of the Earth, 5.101E14m² rather than just the surface area of the world ocean, 3.619E14m². Since the world ocean is 71% of the Earth’s surface, 3.619/5.101*100 = 70.95%, then the conversion is 0.94 *0.71 or 0.66/0.71 depending on your assumption.
DeWitt Payne (Comment #147028),
.
Indeed, I understand the surface area adjustment to get a global imbalance figure. I also understand that 6% of the absorbed energy goes into warming the land surface and melting ice. The remaining 1% is atmospheric warming. Very approximate figures.
.
I’m not arguing that the *answer* of my calc, 0.66 W/m² / 0.93 = 0.71 W/m², is correct, only asking whether the *method* makes sense when using *only* delta-OHC [edit: d/dt OHC] to guesstimate energy balance.
… ps, it’s unfortunate that some of the figures we’re using are coincidentally similar when they represent completely different things …
Steven Mosher (Comment #147025)
Hi.
“Here is my official position on ECS. It has not changed in years.”
Steve,
Why not?
I believe you said you would change your position, up or down, depending on how new information came in, in a comment to me 3 years ago and confirmed this view in a later comment.
New information has come in leading to a decrease in the accepted views of ECS mid values.
Why have you not changed your mind re both the range and the midpoint?
Real question.
My comment is that not changing your view in years is either misleading, given your past statements, or evidence that you are not adapting to changes in information as they come in.
Your current position seems untenable?
Brandon G.
The TOA imbalance to a first approximation is the amount that’s going into the ocean converted to global area. If an additional 7% is also being absorbed by other than the oceans, that would give 0.66/0.93 = 0.71. That’s the same number to one significant figure. I don’t think you can really justify more than one significant figure given only 11 data points, the significant uncertainty in the OHC measurements and the somewhat sparse ARGO float coverage in the early years.
It will be interesting to see what, if any, effect the current El Nino has on the change in OHC for 2016 and 2017.
DeWitt Payne (Comment #147032),
.
Ok, the sig figs argument makes sense, thanks. The Stephens (2012) result remains the best I know of, and it does give me some confidence that our OHC-rate calcs do seem to give a good first approximation to those conclusions.
.
[Edit: and yes, I’m interested to see what El Nino does to OHC, if much, down to the full 2 km depth usually reported.]
.
Cheers.
Steven Mosher (Comment #147025)
Hi.
“Here is my official position on ECS. It has not changed in years.”
Steve,
Why not?
I believe you said you would change your position, up or down,depending on how new information came in, in a comment to me 3 years ago and confirmed this view in a later comment.
New information has come in leading to a decrease in the accepted views of ECS mid values.
1. I would STILL take the under bet. Nothing has changed.
Do you see how framing it as a bet works?
2. If estimates of ECS go down, I WILL STILL TAKE THE UNDER BET at 3C.
3. Here is a way to think about it. I am unsure about the shape
of the PDF from 1.5C to 3C.
4. IF you were smart you would have asked me if I took the under bet at 2.5C
5. A longer pause — say 5 more years — would have been compelling
Why have you not changed your mind re both the range and the midpoint?
Real question.
I haven’t seen sufficiently compelling data to force a change.
Nic has provided some good reasons for moving a midpoint down.
One paper is not decisive. As I said above, an interesting bet
might be at 2.5C or 2.7C.
My comment is that not changing your view in years is either misleading, given your past statements, or evidence that you are not adapting to changes in information as they come in.
Your current position seems untenable?
1. Brandon seemed to imply that I had argued many times that
feedbacks were high or double or some such thing. Funny, I don’t recall that, which was
the reason why I reiterated my position as I normally state it. As a bet.
2. My position hasn’t changed. I would STILL take the under bet.
If the world had gotten rapidly WARMER I would NOT take
the under bet. You might take the under bet at 2C. I would
not.
3. Taking the under bet at 3C does not mean I think the midpoint is 3C. I want to win bets after all. What you don’t know is
the percentage of winnings I find interesting.
“The issue here is not the formula being used, but how the phrase “Planck response” is used. My contention has been that it is used in a particular way when discussing climate sensitivity, which is different from how you used it, causing me to misunderstand what you meant.”
I think even Monckton knows what it means..
But you could always practice CHARITY.
1. Based on your interpretation you thought ATTP made a mistake.
2. You could always ask… “Hey are you using that term to mean this?”
Here is the thing. When we say “X” doesn’t make sense or X is Wrong or X is lying, there is always a choice we make.
We choose to believe our own reasoning.
We could choose to ask questions first… to make sure.
DeWitt, here’s a reference:
THE CONFUSION OF PLANCK FEEDBACK PARAMETERS
[Note there are some problems with this paper, see Lucia’s post here, but I think it gets the basic definitions correct.]
Steven Mosher,
.
I’ve got to say, I think it’s wonderful but strange that you only bother to deliver these lectures to the folks on one side. It’s wonderful because it’s my side that gets all the good advice. I appreciate you looking out for us like that, although I guess it must be rough on you to favor us in such a nakedly partisan way.
.
Thanks though.
Re: Steven Mosher (Comment #147035)
One of the main things I learned during my academic time, trying to mix it with incredibly smart people: always try to work out why yr antagonist might be *right* before leaping into the fray …
The paper above by Kimoto references this paper by S. Bony et al 2006:
How Well Do We Understand and Evaluate Climate Change Feedback Processes?
[See in particular Appendix A.]
Steven Mosher (Comment #147034)
“4. IF you were smart you would have asked me if I took the under bet at 2.5C”
True.
Reasonable intelligence [not smart* enough], prepared to learn, biased viewpoints subject to very slow change. Maths not up to speed yet, which is why I rarely engage with Carrick, DeWitt, or yourself on the maths unless something seems totally misinterpreted.
[No, not a job interview].
Probability is the problem with Climate change on so many levels and issues. Lucia and Tamino seem to have a better grasp than most on mistakes in these areas.
ATTP here discussing his ideas is really good.
Neutral [sort of] ground.
Not sure who is winning at the moment.
ATTP seems very sure but do his comments imply a low climate sensitivity that he is reluctant to admit?
No, I won’t ask about your under bet at 2.5; as I said, paradigms have changed.
If you can seriously ask about under 2.5 I feel you should state that you have changed your range somewhat and that you are not stuck in the past.
Thanks for replying, as usual when you do respond you make a lot of sense.
Sorry Anders, but if you’re going to try to rehash old points rather than discuss the issue at hand, this is going to be a waste of everybody’s time. This is simply wrong:
As I didn’t make anything public. I simply took what was already published in a public location and linked to it. People have every right to post links to things they find published in public locations.
As for me calling you and your co-authors liars, you can dislike that all you want, but the reality is I gave a clear explanation for my accusations. If the explanation is accurate and true, then the accusation is perfectly appropriate. That’s true even if you don’t like it. People are not obliged to refrain from pointing out a person’s wrongdoings simply because that person doesn’t want them pointed out.
If you’re going to use things unrelated to the topic at hand as an excuse to avoid having a reasonable discussion, then everyone would be better off if you’d just not say anything. Engaging in a discussion with the intention of causing problems doesn’t help anything.
Carrick,
In both links, the Planck feedback is referred to as being caused by the increase in temperature. I see no support for saying that the Planck feedback causes the temperature to increase as Brandon S. would have it. Just one example:
JOC, 19, p.3477.
I think I’m done for the night. I just got to Steven Mosher’s comment where he responds to someone else entirely and writes:
Even though that doesn’t respond to anything I had said. Spamming Caps Lock to yell things at people unrelated to what they say is just too much. I get there are people who stand on street corners and yell about how the end of the world is coming, about how aliens are real or whatever else, but I try to ignore them too.
Night guys. I kind of dread seeing what sort of strangeness will be posted while I’m gone.
Oh, before I go, I should point one thing out. DeWitt Payne, I had provided three examples prior to you asking for any. One had the text:
I’ve already suggested it might be that the phrase is used in different ways and failed to realize that because I’ve only been exposed to usages like in this example. But I’ve seen the phrase many times in discussions of climate sensitivity, and in each case, “Planck response” was used to refer to the no-feedback sensitivity.
Anyway, I just wanted to put that out there before leaving for the night. It feels a little weird to simply repeat myself, but maybe it could help. And now seriously, I’m out. I have way too much yard work to do tomorrow to be staying up late.
Steve Mosher,
I find the empirical evidence compelling; I’d take a bet at 2.2 and be pretty confident of winning, save for that I would be unlikely to live long enough for >2.2 to be convincingly excluded by data. The reason I am comfortable with 2.2 is that it seems to me presumed high aerosol offsets have long been used to buttress claims of high sensitivity. At some point aerosol influences will be more clearly defined, and it seems to me likely that definition will be centered well below the current IPCC most probable value.
“After two decades of wrestling with this issue, I’m not sure how useful the concept of ‘feedback’ is in the context of the climate system.” J Curry, December 29, 2010.
It seems to me the controversy is a bit like the stock market. When one person thinks it is right to buy another thinks it is time to sell.
The current state of the market reflects accurately all known factors and forcings.
So it is with the climate. All the effect of a rise in CO2 is already present, and were it possible to freeze all things as they are (e.g. the CO2 level), then the current state is exactly where it should be.
If we add more CO2 then that will cause a change but what has been added cannot currently cause any further change. Its effect is already there.
What the effect of future forcings will be can be quibbled over scientifically, but estimations of forcings by guesswork deserve limited respect, and forcings predicted by science [CO2 doubling] that do not eventuate, as in the pause, should throw cold water on the concept of modelling in a vacuum of knowledge.
Brandon,
Then stop whining when I respond to you as I do. I’m simply telling you what it will take for me to respond to you in a different manner. This is not complicated! You can dislike it, but the fact that you would spend comment after comment whining about it is what I find utterly bizarre. You don’t get to call others liars and then expect a better tone from them.
A decent person would have contacted the authors to tell them that something that they thought was private was not. You did no such thing. You publicised it and then judged the authors on the basis of a draft. I regard that as extremely poor behaviour (and that is being polite). This is why I respond to you as I do. In fact, I regard my responses to you as far, far better than you deserve. So, please stop whining about my tone when I do choose to respond to you, because it’s not going to get any better while your post calling me a liar still exists. Do you understand this? Was this clear enough for you? Were there any words that weren’t clear that I should explain again? Are you confused about any of it?
I’ll also point out why – I think – Brandon is still confused and why what he’s quoting isn’t quite saying what he thinks it is saying. Climate sensitivity is often written with units of K/W/m^2 or W/m^2/K. If you then know the change in forcing (3.7W/m^2 for a doubling of CO2) you can get the change in temperature.
The Planck response has units of K/W/m^2 or W/m^2/K and therefore has the same units as climate sensitivity (or as climate sensitivity is often presented). Therefore in the absence of other feedbacks, the Planck response is the no-feedback climate sensitivity (since all that you have is the change in forcing and the Planck response). However, this doesn’t mean that the Planck response suddenly changes if other feedbacks are operating. It stays the same. What changes is the climate sensitivity (or the change in temperature required to return to equilibrium).
Anyone who discussed personal gripes:
On this thread, to the extent possible, try to avoid discussions of personal gripes or inserting them in otherwise technical comments. Discussions of gripes belong on http://rankexploits.com/musings/2016/ban-bing-is-too-long/.
Note: I’m seeing very few of these ‘gripe’ discussions, but I see arguments about ‘liar’, just above. I request one avoid both the initial volley and the hit back (with its nearly inevitable back and forth here).
So try to separate these. SteveF’s post is about climate sensitivity. Try to stick to that. Thanks.
aTTP:
Rather than use forcing as the measure of anthropogenic effects, you assert (#146983), “On the other side of the equation we have the change in external forcing and the non-Planck feedbacks. Combining it all together can allow one to estimate how out of balance the system will be (assuming that it started in balance).”
.
Reviewing your equations:
Basic equation:
$latex N = F + \lambda dT$
where N is the net radiative imbalance, F is forcing.
You decompose the lambda term into a Planck response and other feedbacks:
$latex N = F + W_{feedback} dT - P dT$
providing a value for coefficient P = 3.2 Wm-2/K.
Finally, you propose a new metric (AM = aTTP’s metric) defined by:
$latex AM = F + W_{feedback} dT = N + P dT$
.
Let’s apply these equations to a CO2 doubling experiment, in which pCO2 is doubled at time t=0 and maintained as the system equilibrates. For the sake of argument, take ECS as 3 K/doubling. Over time, dT increases from 0 at t=0 to 1 K (perhaps at t=50 years) to 2 K (t~200 years) and eventually to 3 K (t~millennia). As the forcing is fixed, only dT is necessary to evaluate the above equations. Let’s see what happens to your metric over time.
[Coefficients: With F_2x as 3.7 Wm-2, lambda = -F_2x/ECS = -1.23 Wm-2/K, W_fb = +1.97 Wm-2/K.]
dT = 0K, AM = 3.7 Wm-2.
dT = 1K, AM = 5.7 Wm-2.
dT = 2K, AM = 7.6 Wm-2.
dT = 3K, AM = 9.6 Wm-2.
.
Recap: In this thought experiment, forcing (F) is held at 3.7 Wm-2. Net radiative imbalance (N) decreases from 3.7 Wm-2 to zero. Yet a metric which is intended to “estimate how out of balance the system will be” increases to 9.6 Wm-2, ~2.5x any imbalance which ever existed. Does this make any sense? No.
Harold,
My metric wasn’t to estimate how out of balance the system would be; it was to estimate how much we have perturbed the natural Greenhouse effect. So, if the ECS is 3K, then the overall feedback response (P - W_{feed}) will be 1.25 W/m^2/K. If we want to divide this into a Planck response and non-Planck feedbacks, then we get
W_{feed} = 1.95W/m^2/K
P = 3.2W/m^2/K
So, in your example,
dF = 3.7W/m^2
dT = 0K, N = 3.7W/m^2, PdT = 0, W_{feed}dT = 0
dT = 1K, N = 2.45W/m^2, PdT=3.2W/m^2, W_{feed}dT = 1.95W/m^2
dT = 2K, N = 1.2W/m^2, PdT=6.4W/m^2, W_{feed}dT = 3.9W/m^2
dT = 3K, N = 0 W/m^2, PdT = 9.6W/m^2, W_{feed}dT = 5.85W/m^2
So, the system is back in energy balance (N = 0), but we’ve perturbed the natural Greenhouse effect by 9.6W/m^2 which is just under 10%, since the natural Greenhouse effect is about 120W/m^2.
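A short Python sketch reproducing the table above; the forcing, Planck term, non-Planck feedback term, and the ~120 W/m^2 figure for the natural greenhouse effect are the values quoted in this exchange, not an independent estimate:
# Reproduce the numbers in the comment above; values as quoted, purely illustrative.
dF = 3.7          # W/m^2, forcing for doubled CO2
P = 3.2           # W/m^2/K, Planck term
W_feed = 1.95     # W/m^2/K, non-Planck feedbacks
lam = P - W_feed  # 1.25 W/m^2/K net, so ECS = 3.7/1.25 ~ 3 K
GHE = 120.0       # W/m^2, rough size of the natural greenhouse effect (as quoted)

for dT in (0, 1, 2, 3):
    N = dF - lam * dT             # remaining radiative imbalance (~0 at dT = 3 K, up to rounding of lam)
    perturbation = P * dT         # the "perturbed greenhouse effect" in this framing
    print(dT, round(N, 2), round(perturbation, 1), round(W_feed * dT, 2),
          str(round(100 * perturbation / GHE, 1)) + "% of the GHE")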
Here’s a question I was wondering about. Why, in Anders’s formalism, does he say the total greenhouse effect is about 120Wm^-2? Every reference I’ve seen gives the actual back-radiation from the Earth’s atmosphere as something like 330 – 340Wm^-2, not 120Wm^-2. Similarly, the Earth’s surface radiation is 390 – 400Wm^-2. Its radiation at the top of the atmosphere is ~240Wm^-2.
More reasons to question OHC.
.
Why so concentrated in the Southern Ocean?
.
Why so little OHC increase in the Northern Hemisphere?
.
The ocean transfers heat to the atmosphere, intensely so in the narrow bands around Antarctica, where cooled water subsides. What role does horizontal motion of replacement water play in estimates of OHC?
PaulK, I have further analyzed a number of the 33 CMIP5 models used in the 4XCO2 experiment, plotting the regression data of the net TOA versus the GMST, and I find that I cannot agree with the treatment or conclusions of the Andrews paper that you linked above.
What I find is that the plots are all essentially linear, except that in some cases the first (and sometimes the second) year’s data point lies above the fitted regression line. The data points are spread such that these two early points have very high leverage on the slopes the Andrews paper reports using the first 20 years of data points. I would counter the Andrews argument for non-linearity (two line segments with different slopes) with my analysis showing that the last 148 and/or 149 yearly data points have excellent linearity, and that the first and/or second points are the outliers, probably because the net TOA and temperature are changing relatively much more rapidly in those years than in succeeding years and thus can have much more variation. The biasing on the high side is probably due, at least in part, to averaging the relatively rapidly declining monthly data of the first 2 years into annual data points. How the data are adjusted with the pre-industrial (pi) control can also have an effect on the spread of the data around the fitted regression line: using the mean of the pi control shows much less scatter than subtracting the entire corresponding pi control series from the net TOA and temperature series.
I also find it a bit misleading that the Andrews paper did not plot the fitted regression line for the entire regression, but rather used 2 line segments that were chosen rather arbitrarily and showed only the (different) slopes of those 2 lines. As it turns out, the year 1 and 2 data points have very little influence on the slope of the full fitted regression line.
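The leverage point is easy to illustrate with synthetic data; this is not the actual CMIP5 output discussed above, just an idealized Gregory-style regression in Python with the first two “years” deliberately biased high:
# Illustrative only: synthetic net-TOA-vs-dT regression to show how much the
# first year or two can steer a short-segment OLS slope while barely moving
# the slope of the full 150-point fit.
import numpy as np

rng = np.random.default_rng(0)
dT = np.linspace(0.5, 7.0, 150)                   # hypothetical warming trajectory, K
N = 7.4 - 1.05 * dT + rng.normal(0, 0.15, 150)    # underlying linear N(dT) plus noise
N[:2] += 0.8                                      # first two "years" biased high

print(np.polyfit(dT, N, 1)[0])            # slope using all 150 points: close to -1.05
print(np.polyfit(dT[2:], N[2:], 1)[0])    # slope dropping the first two points: ~ -1.05
print(np.polyfit(dT[:20], N[:20], 1)[0])  # slope of the first 20 points only: much steeper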
To clarify which sort of exchanges do not belong on this thread:
Brandon Shollenberger (Comment #147060)
..and Then There’s Physics (Comment #147062)
These will be moved to “ban bing”.
aTTP (#146983), “On the other side of the equation we have the change in external forcing and the non-Planck feedbacks. Combining it all together can allow one to estimate how out of balance the system will be (assuming that it started in balance).”
.
aTTP (#147059): “My metric wasn’t to estimate how out of balance the system would be, it was to estimate how much we have perturbed the natural Greenhouse effect.”
.
[My emphasis on both.]
Harold,
In that comment, I realise that I said “other side of the equation”, but I meant that if you combine all the things together you can determine how out of balance the system is. I certainly did not mean that the Planck response plus the radiative imbalance gives you how out of balance the system is, which I kind of thought would be obvious.
Do you at least agree, after my earlier comment, that 1K is about a 3% perturbation of the natural GHE, 2K is about 6%, etc. That was kind of the key point.
SteveF, why do you say 2.2 instead of perhaps 1.8? I guess ECS vs EfCS is important here too. I do believe that it’s hard to nail down the second significant digit.
The one thing that all should be able to agree on is that given observational history the GCMs are substantially overstating warming responses and should be either fixed or discarded for quantitative predictions. In short they underperform properly tuned simple models. Not a ringing endorsement for a billion dollar line of “research.”
Do you at least agree, after my earlier comment, that 1K is about a 3% perturbation of the natural GHE, 2K is about 6%, etc. That was kind of the key point.
In the “we’re screwed” (or whatever it was titled) post you worried in conclusion about a 10-20% perturbation. 20% being > 6 degrees right? When do you expect that to happen? An “in X hundred years” would be sufficient accuracy on the guess. Thanks
“Steve Mosher,
I find the empirical evidence compelling; I’d take a bet at 2.2 and be pretty confident of winning, save for that I would be unlikely to live long enough for >2.2 to be convincingly excluded by data. The reason I am comfortable with 2.2 is that it seems to me presumed high aerosol offsets have long been used to buttress claims of high sensitivity. At some point aerosol influences will be more clearly defined, and it seems to me likely that definition will be centered well below the current IPCC most probable value.”
We agree that the aerosol numbers are key. Whatever happened to a replacement for Glory?
aTTP (#147082)
“I meant that if you combine all the things together you can determine how out of balance the system is.”
N is the measure of how out of balance the system is.
F is the measure of the anthropogenic effect.
.
Your metric is a meaningless combination of terms.
Harold,
Just because you don’t understand it, doesn’t make it meaningless. Do you at least agree that a warming of 1K is about 3% of the GHE, 2K 6%, etc?
David Young,
I think ~1.8C will turn out to be very close to the correct number. I would bet on it being under 2.2C because I would like better than a 50% chance of winning the bet. 😉
.
I agree that the models are being used incorrectly; they are simply not capable of making a reasonable projection of future warming, as their current divergence from reality makes clear. The thing that is so strange is that even the really bad models (crazy high sensitivity and warming, crazy variability) continue to be considered plausible, even when they clearly are not. Some may argue that these models help give “insight” to some behavior or another… but the behavior which really matters, and the reason the models are funded, is warming in response to GHG forcing. If the models can’t do that (and most clearly can’t) then they are useless as tools to inform public policy decisions.
Kenneth,
Can you link to a graph which shows the difference between your results and the Andrews et al results?
SteveF I’ll do that tonight.
Steve Mosher,
I have heard not a word about a replacement, not even from the group who were going to be responsible for receiving the data. I find it very strange. If the project was equal in worth to a Solyndra company before the launch failed, why is it not worth replacing? I have no idea, and have never heard a convincing explanation. I have to believe it would cost less to duplicate the satellite than the first one cost, since all the engineering work is already paid for.
SteveF, here are the first 12 regression plots (Net TOA versus GMST per Andrews) from the CMIP5 4XCO2 experiment that I plotted most recently. If the plots are too small to readily view let me know and I will put an Excel file into Dropbox for viewing. These plots should show what I was suggesting in the posts above. I have data for plotting 33 model results, but I believe these 12 results give a representative picture.
http://imagizer.imageshack.us/v2/1600x1200q90/923/ciihnl.png
Kenneth,
Thanks. Maybe the discrepancy is partially explained by the averaging of all the models, combined with the presentation of a few individual models that happen to suggest a break point. In any case, the obvious linearity after the first few years through 150 years sure indicates the difference between effective and equilibrium sensitivities is minimal. After all, the 4X step change is the extreme case, which would tend to exaggerate any difference. Real world forcing has increased quite slowly, even compared to the 1% per year for 70 years (to evaluate transient response). So warming experienced so far lies between transient and equilibrium responses. Will you be contacting the authors about the discrepancy?
“Will you be contacting the authors about the discrepancy?”
Probably will. I have contacted Andrews previously about using TLS and OLS for these calculations and why at the time they were not using all the available CMIP5 model 4XCO2 experiment results. He was rather blasé in his replies, and I paraphrase here: no, he had not tried TLS, and he was not aware of more available data.
Kenneth,
I realized my previous comment may not be clear. What I was trying to say was that most of the applied forcing until now was applied for more than the 20 years Andrews et al suggests there is a break point, so even if the break point were real, most warming we have experienced is long past that period: empirically determined effective sensitivity should be very close to equilibrium sensitivity, at least based on the results from Andrews et al.
DeWitt Payne (Comment #146960) ” OHC is increasing, which means that the radiative balance at the TOA is not zero.”
It must always be zero or it would not be the TOA.
–
DeWitt Payne (Comment #146986).” Absent a change in albedo or incident solar radiation, emission to space is nearly constant, but ghg’s can change the surface temperature.”
–
the equilibrium temperature does not depend on the size of the planet, because both the incoming radiation and outgoing radiation depend on the area of the planet. Because of the greenhouse effect, planets with atmospheres will have temperatures higher than the equilibrium temperature [in the atmosphere component].
–
Energy imbalance just seems the wrong term.
Perhaps Total Usable Energy?
Mirror earth low.
Black earth high.
More CO2 higher.
TUE sensitivity rather than Climate Sensitivity?
Once the heat in the ocean and air have built up enough to radiate the required energy back into space it will stop gaining energy. The level that this has to occur at for ocean heat content is minuscule and happens immediately.
The air is at the right temperature for the amount of CO2 it contains right now and is in overall perfect balance with the ocean and land.
please ignore the rant,
I am trying to get the terms Climate sensitivity and Equilibrium Climate sensitivity into context, is there a ratio of 3.7? plus deal with feedback responses and the Planck response and ATTP’s sensible posts and I cannot do it!
angech,
“It must always be zero or it would not be the TOA.”
No, if heat is accumulating in the oceans (and to a much lesser extent, elsewhere) then there is less heat being lost to space than energy arriving from the sun. The accumulating heat is the “imbalance” at the top of the atmosphere.
.
“the equilibrium temperature does not depend on the size of the planet”
.
DeWitt most certainly did not suggest the temperature depends on the size of a planet. I think you misunderstood what DeWitt wrote.
.
“Once the heat in the ocean and air have built up enough to radiate the required energy back into space it will stop gaining energy. The level that this has to occur at for ocean heat content is minuscule and happens immediately.”
.
What has to change for equilibrium to be established is the temperature of the surface, not ‘heat’. The accumulation of heat depends on the effective heat capacity of the surface (including the heat capacity of the oceans) along with the required increase in surface temperature to establish a new equilibrium. When GHGs increase, the surface has to warm to allow the energy from the sun to escape to space. But for the surface to warm, energy MUST accumulate in the system. How much heat is accumulated (and how quickly the system approaches a new equilibrium temperature) depends on the effective heat capacity. Unfortunately, there is no single value for heat capacity, it depends on the length of time a change in energy flow is applied. The longer the time period evaluated, the larger the apparent heat capacity of the system. Approaching a new equilibrium most certainly does not happen immediately; approaching a new equilibrium, with little additional heat accumulation, would take on the order of 1,000 years (which is roughly the ocean’s thermohaline circulation turnover time).
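For what it’s worth, the point about effective heat capacity setting the approach to equilibrium can be illustrated with a toy one-box model (C dT/dt = F - lam*T); the heat capacity and feedback values below are assumptions chosen only for illustration, not a claim about the real ocean:
# Toy one-box energy balance model: C dT/dt = F - lam*T. Illustrative values only.
F = 3.7              # W/m^2, step change in forcing
lam = 1.25           # W/m^2/K, net feedback (gives ~3 K equilibrium warming)
C = 3.2e9            # J/m^2/K, assumed effective heat capacity (~800 m of ocean)
dt = 86400.0 * 30    # one-month time step, in seconds

T = 0.0
history = []                              # (year, warming, remaining imbalance)
for step in range(12 * 1000):             # integrate for 1000 years
    imbalance = F - lam * T               # W/m^2 still being accumulated as heat
    T += imbalance * dt / C
    if step % 12 == 11:
        history.append((step // 12 + 1, round(T, 2), round(imbalance, 2)))

print(history[49])    # after ~50 years: still well short of the ~3 K equilibrium
print(history[999])   # after ~1000 years: T ~ F/lam and the imbalance ~ 0
With these assumed values the e-folding time is C/lam, roughly 80 years, so most of the heat accumulation happens over many decades rather than immediately.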
SteveF (Comment #147115)
The Andrews first 20 data points are, of course, from the abrupt 4XCO2 release model experiment in CMIP5. Andrews argues in that paper that taking the first 15, 10 or 5 points does not change his conclusions.
“Note that this choice is somewhat arbitrary, but our purpose is not to best fit the nonlinear behavior; rather, it is to illustrate that the slope (the feedback parameter) changes during the simulation. Whether the separation is chosen at 5, 10, 15, or 20 yr does not affect this conclusion.”
That argument reinforces my counter-argument about the high leverage the first points have on the slope of a 20-, 15-, 10- or 5-point line segment, the slope Andrews claims shows non-linearity. Once again, the slope of the full fitted regression line is little affected by the first few points, and it is those points which, because the net TOA and temperature change rapidly in the early years of the experiment, will have the most variation and can be biased by averaging the monthly data into annual values and plotting that point at year 1 and not year 0.5. I need to go back and demonstrate the biasing conclusively for myself.
In my estimation the confusion that the Andrews paper creates would have been a non starter if the overall regression line had been plotted.
angech,
It isn’t an energy imbalance, it’s a power imbalance. The term has units of watts per square meter, not joules. Ocean Heat Content is measured in joules. The rate of change of OHC, however, is power and can also be expressed in watts per square meter.
You continue to labor under the false assumption that there can be no power imbalance at the TOA. Note that radiation upward does not originate from the TOA, it’s measured at the TOA. It originates from the surface and the atmosphere well below the TOA. For computing purposes, the TOA is assumed to be 100km. But the atmosphere extends far beyond that. However, it’s so diffuse that emission and absorption above 100km is insignificant.
angech,
This is because doubling CO2 is expected to produce a change in forcing of 3.7W/m^2. Therefore if you write climate sensitivity in units of K/W/m^2, you simply multiply this by 3.7 to get what the temperature change would be once we return to equilibrium.
For example, if the climate sensitivity is 0.75K/W/m^2, then the ECS is 2.8K. There’s quite a good description of this on pages 28-30 of Hansen & Sato (2012) but I can’t find a downloadable copy of that article.
.
Something that helps me get this; consider nighttime and day. I think that at any given instant at any given spot the TOA energy [im]balance is riding a wave – during the day it’s much more likely that the net flux will be energy IN, during the night it’s much more likely that the net flux will be energy OUT. At any given spot, it’s likely to be in balance maybe twice a day for an instant? Maybe this is going too far; actual mileage will probably vary, but. Ignoring everything else and just thinking about the Earth’s rotation and such, it might be something like that.
.
I could be wrong. I like to throw this stuff out there so people can disabuse me of my misconceptions. 🙂
SteveF:
A quibble, but possibly relevant: To the extent that redistributions of heat around the system can occur naturally without any net accumulation, the surface may “warm,” in the sense of radiating at an overall greater power, without the Earth’s accumulating more energy; if redistribution causes a wider spatial temperature variation, the surface would tend to radiate more. (Obviously, I’m making the unwarranted assumption that a higher-radiation state is one in which what we refer to as the global average surface temperature is also seen to be higher, but let’s hold that questionable assumption in abeyance if we can.)
I have no idea of what such an effect’s magnitude might be, so this is likely all theoretical. But I mention it because it reminds me of a problem I had when I first read the head post’s statement that “the forcing PDF can’t logically extend to any value close to the current rate of heat uptake, or else the calculated sensitivity range extends to infinity.”
Specifically, if any significant change in effective temperature does indeed result from natural redistribution alone—and I’m perfectly open to being convinced that such an effect if any is de minimis—then, since your Equation 1’s $latex \Delta T$ is only that part of the temperature change that’s caused by human forcing, $latex \Delta T$ could be zero even though the observed change in global average surface temperature is significantly positive.
My problem was that in such a case a difference between forcing and accumulation that approaches zero would not seem necessarily to dictate a sensitivity value that approaches infinity.
Joe Born,
Seems to me that you are arguing a measured increase in average surface temperature could be due to non-human caused changes (e.g. a ‘natural’ shift in the distribution of temperatures changing the measured average). Well if that is what you are arguing, then sure, it is possible. Just as it could be the case that the underlying natural trend is in fact negative, and is masking some of the influence of GHG driven warming. I think any such suggestion requires a physical justification and data to back up the speculation. Since atmospheric GHGs are demonstrably rising, and since the expectation is for those to restrict loss of heat to space (to some extent), I suspect William of Ockham would tend to accept GHG forcing as the likely cause.
Joe Born,
The distribution of temperature is important. You can have different temperature distributions with the same total amount of radiated power. However, because radiated power is proportional to the fourth power of temperature, any distribution that isn’t uniform will have a lower global average temperature than if the surface were isothermal. The math principle that governs this is called Hölder’s Inequality.
Earth’s moon, with its low surface heat capacity and slow rotation rate, is an extreme example of this. If the surface were isothermal, the average temperature, assuming an albedo of 0.1, would be 271K. Instead, it’s 196K.
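Those lunar numbers are easy to check, and a crude two-temperature example shows the same inequality at work; the solar constant and albedo below are the standard values cited above, while the 320 K “hot side” is just an arbitrary illustration:
# Isothermal equilibrium temperature for the Moon (albedo 0.1), plus a crude
# two-hemisphere example of why a non-uniform temperature distribution has a
# lower mean temperature for the same total emitted power.
sigma = 5.670e-8                     # W/m^2/K^4
S = 1361.0                           # solar constant at 1 AU, W/m^2
absorbed = S * (1 - 0.1) / 4.0       # absorbed flux averaged over the sphere

T_iso = (absorbed / sigma) ** 0.25
print(round(T_iso, 1))               # ~271 K, as quoted

T_hot = 320.0                        # arbitrary hot-hemisphere temperature, K
T_cold = (2 * absorbed / sigma - T_hot ** 4) ** 0.25   # keeps total emitted power the same
print(round((T_hot + T_cold) / 2.0, 1))                # mean well below 271 K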
SteveF:
And you’re right in implying that I have no physical justification on offer.
Still, the facts that a significant secular temperature trend prevailed before most scientists think greenhouse-gas changes had much effect and that many think temperature-trend values are rather autocorrelated might be considered relevant to how confident we are in our truncation of the forcing probability-density function at the heat-accumulation rate.
DeWitt Payne:
Precisely. That was the point of my saying that “if redistribution causes a wider spatial temperature variation, the surface would tend to radiate more.”
Perhaps I’d have made myself clearer if I had added “for a given spatial-average temperature.”
Steve Mosher, You should mention the Glory satellite issue to Muller. It looks to me like a very important thing to lobby for. One would think all would agree on its importance. Muller has testified before Congress in the past.
Steve, I don’t think what you’ve done here adds a lot of value, unfortunately.
The chief problem with your approach is that a heat balance model, in order to balance, has to involve comparing variables from exactly the same time. You don’t appear to be doing that. You are using ocean heat uptake numbers from 1993-?, temperatures from an unspecified year (not the present year, certainly — the 13-year average perhaps?) and the forcing estimates from the last IPCC report.
Not only is there too much uncertainty in ALL of these estimates to constrain climate sensitivity, but by matching up one time or period average to another, different time or period average, the whole purpose of the exercise (to balance the inputs and outputs) is completely defeated.
Luckily, I may have found a solution to these shortcomings. More here: http://theidiottracker.blogspot.com/2016/05/stevef-makes-hash-of-climate.html
Robert,
I thought Lucia had permanently banned you, but apparently not.
.
The warming is from the start of GISS (roughly the pre-industrial temperature) through 2011. The human forcings and associated uncertainties are from AR5 (2011), as noted in the post. The rate of ocean heat uptake (0 – 2,000 meters) is from Levitus (NOAA updated), and is based on the average slope for the 10 years through 2011. The other heat uptake values (ice melt, ocean below 2,000 meters, land uptake) are as described in the post.
.
Based on your comments, it appears you may not have understood the post, or perhaps didn’t read it very carefully. In any case, the objective of the post was to show how forcing and its associated uncertainty translates into PDF for sensitivity via a heat balance. If you think the values I used for warming since the pre-industrial period, AR5 forcing & associated uncertainty, and the estimates of heat uptake I used are not correct, then you are free to offer your own values and show where they come from.
Joe Born,
“Still, the facts that a significant secular temperature trend prevailed before most scientists think greenhouse-gas changes had much effect and that many think temperature-trend values are rather autocorrelated might be considered relevant to how confident we are in our truncation of the forcing probability-density function at the heat-accumulation rate.”
.
Actually, the data I have seen shows pretty good correlation between estimates of human forcing and historical warming, with modest variation around the long term trend (the often postulated and discussed ~60 year oscillation or +/- ~0.1C), although consistently on the low end of the IPCC ‘likely range’. You can (and I strongly suspect do in fact) reject the validity of any and all estimates of GHG driven warming, no matter who offers them and no matter how the estimate is done. Combine that with you having no data in support of any alternative explanation for warming, and it is clear we don’t have much to discuss.
Ciao.
SteveF,
As I said above, Levitus, 2012 is out of date. The rate of change using linear least squares for OHC from just 0-2000m ARGO data increases a lot when you add 2013-2015 to 2005-2012. As I posted above, it’s enough to change the global imbalance from 0.46 to 0.66W/m².
DeWitt,
There is enough noise in the data to make any estimate of trend somewhat uncertain. That being said, if you increase the 0.515 watt/M^2 I used for the upper 2,000 meters to 0.66 Watt/M^2, then the median doubling sensitivity increases from ~1.98C to ~2.12C. The sensitivity PDF becomes (of course) a bit more skewed.
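For anyone who wants to reproduce that shift, here is a hedged sketch scaling ES = dT/(F - A) by 3.7 W/m^2 per doubling; note that the two total heat-uptake values below are my back-calculated assumptions chosen to be consistent with the sensitivities quoted above, not numbers taken from the post:
# Rough check of the sensitivity shift described above. The two total-uptake
# values are assumptions back-calculated to match the quoted results.
dT = 0.9       # K, warming attributed to human forcing
F = 2.29       # W/m^2, AR5 median human forcing through 2011
F_2x = 3.7     # W/m^2 per doubling of CO2

for A in (0.61, 0.72):                 # W/m^2, assumed total current heat uptake
    ES = dT / (F - A)                  # effective sensitivity, K per W/m^2
    print(A, round(ES * F_2x, 2))      # ~1.98 K and ~2.12 K per doubling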
“I thought Lucia had permanently banned you, but apparently not.”
No, it was more a case of my losing interest in a site that degenerated into petty insults, pictures of cats, and posts of recipes. I thought perhaps serious discussion might have crept back into fashion, but your opening with ad hom does not bode well as to that.
“The warming is from the start of GISS (roughly the pre-industrial temperature) through 2011. The human forcings and associated uncertainties are from AR5 (2011), as noted in the post. The rate of ocean heat uptake (0 – 2,000 meters) is from Levitus (NOAA updated), and is based on the average slope for the 10 years through 2011. The other heat uptake values (ice melt, ocean below 2,000 meters, land uptake) are as described in the post.”
I suggest, first of all, that you revise your post to make your data choices much more clear. You mention 2011 a total of once in the original post, and nowhere do you identify that as the moment you have chosen for your epically oversimplified heat balancing exercise.
Second, I suggest you review my response and the post linked to and try to understand why you can’t use a ten-year average of heat uptake together with the temperatures and forcings of a La Nina year.
“Based on your comments, it appears you may not have understood the post, or perhaps didn’t read it very carefully.”
Now you seem to be projecting. I understand what you tried to do. You tried to balance the day’s till using the cash in the register, the average inventory for the last week, and the average daily sales over the past year. You unfortunately do not seem to understand why you failed. I suggest you reexamine your assumptions and simplifications, as you have gone far astray.
Wow. And here I thought I knew what an asshole was.
Robert,
You came here and commented on a post, and invite people to visit your blog to get the ‘real story’. Please. If you have substantive information about that post, then I suggest you include it in your comments.
Ciao.
Mark,
Robert has a long history of obnoxious comments on this blog. My experience ’till now is that Robert is unwilling to engage in a substantive discussion, and prefers to engage in condescension….. and worse. The weirdest thing is the refusal to present an argument unless you go to his web site; bizarre.
SteveF,
.
Well, it helps put things in perspective. Anders and Brandon Gates are a downright pleasure to talk with in comparison, even when they tell you that they’re deliberately trying to be obnoxious.
.
~shrug~
“You came here and commented on a post, and invite people to visit your blog to get the ‘real story’. Please. If you have substantive information about that post, then I suggest you include it in your comments.”
Steve, I gave you everything you need in the post above, if you have the slightest idea what you’re doing, to see where you went wrong. I included a link to further details. Since you failed to understand my point, I suggested you review the argument in full. Neither you nor anyone else needs to go there; it’s a courtesy on my part. Feel free to guard your ignorance as jealously as you would like.
I’m sorry you’re unable or unwilling to engage in a constructive discussion about your ideas. I’m here fully prepared to engage with any additional facts or counterarguments you want to proffer, but you have obviously concluded you can’t win on the facts, and have no interest in improving your own understanding of the issues. Which is just sad.
Robert,
” Feel free to guard your ignorance as jealously as you would like.”
.
Mark’s take on you was spot on. Please feel free to wait for comments at your silly and mainly unvisited blog. This will be my last comment addressed to you.
.
‘Go with God, though I sincerely hope you go in the other direction, you complete asshole.’
SteveF:
Wow, where did that come from?
1. Contrary to what you “strongly suspect,” I’m actually (somewhat) inclined to believe that greenhouse gases have an effect. In fact, last year I offered Watts Up with That a post outlining a simple numerical demonstration of the effect for those to whom it continued to seem non-physical. Unfortunately, that offer was made after I had persisted in contending that Monckton et al.’s “Why Models Run Hot” had no merit either logically or technically, upon which Anthony Watts stopped accepting my posts. (He seems to have considered it ill-mannered of me not to accept what apparently passes in Lord Monckton’s circles for a rigorous argument.)
On the other hand, I’m fairly humble about what I know; just as a kitchen stove’s tendency to make the house warmer can, if the thermostat is poorly located, actually make it colder, I’m not as confident as some that a greenhouse-gas change’s ultimate effect will turn out to be what it theoretically appears. Perhaps your “pretty good correlation” would add to my confidence if it were laid out in more detail. But I have not been impressed with the (admittedly, very few) papers (such as Monckton et al.) I’ve actually dug into on the subject.
2. My previous comment was merely intended to explain my difficulty in understanding your rationale for truncating the probability-density function where you did. I’ve been eliciting information on technical matters from experts since the ’60s, and I have to say that defensiveness such as you exhibited in response to that attempt of mine to learn something has not in my experience been typical of people who really know their subject.
However that may be, it was not my intention to put you on the defensive. I want to help keep this site a safe space.
Joe Born,
Sorry if I misread your comment. The key point in my mind is that any suggestion GHGs are not causing warming does damage to the credibility of the very reasonable position that the warming is real but not catastrophic. As Mosher frequently argues, the real technical argument, and the one that is politically important, is about what the sensitivity to GHG forcing is. Claims that GHG warming is insignificant or non-existent damage the very reasonable position that public policy should be based on rational evaluation of actual warming and actual estimates of GHG forcing. Forgive me, but I grow weary of people who refuse to acknowledge 1) that GHG forcing is real and 2) is almost certainly causing significant warming.
.
If that is not you, then I completely retract my earlier comment.
SteveF, very excellent post.
.
It’s always necessary in solving a difficult puzzle to circle back to the beginning logic periodically and plug in unknowns modified with any new perspectives to define the field of possibility.
.
Without diverting the discussion, I would briefly point out that there are many relevant unknowns in the political debate besides EfCS, TCR and aerosol forcing. They include:
1) Volcanic activity of the next 100 yrs.
2) Current radiative imbalance relative to SST
3) Degree of polar amplification effect
4) Degree of change of polar precipitation profile.
5) UHI and land use effects on the surface temp historical time series.
6) GHE relationship with diurnal transfer range
7) Ocean warming effect on strength of AMOC
8) Warming’s effect on creating weather extremes
9) Benefits of warming and CO2 fertilization.
10) Cost of restricting economic activity before replacement energy technology is online versus costs of potential mitigation.
11) Chances of cooling influences caused by solar minimum, asteroid strike, large volcanoes, nuclear winter or interruption of AMOC.
12) Chances of technologically engineering weather and/or albedo to offset polar melt or change radiative flux.
13) The relative importance of any of these compared with a handful of other wicked problems that would have a deeper impact if resources are not invested (or saved) to deal with them.
.
Steve, although I’m sure everyone already knows the list, acknowledging it can act as a disclaimer, so the focus can return to the PDF of EfCS and your post. Great stuff.
The “calibre of climate ‘skepticism’” post at ATTP’s has a very funny Larson Far Side cartoon that has been doing the rounds.
Pointing out wackiness I guess.
Even funnier is the rubber ducky the person is wearing as protection against rising sea level. Obviously not a true Skeptic then?
His comments on blog intemperance ring true however.
They apply to all sides.
TOA is a contentious issue.
Measuring it by satellite implies that it can be measured at varying heights with some degree of inaccuracy due to scatter.
Finding a true definition is even more difficult.
Mark covers this a little with his comment on varying heights for TOA.
My understanding is that it should be the level at which the incoming and outgoing fluxes match.
Since this is not what some here would subscribe to, so be it.
Steve F, the world has been having forcing changes for millions of years.
I realise that I only annoy you with my lack of science on this aspect.
I still feel that when the forcing changes, the energy is distributed rapidly in relation to that change. The slow changes you talk about are more to do with the conduction, convection and currents of that energy, which, wherever it goes, must keep radiating away.
Angech,
.
No, I wasn’t talking about varying heights; I’m sorry I wasn’t clear.
My point was just this: Incoming and outgoing fluxes generally don’t match. You ought to let go of this idea.
SteveF,
Robert’s never been banned. I did adjust a script to add “(IdiotTracker)” after his name because he and other roberts were asked to add an initial or name after the “robert” name and he did not. (As some know, regulars whose names overlap others are required to disambiguate. In this case, robert’s comments were being confused with those of Robert Way and I think someone else, and it was leading to inefficient cross talk.)
Soon after the disambiguation was added, Robert seemed to lose interest in visiting. I don’t have any reason to believe that was cause and effect, but the timing was interesting.
Angech,
I guess what I was trying to say was just this: anyplace on Earth where the temperature starts going up after the sun rises, peaks sometime in the afternoon, and starts to drop as night falls and on through the night until the sun starts coming up again in the morning – anyplace that gets warmer during the day and colder during the night, in short – has a nonzero flux. There is some instant in the morning after sunrise where the net flux over any given spot is zero; the spot isn’t cooling anymore and it’s not yet warming. There is some instant toward sunset where the net flux is zero; the spot isn’t warming anymore and it’s not yet cooling. Other than that, most places just aren’t in balance; they are either warming or cooling almost all the time due to the sun and the fact that the Earth rotates.
Mark,
Presumably what you mean is that regions that get warmer and then colder typically have a non-zero flux at any given time (other than at the instants when they switch between warming and cooling). However, that doesn’t mean the net flux averaged over that cycle couldn’t be zero. If the net flux averaged over some cycle like that is non-zero, then it would mean that you have some cycle superimposed on a longer term warming or cooling.
In fact, if you consider the planet, the tropics and equator do have a positive radiative imbalance (i.e., on average they receive more energy than they radiate away) and the poles have a negative radiative imbalance (they radiate more energy than they receive). This is balanced by transport of energy from the tropics to the poles.
Anders,
.
Yes. That’s what I mean, and I agree with you. Averaged over time it could be zero, and if it’s not zero then there’s longer term warming or cooling over the timeframe, absolutely.
.
Thanks for bringing up the equator vs. the poles; that’s another, possibly clearer, example of what I was trying to get at.
.
Based on prior conversations with Angech I had a (possibly wrong) impression that he was stuck on an idea that the flux is almost always zero in all cases. I was trying to help demonstrate that even if it sometimes averages out that way, at any given time [and place/locality] it really usually isn’t zero.
.
Thanks Anders.
Mark,
Sorry, I slightly misunderstood what you were getting at. You’re right that it is probably rarely zero at any given location and – in fact – probably rarely averages to zero at any given location (apart from maybe at the boundary between the tropics and the poles where it goes from positive to negative).
Anders,
.
No problem. The point I was making isn’t one that there’s usually any [use] in pointing out, so there’s no particularly good reason you should have followed what I was trying to get across.
SteveF:
I sympathize with your exasperation at some skeptics’ embarrassing positions; I felt the same at seeing the Morano film prominently feature Christopher Monckton and two of his co-authors; these are guys who, despite its having been brought to their attention, have failed to acknowledge—indeed, have actively denied—that their paper’s central tenets are fundamental errors (which, by the way, an undergraduate engineering major would know enough to detect).
I’m happy to kiss and make up, but I have to admit to being uncomfortable with the “almost certainly” bit.
What little I know does indeed lead me to believe that (1) estimates such as those by Lindzen, Spencer, and others establish a significant if relatively modest sensitivity of temperature to forcing on short time scales and (2) radiative physics does establish that greenhouse gases cause non-negligible forcing. One might therefore expect me to consider it almost syllogistic that the temperature response to CO2-concentration changes is significant (but, again, relatively modest).
However, just as some catastrophists criticize those sensitivity estimates’ modest sizes on the ground that they’re based on measurements of only short-time-scale effects, I don’t consider it self-evident that over the longer term there aren’t effects that ultimately cause the responses on which those sensitivity estimates are based to decay. Or maybe the observed appearance of sensitivity results only from that forcing’s momentarily accelerating the system along some limit-cycle trajectory and not from its changing the trajectory itself.
I know, I know: just rank speculation without any physical justification, right? And I am indeed more inclined than not to believe that sensitivity lies in the significant (but modest) ranges identified by the clearer-minded researchers. Given how profound our ignorance of the climate system’s myriad contributions is, though, I guess I feel more uncertainty than most appear to. I’m just a spectator, but I get the impression that researchers are looking for their keys only under the lamppost.
So, although it may be only because I have failed to avail myself of information on which people more informed than I have relied, I remain somewhat this side of “almost certain.”
mark bofill (Comment #147148)
“consider nighttime and day. I think that at any given instant at any given spot the TOA energy [im]balance is riding a wave – during the day it’s much more likely that the net flux will be energy IN, during the night it’s much more likely that the net flux will be energy OUT.”
Well you have made me rethink at any rate.
Though what it may mean is that there can be no “given spot” for the TOA.
It seems clearer that the TOA may relate to an abstract concept of energy in to energy out for the planet as a whole if it is the level where incoming flux balances outgoing flux.
After all most of the energy is only coming in on the sunny side.
–
DeWitt Payne (Comment #147145)
. “You continue to labor under the false assumption that there can be no power imbalance at the TOA”
I still do but I am having great trouble working out where this TOA should be at night time when there is no incoming radiation.
On the ground with GHG back radiation?
Will look into it more. Thanks both for educating and confusing me.
Angech,
.
TOA meaning Top Of Atmosphere I think, right? It doesn’t change locations. It’s someplace up there in the sky; I forget exactly what the altitude for TOA is.
There isn’t really a single height. On average it’s at about 5 km, since that is the height in the atmosphere, given a lapse rate of about 7 K/km, where the temperature will have dropped by about 33 K relative to the surface.
However, if you consider the outgoing spectra in this post some of the energy is radiated directly to space from the surface, some comes from within the troposphere, and some from within the stratosphere.
The key point is that there is a certain amount of energy being radiated to space per second, per square metre, and whether or not we’re warming will depend on whether or not this is the same as the amount of energy we receive per square metre per second.
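If it helps, the arithmetic behind the ‘about 5 km’ figure is just the effective emission temperature combined with the lapse rate; a minimal sketch, with assumed values for the solar constant, albedo and surface temperature (none of them taken from this thread):

```python
# Back-of-envelope for the ~5 km emission height. Assumptions: solar constant
# 1361 W/m^2, albedo 0.3, 288 K surface, and the 7 K/km lapse rate quoted above.
SIGMA = 5.670e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = 1361.0 * (1 - 0.3) / 4.0      # ~238 W/m^2 absorbed, globally averaged
t_eff = (absorbed / SIGMA) ** 0.25       # effective emission temperature, ~255 K
height_km = (288.0 - t_eff) / 7.0        # altitude where the lapse rate gives t_eff

print(f"T_eff ~ {t_eff:.0f} K, effective emission height ~ {height_km:.1f} km")
```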
Thanks Anders.
[Edit: removed bit where I pointed out typo; nevermind, you got it. 🙂 ]
Yes. This is what it’s all about.
Thanks, I had to do two edits to get that one right 🙂
Angech: “My understanding is that it [TOA] should be the level at which the incoming and outgoing fluxes match.”
.
I believe that the tropopause (the boundary between the troposphere and the stratosphere) is defined as the location where radiation overcomes convection as the primary means of energy flux (or perhaps just longwave flux). So this is the point where 50% of the outgoing radiation never returns and 50% is backscattered.
.
Therefore I am not sure that this location changes significantly from day to night. Is anyone certain of the definitions of TOA versus tropopause? I mean I usually see TOA used when calculating an energy balance model.
angech,
That’s wrong and that’s your problem. There is no requirement that incoming and outgoing fluxes match. Over time after a perturbation that generates an imbalance, the flux difference will decrease because the planet warms or cools depending on the sign of the flux difference.
The TOA is the altitude beyond which atmospheric emission, absorption and scattering of incoming and outgoing radiation are insignificant, not where their levels match. For MODTRAN and most other radiative transfer calculation programs, that’s 100km. It doesn’t change with the rising and setting of the sun or pretty much anything else. The emission spectrum of the Earth doesn’t change beyond that level. It would look the same from the Moon or from the other side of the solar system, assuming you had an instrument sensitive enough.
Ron,
The tropopause is indeed where convection stops being significant, but that has not much to do with radiation. The tropopause is also a good place to measure changes in radiative imbalance, forcing, using a program like MODTRAN. Forcing at the TOA is defined as the flux difference after the stratosphere reaches a new steady state, which happens fairly quickly. But MODTRAN won’t do that, so looking down from the tropopause gives a reasonable approximation of forcing after changing ghg concentration.
.
Convection at the surface is very large.
Convection at the ‘Top Of the Atmosphere’ is close to zero.
Convection through the tropopause is small, but non zero.
.
To some extent ( not enough to change the basic assumptions of forcing at the tropopause ), exchange of air mass from a cooling stratosphere and a warming troposphere acts as a small negative feedback.
.
Holton estimated Strat/Trop exchange at about 10% annually so it is small.
SteveF wrote: “The Effective Sensitivity, in degrees per watt per sq meter, is given by:
ES = ΔT/(F – A) (eq. 1)
I’m trying to understand this subject better by posing some intelligent questions/speculation.
Is this the definition for effective sensitivity (as opposed to ECS)? Once the ocean has warmed and no net heat is flowing into it (A=0), ES and ECS appear to be similar.
F/ΔT is known as the climate feedback parameter (CFP). It seems to me that the CFP is THE fundamental property of the planet that controls its response to a forcing. If forcing prevents 3.7 W/m2 from escaping to space, CFP tells us how much warming needs to occur to emit and reflect another 3.7 W/m2 of OLR and SWR. If you know CFP, do you need to wait until heat flux into the deep ocean has stopped? Furthermore, changes in OLR and reflected SWR with temperature don’t need to be forced; they occur whether warming is forced or unforced.
In the short term (several months), changes in OLR and rSWR with temperature depend on the intrinsic nature of materials to emit more LWR when warmer (Planck feedback), modified by WV, LR, Cloud, and seasonal snow cover feedbacks. Only a small decrease in surface albedo as ice caps melt (which takes millennia to reach equilibrium coming out of ice ages) is missing from these fast feedbacks (and probably irrelevant to climate sensitivity for the next century or two).
I can rearrange the terms in your equation to get:
ES = (ΔT/F) / (1 – A/F)
ES = 1 / [CFP * (1 – A/F)]
ES * F = ΔT / (1 – A/F)
Can I apply the lower equation to the output from an instantaneous 4X-CO2 model run (where F conveniently is constant)? Warming and the heat flux going into warming the planet (mostly the ocean) must evolve so that the expression on the right side always equals the constant on the left.
Which brings me around to the question of the definition of forcing. Is it instantaneous forcing (Fi), forcing after stratospheric adjustment (Fa), or some effective forcing (ERF) from climate models? The latter appears to reflect feedbacks, which are already present in the CFP. Yet Otto and Lewis & Curry use ERF, something that doesn’t represent a radiation flux. ERF arises from modeled warming; climate models are good at predicting radiation fluxes and lousy at predicting warming.
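As a sanity check on the algebra, here is a quick numeric sketch (the values are made up, purely to confirm that the rearrangements above reduce to eq. 1):

```python
# Numerical check, with illustrative values only, that the rearranged forms
# above are equivalent to eq. 1.
dT, F, A = 0.9, 2.29, 0.6
CFP = F / dT                         # climate feedback parameter as defined above

es_eq1   = dT / (F - A)              # eq. 1
es_form1 = (dT / F) / (1 - A / F)    # first rearrangement
es_form2 = 1 / (CFP * (1 - A / F))   # second rearrangement

assert abs(es_eq1 - es_form1) < 1e-12 and abs(es_eq1 - es_form2) < 1e-12
print(f"ES = {es_eq1:.4f} C per W/m^2 from all three forms")
```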
.
Before convection becomes insignificant relative to radiation as the mode of energy transfer in the 12-15 micron band, GHGs are not causing nearly the GHE that they do when radiation dominates. Right?
.
After all, the EGHE is based on GHG impeding OLR by the lag in thermalizing and re-emission, raising the EEH (effective emission height) in the OLR band.
.
.
I think you mean forcing imbalance at TOA is defined…
.
.
True, except for longer term feedbacks like cloud formation, increased convection currents, and latent heat (humidity and condensation) effects on the lapse rate.
Ron,
Forcing is by definition an imbalance.
This is from the IPCC AR4 (2007), but I doubt it changed in the fifth report.
True, forcing is by definition an imbalance, in the same way that cash is commonly viewed by non-accountants as net cash.
Ron Graf:
Is forcing like net worth? Sum of assets and debts? Such that you might have no idea what the magnitude of assets might be nor the debts; only the sum?
I worry about the anthropogenic warming forces being summed with a natural temperature decline and the net called ‘forcing.’ SteveF referred to this possibility above as did Nick Stokes in another place. But we couldn’t know.
There seems something a bit loose about ‘forcings’ being a ‘net’. Apparently ‘forcings’ has nothing to do with ‘force’.
or it’s too late in the evening.
jferguson:
The following may or may not be helpful on the forcing concept.
Let’s stipulate that the “forcing” definition quoted above by DeWitt Payne is what is generally accepted and therefore more accurate than the following rough-and-ready simplification. But this simplification is what works for me:
Start with a situation in which radiation is (on average over some appropriately long time frame, like a year) in balance: the Earth is radiating away just as much power as it is receiving. Now suppose you instantaneously increase CO2 concentration to a new, constant level, thereby instantaneously suppressing outward radiation by, say, 1 W/m^2 so that initially a 1 W/m^2 imbalance prevails. That 1 W/m^2 initial imbalance is the forcing increase associated with the concentration increase.
Now, that imbalance so causes the Earth to warm as eventually to restore the balance. But the quantity they call forcing–in this case, the previous forcing plus 1 W/m^2–remains the same even though the imbalance doesn’t; after all, the new concentration with which that forcing is associated has not changed.
So it’s true that the forcing is an imbalance, but it’s a hypothetical imbalance. If you want to know what forcing is associated with the current set of forcing factors, such as CO2 concentration, you (1) imagine an equilibrium state in which the forcing factors are different, namely, the factors with which you have (arbitrarily) associated a forcing value of zero, (2) instantaneously change those factors (e.g., CO2 concentration) from that zero-forcing set to the current set, thereby temporarily upsetting the balance, and (3) observe the initial imbalance. Roughly, that imbalance is the forcing.
Again, this description omits subtleties such as stratospheric adjustment, but to me at least it serves as a helpful simplification.
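If numbers help, here is a minimal sketch of that thought experiment using the commonly quoted simplified CO2-forcing expression (the concentrations are illustrative, not taken from this thread); the value it prints is the initial, hypothetical imbalance I’m calling the forcing, which then decays as the Earth warms:

```python
import math

# Sketch of the definition above: step the CO2 concentration, and the forcing is
# (roughly) the initial imbalance that step would create. Uses the widely quoted
# simplified expression dF = 5.35*ln(C/C0) W/m^2; concentrations are illustrative.

def co2_forcing(c_ppm, c_ref_ppm=280.0):
    """Approximate CO2 forcing (W/m^2) relative to a reference concentration."""
    return 5.35 * math.log(c_ppm / c_ref_ppm)

for ppm in (400, 560):
    print(f"{ppm} ppm vs 280 ppm: initial imbalance ~ {co2_forcing(ppm):.2f} W/m^2")
```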
Joe Born,
What you’ve written has really helped. I can see that this is how it has to be understood in order to quantify these processes. It likely is folly on my part to suppose that any of this can be understood at the junior high school level. Thanks.
DeWitt Payne (Comment #147209)
“The TOA is the altitude beyond which atmospheric emission, absorption and scattering of incoming and outgoing radiation are insignificant, not where their levels match. For MODTRAN and most other radiative transfer calculation programs, that’s 100km.”
–
This seems more a definition of where the atmosphere becomes outer space [hence practically wrong though technically correct]. Though it is a reference level from which the earth’s energy budget can be calculated, it is a poor definition in that many other altitudes and reference points have been considered by those not wanting to use MODTRAN specifically.
–
“To estimate the earth’s radiation budget at the top of the atmosphere (TOA) from satellite-measured radiances, the measurement viewing geometry must be defined at a reference level well above the earth’s surface (e.g., 100 km).
Since TOA flux represents a flow of radiant energy per unit area, the optimal reference level for defining TOA fluxes in radiation budget studies for the earth is estimated to be approximately 20 km. The 20-km reference level corresponds to the effective radiative ‘top of atmosphere’ for the planet.”
–
…and Then There’s Physics (Comment #147201)
“There isn’t really a single height. On average it’s at about 5 km, since that is the height in the atmosphere, given a lapse rate of about 7 K/km, where the temperature will have dropped by about 33 K relative to the surface.”
–
mark bofill (Comment #147200)
“TOA meaning Top Of Atmosphere I think, right? It doesn’t change locations. It’s someplace up there in the sky; I forget exactly what the altitude for TOA is.”
“Climate Change | Science”
The only way the climate system can achieve equilibrium, which is required by conservation of energy, is for the lower levels to warm, emit more energy as long wave radiation, which in turns warms the atmosphere, and changes the effective emission height and temperature. This adjustment continues until the shortwave and long wave budgets are balanced at the top of the atmosphere.
-If there is virtually no atmosphere the TOA becomes the surface level.
[MB It’s someplace up there in the sky for earth yes, the moon no].
–
So many definitions,
–
DeWitt Payne (Comment #147209)
“The TOA is the altitude beyond which atmospheric emission, absorption and scattering of incoming and outgoing radiation are [insignificant]” does not sound right to me.
The TOA is used frequently as a term to assess the energy budget of earth.
It is supposed to represent the height at which incoming energy balances outgoing energy.
As in: “If there is more incoming energy than outgoing energy there is, quote, an energy imbalance”.
As both Mark and ATTP kindly point out there is more outgoing energy than incoming energy at night.
Which begs the question of whether there “should be more incoming energy than outgoing” during the day.
There must be a level for the whole earth combined which satisfies the balance criteria.
I guess any level has a TOA rating, even the surface if you assign it the value of being TOA.
The problem being what to do about the pesky atmosphere which if you go high enough does not matter any more.
–
DeWitt Payne (Comment #147216)
“Forcing is by definition an imbalance.”
-And in the steady state at equilibrium??
Equilibrium by definition means the forcing is balanced?
–
not being “funny”, I just do not get it.
angech, I’m right with you. My hunch is that TOA is a purely theoretical but non-definable place due to complexities of multiple converging gradients in a chaotic system. TOA is only created to make a useful tool for calculating energy budgets to simplify and model results from a change in flux (forcing).
.
I had been under the apparently false impression that forcing could describe any and all changes in flux, whether caused by GHG or aerosol or solar variation. And, following this logic, radiative balance simply was the sum of all positive and negative forcings canceling out; for example, like a stationary object on the Earth’s surface resulting from the force of the ground up equaling the force of gravity downward. But I suppose this model does not work in climate science because there are multiple feedbacks in unknown amounts and at unknown times by uncertain mechanisms. A change in solar forcing of 1W/m2 may not have the same effect as a change in forcing by decreased aerosols of 1W/m2, let’s say, because the aerosols also aided cloud seeding for precipitation (hypothetically).
.
So, it seems that forcing is as hard to define as TOA.
Interesting, because this is related to the point being made in Kate Marvel’s recent paper. In a simple sense, we might expect the response to a change in forcing to be the same irrespective of what causes the change. However, their argument is that this is not quite true and that it can depend on what produces the change in forcing and – in particular – how that forcing is distributed. It’s a possible reason why these energy balance estimates for climate sensitivity produce different results to those obtained using different methods.
Joe Born (Comment #147224)
” suppose you instantaneously increase CO2 concentration to a new, constant level, thereby instantaneously suppressing outward radiation by, say, 1 W/m^2 so that initially a 1 W/m^2 imbalance prevails. That 1 W/m^2 initial imbalance is the forcing increase associated with the concentration increase.”
–
“instantaneously suppressing outward radiation”
–
Is this physically possible, Joe?
–
The implication is that the earth gained more energy and will be 0.27 degrees warmer instantly at the surface atmosphere.
Further it will stay this warm at this concentration.
But it will still be putting out the same amount of energy to space from this warmer surface layer as the previous cooler, lower CO2 did.*
This implies that the TOA would have dropped temporarily for, as Neven would put it, a couple of Hiroshima bombs worth of energy, then recovered to the same TOA as previously.
Would it have dipped for even a second?
[“Every second the sun pours 2700 Hiroshima bombs of energy on the Earth at the top of the atmosphere.” ].
That forcing is of course then exhausted.
–
*barring cold fusion.
Joe Born (Comment #147224)
“Now, that imbalance so causes the Earth to warm as eventually to restore the balance.”
–
Is it minuscule or majuscule?
The heat always seems to drain into the earth, ignoring that if the heat goes into the oceans there must be less going out to space, unless the CO2-rich air is magically producing heat.
If the hotter CO2 layer is sending more into space [which it should do being hotter] and more into the ocean, where is it coming from?
The sun output remains constant.
I hate atmospherics.
angech:
No, I don’t think that’s implied. What I said was that the CO2 concentration gets a step increase, not that the Earth’s temperature increased instantaneously. We put a blanket over the kid instantaneously, temporarily reducing how much heat the kid-blanket system emits, but the kid warms up only gradually.
Now, it isn’t physically possible for the CO2 concentration to make a finite jump instantaneously, either. But for the sake of (roughly) defining forcing we’re pretending that it somehow magically does experience a jump, and then we observe what happens.
Like the blanket, that CO2-concentration increase slows what was cooling the body without slowing what was heating it, so the body gradually warms to the point where it’s again losing as fast as it’s gaining, i.e., until the concentration-change-caused imbalance disappears.
The reason for assuming that the CO2-concentration increase is instantaneous is to have a point in time where we could define the forcing change by reference to the imbalance. Even though the new concentration persists–and the forcing associated with it therefore does, too–the imbalance by which we (again, roughly) defined forcing decays; it is only the imbalance’s initial value that the forcing equals. If we had (as in the real world) accomplished a given concentration increase only gradually, we wouldn’t have an imbalance value we could use to define the forcing associated with that increase.
angech:
I don’t think I’ve ignored that.
The oceans are part of the Earth in my simple explanation, which ignores accumulations of heat that are not reflected in temperature changes: I’m treating the Earth as one lumped heat capacity, so it warms up if it absorbs heat. In my (again, simplistic) explanation, there is indeed less radiation into space when heat is absorbed by the oceans or any other part of the Earth.
In fact, the heat’s going into the oceans–i.e., the inward power flow’s exceeding the outward–is why there’s that initial imbalance. Again, that imbalance causes the warming that increases the outward flow to match the inward and thus eliminates the imbalance.
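To put rough numbers on the lumped-heat-capacity picture, here is a toy zero-dimensional sketch (my own, with made-up parameter values): the imbalance starts at the full step forcing and decays toward zero as the temperature approaches its new equilibrium.

```python
# Toy zero-dimensional model of the "one lumped heat capacity" picture:
# C dT/dt = F - lam*T. Only a sketch; all parameter values are made up.
C = 8.0e8      # effective heat capacity, J m^-2 K^-1 (roughly 200 m of ocean)
lam = 1.3      # feedback parameter, W m^-2 K^-1
F = 3.7        # step forcing applied at t = 0, W m^-2
dt = 86400.0   # one-day time step, s

T, t = 0.0, 0.0
for year in (0, 20, 40, 60, 80, 100):
    while t < year * 365 * 86400.0:
        T += (F - lam * T) * dt / C          # imbalance shrinks as T rises
        t += dt
    print(f"year {year:3d}: T = {T:4.2f} K, imbalance = {F - lam * T:4.2f} W/m^2")
```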
Ken Rice,
I would find Marvel et al and other explanations for the difference between the sensitivity calculated by GCMs and that from empirical estimates a lot more convincing if the projections from individual models were consistent (in a statistical sense) with measured warming. That is, the estimate of each model’s variability from multiple runs places the measured warming for the last decade below the 95% inclusive range for many (most?) of the individual model projections; many models seem clearly inconsistent with measured reality, and all project (averaging multiple runs) greater to much greater warming than has actually happened. Not a single model warms too little. In light of this, efforts like Marvel et al seem to me dubious at best.
.
I always find post-hoc explanations, usually accompanied by rapid arm-waving, of dubious value.
SteveF,
Whatever you think of Marvel et al. I was simply making the point that what Ron said appeared similar to what is presented in their paper. My own view is that it is unlikely that the efficacies are all unity (given that not all forcings are homogeneous) but we’ll have to wait and see if their estimates turn out to be reasonable or not.
This may be true, but you do have to bear in mind that there is only one reality; the one we actually followed. We can’t rule out that internal variability simply acted in such a way that what happened in reality led to temperatures that tracked along the lower bound of what the models suggest. I don’t know if this is the case, and – given the range of ECS values suggested by the models – some are clearly too sensitive (I think). However, I don’t think that “not a single model warms too little” is necessarily all that significant at this stage.
Ken Rice,
“This may be true, but you do have to bear in mind that there is only one reality; the one we actually followed. We can’t rule out that internal variability simply acted in such a way that what happened in reality led to temperatures that tracked along the lower bound of what the models suggest.”
.
Yes, there is only one reality. We still expect a valid model and its accompanying variability to include reality.
All,
Forcing is not just a function of ghg concentration. Calculating forcing requires knowing the surface temperature and the temperature and humidity profile in the troposphere. Note the Ramaswamy et al. definition:
If you want to look at cumulative forcing over time for continuous changes, you have to have a fixed reference point. That point, for anthropogenic climate change purposes, is the assumed pre-industrial steady state.
The discussion of TOA is such a hash that I give up. Believe what you want.
angech,
The term ‘begging the question’ does not mean what you think it means. Lots of people make that mistake, though.
Begging the question is a logical fallacy where the premise of your argument assumes the conclusion.
And – as far as I’m aware – some indeed do, especially if you also update the forcings.
ATTP,
And the climate sensitivity of those models is (not rhetorical)?
DeWitt,
I’ll have to try and find the paper (maybe even papers). One of those might be GISS-E which – I think – has an ECS of between 2.5 and 3. If I get a chance to look for the other papers, I’ll get back to you.
This is the paper I was thinking of. Haven’t had a chance to look at it again in any detail.
Ron Graf,
.
.
I’m glad you mention this, because it’s an example I’d like to point out. It reminds me that years ago when I first became interested in climate science blogs, I thought this. Mainstream apologists told me in no uncertain terms that I didn’t know what I was talking about, that that was impossible, and that I needed to shut up and study the science more.
.
It’s a good thing all these details are settled science. I mean, obviously this wasn’t true a few years ago; a few years ago we thought the science was settled and it wasn’t, but NOW we can be sure that whatever our leading scientists believe is settled science for certain.
.
Yeah.
By my calculations, over 1/3 of the CMIP5 models in AR5 ran cooler than HADCRUT4 over the 1861-2014 interval:
http://1.bp.blogspot.com/-ZY_oL2cq4r4/VQiX3rRH2aI/AAAAAAAAAYo/0VNOKoRIQJw/s1600/CMIP5%2Bvs%2BHADCRUT4%2Btrend%2B1860-2014%2B01.png
Brandon Gates,
Any time range that includes the historical record before the model was run is meaningless because the models are tuned, whether they admit it or not, to match that record. What counts is how the model projections match numbers that haven’t been seen by the modelers.
How closely the models match absolute temperatures would be interesting too, because things like saturation water vapor pressure are strongly dependent on the absolute temperature, not the temperature anomaly.
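As an illustration of the point (a sketch, not anything taken from a model): the Magnus approximation for saturation vapor pressure rises by roughly 6-7% per kelvin, so ensemble members whose absolute surface temperatures differ by a few kelvin start from quite different water-vapor baselines.

```python
import math

# Saturation vapor pressure over water from the Magnus approximation; it rises
# roughly 6-7% per kelvin, which is why absolute temperature matters. The
# temperatures below are illustrative.

def e_sat_hpa(t_celsius):
    """Approximate saturation vapor pressure over water, hPa (Magnus formula)."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

for t in (13.0, 14.0, 15.0, 16.0, 17.0):
    print(f"{t:4.1f} C: e_sat ~ {e_sat_hpa(t):5.2f} hPa")
```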
Except this only becomes really relevant – I think – if the feedback responses are very non-linear. If not, then even if the models are not matching absolute temperature, they may still do a reasonable job of representing – in some sense – how the system will change under changing external forcings.
ATTP referenced this paper:
“The question of how climate model projections have tracked the actual evolution of global mean surface air temperature is important in establishing the credibility of their projections. Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations. We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.”
A wonderful example of cherry-picking FOUR model runs out of a composite that began with 102. Sure, a few runs with the most appropriate 15-year trend in the Nino 3.4 region agree better with observations of 15-year trends than the whole set. I’m sure that will continue to be true in the coming centuries too, at least for climate models. Kosaka and Xie (2013) showed that changing SSTs in the Nino 3.4 region of a model to match observations increased hindcast warming in 1975-1995 and decreased hindcast warming in the 2000s. The question is whether or not models on the average in the coming century project the appropriate trend in the Nino 3.4 region. If models project too many/intense El Ninos, they will project too much global warming. If they project too many La Ninas, they will project too little global warming. Models on the average projected too few La Ninas and too many El Ninos in the 2000’s and therefore too much warming. That problem is apparent looking at the whole CMIP5 ensemble, and disappears when you cherry-pick.
For models to be useful, they must be able to hindcast accurately without cherry-picking any aspect of their performance. As a group, they failed during their first opportunity to forecast a period that was not used in their development – the 2000s. The IPCC makes projections using these models as a group.
.
Yes, every IPCC report more or less resets the starting line of predictions. So only old predictions are verifiable (falsifiable). But bringing up an old model invokes: “well, that’s old – we’ve fixed things up since then”.
.
.
One of the ideas is that atmospheric humidity will evolve with a more or less constant relative humidity. That may well prove true, but there are large portions of the atmosphere that have a relative humidity closer to 0% than to 100%. No doubt, that’s because of dynamics ( subsidence ) of air moving from elsewhere ( poles to sub-tropics ). And it’s conceivable that such motion conserves relative humidity, but it is still uncertain and may amount to slightly less than constant RH.
A more relevant comparison of the observed to model temperature series would be the AGW warming period. Trends should not be assumed to be linear. Other important comparisons between observed and modeled are detrended series variance and autocorrelation. The warming ratio of the southern to northern hemispheres is an important comparison for observed to modeled, as are the 25-year and following 15-year trends (which represent the recent warming slowdown in the observed series).
Of course, the low trend model readings in the BG linked histogram over the entire historical period should be considered no differently than the high readings compared to the observed. In fact it makes the model to model comparison look worse.
If you want to consider that large range as some natural variation, the model uncertainty becomes ridiculously large.
At Climate Audit, there are two new posts on this issue of comparing models and observations that convincingly show that there is a significant mismatch over the satellite era and give a statistical analysis concluding that over 99.5% of model runs will be “too warm.”
These posts also show once again that the statistical expertise of GISS scientists and Schmidt in particular is poor with elementary errors being defended over long time periods and these same errors finding their way into the climate science literature.
The natural variability excuse is interesting because it cuts both ways. If natural variability can cause the climate system to warm much more slowly in the short term than the long term trend, it could also cause a substantial part of recent warming as Curry points out.
Frank: “Models on the average projected too few La Ninas and too many El Ninos in the 2000’s and therefore too much warming…As a group, they failed during their first opportunity to forecast a period that was not used in their development.”
.
I don’t think failure was ever conceded. I did hear about model “proven” success with natural variability in M&F(2015). “No systematic errors in climate models.” (Science Daily 2-2015)
.
Could it be coincidence that modelers over-tuned (parameterized) for El Ninos? Is it innocent that models do not have volcanic eruptions projected, only penned into the record after the fact? I suspect this is intentional methodology. It allows Brandon Gates’ arguments:
“By my calculations, over 1/3 of the CMIP5 models in AR5 ran cooler than HADCRUT4 over the 1861-2014 interval.”
.
A cake and eat it too scenario — forecast perpetually high but never err. How is that successfully managed?
[see SteveF on post-hoc explanations.]
DY,
Why don’t you pop across to Realclimate and advise Gavin on how he can avoid any more embarrassing mistakes. You could also point out that even though you’re a modest person you clearly have more experience than he does. I’m sure he’d be thrilled to get advice from someone with your obvious expertise.
David Young,
I can’t get past the silly idea that variability across an ensemble of models has any meaning at all. Really, each model is a logical implementation of the modeler’s understanding of physical behavior; if the range of variation of a single model does not include reality, then the model is just WRONG. To use the variation between many different models (many of which are clearly wrong!) as Gavin does to claim the models are ‘OK’ because the huge ‘variability’ of the ensemble touches the edge of the uncertainty range in measurements is so disconnected from physical reality as to be shocking. How on Earth did Gavin, the IPCC, Santer et al, and the whole ‘modeling community’ get this far out in the statistical weeds? I can’t understand what they are thinking, but find it all truly and utterly bizarre.
.
What is even worse, this kind of statistical nonsense appears to inhibit the discounting (and elimination of public funding!) of useless models, and to discourage needed improvements in the better models.
ATTP,
Couldn’t resist the opportunity for snark, could you.
DeWitt,
Why would you regard my comment as being snarky? DY has publicly stated that Gavin Schmidt is making embarrassing mistakes and that he has more expertise than Gavin. Why wouldn’t he go and provide his much needed advice? He doesn’t have to, though; I was simply suggesting that he could try.
Anders,
.
.
Sure, because [edit: if not snark, then] it appeared to be an argument from authority, which is a type of argument most people seem to prefer to avoid, it being a widely known logical fallacy.
.
Truthfully though, I was wondering if you meant snark or not. While argument from authority is in fact a logical fallacy, I think it can be a helluva good heuristic sometimes. ‘Because Gavin’ depends on how impressive people think Gavin is I guess. I don’t know all that much about him.
Mark,
You thought mine was an argument from authority?
Anders,
If we are talking about this:
Then yes. I think this is either snark or an argument from authority, depending on whether or not you intend it as a serious argument.
Mark,
I’m confused about who you think I was suggesting has authority.
I’m sorry Anders. Maybe I was smoking crack (I don’t actually smoke crack, I use it as a metaphor for being insanely off base – being insanely off base is something I’ve got a passing familiarity with 🙂 )
So, first off I thought your comment was sarcasm. According to merriam-webster:
I thought this because Gavin Schmidt has impressive credentials;
If David Young has similar credentials, I am most disgustingly ignorant of the fact.
.
SO, if you intended your argument to be serious, you were suggesting that Gavin has authority because of his credentials.
If you did not intend your argument to be serious, still; the snark comes from the fact that even though argument from authority is a fallacy, it is a pretty good heuristic.
.
Are you really confused, or are we playing games now? Not rhetorical.
.
Thanks Anders.
Mark,
I’m struggling to see how you’ve interpreted it this way. I think my comment was self-evident. DY has publicly claimed to have more experience than Gavin (and, yes, as I understand it, DY does have relevant expertise) and that Gavin is making embarrassing mistakes.
Wow, I made a crazy connection there that you didn’t intend. Well, ok. It’d be awful, wouldn’t it, if you were mocking me by playing dumb, considering the conversation we just finished about good faith [edit: calling people liars instead of seeking clarification in good faith]. It’d be like punishing people for not doing what Brandon did.
I’ll take your word for it, thanks Anders. Forget I said it.
Before I would agree with the logic of authoritative correctness I take into account:
1) Is the authority refuted by any other individual that is likely to have equivalent skill and knowledge on the stated issue?
.
2) Is the authority so qualified that they can be assumed immune from honest mistake?
.
3) Are there any ulterior motivations possibly distorting the truth of the authority’s assertion?
.
4) If #3 is Yes then has the authority shown any past bias, or has there been bias among their peer group?
.
5) If #4 is Yes then how testable is the authority’s assertion?
.
If the authority was aided in gaining their authority by political favor, and there is motivation for bias, and there are past examples of actual bias, and the assertion is generally untestable except by the authority, then the authority in itself lends no weight to the assertion.
.
Because of the above I think that science is not settled until one’s most skeptical opponent can have the opportunity to reproduce the results. Denying one’s skeptics that opportunity by smearing them, calling them mentally deficient or delusional, for example, does not add support to results.
.
Numerous contributions have been made by individuals outside of their main fields of study, sometimes because they are outside of the common assumptions of the mainstream, so that can be a good thing. The meteorologist Wegener telling the geologists what for was great.
.
But is Schmidt, strictly speaking, a climatologist?
.
BA (Hons) in mathematics at Jesus College, Oxford
PhD in applied mathematics at University College London.
Ken Rice,
“Gavin is making embarrassing mistakes.”
.
Well, I very much doubt Gavin is going to admit to embarrassing mistakes unless under extreme duress. My take is that Gavin is, shall we say, more sensitive to the politics than most. And that is not intended as a criticism per se, but rather as an admission that Gavin runs an organization which is dedicated to advancing ‘the consensus’ of GCM projected warming, and to using climate model projections in support of political efforts to reduce fossil fuel use. Seems to me science took the back seat of Gavin’s car a very long time ago.
DeWitt Payne (Comment #147283),
.
.
Whether beer brewers admit it or not, they all use yeast to ferment the wort.
.
.
That would be ideal. All we need do is wait 30+ years to get a climatically relevant interval to test against.
.
OTOH, we wouldn’t need to be so reliant on model fidelity to reality if we weren’t driving changes to reality.
.
.
Anders covered this, but I’ll chime in: absolute TAS has a ~7 K range for the CMIP5 ensemble: https://drive.google.com/file/d/0B1C2T0pQeiaSSGFhdjlnd3hkX0U/view
.
I gave Gavin Schmidt a bit of an earful about this: http://www.realclimate.org/index.php/archives/2014/12/absolute-temperatures-and-relative-anomalies/comment-page-1/#comment-621580
.
Let me attempt to focus this rant into a constructive question or two. Where can an avid but amateur hobbyist such as me go to get my mind around what due care has been taken for other metrics, either global or regional? A concise list of other metrics would be a good start, but I’m particularly interested in precipitation and ice sheets. If you don’t think those two are the most particularly interesting, what two or three metrics top your list as the most critically uncertain?
.
[Response: Surface mass balance on the ice sheets is a good example. As are regional rainfall anomalies, sea ice extent changes etc. Basically anywhere where there is a local threshold for something. Most detailed analyses of these kinds of things do try to assess the implications for errors in the climatology, but it is often a little ad hoc and methods vary a lot. – gavin]
.
I can’t exactly complain about the unvarnished nature of his response.
Turbulent Eddie (Comment #147304)
“But is Schmidt, strictly speaking, a climatologist?
BA (Hons) in mathematics at Jesus College, Oxford
PhD in applied mathematics at University College London.”
Yes.
I wish I had his maths background myself.
Does that make him a good climatologist?
–
“In my TED talk I specifically said (pace Box) that all models are wrong. You are also myopic if you think that observations are perfect or that the experiments we perform in order to compare to reality are ideal. Indeed, all of those things need improving as well.
But none of that undermines the fact that CO2 is a greenhouse gas, we are putting a lot of it out and the planet is warming (and will warm more) as a result. – gavin]”
–
Strange he does not see that it all undermines the premise, if only a little bit.
–
“a model run will be [statistically*] warmer than an observed trend more than 99.5% of the time;” the Terminator.
Ken Rice and Mark,
You did not say anything about McIntyre’s detailed argument that Schmidt and Santer were guilty of an embarrassing statistical mistake that remains uncorrected in the literature to this day. It’s not my authority at issue, but a lot of very skillful statisticians vs. a couple of amateur statisticians who call themselves climate scientists. Of course, the tactic from Ken and fellow travelers has always been to ignore these issues. The track record is not too good however, for example on the uniform prior issue where climate scientists were clearly wrong and I think no one argues it anymore. But I doubt if Schmidt has admitted any error. His track record is not good here.
I note that neither of you made any substantial technical argument. McIntyre’s post shows that in fact GCM’s perform poorly even in the gross metric of Global Temperature. If you have a disagreement with that post, please put it forward or else just admit the point.
The problem here is that no one can make a real technical argument that GCM’s are effective for climate. Instead, most just quote authorities, who themselves have no real arguments. Schmidt’s argument is “every time I run the model, I get a reasonable climate.”
There are of course good technical arguments as to why GCMs might not be very good and might have large errors. Gerald Browning has made some of them. We (yes, I have a lot of collaborators) have made some of them in a new paper we are submitting for publication. If either of you want to read it, I can send you a copy.
Ken Rice, you are particularly disingenuous on this issue as you have a history of false claims about GCM skill that quotes the literature out of context. As always, the propagandist never admits error, he just continues to assert untrue things.
And of course, Rice knows I have sent some papers to Schmidt quite a few years ago. His suggestion that I just bop over to Real Climate is disingenuous and shows his bad faith.
Yes SteveF, there is no reason whatsoever to regard the set of model runs as a random sample from some mythical master supply of all possible model runs. It’s nonsense. However, one thing that can be said is that the model errors cannot be less than the range of results shown by such an analysis. Since the models are so heavily tuned, the true error or range of results is almost certainly much larger. We employed a similar method for our new uncertainty paper, while admitting the problems implied by the method and suggesting how to get better results in the future.
Just for the record, it is pretty clear that Schmidt is a very good mathematician, with a good grounding in PDE theory and numerical practice in the limited world of General Circulation Modeling. He is not a statistician and is relatively inexperienced in classical computational fluid dynamics, which is a very large and well developed field in its own right. GCM’s are a very small niche part of that field.
I also agree with SteveF that Schmidt has assumed a role in “communicating” climate science that casts doubt on how forthright he will be about problems with climate science.
David,
.
Yes.
.
Speaking for myself, again, yes.
.
Perhaps you [are] under the impression that I am trying to argue on behalf of Anders point (whatever that might be). If so, you are mistaken.
.
Thanks David.
Speaking of Gavin Schmidt’s infallibility reminds me of a thread here a few months ago on UHI where I had been cautioned by Steven Mosher and DeWitt that the satellite’s LT temp is not the same as surface temp. When I asked for the correction factor there were crickets, then shrugs. Later I found a thread at CA from 1-2011 where McIntyre had noted Schmidt and Lindzen were in agreement that the surface to LT factor was 1.4.
.
Down-thread from Mc’s post Steve Mosher and Robert Way were just about to do calculations based on the acceptance of the 1.4 factor when a lurker tagged “Gavin” popped in and said the factor was 0.95, which was instantly accepted by all, strangely, including McIntyre who said he would edit the post.
.
I found this a mindblowing climate science moment.
.
I am supposing that when it was beneficial to the consensus to have the factor high, as an explanation for the slow-to-warm SST, they had math that supported 1.4. Lindzen’s acceptance of this metric likely alarmed Schmidt, rightly, realizing the implications of giving support to the suspicions of UHI-contaminated land records as the cause of the satellite indexes’ growing divergence from the surface indexes. I am guessing the 0.95 represents the ratio of the temperature at a 2 km high average LT (after the lapse rate drop) to the surface temperature (~272/285 deg K), and thus the fraction of warming at that height for 2K of surface warming. This assumes the temp at the TOA is a constant proportional fraction of the average surface temp.
.
The 1.1C from doubling CO2 without feedbacks I’ve yet to see challenged. Yet, since it’s not enough warming for political action it’s by definition insignificant. Enter the bias alarm. If it is true GCMs are heavily tuned then I would be inclined to be skeptical that they might be nothing more than a shill for scientific assertions regarding water vapor amplification and cloud positive feedback, and anything else that leads to high ECS.
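For reference, that no-feedback number is roughly what falls out of a Planck-only back-of-envelope; a minimal sketch under the usual textbook assumptions (a 3.7 W/m2 doubling forcing, and a Planck response taken either from 4*sigma*T^3 at a 255 K emission temperature or from the often-quoted ~3.2 W/m2 per K Planck feedback):

```python
# Back-of-envelope for the ~1.1 C no-feedback response to doubled CO2.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant
F2X = 3.7          # assumed doubling forcing, W/m^2

planck_255 = 4 * SIGMA * 255.0 ** 3   # ~3.76 W m^-2 K^-1 at the emission temperature
print(f"dT ~ {F2X / planck_255:.2f} K (from 4*sigma*T^3 at 255 K)")
print(f"dT ~ {F2X / 3.2:.2f} K (from a Planck feedback of ~3.2 W m^-2 K^-1)")
```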
Ron: The histograms in Figure 1 in the introduction to Marotzke & Forster (2015, Nature) clearly demonstrated the failure of climate models to hindcast 15-year warming trends that weren’t known at the time climate models were being refined. IMO, this flawed paper did nothing to explain that failure. (The regression residuals from a flawed regression equation cannot be interpreted as unforced variability, and their regression did not separate forced and unforced variability. The ERF term contained unforced variability.)
http://people.oregonstate.edu/~schmita2/Teaching/ATS421-521/2015/papers/marotzke15nat.pdf
http://www.nature.com/nature/journal/v517/n7536/full/nature14117.html
My comments about models predicting too few or too many El Ninos were based on Kosaka and Xie (2013)
http://scholarspace.manoa.hawaii.edu/bitstream/handle/10125/33072/Kosaka&Xie2013.pdf?sequence=1
Ron,
It should be possible to calculate from climate model or reanalysis data the value of tlt for a given surface temperature, using the published satellite MSU weighting factors, and thus a ‘correction’ factor. In fact, it should be easier to do that than the reverse. Any empirical calculation based on satellite and surface observations suffers from the problem that both probably have systematic biases.
Whether Gavin did that or not, I don’t know. But it would be relatively easy for him to do. McIntyre would have access to “Gavin’s” email and IP address from his post to verify his identity.
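For what it’s worth, here is a minimal sketch of the kind of calculation Nick describes, with made-up layer weights and trends standing in for the published MSU/TLT weighting function and real model output (both of which would be substituted in practice):

```python
import numpy as np

# Hypothetical example only: the layer weights and trends below are placeholders,
# not the published RSS/UAH TLT weighting function or any real model output.
pressure_hpa  = np.array([1000, 850, 700, 500, 300, 200])
tlt_weights   = np.array([0.10, 0.25, 0.30, 0.25, 0.08, 0.02])
tlt_weights   = tlt_weights / tlt_weights.sum()        # normalise the weights

# Temperature trend at each level from a model or reanalysis, K/decade (made up)
trend_profile = np.array([0.15, 0.17, 0.18, 0.16, 0.10, -0.05])

surface_trend = trend_profile[0]                       # trend at the lowest level
tlt_trend     = np.dot(tlt_weights, trend_profile)     # weighted, satellite-like trend

print(f"TLT trend    : {tlt_trend:.3f} K/decade")
print(f"Surface trend: {surface_trend:.3f} K/decade")
print(f"TLT/surface  : {tlt_trend / surface_trend:.2f}")   # the 'correction' factor
```

The point is only that any such “correction factor” falls out of a weighted vertical average, so it depends on the assumed profile rather than being a single universal number.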
and Then There’s Physics (Comment #147280)
–
“Well-estimated global surface warming in climate projections selected for ENSO phase”, Lewandowsky et al.
This is the paper I was thinking of,
–
“We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends,”
–
We pick out those models which accidentally fit the observed temperatures and then show that they give a good estimate of those temperatures.
And discard the rest for the purpose of this study.
I.e., we pick out the apples and show that they look like apples?
Come on, ATTP.
Is this fair?
–
Worse, if possible, in the paper: “There are several different approaches that have been used to bring the models in to phase with the real world.”
–
Alternatives.
1. “The desired solution to this problem is to do a climate forecast, not a projection. For a climate forecast one would test models that were initialized in 1998 and run with ensemble perturbations designed to track the major decadal instability modes (ENSO, PDO) in the ocean.”
–
Does anyone know what this means? When is a forecast not a prediction? And tracking for what purpose?
–
2. Forced/restored projections: “Another approach to ensure that the models have the same ENSO phase as the real world is to impose observed sea surface temperatures (SSTs) or observed winds in the tropical Pacific region of the model.”
–
I think this means you add in the real-world observations as they occur in place of the model’s own evolution, which somehow defeats the purpose of having a model in the first place.
–
Though one must admit Lew has a statistical background. Perhaps he meant to say there are several different approaches that have been used to bring the real world into phase with the models.
angech,
That isn’t what they did. They selected models in which the internal variability was in phase with what actually happened.
Again, this is referring to the internal variability. Models cannot – at this stage at least – predict when an ENSO event will happen. Therefore if you run a large suite of models, the ENSO events in the models will happen at different times. All they’re suggesting is finding a way to consider those that produce such events at the same time (or close to) the times they happened in the real world.
They said projection, not prediction. The point here is that a projection is conditional on your assumptions (such as the timing of internal variability and emissions) matching what happens in the real world. If they don’t, your model may still be a good model, but it won’t quite match reality because your assumptions didn’t match what happened in the real world. If, however, you can make it more likely that your initial assumptions will match the real world, you can start making actual forecasts, rather than projections.
No, I think it means initialising them for decadal predictions using the known SSTs. I don’t think it means updating the SSTs all the time with what is known.
SteveF (Comment #147185)
May 5th, 2016 at 9:11 pm
“Almost certainly”??? Glad to know that the science is settled …
Steve, thanks for the work you’ve put into this. Here’s the problem that I have with your explanation. It ignores the fact that the climate RESPONDS to changes. So for example, as I’ve shown, in the tropics when it gets warmer, inter alia the following things happen:
1) Thermal cumulus clouds form earlier in the day and are denser, increasing the albedo. This is shown clearly in the CERES data. So as warming increases, solar forcing decreases … where is this in your analysis?
2) Thunderstorms form earlier in the day and are more frequent, increasing evaporation and both sensible and latent heat loss to the atmosphere and cooling the surface in a variety of ways. In addition, thunderstorms are not simple feedback—they can cool the surface BELOW the initiation temperature. Where is this in your analysis?
3) In all areas of the planet, when it is warmer, “dust devils” form earlier in the day, increasing sensible heat loss from the surface. Again, this is missing from your worldview.
Between them, these changes strongly cool the surface, and this is far from an exhaustive list.
Steve, where is any of this in your formulation? Yes, as CO2 increases, downwelling forcing increases … but that doesn’t mean that the earth will warm. Your analysis totally ignores the many, many emergent phenomena that act to cool the earth when it is warm and that warm it when it is cool.
Let me recommend a quick read of my post called “Emergent Climate Phenomena” for an overview of the difficulties with your “simple physics” style of analysis.
I’m more than happy to provide citations for all claims above.
Best regards,
w.
Well, for some reason my link didn’t work. Emergent Climate Phenomena is at
https://wattsupwiththat.com/2013/02/07/emergent-climate-phenomena/
w.
Willis,
‘Yes, as CO2 increases, downwelling forcing increases … but that doesn’t mean that the earth will warm.’
Really? Then why does the Earth warm in the day and cool in the night? I think the answer has to do with changing downwelling radiation.
.
I am quite aware of your thinking on emergent phenomena which ‘actively cool’, and your many times stated ‘governor theory’. As I think I have told you in the past, passive responses (cloud formation, dust devils, etc.) most certainly are not acting as a ‘governor’; governors are active controls (e.g., PID controllers) with a set point, and they do not allow significant deviation from that set point. The Earth is nothing like that: we can observe a huge range of temperatures over the seasons in many places, so we know that the Earth’s surface temperature responds, and rather dramatically, to changes in solar energy input. I do not doubt that there are many physical processes where heat loss from the surface changes with changing temperature, but I also do not doubt that the surface temperature of the Earth does respond to changes in ‘forcing’, just as I do not doubt seasonal responses.
.
Regardless of emergent processes, the heat balance based estimate of a response to forcing from GHG’s does not need to explicitly incorporate internal processes like cloud formation, dust devils, etc. Neither does the calculation need to explicitly include Hadley circulation, ocean heat transport, and a hundred others, even though we recognize these natural processes are responsible for a huge amount of heat transfer. The internal processes are irrelevant in making an estimate of the overall response. And it is just an estimate, with considerable uncertainty.
.
Willis, you do a lot of interesting work, and many of the things you write about are informative. But claims of active control of temperature by passive processes (governor theory) are neither interesting nor informative. They are just mistaken.
Ron Graf
“When I asked the correction factor there were crickets then shrugs”
With good reason. They are different places – there is no reason to expect a constant ratio. What is the ratio between New York and Sydney? You’ll have trouble finding a single number anyway, because of LT changes. UAH V5.6, operative then, had a trend from 1979 to now of 0.15 C/decade; now it is 0.12 – a 25% difference, which makes a big difference to a notional factor of 1.4.
But you’ve mis-quoted Gavin. He quoted a number of 0.95 over land. Over sea (the main part) he said that it depends on moist adiabat, and since it’s hard to put a number on that (he didn’t), no ratio is then quoted.
…and Then There’s Physics (Comment #147317)
“I don’t think it means updating the SSTs all the time with what is known” seems at odds with
–
“to ensure that the models have the same ENSO phase as the real world is to impose observed sea surface temperatures (SSTs) in the tropical Pacific region of the model.
this method requires that the model SST field be restored rapidly to the observed fields.”
–
Frank (Comment #147285) “For models to be useful, they must be able to hindcast accurately without cherry-picking any aspect of their performance.”
–
Here is the nub of the argument ATTP should use. Models cannot be expected to be correct in predicting unpredictable events like the next El Nino. So models are allowed to be wrong on natural variation.
Where they are not allowed to be wrong is in producing unreal climate sensitivities when examined over the longer time frame, subtracting the missed natural variations both ways.
The model ensemble is collectively wrong.
angech,
“Where they are not allowed to be wrong is in producing unreal climate sensitivities when examined over the longer time frame and subtracting the missed natural variations both ways. The model ensemble is collectively wrong.”
.
Yes. Most sensible thing I have seen you write.
angech,
Okay, I stand corrected.
I do not think this conclusion can be drawn yet. Natural variations in the real world cannot be subtracted both ways.
Ken Rice,
Earlier in the thread you appeared to agree that the projected rate of warming, combined with run-to-run variability, in at least some of the models (the more sensitive ones) excludes the measured warming rate, suggesting those models are too sensitive to GHG forcing. Now it appears (#147324) that you are not ready to reject any of the models. Which is your understanding?
SteveF (Comment #147323)
“angech, Yes. Most sensible thing I have seen you write.”
–
Thanks, I think. A fluke.
–
Would like to ask DeWitt, re PIOMAS versions 1 and 2, why they persist in using version 1, which has an error.
– Then I will try to take a 3-day breather.
– Thanks for contributing here, ATTP. There is common science achievable even if we see different end points.
SteveF has touched on an important notion that you and others found interesting and relevant. Nearly everyone has had something sensible to say.
SteveF,
Actually I said that there appear to be some models that – if you align the internal variability – appear to match observations. I don’t know what one can say about the others. I don’t know if anyone has convincingly shown that we can eliminate some of the models for which this may not be the case.
Anders,
You don’t know what one can say about the others. Why not? Not rhetorical.
Willis,
The convective processes you mention are very fast responses. So if they were truly effective, you should be able to answer why summer is still hot and winter is still cold.
But also, consider: you are making a case for climate change, specifically change that negates warming.
I think more of the converse is true: earth will warm, but climate won’t change much ( because global average temperature is not that significant to climate ).
Nothing prevents warming, nor climate change, or some combination, but the case for global warming is robust while the case for climate change appears weak if not altogether contradicted by observations.
Sorry to tease, but I’m working on a look at AGW in the Context of the Glacial Cycles, something widely studied, of course, but which I believe will address the relative extents of warming and climate change.
I dislike both Schmidt’s and Christy’s plots, because they fail to more broadly examine the nuances of temperature trends for the various latitude bands versus pressure.
.
Such nuances both verify and falsify the hypotheses produced by the GCM runs.
.
Verifying: stratospheric cooling with a max cooling near the stratopause, Arctic warming in association with Arctic sea ice loss, and a generally consistent global average surface temperature rise
.
Falsifying: the stark lack of the large ( more than half the globe ) Hot Spot.
.
Evidently, the lack of a Hot Spot is related to failings of the dynamics of the models, rather than the radiative physics. The models suffer from a false feature, the Double ITCZ Problem. ( google it )
.
A double ITCZ did appear for a few months during the end of the 97/98 ENSO, but is rare.
.
The fact that the GCM runs falsely create a double ITCZ might mean they also falsely warm the upper troposphere at twice the rate.
.
This has large implications.
.
The Hot Spot is what provides the only significant modeled negative feedback ( the Lapse Rate feedback: Soden & Held, 2006 ). Since this feedback is not occurring, it would imply that surface warming should be unrestrained and even larger than modeled, but it is not. In fact, surface warming is at the low end of model projections.
.
So there is missing heat, not just in the global mean, but in the various internal processes of the models. As Gavin might say, that doesn’t negate the likely warming from CO2, but the models are significantly screwed up.
Ken Rice,
“We can’t rule out that internal variability simply acted in such a way that what happened in reality lead to temperatures that tracked along the lower bound of what the models suggest. I don’t know if this is the case, and – given the range of ECS values suggested by the models – some are clearly too sensitive (I think). ”
.
“Actually I said that there appear to be some models that – if you align the internal variability – appear to match observations. I don’t know what one can say about the others.”
.
I must admit to being a little confused by the above two comments. Are some models clearly too sensitive or not? (not rhetorical)
angech:
You can’t expect the models to predict the specific pattern of the ENSO, but they must be able to reproduce its features, namely amplitude and frequency.
See for example the AR4 writeup.
Turbulent Eddie brings up the double ITCZ and the “hot spot” seen in models but not data.
My opinion is the latter is a more fragile test because the actual trend in temperature of the upper troposphere is less well understood.
angech,
Why do you think PIOMAS is using version 1? According to their website, they are using version 2.1. Version 2.0 had an error that only affected the data from 2010-2013. That’s been fixed.
Arctic sea ice volume is still at or near record lows. But it hasn’t actually crashed. May and June will be the test.
I notice that NSIDC has given up, for the moment, updating their near real time sea ice product. Cryosphere Today ought to do the same. The data they are using are fatally flawed. JAXA and MASIE still seem to be providing valid data.
angech,
The last PIOMAS data file I downloaded that was version 2 was in January, 2014. All the rest are labeled as v2.1. Perhaps they forgot to change a label on one of their graphs, but the data are all v2.1.
Willis, I have an idea/challenge to go onto your list of analyses ( which is your appreciated forte ).
.
I have seen some of your posts analyzing monthly ceres data.
.
The idea is this: for some period of years ( the satellite era is good ),
1. access the monthly global average surface temperature ( Tsfc, in degrees K, absolute, not anomalies ). Reanalysis or other is fine. ( so you have hundreds of data points for all the monthly absolute data values )
2. access the monthly global average outgoing longwave radiance (OLR) at the top of the atmosphere (TOA). CERES or reanalysis is fine.
3. From the OLR, compute an effective radiating temperature (Te in degrees K) by solving the Stefan-Boltzmann equation for T (Te = fourth root of [OLR/sigma]).
4. For each month’s data, plot a scatter point of Te versus Tsfc (a square plot from 200K to 300K on each axis)
5. On the plot, include the Unity line ( where Te = Tsfc ).
6. Also include on the plot, a linear regression of Te versus Tsfc
Examine the results
.
I believe this plot will be valuable and instructive for everyone following the issues.
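A minimal sketch of steps 1–6, with synthetic placeholder series standing in for the real CERES/reanalysis data (only the Stefan-Boltzmann step and the plotting are meant literally):

```python
import numpy as np
import matplotlib.pyplot as plt

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
rng = np.random.default_rng(0)

# Placeholder monthly series; real CERES OLR and reanalysis Tsfc would go here.
n_months = 12 * 15
phase = 2 * np.pi * np.arange(n_months) / 12
tsfc = 287.5 + 2.0 * np.sin(phase) + 0.1 * rng.standard_normal(n_months)   # K
olr  = 240.0 + 4.0 * np.sin(phase) + 0.5 * rng.standard_normal(n_months)   # W/m^2

# Step 3: effective radiating temperature from the Stefan-Boltzmann law
te = (olr / SIGMA) ** 0.25

# Steps 4-6: scatter plot, unity line, and a linear regression of Te on Tsfc
slope, intercept = np.polyfit(tsfc, te, 1)

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(tsfc, te, s=10, label="monthly means")
ax.plot([200, 300], [200, 300], "k--", label="Te = Tsfc (unity line)")
xfit = np.array([200.0, 300.0])
ax.plot(xfit, slope * xfit + intercept, "r-",
        label=f"fit: Te = {slope:.2f} Tsfc + {intercept:.0f}")
ax.set_xlim(200, 300)
ax.set_ylim(200, 300)
ax.set_xlabel("Surface temperature Tsfc (K)")
ax.set_ylabel("Effective radiating temperature Te (K)")
ax.legend()
plt.show()
```

With real data the interesting part is where the cloud of points sits relative to the unity line and how steep the fitted slope is; the synthetic numbers above are only there to make the script run end to end.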
SteveF,
I thought the “I think” would be a dead giveaway. I do think it quite likely that some of the models are too sensitive. I don’t think, however, that we’re yet in a position where we can formally reject those models.
Nick Stokes: “But you’ve mis-quoted Gavin. He quoted a number of 0.95 over land.”
.
Nick, the context of my point was that a climate science metric of central importance got officially adjusted down by a whopping 32% via a semi-anonymous blog comment. And it was immediately accepted.
.
Nick: “Over sea (the main part) he said that it depends on moist adiabat, and since it’s hard to put a number on that (he didn’t), no ratio is then quoted.”
.
Why did he give a land-only metric with a +/- 0.07C and provide no guidance on sea temp to allow a rough calculation? I think he wanted to weaken any assumptions and thus undermine an undesirable point that was about to be made.
.
Correct me if you disagree, but the ratio of sea surface to LT should be in the 1:1.1 to 1:1.2 range if the ratio for land is 1:0.95, since the oceans are expected to transfer more latent heat (humidity) to land than the reverse.
“Steve, where is any of this in your formulation? Yes, as CO2 increases, downwelling forcing increases … but that doesn’t mean that the earth will warm. Your analysis totally ignores the many, many emergent phenomena that act to cool the earth when it is warm and that warm it when it is cool.”
The problem, Willis, is that the “possibility” that emergent phenomena “may” act to cool the planet GLOBALLY (rather than just locally) is just that: a mere possibility.
The fact is we can explain the increase in temperature without regard to these “possibilities”.
To be sure, if you can provide explicit and well-defined descriptions of these (you haven’t), then one could test whether they improve model performance.
By explicit I mean math or code. Not words. Not a few charts.
Math. Code. Many GCMs are open source. Go code “emergent” phenomena into them.
“Any time range that includes the historical record before the model was run is meaningless because the models are tuned, whether they admit it or not, to match that record. ”
Models are not tuned to match the record.
1. There is very little evidence or documentation on HOW models are tuned. So blanket assertions about how they do it are kinda silly.
2. They IN FACT don’t match “the record”.
3. What evidence I’ve read suggests different approaches to tuning, some of which may include “checks” against a few decades of observations.
4. You want better tuning.
“Go code ‘emergent’ phenomena into them.”
.
First we need models that are validated. We kind of know the GCM clouds don’t work, for example. If they did work, Marvel could plug in high/low aerosols and test forcing efficacy.
.
Models should be able to provide quick bench feedback. The aim should be to gain a fingerprint of each type of change in forcing, especially if each has an individual efficacy dynamic. Look how much information in the historical data (besides the monthly global anomaly in Tavg) is not used. We should look at trends in Tmax, Tmin, the diurnal temp range trend, the diurnal wave function, the seasonal wave function, precipitation associations, and do this for every classification of station environment. This would provide scores of matrices in which to separate out UHI from micro-climate from CO2 from solar intensity.
.
Perhaps somebody is doing this, because this is where the science can come in. Saying we have to wait 30 more years to see if the models are definitely 95% wrong projecting GMST is lame, IMO.
Mosher,
I didn’t mean that models were fine tuned to match the historical record as closely as possible. Obviously, they aren’t and probably can’t be because of things like ENSO and volcanoes. However, I seriously doubt that a model that was grossly different would see the light of day. As evidence of coarse tuning, at least of older models, we have the high aerosol forcing, high climate sensitivity vs low aerosol forcing and low climate sensitivity correlation. That appeared to be 100% related to explaining the slowdown in global temperature increase from the late 1940’s to the early 1970’s. Heaven forfend that there might be quasi-periodic oscillations that aren’t modeled.
There actually seems to be quite a bit of detail available as to how models get tuned. See for instance Mauritsen et al 2012.
As I commented here, it doesn’t seem trivial to tune models to match GMST.
When you do that, I would call that “fitting the model to the data”, which is a different thing than just tuning.
Tuning gets done for a variety of reasons, as explained in the Mauritsen article. Here’s Mauritsen’s take on how well the models represent observations over the period where tuning occurs (emphasis mine):
They may not “match” the record (in the sense of curve fitting), but there certainly is a fair amount of fiddling that goes on. We can call it fiddling, if that makes the people engaged in lawyering happier.
Indeed.
“There actually seems to be quite a bit of detail available as to how models get tuned. See for instance Mauritsen et al 2012.”
Note the plural. The paper cited, however, is about one model.
“There actually seems to be quite a bit of detail available as to how A model got tuned. See for instance Mauritsen et al 2012.”
The tuning for CCSM, for example, is different.
My point is pretty simple.
It would be good to see the before and after for the tunings along with the knobs turned.
THEN, we could make statements about ‘the models’ or ‘models’.
http://www.cgd.ucar.edu/staff/gent/ccsm4.pdf
Steven Mosher:
The tuning is about one model. The paper discusses the results of many models, hence the appearance of the statement “It seems unlikely that the anti-correlation between forcing and sensitivity simply happened by chance.”.
There don’t seem to be many models where the tuning is discussed transparently. AR5 WG1 CH9 references just one other:
Hourdin et al 2013. What they say is:
It should be noted that multiple runs by the same model are not always possible, so “curve fitting” types of tuning cannot even be done in many cases. However, most climate models are themselves derived from previous models, and obviously the experience from the previous model is carried forward (including parameterizations and choices of parameters). So tuning gets done, but it’s not as simple as “knob twiddling”. Even Mauritsen et al 2012 only used a single-run output from the model (MPI-ESM) they report on in depth.
The point though is that the “twiddling” involves how the physics gets parameterized, as well as the choice of parameters for a particular model. Thus, while I agree “it would be good to see the before and after for the tunings along with the knobs turned,” in many cases this apparently is not available even to the modelers themselves.
WRT tuning of GCMs:
Of course the modelers are aware of the historical temperature record; it is (IMO) absurd to suggest that they do not ‘adjust’ to better conform with that record. After all, were a model to be nowhere near the historical record, how could they NOT make adjustments to get better agreement? (IMO, they couldn’t)
.
The more insidious ‘tuning’ is in assumed aerosol offsets, which can, if ‘correctly’ chosen, make any model, no matter how absurdly wrong, appear credible. The GISS aerosol offsets are just about exactly 50% of the GHG forcing from 1900 to now…. so nothing in the historical temperature record can be used to validate (or invalidate) the GISS model. ‘Intellectually corrupt’ is, IMO, an understatement.
“it would be good to see the before and after for the tunings along with the knobs turned…”
.
Does a black box prevent fiddling or just make inner analysis of fiddling more difficult? (not rhetorical)
.
Should a model’s construction be undertaken without a practical plan for swift validation? (slightly rhetorical)
I think Carrick said it well; the ‘lawyering’ seems pretty silly and sad to me. I don’t quite fathom why it’s such an ordeal to admit the simple and evident fact that the models seem mostly to have been running hot. I’ll admit to some confusion as to whether or not this has been remedied, and if remedied, whether enough time for any meaningful validation has passed. But none of this changes that the models seem mostly to have been running hot.
.
The ‘tuning the models’ issue seems strange to me too. Why on earth wouldn’t the modelers tune to the record? [I know of no reason they shouldn’t.] What are they tuning to, if not the record? [I can’t begin to imagine.] Surely they are not adjusting settings and parameters to make the models not match the records, right? I think that’d be pretty absurd. I don’t get what the big deal is about admitting this either. It doesn’t have to mean anything nefarious that modelers tune to the record when the past record is all they have to work with, if they don’t want to wait decades at a time to see how changes or tunings affect model performance. Let me rephrase – if modelers aren’t tuning to the record, I think they ought to.
.
All the hemming and hawing on this thread about stuff that shouldn’t be controversial in the first place is pretty sad IMO.
mark,
The big problem seems to me to be that multiple different combinations of parameters produce about the same results. Seeing that, it’s hard for me to accept that models reflect the actual physics and aren’t just massive kludges that manage to get a few things right.
Which is not to say that kludges can’t be useful. But it’s something like getting the right answer on a math problem using incorrect methods. It happened to work once, but if you change anything it probably won’t do it again.
Tuning to improve performance notwithstanding; assigning aerosols whatever remainder value is necessary to get a specific result is also pretty silly, if that’s what’s really happening.
DeWitt,
I agree that not having a unique solution is a problem. Still, whatever the problems, I still don’t fault modelers for trying to make the models match the historical record. I don’t see what else they can do.
Anyways.
[Edit: I guess if that’s the case DeWitt then maybe trying to model without more information to constrain whats really happening is pointless, which might be the case. But if someone is going to model, then…]
DeWitt,
“But it’s something like getting the right answer on a math problem using incorrect methods.”
.
The difference is that the correct answer via the wrong methods on a math problem can’t lead to foolish, damaging, and costly public policy.
Parameterization is an entirely valid approach. One can sympathize with the modelers that they do not have the wealth of experimental data that modern semiconductor process modeling requires.
RB,
‘We just don’t know’ is an even more valid approach. If the modelers do not have enough solid data to make credible projections, then by all means, they should not make projections at all.
SteveF,
I disagree. This clearly falls under what can be done to assist ‘Decision making under uncertainty’.
Every time I read Mauritsen et al. (2012), the following passage jumps out at me:
.
.
Point (1) was a jaw-dropper the first time I read it.
.
The first part of point (2) is the crux of this conversation, and was the main topic of Marvel et al. (2016). The latter part often goes missed, or worse, is disregarded as an ad hoc “excuse” when papers such as Schmidt et al. (2012) are published: http://www.blc.arizona.edu/courses/schaffer/182h/Climate/Reconciling%20Warming%20Trends.pdf
.
My main point of all the above is to opine that contending the climate modelling community is somehow ignoring or otherwise sweeping issues under the rug is silly. And I think it’s beyond ridiculous to suggest that they aren’t constantly evaluating model output to observation in an effort to reconcile discrepancies. Finally, suggestions that there is concerted malfeasance amongst particular modelling groups to obtain a motivated and non-falsifiable conclusion with the only “evidence” being a review of publicly available data are not only beyond the pale, they also strike me as absurd.
.
I of course cannot prove a negative, so I offer instead a fitting version of Hanlon’s razor: never ascribe to malice that which can be explained by incompetence.
.
I offer a corollary: Never ascribe to incompetence that which can be explained by complexity.
.
I never tire of saying this: climate models arguably stink on ice, and will for the foreseeable future because modelling multiple planetary-scale phenomena isn’t a trivial undertaking. It therefore is probably most prudent to not have to rely so heavily on them by continuing to change the radiative properties of the atmosphere.
Brandon G, the jaw-dropper for me was earlier:
Controlling the global mean surface temperature and climate drift
This is a step or 2 (hundred thousand?) beyond the goal of “let’s try to slow down warming”, in my opinion. Unless you know of means to “control the global mean surface temperature”, of course.
.
If you do, it would be interesting to hear how we could control GMST. I’m not sure I’d believe them any more than the cold fusion claims, but hey – throw ’em out there!
.
Apologies in advance if I’m sealioning and you don’t want to address this exciting advance in climate control.
To clarify – “control the GMST” to me means the ability to turn it up, down, or keep it steady, which was how I read that. If they meant it like “control a forest fire” my bad on bias.
SteveF (Comment #147355),
.
.
Better would be “We don’t know AND we’re doing everything we can to find out.”
.
.
In an ideal world, everyone would agree on what constitutes “solid” data and “credible” projections therefrom. The real world isn’t ideal, and simply does not work that way on any complex policy issue. Think central banking and foreign policy for starters.
“Parameterization is an entirely valid approach. One can sympathize with the modelers that they do not have the wealth of experimental data that modern semiconductor process modeling requires.”
Yup. When I sat through my first briefing on backend modelling, I kinda wondered how chips even worked.
But they obviously worked.
Mark,
“The ‘tuning the models’ issue seems strange to me too. Why on earth wouldn’t the modelers tune to the record?”
Blog talk about tuning often lacks understanding of what models actually do, and that is why it is resisted. People see mainly surface temperature output, they know about surface temperature history, so why not just match them up? What’s wrong?
GCM’s solve a huge collection of equations describing the physics. Their evolution assumes that those equations remain valid. That is their essence. Now any huge set of equations is likely to be ill-conditioned. You think you have an equation to determine each variable, but some say much the same thing. Some particular pattern can be added to the solution without change. No uniqueness.
Then you must add other information. The classic case is boundary conditions. But it has to be done carefully. You mustn’t overwrite the physics that you are relying on to determine evolution.
With bc’s, it can still be ill-conditioned. You need to add more information, again without losing physics. GCMs used to find that TOA radiation didn’t balance, so they tuned by varying some cloud parameters. That loses some original physics, but it’s physics that was poorly known anyway. You add the extra info of what you know about TOA balance.
So what would happen if you just added in a requirement that surface station history be matched? GCM’s solve the heat equation, inter alia, and on land have a normal flux condition at the surface, zero or near. That fully describes that physics locally. If you constrain the temperatures, you’ll lose that. The system will meet the requirement not by improving atmosphere solution, but with spurious fluxes into the ground. You just can’t do it that way.
What you have to find is combinations of ill-determined outcomes, knowledge, and parameters (or something) that you can change without losing the physics that you need. That is what Mauritsen et al are describing.
TerryMN (Comment #147358),
.
.
I’m a little fuzzy about whether you’re talking about controlling GMST in climate models or the real system. I would naively say that controlling it in the models is less difficult.
.
.
Controlling interannual and interdecadal variability is quite beyond our present reach. Multi-decadal secular trends obviously are not — we’ve already been doing it by changing the radiative properties of the atmosphere.
.
.
Nah, I don’t think this is Sealion territory.
.
.
Goals differ depending on whom one asks, mine is stabilization. By that I mean little to no long-term secular trend as discussed above. There are arguments for bringing CO2 down to 350 ppmv, which would be expected to allow some cooling. In my estimation that is simply asking too much at present. WAG: bringing emissions to zero in a century seems doable without causing economic collapse, especially the sooner we get started.
Nick,
If you’d have put “attempt to” in the middle of “GCMs solve” throughout that post, I could agree with it 100%.
Nick,
Out and about, but thanks for your interesting and substantial looking resp.
Nick Stokes:
There is absolutely nothing wrong with this. You’d do that as part of the model verification process for virtually any modeling problem.
The problem is confusing model verification with validation (e.g., Brandon Gates using the period from 1861-2014 instead of say 2000-2014).
“Goals differ depending on whom one asks, mine is stabilization. By that I mean little to no long-term secular trend as discussed above. There are arguments for bringing CO2 down to 350 ppmv, which would be expected to allow some cooling. In my estimation that is simply asking too much at present. WAG: bringing emissions to zero in a century seems doable without causing economic collapse, especially the sooner we get started.”
Thanks Brandon – to me, there’s a wild difference between controlling C02 levels (sure, we can probably do that) and controlling GMST or any of the other 30 or so metrics that comprise the state of the climate, and/or current state, and/or whatever you/I/whoever wants to call it.
RB:
I’d say it “can be a valid approach”. It’s a method of approximation, so whether it is suitable for a given problem depends on how much accuracy you can achieve with it.
But there are many places where the parametric model can be shown to be equivalent to the Taylor series approximation of a more complex form. In cases like that, the radius of convergence can be well studied, and you can make precise statements about where the parametrization is strictly valid.
The operational question is “can you validate the model when you use a parametric form”? If you can’t, it’s not a valid approach (definitionally).
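A toy illustration of that last point, using a truncated Taylor series of exp(x) as a stand-in for a parameterization of a “more complex form”; the 1% tolerance and the function itself are arbitrary choices, not anything specific to GCM parameterizations:

```python
import numpy as np
from math import factorial

# Treat a truncated Taylor series as the "parameterization" of a more complex
# form (exp(x) stands in for some nonlinear process), and map out where the
# approximation stays within a chosen error tolerance.
def taylor_exp(x, order):
    """Taylor polynomial of exp(x) about x = 0, truncated at the given order."""
    return sum(x**k / factorial(k) for k in range(order + 1))

x = np.linspace(-3.0, 3.0, 601)
exact = np.exp(x)

for order in (1, 2, 4):
    rel_err = np.abs(taylor_exp(x, order) - exact) / exact
    valid = x[rel_err < 0.01]            # 1% tolerance -- an arbitrary choice
    print(f"order {order}: within 1% of exp(x) for x in "
          f"[{valid.min():+.2f}, {valid.max():+.2f}]")
```

The usable interval shrinks or grows with the order of the fit, which is the sense in which one can make precise statements about where a parameterization is strictly valid.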
Carrick,
“There is absolutely nothing wrong with this.”
There’s nothing wrong with looking to see. What can go wrong is adapting the model to force a match. You have to match the new physics you want to enforce with some other part of the model equations that you can afford to lose. With surface temperature that isn’t easy, and you’ll see that Mauritsen et al use other things (some use of SST).
Nick Stokes,
“What you have to find is combinations of ill-determined outcomes, knowledge, and parameters (or something) that you can change without losing the physics that you need. That is what Mauritsen et al are describing.”
.
Yes, it is a difficult problem. Not sure what ‘the physics that you need’ means. Seems to me the physics you need includes everything that goes into an accurate model. Fudges don’t make it.
.
No, the model projections don’t match reality. Will they ever? Don’t know, but there is zero reason to believe the model projections today; altogether too much fudge in the recipe I think.
Zero is an impossible goal, like setting zero as the legal level for contaminant concentration without considering available technology and cost/benefit, and I think you’re far too optimistic about the time scale absent some major breakthroughs or some major disasters.
TerryMN
Minnesota Terry, I’m sure many here have asked the question what can be done to optimally expend/conserve resources to prepare for the broadest range of likely futures (for posterity).
.
I believe many here feel the preponderance of scenarios do not call for urgent action and the de facto loss of liberty. Here are some:
1) Warming is by an insignificant amount, giving little time constraint. (The threat was exaggerated.)
2) Sea level rise issues can be managed by mitigation tech.
3) Increased polar precipitation can be seeded or occurs naturally to balance melt. — Freeman Dyson
4) Unforeseen event brings cooling.
5) Geoengineering is developed to increase albedo (reversibly and thus providing a control knob).
6) Resources were better saved to address greater problems of civilization.
7) The political “regulated intervention” did nothing to speed the advancement of the track we were already on.
mark bofill (#147349): “What are they tuning to, if not the record?”
As I understand it, modellers adjust individual processes within the model to match observations. For the sake of an example, let’s say evapotranspiration rate, perhaps as a function of CO2, temperature and humidity. [Just an example.] They don’t tune (directly at least) to the global temperature, possibly because there is a vast parameter space, with non-obvious connections to the global average temperature.
At the same time, it would be naive to think that the results of such experimentation are not examined to see if they correspond to “a reasonable climate”. As anyone who has dealt with hill-climbing algorithms can attest, sometimes to get to the global optimum one has to take backward steps.
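A toy illustration of the “backward steps” remark, using a simulated-annealing-style acceptance rule on a made-up one-dimensional objective; this has nothing to do with how any actual GCM is tuned:

```python
import math
import random

random.seed(1)

def objective(x):
    # Toy function with several local maxima; the global maximum is near x ~ 4.7
    return math.sin(3.0 * x) + 0.5 * x - 0.05 * x * x

x = 1.0                  # start in the basin of a merely local maximum
best = x
temperature = 1.0
for _ in range(5000):
    candidate = x + random.uniform(-0.3, 0.3)
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept *backward* steps with a probability
    # that shrinks as the "temperature" cools. The occasional backward step is
    # what lets the search escape a local optimum instead of getting stuck.
    if delta > 0 or random.random() < math.exp(delta / temperature):
        x = candidate
    if objective(x) > objective(best):
        best = x
    temperature *= 0.999

print(f"best x found: {best:.2f}, objective value: {objective(best):.3f}")
```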
RB,
“Decision making under uncertainty’
.
My experience is that it is usually best to reduce the uncertainty before making the decision.
.
But don’t worry, there will be high CO2 emissions for the next 15-20 years (or more). That will help reduce the uncertainty.
Nick,
.
Thanks for your response.
.
To the extent that I knew anything about what these models do in the first place, I agreed with you. The rest of what you said sounds reasonable enough. This leaves me wondering why you elected to tell me all this.
.
I guess I must’ve come across as if I thought it was an easy thing to do or something, or if I thought it was some simple curve fitting exercise. I didn’t mean to imply that, or to minimize the complexity at all. Most (all?) of what I was trying to get at is that I don’t see why it’s such a shocking or outrageous thing to suggest that modelers try to make their models match the historic record.
.
I think the models are cool, FWIW. If I was doing climate science I’d be a modeler, no doubt in my mind.
.
I appreciated your comment regardless; I don’t mean to come across as ungrateful. I always value hearing what you have to say.
.
Thanks Nick.
Harold,
.
.
Yes, I gathered this from reading Carrick’s link and from other occasions I’ve read about GCM’s. TY.
HaroldW,
Evapotranspiration rate is another of those anti-correlations like aerosol forcing and climate sensitivity. Models with a high rate of change of evapotranspiration with temperature have lower climate sensitivity.
Brandon G,
I wholeheartedly agree with this in the context of climate models. I don’t think there’s anything nefarious going on, it’s just darn hard to get right.
SteveF
Well, yes, that’s a decision too.
.
Of course, uptake is about 2.5ppm per year, so going to 2.5ppm per year is effectively no change – we don’t have to go to zero emissions. And of course, most nations have falling emissions rates already.
TE,
But uptake drops over time because it’s proportional to the difference in concentration between the atmosphere and, mainly, the ocean. An immediate 50% cut in CO2 emissions would cause atmospheric concentration to hold steady for a few years and then begin to climb again at a slower rate. But any cut is simply not in the cards, probably for decades to come. It’s possible, though, the rate of increase will slow some in the short term.
You’re right, though, we don’t have to go to zero emissions, just a few tenths of a percent of what we emit now. The geologic cycle can handle that. I don’t see that happening even in a century. If we’re lucky, the atmospheric concentration will peak at less than 800ppmv.
Let’s start the model with what we agree on: [natural] climate sensitivity due to a rise in CO2 without feedback factors.
All agreed; after all, we all know what CS is due to a pure doubling of CO2.
Good.
Except those who believe climate sensitivity is an emergent phenomenon; many here.
CS as an emergent phenomenon is the [emergent] CS that is produced after the model or nature does its trick and comes out the other end. Plug in the expected CO2 rise [as they do], then the other parameters [the huge collection of equations describing the physics].
“Now any huge set of equations is likely to be ill-conditioned.”
I do not think so.
We just do not know if we will have two or three La Niñas or El Niños in a row, as with a lot of other parameters.
We specify the model with three runs, neutral and one to each side, run for two years, and then give free rein. Every two years you readjust to reality and rerun.
In the meantime you check the difference with reality and reassess your parameters [guesswork based on physics].
You do not add in clouds or aerosols to cover your backside.
You add in clouds or aerosols as well as you know they occurred.
Then you can back check which parameter could be at fault and adjust for the next 2 year run.
Simple and hopefully already being done.
“If we’re lucky, the atmospheric concentration will peak at less than 800ppmv.”
.
DeWitt, I can’t tell if you are impersonating Paul Ehrlich or Eeyore. Things can change very fast once the technology is market competitive. Solar panels are going up all around our area now. Solar City is hanging out at Home Depot making what seems to be a successful blitz. I got rejected due to too much shade though. Fusion reactors and driver-less electric cars are going to happen. Earth will peak at 550ppm in 2080. Now it’s up to Lucia to preserve our predictions.
mark bofill (Comment #147375)
“I think the models are cool, FWIW.” – double entendre noted.
Nick Stokes (Comment #147362)
“Blog talk about tuning often lacks understanding of what models actually do, and that is why it is resisted.”
Very good post.
–
“You think you have an equation to determine each variable, but some say much the same thing.”
–
And? That is not a bad thing.
–
Some particular pattern can be added to the solution without change.
No uniqueness.”
–
I find this comment intriguing, like the dog that did not bark.
How can a particular pattern be “added to the solution without change?”
The pattern would be equivalent to zero. That in itself is unique, and hence impossible.
Even if it replicated another pattern exactly [perhaps what you meant], it must change the solution by its mere presence. After all, you would have 2a instead of 1a on one side.
–
Basically, if you find patterns that all say the same thing, you may be well on your way to solving climate models. Solutions are part of the question. Solutions that do not change are extremely important, like natural climate sensitivity. They are the skeleton, outline, and form, just needing a touch of filling in.
angech
“How can a particular pattern be ‘added to the solution without change’?”
Here’s a very simple case: solving y″ = 0 over three points x = 0, 1, 2 by finite difference.
The best approximation to y″ at each point is y_2 − 2·y_1 + y_0 = 0. Three equations in three unknowns, but all the same, so really only one equation. It is satisfied by any line – a sequence 1,1,1 or 0,1,2, etc. The DE is also satisfied by y = 0. The set of lines is the nullspace. You can add any line to a solution y (y + a·x + b) and the equation is still satisfied.
It’s a 2nd-order DE and you need two extra conditions, say values at each end. There is then only one line that satisfies them – a unique solution.
GCM’s are vastly more complex, but what goes on is similar.
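Nick’s toy case can be checked directly in a few lines; the matrices below just restate his stencil and boundary conditions:

```python
import numpy as np

# The same second-difference stencil written at each of the three points:
# y0 - 2*y1 + y2 = 0, three copies of one equation, so the system is rank 1.
A = np.array([[1.0, -2.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, -2.0, 1.0]])
print("rank of A:", np.linalg.matrix_rank(A))     # 1, not 3: no unique solution

# Any straight line y = a*x + b lies in the solution set (the nullspace):
x = np.array([0.0, 1.0, 2.0])
line = 3.0 * x + 5.0
print("A @ line :", A @ line)                     # all zeros

# Add two boundary values, y(0) = 0 and y(2) = 4, keep one interior stencil row,
# and the system becomes full rank with a unique solution (the line 0, 2, 4).
B = np.array([[1.0,  0.0, 0.0],      # y0 = 0
              [1.0, -2.0, 1.0],      # y0 - 2*y1 + y2 = 0
              [0.0,  0.0, 1.0]])     # y2 = 4
rhs = np.array([0.0, 0.0, 4.0])
print("rank of B:", np.linalg.matrix_rank(B))     # 3
print("solution :", np.linalg.solve(B, rhs))      # [0. 2. 4.]
```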
DeWitt,
800 PPM is much higher than I think plausible. The ocean’s capacity to dissolve CO2 is too large for that number to be reached.
Ron Graf,
Unless the solar panels are being installed with big battery banks, they won’t have as much impact on emissions as some may suggest.
SteveF,
You should read this.
A summary:
1. If we continue to emit CO2 into the atmosphere, atmospheric CO2 will continue to increase. Whether or not we reach something like 800ppm will therefore depend on how much we emit.
2. Something like 20-30% of our emissions will remain in the atmosphere for thousands of years (or, more correctly, the increase in atmospheric CO2 that will remain for thousands of years will be equivalent to 20-30% of our emissions). If we take 800ppm, then that would imply emitting a total of around 4400GtC; about 7-8 times more than we have to date. So, yes, seems very unlikely, but – in my view – this is because we’re unlikely to emit that much, rather than because the oceans would take up most of it if we did.
3. On the other hand, we also expect the airborne fraction to increase if we continue to follow a high-emission pathway (RCP6 or greater). Therefore we could reach 800ppm after emitting less than the 4400 GtC suggested above. If the airborne fraction remains at around 50%, then total emissions of around 2200GtC would get us to about 800ppm. However, if we then stopped emitting, it would – within a few hundred years – drop to around 540ppm.
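A quick arithmetic check of points 2 and 3, using the standard ~2.12 GtC-per-ppm conversion (my round-number assumption, not stated in the comment):

```python
# Rough check of points 2 and 3 above; ~2.12 GtC of emissions per ppm of CO2
# is a standard round-number conversion and is my assumption, not the comment's.
GTC_PER_PPM = 2.12
PREINDUSTRIAL_PPM = 280.0
TARGET_PPM = 800.0

excess_ppm = TARGET_PPM - PREINDUSTRIAL_PPM     # 520 ppm above pre-industrial
excess_gtc = excess_ppm * GTC_PER_PPM           # ~1100 GtC held in the atmosphere

# Point 2: only 20-30% of total emissions stay airborne long term (take 25%)
print(f"emissions for 800 ppm at 25% long-term airborne fraction: "
      f"{excess_gtc / 0.25:.0f} GtC")           # ~4400 GtC

# Point 3: ~50% airborne fraction while we are still emitting
print(f"emissions for 800 ppm at 50% airborne fraction: "
      f"{excess_gtc / 0.50:.0f} GtC")           # ~2200 GtC

# ...and the long-term level if emissions then stop and 25% of 2200 GtC remains
long_term_ppm = PREINDUSTRIAL_PPM + 0.25 * 2200.0 / GTC_PER_PPM
print(f"long-term level after emitting 2200 GtC: {long_term_ppm:.0f} ppm")  # ~540
```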
Ken Rice,
Current rock weathering rates globally total about 300 million tons per year of sequestered CO2. Were CO2 to double, the rate of weathering would also increase, with a doubling of that rate a likely lower limit, since weathering increases with CO2, rainfall, and temperature. So, there will be a significant continuing drawdown.
.
With regard to what fraction of CO2 will remain in the atmosphere for thousands of years: I am a lot more sanguine than you. Should it be necessary, humanity will reduce atmospheric CO2, reduce solar intensity reaching the Earth’s surface (very slightly!), or both, to control temperatures. Even simple things like biochar sequestration on farmland can have huge long term impacts, since carbon in char is effectively removed for many thousands of years.
SteveF,
My understanding is that the Archer et al. paper includes all those changes due to weathering. Bear in mind that 300 million tons of CO2 is about 1% of our current emissions. However, you are correct that if we stopped emissions, atmospheric CO2 would initially be drawn down quite fast (an e-folding time of about 100 – 200 years). However, it is still the case that the expectation is that atmospheric CO2 levels will be enhanced for thousands of years and that the amount of extra CO2 in the atmosphere (above the pre-industrial 280 ppm) will be about 20-30% of our total emissions (depending on what our total emissions actually are).
As far as what we could do: that 20-30% will remain in the atmosphere for thousands of years is based on us not actively doing anything to reduce it. There clearly are things we could do. We could also consider avoiding putting ourselves in a position where those become necessary. Each to their own.
However, the main reason that I highlighted that paper was because I do not think that your claim that 800ppm cannot be reached is correct (or, at least, is not consistent with our current understanding).
Ken Rice,
Current emissions are equivalent to a bit under 5 ppm of atmospheric concentration per year, of which about half remains in the atmosphere, so a ~2.5 ppm increase per year. If global emissions were to double, it would still take 80 years to reach 800 ppm. In light of the existing downward trend in emissions in developed countries, doubling emissions seems to me very unlikely, and even more unlikely considering that fossil fuel prices are likely to increase in the long term, making alternatives like nuclear power and renewables more attractive. Economics, combined with demographics, are destiny.
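The same kind of round-number check applied to the rate argument; with these inputs it comes out at roughly 80–85 years, consistent with the comment:

```python
# Round-number check of the rate argument above (all inputs approximate):
GTC_PER_PPM = 2.12
emissions = 10.0            # GtC/yr, i.e. a bit under 5 ppm/yr worth of CO2
airborne_fraction = 0.5     # roughly half of emissions stays in the atmosphere
current_ppm, target_ppm = 400.0, 800.0

growth_now = airborne_fraction * emissions / GTC_PER_PPM     # ~2.4 ppm/yr
growth_doubled = 2.0 * growth_now                            # ~4.7 ppm/yr
years_needed = (target_ppm - current_ppm) / growth_doubled   # ~85 years

print(f"concentration growth at current emissions : {growth_now:.1f} ppm/yr")
print(f"concentration growth at doubled emissions : {growth_doubled:.1f} ppm/yr")
print(f"years to 800 ppm at doubled emissions     : {years_needed:.0f}")
```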
SteveF,
Once again, I give you the PETM. Estimates for the injection of fossil carbon range from 2,000 to 7,000 GtC. Global temperature increased by at least 6K in less than 20,000 years and took ~100,000 years to get back to baseline. So far, in the Industrial Age, we’ve burned something on the order of 500GtC and there’s plenty left in the ground and possibly the ocean floor.
The PETM cuts both ways, though. It wasn’t a major extinction event unless you were benthic foraminifera and it was already several degrees hotter to start with than it is now. The argument is that the temperature is increasing faster now than then so adaptation will be more difficult. But, IMO, loss of habitat from human encroachment is far more damaging.
Ron,
Self driving electric cars would be a drop in the bucket and we don’t have the generating capacity to keep them charged. Solar City would go belly up in a heartbeat if net metering and government subsidies ended and they should. Look at Arizona. Net metering is immoral, as I’ve said before and subsidies are almost as bad.
In technology, small things, like computers and cell phones, can change rapidly. Big things can’t. They cost too much. It would take decades to build the infrastructure to build nuclear power plants fast enough to make a difference. The choke point is capital investment. There isn’t enough money to go any faster.
We still don’t know, and won’t for another decade, if fusion power is practical. I don’t think it will be, and the Greens would fight it just as hard as they fight against fission power.
We’ll muddle through somehow, though. We always have. But it probably won’t be pretty. Besides, we actually have bigger problems than climate change. I’d list some, but it’s too depressing.
DeWitt,
I have no doubt that had an advanced civilization been around during the PETM, they would have taken steps to control atmospheric CO2.
.
The Archer et al paper Ken Rice linked to assumes people will do nothing if rising temperatures become a problem. I don’t believe that is a realistic assumption; closer to an absurd assumption IMO.
DeWitt
Neven comments
“This year’s PIOMAS thickness trend line is slightly higher than those of [some] other years, but that’s because sea ice extent is extremely low right now … there’s no telling where [extent] may end up, especially if vast tracts of the (Pacific side of the) Arctic get covered in melt ponds and lots of melting momentum is built up.”
–
I thought the Pacific side melted out pretty quickly and early every year as there is just not that much ice there.
–
I was a little lost on V1/V2 and V2.0 v V2.1.
The Polar Science Center could be accused of using known wrong graphs in their information page on PIOMAS.
They say,
Version 2.1
“We identified a programming error in a routine that interpolates ice concentration data prior to assimilation. The error only affected data from 2010-2013. These data have been reprocessed and are now available as version 2.1”
But they do not show it. Instead they continue displaying the map for Version 2.0 [the error-laden one]:
“This time series of ice volume is generated with an updated version of PIOMAS (June-15, 2011).”
The update is for unrelated new parameters only and does not incorporate the correction of the programming error that led to V2.1.
They do admit, in fine print [small graph]:
“Fig 5 shows the differences in volume between Version 2.0 and Version 2.1 (click to enlarge).”
But they never show version 2.1.
– Unsubstantiated allegation?
PIOMAS February 2014 (upgrade to Version 2.1)
Neven says
“I’m not sure whether the PIOMAS sea ice volume anomaly graph has also been upgraded to Version 2.1 (it says V2 in the file name)”
So he knows it is wrong but continues to use erroneous data.
Chris Reynolds says
“the difference between PIOMAS V2.0 and V2.1 as a percentage of volume in the month and year stated, … this is shown to be large”.
Note unsubstantiated and edited rudely but surely I am wrong?
SteveF,
I agree with your latter comment. I suspect we won’t emit enough to get to 800 ppm. However, that doesn’t change that to do so would require emitting about 2000 GtC in total, which is just over 3 times what we’ve emitted to date. So, only unlikely because we wouldn’t do it, rather than because it just can’t get that high.
Not only do I think this misrepresents the paper, I don’t even understand how it could have done anything else. It is trying to present what will happen if we emit certain amounts of CO2. We may well not do so, but that doesn’t change that these kinds of studies give an indication of what would happen if we did. It is extremely difficult for science to incorporate what we might do. It would require subjective judgements. It is much more reasonable to present possibilities and leave it to others to determine the significance of those possibilities.
One of the key points in that paper is probably from the conclusion
This is pretty simple. Take 20-40% of what we’ve emitted, convert to ppm, and that is going to give the range of long-term atmospheric CO2 enhancement.
Ken Rice,
” It is trying to present what will happen if we emit certain amounts of CO2.”
.
Well, no. It is trying to present what may happen if humanity chooses to do absolutely nothing… not reduce emissions, not sequester carbon, and not control the intensity of solar energy reaching the Earth. In addition, if I read the Archer paper correctly, they choose to ignore sequestration by land plants when they calculate that 20%-40% residual value. Since our best estimates are that land plants are currently sequestering ~1.25 ppm worth of carbon dioxide per year (about 9 gigatons), it seems to me the decay curves in Archer et al are both somewhat speculative and pessimistic.
.
Archer et al come right out and say that they think we need to consider the very long term warming consequences (multiple centuries to multiple millennia) of fossil fuel use. I would reply that a projection of multiple centuries to multiple millennia, assuming humanity does absolutely nothing to address any problems over that period, is a bit silly and reflects a certain, err… green, POV.
SteveF,
My bad.
angech,
It looks like the SSMIS on F-17 has failed completely. Cryosphere Today has been stuck at 13.6761236Mm² Arctic sea ice area for three days in a row. JAXA uses a different sensor on a different satellite, so it’s still producing data. MASIE uses multiple sources so their data also looks to be still reliable. JAXA has been at record lows for some time. MASIE is close to record lows. Only 2006 is lower and its anomaly is going up while 2016 is going down.
I also don’t care about the graphs on the PIOMAS site. I download their daily data, which is labeled v2.1, and calculate my own anomalies. Since I still have v2.0 files, I can compare them with the current version sometime.
That carbon cycle review is quite candid about how little they know, and the disparities between models. Worse than climate models. Which is to be expected when modelling living systems.
CO2 emissions appear to have peaked three years ago.
TE,
Cite please. My search shows 2014 total emissions above 2013 by ~0.5% and official data for 2015 isn’t out yet. Or at least I haven’t found it. Now if you mean the rate of growth peaked, that’s possible, but we don’t have enough data yet to confirm that, much less to say it’s gone negative.
There have been temporary slowdowns in emissions before:
https://www3.epa.gov/climatechange/images/ghgemissions/global_emissions_trends_2015.png
There was a decline in the late 1970’s, for example.
angech,
I compared the data from v2.0 and v2.1. There is zero difference through the end of 2008. The difference is small in 2009 and my plot of the difference for 2010-2013 looks just like theirs, except mine is a bit noisier because I’m using daily data and they plot monthly. Tell me exactly which graph on their site you think is still using v2.0 data and I’ll check.
DeWitt Payne
I’m not so sure about the subsidies part. Comparing the cost of solar with that in Germany, I think some of the costs in the U.S. might be a result of the subsidies, similar perhaps to how the mortgage interest deduction has inflated home prices. In fact, the expectation is that there are substantial price reductions in our immediate future.
RB,
from your link: “He expects at least half of the 140,000GW of power capacity to be installed in the Middle East and north Africa in the coming decade to be solar.”
.
Since total US generating capacity is about 1,000 GW, that is a damned impressive projected growth rate. 😉
.
But even assuming an error of 1,000 times, installing even 70 GW of solar capacity in the next decade seems optimistic. Of course, if that 70 GW is peak power (and it probably is) then the real (average) capacity would be about 25% of that. Still a very rapid growth rate.
SteveF,
I cannot find the quoted interview, but MW seems likely. I suppose 25% of peak power is a fair number since the US average is ~5 sun hours per day.
BTW, in the last two years, the drop in installed cost has been driven by a fall in some of the soft costs.
Change in CO2 emissions flat for second year.
.
.
Yes, there have.
.
But there have never been slowdowns due to the secular factors of falling, slowing, and aging populations as there are today. These are truly unique times.
.
The systemic demographic factors have to do with transition from agricultural to industrial to information economies which would appear unlikely to ever revert.
DeWitt: “We’ll muddle through somehow…”
.
The Doomsday clock is currently 3 minutes to midnight. So, I suppose agonizing over the prospects of the next century could be looked at as irrational exuberance. But I’m glad to see you cheering yourself up, DeWitt.
RB,
The recently commissioned Noor I solar thermal plant in Morocco is rated at 160MW peak. You would need at least 400 plants like that to be built in the next decade to reach the incredibly optimistic goal you quoted. If the capacity factor is, optimistically, 40% and we’re talking about total energy generated, then you would need over 1,000 plants. That’s serious money even if costs fall.
Since there isn’t a large grid with the capacity to absorb intermittent power generation in the Middle East, you need either batteries or a solar thermal plant like Noor I. That raises costs a lot. In fact, it’s likely that a proper cost accounting of rooftop photovoltaic solar electricity generation cost would be a lot higher than the cost of parts and installation. They depend on the existence of the grid, but almost certainly aren’t paying their fair share.
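For what it’s worth, here is a minimal sketch of the plant-count arithmetic in the comment above, using only the figures quoted there (70 GW of solar after correcting the quoted 140,000 GW by a factor of 1,000 and halving it, a 160 MW plant, and an optimistic ~40% capacity factor). The numbers are illustrative, not a projection.

```python
# Rough sketch of the plant-count arithmetic in the comment above.
# All inputs are taken from the comment itself and are assumptions, not data.

target_gw = 70.0         # assumed solar build-out over the decade, GW
plant_peak_gw = 0.160    # Noor I nameplate rating, GW
capacity_factor = 0.40   # optimistic capacity factor for solar thermal

# If the 70 GW figure is nameplate (peak) capacity:
plants_nameplate = target_gw / plant_peak_gw            # ≈ 438

# If the 70 GW figure is average (delivered) power, more plants are needed:
plants_average = target_gw / (plant_peak_gw * capacity_factor)  # ≈ 1,094

print(f"Plants needed (nameplate basis): {plants_nameplate:.0f}")
print(f"Plants needed (average-power basis): {plants_average:.0f}")
```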
DeWitt,
If rooftop consumers aren’t paying their fair share, which I believe is likely, I expect it will play out in the marketplace, forcing changes in the role of utilities as the number of rooftop installations (surely and inevitably) increases.
Ken Rice,
“My bad.”
.
Not bad, but tilted. Papers like Archer et al seem to me designed mainly to elicit public alarm based on a set of (IMO) very improbable assumptions about human behavior, human wealth, and advancing technology.
SteveF,
I think you may have mis-interpreted my comment. If you think I was approving of your “but the greens” gambit, you’d be wrong. I think it was undeserving of a response containing more than two words.
RB,
Rooftop installations are primarily motivated by the false economics of power buybacks, which, as DeWitt correctly points out, mean that solar rooftop systems are being subsidized by other users. When/if the true cost of a roof-top system must be paid, I suspect the number of installations will drop, not rise. If batteries with modest cost, sufficient capacity, and suitable lifetime become available, that could change, but I am not holding my breath.
Ken Rice,
” If you think I was approving of your “but the greens” gambit, you’d be wrong.”
.
I can assure you that I would never think you would approve of a “but the greens” gambit. That approval is about as likely as pigs sprouting wings and taking to the air. There are some who see the same tilt as I do, but certainly not you.
Very back of napkin estimate: if emissions were to continue linearly at the rate over 2000-2014, we’d hit 800 ppmv CO2 in 2080. This assumes that countries already showing a negative trend (there are a few) hit zero and don’t go below.
… and if emissions go to zero worldwide in 2120 following a linear decline starting in 2020, we top out at 575 ppmv. I forget who wagered 550 ppmv above, but it wasn’t a bad bet according to my scratch calcs.
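A minimal sketch of this kind of scratch calculation is below. Every parameter in it (starting concentration, 2014 emissions, the assumed linear growth in emissions, the airborne fraction, and the GtCO2-per-ppm conversion) is an assumption of mine, not the commenter’s; with these particular choices the endpoints come out somewhat lower than the 800 and 575 ppmv quoted above, which mostly illustrates how sensitive the answer is to the assumed airborne fraction and emissions path.

```python
# A minimal sketch of a back-of-napkin CO2 projection. All parameters are
# assumptions for illustration: ~400 ppm in 2015, 2014 emissions of ~36
# GtCO2/yr growing ~0.8 GtCO2/yr each year (roughly the 2000-2014 linear
# trend), an airborne fraction of ~0.45, and ~7.8 GtCO2 per ppm.

GT_CO2_PER_PPM = 7.8
AIRBORNE_FRACTION = 0.45

def ppm_path(start_year, end_year, start_ppm, emissions_fn):
    """Integrate annual emissions (GtCO2/yr) into an atmospheric CO2 level (ppm)."""
    ppm = start_ppm
    for year in range(start_year, end_year):
        ppm += AIRBORNE_FRACTION * emissions_fn(year) / GT_CO2_PER_PPM
    return ppm

# Scenario 1: emissions keep growing along the assumed 2000-2014 linear trend.
growing = lambda yr: max(0.0, 36.0 + 0.8 * (yr - 2014))
print(f"Linear growth to 2080: ~{ppm_path(2015, 2080, 400.0, growing):.0f} ppm")

# Scenario 2: emissions decline linearly from 2020 to zero in 2120.
def declining(yr):
    if yr < 2020:
        return 37.0
    return max(0.0, 37.0 * (2120 - yr) / 100.0)
print(f"Linear decline to zero by 2120: ~{ppm_path(2015, 2121, 400.0, declining):.0f} ppm")
```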
SteveF,
Wholesale module prices are already 50c per watt. There will be a day when retail prices get there as well. I think the winds are very favorable for rooftop solar.
SteveF, The green POV is obvious with Ken. The renewed squabble over model trends is the latest example. We are apparently supposed to believe that we cannot reject a model unless ALL its runs fail to get close to the data. Of course it’s impossible to run all the possibilities so we can never reject a model. Very odd and certainly something that would never fly in any other field.
The problem here is that even very long term trends are now quite at odds, and all previous statements about how long a trend would have to be before the models could be judged badly wrong are apparently now inoperative. Watching the pea is critical here.
RB,
Yes, the modules are very cheap now. But the ‘package’ is not. Retail price for panels is ~$1 per watt. Installation is ~$1 per watt.
Converters (DC/AC) are ~$0.35 per watt. Battery costs are uncertain but high, and batteries have to be routinely replaced. Net yield is about 25% of ‘peak capacity’. So on a ‘continuous watts’ basis, the capital cost is on the order of $9-$10 per watt, or $9,000-$10,000 per kW (rough arithmetic sketched below). Plus a continuing substantial cost of batteries… with limited lifetime.
.
If we remember that the capital cost for combined cycle gas fired plants is on the order of $1,000 to $1,500 per KW capacity, then the problems with solar power become clear: too much capital is tied up, and continuing costs (batteries and maintenance) make the whole thing non-viable. In fairness, there are no distribution/grid costs for a stand-alone solar system, so the economics are a little better…. but not a huge amount better.
.
It is only grid-tie systems with buy-back requirements (and with installation costs usually subsidized by government) that look economically reasonable. The systems themselves are a non-starter.
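Here is the rough arithmetic behind the $9-$10 per continuous watt figure above, using only the per-peak-watt costs and the ~25% net yield quoted in the comment; battery costs are left out, as in the comment, even though they dominate a stand-alone system.

```python
# Quick sketch of the capital-cost arithmetic quoted in the comment above.
# All figures are the comment's rough numbers, not measured costs.

panels = 1.00           # $/W peak, retail
installation = 1.00     # $/W peak
converters = 0.35       # $/W peak (DC/AC)
capacity_factor = 0.25  # average output as a fraction of peak

cost_per_peak_watt = panels + installation + converters
cost_per_continuous_watt = cost_per_peak_watt / capacity_factor

print(f"Capital cost: ${cost_per_peak_watt:.2f}/W peak")
print(f"Capital cost: ${cost_per_continuous_watt:.2f}/W continuous "
      f"(~${cost_per_continuous_watt * 1000:,.0f}/kW)")
# Compare with ~$1,000-$1,500/kW capacity for a combined cycle gas plant.
```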
David Young,
Yes, it seems there is no way the GHG concerned will EVER reject a model as useless for being nutty-high in sensitivity to forcing. I am not surprised by this; the disagreements are not and have never been primarily about a reasoned scientific evaluation. The disagreements are and have always been about goals, priorities, morality, and ‘intergenerational obligations’, AKA, politics. The ‘personal moral values’ involved are the key, and people do not usually compromise on personal moral values. Hence blog discussions which go nowhere. The best we can hope for is to keep dubious science and exaggerated risks from being used to justify damaging and costly public policies.
Here’s the conclusion from our new paper. This is for a much simpler class of flows than GCMs try to model:
This paper provides evidence of a substantial, case-dependent variability in highly credible CFD results. We believe a thoughtful comparison between this and the CFD literature generally suggests that the literature suffers from positive results bias. In the last decade it is becoming more common to find reporting of negative results, but that is not the norm. Further, sensitivity of results to parameters of methods or codes is often not reported, in many cases due to lack of data and/or resources.
We believe that caution must be used in evaluating CFD results, particularly those reported after long studies in which data is known beforehand and tuning of inputs and gridding is extensively used but often not reported. A more scientific and rigorous evaluation of CFD requires reporting the sensitivity of results to the tunable parameters of the codes (and there are a very large number of such parameters, including those enumerated in this report), and also a more systematic attempt to generate a wide variety of test cases with a wide variety of flow types and phenomena.
The chief lesson here is that repeatable and accurate use of CFD, even for increments, will require careful attention to verification and validation in new situations and constant vigilance against common sources of bias that are prevalent in the CFD literature.
David Young,
Has the paper passed peer review?
Not yet, but it has passed a more rigorous internal review involving world-class people.
We intend to submit it in a couple of weeks.
ATTP: The 102 runs of the CMIP ensemble prove that models as a group are wrong: Models either produced too much warming over the hiatus OR too little unforced variability. There were 102 chances to get the 1998-2013 trend right.
ENSO is the most important form of unforced variability. Cherry-picking a few models whose behavior in the Nino3.4 region is similar to that observed during the hiatus (which is what you are doing by choosing runs “largely in phase with observations”) doesn’t address the problem that exists in the collection of 102 model runs.
Kosaka and Xie (2013) demonstrated that forcing SSTs in the Nino3.4 region of a model run to match historical SSTs (by adding or removing heat from that part of the ocean) caused the model output for the whole planet to agree more closely with the historical record. In particular, the unusual number of El Ninos during 1975-1995 that warmed the Nino3.4 region enhanced global warming and the unusual number of La Ninas after 2000 created a hiatus in warming.
SteveF,
I don’t believe that the levelized costs will differ by 8X despite the grid maintenance costs being underestimated for rooftop solar. At least based on the utility scale purchase agreements, they are apparently already cost competitive (with subsidies).
RB: Rooftop solar is “already cost competitive (with subsidies).”
Yes, and I can compete with Usain Bolt in the 100m dash, if he gives me a head start. But it would take about a 90m head start.
For those familiar with the most recent CA post on Gavin’s statistical mean oops: ATTP seems to have raised McIntyre’s hackles enough to lure him over to a debate at ATTP’s. Very entertaining.
SteveF wrote: “I have no doubt that had an advanced civilization been around during the PETM, they would have taken steps to control atmospheric CO2.
.
The Archer et al paper Ken Rice linked to assumes people will do nothing if rising temperatures become a problem. I don’t believe that is a realistic assumption; closer to an absurd assumption IMO.”
We don’t have an “advanced civilization”. CO2 emissions will grow in the future mostly because undeveloped countries are desperately trying to escape current poverty that is worse than future climate change. Fossil fuels are the cheapest way to do so. (China’s emissions are double the US’s and rising, while their per capita emissions are already equal to the EU’s.)
Do you think our collection of world governments represent an “advanced civilization” capable of dramatically reducing CO2 emissions? I’m not sure that any of our governments currently represents an “advanced civilization” capable of producing 80% reduction in CO2/GHG emissions. 50% reduction; maybe.
If people would support dramatic emissions reductions once rising temperatures became a problem, then Ross McKitrick’s idea of a carbon tax that rises with GMST makes a whole lot of sense.
Frank,
A less than 1C increase, combined with rapidly falling global poverty, is not enough to change priorities. In the future, and in a much richer world, a change in priorities could happen. I am not suggesting that there will be a sudden change, but if reality on the ground requires action, then I suspect that action will happen.
RB,
Cost data supporting your claim of solar being cost competitive please.
.
As others have noted, if I get a big enough ‘subsidy’, I can outrun the Kentucky Derby field.
Ron,
Growth in emissions from fossil fuel consumption for 2007-2014 was well above the linear trend line from 1946-2006. We could have flat to falling emissions for a few years and still just be returning to the longer term trend. Sure, the population in the developed world is aging, not increasing rapidly and manufacturing less. But that’s not where a larger and larger fraction of emissions are coming from. China is the world’s biggest carbon emitter and they still have most of their population living in less developed world conditions. Then there’s India, Brazil and eventually Africa to go. Somebody still has to make stuff and that takes energy. Fossil carbon energy is still the energy of choice.
SteveF, with solar PV, like any other product, prices will come down with mass market/mass production. The gov pays the lion’s share of the installation cost and allows the retailer to profit from the long-term investment. This is slated to end this year, but I think they will extend it at the 12th hour. Battery tech is also critical for busting open the emergency backup power market by allowing PV to go off-grid in outages.
SteveF,
I meant at the utility scale, it is cost competitive. There is a wiki link. While rooftop solar is not, as I said earlier, there is a lot of room for costs to fall in module price as well as in soft costs.
BTW, these are all costs of production. Apparently, there is more to what the customer pays.
DeWitt, I never said it would be easy. I just have more confidence in human ingenuity. And, I don’t want the government to try to help except, maybe in modest temporary incentives; that’s all.
Progress is necessary for many reasons not related to CO2.
“For those familiar with the most recent CA post on Gavin’s statistical mean oops ATTP seems to have raised McIntyre’s hackles enough to lure him over to a debate at ATTP’s. Very entertaining.”
Ya, maybe Lucia should head over.
FWIW I think this problem is intractable.
DeWitt Payne (Comment #147399)
“JAXA has been at record lows for some time. MASIE is close to record lows.”
Yes. Food for reflection. I have to accept it.
–
DeWitt Payne (Comment #147403)
“I compared the data from v2.0 and v2.1. There is zero difference through the end of 2008.”
Yes.
The error occurred in 2010 and ran through til 2013 and is still excluded from V2.0, the one they show.
–
“The difference is small in 2009″ ?
No difference should exist in 2009.
–
” my plot of the difference for 2010-2013 looks just like theirs,”
Yes.
–
“Tell me exactly which graph on their site you think is still using v2.0 data and I’ll check Version 2.1.”
The only version they have on their page, PIOMAS daily arctic ice volume, the second graph down, is unlabeled at their site but, as I said, Neven presumed it was still V2.0.
[“We identified a programming error in a routine that interpolates ice concentration data prior to assimilation. The error only affected data from 2010-2013. These data have been reprocessed and are now available as version 2.1. Differences in ice volume are up to 11% greater in late spring.”]
Steve Mosher, I don’t understand what Cawley is saying. It seems to me to imply that it is virtually impossible to discard a model if you must show that observations are outside the range of runs with all initial conditions and parameter choices. I don’t see how this has any utility at all in the real world. Do you understand the point?
Steven Mosher (Comment #147437)
“the most recent CA post on Gavin’s statistical mean oops ATTP seems to have raised McIntyre’s hackles at ATTP’s. Very entertaining. Ya, maybe Lucia should head over.”
–
So many comments over there, including this apt one:
Dikran Marsupial says: May 10, 2016 at 4:14 pm
“In statistics it is very important to understand the problem well enough to be able to formulate the question properly before trying to answer it.”
–
But then he spoils it with,
–
“We have two means, one of the observations and one of the model runs, but that doesn’t mean the correct test of model-observation consistency is to see if the means are plausibly the same.”
–
1. We only have one observation, so technically not a mean.
2. We have many model runs and can do infinite combinations with them to obtain infinite means, not just one. This is a very important statistical point in view of the comments re possible divergence of models others quote later on.
3. The correct test of consistency is to show plausibility of the result being the same.
True you can argue that there are good reasons why a model mean diverges from the reality it is supposed to be estimating. But this in no way implies that divergence is a good thing ever in proving consistency.
RB,
The cost to the customer includes capital expense, operating staff cost, maintenance, fuel, distribution costs, and ROI for shareholders (which in most places is regulated because utilities are essentially regional monopolies).
The wiki link you provide indicates that PV solar is among the most expensive grid scale power sources in most places (in levelized cost). Are you seeing something I am not? In any case, cost estimates are very tricky to get right for solar due to the combination of intermittency (and so the need for back-up non-solar capacity) and unpredictability (cloudy days are bad). Science of Doom had a long series of informative posts on utility scale renewables. If you haven’t seen it, then it may be worth a read.
angech,
Dikran Marsupial doesn’t know what he is talking about.
SteveF,
That’s a brilliant rebuttal. Do you want to expand on it?
David Young (Comment #147439),
It is nothing more than saying they can always ‘adjust’ model parameters to match almost any measured reality, so all models are OK, in fact they can never be proven wrong by reality… equal to saying parameter choices which determine model behavior are not part of the model. It’s nuts.
Ken Rice,
See my comment directly below yours addressed to David Young.
SteveF,
That isn’t what Dikran is saying. Why not rebut what he’s actually saying? They can, of course, be shown to be wrong by reality.
David Young,
I understand what Gavin Cawley is saying.
http://rankexploits.com/musings/2008/sd-or-se-what-the-heck-are-beaker-and-the-other-talking-about/
I guess I’ll have to discuss this again. The correct argument should not be over “should we use the standard error in the mean vs. should we use the spread of runs”. The correct arguments should be:
1) Which questions should we care about? (I think we should care whether the model mean is biased high and also care about whether they contain the earth realization. But, evidently, some people for some reason only want to consider one of these questions. Either they don’t understand there are different questions or they … well.. whatever.)
2) Given the question, what method should we use to estimate the spread due to “earth weather”. Should we estimate using a time series applied to actual observations? Should we estimate using the average weather in a model? (Which is the average spread given what models claim is what we should expect for uncertainty in an observation.) Should we estimate it based on the spread of all runs in all models (which is essentially what Gavin has been pushing for a long time and which is ri-don-culous if we want to figure out if a batch of models is biased on average btw.)
3) Are methods of presenting data misleading? Some are. In particular there is a popular (easy) presentation that tends to cause people to overestimate uncertainty by a factor that is often as large as the square root of 2 and leads to gross misimpressions of the correct conclusion — in particular making people think the model mean is ok when the data strongly suggests it is not. (Gavin pushed for some of these in a tweet. They are bad ways to present data. Popular but bad.)
4) Finally, given the question investigated, were any actual errors made in any particular analysis? In Douglass an error was made. He forgot to account for the uncertainty in the observations. But the fact that Douglass did it wrong doesn’t mean one can’t compare observations to the model mean. You can. Santer compared observations to the model mean- in a paper with jillions of coauthors including Gavin. What it means is you need to include uncertainty in the observations in your test of the model mean. (Santer did so. I did back when I was doing comparisons.)
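As a concrete illustration of point 4, here is a minimal sketch of that kind of comparison: test whether a multi-model mean trend is biased relative to a single observed trend, with the uncertainty in the observed trend included. All numbers are invented for illustration; nothing is taken from Santer et al. or any other paper.

```python
# Minimal sketch: is the multi-model mean trend consistent with one observed
# trend, once the observational ("earth weather" + measurement) uncertainty
# is included? Every number below is invented for illustration.
from statistics import NormalDist, mean, stdev
from math import sqrt

model_trends = [0.28, 0.22, 0.31, 0.19, 0.25, 0.27, 0.24, 0.30]  # C/decade, hypothetical runs
obs_trend = 0.13   # C/decade, hypothetical observed trend
obs_se = 0.05      # C/decade, assumed uncertainty of the observed trend

n = len(model_trends)
model_mean = mean(model_trends)
model_se = stdev(model_trends) / sqrt(n)   # standard error of the multi-model mean

# Pool the independent uncertainties in quadrature; don't add them.
pooled_se = sqrt(model_se**2 + obs_se**2)

z = (model_mean - obs_trend) / pooled_se
p = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided, normal approximation

print(f"model mean = {model_mean:.3f}, SE of mean = {model_se:.3f}")
print(f"difference = {model_mean - obs_trend:.3f}, pooled SE = {pooled_se:.3f}")
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```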
Ken Rice,
Of course they can. Multiple runs of the same model define a plausible envelope of behaviors. When reality falls outside that envelope, the model is wrong.
Basically ATTP is defending an argument that any run of models can produce a result that is different from the expected mean.
The more model runs the more likely that any particular model mean will deviate from the expected mean.
This is true and is an expected outcome in statistics.
It would be much more remarkable if any model or model mean actually faithfully [correctly] modeled the single observation in this case or the exact model mean.
Note that this virtually impossible occurrence does actually happen with extreme frequency in the real world.
As in Lucia’s smartest and least smart students both getting identical test answers for example.
Bridge hands with 13 spades.
With models this can be seen in the inability of any model to have an overall negative trend, ever.
To continue, ATTP’s use of this truth is not relevant
–
Gavin said “the formula given defines the uncertainty on the estimate of the mean – i.e. how well we know what the average trend really is. But it only takes a moment to realise why that is irrelevant. Imagine there were 1000’s of simulations drawn from the same distribution, then our estimate of the mean trend would get sharper and sharper as N increased. However, the chances that any one realization would be within those error bars, would become smaller and smaller.”
–
In practice the probability would remain exactly the same [Help please from the roomful of mathematicians]. Statistically the probability of the particular run falling within range of the observation or expected mean can be described by standard normal distribution.
Thus it is far more likely that any one model run will fall within one standard deviation [68.3%] and 95.4% will fall within 2 standard deviations. There should in fact be 50% of the possible distributions below the actual observation in most model runs.
Perhaps the inability to incorporate larger Natural variation parameters is the fourth reason for the model divergence.
Gavin is right, McIntyre is right, and Gavin…?
Yes, this is why using the uncertainty on the mean, rather than the model spread, is incorrect when comparing with observations.
Only if the spread is the 100% envelope, and only if you also include the uncertainties in the data. Typically the model spread is 95% (i.e., 5% of the models are outside this range at any time), so it would need to be outside that range more than 5% of the time for it to be regarded as wrong. Also, you would to be pretty convinced that the observations are 100% correct to immediately draw this conclusion, and that is in dispute in the case of satellite data. If the models and data are inconsistent, it isn’t correct to immediately assume that it is the models that are wrong.
I should add, in comments at his own blog, it appears Anders doesn’t understand how comparisons to the model mean are made.
Anders at one point wrote:
Anders is incorrect that the method used by Santer (traditional t-test) would result in a perfect model nearly always failing. This is because one is required to include the standard error for the observations and that does not decrease with the number of model runs. It’s an estimate of the spread due to “earth weather”.
The standard error in the model mean is merely an estimate of the spread in means we would obtain if we had happened to have a different batch of “N” model runs randomly chosen. But it’s not the whole error used in the “t-test”. The standard error in the observations is the larger term.
(And, fwiw, one should certainly not consider the standard error in the observations to be the combined values from the spread in the models and the estimate for “weather noise” from the earth. One should either use one or the other– or figure out some Bayesian way to use the models as a prior and the earth weather as the observations– but one should only use this once. Otherwise, the uncertainty bands increase by about a factor of 1.4, which is huge.)
angech,
No, I’m not. I’m saying that we cannot know what the expected mean is if we only have a single set of observations from a single system. Okay, to be clear, we might have multiple measurements of the same system, but it is a system with only one set of initial conditions and so the observations do not cover the range of all possible realisations for that system.
Lucia,
I didn’t mention Santer, and I didn’t mention a t-test, so I can’t be incorrect about what Santer did. Try reading what I actually wrote, not what you think I wrote and not what Steve McIntyre thinks I should have written. I really do only have to defend what I actually said. My post had nothing to do with Santer, or what they did.
Both Steves (McIntyre and Mosher) appear to refer to Climategate emails on that thread. Anders and the moderators aren’t having it. Can anyone over here explain what any of those emails have to do with it / what their point is?
[Edit: I refer to these comments, here, and here.]
Ken Rice,
“Only if the spread is the 100% envelope, and only if you also include the uncertainties in the data. Typically the model spread is 95% (i.e., 5% of the models are outside this range at any time), so it would need to be outside that range more than 5% of the time for it to be regarded as wrong.”
.
100% envelopes don’t exist; we reject a model which falls outside a defined envelope (90%, 95%, 99%) as being unlikely to be correct… or if you prefer, likely to be wrong.
SteveF,
That’s not correct. The model spread is typically presented as the range within which 95% of the models fall at any one time. Observations can therefore also fall outside this range 5% of the time without us concluding that the models have failed.
Ken Rice,
“it is a system with only one set of initial conditions and so the observations do not cover the range of all possible realisations for that system.”
.
There is a big difference between uncertainty of observations, which should be included in comparing a model to reality, and talking about “possible realizations” of the real system. The range of possible realizations of measured reality is estimated from multiple runs with different starting conditions of the model; multiple model runs cover that uncertainty. You don’t get to count ‘weather uncertainty’ twice. The correct test is if measured reality (and any associated uncertainty in the measurements) fall within the model’s envelope of variability defined by multiple runs. If not, the model is likely wrong.
Ken Rice,
We are talking about different things. I have said nothing about spread between models, only about testing the plausibility of individual models.
Anders
The spread of the envelope is one possible valid test. So for now I’ll just comment on this:
I don’t know how you define being outside the range more than 5% of the time. But “some percent of the time” is really not a good test of model fidelity, especially not if you are comparing temperature anomalies and use a “baseline”.
In the normal real world, if the earth was always in the lower 1/2 of the spread 100% of the time but never veered “out” of the ±95%, normal people would easily recognise the models were biased. Now it may be that people who really, really, really don’t want to admit flaws in their models would try to convince normal people that the models are ok because they never veered out. But… heh. No. The “must be outside more than 5% of the time” standard before we think models are detectably flawed is nutty. Pathologically so.
One should always account for the uncertainty in both the model mean and the observations. And when accounting for the uncertainty in observations one needs to account for both measurement uncertainty and uncertainty that the individual realization is “typical”.
But one should account for each of these uncertainties once, and the total uncertainty should be obtained by pooling (sum of squares), not by adding. Visuals should also be carefully chosen not to mislead — many tend to guide the eye to “add” not “pool”.
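A tiny numeric illustration of the “pool, don’t add” point, with two arbitrary (and deliberately equal) uncertainty components to show where the ~1.4 factor comes from:

```python
# Independent uncertainty components combine as the root-sum-of-squares;
# simply adding them overstates the total. The two values are arbitrary and
# equal on purpose, so the overstatement is exactly sqrt(2) ~ 1.41.
from math import sqrt

model_se = 0.05   # e.g. uncertainty from the model side (arbitrary units)
obs_se = 0.05     # e.g. "earth weather" / measurement uncertainty

pooled = sqrt(model_se**2 + obs_se**2)   # correct for independent errors
added = model_se + obs_se                # what the eye tends to do

print(f"pooled: {pooled:.3f}, added: {added:.3f}, ratio: {added / pooled:.2f}")
```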
SteveF,
I’m not trying to count it twice.
Agreed, and this is roughly what I was saying in my post (is agreeing with me just impossible?). However, I still maintain that if you present the 95% spread of the models, then observations can fall outside that 5% of the time without one concluding that there is an inconsistency. This is not a p-test.
Lucia,
I have to go to a meeting. I agree that if observations were inside the spread, but always in one half, that would indeed indicate a bias. I’m not suggesting otherwise. I was simply suggesting that being outside the 95% spread does not immediately indicate an inconsistency.
Ken Rice,
“Agreed, and this is roughly what I was saying in my post (is agreeing with me just impossible?).”
.
Not impossible. That just isn’t what I got from your post.
Anders,
I didn’t say you mentioned Santer nor did I say you mentioned a t-test.
I read what you wrote. And I quoted it.
The final conclusion is false.
I get that you somehow don’t want to talk about Santer. But –whether you like it or not– a t-test is a formal type of comparison and you are making a very general claim about what makes a fair comparison. My discussing a specific example of a well known form of comparison does not constitute my not reading what you wrote. It is showing a specific counter example that shows your general claim is mistaken.
FWIW; If you are thinking that one should not compare a single earth realization (i.e. earth observation) to the model mean without accounting for ‘the uncertainty in the observation’ you are trivially correct. No one should do that.
But that doesn’t make “…comparing the model mean with the observations is unlikely to be a fair test”. It means “doing the comparison without accounting for the uncertainty in observations” is unlikely to be a fair test. (In fact, it’s not only unlikely, it’s flat out wrong.)
Comparing the model mean to observations is entirely fair provided one accounts for the uncertainty in the observations. This is true whether you wish to discuss Santer or not. The relevance of Santer is that it’s clear that comparing model mean to observations is entirely fair, routine and so on. It is a specific counter example to your general claim. And is so whether or not you mention Santer and whether or not you mention t-tests.
If you want to continue to claim your general claim about “comparison” is correct, you need to explain why it is ok for Santer to have made the comparison that stands as a specific example that refutes your general claim. (In fact you will fail to support your general claim because it is wrong. You must either modify it, or admit it’s wrong.)
I may be wrong but the focus on surface temperature alone seems to ignore the scope of what the models are doing.
This description of a model appears to describe a comprehensive look at the model’s “skill”.
bugs
Nonsense.
Comparisons and specific tests must always “focus”. But doing one test does not preclude doing others as well. It is not either or.
That said: We have better data for surface temperature than many other things. So comparisons to surface temperature are attractive for that reason. Also, those making projections themselves focus on surface temperature. So it’s natural to spend more time evaluating the feature they focus on.
That models may be useful for reasons other than making projections is true. No one objects to model development in that context. Certainly, there would be little non-academic discussion if the models were merely an academic exercise in gaining understanding.
Anders
Fair enough. But I think you have to admit that’s not what your previous wording seemed to claim. (Mind you, my comments are also sometime not quite what I mean either.)
bugs,
The general subject is warming caused by GHG forcing. The concern raised by many is about consequences of GHG driven warming. GCMs project rapid warming, and reality is not cooperating. Future temperature increases are the most important issue; the rest of model behavior falls in the realm of “who cares”. That is why temperatures are usually focused on.
SteveF (Comment #147469)
SteveF, I think the other aspects of comparisons of observed and modelled temperature series are important (other than global trends in AGW period) and primarily because we need to know whether trends for models might match closely the observed for the wrong reasons. For example, the ratio of NH and SH warming might be all wrong yet the global warming matches.
Even auto correlation and variance should be considered in these comparisons.
Lucia,
The context of the post was Gavin’s comment about Douglass et al. Please try to read what I write in the context of what I actually wrote, not in some other context that is based on something I didn’t even mention.
Maybe even think about this
Seriously, I have no interest in defending what I didn’t write.
Kenneth,
Yes, there are other legitimate tests (like those you mention), and the structure of model behavior versus reality (eg auto correlation in the trend) is also a legitimate comparison. But if a model is demonstrably unable to produce a plausible range of average warming, then the rest seem to me to be secondary.
.
Yes, it all does remain to be observed, but China is now a source of decreasing rather than increasing emissions (in line with demographics). India and Africa have demographics of increasing population, growth and co2 emissions.
.
But there is value in those emissions. Much energy is expended in economic development. But once developed, economies become more and more efficient (and have lower population growth) and so have falling emissions once peaking after development.
.
Children consume only what their parents provide.
Elderly consume only at the rate their fixed income will allow.
As the remaining working age population declines, so too will demand and emissions.
Anders,
Whatever the context of your post, I am criticizing your concluding statement
It is false to say “if you want to compare models and observations, you can’t use this as the uncertainty, if the observed trend is not the mean of all possible observed trends.” That is your claim in your post.
If you limit Gavin Cawley’s criticism to applying only to Douglass’s incorrect attempt — which omitted the uncertainty in the observation– then we can all agree that Douglass’s comparison was incorrect.
But your claim “if you want to compare models and observations, you can’t use this as the uncertainty, if the observed trend is not the mean of all possible observed trends.”
does not follow. You perfectly well can use the standard error in the means when comparing models to observations. Santer did so– and did so correctly. And he did so even if you don’t want to discuss a specific example of a case where the standard error in the mean was used to compare models to observations.
Anders,
I should further note that while Gavin Cawley goes on and on about using the standard error in the mean in the models, that’s not the mistake in Douglass. The mistake in Douglass is to not include the uncertainty in the observations.
So even if the context of your post is Gavin Cawley’s comment about Douglass, that only shows that you also don’t understand the problem in Douglass and you don’t understand that there is absolutely nothing wrong with using the standard deviation in the mean when comparing models to observations.
If you think there is something wrong with using that– which Gavin Cawley (aka beaker) claimed over and over, and which you certainly seem to be claiming, you need to admit your claim means the Santer paper that did use the standard error in the model mean was done incorrectly.
Because either
(a) your broad general claim is correct and Santer’s paper is done incorrectly or
(b) your broad general claim is incorrect and Santer’s paper — which does something you claim is incorrect– is correct.
There are no other options because Santer (along with all his co-authors) did use the standard deviation in the model mean when comparing observations to models.
Lucia,
We’ve once before had a discussion like this where you completely misinterpreted what I had said and we wasted a good deal of time and energy discussing something that was irrelevant. I will repeat: my post is about Gavin Schmidt’s comment regarding Douglass et al. in which the uncertainty on the observations was not included. Everything in my post is in that context. That’s it. Even that sentence is in that context. I don’t need to add everything to every sentence in my post, when the context should be obvious from what the post is about. You and Steve McIntyre have introduced Santer et al. I didn’t mention it at all.
I really do not see the point of us discussing your interpretation of what I said, when I have repeatedly told you that it isn’t what I said. I’ve already agreed with you that if you include the uncertainties in the observations that you would have a more appropriate test than that carried out in Douglass et al. I guess you can ignore that and continue to criticise the conclusion in my post if you wish, but then you’re just criticising a strawman and I have – as I’ve already said – no interest in defending what I didn’t write.
I don’t completely agree with this
but I have no interest in discussing it further if you’re going to continue taking a sentence in my post out of context.
Lucia,
I don’t need to do anything. What a bizarre thing to suggest. Maybe if my post had been about Santer et al. you’d have a point, but it wasn’t.
As I said, I don’t completely agree with your claim about Santer et al., but if you’re going to start telling me what I have to do, then I’m quite happy to discuss it with others.
We tend to concentrate on temperature series at these blogs but another important variable in climate and climate change and its effects on humanity is precipitation and in that area of modeling the models perform relatively poorly. The difference between modeling temperature and precipitation is the larger number of factors that need to be considered for precipitation.
http://phys.org/news/2015-04-climate-real-world-differences-precipitation.html
“Precipitation is one of the most poorly parameterized physical processes in global climate models.”
“These variations (both among models and between models and observations) include differences between daytime and nighttime, warm and cold seasons, frequency and mean precipitation intensity, and convective and stratiform partition. Further analysis reveals distinct meteorological backgrounds for large underestimation and overestimation precipitation events.”
I have to recall my reactions every time this subject of using the SE of the mean is brought up in the context of Gavin Schmidt’s blog comments about the Santer/Douglass debate and the fact that Schmidt was a coauthor and arguing against use of what appeared in the paper. I too thought it might be a misprint when we finally saw the paper and it used the SE of the model means.
While I judged that using the SE of the model mean was proper, based on Schmidt’s comments, I fully expected the paper to not use the SE of the mean. Lucia quickly straightened me out on this matter that the paper did indeed go against what Schmidt was advocating and it was not a misprint. I still have a difficult time comprehending how a coauthor of a paper could get it so wrong, but no doubt Schmidt certainly did.
lucia:
I don’t think bugs was saying that you shouldn’t focus, just that the focus is too narrow.
I think he’s right.
Lucia,
It is a bizarre thing to suggest. It’s almost like you think Anders wants to be logically consistent or something. I’ve no idea where you get such notions.
.
/saaaarc, I know, not helpful. 🙁 But it’s not like this looked to be going anywhere anyway.
The problem with the error of the mean is that the “n” you use in the $latex 1/\sqrt{n}$ factor is the number of independent measurements. Good luck figuring out what that is with the reported GCM outputs. It’s certainly less than the number of curves in the ensemble.
The problem with the mean of the model outputs is that the mean is an unbiased estimate of the central tendency of the model ensemble only if the sample population is itself unbiased. If, for example, modelers were to preferentially report model outcomes that had large ECS’s, that leads to a net bias in the sample population.
When you have a bias in the sample population compared to the ensemble of all possible model outcomes, it is an error to use statistical methods for validation of the model compared to the measurements.
You can still use the comparison of the model outcomes to the data using normal statistical methods. But here you’re testing “reliability” rather than “validity”.
My preference for model validity is to do the validation testing one model at a time against measurements, and include both weather and climate variability in the uncertainty estimate. (But you have to be careful not to double count.) You can see which, if any, models are valid against your preferred metrics.
I don’t think it makes any logical sense to create an ensemble of model outputs generated by models of admittedly different levels of validity, then make a comparison of that practically meaningless ensemble against the measurements, for the purpose of determining model validity.
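Picking up the 1/sqrt(n) point at the top of the comment above, here is a small sketch of how much the apparent standard error of an ensemble-mean trend depends on the assumed number of independent members. The spread and both values of n are assumptions chosen for illustration only.

```python
# How the standard error of an ensemble-mean trend depends on the number of
# *independent* members. Both n values and the spread are illustrative only.
from math import sqrt

ensemble_sd = 0.10   # hypothetical spread of trends across the ensemble (C/decade)
for n in (102, 8):   # all runs treated as independent vs. a small effective n
    print(f"n = {n:3d}: SE of ensemble mean = {ensemble_sd / sqrt(n):.3f} C/decade")

# Treating 102 correlated runs as independent shrinks the apparent SE of the
# mean by a factor of sqrt(102/8), roughly 3.6, relative to an effective n of 8.
print(f"ratio: {sqrt(102 / 8):.1f}")
```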
“Steve Mosher, I don’t understand what Cawley is saying. It seems to me to imply that it is virtually impossible to discard a model if you must show that observations are outside the range of runs with all initial conditions and parameter choices. I don’t see how this has any utility at all in the real world. Do you understand the point?”
Vaguely.
First, I think comparing observations to the mean of all models or to the mean of a given model can lead to erroneous rejection of a good model.
The best I can do is sketch out the approach I would take.
1. You have a process that is sensitive to initial conditions.
2. You build a model; if done right this too should be sensitive to initial conditions.
3. You run the model many times (they actually don’t do this; you get many models run once or 3-4 times).
4. You have a range in the results and a mean. It’s unclear what the pdf of these results will look like. Gaussian? Bimodal? Uniform? Gamma? What?
5. The earth runs once. It produces observations with measurement error. Since the earth is sensitive to initial conditions it could have been different? How different?
Question: Do the observations fall within the range?
Q1a) What if the model has a crazy wide range?
Question: Is the real earth above or below the model mean?
Even with one model the problem is thorny
“My preference for model validity is to do the validation testing one model at a time against measurements, and include both weather and climate variability in the uncertainty estimate. (But you have to be careful not to double count.) You can see which, if any, models are valid against your preferred metrics.”
+1
Steven Mosher,
“Even with one model the problem is thorny”
.
Sure. But if you look at those models with multiple runs (from different starting conditions), and find some of those models have a relatively small spread in trends compared to the difference between their mean trend and Earth’s trend over the period of model projections, then you can reasonably evaluate (not perfectly, but still evaluate) whether the model is a plausible representation of Earth. I mean, if the model’s mean trend is 0.26C per decade, and the range for four runs is 0.22C to 0.28C per decade, then with Earth at ~0.13C per decade, the model looks highly doubtful. What I think people tend to object to is the (apparent) refusal to discount models which are clearly not good representations of the Earth’s sensitivity to GHG forcing.
.
If someone were to publish a paper that said: ‘Models A, B, C, D and E are unlikely to accurately project future warming’, I might just throw a party to celebrate. It is the apparent unwillingness to discount obviously wrong models which chafes, and inhibits progress.
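A rough sketch of the single-model plausibility check described above, with run trends invented to match the quoted 0.22-0.28 C/decade range and an observed trend near 0.13 C/decade; nothing here comes from an actual model.

```python
# Single-model plausibility check: does the observed trend fall inside the
# model's own run envelope, and how far is it from the model mean in units
# of the run-to-run spread? All values are invented for illustration.
from statistics import mean, stdev

runs = [0.22, 0.25, 0.26, 0.28]   # hypothetical per-run trends, C/decade
observed = 0.13                   # C/decade

run_mean = mean(runs)
run_sd = stdev(runs)              # run-to-run spread (poorly constrained with only 4 runs)

inside_envelope = min(runs) <= observed <= max(runs)
z = (observed - run_mean) / run_sd

print(f"run mean = {run_mean:.3f}, run SD = {run_sd:.3f}")
print(f"observed inside run envelope: {inside_envelope}")
print(f"observed sits {z:+.1f} run-SDs from the model mean")
```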
Thanks Lucia. It does seem as if some don’t want to address the question of whether the models are biased high. Given the long record (40 years or more) it seems as if the “weather” excuse may be wearing thin. I still see Cawley’s claim as making it practically impossible to “falsify” any model of a chaotic process, especially if we allow changes to model parameters, grid, etc. I will read your earlier post when time permits.
BTW, Lucia, Ken Rice has used this technique before in our debate on GCM skill. He picks a small point where he is partially right and then refuses to discuss the more important issues where he is wrong. It’s really a political ploy to obscure rather than to enlighten.
I submit that comparing incomplete climate models with a not-understood climate is simply a waste of time from the get go.
Andrew
DY,
You really are an irritatingly dishonest individual. The truth is that we said something completely true. You claimed we were being dishonest. I refused to discuss it further until you withdrew your accusation. You didn’t withdraw it and you continue to promote your dishonest interpretation of that even now. I think it is entirely reasonable to not discuss the details of something with someone who starts the discussion with a claim that is not true, and refuses to withdraw that claim when challenged.
Anders,
Let me clarify: if you want your claims to be correct you need to …
Obviously, if you don’t care whether your claims are correct you don’t need to adjust what you say in any way. That’s obviously up to you.
Carrick
My view is that depends on what he means.
I’d be perfectly happy to discuss the fact that the models disagree with each other on the size of the variability of “N-year” trends on a different thread and at a different time. And I’d be happy to discuss whether they disagree with each other or the earth’s climate on other things about earth climate.
But that doesn’t mean we need to fragment every single discussion into parts. Certainly this discussion doesn’t need to spin off into a hodge-podge of separate discussions of every possible comparison that can be made. And I tend to view his insertion of that as the sort of attempt at scope creep people engage in when they don’t like the conclusions one obtains by looking at a particular aspect of model-observation comparison.
Obviously, bugs could start his own blog and discuss whatever other topics he thinks need discussing.
Anders
Which claim of mine about Santer do you disagree with?
Lucia,
Let’s be clear, my claim is NOT that comparing the observed trend plus uncertainty in that trend with the mean of the model trends will typically fail. That, however, appears to be how you’ve chosen to interpret what I said, despite me saying that it isn’t what I said. So, the reason I don’t have to accept what Santer did is incorrect is because my claim is not what you seem to want it to be. Is it possible that this is clear now, or are you going to continue to insist that I’ve claimed something that I very obviously have not?
Carrick
That’s my preference too.
No logical sense except …. Well except for the fact that the ensemble of model outputs has been used for projections. Since it is in a sense “the” projection, the only way to test whether that projection is biased is to test that projection.
The fact that doing projections that way might not test individual models well doesn’t make testing projections unnecessary. It just means that the test of models is best done by testing individual models while the test of projections is best done by testing whatever those projections are– warts and all.
Lucia,
Then why do you appear to think that the Santer test is correct, or have I misunderstood what you said?
Carrick (#147482): “Good luck figuring out what that [# of independent measurements] is with the reported GCM outputs. It’s certainly less than the number of curves in the ensemble.”
I vaguely recall a paper which attempted to evaluate the relative independence of the various models. From memory — not to be relied upon — I think they came up with a value for effective degrees of freedom of 8 or so, from a couple of dozen models.
I’ll try to locate the paper later.
[Edit: Possibly this paper, which says, “For the full 24-member ensemble, this leads to an M_eff that, depending on method, lies only between 7.5 and 9”.]
David Young (Comment #147487)
I don’t want this thread to debate people’s debating techniques. We don’t need everyone’s views on how everyone behaved elsewhere on some other thread. Comments like (Comment #147487) belong on the “ban bing” thread, as does Anders’ response. (But I can’t really fault Anders for answering it here.) If you guys want to argue about who is dishonest, evasive and so on, go over there.
SteveF (Comment #147485)
I prefer to use multiple runs of a climate model for comparison of temperature series to the single realization of the observed. The idea being that the observed can or cannot be fit into the desired probability limits of the multiple model runs. There is a caveat and that is a model with a huge spread in multiple runs may simply be getting the “natural” variance wrong. This situation points to the importance of more completely comparing the parameters/metrics between models and the observed.
SteveF, I hope this thread stays open for a while longer as I have some interesting analysis of the Andrews paper 4XCO2 CMIP5 experiment to report.
Anders
It correctly does what it sets out to do: Compare observations to the multi-model mean under the assumption runs are statistically independent realizations.
When making that comparison you use the standard error in the mean, not the spread of runs.
My preference is to do a similar test for each individual model– not the collection of “all models in all runs”. But if someone does want to do the test of “all models in all runs”, Santer’s method is largely correct. (And to the extent it’s imperfect– as all tests are– the use of the standard error in the multi-model mean is not the problem.)
There are problems in Santer’s method– and I’ve pointed out some in the past. It’s just that using the standard error in the multi-model mean to test whether the multi-model mean is biased compared to observations isn’t one of them.
Anders
My understanding is your claim is this
And that by “this […] uncertainty” you mean the standard error in the multimodel mean. That is, I assume “this” refers back to the uncertainty discussed in the long quote that immediately preceded your claim which I quoted. In that quote “beaker/Gavin” is discussing “standard errors of the mean”.
So could you clarify what you intended to claim when you wrote
Because if you mean something other than what I interpreted, I am unable to divine it from the words you actually wrote.
Lucia,
“Obviously, bugs could start his own blog and discuss whatever other topics he thinks need discussing.”
.
My guess is it would be a lonely place.
Lucia,
I don’t see why this matters. I’ve explained the context to you. I’ve explained that I didn’t mean what you seem to think I meant. I’ve even agreed with you. If that isn’t good enough for you, fine, but I really can’t see what we’ll achieve by you continuing to tell me that you can’t work out what I meant from the bit of my post you’ve chosen to quote (while clearly ignoring the context).
Lucia,
” It just means that the test of models is best done by testing individual models while the test of projections is best done by testing whatever those projections are– warts and all.”
.
Maybe better to first remove the ‘warts’ by eliminating from the ensemble those models which are wildly discordant with reality.
Lucia,
In a qualified sense, I agree. We want to understand if the models can reproduce the observations, which would seem to require understanding if there are realistic conditions that can match what was observed.
This depends on what you’re trying to test. If you’re trying to test if the multi-model mean is – as you said – biased wrt the observations, sure. However, if you do this test and it fails, what do you conclude?
Kenneth Fritsch (Comment #147499),
.
With daring protagonists engaged in near hand-to-hand combat, why would I ever want to close the thread? (Answer: I wouldn’t.)
Kenneth,
“This situation points to the importance of more completely comparing the parameters/metrics between models and the observed.”
.
Of course, I completely agree. If the model is wildly more variable than reality on multi-year to multi-decade scales, then of course that artificially inflates the model’s ‘95% confidence limits’. Hell, make the model sufficiently crazy, and the model uncertainty range will always encompass a measured trend.
Anders,
I know you gave “context”. But that context doesn’t clarify your claim. I’m asking you to clarify your actual claim, something you should easily be able to do if you know what you are actually claiming.
To be clear, I read this which you wrote.
I note:
1) Santer did compare models to observations.
2) He did use the standard error in the multimodel mean (which appears to be “the” uncertainty to which you refer) and
3) the observed trend he compared the models to was not the mean of all possible observed trends. It was one trend.
As far as I can tell, reading what you claimed in rather plain English would mean that either
(a) you think Santer was wrong to use the standard error in the mean when comparing observations and models or
(b) you think Santer didn’t use the standard error in the mean when comparing observations to models. (He did use it and he did it when comparing models to observations.)
(c) you think Santer compared the model to the “mean of all possible trends”. This is impossible for anyone to do. We have only one realization of earth weather.
(d) or you think something I can’t even begin to imagine that somehow reconciles what you actually wrote with Santer having done what he actually did. What he did was to compare a single realization of earth weather to the multi-model mean, using the standard error in the mean while doing so.
As far as I can see, the “context” (that you were somehow discussing Beaker/Gavin criticizing Douglass and didn’t happen to be discussing Santer) doesn’t change anything about your claim. It might change how badly off Beaker/Gavin is in the aggregate of things he said. But I’m trying to tease out your claim, not Beaker/Gavin’s.
Lucia,
I give up.
Ken Rice,
“In a qualified sense, I agree. We want to understand if the models can reproduce the observations, which would seem to require understanding if there are realistic conditions that can match what was observed.”
.
What? Do I sense progress in this discussion? I hope so. Can you define a little better “if there are realistic conditions that can match what was observed”. I am honestly not certain what you mean by realistic conditions.
Anders,
Since you asked me to specifically think about this bit, I’ll quote it and discuss it.
This statement of yours is false “Therefore comparing the model mean with the observations is unlikely to be a fair test, because even if the model were a perfect model, it would almost always fail this test.”
In fact:
1) Santer compared the model mean to observations.
2) He used the standard error in the multi-model mean.
3) A perfect model would not almost always fail the test he used.
If your whole context is to explain that Douglass was wrong:
Yes. Everyone — SteveMc and I included — have always agreed Douglass was wrong. Not for the reason Beaker/Gavin gave, but Douglass was wrong.
But none of that makes things you are saying correct. It’s possible something you are thinking is in the vicinity of correct. But the things you are writing are not correct.
Anders
I should think that’s pretty obvious: you conclude the model mean is high. That the truth is lower than the model mean.
This is actually important to know as it implies we should expect the future trend to be more likely in the lower end of the distribution than the higher end. I should think one would want to know that.
On the other bit
Ok. Well, then I still don’t know what you claim. I can only read what you seemed to claim.
For what it’s worth, Gavin Schmidts’s criticism about uncertainty here:
http://www.realclimate.org/index.php/archives/2016/05/comparing-models-to-the-satellite-datasets/
is of an entirely different nature to the observational uncertainty omitted in Douglass.
Also, Beaker/Gavin’s discussion of “standard error in the mean” is irrelevant to Gavin S’s recent criticism of Christy, because Christy used the model spread to represent “weather” in his presentation– that’s what’s captured in the “spaghetti”.
Interesting discussion in pointing to the statistical limitations of the observed temperature series being a single realization of potentially many. With models we can make multiple runs that will allow estimation of the noise or natural variation. Unfortunately in practice with climate modeling the numbers are in too many cases too few.
That limitation for the observed series could be overcome if the trend could be defintively estimated and the residuals decomposed into periodic and
red/white noise. We would than have a stochastic model that will allow Monte Carlo treatment to estimate confidence intervals. I have found that with most models determining, for example, trend confidence intervals gives reasonably close intervals using the modeling approach from single realizations as noted immediately above or from multiple model runs.
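To make that concrete, here is a rough Python sketch of the kind of Monte Carlo treatment I have in mind, assuming the residuals can be treated as simple AR(1) red noise (the periodic component is ignored for brevity); the function and variable names are placeholders, not anyone's published method.

import numpy as np

def mc_trend_ci(y, n_sims=2000, seed=0):
    # Fit an OLS trend, model the residuals as AR(1) red noise,
    # then resample synthetic series to get a trend confidence interval.
    rng = np.random.default_rng(seed)
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # crude lag-1 autocorrelation
    sigma = resid.std(ddof=1) * np.sqrt(1.0 - phi**2)   # innovation standard deviation
    sim_slopes = np.empty(n_sims)
    for i in range(n_sims):
        noise = np.zeros(len(y))
        for j in range(1, len(y)):
            noise[j] = phi * noise[j - 1] + rng.normal(0.0, sigma)
        sim_slopes[i] = np.polyfit(t, slope * t + intercept + noise, 1)[0]
    return slope, np.percentile(sim_slopes, [2.5, 97.5])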
A very belated reply to RB (Comment #146606) on April 26th, 2016, which I have just seen. In response to SteveF (Comment #146603) statement:
“Nic Lewis has said recently that in the GCM’s he has looked at, the average equilibrium sensitivity value is about 10% higher than the effective value you would calculate from the GCM behavior, using the above heat balance method.”
RB wrote:
“I suspect Nic only looked at those GCMs with ECS closer to the lower 2C median to arrive at his 10% ratio while eliminating models with higher ECS as unreasonable, but it is true for the GISS E2-R and E2-H models.”
RB is wrong. My calculation of the ratio of ECS to effective sensitivity in models included all CMIP5 models used in AR5 for which the requisite data was available. That included all 23 models listed in AR5 Table 9.5 for which ECS estimates are shown, and six additional models for which the requisite data had become available. My ECS estimates were in line with those for AR5 Table 9.5 for the models included in that table.
Lucia, I read your old post and it is clear and convincing. As you say, the Cawley (and Schmidt) question is not helpful if our goal is to develop better models. And that is the frustrating part, and why I now believe that improving models may not be the goal. Justifying models and giving laymen false confidence in them is a goal consistent with the Cawley question framing. I see this all the time in CFD too, where the goal is often to justify the models. Our paper may be a good antidote, but only time will tell.
Nic Lewis,
“RB is wrong. My calculation of the ratio of ECS to effective sensitivity in models included all CMIP5 models used in AR5 for which the requisite data was available.”
.
Thanks. I was feeling abandoned, just a little. Perhaps you could weigh in on the Lucia/Ken Rice confrontation on models vs. reality.
David Young,
“I now believe that improving models may not be the goal.”
,
Join the crowd. Seriously, it has been clear for a long time that the purveyors of the GCM based alarm are not interested in improving the GCMs; they are clearly interested in promoting ‘green’ public policies.
.
Here is the puzzle for me: do these folks really believe the rubbish they promote, or is it all a dishonest charade? I really do not know the answer.
SteveF, In CFD they often believe it. Most people are not really very experienced and believe the biased literature. Those of a skeptical turn of mind are not as successful as those of a “positive” frame of mind, because the whole system rewards positive results even if they are biased and wrong. Think about all the rigorously indefensible treatments done by MDs trying to make a buck, or because they have selected only the successful cases to remember. Science itself needs reform, and it’s going to take a lot more hard-nosed skeptics to overcome the Rices and Cawleys of the world. Science has become a new left-wing secular faith. Being a defender of the faith can make the self-righteous feel very good.
BTW SteveF, my growing realization about 15 years ago of how bad the CFD literature was is what prompted me to start paying attention to the climate issues, because I thought things could not be any better for GCMs, which after all are infinitely more ambitious than industrial CFD. And my expectations were, if anything, too optimistic. In GCM research they have learned nothing and forgotten nothing since I was a graduate student, and that was a long time ago.
SteveF (Comment #147528)
“Here is the puzzle for me: do these folks really believe the rubbish they promote, or is it all a dishonest charade? I really do not know the answer.”
Perfect Schrodinger puzzle and answer.
Both are possible at the one time.
When belief triumphs, maths temporarily goes out the door,
When chicanery is the cause, maths temporarily goes out the door.
When people do not have their livelihood or reputation at stake [most AGW believers] the former is true.
When they do, like Mann and Gavin, possibly the latter is true, though they could still be just extreme cases of the former.
ATTP and Dikran know their maths and statistics, it is sad to see the convoluted path they take akin to trying to prove the planets move around the earth.
TE,
A blip at the end of a curve does not a trend make. The Chinese economy is also in serious trouble right now. Demographics has nothing to do with it. Most Chinese are still dirt poor and not happy about it. The emissions rate curve for India looks like it will more than make up for any small decline in the rest of the world.
.
Yep. They overleveraged ( a recurring theme through history ).
But demographics have a lot to do with their current state and predictable future for half a century.
.
.
That is possible. India’s demographics indicate a growing workforce for about the next third of a century, and commensurate growth will mean continued CO2 emissions growth. But will India’s growth match China’s decline? We’ll see.
Here is the link to the model I left out before.
http://www.cawcr.gov.au/technical-reports/CTR_021.pdf
They seem to be your ordinary, everyday scientists to me, working hard to advance science.
As for rorting the taxpayer, most of them are going to lose their jobs for not pleasing their conservative political masters.
Seriously?
I do not understand the focus on a few scientists. There are thousands of them out there. None of those working on this model appear to be names that are well known.
HaroldW—thanks.
Based on these numbers, leaving out the $latex 1/\sqrt{n}$ factor overestimates your uncertainty by a factor of 2.9. Using the incorrect n=24 underestimates it by a factor of 1.7.
These are both off enough for me to describe either approach (not using error of the mean and using n=all curves) as “badly wrong”.
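To make the arithmetic explicit, here is a tiny Python sketch; the effective n here is simply back-solved from the two factors quoted above, not an independent estimate:

import numpy as np

sd = 1.0        # spread (standard deviation) of the individual trends, arbitrary units
n_eff = 8.4     # effective number of independent curves implied by the factors above
n_used = 24     # the n actually used

sem = sd / np.sqrt(n_eff)               # the "correct" standard error in the mean
print(sd / sem)                         # ~2.9: omit the 1/sqrt(n) factor entirely
print(sem / (sd / np.sqrt(n_used)))     # ~1.7: use n = 24 instead of n_eff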
In my opinion, Gavin is wrong for saying don’t use the error of the mean, Santer et al. is wrong for using the wrong value for “n”, and Douglas is wrong for not including observational uncertainty.
I seem to remember that some people double counted weather noise. Others used the range in model outputs to estimate short period variability (that’s completely and totally wrong, IMO… much of the model variability has to do with different physical assumptions… they are much less “the same model run multiple times” than “different models, each run just a few times”).
This seems to be a problem that people have trouble addressing very well.
SteveF
” Perhaps you could weigh in on the Lucia/ Ken Rice confrontation on models V. reality.”
I haven’t read every comment (not having seen this thread until yesterday), but I think Lucia is right (along with Carrick, whose position seems very similar) and Ken Rice/ATTP/Anders is wrong.
I also agree with Carrick that it makes better sense to compare individual models with observations, albeit that is not what this statistical debate is about.
Nic,
Just to be clear, I think that what is wrong is Lucia’s interpretation of what I said, not what I actually said, or what I actually meant. Of course, if it makes you feel better to regard me as wrong, carry on. It is rather irritating to find yourself defending a strawman, but it’s pretty standard, so I should probably not be surprised. I guess introducing something that wasn’t even mentioned in my post, and then using that to show that what I didn’t say is wrong, is easier than sticking to what I actually said in the post.
Yes, it does make better sense. I think I’ve already agreed with that. Given that the debate is about how best to compare models and observations, I think it’s entirely relevant. The statistical debate you’re having is largely with yourselves. Just a thought, rather than savaging your own strawmen, why not try actually discussing what is really being discussed? Is that too much to ask?
Ken Rice,
” Just a thought, rather than savaging your own strawmen, why not try actually discussing what is really being discussed?”
.
I’m pleased that most everyone (appears) to agree ‘it makes better sense to compare individual models with observations’. So moving on: I honestly can see no meaningful information which can be gleaned from a comparison of the model ensemble and measured reality. Do you see any utility in that comparison? If so, what do you think that comparison tells us?
Nic,
Everyone around here has long agreed about that. That’s why, for example, I was looking at individual models at least as far back as 2013
(I show this 2013 one because it’s the first of its type in the google list searching for them. There’s nothing special about it.)
But– with regard to the current dust-up– it’s important to note that the argument about the “standard deviation” doesn’t go away when you test individual models.
The exact same issue rears its head when deciding whether you are going to test the mean of an individual model or test whether the observation falls in the spread.
This particular statistical question absolutely does not ‘go away’ by saying one would prefer to test individual models. So it is better if people are clear about what it is they don’t like about the use of “the standard error in the mean” (or a t-test in general).
When doing so they should give a clear explanation that doesn’t say things that are simply wrong. (It is easier to do this if you understand what the standard error in the mean is, what a test does, and so on. It seems likely that Gavin “beaker-Dikran Marsupial” Cawley did not understand that, since he couldn’t manage to figure it out without writing Santer. In contrast, over at Climate Audit, Hu has mentioned in comments a reason he doesn’t like the t-test method.)
But one thing is true: the ‘problem’ Anders or Gavin “beaker-Dikran Marsupial” claim to see in the use of the “standard error in the mean” is not a ‘problem’. Its use does not cause the issues they claim. (Using the standard error in the mean in a comparison does not result in a test that nearly always rejects even a perfect model when the ensemble gets very large. It. Just. Doesn’t.)
And make no mistake. The “attack” on uses of “the standard error in the mean” is pretty strong. Gavin “beaker-Dikran Marsupial” Cawley has gone as far as to claim the use of the standard error in the mean is irrelevant to comparing models and observations– Anders quoted that and said things that sure as shooting sound like he’s agreeing with Gavin “beaker-Dikran Marsupial” Cawley on that score. But the well-accepted truth is that if one is going to use a t-test to check a model mean for bias– whether for individual models or the multi-model mean– the standard error in the mean is not “irrelevant”; it is required. Leaving it out would be just as much an error as the one Douglas actually made, which was leaving out the uncertainty in the observation mean obtained from one realization.
Obviously, people are going to correct Anders, Gavin “beaker-Dikran Marsupial”, Chris Colose and anyone who makes bogus claims of this sort. There is no reason to allow a group of people who don’t know what they are saying to create the impression that an entire method of testing models is off limits because they have decreed that use of the “standard error in the mean” is “irrelevant” to comparing models and observations. It is not “irrelevant”. It is required if we wish to compare a model mean to a single observation.
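For anyone who wants to see what that kind of test actually looks like, here is a bare-bones sketch in Python. It is not a reproduction of Santer et al.'s exact statistic, just an illustration of where the standard error in the mean enters and where the observational uncertainty (the bit Douglas left out) enters; all names are placeholders.

import numpy as np
from scipy import stats

def mean_trend_test(model_trends, obs_trend, obs_trend_se):
    # Two-sided test of whether the (multi-)model mean trend is biased
    # relative to a single observed trend. The 1/sqrt(n) appears because
    # it is the *mean* being tested; the observed-trend uncertainty is
    # the term that must not be dropped.
    model_trends = np.asarray(model_trends, dtype=float)
    sem_model = model_trends.std(ddof=1) / np.sqrt(len(model_trends))
    d = (model_trends.mean() - obs_trend) / np.hypot(sem_model, obs_trend_se)
    p_value = 2.0 * stats.norm.sf(abs(d))   # normal approximation for this sketch
    return d, p_value

Note the denominator never shrinks below the observed-trend uncertainty, which is why a perfect but finite ensemble does not get rejected "almost always" just because the ensemble is large.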
Anders
You made particular statistical claims that are wrong and did so in a blog post you elected to write. None of us made you write that post nor make the silly claims you made. That’s what triggered the current kerfuffle.
The rest of us don’t really have an argument about use of the “standard error in the mean” or how it might be used to test models.
The fact that many would prefer to test individual models is orthogonal to the issue you raised in a post you wrote on your blog. One can use the “standard error in the mean” to test individual models even more easily than using it to test the multi-model mean.
Lucia,
The kerfuffle is based entirely on your unwillingness to accept that how you’ve chosen to interpret what I said is not what I was intending it to have said. I have clarified the point on numerous occasions. Your complete lack of willingness to accept that I don’t mean what you have interpreted it to mean is utterly bizarre.
Nothing, I stress, nothing I have said could possibly be interpreted as me suggesting that you couldn’t/shouldn’t test individual models using the standard error on the mean. My entire post was about ensembles and about a situation in which someone did NOT include the uncertainty in the observed trends.
What are you trying to achieve here? As far as I can see you’re doing your utmost to ensure that we don’t reach any kind of agreement. I have better things to do with my time than waste it talking to people who simply choose to be disagreeable for the sake of it.
Anders,
If you want me to interpret what you said differently then all you need to do is tell us what you intended to say and do so in a way that means something different from what I interpret your previous (and repeated) statements to mean. You have not yet done so.
It’s also worth noting: people here aren’t interpreting what you wrote through the lens of what I say you wrote. Nearly everyone here has read your post themselves. And they appear to interpret what you wrote to mean more or less what I interpret it to mean. Heck, if someone else who read your post explained that their reading of your words differed from mine, I’d be happy to read that explanation.
If you intend to say something different, say it. And make sure to highlight what it is that you claim that is different from my interpretation of your claim. (And no, “I didn’t mention Santer” doesn’t constitute being “different” from my interpretation of what you claim, because I never claimed you said anything about Santer. So: please focus, figure out precisely what you think I misinterpreted, and tell us.)
Anders,
Nonsense. Your argument for why the standard error in the mean cannot be applied to the multi-model mean would apply with equal force to comparing observations to an individual model mean. Either it can be used for both or neither. So yes: what you wrote can be interpreted as saying you cannot use it for testing individual models, even if you didn’t say so explicitly.
In fact: if you admit it can be used to test individual models, that admission stands as a specific counter-example to your claim it cannot be used to test the multi-model mean. Which means: your claim it cannot be used to test the multi-model mean is mistaken.
As I noted before: people are allowed to present you with specific counter-examples to the truth of your general claim. And the fact is: your claim about the multi-model mean cannot be correct if, in fact, you think the standard error in the mean can be used to test individual models.
You can keep yammering on about how you didn’t say anything about individual models (no one said you actually did). Or you can keep yammering on about being misunderstood. Or you can explain your theory about why it can be used to test individual models but not the multi-model mean (that would be interesting to hear). Or you can do any number of things.
But right now: your claim about whether the standard error in the mean can be used to test the multi-model mean, and the reasons given for why it could not, apply just as strongly to testing individual model means. And unless you explain otherwise, people are going to see this consequence and point it out. I and anyone else who does so are perfectly justified in doing so, even if you didn’t actually say anything about testing individual models.
And in your post
(a) you said nothing about uncertainty in observed trends and
(b) made specific claims about an entirely different feature.
If you now want to say, “Yes. It’s perfectly fine to use the standard error in the mean when comparing observations to models provided that the standard error in the observed trends is included in the test,” do so. Directly. It’s easy enough to do.
But I’m not going to put words in your mouth. Either say this directly, or refuse to say so directly.
“Context” doesn’t magically infuse this meaning into your words for a huge number of reasons. (In fact, given the long history of this argument involving Gavin Beaker-Dikran-Marsupial Cawley and Gavin S, “context” that you are saying either one, the other or both were right “back then” cuts against the claim that you meant it’s ok to use the standard error in the mean provided the standard error in the observations is included.) And beyond that: your claims have broader consequences and people are allowed to point out that when applied to other issues it’s clear your general claims (at least as actually worded) are mistaken.
Somewhat off topic but relevant to modeling turbulent flows (the atmosphere is very turbulent) for anyone wanting to understand in a simpler case how models are constructed and tuned I recommend the Ph. D. thesis of Prof. Mark Drela currently at MIT. Written in 1985 it describes in detail how Drela built his 2D model for aeronautical flows including viscous effects. It has the additional virtue of being fully honest and the model produced is still state of the art today.
I quote just one thing that should be taken seriously by all those thinking about GCM’s and atmospheric modeling:
“This formula (6.30) despite having been derived solely from a special class of equilibrium flows, is now assumed to apply to all turbulent flows in general. As with the majority of useful statements about turbulent flow, this is mostly a leap of faith, justified primarily by the argument that in the laminar formulation decoupling the local dissipation coefficient from the local pressure gradient led to substantial accuracy gains.”
With the caveat that I’ve pretty much given up on this topic gaining traction here, I offer this plot for consideration. I asked for and received critical feedback on my method; my defense and justifications of same are detailed here.
.
The main takeaway is that the CMIP5 ensemble running ~10% hot compared to HADCRUT4 suggests that we have on the order of 6 additional years to reach the 2 C threshold over pre-industrial IF RCP6.0 is a reasonable projection of BAU emissions AND the implied TCR of the CMIP5 model ensemble isn’t grossly higher than my best fit of the historical runs to HADCRUT4 implies.
dikranmarsupial at ATTP May 12, 2016 at 8:42 pm
“BTW if anyone at Lucia’s wants to ask me any scientific or statistical questions about the Douglas/Santer/Schmidt tests, ask them here and I’ll do my best to answer them”.
–
Brandon G,
You post to a link where evidently you asked for critical feedback on your method. Where do you describe your method? I see a graph. But I don’t see any discussion of how you created your graph or what your method is.
angech:
Assuming Anders’ not particularly open dialog blog would accept any comments or questions from anybody who had anything relevant to add.
But there’s nothing much to comment on… It’s a simple fact that Gavin Cawley screwed up.
I think there is a substantive question about Santer—whether he correctly computed the error of the mean.
I wonder how 17 authors actually uniquely contributed to Santer’s paper to the point where all of them earned co-authorship. But climate science is a bit “special” that way, and these papers turn into something more like “letters to the editor” than serious independent research efforts.
Of course people no doubt still claim it on their resume, even if they didn’t do anything other than agree to be listed as a coauthor.
SteveF:
If you don’t mind, I’ll take a stab at that.
I’ve discussed above why I think the statistical test of the ensemble of model output versus observations shouldn’t be used as a test of model validity.
You are doing something though, when you compare the ensemble of model outputs to observations–you’re testing the reliability of that data product. This test would be the same as a validation test of the models themselves only under circumstances that I would argue are unlikely to be present.
lucia,
.
I didn’t describe the method in great detail over there, but I’m happy to do so here:
.
1) I obtained the CMIP5 RCP6.0 ensemble TAS and HADCRUT4 GMST annual means baselined to 1986-2005 from KNMI Climate Explorer.
.
2) I regressed the model ensemble to HADCRUT4 over 1861-2005 and created a second model timeseries scaled down to that best fit; the scaling factor was 0.91 indicating that the ensemble trend is about 10% warm compared to observation.
.
3) I offset the whole mess upward by the amount required to bring the HADCRUT observational mean over 1861-1890 to zero.
.
4) I separately calculated the difference between the annual observational means and both model ensembles over 1861-2005.
.
5) I calculated the standard deviation for both of those residual series and multiplied by 1.96 to get 95% error estimates; +/- 0.23 K and +/- 0.22 K for the non-scaled and scaled model ensemble series.
.
6) I applied the error estimates to their respective model ensemble series across the entire 1861-2100 data range.
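For anyone who'd rather read code than prose, here is a rough Python sketch of how I'd express those six steps; array names are placeholders, and it assumes the annual model-ensemble and HADCRUT4 series are already aligned on the same years:

import numpy as np

def scale_and_bound(years, model_tas, obs_gmst):
    # years, model_tas, obs_gmst: aligned annual series, anomalies w.r.t. 1986-2005
    hind = (years >= 1861) & (years <= 2005)
    # step 2: regress obs on model over the hindcast; a slope of ~0.91 is the ~10% warm bias
    scale = np.polyfit(model_tas[hind], obs_gmst[hind], 1)[0]
    scaled_tas = scale * model_tas
    # step 3: shift everything so the 1861-1890 observational mean sits at zero
    offset = -obs_gmst[(years >= 1861) & (years <= 1890)].mean()
    raw, scaled, obs = model_tas + offset, scaled_tas + offset, obs_gmst + offset
    # steps 4-5: hindcast residuals -> 1.96 * sd as a 95% envelope
    err_raw = 1.96 * (obs[hind] - raw[hind]).std(ddof=1)
    err_scaled = 1.96 * (obs[hind] - scaled[hind]).std(ddof=1)
    # step 6: the same envelopes are applied over the full 1861-2100 range
    return raw, scaled, err_raw, err_scaled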
.
This method makes several implicit assumptions:
.
1) Trend uncertainty in HADCRUT4 is negligible, and annual error is normally distributed, i.e., “comes out in the wash”.
.
2) The regression scaling reasonably handles the observed warm bias in the model ensemble over the hindcast period, and thus plausibly corrects for the warm trend bias in the future projections.
.
3) The residuals more or less account for unforced interannual internal variability.
.
*IF* those assumptions aren’t completely terrible, under RCP6.0 we’d have about 6 additional years before hitting 2 C above pre-industrial. Another takeaway for me is that by 2100 the difference between the original and scaled RCP6.0 projection means is on the order of one standard deviation of my error estimates.
.
Basically, I’m arguing that the CMIP5 ensemble mean is a fair representation of reality, even without the scaling tweak I’ve applied to bring it more in line with HADCRUT4.
.
All bets are off if feedbacks are decidedly non-linear, and/or if the physics in most or all the component models used in the ensemble are grossly wrong, etc.
Gavin, Brandon G. If I’m reading your graph right, your estimate says we hit +2C over preindustrial in about 2065 or so vs. 2060 or so. So: we’ve got 48 vs. 43 years.
As far as I can tell, your argument is that if you assume the CMIP5 ensemble mean effective sensitivity is ~10% off and you make no other change, you find the projections are only about ~10% off.
I assume if you estimated the models were 20% off, you’d get a different result. And if you assumed they were 30% off, you’d get yet another result. And so on.
But yes: if you assume the models are very close to correct, your modified projections will differ from them only a little. I don’t think that’s surprising and I don’t think it would change much if you used any different method.
Now to what you did and why you estimate they are only off 10%:
From what I read here, you did a scaling that mostly involved fitting to the hindcast, and based any future discrepancy on the mismatch in the hindcast. I’d suggest that fitting to the hindcast is highly likely to cause you to under-estimate any discrepancies between models and observations, because there is a strong tendency for models to be able to predict the past much better than the future.
If so, the ~10% warm estimate could be significantly off. Might not be– but could. So I’m not seeing this method as a particularly good way to estimate how far off the models might be.
Given that, I think you might want to consider what you get if the models are 30% off. Just a guess: your estimate will be we have roughly 65 years instead of 53 years before we hit 2C above pre-industrial.
(By the way, I know if you scale for 30% off the hindcast won’t fit so well. But I don’t see how that’s a big concern for the scaled version, since the entire difficulty is that the hindcast is a hindcast and the models may be “better” at fitting the past than the future.)
Those are my quick thoughts. It’s later than I should be up (2 am). I may have other thoughts in the morning.
Oh– I should add, in the entire forecast region (2000 to now), both the CMIP5 (blue) trace and your scaled one lie above the data nearly all the time.
Brandon R. Gates (Comment #147623)
“my defense and justifications of this plot for consideration. I asked for and received critical feedback on my method”
“In the past, I have done “simple” analyses based only on the model ensemble mean, such as this one … assuming we follow the RCP6.0 emissions pathway and the ensemble prediction really is ~10% for that forcing scenario”
further “I’m implicitly treating the ensemble as a single model”
“my error envelopes are completely insensitive to ensemble spread”
“The error bars for each model curve are the same across their entire respective intervals by design.”
–
ATTP [on a separate issue] raised the point that “internal variability can influence both the mean trend and the variability about this trend.”
re the graph he said “there is a range for climate sensitivity, though, which I think would increase the spread with time.”
–
Dikran separately “Fifteen years is not nearly enough to properly capture the effects of ENSO (a major source of internal variability) and you are still left with the problem, of deciding what part of the trend is due to the forcings and which internal variability.
re the graph he said “regression based extrapolations of course there ought to be a broadening of the error bars.”
–
So saying you are treating it as a single model does not mean it is a single model. It is a combination of two models one of which is an ensemble.
It must have a broadening of the error range into the future and into the past to account for the increasing a
Your saying that there is only a 10% difference is because you put a 10% difference in the first place and kept it there.
It does not mean that 10% is the real or expected difference years into the future.
In effect you have predicted the change you want and then done the maths to fit. [“The error bars are the same across their entire respective intervals by design.”]
–
I admire you putting in the work and asking for comment and look forward to the next graph as you are putting in considerable effort which is appreciated.
It must have a broadening of the error range into the future and into the past to account for the increasing a
–
increasing a should be “increasing uncertainty”
lucia (Comment #147668)
.
.
It’s a six year difference, but yes, your eyeballs are close enough. We could expand that to include my error estimates for both the scaled and as-published projections; the span is roughly +/- 10 years centered on ~2062.
.
.
Trendwise, yes.
.
.
That was 100% the method.
.
.
Yes. And in turn, underestimating the uncertainty in the projections.
.
.
Absolutely. That should go without saying in my book.
.
.
It’s the easiest way I know of to estimate how far off the ensemble mean is. It’s completely silent on the skill of the individual model runs making up the ensemble, and singularly horrible for explaining *why* the ensemble mean or individual members might or might not be off.
.
OTOH, it’s totally insensitive to the inter-model spread, something I hadn’t thought about much as a possible virtue of this method until I read SteveF’s comment above:
.
SteveF (Comment #147507)
May 11th, 2016 at 1:11 pm
.
Kenneth,
“This situation points to the importance of more completely comparing the parameters/metrics between models and the observed.”
.
Of course, I completely agree. If the model is wildly more variable than reality on multi-year to multi-decade scales, then of course that artificially inflates the model’s ‘95% confidence limits’. Hell, make the model sufficiently crazy, and the model uncertainty range will always encompass a measured trend.
.
If I were to take the observational uncertainty and model spread into account, the error envelope would be much fatter. Way I see it, my method literally gives me *less* wiggle-room, not more.
.
.
Good guess. Scaling to 0.7 and adjusting the curve so that it crosses observation over 1986-2005, 2 C happens exactly 65 years from now in 2080.
.
.
Actually, that would be my primary concern with scaling the model mean as you suggest just above. With such a complex system, a skillful hindcast cannot reasonably “guarantee” skillful future performance, but I’m going to have less confidence in a model (or ensemble of models) that is less skillful over the training interval.
.
.
Yup, doesn’t bug me — look at 1910-1930. As well, at the projected rate of warming through about 2060, an observed trend could be dead flat for up to 30 years and still be within the error bounds I’ve defined. Finally, 1998-2015 is anything but flat.
.
A minor nit if I may: the hindcast runs ended in 2005 with the RCP forcing assumptions beginning in 2006. That’s why I converge all series over the 20-year 1986-2005 interval.
Lucia,
I think you are using ‘Gavin’ and ‘Brandon Gates’ for the same person; it was very late I guess.
WRT hind casts versus forecasts: yes, a hind cast doesn’t mean much. It especially doesn’t mean much when the model has multiple parameters which can be adjusted, and also depends on broadly adjustable estimates of unknown quantities like historical aerosol offsets and historical ocean heat uptake. Forecasts are the only real test of model performance. The argument that ‘we don’t have time to wait for a forecast’ is unconvincing, because it presumes the models are correct in their forecasts, even as accumulating empirical evidence, and continuing divergence of models from reality, say otherwise. Of course, some will completely discount the need for model accuracy and simply say ‘It doesn’t matter if the models are accurate, because reducing fossil fuel use is the right thing to do’…… to which I can only respond by rolling my eyes and then chuckling a bit. That amounts to nothing more than the well known ‘because it’s what I want’ argument, which is the final redoubt of teary six year olds arguing with their parents.
Carrick,
“This test would be the same as a validation test of the models themselves only under circumstances that I would argue are unlikely to be present.”
.
Yes, the models are neither independent nor necessarily a fair representation of the plausible range of parameter choices. But I think it is worse than that. If you can show that specific individual models are unlikely to be correct, then it is difficult for me to see how a multi-model test, including models that are strongly suspected of being wrong, is not an exercise in GIGO. I can understand why it makes political sense to use the considerable spread between models to claim ‘see, the models are all just swell’ because some model might someday encompass reality. But it is still GIGO.
There’s a new paper by Ray Bates, Dublin, that comes up with a low value, around 1C for ECS discussed here.
Brandon Gates –
I think Lucia’s point about hindcast is the most important…”past performance is not necessarily indicative of future results”.
But I wanted to make a mathematical point. It seems that you arrived at the conclusion that 0.91 times the model temperature anomaly will give a good estimate of the HadCRUT4 anomaly. This number, I’ll guess, was arrived at by regressing the HadCRUT4 series vs. the model series, over 1861-2005. [I get .906 for that regression slope.] In your words, the model runs hot by 10%.
.
If one regresses the other way — model vs. HadCRUT4 — over the same period, the regression slope obtained is 0.873. One can thereby conclude that the model *under*-estimates warming.
.
Just a warning about trusting the basic linear regression of two noisy series. There are better techniques.
.
Oh, and another interesting tidbit…extending the regression period through 2015 — that is, including the forecast portion — changes your slope from .906 to .853. Regressing over the interval 1950-2005 yields a slope of .815; extending that to 2015 reduces it yet further to 0.757.
SteveF:
I agree….
Remember that Lucia’s done this comparison in the past.
Not very many model outputs appear consistent with GMST. And that’s with (in my opinion) the test to GMST being a weak test of the models and not a very interesting one (you really want to be able to project regional scale MST).
Brandon
Your method may seem “easy” but it strikes me as highly unreliable. Things need to be as simple as possible but not simpler.
There are other equally “easy” methods. You could just scale the forecast to match the observed trend in the forecast period– that’s roughly 2000-forward. That gets around the problems with the fact that the hindcast might be (because it can be) tuned in some way. (In fact, it likely is, if only through people reasoning that if a model does thus and so, then we’ll pick aerosols of thus and such, which results in better agreement. Nothing wrong with that, but it means that hindcasts aren’t good tests of skill and should be eliminated or minimized when testing or adjusting projections.)
A not much more difficult way would be to take an estimate of ECS (say SteveF’s or someone else’s) and scale the models using the ratio of that ECS to the one in the models. Once again: don’t bother to consider what that does to the “hindcast”, since those might have been tuned in the models. Just use it to scale projections going forward.
I probably could think of lots of other ways to do it. You could probably get a range of answers. Not sure that any one of the methods is better than yours. But I’m definitely leery of a scaling that puts nearly all the ‘forecast’ hotter than the observations in the ‘forecast’ region. If I were going for “ridiculously easy”, I’d ignore everything in the hindcast region and scale only using the forecast. That at least ought to cause your scaled model projection trend to match the 200[01]-2017 trend. It really doesn’t make sense for the “corrected/scaled” model trend to exceed that, and it appears yours does.
Anyway wrt to this
I realize the things I consider flawed in your method don’t bug you. 🙂 But I think this is a sufficiently bad flaw that I consider your graph scaling into the future pretty uninformative.
As for merely staying in the error bounds: Aren’t your error bounds just “weather” about a central estimate? (Real Q.) If so, it would seem pretty important to want your central estimate to be an unbiased guess of the value weather is going to oscillate about. And I’m suggesting your scaling method already seems to be biased in the ‘2000-2017″ period. That’s a problem if you intend this to be a forecast. (Or at least it’s a problem if you are hoping to get anyone to think this scaling method is worth taking very seriously. I get that it’s easy.)
SteveF
I don’t think “Brandon Gates” is a “Gavin”. If I wrote the wrong name, that was a mistake. There seem to be two Gavins:
1) Gavin Schmidt of GISS.
2) Gavin “beaker-Dikran Marsupial” Cawley, a computer science prof/sks guy. Don’t remember where he works.
Brandon
Well… I’ve taken that position in the past. But someone (I’m pretty sure Gavin S of GISS) insisted that was incorrect and the forecast officially starts in 2000. 🙂
So if you want to do minor nits on that you’re going to have to get all the climatati together to agree on “what” constitutes the division between hindcast and forecast.
Lucia,
Dr. Marsupial works at UEA.
SteveF, comment 147675 hits a key point. According to the Cawley criterion a model with very high variability could NEVER be falsified.
High variability in a model also offers huge scope for selection bias which is very prominent in CFD. You vary the thousands of parameters and inputs until you like the answer and then publish. I would assert this is common for GCMs too. Thus any given “best practices” ensemble strongly underestimates the real uncertainty in the models.
Further if a multi model ensemble includes models with at least one low ECS model, the whole ensemble will pass the Cawley criterion.
If Cawley is still answering questions, I would like to know what rationale he can give for a highly variable model being unfalsifiable and how such a criterion would be something he wants to see used to test engineering models used to say design the next airplane he boards.
Hi Brandon Gates,
When I read Lucia’s request for your methodology I regret not having asked you for it when I saw your work elsewhere last year. I think at the time I did recommend you publish it, which would have forced the critique you are getting now.
You may want to clarify what the purpose of your method is. Are you proposing a way to improve the forecasts by correcting for their “hotness”, or a method of characterizing hotness that would apply to all models?
Lucia and HaroldW raised some good points. It seems HaroldW interpreted your method as scaling the temperatures to the model whereas I read it the other way around. Both critiques zero in on the hindcast with HaroldW pointing out how the scaling factor depends on the order and range of the regression. Lucia’s simplest method suggests leaving the hindcast as is. Scaling the hindcast essentially means proportionally modifying the contributions of factors that are not contributing proportionally over the time-frame of the hindcast. So I would consider Lucia’s simple method if for no other reason than to provide an arbitrary way to characterize model hotness or coolness.
“There’s a new paper by Ray Bates, Dublin, that comes up with a low value, around 1C for ECS discussed here.”
Low values.
Almost purely model based.
Confidence interval of 0.8 to around 1.2.
Don’t expect anyone to take it seriously given its reliance on Lindzen’s work.
Brandon
‘1) I obtained the CMIP5 RCP6.0 ensemble TAS and HADCRUT4 GMST annual means baselined to 1986-2005 from KNMI Climate Explorer.”
bad idea.
You are comparing TAS to TAS + SST
GCM: TAS
HADCRUT: TAS+ SST
That would be ok if TAS over the ocean matched SST.
but.. alas..
David Young,
SteveF’s excellent point on which you eloquently elaborated is actually Comment #147676.
Will someone clear up for me the definitions of ensemble and run? I.e., does one model have several versions which constitute an ensemble? Are several runs with one version created by using different initial values?
“The Archer et al paper Ken Rice linked to assumes people will do nothing if rising temperatures become a problem. I don’t believe that is a realistic assumption; closer to an absurd assumption IMO.”
Since global temperature change lags emissions you are being extremely optimistic.
angech (Comment #147670),
.
.
I don’t disagree with either statement. Anders’ statement in particular: there is an evident 60-year pseudo-periodic oscillation since 1850, partially explainable by AMO. The only better driver I’ve found is length of day anomaly. It’s slightly out of phase with AMO, doesn’t have as consistent a periodicity, and its amplitude isn’t constant. Perhaps Lucia would be kind enough to make these images visible inline:
.
http://3.bp.blogspot.com/-5NsdtYi0Ifg/VqQqEq8O-BI/AAAAAAAAAkU/BzdI7Q7-Gsk/s1600/HADCRUT4%2Bvs%2BCO2%2Bmonthly%2B2015-12.png
.
http://3.bp.blogspot.com/-vniapHOw-Po/VqQqEXqI2tI/AAAAAAAAAkQ/387eopVqVoM/s1600/HADCRUT4%2Bnon-CO2%2Bmonthly%2Bcontributions%2B2015-12.png
.
Both plots are trailing 12-month means. First one shows CO2’s contribution to the secular trend and non-CO2 contributions to the interannual variability. Second plot shows the component contributions to the non-CO2 curve in the first plot. In the first plot you can see that the non-CO2 contributions are on a down-slope since 1998.
.
.
I’m aware of that, hence the words “treating as”.
.
.
Hmm. I count 105 models: the 102 ensemble members, the ensemble model, the scaled ensemble model and the observational GMST time series.
.
.
That statement got cut off; I assume you meant I need to account for the increasing divergence of the individual CMIP5 members into the past and present. Again, I don’t disagree — I’ve done it in the past — problem I have is that it puts me in a bind wrt SteveF’s contention upthread (emphasis mine):
.
SteveF (Comment #147507)
May 11th, 2016 at 1:11 pm
.
Kenneth,
“This situation points to the importance of more completely comparing the parameters/metrics between models and the observed.”
.
Of course, I completely agree. If the model is wildly more variable than reality on multi-year to multi-decade scales, then of course that artificially inflates the model’s ‘95% confidence limits’. Hell, make the model sufficiently crazy, and the model uncertainty range will always encompass a measured trend.
.
I thought that was a valid point. My ensemble scaling exercise, and the error envelope method is something I’d done before, but until reading SteveF’s comment I hadn’t considered that it might suitably address his critique. By deliberately ignoring the spread of the individual ensemble members and considering only the mean, I make the error bars far smaller than they’d otherwise be. It may be artificial, but it does hold the model mean to a more restrictive standard of falsification.
.
I’m NOT saying my method is more correct and should be used to the exclusion of other methods. I am saying it’s an … interesting … way to interpret the data for the very reason that it puts a more restrictive limit on deviations from the hindcast and projected portions of the ensemble mean.
.
.
Yes, that’s the most defensible method I could think of, and 10% is consistent with what the IPCC say about the ensemble mean in box 9.2 of the AR5 WGI report, albeit for near-term projections only.
.
.
Same rule applies to any model hindcast. A skillful hindcast tends to increase confidence in future predictions/projections/forecasts but *never* guarantees their correctness.
.
.
Hmm. The 10% scaling is what I expected. I obtained it by using the most defensible endpoints I could think of: all the data from the beginning of the model runs to the end of the hindcast portion in 2005. I used the same interval to calculate the error bars. I can’t say I wanted either result, they’re simply what fall out when I do the maths.
.
.
I appreciate your appreciation. I gave you two more plots to look at above, but they’re somewhat tangential to this discussion. For purposes of this conversation, the main thing I’d say they demonstrate is the imperative for a regression scaling to be trained over as large a time interval as possible, at least 60 years.
.
I’m not exactly sure what to try next, but I have two ideas:
.
1) A skill-weighted calculation of the ensemble mean, with error estimates based on the divergence of the individual members, again weighted.
.
2) A subset based on the n most skillful ensemble members, error estimates based on divergence.
.
The latter one is easiest, I’ve already done something similar in the past. And in theory, it shouldn’t require scaling. Cheers.
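A crude sketch of the first idea, for concreteness; the names are made up and inverse hindcast RMSE is standing in for "skill", which is just one of many possible weightings:

import numpy as np

def skill_weighted_mean(members, obs, hind):
    # members: 2-D array (n_members x n_years) of annual anomalies
    # obs: observed annual anomalies; hind: boolean mask for the hindcast years
    # Weight each member by the inverse of its hindcast RMSE, then form a
    # weighted ensemble mean across members for every year.
    rmse = np.sqrt(((members[:, hind] - obs[hind]) ** 2).mean(axis=1))
    w = 1.0 / rmse
    w /= w.sum()
    return w @ members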
Chic Bowdrie (Comment #147692),
.
.
Hi Chic, nice to see you as usual.
.
.
I either missed that, or just don’t remember. Better late than never, eh?
.
.
It’s a “what if” argument. The underlying problem with the models is some combination of incorrect physics, tuning, and computational limitations. Oh, and observational uncertainty in the historical forcings used. Plenty of other stuff well above my paygrade, I’m sure. I can confidently say that improving the *projections* for any given future emissions/forcing scenarios can only really come from improving the models themselves.
.
.
I’ll have to read Harold’s comment more closely, but I read it as saying that treating obs as absolute truth may be folly.
.
.
Yes. Were I attempting to do publication-quality work here, I’d do a sensitivity analysis. My sense is that all it would tell me is that the models don’t do multi-decadal unforced variability very well, which to me is fairly obvious just looking at the plot.
.
.
I understand those arguments, and think they’re good points. I’m not following how you think I should implement it. In particular, I’m trying to keep arbitrary adjustments out of it. So my simple regression over the hindcast interval isn’t the *most* defensible method, but I think it beats plucking adjustment factors out of a hat.
Brandon Gates, The other reason one might be suspicious of your method is that there are very refined simple energy balance models that take into account secular cycles like the 60 year one you mention and match data very well. They disagree with GCM’s about climate sensitivity. One might ask, if there is a more skillful simple model, why would one not use that one?
Be very cautious in interpreting statements about weather noise time scales. You know of course that since 2000, at least, climate scientists have been extending the length of the time period over which we can ignore trend disagreement due to “weather noise.” We now have 40 years of satellite data and at least 150 years of surface data. That is much longer than any plausible claim about weather noise time scales.
Chic Bowdrie (Comment #147700),
.
.
Yes, that’s my understanding. The initial values are taken from the pre-industrial control “experiments” (PI-control) where external forcings are held constant. The PI-control runs serve an additional purpose to allow the model to (hopefully) reach a steady state equilibrium. See top of p. 17 of the .pdf: http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf
.
… and top of p. 18:
.
Purposes and key diagnostics:
3.1 Pre-industrial control
a) Serves as the baseline for analysis of historical and future scenario runs with prescribed concentrations.
b) Estimate unforced variability of the model.
c) Diagnose climate drift and, for ESMs, carbon cycle drifts in the unforced system.
d) Provides initial conditions for some of the other experiments.
e) Provides SSTs and sea-ice concentration for prescription (as a climatology) in expt. 6.2a.
.
Not all models included in the ensemble contributed several runs to the ensemble, and not all models contributed the same number of multiple runs. KNMI Climate Explorer allows the option of generating an ensemble mean restricting each model to one and only one run.
HaroldW (Comment #147681),
.
.
My response is, yes I know, and “everyone” already knows this. A skillful hindcast tends to improve confidence, but does not *ever* guarantee similarly skillful future performance.
.
.
That was my exact method and is my current argument.
.
.
True. My inclination is to trust an empirical statistical model over a physical one. So for sake of argument I’m assuming HADCRUT4 to be the better representation of reality.
.
.
Some of which I’m aware of, but unfortunately don’t know how to implement. If you do, I’d be quite open to you doing an alternative analysis and compare results.
.
.
Yup. Problem I have with including the 2006-2015 interval is that post 2005 the forcings come from scenario assumptions, not observational estimates. So that would be smooshing two things together at once but testing them as if they’re the same thing, which I deem a no-no.
.
.
A fair point. However, 55 years is inside my cutoff to reduce the influence of unforced internal variability. Tell you what though, I’ll do a model where I regress every interval from 1861 through 1950 as the start point, with 2005 as the fixed end point and run each result forward to 2100. Will post results here when I’m done.
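Something like the following sketch, in other words (placeholder names again, same aligned annual series as before):

import numpy as np

def scaling_by_start_year(years, model_tas, obs_gmst, end_year=2005):
    # Regress obs on model over [start, 2005] for every start year 1861-1950
    # and return the scaling factor implied by each choice of training interval.
    factors = {}
    for start in range(1861, 1951):
        w = (years >= start) & (years <= end_year)
        factors[start] = np.polyfit(model_tas[w], obs_gmst[w], 1)[0]
    return factors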
Brandon R Gates, there is one more important aspect to add to SteveF’s point about flexibility in aerosols to make models fit the past. Ed Hawkins commented to me that CMIP5 does not forecast volcanoes. They are used, however, to cool hindcasts.
.
There is more than a good chance that there will be volcanoes in the future and thus I would say the forecast could be biased high by ignoring this likelihood.
Eli Rabbit,
“Since global temperature change lags emissions you are being extremely optimistic.”
.
Actually, no. There is a lag between forcing and response, of course. A plausible delay is on the order of a decade or two to see most of the response. So how far are we behind? That depends on lots of assumptions, but IMO, it is silly to suggest more than 0.5C behind. The key is that the delay in response is not many decades.
lucia (Comment #147686),
.
.
I’m not sure how many times I need to list what I have already considered as this method’s weak points before you’ll believe that I know it has problems.
.
.
Unfortunately, digging into the actual physics in great detail and using more robust statistical methods exceed my training and experience. For a ballpark “what if” discussion, it gives me a good sense of what the models running 10% hot *might* mean for time to the 2 C threshold for the RCP6.0 emissions scenario.
.
Which is not to say that I don’t appreciate the critical feedback, I do think some good points have been raised which I had not previously considered.
.
.
I could do that (I have), but I don’t see that as an improvement for two reasons:
.
1) Post-2005, the forcings are assumed, not based on observational estimates.
.
2) A 10-year interval is highly sensitive to unforced internal variability, i.e., weather noise.
.
.
I just don’t know what to say to this. The only way I can think of to get a truly representative test outside of the training period is to wait for a sufficiently long period of time to do a reasonable skill test against the projection. At that point, another elephant which has been lurking in the shadowy corner of this room rears its ugly head: a projection is also only as good as its assumed future forcings. As well, wait-and-see also completely defeats the purpose of doing the forward-looking projection to begin with.
.
Please tell me that defeating the purpose of model projections is not your actual aim here, because from where I’m sitting, that looks exactly like what you’re attempting to do.
.
.
ROFL! That’s where this discussion *started*, Lucia. Both you and SteveF disputed my method for doing exactly that. I asked SteveF to include that discussion in this very article … and it’s not here so far as I can tell.
.
At this point, I see it as up to you two to attempt to answer the question I’m asking by methods you deem less unreliable than mine because I’ve all but run out of ideas that you might find acceptable.
.
.
No “might” about it: the models were tuned to various observations. They do not all use the same physics, and not all groups use the same tuning protocols. I’m fairly confident that they weren’t tuned to a preconceived ECS value mainly because practically none of them agree on that value.
.
Without being able to scale to the hindcast portion, I’m left either plucking a scaling factor out of the hat (or a range of them) or regressing to a too-short projection interval, neither of which I consider palatable or defensible options.
.
.
If you can think of one that doesn’t run afoul of the concerns I raise above, please do one and publish it. I’m open to seeing and/or learning other methods.
.
.
Ha. What doesn’t bug me is the GMST slowdown from 1998-2015. So far as I can tell from a number of different analyses including my own it’s well within the range of unforced internal variability. Some of it may indeed be forced, e.g., reduced solar output and a cooling aerosol trend to name the two examples with which I’m most familiar.
.
.
Not exactly a shocking development. Neither is the fact that you’ve not produced what you think is a more credible alternative.
.
.
Particularly for the scaled series, yes that’s essentially how I’m interpreting it. My reasoning is that the scaling removes most of the trend bias due to forcing errors and/or model response to forcing. As I mentioned earlier, it’s an implicit *assumption* of the method, nowhere near a robust conclusion.
.
.
One way I could do that is to calculate the annual differences from a multi-decadal centered running mean. I’ve done that somewhere; the results are sensitive to the number of years I use to calculate the running mean. The answer to that seems to be to use all annual sampling intervals between, say, 10 and 60 years. However, combining those results into a single statistic doesn’t feel right to me because they represent different things. So I’m stumped.
.
I could use LOESS smoothing I suppose.
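In sketch form (pandas, window in years, names made up), the running-mean version would be something like:

import pandas as pd

def runmean_residuals(annual, window):
    # annual: pandas Series of annual anomalies indexed by year.
    # Departures from a centered running mean of the given window;
    # the result is sensitive to the window choice, which is exactly the problem.
    smooth = annual.rolling(window, center=True, min_periods=window // 2).mean()
    return annual - smooth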
.
.
I again refer you to 1910-1930. You know, I should just post the residuals; it will immediately pop out at you that the divergence since 1998 is not outside historical precedent.
.
.
Frankly, I just sort of assume you’re not going to take me seriously no matter what method(s) I use. You’ve already apparently rejected a good portion of what professional climate scientists are telling us; the last person I’d expect you to believe is a random dude on the Internet who has gone out of his way to declare no particular expertise in this field. That all said, I would not at all mind you proving me wrong on this point.
.
Cheers.
David Young,
“According to the Cawley criterion a model with very high variability could NEVER be falsified.”.
And in my darker moments, I am tempted to believe that is the primary motivation for folks like Dr. Marsupial.
.
The simple truth is: The models cover a wide range of sensitivities. Few, if any, appear to include the empirical sensitivity. Either there is something very, very wrong with empirical estimates, or the models are very, very wrong. There aren’t any other options. Reality will ultimately assert itself, of course; one or two decades more will make the situation much more clear.
.
One or two decades will also show that the really nutty stuff, like Stefan Rahmstorf’s many crazy sea level rise projections of 1 – 2 meters by 2100, are plainly, stupidly, and comically wrong. This at least will be progress.
Steven Mosher (Comment #147694),
.
.
I’m having difficulty believing nobody but you has considered this. Zeke comes to mind immediately.
.
This would be my annoyed and snarky way of asking for references to a more correct way of doing it.
lucia (Comment #147688)
.
.
Table 4, pg 23 of the .pdf: http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf
.
4.4 RCP6 (2006-2100)
.
IIRC the switchover for CMIP3 was 2000.
Brandon G,
I’m not sure what you are hoping for in this discussion. I agree that you have done a back-of-the-envelope estimate of what might happen if the models are 10% too warm and the emissions follow a particular trajectory. But I don’t think anything you’ve done suggests that 10% too warm is a reasonable estimate, and in fact, the appearance of the graph and the technique strike me as suggesting that your method underestimates how much too warm.
I realize that a method as simple as yours, using projections driven by hypothetical emissions, may not be improvable. In which case, you could try something else. Or recognize that the task is not doable because, well…
Sometimes, there’s just not enough stuff there to do what you want to do. Them’s the breaks.
Defeating the purpose of model projections is not my aim. It seems you are inferring that from my pointing out that your method of tweaking them seems unreliable. Yet you also tell me you know it’s flawed. It’s hard for me to know what to say other than: your method seems to be an unreliable method of tweaking. Saying something that suggests you assume those who observe this are trying to defeat the purpose of model projections suggests you aren’t willing to believe that, perhaps, your method of tweaking doesn’t tweak in a way that is likely to produce reasonable results.
I don’t recall “disputing your methodâ€. If I remember correctly, you said you had a method and you wanted me to give you my best estimate ECS to do something. I don’t have an best estimate of ECS — and told you that you could go ahead and pick one to kick off whatever conversation you want to have.
That’s not “disputing†your method. But perhaps you mean something else constitutes “disputing your methodâ€.
Today is the first time I have seen an explanation of anything I recognize as a “method†from you. I am engaging that.
With respect to your seeming to suggest that one of the advantages of the method is that it is simple, I’ve pointed out there are numerous other equally simple things you could do. It migth be none of them are of any use— I can’t know the utility of what you end up doing until you do it and explain it.
No. I don’t think it’s here. I recall that we discussed your request pointing out that SteveF has proposed a topic that interested him. You agreed there was no reason for SteveF to fulfill your request. He wrote the post he proposed to write on the topic he chose to comment on. That has nothing to do with your tweak of the projections. I’m not seeing a problem here.
I’m not entirely sure what you mean by “up to [us] two” to answer the question you are asking. If you mean it’s up to us to decide whether or not we want to try to work on the question you are interested in, I agree.
As the decision is up to me, here’s my thinking:
(a) I don’t know how to come up with a reliable fix and
(b) I don’t think it’s especially important to come up with a super simple back-of-the-envelope method of tweaking model projections.
(c) If I were to tweak (and I’m really not that fascinated by this potential homework assignment), I would not go about it remotely the way you did, but my method would not be evolving in comments; it would involve me looking into some Bayesian stuff and working on it at home over the summer.
(d) As it really is “up to me” to decide whether I’m going to improve your model, my choice is not to do so.
The way I see it: you asked for feedback on my general reaction to your model, and I gave it. (FWIW I do think if you want to talk to someone who might be interested, Ed Hawkins does that sort of thing. He does Bayesian stuff. He may be able to help you further in your endeavor to come up with a tweak.)
I’m going to avoid engaging some of the more argumentative stuff wedged in there other than to say: No. I have not attempted to answer the question you are trying to answer. That doesn’t mean I need to overlook the fact that the flaws in your method suggest it is unreliable. That I don’t have a more reliable method also doesn’t make your method reliable.
As for going forward: if you do modify your method and think you’ve come up with something reliable, and you ask for feedback, I’ll be happy to give it.
Brandon Gates (#147708):
Concerning regression of two noisy series — one heuristic is to take the geometric mean. E.g. regressing a vs. b gives a slope of x, and b vs. a gives y, use sqrt(x*(1/y)). There are better methods, but with two series of similar variances, that may work well here, and is easy to implement.
I know I’ll catch some flak for this from the R mavens here, but for noodling around with things like this I usually use Excel. If it helps any with examining the sensitivity with respect to interval selection, I used the following formula (with year in column A, model temp in B, obs in C, interval start in G2, end in H2):
=SLOPE(OFFSET(C:C,G2-1849,0,H2-G2+1), OFFSET(B:B,G2-1849,0,H2-G2+1))
(The “1849” term arises because year 1850 is in row2, so row1 (row offset of 0) would correspond to 1849. Adjust accordingly.)
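For anyone who wants to try the heuristic HaroldW describes, a minimal R sketch (here a and b just stand for the two noisy series; this is one way to code it, not anyone’s actual script):
# Geometric-mean ("reduced major axis") slope between two noisy series a and b
gm_slope <- function(a, b) {
  x <- unname(coef(lm(a ~ b))[2])    # slope from regressing a on b
  y <- unname(coef(lm(b ~ a))[2])    # slope from regressing b on a
  sign(x) * sqrt(abs(x * (1 / y)))   # sqrt(x * 1/y), with the sign carried separately
}
With series of similar variance this amounts to the ratio of the two standard deviations carrying the sign of the correlation, which is why it splits the difference between the two ordinary regressions.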
Brandon G,
.
I went to the show and got the T-shirt, just got back in fact. The ‘accusation of bad faith’ tour was highly overrated, I don’t recommend it. Wish I’d skipped it, stayed home and kept the money.
.
Looks to me like you asked for feedback and Lucia gave you some. I doubt she gives a fig one way or the other; doubt she’s got an aim.
mark bofill,
For what it’s worth, I don’t think giving my feedback to Brandon G is remotely likely to “defeat the purposes of model projections”. Model projections will continue to exist. Brandon G can continue to try to improve those he is working on improving. Lots of other people will continue working on creating model projections.
I would have thought he asked for feedback so he could identify issues and create more reliable tweaks. And if he thinks my feedback isn’t useful, he can ignore it and continue working on his improvements. Seems to me that’s what people working on projects usually do.
Patrick Michaels says “It seems that the modeling cart has gotten far ahead of the scientific horse.”
http://www.cato.org/blog/climate-modeling-dominates-climate-science
97% of climate scientists agree, don’t publish without a model.
Lucia,
.
Well, for a long time now I’ve personally believed that your words harbor all necessary and sufficient theurgy to rock the world on its foundations and knock climate science sprawling, possibly never to recover, but I thought that was just me and a few others. I didn’t think Brandon G was one of the faithful. But I might be mistaken.
[Edit: Or not. 🙂 ]
Lucia,
[Edit: on a completely different subject]
I’ll admit to a certain morbid curiosity on exactly when and if somebody is going to talk to my kids about transgender use of the school bathroom. Small elementary school, deep in the buckle of the Bible Belt; surely they wouldn’t dare, right?
.
We shall see.
.
As I recall you were pleased when SCOTUS found for gay marriage. Where are you on transgendered toilet use? For my part I am not sympathetic.
Alright, one more time, this is my *main* objective: to figure out when we’ll hit a given temperature threshold above the “pre-industrial” mean. I’ve been using the interval 1861-1890 as the working definition of “pre-industrial” because that’s where I have data coverage for CMIP5 and instrumental GMST estimates by way of HADCRUT4.
.
I’m agnostic to the method used to determine the timing so long as it has some defensible physical basis which matches observation from the past. I already have one; the CMIP5 model ensemble. You guys don’t like it, so I’m open to alternatives. Me scaling the ensemble mean down by 10% was one attempt to address the charge that the ensemble runs too hot. Consensus sentiment here is that my scaling method is no good either.
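For concreteness, a minimal R sketch of the kind of calculation being described here, baselining an RCP6.0 ensemble-mean series to 1861-1890, applying the 10% scaling, and reading off the first year at or above 2 C. The vector names (year, ens) and the interpretation of the scaling are assumptions for illustration, not Brandon G’s actual script:
# Assumed inputs: year = calendar years, ens = CMIP5 RCP6.0 ensemble-mean GMST (deg C)
base <- mean(ens[year >= 1861 & year <= 1890])   # "pre-industrial" reference
anom <- 0.90 * (ens - base)                      # one reading of "scale the ensemble mean down by 10%"
crossing <- min(year[anom >= 2])                 # first year at or above 2 C
crossing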
.
I call that an impasse, and I’m stumped. Thus, it’s up to someone else to propose a method to guesstimate the timing by a method they deem more reasonable. If nobody can, or nobody is willing to proffer an alternative analysis, I’m basically going to conclude that you have nothing to offer the policy conversation except “no policy”, which means you’re going to get whatever “my side” is able to shove through by hook or by crook.
.
Please bookmark this post so that there’s no further … confusion … about what my intentions are in this discussion. Thanks.
Brandon G,
Would it help if I said I thought we’ll reach 2C over pre-industrial eventually? I don’t think it’s going to take thousands of years or anything; it wouldn’t particularly surprise me to get there. I dunno. In my kids’ lifetime? In my grandchildren’s? Let’s say that. After all, sensitivity could be higher than I think.
.
Why would any of this mean I have nothing to offer the policy conversation? I don’t see why that follows, and I don’t particularly agree.
HaroldW (Comment #147718)
.
.
Thanks, I’ll give that a shot.
.
.
In theory one hopes individual model runs have similar variances to observation. In practice they don’t. The ensemble mean certainly doesn’t because the averaging smooths out the interannual variability and (mostly) leaves behind the forced components (GHGs + solar + volcanic aerosols + land use changes and in some cases albedo feedback).
.
.
I mostly use Gnumeric on Linux, which is effectively the same thing. I do have Excel on a WinXP VM when I need to write macros and/or custom functions. I also have R on my Linux box, but the learning curve is kind of steep and I’ve been lazy. [g]
.
.
Totally understood, I use dynamic ranges all the time. Hell to debug sometimes, but when they’re done right it saves a lot of tedium.
Brandon, the issue here is whether there are more skillful methods than yours. It’s pretty clear that there are. Some are pretty simple too.
Brandon G,
I haven’t been accusing you of ‘intentions’.
Given your goals, if you are interested in tips on empirically modifying projections, I’d suggest you ask Ed Hawkins. I think he does this sort of thing.
I don’t see any logical connection between my (or anyone here) not telling you how to solve the technical problem you are enthused about (that is, to develop a “simple” method to “tweak” model projections using the temperature series) and our having nothing to say about the policy conversation except “no policy”.
In the first place, it is entirely possible to work on policy in the face of uncertainty. Secondly, it’s certainly possible to develop and implement policies without working on this particular problem you are enthused about.
And thirdly: it’s especially odd that you would deem my not being able to tell you how to solve the technical problem you have assigned yourself as meaning my only offering to the policy conversation is “no policy”, given that I have said I want policy to promote nukes. That is not “no policy”.
So the logic here eludes me. But if you are going to draw that conclusion based on that logic, I doubt there is anything I or anyone else can say to convince you otherwise.
I don’t feel any need to put my nose to the grindstone tweaking model projections to have a policy preference. That policy preference is: promote nukes. It will remain my policy preference if you find the models are 30% too cool, 10% too cool, just right, 10% too warm, 30% too warm and so on. So, the outcome of your analysis is not going to affect my policy preferences. I want to promote nukes.
If you want to discuss ways to go about reducing carbon footprints, that’s fine. But last time we started that it looked like you dropped that issue (though it might just have been me being away while I was helping students before the AP Physics C exams. Sorry if that happened, but I have a life. And if you haven’t noticed, 3 years ago I posted roughly 2-5 blog posts a week and now… maybe 2 a month? I’ve got other things on my plate.)
That said: if you want to discuss policy preferences, we can do that on “Ban bing”. Or I can open another thread and call it “Ban Bing is too long is also too long” or even give it a more reasonable name.
mark bofill (Comment #147725),
.
.
I don’t know if it helps exactly, but it’s good to know.
.
.
This is pretty much the crux of the issue. It’s not at all clear to me how anyone here maps sensitivity to timing for reaching a given temperature threshold. Mostly what I get is stuff like, “the IPCC sensitivity estimates are biased hot” and then … dot dot dot [end of statement]. It’s not an answer to the question I’ve posed.
.
My two methods have more or less been shot down. There have been some constructive suggestions for how I might improve my adjusted projections for RCP6.0, but I don’t see that they’re going to change the answer by much.
.
I should clarify. If I open up the error envelope as has been suggested (and really is the “proper” thing to do) then it makes a big difference. Without screwing around with scaling, if I take the annual standard deviations from the individual runs in RCP6.0 and multiply by 1.96 to get a 95% CI, we hit 2 C anywhere between 2040 and 2090. The ensemble mean goes through 2 C at 2060.
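A rough R sketch of that envelope calculation, assuming a year-by-run matrix of RCP6.0 anomalies (here called runs) relative to 1861-1890 and a matching vector of years; the names and layout are illustrative, not the actual workbook:
mu    <- rowMeans(runs)               # ensemble mean by year
sig   <- apply(runs, 1, sd)           # annual spread across individual runs
upper <- mu + 1.96 * sig              # ~95% envelope
lower <- mu - 1.96 * sig
c(earliest = min(year[upper >= 2]),   # upper edge crosses 2 C first (~2040 per the comment above)
  central  = min(year[mu    >= 2]),   # ensemble mean (~2060)
  latest   = min(year[lower >= 2]))   # lower edge last (~2090)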
.
.
You can always just say no. Perhaps you’ve forgotten Lucia saying to me that she didn’t want to implement policy … foolishly … is the word I think she used. I agree. So the rational approach as I see it is to determine a threshold we don’t want to cross, estimate how long it will take to get there under an assumed BAU scenario, and then figure out how aggressively (or not) we need to reduce emissions to avoid it.
.
We haven’t even really gotten past the point of mutually agreeing on a plausible range for the timing portion to get to 2 C yet. Or even if going over 2 C is a Bad Thing.
.
As you might say: ~shrug~
David Young (Comment #147727)
.
.
Great. Do one and share it already.
Brandon G,
I think it’d be fun and interesting to talk a little about policy. But my carriage has turned back into a pumpkin and it’s time for me to hit the hay; I’ll catch up with you tomorrow.
Brandon G,
I don’t consider that “the” rational approach. It’s one some might like. But I don’t see it as especially “rational” relative to other approaches. It’s not the one I would take.
I think the risk of warming is sufficient that we should build nukes given
(a) we have a theory that GhG’s cause warming,
(b) we have data to see that appears to be happening,
(c) nukes would provide good baseload.
(d) So implement nukes– the sooner the better. yesterday if possible.
I don’t see any advantage to feeling the need to identify “the” temperature threshold or “the” time to threshold, particularly as that may be an intractable problem. I think it’s silly (as in not rational) to let the fact that there is uncertainty about that get in the way of taking an action like encouraging nukes, which is a win-win no matter what the answer to that question is.
If you want to discuss whether my approach to deciding we should act is “the” rational approach, or whether yours is the only “rational” approach to deciding to act, we can do that on Ban Bing. (That said, if we both agree on acting, it seems rather counterproductive to hold that up in a pissing contest over whose “motivation” for acting is the most rational. But this is a blog. We can argue that. 🙂 )
Actually, in (Comment #147728) I told you that the timing of reaching that number doesn’t make much difference to me. As it happens, the specific number of “2 C” doesn’t strike me as particularly important or motivating either. It isn’t something magical like the triple point of water. It’s just a round number above pre-industrial. If 2C’s ok, seems to me they could have picked 4 F with just as much justification.
That said: on balance I think continued warming will tend to be a bad thing. I just don’t see how picking “2C” strengthens any basis for action.
I guess I care about getting the maths right, getting the approaches right and conveying accurate messages much more than where it is all leading.
From a time perspective, Dikran Marsupial says: “The classical period is 30 years, as defined by the World Meteorological Organization (WMO) for ‘Climate’.”
Obviously 30 years is far too short, and the chances of any of us seeing the purported 2C rise in our lifetimes are non-existent.
Thanks for pointing that out Brandon, one good result of the graph.
So we are left with vague impressions of where the temperature may drift and yet people who know this fact choose to take an alarmist stance.
Many of the measures that may help reduce CO2 increase are in reach. The most important, far more so than nuclear, would be population reduction, followed by efficient resource use, followed by nuclear.
lucia, for what it’s worth, I don’t think the approach Brandon R. Gates proposes is the “rational” way of handling the problem. The risks of climate change are not dependent upon some absolute change in temperature, but rather, how temperatures change and when (and where). A total warming of 1.5C by 2050 is not the same as the same amount of warming by 2100.
If one wanted to go this route, the “rational” approach would be to examine the range of risks (by area and aggregate) for various amounts of warming over different timeframes. This would give a distribution of potential results. With that, one could try to examine how various policies could shift where things would lie within that distribution and determine if that change in risk was worth the costs required to cause it.
I can save any further comments on the topic for the open thread, but I wanted to remark on this because it amazes me how the “magical” number of 2C isn’t based on any sort of real science. It certainly isn’t based on any sensible cost benefit analysis. It was pretty much picked out of thin air because it sounded like a good threshold (the history of how it was chosen is interesting). The idea that relying on that, or any other number chosen in a similar way, is “the rational approach” is laughable.
There has been a lot of study of how to make decisions weighing costs and benefits given many sources of uncertainty. It amazes me the global warming movement doesn’t use any of it.
I’m not beholden to 2 C as THE Threshold:
.
.
Read harder.
Ron Graf (Comment #147710),
.
.
It would be a neat trick if they could.
.
.
Yep, they jump out as significant dips in the ensemble mean. It’s the one place where pretty much every run in the ensemble agrees on the timing of interannual variability.
.
.
It isn’t ignored so far as I can tell. There are only three major eruptions post-1950, but in each case the observed temperature trend went more or less right back to where it was pre-eruption.
Brandon R. Gates:
Why? As far as I can see, nobody has had trouble understanding what you said. Both lucia and I have specifically said what numerical value you use doesn’t change our view of the matter. We don’t agree that the approach you are promoting is “the rational approach”, regardless of what numerical value you might set as your threshold.
Choosing not to respond to people’s criticisms of the approach you label “the rational approach”, and instead telling them to “Read harder” because you aren’t something nobody said you were, seems silly. Whether or not you are “beholden to 2 C as THE Threshold” doesn’t change the fact that neither lucia nor I think your approach is a good one, regardless of what threshold you choose.
we are at 1C above 1880. People keep forgetting that. The argument is whether an additional 1C increase is probable and what the consequences would be.
Eli,
We can quibble over whether it is 0.9C or 1C, but no one, as far as I can tell, is forgetting that. Whether and when warming might reach 2C is uncertain; that will depend on a lot of things, including how much political opposition nuclear power faces. FWIW, I think average temperatures may increase by 1C from today by late this century, say 2080 to 2100.
.
The consequences of reaching 2C are extremely uncertain; the IPCC working groups 2 and 3 swim in grey literature and green hand-holding, and show just how little is clearly known about consequences…. and how silly and narrow (and green) the policies being considered are. The most credible threat is sea level rise, and that is where nutty projections do the most damage to ‘the cause’. Someone should station Rahmstorf and his ‘1 to 2 meters’ crew in Antarctica, and take away their satellite phones.
.
Just about every broad measure of human existence is much improved since 1880, so I’m guessing that 1C was insignificant if not beneficial.
.
There was a past time when transportation involved only nice green energy.
Eli
Real question: Which people have forgotten that?
I haven’t. I don’t think anyone around here had forgotten that temperatures are up distinctly from pre-industrial, and it’s about ~1 C.
Mind you, Andrew_KY is likely to just wave it away with some term like “squiggly line”, so my impression is that his view is he doesn’t care. But most of us aren’t Andrew_KY.
Brandon Gates:
.
One could say we haven’t had a major eruption since Tambora, over 200 years ago.
.
If warming is driven by radiative imbalance then temporarily suspending or reversing the imbalance should have a marked effect on the GMST trend. For example, last year’s Ocean2K paper attributed their observed cooling of the SST over the last millennium to a geologic increase in the frequency of major (Tambora-like) eruptions.
Brandon S
That’s my position.
And beyond that: with respect to policy discussions. If Brandon G wants to have them now we can have policy discussions now.
If Brandon G wants to defer discussing policy until he thinks he has good enough graph of “time to 2C” (or XC number he likes), he gets to prioritize as he sees fit and defer that sort of discussion until he is satisfied with his graph. But he ought to recognize that decision to defer discussion about what policy actions we should take is his.
But I’m not going to spend a lot of time worrying about that graph because it barely affects my view that we should act. I’m perfectly willing to state the policy preference I’m for: I want to encourage nukes. I’d like them now, or yesterday even.
The fate of nuclear power is indeed a strange issue. I agree with Lucia and, ironically, James Hansen that it’s an obvious thing to do and is probably the only technology that offers competitive costs, a perfect fit to our current infrastructure, and proven engineering. So one might ask why nuclear power has languished while we rush to wind power and solar in places like Germany where it doesn’t make sense. As SteveF says, this has to do with Green ideology and its emotional elements. Actually solving problems is not the goal for many whose deep emotional needs come first.
Well folks, look at Tol’s gremlin graph and try to figure out what the baseline is for each point. Since in effect there are only three points (Tol, Nordhaus and Hope), that should not be so hard.
Eli: “we are at 1C above 1880. People keep forgetting that.”
.
If the ground record is contaminated by 1/3 with UHI (as some like Watts presume,) and the satellite and balloon records are more representative of trend (as Christy, Spencer and Mears would claim,) then we have had perhaps as little as 0.65C warming since pre-industrial.
Eli,
Real questions: Are you trying to make a point in comment (Comment #147745) ? If yes, what point are you trying to make?
“If the ground record is contaminated by 1/3 with UHI (as some like Watts presume,) and the satellite and balloon records are more representative of trend (as Christy, Spencer and Mears would claim,) then we have had perhaps as little as 0.65C warming since pre-industrial.”
Too funny. The land is 1/3 of the global record.
Mears thinks the land record is better.
As for UHI and microsite:
They magically stopped in 2005.
Why do I say that?
From 2005 to present we have a gold standard network: CRN.
Now, take all the bad stations from 2005 to present. They should show some microsite bias or UHI bias when compared with the gold standard.
Wait for it…
Since 2005, when we have had a good number of CRN stations online, what do we find?
Yup, we find that the BAD stations match the gold standard.
Same trend.
Either:
1. The bad stations were never really that bad, or
2. UHI and microsite stopped in 2005.
Your choice.
The bottom line is this: 2C represents a practical, rational, justifiable goal.
To be sure, you could do the kind of optimization that Brandon S suggests, but it’s not required.
It’s true that UHI and micro-site have been addressed by site, sensor design and location but that doesn’t change the practices in place for the prior 150 years.
.
If I recall, Steven accepts that UHI is a scientifically measured effect, as many independent studies have confirmed [Oke, Kim, Hamdi]. In fact, to counter the interference, stations were moved from inner cities to airports. This is “adjusted” for. But then the cities expanded and airports expanded, continuing the bias (from an earlier interval) that many claim started with the first chopping of trees to make cabins.
.
The reason for CRN should have been to quantify UHI and micro-site rather than to nullify it. (Thanks Dr. Karl.) As far as the sea record, I simply do not have a lot of confidence in it before the 2003 Argo monitoring.
.
No land or sea measurements were under a scientific protocol to measure GMST until recently. Re-purposing data comes with the uncertainty of poor control.
.
NASA is Mears’ customer. A little awkward perhaps?
On my scale of existential threats to Western civilization, changing climate is not at the top of the list and that’s true of the vast majority of voters in the US for example. The history of human civilization has been one of muddling through all kinds of changes. In fact, my weather has been getting more pleasant with less snow and more sunshine with more than adequate rainfall.
Given these facts, it is very odd to me that Greens and leftists don’t go all in on adaptation and robustness as a second prong of their single minded focus on mitigation. It’s pretty likely it seems to me that like all plans to ration good stuff, most mitigation schemes are not likely to be singularly successful.
Lots of people would support rational steps such as zoning changes in coastal areas, focus on sustainability of water supplies, and working on food crop robustness through genetic changes. These are good things in their own right I would argue if you are worried about the “grandchildren.”
lucia, yup. Not having specific temperature/emission targets could get in the way of some policy options, like pricing carbon emissions or setting emission caps. I don’t think either of those is a good option, and I think they are doable without any threshold value, but… I guess some people might require a threshold value before making their recommendations.
But others of us might recommend all sorts of things without having one. I know I would. There are a bunch of things I’d recommend doing that don’t require even looking at temperatures.
Ron, that was exactly Tom’s trick. The CRN stations are designed to be optimal, with respect to location, instrumentation and operation. The USCRN design is paired, that is the USCRN stations are near USHCN stations to allow comparison of both networks.
Indeed, the Menne paper does exactly that showing that not only are the temperature anomalies from the best and worst USHCN stations identical, but that they also overlay those of the USCRN.
Although it has vanished from the net, the lamented Atmoz blog showed that the nearby CRN station anomalies exactly matched about the worst possible site you could imagine on the top of a roof at ASU.
Not quite a time machine, but close enough for all reasonable purposes.
Steve F
Eli Rabbit,
“Since global temperature change lags emissions you are being extremely optimistic.”
.
“Actually, no. There is a lag between forcing and response, of course. A plausible delay is on the order of a decade or two to see most of the response. So how far are we behind? That depends on lots of assumptions, but IMO, it is silly to suggest more than 0.5C behind. The key is that the delay in response is not many decades.”
The physical response takes a few decades or more, basically the time needed to overcome the thermal inertia of the upper oceans; the policy response also takes time. Sooner or later you are talking a century or so. And, of course, given the science fiction assumptions about carbon capture, the problem will last for several centuries.
http://www.scienceonline.org/cgi/reprint/1110252v1.pdf
Keep whistling.
Brandon Shollenberger (Comment #147737),
.
See my response on the Ban Bing is too long thread.
Ron Graf (Comment #147742),
.
See my reply on the Ban Bing is too long thread.
.
Summer.
.
Tol is an interesting fellow.
.
But what climatological assumptions litter the inputs to his economic model?
.
Humans, collectively, have experienced a far greater range of climate than CO2 will ever cause.
.
Why? Because they migrated to every climate on earth, and have done pretty well in each one.
.
Ron.
You need to show some accountability for your claims.
Let’s number them and not change topics.
“If the ground record is contaminated by 1/3 with UHI (as some like Watts presume,) and the satellite and balloon records are more representative of trend (as Christy, Spencer and Mears would claim,) then we have had perhaps as little as 0.65C warming since pre-industrial.”
1. Your first “claim” is pretty funny: IF the ground record…
Note how you protect yourself from making a claim about the record by using a conditional. But still your claim is wrong.
The claim is this: if 1/3 of the land warming is UHI, then we have only warmed 0.65.
Let’s start with that claim. How did you figure that 0.65C?
This is an open question.
Did you forget that the land is 1/3 of the total?
Next claim:
2. Satellite and balloon are more representative of trend according to Christy, Spencer and Mears.
This is false. Mears says no such thing.
So, before we get on to your next “argument”, let’s start by you being accountable for making some slips.
A) Note I do not say your slips are the result of you being dumb, or politically motivated, or in it for the gold.
B) I don’t call your slips lies or fraud, etc.
But let’s start with you owning the slips.
“Ron, that was exactly Tom’s trick. The CRN stations are designed to be optimal, with respect to location, instrumentation and operation. The USCRN design is paired, that is the USCRN stations are near USHCN stations to allow comparison of both networks.
Indeed, the Menne paper does exactly that showing that not only are the temperature anomalies from the best and worst USHCN stations identical, but that they also overlay those of the USCRN.
Although it has vanished from the net, the lamented Atmoz blog showed that the nearby CRN station anomalies exactly matched about the worst possible site you could imagine on the top of a roof at ASU.
Not quite a time machine, but close enough for all reasonable purposes.”
I often wonder if skeptics realize how devastating CRN will be to their claims of contaminated records going forward. Year after year going forward they will face the same question: what happened to the fraud? Why do bad stations match good ones?
Why don’t the horrible stations diverge from the gold standard?
The effect of Tambora did not even last five years (see CET for example)
Eli: “The effect of Tambora did not even last five years.”
.
Okay, I distinctly remember the minor volcanoes of the 20th century making large multi-year responses in the plots of most CMIP5 runs. Tambora was like 10X any one of these. The summer following the eruption it was so noticeably cold it got coined “the year without a summer.” It snowed, etc… This is a global perturbation of like 30C. And you are saying the stratosphere cleared and balance resumed in 5 years? Maybe, I wasn’t there and there were no thermometers.
.
McGregor 2015 says the eruptions from about 1200 to 1900 were primarily responsible for the ocean cooling to at least a 2000-yr minimum and possibly a 10K-yr min. His team eliminated M-Cycle and solar variance by getting a fingerprint match to volcanic using climate models. I don’t vouch for their work but it was a large multi-year, multi-authored study. I am reading Sigl et al, 2015, which seems to contradict McGregor in favor of trying to sync with the tree ring proxies that never before lined up with reconstruction based on volcanic forcing. It’s amazing what generous funding can accomplish.
.
This brings me to Thomas Karl and the CRN. I’m sure he tried his best to find UHI and micro-site, but after long, expensive and exhaustive efforts he came up finding exactly what he hoped: nothing. I’m not saying anything was crooked, but that really throws Oke, Kim, Hamdi and others under the bus. How could they be finding 3 to 4C UHI? Were they just finding what they wanted? We can’t know who is right and we don’t have a good answer that reconciles all findings.
.
This reminds me, are you for “red teams”, as suggested by the late Michael Crichton and Roger Pielke Jr., to provide built-in adversarial validation to all studies?
.
Steven, as I mentioned on ATTP, if the land records are found biased that will either invalidate the sea record or the models. But at least one would have to change, probably both.
.
If there is bias found it will be hard to celebrate knowing that the brand of western science and the USA in particular will never fully recover.
SteveF wrote: “A less than 1C increase, combined with rapidly falling global poverty, is not enough to change priorities. In the future, and in a much richer world, a change in priorities could happen. I am not suggesting that there will be a sudden change, but if reality on the ground requires action, then I suspect that action will happen.”
Great. That means the US Social Security system will be revised in the next few years, rather than closer to 2030, when the trust fund will be facing bankruptcy (and an automatic 30% cut in benefits) and the financial burden placed on my children will have become far worse. That “reality on the ground” can be understood by all (who are willing to face the facts) far more easily than AGW (:)).
Which leads to an interesting question: Why is it that the liberals are willing to “face the alleged facts” about climate change but not entitlement spending, while the conservatives generally don’t face the facts about either? I guess those rumors are true about many liberal environmentalists hating the developed societies that have been built around cheap energy.
@TE
Re climate change and our past.
http://news.nationalgeographic.com/news/2008/04/080424-humans-extinct.html
@RG
The USA? This has been researched around the globe by independent scientific centers. The idea that they are all in on the joke is beyond belief. The proof of the existence of AGW is also present in much more than the surface temperature record. Read the IPCC reports.
Frank,
No, social security will probably not be revised until the trust fund is approaching exhaustion. It could start a little sooner if tax burdens reach the point that people start voting the rascals out. But I think comparing a political problem (addiction of politicians, and voters, to unsustainable expenditures) with a physical one (GHG driven warming) does neither justice. They are very different kinds of issues, if only because one is global, and one is local. In addition, one is just a simple zero-sum political game; the other is complicated, with clear absolute human benefits from the use of fossil fuels, even with uncertain very long term consequences.
@SteveF
You appear to be bordering on this fallacy. We can’t cut down our use of fossil fuels because we benefit from them.
https://en.wikipedia.org/wiki/Appeal_to_consequences
The consequences are already being felt and experienced. Look at the cost of coping with rising sea levels, droughts, floods and wildfires.
Bugs, oh my goodness. Take a chill pill. It’s really not that bad. Real people really are working on the problem. It’s going slow. You panicking isn’t going to make it happen faster. Deep breaths. Go for a nice walk.
Sorry Ron, there were thermometers in 1815 (CET etc.) and they show a recovery by no later than 1820 so Tambora did not have a long lasting effect.
And the CRN does show that the anomaly record is not contaminated by heat island effects.
Bugs: “The consequences are already being felt and experienced. Look at the cost of coping with rising sea levels, droughts, floods and wildfires.”
.
Have YOU looked?:
http://fivethirtyeight.com/features/disasters-cost-more-than-ever-but-not-because-of-climate-change/
bugs
Uhmmm he didn’t say that. But it is true we benefit from that.
Facts are something we should take into account when discussing how much we cut use.
Neils, the take-away from your article:
.
Eli, I will remember that you support CET as a global indicator next time you want to talk about the LIA being a “local event.”
.
Speaking of not having it both ways, I am happy to accept that volcanic aerosols are overblown (no pun) but tell that to the GCM guys.
.
UHI has been quantified in studies and identified by its fingerprint of diurnal temperature range narrowing. The fact that Thomas Karl’s CRN says it does not exist can mean one of three things:
A) All those investigators over 50 years were biased in their study by pre-conceived expectation and poor peer review.
B) All of the UHI and micro-site has been eliminated by adjustments in protocol since CRN began comparison testing.
C) Karl suffers from pre-conceived expectation and poor peer review.
SteveF (Comment #147320)
May 9th, 2016 at 6:05 am
Really? Then why has the earth not warmed for two decades despite increasing GHGs? The models said it would warm … what happened to your beautiful theory?
Clearly you do not understand the nature of emergent phenomena. Far from being “passive responses”, they emerge when the temperature is warm, and cool the surface of the earth. I’m sorry, but if you think a thunderstorm spitting lightning and pouring down rain is a “passive response” to increasing temperature, then you are using a very different meaning of “passive” …
Perhaps you can explain, why the temperature of the globe did not vary more than ± 0.1% from its set point over the entire 20th century … given that the system is controlled by nothing more solid than clouds and there is no inherent reason for it to be that amazingly stable, the question is not whether there is a governing mechanism.
The only question is how that governing mechanism works. Heat engines don’t run to tolerances of ± 0.1% without a governor.
w.
“Really? Then why has the earth not warmed for two decades despite increasing GHGs? The models said it would warm … what happened to your beautiful theory?”
Actually, “the models” predicted warming GIVEN:
a) a prescribed increase in GHGs (actual forcing was less),
b) no other changes in negative forcings, and
c) an assumed zero contribution from internal variability.
The first thing you have to realize is that combining the models into a model mean is a questionable step.
“The models” is essentially you being sloppy in your thinking. There are 102 models and model variants. They can be evaluated two ways:
one by one, or as an ensemble. If you look at them one by one you will of course find some that are below observations while most are above. If you collect them as an ensemble, you have to defend why you think you can average them. Averaging them makes sense if you think they are truth centered. See James Annan on that.
Next. every model was run with a set of assumptions.
1. IF GHGS go up X
2. IF there are no additional negative forcings
3. IF natural variations sum to zero over the period of interest.
THEN
Temperature will increase X
The problem is that its not a controlled experiment. You can’t control 1-3. This is part of the reason guys like Kummer suggest running the models over with updated inputs.
Let’s see.
You can, for example, see how assumption number 2 causes issues:
http://www.colorado.edu/news/releases/2013/03/01/volcanic-aerosols-not-pollutants-tamped-down-recent-earth-warming-says-cu
Basically, the “test” executed by the models assumes no increase in negative forcing. The paper above suggests that this assumption was wrong.
Or look at #3. One reason why predicting short periods is hard is that short-term natural variability can swamp the forced signal.
http://www.climatedepot.com/2014/02/09/new-paper-finds-excuse-8-for-the-pause-in-global-warming-pacific-trade-winds/
or a combination of 1 2 and 3 being violated
http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2105.html
This is strictly a question of LOGIC not of science. The logic of design of experiments.
The CMIP5 “experiments” were designed with a set of ASSUMED inputs (for the future). They are run.
However we cannot control the test.
To recap.
The test assumes.
1. From 2005 on, forcings will evolve along a certain pathway. That includes forcings for solar, for land use, for aerosols, for GHGs.
2. There won’t be any negative forcings (i.e., volcanoes).
3. Natural variability will sum to zero.
The models are aimed at proving the FORCED response.
IF those assumptions obtain, then the model predicts a trajectory of the forced response.
Here we sit, then, after the fact. The first order of business is NOT to compare the model outputs with the observations. But everyone makes that mistake. Those who love models look for agreement. Those who hate models look for disagreement. Wrong approach. The first order of business is to check the assumptions.
1. Did the projected forcing even come close to the actual forcing?
2. What happened with volcanoes?
3. Were there specific natural variations (over the period in question) that make conclusions impossible?
This is another way of saying that you can’t really definitively test the AGW theory in the short term via GCMs. GCMs don’t make or break the theory. They have never been evidence for the theory and they are not evidence against the theory.
“Perhaps you can explain, why the temperature of the globe did not vary more than ± 0.1% from its set point over the entire 20th century … given that the system is controlled by nothing more solid than clouds and there is no inherent reason for it to be that amazingly stable, the question is not whether there is a governing mechanism.”
This is a funny argument.
1. It assumes that there is something remarkable about a stable
temperature that needs explaining.
2. It depends utterly upon the units chosen and the time span.
For example look how the percent changes if we use Kelvin.
When a metric does that you know that using percents is a bad
idea. Next, when you change the time span look how the answer changes
A) can you explain why the temperature of the globe changed
by an even more tiny percent over the last month?
B) can you explain why the temperature changed by 5%
since 1850?
C) can you explain why the temperature changed over 20%
over the last 20K years.
That tells you two things: 1) when your “surprising” thing changes by changing units, you probably are looking to explain the wrong thing (percents are always misleading); 2) when your surprising thing changes as a function of time period, you have a problem with the thing you picked. Imagine if some cyclomaniac or solar nut used short time periods.
The simple fact is we have one theory that explains the deep past and predicts the future. It’s not perfect. If you want to suggest another theory, you HAVE TO SUPPLY a quantitative explanation of the deep past and the near past and the future.
Emergent phenomena don’t work over millennia to keep the planet to within 0.1%. They don’t keep us out of ice ages and don’t cool us down from a hot house.
Steven Mosher,
You pose the question:
“B) can you explain why the temperature changed by 5%
since 1850?”
Did any temperatures change 5% since 1850, or is your point rhetorical?
bugs,
You demonstrate rather well with your false claim about increasing storms, droughts, wildfires etc. how immune to reality you climate catastrophists really are.
Steven, how long have you known that CMIP5 GCMs do not anticipate volcanic forcing events? How widely is this known on the blogs? Is this not a significant disclosure that should be made about any presented projection, considering that volcanic events over 50-100 years are nearly certain? If volcanic events are thought to be too trivial to include, why bother to adjust for them post facto? (Not rhetorical)
“The only question is how that governing mechanism works. Heat engines don’t run to tolerances of ± 0.1% without a governor.”
1. Presupposes what hasn’t been proved (the existence of a governing mechanism).
2. Presupposes that a heat engine is the correct model.
The question we need to answer is this.
1. Since 1850 the temperature of the planet has increased by 1C
(it’s stupid to turn this into a percentage, since that is a function of choosing F, C or K)
2. What is the Best explanation of this change?
“Heat engine” isn’t an explanation.
“Governor” isn’t an explanation.
“Natural change” isn’t an explanation.
An explanation starts on the LHS
ΔT = f( )
Then you fill in the right hand side.
For AGW theory its basically that the change in temperature
is a function of changes in forcing.
Until you can reduce “emergent” phenomena to a simple formalism, you basically have nothing.
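To make that concrete, here is a toy R sketch of the kind of formalism meant, the standard linear forced-response relation ΔT ≈ λ·ΔF; the sensitivity value and forcing change below are assumptions chosen purely for illustration:
ecs    <- 3.0               # assumed equilibrium sensitivity, C per doubling (illustrative only)
f2x    <- 3.7               # approximate forcing per CO2 doubling, W/m^2
lambda <- ecs / f2x         # C per W/m^2
dF     <- 2.3               # an assumed change in net forcing, W/m^2
dT     <- lambda * dF       # ~1.9 C equilibrium response for these numbers
dT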
.
Hmmm….
.
Earth temperature changes by about 4K per six months, or about 0.66K per month.
Ron, why do you persist in stating that the Urban Heat Island effect is equivalent to a change in the temperature anomalies in urban areas?
As Hansen and others have shown, this is simply not true. What is true is that the trends in periurban areas (e.g. suburbs) have been affected by increased density, but even that can be controlled for by homogenization.
OTOH, you can continue to blow smoke, which also has decreased in urban areas.
Eli, Maybe an equation would describe my thought better.
.
Anomaly(urban) – Anomaly(rural) = UHI(u) + Micro-site(u)
.
Where rural is pristine natural setting for 50+ square Kilometers
.
The question is not whether the urban and rural trend is different. It’s what the difference in trend would have been if the station surroundings and instrument had remained constant. Rural stations today are still in developing areas. Each year dozens of stations cross over the threshold classification denoting rural from urban. The change happens continuously through the urbanization of billions of people over 150 years. Tons of new concrete and asphalt are put down, swamps drained. Rarely does this reverse. All non-porous surfaces and construction add to the effect.
.
Remember, the climate change signal is nearly unnoticeable: tenths of a degree. UHI can show deltas of 4C. Tokyo has seen up to 7C.
Willis,
You say there has been no warming in the last 20 years. Here is the wood for trees composite temperature index: http://woodfortrees.org/plot/wti/from:1996/plot/wti/from:1996/trend
Sure looks like warming to me. Saying ‘there has been no warming’ does not make it true. On what basis do you claim there has been no warming since 1996?
.
By ‘inactive’ process I mean a natural physical response to a temperature change (eg the density of air at low temperature is higher than warmer air (at the same pressure), the vapor pressure of water increases with temperature, etc.). By ‘active process’ I mean a process which acts in ways to control a parameter at a specified value. For example, a temperature controller for maintaining a house at a specified temperature. Or your blood temperature remaining at ~98.6F, even when the ambient temperature varies. That is an active response.
.
None of the natural processes you talk about have a “set point”, and they do not manage to hold winter temperatures equal to summer, nor daytime temperatures equal to night. So it seems to me you refuse to actually address the argument I raised: we know there are huge seasonal and even daily responses to changes in forcing. To me that means that there is clearly a response to forcing. What makes you believe that an increase in forcing will never cause an increase in temperature?
.
By the way, I understand ’emergent phenomena’ just fine.
“Sure looks like warming to me.”
Me too. Here are the trends since April 1996:
HadCRUT 1.39 C/Cen
GISSlo 1.82 C/Cen
NOAAlo 1.64 C/Cen
UAH6.LT 0.53 C/Cen
RSS-MSU 0.55 C/Cen
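For anyone who wants to reproduce numbers like these, a minimal R sketch of the usual calculation; the data frame and column names are assumptions, and any monthly anomaly series with a decimal-year time axis would do:
# d: assumed data frame with columns time (decimal years) and anom (deg C)
fit <- lm(anom ~ time, data = subset(d, time >= 1996.25))   # OLS trend since April 1996
unname(coef(fit)["time"]) * 100                             # slope expressed in C per century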
Nick Stokes,
Those values are inflated a bit by the recent El Nino, but yes, there has been warming.
Brandon Shollenberger (Comment #147869) ,
.
.
Brandon R. Gates (Comment #147863)
.
.
Brandon R. Gates (Comment #147735)
May 14th, 2016 at 2:02 am
.
.
Same unsolicited advice still applies. Or perhaps read less tired. Whatever works best for you.
SteveF:
April 1996 looks suspiciously like a cherry pick.
Eli:
Or it just shows the effect is small enough you can’t resolve it with single station records.
“You pose the question:
“B) can you explain why the temperature changed by 5%
since 1850?”
Did any temperatures change 5% since 1850, or is your point rhetorical?”
http://berkeleyearth.lbl.gov/locations/52.24N-0.00W
The global average has moved by at least 5%:
in gross terms, from around 14C to 15C,
or more than 5%.
Take england
http://berkeleyearth.lbl.gov/regions/united-kingdom
from around 8.5 to 9.5 C
See how crazy it is to use percentages?
At some point Willis needs to drop his weak little trick of cherry picking a short time frame, calculating a low percentage, and then asserting that such a thing is only possible if there is a governor.
The temperature is the result of forcings and feedbacks.
When you see it stable, guess what? That does not logically imply a governor. It implies that NET forcings over the period are close to zero, or that the inertia in the system is high, or that feedbacks are working to mute the forcings.
The presence of “stability” does not necessitate calling a thing a heat engine or assuming that it has a governor or set point.
I’m just loving how Steve “kinetics can tell you nothing” Mosher is going to justify the term “5% temperature change” to the academic committee for his science Ph.D.
Unlike SteveF, I personally have found aspects of Mr. Eschenbach’s “governor” or “thermostat” reflections both interesting and informative. Two such aspects are the average-sea-surface-temperature’s apparent ceiling around 30 deg. Celsius and the fact that in some regimes the subsequent morning low in the tropical ocean varies inversely with the previous day’s high.
While nothing in my experience gives me any reason to believe that the disputants’ agreement on the definitions of “governor” or “passive” in this context is imminent, it may be relevant to their confidence about sensitivity values that some natural systems exhibit non-monotonic relationships between a stimulus (e.g., forcing) and a response (e.g., temperature).
For example, an increase in a skillet’s temperature (the stimulus) will usually be observed to increase the rate (the response) at which water dropped into the skillet boils away. If the temperature reaches a point at which heat transfer starts being hampered by departure from nucleate boiling, however, the reverse can occasionally be observed.
Such phenomena’s existence is what inspires my above-mentioned humility about what can be taken as “almost certain.” Specifically, a given value of short-term sensitivity may tell us less about equilibrium sensitivity than we suppose.
Joe Born,
“non-monotonic relationships”
.
Are you really suggesting an increase in forcing can cause a decrease in temperature? Or do you mean a non-linear temperature response to forcing? A non-linear response to forcing is plausible (even likely). A non-monotonic response to increasing forcing seems unphysical. By the way, the hot-surface analogy with a droplet of water seems a stretch.
I’m suggesting only what I said: a given value of short-term sensitivity may tell us less about equilibrium sensitivity than we suppose. I’m not inclined to think an increase will cause a decrease, but it’s not self-evident to me that an increase in forcing wouldn’t turn out to result in almost no long-term temperature increase.
As to the departure from nucleate boiling, no, I don’t think that this specific phenomenon will have much to do with climate responses. But effects such as the (if I remember correctly) inverse response of next-morning low to previous day’s high that Mr. Eschenbach observed give us reason to question how much we can infer from what little we have been able to observe.
Now a tangent: By “stretch” I don’t understand you to suggest that boiling exhibits no such non-monotonic phenomenon. Just in case, though, I’ll mention that engineers at a utility-boiler manufacturer I represented assured me that just such a response was real and that it was something their designs needed to take into account.
You can observe the effect yourself by flicking water droplets from your fingers onto a skillet as it heats up. At some point, instead of immediately boiling away as they do initially, the droplets will form little “marbles” and roll around, lasting much longer before they disappear.
Joe Born,
You and Willis seem to agree on many things that I find quite odd. I suspect William of Ockham would not agree with you, but there is not much more to say. A Deus.
SteveF, I have been doing more analysis of the CMIP5 4XCO2 experiments and the Andrews paper data concerning the regression of net downward TOA versus global surface temperature. As you recall, the ECS value is estimated in this method by extrapolating the straight OLS regression fitted line to the x axis where net downward TOA equals zero. The value is readily calculated from the fitted regression line by ECS = (-Intercept/slope)/2, where division by 2 is required to obtain the temperature change for 2XCO2 from the 4XCO2 CMIP5 experiment. In the case of regressing net TOA versus log(Temperature), ECS = (exp(-Int/b))/2.
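For readers who want to see the mechanics, a minimal R sketch of that extrapolation on made-up abrupt-4xCO2 series (N = net downward TOA flux, dT = surface temperature change); the synthetic data are assumptions and this is not Kenneth’s actual script, but both formulas follow the description above:
set.seed(1)
yr <- 1:150
dT <- 4.5 * (1 - exp(-yr / 30)) + rnorm(150, sd = 0.05)   # synthetic warming curve
N  <- 7.0 - 1.4 * dT            + rnorm(150, sd = 0.30)   # synthetic TOA imbalance

lin <- lm(N ~ dT)                                 # straight Gregory-style fit
ecs_lin <- (-coef(lin)[1] / coef(lin)[2]) / 2     # x-intercept, halved for 2xCO2

lg <- lm(N ~ log(dT))                             # net TOA versus log(temperature)
ecs_log <- exp(-coef(lg)[1] / coef(lg)[2]) / 2

c(linear = unname(ecs_lin), log = unname(ecs_log))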
My biggest complaint with a number of climate science papers is not with what is presented but rather what is not and that is true with my view of the Andrews paper. The paper uses an arbitrary segmentation of the regression into two lines and uses this as evidence for a non linear relationship between TOA and temperature. Extending the lesser sloping line from later years into the 4XCO2 experiment and comparing it to a line extended from the first 20 years gives the impression of some large differences in estimated ECS depending on which data points are used.
What I did was to argue (with myself) that the difference between a linear extrapolation and a non linear one should be determined by using linear and non linear regression models. (I also argued that if linear regression were to be used the segments should be determined by a more objective method.) I found that the best non linear fit was produced by doing a TOA versus log(Temperature) OLS regression. I have compared the derived ECS values and the R^2 from both regressions in the table linked in the first link listed below. I had data from previous analyses that allowed me to make a comparison and report the results for 33 CMIP5 models used in the CMIP5 experiment. I also included the ECS derivation using total least squares (TLS) regression. The 95% confidence intervals (CIs) were determined by (1) going back to the log versus time models for net downward TOA and surface temperature for each CMIP5 model and determining the best ARMA model for the residuals, (2) doing 1000 Monte Carlo simulations for the TOA and temperature data for inputting to the particular regression of interest and (3) using the distribution of 1000 regression results to estimate the CIs.
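A hedged R sketch of that CI procedure, with synthetic data standing in for a model run and an ARMA(1,1) residual model assumed for both series; the variable names and details are illustrative, not the actual analysis code:
set.seed(1)
yr   <- 1:150
temp <- 4.0 * log(yr) / log(150) + 0.15 * arima.sim(list(ar = 0.5), 150)   # synthetic temperature
toa  <- 6.0 - 1.2 * temp         + 0.30 * arima.sim(list(ar = 0.3), 150)   # synthetic net TOA

fit_T  <- lm(temp ~ log(yr))   # deterministic log(time) fits, as described above
fit_N  <- lm(toa ~ log(yr))
arma_T <- arima(residuals(fit_T), order = c(1, 0, 1), include.mean = FALSE)
arma_N <- arima(residuals(fit_N), order = c(1, 0, 1), include.mean = FALSE)

sim_series <- function(fit, arma) {     # fitted trend plus simulated ARMA noise
  n <- length(fitted(fit))
  fitted(fit) + arima.sim(list(ar = coef(arma)["ar1"], ma = coef(arma)["ma1"]),
                          n = n, sd = sqrt(arma$sigma2))
}

ecs_sim <- replicate(1000, {
  g <- lm(sim_series(fit_N, arma_N) ~ sim_series(fit_T, arma_T))
  unname((-coef(g)[1] / coef(g)[2]) / 2)   # extrapolate to net TOA = 0, halve for 2xCO2
})
quantile(ecs_sim, c(0.025, 0.975))         # ~95% CI on the derived ECS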
What is readily apparent from this comparison can be summarized in a few statements:
1. By comparing the R^2 values it is readily apparent that the fit of the linear and log models is very nearly the same.
2. The log regression as expected gives larger ECS values and the difference between linear and log is greater when regressing with OLS than with TLS.
3. The OLS and TLS regressions give nearly the same ECS values for the linear case while OLS gives a larger value than TLS in the log case.
4. The values of ECS from the individual CMIP5 models, whether derived by linear or log OLS or TLS regression, vary over a wide range and particularly so for the log regressions.
5. The spread in the ECS values between linear and log regressions increases in general where the log regression derived ECS value is greater.
6. The CIs are larger for OLS regression than TLS and the CIs as a percent of ECS values increase as the ECS values increase.
Of further interest in the analysis was to look directly at the relationships of net TOA and temperature versus time. I first attempted to fit the data for the 150 or 200 years with a regression model and found not unexpectedly that the best fit was TOA versus log (time) and Temperature versus log(time). The R^2 values are in the table linked below in the second link listed and show an excellent fit for TOA and even better for temperature. The relationship for TOA with time can be used to extrapolate the time series to TOA = 0 and determine the time to reach the ECS temperature. This time is shown in the table for each model run and the column heading: Years to net TOA=0. Now, while the spread of ECS values may be disconcerting with regards to climate modeling validity, the variation in the time to reach that ECS value has to be even more so. In the real world the question arising is what does it mean as a practical matter to see long times to reach ECS and may well be an argument for using Transient Climate Response (TCR) exclusively where ECS is now used in conjunction with TCR.
As a check on the ECS regression relationship to the properties (model parameters) of the net TOA and temperature to log(time) series, I regressed the derived ECS values versus the slopes and intercepts of the net TOA and temperature log time series. The models are shown at the bottom of the second linked table. Not unexpectedly the fit is very good and that fit allows the determination of what parameters are most important in determining the ECS. As it turns out the slope of the temperature versus log(time) and intercept of the net TOA versus log (time) regressions have by far the most influence on ECS.
My third and final analysis involved the determination of linear OLS breakpoints in the plots of net TOA versus surface temperature, the data from which the above described regressions were made, using the breakpoints function from R in library strucchange. The R^2 values for all the segments from the plots with breakpoints – when breakpoints were found – were determined and that result along with the year at which the breakpoint occurred and the ECS value determined from the extrapolation of the fitted regression line of the segment to TOA=0 are reported in the table in the third link listed below.
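A hedged R sketch of that breakpoint step: N and dT here are assumed vectors of annual net downward TOA and surface temperature change from a single abrupt-4xCO2 run, not the actual data, and the segment handling shown is just one way to do it:
library(strucchange)

bp <- breakpoints(N ~ dT)       # OLS breakpoints in the N-versus-temperature relationship
summary(bp)                     # BIC-selected number and location of breaks

seg  <- breakfactor(bp)                              # segment label for each year
idx  <- seg == levels(seg)[nlevels(seg)]             # points in the final segment
last <- lm(N[idx] ~ dT[idx])                         # refit the final segment
ecs_last <- (-coef(last)[1] / coef(last)[2]) / 2     # extrapolate to N = 0, halve for 2xCO2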
Interestingly, as can be readily seen in the table, the first breakpoints occur at a wide range of year values and some models had multiple breakpoints. If the ECS value determined by final segment is used in a comparison to the ECS value determined without breakpoints from linear OLS regression by subtraction of the later by the former, the results can be grouped into three groups with 10 models having 0 or less difference, 12 models differing by more than 0 but not more than 0.2 and 11 models differing by more than 0.2. The first segment always has a much better fit to a straight line (and a lower ECS) than did the subsequent breakpoint point segments as determined by comparison of R^2 values.
My analyses here were intended to show detailed differences between the CMIP5 models in the performance and results of the regression method of estimating ECS, and not to judge the validity of the derived ECS values or of the regression method itself. The Andrews paper does go into detail in attempting to find the feedback parameter(s) that most affect the non-linearity claimed for the estimation of ECS; while the authors see the biggest factor creating non-linearity as short wave cloud feedback, they also see non-linearity effects from long wave clear sky and cloud effects and from short wave clear sky. They also claim that the evolving patterns of surface temperature change for 2 CMIP5 models under the 4XCO2 experiment lead directly to the non-linearity measured and can account for most of it.
I judge that the paper fails to address the large differences in results among the CMIP5 models. I would also criticize the paper for comparing an arbitrary first-20-years slope to the slope of the remainder of the regression and, in effect, exaggerating differences in feedback slopes by using two regions of the plots that would not be used for extrapolation to obtain an ECS estimate. I would further criticize the paper for neither attempting an analysis using non-linear regressions nor objectively finding and using breakpoints for a linear approach.
http://imagizer.imageshack.us/v2/1600x1200q90/921/XbvAZR.png
http://imagizer.imageshack.us/v2/1600x1200q90/924/4DCtmd.png
http://imagizer.imageshack.us/v2/1600x1200q90/924/MMJyY2.png
Frank (Comment #147817)
Frank, you draw an analogy from SS and Medicare to AGW mitigation that I often reference myself. Lots of contradictions there, but certainly not unexpected when it involves politicians.
In the case of SS and Medicare, doing something about it means admitting to a government failure. In the case of AGW mitigation it means bigger and more intrusive government. Given those observations it is not too difficult to determine where the big-government politician will come down.
We should be clear that the trust fund holds only non-transferable government bonds written by the government on itself. Payments to and from the “trust” fund go into and out of the general fund, and thus a time of reckoning begins when the funds going out of the general fund exceed those going in. That situation means either more debt or an increase in taxes to replenish the general fund. Funds coming out of the so-called trust fund accounting will be greater than those going in much sooner than the date when the fund goes bust completely.
Ron Graf (Comment #147710),
.
Ed Hawkins commented to me that CMIP5 does not forecast volcanoes.
I believe the forcing used in the CMIP5 RCP scenarios includes an annual adjustment for volcanic effects,
‘Ed Hawkins commented to me that CMIP5 does not forecast volcanoes.
I believe the forcing used in the CMIP5 RCP scenarios includes an annual adjustment for volcanic effects,”
Not that I know of.
In fact it’s not in the forcing files.
There are also differences in how future solar is handled.
Plenty of people do run CMIP5 data with volcanic eruptions. In fact, there is even something called VolMIP.
And just so you know, there are a couple of GCMs that don’t use any volcanic forcing (at least in AR4 there were).
‘ Rural stations today are still in developing areas. Each year dozens of stations cross over the threshold classification denoting rural from urban. The change happens continuously through the urbanization of billions of people over 150 years. Tons of new concrete and asphalt are put down — swamps drained. Rarely does this reverse. All non-porous surfaces and construction add to the effect.”
Err no.
Nick Stokes (Comment #148020)
“Plenty of people do run CMIP5 data with volcanic eruptions.”
–
and?
–
Kenneth Fritsch (Comment #147999)
“5. The spread in the ECS values between linear and log regressions increases in general where the log regression derived ECS value is greater.”
–
not needed.
like Nick’s comment.
redundant.
–
Would prefer Nick to comment on your good work since he does the same sort of thing rather than going off on a tangent.
What about it, Nick?
A possibility of lower CS anywhere?
‘Remember the climate change signal is un-noticiable, tenths of a degree. UHI can show deltas of 4C. Tokyo has seen up to 7C.”
Yes.
Raw data for Tokyo: 3C per century. 13 million people.
After comparison with neighbors:
less than 1C per century
http://berkeleyearth.lbl.gov/stations/156164
Nearby Kumagaya 200K people
raw data 1.4C per century
after 1.1C
http://berkeleyearth.lbl.gov/stations/156178
MAEBASHI
300K people
2C before
1.1C after
http://berkeleyearth.lbl.gov/stations/156183
Nick Stokes: “Plenty of people do run CMIP5 data with volcanic eruptions. In fact, there is even something called VolMip.”
.
I know. Held used it to infer TCR by using response to volcanic simulations. I still can’t see how the aerosol response in the model is validated to real life. It could be GIGO, right?
.
For policy makers, I bet they use the “realizations” with RCP8.5 and call it business as usual. At least that’s what I see reported in the news. I’m glad I know that RCP8.5 is a debatably implausible worst case. But policy makers and Joe Public just suspect or blindly believe, depending on predisposition. Wouldn’t it be nice if scientists on all sides agreed to refrain from skewing communication and instead corrected misleading MSM reports and CS celebrities? Yes it would.
.
Does RCP8.5 have volcanoes? Let me guess.
Steven Mosher, I have several comments to you at ATTP in moderation. I’m guessing they were too long despite my chopping it into thirds. If they get deleted by Willard I’ll post them here in the morning.
-ron
Ron Graf
“Does RCP8.5 have volcanoes?”
No, it doesn’t. The RCPs are scenarios of anthropogenic forcing. They are used by models which take the responsibility of representing the Earth’s normal state, including volcanoes. The scenarios are what is added.
Nick
“The RCPs are scenarios of anthropogenic forcing. They are used by models which take the responsibility of representing the Earth’s normal state, including volcanoes.”
“‘Does RCP8.5 have volcanoes?’ No, it doesn’t.”
–
So it does not represent earth’s normal state as it does not include or have provision for including volcanoes?
Hence it is unrealistic.
Well would have been 1 way of cooling them down, if we had only had a volcano!
Guess that means they are running hot.
angech,
In a sense, that is the point. Climate models can’t really predict volcanoes. As this paper points out, if you update the forcings, the model-observation discrepancy gets smaller; updating the forcings and properly aligning with internal variability may explain almost all of this discrepancy. Over the coming century, we could increase anthropogenic forcings by a few W/m^2 (maybe even more than a few). The change in natural forcings (solar, volcanoes) is probably going to be much smaller than that (see here, or the changes over the last 100 years or so), so even if we don’t include volcanoes in the projections, it’s very unlikely that they would make a big difference on multi-decade timescales.
and Then There’s Physics (Comment #148035)
” Climate models can’t really predict volcanoes.”
–
I agree most reasonable people know that, and it is one reason why I would appreciate quasi or hybrid updatable models which readjust settings to the actual observations each year, or to major changes in forcing as they occur.
–
Volcanoes do pose the risk of a big difference on multi-decade timescales if they themselves are big enough. Hopefully the next big one in a thousand year event does not happen in our particular lifetime. Smaller ones like Krakatoa would still be interesting enough.
–
Natural forcing from an increase in volcanic activity lasting 30 or 40 years is quite possible and could be a large change in natural forcings, just like an increase in cloud cover or an increase in precipitation. That such changes are unlikely to be on the scale of CO2 forcing does not rule out such combinations and effects.
–
On the pertinent issue of inclusion in models, a small constant adjustment should be present in all models. Perhaps you could do one with and without volcano included?
angech,
Indeed, how much we warm in the next century could be quite different to the projections if we were to suddenly have a period of very enhanced volcanic activity, or a sudden major asteroid strike. I don’t quite see how this, however, should play a big role in our decision making as neither seems all that likely (or things that we should really be hoping for). If volcanic activity continues for the next hundred years in a manner similar to what it has for the last 100 years, the impact will probably be small relative to the potential anthropogenic influence.
True
I believe the volcanic forcing adjustment in the RCP scenarios is made by an adjustment to the solar forcing. That is for the future part of the scenarios. I had this discussion with Nic Lewis at CA but could not locate the posts.
Experiments, model configurations and forcings for CMIP5 – LMD
The best I can do to show that volcanic forcing is accounted for on an annual basis for future RCP scenarios is this statement from CMIP5 forcing article noted in my post above. If I find a link I’ll post it.
“For the future scenarios, the volcanic forcing is assumed to be constant, i.e. a constant volcanic eruption produces a constant radiative forcing $F_v = \bar{F_v}$. This explains the jump of $F$ between 2005 and 2006 (Fig. 4, continuous line); in 2005 there is almost no volcanic aerosols, as observed, whereas in 2006 a constant volcanic eruption takes place that produces a constant radiative forcing.”
“Rural stations today are still in developing areas. Each year dozens of stations cross over the threshold classification denoting rural from urban. The change happens continuously through the urbanization of billions of people over 150 years. Tons of new concrete and asphalt are put down — swamps drained. Rarely does this reverse. All non-porous surfaces and construction add to the effect.”
Here, let Eli google that for you
http://lmgtfy.com/?q=emptying+of+the+countryside
I need to add the following comments to my above post analyzing the Andrews paper:
1. A log curve derived from the Andrews paper regression data of net downward TOA radiation versus global surface temperature will in a number of cases have at least one breakpoint and sometimes multiple breaks.
2. The earlier years of the regression data plot will fit a straight line better than a log while the log fit is better for the later years.
3. Where segments from breakpoints fit a straight-line regression well, an abrupt regime change is implied at or very near the break date. Such abrupt breaks are unnatural and tend to be caused by artifacts. Only a reasonable understanding, from independent evidence, of a natural cause that could produce such an abrupt change would counter the artifact argument. The fact that the break dates and the number of breaks for the CMIP5 models in the 4XCO2 experiment vary over such a large range makes that accounting difficult to impossible without acknowledging that many of the models are simply wrong.
4. Interestingly, the Andrews paper and others by some of the same authors, including Gregory, who is credited with the regression method discussed in Andrews and in my post, more than hint at being more favorable to using TCR rather than ECS as the measure of climate temperature potential with regard to AGW.
“So it does not represent earth’s normal state as it does not include or have provision for including volcanoes?”
No, RCP’s don’t represent Earth’s normal state. They represent some committee’s view on the likely progress of anthropogenic forcing, which is used for a cooperative program. Neither that committee nor GCM’s have any expert advice to offer on future volcanic eruptions. GCM designers have to figure out how to model all sorts of non-anthropogenic real world effects, including some allowance for likely volcanoes. That is external to RCPs.
Steve,
RE: My question and your explanation:
Thanks.
In a way thinking in %’s is possibly interesting:
%’s can illustrate just how dynamic the weather/climate system is.
– and how tolerant the ecosystem is to changes.
I ran across this interesting essay which reviews a paper that seems relevant to some of the issues you bring up from time to time:
https://rclutz.wordpress.com/2016/05/17/data-vs-models-4-climates-changing/
I found a link to the explanation of the annual volcanic forcing adjustment for the future RCP scenarios in CMIP5. It is well hidden information and thus I am not surprised that laypersons are unaware of this adjustment. I am surprised that Ed Hawkins would not be aware of it.
http://link.springer.com/article/10.1007/s00382-012-1636-1
Under the heading Experiments, model configurations and forcings for CMIP5 and under the sub heading Solar irradiance and volcanic aerosols we have:
“The volcanic radiative forcing is accounted for by an additional change to the solar constant. For the historical period, the aerosol optical depth of volcanic aerosol is an updated version of Sato et al. (1993, http://data.giss.nasa.gov/modelforce/strataer) …
… For the future scenarios, the volcanic forcing is assumed to be constant, i.e. a constant volcanic eruption produces a constant radiative forcing $F_v = \bar{F_v}$. This explains the jump of F between 2005 and 2006 (Fig. 4, continuous line); in 2005 there is almost no volcanic aerosols, as observed, whereas in 2006 a constant volcanic eruption takes place that produces a constant radiative forcing.”
Kenneth Fritsch (Comment #148105)
“The volcanic radiative forcing is accounted for by an additional change to the solar constant.”
I wonder what the change to the solar constant is to account for positive CO2 feedbacks and ECS.
Though Lucia and others insist ECS is an emergent property of models I fully expect it to be an insert.
Perhaps it can be found in the model wash as an invariant.
and Then There’s Physics (Comment #148035)
May 18th, 2016 at 2:08 am
“As this paper points out, if you update the forcings, the model-observation discrepancy gets smaller.”
The Schmidt et al paper that you cite is a cherry pick that only considers a few forcings. The much more thorough Outten et al 2015 paper, which examines all forcings, finds that actual post 2005 best-estimate forcings are virtually identical to those assumed by CMIP5 models.
(http://onlinelibrary.wiley.com/doi/10.1002/2015JD023859/full – open access)
“The average value, Fv, of this forcing over the period 1860-2000 is −0.25 Wm−2”
I would suppose it is derived from the average value noted in the link I gave above. Too bad Nic Lewis is not reading at this thread as I think he would have the value at the tip of his tongue. It would make sense that the historic values are used since it allows a continuity between the historical and future portions of the RCP CMIP5 scenarios without the ability to predict volcanic occurrences.
Kenneth Fritsch (Comment #148113)
May 19th, 2016 at 11:44 am wrote
‘ “The average value, Fv, of this forcing over the period 1860-2000 is −0.25 Wm−2”
I would suppose it is derived from the average value noted in the link I gave above. Too bad Nic Lewis is not reading at this thread as I think he would have the value at the tip of his tongue.’
The RCP forcing datasets (at http://www.pik-potsdam.de/~mmalte/rcps/index.htm#Download) assume an average volcanic forcing of -0.23 W/m2, and are offset by that value. So a year with no volcanic aerosol at all will register a forcing of +0.23 W/m2.
The 0.23 W/m2 value has been scaled down by some 30% from the forcing implied by estimated historical average aerosol optical depth, to reflect the fact that studies using simple models have consistently found volcanic forcing to have a low efficacy. Based on the AOD-to-forcing multiplier used in AR5 (25x), the required scaling down is around 45%. See discussion of this point in Lewis and Curry 2014 (https://niclewis.files.wordpress.com/2014/09/lewiscurry_ar5-energy-budget-climate-sensitivity_clim-dyn2014_accepted-reformatted-edited.pdf). A recent AOGCM based study (Gregory et al 2016) has found that volcanic RF has an efficacy of ~0.5, in line with findings using simple models.
niclewis,
“The RCP forcing datasets …”
The RCP folks seem to be firm that these are not their datasets. They say, eg in RCP45_MIDYEAR_RADFORCING.DAT:
“NOTE: THIS FORCING DATASET IS NOT CMIP5 RECOMMENDATION, AS CONCENTRATIONS, NOT FORCING, SHALL BE PRESCRIBED IN MAIN CMIP5 RUNS. SEE OZONE, LANDUSE AND AEROSOL DATA SOURCES DESCRIBED IN CMIP5 NOTES AND RCP DATABASE.”
Nic,
This follow-up paper is quite interesting.
Thanks Nic Lewis for explaining the volcanic forcing. I recalled a forcing of -0.15 or –0.16 W/m2 and now I see how that is derived from the historical value by taking efficacy into account.
Brandon R. Gates (Comment #147357)
May 9th, 2016 at 5:07 pm
Here’s an anecdote about that. In going over the GISS climate model computer code (FORTRAN no less) some years ago, I found out how they dealt with the fact that models don’t conserve energy.
Models are gross digital approximations of continuous reality. After each timestep (typically a half-hour), it is inevitable that they will not end up exactly in balance. I happened by chance on the GISS code that specified what they did with the imbalance.
It turns out that if the model ends up with a lack of energy, it is sprinkled evenly over the entire planet like magical pixie dust to bring it back up. And if there’s too much energy at the end of a timestep, it’s vacuumed up evenly over the entire planet.
OK, for small amounts of an imbalance, that’s a reasonable solution. But it brought up a question, to wit “What does the record of those cycle-by-cycle imbalances in energy conservation look like?”
This was back before the emergence of climate blogs. I was on a climate science mailing list. So on the list I asked Gavin Schmidt, who is one of the GISS modelers, how big the amount redistributed was … after much hemming and hawing he said that they did not keep a record of the amount.
I was stunned! If I was running such a model, at a minimum I’d put a Murphy gauge on the redistributed energy so if it exceeded some level it would sound an alarm.
More to the point, however, I’d use the energy imbalance. It is an irreplaceable diagnostic tool to track down the source of the imbalance, so I’d use it to see if I couldn’t fix the problem rather than just keep shoveling …
There will always be some imbalance in a digitized gridcell model, no way around it. But we can at least monitor and use the imbalance.
w.
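A minimal sketch of the bookkeeping Willis describes above (the actual GISS code is Fortran; this is only an illustration with hypothetical names, and it ignores grid-cell area weighting), including the per-step logging and the alarm he says he would add:

enforce_global_balance <- function(energy, target_total, alarm = 1e-3) {
  # 'energy' is a hypothetical vector of per-cell energy after a timestep;
  # 'target_total' is what the global total should be if energy were conserved.
  imbalance <- target_total - sum(energy)            # deficit (+) or surplus (-)
  if (abs(imbalance) > alarm)                        # the "Murphy gauge"
    warning(sprintf("per-step imbalance %.3e exceeds alarm level", imbalance))
  list(energy    = energy + imbalance / length(energy),   # sprinkle/vacuum evenly
       imbalance = imbalance)                        # keep the record for diagnosis
}

Keeping the returned imbalance series is the point of the exercise: it is the diagnostic Willis says was not being recorded.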
“If I was running such a model, at a minimum I’d put a Murphy gauge on the redistributed energy so if it exceeded some level”
There’s nothing special about this step that would require monitoring. PDEs like those in GCMs are based on conservation of momentum, mass and energy. You write a set of discrete equations which you think will best do that over the timestep. There is in any case a lot of approximation about the discrete representation of continuous quantities.
An explicit method will simply predict the values that are expected to best fulfil the conservation requirements. GCM methods are mostly of that kind. Implicit methods, generally regarded as better but time- and resource-consuming, solve equations among the unknown new values to ensure that they more completely satisfy the conservation principles, at least the approximate version represented in the discrete equations. In ordinary differential equations especially, there is a somewhat hybrid category called predictor-corrector, where you review an explicit step and make an improvement.
Conservation of energy implies global conservation, and so if at the end of a timestep you see a discrepancy there, it is very reasonable to modify the new numbers so that they better satisfy that particular aspect of satisfying conservation. It isn’t different in principle to the other numerical steps that are undertaken. PDE methods tend to be overly diffusive anyway – this adds a small amount of extra diffusion.
Steven Mosher (Comment #147841)
May 15th, 2016 at 12:22 pm
Let’s see where that logic leads. Following your idea that if a metric changes depending on the units chosen it is a “bad idea”, consider the following formula:
W = sigma epsilon T^4
This is the famous Stefan-Boltzmann equation relating radiation W to temperature T. Look how the answer changes if we use Celsius. Does that mean that the S-B equation is a bad idea?
The problem is that Celsius and Fahrenheit are both anomaly scales, with an arbitrary zero point. That means that you cannot use them for most physical calculations, such as the S-B equation.
And as is well known, you also cannot use anomalies for percentages.
On the other hand you can use absolute scales for percentage calculations. Such scales include radiation in watts/m^2 and temperatures in Kelvin.
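A quick numerical check of this point about anomaly scales (a sketch; emissivity is set to 1 purely for illustration):

sigma <- 5.67e-8          # W m^-2 K^-4
eps   <- 1                # emissivity, taken as 1 for the illustration
T_K   <- 288              # ~15 C expressed in Kelvin
T_C   <- 15               # the same temperature on the Celsius (anomaly-zero) scale

sigma * eps * T_K^4       # ~390 W/m^2, a sensible surface emission
sigma * eps * T_C^4       # ~0.003 W/m^2, nonsense: the zero point is arbitrary

# Likewise a 1 K change is ~0.35% of 288 K but ~6.7% of 15 "degrees C",
# which is why percentages require an absolute scale.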
Next, does such a stable temperature of ± 0.1% need explaining? Yes, and my argument has two parts: model difficulties, and large swings.
By model difficulties I mean that it is very hard to get a climate model to be stable without all kinds of props and kludges … and even then many models “drift” during the run-in period when no inputs are changing. Many, many early climate model runs either spiraled into ice ages or overheated. There is nothing inherently stable about a system ruled by things as ethereal as wind and clouds and water in its many transformations.
By large swings, I mean that on a daily and monthly basis both individual areas and large regions of the planet swing ten or twenty degrees or more. Nor are such swings identical each cycle. Some areas experience years of drought and high temperatures, followed by years of milder temperatures. Other areas run cold for a while and then hot. Drives farmers nuts.
I’d expect the moon to have a stable temperature. There, at any given location nothing changes. No clouds. No cold spells. But on the earth, which is running some 30° or so above its S-B predicted temperature and using a widely varying amount of input energy due to albedo variations, with clouds one day and no clouds the next?
There is no reason at all to expect the temperature of such a system to be stable within ± 0.1% over a century.
Gotta say, amigo, you are reaching hard to try to show I’m wrong.
w.
Willis Eschenbach (Comment #148205)
I’d expect the moon to have a stable temperature. There, at any given location nothing changes. No clouds. No cold spells. But on the earth, which is running some 30° or so above its S-B predicted temperature and using a widely varying amount of input energy due to albedo variations, with clouds one day and no clouds the next?
-Not having a go at you .
Not entirely happy with “the moon has a stable temperature at any given location.”
–
reasons include variable distance from the sun over the course of an earth year and 28 day orbital period around the earth.
Temp can vary wildly over 28 days at most spots.
Eclipses.
Earthshine variability with albedo earth changes.
–
–
I’d expect the moon to have an overall stable temperature range dependent on these variables.
–
Note the earth and the moon as a paired system share the same energy input per square meter.
Comparing the earth’s temperature variations against the moon as a baseline should be a priority in order to work out natural variability and the CO2 effect on the earth against a known palimpsest.
Steven Mosher (Comment #147839)
Every model was run with a set of assumptions.
1. IF GHGS go up X
2. IF there are no additional negative forcings
3. IF natural variations sum to zero over the period of interest.
THEN Temperature will increase X.
–
I presume additional negative forcings are part of natural variations so really do not need to be mentioned.
Hence 1. IF GHGS go up X
2. IF natural variations sum to zero over the period of interest.
THEN Temperature will increase X.
–
Strange that you made no comment at the time on ECS being inputted into models rather than being an emergent phenomenon of running climate models as Lucia claimed.
–
We know the temp did not increase X. We assume natural variations of the explorable type summed to zero. [No-one said the sun got colder, did they?]
Hence there must have been additional negative forcings or negative feedbacks at work to explain the pause?
Can somebody explain how land use forcing can possibly be negative?
A paved road injects 200+ W/m2 IR (peak) more into the air above it than a grass field. Further, the sand 2 inches under an asphalt road is 20+°C above ambient at the end of the day. The dirt in a grass field or forest 4-6 inches deep at the end of the day is at ambient temperature.
A dirt field (as opposed to grass or forest) is almost as bad as asphalt.
“1. Climate models may not exactly conserve energy.”
If you are interested, the data for the CMIP5 models are available to test and determine how well the models conserve energy. Like most properties of these models, the range for energy conservation is wide.
From the net TOA downward radiation = rsdt - (rsut + rlut), the global potential sea water temperature, and reasonable assumptions about the fraction of heat going into the oceans, how well a model conserves energy can be determined.
http://rankexploits.com/musings/2014/new-open-thread/#comment-134894
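A rough R sketch of the check described just above, assuming annual global means of the CMIP5 fields rsdt, rsut and rlut (W/m^2) plus the model's own global-mean ocean potential temperature; the stand-in values, the 93% ocean fraction and the physical constants are ballpark assumptions, not part of any particular model:

# Stand-in annual global means (W/m^2); replace with a model's output
rsdt <- rep(340.2, 150)
rsut <- 99.5 + rnorm(150, sd = 0.3)
rlut <- 239.5 + rnorm(150, sd = 0.3)

net_toa <- rsdt - (rsut + rlut)            # net downward TOA flux, W/m^2

ocean_frac <- 0.93                         # rough share of heat going into the ocean
ocean_mass <- 1.4e21                       # kg, approximate ocean mass
cp_sw      <- 3990                         # J kg^-1 K^-1, seawater heat capacity
earth_area <- 5.1e14                       # m^2
sec_per_yr <- 3.156e7

# Ocean warming implied by the accumulated TOA imbalance; compare this with
# the model's own global-mean ocean potential temperature. A drift between
# the two indicates imperfect energy conservation (leakage or creation).
dT_implied <- cumsum(ocean_frac * net_toa * earth_area * sec_per_yr) /
              (ocean_mass * cp_sw)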
I recall Hansen writing about land cleared for agriculture. Albedo tends to fall as biomass increases (forests are darker than cropland), so if forested land is cleared for crops, the surface albedo should rise. I have no idea, and maybe nobody does, as to what the relative budgets are in the real world, including the increasing amount of impermeable land surface, but there’s one thing.
Because it involves evaporative cooling, irrigation is a negative forcing. In a warmer drier climate, you can expect the amount of irrigation-driven forcing to increase. Of course, there are other indirect effects associated with irrigation, such as the increase in atmospheric humidity, change in land cover, etc that may be larger in magnitude.
See for example here:
Irrigation as an historical climate forcing
I believe cropland use and resulting change in albedo is the largest contributor to a negative forcing for land use.
Kenneth Fritsch,
Thanks for all the effort you put into your work and subsequent comments on the Andrews et al paper. One thing you may (or may not) have already done, which I believe would make your critique of Andrews et al clearer and more convincing, is to produce the same type of graphic as Andrews et al used (model TOA imbalance against modeled surface temperature increase). It seems like your tables of data are showing that the Andrews et al ‘story’ of the models is something of a cherry pick, and not a fair representation of the model ensemble. If you have comparable graphics, could you post them somewhere? I think those graphs would also help to show just how much difference there actually is between empirical estimates of equilibrium sensitivity and model estimates of equilibrium sensitivity, and over what time frame the two separate.
SteveF, I will use Dropbox and an Excel file to show the corresponding graphics for the regression analyses I did on the Andrews paper. I recently upgraded to Windows 10 and some of my programs have to be upgraded to be compatible – thus it might take a little time to complete this task.
Actually, Steve, Table 1 in the Andrews paper linked below shows the feedback parameters for 28 CMIP5 models, and from that you can get an idea of the model-to-model variation. Figure 2 in that paper shows a graph that, read up and down and across, gives an idea of the spread in ECS values obtained from using years 1-20 and 21-150, respectively. The spread in ECS values using years 1-20 is 2, while for years 21-150 it is 4. It also shows that the models with the higher ECS values have the largest differences between the values obtained using years 1-20 and 21-150.
http://centaur.reading.ac.uk/38318/8/jcli-d-14-00545%252E1.pdf
SteveF, the link below is to an Excel file in Dropbox with all the pertinent plots from my analysis of the Andrews paper linked above.
The plots illustrate well the wide range of CMIP5 model results. I think it is past time for the scientists working in the climate modeling field, and the scientists using the modeling data, to seriously consider narrowing the models down to those which at least have a potential for representing an earth-like climate. Treating an ensemble of model results as a meaningful entity, other than to point to the spread as a negative aspect of the overall modeling effort, instead of looking in detail at individual model results, inhibits the needed narrowing.
https://www.dropbox.com/s/6wjbotf8pc7p1gr/BB_Post_ECS_Andrews_Analysis.xlsx?dl=0
PA: “Can somebody explain how land use forcing can possibly be negative?”
This might help. Parker 2011:
.
I find the land sentence significant because it is warming accepted into the land record not from GHG.
.
(12) – Jones PD, Lister DH, Li Q. Urbanization effects in large-scale temperature records, with an emphasis on China. J Geophys Res 2008.
.
(25) -Hansen J, Ruedy R, Sato M, Imhoff M, Lawrence W, et al. A closer look at United States and global surface temperature change. J Geophys Res 2001.
.
(30) Forster P, Ramaswamy V, Artaxo P, Berntsen T, Betts R, et al. Changes in atmospheric constituents and in radiative forcing. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, et al., eds. Climate Change 2007.
Kenneth,
Thanks for the graphics. Jumpin’ Jesus, you did a lot of work! I think the linear fit is the best for comparison with Andrews et al. You are right, there is a huge range of model sensitivity. There is also a huge range of model variability (year-to-year), and a huge range of ‘non-linearity’ in the TOA vs temperature graphs. I will have more comments later.
SteveF, what is most evident in looking at all the individual models results is that:
(1) The time for the net downward TOA radiation to go to 0 (equilibrium) is strongly correlated to the ECS value derived by assuming slope changes over time of the regression line of Net TOA versus global surface temperature with a correlation of 0.81 using either OLS or TLS regression.
(2) The ratio of the log derived ECS and the linear derived ECS, which gives a measure of the slope change, is strongly correlated with the ECS value derived by assuming slope changes over time with a correlation of 0.90 using either OLS or TLS regression.
If the derived ECS values for the CMIP5 models were closer to the empirical values for the earth found by a number of authors, most of what the Andrews paper reports about the non-linearity issue would disappear. In other words, most of the paper rests on the models having higher ECS values, even though that is not discussed in the paper.
I am not sure what exactly these correlations portend for a fast-changing climate going to equilibrium after the initial 4X CO2 burst in the CMIP5 experiment, nor do I see how it means much for the real world. Certainly the models give a wide array of responses. I think even the authors of the Andrews paper prefer using something closer to reality, like what the Transient Climate Response represents, when they comment in closing the paper that:
“The equilibrium climate sensitivity is useful as information on committed warming, but it is not the most useful concept for quantifying and comparing climate responses to time-dependent forcing. Improved physical understanding will come from shifting focus onto transient forcings, feedbacks, and ocean heat uptake, which jointly determine the rate and magnitude of time dependent climate change.”
Eli: Sorry Ron, there were thermometers in 1815 (CET etc.) and they show a recovery by no later than 1820 so Tambora did not have a long lasting effect.
Carrick Or it just shows the effect is small enough you can’t resolve it with single station records.
Small enough means not much of an effect globally which trashes Ron’s argument.
Eli can live with that
Eli, I am just pointing out the findings of the OCEAN2k McGregor (2015) team. They claimed the volcanic aerosol forcing fingerprint was confirmed by a CMIP5 GCM to have had a long-lasting and cumulative effect, lowering SST by 0.7C from 1100-1700 CE.
.
Eli, you could be right that McGregor is garbage. Ocean currents, solar variation, solar magnetism variation or some other as-yet-unknown phenomenon could be the cause of millennial variability. I feel the GCMs are being used to lend legitimacy to theories for which there is no evidence.
.
There is no dispute Tambora was over 3X larger than anything else in the 201 years since.
There is also no dispute that the effect of Tambora was gone in 5-10 years at most and that what OCEAN 2K found was the result of many large volcanoes.
You are getting tiresome
Nick writes
It matters a great deal if you’re trying to measure energy accumulation of the planet. In fact its all that matters.
Nick also wrote
That’s not how I’d interpret those notes. I’d interpret them to mean that the forcings aren’t used directly and instead all the components making up the forcings should be used in the models.
That makes the forcings in RADFORCINGS simply a convenience to see what they are when combined, not that they’re incorrect.
Eli: “You are getting tiresome”
.
Eli first made a claim: “The effect of Tambora did not even last five years.”
.
Now, after getting tired, his claim is modified to:
.
So, is Eli’s point that volcanic forcing is way overblown? Does that make the CMIP5 models way too responsive to minor volcanoes like Mt Agung (1963), El Chichon (1982) and Mt. Pinatubo (1991)? How many years is the effective half-life of these volcanoes? Do all volcanic eruption aerosol plumes have the same half-life? Even if this were true, it has little bearing on the fact that if ocean temperatures are dropped significantly, that deflection will carry the climate effect on for decades after the event. This was McGregor’s (very obvious) point.
In order to show the large variability in the CMIP5 models non linearity in the OLS linear regression of TOA net downward radiation (Rad) versus global surface temperature (GST), I have linked below a table that shows (1) the years to the first breakpoint or the length of the series if no breakpoint was determined, (2) the change in the GST at the first breakpoint or end of series, (3) the change in the Rad at the first breakpoint or at the end of the series, (4) the total change in GST at Rad=0 which is 2XECS and (5) the total change in Rad at Rad=0 which is the regression intercept. I have become sensitive to some climate science authors neglecting to show and emphasize differences in climate model results and how those differences can affect the uncertainty of the general result. I judge that to be true for the Andrews paper authors.
While the CMIP5 experiment of an abrupt 4XCO2 burst and the resulting changes in global surface temperatures has no counterpart in the real world, and thus its interpretation in terms of the real world is difficult, it can show differences among CMIP5 models, and more detailed analyses can give some clues to why we see such large differences. First of all, the abrupt change, as far as non-linear effects go, is grounded in temperature changes and time. The global conditions most frequently considered to lead to non-linearity between GST and Rad are changes in albedo, due to melting sea ice, glaciers and decreasing snow cover, and possible large regional and local differences in temperature increases, which in turn can affect cloud feedback. These conditions should more readily occur over longer time periods and with increased warming.
We see from the first column in the table that an objective estimation of non-linearity, given by breakpoint determination, shows that the average time to the first break (or to the end of the series) is 61 years, much different from the 20 years selected in the Andrews paper for their analysis, as is the median of 38 years. The standard deviation of 55 years shows the large differences in the CMIP5 models’ non-linearity response to the abrupt 4XCO2 change. Actually, the first breakpoint or years of linearity are bounded more tightly in the tabled results than is really the case: the lower limit of 10 years is determined by the minimum segment length used in determining the breaks, and the upper limit by the length of the series where no break was found.
The second and third columns show the GST and Rad changes that are at the point of the longest linear time period that can be determined from the data. Looking at all the model results it becomes apparent that a number of models show little tendency towards non linearity over longer time periods and with relatively large increases in GST, while a number of others show non linear behavior after short time periods and relatively smaller GST increases. That contrast in these results puts these models in direct opposition as to what is required to show non linearity.
Columns 4 and 5 give the total changes in GST and Rad, respectively, that occur at Rad=0 or equilibrium. Those column values are listed to show how close the values in columns 2 and 3 are to equilibrium and still retain linearity.
Instead of concentrating on the details of one or two models as was the case for the Andrews paper a look at the regional differences in temperature increases and global sea ice and snow cover for the numerous CMIP5 4XCO2 model runs would be more revealing to possible causes of non-linearity and whether there might be common threads in at least some of the models. These data are readily available from the CMIP5 runs and it is thus surprising that the Andrews authors did not analyze it or mention it in their paper.
Finally, how these non-linearity results might affect the empirical estimations of ECS and TCR can only be surmised by looking at the individual model results in the linked table and asking whether those results bound the historical real-world period used in estimating those parameters. There are 11 models that are linear for relatively long times (greater than 71 years) and have GST increases of 3.0 degrees C or greater; this third of the models in the table would appear to present no linearity issues for the historical period or for the period through the twenty-first century. There are 13 models that have shorter time periods to the end of linearity, but only with GST increases of 3.6 degrees C or more; a 3.6 degree C increase should put the historical period on solid footing for linearity, and the future period well into the twenty-first century. The remaining 9 models have GST increases of 2.6 degrees C or more at the point of non-linearity, which by itself would appear to put the historical and near-future periods outside the non-linearity zone, although the projection of these 9 models to real-world linearity is clouded by the short times to the point of non-linearity. While the authors of Andrews commented that the non-linearity they found in the CMIP5 model results for the 4XCO2 experiment might need consideration when empirically estimating ECS and TCR values from historical data and projecting the results into the future, I see, after looking more objectively at the individual model non-linearity, no compelling evidence for non-linearity being an issue in that estimation.
http://imagizer.imageshack.us/v2/1600x1200q90/924/asSDVE.png
I should have added, in the last paragraph of my post above, that even those CMIP5 models that show early breaks from linearity in the regression series do not depart significantly from linearity when the regression line for the overall series is compared with that of the second segment in the series.
This issue goes back to the Andrews authors comparing the slope of the 1st 20 years of the regression Net TOA radiation versus global surface temperature and the slope from the series excluding those 20 years and obtaining a much larger difference than is the case when comparing the overall series slope versus that excluding the 1st 20 years of the series.
SteveF, I saw you made a valiant effort at Rice’s on models but had one of your comments redacted. Just wondering what you had said.
BTW, I noticed there is a lot of error-riddled stuff there, such as Cawley posting a graphic from Real Climate that shows something different from Schmidt’s latest graphics on the subject. And then there is Cawley’s energy balance paper with the sign error that Rice mentioned AGAIN. Was there ever a correction published?
David,
I used a prohibited word: Ken. That was all that was deleted.
.
I will try to make a couple more comments pointing out the weakness of the logic of pooling many models to generate an absurdly broad “uncertainty range for the ensemble”.
.
I doubt it will have much impact, but I don’t want Rice’s dismissal of McArdle’s reasonable commentary to go unchallenged. The close mindedness of the ATTP crowd needs to be challenged.
Cawley’s last comment there makes some startling admissions, such as that the structural uncertainty in the models may be large.
Also: ” If you want something that falls outside the model plausible range of variation, try arctic sea ice. Does this mean the models are not useful for centennial scale projections? No, of course not.”
This all fits with the Cawley doctrine which is basically that to test a model you must include all its plausible runs and parameter choices to be able to “falsify” it. This makes it impossible to ever reject a model for a chaotic system I would argue. And perhaps that’s the point of this silly idea.
Certainly in CFD, allowing plausible changes in parameters creates a huge range of outcomes because of the high nonlinearity. Applying the Cawley doctrine is silly and impossible practically anyway.
This Cawley mistake is also behind the older controversy with Lucia and McIntyre about Santer et al. Cawley is a machine learning expert and so far as I can tell inserts himself in the climate debate because of political motivation.
I am actually surprised that Rice would buy into such silliness, but I do think a lot of modern science is based on these types of models that are virtually impossible to validate in a meaningful way.
DY and Steve,
Oh ye sages of all that is good and proper, who knows all and understands everything, who clearly are beyond bias, silliness, groupthink and close-mindedness, maybe you can tell me what I should believe and think and the world would be a better place (the concern I have in making this comment is that you will actually think that I am being serious).
steveF writes: “I don’t want Rice’s dismissal of McArdle’s reasonable commentary to go unchallenged. ”
You are obviously not familiar with Megan’s Lemma:
1) remember that Megan McArdle is wrong;
2) if your analysis leads you to conclude that Megan McArdle is right, refer to rule #1.
Ken Rice, of course I don’t take your comment here seriously. To be taken seriously, you might try addressing the question of how to validate a model of a chaotic nonlinear system, say of a simple separated flow. Bear in mind that this model, which cannot be falsified just because it fails standard statistical tests on global integrated forces, might be used to design the next airplane or car you ride in.
The best models for design are actually very simple ones adjusted with real data, kind of like energy balance models. This is very well known in the relevant engineering and scientific disciplines.
I don’t think SteveF and I are dismissing evidence of high sensitivity. I certainly am just very skeptical, because the best evidence seems to point to lower numbers, and because of what I perceive to be bias, discussed by James Annan for example, of climate scientists feeling pressure to shade their estimates to the high side.
ATTP:
At least you are not censored here, as I am at your site.
Kevin, Your comment has no technical content but I will respond as an adult to it. There is a glaring error in Rice’s critique of McArdle’s (why do you refer to her by her first name?) piece that calls everything else into question. The glaring error is the statement that GCM’s are based on the laws of physics, whereas economic models are not.
The problem here is a pretty glaring ignorance of turbulent fluid dynamics modeling. CFD models (and GCMs are a special case of this general class) are based on the Navier-Stokes equations, so superficially, yes, these are “laws of physics.” However, all interesting flows are turbulent, and the atmosphere is very turbulent. Since it is impossible to really resolve this turbulence using the “laws of physics”, and it may only become possible in the distant future if ever, “models” must be used. These models are not based on the “laws of physics” but on assumed relationships and parameter tuning. Mark Drela’s MIT PhD thesis from 1985 describes very clearly how such models are constructed; his entire thesis is a detailed description of how one such model was built. To quote loosely his justification of one of the basic assumptions he uses about turbulence: “this relationship, like most useful statements about turbulent flows, is based more on faith than science.” Faith is not a law of physics.
It’s hard to take Rice’s critique very seriously because it is based on a fundamental error that perhaps he neglected to tell his students about when he taught the subject.
Turbulence models are not hopeless; it’s just that they have fundamental limitations that are the subject of ongoing research. We have a paper that talks a little about this in AIAA Journal 2014, I think summer.
Forcing from a single volcano even as strong as Tambora, does not last long. Five years max from the information we have. Pinatubo was maybe 2-3 years.
Remind Eli not to be generous again.
DY,
Strange, I can’t find where – in my post – I made the statement you’ve claimed that I’ve made.
RickA,
I’m not quite sure what you’re talking about. I moderate quite strongly. Take it, or leave it. I don’t really care.
Rice, for the record, here’s what you said. It is a pattern with you that many have noticed. You say something which someone else challenges and then you deny you said it. You should perhaps dial back your blogging to one post a week so you can find a better balance in your life.
“I’ve already written about this before; these kind of comparisons between economic models and climate models are, in my view, fundamentally flawed. As the abstract of this paper points out
‘Structural constancy, both across time and across variable conditions, is a necessary precondition for accurate forecasting. Physical systems exhibit structural constancy, but economic and social systems generally do not.’ The basic point is that systems like our climate obey the fundamental laws of physics. This means two things. One is that you can eliminate any model the results of which violate any of these laws. The other is that this will always be true. Our climate won’t suddenly decide to change its mind and obey a completely different set of laws. Therefore you can essentially use the same models to study current climate change, past climate change, and future climate change. This does not mean that climate models are perfect, don’t have any problems, and that we should simply trust them. It does mean, however, that simplistic comparisons between climate models and economic models are almost certainly wrong.”
Your formulation here is simplistic and not very enlightening. CFD models (including GCMs) are all only partially based on physical laws. The really important bits are not based on “physical laws” but on assumed relationships and tuned parameters. Mark Drela’s thesis is a good way for you to come up to speed on this issue. In this regard, they are like all other models of interesting phenomena, including economic models. This does make your criticism of McArdle’s article seem pretty partisan and unfair.
BTW, it seems I am banned at your blog from commenting, or was it a glitch?
DY,
I did not say what claimed I had said. Nowhere in the bit that you’ve quoted did I say that climate models are based on the laws of physics. That you have failed to recognise, or acknowledge, this, is not a surprise. You’ve never acknowledged your misrepresentations before, so I was not expecting you to do so now.
Rice, I did summarize your argument, which overlooks very important facts, which you have shown no knowledge of. You continue to present a very one sided and erroneous view of GCM models. I would have thought your experience with the statistical issue for GCM’s would have taught you something. McIntyre I think did a much more objective job.
I have given you a lot of detail to support an increased understanding of GCM’s on your part in this thread. There is a good discussion going on at Climate Audit too about it. If your goal is to contribute something other than propagandistic repetition of flawed ideas, you might take a break from debunking everything out there and try to understand the basics.
ATTP:
I don’t mind being moderated.
However, Willard (I think) put me on some list so my posts are automatically deleted.
I am not even sure you see them before they are deleted.
It is not because I violated your blog policy.
It is because willard doesn’t like the content of my posts.
True censorship.
You might want to check that out.
I don’t think deleting a post automatically before even getting to review it qualifies as moderation.
Old, you miss the point ATTP is making
First, she (the Tol rule of indefinite pronouns, ATTP will not mind), the CLIMATE has to obey the laws of physics
Second, models of climate which do not assume the laws of physics (unicorns, ponies, whatever) can be eliminated.
Third, people who come up with theories of climate that ignore the laws of physics should be ignored.
ATTP: “Nowhere in the bit that you’ve quoted did I say that climate models are based on the laws of physics.”
.
You are being ridiculous. You said that climate follows the laws of physics and therefore climate models not following the laws of physics can be eliminated. How is this not claiming that climate models follow the laws of physics?
.
If your answer is that it is impossible to determine if the models are following the laws of physics then your paragraph was very deceptive.
Eli, I note your lack of addressing any of the points I brought up with your claim.
The silly rabbet once again does not address a single substantive point but instead utters empty phrases. If your summary of what ATTP said is correct, his post is completely void of real technical content. That is par for the planetary formation modeler. I suggest that you might try to offer something substantive, such as how to validate a model of a chaotic turbulent flow. At Climate Audit you will find some tutorial material that might make you more than a pesky wrabbit.
DY,
What you said here
is not true. That is all.
Ron Graf,
.
.
Look no further than David Young in comment #148635:
.
.
There’s a distinction between saying a model isn’t fully based on physical laws, and saying a model’s output violates physical laws.
.
Where David’s comment goes off the rails for me is in the next few sentences:
.
.
To review, here’s McArdle’s original argument:
.
.
McArdle is arguing the wrong problem. Yes, climate is always changing and there are a large number of variables — many of which are unknown. But being a deterministic physical system, it’s always going to do the same thing given the exact same initial conditions. The same cannot be said of the market behavior of large groups of humans — not only do the “observations” change in an economy, so do the “rules” of human behavior in response to those changes.
.
Thus comparing economic models to climate models is inappropriate … otherwise known as wrong.
.
I don’t see that as partisan or unfair so much as I see it as simply understanding a key difference between humans and physics.
Ron,
You’re missing my point. DY said – about my post –
I made no such statement in my post. Therefore what DY said is not true. I have no real interest in discussing my post further here, when what initiated the discussion was a claim about my post that is not true.
ATTP wrote here
And in the post wrote…
So when the models don’t implement the laws of physics, because they approximate the laws by parameterising them and then correct energy balance problems… that means they should be eliminated?
Tim, the point is that the model must not contain assumptions which violate the laws of physics, such as magical invocations of unicorns or mysterious forces that no bunny has ever seen.
Approximations are allowed, but they have to be tested by proof by contradiction, e.g. showing that the assumption does not induce a basic contradiction with the laws of physics.
.
In other words just like economic models, they can’t be shown to break any laws of physics.
Eli, this is a naive question but does the construction of a model using necessarily incomplete (maybe insufficient) physical assumptions which as far as they go are assumed lawful necessarily produce physically lawful results?
Another way to say this is, if the input is incomplete but lawful, can the lawfulness of the result be assured?
I wouldn’t think it could, but as i said, this was a naive question.
next to last sentence above would more clearly express my concern if it said:
If the input is lawful but incomplete, can the lawfulness of the result be assured?
How can you be sure that the parameterization has not overwhelmed the physically lawful elements (components?) of the model?
Eli writes
…such as when the approximations together produce more or less energy than when you started, for no good reason?
And then there’s ballpark physics.
“Approximation is allowed” is not a fact. It’s a belief, and it’s not working out well when projecting, is it?
jferguson:Eli, this is a naive question but does the construction of a model using necessarily incomplete (maybe insufficient) physical assumptions which as far as they go are assumed lawful necessarily produce physically lawful results?
Another way to say this is, if the input is incomplete but lawful, can the lawfulness of the result be assured?
I wouldn’t think it could, but as i said, this was a naive question.
——————————————————————
No, but neither can it be denied. What can be done is to test the results against observation with the understanding that, as in horseshoes, close can be enough.
A further point is that a sufficiently complex model will have a large number of predictions, and by examining which set of observations a particular model best matches, one learns about that model, about the comparison with other models, and about the climate system. Indeed this is what is done with the CMIP ensembles.
Eli writes
Of course it can be denied. If it’s not physics, it’s not guaranteed to produce the physical representation.
Eli writes
Well, the practical testing of the models’ ability to project has shown that “close enough” isn’t good enough, and furthermore this is because of a misunderstanding about where “close enough” might be allowed to apply.
For example, “close enough” applied to thousands of grid cells, iterated over millions of times with the result accumulating, isn’t a place where “close enough” applies. And yet this is exactly how modellers apply their approximations and expect to get meaningful results.
Brandon R. Gates (Comment #148650)
“Climate scientists can’t run experiments in which they change one variable at a time.
Indeed, they don’t even know what all the variables are. This meant that they were stuck guessing from observational data of a system that was constantly changing.
Fixed.
” But being a deterministic physical system, it’s always going to do the same thing given the exact same initial conditions.”
The science is settled then.
Pity no one knows the rules for moving from the initial conditions with determinism
Brandon G.
Exactly the same initial conditions never occur in the real world. Because of rounding error, they may not even occur in a complex computer model. And even small changes in initial conditions may lead to large changes over time. IIRC, computer climate models get around the problem that limits the prediction time for weather models by using nonphysically high viscosity values to increase dissipation. Even then, some runs just blow up, or so I’ve heard.
There is such a thing as deterministic chaotic behavior.
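DeWitt’s point about sensitivity to initial conditions can be seen in any chaotic system; a minimal sketch using the logistic map (nothing to do with any GCM):

logistic_run <- function(x0, n = 50, r = 3.9) {
  # fully deterministic: each value follows exactly from the previous one
  x <- numeric(n); x[1] <- x0
  for (i in 2:n) x[i] <- r * x[i - 1] * (1 - x[i - 1])
  x
}
a <- logistic_run(0.500000)
b <- logistic_run(0.500001)              # initial condition differs by 1e-6
round(abs(a - b)[c(1, 10, 25, 50)], 4)   # the difference grows to order 1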
“Of course it can be denied. If its not physics, its not guaranteed to produce the physical representation.”
This is junior high school nonsense. First, there are no complex systems which have a complete representation from first principles. The art is to have a set of simplifications which preserve the first principles while reproducing the complex reality, or a close approximation to it.
As to close enough, consider that GCMs reproduce the basics of global circulation in large-scale weather patterns, as Steve Easterbrook showed
http://www.easterbrook.ca/steve/2014/02/climate-model-vs-satellite-data/
Eli writes
Your response was to “if the input is incomplete but lawful, can the lawfulness of the result be assured”.
And non-physical representations based on curves derived from observations are frequently insufficient, particularly when the observations are only a subset of those expected over the course of the projection.
The real art is to understand when approximations matter.
Eli,
That’s a sort of go/no-go test, like hindcasting. If GCMs didn’t produce features like jet streams and reasonable approximations of historical global temperature anomalies, they would obviously be useless. But they don’t do cloud distributions all that well and have a fairly large spread in absolute temperatures.
Ken Rice, Tim and Ron hit the nail on the head and pierce the veil of legalistic wordsmithing and obfuscation.
Saying we can “reject models that violate any of the laws of physics” is essentially saying that “models are based on the laws of physics.”
Your track record of saying something and then denying you said it is not something to inspire confidence in your honesty. A more honest person might say: well, let’s try to understand the models better and discuss ways they don’t obey the “laws of physics.”
And of course, that is the real glaring error in your post. No model of high Reynolds number turbulent flows “obeys all the laws of physics.” There is the little matter of the eddy viscosity that adds a non-physical viscosity to the momentum equation to account for the turbulent eddies. That is what my previous lengthy comments here are talking about, in case you missed it.
I think people might appreciate an honest and nuanced response to this point. Since I’m apparently banned at your blog, this discussion cannot take place there. An honest person would try to respond to it with a little thought.
jferguson, Tim, and Eli, as I mentioned above, all realistic models of high Reynolds number turbulent flows have empirically based (often curve-fit) representations of unresolved scales. These representations add, strictly speaking, “unphysical” sources to the momentum or mass equations. Thus, the “physical laws” being solved are technically made very “wrong” to account for the unresolved scales. This whole thing that Rice introduced about “violating any of the laws of physics” is just a red herring based on, I am assuming, ignorance or, to be charitable, very loose use of language.
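For readers who haven’t seen a turbulence closure, here is a deliberately crude sketch of the kind of thing being described: a 1D Burgers-type momentum equation in which the unresolved scales are represented by simply adding an eddy viscosity to the molecular one, so the equation actually solved is not the “pure” physical momentum equation. Real closures (Smagorinsky, k-epsilon, and so on) make the eddy viscosity flow-dependent; the constant value and grid here are assumptions purely for illustration.

```python
import numpy as np

nx = 200
dx = 1.0 / nx
dt = 1e-4
nu   = 1e-5        # molecular (physical) viscosity
nu_t = 5e-3        # hypothetical eddy viscosity supplied by the subgrid closure

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x)            # initial resolved velocity field (periodic domain)

for _ in range(1000):
    dudx   = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    # du/dt + u du/dx = (nu + nu_t) d2u/dx2 : the closure alters the equation itself
    u = u + dt * (-u * dudx + (nu + nu_t) * d2udx2)

print("peak resolved velocity after the modelled dissipation:", u.max())
```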
Eli’s assertion about the “large eddies we call weather systems or the jet stream” are of little meaning in a quantitative sense. They constitute the “colorful fluid dynamics” often used to fool silly rabbets.
What we care about is temperature anomalies that are 100 times smaller than the effects Eli cites. Trust me on this, temperature anomalies can be totally garbage while the jet stream “looks good.”
DeWitt: That’s a sort of go/no-go test, like hindcasting.
Exactly, as Eli said the art is to have a set of simplifications which preserve the first principles while reproducing the complex reality, or a close approximation to it. How easy do you think it was to get to reproducing the circulation patterns?
Patterns of warming and precipitation are much harder tests than simple global anomalies, which often are easier to get from simple one-dimensional models.
Now say that one model does better on the warming than the precipitation. Eli wants to understand the source of the difference, which will be a tell about what is going on.
You want perfect? Crank up that quantum computer.
Now some, not Eli to be sure, might ask David Young why one should prefer a model that gets a global temperature anomaly right on the button, but messes up the global circulation pattern. There is a lot more “physics” in the circulation patterns and that also includes energy balances.
Eli asks why we prefer temperature anomaly over circulation patterns. The answer is threefold.
1. We care most about the temperature anomaly, and it’s easy to measure in the real world.
2. “Circulation patterns” are qualitative, and models seem to get them wrong; for example, they don’t do very well on regional climate.
3. Global temperature anomaly is an integral quantity, and one might hope there is more chance it can be quantitatively validated (see the sketch just below).
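A minimal Python sketch of what “integral quantity” means in practice: the global anomaly is an area-weighted average over the whole grid, so local errors can partially cancel. The grid resolution and the anomaly field below are invented for illustration; this is not any group’s actual processing code.

```python
import numpy as np

rng = np.random.default_rng(0)
nlat, nlon = 36, 72                               # hypothetical 5-degree grid
lats = np.linspace(-87.5, 87.5, nlat)
anom = rng.normal(0.9, 1.5, (nlat, nlon))         # made-up gridded anomalies (C)

weights = np.cos(np.radians(lats))                # cell area scales ~ cos(latitude)
global_mean = np.average(anom.mean(axis=1), weights=weights)
print(f"area-weighted global mean anomaly: {global_mean:.2f} C")
```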
This brings me back to the question I asked the Rabbet. How would you propose to validate the GCMs? It’s a serious question, and one that is important in all of CFD, not just GCMs. We should care about this most, as it is a first step toward trying to make decisions based on model output.
As I said above, there is some tutorial material at Climate Audit by Jerry Browning that might be of value as a first step.
David Young: We care most about the temperature anomaly, and it’s easy to measure in the real world.
Ask the folks in Florida tomorrow.
I know sea level is important. I thought the general consensus was that temperature anomaly would drive it as the most important factor. There are other factors I have heard about too, but I doubt GCMs will really tell us much about ice sheet dynamics.
The other thing the Rabbet should consider is that simple models can at least be well constrained by actual data, as energy balance methods typically are. There are some bits, like forcing efficiency, that are hard to get right, of course.
One thing all should be able to agree on is that the Cawley criterion for model falsification is practically useless in a real setting. I am surprised that there has been no retraction, or at least some drawing back, from this extreme position.
DY,
I will simply repeat what I’ve already said. You said
I made no such statement. Once again you’ve said something that is not true. Once again you have failed to acknowledge this when pointed out.
DY,
Just as another illustration, I didn’t say this either
I said
This is getting tedious, so I will probably stop here. However, if you could avoid misrepresenting what I’ve said, that would be appreciated.
ATTP specifies
Results such as conservation of energy problems. You know…energy appearing from nowhere or disappearing into the aether. Those laws?
Tim –
You’ll notice that nobody actually addressed the issue which you raised in #148652, in which energy (or some other quantity known to be conserved) is not guaranteed to be conserved during a time step, but is adjusted after the time step in order to enforce conservation.
Does this qualify as being consistent with the laws of physics? From a “black box” perspective — that is, just examining the result — the quantity is conserved. But the basic equations are not conservative, due either to approximations or to computational inaccuracies.
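A toy sketch of the pattern Harold is describing, with an invented “leaky” physics step followed by a global rescaling: the reported total is conserved by construction, even though the underlying update is not conservative. This is only a schematic of the idea, not any model’s actual fixer.

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_step(e):
    # stand-in "physics" update whose approximations do not conserve energy:
    # it loses ~0.1% per step and adds a little numerical noise
    return 0.999 * e + rng.normal(0.0, 0.01, e.size)

energy = np.full(100, 10.0)      # hypothetical column energies (arbitrary units)
target_total = energy.sum()      # the total that "should" be conserved

for _ in range(50):
    energy = leaky_step(energy)                # non-conservative update
    energy *= target_total / energy.sum()      # post-step "fixer" restores the total

print("total after 50 fixed steps:", energy.sum())   # equals the target by construction
```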
Ken Rice, Tim, Ron, and I already clarified this issue and I said that I had summarized what you said. You didn’t like that summary and Tim quoted you directly. You dishonestly refuse to advance the discussion but continue to deny what you said.
You said that we can reject models whose results violate any of the laws of physics. They all do in an important sense that I detailed above. According to your criteria, we must reject all models of high Reynolds’ number turbulent flows. That is your fundamental error. It does pretty much make your post nonsense. That’s the real point and you refuse to discuss it honestly. That’s what most people call dishonest.
Harold and Tim, the discrete conservation issue is a real issue. The general consensus is that discrete conservation of mass, momentum, and energy (and higher moments if possible) is a very helpful thing. Modern finite element methods guarantee such conservation. However, the issue I raised about the subgrid-model source terms is actually far bigger in size. And it’s an important point for even professors of astronomy to realize and acknowledge.
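For contrast with the post-step fixer above, here is a toy flux-form (finite-volume style) update that conserves its tracer to round-off by construction, because whatever leaves one cell enters its neighbour. It is a 1D upwind advection sketch under invented parameters, not a claim about any particular GCM’s or finite element scheme’s numerics.

```python
import numpy as np

n, c = 100, 0.4                        # number of cells, Courant number
q = np.zeros(n)
q[40:60] = 1.0                         # conserved tracer (arbitrary units)
total0 = q.sum()

for _ in range(500):
    flux = c * q                       # upwind flux leaving each cell (periodic domain)
    q = q - flux + np.roll(flux, 1)    # whatever leaves cell i enters cell i+1

print("change in total after 500 steps:", q.sum() - total0)   # round-off only
```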
My experience with Rice is that like a disciplined politician, he always stays “on message” and that means usually refusing to say anything but simplistic talking points. Often that means denying that you said what you said.
Eli’s experience with David Young. . . well let’s not go there. Unicorns make nice pets tho.
DY,
Once again you fail to own up to your misrepresentations. I’m not dishonestly refusing to advance the discussion. I’m deciding (as is my right) to not continue one with someone who says things that are not true and then appears to be unwilling to acknowledge this when it is pointed out.
And your post, Ken, is based on an error about modeling. You continue to refuse to address this really important point, and focus instead on something that has been clarified above. Reading is not your strong suit.
The discussion wrt physics and economics is apt because the failure to narrow the uncertainty of the IPCC ranges is a failure of economics, not the ‘physics’.
The natural variability of temperature trends, estimated from both observations and models, is on the order of +/- 0.5C per century (though the range varies greatly from model to model, reminding us that in spite of having the same physics, the model results are not predictive, because the numerical solutions to the known physics are not stable). The much larger uncertainty, at least as posed by the IPCC, is from emissions scenarios. But even there, the IPCC is probably exaggerating.
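One common way to put a number of that kind on internal variability is to fit linear trends to many realizations of a simple noise model and look at the spread. The sketch below does this with an AR(1) process whose memory and noise parameters are invented rather than fitted to any dataset, so the spread it prints is illustrative only, not the +/- 0.5C per century figure itself.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)            # one century per realization
phi, sigma = 0.6, 0.12            # hypothetical AR(1) memory and noise (C)

trends = []
for _ in range(2000):
    t = np.zeros(years.size)
    for i in range(1, years.size):
        t[i] = phi * t[i - 1] + rng.normal(0.0, sigma)
    slope = np.polyfit(years, t, 1)[0] * 100     # convert C/year to C/century
    trends.append(slope)

print(f"2.5-97.5% trend range from noise alone: {np.percentile(trends, 2.5):+.2f} "
      f"to {np.percentile(trends, 97.5):+.2f} C/century")
```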
According to CIA data, fertility rates are trending toward the low-end UN scenario. We’re approaching replacement-rate fertility globally and will see peak population. Even before that time, an aging society and a declining labor force mean continued declines in consumption of just about everything, including energy. And the declining workforce also means the only way to grow economies is by increasing productivity, which goes hand in hand with increasing efficiency, including energy efficiency.
This makes most human footprint issues, including global warming, moot. Most of the discussion is chickens clucking in the yard.
TE, yes, it does seem that the population bomb was just a crazy left wing scare, like so many others. Remember AIDS and the “everyone is at risk” lie. You see that in order to motivate “action” it is easy to be dishonest, as we see with Ken Rice. Probably mitigation is hopeless given human nature, and we need an energy technology change. If you are alarmed about climate, Bill Gates’ approach is far more rational and actually has a chance to work.
I think it goes even deeper. Part of the population bomb appeal was erroneously to reduce economic growth (or force sterilization, or worse). But economic growth has demonstrably been not the problem, but the solution. Economic development has led to the two factors which are reducing CO2 emissions: 1) reduced population (in developed nations) and 2) increased efficiency.
DY: “… Bill Gates’ approach is far more rational and actually has a chance to work.”
.
Brandon R Gates has an energy plan now too, I think, if he and Lucia were successful. If Brandon is related to Bill maybe it can be considered. I believe the plan was to get the left on board with the idea of safe nukes and the right on board with spending the money.
.
ATTP, I think you should admit that the difficulty with economic models is not that they don’t follow strict laws; it’s that they have too many unknowns, leaving no ability to validate or falsify. This is precisely the problem with climate GCMs.
One of the main problems with economic models is the assumption of rational expectations. The only thing in common with climate models is the complexity involved.
Right, but ATTP et al. are contrasting economic models and GCMs. And the range of IPCC outcomes is in part a consequence of economic modeling, not climate models (assumptions about economic activity, the carbon intensity of that activity, the number of employed participants in that activity, etc. – market choices).
.
Many things are not predictable (within ranges). However, demographics are destiny. Trends are not likely to reverse, especially once one understands the profound reason for the trends. In agricultural societies, children are a net asset (relatively low cost, and they produce labor at a young age, 5 years old?); in advanced economies, children are a net liability (relatively high cost, and they may not be economically viable until after college or grad school, 26 years old?). The motivations for these things won’t change, and the undeveloped nations are eager to follow suit, though their course is not as clear.
RB, some modelers say turbulent flow can be irrational. My main point is that there is a myth, based on ignorance, that chaotic systems are like structural design problems or computer chip design. There is a big difference, and the outsider’s view is pretty superficial and overestimates predictability, in some cases wildly.
Ron Graf (Comment #148687),
.
.
I have a vision for the US which involves aggressively ramping nuclear. That’s about all Lucia and I found in common.
.
.
My father is the genealogist in my immediate family, and if we were related to Uncle Bill, he’d have told me.
.
.
More or less. The main disagreements were over how to fund the nuke project and continued subsidization of things like wind and solar. Way I figured it, a large nuclear program of the type I envisioned wouldn’t start going in a big way until about 2020, and with the average reactor/plant taking about 10 years to build, the first round of construction wouldn’t go into operation until 2030. That leaves a 14-year gap into which I’d want to aggressively deploy wind and solar up to the point that intermittency would require either storage and/or expensive grid upgrades.
.
My target was an essentially zero-emissions grid by 2040, and Lucia definitely wasn’t having that be a condition of our deal. Which was fine … in what’s supposed to be a negotiation for a mutually acceptable plan, I wouldn’t expect to come away with everything on my list of wants.
DY: “Yes it does seem that the population bomb was just a crazy left wing scare”
Without the Chinese one child policy it would have been crazy, all right. Like it or not, that alone defused the population bomb.
Eli writes
There is a strong correlation between affluence and reduction in birthrate. I guess we’ll never know for sure whether there would have been a natural reduction with increasing Chinese wealth or not. But it seems likely it would have happened on its own.
Tim, almost certainly, as shown by other countries (see Index Mundi for data), but the change would not have happened nearly as fast, and there is a question about how much the policy contributed to increased prosperity.
Definitive answers to questions such as this suffer from the “there is only one world” problem.
.
That’s the consensus opinion, promoted by having a good marketing name: “one child policy”.
.
But like many consensus opinions, it turns out to be false.
.
When the one child policy was invoked, the already existing trend of declining fertility rates actually slowed. Government action wasn’t just ineffective, it was counter-productive. Tell people they can’t have something and that’s usually what they want.
.
Over the longer term, the China trend mimics the rest of the world.
.
What interests me is the UN future projections indicating a convergence back toward the replacement rate (the chart is from 2010; US TFR is 1.9 in 2015, flagging below the convergence and even the five-year projection). I can’t really think of why fertility rates would increase. The factors behind the decline (access to birth control, economic development) would seem to continue the declines, not reverse them upward.
.
This may be denial on the UN’s part. Japan had similar denial. Official estimates kept assuming a reversal toward the replacement rate, but it never happened.
.
Casting the issue as partisan doesn’t do much good; it’s a global measure of individual choices, after all. And the rates were clearly unsustainable. But it’s pretty clear now that population bust is our problem going forward: there is lots of speculation about what the economic and social impacts will be, but the demographics themselves are not in doubt. That means that all human footprint issues will be declining.
.
Eli, Tim, and TE, I don’t know about the one child policy and its impact. The real error of Ehrlich and his alarmist followers was not taking into account technology advances, which have continued to allow food production and standards of living to increase. Human beings are pretty adaptable and intelligent. Adaptation works, and technology is one big driver of successful adaptation.
I would claim that in general enforced scarcity doesn’t work very well. Maybe a carbon tax would work well, I don’t know.
“Having numerically calculated the solar activity variations, he explains and predicts the derived climate variability. Most of modern global warming is the result of solar activity variations.”
http://opagos.tumblr.com/post/146358209620/documentation-of-the-solar-activity-variations-and