Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”
From The Memoirs of Sherlock Holmes (Arthur Conan Doyle)
Pinatubo was hailed by many climate scientists as a unique opportunity to test climate sensitivity. It was the first major volcanic eruption during the satellite era. For the very first time, we had top-of-atmosphere (TOA) measurements of radiative flux changes (from the ERBE) during a major eruption.
I would like to consider four papers here which deal with the estimation of climate sensitivity from the Pinatubo data; they are:-
Douglass and Knox 2005 ("DK2005");
Wigley et al 2005 ("Wigley2005");
Forster and Gregory 2006 ("FG2006");
Soden et al 2002 ("Soden2002").
DK2005 fitted a single capacity linear feedback model to the temperature data and concluded that the results showed a total feedback of 5.56 W/m2/oC, which corresponds to an Equilibrium Climate Sensitivity (ECS) for 2XCO2 of 0.67oC, implying a strong negative feedback relative to the Planck response.
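For reference, the conversion between a total feedback parameter and ECS uses the canonical forcing for a doubling of CO2, $F_{2\times} \approx 3.7\ \mathrm{W/m^2}$ (the same value used in the feedback arithmetic later in this post):

$$ \mathrm{ECS} \;=\; \frac{F_{2\times}}{\lambda} \;\approx\; \frac{3.7\ \mathrm{W/m^2}}{5.56\ \mathrm{W/m^2/{}^{\circ}C}} \;\approx\; 0.67\,{}^{\circ}\mathrm{C}. $$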
Wigley and fellow authors promptly issued a rebuttal to DK2005 here. They argued (a) that DK2005 got its model sums wrong, (b) that application of the DK2005 methodology to results generated by a GCM (the PCM model in this instance) failed to reproduce the "known" sensitivity of the GCM, and (c) that the DK2005 simple model had failed to account for heat flux into and out of the mixed layer, which Wigley estimated to be of order 2 W/m2.
Argument (a) was and remains completely spurious and I won't waste time on it. Argument (b) is identical to one of the arguments that Grant Foster et al raised against Schwartz 2007, and is also spurious. [Schwartz2007, if you recall, used ocean heat content data to derive an observationally based estimate of climate sensitivity of 1.1oC for 2XCO2. Foster et al in rebuttal argued that the same methodology, when applied to results from GISSE, could not reproduce the "known" ECS of the GISSE model, i.e. 2.7oC; the Schwartz methodology produced a maximum estimated value of only 1.3 deg C from the GISSE results. The problem with this argument was that, in fact, the GISSE model during the historical period shows no evidence at all of its reported high ECS value (2.7oC); tests of the climate sensitivity effective during this period in the GISSE model yield 1.3oC for 2XCO2. It is only in future projections of the model that the high ECS becomes apparent, and this arises because the radiative response in the model (as in nearly all GCMs) is curvilinear with future projected temperatures, i.e. the radiative feedback in the GCM cannot be expressed as a simple linear function of temperature. See http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/ for further explanation. By the same token, the argument put forward by Wigley is not valid; it could have been validated by testing the effective climate sensitivity exhibited by PCM during the runs used by Wigley to test the DK2005 methodology, but this was not done.]
This then leaves argument (c) – that DK2005 underestimated the ocean heat flux. Wigley was correct here IMO, and I will show why. However, it is about now that we hear the loud silence of the first dog which is not barking. Recall that the unique thing about Pinatubo is that we have radiative flux measurements. It should have been a simple matter for Wigley to show from the actual data that the measured change in net radiative flux – which should be close to the net heat flux leaving the oceans – was much larger in magnitude than the net radiative flux change implied by the DK2005 solution. Why did Wigley not do so in his rebuttal? Why, instead, did he advance a silly and much weaker argument about the parameterised diffusive flux out of the mixed layer, which is (a) changing and (b) a function of (other simplified) modelling assumptions?
Well, no matter, I suppose that we should be able to go to the Wigley2005 paper – which produced a much-cited central estimate of ECS of 3.03oC for 2XCO2 – to see what the net ocean flux should be, from what we must presume to be its impeccable reconciliation with the measured radiative flux data. Yes? Well actually, no, we can't. Wigley2005 used successive runs of PCM to abstract a "clean" temperature response from the volcano, and then used the MAGICC model to match the abstracted temperature data; the latter model is a simple energy balance model tied to an upwelling-diffusion ocean model, and so the net flux data from his model runs were immediately available. So, er, where are they? It really is strange that, given the uniqueness of this dataset, Wigley makes no attempt to reconcile his results with the flux data. There is no reference at all in the Wigley paper to the measured radiative flux data – even in passing. I will show in a moment what the Wigley match to flux looks like, and then you may judge for yourselves why it was not included in Wigley2005, nor in Wigley's rebuttal of DK2005.
Let me now turn to the Soden et al 2002 paper ("Soden2002"). This paper sets out to show that a GCM (unspecified, but probably GFDL) can successfully reproduce the drying of the atmosphere during the cooling caused by Pinatubo; it concludes that the reduction of water vapour is successfully reproduced by the GCM and that it amplifies global cooling. It is a highly credible paper. Unlike the Wigley paper, this paper does show a graphical comparison of modelled and observed flux data (correctly reconciled over latitudes 60oS to 60oN, the region of ERBS coverage), as well as temperature data, following the eruption. So far, so good. So now we should be able to examine the implied climate sensitivity, total feedback and the water vapour feedback term from the aggregate data in Soden2002, given this well-matched dataset, yes? Er, well actually, we cannot. Curiously, there is another dog not barking here; the Soden2002 paper makes no attempt to quantify the climate sensitivity (nor the total feedback term). The water vapour feedback is the central theme of the paper, yet no attempt is made to quantify the feedback from the aggregate flux and temperature data, which should be fully available from the model results; Soden2002 just compares the temperature data for the cases with and without WV response to temperature change to infer crudely that the drying amplifies the cooling. The Supplementary Information sheds no light on the matter. Curious??
So of these two papers, Wigley2005 offers a climate sensitivity but fails to reconcile it with the measured flux data, while Soden2002 does seek to match the observed flux data but does not report the implied climate sensitivity.
I am now sufficiently cynical about mainstream climate science to believe that the reason for the non-barking dogs in the two cases is the same. I will show that you cannot get a reasonable match to the flux and temperature data with a high ECS. Wigley would have had to show a serious mismatch with the flux data, and Soden would have had to report and explain the politically incorrect low climate sensitivity implied by his match of the flux data.
Let us then consider the Forster and Gregory paper (FG2006), which did use both the flux and temperature data. FG2006 used the fact that "Q-N" plots (plots of {Forcing – net flux} against temperature) should have a gradient equal to the total effective feedback term, assuming the radiative feedback term is linear with temperature. Over the period of Pinatubo, FG2006 concluded that the maximum-likelihood (ML) estimate of total feedback was 2.1 W/m2/oC using GISSTemp or 2.9 W/m2/oC using HADCRUT, corresponding to ECS values of 1.8oC or 1.3oC respectively.
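A minimal sketch of the Q-N regression, in Python. Everything here is synthetic: the forcing decay, the feedback value lambda_true, the mixed-layer capacity and the noise level are demo assumptions, not FG2006's numbers; in practice Q comes from the volcanic forcing datasets, N from ERBS and T from GISTEMP/HadCRUT.

```python
import numpy as np

# Sketch of the "Q-N" method: regress (forcing - net flux) against
# temperature; the slope estimates the total feedback lambda, assuming
# the radiative response is linear in T.
rng = np.random.default_rng(0)
months = np.arange(48)
Q = -4.0 * np.exp(-months / 12.0)   # idealised volcanic forcing, W/m^2
lambda_true = 2.9                   # W/m^2/degC -- demo value only
C = 30.0                            # mixed-layer capacity, W-months/degC/m^2

# Generate a consistent temperature response: C dT/dt = Q - lambda*T
T = np.zeros_like(Q)
for i in range(1, len(Q)):
    T[i] = T[i - 1] + (Q[i - 1] - lambda_true * T[i - 1]) / C

N = Q - lambda_true * T + rng.normal(0.0, 0.5, len(Q))  # noisy net flux
slope, _ = np.polyfit(T, Q - N, 1)
print(f"recovered lambda ~ {slope:.2f} W/m^2/degC")
print(f"implied ECS ~ {3.7 / slope:.2f} degC for 2xCO2")
```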
Roy Spencer here applied a similar methodology using TLT data for temperature and obtained a feedback value of 3.66 W/m2/oC, implying an ECS of 1.01oC (a small negative feedback). Given some commenters' criticisms of the use of TLT, regular Blackboard commenter Troy then repeated the exercise here, obtaining feedback values for the Pinatubo years which showed a small negative feedback using either GISSTemp or Hadcrut – very similar to Spencer's results.
[The differences between Troy's values and those obtained by Forster and Gregory are largely explained by the fact that FG2006 used a 'robust regression technique' (described in full in their paper) which changes the statistics of the fit.]
Overall, then, we have various estimates of the mode of the ECS directly from Q-N plots which range from 1.0oC to 1.8oC for 2XCO2, and a very different central estimate from Wigley2005 of 3.03oC. I can match the flux and temperature data quite respectably within the range suggested by the Q-N plots, but I am unable to obtain a reasonable match if I use Wigley's central ECS estimate of 3.03oC, even if I allow all other parameters to vary to achieve a match. Equally, I can match Soden's flux and temperature data, but it yields a low climate sensitivity – around 1.5oC. It is remarkably unlucky, if that is the right choice of word, that neither paper chose to publish their complete results.
The calculation of feedback from Q-N plots carries a very large error bar, which arises directly from the error in the estimated regression coefficient. The Q-N regression approach translates individual point errors in the quite noisy flux and temperature data directly into estimation error in the regression coefficient, with equal weight applied to each data point of the form (Q-N, Temp). This approach does not fully utilise the information content of the datasets to constrain uncertainty. The methodology which I will apply below narrows the range of uncertainty significantly. The uncertainty reduction is real and arises from the use of additional bounding information, namely that, to a very good approximation, the integral of net flux has to be balanced by the energy gain (loss) in the oceans, given that the temperature data has already been corrected for internal energy transfer (ENSO). Integrating the net flux provides a natural and physically meaningful smoothing of the individual point errors without any loss of information. By considering these data together with minimum and maximum estimates of the observed temperature excursion, we can calculate unambiguous upper and lower bounds on estimates of climate sensitivity, as we shall see.
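A sketch of that integration step (the flux series below is an invented placeholder for the monthly ERBS record, and the trapezoidal rule is my choice of quadrature):

```python
import numpy as np

# Cumulative time-integral of TOA net flux: this is the quantity that must
# balance the ocean heat gain (loss), and it smooths point noise in the
# monthly data without discarding information.
N = np.array([-0.5, -2.0, -3.0, -2.5, -1.5, -0.5, 0.2, 0.4])  # W/m^2, monthly
cum_energy = np.concatenate([[0.0], np.cumsum(0.5 * (N[1:] + N[:-1]))])
print(cum_energy)  # cumulative radiative energy perturbation, W-months/m^2
```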
Matching the Data
Below is a graphic of the available net radiative flux data from three sources. One is a 72 day average (to match the orbit precession rate) taken from the Earth Radiation Budget Experiment (ERBE) site. The second derives from the same source but represents monthly data digitised from Soden2002. This monthly data has been adjusted by Soden et al to detrend the data and isolate the Pinatubo signal. The third dataset consists of  90-day averages taken from  the FG2006 paper – also based on ERBS data – but with some correction for orbital decay.  FG2006 note a problem with the 1993 data:-
As a result of battery problems, there are several gaps in the data, especially during 1993, and for these reasons we do not use data from 1993 or after 1997 for computing annual averages. Our data are essentially the same as the datasets presented in Wielicki et al. (2002), except we have additionally applied a correction to the WFOV data for the decay in the satellite's orbit (T. M. Wong 2005, personal communication).
There is no doubt that the satellite data is unreliable in 1993, at least from April onwards.  The huge downturn in net flux comes almost entirely from variation in the SW data, but there is no physical phenomenon (nor corresponding temperature response) to support it. ENSO was quiescent through 1993.
Despite the apparent noise in the above datasets, and the evident problems with the 1993 data, all three datasets show remarkable similarity in terms of their cumulative energy perturbation. The time-integrated net flux (shown below) gives us a clear idea of the total radiative energy perturbation associated with the eruption, at least up to early 1993.
The temperature "low" was achieved about 14 months after the eruption date. One of the things that is clear from the above plots is that the TOA net flux crossed the zero line at least 4 to 5 months after the temperature low point. This is important. One of the mathematical features of a single ocean heat capacity model is that temperature turning points must correspond to the times when the TOA net flux crosses the zero line. Hence, the DK2005 model has an immediate problem in its base assumption. We can therefore support qualitatively Wigley's argument that the DK2005 model was deficient in not taking into account heat flow across the lower boundary of the turbulent mixed layer.
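The turning-point property follows directly from the governing equation of a single-capacity model, with C the mixed-layer heat capacity, F the forcing, $\lambda$ the total feedback and N the TOA net flux:

$$ C\,\frac{dT}{dt} \;=\; F - \lambda T \;=\; N \qquad\Longrightarrow\qquad \frac{dT}{dt} = 0 \;\Longleftrightarrow\; N = 0, $$

so any extremum of the modelled temperature must coincide exactly with a zero crossing of the modelled net flux.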
Let us reproduce the DK2005 reported best solution in order to illustrate this point quantitatively.
In the plots above, I have replicated the preferred DK2005 result using a single capacity model with the same inputs and parameter values used in the paper. However, I have added in plots of the resulting model-derived net flux, which can be unambiguously derived from this simple model. We see that the net flux crosses the zero line at the same time as the (model) temperature reaches its low point, i.e. month 15 on the plot or August 1992. We also see from the plots of net flux and cumulative heat gain (loss) that the DK2005 solution significantly underestimates the total ocean heat exchange, as correctly argued by Wigley in his rebuttal.
OK, so now let's add in the missing heat flux term from the turbulent mixed layer to a deeper heat capacity to see what the "right" answer should be. [I will also make a (minor) change to the input. DK2005 used an analytic approximation for the observed change in optical depth before computing the forcing values associated with the volcano. Since I am using a numerical solution, there is no value in doing this, and so I will revert to the original optical depth values from Ammann et al (2003) used by DK2005; these are then converted to forcing values using the DK2005 conversion factor of 21 W/m2 times OD, which is supported by Hansen (2002). This should bring the DK2005 and the Wigley2005 forcings into line except for a small difference in conversion: Wigley converted using a factor of 20 W/m2.]
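For concreteness, here is a minimal sketch of the kind of two-capacity model being fitted. The structure follows the description above, but the parameter values are purely illustrative, not the fitted ones:

```python
import numpy as np

def two_box_response(F, lam, C_mix, C_deep, k, dt=1.0):
    """Euler integration of a two-capacity linear feedback model:
         C_mix  dT/dt  = F - lam*T - k*(T - T_deep)
         C_deep dTd/dt = k*(T - T_deep)
    F: forcing (W/m^2); lam: total feedback (W/m^2/degC); capacities in
    W-months/degC/m^2; k: mixed-to-deep exchange coefficient (W/m^2/degC).
    Returns (T, N) where N = F - lam*T is the modelled TOA net flux."""
    T = np.zeros(len(F))
    Td = np.zeros(len(F))
    for i in range(1, len(F)):
        exch = k * (T[i - 1] - Td[i - 1])
        T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1] - exch) / C_mix
        Td[i] = Td[i - 1] + dt * exch / C_deep
    return T, F - lam * T

# Illustrative run with an idealised Pinatubo-like forcing pulse.
months = np.arange(60)
F = -4.0 * np.exp(-months / 12.0)
T, N = two_box_response(F, lam=4.1, C_mix=30.0, C_deep=200.0, k=1.0)
```

With the deep-exchange term k present, the modelled net flux no longer has to cross zero at the temperature minimum, which is exactly the qualitative behaviour the data demand.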
With the heat flow term now added in, and matching simultaneously to the temperature and the net flux data, we obtain the following solution.
We can see that we now have a much better match to the net flux data; the zero net flux point has now moved to month 20 and the total energy loss from the oceans in the model accords reasonably well with the total energy loss implied by the radiative flux differences. This match yields a climate sensitivity of 0.9 deg C for a doubling of CO2 – still showing a negative feedback relative to the Planck response.  However, the temperature excursion at around 0.44oC looks a little low in magnitude.  This sensitivity looks close to a lower bound estimate.
In the interest of finding the largest possible climate sensitivity which is compatible with these data, let us now assume that the entire temperature change from June 1991 was due to volcanic aerosols.  This is equivalent to shifting the temperature data (down) by 0.21 deg C – the June datapoint. The revised best match now appears as follows:-
This match suggests that a climate sensitivity of around 1.5oC is compatible with the observed data assuming maximum attributable temperature response. The model temperature excursion now reaches 0.7oC which is a little higher than most estimates. (Wigley estimates an excursion of 0.61oC.) The model is "seeing" an ocean capacity of 53 watt-months/deg C/m2 – equivalent to about 30 to 40m water depth, which seems credible over the period. It seems reasonable to claim that this is then a fair point estimate of climate sensitivity. There are a couple of reasons to believe that it may be shaded on the high side; the temperature excursion is a bit high, and the satellite coverage probably yields an overestimate of global net flux response, but we will come back to this latter point.
The maximum climate sensitivity which will still stay (just) compatible with these data is 1.7oC for 2XCO2. This yields a temperature excursion of 0.7oC, a total capacity of 68 watt-months/deg C/m^2 and a heat loss slightly larger in magnitude than (i.e. below) the Soden data. If the climate sensitivity is increased beyond this value, then either the temperature excursion must be allowed to increase above 0.7oC OR the heat loss must be allowed to deviate substantially away from (i.e. below) the Soden curve for net heat loss.
Now it is time to test the Wigley model. Specifically, let us test what happens if we fix the climate sensitivity at 3.03oC for a doubling of CO2 and then allow the model complete freedom to optimize all of the rest of the parameters to match temperature and net flux data. The best match looks like this.
Whoops. The problem with the net flux match is clearly evident, I trust. The total transferred energy in Wigley's model is far too large in magnitude. There are several indications that this model result is actually quite close to the match obtained by Wigley2005 using MAGICC. In Wigley's rebuttal to DK2005 he argued that oceanic diffusive heat flux should achieve about 2 Watts/m2. This corresponds very well with the values achieved in the above model match. I did also test what happened if I "overweighted" the fit to the net flux data. If I force-fit the net flux data while retaining Wigley's estimated climate sensitivity, then the temperature excursion achieves a magnitude of over 1.4oC – more than twice Wigley's own estimate of maximum temperature excursion of 0.61oC. Basically, if I match the temperature data, the cumulative heat loss is much too high and if I match the heat loss, then the temperature excursion is much too high. No combination of parameters will allow a match to both datasets while retaining Wigley's climate sensitivity.
It is possible to match the temperature data with a climate sensitivity of 3.03oC, as Wigley did, but only by ignoring the available radiative flux data.
Satellite Coverage
The above analyses all use forcing estimates which are globally averaged, and they also assume that the satellite values of radiative flux represent global averages. In practice the coverage from ERBS is from 60S to 60N. Since Pinatubo was a tropical volcano, the spread of injected aerosols was from the tropics outwards. During the early period, we would therefore expect that both the forcing per unit area and the net flux response per unit area should be larger in magnitude over the tropical region than the extra-tropics, and larger over the ERBS region of coverage than over the total globe. We might therefore expect that the climate sensitivity values estimated above are biased towards being high.
The differences between the global response and the response over the region covered by the ERBS should be largest during the months of aerosol build-up and for a few months following maximum injection. Thereafter, with stratospheric dispersion, we expect the forcing data to converge on the global values, as estimated by Sato or Ammann et al.
We can use the early SW data to estimate the effective forcing over 60S to 60N during the early months.   Note that we cannot continue to use the SW data indefinitely to estimate forcing because it starts to carry SW feedbacks in the form of changes in cloud albedo and in atmospheric absorption.
We can use this crude estimate of regional forcing, together with the temperature and net flux data, to estimate the climate response from our simple model just over the region of satellite coverage, which was the information presented in Soden2002.
Here is the result of (just) substituting the monthly SW values presented by Soden2002 for the first 8 months of the global volcanic forcing data:
We can see that this gives a far better fit to the early high frequency data, as we might expect, and overall a very good match to the net flux data. The "best fit" climate sensitivity for this case works out to be 1.38oC – not too far from the 1.48oC estimated above when using the estimated global forcing values. The model is (still) "seeing" a total ocean heat capacity corresponding to about the top 30-40m of ocean.
The above fits have all examined only the early data up to April 1993, and have assumed that the early 1993 data was usable to define the crossing point of net flux into the positive domain. The elimination of all of the 1993 data unfortunately takes out the early 1993 months where the net flux is apparently seen to go positive. The net flux data by the end of 1993 is clearly in the positive region. However, the elimination of the early 1993 data as "bad data" opens the possibility of the net flux crossing the zero line at a slightly later time. A match to the FG2006 flux data with no 1993 data used at all – the black triangles in the plot below – yields the following result:-
Climate sensitivity is slightly higher than the previous results, but still quite tightly constrained by the available flux data even when the 1993 data are completely eliminated.
Summary of Conclusions
Allowing for uncertainties in the temperature and flux datasets, the response from Pinatubo is compatible with a 2xCO2 climate sensitivity of between 0.9 and 1.7oC, with an ML value around 1.4oC. Outside this range from 0.9 to 1.7oC, it is not possible to obtain simultaneous matches to temperature and energy balance data within a temperature excursion range of 0.5 to 0.7oC. (Note that the actual measured temperature excursion was somewhere between 0.4 and 0.5 deg C. My allowed range here already incorporates maximum correction for ENSO. For comparison, Soden's estimate of the ENSO-corrected temperature excursion reaches a maximum of 0.65oC. A reduction in the allowed temperature excursion would reduce the upper bound on my estimated range of climate sensitivity.)
The DK2005 estimate is unequivocally too low because of underestimation of ocean heat transfer and a deficient model.
The Wigley2005 central estimate of 3.03oC is impossibly high because of overestimation of ocean heat transfer; it can only match temperature data if the flux information is ignored.
I suspect cynically that these results explain why quantification of the sensitivity is notably absent from the Soden2002 paper, and the flux data are notably absent from Wigley2005.
Update 26th October
Those of you still following this thread will have noticed that there was an extremely useful input from Nic Lewis  here.
Nic, in attempting to replicate some of my results, correctly deduced that we were probably using different (globally averaged) volcanic forcing values – an excellent piece of forensic evaluation for which he deserves credit. Nic had derived his forcing values by taking an areal-weighted average of the dimensionless Optical Depth (OD) data developed by Ammann et al 2003, and applying the same conversion factor that I was using.
In theory, this should have given us the same forcing values.
But it didn’t.
The values I used for OD were taken directly from the DK2005 paper, and were attributed by DK2005 to Ammann et al. Last night, having confirmed with some test calculations that there was a genuine problem,  I did what I should have done first time round; I downloaded the original source data and recalculated the areal-weighted average OD,  month-by-month.
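For reference, a sketch of the areal-weighting calculation; the 5-degree banding and array names are my assumptions about the layout of the gridded Ammann et al (2003) data:

```python
import numpy as np

# Areal-weighted global mean of optical depth: each latitude band is
# weighted by the fraction of the sphere's area it subtends, i.e. by the
# difference of sin(latitude) across its edges.
lat_edges = np.linspace(-90.0, 90.0, 37)            # 5-degree band edges
weights = np.diff(np.sin(np.deg2rad(lat_edges)))    # proportional to band area
weights /= weights.sum()

od_by_band = np.full(36, 0.1)                       # placeholder OD values
global_mean_od = float(np.sum(weights * od_by_band))

# Note: restricting to 60S-60N means re-normalising the weights over that
# sub-region -- not multiplying the global mean by an areal factor.
```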
Below is a plot comparing the result with the values digitised from Figure 2 in the DK2005 paper.  (I have also included the DK2005 analytic fit  – dotted line – for reference and calibration.)
There is little doubt that the OD values come from the same source. However, as you can see from the graphic, there is close to an exact ratio of 0.87 between the two datasets. It may be a coincidence, but this is the ratio of the area subtended by latitude 60S to 60N (the satellite coverage) to the total global area. If so, it shouldn’t be in there. [Going from the global average to the satellite region average should increase the average forcing per unit area, not decrease it. In any event, the calculation should involve a recalculation of the weighted average over the new area, not an areal factor.]
There may however be a perfectly good explanation for the difference, which is not evident to me. I will bring the issue to the attention of the authors in any event.
In terms of my results above, this should not make much  difference to the benchmark runs over the satellite region, since for those runs I have overwritten the early forcing data with SW data. For the other cases, it will modify the matches, probably causing a small decrease in the estimates of climate sensitivity, and a shift of the uncertainty range to the left.
But I am guessing. I will run the cases and re-post the revised results – presuming again on Lucia’s indulgence – but not for another week I’m afraid. I will be hors de combat.
My thanks again to Nic Lewis.
Great stuff – this is what I like to see.
I wanted to mention somewhere that I just got on Twitter, and follow Pielke Jr, Matt Ridley, Bjorn Lomborg and Keith Kloor. Yesterday, in the Suggested to Follow box, the first of three names was Lucia. That's an interesting algorithm.
"The problem with this argument was that, in fact, the GISSE model during the historical period shows no evidence at all of its reported high ECS value (2.7oC); tests of the climate sensitivity effective during this period in the GISSE model yield 1.3oC for 2XCO2."
That isn’t a problem with Foster/Wigley’s argument. It’s simply a statement that your ‘climate sensitivity effective during this period’ derivation does not produce a result equal to the actual diagnosed ECS for that model. It sounds more like it would relate to the transient climate response, which was calculated as 1.5ºC in GISS-ER.
If Douglass & Knox and Schwartz said they were simply trying to determine transient climate response over the historical record you would have a point. However, Schwartz was explicit in saying his method was intended to derive ECS, so its inability to produce known ECS values from model data was a clear problem. Douglass & Knox were vague, but they did not object in replies to comments which related their results to an ECS estimate.
Paul_K, I think you may have coined a new phrase for those who read and analyze climate science papers. I have often found that looking for the dog that did not bark can provide a shortcut in finding weaknesses in these papers. I always look for extenuating circumstances that might explain the lack of barking, or something hidden away in an SI where the bark might be muffled. Are there any (other) reasons you can conjecture for both the sensitivity and flux data not being presented together by the authors of these two papers?
Paul S,
You’re obviously not a follower of my earlier work!
The issue is not one of transient vs equilibrium response.
Some time ago I demonstrated that with a simple linear feedback model, one could emulate (with remarkable accuracy) the mean surface temperature and OHC response of the GISSE model over the entire historic period. Translation of the feedback term to an ECS yielded a value of 1.3 deg C for a doubling of CO2. I deduced at the time that, barring gross error, the reported ECS value of 2.7 deg C could only be correct if, in the GCM, the radiative response function displayed a curve (rather than a straight line) against temperature. In other words, the feedback term cannot be expressed as a simple linear function of temperature in the future projections of the GCM, but is almost exactly linear over the temperature range during the entire historic forced period. The same is true of the NCAR model and the GFDL model – the only other models which I know have been tested. These models also show a linear response and an implied low (equilibrium) climate sensitivity during the historic period, and have a reported ECS of about twice the value exhibited over the historic period.
Since then, I uncovered (a lot of) direct evidence that this curvilinear response function is a feature of almost all of the GCMs, and I published an article on the subject which you may find interesting:- http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/
Whether this model behaviour is a real-world behaviour is a separate conversation, but the fact remains that the argument used to challenge Schwartz was spurious – as was Wigley's argument that DK2005 did not match the "known" sensitivity of PCM.
Paul S (Comment #105112),
Humm..
So are you suggesting that ECS can’t be diagnosed if you know the energy balance perfectly? If so, please offer some explanation for why the ECS is not evident in an energy balance.
Kenneth Fritsch (Comment #105114)
October 21st, 2012 at 3:23 pm
Hi Kenneth,
“Are there any (other) reasons you can conjecture for both the sensitivity and flux data not being presented together by the authors of these two papers?”
Not really. Wigley must have been aware of the mismatch, since his energy balance model balances, er, energy; but he might have decided that the ERBE data was just too wonky to use for comparative analysis because of (a) coverage or (b) data problems during 1993 or (c) something I am missing. But, if this were the case, IMO he should have stated the fact in his paper. I find it a little difficult to believe that he never thought of using the radiative flux data at all.
For Soden, there is no excuse at all that I can think of, in all conscience. He had already bitten the bullet on matching the radiative flux data. If I had been a reviewer of the paper, I think that my first request would have been for quantification of the water vapour feedback, since, after all, this was the main subject of the paper!
I'm not clear how total flux data would be any use for analysis of Wigley et al. 2005. The paper is about quantification of ECS by feeding a simple model with a forcing history and range of values for sensitivity. They use the Ammann 2003 dataset for AOD and state that they adopt a forcing coefficient of -20 W/m^2. So you have the forcing flux; anything else that occurs is, by definition, feedback in this experiment and therefore is caused by the sensitivity.
For Soden et al., they didn’t quantify a climate sensitivity figure because the paper was about an analysis of the properties of water vapour feedback by itself, in models as compared to observations. It would take a different approach to do a further quantitative study into sensitivity. Forster and Collins (2004) do just that, though still water vapour feedback-only.
Paul_K
I think there is a case of academic anachronism here. As Forster and Collins 2004 notes, Soden et al. 2002 was a groundbreaking paper simply due to it demonstrating positive water vapour feedback in observations that appeared to be consistent with a GCM. You should consider there were some suggestions around the time, mainly from Lindzen and co., that water vapour feedback could be neutral or negative, despite GCMs all having strong positive WV feedbacks.
Steve F,
It’s possible that might be the case, but all I’m saying at this stage is that the actual model equilibrium response in GISS-ER is 2.7ºC. Paul_K is talking about a value of 1.3ºC. Whatever this relates to, it isn’t the ECS of GISS-ER.
Paul S,
“Paul_K is talking about a value of 1.3ºC. Whatever this relates to, it isn’t the ECS of GISS-ER.”
Clearly not, since the analysis Paul_K is doing is based on measurements of the Earth's response. His point is that the GISS-ER (and other GCMs') ECS seems inconsistent with the observed behavior of the Earth in response to Pinatubo. GISS-ER uses assumed aerosol offsets that are… ahem… adjusted to match the temperature history. Whatever the diagnosed ECS from GISS-ER (and other models), they have a number of parametrized behaviors which are wildly uncertain; that diagnosed ECS and a dollar will get you a cup of coffee… at least in some places. GCMs can't invalidate measured data.
Paul S,
Both DK2005 and Wigley2005 use the Ammann 2003 AOD data. The only difference is a small difference in conversion factor to forcing.
You write:
“I’m not clear how total flux data would be any use for analysis of Wigley et al. 2005? The paper is about quantification of ECS by feeding a simple model with a forcing history and range of values for sensitivity.”
…and then matching the result to temperature. OK.
However, the governing equation in MAGICC is derived from a TOA energy balance; it has the form:-
Net Radiative Flux = Forcing – radiative feedback
To a good approximation the LHS is also the rate of change of ocean heat energy (because of relative heat capacities of ocean, land and atmosphere). So MAGICC substitutes an upwelling-diffusion ocean model for the LHS of the equation. If you fix the parameters of the ocean model, then you can solve the equation for temperature for a range of different assumed sensitivities and a given forcing dataset – which is what Wigley2005 actually did. In fact, in the absence of any radiative flux data, that’s all you can do. However, in this instance there is an additional constraint on acceptable values of sensitivity – namely that the measured net flux (as well as temperature) should be consistent with the modeled ocean heat flux. This restricts the allowable parameter values of Wigley’s ocean model, and hence constrains the match to sensitivity, and this is what Wigley did not do.
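Written out, with N the TOA net flux, F the forcing, $\lambda$ the total feedback and H the ocean heat content, the constraint is:

$$ N(t) \;=\; F(t) - \lambda\,T(t) \;\approx\; \frac{dH_{\mathrm{ocean}}}{dt}, $$

so any candidate combination of sensitivity and ocean-model parameters has to reproduce the measured net flux as well as the measured temperature.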
SteveF (Comment #105123)
Read the square-bracketed section near the top. 1.3ºC is Paul_K’s diagnosed ‘climate sensitivity effective over the modelled historical period’ for GISS-ER.
Schwartz, Douglass & Knox attempted to use certain diagnostic variables – surface temperature, ocean heat content – to determine ECS for the Earth. Others tested their methods using equivalent outputs from models and found they couldn’t reproduce known model ECS values. Paul_K suggests this isn’t a valid criticism of their methods because he’s calculated the figure mentioned in the paragraph above and it agrees with Schwartz. However, given that the ECS values for models are known quantities, simply from reading off the temperature changes at the end of equilibrium runs, it’s difficult to see where that takes us.
Really though, a lot of this discussion is probably outdated now. Climate scientists are now increasingly favouring analysis of transient response over equilibrium sensitivity, for a variety of reasons.
Paul S,
It is probably better to look at transient sensitivity, since CO2 in the atmosphere will almost certainly be falling within about 50 to 75 years due to falling fossil fuel use and continued high ocean uptake from the thermohaline circulation. We are not ever going to see the ECS, though we may see the TCS, or something close to it. The TCS is quite insensitive to modeled ECS; there is not much difference between the TCS at an ECS of 3 and the TCS at an ECS of 1.8. Focusing on TCS may help form a broad technical consensus about likely warming over the next 75 years, which would be a good thing. Of course the extremes on both sides will not be pleased. If warming will almost certainly lie between 0.10 and 0.2C per decade for the next 5 decades (as I think it almost certainly will), then perhaps people can reasonably evaluate what public actions make sense.
Paul_K,
No matter how I look at the hadcrut4 data, and “correcting” for the influence of ENSO, like Wigley et al say they do, I can find no response to Pinatubo larger than ~0.45C (and maybe closer to 0.4C). On Wigley et al’s figure 3, that corresponds to a bit under 2.0 (maybe 1.8?) for ECS.
Paul S,
“It’s possible that might be the case, but all I’m saying at this stage is that the actual model equilibrium response in GISS-ER is 2.7ºC. Paul_K is talking about a value of 1.3ºC. Whatever this relates to, it isn’t the ECS of GISS-ER.”
Agreed. I’ve revisited what I’ve written and I don’t think I have mis-stated anything, but I accept this is confusing, especially if you have not already seen it before.
If you prefer, you can call the “1.3ºC ” the apparent climate sensitivity to a doubling of CO2 which is effective in GISSE over the historic period.
It means that with this sensitivity you can perfectly emulate the GISSE results over the historic period using a single or multiple capacity linear feedback model. Specifically, the implied climate sensitivity per unit forcing is equal to 0.35 deg C/W/m^2 = (1.3/3.7). The total feedback parameter is the inverse of this, i.e. 2.9W/m^2/deg C. With this feedback parameter plugged into the linear feedback equation you can match the GISSE results. If the radiative response were to continue to be linear with temperature, then the ECS of this system would be 1.3 deg C for a doubling of CO2. (It’s not a transient sensitivity.) In practice the GISSE model does not show continued linearity in its radiative response to temperature, and so the ECS is 2.7 deg C, as reported.
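In symbols (just restating the arithmetic, with S as my shorthand for the unit sensitivity):

$$ S \;=\; \frac{1.3\,{}^{\circ}\mathrm{C}}{3.7\ \mathrm{W/m^2}} \;\approx\; 0.35\,{}^{\circ}\mathrm{C}\ \mathrm{per}\ \mathrm{W/m^2}, \qquad \lambda \;=\; \frac{1}{S} \;\approx\; 2.9\ \mathrm{W/m^2}/{}^{\circ}\mathrm{C}. $$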
The point I was making is that if you apply a bonafide methodology to estimate climate sensitivity from the GISSE historic results, using a linear feedback assumption, then you should end up with a unit sensitivity of 0.35 deg C/w/m^2 – equivalent to an ECS for 2XCO2 of 1.3 deg C – and NOT a value equivalent to 2.7 deg C for 2XCO2.
Paul S,
I said ” We are not ever going to see the ECS”… that should have been.. “The world is not ever”… most all who comment here will be long gone in 75 years. 😉
Hi SteveF,
"No matter how I look at the hadcrut4 data, and "correcting" for the influence of ENSO, like Wigley et al say they do, I can find no response to Pinatubo larger than ~0.45C (and maybe closer to 0.4C). On Wigley et al's figure 3, that corresponds to a bit under 2.0 (maybe 1.8?) for ECS."
Wigley estimates the excursion as 0.61ºC ± 0.1ºC
Soden graphically shows an unadjusted TLT excursion of 0.5ºC and an adjusted excursion (after ENSO) of 0.64ºC
Douglass & Knox show an unadjusted TLT excursion of 0.6ºC adjusted to about 0.7ºC.
There was a moderate but persistent El Nino which was assumed (by all of the authors) to have limited the temperature excursion, so the correction invariably increases the excursion relative to the actual measured data.
Paul S,
“Paul_K suggests this isn’t a valid criticism of their methods because he’s calculated the figure mentioned in the paragraph above and it agrees with Schwartz. ”
The result is easily replicable, and this was actually done by several people including Steve McIntyre. The same approach was applied to GFDL by Professor Isaac Held – he found a climate sensitivity equivalent to 1.5 deg C for 2XCO2.
“However, given that the ECS values for models are known quantities, simply from reading off the temperature changes at the end of equilibrium runs, it’s difficult to see where that takes us.”
The historic data shows a response that is indistinguishable from the response predicted by a linear feedback model. It is only in the GCMs that there is any evidence of curvilinear behaviour in the non-forced long-term response. Different models have different degrees of deviation from linearity, and there is no observational evidence to say whether these prognoses reflect the real world or are strictly a problem with the models. The specific issue here is that if you wish to test a methodology for estimating climate sensitivity (e.g. Schwartz) by testing it on GCM results, then you have to use the effective climate sensitivity actually exhibited by the GCM – not the impossible-to-predict climate sensitivity that it will eventually exhibit in the future after it has gone through its nonlinear extrapolation. Alternatively, you can present an argument that says you shouldn't be using a model which assumes that feedback is a simple linear function of temperature in the first place, but this is not what was argued when Schwartz was challenged, nor when Wigley challenged DK2005.
Paul_S,
Just to add my two cents here to the above discussion between you, Paul_K, and SteveF, I agree with you (and Wigley) that comparing the ECS derived from the method in a GCM to the fully equilibrated 2xCO2 temperature is important and highlights a deficiency in the method. And I don’t think it was spurious at the time, prior to studies like Winton et al. (2010) that highlight how the radiative response is not constant with temperature in the GCMs.
However, in hindsight, assuming this discrepancy in actual vs. diagnosed ECS is the result of a varying radiative response (I’m taking Paul_K’s word for it here), it is clear why the Wigley argument about not matching the 2.7 K should now be considered irrelevant: the need when estimating ECS from Pinatubo is to figure out how strong the radiative response/feedback is with respect to temperature during the eruption and its immediate aftermath. If this radiative response is not constant (e.g., the “ocean heat uptake efficiency” departs substantially from 1), then you simply can’t calculate the ECS from a volcanic eruption. Any “more accurate” result that uses only the Pinatubo period has not been achieved because of a superior method, but is rather just luck, as the information about how the radiative response changes due to larger imposed forcings would simply not be contained within that period.
Thus, going back to SteveF’s statement:
In many models, if we are talking about the global energy balance over a short period such as the Pinatubo eruption, I would say that indeed you CAN’T diagnose the ECS. Whether the real world shows a similar change in the radiative response parameter over time is uncertain.
Hi Troy,
Great comment.
It’s still important to understand the Pinatubo response even though (I agree with you) it will at best give us an estimate of ECS only under the untested assumption of a continuing linear feedback response with temperature.
Troy, you can’t comment here – you’re not called Steve or Paul.
Paul_K 105132,
OK, I was looking at surface data not TLT. I’ll peek at the TLT data.
Great post and great follow on comments! The elephant in the room is of course the nonlinear T/F relationship apparent in the GCMs. We should remind ourselves that some *small* portion of this is likely to be factual, based on ice-albedo.
What I think is truly novel about Paul_K’s post here is showing how accurately we can deduce the TCR (sorry for the ambiguity, I am going to call it the TCR) from 1) the entire historical series and 2) the Pinatubo excursion.
Caveats:
1) The original detrending. How likely is it that “detrending” the data prior to the Pinatubo analysis has biased the results?
2) Whoops, sorry I forgot. If I remember I’ll post it!
PaulK–you really need to publish this. Fabulous.
Craig Loehle
Oh yeah, Caveat #2 was the Pinatubo-ENSO link. If a tropical volcanic eruption interferes with the ENSO cycle, as has been suggested by some, how does this factor in? Assuming that the effects are spatially distinct from a global GHG forcing.
Re: Paul_K (Oct 21 19:37),
I still think that the non-linearity comes from ocean heat uptake and should be expected. The two-box model you used is why you don't get non-linearity. The models used for 14C uptake, for example, are more complex because a two-box model simply doesn't work. One relatively simple model, as opposed to a full three-dimensional finite difference model, that seems to work well involves a well-mixed surface layer, a one-dimensional upwelling/eddy diffusion column and bottom water. There is a direct connection between the well-mixed surface layer and the bottom water as well as transfer from the surface layer to the column, but the volume and resulting mixing time of the bottom water is large. Any increase in heat content of the bottom water won't be reflected at the surface for hundreds to thousands of years.
DeWitt,
Right, but what is the mechanism by which that affects the T/F relationship?
e.g. Spatial redistribution of ocean heat content affecting various feedbacks – WV, lapse rate, clouds, ice…?
BillC, "Right, but what is the mechanism by which that affects the T/F relationship?" Changes in the relative heat capacity. Virgin soil has a heat capacity of ~2 kJ/liter. With land use, snow retreat and reduced glacial meltwater, the heat capacity can reduce to ~0.8 kJ/liter. That is a 2.5x amplification of temperature without a change in forcing.
http://redneckphysics.blogspot.com/2012/10/thermal-capacity-and-amplification-of.html
There is a rough table of amplification factors by 5 degree latitude. Since the drying time varies, there is an inconsistent lag in amplification. Since the Antarctic is thermally isolated, it has small and occasional negative amplification.
https://lh5.googleusercontent.com/-vWiC80TweKg/UIVQljNNVsI/AAAAAAAAFGY/XWAWwaeVaB0/s800/hemisphers%2520versus%2520tropics.png
Just comparing CRUT 4 Tropics to the hemispheres, you can see the divergence as land heat capacity changes over time. The diurnal temperature trend of the surface station record swaps sign; things get warm, changing the feedback ratio, and a new oscillation is triggered.
BillC, I think the answer to that is relatively simple. There are different climate responses to volcanic forcings, each of which has a different latency associated with it. The main factors that govern the latency of the response are the thermal mass associated with the component of the system you are comparing and the mixing time. The atmosphere has a relatively small thermal mass and short mixing time. Oceans have about 1000x more thermal mass and 1000x longer mixing times (for deep ocean components).
I think the curvilinear behavior that Paul_K describes is just a “kicking in” of long latency components that arise in more complex models.
As to which physical processes are most important for producing the curvilinear behavior exhibited in GCMs… well I think that’s a full-fledged research project.
Anyway, it seems to me if you find the transient response to be 1.8°C, that this would suggest that the equilibrium response must be larger. Usually, when I think “multibox” model, I have fast response (e.g. atmosphere) and slow-response (e.g. ocean) components, and if you drive the system with a relatively high frequency forcing like a volcanic eruption, it would seem to me you’d only be exercising the rapidly responding portion of the system. The thermal inertia associated with the slowly responding portion wouldn’t really get bumped very much…
Just to confirm this, since I had previously downloaded the radiation and temperature data for GISS-ER for several of the CMIP5 runs from Climate Explorer, I went ahead and checked the radiative response for this model during the Pinatubo eruption:
https://dl.dropbox.com/u/9160367/Climate/GISS_ER_Pinatubo10-22-12.png
Ultimately, for the 6 runs, the median for this radiative response term was 2.63 W/m^2/K with a min and max of 2.42 and 3.20 respectively. Since GISS E2-R slightly overestimates the 2xCO2 forcing at about 4.07 W/m^2, this would yield an estimated ECS assuming a linear response of 1.55 K (1.27-1.68 K). Obviously, the known 2xCO2 sensitivity is 2.7 K, because the radiative response in the model changes as temperature changes. But the information about the curve of the response function cannot (I don’t think) possibly be gleaned from Pinatubo, which should raise eyebrows about Wigley’s method to get the 3 K. The last paragraph of that paper:
It depends on what “useful” is referring to here, but based on the context it seems to be referring to ECS. If that is so, I think this conclusion is dead wrong. The best you could do is throw out models that don’t match the response parameter over the Pinatubo period (the model used in the Wigley et al. (2005) paper is one that should be thrown out, if the ERBE data is to be believed), but there are an infinite number of models with largely different values for ECS that could potentially match that response.
Carrick and Troy,
From Troy’s latest excellent post at the Scratchpad, very crudely I am seeing LESS evidence for spatially heterogeneous feedbacks than perhaps guessed at previously, i.e. the apparent offset between WV and lapse rate feedbacks. Not that Isaac Held has ever used them to explain the nonlinear T/F relationship from the models.
So we’re left with a few persistent, annoying effects to test for your research project…clouds…
BillC and Troy, one of the things I’ve been wanting to play with is look at the transfer function between the forcings and model response.
My immediate reaction would be that you could find transient and equilibrium responses to be the same, but I'd suspect that would be coincidental, and it would imply there'd be frequency regions where there'd have to be a negative relation (180° phase shift) between forcing and response (or that there are no long-period responses to climate in the model!).
Er, 2nd-latest post.
Paul K.:
Second to Craig Loehle’s comment — you really should polish this up a bit and publish — say in one of the open-access climate journals? I’m sure the group here would be happy to pre-peer review your manuscript.
Re Soden et al 2002:
I remember puzzling over this paper, wondering why they didn’t go ahead & calculate an empirical CS from the Pinatubo data. I worked one out BOTE at the time and got numbers a bit higher (1.8 or so), but in your ballpark. I find your interpretation that this was “too hot to print” compelling (and disheartening).
I agree with the comments that the Pinatubo CS estimates can’t be considered *equilib* CS — but I wonder if *any* empirical measurement could find a credible ECS number, given that the atmosphere is not (ever?) in equilibrium.
Thanks for your excellent post, calculations and graphs!
Peter D. Tillman
Professional geologist, advanced-amateur paleoclimatologist
BillC,
“What I think is truly novel about Paul_K’s post here is showing how accurately we can deduce the TCR (sorry for the ambiguity, I am going to call it the TCR)…”
Aagh, … please don’t.
Look you’ve even got Carrick doing it now…
“Anyway, it seems to me if you find the transient response to be 1.8°C, that this would suggest that the equilibrium response must be larger.”
I think it is tricky enough to explain or understand the issues here without further confusing terminology. Troy’s terminology is better:
“an estimated ECS assuming a linear response”.
Or by all means let’s call it “the effective linear climate sensitivity for 2XCO2” providing that everyone is clear that we are talking about an estimate of temperature change AT EQUILIBRATION (net flux = 0) following a 3.7 W/m^2 forcing under a linear feedback assumption and we are NOT talking about an estimate of transient temperature.
Smiley somewhere.
Paul_K:
Not sure I fully agree. Linear in “what”?
Linear response usually means that F(X+Y) = F(X) + F(Y). Obviously (?) something else is meant here.
Paul K
An excellent study. I've looked at several independent methods of estimating ECS from observed temperature and TOA radiation data and generally get results close to yours – i.e. ECS values less than two degrees C, usually between 1 and 1.5 degrees C. Probably the most straightforward single method uses observed temperature and ocean heat-content changes. It implies that most (but not all) GCMs match historical temperature changes by offsetting too-high climate sensitivity values with too-high aerosol cooling (mostly in the NH) balanced by too-high ocean heat uptake in the SH. I agree that you should publish your results but experience indicates that it will be a struggle to go against the received wisdom!
On with a few definitions, to see if we can sharpen the language:
Equilibrium climate sensitivity is defined as the ratio of the steady-state change in temperature dT to a step-function-like change in forcing dF. That is ECS = dT/dF.
Note this has units °C/(W/m2). This is the opposite of what Paul_K has written, but I've triple-checked that this is standard. What gives?
"Transient" climate sensitivity is undefined unless you specify what the transient is (in other words, this depends on frequency content). For TCS to TSI forcing (~22 year period), estimates are that you multiply the transient sensitivity by a factor of 1.5 (cf. Camp & Tung, 2007). I would assume that for shorter duration signals, like Pinatubo, the multiplicative factor would be larger.
The technical term for a system in which dT = alpha * dF is a "zero memory system", which is not the same as a "linear system". [You can have nonlinear zero memory systems; for example, T = F^alpha is zero memory.]
If you want to generalize this slightly to:
dT(t) = alpha * dF(t-tau)
That would be a “lagged zero-memory system.”
Neither of these are suitable representations of the climate system, which we know to involve multiple integration times.
Based on my understanding of climate, Pinatubo has to be viewed as a transient forcing, and the only way to relate the transient response dT to what would be expected from an equilibrium shift would be knowledge of the transfer function of the system.
You can for example calibrate a microphone by inputting a short-burst (say 5 cycles) of a tonal signal, and if you know the transfer function up to an overall constant, you can compute the response of the microphone at any frequency for which the transfer function is valid.
It's messier here because the system really is nonlinear in the sense of F(X+Y) ≠ F(X) + F(Y), but the same general notions apply – at least for low amplitude dF's.
Carrick,
Assuming a linear response = assuming that the radiative feedback term can be approximated as lambda*T, as per the IPCC linear formalism in defining radiative feedbacks, where:-
lambda is the total feedback term and
T is the mean surface temperature change.
See http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/ especially the comments section for (endless) clarification of the issue.
Thanks Paul_K, at least I understand what is being linearized.
I’ll note that one of your links in that commentary discuss the issue between transient and equilibrium response too:
From a systems analysis viewpoint, I'd have to say there's nothing surprising about the fact that one-box models such as the one Isaac Held considered can reproduce the behavior of full climate models.
All this is saying is that the short-period response is dominated by the short-period component of the climate system, but this is almost a tautology (unless there were no short-period component of course, but there is – the atmosphere). The bad news is this means comparing models to temperature data doesn't yield useful constraints on long-term behavior of the system. The good news is you can tightly constrain the total forcings (inverse problem). The other bad news is you can't use data prior to 2001 e.g. to validate models, since the model response is so tightly coupled to the forcings that "peeking" seems to become inevitable in the model/forcing reconstructions.
Troy,
A minor nitpick. You wrote:
“And I don’t think it was spurious at the time, prior to studies like Winton et al. (2010) that highlight how the radiative response is not constant with temperature in the GCMs.”
I think there is some confounding of motivation, awareness and validity in this statement perhaps? I am reasonably sure that Foster et al did not know that their argument was spurious, and it is clear from Schwartz's response to the comment that he didn't know either.
However the fact that they didn't know it was spurious doesn't change the fact that it was spurious. Suppose we hypothesise 3 GCMs – each of which shows the same effective linear feedback of 2.3 W/m2/deg K during the historic period, but they have reported ECS values of, say, 2.5 deg K, 3 deg K and 3.5 deg K because of different long-term radiative responses. Suppose you now test a methodology for calculating feedback using historic data generated by each of the three models in turn. The methodology (which in fact does its job perfectly) each time produces a value of 2.3 W/m2/deg K. However, you conclude that the methodology failed in each case to match the known climate sensitivity of the respective GCM, and moreover that it is remarkably insensitive to large "known" changes in climate sensitivity!
The argument was invalid before the Winton and Held paper became available and still invalid afterwards. The only thing that could have changed was awareness of the problem. In practice, I think that there is STILL a very low awareness of the issue.
Hi Carrick,
Definitions.
You wrote:
“Equilibrium climate sensitivity is defined as the ratio in the steady-state change in temperature dT to a step-function-like change in forcing dF. That is ECS = dT/dF.
Note this has units °C/(W/m2). This is the opposite of what Paul_K has written, but I’ve triple-checked this is standard. What gives?”
I am not sure what you mean when you say “the opposite of what Paul_K has written”. Your definition of ECS is fine, but the one I am using is the IPCC definition – the equilibrated temperature change to a doubling of CO2 – rather than to a unit step-forcing. Hence the units are in deg C. The “total feedback” which I refer to is equal to the inverse of YOUR definition of ECS, and so has units of Watts/m2/deg C.
I’ll skip your comments on inappropriate models since I think you are now aware that we are talking about a class of system response models?
Isaac Held’s comment is deeply unsatisfactory. The problem of the changing effective climate sensitivity observed in the GCMs relates to the Earth’s radiative response to a climate state – not how long it takes to get there. The ocean heat uptake controls the latter, but not the former. You may wish to revisit some of the later discussion in the comments section of http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/
Paul_K (Comment #105124) ,
Ah, I think I get what you’re looking for regarding the Wigley et al. fluxes. That would be useful as a further check. Actually, the MAGICC model is available for download so you could try repeating the analysis.
I agree with much of what you said, though given the specific definition of TCR – the change in global average near-surface temperature after 70 years in a model run which features a 1% increase in CO2 concentration every year until doubling – we’re talking about a relatively short period, so non-linear feedbacks are not the major factor they are for ECS. In other words, part of the reason TCR is lower than ECS is this non-linearity of effective sensitivity.
I suspect the presence of apparent non-linear sensitivity in CMIP3-era model runs is mostly related to the equilibrium land-ocean warming contrast effect (see here and here). Essentially, in models most land warming occurs because of sea surface temperature warming, rather than as a direct result of forcing. Isaac Held also illustrates this point here.
Initial SST response to forcing is relatively slow, largely because some of the excess energy is pushed downwards rather than simply heating the surface. However, as the forcing increase persists equilibrium warming of the sea surface (closing an already-present imbalance due to the SST not warming fast enough) begins to occur, gradually increasing the rate of SST warming. Since SST changes are the dominant controlling factor for land surface warming, this also increases the land warming rate.
The lack of apparent non-linearity in observed effective sensitivity may be due to the relatively slow ocean warming over the past 50 years, whatever may be the cause of that, or it might be that the models are getting this relationship wrong.
Re:Troy_CA (Comment #105156)
October 22nd, 2012 at 10:48 am
Hi Troy,
“…this would yield an estimated ECS assuming a linear response of 1.55 K (1.27-1.68 K)”
That seems about right. The original match to GISS-ER data (estimated ECS assuming a linear response of 1.3 deg C) was based on traditional adjusted forcings (Fa values) taken from the GISS site. GISS has modified its site listings now to “effective forcings” which produce a reduced total forcing value. The volcanic forcings took a hit of 25% IIRC. I presume that these are the values you used?
Troy_CA (Comment #105156),
You’ve made a couple of assumptions there – that GISS E2-R in CMIP5 has the same characteristics as GISS-ER in CMIP3 – one of which I know is not correct: Gavin Schmidt has stated that the ECS of the latest model is 3.0 K.
Paul_K,
If you’re talking about the effective forcing values usually given by GISS, these are produced by measuring the in-model temperature response for each component after feedbacks. Given that these figures already contain a signal of the temperature response, they aren’t really appropriate for use in determining model sensitivity. I assume it doesn’t make a big difference though.
Re:DeWitt Payne (Comment #105152)
October 22nd, 2012 at 9:13 am
Hi DeWitt,
“I still think that the non-linearity comes from ocean heat uptake and should be expected. The two box model you used is why you don’t get non-linearity. ”
This is similar to Isaac Held’s argument cited by Carrick. I’m sure we’ve been here before?
Firstly, let me be clear that I can get a non-linear radiative response into my model with no problem at all. I can convert the feedback parameter to be a function of time, or I can convert it into a function of temperature, or I can convert it into a function of time AND temperature. These options may or may not be physically realistic, and the last two options would move us away from an LTI system, but they will allow me to have an increasing effective climate sensitivity as temperature increases as observed in most GCMs.
What I cannot do is change out the ocean model and hope to produce a new relationship between net flux and temperature for a fixed step forcing. The ocean model is only a proxy for the net flux term in an energy balance equation of the form:-
net flux = Forcing – Radiative response
If the radiative response term is written as λT and the forcing is constant, then a plot of net flux against temperature will have gradient -λ. If that gradient is changing, then it is because λ is not a constant value. There are no other options.
Heat uptake is irrelevant here. It controls the time it takes to get to a climate state, but not the radiative emission once the climate state is defined – the latter is controlled only by the surface temperature distribution and the atmospheric properties.
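Spelled out, the algebra behind this point is one line:

```latex
N(t) = F - \lambda T(t)
\;\Rightarrow\;
\left.\frac{\partial N}{\partial T}\right|_{F} = -\lambda,
\qquad
T_{eq} = \frac{F}{\lambda} \quad \text{(where } N = 0\text{)}.
```

Any curvature in a plot of N against T at fixed F therefore requires a non-constant λ; no choice of ocean model can produce it.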
Paul S (Comment #105176),
I should add to this comment that I specifically mentioned CMIP3-era models because many CMIP5 models now contain carbon cycle and other biogeochemical feedback systems which should be major sources of non-linearity.
Paul_K:
You are correct, I used those effective forcings that are up there.
Paul S:
Well okay, fair enough…but unless the efficacy of the volcanic forcing is cut in half in the new GISS-E2, I don’t think these assumptions affect the point of a substantial difference between “assumed linearity in the radiative response” ECS during Pinatubo vs. actual ECS.
I believe the Fa values he uses are the change in flux from a perturbation when the climate is held constant but the stratosphere is allowed to adjust, so they do not contain a temperature response. The effective forcings DO rely on a temperature response, but only insofar as the efficacy of each forcing type is calculated relative to CO2…it isn’t as if the forcings have been back-calculated from sensitivity as in Forster and Taylor (2006), where using them to calculate sensitivity would be a circular process.
Paul_K:
I should have said “reciprocal” but I figured it out. Your feedback parameter has units of W/m2/°C. (Cue Amazing Grace.) I was confused but now I’m not. 😉
Regarding this:
Observationally it matters a lot how long it “takes to get there,” because that plays into how low a frequency for the perturbation is required. And as it turns out it matters a lot in terms of policy issues too.
If you look at the dT that you get from a dF, suppose you did this:
step function of dF at t = 0, then negative step function of -dF at t = tau.
Then plot dT/dF versus tau, where dT = the maximum change in dT observed in the model.
What do you think this figure would look like for a model with a low ECS versus a model with high ECS? (I’m talking full fledged AOGCM now, not e.g. two box toy models.)
I’ll tell you my prediction: The model with low ECS will approach its equilibrium dT at a shorter time tau than the one with a high ECS, and for times that are short compared to the “long latency” component of climate response, the two models would look very similar.
If you agree with that point, or at least see why I think that, then I think you’ll be able to see why I don’t think Pinatubo is telling us anything about ECS.
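A crude version of that numerical experiment, assuming a two-box model with illustrative parameters (not those of any GCM), might look like this:

```python
import numpy as np

def peak_dT(lam, tau_pulse, Cm=8.0, Cd=100.0, gamma=0.7, F=3.7, dt=0.01):
    """Peak warming for a pulse forcing: +F at t=0, back to zero at t=tau_pulse."""
    Tm = Td = peak = 0.0
    for i in range(int(5 * tau_pulse / dt)):
        f = F if i * dt < tau_pulse else 0.0
        Tm += dt * (f - lam * Tm - gamma * (Tm - Td)) / Cm   # mixed layer
        Td += dt * gamma * (Tm - Td) / Cd                    # deep ocean
        peak = max(peak, Tm)
    return peak

# "Low ECS" model (lam = 3.7, i.e. 1.0 K per doubling) vs "high ECS" (lam = 1.2):
for tau in [2.0, 10.0, 50.0, 500.0]:
    print(tau, peak_dT(3.7, tau), peak_dT(1.2, tau))
# For short pulses the gap between the two models is far smaller than their
# ~3x ratio of equilibrium sensitivities; it opens up only once the pulse is
# long enough to engage the slow deep-ocean response.
```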
We really have tilled this field a couple of times before. 😉
If we imagine an Earth with a 100 meter deep ocean, then we can probably all agree that the ECS and TCS would be close to the same. So all the proposed explanations for why modeled ECS and modeled TCS differ boil down to heat entering the ocean… where, how much, and over how long. That is not a simple issue to be sure, but the issue is much more than simply accounting for the heat. Argo does that reasonably well, and it is somewhere near 0.4-0.5 W/m^2 averaged globally. That is a lot less than the net forcing from GHG, even accepting the mid range of the IPCC aerosols. So the difference between modeled ECS and empirically derived values like Paul_K’s IS NOT simply due to the amount of heat being taken up by the ocean. I for one have not heard a credible physical explanation of why ECS can’t be reasonably estimated by an accurate heat balance. I just see vast nonlinearity over very modest temperature changes as demanding a reasonable explanation that goes beyond heat uptake by the ocean.
If you haven’t signed up to review the SOD, you should.
There is still time.
An excellent analysis, and if we were not dealing with a chaotic system with bifurcation points and an indeterminate set of dragon-king events ahead, I would agree that we are looking at an ECR of somewhere around the 1.4C mark, and could even accept the notion that we might see a rise in near surface temperatures of somewhere around 0.10C to 0.20C per decade over the next 5 decades. But that, unfortunately, isn’t the actual universe we inhabit, nor the climate that we are in the process of pushing beyond the capacity of normal feedback processes to accommodate. Certainly there are dragon-king events ahead, bifurcation points, tipping points – call them what you will – but these don’t (and can’t) show up in simple linear or non-linear extractions from recent past events. The linear extractions are of course ridiculously off base, but even the non-linear extractions being used to predict ECR will only give you, at the extreme, black swan events that reside as outliers of the current climate regime. The dragon-king events (of which we probably already had one in the Arctic, perhaps in 2007) alter the trajectory. Thus, while the mathematics is sound as far as it goes and the logic crisp, the inevitable dragon-king events in a system undergoing such rapid change make this only an exercise in math and logic, with no relationship to the chaotic system under discussion. Finding an analogue in the paleoclimate record for a period in which CO2 was around 500-600 ppm and had risen so rapidly from 280 ppm is probably about the best we can do to get some handle on where the bifurcation points will take us, and on the actual ECR.
Paul S,
The effects of Pinatubo on CO2 sinks/sources is something that’s been exercising my mind quite a lot recently, but I’m not familiar with how models treat the carbon cycle.
Can you illuminate me as to how the models approached this before the Carbon Cycle was included explicitly in CMIP5 models? [Which ones?]
Thanks. M.
SteveF:
Because ECS involves processes that can take hundreds of years to pluck. (One of the hallmarks of a high ECS system is a very long settling time, for example.)
The climate response to a short-duration forcing is to pluck those components of the system with fast response time. Heat balance will reflect that too. (There will be a discrepancy, but the magnitude of it will be a function of the form tau_pinatubo/tau_long_response, so the discrepancy can easily be in the noise currently, when you look at energy balance.)
michael hart (Comment #105200),
For historical runs all modelling groups prescribed observed concentrations from ice cores, Mauna Loa data etc.
For the projections SRES scenarios were used, which prescribed emissions rather than concentrations. To produce concentrations groups used carbon cycle models, such as Bern-CC and ISAM, in an external activity. As operated for CMIP3 these were carbon cycle models coupled to simple climate models with prescribed climate sensitivity (more info here and here). So, there is a climatic effect within these carbon cycle-informed concentration projections, but it’s not necessarily the same climate as the GCM using them for input.
For CMIP5 you can go to the Climate Explorer page and look down to the list. ‘ESM’ or ‘CC’ in the model name generally indicates that it incorporates a coupled carbon cycle module.
CMIP5 projections use RCPs (Representative Concentration Pathways) which, despite the name, do actually provide emissions databases.
“Carrick (Comment #105164)
Equilibrium climate sensitivity is defined as the ratio in the steady-state change in temperature dT to a step-function-like change in forcing dF. That is ECS = dT/dF.”
What does that mean? How can you treat temperature, which is real, in this way? Your delta T is a function of a change in the average temperature, averaged over all depths, averaged over all the planet’s surface and averaged over a year.
This is nonsensical in thermodynamic terms; you can only state that body A has a temperature if, and only if, body A is in thermal equilibrium. The oceans are clearly not in thermal equilibrium. Pretending that the oceans are in thermal equilibrium makes as much sense as suggesting that your body is in thermal equilibrium.
There are two separate conversations going on here which are becoming interwoven, and I believe that this is causing considerable confusion, as evidenced by many of the comments.
It is my fault because I couldn’t resist pointing out that Wigley’s argument (b) – i.e. comparison of the DK2005 methodology to the ECS from PCM – was spurious, and analogous to the Foster et al rebuttal of Schwartz. What happened after that was that a number of people who had not been involved in the long conversation about the changing effective climate sensitivity observed in GCMs (http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/) did not understand my argument and sought clarification, and then a number of people who did understand the argument (Troy, SteveF, BillC amongst others) joined the conversation using verbal shorthand which has perhaps added to the confusion. So this first issue relates to the question: why does the effective climate feedback (the inverse of sensitivity) in the GCMs stay constant and high in magnitude over the entire 140 year instrumental period and then start decreasing in magnitude in future projections, i.e. increasing sensitivity over time and temperature?
The climate sensitivity in this context is measured as the inverse of the effective feedback, which is the rate of change of net flux with respect to temperature, while the forcing level is held constant.
The second conversation which is interwoven here relates to the estimation of climate sensitivity from short-term observational data, such as Pinatubo. But this can be extended to the direct use of flux and temperature data post satellite acquisition, or to matching temperature, OHC and forcing data through the instrument period, or to carrying out a direct emulation of GCM results over the instrument period to estimate key parameters. These results ALL produce low estimates of climate sensitivity, with modes typically in the range 1.3 to 1.9 deg C for 2XCO2, but I am sure that some of you can find slightly lower or higher values.
As Troy has tried to point out a couple of times on this thread, the fact that matches to observational data under the assumption of a linear feedback produce low estimates of climate sensitivity does NOT prove that the real-world ECS is low; the GCMs might be correct in predicting a curvilinear relationship between net flux and temperature, i.e. the feedback decreases in magnitude at higher average Earth temperatures. Indeed, if the GCMs are correct, we would EXPECT the observational data to yield lower estimates of climate sensitivity relative to the ECS values in the GCMs.
However, this does leave some mighty difficult questions. WHY does the radiative response to temperature change over time in the GCMs? It cannot be explained by arm-waving arguments about ocean heat uptake; they don’t hold water (pun intended). It could be explained by persistent spatial redistribution of temperature or of clouds or of sea ice albedo, but so far these explanations are lacking. Secondly, even if we accept that there are some long-term effects from any or all of the above, why do the feedback values from observational data not yield values which are at least compatible with estimated fast feedbacks, such as water vapour and lapse rate? Analytic calculations of WV plus lapse rate feedback at constant relative humidity typically yield climate sensitivity values equivalent to 1.8 to 2.2 deg C for 2XCO2. We have been told in many, many papers that these values are compatible with GCM responses (average 1.9 deg C). However, the GCM average response values come from the evaluation of feedbacks in future projections of high temperature! Do you see a problem here? It’s a travesty that we can’t find the missing feedbacks when matching the observational data over the instrument period.
In any event, the main point of this message is to ask commenters to try to separate the two issues:- on the one hand, the short-term estimation of effective climate sensitivity from observational data and, on the other hand, the curvilinear response of net flux to temperature (or increasing effective climate sensitivity) observed in the majority of the GCMs.
Carrick 104201,
Sure there have to be long lags, considering the thermal mass of the ocean (and the very long response time that implies). But that is not really the issue; of course long equilibration puts some warming “in the pipeline”, but that would be the case even if the response were nearly linear in the temperature domain (as opposed to the time domain, where everyone agrees the response is nonlinear). What I have not seen is a credible/plausible physical mechanism by which the response in the temperature domain becomes very non-linear. What changes to make the sensitivity nonlinear?
Carrick,
My guess is that it may be related to projections of aerosol offsets over time in the GCMs, rather than nonlinearity in the response. For example, if you project the aerosol influence of growing fossil fuel use for 75 years followed by a leveling off (or even a gradual fall), then you automatically see an exaggerated ‘delay’ in warming… sort of an “in the pipeline” aerosol effect tied to fossil fuel use, and based on the aerosol offsets commonly used, the effect would be quite large.
SteveF,
Aerosols – I don’t think so, because the apparent nonlinearity is present in 2x CO2 runs where all that is done is to increase the GHG forcing and hold all other forcings constant.
The difference in the 2xCO2 runs in the models (other than the obvious, we haven’t instantaneously doubled CO2 in the real world) is that you see the curvature “almost immediately” in the time domain.
Paul_K, sorry, from here on I will not refer to the apparent linear sensitivity as TCR. It’s just that they are practically very close in value in a world where forcing increase is approximately linear.
Paul_K (Comment #105205),
Thanks for that clarification. Like you, I find the lack of physical explanation(s) for the non-linearity in the temperature domain troubling. If the models are wrong about the evolution of feedbacks, then the error most likely is in those factors which are least constrained by observation… which points to the models’ treatment of clouds, and indirectly, the ocean circulation models which influence how ocean surface temperature will evolve over time and location. A large and nonlinear increase in clouds at higher latitudes in winter (when there would be no compensating increase in cloud albedo… there is little wintertime sun!) with a slowly warming ocean surface would do it…. but what is the physical mechanism by which that would happen? This seems completely lacking.
.
The Troy/Andy Dessler dust up about net cloud feedback seems relevant to this discussion; Dessler claims that the net cloud feedback is strongly positive in the short term… leading to a diagnosed climate sensitivity close to the middle of the IPCC range (while Troy says Andy’s results are due to a cherry pick of the temperature series Andy likes 🙂 ). But the clouds Andy talks about all have short-term influences. If he is right, then we should see a stronger short term response, not the ~1.8 – 1.9 C/doubling short term response the models suggest. I mean, you can’t have it both ways: if high climate sensitivity is, according to the models, only apparent in the 100 to 1000 year time frame, then how can short term cloud feedback be strongly positive?
R. Gates (Comment #105196),
Please explain how you know for certain that there will be Dragon King and Black Swan events. Is it simply clairvoyance, or do you have some other way of knowing this?
TroyCA has a lengthy post on CS at
http://troyca.wordpress.com/2012/10/19/estimating-sensitivity-from-0-2000m-ohc-and-surface-temperatures/
Estimating Sensitivity from 0-2000m OHC and Surface Temperatures
HT to SM. Apologies if this has already been xref’d.
Looking forward to reading it (and the sequels), Troy.
Very interesting discussion. What has always bothered me about climate sensitivity estimates is that the nature of the forcing event must have a large influence creating different short and long-term feedback mechanisms. Milankovitch cycles have minor W/m2 changes with very large temperature changes. Conceptually, I believe this to be due to the geographic asymmetry of the forcing which produces large albedo and ocean current feedbacks due to the massive phase change of water. This, IMO, is a completely different forcing mechanism compared to a well mixed greenhouse gas (WMGHG) which increases at a rate of one percent per year over a century or two where the forcing is more evenly applied over the whole of the planet.
For a volcanic eruption, the forcing is very brief, not likely well mixed to the polar regions and the settling aerosols potentially influence cloud formation. Therefore, it is one tiny piece of the puzzle, is a very dissimilar forcing mechanism and therefore, does not have much meaning for determining WMGHG CS, either transient or equilibrium.
On that note, ECS is not possible because equilibrium never occurs. Perhaps maximum CS would be a better choice.
SteveF (Comment #105217)
October 23rd, 2012 at 9:17 am
R. Gates (Comment #105196),
Please explain how you know for certain that there will be Dragon King and Black Swan events. Is it simply clairvoyance, or do you have some other way of knowing this?
___________________
Of course I don’t know for certain that Dragon King events will occur, but they are certainly well within what is to be expected for a dynamical chaotic system undergoing change. On the way to some new overall climate, the transition is never smooth; the paleoclimate record is littered with examples of rapid jumps or steps to new regimes as some underlying forcing brings the system through a series of tipping points, or Dragon King events. So the answer would be both the math of chaos and climate theory and the paleoclimate record, which shows abrupt Dragon King events. Related to the black swans: as these are outliers within a regime, they are going to occur simply by probability, as any given regime has extreme years. The black swans represent the outliers within a regime, whereas the Dragon King represents the shift to a new regime.
R. Gates,
“but it is certainly well within what is indicated as to be expected for a dynamical chaotic system undergoing change”
What exactly about the behavior of Earth’s climate indicates Dragon King events are going to happen? I am looking for specific information (hopefully, based on the math of chaos theory) that indicates the Earth’s climate system is likely to undergo these events in response to GHG driven warming. Vague statements about things which could theoretically happen are of little use, and no more interesting than discussing the small possibility of an asteroid hitting Earth and killing off most species. What you ought to show is that warming driven by CO2 presents a greater risk of ‘catastrophe’ than, for example, descending into an ice age, which the recent history of the Earth indicates is almost certain to happen.
Paul_K wrote above:
“net flux = Forcing – Radiative response
If the radiative response term is written as λT and the forcing is constant, then a plot of net flux against temperature will have gradient -λ. If that gradient is changing, then it is because λ is not a constant value.”
However, I’m assuming the model Paul_K fit in this post is:
net flux = Forcing – Radiative response – Deep ocean uptake
Until deep ocean uptake saturates and becomes negligible, λ in the above equation can’t be a constant. Equilibrium climate sensitivity won’t be observed until deep ocean uptake becomes negligible. I assume that this (plus slow feedbacks) is why it takes many decades for climate models to reach equilibrium.
Do any of the definitions of climate sensitivity apply to Paul’s analysis of Pinatubo? Transient climate sensitivity usually refers to the average temperature change reported by models in the decade before and after CO2 has been doubled by a 1% annual increase. Deep ocean uptake hasn’t saturated in this scenario. Pinatubo obviously didn’t persist long enough to report on slow feedbacks such as ice-albedo and vegetation, so Paul_K can’t have found the true ECS. If I’m correct in assuming this is the correct model (a big “if”):
net flux = Forcing – λT – Deep ocean uptake
things become clearer. Paul_K has deconvoluted the deep ocean uptake from the fast radiative response – which is sometimes called the Charney climate sensitivity (the no-feedbacks climate sensitivity plus water vapor, cloud, and lapse rate feedbacks).
Re: Frank (Oct 23 16:01),
Precisely. That’s what I’ve been trying to say since Paul_K posted his first article on this subject. That also explains the hysteresis. If you do a square wave forcing, doubling CO2, holding for N years and then bringing the CO2 back to 1X, the heat in the deep ocean will be higher after another N years than it was at year zero.
Re: DeWitt Payne (Oct 23 19:59),
Editing doesn’t seem to be working right now.
Even in the temperature domain rather than the time domain, there will be hysteresis if there is any feedback such as ice/albedo or water vapor feedback. There is no driving force for the atmospheric water vapor content or the average ice coverage to return to its initial condition, certainly not if the system heat content is higher, which means the equilibrium temperature after the square wave will be higher than before.
DeWitt editing doesn’t work on guest posts.
By the way, here’s the response curve for a 3-box model, which has short-period (0.1 yr), “intermediate” period (20 yr) and “long-period” (100 yr) responses.
Figure.
If Paul_K is interested, I can run Pinatubo + ENSO34 (lagged) + TSI through it.
Is there anybody here who thinks he’ll be able to pull out 2.5°C/doubling of CO2 from this model using Pinatubo data?
(It seems to me you’re left making the argument that there aren’t any long-period responses in climate, which is a tough one to make given the Earth has oceans.)
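For anyone who wants to reproduce the flavor of that curve, here is a minimal sketch that treats the 3-box response as three parallel first-order modes; the heat capacities and feedbacks below are my illustrative assumptions, not Carrick’s actual parameters:

```python
import numpy as np

# Steady-state response of a linear "3-box" model to forcing dF*sin(2*pi*t/tau),
# treated as three parallel first-order modes (short, intermediate, long).
C   = np.array([1.0, 40.0, 400.0])   # heat capacities: ~0.5, 20, 200 yr modes
lam = np.array([2.0, 2.0, 2.0])      # per-mode feedbacks, W m^-2 K^-1

def dT_per_dF(tau_yr):
    """Steady-state amplitude ratio |dT_ss/dF| at forcing period tau_yr."""
    w = 2.0 * np.pi / tau_yr
    return abs(np.sum(1.0 / (lam + 1j * w * C)))   # complex sum over modes

for tau in [0.1, 1.0, 5.0, 20.0, 100.0, 1000.0, 1e5]:
    print(tau, dT_per_dF(tau))
# The ratio climbs with period and only reaches sum(1/lam) - the model's
# equilibrium sensitivity per W/m^2 - as tau -> infinity; a ~2-3 yr event
# like Pinatubo samples little more than the fastest mode.
```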
Bill C,
I think the Winton et al. (2010) table 1 gives a pretty good clue about this, as it compares the first 70-year feedbacks to the remaining 530-year feedbacks till equilibrium in GFDL CM2.1. As they indicate (and I experienced in that post), the large differences in temperature and water vapor feedbacks between periods largely cancel each other out, although the reduction in the negative temperature response is slightly stronger than the reduction in the positive water vapor feedback, by 0.18 W/m^2/K, so you get a slight increase in sensitivity there. Albedo plays a smaller role. As you guessed, the cloud feedback experienced the biggest change, of 0.23 W/m^2/K, so my guess would be that this contributes the most to non-linearity in most models. Given this feedback is by far the most uncertain, at first blush it would not give me great confidence that the “real world” experiences the same degree of curvilinear response as several of these GCMs.
DeWitt suggested a square wave, but the eccentricity of the Earth’s orbit provides an annual sinusoidal solar forcing with peak input in January. Perhaps Paul_K’s method could be applied to this forcing. Of course, the response to eccentricity forcing disappears when you work with temperature anomalies, and working without anomalies is tricky. Near-surface air temperature peaks in July (amplitude 1.5 degC), but this is attributed to the rapid warming of land in the northern hemisphere. Global SSTs peak in March (amplitude 0.25 degC) and tropical SSTs peak in April (amplitude about 0.5 degC, but not particularly sinusoidal). ScienceofDoom had a post on Ramanathan’s estimate of clear-sky water vapor + lapse rate feedback from this data, but his simple approach didn’t include the annual peak in solar forcing or deep ocean uptake.
scienceofdoom.com/2010/05/30/clouds-and-water-vapor-part-one/
Frank 105249,
You are misstating the energy balance. It is a TOA radiative balance: the net flux on the LHS is the net radiative flux, not just the net flux going into the mixed layer.
I am on a mobile at the moment and will clarify fully a bit later.
Carrick,
I can’t see your figure at present but I am guessing that you have just compared some responses in the time domain. Try plotting net flux against temperature.
paul_k, see if you can read this one.
The response is linear, but if you want me to make it nonlinear that’s easy to do (I’m using a time domain solution so it’s easy to add). I’m not sure what you’d learn from that exercise though. The “problem” is in equating short-period forcings like Pinatubo to long period forcings like (e.g.) a 250 year doubling of CO2.
The curve shows for short period forcings you get a smaller equilibrium temperature response and I’m plotting dT/dF versus the period of the forcing (which is sinusoidal).
The take away from the figure is that events like Pinatubo give you 1.5°C/doubling of CO2, even though the model’s real (period -> infinity) value of dT/dF -> 2.5°C/doubling of CO2. I think this is pretty similar to what is seen in a full GCM. The thing to note is that the curve looks pretty much the same for periods up to say 10 years regardless of equilibrium climate sensitivity. It’s the long period stuff where you’ll see a big difference.
If you let your lambda itself be time dependent (and increasing with long period), you’ll find the response is linear with respect to dF for fixed period, but again the equilibrium value of the ratio dT/dF will get larger as the period of the oscillation increases. That’s again a linear phenomenon, but I could use the un-linearized version of the feedback just as easily.
What do you think would happen as I changed the forcing but held the period constant for this model? Would you expect something besides a linear response in T as a function of dF?
Paul_K,
The problem is that when you transform from time to temperature, you assume that forcing is not a function of temperature. That can only be true if there are no feedbacks. Feedbacks like water vapor and ice/albedo work by changing the forcing. If a carbon model is included, you may also get additional CO2 released from warming the deep ocean.
Your forcing vs temperature plot begs the question.
Troy – Thanks, I agree, though I didn’t realize the WV feedback “offset” was still nearly as large as the cloud feedback.
Carrick – I don’t have time to get through your math right now, but I am not sure if what you are plotting is the same as what Paul K is talking about. You plotted the period of a sinusoidal forcing versus dT/dF. However I think in FG06 terminology Paul is talking about dT/d(F-N), where (F-N) is the net TOA flux.
Let me try to take a simple case where we increase the solar output by an amount that increases incoming solar energy by an average of 1 W/m^2 on the earth’s annual trajectory. [Can we just do this as a step increase, or do we need to make it sinusoidal given the annual cycle?].
The earth’s surface/atmosphere temperature will increase at a rate modulated by the heat uptake into the ocean, i.e. a long temporal latency as compared to an earth with no ocean. Until the equilibrium (really, steady-state) temperature increases such that the outgoing radiation has also increased by 1 W/m^2 on average, there will be a TOA flux imbalance. This increase in outgoing radiation is the variable N. The forcing F, in this case, simply refers to the 1 W/m^2 average increase in solar input.
In this case, F-N=the ocean heat uptake in W/m^2 (approximately, excluding ice melt and so on). But T will increase, at some rate that we don’t even really care about in this example, until at time = infinity the heat uptake is zero and “equilibrium” is reached.
What does plotting dT/d(F-N) in this case, look like?
Crap. No editing…TOA imbalance is not N. TOA imbalance is F-N.
TroyCA,
In order to tune the MAGICC model to an AOGCM, such as PCM in Wigley et al. 2005, they determine an “effective climate sensitivity” of the AOGCM using a transient simulation and make that S in a linear DeltaT = DeltaF * S formula. In most cases the determined effective sensitivity of a model has been found to be lower than the ECS because it increases over time (see Raper et al. 2001), though Held & Winton noted that use of a constant effective sensitivity was a reasonable approximation over multi-decadal timescales. For PCM effective sensitivity was determined to be 1.7 and ECS 2.1.
My assumption (though I don’t think this is actually explicitly mentioned in the text) was that, for example, when Wigley et al. found a central estimate of ~3ºC for ECS with the Pinatubo response the effective sensitivity driving that response was actually ~2.4 (assuming both terms scale linearly with each other). In that way they accounted for the non-linearity of effective sensitivity.
Of course, there is some model-dependence in this translation, so whether it gives a reliable estimate for real-world ECS is uncertain – one of the reasons why climate scientists have largely abandoned the idea of determining ECS from short term climate changes.
Interestingly, AR4 notes that ‘Under SRES scenarios for which AOGCMs have been run (B1, A1B and A2), the ensemble average of the tuned versions of MAGICC gives about 10% greater temperature rise and 25% more thermal expansion over the 21st century (2090 to 2099 minus 1980 to 1999) than the average of the corresponding AOGCMs.’
That probably has some implications for Wigley et al.’s results.
BillC, the F’s I’m using are the anomalous top-of-atmosphere forcings. That means you have a steady-state value of F0 that produces a temperature T0, and you’re changing
F0 to F0 + dF sin(2pi/tau * t)
and measuring the equilibrium response
T0 + dT_ss sin(2pi/tau*t + phi)
where “ss” = steady state.
I’m showing that dT_ss/dF predictably depends on the period tau with longer periods giving you a larger amplitude dT_ss for the same amplitude of forcing dF.
I’m choosing to make the forcing sinusoidal and vary the period because that allows you to examine the response to each period of forcing separately, so you can consider what it means to look at annual seasonal response to TSI, to ENSO, to Pinatubo to long-period TSI. I could also do step function forcings, but I don’t see that being any easier to interpret.
I’m not sure I follow what you’re driving at with respect to the oceans, but the model I use includes oceans (at least conceptually), so the responses you see includes the equilibrium response of climate to oceans.
I think it’s important to focus on the frequency content of the forcing here. I know that’s not how ECS is normally defined, but you could define it as the limit of dT_ss/dF as tau->infinity, so I think it’s reasonable to look at here.
Re: Carrick (Oct 24 07:18),
I think you and Paul_K are talking past each other because you’re looking at the responses in different domains with different axes. It may be that if you plotted your data in the form that Paul_K uses with net TOA forcing on the y axis and temperature on the x axis, you would get a linear plot with a slope of λ. Or not. I think not because I’m fairly certain that there is a flaw in Paul_K’s analysis even without feedbacks.
Re: Carrick (Oct 24 08:24),
Should have updated before I posted.
You’re not looking at the net TOA forcing. At short periods, the net forcing will be larger because the ΔT is lower.
DeWitt, I’m driving the 3-box climate model with a sinusoidal forcing of amplitude dF (=1) and measuring the steady state response dT_ss, while varying the period of the forcing.
The point of the exercise was merely to demonstrate that you can’t use forcings with periods of say five years to deduce equilibrium climate sensitivity.
I could put in a full-blown feedback for temperature (multiple ones if people like). However, the response dT_ss is linear in dF regardless of whether you linearize the 1/(1-f) feedback term or not. The only way it becomes nonlinear is if you modulate f itself, or if you introduce an explicit nonlinearity in T or dF into the system.
On the other hand, if you linearize 1/(1-f), your deduced value for f from that approximation will vary as you change dF, but that’s just because you “screwed up” by making the approximation. Saying the approximation breaks down is a different thing than saying that the feedback is nonlinear.
DeWitt:
How am I not? Review the n-box models.
I’m using TOA forcings by construction.
Paul_K, any ideas on what the time function is that describes the TOA net flux response to an instantaneous doubling of CO2?
For example, in a one-box model it would be “Fa * exp(-t/tau)”. Looking at your previous post, the fit “3.5 * exp(-(t^.25)/2)” {where tau=2yrs} kinda works from a simple curve fitting exercise (I can’t think of any physical justification for using the quad root of time). Any ideas would be appreciated.
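Not an answer, but one observation that may help: for any linear n-box model, the net TOA flux after a forcing step decays as a sum of n exponentials (the system’s eigenmodes), and a stretched exponential can mimic such a sum over a limited window, which may be why the quad-root fit “kinda works”. A quick synthetic check (the two time constants below are arbitrary assumptions, not fitted to any model):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "truth": a two-mode exponential decay of the net TOA flux.
t = np.linspace(0.1, 100.0, 500)
N = 2.5 * np.exp(-t / 3.0) + 1.2 * np.exp(-t / 60.0)

def stretched(t, a, b, p):
    """Stretched exponential of the form a * exp(-(t**p) / b)."""
    return a * np.exp(-(t ** p) / b)

popt, _ = curve_fit(stretched, t, N, p0=(3.5, 2.0, 0.25), maxfev=10000)
resid = N - stretched(t, *popt)
print(popt, np.abs(resid).max())   # modest residual over this window
```

Over a longer window, or outside the fitted range, the stretched exponential and the true multi-exponential decay will diverge, so the fit has no obvious physical interpretation.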
Re: Carrick (Oct 24 09:21),
You certainly can’t if all you have is the temperature data. That would be like trying to determine the transfer function of a complicated circuit with just short period stimulation.
I’m less sure that you couldn’t, though, if you had a precise and accurate measure of the TOA radiative imbalance as well as the temperature. That’s something you can’t get with an electric circuit because the forcing is also the signal. Of course you can’t get that in the real world either, at least not with anywhere near sufficient precision.
DeWitt, I really don’t follow what you’re trying to say.
You do know I’m referring to these models, right?
I’m assuming a 3-box model and imposing a top-of-atmosphere forcing F = F0 + dF sin(2pi/tau t), or as anomalized forcing, F = dF sin(2pi/tau t) and then I’m looking at the response of the top box (the atmosphere + presumably the top 2-m of the ocean) to this forcing.
I’m not trying to solve an inverse problem, I’m performing a direct calculation for the temperature response of the top layer of this three box model to this imposed forcing.
DeWitt:
I also don’t understand this argument.
Since what I am doing is applying an anomalized stimulus F = dF sin(2pi/tau t), and in this case letting it settle (I use 5000 years for the settling time for each presentation), of course I can compute the transfer function of this complicated circuit this way.
This is my bread and butter, I do this sort of thing all of the time in real life on complicated systems.
Again regarding this:
You get the transfer function by applying a series of tones with varying period, measuring the response, and normalizing to the amplitude of the stimulus, as a function of period or frequency of stimulus. This is exactly the paradigm I used to generate the above curve.
This works regardless of whether you are driving a climate model, a complex electrical system, or an electro-mechanical system.
IMO this has to go back to the definition of forcings and we have to have a definition of forcing that is understandable by everyone in both mathematical and plain-language terms.
I’d like to pose what I think are some very simple questions, and I do think the “step function” approach helps with the verbalization and visualization even if it doesn’t make the math any easier.
Carrick – if the sun, for whatever reason, ups its output to the point where on average, over the whole year, the Earth sees +1 W/m^2 at the TOA, and this condition continues in perpetuity, what is the “Forcing” plotted as a function of time. Is it
1) +1 W/m^2, forever.
2) +1 W/m^2 at t=0, decaying to zero at t=infinity (as the earth reverts to radiative steady-state).
3) Something else.
?
PS I am aware of the language the IPCC uses to define forcing. IMO, it’s not stated as simply as it could be.
BillC, to be really specific, if you assumed you had a 1 W/m2 change in TSI at the tropopause (“top of atmosphere”), in the input code to your GCM, you’d just have a step-function in TSI forever. You wouldn’t manually adjust the TSI so it returned to zero after the system returned to equilibrium.
I’d restrict myself to how climate modelers and practitioners use it.
But that’s me.
Carrick,
That is a nice graphic, but I am not sure it really does what Paul_K is talking about. Changing the frequency of an applied forcing certainly should lead to changing magnitude of the response. But that is because the response is being tempered at higher frequencies due to uptake of the applied forcing by the larger/slower reservoirs. The “apparent sensitivity” you are measuring does not take into account that flux into and out of the reservoirs… if it did, then you would diagnose the same ECS (2.5) for all frequencies.
SteveF:
No it’s tempered by the inability of these large thermal inertia components of the system to respond in that short of a time period.
Reservoirs with long time constants will have very little flux transferred into or out of them over a single cycle with a period that is small compared to the time constant of the reservoir. Effectively “they aren’t there” for fluctuations that are too short in duration, and no energy is transferred between them and the rest of the system.
You can remove the reservoirs with long time constants, and you’ll get exactly the same response for short enough time periods.
Carrick,
“No it’s tempered by the inability of these large thermal inertia components of the system to respond in that short of a time period.”
Sure, which just means the larger/slower reservoirs are taking up/giving back some of the applied forcing from the fast reservoir, damping its response rate.
.
“You can remove the reservoirs with long time constants, and you’ll get exactly the same response for short enough time periods.”
Unless I misunderstand what you are saying, I really think you are mistaken here. If you apply the same range of forcing frequencies to the two box model, the transition to “full response” has to take place at a higher frequency. I suggest you try removing the slowest of the three boxes and test again.
Hi Paul S (Comment #105332),
I’m not sure if you’re simply filling in some other details (many of which are interesting, thank you!), but I don’t see anything to contradict the idea that “the Wigley argument about not matching the 2.7 K should now be considered irrelevant”?
To be clear, there is no set relationship between “Effective Sensitivity” and ECS in models. Take the example of two extremes. For MPI ECHAM5, the “linear response” after 100 years is -0.88 W/m^2/K, corresponding to an “Effective Sensitivity” of 4.01/0.88 = 4.6 K, compared to the ECS of 3.4 K. On the other hand, GFDL CM2.1 has an “Effective Sensitivity” of 3.5/1.37 = 2.6 K compared to an ECS of, again, 3.4 K. The ECS may be more or less than the “Effective Sensitivity”.
I agree completely that “there is some model-dependence in this translation”…in fact, by fitting to a particular model Wigley et al. are essentially assuming the relationship between the volcanic response and ECS, which in turn means they are assuming their ECS. I could get a better match to GISS-ER ECS by assuming a relationship between the volcanic radiative response and ECS that is similar to GISS-ER, but this assumption would not derive from anything present during the Pinatubo period, hence my “luck” comment. Using that same assumption with a model or system not characterized by the same relationship would produce a bogus estimate. This is why upthread I noted:
And I do think that it may be a good idea to abandon “the idea of determining ECS from short-term climate changes”, although I suspect we will nonetheless see several of these cited in AR5.
Nevertheless, the reason I would have little confidence in the Wigley et al. estimate is:
1) They used a model that doesn’t match the TOA flux data, so that the radiative response term is incorrectly calculated during the Pinatubo period.
2) They have (implicitly) assumed a relationship between the radiative response during Pinatubo and ECS by fitting to a model. Different models could have vastly different relationships, so this choice is arbitrary.
Here you go SteveF: Effects of removing slowest, then the slowest and next slowest boxes.
And I think you didn’t understand what I said, which is exactly that at short enough periods (high enough frequency) the transition to full sensitivity occurs at a higher frequency.
However, the point in the n-box model is that each box adds a component to climate sensitivity. Removing one of the boxes changes the equilibrium sensitivity.
(In this case I assumed that all three components contributed positively to the ECS, this is just for purposes of illustration. You could pick the third box to have a negative sensitivity for example.)
The take home here is that the climate sensitivity you get short period forcing isn’t a measure of ECS. It only tests those components of the system which have a response time comparable or shorter than the period of the forcing.
Carrick,
Yes.
But you’re not calculating the net forcing at a given temperature. That is the imposed forcing minus the Planck response. At a forcing period of 1 year with the temperature at 1.5 C, the net forcing must be less than 1 and greater than zero, probably about 0.4. That would make a plot of net forcing vs temperature linear with a slope of -2.5.
Would you understand it better if I said using only high frequencies? In the case of your three box model plot that might be using only forcing periods less than 0.1 year. I have done electrochemical impedance measurements as well as sweep and step measurements. I do have some comprehension of frequency domain vs. time domain [snark edited].
Not exactly the same unless the time period is infinitesimal and there is no response at all. Maybe you can’t measure the difference instrumentally because of lack of precision, but it’s there. In your graph, for example, the response slope is greater at a forcing period of 1 year than it would be if the slowest time constant box wasn’t there.
DeWitt:
Understood now.
More or less. You’d only learn about the high frequency portion of the transfer function by using high frequencies. Of course this is pretty much a tautology.
AND it means we should agree on this point…which is you can’t say anything about ECS from high frequency measurements, since ECS is nothing more than the zero-frequency limit of this curve.
Well not “exactly the same”. They approach each other exponentially though. (This happens in real physical systems too… if you drive it much faster than the thermal response time of the system, you get skin penetration that goes exponentially to zero as some function… square root?… of
Yes it’s there, it’s just very small. Not experimentally observable with current instrumentation is probably a better way to put it.
Carrick,
Boxes add sensitivity? Whoa. The sensitivity is determined by how loss to space changes with surface temperature (and of course the geographic distribution of surface temperature is part of that). I clearly do not understand what you are doing with that graphic. If you have a known climate sensitivity, and apply cyclical forcing over a range of frequencies, then the size of the response at higher frequency is always greater than at lower frequency.
grr… broken edit function on these guest posts bites.
“you get skin penetration that goes exponentially to zero as some function… square root?… of tau_0/tau where tau_0 is the response time of the system and tau is the period of the sinusoidal forcing.”
As one would imagine, there’s published literature on this.
The point is you get a factor like d/D, where d is the penetration depth and D is the thickness of the layer, for what fraction of the system with the long time constant is getting “driven” by the sinusoidal forcings. As tau becomes much smaller than tau0, the fraction of the volume of that system that is being thermally driven goes rapidly to zero.
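For reference, the standard skin-depth result from the periodic solution of the heat equation in a diffusive half-space (κ is the thermal diffusivity, τ the forcing period):

```latex
T(z,t) = \Delta T \, e^{-z/\delta}\,
\sin\!\left(\frac{2\pi t}{\tau} - \frac{z}{\delta}\right),
\qquad
\delta = \sqrt{\frac{\kappa \tau}{\pi}}
```

So the penetration depth d in the d/D factor above does indeed grow as the square root of the forcing period, and shrinks toward zero as the period becomes short.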
SteveF:
If you are fixing the climate sensitivity at a fixed value, yes.
The point is, you don’t have that freedom.
You can’t rescale the sensitivity of a three-box model so that a two box model matches it. The shorter period components are more strongly constrained theoretically and experimentally.
If you want, think of the shortest latency box being the classical forcing associated with the CO2 GHG effect, which we know to be near 1.1°C/doubling of CO2.
If you remove the other two boxes, you can’t just change this sensitivity to 2.5°C/doubling of CO2.
By the way, SteveF, my comments come from noting that high sensitivity models have longer time periods associated with their settling times.
My speculation is, if you repeated my exercise on a range of models, what you would find is that for short enough periods, the transfer function you got would be nearly identical between models. It would only be for long period forcings that the higher sensitivity of some models would become apparent.
(I can actually pull this out of simulation results, so this is testable without having to actually reproduce the exact numerical experiment I did.)
Wow this is getting intense.
Carrick – #105358. Understood. But I’m not talking about “what you’d do with your model”. I’m talking about the working definition of “forcing” – i.e., which parameters are encapsulated by “forcing”. However I think this is laid to rest by your response to DeWitt at #105370 about “net forcing”.
BillC, I’ve always used forcing to refer to exogenous drivers like TSI.
“Net forcing” in my parlance would be an appropriately weighted sum over these exogenous drivers.
I’ll take you guys’ word that “net forcing” can be used as the difference between this sum of external forcings and the Planck response function, but this seems to characterize a radiative imbalance, which to me wouldn’t be the same thing as a “forcing”, even if it had the same units.
As to intense…for me just interesting.
Carrick, this is the Wiki page on Ocean Thermal Energy Conversion:-
http://en.wikipedia.org/wiki/Ocean_thermal_energy_conversion
Now, while it is true that one cannot get work from all temperature gradients (such as between atmospheric layers), one can get work from the difference in temperature between the surface and the bottom of the oceans.
The fact that one can get work from this gradient is very instructive. Firstly, the oceans are in disequilibrium. The oceans are not at, never were at, and are never going to be at thermal equilibrium.
Secondly, if you base any model on the assumption that the oceans are going to come to thermal equilibrium, then the model is worthless.
Carrick,
I fear we are once again talking past each other a bit.
Hypothesis 1: the climate response is linear in the temperature domain.
Hypothesis 2: the climate response is very non-linear in the temperature domain.
Perhaps we have no simple way to determine which is true. I think ultimately you need enough solid heat balance data and enough time to tell with any certainty. If we are going to accept a very non-linear response in the temperature domain, then that acceptance depends on understanding clearly what physical process(es) are driving the rather extreme non-linearity which some models (but not all) diagnose.
SteveF, I think we’re just interested in different issues.
I’m interested in linear effects (and that includes feedback phenomena) that have time dependence, and it seems you’re more interested in nonlinear effects that don’t exhibit any time dependence.
If you want an example of something to put in the third box… ice albedo. It represents a large feedback, also has a delayed response, and the feedback parameter is certainly a function of temperature.
(So it satisfies simultaneously being nonlinear, time-lagged, low-pass, and time varying.)
Carrick,
I’ve now seen your figures at last.
If you are using the equation suite that I think you are using, then one of your boxes (only) will be radiating to space. Try this experiment instead.
Take a fixed step-forcing of say 3.7 W/m^2. Calculate the net flux at each time step, and plot it against your radiating box ΔT values (transient not steady state).
You should see a straight line, I think. Now extrapolate the straight line to the temperature axis. What value does it cross at? Compare with your model’s climate sensitivity.
Now change the forcing input to a sinusoidal input and repeat the above without changing the equation parameters. Then plot (F-net flux) against your radiating box ΔT values (transient not steady state). Take the inverse of the gradient and compare it with your model’s climate sensitivity. How does it compare?
Now try changing the absolute and relative heat capacities of your boxes, but leave alone the climate sensitivity term – i.e. don’t change the coefficient applied to your radiating box temperature. Repeat the above exercises again. Same answers as before?
The point is this. The “latency” that you are showing in the time domain does not translate into the temperature domain. If you want to change the shape of the net flux vs temperature relationship, you cannot do so without changing the term defining the OUTGOING radiative response. At the moment your model has it as a simple linear function of temperature (I think). The relationship between (F-net flux) and temperature will therefore ALWAYS be a straight line.
Conversely, you cannot explain the curvature in the model’s response by “latent” ocean response, as these numerical experiments should demonstrate to you. You need (instead) to explain what changes in the surface temperature distribution or the long-term atmospheric properties will explain the phenomenon.
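A sketch of the experiment Paul_K proposes, on an illustrative two-box model where only the mixed layer radiates to space (so the net flux is N = F − λ·Tm; all parameter values are assumptions for demonstration only):

```python
import numpy as np

def run(F_of_t, lam=2.3, Cm=8.0, Cd=100.0, gamma=0.7, dt=0.01, years=300):
    """Integrate the two-box model under forcing F(t); return Tm, F, N series."""
    Tm = Td = 0.0
    Ts, Fs, Ns = [], [], []
    for i in range(int(years / dt)):
        F = F_of_t(i * dt)
        N = F - lam * Tm                       # net TOA flux
        Tm += dt * (N - gamma * (Tm - Td)) / Cm
        Td += dt * gamma * (Tm - Td) / Cd
        Ts.append(Tm); Fs.append(F); Ns.append(N)
    return np.array(Ts), np.array(Fs), np.array(Ns)

for forcing in (lambda t: 3.7,                                  # step forcing
                lambda t: 3.7 * np.sin(2 * np.pi * t / 30.0)):  # sinusoid
    Ts, Fs, Ns = run(forcing)
    # (F - N) vs Tm is a straight line of slope lam in both cases; for the
    # step case, N extrapolates to zero at Tm = F/lam. Changing Cm, Cd or
    # gamma reshapes the trajectory in time but leaves this line untouched.
    print(np.polyfit(Ts, Fs - Ns, 1)[0])   # ~2.3 in both cases
```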
Chris Colose has engaged in a tangential fashion on Climate Audit
http://climateaudit.org/2012/10/22/two-blogs-on-climate-sensitivity/#comments
Carrick,
Sorry, I said earlier “the size of the response at higher frequency is always greater than at lower frequency”, which is of course exactly the opposite of what I was trying to say: the size of the response to a cyclical applied forcing is always greater at lower frequency than at higher.
Carrick,
“If you want an example of something to put in the third box… ice albedo. It represents a large feedback, also has a delayed response, and the feedback parameter is certainly a function of temperature.
(So it satisfies simultaneously being nonlinear, time-lagged, low-pass, and time varying.)”
Yes, but it would also be self-evident in the results since it would show strictly a SW effect. Evidently Isaac Held has been looking for an explanation of this phenomenon for longer than we have – or at least I have – and it is hard to believe that he did not consider this. See the following rather remarkable paper on recalcitrant heating – http://www.gfdl.noaa.gov/cms-filesystem-action/user_files/ih/papers/recalcitrant_2.pdf
Carrick,
“If you want an example of something to put in the third box… ice albedo. It represents a large feedback, also has a delayed response, and the feedback parameter is certainly a function of temperature.”
.
Fair enough. Define the scale of the ice albedo feedback, the lag associated with melting, and how those two relate to heat balance. Just to be completely clear: I do not doubt that there exists the possibility of non-linearity in the temperature domain, I just want to see plausible (and testable) calculations of the effects involved. As I noted (much) earlier, the issue is simple: where is the beef?
Re:
Frank (Comment #105249)
October 23rd, 2012 at 4:01 pm
Hi Frank,
I promised you a fuller response when I could get to a computer.
You wrote:-
“However, I’m assuming the model Paul_K fit in this post is:
net flux = Forcing – Radiative response – Deep ocean uptake
Until deep ocean uptake saturates and becomes negligible, λ in the above equation can’t be a constant. Equilibrium climate sensitivity won’t be observed until deep ocean uptake becomes negligible. I assume that this (plus slow feedbacks) is why it takes many decades for climate models to reach equilibrium.”
Firstly, I should emphasise that I am not changing the definition of climate sensitivity. I am however assuming in the model above that the Earth’s radiative response is linear with temperature – which is also the assumption used in all mainstream feedback calculations. My temperature at equilibrium is found when the net flux goes to zero or time goes to infinity.
The basic model is the same energy balance model as that used by Wigley in MAGICC:-
Net Radiative Flux (TOA) = Forcing (TOA) – radiative response (TOA)
The term on the LHS is the total radiative flux (change) entering or leaving the planet.
About 90% of the energy is assumed to go into the oceans. So as a first approximation, we can write Net radiative flux = dH/dt where H is the ocean heat energy.
In the simple model above I am using a two-layer slab ocean model – the surface mixed layer and a deep ocean layer. The deep layer is connected to the mixed layer via a simple flux term which is linearly proportional to the temperature difference between the layers.
So it would be correct to re-state the energy balance as follows:-
Net heat flux going into the surface mixed layer = the Net Radiative Flux (TOA) less the heat flux from the mixed layer to the deeper layer = Forcing(TOA) – radiative response (TOA) – the heat flux from the mixed layer to the deeper layer.
In the above analysis I have allowed all 3 parameters in the ocean model to vary – the heat capacities of the two layers and the diffusion constant defining the flux between them.
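In symbols (my notation for the above; λ is the total feedback, γ the inter-layer exchange coefficient):

$$C_m \frac{dT_m}{dt} = F(t) - \lambda T_m - \gamma\,(T_m - T_d), \qquad C_d \frac{dT_d}{dt} = \gamma\,(T_m - T_d)$$

with net TOA flux N = F − λT_m. At equilibrium the exchange term vanishes (T_m = T_d) and T_m = F/λ, so the ocean structure sets the path to equilibrium but not the destination.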
“Do any of the definitions of climate sensitivity apply to Paul’s analysis of Pinatubo?”
Yes. Equilibrium Climate Sensitivity under the assumption of a linear feedback.
BillC,
You asked a question which I left hanging – probably because it was the only question that was actually on topic (smiley).
You asked:
“The original detrending. How likely is it that ‘detrending’ the data prior to the Pinatubo analysis has biased the results?”
There seems to be a lot of common agreement on the effects of ENSO on the temperature series (so I didn’t check this personally).
This should have had no effect on the SW signal, but it should have suppressed the magnitude of the LW response a little. Since the LW response is an offset to the SW forcing, this should mean that the actual reported net flux response was a little larger than it would have been had the temperature correction been taken into account. Ignoring this shades the estimate of climate sensitivity towards the high side. However, integrating the temperature (adjustment) and multiplying by the estimated feedback gives an order-of-magnitude estimate of the error arising in the energy plot, and the difference is quite small (<10%).
Carrick,
“If Paul_K is interested, I can run Pinatubo + ENSO34 (lagged) + TSI through it.
Is there anybody here who thinks he’ll be able to pull out 2.5°C/doubling of CO2 from this model using Pinatubo data?”
Yes, Carrick. I do. If you give me the forcings you used and the net flux and temperature data, I will give you the exact value of 2.5 deg C. If you follow the argument I presented above you will understand why. The fact that you have some very long timeframe temperature response function makes no difference. As long as you are honouring energy balance and you have not changed your linear term for outgoing radiation, I will give you the exact value of 2.5 deg C.
SteveF:
This is something that is true in my model, but actually not generically true.
Imagine (hypothetically) you had a large negative feedback related to deep ocean overturn. So you only get the large negative feedback at very low frequencies.
I would too, but I think the best data is (bletch) paleoclimate data, because it’s the only data set that has long enough periods to resolve this sort of feature.
I think nobody here is going to doubt the existence of ice ages for example.
Carrick,
“I think the best data is (bletch) paleoclimate data”
Belch? I may hurl..
Sure, the history of the last 800,000+ years indicates a strong ice albedo feedback. But note that during glacial/interglacial cycles temperatures seem limited on both the upper and lower end. Climate sensitivity 40,000 years ago with much of the northern hemisphere covered with a Km of ice may have been, shall we say, “different” from today, but what does that tell us about the response to delta-forcing today? I think not much.
.
The reality is that there is not a whole lot of ice related albedo to be lost….. unless you think Antarctica is going to melt (and it hasn’t for many millions of years). Maybe southern Greenland could melt, but how much effect could that have on the Earth’s overall energy balance? The area of the southern half of Greenland is about 1 * 10^6 km^2, while Earth’s area is ~510 * 10^6 km^2. Considering that it would take on the order of 1000 years to melt out half of Greenland, and considering that GHG forcing will be falling (for sure) within 100 years (and perhaps within 50-75), do you believe that loss of albedo in Greenland can materially influence the overall energy balance over the next 100 years?
Carrick, “I would too, but I think the best data is (bletch) paleoclimate data, …”
Bletch is right. There is, though, a 4.3 ka recurrence associated with the precessional cycle that seems to appear in the Tierney et al. Lake Tanganyika TEX86 reconstruction of lake surface temperature.
https://lh5.googleusercontent.com/-1SYDQ7c3IQs/UIhEJu260lI/AAAAAAAAFOQ/br-tKPHP-54/s999/tierney%2520lst.png
A lightly damped 4.3 ka cycle can appear as 1430 +/- year events – Bond events, perhaps? Because of the non-linear impact of the northern higher-latitude snow/glacial feedback (area decreases with the sine of latitude), it would be low frequency but high “GMT” impact.
Paul_K thanks for the link from Isaac.
Sounds like a deal. Give me a little time (probably not for a few days) and I’ll get back to you on that.
Dammit Carrick,
I feel like Butch Cassidy just before he gets into the knife fight. You remember the line: “I only made the offer because I never thought anyone would accept.”
The only reason I made the point was to emphasise that you can have a highly complex n-pole (surface) temperature response function IN TIME, but as long as you have a single radiative emission term which is a simple linear function of surface temperature, then a plot of (forcing – net flux) will always yield a straight line AND the gradient of that line will yield the total linear feedback term – the inverse of the long-term climate sensitivity of this linear system. Although it may seem counterintuitive, the deep ocean uptake has no effect on this.
OK, so go ahead, but no cheating. (a) You are only allowed one outgoing radiation term in your equation suite, and that is a simple linear function of surface temperature. (b) You really do have to do the energy balance. If you include ENSO then you have to include the data and fully account for OLR in the total net flux calculation.
Go to it.
Frank (Comment #105315)
October 24th, 2012 at 2:32 am
Frank,
You wrote:
“Dewitt suggested a square wave, but the eccentricity in the Earth’s orbit provides an annual sinusoidal solar forcing with peak input in January. Perhaps Paul_K’s method could be applied to this forcing. Of course, the response to eccentricity forcing disappears…”
You need a minimum of latitudinal definition for the annual cycle. Temperature amplitude increases away from the tropics and also changes phase across the latitudes. The temperature swings in the extreme northern and southern latitudes are huge, decreasing towards the tropics, but this all gets resolved into a tiny annual average change in temperature. This latter has very little information content about the seasonal cycle. My toy model can’t handle this.
Re: AJ (Comment #105346)
October 24th, 2012 at 9:37 am
Hi AJ,
You asked:
“Paul_K, any ideas on what the time function is that describes the TOA net flux to an instantaneous doubling of CO2?”
I honestly have no idea. I would like to understand what is causing it in the models before trying to fit something to it.
Frank,
I should have added, given your reference,
“…and nor could Ramanathan’s.” (smiley)
Paul
Paul_K
” but as long as you have a single radiative emission term which is a simple linear function of surface temperature, then a plot of (forcing – net flux) will always yield a straight line AND the gradient of that line will yield the total linear feedback term – the inverse of the long-term climate sensitivity of this linear system.”
.
You say that is counterintuitive, but that seems to me obvious. There is no way that it can be wrong…. It’s just a heat balance.
.
There may be an argument for why feedbacks become very non-linear in the temperature domain a hundred years after forcing is applied, but I haven’t heard one (at least not one that makes any sense).
.
Carrick,
There is no need for a knife fight.
Re:SteveF (Comment #105436)
October 25th, 2012 at 7:06 am
Well, I’m with you SteveF. Carrick’s a very bright guy, and I assume that it will only take him a few minutes, if he doesn’t already know it, to understand that I have set him a “white mouse” experiment.
I anticipate that he will come back with a cry of “foul”, and insist that he be allowed to establish multiple emission points. At that stage, I will move to Plan Harvey Logan.
Paul_K – thanks for answering my detrending and bias questions. Sounds good to me.
White mice, knife fights or not, the box models aren’t helping us understand this problem. Now – if we start cutting up boxes by latitude, maybe that would help? In terms of THIS problem, it seems more relevant than having umpteen ocean layers.
BillC,
This is actually something I’ve been trying to fiddle with, for now using one “box” for let’s say 30S–30N, and the other for everything else. The radiative response of the first box is larger and temperature heats up in the first at a faster rate, but as the difference in T between the two boxes increases, the second box (with a smaller radiative response to a change in T) begins to take up heat at a faster rate than the first. (That is, the second box absorbs a greater portion of the TOA flux.) I suspect this type of model may be better able to reproduce the non-linear ECS of GCMs, but have not made much progress so far…
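A minimal sketch of the mechanism Troy describes (every number here is invented; only the structure matters). With two radiating regions carrying different feedback coefficients, the global effective feedback (F − N)/T drifts as the slower region catches up, even though each region is perfectly linear:

```python
import numpy as np

# Two-region sketch: region 1 ~ tropics (half the area, strong radiative
# response), region 2 ~ the rest (weak response). All values invented.
a1, a2 = 0.5, 0.5           # area fractions
lam1, lam2 = 2.0, 0.8       # local feedbacks, W/m^2/K
k = 1.0                     # horizontal exchange coefficient, W/m^2/K
C1, C2 = 8.0, 30.0          # heat capacities, W yr m^-2 K^-1
F, dt = 3.7, 0.01
T1 = T2 = 0.0
for step in range(int(200 / dt) + 1):
    if step > 0 and step % int(40 / dt) == 0:
        Tbar = a1 * T1 + a2 * T2
        N = F - (a1 * lam1 * T1 + a2 * lam2 * T2)   # global-mean net flux
        print(round(step * dt), "yr: effective feedback =",
              round((F - N) / Tbar, 3), "W/m^2/K")
    dT1 = (F - lam1 * T1 - k * (T1 - T2)) / C1
    dT2 = (F - lam2 * T2 + k * (T1 - T2) * a1 / a2) / C2  # exchange conserves energy
    T1, T2 = T1 + dt * dT1, T2 + dt * dT2
```

Note this does not contradict Paul_K’s straight-line result: his guarantee applies only when there is a single emission term, and this sketch deliberately has two.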
Paul_K
Great post; you should certainly seek to get this work published in a journal.
I have been trying to replicate your results. I’m actually using an EBM with a mixed layer + diffusive thermocline ocean structure (identical to the EBM used in several well known Detection and Attribution studies published over the last few years). The difference between a two box and a mixed layer + diffusion model is minor over periods of only a few years. I have reduced the default mixed layer heat capacity so as to match your modelled time evolution of temperature.
I downloaded the ammann2003b_volcanics.nc volcanic aerosol optical depth data file from
http://www.ncdc.noaa.gov/paleo/pubs/ammann2003/ and applied your multiplier of 21.
I can pretty well match your maximum Net Flux of circa 2.4 W/m^2 if I use spatially unweighted means of Ammann’s monthly data. But when applying the normal area weighting [by cos(latitude)], the forcing peak after 6 months (in November 1991) is 4.09 rather than 3.13 W/m^2 and the resulting peak Net Flux is about 3.2 W/m^2. I have difficulty reducing the peak Net Flux value below 3 W/m^2 while maintaining consistency with the temperature evolution and peak. On your plot, the ENSO-adjusted temperature has only fallen by about 0.25C when the forcing peaks, which with Total Feedback of 2.5 W/K/m^2 would only reduce the 4.09 forcing to 3.6 W/m^2, way above your peak of 2.4 W/m^2.
Have I gone wrong somewhere?
BTW, please can you clarify whether your mixed layer heat capacity is total mixed layer heat capacity divided by global surface area so as to match with the global forcing and flux data, or just uses the ocean area as divisor.
These are the forcing data, from June 1991 on, that I compute from the Ammann optical depth data, in W/m^2:
Unweighted: 0.00 -0.59 -1.33 -2.06 -2.88 -3.13 -3.07 -3.02 -3.00 -2.96 -3.27 -3.27 -3.18 -3.10
Lat weighted: 0.00 -0.85 -1.83 -2.80 -3.78 -4.09 -3.93 -3.79 -3.68 -3.57 -3.56 -3.50 -3.38 -3.27
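For reference, the weighting step Nic describes looks like this (a sketch; the latitude grid and data handling are my assumptions, since the actual NetCDF layout isn't shown here):

```python
import numpy as np

# Area-weighted global mean of zonal AOD, then AOD-to-forcing conversion.
# The 5-degree band centres are a hypothetical grid, not Ammann's actual one.
lats = np.arange(-87.5, 90.0, 5.0)          # band centres, degrees
aod = np.random.rand(lats.size)             # stand-in for one month of zonal AOD
w = np.cos(np.radians(lats))                # area weight per band
aod_unweighted = aod.mean()
aod_weighted = np.sum(w * aod) / np.sum(w)  # the cos(latitude) weighting at issue
forcing = -21.0 * aod_weighted              # multiplier of 21 from the main post
print(aod_unweighted, aod_weighted, forcing)
```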
Troy,
Well I’d be interested in anything you’ve got. I have a pretty steep learning curve ahead of me when it comes to actually modeling anything, but you never know!
Re:Nic Lewis (Comment #105447)
October 25th, 2012 at 9:32 am
Hi Nic,
Thanks for the questions. It’s great to see you getting down in the detail. I am a fan of some of your other work.
The main global forcing dataset came directly from digitising the original AOD values shown in Figure 2 in DK2005 (not the analytic fit used by DK2005), and applying a conversion of 21 W/m^2. The actual forcing values I used from June on are:-
0, 0.76, 1.58, 2.44, 3.3, 3.55, 3.40, 3.32, 3.21, 3.13, 3.11, 3.05, 2.94, 2.85
If you plot these values up, you will see that they track your lat-weighted values except for a factor which looks suspiciously like 21/24 (recall that my digitized values are only good to 3 significant figures at best). I know for certain that I have converted the AOD at 21 W/m^2, and that I can exactly match the DK preferred solution with this conversion factor and their analytic fit to the original AOD data. At first, therefore, I thought you might have accidentally applied a conversion of 24 instead of 21 to the same AOD dataset. So I went back to the original Ammann data for Nov and Dec 1991 to check, and recomputed the lat-weighted AOD values from source. They yield values which convert to within 1% of your lat-weighted forcing values when using a conversion of 21 W/m^2! Hell! The small difference is due to how the panel averaging is managed, I am sure.
[Thinking that DK2005 might have computed an AOD average over just 60S to 60N without reporting the fact, I also tried this, but unsurprisingly obtained even higher forcing values.]
So I am left at an embarrassing loss to explain how the DK2005 AOD data were derived, and why they appear to be low by what looks close to a constant factor. A factor increase in the forcing values will not have much impact on the results where I have overwritten the values in the early months using the SW data, but it will have some effect, and of course it will have a larger effect quantitatively on the results I have presented for the global matches. If anything, such an increase in forcing should slightly decrease the estimated climate sensitivities. I would like to say that it will not change the conclusions, but I don’t need to since this is always true in climate science when there is a major data change. (smiley)
I will in any event write to Douglass to seek his clarification, and I will add an update to the main post as soon as I get a minute. A sincere thanks for bringing this to my attention, even if it does annoy the hell out of me.
You also asked about the ocean heat capacities. In practice, for what I have done here, these terms are used as free matching parameters, so there is no actual calculation which requires the area to be known. However, if one returns to the derivation of the energy balance equation, it becomes evident that the term must be scaled by the same area as is used to scale the forcing and the change in outgoing radiation. Hence, unambiguously, for the global matches the terms are scaled (inversely) by global area. For the 60S to 60N approximations they are scaled inversely by the ocean plus land area subtended by these latitudes.
This is an illuminating demonstration of the efficacy of crude linear system models in analyzing empirical data. That said, it seems that two distinctly different senses of “forcing” are being tacitly conflated here: 1) the diminution in SW insolation due to volcanic ash and 2) the increase in thermal capacitance of the atmosphere due to doubling of CO2. The former is a step change in proper external forcing, while the latter is simply an internal system change that does not affect the solar power entering the system.
The fact that some improperly call the capacitive LW increase “greenhouse forcing” does not negate the clearly different physical mechanisms involved. Linear capacitive systems respond quite differently to these inherently different changes. Thus it remains unclear that any truly empirical determination of the “climate sensitivity” (in the IPCC’s 2XCO2 sense) has been obtained here.
Nic Lewis:
Somebody throw me a bone, please.
What is an “EBM” and which “well known Detection and Attribution studies published over the last few years” is Nic referring to?
>.<
Carrick,
EBM is probably “energy balance model”. Only Nic Lewis knows what the “Well Known Studies” are.
Hi Carrick,
EBM = Energy Balance Model
SteveF, Carrick,
This is a not-so-wild guess at the particular EBM study that Nic is referring to:-
http://judithcurry.com/2012/06/25/questioning-the-forest-et-al-2006-sensitivity-study/
sky,
” the increase in thermal capacitance of the atmosphere due to doubling of CO2″
.
Humm… adding a trace of CO2 to the atmosphere does not change the ‘thermal capacitance’ measurably. What it does is make escape of infrared energy to space a bit more difficult. The simplest way to understand it is to look at the clear sky transmission spectrum for the atmosphere. http://en.wikipedia.org/wiki/File:Atmosfaerisk_spredning.gif
The left side of the graph is the visible spectrum, and everything right of 0.7 micron wavelength is infrared. Note the transmission drops to low levels at several wavelengths associated with CO2 and water vapor. When atmospheric CO2 rises, these absorption bands for CO2 become “deeper” and, more importantly, slightly “broader”, which reduces the overall transmission to space. That means that if the same amount of heat must escape to space, the surface temperature has to be a bit warmer.
The warmer temperature at the surface propagates upward through the troposphere according to the atmospheric lapse rate. By itself (and with no add-on effects), a doubling of CO2 would be expected to warm the surface by about 1.1C to 1.2C. Slightly warmer air can hold a bit more water at saturation, so the expectation is that warming due to rising CO2 will be somewhat “amplified” by higher concentrations of water vapor… the higher water vapor “deepens” and “broadens” the absorption by water vapor. The combined CO2 plus water vapor impact for a doubling is expected to be in the range of 1.8C to 2.0C (or a bit more or less, depending on who you talk to).
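The arithmetic behind the oft-quoted 1.1C to 1.2C, for anyone following along (standard textbook values: the Myhre et al. logarithmic fit for CO2 forcing and a Planck response of roughly 3.2 W/m^2/K):

$$\Delta F_{2\times} = 5.35 \ln 2 \approx 3.7\ \mathrm{W/m^2}, \qquad \Delta T_{\text{no feedback}} \approx \frac{3.7}{3.2} \approx 1.2\ \mathrm{K}$$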
.
None of this is terribly controversial.
.
The substantive issues are the net influence of clouds (in a slightly warmer, slightly more humid world, do clouds further amplify or attenuate warming?), the influence of man made aerosols (how much influence do they have?), ocean heat uptake (how much heat actually flows into the oceans?), and of course the subject of this thread: are there mechanisms which will make the Earth’s sensitivity to GHG forcing increase significantly a long time after GHG forcing has been applied, or is the sensitivity we diagnose today the same as the sensitivity in a world 1C to 3C warmer than today?
.
Anyone who says they know (for certain) the answers to these questions is either mistaken or intending to mislead you.
Thanks Paul_K and SteveF. So is there open-source code for the EBM that Nic is referring to?
SteveF,
It’s nice to read a clear summary of the effect of increased CO2. I just have one comment to make:
“That means that if the same amount of heat must escape to space, the surface temperature has to be a bit warmer.”
“The warmer temperature at the surface propagates upward through the troposphere according to the atmospheric lapse rate.”
Because transfers in the troposphere are mainly convective, the spread of the imbalance (not of energy) tends to be top-down. In addition, there is no reason to believe that the lapse rate is unaffected by an increase in opacity.
Paul_K, SteveF, Carrick,
“This is a not-so-wild guess at the particular EBM study that Nic is referring to:-
http://judithcurry.com/2012/06/25/questioning-the-forest-et-al-2006-sensitivity-study/”
The Forest 2006 study I was covering in that article doesn’t actually use an Energy Balance Model: the MIT model is a sort of 2D AOGCM (Atmosphere Ocean General Circulation Model, for the uninitiated 😉 ).
The EBM I refer to is that used in studies by Oxford university researchers, such as Andrews and Allen 2008 http://onlinelibrary.wiley.com/doi/10.1002/asl.163/pdf , Allen, Frame et al 2009 http://www.fraw.org.uk/files/climate/allen_2009.pdf , also Stone and Allen 2005 http://web.csag.uct.ac.za/~daithi/papers/StoneDA_AllenMR_2005.pdf . All these papers are freely available.
The EBM is as described in section 3 of the Andrews and Allen paper, which gives the equation to solve the diffusive component; the main analysis in section 2 is of a one-box EBM. From what the Andrews and Allen paper says the section 3 model is essentially the same as that used in many other climate studies.
The very long term (>40 yr) behaviour of this EBM model is probably unrealistic since it does not recognise the effect of upwelling in putting an effective lower limit on the diffusive thermocline (see Lindzen and Giannitsis 1998, “On the climatic implications of volcanic cooling”, http://eaps.mit.edu/faculty/lindzen/184_Volcano.pdf , which is most informative).
Carrick: ” So is there open-source code for the EBM that Nic is referring to?”
I’m happy to provide a copy of my R code implementing the mixed layer – diffusive EBM, as used in the papers referred to. May I suggest that in the first instance you email Lucia and ask her to forward your request for the code on to me.
Phi,
I don’t see the lapse rate being much changed by opacity in the infrared. It is controlled by adiabatic expansion and latent heat release from condensation; neither of these things is influenced in any direct way by slight changes in the atmosphere’s absorption spectrum. Rising humidity will tend to reduce the average lapse rate, of course (lower rate at higher average humidity).
.
While it is true that convection plays a major part in total transfer, direct radiative loss to space in the absence of cloud cover is also significant…. the atmosphere is never really opaque in the IR.
Nic, thanks for the offer and the references, a ping will be along shortly.
Could you explain briefly to the uninitiated how a zonally averaged AOGCM model would be different than an EBM?
I’m guessing it’s because EBMs assume instantaneous equilibrium, right? [So they are essentially for probing the fast component of climate change.]
SteveF,
I do not know what it is quantitatively speaking, but when you write “…direct radiative loss to space in the absence of cloud cover is also significant”, it is precisely on this that increased CO2 acts: the profile of losses is shifted in altitude. We can therefore expect a decrease in the lapse rate (a decrease of cooling by direct radiation) and therefore a surface warming lower than 1.2 °C. All this, naturally, before the water vapor feedback is involved.
Carrick,
“I’m guessing it’s because EBMs assume instantaneous equilibrium, right?”
I think this is the nub of the controversy on this thread. What is instantaneous is the energy balance of the surface/atmosphere, not the approach to temperature equilibrium, which is most certainly not instantaneous. If you consider that energy can only be lost from the “very fast box” (the surface skin plus atmosphere), then the rate of temperature change of that “very fast box” depends on the net rate of energy flow (considering in and out), along with the total heat capacity of that “very fast box”. Transfers between the “very fast box” and “slower boxes” (the deeper ocean, penetration of heat into the land surface, melting of ice in glaciers/ice sheets..and maybe others) are treated just like solar gain and loss to space in calculating the rate of change in surface temperature. Slower boxes must also be energy balanced… their rate of change of temperature times their heat capacity, and heat of fusion times melt/freeze rate, in the case of ice, must be equal to the net flux of heat into or out of each box. A diffusional system for the ocean (infinite number of slower boxes) is theoretically most accurate, but maybe not a lot more than a 3 to 5 box model. Anyway, the temporal evolution of the surface temperature in response to a defined short term perturbation (like Pinatubo) should reveal the equilibrium climate sensitivity if you can accurately keep track of energy flows. But maintaining an accurate accounting of the balance may not be so easy!
phi,
“We can therefore expect a decrease in lapse rate (decrease of cooling by direct radiation) and therefore a surface warming lower than 1.2 ° C.”
There is often confusion between cause and effect with lapse rate. The lapse rate is determined by the thermodynamic properties of the atmosphere and the acceleration due to Earth’s gravity, nothing else. The lapse rate only applies when the net arrival of energy at the Earth’s surface is greater than can be lost via direct infrared radiation to space. Under these conditions, the temperature at the surface rises and convection begins. In the absence of sufficient energy flux at the surface, there will be a thermal “inversion”, which just means that bringing a parcel of air from above to the surface will cause that air to warm to higher than the surface temperature… the atmosphere is then stable, without convection, and the lapse rate no longer applies. The existence of a fairly uniform lapse rate just means that most of the time there is convection in the atmosphere. The oft-quoted 1.2C per doubling takes the lapse rate into account.
SteveF,
“The oft-quoted 1.2C per doubling takes the lapse rate into account.”
I think phi is referring to the lapse rate feedback (“decrease in lapse rate”), which I am almost certain is not included in the oft-quoted 1.2C. Basically, if the upper troposphere heats at a faster rate than the surface, as predicted, this leads to a smaller temperature response than if there were a uniform heating rate throughout the column. Estimates for this effect are usually around -0.8 W/m^2/K. Such a feedback would create a total feedback of around 3.3 (for a uniform heating rate) + 0.8 = 4.1 W/m^2/K, and a sensitivity (if only this feedback were included) of around 3.7 W/m^2 / 4.1 W/m^2/K ≈ 0.9 K.
Of course, in models, the WV and LR feedbacks are tied enough together that the uncertainty in the combination of these two feedbacks is smaller than in each individual feedback.
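Troy’s arithmetic written out (magnitudes only; his convention adds the size of the negative lapse-rate feedback to the uniform-warming response):

$$\lambda \approx 3.3 + 0.8 = 4.1\ \mathrm{W/m^2/K}, \qquad \Delta T_{2\times} \approx \frac{3.7\ \mathrm{W/m^2}}{4.1\ \mathrm{W/m^2/K}} \approx 0.9\ \mathrm{K}$$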
SteveF,
“The oft-quoted 1.2C per doubling takes the lapse rate into account.”
Yes, for the determination of its value as an average over an unspecified profile; not when it comes to surface warming.
The change in the temperature gradient is not taken into account in that case. Yet the actual gradient is a consequence of both convection and radiative losses, as you rightly pointed out.
Steve_F,
Not if the feedbacks are dependent on latitude and your box model doesn’t have latitude zones. Since the radiative forcing is not constant with latitude (it is greater in the tropics as Troy pointed out), the poleward propagation of ocean heating is going to control the feedbacks to some degree dependent on its temporal evolution (speed). (I think this is right). I suspect this is how you get nonlinear T to flux imbalance relationships as we’ve been discussing ad nauseam.
I think the million dollar question is this: given that we believe this condition exists, do the models get it even close to right? Late 20th century evidence to now seems to suggest greater poleward heat transport than in the models, if it isn’t all attributable to aerosols ;).
I think it’s relevant to consider here Isaac Held’s post #11, “Is continental warming a slave to warming of the ocean surface”
The “ultra-fast response” to an instantaneous doubling of CO2 that you get by allowing the atmosphere to mix but holding SST constant (infinite diffusivity and thermal capacity of the ocean) is about 0.35C in almost all GCMs per Dr. Held.
Troy_CA,
I do not know what impact this feedback has in numerical models, but I am sure it is imperative that such an effect be included in the calculation of the initial effect.
We are in a positive-feedback system that is potentially unstable. In such a case, the initial value must absolutely be realistic.
Phi,
Are you trying to say that the lapse rate adjusts BEFORE the surface warms because the radiative effect is first felt in the atmosphere? I.e., the peak ADDITIONAL IR absorption due to a CO2 increase is at some level in the mid to upper troposphere, thus in some sense the heating propagates downward? I have often wondered about this, but have become convinced that the effect of this nearly instantaneous transient is negligible.
Carrick
“Could you explain briefly to the uninitiated how a zonally averaged AOGCM model would be different than an EBM? ”
A 2D AOGCM has much more complex atmosphere and ocean representation, and resolves latitudinal and land/sea differences and flows. An EBM has a simple ocean model and parameterises the atmosphere through the climate feedback value, with no latitudinal banding (the Lindzen 1998 EBM does, unusually, separate land from ocean).
Despite EBMs’ simplicity, it has been found that with suitable parameter choices they can emulate the evolution of global mean temperatures simulated by AOGCMs remarkably well. See Stone, Allen and Stott 2007 http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3964.1
The delays in response to forcings are due very largely to ocean heat uptake, and AOGCM ocean heat uptake can be modelled quite well by EBMs with suitable ocean parameters. But EBMs can’t model things like sea ice feedbacks, carbon cycle feedbacks, etc.
BillC,
On the one hand, it’s a TOA imbalance which then extends to the surface (it would not matter in a purely radiative model). On the other hand, the profile of radiative losses is modified, implying an effect on the temperature gradient. I do not have a quantitative estimate of this phenomenon, but I have no reason to believe that its magnitude is much lower than that of the raising of the effective emission level.
Nic Lewis,
How can this be true in light of the nonlinear (T/net TOA flux) relationship? Guess I’d better read the paper you linked.
Furthermore, if we took an EBM and partitioned it into land/ocean and a few discrete latitude compartments, it still wouldn’t have to be an AOGCM (e.g. no seasons, no explicit modeling of circulation), but maybe there is a standard name for the class of model that would represent?
BillC,
“Not if the feedbacks are dependent on latitude and your box model doesn’t have latitude zones.”
Well, feedbacks are for sure at least somewhat dependent on latitude. The more important question is if they are on average reasonably represented by a single “surface area weighted” average fractional change in feedbacks. There has been greater warming at high northern latitudes, so we might expect that feedbacks there have increased relatively more (compared to 100+ years ago) than at lower latitudes. But high northern latitudes represent only a small fraction of the Earth’s surface. The fact that there is a latitudinal influence does not necessarily mean that the overall response is not reasonably linear with average temperature.
.
Remember also what some (most?) GCM’s are predicting: the response in the temperature domain to rising GHG forcing stays linear for a long time, and then become non-linear after the average temperature has already increased quite a lot. If the non-linearity were something as simple as regional differences in warming, then we would expect to see non-linearity all along. What I am looking for is a plausible physical explanation for the non-linearity; I am not saying it is impossible, I am saying I need to be convinced.
Steve_F,
Agree completely.
I have of course nothing with which to convince you of anything you don’t already know! But here is a thought: IF we assume the nonlinear response is 1) real and 2) related to relaxation of the heat flux across latitudes, then given the evidence we already have of lower tropical warming and more rapid high latitude warming (see for example this plot), perhaps we should be seeing it now, and apparently we haven’t yet.
By way of clarification of the plot: it shows the difference between the average temperature anomaly in the 1990s as a whole, and the 1950-1980 baseline period for GISSTEMP. The GFDL runs are an ensemble of 5 runs that were prepared for CMIP3 and when you look at the global average temperature trend, they match it very well. It is only when you look by latitude that the differences appear.
Please do all read the 26th October update at the end of the main post.
BillC,
There are at least two common ways (that I’ve seen) of emulating the evolution of global mean temperatures in GCMs using an EBM. The most common is to restrict the matching to the portion where the response is linear (historical data) or near-linear (up to TCR). In the latter case the errors appear as regression errors rather than mis-specification. The second alternative is to do what MAGICC does, which is to use a battery of fudge functions which convert the feedback coefficient into a non-constant value. This then allows an increasing effective climate sensitivity with time/temperature. Worth noting that some of the MAGICC fixes do actually convert the system into a non-LTI system – the sensitivity becomes a function of the magnitude of the forcing.
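A toy version of the second approach (MAGICC's actual fudge functions are more elaborate; these coefficients are invented purely to show the idea):

```python
# Toy "fudge function": let the effective feedback coefficient weaken as
# temperature rises, so effective sensitivity 3.7/lam_eff grows with warming.
# lam0 and b are invented illustration values, not MAGICC's.
def lam_eff(T, lam0=1.6, b=0.15):
    return lam0 - b * T

for T in (0.5, 1.0, 2.0, 3.0):
    print(T, "K warming ->", round(3.7 / lam_eff(T), 2), "K effective sensitivity")
```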
SteveF,
“Remember also what some (most?) GCM’s are predicting: the response in the temperature domain to rising GHG forcing stays linear for a long time, and then become non-linear after the average temperature has already increased quite a lot. If the non-linearity were something as simple as regional differences in warming, then we would expect to see non-linearity all along. What I am looking for is a plausible physical explanation for the non-linearity; I am not saying it is impossible, I am saying I need to be convinced.”
Well stated. This is the thing that most puzzles me. It suggests that it is not a simple time-dependent effect. Under a strong future forcing (like e.g. the instantaneous doubling scenario) the GCMs start to deviate from the historic linear feedback response in less than 70 years.
Hi Paul
Re Paul_K (Comment #105466)
October 25th, 2012 at 4:58 pm
Thanks for your answers. I will be very interested to hear what response you get from Douglass. The 21/24 factor is very odd. I note the strange coincidence of 60S-60N covering about 21/24 of the Earth’s surface area, but as you say an areal adjustment for the ERBE data covering only 60S-60N would be not only unjustified but in the opposite direction.
Paul_K regarding your Comment #105394:
Thanks for taking the time to explain. I did have a decent idea of how your model worked, but I made the mistake of copying, pasting, and trying to modify an equation from your post without being careful enough.
It is possible that we and others are contemplating reservoirs of different sizes to represent the “deep ocean”. Most seasonal changes (i.e. over several months) rarely penetrate deeper than 100 m, and the magnitude of these changes diminishes with depth, consistent with models like yours showing a mixed layer of about 50 m. I’m thinking of the “deep ocean” as the next one to several hundred meters, a compartment small enough to have its temperature changed by heat flux from the mixed layer within several years to decades. Let’s call that compartment the “deeper ocean”. If the temperature of the “deeper ocean” changes quickly enough, heat flux into this layer will saturate and (I assume) produce non-linearity. Others may picture the “deep ocean” as everything besides the mixed layer, which I will call the “rest of the ocean”. Heat flux into the “rest of the ocean” obviously won’t saturate in hundreds of years and can’t be the source of non-linearity in climate models. Any heat flux associated with deep water formation transfers heat to the “rest of the ocean”. Slow eddies near the surface associated with currents on an uneven ocean bottom may transfer heat mostly to the “deeper ocean”. The real deep ocean is presumably a combination of these two extremes that varies with the time period being studied.
Paul K:
The gist of my comment is scarcely acknowledged by referencing the Wikipedia regurgitation of the standard, radiation-only explanation of the “greenhouse effect.” That tired explanation is unconvincing even to astute students of geophysics, let alone to advanced researchers with decades of field experience.
What you seem to miss is that any mechanism that results in increased storage of thermal energy within the planetary system is a capacitance effect and can be modelled as such. It corresponds to the factor C in the impedance term RC, which defines the time constant of a linear system. Changes in system output due to changes in C, such as brought about by changes in GHG concentrations, should not be conflated with changes in output brought about by changes in the excitation of the system provided by external power sources. Changes due to volcanism are clearly of the latter type: there is simply less solar energy available to be thermalized near the surface.
It should also be pointed out that (aside from narrow spectral windows) the TOA outgoing LW radiation is not closely indicative of global surface temperatures. The bulk of this radiation seen from space does not emanate directly from the surface. Rather it is indicative of the emissions from a cloudy atmosphere. Thus there is a large difference between the effective planetary black-body radiation and the surface emissions. What ties the latter into the former on Earth is the ill-understood non-equilibrium thermodynamics of moist convection and condensation. Serious energy budget models need to account for total enthalpy, not just radiative terms.
Sky 105519,
I have no clue what you are trying to say. Energy balance models generally do, well, an energy balance. It might help move the discussion forward if you could better describe what you mean by “the ill-understood nonequilibrium thermodynamics of moist convection and condensation.” There are lots of things in the world which are ill-understood, of course. Thermodynamics, of any type, is not one of these. Anyway, thanks for the humor. Take a deep breath and say something meaningful, or say nothing at all… and try to stay away from gibberish.
Paul_K wrote in Comment 105432 about the forcing produced by the eccentricity of the earth’s orbit.
“You need a minimum of latitudinal definition for the annual cycle. Temperature amplitude increases away from the tropics and also changes phase across the latitudes. The temperature swings in the extreme northern and southern latitudes are huge, decreasing towards the tropics, but this all gets resolved into a tiny annual average change in temperature. This latter has very little information content about the seasonal cycle.”
I agree with your comments, but calculate (hopefully correctly) the peak-to-valley forcing due to eccentricity as being 23.6 W/m^2 (7X Pinatubo), and it happens every year. A signal this big and well-defined might be worth looking for. Could your preferred model be used to calculate the theoretical surface temperature response? Since the heat capacity of the NH oceans is much smaller than the SH oceans, the peak in global SST comes in July. However, Ramanathan attributes without reference the late March maximum in TROPICAL SSTs to the eccentricity of the earth’s orbit; with a peak to valley amplitude of about 1 degC and phase shift of three months (Figure 5.10).
sky,
“Changes in system output due to changes in C, such as brought about by changes in GHG concentrations, should not be conflated with changes in output brought about by changes in the excitation of the system provided by external power sources.”
I did not feel that the main change due to GHG is related to C. On the other hand, it also seems to me quite obvious that the effect of CO2 should not be regarded as a heating power. The schematic relationship linking surface temperature to heating power has more or less the form Ts = L*P + (P/sigma)^0.25. The CO2 effect occurs in L and cannot be modeled by a forcing, which would be an alteration of P.
Re: SteveF (Oct 26 08:44),
That’s the moist or dry adiabatic lapse rate and is the maximum stable rate. The actual lapse rate in the troposphere is almost never the adiabatic rate. It’s less than the adiabatic rate. The 1976 US Standard Atmosphere, for example, has a lapse rate in the troposphere of 6.5 K/km. That’s a lot less than the adiabatic rate you would calculate from the humidity profile with a maximum RH of 50%. And you’re forgetting the meridional circulation like the Hadley cells that arise from the pressure gradient force caused by the decrease in surface temperature with increasing latitude. Without that, all the rising air from the local heating at the equator would come back down locally.
Re: phi (Oct 26 08:09),
You have it backwards. A decrease in lapse rate means the atmosphere warms and radiation to space and to the surface increases. So there is an increase of cooling by direct radiation rather than a decrease. Perhaps you meant an increase in lapse rate since the lapse rate is positive by convention. But that won’t happen.
Re: Paul_K (Oct 25 03:14),
But that’s the problem. The radiative emission at the TOA is not a simple linear function of surface temperature when there are feedbacks. An increase in total atmospheric water vapor content with surface temperature works exactly like an increase in CO2: the surface temperature must increase more than it would have without the increase in water vapor content. A decrease in albedo will increase the absorbed incoming radiation and increase the surface temperature. The total humidity is likely controlled by the sea surface temperature, which will take centuries to equilibrate. Changes in albedo are probably also controlled mainly by ocean temperature.
MODTRAN Tropical Atmosphere clear sky 375 ppmv CO2 17km looking down (tropopause).
Iout = 289.037 W/m²
750 ppmv CO2
Iout = 284.484 W/m²
If I hold water vapor pressure constant, I have to increase the surface temperature by 1.234 K to restore radiative balance. But if I hold relative humidity constant, the increase required is 2.04 K. Tell me what’s linear about that?
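Recast as implied feedback parameters (my arithmetic on DeWitt's numbers):

$$\lambda_{\text{fixed vapour}} \approx \frac{289.04 - 284.48}{1.234} \approx 3.7\ \mathrm{W/m^2/K}, \qquad \lambda_{\text{fixed RH}} \approx \frac{4.55}{2.04} \approx 2.2\ \mathrm{W/m^2/K}$$

Each case looks linear on its own; the point is that the slope depends on the humidity assumption.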
If the radiative emission were a simple linear function of surface temperature, we wouldn’t have ice ages. Milankovitch cycles do not cause a significant difference in total incoming radiation. The distribution of the radiation is changed. Less radiation at high latitude means more ice and vice versa.
With a nod to DocMartyn, that should be “sea surface temperature, which could take centuries to achieve a new steady state”.
It is the shape. As long as the Earth is roughly spherical, that limits TOA (tropopause) and surface energy to a range. You are trying to find a change in energy distribution within that limited range.
http://redneckphysics.blogspot.com/2012/10/simple-radiant-model.html
You can’t get much simpler than that model. With 239 Wm-2 TOA you get 151.3 Wm-2 “greenhouse effect” and 390 Wm-2 “true” surface. If the model compared hemispheres, then the geometry would work with you.
My two cents
DeWitt Payne,
“So there is an increase of cooling by direct radiation rather than a decrease.”
It depends on the direction of causality. To be clearer, I will speak of the unsigned temperature gradient. Increased CO2 reduces losses by direct radiation and therefore reduces cooling, which causes a decrease in the temperature gradient. The increase in temperature will obviously establish a state of equilibrium, but it is indeed the decrease in losses which causes the decrease in gradient.
Frank F:
There’s little room for doubt that you “have no clue” about the thrust of my statements, which are couched in the fundamental terms of system analysis and of thermodynamics. That you should, on that basis, conclude that I’m speaking “gibberish” speaks volumes.
Realistic energy balances at the surface cannot be achieved without a masterful comprehension of the principal mode of thermal energy transfer between an evaporating surface and a convecting atmosphere. That such comprehension is lacking is apparent not only from first principles, but from the great variety of ad hoc parametrizations of moist convection resorted to by modellers.
phi:
To the extent that GHGs induce planetary changes in internal energy storage, rather than just a vertical redistribution, their effect is indeed capacitive. I cannot share your trust in the efficacy of any parametrization of surface temperature that excludes the effects of moist convection.
DeWitt,
“Tell me what’s linear about that?”
Well, if the humidity rises in proportion to surface temperature, then it could still be linear in temperature. We are not talking about a huge temperature change. The GCMs seem linear, at least for a long time. If there is a good explanation in the models, I have not heard it.
Sky,
It’s Steve not Frank. You appear to have more bluster than understanding, and little that you say is consistent with even basic physical principles. Rising CO2 increases the heat capacity of the atmosphere? Come on. At least you bring a measure of humor to the thread.
Re: SteveF (Oct 27 18:54),
No, the important part is whether the net forcing is linear with surface temperature. But that would only be true if the feedbacks such as humidity and ice/albedo are always directly proportional to the change in system heat content. But we have heat going into the deep ocean that won’t be reflected in the ocean surface temperature, and thus total humidity, for a long time, possibly centuries. The problem for me is the y axis of the forcing/temperature graph. The implication is that you know the total forcing in year zero and that it’s equal to the step change in CO2. But it’s not. With feedbacks, the total forcing increases with temperature. I would only expect linear behavior in a toy model such as a one or two box model.
My bet is that the difference in the temperature/forcing curve between models is related to the behavior of the different ocean models used.
SteveF:
How funny! FYI lumped capacitance methods have consistently produced the most reliable models of planetary and surface temperatures, both for Earth and for Mars. I have no interest in demonstrations of a robust linear relationship between cluelessness and smug dismissal of effective modelling approaches.
DeWitt,
” With feedbacks, the total forcing increases with temperature. I would only expect linear behavior in a toy model such as a one or two box model.”
Humm… Yes the total radiative forcing almost surely increases with temperature (how could it not with rising total atmospheric humidity at higher temperature?). But the issue again is the distinction between the temperature domain and the time domain. Even with a simple toy model (say a 200 meter slab ocean), the total (amplified) forcing for sure will rise with rising ocean temperature, and so forcing is expected to be “non-linear”… that is, an initial application of 2 watts/M^2 of forcing from GHG leads to slow warming of the 200 meter ocean, and as it warms, the forcing from the combined GHG and water vapor effect gradually rises… say from 2 to 4 watts/M^2. That gradual rise in forcing from water vapor feedback means that the approach to a new equilibrium will not be so simple as an exponential approach to a new equilibrium temperature with fixed forcing; the rate of rise following a step change in GHG forcing will not drop off so quickly as we would expect were amplification from water vapor not taking place, because the total forcing gradually increases.
.
But linearity in the temperature domain just means that energy loss to space increases linearly with temperature; it says nothing at all about how the total (amplified) forcing evolves in response to feedbacks (like water vapor feedback). If rising surface temperature increases water vapor feedback, that does not in itself mean the escape of energy to space is not linear with temperature. The only way to determine if the sensitivity is non-linear in the temperature domain is to look at how heat loss to space varies with surface temperature.
.
The trajectory of surface temperature over time in response to GHG should be sensitive to things like ocean heat uptake and ocean surface warming. But that is not the same as non-linearity in the temperature domain.
Sky,
” I have no interest in demonstrations of a robust linear relationship between cluelessness and smug dismissal of effective modelling approaches.”
The cluelessness is entirely your own. That is painfully obvious in all your nonsensical comments. I suggest you learn some basic science before you get back up on your soapbox.
There is absolutely no evidence of any “sensitivity” of climate to carbon dioxide levels.
Scientific debate should never be decided by consensus. It should be “decided” by empirical evidence that validates, or otherwise, the hypothesis in question.
Joseph Postma’s new paper (22 October 2012) looks for empirical evidence of a GHE, and finds none. He puts forward cogent arguments as to why this lack of evidence is to be expected. All should read this ground-breaking work, which also cites my paper (March 2012) pp 47-49:
http://principia-scientific.org/publications/Absence_Measureable_Greenhouse_Effect.pdf
Doug Cotton
“Basic science?” There is nothing more basic than temperature being the manifestation of the kinetic energy stored within a mass of matter. I can’t waste time arguing with someone lacking that basic recognition while gratuitously resorting to ad hominems.
Sky,
” I can’t waste time arguing with someone lacking that basic recognition while gratuitously resorting to ad hominems.”
Nor can I waste time on someone who understands nothing of science and continuously spouts nonsense. As they say in Brazil,
“A Deus”.
SteveF writes “Yes the total radiative forcing almost surely increases with temperature (how could it not with rising total atmospheric humidity at higher temperature?). ”
By changes in albedo. Arguments that implicitly start with “with all else being equal” are good for discussion but need to be taken with good measures of salt 😉
Re: SteveF (Oct 29 18:56),
In the short term for a small change in average temperature, it probably does. But then nearly everything is linear for small ranges. Water vapor feedback almost certainly isn’t linear with temperature. Specific humidity at constant relative humidity increases exponentially with temperature.
True. But you can’t do that with a simple model. And you can’t do it by observation in the short term because current instrumentation isn’t precise enough. You need either several hundred years of observations or a valid Air/Ocean/Cryosphere coupled GCM. And all the current AOGCM’s say the response to forcing in the temperature domain is non-linear even if they don’t agree on the shape. You can argue that the current models aren’t good enough, but that does not prove that the temperature vs net forcing must be linear. More like the opposite. It makes linearity very unlikely.
DeWitt,
” And all the current AOGCM’s say the response to forcing in the temperature domain is non-linear even if they don’t agree on the shape.”
.
Some are pretty close to linear, some very far from linear (in the temperature domain, of course). I repeat what I have said a couple of times before: I do not discount the possibility of non-linearity in the temperature domain, I just have not seen anything like a reasonable physical explanation for rather extreme non-linearity. I note also that the conventional framing of amplified warming (the IPCC AR’s) implies linear response in the temperature domain; the equations are linear in temperature. As you noted, most process are (nearly) linear over a relatively short range; all that I am suggesting is that absent a convincing argument otherwise, a change of a few degrees K on ~300K seems to me to lie in range that can be reasonably approximated by a linear response in most systems.
.
This is one where we may be better off agreeing to disagree.
TTTM,
“By changes in albedo. Arguments that implicitely start with “with all else being equal†are good for discussion but need to be taken with good measures of salt ”
.
Sure, there can be other feedback factors (like cloud influences) which can change the response from the simplest of feedbacks (GHG’s plus water vapor). But whatever other factors you propose are important, the relevant question is still the same: is heat loss to space something other than a linear response to surface temperature, and if so, what physical mechanisms are involved in creating this nonlinearity?
Re: SteveF (Oct 30 22:24),
For most of that 300 K, the vapor pressure of water is negligible. At 300 K, which happens to be the surface temperature for the Tropical Atmosphere in MODTRAN, the rate of change of specific humidity with temperature is quite large and it isn’t linear. Looking at the saturation specific humidity (the tabulated values are kg/kg, not a pressure in Pa) we have:
T (K)   q_sat (kg/kg)   Δq_sat
299     0.020811
300     0.022094        0.001283
301     0.023446        0.001352
302     0.024871        0.001425
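A standard Magnus-type fit (Bolton 1980) reproduces those values to within a couple of percent, for anyone wanting to check the nonlinearity independently:

```python
import numpy as np

# Saturation specific humidity from the Bolton (1980) fit for saturation
# vapour pressure, at a surface pressure of 101325 Pa.
def q_sat(T_kelvin, p=101325.0):
    t = T_kelvin - 273.15
    e_s = 611.2 * np.exp(17.67 * t / (t + 243.5))  # saturation vapour pressure, Pa
    return 0.622 * e_s / p                          # kg water vapour per kg air

for T in (299, 300, 301, 302):
    print(T, round(q_sat(T), 6))  # successive increments grow: not linear
```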
The concept that the climate sensitivity is a constant is at best an approximation. The climate is described by coupled non-linear differential equations. While that doesn’t prove the climate is chaotic, it’s certainly a necessary condition.
Then there’s the paleoclimate evidence. There have been huge swings in global temperature, or at least the ocean temperature over millions of years that appear to be due to just rearrangement of the continents leading to changes in ocean circulation with no change in external forcing. So tiny forcings over tens of millions of years lead to large temperature swings. But a large transient forcing like the PETM decays back to the baseline in ~100,000 years.
The fallacy of Paul_K’s temperature domain plot is that, with feedbacks, the temperature axis is not orthogonal to the forcing axis. The temperature axis probably isn’t even a straight line. Forcing the temperature axis to be straight and orthogonal creates apparent non-linearity even if the actual behavior were linear. But you can’t know the shape of the temperature axis in advance without a valid model.
Actually, if the feedback is linear with temperature, the response stays linear. However, if the feedback is non-linear with temperature, the response is non-linear as well. Here’s a simple example. The feedback is an exponential function of the change in temperature, specifically K*(EXP(ΔT)-1)/(EXP(1)-1) where K = 0,1 and 2. The forcing is calculated at the surface so the climate sensitivity is about half what it would be if calculated at the TOA. It’s a one box model with the box well mixed. This isn’t a very good feedback model as it blows up if you increase K to much greater than 2, but it illustrates the point that a non-linear feedback produces a non-linear net forcing vs temperature curve even with a very simple model.
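A sketch of that experiment as I read it (the heat capacity, linear term and step size are my guesses; only the exponential feedback form is taken from the comment above):

```python
import numpy as np

# One-box model with DeWitt's exponential feedback K*(exp(dT)-1)/(exp(1)-1).
# C, lam and F are invented; lam is a surface-forcing value, roughly twice
# a TOA value, per the comment's remark about halved sensitivity.
C, lam, F, dt = 8.0, 6.0, 3.7, 0.01
for K in (0.0, 1.0, 2.0):
    T = 0.0
    for _ in range(int(100 / dt)):
        feedback = K * (np.exp(T) - 1.0) / (np.exp(1.0) - 1.0)
        T += dt * (F + feedback - lam * T) / C
    print("K =", K, "-> T after 100 yr =", round(T, 3), "K")
```

With K = 0 the response is the usual exponential approach; with K > 0 the net-forcing-vs-temperature curve bends, which is the point being illustrated.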
DeWitt – you still only think in terms of radiation. See the big picture one day!
Those, like yourself, who still believe the carbon dioxide hoax need to come to realise that energy balance does not determine climate. It’s the other way round. Climate determines energy balance. Climate itself is determined by the incident solar energy which fluctuates in long term natural cycles probably related to planetary orbits.
Earth’s surface temperature cools as heat from the Sun is transferred back to the atmosphere. This process is dominated by sensible heat transfer, not by radiation which accounts for less than 30% of such transfers.
All that backradiation can possibly do (according to physics) is slow that 30% of cooling which is due to radiation. Meanwhile, the other 70% merely accelerates to compensate, thus leaving no net effect on the overall rate of cooling. What comes in from the Sun will get out again by one means or another. When there are long periods of natural warming there will of course be a build up of energy being retained. The thermometers tell us that, without even having to measure the energy balance. But the opposite is the case when cooling sets in.
Backradiation is not the cause, because it cannot transfer heat to a warmer surface. It can only slow radiative cooling. See my peer-reviewed paper on PSI recently cited by Joseph Postma in his October 2012 paper.
Doug Cotton
.
DeWitt,
The vapor pressure of water rises exponentially with temperature, of course. But the forcing due to rising water vapor in the atmosphere is (setting aside for the moment the issue of the net influence of clouds in a world that is a bit warmer and more humid) essentially logarithmic in water vapor pressure, leading to a net forcing due to increasing water vapor that is close to linear with surface temperature. Clouds could plausibly lead to significant non-linearity, because a change of state (condensation/evaporation) corresponds to an extremely non-linear effect: a step change in radiative properties. But nobody seems to be offering an explanation for non-linearity in the temperature domain based on cloud behavior, or at least not one that I have heard.
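A quick numerical illustration of that composition (purely schematic: the constant is a textbook Clausius-Clapeyron value, and no actual radiative transfer is involved). The log of a quantity growing exponentially with temperature is itself nearly linear in temperature over a range of a few degrees:

    import math

    A = 5420.0  # approximately L/R_v for water, in K (textbook value)

    def e_rel(T):
        # Clausius-Clapeyron saturation vapor pressure, normalized to T = 288 K
        return math.exp(-A * (1.0 / T - 1.0 / 288.0))

    for T in range(288, 295):
        # Successive differences are nearly constant, i.e. log(e_rel) is ~linear in T
        print(T, f"{math.log(e_rel(T)):.4f}")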
.
I just have not heard any convincing mechanism for large non-linearity in the temperature domain. I remain quite skeptical of assertions of non-linearity.
Re: SteveF (Oct 31 18:12),
It’s true that if you hold relative humidity constant and increase the surface temperature, the emission at the tropopause increases linearly. But that’s not the whole story. Emission downward from the atmosphere to the surface increases faster than emission upward leading to a fairly large radiative imbalance, 5.3 W/m² for a 2 C change in surface temperature. And that’s holding CO2 constant. If I increase the CO2 to keep the emission upward at the tropopause constant with temperature, the radiative imbalance at the surface increases to 6.6 W/m². The lapse rate cannot stay constant under those conditions. There is no reason to believe that the end result will magically be a linear forcing vs temperature curve.
Not to mention that while CO2 forcing is indeed logarithmic with concentration, water vapor forcing isn't. It's nearly linear. That's because water vapor absorption can still increase in regions of the spectrum where it isn't close to saturation.
Then there’s the fact that water vapor isn’t well mixed either vertically or meridionally. A meridional bias in temperature increase (otherwise known as polar amplification) will increase total water vapor even faster.
I remain extremely skeptical of assertions of linearity.
Doug Cotton,
In my humble opinion, your analysis is essentially correct. I have only one objection: transferring from radiative to convective transport has an energy cost. It seems difficult to estimate, but by reference to other thermal systems I would guess the real value lies halfway between your estimate (zero cost) and that commonly accepted by climatologists (zero transfer).
phi, my first impression of that paper is that it is obvious rubbish. Granted, I only read one paragraph before reaching that conclusion. I'll read more, but I highly recommend you be careful about agreeing with it.
My suspicion is the more I read the paper, the more ridiculous it will seem.
Brandon Shollenberger,
I have not read the paper; my partial approval relates only to the post.
DeWitt,
” There is no reason to believe that the end result will magically be a linear forcing vs temperature curve.”
There is nothing surprising about changes in lapse rate with rising atmospheric moisture (how could it be otherwise?). There is no connection I can see between that and non-linearity of sensitivity in the temperature domain.
.
There is plenty of reason to say that the models (all of them) show a very linear response in the temperature domain for a very long time. If the correct explanation for non-linearity were any of the things you point to (all of which are fast acting), then the non-linearity would be evident quickly in the models. It's not. The correct explanation for substantial non-linearity in the temperature domain (if one exists) is none of the obvious ones that have been discussed on this and other threads.
Re: phi (Nov 1 03:56),
Doug Cotton is a notorious troll who has been banned from many sites including Science of Doom. He frequently uses sock puppets to try to get around bans.
Re: SteveF (Nov 1 08:51),
Not true. I can create an exponential feedback that is slow acting, so that the deviation from linearity isn't immediate; I just have to adjust the coefficients. Heat going into the deep ocean in the thermohaline circulation has little immediate effect on sea surface temperature and hence on specific humidity. It's "in the pipeline". According to you, there is no pipeline and all feedbacks are already reflected in the surface temperature and therefore we know the climate sensitivity. I don't think so. Linearity of response must be proven, not simply asserted because you haven't heard of a good reason for non-linearity. I believe that's one of the classic logical fallacies, maybe affirmative conclusion from a negative premise.
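For what it's worth, a minimal sketch of such a slow-acting feedback (all coefficients are my own illustrative choices): couple the mixed layer to a deep box and let the exponential feedback depend on the slowly warming deep temperature. The K = 2 run then tracks the linear K = 0 run for decades before diverging.

    import math

    C_MIX, C_DEEP = 13.3, 200.0    # heat capacities, W*yr/m^2/K (assumed)
    LAM, GAMMA, F = 6.6, 0.7, 3.7  # restoring, box exchange, forcing (all assumed)

    def run(K, years=300, dt=0.01):
        # Two boxes; the exponential feedback is driven by the slow deep box.
        Tm = Td = 0.0
        snaps, per = [], int(50 / dt)
        for step in range(int(years / dt) + 1):
            if step % per == 0:
                snaps.append(round(Tm, 3))          # mixed-layer T every 50 yr
            fb = K * (math.exp(Td) - 1.0) / (math.e - 1.0)
            dTm = (F + fb - LAM * Tm - GAMMA * (Tm - Td)) / C_MIX
            dTd = GAMMA * (Tm - Td) / C_DEEP
            Tm += dt * dTm
            Td += dt * dTd
        return snaps

    for K in (0, 2):
        print(f"K={K}:", run(K))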
phi, when you get fooled by an obvious crank it's time to admit that you don't have the ability to tell truth from falsity in that particular field.
Steven Mosher,
Consensus physicists make a mistake symmetrical to Doug's, so are they all cranks?
Otherwise, the question is not whether I have particular expertise in this field, but whether what I say can be shown to be wrong. Can you do that?
” According to you, there is no pipeline and all feedbacks are already reflected in the surface temperature and therefore we know the climate sensitivity. ”
There most certainly is warming in the pipeline; have you actually read what I have written on this thread?
There can be warming along most any temporal trajectory, and I have said so many times. That has NOTHING to do with non-linearity in the temperature domain.
Re: SteveF (Nov 1 13:18),
Maybe that was an exaggeration, but let’s get back to your main argument and put it in a different perspective. Your argument is that you haven’t seen a good mechanism for non-linearity so therefore it must be linear. All the paleoclimate evidence of climate non-linearity and non-linearity in models (as the saying goes, close only counts in horseshoes and hand grenades) is irrelevant. That argument is identical to the argument used by those who rejected the continental drift hypothesis because they hadn’t seen a good mechanism for continents to move with respect to each other. The overwhelming evidence that the continents had been joined in the past cut no ice with them. Your argument is non-probative because it’s logically fallacious. A negative premise cannot prove the existence of a positive conclusion.
But your mind appears to be closed on this subject so further discussion seems pointless.
DeWitt,
Funny, I was thinking of telling you about your closed mind on this subject! With regard to the evidence available supporting non-linearity in the temperature domain: I note that there are substantial differences in the non-linearity of different models; they can't all be right about this, so it is clear that whatever processes are involved, those processes are not well understood.
.
You rather unfairly characterize my thinking on this. I do not exclude the possibility that there could be a physical mechanism which would do this, but nobody seems to be offering one. You, on the other hand, seem to reject even the possibility of linearity in the temperature domain, in spite of the analyses done by Paul_K and others showing the temperature history is consistent with linearity.
My opinion is that the data are perfectly consistent with a linear response, and that if we are to accept that the response is linear for 50 years and then becomes very non-linear, there ought to exist clear and specific mechanisms, not arm-waving about hypothetical functions which would do it.
.
We agree on one thing: it will accomplish nothing for us to continue this exchange.
SteveF, "I note that there are substantial differences in the non-linearity of different models; they can't all be right about this, so it is clear that whatever processes are involved, those processes are not well understood."
Shape 🙂 There is a bit of study going on about Sudden Stratospheric Warming. Energy blows out through to the stratosphere near the poles. A big part of the research is "wall" energy transfer: as you near the poles, the "wall", or cross-sectional area of the atmosphere, necks down. The energy has to go somewhere, so hello stratosphere. It releases a butt load of energy.
So if the models handle that convergence, they would be non-linear. If they don't handle it correctly, they would be all over the place.