The late great Spike Milligan used to tell a story about coming across his young daughter drawing. He asked her what she was doing, and she replied “I’m drawing a picture of God.” Spike laughed and said “But no-one knows what he looks like.” With a pained look, his daughter snapped back “Well they will when I’ve finished the drawing!”
In Part 1 here I got as far as the application of this energy balance equation to 34 years of OHC data from GISS-E.
dH/dt = F(t) – λ1ΔT – λ2ΔT^2 – λ3ΔT^3 – λ4ΔT^4 + (error of order ΔT^5) … B.3
I demonstrated that it wasn’t possible to estimate ECS uniquely from this short-run data, and I promised to develop two additional models to allow further tests using the GISS-E temperature and OHC data series. However, before I do so, there is something I must do for humanitarian reasons. At the end of Part 1, I left Steven Mosher chewing his fingernails and chain-dropping valium. I was touched by the simple, heart-rending sincerity of his anguished cri de cœur in the comments section: “Crap, the suspense is killing me.”
Steven is waiting for the answer to a key question. For his mental health, I will sacrifice dramatic literary effect and give the answer to the key question upfront.
The Question: Can you or can you not estimate Equilibrium Climate Sensitivity (ECS) from  120 years of temperature and OHC data (even) if the forcings are known?
The Answer is: No. You cannot. Not unless other information is used to constrain the estimate.
An important corollary to this is:- The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM’s estimate of Equilibrium Climate Sensitivity.
This fact arises from the mathematical properties of the problem.
The models I am going to develop today to complete the proof of this will solve for temperature, as well as heat gain, and to do that I must now make some assumptions about the distribution of heat.  Since I am looking at the world through the lens of a GCM, I am going to assume that all energy arriving through radiative imbalance ends up as heat; i.e. there is no other long-term energy storage in, for example, biomass or planetary kinetics.
E. The Single Capacity Model (Third Model in the series)
For this third model, we are going to assume that the change in surface temperature reflects the change in temperature of a single isothermal body with thermal capacity, C1.
C1 is defined here as the energy required in watt-years to change the temperature of the thermal body by one degree K, scaled by the surface area of the planet.
We can then write:
dH/dt = C1 dΔT/dt … E.1
From B.3 in Part 1 of this series, we can see then that
C1 dΔT/dt = F(t) – λ1ΔT – λ2ΔT^2 – λ3ΔT^3 – λ4ΔT^4 … E.2
We can solve this equation numerically to match temperature and OHC time series, and
will do so, but first I want to consider briefly the analytic solution to the linearised form of E.2.
In linear form, the equation becomes the “linear feedback equation”:
C1 dΔT/dt = F(t) – λ1ΔT … E.3
For a fixed constant forcing F, this can be readily solved for ΔT (by Integrating Factor) to yield:
ΔT = F (1– exp(-λ1t/ C1))/λ1 … E.4
We see immediately that as t->∞, ΔT-> ΔTe = F/ λ1.
Since ΔH = C1ΔT, Eq E.4 also describes the shape of the “energy packet” associated with the flux perturbation for a constant step-forcing for this linear form.
It is common to replace the term C1/ λ1 with τ (which has units of time) and to write E.4 as
ΔT = F (1 – exp(-t/τ)) / λ1 … E.5
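As a quick illustration of E.4/E.5, here is a minimal numeric sketch for a constant step forcing; the values chosen for F, λ1 and C1 are placeholders for illustration only, not the fitted parameters discussed later.

```python
import numpy as np

# Placeholder values (illustrative only, not fitted parameters):
F = 3.7      # constant step forcing, W/m^2 (a doubling of CO2)
lam1 = 2.0   # linear feedback coefficient lambda_1, W/m^2 per deg K
C1 = 6.0     # single-capacity heat capacity, watt-years per deg K per m^2

tau = C1 / lam1                      # time constant in years (E.5)
t = np.arange(0.0, 201.0, 1.0)       # years after the step

dT = (F / lam1) * (1.0 - np.exp(-t / tau))   # E.4/E.5: temperature response
dH = C1 * dT                                 # associated "energy packet", watt-years/m^2

print(f"tau = {tau:.1f} years; equilibrium dT = {F / lam1:.2f} deg K")
```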
Because E.3 is linear, and because we have an analytic solution for a single step-forcing, it is possible to use superposition (stacking of solutions in time) in order to solve E.3 for any arbitrary cumulative forcing F(t). Application of this method to solve E.3 provides a near-perfect (simultaneous) match of the GISS-E temperature series and the available GISS-E OHC data series for a λ1 value corresponding to a climate sensitivity of 1.3 deg C for a doubling of CO2.  We shall see that this is less meaningful than it first appears.
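For readers who want to see the mechanics, the sketch below shows one way the superposition (step-stacking) idea could be coded: each year’s increment in forcing launches its own E.4-shaped response, and the total ΔT is the sum of all of them. The forcing series and parameter values are placeholders, not the GISS-E inputs.

```python
import numpy as np

def linear_response_superposition(forcing, lam1, C1, dt=1.0):
    """Solve E.3 for an arbitrary forcing series by stacking step responses.

    Each year's increment in forcing launches its own E.4-shaped response;
    the total dT is the superposition of all of them.
    """
    tau = C1 / lam1
    n = len(forcing)
    t = np.arange(n) * dt
    # Treat F(t) as a staircase of small steps: dF[i] is the step at year i.
    dF = np.diff(forcing, prepend=0.0)
    dT = np.zeros(n)
    for i in range(n):
        # Response at all later times to the step dF[i] applied at time t[i].
        dT[i:] += (dF[i] / lam1) * (1.0 - np.exp(-(t[i:] - t[i]) / tau))
    return dT

# Illustrative forcing history (placeholder, not the GISS-E forcings):
forcing = np.concatenate([np.linspace(0.0, 2.0, 60), np.full(40, 2.0)])
dT = linear_response_superposition(forcing, lam1=2.0, C1=6.0)
```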
This solution method (superposition) offers numerical advantage over other solution forms in dealing with the “spikey” nature of the annual forcing data, but unfortunately this is the only equation to which it can be applied. Since I wish to make like-for-like comparisons between the higher and lower order equation forms, I am therefore using a Runge-Kutta approach (aka “RK4”) throughout this article as the main workhorse to solve the differential equations.
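A minimal sketch of a classical RK4 integration of the single-capacity equation E.2 is shown below; the lambda values, C1 and the linear interpolation of the annual forcing are illustrative assumptions, not the actual fitting code.

```python
import numpy as np

def rk4_single_capacity(forcing, lams, C1, dt=1.0):
    """Integrate E.2, C1*dT/dt = F(t) - sum_k lam_k * dT**k, with classical RK4.

    `forcing` is an annual forcing series (W/m^2); `lams` holds (lam1..lam4).
    Linear interpolation supplies F at the half-steps RK4 requires.
    """
    n = len(forcing)
    t_grid = np.arange(n) * dt

    def F(t):
        return np.interp(t, t_grid, forcing)

    def dTdt(t, T):
        radiative = sum(lam * T**(k + 1) for k, lam in enumerate(lams))
        return (F(t) - radiative) / C1

    T = np.zeros(n)
    for i in range(n - 1):
        t, y = t_grid[i], T[i]
        k1 = dTdt(t, y)
        k2 = dTdt(t + dt / 2, y + dt * k1 / 2)
        k3 = dTdt(t + dt / 2, y + dt * k2 / 2)
        k4 = dTdt(t + dt, y + dt * k3)
        T[i + 1] = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return T
```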
F. Heat Flow Across the C1 Boundary – the Two-Capacity Model (Fourth Model in the series)
The complaint often leveled against the single capacity model is that it “sees” only the short-term behavior of the top 70m or so of ocean, and does not permit modeling of the effects of long-term heat flow into and from a deeper thermal sink.
So we are going to add an additional heat capacity, C2, notionally deeper in the ocean than, and connected to, the C1 capacity, and we will model the heat flux across the boundary between them as well as the energy gain in C2.
For ease of notation, just for this short section, I am going to write T1 and T2 for the temperature gains in the first capacity and the deep capacity, instead of ΔT and ΔT2.
At t=0, T1 = T2 = 0.
Rate of total heat gain in the system is given by:
dH/dt = C1*dT1/dt + C2*dT2/dt … F.1
Assume that the rate of heat diffusion is linearly proportional to the temperature difference between the two bodies, so that at time t, the rate of heat flow OUT of C1 and INTO C2 is given by:
C2*dT2/dt = k(T1 – T2) … F.2
where k is the heat diffusion constant.
Rearranging F.2 and solving for T2 (using an Integrating Factor = exp(kt/C2)), we find that
T2(t, T1) = (k/C2) exp(-kt/C2) ∫_0^t T1(t′) exp(kt′/C2) dt′ … F.3
As T1 -> a constant value, we see that
T2 -> T1 (1 – exp(-kt/C2))
So at high values of t, as T1 approaches equilibrium, T2 converges on T1, and the boundary flux term k(T1 – T2) -> 0.
Combining Equations B.3, F.1 and F.2, and returning to our old nomenclature, where T1=ΔT, our new governing equation becomes:-
C1 dΔT/dt = F(t) – λ1ΔT – λ2ΔT^2 – λ3ΔT^3 – λ4ΔT^4 – k(ΔT – T2(t,ΔT)) … F.4
where T2(t, ΔT) is given by F.3.
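The coupled system F.2/F.4 can also be stepped forward directly, which avoids evaluating the integral in F.3 at every step. The sketch below is one plausible RK4 implementation; the parameter values and the forcing interpolation are placeholders rather than the fitted quantities reported later.

```python
import numpy as np

def rk4_two_capacity(forcing, lams, C1, C2, kdiff, dt=1.0):
    """Integrate the coupled pair behind F.4:

        C1*dT1/dt = F(t) - sum_k lam_k*T1**k - kdiff*(T1 - T2)   (F.4)
        C2*dT2/dt = kdiff*(T1 - T2)                               (F.2)

    Returns surface temperature T1, deep temperature T2 and the total heat
    gain H = C1*T1 + C2*T2 as annual series.
    """
    n = len(forcing)
    t_grid = np.arange(n) * dt

    def F(t):
        return np.interp(t, t_grid, forcing)

    def deriv(t, y):
        T1, T2 = y
        radiative = sum(lam * T1**(j + 1) for j, lam in enumerate(lams))
        dT1 = (F(t) - radiative - kdiff * (T1 - T2)) / C1
        dT2 = kdiff * (T1 - T2) / C2
        return np.array([dT1, dT2])

    Y = np.zeros((n, 2))
    for i in range(n - 1):
        t, y = t_grid[i], Y[i]
        k1 = deriv(t, y)
        k2 = deriv(t + dt / 2, y + dt * k1 / 2)
        k3 = deriv(t + dt / 2, y + dt * k2 / 2)
        k4 = deriv(t + dt, y + dt * k3)
        Y[i + 1] = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    T1, T2 = Y[:, 0], Y[:, 1]
    return T1, T2, C1 * T1 + C2 * T2
```

Feeding in a constant forcing of 3.7 W/m^2 with different values of C2 and k reproduces the kind of step-response comparison described below.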
It is important to understand that the inclusion of this boundary cross-flow term cannot functionally change the ECS. The ECS is uniquely controlled by the values of the λ’s. Instead this term changes the shape of the surface temperature response curve, and the total energy package from any given forcing. Specifically, the equilibration time is dependent on the ratio of k/C2 and the amount of energy transferred for a given step-forcing is dependent on C2. The final equilibrium temperature remains the
same – if the values of the λ’s remain unchanged.
To illustrate this point, the graphs below show a series of solutions to Eq F.4 for a fixed forcing of 3.7 W/m^2 (a doubling of CO2). The λ’s have been set (fixed) to yield an ECS of 1.875 in all cases, and the primary capacity remains unchanged at 6.175 watt-years K^-1 m^-2.
After the first (single capacity) case, the deeper capacity is set to successively larger values, while maintaining the same diffusion constant. In the fourth case, the system has a total thermal capacity which is about 5 times the initial case – all of which must be heated to yield a temperature change of 1.875 deg K; hence the energy transfer from this forcing is commensurately (much) larger. This is achieved by increasing the
equilibration time which prolongs the radiative imbalance.
G. Fitting The Models to the data
The parameters used to fit the data are:-
λ1 up to λ4: the coefficients which uniquely define ECS
C1: the heat capacity of the connected system.
VOLCFAC: an efficacy factor applied to the GISS-E reported “stratospheric aerosol” forcing. [This is always 0.74 ± 0.024 in every match across the entire dataset or subsets of the datasets, for low order and high order solutions, and for application of different forms of numerical solution. There seems little doubt that it is “real” in that GISS-E is doing something to these reported forcings which it is not doing to the other reported forcings, and which manifests itself as an effective scaling factor. It is included because it improves the match, but is largely irrelevant to parameter estimation because the volcanic forcing is excursive in nature.]
For the dual capacity models, we include:
C2: the thermal capacity of the secondary (deeper) system
k: thermal diffusion constant controlling boundary heat flux per unit temperature difference
In addition to the above parameters, the temperature and OHC anomaly data are rebased between the simple model and GISS-E via two parameters Tempshift and Energyshift.
The objective function I am seeking to minimise here and in all the further matches discussed below is a weighted sum of the residuals from the simple model prediction of temperature and OHC
against the GISS-E results. The weighting is based on the sample variances of the GISS-E temperature and OHC data sets, [(s1^2 RSS2 + s2^2 RSS1)/(s1^2 + s2^2)], to ensure that the temperature match and energy match are both equally high quality and that they improve in parallel as we burn additional degrees of freedom with more complicated models.
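In code form, the weighted objective might look like the following sketch; the array arguments are hypothetical stand-ins for the simple-model and GISS-E series.

```python
import numpy as np

def weighted_objective(temp_model, temp_giss, ohc_model, ohc_giss):
    """Variance-weighted sum of squared residuals, as described above:

        (s1^2 * RSS2 + s2^2 * RSS1) / (s1^2 + s2^2)

    where RSS1/RSS2 are the temperature/OHC residual sums of squares and
    s1^2/s2^2 are the sample variances of the GISS-E temperature and OHC series.
    """
    rss1 = np.sum((temp_model - temp_giss) ** 2)   # temperature residuals
    rss2 = np.sum((ohc_model - ohc_giss) ** 2)     # OHC residuals
    s1sq = np.var(temp_giss, ddof=1)
    s2sq = np.var(ohc_giss, ddof=1)
    return (s1sq * rss2 + s2sq * rss1) / (s1sq + s2sq)
```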
H. Results of fitting the models.
The two plots below are spaghetti plots showing the fits of all 8 cases for temperature and OHC against the GISS-E data. They are all sufficiently good that case-by-case comparison is unwarranted. SC refers to Single Capacity and DC refers to Dual Capacity model, so, for example, “DC-3rd” is the application of the dual capacity model to a 3rd order equation in ΔT.
The table below shows the parameters and results for the same 8 cases.
One of the interesting things to note about these results is that if we compare the total thermal capacities for the single capacity and the dual capacity cases, they are almost identical.  In the dual capacity cases, the model had a free option to add in a big capacity with a very slow diffusion rate, but it invariably optimised against the GISS-E results by putting in a total system capacity that was very similar to but just a little larger than the single capacity model (actually about 10% bigger)  and a high thermal diffusion rate so that the two thermal bodies were always well connected. This is because the model is constrained to match the total energy gain (C1 ΔT + C2T2) in the observed OHC data.
The magnitude of the total matched system capacity probably corresponds to less than 90m of ocean depth. Even so, equilibration times for the higher order dual capacity cases are substantial. In the 4th order match, the parameters for the dual capacity case, when applied to a fixed forcing of 3.7 W/m^2, show attainment of 50% of equilibration temperature after 9 years, 75% after 30 years, 90% after 122 years and 95% after 400 years. This doesn’t sound too far away from the reported response times for GISS-E.
The more critical thing to note is that the ECS values from the matches range from 1.3 to 2.5, and it is certain that, if we were to consider higher order solutions, we could find even higher values for the reasons explained in Part 1.
Most importantly, there is no basis on which to discriminate between these various estimates. One can reasonably argue that the dual capacity model is superior to the one-box model in matching the data, since at every order of fit the use of the additional two degrees of freedom improves the match, but one is then (still) left with an ECS range of 1.3 to >2.5.
In conclusion then, we can say that the fact that a simple model with a climate sensitivity of 1.3 deg C for a doubling of CO2 has the ability to hindcast as well as a GCM does not prove that the GCM is in error nor does it prove that the climate sensitivity is actually 1.3. On the other hand, we can also conclude that the fact that a GCM can match temperature and OHC data at any level of ECS tells us quite literally nothing about the validity of the ECS effective in that GCM. This stems directly from the mathematics of the problem. To claim superiority over any other estimate of ECS, the GCM would have to demonstrate that its estimate is better constrained by its ability to match other critical data. At the present time, the GCMs all singularly fail to do this, and hence do not  form a sensible basis for assessing the likely range of ECS values.
The GCM developers have a lot in common with Spike Milligan’s daughter.






whew, I was worried that willard would show up to solve some equations and prove me wrong.
Hi Mosh,
I think you can relax now. I hope that the article gave you a chuckle at least.
At a similar level of trivia, I discovered when browsing this morning that the pun “mathturbation” which I stole from Tamino, and for which he claimed intellectual authorship in his eponymous article, was actually plagiarised (swoon!). See, for example, this 2009 article:
http://www.rs25.com/forums/f9/t112337-suspension-theory-vs-my-experience-math-turbation-warning.html
Tut, tut, tut. You can’t trust anyone these days.
Paul
Have you considered the work which uses models with adjustable parameters to develop PDFs for climate sensitivity based on 20th century temperatures and forcing estimates?
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-6-2.html
(and more recent work by Annan, and follow-ups to many of the studies mentioned in this chapter)
Paul_K (Comment #76891)- Let’s be fair, isn’t it entirely possible that Tamino independently developed the pun?
I am still waiting for Steven Mosher to school me in whatever it is I have no idea what I am talking about…
Paul,
Excellent read and explanation.
Paul_K,
Thanks for this post.
I think there is a constraint which can narrow the credible range of sensitivities: the heat flux between the surface and deep layers. I mean, if the heat gain by deeper water versus surface water is measured and those measured values conflict with what GISS Model E (for example) calculates, then that would seem to cast doubt on the GISS model.
.
One other comment: The two-slab ocean model is not very good in terms of a physical representation of what is actually happening when heat is taken up by the ocean below the well mixed layer. There is no thermally homogeneous deep layer. I think a somewhat better model is a well mixed surface layer (~60 meters) on top of a deep (2000 meters or more) solid block of material with the following (wonderful) properties: a) heat capacity identical to that of water, and b) thermal conductivity that matches the measured diffusivity (eddy driven) of the thermocline. That diffusivity value is (I believe) reasonably well known and constrained by the average rate of upwelling (about 1.2 cm/day) and the measured temperature profile of the thermocline. So the heat accumulation would then consist of a portion due to uniform heating of the surface layer and a portion due to “thermal conduction” down the block of solid material which stands in (thermally) for the deeper part of the ocean. Since thermal conductivity (heat equation) can be numerically modeled if you know the diffusivity, the evolution of the rate of heat uptake by ocean below the well mixed layer should be calculable for any chosen evolution of temperature of the surface layer.
My guess (and it is only a guess) is that this model would behave quite differently from the two-slab model, with a bit more rapid initial total heat uptake, but one that falls more rapidly than the two-slab model at longer times. Net: a little slower initial response (say up to ~5-7 years) to a step change in forcing, but a short term response (say ~15-20 years) that is much closer to the ECS value.
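A minimal sketch of the well-mixed-layer-over-diffusive-slab idea described above might look like the following; the diffusivity, slab depth and grid are assumed placeholder values, and the scheme is a simple explicit finite-difference step driven by a prescribed mixed-layer temperature history.

```python
import numpy as np

def deep_uptake_from_surface(T_mix, kappa=1e-4, depth=2000.0, nz=200, dt_years=1.0):
    """Explicit finite-difference sketch of a diffusive "solid block" below the
    mixed layer, driven by a prescribed annual mixed-layer temperature anomaly
    history T_mix (deg K).

    kappa is an assumed effective (eddy) diffusivity in m^2/s and depth the slab
    thickness in m; both are placeholder values, not measured ones.
    Returns the heat flux into the slab (W/m^2) for each year.
    """
    rho_cp = 4.1e6                      # volumetric heat capacity of seawater, J/(m^3 K)
    dz = depth / nz
    sec_per_year = 3.156e7
    # Sub-step so the explicit scheme stays stable: kappa*dt/dz^2 <= 0.5
    dt = 0.4 * dz**2 / kappa
    nsub = max(1, int(np.ceil(dt_years * sec_per_year / dt)))
    dt = dt_years * sec_per_year / nsub

    T = np.zeros(nz)                    # slab temperature anomaly profile
    flux = np.zeros(len(T_mix))
    for i, Ts in enumerate(T_mix):
        for _ in range(nsub):
            # Fixed top boundary at the mixed-layer temperature, insulated bottom.
            Tpad = np.concatenate(([Ts], T, [T[-1]]))
            T = T + kappa * dt / dz**2 * (Tpad[2:] - 2 * Tpad[1:-1] + Tpad[:-2])
        flux[i] = rho_cp * kappa * (Ts - T[0]) / dz   # conductive flux at the top
    return flux
```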
.
Of course a well-mixed layer/solid block model does have limitations… it can’t model seasonally driven changes in the well mixed layer (“AKA seasonal thermocline”).
Re:M (Comment #76892)
June 9th, 2011 at 5:52 am
Hi M,
I think you’ll find that most of those studies do support very large ranges of ECS. In that sense, they are compatible with what I am showing here. On close inspection, though, you find that they are based on models that fix the functional form of response for the radiative flux term one way or another (analogous to my fixing the number of lambdas I am going to use) and then consider the uncertainty arising from the input data, especially the forcings. This implies that we already know how the feedbacks work. I am actually trying to make a point which is a little different – that even if you know the forcings, the uncertainty in the functional form of the radiative flux term (or in the exact feedback mechanisms, which comes to the same thing) makes the problem fundamentally ill-conditioned. So it is easy to delude oneself.
Knutti 2002 stated:
“To attach probabilities to our results, we have simulated a Monte Carlo set of 25,000 global warming simulations using five ocean model set-ups and taking into account the uncertainties in radiative forcings and climate sensitivity (see Methods section). From those simulations that are consistent with the observed surface warming and ocean heat uptake, we can only derive a very broad probability density function (PDF) for the climate sensitivity (see Supplementary Information) which excludes neither very small climate sensitivities (around 1.2 K, the value if no feedbacks are present) nor unreasonably large values far above the widely accepted IPCC maximum of 4.5 K. This result is in agreement with recent PDF estimates for climate sensitivity, based on the observed surface warming, natural variability and either ocean models [16,17] or observed ocean warming [18,19]. This indicates that given the uncertainties in the radiative forcing, in the temperature records, and in currently used ocean models, it is impossible at this stage to strongly constrain the climate sensitivity,…”
A little later, the paper states:-
“Furthermore, all results from comprehensive 3D climate models suggest a range for the climate sensitivity of about 1.5 to 4.5 K (ref. 1). By adopting this range as an additional constraint, which is completely independent of this study, a narrower forcing range results (1.6–2.5 W m^-2, Fig. 2c).”
This is not the worst example of circular reasoning I’ve seen.
Re: Andrew_FL (Comment #76895)
June 9th, 2011 at 7:04 am
I think I’ll reverse the null hypothesis. Tamino just needs to prove that he has never met anyone who owned a Subaru.
Re:jeff id (Comment #76897)
June 9th, 2011 at 7:21 am
Thank you, Jeff.
I’m going to start out by giving kudos to Paul_K for his hard work, but I’m also going to say that I think Paul is being a little overly pessimistic here about the ability of models to be constrained by data. I’ll explain why.
I agree given the current (at least up to AR4, not quite current) state of modeling, there is a limit to how much you can constrain the models. I think this is why people like Judith Curry suggest sensitivity ranges as large as 1-9°C/doubling for CO2 ECS at 95%.
As has been pointed out, one big problem is the large uncertainties in the atmospheric forcings for hindcasting, and the difficulty in even bounding those uncertainties….it’s very hard to bound a model, when your inputs are uncertain, and the uncertainty bounds in those inputs are unknown.
The second big problem, is the amount of unphysical fiddling with the models themselves—this is a point made by Knutti, 2008, “Why are climate models reproducing the observed global surface warming so well?”
The other comment I’ll make is that it isn’t unexpected that, when you look at a single bulk parameter like global mean temperature, the dimensionality of the problem is substantially reduced from what it would be for the full system. (I’ve succeeded in formally doing this in other disciplines, and, yes, this was peer reviewed.)
The problem in this case, is there are droves of data for different regions of the Earth, that can be used to refine the model fit.
For example, instead of considering a single global T, break it up into, e.g., 36 coupled equations in T_ij (mean values of surface temperature T(phi, theta) in 45°x45° grid blocks…feel free to use equal-area partitioning if you prefer). And while you’re at it, include the precipitation record in each of those blocks.
One of the things you’ll find is that there are features at these scales (that largely get washed out by the global mean average) that put a lot of strain on the GCMs, and even with the fudging, I’m not sure any of them get it right. (See this section of AR4 on comparison of models to data for ENSO as an example of what I am thinking about.)
The point being that, as long as you have reduced it to a 0-dimensional problem, a single fudge factor for aerosols will match your desired value for CO2 ECS, given the uncertainty in the forcings.
But because this aerosol fudge factor isn’t allowed to vary across the globe, once you’ve chosen it, it makes unique predictions for each of the T_ij.
The same can be said of the forcings…they are largely globally invariant over the time scale of the GCMs, so while you have freedom to tweak them, the tweak looks the same at each {i,j}. I suspect requiring simultaneous agreement at all {i,j} pairs (within some as yet undetermined uncertainty bounds) would greatly reduce the freedom the modeler has to play with in dialing in his own personal favorite value for CO2 ECS.
What Paul has done here is a very nice demonstration of the limits of comparing globally averaged output of GCMs to global mean temperature data. What we learn here (and perhaps also in Knutti, 2008) is that there is very little real constraint placed on the models in this case, and Paul has helped illustrate why.
mathturbation
What pseudoscientists, cranks and charlatans use to “prove” their self-indulgent theories.
The nutjob used mathturbation to show that the Bible contains an encrypted message warning of the September 11th attacks.
by AbnormalBoy Dec 2, 2004
(from Urban Dictionary, 1st hit in Google)
(PS I guess Tamino could be AbnormalBoy)
SteveF (Comment #76899)
June 9th, 2011 at 7:48 am
SteveF,
I will take on trust your comments concerning the two-slab model.
In fact, I don’t disagree with any of your comments, but believe that we may see the technical priorities differently.
I am sure that there are constraints which can narrow down the credible range of sensitivities, and I agree with you that having a super-accurate ocean model is one of them. My prejudice, however, is that it is a higher priority to ensure good quality measurements of SST and OHC. (This is necessary anyway for the development of an improved ocean model.) With just these data alone, we should be able to narrow down the range of sensitivity, even if we can’t perfectly model the oceanic heat distribution dynamically. We could prescribe SST and OHC, and test the atmospheric models to ensure that (a) the right amount of energy is coming in and going out in the right places on the planet, and in so doing, (b) discriminate clearly between SW heating through albedo reduction and LW heating via atmospheric absorption. At the moment, the models cannot match these data. I believe that if they were constrained to do so, then they would need to land on a lower climate response to GHG plus atmospheric feedbacks and to seek an exogenous explanation for the large decrease in cloud cover during the main heating period.
RE: Paul_K (Comment #76891)
So if Tamino plagiarised “mathturbation” shouldn’t his blog be withdrawn? 😉
One word does not make a plagiarism.
Carrick (Comment #76905),
Good comment.
However, you say:
I don’t think that is right. There are two fudge factors which go hand-in-hand: aerosol forcing history and ocean heat uptake response to changing surface temperatures. If you assume a certain aerosol forcing history, then that automatically forces you to assume a corresponding history of ocean heat uptake that is consistent with your diagnosed ECS. If you diagnose a high ECS, then the ocean heat uptake and aerosols have to jointly off-set most of the GHG radiative forcing. With simple single-slab or dual-slab oceans (one or two time-constant oceans) it is easy to just choose the appropriate time constants, the ones which are consistent with your selected aerosol history. But a more sophisticated ocean model provides additional constraints, such as the profile of heat uptake with depth. Measured ocean heat uptake and the uptake-versus-depth profile (Argo) provide real (non-theoretical!) constraints on models.
.
I think it no accident that the accuracy of Argo data is routinely questioned (and seemingly ignored) by modelers… they understand the issues.
.
James Hansen’s most recent “paper” (or maybe better, his most recent sermon on the mound?) simply rejects several Argo-based ocean heat content studies (which all agree quite closely) in favor of the single study which gives a drastically higher calculated heat uptake…. and the only one which is reasonably consistent with very high climate sensitivity. I am not surprised by this.
bobdroege (Comment #76909)-I think you’ll find that the definition of plagiarism generally used in academia is broad enough to include claiming coinage of a word one did in fact not coin. Whether such a broad definition is a reasonable basis for penalties that might result in academia is another matter.
Re Paul_K Comment #76903 June 9th, 2011 at 8:29 am
Paul_K says “Tamino just needs to prove that he has never met anyone who owned a Subaru.”
____
What does that mean?
Paul K.
It gave me a great chuckle. I can well imagine that some folks thought that poor old Mosh would be proved wrong on Lucia’s site.
The other thing is I think we are hampered in our understanding by always having to refer to “the models”. I would think after 20 years that we could settle down on a few models (as opposed to 20 or so).
Andrew.
school starts here
http://www.newton.ac.uk/programmes/CLP/seminars/120617001.html
and start here
https://www.cfa.harvard.edu/~wsoon/DemetrisKoutsoyiannis08-d/RodwellPalmer07.pdf
Lots more reading and viewing to do.
steven mosher (Comment #76917)-Thanks for the reading. Am I correct in surmising that this is in response to my speculation that the use of weather models to construct reanalysis might be more uncertain than a GCM “hindcast”? If so, I can see the objection to that, as when GCMs are put through the same kind of data assimilation system they produce systematic errors; I believe that to be what the paper is saying, correct?
If I am still missing the point, which had something to do with reanalyses, I know you don’t believe this but I am no fan of reanalyses. So there is little consequence to me not knowing their problems if I never use them.
The “other” message of the paper seems to be that the “weather forecasting skill” of models continually re-initialized with observations depends somewhat on the sensitivity, allowing the possibility that correcting systematic errors in this area can help constrain its value. It’s a novel approach, but it doesn’t look like it will get very far to me, aside from chopping a little off the tail of the distribution at physically preposterous sensitivities like 12 K.
Andrew FL,
I think the burden of proof is on you to prove that Tamino claimed he coined the word. I’d be surprised if he did indeed do that; I haven’t read a post of his where he claims to have coined the term.
This doesn’t mean he claimed to have coined the term.
“I have, however, decided to give a name to such theories: mathturbation.”
from his blog of course “http://tamino.wordpress.com/2011/02/26/mathturbation/”
And you can beat Paul K’s cite from 2009 for math-turbation by three years on the first page of google results for mathturbation.
How is “I have, however, decided to give a name to such theories” not a claim to priority on the creation of the term?
“you can beat Paul K’s cite from 2009 for math-turbation by three years on the first page of google results”
This only makes it worse, since it means the term has been around long enough that Tamino could easily have checked to see if this term had been thought of before, and found that indeed the term is not new. If this were an academic paper he would have failed to meet the due diligence of researching prior work on the subject, prior work easily accessible for years before he “gave the name” to “such theories”.
But of course this is all beside the point since it isn’t an academic paper anyway. Sheesh, take a joke why don’t ya?
Re: Andrew_FL (Jun 9 13:52), Nope, that’s merely a good place to start to acquaint yourself with all the issues. In particular the differences/similarities between weather forecast models and climate models, the parts they share/don’t share. Basically, as to your blithe assertions about them: it’s far more complex than that.
with regard to NCEP I wasn’t really addressing you, but you seemed to have some odd notions about what climate models do and don’t do as opposed to weather models. So, I’ll keep pointing you at reading material and viewing material so that you don’t make black and white statements. That’s all. Then you will know more about what you are talking about.
I will be the first to admit I don’t know much about how weather models work or how exactly they differ from GCMs. The papers in question are informative in that regard. Up until now, my knowledge of this distinction mainly consisted of what others have said and what I thought (incorrectly, in at least some ways) was inferable from that. I guess I must be more careful about making statements based on so little information. Well, back to reading.
Carrick (Comment #76905)
June 9th, 2011 at 9:07 am
Carrick,
Thanks for your comments, although I was left unsure of your main point. Did you mean to say that I was overly “optimistic” about the ability of models to be constrained by data? My contention, if I have been less than clear, is that matching temperature and heat data tells us nothing about the validity of a GCM’s estimate of ECS, and that it is only by demonstrating a match to other critical data that the GCM can lay claim to having a superior estimate. I have not ruled out the possibility that such a match (to other critical data) may be possible, and that it can ultimately constrain estimates of ECS, but it clearly hasn’t happened yet. More cynically, I would say that it isn’t going to happen until the climate modelers show some willingness to consider indirect amplification of solar irradiance.
I actually found nothing to disagree with in any of your comments (even the polite kicking over reduced dimensionality) until you started to discuss spatial variation. You suggest that “One of the things you’ll find is that there are features at these scales (that largely get washed out by the global mean average) that put a lot of strain on the GCMs,…” and “I suspect requiring simultaneous agreement at all {i,j} pairs (within some as yet undetermined uncertainty bounds) would greatly reduce the freedom the modeler has to play with in dialing in his own personal favorite value for CO2 ECS.”
I would have more sympathy for this argument if it were evident that the modelers were actually giving themselves hernias to produce a reasonable match to temperature and precipitation on a local or even a regional scale. However, the evidence is that they are doing a lousy job of matching regional temperature variance, local temperature variation and local precipitation. (See for example D. Koutsoyiannis, A. Efstratiadis, N. Mamassis & A. Christofides, “On the credibility of climate predictions”, Hydrological Sciences Journal – Journal des Sciences Hydrologiques, 53 (2008).)
Moreover, I would be prepared to forgive the GCMs a poor quality match at local and even regional scale, if they could match the primary energy controls by latitude and at global aggregate level. Orwell’s line – all animals are equal but some are more equal than others – is especially true when considering which observational data are critical to a model match, or at what aggregate level they need to be credibly matched. This decision is normally end-use dependent. If the results are to be used to estimate long-term climate sensitivity and to project average planetary temperature in 50 to 100 years’ time, then the modeler, to have minimum credibility, needs to show that his atmospheric modeling produces heating ratios above the boundary layer which match observational data, and that the system as a whole produces a credible balance by latitude of SW and LW heating during the satellite era.
Paul_K (Comment #76929)- What about matching the altitudinal profile? That seems a lot more important than just the surface.
I would also suggest ERBE and CERES as providing some additional constraint but I am unsure if that would really be much different from fitting OHC.
Paul_K, I meant to say you were being overly pessimistic about the possibility of the models matching data in the sense that it is very easy for them to match data, as long as you only consider global mean temperature anomaly. If you consider a model with I x J grid points, you have much less wiggle room than you do if you just use global mean temperature to constrain the models.
I hope you weren’t seeing my talking about reduced dimensionality as a slight or a kicking…I use 0-D models myself, and find them quite useful. (I also collapse 3-d physics into a single 1-d physics quite frequently when studying wave equations.)
Andrew, I’ve thought about collapsing the theta,phi directions and only leaving z. That is a common thing to do as well…unfortunately for climate it doesn’t work so well, because most of the important variations are in the horizontal plane (no way to model Rossby waves for example).
Paul_K,
What about looking at two (NH and SH) or three (northern extratropics 90 to 30, tropics 30 to −30, and southern extratropics −30 to −90) pieces? Because the NH and SH are 180 degrees out of phase, you throw away a lot of information when you take the global average.
Re: Andrew_FL (Jun 9 14:50),
Oh crap, wrong andrew. My bad.
Re:Andrew_FL (Comment #76932)
June 9th, 2011 at 7:55 pm
Andrew,
I was referring to matching the altitudinal profile when I spoke about the need to match heating ratios above the boundary layer. The current mismatch in the tropics of heating ratios at mid and upper troposphere level relative to surface heating rates suggests that the water vapour feedback is too high.
However, on your second point, I think there is a big difference between on the one hand matching the time profile of the total net radiative difference and on the other hand matching the trends in LW and SW separately. The first can be considered analogous to matching OHC to a first approximation, at least at aggregate level, if we ignore the spatial variation. The second however is providing additional critical discriminatory information on the cause of the radiative heating. This is information which cannot be obtained from OHC and at present, no GCM gets even close to matching/explaining these data. The IPCC Figure 9.3 which I referenced in Part 1 shows a major bust in outgoing SW in the GCMs. The Lindzen and Choi 2010 paper shows a major bust in calculation of outgoing LW in the atmospheric models used in the GCMs. The models are using a massive compensating error to get the total net balance approximately right, which is the bit that one might consider can be approximated by OHC, but fail to get the attribution of that heating even close to matching the observational information.
DeWitt Payne (Comment #76938)
June 9th, 2011 at 9:02 pm
I agree on the information content. I set up a latitudinal model a while back to look at seasonal temperature variation, but it wasn’t dynamic. (I wanted to see the size of error introduced by application of S-B to globally averaged temperatures.)
The challenge for any dynamic model with spatial discretization is that it is necessary to account for large energy transfer between the latitudes. There are analytic models of the divergence flux, but I don’t know how useful they have proved. Pretty quickly, it becomes necessary to discretize z to handle the ITCZ, and to add an approximation to the radiative transfer calculations. After doing this, you have something that looks like a poor relation to the AMIP suite. I was left with the feeling that there may not be many useful parking places between an aggregate analytic model and a full-bore atmospheric model, but I would love to be proved wrong on this.
Re:Carrick (Comment #76935)
June 9th, 2011 at 8:43 pm
Carrick,
Thanks for the clarification. Understood and agreed.
I took no offence at all at your comment on reduced dimensionality. I was actually expecting a good kicking from Tom Vonk, but he hasn’t shown up (yet).
Paul
Paul_K,
Best title ever.
If Tamino thinks he is the owner of that excellent descriptive, then he is guilty of doing too much of what it implies.
A quick visit there shows Tamino is obsessed with mathturbation and STD’s and their connection to AGW.
Perhaps he is like a certain Congressman in the news, and doing things very different than what one might imagine.
Hi Paul_K,
Your assertion that
in the first part of this post made me go back and re-read the “IPCC AR4 Observational Constraints on Climate Sensitivity”.
I’m not sure your assertion is completely fair: from the “Summary”:
Results from studies of observed climate change and the consistency of estimates from different time periods indicate that ECS is very likely larger than 1.5°C with a most likely value between 2°C and 3°C. The lower bound is consistent with the view that the sum of all atmospheric feedbacks affecting climate sensitivity is positive. Although upper limits can be obtained by combining multiple lines of evidence, remaining uncertainties that are not accounted for in individual estimates (such as structural model uncertainties) and possible dependencies between individual lines of evidence make the upper 95% limit of ECS uncertain at present. Nevertheless, constraints from observed climate change support the overall assessment that the ECS is likely to lie between 2°C and 4.5°C with a most likely value of approximately 3°C
.
Though I do find it interesting that they think the most likely value from the observations lies between 2 and 3 deg C.
PeteB, I suspect he was thinking of this instead:
Knutti, 2008, “Why are climate models reproducing the observed global surface warming so well?”
The GCMs have a control knob, which is cloud feedback that allows them to dial in any ECS they want. There is then enough uncertainty in the aerosol forcings to get a match between global mean temperature, measurement and model. (The relevant section in the IPCC AR4 has similar language.)
I had gotten kicked around at one point for suggesting that they tune the forcings to fit model to data. At this point, it is now obvious that this is what they in fact do.
I think Paul_K is exactly right…as long as you restrict yourself to global mean temperature, there are enough degrees of freedom in the problem that the inverse problem is unconstrained.
Paul_K as an alternative to a 2-d model, I’d suggest collapsing the 3-d model into a pure latitudinal one. Most of the interesting variations in temperature trend are along lines of constant latitude, so it should contain the most explanatory power.
Re:PeteB (Comment #77007)
June 11th, 2011 at 1:22 am
PeteB,
Guilty. Having revisited the issue in response to your question, I agree that my assertion was not “completely fair”. However, I don’t think it was particularly misleading either. Here is what I should have written for a fuller explanation:
The sourcing of the range of ECS values in TAR (1.5 to 4.5 deg K) was heavily criticised by sceptics for its lack of transparency. It was also criticised because despite stating a dozen times that the range of ECS values was 1.5 to 4.5, all of the projections included in the report were based on GCMs with a range from 2.1 to 5. The IPCC eventually acknowledged (in AR4) that: “These [previous] estimates were expert assessments largely based on equilibrium climate sensitivities simulated by atmospheric GCMs coupled to non-dynamic slab oceans.” In AR4, possibly to counter the criticisms levelled at the TAR, there is an attempt made to demonstrate that estimates from independent assessments from different time periods lend support to a lower bound of 1.5 deg K at 95% confidence level and 2.0 at 90% confidence level. These estimates come from a loose combination of pdfs from a number of studies which fall into 3 classes:
(a) Application of an energy balance model (EBM) to the instrument period, while varying forcing data and OHC data
(b) Application of an EBM to reconstructed data from the past millennium, while varying forcing data and reconstructed temperature
(c) Application (in two instances) of an Earth-System Model of Intermediate Complexity (EMIC) to the Last Glacial Maximum.
The IPCC acknowledge quite correctly several times in the text that these applications are subject to “structural uncertainty” in the models (which is what my entire article is about).
The exact methodology for combining these pdfs is not explicit in AR4 – probably because any methodology would be open to criticism on the grounds that (i) the structural uncertainty renders the input pdfs very questionable and (ii) the final combined pdf is easily shown to be dependent on the frequency of, or the weighting applied to, the different types of study. Nevertheless, the IPCC felt sufficiently confident in its combinatorial methodology to conclude that “…and the consistency of estimates from different time periods indicate that ECS is very likely larger than 1.5°C with a most likely value between 2°C and 3°C”, while recognising that these studies could not be used to define an upper bound.
More practically, if one looks at the ultimate explanation of the AR4 range, which is here
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-5.html#box-10-2
it is rather more obvious that the GCMs have remained the primary source of the range of estimates and the other “independent studies” are included to support the exclusion (in the GCMs) of ECS values below 2.0 deg K for a doubling of CO2.
For a truly independent view of whether data over different historic timeframes can be used to support a high climate sensitivity, I would recommend
http://www.phys.huji.ac.il/~shaviv/articles/2004JA010866.pdf
Shaviv shows that estimates of ECS (without considering a GCR/cloud link) across many different timescales work out to 0.54 ± 0.12 deg K per W/m^2 (corresponding to around 1.9 ± 0.4 deg C for a doubling of CO2). With the GCR/cloud link this estimate comes down and the variance is reduced.
Paul