Spatial Variations in the Temperature Anomaly: Atmoz vs. Pielke Sr.

There appears to be a Global Climate Change Blog Kerfuffle over the IPCC’s equation describing the radiative balance of the planet. At his blog, and in a peer reviewed article, Dr. Roger Pielke Sr. discussed several perplexing issues related to using Global Mean Surface Temperature Anomalies to understand changes in the heat content of the earth’s climate. One of his points (which should be rather uncontroversial) has elicited some rather puzzling criticism.

The point is rather arcane: If I understand Dr. Roger Sr. correctly, he suggests that, should one wish to estimate climate sensitivity, λ, based on both a) the measured GMST (global mean surface temperature) and b) a very specific approximation for the climate energy budget contained in the IPCC report, one should anticipate imprecise results.

This observation should be rather uncontroversial: If you use an approximation to estimate some things, the answer may be imprecise.

Nevertheless, the observation has resulted in at least two perplexing rebuttals. In this post, I’ll discuss the rebuttal from Atmoz which I believe is entirely tangential to Dr. Roger Sr.’s point.

What was Dr. Roger Pielke Sr.’s point?

Dr. Roger’s point was: failing to account for spatial variations in the temperature introduces uncertainty in the estimate of the radiative heat flux from the planet.

The specific equation Dr. Roger Sr. discusses is:
(1) dH/dt = f − T′/λ

where H is the heat content of the land-ocean-atmosphere system, f is the radiative forcing (i.e. the radiative imbalance), T’ is the change in global average surface temperature in response to the change in H, and λ is called the “climate feedback” parameter which defines the rate at which the climate system returns forcing to space as infrared radiation and/or as changes in reflected solar radiation (such as from changes in clouds, sea ice, snow, vegetation, etc).

The approximation that concerns Dr. Roger Sr. is the use of a single, global-mean temperature anomaly, T’, to estimate the radiant heat loss using the term T’/λ. In his paper, he specifically discusses the difficulties associated with estimating the climate sensitivity, λ. (The corresponding equilibrium climate sensitivity for a doubling of CO2 is estimated to fall between 2.0 °C and 4.5 °C, and is of great importance to the debate over AGW.)

Equation (1) is used in several different ways to estimate λ (a quantity of great interest to climate scientists). One way is to neglect the time-varying term and estimate λ = T’/f.

Even if the approximation λ = T’/f holds, to obtain the correct result for λ, one must associate the correct value of T’ with f.
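
To make the bookkeeping concrete, here is a minimal Python sketch of that quasi-equilibrium estimate. The forcing and anomaly values are purely hypothetical placeholders, not measurements; the 3.7 W/m² per CO2 doubling is the usual canonical conversion figure.

```python
def lambda_quasi_equilibrium(forcing, anomaly):
    """Estimate lambda = T'/f by setting dH/dt = 0 in equation (1).

    forcing: radiative imbalance f in W/m^2
    anomaly: surface temperature change T' in K
    returns: lambda in K per (W/m^2)
    """
    return anomaly / forcing

# Hypothetical placeholder values, for illustration only:
f_guess = 1.0        # W/m^2
t_prime_guess = 0.8  # K
lam = lambda_quasi_equilibrium(f_guess, t_prime_guess)

print(f"lambda ~ {lam:.2f} K per (W/m^2)")
# Multiplying by ~3.7 W/m^2 (the canonical forcing for doubled CO2) expresses this
# as an equilibrium warming per doubling, the form in which the 2.0-4.5 C range is quoted.
print(f"implied warming per CO2 doubling ~ {3.7 * lam:.1f} C")
```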

With regard to the T⁴ issue, and variations in temperature on the surface of the planet, that is the important point in Dr. Roger Sr.’s paper: To reiterate, one must associate the correct values of T’ with the corresponding values of f. That’s it.

What is Atmoz’s rebuttal?

Atmoz attempted to rebut the contention that errors will result from using T’ to estimate λ by demonstrating that a T⁴ weighting of the spatial variations in the earth’s surface temperature resulted in only a small difference in the calculated value of the earth’s surface temperature anomaly, T’.

This is illustrated in the figure to the left. As you can see, the temperature anomaly is mis-estimated by only 10%.

Yep. The error in T’ is only 10%.

However, oddly enough, that point is almost irrelevant with respect to the point Dr. Roger Sr. made in his paper. The relevant figure with respect to Dr. Pielke’s point would have illustrated the radiant heat flux, f, associated with the measured T’.

Had Atmoz shown that, the error in the estimate of the increase in the radiated heat flux associated with T’ would be approximately [(306 + 0.85)⁴ − 306⁴]/[(306 + 0.7)⁴ − 306⁴] − 1 ≈ 20%.
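
As a quick check on that arithmetic, here is a minimal Python sketch evaluating the ratio above. The 306 K base temperature and the 0.7 K and 0.85 K anomalies are the values used in this post; the Stefan-Boltzmann constant cancels out of the ratio.

```python
def t4_increase(t_base, anomaly):
    """Increase in T^4 (proportional to the radiated flux, sigma*T^4) for a given anomaly."""
    return (t_base + anomaly) ** 4 - t_base ** 4

T_BASE = 306.0                       # K, base temperature used above
dq_085 = t4_increase(T_BASE, 0.85)   # flux increase implied by a 0.85 K anomaly
dq_070 = t4_increase(T_BASE, 0.70)   # flux increase implied by a 0.70 K anomaly

relative_error = dq_085 / dq_070 - 1.0
print(f"relative error in the radiated-flux increase: {relative_error:.1%}")  # ~21%, i.e. roughly 20%
```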

That is to say: If the climate had been in quasi-equilibrium at each of those temperatures, the empirical value of “f”, the forcing or radiative imbalance associated with “T'” would be incorrect.

This 20% error is due to spatial variations in temperature, and is separate from the simple non-linearity associated with estimating the total radiated flux (as opposed to the imbalance) for a 1 °C temperature rise above 300 K. That error would be approximately 1%.

Does the 20% error matter?

Is this 20% error in the estimate of the heat flux important? I don’t know.

As with all errors, its importance depends on the precision one seeks in the answer. Sometimes all one wishes is an order-of-magnitude estimate, and in that regard, 20% is unimportant.

However, if Dr. Roger Sr. is correct, the magnitude of “f” associated with “T′” may be off by 20%. This would result in a 46% uncertainty in the estimate of f.

So, how well do we need to know “f”? Or λ, the climate sensitivity? I think the uncertainty range for λ is thought to fall between 2.0 and 4.5 °C for a doubling of CO2, and different policy decisions depend on whether λ is equal to 2 or 4.5.

Let us suppose the correct value is 3. Then, one might notice that an error of ±20% could result in values as low as 2.4 or as high as 3.6. In other words: the error due to spatial variations may account for a sizeable fraction of the uncertainty interval.

So, in this regard, one might think a 20% uncertainty due to this one issue was nothing to scoff at. Likely, it is for this reason that the 20% error was worthy of notice by the peer reviewers selected by J. Geophys. Res., the journal that published Dr. Roger Sr.’s paper.

Likely those reviewers don’t post at Atmoz!


PS. To those wondering if I will comment on the argument at the other blog post, I will do so soon. It involves a Maclaurin series done in sufficient detail to capture the effect Dr. Roger Sr. discusses. This involves more proofreading than normal, but results in more or less the same answer one gets using the simple arithmetic Dr. Roger Sr. used in his original blog post.

References:
1. Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

17 thoughts on “Spatial Variations in the Temperature Anomaly: Atmoz vs. Pielke Sr.”

  1. lucia, you say:

    “That is to say: If the climate had been in quasi-equilibrium at each of those temperatures, the empirical value of “f”, the forcing or radiative imbalance associated with “T’” would be incorrect.”

    I think the radiative-equilibrium concept itself is kind of fuzzy. And now you’ve introduced ‘quasi-equilibrium’ into the discussion.

    What is the estimated time scale for Earth to return to radiative equilibrium? I think it’s in the high 100s to the low 1000s of years. If that is correct, how can it be assumed that the system is already approaching quasi-equilibrium so that the climate feedback parameter can be estimated? On a practical basis, how does one estimate the parameter when the quantity on which it is based is not monotonically increasing or decreasing?

    Thanks.

  2. Hi Dan,
    I don’t think anyone knows the time constant. However, I think the effective relaxation time constant based on measurements at the earth’s surface is roughly 10 years. That’s based on my curve fit that steven mosher calls “Lumpy”. Steve Schwartz thinks something like 5 years.

    Obviously, some parts of the planet will take longer to reach equilibrium because the earth isn’t really a lumped parameter. (The problem of defining a single time constant is qualitatively similar to defining one for cooking a turkey: strictly speaking, unless the ‘Biot number’ for an object is very small, there isn’t a single time constant for all points. And the earth does NOT have a small Biot number.)

    Still, I’m an engineer. So, I’m willing to attach time constants with suitable caveats, and I think, with regard to GMST, it’s about 8-10 years. I’m also willing to use “quasi-equilibrium” for the planet, just as I would if I were studying the heat response of a solar pond, ignoring diurnal variations compared to seasonal variations, etc.

    Yes. This is fuzzy. But everyone does it.
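
    Here is a minimal sketch of what I mean by a lumped response with an e-folding time of roughly ten years; the sensitivity and forcing numbers are placeholders, not fitted values from “Lumpy”:

    ```python
    import math

    # Lumped-parameter version of equation (1): C dT'/dt = f - T'/lambda.
    # All numbers below are illustrative placeholders.
    lam = 0.8             # K per (W/m^2), placeholder sensitivity parameter
    tau_years = 10.0      # assumed e-folding time for the GMST response
    heat_cap = tau_years / lam   # effective heat capacity implied by tau = C * lambda
    forcing = 1.0         # W/m^2, placeholder step forcing switched on at t = 0

    dt = 0.01             # years per step
    t_prime = 0.0
    history = []
    for _ in range(5000):                 # integrate 50 years
        t_prime += dt * (forcing - t_prime / lam) / heat_cap
        history.append(t_prime)

    after_one_tau = history[round(tau_years / dt) - 1]
    print(f"T' after one e-folding time: {after_one_tau:.2f} K "
          f"(analytic: {lam * forcing * (1 - math.exp(-1)):.2f} K)")
    ```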


    Update: Corrected because steven mosher wants credit for naming my model “Lumpy”.

  3. lucia, way too fuzzy for me. The response/trends of the measured data do not correspond to the premise of the equation, the equation itself is incomplete/incorrect relative to the physical system to which it is applied, and, for me, there are questions about whether the measured data actually correspond to the variables and physical processes purported to be described by the equation.

    Under these conditions I suspect that a very wide range of numerical values for the parameters can be obtained.

    “But everyone does it.” Is that kind of like Scientific Consensus?

  4. Dan,
    Dr. Pielke’s paper discusses many difficulties associated with calculating λ using equation (1). I think he mentions at least 4. I think you’d enjoy reading it– the link is above.

    The goal of this analysis is to discuss the contribution of only one of them: The difficulty associated with spatial gradients. Roger Sr. provided an order of magnitude estimate of the error associated with not considering those in (1).

    Atmoz and Eli Rabett scoffed at the idea that those errors existed, showing a graph that tended to hide their impact.

    I’m basically confirming that Pielke is correct about this specific item. My follow-on post does the Taylor series expansion to incorporate those terms into equation (1), which would at least somewhat fix up this specific issue. (And it does not correct other issues.)

    I don’t mean my discussion of this particular issue to imply the other 3 or 4 issues are of lesser importance. I can only look at one issue at a time in any concrete numerical way, and this was the one Atmoz and Rabett scoffed at. That drew my attention.

    Turns out they are wrong; Pielke Sr. is right.

  5. If Pielke’s point is that a global average temperature is insufficient to derive global integral fluxes, he is right but not correcting any incorrect method common in contemporary practice.

    If his point is that it is insufficient to identify meaningful trends at the surface, Atmoz’ rebuttal suffices.

    As for the radiative balance time constant, it is on the order of weeks; this allows the ocean and ice surfaces to be reasonably approximated as fixed and is a meaningful if imperfect constraint. If you eschew this approximation you are indeed forced back to the adjustment time scale of the upper ocean/sea ice system, which is on the order of a couple of years, again approximating land surface and land ice as fixed. If you eschew the latter approximation then the time scale is indeterminate.

    An e-folding time on the order of weeks allows a quasi-equilibrium approximation to be very useful, though, which is why it is used. The actual imbalance is what causes changes in the thermodynamic energy of the atmosphere/ocean system, so you had better hope it is small.

    If climate were constant, current forcing would have led to an imbalance on the order of 2 watts per meter squared, built up gradually over a multidecadal time scale. It is not difficult to invert that to a change in heat content of the ocean with a constant land surface, or a change in temperature given a constant ocean (assuming a constant atmosphere per the previous sentence). Allocating that imbalance to the components of the system is the whole issue, but you can’t just wave it away because you don’t like it.

    Sheesh.
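
    To put a rough number on that inversion, here is a minimal sketch; the 2 W/m² figure is the order-of-magnitude imbalance mentioned above, and the 100 m mixed-layer depth is an illustrative assumption:

    ```python
    # Rough inversion of a sustained radiative imbalance into mixed-layer warming.
    # The 2 W/m^2 value is the order-of-magnitude figure mentioned above; the
    # 100 m mixed-layer depth is an illustrative assumption.
    SECONDS_PER_YEAR = 3.156e7
    RHO_SEAWATER = 1025.0       # kg/m^3
    CP_SEAWATER = 3985.0        # J/(kg K)

    imbalance = 2.0             # W/m^2
    mixed_layer_depth = 100.0   # m (assumed)

    joules_per_year = imbalance * SECONDS_PER_YEAR                   # J per m^2 per year
    heat_capacity = RHO_SEAWATER * CP_SEAWATER * mixed_layer_depth   # J per m^2 per K
    print(f"implied mixed-layer warming ~ {joules_per_year / heat_capacity:.2f} K per year")
    # ~0.15 K/yr under these assumptions; deeper mixing, land, and ice uptake spread it out.
    ```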

  6. Michael:
    Thanks for dropping by.

    If his point is that it is insufficient to identify meaningful trends at the surface, Atmoz’ rebuttal suffices.

    Because that is not Pielke Sr.’s main point (at least as I read his paper), Atmoz’s ‘rebuttal’ appears misguided. Maybe those at Atmoz didn’t read or understand the paper. (Dr. Pielke suggested as much in his comment.)

    Dr. Pielke’s paper makes several points. In my opinion, the overarching point is the issue of calculating the climate sensitivity λ using empirical data. So, as you said in the first bit– that makes him right.

    The issue of calculating this is the first specific issue he raises in his paper, and is the one he links to the issue of temperature gradients that exist on the surface of the planet, which he discusses with regard to equation (1) here (equation (1) in his paper).

    Of course, it is possible some (possibly those at Atmoz) reading Pielke’s paper assumed the only important issue with respect to T’ is estimating dH/dt using dT’/dt. There is some decoupling, but that is hardly the sole, or even the main, point of Dr. Pielke’s paper, as I understand it.

    I can’t comment on the final two paragraphs in your comment. I have no idea what they relate to. Something I wrote? Something Dan said? If you clarify, I might be able to say if I agree or disagree.

  7. Michael, I agree with lucia regarding your last two paragraphs. But I really don’t understand how this statement:

    Allocating that imbalance to the components of the system is the whole issue, but you can’t just wave it away because you don’t like it.

    fits into the discussion. I do not see where anyone is trying to “… just wave it away because you don’t like it.”

  8. Dan Hughes,
    Maybe Michael is agreeing with Pielke Sr.? That you can’t wave away the issues Pielke raised that Atmoz wishes to wave away? Michael already agreed that if Dr. Pielke Sr. meant what Dr. Pielke did mean, then Dr. Pielke is correct.

    Blog comments can be obscure.

  9. Any information that supports the conclusion is to be used, regardless of whether it makes sense or not. Throw in some meaningless pondering about other issues, stuff like that. On the other hand, perhaps somebody could provide some scientific evidence that the trend reflects physical reality, has a margin of error less than 0.6 centigrade (or 2.5 or n for that matter), and the cause/effect relationships between it and things other than natural variability that would explain it.

    Tall order, no? It could happen; perhaps if certain members of the climate science community would spend more time doing science? And less time obfuscating, hiding their data, peer-reviewing each other’s papers, and making excuses for the lack of current archived data, working code, and accurate temperature/humidity/wind-speed stations.

  10. I posted an assessment of outgoing IR longwave radiation as measured over 33 years by various satellites. Presuming some measure of data quality control by NOAA, the results certainly show no noticeable trend in 33 years.

    The data sources, the method of analysis and the results are posted.

    I believe the work is rather “straightforward”.

    While I appreciate the esoteric debates about “surface temperature”, it does seem as though the outgoing IR would be the CRUX of the matter.

    Certainly, with an average of about 206 watts per meter squared, a variation of +/- 6 watts per meter squared, and a completely “normal distribution” on a statistical basis, one is hard-pressed to argue for STATISTICALLY SIGNIFICANT changes over 33 years.

    http://junkscience.com/blog_js/2008/01/12/processing-33-years-of-ir-longwave-data/
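
    For anyone who wants to reproduce the flavor of that significance argument, here is a minimal sketch using synthetic monthly data with roughly that mean and spread; the white-noise assumption and the numbers are illustrative, not the actual satellite record:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_months = 33 * 12
    t = np.arange(n_months) / 12.0                 # time in years
    # Synthetic OLR series: ~206 W/m^2 mean with ~6 W/m^2 scatter and no underlying trend
    olr = 206.0 + rng.normal(scale=6.0, size=n_months)

    # Ordinary least-squares trend and its standard error
    slope, intercept = np.polyfit(t, olr, 1)
    resid = olr - (slope * t + intercept)
    se_slope = np.sqrt(np.sum(resid ** 2) / (n_months - 2) / np.sum((t - t.mean()) ** 2))

    print(f"trend = {slope:+.3f} +/- {2 * se_slope:.3f} W/m^2 per year (2-sigma)")
    # With scatter this large, trends of a few hundredths of a W/m^2 per year are
    # indistinguishable from zero over 33 years.
    ```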

  11. Given MSU/AMSU data and ARGO data, together with IR measurements from space, it should be possible to do proper calculations and to narrow lambda significantly. Though I find it somewhat astonishing that it hasn’t been done already, clearly, if we are to invest trillions of dollars in global warming mitigation, such an effort is worthwhile.

    However, given the recent apparent loss of heat from the climate system, and its implication that the idea of “committed warming” and a rather large radiative imbalance is suspect, one wonders whether the IPCC and large parts of the climate science community really want to get nearer the truth. If you want to keep global warming as a really scary problem – as opposed to a problem that should be addressed through normal procedures – then a better definition of lambda might not suit your purposes.

  12. Avfuktare,
    First, I happen to think there is committed warming. Of course, I could be wrong, but the past two months’ weather hasn’t changed my opinion. But, obviously, if this persists a few years, I’ll have to adjust my opinion.

    I also think that the overwhelming majority of the climate science community do want to get nearer the truth.

    Having looked at this a bit (a whole two months!!!), I think calculations to narrow lambda are rather difficult. In fact, what Dr. Pielke’s paper (and this analysis) implies is that it is difficult. In my opinion, identifying the specific difficulties in narrowing the ±1.5 K uncertainty in climate sensitivity is the first step to overcoming the difficulties.

    (Though, based on some of the reactions to Dr. Pielke’s paper in the climate-blog-o-sphere, it appears some on the pro-AGW side think identifying the specific difficulties, and explaining why the uncertainty range is as great as it is known to be, is somehow a bad thing.)

    I could speculate why– but readers would likely be surprised by my guess. I think it’s because there are very few real experimentalists blogging! (And by real experimentalists, I mean those who do physical experiments!) What Pielke is doing is a routine first step in designing experiments: you figure out which data you would require to get better accuracy on the issue of interest.

  13. Lucia, thanks for the reply.

    Over the last five years we have had flat or even declining sea temperatures, declining atmospheric temperatures, and increased IR radiation to space (at least according to the sources I’ve seen; feel free and invited to correct me if there is other data pointing in other directions). If these data are correct, then the IPCC position of a large radiative imbalance cannot be correct, as that would require an accumulation of heat in the climate system.

    But even if we cannot trust the data, I find the IPCC’s position untenable: they rely heavily upon GCMs for their analyses, but with so much going on that cannot be deterministically computed, such models at least require a strong correlation with data to be trusted (actually I find correlation to be a necessity but not a proof by itself); if the data are poor, so are the models. And if the models are poor – which I believe, having some experience with other types of deterministic climate models – then the assumption of strong positive feedback is unjustified.

    Especially so when the IPCC behaves in stark contradiction to its instructions (e.g. they are supposed to be policy neutral, but their chairman publicly denounces policy critics as “ridiculous”). They simply appear to have an axe to grind. And even more so when every assumption tends to be far on the high side, be it future emissions, residence time of CO2 in the atmosphere, feedbacks, etc.

    That said, I certainly hope that the vast majority of climate scientists are pursuing truth, and I also believe so (by “large parts” I meant something significantly less than the majority; sorry for being unclear). However, I find that the IPCC tends to elevate rather, eh, “concerned types”.

    (Thanks for the blog by the way, I just found it. You’re a good writer.)
